
Docs update for 4.1

Russell Teague · 6 years ago
commit 2d78e7cb44

+ 0 - 19
BUILD.md

@@ -32,22 +32,3 @@ To build a container image of `openshift-ansible` using standalone **Docker**:
 
         cd openshift-ansible
         docker build -f images/installer/Dockerfile -t openshift-ansible .
-
-## Build the Atomic System Container
-
-A system container runs using runC instead of Docker and it is managed
-by the [atomic](https://github.com/projectatomic/atomic/) tool.  As it
-doesn't require Docker to run, the installer can run on a node of the
-cluster without interfering with the Docker daemon that is configured
-by the installer itself.
-
-The first step is to build the [container image](#build-an-openshift-ansible-container-image)
-as described before.  The container image already contains all the
-required files to run as a system container.
-
-Once the container image is built, we can import it into the OSTree
-storage:
-
-```
-atomic pull --storage ostree docker:openshift-ansible:latest
-```

+ 0 - 28
CONTRIBUTING.md

@@ -74,27 +74,6 @@ If you are new to Git, these links might help:
 
 ---
 
-## Simple all-in-one localhost installation
-```
-git clone https://github.com/openshift/openshift-ansible
-cd openshift-ansible
-sudo ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
-sudo ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml
-```
-
-## Development process
-Most changes can be applied by re-running the config playbook. However, while
-the config playbook will run faster the second time through it's still going to
-take a very long time. As such, you may wish to run a smaller subsection of the
-installation playbooks. You can for instance run the node, master, or hosted
-playbooks in playbooks/openshift-node/config.yml,
-playbooks/openshift-master/config.yml, playbooks/openshift-hosted/config.yml
-respectively.
-
-We're actively working to refactor the playbooks into smaller discrete
-components and we'll be documenting that structure shortly, for now those are
-the most sensible logical units of work.
-
 ## Running tests and other verification tasks
 
 We use [`tox`](http://readthedocs.org/docs/tox/) to manage virtualenvs where
@@ -171,13 +150,6 @@ be reinstalled.
 
 Here are some useful tips that might improve your workflow while working on this repository.
 
-#### Git Hooks
-
-Git hooks are included in this repository to aid in development. Check
-out the README in the
-[hack/hooks](http://github.com/openshift/openshift-ansible/blob/master/hack/hooks/README.md)
-directory for more information.
-
 #### Activating a virtualenv managed by tox
 
 If you want to enter a virtualenv created by tox to do additional debugging, you

+ 0 - 16
DEPLOYMENT_TYPES.md

@@ -1,16 +0,0 @@
-# Deployment Types
-
-This repository supports OpenShift Origin and OpenShift Container Platform.
-
-Various defaults used throughout the playbooks and roles in this repository are
-set based on the deployment type configuration (usually defined in an Ansible
-hosts file).
-
-The table below outlines the defaults per `openshift_deployment_type`:
-
-| openshift_deployment_type                                       | origin                                   | openshift-enterprise                   |
-|-----------------------------------------------------------------|------------------------------------------|----------------------------------------|
-| **openshift_service_type** (also used for package names)        | origin                                   | atomic-openshift                       |
-| **openshift.common.config_base**                                | /etc/origin                              | /etc/origin                            |
-| **openshift_data_dir**                                          | /var/lib/origin                          | /var/lib/origin                        |
-| **Image Streams**                                               | centos                                   | rhel                                   |

+ 12 - 31
HOOKS.md

@@ -1,6 +1,6 @@
 # Hooks
 
-The ansible installer allows for operators to execute custom tasks during
+OpenShift Ansible allows operators to execute custom tasks during
 specific operations through a system called hooks. Hooks allow operators to
 provide files defining tasks to execute before and/or after specific areas
 during installations and upgrades. This can be very helpful to validate
@@ -16,21 +16,17 @@ need to be updated to meet the new standard.
 
 ## Using Hooks
 
-Hooks are defined in the ``hosts`` inventory file under the ``OSEv3:vars``
+Hooks are defined in the ``hosts`` inventory file under the ``nodes:vars``
 section.
 
 Each hook should point to a yaml file which defines Ansible tasks. This file
 will be used as an include meaning that the file can not be a playbook but
 a set of tasks. Best practice suggests using absolute paths to the hook file to avoid any ambiguity.
 
-### Example
+### Example inventory variables
 ```ini
-[OSEv3:vars]
+[nodes:vars]
 # <snip>
-openshift_master_upgrade_pre_hook=/usr/share/custom/pre_master.yml
-openshift_master_upgrade_hook=/usr/share/custom/master.yml
-openshift_master_upgrade_post_hook=/usr/share/custom/post_master.yml
-
 openshift_node_upgrade_pre_hook=/usr/share/custom/pre_node.yml
 openshift_node_upgrade_hook=/usr/share/custom/node.yml
 openshift_node_upgrade_post_hook=/usr/share/custom/post_node.yml
@@ -40,38 +36,23 @@ openshift_node_upgrade_post_hook=/usr/share/custom/post_node.yml
 Hook files must be a yaml formatted file that defines a set of Ansible tasks.
 The file may **not** be a playbook.
 
-### Example
+### Example hook task file
 ```yaml
 ---
 # Trivial example forcing an operator to ack the start of an upgrade
-# file=/usr/share/custom/pre_master.yml
+# file=/usr/share/custom/pre_node.yml
 
-- name: note the start of a master upgrade
+- name: note the start of a node upgrade
   debug:
-      msg: "Master upgrade of {{ inventory_hostname }} is about to start"
+      msg: "Node upgrade of {{ inventory_hostname }} is about to start"
 
 - name: require an operator agree to start an upgrade
   pause:
-      prompt: "Hit enter to start the master upgrade"
+      prompt: "Hit enter to start the node upgrade"
 ```
 
-## Upgrade Hooks
-
-### openshift_master_upgrade_pre_hook
-- Runs **before** each master is upgraded.
-- This hook runs against **each master** in serial.
-- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
-
-### openshift_master_upgrade_hook
-- Runs **after** each master is upgraded but **before** it's service/system restart.
-- This hook runs against **each master** in serial.
-- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
-
-
-### openshift_master_upgrade_post_hook
-- Runs **after** each master is upgraded and has had it's service/system restart.
-- This hook runs against **each master** in serial.
-- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
+## Available Upgrade Hooks
 
 ### openshift_node_upgrade_pre_hook
 - Runs **before** each node is upgraded.
@@ -79,7 +60,7 @@ The file may **not** be a playbook.
 - If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
 
 ### openshift_node_upgrade_hook
-- Runs **after** each node is upgraded but **before** it's marked schedulable again..
+- Runs **after** each node is upgraded but **before** it's marked schedulable again.
 - This hook runs against **each node** in serial.
 - If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
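+
+For instance, a hook task that needs to act on a different host might delegate
+as in the following sketch; the target hostname and the `logger` command are
+illustrative assumptions only:
+```yaml
+---
+# Hypothetical hook task file, e.g. /usr/share/custom/node.yml
+- name: record the node upgrade on another host
+  command: logger "Upgrading node {{ inventory_hostname }}"
+  delegate_to: bastion.example.com   # any reachable inventory host
+```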
 

+ 29 - 146
README.md

@@ -1,172 +1,55 @@
 [![Join the chat at https://gitter.im/openshift/openshift-ansible](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/openshift/openshift-ansible)
 [![Build Status](https://travis-ci.org/openshift/openshift-ansible.svg?branch=master)](https://travis-ci.org/openshift/openshift-ansible)
-[![Coverage Status](https://coveralls.io/repos/github/openshift/openshift-ansible/badge.svg?branch=master)](https://coveralls.io/github/openshift/openshift-ansible?branch=master)
-
-NOTICE
-======
-
-Master branch is closed! A major refactor is ongoing in devel-40.
-Changes for 3.x should be made directly to the latest release branch they're
-relevant to and backported from there.
-
-WARNING
-=======
-
-This branch is under heavy development.  If you are interested in deploying a
-working cluster, please utilize a release branch.
 
 # OpenShift Ansible
-
 This repository contains [Ansible](https://www.ansible.com/) roles and
-playbooks to install, upgrade, and manage
-[OpenShift](https://www.openshift.com/) clusters.
-
-## Getting the correct version
-When choosing an openshift release, ensure that the necessary origin packages
-are available in your distribution's repository.  By default, openshift-ansible
-will not configure extra repositories for testing or staging packages for
-end users.
-
-We recommend using a release branch. We maintain stable branches
-corresponding to upstream Origin releases, e.g.: we guarantee an
-openshift-ansible 3.2 release will fully support an origin
-[1.2 release](https://github.com/openshift/openshift-ansible/tree/release-1.2).
-
-The most recent branch will often receive minor feature backports and
-fixes. Older branches will receive only critical fixes.
-
-In addition to the release branches, the master branch
-[master branch](https://github.com/openshift/openshift-ansible/tree/master)
-tracks our current work **in development** and should be compatible
-with the
-[Origin master branch](https://github.com/openshift/origin/tree/master)
-(code in development).
-
+playbooks for [OpenShift](https://www.openshift.com/) clusters.
 
+## Previous OpenShift Ansible 3.x releases
+For 3.x releases of OpenShift Ansible, please refer to the corresponding release
+branch for a specific version. The final 3.x release is the
+[3.11 release](https://github.com/openshift/openshift-ansible/tree/release-3.11).
 
-**Getting the right openshift-ansible release**
+## OpenShift 4.x
+Installation of OpenShift 4.x uses a command-line installation wizard instead of
+Ansible playbooks.  Learn more about the OpenShift Installer in this
+[overview](https://github.com/openshift/installer/blob/master/docs/user/overview.md#installer-overview).
 
-Follow this release pattern and you can't go wrong:
+For OpenShift 4.x, this repo only provides playbooks necessary for scaling up an
+existing 4.x cluster with RHEL hosts.
 
-| Origin/OCP    | OpenShift-Ansible version | openshift-ansible branch |
-| ------------- | ----------------- |----------------------------------|
-| 1.3 / 3.3          | 3.3               | release-1.3 |
-| 1.4 / 3.4          | 3.4               | release-1.4 |
-| 1.5 / 3.5          | 3.5               | release-1.5 |
-| 3.*X*         | 3.*X*             | release-3.x |
-
-If you're running from the openshift-ansible **master branch** we can
-only guarantee compatibility with the newest origin releases **in
-development**. Use a branch corresponding to your origin version if
-you are not running a stable release.
-
-
-## Setup
-
-Install base dependencies:
+The [master branch](https://github.com/openshift/openshift-ansible/tree/master)
+tracks our current work **in development**.
 
 Requirements:
 
 - Ansible >= 2.7.8
-- Jinja >= 2.7
 - pyOpenSSL
-- python-lxml
-
-----
-
-Fedora:
-
-```
-dnf install -y ansible pyOpenSSL python-cryptography python-lxml
-```
-
-## Simple all-in-one localhost Installation
-This assumes that you've installed the base dependencies and you're running on
-Fedora or RHEL
-```
-git clone https://github.com/openshift/openshift-ansible
-cd openshift-ansible
-sudo ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
-sudo ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml
-```
+- python2-openshift
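+
+As a sketch only, these can be installed on a Fedora or RHEL control host with
+something like the following; package names are assumptions carried over from
+the earlier 3.x instructions and may differ by distribution:
+```bash
+# Package names are assumptions; adjust for your distribution and repositories
+sudo dnf install -y ansible pyOpenSSL python2-openshift
+```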
 
 # Quickstart
 
-Install the new installer from https://www.github.com/openshift/installer
-
-Construct a proper install-config.yml, and make a copy called
-install-config-ansible.yml.
-
-## Hosts
-You will need the following hosts
-
-### Boostrap host
-This is a special host that is not part of the cluster but is required to be
-available to help the cluster bootstrap itself.  This is not a bastion host,
-it will initially be part of the cluster and should be able to communicate with
-the masters in the cluster.
-
-### Masters
-You need 1 or 3 masters.
-
-### Workers
-You need 0 or more workers.  Note, by default, masters are unschedulable so
-you will need one or more workers if you want to schedule workloads.
-
-## DNS
-4.x installs require specific dns records to be in place, and there is no way
-to complete an install without working DNS.  You are in charge of ensuring the
-following DNS records are resolvable from your cluster, the openshift-ansible
-installer will not make any attempt to do any of this for you.
-
-First, the output of ```hostname``` on each host must be resolvable to other hosts.
-The nodes will communicate with each other based on this value.
-
-install-config.yml value of 'baseDomain' must be a working domain.
-
-### A records
-```sh
-<clustername>-api.<baseDomain> # ex: mycluster-api.example.com
-<clustername>-master-0.<baseDomain> # ex: mycluster-master-0.example.com
-<clustername>-etcd-0.<baseDomain> # ex: mycluster-etcd-0.example.com
-<clustername>-bootstrap.<baseDomain> # ex: mycluster-bootstrap.example.com
-```
-
-Note: There should be a master/etcd record for each master host in your cluster
-(either 1 or 3).  etcd hosts must be master hosts, and the records must resolve
-to the same host for each master/etcd record, respectively.
-
-### SRV records
-```sh
-SRV _etcd-client-ssl._tcp.<clustername>.<baseDomain> '1 1 2379 <clustername>-etcd-0.<baseDomain>'
-SRV _etcd-server-ssl._tcp.<clustername>.<baseDomain> '1 1 2380 <clustername>-etcd-0.<baseDomain>'
-...
-SRV _etcd-client-ssl._tcp.<clustername>.<baseDomain> '1 1 2379 <clustername>-etcd-<N-1>.<baseDomain>'
-SRV _etcd-server-ssl._tcp.<clustername>.<baseDomain> '1 1 2380 <clustername>-etcd-<N-1>.<baseDomain>'
-
-# ex: _etcd-client-ssl._tcp.mycluster.example.com '1 1 2379 mycluster-etcd-0.example.com'
-```
-
-Consult with your DNS provider about the proper way to create SRV records.  In
-any case, there should be a client and server SRV record for each etcd backend,
-and you MUST use the etcd FQDN you created earlier, not the master or any other
-record.
+## Install an OpenShift 4.x cluster
+Install a cluster using the [OpenShift Installer](https://www.github.com/openshift/installer).
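+
+For example, a cluster can be created interactively with a single command; this
+is a sketch only and assumes the `openshift-install` binary is on your `PATH`:
+```bash
+# Prompts for platform, credentials, and cluster details, then installs
+openshift-install create cluster
+```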
 
 ## Inventory
-Check out inventory/40_basic_inventory.ini for an example.
+Create an inventory file with the `new_workers` group to identify the hosts which
+should be added to the cluster.
+```ini
+[new_workers]
+mycluster-worker-0.example.com
+mycluster-worker-1.example.com
+mycluster-worker-2.example.com
+```
 
-## Generate ignition configs
-Use the openshift-install command to generate ignition configs utilizing the
-install-config.yml you created earlier.  This will consume the install-config.yml
-file, so ensure you have copied the file as mentioned previously.
+## Run the scaleup playbook
 
-```sh
-openshift-install create ignition-configs
+```bash
+ansible-playbook playbooks/openshift_node/scaleup.yml
 ```
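+
+The inventory created above can be passed explicitly with `-i`; a minimal sketch,
+assuming it was saved as `inventory/new_workers.ini` (a hypothetical path):
+```bash
+# Point -i at the inventory containing the [new_workers] group
+ansible-playbook -i inventory/new_workers.ini playbooks/openshift_node/scaleup.yml
+```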
 
-## Run playbook
-playbooks/deploy_cluster_40.yml
-
 # Further reading
 
 ## Complete Production Installation Documentation:

+ 0 - 152
docs/openshift_components.md

@@ -1,152 +0,0 @@
-# OpenShift-Ansible Components
-
->**TL;DR: Look at playbooks/openshift-web-console as an example**
-
-## General Guidelines
-
-Components in OpenShift-Ansible consist of two main parts:
-* Entry point playbook(s)
-* Ansible role
-* OWNERS files in both the playbooks and roles associated with the component
-
-When writing playbooks and roles, follow these basic guidelines to ensure
-success and maintainability. 
-
-### Idempotency
-
-Definition:
-
->_an idempotent operation is one that has no additional effect if it is called
-more than once with the same input parameters_
-
-Ansible playbooks and roles should be written such that when the playbook is run
-again with the same configuration, no tasks should report `changed` as well as
-no material changes should be made to hosts in the inventory.  Playbooks should
-be re-runnable, but also be idempotent.
-
-### Other advice for success
-
-* Try not to leave artifacts like files or directories
-* Avoid using `failed_when:` where ever possible
-* Always `name:` your tasks
-* Document complex logic or code in tasks
-* Set role defaults in `defaults/main.yml`
-* Avoid the use of `set_fact:`
-
-## Building Component Playbooks
-
-Component playbooks are divided between the root of the component directory and
-the `private` directory.  This allows other parts of openshift-ansible to import
-component playbooks without also running the common initialization playbooks
-unnecessarily.
-
-Entry point playbooks are located in the `playbooks` directory and follow the
-following structure:
-
-```
-playbooks/openshift-component_name
-├── config.yml                          Entry point playbook
-├── private
-│   ├── config.yml                      Included by the Cluster Installer
-│   └── roles -> ../../roles            Don't forget to create this symlink
-├── OWNERS                              Assign 2-3 approvers and reviewers
-└── README.md                           Tell us what this component does
-```
-
-### Entry point config playbook
-
-The primary component entry point playbook will at a minimum run the common
-initialization playbooks and then import the private playbook.
-
-```yaml
-# playbooks/openshift-component_name/config.yml
----
-- import_playbook: ../init/main.yml
-
-- import_playbook: private/config.yml
-
-```
-
-### Private config playbook
-
-The private component playbook will run the component role against the intended
-host groups and provide any required variables.  This playbook is also called
-during cluster installs and upgrades.  Think of this as the shareable portion of
-the component playbooks.
-
-```yaml
-# playbooks/openshift-component_name/private/config.yml
----
-
-- name: OpenShift Component_Name Installation
-  hosts: oo_first_master
-  tasks:
-  - import_role:
-      name: openshift_component_name
-```
-
-NOTE: The private playbook may also include wrapper plays for the Installer
-Checkpoint plugin which will be discussed later.
-
-## Building Component Roles
-
-Component roles contain all of the necessary files and logic to install and
-configure the component.  The install portion of the role should also support
-performing upgrades on the component.
-
-Ansible roles are located in the `roles` directory and follow the following
-structure:
-
-```
-roles/openshift_component_name
-├── defaults
-│   └── main.yml                        Defaults for variables used in the role
-│                                           which can be overridden by the user
-├── files
-│   ├── component-config.yml
-│   ├── component-rbac-template.yml
-│   └── component-template.yml
-├── handlers
-│   └── main.yml
-├── meta
-│   └── main.yml
-├── OWNERS                              Assign 2-3 approvers and reviewers
-├── README.md
-├── tasks
-│   └── main.yml                        Default playbook used when calling the role
-├── templates
-└── vars
-    └── main.yml                        Internal roles variables
-```
-### Component Installation
-
-Where possible, Ansible modules should be used to perform idempotent operations
-with the OpenShift API.  Avoid using the `command` or `shell` modules with the
-`oc` cli unless the required operation is not available through either the
-`lib_openshift` modules or Ansible core modules.
-
-The following is a basic flow of Ansible tasks for installation. 
-
-- Create the project (oc_project)
-- Create a temp directory for processing files
-- Copy the client config to temp
-- Copy templates to temp
-- Read existing config map
-- Copy existing config map to temp
-- Generate/update config map
-- Reconcile component RBAC (oc_process)
-- Apply component template (oc_process)
-- Poll healthz and wait for it to come up
-- Log status of deployment
-- Clean up temp
-
-### Component Removal
-
-- Remove the project (oc_project)
-
-## Enabling the Installer Checkpoint callback
-
-- Add the wrapper plays to the entry point playbook
-- Update the installer_checkpoint callback plugin
-
-Details can be found in the installer_checkpoint role.

+ 0 - 27
docs/proposals/README.md

@@ -1,27 +0,0 @@
-# OpenShift-Ansible Proposal Process
-
-## Proposal Decision Tree
-TODO: Add details about when a proposal is or is not required. 
-
-## Proposal Process
-The following process should be followed when a proposal is needed:
-
-1. Create a pull request with the initial proposal
-  * Use the [proposal template][template]
-  * Name the proposal using two or three topic words with underscores as a separator (i.e. proposal_template.md)
-  * Place the proposal in the docs/proposals directory
-2. Notify the development team of the proposal and request feedback
-3. Review the proposal on the OpenShift-Ansible Architecture Meeting
-4. Update the proposal as needed and ask for feedback
-5. Approved/Closed Phase
-  * If 75% or more of the active development team give the proposal a :+1: it is Approved
-  * If 50% or more of the active development team disagrees with the proposal it is Closed
-  * If the person proposing the proposal no longer wishes to continue they can request it to be Closed
-  * If there is no activity on a proposal, the active development team may Close the proposal at their discretion
-  * If none of the above is met the cycle can continue to Step 4.
-6. For approved proposals, the current development lead(s) will:
-  * Update the Pull Request with the result and merge the proposal
-  * Create a card on the Cluster Lifecycle [Trello board][trello] so it may be scheduled for implementation.
-
-[template]: proposal_template.md
-[trello]: https://trello.com/b/wJYDst6C

+ 0 - 113
docs/proposals/crt_management_proposal.md

@@ -1,113 +0,0 @@
-# Container Runtime Management
-
-## Description
-origin and openshift-ansible support multiple container runtimes.  This proposal
-is related to refactoring how we handle those runtimes in openshift-ansible.
-
-### Problems addressed
-We currently don't install docker during the install at a point early enough to
-not fail health checks, and we don't have a good story around when/how to do it.
-This is complicated by logic around containerized and non-containerized installs.
-
-A web of dependencies can cause changes to docker that are unintended and has
-resulted in a series of work-around such as 'skip_docker' boolean.
-
-We don't handle docker storage because it's BYO.  By moving docker to a prerequisite
-play, we can tackle storage up front and never have to touch it again.
-
-container_runtime logic is currently spread across 3 roles: docker, openshift_docker,
-and openshift_docker_facts.  The name 'docker' does not accurately portray what
-the role(s) do.
-
-## Rationale
-* Refactor docker (and related meta/fact roles) into 'container_runtime' role.
-* Strip all meta-depends on container runtime out of other roles and plays.
-* Create a 'prerequisites.yml' entry point that will setup various items
-such as container storage and container runtime before executing installation.
-* All other roles and plays should merely consume container runtime, should not
-configure, restart, or change the container runtime as much as feasible.
-
-## Design
-
-The container_runtime role should be comprised of 3 'pseudo-roles' which will be
-consumed using import_role; each component area should be enabled/disabled with
-a boolean value, defaulting to true.
-
-I call them 'pseudo-roles' because they are more or less independent functional
-areas that may share some variables and act on closely related components.  This
-is an effort to reuse as much code as possible, limit role-bloat (we already have
-an abundance of roles), and make things as modular as possible.
-
-```yaml
-# prerequisites.yml
-- include: std_include.yml
-- include: container_runtime_setup.yml
-...
-# container_runtime_setup.yml
-- hosts: "{{ openshift_runtime_manage_hosts | default('oo_nodes_to_config') }}"
-  tasks:
-    - import_role:
-        name: container_runtime
-        tasks_from: install.yml
-      when: openshift_container_runtime_install | default(True) | bool
-    - import_role:
-        name: container_runtime
-        tasks_from: storage.yml
-      when: openshift_container_runtime_storage | default(True) | bool
-    - import_role:
-        name: container_runtime
-        tasks_from: configure.yml
-      when: openshift_container_runtime_configure | default(True) | bool
-```
-
-Note the host group on the above play.  No more guessing what hosts to run this
-stuff against.  If you want to use an atomic install, specify what hosts will need
-us to setup container runtime (such as etcd hosts, loadbalancers, etc);
-
-We should direct users that are using atomic hosts to disable install in the docs,
-let's not add a bunch of logic.
-
-Alternatively, we can create a new group.
-
-### Part 1, container runtime install
-Install the container runtime components of the desired type.
-
-```yaml
-# install.yml
-- include: docker.yml
-  when: openshift_container_runtime_install_docker | bool
-
-- include: crio.yml
-  when: openshift_container_runtime_install_crio | bool
-
-... other container run times...
-```
-
-Alternatively to using booleans for each run time, we could use a variable like
-"openshift_container_runtime_type".  This would be my preference, as we could
-use this information in later roles.
-
-### Part 2, configure/setup container runtime storage
-Configure a supported storage solution for containers.
-
-Similar setup to the previous section.  We might need to add some logic for the
-different runtimes here, or we maybe create a matrix of possible options.
-
-### Part 3, configure container runtime.
-Place config files, environment files, systemd units, etc.  Start/restart
-the container runtime as needed.
-
-Similar to Part 1 with how we should do things.
-
-## Checklist
-* Strip docker from meta dependencies.
-* Combine docker facts and meta roles into container_runtime role.
-* Docs
-
-## User Story
-As a user of openshift-ansible, I want to be able to manage my container runtime
-and related components independent of openshift itself.
-
-## Acceptance Criteria
-* Verify that each container runtime installs with this new method.
-* Verify that openshift installs with this new method.

+ 0 - 178
docs/proposals/playbook_consolidation.md

@@ -1,178 +0,0 @@
-# OpenShift-Ansible Playbook Consolidation
-
-## Description
-The designation of `byo` is no longer applicable due to being able to deploy on
-physical hardware or cloud resources using the playbooks in the `byo` directory.
-Consolidation of these directories will make maintaining the code base easier
-and provide a more straightforward project for users and developers.
-
-The main points of this proposal are:
-* Consolidate initialization playbooks into one set of playbooks in
-  `playbooks/init`. 
-* Collapse the `playbooks/byo` and `playbooks/common` into one set of
-  directories at `playbooks/openshift-*`.
-
-This consolidation effort may be more appropriate when the project moves to
-using a container as the default installation method.
-
-## Design
-
-### Initialization Playbook Consolidation
-Currently there are two separate sets of initialization playbooks:
-* `playbooks/byo/openshift-cluster/initialize_groups.yml`
-* `playbooks/common/openshift-cluster/std_include.yml`
-
-Although these playbooks are located in the `openshift-cluster` directory they
-are shared by all of the `openshift-*` areas.  These playbooks would be better
-organized in a `playbooks/init` directory collocated with all their related
-playbooks.
-
-In the example below, the following changes have been made:
-* `playbooks/byo/openshift-cluster/initialize_groups.yml` renamed to
-  `playbooks/init/initialize_host_groups.yml`
-* `playbooks/common/openshift-cluster/std_include.yml` renamed to
-  `playbooks/init/main.yml`
-* `- include: playbooks/init/initialize_host_groups.yml` has been added to the
-  top of `playbooks/init/main.yml`
-* All other related files for initialization have been moved to `playbooks/init`
-
-The `initialize_host_groups.yml` playbook is only one play with one task for
-importing variables for inventory group conversions.  This task could be further
-consolidated with the play in `evaluate_groups.yml`.
-
-The new standard initialization playbook would be
-`playbooks/init/main.yml`.
-
-
-```
- 
-> $ tree openshift-ansible/playbooks/init
-.
-├── evaluate_groups.yml
-├── initialize_facts.yml
-├── initialize_host_groups.yml
-├── initialize_openshift_repos.yml
-├── initialize_openshift_version.yml
-├── main.yml
-├── roles -> ../../roles
-├── validate_hostnames.yml
-└── vars
-    └── cluster_hosts.yml
-```
-
-```yaml
-# openshift-ansible/playbooks/init/main.yml
----
-- include: initialize_host_groups.yml
-
-- include: evaluate_groups.yml
-
-- include: initialize_facts.yml
-
-- include: validate_hostnames.yml
-
-- include: initialize_openshift_repos.yml
-
-- include: initialize_openshift_version.yml
-```
-
-### `byo` and `common` Playbook Consolidation
-Historically, the `byo` directory coexisted with other platform directories
-which contained playbooks that then called into `common` playbooks to perform
-common installation steps for all platforms.  Since the other platform
-directories have been removed this separation is no longer necessary.
-
-In the example below, the following changes have been made:
-* `playbooks/byo/openshift-master` renamed to
-  `playbooks/openshift-master`
-* `playbooks/common/openshift-master` renamed to
-  `playbooks/openshift-master/private`
-* Original `byo` entry point playbooks have been updated to include their
-  respective playbooks from `private/`.
-* Symbolic links have been updated as necessary
-
-All user consumable playbooks are in the root of `openshift-master` and no entry
-point playbooks exist in the `private` directory.  Maintaining the separation
-between entry point playbooks and the private playbooks allows individual pieces
-of the deployments to be used as needed by other components.
-
-```
-openshift-ansible/playbooks/openshift-master 
-> $ tree
-.
-├── config.yml
-├── private
-│   ├── additional_config.yml
-│   ├── config.yml
-│   ├── filter_plugins -> ../../../filter_plugins
-│   ├── library -> ../../../library
-│   ├── lookup_plugins -> ../../../lookup_plugins
-│   ├── restart_hosts.yml
-│   ├── restart_services.yml
-│   ├── restart.yml
-│   ├── roles -> ../../../roles
-│   ├── scaleup.yml
-│   └── validate_restart.yml
-├── restart.yml
-└── scaleup.yml
-```
-
-```yaml
-# openshift-ansible/playbooks/openshift-master/config.yml
----
-- include: ../init/main.yml
-
-- include: private/config.yml
-```
-
-With the consolidation of the directory structure and component installs being
-removed from `openshift-cluster`, that directory is no longer necessary.  To
-deploy an entire OpenShift cluster, a playbook would be created to tie together
-all of the different components.  The following example shows how multiple
-components would be combined to perform a complete install.
-
-```yaml
-# openshift-ansible/playbooks/deploy_cluster.yml
----
-- include: init/main.yml
-
-- include: openshift-etcd/private/config.yml
-
-- include: openshift-nfs/private/config.yml
-
-- include: openshift-loadbalancer/private/config.yml
-
-- include: openshift-master/private/config.yml
-
-- include: openshift-node/private/config.yml
-
-- include: openshift-glusterfs/private/config.yml
-
-- include: openshift-hosted/private/config.yml
-
-- include: openshift-service-catalog/private/config.yml
-```
-
-## User Story
-As a developer of OpenShift-Ansible,
-I want simplify the playbook directory structure
-so that users can easily find deployment playbooks and developers know where new
-features should be developed.
-
-## Implementation
-Given the size of this refactoring effort, it should be broken into smaller
-steps which can be completed independently while still maintaining a functional
-project.
-
-Steps:
-1. Update and merge consolidation of the initialization playbooks.
-2. Update each merge consolidation of each `openshift-*` component area
-3. Update and merge consolidation of `openshift-cluster` 
-
-## Acceptance Criteria
-* Verify that all entry points playbooks install or configure as expected.
-* Verify that CI is updated for testing new playbook locations.
-* Verify that repo documentation is updated
-* Verify that user documentation is updated
-
-## References

+ 0 - 30
docs/proposals/proposal_template.md

@@ -1,30 +0,0 @@
-# Proposal Title
-
-## Description
-<Short introduction>
-
-## Rationale
-<Summary of main points of Design>
-
-## Design
-<Main content goes here>
-
-## Checklist
-* Item 1
-* Item 2
-* Item 3
-
-## User Story
-As a developer on OpenShift-Ansible,
-I want ...
-so that ...
-
-## Acceptance Criteria
-* Verify that ...
-* Verify that ...
-* Verify that ...
-
-## References
-* Link
-* Link
-* Link

+ 0 - 353
docs/proposals/role_decomposition.md

@@ -1,353 +0,0 @@
-# Scaffolding for decomposing large roles
-
-## Why?
-
-Currently we have roles that are very large and encompass a lot of different
-components. This makes for a lot of logic required within the role, can
-create complex conditionals, and increases the learning curve for the role.
-
-## How?
-
-Creating a guide on how to approach breaking up a large role into smaller,
-component based, roles. Also describe how to develop new roles, to avoid creating
-large roles.
-
-## Proposal
-
-Create a new guide or append to the current contributing guide a process for
-identifying large roles that can be split up, and how to compose smaller roles
-going forward.
-
-### Large roles
-
-A role should be considered for decomposition if it:
-
-1) Configures/installs more than one product.
-1) Can configure multiple variations of the same product that can live
-side by side.
-1) Has different entry points for upgrading and installing a product
-
-Large roles<sup>1</sup> should be responsible for:
-> 1 or composing playbooks
-
-1) Composing smaller roles to provide a full solution such as an Openshift Master
-1) Ensuring that smaller roles are called in the correct order if necessary
-1) Calling smaller roles with their required variables
-1) Performing prerequisite tasks that small roles may depend on being in place
-(openshift_logging certificate generation for example)
-
-### Small roles
-
-A small role should be able to:
-
-1) Be deployed independently of other products (this is different than requiring
-being installed after other base components such as OCP)
-1) Be self contained and able to determine facts that it requires to complete
-1) Fail fast when facts it requires are not available or are invalid
-1) "Make it so" based on provided variables and anything that may be required
-as part of doing such (this should include data migrations)
-1) Have a minimal set of dependencies in meta/main.yml, just enough to do its job
-
-### Example using decomposition of openshift_logging
-
-The `openshift_logging` role was created as a port from the deployer image for
-the `3.5` deliverable. It was a large role that created the service accounts,
-configmaps, secrets, routes, and deployment configs/daemonset required for each
-of its different components (Fluentd, Kibana, Curator, Elasticsearch).
-
-It was possible to configure any of the components independently of one another,
-up to a point. However, it was an all of nothing installation and there was a
-need from customers to be able to do things like just deploy Fluentd.
-
-Also being able to support multiple versions of configuration files would become
-increasingly messy with a large role. Especially if the components had changes
-at different intervals.
-
-#### Folding of responsibility
-
-There was a duplicate of work within the installation of three of the four logging
-components where there was a possibility to deploy both an 'operations' and
-'non-operations' cluster side-by-side. The first step was to collapse that
-duplicate work into a single path and allow a variable to be provided to
-configure such that either possibility could be created.
-
-#### Consolidation of responsibility
-
-The generation of OCP objects required for each component were being created in
-the same task file, all Service Accounts were created at the same time, all secrets,
-configmaps, etc. The only components that were not generated at the same time were
-the deployment configs and the daemonset. The second step was to make the small
-roles self contained and generate their own required objects.
-
-#### Consideration for prerequisites
-
-Currently the Aggregated Logging stack generates its own certificates as it has
-some requirements that prevent it from utilizing the OCP cert generation service.
-In order to make sure that all components were able to trust one another as they
-did previously, until the cert generation service can be used, the certificate
-generation is being handled within the top level `openshift_logging` role and
-providing the location of the generated certificates to the individual roles.
-
-#### Snippets
-
-[openshift_logging/tasks/install_logging.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/tasks/install_logging.yaml)
-```yaml
-- name: Gather OpenShift Logging Facts
-  openshift_logging_facts:
-    oc_bin: "{{openshift.common.client_binary}}"
-    openshift_logging_namespace: "{{openshift_logging_namespace}}"
-
-- name: Set logging project
-  oc_project:
-    state: present
-    name: "{{ openshift_logging_namespace }}"
-
-- name: Create logging cert directory
-  file:
-    path: "{{ openshift.common.config_base }}/logging"
-    state: directory
-    mode: 0755
-  changed_when: False
-  check_mode: no
-
-- include: generate_certs.yaml
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-
-## Elasticsearch
-- import_role:
-    name: openshift_logging_elasticsearch
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-
-- import_role:
-    name: openshift_logging_elasticsearch
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-    openshift_logging_es_ops_deployment: true
-  when:
-  - openshift_logging_use_ops | bool
-
-
-## Kibana
-- import_role:
-    name: openshift_logging_kibana
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-    openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}"
-    openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}"
-    openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}"
-    openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}"
-    openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}"
-    openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_replica_count }}"
-    openshift_logging_kibana_es_host: "{{ openshift_logging_es_host }}"
-    openshift_logging_kibana_es_port: "{{ openshift_logging_es_port }}"
-    openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
-
-- import_role:
-    name: openshift_logging_kibana
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-    openshift_logging_kibana_ops_deployment: true
-    openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}"
-    openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}"
-    openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}"
-    openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}"
-    openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}"
-    openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
-    openshift_logging_kibana_es_host: "{{ openshift_logging_es_ops_host }}"
-    openshift_logging_kibana_es_port: "{{ openshift_logging_es_ops_port }}"
-    openshift_logging_kibana_nodeselector: "{{ openshift_logging_kibana_ops_nodeselector }}"
-    openshift_logging_kibana_memory_limit: "{{ openshift_logging_kibana_ops_memory_limit }}"
-    openshift_logging_kibana_cpu_request: "{{ openshift_logging_kibana_ops_cpu_request }}"
-    openshift_logging_kibana_hostname: "{{ openshift_logging_kibana_ops_hostname }}"
-    openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_ops_replica_count }}"
-    openshift_logging_kibana_proxy_debug: "{{ openshift_logging_kibana_ops_proxy_debug }}"
-    openshift_logging_kibana_proxy_memory_limit: "{{ openshift_logging_kibana_ops_proxy_memory_limit }}"
-    openshift_logging_kibana_proxy_cpu_request: "{{ openshift_logging_kibana_ops_proxy_cpu_request }}"
-    openshift_logging_kibana_cert: "{{ openshift_logging_kibana_ops_cert }}"
-    openshift_logging_kibana_key: "{{ openshift_logging_kibana_ops_key }}"
-    openshift_logging_kibana_ca: "{{ openshift_logging_kibana_ops_ca}}"
-  when:
-  - openshift_logging_use_ops | bool
-
-
-## Curator
-- import_role:
-    name: openshift_logging_curator
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-    openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}"
-    openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}"
-    openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}"
-    openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}"
-    openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
-
-- import_role:
-    name: openshift_logging_curator
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-    openshift_logging_curator_ops_deployment: true
-    openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}"
-    openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}"
-    openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}"
-    openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}"
-    openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
-    openshift_logging_curator_memory_limit: "{{ openshift_logging_curator_ops_memory_limit }}"
-    openshift_logging_curator_cpu_request: "{{ openshift_logging_curator_ops_cpu_request }}"
-    openshift_logging_curator_nodeselector: "{{ openshift_logging_curator_ops_nodeselector }}"
-  when:
-  - openshift_logging_use_ops | bool
-
-
-## Fluentd
-- import_role:
-    name: openshift_logging_fluentd
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-
-- include: update_master_config.yaml
-```
-
-[openshift_logging_elasticsearch/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging_elasticsearch/meta/main.yaml)
-```yaml
----
-galaxy_info:
-  author: OpenShift Red Hat
-  description: OpenShift Aggregated Logging Elasticsearch Component
-  company: Red Hat, Inc.
-  license: Apache License, Version 2.0
-  min_ansible_version: 2.2
-  platforms:
-  - name: EL
-    versions:
-    - 7
-  categories:
-  - cloud
-dependencies:
-- role: lib_openshift
-```
-
-[openshift_logging/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/meta/main.yaml)
-```yaml
----
-galaxy_info:
-  author: OpenShift Red Hat
-  description: OpenShift Aggregated Logging
-  company: Red Hat, Inc.
-  license: Apache License, Version 2.0
-  min_ansible_version: 2.2
-  platforms:
-  - name: EL
-    versions:
-    - 7
-  categories:
-  - cloud
-dependencies:
-- role: lib_openshift
-- role: openshift_facts
-```
-
-[openshift_logging/tasks/install_support.yaml - old](https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_logging/tasks/install_support.yaml)
-```yaml
----
-# This is the base configuration for installing the other components
-- name: Check for logging project already exists
-  command: >
-    {{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig get project {{openshift_logging_namespace}} --no-headers
-  register: logging_project_result
-  ignore_errors: yes
-  when: not ansible_check_mode
-  changed_when: no
-
-- name: "Create logging project"
-  command: >
-    {{ openshift.common.client_binary }} adm --config={{ mktemp.stdout }}/admin.kubeconfig new-project {{openshift_logging_namespace}}
-  when: not ansible_check_mode and "not found" in logging_project_result.stderr
-
-- name: Create logging cert directory
-  file: path={{openshift.common.config_base}}/logging state=directory mode=0755
-  changed_when: False
-  check_mode: no
-
-- include: generate_certs.yaml
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-
-- name: Create temp directory for all our templates
-  file: path={{mktemp.stdout}}/templates state=directory mode=0755
-  changed_when: False
-  check_mode: no
-
-- include: generate_secrets.yaml
-  vars:
-    generated_certs_dir: "{{openshift.common.config_base}}/logging"
-
-- include: generate_configmaps.yaml
-
-- include: generate_services.yaml
-
-- name: Generate kibana-proxy oauth client
-  template: src=oauth-client.j2 dest={{mktemp.stdout}}/templates/oauth-client.yaml
-  vars:
-    secret: "{{oauth_secret}}"
-  when: oauth_secret is defined
-  check_mode: no
-  changed_when: no
-
-- include: generate_clusterroles.yaml
-
-- include: generate_rolebindings.yaml
-
-- include: generate_clusterrolebindings.yaml
-
-- include: generate_serviceaccounts.yaml
-
-- include: generate_routes.yaml
-```
-
-# Limitations
-
-There will always be exceptions for some of these rules, however the majority of
-roles should be able to fall within these guidelines.
-
-# Additional considerations
-
-## Playbooks including playbooks
-In some circumstances it does not make sense to have a composing role but instead
-a playbook would be best for orchestrating the role flow. Decisions made regarding
-playbooks including playbooks will need to be taken into consideration as part of
-defining this process.
-Ref: (link to rteague's presentation?)
-
-## Role dependencies
-We want to make sure that our roles do not have any extra or unnecessary dependencies
-in meta/main.yml without:
-
-1. Proposing the inclusion in a team meeting or as part of the PR review and getting agreement
-1. Documenting in meta/main.yml why it is there and when it was agreed to (date)
-
-## Avoiding overly verbose roles
-When we are splitting our roles up into smaller components we want to ensure we
-avoid creating roles that are, for a lack of a better term, overly verbose. What
-do we mean by that? If we have `openshift_control_plane` as an example, and we were to
-split it up, we would have a component for `etcd`, `docker`, and possibly for
-its rpms/configs. We would want to avoid creating a role that would just create
-certificates as those would make sense to be contained with the rpms and configs.
-Likewise, when it comes to being able to restart the master, we wouldn't have a
-role where that was its sole purpose.
-
-The same would apply for the `etcd` and `docker` roles. Anything that is required
-as part of installing `etcd` such as generating certificates, installing rpms,
-and upgrading data between versions should all be contained within the single
-`etcd` role.
-
-## Enforcing standards
-Certain naming standards like variable names could be verified as part of a Travis
-test. If we were going to also enforce that a role either has tasks or includes
-(for example) then we could create tests for that as well.
-
-## CI tests for individual roles
-If we are able to correctly split up roles, it should be possible to test role
-installations/upgrades like unit tests (assuming they would be able to be installed
-independently of other components).

+ 0 - 14
docs/repo_structure.md

@@ -54,17 +54,3 @@ _OpenShift Components_
 └── test                Contains tests.
 ```
 
-### CI
-
-These files are used by [PAPR](https://github.com/projectatomic/papr),
-It is very similar in workflow to Travis, with the test
-environment and test scripts defined in a YAML file.
-
-```
-.
-├── .papr.yml
-├── .papr.sh
-└── .papr.inventory
-├── .papr.all-in-one.inventory
-└── .papr-master-ha.inventory
-```

+ 0 - 96
examples/README.md

@@ -1,96 +0,0 @@
-# openshift-ansible usage examples
-
-The primary use of `openshift-ansible` is to install, configure and upgrade OpenShift clusters.
-
-This is typically done by direct invocation of Ansible tools like `ansible-playbook`. This use case is covered in detail in the [OpenShift advanced installation documentation](https://docs.okd.io/latest/install_config/install/advanced_install.html)
-
-For OpenShift Container Platform there's also an installation utility that wraps `openshift-ansible`. This usage case is covered in the [Quick Installation](https://docs.openshift.com/container-platform/latest/install_config/install/quick_install.html) section of the documentation.
-
-The usage examples below cover use cases other than install/configure/upgrade.
-
-## Container image
-
-The examples below run [openshift-ansible in a container](../README_CONTAINER_IMAGE.md) to perform certificate expiration checks on an OpenShift cluster from pods running on the cluster itself.
-
-You can find more details about the certificate expiration check roles and example playbooks in [the openshift_certificate_expiry role's README](../roles/openshift_certificate_expiry/README.md).
-
-### Job to upload certificate expiration reports
-
-The example `Job` in [certificate-check-upload.yaml](certificate-check-upload.yaml) executes a [Job](https://docs.okd.io/latest/dev_guide/jobs.html) that checks the expiration dates of the internal certificates of the cluster and uploads HTML and JSON reports to `/etc/origin/certificate_expiration_report` in the masters.
-
-This example uses the [`easy-mode-upload.yaml`](../playbooks/openshift-checks/certificate_expiry/easy-mode-upload.yaml) example playbook, which generates reports and uploads them to the masters. The playbook can be customized via environment variables to control the length of the warning period (`CERT_EXPIRY_WARN_DAYS`) and the location in the masters where the reports are uploaded (`COPY_TO_PATH`).
-
-The job expects the inventory to be provided via the *hosts* key of a [ConfigMap](https://docs.okd.io/latest/dev_guide/configmaps.html) named *inventory*, and the passwordless ssh key that allows connecting to the hosts to be availalbe as *ssh-privatekey* from a [Secret](https://docs.okd.io/latest/dev_guide/secrets.html) named *sshkey*, so these are created first:
-
-    oc new-project certcheck
-    oc create configmap inventory --from-file=hosts=/etc/ansible/hosts
-    oc create secret generic sshkey \
-      --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
-      --type=kubernetes.io/ssh-auth
-
-Note that `inventory`, `hosts`, `sshkey` and `ssh-privatekey` are referenced by name from the provided example Job definition. If you use different names for the objects/attributes you will have to adjust the Job accordingly.
-
-To create the Job:
-
-    oc create -f examples/certificate-check-upload.yaml
-
-### Scheduled job for certificate expiration report upload
-
-The example `CronJob` in [scheduled-certcheck-upload.yaml](scheduled-certcheck-upload.yaml) does the same as the `Job` example above, but it is scheduled to automatically run every first day of the month (see the `spec.schedule` value in the example).
-
-The job definition is the same and it expects the same configuration: we provide the inventory and ssh key via a ConfigMap and a Secret respectively:
-
-    oc new-project certcheck
-    oc create configmap inventory --from-file=hosts=/etc/ansible/hosts
-    oc create secret generic sshkey \
-      --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
-      --type=kubernetes.io/ssh-auth
-
-And then we create the CronJob:
-
-    oc create -f examples/scheduled-certcheck-upload.yaml
-
-### Job and CronJob to check certificates using volumes
-
-There are two additional examples:
-
- - A `Job` [certificate-check-volume.yaml](certificate-check-volume.yaml)
- - A `CronJob` [scheduled-certcheck-upload.yaml](scheduled-certcheck-upload.yaml)
-
-These perform the same work as the two examples above, but instead of uploading the generated reports to the masters they store them in a custom path within the container that is expected to be backed by a [PersistentVolumeClaim](https://docs.okd.io/latest/dev_guide/persistent_volumes.html), so that the reports are actually written to storage external to the container.
-
-These examples assume that there is an existing `PersistentVolumeClaim` called `certcheck-reports` and they use the  [`html_and_json_timestamp.yaml`](../playbooks/openshift-checks/certificate_expiry/html_and_json_timestamp.yaml) example playbook to write timestamped reports into it.
-
-You can later access the reports from another pod that mounts the same volume, or externally via direct access to the backend storage behind the matching `PersistentVolume`.
-
-To run these examples we prepare the inventory and ssh keys as in the other examples:
-
-    oc new-project certcheck
-    oc create configmap inventory --from-file=hosts=/etc/ansible/hosts
-    oc create secret generic sshkey \
-      --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
-      --type=kubernetes.io/ssh-auth
-
-Additionally we allocate a `PersistentVolumeClaim` to store the reports:
-
-    oc create -f - <<PVC
-    ---
-    apiVersion: v1
-    kind: PersistentVolumeClaim
-    metadata:
-      name: certcheck-reports
-    spec:
-      accessModes:
-        - ReadWriteOnce
-      resources:
-        requests:
-          storage: 1Gi
-    PVC
-
-With that we can run the `Job` once:
-
-    oc create -f examples/certificate-check-volume.yaml
-
-or schedule it to run periodically as a `CronJob`:
-
-    oc create -f examples/scheduled-certcheck-volume.yaml

+ 0 - 53
examples/certificate-check-upload.yaml

@@ -1,53 +0,0 @@
-# An example Job to run a certificate check of OpenShift's internal
-# certificate status from within OpenShift.
-#
-# The generated reports are uploaded to a location in the master
-# hosts, using the playbook 'easy-mode-upload.yaml'.
-#
-# This example uses the openshift/origin-ansible container image.
-# (see README_CONTAINER_IMAGE.md in the top level dir for more details).
-#
-# The following objects are expected to be configured before the creation
-# of this Job:
-#   - A ConfigMap named 'inventory' with a key named 'hosts' that
-#     contains the Ansible inventory file
-#   - A Secret named 'sshkey' with a key named 'ssh-privatekey
-#     that contains the ssh key to connect to the hosts
-# (see examples/README.md for more details)
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: certificate-check
-spec:
-  parallelism: 1
-  completions: 1
-  template:
-    metadata:
-      name: certificate-check
-    spec:
-      containers:
-      - name: openshift-ansible
-        image: docker.io/openshift/origin-ansible
-        env:
-        - name: PLAYBOOK_FILE
-          value: playbooks/openshift-checks/certificate_expiry/easy-mode-upload.yaml
-        - name: INVENTORY_FILE
-          value: /tmp/inventory/hosts       # from configmap vol below
-        - name: ANSIBLE_PRIVATE_KEY_FILE    # from secret vol below
-          value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey
-        - name: CERT_EXPIRY_WARN_DAYS
-          value: "45"      # must be a string, don't forget the quotes
-        volumeMounts:
-        - name: sshkey
-          mountPath: /opt/app-root/src/.ssh/id_rsa
-        - name: inventory
-          mountPath: /tmp/inventory
-      volumes:
-      - name: sshkey
-        secret:
-          secretName: sshkey
-      - name: inventory
-        configMap:
-          name: inventory
-      restartPolicy: Never
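
Not part of the deleted file itself, but a minimal sketch of exercising this Job, assuming the `inventory` ConfigMap and `sshkey` Secret described in the header comments already exist in the current project:

```
oc create -f examples/certificate-check-upload.yaml

# Wait for COMPLETIONS to reach 1/1, then read the ansible-playbook output;
# the job-name label is added automatically by the Job controller.
oc get jobs certificate-check
oc logs -f "$(oc get pods -l job-name=certificate-check -o name | head -n 1)"
```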

+ 0 - 60
examples/certificate-check-volume.yaml

@@ -1,60 +0,0 @@
-# An example Job to run a certificate check of OpenShift's internal
-# certificate status from within OpenShift.
-#
-# The generated reports are stored in a Persistent Volume using
-# the playbook 'html_and_json_timestamp.yaml'.
-#
-# This example uses the openshift/origin-ansible container image.
-# (see README_CONTAINER_IMAGE.md in the top level dir for more details).
-#
-# The following objects are expected to be configured before the creation
-# of this Job:
-#   - A ConfigMap named 'inventory' with a key named 'hosts' that
-#     contains the Ansible inventory file
-#   - A Secret named 'sshkey' with a key named 'ssh-privatekey'
-#     that contains the ssh key to connect to the hosts
-#   - A PersistentVolumeClaim named 'certcheck-reports' where the
-#     generated reports are going to be stored
-# (see examples/README.md for more details)
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: certificate-check
-spec:
-  parallelism: 1
-  completions: 1
-  template:
-    metadata:
-      name: certificate-check
-    spec:
-      containers:
-      - name: openshift-ansible
-        image: docker.io/openshift/origin-ansible
-        env:
-        - name: PLAYBOOK_FILE
-          value: playbooks/openshift-checks/certificate_expiry/html_and_json_timestamp.yaml
-        - name: INVENTORY_FILE
-          value: /tmp/inventory/hosts       # from configmap vol below
-        - name: ANSIBLE_PRIVATE_KEY_FILE    # from secret vol below
-          value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey
-        - name: CERT_EXPIRY_WARN_DAYS
-          value: "45"      # must be a string, don't forget the quotes
-        volumeMounts:
-        - name: sshkey
-          mountPath: /opt/app-root/src/.ssh/id_rsa
-        - name: inventory
-          mountPath: /tmp/inventory
-        - name: reports
-          mountPath: /var/lib/certcheck
-      volumes:
-      - name: sshkey
-        secret:
-          secretName: sshkey
-      - name: inventory
-        configMap:
-          name: inventory
-      - name: reports
-        persistentVolumeClaim:
-          claimName: certcheck-reports
-      restartPolicy: Never

+ 0 - 50
examples/scheduled-certcheck-upload.yaml

@@ -1,50 +0,0 @@
-# An example CronJob to run a regular check of OpenShift's internal
-# certificate status.
-#
-# Each job will upload new reports to a directory in the master hosts
-#
-# The Job specification is the same as 'certificate-check-upload.yaml'
-# and the expected pre-configuration is equivalent.
-# See that Job example and examples/README.md for more details.
-
----
-apiVersion: batch/v1beta1
-kind: CronJob
-metadata:
-  name: certificate-check
-  labels:
-    app: certcheck
-spec:
-  schedule: "0 0 1 * *"      # every 1st day of the month at midnight
-  jobTemplate:
-    metadata:
-      labels:
-        app: certcheck
-    spec:
-      template:
-        spec:
-          containers:
-          - name: openshift-ansible
-            image: docker.io/openshift/origin-ansible
-            env:
-            - name: PLAYBOOK_FILE
-              value: playbooks/openshift-checks/certificate_expiry/easy-mode-upload.yaml
-            - name: INVENTORY_FILE
-              value: /tmp/inventory/hosts       # from configmap vol below
-            - name: ANSIBLE_PRIVATE_KEY_FILE    # from secret vol below
-              value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey
-            - name: CERT_EXPIRY_WARN_DAYS
-              value: "45"      # must be a string, don't forget the quotes
-            volumeMounts:
-            - name: sshkey
-              mountPath: /opt/app-root/src/.ssh/id_rsa
-            - name: inventory
-              mountPath: /tmp/inventory
-          volumes:
-          - name: sshkey
-            secret:
-              secretName: sshkey
-          - name: inventory
-            configMap:
-              name: inventory
-          restartPolicy: Never
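
The monthly schedule above is only an example. Assuming the CronJob has been created as-is, the schedule can be adjusted in place with a strategic-merge patch (the weekly cron expression below is just an illustration):

```
# Switch from the 1st of the month to every Sunday at midnight
oc patch cronjob certificate-check -p '{"spec": {"schedule": "0 0 * * 0"}}'
```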

+ 0 - 55
examples/scheduled-certcheck-volume.yaml

@@ -1,55 +0,0 @@
-# An example CronJob to run a regular check of OpenShift's internal
-# certificate status.
-#
-# Each job will add a new pair of reports to the configured Persistent Volume
-#
-# The Job specification is the same as 'certificate-check-volume.yaml'
-# and the expected pre-configuration is equivalent.
-# See that Job example and examples/README.md for more details.
-
----
-apiVersion: batch/v1beta1
-kind: CronJob
-metadata:
-  name: certificate-check
-  labels:
-    app: certcheck
-spec:
-  schedule: "0 0 1 * *"      # every 1st day of the month at midnight
-  jobTemplate:
-    metadata:
-      labels:
-        app: certcheck
-    spec:
-      template:
-        spec:
-          containers:
-          - name: openshift-ansible
-            image: docker.io/openshift/origin-ansible
-            env:
-            - name: PLAYBOOK_FILE
-              value: playbooks/openshift-checks/certificate_expiry/html_and_json_timestamp.yaml
-            - name: INVENTORY_FILE
-              value: /tmp/inventory/hosts       # from configmap vol below
-            - name: ANSIBLE_PRIVATE_KEY_FILE    # from secret vol below
-              value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey
-            - name: CERT_EXPIRY_WARN_DAYS
-              value: "45"      # must be a string, don't forget the quotes
-            volumeMounts:
-            - name: sshkey
-              mountPath: /opt/app-root/src/.ssh/id_rsa
-            - name: inventory
-              mountPath: /tmp/inventory
-            - name: reports
-              mountPath: /var/lib/certcheck
-          volumes:
-          - name: sshkey
-            secret:
-              secretName: sshkey
-          - name: inventory
-            configMap:
-              name: inventory
-          - name: reports
-            persistentVolumeClaim:
-              claimName: certcheck-reports
-          restartPolicy: Never
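
When troubleshooting either CronJob it can help to pause the schedule or fire a one-off run. A sketch follows; the `oc create job --from=cronjob/...` form requires a reasonably recent `oc` client:

```
# Pause and later resume scheduled runs
oc patch cronjob certificate-check -p '{"spec": {"suspend": true}}'
oc patch cronjob certificate-check -p '{"spec": {"suspend": false}}'

# Launch an immediate Job from the CronJob template without waiting for the schedule
oc create job --from=cronjob/certificate-check certificate-check-manual
```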

+ 0 - 37
hack/hooks/README.md

@@ -1,37 +0,0 @@
-# OpenShift-Ansible Git Hooks
-
-## Introduction
-
-This `hack` sub-directory holds
-[git commit hooks](https://www.atlassian.com/git/tutorials/git-hooks#conceptual-overview)
-you may use when working on openshift-ansible contributions. See the
-README in each sub-directory for an overview of what each hook does
-and if the hook has any specific usage or setup instructions.
-
-## Usage
-
-Basic git hook usage is simple:
-
-1) Copy (or symlink) the hook to the `$REPO_ROOT/.git/hooks/` directory
-2) Make the hook executable (`chmod +x $PATH_TO_HOOK`)
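
For instance, installing this repository's `verify_generated_modules` hook could look like the sketch below, run from the root of an openshift-ansible checkout:

```
# Symlink the hook into the local git hooks directory and make the source executable
ln -s ../../hack/hooks/verify_generated_modules/pre-commit .git/hooks/pre-commit
chmod +x hack/hooks/verify_generated_modules/pre-commit
```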
-
-## Multiple Hooks of the Same Type
-
-If you want to install multiple hooks of the same type, for example:
-multiple `pre-commit` hooks, you will need some kind of *hook
-dispatcher*. For an example of an easy-to-use hook dispatcher, check
-out this gist by carlos-jenkins:
-
-* [multihooks.py](https://gist.github.com/carlos-jenkins/89da9dcf9e0d528ac978311938aade43)
-
-## Contributing Hooks
-
-If you want to contribute a new hook there are only a few criteria
-that must be met:
-
-* The hook **MUST** include a README describing the purpose of the hook
-* The README **MUST** describe special setup instructions if they are required
-* The hook **MUST** be in a sub-directory of this directory
-* The hook file **MUST** be named following the standard git hook
-  naming pattern (i.e., pre-commit hooks **MUST** be called
-  `pre-commit`)

+ 0 - 19
hack/hooks/verify_generated_modules/README.md

@@ -1,19 +0,0 @@
-# Verify Generated Modules
-
-Pre-commit hook for verifying that generated library modules match
-their EXPECTED content. Library modules are generated from fragments
-under the `roles/lib_(openshift|utils)/src/` directories.
-
-If the attempted commit modified files under the
-`roles/lib_(openshift|utils)/` directories this script will run the
-`generate.py --verify` command.
-
-This script will **NOT RUN** if module source fragments are modified
-but *not part of the commit*. I.e., you can still make commits if you
-modified module fragments AND other files but are *not committing
-the module fragments*.
-
-# Setup Instructions
-
-Standard installation procedure. Copy the hook to the `.git/hooks/`
-directory and ensure it is executable.
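
The same verification the hook performs can be run by hand from the repository root; these are the commands the `pre-commit` script below invokes:

```
./roles/lib_openshift/src/generate.py --verify
./roles/lib_utils/src/generate.py --verify
```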

+ 0 - 55
hack/hooks/verify_generated_modules/pre-commit

@@ -1,55 +0,0 @@
-#!/bin/sh
-
-######################################################################
-# Pre-commit hook for verifying that generated library modules match
-# their EXPECTED content. Library modules are generated from fragments
-# under the 'roles/lib_(openshift|utils)/src/' directories.
-#
-# If the attempted commit modified files under the
-# 'roles/lib_(openshift|utils)/' directories this script will run the
-# 'generate.py --verify' command.
-#
-# This script will NOT RUN if module source fragments are modified but
-# not part of the commit. I.e., you can still make commits if you
-# modified module fragments AND other files but are not committing
-# the module fragments.
-
-# Did the commit modify any source module files?
-CHANGES=`git diff-index --stat --cached HEAD | grep -E '^ roles/lib_(openshift|utils)/src/(class|doc|ansible|lib)/'`
-RET_CODE=$?
-ABORT=0
-
-if [ "${RET_CODE}" -eq "0" ]; then
-    # Modifications detected. Run the verification scripts.
-
-    # Which was it?
-    if echo "$CHANGES" | grep -q 'roles/lib_openshift/'; then
-	echo "Validating lib_openshift..."
-	./roles/lib_openshift/src/generate.py --verify
-	if [ "${?}" -ne "0" ]; then
-	    ABORT=1
-	fi
-    fi
-
-    if echo "$CHANGES" | grep -q 'roles/lib_utils/'; then
-	echo "Validating lib_utils..."
-	./roles/lib_utils/src/generate.py --verify
-	if [ "${?}" -ne "0" ]; then
-	    ABORT=1
-	fi
-    fi
-
-    if [ "${ABORT}" -eq "1" ]; then
-	cat <<EOF
-
-ERROR: Module verification failed. Generated files do not match fragments.
-
-Choices to continue:
-  1) Run './roles/lib_(openshift|utils)/src/generate.py' from the root of
-     the repo to regenerate the files
-  2) Skip verification with '--no-verify' option to 'git commit'
-EOF
-    fi
-fi
-
-exit $ABORT

+ 0 - 1
inventory/.gitignore

@@ -1,3 +1,2 @@
-hosts
 /dynamic/gcp/group_vars/all/00_default_files_dir.yml
 /dynamic/aws/group_vars/all/00_default_files_dir.yml

+ 0 - 24
inventory/40_basic_inventory.ini

@@ -1,24 +0,0 @@
-[nodes:children]
-bootstrap
-masters
-workers
-
-[nodes:vars]
-ansible_ssh_user=centos
-ansible_become=True
-
-openshift_install_config_path="~/install-config-ansible.yml"
-openshift_deployment_type=origin
-openshift_release=v4.0
-
-[bootstrap]
-mycluster-bootstrap.example.com
-
-[bootstrap:vars]
-openshift_ignition_file_path="~/bootstrap.ign"
-
-[masters]
-mycluster-master-0.example.com
-
-[workers]
-mycluster-worker-0.example.com

File diff suppressed because it is too large
+ 4 - 1093
inventory/hosts.example


+ 0 - 61
inventory/hosts.glusterfs.external.example

@@ -1,61 +0,0 @@
-# This is an example of an OpenShift-Ansible host inventory for a cluster
-# with natively hosted, containerized GlusterFS storage.
-#
-# This inventory may be used with the deploy_cluster.yml playbook to deploy a new
-# cluster with GlusterFS storage, which will use that storage to create a
-# volume that will provide backend storage for a hosted Docker registry.
-#
-# This inventory may also be used with openshift-glusterfs/config.yml to
-# deploy GlusterFS storage on an existing cluster. With this playbook, the
-# registry backend volume will be created but the administrator must then
-# either deploy a hosted registry or change an existing hosted registry to use
-# that volume.
-#
-# There are additional configuration parameters that can be specified to
-# control the deployment and state of a GlusterFS cluster. Please see the
-# documentation in playbooks/openshift-glusterfs/README.md and
-# roles/openshift_storage_glusterfs/README.md for additional details.
-
-[OSEv3:children]
-masters
-nodes
-etcd
-# Specify there will be GlusterFS nodes
-glusterfs
-
-[OSEv3:vars]
-ansible_ssh_user=root
-openshift_deployment_type=origin
-# Specify that we want to use an external GlusterFS cluster
-openshift_storage_glusterfs_is_native=False
-# Specify the IP address or hostname of the external heketi service
-openshift_storage_glusterfs_heketi_url=172.0.0.1
-
-[masters]
-master
-
-[nodes]
-# masters should be schedulable to run web console pods
-master  openshift_schedulable=True
-node0   openshift_schedulable=True
-node1   openshift_schedulable=True
-node2   openshift_schedulable=True
-
-[etcd]
-master
-
-# Specify the glusterfs group, which contains the nodes of the external
-# GlusterFS cluster. At a minimum, each node must have "glusterfs_hostname"
-# and "glusterfs_devices" variables defined.
-#
-# The first variable indicates the hostname of the external GlusterFS node,
-# and must be reachable by the external heketi service.
-#
-# The second variable is a list of block devices the node will have access to
-# that are intended solely for use as GlusterFS storage. These block devices
-# must be bare (e.g. have no data, not be marked as LVM PVs), and will be
-# formatted.
-[glusterfs]
-node0.local  glusterfs_ip='172.0.0.10' glusterfs_devices='[ "/dev/vdb" ]'
-node1.local  glusterfs_ip='172.0.0.11' glusterfs_devices='[ "/dev/vdb", "/dev/vdc" ]'
-node2.local  glusterfs_ip='172.0.0.11' glusterfs_devices='[ "/dev/vdd" ]'
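
As the header comments note, an inventory like this can also target an existing cluster. A sketch of invoking the GlusterFS config playbook against a copy of this file (the inventory path is a placeholder):

```
ansible-playbook -i inventory/hosts.glusterfs.external.example \
    playbooks/openshift-glusterfs/config.yml
```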

+ 0 - 64
inventory/hosts.glusterfs.mixed.example

@@ -1,64 +0,0 @@
-# This is an example of an OpenShift-Ansible host inventory for a cluster
-# with natively hosted, containerized GlusterFS storage.
-#
-# This inventory may be used with the deploy_cluster.yml playbook to deploy a new
-# cluster with GlusterFS storage, which will use that storage to create a
-# volume that will provide backend storage for a hosted Docker registry.
-#
-# This inventory may also be used with openshift-glusterfs/config.yml to
-# deploy GlusterFS storage on an existing cluster. With this playbook, the
-# registry backend volume will be created but the administrator must then
-# either deploy a hosted registry or change an existing hosted registry to use
-# that volume.
-#
-# There are additional configuration parameters that can be specified to
-# control the deployment and state of a GlusterFS cluster. Please see the
-# documentation in playbooks/openshift-glusterfs/README.md and
-# roles/openshift_storage_glusterfs/README.md for additional details.
-
-[OSEv3:children]
-masters
-nodes
-etcd
-# Specify there will be GlusterFS nodes
-glusterfs
-
-[OSEv3:vars]
-ansible_ssh_user=root
-openshift_deployment_type=origin
-# Specify that we want to use an external GlusterFS cluster and a native
-# heketi service
-openshift_storage_glusterfs_is_native=False
-openshift_storage_glusterfs_heketi_is_native=True
-# Specify that heketi will use SSH to communicate to the GlusterFS nodes and
-# the private key file it will use for authentication
-openshift_storage_glusterfs_heketi_executor=ssh
-openshift_storage_glusterfs_heketi_ssh_keyfile=/root/id_rsa
-[masters]
-master
-
-[nodes]
-# masters should be schedulable to run web console pods
-master  openshift_schedulable=True
-node0   openshift_schedulable=True
-node1   openshift_schedulable=True
-node2   openshift_schedulable=True
-
-[etcd]
-master
-
-# Specify the glusterfs group, which contains the nodes of the external
-# GlusterFS cluster. At a minimum, each node must have "glusterfs_hostname"
-# and "glusterfs_devices" variables defined.
-#
-# The first variable indicates the hostname of the external GlusterFS node,
-# and must be reachable by the external heketi service.
-#
-# The second variable is a list of block devices the node will have access to
-# that are intended solely for use as GlusterFS storage. These block devices
-# must be bare (e.g. have no data, not be marked as LVM PVs), and will be
-# formatted.
-[glusterfs]
-node0.local  glusterfs_ip='172.0.0.10' glusterfs_devices='[ "/dev/vdb" ]'
-node1.local  glusterfs_ip='172.0.0.11' glusterfs_devices='[ "/dev/vdb", "/dev/vdc" ]'
-node2.local  glusterfs_ip='172.0.0.11' glusterfs_devices='[ "/dev/vdd" ]'

+ 0 - 51
inventory/hosts.glusterfs.native.example

@@ -1,51 +0,0 @@
-# This is an example of an OpenShift-Ansible host inventory for a cluster
-# with natively hosted, containerized GlusterFS storage for applications. It
-# will also automatically create a StorageClass for this purpose.
-#
-# This inventory may be used with the deploy_cluster.yml playbook to deploy a new
-# cluster with GlusterFS storage.
-#
-# This inventory may also be used with openshift-glusterfs/config.yml to
-# deploy GlusterFS storage on an existing cluster.
-#
-# There are additional configuration parameters that can be specified to
-# control the deployment and state of a GlusterFS cluster. Please see the
-# documentation in playbooks/openshift-glusterfs/README.md and
-# roles/openshift_storage_glusterfs/README.md for additional details.
-
-[OSEv3:children]
-masters
-nodes
-etcd
-# Specify there will be GlusterFS nodes
-glusterfs
-
-[OSEv3:vars]
-ansible_ssh_user=root
-openshift_deployment_type=origin
-
-[masters]
-master
-
-[nodes]
-# masters should be schedulable to run web console pods
-master  openshift_schedulable=True
-# A hosted registry, by default, will only be deployed on nodes labeled
-# "node-role.kubernetes.io/infra=true".
-node0   openshift_schedulable=True
-node1   openshift_schedulable=True
-node2   openshift_schedulable=True
-
-[etcd]
-master
-
-# Specify the glusterfs group, which contains the nodes that will host
-# GlusterFS storage pods. At a minimum, each node must have a
-# "glusterfs_devices" variable defined. This variable is a list of block
-# devices the node will have access to that are intended solely for use as
-# GlusterFS storage. These block devices must be bare (e.g. have no data, not
-# be marked as LVM PVs), and will be formatted.
-[glusterfs]
-node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'

+ 0 - 57
inventory/hosts.glusterfs.registry-only.example

@@ -1,57 +0,0 @@
-# This is an example of an OpenShift-Ansible host inventory for a cluster
-# with natively hosted, containerized GlusterFS storage for exclusive use
-# as storage for a natively hosted Docker registry.
-#
-# This inventory may be used with the deploy_cluster.yml playbook to deploy a new
-# cluster with GlusterFS storage, which will use that storage to create a
-# volume that will provide backend storage for a hosted Docker registry.
-#
-# This inventory may also be used with openshift-glusterfs/registry.yml to
-# deploy GlusterFS storage on an existing cluster. With this playbook, the
-# registry backend volume will be created but the administrator must then
-# either deploy a hosted registry or change an existing hosted registry to use
-# that volume.
-#
-# There are additional configuration parameters that can be specified to
-# control the deployment and state of a GlusterFS cluster. Please see the
-# documentation in playbooks/openshift-glusterfs/README.md and
-# roles/openshift_storage_glusterfs/README.md for additional details.
-
-[OSEv3:children]
-masters
-nodes
-etcd
-# Specify there will be GlusterFS nodes
-glusterfs_registry
-
-[OSEv3:vars]
-ansible_ssh_user=root
-openshift_deployment_type=origin
-# Specify that we want to use GlusterFS storage for a hosted registry
-openshift_hosted_registry_storage_kind=glusterfs
-
-[masters]
-master openshift_node_group_name="node-config-master"
-
-[nodes]
-# masters should be schedulable to run web console pods
-master  openshift_schedulable=True
-# A hosted registry, by default, will only be deployed on nodes labeled
-# "node-role.kubernetes.io/infra=true".
-node0   openshift_node_group_name="node-config-infra"
-node1   openshift_node_group_name="node-config-infra"
-node2   openshift_node_group_name="node-config-infra"
-
-[etcd]
-master
-
-# Specify the glusterfs group, which contains the nodes that will host
-# GlusterFS storage pods. At a minimum, each node must have a
-# "glusterfs_devices" variable defined. This variable is a list of block
-# devices the node will have access to that are intended solely for use as
-# GlusterFS storage. These block devices must be bare (e.g. have no data, not
-# be marked as LVM PVs), and will be formatted.
-[glusterfs_registry]
-node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'

+ 0 - 68
inventory/hosts.glusterfs.storage-and-registry.example

@@ -1,68 +0,0 @@
-# This is an example of an OpenShift-Ansible host inventory for a cluster
-# with natively hosted, containerized GlusterFS storage for both general
-# application use and a natively hosted Docker registry. It will also create a
-# StorageClass for the general storage.
-#
-# This inventory may be used with the deploy_cluster.yml playbook to deploy a new
-# cluster with GlusterFS storage.
-#
-# This inventory may also be used with openshift-glusterfs/config.yml to
-# deploy GlusterFS storage on an existing cluster. With this playbook, the
-# registry backend volume will be created but the administrator must then
-# either deploy a hosted registry or change an existing hosted registry to use
-# that volume.
-#
-# There are additional configuration parameters that can be specified to
-# control the deployment and state of a GlusterFS cluster. Please see the
-# documentation in playbooks/openshift-glusterfs/README.md and
-# roles/openshift_storage_glusterfs/README.md for additional details.
-
-[OSEv3:children]
-masters
-nodes
-etcd
-# Specify there will be GlusterFS nodes
-glusterfs
-glusterfs_registry
-
-[OSEv3:vars]
-ansible_ssh_user=root
-openshift_deployment_type=origin
-# Specify that we want to use GlusterFS storage for a hosted registry
-openshift_hosted_registry_storage_kind=glusterfs
-
-[masters]
-master
-
-[nodes]
-# masters should be schedulable to run web console pods
-master  openshift_node_group_name="node-config-master" openshift_schedulable=True
-# It is recommended to not use a single cluster for both general and registry
-# storage, so two three-node clusters will be required.
-node0   openshift_node_group_name="node-config-compute"
-node1   openshift_node_group_name="node-config-compute"
-node2   openshift_node_group_name="node-config-compute"
-# A hosted registry, by default, will only be deployed on nodes labeled
-# "node-role.kubernetes.io/infra=true".
-node3   openshift_node_group_name="node-config-infra"
-node4   openshift_node_group_name="node-config-infra"
-node5   openshift_node_group_name="node-config-infra"
-
-[etcd]
-master
-
-# Specify the glusterfs group, which contains the nodes that will host
-# GlusterFS storage pods. At a minimum, each node must have a
-# "glusterfs_devices" variable defined. This variable is a list of block
-# devices the node will have access to that are intended solely for use as
-# GlusterFS storage. These block devices must be bare (e.g. have no data, not
-# be marked as LVM PVs), and will be formatted.
-[glusterfs]
-node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-
-[glusterfs_registry]
-node3  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node4  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'
-node5  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]'

+ 0 - 17
inventory/hosts.grafana.example

@@ -1,17 +0,0 @@
-[OSEv3:children]
-masters
-nodes
-
-[OSEv3:vars]
-# Grafana Configuration
-#grafana_namespace=grafana
-#grafana_user=grafana
-#grafana_password=grafana
-#grafana_datasource_name="example"
-#grafana_prometheus_namespace="openshift-metrics"
-#grafana_prometheus_sa=prometheus
-#grafana_node_exporter=false
-#grafana_graph_granularity="2m"
-
-[masters]
-master

+ 0 - 27
inventory/hosts.localhost

@@ -1,27 +0,0 @@
-#bare minimum hostfile
-
-[OSEv3:children]
-masters
-nodes
-etcd
-
-[OSEv3:vars]
-# if your target hosts are Fedora uncomment this
-#ansible_python_interpreter=/usr/bin/python3
-openshift_deployment_type=origin
-openshift_portal_net=172.30.0.0/16
-# localhost likely doesn't meet the minimum requirements
-openshift_disable_check=disk_availability,memory_availability
-
-openshift_node_groups=[{'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true']}]
-
-
-[masters]
-localhost ansible_connection=local
-
-[etcd]
-localhost ansible_connection=local
-
-[nodes]
-# openshift_node_group_name should match the 'name' key of one of the entries in the openshift_node_groups list.
-localhost ansible_connection=local openshift_node_group_name="node-config-all-in-one"

+ 0 - 37
inventory/hosts.openstack

@@ -1,37 +0,0 @@
-# This is an example of an OpenShift-Ansible host inventory
-
-# Create an OSEv3 group that contains the masters and nodes groups
-[OSEv3:children]
-masters
-nodes
-etcd
-lb
-
-# Set variables common for all OSEv3 hosts
-[OSEv3:vars]
-ansible_ssh_user=cloud-user
-ansible_become=yes
-
-# Debug level for all OpenShift components (Defaults to 2)
-debug_level=2
-
-openshift_deployment_type=openshift-enterprise
-
-openshift_additional_repos=[{'id': 'ose-3.1', 'name': 'ose-3.1', 'baseurl': 'http://pulp.dist.prod.ext.phx2.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ose/3.1/os', 'enabled': 1, 'gpgcheck': 0}]
-
-openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
-
-#openshift_pkg_version=-3.0.0.0
-
-[masters]
-jdetiber-master.usersys.redhat.com openshift_public_hostname="{{ inventory_hostname }}"
-
-[etcd]
-jdetiber-etcd.usersys.redhat.com
-
-[lb]
-#ose3-lb-ansible.test.example.com
-
-[nodes]
-jdetiber-master.usersys.redhat.com openshift_public_hostname="{{ inventory_hostname }}" openshift_node_group_name="node-config-master"
-jdetiber-node[1:2].usersys.redhat.com openshift_public_hostname="{{ inventory_hostname }}" openshift_node_group_name="node-config-compute"

+ 0 - 30
inventory/install-config-example.yml

@@ -1,30 +0,0 @@
----
-baseDomain: example.com
-machines:
-- name: master
-  replicas: 1
-- name: worker
-  # This should always be zero for openshift-ansible
-  replicas: 0
-metadata:
-  name: mycluster
-networking:
-  clusterNetworks:
-  - cidr: 10.128.0.0/14
-    hostSubnetLength: 9
-  serviceCIDR: 172.30.0.0/16
-  type: OpenShiftSDN
-platform:
-  libvirt:
-    # This URI is not actually used
-    URI: null
-    defaultMachinePlatform:
-      image: file:///unused
-    masterIPs: null
-    network:
-      if: null
-      ipRange: null
-pullSecret: |
-  < paste your pullSecret here >
-sshKey: |
-  < paste your pubkey here >

+ 0 - 2
meta/main.yml

@@ -1,2 +0,0 @@
----
-dependencies:

+ 0 - 16
playbooks/README.md

@@ -1,17 +1 @@
 # openshift-ansible playbooks
-
-In summary:
-
-- [`byo`](byo) (_Bring Your Own_ hosts) has the most actively maintained
-  playbooks for installing, upgrading and performing other tasks on OpenShift
-  clusters.
-- [`common`](common) has a set of playbooks that are included by playbooks in
-  `byo` and others.
-
-And:
-
-- [`adhoc`](adhoc) is a generic home for playbooks and tasks that are community
-  supported and not officially maintained.
-
-Refer to the `README.md` file in each playbook directory for more information
-about them.