Browse Source

CFME 4.6 work begins. CFME 4.5 references added to the release-3.6 branch

Tim Bielawa 7 years ago
parent
commit
42d330a1cf

+ 1 - 15
playbooks/common/openshift-cfme/config.yml

@@ -14,27 +14,13 @@
 #     # F-a-f, never check on this. True 'background' task.
 #     poll: 0
 
-- name: Configure Masters for CFME Bulk Image Imports
-  hosts: oo_masters_to_config
-  serial: 1
-  tasks:
-  - name: Run master cfme tuning playbook
-    include_role:
-      name: openshift_cfme
-      tasks_from: tune_masters
-
 - name: Setup CFME
-  hosts: oo_first_master
-  vars:
-    r_openshift_cfme_miq_template_content: "{{ lookup('file', 'roles/openshift_cfme/files/miq-template.yaml') | from_yaml}}"
+  hosts: m01.example.com
   pre_tasks:
   - name: Create a temporary place to evaluate the PV templates
     command: mktemp -d /tmp/openshift-ansible-XXXXXXX
     register: r_openshift_cfme_mktemp
     changed_when: false
-  - name: Ensure the server template was read from disk
-    debug:
-      msg="{{ r_openshift_cfme_miq_template_content | from_yaml }}"
 
   tasks:
   - name: Run the CFME Setup Role

+ 17 - 394
roles/openshift_cfme/README.md

@@ -1,404 +1,27 @@
-# OpenShift-Ansible - CFME Role
+## OpenShift-Ansible - CFME Role
 
-# PROOF OF CONCEPT - Alpha Version
+# CloudForms - 4.6
 
-This role is based on the work in the upstream
-[manageiq/manageiq-pods](https://github.com/ManageIQ/manageiq-pods)
-project. For additional literature on configuration specific to
-ManageIQ (optional post-installation tasks), visit the project's
-[upstream documentation page](http://manageiq.org/docs/get-started/basic-configuration).
+**Important Note**: As of 2017-09-06 the `master` branch of the
+[openshift-ansible](https://github.com/openshift/openshift-ansible)
+repository is now tracking changes for CloudForms 4.6.
 
-Please submit a
-[new issue](https://github.com/openshift/openshift-ansible/issues/new)
-if you run into bugs with this role or wish to request enhancements.
+If you installed CFME **4.5** previously using this role then you
+**must** use the role from the stable `release-3.6` branch.
 
-# Important Notes
+This role, `openshift_cfme`, in OpenShift Container Platform (OCP) 3.7
+**will not** be backwards compatible with the previous tech preview
+released in OCP 3.6.
 
-This is an early *proof of concept* role to install the Cloud Forms
-Management Engine (ManageIQ) on OpenShift Container Platform (OCP).
 
-* This role is still in **ALPHA STATUS**
-* Many options are hard-coded still (ex: NFS setup)
-* Not many configurable options yet
-* **Should** be ran on a dedicated cluster
-* **Will not run** on undersized infra
-* The terms *CFME* and *MIQ* / *ManageIQ* are interchangeable
+# CFME/MIQ 4.5 Legacy Instructions
 
-## Requirements
+* [OCP 3.6 - CFME 4.5 Installation Instructions](https://github.com/openshift/openshift-ansible/tree/release-3.6/roles/openshift_cfme)
 
-**NOTE:** These requirements are copied from the upstream
-[manageiq/manageiq-pods](https://github.com/ManageIQ/manageiq-pods)
-project.
+The instructions linked in the bulleted item above are for the **TECH
+PREVIEW** CloudForms Management Engine (ManageIQ) 4.5 release.
 
-### Prerequisites:
+# CloudForms 4.5 Pull Requests
 
-*
-  [OpenShift Origin 1.5](https://docs.openshift.com/container-platform/3.5/welcome/index.html)
-  or
-  [higher](https://docs.openshift.com/container-platform/latest/welcome/index.html)
-  provisioned
-* NFS or other compatible volume provider
-* A cluster-admin user (created by role if required)
-
-### Cluster Sizing
-
-In order to avoid random deployment failures due to resource
-starvation, we recommend a minimum cluster size for a **test**
-environment.
-
-| Type           | Size    | CPUs     | Memory   |
-|----------------|---------|----------|----------|
-| Masters        | `1+`    | `8`      | `12GB`   |
-| Nodes          | `2+`    | `4`      | `8GB`    |
-| PV Storage     | `25GB`  | `N/A`    | `N/A`    |
-
-
-![Basic CFME Deployment](img/CFMEBasicDeployment.png)
-
-**CFME has hard-requirements for memory. CFME will NOT install if your
-  infrastructure does not meet or exceed the requirements given
-  above. Do not run this playbook if you do not have the required
-  memory, you will just waste your time.**
-
-
-### Other sizing considerations
-
-* Recommendations assume MIQ will be the **only application running**
-  on this cluster.
-* Alternatively, you can provision an infrastructure node to run
-  registry/metrics/router/logging pods.
-* Each MIQ application pod will consume at least `3GB` of RAM on initial
-  deployment (blank deployment without providers).
-* RAM consumption will ramp up higher depending on appliance use, once
-  providers are added expect higher resource consumption.
-
-
-### Assumptions
-
-1) You meet/exceed the [cluster sizing](#cluster-sizing) requirements
-1) Your NFS server is on your master host
-1) Your PV backing NFS storage volume is mounted on `/exports/`
-
-Required directories that NFS will export to back the PVs:
-
-* `/exports/miq-pv0[123]`
-
-If the required directories are not present at install-time, they will
-be created using the recommended permissions per the
-[upstream documentation](https://github.com/ManageIQ/manageiq-pods#make-persistent-volumes-to-host-the-miq-database-and-application-data):
-
-* UID/GID: `root`/`root`
-* Mode: `0775`
-
-**IMPORTANT:** If you are using a separate volume (`/dev/vdX`) for NFS
-  storage, **ensure** it is mounted on `/exports/` **before** running
-  this role.
-
-
-
-## Role Variables
-
-Core variables in this role:
-
-| Name                          | Default value | Description   |
-|-------------------------------|---------------|---------------|
-| `openshift_cfme_install_app`  | `False`       | `True`: Install everything and create a new CFME app, `False`: Just install all of the templates and scaffolding |
-
-
-Variables you may override have defaults defined in
-[defaults/main.yml](defaults/main.yml).
-
-
-# Important Notes
-
-This is a **tech preview** status role presently. Use it with the same
-caution you would give any other pre-release software.
-
-**Most importantly** follow this one rule: don't re-run the entrypoint
-playbook multiple times in a row without cleaning up after previous
-runs if some of the CFME steps have ran. This is a known
-flake. Cleanup instructions are provided at the bottom of this README.
-
-
-# Usage
-
-This section describes the basic usage of this role. All parameters
-will use their [default values](defaults/main.yml).
-
-## Pre-flight Checks
-
-**IMPORTANT:** As documented above in [the prerequisites](#prerequisites),
-  you **must already** have your OCP cluster up and running.
-
-**Optional:** The ManageIQ pod is fairly large (about 1.7 GB) so to
-save some spin-up time post-deployment, you can begin pre-pulling the
-docker image to each of your nodes now:
-
-```
-root@node0x # docker pull docker.io/manageiq/manageiq-pods:app-latest-fine
-```
-
-## Getting Started
-
-1) The *entry point playbook* to install CFME is located in
-[the BYO playbooks](../../playbooks/byo/openshift-cfme/config.yml)
-directory
-
-2) Update your existing `hosts` inventory file and ensure the
-parameter `openshift_cfme_install_app` is set to `True` under the
-`[OSEv3:vars]` block.
-
-2) Using your existing `hosts` inventory file, run `ansible-playbook`
-with the entry point playbook:
-
-```
-$ ansible-playbook -v -i <INVENTORY_FILE> playbooks/byo/openshift-cfme/config.yml
-```
-
-## Next Steps
-
-Once complete, the playbook will let you know:
-
-
-```
-TASK [openshift_cfme : Status update] *********************************************************
-ok: [ho.st.na.me] => {
-    "msg": "CFME has been deployed. Note that there will be a delay before it is fully initialized.\n"
-}
-```
-
-This will take several minutes (*possibly 10 or more*, depending on
-your network connection). However, you can get some insight into the
-deployment process during initialization.
-
-### oc describe pod manageiq-0
-
-*Some useful information about the output you will see if you run the
-`oc describe pod manageiq-0` command*
-
-**Readiness probe**s - These will take a while to become
-`Healthy`. The initial health probes won't even happen for at least 8
-minutes depending on how long it takes you to pull down the large
-images. ManageIQ is a large application so it may take a considerable
-amount of time for it to deploy and be marked as `Healthy`.
-
-If you go to the node you know the application is running on (check
-for `Successfully assigned manageiq-0 to <HOST|IP>` in the `describe`
-output) you can run a `docker pull` command to monitor the progress of
-the image pull:
-
-```
-[root@cfme-node ~]# docker pull docker.io/manageiq/manageiq-pods:app-latest-fine
-Trying to pull repository docker.io/manageiq/manageiq-pods ...
-sha256:6c055ca9d3c65cd694d6c0e28986b5239ba56bbdf0488cccdaa283d545258f8a: Pulling from docker.io/manageiq/manageiq-pods
-Digest: sha256:6c055ca9d3c65cd694d6c0e28986b5239ba56bbdf0488cccdaa283d545258f8a
-Status: Image is up to date for docker.io/manageiq/manageiq-pods:app-latest-fine
-```
-
-The example above demonstrates the case where the image has been
-successfully pulled already.
-
-If the image isn't completely pulled already then you will see
-multiple progress bars detailing each image layer download status.
-
-
-### rsh
-
-*Useful inspection/progress monitoring techniques with the `oc rsh`
-command.*
-
-
-On your master node, switch to the `cfme` project (or whatever you
-named it if you overrode the `openshift_cfme_project` variable) and
-check on the pod states:
-
-```
-[root@cfme-master01 ~]# oc project cfme
-Now using project "cfme" on server "https://10.10.0.100:8443".
-
-[root@cfme-master01 ~]# oc get pod
-NAME                 READY     STATUS    RESTARTS   AGE
-manageiq-0           0/1       Running   0          14m
-memcached-1-3lk7g    1/1       Running   0          14m
-postgresql-1-12slb   1/1       Running   0          14m
-```
-
-Note how the `manageiq-0` pod says `0/1` under the **READY**
-column. After some time (depending on your network connection) you'll
-be able to `rsh` into the pod to find out more of what's happening in
-real time. First, the easy-mode command, run this once `rsh` is
-available and then watch until it says `Started Initialize Appliance
-Database`:
-
-```
-[root@cfme-master01 ~]# oc rsh manageiq-0 journalctl -f -u appliance-initialize.service
-```
-
-For the full explanation of what this means, and more interactive
-inspection techniques, keep reading on.
-
-To obtain a shell on our `manageiq` pod we use this command:
-
-```
-[root@cfme-master01 ~]# oc rsh manageiq-0 bash -l
-```
-
-The `rsh` command opens a shell in your pod for you. In this case it's
-the pod called `manageiq-0`. `systemd` is managing the services in
-this pod so we can use the `list-units` command to see what is running
-currently: `# systemctl list-units | grep appliance`.
-
-If you see the `appliance-initialize` service running, this indicates
-that basic setup is still in progress. We can monitor the process with
-the `journalctl` command like so:
-
-
-```
-[root@manageiq-0 vmdb]# journalctl -f -u appliance-initialize.service
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Checking deployment status ==
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: No pre-existing EVM configuration found on region PV
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Checking for existing data on server PV ==
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Starting New Deployment ==
-Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Applying memcached config ==
-Jun 14 14:55:53 manageiq-0 appliance-initialize.sh[58]: == Initializing Appliance ==
-Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: create encryption key
-Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: configuring external database
-Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: Checking for connections to the database...
-Jun 14 14:56:09 manageiq-0 appliance-initialize.sh[58]: Create region starting
-Jun 14 14:58:15 manageiq-0 appliance-initialize.sh[58]: Create region complete
-Jun 14 14:58:15 manageiq-0 appliance-initialize.sh[58]: == Initializing PV data ==
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: == Initializing PV data backup ==
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: sending incremental file list
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: created directory /persistent/server-deploy/backup/backup_2017_06_14_145816
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/REGION
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/certs/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/certs/v2_key
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/config/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/config/database.yml
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/vmdb/
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/vmdb/GUID
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: sent 1330 bytes  received 136 bytes  2932.00 bytes/sec
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: total size is 770  speedup is 0.53
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: == Restoring PV data symlinks ==
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/REGION symlink is already in place, skipping
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/config/database.yml symlink is already in place, skipping
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/certs/v2_key symlink is already in place, skipping
-Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/log symlink is already in place, skipping
-Jun 14 14:58:28 manageiq-0 systemctl[304]: Removed symlink /etc/systemd/system/multi-user.target.wants/appliance-initialize.service.
-Jun 14 14:58:29 manageiq-0 systemd[1]: Started Initialize Appliance Database.
-```
-
-Most of what we see here (above) is the initial database seeding
-process. This process isn't very quick, so be patient.
-
-At the bottom of the log there is a special line from the `systemctl`
-service, `Removed symlink
-/etc/systemd/system/multi-user.target.wants/appliance-initialize.service`. The
-`appliance-initialize` service is no longer marked as enabled. This
-indicates that the base application initialization is complete now.
-
-We're not done yet though, there are other ancillary services which
-run in this pod to support the application. *Still in the rsh shell*,
-Use the `ps` command to monitor for the `httpd` processes
-starting. You will see output similar to the following when that stage
-has completed:
-
-```
-[root@manageiq-0 vmdb]# ps aux | grep http
-root       1941  0.0  0.1 249820  7640 ?        Ss   15:02   0:00 /usr/sbin/httpd -DFOREGROUND
-apache     1942  0.0  0.0 250752  6012 ?        S    15:02   0:00 /usr/sbin/httpd -DFOREGROUND
-apache     1943  0.0  0.0 250472  5952 ?        S    15:02   0:00 /usr/sbin/httpd -DFOREGROUND
-apache     1944  0.0  0.0 250472  5916 ?        S    15:02   0:00 /usr/sbin/httpd -DFOREGROUND
-apache     1945  0.0  0.0 250360  5764 ?        S    15:02   0:00 /usr/sbin/httpd -DFOREGROUND
-```
-
-Furthermore, you can find other related processes by just looking for
-ones with `MIQ` in their name:
-
-```
-[root@manageiq-0 vmdb]# ps aux | grep miq
-root        333 27.7  4.2 555884 315916 ?       Sl   14:58   3:59 MIQ Server
-root       1976  0.6  4.0 507224 303740 ?       SNl  15:02   0:03 MIQ: MiqGenericWorker id: 1, queue: generic
-root       1984  0.6  4.0 507224 304312 ?       SNl  15:02   0:03 MIQ: MiqGenericWorker id: 2, queue: generic
-root       1992  0.9  4.0 508252 304888 ?       SNl  15:02   0:05 MIQ: MiqPriorityWorker id: 3, queue: generic
-root       2000  0.7  4.0 510308 304696 ?       SNl  15:02   0:04 MIQ: MiqPriorityWorker id: 4, queue: generic
-root       2008  1.2  4.0 514000 303612 ?       SNl  15:02   0:07 MIQ: MiqScheduleWorker id: 5
-root       2026  0.2  4.0 517504 303644 ?       SNl  15:02   0:01 MIQ: MiqEventHandler id: 6, queue: ems
-root       2036  0.2  4.0 518532 303768 ?       SNl  15:02   0:01 MIQ: MiqReportingWorker id: 7, queue: reporting
-root       2044  0.2  4.0 519560 303812 ?       SNl  15:02   0:01 MIQ: MiqReportingWorker id: 8, queue: reporting
-root       2059  0.2  4.0 528372 303956 ?       SNl  15:02   0:01 puma 3.3.0 (tcp://127.0.0.1:5000) [MIQ: Web Server Worker]
-root       2067  0.9  4.0 529664 305716 ?       SNl  15:02   0:05 puma 3.3.0 (tcp://127.0.0.1:3000) [MIQ: Web Server Worker]
-root       2075  0.2  4.0 529408 304056 ?       SNl  15:02   0:01 puma 3.3.0 (tcp://127.0.0.1:4000) [MIQ: Web Server Worker]
-root       2329  0.0  0.0  10640   972 ?        S+   15:13   0:00 grep --color=auto -i miq
-```
-
-Finally, *still in the rsh shell*, to test if the application is
-running correctly, we can request the application homepage. If the
-page is available the page title will be `ManageIQ: Login`:
-
-```
-[root@manageiq-0 vmdb]# curl -s -k https://localhost | grep -A2 '<title>'
-<title>
-ManageIQ: Login
-</title>
-```
-
-**Note:** The `-s` flag makes `curl` operations silent and the `-k`
-flag to ignore errors about untrusted certificates.
-
-
-
-# Additional Upstream Resources
-
-Below are some useful resources from the upstream project
-documentation. You may find these of value.
-
-* [Verify Setup Was Successful](https://github.com/ManageIQ/manageiq-pods#verifying-the-setup-was-successful)
-* [POD Access And Routes](https://github.com/ManageIQ/manageiq-pods#pod-access-and-routes)
-* [Troubleshooting](https://github.com/ManageIQ/manageiq-pods#troubleshooting)
-
-
-# Manual Cleanup
-
-At this time uninstallation/cleanup is still a manual process. You
-will have to follow a few steps to fully remove CFME from your
-cluster.
-
-Delete the project:
-
-* `oc delete project cfme`
-
-Delete the PVs:
-
-* `oc delete pv miq-pv01`
-* `oc delete pv miq-pv02`
-* `oc delete pv miq-pv03`
-
-Clean out the old PV data:
-
-* `cd /exports/`
-* `find miq* -type f -delete`
-* `find miq* -type d -delete`
-
-Remove the NFS exports:
-
-* `rm /etc/exports.d/openshift_cfme.exports`
-* `exportfs -ar`
-
-Delete the user:
-
-* `oc delete user cfme`
-
-**NOTE:** The `oc delete project cfme` command will return quickly
-however it will continue to operate in the background. Continue
-running `oc get project` after you've completed the other steps to
-monitor the pods and final project termination progress.
+We are no longer accepting pull requests for the *Tech Preview*
+CloudForms 4.5 release.

+ 156 - 32
roles/openshift_cfme/defaults/main.yml

@@ -1,32 +1,154 @@
 ---
-# Namespace for the CFME project (Note: changed post-3.6 to use
-# reserved 'openshift-' namespace prefix)
+# Namespace for the CFME project
 openshift_cfme_project: openshift-cfme
 # Namespace/project description
-openshift_cfme_project_description: ManageIQ - CloudForms Management Engine
-# Basic user assigned the `admin` role for the project
-openshift_cfme_user: cfme
-# Project system account for enabling privileged pods
-openshift_cfme_service_account: "system:serviceaccount:{{ openshift_cfme_project }}:default"
-# All the required exports
-openshift_cfme_pv_exports:
-  - miq-pv01
-  - miq-pv02
-  - miq-pv03
-# PV template files and their created object names
-openshift_cfme_pv_data:
-  - pv_name: miq-pv01
-    pv_template: miq-pv-db.yaml
-    pv_label: CFME DB PV
-  - pv_name: miq-pv02
-    pv_template: miq-pv-region.yaml
-    pv_label: CFME Region PV
-  - pv_name: miq-pv03
-    pv_template: miq-pv-server.yaml
-    pv_label: CFME Server PV
-
-# Tuning parameter to use more than 5 images at once from an ImageStream
-openshift_cfme_maxImagesBulkImportedPerRepository: 100
+openshift_cfme_project_description: ManageIQ - CloudForms Management Engine 4.6
+
+######################################################################
+# BASE TEMPLATE AND DATABASE OPTIONS
+######################################################################
+# Which flavor of CFME would you like? You may install CFME using a
+# podified PostgreSQL server, or you may choose to use an existing
+# PostgreSQL server.
+#
+# Choose 'miq-template' for a podified database install
+# Choose 'miq-template-ext-db' for an external database install
+openshift_cfme_app_template: miq-template
+
+# If you are using the miq-template-ext-db template then you must add
+# the required database parameters to the
+# openshift_cfme_template_parameters variable. For example:
+#
+# openshift_cfme_template_parameters:
+#   DATABASE_USER: root
+#   DATABASE_PASSWORD: @_grrrr8Pa$$.h3r3
+#   DATABASE_IP: 10.1.1.10
+#   DATABASE_PORT: 5432
+#   DATABASE_NAME: vmdb_production
+
+######################################################################
+# STORAGE OPTIONS
+######################################################################
+# DEFAULT - 'nfs'
+# Allowed options: nfs, external, preconfigured, cloudprovider.
+openshift_cfme_storage_class: nfs
+# * nfs - Best used for proof-of-concept installs. Will set up NFS on a
+#   cluster host (defaults to your first master in the inventory file)
+#   to back the required PVCs. The application requires a PVC and the
+#   database (which may be hosted externally) may require a
+#   second. PVC minimum required sizes are: 5GiB for the MIQ
+#   application, and 15GiB for the PostgreSQL database (20GiB minimum
+#   available space on a volume/partition if used specifically for
+#   NFS purposes)
+#
+# * external - You are using an external NFS server, such as a NetApp
+#   appliance. See the STORAGE - NFS OPTIONS section below for
+#   required information.
+#
+# * preconfigured - This CFME role will do NOTHING to modify storage
+#   settings. This option assumes expert knowledge and that you have
+#   done everything required ahead of time.
+#
+# * cloudprovider - You are using an OCP cloudprovider integration for
+#   your storage class. For this to work you must have already
+#   configured the required inventory parameters for your cloud
+#   provider
+#
+#   Ensure 'openshift_cloudprovider_kind' is defined (aws or gce) and
+#   that the applicable cloudprovider parameters are provided.
+
+######################################################################
+# STORAGE - NFS OPTIONS
+######################################################################
+# [OPTIONAL] - If you are using an EXTERNAL NFS server, such as a
+# NetApp appliance, then you must set the hostname here. Leave the
+# value as 'false' if you are not using external NFS
+openshift_cfme_storage_external_nfs_hostname: false
+# [OPTIONAL] - If you are using external NFS then you must set the base
+# path to the exports location here.
+#
+# Or, change this value if you want to change the default path used
+# for local NFS exports.
+openshift_cfme_storage_external_nfs_base_dir: /exports/
+
+
+######################################################################
+# VARIOUS CONSTANTS - DO NOT OVERRIDE THESE UNDER ANY CIRCUMSTANCES
+######################################################################
+
+######################################################################
+# Misc enumerated values
+# Allowed choices for the storage class parameter
+openshift_cfme_storage_classes:
+  - nfs
+  - external
+  - preconfigured
+  - cloudprovider
+# Name of the application templates with object/parameter definitions
+openshift_cfme_app_templates:
+  - miq-template-ext-db
+  - miq-template
+# PostgreSQL database connection parameters
+openshift_cfme_db_parameters:
+  - DATABASE_USER
+  - DATABASE_PASSWORD
+  - DATABASE_IP
+  - DATABASE_PORT
+  - DATABASE_NAME
+
+
+######################################################################
+# ACCOUNTING
+######################################################################
+# Service Account SCCs
+openshift_system_account_sccs:
+  - name: miq-anyuid
+    resource_name: anyuid
+  - name: miq-orchestrator
+    resource_name: anyuid
+  - name: miq-privileged
+    resource_name: privileged
+  - name: miq-httpd
+    resource_name: miq-httpd
+
+# Service Account Roles
+openshift_cfme_system_account_roles:
+  - name: miq-orchestrator
+    resource_name: view
+  - name: miq-orchestrator
+    resource_name: edit
+
+
+######################################################################
+# SCAFFOLDING - These are parameters we pre-seed that a user may or
+# may not set later
+######################################################################
+# A hash of parameters you want to override or set in the
+# miq-template.yaml or miq-template-ext-db.yaml templates. Set this in
+# your inventory file as a simple hash. Acceptable values are defined
+# under the .parameters list in files/miq-template{-ext-db}.yaml
+# Example:
+#
+# openshift_cfme_template_parameters={'APPLICATION_MEM_REQ': '512Mi'}
+openshift_cfme_template_parameters: {}
+
+# # All the required exports
+# openshift_cfme_pv_exports:
+#   - miq-pv01
+#   - miq-pv02
+#   - miq-pv03
+# # PV template files and their created object names
+# openshift_cfme_pv_data:
+#   - pv_name: miq-pv01
+#     pv_template: miq-pv-db.yaml
+#     pv_label: CFME DB PV
+#   - pv_name: miq-pv02
+#     pv_template: miq-pv-region.yaml
+#     pv_label: CFME Region PV
+#   - pv_name: miq-pv03
+#     pv_template: miq-pv-server.yaml
+#     pv_label: CFME Server PV
+
 # TODO: Refactor '_install_app' variable. This is just for testing but
 # maybe in the future it should control the entire yes/no for CFME.
 #
@@ -34,9 +156,11 @@ openshift_cfme_maxImagesBulkImportedPerRepository: 100
 # --template=manageiq). If False everything UP TO 'new-app' is ran.
 openshift_cfme_install_app: False
 # Docker image to pull
-openshift_cfme_application_img_name: "{{ 'registry.access.redhat.com/cloudforms45/cfme-openshift-app' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
-openshift_cfme_postgresql_img_name: "{{ 'registry.access.redhat.com/cloudforms45/cfme-openshift-postgresql' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
-openshift_cfme_memcached_img_name: "{{ 'registry.access.redhat.com/cloudforms45/cfme-openshift-memcached' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
-openshift_cfme_application_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'app-latest-fine' }}"
-openshift_cfme_memcached_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'memcached-latest-fine' }}"
-openshift_cfme_postgresql_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'postgresql-latest-fine' }}"
+# openshift_cfme_application_img_name: "{{ 'registry.access.redhat.com/cloudforms46/cfme-openshift-app' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
+# openshift_cfme_application_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'frontend-latest' }}"
+
+# openshift_cfme_memcached_img_name: "{{ 'registry.access.redhat.com/cloudforms46/cfme-openshift-memcached' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
+# openshift_cfme_memcached_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'memcached-latest-fine' }}"
+
+# openshift_cfme_postgresql_img_tag: "{{ 'latest' if openshift_deployment_type == 'openshift-enterprise' else 'postgresql-latest-fine' }}"
+# openshift_cfme_postgresql_img_name: "{{ 'registry.access.redhat.com/cloudforms46/cfme-openshift-postgresql' if openshift_deployment_type == 'openshift-enterprise' else 'docker.io/manageiq/manageiq-pods' }}"
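
Taken together, the new defaults are driven from the inventory. As a hedged illustration only (the hostnames, addresses, and credentials below are placeholders, not values shipped with the role), an external-NFS, external-database install might set:

```
# Illustrative inventory/group_vars values -- every value is a placeholder
openshift_cfme_app_template: miq-template-ext-db
openshift_cfme_storage_class: external
openshift_cfme_storage_external_nfs_hostname: nfs01.example.com
openshift_cfme_storage_external_nfs_base_dir: /exports/
openshift_cfme_template_parameters:
  DATABASE_USER: root
  DATABASE_PASSWORD: changeme
  DATABASE_IP: 10.1.1.10
  DATABASE_PORT: '5432'
  DATABASE_NAME: vmdb_production
```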

+ 28 - 0
roles/openshift_cfme/files/miq-backup-job.yaml

@@ -0,0 +1,28 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: manageiq-backup
+spec:
+  template:
+    metadata:
+      name: manageiq-backup
+    spec:
+      containers:
+      - name: postgresql
+        image: docker.io/manageiq/postgresql:latest
+        command:
+        - "/opt/manageiq/container-scripts/backup_db"
+        env:
+        - name: DATABASE_URL
+          valueFrom:
+            secretKeyRef:
+              name: manageiq-secrets
+              key: database-url
+        volumeMounts:
+        - name: miq-backup-vol
+          mountPath: "/backups"
+      volumes:
+      - name: miq-backup-vol
+        persistentVolumeClaim:
+          claimName: manageiq-backup
+      restartPolicy: Never

+ 10 - 0
roles/openshift_cfme/files/miq-backup-pvc.yaml

@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: manageiq-backup
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 15Gi
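
The backup Job in the previous file consumes the `manageiq-backup` claim defined here, so the PVC must exist before the Job is created. A minimal sketch of that ordering, assuming the `oc_obj` task pattern this role already uses in `tasks/accounts.yml` (the `role_path` file references and the `pvc`/`job` kind shortnames are assumptions):

```
- name: Ensure the CFME backup PVC exists
  oc_obj:
    state: present
    name: manageiq-backup
    namespace: "{{ openshift_cfme_project }}"
    kind: pvc
    files:
    - "{{ role_path }}/files/miq-backup-pvc.yaml"

- name: Launch the CFME database backup job
  oc_obj:
    state: present
    name: manageiq-backup
    namespace: "{{ openshift_cfme_project }}"
    kind: job
    files:
    - "{{ role_path }}/files/miq-backup-job.yaml"
```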

+ 13 - 0
roles/openshift_cfme/files/miq-pv-backup-example.yaml

@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: miq-pv03
+spec:
+  capacity:
+    storage: 15Gi
+  accessModes:
+  - ReadWriteOnce
+  nfs:
+    path: "/exports/miq-pv03"
+    server: "<your-nfs-host-here>"
+  persistentVolumeReclaimPolicy: Retain

+ 4 - 4
roles/openshift_cfme/templates/miq-pv-db.yaml.j2

@@ -6,8 +6,8 @@ spec:
   capacity:
     storage: 15Gi
   accessModes:
-    - ReadWriteOnce
-  nfs: 
-    path: {{ openshift_cfme_nfs_directory }}/miq-pv01
-    server: {{ openshift_cfme_nfs_server }}
+  - ReadWriteOnce
+  nfs:
+    path: "/exports/miq-pv01"
+    server: "<your-nfs-host-here>"
   persistentVolumeReclaimPolicy: Retain

+ 27 - 0
roles/openshift_cfme/files/miq-pv-server-example.yaml

@@ -0,0 +1,27 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: "${PV_NAME}"
+spec:
+  capacity:
+    storage: 5Gi
+  accessModes:
+  - ReadWriteOnce
+  nfs:
+    path: "/${BASE_PATH}/${PV_NAME}"
+    server: "${NFS_SERVER}"
+  persistentVolumeReclaimPolicy: Retain
+parameters:
+- name: BASE_PATH
+  displayName: BasePath
+  required: true
+  description: The parent directory of your NFS exports
+  value: /exports
+- name: PV_NAME
+  displayName: PVName
+  required: true
+  description: The name of this PV
+- name: NFS_SERVER
+  displayName: NFSServer
+  required: true
+  description: The hostname or IP address of the NFS server
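
Unlike the fixed backup example above, this file parameterizes the PV. Substituting `PV_NAME=miq-pv01`, `BASE_PATH=exports` (the `/${BASE_PATH}/${PV_NAME}` pattern already supplies the leading slash), and a hypothetical NFS server shows the object it is meant to produce:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: miq-pv01
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: "/exports/miq-pv01"
    server: nfs01.example.com          # placeholder NFS host
  persistentVolumeReclaimPolicy: Retain
```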

+ 35 - 0
roles/openshift_cfme/files/miq-restore-job.yaml

@@ -0,0 +1,35 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: manageiq-restore
+spec:
+  template:
+    metadata:
+      name: manageiq-restore
+    spec:
+      containers:
+      - name: postgresql
+        image: docker.io/manageiq/postgresql:latest
+        command:
+        - "/opt/manageiq/container-scripts/restore_db"
+        env:
+        - name: DATABASE_URL
+          valueFrom:
+            secretKeyRef:
+              name: manageiq-secrets
+              key: database-url
+        - name: BACKUP_VERSION
+          value: latest
+        volumeMounts:
+        - name: miq-backup-vol
+          mountPath: "/backups"
+        - name: miq-prod-vol
+          mountPath: "/restore"
+      volumes:
+      - name: miq-backup-vol
+        persistentVolumeClaim:
+          claimName: manageiq-backup
+      - name: miq-prod-vol
+        persistentVolumeClaim:
+          claimName: manageiq-postgresql
+      restartPolicy: Never
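
The restore Job mounts both the backup volume and the production `manageiq-postgresql` claim, and `BACKUP_VERSION: latest` restores the newest backup. Restoring an older backup would mean overriding that env value with a specific backup directory name; a sketch of the container's env list with such an override (the directory name is purely hypothetical):

```
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: manageiq-secrets
              key: database-url
        - name: BACKUP_VERSION
          value: backup_2017_09_06_120000  # hypothetical; 'latest' restores the newest
```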

+ 38 - 0
roles/openshift_cfme/files/miq-scc-httpd.yaml

@@ -0,0 +1,38 @@
+allowHostDirVolumePlugin: false
+allowHostIPC: false
+allowHostNetwork: false
+allowHostPID: false
+allowHostPorts: false
+allowPrivilegedContainer: false
+allowedCapabilities:
+apiVersion: v1
+defaultAddCapabilities:
+- SYS_ADMIN
+fsGroup:
+  type: RunAsAny
+groups:
+- system:cluster-admins
+kind: SecurityContextConstraints
+metadata:
+  annotations:
+    kubernetes.io/description: miq-httpd provides all features of the anyuid SCC but allows users to have SYS_ADMIN capabilities. This is the SCC required for pods that need to run with systemd and the message bus.
+  creationTimestamp:
+  name: miq-httpd
+priority: 10
+readOnlyRootFilesystem: false
+requiredDropCapabilities:
+- MKNOD
+- SYS_CHROOT
+runAsUser:
+  type: RunAsAny
+seLinuxContext:
+  type: MustRunAs
+supplementalGroups:
+  type: RunAsAny
+users:
+volumes:
+- configMap
+- downwardAPI
+- emptyDir
+- persistentVolumeClaim
+- secret

+ 771 - 0
roles/openshift_cfme/files/miq-template-ext-db.yaml

@@ -0,0 +1,771 @@
+apiVersion: v1
+kind: Template
+labels:
+  template: manageiq-ext-db
+metadata:
+  name: manageiq-ext-db
+  annotations:
+    description: ManageIQ appliance with persistent storage using an external DB host
+    tags: instant-app,manageiq,miq
+    iconClass: icon-rails
+objects:
+- apiVersion: v1
+  kind: ServiceAccount
+  metadata:
+    name: miq-orchestrator
+- apiVersion: v1
+  kind: ServiceAccount
+  metadata:
+    name: miq-anyuid
+- apiVersion: v1
+  kind: ServiceAccount
+  metadata:
+    name: miq-privileged
+- apiVersion: v1
+  kind: ServiceAccount
+  metadata:
+    name: miq-httpd
+- apiVersion: v1
+  kind: Secret
+  metadata:
+    name: "${NAME}-secrets"
+  stringData:
+    pg-password: "${DATABASE_PASSWORD}"
+    database-url: postgresql://${DATABASE_USER}:${DATABASE_PASSWORD}@${DATABASE_SERVICE_NAME}/${DATABASE_NAME}?encoding=utf8&pool=5&wait_timeout=5
+    v2-key: "${V2_KEY}"
+- apiVersion: v1
+  kind: Secret
+  metadata:
+    name: "${ANSIBLE_SERVICE_NAME}-secrets"
+  stringData:
+    rabbit-password: "${ANSIBLE_RABBITMQ_PASSWORD}"
+    secret-key: "${ANSIBLE_SECRET_KEY}"
+    admin-password: "${ANSIBLE_ADMIN_PASSWORD}"
+- apiVersion: v1
+  kind: Service
+  metadata:
+    annotations:
+      description: Exposes and load balances ManageIQ pods
+      service.alpha.openshift.io/dependencies: '[{"name":"${DATABASE_SERVICE_NAME}","namespace":"","kind":"Service"},{"name":"${MEMCACHED_SERVICE_NAME}","namespace":"","kind":"Service"}]'
+    name: "${NAME}"
+  spec:
+    clusterIP: None
+    ports:
+    - name: http
+      port: 80
+      protocol: TCP
+      targetPort: 80
+    selector:
+      name: "${NAME}"
+- apiVersion: v1
+  kind: Route
+  metadata:
+    name: "${HTTPD_SERVICE_NAME}"
+  spec:
+    host: "${APPLICATION_DOMAIN}"
+    port:
+      targetPort: http
+    tls:
+      termination: edge
+      insecureEdgeTerminationPolicy: Redirect
+    to:
+      kind: Service
+      name: "${HTTPD_SERVICE_NAME}"
+- apiVersion: apps/v1beta1
+  kind: StatefulSet
+  metadata:
+    name: "${NAME}"
+    annotations:
+      description: Defines how to deploy the ManageIQ appliance
+  spec:
+    serviceName: "${NAME}"
+    replicas: "${APPLICATION_REPLICA_COUNT}"
+    template:
+      metadata:
+        labels:
+          name: "${NAME}"
+        name: "${NAME}"
+      spec:
+        containers:
+        - name: manageiq
+          image: "${APPLICATION_IMG_NAME}:${FRONTEND_APPLICATION_IMG_TAG}"
+          livenessProbe:
+            tcpSocket:
+              port: 80
+            initialDelaySeconds: 480
+            timeoutSeconds: 3
+          readinessProbe:
+            httpGet:
+              path: "/"
+              port: 80
+              scheme: HTTP
+            initialDelaySeconds: 200
+            timeoutSeconds: 3
+          ports:
+          - containerPort: 80
+            protocol: TCP
+          volumeMounts:
+          - name: "${NAME}-server"
+            mountPath: "/persistent"
+          env:
+          - name: MY_POD_NAMESPACE
+            valueFrom:
+              fieldRef:
+                fieldPath: metadata.namespace
+          - name: APPLICATION_INIT_DELAY
+            value: "${APPLICATION_INIT_DELAY}"
+          - name: DATABASE_SERVICE_NAME
+            value: "${DATABASE_SERVICE_NAME}"
+          - name: DATABASE_REGION
+            value: "${DATABASE_REGION}"
+          - name: DATABASE_URL
+            valueFrom:
+              secretKeyRef:
+                name: "${NAME}-secrets"
+                key: database-url
+          - name: MEMCACHED_SERVER
+            value: "${MEMCACHED_SERVICE_NAME}:11211"
+          - name: MEMCACHED_SERVICE_NAME
+            value: "${MEMCACHED_SERVICE_NAME}"
+          - name: V2_KEY
+            valueFrom:
+              secretKeyRef:
+                name: "${NAME}-secrets"
+                key: v2-key
+          - name: ANSIBLE_SERVICE_NAME
+            value: "${ANSIBLE_SERVICE_NAME}"
+          - name: ANSIBLE_ADMIN_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: "${ANSIBLE_SERVICE_NAME}-secrets"
+                key: admin-password
+          resources:
+            requests:
+              memory: "${APPLICATION_MEM_REQ}"
+              cpu: "${APPLICATION_CPU_REQ}"
+            limits:
+              memory: "${APPLICATION_MEM_LIMIT}"
+          lifecycle:
+            preStop:
+              exec:
+                command:
+                - "/opt/manageiq/container-scripts/sync-pv-data"
+        serviceAccount: miq-orchestrator
+        serviceAccountName: miq-orchestrator
+        terminationGracePeriodSeconds: 90
+    volumeClaimTemplates:
+    - metadata:
+        name: "${NAME}-server"
+        annotations:
+      spec:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: "${APPLICATION_VOLUME_CAPACITY}"
+- apiVersion: v1
+  kind: Service
+  metadata:
+    annotations:
+      description: Headless service for ManageIQ backend pods
+    name: "${NAME}-backend"
+  spec:
+    clusterIP: None
+    selector:
+      name: "${NAME}-backend"
+- apiVersion: apps/v1beta1
+  kind: StatefulSet
+  metadata:
+    name: "${NAME}-backend"
+    annotations:
+      description: Defines how to deploy the ManageIQ appliance
+  spec:
+    serviceName: "${NAME}-backend"
+    replicas: 0
+    template:
+      metadata:
+        labels:
+          name: "${NAME}-backend"
+        name: "${NAME}-backend"
+      spec:
+        containers:
+        - name: manageiq
+          image: "${APPLICATION_IMG_NAME}:${BACKEND_APPLICATION_IMG_TAG}"
+          livenessProbe:
+            exec:
+              command:
+              - pidof
+              - MIQ Server
+            initialDelaySeconds: 480
+            timeoutSeconds: 3
+          volumeMounts:
+          - name: "${NAME}-server"
+            mountPath: "/persistent"
+          env:
+          - name: APPLICATION_INIT_DELAY
+            value: "${APPLICATION_INIT_DELAY}"
+          - name: DATABASE_URL
+            valueFrom:
+              secretKeyRef:
+                name: "${NAME}-secrets"
+                key: database-url
+          - name: MIQ_SERVER_DEFAULT_ROLES
+            value: database_operations,event,reporting,scheduler,smartstate,ems_operations,ems_inventory,automate
+          - name: FRONTEND_SERVICE_NAME
+            value: "${NAME}"
+          - name: MEMCACHED_SERVER
+            value: "${MEMCACHED_SERVICE_NAME}:11211"
+          - name: V2_KEY
+            valueFrom:
+              secretKeyRef:
+                name: "${NAME}-secrets"
+                key: v2-key
+          - name: ANSIBLE_SERVICE_NAME
+            value: "${ANSIBLE_SERVICE_NAME}"
+          - name: ANSIBLE_ADMIN_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: "${ANSIBLE_SERVICE_NAME}-secrets"
+                key: admin-password
+          resources:
+            requests:
+              memory: "${APPLICATION_MEM_REQ}"
+              cpu: "${APPLICATION_CPU_REQ}"
+            limits:
+              memory: "${APPLICATION_MEM_LIMIT}"
+          lifecycle:
+            preStop:
+              exec:
+                command:
+                - "/opt/manageiq/container-scripts/sync-pv-data"
+        serviceAccount: miq-orchestrator
+        serviceAccountName: miq-orchestrator
+        terminationGracePeriodSeconds: 90
+    volumeClaimTemplates:
+    - metadata:
+        name: "${NAME}-server"
+        annotations:
+      spec:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: "${APPLICATION_VOLUME_CAPACITY}"
+- apiVersion: v1
+  kind: Service
+  metadata:
+    name: "${MEMCACHED_SERVICE_NAME}"
+    annotations:
+      description: Exposes the memcached server
+  spec:
+    ports:
+    - name: memcached
+      port: 11211
+      targetPort: 11211
+    selector:
+      name: "${MEMCACHED_SERVICE_NAME}"
+- apiVersion: v1
+  kind: DeploymentConfig
+  metadata:
+    name: "${MEMCACHED_SERVICE_NAME}"
+    annotations:
+      description: Defines how to deploy memcached
+  spec:
+    strategy:
+      type: Recreate
+    triggers:
+    - type: ConfigChange
+    replicas: 1
+    selector:
+      name: "${MEMCACHED_SERVICE_NAME}"
+    template:
+      metadata:
+        name: "${MEMCACHED_SERVICE_NAME}"
+        labels:
+          name: "${MEMCACHED_SERVICE_NAME}"
+      spec:
+        volumes: []
+        containers:
+        - name: memcached
+          image: "${MEMCACHED_IMG_NAME}:${MEMCACHED_IMG_TAG}"
+          ports:
+          - containerPort: 11211
+          readinessProbe:
+            timeoutSeconds: 1
+            initialDelaySeconds: 5
+            tcpSocket:
+              port: 11211
+          livenessProbe:
+            timeoutSeconds: 1
+            initialDelaySeconds: 30
+            tcpSocket:
+              port: 11211
+          volumeMounts: []
+          env:
+          - name: MEMCACHED_MAX_MEMORY
+            value: "${MEMCACHED_MAX_MEMORY}"
+          - name: MEMCACHED_MAX_CONNECTIONS
+            value: "${MEMCACHED_MAX_CONNECTIONS}"
+          - name: MEMCACHED_SLAB_PAGE_SIZE
+            value: "${MEMCACHED_SLAB_PAGE_SIZE}"
+          resources:
+            requests:
+              memory: "${MEMCACHED_MEM_REQ}"
+              cpu: "${MEMCACHED_CPU_REQ}"
+            limits:
+              memory: "${MEMCACHED_MEM_LIMIT}"
+- apiVersion: v1
+  kind: Service
+  metadata:
+    name: "${DATABASE_SERVICE_NAME}"
+    annotations:
+      description: Remote database service
+  spec:
+    ports:
+    - name: postgresql
+      port: 5432
+      targetPort: "${{DATABASE_PORT}}"
+    selector: {}
+- apiVersion: v1
+  kind: Endpoints
+  metadata:
+    name: "${DATABASE_SERVICE_NAME}"
+  subsets:
+  - addresses:
+    - ip: "${DATABASE_IP}"
+    ports:
+    - port: "${{DATABASE_PORT}}"
+      name: postgresql
+- apiVersion: v1
+  kind: Service
+  metadata:
+    annotations:
+      description: Exposes and load balances Ansible pods
+      service.alpha.openshift.io/dependencies: '[{"name":"${DATABASE_SERVICE_NAME}","namespace":"","kind":"Service"}]'
+    name: "${ANSIBLE_SERVICE_NAME}"
+  spec:
+    ports:
+    - name: http
+      port: 80
+      protocol: TCP
+      targetPort: 80
+    - name: https
+      port: 443
+      protocol: TCP
+      targetPort: 443
+    selector:
+      name: "${ANSIBLE_SERVICE_NAME}"
+- apiVersion: v1
+  kind: DeploymentConfig
+  metadata:
+    name: "${ANSIBLE_SERVICE_NAME}"
+    annotations:
+      description: Defines how to deploy the Ansible appliance
+  spec:
+    strategy:
+      type: Recreate
+    serviceName: "${ANSIBLE_SERVICE_NAME}"
+    replicas: 0
+    template:
+      metadata:
+        labels:
+          name: "${ANSIBLE_SERVICE_NAME}"
+        name: "${ANSIBLE_SERVICE_NAME}"
+      spec:
+        containers:
+        - name: ansible
+          image: "${ANSIBLE_IMG_NAME}:${ANSIBLE_IMG_TAG}"
+          livenessProbe:
+            tcpSocket:
+              port: 443
+            initialDelaySeconds: 480
+            timeoutSeconds: 3
+          readinessProbe:
+            httpGet:
+              path: "/"
+              port: 443
+              scheme: HTTPS
+            initialDelaySeconds: 200
+            timeoutSeconds: 3
+          ports:
+          - containerPort: 80
+            protocol: TCP
+          - containerPort: 443
+            protocol: TCP
+          securityContext:
+            privileged: true
+          env:
+          - name: ADMIN_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: "${ANSIBLE_SERVICE_NAME}-secrets"
+                key: admin-password
+          - name: RABBITMQ_USER_NAME
+            value: "${ANSIBLE_RABBITMQ_USER_NAME}"
+          - name: RABBITMQ_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: "${ANSIBLE_SERVICE_NAME}-secrets"
+                key: rabbit-password
+          - name: ANSIBLE_SECRET_KEY
+            valueFrom:
+              secretKeyRef:
+                name: "${ANSIBLE_SERVICE_NAME}-secrets"
+                key: secret-key
+          - name: DATABASE_SERVICE_NAME
+            value: "${DATABASE_SERVICE_NAME}"
+          - name: POSTGRESQL_USER
+            value: "${DATABASE_USER}"
+          - name: POSTGRESQL_PASSWORD
+            valueFrom:
+              secretKeyRef:
+                name: "${NAME}-secrets"
+                key: pg-password
+          - name: POSTGRESQL_DATABASE
+            value: "${ANSIBLE_DATABASE_NAME}"
+          resources:
+            requests:
+              memory: "${ANSIBLE_MEM_REQ}"
+              cpu: "${ANSIBLE_CPU_REQ}"
+            limits:
+              memory: "${ANSIBLE_MEM_LIMIT}"
+        serviceAccount: miq-privileged
+        serviceAccountName: miq-privileged
+- apiVersion: v1
+  kind: ConfigMap
+  metadata:
+    name: "${HTTPD_SERVICE_NAME}-configs"
+  data:
+    application.conf: |
+      # Timeout: The number of seconds before receives and sends time out.
+      Timeout 120
+
+      RewriteEngine On
+      Options SymLinksIfOwnerMatch
+
+      <VirtualHost *:80>
+        KeepAlive on
+        ProxyPreserveHost on
+        ProxyPass        /ws/ ws://${NAME}/ws/
+        ProxyPassReverse /ws/ ws://${NAME}/ws/
+        ProxyPass        / http://${NAME}/
+        ProxyPassReverse / http://${NAME}/
+      </VirtualHost>
+- apiVersion: v1
+  kind: ConfigMap
+  metadata:
+    name: "${HTTPD_SERVICE_NAME}-auth-configs"
+  data:
+    auth-type: internal
+    auth-configuration.conf: |
+      # External Authentication Configuration File
+      #
+      # For details on usage please see https://github.com/ManageIQ/manageiq-pods/blob/master/README.md#configuring-external-authentication
+- apiVersion: v1
+  kind: Service
+  metadata:
+    name: "${HTTPD_SERVICE_NAME}"
+    annotations:
+      description: Exposes the httpd server
+      service.alpha.openshift.io/dependencies: '[{"name":"${NAME}","namespace":"","kind":"Service"}]'
+  spec:
+    ports:
+    - name: http
+      port: 80
+      targetPort: 80
+    selector:
+      name: httpd
+- apiVersion: v1
+  kind: DeploymentConfig
+  metadata:
+    name: "${HTTPD_SERVICE_NAME}"
+    annotations:
+      description: Defines how to deploy httpd
+  spec:
+    strategy:
+      type: Recreate
+      recreateParams:
+        timeoutSeconds: 1200
+    triggers:
+    - type: ConfigChange
+    replicas: 1
+    selector:
+      name: "${HTTPD_SERVICE_NAME}"
+    template:
+      metadata:
+        name: "${HTTPD_SERVICE_NAME}"
+        labels:
+          name: "${HTTPD_SERVICE_NAME}"
+      spec:
+        volumes:
+        - name: httpd-config
+          configMap:
+            name: "${HTTPD_SERVICE_NAME}-configs"
+        - name: httpd-auth-config
+          configMap:
+            name: "${HTTPD_SERVICE_NAME}-auth-configs"
+        containers:
+        - name: httpd
+          image: "${HTTPD_IMG_NAME}:${HTTPD_IMG_TAG}"
+          ports:
+          - containerPort: 80
+          livenessProbe:
+            exec:
+              command:
+              - pidof
+              - httpd
+            initialDelaySeconds: 15
+            timeoutSeconds: 3
+          readinessProbe:
+            tcpSocket:
+              port: 80
+            initialDelaySeconds: 10
+            timeoutSeconds: 3
+          volumeMounts:
+          - name: httpd-config
+            mountPath: "${HTTPD_CONFIG_DIR}"
+          - name: httpd-auth-config
+            mountPath: "${HTTPD_AUTH_CONFIG_DIR}"
+          resources:
+            requests:
+              memory: "${HTTPD_MEM_REQ}"
+              cpu: "${HTTPD_CPU_REQ}"
+            limits:
+              memory: "${HTTPD_MEM_LIMIT}"
+          env:
+          - name: HTTPD_AUTH_TYPE
+            valueFrom:
+              configMapKeyRef:
+                name: "${HTTPD_SERVICE_NAME}-auth-configs"
+                key: auth-type
+          lifecycle:
+            postStart:
+              exec:
+                command:
+                - "/usr/bin/save-container-environment"
+        serviceAccount: miq-anyuid
+        serviceAccountName: miq-anyuid
+parameters:
+- name: NAME
+  displayName: Name
+  required: true
+  description: The name assigned to all of the frontend objects defined in this template.
+  value: manageiq
+- name: V2_KEY
+  displayName: ManageIQ Encryption Key
+  required: true
+  description: Encryption Key for ManageIQ Passwords
+  from: "[a-zA-Z0-9]{43}"
+  generate: expression
+- name: DATABASE_SERVICE_NAME
+  displayName: PostgreSQL Service Name
+  required: true
+  description: The name of the OpenShift Service exposed for the PostgreSQL container.
+  value: postgresql
+- name: DATABASE_USER
+  displayName: PostgreSQL User
+  required: true
+  description: PostgreSQL user that will access the database.
+  value: root
+- name: DATABASE_PASSWORD
+  displayName: PostgreSQL Password
+  required: true
+  description: Password for the PostgreSQL user.
+  from: "[a-zA-Z0-9]{8}"
+  generate: expression
+- name: DATABASE_IP
+  displayName: PostgreSQL Server IP
+  required: true
+  description: PostgreSQL external server IP used to configure service.
+  value: ''
+- name: DATABASE_PORT
+  displayName: PostgreSQL Server Port
+  required: true
+  description: PostgreSQL external server port used to configure service.
+  value: '5432'
+- name: DATABASE_NAME
+  required: true
+  displayName: PostgreSQL Database Name
+  description: Name of the PostgreSQL database accessed.
+  value: vmdb_production
+- name: DATABASE_REGION
+  required: true
+  displayName: Application Database Region
+  description: Database region that will be used for application.
+  value: '0'
+- name: ANSIBLE_DATABASE_NAME
+  displayName: Ansible PostgreSQL database name
+  required: true
+  description: The database to be used by the Ansible container
+  value: awx
+- name: MEMCACHED_SERVICE_NAME
+  required: true
+  displayName: Memcached Service Name
+  description: The name of the OpenShift Service exposed for the Memcached container.
+  value: memcached
+- name: MEMCACHED_MAX_MEMORY
+  displayName: Memcached Max Memory
+  description: Memcached maximum memory for memcached object storage in MB.
+  value: '64'
+- name: MEMCACHED_MAX_CONNECTIONS
+  displayName: Memcached Max Connections
+  description: Memcached maximum number of connections allowed.
+  value: '1024'
+- name: MEMCACHED_SLAB_PAGE_SIZE
+  displayName: Memcached Slab Page Size
+  description: Memcached size of each slab page.
+  value: 1m
+- name: ANSIBLE_SERVICE_NAME
+  displayName: Ansible Service Name
+  description: The name of the OpenShift Service exposed for the Ansible container.
+  value: ansible
+- name: ANSIBLE_ADMIN_PASSWORD
+  displayName: Ansible admin User password
+  required: true
+  description: The password for the Ansible container admin user
+  from: "[a-zA-Z0-9]{32}"
+  generate: expression
+- name: ANSIBLE_SECRET_KEY
+  displayName: Ansible Secret Key
+  required: true
+  description: Encryption key for the Ansible container
+  from: "[a-f0-9]{32}"
+  generate: expression
+- name: ANSIBLE_RABBITMQ_USER_NAME
+  displayName: RabbitMQ Username
+  required: true
+  description: Username for the Ansible RabbitMQ Server
+  value: ansible
+- name: ANSIBLE_RABBITMQ_PASSWORD
+  displayName: RabbitMQ Server Password
+  required: true
+  description: Password for the Ansible RabbitMQ Server
+  from: "[a-zA-Z0-9]{32}"
+  generate: expression
+- name: APPLICATION_CPU_REQ
+  displayName: Application Min CPU Requested
+  required: true
+  description: Minimum amount of CPU time the Application container will need (expressed in millicores).
+  value: 1000m
+- name: MEMCACHED_CPU_REQ
+  displayName: Memcached Min CPU Requested
+  required: true
+  description: Minimum amount of CPU time the Memcached container will need (expressed in millicores).
+  value: 200m
+- name: ANSIBLE_CPU_REQ
+  displayName: Ansible Min CPU Requested
+  required: true
+  description: Minimum amount of CPU time the Ansible container will need (expressed in millicores).
+  value: 1000m
+- name: APPLICATION_MEM_REQ
+  displayName: Application Min RAM Requested
+  required: true
+  description: Minimum amount of memory the Application container will need.
+  value: 6144Mi
+- name: MEMCACHED_MEM_REQ
+  displayName: Memcached Min RAM Requested
+  required: true
+  description: Minimum amount of memory the Memcached container will need.
+  value: 64Mi
+- name: ANSIBLE_MEM_REQ
+  displayName: Ansible Min RAM Requested
+  required: true
+  description: Minimum amount of memory the Ansible container will need.
+  value: 2048Mi
+- name: APPLICATION_MEM_LIMIT
+  displayName: Application Max RAM Limit
+  required: true
+  description: Maximum amount of memory the Application container can consume.
+  value: 16384Mi
+- name: MEMCACHED_MEM_LIMIT
+  displayName: Memcached Max RAM Limit
+  required: true
+  description: Maximum amount of memory the Memcached container can consume.
+  value: 256Mi
+- name: ANSIBLE_MEM_LIMIT
+  displayName: Ansible Max RAM Limit
+  required: true
+  description: Maximum amount of memory the Ansible container can consume.
+  value: 8096Mi
+- name: MEMCACHED_IMG_NAME
+  displayName: Memcached Image Name
+  description: This is the Memcached image name requested to deploy.
+  value: docker.io/manageiq/memcached
+- name: MEMCACHED_IMG_TAG
+  displayName: Memcached Image Tag
+  description: This is the Memcached image tag/version requested to deploy.
+  value: latest
+- name: APPLICATION_IMG_NAME
+  displayName: Application Image Name
+  description: This is the Application image name requested to deploy.
+  value: docker.io/manageiq/manageiq-pods
+- name: FRONTEND_APPLICATION_IMG_TAG
+  displayName: Front end Application Image Tag
+  description: This is the ManageIQ Frontend Application image tag/version requested to deploy.
+  value: frontend-latest
+- name: BACKEND_APPLICATION_IMG_TAG
+  displayName: Back end Application Image Tag
+  description: This is the ManageIQ Backend Application image tag/version requested to deploy.
+  value: backend-latest
+- name: ANSIBLE_IMG_NAME
+  displayName: Ansible Image Name
+  description: This is the Ansible image name requested to deploy.
+  value: docker.io/manageiq/embedded-ansible
+- name: ANSIBLE_IMG_TAG
+  displayName: Ansible Image Tag
+  description: This is the Ansible image tag/version requested to deploy.
+  value: latest
+- name: APPLICATION_DOMAIN
+  displayName: Application Hostname
+  description: The exposed hostname that will route to the application service. If left blank, a value will be defaulted.
+  value: ''
+- name: APPLICATION_REPLICA_COUNT
+  displayName: Application Replica Count
+  description: This is the number of Application replicas requested to deploy.
+  value: '1'
+- name: APPLICATION_INIT_DELAY
+  displayName: Application Init Delay
+  required: true
+  description: Delay in seconds before we attempt to initialize the application.
+  value: '15'
+- name: APPLICATION_VOLUME_CAPACITY
+  displayName: Application Volume Capacity
+  required: true
+  description: Volume space available for application data.
+  value: 5Gi
+- name: HTTPD_SERVICE_NAME
+  required: true
+  displayName: Apache httpd Service Name
+  description: The name of the OpenShift Service exposed for the httpd container.
+  value: httpd
+- name: HTTPD_IMG_NAME
+  displayName: Apache httpd Image Name
+  description: This is the httpd image name requested to deploy.
+  value: docker.io/manageiq/httpd
+- name: HTTPD_IMG_TAG
+  displayName: Apache httpd Image Tag
+  description: This is the httpd image tag/version requested to deploy.
+  value: latest
+- name: HTTPD_CONFIG_DIR
+  displayName: Apache httpd Configuration Directory
+  description: Directory used to store the Apache configuration files.
+  value: "/etc/httpd/conf.d"
+- name: HTTPD_AUTH_CONFIG_DIR
+  displayName: External Authentication Configuration Directory
+  description: Directory used to store the external authentication configuration files.
+  value: "/etc/httpd/auth-conf.d"
+- name: HTTPD_CPU_REQ
+  displayName: Apache httpd Min CPU Requested
+  required: true
+  description: Minimum amount of CPU time the httpd container will need (expressed in millicores).
+  value: 500m
+- name: HTTPD_MEM_REQ
+  displayName: Apache httpd Min RAM Requested
+  required: true
+  description: Minimum amount of memory the httpd container will need.
+  value: 512Mi
+- name: HTTPD_MEM_LIMIT
+  displayName: Apache httpd Max RAM Limit
+  required: true
+  description: Maximum amount of memory the httpd container can consume.
+  value: 8192Mi
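
A template parameter like the ones above is overridden at processing
time. A minimal sketch using this repository's oc_process module,
assuming the ext-db template name 'manageiq-ext-db' from
tasks/template.yml; the override values are purely hypothetical:

- name: Process the CFME template with overridden parameters (illustrative)
  oc_process:
    namespace: "{{ openshift_cfme_project }}"
    template_name: manageiq-ext-db
    create: True
    params:
      # Hypothetical values; any parameter declared above may be set here
      APPLICATION_MEM_REQ: 8192Mi
      HTTPD_CPU_REQ: 250m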

File diff view limited because it is too large
+ 936 - 554
roles/openshift_cfme/files/miq-template.yaml


+ 3 - 0
roles/openshift_cfme/handlers/main.yml

@@ -35,3 +35,6 @@
   retries: 120
   delay: 1
   changed_when: false
+
+- name: OpenShift-CFME - Reload NFS Exports
+  command: exportfs -ar

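Any task that updates the export table can notify this new handler; a
minimal sketch, reusing the src/dest pair from the tasks/nfs.yml file
removed later in this diff:

- name: Ensure the NFS exports for CFME PVs exist (illustrative)
  copy:
    src: openshift_cfme.exports
    dest: /etc/exports.d/openshift_cfme.exports
  notify:
  - "OpenShift-CFME - Reload NFS Exports"
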
+ 2 - 1
roles/openshift_cfme/meta/main.yml

@@ -16,4 +16,5 @@ galaxy_info:
 dependencies:
 - role: lib_openshift
 - role: lib_utils
-- role: openshift_master_facts
+# - role: openshift_facts
+# - role: openshift_master_facts

+ 65 - 0
roles/openshift_cfme/tasks/accounts.yml

@@ -0,0 +1,65 @@
+---
+# This role task file is responsible for user/system account creation,
+# and ensuring correct access is provided as required.
+
+# TODO: This is currently not idempotent; a bug report will be filed
+# after this. Currently this task returns 'changed' whether it just
+# created a user, updated a user, or didn't modify a user at
+# all. It seems to fail some kind of 'does it need updating' test
+# condition and run the replace command regardless.
+- name: Check if the miq-httpd scc exists
+  oc_obj:
+    namespace: "{{ openshift_cfme_project }}"
+    state: list
+    kind: scc
+    name: miq-httpd
+  register: miq_httpd_scc_exists
+
+# TODO: Cleanup when conditions
+- name: Copy the miq-httpd SCC to the cluster
+  copy:
+    src: miq-scc-httpd.yaml
+    dest: "{{ template_dir }}"
+  when:
+    - miq_httpd_scc_exists.results.results | length == 1
+    - miq_httpd_scc_exists.results.results[0] == {}
+
+- name: Ensure the CFME miq-httpd SCC exists
+  oc_obj:
+    state: present
+    name: miq-httpd
+    namespace: "{{ openshift_cfme_project }}"
+    kind: scc
+    files:
+      - "{{ template_dir }}/miq-scc-httpd.yaml"
+    delete_after: True
+  run_once: True
+  when:
+    - miq_httpd_scc_exists.results.results | length == 1
+    - miq_httpd_scc_exists.results.results[0] == {}
+
+- name: Ensure the CFME system users exist
+  oc_serviceaccount:
+    namespace: "{{ openshift_cfme_project }}"
+    state: present
+    name: "{{ item.name }}"
+  with_items:
+    - "{{ openshift_system_account_sccs }}"
+
+- name: Ensure the CFME system accounts have all the required SCCs
+  oc_adm_policy_user:
+    namespace: "{{ openshift_cfme_project }}"
+    user: "system:serviceaccount:{{ openshift_cfme_project }}:{{ item.name }}"
+    resource_kind: scc
+    resource_name: "{{ item.resource_name }}"
+  with_items:
+    - "{{ openshift_system_account_sccs }}"
+
+- name: Ensure the CFME system accounts have the required roles
+  oc_adm_policy_user:
+    namespace: "{{ openshift_cfme_project }}"
+    user: "system:serviceaccount:{{ openshift_cfme_project }}:{{ item.name }}"
+    resource_kind: role
+    resource_name: "{{ item.resource_name }}"
+  with_items:
+    - "{{ openshift_cfme_system_account_roles }}"

+ 79 - 97
roles/openshift_cfme/tasks/main.yml

@@ -1,117 +1,99 @@
 ---
-######################################################################
+######################################################################
 # Users, projects, and privileges
 
-- name: Ensure the CFME user exists
-  oc_user:
-    state: present
-    username: "{{ openshift_cfme_user }}"
+- name: Run pre-install CFME validation checks
+  include: validate.yml
 
-- name: Ensure the CFME namespace exists with CFME user as admin
+- name: "Ensure the CFME '{{ openshift_cfme_project }}' namespace exists"
   oc_project:
     state: present
     name: "{{ openshift_cfme_project }}"
     display_name: "{{ openshift_cfme_project_description }}"
-    admin: "{{ openshift_cfme_user }}"
-
-- name: Ensure the CFME namespace service account is privileged
-  oc_adm_policy_user:
-    namespace: "{{ openshift_cfme_project }}"
-    user: "{{ openshift_cfme_service_account }}"
-    resource_kind: scc
-    resource_name: privileged
-    state: present
 
-######################################################################
-# NFS
-# In the case that we are not running on a cloud provider, volumes must be statically provisioned
-
-- include: nfs.yml
-  when: not (openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'))
+- name: Create and Authorize CFME Accounts
+  include: accounts.yml
 
 ######################################################################
-# CFME App Template
-#
-# Note, this is different from the create_pvs.yml tasks in that the
-# application template does not require any jinja2 evaluation.
-#
-# TODO: Handle the case where the server template is updated in
-# openshift-ansible and the change needs to be landed on the managed
-# cluster.
-
-- name: Check if the CFME Server template has been created already
-  oc_obj:
-    namespace: "{{ openshift_cfme_project }}"
-    state: list
-    kind: template
-    name: manageiq
-  register: miq_server_check
-
-- name: Copy over CFME Server template
-  copy:
-    src: miq-template.yaml
-    dest: "{{ template_dir }}/miq-template.yaml"
-
-- name: Ensure the server template was read from disk
+# STORAGE - Initialize basic storage classes
+#---------------------------------------------------------------------
+# * nfs - set up NFS shares on the first master for a proof of concept
+- name: Create required NFS exports for CFME app storage
+  include: storage/nfs.yml
+  when: openshift_cfme_storage_class == 'nfs'
+
+#---------------------------------------------------------------------
+# * external - NFS again, but pointing to a pre-configured NFS server
+- name: Note Storage Type - External NFS
   debug:
-    var=r_openshift_cfme_miq_template_content
+    msg: Setting up external NFS storage, openshift_cfme_storage_class is 'external'
+  when: openshift_cfme_storage_class == 'external'
 
-- name: Ensure CFME Server Template exists
-  oc_obj:
-    namespace: "{{ openshift_cfme_project }}"
-    kind: template
-    name: "manageiq"
-    state: present
-    content: "{{ r_openshift_cfme_miq_template_content }}"
+#---------------------------------------------------------------------
+# * cloudprovider - use an existing cloudprovider based storage
+- name: Note Storage Type - Cloud Provider
+  debug:
+    msg: Validating cloud provider storage type, openshift_cfme_storage_class is 'cloudprovider'
+  when: openshift_cfme_storage_class == 'cloudprovider'
+
+#---------------------------------------------------------------------
+# * preconfigured - don't do anything, assume it's all there ready to go
+- name: Note Storage Type - Preconfigured
+  debug:
+    msg: Skipping storage configuration, openshift_cfme_storage_class is 'preconfigured'
+  when: openshift_cfme_storage_class == 'preconfigured'
 
 ######################################################################
-# Let's do this
-
-- name: Ensure the CFME Server is created
-  oc_process:
-    namespace: "{{ openshift_cfme_project }}"
-    template_name: manageiq
-    create: True
-    params:
-      APPLICATION_IMG_NAME: "{{ openshift_cfme_application_img_name }}"
-      POSTGRESQL_IMG_NAME: "{{ openshift_cfme_postgresql_img_name }}"
-      MEMCACHED_IMG_NAME: "{{ openshift_cfme_memcached_img_name }}"
-      APPLICATION_IMG_TAG: "{{ openshift_cfme_application_img_tag }}"
-      POSTGRESQL_IMG_TAG: "{{ openshift_cfme_postgresql_img_tag }}"
-      MEMCACHED_IMG_TAG: "{{ openshift_cfme_memcached_img_tag }}"
-  register: cfme_new_app_process
-  run_once: True
-  when:
-    # User said to install CFME in their inventory
-    - openshift_cfme_install_app | bool
-    # # The server app doesn't exist already
-    # - not miq_server_check.results.results.0
-
-- debug:
-    var: cfme_new_app_process
+# APPLICATION TEMPLATE
+- name: Install the correct CFME app template
+  include: template.yml
 
 ######################################################################
-# Various cleanup steps
-
-# TODO: Not sure what to do about this right now. Might be able to
-# just delete it?  This currently warns about "Unable to find
-# '<TEMP_DIR>' in expected paths."
-- name: Ensure the temporary PV/App templates are erased
-  file:
-    path: "{{ item }}"
-    state: absent
-  with_fileglob:
-    - "{{ template_dir }}/*.yaml"
-
-- name: Ensure the temporary PV/app template directory is erased
-  file:
-    path: "{{ template_dir }}"
-    state: absent
+# APP & DB Storage
+
 
 ######################################################################
 
-- name: Status update
-  debug:
-    msg: >
-      CFME has been deployed. Note that there will be a delay before
-      it is fully initialized.
+# ######################################################################
+# # Let's do this
+
+# - name: Ensure the CFME Server is created
+#   oc_process:
+#     namespace: "{{ openshift_cfme_project }}"
+#     template_name: manageiq
+#     create: True
+#     params:
+#       APPLICATION_IMG_NAME: "{{ openshift_cfme_application_img_name }}"
+#       POSTGRESQL_IMG_NAME: "{{ openshift_cfme_postgresql_img_name }}"
+#       MEMCACHED_IMG_NAME: "{{ openshift_cfme_memcached_img_name }}"
+#       APPLICATION_IMG_TAG: "{{ openshift_cfme_application_img_tag }}"
+#       POSTGRESQL_IMG_TAG: "{{ openshift_cfme_postgresql_img_tag }}"
+#       MEMCACHED_IMG_TAG: "{{ openshift_cfme_memcached_img_tag }}"
+#   register: cfme_new_app_process
+#   run_once: True
+#   when:
+#     # User said to install CFME in their inventory
+#     - openshift_cfme_install_app | bool
+#     # # The server app doesn't exist already
+#     # - not miq_server_check.results.results.0
+
+# - debug:
+#     var: cfme_new_app_process
+
+# ######################################################################
+# # Various cleanup steps
+
+# # TODO: Not sure what to do about this right now. Might be able to
+# # just delete it?  This currently warns about "Unable to find
+# # '<TEMP_DIR>' in expected paths."
+# - name: Ensure the temporary PV/App templates are erased
+#   file:
+#     path: "{{ item }}"
+#     state: absent
+#   with_fileglob:
+#     - "{{ template_dir }}/*.yaml"
+
+# - name: Ensure the temporary PV/app template directory is erased
+#   file:
+#     path: "{{ template_dir }}"
+#     state: absent

+ 0 - 51
roles/openshift_cfme/tasks/nfs.yml

@@ -1,51 +0,0 @@
----
-# Tasks to statically provision NFS volumes
-# Include if not using dynamic volume provisioning
-
-- name: Set openshift_cfme_nfs_server fact
-  when: openshift_cfme_nfs_server is not defined
-  set_fact:
-    # Hostname/IP of the NFS server. Currently defaults to first master
-    openshift_cfme_nfs_server: "{{ oo_nfs_to_config.0 }}"
-
-- name: Ensure the /exports/ directory exists
-  file:
-    path: /exports/
-    state: directory
-    mode: 0755
-    owner: root
-    group: root
-
-- name: Ensure the miq-pv0X export directories exist
-  file:
-    path: "/exports/{{ item }}"
-    state: directory
-    mode: 0775
-    owner: root
-    group: root
-  with_items: "{{ openshift_cfme_pv_exports }}"
-
-- name: Ensure the NFS exports for CFME PVs exist
-  copy:
-    src: openshift_cfme.exports
-    dest: /etc/exports.d/openshift_cfme.exports
-  register: nfs_exports_updated
-
-- name: Ensure the NFS export table is refreshed if exports were added
-  command: exportfs -ar
-  when:
-    - nfs_exports_updated.changed
-
-
-######################################################################
-# Create the required CFME PVs. Check out these online docs if you
-# need a refresher on includes looping with items:
-# * http://docs.ansible.com/ansible/playbooks_loops.html#loops-and-includes-in-2-0
-# * http://stackoverflow.com/a/35128533
-#
-# TODO: Handle the case where a PV template is updated in
-# openshift-ansible and the change needs to be landed on the managed
-# cluster.
-
-- include: create_pvs.yml
-  with_items: "{{ openshift_cfme_pv_data }}"

roles/openshift_cfme/tasks/create_pvs.yml → roles/openshift_cfme/tasks/storage/create_pvs.yml


+ 103 - 0
roles/openshift_cfme/tasks/storage/nfs.yml

@@ -0,0 +1,103 @@
+---
+# Tasks to statically provision NFS volumes
+# Include if not using dynamic volume provisioning
+
+- name: Note Storage Type - NFS
+  debug:
+    msg: Setting up NFS storage, openshift_cfme_storage_class is 'nfs'
+
+- name: TODO
+  debug:
+    msg: TODO - replace hard-coded hostname below with oo_nfs_to_config.0
+
+- name: Set openshift_cfme_nfs_server fact
+  when: openshift_cfme_nfs_server is not defined
+  set_fact:
+    # Hostname/IP of the NFS server. Currently defaults to first master
+    openshift_cfme_nfs_server: m01.example.com
+
+# TODO: I was going to try to apply the openshift_storage_nfs role to
+# handle this; however, that role is not written to be used by
+# itself. Attempting to use it to create CFME exports would just add
+# more hard-coded values to the role. That said, we're doing this here
+# manually for now until someone comes up with a better solution, or
+# the role is made to accept parameters in a more functional way.
+#
+# I can't really even include the openshift_storage_nfs role in here
+# to do basic setup stuff because it would just result in a lot of
+# unwanted exports getting set up for the users.
+
+- name: Ensure the /exports/ directory exists
+  file:
+    path: /exports/
+    state: directory
+    mode: 0755
+    owner: root
+    group: root
+
+- name: Ensure exports directory exists
+  file:
+    path: /etc/exports.d/
+    state: directory
+
+# # TODO - with_items should be passed a list of storage configs for the
+# # desired CFME setup. This might mean a local or remote nfs server, as
+# # well as fully qualified filesystem paths.
+# - name: Ensure export directories exist
+#   file:
+#     path: "{{ item.storage.nfs.directory }}/{{ item.storage.volume.name }}"
+#     state: directory
+#     mode: 0777
+#     owner: nfsnobody
+#     group: nfsnobody
+#   with_items:
+
+- name: Enable and start services
+  systemd:
+    name: nfs-server
+    state: started
+    enabled: yes
+  register: start_result
+
+- set_fact:
+    nfs_service_status_changed: "{{ start_result | changed }}"
+
+- name: restart nfs-server
+  systemd:
+    name: nfs-server
+    state: restarted
+  when: nfs_service_status_changed | default(false)
+  notify:
+    - "OpenShift-CFME - Reload NFS Exports"
+
+######################################################################
+# TODO: Move the export directory and PV creation into individual
+# tasks under the respective server/database task files.
+
+# # - name: Ensure the miq-pv0X export directories exist
+# #   file:
+# #     path: "/exports/{{ item }}"
+# #     state: directory
+# #     mode: 0775
+# #     owner: nfsnobody
+# #     group: nfsnobody
+# #   with_items: "{{ openshift_cfme_pv_exports }}"
+
+# # - name: Ensure the NFS exports for CFME PVs exist
+# #   copy:
+# #     src: openshift_cfme.exports
+# #     dest: /etc/exports.d/openshift_cfme.exports
+# #   register: nfs_exports_updated
+
+
+# # Create the required CFME PVs. Check out these online docs if you
+# # need a refresher on includes looping with items:
+# # * http://docs.ansible.com/ansible/playbooks_loops.html#loops-and-includes-in-2-0
+# # * http://stackoverflow.com/a/35128533
+
+# # TODO: Handle the case where a PV template is updated in
+# # openshift-ansible and the change needs to be landed on the managed
+# # cluster.
+
+# # - include: create_pvs.yml
+# #   with_items: "{{ openshift_cfme_pv_data }}"
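
When the commented-out PV creation above is restored, each export
will need a matching PersistentVolume. An illustrative NFS-backed PV,
patterned on the miq-pv-*.yaml.j2 templates removed later in this
diff (name and path hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: miq-app
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /exports/miq-app
    server: "{{ openshift_cfme_nfs_server }}"
  persistentVolumeReclaimPolicy: Retain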

+ 3 - 0
roles/openshift_cfme/tasks/storage/storage.yml

@@ -0,0 +1,3 @@
+---
+- include: nfs.yml
+  when: not (openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'))

+ 72 - 0
roles/openshift_cfme/tasks/template.yml

@@ -0,0 +1,72 @@
+---
+# Tasks for ensuring the correct CFME templates are landed on the remote system
+
+######################################################################
+# CFME App Template
+#
+# Note, this is different from the create_pvs.yml tasks in that the
+# application template does not require any jinja2 evaluation.
+#
+# TODO: Handle the case where the server template is updated in
+# openshift-ansible and the change needs to be landed on the managed
+# cluster.
+
+######################################################################
+# STANDARD PODIFIED DATABASE TEMPLATE
+- when: openshift_cfme_app_template == 'miq-template'
+  block:
+  - name: Check if the CFME Server template has been created already
+    oc_obj:
+      namespace: "{{ openshift_cfme_project }}"
+      state: list
+      kind: template
+      name: manageiq
+    register: miq_server_check
+
+  - name: Copy over CFME Server template
+    copy:
+      src: miq-template.yaml
+      dest: "{{ template_dir }}/"
+    when:
+    - miq_server_check.results.results == [{}]
+
+  - name: Ensure CFME Server Template is created
+    oc_obj:
+      namespace: "{{ openshift_cfme_project }}"
+      name: manageiq
+      state: present
+      kind: template
+      files:
+      - "{{ template_dir }}/miq-template.yaml"
+    when:
+    - miq_server_check.results.results == [{}]
+
+######################################################################
+# EXTERNAL DATABASE TEMPLATE
+- when: openshift_cfme_app_template == 'miq-template-ext-db'
+  block:
+  - name: Check if the CFME Ext-DB Server template has been created already
+    oc_obj:
+      namespace: "{{ openshift_cfme_project }}"
+      state: list
+      kind: template
+      name: manageiq-ext-db
+    register: miq_ext_db_server_check
+
+  - name: Copy over CFME Ext-DB Server template
+    copy:
+      src: miq-template-ext-db.yaml
+      dest: "{{ template_dir }}/"
+    when:
+    - miq_ext_db_server_check.results.results == [{}]
+
+  - name: Ensure CFME Ext-DB Server Template is created
+    oc_obj:
+      namespace: "{{ openshift_cfme_project }}"
+      name: manageiq-ext-db
+      state: present
+      kind: template
+      files:
+      - "{{ template_dir }}/miq-template-ext-db.yaml"
+    when:
+    - miq_ext_db_server_check.results.results == [{}]
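
Both branches guard on 'results.results == [{}]', the pattern used
above to detect that a listed object is absent. A quick illustrative
recheck after the run:

- name: Confirm the CFME template landed (illustrative)
  oc_obj:
    namespace: "{{ openshift_cfme_project }}"
    state: list
    kind: template
    name: manageiq
  register: miq_template_recheck

- name: Report template presence
  debug:
    msg: "Template present: {{ miq_template_recheck.results.results != [{}] }}"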

+ 0 - 12
roles/openshift_cfme/tasks/tune_masters.yml

@@ -1,12 +0,0 @@
----
-- name: Ensure bulk image import limit is tuned
-  yedit:
-    src: /etc/origin/master/master-config.yaml
-    key: 'imagePolicyConfig.maxImagesBulkImportedPerRepository'
-    value: "{{ openshift_cfme_maxImagesBulkImportedPerRepository | int() }}"
-    state: present
-    backup: True
-  notify:
-    - restart master
-
-- meta: flush_handlers

+ 34 - 0
roles/openshift_cfme/tasks/validate.yml

@@ -0,0 +1,34 @@
+---
+# Validate configuration parameters passed to the openshift_cfme role
+
+- name: Ensure openshift_cfme_app_template is valid
+  assert:
+    that:
+      - openshift_cfme_app_template in openshift_cfme_app_templates
+    msg: "openshift_cfme_app_template must be one of {{ openshift_cfme_app_templates | join(', ') }}"
+
+- name: Ensure openshift_cfme_storage_class is a valid type
+  assert:
+    that:
+      - openshift_cfme_storage_class in openshift_cfme_storage_classes
+    msg: "openshift_cfme_storage_class must be one of {{ openshift_cfme_storage_classes | join(', ') }}"
+
+- name: Ensure external NFS storage has a valid NFS server hostname defined
+  assert:
+    that:
+      - openshift_cfme_storage_external_nfs_hostname is not False
+    msg: The selected storage class 'external' requires a valid hostname for the openshift_cfme_storage_external_nfs_hostname parameter
+  when:
+    - openshift_cfme_storage_class == 'external'
+
+- name: Validate Cloud Provider storage class
+  assert:
+    that:
+      - openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce'
+    msg: |
+      openshift_cfme_storage_class is 'cloudprovider' but you have an
+      invalid kind defined. See 'openshift_cloudprovider_kind' in the
+      example inventories for the required parameters for your
+      selected cloud provider. Working providers: 'aws' and 'gce'.
+  when:
+    - openshift_cloudprovider_kind is defined
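
Taken together, a valid 'external' configuration would look roughly
like this in the inventory (hostname hypothetical):

openshift_cfme_storage_class: external
openshift_cfme_storage_external_nfs_hostname: nfs.example.com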

+ 0 - 13
roles/openshift_cfme/templates/miq-pv-region.yaml.j2

@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: miq-pv02
-spec:
-  capacity:
-    storage: 5Gi
-  accessModes:
-    - ReadWriteOnce
-  nfs: 
-    path: {{ openshift_cfme_nfs_directory }}/miq-pv02
-    server: {{ openshift_cfme_nfs_server }}
-  persistentVolumeReclaimPolicy: Retain

+ 0 - 13
roles/openshift_cfme/templates/miq-pv-server.yaml.j2

@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: miq-pv03
-spec:
-  capacity:
-    storage: 5Gi
-  accessModes:
-    - ReadWriteOnce
-  nfs: 
-    path: {{ openshift_cfme_nfs_directory }}/miq-pv03
-    server: {{ openshift_cfme_nfs_server }}
-  persistentVolumeReclaimPolicy: Retain

+ 2 - 0
roles/openshift_storage_nfs/templates/exports.j2

@@ -3,3 +3,5 @@
 {{ openshift.logging.storage.nfs.directory }}/{{ openshift.logging.storage.volume.name }} {{ openshift.logging.storage.nfs.options }}
 {{ openshift.loggingops.storage.nfs.directory }}/{{ openshift.loggingops.storage.volume.name }} {{ openshift.loggingops.storage.nfs.options }}
 {{ openshift.hosted.etcd.storage.nfs.directory }}/{{ openshift.hosted.etcd.storage.volume.name }} {{ openshift.hosted.etcd.storage.nfs.options }}
+{{ openshift.hosted.cfme_app.storage.nfs.directory }}/{{ openshift.hosted.cfme_app.storage.volume.name }} {{ openshift.hosted.cfme_app.storage.nfs.options }}
+{{ openshift.hosted.cfme_db.storage.nfs.directory }}/{{ openshift.hosted.cfme_db.storage.volume.name }} {{ openshift.hosted.cfme_db.storage.nfs.options }}