
Merge branch 'master' into prod

Troy Dawson, 10 years ago
parent revision c650920bc7
100 changed files with 1851 additions and 1842 deletions
  1. README.md (+8 -6)
  2. README_AWS.md (+1 -1)
  3. README_OSE.md (+1 -1)
  4. README_openstack.md (+80 -0)
  5. bin/cluster (+61 -6)
  6. bin/openshift-ansible-bin.spec (+8 -1)
  7. cloud.rb (+0 -29)
  8. docs/best_practices_guide.adoc (+218 -0)
  9. docs/core_concepts_guide.adoc (+43 -0)
  10. docs/style_guide.adoc (+138 -0)
  11. filter_plugins/oo_filters.py (+219 -177)
  12. git/.pylintrc (+4 -3)
  13. inventory/byo/hosts (+7 -4)
  14. inventory/libvirt/hosts/libvirt_generic.py (+53 -42)
  15. inventory/multi_ec2.py (+9 -4)
  16. inventory/openshift-ansible-inventory.spec (+12 -1)
  17. inventory/openstack/hosts/hosts (+1 -0)
  18. inventory/openstack/hosts/nova.ini (+45 -0)
  19. inventory/openstack/hosts/nova.py (+224 -0)
  20. lib/ansible_helper.rb (+0 -94)
  21. lib/aws_command.rb (+0 -148)
  22. lib/aws_helper.rb (+0 -85)
  23. lib/gce_command.rb (+0 -228)
  24. lib/gce_helper.rb (+0 -94)
  25. lib/launch_helper.rb (+0 -30)
  26. playbooks/aws/ansible-tower/launch.yml (+1 -1)
  27. playbooks/aws/openshift-cluster/config.yml (+1 -0)
  28. playbooks/aws/openshift-cluster/launch.yml (+8 -0)
  29. playbooks/aws/openshift-cluster/service.yml (+28 -0)
  30. playbooks/aws/openshift-cluster/tasks/launch_instances.yml (+2 -1)
  31. playbooks/aws/openshift-master/launch.yml (+5 -5)
  32. playbooks/aws/openshift-node/config.yml (+1 -0)
  33. playbooks/aws/openshift-node/launch.yml (+5 -5)
  34. playbooks/aws/os2-atomic-proxy/config.yml (+0 -20)
  35. playbooks/aws/os2-atomic-proxy/launch.yml (+0 -97)
  36. playbooks/aws/os2-atomic-proxy/user_data.txt (+0 -6)
  37. playbooks/aws/os2-atomic-proxy/vars.int.yml (+0 -3)
  38. playbooks/aws/os2-atomic-proxy/vars.prod.yml (+0 -3)
  39. playbooks/aws/os2-atomic-proxy/vars.stg.yml (+0 -10)
  40. playbooks/byo/openshift-node/config.yml (+3 -1)
  41. playbooks/common/openshift-cluster/create_services.yml (+8 -0)
  42. playbooks/common/openshift-master/config.yml (+2 -3)
  43. playbooks/common/openshift-master/service.yml (+18 -0)
  44. playbooks/common/openshift-node/config.yml (+43 -38)
  45. playbooks/common/openshift-node/service.yml (+18 -0)
  46. playbooks/gce/openshift-cluster/config.yml (+1 -0)
  47. playbooks/gce/openshift-cluster/launch.yml (+16 -0)
  48. playbooks/gce/openshift-cluster/list.yml (+1 -1)
  49. playbooks/gce/openshift-cluster/service.yml (+28 -0)
  50. playbooks/gce/openshift-cluster/wip.yml (+26 -0)
  51. playbooks/gce/openshift-node/config.yml (+1 -0)
  52. playbooks/libvirt/openshift-cluster/config.yml (+1 -0)
  53. playbooks/libvirt/openshift-cluster/service.yml (+32 -0)
  54. playbooks/libvirt/openshift-cluster/tasks/launch_instances.yml (+3 -9)
  55. playbooks/openstack/openshift-cluster/config.yml (+35 -0)
  56. playbooks/openstack/openshift-cluster/files/heat_stack.yml (+149 -0)
  57. playbooks/openstack/openshift-cluster/files/user-data (+7 -0)
  58. playbooks/openstack/openshift-cluster/filter_plugins (+0 -0)
  59. playbooks/openstack/openshift-cluster/launch.yml (+31 -0)
  60. playbooks/openstack/openshift-cluster/list.yml (+24 -0)
  61. playbooks/openstack/openshift-cluster/roles (+0 -0)
  62. playbooks/openstack/openshift-cluster/tasks/configure_openstack.yml (+27 -0)
  63. playbooks/openstack/openshift-cluster/tasks/launch_instances.yml (+48 -0)
  64. playbooks/openstack/openshift-cluster/terminate.yml (+43 -0)
  65. playbooks/openstack/openshift-cluster/update.yml (+18 -0)
  66. playbooks/openstack/openshift-cluster/vars.yml (+39 -0)
  67. rel-eng/packages/openshift-ansible-bin (+1 -1)
  68. rel-eng/packages/openshift-ansible-inventory (+1 -1)
  69. roles/atomic_base/README.md (+0 -56)
  70. roles/atomic_base/files/bash/bashrc (+0 -12)
  71. roles/atomic_base/files/ostree/repo_config (+0 -10)
  72. roles/atomic_base/files/system/90-nofile.conf (+0 -7)
  73. roles/atomic_base/meta/main.yml (+0 -19)
  74. roles/atomic_base/tasks/bash.yml (+0 -14)
  75. roles/atomic_base/tasks/cloud_user.yml (+0 -6)
  76. roles/atomic_base/tasks/main.yml (+0 -4)
  77. roles/atomic_base/tasks/ostree.yml (+0 -18)
  78. roles/atomic_base/tasks/system.yml (+0 -3)
  79. roles/atomic_base/vars/main.yml (+0 -2)
  80. roles/atomic_proxy/README.md (+0 -56)
  81. roles/atomic_proxy/files/proxy_containers_deploy_descriptor.json (+0 -29)
  82. roles/atomic_proxy/files/puppet/auth.conf (+0 -116)
  83. roles/atomic_proxy/files/setup-proxy-containers.sh (+0 -43)
  84. roles/atomic_proxy/handlers/main.yml (+0 -3)
  85. roles/atomic_proxy/meta/main.yml (+0 -21)
  86. roles/atomic_proxy/tasks/main.yml (+0 -3)
  87. roles/atomic_proxy/tasks/setup_containers.yml (+0 -57)
  88. roles/atomic_proxy/tasks/setup_puppet.yml (+0 -24)
  89. roles/atomic_proxy/templates/puppet/puppet.conf.j2 (+0 -40)
  90. roles/atomic_proxy/templates/sync/sync-proxy-configs.sh.j2 (+0 -16)
  91. roles/atomic_proxy/templates/systemd/ctr-proxy-1.service.j2 (+0 -32)
  92. roles/atomic_proxy/templates/systemd/ctr-proxy-monitoring-1.service.j2 (+0 -36)
  93. roles/atomic_proxy/templates/systemd/ctr-proxy-puppet-1.service.j2 (+0 -33)
  94. roles/atomic_proxy/vars/main.yml (+0 -2)
  95. roles/docker/files/enter-container.sh (+0 -13)
  96. roles/docker/handlers/main.yml (+4 -0)
  97. roles/docker/tasks/main.yml (+1 -8)
  98. roles/docker_storage/README.md (+39 -0)
  99. roles/docker_storage/defaults/main.yml (+0 -0)
  100. roles/docker_storage/handlers/main.yml (+0 -0)

+ 8 - 6
README.md

@@ -1,10 +1,8 @@
-openshift-ansible
-========================
+#openshift-ansible
 
 This repo contains OpenShift Ansible code.
 
-Setup
------
+##Setup
 - Install base dependencies:
   - Fedora:
   ```
@@ -30,10 +28,14 @@ Setup
   - [How to build the openshift-ansible rpms](BUILD.md)
 
 - Directory Structure:
-  - [cloud.rb](cloud.rb) - light wrapper around Ansible
   - [bin/cluster](bin/cluster) - python script to easily create OpenShift 3 clusters
+  - [docs](docs) - Documentation for the project
   - [filter_plugins/](filter_plugins) - custom filters used to manipulate data in Ansible
   - [inventory/](inventory) - houses Ansible dynamic inventory scripts
-  - [lib/](lib) - library components of cloud.rb
   - [playbooks/](playbooks) - houses host-type Ansible playbooks (launch, config, destroy, vars)
   - [roles/](roles) - shareable Ansible tasks
+
+##Contributing
+
+###Feature Roadmap
+Our Feature Roadmap is available on the OpenShift Origin Infrastructure [Trello board](https://trello.com/b/nbkIrqKa/openshift-origin-infrastructure). All ansible items will be tagged with [installv3].

+ 1 - 1
README_AWS.md

@@ -18,7 +18,7 @@ Create a credentials file
 ```
   source ~/.aws_creds
 ```
-Note: You must source this file in each shell that you want to run cloud.rb
+Note: You must source this file before running any Ansible commands.
 
 
 (Optional) Setup your $HOME/.ssh/config file

+ 1 - 1
README_OSE.md

@@ -80,7 +80,7 @@ ansible_ssh_user=root
 deployment_type=enterprise
 
 # Pre-release registry URL
-openshift_registry_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3_beta/ose-${component}:${version}
+oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3_beta/ose-${component}:${version}
 
 # Pre-release additional repo
 openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel',

+ 80 - 0
README_openstack.md

@@ -0,0 +1,80 @@
+OPENSTACK Setup instructions
+============================
+
+Requirements
+------------
+
+The OpenStack instance must have Neutron and Heat enabled.
+
+Install Dependencies
+--------------------
+
+1. The OpenStack python clients for Nova, Neutron and Heat are required:
+
+* `python-novaclient`
+* `python-neutronclient`
+* `python-heatclient`
+
+On RHEL / CentOS / Fedora:
+```
+  yum install -y ansible python-novaclient python-neutronclient python-heatclient
+```
+
+Configuration
+-------------
+
+The following options can be passed via the `-o` flag of the `create` command:
+
+* `image_name`: Name of the image to use to spawn VMs
+* `keypair` (defaults to `${LOGNAME}_key`): Name of the ssh key
+* `public_key` (defaults to `~/.ssh/id_rsa.pub`): filename of the ssh public key
+* `master_flavor_ram` (defaults to `2048`): VM flavor for the master (by amount of RAM)
+* `master_flavor_id`: VM flavor for the master (by ID)
+* `master_flavor_include`: VM flavor for the master (by name)
+* `node_flavor_ram` (defaults to `4096`): VM flavor for the nodes (by amount of RAM)
+* `node_flavor_id`: VM flavor for the nodes (by ID)
+* `node_flavor_include`: VM flavor for the nodes (by name)
+* `infra_heat_stack` (defaults to `playbooks/openstack/openshift-cluster/files/heat_stack.yml`): filename of the Heat template used to create the cluster infrastructure
+
+The following options are used only by `heat_stack.yml`, i.e. only when the `infra_heat_stack` option is left at its default value:
+
+* `network_prefix` (defaults to `openshift-ansible-<cluster_id>`): prefix prepended to all network objects (net, subnet, router, security groups)
+* `dns` (defaults to `8.8.8.8,8.8.4.4`): comma-separated list of DNS servers to use
+* `net_cidr` (defaults to `192.168.<rand()>.0/24`): CIDR of the network created by `heat_stack.yml`
+* `external_net` (defaults to `external`): name of the external network to connect to
+* `floating_ip_pools` (defaults to `external`): comma-separated list of floating IP pools
+* `ssh_from` (defaults to `0.0.0.0/0`): IPs authorized to connect to the VMs via ssh
+
+
+Creating a cluster
+------------------
+
+1. To create a cluster with one master and two nodes
+
+```
+  bin/cluster create openstack <cluster-id>
+```
+
+2. To create a cluster with one master and three nodes, a custom VM image and custom DNS:
+
+```
+  bin/cluster create -n 3 -o image_name=rhel-7.1-openshift-2015.05.21 -o dns=172.16.50.210,172.16.50.250 openstack lenaic
+```
+
+Updating a cluster
+------------------
+
+1. To update the cluster
+
+```
+  bin/cluster update openstack <cluster-id>
+```
+
+Terminating a cluster
+---------------------
+
+1. To terminate the cluster
+
+```
+  bin/cluster terminate openstack <cluster-id>
+```

+ 61 - 6
bin/cluster

@@ -9,8 +9,9 @@ import os
 
 class Cluster(object):
     """
-    Control and Configuration Interface for OpenShift Clusters
+    Provide Command, Control and Configuration (c3) Interface for OpenShift Clusters
     """
+
     def __init__(self):
         # setup ansible ssh environment
         if 'ANSIBLE_SSH_ARGS' not in os.environ:
@@ -104,6 +105,21 @@ class Cluster(object):
 
         return self.action(args, inventory, env, playbook)
 
+    def service(self, args):
+        """
+        Make the same service call across all nodes in the cluster
+        :param args: command line arguments provided by user
+        :return: exit status from run command
+        """
+        env = {'cluster_id': args.cluster_id,
+               'deployment_type': self.get_deployment_type(args),
+               'new_cluster_state': args.state}
+
+        playbook = "playbooks/{}/openshift-cluster/service.yml".format(args.provider)
+        inventory = self.setup_provider(args.provider)
+
+        return self.action(args, inventory, env, playbook)
+
     def setup_provider(self, provider):
         """
         Setup ansible playbook environment
@@ -127,6 +143,8 @@ class Cluster(object):
             inventory = '-i inventory/aws/hosts'
         elif 'libvirt' == provider:
             inventory = '-i inventory/libvirt/hosts'
+        elif 'openstack' == provider:
+            inventory = '-i inventory/openstack/hosts'
         else:
             # this code should never be reached
             raise ValueError("invalid PROVIDER {}".format(provider))
@@ -147,6 +165,11 @@ class Cluster(object):
         if args.verbose > 0:
             verbose = '-{}'.format('v' * args.verbose)
 
+        if args.option:
+            for opt in args.option:
+                k, v = opt.split('=', 1)
+                env['opt_'+k] = v
+
         ansible_env = '-e \'{}\''.format(
             ' '.join(['%s=%s' % (key, value) for (key, value) in env.items()])
         )
@@ -167,25 +190,49 @@ class Cluster(object):
 
 if __name__ == '__main__':
     """
-    Implemented to support writing unit tests
+    User command to invoke ansible playbooks in a "known" environment
+
+    Reads ~/.openshift-ansible for default configuration items
+      [DEFAULT]
+      validate_cluster_ids = False
+      cluster_ids = marketing,sales
+      providers = gce,aws,libvirt,openstack
     """
 
+    environment = ConfigParser.SafeConfigParser({
+        'cluster_ids': 'marketing,sales',
+        'validate_cluster_ids': 'False',
+        'providers': 'gce,aws,libvirt,openstack',
+    })
+
+    path = os.path.expanduser("~/.openshift-ansible")
+    if os.path.isfile(path):
+        environment.read(path)
+
     cluster = Cluster()
 
-    providers = ['gce', 'aws', 'libvirt']
     parser = argparse.ArgumentParser(
         description='Python wrapper to ensure proper environment for OpenShift ansible playbooks',
     )
     parser.add_argument('-v', '--verbose', action='count',
                         help='Multiple -v options increase the verbosity')
-    parser.add_argument('--version', action='version', version='%(prog)s 0.2')
+    parser.add_argument('--version', action='version', version='%(prog)s 0.3')
 
     meta_parser = argparse.ArgumentParser(add_help=False)
+    providers = environment.get('DEFAULT', 'providers').split(',')
     meta_parser.add_argument('provider', choices=providers, help='provider')
-    meta_parser.add_argument('cluster_id', help='prefix for cluster VM names')
+
+    if environment.get('DEFAULT', 'validate_cluster_ids').lower() in ("yes", "true", "1"):
+        meta_parser.add_argument('cluster_id', choices=environment.get('DEFAULT', 'cluster_ids').split(','),
+                                 help='prefix for cluster VM names')
+    else:
+        meta_parser.add_argument('cluster_id', help='prefix for cluster VM names')
+
     meta_parser.add_argument('-t', '--deployment-type',
                              choices=['origin', 'online', 'enterprise'],
                              help='Deployment type. (default: origin)')
+    meta_parser.add_argument('-o', '--option', action='append',
+                             help='options')
 
     action_parser = parser.add_subparsers(dest='action', title='actions',
                                           description='Choose from valid actions')
@@ -221,6 +268,13 @@ if __name__ == '__main__':
                                            parents=[meta_parser])
     list_parser.set_defaults(func=cluster.list)
 
+    service_parser = action_parser.add_parser('service', help='service for openshift across cluster',
+                                              parents=[meta_parser])
+    # choices are the only ones valid for the ansible service module: http://docs.ansible.com/service_module.html
+    service_parser.add_argument('state', choices=['started', 'stopped', 'restarted', 'reloaded'],
+                                help='make service call across cluster')
+    service_parser.set_defaults(func=cluster.service)
+
     args = parser.parse_args()
 
     if 'terminate' == args.action and not args.force:
@@ -230,7 +284,8 @@ if __name__ == '__main__':
             exit(1)
 
     if 'update' == args.action and not args.force:
-        answer = raw_input("This is destructive and could corrupt {} environment. Continue? [y/N] ".format(args.cluster_id))
+        answer = raw_input(
+            "This is destructive and could corrupt {} environment. Continue? [y/N] ".format(args.cluster_id))
         if answer not in ['y', 'Y']:
             sys.stderr.write('\nACTION [update] aborted by user!\n')
             exit(1)
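
The new `-o`/`--option` handling above turns each repeated `-o key=value` argument into an `opt_`-prefixed extra variable passed to ansible. A minimal standalone sketch of that behavior (`collect_options` and the sample values are invented for illustration, not part of the repo):

```
def collect_options(options):
    ''' Turn a list like ['image_name=rhel-7.1'] into opt_-prefixed vars '''
    env = {}
    for opt in options or []:
        # split on the first '=' only, so values may themselves contain '='
        key, value = opt.split('=', 1)
        env['opt_' + key] = value
    return env

# e.g. bin/cluster create -o image_name=rhel-7.1 -o dns=8.8.8.8,8.8.4.4 ...
assert collect_options(['image_name=rhel-7.1', 'dns=8.8.8.8,8.8.4.4']) == \
    {'opt_image_name': 'rhel-7.1', 'opt_dns': '8.8.8.8,8.8.4.4'}
```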

+ 8 - 1
bin/openshift-ansible-bin.spec

@@ -1,6 +1,6 @@
 Summary:       OpenShift Ansible Scripts for working with metadata hosts
 Name:          openshift-ansible-bin
-Version:       0.0.17
+Version:       0.0.18
 Release:       1%{?dist}
 License:       ASL 2.0
 URL:           https://github.com/openshift/openshift-ansible
@@ -42,6 +42,13 @@ cp -p openshift_ansible.conf.example %{buildroot}/etc/openshift_ansible/openshif
 %config(noreplace) /etc/openshift_ansible/
 
 %changelog
+* Tue Jun 09 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.18-1
+- Implement OpenStack provider (lhuard@amadeus.com)
+- * Update defaults and examples to track core concepts guide
+  (jhonce@redhat.com)
+- Issue 119 - Add support for ~/.openshift-ansible (jhonce@redhat.com)
+- Infrastructure - Add service action to bin/cluster (jhonce@redhat.com)
+
 * Fri May 15 2015 Thomas Wiest <twiest@redhat.com> 0.0.17-1
 - fixed the openshift-ansible-bin build (twiest@redhat.com)
 

+ 0 - 29
cloud.rb

@@ -1,29 +0,0 @@
-#!/usr/bin/env ruby
-
-require 'thor'
-require_relative 'lib/gce_command'
-require_relative 'lib/aws_command'
-
-# Don't buffer output to the client
-STDOUT.sync = true
-STDERR.sync = true
-
-module OpenShift
-  module Ops
-    class CloudCommand < Thor
-      desc 'gce', 'Manages Google Compute Engine assets'
-      subcommand "gce", GceCommand
-
-      desc 'aws', 'Manages Amazon Web Services assets'
-      subcommand "aws", AwsCommand
-    end
-  end
-end
-
-if __FILE__ == $0
-  SCRIPT_DIR = File.expand_path(File.dirname(__FILE__))
-  Dir.chdir(SCRIPT_DIR) do
-    # Kick off thor
-    OpenShift::Ops::CloudCommand.start(ARGV)
-  end
-end

+ 218 - 0
docs/best_practices_guide.adoc

@@ -0,0 +1,218 @@
+// vim: ft=asciidoc
+
+= Openshift-Ansible Best Practices Guide
+
+The purpose of this guide is to describe the preferred patterns and best practices used in this repository (both in ansible and python).
+
+It is important to note that this repository may not currently comply with all best practices, but the intention is that it will.
+
+All new pull requests created against this repository MUST comply with this guide.
+
+This guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119].
+
+
+== Pull Requests
+
+[cols="2v,v"]
+|===
+| **Rule**
+| All pull requests MUST pass the build bot *before* they are merged.
+|===
+
+The purpose of this rule is to avoid cases where the build bot will fail pull requests for code modified in a previous pull request.
+
+The tooling is flexible enough that exceptions can be made so that the tool the build bot is running will ignore certain areas or certain checks, but the build bot itself must pass for the pull request to be merged.
+
+
+
+== Python
+
+=== PyLint
+http://www.pylint.org/[PyLint] is used in an attempt to keep the python code as clean and as manageable as possible. The build bot runs each pull request through PyLint and any warnings or errors cause the build bot to fail the pull request.
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| PyLint rules MUST NOT be disabled on a whole file.
+|===
+
+Instead, http://docs.pylint.org/faq.html#is-it-possible-to-locally-disable-a-particular-message[disable the PyLint check on the line where PyLint is complaining].
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| PyLint rules MUST NOT be disabled unless they meet one of the following exceptions
+|===
+
+.Exceptions:
+1. When PyLint fails because of a dependency that can't be installed on the build bot
+1. When PyLint fails because of including a module that is outside of control (like Ansible)
+1. When PyLint fails, but the code makes more sense the way it is formatted (stylistic exception). For this exception, the description of the PyLint disable MUST state why the code is more clear, AND the person reviewing the PR will decide if they agree or not. The reviewer may reject the PR if they disagree with the reason for the disable.
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| All PyLint rule disables MUST be documented in the code.
+|===
+
+The purpose of this rule is to inform future developers about the disable.
+
+.Specifically, the following MUST accompany every PyLint disable:
+1. Why is the check being disabled?
+1. Is disabling this check meant to be permanent or temporary?
+
+.Example:
+[source,python]
+----
+# Reason: disable pylint maybe-no-member because overloaded use of
+#     the module name causes pylint to not detect that 'results'
+#     is an array or hash
+# Status: permanently disabled unless a way is found to fix this.
+# pylint: disable=maybe-no-member
+metadata[line] = results.pop()
+----
+
+
+== Ansible
+
+=== Yaml Files (Playbooks, Roles, Vars, etc)
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| Ansible files SHOULD NOT use JSON (use pure YAML instead).
+|===
+
+YAML is a superset of JSON, which means that Ansible allows JSON syntax to be interspersed. Even though YAML (and by extension Ansible) allows for this, JSON SHOULD NOT be used.
+
+.Reasons:
+* Ansible is able to give clearer error messages when the files are pure YAML
+* YAML reads nicer (preference held by several team members)
+* YAML makes for nicer diffs as YAML tends to be multi-line, whereas JSON tends to be more concise
+
+.Exceptions:
+* Ansible static inventory files are INI files. To pass in variables for specific hosts, Ansible allows for these variables to be put inside of the static inventory files. These variables can be in JSON format, but can't be in YAML format. This is an acceptable use of JSON, as YAML is not allowed in this case.
+
+Every effort should be made to keep our Ansible YAML files in pure YAML.
+
+=== Defensive Programming
+
+.Context
+* http://docs.ansible.com/fail_module.html[Ansible Fail Module]
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| Ansible playbooks MUST begin with checks for any variables that they require.
+|===
+
+If an Ansible playbook requires certain variables to be set, it's best to check for these up front before any other actions have been performed. In this way, the user knows exactly what needs to be passed into the playbook.
+
+.Example:
+[source,yaml]
+----
+---
+- hosts: localhost
+  gather_facts: no
+  tasks:
+  - fail: msg="This playbook requires g_environment to be set and non empty"
+    when: g_environment is not defined or g_environment == ''
+----
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| An Ansible role's tasks/main.yml file MUST begin with checks for any variables that the role requires.
+|===
+
+If an Ansible role requires certain variables to be set, it's best to check for these up front before any other actions have been performed. In this way, the user knows exactly what needs to be passed into the role.
+
+.Example:
+[source,yaml]
+----
+---
+# tasks/main.yml
+- fail: msg="This role requires arl_environment to be set and non empty"
+  when: arl_environment is not defined or arl_environment == ''
+----
+
+=== Roles
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| The Ansible roles directory MUST maintain a flat structure.
+|===
+
+.Context
+* http://docs.ansible.com/playbooks_best_practices.html#directory-layout[Ansible Suggested Directory Layout]
+
+.The purpose of this rule is to:
+* Comply with the upstream best practices
+* Make it familiar for new contributors
+* Make it compatible with Ansible Galaxy
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| Ansible Roles SHOULD be named like technology_component[_subcomponent].
+|===
+
+For consistency, role names SHOULD follow the above naming pattern. It is important to note that this is a recommendation for role naming, and follows the pattern used by upstream.
+
+Many times the `technology` portion of the pattern will line up with a package name. It is advised that whenever possible, the package name should be used.
+
+.Examples:
+* The role to configure an OpenShift Master is called `openshift_master`
+* The role to configure OpenShift specific yum repositories is called `openshift_repos`
+
+=== Filters
+.Context:
+* https://docs.ansible.com/playbooks_filters.html[Ansible Playbook Filters]
+* http://jinja.pocoo.org/docs/dev/templates/#builtin-filters[Jinja2 Builtin Filters]
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| The `default` filter SHOULD replace empty strings, lists, etc.
+|===
+
+When using the jinja2 `default` filter, unless the variable is a boolean, specify `true` as the second parameter. This will cause the default filter to replace empty strings, lists, etc with the provided default.
+
+This is because a sane default is preferable to an empty string, list, etc. For example, it is better to have a config value set to a sane default than to have it simply set as an empty string.
+
+.From the http://jinja.pocoo.org/docs/dev/templates/[Jinja2 Docs]:
+[quote]
+If you want to use default with variables that evaluate to false you have to set the second parameter to true
+
+.Example:
+[source,yaml]
+----
+---
+- hosts: localhost
+  gather_facts: no
+  vars:
+    somevar: ''
+  tasks:
+  - debug: var=somevar
+
+  - name: "Will output 'somevar: []'"
+    debug: "msg='somevar: [{{ somevar | default('the string was empty') }}]'"
+
+  - name: "Will output 'somevar: [the string was empty]'"
+    debug: "msg='somevar: [{{ somevar | default('the string was empty', true) }}]'"
+----
+
+
+In other words, normally the `default` filter will only replace the value if it's undefined. By setting the second parameter to `true`, it will also replace the value if it evaluates to false in python: None, an empty list, an empty string, etc.
+
+This is almost always more desirable than an empty list, string, etc.
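
To make the two `default` behaviors concrete outside of Jinja2, here is a rough Python analogue, treating `None` as a stand-in for an undefined variable (an assumption made purely for illustration):

```
def default(value, fallback, boolean=False):
    # boolean=False: only an undefined value (None, here) is replaced
    # boolean=True: any value that is false in python ('', [], {}, None) is replaced
    if boolean:
        return value if value else fallback
    return fallback if value is None else value

assert default('', 'the string was empty') == ''    # kept: '' is defined
assert default('', 'the string was empty', boolean=True) == 'the string was empty'
```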

+ 43 - 0
docs/core_concepts_guide.adoc

@@ -0,0 +1,43 @@
+// vim: ft=asciidoc
+
+= Openshift-Ansible Core Concepts Guide
+
+The purpose of this guide is to describe core concepts used in this repository.
+
+It is important to note that this repository may not currently implement all of the concepts, but the intention is that it will.
+
+== Logical Grouping Concepts
+The following are the concepts used to logically group OpenShift cluster instances.
+
+These groupings are used to perform operations specifically against instances in the specified group.
+
+For example, run an Ansible playbook against all instances in the `production` environment, or run an ad hoc command against all instances in the `acme-corp` cluster group.
+
+=== Cluster
+A Cluster is a complete install of OpenShift (master, nodes, registry, router, etc).
+
+Example: Acme Corp has sales and marketing departments that both want to use OpenShift for their internal applications, but they do not want to share resources because they have different cost centers. Each department could have their own completely separate install of OpenShift. Each install is a separate OpenShift cluster.
+
+Defined Clusters:
+`acme-sales`
+`acme-marketing`
+
+=== Cluster Group
+A cluster group is a logical grouping of one or more clusters. Which clusters are in which cluster groups is determined by the OpenShift administrators.
+
+Example: Extending the example above, both marketing and sales clusters are part of Acme Corp. Let's say that Acme Corp contracts with Hosting Corp to host their OpenShift clusters. Hosting Corp could create an Acme Corp cluster group.
+
+This would logically separate Acme Corp resources from those of other Hosting Corp customers, which would enable Hosting Corp's OpenShift administrators to run operations specifically targeting Acme Corp instances.
+
+Defined Cluster Group:
+`acme-corp`
+
+=== Environment
+An environment is a logical grouping of one or more cluster groups. How the environment is defined is determined by the OpenShift administrators.
+
+Example: Extending the two examples above, Hosting Corp is upgrading to the latest version of OpenShift. Before deploying it to their clusters in the Production environment, they want to test it out. So, Hosting Corp runs an Ansible playbook specifically against all of the cluster groups in the Staging environment in order to do the OpenShift upgrade.
+
+
+Defined Environments:
+`production`
+`staging`

+ 138 - 0
docs/style_guide.adoc

@@ -0,0 +1,138 @@
+// vim: ft=asciidoc
+
+= Openshift-Ansible Style Guide
+
+The purpose of this guide is to describe the preferred coding conventions used in this repository (both in ansible and python).
+
+It is important to note that this repository may not currently comply with all style guide rules, but the intention is that it will.
+
+All new pull requests created against this repository MUST comply with this guide.
+
+This style guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119].
+
+== Python
+
+
+=== Python Maximum Line Length
+
+.Context:
+* https://www.python.org/dev/peps/pep-0008/#maximum-line-length[Python Pep8 Line Length]
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| All lines SHOULD be no longer than 80 characters.
+|===
+
+Every attempt SHOULD be made to comply with this soft line length limit, and only when it makes the code more readable should this be violated.
+
+Code readability is subjective; therefore pull requests SHOULD still be merged even if they violate this soft limit, as it is up to the individual contributor to determine whether violating the 80 character soft limit makes the code more readable.
+
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| All lines MUST be no longer than 120 characters.
+|===
+
+This is a hard limit and is enforced by the build bot. This check MUST NOT be disabled.
+
+
+
+== Ansible
+
+=== Ansible Global Variables
+Ansible global variables are defined as any variables outside of ansible roles. Examples include playbook variables, variables passed in on the cli, etc.
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| Global variables MUST have a prefix of g_
+|===
+
+
+Example:
+[source]
+----
+g_environment: someval
+----
+
+=== Ansible Role Variables
+Ansible role variables are defined as variables contained in (or passed into) a role.
+
+'''
+[cols="2v,v"]
+|===
+| **Rule**
+| Role variables MUST have a prefix of at least 3 characters. See below for specific naming rules.
+|===
+
+==== Role with 3 (or more) words in the name
+
+Take the first letter of each of the words.
+
+.3 word example:
+* Role name: made_up_role
+* Prefix: mur
+[source]
+----
+mur_var1: value_one
+----
+
+.4 word example:
+* Role name: totally_made_up_role
+* Prefix: tmur
+[source]
+----
+tmur_var1: value_one
+----
+
+
+
+==== Role with 2 (or less) words in the name
+
+Make up a prefix that makes sense.
+
+.1 word example:
+* Role name: ansible
+* Prefix: ans
+[source]
+----
+ans_var1: value_one
+----
+
+.2 word example:
+* Role name: ansible_tower
+* Prefix: tow
+[source]
+----
+tow_var1: value_one
+----
+
+
+==== Role name prefix conflicts
+If two role names contain words that start with the same letters, it will seem like their prefixes would conflict.
+
+Role variables are confined to the roles themselves, so this is actually only a problem if one of the roles depends on the other role (or includes tasks from the other role).
+
+.Same prefix example:
+* First Role Name: made_up_role
+* First Role Prefix: mur
+* Second Role Name: my_uber_role
+* Second Role Prefix: mur
+[source]
+----
+- hosts: localhost
+  roles:
+  - { role: made_up_role, mur_var1: val1 }
+  - { role: my_uber_role, mur_var1: val2 }
+----
+
+Even though both roles have the same prefix (mur), and even though both roles have a variable named mur_var1, these two variables never exist outside of their respective roles. This means that this is not a problem.
+
+This would only be a problem if my_uber_role depended on made_up_role, or vice versa. Or if either of these two roles included things from the other.
+
+This is enough of a corner case that it is unlikely to happen. If it does, it will be addressed on a case by case basis.

+ 219 - 177
filter_plugins/oo_filters.py

@@ -9,188 +9,230 @@ from ansible import errors
 from operator import itemgetter
 import pdb
 
-def oo_pdb(arg):
-    ''' This pops you into a pdb instance where arg is the data passed in
-        from the filter.
-        Ex: "{{ hostvars | oo_pdb }}"
-    '''
-    pdb.set_trace()
-    return arg
-
-def oo_len(arg):
-    ''' This returns the length of the argument
-        Ex: "{{ hostvars | oo_len }}"
-    '''
-    return len(arg)
-
-def get_attr(data, attribute=None):
-    ''' This looks up dictionary attributes of the form a.b.c and returns
-        the value.
-        Ex: data = {'a': {'b': {'c': 5}}}
-            attribute = "a.b.c"
-            returns 5
-    '''
-    if not attribute:
-        raise errors.AnsibleFilterError("|failed expects attribute to be set")
-
-    ptr = data
-    for attr in attribute.split('.'):
-        ptr = ptr[attr]
-
-    return ptr
-
-def oo_flatten(data):
-    ''' This filter plugin will flatten a list of lists
-    '''
-    if not issubclass(type(data), list):
-        raise errors.AnsibleFilterError("|failed expects to flatten a List")
-
-    return [item for sublist in data for item in sublist]
-
-
-def oo_collect(data, attribute=None, filters=None):
-    ''' This takes a list of dict and collects all attributes specified into a
-        list If filter is specified then we will include all items that match
-        _ALL_ of filters.
-        Ex: data = [ {'a':1, 'b':5, 'z': 'z'}, # True, return
-                     {'a':2, 'z': 'z'},        # True, return
-                     {'a':3, 'z': 'z'},        # True, return
-                     {'a':4, 'z': 'b'},        # FAILED, obj['z'] != filters['z']
-                   ]
-            attribute = 'a'
-            filters   = {'z': 'z'}
-            returns [1, 2, 3]
-    '''
-    if not issubclass(type(data), list):
-        raise errors.AnsibleFilterError("|failed expects to filter on a List")
-
-    if not attribute:
-        raise errors.AnsibleFilterError("|failed expects attribute to be set")
-
-    if filters is not None:
-        if not issubclass(type(filters), dict):
-            raise errors.AnsibleFilterError("|failed expects filter to be a"
-                                            " dict")
-        retval = [get_attr(d, attribute) for d in data if (
-            all([d[key] == filters[key] for key in filters]))]
-    else:
-        retval = [get_attr(d, attribute) for d in data]
-
-    return retval
-
-def oo_select_keys(data, keys):
-    ''' This returns a list, which contains the value portions for the keys
-        Ex: data = { 'a':1, 'b':2, 'c':3 }
-            keys = ['a', 'c']
-            returns [1, 3]
-    '''
-
-    if not issubclass(type(data), dict):
-        raise errors.AnsibleFilterError("|failed expects to filter on a dict")
-
-    if not issubclass(type(keys), list):
-        raise errors.AnsibleFilterError("|failed expects first param is a list")
-
-    # Gather up the values for the list of keys passed in
-    retval = [data[key] for key in keys]
-
-    return retval
-
-def oo_prepend_strings_in_list(data, prepend):
-    ''' This takes a list of strings and prepends a string to each item in the
-        list
-        Ex: data = ['cart', 'tree']
-            prepend = 'apple-'
-            returns ['apple-cart', 'apple-tree']
-    '''
-    if not issubclass(type(data), list):
-        raise errors.AnsibleFilterError("|failed expects first param is a list")
-    if not all(isinstance(x, basestring) for x in data):
-        raise errors.AnsibleFilterError("|failed expects first param is a list"
-                                        " of strings")
-    retval = [prepend + s for s in data]
-    return retval
-
-def oo_ami_selector(data, image_name):
-    ''' This takes a list of amis and an image name and attempts to return
-        the latest ami.
-    '''
-    if not issubclass(type(data), list):
-        raise errors.AnsibleFilterError("|failed expects first param is a list")
-
-    if not data:
-        return None
-    else:
-        if image_name is None or not image_name.endswith('_*'):
-            ami = sorted(data, key=itemgetter('name'), reverse=True)[0]
-            return ami['ami_id']
+
+class FilterModule(object):
+    ''' Custom ansible filters '''
+
+    @staticmethod
+    def oo_pdb(arg):
+        ''' This pops you into a pdb instance where arg is the data passed in
+            from the filter.
+            Ex: "{{ hostvars | oo_pdb }}"
+        '''
+        pdb.set_trace()
+        return arg
+
+    @staticmethod
+    def get_attr(data, attribute=None):
+        ''' This looks up dictionary attributes of the form a.b.c and returns
+            the value.
+            Ex: data = {'a': {'b': {'c': 5}}}
+                attribute = "a.b.c"
+                returns 5
+        '''
+        if not attribute:
+            raise errors.AnsibleFilterError("|failed expects attribute to be set")
+
+        ptr = data
+        for attr in attribute.split('.'):
+            ptr = ptr[attr]
+
+        return ptr
+
+    @staticmethod
+    def oo_flatten(data):
+        ''' This filter plugin will flatten a list of lists
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects to flatten a List")
+
+        return [item for sublist in data for item in sublist]
+
+
+    @staticmethod
+    def oo_collect(data, attribute=None, filters=None):
+        ''' This takes a list of dict and collects all attributes specified into a
+            list If filter is specified then we will include all items that match
+            _ALL_ of filters.
+            Ex: data = [ {'a':1, 'b':5, 'z': 'z'}, # True, return
+                         {'a':2, 'z': 'z'},        # True, return
+                         {'a':3, 'z': 'z'},        # True, return
+                         {'a':4, 'z': 'b'},        # FAILED, obj['z'] != filters['z']
+                       ]
+                attribute = 'a'
+                filters   = {'z': 'z'}
+                returns [1, 2, 3]
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects to filter on a List")
+
+        if not attribute:
+            raise errors.AnsibleFilterError("|failed expects attribute to be set")
+
+        if filters is not None:
+            if not issubclass(type(filters), dict):
+                raise errors.AnsibleFilterError("|failed expects filter to be a"
+                                                " dict")
+            retval = [FilterModule.get_attr(d, attribute) for d in data if (
+                all([d[key] == filters[key] for key in filters]))]
         else:
-            ami_info = [(ami, ami['name'].split('_')[-1]) for ami in data]
-            ami = sorted(ami_info, key=itemgetter(1), reverse=True)[0][0]
-            return ami['ami_id']
-
-def oo_ec2_volume_definition(data, host_type, docker_ephemeral=False):
-    ''' This takes a dictionary of volume definitions and returns a valid ec2
-        volume definition based on the host_type and the values in the
-        dictionary.
-        The dictionary should look similar to this:
-            { 'master':
-                { 'root':
-                    { 'volume_size': 10, 'device_type': 'gp2',
-                      'iops': 500
-                    }
-                },
-              'node':
-                { 'root':
-                    { 'volume_size': 10, 'device_type': 'io1',
-                      'iops': 1000
+            retval = [FilterModule.get_attr(d, attribute) for d in data]
+
+        return retval
+
+    @staticmethod
+    def oo_select_keys(data, keys):
+        ''' This returns a list, which contains the value portions for the keys
+            Ex: data = { 'a':1, 'b':2, 'c':3 }
+                keys = ['a', 'c']
+                returns [1, 3]
+        '''
+
+        if not issubclass(type(data), dict):
+            raise errors.AnsibleFilterError("|failed expects to filter on a dict")
+
+        if not issubclass(type(keys), list):
+            raise errors.AnsibleFilterError("|failed expects first param is a list")
+
+        # Gather up the values for the list of keys passed in
+        retval = [data[key] for key in keys]
+
+        return retval
+
+    @staticmethod
+    def oo_prepend_strings_in_list(data, prepend):
+        ''' This takes a list of strings and prepends a string to each item in the
+            list
+            Ex: data = ['cart', 'tree']
+                prepend = 'apple-'
+                returns ['apple-cart', 'apple-tree']
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects first param is a list")
+        if not all(isinstance(x, basestring) for x in data):
+            raise errors.AnsibleFilterError("|failed expects first param is a list"
+                                            " of strings")
+        retval = [prepend + s for s in data]
+        return retval
+
+    @staticmethod
+    def oo_combine_key_value(data, joiner='='):
+        '''Take a list of dict in the form of { 'key': 'value'} and
+           arrange them as a list of strings ['key=value']
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects first param is a list")
+
+        rval = []
+        for item in data:
+            rval.append("%s%s%s" % (item['key'], joiner, item['value']))
+
+        return rval
+
+    @staticmethod
+    def oo_ami_selector(data, image_name):
+        ''' This takes a list of amis and an image name and attempts to return
+            the latest ami.
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects first param is a list")
+
+        if not data:
+            return None
+        else:
+            if image_name is None or not image_name.endswith('_*'):
+                ami = sorted(data, key=itemgetter('name'), reverse=True)[0]
+                return ami['ami_id']
+            else:
+                ami_info = [(ami, ami['name'].split('_')[-1]) for ami in data]
+                ami = sorted(ami_info, key=itemgetter(1), reverse=True)[0][0]
+                return ami['ami_id']
+
+    @staticmethod
+    def oo_ec2_volume_definition(data, host_type, docker_ephemeral=False):
+        ''' This takes a dictionary of volume definitions and returns a valid ec2
+            volume definition based on the host_type and the values in the
+            dictionary.
+            The dictionary should look similar to this:
+                { 'master':
+                    { 'root':
+                        { 'volume_size': 10, 'device_type': 'gp2',
+                          'iops': 500
+                        }
                     },
-                  'docker':
-                    { 'volume_size': 40, 'device_type': 'gp2',
-                      'iops': 500, 'ephemeral': 'true'
+                  'node':
+                    { 'root':
+                        { 'volume_size': 10, 'device_type': 'io1',
+                          'iops': 1000
+                        },
+                      'docker':
+                        { 'volume_size': 40, 'device_type': 'gp2',
+                          'iops': 500, 'ephemeral': 'true'
+                        }
                     }
                 }
-            }
-    '''
-    if not issubclass(type(data), dict):
-        raise errors.AnsibleFilterError("|failed expects first param is a dict")
-    if host_type not in ['master', 'node']:
-        raise errors.AnsibleFilterError("|failed expects either master or node"
-                                        " host type")
-
-    root_vol = data[host_type]['root']
-    root_vol['device_name'] = '/dev/sda1'
-    root_vol['delete_on_termination'] = True
-    if root_vol['device_type'] != 'io1':
-        root_vol.pop('iops', None)
-    if host_type == 'node':
-        docker_vol = data[host_type]['docker']
-        docker_vol['device_name'] = '/dev/xvdb'
-        docker_vol['delete_on_termination'] = True
-        if docker_vol['device_type'] != 'io1':
-            docker_vol.pop('iops', None)
-        if docker_ephemeral:
-            docker_vol.pop('device_type', None)
-            docker_vol.pop('delete_on_termination', None)
-            docker_vol['ephemeral'] = 'ephemeral0'
-        return [root_vol, docker_vol]
-    return [root_vol]
-
-# disabling pylint checks for too-few-public-methods and no-self-use since we
-# need to expose a FilterModule object that has a filters method that returns
-# a mapping of filter names to methods.
-# pylint: disable=too-few-public-methods, no-self-use
-class FilterModule(object):
-    ''' FilterModule '''
+        '''
+        if not issubclass(type(data), dict):
+            raise errors.AnsibleFilterError("|failed expects first param is a dict")
+        if host_type not in ['master', 'node']:
+            raise errors.AnsibleFilterError("|failed expects either master or node"
+                                            " host type")
+
+        root_vol = data[host_type]['root']
+        root_vol['device_name'] = '/dev/sda1'
+        root_vol['delete_on_termination'] = True
+        if root_vol['device_type'] != 'io1':
+            root_vol.pop('iops', None)
+        if host_type == 'node':
+            docker_vol = data[host_type]['docker']
+            docker_vol['device_name'] = '/dev/xvdb'
+            docker_vol['delete_on_termination'] = True
+            if docker_vol['device_type'] != 'io1':
+                docker_vol.pop('iops', None)
+            if docker_ephemeral:
+                docker_vol.pop('device_type', None)
+                docker_vol.pop('delete_on_termination', None)
+                docker_vol['ephemeral'] = 'ephemeral0'
+            return [root_vol, docker_vol]
+        return [root_vol]
+
+    @staticmethod
+    def oo_split(string, separator=','):
+        ''' This splits the input string into a list
+        '''
+        return string.split(separator)
+
+    @staticmethod
+    def oo_filter_list(data, filter_attr=None):
+        ''' This returns a list, which contains all items where filter_attr
+            evaluates to true
+            Ex: data = [ { a: 1, b: True },
+                         { a: 3, b: False },
+                         { a: 5, b: True } ]
+                filter_attr = 'b'
+                returns [ { a: 1, b: True },
+                          { a: 5, b: True } ]
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects to filter on a list")
+
+        if not issubclass(type(filter_attr), str):
+            raise errors.AnsibleFilterError("|failed expects filter_attr is a str")
+
+        # Gather up the values for the list of keys passed in
+        return [x for x in data if x[filter_attr]]
+
     def filters(self):
         ''' returns a mapping of filters to methods '''
         return {
-            "oo_select_keys": oo_select_keys,
-            "oo_collect": oo_collect,
-            "oo_flatten": oo_flatten,
-            "oo_len": oo_len,
-            "oo_pdb": oo_pdb,
-            "oo_prepend_strings_in_list": oo_prepend_strings_in_list,
-            "oo_ami_selector": oo_ami_selector,
-            "oo_ec2_volume_definition": oo_ec2_volume_definition
+            "oo_select_keys": self.oo_select_keys,
+            "oo_collect": self.oo_collect,
+            "oo_flatten": self.oo_flatten,
+            "oo_pdb": self.oo_pdb,
+            "oo_prepend_strings_in_list": self.oo_prepend_strings_in_list,
+            "oo_ami_selector": self.oo_ami_selector,
+            "oo_ec2_volume_definition": self.oo_ec2_volume_definition,
+            "oo_combine_key_value": self.oo_combine_key_value,
+            "oo_split": self.oo_split,
+            "oo_filter_list": self.oo_filter_list
         }
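
For a quick sense of the simplest new filters, a hedged sketch of calling them directly (the sample data is invented; it assumes the `FilterModule` class above is in scope):

```
fm = FilterModule()
assert fm.oo_split('infra,primary') == ['infra', 'primary']
assert fm.oo_combine_key_value([{'key': 'region', 'value': 'infra'},
                                {'key': 'zone', 'value': 'default'}]) == ['region=infra',
                                                                          'zone=default']
assert fm.oo_filter_list([{'a': 1, 'b': True}, {'a': 3, 'b': False}],
                         filter_attr='b') == [{'a': 1, 'b': True}]
```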

+ 4 - 3
git/.pylintrc

@@ -70,7 +70,8 @@ confidence=
 # --enable=similarities". If you want to run only the classes checker, but have
 # no Warning level messages displayed, use"--disable=all --enable=classes
 # --disable=W"
-disable=E1608,W1627,E1601,E1603,E1602,E1605,E1604,E1607,E1606,W1621,W1620,W1623,W1622,W1625,W1624,W1609,W1608,W1607,W1606,W1605,W1604,W1603,W1602,W1601,W1639,W1640,I0021,W1638,I0020,W1618,W1619,W1630,W1626,W1637,W1634,W1635,W1610,W1611,W1612,W1613,W1614,W1615,W1616,W1617,W1632,W1633,W0704,W1628,W1629,W1636
+# w0511 - fixme - disabled because TODOs are acceptable
+disable=E1608,W1627,E1601,E1603,E1602,E1605,E1604,E1607,E1606,W1621,W1620,W1623,W1622,W1625,W1624,W1609,W1608,W1607,W1606,W1605,W1604,W1603,W1602,W1601,W1639,W1640,I0021,W1638,I0020,W1618,W1619,W1630,W1626,W1637,W1634,W1635,W1610,W1611,W1612,W1613,W1614,W1615,W1616,W1617,W1632,W1633,W0704,W1628,W1629,W1636,W0511
 
 
 [REPORTS]
@@ -285,7 +286,7 @@ notes=FIXME,XXX,TODO
 [FORMAT]
 
 # Maximum number of characters on a single line.
-max-line-length=100
+max-line-length=120
 
 # Regexp for a line that is allowed to be longer than the limit.
 ignore-long-lines=^\s*(# )?<?https?://\S+>?$
@@ -321,7 +322,7 @@ max-args=5
 ignored-argument-names=_.*
 
 # Maximum number of locals for function / method body
-max-locals=15
+max-locals=20
 
 # Maximum number of return / yield for function / method body
 max-returns=6

+ 7 - 4
inventory/byo/hosts

@@ -17,20 +17,23 @@ ansible_ssh_user=root
 deployment_type=enterprise
 
 # Pre-release registry URL
-openshift_registry_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3_beta/ose-${component}:${version}
+oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3_beta/ose-${component}:${version}
 
 # Pre-release additional repo
-#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
-openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterpriseErrata/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterpriseErrata/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
 
 # Origin copr repo
 #openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
 
+# htpasswd auth
+#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
+
 # host group for masters
 [masters]
 ose3-master-ansible.test.example.com
 
 # host group for nodes
 [nodes]
-ose3-master-ansible.test.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
+#ose3-master-ansible.test.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
 ose3-node[1:2]-ansible.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

+ 53 - 42
inventory/libvirt/hosts/libvirt_generic.py

@@ -1,6 +1,6 @@
 #!/usr/bin/env python2
 
-"""
+'''
 libvirt external inventory script
 =================================
 
@@ -12,7 +12,7 @@ To use this, copy this file over /etc/ansible/hosts and chmod +x the file.
 This, more or less, allows you to keep one central database containing
 info about all of your managed instances.
 
-"""
+'''
 
 # (c) 2015, Jason DeTiberus <jdetiber@redhat.com>
 #
@@ -36,9 +36,7 @@ info about all of your managed instances.
 import argparse
 import ConfigParser
 import os
-import re
 import sys
-from time import time
 import libvirt
 import xml.etree.ElementTree as ET
 
@@ -49,8 +47,11 @@ except ImportError:
 
 
 class LibvirtInventory(object):
+    ''' libvirt dynamic inventory '''
 
     def __init__(self):
+        ''' Main execution path '''
+
         self.inventory = dict()  # A list of groups and the hosts in that group
         self.cache = dict()  # Details about hosts in the inventory
 
@@ -59,13 +60,15 @@ class LibvirtInventory(object):
         self.parse_cli_args()
 
         if self.args.host:
-            print self.json_format_dict(self.get_host_info(), self.args.pretty)
+            print _json_format_dict(self.get_host_info(), self.args.pretty)
         elif self.args.list:
-            print self.json_format_dict(self.get_inventory(), self.args.pretty)
+            print _json_format_dict(self.get_inventory(), self.args.pretty)
         else:  # default action with no options
-            print self.json_format_dict(self.get_inventory(), self.args.pretty)
+            print _json_format_dict(self.get_inventory(), self.args.pretty)
 
     def read_settings(self):
+        ''' Reads the settings from the libvirt.ini file '''
+
         config = ConfigParser.SafeConfigParser()
         config.read(
             os.path.dirname(os.path.realpath(__file__)) + '/libvirt.ini'
@@ -73,6 +76,8 @@ class LibvirtInventory(object):
         self.libvirt_uri = config.get('libvirt', 'uri')
 
     def parse_cli_args(self):
+        ''' Command line argument processing '''
+
         parser = argparse.ArgumentParser(
             description='Produce an Ansible Inventory file based on libvirt'
         )
@@ -96,25 +101,27 @@ class LibvirtInventory(object):
         self.args = parser.parse_args()
 
     def get_host_info(self):
+        ''' Get variables about a specific host '''
+
         inventory = self.get_inventory()
         if self.args.host in inventory['_meta']['hostvars']:
             return inventory['_meta']['hostvars'][self.args.host]
 
     def get_inventory(self):
+        ''' Construct the inventory '''
+
         inventory = dict(_meta=dict(hostvars=dict()))
 
         conn = libvirt.openReadOnly(self.libvirt_uri)
         if conn is None:
-            print "Failed to open connection to %s" % libvirt_uri
+            print "Failed to open connection to %s" % self.libvirt_uri
             sys.exit(1)
 
         domains = conn.listAllDomains()
         if domains is None:
-            print "Failed to list domains for connection %s" % libvirt_uri
+            print "Failed to list domains for connection %s" % self.libvirt_uri
             sys.exit(1)
 
-        arp_entries = self.parse_arp_entries()
-
         for domain in domains:
             hostvars = dict(libvirt_name=domain.name(),
                             libvirt_id=domain.ID(),
@@ -130,21 +137,30 @@ class LibvirtInventory(object):
             hostvars['libvirt_status'] = 'running'
 
             root = ET.fromstring(domain.XMLDesc())
-            ns = {'ansible': 'https://github.com/ansible/ansible'}
-            for tag_elem in root.findall('./metadata/ansible:tags/ansible:tag', ns):
+            ansible_ns = {'ansible': 'https://github.com/ansible/ansible'}
+            for tag_elem in root.findall('./metadata/ansible:tags/ansible:tag', ansible_ns):
                 tag = tag_elem.text
-                self.push(inventory, "tag_%s" % tag, domain_name)
-                self.push(hostvars, 'libvirt_tags', tag)
+                _push(inventory, "tag_%s" % tag, domain_name)
+                _push(hostvars, 'libvirt_tags', tag)
 
             # TODO: support more than one network interface, also support
             # interface types other than 'network'
             interface = root.find("./devices/interface[@type='network']")
             if interface is not None:
+                source_elem = interface.find('source')
                 mac_elem = interface.find('mac')
-                if mac_elem is not None:
-                    mac = mac_elem.get('address')
-                    if mac in arp_entries:
-                        ip_address = arp_entries[mac]['ip_address']
+                if source_elem is not None and \
+                   mac_elem    is not None:
+                    # Adding this to disable pylint check specifically
+                    # ignoring libvirt-python versions that
+                    # do not include DHCPLeases
+                    # This is needed until we upgrade the build bot to
+                    # RHEL7 (>= 1.2.6 libvirt)
+                    # pylint: disable=no-member
+                    dhcp_leases = conn.networkLookupByName(source_elem.get('network')) \
+                                      .DHCPLeases(mac_elem.get('address'))
+                    if len(dhcp_leases) > 0:
+                        ip_address = dhcp_leases[0]['ipaddr']
                         hostvars['ansible_ssh_host'] = ip_address
                         hostvars['libvirt_ip_address'] = ip_address
 
@@ -152,28 +168,23 @@ class LibvirtInventory(object):
 
         return inventory
 
-    def parse_arp_entries(self):
-        arp_entries = dict()
-        with open('/proc/net/arp', 'r') as f:
-            # throw away the header
-            f.readline()
-
-            for line in f:
-                ip_address, _, _, mac, _, device = line.strip().split()
-                arp_entries[mac] = dict(ip_address=ip_address, device=device)
-
-        return arp_entries
-
-    def push(self, my_dict, key, element):
-        if key in my_dict:
-            my_dict[key].append(element)
-        else:
-            my_dict[key] = [element]
-
-    def json_format_dict(self, data, pretty=False):
-        if pretty:
-            return json.dumps(data, sort_keys=True, indent=2)
-        else:
-            return json.dumps(data)
+def _push(my_dict, key, element):
+    '''
+    Push element onto the my_dict[key] list,
+    initializing my_dict[key] first if it doesn't exist.
+    '''
+
+    if key in my_dict:
+        my_dict[key].append(element)
+    else:
+        my_dict[key] = [element]
+
+def _json_format_dict(data, pretty=False):
+    ''' Serialize data to a JSON formatted str '''
+
+    if pretty:
+        return json.dumps(data, sort_keys=True, indent=2)
+    else:
+        return json.dumps(data)
 
 LibvirtInventory()

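For context, the new IP lookup above asks libvirt's own DHCP lease table for the MAC address found in the domain XML, instead of scraping /proc/net/arp. A minimal standalone sketch of that lookup, assuming libvirt-python >= 1.2.6 and a qemu:///system URI (both assumptions, per the pylint comment in the hunk):

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')  # assumed URI
    for domain in conn.listAllDomains():
        root = ET.fromstring(domain.XMLDesc())
        interface = root.find("./devices/interface[@type='network']")
        if interface is None:
            continue
        network = interface.find('source').get('network')
        mac = interface.find('mac').get('address')
        # virNetwork.DHCPLeases is only available in libvirt >= 1.2.6
        leases = conn.networkLookupByName(network).DHCPLeases(mac)
        if leases:
            print domain.name(), leases[0]['ipaddr']
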
+ 9 - 4
inventory/multi_ec2.py

@@ -82,7 +82,6 @@ class MultiEc2(object):
         else:
             raise RuntimeError("Could not find valid ec2 credentials in the environment.")
 
-        # Set the default cache path but if its defined we'll assign it.
         if self.config.has_key('cache_location'):
             self.cache_path = self.config['cache_location']
 
@@ -217,7 +216,12 @@ class MultiEc2(object):
             # For any non-zero, raise an error on it
             for result in provider_results:
                 if result['code'] != 0:
-                    raise RuntimeError(result['err'])
+                    err_msg = ['\nProblem fetching account: {name}',
+                               'Error Code: {code}',
+                               'StdErr: {err}',
+                               'Stdout: {out}',
+                              ]
+                    raise RuntimeError('\n'.join(err_msg).format(**result))
                 else:
                     self.all_ec2_results[result['name']] = json.loads(result['out'])
 
@@ -248,8 +252,9 @@ class MultiEc2(object):
                     data[str(host_property)] = str(value)
 
             # Add this group
-            results["%s_%s" % (host_property, value)] = \
-              copy.copy(results[acc_config['all_group']])
+            if results.has_key(acc_config['all_group']):
+                results["%s_%s" % (host_property, value)] = \
+                  copy.copy(results[acc_config['all_group']])
 
         # store the results back into all_ec2_results
         self.all_ec2_results[acc_config['name']] = results

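To illustrate the new error reporting above, here is the message the joined template produces for one failing provider; the values in result are made up:

    result = {'name': 'aws_account_a', 'code': 1,
              'err': 'AuthFailure: credentials rejected', 'out': ''}
    err_msg = ['\nProblem fetching account: {name}',
               'Error Code: {code}',
               'StdErr: {err}',
               'Stdout: {out}']
    print '\n'.join(err_msg).format(**result)

Each field lands on its own line, instead of the bare stderr string the old code raised.
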
+ 12 - 1
inventory/openshift-ansible-inventory.spec

@@ -1,6 +1,6 @@
 Summary:       OpenShift Ansible Inventories
 Name:          openshift-ansible-inventory
-Version:       0.0.7
+Version:       0.0.8
 Release:       1%{?dist}
 License:       ASL 2.0
 URL:           https://github.com/openshift/openshift-ansible
@@ -36,6 +36,17 @@ cp -p gce/hosts/gce.py %{buildroot}/usr/share/ansible/inventory/gce
 /usr/share/ansible/inventory/gce/gce.py*
 
 %changelog
+* Tue Jun 09 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.8-1
+- Added more verbosity when error happens.  Also fixed a bug.
+  (kwoodson@redhat.com)
+- Implement OpenStack provider (lhuard@amadeus.com)
+- * rename openshift_registry_url oreg_url * rename option_images to
+  _{oreg|ortr}_images (jhonce@redhat.com)
+- Fix the remaining pylint warnings (lhuard@amadeus.com)
+- Fix some of the pylint warnings (lhuard@amadeus.com)
+- [libvirt cluster] Use net-dhcp-leases to find VMs’ IPs (lhuard@amadeus.com)
+- fixed the openshift-ansible-bin build (twiest@redhat.com)
+
 * Fri May 15 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.7-1
 - Making multi_ec2 into a library (kwoodson@redhat.com)
 

+ 1 - 0
inventory/openstack/hosts/hosts

@@ -0,0 +1 @@
+localhost ansible_sudo=no ansible_python_interpreter=/usr/bin/python2 connection=local

+ 45 - 0
inventory/openstack/hosts/nova.ini

@@ -0,0 +1,45 @@
+# Ansible OpenStack external inventory script
+
+[openstack]
+
+#-------------------------------------------------------------------------
+#  Required settings
+#-------------------------------------------------------------------------
+
+# API version
+version       = 2
+
+# OpenStack nova username
+username      =
+
+# OpenStack nova api_key or password
+api_key       =
+
+# OpenStack nova auth_url
+auth_url      =
+
+# OpenStack nova project_id or tenant name
+project_id    =
+
+#-------------------------------------------------------------------------
+#  Optional settings
+#-------------------------------------------------------------------------
+
+# Authentication system
+# auth_system = keystone
+
+# OpenStack region name to use
+# region_name   =
+
+# Specify a preference for public or private IPs (public is default)
+# prefer_private = False
+
+# What service type (required for newer nova client)
+# service_type = compute
+
+
+# TODO: Some other options
+# insecure      =
+# endpoint_type =
+# extensions    =
+# service_name  =

+ 224 - 0
inventory/openstack/hosts/nova.py

@@ -0,0 +1,224 @@
+#!/usr/bin/env python2
+
+# pylint: skip-file
+
+# (c) 2012, Marco Vito Moscaritolo <marco@agavee.com>
+#
+# This file is part of Ansible,
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+import sys
+import re
+import os
+import ConfigParser
+from novaclient import client as nova_client
+
+try:
+    import json
+except ImportError:
+    import simplejson as json
+
+###################################################
+# executed with no parameters, return the list of
+# all groups and hosts
+
+NOVA_CONFIG_FILES = [os.getcwd() + "/nova.ini",
+                     os.path.expanduser(os.environ.get('ANSIBLE_CONFIG', "~/nova.ini")),
+                     "/etc/ansible/nova.ini"]
+
+NOVA_DEFAULTS = {
+    'auth_system': None,
+    'region_name': None,
+    'service_type': 'compute',
+}
+
+
+def nova_load_config_file():
+    p = ConfigParser.SafeConfigParser(NOVA_DEFAULTS)
+
+    for path in NOVA_CONFIG_FILES:
+        if os.path.exists(path):
+            p.read(path)
+            return p
+
+    return None
+
+
+def get_fallback(config, value, section="openstack"):
+    """
+    Get value from config object and return the value
+    or false
+    """
+    try:
+        return config.get(section, value)
+    except ConfigParser.NoOptionError:
+        return False
+
+
+def push(data, key, element):
+    """
+    Assist in adding items to a dictionary of lists
+    """
+    if (not element) or (not key):
+        return
+
+    if key in data:
+        data[key].append(element)
+    else:
+        data[key] = [element]
+
+
+def to_safe(word):
+    '''
+    Converts 'bad' characters in a string to underscores so they can
+    be used as Ansible groups
+    '''
+    return re.sub(r"[^A-Za-z0-9\-]", "_", word)
+
+
+def get_ips(server, access_ip=True):
+    """
+    Returns a list of the server's IPs, or the preferred
+    access IP
+    """
+    private = []
+    public = []
+    address_list = []
+    # Iterate through each server's network(s), get addresses and get type
+    addresses = getattr(server, 'addresses', {})
+    if len(addresses) > 0:
+        for network in addresses.itervalues():
+            for address in network:
+                if address.get('OS-EXT-IPS:type', False) == 'fixed':
+                    private.append(address['addr'])
+                elif address.get('OS-EXT-IPS:type', False) == 'floating':
+                    public.append(address['addr'])
+
+    if not access_ip:
+        address_list.append(server.accessIPv4)
+        address_list.extend(private)
+        address_list.extend(public)
+        return address_list
+
+    access_ip = None
+    # Select the preferred access IP
+    if server.accessIPv4:
+        access_ip = server.accessIPv4
+    if (not access_ip) and public and not (private and prefer_private):
+        access_ip = public[0]
+    if private and not access_ip:
+        access_ip = private[0]
+
+    return access_ip
+
+
+def get_metadata(server):
+    """Returns dictionary of all host metadata"""
+    get_ips(server, False)
+    results = {}
+    for key in vars(server):
+        # Extract value
+        value = getattr(server, key)
+
+        # Generate sanitized key
+        key = 'os_' + re.sub(r"[^A-Za-z0-9\-]", "_", key).lower()
+
+        # Add value to instance result (exclude manager class)
+        #TODO: maybe use value.__class__ or similar inside of key_name
+        if key != 'os_manager':
+            results[key] = value
+    return results
+
+config = nova_load_config_file()
+if not config:
+    sys.exit('Unable to find configfile in %s' % ', '.join(NOVA_CONFIG_FILES))
+
+# Load up connections info based on config and then environment
+# variables
+username = (get_fallback(config, 'username') or
+            os.environ.get('OS_USERNAME', None))
+api_key = (get_fallback(config, 'api_key') or
+           os.environ.get('OS_PASSWORD', None))
+auth_url = (get_fallback(config, 'auth_url') or
+            os.environ.get('OS_AUTH_URL', None))
+project_id = (get_fallback(config, 'project_id') or
+              os.environ.get('OS_TENANT_NAME', None))
+region_name = (get_fallback(config, 'region_name') or
+               os.environ.get('OS_REGION_NAME', None))
+auth_system = (get_fallback(config, 'auth_system') or
+               os.environ.get('OS_AUTH_SYSTEM', None))
+
+# Determine what type of IP is preferred to return
+prefer_private = False
+try:
+    prefer_private = config.getboolean('openstack', 'prefer_private')
+except ConfigParser.NoOptionError:
+    pass
+
+client = nova_client.Client(
+    version=config.get('openstack', 'version'),
+    username=username,
+    api_key=api_key,
+    auth_url=auth_url,
+    region_name=region_name,
+    project_id=project_id,
+    auth_system=auth_system,
+    service_type=config.get('openstack', 'service_type'),
+)
+
+# Default or added list option
+if (len(sys.argv) == 2 and sys.argv[1] == '--list') or len(sys.argv) == 1:
+    groups = {'_meta': {'hostvars': {}}}
+    # Cycle on servers
+    for server in client.servers.list():
+        access_ip = get_ips(server)
+
+        # Push to a group named after the server (a group of one)
+        push(groups, server.name, access_ip)
+
+        # Run through each metadata item and add instance to it
+        for key, value in server.metadata.iteritems():
+            composed_key = to_safe('tag_{0}_{1}'.format(key, value))
+            push(groups, composed_key, access_ip)
+
+        # Do special handling of group for backwards compat
+        # inventory groups
+        group = server.metadata['group'] if 'group' in server.metadata else 'undefined'
+        push(groups, group, access_ip)
+
+        # Add vars to _meta key for performance optimization in
+        # Ansible 1.3+
+        groups['_meta']['hostvars'][access_ip] = get_metadata(server)
+
+    # Return server list
+    print(json.dumps(groups, sort_keys=True, indent=2))
+    sys.exit(0)
+
+#####################################################
+# executed with a hostname as a parameter, return the
+# variables for that host
+
+elif len(sys.argv) == 3 and (sys.argv[1] == '--host'):
+    results = {}
+    ips = []
+    for server in client.servers.list():
+        if sys.argv[2] in (get_ips(server) or []):
+            results = get_metadata(server)
+    print(json.dumps(results, sort_keys=True, indent=2))
+    sys.exit(0)
+
+else:
+    print "usage: --list  ..OR.. --host <hostname>"
+    sys.exit(1)

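The script above follows Ansible's external inventory contract: --list prints all groups plus a _meta.hostvars map keyed by each server's access IP, and --host <ip> prints that one host's variables. A quick smoke test of the --list side (the script path is illustrative):

    import json
    import subprocess

    inv = 'inventory/openstack/hosts/nova.py'  # illustrative path
    groups = json.loads(subprocess.check_output([inv, '--list']))
    for ip, hostvars in groups['_meta']['hostvars'].iteritems():
        # every server attribute is exposed with an os_ prefix, e.g. os_name
        print ip, hostvars.get('os_name')
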
+ 0 - 94
lib/ansible_helper.rb

@@ -1,94 +0,0 @@
-require 'json'
-require 'parseconfig'
-
-module OpenShift
-  module Ops
-    class AnsibleHelper
-      MYDIR = File.expand_path(File.dirname(__FILE__))
-
-      attr_accessor :inventory, :extra_vars, :verbosity, :pipelining
-
-      def initialize(extra_vars={}, inventory=nil)
-        @extra_vars = extra_vars
-        @verbosity = '-vvvv'
-        @pipelining = true
-      end
-
-      def all_eof(files)
-        files.find { |f| !f.eof }.nil?
-      end
-
-      def run_playbook(playbook)
-        @inventory = 'inventory/hosts' if @inventory.nil?
-
-        # This is used instead of passing in the json on the cli to avoid quoting problems
-        tmpfile    = Tempfile.open('extra_vars') { |f| f.write(@extra_vars.to_json); f}
-
-        cmds = []
-        #cmds << 'set -x'
-        cmds << %Q[export ANSIBLE_FILTER_PLUGINS="#{Dir.pwd}/filter_plugins"]
-
-        # We need this for launching instances, otherwise conflicting keys and what not kill it
-        cmds << %q[export ANSIBLE_TRANSPORT="ssh"]
-        cmds << %q[export ANSIBLE_SSH_ARGS="-o ForwardAgent=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"]
-
-        # We need pipelining off so that we can do sudo to enable the root account
-        cmds << %Q[export ANSIBLE_SSH_PIPELINING='#{@pipelining.to_s}']
-        cmds << %Q[time ansible-playbook  -i #{@inventory} #{@verbosity} #{playbook} --extra-vars '@#{tmpfile.path}' ]
-        cmd = cmds.join(' ; ')
-
-        pid = spawn(cmd, :out => $stdout, :err => $stderr, :close_others => true)
-        _, state = Process.wait2(pid)
-
-        if 0 != state.exitstatus
-          raise %Q[Warning failed with exit code: #{state.exitstatus}
-
-#{cmd}
-
-extra_vars: #{@extra_vars.to_json}
-]
-        end
-      ensure
-        tmpfile.unlink if tmpfile
-      end
-
-      def merge_extra_vars_file(file)
-        vars = YAML.load_file(file)
-        @extra_vars.merge!(vars)
-      end
-
-      def self.for_gce
-        ah = AnsibleHelper.new
-
-        # GCE specific configs
-        gce_ini = "#{MYDIR}/../inventory/gce/gce.ini"
-        config  = ParseConfig.new(gce_ini)
-
-        if config['gce']['gce_project_id'].to_s.empty?
-          raise %Q['gce_project_id' not set in #{gce_ini}]
-        end
-        ah.extra_vars['gce_project_id'] = config['gce']['gce_project_id']
-
-        if config['gce']['gce_service_account_pem_file_path'].to_s.empty?
-          raise %Q['gce_service_account_pem_file_path' not set in #{gce_ini}]
-        end
-        ah.extra_vars['gce_pem_file'] = config['gce']['gce_service_account_pem_file_path']
-
-        if config['gce']['gce_service_account_email_address'].to_s.empty?
-          raise %Q['gce_service_account_email_address' not set in #{gce_ini}]
-        end
-        ah.extra_vars['gce_service_account_email'] = config['gce']['gce_service_account_email_address']
-
-        ah.inventory = 'inventory/gce/gce.py'
-        return ah
-      end
-
-      def self.for_aws
-        ah = AnsibleHelper.new
-
-        ah.inventory = 'inventory/aws/ec2.py'
-        return ah
-      end
-    end
-  end
-end

+ 0 - 148
lib/aws_command.rb

@@ -1,148 +0,0 @@
-require 'thor'
-
-require_relative 'aws_helper'
-require_relative 'launch_helper'
-
-module OpenShift
-  module Ops
-    class AwsCommand < Thor
-      # WARNING: we do not currently support environments with hyphens in the name
-      SUPPORTED_ENVS = %w(prod stg int ops twiest gshipley kint test jhonce amint tdint lint jdetiber)
-
-      option :type, :required => true, :enum => LaunchHelper.get_aws_host_types,
-             :desc => 'The host type of the new instances.'
-      option :env, :required => true, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment of the new instances.'
-      option :count, :default => 1, :aliases => '-c', :type => :numeric,
-             :desc => 'The number of instances to create'
-      option :tag, :type => :array,
-             :desc => 'The tag(s) to add to the new instances. Allowed characters are letters, numbers, and hyphens.'
-      desc "launch", "Launches instances."
-      def launch()
-        AwsHelper.check_creds()
-
-        # Expand all of the instance names so that we have a complete array
-        names = []
-        options[:count].times { names << "#{options[:env]}-#{options[:type]}-#{SecureRandom.hex(5)}" }
-
-        ah = AnsibleHelper.for_aws()
-
-        # AWS specific configs
-        ah.extra_vars['oo_new_inst_names'] = names
-        ah.extra_vars['oo_new_inst_tags'] = options[:tag]
-        ah.extra_vars['oo_env'] = options[:env]
-
-        # Add a created by tag
-        ah.extra_vars['oo_new_inst_tags'] = {} if ah.extra_vars['oo_new_inst_tags'].nil?
-
-        ah.extra_vars['oo_new_inst_tags']['created-by'] = ENV['USER']
-        ah.extra_vars['oo_new_inst_tags'].merge!(AwsHelper.generate_env_tag(options[:env]))
-        ah.extra_vars['oo_new_inst_tags'].merge!(AwsHelper.generate_host_type_tag(options[:type]))
-        ah.extra_vars['oo_new_inst_tags'].merge!(AwsHelper.generate_env_host_type_tag(options[:env], options[:type]))
-
-        puts
-        puts "Creating #{options[:count]} #{options[:type]} instance(s) in AWS..."
-
-        # Make sure we're completely up to date before launching
-        clear_cache()
-        ah.run_playbook("playbooks/aws/#{options[:type]}/launch.yml")
-      ensure
-        # This is so that if we a config right after a launch, the newly launched instances will be
-        # in the list.
-        clear_cache()
-      end
-
-      desc "clear-cache", 'Clear the inventory cache'
-      def clear_cache()
-        print "Clearing inventory cache... "
-        AwsHelper.clear_inventory_cache()
-        puts "Done."
-      end
-
-      option :name, :required => false, :type => :string,
-             :desc => 'The name of the instance to configure.'
-      option :env, :required => false, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment of the new instances.'
-      option :type, :required => false, :enum => LaunchHelper.get_aws_host_types,
-             :desc => 'The type of the instances to configure.'
-      desc "config", 'Configures instances.'
-      def config()
-        ah = AnsibleHelper.for_aws()
-
-        abort 'Error: you can\'t specify both --name and --type' unless options[:type].nil? || options[:name].nil?
-
-        abort 'Error: you can\'t specify both --name and --env' unless options[:env].nil? || options[:name].nil?
-
-        host_type = nil
-        if options[:name]
-          details = AwsHelper.get_host_details(options[:name])
-          ah.extra_vars['oo_host_group_exp'] = details['ec2_public_dns_name']
-          ah.extra_vars['oo_env'] = details['ec2_tag_environment']
-          host_type = details['ec2_tag_host-type']
-        elsif options[:type] && options[:env]
-          oo_env_host_type_tag = AwsHelper.generate_env_host_type_tag_name(options[:env], options[:type])
-          ah.extra_vars['oo_host_group_exp'] = "groups['#{oo_env_host_type_tag}']"
-          ah.extra_vars['oo_env'] = options[:env]
-          host_type = options[:type]
-        else
-          abort 'Error: you need to specify either --name or (--type and --env)'
-        end
-
-        puts
-        puts "Configuring #{options[:type]} instance(s) in AWS..."
-
-        ah.run_playbook("playbooks/aws/#{host_type}/config.yml")
-      end
-
-      option :env, :required => false, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment to list.'
-      desc "list", "Lists instances."
-      def list()
-        AwsHelper.check_creds()
-        hosts = AwsHelper.get_hosts()
-
-        hosts.delete_if { |h| h.env != options[:env] } unless options[:env].nil?
-
-        fmt_str = "%34s %5s %8s %17s %7s"
-
-        puts
-        puts fmt_str % ['Name','Env', 'State', 'IP Address', 'Created By']
-        puts fmt_str % ['----','---', '-----', '----------', '----------']
-        hosts.each { |h| puts fmt_str % [h.name, h.env, h.state, h.public_ip, h.created_by ] }
-        puts
-      end
-
-      desc "ssh", "Ssh to an instance"
-      def ssh(*ssh_ops, host)
-        if host =~ /^([\w\d_.\-]+)@([\w\d\-_.]+)/
-          user = $1
-          host = $2
-        end
-
-        details = AwsHelper.get_host_details(host)
-        abort "\nError: Instance [#{host}] is not RUNNING\n\n" unless details['ec2_state'] == 'running'
-
-        cmd = "ssh #{ssh_ops.join(' ')}"
-
-        if user.nil?
-          cmd += " "
-        else
-          cmd += " #{user}@"
-        end
-
-        cmd += "#{details['ec2_ip_address']}"
-
-        exec(cmd)
-      end
-
-      desc 'types', 'Displays instance types'
-      def types()
-        puts
-        puts "Available Host Types"
-        puts "--------------------"
-        LaunchHelper.get_aws_host_types.each { |t| puts "  #{t}" }
-        puts
-      end
-    end
-  end
-end

+ 0 - 85
lib/aws_helper.rb

@@ -1,85 +0,0 @@
-require 'fileutils'
-
-module OpenShift
-  module Ops
-    class AwsHelper
-      MYDIR = File.expand_path(File.dirname(__FILE__))
-
-      def self.get_list()
-        cmd = "#{MYDIR}/../inventory/aws/ec2.py --list"
-        hosts = %x[#{cmd} 2>&1]
-
-        raise "Error: failed to list hosts\n#{hosts}" unless $?.exitstatus == 0
-        return JSON.parse(hosts)
-      end
-
-      def self.get_hosts()
-        hosts = get_list()
-
-        retval = []
-        hosts['_meta']['hostvars'].each do |host, info|
-          retval << OpenStruct.new({
-            :name        => info['ec2_tag_Name']        || 'UNSET',
-            :env         => info['ec2_tag_environment'] || 'UNSET',
-            :public_ip   => info['ec2_ip_address'],
-            :public_dns  => info['ec2_public_dns_name'],
-            :state       => info['ec2_state'],
-            :created_by  => info['ec2_tag_created-by']
-          })
-        end
-
-        retval.sort_by! { |h| [h.env, h.state, h.name] }
-
-        return retval
-      end
-
-      def self.get_host_details(host)
-        hosts = get_list()
-        dns_names = hosts["tag_Name_#{host}"]
-
-        raise "Host not found [#{host}]" if dns_names.nil?
-        raise "Multiple entries found for [#{host}]" if dns_names.size > 1
-
-        return hosts['_meta']['hostvars'][dns_names.first]
-      end
-
-      def self.check_creds()
-        raise "AWS_ACCESS_KEY_ID environment variable must be set" if ENV['AWS_ACCESS_KEY_ID'].nil?
-        raise "AWS_SECRET_ACCESS_KEY environment variable must be set" if ENV['AWS_SECRET_ACCESS_KEY'].nil?
-      end
-
-      def self.clear_inventory_cache()
-        path = "#{ENV['HOME']}/.ansible/tmp"
-        cache_files = ["#{path}/ansible-ec2.cache", "#{path}/ansible-ec2.index"]
-        FileUtils.rm_f(cache_files)
-      end
-
-      def self.generate_env_tag(env)
-        return { "environment" => env }
-      end
-
-      def self.generate_env_tag_name(env)
-        h = generate_env_tag(env)
-        return "tag_#{h.keys.first}_#{h.values.first}"
-      end
-
-      def self.generate_host_type_tag(host_type)
-        return { "host-type" => host_type }
-      end
-
-      def self.generate_host_type_tag_name(host_type)
-        h = generate_host_type_tag(host_type)
-        return "tag_#{h.keys.first}_#{h.values.first}"
-      end
-
-      def self.generate_env_host_type_tag(env, host_type)
-        return { "env-host-type" => "#{env}-#{host_type}" }
-      end
-
-      def self.generate_env_host_type_tag_name(env, host_type)
-        h = generate_env_host_type_tag(env, host_type)
-        return "tag_#{h.keys.first}_#{h.values.first}"
-      end
-    end
-  end
-end

+ 0 - 228
lib/gce_command.rb

@@ -1,228 +0,0 @@
-require 'thor'
-require 'securerandom'
-require 'fileutils'
-
-require_relative 'gce_helper'
-require_relative 'launch_helper'
-require_relative 'ansible_helper'
-
-module OpenShift
-  module Ops
-    class GceCommand < Thor
-      # WARNING: we do not currently support environments with hyphens in the name
-      SUPPORTED_ENVS = %w(prod stg int twiest gshipley kint test jhonce amint tdint lint jdetiber)
-
-      option :type, :required => true, :enum => LaunchHelper.get_gce_host_types,
-             :desc => 'The host type of the new instances.'
-      option :env, :required => true, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment of the new instances.'
-      option :count, :default => 1, :aliases => '-c', :type => :numeric,
-             :desc => 'The number of instances to create'
-      option :tag, :type => :array,
-             :desc => 'The tag(s) to add to the new instances. Allowed characters are letters, numbers, and hyphens.'
-      desc "launch", "Launches instances."
-      def launch()
-        # Expand all of the instance names so that we have a complete array
-        names = []
-        options[:count].times { names << "#{options[:env]}-#{options[:type]}-#{SecureRandom.hex(5)}" }
-
-        ah = AnsibleHelper.for_gce()
-
-        # GCE specific configs
-        ah.extra_vars['oo_new_inst_names'] = names
-        ah.extra_vars['oo_new_inst_tags'] = options[:tag]
-        ah.extra_vars['oo_env'] = options[:env]
-
-        # Add a created by tag
-        ah.extra_vars['oo_new_inst_tags'] = [] if ah.extra_vars['oo_new_inst_tags'].nil?
-
-        ah.extra_vars['oo_new_inst_tags'] << "created-by-#{ENV['USER']}"
-        ah.extra_vars['oo_new_inst_tags'] << GceHelper.generate_env_tag(options[:env])
-        ah.extra_vars['oo_new_inst_tags'] << GceHelper.generate_host_type_tag(options[:type])
-        ah.extra_vars['oo_new_inst_tags'] << GceHelper.generate_env_host_type_tag(options[:env], options[:type])
-
-        puts
-        puts "Creating #{options[:count]} #{options[:type]} instance(s) in GCE..."
-
-        ah.run_playbook("playbooks/gce/#{options[:type]}/launch.yml")
-      end
-
-
-      option :name, :required => false, :type => :string,
-             :desc => 'The name of the instance to configure.'
-      option :env, :required => false, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment of the new instances.'
-      option :type, :required => false, :enum => LaunchHelper.get_gce_host_types,
-             :desc => 'The type of the instances to configure.'
-      desc "config", 'Configures instances.'
-      def config()
-        ah = AnsibleHelper.for_gce()
-
-        abort 'Error: you can\'t specify both --name and --type' unless options[:type].nil? || options[:name].nil?
-
-        abort 'Error: you can\'t specify both --name and --env' unless options[:env].nil? || options[:name].nil?
-
-        host_type = nil
-        if options[:name]
-          details = GceHelper.get_host_details(options[:name])
-          ah.extra_vars['oo_host_group_exp'] = options[:name]
-          ah.extra_vars['oo_env'] = details['env']
-          host_type = details['host-type']
-        elsif options[:type] && options[:env]
-          oo_env_host_type_tag = GceHelper.generate_env_host_type_tag_name(options[:env], options[:type])
-          ah.extra_vars['oo_host_group_exp'] = "groups['#{oo_env_host_type_tag}']"
-          ah.extra_vars['oo_env'] = options[:env]
-          host_type = options[:type]
-        else
-          abort 'Error: you need to specify either --name or (--type and --env)'
-        end
-
-        puts
-        puts "Configuring #{options[:type]} instance(s) in GCE..."
-
-        ah.run_playbook("playbooks/gce/#{host_type}/config.yml")
-      end
-
-      option :name, :required => false, :type => :string,
-             :desc => 'The name of the instance to terminate.'
-      option :env, :required => false, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment of the new instances.'
-      option :type, :required => false, :enum => LaunchHelper.get_gce_host_types,
-             :desc => 'The type of the instances to configure.'
-      option :confirm, :required => false, :type => :boolean,
-             :desc => 'Terminate without interactive confirmation'
-      desc "terminate", 'Terminate instances'
-      def terminate()
-        ah = AnsibleHelper.for_gce()
-
-        abort 'Error: you can\'t specify both --name and --type' unless options[:type].nil? || options[:name].nil?
-
-        abort 'Error: you can\'t specify both --name and --env' unless options[:env].nil? || options[:name].nil?
-
-        host_type = nil
-        if options[:name]
-          details = GceHelper.get_host_details(options[:name])
-          ah.extra_vars['oo_host_group_exp'] = options[:name]
-          ah.extra_vars['oo_env'] = details['env']
-          host_type = details['host-type']
-        elsif options[:type] && options[:env]
-          oo_env_host_type_tag = GceHelper.generate_env_host_type_tag_name(options[:env], options[:type])
-          ah.extra_vars['oo_host_group_exp'] = "groups['#{oo_env_host_type_tag}']"
-          ah.extra_vars['oo_env'] = options[:env]
-          host_type = options[:type]
-        else
-          abort 'Error: you need to specify either --name or (--type and --env)'
-        end
-
-        puts
-        puts "Terminating #{options[:type]} instance(s) in GCE..."
-
-        ah.run_playbook("playbooks/gce/#{host_type}/terminate.yml")
-      end
-
-      option :env, :required => false, :aliases => '-e', :enum => SUPPORTED_ENVS,
-             :desc => 'The environment to list.'
-      desc "list", "Lists instances."
-      def list()
-        hosts = GceHelper.get_hosts()
-
-        hosts.delete_if { |h| h.env != options[:env] } unless options[:env].nil?
-
-        fmt_str = "%34s %5s %8s %17s %7s"
-
-        puts
-        puts fmt_str % ['Name','Env', 'State', 'IP Address', 'Created By']
-        puts fmt_str % ['----','---', '-----', '----------', '----------']
-        hosts.each { |h| puts fmt_str % [h.name, h.env, h.state, h.public_ip, h.created_by ] }
-        puts
-      end
-
-      option :file, :required => true, :type => :string,
-             :desc => 'The name of the file to copy.'
-      option :dest, :required => false, :type => :string,
-             :desc => 'A relative path where files are written to.'
-      desc "scp_from", "scp files from an instance"
-      def scp_from(*ssh_ops, host)
-        if host =~ /^([\w\d_.\-]+)@([\w\d\-_.]+)$/
-          user = $1
-          host = $2
-        end
-
-        path_to_file = options['file']
-        dest = options['dest']
-
-        details = GceHelper.get_host_details(host)
-        abort "\nError: Instance [#{host}] is not RUNNING\n\n" unless details['gce_status'] == 'RUNNING'
-
-        cmd = "scp #{ssh_ops.join(' ')}"
-
-        if user.nil?
-          cmd += " "
-        else
-          cmd += " #{user}@"
-        end
-
-        if dest.nil?
-          download = File.join(Dir.pwd, 'download')
-          FileUtils.mkdir_p(download) unless File.exists?(download)
-          cmd += "#{details['gce_public_ip']}:#{path_to_file} download/"
-        else
-          cmd += "#{details['gce_public_ip']}:#{path_to_file} #{File.expand_path(dest)}"
-        end
-
-        exec(cmd)
-      end
-
-      desc "ssh", "Ssh to an instance"
-      def ssh(*ssh_ops, host)
-        if host =~ /^([\w\d_.\-]+)@([\w\d\-_.]+)/
-          user = $1
-          host = $2
-        end
-
-        details = GceHelper.get_host_details(host)
-        abort "\nError: Instance [#{host}] is not RUNNING\n\n" unless details['gce_status'] == 'RUNNING'
-
-        cmd = "ssh #{ssh_ops.join(' ')}"
-
-        if user.nil?
-          cmd += " "
-        else
-          cmd += " #{user}@"
-        end
-
-        cmd += "#{details['gce_public_ip']}"
-
-        exec(cmd)
-      end
-
-      option :name, :required => true, :aliases => '-n', :type => :string,
-             :desc => 'The name of the instance.'
-      desc 'details', 'Displays details about an instance.'
-      def details()
-        name = options[:name]
-
-        details = GceHelper.get_host_details(name)
-
-        key_size = details.keys.max_by { |k| k.size }.size
-
-        header = "Details for #{name}"
-        puts
-        puts header
-        header.size.times { print '-' }
-        puts
-        details.each { |k,v| printf("%#{key_size + 2}s: %s\n", k, v) }
-        puts
-      end
-
-      desc 'types', 'Displays instance types'
-      def types()
-        puts
-        puts "Available Host Types"
-        puts "--------------------"
-        LaunchHelper.get_gce_host_types.each { |t| puts "  #{t}" }
-        puts
-      end
-    end
-  end
-end

+ 0 - 94
lib/gce_helper.rb

@@ -1,94 +0,0 @@
-require 'ostruct'
-
-module OpenShift
-  module Ops
-    class GceHelper
-      MYDIR = File.expand_path(File.dirname(__FILE__))
-
-      def self.get_list()
-        cmd = "#{MYDIR}/../inventory/gce/gce.py --list"
-        hosts = %x[#{cmd} 2>&1]
-
-        raise "Error: failed to list hosts\n#{hosts}" unless $?.exitstatus == 0
-
-        return JSON.parse(hosts)
-      end
-
-      def self.get_tag(tags, selector)
-        tags.each do |tag|
-          return $1 if tag =~ selector
-        end
-
-        return nil
-      end
-
-      def self.get_hosts()
-        hosts = get_list()
-
-        retval = []
-        hosts['_meta']['hostvars'].each do |host, info|
-          retval << OpenStruct.new({
-            :name        => info['gce_name'],
-            :env         => get_tag(info['gce_tags'], /^env-(\w+)$/) || 'UNSET',
-            :public_ip   => info['gce_public_ip'],
-            :state       => info['gce_status'],
-            :created_by  => get_tag(info['gce_tags'], /^created-by-(\w+)$/) || 'UNSET',
-          })
-        end
-
-        retval.sort_by! { |h| [h.env, h.state, h.name] }
-
-        return retval
-
-      end
-
-      def self.get_host_details(host)
-        cmd = "#{MYDIR}/../inventory/gce/gce.py --host #{host}"
-        details = %x[#{cmd} 2>&1]
-
-        raise "Error: failed to get host details\n#{details}" unless $?.exitstatus == 0
-
-        retval = JSON.parse(details)
-
-        raise "Error: host not found [#{host}]" if retval.empty?
-
-        # Convert OpenShift specific tags to entries
-        retval['gce_tags'].each do |tag|
-          if tag =~ /\Ahost-type-([\w\d-]+)\z/
-            retval['host-type'] = $1
-          end
-
-          if tag =~ /\Aenv-([\w\d]+)\z/
-            retval['env'] = $1
-          end
-        end
-
-        return retval
-      end
-
-      def self.generate_env_tag(env)
-        return "env-#{env}"
-      end
-
-      def self.generate_env_tag_name(env)
-        return "tag_#{generate_env_tag(env)}"
-      end
-
-      def self.generate_host_type_tag(host_type)
-        return "host-type-#{host_type}"
-      end
-
-      def self.generate_host_type_tag_name(host_type)
-        return "tag_#{generate_host_type_tag(host_type)}"
-      end
-
-      def self.generate_env_host_type_tag(env, host_type)
-        return "env-host-type-#{env}-#{host_type}"
-      end
-
-      def self.generate_env_host_type_tag_name(env, host_type)
-        return "tag_#{generate_env_host_type_tag(env, host_type)}"
-      end
-    end
-  end
-end

+ 0 - 30
lib/launch_helper.rb

@@ -1,30 +0,0 @@
-module OpenShift
-  module Ops
-    class LaunchHelper
-      MYDIR = File.expand_path(File.dirname(__FILE__))
-
-      def self.expand_name(name)
-        return [name] unless name =~ /^([a-zA-Z0-9\-]+)\{(\d+)-(\d+)\}$/
-
-        # Regex matched, so grab the values
-        start_num = $2
-        end_num = $3
-
-        retval = []
-        start_num.upto(end_num) do |i|
-          retval << "#{$1}#{i}"
-        end
-
-        return retval
-      end
-
-      def self.get_gce_host_types()
-        return Dir.glob("#{MYDIR}/../playbooks/gce/*").map { |d| File.basename(d) }
-      end
-
-      def self.get_aws_host_types()
-        return Dir.glob("#{MYDIR}/../playbooks/aws/*").map { |d| File.basename(d) }
-      end
-    end
-  end
-end

+ 1 - 1
playbooks/aws/ansible-tower/launch.yml

@@ -22,7 +22,7 @@
         group_id: "{{ oo_security_group_ids }}"
         instance_type: c4.xlarge
         image: "{{ rhel7_ami }}"
-        count: "{{ oo_new_inst_names | oo_len }}"
+        count: "{{ oo_new_inst_names | length }}"
         user_data: "{{ lookup('file', user_data_file) }}"
         wait: yes
         assign_public_ip: "{{ oo_assign_public_ip }}"

+ 1 - 0
playbooks/aws/openshift-cluster/config.yml

@@ -32,5 +32,6 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ ec2_private_ip_address }}"
     openshift_public_hostname: "{{ ec2_ip_address }}"

+ 8 - 0
playbooks/aws/openshift-cluster/launch.yml

@@ -25,6 +25,14 @@
       cluster: "{{ cluster_id }}"
       type: "{{ k8s_type }}"
 
+  - set_fact:
+      a_master: "{{ master_names[0] }}"
+  - add_host: name={{ a_master }} groups=service_master
+
 - include: update.yml
 
+- include: ../../common/openshift-cluster/create_services.yml
+  vars:
+     g_svc_master: "{{ service_master }}"
+
 - include: list.yml

+ 28 - 0
playbooks/aws/openshift-cluster/service.yml

@@ -0,0 +1,28 @@
+---
+- name: Call same systemctl command for openshift on all instance(s)
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - fail: msg="cluster_id is required to be injected in this playbook"
+    when: cluster_id is not defined
+
+  - name: Evaluate g_service_masters
+    add_host:
+      name: "{{ item }}"
+      groups: g_service_masters
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type_{{ cluster_id }}-openshift-master"] | default([])
+
+  - name: Evaluate g_service_nodes
+    add_host:
+      name: "{{ item }}"
+      groups: g_service_nodes
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type_{{ cluster_id }}-openshift-node"] | default([])
+
+- include: ../../common/openshift-node/service.yml
+- include: ../../common/openshift-master/service.yml

+ 2 - 1
playbooks/aws/openshift-cluster/tasks/launch_instances.yml

@@ -79,13 +79,14 @@
     group: "{{ ec2_security_groups }}"
     instance_type: "{{ ec2_instance_type }}"
     image: "{{ latest_ami }}"
-    count: "{{ instances | oo_len }}"
+    count: "{{ instances | length }}"
     vpc_subnet_id: "{{ ec2_vpc_subnet | default(omit, true) }}"
     assign_public_ip: "{{ ec2_assign_public_ip | default(omit, true) }}"
     user_data: "{{ user_data }}"
     wait: yes
     instance_tags:
       created-by: "{{ created_by }}"
+      environment: "{{ env }}"
       env: "{{ env }}"
       host-type: "{{ host_type }}"
       env-host-type: "{{ env_host_type }}"

+ 5 - 5
playbooks/aws/openshift-master/launch.yml

@@ -4,10 +4,10 @@
   connection: local
   gather_facts: no
 
-# TODO: modify atomic_ami based on deployment_type
+# TODO: modify g_ami based on deployment_type
   vars:
     inst_region: us-east-1
-    atomic_ami: ami-86781fee
+    g_ami: ami-86781fee
     user_data_file: user_data.txt
 
   tasks:
@@ -18,13 +18,13 @@
         keypair: libra
         group: ['public']
         instance_type: m3.large
-        image: "{{ atomic_ami }}"
-        count: "{{ oo_new_inst_names | oo_len }}"
+        image: "{{ g_ami }}"
+        count: "{{ oo_new_inst_names | length }}"
         user_data: "{{ lookup('file', user_data_file) }}"
         wait: yes
       register: ec2
 
-    - name: Add new instances public IPs to the atomic proxy host group
+    - name: Add new instances public IPs to the host group
       add_host: "hostname={{ item.public_ip }} groupname=new_ec2_instances"
       with_items: ec2.instances
 

+ 1 - 0
playbooks/aws/openshift-node/config.yml

@@ -21,5 +21,6 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ ec2_private_ip_address }}"
     openshift_public_hostname: "{{ ec2_ip_address }}"

+ 5 - 5
playbooks/aws/openshift-node/launch.yml

@@ -4,10 +4,10 @@
   connection: local
   gather_facts: no
 
-# TODO: modify atomic_ami based on deployment_type
+# TODO: modify g_ami based on deployment_type
   vars:
     inst_region: us-east-1
-    atomic_ami: ami-86781fee
+    g_ami: ami-86781fee
     user_data_file: user_data.txt
 
   tasks:
@@ -18,13 +18,13 @@
         keypair: libra
         group: ['public']
         instance_type: m3.large
-        image: "{{ atomic_ami }}"
-        count: "{{ oo_new_inst_names | oo_len }}"
+        image: "{{ g_ami }}"
+        count: "{{ oo_new_inst_names | length }}"
         user_data: "{{ lookup('file', user_data_file) }}"
         wait: yes
       register: ec2
 
-    - name: Add new instances public IPs to the atomic proxy host group
+    - name: Add new instances public IPs to the host group
       add_host:
         hostname: "{{ item.public_ip }}"
         groupname: new_ec2_instances

+ 0 - 20
playbooks/aws/os2-atomic-proxy/config.yml

@@ -1,20 +0,0 @@
----
-- name: "populate oo_hosts_to_config host group if needed"
-  hosts: localhost
-  gather_facts: no
-  tasks:
-  - name: Evaluate oo_host_group_exp if it's set
-    add_host: "name={{ item }} groups=oo_hosts_to_config"
-    with_items: "{{ oo_host_group_exp | default(['']) }}"
-    when: oo_host_group_exp is defined
-
-- name: "Configure instances"
-  hosts: oo_hosts_to_config
-  connection: ssh
-  user: root
-  vars_files:
-    - vars.yml
-    - "vars.{{ oo_env }}.yml"
-  roles:
-    - atomic_base
-    - atomic_proxy

+ 0 - 97
playbooks/aws/os2-atomic-proxy/launch.yml

@@ -1,97 +0,0 @@
----
-- name: Launch instance(s)
-  hosts: localhost
-  connection: local
-  gather_facts: no
-
-  vars:
-    inst_region: us-east-1
-    atomic_ami: ami-8e239fe6
-    user_data_file: user_data.txt
-    oo_vpc_subnet_id:    # Purposely left blank, these are here to be overridden in env vars_files
-    oo_assign_public_ip: # Purposely left blank, these are here to be overridden in env vars_files
-
-  vars_files:
-    - vars.yml
-    - "vars.{{ oo_env }}.yml"
-
-  tasks:
-    - name: Launch instances in VPC
-      ec2:
-        state: present
-        region: "{{ inst_region }}"
-        keypair: mmcgrath_libra
-        group_id: "{{ oo_security_group_ids }}"
-        instance_type: m3.large
-        image: "{{ atomic_ami }}"
-        count: "{{ oo_new_inst_names | oo_len }}"
-        user_data: "{{ lookup('file', user_data_file) }}"
-        wait: yes
-        assign_public_ip: "{{ oo_assign_public_ip }}"
-        vpc_subnet_id: "{{ oo_vpc_subnet_id }}"
-      when: oo_vpc_subnet_id
-      register: ec2_vpc
-
-    - set_fact:
-        ec2: "{{ ec2_vpc }}"
-      when: oo_vpc_subnet_id
-
-    - name: Launch instances in Classic
-      ec2:
-        state: present
-        region: "{{ inst_region }}"
-        keypair: mmcgrath_libra
-        group: ['Libra', '{{ oo_env }}', '{{ oo_env }}_proxy', '{{ oo_env }}_proxy_atomic']
-        instance_type: m3.large
-        image: "{{ atomic_ami }}"
-        count: "{{ oo_new_inst_names | oo_len }}"
-        user_data: "{{ lookup('file', user_data_file) }}"
-        wait: yes
-      when: not oo_vpc_subnet_id
-      register: ec2_classic
-
-    - set_fact:
-        ec2: "{{ ec2_classic }}"
-      when: not oo_vpc_subnet_id
-
-    - name: Add new instances public IPs to the atomic proxy host group
-      add_host: "hostname={{ item.public_ip }} groupname=new_ec2_instances"
-      with_items: ec2.instances
-
-    - name: Add Name and environment tags to instances
-      ec2_tag: "resource={{ item.1.id }} region={{ inst_region }} state=present"
-      with_together:
-        - oo_new_inst_names
-        - ec2.instances
-      args:
-        tags:
-          Name: "{{ item.0 }}"
-
-    - name: Add other tags to instances
-      ec2_tag: "resource={{ item.id }} region={{ inst_region }} state=present"
-      with_items: ec2.instances
-      args:
-        tags: "{{ oo_new_inst_tags }}"
-
-    - name: Add new instances public IPs to oo_hosts_to_config
-      add_host: "hostname={{ item.0 }} ansible_ssh_host={{ item.1.public_ip }} groupname=oo_hosts_to_config"
-      with_together:
-        - oo_new_inst_names
-        - ec2.instances
-
-    - debug: var=ec2
-
-    - name: Wait for ssh
-      wait_for: "port=22 host={{ item.public_ip }}"
-      with_items: ec2.instances
-
-    - name: Wait for root user setup
-      command: "ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null root@{{ item.public_ip }} echo root user is setup"
-      register: result
-      until: result.rc == 0
-      retries: 20
-      delay: 10
-      with_items: ec2.instances
-
-# Apply the configs, seprate so that just the configs can be run by themselves
-- include: config.yml

+ 0 - 6
playbooks/aws/os2-atomic-proxy/user_data.txt

@@ -1,6 +0,0 @@
-#cloud-config
-disable_root: 0
-
-system_info:
-  default_user:
-    name: root

+ 0 - 3
playbooks/aws/os2-atomic-proxy/vars.int.yml

@@ -1,3 +0,0 @@
----
-oo_env_long: integration
-oo_zabbix_hostgroups: ['INT Environment']

+ 0 - 3
playbooks/aws/os2-atomic-proxy/vars.prod.yml

@@ -1,3 +0,0 @@
----
-oo_env_long: production
-oo_zabbix_hostgroups: ['PROD Environment']

+ 0 - 10
playbooks/aws/os2-atomic-proxy/vars.stg.yml

@@ -1,10 +0,0 @@
----
-oo_env_long: staging
-oo_zabbix_hostgroups: ['STG Environment']
-oo_vpc_subnet_id: subnet-700bdd07
-oo_assign_public_ip: yes
-oo_security_group_ids:
-  - sg-02c2f267 # Libra (vpc)
-  - sg-f0bfbe95 # stg (vpc)
-  - sg-a3bfbec6 # stg_proxy (vpc)
-  - sg-d4bfbeb1 # stg_proxy_atomic (vpc)

+ 3 - 1
playbooks/byo/openshift-node/config.yml

@@ -10,12 +10,14 @@
     with_items: groups.nodes
   - name: Evaluate oo_first_master
     add_host:
-      name: "{{ groups.masters[0] }}"
+      name: "{{ item }}"
       groups: oo_first_master
+    with_items: groups.masters.0
 
 
 - include: ../../common/openshift-node/config.yml
   vars:
+    openshift_first_master: "{{ groups.masters.0 }}"
     openshift_cluster_id: "{{ cluster_id | default('default') }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"

+ 8 - 0
playbooks/common/openshift-cluster/create_services.yml

@@ -0,0 +1,8 @@
+---
+- name: Deploy OpenShift Services
+  hosts: "{{ g_svc_master }}"
+  connection: ssh
+  gather_facts: yes
+  roles:
+  - openshift_registry
+  - openshift_router

+ 2 - 3
playbooks/common/openshift-master/config.yml

@@ -1,11 +1,10 @@
 ---
 - name: Configure master instances
   hosts: oo_masters_to_config
-  vars:
-    openshift_sdn_master_url: https://{{ openshift.common.hostname }}:4001
   roles:
   - openshift_master
-  - { role: openshift_sdn_master, when: openshift.common.use_openshift_sdn | bool }
+  - role: fluentd_master
+    when: openshift.common.use_fluentd | bool
   tasks:
   - name: Create group for deployment type
     group_by: key=oo_masters_deployment_type_{{ openshift.common.deployment_type }}

+ 18 - 0
playbooks/common/openshift-master/service.yml

@@ -0,0 +1,18 @@
+---
+- name: Populate g_service_masters host group if needed
+  hosts: localhost
+  gather_facts: no
+  tasks:
+  - fail: msg="new_cluster_state is required to be injected in this playbook"
+    when: new_cluster_state is not defined
+
+  - name: Evaluate g_service_masters
+    add_host: name={{ item }} groups=g_service_masters
+    with_items: oo_host_group_exp | default([])
+
+- name: Change openshift-master state on master instance(s)
+  hosts: g_service_masters
+  connection: ssh
+  gather_facts: no
+  tasks:
+    - service: name=openshift-master state="{{ new_cluster_state }}"

+ 43 - 38
playbooks/common/openshift-node/config.yml

@@ -4,9 +4,9 @@
   roles:
   - openshift_facts
   tasks:
-  # Since the master is registering the nodes before they are configured, we
-  # need to make sure to set the node properties beforehand if we do not want
-  # the defaults
+  # Since the master is generating the node certificates before they are
+  # configured, we need to make sure to set the node properties beforehand if
+  # we do not want the defaults
   - openshift_facts:
       role: "{{ item.role }}"
       local_facts: "{{ item.local_facts }}"
@@ -18,13 +18,26 @@
           deployment_type: "{{ openshift_deployment_type }}"
       - role: node
         local_facts:
-          external_id: "{{ openshift_node_external_id | default(None) }}"
           resources_cpu: "{{ openshift_node_resources_cpu | default(None) }}"
           resources_memory: "{{ openshift_node_resources_memory | default(None) }}"
           pod_cidr: "{{ openshift_node_pod_cidr | default(None) }}"
           labels: "{{ openshift_node_labels | default(None) }}"
           annotations: "{{ openshift_node_annotations | default(None) }}"
-
+  - name: Check status of node certificates
+    stat:
+      path: "{{ item }}"
+    with_items:
+    - "/etc/openshift/node/node.key"
+    - "/etc/openshift/node/node.kubeconfig"
+    - "/etc/openshift/node/ca.crt"
+    - "/etc/openshift/node/server.key"
+    register: stat_result
+  - set_fact:
+      certs_missing: "{{ stat_result.results | map(attribute='stat.exists')
+                         | list | intersect([false])}}"
+      node_subdir: node-{{ openshift.common.hostname }}
+      config_dir: /etc/openshift/generated-configs/node-{{ openshift.common.hostname }}
+      node_cert_dir: /etc/openshift/node
 
 - name: Create temp directory for syncing certs
   hosts: localhost
@@ -37,65 +50,59 @@
     register: mktemp
     changed_when: False
 
-
 - name: Register nodes
   hosts: oo_first_master
   vars:
-    openshift_nodes: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config']) }}"
+    nodes_needing_certs: "{{ hostvars
+                             | oo_select_keys(groups['oo_nodes_to_config'])
+                             | oo_filter_list(filter_attr='certs_missing') }}"
+    openshift_nodes: "{{ hostvars
+                         | oo_select_keys(groups['oo_nodes_to_config']) }}"
     sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}"
   roles:
   - openshift_register_nodes
-  tasks:
-  # TODO: update so that we only sync necessary configs/directories, currently
-  # we sync for all nodes in oo_nodes_to_config.  We will need to inspect the
-  # configs on the nodes to make the determination on whether to sync or not.
-  - name: Create the temp directory on the master
-    file:
-      path: "{{ sync_tmpdir }}"
-      owner: "{{ ansible_ssh_user }}"
-      mode: 0700
-      state: directory
-    changed_when: False
-
+  post_tasks:
   - name: Create a tarball of the node config directories
-    command: tar -czvf {{ sync_tmpdir }}/{{ item.openshift.common.hostname }}.tgz ./
+    command: >
+      tar -czvf {{ item.config_dir }}.tgz
+        --transform 's|system:{{ item.node_subdir }}|node|'
+        -C {{ item.config_dir }} .
     args:
-      chdir: "{{ openshift_cert_dir }}/node-{{ item.openshift.common.hostname }}"
-    with_items: openshift_nodes
-    changed_when: False
+      creates: "{{ item.config_dir }}.tgz"
+    with_items: nodes_needing_certs
 
   - name: Retrieve the node config tarballs from the master
     fetch:
-      src: "{{ sync_tmpdir }}/{{ item.openshift.common.hostname }}.tgz"
+      src: "{{ item.config_dir }}.tgz"
       dest: "{{ sync_tmpdir }}/"
+      flat: yes
       fail_on_missing: yes
       validate_checksum: yes
-    with_items: openshift_nodes
-    changed_when: False
-
+    with_items: nodes_needing_certs
 
 - name: Configure node instances
   hosts: oo_nodes_to_config
-  gather_facts: no
   vars:
-    sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}/{{ groups['oo_first_master'][0] }}/{{ hostvars.localhost.mktemp.stdout }}"
-    openshift_sdn_master_url: "https://{{ hostvars[groups['oo_first_master'][0]].openshift.common.hostname }}:4001"
+    sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}"
+    openshift_node_master_api_url: "{{ hostvars[openshift_first_master].openshift.master.api_url }}"
   pre_tasks:
   - name: Ensure certificate directory exists
     file:
-      path: "{{ openshift_node_cert_dir }}"
+      path: "{{ node_cert_dir }}"
       state: directory
 
-  # TODO: notify restart openshift-node and/or restart openshift-sdn-node,
+  # TODO: notify restart openshift-node
   # possibly test service started time against certificate/config file
-  # timestamps in openshift-node or openshift-sdn-node to trigger notify
+  # timestamps in openshift-node to trigger notify
   - name: Unarchive the tarball on the node
     unarchive:
-      src: "{{ sync_tmpdir }}/{{ openshift.common.hostname }}.tgz"
-      dest: "{{ openshift_node_cert_dir }}"
+      src: "{{ sync_tmpdir }}/{{ node_subdir }}.tgz"
+      dest: "{{ node_cert_dir }}"
+    when: certs_missing
   roles:
   - openshift_node
-  - { role: openshift_sdn_node, when: openshift.common.use_openshift_sdn | bool }
+  - role: fluentd_node
+    when: openshift.common.use_fluentd | bool
   tasks:
   - name: Create group for deployment type
     group_by: key=oo_nodes_deployment_type_{{ openshift.common.deployment_type }}
@@ -110,7 +117,6 @@
   - file: name={{ sync_tmpdir }} state=absent
     changed_when: False
 
-
 - name: Delete temporary directory on localhost
   hosts: localhost
   connection: local
@@ -120,7 +126,6 @@
   - file: name={{ mktemp.stdout }} state=absent
     changed_when: False
 
-
 # Additional config for online type deployments
 - name: Additional instance config
   hosts: oo_nodes_deployment_type_online

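The certs_missing expression in this play is a set-membership test written as a Jinja2 filter chain. Unrolled into plain Python (the stat values are illustrative), it reduces to:

    # map(attribute='stat.exists') | list | intersect([false]), unrolled
    stat_exists = [True, False, True, True]  # illustrative stat results
    certs_missing = list(set(stat_exists) & set([False]))
    print bool(certs_missing)  # True: at least one expected file is absent

A non-empty intersection with [false] means at least one of the four node files was missing, so the master generates and ships that node's certificate tarball.
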
+ 18 - 0
playbooks/common/openshift-node/service.yml

@@ -0,0 +1,18 @@
+---
+- name: Populate g_service_nodes host group if needed
+  hosts: localhost
+  gather_facts: no
+  tasks:
+  - fail: msg="new_cluster_state is required to be injected in this playbook"
+    when: new_cluster_state is not defined
+
+  - name: Evaluate g_service_nodes
+    add_host: name={{ item }} groups=g_service_nodes
+    with_items: oo_host_group_exp | default([])
+
+- name: Change openshift-node state on node instance(s)
+  hosts: g_service_nodes
+  connection: ssh
+  gather_facts: no
+  tasks:
+    - service: name=openshift-node state="{{ new_cluster_state }}"

+ 1 - 0
playbooks/gce/openshift-cluster/config.yml

@@ -34,4 +34,5 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ gce_private_ip }}"

+ 16 - 0
playbooks/gce/openshift-cluster/launch.yml

@@ -23,6 +23,22 @@
       cluster: "{{ cluster_id }}"
       type: "{{ k8s_type }}"
 
+  - set_fact:
+      a_master: "{{ master_names[0] }}"
+  - add_host: name={{ a_master }} groups=service_master
+
 - include: update.yml
 
+- name: Deploy OpenShift Services
+  hosts: service_master
+  connection: ssh
+  gather_facts: yes
+  roles:
+  - openshift_registry
+  - openshift_router
+
+- include: ../../common/openshift-cluster/create_services.yml
+  vars:
+     g_svc_master: "{{ service_master }}"
+
 - include: list.yml

+ 1 - 1
playbooks/gce/openshift-cluster/list.yml

@@ -16,7 +16,7 @@
       ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
     with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
 
-- name: List Hosts
+- name: List instance(s)
   hosts: oo_list_hosts
   gather_facts: no
   tasks:

+ 28 - 0
playbooks/gce/openshift-cluster/service.yml

@@ -0,0 +1,28 @@
+---
+- name: Call same systemctl command for openshift on all instance(s)
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - fail: msg="cluster_id is required to be injected in this playbook"
+    when: cluster_id is not defined
+
+  - set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-node
+  - add_host:
+      name: "{{ item }}"
+      groups: g_service_nodes
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
+
+  - set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-master
+  - add_host:
+      name: "{{ item }}"
+      groups: g_service_masters
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
+
+- include: ../../common/openshift-node/service.yml
+- include: ../../common/openshift-master/service.yml
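The two scratch groups above come straight from the cloud inventory's tag naming. A sketch of an invocation, assuming a cluster named mycluster and an origin deployment:

    # For cluster_id=mycluster the groups evaluated above resolve to:
    #   tag_env-host-type-mycluster-openshift-node    -> g_service_nodes
    #   tag_env-host-type-mycluster-openshift-master  -> g_service_masters
    ansible-playbook playbooks/gce/openshift-cluster/service.yml \
      -e cluster_id=mycluster -e deployment_type=origin \
      -e new_cluster_state=restarted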

+ 26 - 0
playbooks/gce/openshift-cluster/wip.yml

@@ -0,0 +1,26 @@
+---
+- name: WIP
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_masters_for_deploy
+    add_host:
+      name: "{{ item }}"
+      groups: oo_masters_for_deploy
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type-{{ cluster_id }}-openshift-master"] | default([])
+
+- name: Deploy OpenShift Services
+  hosts: oo_masters_for_deploy
+  connection: ssh
+  gather_facts: yes
+  user: root
+  vars_files:
+  - vars.yml
+  roles:
+  - openshift_registry
+  - openshift_router

+ 1 - 0
playbooks/gce/openshift-node/config.yml

@@ -21,4 +21,5 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ gce_private_ip }}"

+ 1 - 0
playbooks/libvirt/openshift-cluster/config.yml

@@ -36,3 +36,4 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"

+ 32 - 0
playbooks/libvirt/openshift-cluster/service.yml

@@ -0,0 +1,32 @@
+---
+# TODO: need to figure out a plan for setting hostname, currently the default
+# is localhost, so no hostname value (or public_hostname) value is getting
+# assigned
+
+- name: Call same systemctl command for openshift on all instance(s)
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - fail: msg="cluster_id is required to be injected in this playbook"
+    when: cluster_id is not defined
+
+  - name: Evaluate g_service_masters
+    add_host:
+      name: "{{ item }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: g_service_masters
+    with_items: groups["tag_env-host-type-{{ cluster_id }}-openshift-master"] | default([])
+
+  - name: Evaluate g_service_nodes
+    add_host:
+      name: "{{ item }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: g_service_nodes
+    with_items: groups["tag_env-host-type-{{ cluster_id }}-openshift-node"] | default([])
+
+- include: ../../common/openshift-node/service.yml
+- include: ../../common/openshift-master/service.yml

+ 3 - 9
playbooks/libvirt/openshift-cluster/tasks/launch_instances.yml

@@ -58,23 +58,17 @@
     uri: '{{ libvirt_uri }}'
   with_items: instances
 
-- name: Collect MAC addresses of the VMs
-  shell: 'virsh -c {{ libvirt_uri }} dumpxml {{ item }} | xmllint --xpath "string(//domain/devices/interface/mac/@address)" -'
-  register: scratch_mac
-  with_items: instances
-
 - name: Wait for the VMs to get an IP
-  command: "egrep -c '{{ scratch_mac.results | oo_collect('stdout') | join('|') }}' /proc/net/arp"
-  ignore_errors: yes
+  shell: 'virsh -c {{ libvirt_uri }} net-dhcp-leases openshift-ansible | egrep -c ''{{ instances | join("|") }}'''
   register: nb_allocated_ips
   until: nb_allocated_ips.stdout == '{{ instances | length }}'
   retries: 30
   delay: 1
 
 - name: Collect IP addresses of the VMs
-  shell: "awk '/{{ item.stdout }}/ {print $1}' /proc/net/arp"
+  shell: 'virsh -c {{ libvirt_uri }} net-dhcp-leases openshift-ansible | awk ''$6 == "{{ item }}" {gsub(/\/.*/, "", $5); print $5}'''
   register: scratch_ip
-  with_items: scratch_mac.results
+  with_items: instances
 
 - set_fact:
     ips: "{{ scratch_ip.results | oo_collect('stdout') }}"

+ 35 - 0
playbooks/openstack/openshift-cluster/config.yml

@@ -0,0 +1,35 @@
+- name: Populate oo_masters_to_config host group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_masters_to_config
+    add_host:
+      name: "{{ item }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_masters_to_config
+    with_items: groups["tag_env-host-type_{{ cluster_id }}-openshift-master"] | default([])
+  - name: Evaluate oo_nodes_to_config
+    add_host:
+      name: "{{ item }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_nodes_to_config
+    with_items: groups["tag_env-host-type_{{ cluster_id }}-openshift-node"] | default([])
+  - name: Evaluate oo_first_master
+    add_host:
+      name: "{{ groups['tag_env-host-type_' ~ cluster_id ~ '-openshift-master'][0] }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_first_master
+    when: "'tag_env-host-type_{{ cluster_id }}-openshift-master' in groups"
+
+- include: ../../common/openshift-cluster/config.yml
+  vars:
+    openshift_cluster_id: "{{ cluster_id }}"
+    openshift_debug_level: 4
+    openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
+    openshift_hostname: "{{ ansible_default_ipv4.address }}"

+ 149 - 0
playbooks/openstack/openshift-cluster/files/heat_stack.yml

@@ -0,0 +1,149 @@
+heat_template_version: 2014-10-16
+
+description: OpenShift cluster
+
+parameters:
+  cluster-id:
+    type: string
+    label: Cluster ID
+    description: Identifier of the cluster
+
+  network-prefix:
+    type: string
+    label: Network prefix
+    description: Prefix of the network objects
+
+  cidr:
+    type: string
+    label: CIDR
+    description: CIDR of the network of the cluster
+
+  dns-nameservers:
+    type: comma_delimited_list
+    label: DNS nameservers list
+    description: List of DNS nameservers
+
+  external-net:
+    type: string
+    label: External network
+    description: Name of the external network
+    default: external
+
+  ssh-incoming:
+    type: string
+    label: Source of ssh connections
+    description: Source of legitimate ssh connections
+
+resources:
+  net:
+    type: OS::Neutron::Net
+    properties:
+      name:
+        str_replace:
+          template: network-prefix-net
+          params:
+            network-prefix: { get_param: network-prefix }
+
+  subnet:
+    type: OS::Neutron::Subnet
+    properties:
+      name:
+        str_replace:
+          template: network-prefix-subnet
+          params:
+            network-prefix: { get_param: network-prefix }
+      network: { get_resource: net }
+      cidr: { get_param: cidr }
+      dns_nameservers: { get_param: dns-nameservers }
+
+  router:
+    type: OS::Neutron::Router
+    properties:
+      name:
+        str_replace:
+          template: network-prefix-router
+          params:
+            network-prefix: { get_param: network-prefix }
+      external_gateway_info:
+        network: { get_param: external-net }
+
+  interface:
+    type: OS::Neutron::RouterInterface
+    properties:
+      router_id: { get_resource: router }
+      subnet_id: { get_resource: subnet }
+
+  node-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: network-prefix-node-secgrp
+          params:
+            network-prefix: { get_param: network-prefix }
+      description:
+        str_replace:
+          template: Security group for cluster-id OpenShift cluster nodes
+          params:
+            cluster-id: { get_param: cluster-id }
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 22
+          port_range_max: 22
+          remote_ip_prefix: { get_param: ssh-incoming }
+        - direction: ingress
+          protocol: udp
+          port_range_min: 4789
+          port_range_max: 4789
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 10250
+          port_range_max: 10250
+          remote_mode: remote_group_id
+          remote_group_id: { get_resource: master-secgrp }
+
+  master-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: network-prefix-master-secgrp
+          params:
+            network-prefix: { get_param: network-prefix }
+      description:
+        str_replace:
+          template: Security group for cluster-id OpenShift cluster master
+          params:
+            cluster-id: { get_param: cluster-id }
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 22
+          port_range_max: 22
+          remote_ip_prefix: { get_param: ssh-incoming }
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 4001
+          port_range_max: 4001
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 8443
+          port_range_max: 8443
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 53
+          port_range_max: 53
+        - direction: ingress
+          protocol: udp
+          port_range_min: 53
+          port_range_max: 53
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 24224
+          port_range_max: 24224
+        - direction: ingress
+          protocol: udp
+          port_range_min: 24224
+          port_range_max: 24224

+ 7 - 0
playbooks/openstack/openshift-cluster/files/user-data

@@ -0,0 +1,7 @@
+#cloud-config
+disable_root: true
+
+system_info:
+  default_user:
+    name: openshift
+    sudo: ["ALL=(ALL) NOPASSWD: ALL"]

playbooks/aws/os2-atomic-proxy/filter_plugins → playbooks/openstack/openshift-cluster/filter_plugins


+ 31 - 0
playbooks/openstack/openshift-cluster/launch.yml

@@ -0,0 +1,31 @@
+---
+- name: Launch instance(s)
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - fail:
+      msg: "Deployment type not supported for OpenStack provider yet"
+    when: deployment_type in ['online', 'enterprise']
+
+  - include: tasks/configure_openstack.yml
+
+  - include: ../../common/openshift-cluster/set_master_launch_facts_tasks.yml
+  - include: tasks/launch_instances.yml
+    vars:
+      instances: "{{ master_names }}"
+      cluster: "{{ cluster_id }}"
+      type: "{{ k8s_type }}"
+
+  - include: ../../common/openshift-cluster/set_node_launch_facts_tasks.yml
+  - include: tasks/launch_instances.yml
+    vars:
+      instances: "{{ node_names }}"
+      cluster: "{{ cluster_id }}"
+      type: "{{ k8s_type }}"
+
+- include: update.yml
+
+- include: list.yml

+ 24 - 0
playbooks/openstack/openshift-cluster/list.yml

@@ -0,0 +1,24 @@
+---
+- name: Generate oo_list_hosts group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - set_fact: scratch_group=tag_env_{{ cluster_id }}
+    when: cluster_id != ''
+  - set_fact: scratch_group=all
+    when: cluster_id == ''
+  - add_host:
+      name: "{{ item }}"
+      groups: oo_list_hosts
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_ssh_host: "{{ hostvars[item].ansible_ssh_host | default(item) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[scratch_group] | default([]) | difference(['localhost'])
+
+- name: List Hosts
+  hosts: oo_list_hosts
+  tasks:
+  - debug:
+      msg: 'public:{{ansible_ssh_host}} private:{{ansible_default_ipv4.address}}'

playbooks/aws/os2-atomic-proxy/roles → playbooks/openstack/openshift-cluster/roles


+ 27 - 0
playbooks/openstack/openshift-cluster/tasks/configure_openstack.yml

@@ -0,0 +1,27 @@
+---
+- name: Check infra
+  command: 'heat stack-show {{ openstack_network_prefix }}-stack'
+  register: stack_show_result
+  changed_when: false
+  failed_when: stack_show_result.rc != 0 and 'Stack not found' not in stack_show_result.stderr
+
+- name: Create infra
+  command: 'heat stack-create -f {{ openstack_infra_heat_stack }} -P cluster-id={{ cluster_id }} -P network-prefix={{ openstack_network_prefix }} -P dns-nameservers={{ openstack_network_dns | join(",") }} -P cidr={{ openstack_network_cidr }} -P ssh-incoming={{ openstack_ssh_access_from }} {{ openstack_network_prefix }}-stack'
+  when: stack_show_result.rc == 1
+
+- name: Update infra
+  command: 'heat stack-update -f {{ openstack_infra_heat_stack }} -P cluster-id={{ cluster_id }} -P network-prefix={{ openstack_network_prefix }} -P dns-nameservers={{ openstack_network_dns | join(",") }} -P cidr={{ openstack_network_cidr }} -P ssh-incoming={{ openstack_ssh_access_from }} {{ openstack_network_prefix }}-stack'
+  when: stack_show_result.rc == 0
+
+- name: Wait for infra readiness
+  shell: 'heat stack-show {{ openstack_network_prefix }}-stack | awk ''$2 == "stack_status" {print $4}'''
+  register: stack_show_status_result
+  until: stack_show_status_result.stdout not in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS']
+  retries: 30
+  delay: 1
+  failed_when: stack_show_status_result.stdout not in ['CREATE_COMPLETE', 'UPDATE_COMPLETE']
+
+- name: Create ssh keypair
+  nova_keypair:
+    name: "{{ openstack_ssh_keypair }}"
+    public_key: "{{ openstack_ssh_public_key }}"
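A manual equivalent of the create-and-wait sequence above, with parameter values taken from the defaults in vars.yml and a placeholder cluster name:

    heat stack-create -f files/heat_stack.yml \
      -P cluster-id=mycluster -P network-prefix=openshift-ansible-mycluster \
      -P dns-nameservers=8.8.8.8,8.8.4.4 -P cidr=192.168.23.0/24 \
      -P ssh-incoming=0.0.0.0/0 openshift-ansible-mycluster-stack
    # Poll until the stack leaves *_IN_PROGRESS, as the playbook does.
    while heat stack-show openshift-ansible-mycluster-stack \
          | awk '$2 == "stack_status" {print $4}' | grep -q IN_PROGRESS; do
      sleep 1
    done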

+ 48 - 0
playbooks/openstack/openshift-cluster/tasks/launch_instances.yml

@@ -0,0 +1,48 @@
+---
+- name: Get net id
+  shell: 'neutron net-show {{ openstack_network_prefix }}-net | awk "/\\<id\\>/ {print \$4}"'
+  register: net_id_result
+
+- name: Launch instance(s)
+  nova_compute:
+    name: '{{ item }}'
+    image_name:     '{{ deployment_vars[deployment_type].image.name | default(omit, true) }}'
+    image_id:       '{{ deployment_vars[deployment_type].image.id   | default(omit, true) }}'
+    flavor_ram:     '{{ openstack_flavor[k8s_type].ram              | default(omit, true) }}'
+    flavor_id:      '{{ openstack_flavor[k8s_type].id               | default(omit, true) }}'
+    flavor_include: '{{ openstack_flavor[k8s_type].include          | default(omit, true) }}'
+    key_name: '{{ openstack_ssh_keypair }}'
+    security_groups: '{{ openstack_network_prefix }}-{{ k8s_type }}-secgrp'
+    nics:
+      - net-id: '{{ net_id_result.stdout }}'
+    user_data: "{{ lookup('file','files/user-data') }}"
+    meta:
+      env: '{{ cluster }}'
+      host-type: '{{ type }}'
+      env-host-type: '{{ cluster }}-openshift-{{ type }}'
+    floating_ip_pools: '{{ openstack_floating_ip_pools }}'
+  with_items: instances
+  register: nova_compute_result
+
+- name: Add new instances groups and variables
+  add_host:
+    hostname: '{{ item.item }}'
+    ansible_ssh_host: '{{ item.public_ip }}'
+    ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+    ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    groups: 'tag_env_{{ cluster }}, tag_host-type_{{ type }}, tag_env-host-type_{{ cluster }}-openshift-{{ type }}'
+  with_items: nova_compute_result.results
+
+- name: Wait for ssh
+  wait_for:
+    host: '{{ item.public_ip }}'
+    port: 22
+  with_items: nova_compute_result.results
+
+- name: Wait for user setup
+  command: 'ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null {{ hostvars[item.item].ansible_ssh_user }}@{{ item.public_ip }} echo {{ hostvars[item.item].ansible_ssh_user }} user is setup'
+  register: result
+  until: result.rc == 0
+  retries: 30
+  delay: 1
+  with_items: nova_compute_result.results

+ 43 - 0
playbooks/openstack/openshift-cluster/terminate.yml

@@ -0,0 +1,43 @@
+- name: Terminate instance(s)
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - set_fact: cluster_group=tag_env_{{ cluster_id }}
+  - add_host:
+      name: "{{ item }}"
+      groups: oo_hosts_to_terminate
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[cluster_group] | default([])
+
+# Bare play: it exists only to gather facts from the instances, which the
+# floating-IP lookup below needs (ansible_default_ipv4).
+- hosts: oo_hosts_to_terminate
+
+- hosts: localhost
+  connection: local
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Retrieve the floating IPs
+    shell: "neutron floatingip-list | awk '/{{ hostvars[item].ansible_default_ipv4.address }}/ {print $2}'"
+    with_items: groups['oo_hosts_to_terminate'] | default([])
+    register: floating_ips_to_delete
+
+  - name: Terminate instance(s)
+    nova_compute:
+      name: "{{ hostvars[item].os_name }}"
+      state: absent
+    with_items: groups['oo_hosts_to_terminate'] | default([])
+
+  - name: Delete floating IPs
+    command: "neutron floatingip-delete {{ item.stdout }}"
+    with_items: floating_ips_to_delete.results | default([])
+
+  - name: Destroy the network
+    command: "heat stack-delete {{ openstack_network_prefix }}-stack"
+    register: stack_delete_result
+    changed_when: stack_delete_result.rc == 0
+    failed_when: stack_delete_result.rc != 0 and 'could not be found' not in stack_delete_result.stdout

+ 18 - 0
playbooks/openstack/openshift-cluster/update.yml

@@ -0,0 +1,18 @@
+---
+- name: Populate oo_hosts_to_update group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_hosts_to_update
+    add_host:
+      name: "{{ item }}"
+      groups: oo_hosts_to_update
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type_{{ cluster_id }}-openshift-master"] | union(groups["tag_env-host-type_{{ cluster_id }}-openshift-node"]) | default([])
+
+- include: ../../common/openshift-cluster/update_repos_and_packages.yml
+
+- include: config.yml

+ 39 - 0
playbooks/openstack/openshift-cluster/vars.yml

@@ -0,0 +1,39 @@
+---
+openstack_infra_heat_stack:     "{{ opt_infra_heat_stack  | default('files/heat_stack.yml') }}"
+openstack_network_prefix:       "{{ opt_network_prefix    | default('openshift-ansible-'+cluster_id) }}"
+openstack_network_cidr:         "{{ opt_net_cidr          | default('192.168.' + ( ( 1048576 | random % 256 ) | string() ) + '.0/24') }}"
+openstack_network_external_net: "{{ opt_external_net      | default('external') }}"
+openstack_floating_ip_pools:    "{{ opt_floating_ip_pools | default('external')        | oo_split() }}"
+openstack_network_dns:          "{{ opt_dns               | default('8.8.8.8,8.8.4.4') | oo_split() }}"
+openstack_ssh_keypair:          "{{ opt_keypair           | default(lookup('env', 'LOGNAME')+'_key') }}"
+openstack_ssh_public_key:       "{{ lookup('file', opt_public_key | default('~/.ssh/id_rsa.pub')) }}"
+openstack_ssh_access_from:      "{{ opt_ssh_from          | default('0.0.0.0/0') }}"
+openstack_flavor:
+  master:
+    ram:     "{{ opt_master_flavor_ram     | default(2048) }}"
+    id:      "{{ opt_master_flavor_id      | default() }}"
+    include: "{{ opt_master_flavor_include | default() }}"
+  node:
+    ram:     "{{ opt_node_flavor_ram     | default(4096) }}"
+    id:      "{{ opt_node_flavor_id      | default() }}"
+    include: "{{ opt_node_flavor_include | default() }}"
+
+deployment_vars:
+  origin:
+    image:
+      name: "{{ opt_image_name | default('centos-70-raw') }}"
+      id:
+    ssh_user: openshift
+    sudo: yes
+  online:
+    image:
+      name:
+      id:
+    ssh_user: root
+    sudo: no
+  enterprise:
+    image:
+      name: "{{ opt_image_name | default('centos-70-raw') }}"
+      id:
+    ssh_user: openshift
+    sudo: yes
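Every opt_* name above is an override hook. A sketch of a launch with a custom image, CIDR, and ssh source (all values are placeholders):

    ansible-playbook playbooks/openstack/openshift-cluster/launch.yml \
      -e cluster_id=mycluster -e deployment_type=origin \
      -e opt_image_name=my-centos7-image -e opt_net_cidr=192.168.42.0/24 \
      -e opt_ssh_from=203.0.113.0/24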

+ 1 - 1
rel-eng/packages/openshift-ansible-bin

@@ -1 +1 @@
-0.0.17-1 bin/
+0.0.18-1 bin/

+ 1 - 1
rel-eng/packages/openshift-ansible-inventory

@@ -1 +1 @@
-0.0.7-1 inventory/
+0.0.8-1 inventory/

+ 0 - 56
roles/atomic_base/README.md

@@ -1,56 +0,0 @@
-Role Name
-========
-
-The purpose of this role is to do common configurations for all RHEL atomic hosts.
-
-
-Requirements
-------------
-
-None
-
-
-Role Variables
---------------
-
-None
-
-
-Dependencies
-------------
-
-None
-
-
-Example Playbook
--------------------------
-
-From a group playbook:
-
-  hosts: servers
-  roles:
-    - ../../roles/atomic_base
-
-
-License
--------
-
-Copyright 2012-2014 Red Hat, Inc., All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-
-Author Information
-------------------
-
-Thomas Wiest <twiest@redhat.com>

+ 0 - 12
roles/atomic_base/files/bash/bashrc

@@ -1,12 +0,0 @@
-# .bashrc
-
-# User specific aliases and functions
-
-alias rm='rm -i'
-alias cp='cp -i'
-alias mv='mv -i'
-
-# Source global definitions
-if [ -f /etc/bashrc ]; then
-    . /etc/bashrc
-fi

+ 0 - 10
roles/atomic_base/files/ostree/repo_config

@@ -1,10 +0,0 @@
-[core]
-repo_version=1
-mode=bare
-
-[remote "rh-atomic-controller"]
-url=https://mirror.openshift.com/libra/ostree/rhel-7-atomic-host
-branches=rh-atomic-controller/el7/x86_64/buildmaster/controller/docker;
-tls-client-cert-path=/var/lib/yum/client-cert.pem
-tls-client-key-path=/var/lib/yum/client-key.pem
-gpg-verify=false

+ 0 - 7
roles/atomic_base/files/system/90-nofile.conf

@@ -1,7 +0,0 @@
-# PAM process file descriptor limits
-# see limits.conf(5) for details.
-#Each line describes a limit for a user in the form:
-#
-#<domain> <type> <item> <value>
-*       hard    nofile  16384
-root	soft	nofile	16384

+ 0 - 19
roles/atomic_base/meta/main.yml

@@ -1,19 +0,0 @@
----
-galaxy_info:
-  author: Thomas Wiest
-  description: Common base RHEL atomic configurations
-  company: Red Hat
-  # Some suggested licenses:
-  # - BSD (default)
-  # - MIT
-  # - GPLv2
-  # - GPLv3
-  # - Apache
-  # - CC-BY
-  license: Apache
-  min_ansible_version: 1.2
-  platforms:
-  - name: EL
-    versions:
-    - 7
-dependencies: []

+ 0 - 14
roles/atomic_base/tasks/bash.yml

@@ -1,14 +0,0 @@
----
-- name: Copy .bashrc
-  copy: src=bash/bashrc dest=/root/.bashrc owner=root group=root mode=0644
-
-- name: Link to .profile to .bashrc
-  file: src=/root/.bashrc dest=/root/.profile owner=root group=root state=link
-
-- name: "Setup Timezone [{{ oo_timezone }}]"
-  file:
-    src: "/usr/share/zoneinfo/{{ oo_timezone }}"
-    dest: /etc/localtime
-    owner: root
-    group: root
-    state: link

+ 0 - 6
roles/atomic_base/tasks/cloud_user.yml

@@ -1,6 +0,0 @@
----
-- name: Remove cloud-user account
-  user: name=cloud-user state=absent remove=yes force=yes
-
-- name: Remove cloud-user sudo
-  file: path=/etc/sudoers.d/90-cloud-init-users state=absent

+ 0 - 4
roles/atomic_base/tasks/main.yml

@@ -1,4 +0,0 @@
----
-- include: system.yml
-- include: bash.yml
-- include: ostree.yml

+ 0 - 18
roles/atomic_base/tasks/ostree.yml

@@ -1,18 +0,0 @@
----
-- name: Copy ostree repo config
-  copy:
-    src: ostree/repo_config
-    dest: /ostree/repo/config
-    owner: root
-    group: root
-    mode: 0644
-
-- name: "WORK AROUND: Stat redhat repo file"
-  stat: path=/etc/yum.repos.d/redhat.repo
-  register: redhat_repo
-
-- name: "WORK AROUND: subscription manager failures"
-  file:
-    path: /etc/yum.repos.d/redhat.repo
-    state: touch
-  when: redhat_repo.stat.exists == False

+ 0 - 3
roles/atomic_base/tasks/system.yml

@@ -1,3 +0,0 @@
----
-- name: Upload nofile limits.d file
-  copy: src=system/90-nofile.conf dest=/etc/security/limits.d/90-nofile.conf owner=root group=root mode=0644

+ 0 - 2
roles/atomic_base/vars/main.yml

@@ -1,2 +0,0 @@
----
-oo_timezone: US/Eastern

+ 0 - 56
roles/atomic_proxy/README.md

@@ -1,56 +0,0 @@
-Role Name
-========
-
-The purpose of this role is to do common configurations for all RHEL atomic hosts.
-
-
-Requirements
-------------
-
-None
-
-
-Role Variables
---------------
-
-None
-
-
-Dependencies
-------------
-
-None
-
-
-Example Playbook
--------------------------
-
-From a group playbook:
-
-  hosts: servers
-  roles:
-    - ../../roles/atomic_proxy
-
-
-License
--------
-
-Copyright 2012-2014 Red Hat, Inc., All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-
-Author Information
-------------------
-
-Thomas Wiest <twiest@redhat.com>

+ 0 - 29
roles/atomic_proxy/files/proxy_containers_deploy_descriptor.json

@@ -1,29 +0,0 @@
-{
-  "Containers":[
-    {
-      "Name":"proxy-puppet",
-      "Count":1,
-      "Image":"puppet:latest",
-      "PublicPorts":[
-      ]
-    },
-    {
-      "Name":"proxy",
-      "Count":1,
-      "Image":"proxy:latest",
-      "PublicPorts":[
-        {"Internal":80,"External":80},
-        {"Internal":443,"External":443},
-        {"Internal":4999,"External":4999}
-      ]
-    },
-    {
-      "Name":"proxy-monitoring",
-      "Count":1,
-      "Image":"monitoring:latest",
-      "PublicPorts":[
-      ]
-    }
-  ],
-  "RandomizeIds": false
-}

+ 0 - 116
roles/atomic_proxy/files/puppet/auth.conf

@@ -1,116 +0,0 @@
-# This is the default auth.conf file, which implements the default rules
-# used by the puppet master. (That is, the rules below will still apply
-# even if this file is deleted.)
-#
-# The ACLs are evaluated in top-down order. More specific stanzas should
-# be towards the top of the file and more general ones at the bottom;
-# otherwise, the general rules may "steal" requests that should be
-# governed by the specific rules.
-#
-# See http://docs.puppetlabs.com/guides/rest_auth_conf.html for a more complete
-# description of auth.conf's behavior.
-#
-# Supported syntax:
-# Each stanza in auth.conf starts with a path to match, followed
-# by optional modifiers, and finally, a series of allow or deny
-# directives.
-#
-# Example Stanza
-# ---------------------------------
-# path /path/to/resource     # simple prefix match
-# # path ~ regex             # alternately, regex match
-# [environment envlist]
-# [method methodlist]
-# [auth[enthicated] {yes|no|on|off|any}]
-# allow [host|backreference|*|regex]
-# deny [host|backreference|*|regex]
-# allow_ip [ip|cidr|ip_wildcard|*]
-# deny_ip [ip|cidr|ip_wildcard|*]
-#
-# The path match can either be a simple prefix match or a regular
-# expression. `path /file` would match both `/file_metadata` and
-# `/file_content`. Regex matches allow the use of backreferences
-# in the allow/deny directives.
-#
-# The regex syntax is the same as for Ruby regex, and captures backreferences
-# for use in the `allow` and `deny` lines of that stanza
-#
-# Examples:
-#
-# path ~ ^/path/to/resource    # Equivalent to `path /path/to/resource`.
-# allow *                      # Allow all authenticated nodes (since auth
-#                              # defaults to `yes`).
-#
-# path ~ ^/catalog/([^/]+)$    # Permit nodes to access their own catalog (by
-# allow $1                     # certname), but not any other node's catalog.
-#
-# path ~ ^/file_(metadata|content)/extra_files/  # Only allow certain nodes to
-# auth yes                                       # access the "extra_files"
-# allow /^(.+)\.example\.com$/                   # mount point; note this must
-# allow_ip 192.168.100.0/24                      # go ABOVE the "/file" rule,
-#                                                # since it is more specific.
-#
-# environment:: restrict an ACL to a comma-separated list of environments
-# method:: restrict an ACL to a comma-separated list of HTTP methods
-# auth:: restrict an ACL to an authenticated or unauthenticated request
-# the default when unspecified is to restrict the ACL to authenticated requests
-# (ie exactly as if auth yes was present).
-#
-
-### Authenticated ACLs - these rules apply only when the client
-### has a valid certificate and is thus authenticated
-
-# allow nodes to retrieve their own catalog
-path ~ ^/catalog/([^/]+)$
-method find
-allow $1
-
-# allow nodes to retrieve their own node definition
-path ~ ^/node/([^/]+)$
-method find
-allow $1
-
-# allow all nodes to access the certificates services
-path /certificate_revocation_list/ca
-method find
-allow *
-
-# allow all nodes to store their own reports
-path ~ ^/report/([^/]+)$
-method save
-allow $1
-
-# Allow all nodes to access all file services; this is necessary for
-# pluginsync, file serving from modules, and file serving from custom
-# mount points (see fileserver.conf). Note that the `/file` prefix matches
-# requests to both the file_metadata and file_content paths. See "Examples"
-# above if you need more granular access control for custom mount points.
-path /file
-allow *
-
-### Unauthenticated ACLs, for clients without valid certificates; authenticated
-### clients can also access these paths, though they rarely need to.
-
-# allow access to the CA certificate; unauthenticated nodes need this
-# in order to validate the puppet master's certificate
-path /certificate/ca
-auth any
-method find
-allow *
-
-# allow nodes to retrieve the certificate they requested earlier
-path /certificate/
-auth any
-method find
-allow *
-
-# allow nodes to request a new certificate
-path /certificate_request
-auth any
-method find, save
-allow *
-
-# deny everything else; this ACL is not strictly necessary, but
-# illustrates the default policy.
-path /
-auth any

+ 0 - 43
roles/atomic_proxy/files/setup-proxy-containers.sh

@@ -1,43 +0,0 @@
-#!/bin/bash
-
-function fail {
-  msg=$1
-  echo
-  echo $msg
-  echo
-  exit 5
-}
-
-
-NUM_DATA_CTR=$(docker ps -a | grep -c proxy-shared-data-1)
-[ "$NUM_DATA_CTR" -ne 0 ] && fail "ERROR: proxy-shared-data-1 exists"
-
-
-# pre-cache the container images
-echo
-timeout --signal TERM --kill-after 30 600  docker pull busybox:latest  || fail "ERROR: docker pull of busybox failed"
-
-echo
-# WORKAROUND: Setup the shared data container
-/usr/bin/docker run --name "proxy-shared-data-1"  \
-          -v /shared/etc/haproxy                  \
-          -v /shared/etc/httpd                    \
-          -v /shared/etc/openshift                \
-          -v /shared/etc/pki                      \
-          -v /shared/var/run/ctr-ipc              \
-          -v /shared/var/lib/haproxy              \
-          -v /shared/usr/local                    \
-          "busybox:latest" true
-
-# WORKAROUND: These are because we're not using a pod yet
-cp /usr/local/etc/ctr-proxy-1.service /usr/local/etc/ctr-proxy-puppet-1.service /usr/local/etc/ctr-proxy-monitoring-1.service /etc/systemd/system/
-
-systemctl daemon-reload
-
-echo
-echo -n "sleeping 10 seconds for systemd reload to take affect..."
-sleep 10
-echo " Done."
-
-# Start the services
-systemctl start ctr-proxy-puppet-1 ctr-proxy-1 ctr-proxy-monitoring-1

+ 0 - 3
roles/atomic_proxy/handlers/main.yml

@@ -1,3 +0,0 @@
----
-- name: reload systemd
-  command: systemctl daemon-reload

+ 0 - 21
roles/atomic_proxy/meta/main.yml

@@ -1,21 +0,0 @@
----
-galaxy_info:
-  author: Thomas Wiest
-  description: Common base RHEL atomic configurations
-  company: Red Hat
-  # Some suggested licenses:
-  # - BSD (default)
-  # - MIT
-  # - GPLv2
-  # - GPLv3
-  # - Apache
-  # - CC-BY
-  license: Apache
-  min_ansible_version: 1.2
-  platforms:
-  - name: EL
-    versions:
-    - 7
-dependencies:
-  # This is the role's PRIVATE counterpart, which is used.
-  - ../../../../../atomic_private/ansible/roles/atomic_proxy

+ 0 - 3
roles/atomic_proxy/tasks/main.yml

@@ -1,3 +0,0 @@
----
-- include: setup_puppet.yml
-- include: setup_containers.yml

+ 0 - 57
roles/atomic_proxy/tasks/setup_containers.yml

@@ -1,57 +0,0 @@
----
-- name: "get output of: docker images"
-  command: docker images
-  changed_when: False # don't report as changed
-  register: docker_images
-
-- name: docker pull busybox ONLY if it's not present
-  command: "docker pull busybox:latest"
-  when: "not docker_images.stdout | search('busybox.*latest')"
-
-- name: docker pull containers ONLY if they're not present (needed otherwise systemd will timeout pulling the containers)
-  command: "docker pull docker-registry.ops.rhcloud.com/{{ item }}:{{ oo_env }}"
-  with_items:
-    - oso-v2-proxy
-    - oso-v2-puppet
-    - oso-v2-monitoring
-  when: "not docker_images.stdout | search('docker-registry.ops.rhcloud.com/{{ item }}.*{{ oo_env }}')"
-
-- name: "get output of: docker ps -a"
-  command: docker ps -a
-  changed_when: False # don't report as changed
-  register: docker_ps
-
-- name: run proxy-shared-data-1
-  command: /usr/bin/docker run --name "proxy-shared-data-1"  \
-                     -v /shared/etc/haproxy                  \
-                     -v /shared/etc/httpd                    \
-                     -v /shared/etc/openshift                \
-                     -v /shared/etc/pki                      \
-                     -v /shared/var/run/ctr-ipc              \
-                     -v /shared/var/lib/haproxy              \
-                     -v /shared/usr/local                    \
-                     "busybox:latest" true
-  when: "not docker_ps.stdout | search('proxy-shared-data-1')"
-
-- name: Deploy systemd files for containers
-  template:
-    src: "systemd/{{ item }}.j2"
-    dest: "/etc/systemd/system/{{ item }}"
-    mode: 0640
-    owner: root
-    group: root
-  with_items:
-    - ctr-proxy-1.service
-    - ctr-proxy-monitoring-1.service
-    - ctr-proxy-puppet-1.service
-  notify: reload systemd
-
-- name: start containers
-  service:
-    name: "{{ item }}"
-    state: started
-    enabled: yes
-  with_items:
-    - ctr-proxy-puppet-1
-    - ctr-proxy-1
-    - ctr-proxy-monitoring-1

+ 0 - 24
roles/atomic_proxy/tasks/setup_puppet.yml

@@ -1,24 +0,0 @@
----
-- name: make puppet conf dir
-  file:
-    dest: "{{ oo_proxy_puppet_volume_dir }}/etc/puppet"
-    mode: 755
-    owner: root
-    group: root
-    state: directory
-
-- name: upload puppet auth config
-  copy:
-    src: puppet/auth.conf
-    dest: "{{ oo_proxy_puppet_volume_dir }}/etc/puppet/auth.conf"
-    mode: 0644
-    owner: root
-    group: root
-
-- name: upload puppet config
-  template:
-    src: puppet/puppet.conf.j2
-    dest: "{{ oo_proxy_puppet_volume_dir }}/etc/puppet/puppet.conf"
-    mode: 0644
-    owner: root
-    group: root

+ 0 - 40
roles/atomic_proxy/templates/puppet/puppet.conf.j2

@@ -1,40 +0,0 @@
-[main]
-    # we need to override the host name of the container
-    certname = ctr-proxy.{{ oo_env }}.rhcloud.com
-
-    # The Puppet log directory.
-    # The default value is '$vardir/log'.
-    logdir = /var/log/puppet
-
-    # Where Puppet PID files are kept.
-    # The default value is '$vardir/run'.
-    rundir = /var/run/puppet
-
-    # Where SSL certificates are kept.
-    # The default value is '$confdir/ssl'.
-    ssldir = $vardir/ssl
-    manifest = $manifestdir/site.pp
-    manifestdir = /var/lib/puppet/environments/pub/$environment/manifests
-    environment = {{ oo_env_long }}
-    modulepath = /var/lib/puppet/environments/pub/$environment/modules:/var/lib/puppet/environments/pri/$environment/modules:/var/lib/puppet/environments/pri/production/modules:$confdir/modules:/usr/share/puppet/modules
-
-[agent]
-    # The file in which puppetd stores a list of the classes
-    # associated with the retrieved configuratiion.  Can be loaded in
-    # the separate ``puppet`` executable using the ``--loadclasses``
-    # option.
-    # The default value is '$confdir/classes.txt'.
-    classfile = $vardir/classes.txt
-
-    # Where puppetd caches the local configuration.  An
-    # extension indicating the cache format is added automatically.
-    # The default value is '$confdir/localconfig'.
-    localconfig = $vardir/localconfig
-    server = puppet.ops.rhcloud.com
-    environment = {{ oo_env_long }}
-    pluginsync = true
-    graph = true
-    configtimeout = 600
-    report = true
-    runinterval = 3600
-    splay = true

+ 0 - 16
roles/atomic_proxy/templates/sync/sync-proxy-configs.sh.j2

@@ -1,16 +0,0 @@
-#!/bin/bash
-
-VOL_DIR=/var/lib/docker/volumes/proxy
-SSH_CMD="ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null"
-
-mkdir -p ${VOL_DIR}/etc/haproxy/
-rsync -e "${SSH_CMD}" -va --progress root@proxy1.{{ oo_env }}.rhcloud.com:/etc/haproxy/ ${VOL_DIR}/etc/haproxy/
-
-mkdir -p ${VOL_DIR}/etc/httpd/
-rsync -e "${SSH_CMD}" -va --progress root@proxy1.{{ oo_env }}.rhcloud.com:/etc/httpd/ ${VOL_DIR}/etc/httpd/
-
-mkdir -p ${VOL_DIR}/etc/pki/tls/
-rsync -e "${SSH_CMD}" -va --progress root@proxy1.{{ oo_env }}.rhcloud.com:/etc/pki/tls/ ${VOL_DIR}/etc/pki/tls/
-
-# We need to disable the haproxy chroot
-sed -i -re 's/^(\s+)chroot/\1#chroot/' /var/lib/docker/volumes/proxy/etc/haproxy/haproxy.cfg

+ 0 - 32
roles/atomic_proxy/templates/systemd/ctr-proxy-1.service.j2

@@ -1,32 +0,0 @@
-[Unit]
-Description=Container proxy-1
-
-
-[Service]
-Type=simple
-TimeoutStartSec=5m
-Slice=container-small.slice
-
-ExecStartPre=-/usr/bin/docker rm "proxy-1"
-
-ExecStart=/usr/bin/docker run --rm --name "proxy-1"                           \
-          --volumes-from proxy-shared-data-1                                  \
-          -a stdout -a stderr -p 80:80 -p 443:443 -p 4999:4999                \
-          "docker-registry.ops.rhcloud.com/oso-v2-proxy:{{ oo_env }}"
-
-ExecReload=-/usr/bin/docker stop "proxy-1"
-ExecReload=-/usr/bin/docker rm "proxy-1"
-ExecStop=-/usr/bin/docker stop "proxy-1"
-
-[Install]
-WantedBy=container.target
-
-# Container information
-X-ContainerId=proxy-1
-X-ContainerImage=docker-registry.ops.rhcloud.com/oso-v2-proxy:{{ oo_env }}
-X-ContainerUserId=
-X-ContainerRequestId=LwiWtYWaAvSavH6Ze53QJg
-X-ContainerType=simple
-X-PortMapping=80:80
-X-PortMapping=443:443
-X-PortMapping=4999:4999

+ 0 - 36
roles/atomic_proxy/templates/systemd/ctr-proxy-monitoring-1.service.j2

@@ -1,36 +0,0 @@
-[Unit]
-Description=Container proxy-monitoring-1
-
-
-[Service]
-Type=simple
-TimeoutStartSec=5m
-Slice=container-small.slice
-
-ExecStartPre=-/usr/bin/docker rm "proxy-monitoring-1"
-
-ExecStart=/usr/bin/docker run --rm --name "proxy-monitoring-1"                \
-          --volumes-from proxy-shared-data-1                                  \
-          -a stdout -a stderr                                                 \
-          -e "OO_ENV={{ oo_env }}"                                            \
-          -e "OO_CTR_TYPE=proxy"                                              \
-          -e "OO_ZABBIX_HOSTGROUPS={{ oo_zabbix_hostgroups | join(',') }}"    \
-          -e "OO_ZABBIX_TEMPLATES=Template OpenShift Proxy Ctr"               \
-          "docker-registry.ops.rhcloud.com/oso-v2-monitoring:{{ oo_env }}"
-
-ExecReload=-/usr/bin/docker stop "proxy-monitoring-1"
-ExecReload=-/usr/bin/docker rm "proxy-monitoring-1"
-ExecStop=-/usr/bin/docker stop "proxy-monitoring-1"
-
-[Install]
-WantedBy=container.target
-
-# Container information
-X-ContainerId=proxy-monitoring-1
-X-ContainerImage=docker-registry.ops.rhcloud.com/oso-v2-monitoring:{{ oo_env }}
-X-ContainerUserId=
-X-ContainerRequestId=LwiWtYWaAvSavH6Ze53QJg
-X-ContainerType=simple
-X-PortMapping=80:80
-X-PortMapping=443:443
-X-PortMapping=4999:4999

+ 0 - 33
roles/atomic_proxy/templates/systemd/ctr-proxy-puppet-1.service.j2

@@ -1,33 +0,0 @@
-[Unit]
-Description=Container proxy-puppet-1
-
-
-[Service]
-Type=simple
-TimeoutStartSec=5m
-Slice=container-small.slice
-
-
-ExecStartPre=-/usr/bin/docker rm "proxy-puppet-1"
-
-ExecStart=/usr/bin/docker run --rm --name "proxy-puppet-1"                                    \
-          --volumes-from proxy-shared-data-1                                                  \
-          -v /var/lib/docker/volumes/proxy_puppet/var/lib/puppet/ssl:/var/lib/puppet/ssl      \
-          -v /var/lib/docker/volumes/proxy_puppet/etc/puppet:/etc/puppet                      \
-          -a stdout -a stderr                                                                 \
-          "docker-registry.ops.rhcloud.com/oso-v2-puppet:{{ oo_env }}"
-
-# Set links (requires container have a name)
-ExecReload=-/usr/bin/docker stop "proxy-puppet-1"
-ExecReload=-/usr/bin/docker rm "proxy-puppet-1"
-ExecStop=-/usr/bin/docker stop "proxy-puppet-1"
-
-[Install]
-WantedBy=container.target
-
-# Container information
-X-ContainerId=proxy-puppet-1
-X-ContainerImage=docker-registry.ops.rhcloud.com/oso-v2-puppet:{{ oo_env }}
-X-ContainerUserId=
-X-ContainerRequestId=Ky0lhw0onwoSDJR4GK6t3g
-X-ContainerType=simple

+ 0 - 2
roles/atomic_proxy/vars/main.yml

@@ -1,2 +0,0 @@
----
-oo_proxy_puppet_volume_dir: /var/lib/docker/volumes/proxy_puppet

+ 0 - 13
roles/docker/files/enter-container.sh

@@ -1,13 +0,0 @@
-#!/bin/bash
-
-if [ $# -ne 1 ]
-then
-  echo
-  echo "Usage: $(basename $0) <container_name>"
-  echo
-  exit 1
-fi
-
-PID=$(docker inspect --format '{{.State.Pid}}' $1)
-
-nsenter --target $PID --mount --uts --ipc --net --pid

+ 4 - 0
roles/docker/handlers/main.yml

@@ -0,0 +1,4 @@
+---
+
+- name: restart docker
+  service: name=docker state=restarted

+ 1 - 8
roles/docker/tasks/main.yml

@@ -1,15 +1,8 @@
 ---
 # tasks file for docker
 - name: Install docker
-  yum: pkg=docker-io
+  yum: pkg=docker
 
 - name: enable and start the docker service
   service: name=docker enabled=yes state=started
 
-- copy: src=enter-container.sh dest=/usr/local/bin/enter-container.sh mode=0755
-
-# From the origin rpm there exists instructions on how to
-# setup origin properly.  The following steps come from there
-- name: Change root to be in the Docker group
-  user: name=root groups=dockerroot append=yes
-

+ 39 - 0
roles/docker_storage/README.md

@@ -0,0 +1,39 @@
+docker_storage
+=========
+
+Configure docker_storage options.
+
+Requirements
+------------
+
+None
+
+Role Variables
+--------------
+
+None
+
+Dependencies
+------------
+
+None
+
+Example Playbook
+----------------
+
+Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
+
+    - hosts: servers
+      roles:
+        - role: docker_storage
+          # 'dst_options' is an illustrative variable name; the role defines no variables yet
+          dst_options:
+          - key: df.fs
+            value: xfs
+
+License
+-------
+
+ASL 2.0
+
+Author Information
+------------------
+
+Openshift operations, Red Hat, Inc

playbooks/aws/os2-atomic-proxy/vars.yml → roles/docker_storage/defaults/main.yml


+ 0 - 0
roles/docker_storage/handlers/main.yml


Some files were not shown because the diff is too large