Merge pull request #1048 from twiest/prod

Sync master -> Prod
Thomas Wiest, 9 years ago
commit eeb164fae0
100 changed files with 4147 additions and 383 deletions
  1. .tito/packages/openshift-ansible (+1, -1)
  2. README_AWS.md (+25, -3)
  3. README_GCE.md (+8, -0)
  4. README_openstack.md (+1, -0)
  5. README_origin.md (+15, -0)
  6. bin/cluster (+35, -10)
  7. filter_plugins/oo_filters.py (+22, -13)
  8. filter_plugins/openshift_master.py (+469, -0)
  9. inventory/aws/hosts/ec2.ini (+95, -2)
  10. inventory/aws/hosts/ec2.py (+585, -60)
  11. inventory/byo/hosts.aep.example (+178, -0)
  12. inventory/byo/hosts.origin.example (+182, -0)
  13. inventory/byo/hosts.example (+41, -16)
  14. inventory/multi_inventory.py (+19, -15)
  15. openshift-ansible.spec (+216, -5)
  16. playbooks/adhoc/bootstrap-fedora.yml (+5, -0)
  17. playbooks/adhoc/uninstall.yml (+42, -6)
  18. playbooks/aws/openshift-cluster/addNodes.yml (+39, -0)
  19. playbooks/aws/openshift-cluster/scaleup.yml (+34, -0)
  20. playbooks/aws/openshift-cluster/tasks/launch_instances.yml (+25, -10)
  21. playbooks/aws/openshift-cluster/templates/user_data.j2 (+9, -2)
  22. playbooks/aws/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml (+33, -0)
  23. playbooks/byo/openshift-cluster/scaleup.yml (+10, -0)
  24. playbooks/byo/openshift_facts.yml (+1, -2)
  25. playbooks/common/openshift-cluster/config.yml (+0, -3)
  26. playbooks/common/openshift-cluster/evaluate_groups.yml (+9, -4)
  27. playbooks/common/openshift-cluster/scaleup.yml (+0, -10)
  28. playbooks/common/openshift-cluster/upgrades/files/pre-upgrade-check (+10, -7)
  29. playbooks/common/openshift-cluster/upgrades/files/versions.sh (+2, -2)
  30. playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py (+4, -0)
  31. playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml (+8, -3)
  32. playbooks/common/openshift-etcd/config.yml (+1, -1)
  33. playbooks/common/openshift-master/config.yml (+53, -35)
  34. playbooks/common/openshift-node/config.yml (+4, -2)
  35. playbooks/gce/openshift-cluster/join_node.yml (+0, -2)
  36. playbooks/gce/openshift-cluster/launch.yml (+4, -0)
  37. playbooks/gce/openshift-cluster/tasks/launch_instances.yml (+2, -2)
  38. playbooks/gce/openshift-cluster/vars.yml (+3, -0)
  39. playbooks/openstack/openshift-cluster/files/heat_stack.yaml (+88, -0)
  40. playbooks/openstack/openshift-cluster/launch.yml (+15, -0)
  41. playbooks/openstack/openshift-cluster/vars.yml (+1, -0)
  42. roles/ansible/tasks/main.yml (+7, -0)
  43. roles/cockpit/tasks/main.yml (+12, -0)
  44. roles/copr_cli/tasks/main.yml (+6, -0)
  45. roles/docker/README.md (+9, -9)
  46. roles/docker/handlers/main.yml (+5, -0)
  47. roles/docker/meta/main.yml (+8, -120)
  48. roles/docker/tasks/main.yml (+7, -0)
  49. roles/docker/tasks/udev_workaround.yml (+30, -0)
  50. roles/docker/vars/main.yml (+3, -0)
  51. roles/etcd/README.md (+1, -1)
  52. roles/etcd/tasks/main.yml (+5, -0)
  53. roles/etcd_common/defaults/main.yml (+1, -1)
  54. roles/flannel/README.md (+2, -1)
  55. roles/flannel/tasks/main.yml (+6, -0)
  56. roles/fluentd_master/tasks/main.yml (+7, -0)
  57. roles/fluentd_node/tasks/main.yml (+7, -0)
  58. roles/haproxy/tasks/main.yml (+7, -0)
  59. roles/kube_nfs_volumes/tasks/main.yml (+5, -0)
  60. roles/kube_nfs_volumes/tasks/nfs.yml (+5, -0)
  61. roles/lib_zabbix/library/zbx_action.py (+7, -4)
  62. roles/lib_zabbix/library/zbx_graph.py (+331, -0)
  63. roles/lib_zabbix/library/zbx_graphprototype.py (+331, -0)
  64. roles/lib_zabbix/library/zbx_httptest.py (+290, -0)
  65. roles/lib_zabbix/library/zbx_usergroup.py (+42, -22)
  66. roles/lib_zabbix/tasks/create_template.yml (+24, -0)
  67. roles/openshift_ansible_inventory/tasks/main.yml (+10, -0)
  68. roles/openshift_cluster_metrics/tasks/main.yml (+3, -3)
  69. roles/openshift_common/tasks/main.yml (+11, -0)
  70. roles/openshift_examples/defaults/main.yml (+3, -1)
  71. roles/openshift_examples/examples-sync.sh (+4, -3)
  72. roles/openshift_examples/files/examples/README.md (+7, -0)
  73. roles/openshift_examples/files/examples/v1.0/db-templates/mongodb-ephemeral-template.json (+0, -0)
  74. roles/openshift_examples/files/examples/v1.0/db-templates/mongodb-persistent-template.json (+0, -0)
  75. roles/openshift_examples/files/examples/v1.0/db-templates/mysql-ephemeral-template.json (+0, -0)
  76. roles/openshift_examples/files/examples/v1.0/db-templates/mysql-persistent-template.json (+0, -0)
  77. roles/openshift_examples/files/examples/v1.0/db-templates/postgresql-ephemeral-template.json (+0, -0)
  78. roles/openshift_examples/files/examples/v1.0/db-templates/postgresql-persistent-template.json (+0, -0)
  79. roles/openshift_examples/files/examples/v1.0/image-streams/image-streams-centos7.json (+285, -0)
  80. roles/openshift_examples/files/examples/v1.0/image-streams/image-streams-rhel7.json (+254, -0)
  81. roles/openshift_examples/files/examples/v1.0/infrastructure-templates/enterprise/logging-deployer.yaml (+0, -0)
  82. roles/openshift_examples/files/examples/v1.0/infrastructure-templates/enterprise/metrics-deployer.yaml (+116, -0)
  83. roles/openshift_examples/files/examples/v1.0/infrastructure-templates/origin/logging-deployer.yaml (+0, -0)
  84. roles/openshift_examples/files/examples/infrastructure-templates/origin/metrics-deployer.yaml (+2, -2)
  85. roles/openshift_examples/files/examples/v1.0/quickstart-templates/cakephp-mysql.json (+0, -0)
  86. roles/openshift_examples/files/examples/v1.0/quickstart-templates/cakephp.json (+0, -0)
  87. roles/openshift_examples/files/examples/v1.0/quickstart-templates/dancer-mysql.json (+0, -0)
  88. roles/openshift_examples/files/examples/v1.0/quickstart-templates/dancer.json (+0, -0)
  89. roles/openshift_examples/files/examples/v1.0/quickstart-templates/django-postgresql.json (+0, -0)
  90. roles/openshift_examples/files/examples/v1.0/quickstart-templates/django.json (+0, -0)
  91. roles/openshift_examples/files/examples/v1.0/quickstart-templates/jenkins-ephemeral-template.json (+0, -0)
  92. roles/openshift_examples/files/examples/v1.0/quickstart-templates/jenkins-persistent-template.json (+0, -0)
  93. roles/openshift_examples/files/examples/v1.0/quickstart-templates/nodejs-mongodb.json (+0, -0)
  94. roles/openshift_examples/files/examples/v1.0/quickstart-templates/nodejs.json (+0, -0)
  95. roles/openshift_examples/files/examples/v1.0/quickstart-templates/rails-postgresql.json (+0, -0)
  96. roles/openshift_examples/files/examples/v1.0/xpaas-streams/jboss-image-streams.json (+0, -0)
  97. roles/openshift_examples/files/examples/v1.0/xpaas-templates/amq62-basic.json (+0, -0)
  98. roles/openshift_examples/files/examples/v1.0/xpaas-templates/amq62-persistent-ssl.json (+0, -0)
  99. roles/openshift_examples/files/examples/v1.0/xpaas-templates/amq62-persistent.json (+0, -0)
  100. roles/openshift_examples/files/examples/xpaas-templates/amq62-ssl.json (+0, -0)

+ 1 - 1
.tito/packages/openshift-ansible

@@ -1 +1 @@
-3.0.12-1 ./
+3.0.19-1 ./

+ 25 - 3
README_AWS.md

@@ -67,12 +67,12 @@ By default, a cluster is launched with the following configuration:
 - Keypair name: libra
 - Security group: public
 
-Master specific defaults:
+#### Master specific defaults:
 - Master root volume size: 10 (in GiBs)
 - Master root volume type: gp2
 - Master root volume iops: 500 (only applicable when volume type is io1)
 
-Node specific defaults:
+#### Node specific defaults:
 - Node root volume size: 10 (in GiBs)
 - Node root volume type: gp2
 - Node root volume iops: 500 (only applicable when volume type is io1)
@@ -81,9 +81,30 @@ Node specific defaults:
 - Docker volume type: gp2 (only applicable if ephemeral is false)
 - Docker volume iops: 500 (only applicable when volume type is io1)
 
-If needed, these values can be changed by setting environment variables on your system.
+### Specifying EC2 instance types
+
+#### All instances:
 
 - export ec2_instance_type='m4.large'
+
+#### Master instances:
+
+- export ec2_master_instance_type='m4.large'
+
+#### Infra node instances:
+
+- export ec2_infra_instance_type='m4.large'
+
+#### Non-infra node instances:
+
+- export ec2_node_instance_type='m4.large'
+
+#### etcd instances:
+
+- export ec2_etcd_instance_type='m4.large'
+
+If needed, these values can be changed by setting environment variables on your system.
+
 - export ec2_image='ami-307b3658'
 - export ec2_region='us-east-1'
 - export ec2_keypair='libra'
@@ -103,6 +124,7 @@ If needed, these values can be changed by setting environment variables on your
 Install Dependencies
 --------------------
 1. Ansible requires python-boto for aws operations:
+
 RHEL/CentOS/Fedora
 ```
   yum install -y ansible python-boto pyOpenSSL

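To make the precedence concrete, here is a minimal Python sketch (illustrative only, not code from this PR) of the lookup order these variables imply: a role-specific export such as `ec2_master_instance_type` wins, and `ec2_instance_type` is the shared fallback. The helper name and the hard-coded default are assumptions.

```python
import os

def instance_type_for(role):
    """Resolve the EC2 instance type for a host role (master, infra, node, etcd).

    The role-specific variable (e.g. ec2_master_instance_type) takes
    precedence over the generic ec2_instance_type; 'm4.large' stands in
    for whatever default the playbooks actually ship with.
    """
    role_specific = os.environ.get('ec2_{0}_instance_type'.format(role))
    return role_specific or os.environ.get('ec2_instance_type', 'm4.large')

print(instance_type_for('master'))  # honors ec2_master_instance_type if exported
```
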
+ 8 - 0
README_GCE.md

@@ -43,7 +43,11 @@ Mandatory customization variables (check the values according to your tenant):
 * zone = europe-west1-d
 * network = default
 * gce_machine_type = n1-standard-2
+* gce_machine_master_type = n1-standard-1
+* gce_machine_node_type = n1-standard-2
 * gce_machine_image = preinstalled-slave-50g-v5
+* gce_machine_master_image = preinstalled-slave-50g-v5
+* gce_machine_node_image = preinstalled-slave-50g-v5
 
 
 1. vi ~/.gce/gce.ini
@@ -56,7 +60,11 @@ gce_project_id = project_id
 zone = europe-west1-d
 network = default
 gce_machine_type = n1-standard-2
+gce_machine_master_type = n1-standard-1
+gce_machine_node_type = n1-standard-2
 gce_machine_image = preinstalled-slave-50g-v5
+gce_machine_master_image = preinstalled-slave-50g-v5
+gce_machine_node_image = preinstalled-slave-50g-v5
 
 ```
 1. Define the environment variable GCE_INI_PATH so gce.py can pick it up and bin/cluster can also read it

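The GCE variables follow the same pattern as the AWS ones above: a per-role key falls back to the generic key. A minimal sketch of how gce.py-style code might resolve them (not from this PR; the `[gce]` section name and the helper are assumptions):

```python
import os
try:
    import configparser                    # Python 3
except ImportError:
    import ConfigParser as configparser    # Python 2

config = configparser.ConfigParser()
config.read(os.path.expanduser('~/.gce/gce.ini'))

def gce_option(role_key, generic_key, section='gce'):
    '''Prefer the role-specific option, falling back to the generic one.'''
    if config.has_option(section, role_key):
        return config.get(section, role_key)
    return config.get(section, generic_key)

machine_type = gce_option('gce_machine_master_type', 'gce_machine_type')
image = gce_option('gce_machine_master_image', 'gce_machine_image')
```
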
+ 1 - 0
README_openstack.md

@@ -31,6 +31,7 @@ The following options are used only by `heat_stack.yaml`. They are so used only
 
 * `image_name`: Name of the image to use to spawn VMs
 * `public_key` (default to `~/.ssh/id_rsa.pub`): filename of the ssh public key
+* `etcd_flavor` (default to `m1.small`): The ID or name of the flavor for the etcd nodes
 * `master_flavor` (default to `m1.small`): The ID or name of the flavor for the master
 * `node_flavor` (default to `m1.medium`): The ID or name of the flavor for the compute nodes
 * `infra_flavor` (default to `m1.small`): The ID or name of the flavor for the infrastructure nodes

+ 15 - 0
README_origin.md

@@ -39,6 +39,12 @@ subscription-manager repos \
 ```
 * Configuration of router is not automated yet
 * Configuration of docker-registry is not automated yet
+* Fedora 23+ doesn't come with python2 and will need a quick bootstrap. Set up
+  your inventory as described below and run the following (substituting the
+  `$PATH_TO_INVENTORY_FILE` with the actual path to your inventory file):
+```sh
+ansible-playbook ./playbooks/adhoc/bootstrap-fedora.yml -i $PATH_TO_INVENTORY_FILE
+```
 
 ## Configuring the host inventory
 [Ansible docs](http://docs.ansible.com/intro_inventory.html)
@@ -59,6 +65,7 @@ nodes
 
 # Set variables common for all OSEv3 hosts
 [OSv3:vars]
+
 # SSH user, this user should allow ssh based auth without requiring a password
 ansible_ssh_user=root
 
@@ -75,6 +82,14 @@ osv3-master.example.com
 [nodes]
 osv3-master.example.com
 osv3-node[1:2].example.com
+
+# host group for etcd
+[etcd]
+osv3-etcd[1:3].example.com
+
+[lb]
+osv3-lb.example.com
+
 ```
 
 The hostnames above should resolve both from the hosts themselves and

+ 35 - 10
bin/cluster

@@ -67,6 +67,21 @@ class Cluster(object):
 
         self.action(args, inventory, env, playbook)
 
+    def addNodes(self, args):
+        """
+        Add nodes to an existing cluster for given provider
+        :param args: command line arguments provided by user
+        """
+        env = {'cluster_id': args.cluster_id,
+               'deployment_type': self.get_deployment_type(args)}
+        playbook = "playbooks/{0}/openshift-cluster/addNodes.yml".format(args.provider)
+        inventory = self.setup_provider(args.provider)
+
+        env['num_nodes'] = args.nodes
+        env['num_infra'] = args.infra
+
+        self.action(args, inventory, env, playbook)
+
     def terminate(self, args):
         """
         Destroy OpenShift cluster
@@ -163,7 +178,7 @@ class Cluster(object):
             boto_configs = [conf for conf in boto_conf_files if conf_exists(conf)]
 
             if len(key_missing) > 0 and len(boto_configs) == 0:
-                raise ValueError("PROVIDER aws requires {} environment variable(s). See README_AWS.md".format(key_missing))
+                raise ValueError("PROVIDER aws requires {0} environment variable(s). See README_AWS.md".format(key_missing))
 
         elif 'libvirt' == provider:
             inventory = '-i inventory/libvirt/hosts'
@@ -171,7 +186,7 @@ class Cluster(object):
             inventory = '-i inventory/openstack/hosts'
         else:
             # this code should never be reached
-            raise ValueError("invalid PROVIDER {}".format(provider))
+            raise ValueError("invalid PROVIDER {0}".format(provider))
 
         return inventory
 
@@ -186,18 +201,18 @@ class Cluster(object):
 
         verbose = ''
         if args.verbose > 0:
-            verbose = '-{}'.format('v' * args.verbose)
+            verbose = '-{0}'.format('v' * args.verbose)
 
         if args.option:
             for opt in args.option:
                 k, v = opt.split('=', 1)
                 env['cli_' + k] = v
 
-        ansible_env = '-e \'{}\''.format(
+        ansible_env = '-e \'{0}\''.format(
             ' '.join(['%s=%s' % (key, value) for (key, value) in env.items()])
         )
 
-        command = 'ansible-playbook {} {} {} {}'.format(
+        command = 'ansible-playbook {0} {1} {2} {3}'.format(
             verbose, inventory, ansible_env, playbook
         )
 
@@ -205,16 +220,16 @@ class Cluster(object):
             command = 'ANSIBLE_CALLBACK_PLUGINS=ansible-profile/callback_plugins ' + command
 
         if args.verbose > 1:
-            command = 'time {}'.format(command)
+            command = 'time {0}'.format(command)
 
         if args.verbose > 0:
-            sys.stderr.write('RUN [{}]\n'.format(command))
+            sys.stderr.write('RUN [{0}]\n'.format(command))
             sys.stderr.flush()
 
         try:
             subprocess.check_call(command, shell=True)
         except subprocess.CalledProcessError as exc:
-            raise ActionFailed("ACTION [{}] failed: {}"
+            raise ActionFailed("ACTION [{0}] failed: {1}"
                                .format(args.action, exc))
 
 
@@ -292,6 +307,16 @@ if __name__ == '__main__':
                                help='number of external etcd hosts to create in cluster')
     create_parser.set_defaults(func=cluster.create)
 
+
+    create_parser = action_parser.add_parser('addNodes', help='Add nodes to a cluster',
+                                             parents=[meta_parser])
+    create_parser.add_argument('-n', '--nodes', default=1, type=int,
+                               help='number of nodes to add to the cluster')
+    create_parser.add_argument('-i', '--infra', default=1, type=int,
+                               help='number of infra nodes to add to the cluster')
+    create_parser.set_defaults(func=cluster.addNodes)
+
+
     config_parser = action_parser.add_parser('config',
                                              help='Configure or reconfigure a cluster',
                                              parents=[meta_parser])
@@ -325,14 +350,14 @@ if __name__ == '__main__':
     args = parser.parse_args()
 
     if 'terminate' == args.action and not args.force:
-        answer = raw_input("This will destroy the ENTIRE {} environment. Are you sure? [y/N] ".format(args.cluster_id))
+        answer = raw_input("This will destroy the ENTIRE {0} environment. Are you sure? [y/N] ".format(args.cluster_id))
         if answer not in ['y', 'Y']:
             sys.stderr.write('\nACTION [terminate] aborted by user!\n')
             exit(1)
 
     if 'update' == args.action and not args.force:
         answer = raw_input(
-            "This is destructive and could corrupt {} environment. Continue? [y/N] ".format(args.cluster_id))
+            "This is destructive and could corrupt {0} environment. Continue? [y/N] ".format(args.cluster_id))
         if answer not in ['y', 'Y']:
             sys.stderr.write('\nACTION [update] aborted by user!\n')
             exit(1)
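
A note on the `'{}'` to `'{0}'` churn in this file: it reads like a Python 2.6 compatibility fix rather than a behavior change. Implicit field numbering in `str.format` only arrived in Python 2.7/3.1, so on a 2.6 control host the old strings raise `ValueError`. A quick illustration:

```python
# On Python 2.7+ both lines print; on 2.6 only the first does.
print('RUN [{0}]'.format('ansible-playbook site.yml'))  # explicit index: 2.6-safe
print('RUN [{}]'.format('ansible-playbook site.yml'))   # ValueError on 2.6:
                                                        # zero length field name in format
```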

+ 22 - 13
filter_plugins/oo_filters.py

@@ -191,7 +191,11 @@ class FilterModule(object):
                     { 'root':
                         { 'volume_size': 10, 'device_type': 'gp2',
                           'iops': 500
-                        }
+                        },
+                        'docker':
+                          { 'volume_size': 40, 'device_type': 'gp2',
+                            'iops': 500, 'ephemeral': 'true'
+                          }
                     },
                   'node':
                     { 'root':
@@ -216,7 +220,7 @@ class FilterModule(object):
         root_vol['delete_on_termination'] = True
         if root_vol['device_type'] != 'io1':
             root_vol.pop('iops', None)
-        if host_type == 'node':
+        if host_type in ['master', 'node'] and 'docker' in data[host_type]:
             docker_vol = data[host_type]['docker']
             docker_vol['device_name'] = '/dev/xvdb'
             docker_vol['delete_on_termination'] = True
@@ -227,7 +231,7 @@ class FilterModule(object):
                 docker_vol.pop('delete_on_termination', None)
                 docker_vol['ephemeral'] = 'ephemeral0'
             return [root_vol, docker_vol]
-        elif host_type == 'etcd':
+        elif host_type == 'etcd' and 'etcd' in data[host_type]:
             etcd_vol = data[host_type]['etcd']
             etcd_vol['device_name'] = '/dev/xvdb'
             etcd_vol['delete_on_termination'] = True
@@ -346,27 +350,27 @@ class FilterModule(object):
 
     @staticmethod
     # pylint: disable=too-many-branches
-    def oo_parse_certificate_names(certificates, data_dir, internal_hostnames):
+    def oo_parse_named_certificates(certificates, named_certs_dir, internal_hostnames):
         ''' Parses names from list of certificate hashes.
 
-            Ex: certificates = [{ "certfile": "/etc/origin/master/custom1.crt",
-                                  "keyfile": "/etc/origin/master/custom1.key" },
+            Ex: certificates = [{ "certfile": "/root/custom1.crt",
+                                  "keyfile": "/root/custom1.key" },
                                 { "certfile": "custom2.crt",
                                   "keyfile": "custom2.key" }]
 
-                returns [{ "certfile": "/etc/origin/master/custom1.crt",
-                           "keyfile": "/etc/origin/master/custom1.key",
+                returns [{ "certfile": "/etc/origin/master/named_certificates/custom1.crt",
+                           "keyfile": "/etc/origin/master/named_certificates/custom1.key",
                            "names": [ "public-master-host.com",
                                       "other-master-host.com" ] },
-                         { "certfile": "/etc/origin/master/custom2.crt",
-                           "keyfile": "/etc/origin/master/custom2.key",
+                         { "certfile": "/etc/origin/master/named_certificates/custom2.crt",
+                           "keyfile": "/etc/origin/master/named_certificates/custom2.key",
                            "names": [ "some-hostname.com" ] }]
         '''
         if not issubclass(type(certificates), list):
             raise errors.AnsibleFilterError("|failed expects certificates is a list")
 
-        if not issubclass(type(data_dir), unicode):
-            raise errors.AnsibleFilterError("|failed expects data_dir is unicode")
+        if not issubclass(type(named_certs_dir), unicode):
+            raise errors.AnsibleFilterError("|failed expects named_certs_dir is unicode")
 
         if not issubclass(type(internal_hostnames), list):
             raise errors.AnsibleFilterError("|failed expects internal_hostnames is list")
@@ -399,6 +403,11 @@ class FilterModule(object):
                 raise errors.AnsibleFilterError(("|failed to parse certificate '%s' or " % certificate['certfile'] +
                                                  "detected a collision with internal hostname, please specify " +
                                                  "certificate names in host inventory"))
+
+        for certificate in certificates:
+            # Update paths for configuration
+            certificate['certfile'] = os.path.join(named_certs_dir, os.path.basename(certificate['certfile']))
+            certificate['keyfile'] = os.path.join(named_certs_dir, os.path.basename(certificate['keyfile']))
         return certificates
 
     @staticmethod
@@ -474,7 +483,7 @@ class FilterModule(object):
             "oo_split": self.oo_split,
             "oo_filter_list": self.oo_filter_list,
             "oo_parse_heat_stack_outputs": self.oo_parse_heat_stack_outputs,
-            "oo_parse_certificate_names": self.oo_parse_certificate_names,
+            "oo_parse_named_certificates": self.oo_parse_named_certificates,
             "oo_haproxy_backend_masters": self.oo_haproxy_backend_masters,
             "oo_pretty_print_cluster": self.oo_pretty_print_cluster
         }

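The behavioral core of the rename above is the new path-rewriting pass at the end of the filter. A standalone sketch of just that step, using the values from the docstring example:

```python
import os

named_certs_dir = '/etc/origin/master/named_certificates/'
certificates = [{'certfile': '/root/custom1.crt', 'keyfile': '/root/custom1.key'}]

# Whatever path the inventory supplied, keep only the basename and re-root it
# under the master's named_certificates directory, as the filter now does.
for certificate in certificates:
    certificate['certfile'] = os.path.join(named_certs_dir, os.path.basename(certificate['certfile']))
    certificate['keyfile'] = os.path.join(named_certs_dir, os.path.basename(certificate['keyfile']))

print(certificates)
# [{'certfile': '/etc/origin/master/named_certificates/custom1.crt',
#   'keyfile': '/etc/origin/master/named_certificates/custom1.key'}]
```
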
+ 469 - 0
filter_plugins/openshift_master.py

@@ -0,0 +1,469 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# vim: expandtab:tabstop=4:shiftwidth=4
+'''
+Custom filters for use in openshift-master
+'''
+import copy
+import sys
+import yaml
+
+from ansible import errors
+from ansible.runner.filter_plugins.core import bool as ansible_bool
+
+
+class IdentityProviderBase(object):
+    """ IdentityProviderBase
+
+        Attributes:
+            name (str): Identity provider Name
+            login (bool): Is this identity provider a login provider?
+            challenge (bool): Is this identity provider a challenge provider?
+            provider (dict): Provider specific config
+            _idp (dict): internal copy of the IDP dict passed in
+            _required (list): List of lists of strings for required attributes
+            _optional (list): List of lists of strings for optional attributes
+            _allow_additional (bool): Does this provider support attributes
+                not in _required and _optional
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    # disabling this check since the number of instance attributes are
+    # necessary for this class
+    # pylint: disable=too-many-instance-attributes
+    def __init__(self, api_version, idp):
+        if api_version not in ['v1']:
+            raise errors.AnsibleFilterError("|failed api version {0} unknown".format(api_version))
+
+        self._idp = copy.deepcopy(idp)
+
+        if 'name' not in self._idp:
+            raise errors.AnsibleFilterError("|failed identity provider missing a name")
+
+        if 'kind' not in self._idp:
+            raise errors.AnsibleFilterError("|failed identity provider missing a kind")
+
+        self.name = self._idp.pop('name')
+        self.login = ansible_bool(self._idp.pop('login', False))
+        self.challenge = ansible_bool(self._idp.pop('challenge', False))
+        self.provider = dict(apiVersion=api_version, kind=self._idp.pop('kind'))
+
+        self._required = [['mappingMethod', 'mapping_method']]
+        self._optional = []
+        self._allow_additional = True
+
+    @staticmethod
+    def validate_idp_list(idp_list):
+        ''' validates a list of idps '''
+        login_providers = [x.name for x in idp_list if x.login]
+        if len(login_providers) > 1:
+            raise errors.AnsibleFilterError("|failed multiple providers are "
+                                            "not allowed for login. login "
+                                            "providers: {0}".format(', '.join(login_providers)))
+
+        names = [x.name for x in idp_list]
+        if len(set(names)) != len(names):
+            raise errors.AnsibleFilterError("|failed more than one provider configured with the same name")
+
+        for idp in idp_list:
+            idp.validate()
+
+    def validate(self):
+        ''' validate an instance of this idp class '''
+        valid_mapping_methods = ['add', 'claim', 'generate', 'lookup']
+        if self.provider['mappingMethod'] not in valid_mapping_methods:
+            raise errors.AnsibleFilterError("|failed unkown mapping method "
+                                            "for provider {0}".format(self.__class__.__name__))
+
+    @staticmethod
+    def get_default(key):
+        ''' get a default value for a given key '''
+        if key == 'mappingMethod':
+            return 'claim'
+        else:
+            return None
+
+    def set_provider_item(self, items, required=False):
+        ''' set a provider item based on the list of item names provided. '''
+        for item in items:
+            provider_key = items[0]
+            if item in self._idp:
+                self.provider[provider_key] = self._idp.pop(item)
+                break
+        else:
+            default = self.get_default(provider_key)
+            if default is not None:
+                self.provider[provider_key] = default
+            elif required:
+                raise errors.AnsibleFilterError("|failed provider {0} missing "
+                                                "required key {1}".format(self.__class__.__name__, provider_key))
+
+    def set_provider_items(self):
+        ''' set the provider items for this idp '''
+        for items in self._required:
+            self.set_provider_item(items, True)
+        for items in self._optional:
+            self.set_provider_item(items)
+        if self._allow_additional:
+            for key in self._idp.keys():
+                self.set_provider_item([key])
+        else:
+            if len(self._idp) > 0:
+                raise errors.AnsibleFilterError("|failed provider {0} "
+                                                "contains unknown keys "
+                                                "{1}".format(self.__class__.__name__, ', '.join(self._idp.keys())))
+
+    def to_dict(self):
+        ''' translate this idp to a dictionary '''
+        return dict(name=self.name, challenge=self.challenge,
+                    login=self.login, provider=self.provider)
+
+
+class LDAPPasswordIdentityProvider(IdentityProviderBase):
+    """ LDAPPasswordIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+        self._required += [['attributes'], ['url'], ['insecure']]
+        self._optional += [['ca'],
+                           ['bindDN', 'bind_dn'],
+                           ['bindPassword', 'bind_password']]
+
+        self._idp['insecure'] = ansible_bool(self._idp.pop('insecure', False))
+
+        if 'attributes' in self._idp and 'preferred_username' in self._idp['attributes']:
+            pref_user = self._idp['attributes'].pop('preferred_username')
+            self._idp['attributes']['preferredUsername'] = pref_user
+
+    def validate(self):
+        ''' validate this idp instance '''
+        IdentityProviderBase.validate(self)
+        if not isinstance(self.provider['attributes'], dict):
+            raise errors.AnsibleFilterError("|failed attributes for provider "
+                                            "{0} must be a dictionary".format(self.__class__.__name__))
+
+        attrs = ['id', 'email', 'name', 'preferredUsername']
+        for attr in attrs:
+            if attr in self.provider['attributes'] and not isinstance(self.provider['attributes'][attr], list):
+                raise errors.AnsibleFilterError("|failed {0} attribute for "
+                                                "provider {1} must be a list".format(attr, self.__class__.__name__))
+
+        unknown_attrs = set(self.provider['attributes'].keys()) - set(attrs)
+        if len(unknown_attrs) > 0:
+            raise errors.AnsibleFilterError("|failed provider {0} has unknown "
+                                            "attributes: {1}".format(self.__class__.__name__, ', '.join(unknown_attrs)))
+
+
+class KeystonePasswordIdentityProvider(IdentityProviderBase):
+    """ KeystoneIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+        self._required += [['url'], ['domainName', 'domain_name']]
+        self._optional += [['ca'], ['certFile', 'cert_file'], ['keyFile', 'key_file']]
+
+
+class RequestHeaderIdentityProvider(IdentityProviderBase):
+    """ RequestHeaderIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+        self._required += [['headers']]
+        self._optional += [['challengeURL', 'challenge_url'],
+                           ['loginURL', 'login_url'],
+                           ['clientCA', 'client_ca']]
+
+    def validate(self):
+        ''' validate this idp instance '''
+        IdentityProviderBase.validate(self)
+        if not isinstance(self.provider['headers'], list):
+            raise errors.AnsibleFilterError("|failed headers for provider {0} "
+                                            "must be a list".format(self.__class__.__name__))
+
+
+class AllowAllPasswordIdentityProvider(IdentityProviderBase):
+    """ AllowAllPasswordIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+
+
+class DenyAllPasswordIdentityProvider(IdentityProviderBase):
+    """ DenyAllPasswordIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+
+
+class HTPasswdPasswordIdentityProvider(IdentityProviderBase):
+    """ HTPasswdPasswordIdentity
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+        self._required += [['file', 'filename', 'fileName', 'file_name']]
+
+    @staticmethod
+    def get_default(key):
+        if key == 'file':
+            return '/etc/origin/htpasswd'
+        else:
+            return IdentityProviderBase.get_default(key)
+
+
+class BasicAuthPasswordIdentityProvider(IdentityProviderBase):
+    """ BasicAuthPasswordIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+        self._required += [['url']]
+        self._optional += [['ca'], ['certFile', 'cert_file'], ['keyFile', 'key_file']]
+
+
+class IdentityProviderOauthBase(IdentityProviderBase):
+    """ IdentityProviderOauthBase
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderBase.__init__(self, api_version, idp)
+        self._allow_additional = False
+        self._required += [['clientID', 'client_id'], ['clientSecret', 'client_secret']]
+
+    def validate(self):
+        ''' validate this idp instance '''
+        IdentityProviderBase.validate(self)
+        if self.challenge:
+            raise errors.AnsibleFilterError("|failed provider {0} does not "
+                                            "allow challenge authentication".format(self.__class__.__name__))
+
+
+class OpenIDIdentityProvider(IdentityProviderOauthBase):
+    """ OpenIDIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderOauthBase.__init__(self, api_version, idp)
+        self._required += [['claims'], ['urls']]
+        self._optional += [['ca'],
+                           ['extraScopes'],
+                           ['extraAuthorizeParameters']]
+        if 'claims' in self._idp and 'preferred_username' in self._idp['claims']:
+            pref_user = self._idp['claims'].pop('preferred_username')
+            self._idp['claims']['preferredUsername'] = pref_user
+        if 'urls' in self._idp and 'user_info' in self._idp['urls']:
+            user_info = self._idp['urls'].pop('user_info')
+            self._idp['urls']['userInfo'] = user_info
+        if 'extra_scopes' in self._idp:
+            self._idp['extraScopes'] = self._idp.pop('extra_scopes')
+        if 'extra_authorize_parameters' in self._idp:
+            self._idp['extraAuthorizeParameters'] = self._idp.pop('extra_authorize_parameters')
+
+        if 'extraAuthorizeParameters' in self._idp:
+            if 'include_granted_scopes' in self._idp['extraAuthorizeParameters']:
+                val = ansible_bool(self._idp['extraAuthorizeParameters'].pop('include_granted_scopes'))
+                self._idp['extraAuthorizeParameters']['include_granted_scopes'] = val
+
+
+    def validate(self):
+        ''' validate this idp instance '''
+        IdentityProviderOauthBase.validate(self)
+        if not isinstance(self.provider['claims'], dict):
+            raise errors.AnsibleFilterError("|failed claims for provider {0} "
+                                            "must be a dictionary".format(self.__class__.__name__))
+
+        if 'extraScopes' in self.provider and not isinstance(self.provider['extraScopes'], list):
+            raise errors.AnsibleFilterError("|failed extraScopes for provider "
+                                            "{0} must be a list".format(self.__class__.__name__))
+        if ('extraAuthorizeParameters' in self.provider
+                and not isinstance(self.provider['extraAuthorizeParameters'], dict)):
+            raise errors.AnsibleFilterError("|failed extraAuthorizeParameters "
+                                            "for provider {0} must be a dictionary".format(self.__class__.__name__))
+
+        required_claims = ['id']
+        optional_claims = ['email', 'name', 'preferredUsername']
+        all_claims = required_claims + optional_claims
+
+        for claim in required_claims:
+            if claim not in self.provider['claims']:
+                raise errors.AnsibleFilterError("|failed {0} claim missing "
+                                                "for provider {1}".format(claim, self.__class__.__name__))
+
+        for claim in all_claims:
+            if claim in self.provider['claims'] and not isinstance(self.provider['claims'][claim], list):
+                raise errors.AnsibleFilterError("|failed {0} claims for "
+                                                "provider {1} must be a list".format(claim, self.__class__.__name__))
+
+        unknown_claims = set(self.provider['claims'].keys()) - set(all_claims)
+        if len(unknown_claims) > 0:
+            raise errors.AnsibleFilterError("|failed provider {0} has unknown "
+                                            "claims: {1}".format(self.__class__.__name__, ', '.join(unknown_claims)))
+
+        if not isinstance(self.provider['urls'], dict):
+            raise errors.AnsibleFilterError("|failed urls for provider {0} "
+                                            "must be a dictionary".format(self.__class__.__name__))
+
+        required_urls = ['authorize', 'token']
+        optional_urls = ['userInfo']
+        all_urls = required_urls + optional_urls
+
+        for url in required_urls:
+            if url not in self.provider['urls']:
+                raise errors.AnsibleFilterError("|failed {0} url missing for "
+                                                "provider {1}".format(url, self.__class__.__name__))
+
+        unknown_urls = set(self.provider['urls'].keys()) - set(all_urls)
+        if len(unknown_urls) > 0:
+            raise errors.AnsibleFilterError("|failed provider {0} has unknown "
+                                            "urls: {1}".format(self.__class__.__name__, ', '.join(unknown_urls)))
+
+
+class GoogleIdentityProvider(IdentityProviderOauthBase):
+    """ GoogleIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    def __init__(self, api_version, idp):
+        IdentityProviderOauthBase.__init__(self, api_version, idp)
+        self._optional += [['hostedDomain', 'hosted_domain']]
+
+
+class GitHubIdentityProvider(IdentityProviderOauthBase):
+    """ GitHubIdentityProvider
+
+        Attributes:
+
+        Args:
+            api_version(str): OpenShift config version
+            idp (dict): idp config dict
+
+        Raises:
+            AnsibleFilterError:
+    """
+    pass
+
+
+class FilterModule(object):
+    ''' Custom ansible filters for use by the openshift_master role'''
+
+    @staticmethod
+    def translate_idps(idps, api_version):
+        ''' Translates a list of dictionaries into a valid identityProviders config '''
+        idp_list = []
+
+        if not isinstance(idps, list):
+            raise errors.AnsibleFilterError("|failed expects to filter on a list of identity providers")
+        for idp in idps:
+            if not isinstance(idp, dict):
+                raise errors.AnsibleFilterError("|failed identity providers must be a list of dictionaries")
+
+            cur_module = sys.modules[__name__]
+            idp_class = getattr(cur_module, idp['kind'], None)
+            idp_inst = idp_class(api_version, idp) if idp_class is not None else IdentityProviderBase(api_version, idp)
+            idp_inst.set_provider_items()
+            idp_list.append(idp_inst)
+
+
+        IdentityProviderBase.validate_idp_list(idp_list)
+        return yaml.safe_dump([idp.to_dict() for idp in idp_list], default_flow_style=False)
+
+
+    def filters(self):
+        ''' returns a mapping of filters to methods '''
+        return {"translate_idps": self.translate_idps}

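For orientation, a minimal usage sketch of the new filter (assuming the module above is importable; the htpasswd values are illustrative). Inventory-style keys are normalized, aliases such as `filename` map onto the config key `file`, and `mappingMethod` picks up its `claim` default:

```python
idps = [{
    'name': 'htpasswd_auth',
    'kind': 'HTPasswdPasswordIdentityProvider',
    'login': 'true',
    'challenge': 'true',
    'filename': '/etc/origin/htpasswd',  # alias for 'file' (see _required above)
}]

print(FilterModule.translate_idps(idps, 'v1'))
# Yields YAML along these lines:
# - challenge: true
#   login: true
#   name: htpasswd_auth
#   provider:
#     apiVersion: v1
#     file: /etc/origin/htpasswd
#     kind: HTPasswdPasswordIdentityProvider
#     mappingMethod: claim
```
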
+ 95 - 2
inventory/aws/hosts/ec2.ini

@@ -24,24 +24,61 @@ regions_exclude = us-gov-west-1,cn-north-1
 # This is the normal destination variable to use. If you are running Ansible
 # from outside EC2, then 'public_dns_name' makes the most sense. If you are
 # running Ansible from within EC2, then perhaps you want to use the internal
-# address, and should set this to 'private_dns_name'.
+# address, and should set this to 'private_dns_name'. The key of an EC2 tag
+# may optionally be used; however the boto instance variables hold precedence
+# in the event of a collision.
 destination_variable = public_dns_name
 
 # For server inside a VPC, using DNS names may not make sense. When an instance
 # has 'subnet_id' set, this variable is used. If the subnet is public, setting
 # this to 'ip_address' will return the public IP address. For instances in a
 # private subnet, this should be set to 'private_ip_address', and Ansible must
-# be run from with EC2.
+# be run from within EC2. The key of an EC2 tag may optionally be used; however
+# the boto instance variables hold precedence in the event of a collision.
+# WARNING: instances in a private VPC _without_ a public IP address
+# will not be listed in the inventory until you set:
+# vpc_destination_variable = 'private_ip_address'
 vpc_destination_variable = ip_address
 
 # To tag instances on EC2 with the resource records that point to them from
 # Route53, uncomment and set 'route53' to True.
 route53 = False
 
+# To exclude RDS instances from the inventory, uncomment and set to False.
+#rds = False
+
+# To exclude ElastiCache instances from the inventory, uncomment and set to False.
+#elasticache = False
+
 # Additionally, you can specify the list of zones to exclude looking up in
 # 'route53_excluded_zones' as a comma-separated list.
 # route53_excluded_zones = samplezone1.com, samplezone2.com
 
+# By default, only EC2 instances in the 'running' state are returned. Set
+# 'all_instances' to True to return all instances regardless of state.
+all_instances = False
+
+# By default, only EC2 instances in the 'running' state are returned. Specify
+# EC2 instance states to return as a comma-separated list. This
+# option is overridden when 'all_instances' is True.
+# instance_states = pending, running, shutting-down, terminated, stopping, stopped
+
+# By default, only RDS instances in the 'available' state are returned. Set
+# 'all_rds_instances' to True to return all RDS instances regardless of state.
+all_rds_instances = False
+
+# By default, only ElastiCache clusters and nodes in the 'available' state
+# are returned. Set 'all_elasticache_clusters' and/or 'all_elasticache_nodes'
+# to True to return all ElastiCache clusters and nodes, regardless of state.
+#
+# Note that all_elasticache_nodes only applies to listed clusters. That means
+# if you set all_elasticache_clusters to False, no nodes will be returned from
+# unavailable clusters, regardless of their state and of what you set for
+# all_elasticache_nodes.
+all_elasticache_replication_groups = False
+all_elasticache_clusters = False
+all_elasticache_nodes = False
+
 # API calls to EC2 are slow. For this reason, we cache the results of an API
 # call. Set this to the path you want cache files to be written to. Two files
 # will be written to this directory:
@@ -60,3 +97,59 @@ cache_max_age = 300
 # destination_variable and vpc_destination_variable.
 # destination_format = {0}.{1}.rhcloud.com
 # destination_format_tags = Name,environment
+
+# Organize groups into a nested hierarchy instead of a flat namespace.
+nested_groups = False
+
+# Replace dashes ('-') when creating group names to avoid issues with ansible
+replace_dash_in_groups = False
+
+# The EC2 inventory output can become very large. To manage its size,
+# configure which groups should be created.
+group_by_instance_id = True
+group_by_region = True
+group_by_availability_zone = True
+group_by_ami_id = True
+group_by_instance_type = True
+group_by_key_pair = True
+group_by_vpc_id = True
+group_by_security_group = True
+group_by_tag_keys = True
+group_by_tag_none = True
+group_by_route53_names = True
+group_by_rds_engine = True
+group_by_rds_parameter_group = True
+group_by_elasticache_engine = True
+group_by_elasticache_cluster = True
+group_by_elasticache_parameter_group = True
+group_by_elasticache_replication_group = True
+
+# If you only want to include hosts that match a certain regular expression
+# pattern_include = staging-*
+
+# If you want to exclude any hosts that match a certain regular expression
+# pattern_exclude = staging-*
+
+# Instance filters can be used to control which instances are retrieved for
+# inventory. For the full list of possible filters, please read the EC2 API
+# docs: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html#query-DescribeInstances-filters
+# Filters are key/value pairs separated by '=', to list multiple filters use
+# a list separated by commas. See examples below.
+
+# Retrieve only instances with (key=value) env=staging tag
+# instance_filters = tag:env=staging
+
+# Retrieve only instances with role=webservers OR role=dbservers tag
+# instance_filters = tag:role=webservers,tag:role=dbservers
+
+# Retrieve only t1.micro instances OR instances with tag env=staging
+# instance_filters = instance-type=t1.micro,tag:env=staging
+
+# You can use wildcards in filter values as well. The example below lists
+# instances whose Name tag value matches webservers1*
+# (e.g. webservers15, webservers1a, webservers123, etc.)
+# instance_filters = tag:Name=webservers1*
+
+# A boto configuration profile may be used to separate out credentials
+# see http://boto.readthedocs.org/en/latest/boto_config_tut.html
+# boto_profile = some-boto-profile-name

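A small sketch (not the inventory script itself) of how an `instance_filters` line like the examples above can be parsed into the dict that boto's `get_all_instances` accepts; EC2 ORs multiple values supplied for the same filter key:

```python
from collections import defaultdict

raw = 'tag:role=webservers,tag:role=dbservers'

filters = defaultdict(list)
for item in raw.split(','):
    key, value = item.split('=', 1)
    filters[key.strip()].append(value.strip())

print(dict(filters))  # {'tag:role': ['webservers', 'dbservers']}
# later, roughly: conn.get_all_instances(filters=dict(filters))
```
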
+ 585 - 60
inventory/aws/hosts/ec2.py

@@ -22,6 +22,12 @@ you need to define:
 
     export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
 
+If you're using boto profiles (requires boto>=2.24.0) you can choose a profile
+using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using
+the AWS_PROFILE variable:
+
+    AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml
+
 For more details, see: http://docs.pythonboto.org/en/latest/boto_config_tut.html
 
 When run against a specific host, this script returns the following variables:
@@ -121,8 +127,11 @@ from time import time
 import boto
 from boto import ec2
 from boto import rds
+from boto import elasticache
 from boto import route53
-import ConfigParser
+import six
+
+from six.moves import configparser
 from collections import defaultdict
 
 try:
@@ -145,9 +154,18 @@ class Ec2Inventory(object):
         # Index of hostname (address) to instance ID
         self.index = {}
 
+        # Boto profile to use (if any)
+        self.boto_profile = None
+
         # Read settings and parse CLI arguments
-        self.read_settings()
         self.parse_cli_args()
+        self.read_settings()
+
+        # Make sure that profile_name is not passed at all if not set
+        # as pre 2.24 boto will fall over otherwise
+        if self.boto_profile:
+            if not hasattr(boto.ec2.EC2Connection, 'profile_name'):
+                self.fail_with_error("boto version must be >= 2.24 to use profile")
 
         # Cache
         if self.args.refresh_cache:
@@ -166,7 +184,7 @@ class Ec2Inventory(object):
             else:
                 data_to_print = self.json_format_dict(self.inventory, True)
 
-        print data_to_print
+        print(data_to_print)
 
 
     def is_cache_valid(self):
@@ -184,10 +202,12 @@ class Ec2Inventory(object):
 
     def read_settings(self):
         ''' Reads the settings from the ec2.ini file '''
-
-        config = ConfigParser.SafeConfigParser()
+        if six.PY3:
+            config = configparser.ConfigParser()
+        else:
+            config = configparser.SafeConfigParser()
         ec2_default_ini_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ec2.ini')
-        ec2_ini_path = os.environ.get('EC2_INI_PATH', ec2_default_ini_path)
+        ec2_ini_path = os.path.expanduser(os.path.expandvars(os.environ.get('EC2_INI_PATH', ec2_default_ini_path)))
         config.read(ec2_ini_path)
 
         # is eucalyptus?
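
An aside on the hunk above (illustrative, not part of the diff): `six.moves` papers over the `ConfigParser`/`configparser` rename so one import serves both interpreters, and the `six.PY3` branch avoids `SafeConfigParser`, which Python 3 deprecates in favor of `ConfigParser`:

```python
import six
from six.moves import configparser  # ConfigParser on py2, configparser on py3

config = configparser.ConfigParser() if six.PY3 else configparser.SafeConfigParser()
```
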
@@ -236,18 +256,72 @@ class Ec2Inventory(object):
         if config.has_option('ec2', 'rds'):
             self.rds_enabled = config.getboolean('ec2', 'rds')
 
-        # Return all EC2 and RDS instances (if RDS is enabled)
+        # Include ElastiCache instances?
+        self.elasticache_enabled = True
+        if config.has_option('ec2', 'elasticache'):
+            self.elasticache_enabled = config.getboolean('ec2', 'elasticache')
+
+        # Return all EC2 instances?
         if config.has_option('ec2', 'all_instances'):
             self.all_instances = config.getboolean('ec2', 'all_instances')
         else:
             self.all_instances = False
+
+        # Instance states to be gathered in inventory. Default is 'running'.
+        # Setting 'all_instances' to 'yes' overrides this option.
+        ec2_valid_instance_states = [
+            'pending',
+            'running',
+            'shutting-down',
+            'terminated',
+            'stopping',
+            'stopped'
+        ]
+        self.ec2_instance_states = []
+        if self.all_instances:
+            self.ec2_instance_states = ec2_valid_instance_states
+        elif config.has_option('ec2', 'instance_states'):
+            for instance_state in config.get('ec2', 'instance_states').split(','):
+                instance_state = instance_state.strip()
+                if instance_state not in ec2_valid_instance_states:
+                    continue
+                self.ec2_instance_states.append(instance_state)
+        else:
+            self.ec2_instance_states = ['running']
+
+        # Return all RDS instances? (if RDS is enabled)
         if config.has_option('ec2', 'all_rds_instances') and self.rds_enabled:
             self.all_rds_instances = config.getboolean('ec2', 'all_rds_instances')
         else:
             self.all_rds_instances = False
 
+        # Return all ElastiCache replication groups? (if ElastiCache is enabled)
+        if config.has_option('ec2', 'all_elasticache_replication_groups') and self.elasticache_enabled:
+            self.all_elasticache_replication_groups = config.getboolean('ec2', 'all_elasticache_replication_groups')
+        else:
+            self.all_elasticache_replication_groups = False
+
+        # Return all ElastiCache clusters? (if ElastiCache is enabled)
+        if config.has_option('ec2', 'all_elasticache_clusters') and self.elasticache_enabled:
+            self.all_elasticache_clusters = config.getboolean('ec2', 'all_elasticache_clusters')
+        else:
+            self.all_elasticache_clusters = False
+
+        # Return all ElastiCache nodes? (if ElastiCache is enabled)
+        if config.has_option('ec2', 'all_elasticache_nodes') and self.elasticache_enabled:
+            self.all_elasticache_nodes = config.getboolean('ec2', 'all_elasticache_nodes')
+        else:
+            self.all_elasticache_nodes = False
+
+        # boto configuration profile (prefer CLI argument)
+        self.boto_profile = self.args.boto_profile
+        if config.has_option('ec2', 'boto_profile') and not self.boto_profile:
+            self.boto_profile = config.get('ec2', 'boto_profile')
+
         # Cache related
         cache_dir = os.path.expanduser(config.get('ec2', 'cache_path'))
+        if self.boto_profile:
+            cache_dir = os.path.join(cache_dir, 'profile_' + self.boto_profile)
         if not os.path.exists(cache_dir):
             os.makedirs(cache_dir)
 
@@ -261,6 +335,12 @@ class Ec2Inventory(object):
         else:
             self.nested_groups = False
 
+        # Replace dash or not in group names
+        if config.has_option('ec2', 'replace_dash_in_groups'):
+            self.replace_dash_in_groups = config.getboolean('ec2', 'replace_dash_in_groups')
+        else:
+            self.replace_dash_in_groups = True
+
         # Configure which groups should be created.
         group_by_options = [
             'group_by_instance_id',
@@ -276,6 +356,10 @@ class Ec2Inventory(object):
             'group_by_route53_names',
             'group_by_rds_engine',
             'group_by_rds_parameter_group',
+            'group_by_elasticache_engine',
+            'group_by_elasticache_cluster',
+            'group_by_elasticache_parameter_group',
+            'group_by_elasticache_replication_group',
         ]
         for option in group_by_options:
             if config.has_option('ec2', option):
@@ -290,7 +374,7 @@ class Ec2Inventory(object):
                 self.pattern_include = re.compile(pattern_include)
             else:
                 self.pattern_include = None
-        except ConfigParser.NoOptionError, e:
+        except configparser.NoOptionError:
             self.pattern_include = None
 
         # Do we need to exclude hosts that match a pattern?
@@ -300,7 +384,7 @@ class Ec2Inventory(object):
                 self.pattern_exclude = re.compile(pattern_exclude)
             else:
                 self.pattern_exclude = None
-        except ConfigParser.NoOptionError, e:
+        except configparser.NoOptionError:
             self.pattern_exclude = None
 
         # Instance filters (see boto and EC2 API docs). Ignore invalid filters.
@@ -325,6 +409,8 @@ class Ec2Inventory(object):
                            help='Get all the variables about a specific instance')
         parser.add_argument('--refresh-cache', action='store_true', default=False,
                            help='Force refresh of cache by making API requests to EC2 (default: False - use cache files)')
+        parser.add_argument('--boto-profile', action='store',
+                           help='Use boto profile for connections to EC2')
         self.args = parser.parse_args()
 
 
@@ -338,30 +424,52 @@ class Ec2Inventory(object):
             self.get_instances_by_region(region)
             if self.rds_enabled:
                 self.get_rds_instances_by_region(region)
+            if self.elasticache_enabled:
+                self.get_elasticache_clusters_by_region(region)
+                self.get_elasticache_replication_groups_by_region(region)
 
         self.write_to_cache(self.inventory, self.cache_path_cache)
         self.write_to_cache(self.index, self.cache_path_index)
 
+    def connect(self, region):
+        ''' create connection to api server'''
+        if self.eucalyptus:
+            conn = boto.connect_euca(host=self.eucalyptus_host)
+            conn.APIVersion = '2010-08-31'
+        else:
+            conn = self.connect_to_aws(ec2, region)
+        return conn
+
+    def boto_fix_security_token_in_profile(self, connect_args):
+        ''' monkey patch for boto issue boto/boto#2100 '''
+        profile = 'profile ' + self.boto_profile
+        if boto.config.has_option(profile, 'aws_security_token'):
+            connect_args['security_token'] = boto.config.get(profile, 'aws_security_token')
+        return connect_args
+
+    def connect_to_aws(self, module, region):
+        connect_args = {}
+
+        # only pass the profile name if it's set (as it is not supported by older boto versions)
+        if self.boto_profile:
+            connect_args['profile_name'] = self.boto_profile
+            self.boto_fix_security_token_in_profile(connect_args)
+
+        conn = module.connect_to_region(region, **connect_args)
+        # connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
+        if conn is None:
+            self.fail_with_error("region name: %s likely not supported, or AWS is down.  connection to region failed." % region)
+        return conn
 
     def get_instances_by_region(self, region):
         ''' Makes an AWS EC2 API call to the list of instances in a particular
         region '''
 
         try:
-            if self.eucalyptus:
-                conn = boto.connect_euca(host=self.eucalyptus_host)
-                conn.APIVersion = '2010-08-31'
-            else:
-                conn = ec2.connect_to_region(region)
-
-            # connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
-            if conn is None:
-                print("region name: %s likely not supported, or AWS is down.  connection to region failed." % region)
-                sys.exit(1)
-
+            conn = self.connect(region)
             reservations = []
             if self.ec2_instance_filters:
-                for filter_key, filter_values in self.ec2_instance_filters.iteritems():
+                for filter_key, filter_values in self.ec2_instance_filters.items():
                     reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
             else:
                 reservations = conn.get_all_instances()
@@ -370,40 +478,130 @@ class Ec2Inventory(object):
                 for instance in reservation.instances:
                     self.add_instance(instance, region)
 
-        except boto.exception.BotoServerError, e:
-            if  not self.eucalyptus:
-                print "Looks like AWS is down again:"
-            print e
-            sys.exit(1)
+        except boto.exception.BotoServerError as e:
+            if e.error_code == 'AuthFailure':
+                error = self.get_auth_error_message()
+            else:
+                backend = 'Eucalyptus' if self.eucalyptus else 'AWS' 
+                error = "Error connecting to %s backend.\n%s" % (backend, e.message)
+            self.fail_with_error(error, 'getting EC2 instances')
 
     def get_rds_instances_by_region(self, region):
         ''' Makes an AWS API call to the list of RDS instances in a particular
         region '''
 
         try:
-            conn = rds.connect_to_region(region)
+            conn = self.connect_to_aws(rds, region)
             if conn:
                 instances = conn.get_all_dbinstances()
                 for instance in instances:
                     self.add_rds_instance(instance, region)
-        except boto.exception.BotoServerError, e:
+        except boto.exception.BotoServerError as e:
+            error = e.reason
+
+            if e.error_code == 'AuthFailure':
+                error = self.get_auth_error_message()
             if not e.reason == "Forbidden":
-                print "Looks like AWS RDS is down: "
-                print e
-                sys.exit(1)
+                error = "Looks like AWS RDS is down:\n%s" % e.message
+            self.fail_with_error(error, 'getting RDS instances')
 
-    def get_instance(self, region, instance_id):
-        ''' Gets details about a specific instance '''
-        if self.eucalyptus:
-            conn = boto.connect_euca(self.eucalyptus_host)
-            conn.APIVersion = '2010-08-31'
+    def get_elasticache_clusters_by_region(self, region):
+        ''' Makes an AWS API call to the list of ElastiCache clusters (with
+        nodes' info) in a particular region.'''
+
+        # The ElastiCache boto module doesn't provide a get_all_instances
+        # method, so we call describe directly (the shorthand method would
+        # only call it for us anyway...)
+        try:
+            conn = elasticache.connect_to_region(region)
+            # connect_to_region returns None for a bad or unsupported region,
+            # which would otherwise leave 'response' undefined below
+            if conn is None:
+                self.fail_with_error("region name: %s likely not supported, or AWS is down.  connection to region failed." % region)
+            # show_cache_node_info=True because we also want the nodes' information
+            response = conn.describe_cache_clusters(None, None, None, show_cache_node_info=True)
+
+        except boto.exception.BotoServerError as e:
+            error = e.reason
+
+            if e.error_code == 'AuthFailure':
+                error = self.get_auth_error_message()
+            if not e.reason == "Forbidden":
+                error = "Looks like AWS ElastiCache is down:\n%s" % e.message
+            self.fail_with_error(error, 'getting ElastiCache clusters')
+
+        try:
+            # Boto also doesn't provide wrapper classes to CacheClusters or
+            # CacheNodes. Because of that we can't make use of the get_list
+            # method in the AWSQueryConnection. Let's do the work manually.
+            clusters = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters']
+
+        except KeyError as e:
+            error = "ElastiCache query to AWS failed (unexpected format)."
+            self.fail_with_error(error, 'getting ElastiCache clusters')
+
+        for cluster in clusters:
+            self.add_elasticache_cluster(cluster, region)
+
+    def get_elasticache_replication_groups_by_region(self, region):
+        ''' Makes an AWS API call to the list of ElastiCache replication groups
+        in a particular region.'''
+
+        # The ElastiCache boto module doesn't provide a get_all_instances
+        # method, so we call describe directly (the shorthand method would
+        # only call it for us anyway...)
+        try:
+            conn = elasticache.connect_to_region(region)
+            # connect_to_region returns None for a bad or unsupported region,
+            # which would otherwise leave 'response' undefined below
+            if conn is None:
+                self.fail_with_error("region name: %s likely not supported, or AWS is down.  connection to region failed." % region)
+            response = conn.describe_replication_groups()
+
+        except boto.exception.BotoServerError as e:
+            error = e.reason
+
+            if e.error_code == 'AuthFailure':
+                error = self.get_auth_error_message()
+            if not e.reason == "Forbidden":
+                error = "Looks like AWS ElastiCache [Replication Groups] is down:\n%s" % e.message
+            self.fail_with_error(error, 'getting ElastiCache replication groups')
+
+        try:
+            # Boto also doesn't provide wrapper classes to ReplicationGroups
+            # Because of that we can't make use of the get_list method in the
+            # AWSQueryConnection. Let's do the work manually.
+            replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups']
+
+        except KeyError as e:
+            error = "ElastiCache [Replication Groups] query to AWS failed (unexpected format)."
+            self.fail_with_error(error, 'getting ElastiCache replication groups')
+
+        for replication_group in replication_groups:
+            self.add_elasticache_replication_group(replication_group, region)
+
+    def get_auth_error_message(self):
+        ''' create an informative error message if there is an issue authenticating'''
+        errors = ["Authentication error retrieving ec2 inventory."]
+        if None in [os.environ.get('AWS_ACCESS_KEY_ID'), os.environ.get('AWS_SECRET_ACCESS_KEY')]:
+            errors.append(' - No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment vars found')
         else:
-            conn = ec2.connect_to_region(region)
+            errors.append(' - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct')
 
-        # connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
-        if conn is None:
-            print("region name: %s likely not supported, or AWS is down.  connection to region failed." % region)
-            sys.exit(1)
+        boto_paths = ['/etc/boto.cfg', '~/.boto', '~/.aws/credentials']
+        boto_config_found = list(p for p in boto_paths if os.path.isfile(os.path.expanduser(p)))
+        if len(boto_config_found) > 0:
+            errors.append(" - Boto configs found at '%s', but the credentials contained may not be correct" % ', '.join(boto_config_found))
+        else:
+            errors.append(" - No Boto config found at any expected location '%s'" % ', '.join(boto_paths))
+
+        return '\n'.join(errors)
+
+    def fail_with_error(self, err_msg, err_operation=None):
+        '''log an error to std err for ansible-playbook to consume and exit'''
+        if err_operation:
+            err_msg = 'ERROR: "{err_msg}", while: {err_operation}'.format(
+                err_msg=err_msg, err_operation=err_operation)
+        sys.stderr.write(err_msg)
+        sys.exit(1)
+
+    def get_instance(self, region, instance_id):
+        conn = self.connect(region)
 
         reservations = conn.get_all_instances([instance_id])
         for reservation in reservations:
@@ -414,8 +612,8 @@ class Ec2Inventory(object):
         ''' Adds an instance to the inventory and index, as long as it is
         addressable '''
 
-        # Only want running instances unless all_instances is True
-        if not self.all_instances and instance.state != 'running':
+        # Only return instances with desired instance states
+        if instance.state not in self.ec2_instance_states:
             return
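
For context, a sketch of what the membership test changes relative to the old all_instances switch; ec2_instance_states is assumed to be parsed from the accompanying configuration, and the values are illustrative:

    # old: keep only instances with state == 'running', unless all_instances
    # new: keep any instance whose state is listed, e.g.
    #     self.ec2_instance_states = ['running', 'stopped']
    # so 'terminated' instances are skipped by the membership test alone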
 
         # Select the best destination address
@@ -502,18 +700,21 @@ class Ec2Inventory(object):
                     if self.nested_groups:
                         self.push_group(self.inventory, 'security_groups', key)
             except AttributeError:
-                print 'Package boto seems a bit older.'
-                print 'Please upgrade boto >= 2.3.0.'
-                sys.exit(1)
+                self.fail_with_error('\n'.join(['Package boto seems a bit older.',
+                                                'Please upgrade boto >= 2.3.0.']))
 
         # Inventory: Group by tag keys
         if self.group_by_tag_keys:
-            for k, v in instance.tags.iteritems():
-                key = self.to_safe("tag_" + k + "=" + v)
+            for k, v in instance.tags.items():
+                if v:
+                    key = self.to_safe("tag_" + k + "=" + v)
+                else:
+                    key = self.to_safe("tag_" + k)
                 self.push(self.inventory, key, dest)
                 if self.nested_groups:
                     self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k))
-                    self.push_group(self.inventory, self.to_safe("tag_" + k), key)
+                    if v:
+                        self.push_group(self.inventory, self.to_safe("tag_" + k), key)
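
The empty-value branch above keeps value-less tags from producing group names with a dangling separator. A quick sketch of the resulting names, using a simplified stand-in for to_safe and made-up tags:

    import re

    def to_safe(word):
        # simplified Ec2Inventory.to_safe: non-alphanumerics become underscores
        return re.sub(r"[^A-Za-z0-9\-]", "_", word)

    for k, v in {'Name': 'web1', 'billing': ''}.items():
        key = to_safe("tag_" + k + "=" + v) if v else to_safe("tag_" + k)
        print(key)  # -> tag_Name_web1, tag_billing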
 
         # Inventory: Group by Route53 domain names if enabled
         if self.route53_enabled and self.group_by_route53_names:
@@ -597,9 +798,9 @@ class Ec2Inventory(object):
                         self.push_group(self.inventory, 'security_groups', key)
 
             except AttributeError:
-                print 'Package boto seems a bit older.'
-                print 'Please upgrade boto >= 2.3.0.'
-                sys.exit(1)
+                self.fail_with_error('\n'.join(['Package boto seems a bit older.',
+                                                'Please upgrade boto >= 2.3.0.']))
+
 
         # Inventory: Group by engine
         if self.group_by_rds_engine:
@@ -618,6 +819,243 @@ class Ec2Inventory(object):
 
         self.inventory["_meta"]["hostvars"][dest] = self.get_host_info_dict_from_instance(instance)
 
+    def add_elasticache_cluster(self, cluster, region):
+        ''' Adds an ElastiCache cluster to the inventory and index, as long as
+        its nodes are addressable '''
+
+        # Only want available clusters unless all_elasticache_clusters is True
+        if not self.all_elasticache_clusters and cluster['CacheClusterStatus'] != 'available':
+            return
+
+        # Select the best destination address
+        if 'ConfigurationEndpoint' in cluster and cluster['ConfigurationEndpoint']:
+            # Memcached cluster
+            dest = cluster['ConfigurationEndpoint']['Address']
+            is_redis = False
+        else:
+            # Redis single-node cluster
+            # Because all Redis clusters are single nodes, we'll merge the
+            # info from the cluster with info about the node
+            dest = cluster['CacheNodes'][0]['Endpoint']['Address']
+            is_redis = True
+
+        if not dest:
+            # Skip clusters we cannot address (e.g. private VPC subnet)
+            return
+
+        # Add to index
+        self.index[dest] = [region, cluster['CacheClusterId']]
+
+        # Inventory: Group by instance ID (always a group of 1)
+        if self.group_by_instance_id:
+            self.inventory[cluster['CacheClusterId']] = [dest]
+            if self.nested_groups:
+                self.push_group(self.inventory, 'instances', cluster['CacheClusterId'])
+
+        # Inventory: Group by region
+        if self.group_by_region and not is_redis:
+            self.push(self.inventory, region, dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'regions', region)
+
+        # Inventory: Group by availability zone
+        if self.group_by_availability_zone and not is_redis:
+            self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest)
+            if self.nested_groups:
+                if self.group_by_region:
+                    self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone'])
+                self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone'])
+
+        # Inventory: Group by node type
+        if self.group_by_instance_type and not is_redis:
+            type_name = self.to_safe('type_' + cluster['CacheNodeType'])
+            self.push(self.inventory, type_name, dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'types', type_name)
+
+        # Inventory: Group by VPC (information not available in the current
+        # AWS API version for ElastiCache)
+
+        # Inventory: Group by security group
+        if self.group_by_security_group and not is_redis:
+
+            # Check for the existence of the 'SecurityGroups' key and also if
+            # this key has some value. When the cluster is not placed in a SG
+            # the query can return None here and cause an error.
+            if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None:
+                for security_group in cluster['SecurityGroups']:
+                    key = self.to_safe("security_group_" + security_group['SecurityGroupId'])
+                    self.push(self.inventory, key, dest)
+                    if self.nested_groups:
+                        self.push_group(self.inventory, 'security_groups', key)
+
+        # Inventory: Group by engine
+        if self.group_by_elasticache_engine and not is_redis:
+            self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'elasticache_engines', self.to_safe(cluster['Engine']))
+
+        # Inventory: Group by parameter group
+        if self.group_by_elasticache_parameter_group:
+            self.push(self.inventory, self.to_safe("elasticache_parameter_group_" + cluster['CacheParameterGroup']['CacheParameterGroupName']), dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'elasticache_parameter_groups', self.to_safe(cluster['CacheParameterGroup']['CacheParameterGroupName']))
+
+        # Inventory: Group by replication group
+        if self.group_by_elasticache_replication_group and 'ReplicationGroupId' in cluster and cluster['ReplicationGroupId']:
+            self.push(self.inventory, self.to_safe("elasticache_replication_group_" + cluster['ReplicationGroupId']), dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'elasticache_replication_groups', self.to_safe(cluster['ReplicationGroupId']))
+
+        # Global Tag: all ElastiCache clusters
+        self.push(self.inventory, 'elasticache_clusters', cluster['CacheClusterId'])
+
+        host_info = self.get_host_info_dict_from_describe_dict(cluster)
+
+        self.inventory["_meta"]["hostvars"][dest] = host_info
+
+        # Add the nodes
+        for node in cluster['CacheNodes']:
+            self.add_elasticache_node(node, cluster, region)
+
+    def add_elasticache_node(self, node, cluster, region):
+        ''' Adds an ElastiCache node to the inventory and index, as long as
+        it is addressable '''
+
+        # Only want available nodes unless all_elasticache_nodes is True
+        if not self.all_elasticache_nodes and node['CacheNodeStatus'] != 'available':
+            return
+
+        # Select the best destination address
+        dest = node['Endpoint']['Address']
+
+        if not dest:
+            # Skip nodes we cannot address (e.g. private VPC subnet)
+            return
+
+        node_id = self.to_safe(cluster['CacheClusterId'] + '_' + node['CacheNodeId'])
+
+        # Add to index
+        self.index[dest] = [region, node_id]
+
+        # Inventory: Group by node ID (always a group of 1)
+        if self.group_by_instance_id:
+            self.inventory[node_id] = [dest]
+            if self.nested_groups:
+                self.push_group(self.inventory, 'instances', node_id)
+
+        # Inventory: Group by region
+        if self.group_by_region:
+            self.push(self.inventory, region, dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'regions', region)
+
+        # Inventory: Group by availability zone
+        if self.group_by_availability_zone:
+            self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest)
+            if self.nested_groups:
+                if self.group_by_region:
+                    self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone'])
+                self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone'])
+
+        # Inventory: Group by node type
+        if self.group_by_instance_type:
+            type_name = self.to_safe('type_' + cluster['CacheNodeType'])
+            self.push(self.inventory, type_name, dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'types', type_name)
+
+        # Inventory: Group by VPC (information not available in the current
+        # AWS API version for ElastiCache)
+
+        # Inventory: Group by security group
+        if self.group_by_security_group:
+
+            # Check for the existence of the 'SecurityGroups' key and also if
+            # this key has some value. When the cluster is not placed in a SG
+            # the query can return None here and cause an error.
+            if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None:
+                for security_group in cluster['SecurityGroups']:
+                    key = self.to_safe("security_group_" + security_group['SecurityGroupId'])
+                    self.push(self.inventory, key, dest)
+                    if self.nested_groups:
+                        self.push_group(self.inventory, 'security_groups', key)
+
+        # Inventory: Group by engine
+        if self.group_by_elasticache_engine:
+            self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'elasticache_engines', self.to_safe("elasticache_" + cluster['Engine']))
+
+        # Inventory: Group by parameter group (done at cluster level)
+
+        # Inventory: Group by replication group (done at cluster level)
+
+        # Inventory: Group by ElastiCache Cluster
+        if self.group_by_elasticache_cluster:
+            self.push(self.inventory, self.to_safe("elasticache_cluster_" + cluster['CacheClusterId']), dest)
+
+        # Global Tag: all ElastiCache nodes
+        self.push(self.inventory, 'elasticache_nodes', dest)
+
+        host_info = self.get_host_info_dict_from_describe_dict(node)
+
+        if dest in self.inventory["_meta"]["hostvars"]:
+            self.inventory["_meta"]["hostvars"][dest].update(host_info)
+        else:
+            self.inventory["_meta"]["hostvars"][dest] = host_info
+
+    def add_elasticache_replication_group(self, replication_group, region):
+        ''' Adds an ElastiCache replication group to the inventory and index '''
+
+        # Only want available clusters unless all_elasticache_replication_groups is True
+        if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available':
+            return
+
+        # Select the best destination address (PrimaryEndpoint)
+        dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address']
+
+        if not dest:
+            # Skip clusters we cannot address (e.g. private VPC subnet)
+            return
+
+        # Add to index
+        self.index[dest] = [region, replication_group['ReplicationGroupId']]
+
+        # Inventory: Group by ID (always a group of 1)
+        if self.group_by_instance_id:
+            self.inventory[replication_group['ReplicationGroupId']] = [dest]
+            if self.nested_groups:
+                self.push_group(self.inventory, 'instances', replication_group['ReplicationGroupId'])
+
+        # Inventory: Group by region
+        if self.group_by_region:
+            self.push(self.inventory, region, dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'regions', region)
+
+        # Inventory: Group by availability zone (doesn't apply to replication groups)
+
+        # Inventory: Group by node type (doesn't apply to replication groups)
+
+        # Inventory: Group by VPC (information not available in the current
+        # AWS API version for replication groups)
+
+        # Inventory: Group by security group (doesn't apply to replication groups)
+        # Check this value in cluster level
+
+        # Inventory: Group by engine (replication groups are always Redis)
+        if self.group_by_elasticache_engine:
+            self.push(self.inventory, 'elasticache_redis', dest)
+            if self.nested_groups:
+                self.push_group(self.inventory, 'elasticache_engines', 'redis')
+
+        # Global Tag: all ElastiCache replication groups
+        self.push(self.inventory, 'elasticache_replication_groups', replication_group['ReplicationGroupId'])
+
+        host_info = self.get_host_info_dict_from_describe_dict(replication_group)
+
+        self.inventory["_meta"]["hostvars"][dest] = host_info
 
     def get_route53_records(self):
         ''' Get and store the map of resource records to domain names that
@@ -666,7 +1104,6 @@ class Ec2Inventory(object):
 
         return list(name_list)
 
-
     def get_host_info_dict_from_instance(self, instance):
         instance_vars = {}
         for key in vars(instance):
@@ -683,7 +1120,7 @@ class Ec2Inventory(object):
                 instance_vars['ec2_previous_state_code'] = instance.previous_state_code
             elif type(value) in [int, bool]:
                 instance_vars[key] = value
-            elif type(value) in [str, unicode]:
+            elif isinstance(value, six.string_types):
                 instance_vars[key] = value.strip()
             elif type(value) == type(None):
                 instance_vars[key] = ''
@@ -692,7 +1129,7 @@ class Ec2Inventory(object):
             elif key == 'ec2__placement':
                 instance_vars['ec2_placement'] = value.zone
             elif key == 'ec2_tags':
-                for k, v in value.iteritems():
+                for k, v in value.items():
                     key = self.to_safe('ec2_tag_' + k)
                     instance_vars[key] = v
             elif key == 'ec2_groups':
@@ -712,6 +1149,91 @@ class Ec2Inventory(object):
 
         return instance_vars
 
+    def get_host_info_dict_from_describe_dict(self, describe_dict):
+        ''' Parses the dictionary returned by the API call into a flat list
+            of parameters. This method should be used only when 'describe' is
+            used directly because Boto doesn't provide specific classes. '''
+
+        # I really don't agree with prefixing everything with 'ec2'
+        # because EC2, RDS and ElastiCache are different services.
+        # I'm just following the pattern used until now to not break any
+        # compatibility.
+
+        host_info = {}
+        for key in describe_dict:
+            value = describe_dict[key]
+            key = self.to_safe('ec2_' + self.uncammelize(key))
+
+            # Handle complex types
+
+            # Target: Memcached Cache Clusters
+            if key == 'ec2_configuration_endpoint' and value:
+                host_info['ec2_configuration_endpoint_address'] = value['Address']
+                host_info['ec2_configuration_endpoint_port'] = value['Port']
+
+            # Target: Cache Nodes and Redis Cache Clusters (single node)
+            if key == 'ec2_endpoint' and value:
+                host_info['ec2_endpoint_address'] = value['Address']
+                host_info['ec2_endpoint_port'] = value['Port']
+
+            # Target: Redis Replication Groups
+            if key == 'ec2_node_groups' and value:
+                host_info['ec2_endpoint_address'] = value[0]['PrimaryEndpoint']['Address']
+                host_info['ec2_endpoint_port'] = value[0]['PrimaryEndpoint']['Port']
+                replica_count = 0
+                for node in value[0]['NodeGroupMembers']:
+                    if node['CurrentRole'] == 'primary':
+                        host_info['ec2_primary_cluster_address'] = node['ReadEndpoint']['Address']
+                        host_info['ec2_primary_cluster_port'] = node['ReadEndpoint']['Port']
+                        host_info['ec2_primary_cluster_id'] = node['CacheClusterId']
+                    elif node['CurrentRole'] == 'replica':
+                        host_info['ec2_replica_cluster_address_'+ str(replica_count)] = node['ReadEndpoint']['Address']
+                        host_info['ec2_replica_cluster_port_'+ str(replica_count)] = node['ReadEndpoint']['Port']
+                        host_info['ec2_replica_cluster_id_'+ str(replica_count)] = node['CacheClusterId']
+                        replica_count += 1
+
+            # Target: Redis Replication Groups
+            if key == 'ec2_member_clusters' and value:
+                host_info['ec2_member_clusters'] = ','.join([str(i) for i in value])
+
+            # Target: All Cache Clusters
+            elif key == 'ec2_cache_parameter_group':
+                host_info["ec2_cache_node_ids_to_reboot"] = ','.join([str(i) for i in value['CacheNodeIdsToReboot']])
+                host_info['ec2_cache_parameter_group_name'] = value['CacheParameterGroupName']
+                host_info['ec2_cache_parameter_apply_status'] = value['ParameterApplyStatus']
+
+            # Target: Almost everything
+            elif key == 'ec2_security_groups':
+
+                # Skip if SecurityGroups is None
+                # (it is possible to have the key defined but no value in it).
+                if value is not None:
+                    sg_ids = []
+                    for sg in value:
+                        sg_ids.append(sg['SecurityGroupId'])
+                    host_info["ec2_security_group_ids"] = ','.join([str(i) for i in sg_ids])
+
+            # Target: Everything
+            # Preserve booleans and integers
+            elif type(value) in [int, bool]:
+                host_info[key] = value
+
+            # Target: Everything
+            # Sanitize string values
+            elif isinstance(value, six.string_types):
+                host_info[key] = value.strip()
+
+            # Target: Everything
+            # Replace None by an empty string
+            elif type(value) == type(None):
+                host_info[key] = ''
+
+            else:
+                # Remove non-processed complex types
+                pass
+
+        return host_info
+
     def get_host_info(self):
         ''' Get variables about a specific host '''
 
@@ -775,13 +1297,16 @@ class Ec2Inventory(object):
         cache.write(json_data)
         cache.close()
 
+    def uncammelize(self, key):
+        temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key)
+        return re.sub('([a-z0-9])([A-Z])', r'\1_\2', temp).lower()
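
Assuming the two substitutions above, uncammelize snake_cases the CamelCase keys returned by the describe calls before get_host_info_dict_from_describe_dict prefixes them with 'ec2_':

    >>> uncammelize('CacheClusterId')
    'cache_cluster_id'
    >>> uncammelize('CacheNodeIdsToReboot')
    'cache_node_ids_to_reboot'

So a raw CacheClusterId field surfaces to Ansible as the hostvar ec2_cache_cluster_id.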
 
     def to_safe(self, word):
-        ''' Converts 'bad' characters in a string to underscores so they can be
-        used as Ansible groups '''
-
-        return re.sub("[^A-Za-z0-9\-]", "_", word)
-
+        ''' Converts 'bad' characters in a string to underscores so they can be used as Ansible groups '''
+        regex = "[^A-Za-z0-9\_"
+        if not self.replace_dash_in_groups:
+            regex += "\-"
+        return re.sub(regex + "]", "_", word)
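
The rebuilt character class makes dash handling configurable: when replace_dash_in_groups (populated elsewhere in the config handling) is unset, dashes still survive in group names; when set, they are rewritten like any other special character. Illustratively:

    # replace_dash_in_groups = False: to_safe('us-east-1') -> 'us-east-1'
    # replace_dash_in_groups = True:  to_safe('us-east-1') -> 'us_east_1'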
 
     def json_format_dict(self, data, pretty=False):
         ''' Converts a dict to a JSON object and dumps it as a formatted

+ 178 - 0
inventory/byo/hosts.aep.example

@@ -0,0 +1,178 @@
+# This is an example of a bring your own (byo) host inventory
+
+# Create an OSEv3 group that contains the masters and nodes groups
+[OSEv3:children]
+masters
+nodes
+etcd
+lb
+
+# Set variables common for all OSEv3 hosts
+[OSEv3:vars]
+# SSH user. This user should allow ssh-based auth without requiring a
+# password. If using ssh-key-based auth, the key should be managed by an
+# ssh agent.
+ansible_ssh_user=root
+
+# If ansible_ssh_user is not root, ansible_sudo must be set to true and the
+# user must be configured for passwordless sudo
+#ansible_sudo=true
+
+# deployment type: valid values are origin, online, atomic-enterprise, and openshift-enterprise
+deployment_type=atomic-enterprise
+
+# Enable cluster metrics
+#use_cluster_metrics=true
+
+# Add additional, insecure, and blocked registries to the global docker configuration.
+# For enterprise deployment types, registry.access.redhat.com is included
+# automatically if you do not include it.
+#cli_docker_additional_registries=registry.example.com
+#cli_docker_insecure_registries=registry.example.com
+#cli_docker_blocked_registries=registry.hacker.com
+
+# Alternate image format string. If you're not modifying the format string and
+# only need to inject your own registry you may want to consider
+# cli_docker_additional_registries instead
+#oreg_url=example.com/aep3/aep-${component}:${version}
+
+# Additional yum repos to install
+#openshift_additional_repos=[{'id': 'aep-devel', 'name': 'aep-devel', 'baseurl': 'http://example.com/puddle/build/AtomicOpenShift/3.1/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+
+# htpasswd auth
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
+
+# Allow all auth
+#openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]
+
+# LDAP auth
+#openshift_master_identity_providers=[{'name': 'my_ldap_provider', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': '', 'bindPassword': '', 'ca': '', 'insecure': 'false', 'url': 'ldap://ldap.example.com:389/ou=users,dc=example,dc=com?uid'}]
+
+# Project Configuration
+#osm_project_request_message=''
+#osm_project_request_template=''
+#osm_mcs_allocator_range='s0:/2'
+#osm_mcs_labels_per_project=5
+#osm_uid_allocator_range='1000000000-1999999999/10000'
+
+# Configure Fluentd
+#use_fluentd=true
+
+# Enable cockpit
+#osm_use_cockpit=true
+#
+# Set cockpit plugins
+#osm_cockpit_plugins=['cockpit-kubernetes']
+
+# Native high availability cluster method with optional load balancer.
+# If no lb group is defined, the installer assumes that a load balancer has
+# been preconfigured. For installation, the value of
+# openshift_master_cluster_hostname must resolve to the load balancer
+# or to one or all of the masters defined in the inventory if no load
+# balancer is present.
+#openshift_master_cluster_method=native
+#openshift_master_cluster_hostname=openshift-ansible.test.example.com
+#openshift_master_cluster_public_hostname=openshift-ansible.test.example.com
+
+# Pacemaker high availability cluster method.
+# Pacemaker HA environment must be able to self provision the
+# configured VIP. For installation openshift_master_cluster_hostname
+# must resolve to the configured VIP.
+#openshift_master_cluster_method=pacemaker
+#openshift_master_cluster_password=openshift_cluster
+#openshift_master_cluster_vip=192.168.133.25
+#openshift_master_cluster_public_vip=192.168.133.25
+#openshift_master_cluster_hostname=openshift-ansible.test.example.com
+#openshift_master_cluster_public_hostname=openshift-ansible.test.example.com
+
+# Override the default controller lease ttl
+#osm_controller_lease_ttl=30
+
+# default subdomain to use for exposed routes
+#osm_default_subdomain=apps.test.example.com
+
+# additional cors origins
+#osm_custom_cors_origins=['foo.example.com', 'bar.example.com']
+
+# default project node selector
+#osm_default_node_selector='region=primary'
+
+# Default storage plugin dependencies to install. By default, the ceph and
+# glusterfs plugin dependencies will be installed, if available.
+#osn_storage_plugin_deps=['ceph','glusterfs']
+
+# default selectors for router and registry services
+# openshift_router_selector='region=infra'
+# openshift_registry_selector='region=infra'
+
+# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
+# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
+
+# Disable the OpenShift SDN plugin
+# openshift_use_openshift_sdn=False
+
+# set RPM version for debugging purposes
+#openshift_pkg_version=-3.1.0.0
+
+# Configure custom named certificates
+# NOTE: openshift_master_named_certificates is cached on masters and is an
+# additive fact, meaning that each run with a different set of certificates
+# will add the newly provided certificates to the cached set of certificates.
+# If you would like openshift_master_named_certificates to be overwritten with
+# the provided value, specify openshift_master_overwrite_named_certificates.
+#openshift_master_overwrite_named_certificates=true
+#
+# Provide local certificate paths which will be deployed to masters
+#openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]
+#
+# Detected names may be overridden by specifying the "names" key
+#openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]
+
+# Session options
+#openshift_master_session_name=ssn
+#openshift_master_session_max_seconds=3600
+
+# An authentication and encryption secret will be generated if secrets
+# are not provided. If provided, openshift_master_session_auth_secrets
+# and openshift_master_session_encryption_secrets must be of equal length.
+#
+# Signing secrets, used to authenticate sessions using
+# HMAC. Recommended to use secrets with 32 or 64 bytes.
+#openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
+#
+# Encrypting secrets, used to encrypt sessions. Must be 16, 24, or 32
+# characters long, to select AES-128, AES-192, or AES-256.
+#openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
+
+# configure how often node iptables rules are refreshed
+#openshift_node_iptables_sync_period=5s
+
+# Configure nodeIP in the node config
+# This is needed in cases where node traffic is desired to go over an
+# interface other than the default network interface.
+#openshift_node_set_node_ip=True
+
+# Force setting of system hostname when configuring OpenShift
+# This works around issues related to installations that do not have valid dns
+# entries for the interfaces attached to the host.
+#openshift_set_hostname=True
+
+# Configure dnsIP in the node config
+#openshift_dns_ip=172.30.0.1
+
+# host group for masters
+[masters]
+aep3-master[1:3]-ansible.test.example.com
+
+[etcd]
+aep3-etcd[1:3]-ansible.test.example.com
+
+[lb]
+aep3-lb-ansible.test.example.com
+
+# NOTE: Currently we require that masters be part of the SDN, which requires that they also be nodes.
+# However, in order to ensure that your masters are not burdened with running pods, you should
+# make them unschedulable by adding openshift_schedulable=False to any node that is also a master.
+[nodes]
+aep3-master[1:3]-ansible.test.example.com
+aep3-node[1:2]-ansible.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

+ 182 - 0
inventory/byo/hosts.origin.example

@@ -0,0 +1,182 @@
+# This is an example of a bring your own (byo) host inventory
+
+# Create an OSEv3 group that contains the masters and nodes groups
+[OSEv3:children]
+masters
+nodes
+etcd
+lb
+
+# Set variables common for all OSEv3 hosts
+[OSEv3:vars]
+# SSH user. This user should allow ssh-based auth without requiring a
+# password. If using ssh-key-based auth, the key should be managed by an
+# ssh agent.
+ansible_ssh_user=root
+
+# If ansible_ssh_user is not root, ansible_sudo must be set to true and the
+# user must be configured for passwordless sudo
+#ansible_sudo=true
+
+# deployment type: valid values are origin, online, atomic-enterprise, and openshift-enterprise
+deployment_type=origin
+
+# Enable cluster metrics
+#use_cluster_metrics=true
+
+# Add additional, insecure, and blocked registries to the global docker configuration.
+# For enterprise deployment types, registry.access.redhat.com is included
+# automatically if you do not include it.
+#cli_docker_additional_registries=registry.example.com
+#cli_docker_insecure_registries=registry.example.com
+#cli_docker_blocked_registries=registry.hacker.com
+
+# Alternate image format string. If you're not modifying the format string and
+# only need to inject your own registry you may want to consider
+# cli_docker_additional_registries instead
+#oreg_url=example.com/openshift3/ose-${component}:${version}
+
+# Origin copr repo
+#openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
+
+# Origin Fedora copr repo
+# Use this if you are installing on Fedora
+#openshift_additional_repos=[{'id': 'fedora-openshift-origin-copr', 'name': 'OpenShift Origin COPR for Fedora', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/fedora-openshift/fedora-$releasever-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/fedora-openshift/pubkey.gpg'}]
+
+# htpasswd auth
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
+
+# Allow all auth
+#openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]
+
+# LDAP auth
+#openshift_master_identity_providers=[{'name': 'my_ldap_provider', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': '', 'bindPassword': '', 'ca': '', 'insecure': 'false', 'url': 'ldap://ldap.example.com:389/ou=users,dc=example,dc=com?uid'}]
+
+# Project Configuration
+#osm_project_request_message=''
+#osm_project_request_template=''
+#osm_mcs_allocator_range='s0:/2'
+#osm_mcs_labels_per_project=5
+#osm_uid_allocator_range='1000000000-1999999999/10000'
+
+# Configure Fluentd
+#use_fluentd=true
+
+# Enable cockpit
+#osm_use_cockpit=true
+#
+# Set cockpit plugins
+#osm_cockpit_plugins=['cockpit-kubernetes']
+
+# Native high availability cluster method with optional load balancer.
+# If no lb group is defined, the installer assumes that a load balancer has
+# been preconfigured. For installation, the value of
+# openshift_master_cluster_hostname must resolve to the load balancer
+# or to one or all of the masters defined in the inventory if no load
+# balancer is present.
+#openshift_master_cluster_method=native
+#openshift_master_cluster_hostname=openshift-ansible.test.example.com
+#openshift_master_cluster_public_hostname=openshift-ansible.test.example.com
+
+# Pacemaker high availability cluster method.
+# Pacemaker HA environment must be able to self provision the
+# configured VIP. For installation openshift_master_cluster_hostname
+# must resolve to the configured VIP.
+#openshift_master_cluster_method=pacemaker
+#openshift_master_cluster_password=openshift_cluster
+#openshift_master_cluster_vip=192.168.133.25
+#openshift_master_cluster_public_vip=192.168.133.25
+#openshift_master_cluster_hostname=openshift-ansible.test.example.com
+#openshift_master_cluster_public_hostname=openshift-ansible.test.example.com
+
+# Override the default controller lease ttl
+#osm_controller_lease_ttl=30
+
+# default subdomain to use for exposed routes
+#osm_default_subdomain=apps.test.example.com
+
+# additional cors origins
+#osm_custom_cors_origins=['foo.example.com', 'bar.example.com']
+
+# default project node selector
+#osm_default_node_selector='region=primary'
+
+# Default storage plugin dependencies to install. By default, the ceph and
+# glusterfs plugin dependencies will be installed, if available.
+#osn_storage_plugin_deps=['ceph','glusterfs']
+
+# default selectors for router and registry services
+# openshift_router_selector='region=infra'
+# openshift_registry_selector='region=infra'
+
+# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
+# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
+
+# Disable the OpenShift SDN plugin
+# openshift_use_openshift_sdn=False
+
+# set RPM version for debugging purposes
+#openshift_pkg_version=-1.1
+
+# Configure custom named certificates
+# NOTE: openshift_master_named_certificates is cached on masters and is an
+# additive fact, meaning that each run with a different set of certificates
+# will add the newly provided certificates to the cached set of certificates.
+# If you would like openshift_master_named_certificates to be overwritten with
+# the provided value, specify openshift_master_overwrite_named_certificates.
+#openshift_master_overwrite_named_certificates=true
+#
+# Provide local certificate paths which will be deployed to masters
+#openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]
+#
+# Detected names may be overridden by specifying the "names" key
+#openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]
+
+# Session options
+#openshift_master_session_name=ssn
+#openshift_master_session_max_seconds=3600
+
+# An authentication and encryption secret will be generated if secrets
+# are not provided. If provided, openshift_master_session_auth_secrets
+# and openshift_master_session_encryption_secrets must be of equal length.
+#
+# Signing secrets, used to authenticate sessions using
+# HMAC. Recommended to use secrets with 32 or 64 bytes.
+#openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
+#
+# Encrypting secrets, used to encrypt sessions. Must be 16, 24, or 32
+# characters long, to select AES-128, AES-192, or AES-256.
+#openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
+
+# configure how often node iptables rules are refreshed
+#openshift_node_iptables_sync_period=5s
+
+# Configure nodeIP in the node config
+# This is needed in cases where node traffic is desired to go over an
+# interface other than the default network interface.
+#openshift_node_set_node_ip=True
+
+# Force setting of system hostname when configuring OpenShift
+# This works around issues related to installations that do not have valid dns
+# entries for the interfaces attached to the host.
+#openshift_set_hostname=True
+
+# Configure dnsIP in the node config
+#openshift_dns_ip=172.30.0.1
+
+# host group for masters
+[masters]
+ose3-master[1:3]-ansible.test.example.com
+
+[etcd]
+ose3-etcd[1:3]-ansible.test.example.com
+
+[lb]
+ose3-lb-ansible.test.example.com
+
+# NOTE: Currently we require that masters be part of the SDN, which requires that they also be nodes.
+# However, in order to ensure that your masters are not burdened with running pods, you should
+# make them unschedulable by adding openshift_schedulable=False to any node that is also a master.
+[nodes]
+ose3-master[1:3]-ansible.test.example.com
+ose3-node[1:2]-ansible.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

+ 41 - 16
inventory/byo/hosts.example

@@ -18,26 +18,29 @@ ansible_ssh_user=root
 # user must be configured for passwordless sudo
 #ansible_sudo=true
 
-# deployment type valid values are origin, online and enterprise
-deployment_type=atomic-enterprise
+# deployment type: valid values are origin, online, atomic-enterprise, and openshift-enterprise
+deployment_type=openshift-enterprise
 
 # Enable cluster metrics
 #use_cluster_metrics=true
 
-# Pre-release registry URL
-#oreg_url=example.com/openshift3/ose-${component}:${version}
-
-# Pre-release Dev puddle repo
-#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+# Add additional, insecure, and blocked registries to the global docker configuration.
+# For enterprise deployment types, registry.access.redhat.com is included
+# automatically if you do not include it.
+#cli_docker_additional_registries=registry.example.com
+#cli_docker_insecure_registries=registry.example.com
+#cli_docker_blocked_registries=registry.hacker.com
 
-# Pre-release Errata puddle repo
-#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterpriseErrata/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+# Alternate image format string. If you're not modifying the format string and
+# only need to inject your own registry you may want to consider
+# cli_docker_additional_registries instead
+#oreg_url=example.com/openshift3/ose-${component}:${version}
 
-# Origin copr repo
-#openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
+# Additional yum repos to install
+#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://example.com/puddle/build/AtomicOpenShift/3.1/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
 
 # htpasswd auth
-openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
 
 # Allow all auth
 #openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]
@@ -109,10 +112,19 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',
 # openshift_use_openshift_sdn=False
 
 # set RPM version for debugging purposes
-#openshift_pkg_version=-3.0.0.0
-
-# Configure custom master certificates
+#openshift_pkg_version=-3.1.0.0
+
+# Configure custom named certificates
+# NOTE: openshift_master_named_certificates is cached on masters and is an
+# additive fact, meaning that each run with a different set of certificates
+# will add the newly provided certificates to the cached set of certificates.
+# If you would like openshift_master_named_certificates to be overwritten with
+# the provided value, specify openshift_master_overwrite_named_certificates.
+#openshift_master_overwrite_named_certificates=true
+#
+# Provide local certificate paths which will be deployed to masters
 #openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]
+#
 # Detected names may be overridden by specifying the "names" key
 #openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]
 
@@ -135,6 +147,19 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',
 # configure how often node iptables rules are refreshed
 #openshift_node_iptables_sync_period=5s
 
+# Configure nodeIP in the node config
+# This is needed in cases where node traffic is desired to go over an
+# interface other than the default network interface.
+#openshift_node_set_node_ip=True
+
+# Force setting of system hostname when configuring OpenShift
+# This works around issues related to installations that do not have valid dns
+# entries for the interfaces attached to the host.
+#openshift_set_hostname=True
+
+# Configure dnsIP in the node config
+#openshift_dns_ip=172.30.0.1
+
 # host group for masters
 [masters]
 ose3-master[1:3]-ansible.test.example.com
@@ -147,7 +172,7 @@ ose3-lb-ansible.test.example.com
 
 # NOTE: Currently we require that masters be part of the SDN which requires that they also be nodes
 # However, in order to ensure that your masters are not burdened with running pods you should
-# make them unschedulable by adding openshift_scheduleable=False any node that's also a master.
+# make them unschedulable by adding openshift_schedulable=False to any node that's also a master.
 [nodes]
 ose3-master[1:3]-ansible.test.example.com
 ose3-node[1:2]-ansible.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
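
Following the NOTE above, making the masters unschedulable is just a per-host variable on their [nodes] entries; a sketch reusing the example hosts:

    [nodes]
    ose3-master[1:3]-ansible.test.example.com openshift_schedulable=False
    ose3-node[1:2]-ansible.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"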

+ 19 - 15
inventory/multi_inventory.py

@@ -288,26 +288,30 @@ class MultiInventory(object):
         results = self.all_inventory_results[acc_config['name']]
         results['all_hosts'] = results['_meta']['hostvars'].keys()
 
-        # Update each hostvar with the newly desired key: value from extra_*
-        for _extra in ['extra_vars', 'extra_groups']:
-            for new_var, value in acc_config.get(_extra, {}).items():
-                for data in results['_meta']['hostvars'].values():
-                    self.add_entry(data, new_var, value)
-
-                # Add this group
-                if _extra == 'extra_groups':
-                    results["%s_%s" % (new_var, value)] = copy.copy(results['all_hosts'])
-
-        # Clone groups goes here
-        for to_name, from_name in acc_config.get('clone_groups', {}).items():
-            if results.has_key(from_name):
-                results[to_name] = copy.copy(results[from_name])
+        # Extra vars go here
+        for new_var, value in acc_config.get('extra_vars', {}).items():
+            for data in results['_meta']['hostvars'].values():
+                self.add_entry(data, new_var, value)
 
-        # Clone vars goes here
+        # Clone vars go here
         for to_name, from_name in acc_config.get('clone_vars', {}).items():
             for data in results['_meta']['hostvars'].values():
                 self.add_entry(data, to_name, self.get_entry(data, from_name))
 
+        # Extra groups go here
+        for new_var, value in acc_config.get('extra_groups', {}).items():
+            for data in results['_meta']['hostvars'].values():
+                results["%s_%s" % (new_var, value)] = copy.copy(results['all_hosts'])
+
+        # Clone groups go here
+        # Build a group based on the desired key name
+        for to_name, from_name in acc_config.get('clone_groups', {}).items():
+            for name, data in results['_meta']['hostvars'].items():
+                key = '%s_%s' % (to_name, self.get_entry(data, from_name))
+                if not results.has_key(key):
+                    results[key] = []
+                results[key].append(name)
+
         # store the results back into all_inventory_results
         self.all_inventory_results[acc_config['name']] = results
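
A worked example of the new clone_groups handling, with get_entry simplified to a flat dictionary lookup and all host names and variables made up:

    results = {'_meta': {'hostvars': {
        'host-a': {'ec2_tag_environment': 'prod'},
        'host-b': {'ec2_tag_environment': 'stg'},
    }}}
    # acc_config.get('clone_groups', {}) == {'env': 'ec2_tag_environment'}
    for to_name, from_name in {'env': 'ec2_tag_environment'}.items():
        for name, data in results['_meta']['hostvars'].items():
            key = '%s_%s' % (to_name, data.get(from_name))
            results.setdefault(key, []).append(name)
    # results now also holds {'env_prod': ['host-a'], 'env_stg': ['host-b']}

Unlike the old code, which could only copy an existing group wholesale under a new name, this builds one group per distinct value of the source variable.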
 

+ 216 - 5
openshift-ansible.spec

@@ -5,7 +5,7 @@
 }
 
 Name:           openshift-ansible
-Version:        3.0.12
+Version:        3.0.19
 Release:        1%{?dist}
 Summary:        Openshift and Atomic Enterprise Ansible
 License:        ASL 2.0
@@ -13,7 +13,7 @@ URL:            https://github.com/openshift/openshift-ansible
 Source0:        https://github.com/openshift/openshift-ansible/archive/%{commit}/%{name}-%{version}.tar.gz
 BuildArch:      noarch
 
-Requires:      ansible >= 1.9.3
+Requires:      ansible >= 1.9.4
 Requires:      python2
 
 %description
@@ -192,7 +192,7 @@ BuildArch:     noarch
 # ----------------------------------------------------------------------------------
 %package roles
 Summary:       Openshift and Atomic Enterprise Ansible roles
-Requires:      %{name}
+Requires:      %{name} = %{version}
 Requires:      %{name}-lookup-plugins = %{version}
 Requires:      %{name}-filter-plugins = %{version}
 BuildArch:     noarch
@@ -209,8 +209,9 @@ BuildArch:     noarch
 # ----------------------------------------------------------------------------------
 %package filter-plugins
 Summary:       Openshift and Atomic Enterprise Ansible filter plugins
-Requires:      %{name}
+Requires:      %{name} = %{version}
 BuildArch:     noarch
+Requires:      pyOpenSSL
 
 %description filter-plugins
 %{summary}.
@@ -224,7 +225,7 @@ BuildArch:     noarch
 # ----------------------------------------------------------------------------------
 %package lookup-plugins
 Summary:       Openshift and Atomic Enterprise Ansible lookup plugins
-Requires:      %{name}
+Requires:      %{name} = %{version}
 BuildArch:     noarch
 
 %description lookup-plugins
@@ -258,6 +259,216 @@ Atomic OpenShift Utilities includes
 
 
 %changelog
+* Wed Dec 09 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.19-1
+- Fix version dependent image streams (sdodson@redhat.com)
+- atomic-openshift-installer: Error handling on yaml loading
+  (smunilla@redhat.com)
+- Betterize AWS readme (jtslear@gmail.com)
+
+* Tue Dec 08 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.18-1
+- Pass in and use first_master_ip as dnsIP for pre 3.1 nodes.
+  (abutcher@redhat.com)
+- Fix delete state (jdiaz@redhat.com)
+- Require pyOpenSSL (sdodson@redhat.com)
+- Update sync db-templates, image-streams, and quickstart-templates
+  (sdodson@redhat.com)
+- Clarify the preflight port check output (sdodson@redhat.com)
+- Fix missing dependency version locking (sdodson@redhat.com)
+
+* Tue Dec 08 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.17-1
+- Improving output when gathering facts (bleanhar@redhat.com)
+- Bug 1287977 - Incorrect check output from atomic-openshift-installer when
+  working with preconfigured load balancer (bleanhar@redhat.com)
+- Add unique AEP, OSE, and Origin BYO inventories (sdodson@redhat.com)
+- bring the docker udev workaround into openshift-ansible.git
+  (jdiaz@redhat.com)
+- Zabbix: put in a note about trigger prototype dependency
+  (mwoodson@redhat.com)
+- Zabbix: added dependency for inode disk check (mwoodson@redhat.com)
+- Zabbix: added dependency for disk check (mwoodson@redhat.com)
+- zabbix: removed ethernet graphs (mwoodson@redhat.com)
+- Zabbix: added trigger dependencies to certain master checks
+  (mwoodson@redhat.com)
+- ManageIQ Service Account: added role for ManageIQ service account
+  (efreiber@redhat.com)
+- added the pv zabbix keys (mwoodson@redhat.com)
+- Refactor dns options and facts. (abutcher@redhat.com)
+- Fix openshift_facts playbook for yum/dnf changes (jdetiber@redhat.com)
+- Configured master count should be 1 for pacemaker ha. (abutcher@redhat.com)
+- Fedora changes: (admiller@redhat.com)
+- Centralize etcd/schedulability logic for each host. (dgoodwin@redhat.com)
+- added upgrade playbook for online (sedgar@redhat.com)
+- Improved installation summary. (dgoodwin@redhat.com)
+- Fix kubernetes service ip gathering. (abutcher@redhat.com)
+- added docker registry cluster check (mwoodson@redhat.com)
+- Add warning for HA deployments with < 3 dedicated nodes.
+  (dgoodwin@redhat.com)
+- Cleanup more schedulable typos. (dgoodwin@redhat.com)
+- Fix validation for BasicAuthPasswordIdentityProvider (tschan@puzzle.ch)
+- Fix ec2 instance type lookups (jdetiber@redhat.com)
+- remove debug logging from scc/privileged patch command (jdetiber@redhat.com)
+- Set api version for oc commands (jdetiber@redhat.com)
+- 3.1 upgrade - use --api-version for patch commands (jdetiber@redhat.com)
+- Fix bug when warning on no dedicated nodes. (dgoodwin@redhat.com)
+- Suggest dedicated nodes for an HA deployment. (dgoodwin@redhat.com)
+- Error out if no load balancer specified. (dgoodwin@redhat.com)
+- Adjust requirement for 3 masters for HA deployments. (dgoodwin@redhat.com)
+- Fixing 'unscheduleable' typo (bleanhar@redhat.com)
+- Update IMAGE_PREFIX and IMAGE_VERSION values in hawkular template
+  (nakayamakenjiro@gmail.com)
+- Improved output when re-running after editing config. (dgoodwin@redhat.com)
+- Print a system summary after adding each. (dgoodwin@redhat.com)
+- Text improvements for host specification. (dgoodwin@redhat.com)
+- Assert etcd section written for HA installs. (dgoodwin@redhat.com)
+- Breakout a test fixture to reduce module size. (dgoodwin@redhat.com)
+- Pylint touchups. (dgoodwin@redhat.com)
+- Trim assertions in HA testing. (dgoodwin@redhat.com)
+- Test unattended HA quick install. (dgoodwin@redhat.com)
+- Don't prompt to continue during unattended installs. (dgoodwin@redhat.com)
+- Block re-use of master/node as load balancer in attended install.
+  (dgoodwin@redhat.com)
+- Add -q flag to remove unwanted output (such as mirror and cache information)
+  (urs.breu@ergon.ch)
+- Uninstall: only restart docker on node hosts. (abutcher@redhat.com)
+- Explicitly set schedulable when masters == nodes. (dgoodwin@redhat.com)
+- Use admin.kubeconfig for get svc ip. (abutcher@redhat.com)
+- Point enterprise metrics at registry.access.redhat.com/openshift3/metrics-
+  (sdodson@redhat.com)
+- Make sure that OpenSSL is installed before use (fsimonce@redhat.com)
+- fixes for installer wrapper scaleup (jdetiber@redhat.com)
+- addtl aws fixes (jdetiber@redhat.com)
+- Fix failure when seboolean not present (jdetiber@redhat.com)
+- fix addNodes.yml (jdetiber@redhat.com)
+- more aws support for scaleup (jdetiber@redhat.com)
+- start of aws scaleup (jdetiber@redhat.com)
+- Improve scaleup playbook (jdetiber@redhat.com)
+- Update openshift_repos to refresh package cache on changes
+  (jdetiber@redhat.com)
+- Add etcd nodes management in OpenStack (lhuard@amadeus.com)
+
+* Tue Nov 24 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.16-1
+- Silencing pylint branch errors for now for the atomic-openshift-installer
+  harness (bleanhar@redhat.com)
+- Properly setting schedulability for HA Master scenarios
+  (bleanhar@redhat.com)
+- added graphs (mwoodson@redhat.com)
+- Rework setting of hostname (jdetiber@redhat.com)
+- Fixed a bug in the actions.  It now supports changing opconditions
+  (kwoodson@redhat.com)
+- Conditionally set the nodeIP (jdetiber@redhat.com)
+- Bug 1284991 - "atomic-openshift-installer uninstall" error when configuration
+  file is missing. (bleanhar@redhat.com)
+- Avoid printing the master and node totals in the add-a-node scenario
+  (bleanhar@redhat.com)
+- Fixing tests for quick_ha (bleanhar@redhat.com)
+- Removing a debug line (bleanhar@redhat.com)
+- atomic-openshift-installer: Fix lint issue (smunilla@redhat.com)
+- Handling preconfigured load balancers (bleanhar@redhat.com)
+- atomic-openshift-installer: Rename ha_proxy (smunilla@redhat.com)
+- atomic-openshift-installer: Reverse version and host collection
+  (smunilla@redhat.com)
+- cli_installer_tests: Add test for unattended quick HA (smunilla@redhat.com)
+- Breakup inventory writing (smunilla@redhat.com)
+- Enforce 1 or 3 masters (smunilla@redhat.com)
+- Add interactive test (smunilla@redhat.com)
+- atomic-openshift-installer: HA for quick installer (smunilla@redhat.com)
+- Adding zbx_graph support (kwoodson@redhat.com)
+- Modified step params to be in order when passed as a list
+  (kwoodson@redhat.com)
+- Add serviceAccountConfig.masterCA during 3.1 upgrade (jdetiber@redhat.com)
+- Use the identity_providers from openshift_facts instead of always using the
+  inventory variable (jdetiber@redhat.com)
+- Refactor master identity provider configuration (jdetiber@redhat.com)
+
+* Fri Nov 20 2015 Kenny Woodson <kwoodson@redhat.com> 3.0.15-1
+- Fixing clone group functionality.  Also separating extra_vars from
+  extra_groups (kwoodson@redhat.com)
+- Check the end result on bad config file (smunilla@redhat.com)
+- Add some tests for a bad config (smunilla@redhat.com)
+- atomic-openshift-installer: connect_to error handling (smunilla@redhat.com)
+- atomic-openshift-installer: pylint fixes (smunilla@redhat.com)
+- Replace map with oo_collect to support python-jinja2 <2.7
+  (abutcher@redhat.com)
+- Making the uninstall playbook more flexible (bleanhar@redhat.com)
+- Install version dependent image streams for v1.0 and v1.1
+  (sdodson@redhat.com)
+- Do not update the hostname (jdetiber@redhat.com)
+- Pylint fix for long line in cli docstring. (dgoodwin@redhat.com)
+- Default to installing OSE 3.1 instead of 3.0. (dgoodwin@redhat.com)
+- Fix tests on systems with openshift-ansible rpms installed.
+  (dgoodwin@redhat.com)
+
+* Thu Nov 19 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.14-1
+- added metric items to zabbix for openshift online (mwoodson@redhat.com)
+- Updating usergroups to accept users (kwoodson@redhat.com)
+- Differentiate machine types on GCE (master and nodes)
+  (romain.dossin@amadeus.com)
+- Uninstall - Remove systemd wants file for node (jdetiber@redhat.com)
+- ec2 - force !requiretty for ssh_user (jdetiber@redhat.com)
+- small tweaks for adding docker volume for aws master hosts
+  (jdetiber@redhat.com)
+- Created role to deploy ops host monitoring (jdiaz@redhat.com)
+- Update certificate paths when 'names' key is provided. (abutcher@redhat.com)
+- add a volume on master host, in AWS provisioning (chengcheng.mu@amadeus.com)
+- First attempt at adding web scenarios (kwoodson@redhat.com)
+- Use field numbers for all formats in bin/cluster for python 2.6
+  (abutcher@redhat.com)
+- atomic-openshift-installer: Correct single master case (smunilla@redhat.com)
+- added copr-openshift-ansible releaser, removed old rel-eng stuff.
+  (twiest@redhat.com)
+- changed counter -> count (mwoodson@redhat.com)
+- Updating zbx_item classes to support data types for bool.
+  (kwoodson@redhat.com)
+- Fix ec2 instance type override (jdetiber@redhat.com)
+- updated my check to support the boolean data type (mwoodson@redhat.com)
+- Add additive_facts_to_overwrite instead of overwriting all additive_facts
+  (abutcher@redhat.com)
+- added healthz check and more pod count checks (mwoodson@redhat.com)
+- updating to the latest ec2.py (and re-patching with our changes).
+  (twiest@redhat.com)
+- atomic-openshift-installer: Temporarily restrict to single master
+  (smunilla@redhat.com)
+- openshift-ansible: Correct variable (smunilla@redhat.com)
+- Refactor named certificates. (abutcher@redhat.com)
+- atomic-openshift-utils: Version lock playbooks (smunilla@redhat.com)
+- Add the native ha services and configs to uninstall (jdetiber@redhat.com)
+- Bug 1282336 - Add additional seboolean for gluster (jdetiber@redhat.com)
+- Raise lifetime to 2 weeks for dynamic AWS items (jdiaz@redhat.com)
+- bin/cluster fix python 2.6 issue (jdetiber@redhat.com)
+- cluster list: break host types by subtype (lhuard@amadeus.com)
+- README_AWS: Add needed dependency (c.witt.1900@gmail.com)
+- Fix invalid sudo command test (takayoshi@gmail.com)
+- Docs: Fedora: Add missing dependencies and update to dnf. (public@omeid.me)
+- Gate upgrade steps for 3.0 to 3.1 upgrade (jdetiber@redhat.com)
+- added the tito and copr_cli roles (twiest@redhat.com)
+- pylint openshift_facts (jdetiber@redhat.com)
+- Update etcd default facts setting (jdetiber@redhat.com)
+- Update master facts prior to upgrading in case facts are missing.
+  (abutcher@redhat.com)
+- pre-upgrade-check: differentiates between port and targetPort in output
+  (smilner@redhat.com)
+- Better structure the output of the list playbook (lhuard@amadeus.com)
+- Add the sub-host-type tag to the libvirt VMs (lhuard@amadeus.com)
+- atomic-openshift-installer: Update nopwd sudo test (smunilla@redhat.com)
+- Fix pylint import errors for utils/test/. (dgoodwin@redhat.com)
+- atomic-openshift-installer: Update prompts and help messages
+  (smunilla@redhat.com)
+- Dependencies need to be added when a create occurs on SLA object.
+  (kwoodson@redhat.com)
+- Test additions for cli_installer:get_hosts_to_install_on
+  (bleanhar@redhat.com)
+- adding itservice (kwoodson@redhat.com)
+- remove netaddr dependency (tob@butter.sh)
+- Add pyOpenSSL to dependencies for Fedora. (public@omeid.me)
+- Vagrant RHEL registration cleanup (pep@redhat.com)
+- RH subscription: optional satellite and pkg update (pep@redhat.com)
+
+* Tue Nov 17 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.13-1
+- The aep3 images changed locations. (bleanhar@redhat.com)
+- atomic-openshift-installer: Correct single master case (smunilla@redhat.com)
+- atomic-openshift-installer: Temporarily restrict to single master
+  (smunilla@redhat.com)
+
 * Wed Nov 11 2015 Brenton Leanhardt <bleanhar@redhat.com> 3.0.12-1
 - Sync with the latest image streams (sdodson@redhat.com)
 

+ 5 - 0
playbooks/adhoc/bootstrap-fedora.yml

@@ -0,0 +1,5 @@
+- hosts: OSEv3
+  gather_facts: false
+  tasks:
+    - name: install python and deps for ansible modules
+      raw: dnf install -y python2 python2-dnf libselinux-python libsemanage-python

+ 42 - 6
playbooks/adhoc/uninstall.yml

@@ -48,7 +48,39 @@
         - pcsd
 
     - yum: name={{ item }} state=absent
-      when: not is_atomic | bool
+      when: ansible_pkg_mgr == "yum" and not is_atomic | bool
+      with_items:
+        - atomic-enterprise
+        - atomic-enterprise-master
+        - atomic-enterprise-node
+        - atomic-enterprise-sdn-ovs
+        - atomic-openshift
+        - atomic-openshift-clients
+        - atomic-openshift-master
+        - atomic-openshift-node
+        - atomic-openshift-sdn-ovs
+        - corosync
+        - etcd
+        - openshift
+        - openshift-master
+        - openshift-node
+        - openshift-sdn
+        - openshift-sdn-ovs
+        - openvswitch
+        - origin
+        - origin-clients
+        - origin-master
+        - origin-node
+        - origin-sdn-ovs
+        - pacemaker
+        - pcs
+        - tuned-profiles-atomic-enterprise-node
+        - tuned-profiles-atomic-openshift-node
+        - tuned-profiles-openshift-node
+        - tuned-profiles-origin-node
+
+    - dnf: name={{ item }} state=absent
+      when: ansible_pkg_mgr == "dnf" and not is_atomic | bool
       with_items:
         - atomic-enterprise
         - atomic-enterprise-master
@@ -111,12 +143,12 @@
         - atomic-enterprise
         - origin
 
-    - shell: docker ps -a | grep Exited | grep "{{ item }}" | awk '{print $1}'
+    - shell: docker ps -a | grep Exited | egrep "{{ item }}" | awk '{print $1}'
       changed_when: False
       failed_when: False
       register: exited_containers_to_delete
       with_items:
-        - aep3/aep
+        - aep3.*/aep
         - openshift3/ose
         - openshift/origin
 
@@ -125,13 +157,13 @@
       failed_when: False
       with_items: "{{ exited_containers_to_delete.results }}"
 
-    - shell: docker images | grep {{ item }} | awk '{ print $3 }'
+    - shell: docker images | egrep {{ item }} | awk '{ print $3 }'
       changed_when: False
       failed_when: False
       register: images_to_delete
       with_items:
-        - registry.access.redhat.com/openshift3
-        - registry.access.redhat.com/aep3
+        - registry\.access\..*redhat\.com/openshift3
+        - registry\.access\..*redhat\.com/aep3
         - docker.io/openshift
 
     - shell:  "docker rmi -f {{ item.stdout_lines | join(' ') }}"
@@ -161,6 +193,7 @@
         - /etc/sysconfig/origin-master-api
         - /etc/sysconfig/origin-master-controllers
         - /etc/sysconfig/origin-node
+        - /etc/systemd/system/atomic-openshift-node.service.wants
         - /root/.kube
         - /run/openshift-sdn
         - /usr/share/openshift/examples
@@ -180,5 +213,8 @@
     - name: Reload systemd manager configuration
       command: systemctl daemon-reload
 
+- hosts: nodes
+  sudo: yes
+  tasks:
     - name: restart docker
       service: name=docker state=restarted
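
A note on the grep-to-egrep change above: the new patterns are extended regular expressions, so the escaped dots match literal dots while '.*' absorbs any infix, letting one pattern cover the production registry and similarly named hosts. A minimal Python sketch of the matching (the staged hostname is a hypothetical example, not from the playbook):

    import re

    # Escaped dots match literally; '.*' absorbs any infix.
    pattern = re.compile(r'registry\.access\..*redhat\.com/openshift3')
    assert pattern.search('registry.access.redhat.com/openshift3/ose')
    # hypothetical staged registry hostname, shown only for illustration
    assert pattern.search('registry.access.stage.redhat.com/openshift3/ose')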

+ 39 - 0
playbooks/aws/openshift-cluster/addNodes.yml

@@ -0,0 +1,39 @@
+---
+- name: Launch instance(s)
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  - ["vars.{{ deployment_type }}.{{ cluster_id }}.yml", vars.defaults.yml]
+  vars:
+    oo_extend_env: True
+  tasks:
+  - fail:
+      msg: Deployment type not supported for aws provider yet
+    when: deployment_type == 'enterprise'
+
+  - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
+    vars:
+      type: "compute"
+      count: "{{ num_nodes }}"
+  - include: tasks/launch_instances.yml
+    vars:
+      instances: "{{ node_names }}"
+      cluster: "{{ cluster_id }}"
+      type: "{{ k8s_type }}"
+      g_sub_host_type: "{{ sub_host_type }}"
+
+  - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
+    vars:
+      type: "infra"
+      count: "{{ num_infra }}"
+  - include: tasks/launch_instances.yml
+    vars:
+      instances: "{{ node_names }}"
+      cluster: "{{ cluster_id }}"
+      type: "{{ k8s_type }}"
+      g_sub_host_type: "{{ sub_host_type }}"
+
+- include: scaleup.yml
+- include: list.yml

+ 34 - 0
playbooks/aws/openshift-cluster/scaleup.yml

@@ -0,0 +1,34 @@
+---
+
+- hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - set_fact:
+      g_ssh_user_tmp: "{{ deployment_vars[deployment_type].ssh_user }}"
+      g_sudo_tmp: "{{ deployment_vars[deployment_type].sudo }}"
+  - name: Evaluate oo_hosts_to_update
+    add_host:
+      name: "{{ item }}"
+      groups: oo_hosts_to_update
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: "{{ groups.nodes_to_add }}"
+
+- include: ../../common/openshift-cluster/update_repos_and_packages.yml
+
+- include: ../../common/openshift-cluster/scaleup.yml
+  vars:
+    g_etcd_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-etcd' }}"
+    g_lb_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-lb' }}"
+    g_masters_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-master' }}"
+    g_new_nodes_group: 'nodes_to_add'
+    g_ssh_user: "{{ hostvars.localhost.g_ssh_user_tmp }}"
+    g_sudo: "{{ hostvars.localhost.g_sudo_tmp }}"
+    g_nodeonmaster: true
+    openshift_cluster_id: "{{ cluster_id }}"
+    openshift_debug_level: 2
+    openshift_deployment_type: "{{ deployment_type }}"
+    openshift_hostname: "{{ ec2_private_ip_address }}"
+    openshift_public_hostname: "{{ ec2_ip_address }}"

+ 25 - 10
playbooks/aws/openshift-cluster/tasks/launch_instances.yml

@@ -20,10 +20,6 @@
                    | default(deployment_vars[deployment_type].image, true) }}"
   when: ec2_image is not defined and not ec2_image_name
 - set_fact:
-    ec2_instance_type: "{{ lookup('env', 'ec2_instance_type')
-                    | default(deployment_vars[deployment_type].type, true) }}"
-  when: ec2_instance_type is not defined
-- set_fact:
     ec2_keypair: "{{ lookup('env', 'ec2_keypair')
                     | default(deployment_vars[deployment_type].keypair, true) }}"
   when: ec2_keypair is not defined
@@ -37,25 +33,25 @@
   when: ec2_assign_public_ip is not defined
 
 - set_fact:
-    ec2_instance_type: "{{ ec2_master_instance_type | default(deployment_vars[deployment_type].type, true) }}"
+    ec2_instance_type: "{{ ec2_master_instance_type | default(lookup('env', 'ec2_master_instance_type') | default(lookup('env', 'ec2_instance_type') | default(deployment_vars[deployment_type].type, true), true), true) }}"
     ec2_security_groups: "{{ ec2_master_security_groups
                     | default(deployment_vars[deployment_type].security_groups, true) }}"
   when: host_type == "master" and sub_host_type == "default"
 
 - set_fact:
-    ec2_instance_type: "{{ ec2_etcd_instance_type | default(deployment_vars[deployment_type].type, true) }}"
+    ec2_instance_type: "{{ ec2_etcd_instance_type | default(lookup('env', 'ec2_etcd_instance_type') | default(lookup('env', 'ec2_instance_type') | default(deployment_vars[deployment_type].type, true), true), true) }}"
     ec2_security_groups: "{{ ec2_etcd_security_groups
                     | default(deployment_vars[deployment_type].security_groups, true)}}"
   when: host_type == "etcd" and sub_host_type == "default"
 
 - set_fact:
-    ec2_instance_type: "{{ ec2_infra_instance_type | default(deployment_vars[deployment_type].type, true) }}"
+    ec2_instance_type: "{{ ec2_infra_instance_type | default(lookup('env', 'ec2_infra_instance_type') | default(lookup('env', 'ec2_instance_type') | default(deployment_vars[deployment_type].type, true), true), true) }}"
     ec2_security_groups: "{{ ec2_infra_security_groups
                     | default(deployment_vars[deployment_type].security_groups, true) }}"
   when: host_type == "node" and sub_host_type == "infra"
 
 - set_fact:
-    ec2_instance_type: "{{ ec2_node_instance_type | default(deployment_vars[deployment_type].type, true) }}"
+    ec2_instance_type: "{{ ec2_node_instance_type | default(lookup('env', 'ec2_node_instance_type') | default(lookup('env', 'ec2_instance_type') | default(deployment_vars[deployment_type].type, true), true), true) }}"
     ec2_security_groups: "{{ ec2_node_security_groups
                     | default(deployment_vars[deployment_type].security_groups, true) }}"
   when: host_type == "node" and sub_host_type == "compute"
@@ -81,7 +77,6 @@
 
 - set_fact:
     latest_ami: "{{ ami_result.results | oo_ami_selector(ec2_image_name) }}"
-    user_data: "{{ lookup('template', '../templates/user_data.j2') }}"
     volume_defs:
       etcd:
         root:
@@ -97,6 +92,10 @@
           volume_size: "{{ lookup('env', 'os_master_root_vol_size') | default(25, true) }}"
           device_type: "{{ lookup('env', 'os_master_root_vol_type') | default('gp2', true) }}"
           iops: "{{ lookup('env', 'os_master_root_vol_iops') | default(500, true) }}"
+        docker:
+          volume_size: "{{ lookup('env', 'os_docker_vol_size') | default(10, true) }}"
+          device_type: "{{ lookup('env', 'os_docker_vol_type') | default('gp2', true) }}"
+          iops: "{{ lookup('env', 'os_docker_vol_iops') | default(500, true) }}"
       node:
         root:
           volume_size: "{{ lookup('env', 'os_node_root_vol_size') | default(85, true) }}"
@@ -121,7 +120,7 @@
     count: "{{ instances | length }}"
     vpc_subnet_id: "{{ ec2_vpc_subnet | default(omit, true) }}"
     assign_public_ip: "{{ ec2_assign_public_ip | default(omit, true) }}"
-    user_data: "{{ user_data }}"
+    user_data: "{{ lookup('template', '../templates/user_data.j2') }}"
     wait: yes
     instance_tags:
       created-by: "{{ created_by }}"
@@ -191,6 +190,22 @@
   - instances
   - ec2.instances
 
+- name: Add new instances to nodes_to_add group if needed
+  add_host:
+    hostname: "{{ item.0 }}"
+    ansible_ssh_host: "{{ item.1.dns_name }}"
+    ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+    ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    groups: nodes_to_add
+    ec2_private_ip_address: "{{ item.1.private_ip }}"
+    ec2_ip_address: "{{ item.1.public_ip }}"
+    openshift_node_labels: "{{ node_label }}"
+    logrotate_scripts: "{{ logrotate }}"
+  with_together:
+  - instances
+  - ec2.instances
+  when: oo_extend_env is defined and oo_extend_env | bool
+
 - name: Wait for ssh
   wait_for: "port=22 host={{ item.dns_name }}"
   with_items: ec2.instances
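
The nested default() chains above encode a precedence order: a per-role inventory variable wins over a per-role environment variable, which wins over the generic ec2_instance_type environment variable, which wins over the deployment default. A rough Python sketch of the same resolution (the function and argument names are illustrative, not part of the playbooks):

    import os

    def resolve_instance_type(role, inventory_vars, deployment_default):
        # Mirrors the nested default() chain in the set_fact tasks above.
        candidates = [
            inventory_vars.get('ec2_%s_instance_type' % role),
            os.environ.get('ec2_%s_instance_type' % role),
            os.environ.get('ec2_instance_type'),
        ]
        for value in candidates:
            if value:
                return value
        return deployment_default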

+ 9 - 2
playbooks/aws/openshift-cluster/templates/user_data.j2

@@ -1,5 +1,5 @@
 #cloud-config
-{% if type =='etcd' %}
+{% if type == 'etcd' and 'etcd' in volume_defs[type] %}
 cloud_config_modules:
 - disk_setup
 - mounts
@@ -19,7 +19,7 @@ fs_setup:
   partition: auto
 {% endif %}
 
-{% if type == 'node' %}
+{% if type in ['node', 'master'] and 'docker' in volume_defs[type] %}
 mounts:
 - [ xvdb ]
 - [ ephemeral0 ]
@@ -43,3 +43,10 @@ growpart:
 runcmd:
 - xfs_growfs /var
 {% endif %}
+
+{% if deployment_vars[deployment_type].sudo %}
+- path: /etc/sudoers.d/99-{{ deployment_vars[deployment_type].ssh_user }}-cloud-init-requiretty
+  permissions: 440
+  content: |
+    Defaults:{{ deployment_vars[deployment_type].ssh_user }} !requiretty
+{% endif %}

+ 33 - 0
playbooks/aws/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml

@@ -0,0 +1,33 @@
+---
+# This playbook upgrades an existing AWS cluster, leaving nodes untouched if used with an 'online' deployment type.
+# Usage:
+#  ansible-playbook playbooks/aws/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml -e deployment_type=online -e cluster_id=<cluster_id>
+- hosts: localhost
+  gather_facts: no
+  vars_files:
+  - ../../vars.yml
+  - "../../vars.{{ deployment_type }}.{{ cluster_id }}.yml"
+
+  tasks:
+  - set_fact:
+      g_ssh_user_tmp: "{{ deployment_vars[deployment_type].ssh_user }}"
+      g_sudo_tmp: "{{ deployment_vars[deployment_type].sudo }}"
+
+  - set_fact:
+      tmp_nodes_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-node' }}"
+    when: deployment_type != 'online'
+
+- include: ../../../../common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml
+  vars:
+    g_etcd_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-etcd' }}"
+    g_lb_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-lb' }}"
+    g_masters_group: "{{ 'tag_env-host-type_' ~ cluster_id ~ '-openshift-master' }}"
+    g_nodes_group: "{{ tmp_nodes_group | default('') }}"
+    g_ssh_user: "{{ hostvars.localhost.g_ssh_user_tmp }}"
+    g_sudo: "{{ hostvars.localhost.g_sudo_tmp }}"
+    g_nodeonmaster: true
+    openshift_cluster_id: "{{ cluster_id }}"
+    openshift_debug_level: 2
+    openshift_deployment_type: "{{ deployment_type }}"
+    openshift_hostname: "{{ ec2_private_ip_address }}"
+    openshift_public_hostname: "{{ ec2_ip_address }}"

+ 10 - 0
playbooks/byo/openshift-cluster/scaleup.yml

@@ -0,0 +1,10 @@
+---
+- include: ../../common/openshift-cluster/scaleup.yml
+  vars:
+    g_etcd_group: "{{ 'etcd' }}"
+    g_masters_group: "{{ 'masters' }}"
+    g_new_nodes_group: "{{ 'new_nodes' }}"
+    g_lb_group: "{{ 'lb' }}"
+    openshift_cluster_id: "{{ cluster_id | default('default') }}"
+    openshift_debug_level: 2
+    openshift_deployment_type: "{{ deployment_type }}"

+ 1 - 2
playbooks/byo/openshift_facts.yml

@@ -1,7 +1,6 @@
 ---
 - name: Gather Cluster facts
-  hosts: all
-  gather_facts: no
+  hosts: OSEv3
   roles:
   - openshift_facts
   tasks:

+ 0 - 3
playbooks/common/openshift-cluster/config.yml

@@ -6,6 +6,3 @@
 - include: ../openshift-master/config.yml
 
 - include: ../openshift-node/config.yml
-  vars:
-    osn_cluster_dns_domain: "{{ hostvars[groups.oo_first_master.0].openshift.dns.domain }}"
-    osn_cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].cluster_dns_ip }}"

+ 9 - 4
playbooks/common/openshift-cluster/evaluate_groups.yml

@@ -12,8 +12,8 @@
     when: g_masters_group is not defined
 
   - fail:
-      msg: This playbook requires g_nodes_group to be set
-    when: g_nodes_group is not defined
+      msg: This playbook requires g_nodes_group or g_new_nodes_group to be set
+    when: g_nodes_group is not defined and g_new_nodes_group is not defined
 
   - fail:
       msg: This playbook requires g_lb_group to be set
@@ -35,14 +35,19 @@
       ansible_sudo: "{{ g_sudo | default(omit) }}"
     with_items: groups[g_masters_group] | default([])
 
+  # Use g_new_nodes_group if it exists, otherwise g_nodes_group
+  - set_fact:
+      g_nodes_to_config: "{{ g_new_nodes_group | default(g_nodes_group | default([])) }}"
+
   - name: Evaluate oo_nodes_to_config
     add_host:
       name: "{{ item }}"
       groups: oo_nodes_to_config
       ansible_ssh_user: "{{ g_ssh_user | default(omit) }}"
       ansible_sudo: "{{ g_sudo | default(omit) }}"
-    with_items: groups[g_nodes_group] | default([])
+    with_items: groups[g_nodes_to_config] | default([])
 
+  # Skip adding the master to oo_nodes_to_config when g_new_nodes_group is defined
   - name: Evaluate oo_nodes_to_config
     add_host:
       name: "{{ item }}"
@@ -50,7 +55,7 @@
       ansible_ssh_user: "{{ g_ssh_user | default(omit) }}"
       ansible_sudo: "{{ g_sudo | default(omit) }}"
     with_items: groups[g_masters_group] | default([])
-    when: g_nodeonmaster is defined and g_nodeonmaster == true
+    when: g_nodeonmaster | default(false) == true and g_new_nodes_group is not defined
 
   - name: Evaluate oo_first_etcd
     add_host:

+ 0 - 10
playbooks/common/openshift-cluster/scaleup.yml

@@ -1,16 +1,6 @@
 ---
 - include: evaluate_groups.yml
-  vars:
-    g_etcd_group: "{{ 'etcd' }}"
-    g_masters_group: "{{ 'masters' }}"
-    g_nodes_group: "{{ 'nodes' }}"
-    g_lb_group: "{{ 'lb' }}"
-    openshift_cluster_id: "{{ cluster_id | default('default') }}"
-    openshift_debug_level: 2
-    openshift_deployment_type: "{{ deployment_type }}"
 
 - include: ../openshift-node/config.yml
   vars:
-    osn_cluster_dns_domain: "{{ hostvars[groups.oo_first_master.0].openshift.dns.domain }}"
-    osn_cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].openshift.dns.ip }}"
     openshift_deployment_type: "{{ deployment_type }}"

+ 10 - 7
playbooks/common/openshift-cluster/upgrades/files/pre-upgrade-check

@@ -111,13 +111,16 @@ def print_validation_header():
     overwhelming the user.
     """
     print """\
-At least one port name does not validate. Valid port names:
+At least one port name is invalid and must be corrected before upgrading.
+Please update or remove any resources with invalid port names.
 
-    * must be less that 16 chars
+  Valid port names must:
+
+    * be less than 16 characters
     * have at least one letter
-    * only a-z0-9-
-    * do not start or end with -
-    * Dashes may not be next to eachother ('--')
+    * contain only a-z0-9-
+    * not start or end with -
+    * not contain dashes next to each other ('--')
 """
 
 
@@ -142,9 +145,9 @@ def main():
     # Where the magic happens
     first_error = True
     for kind, path in [
+            ('deploymentconfigs', ("spec", "template", "spec", "containers")),
             ('replicationcontrollers', ("spec", "template", "spec", "containers")),
-            ('pods', ("spec", "containers")),
-            ('deploymentconfigs', ("spec", "template", "spec", "containers"))]:
+            ('pods', ("spec", "containers"))]:
         for item in list_items(kind):
             namespace = item["metadata"]["namespace"]
             item_name = item["metadata"]["name"]
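
For reference, the port-name rules printed by print_validation_header() above can be expressed as a small validator; this is an illustrative sketch, not code from the pre-upgrade-check script:

    import re

    def valid_port_name(name):
        # under 16 characters, containing only a-z, 0-9 and '-'
        if len(name) >= 16 or not re.match(r'^[a-z0-9-]+$', name):
            return False
        # at least one letter, no leading/trailing '-', no '--'
        return (re.search('[a-z]', name) is not None
                and not name.startswith('-')
                and not name.endswith('-')
                and '--' not in name)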

+ 2 - 2
playbooks/common/openshift-cluster/upgrades/files/versions.sh

@@ -2,9 +2,9 @@
 
 yum_installed=$(yum list installed "$@" 2>&1 | tail -n +2 | grep -v 'Installed Packages' | grep -v 'Red Hat Subscription Management' | grep -v 'Error:' | awk '{ print $2 }' | tr '\n' ' ')
 
-yum_available=$(yum list available "$@" 2>&1 | tail -n +2 | grep -v 'Available Packages' | grep -v 'Red Hat Subscription Management' | grep -v 'el7ose' | grep -v 'Error:' | awk '{ print $2 }' | tr '\n' ' ')
+yum_available=$(yum list available -q "$@" 2>&1 | tail -n +2 | grep -v 'Available Packages' | grep -v 'Red Hat Subscription Management' | grep -v 'el7ose' | grep -v 'Error:' | awk '{ print $2 }' | tr '\n' ' ')
 
 
 echo "---"
-echo "curr_version: ${yum_installed}" 
+echo "curr_version: ${yum_installed}"
 echo "avail_version: ${yum_available}"

+ 4 - 0
playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py

@@ -78,6 +78,10 @@ def upgrade_master_3_0_to_3_1(ansible_module, config_base, backup):
         config['kubernetesMasterConfig'].pop('apiLevels')
         changes.append('master-config.yaml: removed kubernetesMasterConfig.apiLevels')
 
+    # Add masterCA to serviceAccountConfig
+    if 'serviceAccountConfig' in config and 'masterCA' not in config['serviceAccountConfig']:
+        config['serviceAccountConfig']['masterCA'] = config['oauthConfig'].get('masterCA', 'ca.crt')
+
     # Add proxyClientInfo to master-config
     if 'proxyClientInfo' not in config['kubernetesMasterConfig']:
         config['kubernetesMasterConfig']['proxyClientInfo'] = {

+ 8 - 3
playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml

@@ -36,9 +36,9 @@
 
   - fail:
       msg: >
-        This upgrade is only supported for origin and openshift-enterprise
+        This upgrade is only supported for origin, openshift-enterprise, and online
         deployment types
-    when: deployment_type not in ['origin','openshift-enterprise']
+    when: deployment_type not in ['origin','openshift-enterprise', 'online']
 
   - fail:
       msg: >
@@ -517,24 +517,28 @@
     - _default_router.rc == 0
     - "'false' in _scc.stdout"
     command: >
-      {{ oc_cmd }} patch scc/privileged -p '{"allowHostPorts":true,"allowHostNetwork":true}' --loglevel=9
+      {{ oc_cmd }} patch scc/privileged -p
+      '{"allowHostPorts":true,"allowHostNetwork":true}' --api-version=v1
 
   - name: Update deployment config to 1.0.4/3.0.1 spec
     when: _default_router.rc == 0
     command: >
       {{ oc_cmd }} patch dc/router -p
       '{"spec":{"strategy":{"rollingParams":{"updatePercent":-10},"spec":{"serviceAccount":"router","serviceAccountName":"router"}}}}'
+      --api-version=v1
 
   - name: Switch to hostNetwork=true
     when: _default_router.rc == 0
     command: >
       {{ oc_cmd }} patch dc/router -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
+      --api-version=v1
 
   - name: Update router image to current version
     when: _default_router.rc == 0
     command: >
       {{ oc_cmd }} patch dc/router -p
       '{"spec":{"template":{"spec":{"containers":[{"name":"router","image":"{{ router_image }}"}]}}}}'
+      --api-version=v1
 
   - name: Check for default registry
     command: >
@@ -548,3 +552,4 @@
     command: >
       {{ oc_cmd }} patch dc/docker-registry -p
       '{"spec":{"template":{"spec":{"containers":[{"name":"registry","image":"{{ registry_image }}"}]}}}}'
+      --api-version=v1

+ 1 - 1
playbooks/common/openshift-etcd/config.yml

@@ -24,7 +24,7 @@
     - /etc/etcd/ca.crt
     register: g_etcd_server_cert_stat_result
   - set_fact:
-      etcd_server_certs_missing: "{{ g_etcd_server_cert_stat_result.results | map(attribute='stat.exists')
+      etcd_server_certs_missing: "{{ g_etcd_server_cert_stat_result.results | oo_collect(attribute='stat.exists')
                                     | list | intersect([false])}}"
       etcd_cert_subdir: etcd-{{ openshift.common.hostname }}
       etcd_cert_config_dir: /etc/etcd
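
oo_collect here replaces jinja2's map filter, which is unavailable before python-jinja2 2.7 (see the 3.0.15 changelog entry above). Roughly, the filter pulls one attribute out of each item in a list, following dotted paths; a simplified sketch, not the exact implementation from filter_plugins/oo_filters.py:

    def oo_collect(data, attribute):
        # Follow a dotted path such as 'stat.exists' through each item.
        def get_attr(item):
            for key in attribute.split('.'):
                item = item[key]
            return item
        return [get_attr(item) for item in data]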

+ 53 - 35
playbooks/common/openshift-master/config.yml

@@ -60,7 +60,7 @@
     register: g_external_etcd_cert_stat_result
   - set_fact:
       etcd_client_certs_missing: "{{ g_external_etcd_cert_stat_result.results
-                                    | map(attribute='stat.exists')
+                                    | oo_collect(attribute='stat.exists')
                                     | list | intersect([false])}}"
       etcd_cert_subdir: openshift-master-{{ openshift.common.hostname }}
       etcd_cert_config_dir: "{{ openshift.common.config_base }}/master"
@@ -157,7 +157,7 @@
     register: g_master_cert_stat_result
   - set_fact:
       master_certs_missing: "{{ False in (g_master_cert_stat_result.results
-                                | map(attribute='stat.exists')
+                                | oo_collect(attribute='stat.exists')
                                 | list ) }}"
       master_cert_subdir: master-{{ openshift.common.hostname }}
       master_cert_config_dir: "{{ openshift.common.config_base }}/master"
@@ -204,14 +204,6 @@
       validate_checksum: yes
     with_items: masters_needing_certs
 
-- name: Inspect named certificates
-  hosts: oo_first_master
-  tasks:
-  - name: Collect certificate names
-    set_fact:
-      parsed_named_certificates: "{{ openshift_master_named_certificates | oo_parse_certificate_names(master_cert_config_dir, openshift.common.internal_hostnames) }}"
-    when: openshift_master_named_certificates is defined
-
 - name: Compute haproxy_backend_servers
   hosts: localhost
   connection: local
@@ -252,31 +244,69 @@
   - fail:
       msg: "openshift_master_session_auth_secrets and openshift_master_encryption_secrets must be equal length"
     when: (openshift_master_session_auth_secrets is defined and openshift_master_session_encryption_secrets is defined) and (openshift_master_session_auth_secrets | length != openshift_master_session_encryption_secrets | length)
+  - name: Install OpenSSL package
+    action: "{{ansible_pkg_mgr}} pkg=openssl state=present"
   - name: Generate session authentication key
     command: /usr/bin/openssl rand -base64 24
     register: session_auth_output
-    with_sequence: count=1
     when: openshift_master_session_auth_secrets is undefined
   - name: Generate session encryption key
     command: /usr/bin/openssl rand -base64 24
     register: session_encryption_output
-    with_sequence: count=1
     when: openshift_master_session_encryption_secrets is undefined
   - set_fact:
-      session_auth_secret: "{{ openshift_master_session_auth_secrets
-                                | default(session_auth_output.results
-                                | map(attribute='stdout')
-                                | list) }}"
-      session_encryption_secret: "{{ openshift_master_session_encryption_secrets
-                                      | default(session_encryption_output.results
-                                      | map(attribute='stdout')
-                                      | list) }}"
+      session_auth_secret: "{{ openshift_master_session_auth_secrets | default([session_auth_output.stdout]) }}"
+      session_encryption_secret: "{{ openshift_master_session_encryption_secrets | default([session_encryption_output.stdout]) }}"
+
+- name: Parse named certificates
+  hosts: localhost
+  vars:
+    internal_hostnames: "{{ hostvars[groups.oo_first_master.0].openshift.common.internal_hostnames }}"
+    named_certificates: "{{ hostvars[groups.oo_first_master.0].openshift_master_named_certificates | default([]) }}"
+    named_certificates_dir: "{{ hostvars[groups.oo_first_master.0].master_cert_config_dir }}/named_certificates/"
+  tasks:
+  - set_fact:
+      parsed_named_certificates: "{{ named_certificates | oo_parse_named_certificates(named_certificates_dir, internal_hostnames) }}"
+    when: named_certificates | length > 0
+
+- name: Deploy named certificates
+  hosts: oo_masters_to_config
+  vars:
+    named_certs_dir: "{{ master_cert_config_dir }}/named_certificates/"
+    named_certs_specified: "{{ openshift_master_named_certificates is defined }}"
+    overwrite_named_certs: "{{ openshift_master_overwrite_named_certificates | default(false) }}"
+  roles:
+  - role: openshift_facts
+  post_tasks:
+  - openshift_facts:
+      role: master
+      local_facts:
+        named_certificates: "{{ hostvars.localhost.parsed_named_certificates | default([]) }}"
+      additive_facts_to_overwrite:
+      - "{{ 'master.named_certificates' if overwrite_named_certs | bool else omit }}"
+  - name: Clear named certificates
+    file:
+      path: "{{ named_certs_dir }}"
+      state: absent
+    when: overwrite_named_certs | bool
+  - name: Ensure named certificate directory exists
+    file:
+      path: "{{ named_certs_dir }}"
+      state: directory
+    when: named_certs_specified | bool
+  - name: Land named certificates
+    copy: src="{{ item.certfile }}" dest="{{ named_certs_dir }}"
+    with_items: openshift_master_named_certificates
+    when: named_certs_specified | bool
+  - name: Land named certificate keys
+    copy: src="{{ item.keyfile }}" dest="{{ named_certs_dir }}"
+    with_items: openshift_master_named_certificates
+    when: named_certs_specified | bool
 
 - name: Configure master instances
   hosts: oo_masters_to_config
   serial: 1
   vars:
-    named_certificates: "{{ hostvars[groups['oo_first_master'][0]]['parsed_named_certificates'] | default([])}}"
     sync_tmpdir: "{{ hostvars.localhost.g_master_mktemp.stdout }}"
     openshift_master_ha: "{{ groups.oo_masters_to_config | length > 1 }}"
     openshift_master_count: "{{ groups.oo_masters_to_config | length }}"
@@ -314,20 +344,8 @@
   - openshift_examples
   - role: openshift_cluster_metrics
     when: openshift.common.use_cluster_metrics | bool
-
-- name: Determine cluster dns ip
-  hosts: oo_first_master
-  tasks:
-  - name: Get master service ip
-    command: "{{ openshift.common.client_binary }} get -o template svc kubernetes --template=\\{\\{.spec.clusterIP\\}\\}"
-    register: master_service_ip_output
-    when: openshift.common.version_greater_than_3_1_or_1_1 | bool
-  - set_fact:
-      cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].openshift.dns.ip }}"
-    when: not openshift.common.version_greater_than_3_1_or_1_1 | bool
-  - set_fact:
-      cluster_dns_ip: "{{ master_service_ip_output.stdout }}"
-    when: openshift.common.version_greater_than_3_1_or_1_1 | bool
+  - role: openshift_manageiq
+    when: openshift.common.use_manageiq | bool
 
 - name: Enable cockpit
   hosts: oo_first_master

+ 4 - 2
playbooks/common/openshift-node/config.yml

@@ -33,7 +33,7 @@
     - server.crt
     register: stat_result
   - set_fact:
-      certs_missing: "{{ stat_result.results | map(attribute='stat.exists')
+      certs_missing: "{{ stat_result.results | oo_collect(attribute='stat.exists')
                          | list | intersect([false])}}"
       node_subdir: node-{{ openshift.common.hostname }}
       config_dir: "{{ openshift.common.config_base }}/generated-configs/node-{{ openshift.common.hostname }}"
@@ -48,7 +48,7 @@
     when: groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config and (openshift.common.use_flannel | bool)
   - set_fact:
       etcd_client_flannel_certs_missing: "{{ g_external_etcd_flannel_cert_stat_result.results
-                                             | map(attribute='stat.exists')
+                                             | oo_collect(attribute='stat.exists')
                                              | list | intersect([false])}}"
       etcd_cert_subdir: openshift-node-{{ openshift.common.hostname }}
       etcd_cert_config_dir: "{{ openshift.common.config_base }}/node"
@@ -158,8 +158,10 @@
   vars:
     sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}"
     openshift_node_master_api_url: "{{ hostvars[groups.oo_first_master.0].openshift.master.api_url }}"
+    # TODO: Prefix flannel role variables.
     etcd_urls: "{{ hostvars[groups.oo_first_master.0].openshift.master.etcd_urls }}"
     embedded_etcd: "{{ hostvars[groups.oo_first_master.0].openshift.master.embedded_etcd }}"
+    openshift_node_first_master_ip: "{{ hostvars[groups.oo_first_master.0].openshift.common.ip }}"
   pre_tasks:
   - name: Ensure certificate directory exists
     file:

+ 0 - 2
playbooks/gce/openshift-cluster/join_node.yml

@@ -45,5 +45,3 @@
     openshift_use_openshift_sdn: true
     openshift_node_labels: "{{ lookup('oo_option', 'openshift_node_labels') }} "
     os_sdn_network_plugin_name: "redhat/openshift-ovs-subnet"
-    osn_cluster_dns_domain: "{{ hostvars[groups.oo_first_master.0].openshift.dns.domain }}"
-    osn_cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].cluster_dns_ip }}"

+ 4 - 0
playbooks/gce/openshift-cluster/launch.yml

@@ -16,6 +16,8 @@
       cluster: "{{ cluster_id }}"
       type: "{{ k8s_type }}"
       g_sub_host_type: "default"
+      gce_machine_type: "{{ lookup('env', 'gce_machine_master_type') | default(lookup('env', 'gce_machine_type'), true) }}"
+      gce_machine_image: "{{ lookup('env', 'gce_machine_master_image') | default(lookup('env', 'gce_machine_image'), true) }}"
 
   - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
     vars:
@@ -27,6 +29,8 @@
       cluster: "{{ cluster_id }}"
       type: "{{ k8s_type }}"
       g_sub_host_type: "{{ sub_host_type }}"
+      gce_machine_type: "{{ lookup('env', 'gce_machine_node_type') | default(lookup('env', 'gce_machine_type'), true) }}"
+      gce_machine_image: "{{ lookup('env', 'gce_machine_node_image') | default(lookup('env', 'gce_machine_image'), true) }}"
 
   - include: ../../common/openshift-cluster/tasks/set_node_launch_facts.yml
     vars:

+ 2 - 2
playbooks/gce/openshift-cluster/tasks/launch_instances.yml

@@ -5,8 +5,8 @@
 - name: Launch instance(s)
   gce:
     instance_names: "{{ instances }}"
-    machine_type: "{{ lookup('env', 'gce_machine_type') | default('n1-standard-1', true) }}"
-    image: "{{ lookup('env', 'gce_machine_image') | default(deployment_vars[deployment_type].image, true) }}"
+    machine_type: "{{ gce_machine_type | default(deployment_vars[deployment_type].machine_type, true) }}"
+    image: "{{ gce_machine_image | default(deployment_vars[deployment_type].image, true) }}"
     service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
     pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
     project_id: "{{ lookup('env', 'gce_project_id') }}"

+ 3 - 0
playbooks/gce/openshift-cluster/vars.yml

@@ -5,13 +5,16 @@ sdn_network_plugin: redhat/openshift-ovs-subnet
 deployment_vars:
   origin:
     image: preinstalled-slave-50g-v5
+    machine_type: n1-standard-1
     ssh_user: root
     sudo: yes
   online:
     image: libra-rhel7
+    machine_type: n1-standard-1
     ssh_user: root
     sudo: no
   enterprise:
     image: rhel-7
+    machine_type: n1-standard-1
     ssh_user:
     sudo: yes

+ 88 - 0
playbooks/openstack/openshift-cluster/files/heat_stack.yaml

@@ -43,6 +43,11 @@ parameters:
     description: Source of legitimate ssh connections
     default: 0.0.0.0/0
 
+  num_etcd:
+    type: number
+    label: Number of etcd nodes
+    description: Number of etcd nodes
+
   num_masters:
     type: number
     label: Number of masters
@@ -58,6 +63,11 @@ parameters:
     label: Number of infrastructure nodes
     description: Number of infrastructure nodes
 
+  etcd_image:
+    type: string
+    label: Etcd image
+    description: Name of the image for the etcd servers
+
   master_image:
     type: string
     label: Master image
@@ -73,6 +83,11 @@ parameters:
     label: Infra image
     description: Name of the image for the infra node servers
 
+  etcd_flavor:
+    type: string
+    label: Etcd flavor
+    description: Flavor of the etcd servers
+
   master_flavor:
     type: string
     label: Master flavor
@@ -90,6 +105,18 @@ parameters:
 
 outputs:
 
+  etcd_names:
+    description: Name of the etcds
+    value: { get_attr: [ etcd, name ] }
+
+  etcd_ips:
+    description: IPs of the etcds
+    value: { get_attr: [ etcd, private_ip ] }
+
+  etcd_floating_ips:
+    description: Floating IPs of the etcds
+    value: { get_attr: [ etcd, floating_ip ] }
+
   master_names:
     description: Name of the masters
     value: { get_attr: [ masters, name ] }
@@ -220,6 +247,37 @@ resources:
           port_range_min: 24224
           port_range_max: 24224
 
+  etcd-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-etcd-secgrp
+          params:
+            cluster_id: { get_param: cluster_id }
+      description:
+        str_replace:
+          template: Security group for cluster_id etcd cluster
+          params:
+            cluster_id: { get_param: cluster_id }
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 22
+          port_range_max: 22
+          remote_ip_prefix: { get_param: ssh_incoming }
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2379
+          port_range_max: 2379
+          remote_mode: remote_group_id
+          remote_group_id: { get_resource: master-secgrp }
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2380
+          port_range_max: 2380
+          remote_mode: remote_group_id
+
   node-secgrp:
     type: OS::Neutron::SecurityGroup
     properties:
@@ -274,6 +332,36 @@ resources:
           port_range_min: 443
           port_range_max: 443
 
+  etcd:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: { get_param: num_etcd }
+      resource_def:
+        type: heat_stack_server.yaml
+        properties:
+          name:
+            str_replace:
+              template: cluster_id-k8s_type-%index%
+              params:
+                cluster_id: { get_param: cluster_id }
+                k8s_type: etcd
+          cluster_id: { get_param: cluster_id }
+          type:       etcd
+          image:      { get_param: etcd_image }
+          flavor:     { get_param: etcd_flavor }
+          key_name:   { get_resource: keypair }
+          net:        { get_resource: net }
+          subnet:     { get_resource: subnet }
+          secgrp:
+            - { get_resource: etcd-secgrp }
+          floating_network: { get_param: floating_ip_pool }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: { get_param: cluster_id }
+    depends_on: interface
+
   masters:
     type: OS::Heat::ResourceGroup
     properties:

+ 15 - 0
playbooks/openstack/openshift-cluster/launch.yml

@@ -35,12 +35,15 @@
              -P floating_ip_pool={{ openstack_floating_ip_pool }}
              -P ssh_public_key="{{ openstack_ssh_public_key }}"
              -P ssh_incoming={{ openstack_ssh_access_from }}
+             -P num_etcd={{ num_etcd }}
              -P num_masters={{ num_masters }}
              -P num_nodes={{ num_nodes }}
              -P num_infra={{ num_infra }}
+             -P etcd_image={{ deployment_vars[deployment_type].image }}
              -P master_image={{ deployment_vars[deployment_type].image }}
              -P node_image={{ deployment_vars[deployment_type].image }}
              -P infra_image={{ deployment_vars[deployment_type].image }}
+             -P etcd_flavor={{ openstack_flavor["etcd"] }}
              -P master_flavor={{ openstack_flavor["master"] }}
              -P node_flavor={{ openstack_flavor["node"] }}
              -P infra_flavor={{ openstack_flavor["infra"] }}
@@ -61,6 +64,18 @@
   - set_fact:
       parsed_outputs: "{{ stack_show_result | oo_parse_heat_stack_outputs }}"
 
+  - name: Add new etcd instances groups and variables
+    add_host:
+      hostname: '{{ item[0] }}'
+      ansible_ssh_host: '{{ item[2] }}'
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: 'tag_env_{{ cluster_id }}, tag_host-type_etcd, tag_env-host-type_{{ cluster_id }}-openshift-etcd, tag_sub-host-type_default'
+    with_together:
+      - parsed_outputs.etcd_names
+      - parsed_outputs.etcd_ips
+      - parsed_outputs.etcd_floating_ips
+
   - name: Add new master instances groups and variables
     add_host:
       hostname: '{{ item[0] }}'

+ 1 - 0
playbooks/openstack/openshift-cluster/vars.yml

@@ -14,6 +14,7 @@ openstack_ssh_public_key:       "{{ lookup('file', lookup('oo_option', 'public_k
 openstack_ssh_access_from:      "{{ lookup('oo_option', 'ssh_from')          |
                                     default('0.0.0.0/0',                     True) }}"
 openstack_flavor:
+  etcd:   "{{ lookup('oo_option', 'etcd_flavor'      ) | default('m1.small',  True) }}"
   master: "{{ lookup('oo_option', 'master_flavor'    ) | default('m1.small',  True) }}"
   infra:  "{{ lookup('oo_option', 'infra_flavor'     ) | default('m1.small',  True) }}"
   node:   "{{ lookup('oo_option', 'node_flavor'      ) | default('m1.medium', True) }}"

+ 7 - 0
roles/ansible/tasks/main.yml

@@ -5,6 +5,13 @@
   yum:
     pkg: ansible
     state: installed
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install Ansible
+  dnf:
+    pkg: ansible
+    state: installed
+  when: ansible_pkg_mgr == "dnf"
 
 - include: config.yml
   vars:

+ 12 - 0
roles/cockpit/tasks/main.yml

@@ -8,6 +8,18 @@
     - cockpit-shell
     - cockpit-bridge
     - "{{ cockpit_plugins }}"
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install cockpit-ws
+  dnf:
+    name: "{{ item }}"
+    state: present
+  with_items:
+    - cockpit-ws
+    - cockpit-shell
+    - cockpit-bridge
+    - "{{ cockpit_plugins }}"
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Enable cockpit-ws
   service:

+ 6 - 0
roles/copr_cli/tasks/main.yml

@@ -2,3 +2,9 @@
 - yum:
     name: copr-cli
     state: present
+  when: ansible_pkg_mgr == "yum"
+
+- dnf:
+    name: copr-cli
+    state: present
+  when: ansible_pkg_mgr == "dnf"

+ 9 - 9
roles/docker/README.md

@@ -1,38 +1,38 @@
 Role Name
 =========
 
-A brief description of the role goes here.
+Ensures the docker package is installed, and optionally raises the timeout for systemd-udevd.service to 5 minutes.
 
 Requirements
 ------------
 
-Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
+None
 
 Role Variables
 --------------
 
-A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
+udevw_udevd_dir: location of systemd config for systemd-udevd.service
+docker_udev_workaround: raises udevd timeout to 5 minutes (https://bugzilla.redhat.com/show_bug.cgi?id=1272446)
 
 Dependencies
 ------------
 
-A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
+None
 
 Example Playbook
 ----------------
 
-Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
-
     - hosts: servers
       roles:
-         - { role: username.rolename, x: 42 }
+      - role: docker
+        docker_udev_workaround: "true"
 
 License
 -------
 
-BSD
+ASL 2.0
 
 Author Information
 ------------------
 
-An optional section for the role authors to include contact information, or a website (HTML is not allowed).
+OpenShift Operations, Red Hat, Inc.

+ 5 - 0
roles/docker/handlers/main.yml

@@ -2,3 +2,8 @@
 
 - name: restart docker
   service: name=docker state=restarted
+
+- name: restart udev
+  service:
+    name: systemd-udevd
+    state: restarted

+ 8 - 120
roles/docker/meta/main.yml

@@ -1,124 +1,12 @@
 ---
 galaxy_info:
-  author: your name
-  description: 
-  company: your company (optional)
-  # Some suggested licenses:
-  # - BSD (default)
-  # - MIT
-  # - GPLv2
-  # - GPLv3
-  # - Apache
-  # - CC-BY
-  license: license (GPLv2, CC-BY, etc)
+  author: OpenShift
+  description: docker package install
+  company: Red Hat, Inc
+  license: ASL 2.0
   min_ansible_version: 1.2
-  #
-  # Below are all platforms currently available. Just uncomment
-  # the ones that apply to your role. If you don't see your 
-  # platform on this list, let us know and we'll get it added!
-  #
-  #platforms:
-  #- name: EL
-  #  versions:
-  #  - all
-  #  - 5
-  #  - 6
-  #  - 7
-  #- name: GenericUNIX
-  #  versions:
-  #  - all
-  #  - any
-  #- name: Fedora
-  #  versions:
-  #  - all
-  #  - 16
-  #  - 17
-  #  - 18
-  #  - 19
-  #  - 20
-  #- name: opensuse
-  #  versions:
-  #  - all
-  #  - 12.1
-  #  - 12.2
-  #  - 12.3
-  #  - 13.1
-  #  - 13.2
-  #- name: Amazon
-  #  versions:
-  #  - all
-  #  - 2013.03
-  #  - 2013.09
-  #- name: GenericBSD
-  #  versions:
-  #  - all
-  #  - any
-  #- name: FreeBSD
-  #  versions:
-  #  - all
-  #  - 8.0
-  #  - 8.1
-  #  - 8.2
-  #  - 8.3
-  #  - 8.4
-  #  - 9.0
-  #  - 9.1
-  #  - 9.1
-  #  - 9.2
-  #- name: Ubuntu
-  #  versions:
-  #  - all
-  #  - lucid
-  #  - maverick
-  #  - natty
-  #  - oneiric
-  #  - precise
-  #  - quantal
-  #  - raring
-  #  - saucy
-  #  - trusty
-  #- name: SLES
-  #  versions:
-  #  - all
-  #  - 10SP3
-  #  - 10SP4
-  #  - 11
-  #  - 11SP1
-  #  - 11SP2
-  #  - 11SP3
-  #- name: GenericLinux
-  #  versions:
-  #  - all
-  #  - any
-  #- name: Debian
-  #  versions:
-  #  - all
-  #  - etch
-  #  - lenny
-  #  - squeeze
-  #  - wheezy
-  #
-  # Below are all categories currently available. Just as with
-  # the platforms above, uncomment those that apply to your role.
-  #
-  #categories:
-  #- cloud
-  #- cloud:ec2
-  #- cloud:gce
-  #- cloud:rax
-  #- clustering
-  #- database
-  #- database:nosql
-  #- database:sql
-  #- development
-  #- monitoring
-  #- networking
-  #- packaging
-  #- system
-  #- web
+  platforms:
+  - name: EL
+    versions:
+    - 7
 dependencies: []
-  # List your role dependencies here, one per line. Only
-  # dependencies available via galaxy should be listed here.
-  # Be sure to remove the '[]' above if you add dependencies
-  # to this list.
-  

+ 7 - 0
roles/docker/tasks/main.yml

@@ -2,7 +2,14 @@
 # tasks file for docker
 - name: Install docker
   yum: pkg=docker
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install docker
+  dnf: pkg=docker
+  when: ansible_pkg_mgr == "dnf"
 
 - name: enable and start the docker service
   service: name=docker enabled=yes state=started
 
+- include: udev_workaround.yml
+  when: docker_udev_workaround | default(False)

+ 30 - 0
roles/docker/tasks/udev_workaround.yml

@@ -0,0 +1,30 @@
+---
+
+- name: Getting current systemd-udevd exec command
+  command: grep -e "^ExecStart=" /lib/systemd/system/systemd-udevd.service
+  changed_when: false
+  register: udevw_udev_start_cmd
+
+- name: Assure systemd-udevd.service.d directory exists
+  file:
+    path: "{{ udevw_udevd_dir }}"
+    state: directory
+
+- name: Create systemd-udevd override file
+  copy:
+    content: |
+      [Service]
+      # Need a blank ExecStart to "clear" the pre-existing one
+      ExecStart=
+      {{ udevw_udev_start_cmd.stdout }} --event-timeout=300
+    dest: "{{ udevw_udevd_dir }}/override.conf"
+    owner: root
+    mode: "0644"
+  notify:
+  - restart udev
+  register: udevw_override_conf
+
+- name: reload systemd config files
+  command: systemctl daemon-reload
+  when: udevw_override_conf | changed
+ 

+ 3 - 0
roles/docker/vars/main.yml

@@ -0,0 +1,3 @@
+---
+
+udevw_udevd_dir: /etc/systemd/system/systemd-udevd.service.d

+ 1 - 1
roles/etcd/README.md

@@ -7,7 +7,7 @@ Requirements
 ------------
 
 This role assumes it's being deployed on a RHEL/Fedora based host with package
-named 'etcd' available via yum.
+named 'etcd' available via yum or dnf (conditionally).
 
 Role Variables
 --------------

+ 5 - 0
roles/etcd/tasks/main.yml

@@ -9,6 +9,11 @@
 
 - name: Install etcd
   yum: pkg=etcd-2.* state=present
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install etcd
+  dnf: pkg=etcd* state=present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Validate permissions on the config dir
   file:

+ 1 - 1
roles/etcd_common/defaults/main.yml

@@ -1,5 +1,5 @@
 ---
-etcd_peers_group: etcd
+etcd_peers_group: oo_etcd_to_config
 
 # etcd server vars
 etcd_conf_dir: /etc/etcd

+ 2 - 1
roles/flannel/README.md

@@ -7,7 +7,8 @@ Requirements
 ------------
 
 This role assumes it's being deployed on a RHEL/Fedora based host with package
-named 'flannel' available via yum, in version superior to 0.3.
+named 'flannel' available via yum or dnf (conditionally), in a version
+greater than 0.3.
 
 Role Variables
 --------------

+ 6 - 0
roles/flannel/tasks/main.yml

@@ -2,6 +2,12 @@
 - name: Install flannel
   sudo: true
   yum: pkg=flannel state=present
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install flannel
+  sudo: true
+  dnf: pkg=flannel state=present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Set flannel etcd url
   sudo: true

+ 7 - 0
roles/fluentd_master/tasks/main.yml

@@ -4,6 +4,13 @@
   yum:
     name: 'http://packages.treasuredata.com/2/redhat/7/x86_64/td-agent-2.2.0-0.x86_64.rpm'
     state: present
+  when: ansible_pkg_mgr == "yum"
+
+- name: download and install td-agent
+  dnf:
+    name: 'http://packages.treasuredata.com/2/redhat/7/x86_64/td-agent-2.2.0-0.x86_64.rpm'
+    state: present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Verify fluentd plugin installed
   command: '/opt/td-agent/embedded/bin/gem query -i fluent-plugin-kubernetes'

+ 7 - 0
roles/fluentd_node/tasks/main.yml

@@ -4,6 +4,13 @@
   yum:
     name: 'http://packages.treasuredata.com/2/redhat/7/x86_64/td-agent-2.2.0-0.x86_64.rpm'
     state: present
+  when: ansible_pkg_mgr == "yum"
+
+- name: download and install td-agent
+  dnf:
+    name: 'http://packages.treasuredata.com/2/redhat/7/x86_64/td-agent-2.2.0-0.x86_64.rpm'
+    state: present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Verify fluentd plugin installed
   command: '/opt/td-agent/embedded/bin/gem query -i fluent-plugin-kubernetes'

+ 7 - 0
roles/haproxy/tasks/main.yml

@@ -3,6 +3,13 @@
   yum:
     pkg: haproxy
     state: present
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install haproxy
+  dnf:
+    pkg: haproxy
+    state: present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Configure haproxy
   template:

+ 5 - 0
roles/kube_nfs_volumes/tasks/main.yml

@@ -1,6 +1,11 @@
 ---
 - name: Install pyparted (RedHat/Fedora)
   yum: name=pyparted,python-httplib2 state=present
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install pyparted (RedHat/Fedora)
+  dnf: name=pyparted,python-httplib2 state=present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: partition the drives
   partitionpool: disks={{ disks }} force={{ force }} sizes={{ sizes }}

+ 5 - 0
roles/kube_nfs_volumes/tasks/nfs.yml

@@ -1,6 +1,11 @@
 ---
 - name: Install NFS server on Fedora/Red Hat
   yum: name=nfs-utils state=present
+  when: ansible_pkg_mgr == "yum"
+
+- name: Install NFS server on Fedora/Red Hat
+  dnf: name=nfs-utils state=present
+  when: ansible_pkg_mgr == "dnf"
 
 - name: Start rpcbind on Fedora/Red Hat
   service: name=rpcbind state=started enabled=yes

+ 7 - 4
roles/lib_zabbix/library/zbx_action.py

@@ -1,8 +1,8 @@
 #!/usr/bin/env python
+# vim: expandtab:tabstop=4:shiftwidth=4
 '''
  Ansible module for zabbix actions
 '''
-# vim: expandtab:tabstop=4:shiftwidth=4
 #
 #   Zabbix action ansible module
 #
@@ -89,6 +89,9 @@ def operation_differences(zabbix_ops, user_ops):
     for zab, user in zip(zabbix_ops, user_ops):
         for key, val in user.items():
             if key == 'opconditions':
+                if len(zab[key]) != len(val):
+                    rval[key] = val
+                    break
                 for z_cond, u_cond in zip(zab[key], user[key]):
                     if not all([str(u_cond[op_key]) == z_cond[op_key] for op_key in \
                                 ['conditiontype', 'operator', 'value']]):
@@ -330,9 +333,9 @@ def get_action_operations(zapi, inc_operations):
                     condition['operator'] = 0
 
                 if condition['value'] == 'acknowledged':
-                    condition['operator'] = 1
+                    condition['value'] = 1
                 else:
-                    condition['operator'] = 0
+                    condition['value'] = 0
 
 
     return inc_operations
@@ -454,7 +457,7 @@ def main():
         if not exists(content):
             module.exit_json(changed=False, state="absent")
 
-        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0]['itemid']])
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0]['actionid']])
         module.exit_json(changed=True, results=content['result'], state="absent")
 
     # Create and Update
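A note on the opconditions hunk above: Python's zip() stops at the shorter of its two inputs, so operation lists of different lengths previously compared element-by-element over their common prefix and could be reported as equal. Checking the lengths first records the difference and breaks out before the truncated comparison can mask a real change.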

+ 331 - 0
roles/lib_zabbix/library/zbx_graph.py

@@ -0,0 +1,331 @@
+#!/usr/bin/env python
+'''
+ Ansible module for zabbix graphs
+'''
+# vim: expandtab:tabstop=4:shiftwidth=4
+#
+#   Zabbix graphs ansible module
+#
+#
+#   Copyright 2015 Red Hat Inc.
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+
+#---
+#- hosts: localhost
+#  gather_facts: no
+#  tasks:
+#  - zbx_graph:
+#      zbx_server: https://zabbixserver/zabbix/api_jsonrpc.php
+#      zbx_user: Admin
+#      zbx_password: zabbix
+#      name: Test Graph
+#      height: 300
+#      width: 500
+#      graph_items:
+#      - item_name: openshift.master.etcd.create.fail
+#        color: red
+#        line_style: bold
+#      - item_name: openshift.master.etcd.create.success
+#        color: red
+#        line_style: bold
+#
+#
+
+# This is in place because the zabbix modules all look very similar to one
+# another. They intentionally duplicate code, since their behavior is very
+# similar but differs for each zabbix class.
+# pylint: disable=duplicate-code
+
+# pylint: disable=import-error
+from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
+
+def exists(content, key='result'):
+    ''' Check if key exists in content or the size of content[key] > 0
+    '''
+    if not content.has_key(key):
+        return False
+
+    if not content[key]:
+        return False
+
+    return True
+
+def get_graph_type(graphtype):
+    '''
+    Possible values:
+    0 - normal;
+    1 - stacked;
+    2 - pie;
+    3 - exploded;
+    '''
+    gtype = 0
+    if 'stacked' in graphtype:
+        gtype = 1
+    elif 'pie' in graphtype:
+        gtype = 2
+    elif 'exploded' in graphtype:
+        gtype = 3
+
+    return gtype
+
+def get_show_legend(show_legend):
+    '''Get the value for show_legend
+       0 - hide
+       1 - (default) show
+    '''
+    rval = 1
+    if 'hide' == show_legend:
+        rval = 0
+
+    return rval
+
+def get_template_id(zapi, template_name):
+    '''
+    get related templates
+    '''
+    # Fetch templates by name
+    content = zapi.get_content('template',
+                               'get',
+                               {'filter': {'host': template_name},})
+
+    if content.has_key('result'):
+        return content['result'][0]['templateid']
+
+    return None
+
+def get_color(color_in):
+    ''' Receive a color and translate it to its hex representation
+
+        A few common colors are set up by default
+    '''
+    colors = {'black': '000000',
+              'red': 'FF0000',
+              'pink': 'FFC0CB',
+              'purple': '800080',
+              'orange': 'FFA500',
+              'gold': 'FFD700',
+              'yellow': 'FFFF00',
+              'green': '008000',
+              'cyan': '00FFFF',
+              'aqua': '00FFFF',
+              'blue': '0000FF',
+              'brown': 'A52A2A',
+              'gray': '808080',
+              'grey': '808080',
+              'silver': 'C0C0C0',
+             }
+    if colors.has_key(color_in):
+        return colors[color_in]
+
+    return color_in
+
+def get_line_style(style):
+    '''determine the line style
+    '''
+    line_style = {'line': 0,
+                  'filled': 1,
+                  'bold': 2,
+                  'dot': 3,
+                  'dashed': 4,
+                  'gradient': 5,
+                 }
+
+    if line_style.has_key(style):
+        return line_style[style]
+
+    return 0
+
+def get_calc_function(func):
+    '''Determine the calculation function'''
+    rval = 2 # default to avg
+    if 'min' in func:
+        rval = 1
+    elif 'max' in func:
+        rval = 4
+    elif 'all' in func:
+        rval = 7
+    elif 'last' in func:
+        rval = 9
+
+    return rval
+
+def get_graph_item_type(gtype):
+    '''Determine the graph item type
+    '''
+    rval = 0 # simple graph type
+    if 'sum' in gtype:
+        rval = 2
+
+    return rval
+
+def get_graph_items(zapi, gitems):
+    '''Get graph items by id'''
+
+    r_items = []
+    for item in gitems:
+        content = zapi.get_content('item',
+                                   'get',
+                                   {'filter': {'name': item['item_name']}})
+        _ = item.pop('item_name')
+        color = get_color(item.pop('color'))
+        drawtype = get_line_style(item.get('line_style', 'line'))
+        func = get_calc_function(item.get('calc_func', 'avg'))
+        g_type = get_graph_item_type(item.get('graph_item_type', 'simple'))
+
+        if content.has_key('result'):
+            tmp = {'itemid': content['result'][0]['itemid'],
+                   'color': color,
+                   'drawtype': drawtype,
+                   'calc_fnc': func,
+                   'type': g_type,
+                  }
+            r_items.append(tmp)
+
+    return r_items
+
+def compare_gitems(zabbix_items, user_items):
+    '''Compare zabbix results with the user's supplied items
+       return True if user_items are equal
+       return False if any of the values differ
+    '''
+    if len(zabbix_items) != len(user_items):
+        return False
+
+    for u_item in user_items:
+        for z_item in zabbix_items:
+            if u_item['itemid'] == z_item['itemid']:
+                if not all([str(value) == z_item[key] for key, value in u_item.items()]):
+                    return False
+
+    return True
+
+# The branches are needed for CRUD and error handling
+# pylint: disable=too-many-branches
+def main():
+    '''
+    ansible zabbix module for zbx_graphs
+    '''
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            zbx_server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
+            zbx_user=dict(default=os.environ.get('ZABBIX_USER', None), type='str'),
+            zbx_password=dict(default=os.environ.get('ZABBIX_PASSWORD', None), type='str'),
+            zbx_debug=dict(default=False, type='bool'),
+            name=dict(default=None, type='str'),
+            height=dict(default=None, type='int'),
+            width=dict(default=None, type='int'),
+            graph_type=dict(default='normal', type='str'),
+            show_legend=dict(default='show', type='str'),
+            state=dict(default='present', type='str'),
+            graph_items=dict(default=None, type='list'),
+        ),
+        #supports_check_mode=True
+    )
+
+    zapi = ZabbixAPI(ZabbixConnection(module.params['zbx_server'],
+                                      module.params['zbx_user'],
+                                      module.params['zbx_password'],
+                                      module.params['zbx_debug']))
+
+    #Set the instance and the template for the rest of the calls
+    zbx_class_name = 'graph'
+    state = module.params['state']
+
+    content = zapi.get_content(zbx_class_name,
+                               'get',
+                               {'filter': {'name': module.params['name']},
+                                #'templateids': templateid,
+                                'selectGraphItems': 'extend',
+                               })
+
+    #******#
+    # GET
+    #******#
+    if state == 'list':
+        module.exit_json(changed=False, results=content['result'], state="list")
+
+    #******#
+    # DELETE
+    #******#
+    if state == 'absent':
+        if not exists(content):
+            module.exit_json(changed=False, state="absent")
+
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0]['graphid']])
+        module.exit_json(changed=True, results=content['result'], state="absent")
+
+    # Create and Update
+    if state == 'present':
+
+        params = {'name': module.params['name'],
+                  'height': module.params['height'],
+                  'width': module.params['width'],
+                  'graphtype': get_graph_type(module.params['graph_type']),
+                  'show_legend': get_show_legend(module.params['show_legend']),
+                  'gitems': get_graph_items(zapi, module.params['graph_items']),
+                 }
+
+        # Remove any None valued params
+        _ = [params.pop(key, None) for key in params.keys() if params[key] is None]
+
+        #******#
+        # CREATE
+        #******#
+        if not exists(content):
+            content = zapi.get_content(zbx_class_name, 'create', params)
+
+            if content.has_key('error'):
+                module.exit_json(failed=True, changed=True, results=content['error'], state="present")
+
+            module.exit_json(changed=True, results=content['result'], state='present')
+
+
+        ########
+        # UPDATE
+        ########
+        differences = {}
+        zab_results = content['result'][0]
+        for key, value in params.items():
+
+            if key == 'gitems':
+                if not compare_gitems(zab_results[key], value):
+                    differences[key] = value
+
+            elif zab_results[key] != value and zab_results[key] != str(value):
+                differences[key] = value
+
+        if not differences:
+            module.exit_json(changed=False, results=zab_results, state="present")
+
+        # We have differences and need to update
+        differences['graphid'] = zab_results['graphid']
+        content = zapi.get_content(zbx_class_name, 'update', differences)
+
+        if content.has_key('error'):
+            module.exit_json(failed=True, changed=False, results=content['error'], state="present")
+
+        module.exit_json(changed=True, results=content['result'], state="present")
+
+    module.exit_json(failed=True,
+                     changed=False,
+                     results='Unknown state passed. %s' % state,
+                     state="unknown")
+
+# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
+# import module snippets.  These are required
+from ansible.module_utils.basic import *
+
+main()
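For reference, a minimal playbook invocation of the new module, lifted from the commented example in its own header (server URL, credentials, and item name are placeholders):

    - hosts: localhost
      gather_facts: no
      tasks:
      - zbx_graph:
          zbx_server: https://zabbixserver/zabbix/api_jsonrpc.php
          zbx_user: Admin
          zbx_password: zabbix
          name: Test Graph
          height: 300
          width: 500
          graph_items:
          - item_name: openshift.master.etcd.create.fail
            color: red
            line_style: bold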

+ 331 - 0
roles/lib_zabbix/library/zbx_graphprototype.py

@@ -0,0 +1,331 @@
+#!/usr/bin/env python
+'''
+ Ansible module for zabbix graphprototypes
+'''
+# vim: expandtab:tabstop=4:shiftwidth=4
+#
+#   Zabbix graphprototypes ansible module
+#
+#
+#   Copyright 2015 Red Hat Inc.
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+
+#---
+#- hosts: localhost
+#  gather_facts: no
+#  tasks:
+#  - zbx_graphprototype:
+#      zbx_server: https://zabbixserver/zabbix/api_jsonrpc.php
+#      zbx_user: Admin
+#      zbx_password: zabbix
+#      name: Test Graph
+#      height: 300
+#      width: 500
+#      graph_items:
+#      - item_name: Bytes per second IN on network interface {#OSO_NET_INTERFACE}
+#        color: red
+#        line_style: bold
+#        item_type: prototype
+#      - item_name: Template OS Linux: Bytes per second OUT on network interface {#OSO_NET_INTERFACE}
+#        item_type: prototype
+#
+#
+
+# This is in place because the zabbix modules all look very similar to one
+# another. They intentionally duplicate code, since their behavior is very
+# similar but differs for each zabbix class.
+# pylint: disable=duplicate-code
+
+# pylint: disable=import-error
+from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
+
+def exists(content, key='result'):
+    ''' Check if key exists in content or the size of content[key] > 0
+    '''
+    if not content.has_key(key):
+        return False
+
+    if not content[key]:
+        return False
+
+    return True
+
+def get_graph_type(graphtype):
+    '''
+    Possible values:
+    0 - normal;
+    1 - stacked;
+    2 - pie;
+    3 - exploded;
+    '''
+    gtype = 0
+    if 'stacked' in graphtype:
+        gtype = 1
+    elif 'pie' in graphtype:
+        gtype = 2
+    elif 'exploded' in graphtype:
+        gtype = 3
+
+    return gtype
+
+def get_show_legend(show_legend):
+    '''Get the value for show_legend
+       0 - hide
+       1 - (default) show
+    '''
+    rval = 1
+    if 'hide' == show_legend:
+        rval = 0
+
+    return rval
+
+def get_template_id(zapi, template_name):
+    '''
+    get related templates
+    '''
+    # Fetch templates by name
+    content = zapi.get_content('template',
+                               'get',
+                               {'filter': {'host': template_name},})
+
+    if content.has_key('result'):
+        return content['result'][0]['templateid']
+
+    return None
+
+def get_color(color_in='black'):
+    ''' Receive a color and translate it to its hex representation
+
+        A few common colors are set up by default
+    '''
+    colors = {'black': '000000',
+              'red': 'FF0000',
+              'pink': 'FFC0CB',
+              'purple': '800080',
+              'orange': 'FFA500',
+              'gold': 'FFD700',
+              'yellow': 'FFFF00',
+              'green': '008000',
+              'cyan': '00FFFF',
+              'aqua': '00FFFF',
+              'blue': '0000FF',
+              'brown': 'A52A2A',
+              'gray': '808080',
+              'grey': '808080',
+              'silver': 'C0C0C0',
+             }
+    if colors.has_key(color_in):
+        return colors[color_in]
+
+    return color_in
+
+def get_line_style(style):
+    '''determine the line style
+    '''
+    line_style = {'line': 0,
+                  'filled': 1,
+                  'bold': 2,
+                  'dot': 3,
+                  'dashed': 4,
+                  'gradient': 5,
+                 }
+
+    if line_style.has_key(style):
+        return line_style[style]
+
+    return 0
+
+def get_calc_function(func):
+    '''Determine the calculation function'''
+    rval = 2 # default to avg
+    if 'min' in func:
+        rval = 1
+    elif 'max' in func:
+        rval = 4
+    elif 'all' in func:
+        rval = 7
+    elif 'last' in func:
+        rval = 9
+
+    return rval
+
+def get_graph_item_type(gtype):
+    '''Determine the graph item type
+    '''
+    rval = 0 # simple graph type
+    if 'sum' in gtype:
+        rval = 2
+
+    return rval
+
+def get_graph_items(zapi, gitems):
+    '''Get graph items by id'''
+
+    r_items = []
+    for item in gitems:
+        content = zapi.get_content('item%s' % item.get('item_type', ''),
+                                   'get',
+                                   {'filter': {'name': item['item_name']}})
+        _ = item.pop('item_name')
+        color = get_color(item.pop('color', 'black'))
+        drawtype = get_line_style(item.get('line_style', 'line'))
+        func = get_calc_function(item.get('calc_func', 'avg'))
+        g_type = get_graph_item_type(item.get('graph_item_type', 'simple'))
+
+        if content.has_key('result'):
+            tmp = {'itemid': content['result'][0]['itemid'],
+                   'color': color,
+                   'drawtype': drawtype,
+                   'calc_fnc': func,
+                   'type': g_type,
+                  }
+            r_items.append(tmp)
+
+    return r_items
+
+def compare_gitems(zabbix_items, user_items):
+    '''Compare zabbix results with the user's supplied items
+       return True if user_items are equal
+       return False if any of the values differ
+    '''
+    if len(zabbix_items) != len(user_items):
+        return False
+
+    for u_item in user_items:
+        for z_item in zabbix_items:
+            if u_item['itemid'] == z_item['itemid']:
+                if not all([str(value) == z_item[key] for key, value in u_item.items()]):
+                    return False
+
+    return True
+
+# The branches are needed for CRUD and error handling
+# pylint: disable=too-many-branches
+def main():
+    '''
+    ansible zabbix module for zbx_graphprototypes
+    '''
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            zbx_server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
+            zbx_user=dict(default=os.environ.get('ZABBIX_USER', None), type='str'),
+            zbx_password=dict(default=os.environ.get('ZABBIX_PASSWORD', None), type='str'),
+            zbx_debug=dict(default=False, type='bool'),
+            name=dict(default=None, type='str'),
+            height=dict(default=None, type='int'),
+            width=dict(default=None, type='int'),
+            graph_type=dict(default='normal', type='str'),
+            show_legend=dict(default='show', type='str'),
+            state=dict(default='present', type='str'),
+            graph_items=dict(default=None, type='list'),
+        ),
+        #supports_check_mode=True
+    )
+
+    zapi = ZabbixAPI(ZabbixConnection(module.params['zbx_server'],
+                                      module.params['zbx_user'],
+                                      module.params['zbx_password'],
+                                      module.params['zbx_debug']))
+
+    #Set the instance and the template for the rest of the calls
+    zbx_class_name = 'graphprototype'
+    state = module.params['state']
+
+    content = zapi.get_content(zbx_class_name,
+                               'get',
+                               {'filter': {'name': module.params['name']},
+                                #'templateids': templateid,
+                                'selectGraphItems': 'extend',
+                               })
+
+    #******#
+    # GET
+    #******#
+    if state == 'list':
+        module.exit_json(changed=False, results=content['result'], state="list")
+
+    #******#
+    # DELETE
+    #******#
+    if state == 'absent':
+        if not exists(content):
+            module.exit_json(changed=False, state="absent")
+
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0]['graphid']])
+        module.exit_json(changed=True, results=content['result'], state="absent")
+
+    # Create and Update
+    if state == 'present':
+
+        params = {'name': module.params['name'],
+                  'height': module.params['height'],
+                  'width': module.params['width'],
+                  'graphtype': get_graph_type(module.params['graph_type']),
+                  'show_legend': get_show_legend(module.params['show_legend']),
+                  'gitems': get_graph_items(zapi, module.params['graph_items']),
+                 }
+
+        # Remove any None valued params
+        _ = [params.pop(key, None) for key in params.keys() if params[key] is None]
+
+        #******#
+        # CREATE
+        #******#
+        if not exists(content):
+            content = zapi.get_content(zbx_class_name, 'create', params)
+
+            if content.has_key('error'):
+                module.exit_json(failed=True, changed=True, results=content['error'], state="present")
+
+            module.exit_json(changed=True, results=content['result'], state='present')
+
+
+        ########
+        # UPDATE
+        ########
+        differences = {}
+        zab_results = content['result'][0]
+        for key, value in params.items():
+
+            if key == 'gitems':
+                if not compare_gitems(zab_results[key], value):
+                    differences[key] = value
+
+            elif zab_results[key] != value and zab_results[key] != str(value):
+                differences[key] = value
+
+        if not differences:
+            module.exit_json(changed=False, results=zab_results, state="present")
+
+        # We have differences and need to update
+        differences['graphid'] = zab_results['graphid']
+        content = zapi.get_content(zbx_class_name, 'update', differences)
+
+        if content.has_key('error'):
+            module.exit_json(failed=True, changed=False, results=content['error'], state="present")
+
+        module.exit_json(changed=True, results=content['result'], state="present")
+
+    module.exit_json(failed=True,
+                     changed=False,
+                     results='Unknown state passed. %s' % state,
+                     state="unknown")
+
+# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
+# import module snippets.  These are required
+from ansible.module_utils.basic import *
+
+main()

+ 290 - 0
roles/lib_zabbix/library/zbx_httptest.py

@@ -0,0 +1,290 @@
+#!/usr/bin/env python
+'''
+ Ansible module for zabbix httpservice
+'''
+# vim: expandtab:tabstop=4:shiftwidth=4
+#
+#   Zabbix item ansible module
+#
+#
+#   Copyright 2015 Red Hat Inc.
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+
+# This is in place because the zabbix modules all look very similar to one
+# another. They intentionally duplicate code, since their behavior is very
+# similar but differs for each zabbix class.
+# pylint: disable=duplicate-code
+
+# pylint: disable=import-error
+from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
+
+def exists(content, key='result'):
+    ''' Check if key exists in content or the size of content[key] > 0
+    '''
+    if not content.has_key(key):
+        return False
+
+    if not content[key]:
+        return False
+
+    return True
+
+def get_authentication_method(auth):
+    ''' determine authentication type'''
+    rval = 0
+    if 'basic' in auth:
+        rval = 1
+    elif 'ntlm' in auth:
+        rval = 2
+
+    return rval
+
+def get_verify_host(verify):
+    '''
+    get the values for verify_host
+    '''
+    if verify:
+        return 1
+
+    return 0
+
+def get_app_id(zapi, application):
+    '''
+    get the application id for the given application name
+    '''
+    # Fetch templates by name
+    content = zapi.get_content('application',
+                               'get',
+                               {'search': {'name': application},
+                                'selectApplications': ['applicationid', 'name']})
+    if content.has_key('result'):
+        return content['result'][0]['applicationid']
+
+    return None
+
+def get_template_id(zapi, template_name):
+    '''
+    get related templates
+    '''
+    # Fetch templates by name
+    content = zapi.get_content('template',
+                               'get',
+                               {'search': {'host': template_name},
+                                'selectApplications': ['applicationid', 'name']})
+    if content.has_key('result'):
+        return content['result'][0]['templateid']
+
+    return None
+
+def get_host_id_by_name(zapi, host_name):
+    '''Get host id by name'''
+    content = zapi.get_content('host',
+                               'get',
+                               {'filter': {'name': host_name}})
+
+    return content['result'][0]['hostid']
+
+def get_status(status):
+    ''' Determine the status of the web scenario  '''
+    rval = 0
+    if 'disabled' in status:
+        return 1
+
+    return rval
+
+def find_step(idx, step_list):
+    ''' find step by index '''
+    for step in step_list:
+        if str(step['no']) == str(idx):
+            return step
+
+    return None
+
+def steps_equal(zab_steps, user_steps):
+    '''compare steps returned from zabbix
+       and steps passed from user
+    '''
+
+    if len(user_steps) != len(zab_steps):
+        return False
+
+    for idx in range(1, len(user_steps)+1):
+
+        user = find_step(idx, user_steps)
+        zab = find_step(idx, zab_steps)
+
+        for key, value in user.items():
+            if str(value) != str(zab[key]):
+                return False
+
+    return True
+
+def process_steps(steps):
+    '''Preprocess the step parameters'''
+    for idx, step in enumerate(steps):
+        if not step.has_key('no'):
+            step['no'] = idx + 1
+
+    return steps
+
+# The branches are needed for CRUD and error handling
+# pylint: disable=too-many-branches
+def main():
+    '''
+    ansible zabbix module for zbx_item
+    '''
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            zbx_server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
+            zbx_user=dict(default=os.environ.get('ZABBIX_USER', None), type='str'),
+            zbx_password=dict(default=os.environ.get('ZABBIX_PASSWORD', None), type='str'),
+            zbx_debug=dict(default=False, type='bool'),
+            name=dict(default=None, required=True, type='str'),
+            agent=dict(default=None, type='str'),
+            template_name=dict(default=None, type='str'),
+            host_name=dict(default=None, type='str'),
+            interval=dict(default=60, type='int'),
+            application=dict(default=None, type='str'),
+            authentication=dict(default=None, type='str'),
+            http_user=dict(default=None, type='str'),
+            http_password=dict(default=None, type='str'),
+            state=dict(default='present', type='str'),
+            status=dict(default='enabled', type='str'),
+            steps=dict(default='present', type='list'),
+            verify_host=dict(default=False, type='bool'),
+            retries=dict(default=1, type='int'),
+            headers=dict(default=None, type='dict'),
+            query_type=dict(default='filter', choices=['filter', 'search'], type='str'),
+        ),
+        #supports_check_mode=True
+        mutually_exclusive=[['template_name', 'host_name']],
+    )
+
+    zapi = ZabbixAPI(ZabbixConnection(module.params['zbx_server'],
+                                      module.params['zbx_user'],
+                                      module.params['zbx_password'],
+                                      module.params['zbx_debug']))
+
+    #Set the instance and the template for the rest of the calls
+    zbx_class_name = 'httptest'
+    state = module.params['state']
+    hostid = None
+
+    # If a template name was passed then accept the template
+    if module.params['template_name']:
+        hostid = get_template_id(zapi, module.params['template_name'])
+    else:
+        hostid = get_host_id_by_name(zapi, module.params['host_name'])
+
+    # Fail if a template was not found matching the name
+    if not hostid:
+        module.exit_json(failed=True,
+                         changed=False,
+                         results='Error: Could not find template or host with name [%s].' %
+                         (module.params['template_name'] or module.params['host_name']),
+                         state="unknown")
+
+    content = zapi.get_content(zbx_class_name,
+                               'get',
+                               {module.params['query_type']: {'name': module.params['name']},
+                                'selectSteps': 'extend',
+                               })
+
+    #******#
+    # GET
+    #******#
+    if state == 'list':
+        module.exit_json(changed=False, results=content['result'], state="list")
+
+    #******#
+    # DELETE
+    #******#
+    if state == 'absent':
+        if not exists(content):
+            module.exit_json(changed=False, state="absent")
+
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0]['httptestid']])
+        module.exit_json(changed=True, results=content['result'], state="absent")
+
+    # Create and Update
+    if state == 'present':
+
+        params = {'name': module.params['name'],
+                  'hostid': hostid,
+                  'agent': module.params['agent'],
+                  'retries': module.params['retries'],
+                  'steps': process_steps(module.params['steps']),
+                  'applicationid': get_app_id(zapi, module.params['application']),
+                  'delay': module.params['interval'],
+                  'verify_host': get_verify_host(module.params['verify_host']),
+                  'status': get_status(module.params['status']),
+                  'headers': module.params['headers'],
+                  'http_user': module.params['http_user'],
+                  'http_password': module.params['http_password'],
+                 }
+
+
+        # Remove any None valued params
+        _ = [params.pop(key, None) for key in params.keys() if params[key] is None]
+
+        #******#
+        # CREATE
+        #******#
+        if not exists(content):
+            content = zapi.get_content(zbx_class_name, 'create', params)
+
+            if content.has_key('error'):
+                module.exit_json(failed=True, changed=True, results=content['error'], state="present")
+
+            module.exit_json(changed=True, results=content['result'], state='present')
+
+
+        ########
+        # UPDATE
+        ########
+        differences = {}
+        zab_results = content['result'][0]
+        for key, value in params.items():
+
+            if key == 'steps':
+                if not steps_equal(zab_results[key], value):
+                    differences[key] = value
+
+            elif zab_results[key] != value and zab_results[key] != str(value):
+                differences[key] = value
+
+        # We have differences and need to update
+        if not differences:
+            module.exit_json(changed=False, results=zab_results, state="present")
+
+        differences['httptestid'] = zab_results['httptestid']
+        content = zapi.get_content(zbx_class_name, 'update', differences)
+
+        if content.has_key('error'):
+            module.exit_json(failed=True, changed=False, results=content['error'], state="present")
+
+        module.exit_json(changed=True, results=content['result'], state="present")
+
+    module.exit_json(failed=True,
+                     changed=False,
+                     results='Unknown state passed. %s' % state,
+                     state="unknown")
+
+# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
+# import module snippets.  These are required
+from ansible.module_utils.basic import *
+
+main()
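Note how process_steps() numbers the scenario: steps that omit the Zabbix 'no' field are assigned 1..N in list order, so a scenario can be declared without explicit indices. A hypothetical invocation (host, URLs, and values are illustrative; step keys follow the Zabbix httptest API):

    - zbx_httptest:
        zbx_server: https://zabbixserver/zabbix/api_jsonrpc.php
        zbx_user: Admin
        zbx_password: zabbix
        name: Example Web Check
        host_name: example-host
        application: Web Checks
        steps:
        - name: front page            # becomes no: 1
          url: http://example.com/
          status_codes: 200
        - name: health endpoint       # becomes no: 2
          url: http://example.com/healthz
          status_codes: 200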

+ 42 - 22
roles/lib_zabbix/library/zbx_usergroup.py

@@ -27,6 +27,10 @@ zabbix ansible module for usergroups
 # but different for each zabbix class.
 # pylint: disable=duplicate-code
 
+# Disabling too-many-branches as we need the error checking and the if-statements
+# to determine the proper state
+# pylint: disable=too-many-branches
+
 # pylint: disable=import-error
 from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
 
@@ -92,26 +96,24 @@ def get_user_status(status):
     return 1
 
 
-#def get_userids(zapi, users):
-#    ''' Get userids from user aliases
-#    '''
-#    if not users:
-#        return None
-#
-#    userids = []
-#    for alias in users:
-#        content = zapi.get_content('user', 'get', {'search': {'alias': alias}})
-#        if content['result']:
-#            userids.append(content['result'][0]['userid'])
-#
-#    return userids
+def get_userids(zapi, users):
+    ''' Get userids from user aliases
+    '''
+    if not users:
+        return None
+
+    userids = []
+    for alias in users:
+        content = zapi.get_content('user', 'get', {'search': {'alias': alias}})
+        if content['result']:
+            userids.append(content['result'][0]['userid'])
+
+    return userids
 
 def main():
     ''' Ansible module for usergroup
     '''
 
-    ##def usergroup(self, name, rights=None, users=None, state='present', params=None):
-
     module = AnsibleModule(
         argument_spec=dict(
             zbx_server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
@@ -123,7 +125,7 @@ def main():
             status=dict(default='enabled', type='str'),
             name=dict(default=None, type='str', required=True),
             rights=dict(default=None, type='list'),
-            #users=dict(default=None, type='list'),
+            users=dict(default=None, type='list'),
             state=dict(default='present', type='str'),
         ),
         #supports_check_mode=True
@@ -144,9 +146,15 @@ def main():
                                {'search': {'name': uname},
                                 'selectUsers': 'userid',
                                })
+    #******#
+    # GET
+    #******#
     if state == 'list':
         module.exit_json(changed=False, results=content['result'], state="list")
 
+    #******#
+    # DELETE
+    #******#
     if state == 'absent':
         if not exists(content):
             module.exit_json(changed=False, state="absent")
@@ -157,6 +165,7 @@ def main():
         content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0][idname]])
         module.exit_json(changed=True, results=content['result'], state="absent")
 
+    # Create and Update
     if state == 'present':
 
         params = {'name': uname,
@@ -164,26 +173,37 @@ def main():
                   'users_status': get_user_status(module.params['status']),
                   'gui_access': get_gui_access(module.params['gui_access']),
                   'debug_mode': get_debug_mode(module.params['debug_mode']),
-                  #'userids': get_userids(zapi, module.params['users']),
+                  'userids': get_userids(zapi, module.params['users']),
                  }
 
+        # Remove any None valued params
         _ = [params.pop(key, None) for key in params.keys() if params[key] == None]
 
+        #******#
+        # CREATE
+        #******#
         if not exists(content):
             # if we didn't find it, create it
             content = zapi.get_content(zbx_class_name, 'create', params)
+
+            if content.has_key('error'):
+                module.exit_json(failed=True, changed=True, results=content['error'], state="present")
+
             module.exit_json(changed=True, results=content['result'], state='present')
-        # already exists, we need to update it
-        # let's compare properties
+
+
+        ########
+        # UPDATE
+        ########
         differences = {}
         zab_results = content['result'][0]
         for key, value in params.items():
             if key == 'rights':
                 differences['rights'] = value
 
-            #elif key == 'userids' and zab_results.has_key('users'):
-                #if zab_results['users'] != value:
-                    #differences['userids'] = value
+            elif key == 'userids' and zab_results.has_key('users'):
+                if zab_results['users'] != value:
+                    differences['userids'] = value
 
             elif zab_results[key] != value and zab_results[key] != str(value):
                 differences[key] = value

+ 24 - 0
roles/lib_zabbix/tasks/create_template.yml

@@ -105,3 +105,27 @@
     description: "{{ item.description | default('', True) }}"
   with_items: template.ztriggerprototypes
   when: template.ztriggerprototypes is defined
+
+- name: Create Graphs
+  zbx_graph:
+    zbx_server: "{{ server }}"
+    zbx_user: "{{ user }}"
+    zbx_password: "{{ password }}"
+    name: "{{ item.name }}"
+    height: "{{ item.height }}"
+    width: "{{ item.width }}"
+    graph_items: "{{ item.graph_items }}"
+  with_items: template.zgraphs
+  when: template.zgraphs is defined
+
+- name: Create Graph Prototypes
+  zbx_graphprototype:
+    zbx_server: "{{ server }}"
+    zbx_user: "{{ user }}"
+    zbx_password: "{{ password }}"
+    name: "{{ item.name }}"
+    height: "{{ item.height }}"
+    width: "{{ item.width }}"
+    graph_items: "{{ item.graph_items }}"
+  with_items: template.zgraphprototypes
+  when: template.zgraphprototypes is defined
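The two new loops expect graph definitions on the template object being processed. A hypothetical vars entry showing the fields the tasks consume (all names illustrative; graph_items entries follow the zbx_graph format, where color is required and line_style/calc_func are optional):

    g_template_example:
      name: Template Example
      zgraphs:
      - name: Example Graph
        height: 300
        width: 500
        graph_items:
        - item_name: example.item.success
          color: green
        - item_name: example.item.fail
          color: red
          line_style: bold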

+ 10 - 0
roles/openshift_ansible_inventory/tasks/main.yml

@@ -2,6 +2,16 @@
 - yum:
     name: "{{ item }}"
     state: present
+  when: ansible_pkg_mgr == "yum"
+  with_items:
+  - openshift-ansible-inventory
+  - openshift-ansible-inventory-aws
+  - openshift-ansible-inventory-gce
+
+- dnf:
+    name: "{{ item }}"
+    state: present
+  when: ansible_pkg_mgr == "dnf"
   with_items:
   - openshift-ansible-inventory
   - openshift-ansible-inventory-aws

+ 3 - 3
roles/openshift_cluster_metrics/tasks/main.yml

@@ -7,7 +7,7 @@
 
 - name: Create InfluxDB Services
   command: >
-    {{ openshift.common.client_binary }} create -f 
+    {{ openshift.common.client_binary }} create -f
     /etc/openshift/cluster-metrics/influxdb.yaml
   register: oex_influxdb_services
   failed_when: "'already exists' not in oex_influxdb_services.stderr and oex_influxdb_services.rc != 0"
@@ -15,14 +15,14 @@
 
 - name: Create Heapster Service Account
   command: >
-    {{ openshift.common.client_binary }} create -f 
+    {{ openshift.common.client_binary }} create -f
     /etc/openshift/cluster-metrics/heapster-serviceaccount.yaml
   register: oex_heapster_serviceaccount
   failed_when: "'already exists' not in oex_heapster_serviceaccount.stderr and oex_heapster_serviceaccount.rc != 0"
   changed_when: false
 
 - name: Add cluster-reader role to Heapster
-  command: > 
+  command: >
     {{ openshift.common.admin_binary }} policy
     add-cluster-role-to-user
     cluster-reader

+ 11 - 0
roles/openshift_common/tasks/main.yml

@@ -3,6 +3,10 @@
     msg: Flannel can not be used with openshift sdn
   when: openshift_use_openshift_sdn | default(false) | bool and openshift_use_flannel | default(false) | bool
 
+- fail:
+    msg: openshift_hostname must be 64 characters or less
+  when: openshift_hostname is defined and openshift_hostname | length > 64
+
 - name: Set common Cluster facts
   openshift_facts:
     role: common
@@ -18,6 +22,13 @@
       deployment_type: "{{ openshift_deployment_type }}"
       use_fluentd: "{{ openshift_use_fluentd | default(None) }}"
       use_flannel: "{{ openshift_use_flannel | default(None) }}"
+      use_manageiq: "{{ openshift_use_manageiq | default(None) }}"
+
+  # For enterprise versions < 3.1 and origin versions < 1.1 we want to set the
+  # hostname by default.
+- set_fact:
+    set_hostname_default: "{{ not openshift.common.version_greater_than_3_1_or_1_1 }}"
 
 - name: Set hostname
   hostname: name={{ openshift.common.hostname }}
+  when: openshift_set_hostname | default(set_hostname_default) | bool
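With this change the hostname is only set by default on enterprise versions below 3.1 and origin versions below 1.1; an inventory can force the behavior either way through the new variable, e.g.:

    # inventory group_vars (illustrative)
    openshift_set_hostname: true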

+ 3 - 1
roles/openshift_examples/defaults/main.yml

@@ -6,7 +6,9 @@ openshift_examples_load_db_templates: true
 openshift_examples_load_xpaas: "{{ openshift_deployment_type in ['enterprise','openshift-enterprise','atomic-enterprise','online']  }}"
 openshift_examples_load_quickstarts: true
 
-examples_base: /usr/share/openshift/examples
+content_version: "{{ 'v1.1' if openshift.common.version_greater_than_3_1_or_1_1 else 'v1.0' }}"
+
+examples_base: "/usr/share/openshift/examples"
 image_streams_base: "{{ examples_base }}/image-streams"
 centos_image_streams: "{{ image_streams_base}}/image-streams-centos7.json"
 rhel_image_streams: "{{ image_streams_base}}/image-streams-rhel7.json"

+ 4 - 3
roles/openshift_examples/examples-sync.sh

@@ -6,9 +6,10 @@
 # This script should be run from openshift-ansible/roles/openshift_examples
 
 XPAAS_VERSION=ose-v1.1.0
-EXAMPLES_BASE=$(pwd)/files/examples
-find files/examples -name '*.json' -delete
-find files/examples -name '*.yaml' -delete
+ORIGIN_VERSION=v1.1
+EXAMPLES_BASE=$(pwd)/files/examples/${ORIGIN_VERSION}
+find ${EXAMPLES_BASE} -name '*.json' -delete
+find ${EXAMPLES_BASE} -name '*.yaml' -delete
 TEMP=`mktemp -d`
 pushd $TEMP
 

+ 7 - 0
roles/openshift_examples/files/examples/README.md

@@ -0,0 +1,7 @@
+Image Streams and Templates may require specific versions of OpenShift, so
+they've been namespaced by version. At this time, once a new version of Origin
+is released, the older versions will only receive new content by specific
+request.
+
+Please file an issue at https://github.com/openshift/openshift-ansible if you'd
+like to see older content updated and have tested it to ensure it's backwards
+compatible.

roles/openshift_examples/files/examples/db-templates/mongodb-ephemeral-template.json → roles/openshift_examples/files/examples/v1.0/db-templates/mongodb-ephemeral-template.json


roles/openshift_examples/files/examples/db-templates/mongodb-persistent-template.json → roles/openshift_examples/files/examples/v1.0/db-templates/mongodb-persistent-template.json


roles/openshift_examples/files/examples/db-templates/mysql-ephemeral-template.json → roles/openshift_examples/files/examples/v1.0/db-templates/mysql-ephemeral-template.json


roles/openshift_examples/files/examples/db-templates/mysql-persistent-template.json → roles/openshift_examples/files/examples/v1.0/db-templates/mysql-persistent-template.json


roles/openshift_examples/files/examples/db-templates/postgresql-ephemeral-template.json → roles/openshift_examples/files/examples/v1.0/db-templates/postgresql-ephemeral-template.json


roles/openshift_examples/files/examples/db-templates/postgresql-persistent-template.json → roles/openshift_examples/files/examples/v1.0/db-templates/postgresql-persistent-template.json


+ 285 - 0
roles/openshift_examples/files/examples/v1.0/image-streams/image-streams-centos7.json

@@ -0,0 +1,285 @@
+{
+  "kind": "ImageStreamList",
+  "apiVersion": "v1",
+  "metadata": {},
+  "items": [
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "ruby",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/ruby-20-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "2.0",
+            "annotations": {
+              "description": "Build and run Ruby 2.0 applications",
+              "iconClass": "icon-ruby",
+              "tags": "builder,ruby",
+              "supports": "ruby:2.0,ruby",
+              "version": "2.0",
+              "sampleRepo": "https://github.com/openshift/ruby-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "nodejs",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/nodejs-010-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "0.10",
+            "annotations": {
+              "description": "Build and run NodeJS 0.10 applications",
+              "iconClass": "icon-nodejs",
+              "tags": "builder,nodejs",
+              "supports":"nodejs:0.10,nodejs:0.1,nodejs",
+              "version": "0.10",
+              "sampleRepo": "https://github.com/openshift/nodejs-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "perl",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/perl-516-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "5.16",
+            "annotations": {
+              "description": "Build and run Perl 5.16 applications",
+              "iconClass": "icon-perl",
+              "tags": "builder,perl",
+              "supports":"perl:5.16,perl",
+              "version": "5.16",
+              "sampleRepo": "https://github.com/openshift/dancer-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "php",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/php-55-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "5.5",
+            "annotations": {
+              "description": "Build and run PHP 5.5 applications",
+              "iconClass": "icon-php",
+              "tags": "builder,php",
+              "supports":"php:5.5,php",
+              "version": "5.5",
+              "sampleRepo": "https://github.com/openshift/cakephp-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "python",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/python-33-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "3.3",
+            "annotations": {
+              "description": "Build and run Python 3.3 applications",
+              "iconClass": "icon-python",
+              "tags": "builder,python",
+              "supports":"python:3.3,python",
+              "version": "3.3",
+              "sampleRepo": "https://github.com/openshift/django-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "wildfly",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/wildfly-81-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "8.1",
+            "annotations": {
+              "description": "Build and run Java applications on Wildfly 8.1",
+              "iconClass": "icon-wildfly",
+              "tags": "builder,wildfly,java",
+              "supports":"wildfly:8.1,jee,java",
+              "version": "8.1",
+              "sampleRepo": "https://github.com/bparees/openshift-jee-sample.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "mysql",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/mysql-55-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "5.5",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "postgresql",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/postgresql-92-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "9.2",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "mongodb",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/mongodb-24-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "2.4",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "jenkins",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "openshift/jenkins-1-centos7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "1",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    }
+  ]
+}

+ 254 - 0
roles/openshift_examples/files/examples/v1.0/image-streams/image-streams-rhel7.json

@@ -0,0 +1,254 @@
+{
+  "kind": "ImageStreamList",
+  "apiVersion": "v1",
+  "metadata": {},
+  "items": [
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "ruby",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/ruby-20-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "2.0",
+            "annotations": {
+              "description": "Build and run Ruby 2.0 applications",
+              "iconClass": "icon-ruby",
+              "tags": "builder,ruby",
+              "supports": "ruby:2.0,ruby",
+              "version": "2.0",
+              "sampleRepo": "https://github.com/openshift/ruby-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "nodejs",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/nodejs-010-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "0.10",
+            "annotations": {
+              "description": "Build and run NodeJS 0.10 applications",
+              "iconClass": "icon-nodejs",
+              "tags": "builder,nodejs",
+              "supports":"nodejs:0.10,nodejs:0.1,nodejs",
+              "version": "0.10",
+              "sampleRepo": "https://github.com/openshift/nodejs-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "perl",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/perl-516-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "5.16",
+            "annotations": {
+              "description": "Build and run Perl 5.16 applications",
+              "iconClass": "icon-perl",
+              "tags": "builder,perl",
+              "supports":"perl:5.16,perl",
+              "version": "5.16",
+              "sampleRepo": "https://github.com/openshift/dancer-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "php",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/php-55-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "5.5",
+            "annotations": {
+              "description": "Build and run PHP 5.5 applications",
+              "iconClass": "icon-php",
+              "tags": "builder,php",
+              "supports":"php:5.5,php",
+              "version": "5.5",
+              "sampleRepo": "https://github.com/openshift/cakephp-ex.git"              
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "python",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/python-33-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "3.3",
+            "annotations": {
+              "description": "Build and run Python 3.3 applications",
+              "iconClass": "icon-python",
+              "tags": "builder,python",
+              "supports":"python:3.3,python",
+              "version": "3.3",
+              "sampleRepo": "https://github.com/openshift/django-ex.git"
+            },
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "mysql",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/mysql-55-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "5.5",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "postgresql",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/postgresql-92-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "9.2",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "mongodb",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/mongodb-24-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "2.4",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    },
+    {
+      "kind": "ImageStream",
+      "apiVersion": "v1",
+      "metadata": {
+        "name": "jenkins",
+        "creationTimestamp": null
+      },
+      "spec": {
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/jenkins-1-rhel7",
+        "tags": [
+          {
+            "name": "latest"
+          },
+          {
+            "name": "1",
+            "from": {
+              "Kind": "ImageStreamTag",
+              "Name": "latest"
+            }
+          }
+        ]
+      }
+    }
+  ]
+}
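
For reference, these streams are normally loaded by the openshift_examples role, but they can also be imported by hand. A minimal sketch, assuming the file is installed as image-streams-rhel7.json (the exact file name is an assumption) and the shared "openshift" namespace already exists:

    # Import the RHEL-based image streams into the shared "openshift" project
    # (hypothetical manual invocation; the openshift_examples role does this for you)
    oc create -f image-streams-rhel7.json -n openshift

    # Confirm the streams and their tags were registered
    oc get imagestreams -n openshift
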

roles/openshift_examples/files/examples/infrastructure-templates/enterprise/logging-deployer.yaml → roles/openshift_examples/files/examples/v1.0/infrastructure-templates/enterprise/logging-deployer.yaml


+ 115 - 0
roles/openshift_examples/files/examples/v1.0/infrastructure-templates/enterprise/metrics-deployer.yaml

@@ -0,0 +1,115 @@
+#
+# Copyright 2014-2015 Red Hat, Inc. and/or its affiliates
+# and other contributors as indicated by the @author tags.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+apiVersion: "v1"
+kind: "Template"
+metadata:
+  name: metrics-deployer-template
+  annotations:
+    description: "Template for deploying the required Metrics integration. Requires cluster-admin 'metrics-deployer' service account and 'metrics-deployer' secret."
+    tags: "infrastructure"
+labels:
+  metrics-infra: deployer
+  provider: openshift
+  component: deployer
+objects:
+-
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    generateName: metrics-deployer-
+  spec:
+    containers:
+    - image: ${IMAGE_PREFIX}metrics-deployer:${IMAGE_VERSION}
+      name: deployer
+      volumeMounts:
+      - name: secret
+        mountPath: /secret
+        readOnly: true
+      - name: empty
+        mountPath: /etc/deploy
+      env:
+        - name: PROJECT
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: IMAGE_PREFIX
+          value: ${IMAGE_PREFIX}
+        - name: IMAGE_VERSION
+          value: ${IMAGE_VERSION}
+        - name: PUBLIC_MASTER_URL
+          value: ${PUBLIC_MASTER_URL}
+        - name: MASTER_URL
+          value: ${MASTER_URL}
+        - name: REDEPLOY
+          value: ${REDEPLOY}
+        - name: USE_PERSISTENT_STORAGE
+          value: ${USE_PERSISTENT_STORAGE}
+        - name: HAWKULAR_METRICS_HOSTNAME
+          value: ${HAWKULAR_METRICS_HOSTNAME}
+        - name: CASSANDRA_NODES
+          value: ${CASSANDRA_NODES}
+        - name: CASSANDRA_PV_SIZE
+          value: ${CASSANDRA_PV_SIZE}
+        - name: METRIC_DURATION
+          value: ${METRIC_DURATION}
+    dnsPolicy: ClusterFirst
+    restartPolicy: Never
+    serviceAccount: metrics-deployer
+    volumes:
+    - name: empty
+      emptyDir: {}
+    - name: secret
+      secret:
+        secretName: metrics-deployer
+parameters:
+-
+  description: 'Specify prefix for metrics components; e.g. for "openshift/origin-metrics-deployer:v1.1", set prefix "openshift/origin-"'
+  name: IMAGE_PREFIX
+  value: "registry.access.redhat.com/openshift3/"
+-
+  description: 'Specify version for metrics components; e.g. for "openshift/origin-metrics-deployer:v1.1", set version "v1.1"'
+  name: IMAGE_VERSION
+  value: "3.1.0"
+-
+  description: "Internal URL for the master, for authentication retrieval"
+  name: MASTER_URL
+  value: "https://kubernetes.default.svc:443"
+-
+  description: "External hostname where clients will reach Hawkular Metrics"
+  name: HAWKULAR_METRICS_HOSTNAME
+  required: true
+-
+  description: "If set to true the deployer will try and delete all the existing components before trying to redeploy."
+  name: REDEPLOY
+  value: "false"
+-
+  description: "Set to true for persistent storage, set to false to use non persistent storage"
+  name: USE_PERSISTENT_STORAGE
+  value: "true"
+-
+  description: "The number of Cassandra Nodes to deploy for the initial cluster"
+  name: CASSANDRA_NODES
+  value: "1"
+-
+  description: "The persistent volume size for each of the Cassandra nodes"
+  name: CASSANDRA_PV_SIZE
+  value: "1Gi"
+-
+  description: "How many days metrics should be stored for."
+  name: METRIC_DURATION
+  value: "7"

roles/openshift_examples/files/examples/infrastructure-templates/origin/logging-deployer.yaml → roles/openshift_examples/files/examples/v1.0/infrastructure-templates/origin/logging-deployer.yaml


+ 2 - 2
roles/openshift_examples/files/examples/infrastructure-templates/origin/metrics-deployer.yaml

@@ -81,11 +81,11 @@ parameters:
 -
   description: 'Specify prefix for metrics components; e.g. for "openshift/origin-metrics-deployer:v1.1", set prefix "openshift/origin-"'
   name: IMAGE_PREFIX
-  value: "hawkular/"
+  value: "docker.io/openshift/origin-"
 -
   description: 'Specify version for metrics components; e.g. for "openshift/origin-metrics-deployer:v1.1", set version "v1.1"'
   name: IMAGE_VERSION
-  value: "0.7.0-SNAPSHOT"
+  value: "latest"
 -
   description: "Internal URL for the master, for authentication retrieval"
   name: MASTER_URL
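
This switches the origin defaults from the upstream Hawkular images to the origin-metrics images on docker.io. Anyone who still wants the previous defaults can override the parameters at instantiation time; a sketch using the values removed above (hostname is a placeholder):

    # Restore the previous upstream Hawkular defaults for this one deployment
    oc process -f metrics-deployer.yaml \
        -v HAWKULAR_METRICS_HOSTNAME=hawkular-metrics.example.com,IMAGE_PREFIX=hawkular/,IMAGE_VERSION=0.7.0-SNAPSHOT \
      | oc create -f -
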

roles/openshift_examples/files/examples/quickstart-templates/cakephp-mysql.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/cakephp-mysql.json


roles/openshift_examples/files/examples/quickstart-templates/cakephp.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/cakephp.json


roles/openshift_examples/files/examples/quickstart-templates/dancer-mysql.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/dancer-mysql.json


roles/openshift_examples/files/examples/quickstart-templates/dancer.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/dancer.json


roles/openshift_examples/files/examples/quickstart-templates/django-postgresql.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/django-postgresql.json


roles/openshift_examples/files/examples/quickstart-templates/django.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/django.json


roles/openshift_examples/files/examples/quickstart-templates/jenkins-ephemeral-template.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/jenkins-ephemeral-template.json


roles/openshift_examples/files/examples/quickstart-templates/jenkins-persistent-template.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/jenkins-persistent-template.json


roles/openshift_examples/files/examples/quickstart-templates/nodejs-mongodb.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/nodejs-mongodb.json


roles/openshift_examples/files/examples/quickstart-templates/nodejs.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/nodejs.json


roles/openshift_examples/files/examples/quickstart-templates/rails-postgresql.json → roles/openshift_examples/files/examples/v1.0/quickstart-templates/rails-postgresql.json


roles/openshift_examples/files/examples/xpaas-streams/jboss-image-streams.json → roles/openshift_examples/files/examples/v1.0/xpaas-streams/jboss-image-streams.json


roles/openshift_examples/files/examples/xpaas-templates/amq62-basic.json → roles/openshift_examples/files/examples/v1.0/xpaas-templates/amq62-basic.json


roles/openshift_examples/files/examples/xpaas-templates/amq62-persistent-ssl.json → roles/openshift_examples/files/examples/v1.0/xpaas-templates/amq62-persistent-ssl.json


roles/openshift_examples/files/examples/xpaas-templates/amq62-persistent.json → roles/openshift_examples/files/examples/v1.0/xpaas-templates/amq62-persistent.json


+ 0 - 0
roles/openshift_examples/files/examples/xpaas-templates/amq62-ssl.json


Some files were not shown because too many files changed in this diff