Browse Source

Merge remote-tracking branch 'openshift/master'

Diego Castro 9 years ago
parent
commit
055921cd54
60 changed files with 1387 additions and 93 deletions
  1. Dockerfile (+17, -0)
  2. README.md (+7, -3)
  3. README_AEP.md (+240, -0)
  4. README_ANSIBLE_CONTAINER.md (+15, -0)
  5. README_OSE.md (+3, -3)
  6. README_vagrant.md (+26, -2)
  7. Vagrantfile (+34, -7)
  8. bin/openshift-ansible-bin.spec (+19, -1)
  9. bin/oscp (+2, -2)
  10. bin/ossh (+2, -2)
  11. docs/best_practices_guide.adoc (+1, -1)
  12. filter_plugins/oo_filters.py (+2, -2)
  13. inventory/aws/hosts/hosts (+1, -1)
  14. inventory/byo/hosts.example (+5, -2)
  15. inventory/gce/hosts/hosts (+1, -1)
  16. inventory/libvirt/hosts/hosts (+1, -1)
  17. inventory/openshift-ansible-inventory.spec (+27, -1)
  18. inventory/openstack/hosts/hosts (+1, -1)
  19. playbooks/adhoc/atomic_openshift_tutorial_reset.yml (+93, -0)
  20. playbooks/adhoc/create_pv/create_pv.yaml (+17, -0)
  21. playbooks/adhoc/zabbix_setup/create_user.yml (+31, -0)
  22. playbooks/aws/openshift-cluster/config.yml (+1, -1)
  23. playbooks/aws/openshift-cluster/vars.online.int.yml (+2, -2)
  24. playbooks/aws/openshift-cluster/vars.online.prod.yml (+2, -2)
  25. playbooks/aws/openshift-cluster/vars.online.stage.yml (+2, -2)
  26. playbooks/byo/openshift-cluster/config.yml (+1, -1)
  27. playbooks/byo/rhel_subscribe.yml (+12, -0)
  28. playbooks/byo/vagrant.yml (+4, -0)
  29. playbooks/common/openshift-node/config.yml (+2, -1)
  30. playbooks/gce/openshift-cluster/config.yml (+1, -1)
  31. playbooks/libvirt/openshift-cluster/config.yml (+1, -1)
  32. playbooks/openstack/openshift-cluster/config.yml (+1, -1)
  33. rel-eng/packages/openshift-ansible-bin (+1, -1)
  34. rel-eng/packages/openshift-ansible-inventory (+1, -1)
  35. roles/etcd/tasks/main.yml (+1, -1)
  36. roles/etcd_ca/tasks/main.yml (+1, -0)
  37. roles/fluentd_master/tasks/main.yml (+1, -1)
  38. roles/openshift_common/README.md (+4, -4)
  39. roles/openshift_common/defaults/main.yml (+1, -1)
  40. roles/openshift_common/tasks/main.yml (+1, -1)
  41. roles/openshift_manage_node/tasks/main.yml (+3, -4)
  42. roles/openshift_master/README.md (+1, -1)
  43. roles/openshift_master/tasks/main.yml (+4, -1)
  44. roles/openshift_master/templates/master.yaml.v1.j2 (+7, -1)
  45. roles/openshift_master/templates/v1_partials/oauthConfig.j2 (+1, -1)
  46. roles/openshift_node/README.md (+2, -2)
  47. roles/openshift_node/tasks/main.yml (+7, -1)
  48. roles/openshift_node/templates/node.yaml.v1.j2 (+3, -0)
  49. roles/openshift_registry/README.md (+1, -2)
  50. roles/openshift_router/README.md (+1, -2)
  51. roles/os_zabbix/library/get_drule.yml (+115, -0)
  52. roles/os_zabbix/library/test.yml (+44, -5)
  53. roles/os_zabbix/library/zbx_application.py (+135, -0)
  54. roles/os_zabbix/library/zbx_discoveryrule.py (+177, -0)
  55. roles/os_zabbix/library/zbx_host.py (+16, -15)
  56. roles/os_zabbix/library/zbx_item.py (+11, -1)
  57. roles/os_zabbix/library/zbx_itemprototype.py (+241, -0)
  58. roles/os_zabbix/library/zbx_template.py (+2, -1)
  59. roles/os_zabbix/library/zbx_user.py (+27, -4)
  60. roles/rhel_subscribe/tasks/enterprise.yml (+4, -0)

+ 17 - 0
Dockerfile

@@ -0,0 +1,17 @@
+FROM rhel7
+
+MAINTAINER Aaron Weitekamp <aweiteka@redhat.com>
+
+RUN yum -y install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+# Not sure if all of these packages are necessary
+# only git and ansible are known requirements
+RUN yum install -y --enablerepo rhel-7-server-extras-rpms net-tools bind-utils git ansible
+
+ADD ./  /opt/openshift-ansible/
+
+ENTRYPOINT ["/usr/bin/ansible-playbook"]
+
+CMD ["/opt/openshift-ansible/playbooks/byo/config.yml"]
+
+LABEL RUN docker run -it --rm --privileged --net=host -v ~/.ssh:/root/.ssh -v /etc/ansible:/etc/ansible --name NAME -e NAME=NAME -e IMAGE=IMAGE IMAGE
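
As a usage note: a minimal sketch of building and running this image, assuming a checkout of this repo as the build context and an inventory already present in /etc/ansible on the host (the `ansible` tag follows README_ANSIBLE_CONTAINER.md, added in this same commit):

```sh
# Build the image from the repo checkout
docker build --rm -t ansible .

# Run the default CMD playbook; the mounts mirror the LABEL RUN hint above
docker run -it --rm --privileged --net=host \
  -v ~/.ssh:/root/.ssh -v /etc/ansible:/etc/ansible ansible
```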

+ 7 - 3
README.md

@@ -1,6 +1,6 @@
-#openshift-ansible
+#Openshift and Atomic Enterprise Ansible
 
-This repo contains OpenShift Ansible code.
+This repo contains Ansible code for Openshift and Atomic Enterprise.
 
 ##Setup
 - Install base dependencies:
@@ -23,12 +23,13 @@ This repo contains OpenShift Ansible code.
 - Bring your own host deployments:
   - [OpenShift Enterprise](README_OSE.md)
   - [OpenShift Origin](README_origin.md)
+  - [Atomic Enterprise](README_AEP.md)
 
 - Build
   - [How to build the openshift-ansible rpms](BUILD.md)
 
 - Directory Structure:
-  - [bin/cluster](bin/cluster) - python script to easily create OpenShift 3 clusters
+  - [bin/cluster](bin/cluster) - python script to easily create clusters
   - [docs](docs) - Documentation for the project
   - [filter_plugins/](filter_plugins) - custom filters used to manipulate data in Ansible
   - [inventory/](inventory) - houses Ansible dynamic inventory scripts
@@ -36,6 +37,9 @@ This repo contains OpenShift Ansible code.
   - [roles/](roles) - shareable Ansible tasks
 
 ##Contributing
+- [Best Practices Guide](docs/best_practices_guide.adoc)
+- [Core Concepts](docs/core_concepts_guide.adoc)
+- [Style Guide](docs/style_guide.adoc)
 
 ###Feature Roadmap
 Our Feature Roadmap is available on the OpenShift Origin Infrastructure [Trello board](https://trello.com/b/nbkIrqKa/openshift-origin-infrastructure). All ansible items will be tagged with [installv3].

+ 240 - 0
README_AEP.md

@@ -0,0 +1,240 @@
+# Installing AEP from dev puddles using ansible
+
+* [Requirements](#requirements)
+* [Caveats](#caveats)
+* [Known Issues](#known-issues)
+* [Configuring the host inventory](#configuring-the-host-inventory)
+* [Creating the default variables for the hosts and host groups](#creating-the-default-variables-for-the-hosts-and-host-groups)
+* [Running the ansible playbooks](#running-the-ansible-playbooks)
+* [Post-ansible steps](#post-ansible-steps)
+* [Overriding detected ip addresses and hostnames](#overriding-detected-ip-addresses-and-hostnames)
+
+## Requirements
+* ansible
+  * Tested using ansible 1.9.1 and 1.9.2
+  * There is currently a known issue with ansible-1.9.0; you can downgrade to 1.8.4 on Fedora by installing one of the builds from Koji: http://koji.fedoraproject.org/koji/packageinfo?packageID=13842
+  * Available in Fedora channels
+  * Available for EL with EPEL and Optional channel
+* One or more RHEL 7.1 VMs
+* Either ssh key based auth for the root user or ssh key based auth for a user
+  with sudo access (no password)
+* A checkout of atomic-enterprise-ansible from https://github.com/projectatomic/atomic-enterprise-ansible/
+
+  ```sh
+  git clone https://github.com/projectatomic/atomic-enterprise-ansible.git
+  cd atomic-enterprise-ansible
+  ```
+
+## Caveats
+This ansible repo is currently under heavy revision for providing OSE support;
+the following items are highly likely to change before the OSE support is
+merged into the upstream repo:
+  * the current git branch for testing
+  * how the inventory file should be configured
+  * variables that need to be set
+  * bootstrapping steps
+  * other configuration steps
+
+## Known Issues
+* Host subscriptions are not configurable yet; the hosts need to be
+  pre-registered with subscription-manager or have the RHEL base repo
+  pre-configured. If using subscription-manager, the following commands will
+  disable all but the rhel-7-server, rhel-7-server-extras and
+  rhel-server7-ose-beta repos:
+```sh
+subscription-manager repos --disable="*"
+subscription-manager repos \
+--enable="rhel-7-server-rpms" \
+--enable="rhel-7-server-extras-rpms" \
+--enable="rhel-7-server-ose-3.0-rpms"
+```
+* Configuration of router is not automated yet
+* Configuration of docker-registry is not automated yet
+
+## Configuring the host inventory
+[Ansible docs](http://docs.ansible.com/intro_inventory.html)
+
+Example inventory file for configuring one master and two nodes for the test
+environment. This can be configured in the default inventory file
+(/etc/ansible/hosts), or using a custom file and passing the --inventory
+option to ansible-playbook.
+
+/etc/ansible/hosts:
+```ini
+# This is an example of a bring your own (byo) host inventory
+
+# Create an OSEv3 group that contains the masters and nodes groups
+[OSEv3:children]
+masters
+nodes
+
+# Set variables common for all OSEv3 hosts
+[OSEv3:vars]
+# SSH user, this user should allow ssh based auth without requiring a password
+ansible_ssh_user=root
+
+# If ansible_ssh_user is not root, ansible_sudo must be set to true
+#ansible_sudo=true
+
+# To deploy origin, change deployment_type to origin
+deployment_type=enterprise
+
+# Pre-release registry URL
+oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}
+
+# Pre-release additional repo
+openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel',
+'baseurl':
+'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os',
+'enabled': 1, 'gpgcheck': 0}]
+
+# Origin copr repo
+#openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name':
+'OpenShift Origin COPR', 'baseurl':
+'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/',
+'enabled': 1, 'gpgcheck': 1, gpgkey:
+'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
+
+# host group for masters
+[masters]
+ose3-master.example.com
+
+# host group for nodes
+[nodes]
+ose3-node[1:2].example.com
+```
+
+The hostnames above should resolve both from the hosts themselves and
+the host where ansible is running (if different).
+
+## Running the ansible playbooks
+From the atomic-enterprise-ansible checkout run:
+```sh
+ansible-playbook playbooks/byo/config.yml
+```
+**Note:** this assumes that the host inventory is /etc/ansible/hosts; if using a different
+inventory file, use the -i option for ansible-playbook.
+
+## Post-ansible steps
+#### Create the default router
+On the master host:
+```sh
+oadm router --create=true \
+  --credentials=/etc/openshift/master/openshift-router.kubeconfig \
+  --images='docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}'
+```
+
+#### Create the default docker-registry
+On the master host:
+```sh
+oadm registry --create=true \
+  --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
+  --images='docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}' \
+  --mount-host=/var/lib/openshift/docker-registry
+```
+
+## Overriding detected ip addresses and hostnames
+Some deployments will require that the user override the detected hostnames
+and ip addresses for the hosts. To see what the default values will be you can
+run the openshift_facts playbook:
+```sh
+ansible-playbook playbooks/byo/openshift_facts.yml
+```
+The output will be similar to:
+```
+ok: [10.3.9.45] => {
+    "result": {
+        "ansible_facts": {
+            "openshift": {
+                "common": {
+                    "hostname": "jdetiber-osev3-ansible-005dcfa6-27c6-463d-9b95-ef059579befd.os1.phx2.redhat.com",
+                    "ip": "172.16.4.79",
+                    "public_hostname": "jdetiber-osev3-ansible-005dcfa6-27c6-463d-9b95-ef059579befd.os1.phx2.redhat.com",
+                    "public_ip": "10.3.9.45",
+                    "use_openshift_sdn": true
+                },
+                "provider": {
+                  ... <snip> ...
+                }
+            }
+        },
+        "changed": false,
+        "invocation": {
+            "module_args": "",
+            "module_name": "openshift_facts"
+        }
+    }
+}
+ok: [10.3.9.42] => {
+    "result": {
+        "ansible_facts": {
+            "openshift": {
+                "common": {
+                    "hostname": "jdetiber-osev3-ansible-c6ae8cdc-ba0b-4a81-bb37-14549893f9d3.os1.phx2.redhat.com",
+                    "ip": "172.16.4.75",
+                    "public_hostname": "jdetiber-osev3-ansible-c6ae8cdc-ba0b-4a81-bb37-14549893f9d3.os1.phx2.redhat.com",
+                    "public_ip": "10.3.9.42",
+                    "use_openshift_sdn": true
+                },
+                "provider": {
+                  ...<snip>...
+                }
+            }
+        },
+        "changed": false,
+        "invocation": {
+            "module_args": "",
+            "module_name": "openshift_facts"
+        }
+    }
+}
+ok: [10.3.9.36] => {
+    "result": {
+        "ansible_facts": {
+            "openshift": {
+                "common": {
+                    "hostname": "jdetiber-osev3-ansible-bc39a3d3-cdd7-42fe-9c12-9fac9b0ec320.os1.phx2.redhat.com",
+                    "ip": "172.16.4.73",
+                    "public_hostname": "jdetiber-osev3-ansible-bc39a3d3-cdd7-42fe-9c12-9fac9b0ec320.os1.phx2.redhat.com",
+                    "public_ip": "10.3.9.36",
+                    "use_openshift_sdn": true
+                },
+                "provider": {
+                    ...<snip>...
+                }
+            }
+        },
+        "changed": false,
+        "invocation": {
+            "module_args": "",
+            "module_name": "openshift_facts"
+        }
+    }
+}
+```
+Now we want to check the detected common settings and verify that they are
+what we expect them to be (if not, we can override them).
+
+* hostname
+  * Should resolve to the internal ip from the instances themselves.
+  * openshift_hostname will override.
+* ip
+  * Should be the internal ip of the instance.
+  * openshift_ip will override.
+* public_hostname
+  * Should resolve to the external ip from hosts outside of the cloud provider.
+  * openshift_public_hostname will override.
+* public_ip
+  * Should be the externally accessible ip associated with the instance
+  * openshift_public_ip will override
+* use_openshift_sdn
+  * Should be true unless the cloud is GCE.
+  * openshift_use_openshift_sdn overrides
+
+To override the defaults, you can set the variables in your inventory:
+```
+...snip...
+[masters]
+ose3-master.example.com openshift_ip=1.1.1.1 openshift_hostname=ose3-master.example.com openshift_public_ip=2.2.2.2 openshift_public_hostname=ose3-master.public.example.com
+...snip...
+```

+ 15 - 0
README_ANSIBLE_CONTAINER.md

@@ -0,0 +1,15 @@
+# Running ansible in a docker container
+* Building ansible container:
+
+  ```sh
+  git clone https://github.com/openshift/openshift-ansible.git
+  cd openshift-ansible
+  docker build --rm -t ansible .
+  ```
+* Create /etc/ansible directory on the host machine and copy inventory file (hosts) into it.
+* Copy ssh public key of the host machine to master and nodes machines in the cluster.
+* Running the ansible container:
+
+  ```sh
+  docker run -it --rm --privileged --net=host -v ~/.ssh:/root/.ssh -v /etc/ansible:/etc/ansible ansible
+  ```

+ 3 - 3
README_OSE.md

@@ -80,7 +80,7 @@ ansible_ssh_user=root
 deployment_type=enterprise
 
 # Pre-release registry URL
-oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}
+oreg_url=rcm-img-docker01.build.eng.bos.redhat.com:5001/openshift3/ose-${component}:${version}
 
 # Pre-release additional repo
 openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel',
@@ -121,7 +121,7 @@ On the master host:
 ```sh
 oadm router --create=true \
   --credentials=/etc/openshift/master/openshift-router.kubeconfig \
-  --images='docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}'
+  --images='rcm-img-docker01.build.eng.bos.redhat.com:5001/openshift3/ose-${component}:${version}'
 ```
 
 #### Create the default docker-registry
@@ -129,7 +129,7 @@ On the master host:
 ```sh
 oadm registry --create=true \
   --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
-  --images='docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}' \
+  --images='rcm-img-docker01.build.eng.bos.redhat.com:5001/openshift3/ose-${component}:${version}' \
   --mount-host=/var/lib/openshift/docker-registry
 ```
 

+ 26 - 2
README_vagrant.md

@@ -2,9 +2,28 @@ Requirements
 ------------
 - vagrant (tested against version 1.7.2)
 - vagrant-hostmanager plugin (tested against version 1.5.0)
+- vagrant-registration plugin (only required for enterprise deployment type)
 - vagrant-libvirt (tested against version 0.0.26)
   - Only required if using libvirt instead of virtualbox
 
+For ``enterprise`` deployment types the base RHEL box has to be added to Vagrant:
+
+1. Download the RHEL7 vagrant image (libvirt or virtualbox) available from the [Red Hat Container Development Kit downloads in the customer portal](https://access.redhat.com/downloads/content/293/ver=1/rhel---7/1.0.1/x86_64/product-downloads)
+
+2. Install it into vagrant
+
+   ``$ vagrant box add --name rhel-7 /path/to/rhel-server-libvirt-7.1-3.x86_64.box``
+
+3. (optional, recommended) Increase the disk size of the image to 20GB. This is a two-step process (these instructions are specific to libvirt).
+
+    Resize the actual qcow2 image:
+
+	``$ qemu-img resize ~/.vagrant.d/boxes/rhel-7/0/libvirt/box.img 20GB``
+
+    Edit `~/.vagrant.d/boxes/rhel-7/0/libvirt/metadata.json` to reflect the new size.  A corrected metadata.json looks like this:
+
+	``{"provider": "libvirt", "format": "qcow2", "virtual_size": 20}``
+
 Usage
 -----
 ```
@@ -21,5 +40,10 @@ vagrant provision
 Environment Variables
 ---------------------
 The following environment variables can be overridden:
-- OPENSHIFT_DEPLOYMENT_TYPE (defaults to origin, choices: origin, enterprise, online)
-- OPENSHIFT_NUM_NODES (the number of nodes to create, defaults to 2)
+- ``OPENSHIFT_DEPLOYMENT_TYPE`` (defaults to origin, choices: origin, enterprise, online)
+- ``OPENSHIFT_NUM_NODES`` (the number of nodes to create, defaults to 2)
+
+For ``enterprise`` deployment types these env variables should also be specified:
+- ``rhel_subscription_user``: rhsm user
+- ``rhel_subscription_pass``: rhsm password
+- (optional) ``rhel_subscription_pool``: poolID to attach a specific subscription besides what auto-attach detects
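
Putting the pieces above together, a sketch of an enterprise bring-up under these assumptions (credentials are placeholders; the pool ID is optional, per the list above):

```sh
export OPENSHIFT_DEPLOYMENT_TYPE=enterprise
export rhel_subscription_user='rhsm-user'      # placeholder
export rhel_subscription_pass='rhsm-password'  # placeholder
#export rhel_subscription_pool='pool-id'       # optional; placeholder pool ID
vagrant up --provider=libvirt
```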

+ 34 - 7
Vagrantfile

@@ -15,6 +15,28 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   config.hostmanager.manage_host = true
   config.hostmanager.include_offline = true
   config.ssh.insert_key = false
+
+  if deployment_type === 'enterprise'
+    unless Vagrant.has_plugin?('vagrant-registration')
+      raise 'vagrant-registration-plugin is required for enterprise deployment'
+    end
+    username = ENV['rhel_subscription_user']
+    password = ENV['rhel_subscription_pass']
+    unless username and password
+      raise 'rhel_subscription_user and rhel_subscription_pass are required'
+    end
+    config.registration.username = username
+    config.registration.password = password
+    # FIXME this is temporary until vagrant/ansible registration modules
+    # are capable of handling specific subscription pools
+    if not ENV['rhel_subscription_pool'].nil?
+      config.vm.provision "shell" do |s|
+        s.inline = "subscription-manager attach --pool=$1 || true"
+        s.args = "#{ENV['rhel_subscription_pool']}"
+      end
+    end
+  end
+
   config.vm.provider "virtualbox" do |vbox, override|
     override.vm.box = "chef/centos-7.1"
     vbox.memory = 1024
@@ -28,10 +50,15 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
     libvirt.cpus = 2
     libvirt.memory = 1024
     libvirt.driver = 'kvm'
-    override.vm.box = "centos-7.1"
-    override.vm.box_url = "https://download.gluster.org/pub/gluster/purpleidea/vagrant/centos-7.1/centos-7.1.box"
-    override.vm.box_download_checksum = "b2a9f7421e04e73a5acad6fbaf4e9aba78b5aeabf4230eebacc9942e577c1e05"
-    override.vm.box_download_checksum_type = "sha256"
+    case deployment_type
+    when "enterprise"
+      override.vm.box = "rhel-7"
+    when "origin"
+      override.vm.box = "centos-7.1"
+      override.vm.box_url = "https://download.gluster.org/pub/gluster/purpleidea/vagrant/centos-7.1/centos-7.1.box"
+      override.vm.box_download_checksum = "b2a9f7421e04e73a5acad6fbaf4e9aba78b5aeabf4230eebacc9942e577c1e05"
+      override.vm.box_download_checksum_type = "sha256"
+    end
   end
 
   num_nodes.times do |n|
@@ -53,12 +80,12 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
       ansible.sudo = true
       ansible.groups = {
         "masters" => ["master"],
-        "nodes"   => ["node1", "node2"],
+        "nodes"   => ["master", "node1", "node2"],
       }
       ansible.extra_vars = {
-        openshift_deployment_type: "origin",
+        deployment_type: deployment_type,
       }
-      ansible.playbook = "playbooks/byo/config.yml"
+      ansible.playbook = "playbooks/byo/vagrant.yml"
     end
   end
 end

+ 19 - 1
bin/openshift-ansible-bin.spec

@@ -1,6 +1,6 @@
 Summary:       OpenShift Ansible Scripts for working with metadata hosts
 Name:          openshift-ansible-bin
-Version:       0.0.18
+Version:       0.0.19
 Release:       1%{?dist}
 License:       ASL 2.0
 URL:           https://github.com/openshift/openshift-ansible
@@ -42,6 +42,24 @@ cp -p openshift_ansible.conf.example %{buildroot}/etc/openshift_ansible/openshif
 %config(noreplace) /etc/openshift_ansible/
 
 %changelog
+* Thu Aug 20 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.19-1
+- Updated to show private ips when doing a list (kwoodson@redhat.com)
+- Updated to read config first and default to users home dir
+  (kwoodson@redhat.com)
+- Prevent Ansible from serializing tasks (lhuard@amadeus.com)
+- Infra node support (whearn@redhat.com)
+- Playbook updates for clustered etcd (jdetiber@redhat.com)
+- bin/cluster supports boto credentials as well as env variables
+  (jdetiber@redhat.com)
+- Merge pull request #291 from lhuard1A/profile
+  (twiest@users.noreply.github.com)
+- Add a generic mechanism for passing options (lhuard@amadeus.com)
+- Infrastructure - Validate AWS environment before calling playbooks
+  (jhonce@redhat.com)
+- Add a --profile option to spot which task takes more time
+  (lhuard@amadeus.com)
+- changed Openshift to OpenShift (twiest@redhat.com)
+
 * Tue Jun 09 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.18-1
 - Implement OpenStack provider (lhuard@amadeus.com)
 - * Update defaults and examples to track core concepts guide

+ 2 - 2
bin/oscp

@@ -167,7 +167,7 @@ class Oscp(object):
                     name = server_info['ec2_tag_Name']
                     ec2_id = server_info['ec2_id']
                     ip = server_info['ec2_ip_address']
-                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address}'.format(**server_info)
+                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address:<18} {ec2_private_ip_address}'.format(**server_info)
 
                 if limit:
                     print
@@ -180,7 +180,7 @@ class Oscp(object):
                     name = server_info['ec2_tag_Name']
                     ec2_id = server_info['ec2_id']
                     ip = server_info['ec2_ip_address']
-                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<5} {ec2_id:<15} {ec2_ip_address}'.format(**server_info)
+                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address:<18} {ec2_private_ip_address}'.format(**server_info)
 
     def scp(self):
         '''scp files to or from a specified host
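
To see what the widened columns produce, a quick sketch with made-up values (python2 assumed on PATH; names, IDs, and addresses are hypothetical):

```sh
python2 -c "print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address:<18} {ec2_private_ip_address}'.format(ec2_tag_Name='ex-master-01', ec2_tag_environment='int', ec2_id='i-0abc123', ec2_ip_address='10.3.9.45', ec2_private_ip_address='172.16.4.79')"
```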

+ 2 - 2
bin/ossh

@@ -156,7 +156,7 @@ class Ossh(object):
                     name = server_info['ec2_tag_Name']
                     ec2_id = server_info['ec2_id']
                     ip = server_info['ec2_ip_address']
-                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address}'.format(**server_info)
+                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address:<18} {ec2_private_ip_address}'.format(**server_info)
 
                 if limit:
                     print
@@ -169,7 +169,7 @@ class Ossh(object):
                     name = server_info['ec2_tag_Name']
                     ec2_id = server_info['ec2_id']
                     ip = server_info['ec2_ip_address']
-                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<5} {ec2_id:<15} {ec2_ip_address}'.format(**server_info)
+                    print '{ec2_tag_Name:<35} {ec2_tag_environment:<8} {ec2_id:<15} {ec2_ip_address:<18} {ec2_private_ip_address}'.format(**server_info)
 
     def ssh(self):
         '''SSH to a specified host

+ 1 - 1
docs/best_practices_guide.adoc

@@ -421,7 +421,7 @@ For consistency, role names SHOULD follow the above naming pattern. It is import
 Many times the `technology` portion of the pattern will line up with a package name. It is advised that whenever possible, the package name should be used.
 
 .Examples:
-* The role to configure an OpenShift Master is called `openshift_master`
+* The role to configure a master is called `openshift_master`
 * The role to configure OpenShift specific yum repositories is called `openshift_repos`
 
 === Filters

+ 2 - 2
filter_plugins/oo_filters.py

@@ -130,7 +130,7 @@ class FilterModule(object):
             rval.append("%s%s%s" % (item['key'], joiner, item['value']))
 
         return rval
-    
+
     @staticmethod
     def oo_combine_dict(data, in_joiner='=', out_joiner=' '):
         '''Take a dict in the form of { 'key': 'value', 'key': 'value' } and
@@ -139,7 +139,7 @@ class FilterModule(object):
         if not issubclass(type(data), dict):
             raise errors.AnsibleFilterError("|failed expects first param is a dict")
 
-        return out_joiner.join([ in_joiner.join([k, v]) for k, v in data.items() ])
+        return out_joiner.join([in_joiner.join([k, v]) for k, v in data.items()])
 
     @staticmethod
     def oo_ami_selector(data, image_name):
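
The whitespace cleanup above does not change behavior; a one-liner sketch of what oo_combine_dict produces with its default joiners (python2 assumed on PATH; the labels are made up):

```sh
python2 -c "print ' '.join(['='.join([k, v]) for k, v in {'region': 'infra', 'zone': 'default'}.items()])"
# -> region=infra zone=default (dict ordering may vary)
```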

+ 1 - 1
inventory/aws/hosts/hosts

@@ -1 +1 @@
-localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter=/usr/bin/python2
+localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2'

+ 5 - 2
inventory/byo/hosts.example

@@ -21,7 +21,7 @@ ansible_ssh_user=root
 deployment_type=enterprise
 
 # Pre-release registry URL
-#oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}
+#oreg_url=rcm-img-docker01.build.eng.bos.redhat.com:5001/openshift3/ose-${component}:${version}
 
 # Pre-release Dev puddle repo
 #openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
@@ -33,7 +33,7 @@ deployment_type=enterprise
 #openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
 
 # htpasswd auth
-#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
 
 # Allow all auth
 #openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]
@@ -60,6 +60,9 @@ deployment_type=enterprise
 # additional cors origins
 #osm_custom_cors_origins=['foo.example.com', 'bar.example.com'] 
 
+# default project node selector
+#osm_default_node_selector='region=primary'
+
 # host group for masters
 [masters]
 ose3-master[1:3]-ansible.test.example.com

+ 1 - 1
inventory/gce/hosts/hosts

@@ -1 +1 @@
-localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter=/usr/bin/python2
+localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2'

+ 1 - 1
inventory/libvirt/hosts/hosts

@@ -1 +1 @@
-localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter=/usr/bin/python2
+localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2'

+ 27 - 1
inventory/openshift-ansible-inventory.spec

@@ -1,6 +1,6 @@
 Summary:       OpenShift Ansible Inventories
 Name:          openshift-ansible-inventory
-Version:       0.0.8
+Version:       0.0.9
 Release:       1%{?dist}
 License:       ASL 2.0
 URL:           https://github.com/openshift/openshift-ansible
@@ -36,6 +36,32 @@ cp -p gce/hosts/gce.py %{buildroot}/usr/share/ansible/inventory/gce
 /usr/share/ansible/inventory/gce/gce.py*
 
 %changelog
+* Thu Aug 20 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.9-1
+- Merge pull request #408 from sdodson/docker-buildvm (bleanhar@redhat.com)
+- Merge pull request #428 from jtslear/issue-383
+  (twiest@users.noreply.github.com)
+- Merge pull request #407 from aveshagarwal/ae-ansible-merge-auth
+  (bleanhar@redhat.com)
+- Enable htpasswd by default in the example hosts file. (avagarwa@redhat.com)
+- Add support for setting default node selector (jdetiber@redhat.com)
+- Merge pull request #429 from spinolacastro/custom_cors (bleanhar@redhat.com)
+- Updated to read config first and default to users home dir
+  (kwoodson@redhat.com)
+- Fix Custom Cors (spinolacastro@gmail.com)
+- Revert "namespace the byo inventory so the group names aren't so generic"
+  (sdodson@redhat.com)
+- Removes hardcoded python2 (jtslear@gmail.com)
+- namespace the byo inventory so the group names aren't so generic
+  (admiller@redhat.com)
+- docker-buildvm-rhose is dead (sdodson@redhat.com)
+- Add support for setting routingConfig:subdomain (jdetiber@redhat.com)
+- Initial HA master (jdetiber@redhat.com)
+- Make it clear that the byo inventory file is just an example
+  (jdetiber@redhat.com)
+- Playbook updates for clustered etcd (jdetiber@redhat.com)
+- Update for RC2 changes (sdodson@redhat.com)
+- Templatize configs and 0.5.2 changes (jdetiber@redhat.com)
+
 * Tue Jun 09 2015 Kenny Woodson <kwoodson@redhat.com> 0.0.8-1
 - Added more verbosity when error happens.  Also fixed a bug.
   (kwoodson@redhat.com)

+ 1 - 1
inventory/openstack/hosts/hosts

@@ -1 +1 @@
-localhost ansible_sudo=no ansible_python_interpreter=/usr/bin/python2 connection=local
+localhost ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2' connection=local

+ 93 - 0
playbooks/adhoc/atomic_openshift_tutorial_reset.yml

@@ -0,0 +1,93 @@
+# This deletes *ALL* Docker images, and uninstalls OpenShift and
+# Atomic Enterprise RPMs.  It is primarily intended for use
+# with the tutorial as well as for developers to reset state.
+
+- hosts:
+    - OSEv3:children
+
+  sudo: yes
+
+  tasks:
+    - service: name={{ item }} state=stopped
+      with_items:
+        - openvswitch
+        - origin-master
+        - origin-node
+        - atomic-openshift-master
+        - atomic-openshift-node
+        - openshift-master
+        - openshift-node
+        - atomic-enterprise-master
+        - atomic-enterprise-node
+
+    - yum: name={{ item }} state=absent
+      with_items:
+        - openvswitch
+        - origin
+        - origin-master
+        - origin-node
+        - origin-sdn-ovs
+        - tuned-profiles-origin-node
+        - atomic-openshift
+        - atomic-openshift-master
+        - atomic-openshift-node
+        - atomic-openshift-sdn-ovs
+        - tuned-profiles-atomic-openshift-node
+        - atomic-enterprise
+        - atomic-enterprise-master
+        - atomic-enterprise-node
+        - atomic-enterprise-sdn-ovs
+        - tuned-profiles-atomic-enterprise-node
+        - openshift
+        - openshift-master
+        - openshift-node
+        - openshift-sdn-ovs
+        - tuned-profiles-openshift-node
+
+    - shell: systemctl reset-failed
+      changed_when: False
+
+    - shell: systemctl daemon-reload
+      changed_when: False
+
+    - shell: find /var/lib/origin/openshift.local.volumes -type d -exec umount {} \; 2>/dev/null || true
+      changed_when: False
+
+    - shell: find /var/lib/atomic-enterprise/openshift.local.volumes -type d -exec umount {} \; 2>/dev/null || true
+      changed_when: False
+
+    - shell: find /var/lib/openshift/openshift.local.volumes -type d -exec umount {} \; 2>/dev/null || true
+      changed_when: False
+
+    - shell: docker ps -a -q | xargs docker stop
+      changed_when: False
+
+    - shell: docker ps -a -q| xargs docker rm
+      changed_when: False
+
+    - shell:  docker images -q |xargs docker rmi
+      changed_when: False
+
+    - file: path={{ item }} state=absent
+      with_items:
+        - /etc/openshift-sdn
+        - /root/.kube
+        - /etc/origin
+        - /etc/atomic-enterprise
+        - /etc/openshift
+        - /var/lib/origin
+        - /var/lib/openshift
+        - /var/lib/atomic-enterprise
+        - /etc/sysconfig/origin-master
+        - /etc/sysconfig/origin-node
+        - /etc/sysconfig/atomic-openshift-master
+        - /etc/sysconfig/atomic-openshift-node
+        - /etc/sysconfig/openshift-master
+        - /etc/sysconfig/openshift-node
+        - /etc/sysconfig/atomic-enterprise-master
+        - /etc/sysconfig/atomic-enterprise-node
+
+    - user: name={{ item }} state=absent remove=yes
+      with_items:
+        - alice
+        - joe
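
A sketch of invoking this reset playbook, assuming the default /etc/ansible/hosts inventory defines the OSEv3 group targeted above:

```sh
# Destructive: stops services and removes the RPMs, images, and config listed above
ansible-playbook playbooks/adhoc/atomic_openshift_tutorial_reset.yml
```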

+ 17 - 0
playbooks/adhoc/create_pv/create_pv.yaml

@@ -50,6 +50,16 @@
 
   - debug: var=vol
 
+  - name: tag the vol with a name
+    ec2_tag: region={{ hostvars[oo_name]['ec2_region'] }} resource={{vol.volume_id}}
+    args:
+      tags:
+        Name: "pv-{{ hostvars[oo_name]['ec2_tag_Name'] }}"
+        env: "{{cli_environment}}"
+    register: voltags
+
+  - debug: var=voltags
+
 - name: Configure the drive
   gather_facts: no
   hosts: oo_master
@@ -118,6 +128,13 @@
       state: unmounted
       fstype: ext4
 
+  - name: remove from fstab
+    mount:
+      name: "{{ pv_mntdir }}"
+      src: "{{ cli_device_name }}"
+      state: absent
+      fstype: ext4
+
   - name: detach drive
     delegate_to: localhost
     ec2_vol:

+ 31 - 0
playbooks/adhoc/zabbix_setup/create_user.yml

@@ -0,0 +1,31 @@
+---
+# export PYTHONPATH='/usr/lib/python2.7/site-packages/:/home/kwoodson/git/openshift-tools'
+# ansible-playbook -e 'cli_password=zabbix' -e 'cli_new_password=new-zabbix' create_user.yml
+- hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars/template_heartbeat.yml
+  - vars/template_os_linux.yml
+  vars:
+    g_zserver: http://localhost/zabbix/api_jsonrpc.php
+    g_zuser: admin
+    g_zpassword: "{{ cli_password }}"
+  roles:
+  - ../../../roles/os_zabbix
+  post_tasks:
+  - zbx_user:
+      server: "{{ g_zserver }}"
+      user: "{{ g_zuser }}"
+      password: "{{ g_zpassword }}"
+      state: list
+    register: users
+
+  - debug: var=users
+
+  - name: Update zabbix creds for admin
+    zbx_user:
+      server: "{{ g_zserver }}"
+      user: "{{ g_zuser }}"
+      password: "{{ g_zpassword }}"
+      alias: Admin
+      passwd: "{{ cli_new_password | default(g_zpassword, true) }}"

+ 1 - 1
playbooks/aws/openshift-cluster/config.yml

@@ -17,7 +17,7 @@
     g_sudo: "{{ hostvars.localhost.g_sudo_tmp }}"
     g_nodeonmaster: true
     openshift_cluster_id: "{{ cluster_id }}"
-    openshift_debug_level: 4
+    openshift_debug_level: 2
     openshift_deployment_type: "{{ deployment_type }}"
     openshift_hostname: "{{ ec2_private_ip_address }}"
     openshift_public_hostname: "{{ ec2_ip_address }}"

+ 2 - 2
playbooks/aws/openshift-cluster/vars.online.int.yml

@@ -3,9 +3,9 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: m4.large
+ec2_master_instance_type: t2.small
 ec2_master_security_groups: [ 'integration', 'integration-master' ]
-ec2_infra_instance_type: m4.large
+ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'integration', 'integration-infra' ]
 ec2_node_instance_type: m4.large
 ec2_node_security_groups: [ 'integration', 'integration-node' ]

+ 2 - 2
playbooks/aws/openshift-cluster/vars.online.prod.yml

@@ -3,9 +3,9 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: m4.large
+ec2_master_instance_type: t2.small
 ec2_master_security_groups: [ 'production', 'production-master' ]
-ec2_infra_instance_type: m4.large
+ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'production', 'production-infra' ]
 ec2_node_instance_type: m4.large
 ec2_node_security_groups: [ 'production', 'production-node' ]

+ 2 - 2
playbooks/aws/openshift-cluster/vars.online.stage.yml

@@ -3,9 +3,9 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: m4.large
+ec2_master_instance_type: t2.small
 ec2_master_security_groups: [ 'stage', 'stage-master' ]
-ec2_infra_instance_type: m4.large
+ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'stage', 'stage-infra' ]
 ec2_node_instance_type: m4.large
 ec2_node_security_groups: [ 'stage', 'stage-node' ]

+ 1 - 1
playbooks/byo/openshift-cluster/config.yml

@@ -5,5 +5,5 @@
     g_masters_group: "{{ 'masters' }}"
     g_nodes_group: "{{ 'nodes' }}"
     openshift_cluster_id: "{{ cluster_id | default('default') }}"
-    openshift_debug_level: 4
+    openshift_debug_level: 2
     openshift_deployment_type: "{{ deployment_type }}"

+ 12 - 0
playbooks/byo/rhel_subscribe.yml

@@ -0,0 +1,12 @@
+---
+- hosts: all
+  vars:
+    openshift_deployment_type: "{{ deployment_type }}"
+  roles:
+  - role: rhel_subscribe
+    when: deployment_type == "enterprise" and
+          ansible_distribution == "RedHat" and
+          lookup('oo_option', 'rhel_skip_subscription') | default(rhsub_skip, True) |
+          default('no', True) | lower in ['no', 'false']
+  - openshift_repos
+  - os_update_latest
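
A minimal sketch of running this playbook on its own; deployment_type must be supplied, since the play derives openshift_deployment_type from it, and the rhel_subscribe role is expected to pick up subscription credentials from its own variables (not shown here):

```sh
ansible-playbook playbooks/byo/rhel_subscribe.yml -e deployment_type=enterprise
```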

+ 4 - 0
playbooks/byo/vagrant.yml

@@ -0,0 +1,4 @@
+---
+- include: rhel_subscribe.yml
+
+- include: config.yml

+ 2 - 1
playbooks/common/openshift-node/config.yml

@@ -128,9 +128,10 @@
   vars:
     openshift_nodes: "{{ hostvars
                          | oo_select_keys(groups['oo_nodes_to_config'])
-                         | oo_collect('openshift.common.hostname') }}" 
+                         | oo_collect('openshift.common.hostname') }}"
     openshift_unscheduleable_nodes: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] | default([]))
                                       | oo_collect('openshift.common.hostname', {'openshift_scheduleable': False}) }}"
+    openshift_node_vars: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config']) }}"
   pre_tasks:
   - set_fact:
       openshift_scheduleable_nodes: "{{ hostvars

+ 1 - 1
playbooks/gce/openshift-cluster/config.yml

@@ -19,6 +19,6 @@
     g_ssh_user: "{{ hostvars.localhost.g_ssh_user_tmp }}"
     g_sudo: "{{ hostvars.localhost.g_sudo_tmp }}"
     openshift_cluster_id: "{{ cluster_id }}"
-    openshift_debug_level: 4
+    openshift_debug_level: 2
     openshift_deployment_type: "{{ deployment_type }}"
     openshift_hostname: "{{ gce_private_ip }}"

+ 1 - 1
playbooks/libvirt/openshift-cluster/config.yml

@@ -20,5 +20,5 @@
     g_ssh_user: "{{ hostvars.localhost.g_ssh_user_tmp }}"
     g_sudo: "{{ hostvars.localhost.g_sudo_tmp }}"
     openshift_cluster_id: "{{ cluster_id }}"
-    openshift_debug_level: 4
+    openshift_debug_level: 2
     openshift_deployment_type: "{{ deployment_type }}"

+ 1 - 1
playbooks/openstack/openshift-cluster/config.yml

@@ -15,6 +15,6 @@
     g_ssh_user: "{{ hostvars.localhost.g_ssh_user_tmp }}"
     g_sudo: "{{ hostvars.localhost.g_sudo_tmp }}"
     openshift_cluster_id: "{{ cluster_id }}"
-    openshift_debug_level: 4
+    openshift_debug_level: 2
     openshift_deployment_type: "{{ deployment_type }}"
     openshift_hostname: "{{ ansible_default_ipv4.address }}"

+ 1 - 1
rel-eng/packages/openshift-ansible-bin

@@ -1 +1 @@
-0.0.18-1 bin/
+0.0.19-1 bin/

+ 1 - 1
rel-eng/packages/openshift-ansible-inventory

@@ -1 +1 @@
-0.0.8-1 inventory/
+0.0.9-1 inventory/

+ 1 - 1
roles/etcd/tasks/main.yml

@@ -1,6 +1,6 @@
 ---
 - name: Install etcd
-  yum: pkg=etcd state=present
+  yum: pkg=etcd-2.* state=present
 
 - name: Validate permissions on the config dir
   file:

+ 1 - 0
roles/etcd_ca/tasks/main.yml

@@ -37,6 +37,7 @@
     openssl req -config openssl.cnf -newkey rsa:4096
     -keyout ca.key -new -out ca.crt -x509 -extensions etcd_v3_ca_self
     -batch -nodes -subj /CN=etcd-signer@{{ ansible_date_time.epoch }}
+    -days 365
   args:
     chdir: "{{ etcd_ca_dir }}"
     creates: "{{ etcd_ca_dir }}/ca.crt"

+ 1 - 1
roles/fluentd_master/tasks/main.yml

@@ -40,7 +40,7 @@
     mode: 0444
 
 - name: "Pause before restarting td-agent and openshift-master, depending on the number of nodes."
-  pause: seconds={{ num_nodes|int * 5 }}
+  pause: seconds={{ ( num_nodes|int < 3 ) | ternary(15, (num_nodes|int * 5)) }}
 
 - name: ensure td-agent is running
   service:

+ 4 - 4
roles/openshift_common/README.md

@@ -1,7 +1,7 @@
-OpenShift Common
-================
+OpenShift/Atomic Enterprise Common
+===================================
 
-OpenShift common installation and configuration tasks.
+OpenShift/Atomic Enterprise common installation and configuration tasks.
 
 Requirements
 ------------
@@ -15,7 +15,7 @@ Role Variables
 | Name                      | Default value     |                                             |
 |---------------------------|-------------------|---------------------------------------------|
 | openshift_cluster_id      | default           | Cluster name if multiple OpenShift clusters |
-| openshift_debug_level     | 0                 | Global openshift debug log verbosity        |
+| openshift_debug_level     | 2                 | Global openshift debug log verbosity        |
 | openshift_hostname        | UNDEF             | Internal hostname to use for this host (this value will set the hostname on the system) |
 | openshift_ip              | UNDEF             | Internal IP address to use for this host    |
 | openshift_public_hostname | UNDEF             | Public hostname to use for this host        |

+ 1 - 1
roles/openshift_common/defaults/main.yml

@@ -1,3 +1,3 @@
 ---
 openshift_cluster_id: 'default'
-openshift_debug_level: 0
+openshift_debug_level: 2

+ 1 - 1
roles/openshift_common/tasks/main.yml

@@ -4,7 +4,7 @@
     role: common
     local_facts:
       cluster_id: "{{ openshift_cluster_id | default('default') }}"
-      debug_level: "{{ openshift_debug_level | default(0) }}"
+      debug_level: "{{ openshift_debug_level | default(2) }}"
       hostname: "{{ openshift_hostname | default(None) }}"
       ip: "{{ openshift_ip | default(None) }}"
       public_hostname: "{{ openshift_public_hostname | default(None) }}"

+ 3 - 4
roles/openshift_manage_node/tasks/main.yml

@@ -19,8 +19,7 @@
 
 - name: Label nodes
   command: >
-    {{ openshift.common.client_binary }} label --overwrite node {{ item }} {{ hostvars[item]['openshift_node_labels'] | oo_combine_dict  }}
+    {{ openshift.common.client_binary }} label --overwrite node {{ item.openshift.common.hostname }} {{ item.openshift.node.labels | oo_combine_dict  }}
   with_items:
-    -  "{{ openshift_nodes }}"
-  when: 
-    "'openshift_node_labels' in hostvars[item]"
+    -  "{{ openshift_node_vars }}"
+  when: "'labels' in item.openshift.node and item.openshift.node.labels != {}"

+ 1 - 1
roles/openshift_master/README.md

@@ -28,7 +28,7 @@ From this role:
 From openshift_common:
 | Name                          | Default Value  |                                        |
 |-------------------------------|----------------|----------------------------------------|
-| openshift_debug_level         | 0              | Global openshift debug log verbosity   |
+| openshift_debug_level         | 2              | Global openshift debug log verbosity   |
 | openshift_public_ip           | UNDEF          | Public IP address to use for this host |
 | openshift_hostname            | UNDEF          | hostname to use for this instance      |
 

+ 4 - 1
roles/openshift_master/tasks/main.yml

@@ -55,13 +55,16 @@
       sdn_host_subnet_length: "{{ osm_host_subnet_length | default(None) }}"
       default_subdomain: "{{ osm_default_subdomain | default(None) }}"
       custom_cors_origins: "{{ osm_custom_cors_origins | default(None) }}"
+      default_node_selector: "{{ osm_default_node_selector | default(None) }}"
+      api_server_args: "{{ osm_api_server_args | default(None) }}"
+      controller_args: "{{ osm_controller_args | default(None) }}"
 
 # TODO: These values need to be configurable
 - name: Set dns OpenShift facts
   openshift_facts:
     role: dns
     local_facts:
-      ip: "{{ openshift.common.ip }}"
+      ip: "{{ openshift_master_cluster_vip | default(openshift.common.ip, true) | default(None) }}"
       domain: cluster.local
   when: openshift.master.embedded_dns
 

+ 7 - 1
roles/openshift_master/templates/master.yaml.v1.j2

@@ -2,6 +2,9 @@ apiLevels:
 - v1beta3
 - v1
 apiVersion: v1
+{% if api_server_args is defined and api_server_args %}
+apiServerArguments: {{ api_server_args }}
+{% endif %}
 assetConfig:
   logoutURL: ""
   masterPublicURL: {{ openshift.master.public_api_url }}
@@ -13,6 +16,9 @@ assetConfig:
     keyFile: master.server.key
     maxRequestsInFlight: 0
     requestTimeoutSeconds: 0
+{% if controller_args is defined and controller_args %}
+controllerArguments: {{ controller_args }}
+{% endif %}
 corsAllowedOrigins:
 {% for origin in ['127.0.0.1', 'localhost', openshift.common.hostname, openshift.common.ip, openshift.common.public_hostname, openshift.common.public_ip] %}
   - {{ origin }}
@@ -95,7 +101,7 @@ policyConfig:
   openshiftSharedResourcesNamespace: openshift
 {# TODO: Allow users to override projectConfig items #}
 projectConfig:
-  defaultNodeSelector: ""
+  defaultNodeSelector: "{{ openshift.master.default_node_selector | default("") }}"
   projectRequestMessage: ""
   projectRequestTemplate: ""
   securityAllocator:

+ 1 - 1
roles/openshift_master/templates/v1_partials/oauthConfig.j2

@@ -7,7 +7,7 @@
       url: {{ identity_provider.url }}
 {% for key in ('ca', 'certFile', 'keyFile') %}
 {% if key in identity_provider %}
-      {{ key }}: {{ identity_provider[key] }}"
+      {{ key }}: "{{ identity_provider[key] }}"
 {% endif %}
 {% endfor %}
 {% elif identity_provider.kind == 'LDAPPasswordIdentityProvider' %}

+ 2 - 2
roles/openshift_node/README.md

@@ -20,9 +20,9 @@ From this role:
 | oreg_url                                 | UNDEF (Optional)      | Default docker registry to use |
 
 From openshift_common:
-| Name                          |  Default Value      |                     | 
+| Name                          |  Default Value      |                     |
 |-------------------------------|---------------------|---------------------|
-| openshift_debug_level         | 0                   | Global openshift debug log verbosity |
+| openshift_debug_level         | 2                   | Global openshift debug log verbosity |
 | openshift_public_ip           | UNDEF (Required)    | Public IP address to use for this host |
 | openshift_hostname            | UNDEF (Required)    | hostname to use for this instance |
 

+ 7 - 1
roles/openshift_node/tasks/main.yml

@@ -6,6 +6,9 @@
 - fail:
     msg: This role requires that osn_cluster_dns_ip is set
   when: osn_cluster_dns_ip is not defined or not osn_cluster_dns_ip
+- fail:
+    msg: "SELinux is disabled, This deployment type requires that SELinux is enabled."
+  when: (not ansible_selinux or ansible_selinux.status != 'enabled') and deployment_type in ['enterprise', 'online']
 
 - name: Install OpenShift Node package
   yum: pkg=openshift-node state=present
@@ -33,6 +36,7 @@
       registry_url: "{{ oreg_url | default(none) }}"
       debug_level: "{{ openshift_node_debug_level | default(openshift.common.debug_level) }}"
       portal_net: "{{ openshift_master_portal_net | default(None) }}"
+      kubelet_args: "{{ openshift_node_kubelet_args | default(None) }}"
 
 # TODO: add the validate parameter when there is a validation command to run
 - name: Create the Node config
@@ -63,11 +67,13 @@
   lineinfile:
     dest: /etc/sysconfig/docker
     regexp: '^OPTIONS=.*'
-    line: "OPTIONS='--insecure-registry={{ openshift.node.portal_net }} --selinux-enabled'"
+    line: "OPTIONS='--insecure-registry={{ openshift.node.portal_net }} \
+{% if ansible_selinux and ansible_selinux.status == '''enabled''' %}--selinux-enabled{% endif %}'"
   when: docker_check.stat.isreg
 
 - name: Allow NFS access for VMs
   seboolean: name=virt_use_nfs state=yes persistent=yes
+  when: ansible_selinux and ansible_selinux.status == "enabled"
 
 - name: Start and enable openshift-node
   service: name=openshift-node enabled=yes state=started

+ 3 - 0
roles/openshift_node/templates/node.yaml.v1.j2

@@ -8,6 +8,9 @@ imageConfig:
   format: {{ openshift.node.registry_url }}
   latest: false
 kind: NodeConfig
+{% if openshift.node.kubelet_args is defined and openshift.node.kubelet_args %}
+kubeletArguments: {{ openshift.node.kubelet_args | to_json }}
+{% endif %}
 masterKubeConfig: system:node:{{ openshift.common.hostname }}.kubeconfig
 networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
 nodeName: {{ openshift.common.hostname }}

+ 1 - 2
roles/openshift_registry/README.md

@@ -21,7 +21,7 @@ From openshift_common:
 
 | Name                  | Default value |                                      |
 |-----------------------|---------------|--------------------------------------|
-| openshift_debug_level | 0             | Global openshift debug log verbosity |
+| openshift_debug_level | 2             | Global openshift debug log verbosity |
 
 
 Dependencies
@@ -41,4 +41,3 @@ Author Information
 ------------------
 
 Red Hat openshift@redhat.com
-

+ 1 - 2
roles/openshift_router/README.md

@@ -19,7 +19,7 @@ From this role:
 From openshift_common:
 | Name                  | Default value |                                      |
 |-----------------------|---------------|--------------------------------------|
-| openshift_debug_level | 0             | Global openshift debug log verbosity |
+| openshift_debug_level | 2             | Global openshift debug log verbosity |
 
 Dependencies
 ------------
@@ -38,4 +38,3 @@ Author Information
 ------------------
 
 Red Hat openshift@redhat.com
-

+ 115 - 0
roles/os_zabbix/library/get_drule.yml

@@ -0,0 +1,115 @@
+---
+# This is a test playbook to create one of each of the zabbix ansible modules.
+# ensure that the zbxapi module is installed
+# ansible-playbook test.yml
+- name: Test zabbix ansible module
+  hosts: localhost
+  gather_facts: no
+  vars:
+#zbx_server: https://localhost/zabbix/api_jsonrpc.php
+#zbx_user: Admin
+#zbx_password: zabbix
+
+  pre_tasks:
+  - name: Template Discovery rules
+    zbx_template:
+      server: "{{ zbx_server }}"
+      user: "{{ zbx_user }}"
+      password: "{{ zbx_password }}"
+      name: 'Template App HaProxy'
+      state: list
+    register: template_output
+
+  - debug: var=template_output
+
+  - name: Discovery rules
+    zbx_discovery_rule:
+      server: "{{ zbx_server }}"
+      user: "{{ zbx_user }}"
+      password: "{{ zbx_password }}"
+      name: 'haproxy.discovery sender'
+      state: list
+    register: drule
+
+  - debug: var=drule
+
+#  - name: Create an application
+#    zbx_application:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      name: 'Test App'
+#      template_name: "test template"
+#    register: item_output
+#
+#  - name: Create an item
+#    zbx_item:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      name: 'test item'
+#      key: 'kenny.item.1'
+#      applications:
+#      - 'Test App'
+#      template_name: "test template"
+#    register: item_output
+#
+#  - debug: var=item_output
+#
+#  - name: Create an trigger
+#    zbx_trigger:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      expression: '{test template:kenny.item.1.last()}>2'
+#      description: 'Kenny desc'
+#    register: trigger_output
+#
+#  - debug: var=trigger_output
+#
+#  - name: Create a hostgroup
+#    zbx_hostgroup:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      name: 'kenny hostgroup'
+#    register: hostgroup_output
+#
+#  - debug: var=hostgroup_output
+#
+#  - name: Create a host
+#    zbx_host:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      name: 'kenny host'
+#      template_names:
+#      - test template
+#      hostgroup_names:
+#      - kenny hostgroup
+#    register: host_output
+#
+#  - debug: var=host_output
+#
+#  - name: Create a usergroup
+#    zbx_usergroup:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      name: kenny usergroup
+#      rights:
+#      - 'kenny hostgroup': rw
+#    register: usergroup_output
+#
+#  - debug: var=usergroup_output
+#
+#  - name: Create a user
+#    zbx_user:
+#      server: "{{ zbx_server }}"
+#      user: "{{ zbx_user }}"
+#      password: "{{ zbx_password }}"
+#      alias: kwoodson
+#      state: list
+#    register: user_output
+#
+#  - debug: var=user_output

+ 44 - 5
roles/os_zabbix/library/test.yml

@@ -6,7 +6,7 @@
   hosts: localhost
   gather_facts: no
   vars:
-    zbx_server: http://localhost/zabbix/api_jsonrpc.php
+    zbx_server: http://localhost:8080/zabbix/api_jsonrpc.php
     zbx_user: Admin
     zbx_password: zabbix
 
@@ -21,6 +21,41 @@
 
   - debug: var=template_output
 
+  - name: Create a discoveryrule
+    zbx_discoveryrule:
+      server: "{{ zbx_server }}"
+      user: "{{ zbx_user }}"
+      password: "{{ zbx_password }}"
+      name: test discoverule
+      key: test_listener
+      template_name: test template
+      lifetime: 14
+    register: discoveryrule
+
+  - debug: var=discoveryrule
+
+  - name: Create an itemprototype
+    zbx_itemprototype:
+      server: "{{ zbx_server }}"
+      user: "{{ zbx_user }}"
+      password: "{{ zbx_password }}"
+      name: 'Test itemprototype on {#TEST_LISTENER}'
+      key: 'test[{#TEST_LISTENER}]'
+      template_name: test template
+      discoveryrule_name: test discoverule
+    register: itemproto
+
+  - debug: var=itemproto
+
+  - name: Create an application
+    zbx_application:
+      server: "{{ zbx_server }}"
+      user: "{{ zbx_user }}"
+      password: "{{ zbx_password }}"
+      name: 'Test App'
+      template_name: "test template"
+    register: item_output
+
   - name: Create an item
     zbx_item:
       server: "{{ zbx_server }}"
@@ -28,7 +63,9 @@
       password: "{{ zbx_password }}"
       name: 'test item'
       key: 'kenny.item.1'
-      template_name: "{{ template_output.results[0].host }}"
+      applications:
+      - 'Test App'
+      template_name: "test template"
     register: item_output
 
   - debug: var=item_output
@@ -39,7 +76,7 @@
       user: "{{ zbx_user }}"
       password: "{{ zbx_password }}"
       expression: '{test template:kenny.item.1.last()}>2'
-      desc: 'Kenny desc'
+      description: 'Kenny desc'
     register: trigger_output
 
   - debug: var=trigger_output
@@ -60,8 +97,10 @@
       user: "{{ zbx_user }}"
       password: "{{ zbx_password }}"
       name: 'kenny host'
-      hostgroups:
-      -  'kenny hostgroup'
+      template_names:
+      - test template
+      hostgroup_names:
+      - kenny hostgroup
     register: host_output
 
   - debug: var=host_output
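
Per the header comments in get_drule.yml above, these test playbooks are run directly; a sketch, assuming the zbxapi library is importable and a Zabbix API is answering at the zbx_server URL (the PYTHONPATH value is an example):

```sh
export PYTHONPATH='/usr/lib/python2.7/site-packages/'
ansible-playbook roles/os_zabbix/library/test.yml
```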

+ 135 - 0
roles/os_zabbix/library/zbx_application.py

@@ -0,0 +1,135 @@
+#!/usr/bin/env python
+'''
+Ansible module for application
+'''
+# vim: expandtab:tabstop=4:shiftwidth=4
+#
+#   Zabbix application ansible module
+#
+#
+#   Copyright 2015 Red Hat Inc.
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+
+# This is in place because the zabbix modules all look similar to one another.
+# These need duplicate code as their behavior is very similar
+# but different for each zabbix class.
+# pylint: disable=duplicate-code
+
+# pylint: disable=import-error
+from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
+
+def exists(content, key='result'):
+    ''' Check that key exists in content and that content[key] is non-empty
+    '''
+    if not content.has_key(key):
+        return False
+
+    if not content[key]:
+        return False
+
+    return True
+
+def get_template_ids(zapi, template_names):
+    '''
+    get related templates
+    '''
+    template_ids = []
+    # Fetch templates by name
+    for template_name in template_names:
+        content = zapi.get_content('template', 'get', {'search': {'host': template_name}})
+        if content.has_key('result') and content['result']:
+            template_ids.append(content['result'][0]['templateid'])
+    return template_ids
+
+def main():
+    ''' Ansible module for application
+    '''
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
+            user=dict(default=None, type='str'),
+            password=dict(default=None, type='str'),
+            name=dict(default=None, type='str'),
+            template_name=dict(default=None, type='list'),
+            debug=dict(default=False, type='bool'),
+            state=dict(default='present', type='str'),
+        ),
+        #supports_check_mode=True
+    )
+
+    # module.params always contains these keys (default None), so dict.get()
+    # would never fall back; use `or` to fall back to the environment instead
+    user = module.params['user'] or os.environ.get('ZABBIX_USER')
+    passwd = module.params['password'] or os.environ.get('ZABBIX_PASSWORD')
+
+    zapi = ZabbixAPI(ZabbixConnection(module.params['server'], user, passwd, module.params['debug']))
+
+    #Set the instance and the application for the rest of the calls
+    zbx_class_name = 'application'
+    idname = 'applicationid'
+    aname = module.params['name']
+    state = module.params['state']
+    # get an applicationid, see if it exists
+    content = zapi.get_content(zbx_class_name,
+                               'get',
+                               {'search': {'name': aname},
+                                'selectHost': 'hostid',
+                               })
+    if state == 'list':
+        module.exit_json(changed=False, results=content['result'], state="list")
+
+    if state == 'absent':
+        if not exists(content):
+            module.exit_json(changed=False, state="absent")
+
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0][idname]])
+        module.exit_json(changed=True, results=content['result'], state="absent")
+
+    if state == 'present':
+        params = {'hostid': get_template_ids(zapi, module.params['template_name'])[0],
+                  'name': aname,
+                 }
+        if not exists(content):
+            # if we didn't find it, create it
+            content = zapi.get_content(zbx_class_name, 'create', params)
+            module.exit_json(changed=True, results=content['result'], state='present')
+        # already exists, we need to update it
+        # let's compare properties
+        differences = {}
+        zab_results = content['result'][0]
+        for key, value in params.items():
+            if key == 'templates' and zab_results.has_key('parentTemplates'):
+                if zab_results['parentTemplates'] != value:
+                    differences[key] = value
+            elif zab_results[key] != str(value) and zab_results[key] != value:
+                differences[key] = value
+
+        if not differences:
+            module.exit_json(changed=False, results=content['result'], state="present")
+
+        # We have differences and need to update
+        differences[idname] = zab_results[idname]
+        content = zapi.get_content(zbx_class_name, 'update', differences)
+        module.exit_json(changed=True, results=content['result'], state="present")
+
+    module.exit_json(failed=True,
+                     changed=False,
+                     results='Unknown state passed. %s' % state,
+                     state="unknown")
+
+# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
+# import module snippets.  These are required
+from ansible.module_utils.basic import *
+
+main()
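
For reference, a minimal usage sketch for the new module (not part of the diff, mirroring the parameters exercised in test.yml above; the credentials are assumptions):

  - name: Ensure an application exists on a template
    zbx_application:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      name: 'Test App'
      template_name:
      - test template
    register: app_output

Note that template_name is declared type='list' despite the singular name; only the first matching template's id is used as the hostid.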

+ 177 - 0
roles/os_zabbix/library/zbx_discoveryrule.py

@@ -0,0 +1,177 @@
+#!/usr/bin/env python
+'''
+Zabbix discovery rule ansible module
+'''
+# vim: expandtab:tabstop=4:shiftwidth=4
+#
+#   Copyright 2015 Red Hat Inc.
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+
+# This is in place because the zabbix modules all look similar to one another:
+# they need near-duplicate code, as their behavior is very similar
+# but differs slightly for each zabbix class.
+# pylint: disable=duplicate-code
+
+# pylint: disable=import-error
+from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
+
+def exists(content, key='result'):
+    ''' Check that key exists in content and that content[key] is non-empty
+    '''
+    if not content.has_key(key):
+        return False
+
+    if not content[key]:
+        return False
+
+    return True
+
+def get_template(zapi, template_name):
+    '''get a template by name
+    '''
+    content = zapi.get_content('template',
+                               'get',
+                               {'search': {'host': template_name},
+                                'output': 'extend',
+                                'selectInterfaces': 'interfaceid',
+                               })
+    if not content['result']:
+        return None
+    return content['result'][0]
+
+def get_type(vtype):
+    '''
+    Determine which type of discovery rule this is
+    '''
+    _types = {'agent': 0,
+              'SNMPv1': 1,
+              'trapper': 2,
+              'simple': 3,
+              'SNMPv2': 4,
+              'internal': 5,
+              'SNMPv3': 6,
+              'active': 7,
+              'external': 10,
+              'database monitor': 11,
+              'ipmi': 12,
+              'ssh': 13,
+              'telnet': 14,
+              'JMX': 16,
+             }
+
+    for typ in _types.keys():
+        if vtype in typ or vtype == typ:
+            _vtype = _types[typ]
+            break
+    else:
+        _vtype = 2
+
+    return _vtype
+
+def main():
+    '''
+    Ansible module for zabbix discovery rules
+    '''
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
+            user=dict(default=os.environ['ZABBIX_USER'], type='str'),
+            password=dict(default=os.environ['ZABBIX_PASSWORD'], type='str'),
+            name=dict(default=None, type='str'),
+            key=dict(default=None, type='str'),
+            interfaceid=dict(default=None, type='int'),
+            ztype=dict(default='trapper', type='str'),
+            delay=dict(default=60, type='int'),
+            lifetime=dict(default=30, type='int'),
+            template_name=dict(default=[], type='list'),
+            debug=dict(default=False, type='bool'),
+            state=dict(default='present', type='str'),
+        ),
+        #supports_check_mode=True
+    )
+
+    user = module.params['user']
+    passwd = module.params['password']
+
+    zapi = ZabbixAPI(ZabbixConnection(module.params['server'], user, passwd, module.params['debug']))
+
+    #Set the instance and the template for the rest of the calls
+    zbx_class_name = 'discoveryrule'
+    idname = "itemid"
+    dname = module.params['name']
+    state = module.params['state']
+
+    # selectInterfaces doesn't appear to be working but is needed.
+    content = zapi.get_content(zbx_class_name,
+                               'get',
+                               {'search': {'name': dname},
+                                #'selectDServices': 'extend',
+                                #'selectDChecks': 'extend',
+                                #'selectDhosts': 'dhostid',
+                               })
+    if state == 'list':
+        module.exit_json(changed=False, results=content['result'], state="list")
+
+    if state == 'absent':
+        if not exists(content):
+            module.exit_json(changed=False, state="absent")
+
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0][idname]])
+        module.exit_json(changed=True, results=content['result'], state="absent")
+
+    if state == 'present':
+        template = get_template(zapi, module.params['template_name'])
+        params = {'name': dname,
+                  'key_':  module.params['key'],
+                  'hostid':  template['templateid'],
+                  'interfaceid': module.params['interfaceid'],
+                  'lifetime': module.params['lifetime'],
+                  'type': get_type(module.params['ztype']),
+                 }
+        if params['type'] in [2, 5, 7, 11]:
+            params.pop('interfaceid')
+
+        if not exists(content):
+            # if we didn't find it, create it
+            content = zapi.get_content(zbx_class_name, 'create', params)
+            module.exit_json(changed=True, results=content['result'], state='present')
+        # already exists, we need to update it
+        # let's compare properties
+        differences = {}
+        zab_results = content['result'][0]
+        for key, value in params.items():
+
+            if zab_results[key] != value and zab_results[key] != str(value):
+                differences[key] = value
+
+        if not differences:
+            module.exit_json(changed=False, results=zab_results, state="present")
+
+        # We have differences and need to update
+        differences[idname] = zab_results[idname]
+        content = zapi.get_content(zbx_class_name, 'update', differences)
+        module.exit_json(changed=True, results=content['result'], state="present")
+
+    module.exit_json(failed=True,
+                     changed=False,
+                     results='Unknown state passed. %s' % state,
+                     state="unknown")
+
+# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
+# import module snippets.  These are required
+from ansible.module_utils.basic import *
+
+main()
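
A hedged usage sketch (not part of the diff): get_type() maps the human-readable ztype onto the numeric Zabbix item type, falling back to 2 (trapper) for unknown values, and interfaceid is popped automatically for trapper-style types:

  - name: Ensure a trapper discovery rule exists
    zbx_discoveryrule:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      name: test discoveryrule
      key: test_listener
      template_name: test template
      ztype: trapper    # mapped to type 2; no interfaceid needed
      lifetime: 14      # days to keep lost resources
    register: discoveryrule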

+ 16 - 15
roles/os_zabbix/library/zbx_host.py

@@ -60,7 +60,7 @@ def get_template_ids(zapi, template_names):
     for template_name in template_names:
         content = zapi.get_content('template', 'get', {'search': {'host': template_name}})
         if content.has_key('result'):
-            template_ids.append({'templateid': content['results'][0]['templateid']})
+            template_ids.append({'templateid': content['result'][0]['templateid']})
     return template_ids
 
 def main():
@@ -71,20 +71,20 @@ def main():
     module = AnsibleModule(
         argument_spec=dict(
             server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
-            user=dict(default=None, type='str'),
-            password=dict(default=None, type='str'),
+            user=dict(default=os.environ['ZABBIX_USER'], type='str'),
+            password=dict(default=os.environ['ZABBIX_PASSWORD'], type='str'),
             name=dict(default=None, type='str'),
             hostgroup_names=dict(default=[], type='list'),
             template_names=dict(default=[], type='list'),
             debug=dict(default=False, type='bool'),
             state=dict(default='present', type='str'),
-            interfaces=dict(default=[], type='list'),
+            interfaces=dict(default=None, type='list'),
         ),
         #supports_check_mode=True
     )
 
-    user = module.params.get('user', os.environ['ZABBIX_USER'])
-    passwd = module.params.get('password', os.environ['ZABBIX_PASSWORD'])
+    user = module.params['user']
+    passwd = module.params['password']
 
     zapi = ZabbixAPI(ZabbixConnection(module.params['server'], user, passwd, module.params['debug']))
 
@@ -113,16 +113,17 @@ def main():
         module.exit_json(changed=True, results=content['result'], state="absent")
 
     if state == 'present':
+        ifs = module.params['interfaces'] or [{'type':  1,         # interface type, 1 = agent
+                                               'main':  1,         # default interface, 1 = true
+                                               'useip':  1,        # connect via IP, 1 = true
+                                               'ip':  '127.0.0.1', # IP address to connect to
+                                               'dns':  '',         # dns name for the host
+                                               'port':  '10050',   # interface port, 10050 = agent default
+                                              }]
         params = {'host': hname,
-                  'groups':  get_group_ids(zapi, module.params('hostgroup_names')),
-                  'templates':  get_template_ids(zapi, module.params('template_names')),
-                  'interfaces': module.params.get('interfaces', [{'type':  1,         # interface type, 1 = agent
-                                                                  'main':  1,         # default interface? 1 = true
-                                                                  'useip':  1,        # default interface? 1 = true
-                                                                  'ip':  '127.0.0.1', # default interface? 1 = true
-                                                                  'dns':  '',         # dns for host
-                                                                  'port':  '10050',   # port for interface? 10050
-                                                                 }])
+                  'groups':  get_group_ids(zapi, module.params['hostgroup_names']),
+                  'templates':  get_template_ids(zapi, module.params['template_names']),
+                  'interfaces': ifs,
                  }
 
         if not exists(content):
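
With this refactor the default agent interface lives in ifs, so callers can now override it; a sketch (not part of the diff, with a hypothetical address):

  - name: Register a host with an explicit agent interface
    zbx_host:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      name: kenny host
      template_names:
      - test template
      hostgroup_names:
      - kenny hostgroup
      interfaces:
      - type: 1           # agent
        main: 1
        useip: 1
        ip: 192.168.1.10  # hypothetical address
        dns: ''
        port: '10050'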

+ 11 - 1
roles/os_zabbix/library/zbx_item.py

@@ -60,6 +60,16 @@ def get_value_type(value_type):
 
     return vtype
 
+def get_app_ids(zapi, application_names):
+    ''' get application ids from names
+    '''
+    app_ids = []
+    for app_name in application_names:
+        content = zapi.get_content('application', 'get', {'search': {'name': app_name}})
+        if content.has_key('result') and content['result']:
+            app_ids.append(content['result'][0]['applicationid'])
+    return app_ids
+
 def main():
     '''
     ansible zabbix module for zbx_item
@@ -124,7 +134,7 @@ def main():
                   'hostid': templateid,
                   'type': module.params['zabbix_type'],
                   'value_type': get_value_type(module.params['value_type']),
-                  'applications': module.params['applications'],
+                  'applications': get_app_ids(zapi, module.params['applications']),
                  }
 
         if not exists(content):
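
The item module now resolves application names to ids via get_app_ids(), so the application must already exist (see zbx_application above). A sketch matching the updated test.yml:

  - name: Create an item attached to an application
    zbx_item:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      name: 'test item'
      key: 'kenny.item.1'
      applications:
      - 'Test App'       # resolved to an applicationid by get_app_ids()
      template_name: "test template"
    register: item_output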

+ 241 - 0
roles/os_zabbix/library/zbx_itemprototype.py

@@ -0,0 +1,241 @@
+#!/usr/bin/env python
+'''
+Zabbix discovery rule ansible module
+'''
+# vim: expandtab:tabstop=4:shiftwidth=4
+#
+#   Copyright 2015 Red Hat Inc.
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+
+# This is in place because the zabbix modules all look similar to one another:
+# they need near-duplicate code, as their behavior is very similar
+# but differs slightly for each zabbix class.
+# pylint: disable=duplicate-code
+
+# pylint: disable=import-error
+from openshift_tools.monitoring.zbxapi import ZabbixAPI, ZabbixConnection
+
+def exists(content, key='result'):
+    ''' Check that key exists in content and that content[key] is non-empty
+    '''
+    if not content.has_key(key):
+        return False
+
+    if not content[key]:
+        return False
+
+    return True
+
+def get_rule_id(zapi, discoveryrule_name):
+    '''get a discoveryrule by name
+    '''
+    content = zapi.get_content('discoveryrule',
+                               'get',
+                               {'search': {'name': discoveryrule_name},
+                                'output': 'extend',
+                               })
+    if not content['result']:
+        return None
+    return content['result'][0]['itemid']
+
+def get_template(zapi, template_name):
+    '''get a template by name
+    '''
+    content = zapi.get_content('template',
+                               'get',
+                               {'search': {'host': template_name},
+                                'output': 'extend',
+                                'selectInterfaces': 'interfaceid',
+                               })
+    if not content['result']:
+        return None
+    return content['result'][0]
+
+def get_type(ztype):
+    '''
+    Determine which Zabbix item type this is
+    '''
+    _types = {'agent': 0,
+              'SNMPv1': 1,
+              'trapper': 2,
+              'simple': 3,
+              'SNMPv2': 4,
+              'internal': 5,
+              'SNMPv3': 6,
+              'active': 7,
+              'aggregate': 8,
+              'external': 10,
+              'database monitor': 11,
+              'ipmi': 12,
+              'ssh': 13,
+              'telnet': 14,
+              'calculated': 15,
+              'JMX': 16,
+             }
+
+    for typ in _types.keys():
+        if ztype in typ or ztype == typ:
+            _vtype = _types[typ]
+            break
+    else:
+        _vtype = 2
+
+    return _vtype
+
+def get_value_type(value_type):
+    '''
+    Possible values:
+    0 - numeric float;
+    1 - character;
+    2 - log;
+    3 - numeric unsigned;
+    4 - text
+    '''
+    vtype = 0
+    if 'int' in value_type:
+        vtype = 3
+    elif 'char' in value_type:
+        vtype = 1
+    elif 'str' in value_type:
+        vtype = 4
+
+    return vtype
+
+def get_status(status):
+    ''' Determine status
+    '''
+    _status = 0
+    if status == 'disabled':
+        _status = 1
+    elif status == 'unsupported':
+        _status = 3
+
+    return _status
+
+def get_app_ids(zapi, application_names):
+    ''' get application ids from names
+    '''
+    app_ids = []
+    for app_name in application_names:
+        content = zapi.get_content('application', 'get', {'search': {'name': app_name}})
+        if content.has_key('result') and content['result']:
+            app_ids.append(content['result'][0]['applicationid'])
+    return app_ids
+
+def main():
+    '''
+    Ansible module for zabbix discovery rules
+    '''
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            server=dict(default='https://localhost/zabbix/api_jsonrpc.php', type='str'),
+            user=dict(default=os.environ['ZABBIX_USER'], type='str'),
+            password=dict(default=os.environ['ZABBIX_PASSWORD'], type='str'),
+            name=dict(default=None, type='str'),
+            key=dict(default=None, type='str'),
+            interfaceid=dict(default=None, type='int'),
+            ztype=dict(default='trapper', type='str'),
+            value_type=dict(default='float', type='str'),
+            delay=dict(default=60, type='int'),
+            lifetime=dict(default=30, type='int'),
+            template_name=dict(default=[], type='list'),
+            debug=dict(default=False, type='bool'),
+            state=dict(default='present', type='str'),
+            status=dict(default='enabled', type='str'),
+            discoveryrule_name=dict(default=None, type='str'),
+            applications=dict(default=[], type='list'),
+        ),
+        #supports_check_mode=True
+    )
+
+    user = module.params['user']
+    passwd = module.params['password']
+
+    zapi = ZabbixAPI(ZabbixConnection(module.params['server'], user, passwd, module.params['debug']))
+
+    #Set the instance and the template for the rest of the calls
+    zbx_class_name = 'itemprototype'
+    idname = "itemid"
+    dname = module.params['name']
+    state = module.params['state']
+
+    # selectInterfaces doesn't appear to be working but is needed.
+    content = zapi.get_content(zbx_class_name,
+                               'get',
+                               {'search': {'name': dname},
+                                'selectApplications': 'applicationid',
+                                'selectDiscoveryRule': 'itemid',
+                                #'selectDhosts': 'dhostid',
+                               })
+    if state == 'list':
+        module.exit_json(changed=False, results=content['result'], state="list")
+
+    if state == 'absent':
+        if not exists(content):
+            module.exit_json(changed=False, state="absent")
+
+        content = zapi.get_content(zbx_class_name, 'delete', [content['result'][0][idname]])
+        module.exit_json(changed=True, results=content['result'], state="absent")
+
+    if state == 'present':
+        template = get_template(zapi, module.params['template_name'])
+        params = {'name': dname,
+                  'key_':  module.params['key'],
+                  'hostid':  template['templateid'],
+                  'interfaceid': module.params['interfaceid'],
+                  'ruleid': get_rule_id(zapi, module.params['discoveryrule_name']),
+                  'type': get_type(module.params['ztype']),
+                  'value_type': get_value_type(module.params['value_type']),
+                  'applications': get_app_ids(zapi, module.params['applications']),
+                 }
+        if params['type'] in [2, 5, 7, 8, 11, 15]:
+            params.pop('interfaceid')
+
+        if not exists(content):
+            # if we didn't find it, create it
+            content = zapi.get_content(zbx_class_name, 'create', params)
+            module.exit_json(changed=True, results=content['result'], state='present')
+        # already exists, we need to update it
+        # let's compare properties
+        differences = {}
+        zab_results = content['result'][0]
+        for key, value in params.items():
+
+            if key == 'ruleid':
+                if value != zab_results['discoveryRule']['itemid']:
+                    differences[key] = value
+
+            elif zab_results[key] != value and zab_results[key] != str(value):
+                differences[key] = value
+
+        if not differences:
+            module.exit_json(changed=False, results=zab_results, state="present")
+
+        # We have differences and need to update
+        differences[idname] = zab_results[idname]
+        content = zapi.get_content(zbx_class_name, 'update', differences)
+        module.exit_json(changed=True, results=content['result'], state="present")
+
+    module.exit_json(failed=True,
+                     changed=False,
+                     results='Unknown state passed. %s' % state,
+                     state="unknown")
+
+# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
+# import module snippets.  These are required
+from ansible.module_utils.basic import *
+
+main()
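
Item prototypes hang off a discovery rule and use low-level discovery macros in their name and key; a sketch mirroring test.yml (not part of the diff):

  - name: Ensure an item prototype exists under a discovery rule
    zbx_itemprototype:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      name: 'Test itemprototype on {#TEST_LISTENER}'
      key: 'test[{#TEST_LISTENER}]'
      template_name: test template
      discoveryrule_name: test discoveryrule
      value_type: int    # mapped to Zabbix value type 3 (numeric unsigned)
    register: itemproto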

+ 2 - 1
roles/os_zabbix/library/zbx_template.py

@@ -74,7 +74,8 @@ def main():
                                {'search': {'host': tname},
                                 'selectParentTemplates': 'templateid',
                                 'selectGroups': 'groupid',
-                                #'selectApplications': extend,
+                                'selectApplications': 'applicationid',
+                                'selectDiscoveries': 'extend',
                                })
     if state == 'list':
         module.exit_json(changed=False, results=content['result'], state="list")
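
With the new selectApplications and selectDiscoveries keys, a state=list call now returns a template's applications and discovery rules as well; a hedged sketch, assuming the module's usual name parameter:

  - name: Inspect a template, including its applications and discovery rules
    zbx_template:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      name: test template
      state: list
    register: template_output

  - debug: var=template_output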

+ 27 - 4
roles/os_zabbix/library/zbx_user.py

@@ -54,7 +54,22 @@ def get_usergroups(zapi, usergroups):
         if content['result']:
             ugroups.append({'usrgrpid': content['result'][0]['usrgrpid']})
 
-    return ugroups
+    return ugroups or None
+
+def get_usertype(user_type):
+    '''
+    Determine zabbix user account type
+    '''
+    if not user_type:
+        return None
+
+    utype = 1                    # 1 = zabbix user
+    if 'super' in user_type:
+        utype = 3                # 3 = zabbix super admin
+    elif 'admin' in user_type:   # substring match also covers 'admin' itself
+        utype = 2                # 2 = zabbix admin
+
+    return utype
 
 def main():
     '''
@@ -69,8 +84,11 @@ def main():
             user=dict(default=None, type='str'),
             password=dict(default=None, type='str'),
             alias=dict(default=None, type='str'),
+            name=dict(default=None, type='str'),
+            surname=dict(default=None, type='str'),
+            user_type=dict(default=None, type='str'),
             passwd=dict(default=None, type='str'),
-            usergroups=dict(default=None, type='list'),
+            usergroups=dict(default=[], type='list'),
             debug=dict(default=False, type='bool'),
             state=dict(default='present', type='str'),
         ),
@@ -80,8 +98,7 @@ def main():
     user = module.params.get('user', os.environ['ZABBIX_USER'])
     password = module.params.get('password', os.environ['ZABBIX_PASSWORD'])
 
-    zbc = ZabbixConnection(module.params['server'], user, password, module.params['debug'])
-    zapi = ZabbixAPI(zbc)
+    zapi = ZabbixAPI(ZabbixConnection(module.params['server'], user, password, module.params['debug']))
 
     ## before we can create a user media and users with media types we need media
     zbx_class_name = 'user'
@@ -109,8 +126,14 @@ def main():
         params = {'alias': alias,
                   'passwd': module.params['passwd'],
                   'usrgrps': get_usergroups(zapi, module.params['usergroups']),
+                  'name': module.params['name'],
+                  'surname': module.params['surname'],
+                  'type': get_usertype(module.params['user_type']),
                  }
 
+        # Remove any None valued params
+        _ = [params.pop(key, None) for key in params.keys() if params[key] is None]
+
         if not exists(content):
             # if we didn't find it, create it
             content = zapi.get_content(zbx_class_name, 'create', params)
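
A sketch of the new user parameters (not part of the diff; the password and group are hypothetical). Since None-valued params are pruned before the API call, omitted fields are simply not sent:

  - name: Create a zabbix admin user
    zbx_user:
      server: "{{ zbx_server }}"
      user: "{{ zbx_user }}"
      password: "{{ zbx_password }}"
      alias: kwoodson
      passwd: changeme           # hypothetical initial password
      name: Kenny
      surname: Woodson
      user_type: admin           # mapped to Zabbix account type 2 by get_usertype()
      usergroups:
      - Zabbix administrators    # hypothetical group, resolved to a usrgrpid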

+ 4 - 0
roles/rhel_subscribe/tasks/enterprise.yml

@@ -1,5 +1,9 @@
 ---
+- name: Disable all repositories
+  command: subscription-manager repos --disable="*"
+
 - name: Enable RHEL repositories
   command: subscription-manager repos \
                --enable="rhel-7-server-rpms" \
+               --enable="rhel-7-server-extras-rpms" \
                --enable="rhel-7-server-ose-3.0-rpms"