
Merge remote-tracking branch 'openshift/master' into project_config

Diego Castro, 9 years ago
commit e792536434

+ 4 - 3
README.md

@@ -1,6 +1,6 @@
-#openshift-ansible
+#OpenShift and Atomic Enterprise Ansible
 
-This repo contains OpenShift Ansible code.
+This repo contains Ansible code for OpenShift and Atomic Enterprise.
 
 ##Setup
 - Install base dependencies:
@@ -23,12 +23,13 @@ This repo contains OpenShift Ansible code.
 - Bring your own host deployments:
   - [OpenShift Enterprise](README_OSE.md)
   - [OpenShift Origin](README_origin.md)
+  - [Atomic Enterprise](README_AEP.md)
 
 - Build
   - [How to build the openshift-ansible rpms](BUILD.md)
 
 - Directory Structure:
-  - [bin/cluster](bin/cluster) - python script to easily create OpenShift 3 clusters
+  - [bin/cluster](bin/cluster) - python script to easily create clusters
   - [docs](docs) - Documentation for the project
   - [filter_plugins/](filter_plugins) - custom filters used to manipulate data in Ansible
   - [inventory/](inventory) - houses Ansible dynamic inventory scripts

+ 240 - 0
README_AEP.md

@@ -0,0 +1,240 @@
+# Installing AEP from dev puddles using ansible
+
+* [Requirements](#requirements)
+* [Caveats](#caveats)
+* [Known Issues](#known-issues)
+* [Configuring the host inventory](#configuring-the-host-inventory)
+* [Creating the default variables for the hosts and host groups](#creating-the-default-variables-for-the-hosts-and-host-groups)
+* [Running the ansible playbooks](#running-the-ansible-playbooks)
+* [Post-ansible steps](#post-ansible-steps)
+* [Overriding detected ip addresses and hostnames](#overriding-detected-ip-addresses-and-hostnames)
+
+## Requirements
+* ansible
+  * Tested using ansible 1.9.1 and 1.9.2
+  * There is currently a known issue with ansible-1.9.0; you can downgrade to 1.8.4 on Fedora by installing one of the builds from Koji: http://koji.fedoraproject.org/koji/packageinfo?packageID=13842
+  * Available in Fedora channels
+  * Available for EL with EPEL and Optional channel
+* One or more RHEL 7.1 VMs
+* Either ssh key-based auth for the root user or ssh key-based auth for a user
+  with sudo access (no password); see the sketch at the end of this section if
+  keys are not set up yet
+* A checkout of atomic-enterprise-ansible from https://github.com/projectatomic/atomic-enterprise-ansible/
+
+  ```sh
+  git clone https://github.com/projectatomic/atomic-enterprise-ansible.git
+  cd atomic-enterprise-ansible
+  ```
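+
+If key-based auth is not set up yet, a minimal sketch (assuming a non-root user
+named `cloud-user` on the target host; adjust names as needed) is:
+```sh
+# copy your public key to the target host
+ssh-copy-id cloud-user@ose3-master.example.com
+# on the target host, as root: grant passwordless sudo
+echo 'cloud-user ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/cloud-user
+```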
+
+## Caveats
+This ansible repo is currently under heavy revision to provide OSE support;
+the following items are highly likely to change before the OSE support is
+merged into the upstream repo:
+  * the current git branch for testing
+  * how the inventory file should be configured
+  * variables that need to be set
+  * bootstrapping steps
+  * other configuration steps
+
+## Known Issues
+* Host subscriptions are not configurable yet; the hosts need to be
+  pre-registered with subscription-manager or have the RHEL base repo
+  pre-configured (a registration sketch follows this list). If using
+  subscription-manager, the following commands will disable all but the
+  rhel-7-server, rhel-7-server-extras and rhel-7-server-ose-3.0 repos:
+```sh
+subscription-manager repos --disable="*"
+subscription-manager repos \
+--enable="rhel-7-server-rpms" \
+--enable="rhel-7-server-extras-rpms" \
+--enable="rhel-7-server-ose-3.0-rpms"
+```
+* Configuration of the router is not automated yet
+* Configuration of the docker-registry is not automated yet
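+
+If a host is not registered at all, a minimal registration sketch (the user,
+password and pool ID below are placeholders) is:
+```sh
+subscription-manager register --username=<rhsm-user> --password=<rhsm-pass>
+subscription-manager attach --pool=<pool-id>
+```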
+
+## Configuring the host inventory
+[Ansible docs](http://docs.ansible.com/intro_inventory.html)
+
+Below is an example inventory file for configuring one master and two nodes
+for the test environment. The inventory can live in the default inventory file
+(/etc/ansible/hosts) or in a custom file passed to ansible-playbook with the
+--inventory option.
+
+/etc/ansible/hosts:
+```ini
+# This is an example of a bring your own (byo) host inventory
+
+# Create an OSEv3 group that contains the masters and nodes groups
+[OSEv3:children]
+masters
+nodes
+
+# Set variables common for all OSEv3 hosts
+[OSEv3:vars]
+# SSH user; this user should allow ssh-based auth without requiring a password
+ansible_ssh_user=root
+
+# If ansible_ssh_user is not root, ansible_sudo must be set to true
+#ansible_sudo=true
+
+# To deploy origin, change deployment_type to origin
+deployment_type=enterprise
+
+# Pre-release registry URL
+oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}
+
+# Pre-release additional repo
+openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+
+# Origin copr repo
+#openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
+
+# host group for masters
+[masters]
+ose3-master.example.com
+
+# host group for nodes
+[nodes]
+ose3-node[1:2].example.com
+```
+
+The hostnames above should resolve both from the hosts themselves and
+the host where ansible is running (if different).
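+
+A quick sanity check (a sketch, assuming the example inventory above is in
+place) is an ad-hoc ping, which confirms that the control host can resolve and
+reach every entry:
+```sh
+ansible OSEv3 -m ping
+```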
+
+## Running the ansible playbooks
+From the atomic-enterprise-ansible checkout run:
+```sh
+ansible-playbook playbooks/byo/config.yml
+```
+**Note:** this assumes that the host inventory is /etc/ansible/hosts; if using
+a different inventory file, pass it to ansible-playbook with the -i option.
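+
+For example, with a custom inventory file (the path below is illustrative):
+```sh
+ansible-playbook -i ~/aep/hosts playbooks/byo/config.yml
+```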
+
+## Post-ansible steps
+#### Create the default router
+On the master host:
+```sh
+oadm router --create=true \
+  --credentials=/etc/openshift/master/openshift-router.kubeconfig \
+  --images='docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}'
+```
+
+#### Create the default docker-registry
+On the master host:
+```sh
+oadm registry --create=true \
+  --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
+  --images='docker-buildvm-rhose.usersys.redhat.com:5000/openshift3/ose-${component}:${version}' \
+  --mount-host=/var/lib/openshift/docker-registry
+```
+
+## Overriding detected ip addresses and hostnames
+Some deployments will require that the user override the detected hostnames
+and ip addresses for the hosts. To see what the default values will be, run
+the openshift_facts playbook:
+```sh
+ansible-playbook playbooks/byo/openshift_facts.yml
+```
+The output will be similar to:
+```
+ok: [10.3.9.45] => {
+    "result": {
+        "ansible_facts": {
+            "openshift": {
+                "common": {
+                    "hostname": "jdetiber-osev3-ansible-005dcfa6-27c6-463d-9b95-ef059579befd.os1.phx2.redhat.com",
+                    "ip": "172.16.4.79",
+                    "public_hostname": "jdetiber-osev3-ansible-005dcfa6-27c6-463d-9b95-ef059579befd.os1.phx2.redhat.com",
+                    "public_ip": "10.3.9.45",
+                    "use_openshift_sdn": true
+                },
+                "provider": {
+                  ... <snip> ...
+                }
+            }
+        },
+        "changed": false,
+        "invocation": {
+            "module_args": "",
+            "module_name": "openshift_facts"
+        }
+    }
+}
+ok: [10.3.9.42] => {
+    "result": {
+        "ansible_facts": {
+            "openshift": {
+                "common": {
+                    "hostname": "jdetiber-osev3-ansible-c6ae8cdc-ba0b-4a81-bb37-14549893f9d3.os1.phx2.redhat.com",
+                    "ip": "172.16.4.75",
+                    "public_hostname": "jdetiber-osev3-ansible-c6ae8cdc-ba0b-4a81-bb37-14549893f9d3.os1.phx2.redhat.com",
+                    "public_ip": "10.3.9.42",
+                    "use_openshift_sdn": true
+                },
+                "provider": {
+                  ...<snip>...
+                }
+            }
+        },
+        "changed": false,
+        "invocation": {
+            "module_args": "",
+            "module_name": "openshift_facts"
+        }
+    }
+}
+ok: [10.3.9.36] => {
+    "result": {
+        "ansible_facts": {
+            "openshift": {
+                "common": {
+                    "hostname": "jdetiber-osev3-ansible-bc39a3d3-cdd7-42fe-9c12-9fac9b0ec320.os1.phx2.redhat.com",
+                    "ip": "172.16.4.73",
+                    "public_hostname": "jdetiber-osev3-ansible-bc39a3d3-cdd7-42fe-9c12-9fac9b0ec320.os1.phx2.redhat.com",
+                    "public_ip": "10.3.9.36",
+                    "use_openshift_sdn": true
+                },
+                "provider": {
+                    ...<snip>...
+                }
+            }
+        },
+        "changed": false,
+        "invocation": {
+            "module_args": "",
+            "module_name": "openshift_facts"
+        }
+    }
+}
+```
+Now, check the detected common settings to verify that they are what we expect
+them to be (if not, we can override them).
+
+* hostname
+  * Should resolve to the internal ip from the instances themselves.
+  * openshift_hostname will override.
+* ip
+  * Should be the internal ip of the instance.
+  * openshift_ip will override.
+* public_hostname
+  * Should resolve to the external ip from hosts outside of the cloud provider.
+  * openshift_public_hostname will override.
+* public_ip
+  * Should be the externally accessible ip associated with the instance.
+  * openshift_public_ip will override.
+* use_openshift_sdn
+  * Should be true unless the cloud is GCE.
+  * openshift_use_openshift_sdn will override.
+
+To override the defaults, you can set the variables in your inventory:
+```
+...snip...
+[masters]
+ose3-master.example.com openshift_ip=1.1.1.1 openshift_hostname=ose3-master.example.com openshift_public_ip=2.2.2.2 openshift_public_hostname=ose3-master.public.example.com
+...snip...
+```

+ 26 - 2
README_vagrant.md

@@ -2,9 +2,28 @@ Requirements
 ------------
 - vagrant (tested against version 1.7.2)
 - vagrant-hostmanager plugin (tested against version 1.5.0)
+- vagrant-registration plugin (only required for enterprise deployment type)
 - vagrant-libvirt (tested against version 0.0.26)
   - Only required if using libvirt instead of virtualbox
 
+For ``enterprise`` deployment types, the base RHEL box has to be added to Vagrant:
+
+1. Download the RHEL7 vagrant image (libvirt or virtualbox) available from the [Red Hat Container Development Kit downloads in the customer portal](https://access.redhat.com/downloads/content/293/ver=1/rhel---7/1.0.1/x86_64/product-downloads)
+
+2. Install it into vagrant
+
+   ``$ vagrant box add --name rhel-7 /path/to/rhel-server-libvirt-7.1-3.x86_64.box``
+
+3. (optional, recommended) Increase the disk size of the image to 20GB. This is a two-step process (these instructions are specific to libvirt).
+
+    Resize the actual qcow2 image:
+
+	``$ qemu-img resize ~/.vagrant.d/boxes/rhel-7/0/libvirt/box.img 20GB``
+
+    Edit `~/.vagrant.d/boxes/rhel-7/0/libvirt/metadata.json` to reflect the new size.  A corrected metadata.json looks like this:
+
+	``{"provider": "libvirt", "format": "qcow2", "virtual_size": 20}``
+
 Usage
 -----
 ```
@@ -21,5 +40,10 @@ vagrant provision
 Environment Variables
 ---------------------
 The following environment variables can be overridden:
-- OPENSHIFT_DEPLOYMENT_TYPE (defaults to origin, choices: origin, enterprise, online)
-- OPENSHIFT_NUM_NODES (the number of nodes to create, defaults to 2)
+- ``OPENSHIFT_DEPLOYMENT_TYPE`` (defaults to origin, choices: origin, enterprise, online)
+- ``OPENSHIFT_NUM_NODES`` (the number of nodes to create, defaults to 2)
+
+For ``enterprise`` deployment types, these env variables should also be specified (see the example after this list):
+- ``rhel_subscription_user``: rhsm user
+- ``rhel_subscription_pass``: rhsm password
+- (optional) ``rhel_subscription_pool``: pool ID to attach a specific subscription instead of relying on auto-attach
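+
+For example, an enterprise deployment might be brought up with (the credentials
+below are placeholders):
+```
+OPENSHIFT_DEPLOYMENT_TYPE=enterprise \
+rhel_subscription_user=user@example.com \
+rhel_subscription_pass=secret \
+vagrant up --provider=libvirt
+```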

+ 34 - 7
Vagrantfile

@@ -15,6 +15,28 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   config.hostmanager.manage_host = true
   config.hostmanager.include_offline = true
   config.ssh.insert_key = false
+
+  if deployment_type == 'enterprise'
+    unless Vagrant.has_plugin?('vagrant-registration')
+      raise 'vagrant-registration plugin is required for enterprise deployment'
+    end
+    username = ENV['rhel_subscription_user']
+    password = ENV['rhel_subscription_pass']
+    unless username and password
+      raise 'rhel_subscription_user and rhel_subscription_pass are required'
+    end
+    config.registration.username = username
+    config.registration.password = password
+    # FIXME this is temporary until vagrant/ansible registration modules
+    # are capable of handling specific subscription pools
+    if not ENV['rhel_subscription_pool'].nil?
+      config.vm.provision "shell" do |s|
+        s.inline = "subscription-manager attach --pool=$1 || true"
+        s.args = "#{ENV['rhel_subscription_pool']}"
+      end
+    end
+  end
+
   config.vm.provider "virtualbox" do |vbox, override|
     override.vm.box = "chef/centos-7.1"
     vbox.memory = 1024
@@ -28,10 +50,15 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
     libvirt.cpus = 2
     libvirt.memory = 1024
     libvirt.driver = 'kvm'
-    override.vm.box = "centos-7.1"
-    override.vm.box_url = "https://download.gluster.org/pub/gluster/purpleidea/vagrant/centos-7.1/centos-7.1.box"
-    override.vm.box_download_checksum = "b2a9f7421e04e73a5acad6fbaf4e9aba78b5aeabf4230eebacc9942e577c1e05"
-    override.vm.box_download_checksum_type = "sha256"
+    case deployment_type
+    when "enterprise"
+      override.vm.box = "rhel-7"
+    when "origin"
+      override.vm.box = "centos-7.1"
+      override.vm.box_url = "https://download.gluster.org/pub/gluster/purpleidea/vagrant/centos-7.1/centos-7.1.box"
+      override.vm.box_download_checksum = "b2a9f7421e04e73a5acad6fbaf4e9aba78b5aeabf4230eebacc9942e577c1e05"
+      override.vm.box_download_checksum_type = "sha256"
+    end
   end
 
   num_nodes.times do |n|
@@ -53,12 +80,12 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
       ansible.sudo = true
       ansible.groups = {
         "masters" => ["master"],
-        "nodes"   => ["node1", "node2"],
+        "nodes"   => ["master", "node1", "node2"],
       }
       ansible.extra_vars = {
-        openshift_deployment_type: "origin",
+        deployment_type: deployment_type,
       }
-      ansible.playbook = "playbooks/byo/config.yml"
+      ansible.playbook = "playbooks/byo/vagrant.yml"
     end
   end
 end

+ 1 - 1
docs/best_practices_guide.adoc

@@ -421,7 +421,7 @@ For consistency, role names SHOULD follow the above naming pattern. It is import
 Many times the `technology` portion of the pattern will line up with a package name. It is advised that whenever possible, the package name should be used.
 
 .Examples:
-* The role to configure an OpenShift Master is called `openshift_master`
+* The role to configure a master is called `openshift_master`
 * The role to configure OpenShift specific yum repositories is called `openshift_repos`
 
 === Filters

+ 11 - 0
filter_plugins/oo_filters.py

@@ -132,6 +132,16 @@ class FilterModule(object):
         return rval
 
     @staticmethod
+    def oo_combine_dict(data, in_joiner='=', out_joiner=' '):
+        '''Take a dict such as {'key1': 'value1', 'key2': 'value2'} and
+           arrange it as the string 'key1=value1 key2=value2'
+        '''
+        if not issubclass(type(data), dict):
+            raise errors.AnsibleFilterError("|failed expects first param is a dict")
+
+        return out_joiner.join([in_joiner.join([k, v]) for k, v in data.items()])
+
+    @staticmethod
     def oo_ami_selector(data, image_name):
         ''' This takes a list of amis and an image name and attempts to return
             the latest ami.
@@ -309,6 +319,7 @@ class FilterModule(object):
             "oo_ami_selector": self.oo_ami_selector,
             "oo_ec2_volume_definition": self.oo_ec2_volume_definition,
             "oo_combine_key_value": self.oo_combine_key_value,
+            "oo_combine_dict": self.oo_combine_dict,
             "oo_split": self.oo_split,
             "oo_filter_list": self.oo_filter_list,
             "oo_parse_heat_stack_outputs": self.oo_parse_heat_stack_outputs

+ 1 - 1
inventory/aws/hosts/hosts

@@ -1 +1 @@
-localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter=/usr/bin/python2
+localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2'

+ 4 - 1
inventory/byo/hosts.example

@@ -33,7 +33,7 @@ deployment_type=enterprise
 #openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
 
 # htpasswd auth
-#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
 
 # Allow all auth
 #openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]
@@ -68,6 +68,9 @@ deployment_type=enterprise
 # additional cors origins
 #osm_custom_cors_origins=['foo.example.com', 'bar.example.com'] 
 
+# default project node selector
+#osm_default_node_selector='region=primary'
+
 # host group for masters
 [masters]
 ose3-master[1:3]-ansible.test.example.com

+ 1 - 1
inventory/gce/hosts/hosts

@@ -1 +1 @@
-localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter=/usr/bin/python2
+localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2'

+ 1 - 1
inventory/libvirt/hosts/hosts

@@ -1 +1 @@
-localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter=/usr/bin/python2
+localhost ansible_connection=local ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2'

+ 1 - 1
inventory/openstack/hosts/hosts

@@ -1 +1 @@
-localhost ansible_sudo=no ansible_python_interpreter=/usr/bin/python2 connection=local
+localhost ansible_sudo=no ansible_python_interpreter='/usr/bin/env python2' connection=local

+ 2 - 2
playbooks/aws/openshift-cluster/vars.online.int.yml

@@ -3,9 +3,9 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: m4.large
+ec2_master_instance_type: t2.small
 ec2_master_security_groups: [ 'integration', 'integration-master' ]
-ec2_infra_instance_type: m4.large
+ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'integration', 'integration-infra' ]
 ec2_node_instance_type: m4.large
 ec2_node_security_groups: [ 'integration', 'integration-node' ]

+ 2 - 2
playbooks/aws/openshift-cluster/vars.online.prod.yml

@@ -3,9 +3,9 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: m4.large
+ec2_master_instance_type: t2.small
 ec2_master_security_groups: [ 'production', 'production-master' ]
-ec2_infra_instance_type: m4.large
+ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'production', 'production-infra' ]
 ec2_node_instance_type: m4.large
 ec2_node_security_groups: [ 'production', 'production-node' ]

+ 2 - 2
playbooks/aws/openshift-cluster/vars.online.stage.yml

@@ -3,9 +3,9 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: m4.large
+ec2_master_instance_type: t2.small
 ec2_master_security_groups: [ 'stage', 'stage-master' ]
-ec2_infra_instance_type: m4.large
+ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'stage', 'stage-infra' ]
 ec2_node_instance_type: m4.large
 ec2_node_security_groups: [ 'stage', 'stage-node' ]

+ 12 - 0
playbooks/byo/rhel_subscribe.yml

@@ -0,0 +1,12 @@
+---
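+# Subscribe RHEL hosts when deploying the enterprise type (unless subscription
+# is explicitly skipped), then configure repos and update packages.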
+- hosts: all
+  vars:
+    openshift_deployment_type: "{{ deployment_type }}"
+  roles:
+  - role: rhel_subscribe
+    when: deployment_type == "enterprise" and
+          ansible_distribution == "RedHat" and
+          lookup('oo_option', 'rhel_skip_subscription') | default(rhsub_skip, True) |
+          default('no', True) | lower in ['no', 'false']
+  - openshift_repos
+  - os_update_latest

+ 4 - 0
playbooks/byo/vagrant.yml

@@ -0,0 +1,4 @@
+---
+- include: rhel_subscribe.yml
+
+- include: config.yml

+ 1 - 0
playbooks/common/openshift-node/config.yml

@@ -131,6 +131,7 @@
                          | oo_collect('openshift.common.hostname') }}"
     openshift_unscheduleable_nodes: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] | default([]))
                                       | oo_collect('openshift.common.hostname', {'openshift_scheduleable': False}) }}"
+    openshift_node_vars: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config']) }}"
   pre_tasks:
   - set_fact:
       openshift_scheduleable_nodes: "{{ hostvars

+ 1 - 1
roles/etcd/tasks/main.yml

@@ -1,6 +1,6 @@
 ---
 - name: Install etcd
-  yum: pkg=etcd state=present
+  yum: pkg=etcd-2.* state=present
 
 - name: Validate permissions on the config dir
   file:

+ 1 - 1
roles/fluentd_master/tasks/main.yml

@@ -40,7 +40,7 @@
     mode: 0444
 
 - name: "Pause before restarting td-agent and openshift-master, depending on the number of nodes."
-  pause: seconds={{ num_nodes|int * 5 }}
+  pause: seconds={{ ( num_nodes|int < 3 ) | ternary(15, (num_nodes|int * 5)) }}
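+  # i.e. wait at least 15 seconds; for three or more nodes, wait 5 seconds per node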
 
 - name: ensure td-agent is running
   service:

+ 3 - 3
roles/openshift_common/README.md

@@ -1,7 +1,7 @@
-OpenShift Common
-================
+OpenShift/Atomic Enterprise Common
+===================================
 
-OpenShift common installation and configuration tasks.
+OpenShift/Atomic Enterprise common installation and configuration tasks.
 
 Requirements
 ------------

+ 7 - 0
roles/openshift_manage_node/tasks/main.yml

@@ -16,3 +16,10 @@
   command: >
     {{ openshift.common.admin_binary }} manage-node {{ item }} --schedulable=true
   with_items: openshift_scheduleable_nodes
+
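+# Apply any configured node labels; oo_combine_dict renders the labels dict as
+# 'key=value' pairs for the client command line.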
+- name: Label nodes
+  command: >
+    {{ openshift.common.client_binary }} label --overwrite node {{ item.openshift.common.hostname }} {{ item.openshift.node.labels | oo_combine_dict  }}
+  with_items:
+    -  "{{ openshift_node_vars }}"
+  when: "'labels' in item.openshift.node and item.openshift.node.labels != {}"

+ 2 - 1
roles/openshift_master/tasks/main.yml

@@ -61,7 +61,8 @@
       mcs_allocator_range: "{{ osm_mcs_allocator_range | default(None) }}"
       mcs_labels_per_project: "{{ osm_mcs_labels_per_project | default(None) }}"
       uid_allocator_range: "{{ osm_uid_allocator_range | default(None) }}"
-
+      api_server_args: "{{ osm_api_server_args | default(None) }}"
+      controller_args: "{{ osm_controller_args | default(None) }}"
 
 # TODO: These values need to be configurable
 - name: Set dns OpenShift facts

+ 6 - 0
roles/openshift_master/templates/master.yaml.v1.j2

@@ -2,6 +2,9 @@ apiLevels:
 - v1beta3
 - v1
 apiVersion: v1
+{% if api_server_args is defined and api_server_args %}
+apiServerArguments: {{ api_server_args }}
+{% endif %}
 assetConfig:
   logoutURL: ""
   masterPublicURL: {{ openshift.master.public_api_url }}
@@ -13,6 +16,9 @@ assetConfig:
     keyFile: master.server.key
     maxRequestsInFlight: 0
     requestTimeoutSeconds: 0
+{% if controller_args is defined and controller_args %}
+controllerArguments: {{ controller_args }}
+{% endif %}
 corsAllowedOrigins:
 {% for origin in ['127.0.0.1', 'localhost', openshift.common.hostname, openshift.common.ip, openshift.common.public_hostname, openshift.common.public_ip] %}
   - {{ origin }}

+ 12 - 0
roles/openshift_node/README.md

@@ -34,6 +34,18 @@ openshift_common
 Example Playbook
 ----------------
 
+Notes
+-----
+
+Currently we support re-labeling nodes, but we don't re-schedule running pods or remove existing labels. That means you will have to trigger the re-scheduling manually. To re-schedule your pods, follow the steps below:
+
+```
+oadm manage-node --schedulable=false ${NODE}
+oadm manage-node --evacuate ${NODE}
+oadm manage-node --schedulable=true ${NODE}
+```
+
+
 TODO
 
 License

+ 7 - 1
roles/openshift_node/tasks/main.yml

@@ -6,6 +6,9 @@
 - fail:
     msg: This role requires that osn_cluster_dns_ip is set
   when: osn_cluster_dns_ip is not defined or not osn_cluster_dns_ip
+- fail:
+    msg: "SELinux is disabled, This deployment type requires that SELinux is enabled."
+  when: (not ansible_selinux or ansible_selinux.status != 'enabled') and deployment_type in ['enterprise', 'online']
 
 - name: Install OpenShift Node package
   yum: pkg=openshift-node state=present
@@ -33,6 +36,7 @@
       registry_url: "{{ oreg_url | default(none) }}"
       debug_level: "{{ openshift_node_debug_level | default(openshift.common.debug_level) }}"
       portal_net: "{{ openshift_master_portal_net | default(None) }}"
+      kubelet_args: "{{ openshift_node_kubelet_args | default(None) }}"
 
 # TODO: add the validate parameter when there is a validation command to run
 - name: Create the Node config
@@ -63,11 +67,13 @@
   lineinfile:
     dest: /etc/sysconfig/docker
     regexp: '^OPTIONS=.*'
-    line: "OPTIONS='--insecure-registry={{ openshift.node.portal_net }} --selinux-enabled'"
+    line: "OPTIONS='--insecure-registry={{ openshift.node.portal_net }} \
+{% if ansible_selinux and ansible_selinux.status == 'enabled' %}--selinux-enabled{% endif %}'"
   when: docker_check.stat.isreg
 
 - name: Allow NFS access for VMs
   seboolean: name=virt_use_nfs state=yes persistent=yes
+  when: ansible_selinux and ansible_selinux.status == "enabled"
 
 - name: Start and enable openshift-node
   service: name=openshift-node enabled=yes state=started

+ 3 - 0
roles/openshift_node/templates/node.yaml.v1.j2

@@ -8,6 +8,9 @@ imageConfig:
   format: {{ openshift.node.registry_url }}
   latest: false
 kind: NodeConfig
+{% if openshift.node.kubelet_args is defined and openshift.node.kubelet_args %}
+kubeletArguments: {{ openshift.node.kubelet_args }}
+{% endif %}
 masterKubeConfig: system:node:{{ openshift.common.hostname }}.kubeconfig
 networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
 nodeName: {{ openshift.common.hostname }}