
Templatize configs and 0.5.2 changes

- Templatize node config
- Templatize master config
- Integrated SDN changes
- Updates for openshift_facts
  - Added support for node, master and sdn related changes
    - registry_url
  - Added identity provider facts
- Removed openshift_sdn_* roles
- Install httpd-tools if configuring htpasswd auth
- Remove references to external_id
  - Setting external_id interferes with nodes associating with the generated
    node object when pre-registering nodes.
- osc/oc and osadm/oadm binary detection in openshift_facts

Misc Changes:
- make non-errata puddle default for byo example
- comment out master in list of nodes in inventory/byo/hosts
- remove non-error errors from fluentd_* roles
- Use admin kubeconfig instead of openshift-client
Jason DeTiberus, 10 years ago
commit 94a77cb1d8
38 files changed, 656 additions and 582 deletions
  1. filter_plugins/oo_filters.py (+21, -0)
  2. inventory/byo/hosts (+6, -3)
  3. playbooks/aws/openshift-cluster/config.yml (+1, -0)
  4. playbooks/aws/openshift-node/config.yml (+1, -0)
  5. playbooks/byo/openshift-node/config.yml (+3, -1)
  6. playbooks/common/openshift-master/config.yml (+0, -4)
  7. playbooks/common/openshift-node/config.yml (+41, -39)
  8. playbooks/gce/openshift-cluster/config.yml (+1, -0)
  9. playbooks/gce/openshift-node/config.yml (+1, -0)
  10. playbooks/libvirt/openshift-cluster/config.yml (+1, -0)
  11. playbooks/openstack/openshift-cluster/config.yml (+1, -0)
  12. roles/fluentd_master/tasks/main.yml (+2, -1)
  13. roles/fluentd_node/tasks/main.yml (+2, -1)
  14. roles/openshift_common/tasks/main.yml (+2, -0)
  15. roles/openshift_common/vars/main.yml (+2, -0)
  16. roles/openshift_facts/library/openshift_facts.py (+144, -52)
  17. roles/openshift_master/tasks/main.yml (+76, -34)
  18. roles/openshift_master/templates/master.yaml.v1.j2 (+98, -0)
  19. roles/openshift_master/templates/scheduler.json.j2 (+12, -0)
  20. roles/openshift_master/templates/v1_partials/oauthConfig.j2 (+78, -0)
  21. roles/openshift_master/vars/main.yml (+7, -3)
  22. roles/openshift_node/defaults/main.yml (+4, -0)
  23. roles/openshift_node/handlers/main.yml (+0, -1)
  24. roles/openshift_node/tasks/main.yml (+38, -29)
  25. roles/openshift_node/templates/node.yaml.v1.j2 (+18, -0)
  26. roles/openshift_node/vars/main.yml (+2, -1)
  27. roles/openshift_register_nodes/defaults/main.yml (+0, -2)
  28. roles/openshift_register_nodes/library/kubernetes_register_node.py (+69, -157)
  29. roles/openshift_register_nodes/tasks/main.yml (+24, -35)
  30. roles/openshift_register_nodes/vars/main.yml (+1, -1)
  31. roles/openshift_sdn_master/README.md (+0, -41)
  32. roles/openshift_sdn_master/handlers/main.yml (+0, -3)
  33. roles/openshift_sdn_master/meta/main.yml (+0, -15)
  34. roles/openshift_sdn_master/tasks/main.yml (+0, -37)
  35. roles/openshift_sdn_node/README.md (+0, -44)
  36. roles/openshift_sdn_node/handlers/main.yml (+0, -3)
  37. roles/openshift_sdn_node/meta/main.yml (+0, -15)
  38. roles/openshift_sdn_node/tasks/main.yml (+0, -60)

+ 21 - 0
filter_plugins/oo_filters.py

@@ -202,6 +202,26 @@ class FilterModule(object):
         '''
         return string.split(separator)
 
+    @staticmethod
+    def oo_filter_list(data, filter_attr=None):
+        ''' This returns a list, which contains all items where filter_attr
+            evaluates to true
+            Ex: data = [ { a: 1, b: True },
+                         { a: 3, b: False },
+                         { a: 5, b: True } ]
+                filter_attr = 'b'
+                returns [ { a: 1, b: True },
+                          { a: 5, b: True } ]
+        '''
+        if not issubclass(type(data), list):
+            raise errors.AnsibleFilterError("|failed expects to filter on a list")
+
+        if not issubclass(type(filter_attr), str):
+            raise errors.AnsibleFilterError("|failed expects filter_attr is a str")
+
+        # Gather up the values for the list of keys passed in
+        return [x for x in data if x[filter_attr]]
+
     def filters(self):
         ''' returns a mapping of filters to methods '''
         return {
@@ -214,4 +234,5 @@ class FilterModule(object):
             "oo_ec2_volume_definition": self.oo_ec2_volume_definition,
             "oo_combine_key_value": self.oo_combine_key_value,
             "oo_split": self.oo_split,
+            "oo_filter_list": self.oo_filter_list
         }
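
A minimal plain-Python sketch of what the new `oo_filter_list` filter does, outside of Ansible (the host dicts here are hypothetical):

```python
def oo_filter_list(data, filter_attr=None):
    """Return only the items whose filter_attr value is truthy."""
    if not isinstance(data, list):
        raise TypeError("expects to filter on a list")
    if not isinstance(filter_attr, str):
        raise TypeError("expects filter_attr is a str")
    return [x for x in data if x[filter_attr]]

hosts = [{'name': 'node1', 'certs_missing': True},
         {'name': 'node2', 'certs_missing': False},
         {'name': 'node3', 'certs_missing': True}]
filtered = oo_filter_list(hosts, filter_attr='certs_missing')
print([h['name'] for h in filtered])  # → ['node1', 'node3']
```

The node config playbook in this commit uses the filter to narrow `oo_nodes_to_config` down to the hosts whose `certs_missing` fact is set.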

+ 6 - 3
inventory/byo/hosts

@@ -20,17 +20,20 @@ deployment_type=enterprise
 oreg_url=docker-buildvm-rhose.usersys.redhat.com:5000/openshift3_beta/ose-${component}:${version}
 
 # Pre-release additional repo
-#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
-openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterpriseErrata/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterprise/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
+#openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://buildvm-devops.usersys.redhat.com/puddle/build/OpenShiftEnterpriseErrata/3.0/latest/RH7-RHOSE-3.0/$basearch/os', 'enabled': 1, 'gpgcheck': 0}]
 
 # Origin copr repo
 #openshift_additional_repos=[{'id': 'openshift-origin-copr', 'name': 'OpenShift Origin COPR', 'baseurl': 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/epel-7-$basearch/', 'enabled': 1, 'gpgcheck': 1, gpgkey: 'https://copr-be.cloud.fedoraproject.org/results/maxamillion/origin-next/pubkey.gpg'}]
 
+# htpasswd auth
+#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
+
 # host group for masters
 [masters]
 ose3-master-ansible.test.example.com
 
 # host group for nodes
 [nodes]
-ose3-master-ansible.test.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
+#ose3-master-ansible.test.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
 ose3-node[1:2]-ansible.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

+ 1 - 0
playbooks/aws/openshift-cluster/config.yml

@@ -32,5 +32,6 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ ec2_private_ip_address }}"
     openshift_public_hostname: "{{ ec2_ip_address }}"

+ 1 - 0
playbooks/aws/openshift-node/config.yml

@@ -21,5 +21,6 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ ec2_private_ip_address }}"
     openshift_public_hostname: "{{ ec2_ip_address }}"

+ 3 - 1
playbooks/byo/openshift-node/config.yml

@@ -10,12 +10,14 @@
     with_items: groups.nodes
   - name: Evaluate oo_first_master
     add_host:
-      name: "{{ groups.masters[0] }}"
+      name: "{{ item }}"
       groups: oo_first_master
+    with_items: groups.masters.0
 
 
 - include: ../../common/openshift-node/config.yml
   vars:
+    openshift_first_master: "{{ groups.masters.0 }}"
     openshift_cluster_id: "{{ cluster_id | default('default') }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"

+ 0 - 4
playbooks/common/openshift-master/config.yml

@@ -1,12 +1,8 @@
 ---
 - name: Configure master instances
   hosts: oo_masters_to_config
-  vars:
-    openshift_sdn_master_url: https://{{ openshift.common.hostname }}:4001
   roles:
   - openshift_master
-  - role: openshift_sdn_master
-    when: openshift.common.use_openshift_sdn | bool
   - role: fluentd_master
     when: openshift.common.use_fluentd | bool
   tasks:

+ 41 - 39
playbooks/common/openshift-node/config.yml

@@ -4,9 +4,9 @@
   roles:
   - openshift_facts
   tasks:
-  # Since the master is registering the nodes before they are configured, we
-  # need to make sure to set the node properties beforehand if we do not want
-  # the defaults
+  # Since the master is generating the node certificates before they are
+  # configured, we need to make sure to set the node properties beforehand if
+  # we do not want the defaults
   - openshift_facts:
       role: "{{ item.role }}"
       local_facts: "{{ item.local_facts }}"
@@ -18,13 +18,26 @@
           deployment_type: "{{ openshift_deployment_type }}"
       - role: node
         local_facts:
-          external_id: "{{ openshift_node_external_id | default(None) }}"
           resources_cpu: "{{ openshift_node_resources_cpu | default(None) }}"
           resources_memory: "{{ openshift_node_resources_memory | default(None) }}"
           pod_cidr: "{{ openshift_node_pod_cidr | default(None) }}"
           labels: "{{ openshift_node_labels | default(None) }}"
           annotations: "{{ openshift_node_annotations | default(None) }}"
-
+  - name: Check status of node certificates
+    stat:
+      path: "{{ item }}"
+    with_items:
+    - "/etc/openshift/node/node.key"
+    - "/etc/openshift/node/node.kubeconfig"
+    - "/etc/openshift/node/ca.crt"
+    - "/etc/openshift/node/server.key"
+    register: stat_result
+  - set_fact:
+      certs_missing: "{{ stat_result.results | map(attribute='stat.exists')
+                         | list | intersect([false])}}"
+      node_subdir: node-{{ openshift.common.hostname }}
+      config_dir: /etc/openshift/generated-configs/node-{{ openshift.common.hostname }}
+      node_cert_dir: /etc/openshift/node
 
 - name: Create temp directory for syncing certs
   hosts: localhost
@@ -37,66 +50,57 @@
     register: mktemp
     changed_when: False
 
-
 - name: Register nodes
   hosts: oo_first_master
   vars:
-    openshift_nodes: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config']) }}"
+    nodes_needing_certs: "{{ hostvars
+                             | oo_select_keys(groups['oo_nodes_to_config'])
+                             | oo_filter_list(filter_attr='certs_missing') }}"
+    openshift_nodes: "{{ hostvars
+                         | oo_select_keys(groups['oo_nodes_to_config']) }}"
     sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}"
   roles:
   - openshift_register_nodes
-  tasks:
-  # TODO: update so that we only sync necessary configs/directories, currently
-  # we sync for all nodes in oo_nodes_to_config.  We will need to inspect the
-  # configs on the nodes to make the determination on whether to sync or not.
-  - name: Create the temp directory on the master
-    file:
-      path: "{{ sync_tmpdir }}"
-      owner: "{{ ansible_ssh_user }}"
-      mode: 0700
-      state: directory
-    changed_when: False
-
+  post_tasks:
   - name: Create a tarball of the node config directories
-    command: tar -czvf {{ sync_tmpdir }}/{{ item.openshift.common.hostname }}.tgz ./
+    command: >
+      tar -czvf {{ item.config_dir }}.tgz ./
+        --transform 's|system:{{ item.node_subdir }}|node|'
+        -C {{ item.config_dir }} .
     args:
-      chdir: "{{ openshift_generated_configs_dir }}/node-{{ item.openshift.common.hostname }}"
-    with_items: openshift_nodes
-    changed_when: False
+      creates: "{{ item.config_dir }}.tgz"
+    with_items: nodes_needing_certs
 
   - name: Retrieve the node config tarballs from the master
     fetch:
-      src: "{{ sync_tmpdir }}/{{ item.openshift.common.hostname }}.tgz"
+      src: "{{ item.config_dir }}.tgz"
       dest: "{{ sync_tmpdir }}/"
+      flat: yes
       fail_on_missing: yes
       validate_checksum: yes
-    with_items: openshift_nodes
-    changed_when: False
-
+    with_items: nodes_needing_certs
 
 - name: Configure node instances
   hosts: oo_nodes_to_config
-  gather_facts: no
   vars:
-    sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}/{{ groups['oo_first_master'][0] }}/{{ hostvars.localhost.mktemp.stdout }}"
-    openshift_sdn_master_url: "https://{{ hostvars[groups['oo_first_master'][0]].openshift.common.hostname }}:4001"
+    sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}"
+    openshift_node_master_api_url: "{{ hostvars[openshift_first_master].openshift.master.api_url }}"
   pre_tasks:
   - name: Ensure certificate directory exists
     file:
-      path: "{{ openshift_node_cert_dir }}"
+      path: "{{ node_cert_dir }}"
       state: directory
 
-  # TODO: notify restart openshift-node and/or restart openshift-sdn-node,
+  # TODO: notify restart openshift-node
   # possibly test service started time against certificate/config file
-  # timestamps in openshift-node or openshift-sdn-node to trigger notify
+  # timestamps in openshift-node to trigger notify
   - name: Unarchive the tarball on the node
     unarchive:
-      src: "{{ sync_tmpdir }}/{{ openshift.common.hostname }}.tgz"
-      dest: "{{ openshift_node_cert_dir }}"
+      src: "{{ sync_tmpdir }}/{{ node_subdir }}.tgz"
+      dest: "{{ node_cert_dir }}"
+    when: certs_missing
   roles:
   - openshift_node
-  - role: openshift_sdn_node
-    when: openshift.common.use_openshift_sdn | bool
   - role: fluentd_node
     when: openshift.common.use_fluentd | bool
   tasks:
@@ -113,7 +117,6 @@
   - file: name={{ sync_tmpdir }} state=absent
     changed_when: False
 
-
 - name: Delete temporary directory on localhost
   hosts: localhost
   connection: local
@@ -123,7 +126,6 @@
   - file: name={{ mktemp.stdout }} state=absent
     changed_when: False
 
-
 # Additional config for online type deployments
 - name: Additional instance config
   hosts: oo_nodes_deployment_type_online
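
Earlier in this playbook, `certs_missing` is derived by mapping each registered `stat` result to its `stat.exists` flag and intersecting the list with `[false]`. A plain-Python sketch of the same logic (the stat results shown are hypothetical):

```python
# One registered result per certificate file checked by the stat task.
stat_results = [
    {'item': '/etc/openshift/node/node.key', 'stat': {'exists': True}},
    {'item': '/etc/openshift/node/node.kubeconfig', 'stat': {'exists': False}},
    {'item': '/etc/openshift/node/ca.crt', 'stat': {'exists': True}},
    {'item': '/etc/openshift/node/server.key', 'stat': {'exists': True}},
]
# map(attribute='stat.exists') | list
exists_flags = [r['stat']['exists'] for r in stat_results]
# intersect([false]): non-empty iff at least one file is missing
certs_missing = list(set(exists_flags) & {False})
print(bool(certs_missing))  # → True (node.kubeconfig is absent)
```

A truthy `certs_missing` then gates both the certificate tarball creation on the master and the unarchive step on the node.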

+ 1 - 0
playbooks/gce/openshift-cluster/config.yml

@@ -34,4 +34,5 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ gce_private_ip }}"

+ 1 - 0
playbooks/gce/openshift-node/config.yml

@@ -21,4 +21,5 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ gce_private_ip }}"

+ 1 - 0
playbooks/libvirt/openshift-cluster/config.yml

@@ -36,3 +36,4 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"

+ 1 - 0
playbooks/openstack/openshift-cluster/config.yml

@@ -31,4 +31,5 @@
     openshift_cluster_id: "{{ cluster_id }}"
     openshift_debug_level: 4
     openshift_deployment_type: "{{ deployment_type }}"
+    openshift_first_master: "{{ groups.oo_first_master.0 }}"
     openshift_hostname: "{{ ansible_default_ipv4.address }}"

+ 2 - 1
roles/fluentd_master/tasks/main.yml

@@ -8,7 +8,8 @@
 - name: Verify fluentd plugin installed
   command: '/opt/td-agent/embedded/bin/gem query -i fluent-plugin-kubernetes'
   register: _fluent_plugin_check
-  ignore_errors: yes
+  failed_when: false
+  changed_when: false
 
 - name: install Kubernetes fluentd plugin
   command: '/opt/td-agent/embedded/bin/gem install fluent-plugin-kubernetes'

+ 2 - 1
roles/fluentd_node/tasks/main.yml

@@ -8,7 +8,8 @@
 - name: Verify fluentd plugin installed
   command: '/opt/td-agent/embedded/bin/gem query -i fluent-plugin-kubernetes'
   register: _fluent_plugin_check
-  ignore_errors: yes
+  failed_when: false
+  changed_when: false
 
 - name: install Kubernetes fluentd plugin
   command: '/opt/td-agent/embedded/bin/gem install fluent-plugin-kubernetes'

+ 2 - 0
roles/openshift_common/tasks/main.yml

@@ -10,7 +10,9 @@
       public_hostname: "{{ openshift_public_hostname | default(None) }}"
       public_ip: "{{ openshift_public_ip | default(None) }}"
       use_openshift_sdn: "{{ openshift_use_openshift_sdn | default(None) }}"
+      sdn_network_plugin_name: "{{ os_sdn_network_plugin_name | default(None) }}"
       deployment_type: "{{ openshift_deployment_type }}"
+
 - name: Set hostname
   hostname: name={{ openshift.common.hostname }}
 

+ 2 - 0
roles/openshift_common/vars/main.yml

@@ -5,3 +5,5 @@
 # chains with the public zone (or the zone associated with the correct
 # interfaces)
 os_firewall_use_firewalld: False
+
+openshift_data_dir: /var/lib/openshift

+ 144 - 52
roles/openshift_facts/library/openshift_facts.py

@@ -1,10 +1,6 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
 # vim: expandtab:tabstop=4:shiftwidth=4
-# disable pylint checks
-# temporarily disabled until items can be addressed:
-#   fixme - until all TODO comments have been addressed
-# pylint:disable=fixme
 """Ansible module for retrieving and setting openshift related facts"""
 
 DOCUMENTATION = '''
@@ -19,6 +15,7 @@ EXAMPLES = '''
 
 import ConfigParser
 import copy
+import os
 
 
 def hostname_valid(hostname):
@@ -166,7 +163,6 @@ def normalize_gce_facts(metadata, facts):
         facts['network']['interfaces'].append(int_info)
     _, _, zone = metadata['instance']['zone'].rpartition('/')
     facts['zone'] = zone
-    facts['external_id'] = metadata['instance']['id']
 
     # Default to no sdn for GCE deployments
     facts['use_openshift_sdn'] = False
@@ -215,7 +211,6 @@ def normalize_aws_facts(metadata, facts):
             int_info['network_id'] = None
         facts['network']['interfaces'].append(int_info)
     facts['zone'] = metadata['placement']['availability-zone']
-    facts['external_id'] = metadata['instance-id']
 
     # TODO: actually attempt to determine default local and public ips
     # by using the ansible default ip fact and the ipv4-associations
@@ -247,7 +242,7 @@ def normalize_openstack_facts(metadata, facts):
     # metadata api, should be updated if neutron exposes this.
 
     facts['zone'] = metadata['availability_zone']
-    facts['external_id'] = metadata['uuid']
+
     facts['network']['ip'] = metadata['ec2_compat']['local-ipv4']
     facts['network']['public_ip'] = metadata['ec2_compat']['public-ipv4']
 
@@ -288,14 +283,39 @@ def normalize_provider_facts(provider, metadata):
         facts = normalize_openstack_facts(metadata, facts)
     return facts
 
-def set_fluentd_facts_if_unset(facts):
-    """ Set fluentd facts if not already present in facts dict
+def set_registry_url_if_unset(facts):
+    """ Set registry_url fact if not already present in facts dict
 
         Args:
             facts (dict): existing facts
         Returns:
+            dict: the facts dict updated with the generated identity providers
+            facts if they were not already present
+    """
+    for role in ('master', 'node'):
+        if role in facts:
+            deployment_type = facts['common']['deployment_type']
+            if 'registry_url' not in facts[role]:
+                registry_url = "openshift/origin-${component}:${version}"
+                if deployment_type == 'enterprise':
+                    registry_url = "openshift3_beta/ose-${component}:${version}"
+                elif deployment_type == 'online':
+                    registry_url = ("docker-registry.ops.rhcloud.com/"
+                                    "openshift3_beta/ose-${component}:${version}")
+                facts[role]['registry_url'] = registry_url
+
+    return facts
+
+def set_fluentd_facts_if_unset(facts):
+    """ Set fluentd facts if not already present in facts dict
             dict: the facts dict updated with the generated fluentd facts if
             missing
+        Args:
+            facts (dict): existing facts
+        Returns:
+            dict: the facts dict updated with the generated fluentd
+            facts if they were not already present
+
     """
     if 'common' in facts:
         deployment_type = facts['common']['deployment_type']
@@ -304,6 +324,32 @@ def set_fluentd_facts_if_unset(facts):
             facts['common']['use_fluentd'] = use_fluentd
     return facts
 
+def set_identity_providers_if_unset(facts):
+    """ Set identity_providers fact if not already present in facts dict
+
+        Args:
+            facts (dict): existing facts
+        Returns:
+            dict: the facts dict updated with the generated identity providers
+            facts if they were not already present
+    """
+    if 'master' in facts:
+        deployment_type = facts['common']['deployment_type']
+        if 'identity_providers' not in facts['master']:
+            identity_provider = dict(
+                name='allow_all', challenge=True, login=True,
+                kind='AllowAllPasswordIdentityProvider'
+            )
+            if deployment_type == 'enterprise':
+                identity_provider = dict(
+                    name='deny_all', challenge=True, login=True,
+                    kind='DenyAllPasswordIdentityProvider'
+                )
+
+            facts['master']['identity_providers'] = [identity_provider]
+
+    return facts
+
 def set_url_facts_if_unset(facts):
     """ Set url facts if not already present in facts dict
 
@@ -314,34 +360,77 @@ def set_url_facts_if_unset(facts):
                   were not already present
     """
     if 'master' in facts:
-        for (url_var, use_ssl, port, default) in [
-                ('api_url',
-                 facts['master']['api_use_ssl'],
-                 facts['master']['api_port'],
-                 facts['common']['hostname']),
-                ('public_api_url',
-                 facts['master']['api_use_ssl'],
-                 facts['master']['api_port'],
-                 facts['common']['public_hostname']),
-                ('console_url',
-                 facts['master']['console_use_ssl'],
-                 facts['master']['console_port'],
-                 facts['common']['hostname']),
-                ('public_console_url' 'console_use_ssl',
-                 facts['master']['console_use_ssl'],
-                 facts['master']['console_port'],
-                 facts['common']['public_hostname'])]:
-            if url_var not in facts['master']:
-                scheme = 'https' if use_ssl else 'http'
-                netloc = default
-                if ((scheme == 'https' and port != '443')
-                        or (scheme == 'http' and port != '80')):
-                    netloc = "%s:%s" % (netloc, port)
-                facts['master'][url_var] = urlparse.urlunparse(
-                    (scheme, netloc, '', '', '', '')
-                )
+        api_use_ssl = facts['master']['api_use_ssl']
+        api_port = facts['master']['api_port']
+        console_use_ssl = facts['master']['console_use_ssl']
+        console_port = facts['master']['console_port']
+        console_path = facts['master']['console_path']
+        etcd_use_ssl = facts['master']['etcd_use_ssl']
+        etcd_port = facts['master']['etcd_port'],
+        hostname = facts['common']['hostname']
+        public_hostname = facts['common']['public_hostname']
+
+        if 'etcd_urls' not in facts['master']:
+            facts['master']['etcd_urls'] = [format_url(etcd_use_ssl, hostname,
+                                                       etcd_port)]
+        if 'api_url' not in facts['master']:
+            facts['master']['api_url'] = format_url(api_use_ssl, hostname,
+                                                    api_port)
+        if 'public_api_url' not in facts['master']:
+            facts['master']['public_api_url'] = format_url(api_use_ssl,
+                                                           public_hostname,
+                                                           api_port)
+        if 'console_url' not in facts['master']:
+            facts['master']['console_url'] = format_url(console_use_ssl,
+                                                        hostname,
+                                                        console_port,
+                                                        console_path)
+        if 'public_console_url' not in facts['master']:
+            facts['master']['public_console_url'] = format_url(console_use_ssl,
+                                                               public_hostname,
+                                                               console_port,
+                                                               console_path)
+    return facts
+
+def set_sdn_facts_if_unset(facts):
+    """ Set sdn facts if not already present in facts dict
+
+        Args:
+            facts (dict): existing facts
+        Returns:
+            dict: the facts dict updated with the generated sdn facts if they
+                  were not already present
+    """
+    if 'common' in facts:
+        if 'sdn_network_plugin_name' not in facts['common']:
+            use_sdn = facts['common']['use_openshift_sdn']
+            plugin = 'redhat/openshift-ovs-subnet' if use_sdn else ''
+            facts['common']['sdn_network_plugin_name'] = plugin
+
+    if 'master' in facts:
+        if 'sdn_cluster_network_cidr' not in facts['master']:
+            facts['master']['sdn_cluster_network_cidr'] = '10.1.0.0/16'
+        if 'sdn_host_subnet_length' not in facts['master']:
+            facts['master']['sdn_host_subnet_length'] = '8'
+
     return facts
 
+def format_url(use_ssl, hostname, port, path=''):
+    """ Format url based on ssl flag, hostname, port and path
+
+        Args:
+            use_ssl (bool): is ssl enabled
+            hostname (str): hostname
+            port (str): port
+            path (str): url path
+        Returns:
+            str: The generated url string
+    """
+    scheme = 'https' if use_ssl else 'http'
+    netloc = hostname
+    if (use_ssl and port != '443') or (not use_ssl and port != '80'):
+        netloc += ":%s" % port
+    return urlparse.urlunparse((scheme, netloc, path, '', '', ''))
 
 def get_current_config(facts):
     """ Get current openshift config
@@ -405,7 +494,7 @@ def get_current_config(facts):
     return current_config
 
 
-def apply_provider_facts(facts, provider_facts, roles):
+def apply_provider_facts(facts, provider_facts):
     """ Apply provider facts to supplied facts dict
 
         Args:
@@ -433,11 +522,6 @@ def apply_provider_facts(facts, provider_facts, roles):
             facts['common'][ip_var]
         )
 
-    if 'node' in roles:
-        ext_id = provider_facts.get('external_id')
-        if ext_id:
-            facts['node']['external_id'] = ext_id
-
     facts['provider'] = provider_facts
     return facts
 
@@ -571,11 +655,14 @@ class OpenShiftFacts(object):
 
         defaults = self.get_defaults(roles)
         provider_facts = self.init_provider_facts()
-        facts = apply_provider_facts(defaults, provider_facts, roles)
+        facts = apply_provider_facts(defaults, provider_facts)
         facts = merge_facts(facts, local_facts)
         facts['current_config'] = get_current_config(facts)
         facts = set_url_facts_if_unset(facts)
         facts = set_fluentd_facts_if_unset(facts)
+        facts = set_identity_providers_if_unset(facts)
+        facts = set_registry_url_if_unset(facts)
+        facts = set_sdn_facts_if_unset(facts)
         return dict(openshift=facts)
 
     def get_defaults(self, roles):
@@ -589,31 +676,36 @@ class OpenShiftFacts(object):
         """
         defaults = dict()
 
-        common = dict(use_openshift_sdn=True)
         ip_addr = self.system_facts['default_ipv4']['address']
-        common['ip'] = ip_addr
-        common['public_ip'] = ip_addr
-
         exit_code, output, _ = module.run_command(['hostname', '-f'])
         hostname_f = output.strip() if exit_code == 0 else ''
         hostname_values = [hostname_f, self.system_facts['nodename'],
                            self.system_facts['fqdn']]
         hostname = choose_hostname(hostname_values)
 
-        common['hostname'] = hostname
-        common['public_hostname'] = hostname
+        common = dict(use_openshift_sdn=True, ip=ip_addr, public_ip=ip_addr,
+                      deployment_type='origin', hostname=hostname,
+                      public_hostname=hostname)
+        common['client_binary'] = 'oc' if os.path.isfile('/usr/bin/oc') else 'osc'
+        common['admin_binary'] = 'oadm' if os.path.isfile('/usr/bin/oadm') else 'osadm'
         defaults['common'] = common
 
         if 'master' in roles:
             master = dict(api_use_ssl=True, api_port='8443',
                           console_use_ssl=True, console_path='/console',
-                          console_port='8443', etcd_use_ssl=False,
-                          etcd_port='4001', portal_net='172.30.17.0/24')
+                          console_port='8443', etcd_use_ssl=True,
+                          etcd_port='4001', portal_net='172.30.0.0/16',
+                          embedded_etcd=True, embedded_kube=True,
+                          embedded_dns=True, dns_port='53',
+                          bind_addr='0.0.0.0', session_max_seconds=3600,
+                          session_name='ssn', session_secrets_file='',
+                          access_token_max_seconds=86400,
+                          auth_token_max_seconds=500,
+                          oauth_grant_method='auto')
             defaults['master'] = master
 
         if 'node' in roles:
-            node = dict(external_id=common['hostname'], pod_cidr='',
-                        labels={}, annotations={})
+            node = dict(pod_cidr='', labels={}, annotations={})
             node['resources_cpu'] = self.system_facts['processor_cores']
             node['resources_memory'] = int(
                 int(self.system_facts['memtotal_mb']) * 1024 * 1024 * 0.75

+ 76 - 34
roles/openshift_master/tasks/main.yml

@@ -1,10 +1,16 @@
 ---
-# TODO: actually have api_port, api_use_ssl, console_port, console_use_ssl,
-# etcd_use_ssl actually change the master config.
+# TODO: add validation for openshift_master_identity_providers
+# TODO: add ability to configure certificates given either a local file to
+#       point to or certificate contents, set in default cert locations.
+
+- assert:
+    that:
+    - openshift_master_oauth_grant_method in openshift_master_valid_grant_methods
+  when: openshift_master_oauth_grant_method is defined
 
 - name: Set master OpenShift facts
   openshift_facts:
-    role: 'master'
+    role: master
     local_facts:
       debug_level: "{{ openshift_master_debug_level | default(openshift.common.debug_level) }}"
       api_port: "{{ openshift_master_api_port | default(None) }}"
@@ -18,15 +24,32 @@
       public_console_url: "{{ openshift_master_public_console_url | default(None) }}"
       etcd_port: "{{ openshift_master_etcd_port | default(None) }}"
       etcd_use_ssl: "{{ openshift_master_etcd_use_ssl | default(None) }}"
+      etcd_urls: "{{ openshift_master_etcd_urls | default(None) }}"
+      embedded_etcd: "{{ openshift_master_embedded_etcd | default(None) }}"
+      embedded_kube: "{{ openshift_master_embedded_kube | default(None) }}"
+      embedded_dns: "{{ openshift_master_embedded_dns | default(None) }}"
+      dns_port: "{{ openshift_master_dns_port | default(None) }}"
+      bind_addr: "{{ openshift_master_bind_addr | default(None) }}"
       portal_net: "{{ openshift_master_portal_net | default(None) }}"
+      session_max_seconds: "{{ openshift_master_session_max_seconds | default(None) }}"
+      session_name: "{{ openshift_master_session_name | default(None) }}"
+      session_secrets_file: "{{ openshift_master_session_secrets_file | default(None) }}"
+      access_token_max_seconds: "{{ openshift_master_access_token_max_seconds | default(None) }}"
+      auth_token_max_seconds: "{{ openshift_master_auth_token_max_seconds | default(None) }}"
+      identity_providers: "{{ openshift_master_identity_providers | default(None) }}"
+      registry_url: "{{ oreg_url | default(None) }}"
+      oauth_grant_method: "{{ openshift_master_oauth_grant_method | default(None) }}"
+      sdn_cluster_network_cidr: "{{ osm_cluster_network_cidr | default(None) }}"
+      sdn_host_subnet_length: "{{ osm_host_subnet_length | default(None) }}"
 
 # TODO: These values need to be configurable
 - name: Set dns OpenShift facts
   openshift_facts:
-    role: 'dns'
+    role: dns
     local_facts:
       ip: "{{ openshift.common.ip }}"
-      domain: local
+      domain: cluster.local
+  when: openshift.master.embedded_dns
 
 - name: Install OpenShift Master package
   yum: pkg=openshift-master state=installed
@@ -41,34 +64,53 @@
     path: "{{ openshift_master_config_dir }}"
     state: directory
 
-# TODO: should probably use a template lookup for this
-# TODO: should allow for setting --etcd, --kubernetes options
-# TODO: recreate config if values change
-- name: Use enterprise default for oreg_url if not set
-  set_fact:
-    oreg_url: "openshift3_beta/ose-${component}:${version}"
-  when: openshift.common.deployment_type == 'enterprise' and oreg_url is not defined
-
-- name: Use online default for oreg_url if not set
-  set_fact:
-    oreg_url: "docker-registry.ops.rhcloud.com/openshift3_beta/ose-${component}:${version}"
-  when: openshift.common.deployment_type == 'online' and oreg_url is not defined
+- name: Create the master certificates if they do not already exist
+  command: >
+    {{ openshift.common.admin_binary }} create-master-certs
+      --hostnames={{ openshift.common.hostname }},{{ openshift.common.public_hostname }}
+      --master={{ openshift.master.api_url }}
+      --public-master={{ openshift.master.public_api_url }}
+      --cert-dir={{ openshift_master_config_dir }} --overwrite=false
+  args:
+    creates: "{{ openshift_master_config_dir }}/master.server.key"
 
-# TODO: Need to get a flag added for volumes path, i think it'll get put in
-- name: Create master config
+- name: Create the policy file if it does not already exist
   command: >
-    /usr/bin/openshift start master
-    --write-config={{ openshift_master_config_dir }}
-    --portal-net={{ openshift.master.portal_net }}
-    --etcd-dir={{ openshift_data_dir }}/openshift.local.etcd
-    --master={{ openshift.master.api_url }}
-    --public-master={{ openshift.master.public_api_url }}
-    --listen={{ 'https' if openshift.master.api_use_ssl else 'http' }}://0.0.0.0:{{ openshift.master.api_port }}
-    {{ ('--images=' ~ oreg_url) if (oreg_url | default('', true) != '') else '' }}
-    {{ ('--nodes=' ~ openshift_node_ips | join(',')) if (openshift_node_ips | default('', true) != '') else '' }}
+    {{ openshift.common.admin_binary }} create-bootstrap-policy-file
+      --filename={{ openshift_master_policy }}
   args:
-    chdir: "{{ openshift_master_config_dir }}"
-    creates: "{{ openshift_master_config_file }}"
+    creates: "{{ openshift_master_policy }}"
+  notify:
+  - restart openshift-master
+
+- name: Create the scheduler config
+  template:
+    dest: "{{ openshift_master_scheduler_conf }}"
+    src: scheduler.json.j2
+  notify:
+  - restart openshift-master
+
+- name: Install httpd-tools if needed
+  yum: pkg=httpd-tools state=installed
+  when: item.kind == 'HTPasswdPasswordIdentityProvider'
+  with_items: openshift.master.identity_providers
+
+- name: Create the htpasswd file if needed
+  copy:
+    dest: "{{ item.filename }}"
+    content: ""
+    mode: 0600
+    force: no
+  when: item.kind == 'HTPasswdPasswordIdentityProvider'
+  with_items: openshift.master.identity_providers
+
+# TODO: add the validate parameter when there is a validation command to run
+- name: Create master config
+  template:
+    dest: "{{ openshift_master_config_file }}"
+    src: master.yaml.v1.j2
+  notify:
+  - restart openshift-master
 
 - name: Configure OpenShift settings
   lineinfile:
@@ -79,7 +121,7 @@
     - regex: '^OPTIONS='
       line: "OPTIONS=--loglevel={{ openshift.master.debug_level }}"
     - regex: '^CONFIG_FILE='
-      line: "CONFIG_FILE={{ openshift_master_config_file}}"
+      line: "CONFIG_FILE={{ openshift_master_config_file }}"
   notify:
   - restart openshift-master
 
@@ -99,15 +141,15 @@
 
 # TODO: Update this file if the contents of the source file are not present in
 # the dest file, will need to make sure to ignore things that could be added
-- name: Create the OpenShift client config(s)
-  command: cp {{ openshift_master_config_dir }}/openshift-client.kubeconfig ~{{ item }}/.config/openshift/.config
+- name: Copy the OpenShift admin client config(s)
+  command: cp {{ openshift_master_config_dir }}/admin.kubeconfig ~{{ item }}/.config/openshift/.config
   args:
     creates: ~{{ item }}/.config/openshift/.config
   with_items:
   - root
   - "{{ ansible_ssh_user }}"
 
-- name: Update the permissions on the OpenShift client config(s)
+- name: Update the permissions on the OpenShift admin client config(s)
   file:
     path: "~{{ item }}/.config/openshift/.config"
     state: file
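
The new identity-provider tasks above key off each entry's `kind`. As an illustrative sketch (the provider names, filename, and client values here are hypothetical, not taken from a real inventory), this is how the `HTPasswdPasswordIdentityProvider` conditionals behave over an `openshift_master_identity_providers`-style list:

```python
# Hypothetical sample of what openshift_master_identity_providers might contain.
providers = [
    {"name": "htpasswd_auth", "kind": "HTPasswdPasswordIdentityProvider",
     "challenge": True, "login": True, "filename": "/etc/openshift/htpasswd"},
    {"name": "google", "kind": "GoogleIdentityProvider",
     "challenge": False, "login": True,
     "clientID": "example-id", "clientSecret": "example-secret"},
]

def needs_htpasswd_tools(identity_providers):
    """Mirrors the when-clause on 'Install httpd-tools if needed'."""
    return any(p["kind"] == "HTPasswdPasswordIdentityProvider"
               for p in identity_providers)

def htpasswd_files(identity_providers):
    """Files the 'Create the htpasswd file if needed' task would create."""
    return [p["filename"] for p in identity_providers
            if p["kind"] == "HTPasswdPasswordIdentityProvider"]

print(needs_htpasswd_tools(providers))  # True for the sample list above
print(htpasswd_files(providers))
```

In the role itself the same per-item test runs once per list entry, so only hosts whose inventory actually defines an htpasswd provider pull in `httpd-tools`.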

+ 98 - 0
roles/openshift_master/templates/master.yaml.v1.j2

@@ -0,0 +1,98 @@
+apiVersion: v1
+assetConfig:
+  logoutURL: ""
+  masterPublicURL: {{ openshift.master.public_api_url }}
+  publicURL: {{ openshift.master.public_console_url }}/
+  servingInfo:
+    bindAddress: {{ openshift.master.bind_addr }}:{{ openshift.master.console_port }}
+    certFile: master.server.crt
+    clientCA: ""
+    keyFile: master.server.key
+corsAllowedOrigins:
+{# TODO: add support for user specified corsAllowedOrigins #}
+{% for origin in ['127.0.0.1', 'localhost', openshift.common.hostname, openshift.common.ip, openshift.common.public_hostname, openshift.common.public_ip] %}
+  - {{ origin }}
+{% endfor %}
+{% if openshift.master.embedded_dns %}
+dnsConfig:
+  bindAddress: {{ openshift.master.bind_addr }}:{{ openshift.master.dns_port }}
+{% endif %}
+etcdClientInfo:
+  ca: ca.crt
+  certFile: master.etcd-client.crt
+  keyFile: master.etcd-client.key
+  urls:
+{% for etcd_url in openshift.master.etcd_urls %}
+    - {{ etcd_url }}
+{% endfor %}
+{% if openshift.master.embedded_etcd %}
+etcdConfig:
+  address: {{ openshift.common.hostname }}:{{ openshift.master.etcd_port }}
+  peerAddress: {{ openshift.common.hostname }}:7001
+  peerServingInfo:
+    bindAddress: {{ openshift.master.bind_addr }}:7001
+    certFile: etcd.server.crt
+    clientCA: ca.crt
+    keyFile: etcd.server.key
+  servingInfo:
+    bindAddress: {{ openshift.master.bind_addr }}:{{ openshift.master.etcd_port }}
+    certFile: etcd.server.crt
+    clientCA: ca.crt
+    keyFile: etcd.server.key
+  storageDirectory: {{ openshift_data_dir }}/openshift.local.etcd
+{% endif %}
+etcdStorageConfig:
+  kubernetesStoragePrefix: kubernetes.io
+  kubernetesStorageVersion: v1beta3
+  openShiftStoragePrefix: openshift.io
+  openShiftStorageVersion: v1beta3
+imageConfig:
+  format: {{ openshift.master.registry_url }}
+  latest: false
+kind: MasterConfig
+kubeletClientInfo:
+{# TODO: allow user specified kubelet port #}
+  ca: ca.crt
+  certFile: master.kubelet-client.crt
+  keyFile: master.kubelet-client.key
+  port: 10250
+{% if openshift.master.embedded_kube %}
+kubernetesMasterConfig:
+{# TODO: support overriding masterCount #}
+  masterCount: 1
+  masterIP: ""
+  schedulerConfigFile: {{ openshift_master_scheduler_conf }}
+  servicesSubnet: {{ openshift.master.portal_net }}
+  staticNodeNames: {{ openshift_node_ips | default([], true) }}
+{% endif %}
+masterClients:
+{# TODO: allow user to set externalKubernetesKubeConfig #}
+  deployerKubeConfig: openshift-deployer.kubeconfig
+  externalKubernetesKubeConfig: ""
+  openshiftLoopbackKubeConfig: openshift-client.kubeconfig
+masterPublicURL: {{ openshift.master.public_api_url }}
+networkConfig:
+  clusterNetworkCIDR: {{ openshift.master.sdn_cluster_network_cidr }}
+  hostSubnetLength: {{ openshift.master.sdn_host_subnet_length }}
+  networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
+{% include 'v1_partials/oauthConfig.j2' %}
+policyConfig:
+  bootstrapPolicyFile: {{ openshift_master_policy }}
+  openshiftSharedResourcesNamespace: openshift
+{# TODO: Allow users to override projectConfig items #}
+projectConfig:
+  defaultNodeSelector: ""
+  projectRequestMessage: ""
+  projectRequestTemplate: ""
+serviceAccountConfig:
+  managedNames:
+  - default
+  - builder
+  privateKeyFile: serviceaccounts.private.key
+  publicKeyFiles:
+  - serviceaccounts.public.key
+servingInfo:
+  bindAddress: {{ openshift.master.bind_addr }}:{{ openshift.master.api_port }}
+  certFile: master.server.crt
+  clientCA: ca.crt
+  keyFile: master.server.key

+ 12 - 0
roles/openshift_master/templates/scheduler.json.j2

@@ -0,0 +1,12 @@
+{
+  "predicates": [
+    {"name": "PodFitsResources"},
+    {"name": "PodFitsPorts"},
+    {"name": "NoDiskConflict"},
+    {"name": "Region", "argument": {"serviceAffinity": {"labels": ["region"]}}}
+  ], "priorities": [
+    {"name": "LeastRequestedPriority", "weight": 1},
+    {"name": "ServiceSpreadingPriority", "weight": 1},
+    {"name": "Zone", "weight": 2, "argument": {"serviceAntiAffinity": {"label": "zone"}}}
+  ]
+}
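
Since the template is static JSON, a quick sanity check (illustrative, not part of the role) is to parse the rendered output and confirm the expected sections are present:

```python
import json

# The body of scheduler.json.j2 as the template renders it.
scheduler_json = """
{
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "PodFitsPorts"},
    {"name": "NoDiskConflict"},
    {"name": "Region", "argument": {"serviceAffinity": {"labels": ["region"]}}}
  ], "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "ServiceSpreadingPriority", "weight": 1},
    {"name": "Zone", "weight": 2, "argument": {"serviceAntiAffinity": {"label": "zone"}}}
  ]
}
"""

config = json.loads(scheduler_json)  # raises ValueError on malformed JSON
predicate_names = [p["name"] for p in config["predicates"]]
print(predicate_names)
```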

+ 78 - 0
roles/openshift_master/templates/v1_partials/oauthConfig.j2

@@ -0,0 +1,78 @@
+{% macro identity_provider_config(identity_provider) %}
+      apiVersion: v1
+      kind: {{ identity_provider.kind }}
+{% if identity_provider.kind == 'HTPasswdPasswordIdentityProvider' %}
+      file: {{ identity_provider.filename }}
+{% elif identity_provider.kind == 'BasicAuthPasswordIdentityProvider' %}
+      url: {{ identity_provider.url }}
+{% for key in ('ca', 'certFile', 'keyFile') %}
+{% if key in identity_provider %}
+      {{ key }}: {{ identity_provider[key] }}
+{% endif %}
+{% endfor %}
+{% elif identity_provider.kind == 'RequestHeaderIdentityProvider' %}
+      headers: {{ identity_provider.headers }}
+{% if 'clientCA' in identity_provider %}
+      clientCA: {{ identity_provider.clientCA }}
+{% endif %}
+{% elif identity_provider.kind == 'GitHubIdentityProvider' %}
+      clientID: {{ identity_provider.clientID }}
+      clientSecret: {{ identity_provider.clientSecret }}
+{% elif identity_provider.kind == 'GoogleIdentityProvider' %}
+      clientID: {{ identity_provider.clientID }}
+      clientSecret: {{ identity_provider.clientSecret }}
+{% if 'hostedDomain' in identity_provider %}
+      hostedDomain: {{ identity_provider.hostedDomain }}
+{% endif %}
+{% elif identity_provider.kind == 'OpenIDIdentityProvider' %}
+      clientID: {{ identity_provider.clientID }}
+      clientSecret: {{ identity_provider.clientSecret }}
+      claims:
+        id: {{ identity_provider.claims.id }}
+{% for claim_key in ('preferredUsername', 'name', 'email') %}
+{% if claim_key in identity_provider.claims %}
+        {{ claim_key }}: {{ identity_provider.claims[claim_key] }}
+{% endif %}
+{% endfor %}
+      urls:
+        authorize: {{ identity_provider.urls.authorize }}
+        token: {{ identity_provider.urls.token }}
+{% if 'userInfo' in identity_provider.urls %}
+        userInfo: {{ identity_provider.urls.userInfo }}
+{% endif %}
+{% if 'extraScopes' in identity_provider %}
+      extraScopes:
+{% for scope in identity_provider.extraScopes %}
+      - {{ scope }}
+{% endfor %}
+{% endif %}
+{% if 'extraAuthorizeParameters' in identity_provider %}
+      extraAuthorizeParameters:
+{% for param_key, param_value in identity_provider.extraAuthorizeParameters.iteritems() %}
+        {{ param_key }}: {{ param_value }}
+{% endfor %}
+{% endif %}
+{% endif %}
+{% endmacro %}
+oauthConfig:
+  assetPublicURL: {{ openshift.master.public_console_url }}/
+  grantConfig:
+    method: {{ openshift.master.oauth_grant_method }}
+  identityProviders:
+{% for identity_provider in openshift.master.identity_providers %}
+  - name: {{ identity_provider.name }}
+    challenge: {{ identity_provider.challenge }}
+    login: {{ identity_provider.login }}
+    provider:
+{{ identity_provider_config(identity_provider) }}
+{%- endfor %}
+  masterPublicURL: {{ openshift.master.public_api_url }}
+  masterURL: {{ openshift.master.api_url }}
+  sessionConfig:
+    sessionMaxAgeSeconds: {{ openshift.master.session_max_seconds }}
+    sessionName: {{ openshift.master.session_name }}
+    sessionSecretsFile: {{ openshift.master.session_secrets_file }}
+  tokenConfig:
+    accessTokenMaxAgeSeconds: {{ openshift.master.access_token_max_seconds }}
+    authorizeTokenMaxAgeSeconds: {{ openshift.master.auth_token_max_seconds }}
+{# Comment to preserve newline after authorizeTokenMaxAgeSeconds #}

+ 7 - 3
roles/openshift_master/vars/main.yml

@@ -1,6 +1,10 @@
 ---
-openshift_data_dir: /var/lib/openshift
 openshift_master_config_dir: /etc/openshift/master
 openshift_master_config_file: "{{ openshift_master_config_dir }}/master-config.yaml"
-openshift_master_ca_cert: "{{ openshift_master_config_dir }}/ca.crt"
-openshift_master_ca_key: "{{ openshift_master_config_dir }}/ca.key"
+openshift_master_scheduler_conf: "{{ openshift_master_config_dir }}/scheduler.json"
+openshift_master_policy: "{{ openshift_master_config_dir }}/policy.json"
+
+openshift_master_valid_grant_methods:
+- auto
+- prompt
+- deny

+ 4 - 0
roles/openshift_node/defaults/main.yml

@@ -2,3 +2,7 @@
 os_firewall_allow:
 - service: OpenShift kubelet
   port: 10250/tcp
+- service: http
+  port: 80/tcp
+- service: https
+  port: 443/tcp

+ 0 - 1
roles/openshift_node/handlers/main.yml

@@ -1,4 +1,3 @@
 ---
 - name: restart openshift-node
   service: name=openshift-node state=restarted
-  when: not openshift.common.use_openshift_sdn|bool

+ 38 - 29
roles/openshift_node/tasks/main.yml

@@ -1,44 +1,58 @@
 ---
 # TODO: allow for overriding default ports where possible
-# TODO: trigger the external service when restart is needed
-# TODO: work with upstream to fix naming of 'master-client.crt/master-client.key'
 
 - name: Set node OpenShift facts
   openshift_facts:
-    role: 'node'
+    role: "{{ item.role }}"
+    local_facts: "{{ item.local_facts }}"
+  with_items:
+  - role: common
+    local_facts:
+      hostname: "{{ openshift_hostname | default(none) }}"
+      public_hostname: "{{ openshift_public_hostname | default(none) }}"
+      deployment_type: "{{ openshift_deployment_type }}"
+  - role: node
     local_facts:
+      resources_cpu: "{{ openshift_node_resources_cpu | default(none) }}"
+      resources_memory: "{{ openshift_node_resources_memory | default(none) }}"
+      pod_cidr: "{{ openshift_node_pod_cidr | default(none) }}"
+      labels: "{{ openshift_node_labels | default(none) }}"
+      annotations: "{{ openshift_node_annotations | default(none) }}"
+      registry_url: "{{ oreg_url | default(none) }}"
       debug_level: "{{ openshift_node_debug_level | default(openshift.common.debug_level) }}"
 
-- name: Test if node certs and config exist
-  stat: path={{ item }}
-  failed_when: not result.stat.exists
-  register: result
-  with_items:
-  - "{{ openshift_node_cert_dir }}"
-  - "{{ openshift_node_cert_dir }}/ca.crt"
-  - "{{ openshift_node_cert_dir }}/master-client.crt"
-  - "{{ openshift_node_cert_dir }}/master-client.key"
-  - "{{ openshift_node_cert_dir }}/node.kubeconfig"
-  - "{{ openshift_node_cert_dir }}/node-config.yaml"
-  - "{{ openshift_node_cert_dir }}/server.crt"
-  - "{{ openshift_node_cert_dir }}/server.key"
-
 - name: Install OpenShift Node package
   yum: pkg=openshift-node state=installed
-  register: install_result
+  register: node_install_result
+
+- name: Install openshift-sdn-ovs
+  yum: pkg=openshift-sdn-ovs state=installed
+  register: sdn_install_result
+  when: openshift.common.use_openshift_sdn
 
 - name: Reload systemd units
   command: systemctl daemon-reload
-  when: install_result | changed
+  when: (node_install_result | changed or (openshift.common.use_openshift_sdn
+          and sdn_install_result | changed))
+
+# TODO: add the validate parameter when there is a validation command to run
+- name: Create the Node config
+  template:
+    dest: "{{ openshift_node_config_file }}"
+    src: node.yaml.v1.j2
+  notify:
+  - restart openshift-node
 
-# --create-certs=false is a temporary workaround until
-# https://github.com/openshift/origin/pull/1361 is merged upstream and it is
-# the default for nodes
 - name: Configure OpenShift Node settings
   lineinfile:
     dest: /etc/sysconfig/openshift-node
-    regexp: '^OPTIONS='
-    line: "OPTIONS=\"--loglevel={{ openshift.node.debug_level }} --config={{ openshift_node_cert_dir }}/node-config.yaml\""
+    regexp: "{{ item.regex }}"
+    line: "{{ item.line }}"
+  with_items:
+    - regex: '^OPTIONS='
+      line: "OPTIONS=--loglevel={{ openshift.node.debug_level }}"
+    - regex: '^CONFIG_FILE='
+      line: "CONFIG_FILE={{ openshift_node_config_file }}"
   notify:
   - restart openshift-node
 
@@ -47,8 +61,3 @@
 
 - name: Start and enable openshift-node
   service: name=openshift-node enabled=yes state=started
-  when: not openshift.common.use_openshift_sdn|bool
-
-- name: Disable openshift-node if openshift-node is managed externally
-  service: name=openshift-node enabled=false
-  when: openshift.common.use_openshift_sdn|bool
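
The sysconfig change above now manages two lines, `OPTIONS=` and `CONFIG_FILE=`, via `lineinfile`. As a rough stand-in for what that loop does (file body and values below are examples, not the role's defaults):

```python
import re

def set_sysconfig_lines(content, settings):
    """Replace the first line matching each regex, or append if absent --
    a simplified emulation of lineinfile's regexp/line behavior."""
    lines = content.splitlines()
    for regex, line in settings:
        pattern = re.compile(regex)
        for i, existing in enumerate(lines):
            if pattern.search(existing):
                lines[i] = line
                break
        else:
            lines.append(line)
    return "\n".join(lines) + "\n"

body = "OPTIONS=--loglevel=0\nCONFIG_FILE=\n"
new = set_sysconfig_lines(body, [
    (r"^OPTIONS=", "OPTIONS=--loglevel=2"),
    (r"^CONFIG_FILE=", "CONFIG_FILE=/etc/openshift/node/node-config.yaml"),
])
print(new)
```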

+ 18 - 0
roles/openshift_node/templates/node.yaml.v1.j2

@@ -0,0 +1,18 @@
+allowDisabledDocker: false
+apiVersion: v1
+dnsDomain: {{ hostvars[openshift_first_master].openshift.dns.domain }}
+dnsIP: {{ hostvars[openshift_first_master].openshift.dns.ip }}
+imageConfig:
+  format: {{ openshift.node.registry_url }}
+  latest: false
+kind: NodeConfig
+masterKubeConfig: node.kubeconfig
+networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
+nodeName: {{ openshift.common.hostname }}
+podManifestConfig: null
+servingInfo:
+  bindAddress: 0.0.0.0:10250
+  certFile: server.crt
+  clientCA: ca.crt
+  keyFile: server.key
+volumeDirectory: {{ openshift_data_dir }}/openshift.local.volumes

+ 2 - 1
roles/openshift_node/vars/main.yml

@@ -1,2 +1,3 @@
 ---
-openshift_node_cert_dir: /etc/openshift/node
+openshift_node_config_dir: /etc/openshift/node
+openshift_node_config_file: "{{ openshift_node_config_dir }}/node-config.yaml"

+ 0 - 2
roles/openshift_register_nodes/defaults/main.yml

@@ -1,2 +0,0 @@
----
-openshift_kube_api_version: v1beta1

+ 69 - 157
roles/openshift_register_nodes/library/kubernetes_register_node.py

@@ -3,15 +3,13 @@
 # vim: expandtab:tabstop=4:shiftwidth=4
 #
 # disable pylint checks
-# temporarily disabled until items can be addressed:
-#   fixme - until all TODO comments have been addressed
 # permanently disabled unless someone wants to refactor the object model:
 #   too-few-public-methods
 #   no-self-use
 #   too-many-arguments
 #   too-many-locals
 #   too-many-branches
-# pylint:disable=fixme, too-many-arguments, no-self-use
+# pylint:disable=too-many-arguments, no-self-use
 # pylint:disable=too-many-locals, too-many-branches, too-few-public-methods
 """Ansible module to register a kubernetes node to the cluster"""
 
@@ -41,24 +39,6 @@ options:
             - IP Address to associate with the node when registering.
               Available in the following API versions: v1beta1.
         required: false
-    hostnames:
-        default: []
-        description:
-            - Valid hostnames for this node. Available in the following API
-              versions: v1beta3.
-        required: false
-    external_ips:
-        default: []
-        description:
-            - External IP Addresses for this node. Available in the following API
-              versions: v1beta3.
-        required: false
-    internal_ips:
-        default: []
-        description:
-            - Internal IP Addresses for this node. Available in the following API
-              versions: v1beta3.
-        required: false
     cpu:
         default: null
         description:
@@ -87,17 +67,6 @@ EXAMPLES = '''
     hostIP: 192.168.1.1
     cpu: 1
     memory: 500000000
-
-# Node registration using the v1beta3 API, setting an alternate hostname,
-# internalIP, externalIP and assigning 3.5 CPU cores and 1 TiB of Memory
-- openshift_register_node:
-    name: ose3.node.example.com
-    api_version: v1beta3
-    external_ips: ['192.168.1.5']
-    internal_ips: ['10.0.0.5']
-    hostnames: ['ose2.node.internal.local']
-    cpu: 3.5
-    memory: 1Ti
 '''
 
 
@@ -313,57 +282,11 @@ class NodeSpec(object):
         """
         return Util.remove_empty_elements(self.spec)
 
-class NodeStatus(object):
-    """ Kubernetes Node Status
-
-        Attributes:
-            status (dict): A dictionary representing the node status
-
-        Args:
-            version (str): kubernetes api version
-            externalIPs (list, optional): externalIPs for the node
-            internalIPs (list, optional): internalIPs for the node
-            hostnames (list, optional): hostnames for the node
-    """
-    def add_addresses(self, address_type, addresses):
-        """ Adds addresses of the specified type
-
-            Args:
-                address_type (str): address type
-                addresses (list): addresses to add
-        """
-        address_list = []
-        for address in addresses:
-            address_list.append(dict(type=address_type, address=address))
-        return address_list
-
-    def __init__(self, version, externalIPs=None, internalIPs=None,
-                 hostnames=None):
-        if version == 'v1beta3':
-            addresses = []
-            if externalIPs is not None:
-                addresses += self.add_addresses('ExternalIP', externalIPs)
-            if internalIPs is not None:
-                addresses += self.add_addresses('InternalIP', internalIPs)
-            if hostnames is not None:
-                addresses += self.add_addresses('Hostname', hostnames)
-
-            self.status = dict(addresses=addresses)
-
-    def get_status(self):
-        """ Get the dict representing the node status
-
-            Returns:
-                dict: representation of the node status with any empty elements
-                    removed
-        """
-        return Util.remove_empty_elements(self.status)
-
 class Node(object):
     """ Kubernetes Node
 
         Attributes:
-            status (dict): A dictionary representing the node
+            node (dict): A dictionary representing the node
 
         Args:
             module (AnsibleModule):
@@ -371,9 +294,6 @@ class Node(object):
             version (str, optional): kubernetes api version
             node_name (str, optional): name for node
             hostIP (str, optional): node host ip
-            hostnames (list, optional): hostnames for the node
-            externalIPs (list, optional): externalIPs for the node
-            internalIPs (list, optional): internalIPs for the node
             cpu (str, optional): cpu resources for the node
             memory (str, optional): memory resources for the node
             labels (list, optional): labels for the node
@@ -382,8 +302,7 @@ class Node(object):
             externalID (str, optional): external id of the node
     """
     def __init__(self, module, client_opts, version='v1beta1', node_name=None,
-                 hostIP=None, hostnames=None, externalIPs=None,
-                 internalIPs=None, cpu=None, memory=None, labels=None,
+                 hostIP=None, cpu=None, memory=None, labels=None,
                  annotations=None, podCIDR=None, externalID=None):
         self.module = module
         self.client_opts = client_opts
@@ -405,9 +324,7 @@ class Node(object):
                              apiVersion=version,
                              metadata=metadata,
                              spec=NodeSpec(version, cpu, memory, podCIDR,
-                                           externalID),
-                             status=NodeStatus(version, externalIPs,
-                                               internalIPs, hostnames))
+                                           externalID))
 
     def get_name(self):
         """ Get the name for the node
@@ -432,7 +349,6 @@ class Node(object):
             node['resources'] = self.node['resources'].get_resources()
         elif self.node['apiVersion'] == 'v1beta3':
             node['spec'] = self.node['spec'].get_spec()
-            node['status'] = self.node['status'].get_status()
         return Util.remove_empty_elements(node)
 
     def exists(self):
@@ -473,52 +389,15 @@ class Node(object):
         else:
             return True
 
-def main():
-    """ main """
-    module = AnsibleModule(
-        argument_spec=dict(
-            name=dict(required=True, type='str'),
-            host_ip=dict(type='str'),
-            hostnames=dict(type='list', default=[]),
-            external_ips=dict(type='list', default=[]),
-            internal_ips=dict(type='list', default=[]),
-            api_version=dict(type='str', default='v1beta1',
-                             choices=['v1beta1', 'v1beta3']),
-            cpu=dict(type='str'),
-            memory=dict(type='str'),
-            # TODO: needs documented
-            labels=dict(type='dict', default={}),
-            # TODO: needs documented
-            annotations=dict(type='dict', default={}),
-            # TODO: needs documented
-            pod_cidr=dict(type='str'),
-            # TODO: needs documented
-            external_id=dict(type='str'),
-            # TODO: needs documented
-            client_config=dict(type='str'),
-            # TODO: needs documented
-            client_cluster=dict(type='str', default='master'),
-            # TODO: needs documented
-            client_context=dict(type='str', default='default'),
-            # TODO: needs documented
-            client_namespace=dict(type='str', default='default'),
-            # TODO: needs documented
-            client_user=dict(type='str', default='system:openshift-client'),
-            # TODO: needs documented
-            kubectl_cmd=dict(type='list', default=['kubectl']),
-            # TODO: needs documented
-            kubeconfig_flag=dict(type='str'),
-            # TODO: needs documented
-            default_client_config=dict(type='str')
-        ),
-        mutually_exclusive=[
-            ['host_ip', 'external_ips'],
-            ['host_ip', 'internal_ips'],
-            ['host_ip', 'hostnames'],
-        ],
-        supports_check_mode=True
-    )
+def generate_client_opts(module):
+    """ Generates the client options
 
+        Args:
+            module(AnsibleModule)
+
+        Returns:
+            str: client options
+    """
     client_config = '~/.kube/.kubeconfig'
     if 'default_client_config' in module.params:
         client_config = module.params['default_client_config']
@@ -533,8 +412,7 @@ def main():
         kubeconfig_flag = '--kubeconfig'
         if 'kubeconfig_flag' in module.params:
             kubeconfig_flag = module.params['kubeconfig_flag']
-        client_opts.append(kubeconfig_flag + '=' +
-                           os.path.expanduser(module.params['client_config']))
+        client_opts.append(kubeconfig_flag + '=' + os.path.expanduser(module.params['client_config']))
 
     try:
         config = ClientConfig(client_opts, module)
@@ -547,51 +425,85 @@ def main():
         if client_context != config.current_context():
             client_opts.append("--context=%s" % client_context)
     else:
-        module.fail_json(msg="Context %s not found in client config" %
-                         client_context)
+        module.fail_json(msg="Context %s not found in client config" % client_context)
 
     client_user = module.params['client_user']
     if config.has_user(client_user):
         if client_user != config.get_user_for_context(client_context):
             client_opts.append("--user=%s" % client_user)
     else:
-        module.fail_json(msg="User %s not found in client config" %
-                         client_user)
+        module.fail_json(msg="User %s not found in client config" % client_user)
 
     client_cluster = module.params['client_cluster']
     if config.has_cluster(client_cluster):
         if client_cluster != config.get_cluster_for_context(client_context):
             client_opts.append("--cluster=%s" % client_cluster)
     else:
-        module.fail_json(msg="Cluster %s not found in client config" %
-                         client_cluster)
+        module.fail_json(msg="Cluster %s not found in client config" % client_cluster)
 
     client_namespace = module.params['client_namespace']
     if client_namespace != config.get_namespace_for_context(client_context):
         client_opts.append("--namespace=%s" % client_namespace)
 
-    node = Node(module, client_opts, module.params['api_version'],
-                module.params['name'], module.params['host_ip'],
-                module.params['hostnames'], module.params['external_ips'],
-                module.params['internal_ips'], module.params['cpu'],
-                module.params['memory'], module.params['labels'],
-                module.params['annotations'], module.params['pod_cidr'],
-                module.params['external_id'])
+    return client_opts
+
+
+def main():
+    """ main """
+    module = AnsibleModule(
+        argument_spec=dict(
+            name=dict(required=True, type='str'),
+            host_ip=dict(type='str'),
+            api_version=dict(type='str', default='v1beta1',
+                             choices=['v1beta1', 'v1beta3']),
+            cpu=dict(type='str'),
+            memory=dict(type='str'),
+            # TODO: needs documented
+            labels=dict(type='dict', default={}),
+            # TODO: needs documented
+            annotations=dict(type='dict', default={}),
+            # TODO: needs documented
+            pod_cidr=dict(type='str'),
+            # TODO: needs documented
+            client_config=dict(type='str'),
+            # TODO: needs documented
+            client_cluster=dict(type='str', default='master'),
+            # TODO: needs documented
+            client_context=dict(type='str', default='default'),
+            # TODO: needs documented
+            client_namespace=dict(type='str', default='default'),
+            # TODO: needs documented
+            client_user=dict(type='str', default='system:admin'),
+            # TODO: needs documented
+            kubectl_cmd=dict(type='list', default=['kubectl']),
+            # TODO: needs documented
+            kubeconfig_flag=dict(type='str'),
+            # TODO: needs documented
+            default_client_config=dict(type='str')
+        ),
+        supports_check_mode=True
+    )
+
+    labels = module.params['labels']
+    kube_hostname_label = 'kubernetes.io/hostname'
+    if kube_hostname_label not in labels:
+        labels[kube_hostname_label] = module.params['name']
+
+    node = Node(module, generate_client_opts(module),
+                module.params['api_version'], module.params['name'],
+                module.params['host_ip'], module.params['cpu'],
+                module.params['memory'], labels, module.params['annotations'],
+                module.params['pod_cidr'])
 
-    # TODO: attempt to support changing node settings where possible and/or
-    # modifying node resources
     if node.exists():
         module.exit_json(changed=False, node=node.get_node())
     elif module.check_mode:
         module.exit_json(changed=True, node=node.get_node())
+    elif node.create():
+        module.exit_json(changed=True, msg="Node created successfully",
+                         node=node.get_node())
     else:
-        if node.create():
-            module.exit_json(changed=True,
-                             msg="Node created successfully",
-                             node=node.get_node())
-        else:
-            module.fail_json(msg="Unknown error creating node",
-                             node=node.get_node())
+        module.fail_json(msg="Unknown error creating node", node=node.get_node())
 
 # ignore pylint errors related to the module_utils import
 # pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import
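
The `main()` function above defaults the `kubernetes.io/hostname` label to the node's name when the caller does not supply one, so an explicitly passed label always wins. A minimal standalone sketch of that defaulting (the `ensure_hostname_label` helper name is ours, not the module's):

```python
def ensure_hostname_label(labels, node_name):
    """Return a copy of labels with kubernetes.io/hostname defaulted
    to node_name, mirroring the defaulting done in main() before the
    Node object is constructed. The caller's dict is not mutated."""
    kube_hostname_label = 'kubernetes.io/hostname'
    result = dict(labels)
    if kube_hostname_label not in result:
        result[kube_hostname_label] = node_name
    return result

# The label is only filled in when absent; an explicit value is kept.
print(ensure_hostname_label({}, 'node1.example.com'))
# → {'kubernetes.io/hostname': 'node1.example.com'}
print(ensure_hostname_label({'kubernetes.io/hostname': 'custom'},
                            'node1.example.com'))
# → {'kubernetes.io/hostname': 'custom'}
```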

+ 24 - 35
roles/openshift_register_nodes/tasks/main.yml

@@ -1,51 +1,42 @@
 ---
-# TODO: support new create-config command to generate node certs and config
-# TODO: recreate master/node configs if settings that affect the configs
-# change (hostname, public_hostname, ip, public_ip, etc)
-
-
-# TODO: use a template lookup here
-# TODO: create a failed_when condition
-- name: Use enterprise default for oreg_url if not set
-  set_fact:
-    oreg_url: "openshift3_beta/ose-${component}:${version}"
-  when: openshift.common.deployment_type == 'enterprise' and oreg_url is not defined
-
-- name: Use online default for oreg_url if not set
-  set_fact:
-    oreg_url: "docker-registry.ops.rhcloud.com/openshift3_beta/ose-${component}:${version}"
-  when: openshift.common.deployment_type == 'online' and oreg_url is not defined
-
 - name: Create openshift_generated_configs_dir if it doesn't exist
   file:
     path: "{{ openshift_generated_configs_dir }}"
     state: directory
 
-- name: Create node config
+- name: Generate the node client config
   command: >
-    /usr/bin/openshift admin create-node-config
-      --node-dir={{ openshift_generated_configs_dir }}/node-{{ item.openshift.common.hostname }}
-      --node={{ item.openshift.common.hostname }}
-      --hostnames={{ [item.openshift.common.hostname, item.openshift.common.public_hostname]|unique|join(",") }}
-      --dns-domain={{ openshift.dns.domain }}
-      --dns-ip={{ openshift.dns.ip }}
+    {{ openshift.common.admin_binary }} create-api-client-config
+      --certificate-authority={{ openshift_master_ca_cert }}
+      --client-dir={{ openshift_generated_configs_dir }}/node-{{ item.openshift.common.hostname }}
+      --groups=system:nodes
       --master={{ openshift.master.api_url }}
-      --signer-key={{ openshift_master_ca_key }}
       --signer-cert={{ openshift_master_ca_cert }}
-      --certificate-authority={{ openshift_master_ca_cert }}
+      --signer-key={{ openshift_master_ca_key }}
       --signer-serial={{ openshift_master_ca_serial }}
-      --node-client-certificate-authority={{ openshift_master_ca_cert }}
-      {{ ('--images=' ~ oreg_url) if oreg_url is defined else '' }}
-      --listen=https://0.0.0.0:10250
-      --volume-dir={{ openshift_data_dir }}/openshift.local.volumes
+      --user=system:node-{{ item.openshift.common.hostname }}
   args:
     chdir: "{{ openshift_generated_configs_dir }}"
     creates: "{{ openshift_generated_configs_dir }}/node-{{ item.openshift.common.hostname }}"
-  with_items: openshift_nodes
+  with_items: nodes_needing_certs
+
+- name: Generate the node server certificate
+  delegate_to: "{{ openshift_first_master }}"
+  command: >
+    {{ openshift.common.admin_binary }} create-server-cert
+      --cert=server.crt --key=server.key --overwrite=true
+      --hostnames={{ [item.openshift.common.hostname, item.openshift.common.public_hostname]|unique|join(",") }}
+      --signer-cert={{ openshift_master_ca_cert }}
+      --signer-key={{ openshift_master_ca_key }}
+      --signer-serial={{ openshift_master_ca_serial }}
+  args:
+    chdir: "{{ openshift_generated_configs_dir }}/node-{{ item.openshift.common.hostname }}"
+    creates: "{{ openshift_generated_configs_dir }}/node-{{ item.openshift.common.hostname }}/server.crt"
+  with_items: nodes_needing_certs
 
 - name: Register unregistered nodes
   kubernetes_register_node:
-    kubectl_cmd: ['osc']
+    kubectl_cmd: "{{ [openshift.common.client_binary] }}"
     default_client_config: '~/.config/openshift/.config'
     name: "{{ item.openshift.common.hostname }}"
     api_version: "{{ openshift_kube_api_version }}"
@@ -55,8 +46,6 @@
     host_ip: "{{ item.openshift.common.ip }}"
     labels: "{{ item.openshift.node.labels | default({}) }}"
     annotations: "{{ item.openshift.node.annotations | default({}) }}"
-    external_id: "{{ item.openshift.node.external_id }}"
-    # TODO: support customizing other attributes such as: client_config,
-    # client_cluster, client_context, client_user
   with_items: openshift_nodes
   register: register_result
+
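
The `--hostnames` argument in the server-cert task above is built with the Jinja2 pipeline `[hostname, public_hostname] | unique | join(",")`, so a node whose hostname and public hostname are identical ends up with a single SAN entry rather than a duplicate. An equivalent Python sketch of that filter chain (the `cert_hostnames` name is ours):

```python
def cert_hostnames(hostname, public_hostname):
    """Deduplicate the node's hostnames the way the Jinja2
    `unique | join(",")` filter chain does, preserving order."""
    seen = []
    for name in (hostname, public_hostname):
        if name not in seen:
            seen.append(name)
    return ",".join(seen)

print(cert_hostnames('node1.example.com', 'node1.example.com'))
# → node1.example.com
print(cert_hostnames('node1.internal', 'node1.example.com'))
# → node1.internal,node1.example.com
```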

+ 1 - 1
roles/openshift_register_nodes/vars/main.yml

@@ -2,7 +2,7 @@
 openshift_node_config_dir: /etc/openshift/node
 openshift_master_config_dir: /etc/openshift/master
 openshift_generated_configs_dir: /etc/openshift/generated-configs
-openshift_data_dir: /var/lib/openshift
 openshift_master_ca_cert: "{{ openshift_master_config_dir }}/ca.crt"
 openshift_master_ca_key: "{{ openshift_master_config_dir }}/ca.key"
 openshift_master_ca_serial: "{{ openshift_master_config_dir }}/ca.serial.txt"
+openshift_kube_api_version: v1beta3

+ 0 - 41
roles/openshift_sdn_master/README.md

@@ -1,41 +0,0 @@
-OpenShift SDN Master
-====================
-
-OpenShift SDN Master service installation
-
-Requirements
-------------
-
-A host with the openshift_master role applied
-
-Role Variables
---------------
-
-From this role:
-| Name                             | Default value         |                                                  |
-|----------------------------------|-----------------------|--------------------------------------------------|
-| openshift_sdn_master_debug_level | openshift_debug_level | Verbosity of the debug logs for openshift-master |
-
-From openshift_common:
-| Name                  | Default value |                                      |
-|-----------------------|---------------|--------------------------------------|
-| openshift_debug_level | 0             | Global openshift debug log verbosity |
-
-Dependencies
-------------
-
-
-Example Playbook
-----------------
-
-TODO
-
-License
--------
-
-Apache License, Version 2.0
-
-Author Information
-------------------
-
-TODO

+ 0 - 3
roles/openshift_sdn_master/handlers/main.yml

@@ -1,3 +0,0 @@
----
-- name: restart openshift-sdn-master
-  service: name=openshift-sdn-master state=restarted

+ 0 - 15
roles/openshift_sdn_master/meta/main.yml

@@ -1,15 +0,0 @@
----
-galaxy_info:
-  author: Jason DeTiberus
-  description: OpenShift SDN Master
-  company: Red Hat, Inc.
-  license: Apache License, Version 2.0
-  min_ansible_version: 1.7
-  platforms:
-  - name: EL
-    versions:
-    - 7
-  categories:
-  - cloud
-dependencies:
-- { role: openshift_common }

+ 0 - 37
roles/openshift_sdn_master/tasks/main.yml

@@ -1,37 +0,0 @@
----
-# TODO: add task to set the sdn subnet if openshift-sdn-master hasn't been
-# started yet
-
-- name: Set master sdn OpenShift facts
-  openshift_facts:
-    role: 'master_sdn'
-    local_facts:
-      debug_level: "{{ openshift_master_sdn_debug_level | default(openshift.common.debug_level) }}"
-
-- name: Install openshift-sdn-master
-  yum:
-    pkg: openshift-sdn-master
-    state: installed
-  register: install_result
-
-- name: Reload systemd units
-  command: systemctl daemon-reload
-  when: install_result | changed
-
-# TODO: we should probably generate certs specifically for sdn
-- name: Configure openshift-sdn-master settings
-  lineinfile:
-    dest: /etc/sysconfig/openshift-sdn-master
-    regexp: '^OPTIONS='
-    line: "OPTIONS=\"-v={{ openshift.master_sdn.debug_level }} -etcd-endpoints={{ openshift_sdn_master_url}}
-      -etcd-cafile={{ openshift_master_config_dir }}/ca.crt
-      -etcd-certfile={{ openshift_master_config_dir }}/master.etcd-client.crt
-      -etcd-keyfile={{ openshift_master_config_dir }}/master.etcd-client.key\""
-  notify:
-  - restart openshift-sdn-master
-
-- name: Enable openshift-sdn-master
-  service:
-    name: openshift-sdn-master
-    enabled: yes
-    state: started

+ 0 - 44
roles/openshift_sdn_node/README.md

@@ -1,44 +0,0 @@
-OpenShift SDN Node
-==================
-
-OpenShift SDN Node service installation
-
-Requirements
-------------
-
-A host with the openshift_node role applied
-
-Role Variables
---------------
-
-From this role:
-| Name                           | Default value         |                                                  |
-|--------------------------------|-----------------------|--------------------------------------------------|
-| openshift_sdn_node_debug_level | openshift_debug_level | Verbosity of the debug logs for openshift-master |
-
-
-From openshift_common:
-| Name                          | Default value       |                                        |
-|-------------------------------|---------------------|----------------------------------------|
-| openshift_debug_level         | 0                   | Global openshift debug log verbosity   |
-| openshift_public_ip           | UNDEF (Required)    | Public IP address to use for this host |
-| openshift_hostname            | UNDEF (Required)    | hostname to use for this instance |
-
-Dependencies
-------------
-
-
-Example Playbook
-----------------
-
-TODO
-
-License
--------
-
-Apache License, Version 2.0
-
-Author Information
-------------------
-
-TODO

+ 0 - 3
roles/openshift_sdn_node/handlers/main.yml

@@ -1,3 +0,0 @@
----
-- name: restart openshift-sdn-node
-  service: name=openshift-sdn-node state=restarted

+ 0 - 15
roles/openshift_sdn_node/meta/main.yml

@@ -1,15 +0,0 @@
----
-galaxy_info:
-  author: Jason DeTiberus
-  description: OpenShift SDN Node
-  company: Red Hat, Inc.
-  license: Apache License, Version 2.0
-  min_ansible_version: 1.7
-  platforms:
-  - name: EL
-    versions:
-    - 7
-  categories:
-  - cloud
-dependencies:
-- { role: openshift_common }

+ 0 - 60
roles/openshift_sdn_node/tasks/main.yml

@@ -1,60 +0,0 @@
----
-- name: Set node sdn OpenShift facts
-  openshift_facts:
-    role: 'node_sdn'
-    local_facts:
-      debug_level: "{{ openshift_node_sdn_debug_level | default(openshift.common.debug_level) }}"
-
-- name: Install openshift-sdn-node
-  yum:
-    pkg: openshift-sdn-node
-    state: installed
-  register: install_result
-
-- name: Reload systemd units
-  command: systemctl daemon-reload
-  when: install_result | changed
-
-# TODO: we are specifying -hostname= for OPTIONS as a workaround for
-# openshift-sdn-node not properly detecting the hostname.
-# TODO: we should probably generate certs specifically for sdn
-- name: Configure openshift-sdn-node settings
-  lineinfile:
-    dest: /etc/sysconfig/openshift-sdn-node
-    regexp: "{{ item.regex }}"
-    line: "{{ item.line }}"
-    backrefs: yes
-  with_items:
-    - regex: '^(OPTIONS=)'
-      line: '\1"-v={{ openshift.node_sdn.debug_level }} -hostname={{ openshift.common.hostname }}
-        -etcd-cafile={{ openshift_node_cert_dir }}/ca.crt
-        -etcd-certfile={{ openshift_node_cert_dir }}/master-client.crt
-        -etcd-keyfile={{ openshift_node_cert_dir }}/master-client.key\"'
-    - regex: '^(MASTER_URL=)'
-      line: '\1"{{ openshift_sdn_master_url }}"'
-    - regex: '^(MINION_IP=)'
-      line: '\1"{{ openshift.common.ip }}"'
-  notify: restart openshift-sdn-node
-
-- name: Ensure we aren't setting DOCKER_OPTIONS in /etc/sysconfig/openshift-sdn-node
-  lineinfile:
-    dest: /etc/sysconfig/openshift-sdn-node
-    regexp: '^DOCKER_OPTIONS='
-    state: absent
-  notify: restart openshift-sdn-node
-
-# TODO lock down the insecure-registry config to a more sane value than
-# 0.0.0.0/0
-- name: Configure docker insecure-registry setting
-  lineinfile:
-    dest: /etc/sysconfig/docker
-    regexp: INSECURE_REGISTRY=
-    line: INSECURE_REGISTRY='--insecure-registry=0.0.0.0/0'
-  notify: restart openshift-sdn-node
-
-
-- name: Start and enable openshift-sdn-node
-  service:
-    name: openshift-sdn-node
-    enabled: yes
-    state: started