
Refactor upgrade playbook(s)

- Split playbooks into two, one for 3.0 minor upgrades and one for 3.0 to 3.1
  upgrades
- Move upgrade playbooks to common/openshift/cluster/upgrades from adhoc
- Added byo wrapper playbooks to set the groups based on the byo
  conventions; other providers will need similar playbooks added eventually
- Installer wrapper updates for the refactored upgrade playbooks
  - call new 3.0 to 3.1 upgrade playbook
  - various fixes for edge cases I hit with a really old config lying
    around.
  - fix output of host facts to show connect_to value.
Jason DeTiberus, 9 years ago
commit 4c1b0dd4ab
30 changed files with 259 additions and 85 deletions
  1. playbooks/adhoc/upgrades/filter_plugins (+0 -1)
  2. playbooks/adhoc/upgrades/lookup_plugins (+0 -1)
  3. playbooks/adhoc/upgrades/roles (+0 -1)
  4. playbooks/byo/openshift-cluster/upgrades/README.md (+8 -0)
  5. playbooks/adhoc/upgrades/README.md (+5 -5)
  6. playbooks/byo/openshift-cluster/upgrades/v3_0_minor/upgrade.yml (+9 -0)
  7. playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/README.md (+17 -0)
  8. playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml (+9 -0)
  9. playbooks/common/openshift-cluster/upgrades/files/pre-upgrade-check (renamed)
  10. playbooks/common/openshift-cluster/upgrades/files/versions.sh (renamed)
  11. playbooks/common/openshift-cluster/upgrades/filter_plugins (+1 -0)
  12. playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py (renamed)
  13. playbooks/common/openshift-cluster/upgrades/lookup_plugins (+1 -0)
  14. playbooks/common/openshift-cluster/upgrades/roles (+1 -0)
  15. playbooks/common/openshift-cluster/upgrades/v3_0_minor/filter_plugins (+1 -0)
  16. playbooks/common/openshift-cluster/upgrades/v3_0_minor/library (+1 -0)
  17. playbooks/common/openshift-cluster/upgrades/v3_0_minor/lookup_plugins (+1 -0)
  18. playbooks/common/openshift-cluster/upgrades/v3_0_minor/roles (+1 -0)
  19. playbooks/common/openshift-cluster/upgrades/v3_0_minor/upgrade.yml (+112 -0)
  20. playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/filter_plugins (+1 -0)
  21. playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/library (+1 -0)
  22. playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/lookup_plugins (+1 -0)
  23. playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/roles (+1 -0)
  24. playbooks/adhoc/upgrades/upgrade.yml (+36 -36)
  25. playbooks/common/openshift-etcd/config.yml (+2 -0)
  26. playbooks/common/openshift-master/config.yml (+3 -0)
  27. roles/openshift_facts/library/openshift_facts.py (+36 -35)
  28. utils/src/ooinstall/cli_installer.py (+3 -2)
  29. utils/src/ooinstall/oo_config.py (+5 -3)
  30. utils/src/ooinstall/openshift_ansible.py (+3 -1)

+ 0 - 1
playbooks/adhoc/upgrades/filter_plugins

@@ -1 +0,0 @@
-../../../filter_plugins/

+ 0 - 1
playbooks/adhoc/upgrades/lookup_plugins

@@ -1 +0,0 @@
-../../../lookup_plugins/

+ 0 - 1
playbooks/adhoc/upgrades/roles

@@ -1 +0,0 @@
-../../../roles/

+ 8 - 0
playbooks/byo/openshift-cluster/upgrades/README.md

@@ -0,0 +1,8 @@
+# Upgrade playbooks
+The playbooks provided in this directory can be used for upgrading an existing
+environment. Additional notes for the associated upgrade playbooks are
+provided in their respective directories.
+
+# Upgrades available
+- [OpenShift Enterprise 3.0 to latest minor release](v3_0_minor/README.md)
+- [OpenShift Enterprise 3.0 to 3.1](v3_0_to_v3_1/README.md)

+ 5 - 5
playbooks/adhoc/upgrades/README.md

@@ -1,11 +1,11 @@
-# [NOTE]
-This playbook will re-run installation steps overwriting any local
+# v3.0 minor upgrade playbook
+**Note:** This playbook will re-run installation steps overwriting any local
 modifications. You should ensure that your inventory has been updated with any
 modifications you've made after your initial installation. If you find any items
 that cannot be configured via ansible please open an issue at
 https://github.com/openshift/openshift-ansible
 
-# Overview
+## Overview
 This playbook is available as a technical preview. It currently performs the
 following steps.
 
@@ -17,5 +17,5 @@ following steps.
  * Updates the default registry if one exists
  * Updates image streams and quickstarts
 
-# Usage
-ansible-playbook -i ~/ansible-inventory openshift-ansible/playbooks/adhoc/upgrades/upgrade.yml
+## Usage
+ansible-playbook -i ~/ansible-inventory openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_0_minor/upgrade.yml
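
The minor upgrade plays resolve package versions from openshift_pkg_version
when it is set. Because the value is appended directly to the package name
(e.g. {{ openshift.common.service_type }}-master{{ openshift_version }}), it
must begin with a dash. A minimal sketch of pinning it through an extra-vars
file (the version value below is hypothetical):

    # upgrade_vars.yml (hypothetical), passed with -e @upgrade_vars.yml;
    # the leading '-' is required because the playbooks append this value
    # directly to the yum package name.
    openshift_pkg_version: "-3.0.2.0"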

+ 9 - 0
playbooks/byo/openshift-cluster/upgrades/v3_0_minor/upgrade.yml

@@ -0,0 +1,9 @@
+---
+- include: ../../../../common/openshift-cluster/upgrades/v3_0_minor/upgrade.yml
+  vars:
+    g_etcd_group: "{{ 'etcd' }}"
+    g_masters_group: "{{ 'masters' }}"
+    g_nodes_group: "{{ 'nodes' }}"
+    g_lb_group: "{{ 'lb' }}"
+    openshift_cluster_id: "{{ cluster_id | default('default') }}"
+    openshift_deployment_type: "{{ deployment_type }}"
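
This wrapper maps the byo inventory group names (etcd, masters, nodes, lb)
onto the g_* variables that the common upgrade playbook expects. As noted in
the commit message, other providers will need similar wrappers; a minimal
sketch of what one might look like for a cloud provider (the tag-based group
names below are assumptions, not part of this commit):

    ---
    # Hypothetical provider wrapper: the same pattern as the byo playbook
    # above, mapping provider-generated group names onto the g_* variables.
    - include: ../../../../common/openshift-cluster/upgrades/v3_0_minor/upgrade.yml
      vars:
        g_etcd_group: "tag_host-type_etcd"
        g_masters_group: "tag_host-type_master"
        g_nodes_group: "tag_host-type_node"
        g_lb_group: "tag_host-type_lb"
        openshift_cluster_id: "{{ cluster_id | default('default') }}"
        openshift_deployment_type: "{{ deployment_type }}"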

+ 17 - 0
playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/README.md

@@ -0,0 +1,17 @@
+# v3.0 to v3.1 upgrade playbook
+
+## Overview
+This playbook currently performs the
+following steps.
+
+**TODO: update for current steps**
+ * Upgrade and restart master services
+ * Upgrade and restart node services
+ * Modifies the subset of the configuration necessary
+ * Applies the latest cluster policies
+ * Updates the default router if one exists
+ * Updates the default registry if one exists
+ * Updates image streams and quickstarts
+
+## Usage
+ansible-playbook -i ~/ansible-inventory openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml

+ 9 - 0
playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml

@@ -0,0 +1,9 @@
+---
+- include: ../../../../common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml
+  vars:
+    g_etcd_group: "{{ 'etcd' }}"
+    g_masters_group: "{{ 'masters' }}"
+    g_nodes_group: "{{ 'nodes' }}"
+    g_lb_group: "{{ 'lb' }}"
+    openshift_cluster_id: "{{ cluster_id | default('default') }}"
+    openshift_deployment_type: "{{ deployment_type }}"

playbooks/adhoc/upgrades/files/pre-upgrade-check → playbooks/common/openshift-cluster/upgrades/files/pre-upgrade-check


playbooks/adhoc/upgrades/files/versions.sh → playbooks/common/openshift-cluster/upgrades/files/versions.sh


+ 1 - 0
playbooks/common/openshift-cluster/upgrades/filter_plugins

@@ -0,0 +1 @@
+../../../../filter_plugins

playbooks/adhoc/upgrades/library/openshift_upgrade_config.py → playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py


+ 1 - 0
playbooks/common/openshift-cluster/upgrades/lookup_plugins

@@ -0,0 +1 @@
+../../../../lookup_plugins

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/roles

@@ -0,0 +1 @@
+../../../../roles

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_minor/filter_plugins

@@ -0,0 +1 @@
+../../../../../filter_plugins

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_minor/library

@@ -0,0 +1 @@
+../library

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_minor/lookup_plugins

@@ -0,0 +1 @@
+../../../../../lookup_plugins

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_minor/roles

@@ -0,0 +1 @@
+../../../../../roles

+ 112 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_minor/upgrade.yml

@@ -0,0 +1,112 @@
+---
+- name: Evaluate groups
+  include: ../../evaluate_groups.yml
+
+- name: Re-Run cluster configuration to apply latest configuration changes
+  include: ../../config.yml
+
+- name: Upgrade masters
+  hosts: oo_masters_to_config
+  vars:
+    openshift_version: "{{ openshift_pkg_version | default('') }}"
+  tasks:
+    - name: Upgrade master packages
+      yum: pkg={{ openshift.common.service_type }}-master{{ openshift_version }} state=latest
+    - name: Restart master services
+      service: name="{{ openshift.common.service_type }}-master" state=restarted
+
+- name: Upgrade nodes
+  hosts: oo_nodes_to_config
+  vars:
+    openshift_version: "{{ openshift_pkg_version | default('') }}"
+  tasks:
+    - name: Upgrade node packages
+      yum: pkg={{ openshift.common.service_type }}-node{{ openshift_version }} state=latest
+    - name: Restart node services
+      service: name="{{ openshift.common.service_type }}-node" state=restarted
+
+- name: Determine new master version
+  hosts: oo_first_master
+  tasks:
+    - name: Determine new version
+      command: >
+        rpm -q --queryformat '%{version}' {{ openshift.common.service_type }}-master
+      register: _new_version
+
+- name: Ensure AOS 3.0.2 or Origin 1.0.6
+  hosts: oo_first_master
+  tasks:
+    - fail: msg="This playbook requires Origin 1.0.6 or Atomic OpenShift 3.0.2 or later"
+      when: _new_version.stdout | version_compare('1.0.6','<') or ( _new_version.stdout | version_compare('3.0','>=') and _new_version.stdout | version_compare('3.0.2','<') )
+
+- name: Update cluster policy
+  hosts: oo_first_master
+  tasks:
+    - name: oadm policy reconcile-cluster-roles --confirm
+      command: >
+        {{ openshift.common.admin_binary }} --config={{ openshift.common.config_base }}/master/admin.kubeconfig
+        policy reconcile-cluster-roles --confirm
+
+- name: Upgrade default router
+  hosts: oo_first_master
+  vars:
+    - router_image: "{{ openshift.master.registry_url | replace( '${component}', 'haproxy-router' ) | replace ( '${version}', 'v' + _new_version.stdout ) }}"
+    - oc_cmd: "{{ openshift.common.client_binary }} --config={{ openshift.common.config_base }}/master/admin.kubeconfig"
+  tasks:
+    - name: Check for default router
+      command: >
+        {{ oc_cmd }} get -n default dc/router
+      register: _default_router
+      failed_when: false
+      changed_when: false
+    - name: Check for allowHostNetwork and allowHostPorts
+      when: _default_router.rc == 0
+      shell: >
+        {{ oc_cmd }} get -o yaml scc/privileged | /usr/bin/grep -e allowHostPorts -e allowHostNetwork
+      register: _scc
+    - name: Grant allowHostNetwork and allowHostPorts
+      when:
+        - _default_router.rc == 0
+        - "'false' in _scc.stdout"
+      command: >
+        {{ oc_cmd }} patch scc/privileged -p '{"allowHostPorts":true,"allowHostNetwork":true}' --loglevel=9
+    - name: Update deployment config to 1.0.4/3.0.1 spec
+      when: _default_router.rc == 0
+      command: >
+        {{ oc_cmd }} patch dc/router -p
+        '{"spec":{"strategy":{"rollingParams":{"updatePercent":-10},"spec":{"serviceAccount":"router","serviceAccountName":"router"}}}}'
+    - name: Switch to hostNetwork=true
+      when: _default_router.rc == 0
+      command: >
+        {{ oc_cmd }} patch dc/router -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
+    - name: Update router image to current version
+      when: _default_router.rc == 0
+      command: >
+        {{ oc_cmd }} patch dc/router -p
+        '{"spec":{"template":{"spec":{"containers":[{"name":"router","image":"{{ router_image }}"}]}}}}'
+
+- name: Upgrade default registry
+  hosts: oo_first_master
+  vars:
+    - registry_image: "{{ openshift.master.registry_url | replace( '${component}', 'docker-registry' ) | replace ( '${version}', 'v' + _new_version.stdout ) }}"
+    - oc_cmd: "{{ openshift.common.client_binary }} --config={{ openshift.common.config_base }}/master/admin.kubeconfig"
+  tasks:
+    - name: Check for default registry
+      command: >
+          {{ oc_cmd }} get -n default dc/docker-registry
+      register: _default_registry
+      failed_when: false
+      changed_when: false
+    - name: Update registry image to current version
+      when: _default_registry.rc == 0
+      command: >
+        {{ oc_cmd }} patch dc/docker-registry -p
+        '{"spec":{"template":{"spec":{"containers":[{"name":"registry","image":"{{ registry_image }}"}]}}}}'
+
+- name: Update image streams and templates
+  hosts: oo_first_master
+  vars:
+    openshift_examples_import_command: "update"
+    openshift_deployment_type: "{{ deployment_type }}"
+  roles:
+    - openshift_examples
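
The version gate in this playbook fails when the detected master version is
below 1.0.6 (Origin) or falls in the [3.0, 3.0.2) range (Atomic OpenShift). A
self-contained sketch of the same check, runnable against a hard-coded value
(the _ver value is hypothetical; the real playbook takes it from rpm -q):

    ---
    # Hypothetical standalone illustration of the version gate above.
    - hosts: localhost
      gather_facts: no
      vars:
        _ver: "3.0.1"
      tasks:
        - fail:
            msg: "{{ _ver }} is too old: need Origin >= 1.0.6 or AOS >= 3.0.2"
          when: _ver | version_compare('1.0.6','<') or
                ( _ver | version_compare('3.0','>=') and _ver | version_compare('3.0.2','<') )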

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/filter_plugins

@@ -0,0 +1 @@
+../../../../../filter_plugins

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/library

@@ -0,0 +1 @@
+../library

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/lookup_plugins

@@ -0,0 +1 @@
+../../../../../lookup_plugins

+ 1 - 0
playbooks/common/openshift-cluster/upgrades/v3_0_to_v3_1/roles

@@ -0,0 +1 @@
+../../../../../roles

+ 36 - 36
playbooks/adhoc/upgrades/upgrade.yml

@@ -1,50 +1,58 @@
 ---
-- name: Load master facts
-  hosts: masters
+- name: Evaluate host groups
+  include: ../../evaluate_groups.yml
+
+- name: Load openshift_facts from the environment
+  hosts: oo_masters_to_config oo_nodes_to_config oo_etcd_to_config oo_lb_to_config
   roles:
   - openshift_facts
 
 - name: Verify upgrade can proceed
-  hosts: masters[0]
+  hosts: oo_first_master
   vars:
     openshift_master_ha: "{{ groups['masters'] | length > 1 }}"
   gather_facts: no
   tasks:
-    # Pacemaker is currently the only supported upgrade path for multiple masters
-    - fail:
-        msg: "openshift_master_cluster_method must be set to 'pacemaker'"
-      when: openshift_master_ha | bool and ((openshift_master_cluster_method is not defined) or (openshift_master_cluster_method is defined and openshift_master_cluster_method != "pacemaker"))
+  # Pacemaker is currently the only supported upgrade path for multiple masters
+  - fail:
+      msg: "openshift_master_cluster_method must be set to 'pacemaker'"
+    when: openshift_master_ha | bool and ((openshift_master_cluster_method is not defined) or (openshift_master_cluster_method is defined and openshift_master_cluster_method != "pacemaker"))
+  - fail:
+      msg: >
+        This upgrade is only supported for origin and openshift-enterprise
+        deployment types
+    when: deployment_type not in ['origin','openshift-enterprise']
+  - fail:
+      msg: >
+        openshift_pkg_version is {{ openshift_pkg_version }} which is not a
+        valid version for a 3.1 upgrade
+    when: openshift_pkg_version is defined and openshift_pkg_version.split('-',1).1 | version_compare('3.0.2.900','<')
 
-- name: Run pre-upgrade checks on first master
-  hosts: masters[0]
-  tasks:
   # If this script errors out ansible will show the default stdout/stderr
   # which contains details for the user:
-  - script: files/pre-upgrade-check
+  - script: ../files/pre-upgrade-check
 
-- name: Evaluate etcd_hosts
+- name: Evaluate etcd_hosts_to_backup
   hosts: localhost
   tasks:
-  - name: Evaluate etcd hosts
-    add_host:
-      name: "{{ groups.masters.0 }}"
-      groups: etcd_hosts
-    when: hostvars[groups.masters.0].openshift.master.embedded_etcd | bool
-  - name: Evaluate etcd hosts
+  - name: Evaluate etcd_hosts_to_backup
     add_host:
       name: "{{ item }}"
-      groups: etcd_hosts
-    with_items: groups.etcd
-    when: not hostvars[groups.masters.0].openshift.master.embedded_etcd | bool
+      groups: etcd_hosts_to_backup
+    with_items: groups.oo_etcd_to_config if groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config | length > 0 else groups.oo_first_master
 
 - name: Backup etcd
-  hosts: etcd_hosts
+  hosts: etcd_hosts_to_backup
   vars:
     embedded_etcd: "{{ openshift.master.embedded_etcd }}"
     timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
   roles:
   - openshift_facts
   tasks:
+  - openshift_facts:
+      role: etcd
+      local_facts: {}
+    when: "'etcd' not in openshift"
 
   - stat: path=/var/lib/openshift
     register: var_lib_openshift
@@ -64,7 +72,7 @@
 
   - name: Check current embedded etcd disk usage
     shell: >
-      du -k {{ openshift.master.etcd_data_dir }} | tail -n 1 | cut -f1
+      du -k {{ openshift.etcd.etcd_data_dir }} | tail -n 1 | cut -f1
     register: etcd_disk_usage
     when: embedded_etcd | bool
 
@@ -82,13 +90,14 @@
 
   - name: Generate etcd backup
     command: >
-      etcdctl backup --data-dir={{ openshift.master.etcd_data_dir }}
+      etcdctl backup --data-dir={{ openshift.etcd.etcd_data_dir }}
       --backup-dir={{ openshift.common.data_dir }}/etcd-backup-{{ timestamp }}
 
   - name: Display location of etcd backup
     debug:
       msg: "Etcd backup created in {{ openshift.common.data_dir }}/etcd-backup-{{ timestamp }}"
 
+
 - name: Update deployment type
   hosts: OSEv3
   roles:
@@ -107,7 +116,7 @@
     command: yum clean all
 
   - name: Determine available versions
-    script: files/versions.sh {{ openshift.common.service_type }} openshift
+    script: ../files/versions.sh {{ openshift.common.service_type }} openshift
     register: g_versions_result
 
   - set_fact:
@@ -120,17 +129,9 @@
       msg: This playbook requires Origin 1.0.6 or later
     when: deployment_type == 'origin' and g_aos_versions.curr_version | version_compare('1.0.6','<')
 
-  # TODO: This should be specific to the 3.1 upgrade playbook (coming in future refactor), otherwise we are blocking 3.0.1 to 3.0.2 here.
   - fail:
       msg: Atomic OpenShift 3.1 packages not found
-    when: deployment_type in ['openshift-enterprise', 'atomic-openshift'] and g_aos_versions.curr_version | version_compare('3.0.2.900','<') and (g_aos_versions.avail_version is none or g_aos_versions.avail_version | version_compare('3.0.2.900','<'))
-  # Deployment type 'enterprise' is no longer valid if we're upgrading to 3.1 or beyond.
-  # (still valid for 3.0.x to 3.0.y however) Using the global deployment_type here as
-  # we're checking what was requested by the upgrade, not the current type on the system.
-  - fail:
-      msg: "Deployment type enterprise not supported for upgrade"
-    when: deployment_type == "enterprise" and  g_aos_versions.curr_version | version_compare('3.1', '>=')
-
+    when: g_aos_versions.curr_version | version_compare('3.0.2.900','<') and (g_aos_versions.avail_version is none or g_aos_versions.avail_version | version_compare('3.0.2.900','<'))
 
 - name: Upgrade masters
   hosts: masters
@@ -156,7 +157,6 @@
         to_version: '3.1'
         role: master
         config_base: "{{ hostvars[inventory_hostname].openshift.common.config_base }}"
-      when: deployment_type in ['openshift-enterprise', 'atomic-enterprise'] and g_aos_versions.curr_version | version_compare('3.1', '>=')
 
     - set_fact:
         master_certs_missing: True
@@ -287,7 +287,7 @@
   hosts: masters[0]
   vars:
     origin_reconcile_bindings: "{{ deployment_type == 'origin' and g_new_version | version_compare('1.0.6', '>') }}"
-    ent_reconcile_bindings: "{{ deployment_type in ['openshift-enterprise', 'atomic-enterprise'] and g_new_version | version_compare('3.0.2','>') }}"
+    ent_reconcile_bindings: true
   tasks:
     - name: oadm policy reconcile-cluster-roles --confirm
       command: >
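
The etcd_hosts_to_backup evaluation in this playbook builds a throwaway group
at runtime: dedicated etcd hosts are backed up when they exist, otherwise the
first master (which holds the embedded etcd data). A minimal self-contained
sketch of the same add_host fallback pattern (group contents hypothetical):

    ---
    # Hypothetical illustration: populate etcd_hosts_to_backup from
    # oo_etcd_to_config when present, else fall back to oo_first_master.
    - hosts: localhost
      gather_facts: no
      tasks:
        - add_host:
            name: "{{ item }}"
            groups: etcd_hosts_to_backup
          with_items: "{{ groups.oo_etcd_to_config
                          if (groups.oo_etcd_to_config | default([]) | length > 0)
                          else groups.oo_first_master }}"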

+ 2 - 0
playbooks/common/openshift-etcd/config.yml

@@ -13,6 +13,8 @@
           hostname: "{{ openshift_hostname | default(None) }}"
           public_hostname: "{{ openshift_public_hostname | default(None) }}"
           deployment_type: "{{ openshift_deployment_type }}"
+      - role: etcd
+        local_facts: {}
   - name: Check status of etcd certificates
     stat:
       path: "{{ item }}"

+ 3 - 0
playbooks/common/openshift-master/config.yml

@@ -51,6 +51,9 @@
           console_url: "{{ openshift_master_console_url | default(None) }}"
           console_use_ssl: "{{ openshift_master_console_use_ssl | default(None) }}"
           public_console_url: "{{ openshift_master_public_console_url | default(None) }}"
+      - role: etcd
+        local_facts: {}
+        when: openshift.master.embedded_etcd | bool
   - name: Check status of external etcd certificatees
     stat:
       path: "{{ openshift.common.config_base }}/master/{{ item }}"

+ 36 - 35
roles/openshift_facts/library/openshift_facts.py

@@ -528,7 +528,6 @@ def set_aggregate_facts(facts):
             first_svc_ip = str(IPNetwork(facts['master']['portal_net'])[1])
             all_hostnames.add(first_svc_ip)
             internal_hostnames.add(first_svc_ip)
-            _add_etcd_data_dir_fact(facts)
 
         facts['common']['all_hostnames'] = list(all_hostnames)
         facts['common']['internal_hostnames'] = list(internal_hostnames)
@@ -536,7 +535,7 @@ def set_aggregate_facts(facts):
     return facts
 
 
-def _add_etcd_data_dir_fact(facts):
+def set_etcd_facts_if_unset(facts):
     """
     If using embedded etcd, loads the data directory from master-config.yaml.
 
@@ -544,38 +543,39 @@ def _add_etcd_data_dir_fact(facts):
 
     If anything goes wrong parsing these, the fact will not be set.
     """
-    if facts['master']['embedded_etcd']:
-        try:
-            # Parse master config to find actual etcd data dir:
-            master_cfg_path = os.path.join(facts['common']['config_base'],
-                                           'master/master-config.yaml')
-            master_cfg_f = open(master_cfg_path, 'r')
-            config = yaml.safe_load(master_cfg_f.read())
-            master_cfg_f.close()
-
-            facts['master']['etcd_data_dir'] = \
-                config['etcdConfig']['storageDirectory']
-        # We don't want exceptions bubbling up here:
-        # pylint: disable=broad-except
-        except Exception:
-            pass
-    else:
-        # Read ETCD_DATA_DIR from /etc/etcd/etcd.conf:
-        try:
-            # Add a fake section for parsing:
-            ini_str = '[root]\n' + open('/etc/etcd/etcd.conf', 'r').read()
-            ini_fp = StringIO.StringIO(ini_str)
-            config = ConfigParser.RawConfigParser()
-            config.readfp(ini_fp)
-            etcd_data_dir = config.get('root', 'ETCD_DATA_DIR')
-            if etcd_data_dir.startswith('"') and etcd_data_dir.endswith('"'):
-                etcd_data_dir = etcd_data_dir[1:-1]
-            facts['master']['etcd_data_dir'] = etcd_data_dir
-        # We don't want exceptions bubbling up here:
-        # pylint: disable=broad-except
-        except Exception:
-            pass
-
+    if 'etcd' in facts:
+        if 'master' in facts and facts['master']['embedded_etcd']:
+            try:
+                # Parse master config to find actual etcd data dir:
+                master_cfg_path = os.path.join(facts['common']['config_base'],
+                                               'master/master-config.yaml')
+                master_cfg_f = open(master_cfg_path, 'r')
+                config = yaml.safe_load(master_cfg_f.read())
+                master_cfg_f.close()
+
+                facts['etcd']['etcd_data_dir'] = \
+                    config['etcdConfig']['storageDirectory']
+            # We don't want exceptions bubbling up here:
+            # pylint: disable=broad-except
+            except Exception:
+                pass
+        else:
+            # Read ETCD_DATA_DIR from /etc/etcd/etcd.conf:
+            try:
+                # Add a fake section for parsing:
+                ini_str = '[root]\n' + open('/etc/etcd/etcd.conf', 'r').read()
+                ini_fp = StringIO.StringIO(ini_str)
+                config = ConfigParser.RawConfigParser()
+                config.readfp(ini_fp)
+                etcd_data_dir = config.get('root', 'ETCD_DATA_DIR')
+                if etcd_data_dir.startswith('"') and etcd_data_dir.endswith('"'):
+                    etcd_data_dir = etcd_data_dir[1:-1]
+                facts['etcd']['etcd_data_dir'] = etcd_data_dir
+            # We don't want exceptions bubbling up here:
+            # pylint: disable=broad-except
+            except Exception:
+                pass
+    return facts
 
 def set_deployment_facts_if_unset(facts):
     """ Set Facts that vary based on deployment_type. This currently
@@ -939,7 +939,7 @@ class OpenShiftFacts(object):
         Raises:
             OpenShiftFactsUnsupportedRoleError:
     """
-    known_roles = ['common', 'master', 'node', 'master_sdn', 'node_sdn', 'dns']
+    known_roles = ['common', 'master', 'node', 'master_sdn', 'node_sdn', 'dns', 'etcd']
 
     def __init__(self, role, filename, local_facts):
         self.changed = False
@@ -982,6 +982,7 @@ class OpenShiftFacts(object):
         facts = set_deployment_facts_if_unset(facts)
         facts = set_version_facts_if_unset(facts)
         facts = set_aggregate_facts(facts)
+        facts = set_etcd_facts_if_unset(facts)
         return dict(openshift=facts)
 
     def get_defaults(self, roles):
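
With 'etcd' added to known_roles and set_etcd_facts_if_unset wired into the
fact computation, a play can force etcd fact collection with an empty
local_facts dict and then read the detected data directory, as the etcd
backup play above does. A minimal sketch (host group illustrative):

    # Minimal sketch, assuming the openshift_facts role is on the role path:
    - hosts: oo_first_master
      roles:
        - openshift_facts
      tasks:
        - openshift_facts:
            role: etcd
            local_facts: {}
        - debug:
            msg: "etcd data dir is {{ openshift.etcd.etcd_data_dir }}"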

+ 3 - 2
utils/src/ooinstall/cli_installer.py

@@ -177,7 +177,8 @@ Notes:
                                              h.public_ip,
                                              h.hostname,
                                              h.public_hostname]))
-        output = "%s\n%s" % (output, ",".join([h.ip,
+        output = "%s\n%s" % (output, ",".join([h.connect_to,
+                             h.ip,
                              h.public_ip,
                              h.hostname,
                              h.public_hostname]))
@@ -493,7 +494,7 @@ def upgrade(ctx):
     verbose = ctx.obj['verbose']
 
     if len(oo_cfg.hosts) == 0:
-        click.echo("No hosts defined in: %s" % oo_cfg['configuration'])
+        click.echo("No hosts defined in: %s" % oo_cfg.config_path)
         sys.exit(1)
 
     # Update config to reflect the version we're targetting, we'll write

+ 5 - 3
utils/src/ooinstall/oo_config.py

@@ -116,6 +116,9 @@ class OOConfig(object):
 
     def _upgrade_legacy_config(self):
         new_hosts = []
+        remove_settings = ['validated_facts', 'Description', 'Name',
+            'Subscription', 'Vendor', 'Version', 'masters', 'nodes']
+
         if 'validated_facts' in self.settings:
             for key, value in self.settings['validated_facts'].iteritems():
                 value['connect_to'] = key
@@ -126,10 +129,9 @@
                 new_hosts.append(value)
         self.settings['hosts'] = new_hosts
 
-        remove_settings = ['validated_facts', 'Description', 'Name',
-            'Subscription', 'Vendor', 'Version', 'masters', 'nodes']
         for s in remove_settings:
-            del self.settings[s]
+            if s in self.settings:
+                del self.settings[s]
 
         # A legacy config implies openshift-enterprise 3.0:
         self.settings['variant'] = 'openshift-enterprise'

+ 3 - 1
utils/src/ooinstall/openshift_ansible.py

@@ -164,8 +164,10 @@ def run_uninstall_playbook(verbose=False):
 
 
 def run_upgrade_playbook(verbose=False):
+    # TODO: do not hardcode the upgrade playbook, add ability to select the
+    # right playbook depending on the type of upgrade.
     playbook = os.path.join(CFG.settings['ansible_playbook_directory'],
-        'playbooks/adhoc/upgrades/upgrade.yml')
+        'playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml')
     # TODO: Upgrade inventory for upgrade?
     inventory_file = generate_inventory(CFG.hosts)
     facts_env = os.environ.copy()
     facts_env = os.environ.copy()