
Merge pull request #7166 from vrutkovs/remove-3.8-upgrade-playbooks

Automatic merge from submit-queue.

Block clean install of OCP 3.8 and upgrade to 3.9 only

This PR ensures that a clean 3.8 install cannot be performed and that 3.7
can only be upgraded straight to 3.9.


TODO:
- [x] Decide on Origin 3.8 vs. OCP 3.8.
  ~~Sounds like we certainly don't want OCP 3.8 upgrade and clean install, does it also apply to Origin?~~
   Neither OCP nor Origin 3.8 is supported; the playbooks were removed.
- [x] Block clean OCP 3.8 install of any component
- [x] Check during the control plane upgrade that both the 3.8 and 3.9
      packages are available on all masters for RPM-based installs.
      Abort if either is missing. (A rough sketch of this check follows
      the description below.)

      This was already implemented; this PR only adds a small fix to show
      which package version is missing.

- [x] Update upgrades/README.md about supported versions
- [x] Block node upgrade from 3.7 to 3.8.

      The playbooks to upgrade nodes from 3.7 to 3.8 were removed, so OCP
      nodes are not updated to 3.8 but go straight to 3.9.

Note that the 3.8 -> 3.9 upgrade playbooks were not removed, so a 3.8 -> 3.9
upgrade can still be performed in case a failure occurs mid-upgrade.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1541555
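
As a rough illustration of the package-availability pre-check from the checklist (a minimal sketch with hypothetical names and version strings; the real check is driven by the repoquery module and `check_available_rpms.yml` shown in the diff below):

```python
# Minimal sketch of the double-upgrade package pre-check; function and
# variable names here are hypothetical, not the playbook's actual ones.
UPGRADE_TARGETS = ['3.8', '3.9']

def missing_targets(available_versions, targets=UPGRADE_TARGETS):
    """Return the target series that no available package version satisfies."""
    missing = []
    for target in targets:
        # A target is satisfied when some available version sits in that
        # series, e.g. '3.9.0-0.20.0' satisfies '3.9'.
        if not any(v == target or v.startswith(target + '.')
                   for v in available_versions):
            missing.append(target)
    return missing

# Example: the enabled repos only publish 3.7 and 3.9 packages.
missing = missing_targets(['3.7.23-1', '3.9.0-0.20.0'])
if missing:
    # Mirrors the improved failure message: name what is missing, then abort.
    raise SystemExit("Package versions not found: {}".format(', '.join(missing)))
```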
OpenShift Merge Robot · 7 years ago · commit 4e83fec11b

+ 1 - 0
playbooks/byo/openshift-cluster/upgrades/README.md

@@ -4,5 +4,6 @@ cluster. Additional notes for the associated upgrade playbooks are
 provided in their respective directories.
 
 # Upgrades available
+- [OpenShift Container Platform 3.7 to 3.9](v3_9/README.md) (works also to upgrade OpenShift Origin from 3.7.x to 3.9.x)
 - [OpenShift Container Platform 3.6 to 3.7](v3_7/README.md) (works also to upgrade OpenShift Origin from 3.6.x to 3.7.x)
 - [OpenShift Container Platform 3.5 to 3.6](v3_6/README.md) (works also to upgrade OpenShift Origin from 1.5.x to 3.6.x)

+ 0 - 20
playbooks/byo/openshift-cluster/upgrades/v3_8/README.md

@@ -1,20 +0,0 @@
-# v3.8 Major and Minor Upgrade Playbook
-
-## Overview
-This playbook currently performs the following steps.
-
- * Upgrade and restart master services
- * Unschedule node
- * Upgrade and restart docker
- * Upgrade and restart node services
- * Modifies the subset of the configuration necessary
- * Applies the latest cluster policies
- * Updates the default router if one exists
- * Updates the default registry if one exists
- * Updates image streams and quickstarts
-
-## Usage
-
-```
-ansible-playbook -i ~/ansible-inventory openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_8/upgrade.yml
-```

+ 0 - 5
playbooks/byo/openshift-cluster/upgrades/v3_8/upgrade.yml

@@ -1,5 +0,0 @@
----
-#
-# Full Control Plane + Nodes Upgrade
-#
-- import_playbook: ../../../../common/openshift-cluster/upgrades/v3_8/upgrade.yml

+ 0 - 14
playbooks/byo/openshift-cluster/upgrades/v3_8/upgrade_control_plane.yml

@@ -1,14 +0,0 @@
----
-#
-# Control Plane Upgrade Playbook
-#
-# Upgrades masters and Docker (only on standalone etcd hosts)
-#
-# This upgrade does not include:
-# - node service running on masters
-# - docker running on masters
-# - node service running on dedicated nodes
-#
-# You can run the upgrade_nodes.yml playbook after this to upgrade these components separately.
-#
-- import_playbook: ../../../../common/openshift-cluster/upgrades/v3_8/upgrade_control_plane.yml

+ 0 - 7
playbooks/byo/openshift-cluster/upgrades/v3_8/upgrade_nodes.yml

@@ -1,7 +0,0 @@
----
-#
-# Node Upgrade Playbook
-#
-# Upgrades nodes only, but requires the control plane to have already been upgraded.
-#
-- import_playbook: ../../../../common/openshift-cluster/upgrades/v3_8/upgrade_nodes.yml

+ 4 - 1
playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml

@@ -270,8 +270,11 @@
   - include_tasks: docker/tasks/upgrade.yml
     when: l_docker_upgrade is defined and l_docker_upgrade | bool and not openshift_is_atomic | bool
 
+
 - name: Drain and upgrade master nodes
-  hosts: oo_masters_to_config:&oo_nodes_to_upgrade
+  # There is no need to update nodes in the middle of double upgrade
+  # This would skip node update to 3.8 during 3.7->3.9 upgrade
+  hosts: "{{ l_double_upgrade_cp | default(False) | ternary('all:!all', 'oo_masters_to_config:&oo_nodes_to_upgrade') }}"
   # This var must be set with -e on invocation, as it is not a per-host inventory var
   # and is evaluated early. Values such as "20%" can also be used.
   serial: "{{ openshift_upgrade_control_plane_nodes_serial | default(1) }}"
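
The host pattern above is the core of the node-skip: Ansible's `ternary` filter picks `all:!all` (a pattern that matches no hosts, so the drain play becomes a no-op) whenever `l_double_upgrade_cp` is set. A standalone sketch of that evaluation in plain Jinja2, stubbing the Ansible-provided `ternary` filter:

```python
# Sketch of how the host pattern above evaluates. `ternary` is an Ansible
# filter, not a Jinja2 builtin, so we register an equivalent stub here.
from jinja2 import Environment

env = Environment()
env.filters['ternary'] = lambda cond, true_val, false_val: true_val if cond else false_val

pattern = ("{{ l_double_upgrade_cp | default(False) | "
           "ternary('all:!all', 'oo_masters_to_config:&oo_nodes_to_upgrade') }}")
tmpl = env.from_string(pattern)

# Normal upgrade: variable unset, so masters that are also nodes get drained.
print(tmpl.render())                          # oo_masters_to_config:&oo_nodes_to_upgrade
# Double (3.7 -> 3.9) upgrade: 'all:!all' matches no hosts; the play is skipped.
print(tmpl.render(l_double_upgrade_cp=True))  # all:!all
```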

+ 0 - 20
playbooks/common/openshift-cluster/upgrades/v3_8/master_config_upgrade.yml

@@ -1,20 +0,0 @@
----
-- modify_yaml:
-    dest: "{{ openshift.common.config_base}}/master/master-config.yaml"
-    yaml_key: 'controllerConfig.election.lockName'
-    yaml_value: 'openshift-master-controllers'
-
-- modify_yaml:
-    dest: "{{ openshift.common.config_base}}/master/master-config.yaml"
-    yaml_key: 'controllerConfig.serviceServingCert.signer.certFile'
-    yaml_value: service-signer.crt
-
-- modify_yaml:
-    dest: "{{ openshift.common.config_base}}/master/master-config.yaml"
-    yaml_key: 'controllerConfig.serviceServingCert.signer.keyFile'
-    yaml_value: service-signer.key
-
-- modify_yaml:
-    dest: "{{ openshift.common.config_base }}/master/master-config.yaml"
-    yaml_key: servingInfo.clientCA
-    yaml_value: ca.crt

+ 0 - 1
playbooks/common/openshift-cluster/upgrades/v3_8/roles

@@ -1 +0,0 @@
-../../../../../roles/

+ 0 - 56
playbooks/common/openshift-cluster/upgrades/v3_8/upgrade.yml

@@ -1,56 +0,0 @@
----
-#
-# Full Control Plane + Nodes Upgrade
-#
-- import_playbook: ../init.yml
-  tags:
-  - pre_upgrade
-
-- name: Configure the upgrade target for the common upgrade tasks
-  hosts: oo_all_hosts
-  tags:
-  - pre_upgrade
-  tasks:
-  - set_fact:
-      openshift_upgrade_target: '3.8'
-      openshift_upgrade_min: '3.7'
-
-- import_playbook: ../pre/config.yml
-  vars:
-    l_upgrade_repo_hosts: "oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config"
-    l_upgrade_no_proxy_hosts: "oo_masters_to_config:oo_nodes_to_upgrade"
-    l_upgrade_health_check_hosts: "oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config"
-    l_upgrade_verify_targets_hosts: "oo_masters_to_config:oo_nodes_to_upgrade"
-    l_upgrade_docker_target_hosts: "oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config"
-    l_upgrade_excluder_hosts: "oo_nodes_to_config:oo_masters_to_config"
-    openshift_protect_installed_version: False
-
-- name: Flag pre-upgrade checks complete for hosts without errors
-  hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config
-  tasks:
-  - set_fact:
-      pre_upgrade_complete: True
-
-# Pre-upgrade completed
-
-- import_playbook: ../upgrade_control_plane.yml
-
-# All controllers must be stopped at the same time then restarted
-- name: Cycle all controller services to force new leader election mode
-  hosts: oo_masters_to_config
-  gather_facts: no
-  roles:
-  - role: openshift_facts
-  tasks:
-  - name: Stop {{ openshift_service_type }}-master-controllers
-    systemd:
-      name: "{{ openshift_service_type }}-master-controllers"
-      state: stopped
-  - name: Start {{ openshift_service_type }}-master-controllers
-    systemd:
-      name: "{{ openshift_service_type }}-master-controllers"
-      state: started
-
-- import_playbook: ../upgrade_nodes.yml
-
-- import_playbook: ../post_control_plane.yml

+ 0 - 67
playbooks/common/openshift-cluster/upgrades/v3_8/upgrade_control_plane.yml

@@ -1,67 +0,0 @@
----
-#
-# Control Plane Upgrade Playbook
-#
-# Upgrades masters and Docker (only on standalone etcd hosts)
-#
-# This upgrade does not include:
-# - node service running on masters
-# - docker running on masters
-# - node service running on dedicated nodes
-#
-# You can run the upgrade_nodes.yml playbook after this to upgrade these components separately.
-#
-- import_playbook: ../init.yml
-  vars:
-    l_upgrade_no_switch_firewall_hosts: "oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config"
-    l_init_fact_hosts: "oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config"
-  when: not skip_version_info | default(false)
-
-- name: Configure the upgrade target for the common upgrade tasks
-  hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config
-  tasks:
-  - set_fact:
-      openshift_upgrade_target: '3.8'
-      openshift_upgrade_min: '3.7'
-
-- import_playbook: ../pre/config.yml
-  # These vars a meant to exclude oo_nodes from plays that would otherwise include
-  # them by default.
-  vars:
-    l_openshift_version_set_hosts: "oo_etcd_to_config:oo_masters_to_config:!oo_first_master"
-    l_openshift_version_check_hosts: "oo_masters_to_config:!oo_first_master"
-    l_upgrade_repo_hosts: "oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config"
-    l_upgrade_no_proxy_hosts: "oo_masters_to_config"
-    l_upgrade_health_check_hosts: "oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config"
-    l_upgrade_verify_targets_hosts: "oo_masters_to_config"
-    l_upgrade_docker_target_hosts: "oo_masters_to_config:oo_etcd_to_config"
-    l_upgrade_excluder_hosts: "oo_masters_to_config"
-    openshift_protect_installed_version: False
-
-- name: Flag pre-upgrade checks complete for hosts without errors
-  hosts: oo_masters_to_config:oo_etcd_to_config
-  tasks:
-  - set_fact:
-      pre_upgrade_complete: True
-
-# Pre-upgrade completed
-
-- import_playbook: ../upgrade_control_plane.yml
-
-# All controllers must be stopped at the same time then restarted
-- name: Cycle all controller services to force new leader election mode
-  hosts: oo_masters_to_config
-  gather_facts: no
-  roles:
-  - role: openshift_facts
-  tasks:
-  - name: Stop {{ openshift_service_type }}-master-controllers
-    systemd:
-      name: "{{ openshift_service_type }}-master-controllers"
-      state: stopped
-  - name: Start {{ openshift_service_type }}-master-controllers
-    systemd:
-      name: "{{ openshift_service_type }}-master-controllers"
-      state: started
-
-- import_playbook: ../post_control_plane.yml

+ 0 - 38
playbooks/common/openshift-cluster/upgrades/v3_8/upgrade_nodes.yml

@@ -1,38 +0,0 @@
----
-#
-# Node Upgrade Playbook
-#
-# Upgrades nodes only, but requires the control plane to have already been upgraded.
-#
-- import_playbook: ../init.yml
-  tags:
-  - pre_upgrade
-
-- name: Configure the upgrade target for the common upgrade tasks
-  hosts: oo_all_hosts
-  tags:
-  - pre_upgrade
-  tasks:
-  - set_fact:
-      openshift_upgrade_target: '3.8'
-      openshift_upgrade_min: '3.7'
-
-- import_playbook: ../pre/config.yml
-  vars:
-    l_upgrade_repo_hosts: "oo_nodes_to_config"
-    l_upgrade_no_proxy_hosts: "oo_all_hosts"
-    l_upgrade_health_check_hosts: "oo_nodes_to_config"
-    l_upgrade_verify_targets_hosts: "oo_nodes_to_config"
-    l_upgrade_docker_target_hosts: "oo_nodes_to_config"
-    l_upgrade_excluder_hosts: "oo_nodes_to_config:!oo_masters_to_config"
-    l_upgrade_nodes_only: True
-
-- name: Flag pre-upgrade checks complete for hosts without errors
-  hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config
-  tasks:
-  - set_fact:
-      pre_upgrade_complete: True
-
-# Pre-upgrade completed
-
-- import_playbook: ../upgrade_nodes.yml

+ 16 - 0
roles/lib_utils/action_plugins/sanity_checks.py

@@ -33,6 +33,10 @@ ENTERPRISE_TAG_REGEX = {'re': '(^v\\d+\\.\\d+(\\.\\d+)*(-\\d+(\\.\\d+)*)?$)',
 IMAGE_TAG_REGEX = {'origin': ORIGIN_TAG_REGEX,
                    'openshift-enterprise': ENTERPRISE_TAG_REGEX}
 
+UNSUPPORTED_OCP_VERSIONS = {
+    '^3.8.*$': 'OCP 3.8 is not supported and cannot be installed'
+}
+
 CONTAINERIZED_NO_TAG_ERROR_MSG = """To install a containerized Origin release,
 you must set openshift_release or openshift_image_tag in your inventory to
 specify which version of the OpenShift component images to use.
@@ -144,6 +148,17 @@ class ActionModule(ActionBase):
                 msg = '{} must be 63 characters or less'.format(varname)
                 raise errors.AnsibleModuleError(msg)
 
+    def check_supported_ocp_version(self, hostvars, host, openshift_deployment_type):
+        """Checks that the OCP version is supported"""
+        if openshift_deployment_type == 'origin':
+            return None
+        openshift_version = self.template_var(hostvars, host, 'openshift_version')
+        for regex_to_match, error_msg in UNSUPPORTED_OCP_VERSIONS.items():
+            res = re.match(regex_to_match, str(openshift_version))
+            if res is not None:
+                raise errors.AnsibleModuleError(error_msg)
+        return None
+
     def run_checks(self, hostvars, host):
         """Execute the hostvars validations against host"""
         distro = self.template_var(hostvars, host, 'ansible_distribution')
@@ -153,6 +168,7 @@ class ActionModule(ActionBase):
         self.no_origin_image_version(hostvars, host, odt)
         self.network_plugin_check(hostvars, host)
         self.check_hostname_vars(hostvars, host)
+        self.check_supported_ocp_version(hostvars, host, odt)
 
     def run(self, tmp=None, task_vars=None):
         result = super(ActionModule, self).run(tmp, task_vars)
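
A standalone illustration of the gate added above (note the dots in the pattern are unescaped, so `re.match` treats them as wildcards; that is harmless here because real `openshift_version` values are dotted version strings):

```python
import re

# Mirrors the UNSUPPORTED_OCP_VERSIONS check added above.
UNSUPPORTED_OCP_VERSIONS = {
    '^3.8.*$': 'OCP 3.8 is not supported and cannot be installed'
}

def check_supported_ocp_version(openshift_version, deployment_type):
    if deployment_type == 'origin':
        return  # the gate only applies to openshift-enterprise installs
    for pattern, error_msg in UNSUPPORTED_OCP_VERSIONS.items():
        if re.match(pattern, str(openshift_version)) is not None:
            raise ValueError(error_msg)

check_supported_ocp_version('3.9.0', 'openshift-enterprise')  # passes
check_supported_ocp_version('3.8.1', 'origin')                # passes (check skipped)
try:
    check_supported_ocp_version('3.8.1', 'openshift-enterprise')
except ValueError as exc:
    print(exc)  # OCP 3.8 is not supported and cannot be installed
```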

+ 1 - 1
roles/lib_utils/library/repoquery.py

@@ -547,13 +547,13 @@ class Repoquery(RepoqueryCLI):
         rval = self._repoquery_cmd(repoquery_cmd, True, 'raw')
 
         # check to see if there are actual results
+        rval['package_name'] = self.name
         if rval['results']:
             processed_versions = Repoquery.process_versions(rval['results'].strip())
             formatted_versions = self.format_versions(processed_versions)
 
             rval['package_found'] = True
             rval['versions'] = formatted_versions
-            rval['package_name'] = self.name
 
             if self.verbose:
                 rval['raw_versions'] = processed_versions

+ 1 - 1
roles/lib_utils/src/class/repoquery.py

@@ -128,13 +128,13 @@ class Repoquery(RepoqueryCLI):
         rval = self._repoquery_cmd(repoquery_cmd, True, 'raw')
 
         # check to see if there are actual results
+        rval['package_name'] = self.name
         if rval['results']:
             processed_versions = Repoquery.process_versions(rval['results'].strip())
             formatted_versions = self.format_versions(processed_versions)
 
             rval['package_found'] = True
             rval['versions'] = formatted_versions
-            rval['package_name'] = self.name
 
             if self.verbose:
                 rval['raw_versions'] = processed_versions

+ 1 - 1
roles/openshift_version/tasks/check_available_rpms.yml

@@ -6,5 +6,5 @@
   register: rpm_results
 
 - fail:
-    msg: "Package {{ openshift_service_type}} not found"
+    msg: "Package '{{ rpm_results.results.package_name }}' not found"
   when: not rpm_results.results.package_found
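
The two repoquery changes above exist to support this message: `package_name` is now set before the `package_found` branch, so the fail task can name the missing package even when the lookup returns nothing. A minimal sketch of the resulting behaviour (the dict shape is simplified from the module's actual return value):

```python
# Simplified sketch of the repoquery result handling changed above.
def query_result(name, raw_results):
    rval = {'results': raw_results, 'package_found': False}
    # Set package_name unconditionally (the fix above), so the failure
    # message can report it even when the package was not found.
    rval['package_name'] = name
    if rval['results']:
        rval['package_found'] = True
    return rval

rval = query_result('atomic-openshift', '')  # nothing found in enabled repos
if not rval['package_found']:
    # Corresponds to: msg: "Package '{{ rpm_results.results.package_name }}' not found"
    print("Package '{}' not found".format(rval['package_name']))
```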