
Merge pull request #10877 from vrutkovs/devel-40-master-rebase

Devel 4.0: master rebase
OpenShift Merge Robot 6 years ago
parent
commit
6cae05276d
48 changed files with 897 additions and 217 deletions
  1. .github/PULL_REQUEST_TEMPLATE.md (+6 -0)
  2. .tito/packages/openshift-ansible (+1 -1)
  3. README.md (+9 -1)
  4. images/installer/Dockerfile (+2 -2)
  5. images/installer/Dockerfile.rhel7 (+0 -3)
  6. images/installer/origin-extra-root/etc/yum.repos.d/centos-ansible27.repo (+6 -0)
  7. images/installer/root/usr/local/bin/entrypoint-provider (+1 -1)
  8. inventory/dynamic/gcp/ansible.cfg (+2 -0)
  9. inventory/dynamic/gcp/group_vars/all/00_defaults.yml (+4 -1)
  10. inventory/hosts.example (+14 -0)
  11. openshift-ansible.spec (+211 -1)
  12. playbooks/bootkube.yml (+0 -10)
  13. playbooks/common/openshift-cluster/upgrades/upgrade_components.yml (+2 -0)
  14. playbooks/deploy_cluster_40.yml (+50 -19)
  15. playbooks/init/base_packages.yml (+1 -1)
  16. playbooks/openshift-logging/private/config.yml (+9 -2)
  17. playbooks/openstack/configuration.md (+161 -1)
  18. playbooks/openstack/openshift-cluster/install.yml (+3 -3)
  19. playbooks/openstack/openshift-cluster/provision.yml (+3 -1)
  20. roles/container_runtime/tasks/package_crio.yml (+5 -1)
  21. roles/container_runtime/templates/crio.conf.j2 (+8 -6)
  22. roles/container_runtime/templates/registries.conf (+0 -46)
  23. roles/container_runtime/templates/registries.conf.j2 (+27 -0)
  24. roles/lib_utils/action_plugins/master_check_paths_in_config.py (+1 -0)
  25. roles/lib_utils/action_plugins/sanity_checks.py (+2 -5)
  26. roles/lib_utils/filter_plugins/openshift_master.py (+5 -1)
  27. roles/openshift_cli/tasks/main.yml (+9 -5)
  28. roles/openshift_control_plane/files/controller.yaml (+5 -0)
  29. roles/openshift_control_plane/tasks/main.yml (+11 -0)
  30. roles/openshift_facts/defaults/main.yml (+0 -21)
  31. roles/openshift_gcp/defaults/main.yml (+18 -5)
  32. roles/openshift_gcp/tasks/deprovision.yml (+62 -0)
  33. roles/openshift_gcp/tasks/main.yml (+82 -3)
  34. roles/openshift_gcp/tasks/remove_bootstrap.yml (+93 -0)
  35. roles/openshift_gcp/tasks/setup_scale_group_facts.yml (+10 -4)
  36. roles/openshift_gcp/templates/additional_settings.j2.sh (+2 -42)
  37. roles/openshift_gcp/templates/remove.j2.sh (+5 -25)
  38. roles/openshift_master_facts/tasks/main.yml (+1 -0)
  39. roles/openshift_node/tasks/main.yml (+3 -3)
  40. roles/openshift_node40/tasks/install.yml (+5 -0)
  41. roles/openshift_node_group/files/sync.yaml (+3 -0)
  42. roles/openshift_openstack/defaults/main.yml (+8 -0)
  43. roles/openshift_openstack/tasks/container-storage-setup.yml (+11 -0)
  44. roles/openshift_repos/tasks/centos_repos.yml (+1 -1)
  45. roles/openshift_sdn/files/sdn.yaml (+3 -1)
  46. roles/openshift_storage_glusterfs/README.md (+1 -1)
  47. test/gcp/build_image.yml (+15 -0)
  48. test/gcp/install.yml (+16 -0)

+ 6 - 0
.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,6 @@
+NOTICE
+======
+
+Master branch is closed! A major refactor is ongoing in devel-40.
+Changes for 3.x should be made directly to the latest release branch they're
+relevant to and backported from there.

+ 1 - 1
.tito/packages/openshift-ansible

@@ -1 +1 @@
-4.0.0-0.42.0 ./
+4.0.0-0.96.0 ./

+ 9 - 1
README.md

@@ -2,6 +2,14 @@
 [![Build Status](https://travis-ci.org/openshift/openshift-ansible.svg?branch=master)](https://travis-ci.org/openshift/openshift-ansible)
 [![Coverage Status](https://coveralls.io/repos/github/openshift/openshift-ansible/badge.svg?branch=master)](https://coveralls.io/github/openshift/openshift-ansible?branch=master)
 
+NOTICE
+======
+
+Master branch is closed! A major refactor is ongoing in devel-40.
+Changes for 3.x should be made directly to the latest release branch they're
+relevant to and backported from there.
+
+
 # OpenShift Ansible
 
 This repository contains [Ansible](https://www.ansible.com/) roles and
@@ -153,7 +161,7 @@ created for you automatically.
 
 ## Complete Production Installation Documentation:
 
-- [OpenShift Container Platform](https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html)
+- [OpenShift Container Platform](https://docs.openshift.com/container-platform/3.11/install/running_install.html)
 - [OpenShift Origin](https://docs.okd.io/latest/install/index.html)
 
 ## Containerized OpenShift Ansible

+ 2 - 2
images/installer/Dockerfile

@@ -10,13 +10,13 @@ COPY images/installer/origin-extra-root /
 # install ansible and deps
 RUN INSTALL_PKGS="python-lxml python-dns pyOpenSSL python2-cryptography openssl python2-passlib httpd-tools openssh-clients origin-clients iproute patch" \
  && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
- && EPEL_PKGS="ansible python2-boto python2-boto3 python2-crypto which python2-pip.noarch python2-scandir python2-packaging azure-cli-2.0.46" \
+ && EPEL_PKGS="ansible-2.7.4 python2-boto python2-crypto which python2-pip.noarch python2-scandir python2-packaging azure-cli-2.0.46" \
  && yum install -y epel-release \
  && yum install -y --setopt=tsflags=nodocs $EPEL_PKGS \
  && if [ "$(uname -m)" == "x86_64" ]; then yum install -y https://sdodson.fedorapeople.org/google-cloud-sdk-183.0.0-3.el7.x86_64.rpm ; fi \
  && yum install -y java-1.8.0-openjdk-headless \
  && rpm -V $INSTALL_PKGS $EPEL_PKGS $EPEL_TESTING_PKGS \
- && pip install 'apache-libcloud~=2.2.1' 'SecretStorage<3' 'ansible[azure]' 'google-auth' \
+ && pip install 'apache-libcloud~=2.2.1' 'SecretStorage<3' 'ansible[azure]' 'google-auth' 'boto3==1.4.6' \
  && yum clean all
 
 LABEL name="openshift/origin-ansible" \

+ 0 - 3
images/installer/Dockerfile.rhel7

@@ -24,9 +24,6 @@ LABEL name="openshift3/ose-ansible" \
       io.openshift.expose-services="" \
       io.openshift.tags="openshift,install,upgrade,ansible" \
       com.redhat.component="aos3-installation-docker" \
-      version="v3.6.0" \
-      release="1" \
-      architecture="x86_64" \
       atomic.run="once"
 
 ENV USER_UID=1001 \

+ 6 - 0
images/installer/origin-extra-root/etc/yum.repos.d/centos-ansible27.repo

@@ -0,0 +1,6 @@
+
+[centos-ansible26-testing]
+name=CentOS Ansible 2.6 testing repo
+baseurl=https://cbs.centos.org/repos/configmanagement7-ansible-27-testing/x86_64/os/
+enabled=1
+gpgcheck=0

+ 1 - 1
images/installer/root/usr/local/bin/entrypoint-provider

@@ -45,7 +45,7 @@ if [[ -f "${FILES}/ssh-privatekey" ]]; then
   else
     keyfile="${HOME}/.ssh/id_rsa"
   fi
-  mkdir "${HOME}/.ssh"
+  mkdir -p "${HOME}/.ssh"
   rm -f "${keyfile}"
   cat "${FILES}/ssh-privatekey" > "${keyfile}"
   chmod 0600 "${keyfile}"

+ 2 - 0
inventory/dynamic/gcp/ansible.cfg

@@ -28,6 +28,8 @@ inventory_ignore_extensions = secrets.py, .pyc, .cfg, .crt
 # work around privilege escalation timeouts in ansible:
 timeout = 30
 
+stdout_callback = yaml
+
 # Uncomment to use the provided example inventory
 inventory = hosts.sh
 

+ 4 - 1
inventory/dynamic/gcp/group_vars/all/00_defaults.yml

@@ -20,6 +20,9 @@ openshift_master_cluster_hostname: "internal-openshift-master.{{ public_hosted_z
 openshift_master_cluster_public_hostname: "openshift-master.{{ public_hosted_zone }}"
 openshift_master_default_subdomain: "{{ wildcard_zone }}"
 
+mcd_port: 49500
+mcd_endpoint: "{{ openshift_master_cluster_public_hostname }}:{{ mcd_port }}"
+
 # Cloud specific settings
 openshift_cloudprovider_kind: gce
 openshift_hosted_registry_storage_provider: gcs
@@ -31,7 +34,7 @@ openshift_master_identity_providers:
 openshift_node_port_range: 30000-32000
 openshift_node_open_ports: [{"service":"Router stats port", "port":"1936/tcp"}, {"service":"Allowed open host ports", "port":"9000-10000/tcp"}, {"service":"Allowed open host ports", "port":"9000-10000/udp"}]
 os_sdn_network_plugin_name: redhat/openshift-ovs-networkpolicy
-openshift_node_sdn_mtu: 1410
+openshift_node_sdn_mtu: 1500
 osm_cluster_network_cidr: 172.16.0.0/16
 osm_host_subnet_length: 9
 openshift_portal_net: 172.30.0.0/16
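
As an aside on the new vars above: `mcd_endpoint` is composed from `openshift_master_cluster_public_hostname` and `mcd_port`. A minimal sketch of how it resolves, assuming an invented `public_hosted_zone` of `example.com`:

```yaml
# Illustrative only; example.com is an invented zone, not from this PR.
- debug:
    msg: "{{ openshift_master_cluster_public_hostname }}:{{ mcd_port }}"
  vars:
    public_hosted_zone: example.com
    openshift_master_cluster_public_hostname: "openshift-master.{{ public_hosted_zone }}"
    mcd_port: 49500
# Prints: openshift-master.example.com:49500
```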

+ 14 - 0
inventory/hosts.example

@@ -249,6 +249,20 @@ debug_level=2
 # or
 #openshift_master_request_header_ca_file=<path to local ca file to use>
 
+# GitHub auth
+#openshift_master_identity_providers=[{"name": "github", "login": "true", "challenge": "false", "kind": "GitHubIdentityProvider", "mappingMethod": "claim", "client_id": "my_client_id", "client_secret": "my_client_secret", "teams": ["team1", "team2"], "hostname": "githubenterprise.example.com", "ca": "" }]
+#
+# Configure github CA certificate
+# Specify either the ASCII contents of the certificate or the path to
+# the local file that will be copied to the remote host. CA
+# certificate contents will be copied to master systems and saved
+# within /etc/origin/master/ with a filename matching the "ca" key set
+# within the GitHubIdentityProvider.
+#
+#openshift_master_github_ca=<ca text>
+# or
+#openshift_master_github_ca_file=<path to local ca file to use>
+
 # CloudForms Management Engine (ManageIQ) App Install
 #
 # Enables installation of MIQ server. Recommended for dedicated
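
For illustration, a hedged sketch of the same GitHub provider expressed as YAML vars, with `openshift_master_github_ca_file` supplying the CA. All values are placeholders; per the filter-plugin change later in this PR, the `ca` key is rewritten to `/etc/origin/master/<name>_github_ca.crt` regardless of what the inventory sets:

```yaml
# Hypothetical group_vars sketch; every value below is a placeholder.
openshift_master_identity_providers:
- name: github
  kind: GitHubIdentityProvider
  mappingMethod: claim
  login: "true"
  challenge: "false"
  client_id: my_client_id
  client_secret: my_client_secret
  teams: [team1, team2]
  hostname: githubenterprise.example.com
  ca: ""   # rewritten by the filter plugin to /etc/origin/master/<name>_github_ca.crt
# Local CA file, copied to the masters by the openshift_control_plane role:
openshift_master_github_ca_file: /path/to/github-ca.crt
```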

+ 211 - 1
openshift-ansible.spec

@@ -10,7 +10,7 @@
 
 Name:           openshift-ansible
 Version:        4.0.0
-Release:        0.42.0%{?dist}
+Release:        0.96.0%{?dist}
 Summary:        Openshift and Atomic Enterprise Ansible
 License:        ASL 2.0
 URL:            https://github.com/openshift/openshift-ansible
@@ -189,6 +189,216 @@ BuildArch:     noarch
 %{_datadir}/ansible/%{name}/test
 
 %changelog
+* Wed Dec 12 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.96.0
+- Revert "Devel 4.0: CI test" (sdodson@redhat.com)
+- DEBUG: skip openshift-apiserver operator (roignac@gmail.com)
+- Add retries when installing openshift packages (roignac@gmail.com)
+- Wait for core operators to come up (roignac@gmail.com)
+- GCP: open ports on masters for cadvisor and CVO (roignac@gmail.com)
+- GCP: open port on masters to collect cadvisor metrics (roignac@gmail.com)
+- Don't install atomic - we don't use it (roignac@gmail.com)
+- Install nfs-utils on nodes to pass storage tests (roignac@gmail.com)
+- GCP: use YAML output (roignac@gmail.com)
+- bootstrap kubeconfig location is now /opt/openshift (roignac@gmail.com)
+- GCP: set MTU to 1500 (1450 on veth + 50) (roignac@gmail.com)
+- Router is now a deployment (roignac@gmail.com)
+- Open ports for cadvisor and CVO metrics - this is master-internal
+  (roignac@gmail.com)
+- GCP firewall: nodes don't expose 80/443 (roignac@gmail.com)
+- Install boto3 from pip (roignac@gmail.com)
+- base: install python-docker-py (roignac@gmail.com)
+- Remove crio pause_image hack (roignac@gmail.com)
+- GCP: include all etcd discovery records in one line (roignac@gmail.com)
+- HACK CRIO: set docker.io as a source for unqualified images
+  (roignac@gmail.com)
+- Fix ident errors in new playbooks (roignac@gmail.com)
+- Wait for ingress to appear (roignac@gmail.com)
+- HACK GCP: create and remove etcd discovery entries via a script
+  (roignac@gmail.com)
+- Rework playbooks to setup 4.0 on GCP (roignac@gmail.com)
+- Enhance parse_ignition file content decoding (mgugino@redhat.com)
+- Add additional parse_igintion options and support (mgugino@redhat.com)
+- WIP: Scale node to new-installer cluster (mgugino@redhat.com)
+
+* Tue Dec 11 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.95.0
+- Dockerfile.rhel7: remove superfluous labels (lmeyer@redhat.com)
+
+* Mon Dec 10 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.94.0
+- 
+
+* Sun Dec 09 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.93.0
+- 
+
+* Sat Dec 08 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.92.0
+- 
+
+* Sat Dec 08 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.91.0
+- 
+
+* Fri Dec 07 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.90.0
+- 
+
+* Fri Dec 07 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.89.0
+- 
+
+* Thu Dec 06 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.88.0
+- 
+
+* Thu Dec 06 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.87.0
+- 
+
+* Thu Dec 06 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.86.0
+- 
+
+* Thu Dec 06 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.85.0
+- 
+
+* Wed Dec 05 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.84.0
+- 
+
+* Tue Dec 04 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.83.0
+- 
+
+* Mon Dec 03 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.82.0
+- 
+
+* Sun Dec 02 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.81.0
+- 
+
+* Sat Dec 01 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.80.0
+- 
+
+* Sat Dec 01 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.79.0
+- 
+
+* Thu Nov 29 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.78.0
+- 
+
+* Wed Nov 28 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.77.0
+- 
+
+* Tue Nov 27 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.76.0
+- 
+
+* Tue Nov 27 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.75.0
+- 
+
+* Sun Nov 25 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.74.0
+- 
+
+* Sun Nov 25 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.73.0
+- 
+
+* Sat Nov 24 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.72.0
+- 
+
+* Sat Nov 24 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.71.0
+- 
+
+* Fri Nov 23 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.70.0
+- 
+
+* Fri Nov 23 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.69.0
+- 
+
+* Thu Nov 22 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.68.0
+- 
+
+* Wed Nov 21 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.67.0
+- 
+
+* Tue Nov 20 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.66.0
+- 
+
+* Tue Nov 20 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.65.0
+- 
+
+* Tue Nov 20 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.64.0
+- 
+
+* Mon Nov 19 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.63.0
+- 
+
+* Sun Nov 18 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.62.0
+- 
+
+* Sat Nov 17 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.61.0
+- 
+
+* Fri Nov 16 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.60.0
+- 
+
+* Thu Nov 15 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.59.0
+- 
+
+* Wed Nov 14 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.58.0
+- 
+
+* Tue Nov 13 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.57.0
+- 
+
+* Mon Nov 12 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.56.0
+- 
+
+* Mon Nov 12 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.55.0
+- 
+
+* Sat Nov 10 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.54.0
+- 
+
+* Sat Nov 10 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.53.0
+- GitHubIdentityProvider catering for GitHub Enterprise and includes examples
+  on using the provider. Installation includes parameters for ca and hostname
+  (GH enterprise specific) (ckyriaki@redhat.com)
+- Check both service catalog and install vars (ruju@itu.dk)
+
+* Thu Nov 08 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.52.0
+- Simplify PR template and add text to README.md (sdodson@redhat.com)
+- Pre-pull CLI image using openshift_container_cli (vrutkovs@redhat.com)
+- Start node image prepull after CRIO is restarted (vrutkovs@redhat.com)
+- sdn: tolerate all taints (vrutkovs@redhat.com)
+- sync: tolerate all taints (vrutkovs@redhat.com)
+- Update centos_repos.yml (camabeh@users.noreply.github.com)
+- Update centos_repos.yml (camabeh@users.noreply.github.com)
+- Update .github/PULL_REQUEST_TEMPLATE.md (roignac@gmail.com)
+- Add notice about MASTER branch (sdodson@redhat.com)
+
+* Thu Nov 08 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.51.0
+- Mount /etc/pki into controller pod (mchappel@redhat.com)
+
+* Wed Nov 07 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.50.0
+- Restart docker after openstack storage setup (tzumainn@redhat.com)
+- Update crio.conf.j2 template for registries (umohnani@redhat.com)
+- Fix master paths check, while using Istio (faust64@gmail.com)
+
+* Tue Nov 06 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.49.0
+- Add instructions to use cri-o in openstack (e.minguez@gmail.com)
+- Fix broken link in README.md (artheus@users.noreply.github.com)
+- openshift_prometheus: cleanup unused variables (pgier@redhat.com)
+- fix gce-logging problem (rmeggins@redhat.com)
+- Run the init/main playbook properly (e.minguez@gmail.com)
+
+* Mon Nov 05 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.48.0
+- 
+
+* Mon Nov 05 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.47.0
+- 
+
+* Sun Nov 04 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.46.0
+- 
+
+* Sat Nov 03 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.45.0
+- added needed space in error message as stated in bug# 1645718
+  (pruan@redhat.com)
+
+* Fri Nov 02 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.44.0
+- glusterfs: Fix a typo in the README (obnox@redhat.com)
+
+* Thu Nov 01 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.43.0
+- Update playbooks/azure/openshift-cluster/build_node_image.yml
+  (roignac@gmail.com)
+- add oreg_url check (mangirdas@judeikis.lt)
+
 * Wed Oct 31 2018 AOS Automation Release Team <aos-team-art@redhat.com> 4.0.0-0.42.0
 - Adding configuration documentation for etcd (bedin@redhat.com)
 - Fixing provisioning of separate etcd (bedin@redhat.com)

+ 0 - 10
playbooks/bootkube.yml

@@ -27,21 +27,11 @@
       name: openshift_node40
       tasks_from: install.yml
 
-- name: setup AWS creds
-  hosts: masters:bootstrap:workers
-  tasks:
-  - import_role:
-      name: openshift_node40
-      tasks_from: aws.yml
-
 - name: Config bootstrap node
   hosts: bootstrap
   tasks:
   - import_role:
       name: openshift_node40
-      tasks_from: aws.yml
-  - import_role:
-      name: openshift_node40
       tasks_from: config.yml
   - import_role:
       name: openshift_node40

+ 2 - 0
playbooks/common/openshift-cluster/upgrades/upgrade_components.yml

@@ -14,11 +14,13 @@
       tasks_from: install.yml
     when:
     - openshift_enable_service_catalog | default(true) | bool
+    - ansible_service_broker_install | default(true) | bool
   - import_role:
       name: template_service_broker
       tasks_from: upgrade.yml
     when:
     - openshift_enable_service_catalog | default(true) | bool
+    - template_service_broker_install | default(true) | bool
 
 - import_playbook: ../../../olm/private/config.yml
   when: openshift_enable_olm | default(false) | bool
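
The two added guards lean on Ansible's `default` filter: an undefined variable still evaluates true, so only an explicit `false` in the inventory skips the component upgrade. A minimal sketch of the semantics (the debug task is illustrative):

```yaml
# Runs when ansible_service_broker_install is undefined or true;
# skipped only when the inventory sets it to false explicitly.
- debug:
    msg: "would upgrade the ansible service broker"
  when:
  - openshift_enable_service_catalog | default(true) | bool
  - ansible_service_broker_install | default(true) | bool
```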

+ 50 - 19
playbooks/deploy_cluster_40.yml

@@ -2,14 +2,14 @@
 - name: run the init
   import_playbook: init/main.yml
   vars:
-    l_init_fact_hosts: "bootstrap:masters:workers"
-    l_openshift_version_set_hosts: "bootstrap:masters:workers"
+    l_init_fact_hosts: "nodes"
+    l_openshift_version_set_hosts: "nodes"
     l_install_base_packages: True
     l_repo_hosts: "all:!all"
 
 # TODO(michaelgugino): break up the rest of this file into reusable chunks.
 - name: Install nodes
-  hosts: bootstrap:masters:workers
+  hosts: nodes
   roles:
   - role: container_runtime
   tasks:
@@ -22,16 +22,6 @@
   - import_role:
       name: container_runtime
       tasks_from: package_crio.yml
-  - name: FIXME pause_image
-    ini_file:
-      dest: "/etc/crio/crio.conf"
-      section: crio.image
-      option: pause_image
-      value: '"docker.io/openshift/origin-pod:v4.0"'
-  - name: FIXME restart crio
-    service:
-      name: crio
-      state: restarted
   - import_role:
       name: openshift_node40
       tasks_from: install.yml
@@ -102,12 +92,11 @@
   hosts: bootstrap
   tasks:
   - name: Wait for temporary control plane to show up
-    #TODO: Rework with k8s module
     oc_obj:
       state: list
       kind: pod
       namespace: kube-system
-      kubeconfig: /opt/tectonic/auth/kubeconfig
+      kubeconfig: /opt/openshift/auth/kubeconfig
     register: control_plane_pods
     retries: 60
     delay: 10
@@ -115,12 +104,11 @@
     - "'results' in control_plane_pods and 'results' in control_plane_pods.results"
     - control_plane_pods.results.results[0]['items'] | length > 0
   - name: Wait for master nodes to show up
-    #TODO: Rework with k8s module
     oc_obj:
       state: list
       kind: node
       selector: "node-role.kubernetes.io/master"
-      kubeconfig: /opt/tectonic/auth/kubeconfig
+      kubeconfig: /opt/openshift/auth/kubeconfig
     register: master_nodes
     retries: 60
     delay: 10
@@ -132,10 +120,53 @@
     #10 mins to complete temp plane
     retries: 120
     delay: 5
-    until: ansible_facts.services['bootkube.service'].state == 'stopped'
+    until: "'bootkube.service' not in ansible_facts.services"
     ignore_errors: true
   - name: Fetch kubeconfig for test container
     fetch:
-      src: /opt/tectonic/auth/kubeconfig
+      src: /opt/openshift/auth/kubeconfig
       dest: /tmp/artifacts/installer/auth/kubeconfig
       flat: yes
+
+  - name: Wait for core operators to appear and complete
+    oc_obj:
+      state: list
+      kind: ClusterOperator
+      name: "{{ item }}"
+      kubeconfig: /opt/openshift/auth/kubeconfig
+    register: operator
+    #Give each operator 5 mins to come up
+    retries: 60
+    delay: 5
+    until:
+    - "'results' in operator"
+    - "'results' in operator.results"
+    - operator.results.results | length > 0
+    - "'status' in operator.results.results[0]"
+    - "'conditions' in operator.results.results[0]['status']"
+    - operator.results.results[0].status.conditions | selectattr('type', 'match', '^Available$') | map(attribute='status') | join | bool == True
+    - operator.results.results[0].status.conditions | selectattr('type', 'match', '^Progressing$') | map(attribute='status') | join | bool == False
+    - operator.results.results[0].status.conditions | selectattr('type', 'match', '^Failing$') | map(attribute='status') | join | bool == False
+    with_items:
+    - machine-config-operator
+    # Fails often with 'x of y nodes are not at revision n'
+    #- openshift-cluster-kube-apiserver-operator
+    # Failing with 'ConfigObservationFailing: configmap/cluster-config-v1.kube-system: no recognized cloud provider platform found' - https://github.com/openshift/cluster-kube-controller-manager-operator/issues/100
+    #- openshift-cluster-kube-controller-manager-operator
+    # Fails often with 'x of y nodes are not at revision n'
+    #- openshift-cluster-kube-scheduler-operator
+    #- openshift-cluster-openshift-apiserver-operator
+    - openshift-cluster-openshift-controller-manager-operator
+    - openshift-ingress-operator
+    ignore_errors: true
+
+  - block:
+    - name: Output the operators status
+      oc_obj:
+        state: list
+        kind: ClusterOperator
+        selector: ""
+        kubeconfig: /opt/openshift/auth/kubeconfig
+    - fail:
+        msg: Required operators didn't complete the install
+    when: operator.failed
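
The long `until` chain above is easiest to read step by step. A self-contained sketch of how it evaluates against a hand-written ClusterOperator status (the condition values are invented for illustration):

```yaml
# selectattr picks the condition whose type matches, map extracts its
# status string, join collapses the one-element list, bool parses "True".
- assert:
    that:
    - conditions | selectattr('type', 'match', '^Available$') | map(attribute='status') | join | bool
    - not (conditions | selectattr('type', 'match', '^Progressing$') | map(attribute='status') | join | bool)
    - not (conditions | selectattr('type', 'match', '^Failing$') | map(attribute='status') | join | bool)
  vars:
    conditions:
    - {type: Available, status: "True"}
    - {type: Progressing, status: "False"}
    - {type: Failing, status: "False"}
```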

+ 1 - 1
playbooks/init/base_packages.yml

@@ -36,7 +36,7 @@
       - "{{ 'python3-PyYAML' if ansible_distribution == 'Fedora' else 'PyYAML' }}"
       - libsemanage-python
       - yum-utils
-      - "{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker' }}"
+      - "{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker-py' }}"
       pkg_list_non_fedora:
       - 'python-ipaddress'
       pkg_list_use_non_fedora: "{{ ansible_distribution != 'Fedora' | bool }}"

+ 9 - 2
playbooks/openshift-logging/private/config.yml

@@ -76,6 +76,13 @@
     - set_fact:
         openshift_logging_elasticsearch_hosts: "{{ ( openshift_logging_es_hosts.stdout.split(' ') | default([]) + (openshift_logging_es_ops_hosts.stdout.split(' ') if openshift_logging_es_ops_hosts.stdout is defined else []) ) | unique }}"
 
+    #- name: Debug groups
+    #  debug:
+    #    var: groups
+    #- name: Debug hostvars
+    #  debug:
+    #    var: hostvars
+
     # Check to see if the collected ip from the openshift facts above matches our node back to a
     # group entry in our inventory so we can maintain our group variables when updating the sysctl
     # files for specific nodes based on <node>.status.addresses[@.type==InternalIP].address
@@ -85,12 +92,12 @@
         groups: oo_elasticsearch_nodes
         ansible_ssh_user: "{{ g_ssh_user | default(omit) }}"
         ansible_become: "{{ g_sudo | default(omit) }}"
-      with_items: "{{ groups['oo_nodes_to_config'] }}"
+      with_items: "{{ groups.get('oo_nodes_to_config', groups['all']) }}"
       changed_when: no
       run_once: true
       delegate_to: localhost
       connection: local
-      when: hostvars[item]['openshift']['common']['ip'] in openshift_logging_elasticsearch_hosts
+      when: hostvars[item].get('openshift',{}).get('common',{}).get('ip', None) in openshift_logging_elasticsearch_hosts
 
 - name: Update vm.max_map_count for ES 5.x
   hosts: oo_elasticsearch_nodes
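
The rewritten `when` above swaps bracket lookups for Python-style `get()` calls with defaults, so hosts that never had facts gathered no longer raise undefined-variable errors. A toy sketch of the difference (the empty hostvars dict is invented):

```yaml
# h['openshift']['common']['ip'] would raise on a factless host;
# chained get() calls degrade to None instead.
- debug:
    msg: "{{ h.get('openshift', {}).get('common', {}).get('ip', None) }}"
  vars:
    h: {}   # simulates a host with no openshift facts; prints "None"
```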

+ 161 - 1
playbooks/openstack/configuration.md

@@ -31,6 +31,7 @@ Environment variables may also be used.
 * [Scaling The OpenShift Cluster](#scaling-the-openshift-cluster)
 * [Deploying At Scale](#deploying-at-scale)
 * [Using A Static Inventory](#using-a-static-inventory)
+* [Using CRI-O](#using-cri-o)
 
 
 ## OpenStack Configuration
@@ -1137,7 +1138,7 @@ The following code to open ports for prometheus should also be added to the open
     port_range_min: 9100
     port_range_max: 9100
 ```
-    
+
 ### Elastic Search
 Add this to the openshift_openstack_node_secgroup_rules section of main.yml to enable elastic search.
 
@@ -1181,3 +1182,162 @@ If you are running a template router to expose your statistics, there are a few
     port_range_max: 1936
 ```
 
+## Using CRI-O
+There are several scenarios for customizing the container runtime on the
+instances:
+
+* All hosts use docker (no changes required)
+* All hosts use cri-o.
+
+Modify the OSEv3.yml file and add the following variables:
+
+```
+openshift_use_crio_only: true
+openshift_use_crio: true
+# cockpit pulls in cockpit-docker (and docker with it) as a dependency;
+# set osm_use_cockpit to false to avoid that
+osm_use_cockpit: false
+```
+
+Modify the all.yml file and add the following variables:
+
+```
+openshift_openstack_master_group_name: node-config-master-crio
+openshift_openstack_infra_group_name: node-config-infra-crio
+openshift_openstack_compute_group_name: node-config-compute-crio  
+```
+
+NOTE: Currently, OpenShift builds require docker.
+
+* Masters/app/infra_nodes use cri-o:
+
+Add the proper variables to the `~/inventory/group_vars/` files on the Ansible host, such as:
+
+* `~/inventory/group_vars/[masters|openstack_infra_nodes|openstack_compute_nodes].yml`:
+
+```
+openshift_use_crio_only: true/false
+openshift_use_crio: true/false
+openshift_openstack_[master|infra|compute]_group_name: node-config-[master|infra|compute]-crio
+osm_use_cockpit: false
+```
+
+* Some app nodes use cri-o, others docker, and others both cri-o and docker. This scenario requires the following steps:
+
+* Create a `~/inventory/host_vars/<hostname>.yml` file named after the instance
+you want to customize:
+
+  * For cri-o only:
+
+```
+openshift_use_crio_only: true
+openshift_use_crio: true
+openshift_node_group_name: node-config-[master|infra|compute]-crio
+osm_use_cockpit: false
+```
+
+  * For docker only (optional; docker is installed by default):
+
+```
+openshift_use_crio: false
+```
+
+  * For both cri-o and docker (with cri-o as the container runtime):
+
+```
+openshift_use_crio_only: false
+openshift_use_crio: true
+openshift_node_group_name: node-config-[master|infra|compute]-crio
+osm_use_cockpit: false
+```
+
+Also, the openshift_builddefaults_nodeselectors variable must be set to a node
+selector that places builds on hosts running docker as the container runtime
+(a sketch follows this diff).
+
+Run the playbooks to provision and install the environment.
+
+Example:
+
+All hosts use docker as the container runtime except:
+* app-node-0 using cri-o
+* app-node-1 using docker (explicitly)
+* app-node-2 using cri-o and docker
+
+In this particular case, these are the variable files:
+
+* `~/inventory/group_vars/OSEv3.yml`
+
+```
+# Avoid installing cockpit in all nodes
+osm_use_cockpit: false
+```
+
+* `~/inventory/host_vars/app-node-0.${DOMAIN}.yml`
+
+```
+# CRI-O only
+openshift_use_crio_only: true
+openshift_use_crio: true
+openshift_node_group_name: node-config-compute-crio
+```
+
+* `~/inventory/host_vars/app-node-1.${DOMAIN}.yml`
+
+```
+# Explicit docker
+openshift_use_crio: false
+# openshift_node_group_name: node-config-compute
+```
+
+* `~/inventory/host_vars/app-node-2.${DOMAIN}.yml`
+
+```
+# CRI-O and Docker side by side
+openshift_use_crio_only: false
+openshift_use_crio: true
+# As we didn't modify the node_group, it will use docker
+```
+
+After a successful installation, the containerRuntimeVersion field shows the
+container runtime each node uses:
+
+```
+$ oc get nodes -o=custom-columns=NAME:.metadata.name,CR:.status.nodeInfo.containerRuntimeVersion --selector='node-role.kubernetes.io/compute=true'                                                                   
+NAME                                  CR
+app-node-0.shiftstack.automated.lan   cri-o://1.11.5
+app-node-1.shiftstack.automated.lan   docker://1.13.1
+app-node-2.shiftstack.automated.lan   docker://1.13.1
+```
+
+Also, notice the host running cri-o has a label added automatically such as
+`runtime=cri-o`:
+
+```
+$ oc get nodes app-node-0.shiftstack.automated.lan --show-labels
+NAME                                  STATUS    ROLES     AGE       VERSION           LABELS
+app-node-0.shiftstack.automated.lan   Ready     compute   37m       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=1470ffe1-aea0-4806-a1be-e24c83c08e5f,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/hostname=app-node-0.shiftstack.automated.lan,node-role.kubernetes.io/compute=true,runtime=cri-o
+```
+
+And there are some pods running:
+
+```
+$ kubectl get pods --all-namespaces --field-selector spec.nodeName=app-node-0.shiftstack.automated.lan -o wide
+NAMESPACE              NAME                  READY     STATUS    RESTARTS   AGE       IP            NODE                                  NOMINATED NODE
+openshift-monitoring   node-exporter-d4bq9   2/2       Running   0          24m       10.240.0.19   app-node-0.shiftstack.automated.lan   <none>
+openshift-node         sync-rsgrp            1/1       Running   0          40m       10.240.0.19   app-node-0.shiftstack.automated.lan   <none>
+openshift-sdn          ovs-t54s9             1/1       Running   0          40m       10.240.0.19   app-node-0.shiftstack.automated.lan   <none>
+openshift-sdn          sdn-64tz4             1/1       Running   0          40m       10.240.0.19   app-node-0.shiftstack.automated.lan   <none>
+```
+
+```
+[openshift@app-node-0 ~]$ sudo crictl ps
+W1025 04:45:04.056296   13242 util_unix.go:75] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
+CONTAINER ID        IMAGE                                                                                                                            CREATED             STATE               NAME                ATTEMPT
+ddfd64fdfb6a3       registry.redhat.io/openshift3/ose-kube-rbac-proxy@sha256:16daf6802d5e88393c271f78037f7c002ff774cd52161c1c1a71f2a84df71868        26 minutes ago      Running             kube-rbac-proxy     0
+3463217a35030       registry.redhat.io/openshift3/prometheus-node-exporter@sha256:e9b47d1705eb027735d528342e0457e597e28e36f6e38a0262b65802156bfe9b   26 minutes ago      Running             node-exporter       0
+02652966e1180       074bf04571e220389b5f3afa7669ea07ddd53d281668820ebf537f054487191f                                                                 41 minutes ago      Running             openvswitch         0
+acf2afc99b950       registry.redhat.io/openshift3/ose-node@sha256:3da731d733cd4d67897d22bfdcb027b009494de667bd7a3c870557102ce10bf5                   41 minutes ago      Running             sync                0
+6814b5f7a05d7       registry.redhat.io/openshift3/ose-node@sha256:3da731d733cd4d67897d22bfdcb027b009494de667bd7a3c870557102ce10bf5                   41 minutes ago      Running             sdn                 0
+[openshift@app-node-0 ~]$ sudo docker ps
+Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
+```
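
As mentioned above, builds must land on docker hosts. A hedged sketch of what `openshift_builddefaults_nodeselectors` might look like; the `runtime: docker` label is an assumption for illustration (only cri-o hosts are labeled `runtime=cri-o` automatically, so you would apply a docker-side label yourself):

```yaml
# Hypothetical OSEv3 vars: pin builds to nodes carrying an invented
# runtime=docker label.
openshift_builddefaults_nodeselectors:
  runtime: docker
```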

+ 3 - 3
playbooks/openstack/openshift-cluster/install.yml

@@ -8,14 +8,14 @@
 # values here. We do it in the OSEv3 group vars. Do we need to add
 # some logic here?
 
+- name: Run the init
+  import_playbook: ../../init/main.yml
+
 - name: Evaluate basic OpenStack groups
   import_playbook: evaluate_groups.yml
 
 - import_playbook: ../../prerequisites.yml
 
-- name: Run the init
-  import_playbook: ../../init/main.yml
-
 - name: Prepare the Nodes in the cluster for installation
   any_errors_fatal: true
   hosts: oo_all_hosts

+ 3 - 1
playbooks/openstack/openshift-cluster/provision.yml

@@ -2,7 +2,6 @@
 - name: Create the OpenStack resources for cluster installation
   import_playbook: provision_resources.yml
 
-
 - name: Evaluate OpenStack groups from the dynamic inventory
   import_playbook: evaluate_groups.yml
 
@@ -43,6 +42,9 @@
 
 - import_playbook: ../../init/basic_facts.yml
 
+- name: Run the init
+  import_playbook: ../../init/main.yml
+
 - name: Optionally subscribe the RHEL nodes
   any_errors_fatal: true
   hosts: oo_all_hosts

+ 5 - 1
roles/container_runtime/tasks/package_crio.yml

@@ -35,7 +35,6 @@
     pkg_list:
       - cri-o
       - cri-tools
-      - atomic
       - skopeo
       - podman
 
@@ -78,6 +77,11 @@
     dest: /etc/sysconfig/crio-network
     src: crio-network.j2
 
+- name: Place registries.conf in /etc/containers/registries.conf
+  template:
+    dest: "{{ containers_registries_conf_path }}"
+    src: registries.conf.j2
+
 - name: Start the CRI-O service
   systemd:
     name: "cri-o"

+ 8 - 6
roles/container_runtime/templates/crio.conf.j2

@@ -141,16 +141,18 @@ signature_policy = ""
 # The valid values are mkdir and ignore.
 image_volumes = "mkdir"
 
+# CRI-O reads its registry defaults from the containers/image configuration
+# file, /etc/containers/registries.conf. Modify registries.conf if you want to
+# change the default registries for all tools that use containers/image. If you
+# want to modify just crio, you can change the registries configuration in this
+# file.
+
 # insecure_registries is used to skip TLS verification when pulling images.
-insecure_registries = [
-{{ l_insecure_crio_registries|default("") }}
-]
+# insecure_registries = []
 
 # registries is used to specify a comma separated list of registries to be used
 # when pulling an unqualified image (e.g. fedora:rawhide).
-registries = [
-{{ l_additional_crio_registries|default("") }}
-]
+registries = ['docker.io']
 
 # The "crio.network" table contains settings pertaining to the
 # management of CNI plugins.

+ 0 - 46
roles/container_runtime/templates/registries.conf

@@ -1,46 +0,0 @@
-# {{ ansible_managed }}
-# This is a system-wide configuration file used to
-# keep track of registries for various container backends.
-# It adheres to YAML format and does not support recursive
-# lists of registries.
-
-# The default location for this configuration file is /etc/containers/registries.conf.
-
-# The only valid categories are: 'registries', 'insecure_registries',
-# and 'block_registries'.
-
-
-#registries:
-#  - registry.redhat.io
-
-{% if l2_docker_additional_registries %}
-registries:
-{% for reg in l2_docker_additional_registries %}
-  - {{ reg }}
-{% endfor %}
-{% endif %}
-
-# If you need to access insecure registries, uncomment the section below
-# and add the registries fully-qualified name. An insecure registry is one
-# that does not have a valid SSL certificate or only does HTTP.
-#insecure_registries:
-#  -
-
-{% if l2_docker_insecure_registries %}
-insecure_registries:
-{% for reg in l2_docker_insecure_registries %}
-  - {{ reg }}
-{% endfor %}
-{% endif %}
-
-# If you need to block pull access from a registry, uncomment the section below
-# and add the registries fully-qualified name.
-#block_registries:
-# -
-
-{% if l2_docker_blocked_registries %}
-block_registries:
-{% for reg in l2_docker_blocked_registries %}
-  - {{ reg }}
-{% endfor %}
-{% endif %}

+ 27 - 0
roles/container_runtime/templates/registries.conf.j2

@@ -0,0 +1,27 @@
+# {{ ansible_managed }}
+# This is a system-wide configuration file used to
+# keep track of registries for various container backends.
+# It adheres to TOML format and does not support recursive
+# lists of registries.
+
+# The default location for this configuration file is /etc/containers/registries.conf.
+
+# The only valid categories are: 'registries.search', 'registries.insecure',
+# and 'registries.block'.
+
+[registries.search]
+registries = [{{ l_additional_crio_registries|default("") }}]
+
+
+# If you need to access insecure registries, add the registry's fully-qualified name.
+# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
+[registries.insecure]
+registries = [{{ l_insecure_crio_registries|default("") }}]
+
+
+# If you need to block pull access from a registry, add the registry's
+# fully-qualified name.
+#
+# Docker only
+[registries.block]
+registries = {{ l2_docker_blocked_registries | to_json }}
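
A hedged sketch of rendering the new template in isolation, with invented variable values, and the TOML it would produce:

```yaml
# Illustrative only: render registries.conf.j2 with sample values.
- template:
    src: registries.conf.j2
    dest: /tmp/registries.conf
  vars:
    l_additional_crio_registries: "'docker.io'"
    l_insecure_crio_registries: ""
    l2_docker_blocked_registries: []
# Expected /tmp/registries.conf:
#   [registries.search]
#   registries = ['docker.io']
#   [registries.insecure]
#   registries = []
#   [registries.block]
#   registries = []
```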

+ 1 - 0
roles/lib_utils/action_plugins/master_check_paths_in_config.py

@@ -29,6 +29,7 @@ ITEMS_TO_POP = (
 MIGRATED_ITEMS = ", ".join([".".join(x) for x in ITEMS_TO_POP])
 
 ALLOWED_DIRS = (
+    '/dev/null',
     '/etc/origin/master/',
     '/var/lib/origin',
     '/etc/origin/cloudprovider',

+ 2 - 5
roles/lib_utils/action_plugins/sanity_checks.py

@@ -55,10 +55,7 @@ RELEASE_REGEX = {'re': '(^v?\\d+(\\.\\d+(\\.\\d+)?)?$)',
 STORAGE_KIND_TUPLE = (
     'openshift_loggingops_storage_kind',
     'openshift_logging_storage_kind',
-    'openshift_metrics_storage_kind',
-    'openshift_prometheus_alertbuffer_storage_kind',
-    'openshift_prometheus_alertmanager_storage_kind',
-    'openshift_prometheus_storage_kind')
+    'openshift_metrics_storage_kind')
 
 IMAGE_POLICY_CONFIG_VAR = "openshift_master_image_policy_config"
 ALLOWED_REGISTRIES_VAR = "openshift_master_image_policy_allowed_registries_for_import"
@@ -386,7 +383,7 @@ class ActionModule(ActionBase):
             if kind == 'nfs':
                 raise errors.AnsibleModuleError(
                     'nfs is an unsupported type for {}. '
-                    'openshift_enable_unsupported_configurations=True must'
+                    'openshift_enable_unsupported_configurations=True must '
                     'be specified to continue with this configuration.'
                     ''.format(storage))
         return None

+ 5 - 1
roles/lib_utils/filter_plugins/openshift_master.py

@@ -454,7 +454,9 @@ class GitHubIdentityProvider(IdentityProviderOauthBase):
     def __init__(self, api_version, idp):
         IdentityProviderOauthBase.__init__(self, api_version, idp)
         self._optional += [['organizations'],
-                           ['teams']]
+                           ['teams'],
+                           ['ca'],
+                           ['hostname']]
 
     def validate(self):
         ''' validate this idp instance '''
@@ -462,6 +464,8 @@ class GitHubIdentityProvider(IdentityProviderOauthBase):
             raise errors.AnsibleFilterError("|failed provider {0} does not "
                                             "allow challenge authentication".format(self.__class__.__name__))
 
+        self._idp['ca'] = '/etc/origin/master/{}_github_ca.crt'.format(self.name)
+
 
 class FilterModule(object):
     ''' Custom ansible filters for use by the openshift_control_plane role'''

+ 9 - 5
roles/openshift_cli/tasks/main.yml

@@ -6,11 +6,15 @@
   register: result
   until: result is succeeded
 
-- block:
-  - name: Pull CLI Image
-    docker_image:
-      name: "{{ openshift_cli_image }}"
-    when: not openshift_use_crio_only | bool
+- name: Check that CLI image is present
+  command: "{{ openshift_container_cli }} images -q {{ openshift_cli_image }}"
+  register: cli_image
+
+- name: Pre-pull cli image
+  command: "{{ openshift_container_cli }} pull {{ openshift_cli_image }}"
+  environment:
+    NO_PROXY: "{{ openshift.common.no_proxy | default('') }}"
+  when: cli_image.stdout_lines == []
 
 - name: Install bash completion for oc tools
   package:
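
The replacement tasks above implement a check-then-pull pattern: ask the runtime for the image ID and pull only when nothing comes back. A standalone sketch, where the podman binary and image name are placeholders for `openshift_container_cli` and `openshift_cli_image`:

```yaml
- name: Check whether the image is already present
  command: "podman images -q docker.io/library/alpine"
  register: img
  changed_when: false

- name: Pre-pull only when the check returned no IDs
  command: "podman pull docker.io/library/alpine"
  when: img.stdout_lines == []
```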

+ 5 - 0
roles/openshift_control_plane/files/controller.yaml

@@ -37,6 +37,8 @@ spec:
      - mountPath: /usr/libexec/kubernetes/kubelet-plugins
        name: kubelet-plugins
        mountPropagation: "HostToContainer"
+     - mountPath: /etc/pki
+       name: master-pki
     livenessProbe:
       httpGet:
         scheme: HTTPS
@@ -57,3 +59,6 @@ spec:
   - name: kubelet-plugins
     hostPath:
       path: /usr/libexec/kubernetes/kubelet-plugins
+  - name: master-pki
+    hostPath:
+      path: /etc/pki

+ 11 - 0
roles/openshift_control_plane/tasks/main.yml

@@ -82,6 +82,17 @@
   - item.kind == 'OpenIDIdentityProvider'
   with_items: "{{ openshift_master_identity_providers }}"
 
+- name: Create the GitHub (Enterprise) ca file if needed
+  copy:
+    dest: "/etc/origin/master/{{ item.name }}_github_ca.crt"
+    content: "{{ openshift.master.github_ca }}"
+    mode: 0600
+    backup: yes
+  when:
+  - openshift.master.github_ca is defined
+  - item.kind == 'GitHubIdentityProvider'
+  with_items: "{{ openshift_master_identity_providers }}"
+
 - name: Create the request header ca file if needed
   copy:
     dest: "/etc/origin/master/{{ item.name }}_request_header_ca.crt"

+ 0 - 21
roles/openshift_facts/defaults/main.yml

@@ -131,27 +131,6 @@ openshift_metrics_storage_nfs_options: '*(rw,root_squash)'
 openshift_metrics_storage_access_modes:
   - 'ReadWriteOnce'
 
-openshift_prometheus_storage_volume_name: 'prometheus'
-openshift_prometheus_storage_volume_size: '10Gi'
-openshift_prometheus_storage_access_modes:
-  - 'ReadWriteOnce'
-openshift_prometheus_storage_create_pv: True
-openshift_prometheus_storage_create_pvc: False
-
-openshift_prometheus_alertmanager_storage_volume_name: 'prometheus-alertmanager'
-openshift_prometheus_alertmanager_storage_volume_size: '10Gi'
-openshift_prometheus_alertmanager_storage_access_modes:
-  - 'ReadWriteOnce'
-openshift_prometheus_alertmanager_storage_create_pv: True
-openshift_prometheus_alertmanager_storage_create_pvc: False
-
-openshift_prometheus_alertbuffer_storage_volume_name: 'prometheus-alertbuffer'
-openshift_prometheus_alertbuffer_storage_volume_size: '10Gi'
-openshift_prometheus_alertbuffer_storage_access_modes:
-  - 'ReadWriteOnce'
-openshift_prometheus_alertbuffer_storage_create_pv: True
-openshift_prometheus_alertbuffer_storage_create_pvc: False
-
 openshift_service_type_dict:
   origin: origin
   openshift-enterprise: atomic-openshift

+ 18 - 5
roles/openshift_gcp/defaults/main.yml

@@ -91,11 +91,17 @@ openshift_gcp_firewall_rules:
           - '2379'
           - '2380'
           - '4001'
+          #kube-system/kubelet:cadvisor
+          - '4193'
           - "{{ openshift_gcp_kubernetes_api_port }}"
           - "{{ internal_console_port }}"
           - '8053'
           - '8444'
           - "{{ openshift_gcp_master_healthcheck_port }}"
+          #cadvisor port
+          - '9100'
+          # CVO port
+          - '9099'
           - '10250'
           - '10255'
           - '24224'
@@ -123,10 +129,6 @@ openshift_gcp_firewall_rules:
           - "{{ openshift_gcp_kubernetes_api_port }}"
           - "{{ openshift_master_api_port }}"
           - "{{ mcd_port }}"
-          - "{{ openshift_node_port_range }}"
-      - ip_protocol: 'udp'
-        ports:
-          - "{{ openshift_node_port_range }}"
     target_tags:
       - ocp-master
       - ocp-bootstrap
@@ -134,6 +136,7 @@ openshift_gcp_firewall_rules:
     allowed:
       - ip_protocol: 'tcp'
         ports:
+          - '1936'
           - '10250'
           - '10255'
           - '9000-10000'
@@ -144,4 +147,14 @@ openshift_gcp_firewall_rules:
     source_tags:
       - ocp
     target_tags:
-      - ocp-node
+      - ocp-worker
+  - rule: node-external
+    allowed:
+      - ip_protocol: 'tcp'
+        ports:
+          - "{{ openshift_node_port_range }}"
+      - ip_protocol: 'udp'
+        ports:
+          - "{{ openshift_node_port_range }}"
+    target_tags:
+      - ocp-worker

+ 62 - 0
roles/openshift_gcp/tasks/deprovision.yml

@@ -22,6 +22,68 @@
     - "name : {{ openshift_gcp_prefix }}instance-template*"
   register: instance_templates
 
+- name: Collect a list of instances
+  gcp_compute_instance_facts:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    zone: "{{ openshift_gcp_zone }}"
+  register: all_instances
+
+- name: Filter instances to fetch masters
+  set_fact:
+    master_instances: "{{ master_instances | default([]) }} + [ {{ item }} ]"
+  with_items:
+  - "{{ all_instances['items'] }}"
+  when:
+  - "'tags' in item"
+  - "'items' in item['tags']"
+  - "cluster_tag in item['tags']['items']"
+  - "'ocp-master' in item['tags']['items']"
+  vars:
+    cluster_tag: "{{ openshift_gcp_prefix }}ocp"
+
+- name: Get managed zone
+  gcp_dns_managed_zone:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ dns_managed_zone | default(openshift_gcp_prefix + 'managed-zone') }}"
+    state: present
+  register: managed_zone
+
+- name: Remove public API hostname
+  gcp_dns_resource_record_set:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ openshift_master_cluster_public_hostname }}."
+    managed_zone: "{{ managed_zone }}"
+    type: A
+    state: absent
+
+- name: Remove etcd records for masters
+  gcp_dns_resource_record_set:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ entry_name }}"
+    managed_zone: "{{ managed_zone }}"
+    type: A
+    state: absent
+  with_indexed_items: "{{ master_instances }}"
+  when: master_instances is defined
+  vars:
+    entry_name: "{{ openshift_gcp_prefix }}etcd-{{ item.0 }}.{{ public_hosted_zone }}."
+
 - name: Remove GCP Instance Groups
   gcp_compute_instance_group_manager:
     auth_kind: serviceaccount
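
The master-filtering `set_fact` above grows a list across loop iterations with the string-concatenation idiom `"{{ x | default([]) }} + [ {{ item }} ]"`. The same accumulation, sketched on toy data with plain in-expression list addition:

```yaml
# After the loop, picked == ['a', 'c']; default([]) seeds the first pass.
- set_fact:
    picked: "{{ picked | default([]) + [item] }}"
  with_items: [a, b, c]
  when: item != 'b'
```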

+ 82 - 3
roles/openshift_gcp/tasks/main.yml

@@ -38,6 +38,10 @@
     - "family = {{ openshift_gcp_image }}"
   register: gcp_node_image
 
+- fail:
+    msg: "No images for family '{{ openshift_gcp_image }}' found"
+  when: gcp_node_image['items'] | length == 0
+
 - name: Provision GCP instance templates
   gcp_compute_instance_template:
     auth_kind: serviceaccount
@@ -92,6 +96,18 @@
   with_items: "{{ instance_template.results }}"
   register: instance_groups
 
+- name: Get bootstrap instance group
+  gcp_compute_instance_group_facts:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    zone: "{{ openshift_gcp_zone }}"
+    filters:
+    - name = "{{ openshift_gcp_prefix }}ig-b"
+  register: bootstrap_instance_group
+
 - name: Get master instance group
   gcp_compute_instance_group_facts:
     auth_kind: serviceaccount
@@ -105,8 +121,25 @@
   register: master_instance_group
 
 - set_fact:
+    bootstrap_instance_group: "{{ bootstrap_instance_group['items'][0] }}"
     master_instance_group: "{{ master_instance_group['items'][0] }}"
 
+- name: Wait for bootstrap instance group to start all instances
+  gcp_compute_instance_group_manager_facts:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    zone: "{{ openshift_gcp_zone }}"
+    filters: "name = {{ bootstrap_instance_group['name'] }}"
+  register: bootstrap_group_result
+  # Wait for 3 minutes
+  retries: 36
+  delay: 5
+  until:
+  - "bootstrap_group_result['items'][0]['currentActions']['none'] == bootstrap_group_result['items'][0]['targetSize']"
+
 - name: Wait for master instance group to start all instances
   gcp_compute_instance_group_manager_facts:
     auth_kind: serviceaccount
@@ -135,7 +168,7 @@
 
 - name: Filter instances to fetch bootstrap
   set_fact:
-    bootstrap_instance: "{{ item }}"
+    bootstrap_instances: "{{ item }}"
   with_items:
   - "{{ all_instances['items'] }}"
   when:
@@ -160,13 +193,59 @@
     cluster_tag: "{{ openshift_gcp_prefix }}ocp"
 
 - set_fact:
-    etcd_discovery_targets: "{{ etcd_discovery_targets | default('') }} '0 0 2380 {{ entry_name }}'"
-    master_external_ips: "{{ master_external_ips | default('') }} '{{ master_ip }}'"
+    etcd_discovery_targets: "{{ etcd_discovery_targets | default([]) }} + ['0 0 2380 {{ entry_name }}']"
+    master_external_ips: "{{ master_external_ips | default([]) }} + ['{{ master_ip }}']"
   with_indexed_items: "{{ master_instances }}"
   vars:
     entry_name: "{{ openshift_gcp_prefix }}etcd-{{ item.0 }}.{{ public_hosted_zone }}."
     master_ip: "{{ item.1.networkInterfaces[0].accessConfigs[0].natIP }}"
 
+- set_fact:
+    bootstrap_and_masters: "{{ master_external_ips | list }} + ['{{ bootstrap_instances.networkInterfaces[0].accessConfigs[0].natIP }}']"
+
+- name: Get managed zone
+  gcp_dns_managed_zone:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ dns_managed_zone | default(openshift_gcp_prefix + 'managed-zone') }}"
+    state: present
+  register: managed_zone
+
+- name: Create public API hostname
+  gcp_dns_resource_record_set:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ openshift_master_cluster_public_hostname }}."
+    managed_zone: "{{ managed_zone }}"
+    type: A
+    ttl: 600
+    target: "{{ bootstrap_and_masters }}"
+    state: present
+
+- name: Create etcd records for masters
+  gcp_dns_resource_record_set:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ entry_name }}"
+    managed_zone: "{{ managed_zone }}"
+    type: A
+    ttl: 600
+    target: "{{ master_ip }}"
+    state: present
+  with_indexed_items: "{{ master_instances }}"
+  vars:
+    entry_name: "{{ openshift_gcp_prefix }}etcd-{{ item.0 }}.{{ public_hosted_zone }}."
+    master_ip: "{{ item.1.networkInterfaces[0].networkIP }}"
+
 - name: Templatize DNS script
   template: src=additional_settings.j2.sh dest=/tmp/additional_settings.sh mode=u+rx
 

+ 93 - 0
roles/openshift_gcp/tasks/remove_bootstrap.yml

@@ -0,0 +1,93 @@
+---
+- name: Get bootstrap instance group
+  gcp_compute_instance_group_manager_facts:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    zone: "{{ openshift_gcp_zone }}"
+    filters:
+    - name = "{{ openshift_gcp_prefix }}ig-b"
+  register: bootstrap_instance_group
+
+- name: Get bootstrap instance template
+  gcp_compute_instance_template_facts:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    filters:
+    - "name : {{ openshift_gcp_prefix }}instance-template-bootstrap"
+  register: bootstrap_instance_template
+
+- name: Collect a list of instances
+  gcp_compute_instance_facts:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    zone: "{{ openshift_gcp_zone }}"
+  register: all_instances
+
+- name: Filter instances to fetch masters
+  set_fact:
+    master_instances: "{{ master_instances | default([]) }} + [ {{ item }} ]"
+  with_items:
+  - "{{ all_instances['items'] }}"
+  when:
+  - "'tags' in item"
+  - "'items' in item['tags']"
+  - "cluster_tag in item['tags']['items']"
+  - "'ocp-master' in item['tags']['items']"
+  vars:
+    cluster_tag: "{{ openshift_gcp_prefix }}ocp"
+
+- set_fact:
+    master_external_ips: "{{ master_external_ips | default([]) }}  + [ '{{ master_ip }}' ]"
+  with_indexed_items: "{{ master_instances }}"
+  vars:
+    master_ip: "{{ item.1.networkInterfaces[0].accessConfigs[0].natIP }}"
+
+- name: Get a managed zone
+  gcp_dns_managed_zone:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ dns_managed_zone | default(openshift_gcp_prefix + 'managed-zone') }}"
+    state: present
+  register: managed_zone
+
+- name: Update public API hostname
+  gcp_dns_resource_record_set:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    name: "{{ openshift_master_cluster_public_hostname }}."
+    managed_zone: "{{ managed_zone }}"
+    type: A
+    ttl: 600
+    target: "{{ master_external_ips }}"
+    state: present
+
+- name: Delete bootstrap instance group
+  gcp_compute_instance_group_manager:
+    auth_kind: serviceaccount
+    scopes:
+    - https://www.googleapis.com/auth/compute
+    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
+    project: "{{ openshift_gcp_project }}"
+    zone: "{{ openshift_gcp_zone }}"
+    name: "{{ bootstrap_instance_group['items'][0]['name'] }}"
+    base_instance_name: "{{ bootstrap_instance_group['items'][0]['baseInstanceName'] }}"
+    instance_template: "{{ bootstrap_instance_template['items'][0] }}"
+    state: absent
+  when:
+  - bootstrap_instance_group['items'] | length > 0
+  - bootstrap_instance_template['items'] | length > 0

+ 10 - 4
roles/openshift_gcp/tasks/setup_scale_group_facts.yml

@@ -2,18 +2,24 @@
 - name: Add bootstrap instances
   add_host:
     name: "{{ hostvars[item].gce_name }}"
-    groups: bootstrap
-    ignition_file: "{{ openshift_gcp_bootstrap_ignition_file }}"
+    groups:
+    - bootstrap
+    - nodes
+    ignition_file: "{{ openshift_bootstrap_ignition_file }}"
   with_items: "{{ groups['tag_ocp-bootstrap'] | default([]) }}"
 
 - name: Add master instances
   add_host:
     name: "{{ hostvars[item].gce_name }}"
-    groups: masters
+    groups:
+    - masters
+    - nodes
   with_items: "{{ groups['tag_ocp-master'] | default([]) }}"
 
 - name: Add worker instances
   add_host:
     name: "{{ hostvars[item].gce_name }}"
-    groups: workers
+    groups:
+    - workers
+    - nodes
   with_items: "{{ groups['tag_ocp-worker'] | default([]) }}"

+ 2 - 42
roles/openshift_gcp/templates/additional_settings.j2.sh

@@ -10,39 +10,16 @@ while true; do
     dns="${TMPDIR:-/tmp}/dns.yaml"
     rm -f $dns
 
-    # DNS records for etcd servers
-    {% for master in master_instances %}
-      MASTER_DNS_NAME="{{ openshift_gcp_prefix }}etcd-{{ loop.index-1 }}.{{ public_hosted_zone }}."
-      IP="{{ master.networkInterfaces[0].networkIP }}"
-      if ! gcloud --project "{{ openshift_gcp_project }}" dns record-sets list -z "${dns_zone}" --name "{{ openshift_master_cluster_hostname }}" 2>/dev/null | grep -q "${MASTER_DNS_NAME}"; then
-          if [[ ! -f $dns ]]; then
-              gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns start -z "${dns_zone}"
-          fi
-          gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns add -z "${dns_zone}" --ttl {{ openshift_gcp_master_dns_ttl }} --name "${MASTER_DNS_NAME}" --type A "$IP"
-      else
-          echo "DNS record for '${MASTER_DNS_NAME}' already exists"
-      fi
-    {% endfor %}
-
     # DNS records for etcd discovery
     ETCD_DNS_NAME="_etcd-server-ssl._tcp.{{ lookup('env', 'INSTANCE_PREFIX') | mandatory }}.{{ public_hosted_zone }}."
     if ! gcloud --project "{{ openshift_gcp_project }}" dns record-sets list -z "${dns_zone}" --name "${ETCD_DNS_NAME}" 2>/dev/null | grep -q "${ETCD_DNS_NAME}"; then
         if [[ ! -f $dns ]]; then
             gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns start -z "${dns_zone}"
         fi
-        gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns add -z "${dns_zone}" --ttl {{ openshift_gcp_master_dns_ttl }} --name "${ETCD_DNS_NAME}" --type SRV {{ etcd_discovery_targets }}
-    else
-        echo "DNS record for '${ETCD_DNS_NAME}' already exists"
-    fi
+        gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns add -z "${dns_zone}" --ttl {{ openshift_gcp_master_dns_ttl }} --name "${ETCD_DNS_NAME}" --type SRV {% for etcd in etcd_discovery_targets %}'{{ etcd }}' {% endfor %}
 
-    # Roundrobin masters and bootstrap
-    if ! gcloud --project "{{ openshift_gcp_project }}" dns record-sets list -z "${dns_zone}" --name "{{ openshift_master_cluster_public_hostname }}" 2>/dev/null | grep -q "{{ openshift_master_cluster_public_hostname }}"; then
-        if [[ ! -f $dns ]]; then
-            gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns start -z "${dns_zone}"
-        fi
-        gcloud --project "{{ openshift_gcp_project }}" dns record-sets transaction --transaction-file=$dns add -z "${dns_zone}" --ttl {{ openshift_gcp_master_dns_ttl }} --name "{{ openshift_master_cluster_public_hostname }}" --type A {{ bootstrap_instance.networkInterfaces[0].accessConfigs[0].natIP }} {{ master_external_ips }}
     else
-        echo "DNS record for '{{ openshift_master_cluster_public_hostname }}' already exists"
+        echo "DNS record for '${ETCD_DNS_NAME}' already exists"
     fi
 
     # Commit all DNS changes, retrying if preconditions are not met
@@ -59,21 +36,4 @@ while true; do
 done
 ) &
 
-# Add groups to target pools
-# Add bootstrap
-# gcloud --project "{{ openshift_gcp_project }}" compute instance-groups managed set-target-pools "{{ openshift_gcp_prefix }}ig-b" --target-pools "{{ openshift_gcp_prefix }}master-lb-pool" --zone "{{ openshift_gcp_zone }}"
-
-# # Add masters
-# gcloud --project "{{ openshift_gcp_project }}" compute instance-groups managed set-target-pools "{{ openshift_gcp_prefix }}ig-m" --target-pools "{{ openshift_gcp_prefix }}master-lb-pool" --zone "{{ openshift_gcp_zone }}"
-
-# wait until all node groups are stable
-{% for node_group in openshift_gcp_node_group_config %}
-{% if node_group.wait_for_stable | default(False) %}
-# wait for stable {{ node_group.name }}
-( gcloud --project "{{ openshift_gcp_project }}" compute instance-groups managed wait-until-stable "{{ openshift_gcp_prefix }}ig-{{ node_group.suffix }}" --zone "{{ openshift_gcp_zone }}" --timeout=600 ) &
-{% else %}
-# not waiting for {{ node_group.name }} due to bootstrapping
-{% endif %}
-{% endfor %}
-
 for i in `jobs -p`; do wait $i; done
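
The surviving gcloud transaction manages only the etcd discovery SRV record. The same record could be handled declaratively with the `gcp_dns_resource_record_set` module used in remove_bootstrap.yml above; a sketch, assuming `etcd_discovery_targets` is a list of `'<priority> <weight> <port> <host>.'` strings and `managed_zone` is registered as in that file:

- name: Create etcd discovery SRV record (module-based sketch)
  gcp_dns_resource_record_set:
    auth_kind: serviceaccount
    scopes:
    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
    service_account_file: "{{ openshift_gcp_iam_service_account_keyfile }}"
    project: "{{ openshift_gcp_project }}"
    name: "_etcd-server-ssl._tcp.{{ lookup('env', 'INSTANCE_PREFIX') }}.{{ public_hosted_zone }}."
    managed_zone: "{{ managed_zone }}"
    type: SRV
    ttl: "{{ openshift_gcp_master_dns_ttl }}"
    target: "{{ etcd_discovery_targets }}"
    state: present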

+ 5 - 25
roles/openshift_gcp/templates/remove.j2.sh

@@ -14,32 +14,12 @@ if gcloud --project "{{ openshift_gcp_project }}" dns managed-zones describe "${
         # export all dns records that match into a zone format, and turn each line into a set of args for
         # record-sets transaction.
         gcloud dns record-sets export --project "{{ openshift_gcp_project }}" -z "${dns_zone}" --zone-file-format "${dns}"
-
-        # Fetch API record to get a list of masters + bootstrap node
-        bootstrap_and_masters=""
-        public_ip_output=($(grep -F -e '{{ openshift_master_cluster_public_hostname }}.' "${dns}" | awk '{ print $5 }')) || public_ip_output=""
-
-        for index in "${!public_ip_output[@]}"; do
-            bootstrap_and_masters="${bootstrap_and_masters} ${public_ip_output[${index}]}"
-            if [ ${index} -eq 0 ]; then
-                # First record is bootstrap
-                continue
-            fi
-            # etcd server name
-            MASTER_DNS_NAME="{{ openshift_gcp_prefix }}etcd-$((index-1)).{{ public_hosted_zone }}."
-            # Add a extra space here so that it won't match etcd discovery record
-            grep -F -e "${MASTER_DNS_NAME} " "${dns}" | awk '{ print "--name", $1, "--ttl", $2, "--type", $4, $5; }' >> "${dns}.input" || true
-        done
-
-        # Remove API record
-        if [ ! -z "${public_ip_output}" ]; then
-            args=`grep -F -e '{{ openshift_master_cluster_public_hostname }}.' "${dns}" | awk '{ print "--name", $1, "--ttl", $2, "--type", $4; }' | head -n1`
-            echo "${args}${bootstrap_and_masters}" >> "${dns}.input"
-        fi
-
-        # Remove etcd discovery record
+        # Write the header
         ETCD_DNS_NAME="_etcd-server-ssl._tcp.{{ lookup('env', 'INSTANCE_PREFIX') | mandatory }}.{{ public_hosted_zone }}."
-        grep -F -e "${ETCD_DNS_NAME}" "${dns}" | awk '{ print "--name", $1, "--ttl", $2, "--type", $4, "\x27"$5" "$6" "$7" "$8"\x27"; }'  >> "${dns}.input" || true
+        grep -F -e "${ETCD_DNS_NAME}" "${dns}" | awk '{ print "--name", $1, "--ttl", $2, "--type", $4 }' | head -n1 | xargs echo -n > "${dns}.input"
+        # Append all etcd records
+        grep -F -e "${ETCD_DNS_NAME}" "${dns}" | awk '{ print " \x27"$5" "$6" "$7" "$8"\x27"; }' | tr -d '\n\r' >> "${dns}.input" || true
+        echo >> "${dns}.input"
 
         if [ -s "${dns}.input" ]; then
             rm -f "${dns}"
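
To make the two-pass rewrite concrete: given a hypothetical exported record `_etcd-server-ssl._tcp.ci.example.com. 600 IN SRV 0 0 2380 etcd-0.example.com.`, the first grep/awk pass writes the shared header and the second appends one quoted rdata per matching record, so `${dns}.input` ends up holding a single transaction-args line such as:

--name _etcd-server-ssl._tcp.ci.example.com. --ttl 600 --type SRV '0 0 2380 etcd-0.example.com.'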

+ 1 - 0
roles/openshift_master_facts/tasks/main.yml

@@ -43,6 +43,7 @@
       session_name: "{{ openshift_master_session_name | default(None) }}"
       ldap_ca: "{{ openshift_master_ldap_ca | default(lookup('file', openshift_master_ldap_ca_file) if openshift_master_ldap_ca_file is defined else None) }}"
       openid_ca: "{{ openshift_master_openid_ca | default(lookup('file', openshift_master_openid_ca_file) if openshift_master_openid_ca_file is defined else None) }}"
+      github_ca: "{{ openshift_master_github_ca | default(lookup('file', openshift_master_github_ca_file) if openshift_master_github_ca_file is defined else None) }}"
       registry_url: "{{ oreg_url | default(None) }}"
       registry_selector: "{{ openshift_registry_selector | default(None) }}"
       api_server_args: "{{ osm_api_server_args | default(None) }}"
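
The new `github_ca` entry follows the same precedence as the neighboring CA facts: an inline `openshift_master_github_ca` value wins, otherwise the file named by `openshift_master_github_ca_file` is read from the control host via `lookup('file', ...)`. A hypothetical inventory entry:

openshift_master_github_ca_file: /etc/origin/master/github-ca.crt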

+ 3 - 3
roles/openshift_node/tasks/main.yml

@@ -6,9 +6,6 @@
     - openshift_deployment_type == 'openshift-enterprise'
     - not openshift_use_crio | bool
 
-- name: Start node image prepull
-  import_tasks: prepull.yml
-
 - import_tasks: dnsmasq_install.yml
 - import_tasks: dnsmasq.yml
 
@@ -32,6 +29,9 @@
     enabled: yes
     state: restarted
 
+- name: Start node image prepull
+  import_tasks: prepull.yml
+
 - name: include node installer
   import_tasks: install.yml
 

+ 5 - 0
roles/openshift_node40/tasks/install.yml

@@ -3,6 +3,11 @@
 - name: Install openshift packages
   package:
     name: "{{ l_node_packages | join(',') }}"
+    update_cache: true
+  register: install_openshift
+  until: install_openshift.rc == 0
+  retries: 3
+  delay: 1
   vars:
     l_node_packages:
     - "origin-node{{ (openshift_pkg_version | default('')) | lib_utils_oo_image_tag_to_rpm_version(include_dash=True) }}"

+ 3 - 0
roles/openshift_node_group/files/sync.yaml

@@ -196,3 +196,6 @@ spec:
       - hostPath:
           path: /run/systemd/system
         name: run-systemd-system
+      # Sync daemonset should tolerate all taints to make sure it runs on all nodes
+      tolerations:
+      - operator: "Exists"
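
A toleration with no `key` and `operator: "Exists"` matches every taint, which is exactly what a sync daemonset that must land on every node wants. To tolerate only one specific taint instead, the toleration would be scoped; a standard Kubernetes sketch:

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"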

+ 8 - 0
roles/openshift_openstack/defaults/main.yml

@@ -30,6 +30,8 @@ openshift_openstack_lbaasv2_provider: Octavia
 openshift_openstack_use_vm_load_balancer: false
 openshift_openstack_api_lb_listeners_timeout: 500000
 
+openshift_docker_service_name: "docker"
+
 # container-storage-setup
 openshift_openstack_container_storage_setup:
   docker_dev: "/dev/sdb"
@@ -193,6 +195,12 @@ openshift_openstack_node_secgroup_rules:
     port_range_min: 4789
     port_range_max: 4789
     remote_mode: remote_group_id
+  # NOTE: 10010/tcp required by cri-o stream protocol (oc exec/oc rsh)
+  - direction: ingress
+    protocol: tcp
+    port_range_min: 10010
+    port_range_max: 10010
+    remote_mode: remote_group_id
 openshift_openstack_infra_secgroup_rules:
   - direction: ingress
     protocol: tcp

+ 11 - 0
roles/openshift_openstack/tasks/container-storage-setup.yml

@@ -35,3 +35,14 @@
   # TODO(shadower): Find out which CentOS version supports overlayfs2
   when:
     - ansible_distribution == "CentOS"
+
+- name: restart docker after storage configuration
+  become: yes
+  systemd:
+    name: "{{ openshift_docker_service_name }}"
+    state: restarted
+  register: l_docker_restart_docker_in_storage_setup_result
+  until: not (l_docker_restart_docker_in_storage_setup_result is failed)
+  retries: 3
+  delay: 30
+  when: not openshift_use_crio_only|default(None)

+ 1 - 1
roles/openshift_repos/tasks/centos_repos.yml

@@ -19,6 +19,6 @@
     src: "{{ item }}"
     dest: "/etc/yum.repos.d/{{ (item | basename | splitext)[0] }}"
   with_first_found:
-    - "CentOS-OpenShift-Origin{{ ((openshift_version | default('')).split('.') | join(''))[0:2] }}.repo.j2"
+    - "CentOS-OpenShift-Origin{{ ((openshift_version | default('')).split('.')[0:2] | join('')) }}.repo.j2"
     - "CentOS-OpenShift-Origin.repo.j2"
   notify: refresh cache
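
The fix is purely order of operations: slice the version list before joining, not the joined string after. Worked example for `openshift_version: "3.10.0"`:

old: split('.') -> ['3','10','0'], join('') -> "3100", [0:2] -> "31"   (CentOS-OpenShift-Origin31.repo.j2, wrong)
new: split('.')[0:2] -> ['3','10'], join('') -> "310"                  (CentOS-OpenShift-Origin310.repo.j2)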

+ 3 - 1
roles/openshift_sdn/files/sdn.yaml

@@ -38,7 +38,7 @@ spec:
       # It relies on an up to date node-config.yaml being present.
       - name: sdn
         image: " "
-        command: 
+        command:
         - /bin/bash
         - -c
         - |
@@ -204,3 +204,5 @@ spec:
       - name: host-var-lib-cni-networks-openshift-sdn
         hostPath:
           path: /var/lib/cni/networks/openshift-sdn
+      tolerations:
+      - operator: "Exists"

+ 1 - 1
roles/openshift_storage_glusterfs/README.md

@@ -87,7 +87,7 @@ GlusterFS cluster into a new or existing OpenShift cluster:
 | openshift_storage_glusterfs_block_deploy               | True                    | Deploy glusterblock provisioner service
 | openshift_storage_glusterfs_block_image                | 'gluster/glusterblock-provisioner'| Container image to use for glusterblock-provisioner pod, enterprise default is 'rhgs3/rhgs-gluster-block-prov-rhel7'
 | openshift_storage_glusterfs_block_host_vol_create      | True                    | Automatically create GlusterFS volumes to host glusterblock volumes. **NOTE:** If this is False, block-hosting volumes will need to be manually created before glusterblock volumes can be provisioned
-| openshift_storage_glusterfs_block_host_vol_size        | 100                     | Size, in GB, of GlusterFS volumes that will be automatically create to host glusterblock volumes if not enough space is available for a glusterblock volume create request. **NOTE:** This value is effectively an upper limit on the size of glusterblock volumes unless you manually create larger GlusterFS block-hosting volumes
+| openshift_storage_glusterfs_block_host_vol_size        | 100                     | Size, in GB, of GlusterFS volumes that will be automatically created to host glusterblock volumes if not enough space is available for a glusterblock volume create request. **NOTE:** This value is effectively an upper limit on the size of glusterblock volumes unless you manually create larger GlusterFS block-hosting volumes
 | openshift_storage_glusterfs_block_host_vol_max         | 15                      | Max number of GlusterFS volumes to host glusterblock volumes
 | openshift_storage_glusterfs_block_storageclass         | False                   | Automatically create a StorageClass for each glusterblock cluster
 | openshift_storage_glusterfs_block_storageclass_default | False                   | Sets the glusterblock StorageClass for this group as cluster-wide default

+ 15 - 0
test/gcp/build_image.yml

@@ -91,6 +91,21 @@
   - include_role:
       name: openshift_gcp
       tasks_from: frequent_log_rotation.yml
+  - name: Install NetworkManager-glib (needed by the nmcli module below)
+    package:
+      name: NetworkManager-glib
+      state: present
+  - name: Set MTU
+    nmcli:
+      conn_name: "System eth0"
+      mtu: "{{ openshift_node_sdn_mtu }}"
+      type: ethernet
+      state: present
+  # Required for storage tests to mount NFS shares
+  - name: Install packages for tests
+    package:
+      name: "nfs-utils"
+      state: present
 
 - name: Commit image
   hosts: localhost
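
The nmcli module in the hunk above edits an existing NetworkManager profile, so `conn_name: "System eth0"` assumes the stock CentOS/RHEL connection name on the image. A hypothetical follow-up check that the MTU actually stuck:

  - name: Verify the configured MTU (hypothetical check)
    command: nmcli -g 802-3-ethernet.mtu connection show "System eth0"
    register: mtu_check
    changed_when: false
    failed_when: mtu_check.stdout != (openshift_node_sdn_mtu | string)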

+ 16 - 0
test/gcp/install.yml

@@ -9,5 +9,21 @@
       name: openshift_gcp
       tasks_from: setup_scale_group_facts.yml
 
+- hosts: nodes
+  tasks:
+  - name: Disable google hostname updater
+    file:
+      path: /etc/dhcp/dhclient.d/google_hostname.sh
+      mode: 0644
+
 - name: run the deploy_cluster_40
   import_playbook: ../../playbooks/deploy_cluster_40.yml
+
+- name: destroy bootstrap node
+  hosts: localhost
+  connection: local
+  tasks:
+  - name: Scale down bootstrap node and update public API DNS record
+    include_role:
+      name: openshift_gcp
+      tasks_from: remove_bootstrap.yml
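
remove_bootstrap.yml (diffed above) repoints the public API A record at the masters' external IPs before deleting the bootstrap instance group, so the API hostname keeps resolving throughout. A hypothetical post-check from the control host:

- name: Confirm the public API record resolves to the masters (hypothetical check)
  command: dig +short {{ openshift_master_cluster_public_hostname }}
  register: api_dns
  changed_when: false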