
Merge pull request #6916 from smarterclayton/static_masters

Automatic merge from submit-queue.

Make openshift-ansible use static pods to install the control plane, make nodes prefer bootstrapping

1. Nodes continue to be configured for bootstrapping (as today)
2. For bootstrap nodes, we write a generic bootstrap-node-config.yaml that contains static pod references and any bootstrap config, and then use that config to start a child kubelet via `--write-flags` instead of launching the node ourselves (a sketch of this flow follows the list below).  If a node-config.yaml is laid down in `/etc/origin/node` it takes precedence.
3. For 3.10 we want dynamic node config from Kubernetes to pull down additional files, but there are functional gaps.  For now, the openshift SDN container has a sidecar that syncs node config to disk and updates labels (the kubelet doesn't update labels, https://github.com/kubernetes/kubernetes/issues/59314)
4. On the masters, if `openshift_master_bootstrap_enabled` is set, we generate the master-config.yaml and the etcd config, but we don't start etcd or the masters (no services installed)
5. On the masters, we copy the static files into the correct pod-manifest-path (/etc/origin/node/pods) or similar
6. The kubelet at that point should automatically pick up the new static files and launch the components
7. We wait for them to converge
8. We install openshift-sdn as the first component, which allows nodes to go ready and start installing things.  There is a gap here where the masters are up, the nodes can bootstrap, but the nodes are not ready because no network plugin is installed.
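For illustration only (not part of this diff): a minimal sketch of what the generic bootstrap node config from step 2 could look like, with the static pod directory from steps 5-6 wired in through `kubeletArguments`. The field values and paths here are assumptions for the example, not the rendered output of the `openshift_node_group` templates:

```yaml
# Hypothetical bootstrap-node-config.yaml sketch (values/paths are assumptions)
kind: NodeConfig
apiVersion: v1
masterKubeConfig: node.kubeconfig
kubeletArguments:
  bootstrap-kubeconfig:
  - /etc/origin/node/bootstrap.kubeconfig
  # static pod manifests copied here by the installer (step 5) are picked up
  # automatically by the kubelet (step 6)
  pod-manifest-path:
  - /etc/origin/node/pods
```

The node service then asks `openshift start node --config=<this file> --write-flags` for the matching kubelet arguments and execs the kubelet with them, rather than running the node process directly.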

Challenges at this point:

* The master shims (`master-logs` and `master-restart`) need to deal with CRI-O and systemd.  Ideally these are temporary shims until we remove systemd for these components and have cri-ctl installed.  (Callers invoke the shims uniformly; see the example after this list.)
* We need to test failure modes of the static pods
* Testing
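For context (grounded in the handler changes later in this diff): callers invoke the shim the same way regardless of whether the component is still a systemd unit or already a static pod; the shim decides internally between `systemctl` and the container runtime.

```yaml
# Handler pattern used throughout this diff
# (see roles/openshift_control_plane/handlers/main.yml)
- name: restart master
  command: /usr/local/bin/master-restart "{{ item }}"
  with_items:
  - api
  - controllers
```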

Further exploration things:

* need to get all of the images consumed via image streams or properly substituted into the static pods
* need to look at upgrades and updates
* disk locations become our API (`/var/lib/origin`, `/var/lib/etcd`) - how many customers have fiddled with this?
* may need to make the kubelet halt if it hasn't been able to get server/client certs within a bounded window (5m?) to ensure that autoheals happen (https://github.com/openshift/origin/pull/18430)
* have to figure out whether dynamic kubelet config is a thing we can rely on for 3.10 (@liggitt), and what gaps there are with dynamic reconfig
* client-ca.crt is not handled by bootstrapping or dynamic config.  This needs a solution unless we keep the openshift-sdn sidecar around
* kubelet doesn't send sd notify to systemd (https://github.com/kubernetes/kubernetes/issues/59079)

@derekwaynecarr @sdodson @liggitt @deads2k this is the core of self-hosting.
OpenShift Merge Robot 7 years ago
parent
commit
4d51b562fa
84 changed files with 2289 additions and 551 deletions
  1. .papr.inventory (+13, -3)
  2. .papr.sh (+12, -12)
  3. playbooks/common/openshift-cluster/upgrades/docker/tasks/restart.yml (+14, -5)
  4. playbooks/common/openshift-cluster/upgrades/docker/tasks/upgrade.yml (+13, -5)
  5. playbooks/init/base_packages.yml (+0, -1)
  6. playbooks/openshift-master/private/additional_config.yml (+3, -0)
  7. playbooks/openshift-master/private/config.yml (+18, -0)
  8. playbooks/openshift-master/private/scaleup.yml (+6, -11)
  9. playbooks/openshift-master/private/tasks/wire_aggregator.yml (+5, -10)
  10. playbooks/openshift-node/private/image_prep.yml (+4, -0)
  11. roles/container_runtime/tasks/package_docker.yml (+0, -1)
  12. roles/etcd/defaults/main.yaml (+1, -1)
  13. roles/etcd/files/etcd.yaml (+40, -0)
  14. roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml (+9, -7)
  15. roles/etcd/tasks/main.yml (+6, -134)
  16. roles/etcd/tasks/rpm.yml (+84, -0)
  17. roles/etcd/tasks/static.yml (+76, -0)
  18. roles/etcd/tasks/system_container.yml (+0, -94)
  19. roles/openshift_control_plane/README.md (+50, -0)
  20. roles/openshift_control_plane/defaults/main.yml (+152, -0)
  21. roles/openshift_control_plane/files/apiserver.yaml (+48, -0)
  22. roles/openshift_control_plane/files/controller.yaml (+45, -0)
  23. roles/openshift_control_plane/files/scripts/docker/master-logs (+28, -0)
  24. roles/openshift_control_plane/files/scripts/docker/master-restart (+26, -0)
  25. roles/openshift_control_plane/handlers/main.yml (+27, -0)
  26. roles/openshift_control_plane/meta/main.yml (+17, -0)
  27. roles/openshift_control_plane/tasks/bootstrap.yml (+15, -0)
  28. roles/openshift_control_plane/tasks/configure_external_etcd.yml (+17, -0)
  29. roles/openshift_control_plane/tasks/firewall.yml (+44, -0)
  30. roles/openshift_control_plane/tasks/journald.yml (+29, -0)
  31. roles/openshift_control_plane/tasks/main.yml (+224, -0)
  32. roles/openshift_control_plane/tasks/registry_auth.yml (+50, -0)
  33. roles/openshift_control_plane/tasks/restart.yml (+25, -0)
  34. roles/openshift_control_plane/tasks/set_loopback_context.yml (+34, -0)
  35. roles/openshift_control_plane/tasks/static.yml (+63, -0)
  36. roles/openshift_control_plane/tasks/static_shim.yml (+10, -0)
  37. roles/openshift_control_plane/tasks/update_etcd_client_urls.yml (+7, -0)
  38. roles/openshift_control_plane/tasks/upgrade.yml (+45, -0)
  39. roles/openshift_control_plane/tasks/upgrade/rpm_upgrade.yml (+36, -0)
  40. roles/openshift_control_plane/tasks/upgrade/upgrade_scheduler.yml (+175, -0)
  41. roles/openshift_control_plane/templates/htpasswd.j2 (+5, -0)
  42. roles/openshift_control_plane/templates/master.env.j2 (+16, -0)
  43. roles/openshift_control_plane/templates/master.yaml.v1.j2 (+230, -0)
  44. roles/openshift_control_plane/templates/sessionSecretsFile.yaml.v1.j2 (+7, -0)
  45. roles/openshift_gcp/defaults/main.yml (+1, -1)
  46. roles/openshift_hosted/defaults/main.yml (+4, -3)
  47. roles/openshift_hosted_templates/defaults/main.yml (+1, -1)
  48. roles/openshift_logging/handlers/main.yml (+5, -11)
  49. roles/openshift_master/tasks/upgrade/rpm_upgrade.yml (+0, -2)
  50. roles/openshift_metrics/handlers/main.yml (+5, -11)
  51. roles/openshift_node/defaults/main.yml (+3, -26)
  52. roles/openshift_node/files/openshift-node (+18, -0)
  53. roles/openshift_node/handlers/main.yml (+0, -22)
  54. roles/openshift_node/tasks/bootstrap.yml (+8, -9)
  55. roles/openshift_node/tasks/config.yml (+6, -17)
  56. roles/openshift_node/tasks/config/configure-node-settings.yml (+3, -1)
  57. roles/openshift_node/tasks/config/install-ovs-docker-service-file.yml (+0, -8)
  58. roles/openshift_node/tasks/config/install-ovs-service-env-file.yml (+0, -8)
  59. roles/openshift_node/tasks/container_images.yml (+0, -17)
  60. roles/openshift_node/tasks/install.yml (+2, -3)
  61. roles/openshift_node/tasks/main.yml (+0, -3)
  62. roles/openshift_node/tasks/node_system_container.yml (+7, -0)
  63. roles/openshift_node/tasks/openvswitch_system_container.yml (+0, -22)
  64. roles/openshift_node/tasks/systemd_units.yml (+5, -11)
  65. roles/openshift_node/tasks/upgrade/containerized_upgrade_pull.yml (+0, -15)
  66. roles/openshift_node/tasks/upgrade/restart.yml (+0, -4)
  67. roles/openshift_node/tasks/upgrade/rpm_upgrade.yml (+1, -8)
  68. roles/openshift_node/tasks/upgrade/rpm_upgrade_install.yml (+1, -1)
  69. roles/openshift_node/tasks/upgrade/stop_services.yml (+8, -10)
  70. roles/openshift_node/templates/node.service.j2 (+1, -9)
  71. roles/openshift_node/templates/openshift.docker.node.service (+4, -10)
  72. roles/openshift_node/templates/openvswitch-avoid-oom.conf (+0, -3)
  73. roles/openshift_node/templates/openvswitch.docker.service (+0, -17)
  74. roles/openshift_node/templates/openvswitch.sysconfig.j2 (+0, -1)
  75. roles/openshift_node_group/defaults/main.yml (+1, -1)
  76. roles/openshift_node_group/tasks/bootstrap.yml (+11, -0)
  77. roles/openshift_node_group/templates/node-config.yaml.j2 (+17, -12)
  78. roles/openshift_sdn/defaults/main.yml (+6, -0)
  79. roles/openshift_sdn/files/sdn-images.yaml (+9, -0)
  80. roles/openshift_sdn/files/sdn-ovs.yaml (+83, -0)
  81. roles/openshift_sdn/files/sdn-policy.yaml (+29, -0)
  82. roles/openshift_sdn/files/sdn.yaml (+251, -0)
  83. roles/openshift_sdn/meta/main.yaml (+19, -0)
  84. roles/openshift_sdn/tasks/main.yml (+51, -0)

+ 13 - 3
.papr.inventory

@@ -7,7 +7,6 @@ etcd
 ansible_ssh_user=root
 ansible_python_interpreter=/usr/bin/python3
 openshift_deployment_type=origin
-openshift_image_tag="{{ lookup('env', 'OPENSHIFT_IMAGE_TAG') }}"
 openshift_master_default_subdomain="{{ lookup('env', 'RHCI_ocp_node1_IP') }}.xip.io"
 openshift_check_min_host_disk_gb=1.5
 openshift_check_min_host_memory_gb=1.9
@@ -15,6 +14,17 @@ osm_cluster_network_cidr=10.128.0.0/14
 openshift_portal_net=172.30.0.0/16
 osm_host_subnet_length=9
 
+[all:vars]
+# bootstrap configs
+openshift_node_groups=[{"name":"node-config-master","labels":["node-role.kubernetes.io/master=true","node-role.kubernetes.io/infra=true"]},{"name":"node-config-node","labels":["node-role.kubernetes.io/compute=true"]}]
+openshift_master_bootstrap_enabled=true
+openshift_master_bootstrap_auto_approve=true
+openshift_master_bootstrap_auto_approver_node_selector={"region":"infra"}
+osm_controller_args={"experimental-cluster-signing-duration": ["20m"]}
+openshift_node_bootstrap=true
+openshift_hosted_infra_selector="node-role.kubernetes.io/infra=true"
+osm_default_node_selector="node-role.kubernetes.io/compute=true"
+
 [masters]
 ocp-master
 
@@ -23,5 +33,5 @@ ocp-master
 
 [nodes]
 ocp-master openshift_schedulable=true
-ocp-node1  openshift_node_labels="{'region':'infra'}"
-ocp-node2  openshift_node_labels="{'region':'infra'}"
+ocp-node1
+ocp-node2

+ 12 - 12
.papr.sh

@@ -6,17 +6,16 @@ set -xeuo pipefail
 # specific version which quickly becomes stale.
 
 if [ -n "${PAPR_BRANCH:-}" ]; then
-    target_branch=$PAPR_BRANCH
+  target_branch=$PAPR_BRANCH
 else
-    target_branch=$PAPR_PULL_TARGET_BRANCH
+  target_branch=$PAPR_PULL_TARGET_BRANCH
+fi
+if [[ "${target_branch}" =~ ^release- ]]; then
+  target_branch="${target_branch/release-/v}"
+else
+  dnf install -y sed
+  target_branch="$( git describe | sed 's/^openshift-ansible-\([0-9]*\.[0-9]*\)\.[0-9]*-.*/v\1/' )"
 fi
-
-# this is a bit wasteful, though there's no easy way to say "only clone up to
-# the first tag in the branch" -- ideally, PAPR could help with caching here
-git clone --branch $target_branch --single-branch https://github.com/openshift/origin
-export OPENSHIFT_IMAGE_TAG=$(git -C origin describe --abbrev=0)
-
-echo "Targeting OpenShift Origin $OPENSHIFT_IMAGE_TAG"
 
 pip install -r requirements.txt
 
@@ -32,10 +31,11 @@ upload_journals() {
 
 trap upload_journals ERR
 
+# make all nodes ready for bootstrapping
+ansible-playbook -vvv -i .papr.inventory playbooks/openshift-node/private/image_prep.yml
+
 # run the actual installer
-# FIXME: override openshift_image_tag defined in the inventory until
-# https://github.com/openshift/openshift-ansible/issues/4478 is fixed.
-ansible-playbook -vvv -i .papr.inventory playbooks/deploy_cluster.yml -e "openshift_image_tag=$OPENSHIFT_IMAGE_TAG"
+ansible-playbook -vvv -i .papr.inventory playbooks/deploy_cluster.yml -e "openshift_release=${target_branch}"
 
 ### DISABLING TESTS FOR NOW, SEE:
 ### https://github.com/openshift/openshift-ansible/pull/6132

+ 14 - 5
playbooks/common/openshift-cluster/upgrades/docker/tasks/restart.yml

@@ -6,14 +6,23 @@
   retries: 3
   delay: 30
 
+- name: Restart static master services
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
+  - etcd
+  failed_when: false
+  when: openshift_is_containerized | bool
+
 - name: Restart containerized services
   service: name={{ item }} state=started
   with_items:
-    - etcd_container
-    - openvswitch
-    - "{{ openshift_service_type }}-master-api"
-    - "{{ openshift_service_type }}-master-controllers"
-    - "{{ openshift_service_type }}-node"
+  - etcd_container
+  - openvswitch
+  - "{{ openshift_service_type }}-master-api"
+  - "{{ openshift_service_type }}-master-controllers"
+  - "{{ openshift_service_type }}-node"
   failed_when: false
   when: openshift_is_containerized | bool
 

+ 13 - 5
playbooks/common/openshift-cluster/upgrades/docker/tasks/upgrade.yml

@@ -4,14 +4,22 @@
 - name: Stop containerized services
   service: name={{ item }} state=stopped
   with_items:
-    - "{{ openshift_service_type }}-master-api"
-    - "{{ openshift_service_type }}-master-controllers"
-    - "{{ openshift_service_type }}-node"
-    - etcd_container
-    - openvswitch
+  - "{{ openshift_service_type }}-master-api"
+  - "{{ openshift_service_type }}-master-controllers"
+  - "{{ openshift_service_type }}-node"
+  - etcd_container
+  - openvswitch
   failed_when: false
   when: openshift_is_containerized | bool
 
+- name: Restart static master services
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
+  - etcd
+  failed_when: false
+
 - name: Check Docker image count
   shell: "docker images -aq | wc -l"
   register: docker_image_count

+ 0 - 1
playbooks/init/base_packages.yml

@@ -35,7 +35,6 @@
       - >
         (openshift_use_system_containers | default(False)) | bool
         or (openshift_use_etcd_system_container | default(False)) | bool
-        or (openshift_use_openvswitch_system_container | default(False)) | bool
         or (openshift_use_node_system_container | default(False)) | bool
         or (openshift_use_master_system_container | default(False)) | bool
       register: result

+ 3 - 0
playbooks/openshift-master/private/additional_config.yml

@@ -18,6 +18,9 @@
     etcd_urls: "{{ openshift.master.etcd_urls }}"
     omc_cluster_hosts: "{{ groups.oo_masters | join(' ')}}"
   roles:
+  # TODO: this is currently required in order to schedule pods onto the masters, but
+  #   should be moved into components once nodes are using dynamic config
+  - role: openshift_sdn
   - role: openshift_project_request_template
     when: openshift_project_request_template_manage
   - role: openshift_examples

+ 18 - 0
playbooks/openshift-master/private/config.yml

@@ -176,6 +176,18 @@
     openshift_no_proxy_etcd_host_ips: "{{ hostvars | lib_utils_oo_select_keys(groups['oo_etcd_to_config'] | default([]))
                                                 | lib_utils_oo_collect('openshift.common.ip') | default([]) | join(',')
                                                 }}"
+  pre_tasks:
+  # This will be moved into the control plane role once openshift_master is removed
+  - name: Add static pod and systemd shim commands
+    import_role:
+      name: openshift_control_plane
+      tasks_from: static_shim
+  - name: Prepare the bootstrap node config on masters for self-hosting
+    import_role:
+      name: openshift_node_group
+      tasks_from: bootstrap
+    when: openshift_master_bootstrap_enabled | default(false) | bool
+
   roles:
   - role: openshift_master_facts
   - role: openshift_clock
@@ -184,6 +196,8 @@
   - role: openshift_builddefaults
   - role: openshift_buildoverrides
   - role: nickhammond.logrotate
+
+  # DEPRECATED: begin moving away from this
   - role: openshift_master
     openshift_master_ha: "{{ (groups.oo_masters | length > 1) | bool }}"
     openshift_master_hosts: "{{ groups.oo_masters_to_config }}"
@@ -193,6 +207,10 @@
     openshift_master_default_registry_value: "{{ hostvars[groups.oo_first_master.0].l_default_registry_value }}"
     openshift_master_default_registry_value_api: "{{ hostvars[groups.oo_first_master.0].l_default_registry_value_api }}"
     openshift_master_default_registry_value_controllers: "{{ hostvars[groups.oo_first_master.0].l_default_registry_value_controllers }}"
+    when: not ( openshift_master_bootstrap_enabled | default(false) | bool )
+
+  - role: openshift_control_plane
+    when: openshift_master_bootstrap_enabled | default(false) | bool
   - role: tuned
   - role: nuage_ca
     when: openshift_use_nuage | default(false) | bool

+ 6 - 11
playbooks/openshift-master/private/scaleup.yml

@@ -15,19 +15,14 @@
       yaml_key: 'kubernetesMasterConfig.masterCount'
       yaml_value: "{{ openshift.master.master_count }}"
     notify:
-    - restart master api
-    - restart master controllers
+    - restart master
   handlers:
-  - name: restart master api
-    service: name={{ openshift_service_type }}-master-controllers state=restarted
+  - name: restart master
+    command: /usr/local/bin/master-restart "{{ item }}"
+    with_items:
+    - api
+    - controllers
     notify: verify api server
-  # We retry the controllers because the API may not be 100% initialized yet.
-  - name: restart master controllers
-    command: "systemctl restart {{ openshift_service_type }}-master-controllers"
-    retries: 3
-    delay: 5
-    register: result
-    until: result.rc == 0
   - name: verify api server
     command: >
       curl --silent --tlsv1.2

+ 5 - 10
playbooks/openshift-master/private/tasks/wire_aggregator.yml

@@ -191,16 +191,11 @@
 #restart master serially here
 - when: yedit_output.changed or (yedit_asset_config_output is defined and yedit_asset_config_output.changed)
   block:
-  - name: restart master api
-    systemd: name={{ openshift_service_type }}-master-api state=restarted
-
-  # We retry the controllers because the API may not be 100% initialized yet.
-  - name: restart master controllers
-    command: "systemctl restart {{ openshift_service_type }}-master-controllers"
-    retries: 3
-    delay: 5
-    register: result
-    until: result.rc == 0
+  - name: restart master
+    command: /usr/local/bin/master-restart "{{ item }}"
+    with_items:
+    - api
+    - controllers
 
   - name: Verify API Server
     # Using curl here since the uri module requires python-httplib2 and

+ 4 - 0
playbooks/openshift-node/private/image_prep.yml

@@ -25,6 +25,10 @@
     - import_role:
         name: openshift_node
         tasks_from: bootstrap.yml
+    - import_role:
+        name: openshift_node_group
+        tasks_from: bootstrap.yml
+
 
 - name: Re-enable excluders
   import_playbook: enable_excluders.yml

+ 0 - 1
roles/container_runtime/tasks/package_docker.yml

@@ -8,7 +8,6 @@
   - >
     (openshift_use_system_containers | default(False)) | bool
     or (openshift_use_etcd_system_container | default(False)) | bool
-    or (openshift_use_openvswitch_system_container | default(False)) | bool
     or (openshift_use_node_system_container | default(False)) | bool
     or (openshift_use_master_system_container | default(False)) | bool
 

+ 1 - 1
roles/etcd/defaults/main.yaml

@@ -80,7 +80,7 @@ etcd_listen_client_urls: "{{ etcd_url_scheme }}://{{ etcd_ip }}:{{ etcd_client_p
 #etcd_peer: 127.0.0.1
 etcdctlv2: "{{ r_etcd_common_etcdctl_command }} --cert-file {{ etcd_peer_cert_file }} --key-file {{ etcd_peer_key_file }} --ca-file {{ etcd_peer_ca_file }} -C https://{{ etcd_peer }}:{{ etcd_client_port }}"
 
-etcd_service: "{{ 'etcd_container' if r_etcd_common_etcd_runtime == 'docker' else 'etcd' }}"
+etcd_service: etcd
 # Location of the service file is fixed and not meant to be changed
 etcd_service_file: "/etc/systemd/system/{{ etcd_service }}.service"
 

+ 40 - 0
roles/etcd/files/etcd.yaml

@@ -0,0 +1,40 @@
+kind: Pod
+apiVersion: v1
+metadata:
+  name: master-etcd
+  namespace: kube-system
+  labels:
+    openshift.io/control-plane: "true"
+    openshift.io/component: etcd
+spec:
+  restartPolicy: Always
+  hostNetwork: true
+  containers:
+  - name: etcd
+    image: quay.io/coreos/etcd:v3.3
+    workingDir: /var/lib/etcd
+    command: ["/bin/sh", "-c"]
+    args:
+    - |
+      #!/bin/sh
+      set -o allexport
+      source /etc/etcd/etcd.conf
+      exec etcd
+    securityContext:
+      privileged: true
+    volumeMounts:
+     - mountPath: /etc/etcd/
+       name: master-config
+       readOnly: true
+     - mountPath: /var/lib/etcd/
+       name: master-data
+    livenessProbe:
+      tcpSocket:
+        port: 2379
+  volumes:
+  - name: master-config
+    hostPath:
+      path: /etc/etcd/
+  - name: master-data
+    hostPath:
+      path: /var/lib/etcd

+ 9 - 7
roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml

@@ -3,7 +3,9 @@
   package:
     name: "etcd{{ '-' + etcd_version if etcd_version is defined else '' }}"
     state: present
-  when: not etcd_is_containerized | bool
+  when: not etcd_is_atomic | bool
+  delegate_to: "{{ etcd_ca_host }}"
+  run_once: true
   register: result
   until: result is succeeded
 
@@ -178,8 +180,8 @@
   file:
     path: "{{ item }}"
     mode: 0600
-    owner: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
-    group: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
+    owner: "etcd"
+    group: "etcd"
   when: etcd_url_scheme == 'https'
   with_items:
   - "{{ etcd_ca_file }}"
@@ -190,8 +192,8 @@
   file:
     path: "{{ item }}"
     mode: 0600
-    owner: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
-    group: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
+    owner: "etcd"
+    group: "etcd"
   when: etcd_peer_url_scheme == 'https'
   with_items:
   - "{{ etcd_peer_ca_file }}"
@@ -202,6 +204,6 @@
   file:
     path: "{{ etcd_conf_dir }}"
     state: directory
-    owner: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
-    group: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
+    owner: "etcd"
+    group: "etcd"
     mode: 0700

+ 6 - 134
roles/etcd/tasks/main.yml

@@ -1,136 +1,8 @@
 ---
-- name: Set hostname and ip facts
-  set_fact:
-    # Store etcd_hostname and etcd_ip such that they will be available
-    # in hostvars. Defaults for these variables are set in etcd_common.
-    etcd_hostname: "{{ etcd_hostname }}"
-    etcd_ip: "{{ etcd_ip }}"
+- name: Configure etcd with static pods
+  import_tasks: static.yml
+  when: openshift_master_bootstrap_enabled | default(False) | bool
 
-- name: setup firewall
-  import_tasks: firewall.yml
-
-- name: Install etcd
-  package: name=etcd{{ '-' + etcd_version if etcd_version is defined else '' }} state=present
-  when: not etcd_is_containerized | bool
-  register: result
-  until: result is succeeded
-
-- include_tasks: drop_etcdctl.yml
-  when:
-  - openshift_etcd_etcdctl_profile | default(true) | bool
-
-- block:
-  - name: Pull etcd container
-    command: docker pull {{ etcd_image }}
-    register: pull_result
-    changed_when: "'Downloaded newer image' in pull_result.stdout"
-
-  - name: Install etcd container service file
-    template:
-      dest: "/etc/systemd/system/etcd_container.service"
-      src: etcd.docker.service
-  when:
-  - etcd_is_containerized | bool
-  - not l_is_etcd_system_container | bool
-
-# Start secondary etcd instance for third party integrations
-# TODO: Determine an alternative to using thirdparty variable
-- block:
-  - name: Create configuration directory
-    file:
-      path: "{{ etcd_conf_dir }}"
-      state: directory
-      mode: 0700
-
-  # TODO: retest with symlink to confirm it does or does not function
-  - name: Copy service file for etcd instance
-    copy:
-      src: /usr/lib/systemd/system/etcd.service
-      dest: "/etc/systemd/system/{{ etcd_service }}.service"
-      remote_src: True
-
-  - name: Create third party etcd service.d directory exists
-    file:
-      path: "{{ etcd_systemd_dir }}"
-      state: directory
-
-  - name: Configure third part etcd service unit file
-    template:
-      dest: "{{ etcd_systemd_dir }}/custom.conf"
-      src: custom.conf.j2
-  when: etcd_is_thirdparty
-
-  # TODO: this task may not be needed with Validate permissions
-- name: Ensure etcd datadir exists
-  file:
-    path: "{{ etcd_data_dir }}"
-    state: directory
-    mode: 0700
-  when: etcd_is_containerized | bool
-
-- name: Ensure etcd datadir ownership for thirdparty datadir
-  file:
-    path: "{{ etcd_data_dir }}"
-    state: directory
-    mode: 0700
-    owner: etcd
-    group: etcd
-    recurse: True
-  when: etcd_is_thirdparty | bool
-
-  # TODO: Determine if the below reload would work here, for now just reload
-- name:
-  command: systemctl daemon-reload
-  when: etcd_is_thirdparty | bool
-
-- block:
-  - name: Disable system etcd when containerized
-    systemd:
-      name: etcd
-      state: stopped
-      enabled: no
-      masked: yes
-      daemon_reload: yes
-    when: not l_is_etcd_system_container | bool
-    register: task_result
-    failed_when:
-    - task_result is failed
-    - ('could not' not in task_result.msg|lower)
-
-  - name: Install etcd container service file
-    template:
-      dest: "/etc/systemd/system/etcd_container.service"
-      src: etcd.docker.service
-    when: not l_is_etcd_system_container | bool
-
-  - name: Install Etcd system container
-    include_tasks: system_container.yml
-    when: l_is_etcd_system_container | bool
-  when: etcd_is_containerized | bool
-
-- name: Validate permissions on the config dir
-  file:
-    path: "{{ etcd_conf_dir }}"
-    state: directory
-    owner: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
-    group: "{{ 'etcd' if not etcd_is_containerized | bool else omit }}"
-    mode: 0700
-
-- name: Write etcd global config file
-  template:
-    src: etcd.conf.j2
-    dest: "{{ etcd_conf_file }}"
-    backup: true
-  notify:
-  - restart etcd
-
-- name: Enable etcd
-  systemd:
-    name: "{{ etcd_service }}"
-    state: started
-    enabled: yes
-  register: start_result
-
-- name: Set fact etcd_service_status_changed
-  set_fact:
-    etcd_service_status_changed: "{{ start_result is changed }}"
+- name: Configure etcd with RPMs
+  import_tasks: rpm.yml
+  when: not (openshift_master_bootstrap_enabled | default(False) | bool)

+ 84 - 0
roles/etcd/tasks/rpm.yml

@@ -0,0 +1,84 @@
+---
+- name: Set hostname and ip facts
+  set_fact:
+    # Store etcd_hostname and etcd_ip such that they will be available
+    # in hostvars. Defaults for these variables are set in etcd_common.
+    etcd_hostname: "{{ etcd_hostname }}"
+    etcd_ip: "{{ etcd_ip }}"
+
+- name: setup firewall
+  import_tasks: firewall.yml
+
+- name: Install etcd
+  package: name=etcd{{ '-' + etcd_version if etcd_version is defined else '' }} state=present
+  register: result
+  until: result is succeeded
+
+- include_tasks: drop_etcdctl.yml
+  when:
+  - openshift_etcd_etcdctl_profile | default(true) | bool
+
+# Start secondary etcd instance for third party integrations
+# TODO: Determine an alternative to using thirdparty variable
+- block:
+  - name: Create configuration directory
+    file:
+      path: "{{ etcd_conf_dir }}"
+      state: directory
+      mode: 0700
+
+  # TODO: retest with symlink to confirm it does or does not function
+  - name: Copy service file for etcd instance
+    copy:
+      src: /usr/lib/systemd/system/etcd.service
+      dest: "/etc/systemd/system/{{ etcd_service }}.service"
+      remote_src: True
+
+  - name: Create third party etcd service.d directory exists
+    file:
+      path: "{{ etcd_systemd_dir }}"
+      state: directory
+
+  - name: Configure third party etcd service unit file
+    template:
+      dest: "{{ etcd_systemd_dir }}/custom.conf"
+      src: custom.conf.j2
+  when: etcd_is_thirdparty
+
+- name: Ensure etcd datadir ownership for thirdparty datadir
+  file:
+    path: "{{ etcd_data_dir }}"
+    state: directory
+    mode: 0700
+    owner: etcd
+    group: etcd
+    recurse: True
+  when: etcd_is_thirdparty | bool
+
+- name: Validate permissions on the config dir
+  file:
+    path: "{{ etcd_conf_dir }}"
+    state: directory
+    owner: "etcd"
+    group: "etcd"
+    mode: 0700
+
+- name: Write etcd global config file
+  template:
+    src: etcd.conf.j2
+    dest: "{{ etcd_conf_file }}"
+    backup: true
+  notify:
+  - restart etcd
+
+- name: Enable etcd
+  systemd:
+    name: "{{ etcd_service }}"
+    state: started
+    enabled: yes
+    daemon_reload: yes
+  register: start_result
+
+- name: Set fact etcd_service_status_changed
+  set_fact:
+    etcd_service_status_changed: "{{ start_result is changed }}"

+ 76 - 0
roles/etcd/tasks/static.yml

@@ -0,0 +1,76 @@
+---
+- name: Set hostname and ip facts
+  set_fact:
+    # Store etcd_hostname and etcd_ip such that they will be available
+    # in hostvars. Defaults for these variables are set in etcd_common.
+    etcd_hostname: "{{ etcd_hostname }}"
+    etcd_ip: "{{ etcd_ip }}"
+
+- name: setup firewall
+  import_tasks: firewall.yml
+
+  # TODO: this task may not be needed with Validate permissions
+- name: Ensure etcd datadir exists
+  file:
+    path: "{{ etcd_data_dir }}"
+    state: directory
+    mode: 0700
+
+- name: Validate permissions on the config dir
+  file:
+    path: "{{ etcd_conf_dir }}"
+    state: directory
+    owner: "etcd"
+    group: "etcd"
+    mode: 0700
+
+- name: Validate permissions on the static pods dir
+  file:
+    path: "/etc/origin/node/pods/"
+    state: directory
+    owner: "root"
+    group: "root"
+    mode: 0700
+
+- name: Write etcd global config file
+  template:
+    src: etcd.conf.j2
+    dest: "{{ etcd_conf_file }}"
+    backup: true
+
+- name: Create temp directory for static pods
+  command: mktemp -d /tmp/openshift-ansible-XXXXXX
+  register: mktemp
+  changed_when: false
+
+- name: Prepare etcd static pod
+  copy:
+    src: "{{ item }}"
+    dest: "{{ mktemp.stdout }}"
+    mode: 0600
+  with_items:
+  - etcd.yaml
+
+- name: Update etcd static pod
+  yedit:
+    src: "{{ mktemp.stdout }}/{{ item }}"
+    edits:
+    - key: spec.containers[0].image
+      value: "{{ etcd_image }}"
+  with_items:
+  - etcd.yaml
+
+- name: Deploy etcd static pod
+  copy:
+    remote_src: true
+    src: "{{ mktemp.stdout }}/{{ item }}"
+    dest: "/etc/origin/node/pods/"
+    mode: 0600
+  with_items:
+  - etcd.yaml
+
+- name: Remove temp directory
+  file:
+    state: absent
+    name: "{{ mktemp.stdout }}"
+  changed_when: False

+ 0 - 94
roles/etcd/tasks/system_container.yml

@@ -1,94 +0,0 @@
----
-- name: Pull etcd system container
-  command: atomic pull --storage=ostree {{ etcd_image }}
-  register: pull_result
-  changed_when: "'Pulling layer' in pull_result.stdout"
-
-- name: Set initial Etcd cluster
-  set_fact:
-    etcd_initial_cluster: >-
-      {% for host in etcd_peers | default([]) -%}
-      {% if loop.last -%}
-      {{ hostvars[host].etcd_hostname }}={{ etcd_peer_url_scheme }}://{{ hostvars[host].etcd_ip }}:{{ etcd_peer_port }}
-      {%- else -%}
-      {{ hostvars[host].etcd_hostname }}={{ etcd_peer_url_scheme }}://{{ hostvars[host].etcd_ip }}:{{ etcd_peer_port }},
-      {%- endif -%}
-      {% endfor -%}
-  when: etcd_initial_cluster is undefined
-
-- name: Check etcd system container package
-  command: >
-    atomic containers list --no-trunc -a -f container=etcd -f backend=ostree
-  register: etcd_result
-
-- name: Unmask etcd service
-  systemd:
-    name: etcd
-    state: stopped
-    enabled: no
-    masked: no
-    daemon_reload: yes
-  register: task_result
-  failed_when:
-    - task_result is failed
-    - ('could not' not in task_result.msg|lower)
-  when: "'etcd' not in etcd_result.stdout"
-
-- name: Disable etcd_container
-  systemd:
-    name: etcd_container
-    state: stopped
-    enabled: no
-    daemon_reload: yes
-  register: task_result
-  failed_when:
-    - task_result is failed
-    - ('could not' not in task_result.msg|lower)
-
-- name: Remove etcd_container.service
-  file:
-    path: /etc/systemd/system/etcd_container.service
-    state: absent
-
-- name: Systemd reload configuration
-  systemd: name=etcd_container daemon_reload=yes
-
-- name: Install or Update Etcd system container package
-  oc_atomic_container:
-    name: etcd
-    image: "{{ etcd_image }}"
-    state: latest
-    values:
-      - ETCD_DATA_DIR=/var/lib/etcd
-      - ETCD_LISTEN_PEER_URLS={{ etcd_listen_peer_urls }}
-      - ETCD_NAME={{ etcd_hostname }}
-      - ETCD_INITIAL_CLUSTER={{ etcd_initial_cluster }}
-      - ETCD_LISTEN_CLIENT_URLS={{ etcd_listen_client_urls }}
-      - ETCD_INITIAL_ADVERTISE_PEER_URLS={{ etcd_initial_advertise_peer_urls }}
-      - ETCD_INITIAL_CLUSTER_STATE={{ etcd_initial_cluster_state }}
-      - ETCD_INITIAL_CLUSTER_TOKEN={{ etcd_initial_cluster_token }}
-      - ETCD_ADVERTISE_CLIENT_URLS={{ etcd_advertise_client_urls }}
-      - ETCD_CA_FILE={{ etcd_ca_file }}
-      - ETCD_CERT_FILE={{ etcd_cert_file }}
-      - ETCD_KEY_FILE={{ etcd_key_file }}
-      - ETCD_PEER_CA_FILE={{ etcd_peer_ca_file }}
-      - ETCD_PEER_CERT_FILE={{ etcd_peer_cert_file }}
-      - ETCD_PEER_KEY_FILE={{ etcd_peer_key_file }}
-      - ETCD_TRUSTED_CA_FILE={{ etcd_ca_file }}
-      - ETCD_PEER_TRUSTED_CA_FILE={{ etcd_peer_ca_file }}
-      - 'ADDTL_MOUNTS=,{"type":"bind","source":"/etc/","destination":"/etc/","options":["rbind","rw","rslave"]},{"type":"bind","source":"/var/lib/etcd","destination":"/var/lib/etcd/","options":["rbind","rw","rslave"]}'
-
-- name: Ensure etcd datadir ownership for the system container
-  file:
-    path: "{{ etcd_data_dir }}"
-    state: directory
-    mode: 0700
-    owner: root
-    group: root
-    recurse: True
-
-- name: Ensure correct permissions are set for etcd_data_dir
-  template:
-    src: etcd-dir.conf.j2
-    dest: "/etc/tmpfiles.d/etcd-dir.conf"
-    backup: true

+ 50 - 0
roles/openshift_control_plane/README.md

@@ -0,0 +1,50 @@
+OpenShift Control Plane
+==================================
+
+Installs the services that comprise the OpenShift control plane onto nodes that are preconfigured for
+bootstrapping.
+
+Requirements
+------------
+
+* Ansible 2.2
+* A RHEL 7.1 host pre-configured with access to the rhel-7-server-rpms,
+rhel-7-server-extras-rpms, and rhel-7-server-ose-3.0-rpms repos.
+
+Role Variables
+--------------
+
+From this role:
+
+| Name                                               | Default value         | Description                                                                     |
+|---------------------------------------------------|-----------------------|-------------------------------------------------------------------------------|
+| openshift_node_ips                                | []                    | List of the openshift node ip addresses to pre-register when master starts up |
+| oreg_url                                          | UNDEF                 | Default docker registry to use                                                |
+| oreg_url_master                                   | UNDEF                 | Default docker registry to use, specifically on the master                    |
+| openshift_master_api_port                         | UNDEF                 |                                                                               |
+| openshift_master_console_port                     | UNDEF                 |                                                                               |
+| openshift_master_api_url                          | UNDEF                 |                                                                               |
+| openshift_master_console_url                      | UNDEF                 |                                                                               |
+| openshift_master_public_api_url                   | UNDEF                 |                                                                               |
+| openshift_master_public_console_url               | UNDEF                 |                                                                               |
+| openshift_master_saconfig_limit_secret_references | false                 |                                                                               |
+
+
+Dependencies
+------------
+
+
+Example Playbook
+----------------
+
+TODO
+
+License
+-------
+
+Apache License, Version 2.0
+
+Author Information
+------------------
+
+TODO

+ 152 - 0
roles/openshift_control_plane/defaults/main.yml

@@ -0,0 +1,152 @@
+---
+# openshift_master_defaults_in_use is a workaround to detect if we are consuming
+# the plays from the role or outside of the role.
+openshift_master_defaults_in_use: True
+openshift_master_debug_level: "{{ debug_level | default(2) }}"
+
+r_openshift_master_firewall_enabled: "{{ os_firewall_enabled | default(True) }}"
+r_openshift_master_use_firewalld: "{{ os_firewall_use_firewalld | default(False) }}"
+
+osm_image_default_dict:
+  origin: 'openshift/origin'
+  openshift-enterprise: 'openshift3/ose'
+osm_image_default: "{{ osm_image_default_dict[openshift_deployment_type] }}"
+osm_image: "{{ osm_image_default }}"
+
+l_openshift_master_images_dict:
+  origin: 'openshift/origin-${component}:${version}'
+  openshift-enterprise: 'openshift3/ose-${component}:${version}'
+l_osm_registry_url_default: "{{ l_openshift_master_images_dict[openshift_deployment_type] }}"
+l_osm_registry_url: "{{ oreg_url_master | default(oreg_url) | default(l_osm_registry_url_default) | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
+
+system_images_registry_dict:
+  openshift-enterprise: "registry.access.redhat.com"
+  origin: "docker.io"
+
+system_images_registry: "{{ system_images_registry_dict[openshift_deployment_type | default('origin')] }}"
+
+l_is_master_system_container: "{{ (openshift_use_master_system_container | default(openshift_use_system_containers | default(false)) | bool) }}"
+
+openshift_master_dns_port: 8053
+osm_default_node_selector: ''
+osm_project_request_template: ''
+osm_mcs_allocator_range: 's0:/2'
+osm_mcs_labels_per_project: 5
+osm_uid_allocator_range: '1000000000-1999999999/10000'
+osm_project_request_message: ''
+
+openshift_node_ips: []
+r_openshift_master_clean_install: false
+r_openshift_master_os_firewall_enable: true
+r_openshift_master_os_firewall_deny: []
+default_r_openshift_master_os_firewall_allow:
+- service: api server https
+  port: "{{ openshift.master.api_port }}/tcp"
+- service: api controllers https
+  port: "{{ openshift.master.controllers_port }}/tcp"
+- service: skydns tcp
+  port: "{{ openshift_master_dns_port }}/tcp"
+- service: skydns udp
+  port: "{{ openshift_master_dns_port }}/udp"
+- service: etcd embedded
+  port: 4001/tcp
+  cond: "{{ groups.oo_etcd_to_config | default([]) | length == 0 }}"
+r_openshift_master_os_firewall_allow: "{{ default_r_openshift_master_os_firewall_allow | union(openshift_master_open_ports | default([])) }}"
+
+# oreg_url is defined by user input
+oreg_host: "{{ oreg_url.split('/')[0] if (oreg_url is defined and '.' in oreg_url.split('/')[0]) else '' }}"
+oreg_auth_credentials_path: "{{ r_openshift_master_data_dir }}/.docker"
+oreg_auth_credentials_replace: False
+l_bind_docker_reg_auth: False
+openshift_docker_alternative_creds: "{{ (openshift_docker_use_system_container | default(False) | bool) or (openshift_use_crio_only | default(False)) }}"
+
+containerized_svc_dir: "/usr/lib/systemd/system"
+ha_svc_template_path: "native-cluster"
+
+openshift_docker_service_name: "{{ 'container-engine' if (openshift_docker_use_system_container | default(False) | bool) else 'docker' }}"
+
+openshift_master_loopback_config: "{{ openshift_master_config_dir }}/openshift-master.kubeconfig"
+loopback_context_string: "current-context: {{ openshift.master.loopback_context_name }}"
+openshift_master_session_secrets_file: "{{ openshift_master_config_dir }}/session-secrets.yaml"
+openshift_master_policy: "{{ openshift_master_config_dir }}/policy.json"
+
+scheduler_config:
+  kind: Policy
+  apiVersion: v1
+  predicates: "{{ openshift_master_scheduler_predicates
+                  | default(openshift_master_scheduler_current_predicates
+                            | default(openshift_master_scheduler_default_predicates)) }}"
+  priorities: "{{ openshift_master_scheduler_priorities
+                  | default(openshift_master_scheduler_current_priorities
+                            | default(openshift_master_scheduler_default_priorities)) }}"
+
+openshift_master_valid_grant_methods:
+- auto
+- prompt
+- deny
+
+openshift_master_is_scaleup_host: False
+
+# openshift_master_oauth_template is deprecated.  Should be added to deprecations
+# and removed.
+openshift_master_oauth_template: False
+openshift_master_oauth_templates_default:
+  login: "{{ openshift_master_oauth_template }}"
+openshift_master_oauth_templates: "{{ openshift_master_oauth_template | ternary(openshift_master_oauth_templates_default, False) }}"
+# Here we combine openshift_master_oauth_template into the 'login' key of openshift_master_oauth_templates, if not present.
+l_openshift_master_oauth_templates: "{{ openshift_master_oauth_templates | default(openshift_master_oauth_templates_default) }}"
+
+# These defaults assume forcing journald persistence, fsync to disk once
+# a second, rate-limiting to 10,000 logs a second, no forwarding to
+# syslog or wall, using 8GB of disk space maximum, using 10MB journal
+# files, keeping only a day's worth of logs per journal file, and
+# retaining journal files no longer than a month.
+journald_vars_to_replace:
+- { var: Storage, val: persistent }
+- { var: Compress, val: yes }
+- { var: SyncIntervalSec, val: 1s }
+- { var: RateLimitInterval, val: 1s }
+- { var: RateLimitBurst, val: 10000 }
+- { var: SystemMaxUse, val: 8G }
+- { var: SystemKeepFree, val: 20% }
+- { var: SystemMaxFileSize, val: 10M }
+- { var: MaxRetentionSec, val: 1month }
+- { var: MaxFileSec, val: 1day }
+- { var: ForwardToSyslog, val: no }
+- { var: ForwardToWall, val: no }
+
+
+# NOTE
+# r_openshift_master_*_default may be defined external to this role.
+# openshift_use_*, if defined, may affect other roles or play behavior.
+r_openshift_master_use_openshift_sdn_default: "{{ openshift_use_openshift_sdn | default(True) }}"
+r_openshift_master_use_openshift_sdn: "{{ r_openshift_master_use_openshift_sdn_default }}"
+
+r_openshift_master_use_nuage_default: "{{ openshift_use_nuage | default(False) }}"
+r_openshift_master_use_nuage: "{{ r_openshift_master_use_nuage_default }}"
+
+r_openshift_master_use_contiv_default: "{{ openshift_use_contiv | default(False) }}"
+r_openshift_master_use_contiv: "{{ r_openshift_master_use_contiv_default }}"
+
+r_openshift_master_use_kuryr_default: "{{ openshift_use_kuryr | default(False) }}"
+r_openshift_master_use_kuryr: "{{ r_openshift_master_use_kuryr_default }}"
+
+r_openshift_master_data_dir_default: "{{ openshift_data_dir | default('/var/lib/origin') }}"
+r_openshift_master_data_dir: "{{ r_openshift_master_data_dir_default }}"
+
+r_openshift_master_sdn_network_plugin_name_default: "{{ os_sdn_network_plugin_name | default('redhat/openshift-ovs-subnet') }}"
+r_openshift_master_sdn_network_plugin_name: "{{ r_openshift_master_sdn_network_plugin_name_default }}"
+
+openshift_master_image_config_latest_default: "{{ openshift_image_config_latest | default(False) }}"
+openshift_master_image_config_latest: "{{ openshift_master_image_config_latest_default }}"
+
+openshift_master_config_dir_default: "{{ openshift.common.config_base ~ '/master' if openshift is defined and 'common' in openshift else '/etc/origin/master' }}"
+openshift_master_config_dir: "{{ openshift_master_config_dir_default }}"
+
+openshift_master_bootstrap_enabled: False
+
+openshift_master_csr_sa: node-bootstrapper
+openshift_master_csr_namespace: openshift-infra
+
+openshift_master_config_file: "{{ openshift_master_config_dir }}/master-config.yaml"
+openshift_master_scheduler_conf: "{{ openshift_master_config_dir }}/scheduler.json"

+ 48 - 0
roles/openshift_control_plane/files/apiserver.yaml

@@ -0,0 +1,48 @@
+kind: Pod
+apiVersion: v1
+metadata:
+  name: master-api
+  namespace: kube-system
+  labels:
+    openshift.io/control-plane: "true"
+    openshift.io/component: api
+spec:
+  restartPolicy: Always
+  hostNetwork: true
+  containers:
+  - name: api
+    image: openshift/origin:v3.9.0-alpha.4
+    command: ["/bin/bash", "-c"]
+    args:
+    - |
+      #!/bin/bash
+      set -euo pipefail
+      if [[ -f /etc/origin/master/master.env ]]; then
+        set -o allexport
+        source /etc/origin/master/master.env
+      fi
+      exec openshift start master api --config=/etc/origin/master/master-config.yaml
+    securityContext:
+      privileged: true
+    volumeMounts:
+     - mountPath: /etc/origin/master/
+       name: master-config
+     - mountPath: /etc/origin/cloudprovider/
+       name: master-cloud-provider
+     - mountPath: /var/lib/origin/
+       name: master-data
+    livenessProbe:
+      httpGet:
+        scheme: HTTPS
+        port: 8443
+        path: healthz
+  volumes:
+  - name: master-config
+    hostPath:
+      path: /etc/origin/master/
+  - name: master-cloud-provider
+    hostPath:
+      path: /etc/origin/cloudprovider
+  - name: master-data
+    hostPath:
+      path: /var/lib/origin

+ 45 - 0
roles/openshift_control_plane/files/controller.yaml

@@ -0,0 +1,45 @@
+kind: Pod
+apiVersion: v1
+metadata:
+  name: master-controllers
+  namespace: kube-system
+  labels:
+    openshift.io/control-plane: "true"
+    openshift.io/component: controllers
+spec:
+  restartPolicy: Always
+  hostNetwork: true
+  containers:
+  - name: controllers
+    image: openshift/origin:v3.9.0-alpha.4
+    command: ["/bin/bash", "-c"]
+    args:
+    - |
+      #!/bin/bash
+      set -euo pipefail
+      if [[ -f /etc/origin/master/master.env ]]; then
+        set -o allexport
+        source /etc/origin/master/master.env
+      fi
+      exec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444
+    securityContext:
+      privileged: true
+    volumeMounts:
+     - mountPath: /etc/origin/master/
+       name: master-config
+     - mountPath: /etc/origin/cloudprovider/
+       name: master-cloud-provider
+    livenessProbe:
+      httpGet:
+        scheme: HTTPS
+        port: 8444
+        path: healthz
+  # second controllers container would be started here
+  # scheduler container started here
+  volumes:
+  - name: master-config
+    hostPath:
+      path: /etc/origin/master/
+  - name: master-cloud-provider
+    hostPath:
+      path: /etc/origin/cloudprovider

+ 28 - 0
roles/openshift_control_plane/files/scripts/docker/master-logs

@@ -0,0 +1,28 @@
+#!/bin/bash
+set -euo pipefail
+
+# Return the logs for a given static pod by component name and container name. Remaining arguments are passed to the
+# current container runtime.
+if [[ -z "${1-}" || -z "${2-}" ]]; then
+  echo "A component name like 'api', 'etcd', or 'controllers' must be specified along with the container name within that component." 1>&2
+  exit 1
+fi
+
+# container name is ignored for services
+for type in "atomic-openshift" "origin"; do
+  if systemctl cat "${type}-master-${1}.service" &>/dev/null; then
+    journalctl -u "${type}-master-${1}.service" "${@:3}"
+    exit 0
+  fi
+done
+
+# TODO: move to cri-ctl
+# TODO: short term hack for cri-o
+
+uid=$(docker ps -l -a --filter "label=openshift.io/component=${1}" --filter "label=io.kubernetes.container.name=POD" --format '{{ .Label "io.kubernetes.pod.uid" }}')
+if [[ -z "${uid}" ]]; then
+  echo "Component ${1} is stopped or not running" 1>&2
+  exit 0
+fi
+container=$(docker ps -l -a -q --filter "label=io.kubernetes.pod.uid=${uid}" --filter "label=io.kubernetes.container.name=${2}")
+exec docker logs "${@:3}" "${container}"

+ 26 - 0
roles/openshift_control_plane/files/scripts/docker/master-restart

@@ -0,0 +1,26 @@
+#!/bin/bash
+set -euo pipefail
+
+# Restart the named component by stopping its base container.
+if [[ -z "${1-}" ]]; then
+  echo "A component name like 'api', 'etcd', or 'controllers' must be specified." 1>&2
+  exit 1
+fi
+
+types=( "atomic-openshift" "origin" )
+for type in "${types[@]}"; do
+  if systemctl cat "${type}-master-${1}.service" &>/dev/null; then
+    systemctl restart "${type}-master-${1}.service"
+    exit 0
+  fi
+done
+
+# TODO: move to cri-ctl
+# TODO: short term hack for cri-o
+
+container=$(docker ps -l -q --filter "label=openshift.io/component=${1}" --filter "label=io.kubernetes.container.name=POD")
+if [[ -z "${container}" ]]; then
+  echo "Component ${1} is already stopped" 1>&2
+  exit 0
+fi
+exec docker stop "${container}" --time 30 >/dev/null

+ 27 - 0
roles/openshift_control_plane/handlers/main.yml

@@ -0,0 +1,27 @@
+---
+- name: restart master
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
+  when:
+  - not (master_api_service_status_changed | default(false) | bool)
+  notify:
+  - verify API server
+
+- name: verify API server
+  # Using curl here since the uri module requires python-httplib2 and
+  # wait_for port doesn't provide health information.
+  command: >
+    curl --silent --tlsv1.2
+    --cacert {{ openshift.common.config_base }}/master/ca-bundle.crt
+    {{ openshift.master.api_url }}/healthz/ready
+  args:
+    # Disables the following warning:
+    # Consider using get_url or uri module rather than running curl
+    warn: no
+  register: l_api_available_output
+  until: l_api_available_output.stdout == 'ok'
+  retries: 120
+  delay: 1
+  changed_when: false

+ 17 - 0
roles/openshift_control_plane/meta/main.yml

@@ -0,0 +1,17 @@
+---
+galaxy_info:
+  author: Clayton Coleman
+  description: OpenShift Control Plane
+  company: Red Hat, Inc.
+  license: Apache License, Version 2.0
+  min_ansible_version: 2.2
+  platforms:
+  - name: EL
+    versions:
+    - 7
+  categories:
+  - cloud
+dependencies:
+- role: lib_openshift
+- role: lib_utils
+- role: openshift_facts

+ 15 - 0
roles/openshift_control_plane/tasks/bootstrap.yml

@@ -0,0 +1,15 @@
+---
+# TODO: create a module for this command.
+# oc_serviceaccounts_kubeconfig
+- name: create service account kubeconfig with csr rights
+  command: >
+    oc serviceaccounts create-kubeconfig {{ openshift_master_csr_sa }} -n {{ openshift_master_csr_namespace }}
+  register: kubeconfig_out
+  until: kubeconfig_out.rc == 0
+  retries: 24
+  delay: 5
+
+- name: put service account kubeconfig into a file on disk for bootstrap
+  copy:
+    content: "{{ kubeconfig_out.stdout }}"
+    dest: "{{ openshift_master_config_dir }}/bootstrap.kubeconfig"

+ 17 - 0
roles/openshift_control_plane/tasks/configure_external_etcd.yml

@@ -0,0 +1,17 @@
+---
+- name: Remove etcdConfig section
+  yedit:
+    src: /etc/origin/master/master-config.yaml
+    key: "etcdConfig"
+    state: absent
+- name: Set etcdClientInfo.ca to master.etcd-ca.crt
+  yedit:
+    src: /etc/origin/master/master-config.yaml
+    key: etcdClientInfo.ca
+    value: master.etcd-ca.crt
+- name: Set etcdClientInfo.urls to the external etcd
+  yedit:
+    src: /etc/origin/master/master-config.yaml
+    key: etcdClientInfo.urls
+    value:
+      - "{{ etcd_peer_url_scheme }}://{{ etcd_ip }}:{{ etcd_peer_port }}"

+ 44 - 0
roles/openshift_control_plane/tasks/firewall.yml

@@ -0,0 +1,44 @@
+---
+- when: r_openshift_master_firewall_enabled | bool and not r_openshift_master_use_firewalld | bool
+  block:
+  - name: Add iptables allow rules
+    os_firewall_manage_iptables:
+      name: "{{ item.service }}"
+      action: add
+      protocol: "{{ item.port.split('/')[1] }}"
+      port: "{{ item.port.split('/')[0] }}"
+    when:
+    - item.cond | default(True)
+    with_items: "{{ r_openshift_master_os_firewall_allow }}"
+
+  - name: Remove iptables rules
+    os_firewall_manage_iptables:
+      name: "{{ item.service }}"
+      action: remove
+      protocol: "{{ item.port.split('/')[1] }}"
+      port: "{{ item.port.split('/')[0] }}"
+    when:
+    - item.cond | default(True)
+    with_items: "{{ r_openshift_master_os_firewall_deny }}"
+
+- when: r_openshift_master_firewall_enabled | bool and r_openshift_master_use_firewalld | bool
+  block:
+  - name: Add firewalld allow rules
+    firewalld:
+      port: "{{ item.port }}"
+      permanent: true
+      immediate: true
+      state: enabled
+    when:
+    - item.cond | default(True)
+    with_items: "{{ r_openshift_master_os_firewall_allow }}"
+
+  - name: Remove firewalld allow rules
+    firewalld:
+      port: "{{ item.port }}"
+      permanent: true
+      immediate: true
+      state: disabled
+    when:
+    - item.cond | default(True)
+    with_items: "{{ r_openshift_master_os_firewall_deny }}"

+ 29 - 0
roles/openshift_control_plane/tasks/journald.yml

@@ -0,0 +1,29 @@
+---
+- name: Checking for journald.conf
+  stat: path=/etc/systemd/journald.conf
+  register: journald_conf_file
+
+- name: Create journald persistence directories
+  file:
+    path: /var/log/journal
+    state: directory
+
+- name: Update journald setup
+  replace:
+    dest: /etc/systemd/journald.conf
+    regexp: '^(\#| )?{{ item.var }}=\s*.*?$'
+    replace: ' {{ item.var }}={{ item.val }}'
+    backup: yes
+  with_items: "{{ journald_vars_to_replace | default([]) }}"
+  when: journald_conf_file.stat.exists
+  register: journald_update
+
+# Restart journald immediately, otherwise it gets in the way during
+# further steps in ansible
+- name: Restart journald
+  command: "systemctl restart systemd-journald"
+  retries: 3
+  delay: 5
+  register: result
+  until: result.rc == 0
+  when: journald_update is changed

+ 224 - 0
roles/openshift_control_plane/tasks/main.yml

@@ -0,0 +1,224 @@
+---
+# TODO: add ability to configure certificates given either a local file to
+#       point to or certificate contents, set in default cert locations.
+
+# Authentication Variable Validation
+# TODO: validate the different identity provider kinds as well
+- fail:
+    msg: >
+      Invalid OAuth grant method: {{ openshift_master_oauth_grant_method }}
+  when:
+  - openshift_master_oauth_grant_method is defined
+  - openshift_master_oauth_grant_method not in openshift_master_valid_grant_methods
+
+- name: Open up firewall ports
+  import_tasks: firewall.yml
+
+- name: Create r_openshift_master_data_dir
+  file:
+    path: "{{ r_openshift_master_data_dir }}"
+    state: directory
+    mode: 0755
+    owner: root
+    group: root
+
+- name: Create config parent directory if it does not exist
+  file:
+    path: "{{ openshift_master_config_dir }}"
+    state: directory
+
+- name: Create the policy file if it does not already exist
+  command: >
+    {{ openshift_client_binary }} adm create-bootstrap-policy-file
+      --filename={{ openshift_master_policy }}
+  args:
+    creates: "{{ openshift_master_policy }}"
+
+- name: Create the scheduler config
+  copy:
+    content: "{{ scheduler_config | to_nice_json }}"
+    dest: "{{ openshift_master_scheduler_conf }}"
+    backup: true
+
+- name: Install httpd-tools if needed
+  package: name=httpd-tools state=present
+  when:
+  - item.kind == 'HTPasswdPasswordIdentityProvider'
+  - not openshift_is_atomic | bool
+  with_items: "{{ openshift_master_identity_providers }}"
+  register: result
+  until: result is succeeded
+
+- name: Ensure htpasswd directory exists
+  file:
+    path: "{{ item.filename | dirname }}"
+    state: directory
+  when:
+  - item.kind == 'HTPasswdPasswordIdentityProvider'
+  with_items: "{{ openshift_master_identity_providers }}"
+
+- name: Create the htpasswd file if needed
+  template:
+    dest: "{{ item.filename }}"
+    src: htpasswd.j2
+    backup: yes
+    mode: 0600
+  when:
+  - item.kind == 'HTPasswdPasswordIdentityProvider'
+  - openshift.master.manage_htpasswd | bool
+  with_items: "{{ openshift_master_identity_providers }}"
+
+- name: Ensure htpasswd file exists
+  copy:
+    dest: "{{ item.filename }}"
+    force: no
+    content: ""
+    mode: 0600
+  when:
+  - item.kind == 'HTPasswdPasswordIdentityProvider'
+  with_items: "{{ openshift_master_identity_providers }}"
+
+- name: Create the ldap ca file if needed
+  copy:
+    dest: "{{ item.ca if 'ca' in item and '/' in item.ca else openshift_master_config_dir ~ '/' ~ item.ca | default('ldap_ca.crt') }}"
+    content: "{{ openshift.master.ldap_ca }}"
+    mode: 0600
+    backup: yes
+  when:
+  - openshift.master.ldap_ca is defined
+  - item.kind == 'LDAPPasswordIdentityProvider'
+  with_items: "{{ openshift_master_identity_providers }}"
+
+- name: Create the openid ca file if needed
+  copy:
+    dest: "{{ item.ca if 'ca' in item and '/' in item.ca else openshift_master_config_dir ~ '/' ~ item.ca | default('openid_ca.crt') }}"
+    content: "{{ openshift.master.openid_ca }}"
+    mode: 0600
+    backup: yes
+  when:
+  - openshift.master.openid_ca is defined
+  - item.kind == 'OpenIDIdentityProvider'
+  - item.ca | default('') != ''
+  with_items: "{{ openshift_master_identity_providers }}"
+
+- name: Create the request header ca file if needed
+  copy:
+    dest: "{{ item.clientCA if 'clientCA' in item and '/' in item.clientCA else openshift_master_config_dir ~ '/' ~ item.clientCA | default('request_header_ca.crt') }}"
+    content: "{{ openshift.master.request_header_ca }}"
+    mode: 0600
+    backup: yes
+  when:
+  - openshift.master.request_header_ca is defined
+  - item.kind == 'RequestHeaderIdentityProvider'
+  - item.clientCA | default('') != ''
+  with_items: "{{ openshift_master_identity_providers }}"
+
+- name: Set fact of all etcd host IPs
+  openshift_facts:
+    role: common
+    local_facts:
+      no_proxy_etcd_host_ips: "{{ openshift_no_proxy_etcd_host_ips }}"
+
+- name: Update journald config
+  include_tasks: journald.yml
+
+- name: Create session secrets file
+  template:
+    dest: "{{ openshift.master.session_secrets_file }}"
+    src: sessionSecretsFile.yaml.v1.j2
+    owner: root
+    group: root
+    mode: 0600
+  when:
+  - openshift.master.session_auth_secrets is defined
+  - openshift.master.session_encryption_secrets is defined
+
+- set_fact:
+    # translate_idps is a custom filter in role lib_utils
+    translated_identity_providers: "{{ openshift_master_identity_providers | translate_idps('v1') }}"
+
+# TODO: add the validate parameter when there is a validation command to run
+- name: Create master config
+  template:
+    dest: "{{ openshift_master_config_file }}"
+    src: master.yaml.v1.j2
+    backup: true
+    owner: root
+    group: root
+    mode: 0600
+
+- include_tasks: set_loopback_context.yml
+
+- name: Create the master service env file
+  template:
+    src: "master.env.j2"
+    dest: /etc/origin/master/master.env
+    backup: true
+
+- include_tasks: static.yml
+
+- name: Start and enable self-hosting node
+  systemd:
+    name: "{{ openshift_service_type }}-node"
+    state: restarted
+    enabled: yes
+
+- name: Verify that the control plane is running
+  command: >
+    curl -k {{ openshift.master.api_url }}/healthz
+  args:
+    # Disables the following warning:
+    # Consider using get_url or uri module rather than running curl
+    warn: no
+  register: control_plane_health
+  until: control_plane_health.stdout == 'ok'
+  retries: 60
+  delay: 5
+  changed_when: false
+  # Ignore errors so we can log troubleshooting info on failures.
+  ignore_errors: yes
+
+# Capture debug output here to simplify triage
+- when: control_plane_health.stdout != 'ok'
+  block:
+  - name: Check status in the kube-system namespace
+    command: >
+      {{ openshift_client_binary }} status --config=/etc/origin/master/admin.kubeconfig -n kube-system
+    register: control_plane_status
+    ignore_errors: true
+  - debug:
+      msg: "{{ control_plane_status.stdout_lines }}"
+  - name: Get pods in the kube-system namespace
+    command: >
+      {{ openshift_client_binary }} get pods --config=/etc/origin/master/admin.kubeconfig -n kube-system -o wide
+    register: control_plane_pods
+    ignore_errors: true
+  - debug:
+      msg: "{{ control_plane_pods.stdout_lines }}"
+  - name: Get events in the kube-system namespace
+    command: >
+      {{ openshift_client_binary }} get events --config=/etc/origin/master/admin.kubeconfig -n kube-system
+    register: control_plane_events
+    ignore_errors: true
+  - debug:
+      msg: "{{ control_plane_events.stdout_lines }}"
+  - name: Get API logs
+    command: >
+      /usr/local/bin/master-logs api api
+    register: control_plane_logs_api
+    ignore_errors: true
+  - debug:
+      msg: "{{ control_plane_logs_api.stdout_lines }}"
+  - name: Get node logs
+    command: journalctl --no-pager -n 300 -u {{ openshift_service_type }}-node
+    register: control_plane_logs_node
+    ignore_errors: true
+  - debug:
+      msg: "{{ control_plane_logs_node.stdout_lines }}"
+
+- name: Report control plane errors
+  fail:
+    msg: Control plane install failed.
+  when: control_plane_health.stdout != 'ok'
+
+- include_tasks: bootstrap.yml

+ 50 - 0
roles/openshift_control_plane/tasks/registry_auth.yml

@@ -0,0 +1,50 @@
+---
+- name: Check for credentials file for registry auth
+  stat:
+    path: "{{ oreg_auth_credentials_path }}"
+  when: oreg_auth_user is defined
+  register: master_oreg_auth_credentials_stat
+
+- name: Create credentials for registry auth
+  command: "docker --config={{ oreg_auth_credentials_path }} login -u {{ oreg_auth_user }} -p {{ oreg_auth_password }} {{ oreg_host }}"
+  when:
+  - not (openshift_docker_alternative_creds | default(False))
+  - oreg_auth_user is defined
+  - (not master_oreg_auth_credentials_stat.stat.exists or oreg_auth_credentials_replace) | bool
+  register: master_oreg_auth_credentials_create
+  retries: 3
+  delay: 5
+  until: master_oreg_auth_credentials_create.rc == 0
+  notify:
+  - restart master
+
+# docker_creds is a custom module from lib_utils
+# 'docker login' requires a running docker.service on the local host; this is
+# an alternative implementation for non-docker hosts. It does not contact the
+# registry, so it cannot verify that the credentials will work.
+- name: Create credentials for registry auth (alternative)
+  docker_creds:
+    path: "{{ oreg_auth_credentials_path }}"
+    registry: "{{ oreg_host }}"
+    username: "{{ oreg_auth_user }}"
+    password: "{{ oreg_auth_password }}"
+  when:
+  - openshift_docker_alternative_creds | default(False) | bool
+  - oreg_auth_user is defined
+  - (not master_oreg_auth_credentials_stat.stat.exists or oreg_auth_credentials_replace) | bool
+  register: master_oreg_auth_credentials_create_alt
+  notify:
+  - restart master
+
+# Container images may need the registry credentials
+- name: Setup ro mount of /root/.docker for containerized hosts
+  set_fact:
+    l_bind_docker_reg_auth: True
+  when:
+  - openshift_is_containerized | bool
+  - oreg_auth_user is defined
+  - >
+      (master_oreg_auth_credentials_stat.stat.exists
+      or oreg_auth_credentials_replace
+      or master_oreg_auth_credentials_create.changed
+      or master_oreg_auth_credentials_create_alt.changed) | bool
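These tasks only run when registry credentials are supplied in the inventory. A minimal sketch of the variables involved (the host and values below are illustrative assumptions, not defaults):

    # inventory group_vars -- illustrative values only
    oreg_host: registry.example.com              # assumed private registry host
    oreg_auth_user: pull-bot                     # assumed service account
    oreg_auth_password: "{{ vault_oreg_auth_password }}"
    oreg_auth_credentials_replace: false         # keep an existing credentials file
    openshift_docker_alternative_creds: true     # use docker_creds instead of 'docker login'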

+ 25 - 0
roles/openshift_control_plane/tasks/restart.yml

@@ -0,0 +1,25 @@
+---
+- name: restart master
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
+  notify:
+  - verify API server
+
+- name: verify API server
+  # Using curl here since the uri module requires python-httplib2 and
+  # wait_for port doesn't provide health information.
+  command: >
+    curl --silent --tlsv1.2
+    --cacert {{ openshift.common.config_base }}/master/ca-bundle.crt
+    {{ openshift.master.api_url }}/healthz/ready
+  args:
+    # Disables the following warning:
+    # Consider using get_url or uri module rather than running curl
+    warn: no
+  register: l_api_available_output
+  until: l_api_available_output.stdout == 'ok'
+  retries: 120
+  delay: 1
+  changed_when: false
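Roles that used to restart the systemd master units now notify this handler instead; a minimal sketch of a task wired to it (the edited key and value are illustrative only):

    - name: Update a master config value
      yedit:
        src: /etc/origin/master/master-config.yaml
        key: kubernetesMasterConfig.masterCount    # example key, not a required edit
        value: 3
      notify:
      - restart master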

+ 34 - 0
roles/openshift_control_plane/tasks/set_loopback_context.yml

@@ -0,0 +1,34 @@
+---
+- name: Test local loopback context
+  command: >
+    {{ openshift_client_binary }} config view
+    --config={{ openshift_master_loopback_config }}
+  changed_when: false
+  register: l_loopback_config
+
+- command: >
+    {{ openshift_client_binary }} config set-cluster
+    --certificate-authority={{ openshift_master_config_dir }}/ca.crt
+    --embed-certs=true --server={{ openshift.master.loopback_api_url }}
+    {{ openshift.master.loopback_cluster_name }}
+    --config={{ openshift_master_loopback_config }}
+  when:
+  - loopback_context_string not in l_loopback_config.stdout
+  register: set_loopback_cluster
+
+- command: >
+    {{ openshift_client_binary }} config set-context
+    --cluster={{ openshift.master.loopback_cluster_name }}
+    --namespace=default --user={{ openshift.master.loopback_user }}
+    {{ openshift.master.loopback_context_name }}
+    --config={{ openshift_master_loopback_config }}
+  when:
+  - set_loopback_cluster is changed
+  register: l_set_loopback_context
+
+- command: >
+    {{ openshift_client_binary }} config use-context {{ openshift.master.loopback_context_name }}
+    --config={{ openshift_master_loopback_config }}
+  when:
+  - l_set_loopback_context is changed
+  register: set_current_context

+ 63 - 0
roles/openshift_control_plane/tasks/static.yml

@@ -0,0 +1,63 @@
+---
+- name: Enable bootstrapping in the master config
+  yedit:
+    src: /etc/origin/master/master-config.yaml
+    edits:
+    - key: kubernetesMasterConfig.controllerArguments.cluster-signing-cert-file
+      value:
+      - /etc/origin/master/ca.crt
+    - key: kubernetesMasterConfig.controllerArguments.cluster-signing-key-file
+      value:
+      - /etc/origin/master/ca.key
+
+- name: Create temp directory for static pods
+  command: mktemp -d /tmp/openshift-ansible-XXXXXX
+  register: mktemp
+  changed_when: false
+
+- name: Prepare master static pods
+  copy:
+    src: "{{ item }}"
+    dest: "{{ mktemp.stdout }}"
+    mode: 0600
+  with_items:
+  - apiserver.yaml
+  - controller.yaml
+
+- name: Update master static pods
+  yedit:
+    src: "{{ mktemp.stdout }}/{{ item }}"
+    edits:
+    - key: spec.containers[0].image
+      value: "{{ osm_image }}:{{ openshift_image_tag }}"
+  with_items:
+  - apiserver.yaml
+  - controller.yaml
+
+- name: Copy master static pods into the pod manifest path
+  copy:
+    remote_src: true
+    src: "{{ mktemp.stdout }}/{{ item }}"
+    dest: "/etc/origin/node/pods/"
+    mode: 0600
+  with_items:
+  - apiserver.yaml
+  - controller.yaml
+
+- name: Remove temporary directory
+  file:
+    name: "{{ mktemp.stdout }}"
+    state: absent
+  changed_when: False
+
+- name: Establish the default bootstrap kubeconfig for masters
+  copy:
+    remote_src: true
+    src: "/etc/origin/master/admin.kubeconfig"
+    dest: "{{ item }}"
+    mode: 0600
+  with_items:
+  # bootstrap as an admin
+  - /etc/origin/node/bootstrap.kubeconfig
+  # copy to this location to bypass initial bootstrap request
+  - /etc/origin/node/node.kubeconfig

+ 10 - 0
roles/openshift_control_plane/tasks/static_shim.yml

@@ -0,0 +1,10 @@
+---
+# TODO: package this?
+- name: Copy static master scripts
+  copy:
+    src: "{{ item }}"
+    dest: "/usr/local/bin/"
+    mode: 0500
+  with_items:
+  - scripts/docker/master-logs
+  - scripts/docker/master-restart

+ 7 - 0
roles/openshift_control_plane/tasks/update_etcd_client_urls.yml

@@ -0,0 +1,7 @@
+---
+- yedit:
+    src: "{{ openshift.common.config_base }}/master/master-config.yaml"
+    key: 'etcdClientInfo.urls'
+    value: "{{ openshift.master.etcd_urls }}"
+  notify:
+  - restart master

+ 45 - 0
roles/openshift_control_plane/tasks/upgrade.yml

@@ -0,0 +1,45 @@
+---
+- include_tasks: upgrade/rpm_upgrade.yml
+  when: not openshift_is_containerized | bool
+
+- include_tasks: upgrade/upgrade_scheduler.yml
+
+# master_config_hook is passed in from upgrade play.
+- include_tasks: "upgrade/{{ master_config_hook }}"
+  when: master_config_hook is defined
+
+- include_tasks: journald.yml
+
+- name: Check for ca-bundle.crt
+  stat:
+    path: "{{ openshift.common.config_base }}/master/ca-bundle.crt"
+  register: ca_bundle_stat
+  failed_when: false
+
+- name: Check for ca.crt
+  stat:
+    path: "{{ openshift.common.config_base }}/master/ca.crt"
+  register: ca_crt_stat
+  failed_when: false
+
+- name: Migrate ca.crt to ca-bundle.crt
+  command: mv ca.crt ca-bundle.crt
+  args:
+    chdir: "{{ openshift.common.config_base }}/master"
+  when: ca_crt_stat.stat.isreg and not ca_bundle_stat.stat.exists
+
+- name: Link ca.crt to ca-bundle.crt
+  file:
+    src: "{{ openshift.common.config_base }}/master/ca-bundle.crt"
+    path: "{{ openshift.common.config_base }}/master/ca.crt"
+    state: link
+  when: ca_crt_stat.stat.isreg and not ca_bundle_stat.stat.exists
+
+- name: Update oreg value
+  yedit:
+    src: "{{ openshift.common.config_base }}/master/master-config.yaml"
+    key: 'imageConfig.format'
+    value: "{{ oreg_url | default(oreg_url_master) }}"
+  when: oreg_url is defined or oreg_url_master is defined
+
+- include_tasks: static.yml

+ 36 - 0
roles/openshift_control_plane/tasks/upgrade/rpm_upgrade.yml

@@ -0,0 +1,36 @@
+---
+# When we update package "a-${version}" and a requires b >= ${version}, yum
+# will pick the latest available version of b unless we pin it, and the whole
+# set of dependencies ends up at the latest version. Since the package module,
+# unlike the yum module, doesn't flatten a list of packages into one
+# transaction, we need to do that explicitly. The Ansible core team tells us
+# not to rely on yum module transaction flattening anyway.
+
+# TODO: if the sdn package isn't already installed this will install it; we
+# should fix that
+
+- import_tasks: ../static.yml
+
+- name: Upgrade master packages
+  command:
+    yum install -y {{ master_pkgs | join(' ') }}
+    {{ ' --exclude *' ~ openshift_service_type ~ '*3.9*' if openshift_release | version_compare('3.9','<') else '' }}
+  vars:
+    master_pkgs:
+      - "{{ openshift_service_type }}-node{{ openshift_pkg_version | default('') }}"
+      - "{{ openshift_service_type }}-clients{{ openshift_pkg_version | default('') }}"
+  register: result
+  until: result is succeeded
+  when: ansible_pkg_mgr == 'yum'
+
+- name: Upgrade master packages - dnf
+  dnf:
+    name: "{{ master_pkgs | join(',') }}"
+    state: present
+  vars:
+    master_pkgs:
+      - "{{ openshift_service_type }}-node{{ openshift_pkg_version }}"
+      - "{{ openshift_service_type }}-clients{{ openshift_pkg_version }}"
+  register: result
+  until: result is succeeded
+  when: ansible_pkg_mgr == 'dnf'

+ 175 - 0
roles/openshift_control_plane/tasks/upgrade/upgrade_scheduler.yml

@@ -0,0 +1,175 @@
+---
+# Upgrade predicates
+- vars:
+    # openshift_master_facts_default_predicates is a custom lookup plugin in
+    # role lib_utils
+    prev_predicates: "{{ lookup('openshift_master_facts_default_predicates', short_version=openshift_upgrade_min, deployment_type=openshift_deployment_type) }}"
+    prev_predicates_no_region: "{{ lookup('openshift_master_facts_default_predicates', short_version=openshift_upgrade_min, deployment_type=openshift_deployment_type, regions_enabled=False) }}"
+    default_predicates_no_region: "{{ lookup('openshift_master_facts_default_predicates', regions_enabled=False) }}"
+    # older_predicates are the set of predicates that have previously been
+    # hard-coded into openshift_facts
+    older_predicates:
+    - - name: MatchNodeSelector
+      - name: PodFitsResources
+      - name: PodFitsPorts
+      - name: NoDiskConflict
+      - name: NoVolumeZoneConflict
+      - name: MaxEBSVolumeCount
+      - name: MaxGCEPDVolumeCount
+      - name: Region
+        argument:
+          serviceAffinity:
+            labels:
+            - region
+    - - name: MatchNodeSelector
+      - name: PodFitsResources
+      - name: PodFitsPorts
+      - name: NoDiskConflict
+      - name: NoVolumeZoneConflict
+      - name: Region
+        argument:
+          serviceAffinity:
+            labels:
+            - region
+    - - name: MatchNodeSelector
+      - name: PodFitsResources
+      - name: PodFitsPorts
+      - name: NoDiskConflict
+      - name: Region
+        argument:
+          serviceAffinity:
+            labels:
+            - region
+    # older_predicates_no_region are the set of predicates that have previously
+    # been hard-coded into openshift_facts, with the Region predicate removed
+    older_predicates_no_region:
+    - - name: MatchNodeSelector
+      - name: PodFitsResources
+      - name: PodFitsPorts
+      - name: NoDiskConflict
+      - name: NoVolumeZoneConflict
+      - name: MaxEBSVolumeCount
+      - name: MaxGCEPDVolumeCount
+    - - name: MatchNodeSelector
+      - name: PodFitsResources
+      - name: PodFitsPorts
+      - name: NoDiskConflict
+      - name: NoVolumeZoneConflict
+    - - name: MatchNodeSelector
+      - name: PodFitsResources
+      - name: PodFitsPorts
+      - name: NoDiskConflict
+  block:
+
+  # Handle case where openshift_master_scheduler_predicates is defined
+  - block:
+    - debug:
+        msg: "WARNING: openshift_master_scheduler_predicates is set to defaults from an earlier release of OpenShift current defaults are: {{ openshift_master_scheduler_default_predicates }}"
+      when: openshift_master_scheduler_predicates in older_predicates + older_predicates_no_region + [prev_predicates] + [prev_predicates_no_region]
+
+    - debug:
+        msg: "WARNING: openshift_master_scheduler_predicates does not match current defaults of: {{ openshift_master_scheduler_default_predicates }}"
+      when: openshift_master_scheduler_predicates != openshift_master_scheduler_default_predicates
+    when: openshift_master_scheduler_predicates | default(none) is not none
+
+  # Handle cases where openshift_master_scheduler_predicates is not defined
+  - block:
+    - debug:
+        msg: "WARNING: existing scheduler config does not match previous known defaults automated upgrade of scheduler config is disabled.\nexisting scheduler predicates: {{ openshift_master_scheduler_current_predicates }}\ncurrent scheduler default predicates are: {{ openshift_master_scheduler_default_predicates }}"
+      when:
+      - openshift_master_scheduler_current_predicates != openshift_master_scheduler_default_predicates
+      - openshift_master_scheduler_current_predicates not in older_predicates + [prev_predicates]
+
+    - set_fact:
+        openshift_upgrade_scheduler_predicates: "{{ openshift_master_scheduler_default_predicates }}"
+      when:
+      - openshift_master_scheduler_current_predicates != openshift_master_scheduler_default_predicates
+      - openshift_master_scheduler_current_predicates in older_predicates + [prev_predicates]
+
+    - set_fact:
+        openshift_upgrade_scheduler_predicates: "{{ default_predicates_no_region }}"
+      when:
+      - openshift_master_scheduler_current_predicates != default_predicates_no_region
+      - openshift_master_scheduler_current_predicates in older_predicates_no_region + [prev_predicates_no_region]
+
+    when: openshift_master_scheduler_predicates | default(none) is none
+
+
+# Upgrade priorities
+- vars:
+    prev_priorities: "{{ lookup('openshift_master_facts_default_priorities', short_version=openshift_upgrade_min, deployment_type=openshift_deployment_type) }}"
+    prev_priorities_no_zone: "{{ lookup('openshift_master_facts_default_priorities', short_version=openshift_upgrade_min, deployment_type=openshift_deployment_type, zones_enabled=False) }}"
+    default_priorities_no_zone: "{{ lookup('openshift_master_facts_default_priorities', zones_enabled=False) }}"
+    # older_priorities are the set of priorities that have previously been
+    # hard-coded into openshift_facts
+    older_priorities:
+    - - name: LeastRequestedPriority
+        weight: 1
+      - name: SelectorSpreadPriority
+        weight: 1
+      - name: Zone
+        weight: 2
+        argument:
+          serviceAntiAffinity:
+            label: zone
+    # older_priorities_no_zone are the set of priorities that have previously
+    # been hard-coded into openshift_facts, with the Zone priority removed
+    older_priorities_no_zone:
+    - - name: LeastRequestedPriority
+        weight: 1
+      - name: SelectorSpreadPriority
+        weight: 1
+  block:
+
+  # Handle case where openshift_master_scheduler_priorities is defined
+  - block:
+    - debug:
+        msg: "WARNING: openshift_master_scheduler_priorities is set to defaults from an earlier release of OpenShift current defaults are: {{ openshift_master_scheduler_default_priorities }}"
+      when: openshift_master_scheduler_priorities in older_priorities + older_priorities_no_zone + [prev_priorities] + [prev_priorities_no_zone]
+
+    - debug:
+        msg: "WARNING: openshift_master_scheduler_priorities does not match current defaults of: {{ openshift_master_scheduler_default_priorities }}"
+      when: openshift_master_scheduler_priorities != openshift_master_scheduler_default_priorities
+    when: openshift_master_scheduler_priorities | default(none) is not none
+
+  # Handle cases where openshift_master_scheduler_priorities is not defined
+  - block:
+    - debug:
+        msg: "WARNING: existing scheduler config does not match previous known defaults automated upgrade of scheduler config is disabled.\nexisting scheduler priorities: {{ openshift_master_scheduler_current_priorities }}\ncurrent scheduler default priorities are: {{ openshift_master_scheduler_default_priorities }}"
+      when:
+      - openshift_master_scheduler_current_priorities != openshift_master_scheduler_default_priorities
+      - openshift_master_scheduler_current_priorities not in older_priorities + [prev_priorities]
+
+    - set_fact:
+        openshift_upgrade_scheduler_priorities: "{{ openshift_master_scheduler_default_priorities }}"
+      when:
+      - openshift_master_scheduler_current_priorities != openshift_master_scheduler_default_priorities
+      - openshift_master_scheduler_current_priorities in older_priorities + [prev_priorities]
+
+    - set_fact:
+        openshift_upgrade_scheduler_priorities: "{{ default_priorities_no_zone }}"
+      when:
+      - openshift_master_scheduler_current_priorities != default_priorities_no_zone
+      - openshift_master_scheduler_current_priorities in older_priorities_no_zone + [prev_priorities_no_zone]
+
+    when: openshift_master_scheduler_priorities | default(none) is none
+
+
+# Update scheduler
+- vars:
+    scheduler_config:
+      kind: Policy
+      apiVersion: v1
+      predicates: "{{ openshift_upgrade_scheduler_predicates
+                      | default(openshift_master_scheduler_current_predicates) }}"
+      priorities: "{{ openshift_upgrade_scheduler_priorities
+                      | default(openshift_master_scheduler_current_priorities) }}"
+  block:
+  - name: Update scheduler config
+    copy:
+      content: "{{ scheduler_config | to_nice_json }}"
+      dest: "{{ openshift_master_scheduler_conf }}"
+      backup: true
+  when: >
+    openshift_upgrade_scheduler_predicates is defined or
+    openshift_upgrade_scheduler_priorities is defined
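When either upgrade fact is set, the rewritten scheduler config is a plain Policy document; a sketch of the JSON emitted by to_nice_json, with the predicate and priority lists shortened for illustration:

    {
        "apiVersion": "v1",
        "kind": "Policy",
        "predicates": [
            {"name": "MatchNodeSelector"},
            {"name": "PodFitsResources"}
        ],
        "priorities": [
            {"name": "LeastRequestedPriority", "weight": 1},
            {"name": "SelectorSpreadPriority", "weight": 1}
        ]
    }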

+ 5 - 0
roles/openshift_control_plane/templates/htpasswd.j2

@@ -0,0 +1,5 @@
+{% if 'htpasswd_users' in openshift.master %}
+{%   for user,pass in openshift.master.htpasswd_users.items() %}
+{{     user ~ ':' ~ pass }}
+{%   endfor %}
+{% endif %}
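With a hypothetical htpasswd_users dict such as {'alice': '&lt;hash&gt;', 'bob': '&lt;hash&gt;'}, the template renders one user:hash pair per line:

    alice:$apr1$examplehashonly
    bob:$apr1$examplehashonly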

+ 16 - 0
roles/openshift_control_plane/templates/master.env.j2

@@ -0,0 +1,16 @@
+{% if openshift_cloudprovider_kind | default('') == 'aws' and openshift_cloudprovider_aws_access_key is defined and openshift_cloudprovider_aws_secret_key is defined %}
+AWS_ACCESS_KEY_ID={{ openshift_cloudprovider_aws_access_key }}
+AWS_SECRET_ACCESS_KEY={{ openshift_cloudprovider_aws_secret_key }}
+{% endif %}
+
+# Proxy configuration
+# See https://docs.openshift.com/enterprise/latest/install_config/install/advanced_install.html#configuring-global-proxy
+{% if 'http_proxy' in openshift.common %}
+HTTP_PROXY={{ openshift.common.http_proxy | default('') }}
+{% endif %}
+{% if 'https_proxy' in openshift.common %}
+HTTPS_PROXY={{ openshift.common.https_proxy | default('')}}
+{% endif %}
+{% if 'no_proxy' in openshift.common %}
+NO_PROXY={{ openshift.common.no_proxy | default('') }},{{ openshift.common.portal_net }},{{ openshift.master.sdn_cluster_network_cidr }}
+{% endif %}
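For a cluster with AWS credentials and a global proxy configured (all values below are assumptions), the rendered /etc/origin/master/master.env would look roughly like:

    AWS_ACCESS_KEY_ID=AKIAEXAMPLE
    AWS_SECRET_ACCESS_KEY=examplesecret

    # Proxy configuration
    HTTP_PROXY=http://proxy.example.com:3128
    HTTPS_PROXY=http://proxy.example.com:3128
    NO_PROXY=.example.com,172.30.0.0/16,10.128.0.0/14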

+ 230 - 0
roles/openshift_control_plane/templates/master.yaml.v1.j2

@@ -0,0 +1,230 @@
+kind: MasterConfig
+apiVersion: v1
+admissionConfig:
+{% if 'admission_plugin_config' in openshift.master %}
+  pluginConfig:{{ openshift.master.admission_plugin_config | lib_utils_to_padded_yaml(level=2) }}
+{% endif %}
+apiLevels:
+- v1
+{% if not openshift_version_gte_3_9 %}
+assetConfig:
+  logoutURL: "{{ openshift.master.logout_url | default('') }}"
+  masterPublicURL: {{ openshift.master.public_api_url }}
+  publicURL: {{ openshift.master.public_console_url }}/
+{% if 'logging_public_url' in openshift.master %}
+  loggingPublicURL: {{ openshift.master.logging_public_url }}
+{% endif %}
+{% if openshift_hosted_metrics_deploy_url is defined %}
+  metricsPublicURL: {{ openshift_hosted_metrics_deploy_url }}
+{% endif %}
+{% if 'extension_scripts' in openshift.master %}
+  extensionScripts: {{ openshift.master.extension_scripts | lib_utils_to_padded_yaml(1, 2) }}
+{% endif %}
+{% if 'extension_stylesheets' in openshift.master %}
+  extensionStylesheets: {{ openshift.master.extension_stylesheets | lib_utils_to_padded_yaml(1, 2) }}
+{% endif %}
+{% if 'extensions' in openshift.master %}
+  extensions: {{ openshift.master.extensions | lib_utils_to_padded_yaml(1, 2) }}
+{% endif %}
+  servingInfo:
+    bindAddress: {{ openshift.master.bind_addr }}:{{ openshift.master.console_port }}
+    bindNetwork: tcp4
+    certFile: master.server.crt
+    clientCA: ""
+    keyFile: master.server.key
+    maxRequestsInFlight: 0
+    requestTimeoutSeconds: 0
+{% if openshift_master_min_tls_version is defined %}
+    minTLSVersion: {{ openshift_master_min_tls_version }}
+{% endif %}
+{% if openshift_master_cipher_suites is defined %}
+    cipherSuites:
+{% for cipher_suite in openshift_master_cipher_suites %}
+    - {{ cipher_suite }}
+{% endfor %}
+{% endif %}
+# assetconfig end
+{% endif %}
+{% if openshift.master.audit_config | default(none) is not none %}
+auditConfig:{{ openshift.master.audit_config | lib_utils_to_padded_yaml(level=1) }}
+{% endif %}
+controllerConfig:
+  election:
+    lockName: openshift-master-controllers
+  serviceServingCert:
+    signer:
+      certFile: service-signer.crt
+      keyFile: service-signer.key
+controllers: '*'
+corsAllowedOrigins:
+  # anchor with start (\A) and end (\z) of the string, make the check case insensitive ((?i)) and escape hostname
+{% for origin in ['127.0.0.1', 'localhost', openshift.common.ip, openshift.common.public_ip] | union(openshift.common.all_hostnames) | unique %}
+  - (?i)//{{ origin | regex_escape() }}(:|\z)
+{% endfor %}
+{% for custom_origin in openshift.master.custom_cors_origins | default("") %}
+  - (?i)//{{ custom_origin | regex_escape() }}(:|\z)
+{% endfor %}
+{% if 'disabled_features' in openshift.master %}
+disabledFeatures: {{ openshift.master.disabled_features | to_json }}
+{% endif %}
+{% if openshift.master.embedded_dns | bool %}
+dnsConfig:
+  bindAddress: {{ openshift.master.bind_addr }}:{{ openshift_master_dns_port }}
+  bindNetwork: tcp4
+{% endif %}
+etcdClientInfo:
+  ca: master.etcd-ca.crt
+  certFile: master.etcd-client.crt
+  keyFile: master.etcd-client.key
+  urls:
+{% for etcd_url in openshift.master.etcd_urls %}
+    - {{ etcd_url }}
+{% endfor %}
+etcdStorageConfig:
+  kubernetesStoragePrefix: kubernetes.io
+  kubernetesStorageVersion: v1
+  openShiftStoragePrefix: openshift.io
+  openShiftStorageVersion: v1
+imageConfig:
+  format: {{ l_osm_registry_url }}
+  latest: {{ openshift_master_image_config_latest }}
+imagePolicyConfig:{{ openshift.master.image_policy_config | default({"internalRegistryHostname":"docker-registry.default.svc:5000"}) | lib_utils_to_padded_yaml(level=1) }}
+kubeletClientInfo:
+{# TODO: allow user specified kubelet port #}
+  ca: ca-bundle.crt
+  certFile: master.kubelet-client.crt
+  keyFile: master.kubelet-client.key
+  port: 10250
+{% if openshift.master.embedded_kube | bool %}
+kubernetesMasterConfig:
+  apiServerArguments: {{ openshift.master.api_server_args | default(None) | lib_utils_to_padded_yaml( level=2 ) }}
+    storage-backend:
+    - etcd3
+    storage-media-type:
+    - application/vnd.kubernetes.protobuf
+  controllerArguments: {{ openshift.master.controller_args | default(None) | lib_utils_to_padded_yaml( level=2 ) }}
+  masterCount: {{ openshift.master.master_count }}
+  masterIP: {{ openshift.common.ip }}
+  podEvictionTimeout: {{ openshift.master.pod_eviction_timeout | default("") }}
+  proxyClientInfo:
+    certFile: master.proxy-client.crt
+    keyFile: master.proxy-client.key
+  schedulerArguments: {{ openshift_master_scheduler_args | default(None) | lib_utils_to_padded_yaml( level=3 ) }}
+  schedulerConfigFile: {{ openshift_master_scheduler_conf }}
+  servicesNodePortRange: "{{ openshift_node_port_range | default("") }}"
+  servicesSubnet: {{ openshift.common.portal_net }}
+  staticNodeNames: {{ openshift_node_ips | default([], true) }}
+{% endif %}
+masterClients:
+{# TODO: allow user to set externalKubernetesKubeConfig #}
+  externalKubernetesClientConnectionOverrides:
+    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
+    contentType: application/vnd.kubernetes.protobuf
+    burst: {{ openshift_master_external_ratelimit_burst | default(400) }}
+    qps: {{ openshift_master_external_ratelimit_qps | default(200) }}
+  externalKubernetesKubeConfig: ""
+  openshiftLoopbackClientConnectionOverrides:
+    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
+    contentType: application/vnd.kubernetes.protobuf
+    burst: {{ openshift_master_loopback_ratelimit_burst | default(600) }}
+    qps: {{ openshift_master_loopback_ratelimit_qps | default(300) }}
+  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
+masterPublicURL: {{ openshift.master.public_api_url }}
+networkConfig:
+  clusterNetworkCIDR: {{ openshift.master.sdn_cluster_network_cidr }}
+  hostSubnetLength: {{ openshift.master.sdn_host_subnet_length }}
+{% if openshift_version_gte_3_7 | bool %}
+  clusterNetworks:
+  - cidr: {{ openshift.master.sdn_cluster_network_cidr }}
+    hostSubnetLength: {{ openshift.master.sdn_host_subnet_length }}
+{% endif %}
+{% if r_openshift_master_use_openshift_sdn or r_openshift_master_use_nuage or r_openshift_master_use_contiv or r_openshift_master_use_kuryr or r_openshift_master_sdn_network_plugin_name == 'cni' %}
+  networkPluginName: {{ r_openshift_master_sdn_network_plugin_name_default }}
+{% endif %}
+# serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
+  serviceNetworkCIDR: {{ openshift.common.portal_net }}
+  externalIPNetworkCIDRs: {{ openshift_master_external_ip_network_cidrs | default(["0.0.0.0/0"]) | lib_utils_to_padded_yaml(1,2) }}
+{% if openshift_master_ingress_ip_network_cidr is defined %}
+  ingressIPNetworkCIDR: {{ openshift_master_ingress_ip_network_cidr }}
+{% endif %}
+oauthConfig:
+{% if 'oauth_always_show_provider_selection' in openshift.master %}
+  alwaysShowProviderSelection: {{ openshift.master.oauth_always_show_provider_selection }}
+{% endif %}
+{% if l_openshift_master_oauth_templates %}
+  templates:{{ l_openshift_master_oauth_templates | lib_utils_to_padded_yaml(level=2) }}
+{% endif %}
+  assetPublicURL: {{ openshift.master.public_console_url }}/
+  grantConfig:
+    method: {{ openshift.master.oauth_grant_method }}
+  identityProviders:
+{% for line in translated_identity_providers.splitlines() %}
+  {{ line }}
+{% endfor %}
+  masterCA: ca-bundle.crt
+  masterPublicURL: {{ openshift.master.public_api_url }}
+  masterURL: {{ openshift.master.api_url }}
+  sessionConfig:
+    sessionMaxAgeSeconds: {{ openshift.master.session_max_seconds }}
+    sessionName: {{ openshift.master.session_name }}
+{% if openshift.master.session_auth_secrets is defined and openshift.master.session_encryption_secrets is defined %}
+    sessionSecretsFile: {{ openshift.master.session_secrets_file }}
+{% endif %}
+  tokenConfig:
+    accessTokenMaxAgeSeconds: {{ openshift.master.access_token_max_seconds }}
+    authorizeTokenMaxAgeSeconds: {{ openshift.master.auth_token_max_seconds }}
+pauseControllers: false
+policyConfig:
+  bootstrapPolicyFile: {{ openshift_master_policy }}
+  openshiftInfrastructureNamespace: openshift-infra
+  openshiftSharedResourcesNamespace: openshift
+projectConfig:
+  defaultNodeSelector: "{{ osm_default_node_selector }}"
+  projectRequestMessage: "{{ osm_project_request_message }}"
+  projectRequestTemplate: "{{ osm_project_request_template }}"
+  securityAllocator:
+    mcsAllocatorRange: "{{ osm_mcs_allocator_range }}"
+    mcsLabelsPerProject: {{ osm_mcs_labels_per_project }}
+    uidAllocatorRange: "{{ osm_uid_allocator_range }}"
+routingConfig:
+  subdomain: "{{ openshift_master_default_subdomain }}"
+serviceAccountConfig:
+  limitSecretReferences: {{ openshift_master_saconfig_limitsecretreferences | default(false) }}
+  managedNames:
+  - default
+  - builder
+  - deployer
+  masterCA: ca-bundle.crt
+  privateKeyFile: serviceaccounts.private.key
+  publicKeyFiles:
+  - serviceaccounts.public.key
+servingInfo:
+  bindAddress: {{ openshift.master.bind_addr }}:{{ openshift.master.api_port }}
+  bindNetwork: tcp4
+  certFile: master.server.crt
+  clientCA: ca.crt
+  keyFile: master.server.key
+  maxRequestsInFlight: {{ openshift.master.max_requests_inflight }}
+  requestTimeoutSeconds: 3600
+{% if openshift.master.named_certificates | default([]) | length > 0 %}
+  namedCertificates:
+{% for named_certificate in openshift.master.named_certificates %}
+  - certFile: {{ named_certificate['certfile'] }}
+    keyFile: {{ named_certificate['keyfile'] }}
+    names:
+{% for name in named_certificate['names'] %}
+    - "{{ name }}"
+{% endfor %}
+{% endfor %}
+{% endif %}
+{% if openshift_master_min_tls_version is defined %}
+  minTLSVersion: {{ openshift_master_min_tls_version }}
+{% endif %}
+{% if openshift_master_cipher_suites is defined %}
+  cipherSuites:
+{% for cipher_suite in openshift_master_cipher_suites %}
+  - {{ cipher_suite }}
+{% endfor %}
+{% endif %}
+volumeConfig:
+  dynamicProvisioningEnabled: {{ openshift.master.dynamic_provisioning_enabled }}

+ 7 - 0
roles/openshift_control_plane/templates/sessionSecretsFile.yaml.v1.j2

@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: SessionSecrets
+secrets:
+{% for secret in openshift.master.session_auth_secrets %}
+- authentication: "{{ openshift.master.session_auth_secrets[loop.index0] }}"
+  encryption: "{{ openshift.master.session_encryption_secrets[loop.index0] }}"
+{% endfor %}
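Given parallel session_auth_secrets and session_encryption_secrets lists (the secret values below are placeholders), the rendered file pairs the entries by index:

    apiVersion: v1
    kind: SessionSecrets
    secrets:
    - authentication: "AUTH_SECRET_0"
      encryption: "ENC_SECRET_0"
    - authentication: "AUTH_SECRET_1"
      encryption: "ENC_SECRET_1"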

+ 1 - 1
roles/openshift_gcp/defaults/main.yml

@@ -54,7 +54,7 @@ openshift_gcp_node_group_config:
     boot_disk_size: 150
     scale: 0
 
-openshift_gcp_startup_script_file: ''
+openshift_gcp_startup_script_file: "{{ role_path }}/files/bootstrap-script.sh"
 openshift_gcp_user_data_file: ''
 
 openshift_gcp_multizone: False

+ 4 - 3
roles/openshift_hosted/defaults/main.yml

@@ -47,7 +47,7 @@ openshift_hosted_router_edits:
   value: 21600
   action: put
 
-openshift_hosted_router_registryurl: "{{ oreg_url_master | default(oreg_url) | default(openshift_hosted_images_dict[openshift_deployment_type]) }}"
+openshift_hosted_router_registryurl: "{{ oreg_url_master | default(oreg_url) | default(openshift_hosted_images_dict[openshift_deployment_type]) | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
 openshift_hosted_routers:
 - name: router
   replicas: "{{ replicas | default(1) }}"
@@ -73,7 +73,7 @@ r_openshift_hosted_router_os_firewall_allow: []
 ############
 
 openshift_hosted_registry_selector: "{{ openshift_registry_selector | default(openshift_hosted_infra_selector) }}"
-openshift_hosted_registry_registryurl: "{{ oreg_url_master | default(oreg_url) | default(openshift_hosted_images_dict[openshift_deployment_type]) }}"
+openshift_hosted_registry_registryurl: "{{ oreg_url_master | default(oreg_url) | default(openshift_hosted_images_dict[openshift_deployment_type]) | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
 openshift_hosted_registry_routecertificates: {}
 openshift_hosted_registry_routetermination: "passthrough"
 
@@ -108,7 +108,8 @@ openshift_hosted_registry_edits:
 openshift_hosted_registry_force:
 - False
 
-openshift_push_via_dns: False
+# TODO: this flag should be removed when master bootstrapping is enforced
+openshift_push_via_dns: True
 
 # NOTE: setting openshift_docker_hosted_registry_insecure may affect other roles
 openshift_hosted_docker_registry_insecure_default: "{{ openshift_docker_hosted_registry_insecure | default(False) }}"

+ 1 - 1
roles/openshift_hosted_templates/defaults/main.yml

@@ -8,7 +8,7 @@ openshift_hosted_images_dict:
   origin: 'openshift/origin-${component}:${version}'
   openshift-enterprise: 'openshift3/ose-${component}:${version}'
 
-openshift_hosted_templates_registryurl: "{{ oreg_url_master | default(oreg_url) | default(openshift_hosted_images_dict[openshift_deployment_type]) }}"
+openshift_hosted_templates_registryurl: "{{ oreg_url_master | default(oreg_url) | default(openshift_hosted_images_dict[openshift_deployment_type]) | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
 registry_host: "{{ openshift_hosted_templates_registryurl.split('/')[0] if '.' in openshift_hosted_templates_registryurl.split('/')[0] else '' }}"
 
 openshift_hosted_templates_import_command: 'create'
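The added regex_replace pins the image tag at template time while leaving ${component} for later substitution. A minimal sketch of the effect, assuming openshift_image_tag is v3.10.0 and the origin image format:

    {{ 'openshift/origin-${component}:${version}'
       | regex_replace('${version}' | regex_escape, 'v3.10.0') }}
    # -> openshift/origin-${component}:v3.10.0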

+ 5 - 11
roles/openshift_logging/handlers/main.yml

@@ -1,18 +1,12 @@
 ---
-- name: restart master api
-  systemd: name={{ openshift_service_type }}-master-api state=restarted
+- name: restart master
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
   when: (not (master_api_service_status_changed | default(false) | bool))
   notify: Verify API Server
 
-# We retry the controllers because the API may not be 100% initialized yet.
-- name: restart master controllers
-  command: "systemctl restart {{ openshift_service_type }}-master-controllers"
-  retries: 3
-  delay: 5
-  register: result
-  until: result.rc == 0
-  when: (not (master_controllers_service_status_changed | default(false) | bool))
-
 - name: Verify API Server
   # Using curl here since the uri module requires python-httplib2 and
   # wait_for port doesn't provide health information.

+ 0 - 2
roles/openshift_master/tasks/upgrade/rpm_upgrade.yml

@@ -17,7 +17,6 @@
       - "{{ openshift_service_type }}{{ openshift_pkg_version | default('') }}"
       - "{{ openshift_service_type }}-master{{ openshift_pkg_version | default('') }}"
       - "{{ openshift_service_type }}-node{{ openshift_pkg_version | default('') }}"
-      - "{{ openshift_service_type }}-sdn-ovs{{ openshift_pkg_version | default('') }}"
       - "{{ openshift_service_type }}-clients{{ openshift_pkg_version | default('') }}"
   register: result
   until: result is succeeded
@@ -32,7 +31,6 @@
       - "{{ openshift_service_type }}{{ openshift_pkg_version }}"
       - "{{ openshift_service_type }}-master{{ openshift_pkg_version }}"
       - "{{ openshift_service_type }}-node{{ openshift_pkg_version }}"
-      - "{{ openshift_service_type }}-sdn-ovs{{ openshift_pkg_version }}"
       - "{{ openshift_service_type }}-clients{{ openshift_pkg_version }}"
   register: result
   until: result is succeeded

+ 5 - 11
roles/openshift_metrics/handlers/main.yml

@@ -1,18 +1,12 @@
 ---
-- name: restart master api
-  systemd: name={{ openshift_service_type }}-master-api state=restarted
+- name: restart master
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
   when: (not (master_api_service_status_changed | default(false) | bool))
   notify: Verify API Server
 
-# We retry the controllers because the API may not be 100% initialized yet.
-- name: restart master controllers
-  command: "systemctl restart {{ openshift_service_type }}-master-controllers"
-  retries: 3
-  delay: 5
-  register: result
-  until: result.rc == 0
-  when: (not (master_controllers_service_status_changed | default(false) | bool))
-
 - name: Verify API Server
   # Using curl here since the uri module requires python-httplib2 and
   # wait_for port doesn't provide health information.

+ 3 - 26
roles/openshift_node/defaults/main.yml

@@ -14,13 +14,7 @@ openshift_oreg_url_default_dict:
   origin: "openshift/origin-${component}:${version}"
   openshift-enterprise: "openshift3/ose-${component}:${version}"
 openshift_oreg_url_default: "{{ openshift_oreg_url_default_dict[openshift_deployment_type] }}"
-oreg_url_node: "{{ oreg_url | default(openshift_oreg_url_default) }}"
-
-osn_ovs_image_default_dict:
-  origin: "openshift/openvswitch"
-  openshift-enterprise: "openshift3/openvswitch"
-osn_ovs_image_default: "{{ osn_ovs_image_default_dict[openshift_deployment_type] }}"
-osn_ovs_image: "{{ osn_ovs_image_default }}"
+oreg_url_node: "{{ oreg_url | default(openshift_oreg_url_default) | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
 
 openshift_dns_ip: "{{ ansible_default_ipv4['address'] }}"
 
@@ -113,26 +107,21 @@ system_images_registry_dict:
   origin: "docker.io"
 
 system_images_registry: "{{ system_images_registry_dict[openshift_deployment_type | default('origin')] }}"
-openshift_use_external_openvswitch: False
-l_is_openvswitch_system_container: "{{ (openshift_use_openvswitch_system_container | default(openshift_use_system_containers | default(false)) | bool) }}"
 
 openshift_image_tag: ''
 
 default_r_openshift_node_image_prep_packages:
-- "{{ openshift_service_type }}-master"
 - "{{ openshift_service_type }}-node"
 - "{{ openshift_service_type }}-docker-excluder"
-- "{{ openshift_service_type }}-sdn-ovs"
 - ansible
-- openvswitch
+- bash-completion
 - docker
-- etcd
 - haproxy
 - dnsmasq
 - ntp
 - logrotate
 - httpd-tools
-- bind
+- bind-utils
 - firewalld
 - libselinux-python
 - conntrack-tools
@@ -142,27 +131,15 @@ default_r_openshift_node_image_prep_packages:
 - python-dbus
 - PyYAML
 - yum-utils
-# gluster
 - glusterfs-fuse
 - device-mapper-multipath
-# nfs
 - nfs-utils
-- flannel
-- bash-completion
-# cockpit
 - cockpit-ws
 - cockpit-system
 - cockpit-bridge
 - cockpit-docker
-# iscsi
 - iscsi-initiator-utils
-# ceph
 - ceph-common
-# systemcontainer
-# - runc
-# - container-selinux
-# - atomic
-#
 r_openshift_node_image_prep_packages: "{{ default_r_openshift_node_image_prep_packages | union(openshift_node_image_prep_packages | default([])) }}"
 
 openshift_node_bootstrap: False

+ 18 - 0
roles/openshift_node/files/openshift-node

@@ -0,0 +1,18 @@
+#!/bin/bash
+
+# This launches the Kubelet by converting the node configuration into kube flags.
+
+set -euo pipefail
+
+if ! [[ -f /etc/origin/node/client-ca.crt ]]; then
+  if [[ -f /etc/origin/node/bootstrap.kubeconfig ]]; then
+    oc config --config=/etc/origin/node/bootstrap.kubeconfig view --raw --minify -o go-template='{{ index .clusters 0 "cluster" "certificate-authority-data" }}' | base64 -d - > /etc/origin/node/client-ca.crt
+  fi
+fi
+config=/etc/origin/node/bootstrap-node-config.yaml
+# TODO: remove when dynamic kubelet config is delivered
+if [[ -f /etc/origin/node/node-config.yaml ]]; then
+  config=/etc/origin/node/node-config.yaml
+fi
+flags=$( /usr/bin/openshift start node --write-flags "--config=${config}" --loglevel=${DEBUG_LOGLEVEL:-2} )
+exec /usr/bin/hyperkube kubelet --v=${DEBUG_LOGLEVEL:-2} ${flags}

+ 0 - 22
roles/openshift_node/handlers/main.yml

@@ -14,28 +14,6 @@
   when:
   - (not skip_node_svc_handlers | default(False) | bool)
 
-- name: restart openvswitch
-  systemd:
-    name: openvswitch
-    state: restarted
-  when:
-  - (not skip_node_svc_handlers | default(False) | bool)
-  - not (ovs_service_status_changed | default(false) | bool)
-  - openshift_node_use_openshift_sdn | bool
-  - not openshift_node_bootstrap
-  register: l_openshift_node_stop_openvswitch_result
-  until: not (l_openshift_node_stop_openvswitch_result is failed)
-  retries: 3
-  delay: 30
-  notify:
-  - restart openvswitch pause
-
-- name: restart openvswitch pause
-  pause: seconds=15
-  when:
-  - (not skip_node_svc_handlers | default(False) | bool)
-  - openshift_is_containerized | bool
-
 - name: restart node
   systemd:
     name: "{{ openshift_service_type }}-node"

+ 8 - 9
roles/openshift_node/tasks/bootstrap.yml

@@ -26,26 +26,20 @@
   - line: "KUBECONFIG={{ openshift_node_config_dir }}/bootstrap.kubeconfig"
     regexp: "^KUBECONFIG=.*"
   # remove the config file.  This comes from openshift_facts
-  - line: "CONFIG_FILE={{ openshift_node_config_dir }}/node-config.yaml"
+  - line: "CONFIG_FILE={{ openshift_node_config_dir }}/bootstrap-node-config.yaml"
     regexp: "^CONFIG_FILE=.*"
 
 - name: include aws sysconfig credentials
   import_tasks: aws.yml
   when: not (openshift_node_use_instance_profiles | default(False))
 
-#- name: update the ExecStart to have bootstrap
-#  lineinfile:
-#    dest: "/usr/lib/systemd/system/{{ openshift_service_type }}-node.service"
-#    line: "{% raw %}ExecStart=/usr/bin/openshift start node --bootstrap --kubeconfig=${KUBECONFIG} $OPTIONS{% endraw %}"
-#    regexp: "^ExecStart=.*"
 
-- name: "disable {{ openshift_service_type }}-node"  # and {{ openshift_service_type }}-master services"
+- name: "disable {{ openshift_service_type }}-node service"
   systemd:
     name: "{{ item }}"
     enabled: no
   with_items:
   - "{{ openshift_service_type }}-node.service"
-#  - "{{ openshift_service_type }}-master.service"
 
 - name: Check for RPM generated config marker file .config_managed
   stat:
@@ -74,7 +68,12 @@
     state: link
     force: yes
   with_items:
-  - "{{ openshift_node_config_dir }}/node-client-ca.crt"
+  - "{{ openshift_node_config_dir }}/client-ca.crt"
+
+- name: Remove default node-config.yaml to allow bootstrapping config
+  file:
+    path: "/etc/origin/node/node-config.yaml"
+    state: absent
 
 - when: rpmgenerated_config.stat.exists
   block:

+ 6 - 17
roles/openshift_node/tasks/config.yml

@@ -6,23 +6,6 @@
   include_tasks: container_images.yml
   when: openshift_is_containerized | bool
 
-- name: Start and enable openvswitch service
-  systemd:
-    name: openvswitch.service
-    enabled: yes
-    state: started
-    daemon_reload: yes
-  when:
-    - openshift_is_containerized | bool
-    - openshift_node_use_openshift_sdn | default(true) | bool
-  register: ovs_start_result
-  until: not (ovs_start_result is failed)
-  retries: 3
-  delay: 30
-
-- set_fact:
-    ovs_service_status_changed: "{{ ovs_start_result is changed }}"
-
 - file:
     dest: "{{ l2_openshift_node_kubelet_args['config'] }}"
     state: directory
@@ -50,6 +33,12 @@
   notify:
     - restart node
 
+- name: Ensure the node static pod directory exists
+  file:
+    path: "{{ openshift.common.config_base }}/node/pods"
+    state: directory
+    mode: 0755
+
 - name: include aws provider credentials
   import_tasks: aws.yml
   when: not (openshift_node_use_instance_profiles | default(False))

+ 3 - 1
roles/openshift_node/tasks/config/configure-node-settings.yml

@@ -7,7 +7,9 @@
     create: true
   with_items:
   - regex: '^OPTIONS='
-    line: "OPTIONS=--loglevel={{ openshift_node_debug_level }} {{ openshift_node_start_options | default('') }}"
+    line: "OPTIONS={{ openshift_node_start_options | default('') }}"
+  - regex: '^DEBUG_LOGLEVEL='
+    line: "DEBUG_LOGLEVEL={{ openshift_node_debug_level }}"
   - regex: '^CONFIG_FILE='
     line: "CONFIG_FILE={{ openshift.common.config_base }}/node/node-config.yaml"
   - regex: '^IMAGE_VERSION='

+ 0 - 8
roles/openshift_node/tasks/config/install-ovs-docker-service-file.yml

@@ -1,8 +0,0 @@
----
-- name: Install OpenvSwitch docker service file
-  template:
-    dest: "/etc/systemd/system/openvswitch.service"
-    src: openvswitch.docker.service
-  notify:
-  - reload systemd units
-  - restart openvswitch

+ 0 - 8
roles/openshift_node/tasks/config/install-ovs-service-env-file.yml

@@ -1,8 +0,0 @@
----
-- name: Create the openvswitch service env file
-  template:
-    src: openvswitch.sysconfig.j2
-    dest: /etc/sysconfig/openvswitch
-  notify:
-  - reload systemd units
-  - restart openvswitch

+ 0 - 17
roles/openshift_node/tasks/container_images.yml

@@ -3,20 +3,3 @@
   include_tasks: node_system_container.yml
   when:
   - l_is_node_system_container | bool
-
-- name: Install OpenvSwitch system containers
-  include_tasks: openvswitch_system_container.yml
-  when:
-  - openshift_node_use_openshift_sdn | bool
-  - l_is_openvswitch_system_container | bool
-  - not openshift_use_external_openvswitch | bool
-
-- name: Pre-pull openvswitch image
-  command: >
-    docker pull {{ osn_ovs_image }}:{{ openshift_image_tag }}
-  register: pull_result
-  changed_when: "'Downloaded newer image' in pull_result.stdout"
-  when:
-  - openshift_node_use_openshift_sdn | bool
-  - not l_is_openvswitch_system_container | bool
-  - not openshift_use_external_openvswitch | bool

+ 2 - 3
roles/openshift_node/tasks/install.yml

@@ -1,5 +1,5 @@
 ---
-- name: Install Node package, sdn-ovs, conntrack packages
+- name: Install node, clients, and conntrack packages
   package:
     name: "{{ item.name }}"
     state: present
@@ -7,8 +7,7 @@
   until: result is succeeded
   with_items:
   - name: "{{ openshift_service_type }}-node{{ (openshift_pkg_version | default('')) | lib_utils_oo_image_tag_to_rpm_version(include_dash=True) }}"
-  - name: "{{ openshift_service_type }}-sdn-ovs{{ (openshift_pkg_version | default('')) | lib_utils_oo_image_tag_to_rpm_version(include_dash=True) }}"
-    install: "{{ openshift_node_use_openshift_sdn | bool }}"
+  - name: "{{ openshift_service_type }}-clients{{ (openshift_pkg_version | default('')) | lib_utils_oo_image_tag_to_rpm_version(include_dash=True) }}"
   - name: "conntrack-tools"
   when:
   - not openshift_is_containerized | bool

+ 0 - 3
roles/openshift_node/tasks/main.yml

@@ -77,6 +77,3 @@
   when: "'iscsi' in osn_storage_plugin_deps"
 
 ##### END Storage #####
-
-- include_tasks: config/workaround-bz1331590-ovs-oom-fix.yml
-  when: openshift_node_use_openshift_sdn | default(true) | bool

+ 7 - 0
roles/openshift_node/tasks/node_system_container.yml

@@ -11,6 +11,13 @@
   register: pull_result
   changed_when: "'Pulling layer' in pull_result.stdout"
 
+# TODO: remove when system container is fixed to not include it
+- name: Ensure old system path is set
+  file:
+    state: directory
+    path: "/etc/origin/openvswitch"
+    mode: '0750'
+
 - name: Install or Update node system container
   oc_atomic_container:
     name: "{{ openshift_service_type }}-node"

+ 0 - 22
roles/openshift_node/tasks/openvswitch_system_container.yml

@@ -1,22 +0,0 @@
----
-- set_fact:
-    l_service_name: "cri-o"
-  when: openshift_use_crio | bool
-
-- set_fact:
-    l_service_name: "{{ openshift_docker_service_name }}"
-  when: not openshift_use_crio | bool
-
-- name: Pre-pull OpenVSwitch system container image
-  command: >
-    atomic pull --storage=ostree {{ 'docker:' if system_images_registry == 'docker' else system_images_registry + '/' }}{{ osn_ovs_image }}:{{ openshift_image_tag }}
-  register: pull_result
-  changed_when: "'Pulling layer' in pull_result.stdout"
-
-- name: Install or Update OpenVSwitch system container
-  oc_atomic_container:
-    name: openvswitch
-    image: "{{ 'docker:' if system_images_registry == 'docker' else system_images_registry + '/' }}{{ osn_ovs_image }}:{{ openshift_image_tag }}"
-    state: latest
-    values:
-      - "DOCKER_SERVICE={{ l_service_name }}"

+ 5 - 11
roles/openshift_node/tasks/systemd_units.yml

@@ -1,4 +1,9 @@
 ---
+- name: Copy node script to the node
+  copy:
+    src: openshift-node
+    dest: /usr/local/bin/openshift-node
+    mode: 0500
 - name: Install Node service file
   template:
     dest: "/etc/systemd/system/{{ openshift_service_type }}-node.service"
@@ -13,16 +18,5 @@
   - name: include node deps docker service file
     include_tasks: config/install-node-deps-docker-service-file.yml
 
-  - name: include ovs service environment file
-    include_tasks: config/install-ovs-service-env-file.yml
-    when:
-    - not openshift_use_external_openvswitch | bool
-
-  - include_tasks: config/install-ovs-docker-service-file.yml
-    when:
-    - openshift_node_use_openshift_sdn | bool
-    - not l_is_openvswitch_system_container | bool
-    - not openshift_use_external_openvswitch | bool
-
 - include_tasks: config/configure-node-settings.yml
 - include_tasks: config/configure-proxy-settings.yml

+ 0 - 15
roles/openshift_node/tasks/upgrade/containerized_upgrade_pull.yml

@@ -1,15 +0,0 @@
----
-- name: Pre-pull node image
-  command: >
-    docker pull {{ osn_image }}:{{ openshift_image_tag }}
-  register: pull_result
-  changed_when: "'Downloaded newer image' in pull_result.stdout"
-
-- name: Pre-pull openvswitch image
-  command: >
-    docker pull {{ osn_ovs_image }}:{{ openshift_image_tag }}
-  register: pull_result
-  changed_when: "'Downloaded newer image' in pull_result.stdout"
-  when: openshift_node_use_openshift_sdn | bool
-
-- include_tasks: ../container_images.yml

+ 0 - 4
roles/openshift_node/tasks/upgrade/restart.yml

@@ -34,10 +34,6 @@
 - name: Start services
   service: name={{ item }} state=started
   with_items:
-    - etcd_container
-    - openvswitch
-    - "{{ openshift_service_type }}-master-api"
-    - "{{ openshift_service_type }}-master-controllers"
     - "{{ openshift_service_type }}-node"
   failed_when: false
 

+ 1 - 8
roles/openshift_node/tasks/upgrade/rpm_upgrade.yml

@@ -13,12 +13,5 @@
   vars:
     openshift_node_upgrade_rpm_list:
       - "{{ openshift_service_type }}-node{{ openshift_pkg_version | default('') }}"
+      - "{{ openshift_service_type }}-clients{{ openshift_pkg_version | default('') }}"
       - "PyYAML"
-      - "dnsmasq"
-
-# Pre-pull the rpms for openvswitch, but don't install
-# openvswitch requires the latest version to be installed.
-- name: download openvswitch upgrade rpm
-  command: "{{ ansible_pkg_mgr }} update -y --downloadonly openvswitch"
-  register: result
-  until: result is succeeded

+ 1 - 1
roles/openshift_node/tasks/upgrade/rpm_upgrade_install.yml

@@ -15,5 +15,5 @@
   vars:
     openshift_node_upgrade_rpm_list:
       - "{{ openshift_service_type }}-node{{ openshift_pkg_version | default('') }}"
+      - "{{ openshift_service_type }}-clients{{ openshift_pkg_version | default('') }}"
       - "PyYAML"
-      - "openvswitch"

+ 8 - 10
roles/openshift_node/tasks/upgrade/stop_services.yml

@@ -4,22 +4,20 @@
     name: "{{ item }}"
     state: stopped
   with_items:
-  - "{{ openshift_service_type }}-node"
-  - openvswitch
-  failed_when: false
-
-- name: Ensure containerized services stopped before Docker restart
-  service:
-    name: "{{ item }}"
-    state: stopped
-  with_items:
   - etcd_container
   - openvswitch
   - "{{ openshift_service_type }}-master-api"
   - "{{ openshift_service_type }}-master-controllers"
   - "{{ openshift_service_type }}-node"
   failed_when: false
-  when: openshift_is_containerized | bool
+
+- name: Ensure static containerized services stopped before Docker restart
+  command: /usr/local/bin/master-restart "{{ item }}"
+  with_items:
+  - api
+  - controllers
+  - etcd
+  failed_when: false
 
 - service:
     name: docker

+ 1 - 9
roles/openshift_node/templates/node.service.j2

@@ -3,9 +3,6 @@ Description=OpenShift Node
 After={{ openshift_docker_service_name }}.service
 After=chronyd.service
 After=ntpd.service
-Wants=openvswitch.service
-After=ovsdb-server.service
-After=ovs-vswitchd.service
 Wants={{ openshift_docker_service_name }}.service
 Documentation=https://github.com/openshift/origin
 Wants=dnsmasq.service
@@ -15,12 +12,7 @@ After=dnsmasq.service
 [Service]
 Type=notify
 EnvironmentFile=/etc/sysconfig/{{ openshift_service_type }}-node
-Environment=GOTRACEBACK=crash
-ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/
-ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:/in-addr.arpa/127.0.0.1,/{{ openshift.common.dns_domain }}/127.0.0.1
-ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf
-ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:
-ExecStart=/usr/bin/openshift start node {% if openshift_node_bootstrap %} --kubeconfig=${KUBECONFIG} --bootstrap-config-name=${BOOTSTRAP_CONFIG_NAME}{% endif %} --config=${CONFIG_FILE} $OPTIONS
+ExecStart=/usr/local/bin/openshift-node
 LimitNOFILE=65536
 LimitCORE=infinity
 WorkingDirectory=/var/lib/origin/

+ 4 - 10
roles/openshift_node/templates/openshift.docker.node.service

@@ -3,15 +3,8 @@ After={{ openshift_service_type }}-master.service
 After={{ openshift_docker_service_name }}.service
 After=chronyd.service
 After=ntpd.service
-After=openvswitch.service
 PartOf={{ openshift_docker_service_name }}.service
 Requires={{ openshift_docker_service_name }}.service
-{% if openshift_node_use_openshift_sdn %}
-Wants=openvswitch.service
-PartOf=openvswitch.service
-After=ovsdb-server.service
-After=ovs-vswitchd.service
-{% endif %}
 Wants={{ openshift_service_type }}-master.service
 Requires={{ openshift_service_type }}-node-dep.service
 After={{ openshift_service_type }}-node-dep.service
@@ -26,7 +19,8 @@ ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/
 ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:/in-addr.arpa/127.0.0.1,/{{ openshift.common.dns_domain }}/127.0.0.1
 ExecStart=/usr/bin/docker run --name {{ openshift_service_type }}-node \
   --rm --privileged --net=host --pid=host --env-file=/etc/sysconfig/{{ openshift_service_type }}-node \
-  -v /:/rootfs:ro,rslave -e CONFIG_FILE=${CONFIG_FILE} -e OPTIONS=${OPTIONS} \
+  --entrypoint /usr/local/bin/openshift-node \
+  -v /:/rootfs:ro,rslave -e CONFIG_FILE=${CONFIG_FILE} -e OPTIONS=${OPTIONS} -e DEBUG_LOGLEVEL=${DEBUG_LOGLEVEL}\
   -e HOST=/rootfs -e HOST_ETC=/host-etc \
   -v {{ openshift_node_data_dir }}:{{ openshift_node_data_dir }}:rslave \
   -v {{ openshift.common.config_base }}/node:{{ openshift.common.config_base }}/node \
@@ -34,8 +28,8 @@ ExecStart=/usr/bin/docker run --name {{ openshift_service_type }}-node \
   -v /etc/localtime:/etc/localtime:ro -v /etc/machine-id:/etc/machine-id:ro \
   -v /run:/run -v /sys:/sys:rw -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
   -v /usr/bin/docker:/usr/bin/docker:ro -v /var/lib/docker:/var/lib/docker \
-  -v /lib/modules:/lib/modules -v /etc/origin/openvswitch:/etc/openvswitch \
-  -v /etc/origin/sdn:/etc/openshift-sdn -v /var/lib/cni:/var/lib/cni \
+  -v /lib/modules:/lib/modules \
+  -v /etc/cni:/etc/cni:ro -v /opt/cni:/opt/cni:ro \
   -v /etc/systemd/system:/host-etc/systemd/system -v /var/log:/var/log \
   {% if openshift_use_nuage | default(false) -%} $NUAGE_ADDTL_BIND_MOUNTS {% endif -%} \
   -v /dev:/dev $DOCKER_ADDTL_BIND_MOUNTS -v /etc/pki:/etc/pki:ro \

+ 0 - 3
roles/openshift_node/templates/openvswitch-avoid-oom.conf

@@ -1,3 +0,0 @@
-# Avoid the OOM killer for openvswitch and it's children:
-[Service]
-OOMScoreAdjust=-1000

+ 0 - 17
roles/openshift_node/templates/openvswitch.docker.service

@@ -1,17 +0,0 @@
-[Unit]
-After={{ openshift_docker_service_name }}.service
-Requires={{ openshift_docker_service_name }}.service
-PartOf={{ openshift_docker_service_name }}.service
-
-[Service]
-EnvironmentFile=/etc/sysconfig/openvswitch
-ExecStartPre=-/usr/bin/docker rm -f openvswitch
-ExecStart=/usr/bin/docker run --name openvswitch --rm --privileged --net=host --pid=host -v /lib/modules:/lib/modules -v /run:/run -v /sys:/sys:ro -v /etc/origin/openvswitch:/etc/openvswitch {{ osn_ovs_image }}:${IMAGE_VERSION}
-ExecStartPost=/usr/bin/sleep 5
-ExecStop=/usr/bin/docker stop openvswitch
-SyslogIdentifier=openvswitch
-Restart=always
-RestartSec=5s
-
-[Install]
-WantedBy={{ openshift_docker_service_name }}.service

+ 0 - 1
roles/openshift_node/templates/openvswitch.sysconfig.j2

@@ -1 +0,0 @@
-IMAGE_VERSION={{ openshift_image_tag }}

+ 1 - 1
roles/openshift_node_group/defaults/main.yml

@@ -21,7 +21,7 @@ openshift_oreg_url_default_dict:
   origin: "openshift/origin-${component}:${version}"
   openshift-enterprise: openshift3/ose-${component}:${version}
 openshift_oreg_url_default: "{{ openshift_oreg_url_default_dict[openshift_deployment_type] }}"
-oreg_url_node: "{{ oreg_url | default(openshift_oreg_url_default) }}"
+oreg_url_node: "{{ oreg_url | default(openshift_oreg_url_default) | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
 
 openshift_imageconfig_format: "{{ oreg_url_node }}"
 openshift_node_group_cloud_provider: "{{ openshift_cloudprovider_kind | default('aws') }}"
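
The new oreg_url_node default pins the ${version} placeholder to the active image tag via regex_replace. A hedged sketch of the filter's effect, using illustrative registry and tag values that are not part of this PR:

- name: Show how ${version} is pinned to the image tag (illustrative values)
  vars:
    oreg_url: "registry.example.com/openshift3/ose-${component}:${version}"
    openshift_image_tag: "v3.9.14"
  debug:
    msg: "{{ oreg_url | regex_replace('${version}' | regex_escape, openshift_image_tag | default('${version}')) }}"
  # Expected output: registry.example.com/openshift3/ose-${component}:v3.9.14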

+ 11 - 0
roles/openshift_node_group/tasks/bootstrap.yml

@@ -0,0 +1,11 @@
+---
+- name: create node config template
+  template:
+    src: node-config.yaml.j2
+    dest: "/etc/origin/node/bootstrap-node-config.yaml"
+    mode: 0600
+
+- name: remove existing node config
+  file:
+    dest: "/etc/origin/node/node-config.yaml"
+    state: absent
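
For reference, the same --write-flags invocation that the sync container later uses against node-config.yaml can be pointed at this bootstrap config to derive kubelet flags; a hedged, ad-hoc sketch only, not the wrapper this PR installs:

- name: Render kubelet flags from the bootstrap node config (illustrative)
  command: >
    /usr/bin/openshift start node --write-flags
    --config=/etc/origin/node/bootstrap-node-config.yaml
  register: bootstrap_kubelet_flags
  changed_when: false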

+ 17 - 12
roles/openshift_node_group/templates/node-config.yaml.j2

@@ -1,4 +1,4 @@
-allowDisabledDocker: false
+kind: NodeConfig
 apiVersion: v1
 authConfig:
   authenticationCacheSize: 1000
@@ -19,13 +19,25 @@ imageConfig:
   format: "{{ openshift_imageconfig_format }}"
   latest: false
 iptablesSyncPeriod: 30s
-kind: NodeConfig
 kubeletArguments:
+  pod-manifest-path:
+  - /etc/origin/node/pods
+  bootstrap-kubeconfig:
+  - /etc/origin/node/bootstrap.kubeconfig
+  feature-gates:
+  - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
+  rotate-certificates:
+  - "true"
+  cert-dir:
+  - /etc/origin/node/certificates
   cloud-config:
   - /etc/origin/cloudprovider/{{ openshift_node_group_cloud_provider }}.conf
   cloud-provider:
   - {{ openshift_node_group_cloud_provider }}
-  node-labels: {{ openshift_node_group_labels | to_json }}
+  node-labels: 
+  - "{{ openshift_node_group_labels | join(',') }}"
+  enable-controller-attach-detach:
+  - 'true'
 masterClientConnectionOverrides:
   acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
   burst: 40
@@ -35,19 +47,12 @@ masterKubeConfig: node.kubeconfig
 networkConfig:
   mtu: {{ openshift_node_group_network_mtu }}
   networkPluginName: {{ openshift_node_group_network_plugin }}
-nodeIP: ""
-podManifestConfig: null
+networkPluginName: {{ openshift_node_group_network_plugin }}
 servingInfo:
   bindAddress: 0.0.0.0:10250
   bindNetwork: tcp4
-  certFile: server.crt
-  clientCA: node-client-ca.crt
-  keyFile: server.key
-  namedCertificates: null
+  clientCA: client-ca.crt
 volumeConfig:
   localQuota:
     perFSGroup: null
 volumeDirectory: {{ openshift_node_group_node_data_dir }}/openshift.local.volumes
-enable-controller-attach-detach:
-- 'true'
-networkPluginName: {{ openshift_node_group_network_plugin }}
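
The node-labels argument changes from a JSON map to a single comma-joined key=value string. A hedged sketch of how that fragment renders, assuming openshift_node_group_labels of ['node-role.kubernetes.io/compute=true', 'region=primary'] (illustrative values):

kubeletArguments:
  node-labels:
  - "node-role.kubernetes.io/compute=true,region=primary"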

+ 6 - 0
roles/openshift_sdn/defaults/main.yml

@@ -0,0 +1,6 @@
+---
+openshift_node_image_dict:
+  origin: 'openshift/node'
+  openshift-enterprise: 'openshift3/node'
+oreg_host: "{{ oreg_url.split('/')[0] if (oreg_url is defined and '.' in oreg_url.split('/')[0]) else '' }}"
+osn_image: "{{ oreg_host }}{{ openshift_node_image_dict[openshift_deployment_type | default('origin')] }}:{{ openshift_image_tag | default('latest') }}"
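
A hedged rendering of osn_image under assumed inventory values (origin deployment, no oreg_url override, openshift_image_tag of v3.9.0); the values below are illustrative only:

- name: Show the resolved node image reference (illustrative values)
  vars:
    openshift_deployment_type: origin
    openshift_image_tag: v3.9.0
    openshift_node_image_dict:
      origin: 'openshift/node'
      openshift-enterprise: 'openshift3/node'
    oreg_host: ""
  debug:
    msg: "{{ oreg_host }}{{ openshift_node_image_dict[openshift_deployment_type] }}:{{ openshift_image_tag }}"
  # Expected output: openshift/node:v3.9.0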

+ 9 - 0
roles/openshift_sdn/files/sdn-images.yaml

@@ -0,0 +1,9 @@
+apiVersion: image.openshift.io/v1
+kind: ImageStreamTag
+metadata:
+  name: node:v3.9
+  namespace: openshift-sdn
+tag:
+  from:
+    kind: DockerImage
+    name: openshift/node:v3.9

+ 83 - 0
roles/openshift_sdn/files/sdn-ovs.yaml

@@ -0,0 +1,83 @@
+kind: DaemonSet
+apiVersion: apps/v1
+metadata:
+  name: ovs
+  namespace: openshift-sdn
+  annotations:
+    kubernetes.io/description: |
+      This daemon set launches the openvswitch daemon.
+    image.openshift.io/triggers: |
+      [{"from":{"kind":"ImageStreamTag","name":"node:v3.9"},"fieldPath":"spec.template.spec.containers[?(@.name==\"openvswitch\")].image"}]
+spec:
+  selector:
+    matchLabels:
+      app: ovs
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app: ovs
+        component: network
+        type: infra
+        openshift.io/component: network
+      annotations:
+        scheduler.alpha.kubernetes.io/critical-pod: ''
+    spec:
+      # Requires fairly broad permissions - ability to read all services and network functions as well
+      # as all pods.
+      serviceAccountName: sdn
+      hostNetwork: true
+      containers:
+      - name: openvswitch
+        image: " "
+        command:
+        - /bin/bash
+        - -c
+        - |
+          #!/bin/bash
+          set -euo pipefail
+          function quit {
+              /usr/share/openvswitch/scripts/ovs-ctl stop
+              exit 0
+          }
+          trap quit SIGTERM
+          /usr/share/openvswitch/scripts/ovs-ctl start --system-id=random
+          while true; do sleep 5; done
+        securityContext:
+          runAsUser: 0
+          privileged: true
+        volumeMounts:
+        - mountPath: /lib/modules
+          name: host-modules
+          readOnly: true
+        - mountPath: /run/openvswitch
+          name: host-run-ovs
+        - mountPath: /var/run/openvswitch
+          name: host-run-ovs
+        - mountPath: /sys
+          name: host-sys
+          readOnly: true
+        - mountPath: /etc/openvswitch
+          name: host-config-openvswitch
+        resources:
+          requests:
+            cpu: 100m
+            memory: 200Mi
+          limits:
+            cpu: 200m
+            memory: 300Mi
+
+      volumes:
+      - name: host-modules
+        hostPath:
+          path: /lib/modules
+      - name: host-run-ovs
+        hostPath:
+          path: /run/openvswitch
+      - name: host-sys
+        hostPath:
+          path: /sys
+      - name: host-config-openvswitch
+        hostPath:
+          path: /etc/origin/openvswitch
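
Both daemonsets ship with a blank container image (" ") and rely on the image.openshift.io/triggers annotation plus the node:v3.9 ImageStreamTag to fill it in. A hedged verification sketch; the jsonpath query and the hard failure are assumptions for illustration:

- name: Read the image the ovs daemonset is actually running (illustrative)
  command: >
    {{ openshift_client_binary }} get daemonset ovs -n openshift-sdn
    -o jsonpath={.spec.template.spec.containers[0].image}
  register: ovs_image
  changed_when: false

- name: Fail if the image trigger has not resolved yet
  fail:
    msg: "ovs daemonset image is still unset"
  when: ovs_image.stdout | trim | length == 0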

+ 29 - 0
roles/openshift_sdn/files/sdn-policy.yaml

@@ -0,0 +1,29 @@
+kind: List
+apiVersion: v1
+items:
+- kind: ServiceAccount
+  apiVersion: v1
+  metadata:
+    name: sdn
+    namespace: openshift-sdn
+- apiVersion: authorization.openshift.io/v1
+  kind: ClusterRoleBinding
+  metadata:
+    name: sdn-cluster-reader
+  roleRef:
+    name: cluster-reader
+  subjects:
+  - kind: ServiceAccount
+    name: sdn
+    namespace: openshift-sdn
+- apiVersion: authorization.openshift.io/v1
+  kind: ClusterRoleBinding
+  metadata:
+    name: sdn-reader
+  roleRef:
+    name: system:sdn-reader
+  subjects:
+  - kind: ServiceAccount
+    name: sdn
+    namespace: openshift-sdn
+# TODO: PSP binding
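
The sdn service account is granted cluster-wide read access through cluster-reader and system:sdn-reader. A hedged spot-check of the resulting permissions; the specific can-i probe and output parsing are assumptions for illustration:

- name: Verify the sdn service account can read nodes cluster-wide (illustrative)
  command: >
    {{ openshift_client_binary }} auth can-i get nodes
    --as=system:serviceaccount:openshift-sdn:sdn
  register: sdn_can_read_nodes
  changed_when: false
  failed_when: "'yes' not in sdn_can_read_nodes.stdout"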

+ 251 - 0
roles/openshift_sdn/files/sdn.yaml

@@ -0,0 +1,251 @@
+kind: DaemonSet
+apiVersion: apps/v1
+metadata:
+  name: sdn
+  namespace: openshift-sdn
+  annotations:
+    kubernetes.io/description: |
+      This daemon set launches the OpenShift networking components (kube-proxy, DNS, and openshift-sdn).
+      It expects that OVS is running on the node.
+    image.openshift.io/triggers: |
+      [
+        {"from":{"kind":"ImageStreamTag","name":"node:v3.9"},"fieldPath":"spec.template.spec.containers[?(@.name==\"sync\")].image"},
+        {"from":{"kind":"ImageStreamTag","name":"node:v3.9"},"fieldPath":"spec.template.spec.containers[?(@.name==\"sdn\")].image"}
+      ]
+spec:
+  selector:
+    matchLabels:
+      app: sdn
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app: sdn
+        component: network
+        type: infra
+        openshift.io/component: network
+      annotations:
+        scheduler.alpha.kubernetes.io/critical-pod: ''
+    spec:
+      # Requires fairly broad permissions - ability to read all services and network functions as well
+      # as all pods.
+      serviceAccountName: sdn
+      hostNetwork: true
+      # Must be hostPID because it invokes operations on processes in the host space
+      hostPID: true
+      containers:
+
+      # The sync container is a temporary config loop until Kubelet dynamic config is implemented. It refreshes
+      # the contents of /etc/origin/node/ with the config map ${BOOTSTRAP_CONFIG_NAME} from the openshift-node
+      # namespace. It will restart the Kubelet on the host if it detects the node-config.yaml has changed.
+      #
+      # 1. Dynamic Kubelet config must pull down a full configmap
+      # 2. Nodes must relabel themselves https://github.com/kubernetes/kubernetes/issues/59314
+      #
+      - name: sync
+        image: " "
+        command:
+        - /bin/bash
+        - -c
+        - |
+          #!/bin/bash
+          set -euo pipefail
+
+          # loop until BOOTSTRAP_CONFIG_NAME is set
+          set -o allexport
+          while true; do
+            if [[ -f /etc/sysconfig/origin-node ]]; then
+              source /etc/sysconfig/origin-node
+              if [[ -z "${BOOTSTRAP_CONFIG_NAME-}" ]]; then
+                echo "info: Waiting for BOOTSTRAP_CONFIG_NAME to be set" 2>&1
+                sleep 15
+                continue
+              fi
+              break
+            fi
+          done
+
+          # track the current state of the config
+          if [[ -f /etc/origin/node/node-config.yaml ]]; then
+            md5sum /etc/origin/node/node-config.yaml > /tmp/.old
+          else
+            touch /tmp/.old
+          fi
+
+          # periodically refresh both node-config.yaml and relabel the node
+          while true; do
+            name=${BOOTSTRAP_CONFIG_NAME}
+            if ! oc extract --config=/etc/origin/node/node.kubeconfig "cm/${BOOTSTRAP_CONFIG_NAME}" -n openshift-node --to=/etc/origin/node --confirm; then
+              echo "error: Unable to retrieve latest config for node" 2>&1
+              sleep 15
+              continue
+            fi
+            # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.
+            md5sum /etc/origin/node/node-config.yaml > /tmp/.new
+            if [[ "$( cat /tmp/.old )" != "$( cat /tmp/.new )" ]]; then
+              echo "info: Configuration changed, restarting kubelet" 2>&1
+              # TODO: kubelet doesn't relabel nodes, best effort for now
+              # https://github.com/kubernetes/kubernetes/issues/59314
+              if args="$(openshift start node --write-flags --config /etc/origin/node/node-config.yaml)"; then
+                labels=' --node-labels=([^ ]+) '
+                if [[ ${args} =~ ${labels} ]]; then
+                  labels="${BASH_REMATCH[1]//,/ }"
+                  echo "info: Applying node labels $labels" 2>&1
+                  if ! oc label --config=/etc/origin/node/node.kubeconfig "node/${NODE_NAME}" ${labels} --overwrite; then
+                    echo "error: Unable to apply labels, will retry in 10" 2>&1
+                    sleep 10
+                    continue
+                  fi
+                fi
+              fi
+              if ! pgrep -U 0 -f 'hyperkube kubelet ' | xargs kill; then
+                echo "error: Unable to restart Kubelet" 2>&1
+              fi
+            fi
+            cp -f /tmp/.new /tmp/.old
+            sleep 180
+          done
+
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        securityContext:
+          runAsUser: 0
+          # Permission could be reduced by selecting an appropriate SELinux policy
+          privileged: true
+        volumeMounts:
+        # Directory which contains the host configuration. We write to this directory
+        - mountPath: /etc/origin/node/
+          name: host-config
+        - mountPath: /etc/sysconfig/origin-node
+          name: host-sysconfig-node
+          readOnly: true
+
+      # The network container launches the openshift-sdn process, the kube-proxy, and the local DNS service.
+      # It relies on an up to date node-config.yaml being present.
+      - name: sdn
+        image: " "
+        command: 
+        - /bin/bash
+        - -c
+        - |
+          #!/bin/bash
+          set -euo pipefail
+          # Take over network functions on the node
+          rm -Rf /etc/cni/net.d/*
+          rm -Rf /host/opt/cni/bin/*
+          cp -Rf /opt/cni/bin/* /host/opt/cni/bin/
+
+          if [[ -f /etc/sysconfig/origin-node ]]; then
+            set -o allexport
+            source /etc/sysconfig/origin-node
+          fi
+
+          # use either the bootstrapped node kubeconfig or the static configuration
+          file=/etc/origin/node/node.kubeconfig
+          if [[ ! -f "${file}" ]]; then
+            # use the static node config if it exists
+            # TODO: remove when static node configuration is no longer supported
+            for f in /etc/origin/node/system*.kubeconfig; do
+              echo "info: Using ${f} for node configuration" 1>&2
+              file="${f}"
+              break
+            done
+          fi
+          # Use the same config as the node, but with the service account token
+          oc config "--config=${file}" view --flatten > /tmp/kubeconfig
+          oc config --config=/tmp/kubeconfig set-credentials sa "--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )"
+          oc config --config=/tmp/kubeconfig set-context "$( oc config --config=/tmp/kubeconfig current-context )" --user=sa
+          # Launch the network process
+          exec openshift start network --config=/etc/origin/node/node-config.yaml --kubeconfig=/tmp/kubeconfig --loglevel=${DEBUG_LOGLEVEL:-2}
+
+        securityContext:
+          runAsUser: 0
+          # Permission could be reduced by selecting an appropriate SELinux policy
+          privileged: true
+
+        volumeMounts:
+        # Directory which contains the host configuration.
+        - mountPath: /etc/origin/node/
+          name: host-config
+          readOnly: true
+        - mountPath: /etc/sysconfig/origin-node
+          name: host-sysconfig-node
+          readOnly: true
+        # Run directories where we need to be able to access sockets
+        - mountPath: /var/run/dbus/
+          name: host-var-run-dbus
+          readOnly: true
+        - mountPath: /var/run/openvswitch/
+          name: host-var-run-ovs
+          readOnly: true
+        - mountPath: /var/run/kubernetes/
+          name: host-var-run-kubernetes
+          readOnly: true
+        # We mount our socket here
+        - mountPath: /var/run/openshift-sdn
+          name: host-var-run-openshift-sdn
+        # CNI related mounts which we take over
+        - mountPath: /host/opt/cni/bin
+          name: host-opt-cni-bin
+        - mountPath: /etc/cni/net.d
+          name: host-etc-cni-netd
+        - mountPath: /var/lib/cni/networks/openshift-sdn
+          name: host-var-lib-cni-networks-openshift-sdn
+
+        resources:
+          requests:
+            cpu: 100m
+            memory: 200Mi
+        env:
+        - name: OPENSHIFT_DNS_DOMAIN
+          value: cluster.local
+        ports:
+        - name: healthz
+          containerPort: 10256
+        livenessProbe:
+          initialDelaySeconds: 10
+          httpGet:
+            path: /healthz
+            port: 10256
+            scheme: HTTP
+        lifecycle:
+
+      volumes:
+      # In bootstrap mode, the host config contains information not easily available
+      # from other locations.
+      - name: host-config
+        hostPath:
+          path: /etc/origin/node
+      - name: host-sysconfig-node
+        hostPath:
+          path: /etc/sysconfig/origin-node
+      - name: host-modules
+        hostPath:
+          path: /lib/modules
+
+      - name: host-var-run-ovs
+        hostPath:
+          path: /var/run/openvswitch
+      - name: host-var-run-kubernetes
+        hostPath:
+          path: /var/run/kubernetes
+      - name: host-var-run-dbus
+        hostPath:
+          path: /var/run/dbus
+      - name: host-var-run-openshift-sdn
+        hostPath:
+          path: /var/run/openshift-sdn
+
+      - name: host-opt-cni-bin
+        hostPath:
+          path: /opt/cni/bin
+      - name: host-etc-cni-netd
+        hostPath:
+          path: /etc/cni/net.d
+      - name: host-var-lib-cni-networks-openshift-sdn
+        hostPath:
+          path: /var/lib/cni/networks/openshift-sdn
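
The sync container keeps /etc/origin/node/node-config.yaml refreshed from the ${BOOTSTRAP_CONFIG_NAME} configmap while the sdn container takes over CNI on the host. A hedged sketch of how a playbook could wait for the daemonset to settle and confirm the synced config landed; the rollout command and file path follow the manifests above, and treating both as hard requirements is an assumption:

- name: Wait for the sdn daemonset rollout to complete (illustrative)
  command: >
    {{ openshift_client_binary }} rollout status daemonset/sdn
    -n openshift-sdn --watch=true
  changed_when: false

- name: Confirm the sync container wrote a node config on this host
  stat:
    path: /etc/origin/node/node-config.yaml
  register: synced_node_config

- name: Fail if the node config was never synced
  fail:
    msg: "node-config.yaml was not synced from the bootstrap configmap"
  when: not synced_node_config.stat.exists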

+ 19 - 0
roles/openshift_sdn/meta/main.yaml

@@ -0,0 +1,19 @@
+---
+galaxy_info:
+  author: OpenShift Development <dev@lists.openshift.redhat.com>
+  description: Deploy OpenShift SDN
+  company: Red Hat, Inc.
+  license: Apache License, Version 2.0
+  min_ansible_version: 2.4
+  platforms:
+  - name: EL
+    versions:
+    - 7
+  - name: Fedora
+    versions:
+    - all
+  categories:
+  - openshift
+dependencies:
+- role: lib_openshift
+- role: openshift_facts

+ 51 - 0
roles/openshift_sdn/tasks/main.yml

@@ -0,0 +1,51 @@
+---
+# Fact setting
+# - name: Set default image variables based on deployment type
+#   include_vars: "{{ item }}"
+#   with_first_found:
+#     - "{{ openshift_deployment_type | default(deployment_type) }}.yml"
+#     - "default_images.yml"
+
+- name: Ensure openshift-sdn project exists
+  oc_project:
+    name: openshift-sdn
+    state: present
+    node_selector:
+      - ""
+
+- name: Make temp directory for templates
+  command: mktemp -d /tmp/console-ansible-XXXXXX
+  register: mktemp
+  changed_when: False
+
+- name: Copy SDN templates to temp directory
+  copy:
+    src: "{{ item }}"
+    dest: "{{ mktemp.stdout }}/{{ item | basename }}"
+  with_fileglob:
+    - "files/*.yaml"
+
+- name: Update the image tag
+  yedit:
+    src: "{{ mktemp.stdout }}/sdn-images.yaml"
+    key: 'tag.from.name'
+    # TODO: this should be easier to replace
+    value: "{{ osn_image }}"
+
+- name: Ensure the SDN can run privileged
+  oc_adm_policy_user:
+    namespace: "openshift-sdn"
+    resource_kind: scc
+    resource_name: privileged
+    state: present
+    user: "system:serviceaccount:openshift-sdn:sdn"
+
+- name: Apply the SDN config
+  shell: >
+    {{ openshift_client_binary }} apply -f {{ mktemp.stdout }}
+
+- name: Remove temp directory
+  file:
+    state: absent
+    name: "{{ mktemp.stdout }}"
+  changed_when: False
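
For reference, a hedged sketch of sdn-images.yaml after the yedit task above has replaced tag.from.name with osn_image, assuming the default origin image reference openshift/node:v3.9.0 (illustrative tag):

apiVersion: image.openshift.io/v1
kind: ImageStreamTag
metadata:
  name: node:v3.9
  namespace: openshift-sdn
tag:
  from:
    kind: DockerImage
    name: openshift/node:v3.9.0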