
Add openshift_openstack role and move tasks there

All the tasks that were previously in playbooks are now under
`roles/openshift_openstack`.

The `openshift-cluster` directory now only contains playbooks that
include tasks from that role. This makes the structure much closer to
that of the AWS provider.
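The new layout replaces standalone playbooks with thin wrappers around the role. A representative wrapper (the task file name `some-tasks.yml` is a placeholder for illustration, not a file in this commit) looks like:

```yaml
---
# Sketch of the new thin-playbook pattern: the playbook itself only
# includes a task file from the openshift_openstack role.
- hosts: localhost
  tasks:
  - name: Run a task file from the role
    include_role:
      name: openshift_openstack
      tasks_from: some-tasks.yml
```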
Tomas Sedovic committed 7 years ago
Parent commit: 4ed9aef6f8
56 files changed, 1636 insertions(+), 803 deletions(-)
  1. playbooks/openstack/README.md (+18 -0)
  2. playbooks/openstack/galaxy-requirements.yaml (+0 -10)
  3. playbooks/openstack/openshift-cluster/install.yml (+18 -0)
  4. playbooks/openstack/openshift-cluster/post-install.yml (+2 -2)
  5. playbooks/openstack/openshift-cluster/post-provision-openstack.yml (+0 -118)
  6. playbooks/openstack/openshift-cluster/pre-install.yml (+0 -21)
  7. playbooks/openstack/openshift-cluster/pre_tasks.yml (+0 -53)
  8. playbooks/openstack/openshift-cluster/prepare-and-format-cinder-volume.yaml (+0 -67)
  9. playbooks/openstack/openshift-cluster/prerequisites.yml (+9 -120)
  10. playbooks/openstack/openshift-cluster/provision-openstack.yml (+0 -35)
  11. playbooks/openstack/openshift-cluster/provision.yaml (+0 -4)
  12. playbooks/openstack/openshift-cluster/provision.yml (+37 -0)
  13. playbooks/openstack/openshift-cluster/provision_install.yml (+9 -0)
  14. playbooks/openstack/openshift-cluster/scale-up.yaml (+3 -8)
  15. playbooks/openstack/sample-inventory/inventory.py (+22 -14)
  16. requirements.txt (+1 -0)
  17. roles/common/defaults/main.yml (+0 -6)
  18. roles/dns-records/defaults/main.yml (+0 -2)
  19. roles/dns-records/tasks/main.yml (+0 -121)
  20. roles/dns-server-detect/defaults/main.yml (+0 -3)
  21. roles/dns-server-detect/tasks/main.yml (+0 -36)
  22. roles/dns-views/defaults/main.yml (+0 -4)
  23. roles/dns-views/tasks/main.yml (+0 -30)
  24. roles/docker-storage-setup/defaults/main.yaml (+0 -7)
  25. roles/hostnames/tasks/main.yaml (+0 -26)
  26. roles/hostnames/test/inv (+0 -12)
  27. roles/hostnames/test/roles (+0 -1)
  28. roles/hostnames/test/test.yaml (+0 -4)
  29. roles/hostnames/vars/main.yaml (+0 -2)
  30. roles/hostnames/vars/records.yaml (+0 -28)
  31. roles/openshift-prep/defaults/main.yml (+0 -13)
  32. roles/openshift-prep/tasks/main.yml (+0 -4)
  33. roles/openshift-prep/tasks/prerequisites.yml (+0 -37)
  34. roles/openshift_openstack/defaults/main.yml (+49 -0)
  35. roles/openshift_openstack/tasks/check-prerequisites.yml (+109 -0)
  36. roles/openshift_openstack/tasks/cleanup.yml (+6 -0)
  37. roles/docker-storage-setup/tasks/main.yaml (+0 -9)
  38. roles/openshift_openstack/tasks/custom_flavor_check.yaml (+0 -0)
  39. playbooks/openstack/openshift-cluster/custom_image_check.yaml (+1 -0)
  40. roles/openshift_openstack/tasks/generate-templates.yml (+26 -0)
  41. roles/openshift_openstack/tasks/hostname.yml (+33 -0)
  42. roles/openshift_openstack/tasks/net_vars_check.yaml (+0 -0)
  43. roles/openshift_openstack/tasks/node-configuration.yml (+11 -0)
  44. roles/node-network-manager/tasks/main.yml (+2 -5)
  45. roles/openshift_openstack/tasks/node-packages.yml (+15 -0)
  46. roles/openshift_openstack/tasks/populate-dns.yml (+5 -0)
  47. roles/openshift_openstack/tasks/prepare-and-format-cinder-volume.yaml (+59 -0)
  48. roles/openshift_openstack/tasks/provision.yml (+30 -0)
  49. roles/openshift_openstack/tasks/subnet_update_dns_servers.yaml (+0 -0)
  50. roles/openshift_openstack/templates/docker-storage-setup-dm.j2 (+0 -0)
  51. roles/openshift_openstack/templates/docker-storage-setup-overlayfs.j2 (+0 -0)
  52. roles/openshift_openstack/templates/heat_stack.yaml.j2 (+888 -0)
  53. roles/openshift_openstack/templates/heat_stack_server.yaml.j2 (+270 -0)
  54. roles/openshift_openstack/templates/user_data.j2 (+13 -0)
  55. roles/openshift_openstack/vars/main.yml (+0 -0)
  56. roles/openstack-stack/tasks/main.yml (+0 -1)

+ 18 - 0
playbooks/openstack/README.md

@@ -38,6 +38,19 @@ Optional:
 * External Neutron network with a floating IP address pool


+## DNS Requirements
+
+OpenShift requires DNS to operate properly. OpenStack supports DNS-as-a-service
+in the form of the Designate project, but the playbooks here don't support it
+yet. Until we do, you will need to provide a DNS solution yourself (or in case
+you are not running Designate when we do).
+
+If your server supports nsupdate, we will use it to add the necessary records.
+
+TODO(shadower): describe how to build a sample DNS server and how to configure
+our playbooks for nsupdate.
+
+
 ## Installation

 There are four main parts to the installation:
@@ -143,6 +156,8 @@ $ vi inventory/group_vars/all.yml
 4. Set the `openstack_default_flavor` to the flavor you want your
    OpenShift VMs to use.
    - See `openstack flavor list` for the list of available flavors.
+5. Set the `public_dns_nameservers` to the list of the IP addresses
+   of the DNS servers used for the **private** address resolution[1].

 **NOTE**: In most OpenStack environments, you will also need to
 configure the forwarders for the DNS server we create. This depends on
@@ -153,6 +168,9 @@ put the IP addresses into `public_dns_nameservers` in
 `inventory/group_vars/all.yml`.


+[1]: Yes, the name is bad. We will fix it.
+
+
 #### OpenShift configuration

 The OpenShift configuration is in `inventory/group_vars/OSEv3.yml`.
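For the nsupdate path mentioned in the README changes above, a hedged sketch of how a record could be pushed with Ansible's `nsupdate` module follows. All addresses, the zone, and the key values are placeholders; the real playbooks drive this from their own inventory variables:

```yaml
---
# Sketch only: add an A record for a master node over dynamic DNS update.
# Server IP, zone, record, and TSIG key details below are illustrative.
- hosts: localhost
  tasks:
  - name: Add an A record for a master node
    nsupdate:
      server: 192.0.2.1
      zone: openshift.example.com
      record: master-0
      type: A
      value: 192.0.2.10
      key_name: private-openshift.example.com
      key_secret: "{{ vaulted_tsig_secret }}"
      key_algorithm: hmac-sha256
```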

+ 0 - 10
playbooks/openstack/galaxy-requirements.yaml

@@ -1,10 +0,0 @@
----
-# This is the Ansible Galaxy requirements file to pull in the correct roles
-
-# From 'infra-ansible'
-- src: https://github.com/redhat-cop/infra-ansible
-  version: master
-
-# From 'openshift-ansible'
-- src: https://github.com/openshift/openshift-ansible
-  version: master

+ 18 - 0
playbooks/openstack/openshift-cluster/install.yml

@@ -0,0 +1,18 @@
+---
+# NOTE(shadower): the AWS playbook builds an in-memory inventory of
+# all the EC2 instances here. We don't need to as that's done by the
+# dynamic inventory.
+
+# TODO(shadower): the AWS playbook sets the
+# `openshift_master_cluster_hostname` and `osm_custom_cors_origins`
+# values here. We do it in the OSEv3 group vars. Do we need to add
+# some logic here?
+
+- name: normalize groups
+  include: ../../byo/openshift-cluster/initialize_groups.yml
+
+- name: run the std_include
+  include: ../../common/openshift-cluster/std_include.yml
+
+- name: run the config
+  include: ../../common/openshift-cluster/config.yml

+ 2 - 2
playbooks/openstack/openshift-cluster/post-install.yml

@@ -22,9 +22,9 @@
     - when: openshift_use_flannel|default(False)|bool
       block:
         - include_role:
-            name: openshift-ansible/roles/os_firewall
+            name: os_firewall
         - include_role:
-            name: openshift-ansible/roles/lib_os_firewall
+            name: lib_os_firewall
         - name: set allow rules for dnsmasq
           os_firewall_manage_iptables:
             name: "{{ item.service }}"

+ 0 - 118
playbooks/openstack/openshift-cluster/post-provision-openstack.yml

@@ -1,118 +0,0 @@
----
-- hosts: cluster_hosts
-  name: Wait for the the nodes to come up
-  become: False
-  gather_facts: False
-  tasks:
-    - when: not openstack_use_bastion|default(False)|bool
-      wait_for_connection:
-    - when: openstack_use_bastion|default(False)|bool
-      delegate_to: bastion
-      wait_for_connection:
-
-- hosts: cluster_hosts
-  gather_facts: True
-  tasks:
-    - name: Debug hostvar
-      debug:
-        msg: "{{ hostvars[inventory_hostname] }}"
-        verbosity: 2
-
-- name: OpenShift Pre-Requisites (part 1)
-  include: pre-install.yml
-
-- name: Assign hostnames
-  hosts: cluster_hosts
-  gather_facts: False
-  become: true
-  roles:
-    - role: hostnames
-
-- name: Subscribe DNS Host to allow for configuration below
-  hosts: dns
-  gather_facts: False
-  become: true
-  roles:
-    - role: subscription-manager
-      when: hostvars.localhost.rhsm_register|default(False)
-      tags: 'subscription-manager'
-
-- name: Determine which DNS server(s) to use for our generated records
-  hosts: localhost
-  gather_facts: False
-  become: False
-  roles:
-    - dns-server-detect
-
-- name: Build the DNS Server Views and Configure DNS Server(s)
-  hosts: dns
-  gather_facts: False
-  become: true
-  roles:
-    - role: dns-views
-    - role: infra-ansible/roles/dns-server
-
-- name: Build and process DNS Records
-  hosts: localhost
-  gather_facts: True
-  become: False
-  roles:
-    - role: dns-records
-      use_bastion: "{{ openstack_use_bastion|default(False)|bool }}"
-    - role: infra-ansible/roles/dns
-
-- name: Switch the stack subnet to the configured private DNS server
-  hosts: localhost
-  gather_facts: False
-  become: False
-  vars_files:
-    - stack_params.yaml
-  tasks:
-    - include_role:
-        name: openstack-stack
-        tasks_from: subnet_update_dns_servers
-
-- name: OpenShift Pre-Requisites (part 2)
-  hosts: OSEv3
-  gather_facts: true
-  become: true
-  vars:
-    interface: "{{ flannel_interface|default('eth1') }}"
-    interface_file: /etc/sysconfig/network-scripts/ifcfg-{{ interface }}
-    interface_config:
-      DEVICE: "{{ interface }}"
-      TYPE: Ethernet
-      BOOTPROTO: dhcp
-      ONBOOT: 'yes'
-      DEFTROUTE: 'no'
-      PEERDNS: 'no'
-  pre_tasks:
-    - name: "Include DNS configuration to ensure proper name resolution"
-      lineinfile:
-        state: present
-        dest: /etc/sysconfig/network
-        regexp: "IP4_NAMESERVERS={{ hostvars['localhost'].private_dns_server }}"
-        line: "IP4_NAMESERVERS={{ hostvars['localhost'].private_dns_server }}"
-    - name: "Configure the flannel interface options"
-      when: openshift_use_flannel|default(False)|bool
-      block:
-        - file:
-            dest: "{{ interface_file }}"
-            state: touch
-            mode: 0644
-            owner: root
-            group: root
-        - lineinfile:
-            state: present
-            dest: "{{ interface_file }}"
-            regexp: "{{ item.key }}="
-            line: "{{ item.key }}={{ item.value }}"
-          with_dict: "{{ interface_config }}"
-  roles:
-    - node-network-manager
-
-- include: prepare-and-format-cinder-volume.yaml
-  when: >
-    prepare_and_format_registry_volume|default(False) or
-    (cinder_registry_volume is defined and
-      cinder_registry_volume.changed|default(False))

+ 0 - 21
playbooks/openstack/openshift-cluster/pre-install.yml

@@ -1,21 +0,0 @@
----
-###############################
-# OpenShift Pre-Requisites
-
-# - subscribe hosts
-# - prepare docker
-# - other prep (install additional packages, etc.)
-#
-- hosts: OSEv3
-  become: true
-  roles:
-    - { role: subscription-manager, when: hostvars.localhost.rhsm_register|default(False), tags: 'subscription-manager', ansible_sudo: true }
-    - role: docker-storage-setup
-      docker_dev: /dev/vdb
-      tags: 'docker'
-    - { role: openshift-prep, tags: 'openshift-prep' }
-
-- hosts: localhost:cluster_hosts
-  become: False
-  tasks:
-    - include: pre_tasks.yml

+ 0 - 53
playbooks/openstack/openshift-cluster/pre_tasks.yml

@@ -1,53 +0,0 @@
----
-- name: Generate Environment ID
-  set_fact:
-    env_random_id: "{{ ansible_date_time.epoch }}"
-  run_once: true
-  delegate_to: localhost
-
-- name: Set default Environment ID
-  set_fact:
-    default_env_id: "openshift-{{ lookup('env','OS_USERNAME') }}-{{ env_random_id }}"
-  delegate_to: localhost
-
-- name: Setting Common Facts
-  set_fact:
-    env_id: "{{ env_id | default(default_env_id) }}"
-  delegate_to: localhost
-
-- name: Updating DNS domain to include env_id (if not empty)
-  set_fact:
-    full_dns_domain: "{{ (env_id|trim == '') | ternary(public_dns_domain, env_id + '.' + public_dns_domain) }}"
-  delegate_to: localhost
-
-- name: Set the APP domain for OpenShift use
-  set_fact:
-    openshift_app_domain: "{{ openshift_app_domain | default('apps') }}"
-  delegate_to: localhost
-
-- name: Set the default app domain for routing purposes
-  set_fact:
-    openshift_master_default_subdomain: "{{ openshift_app_domain }}.{{ full_dns_domain }}"
-  delegate_to: localhost
-  when:
-  - openshift_master_default_subdomain is undefined
-
-# Check that openshift_cluster_node_labels has regions defined for all groups
-# NOTE(kpilatov): if node labels are to be enabled for more groups,
-#                 this check needs to be modified as well
-- name: Set openshift_cluster_node_labels if undefined (should not happen)
-  set_fact:
-    openshift_cluster_node_labels: {'app': {'region': 'primary'}, 'infra': {'region': 'infra'}}
-  when: openshift_cluster_node_labels is not defined
-
-- name: Set openshift_cluster_node_labels for the infra group
-  set_fact:
-    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'infra': {'region': 'infra'}}, recursive=True) }}"
-
-- name: Set openshift_cluster_node_labels for the app group
-  set_fact:
-    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'region': 'primary'}}, recursive=True) }}"
-
-- name: Set openshift_cluster_node_labels for auto-scaling app nodes
-  set_fact:
-    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'autoscaling': 'app'}}, recursive=True) }}"

+ 0 - 67
playbooks/openstack/openshift-cluster/prepare-and-format-cinder-volume.yaml

@@ -1,67 +0,0 @@
----
-- hosts: localhost
-  gather_facts: False
-  become: False
-  tasks:
-  - set_fact:
-      cinder_volume: "{{ hostvars[groups.masters[0]].openshift_hosted_registry_storage_openstack_volumeID }}"
-      cinder_fs: "{{ hostvars[groups.masters[0]].openshift_hosted_registry_storage_openstack_filesystem }}"
-
-  - name: Attach the volume to the VM
-    os_server_volume:
-      state: present
-      server: "{{ groups['masters'][0] }}"
-      volume: "{{ cinder_volume }}"
-    register: volume_attachment
-
-  - set_fact:
-      attached_device: >-
-        {{ volume_attachment['attachments']|json_query("[?volume_id=='" + cinder_volume + "'].device | [0]") }}
-
-  - delegate_to: "{{ groups['masters'][0] }}"
-    block:
-    - name: Wait for the device to appear
-      wait_for: path={{ attached_device }}
-
-    - name: Create a temp directory for mounting the volume
-      tempfile:
-        prefix: cinder-volume
-        state: directory
-      register: cinder_mount_dir
-
-    - name: Format the device
-      filesystem:
-        fstype: "{{ cinder_fs }}"
-        dev: "{{ attached_device }}"
-
-    - name: Mount the device
-      mount:
-        name: "{{ cinder_mount_dir.path }}"
-        src: "{{ attached_device }}"
-        state: mounted
-        fstype: "{{ cinder_fs }}"
-
-    - name: Change mode on the filesystem
-      file:
-        path: "{{ cinder_mount_dir.path }}"
-        state: directory
-        recurse: true
-        mode: 0777
-
-    - name: Unmount the device
-      mount:
-        name: "{{ cinder_mount_dir.path }}"
-        src: "{{ attached_device }}"
-        state: absent
-        fstype: "{{ cinder_fs }}"
-
-    - name: Delete the temp directory
-      file:
-        name: "{{ cinder_mount_dir.path }}"
-        state: absent
-
-  - name: Detach the volume from the VM
-    os_server_volume:
-      state: absent
-      server: "{{ groups['masters'][0] }}"
-      volume: "{{ cinder_volume }}"

+ 9 - 120
playbooks/openstack/openshift-cluster/prerequisites.yml

@@ -1,123 +1,12 @@
 ---
 - hosts: localhost
   tasks:
-
-  # Sanity check of inventory variables
-  - include: net_vars_check.yaml
-
-  # Check ansible
-  - name: Check Ansible version
-    assert:
-      that: >
-        (ansible_version.major == 2 and ansible_version.minor >= 3) or
-        (ansible_version.major > 2)
-      msg: "Ansible version must be at least 2.3"
-
-  # Check shade
-  - name: Try to import python module shade
-    command: python -c "import shade"
-    ignore_errors: yes
-    register: shade_result
-  - name: Check if shade is installed
-    assert:
-      that: 'shade_result.rc == 0'
-      msg: "Python module shade is not installed"
-
-  # Check jmespath
-  - name: Try to import python module shade
-    command: python -c "import jmespath"
-    ignore_errors: yes
-    register: jmespath_result
-  - name: Check if jmespath is installed
-    assert:
-      that: 'jmespath_result.rc == 0'
-      msg: "Python module jmespath is not installed"
-
-  # Check python-dns
-  - name: Try to import python DNS module
-    command: python -c "import dns"
-    ignore_errors: yes
-    register: pythondns_result
-  - name: Check if python-dns is installed
-    assert:
-      that: 'pythondns_result.rc == 0'
-      msg: "Python module python-dns is not installed"
-
-  # Check jinja2
-  - name: Try to import jinja2 module
-    command: python -c "import jinja2"
-    ignore_errors: yes
-    register: jinja_result
-  - name: Check if jinja2 is installed
-    assert:
-      that: 'jinja_result.rc == 0'
-      msg: "Python module jinja2 is not installed"
-
-  # Check Glance image
-  - name: Try to get image facts
-    os_image_facts:
-      image: "{{ openstack_default_image_name }}"
-    register: image_result
-  - name: Check that image is available
-    assert:
-      that: "image_result.ansible_facts.openstack_image"
-      msg: "Image {{ openstack_default_image_name }} is not available"
-
-  # Check network name
-  - name: Try to get network facts
-    os_networks_facts:
-      name: "{{ openstack_external_network_name }}"
-    register: network_result
-    when: not openstack_provider_network_name|default(None)
-  - name: Check that network is available
-    assert:
-      that: "network_result.ansible_facts.openstack_networks"
-      msg: "Network {{ openstack_external_network_name }} is not available"
-    when: not openstack_provider_network_name|default(None)
-
-  # Check keypair
-  # TODO kpilatov: there is no Ansible module for getting OS keypairs
-  #                (os_keypair is not suitable for this)
-  #                this method does not force python-openstackclient dependency
-  - name: Try to show keypair
-    command: >
-             python -c 'import shade; cloud = shade.openstack_cloud();
-             exit(cloud.get_keypair("{{ openstack_ssh_public_key }}") is None)'
-    ignore_errors: yes
-    register: key_result
-  - name: Check that keypair is available
-    assert:
-      that: 'key_result.rc == 0'
-      msg: "Keypair {{ openstack_ssh_public_key }} is not available"
-
-# Check that custom images and flavors exist
-- hosts: localhost
-
-  # Include variables that will be used by heat
-  vars_files:
-  - stack_params.yaml
-
-  tasks:
-  # Check that custom images are available
-  - include: custom_image_check.yaml
-    with_items:
-    - "{{ openstack_master_image }}"
-    - "{{ openstack_infra_image }}"
-    - "{{ openstack_node_image }}"
-    - "{{ openstack_lb_image }}"
-    - "{{ openstack_etcd_image }}"
-    - "{{ openstack_dns_image }}"
-    loop_control:
-      loop_var: image
-
-  # Check that custom flavors are available
-  - include: custom_flavor_check.yaml
-    with_items:
-    - "{{ master_flavor }}"
-    - "{{ infra_flavor }}"
-    - "{{ node_flavor }}"
-    - "{{ lb_flavor }}"
-    - "{{ etcd_flavor }}"
-    - "{{ dns_flavor }}"
-    loop_control:
-      loop_var: flavor
+  - name: Check dependencies and OpenStack prerequisites
+    include_role:
+      name: openshift_openstack
+      tasks_from: check-prerequisites.yml
+
+  - name: Check network configuration
+    include_role:
+      name: openshift_openstack
+      tasks_from: net_vars_check.yaml
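Each of the removed dependency checks shelled out to `python -c "import …"`. The same test can be done in-process with `importlib`; a sketch of the idea (function name is mine):

```python
import importlib.util


def missing_modules(names):
    """Return the subset of Python module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]


# The prerequisites the old playbook asserted on:
REQUIRED = ["shade", "jmespath", "dns", "jinja2"]
```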

+ 0 - 35
playbooks/openstack/openshift-cluster/provision-openstack.yml

@@ -1,35 +0,0 @@
----
-- hosts: localhost
-  gather_facts: True
-  become: False
-  vars_files:
-    - stack_params.yaml
-  pre_tasks:
-    - include: pre_tasks.yml
-  roles:
-    - role: openstack-stack
-    - role: openstack-create-cinder-registry
-      when:
-        - cinder_hosted_registry_name is defined
-        - cinder_hosted_registry_size_gb is defined
-    - role: static_inventory
-      when: openstack_inventory|default('static') == 'static'
-      inventory_path: "{{ openstack_inventory_path|default(inventory_dir) }}"
-      private_ssh_key: "{{ openstack_private_ssh_key|default('') }}"
-      ssh_config_path: "{{ openstack_ssh_config_path|default('/tmp/ssh.config.openshift.ansible' + '.' + stack_name) }}"
-      ssh_user: "{{ ansible_user }}"
-
-- name: Refresh Server inventory or exit to apply SSH config
-  hosts: localhost
-  connection: local
-  become: False
-  gather_facts: False
-  tasks:
-    - name: Exit to apply SSH config for a bastion
-      meta: end_play
-      when: openstack_use_bastion|default(False)|bool
-    - name: Refresh Server inventory
-      meta: refresh_inventory
-
-- include: post-provision-openstack.yml
-  when: not openstack_use_bastion|default(False)|bool

+ 0 - 4
playbooks/openstack/openshift-cluster/provision.yaml

@@ -1,4 +0,0 @@
----
-- include: "prerequisites.yml"
-
-- include: "provision-openstack.yml"

+ 37 - 0
playbooks/openstack/openshift-cluster/provision.yml

@@ -0,0 +1,37 @@
+---
+- name: Create the OpenStack resources for cluster installation
+  hosts: localhost
+  tasks:
+  - name: provision cluster
+    include_role:
+      name: openshift_openstack
+      tasks_from: provision.yml
+
+# NOTE(shadower): the (internal) DNS must be functional at this point!!
+# That will have happened in provision.yml if nsupdate was configured.
+
+# TODO(shadower): consider splitting this up so people can stop here
+# and configure their DNS if they have to.
+
+- name: Prepare the Nodes in the cluster for installation
+  hosts: cluster_hosts
+  become: true
+  # NOTE: The nodes may not be up yet, don't gather facts here.
+  # They'll be collected after `wait_for_connection`.
+  gather_facts: no
+  tasks:
+  - name: Wait for the nodes to come up
+    wait_for_connection:
+
+  - name: Gather facts for the new nodes
+    setup:
+
+  - name: Install dependencies
+    include_role:
+      name: openshift_openstack
+      tasks_from: node-packages.yml
+
+  - name: Configure Node
+    include_role:
+      name: openshift_openstack
+      tasks_from: node-configuration.yml

+ 9 - 0
playbooks/openstack/openshift-cluster/provision_install.yml

@@ -0,0 +1,9 @@
+---
+- name: Check the prerequisites for cluster provisioning in OpenStack
+  include: prerequisites.yml
+
+- name: Include the provision.yml playbook to create cluster
+  include: provision.yml
+
+- name: Include the install.yml playbook to install cluster
+  include: install.yml

+ 3 - 8
playbooks/openstack/openshift-cluster/scale-up.yaml

@@ -41,21 +41,16 @@
       openstack_num_nodes: "{{ oc_old_num_nodes | int + increment_by | int }}"

 # Run provision.yaml with higher number of nodes to create a new app-node VM
-- include: provision.yaml
+- include: provision.yml

 # Run config.yml to perform openshift installation
-# Path to openshift-ansible can be customised:
-# - the value of openshift_ansible_dir has to be an absolute path
-# - the path cannot contain the '/' symbol at the end

 # Creating a new deployment by the full installation
-- include: "{{ openshift_ansible_dir }}/playbooks/byo/config.yml"
-  vars:
-    openshift_ansible_dir: ../../../../openshift-ansible
+- include: install.yml
   when: 'not groups["new_nodes"] | list'

 # Scaling up existing deployment
-- include: "{{ openshift_ansible_dir }}/playbooks/byo/openshift-node/scaleup.yml"
+- include: "../../byo/openshift-node/scaleup.yml"
   vars:
     openshift_ansible_dir: ../../../../openshift-ansible
   when: 'groups["new_nodes"] | list'

+ 22 - 14
playbooks/openstack/sample-inventory/inventory.py

@@ -1,4 +1,11 @@
 #!/usr/bin/env python
+"""
+This is an Ansible dynamic inventory for OpenStack.
+
+It requires your OpenStack credentials to be set in clouds.yaml or your shell
+environment.
+
+"""

 from __future__ import print_function

@@ -7,7 +14,8 @@ import json
 import shade


-if __name__ == '__main__':
+def build_inventory():
+    '''Build the dynamic inventory.'''
     cloud = shade.openstack_cloud()

     inventory = {}
@@ -39,13 +47,10 @@ if __name__ == '__main__':
     dns = [server.name for server in cluster_hosts
            if server.metadata['host-type'] == 'dns']

-    lb = [server.name for server in cluster_hosts
-          if server.metadata['host-type'] == 'lb']
+    load_balancers = [server.name for server in cluster_hosts
+                      if server.metadata['host-type'] == 'lb']

-    osev3 = list(set(nodes + etcd + lb))
-
-    groups = [server.metadata.group for server in cluster_hosts
-              if 'group' in server.metadata]
+    osev3 = list(set(nodes + etcd + load_balancers))

     inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]}
     inventory['OSEv3'] = {'hosts': osev3}
@@ -55,7 +60,7 @@ if __name__ == '__main__':
     inventory['infra_hosts'] = {'hosts': infra_hosts}
     inventory['app'] = {'hosts': app}
     inventory['dns'] = {'hosts': dns}
-    inventory['lb'] = {'hosts': lb}
+    inventory['lb'] = {'hosts': load_balancers}

     for server in cluster_hosts:
         if 'group' in server.metadata:
@@ -68,21 +73,24 @@

     for server in cluster_hosts:
         ssh_ip_address = server.public_v4 or server.private_v4
-        vars = {
+        hostvars = {
             'ansible_host': ssh_ip_address
         }

         public_v4 = server.public_v4 or server.private_v4
         if public_v4:
-            vars['public_v4'] = public_v4
+            hostvars['public_v4'] = public_v4
         # TODO(shadower): what about multiple networks?
         if server.private_v4:
-            vars['private_v4'] = server.private_v4
+            hostvars['private_v4'] = server.private_v4

         node_labels = server.metadata.get('node_labels')
         if node_labels:
-            vars['openshift_node_labels'] = node_labels
+            hostvars['openshift_node_labels'] = node_labels
+
+        inventory['_meta']['hostvars'][server.name] = hostvars
+    return inventory

-        inventory['_meta']['hostvars'][server.name] = vars

-    print(json.dumps(inventory, indent=4, sort_keys=True))
+if __name__ == '__main__':
+    print(json.dumps(build_inventory(), indent=4, sort_keys=True))
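The grouping logic in `inventory.py` — mapping each server's `host-type` metadata into an Ansible group — can be sketched without shade. The data shape here is simplified (name/metadata pairs instead of shade server objects):

```python
def group_by_host_type(servers):
    """Map each server name into a group keyed by its host-type metadata,
    mirroring how inventory.py builds the masters/nodes/etcd/lb/dns groups."""
    groups = {}
    for name, metadata in servers:
        groups.setdefault(metadata["host-type"], []).append(name)
    return groups
```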

+ 1 - 0
requirements.txt

@@ -7,4 +7,5 @@ pyOpenSSL==16.2.0
 # We need to disable ruamel.yaml for now because of test failures
 #ruamel.yaml
 six==1.10.0
+shade==1.24.0
 passlib==1.6.5

+ 0 - 6
roles/common/defaults/main.yml

@@ -1,6 +0,0 @@
----
-openshift_cluster_node_labels:
-  app:
-    region: primary
-  infra:
-    region: infra

+ 0 - 2
roles/dns-records/defaults/main.yml

@@ -1,2 +0,0 @@
----
-use_bastion: False

+ 0 - 121
roles/dns-records/tasks/main.yml

@@ -1,121 +0,0 @@
----
-- name: "Generate list of private A records"
-  set_fact:
-    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': hostvars[item]['ansible_hostname'], 'ip': hostvars[item]['private_v4'] } ] }}"
-  with_items: "{{ groups['cluster_hosts'] }}"
-
-- name: "Add wildcard records to the private A records for infrahosts"
-  set_fact:
-    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': '*.' + openshift_app_domain, 'ip': hostvars[item]['private_v4'] } ] }}"
-  with_items: "{{ groups['infra_hosts'] }}"
-
-- name: "Add public master cluster hostname records to the private A records (single master)"
-  set_fact:
-    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.masters[0]].private_v4 } ] }}"
-  when:
-    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
-    - openstack_num_masters == 1
-
-- name: "Add public master cluster hostname records to the private A records (multi-master)"
-  set_fact:
-    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.lb[0]].private_v4 } ] }}"
-  when:
-    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
-    - openstack_num_masters > 1
-
-- name: "Set the private DNS server to use the external value (if provided)"
-  set_fact:
-    nsupdate_server_private: "{{ external_nsupdate_keys['private']['server'] }}"
-    nsupdate_key_secret_private: "{{ external_nsupdate_keys['private']['key_secret'] }}"
-    nsupdate_key_algorithm_private: "{{ external_nsupdate_keys['private']['key_algorithm'] }}"
-    nsupdate_private_key_name: "{{ external_nsupdate_keys['private']['key_name']|default('private-' + full_dns_domain) }}"
-  when:
-    - external_nsupdate_keys is defined
-    - external_nsupdate_keys['private'] is defined
-
-- name: "Set the private DNS server to use the provisioned value"
-  set_fact:
-    nsupdate_server_private: "{{ hostvars[groups['dns'][0]].public_v4 }}"
-    nsupdate_key_secret_private: "{{ hostvars[groups['dns'][0]].nsupdate_keys['private-' + full_dns_domain].key_secret }}"
-    nsupdate_key_algorithm_private: "{{ hostvars[groups['dns'][0]].nsupdate_keys['private-' + full_dns_domain].key_algorithm }}"
-  when:
-    - nsupdate_server_private is undefined
-
-- name: "Generate the private Add section for DNS"
-  set_fact:
-    private_named_records:
-      - view: "private"
-        zone: "{{ full_dns_domain }}"
-        server: "{{ nsupdate_server_private }}"
-        key_name: "{{ nsupdate_private_key_name|default('private-' + full_dns_domain) }}"
-        key_secret: "{{ nsupdate_key_secret_private }}"
-        key_algorithm: "{{ nsupdate_key_algorithm_private | lower }}"
-        entries: "{{ private_records }}"
-
-- name: "Generate list of public A records"
-  set_fact:
-    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': hostvars[item]['ansible_hostname'], 'ip': hostvars[item]['public_v4'] } ] }}"
-  with_items: "{{ groups['cluster_hosts'] }}"
-  when: hostvars[item]['public_v4'] is defined
-
-- name: "Add wildcard records to the public A records"
-  set_fact:
-    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': '*.' + openshift_app_domain, 'ip': hostvars[item]['public_v4'] } ] }}"
-  with_items: "{{ groups['infra_hosts'] }}"
-  when: hostvars[item]['public_v4'] is defined
-
-- name: "Add public master cluster hostname records to the public A records (single master)"
-  set_fact:
-    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.masters[0]].public_v4 } ] }}"
-  when:
-    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
-    - openstack_num_masters == 1
-    - not use_bastion|bool
-
-- name: "Add public master cluster hostname records to the public A records (single master behind a bastion)"
-  set_fact:
-    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.bastions[0]].public_v4 } ] }}"
-  when:
-    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
-    - openstack_num_masters == 1
-    - use_bastion|bool
-
-- name: "Add public master cluster hostname records to the public A records (multi-master)"
-  set_fact:
-    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.lb[0]].public_v4 } ] }}"
-  when:
-    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
-    - openstack_num_masters > 1
-
-- name: "Set the public DNS server details to use the external value (if provided)"
-  set_fact:
-    nsupdate_server_public: "{{ external_nsupdate_keys['public']['server'] }}"
-    nsupdate_key_secret_public: "{{ external_nsupdate_keys['public']['key_secret'] }}"
-    nsupdate_key_algorithm_public: "{{ external_nsupdate_keys['public']['key_algorithm'] }}"
-    nsupdate_public_key_name: "{{ external_nsupdate_keys['public']['key_name']|default('public-' + full_dns_domain) }}"
-  when:
-    - external_nsupdate_keys is defined
-    - external_nsupdate_keys['public'] is defined
-
-- name: "Set the public DNS server details to use the provisioned value"
-  set_fact:
-    nsupdate_server_public: "{{ hostvars[groups['dns'][0]].public_v4 }}"
-    nsupdate_key_secret_public: "{{ hostvars[groups['dns'][0]].nsupdate_keys['public-' + full_dns_domain].key_secret }}"
-    nsupdate_key_algorithm_public: "{{ hostvars[groups['dns'][0]].nsupdate_keys['public-' + full_dns_domain].key_algorithm }}"
-  when:
-    - nsupdate_server_public is undefined
-
-- name: "Generate the public Add section for DNS"
-  set_fact:
-    public_named_records:
-      - view: "public"
-        zone: "{{ full_dns_domain }}"
-        server: "{{ nsupdate_server_public }}"
-        key_name: "{{ nsupdate_public_key_name|default('public-' + full_dns_domain) }}"
-        key_secret: "{{ nsupdate_key_secret_public }}"
-        key_algorithm: "{{ nsupdate_key_algorithm_public | lower }}"
-        entries: "{{ public_records }}"
-
-- name: "Generate the final dns_records_add"
-  set_fact:
-    dns_records_add: "{{ private_named_records + public_named_records }}"

+ 0 - 3
roles/dns-server-detect/defaults/main.yml

@@ -1,3 +0,0 @@
----
-
-external_nsupdate_keys: {}

+ 0 - 36
roles/dns-server-detect/tasks/main.yml

@@ -1,36 +0,0 @@
----
-- fail:
-    msg: 'Missing required private DNS server(s)'
-  when:
-    - external_nsupdate_keys['private'] is undefined
-    - hostvars[groups['dns'][0]] is undefined
-
-- fail:
-    msg: 'Missing required public DNS server(s)'
-  when:
-    - external_nsupdate_keys['public'] is undefined
-    - hostvars[groups['dns'][0]] is undefined
-
-- name: "Set the private DNS server to use the external value (if provided)"
-  set_fact:
-    private_dns_server: "{{ external_nsupdate_keys['private']['server'] }}"
-  when:
-    - external_nsupdate_keys['private'] is defined
-
-- name: "Set the private DNS server to use the provisioned value"
-  set_fact:
-    private_dns_server: "{{ hostvars[groups['dns'][0]].private_v4 }}"
-  when:
-    - private_dns_server is undefined
-
-- name: "Set the public DNS server to use the external value (if provided)"
-  set_fact:
-    public_dns_server: "{{ external_nsupdate_keys['public']['server'] }}"
-  when:
-    - external_nsupdate_keys['public'] is defined
-
-- name: "Set the public DNS server to use the provisioned value"
-  set_fact:
-    public_dns_server: "{{ hostvars[groups['dns'][0]].public_v4 }}"
-  when:
-    - public_dns_server is undefined

+ 0 - 4
roles/dns-views/defaults/main.yml

@@ -1,4 +0,0 @@
----
-external_nsupdate_keys: {}
-named_private_recursion: 'yes'
-named_public_recursion: 'no'

+ 0 - 30
roles/dns-views/tasks/main.yml

@@ -1,30 +0,0 @@
----
-- name: "Generate ACL list for DNS server"
-  set_fact:
-    acl_list: "{{ acl_list | default([]) + [ (hostvars[item]['private_v4'] + '/32') ] }}"
-  with_items: "{{ groups['cluster_hosts'] }}"
-
-- name: "Generate the private view"
-  set_fact:
-    private_named_view:
-      - name: "private"
-        recursion: "{{ named_private_recursion }}"
-        acl_entry: "{{ acl_list }}"
-        zone:
-          - dns_domain: "{{ full_dns_domain }}"
-        forwarder: "{{ public_dns_nameservers }}"
-  when: external_nsupdate_keys['private'] is undefined
-
-- name: "Generate the public view"
-  set_fact:
-    public_named_view:
-      - name: "public"
-        recursion: "{{ named_public_recursion }}"
-        zone:
-          - dns_domain: "{{ full_dns_domain }}"
-        forwarder: "{{ public_dns_nameservers }}"
-  when: external_nsupdate_keys['public'] is undefined
-
-- name: "Generate the final named_config_views"
-  set_fact:
-    named_config_views: "{{ private_named_view|default([]) + public_named_view|default([]) }}"

+ 0 - 7
roles/docker-storage-setup/defaults/main.yaml

@@ -1,7 +0,0 @@
----
-docker_dev: "/dev/sdb"
-docker_vg: "docker-vol"
-docker_data_size: "95%VG"
-docker_dm_basesize: "3G"
-container_root_lv_name: "dockerlv"
-container_root_lv_mount_path: "/var/lib/docker"

+ 0 - 26
roles/hostnames/tasks/main.yaml

@@ -1,26 +0,0 @@
----
-- name: Setting Hostname Fact
-  set_fact:
-    new_hostname: "{{ custom_hostname | default(inventory_hostname_short) }}"
-
-- name: Setting FQDN Fact
-  set_fact:
-    new_fqdn: "{{ new_hostname }}.{{ full_dns_domain }}"
-
-- name: Setting hostname and DNS domain
-  hostname: name="{{ new_fqdn }}"
-
-- name: Check for cloud.cfg
-  stat: path=/etc/cloud/cloud.cfg
-  register: cloud_cfg
-
-- name: Prevent cloud-init updates of hostname/fqdn (if applicable)
-  lineinfile:
-    dest: /etc/cloud/cloud.cfg
-    state: present
-    regexp: "{{ item.regexp }}"
-    line: "{{ item.line }}"
-  with_items:
-    - { regexp: '^ - set_hostname', line: '# - set_hostname' }
-    - { regexp: '^ - update_hostname', line: '# - update_hostname' }
-  when: cloud_cfg.stat.exists == True

+ 0 - 12
roles/hostnames/test/inv

@@ -1,12 +0,0 @@
-[all:vars]
-dns_domain=example.com
-
-[openshift_masters]
-192.168.124.41 dns_private_ip=1.1.1.41 dns_public_ip=192.168.124.41
-192.168.124.117 dns_private_ip=1.1.1.117 dns_public_ip=192.168.124.117
-
-[openshift_nodes]
-192.168.124.40  dns_private_ip=1.1.1.40 dns_public_ip=192.168.124.40
-
-#[dns]
-#192.168.124.117 dns_private_ip=1.1.1.117

+ 0 - 1
roles/hostnames/test/roles

@@ -1 +0,0 @@
-../../../roles/

+ 0 - 4
roles/hostnames/test/test.yaml

@@ -1,4 +0,0 @@
----
-- hosts: all
-  roles:
-    - role: hostnames

+ 0 - 2
roles/hostnames/vars/main.yaml

@@ -1,2 +0,0 @@
----
-counter: 1

+ 0 - 28
roles/hostnames/vars/records.yaml

@@ -1,28 +0,0 @@
----
-- name: "Building Records"
-  set_fact:
-    dns_records_add:
-      - view: private
-        zone: example.com
-        entries:
-          - type: A
-            hostname: master1.example.com
-            ip: 172.16.15.94
-          - type: A
-            hostname: node1.example.com
-            ip: 172.16.15.86
-          - type: A
-            hostname: node2.example.com
-            ip: 172.16.15.87
-      - view: public
-        zone: example.com
-        entries:
-          - type: A
-            hostname: master1.example.com
-            ip: 10.3.10.116
-          - type: A
-            hostname: node1.example.com
-            ip: 10.3.11.46
-          - type: A
-            hostname: node2.example.com
-            ip: 10.3.12.6

+ 0 - 13
roles/openshift-prep/defaults/main.yml

@@ -1,13 +0,0 @@
----
-# Defines either to install required packages and update all
-manage_packages: true
-install_debug_packages: false
-required_packages:
-  - wget
-  - git
-  - net-tools
-  - bind-utils
-  - bridge-utils
-debug_packages:
-  - bash-completion
-  - vim-enhanced

+ 0 - 4
roles/openshift-prep/tasks/main.yml

@@ -1,4 +0,0 @@
----
-# Starting Point for OpenShift Installation and Configuration
-- include: prerequisites.yml
-  tags: [prerequisites]

+ 0 - 37
roles/openshift-prep/tasks/prerequisites.yml

@@ -1,37 +0,0 @@
----
-- name: "Cleaning yum repositories"
-  command: "yum clean all"
-
-- name: "Install required packages"
-  yum:
-    name: "{{ item }}"
-    state: latest
-  with_items: "{{ required_packages }}"
-  when: manage_packages|bool
-
-- name: "Install debug packages (optional)"
-  yum:
-    name: "{{ item }}"
-    state: latest
-  with_items: "{{ debug_packages }}"
-  when: install_debug_packages|bool
-
-- name: "Update all packages (this can take a very long time)"
-  yum:
-    name: '*'
-    state: latest
-  when: manage_packages|bool
-
-- name: "Verify hostname"
-  shell: hostnamectl status | awk "/Static hostname/"'{ print $3 }'
-  register: hostname_fqdn
-
-- name: "Set hostname if required"
-  hostname:
-    name: "{{ ansible_fqdn }}"
-  when: hostname_fqdn.stdout != ansible_fqdn
-
-- name: "Verify SELinux is enforcing"
-  fail:
-    msg: "SELinux is required for OpenShift and has been detected as '{{ ansible_selinux.config_mode }}'"
-  when: ansible_selinux.config_mode != "enforcing"

+ 49 - 0
roles/openshift_openstack/defaults/main.yml

@@ -0,0 +1,49 @@
+---
+
+stack_state: 'present'
+
+ssh_ingress_cidr: 0.0.0.0/0
+node_ingress_cidr: 0.0.0.0/0
+master_ingress_cidr: 0.0.0.0/0
+lb_ingress_cidr: 0.0.0.0/0
+bastion_ingress_cidr: 0.0.0.0/0
+num_etcd: 0
+num_masters: 1
+num_nodes: 1
+num_dns: 1
+num_infra: 1
+nodes_to_remove: []
+etcd_volume_size: 2
+dns_volume_size: 1
+lb_volume_size: 5
+use_bastion: False
+ui_ssh_tunnel: False
+provider_network: False
+
+
+openshift_cluster_node_labels:
+  app:
+    region: primary
+  infra:
+    region: infra
+
+install_debug_packages: false
+required_packages:
+  - docker
+  - NetworkManager
+  - wget
+  - git
+  - net-tools
+  - bind-utils
+  - bridge-utils
+debug_packages:
+  - bash-completion
+  - vim-enhanced
+
+# container-storage-setup
+docker_dev: "/dev/sdb"
+docker_vg: "docker-vol"
+docker_data_size: "95%VG"
+docker_dm_basesize: "3G"
+container_root_lv_name: "dockerlv"
+container_root_lv_mount_path: "/var/lib/docker"

+ 109 - 0
roles/openshift_openstack/tasks/check-prerequisites.yml

@@ -0,0 +1,109 @@
+---
+# Check ansible
+- name: Check Ansible version
+  assert:
+    that: >
+      (ansible_version.major == 2 and ansible_version.minor >= 3) or
+      (ansible_version.major > 2)
+    msg: "Ansible version must be at least 2.3"
+
+# Check shade
+- name: Try to import python module shade
+  command: python -c "import shade"
+  ignore_errors: yes
+  register: shade_result
+- name: Check if shade is installed
+  assert:
+    that: 'shade_result.rc == 0'
+    msg: "Python module shade is not installed"
+
+# Check jmespath
+- name: Try to import python module jmespath
+  command: python -c "import jmespath"
+  ignore_errors: yes
+  register: jmespath_result
+- name: Check if jmespath is installed
+  assert:
+    that: 'jmespath_result.rc == 0'
+    msg: "Python module jmespath is not installed"
+
+# Check python-dns
+- name: Try to import python DNS module
+  command: python -c "import dns"
+  ignore_errors: yes
+  register: pythondns_result
+- name: Check if python-dns is installed
+  assert:
+    that: 'pythondns_result.rc == 0'
+    msg: "Python module python-dns is not installed"
+
+# Check jinja2
+- name: Try to import jinja2 module
+  command: python -c "import jinja2"
+  ignore_errors: yes
+  register: jinja_result
+- name: Check if jinja2 is installed
+  assert:
+    that: 'jinja_result.rc == 0'
+    msg: "Python module jinja2 is not installed"
+
+# Check Glance image
+- name: Try to get image facts
+  os_image_facts:
+    image: "{{ openstack_default_image_name }}"
+  register: image_result
+- name: Check that image is available
+  assert:
+    that: "image_result.ansible_facts.openstack_image"
+    msg: "Image {{ openstack_default_image_name }} is not available"
+
+# Check network name
+- name: Try to get network facts
+  os_networks_facts:
+    name: "{{ openstack_external_network_name }}"
+  register: network_result
+  when: not openstack_provider_network_name|default(None)
+- name: Check that network is available
+  assert:
+    that: "network_result.ansible_facts.openstack_networks"
+    msg: "Network {{ openstack_external_network_name }} is not available"
+  when: not openstack_provider_network_name|default(None)
+
+# Check keypair
+# TODO kpilatov: there is no Ansible module for getting OS keypairs
+#                (os_keypair is not suitable for this)
+#                this method does not force python-openstackclient dependency
+- name: Try to show keypair
+  command: >
+           python -c 'import shade; cloud = shade.openstack_cloud();
+           exit(cloud.get_keypair("{{ openstack_ssh_public_key }}") is None)'
+  ignore_errors: yes
+  register: key_result
+- name: Check that keypair is available
+  assert:
+    that: 'key_result.rc == 0'
+    msg: "Keypair {{ openstack_ssh_public_key }} is not available"
+
+# Check that custom images are available
+- include: custom_image_check.yaml
+  with_items:
+  - "{{ openstack_master_image }}"
+  - "{{ openstack_infra_image }}"
+  - "{{ openstack_node_image }}"
+  - "{{ openstack_lb_image }}"
+  - "{{ openstack_etcd_image }}"
+  - "{{ openstack_dns_image }}"
+  loop_control:
+    loop_var: image
+
+# Check that custom flavors are available
+- include: custom_flavor_check.yaml
+  with_items:
+  - "{{ master_flavor }}"
+  - "{{ infra_flavor }}"
+  - "{{ node_flavor }}"
+  - "{{ lb_flavor }}"
+  - "{{ etcd_flavor }}"
+  - "{{ dns_flavor }}"
+  loop_control:
+    loop_var: flavor

+ 6 - 0
roles/openshift_openstack/tasks/cleanup.yml

@@ -0,0 +1,6 @@
+---
+
+- name: cleanup temp files
+  file:
+    path: "{{ stack_template_pre.path }}"
+    state: absent

+ 0 - 9
roles/docker-storage-setup/tasks/main.yaml

@@ -1,7 +1,4 @@
 ---
-- name: stop docker
-  service: name=docker state=stopped
-
 - block:
     - name: create the docker-storage config file
       template:
@@ -38,9 +35,3 @@
   # TODO(shadower): Find out which CentOS version supports overlayfs2
   when:
     - ansible_distribution == "CentOS"
-
-- name: Install Docker
-  package: name=docker state=present
-
-- name: start docker
-  service: name=docker state=restarted enabled=true

playbooks/openstack/openshift-cluster/custom_flavor_check.yaml → roles/openshift_openstack/tasks/custom_flavor_check.yaml


+ 1 - 0
playbooks/openstack/openshift-cluster/custom_image_check.yaml

@@ -3,6 +3,7 @@
   os_image_facts:
     image: "{{ image }}"
   register: image_result
+
 - name: Check that custom image is available
   assert:
     that: "image_result.ansible_facts.openstack_image"

+ 26 - 0
roles/openshift_openstack/tasks/generate-templates.yml

@@ -0,0 +1,26 @@
+---
+- name: create HOT stack template prefix
+  register: stack_template_pre
+  tempfile:
+    state: directory
+    prefix: openshift-ansible
+
+- name: set template paths
+  set_fact:
+    stack_template_path: "{{ stack_template_pre.path }}/stack.yaml"
+    user_data_template_path: "{{ stack_template_pre.path }}/user-data"
+
+- name: generate HOT stack template from jinja2 template
+  template:
+    src: heat_stack.yaml.j2
+    dest: "{{ stack_template_path }}"
+
+- name: generate HOT server template from jinja2 template
+  template:
+    src: heat_stack_server.yaml.j2
+    dest: "{{ stack_template_pre.path }}/server.yaml"
+
+- name: generate user_data from jinja2 template
+  template:
+    src: user_data.j2
+    dest: "{{ user_data_template_path }}"

+ 33 - 0
roles/openshift_openstack/tasks/hostname.yml

@@ -0,0 +1,33 @@
+---
+- name: "Verify hostname"
+  command: hostnamectl status --static
+  register: hostname_fqdn
+
+- name: "Set hostname if required"
+  when: hostname_fqdn.stdout != ansible_fqdn
+  block:
+  - name: Setting Hostname Fact
+    set_fact:
+      new_hostname: "{{ custom_hostname | default(inventory_hostname_short) }}"
+
+  - name: Setting FQDN Fact
+    set_fact:
+      new_fqdn: "{{ new_hostname }}.{{ full_dns_domain }}"
+
+  - name: Setting hostname and DNS domain
+    hostname: name="{{ new_fqdn }}"
+
+  - name: Check for cloud.cfg
+    stat: path=/etc/cloud/cloud.cfg
+    register: cloud_cfg
+
+  - name: Prevent cloud-init updates of hostname/fqdn (if applicable)
+    lineinfile:
+      dest: /etc/cloud/cloud.cfg
+      state: present
+      regexp: "{{ item.regexp }}"
+      line: "{{ item.line }}"
+    with_items:
+    - { regexp: '^ - set_hostname', line: '# - set_hostname' }
+    - { regexp: '^ - update_hostname', line: '# - update_hostname' }
+    when: cloud_cfg.stat.exists

playbooks/openstack/openshift-cluster/net_vars_check.yaml → roles/openshift_openstack/tasks/net_vars_check.yaml


+ 11 - 0
roles/openshift_openstack/tasks/node-configuration.yml

@@ -0,0 +1,11 @@
+---
+- include: hostname.yml
+
+- include: container-storage-setup.yml
+
+- include: node-network.yml
+
+- name: "Verify SELinux is enforcing"
+  fail:
+    msg: "SELinux is required for OpenShift and has been detected as '{{ ansible_selinux.config_mode }}'"
+  when: ansible_selinux.config_mode != "enforcing"

+ 2 - 5
roles/node-network-manager/tasks/main.yml

@@ -1,9 +1,4 @@
 ---
-- name: install NetworkManager
-  package:
-    name: NetworkManager
-    state: present
-
 - name: configure NetworkManager
   lineinfile:
     dest: "/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4['interface'] }}"
@@ -20,3 +15,5 @@
     name: NetworkManager
     state: restarted
     enabled: yes
+
+# TODO(shadower): add the flannel interface tasks from post-provision-openstack.yml

+ 15 - 0
roles/openshift_openstack/tasks/node-packages.yml

@@ -0,0 +1,15 @@
+---
+# TODO: subscribe to RHEL and install docker and other packages here
+
+- name: Install required packages
+  yum:
+    name: "{{ item }}"
+    state: latest
+  with_items: "{{ required_packages }}"
+
+- name: Install debug packages (optional)
+  yum:
+    name: "{{ item }}"
+    state: latest
+  with_items: "{{ debug_packages }}"
+  when: install_debug_packages|bool

+ 5 - 0
roles/openshift_openstack/tasks/populate-dns.yml

@@ -0,0 +1,5 @@
+# TODO: use nsupdate to populate the DNS servers using the keys
+# specified in the inventory.
+
+# this is an optional step -- the deployers may do whatever else they
+# wish here.

+ 59 - 0
roles/openshift_openstack/tasks/prepare-and-format-cinder-volume.yaml

@@ -0,0 +1,59 @@
+---
+- name: Attach the volume to the VM
+  os_server_volume:
+    state: present
+    server: "{{ groups['masters'][0] }}"
+    volume: "{{ cinder_volume }}"
+  register: volume_attachment
+
+- set_fact:
+    attached_device: >-
+      {{ volume_attachment['attachments']|json_query("[?volume_id=='" + cinder_volume + "'].device | [0]") }}
+
+- delegate_to: "{{ groups['masters'][0] }}"
+  block:
+  - name: Wait for the device to appear
+    wait_for: path={{ attached_device }}
+
+  - name: Create a temp directory for mounting the volume
+    tempfile:
+      prefix: cinder-volume
+      state: directory
+    register: cinder_mount_dir
+
+  - name: Format the device
+    filesystem:
+      fstype: "{{ cinder_fs }}"
+      dev: "{{ attached_device }}"
+
+  - name: Mount the device
+    mount:
+      name: "{{ cinder_mount_dir.path }}"
+      src: "{{ attached_device }}"
+      state: mounted
+      fstype: "{{ cinder_fs }}"
+
+  - name: Change mode on the filesystem
+    file:
+      path: "{{ cinder_mount_dir.path }}"
+      state: directory
+      recurse: true
+      mode: 0777
+
+  - name: Unmount the device
+    mount:
+      name: "{{ cinder_mount_dir.path }}"
+      src: "{{ attached_device }}"
+      state: absent
+      fstype: "{{ cinder_fs }}"
+
+  - name: Delete the temp directory
+    file:
+      name: "{{ cinder_mount_dir.path }}"
+      state: absent
+
+- name: Detach the volume from the VM
+  os_server_volume:
+    state: absent
+    server: "{{ groups['masters'][0] }}"
+    volume: "{{ cinder_volume }}"

+ 30 - 0
roles/openshift_openstack/tasks/provision.yml

@@ -0,0 +1,30 @@
+---
+- name: Generate the templates
+  include: generate-templates.yml
+  when:
+  - stack_state == 'present'
+
+- name: Handle the Stack (create/delete)
+  ignore_errors: False
+  register: stack_create
+  os_stack:
+    name: "{{ stack_name }}"
+    state: "{{ stack_state }}"
+    template: "{{ stack_template_path | default(omit) }}"
+    wait: yes
+
+- name: Add the new nodes to the inventory
+  meta: refresh_inventory
+
+- name: Populate DNS entries
+  include: populate-dns.yml
+  when:
+  - stack_state == 'present'
+
+- name: CleanUp
+  include: cleanup.yml
+  when:
+  - stack_state == 'present'
+
+# TODO(shadower): create the registry and PV Cinder volumes if specified
+# and include the `prepare-and-format-cinder-volume` tasks to set it up

roles/openstack-stack/tasks/subnet_update_dns_servers.yaml → roles/openshift_openstack/tasks/subnet_update_dns_servers.yaml


roles/docker-storage-setup/templates/docker-storage-setup-dm.j2 → roles/openshift_openstack/templates/docker-storage-setup-dm.j2


roles/docker-storage-setup/templates/docker-storage-setup-overlayfs.j2 → roles/openshift_openstack/templates/docker-storage-setup-overlayfs.j2


+ 888 - 0
roles/openshift_openstack/templates/heat_stack.yaml.j2

@@ -0,0 +1,888 @@
+heat_template_version: 2016-10-14
+
+description: OpenShift cluster
+
+parameters:
+
+outputs:
+
+  etcd_names:
+    description: Name of the etcds
+    value: { get_attr: [ etcd, name ] }
+
+  etcd_ips:
+    description: IPs of the etcds
+    value: { get_attr: [ etcd, private_ip ] }
+
+  etcd_floating_ips:
+    description: Floating IPs of the etcds
+    value: { get_attr: [ etcd, floating_ip ] }
+
+  master_names:
+    description: Name of the masters
+    value: { get_attr: [ masters, name ] }
+
+  master_ips:
+    description: IPs of the masters
+    value: { get_attr: [ masters, private_ip ] }
+
+  master_floating_ips:
+    description: Floating IPs of the masters
+    value: { get_attr: [ masters, floating_ip ] }
+
+  node_names:
+    description: Name of the nodes
+    value: { get_attr: [ compute_nodes, name ] }
+
+  node_ips:
+    description: IPs of the nodes
+    value: { get_attr: [ compute_nodes, private_ip ] }
+
+  node_floating_ips:
+    description: Floating IPs of the nodes
+    value: { get_attr: [ compute_nodes, floating_ip ] }
+
+  infra_names:
+    description: Name of the nodes
+    value: { get_attr: [ infra_nodes, name ] }
+
+  infra_ips:
+    description: IPs of the nodes
+    value: { get_attr: [ infra_nodes, private_ip ] }
+
+  infra_floating_ips:
+    description: Floating IPs of the nodes
+    value: { get_attr: [ infra_nodes, floating_ip ] }
+
+{% if num_dns|int > 0 %}
+  dns_name:
+    description: Name of the DNS
+    value:
+      get_attr:
+        - dns
+        - name
+
+  dns_floating_ips:
+    description: Floating IPs of the DNS
+    value: { get_attr: [ dns, floating_ip ] }
+
+  dns_private_ips:
+    description: Private IPs of the DNS
+    value: { get_attr: [ dns, private_ip ] }
+{% endif %}
+
+conditions:
+  no_floating: {% if provider_network or use_bastion|bool %}true{% else %}false{% endif %}
+
+resources:
+
+{% if not provider_network %}
+  net:
+    type: OS::Neutron::Net
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-net
+          params:
+            cluster_id: {{ stack_name }}
+
+  subnet:
+    type: OS::Neutron::Subnet
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-subnet
+          params:
+            cluster_id: {{ stack_name }}
+      network: { get_resource: net }
+      cidr:
+        str_replace:
+          template: subnet_24_prefix.0/24
+          params:
+            subnet_24_prefix: {{ subnet_prefix }}
+      allocation_pools:
+        - start:
+            str_replace:
+              template: subnet_24_prefix.3
+              params:
+                subnet_24_prefix: {{ subnet_prefix }}
+          end:
+            str_replace:
+              template: subnet_24_prefix.254
+              params:
+                subnet_24_prefix: {{ subnet_prefix }}
+      dns_nameservers:
+{% for nameserver in dns_nameservers %}
+        - {{ nameserver }}
+{% endfor %}
+
+{% if openshift_use_flannel|default(False)|bool %}
+  data_net:
+    type: OS::Neutron::Net
+    properties:
+      name: openshift-ansible-{{ stack_name }}-data-net
+      port_security_enabled: false
+
+  data_subnet:
+    type: OS::Neutron::Subnet
+    properties:
+      name: openshift-ansible-{{ stack_name }}-data-subnet
+      network: { get_resource: data_net }
+      cidr: {{ osm_cluster_network_cidr|default('10.128.0.0/14') }}
+      gateway_ip: null
+{% endif %}
+
+  router:
+    type: OS::Neutron::Router
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-router
+          params:
+            cluster_id: {{ stack_name }}
+      external_gateway_info:
+        network: {{ external_network }}
+
+  interface:
+    type: OS::Neutron::RouterInterface
+    properties:
+      router_id: { get_resource: router }
+      subnet_id: { get_resource: subnet }
+
+{% endif %}
+
+#  keypair:
+#    type: OS::Nova::KeyPair
+#    properties:
+#      name:
+#        str_replace:
+#          template: openshift-ansible-cluster_id-keypair
+#          params:
+#            cluster_id: {{ stack_name }}
+#      public_key: {{ ssh_public_key }}
+
+  common-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-common-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Basic ssh/icmp security group for cluster_id OpenShift cluster
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 22
+          port_range_max: 22
+          remote_ip_prefix: {{ ssh_ingress_cidr }}
+{% if use_bastion|bool %}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 22
+          port_range_max: 22
+          remote_ip_prefix: {{ bastion_ingress_cidr }}
+{% endif %}
+        - direction: ingress
+          protocol: icmp
+          remote_ip_prefix: {{ ssh_ingress_cidr }}
+
+{% if openstack_flat_secgrp|default(False)|bool %}
+  flat-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-flat-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Security group for cluster_id OpenShift cluster
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 4001
+          port_range_max: 4001
+        - direction: ingress
+          protocol: tcp
+          port_range_min: {{ openshift_master_api_port|default(8443) }}
+          port_range_max: {{ openshift_master_api_port|default(8443) }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: {{ openshift_master_console_port|default(8443) }}
+          port_range_max: {{ openshift_master_console_port|default(8443) }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 8053
+          port_range_max: 8053
+        - direction: ingress
+          protocol: udp
+          port_range_min: 8053
+          port_range_max: 8053
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 24224
+          port_range_max: 24224
+        - direction: ingress
+          protocol: udp
+          port_range_min: 24224
+          port_range_max: 24224
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2224
+          port_range_max: 2224
+        - direction: ingress
+          protocol: udp
+          port_range_min: 5404
+          port_range_max: 5405
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 9090
+          port_range_max: 9090
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2379
+          port_range_max: 2380
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 10250
+          port_range_max: 10250
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: udp
+          port_range_min: 10250
+          port_range_max: 10250
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 10255
+          port_range_max: 10255
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: udp
+          port_range_min: 10255
+          port_range_max: 10255
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: udp
+          port_range_min: 4789
+          port_range_max: 4789
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 30000
+          port_range_max: 32767
+          remote_ip_prefix: {{ node_ingress_cidr }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 30000
+          port_range_max: 32767
+          remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+{% else %}
+  master-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-master-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Security group for cluster_id OpenShift cluster master
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 4001
+          port_range_max: 4001
+        - direction: ingress
+          protocol: tcp
+          port_range_min: {{ openshift_master_api_port|default(8443) }}
+          port_range_max: {{ openshift_master_api_port|default(8443) }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: {{ openshift_master_console_port|default(8443) }}
+          port_range_max: {{ openshift_master_console_port|default(8443) }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 8053
+          port_range_max: 8053
+        - direction: ingress
+          protocol: udp
+          port_range_min: 8053
+          port_range_max: 8053
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 24224
+          port_range_max: 24224
+        - direction: ingress
+          protocol: udp
+          port_range_min: 24224
+          port_range_max: 24224
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2224
+          port_range_max: 2224
+        - direction: ingress
+          protocol: udp
+          port_range_min: 5404
+          port_range_max: 5405
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 9090
+          port_range_max: 9090
+{% if openshift_use_flannel|default(False)|bool %}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2379
+          port_range_max: 2379
+{% endif %}
+
+  etcd-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-etcd-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Security group for cluster_id etcd cluster
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2379
+          port_range_max: 2379
+          remote_mode: remote_group_id
+          remote_group_id: { get_resource: master-secgrp }
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 2380
+          port_range_max: 2380
+          remote_mode: remote_group_id
+
+  node-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-node-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Security group for cluster_id OpenShift cluster nodes
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 10250
+          port_range_max: 10250
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 10255
+          port_range_max: 10255
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: udp
+          port_range_min: 10255
+          port_range_max: 10255
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: udp
+          port_range_min: 4789
+          port_range_max: 4789
+          remote_mode: remote_group_id
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 30000
+          port_range_max: 32767
+          remote_ip_prefix: {{ node_ingress_cidr }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 30000
+          port_range_max: 32767
+          remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+{% endif %}
+
+  infra-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-infra-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Security group for cluster_id OpenShift infrastructure cluster nodes
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 80
+          port_range_max: 80
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 443
+          port_range_max: 443
+
+{% if num_dns|int > 0 %}
+  dns-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        str_replace:
+          template: openshift-ansible-cluster_id-dns-secgrp
+          params:
+            cluster_id: {{ stack_name }}
+      description:
+        str_replace:
+          template: Security group for cluster_id cluster DNS
+          params:
+            cluster_id: {{ stack_name }}
+      rules:
+        - direction: ingress
+          protocol: udp
+          port_range_min: 53
+          port_range_max: 53
+          remote_ip_prefix: {{ node_ingress_cidr }}
+        - direction: ingress
+          protocol: udp
+          port_range_min: 53
+          port_range_max: 53
+          remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 53
+          port_range_max: 53
+          remote_ip_prefix: {{ node_ingress_cidr }}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 53
+          port_range_max: 53
+          remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+{% endif %}
+
+{% if num_masters|int > 1 or ui_ssh_tunnel|bool %}
+  lb-secgrp:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name: openshift-ansible-{{ stack_name }}-lb-secgrp
+      description: Security group for {{ stack_name }} cluster Load Balancer
+      rules:
+      - direction: ingress
+        protocol: tcp
+        port_range_min: {{ openshift_master_api_port | default(8443) }}
+        port_range_max: {{ openshift_master_api_port | default(8443) }}
+        remote_ip_prefix: {{ lb_ingress_cidr | default(bastion_ingress_cidr) }}
+{% if ui_ssh_tunnel|bool %}
+      - direction: ingress
+        protocol: tcp
+        port_range_min: {{ openshift_master_api_port | default(8443) }}
+        port_range_max: {{ openshift_master_api_port | default(8443) }}
+        remote_ip_prefix: {{ ssh_ingress_cidr }}
+{% endif %}
+{% if openshift_master_console_port is defined and openshift_master_console_port != openshift_master_api_port %}
+      - direction: ingress
+        protocol: tcp
+        port_range_min: {{ openshift_master_console_port | default(8443) }}
+        port_range_max: {{ openshift_master_console_port | default(8443) }}
+        remote_ip_prefix: {{ lb_ingress_cidr | default(bastion_ingress_cidr) }}
+{% endif %}
+{% endif %}
+
+  etcd:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: {{ num_etcd }}
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ stack_name }}
+                k8s_type: {{ etcd_hostname | default('etcd') }}
+          cluster_env: {{ public_dns_domain }}
+          cluster_id:  {{ stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: etcds
+                cluster_id: {{ stack_name }}
+          type:        etcd
+          image:       {{ openstack_etcd_image | default(openstack_image) }}
+          flavor:      {{ etcd_flavor }}
+          key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% endif %}
+          secgrp:
+            - { get_resource: {% if openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}etcd-secgrp{% endif %} }
+            - { get_resource: common-secgrp }
+          floating_network:
+            if:
+              - no_floating
+              - null
+              - {{ external_network }}
+{% if use_bastion|bool or provider_network %}
+          attach_float_net: false
+{% endif %}
+          volume_size: {{ etcd_volume_size }}
+{% if not provider_network %}
+    depends_on:
+      - interface
+{% endif %}
+
+{% if master_server_group_policies|length > 0 %}
+  master_server_group:
+    type: OS::Nova::ServerGroup
+    properties:
+      name: master_server_group
+      policies: {{ master_server_group_policies }}
+{% endif %}
+{% if infra_server_group_policies|length > 0 %}
+  infra_server_group:
+    type: OS::Nova::ServerGroup
+    properties:
+      name: infra_server_group
+      policies: {{ infra_server_group_policies }}
+{% endif %}
+{% if num_masters|int > 1 %}
+  loadbalancer:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: 1
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ stack_name }}
+                k8s_type: {{ lb_hostname | default('lb') }}
+          cluster_env: {{ public_dns_domain }}
+          cluster_id:  {{ stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: lb
+                cluster_id: {{ stack_name }}
+          type:        lb
+          image:       {{ openstack_lb_image | default(openstack_image) }}
+          flavor:      {{ lb_flavor }}
+          key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% endif %}
+          secgrp:
+            - { get_resource: lb-secgrp }
+            - { get_resource: common-secgrp }
+{% if not provider_network %}
+          floating_network: {{ external_network }}
+{% endif %}
+          volume_size: {{ lb_volume_size }}
+{% if not provider_network %}
+    depends_on:
+      - interface
+{% endif %}
+{% endif %}
+
+  masters:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: {{ num_masters }}
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ stack_name }}
+                k8s_type: {{ master_hostname | default('master')}}
+          cluster_env: {{ public_dns_domain }}
+          cluster_id:  {{ stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: masters
+                cluster_id: {{ stack_name }}
+          type:        master
+          image:       {{ openstack_master_image | default(openstack_image) }}
+          flavor:      {{ master_flavor }}
+          key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% if openshift_use_flannel|default(False)|bool %}
+          attach_data_net: true
+          data_net:    { get_resource: data_net }
+          data_subnet: { get_resource: data_subnet }
+{% endif %}
+{% endif %}
+          secgrp:
+{% if openstack_flat_secgrp|default(False)|bool %}
+            - { get_resource: flat-secgrp }
+{% else %}
+            - { get_resource: master-secgrp }
+            - { get_resource: node-secgrp }
+{% if num_etcd|int == 0 %}
+            - { get_resource: etcd-secgrp }
+{% endif %}
+{% endif %}
+            - { get_resource: common-secgrp }
+          floating_network:
+            if:
+              - no_floating
+              - null
+              - {{ external_network }}
+{% if use_bastion|bool or provider_network %}
+          attach_float_net: false
+{% endif %}
+          volume_size: {{ master_volume_size }}
+{% if master_server_group_policies|length > 0 %}
+          scheduler_hints:
+            group: { get_resource: master_server_group }
+{% endif %}
+{% if not provider_network %}
+    depends_on:
+      - interface
+{% endif %}
+
+  compute_nodes:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: {{ num_nodes }}
+      removal_policies:
+      - resource_list: {{ nodes_to_remove }}
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: sub_type_k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ stack_name }}
+                sub_type_k8s_type: {{ node_hostname | default('app-node') }}
+          cluster_env: {{ public_dns_domain }}
+          cluster_id:  {{ stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: nodes
+                cluster_id: {{ stack_name }}
+          type:        node
+          subtype:     app
+          node_labels:
+{% for k, v in openshift_cluster_node_labels.app.iteritems() %}
+            {{ k|e }}: {{ v|e }}
+{% endfor %}
+          image:       {{ openstack_node_image | default(openstack_image) }}
+          flavor:      {{ node_flavor }}
+          key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% if openshift_use_flannel|default(False)|bool %}
+          attach_data_net: true
+          data_net:    { get_resource: data_net }
+          data_subnet: { get_resource: data_subnet }
+{% endif %}
+{% endif %}
+          secgrp:
+            - { get_resource: {% if openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}node-secgrp{% endif %} }
+            - { get_resource: common-secgrp }
+          floating_network:
+            if:
+              - no_floating
+              - null
+              - {{ external_network }}
+{% if use_bastion|bool or provider_network %}
+          attach_float_net: false
+{% endif %}
+          volume_size: {{ node_volume_size }}
+{% if not provider_network %}
+    depends_on:
+      - interface
+{% endif %}
+
+  infra_nodes:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: {{ num_infra }}
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: sub_type_k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ stack_name }}
+                sub_type_k8s_type: {{ infra_hostname | default('infranode') }}
+          cluster_env: {{ public_dns_domain }}
+          cluster_id:  {{ stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: infra
+                cluster_id: {{ stack_name }}
+          type:        node
+          subtype:     infra
+          node_labels:
+{% for k, v in openshift_cluster_node_labels.infra.iteritems() %}
+            {{ k|e }}: {{ v|e }}
+{% endfor %}
+          image:       {{ openstack_infra_image | default(openstack_image) }}
+          flavor:      {{ infra_flavor }}
+          key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% if openshift_use_flannel|default(False)|bool %}
+          attach_data_net: true
+          data_net:    { get_resource: data_net }
+          data_subnet: { get_resource: data_subnet }
+{% endif %}
+{% endif %}
+          secgrp:
+# TODO(bogdando) filter only required node rules into infra-secgrp
+{% if openstack_flat_secgrp|default(False)|bool %}
+            - { get_resource: flat-secgrp }
+{% else %}
+            - { get_resource: node-secgrp }
+{% endif %}
+{% if ui_ssh_tunnel|bool and num_masters|int < 2 %}
+            - { get_resource: lb-secgrp }
+{% endif %}
+            - { get_resource: infra-secgrp }
+            - { get_resource: common-secgrp }
+{% if not provider_network %}
+          floating_network: {{ external_network }}
+{% endif %}
+          volume_size: {{ infra_volume_size }}
+{% if infra_server_group_policies|length > 0 %}
+          scheduler_hints:
+            group: { get_resource: infra_server_group }
+{% endif %}
+{% if not provider_network %}
+    depends_on:
+      - interface
+{% endif %}
+
+{% if num_dns|int > 0 %}
+  dns:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: {{ num_dns }}
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ stack_name }}
+                k8s_type: {{ dns_hostname | default('dns') }}
+          cluster_env: {{ public_dns_domain }}
+          cluster_id:  {{ stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: dns
+                cluster_id: {{ stack_name }}
+          type:        dns
+          image:       {{ openstack_dns_image | default(openstack_image) }}
+          flavor:      {{ dns_flavor }}
+          key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% endif %}
+          secgrp:
+            - { get_resource: dns-secgrp }
+            - { get_resource: common-secgrp }
+{% if not provider_network %}
+          floating_network: {{ external_network }}
+{% endif %}
+          volume_size: {{ dns_volume_size }}
+{% if not provider_network %}
+    depends_on:
+      - interface
+{% endif %}
+{% endif %}
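The security-group wiring in the template above branches on `openstack_flat_secgrp` and `num_etcd`. A minimal Python sketch of the selection logic for master nodes (a hypothetical helper for illustration, not part of the role):

```python
def master_security_groups(flat_secgrp=False, num_etcd=0):
    """Mirror the masters resource's secgrp branches: a single flat
    group when openstack_flat_secgrp is set; otherwise the master and
    node groups, plus the etcd group when there are no dedicated etcd
    nodes. The common group is always appended last."""
    if flat_secgrp:
        groups = ["flat-secgrp"]
    else:
        groups = ["master-secgrp", "node-secgrp"]
        if num_etcd == 0:
            # masters co-host etcd when the cluster has no etcd servers
            groups.append("etcd-secgrp")
    groups.append("common-secgrp")
    return groups

print(master_security_groups())
print(master_security_groups(flat_secgrp=True, num_etcd=3))
```

The same pattern repeats for compute and infra nodes, which pick `flat-secgrp` or `node-secgrp` before appending `common-secgrp`.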

+ 270 - 0
roles/openshift_openstack/templates/heat_stack_server.yaml.j2

@@ -0,0 +1,270 @@
+heat_template_version: 2016-10-14
+
+description: OpenShift cluster server
+
+parameters:
+
+  name:
+    type: string
+    label: Name
+    description: Name
+
+  group:
+    type: string
+    label: Host Group
+    description: The Primary Ansible Host Group
+    default: host
+
+  cluster_env:
+    type: string
+    label: Cluster environment
+    description: Environment of the cluster
+
+  cluster_id:
+    type: string
+    label: Cluster ID
+    description: Identifier of the cluster
+
+  type:
+    type: string
+    label: Type
+    description: Type master or node
+
+  subtype:
+    type: string
+    label: Sub-type
+    description: Sub-type compute or infra for nodes, default otherwise
+    default: default
+
+  key_name:
+    type: string
+    label: Key name
+    description: Key name of keypair
+
+  image:
+    type: string
+    label: Image
+    description: Name of the image
+
+  flavor:
+    type: string
+    label: Flavor
+    description: Name of the flavor
+
+  net:
+    type: string
+    label: Net ID
+    description: Net resource
+
+  net_name:
+    type: string
+    label: Net name
+    description: Net name
+
+{% if not provider_network %}
+  subnet:
+    type: string
+    label: Subnet ID
+    description: Subnet resource
+{% endif %}
+
+{% if openshift_use_flannel|default(False)|bool %}
+  attach_data_net:
+    type: boolean
+    default: false
+    label: Attach-data-net
+    description: A switch for data port connection
+
+  data_net:
+    type: string
+    default: ''
+    label: Net ID
+    description: Net resource
+
+{% if not provider_network %}
+  data_subnet:
+    type: string
+    default: ''
+    label: Subnet ID
+    description: Subnet resource
+{% endif %}
+{% endif %}
+
+  secgrp:
+    type: comma_delimited_list
+    label: Security groups
+    description: Security group resources
+
+  attach_float_net:
+    type: boolean
+    default: true
+    label: Attach-float-net
+    description: A switch for floating network port connection
+
+{% if not provider_network %}
+  floating_network:
+    type: string
+    default: ''
+    label: Floating network
+    description: Network to allocate floating IP from
+{% endif %}
+
+  availability_zone:
+    type: string
+    description: The Availability Zone to launch the instance.
+    default: nova
+
+  volume_size:
+    type: number
+    description: Size of the volume to be created.
+    default: 1
+    constraints:
+      - range: { min: 1, max: 1024 }
+        description: must be between 1 and 1024 GB.
+
+  node_labels:
+    type: json
+    description: OpenShift Node Labels
+    default: {"region": "default" }
+
+  scheduler_hints:
+    type: json
+    description: Server scheduler hints.
+    default: {}
+
+outputs:
+
+  name:
+    description: Name of the server
+    value: { get_attr: [ server, name ] }
+
+  private_ip:
+    description: Private IP of the server
+    value:
+      get_attr:
+        - server
+        - addresses
+        - { get_param: net_name }
+        - 0
+        - addr
+
+  floating_ip:
+    description: Floating IP of the server
+    value:
+      get_attr:
+        - server
+        - addresses
+        - { get_param: net_name }
+{% if provider_network %}
+        - 0
+{% else %}
+        - 1
+{% endif %}
+        - addr
+
+conditions:
+  no_floating: {not: { get_param: attach_float_net} }
+{% if openshift_use_flannel|default(False)|bool %}
+  no_data_subnet: {not: { get_param: attach_data_net} }
+{% endif %}
+
+resources:
+
+  server:
+    type: OS::Nova::Server
+    properties:
+      name:      { get_param: name }
+      key_name:  { get_param: key_name }
+      image:     { get_param: image }
+      flavor:    { get_param: flavor }
+      networks:
+{% if openshift_use_flannel|default(False)|bool %}
+        if:
+          - no_data_subnet
+{% if use_trunk_ports|default(false)|bool %}
+          - - port:  { get_attr: [trunk-port, port_id] }
+{% else %}
+          - - port:  { get_resource: port }
+{% endif %}
+{% if use_trunk_ports|default(false)|bool %}
+          - - port:  { get_attr: [trunk-port, port_id] }
+{% else %}
+          - - port:  { get_resource: port }
+            - port:  { get_resource: data_port }
+{% endif %}
+
+{% else %}
+{% if use_trunk_ports|default(false)|bool %}
+        - port:  { get_attr: [trunk-port, port_id] }
+{% else %}
+        - port:  { get_resource: port }
+{% endif %}
+{% endif %}
+      user_data:
+        get_file: user-data
+      user_data_format: RAW
+      user_data_update_policy: IGNORE
+      metadata:
+        group: { get_param: group }
+        environment: { get_param: cluster_env }
+        clusterid: { get_param: cluster_id }
+        host-type: { get_param: type }
+        sub-host-type:    { get_param: subtype }
+        node_labels: { get_param: node_labels }
+      scheduler_hints: { get_param: scheduler_hints }
+
+{% if use_trunk_ports|default(false)|bool %}
+  trunk-port:
+    type: OS::Neutron::Trunk
+    properties:
+      name: { get_param: name }
+      port: { get_resource: port }
+{% endif %}
+
+  port:
+    type: OS::Neutron::Port
+    properties:
+      network: { get_param: net }
+{% if not provider_network %}
+      fixed_ips:
+        - subnet: { get_param: subnet }
+{% endif %}
+      security_groups: { get_param: secgrp }
+
+{% if openshift_use_flannel|default(False)|bool %}
+  data_port:
+    type: OS::Neutron::Port
+    condition: { not: no_data_subnet }
+    properties:
+      network: { get_param: data_net }
+      port_security_enabled: false
+{% if not provider_network %}
+      fixed_ips:
+        - subnet: { get_param: data_subnet }
+{% endif %}
+{% endif %}
+
+{% if not provider_network %}
+  floating-ip:
+    condition: { not: no_floating }
+    type: OS::Neutron::FloatingIP
+    properties:
+      floating_network: { get_param: floating_network }
+      port_id: { get_resource: port }
+{% endif %}
+
+{% if not ephemeral_volumes|default(false)|bool %}
+  cinder_volume:
+    type: OS::Cinder::Volume
+    properties:
+      size: { get_param: volume_size }
+      availability_zone: { get_param: availability_zone }
+
+  volume_attachment:
+    type: OS::Cinder::VolumeAttachment
+    properties:
+      volume_id: { get_resource: cinder_volume }
+      instance_uuid: { get_resource: server }
+      mountpoint: /dev/sdb
+{% endif %}
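The `outputs` section of the server template reads the server's address list positionally: index 0 is the fixed IP on the named cluster network, and index 1 is the floating IP that Neutron appends, unless a provider network is used, in which case the only address doubles as the externally reachable one. A stdlib sketch of that lookup (the `addresses` shape mimics Nova's attribute; the names are illustrative):

```python
def server_ips(addresses, net_name, provider_network=False):
    """Resolve private and floating IPs the way the template's
    outputs do: positionally within the named network's list."""
    private_ip = addresses[net_name][0]["addr"]
    floating_index = 0 if provider_network else 1
    floating_ip = addresses[net_name][floating_index]["addr"]
    return private_ip, floating_ip

# Shape resembling an OS::Nova::Server "addresses" attribute.
addresses = {"openshift-ansible-mycluster-net": [
    {"addr": "192.168.99.10"},   # fixed IP on the cluster subnet
    {"addr": "172.16.0.20"},     # floating IP appended by Neutron
]}
print(server_ips(addresses, "openshift-ansible-mycluster-net"))
```

This positional dependency is why the template switches the floating index to 0 under `provider_network`.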

+ 13 - 0
roles/openshift_openstack/templates/user_data.j2

@@ -0,0 +1,13 @@
+#cloud-config
+disable_root: true
+
+system_info:
+  default_user:
+    name: openshift
+    sudo: ["ALL=(ALL) NOPASSWD: ALL"]
+
+write_files:
+  - path: /etc/sudoers.d/00-openshift-no-requiretty
+    permissions: 440
+    content: |
+      Defaults:openshift !requiretty
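Note that cloud-init handles `permissions` most reliably as an octal string (e.g. `'0440'`); a bare `440` relies on the value being read as octal. A rough stdlib emulation of what `write_files` does with this entry (the helper and paths are illustrative, not cloud-init's actual code):

```python
import os
import tempfile

def apply_write_files(entries, root):
    """Write each entry under root, interpreting permissions as
    octal text the way this template intends (0440 -> r--r-----)."""
    for entry in entries:
        path = os.path.join(root, entry["path"].lstrip("/"))
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(entry["content"])
        os.chmod(path, int(str(entry["permissions"]), 8))

root = tempfile.mkdtemp()
apply_write_files([{
    "path": "/etc/sudoers.d/00-openshift-no-requiretty",
    "permissions": 440,
    "content": "Defaults:openshift !requiretty\n",
}], root)
target = os.path.join(root, "etc/sudoers.d/00-openshift-no-requiretty")
print(oct(os.stat(target).st_mode & 0o777))
```

The drop-in lets the `openshift` user run Ansible tasks over a non-tty SSH session without tripping `requiretty`.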

playbooks/openstack/openshift-cluster/stack_params.yaml → roles/openshift_openstack/vars/main.yml


+ 0 - 1
roles/openstack-stack/tasks/main.yml

@@ -1,5 +1,4 @@
 ---
-
 - name: Generate the templates
   include: generate-templates.yml
   when: