
updating rhsm-sub and rhsm-repos

Davis Phillips 7 years ago
parent
commit
4ec2795d29
31 changed files with 996 additions and 123 deletions
  1. 234 20
      playbooks/provisioning/openstack/README.md
  2. 12 0
      playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml
  3. 9 0
      playbooks/provisioning/openstack/custom_flavor_check.yaml
  4. 9 0
      playbooks/provisioning/openstack/custom_image_check.yaml
  5. 7 11
      playbooks/provisioning/openstack/post-provision-openstack.yml
  6. 5 0
      playbooks/provisioning/openstack/pre-install.yml
  7. 16 0
      playbooks/provisioning/openstack/pre_tasks.yml
  8. 44 0
      playbooks/provisioning/openstack/prerequisites.yml
  9. 9 2
      playbooks/provisioning/openstack/provision-openstack.yml
  10. 1 1
      playbooks/provisioning/openstack/sample-inventory/ansible.cfg
  11. 0 5
      playbooks/provisioning/openstack/sample-inventory/clouds.yaml
  12. 3 6
      playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml
  13. 72 2
      playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml
  14. 34 10
      playbooks/provisioning/openstack/stack_params.yaml
  15. 39 2
      playbooks/provisioning/openstack/openstack_dns_records.yml
  16. 4 0
      roles/dns-views/defaults/main.yml
  17. 6 1
      playbooks/provisioning/openstack/openstack_dns_views.yml
  18. 6 1
      roles/openstack-stack/defaults/main.yml
  19. 7 2
      roles/openstack-stack/tasks/main.yml
  20. 1 0
      roles/openstack-stack/tasks/subnet_update_dns_servers.yaml
  21. 142 51
      roles/openstack-stack/templates/heat_stack.yaml.j2
  22. 15 0
      roles/openstack-stack/templates/heat_stack_server.yaml.j2
  23. 152 0
      roles/openstack-stack/templates/heat_stack_server_nofloating.yaml.j2
  24. 21 0
      roles/static_inventory/defaults/main.yml
  25. 11 0
      roles/static_inventory/tasks/main.yml
  26. 59 8
      roles/static_inventory/tasks/openstack.yml
  27. 13 0
      roles/static_inventory/tasks/sshconfig.yml
  28. 15 0
      roles/static_inventory/tasks/sshtun.yml
  29. 9 1
      roles/static_inventory/templates/inventory.j2
  30. 21 0
      roles/static_inventory/templates/openstack_ssh_config.j2
  31. 20 0
      roles/static_inventory/templates/ssh-tunnel.service.j2

+ 234 - 20
playbooks/provisioning/openstack/README.md

@@ -10,6 +10,7 @@ etc.). The result is an environment ready for openshift-ansible.
 * [Ansible-galaxy](https://pypi.python.org/pypi/ansible-galaxy-local-deps)
 * [jinja2](http://jinja.pocoo.org/docs/2.9/)
 * [shade](https://pypi.python.org/pypi/shade)
+* python-jmespath / [jmespath](https://pypi.python.org/pypi/jmespath)
 * python-dns / [dnspython](https://pypi.python.org/pypi/dnspython)
 * Become (sudo) is not required.
 
@@ -40,7 +41,7 @@ Alternatively you can install directly from github:
       -p openshift-ansible-contrib/roles
 
 Notes:
-* This assumes we're in the directory that contains the clonned 
+* This assumes we're in the directory that contains the cloned
 openshift-ansible-contrib repo in its root path.
 * When trying to install a different version, the previous one must be removed first
 (`infra-ansible` directory from [roles](https://github.com/openshift/openshift-ansible-contrib/tree/master/roles)).
@@ -52,8 +53,9 @@ Otherwise, even if there are differences between the two versions, installation
 * Assigns Cinder volumes to the servers
 * Set up an `openshift` user with sudo privileges
 * Optionally attach Red Hat subscriptions
-* Set up a bind-based DNS server
-* When deploying more than one master, set up a HAproxy server
+* Sets up a bind-based DNS server or configures the cluster servers to use an external DNS server.
+* Supports mixed in-stack/external DNS servers for dynamic updates.
+* When deploying more than one master, sets up an HAProxy server
 
 
 ## Set up
@@ -62,28 +64,38 @@ Otherwise, even if there are differences between the two versions, installation
 
     cp -r openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory inventory
 
-### Copy clouds.yaml
-
-    cp openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory/clouds.yaml clouds.yaml
-
 ### Copy ansible config
 
     cp openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory/ansible.cfg ansible.cfg
 
 ### Update `inventory/group_vars/all.yml`
 
+#### DNS configuration variables
+
 Pay special attention to the values in the first paragraph -- these
 will depend on your OpenStack environment.
 
+Note that the provisioning playbooks update the original Neutron subnet
+created with the Heat stack to point to the configured DNS servers.
+So the provisioned cluster nodes will start using those natively as
+default nameservers. Technically, this allows deploying OpenShift clusters
+without dnsmasq proxies.
+
 The `env_id` and `public_dns_domain` will form the cluster's DNS domain all
 your servers will be under. With the default values, this will be
 `openshift.example.com`. For workloads, the default subdomain is 'apps'.
 That sudomain can be set as well by the `openshift_app_domain` variable in
 the inventory.
 
+The `openstack_<role name>_hostname` is a set of variables used for customising
+hostnames of servers with a given role. When such a variable is left commented
+out, the default hostname (usually the role name) is used.
+
 The `public_dns_nameservers` is a list of DNS servers accessible from all
 the created Nova servers. These will be serving as your DNS forwarders for
 external FQDNs that do not belong to the cluster's DNS domain and its subdomains.
+If you're unsure what to put here, you can try the Google or OpenDNS servers,
+but note that some organizations may block them.
 
 The `openshift_use_dnsmasq` controls either dnsmasq is deployed or not.
 By default, dnsmasq is deployed and comes as the hosts' /etc/resolv.conf file
@@ -92,37 +104,101 @@ daemon that in turn proxies DNS requests to the authoritative DNS server.
 When Network Manager is enabled for provisioned cluster nodes, which is
 normally the case, you should not change the defaults and always deploy dnsmasq.
 
-Note that the authoritative DNS server is configured on post provsision
-steps, and the Neutron subnet for the Heat stack is updated to point to that
-server in the end. So the provisioned servers will start using it natively
-as a default nameserver that comes from the NetworkManager and cloud-init.
+`external_nsupdate_keys` describes the external authoritative DNS server(s)
+that process dynamic record updates in the public and private cluster views:
+
+    external_nsupdate_keys:
+      public:
+        key_secret: <some nsupdate key>
+        key_algorithm: 'hmac-md5'
+        key_name: 'update-key'
+        server: <public DNS server IP>
+      private:
+        key_secret: <some nsupdate key 2>
+        key_algorithm: 'hmac-sha256'
+        server: <public or private DNS server IP>
+
+Here, for the public view section, we specified a different key algorithm and
+the optional `key_name`, which normally defaults to the cluster's DNS domain.
+This illustrates a compatibility mode with a DNS service deployed
+by OpenShift on the OSP10 reference architecture, used in a mixed mode with
+another external DNS server.
+
+Another example defines an external DNS server for the public view,
+in addition to the in-stack DNS server used for the private view only:
+
+    external_nsupdate_keys:
+      public:
+        key_secret: <some nsupdate key>
+        key_algorithm: 'hmac-sha256'
+        server: <public DNS server IP>
+
+Here, updates matching the public view will be sent to the given public
+server IP, while updates matching the private view will be sent to the
+automatically evaluated in-stack DNS server's **public** IP.
+
+Note that for the in-stack DNS server, private view updates may be sent only
+via the public IP of the server; you cannot send updates via the private
+IP yet. This forces the in-stack private server to have a floating IP.
+See also the [security notes](#security-notes).
+
+#### Other configuration variables
 
 `openstack_ssh_key` is a Nova keypair - you can see your keypairs with
 `openstack keypair list`. This guide assumes that its corresponding private
 key is `~/.ssh/openshift`, stored on the ansible admin (control) node.
 
-`openstack_default_image_name` is the name of the Glance image the
-servers will use. You can
-see your images with `openstack image list`.
+`openstack_default_image_name` is the default name of the Glance image the
+servers will use. You can see your images with `openstack image list`.
+In order to set a different image for a role, uncomment the line with the
+corresponding variable (e.g. `openstack_lb_image_name` for load balancer) and
+set its value to another available image name. `openstack_default_image_name`
+must stay defined as it is used as a default value for the rest of the roles.
 
-`openstack_default_flavor` is the Nova flavor the servers will use.
+`openstack_default_flavor` is the default Nova flavor the servers will use.
 You can see your flavors with `openstack flavor list`.
+In order to set a different flavor for a role, uncomment the line with the
+corresponding variable (e.g. `openstack_lb_flavor` for load balancer) and
+set its value to another available flavor. `openstack_default_flavor` must
+stay defined as it is used as a default value for the rest of the roles.
 
 `openstack_external_network_name` is the name of the Neutron network
 providing external connectivity. It is often called `public`,
 `external` or `ext-net`. You can see your networks with `openstack
 network list`.
 
+`openstack_private_network_name` is the name of the private Neutron network
+providing admin/control access for ansible. It can be merged with other
+cluster networks; there are no special networking requirements.
+
 The `openstack_num_masters`, `openstack_num_infra` and
 `openstack_num_nodes` values specify the number of Master, Infra and
 App nodes to create.
 
 The `openshift_cluster_node_labels` defines custom labels for your openshift
-cluster node groups, like app or infra nodes. For example: `{'region': 'infra'}`.
+cluster node groups. It currently supports app and infra node groups.
+The default value of this variable sets `region: primary` to app nodes and
+`region: infra` to infra nodes.
+An example of setting a customised label:
+```
+openshift_cluster_node_labels:
+  app:
+    mylabel: myvalue
+```
 
 The `openstack_nodes_to_remove` allows you to specify the numerical indexes
 of App nodes that should be removed; for example, ['0', '2'],
 
+The `docker_volume_size` is the default Docker volume size the servers will use.
+In order to set a different volume size for a role,
+uncomment the line with the corresponding variable (e.g. `docker_master_volume_size`
+for master) and change its value. `docker_volume_size` must stay defined, as it is
+used as a default value for some of the servers (master, infra, app node).
+The rest of the roles (etcd, load balancer, dns) have their defaults hard-coded.
+
+**Note**: If the `ephemeral_volumes` is set to `true`, the `*_volume_size` variables
+will be ignored and the deployment will not create any cinder volumes.
+
 The `openstack_flat_secgrp`, controls Neutron security groups creation for Heat
 stacks. Set it to true, if you experience issues with sec group rules
 quotas. It trades security for number of rules, by sharing the same set
@@ -140,6 +216,37 @@ The `openstack_inventory_path` points the directory to host the generated static
 It should point to the copied example inventory directory, otherwise ti creates
 a new one for you.
 
+#### Multi-master configuration
+
+Please refer to the official documentation for the
+[multi-master setup](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#multiple-masters)
+and define the corresponding [inventory
+variables](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#configuring-cluster-variables)
+in `inventory/group_vars/OSEv3.yml`. For example, given a load balancer node
+under the ansible group named `ext_lb`:
+
+    openshift_master_cluster_method: native
+    openshift_master_cluster_hostname: "{{ groups.ext_lb.0 }}"
+    openshift_master_cluster_public_hostname: "{{ groups.ext_lb.0 }}"
+
+#### Provider Network
+
+Normally, the playbooks create a new Neutron network and subnet and attach
+floating IP addresses to each node. If you have a provider network set up, this
+is all unnecessary as you can just access servers that are placed in the
+provider network directly.
+
+To use a provider network, set its name in `openstack_provider_network_name` in
+`inventory/group_vars/all.yml`.
+
+If you set the provider network name, the `openstack_external_network_name` and
+`openstack_private_network_name` fields will be ignored.
+
+**NOTE**: this will not update the nodes' DNS, so running openshift-ansible
+right after provisioning will fail (unless you're using an external DNS server
+your provider network knows about). You must make sure your nodes are able to
+resolve each other by name.
+
 #### Security notes
 
 Configure required `*_ingress_cidr` variables to restrict public access
@@ -157,6 +264,18 @@ be the case for development environments. When turned off, the servers will
 be provisioned omitting the ``yum update`` command. This brings security
 implications though, and is not recommended for production deployments.
 
+##### DNS servers security options
+
+Aside from `node_ingress_cidr` restricting public access to in-stack DNS
+servers, the following (bind/named specific) DNS security
+options are available:
+
+    named_public_recursion: 'no'
+    named_private_recursion: 'yes'
+
+External DNS servers, which are not included in the 'dns' hosts group,
+are not managed. It is up to you to configure them.
+
 ### Configure the OpenShift parameters
 
 Finally, you need to update the DNS entry in
@@ -174,19 +293,41 @@ Note, that in order to deploy OpenShift origin, you should update the following
 variables for the `inventory/group_vars/OSEv3.yml`, `all.yml`:
 
     deployment_type: origin
-    origin_release: 1.5.1
     openshift_deployment_type: "{{ deployment_type }}"
 
-### Configure static inventory
+#### Setting a custom entrypoint
+
+In order to set a custom entrypoint, update `openshift_master_cluster_public_hostname`
+
+    openshift_master_cluster_public_hostname: api.openshift.example.com
+
+Note that an empty hostname does not work, so if your domain is `openshift.example.com`,
+you cannot set this value to simply `openshift.example.com`.
+
+### Configure static inventory and access via a bastion node
 
 Example inventory variables:
 
+    openstack_use_bastion: true
+    bastion_ingress_cidr: "{{openstack_subnet_prefix}}.0/24"
     openstack_private_ssh_key: ~/.ssh/openshift
     openstack_inventory: static
     openstack_inventory_path: ../../../../inventory
+    openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.openshift.example.com
 
+The `openstack_subnet_prefix` is the OpenStack private network prefix for your
+cluster. The `bastion_ingress_cidr` defines the accepted range for SSH connections
+to nodes in addition to the `ssh_ingress_cidr` (see the security notes above).
 
-In this guide, the latter points to the current directory, where you run ansible commands
+The SSH config will be stored on the ansible control node at the
+given path. Ansible uses it automatically. To access the cluster nodes with
+that ssh config, use the `-F` option, e.g.:
+
+    ssh -F /tmp/ssh.config.openshift.ansible.openshift.example.com master-0.openshift.example.com echo OK
+
+Note, relative paths will not work for the `openstack_ssh_config_path`, but it
+works for the `openstack_private_ssh_key` and `openstack_inventory_path`. In this
+guide, the latter points to the current directory, where you run ansible commands
 from.
 
 To verify nodes connectivity, use the command:
@@ -194,7 +335,7 @@ To verify nodes connectivity, use the command:
     ansible -v -i inventory/hosts -m ping all
 
 If something is broken, double-check the inventory variables, paths and the
-generated `<openstack_inventory_path>/hosts` file.
+generated `<openstack_inventory_path>/hosts` and `openstack_ssh_config_path` files.
 
 The `inventory: dynamic` can be used instead to access cluster nodes directly via
 floating IPs. In this mode you can not use a bastion node and should specify
@@ -213,6 +354,61 @@ this is how you stat the provisioning process from your ansible control node:
 Note, here you start with an empty inventory. The static inventory will be populated
 with data so you can omit providing additional arguments for future ansible commands.
 
+If the bastion is enabled, the generated SSH config must be applied for ansible;
+otherwise, it is automatically included by the previous step. In order to execute it
+as a separate playbook, use the following command:
+
+    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-provision-openstack.yml
+
+The first infra node then becomes a bastion node as well and proxies access
+for future ansible commands. The post-provision step also configures Satellite,
+if requested, and the DNS server, and ensures other OpenShift requirements are met.
+
+### Running Custom Post-Provision Actions
+
+A custom playbook can be run like this:
+
+```
+ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml
+```
+
+If you'd like to limit the run to one particular host, you can do so as follows:
+
+```
+ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml -l app-node-0.openshift.example.com
+```
+
+You can also create your own custom playbook. Here's one example that adds additional YUM repositories:
+
+```
+---
+- hosts: app
+  tasks:
+
+  # enable EPEL
+  - name: Add repository
+    yum_repository:
+      name: epel
+      description: EPEL YUM repo
+      baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/
+```
+
+This example runs against app nodes. The list of options includes:
+
+  - cluster_hosts (all hosts: app, infra, masters, dns, lb)
+  - OSEv3 (app, infra, masters)
+  - app
+  - dns
+  - masters
+  - infra_hosts
+
+Please consider contributing your custom playbook back to openshift-ansible-contrib!
+
+A library of custom post-provision actions exists in `openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions`. Playbooks include:
+
+##### add-yum-repos.yml
+
+[add-yum-repos.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml) adds a list of custom yum repositories to every node in the cluster.
 
 ### Install OpenShift
 
@@ -220,6 +416,24 @@ Once it succeeds, you can install openshift by running:
 
     ansible-playbook openshift-ansible/playbooks/byo/config.yml
 
+### Access UI
+
+OpenShift UI may be accessed via the first master node's FQDN on port 8443.
+
+When using a bastion, you may want to make an SSH tunnel from your control node
+to access the UI at `https://localhost:8443`, with this inventory variable:
+
+    openshift_ui_ssh_tunnel: True
+
+Note, this requires sudo rights on the ansible control node and an absolute path
+for the `openstack_private_ssh_key`. You should also update the control node's
+`/etc/hosts`:
+
+    127.0.0.1 master-0.openshift.example.com
+
+In order to access the UI, an ssh-tunnel service will be created and started on
+the control node. Make sure to remove these changes and the service manually
+when they are no longer needed.
 
 ## License
 
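As an aside on the `external_nsupdate_keys` behaviour described in the README diff above, the view-to-server routing can be sketched as follows. This is a minimal illustration with placeholder names and IPs, not playbook code:

```python
# Sketch (assumed logic) of how dynamic updates are routed per view: an
# external server from external_nsupdate_keys handles a view when defined,
# otherwise updates fall back to the in-stack DNS server's public IP.
def pick_dns_server(view, external_nsupdate_keys, in_stack_public_ip):
    """Return the server that should receive dynamic updates for a view."""
    entry = external_nsupdate_keys.get(view)
    if entry is not None:
        return entry["server"]        # external server handles this view
    return in_stack_public_ip         # fall back to the in-stack DNS server

# Only the public view has an external server configured (placeholder IPs).
keys = {"public": {"key_secret": "...", "key_algorithm": "hmac-sha256",
                   "server": "203.0.113.10"}}
assert pick_dns_server("public", keys, "198.51.100.5") == "203.0.113.10"
assert pick_dns_server("private", keys, "198.51.100.5") == "198.51.100.5"
```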

+ 12 - 0
playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml

@@ -0,0 +1,12 @@
+---
+- hosts: cluster_hosts
+  vars:
+    yum_repos: []
+  tasks:
+  # enable additional yum repos
+  - name: Add repository
+    yum_repository:
+      name: "{{ item.name }}"
+      description: "{{ item.description }}"
+      baseurl: "{{ item.baseurl }}"
+    with_items: "{{ yum_repos }}"
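For illustration, the `yum_repos` variable this playbook loops over is a list of dicts carrying the three keys the `yum_repository` task reads. A quick sketch with hypothetical repository values:

```python
# Hypothetical data for the `yum_repos` variable consumed by the playbook
# above; each entry supplies name, description, and baseurl.
yum_repos = [
    {
        "name": "epel",
        "description": "EPEL YUM repo",
        "baseurl": "https://download.fedoraproject.org/pub/epel/$releasever/$basearch/",
    },
]

# The playbook iterates the list; every entry must carry all three keys.
assert all({"name", "description", "baseurl"} <= set(repo) for repo in yum_repos)
```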

+ 9 - 0
playbooks/provisioning/openstack/custom_flavor_check.yaml

@@ -0,0 +1,9 @@
+---
+- name: Try to get flavor facts
+  os_flavor_facts:
+    name: "{{ flavor }}"
+  register: flavor_result
+- name: Check that custom flavor is available
+  assert:
+    that: "flavor_result.ansible_facts.openstack_flavors"
+    msg: "Flavor {{ flavor }} is not available."

+ 9 - 0
playbooks/provisioning/openstack/custom_image_check.yaml

@@ -0,0 +1,9 @@
+---
+- name: Try to get image facts
+  os_image_facts:
+    image: "{{ image }}"
+  register: image_result
+- name: Check that custom image is available
+  assert:
+    that: "image_result.ansible_facts.openstack_image"
+    msg: "Image {{ image }} is not available."

+ 7 - 11
playbooks/provisioning/openstack/post-provision-openstack.yml

@@ -4,7 +4,11 @@
   become: False
   gather_facts: False
   tasks:
-    - wait_for_connection:
+    - when: not openstack_use_bastion|default(False)|bool
+      wait_for_connection:
+    - when: openstack_use_bastion|default(False)|bool
+      delegate_to: bastion
+      wait_for_connection:
 
 - hosts: cluster_hosts
   gather_facts: True
@@ -21,8 +25,6 @@
   hosts: cluster_hosts
   gather_facts: False
   become: true
-  pre_tasks:
-    - include: pre_tasks.yml
   roles:
     - role: hostnames
 
@@ -46,22 +48,16 @@
   hosts: dns
   gather_facts: False
   become: true
-  pre_tasks:
-    - include: pre_tasks.yml
-    - name: "Generate dns-server views"
-      include: openstack_dns_views.yml
   roles:
+    - role: dns-views
     - role: infra-ansible/roles/dns-server
 
 - name: Build and process DNS Records
   hosts: localhost
   gather_facts: True
   become: False
-  pre_tasks:
-    - include: pre_tasks.yml
-    - name: "Generate dns records"
-      include: openstack_dns_records.yml
   roles:
+    - role: dns-records
     - role: infra-ansible/roles/dns
 
 - name: Switch the stack subnet to the configured private DNS server

+ 5 - 0
playbooks/provisioning/openstack/pre-install.yml

@@ -12,3 +12,8 @@
     - { role: subscription-manager, when: hostvars.localhost.rhsm_register, tags: 'subscription-manager', ansible_sudo: true }
     - { role: docker, tags: 'docker' }
     - { role: openshift-prep, tags: 'openshift-prep' }
+
+- hosts: localhost:cluster_hosts
+  become: False
+  tasks:
+    - include: pre_tasks.yml

+ 16 - 0
playbooks/provisioning/openstack/pre_tasks.yml

@@ -31,3 +31,19 @@
   delegate_to: localhost
   when:
   - openshift_master_default_subdomain is undefined
+
+# Check that openshift_cluster_node_labels has regions defined for all groups
+# NOTE(kpilatov): if node labels are to be enabled for more groups,
+#                 this check needs to be modified as well
+- name: Set openshift_cluster_node_labels if undefined (should not happen)
+  set_fact:
+    openshift_cluster_node_labels: {'app': {'region': 'primary'}, 'infra': {'region': 'infra'}}
+  when: openshift_cluster_node_labels is not defined
+
+- name: Set openshift_cluster_node_labels for the infra group
+  set_fact:
+    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'infra': {'region': 'infra'}}, recursive=True) }}"
+
+- name: Set openshift_cluster_node_labels for the app group
+  set_fact:
+    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'region': 'primary'}}, recursive=True) }}"
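The two `combine(..., recursive=True)` calls above deep-merge the forced region labels into whatever the user supplied. Roughly equivalent Python (a sketch, not the actual Jinja2 filter implementation):

```python
# Sketch of Ansible's combine(..., recursive=True): dicts are merged key by
# key, with the overlay winning on conflicts at any depth.
def recursive_combine(base, overlay):
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = recursive_combine(merged[key], value)
        else:
            merged[key] = value
    return merged

# User-defined labels survive; the region defaults are merged in per group.
labels = {"app": {"mylabel": "myvalue"}}
labels = recursive_combine(labels, {"infra": {"region": "infra"}})
labels = recursive_combine(labels, {"app": {"region": "primary"}})
assert labels == {"app": {"mylabel": "myvalue", "region": "primary"},
                  "infra": {"region": "infra"}}
```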

+ 44 - 0
playbooks/provisioning/openstack/prerequisites.yml

@@ -20,6 +20,16 @@
       that: 'shade_result.rc == 0'
       msg: "Python module shade is not installed"
 
+  # Check jmespath
+  - name: Try to import python module jmespath
+    command: python -c "import jmespath"
+    ignore_errors: yes
+    register: jmespath_result
+  - name: Check if jmespath is installed
+    assert:
+      that: 'jmespath_result.rc == 0'
+      msg: "Python module jmespath is not installed"
+
   # Check python-dns
   - name: Try to import python DNS module
     command: python -c "import dns"
@@ -55,10 +65,12 @@
     os_networks_facts:
       name: "{{ openstack_external_network_name }}"
     register: network_result
+    when: not openstack_provider_network_name|default(None)
   - name: Check that network is available
     assert:
       that: "network_result.ansible_facts.openstack_networks"
       msg: "Network {{ openstack_external_network_name }} is not available"
+    when: not openstack_provider_network_name|default(None)
 
   # Check keypair
   # TODO kpilatov: there is no Ansible module for getting OS keypairs
@@ -74,3 +86,35 @@
     assert:
       that: 'key_result.rc == 0'
       msg: "Keypair {{ openstack_ssh_public_key }} is not available"
+
+# Check that custom images and flavors exist
+- hosts: localhost
+
+  # Include variables that will be used by heat
+  vars_files:
+  - stack_params.yaml
+
+  tasks:
+  # Check that custom images are available
+  - include: custom_image_check.yaml
+    with_items:
+    - "{{ openstack_master_image }}"
+    - "{{ openstack_infra_image }}"
+    - "{{ openstack_node_image }}"
+    - "{{ openstack_lb_image }}"
+    - "{{ openstack_etcd_image }}"
+    - "{{ openstack_dns_image }}"
+    loop_control:
+      loop_var: image
+
+  # Check that custom flavors are available
+  - include: custom_flavor_check.yaml
+    with_items:
+    - "{{ master_flavor }}"
+    - "{{ infra_flavor }}"
+    - "{{ node_flavor }}"
+    - "{{ lb_flavor }}"
+    - "{{ etcd_flavor }}"
+    - "{{ dns_flavor }}"
+    loop_control:
+      loop_var: flavor

+ 9 - 2
playbooks/provisioning/openstack/provision-openstack.yml

@@ -12,13 +12,20 @@
       when: openstack_inventory|default('static') == 'static'
       inventory_path: "{{ openstack_inventory_path|default(inventory_dir) }}"
       private_ssh_key: "{{ openstack_private_ssh_key|default('~/.ssh/id_rsa') }}"
+      ssh_config_path: "{{ openstack_ssh_config_path|default('/tmp/ssh.config.openshift.ansible' + '.' + stack_name) }}"
+      ssh_user: "{{ ansible_user }}"
 
-- name: Refresh Server inventory
+- name: Refresh Server inventory or exit to apply SSH config
   hosts: localhost
   connection: local
   become: False
   gather_facts: False
   tasks:
-    - meta: refresh_inventory
+    - name: Exit to apply SSH config for a bastion
+      meta: end_play
+      when: openstack_use_bastion|default(False)|bool
+    - name: Refresh Server inventory
+      meta: refresh_inventory
 
 - include: post-provision-openstack.yml
+  when: not openstack_use_bastion|default(False)|bool
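The bastion branch this change introduces can be summarised with a small sketch (assumed control flow, names illustrative): with a bastion, the play ends early so the generated SSH config can take effect before post-provisioning is rerun separately; without one, the inventory is refreshed and post-provisioning continues immediately.

```python
# Sketch of the branch in provision-openstack.yml above.
def next_step(openstack_use_bastion: bool) -> str:
    if openstack_use_bastion:
        return "end_play"           # rerun post-provision-openstack.yml later
    return "refresh_inventory"      # post-provision-openstack.yml is included

assert next_step(True) == "end_play"
assert next_step(False) == "refresh_inventory"
```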

+ 1 - 1
playbooks/provisioning/openstack/sample-inventory/ansible.cfg

@@ -6,7 +6,7 @@ forks = 50
 timeout = 30
 host_key_checking = false
 inventory = inventory
-inventory_ignore_extensions = secrets.py, .pyc
+inventory_ignore_extensions = secrets.py, .pyc, .cfg, .crt
 gathering = smart
 retry_files_enabled = false
 fact_caching = jsonfile

+ 0 - 5
playbooks/provisioning/openstack/sample-inventory/clouds.yaml

@@ -1,5 +0,0 @@
----
-ansible:
-  use_hostnames: True
-  expand_hostvars: True
-  fail_on_errors: True

+ 3 - 6
playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml

@@ -1,15 +1,12 @@
 ---
 openshift_deployment_type: origin
-openshift_release: 1.5.1
 #openshift_deployment_type: openshift-enterprise
 #openshift_release: v3.5
 openshift_master_default_subdomain: "apps.{{ env_id }}.{{ public_dns_domain }}"
 
-#openshift_cluster_node_labels:
-#  app:
-#    region: primary
-#  infra:
-#    region: infra
+openshift_master_cluster_method: native
+openshift_master_cluster_hostname: "{{ groups.lb.0|default(groups.masters.0) }}"
+openshift_master_cluster_public_hostname: "{{ groups.lb.0|default(groups.masters.0) }}"
 
 osm_default_node_selector: 'region=primary'
 

+ 72 - 2
playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml

@@ -3,18 +3,63 @@ env_id: "openshift"
 public_dns_domain: "example.com"
 public_dns_nameservers: []
 
+# # Used Hostnames
+# # - set custom hostnames for roles by uncommenting corresponding lines
+#openstack_master_hostname: "master"
+#openstack_infra_hostname: "infra-node"
+#openstack_node_hostname: "app-node"
+#openstack_lb_hostname: "lb"
+#openstack_etcd_hostname: "etcd"
+#openstack_dns_hostname: "dns"
+
 openstack_ssh_public_key: "openshift"
-openstack_default_image_name: "centos7"
-openstack_default_flavor: "m1.medium"
 openstack_external_network_name: "public"
+#openstack_private_network_name:  "openshift-ansible-{{ stack_name }}-net"
+
+## If you want to use a provider network, set its name here.
+## NOTE: the `openstack_external_network_name` and
+## `openstack_private_network_name` options will be ignored when using a
+## provider network.
+#openstack_provider_network_name: "provider"
+
+# # Used Images
+# # - set specific images for roles by uncommenting corresponding lines
+# # - note: do not remove openstack_default_image_name definition
+#openstack_master_image_name: "centos7"
+#openstack_infra_image_name: "centos7"
+#openstack_node_image_name: "centos7"
+#openstack_lb_image_name: "centos7"
+#openstack_etcd_image_name: "centos7"
+#openstack_dns_image_name: "centos7"
+openstack_default_image_name: "centos7"
 
 openstack_num_masters: 1
 openstack_num_infra: 1
 openstack_num_nodes: 2
 
+# # Used Flavors
+# # - set specific flavors for roles by uncommenting corresponding lines
+# # - note: do not remove openstack_default_flavor definition
+#openstack_master_flavor: "m1.medium"
+#openstack_infra_flavor: "m1.medium"
+#openstack_node_flavor: "m1.medium"
+#openstack_lb_flavor: "m1.medium"
+#openstack_etcd_flavor: "m1.medium"
+#openstack_dns_flavor: "m1.medium"
+openstack_default_flavor: "m1.medium"
+
 # # Numerical index of nodes to remove
 # openstack_nodes_to_remove: []
 
+# # Docker volume size
+# # - set specific volume size for roles by uncommenting corresponding lines
+# # - note: do not remove docker_default_volume_size definition
+#docker_master_volume_size: "15"
+#docker_infra_volume_size: "15"
+#docker_node_volume_size: "15"
+#docker_etcd_volume_size: "2"
+#docker_dns_volume_size: "1"
+#docker_lb_volume_size: "5"
 docker_volume_size: "15"
 
 openstack_subnet_prefix: "192.168.99"
@@ -53,6 +98,10 @@ rhsm_register: False
 #    key_algorithm: 'hmac-md5'
 #    server: '192.168.1.2'
 
+# # Customize DNS server security options
+#named_public_recursion: 'no'
+#named_private_recursion: 'yes'
+
 
 # NOTE(shadower): Do not change this value. The Ansible user is currently
 # hardcoded to `openshift`.
@@ -69,5 +118,26 @@ ansible_user: openshift
 # # The path to checkpoint the static inventory from the in-memory one
 #openstack_inventory_path: ../../../../inventory
 
+# # Use bastion node to access cluster nodes (Defaults to False).
+# # Requires a static inventory.
+#openstack_use_bastion: False
+#bastion_ingress_cidr: "{{openstack_subnet_prefix}}.0/24"
+#
 # # The Nova key-pair's private SSH key to access inventory nodes
 #openstack_private_ssh_key: ~/.ssh/openshift
+# # The path for the SSH config to access all nodes
+#openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.{{ env_id }}.{{ public_dns_domain }}
+
+
+# If you want to use the VM storage instead of Cinder volumes, set this to `true`.
+# NOTE: this is for testing only! Your data will be gone once the VM disappears!
+# ephemeral_volumes: false
+
+# # OpenShift node labels
+# # - in order to customise node labels for app and/or infra group, set the
+# #   openshift_cluster_node_labels variable
+#openshift_cluster_node_labels:
+#  app:
+#    region: primary
+#  infra:
+#    region: infra

+ 34 - 10
playbooks/provisioning/openstack/stack_params.yaml

@@ -3,21 +3,45 @@ stack_name: "{{ env_id }}.{{ public_dns_domain }}"
 dns_domain: "{{ public_dns_domain }}"
 dns_nameservers: "{{ public_dns_nameservers }}"
 subnet_prefix: "{{ openstack_subnet_prefix }}"
+master_hostname: "{{ openstack_master_hostname | default('master') }}"
+infra_hostname: "{{ openstack_infra_hostname | default('infra-node') }}"
+node_hostname: "{{ openstack_node_hostname | default('app-node') }}"
+lb_hostname: "{{ openstack_lb_hostname | default('lb') }}"
+etcd_hostname: "{{ openstack_etcd_hostname | default('etcd') }}"
+dns_hostname: "{{ openstack_dns_hostname | default('dns') }}"
 ssh_public_key: "{{ openstack_ssh_public_key }}"
 openstack_image: "{{ openstack_default_image_name }}"
-lb_flavor: "{{ openstack_default_flavor | default('m1.small') }}"
-etcd_flavor: "{{ openstack_default_flavor | default('m1.small') }}"
-master_flavor: "{{ openstack_default_flavor | default('m1.medium') }}"
-node_flavor: "{{ openstack_default_flavor | default('m1.medium') }}"
-infra_flavor: "{{ openstack_default_flavor | default('m1.medium') }}"
-dns_flavor: "{{ openstack_default_flavor | default('m1.small') }}"
-external_network: "{{ openstack_external_network_name }}"
+lb_flavor: "{{ openstack_lb_flavor | default(openstack_default_flavor) }}"
+etcd_flavor: "{{ openstack_etcd_flavor | default(openstack_default_flavor) }}"
+master_flavor: "{{ openstack_master_flavor | default(openstack_default_flavor) }}"
+node_flavor: "{{ openstack_node_flavor | default(openstack_default_flavor) }}"
+infra_flavor: "{{ openstack_infra_flavor | default(openstack_default_flavor) }}"
+dns_flavor: "{{ openstack_dns_flavor | default(openstack_default_flavor) }}"
+openstack_master_image: "{{ openstack_master_image_name | default(openstack_default_image_name) }}"
+openstack_infra_image: "{{ openstack_infra_image_name | default(openstack_default_image_name) }}"
+openstack_node_image: "{{ openstack_node_image_name | default(openstack_default_image_name) }}"
+openstack_lb_image: "{{ openstack_lb_image_name | default(openstack_default_image_name) }}"
+openstack_etcd_image: "{{ openstack_etcd_image_name | default(openstack_default_image_name) }}"
+openstack_dns_image: "{{ openstack_dns_image_name | default(openstack_default_image_name) }}"
+openstack_private_network: >-
+  {% if openstack_provider_network_name | default(None) -%}
+  {{ openstack_provider_network_name }}
+  {%- else -%}
+  {{ openstack_private_network_name | default ('openshift-ansible-' + stack_name + '-net') }}
+  {%- endif -%}
+provider_network: "{{ openstack_provider_network_name | default(None) }}"
+external_network: "{{ openstack_external_network_name | default(None) }}"
 num_etcd: "{{ openstack_num_etcd | default(0) }}"
 num_masters: "{{ openstack_num_masters }}"
 num_nodes: "{{ openstack_num_nodes }}"
 num_infra: "{{ openstack_num_infra }}"
 num_dns: "{{ openstack_num_dns | default(1) }}"
-master_volume_size: "{{ docker_volume_size }}"
-app_volume_size: "{{ docker_volume_size }}"
-infra_volume_size: "{{ docker_volume_size }}"
+master_volume_size: "{{ docker_master_volume_size | default(docker_volume_size) }}"
+infra_volume_size: "{{ docker_infra_volume_size | default(docker_volume_size) }}"
+node_volume_size: "{{ docker_node_volume_size | default(docker_volume_size) }}"
+etcd_volume_size: "{{ docker_etcd_volume_size | default('2') }}"
+dns_volume_size: "{{ docker_dns_volume_size | default('1') }}"
+lb_volume_size: "{{ docker_lb_volume_size | default('5') }}"
 nodes_to_remove: "{{ openstack_nodes_to_remove | default([]) |  to_yaml }}"
+use_bastion: "{{ openstack_use_bastion|default(False) }}"
+ui_ssh_tunnel: "{{ openshift_ui_ssh_tunnel|default(False) }}"
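A note on the fallback pattern introduced above: each per-role variable (e.g. `openstack_master_flavor`) overrides the global `openstack_default_flavor` only when it is set. A minimal plain-Python sketch of that resolution (variable names mirror the playbook; the flavor values are illustrative, not defaults shipped by the repo):

```python
# Sketch of the flavor fallback chain from stack_params.yaml:
# a per-role variable, if defined, wins over openstack_default_flavor.
def resolve_flavor(hostvars, role):
    # e.g. role="master" -> try openstack_master_flavor first
    return hostvars.get("openstack_%s_flavor" % role,
                        hostvars["openstack_default_flavor"])

# No per-role override: the global default is used.
print(resolve_flavor({"openstack_default_flavor": "m1.medium"}, "master"))
# -> m1.medium

# Per-role override set: it takes precedence.
print(resolve_flavor({"openstack_default_flavor": "m1.medium",
                      "openstack_master_flavor": "m1.xlarge"}, "master"))
# -> m1.xlarge
```

The same `| default(...)` chain applies to the per-role image and volume-size variables added in this hunk.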

+ 39 - 2
playbooks/provisioning/openstack/openstack_dns_records.yml

@@ -4,11 +4,31 @@
     private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': hostvars[item]['ansible_hostname'], 'ip': hostvars[item]['private_v4'] } ] }}"
   with_items: "{{ groups['cluster_hosts'] }}"
 
+- name: "Add wildcard records to the private A records for infrahosts"
+  set_fact:
+    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': '*.' + openshift_app_domain, 'ip': hostvars[item]['private_v4'] } ] }}"
+  with_items: "{{ groups['infra_hosts'] }}"
+
+- name: "Add public master cluster hostname records to the private A records (single master)"
+  set_fact:
+    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.masters[0]].private_v4 } ] }}"
+  when:
+    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
+    - openstack_num_masters == 1
+
+- name: "Add public master cluster hostname records to the private A records (multi-master)"
+  set_fact:
+    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.lb[0]].private_v4 } ] }}"
+  when:
+    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
+    - openstack_num_masters > 1
+
 - name: "Set the private DNS server to use the external value (if provided)"
   set_fact:
     nsupdate_server_private: "{{ external_nsupdate_keys['private']['server'] }}"
     nsupdate_key_secret_private: "{{ external_nsupdate_keys['private']['key_secret'] }}"
     nsupdate_key_algorithm_private: "{{ external_nsupdate_keys['private']['key_algorithm'] }}"
+    nsupdate_private_key_name: "{{ external_nsupdate_keys['private']['key_name']|default('private-' + full_dns_domain) }}"
   when:
     - external_nsupdate_keys is defined
     - external_nsupdate_keys['private'] is defined
@@ -27,7 +47,7 @@
       - view: "private"
         zone: "{{ full_dns_domain }}"
         server: "{{ nsupdate_server_private }}"
-        key_name: "{{ ( 'private-' + full_dns_domain ) }}"
+        key_name: "{{ nsupdate_private_key_name|default('private-' + full_dns_domain) }}"
         key_secret: "{{ nsupdate_key_secret_private }}"
         key_algorithm: "{{ nsupdate_key_algorithm_private | lower }}"
         entries: "{{ private_records }}"
@@ -36,17 +56,34 @@
   set_fact:
     public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': hostvars[item]['ansible_hostname'], 'ip': hostvars[item]['public_v4'] } ] }}"
   with_items: "{{ groups['cluster_hosts'] }}"
+  when: hostvars[item]['public_v4'] is defined
 
 - name: "Add wildcard records to the public A records"
   set_fact:
     public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': '*.' + openshift_app_domain, 'ip': hostvars[item]['public_v4'] } ] }}"
   with_items: "{{ groups['infra_hosts'] }}"
+  when: hostvars[item]['public_v4'] is defined
+
+- name: "Add public master cluster hostname records to the public A records (single master)"
+  set_fact:
+    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.masters[0]].public_v4 } ] }}"
+  when:
+    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
+    - openstack_num_masters == 1
+
+- name: "Add public master cluster hostname records to the public A records (multi-master)"
+  set_fact:
+    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(full_dns_domain, ''))[:-1], 'ip': hostvars[groups.lb[0]].public_v4 } ] }}"
+  when:
+    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined
+    - openstack_num_masters > 1
 
 - name: "Set the public DNS server details to use the external value (if provided)"
   set_fact:
     nsupdate_server_public: "{{ external_nsupdate_keys['public']['server'] }}"
     nsupdate_key_secret_public: "{{ external_nsupdate_keys['public']['key_secret'] }}"
     nsupdate_key_algorithm_public: "{{ external_nsupdate_keys['public']['key_algorithm'] }}"
+    nsupdate_public_key_name: "{{ external_nsupdate_keys['public']['key_name']|default('public-' + full_dns_domain) }}"
   when:
     - external_nsupdate_keys is defined
     - external_nsupdate_keys['public'] is defined
@@ -65,7 +102,7 @@
       - view: "public"
         zone: "{{ full_dns_domain }}"
         server: "{{ nsupdate_server_public }}"
-        key_name: "{{ ( 'public-' + full_dns_domain ) }}"
+        key_name: "{{ nsupdate_public_key_name|default('public-' + full_dns_domain) }}"
         key_secret: "{{ nsupdate_key_secret_public }}"
         key_algorithm: "{{ nsupdate_key_algorithm_public | lower }}"
         entries: "{{ public_records }}"
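The record-name expression used in the new master-cluster tasks, `(hostname | replace(full_dns_domain, ''))[:-1]`, strips the cluster DNS domain from the public hostname and then drops the trailing dot. A small sketch of that derivation (the hostname and domain values are hypothetical):

```python
# Derive the short DNS record name the same way the playbook does:
# remove the cluster domain, then strip the trailing dot left behind.
full_dns_domain = "openshift.example.com"
public_hostname = "console.openshift.example.com"

record = public_hostname.replace(full_dns_domain, "")[:-1]
print(record)  # -> console
```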

+ 4 - 0
roles/dns-views/defaults/main.yml

@@ -0,0 +1,4 @@
+---
+external_nsupdate_keys: {}
+named_private_recursion: 'yes'
+named_public_recursion: 'no'

+ 6 - 1
playbooks/provisioning/openstack/openstack_dns_views.yml

@@ -8,18 +8,23 @@
   set_fact:
     private_named_view:
       - name: "private"
+        recursion: "{{ named_private_recursion }}"
         acl_entry: "{{ acl_list }}"
         zone:
           - dns_domain: "{{ full_dns_domain }}"
+        forwarder: "{{ public_dns_nameservers }}"
+  when: external_nsupdate_keys['private'] is undefined
 
 - name: "Generate the public view"
   set_fact:
     public_named_view:
       - name: "public"
+        recursion: "{{ named_public_recursion }}"
         zone:
           - dns_domain: "{{ full_dns_domain }}"
         forwarder: "{{ public_dns_nameservers }}"
+  when: external_nsupdate_keys['public'] is undefined
 
 - name: "Generate the final named_config_views"
   set_fact:
-    named_config_views: "{{ private_named_view + public_named_view }}"
+    named_config_views: "{{ private_named_view|default([]) + public_named_view|default([]) }}"

+ 6 - 1
roles/openstack-stack/defaults/main.yml

@@ -1,9 +1,9 @@
 ---
-dns_volume_size: 1
 ssh_ingress_cidr: 0.0.0.0/0
 node_ingress_cidr: 0.0.0.0/0
 master_ingress_cidr: 0.0.0.0/0
 lb_ingress_cidr: 0.0.0.0/0
+bastion_ingress_cidr: 0.0.0.0/0
 num_etcd: 0
 num_masters: 1
 num_nodes: 1
@@ -11,3 +11,8 @@ num_dns: 1
 num_infra: 1
 nodes_to_remove: []
 etcd_volume_size: 2
+dns_volume_size: 1
+lb_volume_size: 5
+use_bastion: False
+ui_ssh_tunnel: False
+provider_network: None

+ 7 - 2
roles/openstack-stack/tasks/main.yml

@@ -8,7 +8,6 @@
 - name: set template paths
   set_fact:
     stack_template_path: "{{ stack_template_pre.path }}/stack.yaml"
-    server_template_path: "{{ stack_template_pre.path }}/server.yaml"
     user_data_template_path: "{{ stack_template_pre.path }}/user-data"
 
 - name: generate HOT stack template from jinja2 template
@@ -19,7 +18,13 @@
 - name: generate HOT server template from jinja2 template
   template:
     src: heat_stack_server.yaml.j2
-    dest: "{{ server_template_path }}"
+    dest: "{{ stack_template_pre.path }}/server.yaml"
+
+- name: generate HOT server w/o floating IPs template from jinja2 template
+  template:
+    src: heat_stack_server_nofloating.yaml.j2
+    dest: "{{ stack_template_pre.path }}/server_nofloating.yaml"
+  when: use_bastion|bool
 
 - name: generate user_data from jinja2 template
   template:

+ 1 - 0
roles/openstack-stack/tasks/subnet_update_dns_servers.yaml

@@ -6,3 +6,4 @@
     state: present
     use_default_subnetpool: yes
     dns_nameservers: "{{ [private_dns_server|default(public_dns_nameservers[0])]|union(public_dns_nameservers)|unique }}"
+  when: not provider_network

+ 142 - 51
roles/openstack-stack/templates/heat_stack.yaml.j2

@@ -54,6 +54,7 @@ outputs:
     description: Floating IPs of the nodes
     value: { get_attr: [ infra_nodes, floating_ip ] }
 
+{% if num_dns|int > 0 %}
   dns_name:
     description: Name of the DNS
     value:
@@ -68,9 +69,11 @@ outputs:
   dns_private_ips:
     description: Private IPs of the DNS
     value: { get_attr: [ dns, private_ip ] }
+{% endif %}
 
 resources:
 
+{% if not provider_network %}
   net:
     type: OS::Neutron::Net
     properties:
@@ -127,6 +130,8 @@ resources:
       router_id: { get_resource: router }
       subnet_id: { get_resource: subnet }
 
+{% endif %}
+
 #  keypair:
 #    type: OS::Nova::KeyPair
 #    properties:
@@ -156,6 +161,13 @@ resources:
           port_range_min: 22
           port_range_max: 22
           remote_ip_prefix: {{ ssh_ingress_cidr }}
+{% if use_bastion|bool %}
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 22
+          port_range_max: 22
+          remote_ip_prefix: {{ bastion_ingress_cidr }}
+{% endif %}
         - direction: ingress
           protocol: icmp
           remote_ip_prefix: {{ ssh_ingress_cidr }}
@@ -398,6 +410,7 @@ resources:
           port_range_min: 443
           port_range_max: 443
 
+{% if num_dns|int > 0 %}
   dns-secgrp:
     type: OS::Neutron::SecurityGroup
     properties:
@@ -432,7 +445,9 @@ resources:
           port_range_min: 53
           port_range_max: 53
           remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
-{% if num_masters > 1 %}
+{% endif %}
+
+{% if num_masters|int > 1 or ui_ssh_tunnel|bool %}
   lb-secgrp:
     type: OS::Neutron::SecurityGroup
     properties:
@@ -443,14 +458,21 @@ resources:
         protocol: tcp
         port_range_min: {{ openshift_master_api_port | default(8443) }}
         port_range_max: {{ openshift_master_api_port | default(8443) }}
-        remote_ip_prefix: {{ lb_ingress_cidr }}
-  {% if openshift_master_console_port is defined and openshift_master_console_port != openshift_master_api_port %}
+        remote_ip_prefix: {{ lb_ingress_cidr | default(bastion_ingress_cidr) }}
+{% if ui_ssh_tunnel|bool %}
+      - direction: ingress
+        protocol: tcp
+        port_range_min: {{ openshift_master_api_port | default(8443) }}
+        port_range_max: {{ openshift_master_api_port | default(8443) }}
+        remote_ip_prefix: {{ ssh_ingress_cidr }}
+{% endif %}
+{% if openshift_master_console_port is defined and openshift_master_console_port != openshift_master_api_port %}
       - direction: ingress
         protocol: tcp
         port_range_min: {{ openshift_master_console_port | default(8443) }}
         port_range_max: {{ openshift_master_console_port | default(8443) }}
-        remote_ip_prefix: {{ lb_ingress_cidr }}
-  {% endif %}
+        remote_ip_prefix: {{ lb_ingress_cidr | default(bastion_ingress_cidr) }}
+{% endif %}
 {% endif %}
 
   etcd:
@@ -458,14 +480,18 @@ resources:
     properties:
       count: {{ num_etcd }}
       resource_def:
+{% if use_bastion|bool %}
+        type: server_nofloating.yaml
+{% else %}
         type: server.yaml
+{% endif %}
         properties:
           name:
             str_replace:
               template: k8s_type-%index%.cluster_id
               params:
                 cluster_id: {{ stack_name }}
-                k8s_type: etcd
+                k8s_type: {{ etcd_hostname }}
           cluster_env: {{ public_dns_domain }}
           cluster_id:  {{ stack_name }}
           group:
@@ -475,25 +501,34 @@ resources:
                 k8s_type: etcds
                 cluster_id: {{ stack_name }}
           type:        etcd
-          image:       {{ openstack_image }}
+          image:       {{ openstack_etcd_image }}
           flavor:      {{ etcd_flavor }}
           key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
           net:         { get_resource: net }
           subnet:      { get_resource: subnet }
-          secgrp:
-            - { get_resource: {% if openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}etcd-secgrp{% endif %} }
-            - { get_resource: common-secgrp }
-          floating_network: {{ external_network }}
           net_name:
             str_replace:
               template: openshift-ansible-cluster_id-net
               params:
                 cluster_id: {{ stack_name }}
+{% endif %}
+          secgrp:
+            - { get_resource: {% if openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}etcd-secgrp{% endif %} }
+            - { get_resource: common-secgrp }
+{% if not use_bastion|bool and not provider_network %}
+          floating_network: {{ external_network }}
+{% endif %}
           volume_size: {{ etcd_volume_size }}
+{% if not provider_network %}
     depends_on:
       - interface
+{% endif %}
 
-{% if num_masters > 1 %}
+{% if num_masters|int > 1 %}
   loadbalancer:
     type: OS::Heat::ResourceGroup
     properties:
@@ -506,7 +541,7 @@ resources:
               template: k8s_type-%index%.cluster_id
               params:
                 cluster_id: {{ stack_name }}
-                k8s_type: lb
+                k8s_type: {{ lb_hostname }}
           cluster_env: {{ public_dns_domain }}
           cluster_id:  {{ stack_name }}
           group:
@@ -516,23 +551,32 @@ resources:
                 k8s_type: lb
                 cluster_id: {{ stack_name }}
           type:        lb
-          image:       {{ openstack_image }}
+          image:       {{ openstack_lb_image }}
           flavor:      {{ lb_flavor }}
           key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
           net:         { get_resource: net }
           subnet:      { get_resource: subnet }
-          secgrp:
-            - { get_resource: lb-secgrp }
-            - { get_resource: common-secgrp }
-          floating_network: {{ external_network }}
           net_name:
             str_replace:
               template: openshift-ansible-cluster_id-net
               params:
                 cluster_id: {{ stack_name }}
-          volume_size: 5
+{% endif %}
+          secgrp:
+            - { get_resource: lb-secgrp }
+            - { get_resource: common-secgrp }
+    {% if not provider_network %}
+          floating_network: {{ external_network }}
+    {% endif %}
+          volume_size: {{ lb_volume_size }}
+    {% if not provider_network %}
     depends_on:
       - interface
+    {% endif %}
 {% endif %}
 
   masters:
@@ -540,14 +584,18 @@ resources:
     properties:
       count: {{ num_masters }}
       resource_def:
+{% if use_bastion|bool %}
+        type: server_nofloating.yaml
+{% else %}
         type: server.yaml
+{% endif %}
         properties:
           name:
             str_replace:
               template: k8s_type-%index%.cluster_id
               params:
                 cluster_id: {{ stack_name }}
-                k8s_type: master
+                k8s_type: {{ master_hostname }}
           cluster_env: {{ public_dns_domain }}
           cluster_id:  {{ stack_name }}
           group:
@@ -557,31 +605,40 @@ resources:
                 k8s_type: masters
                 cluster_id: {{ stack_name }}
           type:        master
-          image:       {{ openstack_image }}
+          image:       {{ openstack_master_image }}
           flavor:      {{ master_flavor }}
           key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
           net:         { get_resource: net }
           subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% endif %}
           secgrp:
 {% if openstack_flat_secgrp|default(False)|bool %}
             - { get_resource: flat-secgrp }
 {% else %}
             - { get_resource: master-secgrp }
             - { get_resource: node-secgrp }
-{% if num_etcd == 0 %}
+{% if num_etcd|int == 0 %}
             - { get_resource: etcd-secgrp }
 {% endif %}
 {% endif %}
             - { get_resource: common-secgrp }
+{% if not use_bastion|bool and not provider_network %}
           floating_network: {{ external_network }}
-          net_name:
-            str_replace:
-              template: openshift-ansible-cluster_id-net
-              params:
-                cluster_id: {{ stack_name }}
+{% endif %}
           volume_size: {{ master_volume_size }}
+{% if not provider_network %}
     depends_on:
       - interface
+{% endif %}
 
   compute_nodes:
     type: OS::Heat::ResourceGroup
@@ -590,15 +647,18 @@ resources:
       removal_policies:
       - resource_list: {{ nodes_to_remove }}
       resource_def:
+{% if use_bastion|bool %}
+        type: server_nofloating.yaml
+{% else %}
         type: server.yaml
+{% endif %}
         properties:
           name:
             str_replace:
-              template: subtype-k8s_type-%index%.cluster_id
+              template: sub_type_k8s_type-%index%.cluster_id
               params:
                 cluster_id: {{ stack_name }}
-                k8s_type: node
-                subtype: app
+                sub_type_k8s_type: {{ node_hostname }}
           cluster_env: {{ public_dns_domain }}
           cluster_id:  {{ stack_name }}
           group:
@@ -613,23 +673,32 @@ resources:
 {% for k, v in openshift_cluster_node_labels.app.iteritems() %}
             {{ k|e }}: {{ v|e }}
 {% endfor %}
-          image:       {{ openstack_image }}
+          image:       {{ openstack_node_image }}
           flavor:      {{ node_flavor }}
           key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
           net:         { get_resource: net }
           subnet:      { get_resource: subnet }
-          secgrp:
-            - { get_resource: {% if openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}node-secgrp{% endif %} }
-            - { get_resource: common-secgrp }
-          floating_network: {{ external_network }}
           net_name:
             str_replace:
               template: openshift-ansible-cluster_id-net
               params:
                 cluster_id: {{ stack_name }}
-          volume_size: {{ app_volume_size }}
+{% endif %}
+          secgrp:
+            - { get_resource: {% if openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}node-secgrp{% endif %} }
+            - { get_resource: common-secgrp }
+{% if not use_bastion|bool and not provider_network %}
+          floating_network: {{ external_network }}
+{% endif %}
+          volume_size: {{ node_volume_size }}
+{% if not provider_network %}
     depends_on:
       - interface
+{% endif %}
 
   infra_nodes:
     type: OS::Heat::ResourceGroup
@@ -640,11 +709,10 @@ resources:
         properties:
           name:
             str_replace:
-              template: subtypek8s_type-%index%.cluster_id
+              template: sub_type_k8s_type-%index%.cluster_id
               params:
                 cluster_id: {{ stack_name }}
-                k8s_type: node
-                subtype: infra
+                sub_type_k8s_type: {{ infra_hostname }}
           cluster_env: {{ public_dns_domain }}
           cluster_id:  {{ stack_name }}
           group:
@@ -659,11 +727,21 @@ resources:
 {% for k, v in openshift_cluster_node_labels.infra.iteritems() %}
             {{ k|e }}: {{ v|e }}
 {% endfor %}
-          image:       {{ openstack_image }}
+          image:       {{ openstack_infra_image }}
           flavor:      {{ infra_flavor }}
           key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
           net:         { get_resource: net }
           subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ stack_name }}
+{% endif %}
           secgrp:
 # TODO(bogdando) filter only required node rules into infra-secgrp
 {% if openstack_flat_secgrp|default(False)|bool %}
@@ -671,18 +749,21 @@ resources:
 {% else %}
             - { get_resource: node-secgrp }
 {% endif %}
+{% if ui_ssh_tunnel|bool and num_masters|int < 2 %}
+            - { get_resource: lb-secgrp }
+{% endif %}
             - { get_resource: infra-secgrp }
             - { get_resource: common-secgrp }
+{% if not provider_network %}
           floating_network: {{ external_network }}
-          net_name:
-            str_replace:
-              template: openshift-ansible-cluster_id-net
-              params:
-                cluster_id: {{ stack_name }}
+{% endif %}
           volume_size: {{ infra_volume_size }}
+{% if not provider_network %}
     depends_on:
       - interface
+{% endif %}
 
+{% if num_dns|int > 0 %}
   dns:
     type: OS::Heat::ResourceGroup
     properties:
@@ -695,7 +776,7 @@ resources:
               template: k8s_type-%index%.cluster_id
               params:
                 cluster_id: {{ stack_name }}
-                k8s_type: dns
+                k8s_type: {{ dns_hostname }}
           cluster_env: {{ public_dns_domain }}
           cluster_id:  {{ stack_name }}
           group:
@@ -705,20 +786,30 @@ resources:
                 k8s_type: dns
                 cluster_id: {{ stack_name }}
           type:        dns
-          image:       {{ openstack_image }}
+          image:       {{ openstack_dns_image }}
           flavor:      {{ dns_flavor }}
           key_name:    {{ ssh_public_key }}
+{% if provider_network %}
+          net:         {{ provider_network }}
+          net_name:         {{ provider_network }}
+{% else %}
           net:         { get_resource: net }
           subnet:      { get_resource: subnet }
-          secgrp:
-            - { get_resource: dns-secgrp }
-            - { get_resource: common-secgrp }
-          floating_network: {{ external_network }}
           net_name:
             str_replace:
               template: openshift-ansible-cluster_id-net
               params:
                 cluster_id: {{ stack_name }}
+{% endif %}
+          secgrp:
+            - { get_resource: dns-secgrp }
+            - { get_resource: common-secgrp }
+{% if not provider_network %}
+          floating_network: {{ external_network }}
+{% endif %}
           volume_size: {{ dns_volume_size }}
+{% if not provider_network %}
     depends_on:
       - interface
+{% endif %}
+{% endif %}

+ 15 - 0
roles/openstack-stack/templates/heat_stack_server.yaml.j2

@@ -61,20 +61,24 @@ parameters:
     label: Net name
     description: Net name
 
+{% if not provider_network %}
   subnet:
     type: string
     label: Subnet ID
     description: Subnet resource
+{% endif %}
 
   secgrp:
     type: comma_delimited_list
     label: Security groups
     description: Security group resources
 
+{% if not provider_network %}
   floating_network:
     type: string
     label: Floating network
     description: Network to allocate floating IP from
+{% endif %}
 
   availability_zone:
     type: string
@@ -117,7 +121,11 @@ outputs:
         - server
         - addresses
         - { get_param: net_name }
+{% if provider_network %}
+        - 0
+{% else %}
         - 1
+{% endif %}
         - addr
 
 resources:
@@ -134,6 +142,7 @@ resources:
       user_data:
         get_file: user-data
       user_data_format: RAW
+      user_data_update_policy: IGNORE
       metadata:
         group: { get_param: group }
         environment: { get_param: cluster_env }
@@ -146,16 +155,21 @@ resources:
     type: OS::Neutron::Port
     properties:
       network: { get_param: net }
+{% if not provider_network %}
       fixed_ips:
         - subnet: { get_param: subnet }
+{% endif %}
       security_groups: { get_param: secgrp }
 
+{% if not provider_network %}
   floating-ip:
     type: OS::Neutron::FloatingIP
     properties:
       floating_network: { get_param: floating_network }
       port_id: { get_resource: port }
+{% endif %}
 
+{% if not ephemeral_volumes|default(false)|bool %}
   cinder_volume:
     type: OS::Cinder::Volume
     properties:
@@ -168,3 +182,4 @@ resources:
       volume_id: { get_resource: cinder_volume }
       instance_uuid: { get_resource: server }
       mountpoint: /dev/sdb
+{% endif %}
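Why the address index flips in the `private_ip` output above: in Nova's `addresses` map, a server on a provider network carries only its fixed IP (index 0), while a server with an attached floating IP typically lists the fixed address first and the floating address second (index 1). A sketch with made-up sample data in that shape:

```python
# Illustrative "addresses" structures as returned for a Nova server.
addresses_provider = {"physnet": [{"addr": "192.0.2.10"}]}
addresses_floating = {"openshift-ansible-net": [
    {"addr": "192.168.99.10"},   # fixed IP (index 0)
    {"addr": "203.0.113.21"},    # floating IP (index 1)
]}

# Provider network: only the fixed IP exists, so index 0 is used.
print(addresses_provider["physnet"][0]["addr"])                # -> 192.0.2.10
# Floating-IP case: the template reads index 1 to get the floating address.
print(addresses_floating["openshift-ansible-net"][1]["addr"])  # -> 203.0.113.21
```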

+ 152 - 0
roles/openstack-stack/templates/heat_stack_server_nofloating.yaml.j2

@@ -0,0 +1,152 @@
+heat_template_version: 2016-10-14
+
+description: OpenShift cluster server w/o floating IP
+
+parameters:
+
+  name:
+    type: string
+    label: Name
+    description: Name
+
+  group:
+    type: string
+    label: Host Group
+    description: The Primary Ansible Host Group
+    default: host
+
+  cluster_env:
+    type: string
+    label: Cluster environment
+    description: Environment of the cluster
+
+  cluster_id:
+    type: string
+    label: Cluster ID
+    description: Identifier of the cluster
+
+  type:
+    type: string
+    label: Type
+    description: Type master or node
+
+  subtype:
+    type: string
+    label: Sub-type
+    description: Sub-type compute or infra for nodes, default otherwise
+    default: default
+
+  key_name:
+    type: string
+    label: Key name
+    description: Key name of keypair
+
+  image:
+    type: string
+    label: Image
+    description: Name of the image
+
+  flavor:
+    type: string
+    label: Flavor
+    description: Name of the flavor
+
+  net:
+    type: string
+    label: Net ID
+    description: Net resource
+
+  net_name:
+    type: string
+    label: Net name
+    description: Net name
+
+  subnet:
+    type: string
+    label: Subnet ID
+    description: Subnet resource
+
+  secgrp:
+    type: comma_delimited_list
+    label: Security groups
+    description: Security group resources
+
+  availability_zone:
+    type: string
+    description: The Availability Zone to launch the instance.
+    default: nova
+
+  volume_size:
+    type: number
+    description: Size of the volume to be created.
+    default: 1
+    constraints:
+      - range: { min: 1, max: 1024 }
+        description: must be between 1 and 1024 GB.
+
+  node_labels:
+    type: json
+    description: OpenShift Node Labels
+    default: {"region": "default" }
+
+outputs:
+
+  name:
+    description: Name of the server
+    value: { get_attr: [ server_nofloating, name ] }
+
+  private_ip:
+    description: Private IP of the server
+    value:
+      get_attr:
+        - server_nofloating
+        - addresses
+        - { get_param: net_name }
+        - 0
+        - addr
+
+resources:
+
+  server_nofloating:
+    type: OS::Nova::Server
+    properties:
+      name:      { get_param: name }
+      key_name:  { get_param: key_name }
+      image:     { get_param: image }
+      flavor:    { get_param: flavor }
+      networks:
+        - port:  { get_resource: port }
+      user_data:
+        get_file: user-data
+      user_data_format: RAW
+      user_data_update_policy: IGNORE
+      metadata:
+        group: { get_param: group }
+        environment: { get_param: cluster_env }
+        clusterid: { get_param: cluster_id }
+        host-type: { get_param: type }
+        sub-host-type:    { get_param: subtype }
+        node_labels: { get_param: node_labels }
+
+  port:
+    type: OS::Neutron::Port
+    properties:
+      network: { get_param: net }
+      fixed_ips:
+        - subnet: { get_param: subnet }
+      security_groups: { get_param: secgrp }
+
+{% if not ephemeral_volumes|default(false)|bool %}
+  cinder_volume:
+    type: OS::Cinder::Volume
+    properties:
+      size: { get_param: volume_size }
+      availability_zone: { get_param: availability_zone }
+
+  volume_attachment:
+    type: OS::Cinder::VolumeAttachment
+    properties:
+      volume_id: { get_resource: cinder_volume }
+      instance_uuid: { get_resource: server_nofloating }
+      mountpoint: /dev/sdb
+{% endif %}

+ 21 - 0
roles/static_inventory/defaults/main.yml

@@ -4,5 +4,26 @@ refresh_inventory: True
 inventory: static
 inventory_path: ~/openstack-inventory
 
+# Whether to configure a bastion host
+use_bastion: true
+
+# SSH user/key/options to access hosts via bastion
+ssh_user: openshift
+ssh_options: >-
+  -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
+  -o ConnectTimeout=90 -o ControlMaster=auto -o ControlPersist=270s
+  -o ServerAliveInterval=30 -o GSSAPIAuthentication=no
+
 # SSH key to access nodes
 private_ssh_key: ~/.ssh/openshift
+
+# The path where the generated SSH config for accessing the bastion/hosts is stored
+ssh_config_path: /tmp/ssh.config.ansible
+
+# Whether to make an SSH tunnel to access the UI on the 1st master
+# via the bastion node (requires sudo on the ansible control node)
+ui_ssh_tunnel: False
+ui_port: "{{ openshift_master_api_port | default(8443) }}"
+target_ip: "{{ hostvars[groups['masters.' + stack_name|quote][0]].private_v4 }}"
+
+openstack_private_network: private
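The `target_ip` default above indexes `hostvars` by the first host of the `masters.<stack_name>` group. A small sketch of that lookup with made-up inventory data (host names and the IP are illustrative only):

```python
# Hypothetical inventory data mirroring what Ansible exposes in
# `groups` and `hostvars` at runtime.
stack_name = "openshift"
groups = {"masters.openshift": ["master-0.openshift"]}
hostvars = {"master-0.openshift": {"private_v4": "192.168.0.10"}}

# Equivalent of: hostvars[groups['masters.' + stack_name][0]].private_v4
target_ip = hostvars[groups["masters." + stack_name][0]]["private_v4"]
print(target_ip)  # → 192.168.0.10
```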

+ 11 - 0
roles/static_inventory/tasks/main.yml

@@ -4,3 +4,14 @@
 
 - name: Checkpoint in-memory data into a static inventory
   include: checkpoint.yml
+
+- name: Generate SSH config for accessing hosts via bastion
+  include: sshconfig.yml
+  when: use_bastion|bool
+
+- name: Configure SSH tunneling to access UI
+  include: sshtun.yml
+  become: true
+  when:
+    - use_bastion|bool
+    - ui_ssh_tunnel|bool

+ 59 - 8
roles/static_inventory/tasks/openstack.yml

@@ -16,6 +16,7 @@
 
     - name: set_fact for openstack inventory nodes
       set_fact:
+        registered_bastion_nodes: "{{ (registered_nodes_output.stdout | from_json) | json_query(q) }}"
         registered_nodes_floating: "{{ (registered_nodes_output.stdout | from_json) | json_query(q2) }}"
       vars:
         q: "[] | [?metadata.group=='infra.{{stack_name}}']"
@@ -23,25 +24,75 @@
       when:
         - refresh_inventory|bool
 
+    - name: set_fact for openstack inventory nodes with provider network
+      set_fact:
+        registered_nodes_floating: "{{ (registered_nodes_output.stdout | from_json) | json_query(q) }}"
+      vars:
+        q: "[] | [?metadata.clusterid=='{{stack_name}}'] | [?public_v4=='']"
+      when:
+        - refresh_inventory|bool
+        - openstack_provider_network_name|default(None)
+
     - name: Add cluster nodes w/o floating IPs to inventory
-      with_items: "{{ registered_nodes }}"
-      when: not item in registered_nodes_floating
+      with_items: "{{ registered_nodes|difference(registered_nodes_floating) }}"
       add_host:
         name: '{{ item.name }}'
         groups: '{{ item.metadata.group }}'
-        ansible_host: '{{ item.private_v4 }}'
+        ansible_host: >-
+          {% if use_bastion|bool -%}
+          {{ item.name }}
+          {%- else -%}
+          {%- set node = registered_nodes | json_query("[?name=='" + item.name + "']") -%}
+          {{ node[0].addresses[openstack_private_network|quote][0].addr }}
+          {%- endif %}
         ansible_fqdn: '{{ item.name }}'
+        ansible_user: '{{ ssh_user }}'
         ansible_private_key_file: '{{ private_ssh_key }}'
-        private_v4: '{{ item.private_v4 }}'
+        ansible_ssh_extra_args: '-F {{ ssh_config_path }}'
+        private_v4: >-
+          {% set node = registered_nodes | json_query("[?name=='" + item.name + "']") -%}
+          {{ node[0].addresses[openstack_private_network|quote][0].addr }}
 
     - name: Add cluster nodes with floating IPs to inventory
       with_items: "{{ registered_nodes_floating }}"
-      when: item in registered_nodes_floating
       add_host:
         name: '{{ item.name }}'
         groups: '{{ item.metadata.group }}'
-        ansible_host: '{{ item.public_v4 }}'
+        ansible_host: >-
+          {% if use_bastion|bool -%}
+          {{ item.name }}
+          {%- elif openstack_provider_network_name|default(None) -%}
+          {{ item.private_v4 }}
+          {%- else -%}
+          {{ item.public_v4 }}
+          {%- endif %}
         ansible_fqdn: '{{ item.name }}'
+        ansible_user: '{{ ssh_user }}'
         ansible_private_key_file: '{{ private_ssh_key }}'
-        private_v4: '{{ item.private_v4 }}'
-        public_v4: '{{ item.public_v4 }}'
+        ansible_ssh_extra_args: '-F {{ ssh_config_path }}'
+        private_v4: >-
+          {% set node = registered_nodes | json_query("[?name=='" + item.name + "']") -%}
+          {{ node[0].addresses[openstack_private_network|quote][0].addr }}
+        public_v4: >-
+          {% if openstack_provider_network_name|default(None) -%}
+          {{ item.private_v4 }}
+          {%- else -%}
+          {{ item.public_v4 }}
+          {%- endif %}
+
+    - name: Add bastion node to inventory
+      add_host:
+        name: bastion
+        groups: bastions
+        ansible_host: '{{ registered_bastion_nodes[0].public_v4 }}'
+        ansible_fqdn: '{{ registered_bastion_nodes[0].name }}'
+        ansible_user: '{{ ssh_user }}'
+        ansible_private_key_file: '{{ private_ssh_key }}'
+        ansible_ssh_extra_args: '-F {{ ssh_config_path }}'
+        private_v4: >-
+          {% set node = registered_nodes | json_query("[?name=='" + registered_bastion_nodes[0].name + "']") -%}
+          {{ node[0].addresses[openstack_private_network|quote][0].addr }}
+        public_v4: '{{ registered_bastion_nodes[0].public_v4 }}'
+      when:
+        - registered_bastion_nodes is defined
+        - use_bastion|bool
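The `json_query` filters in this task file partition the `openstack server list` JSON into bastion nodes and private-only (provider-network) nodes. A pure-Python sketch of the same selection logic, using made-up sample records:

```python
stack_name = "openshift"

# Made-up records in the rough shape returned by
# `openstack server list --format json`.
nodes = [
    {"name": "bastion.openshift", "public_v4": "172.16.0.5",
     "metadata": {"group": "infra.openshift", "clusterid": "openshift"}},
    {"name": "master-0.openshift", "public_v4": "",
     "metadata": {"group": "masters.openshift", "clusterid": "openshift"}},
]

# q: "[] | [?metadata.group=='infra.<stack_name>']"
bastions = [n for n in nodes
            if n["metadata"]["group"] == "infra." + stack_name]

# provider-network case: "[?metadata.clusterid=='<stack_name>'] | [?public_v4=='']"
private_only = [n for n in nodes
                if n["metadata"]["clusterid"] == stack_name
                and n["public_v4"] == ""]

print([n["name"] for n in bastions])      # → ['bastion.openshift']
print([n["name"] for n in private_only])  # → ['master-0.openshift']
```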

+ 13 - 0
roles/static_inventory/tasks/sshconfig.yml

@@ -0,0 +1,13 @@
+---
+- name: set ssh proxy command prefix for accessing nodes via bastion
+  set_fact:
+    ssh_proxy_command: >-
+      ssh {{ ssh_options }}
+      -i {{ private_ssh_key }}
+      {{ ssh_user }}@{{ hostvars['bastion'].ansible_host }}
+
+- name: regenerate ssh config
+  template:
+    src: openstack_ssh_config.j2
+    dest: "{{ ssh_config_path }}"
+    mode: 0644

+ 15 - 0
roles/static_inventory/tasks/sshtun.yml

@@ -0,0 +1,15 @@
+---
+- name: Create ssh tunnel systemd service
+  template:
+    src: ssh-tunnel.service.j2
+    dest: /etc/systemd/system/ssh-tunnel.service
+    mode: 0644
+
+- name: reload the systemctl daemon after file update
+  command: systemctl daemon-reload
+
+- name: Enable ssh tunnel service
+  service:
+    name: ssh-tunnel
+    enabled: true
+    state: restarted

+ 9 - 1
roles/static_inventory/templates/inventory.j2

@@ -10,9 +10,12 @@
 %} private_v4={{ hostvars[host]['private_v4'] }}{% endif %}
 {% if 'public_v4' in hostvars[host]
 %} public_v4={{ hostvars[host]['public_v4'] }}{% endif %}
+{% if 'ansible_user' in hostvars[host]
+%} ansible_user={{ hostvars[host]['ansible_user'] }}{% endif %}
 {% if 'ansible_private_key_file' in hostvars[host]
 %} ansible_private_key_file={{ hostvars[host]['ansible_private_key_file'] }}{% endif %}
- openshift_hostname={{ host }}
+{% if use_bastion|bool and 'ansible_ssh_extra_args' in hostvars[host]
+%} ansible_ssh_extra_args={{ hostvars[host]['ansible_ssh_extra_args']|quote }}{% endif %} openshift_hostname={{ host }}
 
 {% endif %}
 {% endfor %}
@@ -36,6 +39,7 @@ dns
 [OSEv3:children]
 nodes
 etcd
+lb
 
 # Set variables common for all OSEv3 hosts
 #[OSEv3:vars]
@@ -65,6 +69,9 @@ nodes.{{ stack_name }}
 [dns:children]
 dns.{{ stack_name }}
 
+[lb:children]
+lb.{{ stack_name }}
+
 # Empty placeholders for all groups of the cluster nodes
 [masters.{{ stack_name }}]
 [etcd.{{ stack_name }}]
@@ -72,6 +79,7 @@ dns.{{ stack_name }}
 [nodes.{{ stack_name }}]
 [app.{{ stack_name }}]
 [dns.{{ stack_name }}]
+[lb.{{ stack_name }}]
 
 # BEGIN Autogenerated groups
 {% for group in groups %}

+ 21 - 0
roles/static_inventory/templates/openstack_ssh_config.j2

@@ -0,0 +1,21 @@
+Host *
+    IdentitiesOnly yes
+
+Host bastion
+    Hostname {{ hostvars['bastion'].ansible_host }}
+    IdentityFile {{ hostvars['bastion'].ansible_private_key_file }}
+    User {{ ssh_user }}
+    StrictHostKeyChecking no
+    UserKnownHostsFile=/dev/null
+
+{% for host in groups['all'] | difference(groups['bastions']) %}
+
+Host {{ host }}
+    Hostname {{ hostvars[host].ansible_host }}
+    ProxyCommand {{ ssh_proxy_command  }} -W {{ hostvars[host].private_v4 }}:22
+    IdentityFile {{ hostvars[host].ansible_private_key_file }}
+    User {{ ssh_user }}
+    StrictHostKeyChecking no
+    UserKnownHostsFile=/dev/null
+
+{% endfor %}
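A quick way to sanity-check what the template above emits for one host is to render the same per-host stanza in Python (all values below are illustrative stand-ins for the template variables):

```python
# Illustrative stand-ins for ssh_proxy_command and the hostvars values.
ssh_proxy_command = ("ssh -o StrictHostKeyChecking=no -i ~/.ssh/openshift "
                     "openshift@172.16.0.5")
host = "master-0.openshift"
ansible_host = "master-0.openshift"
private_v4 = "192.168.0.10"
private_key = "~/.ssh/openshift"
user = "openshift"

# Mirrors the Host block in openstack_ssh_config.j2: all traffic to the
# node is proxied through the bastion via `ssh -W <ip>:22`.
stanza = (
    f"Host {host}\n"
    f"    Hostname {ansible_host}\n"
    f"    ProxyCommand {ssh_proxy_command} -W {private_v4}:22\n"
    f"    IdentityFile {private_key}\n"
    f"    User {user}\n"
    f"    StrictHostKeyChecking no\n"
    f"    UserKnownHostsFile=/dev/null\n"
)
print(stanza)
```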

+ 20 - 0
roles/static_inventory/templates/ssh-tunnel.service.j2

@@ -0,0 +1,20 @@
+[Unit]
+Description=Set up ssh tunneling for OpenShift cluster UI
+After=network.target
+
+[Service]
+ExecStart=/usr/bin/ssh -NT -o \
+   ServerAliveInterval=60 -o \
+   UserKnownHostsFile=/dev/null -o \
+   StrictHostKeyChecking=no -o \
+   ExitOnForwardFailure=no -i \
+   {{ private_ssh_key }} {{ ssh_user }}@{{ hostvars['bastion'].ansible_host }} \
+   -L 0.0.0.0:{{ ui_port }}:{{ target_ip }}:{{ ui_port }}
+
+
+# Restart no more often than every 5 seconds to avoid tripping StartLimitInterval
+RestartSec=5
+Restart=always
+
+[Install]
+WantedBy=multi-user.target