The majority of the configuration is handled through an Ansible inventory directory. A sample inventory can be found at openshift-ansible/playbooks/openstack/sample-inventory/.
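For example, you might copy the sample inventory into a local inventory directory and edit it there (paths are illustrative):

$ cp -r openshift-ansible/playbooks/openstack/sample-inventory/ inventory/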
inventory/group_vars/all.yml is used for OpenStack configuration, while inventory/group_vars/OSEv3.yml is used for OpenShift configuration.
Environment variables may also be used.
In inventory/group_vars/all.yml:

* openshift_openstack_keypair_name: OpenStack keypair to use.
* openshift_openstack_num_masters: Number of master nodes to create.
* openshift_openstack_num_etcd: Number of etcd nodes to create (0 if co-hosted on master hosts).
* openshift_openstack_num_infra: Number of infra nodes to create.
* openshift_openstack_num_nodes: Number of app nodes to create.
* openshift_openstack_master_floating_ip: Assign floating IPs to master nodes. Defaults to True.
* openshift_openstack_etcd_floating_ip: Assign floating IPs to etcd nodes (if any). Defaults to True.
* openshift_openstack_infra_floating_ip: Assign floating IPs to infra nodes. Defaults to True.
* openshift_openstack_compute_floating_ip: Assign floating IPs to app nodes. Defaults to True.
* openshift_openstack_default_image_name: OpenStack image used by all VMs, unless a role-specific image name is specified:
  * openshift_openstack_master_image_name
  * openshift_openstack_infra_image_name
  * openshift_openstack_cns_image_name
  * openshift_openstack_node_image_name
  * openshift_openstack_lb_image_name
  * openshift_openstack_etcd_image_name
* openshift_openstack_default_flavor: OpenStack flavor used by all VMs, unless a role-specific flavor is specified:
  * openshift_openstack_master_flavor
  * openshift_openstack_infra_flavor
  * openshift_openstack_cns_flavor
  * openshift_openstack_node_flavor
  * openshift_openstack_lb_flavor
  * openshift_openstack_etcd_flavor
* openshift_openstack_master_hostname: Defaults to master.
* openshift_openstack_infra_hostname: Defaults to infra-node.
* openshift_openstack_cns_hostname: Defaults to cns.
* openshift_openstack_node_hostname: Defaults to app-node.
* openshift_openstack_lb_hostname: Defaults to lb.
* openshift_openstack_etcd_hostname: Defaults to etcd.
* openshift_openstack_external_network_name: OpenStack network providing external connectivity.
* openshift_openstack_provision_user_commands: Allows users to execute shell commands via cloud-init for all of the created Nova servers in the Heat stack, before they are available for SSH connections. Note that you should use custom Ansible playbooks whenever possible. User-specified shell commands for cloud-init need to be either strings or lists, for example:

  openshift_openstack_provision_user_commands:
  - set -vx
  - systemctl stop sshd # fences off ansible playbooks as we want to reboot later
  - ['echo', 'foo', '>', '/tmp/foo']
  - [ ls, /tmp/foo, '||', true ]
  - reboot # unfences ansible playbooks to continue after reboot

* openshift_openstack_nodes_to_remove: The numerical indexes of app nodes that should be removed; for example, ['0', '2'].
* openshift_openstack_docker_volume_size: Default Docker volume size used by all VMs, unless a role-specific Docker volume size is specified. If openshift_openstack_ephemeral_volumes is set to true, the *_volume_size variables will be ignored and the deployment will not create any Cinder volumes:
  * openshift_openstack_docker_master_volume_size
  * openshift_openstack_docker_infra_volume_size
  * openshift_openstack_docker_cns_volume_size
  * openshift_openstack_docker_node_volume_size
  * openshift_openstack_docker_etcd_volume_size
  * openshift_openstack_docker_lb_volume_size
* openshift_openstack_flat_secgrp: Set to True if you run into security group rule quota issues. It trades security for a smaller number of rules by sharing the same set of firewall rules for master, node, etcd and infra nodes.
* openshift_openstack_required_packages: List of additional prerequisite packages to be installed before deploying an OpenShift cluster.
* openshift_openstack_heat_template_version: Defaults to pike.
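As a sketch, a minimal all.yml using some of the variables above might look like this (the keypair, image, flavor and network names are placeholders for objects that must already exist in your OpenStack project):

openshift_openstack_keypair_name: openshift
openshift_openstack_external_network_name: public
openshift_openstack_default_image_name: CentOS-7-x86_64-GenericCloud
openshift_openstack_default_flavor: m1.medium
openshift_openstack_num_masters: 1
openshift_openstack_num_etcd: 0
openshift_openstack_num_infra: 1
openshift_openstack_num_nodes: 2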
In inventory/group_vars/OSEv3.yml:

* openshift_disable_check: List of checks to disable.
* openshift_master_cluster_public_hostname: Custom entrypoint; for example, api.openshift.example.com. Note that an empty hostname does not work, so if your domain is openshift.example.com you cannot set this value to simply openshift.example.com.
* openshift_deployment_type: Version of OpenShift to deploy; for example, origin or openshift-enterprise.
* openshift_master_default_subdomain

Additional options can be found in this sample inventory:
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example
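A short OSEv3.yml sketch using the options above (the hostnames and the disabled checks are illustrative):

openshift_deployment_type: origin
openshift_master_default_subdomain: apps.openshift.example.com
openshift_master_cluster_public_hostname: api.openshift.example.com
openshift_disable_check: disk_availability,memory_availability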
Some features require you to configure the OpenStack cloud provider. For example, in inventory/group_vars/OSEv3.yml:

openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_PROJECT_NAME') }}"
openshift_cloudprovider_openstack_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"

For the full range of openshift-ansible OpenStack cloud provider options, consult the Configuring for OpenStack page in the OpenShift documentation.
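Because these values are read from environment variables on the machine running Ansible, a common approach is to source your OpenStack RC file in the same shell before running the playbooks; for example (the RC file path is a placeholder):

$ source ~/keystonerc
$ ansible-playbook --user openshift \
  -i openshift-ansible/playbooks/openstack/inventory.py \
  -i inventory \
  openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yml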
If you would like to use additional parameters, create a custom cloud provider configuration file locally and specify it in inventory/group_vars/OSEv3.yml:

* openshift_cloudprovider_openstack_conf_file: Path to the local openstack.conf file.

In order to configure your OpenShift cluster to work properly with OpenStack SSL endpoints, set the following in inventory/group_vars/all.yml:

openshift_use_openstack_ssl: True

Then add the following to inventory/group_vars/OSEv3.yml:
openshift_certificates_redeploy: true
openshift_additional_ca: /path/to/ca.crt.pem
kuryr_openstack_ca: /path/to/ca.crt.pem (optional)
openshift_cloudprovider_openstack_ca_file: |
-----BEGIN CERTIFICATE-----
CONTENTS OF OPENSTACK SSL CERTIFICATE
-----END CERTIFICATE-----
By default the Heat stack created by OpenStack for the OpenShift cluster will be
named openshift-cluster
. If you would like to use a different name then you
must set the OPENSHIFT_CLUSTER
environment variable before running the playbooks:
$ export OPENSHIFT_CLUSTER=openshift.example.com
If you use a non-default stack name and run the openshift-ansible playbooks to update
your deployment, you must set OPENSHIFT_CLUSTER
to your stack name to avoid errors.
For its installation, OpenShift requires that the nodes can resolve each other by their hostnames. Specifically, the hostname must resolve to the private (i.e. non-floating) IP address.
Most OpenStack deployments do not support this out of the box. If you have control over your OpenStack, you can set this up as described in the OpenStack Internal DNS section.
Otherwise, you need an external DNS server.
While we do not create a DNS server for you, if it supports nsupdate (RFC 2136), we can populate it with the cluster records automatically.
To set up the domain name of your OpenShift cluster, set these parameters in inventory/group_vars/all.yml:

* openshift_openstack_clusterid: Defaults to openshift.
* openshift_openstack_public_dns_domain: Defaults to example.com.

Together, they form the cluster's public DNS domain that all the servers will be under; by default this domain will be openshift.example.com.

They're split so you can deploy multiple clusters under the same domain with a single inventory change: e.g. testing.example.com and production.example.com.

You will also want to put the IP addresses of your DNS server(s) in the openshift_openstack_dns_nameservers array in the same file.
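For example (the nameserver address is a placeholder):

openshift_openstack_clusterid: openshift
openshift_openstack_public_dns_domain: example.com
openshift_openstack_dns_nameservers:
- 10.20.30.40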
This will configure the Neutron subnet with all the OpenShift nodes to forward to these DNS servers, which means that any server running in that subnet will use them automatically, without any extra configuration.
This is the preferred way to handle internal node name resolution.
OpenStack Neutron is capable of resolving its servers by name, but it needs to be configured to do so. This requires operator access to the OpenStack servers and services.
In /etc/neutron/neutron.conf
, set the dns_domain
option. For example:
dns_domain = internal.
Note the trailing dot. This can be a domain name or any string and it does
not have to be externally resolvable. Values such as openshift.cool.
,
example.com.
or openstack-internal.
are all fine.
It must not be openstacklocal., however. That is the default value and it does not provide the behaviour we need.
Next, in /etc/neutron/plugins/ml2/ml2_conf.ini
, add the dns_domain_ports
extension driver:
extension_drivers=dns_domain_ports
If you already have other drivers set, just add it at the end, separated by a comma. For example:
extension_drivers=port_security,dns_domain_ports
Finally, restart the neutron-server
service:
systemctl restart neutron-server
(or systemctl restart 'devstack@q-svc'
in DevStack)
To verify that it works, you should create two servers, SSH into one of them and ping the other one by name. For example:
$ openstack server create .... --network private test1
$ openstack server create .... --network private test2
$ openstack floating ip create external
$ openstack server add floating ip test1 <floating ip>
$ ssh centos@<floating ip>
$ ping test2
If the ping succeeds, everything is set up correctly.
For more information, read the relevant OpenStack documentation:
https://docs.openstack.org/neutron/latest/admin/config-dns-int.html
Since the internal DNS does not use the domain name suffix our OpenShift cluster will work with, we must make sure that our Nodes' hostnames do not have it either. Nor should they use any other internal DNS server.
Put this in your inventory/group_vars/all.yml
:
openshift_openstack_use_neutron_internal_dns: True
openshift_openstack_fqdn_nodes: false
openshift_openstack_dns_nameservers: []
The nodes will now be called master-0
instead of
master-0.openshift.example.com
. Neutron's DNS resolution requires these short
hostnames.
If you were using a private DNS before, you'll also want to remove the
private
section of openshift_openstack_external_nsupdate_keys
(the public
one is okay). The internal name resolution is handled by Neutron so the DNS and
its private records are no longer necessary.
If you're setting openshift_master_cluster_hostname
to a master node, it must
be updated accordingly, too (e.g. openshift_master_cluster_hostname:
master-0
).
And finally, run the provision_install.yml
playbooks as you normally would.
If you don't have operator access to your OpenStack, it may still be configured to provide server name resolution anyway. Try running the validation steps from the OpenStack Internal DNS section. If ping fails, you will need to use an external DNS server.
If your DNS supports nsupdate, you can set up the
openshift_openstack_external_nsupdate_keys
variable and all the necessary DNS
records will be added during the provisioning phase (after the OpenShift nodes
are created, but before we install anything on them).
Add this to your inventory/group_vars/all.yml
:
openshift_openstack_use_nsupdate: True
openshift_openstack_external_nsupdate_keys:
private:
key_secret: <some nsupdate key>
key_algorithm: 'hmac-md5'
key_name: 'update-key'
server: <private DNS server IP>
Make sure that all four values (key secret, algorithm, key name and the DNS IP address) are correct.
This will create the records for the internal OpenShift communication.
If you also want public records for external access, add another
section called public
with the same structure.
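A sketch with both sections might look like this (the key values and server addresses are placeholders):

openshift_openstack_external_nsupdate_keys:
  private:
    key_secret: <private nsupdate key>
    key_algorithm: 'hmac-md5'
    key_name: 'update-key'
    server: <private DNS server IP>
  public:
    key_secret: <public nsupdate key>
    key_algorithm: 'hmac-md5'
    key_name: 'update-key'
    server: <public DNS server IP>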
If you want to use the same DNS server for both public and private records, you must set at least one of:

* openshift_openstack_public_hostname_suffix: Empty by default.
* openshift_openstack_private_hostname_suffix: Empty by default.

Otherwise the private records will be overwritten by the public ones.
For example, by leaving the private suffix empty and setting the public one to:
openshift_openstack_public_hostname_suffix: -public
The internal access to the first master node would be available with:
master-0.openshift.example.com
, while the public access using the floating IP
address would be under master-0-public.openshift.example.com
.
Note that these suffixes are only applied to the OpenShift Node names as they appear in the DNS. They will not affect the actual hostnames.
It is recommended that you use two separate servers for the private and public access instead.
If your nsupdate zone differs from the full OpenShift DNS name (e.g. your DNS' zone is "example.com" but you want your cluster to be at "openshift.example.com"), you can specify the zone in this parameter:
openshift_openstack_nsupdate_zone: example.com
If left out, it will be equal to the OpenShift cluster DNS.
Don't forget to put the internal (private) DNS servers in the openshift_openstack_dns_nameservers array.
If you're unable (or do not want) to use nsupdate, you will have to create your DNS records out-of-band.
To do that, you will have to split the deployment into three phases: provision the OpenStack servers, create the DNS records, and then install OpenShift.
In practice, this means running the provision.yml and install.yml playbooks instead of the all-in-one provision_install.yml and adding your DNS records between the two runs.
You still need to set the openshift_openstack_dns_nameservers
with
your (private/internal) DNS servers in inventory/group_vars/all.yml
.
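The overall flow would look like this, using the default dynamic inventory (the same invocation style as the static inventory example later in this document):

$ ansible-playbook --user openshift \
  -i openshift-ansible/playbooks/openstack/inventory.py \
  -i inventory \
  openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml

# ... create your DNS records here, then:

$ ansible-playbook --user openshift \
  -i openshift-ansible/playbooks/openstack/inventory.py \
  -i inventory \
  openshift-ansible/playbooks/openstack/openshift-cluster/install.yml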
Next, you need to create a DNS record for every OpenShift node that was created. This record must point to the node's private IP address (not the floating IP).
You can see the server names and their private and floating IP addresses by running openstack server list.
For example with the following output:
$ openstack server list
+--------------------------------------+--------------------------------------+---------+----------------------------------------------------------------------------+---------+-----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------------------------------+---------+----------------------------------------------------------------------------+---------+-----------+
| 8445bd74-aaf1-4c54-b6fe-e98efa6e47de | master-0.openshift.example.com | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.10, 10.40.128.136 | centos7 | m1.medium |
| 635f0a24-bde7-488d-aa0d-c31e0a01e7c4 | infra-node-0.openshift.example.com | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.4, 10.40.128.130 | centos7 | m1.medium |
| 04657a99-29b1-48c8-8979-3c88ee1c1615 | app-node-0.openshift.example.com | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.6, 10.40.128.132 | centos7 | m1.medium |
+--------------------------------------+--------------------------------------+---------+----------------------------------------------------------------------------+---------+-----------+
You will need to create these A records:
master-0.openshift.example.com. 192.168.99.10
infra-node-0.openshift.example.com. 192.168.99.4
app-node-0.openshift.example.com. 192.168.99.6
For the public access, you'll need to create 2 records: one for the API access and the other for the OpenShift apps running on the cluster.
console.openshift.example.com. 10.40.128.136
*.apps.openshift.example.com. 10.40.128.130
These must point to the publicly-accessible IP addresses of your master and infra nodes or preferably to the load balancers.
Every OpenShift node as well as the API and Router load balancer will receive a floating IP address by default. This is to make the deployment and debugging experience easier.
You may want to change that behaviour, for example to prevent any possibility of external access to the nodes (defense in depth) or if your floating IP pool is not large enough.
It is possible to configure the playbooks not to assign floating IP addresses. However, the Ansible playbooks will then not be able to SSH in and install OpenShift.
The nodes will only be accessible from the subnet they are assigned to.
To solve this, we need to create the network the nodes will be placed in beforehand, then boot up a bastion host in the same network and run the playbooks from there.
We will have to create a Neutron Network, Subnet and a Router for external
connectivity. Take note of any DNS servers you would normally put under
openshift_openstack_dns_nameservers
-- they must be added to the subnet.
In this example, we will call the network and its subnet openshift
and configure
a DNS server with IP address 10.20.30.40
. The external network will be called public
.
$ openstack network create openshift
$ openstack subnet create --subnet-range 192.168.0.0/24 --dns-nameserver 10.20.30.40 --network openshift openshift
$ openstack router create openshift-router
$ openstack router set --external-gateway public openshift-router
$ openstack router add subnet openshift-router openshift
To provide SSH connectivity (that Ansible requires) to the OpenShift nodes without using floating IP addresses, the playbooks must be running on a server inside the same subnet.
This will create such a server and place it in the subnet created above.
We will use an image called CentOS-7-x86_64-GenericCloud
, and assume that the
created floating IP address will be 172.24.4.10
.
$ openstack server create --wait --image CentOS-7-x86_64-GenericCloud --flavor m1.medium --key-name openshift --network openshift bastion
$ openstack floating ip create public
$ openstack server add floating ip bastion 172.24.4.10
$ ping 172.24.4.10
$ ssh centos@172.24.4.10
In addition to the rest of the openshift-ansible configuration, we will need to specify the node subnet, the router, and that we do not want any floating IP addresses.
You must do this from inside the "bastion" host created in the previous step.
Put the following in inventory/group_vars/all.yml
:
openshift_openstack_use_no_floating_ip: True
openshift_openstack_router_name: openshift-router
openshift_openstack_node_subnet_name: openshift
openshift_openstack_master_floating_ip: false
openshift_openstack_infra_floating_ip: false
openshift_openstack_compute_floating_ip: false
openshift_openstack_load_balancer_floating_ip: false
And then run the playbooks/openstack/openshift-cluster/*.yml
as usual.
If you want to deploy OpenShift on a single node (e.g. for quick evaluation), you can do so with a few configuration changes.
First, set the following in inventory/group_vars/all.yml
:
openshift_use_all_in_one_cluster_deployment: True
openshift_openstack_num_masters: 1
openshift_openstack_num_etcd: 0
openshift_openstack_num_infra: 0
openshift_openstack_num_nodes: 0
openshift_openstack_master_group_name: node-config-all-in-one
Next, define the node-config-all-in-one
group in OSEv3.yml
:
openshift_node_groups:
- name: node-config-all-in-one
labels:
- 'node-role.kubernetes.io/master=true'
- 'node-role.kubernetes.io/infra=true'
- 'node-role.kubernetes.io/compute=true'
Then run the deployment playbooks as usual. At the end, you will have an OpenShift running on a single OpenStack VM.
The options here define a new OpenShift node group that has the labels for all three roles: master, infra and compute. And we create a single node and assign this new group to it.
Note that the "all in one" node must be the "master". openshift-ansible
expects at least one node in the masters
Ansible group.
Also keep in mind that if you don't use LBaaS with an all-in-one setup the DNS wildcard record for the apps domain will not be added, because there are no dedicated infra nodes, so you will have to add it manually. See Custom DNS Records Configuration.
If you want to deploy OpenShift Container Platform with etcd running on separate hosts apart from the master hosts, the following changes need to be made to the inventory:

Single master and single etcd host:

openshift_openstack_num_masters: 1
openshift_openstack_num_etcd: 1

Multiple master and multiple etcd hosts:

openshift_openstack_num_masters: 3
openshift_openstack_num_etcd: 3
If you want to deploy multiple OpenShift environments in the same OpenStack project, you can do so with a few configuration changes.
First, set the openshift_openstack_clusterid option in the inventory/group_vars/all.yml file to a unique name for the cluster.
vi inventory/group_vars/all.yml
openshift_openstack_clusterid: foobar
openshift_openstack_public_dns_domain: example.com
Second, set the OPENSHIFT_CLUSTER environment variable. It has to consist of openshift_openstack_clusterid and openshift_openstack_public_dns_domain, because the cluster_id value stored in the instance metadata is concatenated in the same way. If the value is different, the instances won't be accessible in the Ansible inventory.
export OPENSHIFT_CLUSTER='foobar.example.com'
Then run the deployment playbooks as usual. When the first environment is deployed, update the options above for the next environment and run the deployment playbooks again.
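For example, a second environment could be deployed like this (the cluster id is illustrative):

$ vi inventory/group_vars/all.yml   # set openshift_openstack_clusterid: staging
$ export OPENSHIFT_CLUSTER='staging.example.com'
$ ansible-playbook --user openshift \
  -i openshift-ansible/playbooks/openstack/inventory.py \
  -i inventory \
  openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yml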
It is possible to build the OpenShift images in advance (instead of installing the dependencies during the deployment). This will reduce the disk and network throughput as well as speed up the installation.
To do this, the inventory must already exist and be configured.
Set the openshift_openstack_default_image_name
value in
inventory/group_vars/all.yml
to a name you want this new image to be called
(e.g. origin-node
). This name must not exist in OpenStack yet.
Next, set openshift_openstack_build_base_image
to a name of an existing
image that you want to use as a base. This should be the cloud image you would
normally use for the deployment.
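For example, in inventory/group_vars/all.yml (the base image name must match an image that already exists in your cloud):

openshift_openstack_default_image_name: origin-node
openshift_openstack_build_base_image: CentOS-7-x86_64-GenericCloud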
And finally, run the build_image.yml
playbook:
ansible-playbook -i inventory openshift-ansible/playbooks/openstack/openshift-cluster/build_image.yml
This will create a temporary Neutron network, subnet and router, launch a
server in that subnet, install all the packages and pull the necessary
container images and upload an image with the name set in
openshift_openstack_default_image_name
.
All the extra OpenStack resources (network, subnet, router) will then be deleted.
Note that the subnet's CIDR will be 192.168.23.0/24
. If you need to use a
different value, set openshift_openstack_build_network_cidr
before running
the build_image
playbook.
If you don't want to be setting the build variables in your inventory, you can pass them to ansible-playbook directly:
ansible-playbook -i inventory openshift-ansible/playbooks/openstack/openshift-cluster/build_image.yml -e openshift_openstack_build_base_image=CentOS-7-x86_64-GenericCloud-1805 -e openshift_openstack_build_network_cidr=192.168.42.0/24
Kuryr is an SDN that uses OpenStack Neutron. This prevents the double overlay overhead one would get when running OpenShift on OpenStack using the default OpenShift SDN.
https://docs.openstack.org/kuryr-kubernetes/latest/readme.html
Kuryr has a few additional requirements on the underlying OpenStack deployment:

* restart neutron-server after you change the configuration
* restart neutron-openvswitch-agent after the config change

We recommend you use the Queens or newer release of OpenStack.
This is the minimum you need to set (in group_vars/all.yml
):
openshift_use_kuryr: true
openshift_use_openshift_sdn: false
os_sdn_network_plugin_name: cni
openshift_node_proxy_mode: userspace
use_trunk_ports: true
openshift_master_open_ports:
- service: dns tcp
port: 53/tcp
- service: dns udp
port: 53/udp
openshift_node_open_ports:
- service: dns tcp
port: 53/tcp
- service: dns udp
port: 53/udp
kuryr_openstack_public_net_id: <public/external net UUID>
The kuryr_openstack_public_net_id
value must be set to the UUID of the
public net in your OpenStack. In other words, the net with the Floating
IP range defined. It corresponds to the public network, which is often called
public
, external
or ext-net
.
Additionally, if the public net has more than one subnet, you can specify which one to use with kuryr_openstack_public_subnet_id, whose value must be set to the UUID of that public subnet in your OpenStack.
NOTE: A lot of OpenStack deployments do not make the public subnet accessible to regular users.
Finally, you must set up an OpenStack cloud provider as specified in OpenStack Cloud Provider Configuration.
It is possible to pre-create Neutron ports for later use. This means that several ports (each port will be attached to an OpenShift pod) would be created at once. This will speed up individual pod creation at the cost of having a few extra ports that are not currently in use.
For more information on the Kuryr port pools, check out the Kuryr documentation:
https://docs.openstack.org/kuryr-kubernetes/latest/installation/ports-pool.html
You can control the port pooling characteristics with these options:
kuryr_openstack_pool_max: 0
kuryr_openstack_pool_min: 1
kuryr_openstack_pool_batch: 5
kuryr_openstack_pool_update_frequency: 20
openshift_kuryr_precreate_subports: 5
Note that the last variable specifies the number of subports that will be created per trunk port, i.e., per pool.
You need to set the pool driver you want to use, depending on the target environment, i.e., neutron for baremetal deployments or nested for deployments on top of VMs:
kuryr_openstack_pool_driver: neutron
kuryr_openstack_pool_driver: nested
And to disable this feature, you must set:
kuryr_openstack_pool_driver: noop
There is also multi-driver support to enable hybrid deployments with different pool drivers. In order to enable the Kuryr multi-pool driver support, the nodes also need to be tagged with their corresponding pod_vif labels so that the right Kuryr pool driver is used for each VM/node.
To do that, set this in inventory/group_vars/OSEv3.yml
:
kuryr_openstack_pool_driver: multi
openshift_node_groups:
- name: node-config-master
labels:
- 'node-role.kubernetes.io/master=true'
- 'pod_vif=nested-vlan'
edits: []
- name: node-config-infra
labels:
- 'node-role.kubernetes.io/infra=true'
- 'pod_vif=nested-vlan'
edits: []
- name: node-config-compute
labels:
- 'node-role.kubernetes.io/compute=true'
- 'pod_vif=nested-vlan'
edits: []
By default, Kuryr is configured with the default subnet driver, where all pods are deployed on the same Neutron subnet. However, you can enable a different subnet driver, named namespace, which allocates pods on different subnets depending on the namespace they belong to. In addition to the subnet driver, to properly enable isolation between different namespaces (through OpenStack security groups), you also need to enable the related security group driver for namespaces. To enable this Kuryr namespace isolation capability, uncomment:
openshift_kuryr_subnet_driver: namespace
openshift_kuryr_sg_driver: namespace
By default, the Kuryr controller and CNI pods are deployed with readiness and liveness probes enabled. To disable them, set:
enable_kuryr_controller_probes: False
enable_kuryr_cni_probes: False
A production deployment should contain more than one master and infra node and have a load balancer in front of them.
The playbooks will not create any load balancer by default, even if you request multiple masters.
You can opt in if you want, though. There are two options: a VM-based load balancer and OpenStack's Load Balancer as a Service.
If your OpenStack supports Load Balancer as a Service (LBaaS) provided by the Octavia project, our playbooks can set it up automatically.
Put this in your inventory/group_vars/all.yml
:
openshift_openstack_use_lbaas_load_balancer: true
This will create two load balancers: one for the API and UI console and the other for the OpenShift router. Each will have its own public IP address.
This playbook defaults to using OpenStack Octavia as its LBaaSv2 provider:
openshift_openstack_lbaasv2_provider: Octavia
If your cloud uses the deprecated Neutron LBaaSv2 provider set:
openshift_openstack_lbaasv2_provider: "Neutron::LBaaS"
The connection timeout of the Octavia listeners associated with the API can be modified by setting the following variable, in milliseconds (default value 500000):
openshift_openstack_api_lb_listeners_timeout: 500000
If you can't use OpenStack's LBaaS, we can create and configure a virtual machine running HAProxy to serve as one.
Put this in your inventory/group_vars/all.yml
:
openshift_openstack_use_vm_load_balancer: true
WARNING: this VM will only handle the API and UI requests, not the OpenShift routes.
That means that if you have more than one infra node, you will have to balance them externally. This option is not recommended for production.
If you specify neither openshift_openstack_use_lbaas_load_balancer
nor
openshift_openstack_use_vm_load_balancer
, the resulting OpenShift cluster
will have no load balancing configured out of the box.
This is regardless of how many master or infra nodes you create.
In this mode, you are expected to configure and maintain a load balancer yourself.
However, the cluster is usable without a load balancer as well. To talk to the API or UI, connect to any of the master nodes. For the OpenShift routes, use any of the infra nodes.
In either of these cases (LBaaS, VM HAProxy, no LB) the public addresses to access the cluster's API and router will be printed out at the end of the playbook.
If you want to get them out explicitly, run the following playbook with the same arguments (private key, inventories, etc.) as your provision/install ones:
$ ansible-playbook --user openshift \
  -i openshift-ansible/playbooks/openstack/inventory.py \
  -i inventory \
  openshift-ansible/playbooks/openstack/openshift-cluster/cluster-info.yml
These addresses will depend on the load balancing solution. For LBaaS, they'll be the floating IPs of the load balancers. In the VM-based solution, the API address will be the public IP of the load balancer VM and the router IP will be the address of the first infra node that was created. If no load balancer is selected, the API will be the address of the first master node and the router will be the address of the first infra node.
This means that regardless of the load balancing solution, you can use these two entries to provide access to your cluster.
Normally, the playbooks create a new Neutron network and subnet and attach floating IP addresses to each node. If you have a provider network set up, this is all unnecessary as you can just access servers that are placed in the provider network directly.
Note that this will not update the nodes' DNS, so running openshift-ansible right after provisioning will fail (unless you're using an external DNS server your provider network knows about). You must make sure your nodes are able to resolve each other by name.
In inventory/group_vars/all.yml:

* openshift_openstack_use_provider_network: True
* openshift_openstack_provider_network_name: Provider network name. Setting this will cause the openshift_openstack_external_network_name and openshift_openstack_private_network_name parameters to be ignored.

Set the following in inventory/group_vars/all.yml:
openshift_use_cinder_persistent_volume: True

Then, in addition to setting up an OpenStack cloud provider, you must set the following in inventory/group_vars/OSEv3.yml:

openshift_cloudprovider_openstack_blockstorage_version: v2

The Block Storage version must be set to v2, because OpenShift does not support the v3 API yet and the version detection currently does not work.
After a successful deployment, the cluster will be configured for Cinder persistent volumes.
To verify that it works, you can, for example:

1. Log in and create a new project (oc login and oc new-project persistent)
2. Deploy an application that uses a persistent volume (oc new-app --template=django-psql-persistent)
3. Check the Cinder volumes (openstack volume list); a volume called kubernetes-dynamic-pvc-UUID should be created
4. Verify that the application's Page views counter increases with each reload
5. Delete the database pod (oc delete pod <name>)
6. Verify that the Page views number is not lost and still goes up
7. Clean up (oc delete project persistent)

You can use a pre-existing Cinder volume for the storage of your OpenShift registry. To do that, you need to have a Cinder volume. You can create one by running:
openstack volume create --size <volume size in gb> <volume name>
Alternatively, the playbooks can create the volume automatically if you specify its name and size.
Then, set the following in inventory/group_vars/all.yml:

openshift_use_cinder_registry: True

Set up an OpenStack cloud provider, and then set the following in inventory/group_vars/OSEv3.yml:

openshift_hosted_registry_storage_kind: openstack
openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem: xfs
openshift_hosted_registry_storage_volume_size: 10Gi

For a volume you created, you must also specify its UUID (it must be the UUID, not the volume's name):
openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05
If you want the volume created automatically, set the desired name instead:
openshift_hosted_registry_storage_volume_name: registry
The volume will be formatted automatically and mounted on one of the infra nodes when the registry pod starts.
You can use OpenStack Swift or Ceph Rados GW to store your OpenShift registry.
In order to do so, set the following in inventory/group_vars/all.yml:

openshift_use_swift_registry: true

And the following in inventory/group_vars/OSEv3.yml:

openshift_hosted_registry_storage_kind: object
openshift_hosted_registry_storage_provider: swift
openshift_hosted_registry_storage_swift_container: "openshift-registry" # can be any name
openshift_hosted_registry_storage_swift_authurl: "{{ lookup('env','OS_AUTH_URL') }}"
openshift_hosted_registry_storage_swift_username: "{{ lookup('env','OS_USERNAME') }}"
openshift_hosted_registry_storage_swift_password: "{{ lookup('env','OS_PASSWORD') }}"
openshift_hosted_registry_storage_swift_region: "{{ lookup('env', 'OS_REGION_NAME') }}" # optional
openshift_hosted_registry_storage_swift_tenant: "{{ lookup('env','OS_PROJECT_NAME') }}" # can also specify tenantid
openshift_hosted_registry_storage_swift_tenantid: "{{ lookup('env','OS_PROJECT_ID') }}" # can also specify tenant
openshift_hosted_registry_storage_swift_domain: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}" # optional; can also specify domainid
openshift_hosted_registry_storage_swift_domainid: "{{ lookup('env','OS_USER_DOMAIN_ID') }}" # optional; can also specify domain
openshift_hosted_registry_storage_swift_insecureskipverify: "false" # optional; true to skip TLS verification

Note that the exact environment variable names may vary depending on the contents of your OpenStack RC file. If you use Keystone v2, you may not need to set all of these parameters.
Adding more nodes to the cluster is a simple process: update the node counts in inventory/group_vars/all.yml, then run the appropriate scaleup playbook.
NOTE: the dynamic inventory used for scaling is different. Make sure you
use scaleup_inventory.py
for all the operations below.
Edit your inventory/group_vars/all.yml
and set the new node total.
For example, if your cluster currently has 3 masters, 2 infra and 5 app nodes
and you want to add another 3 compute nodes, all.yml
should contain this:
openshift_openstack_num_masters: 3
openshift_openstack_num_infra: 2
openshift_openstack_num_nodes: 8 # 5 existing and 3 new
Next, run the appropriate playbook - either
openshift-ansible/playbooks/openstack/openshift-cluster/master-scaleup.yml
for master nodes or
openshift-ansible/playbooks/openstack/openshift-cluster/node-scaleup.yml
for other nodes. For example:
$ ansible-playbook --user openshift \
-i openshift-ansible/playbooks/openstack/scaleup_inventory.py \
-i inventory \
openshift-ansible/playbooks/openstack/openshift-cluster/master-scaleup.yml
This will create the new OpenStack nodes, optionally create the DNS records
and subscribe them to RHN, configure the new_masters
, new_nodes
and
new_etcd
groups and run the OpenShift scaleup tasks.
When the playbook finishes, you should have new nodes up and running.
Run oc get nodes
to verify.
If you have added new infra nodes, the extra docker-registry
and router
pods may not have been created automatically. E.g. if you started with a single
infra node and then scaled it to three, you might still only see a single
registry and router.
In that case, you can scale the pods by running the following as the OpenShift admin:
oc scale --replicas=<count> dc/docker-registry
oc scale --replicas=<count> dc/router
Where <count>
is the number of the pods you want (i.e. the number of your
infra nodes).
By default, Heat stack outputs are resolved. This may cause problems in large-scale deployments: querying the Heat stack can take a long time and eventually time out. The following setting in inventory/group_vars/all.yml is recommended to prevent the timeouts:

openshift_openstack_resolve_heat_outputs: False

The playbooks default to using a dynamic inventory in openshift-ansible/playbooks/openstack/inventory.py.
You can also create a static inventory after the provision step, and
then use that inventory for the install step. The steps to do so are as
follows:
$ ansible-playbook --user openshift \
-i openshift-ansible/playbooks/openstack/inventory.py \
-i inventory \
openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml
$ python openshift-ansible/playbooks/openstack/inventory.py --static hosts
$ ansible-playbook --user openshift \
-i hosts \
-i inventory \
openshift-ansible/playbooks/openstack/openshift-cluster/install.yml
There are certain optional and legacy features that require ports to be opened. The code provided in the following sections can be used to enable these features.
If you want to enable metrics in your OpenShift cluster, port 10255 must be open on all nodes in the cluster. The following code should be added to openshift_openstack_node_secgroup_rules in main.yml.
- direction: ingress
protocol: tcp
port_range_min: 10255
port_range_max: 10255
- direction: ingress
protocol: udp
port_range_min: 10255
port_range_max: 10255
The following code to open ports for prometheus should also be added to the openshift_openstack_node_secgroup_rules section of main.yml.
- direction: ingress
protocol: tcp
port_range_min: 9100
port_range_max: 9100
Add this to the openshift_openstack_node_secgroup_rules section of main.yml to enable elastic search.
- direction: ingress
protocol: tcp
port_range_min: 9200
port_range_max: 9200
- direction: ingress
protocol: tcp
port_range_min: 9300
port_range_max: 9300
If you choose to use Pacemaker to manage the HA system on the master nodes, the following changes should be made to the openshift_openstack_master_secgroup_rules section.
- direction: ingress
protocol: tcp
port_range_min: 2224
port_range_max: 2224
- direction: ingress
protocol: udp
port_range_min: 5404
port_range_max: 5405
The following Documentation may prove helpful as well:
If you are running a template router to expose your statistics, there are a few changes you need to make. First, add this to main.yml under the openshift_openstack_infra_secgroup_rules section.
# Required when running template router to access statistics
- direction: ingress
protocol: tcp
port_range_min: 1936
port_range_max: 1936
There are some different scenarios to customize the container runtime in the instances:
Modify the OSEv3.yml file and add the following variables:
openshift_use_crio_only: true
openshift_use_crio: true
# cockpit-docker (which pulls in docker) is installed if cockpit is enabled;
# set osm_use_cockpit to false to avoid that
osm_use_cockpit: false
Modify the all.yml file and add the following variables:
openshift_openstack_master_group_name: node-config-master-crio
openshift_openstack_infra_group_name: node-config-infra-crio
openshift_openstack_compute_group_name: node-config-compute-crio
NOTE: Currently, OpenShift builds require docker.
Add the proper variables to the ~/inventory/group_vars/ files on the Ansible host, such as ~/inventory/group_vars/[masters|openstack_infra_nodes|openstack_compute_nodes].yml:

openshift_use_crio_only: true/false
openshift_use_crio: true/false
openshift_openstack_[master|infra|compute]_group_name: node-config-[master|infra|compute]-crio
osm_use_cockpit: false
Some app nodes use CRI-O, others Docker, and others both CRI-O and Docker. This scenario requires the following steps:
Create a ~/inventory/host_vars/<hostname>.yml file for each instance you want to customize, depending on the runtime it should use:

CRI-O only:

openshift_use_crio_only: true
openshift_use_crio: true
openshift_node_group_name: node-config-[master|infra|compute]-crio
osm_use_cockpit: false

Docker only:

openshift_use_crio: false

CRI-O and Docker side by side:

openshift_use_crio_only: false
openshift_use_crio: true
openshift_node_group_name: node-config-[master|infra|compute]-crio
osm_use_cockpit: false
Also, you must configure the openshift_builddefaults_nodeselectors variable with a node selector that makes builds run on hosts where Docker is the container runtime.
Run the playbooks to provision and install the environment.
Example:
All hosts use Docker as the container runtime except:

* app-node-0: CRI-O only
* app-node-1: Docker (set explicitly)
* app-node-2: CRI-O and Docker side by side

In this particular case, these are the variable files:
~/inventory/group_vars/OSEv3.yml
# Avoid installing cockpit in all nodes
osm_use_cockpit: false
~/inventory/host_vars/app-node-0.${DOMAIN}.yml
# CRI-O only
openshift_use_crio_only: true
openshift_use_crio: true
openshift_node_group_name: node-config-compute-crio
~/inventory/host_vars/app-node-1.${DOMAIN}.yml
# Explicit docker
openshift_use_crio: false
# openshift_node_group_name: node-config-compute
~/inventory/host_vars/app-node-2.${DOMAIN}.yml
# CRI-O and Docker side by side
openshift_use_crio_only: false
openshift_use_crio: true
# As we didn't modify the node_group, it will use docker
After a successful installation, the containerRuntimeVersion field shows the container runtime each node uses:
$ oc get nodes -o=custom-columns=NAME:.metadata.name,CR:.status.nodeInfo.containerRuntimeVersion --selector='node-role.kubernetes.io/compute=true'
NAME CR
app-node-0.shiftstack.automated.lan cri-o://1.11.5
app-node-1.shiftstack.automated.lan docker://1.13.1
app-node-2.shiftstack.automated.lan docker://1.13.1
Also, notice the host running cri-o has a label added automatically such as
runtime=cri-o
:
$ oc get nodes app-node-0.shiftstack.automated.lan --show-labels
NAME STATUS ROLES AGE VERSION LABELS
app-node-0.shiftstack.automated.lan Ready compute 37m v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=1470ffe1-aea0-4806-a1be-e24c83c08e5f,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/hostname=app-node-0.shiftstack.automated.lan,node-role.kubernetes.io/compute=true,runtime=cri-o
And there are some pods running:
$ kubectl get pods --all-namespaces --field-selector spec.nodeName=app-node-0.shiftstack.automated.lan -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
openshift-monitoring node-exporter-d4bq9 2/2 Running 0 24m 10.240.0.19 app-node-0.shiftstack.automated.lan <none>
openshift-node sync-rsgrp 1/1 Running 0 40m 10.240.0.19 app-node-0.shiftstack.automated.lan <none>
openshift-sdn ovs-t54s9 1/1 Running 0 40m 10.240.0.19 app-node-0.shiftstack.automated.lan <none>
openshift-sdn sdn-64tz4 1/1 Running 0 40m 10.240.0.19 app-node-0.shiftstack.automated.lan <none>
[openshift@app-node-0 ~]$ sudo crictl ps
W1025 04:45:04.056296 13242 util_unix.go:75] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
ddfd64fdfb6a3 registry.redhat.io/openshift3/ose-kube-rbac-proxy@sha256:16daf6802d5e88393c271f78037f7c002ff774cd52161c1c1a71f2a84df71868 26 minutes ago Running kube-rbac-proxy 0
3463217a35030 registry.redhat.io/openshift3/prometheus-node-exporter@sha256:e9b47d1705eb027735d528342e0457e597e28e36f6e38a0262b65802156bfe9b 26 minutes ago Running node-exporter 0
02652966e1180 074bf04571e220389b5f3afa7669ea07ddd53d281668820ebf537f054487191f 41 minutes ago Running openvswitch 0
acf2afc99b950 registry.redhat.io/openshift3/ose-node@sha256:3da731d733cd4d67897d22bfdcb027b009494de667bd7a3c870557102ce10bf5 41 minutes ago Running sync 0
6814b5f7a05d7 registry.redhat.io/openshift3/ose-node@sha256:3da731d733cd4d67897d22bfdcb027b009494de667bd7a3c870557102ce10bf5 41 minutes ago Running sdn 0
[openshift@app-node-0 ~]$ sudo docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?