@@ -24,6 +24,7 @@ Environment variables may also be used.
* [Cinder-Backed Persistent Volumes Configuration](#cinder-backed-persistent-volumes-configuration)
* [Cinder-Backed Registry Configuration](#cinder-backed-registry-configuration)
* [Swift or Ceph Rados GW Backed Registry Configuration](#swift-or-ceph-rados-gw-backed-registry-configuration)
+* [Scaling The OpenShift Cluster](#scaling-the-openshift-cluster)
* [Deploying At Scale](#deploying-at-scale)
* [Using A Static Inventory](#using-a-static-inventory)
@@ -883,6 +884,131 @@ Note that the exact environment variable names may vary depending on the content
your OpenStack RC file. If you use Keystone v2, you may not need to set all of these
parameters.

+## Scaling The OpenShift Cluster
+
+Adding more nodes to the cluster is a three-step process: we need to create the
+new nodes, mark them for scaleup and then run the appropriate playbook.
+
+Note that the scaling playbooks are different from the installation ones. If
+you just change the number of nodes in your inventory and re-run
+`provision_install.yml`, the playbooks will fail.
+
+### Scaling the Compute and Infra Nodes
+
+#### 1. Adding Extra Nodes
+
+Edit your `inventory/group_vars/all.yml` and set the new node total in
+`openshift_openstack_num_nodes`.
+
+For example, if your cluster currently has 3 masters, 2 infra and 5 app nodes
+and you want to add another 3 compute nodes, `all.yml` should contain this:
+
+```
+openshift_openstack_num_masters: 3
+openshift_openstack_num_infra: 2
+openshift_openstack_num_nodes: 8 # 5 existing and 3 new
+```
+
+**NOTE**: scaling the infra nodes is exactly the same. Just update the
+`openshift_openstack_num_infra` var instead.
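+
+For instance, a hypothetical infra scaleup from 2 to 3 nodes (continuing the
+example above) would only change this one value:
+
+```
+openshift_openstack_num_infra: 3 # 2 existing and 1 new
+```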
+
+Then run the `provision.yml` playbook only (*do not run*
+`provision_install.yml` -- it can mess up your cluster!).
+
+```
+$ ansible-playbook --user openshift \
+  -i openshift-ansible/playbooks/openstack/inventory.py \
+  -i inventory \
+  openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml
+```
+
+This will create the new nodes while leaving everything else as is.
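+
+If you want a quick sanity check before moving on, you can verify that the
+provisioning completed (this deployment is backed by a Heat stack; the command
+below assumes the Heat CLI plugin for `openstack` is installed):
+
+```
+openstack stack list
+```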
+
+
+#### 2. Marking The Nodes For Scaleup
+
+You can see all nodes (old as well as new) by running:
+
+```
+openstack server list
+```
+
+Note the names of the newly created nodes.
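+
+For example, with the hostnames used in this document you could list just the
+app nodes like so (adjust the `grep` pattern to your own naming scheme):
+
+```
+openstack server list -f value -c Name | grep app-node
+```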
+
+Then create a new text file called `new-nodes` with the following contents:
+
+```
+[new_nodes]
+app-node-5.openshift.example.com
+app-node-6.openshift.example.com
+app-node-7.openshift.example.com
+
+[OSEv3:children]
+new_nodes
+```
+
+That is, an INI group section called `new_nodes`, with the name of every new
+node on its own line. If you're scaling the infra nodes, they should go into the
+`new_nodes` section as well.
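+
+For example, if you had also added one new infra node, the file might look
+like this (the infra hostname is hypothetical, following the naming above):
+
+```
+[new_nodes]
+app-node-5.openshift.example.com
+app-node-6.openshift.example.com
+app-node-7.openshift.example.com
+infra-node-2.openshift.example.com
+
+[OSEv3:children]
+new_nodes
+```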
+
+You must also run the `configure-new-nodes.yml` playbook to apply a few
+OpenStack-specific configurations to the new nodes. Don't forget to pass the
+`new-nodes` file in:
+
+```
+$ ansible-playbook --user openshift \
+  -i openshift-ansible/playbooks/openstack/inventory.py \
+  -i new-nodes \
+  -i inventory \
+  openshift-ansible/playbooks/openstack/openshift-cluster/configure-new-nodes.yml
+```
+
+#### 3. Adding The Nodes To OpenShift
+
+The final step is running the `scaleup.yml` playbook and passing in the
+`new-nodes` file:
+
+```
+$ ansible-playbook --user openshift \
+  -i openshift-ansible/playbooks/openstack/inventory.py \
+  -i new-nodes \
+  -i inventory \
+  openshift-ansible/playbooks/openshift-node/scaleup.yml
+```
+
+After that, your OpenShift cluster will be configured with the extra nodes. Be
+sure to **remove the `-i new-nodes` parameter from any subsequent Ansible
+runs** as these "new nodes" are now "regular nodes".
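+
+To verify the scaleup, you can run the following as the OpenShift admin and
+check that the new nodes appear and eventually report a `Ready` status:
+
+```
+oc get nodes
+```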
+
+For any subsequent scaling operations, you will have to update the `new-nodes`
+file. The `new_nodes` group should only contain nodes that have not yet been
+added to OpenShift.
+So you would remove the previously added nodes and put in the new ones.
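+
+For example, a later scaleup adding one more app node (the hostname below is
+hypothetical) would use a `new-nodes` file like this:
+
+```
+[new_nodes]
+app-node-8.openshift.example.com
+
+[OSEv3:children]
+new_nodes
+```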
+
+For more information on scaling, please visit the [OpenShift Admin
+Guide][admin-guide].
+
+[admin-guide]: https://docs.openshift.org/latest/admin_guide/manage_nodes.html#adding-cluster-hosts_manage-nodes
+
+
+#### 4. Updating The Registry and Router Replicas
+
+If you have added new infra nodes, the extra `docker-registry` and `router`
+pods may not have been created automatically. E.g. if you started with a single
+infra node and then scaled it to three, you might still only see a single
+registry and router.
+
+In that case, you can scale the pods by running the following as the
+OpenShift admin:
+
|
|
|
+```
|
|
|
+oc scale --replicas=<count> dc/docker-registry
|
|
|
+oc scale --replicas=<count> dc/router
|
|
|
+```
|
|
|
+
|
|
|
+Where `<count>` is the number of the pods you want (i.e. the number of your
|
|
|
+infra nodes).
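+
+For example, with the three infra nodes mentioned above:
+
+```
+oc scale --replicas=3 dc/docker-registry
+oc scale --replicas=3 dc/router
+```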
+
+
## Deploying At Scale

By default, heat stack outputs are resolved. This may cause