
Upscaling OpenShift application nodes (#571)

* scale-up: playbook for upscaling app nodes

* scale-up: removed debug

* scale-up: made suggested changes

* scale-up: indentation fix

* upscaling: process split into two playbooks that are executed by a bash script

- upscaling_run.sh: bash script, usage displayed using -h parameter
- upscaling_pre-tasks: check that new value is higher, change inventory variable
- upscaling_scale-up: rerun provisioning and installation, verify change

* upscaling_run: fixed openshift-ansible-contrib directory name

* upscaling_run: inventory can be entered as relative path

* upscaling_scale-up: fixed formatting

* upscaling: minor changes

* upscaling: moved to .../provisioning/openstack directory, README updated, minor changes made

* README: minor changes

* README: formatting

* upscaling: minor fix

* upscaling: fix

* upscaling: added customisations, fixes

- openshift-ansible-contrib and openshift-ansible paths are customisable
- fixed implicit incrementation by 1

* upscaling: fixes

* upscaling: fixes

* upscaling: another fix

* upscaling: another fix

* upscaling: fix

* upscaling: back to a single playbook, README updated

* minor fix

* pre_tasks: added labels for autoscaling

* scale-up: fixes

* scale-up: fixed host variables, post-verification is only based on labels

* scale-up: added openshift-ansible path customisation

- path has to be absolute, cannot contain '/' at the end

* scale-up: fix

* scale-up: debug removed

* README: added docs on openshift_ansible_dir, note about bastion

* static_inventory: newly added nodes are added to new_nodes group

- note: re-running provisioning fails when trying to install docker

* removing new line

* scale-up: running byo/config.yml or scaleup.yml based on the situation

- (whether there is an existing deployment or not)

* openstack.yml: indentation fix

* added refresh inventory

* upscaling: new_nodes only contains new nodes; it is not used during the first deployment

* static_inventory: make sure that new nodes end up only in their new_nodes group

* bug fixes

* another fix

* fixed condition

* scale-up, static_inventory role: all app node data gathered before provisioning

* upscaling: bug fixes

* upscaling: more fixes

* fixes

* upscaling: fix

* upscaling: fix

* upscaling: another logic fix

* bug fix for non-scaling deployments
Tlacenka 7 years ago
parent commit d361dc4b30

+ 21 - 0
playbooks/provisioning/openstack/README.md

@@ -568,6 +568,27 @@ In order to access UI, the ssh-tunnel service will be created and started on the
 control node. Make sure to remove these changes and the service manually, when not
 needed anymore.
 
+## Scale Deployment up/down
+
+### Scaling up
+
+One can scale up the number of application nodes by executing the ansible playbook
+`openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml`.
+This also works when there is currently no deployment; in that case the playbook
+creates one from scratch. The `increment_by` variable specifies by how many nodes
+the deployment should be scaled up (if none exists, it serves as the target number
+of application nodes). The path to the `openshift-ansible` directory can be
+customised via the `openshift_ansible_dir` variable. Its value must be an absolute
+path to `openshift-ansible` and must not end with a trailing '/'.
+
+Usage:
+
+```
+ansible-playbook -i <path to inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml [-e increment_by=<number>] [-e openshift_ansible_dir=<path to openshift-ansible>]
+```
+
+Note: This playbook works only without a bastion node (`openstack_use_bastion: False`).
+
 ## License
 
 As the rest of the openshift-ansible-contrib repository, the code here is
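
The increment logic described in the README amounts to simple arithmetic; a minimal shell sketch (the starting count of 3 and increment of 2 are example values — in the playbook the current count comes from a live `oc get nodes -l autoscaling=app` query):

```shell
# Sketch of the scale-up arithmetic: new target node count is the current
# count plus increment_by (values below are illustrative stand-ins).
oc_old_num_nodes=3
increment_by=2
# Guard mirrored from the playbook's assert: increment_by must be at least 1
[ "$increment_by" -ge 1 ] || { echo "increment_by must be at least 1" >&2; exit 1; }
openstack_num_nodes=$((oc_old_num_nodes + increment_by))
echo "$openstack_num_nodes"
```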

+ 4 - 0
playbooks/provisioning/openstack/pre_tasks.yml

@@ -47,3 +47,7 @@
 - name: Set openshift_cluster_node_labels for the app group
   set_fact:
     openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'region': 'primary'}}, recursive=True) }}"
+
+- name: Set openshift_cluster_node_labels for auto-scaling app nodes
+  set_fact:
+    openshift_cluster_node_labels: "{{ openshift_cluster_node_labels | combine({'app': {'autoscaling': 'app'}}, recursive=True) }}"
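
Taken together, the two `set_fact` tasks merge recursively into the app group's labels; the effective value should look roughly like this (a sketch of the merged structure, not output from a real run):

```
openshift_cluster_node_labels:
  app:
    region: primary
    autoscaling: app
```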

+ 75 - 0
playbooks/provisioning/openstack/scale-up.yaml

@@ -0,0 +1,75 @@
+---
+# Get the needed information about the current deployment
+- hosts: masters[0]
+  tasks:
+  - name: Get number of app nodes
+    shell: oc get nodes -l autoscaling=app --no-headers=true | wc -l
+    register: oc_old_num_nodes
+  - name: Get names of app nodes
+    shell: oc get nodes -l autoscaling=app --no-headers=true | cut -f1 -d " "
+    register: oc_old_app_nodes
+
+- hosts: localhost
+  tasks:
+  # Since both the number and the names of the app nodes are needed later,
+  # localhost facts for these values need to be set
+  - name: Store old number and names of app nodes locally (if there is an existing deployment)
+    when: '"masters" in groups'
+    register: set_fact_result
+    set_fact:
+      oc_old_num_nodes: "{{ hostvars[groups['masters'][0]]['oc_old_num_nodes'].stdout }}"
+      oc_old_app_nodes: "{{ hostvars[groups['masters'][0]]['oc_old_app_nodes'].stdout_lines }}"
+
+  - name: Set default values for old app nodes (if there is no existing deployment)
+    when: 'set_fact_result | skipped'
+    set_fact:
+      oc_old_num_nodes: 0
+      oc_old_app_nodes: []
+
+  # Set how many nodes are to be added (1 by default)
+  - name: Set how many nodes are to be added
+    set_fact:
+      increment_by: 1
+  - name: Check that the number corresponds to scaling up (not down)
+    assert:
+      that: 'increment_by | int >= 1'
+      msg: >
+        FAIL: The value of increment_by must be at least 1
+        (but it is {{ increment_by | int }}).
+  - name: Update openstack_num_nodes variable
+    set_fact:
+      openstack_num_nodes: "{{ oc_old_num_nodes | int + increment_by | int }}"
+
+# Run provision.yaml with higher number of nodes to create a new app-node VM
+- include: provision.yaml
+
+# Run config.yml to perform openshift installation
+# Path to openshift-ansible can be customised:
+# - the value of openshift_ansible_dir has to be an absolute path
+# - the path cannot contain the '/' symbol at the end
+
+# Creating a new deployment by the full installation
+- include: "{{ openshift_ansible_dir }}/playbooks/byo/config.yml"
+  vars:
+    openshift_ansible_dir: ../../../../openshift-ansible
+  when: 'not groups["new_nodes"] | list'
+
+# Scaling up existing deployment
+- include: "{{ openshift_ansible_dir }}/playbooks/byo/openshift-node/scaleup.yml"
+  vars:
+    openshift_ansible_dir: ../../../../openshift-ansible
+  when: 'groups["new_nodes"] | list'
+
+# Post-verification: Verify new number of nodes
+- hosts: masters[0]
+  tasks:
+  - name: Get number of nodes
+    shell: oc get nodes -l autoscaling=app --no-headers=true | wc -l
+    register: oc_new_num_nodes
+  - name: Check that the actual result matches the defined value
+    assert:
+      that: 'oc_new_num_nodes.stdout | int == (hostvars["localhost"]["oc_old_num_nodes"] | int + hostvars["localhost"]["increment_by"] | int)'
+      msg: >
+        FAIL: Number of application nodes has not been increased accordingly
+        (it should be {{ hostvars["localhost"]["oc_old_num_nodes"] | int + hostvars["localhost"]["increment_by"] | int }}
+        but it is {{ oc_new_num_nodes.stdout | int }}).
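
The two `oc` queries at the top of the playbook can be exercised against a canned fixture to see what ends up in `oc_old_num_nodes` and `oc_old_app_nodes` (hostnames and columns below are made up; a real run pipes live `oc get nodes -l autoscaling=app --no-headers=true` output):

```shell
# Fixture standing in for the output of `oc get nodes -l autoscaling=app`.
nodes='app-node-0.example.com   Ready   node   7d   v1.9.1
app-node-1.example.com   Ready   node   7d   v1.9.1'
# Same pipelines as the playbook's two shell tasks:
num=$(printf '%s\n' "$nodes" | wc -l)             # feeds oc_old_num_nodes
names=$(printf '%s\n' "$nodes" | cut -f1 -d " ")  # feeds oc_old_app_nodes
echo "$num"
echo "$names"
```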

+ 15 - 0
roles/static_inventory/tasks/filter_out_new_app_nodes.yaml

@@ -0,0 +1,15 @@
+---
+- name: Add all new app nodes to new_app_nodes
+  when:
+  - 'oc_old_app_nodes is defined'
+  - 'oc_old_app_nodes | list'
+  - 'node.name not in oc_old_app_nodes'
+  - 'node["metadata"]["sub-host-type"] == "app"'
+  register: result
+  set_fact:
+    new_app_nodes: '{{ new_app_nodes }} + [ {{ node }} ]'
+
+- name: If the node was added to new_nodes, remove it from registered nodes
+  set_fact:
+    registered_nodes: '{{ registered_nodes | difference([ node ]) }}'
+  when: 'not result | skipped'
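
The role's filter conditions can be sketched in plain shell (node names and the `app-node-*` naming convention are assumptions for illustration; the real role inspects `node["metadata"]["sub-host-type"]` rather than the hostname):

```shell
# Partition registered nodes: a node is "new" if it is an app node and was
# not reported by `oc get nodes` before provisioning (names illustrative).
oc_old_app_nodes="app-node-0 app-node-1"
registered_nodes="app-node-0 app-node-1 app-node-2 infra-node-0"
new_app_nodes=""
for node in $registered_nodes; do
  case " $oc_old_app_nodes " in
    *" $node "*) continue ;;   # already part of the old deployment
  esac
  case "$node" in
    app-node-*) new_app_nodes="$new_app_nodes $node" ;;  # stand-in for sub-host-type == app
  esac
done
new_app_nodes="${new_app_nodes# }"
echo "$new_app_nodes"
```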

+ 24 - 2
roles/static_inventory/tasks/openstack.yml

@@ -37,7 +37,6 @@
       with_items: "{{ registered_nodes|difference(registered_nodes_floating) }}"
       add_host:
         name: '{{ item.name }}'
-        groups: '{{ item.metadata.group }}'
         ansible_host: >-
           {% if use_bastion|bool -%}
           {{ item.name }}
@@ -57,7 +56,6 @@
       with_items: "{{ registered_nodes_floating }}"
       add_host:
         name: '{{ item.name }}'
-        groups: '{{ item.metadata.group }}'
         ansible_host: >-
           {% if use_bastion|bool -%}
           {{ item.name }}
@@ -80,6 +78,30 @@
           {{ item.public_v4 }}
           {%- endif %}
 
+    # Split registered_nodes into old nodes and new app nodes
+    # Add new app nodes to new_nodes host group for upscaling
+    - name: Create new_app_nodes variable
+      set_fact:
+        new_app_nodes: []
+
+    - name: Filter new app nodes out of registered_nodes
+      include: filter_out_new_app_nodes.yaml
+      with_items: "{{ registered_nodes }}"
+      loop_control:
+        loop_var: node
+
+    - name: Add new app nodes to the new_nodes section (if a deployment already exists)
+      with_items: "{{ new_app_nodes }}"
+      add_host:
+        name: "{{ item.name }}"
+        groups: new_nodes, app
+
+    - name: Add the rest of cluster nodes to their corresponding groups
+      with_items: "{{ registered_nodes }}"
+      add_host:
+        name: '{{ item.name }}'
+        groups: '{{ item.metadata.group }}'
+
     - name: Add bastion node to inventory
       add_host:
         name: bastion

+ 4 - 0
roles/static_inventory/templates/inventory.j2

@@ -40,6 +40,7 @@ dns
 nodes
 etcd
 lb
+new_nodes
 
 # Set variables common for all OSEv3 hosts
 [OSEv3:vars]
@@ -78,6 +79,8 @@ dns.{{ stack_name }}
 [lb:children]
 lb.{{ stack_name }}
 
+[new_nodes:children]
+
 # Empty placeholders for all groups of the cluster nodes
 [masters.{{ stack_name }}]
 [etcd.{{ stack_name }}]
@@ -86,6 +89,7 @@ lb.{{ stack_name }}
 [app.{{ stack_name }}]
 [dns.{{ stack_name }}]
 [lb.{{ stack_name }}]
+[new_nodes.{{ stack_name }}]
 
 # BEGIN Autogenerated groups
 {% for group in groups %}
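
For illustration, a rendered static inventory for a hypothetical stack named `openshift` would then include the new placeholders below (a sketch assuming the surrounding group list is `[OSEv3:children]`; the `new_nodes` sections stay empty until a scale-up attaches hosts at runtime via `add_host` with `groups: new_nodes, app`):

```
[OSEv3:children]
dns
nodes
etcd
lb
new_nodes

[new_nodes:children]

[new_nodes.openshift]
```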