
Merge pull request #10147 from rusichen/master

Use clusterid attribute to filter servers in dynamic inventory
OpenShift Merge Robot 6 years ago
parent
commit
09f24672ce
2 changed files with 38 additions and 4 deletions
  1. 31 0
      playbooks/openstack/configuration.md
  2. 7 4
      playbooks/openstack/resources.py

+ 31 - 0
playbooks/openstack/configuration.md

@@ -18,6 +18,7 @@ Environment variables may also be used.
 * [DNS Configuration](#dns-configuration)
 * [Floating IP Address Configuration](#floating-ip-address-configuration)
 * [All-in-one Deployment Configuration](#all-in-one-deployment-configuration)
+* [Multi-env Deployment Configuration](#multi-env-deployment-configuration)
 * [Building Node Images](#building-node-images)
 * [Kuryr Networking Configuration](#kuryr-networking-configuration)
 * [Provider Network Configuration](#provider-network-configuration)
@@ -546,6 +547,36 @@ added, because there are no dedicated infra nodes, so you will have to add it
 manually. See
 [Custom DNS Records Configuration](#custom-dns-records-configuration).
 
+## Multi-env Deployment Configuration
+
+If you want to deploy multiple OpenShift environments in the same OpenStack
+project, you can do so with a few configuration changes.
+
+First, set the `openshift_openstack_clusterid` option in the
+`inventory/group_vars/all.yml` file to a unique name for the cluster.
+
+```
+vi inventory/group_vars/all.yml
+
+openshift_openstack_clusterid: foobar
+openshift_openstack_public_dns_domain: example.com
+```
+
+Second, set the `OPENSHIFT_CLUSTER` environment variable. Its value must be
+`openshift_openstack_clusterid` and `openshift_openstack_public_dns_domain`
+joined with a dot, because the `clusterid` value stored in the instance
+metadata is concatenated in the same way. If the values differ, the instances
+will not appear in the Ansible inventory.
+
+```
+export OPENSHIFT_CLUSTER='foobar.example.com'
+```
+
+Then run the deployment playbooks as usual. Once the first environment is
+deployed, update the options above for the next environment and run the
+deployment playbooks again.
+
+
 ## Building Node Images
 
 It is possible to build the OpenShift images in advance (instead of installing

+ 7 - 4
playbooks/openstack/resources.py

@@ -19,6 +19,8 @@ except ImportError:
 from keystoneauth1.exceptions.catalog import EndpointNotFound
 import shade
 
+OPENSHIFT_CLUSTER = os.getenv('OPENSHIFT_CLUSTER')
+
 
 def base_openshift_inventory(cluster_hosts):
     '''Set the base openshift inventory.'''
@@ -124,11 +126,12 @@ def build_inventory():
     # Use an environment variable to optionally skip returning the app nodes.
     show_compute_nodes = os.environ.get('OPENSTACK_SHOW_COMPUTE_NODES', 'true').lower() == "true"
 
-    # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER`
-    # environment variable.
+    # If the `OPENSHIFT_CLUSTER` environment variable is defined, use it
+    # to filter servers by their `metadata.clusterid` attribute.
     cluster_hosts = [
         server for server in cloud.list_servers()
-        if 'metadata' in server and 'clusterid' in server.metadata and
+        if 'clusterid' in server.get('metadata', {}) and
+        (OPENSHIFT_CLUSTER is None or server.metadata.clusterid == OPENSHIFT_CLUSTER) and
         (show_compute_nodes or server.metadata.get('sub-host-type') != 'app')]
 
     inventory = base_openshift_inventory(cluster_hosts)
@@ -183,7 +186,7 @@ def build_inventory():
 
 def _get_stack_outputs(cloud_client):
     """Returns a dictionary with the stack outputs"""
-    cluster_name = os.getenv('OPENSHIFT_CLUSTER', 'openshift-cluster')
+    cluster_name = OPENSHIFT_CLUSTER or 'openshift-cluster'
 
     stack = cloud_client.get_stack(cluster_name)
     if stack is None or stack['stack_status'] not in (