
Merge pull request #6607 from tomassedovic/fix-cinder-pv

Automatic merge from submit-queue.

Fix Cinder Persistent Volume support

This documents how to use Cinder-backed persistent volumes with OpenStack.

It needed a change to the dynamic inventory because the `openstack` cloud provider plugin does require internal name resolution, and the `openshift_hostname` value must match the name of the Nova server.

In addition, we need to be able to specify v2 of the Cinder API for now, as described in: https://github.com/openshift/openshift-docs/issues/5730
OpenShift Merge Robot (commit fdc5829d6c)

+ 3 - 0
inventory/hosts.example

@@ -286,6 +286,9 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',
 #openshift_cloudprovider_openstack_region=region
 #openshift_cloudprovider_openstack_lb_subnet_id=subnet_id
 #
+# Note: If you get a "BS API version autodetection failed" error when provisioning Cinder volumes, you may need this setting
+#openshift_cloudprovider_openstack_blockstorage_version=v2
+#
 # GCE
 #openshift_cloudprovider_kind=gce
 #

+ 106 - 0
playbooks/openstack/advanced-configuration.md

@@ -372,6 +372,112 @@ In order to set a custom entrypoint, update `openshift_master_cluster_public_hos
 Note that an empty hostname does not work, so if your domain is `openshift.example.com`,
 you cannot set this value to simply `openshift.example.com`.
 
+
+## Using Cinder-backed Persistent Volumes
+
+You will need to set up OpenStack credentials. The `lookup('env', ...)` calls
+below read the standard `OS_*` environment variables, so source your OpenStack
+RC file first, then put the following in your `inventory/group_vars/OSEv3.yml`:
+
+    openshift_cloudprovider_kind: openstack
+    openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
+    openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
+    openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
+    openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_PROJECT_NAME') }}"
+    openshift_cloudprovider_openstack_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"
+    openshift_cloudprovider_openstack_blockstorage_version: v2
+
+**NOTE**: You must specify the Block Storage version as v2, because OpenShift
+does not support the v3 API yet and the automatic version detection currently
+does not work properly.
+
+For more information, consult the [Configuring for OpenStack page in the OpenShift documentation][openstack-credentials].
+
+[openstack-credentials]: https://docs.openshift.org/latest/install_config/configuring_openstack.html#install-config-configuring-openstack
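+
+With the credentials above, the `openshift_cloud_provider` role renders the
+cloud provider configuration file (typically
+`/etc/origin/cloudprovider/openstack.conf` on each node). A rough sketch of the
+result, with placeholder values and only the keys set above:
+
+```ini
+[Global]
+auth-url = https://keystone.example.com:5000/v3
+username = openshift-user
+password = secret
+domain-name = Default
+tenant-name = openshift-project
+
+[BlockStorage]
+bs-version=v2
+```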
+
+**NOTE**: The OpenStack integration currently requires DNS to be configured and
+running, and the `openshift_hostname` variable must match the Nova server name
+for each node. The cluster deployment will fail without this. If you use the
+provided OpenStack dynamic inventory and configure the
+`openshift_openstack_dns_nameservers` Ansible variable, this is handled
+for you.
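+
+For example, with the sample inventory you could set (the nameserver address is
+a placeholder for a resolver reachable from the cluster network):
+
+```yaml
+# inventory/group_vars/all.yml
+openshift_openstack_dns_nameservers:
+  - 203.0.113.53
+```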
+
+After a successful deployment, the cluster is configured for Cinder persistent
+volumes.
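+
+Dynamic provisioning of Cinder volumes goes through a StorageClass that uses
+the `kubernetes.io/cinder` provisioner. The playbooks normally create a default
+class when a cloud provider is configured; a minimal sketch of an equivalent
+class (the `standard` name is only an example) looks like this:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: standard
+  annotations:
+    # mark this class as the default for claims that do not name one;
+    # older clusters may expect the storageclass.beta.kubernetes.io prefix
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: kubernetes.io/cinder
+```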
+
+### Validation
+
+1. Log in and create a new project (with `oc login` and `oc new-project`)
+2. Create a file called `cinder-claim.yaml` with the following contents:
+
+```yaml
+apiVersion: "v1"
+kind: "PersistentVolumeClaim"
+metadata:
+  name: "claim1"
+spec:
+  accessModes:
+    - "ReadWriteOnce"
+  resources:
+    requests:
+      storage: "1Gi"
+```
+3. Run `oc create -f cinder-claim.yaml` to create the Persistent Volume Claim object in OpenShift
+4. Run `oc describe pvc claim1` to verify that the claim was created and its Status is `Bound`
+5. Run `openstack volume list`
+   * A new volume named `kubernetes-dynamic-pvc-<uuid>` should have been created
+   * Its size should be `1` (matching the claim's `1Gi` request)
+   * It should not be attached to any server
+6. Create a file called `mysql-pod.yaml` with the following contents:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mysql
+  labels:
+    name: mysql
+spec:
+  containers:
+    - resources:
+        limits:
+          cpu: 0.5
+      image: openshift/mysql-55-centos7
+      name: mysql
+      env:
+        - name: MYSQL_ROOT_PASSWORD
+          value: yourpassword
+        - name: MYSQL_USER
+          value: wp_user
+        - name: MYSQL_PASSWORD
+          value: wp_pass
+        - name: MYSQL_DATABASE
+          value: wp_db
+      ports:
+        - containerPort: 3306
+          name: mysql
+      volumeMounts:
+        - name: mysql-persistent-storage
+          mountPath: /var/lib/mysql/data
+  volumes:
+    - name: mysql-persistent-storage
+      persistentVolumeClaim:
+        claimName: claim1
+```
+
+7. Run `oc create -f mysql-pod.yaml` to create the pod
+8. Run `oc describe pod mysql`
+   * Its events should show that the pod has successfully attached the volume above
+   * It should show no errors
+   * `openstack volume list` should show the volume attached to an OpenShift app node
+   * NOTE: this can take several seconds
+9. After a while, `oc get pod` should show the `mysql` pod as running
+10. Run `oc delete pod mysql` to remove the pod
+    * The Cinder volume should no longer be attached
+11. Run `oc delete pvc claim1` to remove the volume claim
+    * The Cinder volume should be deleted
+
+
+
 ## Creating and using a Cinder volume for the OpenShift registry
 
 You can optionally have the playbooks create a Cinder volume and set

+ 1 - 0
playbooks/openstack/sample-inventory/group_vars/OSEv3.yml

@@ -20,6 +20,7 @@ openshift_hosted_registry_wait: True
 #openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
 #openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}"
 #openshift_cloudprovider_openstack_region: "{{ lookup('env', 'OS_REGION_NAME') }}"
+#openshift_cloudprovider_openstack_blockstorage_version: v2
 
 
 ## Use Cinder volume for Openshift registry:

+ 4 - 2
playbooks/openstack/sample-inventory/inventory.py

@@ -89,13 +89,15 @@ def build_inventory():
         # TODO(shadower): what about multiple networks?
         if server.private_v4:
             hostvars['private_v4'] = server.private_v4
+            hostvars['openshift_ip'] = server.private_v4
+
             # NOTE(shadower): Yes, we set both hostname and IP to the private
             # IP address for each node. OpenStack doesn't resolve nodes by
             # name at all, so using a hostname here would require an internal
             # DNS which would complicate the setup and potentially introduce
             # performance issues.
-            hostvars['openshift_ip'] = server.private_v4
-            hostvars['openshift_hostname'] = server.private_v4
+            hostvars['openshift_hostname'] = server.metadata.get(
+                'openshift_hostname', server.private_v4)
         hostvars['openshift_public_hostname'] = server.name
 
         if server.metadata['host-type'] == 'cns':

+ 4 - 0
roles/openshift_cloud_provider/templates/openstack.conf.j2

@@ -19,3 +19,7 @@ region = {{ openshift_cloudprovider_openstack_region }}
 [LoadBalancer]
 subnet-id = {{ openshift_cloudprovider_openstack_lb_subnet_id }}
 {% endif %}
+{% if openshift_cloudprovider_openstack_blockstorage_version is defined %}
+[BlockStorage]
+bs-version={{ openshift_cloudprovider_openstack_blockstorage_version }}
+{% endif %}

+ 3 - 0
roles/openshift_openstack/templates/heat_stack_server.yaml.j2

@@ -212,6 +212,9 @@ resources:
         host-type: { get_param: type }
         sub-host-type:    { get_param: subtype }
         node_labels: { get_param: node_labels }
+{% if openshift_openstack_dns_nameservers %}
+        openshift_hostname: { get_param: name }
+{% endif %}
       scheduler_hints: { get_param: scheduler_hints }
 
 {% if use_trunk_ports|default(false)|bool %}
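
With the two changes above, each Heat-created server carries its intended
hostname in its Nova metadata whenever `openshift_openstack_dns_nameservers` is
set, and the dynamic inventory picks that up as `openshift_hostname` instead of
the private IP. Roughly, an app node's metadata would then look like this
(hostname and labels are placeholders):

```yaml
host-type: node
sub-host-type: app
node_labels: {}
openshift_hostname: app-node-0.openshift.example.com
```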