
add S3 bucket cleanup

Default to just cleaning out all the objects in the S3 bucket (IFF openshift_aws_create_s3 is 'true').

If you really, truly want to delete the S3 bucket and free up the bucket name, you can set openshift_aws_really_delete_s3_bucket to 'true' ('false' by default).
Joel Diaz 7 years ago
Parent commit f6afef5ca3
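
As a minimal sketch of how these two toggles could be set, assuming they go in the same provisioning_vars.yml file referenced in the README below (the variable names and defaults come from this commit; placing them in provisioning_vars.yml is an assumption):

```
# provisioning_vars.yml (illustrative excerpt)
# Empty the bucket contents on uninstall only if we created the bucket
# during provisioning ('true' is the default).
openshift_aws_create_s3: True
# 'false' by default; set to 'true' only if you really want to delete the
# bucket itself and free up the globally unique bucket name.
openshift_aws_really_delete_s3_bucket: False
```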

+ 7 - 3
playbooks/aws/README.md

@@ -201,9 +201,7 @@ There are more enhancements that are arriving for provisioning.  These will incl
 
 ## Uninstall / Deprovisioning
 
-At this time, only deprovisioning of the output of the prerequisites step is provided. You can/must manually remove things like ELBs and scale groups before attempting to undo the work by the preprovisiong step.
-
-To undo the work done by the prerequisites playbook, simply call the uninstall_prerequisites.yml playbook. You should use the same inventory file and provisioning_vars.yml file that was used during provisioning.
+To undo the work done by the prerequisites playbook, simply call the uninstall_prerequisites.yml playbook. You must remove any other objects (i.e. ELBs, instances, etc.) beforehand. You should use the same inventory file and provisioning_vars.yml file that was used during provisioning.
 
 ```
 ansible-playbook -i <previous inventory file> -e @<previous provisioning_vars file> uninstall_prerequisites.yml
@@ -211,4 +209,10 @@ ansible-playbook -i <previous inventory file> -e @<previous provisioning_vars fi
 
 This should result in removal of the security groups and VPC that were created.
 
+Cleaning up the S3 bucket contents can be accomplished with:
+
+```
+ansible-playbook -i <previous inventory file> -e @<previous provisioning_vars file> uninstall_s3.yml
+```
+
 NOTE: If you want to also remove the ssh keys that were uploaded (**these ssh keys would be shared if you are running multiple clusters in the same AWS account** so we don't remove these by default) then you should add 'openshift_aws_enable_uninstall_shared_objects: True' to your provisioning_vars.yml file.
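
For example, the entry described in the NOTE above could be added like this (a hedged sketch; this is just the variable from the NOTE dropped into provisioning_vars.yml):

```
# provisioning_vars.yml (illustrative excerpt)
# Also remove shared objects such as the uploaded ssh keys during uninstall.
openshift_aws_enable_uninstall_shared_objects: True
```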

+ 10 - 0
playbooks/aws/openshift-cluster/uninstall_s3.yml

@@ -0,0 +1,10 @@
+---
+- name: Empty/delete s3 bucket
+  hosts: localhost
+  connection: local
+  tasks:
+  - name: empty/delete s3 bucket
+    include_role:
+      name: openshift_aws
+      tasks_from: uninstall_s3.yml
+    when: openshift_aws_create_s3 | default(true) | bool

+ 5 - 0
roles/openshift_aws/defaults/main.yml

@@ -322,3 +322,8 @@ openshift_aws_masters_groups: masters,etcd,nodes
 # By default, don't delete things like the shared IAM instance
 # profile and uploaded ssh keys
 openshift_aws_enable_uninstall_shared_objects: False
+# S3 bucket names are global by default and can take minutes/hours for the
+# name to become available for re-use (assuming someone doesn't take the
+# name in the meantime). Default to just emptying the contents of the S3
+# bucket if we've been asked to create the bucket during provisioning.
+openshift_aws_really_delete_s3_bucket: False

+ 26 - 0
roles/openshift_aws/tasks/uninstall_s3.yml

@@ -0,0 +1,26 @@
+---
+- name: empty S3 bucket
+  block:
+  - name: get S3 object list
+    aws_s3:
+      bucket: "{{ openshift_aws_s3_bucket_name }}"
+      mode: list
+      region: "{{ openshift_aws_region }}"
+    register: s3_out
+
+  - name: delete S3 objects
+    aws_s3:
+      bucket: "{{ openshift_aws_s3_bucket_name }}"
+      mode: delobj
+      object: "{{ item }}"
+    with_items: "{{ s3_out.s3_keys }}"
+  when: openshift_aws_create_s3 | bool
+
+- name: delete S3 bucket
+  aws_s3:
+    bucket: "{{ openshift_aws_s3_bucket_name }}"
+    mode: delete
+    region: "{{ openshift_aws_region }}"
+  when:
+  - openshift_aws_create_s3 | bool
+  - openshift_aws_really_delete_s3_bucket | bool
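
As a usage sketch (not part of this commit), a standalone playbook that both empties the bucket and deletes it could wrap the same tasks file; openshift_aws_s3_bucket_name and openshift_aws_region are assumed to come from the usual inventory/provisioning_vars.yml:

```
---
# Illustrative only: force full bucket deletion by overriding the defaults.
- name: Empty and really delete the S3 bucket
  hosts: localhost
  connection: local
  vars:
    openshift_aws_create_s3: True
    openshift_aws_really_delete_s3_bucket: True
  tasks:
  - name: empty and delete s3 bucket
    include_role:
      name: openshift_aws
      tasks_from: uninstall_s3.yml
```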