
Merge pull request #3712 from wongma7/remove-kube-nfs

Merged by openshift-bot
OpenShift Bot 8 years ago
parent
commit
577b4f46eb

+ 0 - 114
roles/kube_nfs_volumes/README.md

@@ -1,114 +0,0 @@
-# kube_nfs_volumes
-
-This role exports disks as a set of Kubernetes persistent volumes.
-It does so by partitioning the disks, creating an ext4 filesystem on each
-partition, mounting the partitions, exporting the mounts via NFS, and adding
-these NFS shares as persistent volumes to an existing Kubernetes installation.
-
-All partitions on the given disks are used as persistent volumes, including
-pre-existing partitions! There should be no other data (such as an operating
-system) on the disks!
-
-## Requirements
-
-* Ansible 2.2
-* Running Kubernetes with NFS persistent volume support (on a remote machine).
-* Works only on RHEL/Fedora-like distros.
-
-## Role Variables
-
-```
-# Options of NFS exports.
-nfs_export_options: "*(rw,no_root_squash,insecure,no_subtree_check)"
-
-# Directory where the created partitions should be mounted. They will be
-# mounted as <mount_dir>/sda1 etc.
-mount_dir: /exports
-
-# Comma-separated list of disks to partition.
-# This role always assumes that all partitions on these disks are used as
-# persistent volumes.
-disks: /dev/sdb,/dev/sdc
-
-# Whether to re-partition already partitioned disks.
-# Even when this is 'false' and the disks are not repartitioned, all existing
-# partitions on the disks are still exported via NFS as persistent volumes!
-force: false
-
-# Specification of the sizes of partitions to create. See library/partitionpool.py
-# for details.
-sizes: 100M
-
-# URL of Kubernetes API server, incl. port.
-kubernetes_url: https://10.245.1.2:6443
-
-# Token to use for authentication to the API server
-kubernetes_token: tJdce6Fn3cL1112YoIJ5m2exzAbzcPZX
-
-# API version to use for Kubernetes
-kube_api_version: v1
-```
-
-## Dependencies
-
-None
-
-## Example Playbook
-
-With this playbook, `/dev/sdb` is partitioned into 100 MiB partitions, all of
-which are mounted as `/exports/sdb<N>` directories, and all these directories
-are exported via NFS and added as persistent volumes to the Kubernetes cluster
-running at `https://10.245.1.2:6443`.
-
-    - hosts: servers
-      roles:
-        - role: kube_nfs_volumes
-          disks: "/dev/sdb"
-          sizes: 100M
-          kubernetes_url: https://10.245.1.2:6443
-          kubernetes_token: tJdce6Fn3cL1112YoIJ5m2exzAbzcPZX
-
-See library/partitionpool.py for details on how the `sizes` parameter can be
-used to create partitions of various sizes.
-
-## Full example
-Let's say there are two machines, 10.0.0.1 and 10.0.0.2, that we want to use as
-NFS servers for our Kubernetes cluster, with the Kubernetes public API running at
-https://10.245.1.2:6443.
-
-Both servers have three 1 TB disks, /dev/sda for the system and /dev/sdb and
-/dev/sdc to be partitioned. We want to split the data disks into 5, 10 and
-20 GiB partitions so that 10% of the total capacity is in 5 GiB partitions, 40%
-in 10 GiB and 50% in 20 GiB partitions.
-
-That means each data disk will have 20x 5 GiB, 40x 10 GiB and 25x 20 GiB
-partitions.
-
-* Create an `inventory` file:
-    ```
-    [nfsservers]
-    10.0.0.1
-    10.0.0.2
-    ```
-
-* Create an Ansible playbook, say `setupnfs.yaml`:
-    ```
-    - hosts: nfsservers
-      become: yes
-      roles:
-         - role: kube_nfs_volumes
-           disks: "/dev/sdb,/dev/sdc"
-           sizes: 5G:10,10G:40,20G:50
-           force: no
-           kubernetes_url: https://10.245.1.2:6443
-           kubernetes_token: tJdce6Fn3cL1112YoIJ5m2exzAbzcPZX
-    ```
-
-* Run the playbook:
-    ```
-    ansible-playbook -i inventory setupnfs.yaml
-    ```
-
-## License
-
-Apache 2.0
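
As a quick sanity check of the capacity split in the removed README's full example, here is a minimal Python sketch of the idealized math (an illustration only, not part of the role; the real partitionpool module additionally rolls any leftover space into the remaining size classes):

```python
# Idealized split of a 1 TiB data disk for sizes "5G:10,10G:40,20G:50".
# Each weight is treated here as a share of the total capacity.
disk_gib = 1024
spec = [(5, 10), (10, 40), (20, 50)]          # (partition size in GiB, weight)
total_weight = sum(weight for _, weight in spec)

for size, weight in spec:
    share = disk_gib * weight / total_weight  # capacity reserved for this size class
    print(f"{int(share // size)} partitions of {size} GiB")

# -> 20x 5 GiB, 40x 10 GiB and 25x 20 GiB partitions, matching the README above.
```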

+ 0 - 16
roles/kube_nfs_volumes/defaults/main.yml

@@ -1,16 +0,0 @@
----
-kubernetes_url: https://172.30.0.1:443
-
-kube_api_version: v1
-
-kube_req_template: "../templates/{{ kube_api_version }}/nfs.json.j2"
-
-# Options of NFS exports.
-nfs_export_options: "*(rw,no_root_squash,insecure,no_subtree_check)"
-
-# Directory where the created partitions should be mounted. They will be
-# mounted as <mount_dir>/sda1 etc.
-mount_dir: /exports
-
-# Force re-partitioning the disks
-force: false

+ 0 - 3
roles/kube_nfs_volumes/handlers/main.yml

@@ -1,3 +0,0 @@
----
-- name: restart nfs
-  systemd: name=nfs-server state=restarted

+ 0 - 247
roles/kube_nfs_volumes/library/partitionpool.py

@@ -1,247 +0,0 @@
-#!/usr/bin/python
-"""
-Ansible module for partitioning.
-"""
-
-from __future__ import print_function
-
-# There is no pyparted on our Jenkins worker
-# pylint: disable=import-error
-import parted
-
-DOCUMENTATION = """
----
-module: partitionpool
-short_description: Partition a disk into partitions.
-description:
-  - Creates partitions on a given disk based on partition sizes and their weights.
-    Unless the 'force' option is set to True, it ignores already partitioned disks.
-
-    When the disk is empty or 'force' is set to True, it always creates a new
-    GPT partition table on the disk. Then it creates a number of partitions based
-    on their weights.
-
-    This module should be used when a system admin wants to split existing disk(s)
-    into pools of partitions of specific sizes. It is not intended as a generic
-    disk-partitioning module!
-
-    Independently of the 'force' parameter value and the actual disk state, the
-    task always fills the 'partition_pool' fact with all partitions on the given
-    disks, together with their sizes (in bytes). E.g.:
-    partition_pool = [
-        { name: sda1, size: 1048576000 },
-        { name: sda2, size: 1048576000 },
-        { name: sdb1, size: 1048576000 },
-        ...
-    ]
-
-options:
-  disk:
-    description:
-      - Disk to partition.
-  size:
-    description:
-      - Sizes of partitions to create and their weights, in the form:
-        <size1>[:<weight1>][,<size2>[:<weight2>][,...]]
-      - Any <size> can end with 'm'/'M' for MiB, 'g'/'G' for GiB, 't'/'T' for TiB
-        and 'p'/'P' for PiB. MiB is used when no unit is specified.
-      - If <weight> is missing, 1.0 is used.
-      - For each specified <sizeX>, a number of partitions is created so that they
-        occupy the share of the disk represented by <weightX>, proportionally to
-        the other weights.
-
-      - Example 1: size=100G says that the whole disk is split into a number of 100 GiB
-        partitions. On a 1 TiB disk, 10 partitions will be created.
-
-      - Example 2: size=100G:1,10G:1 says that the ratio of space occupied by 100 GiB
-        partitions and 10 GiB partitions is 1:1. Therefore, on a 1 TiB disk, 500 GiB
-        will be split into five 100 GiB partitions and 500 GiB will be split into fifty
-        10 GiB partitions.
-      - size=100G:1,10G:1 = 5x 100 GiB and 50x 10 GiB partitions (on 1 TiB disk).
-
-      - Example 3: size=200G:1,100G:2 says that the ratio of space occupied by 200 GiB
-        partitions and 100 GiB partitions is 1:2. Therefore, on a 1 TiB disk, 1/3
-        (300 GiB) should be occupied by 200 GiB partitions. Only one fits there,
-        so only one is created (the number of partitions is always rounded *down*).
-        The rest (800 GiB) is split into eight 100 GiB partitions, even though it's
-        more than 2/3 of the total space; free space is always allocated as fully as possible.
-      - size=200G:1,100G:2 = 1x 200 GiB and 8x 100 GiB partitions (on 1 TiB disk).
-
-      - Example 4: size=200G:1,100G:1,50G:1 says that the ratio of space occupied by
-        200 GiB, 100 GiB and 50 GiB partitions is 1:1:1. Therefore 1/3 of 1 TiB disk
-        is dedicated to 200 GiB partitions. Only one fits there and only one is
-        created. The rest (800 GiB) is distributed according to remaining weights:
-        100 GiB vs 50 GiB is 1:1, we create four 100 GiB partitions (400 GiB in total)
-        and eight 50 GiB partitions (again, 400 GiB).
-      - size=200G:1,100G:1,50G:1 = 1x 200 GiB, 4x 100 GiB and 8x 50 GiB partitions
-        (on 1 TiB disk).
-
-  force:
-    description:
-      - If True, it will always overwrite the partition table on the disk and create a new one.
-      - If False (default), it won't change existing partition tables.
-
-"""
-
-
-# It's not a class, it's more of a simple struct with almost no functionality.
-# pylint: disable=too-few-public-methods
-class PartitionSpec(object):
-    """ Simple class to represent required partitions."""
-    def __init__(self, size, weight):
-        """ Initialize the partition specifications."""
-        # Size of the partitions
-        self.size = size
-        # Relative weight of this request
-        self.weight = weight
-        # Number of partitions to create, will be calculated later
-        self.count = -1
-
-    def set_count(self, count):
-        """ Set count of parititions of this specification. """
-        self.count = count
-
-
-def assign_space(total_size, specs):
-    """
-    Satisfy all the PartitionSpecs according to their weight.
-    In other words, calculate spec.count of all the specs.
-    """
-    total_weight = 0.0
-    for spec in specs:
-        total_weight += float(spec.weight)
-
-    for spec in specs:
-        num_blocks = int((float(spec.weight) / total_weight) * (total_size / float(spec.size)))
-        spec.set_count(num_blocks)
-        total_size -= num_blocks * spec.size
-        total_weight -= spec.weight
-
-
-def partition(diskname, specs, force=False, check_mode=False):
-    """
-    Create requested partitions.
-    Returns the number of created partitions, or 0 when the disk was already partitioned.
-    """
-    count = 0
-
-    dev = parted.getDevice(diskname)
-    try:
-        disk = parted.newDisk(dev)
-    except parted.DiskException:
-        # unrecognizable format, treat as empty disk
-        disk = None
-
-    if disk and len(disk.partitions) > 0 and not force:
-        print("skipping", diskname)
-        return 0
-
-    # create new partition table, wiping all existing data
-    disk = parted.freshDisk(dev, 'gpt')
-    # calculate nr. of partitions of each size
-    assign_space(dev.getSize(), specs)
-    last_megabyte = 1
-    for spec in specs:
-        for _ in range(spec.count):
-            # create the partition
-            start = parted.sizeToSectors(last_megabyte, "MiB", dev.sectorSize)
-            length = parted.sizeToSectors(spec.size, "MiB", dev.sectorSize)
-            geo = parted.Geometry(device=dev, start=start, length=length)
-            filesystem = parted.FileSystem(type='ext4', geometry=geo)
-            part = parted.Partition(
-                disk=disk,
-                type=parted.PARTITION_NORMAL,
-                fs=filesystem,
-                geometry=geo)
-            disk.addPartition(partition=part, constraint=dev.optimalAlignedConstraint)
-            last_megabyte += spec.size
-            count += 1
-    try:
-        if not check_mode:
-            disk.commit()
-    except parted.IOException:
-        # partitions have been written, but we have been unable to inform the
-        # kernel of the change, probably because they are in use.
-        # Ignore it and hope for the best...
-        pass
-    return count
-
-
-def parse_spec(text):
-    """ Parse string with partition specification. """
-    tokens = text.split(",")
-    specs = []
-    for token in tokens:
-        if ":" not in token:
-            token += ":1"
-
-        (sizespec, weight) = token.split(':')
-        weight = float(weight)  # throws exception with reasonable error string
-
-        units = {"m": 1, "g": 1 << 10, "t": 1 << 20, "p": 1 << 30}
-        unit = units.get(sizespec[-1].lower(), None)
-        if not unit:
-            # there is no unit specifier, it must be just the number
-            size = float(sizespec)
-            unit = 1
-        else:
-            size = float(sizespec[:-1])
-        spec = PartitionSpec(int(size * unit), weight)
-        specs.append(spec)
-    return specs
-
-
-def get_partitions(diskpath):
-    """ Return array of partition names for given disk """
-    dev = parted.getDevice(diskpath)
-    disk = parted.newDisk(dev)
-    partitions = []
-    for part in disk.partitions:
-        (_, _, pname) = part.path.rsplit("/")
-        partitions.append({"name": pname, "size": part.getLength() * dev.sectorSize})
-
-    return partitions
-
-
-def main():
-    """ Ansible module main method. """
-    module = AnsibleModule(  # noqa: F405
-        argument_spec=dict(
-            disks=dict(required=True, type='str'),
-            force=dict(required=False, default="no", type='bool'),
-            sizes=dict(required=True, type='str')
-        ),
-        supports_check_mode=True,
-    )
-
-    disks = module.params['disks']
-    force = module.params['force']
-    if force is None:
-        force = False
-    sizes = module.params['sizes']
-
-    try:
-        specs = parse_spec(sizes)
-    except ValueError as ex:
-        err = "Error parsing sizes=" + sizes + ": " + str(ex)
-        module.fail_json(msg=err)
-
-    partitions = []
-    changed_count = 0
-    for disk in disks.split(","):
-        try:
-            changed_count += partition(disk, specs, force, module.check_mode)
-        except Exception as ex:
-            err = "Error creating partitions on " + disk + ": " + str(ex)
-            raise
-            # module.fail_json(msg=err)
-        partitions += get_partitions(disk)
-
-    module.exit_json(changed=(changed_count > 0), ansible_facts={"partition_pool": partitions})
-
-
-# ignore pylint errors related to the module_utils import
-# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, wrong-import-order, wrong-import-position
-# import module snippets
-from ansible.module_utils.basic import *  # noqa: E402,F403
-main()
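
To make the leftover-space redistribution in Example 3 of the module documentation concrete, here is a small standalone Python sketch of the same allocation loop as assign_space() above (illustrative only; sizes are in MiB, as in the module):

```python
# Reproduces Example 3 from the DOCUMENTATION string: size=200G:1,100G:2 on a 1 TiB disk.
# Each size class gets its weighted share of the space that is still unallocated,
# and the partition count is always rounded down.
total_mib = 1 << 20                               # 1 TiB expressed in MiB
specs = [(200 * 1024, 1.0), (100 * 1024, 2.0)]    # (partition size in MiB, weight)

total_weight = sum(weight for _, weight in specs)
for size, weight in specs:
    count = int((weight / total_weight) * (total_mib / size))
    print(f"{count}x {size // 1024} GiB")
    total_mib -= count * size                     # leftover space goes to the remaining classes
    total_weight -= weight

# -> 1x 200 GiB and 8x 100 GiB partitions, as described in the documentation.
```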

+ 0 - 17
roles/kube_nfs_volumes/meta/main.yml

@@ -1,17 +0,0 @@
----
-galaxy_info:
-  author: Jan Safranek
-  description: Partition disks and use them as Kubernetes NFS persistent volumes.
-  company: Red Hat, Inc.
-  license: Apache-2.0
-  min_ansible_version: 2.2
-  platforms:
-  - name: EL
-    versions:
-    - 7
-  - name: Fedora
-    versions:
-    - all
-  categories:
-  - cloud
-dependencies: []

+ 0 - 34
roles/kube_nfs_volumes/tasks/main.yml

@@ -1,34 +0,0 @@
----
-- fail:
-    msg: "This role is not yet supported on atomic hosts"
-  when: openshift.common.is_atomic | bool
-
-- name: Install pyparted (RedHat/Fedora)
-  package: name={{ item }} state=present
-  with_items:
-    - pyparted
-    - python-httplib2
-  when: not openshift.common.is_containerized | bool
-
-- name: partition the drives
-  partitionpool: disks={{ disks }} force={{ force }} sizes={{ sizes }}
-
-- name: create filesystem
-  filesystem: fstype=ext4 dev=/dev/{{ item.name }}
-  with_items: "{{ partition_pool }}"
-
-- name: mount
-  mount: name={{mount_dir}}/{{ item.name }} src=/dev/{{ item.name }} state=mounted fstype=ext4 passno=2
-  with_items: "{{ partition_pool }}"
-
-- include: nfs.yml
-
-- name: export persistent volumes
-  uri:
-    url: "{{ kubernetes_url }}/api/{{ kube_api_version }}/persistentvolumes"
-    method: POST
-    body: "{{ lookup('template', kube_req_template) }}"
-    body_format: json
-    status_code: 201
-    HEADER_Authorization: "Bearer {{ kubernetes_token }}"
-  with_items: "{{ partition_pool }}"

+ 0 - 23
roles/kube_nfs_volumes/tasks/nfs.yml

@@ -1,23 +0,0 @@
----
-- name: Install NFS server
-  package: name=nfs-utils state=present
-  when: not openshift.common.is_containerized | bool
-
-- name: Start rpcbind on Fedora/Red Hat
-  systemd:
-    name: rpcbind
-    state: started
-    enabled: yes
-
-- name: Start nfs on Fedora/Red Hat
-  systemd:
-    name: nfs-server
-    state: started
-    enabled: yes
-
-- name: Export the directories
-  lineinfile: dest=/etc/exports
-              regexp="^{{ mount_dir }}/{{ item.name }} "
-              line="{{ mount_dir }}/{{ item.name }} {{nfs_export_options}}"
-  with_items: "{{ partition_pool }}"
-  notify: restart nfs

+ 0 - 1
roles/kube_nfs_volumes/templates/v1/nfs.json.j2

@@ -1 +0,0 @@
-../v1beta3/nfs.json.j2

+ 0 - 23
roles/kube_nfs_volumes/templates/v1beta3/nfs.json.j2

@@ -1,23 +0,0 @@
-{
-  "kind": "PersistentVolume",
-  "apiVersion": "v1beta3",
-  "metadata": {
-    "name": "pv-{{ inventory_hostname | regex_replace("\.", "-")  }}-{{ item.name }}",
-    "labels": {
-      "type": "nfs"
-    }
-  },
-  "spec": {
-    "capacity": {
-      "storage": "{{ item.size }}"
-    },
-    "accessModes": [
-      "ReadWriteOnce"
-    ],
-    "NFS": {
-      "Server": "{{ inventory_hostname }}",
-      "Path": "{{ mount_dir }}/{{ item.name }}",
-      "ReadOnly": false
-    }
-  }
-}
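
For reference, the removed task in roles/kube_nfs_volumes/tasks/main.yml that exports the persistent volumes POSTs the template above, rendered once per `partition_pool` entry, to the Kubernetes API. A hedged Python sketch of that HTTP call using `requests` (the role itself uses Ansible's `uri` module; host, token and partition values below are purely illustrative):

```python
import requests  # used only to illustrate the HTTP request; not part of the role

kubernetes_url = "https://10.245.1.2:6443"  # kubernetes_url role variable
kube_api_version = "v1"                     # kube_api_version role variable
token = "<kubernetes_token>"                # kubernetes_token role variable

# Body shaped like the rendered nfs.json.j2 template above, with example values
# for one partition (host 10.0.0.1, partition sdb1 of 5 GiB).
pv = {
    "kind": "PersistentVolume",
    "apiVersion": "v1beta3",  # as in the template above (the v1 template symlinks to it)
    "metadata": {"name": "pv-10-0-0-1-sdb1", "labels": {"type": "nfs"}},
    "spec": {
        "capacity": {"storage": "5368709120"},  # item.size, in bytes
        "accessModes": ["ReadWriteOnce"],
        "NFS": {"Server": "10.0.0.1", "Path": "/exports/sdb1", "ReadOnly": False},
    },
}

resp = requests.post(
    f"{kubernetes_url}/api/{kube_api_version}/persistentvolumes",
    headers={"Authorization": f"Bearer {token}"},
    json=pv,
    verify=False,  # illustration only; in practice point this at the cluster CA bundle
)
assert resp.status_code == 201  # the task expects HTTP 201 Created
```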