Combined (squashed) commit for all changes related to adding Contiv support to Openshift Ansible. This is the first (beta) release of Contiv with Openshift; at the time of this commit it is supported only for Openshift Origin on bare metal deployments. Refer to the official Openshift and Contiv documentation for details on the level of support for individual features and modes of operation.

Sanjeev Rampal 8 years ago
parent
commit
58818a6af1
44 changed files with 984 additions and 6 deletions
  1. 29 0
      playbooks/adhoc/contiv/delete_contiv.yml
  2. 3 0
      playbooks/common/openshift-node/config.yml
  3. 39 0
      roles/contiv/README.md
  4. binary
      roles/contiv/contiv-openshift-vlan-network.png
  5. 91 0
      roles/contiv/defaults/main.yml
  6. 5 0
      roles/contiv/files/contiv_cni.conf
  7. binary
      roles/contiv/files/loopback
  8. 18 0
      roles/contiv/handlers/main.yml
  9. 28 0
      roles/contiv/meta/main.yml
  10. 32 0
      roles/contiv/tasks/aci.yml
  11. 15 0
      roles/contiv/tasks/default_network.yml
  12. 27 0
      roles/contiv/tasks/download_bins.yml
  13. 14 0
      roles/contiv/tasks/main.yml
  14. 65 0
      roles/contiv/tasks/netmaster.yml
  15. 16 0
      roles/contiv/tasks/netmaster_firewalld.yml
  16. 21 0
      roles/contiv/tasks/netmaster_iptables.yml
  17. 121 0
      roles/contiv/tasks/netplugin.yml
  18. 34 0
      roles/contiv/tasks/netplugin_firewalld.yml
  19. 29 0
      roles/contiv/tasks/netplugin_iptables.yml
  20. 28 0
      roles/contiv/tasks/ovs.yml
  21. 12 0
      roles/contiv/tasks/packageManagerInstall.yml
  22. 33 0
      roles/contiv/tasks/pkgMgrInstallers/centos-install.yml
  23. 10 0
      roles/contiv/templates/aci-gw.service
  24. 35 0
      roles/contiv/templates/aci_gw.j2
  25. 6 0
      roles/contiv/templates/contiv.cfg.j2
  26. 2 0
      roles/contiv/templates/netmaster.env.j2
  27. 8 0
      roles/contiv/templates/netmaster.service
  28. 9 0
      roles/contiv/templates/netplugin.j2
  29. 8 0
      roles/contiv/templates/netplugin.service
  30. 10 0
      roles/contiv_facts/defaults/main.yaml
  31. 3 0
      roles/contiv_facts/handlers/main.yml
  32. 24 0
      roles/contiv_facts/tasks/fedora-install.yml
  33. 88 0
      roles/contiv_facts/tasks/main.yml
  34. 24 0
      roles/contiv_facts/tasks/rpm.yml
  35. 1 0
      roles/etcd/defaults/main.yaml
  36. 43 3
      roles/etcd/tasks/main.yml
  37. 3 0
      roles/etcd/templates/custom.conf.j2
  38. 11 1
      roles/etcd/templates/etcd.conf.j2
  39. 2 0
      roles/etcd_common/defaults/main.yml
  40. 13 0
      roles/openshift_common/tasks/main.yml
  41. 19 0
      roles/openshift_facts/library/openshift_facts.py
  42. 3 0
      roles/openshift_master/meta/main.yml
  43. 1 1
      roles/openshift_master/templates/master.yaml.v1.j2
  44. 1 1
      roles/openshift_node/templates/node.yaml.v1.j2

+ 29 - 0
playbooks/adhoc/contiv/delete_contiv.yml

@@ -0,0 +1,29 @@
+---
+- name: delete contiv
+  hosts: all
+  gather_facts: False
+  tasks:
+    - systemd:
+        name: "{{ item }}"
+        state: stopped
+      with_items:
+        - contiv-etcd
+        - netmaster
+        - netplugin
+        - openvswitch
+      ignore_errors: True
+    - file:
+        path: "{{ item }}"
+        state: absent
+      with_items:
+        - /opt/cni
+        - /opt/contiv
+        - /etc/systemd/system/netmaster.service
+        - /etc/systemd/system/netplugin.service
+        - /etc/systemd/system/contiv-etcd.service
+        - /etc/systemd/system/contiv-etcd.service.d
+        - /var/lib/contiv-etcd
+        - /etc/default/netmaster
+        - /etc/default/netplugin
+        - /etc/openvswitch/conf.db
+    - command: systemctl daemon-reload
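
For reference, this ad-hoc cleanup can be run directly with ansible-playbook; the inventory path below is an assumption, not something this commit defines:

    # Stop Contiv services and remove their state on all inventoried hosts
    ansible-playbook -i /etc/ansible/hosts playbooks/adhoc/contiv/delete_contiv.yml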

+ 3 - 0
playbooks/common/openshift-node/config.yml

@@ -95,6 +95,9 @@
     when: openshift.common.use_flannel | bool
   - role: nuage_node
     when: openshift.common.use_nuage | bool
+  - role: contiv
+    contiv_role: netplugin
+    when: openshift.common.use_contiv | bool
   - role: nickhammond.logrotate
   - role: openshift_manage_node
     openshift_master_host: "{{ groups.oo_first_master.0 }}"

+ 39 - 0
roles/contiv/README.md

@@ -0,0 +1,39 @@
+## Contiv
+
+Install Contiv components (netmaster, netplugin, contiv_etcd) on Master and Minion nodes.
+
+## Requirements
+
+* Ansible 2.2
+* CentOS/RHEL
+
+## Current Contiv restrictions when used with Openshift
+
+* Openshift Origin only 
+* VLAN encap mode only (default for Openshift Ansible)
+* Bare metal deployments only
+* Requires additional network configuration on the external physical routers (see the Contiv section of the Openshift docs)
+
+## Key Ansible inventory configuration parameters
+
+* ``openshift_use_contiv=True``
+* ``openshift_use_openshift_sdn=False``
+* ``os_sdn_network_plugin_name='cni'``
+* ``netmaster_interface=eth0``
+* ``netplugin_interface=eth1``
+* See the Contiv section of the Openshift docs for more details
+
+## Example bare metal deployment of Openshift + Contiv 
+
+* Example bare metal deployment
+
+![Screenshot](contiv-openshift-vlan-network.png)
+
+* contiv241 is a Master + minion node
+* contiv242 and contiv243 are minion nodes
+* VLANs 1001, 1002 used for contiv container networks
+* VLAN 10 used for cluster-internal host network 
+* VLANs added to isolated VRF on external physical switch 
+* Static routes added on external switch as shown to allow routing between host and container networks
+* External switch also used for public internet access 
+
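
A minimal sketch of the inventory variables listed above, assuming an INI-style inventory at /etc/ansible/hosts with an [OSEv3:vars] section (both assumptions, not part of this commit):

    # Append the Contiv-related variables to the cluster-wide vars section
    cat >> /etc/ansible/hosts <<'EOF'
    [OSEv3:vars]
    openshift_use_contiv=True
    openshift_use_openshift_sdn=False
    os_sdn_network_plugin_name='cni'
    netmaster_interface=eth0
    netplugin_interface=eth1
    EOF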

binary
roles/contiv/contiv-openshift-vlan-network.png


+ 91 - 0
roles/contiv/defaults/main.yml

@@ -0,0 +1,91 @@
+---
+# The version of Contiv binaries to use
+contiv_version: 1.0.0-beta.3-02-21-2017.20-52-42.UTC
+
+contiv_default_subnet: "20.1.1.1/24"
+contiv_default_gw: "20.1.1.254"
+# TCP port on which Netmaster listens for network connections
+netmaster_port: 9999
+
+# TCP port on which Netplugin listens for network connections
+netplugin_port: 6640
+contiv_rpc_port1: 9001
+contiv_rpc_port2: 9002
+contiv_rpc_port3: 9003
+
+# Interface used by Netplugin for inter-host traffic when encap_mode is vlan.
+# The interface must support 802.1Q trunking.
+netplugin_interface: "eno16780032"
+
+# IP address of the interface used for control communication within the cluster
+# It needs to be reachable from all nodes in the cluster.
+netplugin_ctrl_ip: "{{ hostvars[inventory_hostname]['ansible_' + netplugin_interface].ipv4.address }}"
+
+# IP used to terminate vxlan tunnels
+netplugin_vtep_ip: "{{ hostvars[inventory_hostname]['ansible_' + netplugin_interface].ipv4.address }}"
+
+# Interface used to bind Netmaster service
+netmaster_interface: "{{ netplugin_interface }}"
+
+# Path to the contiv binaries
+bin_dir: /usr/bin
+
+# Path to the contivk8s cni binary
+cni_bin_dir: /opt/cni/bin
+
+# Contiv config directory
+contiv_config_dir: /opt/contiv/config
+
+# Directory to store downloaded Contiv releases
+contiv_releases_directory: /opt/contiv
+contiv_current_release_directory: "{{ contiv_releases_directory }}/{{ contiv_version }}"
+
+# The default URL to download the Contiv tarballs from
+contiv_download_url_base: "https://github.com/contiv/netplugin/releases/download"
+contiv_download_url: "{{ contiv_download_url_base }}/{{ contiv_version }}/netplugin-{{ contiv_version }}.tar.bz2"
+
+# This is where kubelet looks for plugin files
+kube_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/net/exec
+
+# Specifies routed mode vs bridged mode for networking (bridge | routing)
+# if you are using an external router for all routing, you should select bridge here
+netplugin_fwd_mode: bridge
+
+# Contiv fabric mode aci|default
+contiv_fabric_mode: default
+
+# Encapsulation type vlan|vxlan to use for instantiating container networks
+contiv_encap_mode: vlan
+
+# Backend used by Netplugin for instantiating container networks
+netplugin_driver: ovs
+
+# Create a default Contiv network for use by pods
+contiv_default_network: true
+
+# VLAN/VXLAN tag value to be used for the default network
+contiv_default_network_tag: 1
+
+#SRFIXME (use the openshift variables)
+https_proxy: ""
+http_proxy: ""
+no_proxy: ""
+
+# The following are aci specific parameters when contiv_fabric_mode: aci is set.
+# Otherwise, you can ignore these.
+apic_url: ""
+apic_username: ""
+apic_password: ""
+apic_leaf_nodes: ""
+apic_phys_dom: ""
+apic_contracts_unrestricted_mode: no
+apic_epg_bridge_domain: not_specified
+is_atomic: False
+kube_cert_dir: "/data/src/github.com/openshift/origin/openshift.local.config/master"
+master_name: "{{ groups['masters'][0] }}"
+contiv_etcd_port: 22379
+etcd_url: "{{ hostvars[groups['masters'][0]]['ansible_' + netmaster_interface].ipv4.address }}:{{ contiv_etcd_port }}"
+kube_ca_cert: "{{ kube_cert_dir }}/ca.crt"
+kube_key: "{{ kube_cert_dir }}/admin.key"
+kube_cert: "{{ kube_cert_dir }}/admin.crt"
+kube_master_api_port: 8443
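
The default netplugin_interface (eno16780032) is environment specific, so real deployments will normally override it. A sketch, assuming eth0/eth1 naming and the standard byo entry point playbook:

    # Inspect available interfaces on a node, then override the defaults at run time
    ip -o -4 addr show
    ansible-playbook -i /etc/ansible/hosts playbooks/byo/config.yml \
        -e netmaster_interface=eth0 -e netplugin_interface=eth1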

+ 5 - 0
roles/contiv/files/contiv_cni.conf

@@ -0,0 +1,5 @@
+{
+  "cniVersion": "0.1.0",
+  "name": "contiv-net",
+  "type": "contivk8s"
+}

binary
roles/contiv/files/loopback


+ 18 - 0
roles/contiv/handlers/main.yml

@@ -0,0 +1,18 @@
+---
+- name: reload systemd
+  command: systemctl --system daemon-reload
+
+- name: restart netmaster
+  service:
+    name: netmaster
+    state: restarted
+  when: netmaster_started.changed == false
+
+- name: restart netplugin
+  service:
+    name: netplugin
+    state: restarted
+  when: netplugin_started.changed == false
+
+- name: Save iptables rules
+  command: service iptables save

+ 28 - 0
roles/contiv/meta/main.yml

@@ -0,0 +1,28 @@
+---
+galaxy_info:
+  author: Cisco
+  description:
+  company: Cisco
+  license:
+  min_ansible_version: 2.2
+  platforms:
+  - name: EL
+    versions:
+    - 7
+  categories:
+  - cloud
+  - system
+dependencies:
+- role: contiv_facts
+- role: etcd
+  etcd_service: contiv-etcd
+  etcd_is_thirdparty: True
+  etcd_peer_port: 22380
+  etcd_client_port: 22379
+  etcd_conf_dir: /etc/contiv-etcd/
+  etcd_data_dir: /var/lib/contiv-etcd/
+  etcd_ca_host: "{{ inventory_hostname }}"
+  etcd_cert_config_dir: /etc/contiv-etcd/
+  etcd_url_scheme: http
+  etcd_peer_url_scheme: http
+  when: contiv_role == "netmaster"

+ 32 - 0
roles/contiv/tasks/aci.yml

@@ -0,0 +1,32 @@
+---
+- name: ACI | Check aci-gw container image
+  command: "docker inspect contiv/aci-gw"
+  register: docker_aci_inspect_result
+  ignore_errors: yes
+
+- name: ACI | Pull aci-gw container
+  command: "docker pull contiv/aci-gw"
+  when: "'No such image' in docker_aci_inspect_result.stderr"
+
+- name: ACI | Copy shell script used by aci-gw service
+  template:
+    src: aci_gw.j2
+    dest: "{{ bin_dir }}/aci_gw.sh"
+    mode: u=rwx,g=rx,o=rx
+
+- name: ACI | Copy systemd units for aci-gw
+  template:
+    src: aci-gw.service
+    dest: /etc/systemd/system/aci-gw.service
+  notify: reload systemd
+
+- name: ACI | Enable aci-gw service
+  service:
+    name: aci-gw
+    enabled: yes
+
+- name: ACI | Start aci-gw service
+  service:
+    name: aci-gw
+    state: started
+  register: aci_gw_started

+ 15 - 0
roles/contiv/tasks/default_network.yml

@@ -0,0 +1,15 @@
+---
+- name: Contiv | Wait for netmaster
+  command: 'netctl --netmaster "http://{{ inventory_hostname }}:{{ netmaster_port }}" tenant ls'
+  register: tenant_result
+  until: tenant_result.stdout.find("default") != -1
+  retries: 9
+  delay: 10
+
+- name: Contiv | Check if default-net exists
+  command: 'netctl --netmaster "http://{{ inventory_hostname }}:{{ netmaster_port }}" net ls'
+  register: net_result
+
+- name: Contiv | Create default-net
+  command: 'netctl --netmaster "http://{{ inventory_hostname }}:{{ netmaster_port }}" net create --subnet={{ contiv_default_subnet }} -e {{ contiv_encap_mode }} -p {{ contiv_default_network_tag }} --gateway={{ contiv_default_gw }} default-net'
+  when: net_result.stdout.find("default-net") == -1
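
The same netctl CLI these tasks wrap can be used to verify the result by hand; localhost here assumes the commands are run on the netmaster node:

    # Confirm the default tenant exists and default-net was created
    netctl --netmaster http://localhost:9999 tenant ls
    netctl --netmaster http://localhost:9999 net ls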

+ 27 - 0
roles/contiv/tasks/download_bins.yml

@@ -0,0 +1,27 @@
+---
+- name: Download Bins | Create directory for current Contiv release
+  file:
+    path: "{{ contiv_current_release_directory }}"
+    state: directory
+
+- name: Download Bins | Install bzip2
+  yum:
+    name: bzip2
+    state: installed
+
+- name: Download Bins | Download Contiv tar file
+  get_url:
+    url: "{{ contiv_download_url }}"
+    dest: "{{ contiv_current_release_directory }}"
+    mode: 0755
+    validate_certs: False
+  environment:
+    http_proxy: "{{ http_proxy|default('') }}"
+    https_proxy: "{{ https_proxy|default('') }}"
+    no_proxy: "{{ no_proxy|default('') }}"
+
+- name: Download Bins | Extract Contiv tar file
+  unarchive:
+    src: "{{ contiv_current_release_directory }}/netplugin-{{ contiv_version }}.tar.bz2"
+    dest: "{{ contiv_current_release_directory }}"
+    copy: no

+ 14 - 0
roles/contiv/tasks/main.yml

@@ -0,0 +1,14 @@
+---
+- name: Ensure bin_dir exists
+  file:
+    path: "{{ bin_dir }}"
+    recurse: yes
+    state: directory
+
+- include: download_bins.yml
+
+- include: netmaster.yml
+  when: contiv_role == "netmaster"
+
+- include: netplugin.yml
+  when: contiv_role == "netplugin"

+ 65 - 0
roles/contiv/tasks/netmaster.yml

@@ -0,0 +1,65 @@
+---
+- include: netmaster_firewalld.yml
+  when: has_firewalld
+
+- include: netmaster_iptables.yml
+  when: not has_firewalld and has_iptables
+
+- name: Netmaster | Check if /etc/hosts file exists
+  stat:
+    path: /etc/hosts
+  register: hosts
+
+- name: Netmaster | Create hosts file if it is not present
+  file:
+    path: /etc/hosts
+    state: touch
+  when: not hosts.stat.exists
+
+- name: Netmaster | Build hosts file
+  lineinfile:
+    dest: /etc/hosts
+    regexp: .*netmaster$
+    line: "{{ hostvars[item]['ansible_' + netmaster_interface].ipv4.address }} netmaster"
+    state: present
+  when: hostvars[item]['ansible_' + netmaster_interface].ipv4.address is defined
+  with_items: "{{ groups['masters'] }}"
+
+- name: Netmaster | Create netmaster symlinks
+  file:
+    src: "{{ contiv_current_release_directory }}/{{ item }}"
+    dest: "{{ bin_dir }}/{{ item }}"
+    state: link
+  with_items:
+    - netmaster
+    - netctl
+
+- name: Netmaster | Copy environment file for netmaster
+  template:
+    src: netmaster.env.j2
+    dest: /etc/default/netmaster
+    mode: 0644
+  notify: restart netmaster
+
+- name: Netmaster | Copy systemd units for netmaster
+  template:
+    src: netmaster.service
+    dest: /etc/systemd/system/netmaster.service
+  notify: reload systemd
+
+- name: Netmaster | Enable Netmaster
+  service:
+    name: netmaster
+    enabled: yes
+
+- name: Netmaster | Start Netmaster
+  service:
+    name: netmaster
+    state: started
+  register: netmaster_started
+
+- include: aci.yml
+  when: contiv_fabric_mode == "aci"
+
+- include: default_network.yml
+  when: contiv_default_network == true

+ 16 - 0
roles/contiv/tasks/netmaster_firewalld.yml

@@ -0,0 +1,16 @@
+---
+- name: Netmaster Firewalld | Open Netmaster port
+  firewalld:
+    port: "{{ netmaster_port }}/tcp"
+    permanent: false
+    state: enabled
+  # in case this is also a node where firewalld is turned off
+  ignore_errors: yes
+
+- name: Netmaster Firewalld | Save Netmaster port
+  firewalld:
+    port: "{{ netmaster_port }}/tcp"
+    permanent: true
+    state: enabled
+  # in case this is also a node where firewalld is turned off
+  ignore_errors: yes

+ 21 - 0
roles/contiv/tasks/netmaster_iptables.yml

@@ -0,0 +1,21 @@
+---
+- name: Netmaster IPtables | Get iptables rules
+  command: iptables -L --wait
+  register: iptablesrules
+  always_run: yes
+
+- name: Netmaster IPtables | Enable iptables at boot
+  service:
+    name: iptables
+    enabled: yes
+    state: started
+
+- name: Netmaster IPtables | Open Netmaster with iptables
+  command: /sbin/iptables -I INPUT 1 -p tcp --dport {{ item }} -j ACCEPT -m comment --comment "contiv"
+  with_items:
+    - "{{ netmaster_port }}"
+    - "{{ contiv_rpc_port1 }}"
+    - "{{ contiv_rpc_port2 }}"
+    - "{{ contiv_rpc_port3 }}"
+  when: iptablesrules.stdout.find("contiv") == -1
+  notify: Save iptables rules
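
Since the insert task is guarded on the "contiv" comment, re-running the role leaves the chain unchanged; the rules can be inspected manually:

    # List the contiv-tagged ACCEPT rules inserted at the top of the INPUT chain
    iptables -L INPUT --wait -n --line-numbers | grep contiv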

+ 121 - 0
roles/contiv/tasks/netplugin.yml

@@ -0,0 +1,121 @@
+---
+- include: netplugin_firewalld.yml
+  when: has_firewalld
+
+- include: netplugin_iptables.yml
+  when: has_iptables
+
+- name: Netplugin | Ensure localhost entry correct in /etc/hosts
+  lineinfile:
+    dest: /etc/hosts
+    regexp: '^127\.0\.0\.1.*'
+    line: '127.0.0.1 localhost {{ ansible_hostname }}'
+    state: present
+
+- name: Netplugin | Remove incorrect localhost entry in /etc/hosts
+  lineinfile:
+    dest: /etc/hosts
+    regexp: '^::1. localhost '
+    line: '::1 '
+    state: absent
+
+- include: ovs.yml
+  when: netplugin_driver == "ovs"
+
+- name: Netplugin | Create Netplugin bin symlink
+  file:
+    src: "{{ contiv_current_release_directory }}/netplugin"
+    dest: "{{ bin_dir }}/netplugin"
+    state: link
+
+
+- name: Netplugin | Ensure cni_bin_dir exists
+  file:
+    path: "{{ cni_bin_dir }}"
+    recurse: yes
+    state: directory
+
+- name: Netplugin | Create CNI bin symlink
+  file:
+    src: "{{ contiv_current_release_directory }}/contivk8s"
+    dest: "{{ cni_bin_dir }}/contivk8s"
+    state: link
+
+- name: Netplugin | Copy CNI loopback bin
+  copy:
+    src: loopback
+    dest: "{{ cni_bin_dir }}/loopback"
+    mode: 0755
+
+- name: Netplugin | Ensure kube_plugin_dir and cni/net.d directories exist
+  file:
+    path: "{{ item }}"
+    recurse: yes
+    state: directory
+  with_items:
+    - "{{ kube_plugin_dir }}"
+    - "/etc/cni/net.d"
+
+- name: Netplugin | Ensure contiv_config_dir exists
+  file:
+    path: "{{ contiv_config_dir }}"
+    recurse: yes
+    state: directory
+
+- name: Netplugin | Copy contiv_cni.conf file
+  copy:
+    src: contiv_cni.conf
+    dest: "{{ item }}"
+  with_items:
+    - "{{ kube_plugin_dir }}/contiv_cni.conf"
+    - "/etc/cni/net.d"
+# notify: restart kubelet
+
+- name: Netplugin | Setup contiv.json config for the cni plugin
+  template:
+    src: contiv.cfg.j2
+    dest: "{{ contiv_config_dir }}/contiv.json"
+  notify: restart netplugin
+
+- name: Netplugin | Copy environment file for netplugin
+  template:
+    src: netplugin.j2
+    dest: /etc/default/netplugin
+    mode: 0644
+  notify: restart netplugin
+
+- name: Docker | Make sure proxy setting exists
+  lineinfile:
+    dest: /etc/sysconfig/docker-network
+    regexp: '^https_proxy.*'
+    line: 'https_proxy={{ https_proxy }}'
+    state: present
+  register: docker_updated
+
+- name: Netplugin | Copy systemd unit for netplugin
+  template:
+    src: netplugin.service
+    dest: /etc/systemd/system/netplugin.service
+  notify: reload systemd
+
+- name: systemd reload
+  command: systemctl daemon-reload
+  when: docker_updated|changed
+
+- name: Docker | Restart docker
+  service:
+    name: docker
+    state: restarted
+  when: docker_updated|changed
+
+- name: Netplugin | Enable Netplugin
+  service:
+    name: netplugin
+    enabled: yes
+
+- name: Netplugin | Start Netplugin
+  service:
+    name: netplugin
+    state: started
+  register: netplugin_started
+# notify: restart kubelet
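
After the role runs, standard systemd tooling gives a quick health check of the plugin:

    # Verify netplugin is enabled and running, and review its recent logs
    systemctl is-enabled netplugin
    systemctl is-active netplugin
    journalctl -u netplugin --no-pager -n 50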

+ 34 - 0
roles/contiv/tasks/netplugin_firewalld.yml

@@ -0,0 +1,34 @@
+---
+- name: Netplugin Firewalld | Open Netplugin port
+  firewalld:
+    port: "{{ netplugin_port }}/tcp"
+    permanent: false
+    state: enabled
+  # in case this is also a node where firewalld is turned off
+  ignore_errors: yes
+
+- name: Netplugin Firewalld | Save Netplugin port
+  firewalld:
+    port: "{{ netplugin_port }}/tcp"
+    permanent: true
+    state: enabled
+  # in case this is also a node where firewalld is turned off
+  ignore_errors: yes
+
+- name: Netplugin Firewalld | Open vxlan port
+  firewalld:
+    port: "8472/udp"
+    permanent: false
+    state: enabled
+  # in case this is also a node where firewalld is turned off
+  ignore_errors: yes
+  when: contiv_encap_mode == "vxlan"
+
+- name: Netplugin Firewalld | Save vxlan port
+  firewalld:
+    port: "8472/udp"
+    permanent: true
+    state: enabled
+  # in case this is also a node where firewalld is turned off
+  ignore_errors: yes
+  when: contiv_encap_mode == "vxlan"

+ 29 - 0
roles/contiv/tasks/netplugin_iptables.yml

@@ -0,0 +1,29 @@
+---
+- name: Netplugin IPtables | Get iptables rules
+  command: iptables -L --wait
+  register: iptablesrules
+  always_run: yes
+
+- name: Netplugin IPtables | Enable iptables at boot
+  service:
+    name: iptables
+    enabled: yes
+    state: started
+
+- name: Netplugin IPtables | Open Netmaster with iptables
+  command: /sbin/iptables -I INPUT 1 -p tcp --dport {{ item }} -j ACCEPT -m comment --comment "contiv"
+  with_items:
+  - "{{ netmaster_port }}"
+  - "{{ contiv_rpc_port1 }}"
+  - "{{ contiv_rpc_port2 }}"
+  - "{{ contiv_rpc_port3 }}"
+  - "{{ contiv_etcd_port }}"
+  - "{{ kube_master_api_port }}"
+  when: iptablesrules.stdout.find("contiv") == -1
+  notify: Save iptables rules
+
+- name: Netplugin IPtables | Open vxlan port 8472 with iptables
+  command: /sbin/iptables -I INPUT 1 -p udp --dport 8472 -j ACCEPT -m comment --comment "vxlan"
+
+- name: Netplugin IPtables | Open vxlan port 4789 with iptables
+  command: /sbin/iptables -I INPUT 1 -p udp --dport 4789 -j ACCEPT -m comment --comment "vxlan"

+ 28 - 0
roles/contiv/tasks/ovs.yml

@@ -0,0 +1,28 @@
+---
+- include: packageManagerInstall.yml
+  when: source_type == "packageManager"
+  tags:
+    - binary-update
+
+- name: OVS | Configure selinux for ovs
+  command: "semanage permissive -a openvswitch_t"
+
+- name: OVS | Enable ovs
+  service:
+    name: openvswitch
+    enabled: yes
+
+- name: OVS | Start ovs
+  service:
+    name: openvswitch
+    state: started
+  register: ovs_started
+
+- name: OVS | Configure ovs
+  command: "ovs-vsctl set-manager {{ item }}"
+  with_items:
+    - "tcp:127.0.0.1:6640"
+    - "ptcp:6640"
+
+- name: OVS | Configure ovsdb-server
+  command: "ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640"

+ 12 - 0
roles/contiv/tasks/packageManagerInstall.yml

@@ -0,0 +1,12 @@
+---
+- name: Package Manager | Init the did_install fact
+  set_fact:
+    did_install: false
+
+- include: pkgMgrInstallers/centos-install.yml
+  when: ansible_distribution == "CentOS" and not is_atomic
+
+- name: Package Manager | Set fact saying we did CentOS package install
+  set_fact:
+    did_install: true
+  when: ansible_distribution == "CentOS" and not is_atomic

+ 33 - 0
roles/contiv/tasks/pkgMgrInstallers/centos-install.yml

@@ -0,0 +1,33 @@
+---
+- name: PkgMgr CentOS | Install net-tools pkg for route
+  yum:
+    pkg=net-tools
+    state=latest
+
+- name: PkgMgr CentOS | Get openstack kilo rpm
+  get_url:
+    url: https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-2.noarch.rpm
+    dest: /tmp/rdo-release-kilo-2.noarch.rpm
+    validate_certs: False
+  environment:
+    http_proxy: "{{ http_proxy|default('') }}"
+    https_proxy: "{{ https_proxy|default('') }}"
+    no_proxy: "{{ no_proxy|default('') }}"
+  tags:
+    - ovs_install
+
+- name: PkgMgr CentOS | Install openstack kilo rpm
+  yum: name=/tmp/rdo-release-kilo-2.noarch.rpm state=present
+  tags:
+    - ovs_install
+
+- name: PkgMgr CentOS | Install ovs
+  yum:
+    pkg=openvswitch
+    state=latest
+  environment:
+    http_proxy: "{{ http_proxy|default('') }}"
+    https_proxy: "{{ https_proxy|default('') }}"
+    no_proxy: "{{ no_proxy|default('') }}"
+  tags:
+    - ovs_install

+ 10 - 0
roles/contiv/templates/aci-gw.service

@@ -0,0 +1,10 @@
+[Unit]
+Description=Contiv ACI gw
+After=auditd.service systemd-user-sessions.service time-sync.target docker.service
+
+[Service]
+ExecStart={{ bin_dir }}/aci_gw.sh start
+ExecStop={{ bin_dir }}/aci_gw.sh stop
+KillMode=control-group
+Restart=on-failure
+RestartSec=10

+ 35 - 0
roles/contiv/templates/aci_gw.j2

@@ -0,0 +1,35 @@
+#!/bin/bash
+
+usage="$0 start|stop"
+if [ $# -ne 1 ]; then
+    echo USAGE: $usage
+    exit 1
+fi
+
+case $1 in
+start)
+    set -e
+
+    docker run --net=host \
+    -e "APIC_URL={{ apic_url }}" \
+    -e "APIC_USERNAME={{ apic_username }}" \
+    -e "APIC_PASSWORD={{ apic_password }}" \
+    -e "APIC_LEAF_NODE={{ apic_leaf_nodes }}" \
+    -e "APIC_PHYS_DOMAIN={{ apic_phys_dom }}" \
+    -e "APIC_EPG_BRIDGE_DOMAIN={{ apic_epg_bridge_domain }}" \
+    -e "APIC_CONTRACTS_UNRESTRICTED_MODE={{ apic_contracts_unrestricted_mode }}" \
+    --name=contiv-aci-gw \
+    contiv/aci-gw
+    ;;
+
+stop)
+    # don't stop on error
+    docker stop contiv-aci-gw
+    docker rm contiv-aci-gw
+    ;;
+
+*)
+    echo USAGE: $usage
+    exit 1
+    ;;
+esac
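
The script is normally driven by the aci-gw unit installed earlier, but it can also be exercised directly; the container name comes from the script itself:

    # Start the gateway via systemd, then confirm the container is running
    systemctl start aci-gw
    docker ps --filter name=contiv-aci-gw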

+ 6 - 0
roles/contiv/templates/contiv.cfg.j2

@@ -0,0 +1,6 @@
+{
+  "K8S_API_SERVER": "https://{{ hostvars[groups['masters'][0]]['ansible_' + netmaster_interface].ipv4.address }}:{{ kube_master_api_port }}",
+  "K8S_CA": "{{ openshift.common.config_base }}/node/ca.crt",
+  "K8S_KEY": "{{ openshift.common.config_base }}/node/system:node:{{ openshift.common.hostname }}.key",
+  "K8S_CERT": "{{ openshift.common.config_base }}/node/system:node:{{ openshift.common.hostname }}.crt"
+}

+ 2 - 0
roles/contiv/templates/netmaster.env.j2

@@ -0,0 +1,2 @@
+NETMASTER_ARGS='--cluster-store etcd://{{ etcd_url }}  --cluster-mode=kubernetes'
+
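
Rendered, this template is a single shell-style assignment; the address below is a hypothetical expansion of {{ etcd_url }} (first master's netmaster_interface IP plus contiv_etcd_port 22379):

    # Example rendered /etc/default/netmaster (IP is illustrative only)
    NETMASTER_ARGS='--cluster-store etcd://192.168.10.11:22379 --cluster-mode=kubernetes'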

+ 8 - 0
roles/contiv/templates/netmaster.service

@@ -0,0 +1,8 @@
+[Unit]
+Description=Netmaster
+After=auditd.service systemd-user-sessions.service contiv-etcd.service
+
+[Service]
+EnvironmentFile=/etc/default/netmaster
+ExecStart={{ bin_dir }}/netmaster $NETMASTER_ARGS
+KillMode=control-group

+ 9 - 0
roles/contiv/templates/netplugin.j2

@@ -0,0 +1,9 @@
+{% if contiv_encap_mode == "vlan" %}
+NETPLUGIN_ARGS='-vlan-if {{ netplugin_interface }} -ctrl-ip {{ netplugin_ctrl_ip }} -plugin-mode kubernetes -cluster-store etcd://{{ etcd_url }}'
+{% endif %}
+{#   Note: Commenting out vxlan encap mode support until it is fully supported
+{% if contiv_encap_mode == "vxlan" %}
+NETPLUGIN_ARGS='-vtep-ip {{ netplugin_ctrl_ip }} -e {{contiv_encap_mode}} -ctrl-ip {{ netplugin_ctrl_ip }} -plugin-mode kubernetes -cluster-store etcd://{{ etcd_url }}'
+{% endif %}
+#}
+

+ 8 - 0
roles/contiv/templates/netplugin.service

@@ -0,0 +1,8 @@
+[Unit]
+Description=Netplugin
+After=auditd.service systemd-user-sessions.service contiv-etcd.service
+
+[Service]
+EnvironmentFile=/etc/default/netplugin
+ExecStart={{ bin_dir }}/netplugin $NETPLUGIN_ARGS
+KillMode=control-group

+ 10 - 0
roles/contiv_facts/defaults/main.yaml

@@ -0,0 +1,10 @@
+---
+# The directory where binaries are stored on Ansible
+# managed systems.
+bin_dir: /usr/bin
+
+# The directory used by Ansible to temporarily store
+# files on Ansible managed systems.
+ansible_temp_dir: /tmp/.ansible/files
+
+source_type: packageManager

+ 3 - 0
roles/contiv_facts/handlers/main.yml

@@ -0,0 +1,3 @@
+---
+- name: reload systemd
+  command: systemctl --system daemon-reload

+ 24 - 0
roles/contiv_facts/tasks/fedora-install.yml

@@ -0,0 +1,24 @@
+---
+- name: Install dnf
+  yum:
+    name: dnf
+    state: installed
+
+- name: Update repo cache
+  command: dnf update -y
+  retries: 5
+  delay: 10
+  environment:
+    https_proxy: "{{ https_proxy }}"
+    http_proxy: "{{ http_proxy }}"
+    no_proxy: "{{ no_proxy }}"
+
+- name: Install python-dnf and libselinux-python
+  command: dnf install {{ item }} -y
+  with_items:
+    - python-dnf
+    - libselinux-python
+  environment:
+    https_proxy: "{{ https_proxy }}"
+    http_proxy: "{{ http_proxy }}"
+    no_proxy: "{{ no_proxy }}"

+ 88 - 0
roles/contiv_facts/tasks/main.yml

@@ -0,0 +1,88 @@
+---
+- name: Determine if Atomic
+  stat: path=/run/ostree-booted
+  register: s
+  changed_when: false
+  always_run: yes
+
+- name: Init the is_atomic fact
+  set_fact:
+    is_atomic: false
+
+- name: Set the is_atomic fact
+  set_fact:
+    is_atomic: true
+  when: s.stat.exists
+
+- name: Determine if CoreOS
+  raw: "grep '^NAME=' /etc/os-release | sed s'/NAME=//'"
+  register: distro
+  always_run: yes
+
+- name: Init the is_coreos fact
+  set_fact:
+    is_coreos: false
+
+- name: Set the is_coreos fact
+  set_fact:
+    is_coreos: true
+  when: "'CoreOS' in distro.stdout"
+
+- name: Set docker config file directory
+  set_fact:
+    docker_config_dir: "/etc/sysconfig"
+
+- name: Override docker config file directory for Debian
+  set_fact:
+    docker_config_dir: "/etc/default"
+  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"
+
+- name: Create config file directory
+  file:
+    path: "{{ docker_config_dir }}"
+    state: directory
+
+- name: Set the bin directory path for CoreOS
+  set_fact:
+    bin_dir: "/opt/bin"
+  when: is_coreos
+
+- name: Create the directory used to store binaries
+  file:
+    path: "{{ bin_dir }}"
+    state: directory
+
+- name: Create Ansible temp directory
+  file:
+    path: "{{ ansible_temp_dir }}"
+    state: directory
+
+- name: Determine if has rpm
+  stat: path=/usr/bin/rpm
+  register: s
+  changed_when: false
+  always_run: yes
+
+- name: Init the has_rpm fact
+  set_fact:
+    has_rpm: false
+
+- name: Set the has_rpm fact
+  set_fact:
+    has_rpm: true
+  when: s.stat.exists
+
+- name: Init the has_firewalld fact
+  set_fact:
+    has_firewalld: false
+
+- name: Init the has_iptables fact
+  set_fact:
+    has_iptables: false
+
+# collect information about what packages are installed
+- include: rpm.yml
+  when: has_rpm
+
+- include: fedora-install.yml
+  when: not is_atomic and ansible_distribution == "Fedora"

+ 24 - 0
roles/contiv_facts/tasks/rpm.yml

@@ -0,0 +1,24 @@
+---
+- name: RPM | Determine if firewalld installed
+  command: "rpm -q firewalld"
+  register: s
+  changed_when: false
+  failed_when: false
+  always_run: yes
+
+- name: Set the has_firewalld fact
+  set_fact:
+    has_firewalld: true
+  when: s.rc == 0
+
+- name: RPM | Determine if iptables-services installed
+  command: "rpm -q iptables-services"
+  register: s
+  changed_when: false
+  failed_when: false
+  always_run: yes
+
+- name: Set the has_iptables fact
+  set_fact:
+    has_iptables: true
+  when: s.rc == 0

+ 1 - 0
roles/etcd/defaults/main.yaml

@@ -14,3 +14,4 @@ etcd_advertise_client_urls: "{{ etcd_url_scheme }}://{{ etcd_ip }}:{{ etcd_clien
 etcd_listen_client_urls: "{{ etcd_url_scheme }}://{{ etcd_ip }}:{{ etcd_client_port }}"
 
 etcd_data_dir: /var/lib/etcd/
+etcd_systemd_dir: "/etc/systemd/system/{{ etcd_service }}.service.d"

+ 43 - 3
roles/etcd/tasks/main.yml

@@ -26,12 +26,52 @@
   - etcd_is_containerized | bool
   - not openshift.common.is_etcd_system_container | bool
 
-- name: Ensure etcd datadir exists when containerized
+
+# Start secondary etcd instance for third party integrations
+# TODO: Determine an alternative to using thirdparty variable
+
+- name: Create configuration directory
+  file:
+    path: "{{ etcd_conf_dir }}"
+    state: directory
+    mode: 0700
+  when: etcd_is_thirdparty | bool
+
+  # TODO: retest with symlink to confirm it does or does not function
+- name: Copy service file for etcd instance
+  copy:
+    src: /usr/lib/systemd/system/etcd.service
+    dest: "/etc/systemd/system/{{ etcd_service }}.service"
+    remote_src: True
+  when: etcd_is_thirdparty | bool
+
+- name: Ensure third party etcd service.d directory exists
+  file:
+    path: "{{ etcd_systemd_dir }}"
+    state: directory
+  when: etcd_is_thirdparty | bool
+
+- name: Configure third party etcd service unit file
+  template:
+    dest: "{{ etcd_systemd_dir }}/custom.conf"
+    src: custom.conf.j2
+  when: etcd_is_thirdparty | bool
+
+  # TODO: this task may not be needed with Validate permissions
+- name: Ensure etcd datadir exists
   file:
     path: "{{ etcd_data_dir }}"
     state: directory
     mode: 0700
-  when: etcd_is_containerized | bool
+    owner: etcd
+    group: etcd
+    recurse: True
+  when: etcd_is_containerized | bool or etcd_is_thirdparty | bool
+
+  # TODO: Determine if the below reload would work here, for now just reload
+- name: Reload systemd units
+  command: systemctl daemon-reload
+  when: etcd_is_thirdparty | bool
 
 - name: Disable system etcd when containerized
   systemd:
@@ -67,7 +107,7 @@
 - name: Write etcd global config file
   template:
     src: etcd.conf.j2
-    dest: /etc/etcd/etcd.conf
+    dest: "{{ etcd_conf_file }}"
     backup: true
   notify:
   - restart etcd

+ 3 - 0
roles/etcd/templates/custom.conf.j2

@@ -0,0 +1,3 @@
+[Service]
+WorkingDirectory={{ etcd_data_dir }}
+EnvironmentFile=-{{ etcd_conf_file }}

+ 11 - 1
roles/etcd/templates/etcd.conf.j2

@@ -8,7 +8,7 @@
 {% endfor -%}
 {% endmacro -%}
 
-{% if etcd_peers | default([]) | length > 1 %}
+{% if (etcd_peers | default([]) | length > 1) or (etcd_is_thirdparty) %}
 ETCD_NAME={{ etcd_hostname }}
 ETCD_LISTEN_PEER_URLS={{ etcd_listen_peer_urls }}
 {% else %}
@@ -23,6 +23,16 @@ ETCD_LISTEN_CLIENT_URLS={{ etcd_listen_client_urls }}
 #ETCD_MAX_WALS=5
 #ETCD_CORS=
 
+{% if etcd_is_thirdparty %}
+#[cluster]
+ETCD_INITIAL_ADVERTISE_PEER_URLS={{ etcd_initial_advertise_peer_urls }}
+
+# TODO: This needs to be altered to support the correct etcd instances
+ETCD_INITIAL_CLUSTER={{ etcd_hostname }}={{ etcd_initial_advertise_peer_urls }}
+ETCD_INITIAL_CLUSTER_STATE={{ etcd_initial_cluster_state }}
+ETCD_INITIAL_CLUSTER_TOKEN=thirdparty-etcd-cluster-1
+{% endif %}
+
 {% if etcd_peers | default([]) | length > 1 %}
 #[cluster]
 ETCD_INITIAL_ADVERTISE_PEER_URLS={{ etcd_initial_advertise_peer_urls }}
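
For a third-party instance such as contiv-etcd, the new block renders to a single-member cluster definition; hostname and address below are hypothetical:

    # Example rendered cluster section for a thirdparty etcd (values illustrative)
    ETCD_INITIAL_ADVERTISE_PEER_URLS=http://192.168.10.11:22380
    ETCD_INITIAL_CLUSTER=master1=http://192.168.10.11:22380
    ETCD_INITIAL_CLUSTER_STATE=new
    ETCD_INITIAL_CLUSTER_TOKEN=thirdparty-etcd-cluster-1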

+ 2 - 0
roles/etcd_common/defaults/main.yml

@@ -2,6 +2,7 @@
 # etcd server vars
 etcd_conf_dir: "{{ '/etc/etcd' if not openshift.common.is_etcd_system_container else '/var/lib/etcd/etcd.etcd/etc'  }}"
 etcd_system_container_conf_dir: /var/lib/etcd/etc
+etcd_conf_file: "{{ etcd_conf_dir }}/etcd.conf"
 etcd_ca_file: "{{ etcd_conf_dir }}/ca.crt"
 etcd_cert_file: "{{ etcd_conf_dir }}/server.crt"
 etcd_key_file: "{{ etcd_conf_dir }}/server.key"
@@ -33,3 +34,4 @@ etcd_hostname: "{{ inventory_hostname }}"
 etcd_ip: "{{ ansible_default_ipv4.address }}"
 etcd_is_atomic: False
 etcd_is_containerized: False
+etcd_is_thirdparty: False

+ 13 - 0
roles/openshift_common/tasks/main.yml

@@ -12,6 +12,18 @@
   when: openshift_use_flannel | default(false) | bool and openshift_use_nuage | default(false) | bool
 
 - fail:
    msg: Contiv cannot be used with openshift sdn, set openshift_use_openshift_sdn=false if you want to use contiv
+  when: openshift_use_openshift_sdn | default(true) | bool and openshift_use_contiv | default(false) | bool
+
+- fail:
+    msg: Contiv cannot be used with flannel
+  when: openshift_use_flannel | default(false) | bool and openshift_use_contiv | default(false) | bool
+
+- fail:
+    msg: Contiv cannot be used with nuage
+  when: openshift_use_nuage | default(false) | bool and openshift_use_contiv | default(false) | bool
+
+- fail:
     msg: openshift_hostname must be 64 characters or less
   when: openshift_hostname is defined and openshift_hostname | length > 64
 
@@ -24,6 +36,7 @@
       sdn_network_plugin_name: "{{ os_sdn_network_plugin_name | default(None) }}"
       use_flannel: "{{ openshift_use_flannel | default(None) }}"
       use_nuage: "{{ openshift_use_nuage | default(None) }}"
+      use_contiv: "{{ openshift_use_contiv | default(None) }}"
       use_manageiq: "{{ openshift_use_manageiq | default(None) }}"
       data_dir: "{{ openshift_data_dir | default(None) }}"
       use_dnsmasq: "{{ openshift_use_dnsmasq | default(None) }}"

+ 19 - 0
roles/openshift_facts/library/openshift_facts.py

@@ -485,6 +485,24 @@ def set_nuage_facts_if_unset(facts):
     return facts
 
 
+def set_contiv_facts_if_unset(facts):
+    """ Set contiv facts if not already present in facts dict
+            dict: the facts dict updated with the contiv facts if
+            missing
+        Args:
+            facts (dict): existing facts
+        Returns:
+            dict: the facts dict updated with the contiv
+            facts if they were not already present
+
+    """
+    if 'common' in facts:
+        if 'use_contiv' not in facts['common']:
+            use_contiv = False
+            facts['common']['use_contiv'] = use_contiv
+    return facts
+
+
 def set_node_schedulability(facts):
     """ Set schedulable facts if not already present in facts dict
         Args:
@@ -1936,6 +1954,7 @@ class OpenShiftFacts(object):
         facts = set_project_cfg_facts_if_unset(facts)
         facts = set_flannel_facts_if_unset(facts)
         facts = set_nuage_facts_if_unset(facts)
+        facts = set_contiv_facts_if_unset(facts)
         facts = set_node_schedulability(facts)
         facts = set_selectors(facts)
         facts = set_identity_providers_if_unset(facts)

+ 3 - 0
roles/openshift_master/meta/main.yml

@@ -42,3 +42,6 @@ dependencies:
 - role: nickhammond.logrotate
 - role: nuage_master
   when: openshift.common.use_nuage | bool
+- role: contiv
+  contiv_role: netmaster
+  when: openshift.common.use_contiv | bool

+ 1 - 1
roles/openshift_master/templates/master.yaml.v1.j2

@@ -165,7 +165,7 @@ masterPublicURL: {{ openshift.master.public_api_url }}
 networkConfig:
   clusterNetworkCIDR: {{ openshift.master.sdn_cluster_network_cidr }}
   hostSubnetLength: {{ openshift.master.sdn_host_subnet_length }}
-{% if openshift.common.use_openshift_sdn or openshift.common.use_nuage or openshift.common.sdn_network_plugin_name == 'cni' %}
+{% if openshift.common.use_openshift_sdn or openshift.common.use_nuage or openshift.common.use_contiv or openshift.common.sdn_network_plugin_name == 'cni' %}
   networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
 {% endif %}
 # serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet

+ 1 - 1
roles/openshift_node/templates/node.yaml.v1.j2

@@ -27,7 +27,7 @@ networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
 # deprecates networkPluginName above. The two should match.
 networkConfig:
    mtu: {{ openshift.node.sdn_mtu }}
-{% if openshift.common.use_openshift_sdn | bool or openshift.common.use_nuage | bool or openshift.common.sdn_network_plugin_name == 'cni' %}
+{% if openshift.common.use_openshift_sdn | bool or openshift.common.use_nuage | bool or openshift.common.use_contiv | bool or openshift.common.sdn_network_plugin_name == 'cni' %}
    networkPluginName: {{ openshift.common.sdn_network_plugin_name }}
 {% endif %}
 {% if openshift.node.set_node_ip | bool %}