
fixed some issues booting VMs on GCE

corrected openshift master config, commented out infra deployment

correct list and terminate; they were broken in the case where no instances had been terminated

Using openshift-sdn for gce

new join_node playbook for gce

openstack/hosts/nova.py now takes the nova.ini from its own directory instead of the directory bin/cluster is executed from
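
Roughly, the lookup order now starts at the inventory script's own location (a minimal sketch; `find_nova_ini` is an illustrative helper, not part of the committed script):

```python
import os

# Sketch of the new lookup order: the inventory script's own directory
# first, then the user's home directory, then /etc/ansible.
NOVA_CONFIG_FILES = [
    os.path.join(os.path.dirname(os.path.realpath(__file__)), "nova.ini"),
    os.path.expanduser(os.environ.get("ANSIBLE_CONFIG", "~/nova.ini")),
    "/etc/ansible/nova.ini",
]

def find_nova_ini():
    """Return the first nova.ini that exists, or None."""
    for path in NOVA_CONFIG_FILES:
        if os.path.exists(path):
            return path
    return None
```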

add fix for ICMP reject rules

Avoid a recursive loop

Jenkins image was renamed

Default masters to t2.medium instead of t2.small

Fix a minor bug involving AWS ENV Keys

* If a user forgot to set their AWS keys, we'd get a non-descriptive error about a variable not being set
* This patch uses the correct variable (key_missing) so the error message is more informative (sketched below)
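
A minimal sketch of the corrected check, with illustrative names (`check_aws_environment`, `required`, and the boto config paths are assumptions for this sketch, not the exact code in bin/cluster):

```python
import os

def check_aws_environment():
    """Raise a descriptive error naming the AWS variables that are
    actually missing (key_missing) instead of an unrelated name."""
    required = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]
    key_missing = [key for key in required if key not in os.environ]
    # Paths assumed for illustration only.
    boto_configs = [p for p in ("~/.boto", "/etc/boto.cfg")
                    if os.path.exists(os.path.expanduser(p))]
    if key_missing and not boto_configs:
        raise ValueError("PROVIDER aws requires {} environment variable(s). "
                         "See README_AWS.md".format(key_missing))
```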

delete some fixes that are no longer needed (selinux, iptables rules for sdn)

GCE: all needed variables are in gce.ini, which is read by bin/cluster (it now checks for gce.ini in the default location or uses GCE_INI_PATH to locate it) as well as by gce.py
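
The lookup behaves roughly like this (a sketch; `load_gce_ini` is a hypothetical wrapper, bin/cluster does the same steps inline):

```python
import os
import ConfigParser  # Python 2, as used by bin/cluster

def load_gce_ini():
    """Prefer GCE_INI_PATH, fall back to the in-tree default, and export
    the [gce] section into the environment so gce.py and the playbooks
    can read the same values."""
    gce_ini_default_path = 'inventory/gce/hosts/gce.ini'
    gce_ini_path = os.environ.get('GCE_INI_PATH', gce_ini_default_path)
    config = ConfigParser.ConfigParser()
    if os.path.exists(gce_ini_path):
        config.readfp(open(gce_ini_path))
        for key in config.options('gce'):
            os.environ[key] = config.get('gce', key)
```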

openshift_node_labels : get from oo_option

fix syntax error in bin/cluster

fix lookup for openshift_node_labels

Adding desc, multiplier, and units to zabbix item

Adding capability to have descriptions on triggers

updated triggers and items to have better descriptions and multipliers

Move openshift_data_dir to a fact based on deployment_type

Previously this was being set to /var/lib/origin regardless of deployment_type,
which isn't correct given that existing 'enterprise' and 'online' deployments
would have been deployed with /var/lib/openshift
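
In fact form, the default works out to roughly this (`default_data_dir` is an illustrative helper; the real logic lives in set_deployment_facts_if_unset in openshift_facts.py):

```python
def default_data_dir(deployment_type):
    """Pick the data dir for the deployment type, mirroring the fact
    logic in this patch: 'enterprise', 'online' (and 'origin', here)
    use /var/lib/openshift, anything else keeps /var/lib/origin."""
    if deployment_type in ('enterprise', 'online', 'origin'):
        return '/var/lib/openshift'
    return '/var/lib/origin'
```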

Verify again that the Ansible version is not 1.9.0 or 1.9.0.1
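
The constraint the role asserts, expressed as a standalone check (a sketch only; the role itself uses an Ansible assert with version_compare):

```python
from distutils.version import LooseVersion

def check_ansible_version(version):
    """Require Ansible >= 1.8.0 while rejecting the broken 1.9.0 and
    1.9.0.1 releases."""
    if LooseVersion(version) < LooseVersion("1.8.0"):
        raise AssertionError("Ansible >= 1.8.0 is required")
    if version in ("1.9.0", "1.9.0.1"):
        raise AssertionError("Ansible 1.9.0 and 1.9.0.1 are not supported")
```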

bin/cluster no longer takes -a and -s

fix master_public_api_url: default to a correct URL
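
The default boils down to this (`default_public_api_url` is an illustrative helper; the master role builds the URL inline from openshift.common.public_ip):

```python
def default_public_api_url(public_ip, explicit_url=None, api_port=8443):
    """Use the explicitly configured public API URL when given,
    otherwise build one from the host's public IP and the API port."""
    return explicit_url or "https://{0}:{1}".format(public_ip, api_port)
```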

Really fixed master public api url this time

uncommented infra deployment as before

fixed masterPublicURL in a template again

README_GCE.md: use GCE_INI_PATH to locate gce.ini, update the description of gce.ini
Chengcheng Mu, 9 years ago
Parent commit: d0b167bd07
40 changed files with 401 additions and 140 deletions
  1. README_GCE.md (+14 -3)
  2. bin/cluster (+9 -5)
  3. inventory/byo/hosts.example (+1 -1)
  4. inventory/gce/hosts/gce.py (+11 -3)
  5. inventory/openstack/hosts/nova.py (+1 -1)
  6. playbooks/aws/openshift-cluster/vars.online.int.yml (+1 -1)
  7. playbooks/aws/openshift-cluster/vars.online.prod.yml (+1 -1)
  8. playbooks/aws/openshift-cluster/vars.online.stage.yml (+1 -1)
  9. playbooks/common/openshift-cluster/set_infra_launch_facts_tasks.yml (+15 -0)
  10. playbooks/gce/openshift-cluster/config.yml (+4 -0)
  11. playbooks/gce/openshift-cluster/join_node.yml (+64 -0)
  12. playbooks/gce/openshift-cluster/launch.yml (+1 -1)
  13. playbooks/gce/openshift-cluster/list.yml (+2 -2)
  14. playbooks/gce/openshift-cluster/tasks/launch_instances.yml (+9 -5)
  15. playbooks/gce/openshift-cluster/terminate.yml (+34 -21)
  16. playbooks/gce/openshift-cluster/vars.yml (+5 -3)
  17. playbooks/openstack/openshift-cluster/files/heat_stack.yaml (+17 -3)
  18. playbooks/openstack/openshift-cluster/launch.yml (+30 -5)
  19. roles/lib_zabbix/library/zbx_item.py (+31 -2)
  20. roles/lib_zabbix/library/zbx_trigger.py (+5 -3)
  21. roles/lib_zabbix/tasks/create_template.yml (+6 -2)
  22. roles/openshift_common/vars/main.yml (+0 -2)
  23. roles/openshift_examples/files/examples/image-streams/image-streams-centos7.json (+7 -7)
  24. roles/openshift_examples/files/examples/image-streams/image-streams-rhel7.json (+2 -2)
  25. roles/openshift_examples/files/examples/quickstart-templates/jenkins-ephemeral-template.json (+6 -1)
  26. roles/openshift_examples/files/examples/quickstart-templates/jenkins-persistent-template.json (+6 -1)
  27. roles/openshift_facts/library/openshift_facts.py (+9 -0)
  28. roles/openshift_facts/tasks/main.yml (+1 -1)
  29. roles/openshift_manage_node/tasks/main.yml (+1 -1)
  30. roles/openshift_master/tasks/main.yml (+10 -1)
  31. roles/openshift_master/templates/master.yaml.v1.j2 (+1 -1)
  32. roles/openshift_master/vars/main.yml (+1 -1)
  33. roles/openshift_master_ca/vars/main.yml (+1 -1)
  34. roles/openshift_node/tasks/main.yml (+7 -1)
  35. roles/openshift_node/templates/node.yaml.v1.j2 (+1 -1)
  36. roles/openshift_node/vars/main.yml (+1 -1)
  37. roles/os_zabbix/vars/template_docker.yml (+6 -6)
  38. roles/os_zabbix/vars/template_heartbeat.yml (+1 -1)
  39. roles/os_zabbix/vars/template_openshift_master.yml (+1 -1)
  40. roles/os_zabbix/vars/template_os_linux.yml (+77 -47)

+ 14 - 3
README_GCE.md

@@ -39,6 +39,13 @@ Create a gce.ini file for GCE
 * gce_service_account_pem_file_path - Full path from previous steps
 * gce_project_id - Found in "Projects", it list all the gce projects you are associated with.  The page lists their "Project Name" and "Project ID".  You want the "Project ID"
 
+Mandatory customization variables (check the values according to your tenant):
+* zone = europe-west1-d
+* network = default
+* gce_machine_type = n1-standard-2
+* gce_machine_image = preinstalled-slave-50g-v5
+
+
 1. vi ~/.gce/gce.ini
 1. make the contents look like this:
 ```
@@ -46,11 +53,15 @@ Create a gce.ini file for GCE
 gce_service_account_email_address = long...@developer.gserviceaccount.com
 gce_service_account_pem_file_path = /full/path/to/project_id-gce_key_hash.pem
 gce_project_id = project_id
+zone = europe-west1-d
+network = default
+gce_machine_type = n1-standard-2
+gce_machine_image = preinstalled-slave-50g-v5
+
 ```
-1. Setup a sym link so that gce.py will pick it up (link must be in same dir as gce.py)
+1. Define the environment variable GCE_INI_PATH so gce.py can pick it up and bin/cluster can also read it
 ```
-  cd openshift-ansible/inventory/gce
-  ln -s ~/.gce/gce.ini gce.ini
+export GCE_INI_PATH=~/.gce/gce.ini
 ```
 
 

+ 9 - 5
bin/cluster

@@ -142,10 +142,14 @@ class Cluster(object):
         """
         config = ConfigParser.ConfigParser()
         if 'gce' == provider:
-            config.readfp(open('inventory/gce/hosts/gce.ini'))
+            gce_ini_default_path = os.path.join(
+                'inventory/gce/hosts/gce.ini')
+            gce_ini_path = os.environ.get('GCE_INI_PATH', gce_ini_default_path)
+            if os.path.exists(gce_ini_path): 
+                config.readfp(open(gce_ini_path))
 
-            for key in config.options('gce'):
-                os.environ[key] = config.get('gce', key)
+                for key in config.options('gce'):
+                    os.environ[key] = config.get('gce', key)
 
             inventory = '-i inventory/gce/hosts'
         elif 'aws' == provider:
@@ -164,7 +168,7 @@ class Cluster(object):
             boto_configs = [conf for conf in boto_conf_files if conf_exists(conf)]
 
             if len(key_missing) > 0 and len(boto_configs) == 0:
-                raise ValueError("PROVIDER aws requires {} environment variable(s). See README_AWS.md".format(missing))
+                raise ValueError("PROVIDER aws requires {} environment variable(s). See README_AWS.md".format(key_missing))
 
         elif 'libvirt' == provider:
             inventory = '-i inventory/libvirt/hosts'
@@ -193,7 +197,7 @@ class Cluster(object):
         if args.option:
             for opt in args.option:
                 k, v = opt.split('=', 1)
-                env['cli_' + k] = v
+                env[k] = v
 
         ansible_env = '-e \'{}\''.format(
             ' '.join(['%s=%s' % (key, value) for (key, value) in env.items()])

+ 1 - 1
inventory/byo/hosts.example

@@ -70,7 +70,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',
 #osm_default_node_selector='region=primary'
 
 # set RPM version for debugging purposes
-#openshift_version=-3.0.0.0
+#openshift_pkg_version=-3.0.0.0
 
 # host group for masters
 [masters]

+ 11 - 3
inventory/gce/hosts/gce.py

@@ -120,6 +120,8 @@ class GceInventory(object):
             os.path.dirname(os.path.realpath(__file__)), "gce.ini")
         gce_ini_path = os.environ.get('GCE_INI_PATH', gce_ini_default_path)
 
+	print "GCE INI PATH :: "+gce_ini_path
+
         # Create a ConfigParser.
         # This provides empty defaults to each key, so that environment
         # variable configuration (as opposed to INI configuration) is able
@@ -173,6 +175,10 @@ class GceInventory(object):
         args[1] = os.environ.get('GCE_PEM_FILE_PATH', args[1])
         kwargs['project'] = os.environ.get('GCE_PROJECT', kwargs['project'])
 
+	sys.stderr.write("GCE_EMAIL : "+args[0]+"\n")
+	sys.stderr.write("GCE_PEM_FILE_PATH : "+args[1]+"\n")
+	sys.stderr.write("GCE_PROJECT : "+kwargs['project']+"\n")
+
         # Retrieve and return the GCE driver.
         gce = get_driver(Provider.GCE)(*args, **kwargs)
         gce.connection.user_agent_append(
@@ -211,7 +217,8 @@ class GceInventory(object):
             'gce_image': inst.image,
             'gce_machine_type': inst.size,
             'gce_private_ip': inst.private_ips[0],
-            'gce_public_ip': inst.public_ips[0],
+            # Hosts don't always have a public IP name
+            #'gce_public_ip': inst.public_ips[0],
             'gce_name': inst.name,
             'gce_description': inst.extra['description'],
             'gce_status': inst.extra['status'],
@@ -219,8 +226,8 @@ class GceInventory(object):
             'gce_tags': inst.extra['tags'],
             'gce_metadata': md,
             'gce_network': net,
-            # Hosts don't have a public name, so we add an IP
-            'ansible_ssh_host': inst.public_ips[0]
+            # Hosts don't always have a public IP name
+            #'ansible_ssh_host': inst.public_ips[0]
         }
 
     def get_instance(self, instance_name):
@@ -284,4 +291,5 @@ class GceInventory(object):
 
 
 # Run the script
+print "Hello world"
 GceInventory()

+ 1 - 1
inventory/openstack/hosts/nova.py

@@ -34,7 +34,7 @@ except ImportError:
 # executed with no parameters, return the list of
 # all groups and hosts
 
-NOVA_CONFIG_FILES = [os.getcwd() + "/nova.ini",
+NOVA_CONFIG_FILES = [os.path.join(os.path.dirname(os.path.realpath(__file__)), "nova.ini"),
                      os.path.expanduser(os.environ.get('ANSIBLE_CONFIG', "~/nova.ini")),
                      "/etc/ansible/nova.ini"]
 

+ 1 - 1
playbooks/aws/openshift-cluster/vars.online.int.yml

@@ -3,7 +3,7 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: t2.small
+ec2_master_instance_type: t2.medium
 ec2_master_security_groups: [ 'integration', 'integration-master' ]
 ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'integration', 'integration-infra' ]

+ 1 - 1
playbooks/aws/openshift-cluster/vars.online.prod.yml

@@ -3,7 +3,7 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: t2.small
+ec2_master_instance_type: t2.medium
 ec2_master_security_groups: [ 'production', 'production-master' ]
 ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'production', 'production-infra' ]

+ 1 - 1
playbooks/aws/openshift-cluster/vars.online.stage.yml

@@ -3,7 +3,7 @@ ec2_image: ami-9101c8fa
 ec2_image_name: libra-ops-rhel7*
 ec2_region: us-east-1
 ec2_keypair: mmcgrath_libra
-ec2_master_instance_type: t2.small
+ec2_master_instance_type: t2.medium
 ec2_master_security_groups: [ 'stage', 'stage-master' ]
 ec2_infra_instance_type: c4.large
 ec2_infra_security_groups: [ 'stage', 'stage-infra' ]

+ 15 - 0
playbooks/common/openshift-cluster/set_infra_launch_facts_tasks.yml

@@ -0,0 +1,15 @@
+---
+- set_fact: k8s_type=infra
+- set_fact: sub_host_type="{{ type }}"
+- set_fact: number_infra="{{ count }}"
+
+- name: Generate infra  instance names(s)
+  set_fact:
+    scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ sub_host_type }}-{{ '%05x' | format(1048576 | random) }}"
+  register: infra_names_output
+  with_sequence: count={{ number_infra }}
+
+- set_fact:
+    infra_names: "{{ infra_names_output.results | default([])
+                    | oo_collect('ansible_facts')
+                    | oo_collect('scratch_name') }}"

+ 4 - 0
playbooks/gce/openshift-cluster/config.yml

@@ -10,6 +10,8 @@
   - set_fact:
       g_ssh_user_tmp: "{{ deployment_vars[deployment_type].ssh_user }}"
       g_sudo_tmp: "{{ deployment_vars[deployment_type].sudo }}"
+      use_sdn: "{{ do_we_use_openshift_sdn }}"
+      sdn_plugin: "{{ sdn_network_plugin }}"
 
 - include: ../../common/openshift-cluster/config.yml
   vars:
@@ -22,3 +24,5 @@
     openshift_debug_level: 2
     openshift_deployment_type: "{{ deployment_type }}"
     openshift_hostname: "{{ gce_private_ip }}"
+    openshift_use_openshift_sdn: "{{ hostvars.localhost.use_sdn  }}"
+    os_sdn_network_plugin_name: "{{ hostvars.localhost.sdn_plugin }}"

+ 64 - 0
playbooks/gce/openshift-cluster/join_node.yml

@@ -0,0 +1,64 @@
+---
+- name: Populate oo_hosts_to_update group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_hosts_to_update
+    add_host:
+      name: "{{ node_ip }}"
+      groups: oo_hosts_to_update
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+
+- include: ../../common/openshift-cluster/update_repos_and_packages.yml
+
+- name: Populate oo_masters_to_config host group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_nodes_to_config
+    add_host:
+      name: "{{ node_ip }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_nodes_to_config
+
+  - name: Add to preemptible group if needed
+    add_host:
+      name: "{{ node_ip }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_preemptible_nodes
+    when: preemptible is defined and preemptible == "true"
+  
+  - name: Add to not preemptible group if needed
+    add_host:
+      name: "{{ node_ip }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_non_preemptible_nodes
+    when: preemptible is defined and  preemptible == "false"
+  
+  - name: Evaluate oo_first_master
+    add_host:
+      name: "{{ groups['tag_env-host-type-' ~ cluster_id ~ '-openshift-master'][0] }}"
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+      groups: oo_first_master
+    when: "'tag_env-host-type-{{ cluster_id }}-openshift-master' in groups"
+
+#- include: config.yml
+- include: ../../common/openshift-node/config.yml
+  vars:
+    openshift_cluster_id: "{{ cluster_id }}"
+    openshift_debug_level: 4
+    openshift_deployment_type: "{{ deployment_type }}"
+    openshift_hostname: "{{ ansible_default_ipv4.address }}"
+    openshift_use_openshift_sdn: true
+    os_sdn_network_plugin_name: "redhat/openshift-ovs-subnet"
+    osn_cluster_dns_domain: "{{ hostvars[groups.oo_first_master.0].openshift.dns.domain }}"
+    osn_cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].openshift.dns.ip }}"

+ 1 - 1
playbooks/gce/openshift-cluster/launch.yml

@@ -28,7 +28,7 @@
       type: "{{ k8s_type }}"
       g_sub_host_type: "{{ sub_host_type }}"
 
-  - include: ../../common/openshift-cluster/set_node_launch_facts_tasks.yml
+  - include: ../../common/openshift-cluster/set_infra_launch_facts_tasks.yml
     vars:
       type: "infra"
       count: "{{ num_infra }}"

+ 2 - 2
playbooks/gce/openshift-cluster/list.yml

@@ -14,11 +14,11 @@
       groups: oo_list_hosts
       ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
       ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
-    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated | default([]))
 
 - name: List instance(s)
   hosts: oo_list_hosts
   gather_facts: no
   tasks:
   - debug:
-      msg: "public ip:{{ hostvars[inventory_hostname].gce_public_ip }} private ip:{{ hostvars[inventory_hostname].gce_private_ip }}"
+      msg: "private ip:{{ hostvars[inventory_hostname].gce_private_ip }}"

+ 9 - 5
playbooks/gce/openshift-cluster/tasks/launch_instances.yml

@@ -10,18 +10,22 @@
     service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
     pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
     project_id: "{{ lookup('env', 'gce_project_id') }}"
+    zone: "{{ lookup('env', 'zone') }}"
+    network: "{{ lookup('env', 'network') }}"
+# unsupported in 1.9.+
+    #service_account_permissions: "datastore,logging-write"
     tags:
       - created-by-{{ lookup('env', 'LOGNAME') |default(cluster, true) }}
       - env-{{ cluster }}
       - host-type-{{ type }}
-      - sub-host-type-{{ sub_host_type }}
+      - sub-host-type-{{ g_sub_host_type }}
       - env-host-type-{{ cluster }}-openshift-{{ type }}
   register: gce
 
 - name: Add new instances to groups and set variables needed
   add_host:
     hostname: "{{ item.name }}"
-    ansible_ssh_host: "{{ item.public_ip }}"
+    ansible_ssh_host: "{{ item.name }}"
     ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
     ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
     groups: "{{ item.tags | oo_prepend_strings_in_list('tag_') | join(',') }}"
@@ -30,13 +34,13 @@
   with_items: gce.instance_data
 
 - name: Wait for ssh
-  wait_for: port=22 host={{ item.public_ip }}
+  wait_for: port=22 host={{ item.name }}
   with_items: gce.instance_data
 
 - name: Wait for user setup
   command: "ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null {{ hostvars[item.name].ansible_ssh_user }}@{{ item.public_ip }} echo {{ hostvars[item.name].ansible_ssh_user }} user is setup"
   register: result
   until: result.rc == 0
-  retries: 20
-  delay: 10
+  retries: 30
+  delay: 5
   with_items: gce.instance_data

+ 34 - 21
playbooks/gce/openshift-cluster/terminate.yml

@@ -1,25 +1,18 @@
 ---
 - name: Terminate instance(s)
   hosts: localhost
+  connection: local
   gather_facts: no
   vars_files:
   - vars.yml
   tasks:
-  - set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-node
+  - set_fact: scratch_group=tag_env-{{ cluster_id }}
   - add_host:
       name: "{{ item }}"
-      groups: oo_hosts_to_terminate, oo_nodes_to_terminate
+      groups: oo_hosts_to_terminate
       ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
       ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
-    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
-
-  - set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-master
-  - add_host:
-      name: "{{ item }}"
-      groups: oo_hosts_to_terminate, oo_masters_to_terminate
-      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
-      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
-    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated | default([]))
 
 - name: Unsubscribe VMs
   hosts: oo_hosts_to_terminate
@@ -32,14 +25,34 @@
           lookup('oo_option', 'rhel_skip_subscription') | default(rhsub_skip, True) |
             default('no', True) | lower in ['no', 'false']
 
-- include: ../openshift-node/terminate.yml
-  vars:
-    gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
-    gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
-    gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
+- name: Terminate instances(s)
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+
+    - name: Terminate instances that were previously launched
+      local_action:
+        module: gce
+        state: 'absent'
+        name: "{{ item }}"
+        service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
+        pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
+        project_id: "{{ lookup('env', 'gce_project_id') }}"
+        zone: "{{ lookup('env', 'zone') }}"
+      with_items: groups['oo_hosts_to_terminate'] | default([])
+      when: item is defined
 
-- include: ../openshift-master/terminate.yml
-  vars:
-    gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
-    gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
-    gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
+#- include: ../openshift-node/terminate.yml
+#  vars:
+#    gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
+#    gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
+#    gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
+#
+#- include: ../openshift-master/terminate.yml
+#  vars:
+#    gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
+#    gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
+#    gce_project_id: "{{ lookup('env', 'gce_project_id') }}"

+ 5 - 3
playbooks/gce/openshift-cluster/vars.yml

@@ -1,8 +1,11 @@
 ---
+do_we_use_openshift_sdn: true
+sdn_network_plugin: redhat/openshift-ovs-subnet 
+# os_sdn_network_plugin_name can be ovssubnet or multitenant, see https://docs.openshift.org/latest/architecture/additional_concepts/sdn.html#ovssubnet-plugin-operation
 deployment_vars:
   origin:
-    image: centos-7
-    ssh_user:
+    image: preinstalled-slave-50g-v5
+    ssh_user: root
     sudo: yes
   online:
     image: libra-rhel7
@@ -12,4 +15,3 @@ deployment_vars:
     image: rhel-7
     ssh_user:
     sudo: yes
-

+ 17 - 3
playbooks/openstack/openshift-cluster/files/heat_stack.yaml

@@ -88,6 +88,12 @@ parameters:
     label: Infra flavor
     description: Flavor of the infra node servers
 
+  key_pair:
+    type: string
+    label: Key name
+    description: Name of the key
+
+
 outputs:
 
   master_names:
@@ -250,6 +256,14 @@ resources:
           port_range_max: 10250
           remote_mode: remote_group_id
           remote_group_id: { get_resource: master-secgrp }
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 30001 
+          port_range_max: 30001
+        - direction: ingress
+          protocol: tcp
+          port_range_min: 30850 
+          port_range_max: 30850
 
   infra-secgrp:
     type: OS::Neutron::SecurityGroup
@@ -291,7 +305,7 @@ resources:
           type:       master
           image:      { get_param: master_image }
           flavor:     { get_param: master_flavor }
-          key_name:   { get_resource: keypair }
+          key_name:   { get_param: key_pair }
           net:        { get_resource: net }
           subnet:     { get_resource: subnet }
           secgrp:
@@ -323,7 +337,7 @@ resources:
           subtype:    compute
           image:      { get_param: node_image }
           flavor:     { get_param: node_flavor }
-          key_name:   { get_resource: keypair }
+          key_name:   { get_param: key_pair }
           net:        { get_resource: net }
           subnet:     { get_resource: subnet }
           secgrp:
@@ -355,7 +369,7 @@ resources:
           subtype:    infra
           image:      { get_param: infra_image }
           flavor:     { get_param: infra_flavor }
-          key_name:   { get_resource: keypair }
+          key_name:   { get_param: key_pair }
           net:        { get_resource: net }
           subnet:     { get_resource: subnet }
           secgrp:

+ 30 - 5
playbooks/openstack/openshift-cluster/launch.yml

@@ -19,15 +19,32 @@
     changed_when: false
     failed_when: stack_show_result.rc != 0 and 'Stack not found' not in stack_show_result.stderr
 
-  - set_fact:
-      heat_stack_action: 'stack-create'
+  - name: Create OpenStack Stack
+    command: 'heat stack-create -f {{ openstack_infra_heat_stack }}
+             -P key_pair={{ openstack_ssh_keypair }}
+             -P cluster_id={{ cluster_id }}
+             -P dns_nameservers={{ openstack_network_dns | join(",") }}
+             -P cidr={{ openstack_network_cidr }}
+             -P ssh_incoming={{ openstack_ssh_access_from }}
+             -P num_masters={{ num_masters }}
+             -P num_nodes={{ num_nodes }}
+             -P num_infra={{ num_infra }}
+             -P master_image={{ deployment_vars[deployment_type].image }}
+             -P node_image={{ deployment_vars[deployment_type].image }}
+             -P infra_image={{ deployment_vars[deployment_type].image }}
+             -P master_flavor={{ openstack_flavor["master"] }}
+             -P node_flavor={{ openstack_flavor["node"] }}
+             -P infra_flavor={{ openstack_flavor["infra"] }}
+             -P ssh_public_key="{{ openstack_ssh_public_key }}"
+             openshift-ansible-{{ cluster_id }}-stack'
     when: stack_show_result.rc == 1
   - set_fact:
       heat_stack_action: 'stack-update'
     when: stack_show_result.rc == 0
 
-  - name: Create or Update OpenStack Stack
-    command: 'heat {{ heat_stack_action }} -f {{ openstack_infra_heat_stack }}
+  - name: Update OpenStack Stack
+    command: 'heat stack-update -f {{ openstack_infra_heat_stack }}
+             -P key_pair={{ openstack_ssh_keypair }}
              -P cluster_id={{ cluster_id }}
              -P cidr={{ openstack_network_cidr }}
              -P dns_nameservers={{ openstack_network_dns | join(",") }}
@@ -50,7 +67,7 @@
     shell: 'heat stack-show openshift-ansible-{{ cluster_id }}-stack | awk ''$2 == "stack_status" {print $4}'''
     register: stack_show_status_result
     until: stack_show_status_result.stdout not in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS']
-    retries: 30
+    retries: 300
     delay: 1
     failed_when: stack_show_status_result.stdout not in ['CREATE_COMPLETE', 'UPDATE_COMPLETE']
 
@@ -119,4 +136,12 @@
 
 - include: update.yml
 
+# Fix icmp reject iptables rules
+# It should be solved in openshift-sdn but unfortunately it's not the case
+# Mysterious
+- name: Configuring Nodes for RBox
+  hosts: oo_nodes_to_config
+  roles:
+    - rbox-node
+
 - include: list.yml

+ 31 - 2
roles/lib_zabbix/library/zbx_item.py

@@ -88,6 +88,23 @@ def get_template_id(zapi, template_name):
 
     return template_ids, app_ids
 
+def get_multiplier(inval):
+    ''' Determine the multiplier
+    '''
+    if inval == None or inval == '':
+        return None, None
+
+    rval = None
+    try:
+        rval = int(inval)
+    except ValueError:
+        pass
+
+    if rval:
+        return rval, True
+
+    return rval, False
+
 # The branches are needed for CRUD and error handling
 # pylint: disable=too-many-branches
 def main():
@@ -106,6 +123,9 @@ def main():
             template_name=dict(default=None, type='str'),
             zabbix_type=dict(default=2, type='int'),
             value_type=dict(default='int', type='str'),
+            multiplier=dict(default=None, type='str'),
+            description=dict(default=None, type='str'),
+            units=dict(default=None, type='str'),
             applications=dict(default=None, type='list'),
             state=dict(default='present', type='str'),
         ),
@@ -137,11 +157,15 @@ def main():
                                 'templateids': templateid,
                                })
 
-    # Get
+    #******#
+    # GET
+    #******#
     if state == 'list':
         module.exit_json(changed=False, results=content['result'], state="list")
 
-    # Delete
+    #******#
+    # DELETE
+    #******#
     if state == 'absent':
         if not exists(content):
             module.exit_json(changed=False, state="absent")
@@ -152,12 +176,17 @@ def main():
     # Create and Update
     if state == 'present':
 
+        formula, use_multiplier = get_multiplier(module.params['multiplier'])
         params = {'name': module.params.get('name', module.params['key']),
                   'key_': module.params['key'],
                   'hostid': templateid[0],
                   'type': module.params['zabbix_type'],
                   'value_type': get_value_type(module.params['value_type']),
                   'applications': get_app_ids(module.params['applications'], app_name_ids),
+                  'formula': formula,
+                  'multiplier': use_multiplier,
+                  'description': module.params['description'],
+                  'units': module.params['units'],
                  }
 
         # Remove any None valued params

+ 5 - 3
roles/lib_zabbix/library/zbx_trigger.py

@@ -98,6 +98,7 @@ def main():
             zbx_password=dict(default=os.environ.get('ZABBIX_PASSWORD', None), type='str'),
             zbx_debug=dict(default=False, type='bool'),
             expression=dict(default=None, type='str'),
+            name=dict(default=None, type='str'),
             description=dict(default=None, type='str'),
             dependencies=dict(default=[], type='list'),
             priority=dict(default='avg', type='str'),
@@ -116,11 +117,11 @@ def main():
     zbx_class_name = 'trigger'
     idname = "triggerid"
     state = module.params['state']
-    description = module.params['description']
+    tname = module.params['name']
 
     content = zapi.get_content(zbx_class_name,
                                'get',
-                               {'filter': {'description': description},
+                               {'filter': {'description': tname},
                                 'expandExpression': True,
                                 'selectDependencies': 'triggerid',
                                })
@@ -138,7 +139,8 @@ def main():
 
     # Create and Update
     if state == 'present':
-        params = {'description': description,
+        params = {'description': tname,
+                  'comments':  module.params['description'],
                   'expression':  module.params['expression'],
                   'dependencies': get_deps(zapi, module.params['dependencies']),
                   'priority': get_priority(module.params['priority']),

+ 6 - 2
roles/lib_zabbix/tasks/create_template.yml

@@ -30,6 +30,9 @@
     key: "{{ item.key }}"
     name: "{{ item.name | default(item.key, true) }}"
     value_type: "{{ item.value_type | default('int') }}"
+    description: "{{ item.description | default('', True) }}"
+    multiplier: "{{ item.multiplier | default('', True) }}"
+    units: "{{ item.units | default('', True) }}"
     template_name: "{{ template.name }}"
     applications: "{{ item.applications }}"
   with_items: template.zitems
@@ -41,8 +44,9 @@
     zbx_server: "{{ server }}"
     zbx_user: "{{ user }}"
     zbx_password: "{{ password }}"
-    description: "{{ item.description }}"
-    dependencies: "{{ item.dependencies | default([], true) }}"
+    name: "{{ item.name }}"
+    description: "{{ item.description | default('', True) }}"
+    dependencies: "{{ item.dependencies | default([], True) }}"
     expression: "{{ item.expression }}"
     priority: "{{ item.priority }}"
     url: "{{ item.url | default(None, True) }}"

+ 0 - 2
roles/openshift_common/vars/main.yml

@@ -5,5 +5,3 @@
 # chains with the public zone (or the zone associated with the correct
 # interfaces)
 os_firewall_use_firewalld: False
-
-openshift_data_dir: /var/lib/origin

+ 7 - 7
roles/openshift_examples/files/examples/image-streams/image-streams-centos7.json

@@ -161,19 +161,19 @@
         "creationTimestamp": null
       },
       "spec": {
-        "dockerImageRepository": "openshift/wildfly-8-centos",
+        "dockerImageRepository": "openshift/wildfly-81-centos7",
         "tags": [
           {
             "name": "latest"
           },
           {
-            "name": "8",
+            "name": "8.1",
             "annotations": {
-              "description": "Build and run Java applications on Wildfly 8",
+              "description": "Build and run Java applications on Wildfly 8.1",
               "iconClass": "icon-wildfly",
               "tags": "builder,wildfly,java",
-              "supports":"wildfly:8,jee,java",
-              "version": "8"
+              "supports":"wildfly:8.1,jee,java",
+              "version": "8.1"
             },
             "from": {
               "Kind": "ImageStreamTag",
@@ -260,13 +260,13 @@
         "creationTimestamp": null
       },
       "spec": {
-        "dockerImageRepository": "openshift/jenkins-16-centos7",
+        "dockerImageRepository": "openshift/jenkins-1-centos7",
         "tags": [
           {
             "name": "latest"
           },
           {
-            "name": "1.6",
+            "name": "1",
             "from": {
               "Kind": "ImageStreamTag",
               "Name": "latest"

+ 2 - 2
roles/openshift_examples/files/examples/image-streams/image-streams-rhel7.json

@@ -230,13 +230,13 @@
         "creationTimestamp": null
       },
       "spec": {
-        "dockerImageRepository": "registry.access.redhat.com/openshift3/jenkins-16-rhel7",
+        "dockerImageRepository": "registry.access.redhat.com/openshift3/jenkins-1-rhel7",
         "tags": [
           {
             "name": "latest"
           },
           {
-            "name": "1.6",
+            "name": "1",
             "from": {
               "Kind": "ImageStreamTag",
               "Name": "latest"

+ 6 - 1
roles/openshift_examples/files/examples/quickstart-templates/jenkins-ephemeral-template.json

@@ -88,7 +88,7 @@
             "containers": [
               {
                 "name": "jenkins",
-                "image": "openshift/jenkins-16-centos7",
+                "image": "${JENKINS_IMAGE}",
                 "env": [
                   {
                     "name": "JENKINS_PASSWORD",
@@ -133,6 +133,11 @@
       "value": "jenkins"
     },
     {
+      "name": "JENKINS_IMAGE",
+      "description": "Jenkins Docker image to use",
+      "value": "openshift/jenkins-1-centos7"
+    },
+    {
       "name": "JENKINS_PASSWORD",
       "description": "Password for the Jenkins user",
       "generate": "expression",

+ 6 - 1
roles/openshift_examples/files/examples/quickstart-templates/jenkins-persistent-template.json

@@ -105,7 +105,7 @@
             "containers": [
               {
                 "name": "jenkins",
-                "image": "openshift/jenkins-16-centos7",
+                "image": "${JENKINS_IMAGE}",
                 "env": [
                   {
                     "name": "JENKINS_PASSWORD",
@@ -156,6 +156,11 @@
       "value": "password"
     },
     {
+      "name": "JENKINS_IMAGE",
+      "description": "Jenkins Docker image to use",
+      "value": "openshift/jenkins-1-centos7"
+    },
+    {
       "name": "VOLUME_CAPACITY",
       "description": "Volume space available for data, e.g. 512Mi, 2Gi",
       "value": "512Mi",

+ 9 - 0
roles/openshift_facts/library/openshift_facts.py

@@ -454,6 +454,8 @@ def set_deployment_facts_if_unset(facts):
             dict: the facts dict updated with the generated deployment_type
             facts
     """
+    # Perhaps re-factor this as a map?
+    # pylint: disable=too-many-branches
     if 'common' in facts:
         deployment_type = facts['common']['deployment_type']
         if 'service_type' not in facts['common']:
@@ -470,6 +472,13 @@ def set_deployment_facts_if_unset(facts):
             elif deployment_type == 'origin':
                 config_base = '/etc/openshift'
             facts['common']['config_base'] = config_base
+        if 'data_dir' not in facts['common']:
+            data_dir = '/var/lib/origin'
+            if deployment_type in ['enterprise', 'online']:
+                data_dir = '/var/lib/openshift'
+            elif deployment_type == 'origin':
+                data_dir = '/var/lib/openshift'
+            facts['common']['data_dir'] = data_dir
 
     for role in ('master', 'node'):
         if role in facts:

+ 1 - 1
roles/openshift_facts/tasks/main.yml

@@ -1,5 +1,5 @@
 ---
-- name: Verify Ansible version is greater than 1.8.0 and not 1.9.0
+- name: Verify Ansible version is greater than 1.8.0 and not 1.9.0 and not 1.9.0.1
   assert:
     that:
     - ansible_version | version_compare('1.8.0', 'ge')

+ 1 - 1
roles/openshift_manage_node/tasks/main.yml

@@ -3,7 +3,7 @@
       {{ openshift.common.client_binary }} get node {{ item }}
   register: omd_get_node
   until: omd_get_node.rc == 0
-  retries: 10
+  retries: 20
   delay: 5
   with_items: openshift_nodes
 

+ 10 - 1
roles/openshift_master/tasks/main.yml

@@ -8,6 +8,15 @@
     - openshift_master_oauth_grant_method in openshift_master_valid_grant_methods
   when: openshift_master_oauth_grant_method is defined
 
+- name: Displaying openshift_master_ha
+  debug: var=openshift_master_ha
+
+- name: openshift_master_cluster_password
+  debug: var=openshift_master_cluster_password
+
+- name: openshift.master.cluster_defer_ha
+  debug: var=openshift.master.cluster_defer_ha
+
 - fail:
     msg: "openshift_master_cluster_password must be set for multi-master installations"
   when: openshift_master_ha | bool and not openshift.master.cluster_defer_ha | bool and openshift_master_cluster_password is not defined
@@ -23,7 +32,7 @@
       api_port: "{{ openshift_master_api_port | default(None) }}"
       api_url: "{{ openshift_master_api_url | default(None) }}"
       api_use_ssl: "{{ openshift_master_api_use_ssl | default(None) }}"
-      public_api_url: "{{ openshift_master_public_api_url | default(None) }}"
+      public_api_url: "{{ openshift_master_public_api_url | default('https://' ~ openshift.common.public_ip ~ ':8443') }}"
       console_path: "{{ openshift_master_console_path | default(None) }}"
       console_port: "{{ openshift_master_console_port | default(None) }}"
       console_url: "{{ openshift_master_console_url | default(None) }}"

+ 1 - 1
roles/openshift_master/templates/master.yaml.v1.j2

@@ -46,7 +46,7 @@ etcdConfig:
     certFile: etcd.server.crt
     clientCA: ca.crt
     keyFile: etcd.server.key
-  storageDirectory: {{ openshift_data_dir }}/openshift.local.etcd
+  storageDirectory: {{ openshift.common.data_dir }}/openshift.local.etcd
 {% endif %}
 etcdStorageConfig:
   kubernetesStoragePrefix: kubernetes.io

+ 1 - 1
roles/openshift_master/vars/main.yml

@@ -3,7 +3,7 @@ openshift_master_config_dir: "{{ openshift.common.config_base }}/master"
 openshift_master_config_file: "{{ openshift_master_config_dir }}/master-config.yaml"
 openshift_master_scheduler_conf: "{{ openshift_master_config_dir }}/scheduler.json"
 openshift_master_policy: "{{ openshift_master_config_dir }}/policy.json"
-openshift_version: "{{ openshift_version | default('') }}"
+openshift_version: "{{ openshift_pkg_version | default('') }}"
 
 openshift_master_valid_grant_methods:
 - auto

+ 1 - 1
roles/openshift_master_ca/vars/main.yml

@@ -3,4 +3,4 @@ openshift_master_config_dir: "{{ openshift.common.config_base }}/master"
 openshift_master_ca_cert: "{{ openshift_master_config_dir }}/ca.crt"
 openshift_master_ca_key: "{{ openshift_master_config_dir }}/ca.key"
 openshift_master_ca_serial: "{{ openshift_master_config_dir }}/ca.serial.txt"
-openshift_version: "{{ openshift_version | default('') }}"
+openshift_version: "{{ openshift_pkg_version | default('') }}"

+ 7 - 1
roles/openshift_node/tasks/main.yml

@@ -22,7 +22,7 @@
       deployment_type: "{{ openshift_deployment_type }}"
   - role: node
     local_facts:
-      labels: "{{ openshift_node_labels | default(none) }}"
+      labels: "{{ lookup('oo_option', 'openshift_node_labels') | default( openshift_node_labels | default() ) }}"
       annotations: "{{ openshift_node_annotations | default(none) }}"
       registry_url: "{{ oreg_url | default(none) }}"
       debug_level: "{{ openshift_node_debug_level | default(openshift.common.debug_level) }}"
@@ -72,6 +72,12 @@
     dest: /etc/sysconfig/docker
     regexp: '^OPTIONS=.*$'
     line: "OPTIONS='--insecure-registry={{ openshift.node.portal_net }} \
+--insecure-registry=dockerhub.rnd.amadeus.net:5000 \
+--insecure-registry=dockerhub.rnd.amadeus.net:5001 \
+--insecure-registry=dockerhub.rnd.amadeus.net:5002 \
+--add-registry=dockerhub.rnd.amadeus.net:5000 \
+--add-registry=dockerhub.rnd.amadeus.net:5001 \
+--add-registry=dockerhub.rnd.amadeus.net:5002 \
 {% if ansible_selinux and ansible_selinux.status == '''enabled''' %}--selinux-enabled{% endif %}'"
   when: docker_check.stat.isreg
   notify:

+ 1 - 1
roles/openshift_node/templates/node.yaml.v1.j2

@@ -25,5 +25,5 @@ servingInfo:
   certFile: server.crt
   clientCA: ca.crt
   keyFile: server.key
-volumeDirectory: {{ openshift_data_dir }}/openshift.local.volumes
+volumeDirectory: {{ openshift.common.data_dir }}/openshift.local.volumes
 {% include 'partials/kubeletArguments.j2' %}

+ 1 - 1
roles/openshift_node/vars/main.yml

@@ -1,4 +1,4 @@
 ---
 openshift_node_config_dir: "{{ openshift.common.config_base }}/node"
 openshift_node_config_file: "{{ openshift_node_config_dir }}/node-config.yaml"
-openshift_version: "{{ openshift_version | default('') }}"
+openshift_version: "{{ openshift_pkg_version | default('') }}"

+ 6 - 6
roles/os_zabbix/vars/template_docker.yml

@@ -52,35 +52,35 @@ g_template_docker:
     - Docker Storage
     value_type: float
   ztriggers:
-  - description: 'docker.ping failed on {HOST.NAME}'
+  - name: 'docker.ping failed on {HOST.NAME}'
     expression: '{Template Docker:docker.ping.max(#3)}<1'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_docker_ping.asciidoc'
     priority: high
 
-  - description: 'Docker storage is using LOOPBACK on {HOST.NAME}'
+  - name: 'Docker storage is using LOOPBACK on {HOST.NAME}'
     expression: '{Template Docker:docker.storage.is_loopback.last()}<>0'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_docker_loopback.asciidoc'
     priority: high
 
-  - description: 'Critically low docker storage data space on {HOST.NAME}'
+  - name: 'Critically low docker storage data space on {HOST.NAME}'
     expression: '{Template Docker:docker.storage.data.space.percent_available.max(#3)}<5 or {Template Docker:docker.storage.data.space.available.max(#3)}<5' # < 5% or < 5GB
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_docker_storage.asciidoc'
     priority: high
 
-  - description: 'Critically low docker storage metadata space on {HOST.NAME}'
+  - name: 'Critically low docker storage metadata space on {HOST.NAME}'
     expression: '{Template Docker:docker.storage.metadata.space.percent_available.max(#3)}<5 or {Template Docker:docker.storage.metadata.space.available.max(#3)}<0.005' # < 5% or < 5MB
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_docker_storage.asciidoc'
     priority: high
 
   # Put triggers that depend on other triggers here (deps must be created first)
-  - description: 'Low docker storage data space on {HOST.NAME}'
+  - name: 'Low docker storage data space on {HOST.NAME}'
     expression: '{Template Docker:docker.storage.data.space.percent_available.max(#3)}<10 or {Template Docker:docker.storage.data.space.available.max(#3)}<10' # < 10% or < 10GB
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_docker_storage.asciidoc'
     dependencies:
     - 'Critically low docker storage data space on {HOST.NAME}'
     priority: average
 
-  - description: 'Low docker storage metadata space on {HOST.NAME}'
+  - name: 'Low docker storage metadata space on {HOST.NAME}'
     expression: '{Template Docker:docker.storage.metadata.space.percent_available.max(#3)}<10 or {Template Docker:docker.storage.metadata.space.available.max(#3)}<0.01' # < 10% or < 10MB
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_docker_storage.asciidoc'
     dependencies:

+ 1 - 1
roles/os_zabbix/vars/template_heartbeat.yml

@@ -7,7 +7,7 @@ g_template_heartbeat:
     - Heartbeat
     key: heartbeat.ping
   ztriggers:
-  - description: 'Heartbeat.ping has failed on {HOST.NAME}'
+  - name: 'Heartbeat.ping has failed on {HOST.NAME}'
     expression: '{Template Heartbeat:heartbeat.ping.nodata(20m)}=1'
     priority: avg
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_node_heartbeat.asciidoc'

+ 1 - 1
roles/os_zabbix/vars/template_openshift_master.yml

@@ -7,7 +7,7 @@ g_template_openshift_master:
     - Openshift Master
     key: create_app
   ztriggers:
-  - description: 'Application creation has failed on {HOST.NAME}'
+  - name: 'Application creation has failed on {HOST.NAME}'
     expression: '{Template Openshift Master:create_app.last(#1)}=1 and {Template Openshift Master:create_app.last(#2)}=1'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_create_app.asciidoc'
     priority: avg

+ 77 - 47
roles/os_zabbix/vars/template_os_linux.yml

@@ -52,106 +52,135 @@ g_template_os_linux:
     - Kernel
     value_type: float
 
-  - key: mem.freemem
+  - key: kernel.all.cpu.nice
     applications:
-    - Memory
+    - Kernel
     value_type: int
 
-  - key: kernel.all.cpu.nice
+  - key: kernel.all.load.1_minute
     applications:
     - Kernel
-    value_type: int
+    value_type: float
 
-  - key: mem.util.bufmem
+  - key: kernel.uname.version
     applications:
-    - Memory
-    value_type: int
+    - Kernel
+    value_type: string
 
-  - key: swap.used
+  - key: kernel.all.uptime
     applications:
-    - Memory
+    - Kernel
     value_type: int
 
-  - key: kernel.all.load.1_minute
+  - key: kernel.all.cpu.user
     applications:
     - Kernel
-    value_type: float
+    value_type: int
 
-  - key: kernel.uname.version
+  - key: kernel.uname.machine
     applications:
     - Kernel
     value_type: string
 
-  - key: swap.length
+  - key: hinv.ncpu
     applications:
-    - Memory
+    - Kernel
     value_type: int
 
-  - key: mem.physmem
+  - key: kernel.all.cpu.steal
     applications:
-    - Memory
+    - Kernel
     value_type: int
 
-  - key: kernel.all.uptime
+  - key: kernel.all.pswitch
     applications:
     - Kernel
     value_type: int
 
-  - key: swap.free
+  - key: kernel.uname.release
     applications:
-    - Memory
-    value_type: int
+    - Kernel
+    value_type: string
 
-  - key: mem.util.available
+  - key: proc.nprocs
     applications:
-    - Memory
+    - Kernel
     value_type: int
 
-  - key: mem.util.used
+  # Memory Items
+  - key: mem.freemem
     applications:
     - Memory
     value_type: int
+    description: "PCP: free system memory metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: kernel.all.cpu.user
+  - key: mem.util.bufmem
     applications:
-    - Kernel
+    - Memory
     value_type: int
+    description: "PCP: Memory allocated for buffer_heads.; I/O buffers metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: kernel.uname.machine
+  - key: swap.used
     applications:
-    - Kernel
-    value_type: string
+    - Memory
+    value_type: int
+    description: "PCP: swap used metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: hinv.ncpu
+  - key: swap.length
     applications:
-    - Kernel
+    - Memory
     value_type: int
+    description: "PCP: total swap available metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: mem.util.cached
+  - key: mem.physmem
     applications:
     - Memory
     value_type: int
+    description: "PCP: The value of this metric corresponds to the \"MemTotal\" field reported by /proc/meminfo. Note that this does not necessarily correspond to actual installed physical memory - there may be areas of the physical address space mapped as ROM in various peripheral devices and the bios may be mirroring certain ROMs in RAM."
+    multiplier: 1024
+    units: B
 
-  - key: kernel.all.cpu.steal
+  - key: swap.free
     applications:
-    - Kernel
+    - Memory
     value_type: int
+    description: "PCP: swap free metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: kernel.all.pswitch
+  - key: mem.util.available
     applications:
-    - Kernel
+    - Memory
     value_type: int
+    description: "PCP: The amount of memory that is available for a new workload, without pushing the system into swap. Estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the \"low\" watermarks from /proc/zoneinfo.; available memory from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: kernel.uname.release
+  - key: mem.util.used
     applications:
-    - Kernel
-    value_type: string
+    - Memory
+    value_type: int
+    description: "PCP: Used memory is the difference between mem.physmem and mem.freemem; used memory metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
-  - key: proc.nprocs
+  - key: mem.util.cached
     applications:
-    - Kernel
+    - Memory
     value_type: int
+    description: "PCP: Memory used by the page cache, including buffered file data.  This is in-memory cache for files read from the disk (the pagecache) but doesn't include SwapCached.; page cache metric from /proc/meminfo"
+    multiplier: 1024
+    units: B
 
+  # Disk items
   - key: filesys.full.xvda2
     applications:
     - Disk
@@ -163,32 +192,33 @@ g_template_os_linux:
     value_type: float
 
   ztriggers:
-  - description: 'Filesystem: / has less than 10% free on {HOST.NAME}'
+  - name: 'Filesystem: / has less than 10% free on {HOST.NAME}'
     expression: '{Template OS Linux:filesys.full.xvda2.last()}>90'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_filesys_full.asciidoc'
     priority: warn
 
-  - description: 'Filesystem: / has less than 5% free on {HOST.NAME}'
+  - name: 'Filesystem: / has less than 5% free on {HOST.NAME}'
     expression: '{Template OS Linux:filesys.full.xvda2.last()}>95'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_filesys_full.asciidoc'
     priority: high
 
-  - description: 'Filesystem: /var has less than 10% free on {HOST.NAME}'
+  - name: 'Filesystem: /var has less than 10% free on {HOST.NAME}'
     expression: '{Template OS Linux:filesys.full.xvda3.last()}>90'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_filesys_full.asciidoc'
     priority: warn
 
-  - description: 'Filesystem: /var has less than 5% free on {HOST.NAME}'
+  - name: 'Filesystem: /var has less than 5% free on {HOST.NAME}'
     expression: '{Template OS Linux:filesys.full.xvda3.last()}>95'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_filesys_full.asciidoc'
     priority: high
 
-  - description: 'Too many TOTAL processes on {HOST.NAME}'
+  - name: 'Too many TOTAL processes on {HOST.NAME}'
     expression: '{Template OS Linux:proc.nprocs.last()}>5000'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_proc.asciidoc'
     priority: warn
 
-  - description: 'Lack of available memory on {HOST.NAME}'
-    expression: '{Template OS Linux:mem.freemem.last()}<3000'
+  - name: 'Lack of available memory on {HOST.NAME}'
+    expression: '{Template OS Linux:mem.freemem.last()}<30720000'
     url: 'https://github.com/openshift/ops-sop/blob/master/V3/Alerts/check_memory.asciidoc'
     priority: warn
+    description: 'Alert on less than 30MegaBytes.  This is 30 Million Bytes.  30000 KB x 1024'