
Add proper 4.0 README steps

Michael Gugino 6 years ago
commit 598ec8e2bb
1 changed file with 58 additions and 66 deletions
README.md

@@ -9,7 +9,11 @@ Master branch is closed! A major refactor is ongoing in devel-40.
 Changes for 3.x should be made directly to the latest release branch they're
 relevant to and backported from there.
 
+WARNING
+=======
 
+This branch is under heavy development.  If you are interested in deploying a
+working cluster, please use a release branch.
 
 # OpenShift Ansible
 
@@ -17,13 +21,6 @@ This repository contains [Ansible](https://www.ansible.com/) roles and
 playbooks to install, upgrade, and manage
 [OpenShift](https://www.openshift.com/) clusters.
 
-**Note**: the Ansible playbooks in this repository require an RPM
-package that provides `docker`. Currently, the RPMs from
-[dockerproject.org](https://dockerproject.org/) do not provide this
-requirement, though they may in the future. This limitation is being
-tracked by
-[#2720](https://github.com/openshift/openshift-ansible/issues/2720).
-
 ## Getting the correct version
 When choosing an openshift release, ensure that the necessary origin packages
 are available in your distribution's repository.  By default, openshift-ansible
@@ -83,17 +80,6 @@ Fedora:
 dnf install -y ansible pyOpenSSL python-cryptography python-lxml
 ```
 
-Additional requirements:
-
-Logging:
-
-- java-1.8.0-openjdk-headless
-- patch
-
-Metrics:
-
-- httpd-tools
-
 ## Simple all-in-one localhost Installation
 This assumes that you've installed the base dependencies and you're running on
 Fedora or RHEL
@@ -103,62 +89,68 @@ cd openshift-ansible
 sudo ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
 sudo ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml
 ```
-## Node Group Definition and Mapping
-In 3.10 and newer all members of the [nodes] inventory group must be assigned an
-`openshift_node_group_name`. This value is used to select the configmap that
-configures each node. By default there are three configmaps created; one for
-each node group defined in `openshift_node_groups` and they're named
-`node-config-master` `node-config-infra` `node-config-compute`. It's important
-to note that the configmap is also the authoritative definition of node labels,
-the old `openshift_node_labels` value is effectively ignored.
-
-There are also two configmaps that label nodes into multiple roles, these are
-not recommended for production clusters, however they're named
-`node-config-all-in-one` and `node-config-master-infra` if you'd like to use
-them to deploy non production clusters.
-
-The default set of node groups is defined in
-[roles/openshift_facts/defaults/main.yml] like so
 
+# Quickstart
+
+Install the new installer from https://www.github.com/openshift/installer
+
+Construct a proper install-config.yml, and make a copy called
+install-config-ansible.yml.
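+
+For example, from the directory containing the file:
+
+```sh
+cp install-config.yml install-config-ansible.yml
+```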
+
+## DNS
+4.x installs require specific DNS records to be in place, and there is no way
+to complete an install without working DNS.  You are responsible for ensuring
+that the following DNS records are resolvable from your cluster; the
+openshift-ansible installer will not attempt to create any of them for you.
+
+First, the output of `hostname` on each host must be resolvable by the other
+hosts; the nodes will communicate with each other based on this value.
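+
+A quick sanity check from any other node, assuming a peer whose `hostname`
+output is mycluster-master-0.example.com:
+
+```sh
+getent hosts mycluster-master-0.example.com   # should print that peer's IP
+```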
+
+The install-config.yml value of 'baseDomain' must be a working domain.
+
+### A records
+```sh
+<clustername>-api.<baseDomain> # ex: mycluster-api.example.com
+<clustername>-master-0.<baseDomain> # ex: mycluster-master-0.example.com
+<clustername>-etcd-0.<baseDomain> # ex: mycluster-etcd-0.example.com
 ```
-openshift_node_groups:
-  - name: node-config-master
-    labels:
-      - 'node-role.kubernetes.io/master=true'
-    edits: []
-  - name: node-config-infra
-    labels:
-      - 'node-role.kubernetes.io/infra=true'
-    edits: []
-  - name: node-config-compute
-    labels:
-      - 'node-role.kubernetes.io/compute=true'
-    edits: []
-  - name: node-config-master-infra
-    labels:
-      - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true'
-    edits: []
-  - name: node-config-all-in-one
-    labels:
-      - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true'
-    edits: []
-```
 
-When configuring this in the INI based inventory this must be translated into a
-Python dictionary. Here's an example of a group named `node-config-all-in-one`
-which is suitable for an All-In-One installation with
-kubeletArguments.pods-per-core set to 20
+Note: There should be a master/etcd record for each master host in your cluster
+(either 1 or 3).  The etcd hosts must be master hosts, and each corresponding
+master/etcd pair of records must resolve to the same host.
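+
+A quick way to verify the A records, assuming the placeholder names from above:
+
+```sh
+dig +short mycluster-api.example.com      # the API host's IP
+dig +short mycluster-master-0.example.com # master-0's IP
+dig +short mycluster-etcd-0.example.com   # must match master-0's IP
+```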
+
+### SRV records
+```sh
+SRV _etcd-client-ssl._tcp.<clustername>.<baseDomain> '1 1 2379 <clustername>-etcd-0.<baseDomain>'
+SRV _etcd-server-ssl._tcp.<clustername>.<baseDomain> '1 1 2380 <clustername>-etcd-0.<baseDomain>'
+...
+SRV _etcd-client-ssl._tcp.<clustername>.<baseDomain> '1 1 2379 <clustername>-etcd-<N-1>.<baseDomain>'
+SRV _etcd-server-ssl._tcp.<clustername>.<baseDomain> '1 1 2380 <clustername>-etcd-<N-1>.<baseDomain>'
 
+# ex: _etcd-client-ssl._tcp.mycluster.example.com '1 1 2379 mycluster-etcd-0.example.com'
 ```
-openshift_node_groups=[{'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.pods-per-core','value': ['20']}]}]
+
+Consult your DNS provider about the proper way to create SRV records.  In
+any case, there must be a client and a server SRV record for each etcd backend,
+and you MUST use the etcd FQDN you created earlier, not the master or any other
+record.
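+
+A quick check of one backend's records, using the placeholder names again:
+
+```sh
+dig +short _etcd-client-ssl._tcp.mycluster.example.com SRV
+# expect: 1 1 2379 mycluster-etcd-0.example.com.
+dig +short _etcd-server-ssl._tcp.mycluster.example.com SRV
+# expect: 1 1 2380 mycluster-etcd-0.example.com.
+```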
+
+## Inventory
+Check out inventory/40_basic_inventory.ini for an example.
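+
+A rough sketch of its general shape (hypothetical; the group names and hosts
+below are illustrative only, so defer to the example file above):
+
+```
+[masters]
+mycluster-master-0.example.com
+
+[etcd]
+mycluster-master-0.example.com
+
+[nodes]
+mycluster-master-0.example.com
+```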
+
+## Generate ignition configs
+Use the openshift-install command to generate ignition configs from the
+install-config.yml you created earlier.  This consumes the install-config.yml
+file, so make sure you copied it as described previously.
+
+```sh
+openshift-install create ignition-configs
 ```
 
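+Afterwards, the generated configs land in the working directory; with installer
+builds from this era the listing typically looks like:
+
+```sh
+ls
+# auth/  bootstrap.ign  master.ign  metadata.json  worker.ign
+```
+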
-For upgrades, the upgrade process will block until you have the required
-configmaps in the openshift-node namespace. Please define
-`openshift_node_groups` as explained above or accept the defaults and run the
-playbooks/openshift-master/openshift_node_group.yml playbook to have them
-created for you automatically.
+## Run playbook
+Run playbooks/deploy_cluster_40.yml.
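+
+For example, with the example inventory from above (run from the repository
+root):
+
+```sh
+ansible-playbook -i inventory/40_basic_inventory.ini playbooks/deploy_cluster_40.yml
+```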
 
+# Further reading
 
 ## Complete Production Installation Documentation: