
Merge pull request #9502 from vrutkovs/okd-docs

Update documentation links, docs.openshift.org -> docs.okd.io
Merged by: OpenShift Merge Robot
Commit: 98f7d90673

+ 1 - 1
README.md

@@ -154,7 +154,7 @@ created for you automatically.
 ## Complete Production Installation Documentation:
 
 - [OpenShift Enterprise](https://docs.openshift.com/enterprise/latest/install_config/install/advanced_install.html)
-- [OpenShift Origin](https://docs.openshift.org/latest/install/index.html)
+- [OpenShift Origin](https://docs.okd.io/latest/install/index.html)
 
 ## Containerized OpenShift Ansible
 

+ 1 - 1
README_CONTAINER_IMAGE.md

File diff suppressed because it is too large

+ 4 - 4
examples/README.md

@@ -2,7 +2,7 @@
 
 The primary use of `openshift-ansible` is to install, configure and upgrade OpenShift clusters.
 
-This is typically done by direct invocation of Ansible tools like `ansible-playbook`. This use case is covered in detail in the [OpenShift advanced installation documentation](https://docs.openshift.org/latest/install_config/install/advanced_install.html)
+This is typically done by direct invocation of Ansible tools like `ansible-playbook`. This use case is covered in detail in the [OpenShift advanced installation documentation](https://docs.okd.io/latest/install_config/install/advanced_install.html)
 
 For OpenShift Container Platform there's also an installation utility that wraps `openshift-ansible`. This usage case is covered in the [Quick Installation](https://docs.openshift.com/container-platform/latest/install_config/install/quick_install.html) section of the documentation.
 
@@ -16,11 +16,11 @@ You can find more details about the certificate expiration check roles and examp
 
 ### Job to upload certificate expiration reports
 
-The example `Job` in [certificate-check-upload.yaml](certificate-check-upload.yaml) executes a [Job](https://docs.openshift.org/latest/dev_guide/jobs.html) that checks the expiration dates of the internal certificates of the cluster and uploads HTML and JSON reports to `/etc/origin/certificate_expiration_report` in the masters.
+The example `Job` in [certificate-check-upload.yaml](certificate-check-upload.yaml) executes a [Job](https://docs.okd.io/latest/dev_guide/jobs.html) that checks the expiration dates of the internal certificates of the cluster and uploads HTML and JSON reports to `/etc/origin/certificate_expiration_report` in the masters.
 
 This example uses the [`easy-mode-upload.yaml`](../playbooks/openshift-checks/certificate_expiry/easy-mode-upload.yaml) example playbook, which generates reports and uploads them to the masters. The playbook can be customized via environment variables to control the length of the warning period (`CERT_EXPIRY_WARN_DAYS`) and the location in the masters where the reports are uploaded (`COPY_TO_PATH`).
 
-The job expects the inventory to be provided via the *hosts* key of a [ConfigMap](https://docs.openshift.org/latest/dev_guide/configmaps.html) named *inventory*, and the passwordless ssh key that allows connecting to the hosts to be available as *ssh-privatekey* from a [Secret](https://docs.openshift.org/latest/dev_guide/secrets.html) named *sshkey*, so these are created first:
+The job expects the inventory to be provided via the *hosts* key of a [ConfigMap](https://docs.okd.io/latest/dev_guide/configmaps.html) named *inventory*, and the passwordless ssh key that allows connecting to the hosts to be available as *ssh-privatekey* from a [Secret](https://docs.okd.io/latest/dev_guide/secrets.html) named *sshkey*, so these are created first:
 
     oc new-project certcheck
     oc create configmap inventory --from-file=hosts=/etc/ansible/hosts
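For reference, a minimal sketch of the two objects those commands create, expressed as manifests; the key names *hosts* and *ssh-privatekey* come from the description above, and the embedded contents are placeholders:

```yaml
# Hypothetical manifests equivalent to the oc commands above (contents are placeholders).
apiVersion: v1
kind: ConfigMap
metadata:
  name: inventory
  namespace: certcheck
data:
  hosts: |
    # paste the contents of /etc/ansible/hosts here
---
apiVersion: v1
kind: Secret
metadata:
  name: sshkey
  namespace: certcheck
type: Opaque
stringData:
  ssh-privatekey: |
    # paste the passwordless SSH private key used to reach the cluster hosts
```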
@@ -57,7 +57,7 @@ There are two additional examples:
  - A `Job` [certificate-check-volume.yaml](certificate-check-volume.yaml)
  - A `CronJob` [scheduled-certcheck-upload.yaml](scheduled-certcheck-upload.yaml)
 
-These perform the same work as the two examples above, but instead of uploading the generated reports to the masters they store them in a custom path within the container that is expected to be backed by a [PersistentVolumeClaim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html), so that the reports are actually written to storage external to the container.
+These perform the same work as the two examples above, but instead of uploading the generated reports to the masters they store them in a custom path within the container that is expected to be backed by a [PersistentVolumeClaim](https://docs.okd.io/latest/dev_guide/persistent_volumes.html), so that the reports are actually written to storage external to the container.
 
 These examples assume that there is an existing `PersistentVolumeClaim` called `certcheck-reports` and they use the  [`html_and_json_timestamp.yaml`](../playbooks/openshift-checks/certificate_expiry/html_and_json_timestamp.yaml) example playbook to write timestamped reports into it.
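A minimal sketch of the `certcheck-reports` claim these examples assume already exists; the size and access mode here are illustrative assumptions, not values taken from the repository:

```yaml
# Hypothetical PersistentVolumeClaim backing the external report storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certcheck-reports
  namespace: certcheck
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```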
 

+ 6 - 6
inventory/hosts.example

@@ -89,17 +89,17 @@ debug_level=2
 #openshift_install_examples=true
 
 # Configure logoutURL in the master config for console customization
-# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#changing-the-logout-url
+# See: https://docs.okd.io/latest/install_config/web_console_customization.html#changing-the-logout-url
 #openshift_master_logout_url=http://example.com
 
 # Configure extensions in the master config for console customization
-# See: https://docs.openshift.org/latest/install_config/web_console_customization.html#serving-static-files
+# See: https://docs.okd.io/latest/install_config/web_console_customization.html#serving-static-files
 #openshift_master_oauth_templates={'login': '/path/to/login-template.html'}
 # openshift_master_oauth_template is deprecated.  Use openshift_master_oauth_templates instead.
 #openshift_master_oauth_template=/path/to/login-template.html
 
 # Configure imagePolicyConfig in the master config
-# See: https://docs.openshift.org/latest/admin_guide/image_policy.html
+# See: https://docs.okd.io/latest/admin_guide/image_policy.html
 #openshift_master_image_policy_config={"maxImagesBulkImportedPerRepository": 3, "disableScheduledImport": true}
 # This setting overrides allowedRegistriesForImport in openshift_master_image_policy_config. By default, all registries are allowed.
 #openshift_master_image_policy_allowed_registries_for_import=["docker.io", "*.docker.io", "*.redhat.com", "gcr.io", "quay.io", "registry.centos.org", "registry.redhat.io", "*.amazonaws.com"]
@@ -777,7 +777,7 @@ debug_level=2
 
 # Configure custom named certificates (SNI certificates)
 #
-# https://docs.openshift.org/latest/install_config/certificate_customization.html
+# https://docs.okd.io/latest/install_config/certificate_customization.html
 # https://docs.openshift.com/enterprise/latest/install_config/certificate_customization.html
 #
 # NOTE: openshift_master_named_certificates is cached on masters and is an
@@ -873,7 +873,7 @@ debug_level=2
 # configuration into Builds. Proxy related values will default to the global proxy
 # config values. You only need to set these if they differ from the global proxy settings.
 # See BuildDefaults documentation at
-# https://docs.openshift.org/latest/admin_guide/build_defaults_overrides.html
+# https://docs.okd.io/latest/admin_guide/build_defaults_overrides.html
 #openshift_builddefaults_http_proxy=http://USER:PASSWORD@HOST:PORT
 #openshift_builddefaults_https_proxy=https://USER:PASSWORD@HOST:PORT
 #openshift_builddefaults_no_proxy=mycorp.com
@@ -894,7 +894,7 @@ debug_level=2
 # These options configure the BuildOverrides admission controller which injects
 # configuration into Builds.
 # See BuildOverrides documentation at
-# https://docs.openshift.org/latest/admin_guide/build_defaults_overrides.html
+# https://docs.okd.io/latest/admin_guide/build_defaults_overrides.html
 #openshift_buildoverrides_force_pull=true
 #openshift_buildoverrides_image_labels=[{'name':'imagelabelname1','value':'imagelabelvalue1'}]
 #openshift_buildoverrides_nodeselectors={'nodelabel1':'nodelabelvalue1'}
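The commented INI lines above can also be expressed as group variables; a hedged sketch in `group_vars` YAML form, reusing only the variable names and example values already shown (the file path is an assumption):

```yaml
# Hypothetical inventory/group_vars/OSEv3.yml sketch; values are placeholders.
openshift_builddefaults_http_proxy: "http://USER:PASSWORD@HOST:PORT"
openshift_builddefaults_https_proxy: "https://USER:PASSWORD@HOST:PORT"
openshift_builddefaults_no_proxy: mycorp.com
openshift_buildoverrides_force_pull: true
openshift_buildoverrides_image_labels:
- name: imagelabelname1
  value: imagelabelvalue1
openshift_buildoverrides_nodeselectors:
  nodelabel1: nodelabelvalue1
```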

+ 1 - 1
playbooks/byo/README.md

@@ -7,5 +7,5 @@ clusters.
 Usage is documented in the official OpenShift documentation pages, under the
 Advanced Installation topic:
 
-- [OpenShift Origin: Advanced Installation](https://docs.openshift.org/latest/install_config/install/advanced_install.html)
+- [OpenShift Origin: Advanced Installation](https://docs.okd.io/latest/install_config/install/advanced_install.html)
 - [OpenShift Container Platform: Advanced Installation](https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html)

+ 1 - 1
playbooks/common/openshift-cluster/upgrades/v3_11/upgrade_control_plane.yml

@@ -63,7 +63,7 @@
         "{{ openshift_client_binary }} adm policy --config={{ openshift.common.config_base }}/master/admin.kubeconfig reconcile-sccs --additive-only=true"
         After reviewing the changes please apply those changes by adding the '--confirm' flag.
         Do not modify the default SCCs. Customizing the default SCCs will cause this check to fail when upgrading.
-        If you require non standard SCCs please refer to https://docs.openshift.org/latest/admin_guide/manage_scc.html
+        If you require non standard SCCs please refer to https://docs.okd.io/latest/admin_guide/manage_scc.html
     when:
     - openshift_reconcile_sccs_reject_change | default(true) | bool
     - check_reconcile_scc_result.stdout != '' or check_reconcile_scc_result.rc != 0

+ 2 - 2
playbooks/init/validate_hostnames.yml

@@ -21,7 +21,7 @@
         Inventory setting: openshift_hostname={{ openshift_hostname | default ('undefined') }}
         This check can be overridden by setting openshift_hostname_check=false in
         the inventory.
-        See https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-host-variables
+        See https://docs.okd.io/latest/install_config/install/advanced_install.html#configuring-host-variables
     when:
     - lookupip.stdout != '127.0.0.1'
     - lookupip.stdout not in ansible_all_ipv4_addresses
@@ -36,7 +36,7 @@
         Inventory setting: openshift_ip={{ openshift_ip }}
         This check can be overridden by setting openshift_ip_check=false in
         the inventory.
-        See https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-host-variables
+        See https://docs.okd.io/latest/install_config/install/advanced_install.html#configuring-host-variables
     when:
     - openshift_ip is defined
     - openshift_ip not in ansible_all_ipv4_addresses
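Both messages point to the same pair of overrides; a hedged sketch of what disabling these checks could look like in group variables (variable names taken verbatim from the messages above):

```yaml
# Hypothetical group_vars sketch: skip the hostname/IP resolvability checks.
openshift_hostname_check: false
openshift_ip_check: false
```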

+ 23 - 1
playbooks/openshift-master/private/upgrade.yml

@@ -3,6 +3,29 @@
 # Upgrade Masters
 ###############################################################################
 
+# Some changes can cause a critical outage on the current cluster.
+- name: Confirm upgrade will not make critical changes
+  hosts: oo_first_master
+  tasks:
+  - name: Confirm Reconcile Security Context Constraints will not change current SCCs
+    command: >
+      {{ openshift_client_binary }} adm policy --config={{ openshift.common.config_base }}/master/admin.kubeconfig reconcile-sccs --additive-only=true -o name
+    register: check_reconcile_scc_result
+    when: openshift_reconcile_sccs_reject_change | default(true) | bool
+    until: check_reconcile_scc_result.rc == 0
+    retries: 3
+
+  - fail:
+      msg: >
+        Changes to bootstrapped SCCs have been detected. Please review the changes by running
+        "{{ openshift_client_binary }} adm policy --config={{ openshift.common.config_base }}/master/admin.kubeconfig reconcile-sccs --additive-only=true"
+        After reviewing the changes please apply those changes by adding the '--confirm' flag.
+        Do not modify the default SCCs. Customizing the default SCCs will cause this check to fail when upgrading.
+        If you require non standard SCCs please refer to https://docs.okd.io/latest/admin_guide/manage_scc.html
+    when:
+    - openshift_reconcile_sccs_reject_change | default(true) | bool
+    - check_reconcile_scc_result.stdout != '' or check_reconcile_scc_result.rc != 0
+
 # Create service signer cert when missing. Service signer certificate
 # is added to master config in the master_config_upgrade hook.
 - name: Determine if service signer cert must be created
@@ -232,4 +255,3 @@
       tasks_from: config.yml
     vars:
       openshift_master_host: "{{ groups.oo_first_master.0 }}"
-      openshift_manage_node_is_master: true
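If a cluster deliberately carries customized SCCs, the pre-check added above can be disabled; a hedged sketch of the override implied by the `default(true)` filter in that play (group_vars form is an assumption):

```yaml
# Hypothetical group_vars sketch: accept SCC differences instead of failing the upgrade.
openshift_reconcile_sccs_reject_change: false
```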

+ 2 - 2
playbooks/openstack/README.md

@@ -230,8 +230,8 @@ $ ansible-playbook --user openshift \
 [devstack]: https://docs.openstack.org/devstack/
 [tripleo]: http://tripleo.org/
 [packstack]: https://www.rdoproject.org/install/packstack/
-[configure-authentication]: https://docs.openshift.org/latest/install_config/configuring_authentication.html
-[hardware-requirements]: https://docs.openshift.org/latest/install_config/install/prerequisites.html#hardware
+[configure-authentication]: https://docs.okd.io/latest/install_config/configuring_authentication.html
+[hardware-requirements]: https://docs.okd.io/latest/install_config/install/prerequisites.html#hardware
 [origin]: https://www.openshift.org/
 [centos7]: https://www.centos.org/
 [sample-openshift-inventory]: https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example

+ 1 - 1
playbooks/openstack/configuration.md

@@ -120,7 +120,7 @@ https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_cloud
 
 For more information, consult the [Configuring for OpenStack page in the OpenShift documentation][openstack-credentials].
 
-[openstack-credentials]: https://docs.openshift.org/latest/install_config/configuring_openstack.html#install-config-configuring-openstack
+[openstack-credentials]: https://docs.okd.io/latest/install_config/configuring_openstack.html#install-config-configuring-openstack
 
 If you would like to use additional parameters, create a custom cloud provider
 configuration file locally and specify it in `inventory/group_vars/OSEv3.yml`:

+ 1 - 1
roles/openshift_health_checker/openshift_checks/disk_availability.py

@@ -13,7 +13,7 @@ class DiskAvailability(OpenShiftCheck):
     tags = ["preflight"]
 
     # Values taken from the official installation documentation:
-    # https://docs.openshift.org/latest/install_config/install/prerequisites.html#system-requirements
+    # https://docs.okd.io/latest/install_config/install/prerequisites.html#system-requirements
     recommended_disk_space_bytes = {
         '/var': {
             'oo_masters_to_config': 40 * 10**9,

+ 1 - 1
roles/openshift_health_checker/openshift_checks/memory_availability.py

@@ -12,7 +12,7 @@ class MemoryAvailability(OpenShiftCheck):
     tags = ["preflight"]
 
     # Values taken from the official installation documentation:
-    # https://docs.openshift.org/latest/install_config/install/prerequisites.html#system-requirements
+    # https://docs.okd.io/latest/install_config/install/prerequisites.html#system-requirements
     recommended_memory_bytes = {
         "oo_masters_to_config": 16 * GIB,
         "oo_nodes_to_config": 8 * GIB,

+ 2 - 2
roles/openshift_logging/README.md

@@ -10,7 +10,7 @@ This role requires that the control host it is run on has Java installed as part
 generation for Elasticsearch (it uses JKS) as well as openssl to sign certificates.
 
 As part of the installation, it is recommended that you add the Fluentd node selector label
-to the list of persisted [node labels](https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-node-host-labels).
+to the list of persisted [node labels](https://docs.okd.io/latest/install_config/install/advanced_install.html#configuring-node-host-labels).
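A hedged sketch of persisting such a label through the inventory; both the `openshift_node_labels` variable shape and the `logging-infra-fluentd` label name are assumptions here, not values confirmed by this diff:

```yaml
# Hypothetical sketch: persist the Fluentd node selector label on the relevant nodes.
openshift_node_labels:
  logging-infra-fluentd: "true"
```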
 
 ### Required vars:
 
@@ -232,7 +232,7 @@ whether to setup `in_tail` plugin to parse cri-o formatted logs in
 
 Image update procedure
 ----------------------
-An upgrade of the logging stack from an older version to a newer one is an automated process and should be performed by calling the appropriate Ansible playbook and setting the required Ansible variables in your inventory, as documented in https://docs.openshift.org/.
+An upgrade of the logging stack from an older version to a newer one is an automated process and should be performed by calling the appropriate Ansible playbook and setting the required Ansible variables in your inventory, as documented in https://docs.okd.io/.
 
 The following text describes a manual update of the logging images without a version upgrade. To determine the current version of the images being used, you can:
 ```

+ 1 - 1
roles/openshift_metrics/README.md

@@ -105,7 +105,7 @@ Jose David Martín (j.david.nieto@gmail.com)
 
 Image update procedure
 ----------------------
-An upgrade of the metrics stack from an older version to a newer one is an automated process and should be performed by calling the appropriate Ansible playbook and setting the required Ansible variables in your inventory, as documented in https://docs.openshift.org/.
+An upgrade of the metrics stack from an older version to a newer one is an automated process and should be performed by calling the appropriate Ansible playbook and setting the required Ansible variables in your inventory, as documented in https://docs.okd.io/.
 
 The following text describes a manual update of the metrics images without a version upgrade. To determine the current version of the images being used, you can:
 ```