When new versions of OpenShift are released, you can upgrade your existing cluster to apply the latest enhancements and bug fixes.
Unless noted otherwise, nodes and masters within a major version are forward and backward compatible, so upgrading your cluster should go smoothly. However, you should not run mismatched versions longer than necessary to upgrade the entire cluster.
You can use an automated upgrade process. Alternatively, you can upgrade OpenShift manually.
Note
This topic pertains to RPM-based installations only and does not currently cover container-based installations.
If you installed using the advanced installation method and the inventory file that was used is available, you can use the upgrade playbook to automate the upgrade process.
The automated upgrade performs the following steps for you:
-
Applies the latest configuration by re-running the installation playbook.
-
Upgrades and restarts master services.
-
Upgrades and restarts node services.
-
Applies the latest cluster policies.
-
Updates the default router if one exists.
-
Updates the default registry if one exists.
-
Updates default image streams and InstantApp templates.
Important
The automated upgrade re-runs cluster configuration steps, therefore any settings that are not stored in your inventory file will be overwritten. The upgrade process creates a backup of any files that are changed, and you should carefully review the differences after the upgrade finishes to ensure that your environment is configured as expected.
To verify the upgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME                 LABELS                                                                 STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default   Ready
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east     Ready
Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:
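For example, a quick check of the image settings in each deployment configuration; this is a hedged illustration that assumes the default names docker-registry and router in the default project:
# oc get -n default dc/docker-registry -o yaml | grep image:   # assumes the default dc name and project
# oc get -n default dc/router -o yaml | grep image:            # assumes the default dc name and project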
After upgrading, you can use the experimental diagnostics tool to look for common issues:
# openshift ex diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.
As an alternative to using the automated upgrade, you can manually upgrade your OpenShift cluster. To manually upgrade without disruption, it is important to upgrade each component as documented in this topic.
Important
Before you begin your upgrade, familiarize yourself now with the entire procedure. Specific releases may require additional steps to be performed at key points before or during the standard upgrade process.
-
Create an etcd backup on each master:
# yum install etcd
# etcdctl backup --data-dir /var/lib/openshift/openshift.local.etcd \
    --backup-dir /var/lib/openshift/openshift.local.etcd.bak
-
Remove support for the v1beta3 API. Update the /etc/openshift/master/master-config.yaml file on each master, and remove v1beta3 from the apiLevels and kubernetesMasterConfig.apiLevels parameters (see the sketch after these steps).
-
During this upgrade, some directories are renamed from openshift to origin, so create the following symlinks on each host:
# ln -s /var/lib/openshift /var/lib/origin
# ln -s /etc/openshift /etc/origin
# yum install etcd
# etcdctl backup --data-dir /var/lib/origin/openshift.local.etcd \
    --backup-dir /var/lib/origin/openshift.local.etcd.bak
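As a reference for the v1beta3 removal above, a minimal sketch of the relevant master-config.yaml stanzas after the edit, assuming v1 is the only remaining API level (your file will contain other settings between these sections):
# assumption: v1 is the only API level left after removing v1beta3
apiLevels:
- v1
...
kubernetesMasterConfig:
  apiLevels:
  - v1
...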
For any upgrade path, always ensure that you are running the latest kernel:
# yum update kernel
After a cluster upgrade, the recommended default cluster roles may have been updated. To check if an update is recommended for your environment, you can run:
# oadm policy reconcile-cluster-roles
This command outputs a list of roles that are out of date and their new proposed values. For example:
# oadm policy reconcile-cluster-roles
apiVersion: v1
items:
- apiVersion: v1
  kind: ClusterRole
  metadata:
    creationTimestamp: null
    name: admin
  rules:
  - attributeRestrictions: null
    resources:
    - builds/custom
...
Note
Your output will vary based on the OpenShift version and any local customizations you have made. Review the proposed policy carefully.
You can either modify this output to re-apply any local policy changes you have made, or you can automatically apply the new policy using the following process:
-
Reconcile the cluster roles:
# oadm policy reconcile-cluster-roles --confirm
-
Restart the master service:
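A hedged example of this restart, assuming the master runs as a systemd unit named openshift-master on an RPM-based installation (the unit name varies by release and distribution):
# systemctl restart openshift-master   # unit name is an assumption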
-
Reconcile the cluster roles again to pick up new capabilities:
# oadm policy reconcile-cluster-roles --confirm
-
Reconcile the cluster role bindings:
# oadm policy reconcile-cluster-role-bindings \
    --exclude-groups=system:authenticated \
    --exclude-groups=system:unauthenticated \
    --exclude-users=system:anonymous \
    --additive-only=true \
    --confirm
After upgrading your masters, you can upgrade your nodes. When you restart node services, there is a brief disruption of connectivity from running pods to services while the service proxy is restarted. The length of this disruption should be very short and scales based on the number of services in the entire cluster.
For each node that is not also a master, disable scheduling before you upgrade it:
# oadm manage-node <node> --schedulable=false
Enable scheduling again for any non-master nodes that you disabled:
# oadm manage-node <node> --schedulable=true
As a user with cluster-admin privileges, verify that all nodes are showing as Ready:
# oc get nodes
NAME                 LABELS                                      STATUS
master.example.com   kubernetes.io/hostname=master.example.com  Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com   Ready
node2.example.com    kubernetes.io/hostname=node2.example.com   Ready
If you have previously deployed a router, the router deployment configuration must be upgraded to apply updates contained in the router image. To upgrade your router without disrupting services, you must have previously deployed a highly-available routing service.
Edit your router’s deployment configuration. For example, if it has the default router name:
# oc edit dc/router
Apply the following changes:
...
spec:
  template:
    spec:
      containers:
      - env:
        ...
        imagePullPolicy: IfNotPresent
        ...
-
Adjust the image version to match the version you are upgrading to.
You should see one router pod updated and then the next.
The registry must also be upgraded for changes to take effect in the registry image. If you have used a PersistentVolumeClaim or a host mount point, you may restart the registry without losing the contents of your registry. The registry installation topic details how to configure persistent storage.
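To see how your registry's storage is currently configured, a quick hedged check, assuming the default deployment configuration name docker-registry in the default project:
# oc get -n default dc/docker-registry -o yaml | grep -A 5 volumes   # assumes the default dc name and project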
Edit your registry’s deployment configuration:
# oc edit dc/docker-registry
Apply the following changes:
...
spec:
  template:
    spec:
      containers:
      - env:
        ...
        imagePullPolicy: IfNotPresent
        ...
-
Adjust the image version to match the version you are upgrading to.
Important
Images that are being pushed or pulled from the internal registry at the time of upgrade will fail and should be restarted automatically. This will not disrupt pods that are already running.
Now update the global openshift project by running the following commands as a user with cluster-admin privileges. It is expected that you will receive warnings about items that already exist.
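A hedged sketch of what these commands typically look like, assuming the example image streams and templates are installed under /usr/share/openshift/examples/ (the file names and locations are assumptions and vary by release and distribution):
# oc create -n openshift -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json   # path and file name are assumptions
# oc create -n openshift -f /usr/share/openshift/examples/db-templates/                            # path is an assumption
# oc create -n openshift -f /usr/share/openshift/examples/quickstart-templates/                    # path is an assumption
Errors about items that already exist are expected and can be ignored.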
After updating the default image streams, you may also want to ensure that the images within those streams are updated. For each image stream in the default openshift project, you can run:
# oc import-image -n openshift <imagestream>
For example, get the list of all image streams in the default openshift project:
# oc get is -n openshift
NAME      DOCKER REPO                                               TAGS                   UPDATED
mongodb   registry.access.redhat.com/openshift3/mongodb-24-rhel7   2.4,latest,v3.0.0.0    16 hours ago
mysql     registry.access.redhat.com/openshift3/mysql-55-rhel7     5.5,latest,v3.0.0.0    16 hours ago
nodejs    registry.access.redhat.com/openshift3/nodejs-010-rhel7   0.10,latest,v3.0.0.0   16 hours ago
...
Update each image stream one at a time:
# oc import-image -n openshift nodejs
Waiting for the import to complete, CTRL+C to stop waiting.
The import completed successfully.

Name:             nodejs
Created:          16 hours ago
Labels:           <none>
Annotations:      openshift.io/image.dockerRepositoryCheck=2015-07-21T13:17:00Z
Docker Pull Spec: registry.access.redhat.com/openshift3/nodejs-010-rhel7

Tag        Spec       Created        PullSpec                                                           Image
0.10       latest     16 hours ago   registry.access.redhat.com/openshift3/nodejs-010-rhel7:latest     66d92cebc0e48e4e4be3a93d0f9bd54f21af7928ceaa384d20800f6e6fcf669f
latest                16 hours ago   registry.access.redhat.com/openshift3/nodejs-010-rhel7:latest     66d92cebc0e48e4e4be3a93d0f9bd54f21af7928ceaa384d20800f6e6fcf669f
v3.0.0.0   <pushed>   16 hours ago   registry.access.redhat.com/openshift3/nodejs-010-rhel7:v3.0.0.0   66d92cebc0e48e4e4be3a93d0f9bd54f21af7928ceaa384d20800f6e6fcf669f
Important
In order to update your S2I-based applications, you must manually trigger a new build of those applications after importing the new images.
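For example, a hedged illustration of triggering a rebuild for a hypothetical build configuration named myapp in a hypothetical project named myproject:
# oc start-build myapp -n myproject   # myapp and myproject are hypothetical names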
If a node is missing the IP address as part of its certificate, clients may refuse to connect to the kubelet endpoint. Usually this will result in errors regarding the certificate not containing an IP SAN.
In order to remedy this situation, you may need to manually update the certificates for your node.
The following command can be used to determine which Subject Alternative Names are present in the node’s serving certificate. In this example, the Subject Alternative Names are: mynode, mynode.mydomain.com, 1.2.3.4.
# openssl x509 -in /etc/origin/node/server.crt -text -noout | grep -A 1 "Subject Alternative Name"
X509v3 Subject Alternative Name:
    DNS:mynode, DNS:mynode.mydomain.com, IP: 1.2.3.4
Ensure that the nodeIP value set in /etc/origin/node/node-config.yaml is present in the IP values from the Subject Alternative Names listed in the node’s serving certificate. If the nodeIP is not present, then it will need to be added to the node’s certificate.
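To see the currently configured value, a quick check against the node configuration file referenced above:
# grep nodeIP /etc/origin/node/node-config.yaml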
If the nodeIP value is already contained within the Subject Alternative Names, then no further steps are required.
You will need to know the Subject Alternative Names and nodeIP value for the following steps.
If your current node certificate does not contain the proper IP address, then you will need to generate a new certificate for your node.
Important
Node certificates will be regenerated on the master (or first master) and are then copied into place on node systems.
-
Create a temporary directory in which to perform the following steps:
# mkdir /tmp/node_certificate_update
# cd /tmp/node_certificate_update
-
Export signing options:
# export signing_opts="--signer-cert=/etc/origin/master/ca.crt --signer-key=/etc/origin/master/ca.key --signer-serial=/etc/origin/master/ca.serial.txt"
-
Generate the new certificate:
# oadm ca create-server-cert --cert=server.crt \
    --key=server.key $signing_opts \
    --hostnames=<existing subject alt names>,<nodeIP>
For example, if the Subject Alternative Names from before were mynode, mynode.mydomain.com, 1.2.3.4 and the nodeIP was 10.10.10.1, then you will need to run the following command:
# oadm ca create-server-cert --cert=server.crt \
    --key=server.key $signing_opts \
    --hostnames=mynode,mynode.mydomain.com,1.2.3.4,10.10.10.1
-
Back up the existing /etc/origin/node/server.crt and /etc/origin/node/server.key files for your node:
# mv /etc/origin/node/server.crt /etc/origin/node/server.crt.bak
# mv /etc/origin/node/server.key /etc/origin/node/server.key.bak
-
Copy the new server.crt and server.key files created in the temporary directory during the previous step into place:
# mv /tmp/node_certificate_update/server.crt /etc/origin/node/server.crt
# mv /tmp/node_certificate_update/server.key /etc/origin/node/server.key
-
After you have replaced the node’s certificate, restart the node service.
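A hedged example, assuming the node runs as a systemd unit named openshift-node on an RPM-based installation (the unit name varies by release and distribution):
# systemctl restart openshift-node   # unit name is an assumption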
The following command can be used to determine which Subject Alternative Names are present in the master’s serving certificate. In this example, the Subject Alternative Names are: mymaster, mymaster.mydomain.com, 1.2.3.4.
# openssl x509 -in /etc/origin/master/master.server.crt -text -noout | grep -A 1 "Subject Alternative Name"
X509v3 Subject Alternative Name:
    DNS:mymaster, DNS:mymaster.mydomain.com, IP: 1.2.3.4
Ensure that the following entries are present in the Subject Alternative Names for the master’s serving certificate:
-
Kubernetes service IP address (for example, 172.30.0.1)
-
All master hostnames (for example, master1.example.com)
-
All master IP addresses (for example, 192.168.122.1)
-
Public master hostname in clustered environments (for example, public-master.example.com)
-
kubernetes
-
kubernetes.default
-
kubernetes.default.svc
-
kubernetes.default.svc.cluster.local
-
openshift
-
openshift.default
-
openshift.default.svc
-
openshift.default.svc.cluster.local
If these names are already contained within the Subject Alternative Names, then no further steps are required.
If your current master certificate does not contain all names from the list above, then you will need to generate a new certificate for your master.
-
Back up the existing /etc/origin/master/master.server.crt and /etc/origin/master/master.server.key files for your master:
# mv /etc/origin/master/master.server.crt /etc/origin/master/master.server.crt.bak
# mv /etc/origin/master/master.server.key /etc/origin/master/master.server.key.bak
-
Export the service names. These names will be used when generating the new certificate:
# export service_names="kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local"
-
Generate the new certificate:
You will need the first IP in the services subnet (the kubernetes service IP) as well as the values of masterIP, masterURL, and masterPublicURL contained in /etc/origin/master/master-config.yaml for the following steps.
The kubernetes service IP can be obtained with:
# oc get svc/kubernetes --template='{{.spec.clusterIP}}'
# oadm ca create-master-certs \
    --hostnames=<master hostnames>,<master IP addresses>,<kubernetes service IP>,$service_names \ (1) (2) (3)
    --master=<internal master address> \ (4)
    --public-master=<public master address> \ (5)
    --cert-dir=/etc/origin/master/ \
    --overwrite=false
-
Adjust master hostname to match your master hostname. In a clustered environment, add all master hostnames.
-
Adjust master IP address to match the value of masterIP. In a clustered environment, add all master IP addresses.
-
The first IP in the services subnet (Kubernetes service IP).
-
Adjust internal master address to match the value of masterURL.
-
Adjust public master address to match the value of masterPublicURL.
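For illustration only, a hypothetical single-master environment where the master hostname is master.example.com, masterIP is 192.168.122.1, the kubernetes service IP is 172.30.0.1, masterURL is https://master.example.com:8443, and masterPublicURL is https://public-master.example.com:8443 would run:
# oadm ca create-master-certs \
    --hostnames=master.example.com,192.168.122.1,172.30.0.1,$service_names \
    --master=https://master.example.com:8443 \
    --public-master=https://public-master.example.com:8443 \
    --cert-dir=/etc/origin/master/ \
    --overwrite=false
# (all hostnames, IPs, and URLs above are example values only)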
-
Restart master services
Single master deployments:
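A hedged example, assuming the master runs as a systemd unit named openshift-master on an RPM-based installation (the unit name varies by release and distribution):
# systemctl restart openshift-master   # unit name is an assumption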
native multi-master deployments:
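A hedged example, assuming the API and controllers run as separate systemd units named openshift-master-api and openshift-master-controllers (unit names vary by release and distribution):
# systemctl restart openshift-master-api openshift-master-controllers   # unit names are assumptions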
pacemaker multi-master deployments:
# pcs resource restart master
Some OpenShift releases may have additional instructions specific to that release that must be performed to fully apply the updates across the cluster. Read through the following sections carefully depending on your upgrade path, as you may be required to perform certain steps at key points during the standard upgrade process described earlier in this topic.
There are no additional manual steps for this release that are not already mentioned inline during the standard manual upgrade process.