Containerization Components Refresh

Storyboard: https://storyboard.openstack.org/#!/story/2008972

This story covers the refresh of the containerization components in StarlingX, including Kubernetes but also the other supporting components such as containerd and the various components which run as Kubernetes plugins. It also covers streamlining the Kubernetes upgrade process to better deal with the need to upgrade Kubernetes multiple times per StarlingX release, given that Kubernetes only supports upgrading one release at a time.

Problem description

The containerization components used by StarlingX are starting to become dated and in need of a refresh. This includes Kubernetes itself, as well as the components it relies on (containerd, runc) and the plugins that extend its behaviour (SR-IOV device plugin, Calico, Multus). As a couple of examples, the versions of Kubernetes and containerd that we are using have both reached end-of-life and are no longer supported upstream.

The containerization components are delivered in two ways:

  • RPMs (e.g. kubeadm, kubelet, containerd)
  • Docker images (e.g. kube-apiserver, kube-controller-manager, Calico)

StarlingX already provides a mechanism for upgrading both types of components, but it is based around patches which deliver upgraded RPMs. Given that the release cadence of Kubernetes is faster than StarlingX, and the fact that Kubernetes only supports upgrading by one version at a time, it would be useful to have a more streamlined way to upgrade Kubernetes that wouldn't involve so many patches.

Since we cannot reasonably force users to upgrade Kubernetes on any particular cadence, we also need to be able to validate the functionality of the system at each intermediate Kubernetes version. This implies that we need to be able to specify the desired Kubernetes version on system installation.

Use Cases

  • Deployer wants to upgrade to a fully-supported version of Kubernetes on a running StarlingX system with minimal impact to running applications.
  • QA tester wants to install and test all intermediate versions of Kubernetes to ensure that they are stable and functional.

Proposed change

At a high level, the components to be updated are as follows:

  • Kubernetes
  • Containerd/Crictl/Runc (these all get packaged in one binary RPM)
  • Calico
  • Multus
  • SR-IOV CNI
  • SR-IOV device plugin
  • CNI plugins

We are hoping to drop the Docker container runtime, so we are not planning on upversioning it as part of this feature. The sections below document each of the components in more detail.

Overview

The upgrade from K8s 1.18 to 1.21 requires an incremental upgrade to each minor release. We do not have the ability to prescribe when the customer will perform the incremental upgrades, so we need to be able to run for extended periods (days to weeks) on any of the intermediate versions. Using the current upgrade method of multiple patch applications, this process would be complex and would come with significant overhead for patch management and delivery. Therefore, the following improvements are recommended to facilitate Kubernetes upgrades.

All supported versions of Kubernetes should be packaged onto the system into separate versioned installation paths, and they would all show up in the output of system kube-version-list. The desired runtime version may then be chosen based on the specific host requirements as part of the K8s upgrade procedure. (We already specify the desired version in the system kube-upgrade-start command.)
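
As a rough illustration of the intended flow, using only the existing commands named above (the output described in the comments is illustrative, not a committed design):

  # List the Kubernetes versions packaged into the load; only the next
  # version after the active one is expected to show as "available".
  system kube-version-list

  # Start an upgrade to that next version (the command already accepts
  # a version argument today).
  system kube-upgrade-start <next-version>

  # The remaining steps follow the existing documented upgrade procedure,
  # including "system kube-upgrade-networking" for the containerized components.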

Newly-installed systems would default to the latest version of K8s. To facilitate testing the intermediate versions, the active K8s version should be selectable during initial installation via the localhost.yml file.

In addition to the Kubernetes upgrade we also want to upgrade the various other components related to containers. In order to de-risk and simplify the Kubernetes upgrades, it is proposed that we upgrade the various containerization components as follows:

  1. Upgrade the RPM-based components: CNI plugins and containerd (which includes crictl and runc). This would presumably be done as part of the StarlingX 6.0 upgrade. Docker will be left alone.
  2. Upgrade Kubernetes to 1.19 using the system kube-upgrade-start procedure discussed above. As part of this, upgrade the containerized components (Calico, Multus, the SR-IOV CNI, and the SR-IOV device plugin) via the system kube-upgrade-networking step of the existing Kubernetes upgrade.
  3. Upgrade Kubernetes to 1.20 using the system kube-upgrade-start procedure. Everything else stays the same version.
  4. Upgrade Kubernetes to 1.21 using the system kube-upgrade-start procedure. Everything else stays the same version.

RPM-Based Components

The simplest option for the RPM-based components would be to update them as part of the upgrade to StarlingX 6.0. That way we wouldn't need to patch the RPMs, but it assumes that the new versions of the RPM-based components can work with the existing versions of Kubernetes and the containerized components. This will need to be validated.

CNI Plugins

This package contains some CNI network plugins maintained by the upstream containernetworking team. The upstream project is https://github.com/containernetworking/plugins. This code is called by the containerized components (Multus, Calico, etc.) that are discussed further down.

The Kubernetes code (versions 1.18 to 1.21) specifies version 0.8.7 of these plugins for its testing.

We're currently running with version 0.8.1 as packaged in the "containernetworking-plugins-0.8.1-1.el7.centos.x86_64" binary RPM.

The proposed version is 0.9.1 or later. There is a package "containernetworking-plugins-0.9" available pre-built for CentOS 8, but it depends on versions of libc that aren't currently present in our system. We will likely need to build from source; the CentOS 8 SRPM depends on go-md2man, which may or may not be a problem. The SRPM is available at https://vault.centos.org/8-stream/AppStream/Source/SPackages/

Alternately, we may be able to adapt the CentOS 7 RPM spec file to use the updated source code.

The following K8s CNI meta plugins (from plugins/plugins/meta in the above package) need to be validated. These are essentially binaries that get called by Multus/Calico and live under /usr/libexec/cni; a quick validation sketch follows the list:

  • tuning
  • portmap
  • sbr
  • vrf (requires 4.x kernel)
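
As a minimal validation sketch (assuming the binaries are installed under /usr/libexec/cni as noted above), each plugin can at least be asked which CNI spec versions it supports, since CNI plugins take their operation from the CNI_COMMAND environment variable and read JSON config from stdin:

  # Ask each meta plugin which CNI spec versions it supports.
  for plugin in tuning portmap sbr vrf; do
      echo -n "$plugin: "
      echo '{"cniVersion": "0.4.0"}' | CNI_COMMAND=VERSION /usr/libexec/cni/$plugin
  done

Functional validation against Multus/Calico still needs to be done on a running system; this only confirms the binaries run and report a compatible spec version.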

Containerd

Containerd is the container runtime that we use. The upstream project is https://github.com/containerd/containerd and we're currently running version 1.3.3 as packaged in the "containerd-1.3.3-10.tis.x86_64" binary RPM.

The most recent version is 1.5.2, but the proposed version is 1.4.6 as this aligns with Docker 20.10, which is what K8s 1.21.1 tested with.

We package "crictl" in with our containerd RPM, this comes from the upstream project at https://github.com/kubernetes-sigs/cri-tools and we currently use version 1.18.0. We'd want to upgrade to version 1.21.0 to align with the version of Kubernetes that we're moving to. One of the upstream devs has indicated that they do not expect any issues with cross-version compatibility for crictl and K8s between 1.18 and 1.21 but this should be validated.

We also package "runc" in with our containerd RPM, this comes from the upstream project at https://github.com/opencontainers/runc and we currently use version 1.0.0-rc10. We'd want to upgrade to version 1.0.0-rc95 (the most recent release) to align with Docker 20.10 and also pick up a recommended CVE fix.

Etcd

The Etcd upgrade has some complicating factors and does not appear to be a required upgrade currently, so the plan is to stay on our current version unless something comes up to push us to upgrade.

For background, we're currently using 3.3.15, and Kubernetes calls for a minimum version of 3.2.10 for use in production. (https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/)

If we were to upgrade, the recommended version would be 3.4.13 to align with what was tested by Kubernetes 1.21.

On our current version the CLI tools default to using version 2 of the API, while as of version 3.4 the command-line tool defaults to version 3 of the API. According to https://etcd.io/docs/v3.4/dev-guide/interacting_v3/ any key that was created using the v2 API will not be able to be queried via the v3 API.

The various K8s components explicitly use the v3 API, in which case this won't be an issue. However, we'd have to ensure that the CLI defaulting to v3 doesn't cause any problems. Our backup/restore code already uses the v3 API.
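
For illustration, the difference shows up at the etcdctl command line roughly as follows (the key names are hypothetical):

  # v2 API (the default for our current etcdctl): keys written this way
  # are not visible through the v3 API.
  ETCDCTL_API=2 etcdctl set /example/key somevalue
  ETCDCTL_API=2 etcdctl ls /example

  # v3 API (the default from etcd 3.4 onwards); Kubernetes and our
  # backup/restore code already use this API explicitly.
  ETCDCTL_API=3 etcdctl put example-key somevalue
  ETCDCTL_API=3 etcdctl get example-key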

Containerized Components

The containerized components consist of Calico, Multus, the SR-IOV CNI, and the SR-IOV device plugin. There is already a mechanism for upgrading these components as part of the system kube-upgrade-networking step in the existing Kubernetes upgrade process, so the plan is to leverage that mechanism. This will also allow us to make use of the existing distributed-cloud orchestration of Kubernetes upgrades.

Calico

The upstream project is at https://github.com/projectcalico/calico, current docker images are quay.io/calico/cni:v3.12.0, quay.io/calico/pod2daemon-flexvol:v3.12.0, quay.io/calico/node:v3.12.0, quay.io/calico/kube-controllers:v3.12.0

We are currently on 3.12; the proposed version is 3.19.1:

  • 3.12 is validated up to k8s 1.17 so we're already on our own
  • 3.18 is validated on k8s 1.18 to 1.20
  • 3.19.1 is validated on k8s 1.19 to 1.21

The assumption is that version 3.19.1 will work on Kubernetes 1.18 (for the upgrade). If it doesn't, then we might need to upgrade to 3.18 initially, then upgrade again as part of the upgrade to Kubernetes 1.21.

The upstream docs describing the requirements are available at: https://docs.projectcalico.org/getting-started/kubernetes/requirements

The upstream docs describing the upgrade process are available at: https://docs.projectcalico.org/maintenance/kubernetes-upgrade

Multus

The upstream project is at https://github.com/k8snetworkplumbingwg/multus-cni, current docker image is registry.local:9001/docker.io/nfvpe/multus:v3.4

We're currently on 3.4, the proposed version is v3.7.1+. A Multus developer has indicated there should be no problems upgrading to the latest version with an earlier version of Kubernetes.

Multus runs as a daemonset, so we should be able to do a rolling update. We use the Kubernetes datastore.
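
A rolling update of a daemonset is a standard kubectl operation; as a rough sketch (the daemonset name, container name, and image tag below are illustrative rather than the actual manifest values, and in practice this would be driven by the system kube-upgrade-networking step):

  # Point the daemonset at the new image; Kubernetes replaces the pods node by node.
  kubectl -n kube-system set image daemonset/kube-multus-ds kube-multus=docker.io/nfvpe/multus:v3.7.1

  # Watch the rollout complete across all nodes.
  kubectl -n kube-system rollout status daemonset/kube-multus-ds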

SR-IOV CNI (Container Network Interface)

This Kubernetes plugin enables the configuration and usage of SR-IOV VF networks in containers and orchestrators like Kubernetes. The upstream project is https://github.com/k8snetworkplumbingwg/sriov-cni. The most recent release is 2.6, which is the release we are already based on, although we are using a more recent commit. The current docker image is registry.local:9001/docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8

If a new release comes out it should be possible to switch to the upstream version of this container image in order to avoid building it ourselves. It might make sense to combine that sideways change with the upgrade of the other containerized components.

If we choose golang 1.16 as the default we may want to move to upstream commit c6c0cdf1d as it fixes a compilation issue.

The SR-IOV CNI runs as a daemonset. We should be able to use a rolling update when we need to upgrade eventually.

SR-IOV Device Plugin

This is a Kubernetes plugin for discovering and advertising SR-IOV virtual functions (VFs) available on a Kubernetes host. The upstream project is https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin and the current docker image is registry.local:9001/docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae

We're currently running v3.2, the proposed version is 3.3.2 or later. It should be possible to switch to the upstream version of this container image in order to avoid building it ourselves since we only did so to pick up a fix before it was officially released.

The SR-IOV device plugin runs as a daemonset. We should be able to use a rolling update.

Kubernetes Components

For Kubernetes we want to move from the current version of 1.18 to 1.21. Unfortunately, Kubernetes only supports upgrading by one version at a time. We are not supporting downgrades or aborting a partially-completed upgrade--the only way to recover from a corrupted upgrade is to restore from backup.

The existing orchestration mechanism for Kubernetes upgrades [1] requires two patches to be uploaded for each version of Kubernetes, which becomes unwieldy when we need to upgrade across multiple versions.

In order to avoid needing multiple patches for each Kubernetes version, the proposal is to package all supported versions of Kubernetes onto the system into separate versioned installation paths, and they will all show up in the output of system kube-version-list. In order to reduce confusion, it is proposed that only the "next" version shows a state of available, while the versions later than that show a state of unavailable. The desired runtime version may then be chosen via sysinv as part of the K8s upgrade procedure. (The system kube-upgrade-start command already takes a version argument, we will ensure that only one version newer is allowed.) Behind the scenes, the Kubernetes upgrade mechanism would be modified to update the currently active version instead of applying the new version via a patch.

The Kubernetes upgrade mechanism requires that kubeadm be moved over to the new version first, and then later in the upgrade process kubelet/kubectl are moved over to the new version. This means that the selection mechanism needs to have two separate locations for binaries that can be updated separately.

In order to be compatible with the planned conversion to an OSTree-based filesystem, we plan to use bind mounts rather than symlinks. With OSTree the /usr subtree is read-only, so we can't create symlinks there. We can create bind mounts, though, as they don't involve changing the filesystem contents.

The Kubernetes build process (i.e., the RPM spec file for CentOS) will need to be modified to install the build output files into a versioned directory with subdirectories underneath it for the different types of files. Since kubeadm and kubelet/kubectl need to be updated separately, we will have "stage1" and "stage2" subdirectories to separate these different stages. As part of this update, we will only include the packages we run on baremetal--kubernetes-node, kubernetes-client, and kubernetes-kubeadm. The suggested installation paths are listed below but are subject to change during implementation. We may want to use something other than /usr/local in order to better align with FHS and OSTree standards:

  • /usr/local/kubernetes/1.18/{stage1, stage2}
  • /usr/local/kubernetes/1.19/{stage1, stage2}
  • /usr/local/kubernetes/1.20/{stage1, stage2}
  • /usr/local/kubernetes/1.21/{stage1, stage2}

The selected version will be bind-mounted to subdirectories under /usr/local/kubernetes/current, and there will be symlinks at the normal kubernetes locations (/usr/bin/kubelet, /usr/bin/kubeadm, /usr/lib/systemd/system/kubelet.service, /usr/share/bash-completion/completions/kubectl, etc.) pointing at the versions of the files under /usr/local/kubernetes/current. The Kubernetes config files that get updated dynamically based on system configuration will not be symlinked in this way, they will live in /etc/kubernetes as normal. If there are any config file changes needed for newer Kubernetes versions they will be handled automatically as part of the upgrade procedure.
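
As a sketch of how the wiring might look on a host where kubeadm has already moved to 1.19 but kubelet/kubectl are still on 1.18 (the exact file layout under the stage1/stage2 directories is still to be settled during implementation, so the paths below are illustrative):

  # Stage 1 (kubeadm) selected at 1.19; stage 2 (kubelet/kubectl) still at 1.18.
  mount --bind /usr/local/kubernetes/1.19/stage1 /usr/local/kubernetes/current/stage1
  mount --bind /usr/local/kubernetes/1.18/stage2 /usr/local/kubernetes/current/stage2

  # The well-known paths are static symlinks into the "current" tree, so they
  # never need to change when the selected version changes.
  ls -l /usr/bin/kubeadm /usr/bin/kubelet
  # /usr/bin/kubeadm -> /usr/local/kubernetes/current/stage1/.../kubeadm
  # /usr/bin/kubelet -> /usr/local/kubernetes/current/stage2/.../kubelet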

The filesystem for StarlingX 6.0 will be set up in such a way that a newly installed system will default to running K8s 1.21. To facilitate testing the intermediate versions, the active K8s version should be selectable during initial installation via the localhost.yml file, and the desired K8s version for each of the two bind mounts mentioned above will be stored in the sysinv database and included in the hieradata generated by sysinv and used by puppet.

At initial install time, if the desired version is specified it will be written to the sysinv DB by the bootstrap Ansible playbook (roles/bootstrap/persist-config/files/populate_initial_config.py). At bootup the Kubernetes puppet manifest will check the desired K8s versions in the hieradata and, if either one is anything other than 1.21, it will set up the bind mount(s) appropriately.

For installing with Kubernetes 1.18 we must use the current versions of Calico, Multus, the SR-IOV CNI, and the SR-IOV device plugin. For 1.19 and later we would want to use the final versions.

A new "kube_versions" EB table will store an entry for "kubeadm_version" which will specify the desired version of kubeadm for the system. It will also store an entry for "kubelet_version" which will specify the desired version of kubelet and kubectl for the system. The latter is superceded by the per-host value of the "target_version" field in the "kube_host_upgrade" table, if-and-only-if the "state" field in the "kube_upgrade" table is kubernetes.KUBE_UPGRADING_KUBELETS. This will determine the desired versions of kubeadm/kubelet written to the hieradata.

The policies for allowable version skew [2] are as follows:

  • kubelet can be 2 versions older than kube-apiserver and not newer
  • kube-proxy must be the same version as kubelet
  • kubectl can be one version older than kube-apiserver and not newer (but note that we don't really run kubectl on compute nodes normally)
  • kube-controller-manager, kube-scheduler, and cloud-controller-manager can be one version older than kube-apiserver and not newer

The basic upstream process for upgrading Kubernetes [3] specifies that kubeadm needs to be upgraded first on controller nodes, then kubelet and kubectl on controller nodes, then kubeadm on worker nodes, then kubelet and kubectl on worker nodes. Compared to the current StarlingX process for upgrading Kubernetes [4], the general procedure will still apply, except that it won't be necessary to upload/apply patches to switch versions. For clarity, only the next-newer version will be marked as "available" in the output of system kube-version-list since we can only upgrade one version at a time. We will also remove the "applied_patches" and "available_patches" fields from the output of system kube-version-show <version> and from the internal data structures, as they are no longer meaningful.
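
For reference, the generic upstream sequence that the StarlingX upgrade steps wrap looks roughly like this; in StarlingX it is driven by sysinv/VIM rather than run by hand, and the version shown is illustrative:

  # On the first controller, with kubeadm already switched to the new version:
  kubeadm upgrade plan
  kubeadm upgrade apply v1.19.x    # exact patch version chosen at implementation time

  # On each remaining controller and each worker, after switching its kubeadm:
  kubeadm upgrade node

  # Finally, per node: switch kubelet/kubectl to the new version and restart kubelet.
  systemctl daemon-reload
  systemctl restart kubelet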

Just to make things interesting, each version of Kubernetes wants to be built with a different version of golang: 1.18 wants golang 1.13.15; 1.19 and 1.20 want golang 1.15.12 (which has our http2 golang fix already backported); and 1.21 wants golang 1.16.4. Our build system does not support using different golang versions for different packages (just as it doesn't support different versions of gcc for different packages), so the suggested workaround is to build versions 1.19 and 1.20 on dedicated branches and copy the binary RPMs into the StarlingX 6.0 load as prebuilt binaries. The main risk with this is if we must patch Kubernetes while a customer is on 1.19 or 1.20, in which case we'd have to rebuild the binary RPMs and copy the new versions into the StarlingX 6.0 load.

Also, the kubernetes-contrib package has been retired and is now read-only upstream. Currently we get the systemd service files from this package, so we'll need to sort out the build process for newer Kubernetes. The obvious alternatives are to just use the read-only repository, or to copy the (relatively few) files we care about into our own repository with a suitable attribution notice.

We should check the upstream Kubernetes release notes for each version in detail to make sure there aren't any gotchas with upgrading; there are some "you must read this before you upgrade" notes in those documents.

It is likely that changes will be needed at some point to the code that invokes kubeadm/kubectl commands and updates Kubernetes ConfigMaps in order to handle any changes in the newer versions. We should also run the "pluto" tool from https://pluto.docs.fairwinds.com to see if we're using any deprecated resources.
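
A rough sketch of how that check might be run (the exact flags should be confirmed against the pluto documentation linked above):

  # Scan in-cluster Helm releases (e.g. with stx-openstack applied) for resources
  # whose apiVersions are deprecated or removed in newer Kubernetes releases.
  pluto detect-helm -o wide

  # Scan static manifests on disk, e.g. a directory of chart/manifest sources.
  pluto detect-files -d ./manifests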

Alternatives

It would be possible to use the existing mechanism to upgrade Kubernetes via a series of software patches (multiple patches per Kubernetes version). However, this process is complex and comes with significant overhead for patch management and delivery.

For building each version of Kubernetes with a specific version of golang, the alternative would be to modify the build system to package multiple simultaneous golang versions as separate packages (and update each version of Kubernetes to depend on that separate package instead of "golang"). We would need to keep one of the versions of golang as the default.

Data model impact

The following new table in the sysinv DB will be required:

  • kube_versions:
    • created/updated/deleted_at: as per other tables
    • id: as per other tables
    • uuid: as per other tables
    • kubeadm_version: text, stores the desired version string for kubeadm
    • kubelet_version: text, stores the desired version string for kubelet/kubectl

On install, the initial entry in this table will default to the highest version of K8s in the load (planned to be 1.21 in this release) but will be able to be overridden based on an entry in the localhost.yml file on initial setup.

REST API impact

This impacts the sysinv REST API:

  • The resource /kube_versions is modified.
    • URLS:
      • /v1/kube_versions
    • Request Methods:
      • GET /v1/kube_versions
        • Modified to return all kube_versions known to the system. Only the next-newer version will be labelled as "available", ones after that will be labelled as "unavailable".

        • Response body example:

          {"kube_versions": [
             {"state": "active", "version": "v1.18.1", "target": true},
             {"state": "available", "version": "v1.19", "target": false},
             {"state": "unavailable", "version": "v1.20", "target": false},
             {"state": "unavailable", "version": "v1.21", "target": false}]}
      • GET /v1/kube_versions/{version}
        • Returns details of specified kube_version. Modified to remove the "applied_patches" and "available_patches" fields as they are no longer applicable.

        • Response body example:

          {"target": true,
           "upgrade_from": ["v1.16.4"],
           "downgrade_to": [],
           "state": "active",
           "version": "v1.17.1"}

Security impact

This story is modifying the mechanism to upgrade Kubernetes from one version to another. It does not introduce any additional security impacts above what is already there regarding Kubernetes upgrades.

Other end user impact

During initial system install it will be possible to specify the desired Kubernetes version via the "localhost.yml" file. The desired version will be specified as follows:

kubernetes_version: v1.18.1
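
After bootstrap, the selected version can be confirmed with the existing commands, for example:

  # The requested version should show as the active one.
  system kube-version-list

  # Cross-check the running control plane and per-node kubelet versions.
  kubectl version --short
  kubectl get nodes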

Performance Impact

No significant change.

Other deployer impact

Upgrading Kubernetes will be simpler since no software patches will be needed.

Developer impact

No significant change.

Upgrade impact

Upgrading Kubernetes will be simpler since no software patches will be needed. The platform upgrade code will continue to enforce that the k8s version cannot change due to a platform upgrade. This will also enforce that k8s has been fully upgraded before the platform upgrade has been started.

Implementation

Assignee(s)

Primary assignee:

Chris Friesen (cbf123)

Other contributors:

Scott Little (slittle1), Jim Gauld (jgauld), Mihnea Saracin (msaracin)

The main expert on the containerized networking components is Steve Webster (swebster-wr in Launchpad).

Repos Impacted

  • config
  • integ
  • stx-puppet
  • ansible-playbooks

Work Items

  • Investigate the feasibility of having multiple versions of golang available in our build system. Per Scott Little the different golang versions must be able to be installed simultaneously, as once a given version is installed in mock it persists when building other packages.
  • Assuming it's possible to have multiple versions of golang in the build system, this work item would cover doing the work to make them available. We will also need to choose a golang version to use for packages (containerd for example) which are written in Go but do not specify a particular version. The most future-proof option would be to choose the newest version of Go, but I suspect that RPM will choose the lowest version that meets the requirements, which would probably be the oldest version.
  • Write up a detailed design for how the different versions of Kubernetes can exist in parallel in the load, and how they will be selected at runtime using bind mounts. This includes the mechanics of config and environment files, updating the sysinv Kubernetes upgrade commands, and updating the VIM and dcmanager to deal with the fact that no software patches are needed. The default version will eventually be 1.21, but there must be a way to override the default and install with an earlier version. When installing with 1.18 we must use the "old" versions of the containerized components.
  • Build and install the different versions of Kubernetes into the load, such that installing results in 1.18 being active with the "old" versions of the containerized components.
  • Investigate whether any problems are expected using crictl 1.21 with Kubernetes 1.18-1.20. It may make sense to include multiple versions of crictl to match the multiple versions of Kubernetes, and to update the "current" version of crictl when we change Kubernetes versions.
  • Update the containerd package (which bundles up containerd, crictl, and runc) to the desired versions of the underlying packages, validate against Kubernetes 1.18.
  • Validate version 0.9.1 of the CNI plugins against Kubernetes 1.18 and ensure that they work as expected.
  • Assuming version 0.9.1 of the CNI plugins works with Kubernetes 1.18, update the CNI plugins RPM package.
  • Validate platform apps, stx-openstack, etc., against Kubernetes 1.18 with updated containerd/runc, CNI plugins, and possibly crictl.
  • Validate the newer versions of the containerized components (Calico, Multus, SR-IOV CNI, SR-IOV Device Plugin) against Kubernetes 1.19, ensuring that basic functionality works.
  • Add support for selecting the active version of Kubernetes on install.
  • Update the sysinv Kubernetes upgrade mechanism to work with the "multiple versions of Kubernetes installed in parallel" mechanism. This will likely involve either an ansible playbook or a puppet runtime manifest to handle updating the bind mounts for the new version of Kubernetes.
  • Validate platform apps, stx-openstack, etc., against Kubernetes 1.19.
  • Once validated, change the installation default to 1.19 with the newer containerized components.
  • Validate platform apps, stx-openstack, etc., against Kubernetes 1.20.
  • Once validated, change the installation default to 1.20.
  • Validate platform apps, stx-openstack, etc., against Kubernetes 1.21.
  • Once validated, change the installation default to 1.21.
  • Update the VIM to deal with the fact that there are no software patches that need to be applied when upgrading Kubernetes.
  • Update dcmanager to deal with the fact that there are no software patches that need to be applied when upgrading Kubernetes.
  • Run the "pluto" tool from https://pluto.docs.fairwinds.com on our system with stx-openstack installed to look for deprecated resources. Update our resource definitions as needed.
  • Update the customer documentation describing the procedure for upgrading Kubernetes. Add release notes highlighting the upstream Kubernetes release notes for customers to validate their own applications to ensure compatibility with newer releases of Kubernetes.

Dependencies

None

Testing

Kubernetes upgrades from 1.18 to 1.21 must be tested in the following StarlingX configurations:

  • AIO-SX
  • AIO-DX
  • Standard with controller storage
  • Standard with dedicated storage
  • Distributed cloud

The testing can be performed on hardware or virtual environments. Sanity must be performed on each intermediate Kubernetes version.

Documentation Impact

The existing Kubernetes upgrade documentation will need to be updated to reflect the fact that there will no longer be software patching involved.

The Release Notes will need to be updated to reflect the requirement to upgrade to Kubernetes 1.21 as part of STX 6.0.

The config API reference will also need updates.

References

History

Revisions
Release Name    Description
STX-6.0         Introduced

  1. https://docs.starlingx.io/configuration/k8s_upgrade.html

  2. https://kubernetes.io/docs/setup/release/version-skew-policy

  3. https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade

  4. https://docs.starlingx.io/configuration/k8s_upgrade.html