Without this, heat container agents using kubectl version
1.18.x (e.g. ussuri-dev) fail because they do not have the correct
KUBECONFIG in the environment.
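As a rough illustration (the kubeconfig path below is an assumption,
not necessarily the one the agent image uses), the fix boils down to
pointing kubectl at the admin config explicitly:

export KUBECONFIG=/etc/kubernetes/admin.conf   # path assumed
kubectl get nodes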
Task: 39938
Story: 2007591
Change-Id: Ifc212478ae09c658adeb6ba4c8e8afc8943e3977
There are several issues in the current upgrade script (a rough
sketch of the fixes follows the list):
1. The kubectl command location has changed.
2. Before checking the digest of the hyperkube image, wait until the
image is fully downloaded.
3. Use the full image name when inspecting the image.
4. Get the correct ostree commit id.
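The adjusted checks could look roughly like this (image name, paths
and the jq query are assumptions for illustration only):

HYPERKUBE_IMAGE="k8s.gcr.io/hyperkube:${KUBE_TAG}"   # full image name
# wait until the image has been fully downloaded before checking it
until podman image exists "${HYPERKUBE_IMAGE}"; do
    sleep 5
done
podman image inspect --format '{{.Digest}}' "${HYPERKUBE_IMAGE}"
# read the currently deployed ostree commit id
rpm-ostree status --json | jq -r '.deployments[0].checksum'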
Task: 39785
Story: 2007676
Change-Id: I5c16b123683ef1173c22d4e4628c36234871cb93
Given that we're using a public container registry as the default
registry, it would be nice to verify the image's digest. Kubernetes
already supports this, so users can just use a format like
@sha256:xxx for the addons' tags. This patch introduces that support
for hyperkube based on podman and the fedora coreos driver.
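For illustration only (the digest is a placeholder), a tag carrying a
digest makes the runtime verify the pulled content:

KUBE_TAG="v1.15.7@sha256:<expected-digest>"   # placeholder digest
podman pull "k8s.gcr.io/hyperkube:${KUBE_TAG}"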
Task: 37776
Story: 2007001
Change-Id: I970c1b91254d2a375192420a9169f3a629c56ce7
There are some small regressions introduced by the podman support
patch. Another issue is that since k8s v1.16, daemonsets have been
moved from extensions to apps/v1 [1], so we need to update the
system:node-drainer ClusterRole so that kubectl can be called on the
worker node to trigger the drain process. Both issues are fixed in
this patch.
[1] https://kubernetes.io/docs/setup/release/notes/#deprecations-and-removals
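A minimal sketch of the rule change (the real system:node-drainer
rule set is abridged here; only the daemonsets part is shown):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:node-drainer
rules:
# daemonsets now live under the "apps" API group, not "extensions"
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  verbs: ["get", "list"]
EOF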
Task: 37642
Story: 2005201
Change-Id: I87ed49fd1e9cd513ae54f6758717379adafae3a4
Remove the hard-coded reference to train-dev, which ends up pulling
multiple images down, and use HEAT_CONTAINER_AGENT_TAG instead.
Also add the missing CONTAINER_INFRA_PREFIX.
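Roughly (the default prefix shown here is an assumption), the image
reference is now built from the variables instead of a fixed
train-dev tag:

PREFIX="${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}"  # default assumed
IMAGE="${PREFIX}heat-container-agent:${HEAT_CONTAINER_AGENT_TAG}"
podman pull "${IMAGE}"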
Story: 2006459
Task: 37566
Change-Id: Ic8d0e3ba125ef6ce7fde68c086ccbdb4730ac4a6
Choose whether the system containers etcd, kubernetes and the heat-agent
will be installed with podman or atomic. This label is relevant for the
k8s_fedora drivers.
k8s_fedora_atomic_v1 defaults to use_podman=false, meaning atomic will be
used, pulling containers from docker.io/openstackmagnum. use_podman=true
is accepted as well, which will pull containers from k8s.gcr.io.
k8s_fedora_coreos_v1 defaults to and accepts only use_podman=true.
Fix the upgrade for k8s_fedora_coreos_v1 and the magnum-cordon systemd unit.
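For example (template name, image and network are placeholders), the
label can be set at template creation time:

openstack coe cluster template create k8s-podman \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --labels use_podman=true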
Task: 37242
Story: 2005201
Change-Id: I0d5e4e059cd4f0458746df7c09d2fd47c389c6a0
Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>
Along with the kubernetes version upgrade support we just released, we're
adding support for upgrading the operating system of the k8s cluster
(including master and worker nodes). It's an in-place upgrade leveraging
the atomic/ostree upgrade capability.
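On the node itself, the in-place step is essentially this sketch
(applicable only to atomic/ostree based hosts):

# stage the new ostree deployment, then reboot into it
atomic host upgrade    # equivalently: rpm-ostree upgrade
systemctl reboot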
Story: 2002210
Task: 33607
Change-Id: If6b9c054bbf5395c30e2803314e5695a531c22bc
Using the atomic cli to install kubelet breaks mount
propagation of secrets, configmaps and so on. Using podman
in a systemd unit works.
Additionally, with this change all atomic commands are dropped and
containers are pulled from gcr.io (official kubernetes containers).
Finally, after this patch, just by starting the heat-agent with
ignition we can use fedora coreos as a drop-in replacement.
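A trimmed-down sketch of such a unit (image, tag, command and mounts
are abridged and partly assumed); the rshared mount is what keeps the
secret/configmap propagation working:

cat > /etc/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=kubelet via podman
After=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman rm -f kubelet
ExecStart=/usr/bin/podman run --name kubelet --privileged \
    --net host --pid host \
    -v /etc/kubernetes:/etc/kubernetes:ro \
    -v /var/lib/kubelet:/var/lib/kubelet:rshared \
    k8s.gcr.io/hyperkube:v1.15.7 /hyperkube kubelet
ExecStop=/usr/bin/podman stop kubelet
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now kubelet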
* Drop deletion of docker0
The command to remove docker0 was carried over from
earlier versions of docker. This is not an issue
anymore.
story: 2006459
task: 36871
Change-Id: I2ed8e02f5295e48d371ac9e1aff2ad5d30d0c2bd
Signed-off-by: Spyros Trigazis <spyridon.trigazi@cern.ch>
We kept introspecting the name of the instance with the assumption
that the network always existed under .novalocal.
This is not always the case; with certain variables changed inside
Neutron it is possible to control this, which leads to failing
deploys.
With this change, we pass the instance name directly to the cluster,
so we always have the accurate name.
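Illustratively (the variable and parameter names are hypothetical),
the difference is between deriving the name and receiving it:

# old, fragile: assume the instance resolves under .novalocal
INSTANCE_NAME="$(hostname -s).novalocal"
# new: use the exact nova instance name handed in by the heat template
INSTANCE_NAME="$INSTANCE_NAME_FROM_HEAT"   # hypothetical parameter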
Task: 36160
Story: 2006371
Change-Id: I2ba32844b822ffc14da043e6ef7d071bb62a22ee
After a k8s version upgrade, the initial KUBE_TAG in heat-params will be
out of date. This patch appends a new KUBE_TAG to record and update
the current k8s version, making sure it's always consistent.
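A minimal sketch, assuming the heat-params location used by the
fedora drivers and a hypothetical variable holding the new tag:

# record the version we just upgraded to; the last entry wins on source
echo "KUBE_TAG=\"${new_kube_tag}\"" >> /etc/sysconfig/heat-params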
Story: 2002210
Task: 35949
Change-Id: Ie8044316bb1ba64a37c54f5f75ced1d47b35a3aa
Rolling upgrade is an important feature for a managed k8s service;
at this stage, two use cases are covered (an example invocation
follows below):
1. Upgrade the base operating system
2. Upgrade the k8s version
Known limitation: when doing an operating system upgrade, there is no
chance to call kubectl drain to evict pods on that node.
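Both cases are driven through the same upgrade API, e.g. (cluster and
template names are placeholders):

# upgrade the cluster to whatever the new template defines
openstack coe cluster upgrade k8s-cluster new-cluster-template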
Task: 30185
Story: 2002210
Change-Id: Ibbed59bc135969174a20e5243ff8464908801a23