The default for `auto_scaling_enabled` in Kubernetes is set to false in
the cluster heat templates, but true in the end user docs. Update the
end user docs to match the actual value.
Change-Id: Ie11a12f4ee8e5fbb760c177de72f8a3d88c751c5
Story: #2005928
Task: #34270
Minion is no longer a good name for a k8s worker node; it has been
replaced with 'node' to align with k8s terminology. As a result, the
server name of a worker will be something like `k8s-1-lnveovyzpreg-node-0`
instead of `k8s-1-lnveovyzpreg-minion-0`.
Task: 31008
Story: 2005689
Change-Id: Ie9a68b18658e94b6ebe76ebeae8becc23714380d
With the new config option `keystone_auth_default_policy`, the cloud
admin can set a default keystone auth policy for k8s clusters when
keystone auth is enabled. As a result, users can use their existing
keystone credentials to access the k8s cluster as long as they're
assigned the correct roles, and they will get the pre-defined
permissions set by the cloud provider.
The default policy is based on the v2 format recently introduced in
k8s-keystone-auth, which is more expressive than v1. For example, v1
cannot express a policy that lets a user access resources in all
namespaces except kube-system, whereas v2 can.
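For illustration, a minimal default policy in the v2 format could look
like the sketch below (built with plain Python; the project name, role
and permissions are assumptions, not the actual default shipped by
Magnum):
import json

# Sketch of a k8s-keystone-auth v2 policy: members of an assumed
# "demo" project get read-only access to pods in all namespaces.
default_policy = [
    {
        "users": {
            "projects": ["demo"],   # assumed project name
            "roles": ["member"],    # assumed role
        },
        "resource_permissions": {
            "*/pods": ["get", "list", "watch"],  # "*" = all namespaces
        },
    }
]
print(json.dumps(default_policy, indent=2))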
NOTE: For now we're using the openstackmagnum Docker Hub repo until the
CPO team fixes their image release issue.
Task: 30069
Story: 1755770
Change-Id: I2425e957bd99edc92482b6f11ca0b1f91fe59ff6
Currently the coe_version is out of sync with the k8s version deployed
for the cluster. This patch makes sure the kube_version stays
consistent with the kube_tag when creating and upgrading the cluster.
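Conceptually the sync amounts to something like the sketch below (the
names here are hypothetical, not the actual Magnum code):
# Conceptual sketch only: mirror the kube_tag label into the
# cluster's reported version field on create and upgrade.
def sync_kube_version(cluster, labels):
    kube_tag = labels.get("kube_tag")
    if kube_tag:
        cluster.coe_version = kube_tag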
Task: 33608
Story: 2002210
Change-Id: I5812dac340099ecd8923c1e4a60ce0e6611f7ca4
When using docker_storage_driver=overlay2 with docker_volume_size > 0,
users run into a problem where some pods can't be created. The root
cause is that kubelet needs read/write access to /var/lib/docker.
This patch fixes that by adding /var/lib/docker to the kubelet
container's mounts.
Task: 30221
Story: 2005314
Change-Id: Ie19c95e6280e16644c686550950359cc9934c719
Rolling upgrade is an important feature for a managed k8s service. At
this stage, two use cases are covered:
1. Upgrade the base operating system
2. Upgrade the k8s version
Known limitation: when doing an operating system upgrade, there is no
chance to call kubectl drain to evict the pods on that node.
Task: 30185
Story: 2002210
Change-Id: Ibbed59bc135969174a20e5243ff8464908801a23
To enable rolling upgrades of a Kubernetes cluster, this patch proposes
a new API /upgrade that supports upgrading the base operating system of
nodes and the version of Kubernetes, and even add-ons running on the
k8s cluster:
POST <ClusterID>/actions/upgrade
The POST body will be:
{
    "cluster_template": "dd9cc5ed-3a2b-11e9-9233-fa163e46bcc2",
    "max_batch_size": 1,
    "nodegroup": "production_group"
}
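For example, the action could be triggered with python-requests as in
this sketch (the endpoint URL, token and IDs are placeholders):
import json
import requests

MAGNUM_URL = "https://magnum.example.com/v1"  # placeholder endpoint
TOKEN = "<keystone-token>"                    # placeholder token
CLUSTER_ID = "0f5a3b9a-3a2b-11e9-9233-fa163e46bcc2"  # placeholder

body = {
    "cluster_template": "dd9cc5ed-3a2b-11e9-9233-fa163e46bcc2",
    "max_batch_size": 1,
    "nodegroup": "production_group",
}
resp = requests.post(
    "{}/clusters/{}/actions/upgrade".format(MAGNUM_URL, CLUSTER_ID),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    data=json.dumps(body),
)
resp.raise_for_status()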
Co-Authored-By: Feilong Wang <flwang@catalyst.net.nz>
Task: 30168
Story: 2002210
Change-Id: Ia168877778aa0d473383eb06b1c8a16dc06b0576
In https://review.opendev.org/#/c/548139/, we made the same change for
worker nodes. Because kubelet is also installed on master nodes, we
need the same configuration there; otherwise, pods on master nodes
won't work properly (losing connections or timing out frequently).
Story: #2005805
Task: #33544
Change-Id: I14c4dcdd1d73e2d94325974b4e55c1e37a20d9ea
There's a regression[0] in bandit 1.6.0 which causes bandit to stop
respecting excluded directories, and our tests throw a bunch of
violations. Blacklist this version, but allow newer versions as there is
already a pull request[1] to fix it, and I expect it will be included in
the next release.
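The resulting pin in test-requirements.txt would presumably look
something like this (the exact bounds may differ):
bandit!=1.6.0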
Also fix the requirements job which was broken by
https://review.opendev.org/657890 adding a cap on Sphinx on Python 2.
[0] https://github.com/PyCQA/bandit/issues/488
[1] https://github.com/PyCQA/bandit/pull/489
Co-Authored-By: Jake Yip <jake.yip@unimelb.edu.au>
Task: 33401
Story: 2005740
Change-Id: I34dc36c5236debc42424073af2c2d2104e18179a
The periodic jobs for building images are all failing; apply some
low-hanging fixes:
1) Remove usage of our own Fedora image build tools; they were removed
as part of change Ie6a8496c202ff0bf330dd0f434cff8777e5ef112.
2) Add openstack/tripleo-image-elements and
openstack/heat-templates as required-projects for the build
jobs, since they are requirements.
Even so, the jobs are still failing, so let's disable the periodic
jobs. They have been broken for ages without a fix, and there is no
record of a successful run under Zuul v3.
The last images at http://tarballs.openstack.org/magnum/images/ are
from 2017.
Change-Id: I01122fa029b4124d912e80ea43bca07b8f2ebe5c
The current magnum traefik deployment always pulls the latest traefik
container image. With the launch of traefik v2
(https://blog.containo.us/back-to-traefik-2-0-2f9aa17be305) this will
impact how ingress is described in k8s.
This patch:
* Sets the traefik version to the default tag v1.7.9, the stable
release prior to v2.
* Adds a new label <traefik_ingress_controller_tag> to let users
specify a traefik release other than the default, as shown in the
example below.
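For example, a user could pin a specific traefik release at cluster
creation time (an illustrative command; the cluster and template names
are made up):
openstack coe cluster create my-cluster --cluster-template k8s-template --labels traefik_ingress_controller_tag=v1.7.9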
Task: 30143
Task: 30146
Story: 2005286
Change-Id: I031a594f7b6014d88df055664afcf51b1cd2cd94
Signed-off-by: Diogo Guerra <dy090.guerra@gmail.com>
Use Node Problem Detector, Draino and AutoScaler to support auto
healing for k8s clusters; users can use a new label
"auto_healing_enabled" to turn it on/off.
Meanwhile, a new label "auto_scaling_enabled" is also introduced
to let the k8s cluster auto scale based on its workload, as shown
in the example below.
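For example, both capabilities could be enabled at cluster creation
time (an illustrative command; the cluster and template names are
made up):
openstack coe cluster create my-cluster --cluster-template k8s-template --labels auto_healing_enabled=true,auto_scaling_enabled=true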
Task: 28923
Story: 2004782
Change-Id: I25af2a72a7a960205929374d2300bd83d4d20960
When using calico as the network driver, traffic between k8s worker
nodes needs to be allowed; otherwise services may sometimes be
inaccessible because connections can't be established.
This issue only impacts calico.
Task: 30525
Story: 2005294
Change-Id: Ia71283a1abc75a7fb806f2601ac09a685dc5a4bc
This fixes an issue with --registry-enabled in k8s_fedora_atomic where
the registry container fails to start on the minion due to two missing
heat parameters: TRUSTEE_USERNAME and TRUSTEE_DOMAIN_ID.
Change-Id: Ib93a7c0f761d047da3408703a5cf4208821acb33
Task: 23067
Story: 2003033
The proportional autoscaler was not taken from
the real gcr.io/google_containers but from
docker.io/googlecontainer.
Story: 2003993
Task: 30492
Change-Id: I2b6fa6f6c839d86b935feb9e1fa9f044d1835b34
Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>