Support boot from volume for all Kubernetes nodes (master and worker)
so that users can create a large root volume, which is more
flexible than using docker_volume_size. Users can also specify the
volume type so that they can leverage high performance storage, e.g.
NVMe.
A new label etcd_volume_type is added as well so that users can
set the volume type for the etcd volume.
If boot_volume_type or etcd_volume_type are not passed as labels,
Magnum will try to read them from the config options
default_boot_volume_type and default_etcd_volume_type. A random
volume type from Cinder will be used if those options are not set.
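A minimal sketch of the resolution order described above (label, then
config option, then a random Cinder volume type); the function and
names are illustrative, not Magnum's actual internals:

    import random

    def resolve_volume_type(labels, label_name, conf_default, cinder_types):
        # 1. Explicit label, e.g. boot_volume_type or etcd_volume_type.
        if labels.get(label_name):
            return labels[label_name]
        # 2. Config option, e.g. default_boot_volume_type.
        if conf_default:
            return conf_default
        # 3. Fall back to a random volume type reported by Cinder.
        return random.choice(cinder_types)

    # No label and no config default -> a random Cinder type is used.
    print(resolve_volume_type({}, 'boot_volume_type', None,
                              ['lvmdriver-1', 'nvme']))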
Task: 30374
Story: 2005386
Co-Authorized-By: Feilong Wang <flwang@catalyst.net.nz>
Change-Id: I39dd456bfa285bf06dd948d11c86867fc03d5afb
There shouldn't be a default value for floating_ip_enabled when creating
a cluster. By default, when it's not set, the cluster's floating_ip_enabled
attribute should take the value of the cluster template. This is fixed
by removing the default value from the Magnum API.
Task: 36500
Story: 2006208
Change-Id: I4077783c6a19a413d534f77f287da587353df0af
This is a case missed by the earlier fix [1]. When a user passes in
an existing network when creating a cluster, the network name is
missing in the code. This patch fixes it.
[1] https://review.opendev.org/678067
Task: 36430
Story: 2005333
Change-Id: I3a005089c4a755812c40589d8fa1e3ab7bbf062d
Sometimes, the fixed_network value gets rendered as a UUID. However, OCCM's
internal-network-name requires the network name; it does not support a
UUID. This patch introduces a new parameter called fixed_network_name
which converts the fixed_network UUID to a name if it is UUID-like.
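The "UUID-like" check can be done with oslo.utils' uuidutils; a
self-contained sketch (lookup_name_by_uuid is a hypothetical helper):

    import uuid

    def is_uuid_like(value):
        try:
            return str(uuid.UUID(value)).replace('-', '') == \
                value.replace('-', '').lower()
        except (TypeError, ValueError, AttributeError):
            return False

    def fixed_network_name(fixed_network, lookup_name_by_uuid):
        # Only convert when the stored value looks like a UUID; otherwise
        # it is already a name and can be passed to internal-network-name.
        if is_uuid_like(fixed_network):
            return lookup_name_by_uuid(fixed_network)
        return fixed_network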
Story: 2005333
Task: 36313
Change-Id: I3453bc0dbea285687d39c9782685cb1f2a3ecd39
When using a public cluster template, users still need the capability
to reuse their existing network/subnet, and they also need to be
able to turn the floating IP on/off to override the setting in the
public template. This patch supports that by adding those three
items as parameters when creating a cluster.
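A minimal sketch of the override semantics, consistent with the
floating_ip_enabled fix above: a value passed at cluster creation wins,
and only an unset (None) value inherits from the template:

    def resolve_cluster_attr(cluster_value, template_value):
        return template_value if cluster_value is None else cluster_value

    # User explicitly disables floating IPs despite the public template:
    assert resolve_cluster_attr(False, True) is False
    # User passes nothing: the template's setting wins.
    assert resolve_cluster_attr(None, True) is True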
Story: 2006208
Task: 35797
Change-Id: I11579ff6b83d133c71c2cbf49ee4b20996dfb918
This is a regression introduced by the rolling upgrade feature:
without setting master_kube_tag and minion_kube_tag, they would
be set to the default value. This patch fixes it by keeping them
consistent with the kube_tag label.
Change-Id: I8b0ca3f87c9a52d48ecb75e4dd8de18a61a10d6f
* prometheus-operator chart version upgraded from 0.1.31 to 5.12.3
* Fix an issue where, when using the Priority feature gate, the scheduler
would evict the prometheus monitoring node-exporter pods
* Fix an issue where intensive CPU utilization would make the
metrics fail intermittently or completely fail
* Prometheus resources are now calculated based on the MAX_NODE_COUNT
requested
* Change the sampling rate from the standard 30s to 1 minute (rollback)
* Add the missing tiller CONTAINER_INFRA_PREFIX variable to the ConfigMap
* Add label prometheus_operator_chart_tag to enable the user to
specify the stable/prometheus-operator chart version to use
* Fix breaking changes on CoreDNS metrics introduced by
8fb27da2fc
* Fix Grafana dashboard not showing data.
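As an illustration of the MAX_NODE_COUNT based sizing, a hypothetical
linear scaling of the Prometheus resource requests (the constants here
are made up; the real chart values differ):

    def prometheus_resources(max_node_count):
        base_mem_mb, per_node_mb = 512, 40
        base_cpu_m, per_node_m = 100, 10
        return {
            'memory': '%dMi' % (base_mem_mb + per_node_mb * max_node_count),
            'cpu': '%dm' % (base_cpu_m + per_node_m * max_node_count),
        }

    print(prometheus_resources(10))  # {'memory': '912Mi', 'cpu': '200m'}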
Change-Id: If42873cd6668c07e4e911e4eef5e4ae2232be66f
Task: 30777
Task: 30779
Story: 2005588
Signed-off-by: Diogo Guerra <dy090.guerra@gmail.com>
With the new config option `keystone_auth_default_policy`, the cloud
admin can set a default keystone auth policy for k8s clusters when
keystone auth is enabled. As a result, users can use their current
keystone user to access the k8s cluster as long as they're assigned the
correct roles, and they will get the pre-defined permissions
set by the cloud provider.
The default policy is now based on the v2 format recently introduced
in k8s-keystone-auth, which is more expressive. For example, v1 does
not support a policy allowing a user to access resources in all
namespaces except kube-system, but v2 does.
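For illustration, a v2-style policy entry might look like the
following; the exact schema is defined by k8s-keystone-auth and the
project/role values here are placeholders:

    import json

    # Members of project 'demo' may read resources in every namespace
    # except kube-system ('!kube-system' expressing the negation).
    policy = [{
        "users": {"projects": ["demo"], "roles": ["member"]},
        "resource_permissions": {"!kube-system/*": ["get", "list", "watch"]},
    }]
    print(json.dumps(policy, indent=2))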
NOTE: We're using the openstackmagnum dockerhub repo until the CPO
team fixes their image release issue.
Task: 30069
Story: 1755770
Change-Id: I2425e957bd99edc92482b6f11ca0b1f91fe59ff6
Currently the coe_version is out of sync with the k8s version deployed
for the cluster. This patch makes sure the kube_version is
consistent with the kube_tag when creating the cluster and when
upgrading the cluster.
Task: 33608
Story: 2002210
Change-Id: I5812dac340099ecd8923c1e4a60ce0e6611f7ca4
Rolling upgrade is an important feature for a managed k8s service;
at this stage, two use cases are covered:
1. Upgrade the base operating system
2. Upgrade the k8s version
Known limitation: when doing an operating system upgrade, there is no
chance to call kubectl drain to evict pods on that node.
Task: 30185
Story: 2002210
Change-Id: Ibbed59bc135969174a20e5243ff8464908801a23
This reverts commit e8d0ee1b14.
This commit is reverted for two reasons:
* It is undesirable that the end user can inject proxy config into
the magnum-conductor service via the cluster template.
* The proxy settings for the magnum-conductor service may not be
the same as those which are required in the cluster template for
the end user VM.
Systemd, docker and podman all include native mechanisms for setting
environment variables for processes, and these should be used by the
cloud operator / deployment tooling to configure the required proxy
settings for the magnum-conductor service.
In particular, the reverted patch made it impossible for the cloud operator
to specify their own http_proxy via the environment; the user-supplied
cluster template setting would always be used.
Change-Id: I33da19ad6764bedcf15f2a08381063e2471f8991
The current magnum traefik deployment will always pull the latest traefik
container image. With the new launch of traefik v2
(https://blog.containo.us/back-to-traefik-2-0-2f9aa17be305) this will
have an impact on how ingress is described in k8s.
This patch:
* Sets the traefik version to the default tag v1.7.9, the stable release
prior to v2.
* Adds a new label <traefik_ingress_controller_tag> to enable users
to specify a traefik release other than the default.
Task: 30143
Task: 30146
Story: 2005286
Change-Id: I031a594f7b6014d88df055664afcf51b1cd2cd94
Signed-off-by: Diogo Guerra <dy090.guerra@gmail.com>
Using Node Problem Detector, Draino and AutoScaler to support
auto healing for k8s clusters, users can use a new label
'auto_healing_enabled' to turn it on/off.
Meanwhile, a new label 'auto_scaling_enabled' is also introduced
to enable the capability to let the k8s cluster auto scale based
on its workload.
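As a sketch, the new labels are passed at cluster creation like any
other label, e.g. via `openstack coe cluster create ... --labels`:

    labels = {
        'auto_healing_enabled': 'true',   # deploy NPD/Draino/AutoScaler
        'auto_scaling_enabled': 'true',   # let the cluster auto scale
    }
    # Rendered in the CLI's key=value,key=value form:
    print(','.join('%s=%s' % kv for kv in sorted(labels.items())))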
Task: 28923
Story: 2004782
Change-Id: I25af2a72a7a960205929374d2300bd83d4d20960
Add an nginx based Ingress controller for Kubernetes.
The use case is to better support workloads which require either
L4 access or SSL passthrough, which lack proper support in Traefik.
Selection is done via the same label 'ingress_controller' with value
'nginx'. Deployment relies on the upstream nginx-ingress helm chart.
Change-Id: I1db2074fce9d43c03f479a6aaeb4f238d7101555
Story: 2005327
Task: 30255
The existing drivers are adapted to get node_count and master_count
information from the cluster's nodegroups. At the same time, the
output mappings were updated to reflect the changes in the stack for
the nodegroups.
story: 2005266
Change-Id: I725413e77f5a7bdb48131e8a10e5dc884b5e066a
The Kubernetes Helm repository includes a prometheus-operator chart
in its stable distribution.
This stable/prometheus-operator chart can be used to install all the
dependencies and some default configurations for using prometheus.
The installed extra charts are:
* stable/prometheus-node-exporter (data scraping)
* stable/prometheus (prometheus and alertmanager server)
* stable/grafana (visualization dashboard)
* stable/prometheus-operator (supervision and simple configuration)
The prometheus-operator is installed by using the label
monitoring_enabled=True. Also, the label grafana_admin_passwd can be
used to set the admin password for access to the grafana dashboard.
This patch allows prometheus monitoring maintenance work to be
transferred to the kubernetes/helm team.
Task: 28544
Story: 2004623
depends_on: I99d3a78085ba10030200f12bbfe58a72964e2326
Change-Id: I80d590785bf30f9d634debeaf51c0d4cce0aeb93
Signed-off-by: Diogo Guerra <dy090.guerra@gmail.com>
- Never allocate a floating IP for the etcd service.
- Introduce a new label `master_lb_floating_ip_enabled` which controls
whether Magnum allocates a floating IP for the master load balancer. This
label only takes effect when `master_lb_enabled` is set. The
default value is the same as `floating_ip_enabled`.
- The `floating_ip_enabled` property now only controls whether Magnum
should allocate floating IPs for the master and worker nodes.
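A minimal sketch of the defaulting rule for the new label (names are
illustrative):

    def master_lb_floating_ip_enabled(labels, floating_ip_enabled):
        value = labels.get('master_lb_floating_ip_enabled')
        if value is None:
            # Label not passed: default to the floating_ip_enabled value.
            return floating_ip_enabled
        return str(value).lower() == 'true'

    assert master_lb_floating_ip_enabled({}, True) is True
    assert master_lb_floating_ip_enabled(
        {'master_lb_floating_ip_enabled': 'false'}, True) is False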
Change-Id: I0a232406deaf112b0cb9e445735d7b49206c676d
Story: #2005153
Task: #29868
Deploying Node Problem Detector to all nodes to detect problems which
can be leveraged by auto healing. This is the first step of enabling
the auto healing feature.
Task: 29886
Story: 2004782
Change-Id: I1b6075025c5f369821b4136783e68b16535dc6ef
Similar to calico, deploy flannel as a DaemonSet.
Flannel can use the kubernetes API to store
data, so it doesn't need to contact the etcd
server directly anymore.
This patch drops two relatively large files for
flannel's config, flannel-config-service.sh and
write-flannel-config.sh. All required config is
in the manifests.
Additional options to the controller manager:
--allocate-node-cidrs=true and --cluster-cidr.
Change-Id: I4f1129e155e2602299394b5866165260f4ea0df8
story: 2002751
task: 24870
Fixes the problem with Mesos cluster creation where the
nodes_affinity_policy was not properly conveyed; it is required
in order to create the corresponding server group in Nova.
Change-Id: Ie8d73247ba95f20e24d6cae27963d18b35f8715a
story: 2005116
Add an enable_tiller label to install tiller in k8s_fedora_atomic
clusters. Defaults to false.
Add a tiller_tag label to select the version of tiller. If the
tag is not set, the tag that matches the helm client version in
the heat-agent will be picked. The tiller image can be stored
in a private registry and the cluster can pull it using the
container_infra_prefix label.
Install tiller securely using a helper container.
TODO:
* add instructions on how RBAC is designed
https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-in-another-namespace
* add docs on how to install addons in the cluster using this tiller
* how users can get the creds to talk to tiller
NOTE:
The main goal of this tiller is internal usage!
Users can still deploy other tillers in other namespaces.
story: 2003902
task: 26780
Change-Id: I99d3a78085ba10030200f12bbfe58a72964e2326
Signed-off-by: dioguerra <dy090.guerra@gmail.com>
Allow passing label values on cluster creation for swarm mode. This is
available in all kubernetes drivers as well as swarm, but was somehow
missed for swarm mode.
Story: 2004942
Task: 29343
Change-Id: Ie3ac66f45e27cc92993116c3df0b33873dc67e24
- Add "octavia" as one of the "ingress_controller" options.
- Add label "octavia_ingress_controller_tag".
- Use external network ID in the heat templates.
Story: 2004838
Change-Id: I7d889a054cd5feb2eeef523b20607a6c7630d777
Now cloud-provider-openstack of Kubernetes has a webhook to support
Keystone authorization and authentication. With this feature, users
can use a new label 'keystone-auth-enabled' to enable keystone
authN and authZ.
DocImpact
Task: 21637
Story: 1755770
Change-Id: I3d21ad8f55c0d7308a302f62db9e9af147a604f8
An HTTP(S) proxy can be specified when creating the template.
https://docs.openstack.org/magnum/latest/admin/magnum-proxy.html
However, it is not being utilized when talking to a public etcd discovery
service, which results in failed cluster creation. We need to be able to
use the HTTP(S) proxy when services are running behind a firewall.
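A sketch of a proxy-aware call to the public discovery service, using
the requests library (the proxy URL is a placeholder):

    import requests

    def get_discovery_url(size, proxies=None):
        # proxies, e.g. {'https': 'http://proxy.example.com:3128'},
        # carries the template's HTTP(S) proxy settings.
        resp = requests.get('https://discovery.etcd.io/new?size=%d' % size,
                            proxies=proxies, timeout=30)
        return resp.text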
Change-Id: I13d86b0dc7c232a51149107f0412219388d8c2cd
story: 2004664
* Use the external cloud-provider [0]
* Label master nodes
* Make the script that deploys the cloud-provider and clusterroles
for the apiserver a SoftwareDeployment
* Rename kube_openstack_config to cloud-config;
for cinder to work, the kubelet expects exactly this cloud config
name. Keep a copy of kube_openstack_config for backwards
compatibility.
Change-Id: Ife5558f1db4e581b64cc4a8ffead151f7b405702
Task: 22361
Story: 2002652
Co-Authored-By: Spyros Trigazis <spyridon.trigazis@cern.ch>
Cluster update was used for scaling operations only,
but if the heat templates were changed for any reason
(e.g. upgrade of the magnum server), the stack update command
was destructive.
This patch uses the existing parameter in the stack update call.
story: 1722573
task: 21583
Change-Id: Id84e5d878b21c908021e631514c2c58b3fe8b8b0
To upgrade a cluster we need to be able to set image tags,
so this change adds two labels for the corresponding containers.
Task: 23314
Story: 2003171
Change-Id: I4cd0270a69fb889c59bdb28966821adb11fd0292
A new label `service_cluster_ip_range` is added for k8s so that
users can set the service portal IP range to avoid conflicts with
the pod IP range.
Task: 22568
Story: 2002725
Change-Id: Ie6e95a953059cc4bd5cf15a44f8666b714defb13
Due to a change in Go 1.10.3 [1], which k8s v1.11.1 is built with,
magnum is now failing to create a working k8s cluster with version
1.11.1. This patch removes the use of the server auth extension for
the CA cert and uses simple public/private keys for the k8s service
account keys.
[1] https://go.googlesource.com/go/+/09fa131c99da0ef9f78c9f4f6cd955237ccc01cd
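One way to generate such a plain keypair, sketched with the python
cryptography library; the flags in the comment are the usual kubernetes
service-account flags:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Plain RSA keypair for --service-account-key-file (apiserver) and
    # --service-account-private-key-file (controller-manager); no x509
    # cert or server-auth extension involved.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    private_pem = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption())
    public_pem = key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)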
Task: 23210
Story: 2003103
Change-Id: Ieba8f55d55db2afda6888d4bc6c2caa87370d13d
Add a 'cloud_provider_enabled' label for the k8s_fedora_atomic
driver. Defaults to true. For specific kubernetes versions, if
'cinder' is selected as the 'volume_driver', it is implied that
the cloud provider will be enabled since they are combined.
The motivation for this change is that in environments with
high load on the OpenStack APIs, users might want to disable
the cloud provider.
story: 1775358
task: 1775358
Change-Id: I2920f699654af1f4ba45644ab60a04a3f70918fe
We use the same technique that is used for kubernetes clusters, with a
custom heat resource that provides either a floating IP, or
OS::Heat::None when disabled. We also add coverage tests for swarm-mode.
Change-Id: I3b5877bcd89fc2436776f49e479ffadf72c00ea3
Story: 1772433
Task: 21662
Task: 22102
Co-authored-by: Mark Goddard <mark@stackhpc.com>
Currently the option of selecting no floating IP does not apply to
a multimaster configuration, and load balancers will be expected to use
floating IPs. This patch allows the floating IP resources to be
disabled for the load balancers.
Task: 22121
Story: 2002557
Change-Id: I8f96fba8aa41319ac209baedd9d3a927aad0eb91
Multi-master deployments for the k8s driver use different service account
keys for each api/controller manager server, which leads to 401 errors
for service accounts. This patch creates a signed cert and private
key for the k8s service account keys explicitly, dedicated to the k8s
cluster, to avoid the inconsistent keys issue.
Task: 21653
Story: 1766546
Change-Id: I61547405f866d3c5a84da63de66724b55af1066a
This adds an immediate failure response if the etcd discovery service returns
a bad status code. Before, Magnum would continue to run and fail to configure,
but with only vague information about its failure. This would cause Magnum to
generally wait the entire timeout before failing.
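The fail-fast check amounts to something like this sketch (requests
based, illustrative only):

    import requests

    def check_discovery(url):
        resp = requests.get(url, timeout=60)
        if resp.status_code != 200:
            # Surface the real cause instead of timing out much later.
            raise RuntimeError('etcd discovery service returned %d: %s'
                               % (resp.status_code, resp.text))
        return resp.text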
Change-Id: Iebd51e5dc8a3e3c285cb0c2af35c19f6f37ed0a7
Task: 22193
Story: 2002584
This patch allows specification of the Cgroup driver for the Kubelet service.
The necessity of this patch was realised after upgrading Docker to the
new community edition (17.3+), which defaults to the `cgroupfs` Cgroup
driver, while Fedora Atomic (version 27) comes with Docker 1.13. The Cgroup
drivers need to be identical for the two services, Docker and Kubelet,
to be able to work together.
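A sketch of keeping the two drivers in sync by asking Docker for its
driver and feeding it to the Kubelet (illustrative, not the patch's
actual mechanism):

    import subprocess

    def docker_cgroup_driver():
        out = subprocess.check_output(
            ['docker', 'info', '--format', '{{.CgroupDriver}}'])
        return out.decode().strip()

    # kubelet_args.append('--cgroup-driver=%s' % docker_cgroup_driver())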
Story: 2002533
Task: 22079
Change-Id: Ia4b38a63ede59e18c8edb01e93acbb66f1e0b0e4
In OpenStack deployments with the Octavia service enabled, the octavia
service should be used not only for master node high availability, but
also for the k8s LoadBalancer type service implementation.
Change-Id: Ib61f59507510253794a4780a91e49aa6682c8039
Closes-Bug: #1770133
Define a set of new labels to pass additional options to the kubernetes
daemons - kubelet_options, kubeapi_options, kubescheduler_options,
kubecontroller_options, kubeproxy_options.
In all cases the default value is "", meaning no extra options are
passed to the daemons.
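A sketch of how such a label could be appended to a daemon's command
line (the function and flag values are illustrative):

    def kubelet_cmdline(base_args, labels):
        extra = labels.get('kubelet_options', '')
        # The default "" adds nothing.
        return base_args + (extra.split() if extra else [])

    print(kubelet_cmdline(['kubelet', '--config=/etc/kubelet.conf'],
                          {'kubelet_options': '--v=2 --max-pods=200'}))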
Change-Id: Idabe33b1365c7530edc53d1a81dee3c857a4ea47
Closes-Bug: #1701223
Add ingress controller configuration and backend to kubernetes clusters.
A new label 'ingress_controller' defines which backend should serve
ingress, with traefik added as the only option for now.
It is defined as a DaemonSet, with instances on all nodes defined with a
certain role. This role is set as an additional cluster label
'ingress_controller_role', with a default value of 'ingress'.
For now no node is automatically set with this role, with users or operators
having to do this manually after cluster creation.
Change-Id: I5175cf91f37e2988dc3d33042558d994810842f3
Closes-Bug: #1738808
In Fedora Atomic 27, etcd and flanneld are removed from the base image.
Install them as system containers.
* update docker-storage configuration
* add etcd and flannel tags as labels
Change-Id: I2103c7c3d50f4b68ddc11abff72bc9e3f22839f3
Closes-Bug: #1735381
Add a new label 'cert_manager_api' to kubernetes clusters controlling
whether the kubernetes certificate manager api is enabled.
The same cluster cert/key pair is used by this api. The heat agent is used
to install the key on the master node(s), as this is required for kubernetes
to later sign new certificate requests.
The master template init order is changed so the heat agent is launched
prior to enabling the services - the controller manager requires the CA key
to be locally available before being launched.
Change-Id: Ibf85147316e3a194d8a3f92cbb4ae9ce8e16c98f
Partial-Bug: #1734318
Update usage of tenant to project_id and user to user_id when handling context
fields. This drops deprecation warnings.
Change-Id: I8001be34bcc25678ed99b6b6717ad170ae6d2d77
Add a new label 'availability_zone' allowing users to specify the AZ
the nodes should be deployed in. Only one AZ can be passed for this
first implementation.
Change-Id: I9e55d7631191fffa6cc6b9bebbeb4faf2497815b
Partially-Implements: blueprint magnum-availability-zones