
k8s_fedora: Use external kubernetes/cloud-provider-openstack

* Use the external cloud-provider [0]
* Label master nodes
* Make the script that deploys the cloud-provider and clusterroles
  for the apiserver a SoftwareDeployment
* Rename kube_openstack_config to cloud-config; for cinder to work,
  the kubelet expects the cloud config under exactly this name. Keep
  a copy named kube_openstack_config for backwards compatibility.

Change-Id: Ife5558f1db4e581b64cc4a8ffead151f7b405702
Task: 22361
Story: 2002652
Co-Authored-By: Spyros Trigazis <spyridon.trigazis@cern.ch>
changes/77/577477/11
Jim Bach 3 years ago
committed by Spyros Trigazis
parent
commit
6c61a1a949
  1. doc/source/user/index.rst (42 changed lines)
  2. magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh (12 changed lines)
  3. magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-minion.sh (4 changed lines)
  4. magnum/drivers/common/templates/kubernetes/fragments/kube-apiserver-to-kubelet-role.sh (241 changed lines)
  5. magnum/drivers/common/templates/kubernetes/fragments/write-heat-params-master.yaml (3 changed lines)
  6. magnum/drivers/common/templates/kubernetes/fragments/write-kube-os-config.sh (10 changed lines)
  7. magnum/drivers/heat/k8s_fedora_template_def.py (1 changed line)
  8. magnum/drivers/k8s_fedora_atomic_v1/templates/kubecluster.yaml (22 changed lines)
  9. magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml (17 changed lines)
  10. magnum/tests/contrib/copy_instance_logs.sh (1 changed line)
  11. magnum/tests/unit/drivers/test_template_definition.py (6 changed lines)
  12. releasenotes/notes/kubernetes-cloud-config-6c9a4bfec47e3bb4.yaml (14 changed lines)

doc/source/user/index.rst (42 changed lines)

@@ -317,6 +317,8 @@ the table are linked to more details elsewhere in the user guide.
+---------------------------------------+--------------------+---------------+
| `kube_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `cloud_provider_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `etcd_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `flannel_tag`_ | see below | see below |
@@ -1096,6 +1098,18 @@ _`kube_tag`
If unset, the current Magnum version's default Kubernetes release is
installed.
_`cloud_provider_tag`
This label allows users to select `a specific release for the openstack
cloud provider
<https://hub.docker.com/r/k8scloudprovider/openstack-cloud-controller-manager/tags/>`_.
If unset, the current Magnum version's default
kubernetes/cloud-provider-openstack release is installed.
For version compatibility, please consult the `release page
<https://github.com/kubernetes/cloud-provider-openstack/releases>`_ of
the cloud-provider. The images are hosted `here
<https://hub.docker.com/r/k8scloudprovider/openstack-cloud-controller-manager/tags/>`_.
Stein default: v0.2.0
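As an illustration of how these labels are consumed, the tag could be pinned at cluster-template creation time. This is a hedged command-line fragment, not part of the change itself: the template name, image name, and tag value are placeholders, and only the `--labels` shape is taken from the standard OpenStack client.

```shell
# Hypothetical example: pin the external cloud provider release via labels.
# "k8s-atomic" and "fedora-atomic-27" are placeholder names.
openstack coe cluster template create k8s-atomic \
    --image fedora-atomic-27 \
    --coe kubernetes \
    --external-network public \
    --labels cloud_provider_enabled=true,cloud_provider_tag=v0.2.0
```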
_`etcd_tag`
This label allows users to select `a specific etcd version,
based on its container tag
@@ -2695,19 +2709,21 @@ or can be built locally using diskimagebuilder. Details can be found in the
<https://github.com/openstack/magnum/tree/master/magnum/elements/fedora-atomic>`_
The image currently has the following OS/software:
+-------------+-----------+
| OS/software | version |
+=============+===========+
| Fedora | 26 |
+-------------+-----------+
| Docker | 1.13.1 |
+-------------+-----------+
| Kubernetes | 1.9.3 |
+-------------+-----------+
| etcd | 3.1.3 |
+-------------+-----------+
| Flannel | 0.7.0 |
+-------------+-----------+
+--------------------------+-----------+
| OS/software | version |
+==========================+===========+
| Fedora | 27 |
+--------------------------+-----------+
| Docker | 1.13.1 |
+--------------------------+-----------+
| Kubernetes | 1.11.5 |
+--------------------------+-----------+
| etcd | v3.2.7 |
+--------------------------+-----------+
| Flannel | v0.9.0 |
+--------------------------+-----------+
| Cloud Provider OpenStack | v0.2.0 |
+--------------------------+-----------+
The following software are managed as systemd services:

magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh (12 changed lines)

@@ -76,9 +76,10 @@ if [ -n "${ADMISSION_CONTROL_LIST}" ] && [ "${TLS_DISABLED}" == "False" ]; then
fi
if [ -n "$TRUST_ID" ] && [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" = "true" ]; then
KUBE_API_ARGS="$KUBE_API_ARGS --cloud-config=/etc/kubernetes/kube_openstack_config --cloud-provider=openstack"
KUBE_API_ARGS="$KUBE_API_ARGS --cloud-provider=external"
fi
sed -i '
/^KUBE_API_ADDRESS=/ s/=.*/="'"${KUBE_API_ADDRESS}"'"/
/^KUBE_SERVICE_ADDRESSES=/ s|=.*|="--service-cluster-ip-range='"$PORTAL_NETWORK_CIDR"'"|
@@ -97,9 +98,11 @@ if [ -n "${ADMISSION_CONTROL_LIST}" ] && [ "${TLS_DISABLED}" == "False" ]; then
fi
if [ -n "$TRUST_ID" ] && [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" = "true" ]; then
KUBE_CONTROLLER_MANAGER_ARGS="$KUBE_CONTROLLER_MANAGER_ARGS --cloud-config=/etc/kubernetes/kube_openstack_config --cloud-provider=openstack"
KUBE_CONTROLLER_MANAGER_ARGS="$KUBE_CONTROLLER_MANAGER_ARGS --cloud-provider=external"
KUBE_CONTROLLER_MANAGER_ARGS="$KUBE_CONTROLLER_MANAGER_ARGS --external-cloud-volume-plugin=openstack --cloud-config=/etc/kubernetes/cloud-config"
fi
if [ "$(echo $CERT_MANAGER_API | tr '[:upper:]' '[:lower:]')" = "true" ]; then
KUBE_CONTROLLER_MANAGER_ARGS="$KUBE_CONTROLLER_MANAGER_ARGS --cluster-signing-cert-file=$CERT_DIR/ca.crt --cluster-signing-key-file=$CERT_DIR/ca.key"
fi
@@ -119,6 +122,10 @@ KUBELET_ARGS="${KUBELET_ARGS} --cluster_dns=${DNS_SERVICE_IP} --cluster_domain=$
KUBELET_ARGS="${KUBELET_ARGS} --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
KUBELET_ARGS="${KUBELET_ARGS} ${KUBELET_OPTIONS}"
if [ -n "$TRUST_ID" ] && [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" = "true" ]; then
KUBELET_ARGS="${KUBELET_ARGS} --cloud-provider=external"
fi
# For using default log-driver, other options should be ignored
sed -i 's/\-\-log\-driver\=journald//g' /etc/sysconfig/docker
@@ -130,6 +137,7 @@ if [ "$NETWORK_DRIVER" = "calico" ]; then
KUBELET_ARGS="${KUBELET_ARGS} --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
fi
KUBELET_ARGS="${KUBELET_ARGS} --register-with-taints=CriticalAddonsOnly=True:NoSchedule,dedicated=master:NoSchedule"
KUBELET_ARGS="${KUBELET_ARGS} --node-labels=node-role.kubernetes.io/master=\"\""
KUBELET_KUBECONFIG=/etc/kubernetes/kubelet-config.yaml
HOSTNAME_OVERRIDE=$(hostname --short | sed 's/\.novalocal//')
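The guard added above lower-cases `CLOUD_PROVIDER_ENABLED` before comparing, so `True`, `TRUE`, and `true` all enable the external provider. A minimal standalone sketch of that flag-guard pattern (the sample value is assumed; in the real script it comes from /etc/sysconfig/heat-params):

```shell
#!/bin/sh
# Sample value standing in for the one sourced from heat-params.
CLOUD_PROVIDER_ENABLED="True"

KUBELET_ARGS=""
# Lower-case the flag so the comparison is case-insensitive.
if [ -n "${CLOUD_PROVIDER_ENABLED}" ] && \
   [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" = "true" ]; then
    KUBELET_ARGS="${KUBELET_ARGS} --cloud-provider=external"
fi
echo "${KUBELET_ARGS}"
```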

magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-minion.sh (4 changed lines)

@@ -120,8 +120,8 @@ KUBELET_ARGS="${KUBELET_ARGS} --cluster_dns=${DNS_SERVICE_IP} --cluster_domain=$
KUBELET_ARGS="${KUBELET_ARGS} --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
KUBELET_ARGS="${KUBELET_ARGS} ${KUBELET_OPTIONS}"
if [ -n "$TRUST_ID" ] && [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" = "true" ]; then
KUBELET_ARGS="$KUBELET_ARGS --cloud-provider=openstack --cloud-config=/etc/kubernetes/kube_openstack_config"
if [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" = "true" ]; then
KUBELET_ARGS="${KUBELET_ARGS} --cloud-provider=external"
fi
# Workaround for Cinder support (fixed in k8s >= 1.6)

magnum/drivers/common/templates/kubernetes/fragments/kube-apiserver-to-kubelet-role.sh (241 changed lines)

@@ -5,6 +5,8 @@ printf "Starting to run ${step}\n"
. /etc/sysconfig/heat-params
set -x
echo "Waiting for Kubernetes API..."
until [ "ok" = "$(curl --silent http://127.0.0.1:8080/healthz)" ]
do
@@ -79,4 +81,243 @@ EOF
kubectl apply --validate=false -f ${ADMIN_RBAC}
if [ -z "${TRUST_ID}" ] || [ "$(echo "${CLOUD_PROVIDER_ENABLED}" | tr '[:upper:]' '[:lower:]')" != "true" ]; then
exit 0
fi
# TODO: add heat variables for master count to determine leader-elect true/false?
occm_image="${CONTAINER_INFRA_PREFIX:-docker.io/k8scloudprovider/}openstack-cloud-controller-manager:${CLOUD_PROVIDER_TAG}"
OCCM=/srv/magnum/kubernetes/openstack-cloud-controller-manager.yaml
[ -f ${OCCM} ] || {
echo "Writing File: ${OCCM}"
mkdir -p $(dirname ${OCCM})
cat << EOF > ${OCCM}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cloud-controller-manager
namespace: kube-system
---
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:cloud-controller-manager
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- '*'
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- apiGroups:
- ""
resources:
- services
verbs:
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- get
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- '*'
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- list
- watch
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- list
- get
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:cloud-node-controller
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- '*'
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:pvl-controller
rules:
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- '*'
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
kind: List
metadata: {}
---
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:cloud-node-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:cloud-node-controller
subjects:
- kind: ServiceAccount
name: cloud-node-controller
namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:pvl-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:pvl-controller
subjects:
- kind: ServiceAccount
name: pvl-controller
namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:cloud-controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
name: cloud-controller-manager
namespace: kube-system
kind: List
metadata: {}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
k8s-app: openstack-cloud-controller-manager
name: openstack-cloud-controller-manager
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: openstack-cloud-controller-manager
template:
metadata:
labels:
k8s-app: openstack-cloud-controller-manager
spec:
hostNetwork: true
serviceAccountName: cloud-controller-manager
containers:
- name: openstack-cloud-controller-manager
image: ${occm_image}
command:
- /bin/openstack-cloud-controller-manager
- --v=2
- --cloud-config=/etc/kubernetes/cloud-config
- --cluster-name=${CLUSTER_UUID}
- --use-service-account-credentials=true
- --bind-address=127.0.0.1
volumeMounts:
- name: cloudconfig
mountPath: /etc/kubernetes
readOnly: true
volumes:
- name: cloudconfig
hostPath:
path: /etc/kubernetes
tolerations:
# this is required so CCM can bootstrap itself
- key: node.cloudprovider.kubernetes.io/uninitialized
value: "true"
effect: NoSchedule
# this is to have the daemonset runnable on master nodes
# the taint may vary depending on your cluster setup
- key: dedicated
value: master
effect: NoSchedule
- key: CriticalAddonsOnly
value: "True"
effect: NoSchedule
# this is to restrict CCM to only run on master nodes
# the node selector may vary depending on your cluster setup
nodeSelector:
node-role.kubernetes.io/master: ""
EOF
}
kubectl create -f ${OCCM}
printf "Finished running ${step}\n"
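The `[ -f ${OCCM} ] || { ... }` guard above writes the manifest only when it does not already exist, so re-running the deployment script does not clobber a file that is already in place. A standalone sketch of that idempotent-write pattern (the path is a temp-dir stand-in, not the real /srv/magnum/kubernetes location):

```shell
#!/bin/sh
# Stand-in path; the real script uses a fixed path under /srv/magnum.
MANIFEST="$(mktemp -d)/occm.yaml"

write_manifest() {
    # Only write the file if it is not already present (idempotent re-runs).
    [ -f "${MANIFEST}" ] || {
        mkdir -p "$(dirname "${MANIFEST}")"
        echo "kind: DaemonSet" > "${MANIFEST}"
    }
}

write_manifest
echo "local edit" >> "${MANIFEST}"
write_manifest   # a second run must not overwrite the local edit
grep -q "local edit" "${MANIFEST}" && echo "preserved"
```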

magnum/drivers/common/templates/kubernetes/fragments/write-heat-params-master.yaml (3 changed lines)

@@ -42,6 +42,8 @@ write_files:
HTTPS_PROXY="$HTTPS_PROXY"
NO_PROXY="$NO_PROXY"
KUBE_TAG="$KUBE_TAG"
CLOUD_PROVIDER_TAG="$CLOUD_PROVIDER_TAG"
CLOUD_PROVIDER_ENABLED="$CLOUD_PROVIDER_ENABLED"
ETCD_TAG="$ETCD_TAG"
FLANNEL_TAG="$FLANNEL_TAG"
KUBE_VERSION="$KUBE_VERSION"
@@ -50,7 +52,6 @@ write_files:
TRUSTEE_PASSWORD="$TRUSTEE_PASSWORD"
TRUST_ID="$TRUST_ID"
AUTH_URL="$AUTH_URL"
CLOUD_PROVIDER_ENABLED="$CLOUD_PROVIDER_ENABLED"
INSECURE_REGISTRY_URL="$INSECURE_REGISTRY_URL"
CONTAINER_INFRA_PREFIX="$CONTAINER_INFRA_PREFIX"
SYSTEM_PODS_INITIAL_DELAY="$SYSTEM_PODS_INITIAL_DELAY"

magnum/drivers/common/templates/kubernetes/fragments/write-kube-os-config.sh (10 changed lines)

@@ -3,7 +3,12 @@
. /etc/sysconfig/heat-params
mkdir -p /etc/kubernetes/
KUBE_OS_CLOUD_CONFIG=/etc/kubernetes/kube_openstack_config
if [ -z "${TRUST_ID}" ]; then
exit 0
fi
KUBE_OS_CLOUD_CONFIG=/etc/kubernetes/cloud-config
cp /etc/pki/tls/certs/ca-bundle.crt /etc/kubernetes/ca-bundle.crt
# Generate the configuration for Kubernetes services
@@ -30,3 +35,6 @@ EOF
if [ -n "${REGION_NAME}" ]; then
sed -i '/ca-file/a region='${REGION_NAME}'' $KUBE_OS_CLOUD_CONFIG
fi
# backwards compatibility, some apps may expect this file from previous magnum versions.
cp ${KUBE_OS_CLOUD_CONFIG} /etc/kubernetes/kube_openstack_config
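The rename-plus-copy step above can be sketched in isolation: the config is written under the name the kubelet expects (cloud-config) and then copied to the legacy kube_openstack_config path for applications built against older magnum. In this sketch a temp dir stands in for /etc/kubernetes and the [Global] values are placeholders:

```shell
#!/bin/sh
# Temp dir stands in for /etc/kubernetes; auth-url is a placeholder value.
KUBE_DIR="$(mktemp -d)"
KUBE_OS_CLOUD_CONFIG="${KUBE_DIR}/cloud-config"

# Write the cloud config under the name the kubelet expects...
cat > "${KUBE_OS_CLOUD_CONFIG}" <<EOF
[Global]
auth-url=https://keystone.example.org/v3
EOF

# ...then keep a copy under the legacy name for backwards compatibility.
cp "${KUBE_OS_CLOUD_CONFIG}" "${KUBE_DIR}/kube_openstack_config"

cmp -s "${KUBE_OS_CLOUD_CONFIG}" "${KUBE_DIR}/kube_openstack_config" && echo "identical"
```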

magnum/drivers/heat/k8s_fedora_template_def.py (1 changed line)

@@ -110,6 +110,7 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
'calico_kube_controllers_tag', 'calico_ipv4pool',
'etcd_tag', 'flannel_tag',
'cloud_provider_enabled',
'cloud_provider_tag',
'prometheus_tag',
'grafana_tag',
'heat_container_agent_tag']

magnum/drivers/k8s_fedora_atomic_v1/templates/kubecluster.yaml (22 changed lines)

@@ -328,6 +328,20 @@ parameters:
description: tag of the k8s containers used to provision the kubernetes cluster
default: v1.11.1
# FIXME update cloud_provider_tag when a fix for PVC is released
# https://github.com/kubernetes/cloud-provider-openstack/pull/405
cloud_provider_tag:
type: string
description:
tag of the kubernetes/cloud-provider-openstack
https://hub.docker.com/r/k8scloudprovider/openstack-cloud-controller-manager/tags/
default: v0.2.0
cloud_provider_enabled:
type: boolean
description: Enable or disable the openstack kubernetes cloud provider
default: true
etcd_tag:
type: string
description: tag of the etcd system container
@@ -490,11 +504,6 @@ parameters:
The private key will be used to sign generated k8s service account
tokens.
cloud_provider_enabled:
type: boolean
description: Enable or disable the openstack kubernetes cloud provider
default: true
prometheus_tag:
type: string
description: tag of the prometheus container
@@ -698,6 +707,8 @@ resources:
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
kube_tag: {get_param: kube_tag}
cloud_provider_tag: {get_param: cloud_provider_tag}
cloud_provider_enabled: {get_param: cloud_provider_enabled}
kube_version: {get_param: kube_version}
etcd_tag: {get_param: etcd_tag}
flannel_tag: {get_param: flannel_tag}
@@ -706,7 +717,6 @@ resources:
trustee_password: {get_param: trustee_password}
trust_id: {get_param: trust_id}
auth_url: {get_param: auth_url}
cloud_provider_enabled: {get_param: cloud_provider_enabled}
insecure_registry_url: {get_param: insecure_registry_url}
container_infra_prefix: {get_param: container_infra_prefix}
etcd_lb_vip: {get_attr: [etcd_lb, address]}

magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml (17 changed lines)

@@ -221,6 +221,16 @@ parameters:
type: string
description: tag of the k8s containers used to provision the kubernetes cluster
cloud_provider_tag:
type: string
description:
tag of the kubernetes/cloud-provider-openstack
https://hub.docker.com/r/k8scloudprovider/openstack-cloud-controller-manager/tags/
cloud_provider_enabled:
type: boolean
description: Enable or disable the openstack kubernetes cloud provider
etcd_tag:
type: string
description: tag of the etcd system container
@@ -376,10 +386,6 @@ parameters:
The private key will be used to sign generated k8s service account
tokens.
cloud_provider_enabled:
type: boolean
description: Enable or disable the openstack kubernetes cloud provider
prometheus_tag:
type: string
description: tag of prometheus container
@@ -460,6 +466,8 @@ resources:
"$HTTPS_PROXY": {get_param: https_proxy}
"$NO_PROXY": {get_param: no_proxy}
"$KUBE_TAG": {get_param: kube_tag}
"$CLOUD_PROVIDER_TAG": {get_param: cloud_provider_tag}
"$CLOUD_PROVIDER_ENABLED": {get_param: cloud_provider_enabled}
"$ETCD_TAG": {get_param: etcd_tag}
"$FLANNEL_TAG": {get_param: flannel_tag}
"$KUBE_VERSION": {get_param: kube_version}
@@ -467,7 +475,6 @@ resources:
"$TRUSTEE_USER_ID": {get_param: trustee_user_id}
"$TRUSTEE_PASSWORD": {get_param: trustee_password}
"$TRUST_ID": {get_param: trust_id}
"$CLOUD_PROVIDER_ENABLED": {get_param: cloud_provider_enabled}
"$INSECURE_REGISTRY_URL": {get_param: insecure_registry_url}
"$CONTAINER_INFRA_PREFIX": {get_param: container_infra_prefix}
"$ETCD_LB_VIP": {get_param: etcd_lb_vip}

magnum/tests/contrib/copy_instance_logs.sh (1 changed line)

@@ -87,6 +87,7 @@ if [[ "$COE" == "kubernetes" ]]; then
remote_exec $SSH_USER "sudo tail -n +1 -- /etc/kubernetes/certs/*" kubernetes-certs
remote_exec $SSH_USER "sudo cat /usr/local/bin/wc-notify" bin-wc-notify
remote_exec $SSH_USER "sudo cat /etc/kubernetes/kube_openstack_config" kube_openstack_config
remote_exec $SSH_USER "sudo cat /etc/kubernetes/cloud-config" cloud-config
remote_exec $SSH_USER "sudo cat /etc/sysconfig/flanneld" flanneld.sysconfig
remote_exec $SSH_USER "sudo cat /usr/local/bin/flannel-config" bin-flannel-config
remote_exec $SSH_USER "sudo cat /etc/sysconfig/flannel-network.json" flannel-network.json.sysconfig

magnum/tests/unit/drivers/test_template_definition.py (6 changed lines)

@@ -399,6 +399,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'kubeproxy_options')
cloud_provider_enabled = mock_cluster.labels.get(
'cloud_provider_enabled')
cloud_provider_tag = mock_cluster.labels.get(
'cloud_provider_tag')
service_cluster_ip_range = mock_cluster.labels.get(
'service_cluster_ip_range')
prometheus_tag = mock_cluster.labels.get(
@@ -433,6 +435,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'kubescheduler_options': kubescheduler_options,
'kubeproxy_options': kubeproxy_options,
'cloud_provider_enabled': cloud_provider_enabled,
'cloud_provider_tag': cloud_provider_tag,
'username': 'fake_user',
'magnum_url': mock_osc.magnum_url.return_value,
'region_name': mock_osc.cinder_region_name.return_value,
@@ -575,6 +578,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'kubeproxy_options')
cloud_provider_enabled = mock_cluster.labels.get(
'cloud_provider_enabled')
cloud_provider_tag = mock_cluster.labels.get(
'cloud_provider_tag')
service_cluster_ip_range = mock_cluster.labels.get(
'service_cluster_ip_range')
prometheus_tag = mock_cluster.labels.get(
@@ -609,6 +614,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'kubescheduler_options': kubescheduler_options,
'kubeproxy_options': kubeproxy_options,
'cloud_provider_enabled': cloud_provider_enabled,
'cloud_provider_tag': cloud_provider_tag,
'username': 'fake_user',
'magnum_url': mock_osc.magnum_url.return_value,
'region_name': mock_osc.cinder_region_name.return_value,

releasenotes/notes/kubernetes-cloud-config-6c9a4bfec47e3bb4.yaml (14 changed lines)

@@ -0,0 +1,14 @@
---
features:
- |
Use the external cloud provider in k8s_fedora_atomic. The
cloud_provider_tag label can be used to select the container tag for it,
together with the cloud_provider_enabled label. The cloud provider runs
as a DaemonSet on all master nodes.
upgrade:
- |
The cloud config for kubernetes has been renamed from
/etc/kubernetes/kube_openstack_config to /etc/kubernetes/cloud-config, as
the kubelet expects this exact name when the external cloud provider is
used. A copy of /etc/kubernetes/kube_openstack_config is kept in place for
applications developed for previous versions of magnum.