Updates for Rook 1.6.2 and Ceph 15.2.11

This patchset updates the Rook YAML files to v1.6.2. It also upgrades Ceph to v15.2.11 and Ceph-CSI to v3.3.1.

v1.6 provides several features the storage team wants:

* The operator supports upgrading multiple OSDs in parallel.
* LVM is no longer used to provision OSDs by default.
* Monitor failover can be disabled if needed (a sketch follows this list).
* The operator supports Ceph Pacific (v16).
* Ceph 15.2.11 is the default Ceph version.
* The CephClient CRD has been standardized on the controller-runtime library (kubebuilder):
  https://github.com/kubernetes-sigs/controller-runtime
* Pod Disruption Budgets are enabled by default:
  https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md
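As a sketch of how the monitor-failover and PDB items surface in the CephCluster CR: the healthCheck stanza below is an assumption based on the Rook v1.6 documentation (disabling the mon health check is what prevents automatic failover), while the disruptionManagement stanza is taken from the cluster CR change further down in this diff.

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      healthCheck:
        daemonHealth:
          # Assumed syntax per the Rook v1.6 cluster docs: disabling the
          # mon health check stops the operator from failing over mons.
          mon:
            disabled: true
      disruptionManagement:
        # PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons,
        # created and managed by the operator (the v1.6 default).
        managePodBudgets: true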

More notes:

* There are many indentation changes in common.yaml.
* operator.yaml now has a variable for enabling host networking for the CSI pods; the default is to use the host network. A snippet follows this item.
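For reference, the new setting looks like this (taken from the operator.yaml hunk below; the variable ships commented out because the operator defaults to host networking for the CSI plugins):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-ceph-operator-config
      namespace: rook-ceph
    data:
      # Set to true to enable host networking for CSI CephFS and RBD
      # nodeplugins; uncomment only to set it explicitly.
      # CSI_ENABLE_HOST_NETWORK: "true"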

* CSI image updates:

    ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.3.1"
    ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0"

* There is a very large update to crds.yaml, largely due to the adoption of controller-runtime.
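The growth comes from controller-gen emitting full structural openAPIV3Schema definitions for every CRD. A heavily trimmed, illustrative sketch of the new shape for the CephClient CRD (abbreviated here, not the full generated output):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: cephclients.ceph.rook.io
    spec:
      group: ceph.rook.io
      names:
        kind: CephClient
        listKind: CephClientList
        plural: cephclients
        singular: cephclient
      scope: Namespaced
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            # controller-gen expands every field into a structural
            # schema, which is what balloons crds.yaml.
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    # caps maps a Ceph daemon type to a capability string
                    caps:
                      type: object
                      additionalProperties:
                        type: string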

* Ceph 15.2.11 is needed to address CVE-2021-20288 (unauthorized global_id reuse in cephx).
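The corresponding mitigation shows up in the ceph.conf override further down: once all daemons and clients run patched releases, insecure global_id reclaim can be disallowed outright. A minimal sketch, assuming the standard rook-config-override ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      # assumed name; this is the ConfigMap Rook merges into ceph.conf
      name: rook-config-override
      namespace: rook-ceph
    data:
      config: |
        [mon]
        auth_allow_insecure_global_id_reclaim = false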

Change-Id: I5cf0cf63bfcf4b0ea1d242d6eae2f53adda7be5e
Frank Ritchie, 2021-05-11 13:56:37 -04:00 (committed by Alexey)
parent 2946a13806
commit e7130f4301
12 changed files with 8179 additions and 1824 deletions

View File

@@ -19,7 +19,7 @@ spec:
       # Inline compression mode for the data pool
       # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
       compression_mode: none
       # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
       # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
       #target_size_ratio: ".5"
   # The list of data pool specs. Can use replication or erasure coding.
@@ -27,41 +27,40 @@ spec:
   preserveFilesystemOnDelete: true
   # The metadata service (mds) configuration
   metadataServer:
     # The affinity rules to apply to the mds deployment
     placement:
       # nodeAffinity:
       #   requiredDuringSchedulingIgnoredDuringExecution:
       #     nodeSelectorTerms:
       #       - matchExpressions:
       #           - key: role
       #             operator: In
       #             values:
       #               - mds-node
       # topologySpreadConstraints:
       # tolerations:
       #   - key: mds-node
       #     operator: Exists
       # podAffinity:
       podAntiAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
               matchExpressions:
                 - key: app
                   operator: In
                   values:
                     - rook-ceph-mds
             # topologyKey: kubernetes.io/hostname will place MDS across different hosts
             topologyKey: kubernetes.io/hostname
         preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: app
                     operator: In
                     values:
                       - rook-ceph-mds
               # topologyKey: */zone can be used to spread MDS across different AZ
               # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
               # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster is v1.17 or upper
@@ -72,4 +71,3 @@ spec:
   # A key/value list of labels
   labels:
   #  key: value

View File

@@ -15,6 +15,6 @@ spec:
     # Inline compression mode for the data pool
     # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
     compression_mode: none
     # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
     # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
     target_size_ratio: ".5"

View File

@@ -58,7 +58,7 @@ spec:
   # quota in bytes and/or objects, default value is 0 (unlimited)
   # see https://docs.ceph.com/en/latest/rados/operations/pools/#set-pool-quotas
   # quotas:
-  #   maxSize: "10Gi" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi
+  #   maxSize: "10Gi" # valid suffixes include k, M, G, T, P, E, Ki, Mi, Gi, Ti, Pi, Ei
   #   maxObjects: 1000000000 # 1 billion objects
   # A key/value list of annotations
   annotations:

View File

@@ -8,5 +8,5 @@ spec:
   replicated:
     size: 2
   quotas:
-    maxSize: "10Gi" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi
+    maxSize: "10Gi" # valid suffixes include k, M, G, T, P, E, Ki, Mi, Gi, Ti, Pi, Ei
     maxObjects: 1000000000 # 1 billion objects

View File

@@ -8,6 +8,5 @@ spec:
   replicated:
     size: 3
   quotas:
-    maxSize: "0" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi, eg: "10Gi"
-    # "0" means no quotas. Since rook 1.5.9 you must use string as a value's type
+    maxSize: "0" # e.g. "10Gi" - valid suffixes include k, M, G, T, P, E, Ki, Mi, Gi, Ti, Pi, Ei
     maxObjects: 0 # 1000000000 = billion objects, 0 means no quotas

View File

@@ -9,7 +9,6 @@
 #
 # Most of the sections are prefixed with a 'OLM' keyword which is used to build our CSV for an OLM (Operator Life Cycle manager)
 ###################################################################################################################
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
@@ -63,26 +62,26 @@ metadata:
     operator: rook
     storage-backend: ceph
 rules:
   - apiGroups:
       - ""
       - apps
       - extensions
     resources:
       - secrets
       - pods
       - pods/log
       - services
       - configmaps
       - deployments
       - daemonsets
     verbs:
       - get
       - list
       - watch
       - patch
       - create
       - update
       - delete
 ---
 # The role for the operator to manage resources in its own namespace
 apiVersion: rbac.authorization.k8s.io/v1
@@ -94,34 +93,40 @@ metadata:
     operator: rook
     storage-backend: ceph
 rules:
   - apiGroups:
       - ""
     resources:
       - pods
       - configmaps
       - services
     verbs:
       - get
       - list
       - watch
       - patch
       - create
       - update
       - delete
   - apiGroups:
       - apps
       - extensions
     resources:
       - daemonsets
       - statefulsets
       - deployments
     verbs:
       - get
       - list
       - watch
       - create
       - update
       - delete
+  - apiGroups:
+      - batch
+    resources:
+      - cronjobs
+    verbs:
+      - delete
 ---
 # The cluster role for managing the Rook CRDs
 apiVersion: rbac.authorization.k8s.io/v1
@@ -132,115 +137,116 @@ metadata:
     operator: rook
     storage-backend: ceph
 rules:
   - apiGroups:
       - ""
     resources:
       # Pod access is needed for fencing
       - pods
       # Node access is needed for determining nodes where mons should run
       - nodes
       - nodes/proxy
       - services
     verbs:
       - get
       - list
       - watch
   - apiGroups:
       - ""
     resources:
       - events
       # PVs and PVCs are managed by the Rook provisioner
       - persistentvolumes
       - persistentvolumeclaims
       - endpoints
     verbs:
       - get
       - list
       - watch
       - patch
       - create
       - update
       - delete
   - apiGroups:
       - storage.k8s.io
     resources:
       - storageclasses
     verbs:
       - get
       - list
       - watch
   - apiGroups:
       - batch
     resources:
       - jobs
+      - cronjobs
     verbs:
       - get
       - list
       - watch
       - create
       - update
       - delete
   - apiGroups:
       - ceph.rook.io
     resources:
       - "*"
     verbs:
       - "*"
   - apiGroups:
       - rook.io
     resources:
       - "*"
     verbs:
       - "*"
   - apiGroups:
       - policy
       - apps
       - extensions
     resources:
       # This is for the clusterdisruption controller
       - poddisruptionbudgets
       # This is for both clusterdisruption and nodedrain controllers
       - deployments
       - replicasets
     verbs:
       - "*"
   - apiGroups:
       - healthchecking.openshift.io
     resources:
       - machinedisruptionbudgets
     verbs:
       - get
       - list
       - watch
       - create
       - update
       - delete
   - apiGroups:
       - machine.openshift.io
     resources:
       - machines
     verbs:
       - get
       - list
       - watch
       - create
       - update
       - delete
   - apiGroups:
       - storage.k8s.io
     resources:
       - csidrivers
     verbs:
       - create
       - delete
       - get
       - update
   - apiGroups:
       - k8s.cni.cncf.io
     resources:
       - network-attachment-definitions
     verbs:
       - get
 ---
 # Aspects of ceph-mgr that require cluster-wide access
 kind: ClusterRole
@@ -251,26 +257,26 @@ metadata:
     operator: rook
     storage-backend: ceph
 rules:
   - apiGroups:
       - ""
     resources:
       - configmaps
       - nodes
       - nodes/proxy
     verbs:
       - get
       - list
       - watch
   - apiGroups:
       - ""
     resources:
       - events
     verbs:
       - create
       - patch
       - list
       - get
       - watch
 ---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
@@ -280,27 +286,27 @@ metadata:
     operator: rook
     storage-backend: ceph
 rules:
   - apiGroups:
       - ""
     verbs:
       - "*"
     resources:
       - secrets
       - configmaps
   - apiGroups:
       - storage.k8s.io
     resources:
       - storageclasses
     verbs:
       - get
       - list
       - watch
   - apiGroups:
       - "objectbucket.io"
     verbs:
       - "*"
     resources:
       - "*"
 # OLM: END OPERATOR ROLE
 # OLM: BEGIN SERVICE ACCOUNT SYSTEM
 ---
@@ -333,9 +339,9 @@ roleRef:
   kind: Role
   name: rook-ceph-system
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-system
     namespace: rook-ceph # namespace:operator
 ---
 # Grant the rook system daemons cluster-wide access to manage the Rook CRDs, PVCs, and storage classes
 kind: ClusterRoleBinding
@@ -350,9 +356,9 @@ roleRef:
   kind: ClusterRole
   name: rook-ceph-global
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-system
     namespace: rook-ceph # namespace:operator
 # OLM: END OPERATOR ROLEBINDING
 #################################################################################################################
 # Beginning of cluster-specific resources. The example will assume the cluster will be created in the "rook-ceph"
@@ -399,25 +405,25 @@ metadata:
   name: rook-ceph-osd
   namespace: rook-ceph # namespace:cluster
 rules:
   - apiGroups: [""]
     resources: ["configmaps"]
-    verbs: [ "get", "list", "watch", "create", "update", "delete" ]
+    verbs: ["get", "list", "watch", "create", "update", "delete"]
   - apiGroups: ["ceph.rook.io"]
     resources: ["cephclusters", "cephclusters/finalizers"]
-    verbs: [ "get", "list", "create", "update", "delete" ]
+    verbs: ["get", "list", "create", "update", "delete"]
 ---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: rook-ceph-osd
 rules:
   - apiGroups:
       - ""
     resources:
       - nodes
     verbs:
       - get
       - list
 ---
 # Aspects of ceph-mgr that require access to the system namespace
 kind: ClusterRole
@@ -425,14 +431,14 @@ apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: rook-ceph-mgr-system
 rules:
   - apiGroups:
       - ""
     resources:
       - configmaps
     verbs:
       - get
       - list
       - watch
 ---
 # Aspects of ceph-mgr that operate within the cluster's namespace
 kind: Role
@@ -441,34 +447,36 @@ metadata:
   name: rook-ceph-mgr
   namespace: rook-ceph # namespace:cluster
 rules:
   - apiGroups:
       - ""
     resources:
       - pods
       - services
       - pods/log
     verbs:
       - get
       - list
       - watch
+      - create
+      - update
       - delete
   - apiGroups:
       - batch
     resources:
       - jobs
     verbs:
       - get
       - list
       - watch
       - create
       - update
       - delete
   - apiGroups:
       - ceph.rook.io
     resources:
       - "*"
     verbs:
       - "*"
 # OLM: END CLUSTER ROLE
 # OLM: BEGIN CMD REPORTER ROLE
 ---
@@ -478,18 +486,18 @@ metadata:
   name: rook-ceph-cmd-reporter
   namespace: rook-ceph # namespace:cluster
 rules:
   - apiGroups:
       - ""
     resources:
       - pods
       - configmaps
     verbs:
       - get
       - list
       - watch
       - create
       - update
       - delete
 # OLM: END CMD REPORTER ROLE
 # OLM: BEGIN CLUSTER ROLEBINDING
 ---
@@ -504,9 +512,9 @@ roleRef:
   kind: ClusterRole
   name: rook-ceph-cluster-mgmt
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-system
     namespace: rook-ceph # namespace:operator
 ---
 # Allow the osd pods in this namespace to work with configmaps
 kind: RoleBinding
@@ -519,9 +527,9 @@ roleRef:
   kind: Role
   name: rook-ceph-osd
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-osd
     namespace: rook-ceph # namespace:cluster
 ---
 # Allow the ceph mgr to access the cluster-specific resources necessary for the mgr modules
 kind: RoleBinding
@@ -534,24 +542,24 @@ roleRef:
   kind: Role
   name: rook-ceph-mgr
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-mgr
     namespace: rook-ceph # namespace:cluster
 ---
 # Allow the ceph mgr to access the rook system resources necessary for the mgr modules
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: rook-ceph-mgr-system
   namespace: rook-ceph # namespace:operator
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: rook-ceph-mgr-system
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-mgr
     namespace: rook-ceph # namespace:cluster
 ---
 # Allow the ceph mgr to access cluster-wide resources necessary for the mgr modules
 kind: ClusterRoleBinding
@@ -563,9 +571,9 @@ roleRef:
   kind: ClusterRole
   name: rook-ceph-mgr-cluster
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-mgr
     namespace: rook-ceph # namespace:cluster
 ---
 # Allow the ceph osd to access cluster-wide resources necessary for determining their topology location
@@ -578,9 +586,9 @@ roleRef:
   kind: ClusterRole
   name: rook-ceph-osd
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-osd
     namespace: rook-ceph # namespace:cluster
 # OLM: END CLUSTER ROLEBINDING
 # OLM: BEGIN CMD REPORTER ROLEBINDING
@@ -595,9 +603,9 @@ roleRef:
   kind: Role
   name: rook-ceph-cmd-reporter
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-cmd-reporter
     namespace: rook-ceph # namespace:cluster
 # OLM: END CMD REPORTER ROLEBINDING
 #################################################################################################################
 # Beginning of pod security policy resources. The example will assume the cluster will be created in the
@@ -613,8 +621,8 @@ metadata:
   # need to be renamed with a value that will match before others.
   name: 00-rook-privileged
   annotations:
-    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
-    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
+    seccomp.security.alpha.kubernetes.io/allowedProfileNames: "runtime/default"
+    seccomp.security.alpha.kubernetes.io/defaultProfileName: "runtime/default"
 spec:
   privileged: true
   allowedCapabilities:
@@ -682,7 +690,7 @@ spec:
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
-  name: 'psp:rook'
+  name: "psp:rook"
 rules:
   - apiGroups:
       - policy
@@ -700,7 +708,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: 'psp:rook'
+  name: "psp:rook"
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-system
@@ -716,9 +724,9 @@ roleRef:
   kind: ClusterRole
   name: psp:rook
 subjects:
   - kind: ServiceAccount
     name: default
     namespace: rook-ceph # namespace:cluster
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
@@ -730,9 +738,9 @@ roleRef:
   kind: ClusterRole
   name: psp:rook
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-osd
     namespace: rook-ceph # namespace:cluster
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
@@ -744,9 +752,9 @@ roleRef:
   kind: ClusterRole
   name: psp:rook
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-mgr
     namespace: rook-ceph # namespace:cluster
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
@@ -758,9 +766,9 @@ roleRef:
   kind: ClusterRole
   name: psp:rook
 subjects:
   - kind: ServiceAccount
     name: rook-ceph-cmd-reporter
     namespace: rook-ceph # namespace:cluster
 # OLM: END CLUSTER POD SECURITY POLICY BINDINGS
 # OLM: BEGIN CSI CEPHFS SERVICE ACCOUNT
 ---
@@ -893,7 +901,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: 'psp:rook'
+  name: "psp:rook"
 subjects:
   - kind: ServiceAccount
     name: rook-csi-cephfs-plugin-sa
@@ -906,7 +914,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: 'psp:rook'
+  name: "psp:rook"
 subjects:
   - kind: ServiceAccount
     name: rook-csi-cephfs-provisioner-sa
@@ -1065,7 +1073,19 @@ rules:
     verbs: ["update", "patch"]
   - apiGroups: [""]
     resources: ["configmaps"]
-    verbs: [ "get"]
+    verbs: ["get"]
+  - apiGroups: ["replication.storage.openshift.io"]
+    resources: ["volumereplications", "volumereplicationclasses"]
+    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
+  - apiGroups: ["replication.storage.openshift.io"]
+    resources: ["volumereplications/finalizers"]
+    verbs: ["update"]
+  - apiGroups: ["replication.storage.openshift.io"]
+    resources: ["volumereplications/status"]
+    verbs: ["get", "patch", "update"]
+  - apiGroups: ["replication.storage.openshift.io"]
+    resources: ["volumereplicationclasses/status"]
+    verbs: ["get"]
 # OLM: END CSI RBD CLUSTER ROLE
 # OLM: BEGIN CSI RBD CLUSTER ROLEBINDING
 ---
@@ -1076,7 +1096,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: 'psp:rook'
+  name: "psp:rook"
 subjects:
   - kind: ServiceAccount
     name: rook-csi-rbd-plugin-sa
@@ -1089,7 +1109,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: 'psp:rook'
+  name: "psp:rook"
 subjects:
   - kind: ServiceAccount
     name: rook-csi-rbd-provisioner-sa

File diff suppressed because it is too large

View File

@@ -29,6 +29,11 @@ data:
   ROOK_CSI_ENABLE_RBD: "true"
   ROOK_CSI_ENABLE_GRPC_METRICS: "false"
+  # Set to true to enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
+  # in some network configurations where the SDN does not provide access to an external cluster or
+  # there is significant drop in read/write performance.
+  # CSI_ENABLE_HOST_NETWORK: "true"
   # Set logging level for csi containers.
   # Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
   # CSI_LOG_LEVEL: "0"
@@ -64,11 +69,11 @@ data:
   # The default version of CSI supported by Rook will be started. To change the version
   # of the CSI driver to something other than what is officially supported, change
   # these images to the desired release of the CSI driver.
-  ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.2.1"
+  ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.3.1"
   ROOK_CSI_REGISTRAR_IMAGE: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1"
   ROOK_CSI_RESIZER_IMAGE: "k8s.gcr.io/sig-storage/csi-resizer:v1.0.1"
   ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4"
-  ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2"
+  ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0"
   ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.0.2"
   # (Optional) set user created priorityclassName for csi plugin pods.
@@ -274,6 +279,16 @@ data:
   # Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used
   ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
+  # Whether to enable the flex driver. By default it is enabled and is fully supported, but will be deprecated in some future release
+  # in favor of the CSI driver.
+  ROOK_ENABLE_FLEX_DRIVER: "false"
+  # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
+  # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
+  ROOK_ENABLE_DISCOVERY_DAEMON: "false"
+  # Enable volume replication controller
+  CSI_ENABLE_VOLUME_REPLICATION: "false"
+  # CSI_VOLUME_REPLICATION_IMAGE: "quay.io/csiaddons/volumereplication-operator:v0.1.0"
   # (Optional) Admission controller NodeAffinity.
   # ADMISSION_CONTROLLER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
   # (Optional) Admission controller tolerations list. Put here list of taints you want to tolerate in YAML format.
@@ -307,178 +322,162 @@ spec:
     spec:
       serviceAccountName: rook-ceph-system
      containers:
         - name: rook-ceph-operator
-          image: rook/ceph:v1.5.9
+          image: rook/ceph:v1.6.2
           args: ["ceph", "operator"]
           volumeMounts:
             - mountPath: /var/lib/rook
               name: rook-config
             - mountPath: /etc/ceph
               name: default-config-dir
           env:
             # If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
             # If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
             - name: ROOK_CURRENT_NAMESPACE_ONLY
               value: "false"
             # To disable RBAC, uncomment the following:
             # - name: RBAC_ENABLED
             #   value: "false"
             # Rook Agent toleration. Will tolerate all taints with all keys.
             # Choose between NoSchedule, PreferNoSchedule and NoExecute:
             # - name: AGENT_TOLERATION
             #   value: "NoSchedule"
             # (Optional) Rook Agent toleration key. Set this to the key of the taint you want to tolerate
             # - name: AGENT_TOLERATION_KEY
             #   value: "<KeyOfTheTaintToTolerate>"
             # (Optional) Rook Agent tolerations list. Put here list of taints you want to tolerate in YAML format.
             # - name: AGENT_TOLERATIONS
             #   value: |
             #     - effect: NoSchedule
             #       key: node-role.kubernetes.io/controlplane
             #       operator: Exists
             #     - effect: NoExecute
             #       key: node-role.kubernetes.io/etcd
             #       operator: Exists
             # (Optional) Rook Agent priority class name to set on the pod(s)
             # - name: AGENT_PRIORITY_CLASS_NAME
             #   value: "<PriorityClassName>"
             # (Optional) Rook Agent NodeAffinity.
             # - name: AGENT_NODE_AFFINITY
             #   value: "role=storage-node; storage=rook,ceph"
             # (Optional) Rook Agent mount security mode. Can by `Any` or `Restricted`.
             # `Any` uses Ceph admin credentials by default/fallback.
             # For using `Restricted` you must have a Ceph secret in each namespace storage should be consumed from and
             # set `mountUser` to the Ceph user, `mountSecret` to the Kubernetes secret name.
             # to the namespace in which the `mountSecret` Kubernetes secret namespace.
             # - name: AGENT_MOUNT_SECURITY_MODE
             #   value: "Any"
             # Set the path where the Rook agent can find the flex volumes
             # - name: FLEXVOLUME_DIR_PATH
             #   value: "<PathToFlexVolumes>"
             # Set the path where kernel modules can be found
             # - name: LIB_MODULES_DIR_PATH
             #   value: "<PathToLibModules>"
             # Mount any extra directories into the agent container
             # - name: AGENT_MOUNTS
             #   value: "somemount=/host/path:/container/path,someothermount=/host/path2:/container/path2"
             # Rook Discover toleration. Will tolerate all taints with all keys.
             # Choose between NoSchedule, PreferNoSchedule and NoExecute:
             # - name: DISCOVER_TOLERATION
             #   value: "NoSchedule"
             # (Optional) Rook Discover toleration key. Set this to the key of the taint you want to tolerate
             # - name: DISCOVER_TOLERATION_KEY
             #   value: "<KeyOfTheTaintToTolerate>"
             # (Optional) Rook Discover tolerations list. Put here list of taints you want to tolerate in YAML format.
             # - name: DISCOVER_TOLERATIONS
             #   value: |
             #     - effect: NoSchedule
             #       key: node-role.kubernetes.io/controlplane
             #       operator: Exists
             #     - effect: NoExecute
             #       key: node-role.kubernetes.io/etcd
             #       operator: Exists
             # (Optional) Rook Discover priority class name to set on the pod(s)
             # - name: DISCOVER_PRIORITY_CLASS_NAME
             #   value: "<PriorityClassName>"
             # (Optional) Discover Agent NodeAffinity.
             # - name: DISCOVER_AGENT_NODE_AFFINITY
             #   value: "role=storage-node; storage=rook, ceph"
             # (Optional) Discover Agent Pod Labels.
             # - name: DISCOVER_AGENT_POD_LABELS
             #   value: "key1=value1,key2=value2"
-            # Allow rook to create multiple file systems. Note: This is considered
-            # an experimental feature in Ceph as described at
-            # http://docs.ceph.com/docs/master/cephfs/experimental-features/#multiple-filesystems-within-a-ceph-cluster
-            # which might cause mons to crash as seen in https://github.com/rook/rook/issues/1027
-            - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
-              value: "false"
             # The logging level for the operator: INFO | DEBUG
             - name: ROOK_LOG_LEVEL
               value: "INFO"
             # The duration between discovering devices in the rook-discover daemonset.
             - name: ROOK_DISCOVER_DEVICES_INTERVAL
               value: "60m"
             # Whether to start pods as privileged that mount a host path, which includes the Ceph mon and osd pods.
             # Set this to true if SELinux is enabled (e.g. OpenShift) to workaround the anyuid issues.
             # For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
             - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
               value: "false"
             # In some situations SELinux relabelling breaks (times out) on large filesystems, and doesn't work with cephfs ReadWriteMany volumes (last relabel wins).
             # Disable it here if you have similar issues.
             # For more details see https://github.com/rook/rook/issues/2417
             - name: ROOK_ENABLE_SELINUX_RELABELING
               value: "true"
             # In large volumes it will take some time to chown all the files. Disable it here if you have performance issues.
             # For more details see https://github.com/rook/rook/issues/2254
             - name: ROOK_ENABLE_FSGROUP
               value: "true"
             # Disable automatic orchestration when new devices are discovered
             - name: ROOK_DISABLE_DEVICE_HOTPLUG
               value: "false"
             # Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+".
             # In case of more than one regex, use comma to separate between them.
             # Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
             # Add regex expression after putting a comma to blacklist a disk
             # If value is empty, the default regex will be used.
             - name: DISCOVER_DAEMON_UDEV_BLACKLIST
               value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
-            # Whether to enable the flex driver. By default it is enabled and is fully supported, but will be deprecated in some future release
-            # in favor of the CSI driver.
-            - name: ROOK_ENABLE_FLEX_DRIVER
-              value: "false"
-            # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
-            # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
-            - name: ROOK_ENABLE_DISCOVERY_DAEMON
-              value: "false"
             # Time to wait until the node controller will move Rook pods to other
             # nodes after detecting an unreachable node.
             # Pods affected by this setting are:
             # mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
             # The value used in this variable replaces the default value of 300 secs
             # added automatically by k8s as Toleration for
             # <node.kubernetes.io/unreachable>
             # The total amount of time to reschedule Rook pods in healthy nodes
             # before detecting a <not ready node> condition will be the sum of:
             #  --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
             #  --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
             - name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
               value: "5"
             # The name of the node to pass with the downward API
             - name: NODE_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: spec.nodeName
             # The pod name to pass with the downward API
             - name: POD_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.name
             # The pod namespace to pass with the downward API
             - name: POD_NAMESPACE
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.namespace
             # Uncomment it to run lib bucket provisioner in multithreaded mode
             #- name: LIB_BUCKET_PROVISIONER_THREADS
             #  value: "5"
       # Uncomment it to run rook operator on the host network
       #hostNetwork: true
       volumes:
         - name: rook-config
           emptyDir: {}
         - name: default-config-dir
           emptyDir: {}
 # OLM: END OPERATOR DEPLOYMENT

View File

@@ -108,12 +108,12 @@ spec:
   rook-operator:
     rook-ceph-operator:
       rook-ceph-operator:
-        image: rook/ceph:v1.5.9
+        image: rook/ceph:v1.6.2
       rook-ceph-operator-config:
         ceph_daemon:
-          image: ceph/ceph:v15.2.10
+          image: ceph/ceph:v15.2.11
         rook_csi_ceph_image:
-          image: quay.io/cephcsi/cephcsi:v3.2.1
+          image: quay.io/cephcsi/cephcsi:v3.3.1
         rook_csi_registrar_image:
           image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
         rook_csi_resizer_image:
@@ -121,15 +121,15 @@ spec:
         rook_csi_provisioner_image:
           image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
         rook_csi_snapshotter_image:
-          image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
+          image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
         rook_csi_attacher_image:
           image: k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
   storage-rook:
     ceph:
       ceph-version:
-        image: ceph/ceph:v15.2.10
+        image: ceph/ceph:v15.2.11
       rook-ceph-tools:
-        image: rook/ceph:v1.5.9
+        image: rook/ceph:v1.6.2
   image_components:
     # image_components are organized by

View File

@@ -16,6 +16,7 @@ data:
     mon_warn_on_pool_no_redundancy = true
     # # You can add other default configuration sections
     # # to create fully customized ceph.conf
-    # [mon]
+    [mon]
+    auth_allow_insecure_global_id_reclaim = false
     # [osd]
     # [rgw]

View File

@@ -6,12 +6,16 @@ metadata:
 spec:
   dataDirHostPath: /var/lib/rook
   cephVersion:
-    #see: https://tracker.ceph.com/issues/48797
-    image: ceph/ceph:v15.2.10
+    image: ceph/ceph:v15.2.11
     #allowUnsupported: true
   mon:
     count: 3
     allowMultiplePerNode: false
+  mgr:
+    count: 1
+    modules:
+      - name: pg_autoscaler
+        enabled: true
   dashboard:
     enabled: true
     # If you are going to use the dashboard together with ingress-controller,
@@ -57,4 +61,17 @@ spec:
   #   deviceFilter: "^/dev/sd[c-h]"
   # Also you can configure each device and/or each node. Please refer to the official rook
   # documentation for the branch 1.5.x
+  # The section for configuring management of daemon disruptions during upgrade or fencing.
+  disruptionManagement:
+    # If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
+    # via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
+    # block eviction of OSDs by default and unblock them safely when drains are detected.
+    managePodBudgets: true
+    # A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
+    # default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
+    osdMaintenanceTimeout: 30
+    # A duration in minutes that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up.
+    # Operator will continue with the next drain if the timeout exceeds. It only works if `managePodBudgets` is `true`.
+    # No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain.
+    pgHealthCheckTimeout: 0
 ---

View File

@@ -18,34 +18,34 @@ spec:
     spec:
      dnsPolicy: ClusterFirstWithHostNet
       containers:
         - name: rook-ceph-tools
-          image: rook/ceph:v1.5.9
+          image: rook/ceph:v1.6.2
           command: ["/tini"]
           args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
           imagePullPolicy: IfNotPresent
           env:
             - name: ROOK_CEPH_USERNAME
               valueFrom:
                 secretKeyRef:
                   name: rook-ceph-mon
                   key: ceph-username
             - name: ROOK_CEPH_SECRET
               valueFrom:
                 secretKeyRef:
                   name: rook-ceph-mon
                   key: ceph-secret
           volumeMounts:
             - mountPath: /etc/ceph
               name: ceph-config
             - name: mon-endpoint-volume
               mountPath: /etc/rook
       volumes:
         - name: mon-endpoint-volume
           configMap:
             name: rook-ceph-mon-endpoints
             items:
               - key: data
                 path: mon-endpoints
         - name: ceph-config
           emptyDir: {}
       tolerations: