SIGUNOV, VLADIMIR (vs422h) fd3f0d747a Rook-ceph cluster deployment
* Type catalog should contain only core services related to
  the deployment of the Ceph cluster (monitors, osds, mgrs, etc.)
* Manifests to create pools, dashboards, and CephFS are moved to
  the function catalog.
* Code related to the OpenStack deployment is removed
* Dashboard is disabled by default; the ingress controller is removed
* Rook-operator version is upgraded to 1.5.9 to prevent an incompatibility
  with pool quota settings
* Fixed a minor bug in the site-level catalog storage definition
  and in the replacement function
* Added cleanup manifest for StorageCatalogue
* Added airshipctl phase to deploy rook-operator (see the phase sketch below)
* Implementation of the rook-ceph operator has been changed
* Added the configuration for the CSI driver images
* Added overrides for ceph.conf (see the sketch below)
* Added configuration for rook-operator and Ceph images
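
The ceph.conf overrides and image settings listed above are typically carried as ConfigMaps consumed by the Rook operator. A minimal sketch of that upstream mechanism follows; the override values and image tags are illustrative assumptions, not the exact manifests from this change.

# Sketch only: Rook merges the "config" key of the rook-config-override
# ConfigMap into the ceph.conf it renders for the cluster daemons.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph # namespace:cluster
data:
  config: |
    [global]
    # illustrative values only
    osd_pool_default_size = 3
    mon_clock_drift_allowed = 0.5
---
# Sketch only: CSI driver images are normally pinned through the
# rook-ceph-operator-config ConfigMap (or the equivalent operator env vars);
# the image tags below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph # namespace:operator
data:
  ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.2.1"
  ROOK_CSI_REGISTRAR_IMAGE: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1"
  ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4"
  ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.0.2"
  ROOK_CSI_RESIZER_IMAGE: "k8s.gcr.io/sig-storage/csi-resizer:v1.0.1"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2"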

* Merge conflict resolution

* Code standardization

* Rename rook-ceph-crds -> rook-operator

Relates-to: [WIP] Expects to deliver Rook/Ceph via 2 phases
Relates-to: #30

Change-Id: I7ec7f756e742db1595143c2dfc6751b16fb25efb
2021-04-30 14:47:15 +00:00
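
For context, the "airshipctl phase to deploy rook-operator" mentioned above is declared as a Phase document pointing at a kustomize entrypoint. The sketch below is illustrative: the executor name and documentEntryPoint path are assumptions, not taken from this change.

# Sketch only: Phase wiring for deploying the rook-operator function.
apiVersion: airshipit.org/v1alpha1
kind: Phase
metadata:
  name: rook-operator
config:
  executorRef:
    apiVersion: airshipit.org/v1alpha1
    kind: KubernetesApply
    name: kubernetes-apply
  documentEntryPoint: manifests/function/rook-operator

Once the phase is wired into the plan, it can be run with: airshipctl phase run rook-operator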


#################################################################################################################
# Create a filesystem with replication enabled, suitable for a production environment.
# A minimum of 3 OSDs on different nodes is required in this example.
# kubectl create -f filesystem.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The affinity rules to apply to the mds deployment
    placement:
      #  nodeAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: role
      #          operator: In
      #          values:
      #          - mds-node
      #  topologySpreadConstraints:
      #  tolerations:
      #  - key: mds-node
      #    operator: Exists
      #  podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - rook-ceph-mds
          # topologyKey: kubernetes.io/hostname will place MDS across different hosts
          topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - rook-ceph-mds
            # topologyKey: */zone can be used to spread MDS across different AZs
            # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> if your k8s cluster is v1.16 or lower
            # Use <topologyKey: topology.kubernetes.io/zone> if your k8s cluster is v1.17 or higher
            topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
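
To consume the cephfs filesystem defined above through the CephFS CSI driver, a StorageClass along these lines is usually applied with it. This is a minimal sketch assuming the default rook-ceph namespaces, the standard Rook-managed CSI secrets, and a data pool named cephfs-data0; adjust the names to match the actual pool definitions.

# Sketch only: StorageClass for the "cephfs" filesystem above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # <operator-namespace>.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: cephfs
  pool: cephfs-data0 # assumed data pool name (<filesystem>-data0)
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete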