SIGUNOV, VLADIMIR (vs422h) fd3f0d747a Rook-ceph cluster deployment
* Type catalog now contains only the core services required to deploy
  the Ceph cluster (monitors, OSDs, mgrs, etc.)
* Manifests that create pools, dashboards, and CephFS are moved to
  the function catalog.
* Code related to the OpenStack deployment is removed
* Dashboard is disabled by default, ingress controller is removed
* Rook-operator version is upgraded to 1.5.9 to prevent an
  incompatibility with pool quota settings (see the quotas example
  after the pool manifest below)
* Fixed a minor bug in the site-level catalogue storage definition
  and in the replacement function
* Added cleanup manifest for StorageCatalogue
* Added an airshipctl phase to deploy rook-operator (see the phase
  sketch after the commit message)
* Implementation of the rook-ceph operator has been changed
* Added the configuration for the csi driver images
* Added overrides for ceph.conf (see the ConfigMap sketch after the
  commit message)
* Added configuration for rook-operator and ceph images

* Merge conflict resolution

* Code standardization

* Rename rook-ceph-crds -> rook-operator

Relates-to: [WIP] Expects to deliver Rook/Ceph via 2 phases
Relates-to: #30

Change-Id: I7ec7f756e742db1595143c2dfc6751b16fb25efb
2021-04-30 14:47:15 +00:00
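
The commit message above adds an airshipctl phase that deploys
rook-operator. The manifest below is only a sketch of what such a phase
declaration could look like; the phase name, executor reference, and
documentEntryPoint path are illustrative assumptions, not values taken
from this change.

apiVersion: airshipit.org/v1alpha1
kind: Phase
metadata:
  # Illustrative name; the actual phase name is defined by the change itself
  name: rook-operator
config:
  executorRef:
    apiVersion: airshipit.org/v1alpha1
    kind: KubernetesApply
    name: kubernetes-apply
  # Assumed entry point; the real path depends on the manifest layout
  documentEntryPoint: manifests/function/rook-operator

Once wired into the phase bundle, such a phase would be executed with
"airshipctl phase run rook-operator".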

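
The commit also mentions overrides for ceph.conf. Rook's standard
mechanism for this is a ConfigMap named rook-config-override in the
cluster namespace, whose "config" key is layered on top of the
configuration Rook generates for the Ceph daemons. The settings below
are purely illustrative, not the overrides introduced by this change.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph # namespace:cluster
data:
  config: |
    # Illustrative values only
    [global]
    osd_pool_default_size = 3
    mon_clock_drift_allowed = 0.5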

#################################################################################################################
# Create a Ceph pool with settings for replication in production environments. A minimum of 3 OSDs on
# different hosts is required in this example.
# kubectl create -f pool.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: "pool"
  namespace: rook-ceph # namespace:cluster
spec:
  # The failure domain will spread the replicas of the data across different failure zones
  # Default value is host. Could be osd or rack, depending on your crushmap
  failureDomain: host
  # For a pool based on raw copies, specify the number of copies. A size of 1 indicates no redundancy.
  replicated:
    size: 3
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # The number for replicas per failure domain, the value must be a divisor of the replica count. If specified, the most common value is 2 for stretch clusters, where the replica count would be 4.
    # replicasPerFailureDomain: 2
    # The name of the failure domain to place further down replicas
    # subFailureDomain: host
  # Ceph CRUSH root location of the rule
  # For reference: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#types-and-buckets
  #crushRoot: my-root
  # The Ceph CRUSH device class associated with the CRUSH replicated rule
  # For reference: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#device-classes
  #deviceClass: my-class
  # Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
  # For reference: https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics
  # enableRBDStats: true
  # Set any property on a given pool
  # see https://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
  parameters:
    # Inline compression mode for the data pool
    # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
    compression_mode: none
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #target_size_ratio: ".5"
  mirroring:
    enabled: false
    # mirroring mode: pool level or per image
    # for more details see: https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring
    mode: image
    # specify the schedule(s) on which snapshots should be taken
    # snapshotSchedules:
    #   - interval: 24h # daily snapshots
    #     startTime: 14:00:00-05:00
  # reports pool mirroring status if enabled
  statusCheck:
    mirror:
      disabled: false
      interval: 60s
  # quota in bytes and/or objects, default value is 0 (unlimited)
  # see https://docs.ceph.com/en/latest/rados/operations/pools/#set-pool-quotas
  # quotas:
  #   maxSize: "10Gi" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi
  #   maxObjects: 1000000000 # 1 billion objects
  # A key/value list of annotations
  annotations:
    # key: value
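
The quotas stanza above is commented out; the commit message notes that
the rook-operator upgrade to 1.5.9 was needed to avoid an
incompatibility with pool quota settings. Assuming that operator
version, a quota-limited pool is a matter of uncommenting that stanza,
for example the same pool as above with the replication fields
abbreviated:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: "pool"
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
  quotas:
    maxSize: "10Gi"        # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi
    maxObjects: 1000000000 # 1 billion objects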