CephFS RWX Support in Host-based Ceph

Incorporated patchset 1 review comments
Updated patchset 5 review comments
Updated patchset 6 review comments
Fixed merge conflicts
Updated patchset 8 review comments

Change-Id: Icd7b08ab69273f6073b960a13cf59905532f851a
Signed-off-by: Juanita-Balaraj <juanita.balaraj@windriver.com>
This commit is contained in:
Juanita-Balaraj 2021-04-19 00:22:38 -04:00
parent ec42ebdda0
commit 63cd4f5fdc
113 changed files with 1634 additions and 973 deletions

View File

@ -1 +1 @@
.. [#f1] See :ref:`Data Network Planning <data-network-planning>` for more information.
.. [#] See :ref:`Data Network Planning <data-network-planning>` for more information.

View File

@ -149,4 +149,3 @@ Configure the cables associated with the management |LAG| so that the primary
interface within the |LAG| with the lowest |MAC| address on the active
controller connects to the primary interface within the |LAG| with the lowest
|MAC| address on standby controller.

View File

@ -16,7 +16,7 @@ here.
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Component | Approved Hardware |
+==========================================================+=========================================================================================================================================================================================================================================================================================================================================================================================================================================+
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Hardware Platforms | - Hewlett Packard Enterprise |
| | |
| | |
@ -122,7 +122,7 @@ here.
| | |
| | - Mellanox MT27700 Family \(ConnectX-4\) 40G |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for Data Interfaces [#f1]_ | The following NICs are supported: |
| NICs Verified for Data Interfaces | The following NICs are supported: |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |

View File

@ -88,6 +88,7 @@
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RBAC| replace:: :abbr:`RBAC (Role-Based Access Control)`
.. |RBD| replace:: :abbr:`RBD (RADOS Block Device)`
.. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)`
.. |SAN| replace:: :abbr:`SAN (Subject Alternative Name)`
.. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)`

View File

@ -0,0 +1,3 @@
{
"restructuredtext.confPath": ""
}

View File

@ -12,9 +12,56 @@ for containers to persist files beyond the lifetime of the container, a
Persistent Volume Claim can be created to obtain a persistent volume which the
container can mount and read/write files.
Management and customization tasks for Kubernetes Persistent Volume Claims can
be accomplished using the **rbd-provisioner** helm chart. The
**rbd-provisioner** helm chart is included in the **platform-integ-apps**
system application, which is automatically loaded and applied as part of the
|prod| installation.
Management and customization tasks for Kubernetes |PVCs|
can be accomplished by using StorageClasses set up by two Helm charts:
**rbd-provisioner** and **cephfs-provisioner**. The **rbd-provisioner**
and **cephfs-provisioner** Helm charts are included in the
**platform-integ-apps** system application, which is automatically loaded and
applied as part of the |prod| installation.
PVCs are supported with the following options:
- with accessMode of ReadWriteOnce backed by Ceph |RBD|
- only one container can attach to these PVCs
- management and customization tasks related to these PVCs are done
through the **rbd-provisioner** Helm chart provided by
platform-integ-apps
- with accessMode of ReadWriteMany backed by CephFS
- multiple containers can attach to these PVCs
- management and customization tasks related to these PVCs are done
through the **cephfs-provisioner** Helm chart provided by
platform-integ-apps
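As a quick reference, the two access modes map onto claim specs as follows \(a minimal sketch; the claim names are illustrative, and complete worked procedures appear later in this guide\):
.. code-block:: none
# ReadWriteOnce claim backed by Ceph RBD (rbd-provisioner, class "general")
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwo-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general
---
# ReadWriteMany claim backed by CephFS (cephfs-provisioner, class "cephfs")
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwx-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs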
After **platform-integ-apps** is applied, the following system configurations are
created:
- **Ceph Pools**
.. code-block:: none
~(keystone_admin)]$ ceph osd lspools
kube-rbd
kube-cephfs-data
kube-cephfs-metadata
- **CephFS**
.. code-block:: none
~(keystone_admin)]$ ceph fs ls
name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
- **Kubernetes StorageClasses**
.. code-block:: none
~(keystone_admin)]$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
cephfs ceph.com/cephfs Delete Immediate false
general (default) ceph.com/rbd Delete Immediate false

View File

@ -250,8 +250,8 @@ appropriate storage-class name you set up in step :ref:`2
<configure-an-external-netapp-deployment-as-the-storage-backend>`
\(**netapp-nas-backend** in this example\) to the persistent volume
claim's yaml configuration file. For more information about this file, see
|usertasks-doc|: :ref:`Create Persistent Volume Claims
<kubernetes-user-tutorials-creating-persistent-volume-claims>`.
|usertasks-doc|: :ref:`Create ReadWriteOnce Persistent Volume Claims
<kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims>`.
.. seealso::

View File

@ -1,243 +0,0 @@
.. clb1615317605723
.. _configure-ceph-file-system-for-internal-ceph-storage-backend:
============================================================
Configure Ceph File System for Internal Ceph Storage Backend
============================================================
CephFS \(Ceph File System\) is a highly available, multi-use, performant file
store for a variety of applications, built on top of Ceph's Distributed Object
Store \(RADOS\).
.. rubric:: |context|
CephFS provides the following functionality:
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-h2b-h1k-x4b:
- Enabled by default \(along with existing Ceph RBD\)
- Highly available, multi-use, performant file storage
- Scalability using a separate RADOS pool for the file's metadata
- Metadata using Metadata Servers \(MDS\) that provide high availability and
scalability
- Deployed in HA configurations for all |prod| deployment options
- Integrates **cephfs-provisioner** supporting Kubernetes **StorageClass**
- Enables configuration of:
- **PersistentVolumeClaim** \(|PVC|\) using **StorageClass** and
ReadWriteMany accessmode
- Two or more application pods mounting |PVC| and reading/writing data to it
CephFS is configured automatically when a Ceph backend is enabled and provides
a Kubernetes **StorageClass**. Once enabled, every node in the cluster that
serves as a Ceph monitor will also be configured as a CephFS Metadata Server
\(MDS\). Creation of the CephFS pools, filesystem initialization, and creation
of Kubernetes resource is done by the **platform-integ-apps** application,
using **cephfs-provisioner** Helm chart.
When applied, **platform-integ-apps** creates two Ceph pools for each storage
backend configured, one for CephFS data and a second pool for metadata:
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-jp2-yn2-x4b:
- **CephFS data pool**: The pool name for the default storage backend is
**kube-cephfs-data**
- **Metadata pool**: The pool name is **kube-cephfs-metadata**
When a new storage backend is created, a new CephFS data pool will be
created with the name **kube-cephfs-data-** \<storage\_backend\_name\>, and
the metadata pool will be created with the name
**kube-cephfs-metadata-** \<storage\_backend\_name\>. The default
filesystem name is **kube-cephfs**.
When a new storage backend is created, a new filesystem will be created
with the name **kube-cephfs-** \<storage\_backend\_name\>.
For example, if the user adds a storage backend named, 'test',
**cephfs-provisioner** will create the following pools:
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-i3w-h1f-x4b:
- kube-cephfs-data-test
- kube-cephfs-metadata-test
Also, the application **platform-integ-apps** will create a filesystem
**kube-cephfs-test**.
If you list all the pools in a cluster with 'test' storage backend, you should
see four pools created by **cephfs-provisioner** using **platform-integ-apps**.
Use the following command to list the CephFS |OSD| pools created.
.. code-block:: none
$ ceph osd lspools
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ul-nnv-lr2-x4b:
- kube-rbd
- kube-rbd-test
- **kube-cephfs-data**
- **kube-cephfs-data-test**
- **kube-cephfs-metadata**
- **kube-cephfs-metadata-test**
Use the following command to list Ceph File Systems:
.. code-block:: none
$ ceph fs ls
name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
name: kube-cephfs-silver, metadata pool: kube-cephfs-metadata-silver, data pools: [kube-cephfs-data-silver ]
:command:`cephfs-provisioner` creates a **StorageClass** in the Kubernetes
cluster for each storage backend present.
These **StorageClass** resources should be used to create
**PersistentVolumeClaim** resources in order to allow pods to use CephFS. The
default **StorageClass** resource is named **cephfs**, and additional resources
are created with the name \<storage\_backend\_name\> **-cephfs** for each
additional storage backend created.
For example, when listing **StorageClass** resources in a cluster that is
configured with a storage backend named 'test', the following storage classes
are created:
.. code-block:: none
$ kubectl get sc
NAME PROVISIONER RECLAIM.. VOLUME.. ALLOWVOLUME.. AGE
cephfs ceph.com/cephfs Delete Immediate false 65m
general (default) ceph.com/rbd Delete Immediate false 66m
test-cephfs ceph.com/cephfs Delete Immediate false 65m
test-general ceph.com/rbd Delete Immediate false 66m
All Kubernetes resources \(pods, StorageClasses, PersistentVolumeClaims,
configmaps, etc.\) used by the provisioner are created in the **kube-system**
namespace.
.. note::
Support for multiple Ceph file systems is not enabled by default in the
cluster. You can enable it manually, for example, using the command:
:command:`ceph fs flag set enable\_multiple true --yes-i-really-mean-it`.
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-section-dq5-wgk-x4b:
-------------------------------
Persistent Volume Claim \(PVC\)
-------------------------------
.. rubric:: |context|
If you need to create a Persistent Volume Claim, you can create it using
**kubectl**. For example:
.. _configure-ceph-file-system-for-internal-ceph-storage-backend-ol-lrh-pdf-x4b:
#. Create a file named **my\_pvc.yaml**, and add the following content:
.. code-block:: none
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: claim1
namespace: kube-system
spec:
storageClassName: cephfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
#. To apply the updates, use the following command:
.. code-block:: none
$ kubectl apply -f my_pvc.yaml
#. After the |PVC| is created, use the following command to see the |PVC|
bound to the existing **StorageClass**.
.. code-block:: none
$ kubectl get pvc -n kube-system
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim1 Bound pvc-5.. 1Gi RWX cephfs
#. A persistent volume is automatically provisioned by the **StorageClass**
for the |PVC|. Use the following command to list the persistent volumes.
.. code-block:: none
$ kubectl get pv -n kube-system
NAME CAPACITY ACCESS..RECLAIM.. STATUS CLAIM STORAGE.. REASON AGE
pvc-5.. 1Gi RWX Delete Bound kube-system/claim1 cephfs 26s
#. Create Pods to use the |PVC|. Create a file my\_pod.yaml:
.. code-block:: none
kind: Pod
apiVersion: v1
metadata:
name: test-pod
namespace: kube-system
spec:
containers:
- name: test-pod
image: gcr.io/google_containers/busybox:1.24
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/SUCCESS && exit 0 || exit 1"
volumeMounts:
- name: pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: pvc
persistentVolumeClaim:
claimName: claim1
#. Apply the **my\_pod.yaml** configuration, using the following command.
.. code-block:: none
$ kubectl apply -f my_pod.yaml
For more information on Persistent Volume Support, see :ref:`About Persistent
Volume Support <about-persistent-volume-support>` and
|usertasks-doc|: :ref:`Creating Persistent Volume Claims
<kubernetes-user-tutorials-creating-persistent-volume-claims>`.

View File

@ -48,6 +48,11 @@ the internal Ceph storage backend.
third Ceph monitor instance is configured by default on the first
storage node.
.. note::
CephFS support requires Metadata Servers \(MDS\) to be deployed. When
CephFS is configured, an MDS is deployed automatically on each
node that has been configured to run a Ceph Monitor.
#. Configure Ceph OSDs. For more information, see :ref:`Provision
Storage on a Controller or Storage Host Using Horizon
<provision-storage-on-a-controller-or-storage-host-using-horizon>`.
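A hedged way to verify the behavior described in the note above is to check that an active MDS exists for the file system once CephFS is configured \(illustrative output; names and standby counts vary by deployment\):
.. code-block:: none
~(keystone_admin)]$ ceph mds stat
kube-cephfs:1 {0=controller-0=up:active}, 1 up:standby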

View File

@ -0,0 +1,71 @@
.. iqu1616951298602
.. _create-readwritemany-persistent-volume-claims:
=============================================
Create ReadWriteMany Persistent Volume Claims
=============================================
Container images have an ephemeral file system by default. For data to survive
beyond the lifetime of a container, it can read and write files to a persistent
volume obtained with a Persistent Volume Claim \(PVC\) created to provide
persistent storage.
.. rubric:: |context|
For multiple containers to mount the same |PVC|, create a |PVC| with accessMode
of ReadWriteMany \(RWX\).
The following steps show an example of creating a 1GB |PVC| with ReadWriteMany
accessMode.
.. rubric:: |proc|
.. _iqu1616951298602-steps-bdr-qnm-tkb:
#. Create the **rwx-test-claim** Persistent Volume Claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > rwx-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rwx-test-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: cephfs
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f rwx-claim.yaml
persistentvolumeclaim/rwx-test-claim created
This results in a 1GB |PVC| being created. You can view the |PVC| using the
following command.
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
rwx-test-claim Bound pvc-df9f.. 1Gi RWX cephfs
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolume
NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS
pvc-df9f.. 1Gi RWX Delete Bound default/rwx-test-claim cephfs

View File

@ -0,0 +1,38 @@
.. mgt1616518429546
.. _default-behavior-of-the-cephfs-provisioner:
==========================================
Default Behavior of the CephFS Provisioner
==========================================
The default Ceph Cluster configuration set up during |prod| installation
contains a single storage tier, storage, containing all the OSDs.
The default CephFS provisioner service runs within the kube-system namespace
and has a single storage class, '**cephfs**', which is configured to:
.. _mgt1616518429546-ul-g3n-qdb-bpb:
- use the default 'storage' Ceph storage tier
- use a **kube-cephfs-data** and **kube-cephfs-metadata** Ceph pool, and
- only support |PVC| requests from the following namespaces: kube-system,
default and kube-public.
The full details of the **cephfs-provisioner** configuration can be viewed
using the following commands:
.. code-block:: none
~(keystone_admin)]$ system helm-override-list platform-integ-apps
This command provides the chart names and the overrides namespaces. The following command shows the full configuration of the **cephfs-provisioner** chart:
.. code-block:: none
~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
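The class configuration in the output includes settings similar to the following \(an abbreviated sketch, matching the defaults shown in :ref:`Enable ReadWriteMany PVC Support in Additional Namespaces <enable-readwritemany-pvc-support-in-additional-namespaces>`\):
.. code-block:: none
classes:
- additionalNamespaces: [default, kube-public]
  chunk_size: 64
  claim_root: /pvc-volumes
  crush_rule_name: storage_tier_ruleset
  data_pool_name: kube-cephfs-data
  fs_name: kube-cephfs
  metadata_pool_name: kube-cephfs-metadata
  name: cephfs
  replication: 2
  userId: ceph-pool-kube-cephfs-data
  userSecretName: ceph-pool-kube-cephfs-data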
See :ref:`Create ReadWriteMany Persistent Volume Claims <create-readwritemany-persistent-volume-claims>`
and :ref:`Mount ReadWriteMany Persistent Volumes in Containers <mount-readwritemany-persistent-volumes-in-containers>`
for an example of how to create and mount a ReadWriteMany PVC from the **cephfs**
storage class.

View File

@ -9,9 +9,8 @@ Default Behavior of the RBD Provisioner
The default Ceph Cluster configuration set up during |prod| installation
contains a single storage tier, storage, containing all the |OSDs|.
The default rbd-provisioner service runs within the kube-system namespace
and has a single storage class, 'general', which is configured to:
The default |RBD| provisioner service runs within the kube-system namespace and
has a single storage class, 'general', which is configured to:
.. _default-behavior-of-the-rbd-provisioner-ul-zg2-r2q-43b:
@ -19,7 +18,8 @@ and has a single storage class, 'general', which is configured to:
- use a **kube-rbd** ceph pool, and
- only support PVC requests from the following namespaces: kube-system, default and kube-public.
- only support PVC requests from the following namespaces: kube-system,
default and kube-public.
The full details of the rbd-provisioner configuration can be viewed with
@ -35,9 +35,7 @@ This command provides the chart names and the overrides namespaces.
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
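Abbreviated, the class configuration in that output resembles the following \(a sketch based on the 'general' class defaults used elsewhere in this guide; the replication value depends on the deployment\):
.. code-block:: none
classes:
- additionalNamespaces: [default, kube-public]
  chunk_size: 64
  crush_rule_name: storage_tier_ruleset
  name: general
  pool_name: kube-rbd
  replication: 2
  userId: ceph-pool-kube-rbd
  userSecretName: ceph-pool-kube-rbd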
See :ref:`Create Persistent Volume Claims
<storage-configuration-create-persistent-volume-claims>` and
:ref:`Mount Persistent Volumes in Containers
<storage-configuration-mount-persistent-volumes-in-containers>` for
details on how to create and mount a PVC from this storage class.
See :ref:`Create ReadWriteOnce Persistent Volume Claims <storage-configuration-create-readwriteonce-persistent-volume-claims>` and
:ref:`Mount ReadWriteOnce Persistent Volumes in Containers <storage-configuration-mount-readwriteonce-persistent-volumes-in-containers>`
for an example of how to create and mount a ReadWriteOnce |PVC| from the
'general' storage class.

View File

@ -1,27 +1,27 @@
.. csl1561030322454
.. _enable-additional-storage-classes:
.. _enable-rbd-readwriteonly-additional-storage-classes:
=================================
Enable Additional Storage Classes
=================================
===================================================
Enable RBD ReadWriteOnce Additional Storage Classes
===================================================
Additional storage classes can be added to the default rbd-provisioner
Additional storage classes can be added to the default |RBD| provisioner
service.
.. rubric:: |context|
Some reasons for adding an additional storage class include:
.. _enable-additional-storage-classes-ul-nz1-r3q-43b:
.. _enable-rbd-readwriteonly-additional-storage-classes-ul-nz1-r3q-43b:
- managing Ceph resources for particular namespaces in a separate Ceph
pool; simply for Ceph partitioning reasons
- using an alternate Ceph Storage Tier, for example, with faster drives
A modification to the configuration \(helm overrides\) of the
**rbd-provisioner** service is required to enable an additional storage class
A modification to the configuration \(Helm overrides\) of the
|RBD| provisioner service is required to enable an additional storage class.
The following example illustrates adding a second storage class to be
utilized by a specific namespace.
@ -33,19 +33,19 @@ utilized by a specific namespace.
.. rubric:: |proc|
#. List installed helm chart overrides for the platform-integ-apps.
#. List installed Helm chart overrides for the platform-integ-apps.
.. code-block:: none
~(keystone_admin)$ system helm-override-list platform-integ-apps
+------------------+----------------------+
+--------------------+----------------------+
| chart name | overrides namespaces |
+------------------+----------------------+
+--------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| cephfs-provisioner | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
+------------------+----------------------+
+--------------------+----------------------+
#. Review existing overrides for the rbd-provisioner chart. You will refer
to this information in the following step.
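The overrides file applied in the next step is created in a portion of this procedure that is not shown in this diff. A plausible sketch, which keeps the default **general** class and adds a **special** class for the **new-sc-app** namespace \(the pool and user names for the new class are illustrative assumptions\):
.. code-block:: none
~(keystone_admin)$ cat <<EOF > /home/sysadmin/update-namespaces.yaml
classes:
- additionalNamespaces: [default, kube-public]
  chunk_size: 64
  crush_rule_name: storage_tier_ruleset
  name: general
  pool_name: kube-rbd
  replication: 2
  userId: ceph-pool-kube-rbd
  userSecretName: ceph-pool-kube-rbd
- additionalNamespaces: [new-sc-app]
  chunk_size: 64
  crush_rule_name: storage_tier_ruleset
  name: special
  pool_name: kube-rbd-special
  replication: 2
  userId: ceph-pool-kube-rbd-special
  userSecretName: ceph-pool-kube-rbd-special
EOF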
@ -85,7 +85,6 @@ utilized by a specific namespace.
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \
platform-integ-apps rbd-provisioner
+----------------+-----------------------------------------+
| Property | Value |
+----------------+-----------------------------------------+
@ -123,7 +122,6 @@ utilized by a specific namespace.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+-----------------------------------------+
| Property | Value |
+--------------------+-----------------------------------------+
@ -161,13 +159,11 @@ utilized by a specific namespace.
#. Apply the overrides.
#. Run the :command:`application-apply` command.
.. code-block:: none
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
@ -187,7 +183,6 @@ utilized by a specific namespace.
.. code-block:: none
~(keystone_admin)$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
@ -196,9 +191,8 @@ utilized by a specific namespace.
| | | manifest | | | |
+-------------+---------+------ --------+---------------+---------+-----------+
You can now create and mount persistent volumes from the new
rbd-provisioner's **special** storage class from within the
**new-sc-app** application-specific namespace.
You can now create and mount persistent volumes from the new |RBD|
provisioner's **special** storage class from within the **new-sc-app**
application-specific namespace.
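After the apply completes, the new class should be visible alongside the default \(illustrative output; ages and column widths vary\):
.. code-block:: none
~(keystone_admin)$ kubectl get sc
NAME                PROVISIONER    AGE
general (default)   ceph.com/rbd   6h39m
special             ceph.com/rbd   5m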

View File

@ -0,0 +1,220 @@
.. wyf1616954377690
.. _enable-readwritemany-pvc-support-in-additional-namespaces:
=========================================================
Enable ReadWriteMany PVC Support in Additional Namespaces
=========================================================
The default **cephfs-provisioner** storage class, **cephfs**, is enabled for
the default, kube-system, and kube-public namespaces. To enable an additional
namespace, for example for an application-specific namespace, a modification
to the configuration \(Helm overrides\) of the **cephfs-provisioner** service
is required.
.. rubric:: |context|
The following example illustrates the configuration of three additional
application-specific namespaces to access the **cephfs-provisioner**
**cephfs** storage class.
.. note::
Due to limitations with templating and merging of overrides, the entire
storage class must be redefined in the override when updating specific
values.
.. rubric:: |proc|
#. List installed Helm chart overrides for the platform-integ-apps.
.. code-block:: none
~(keystone_admin)]$ system helm-override-list platform-integ-apps
+--------------------+----------------------+
| chart name | overrides namespaces |
+--------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| cephfs-provisioner | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
+--------------------+----------------------+
#. Review existing overrides for the cephfs-provisioner chart. You will refer
to this information in the following step.
.. code-block:: none
~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+----------------------------------------------------------+
| Property | Value |
+--------------------+----------------------------------------------------------+
| attributes | enabled: true |
| | |
| combined_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: |
| | - 192.168.204.3:6789 |
| | - 192.168.204.1:6789 |
| | - 192.168.204.2:6789 |
| | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | chunk_size: 64 |
| | claim_root: /pvc-volumes |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | replication: 2 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | global: |
| | replicas: 2 |
| | |
| name | cephfs-provisioner |
| namespace | kube-system |
| system_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: ['192.168.204.3:6789', '192.168.204.1:6789', |
| | '192.168.204.2:6789'] |
| | classes: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | claim_root: /pvc-volumes |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | replication: 2 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | global: {replicas: 2} |
| | |
| user_overrides | None |
+--------------------+----------------------------------------------------------+
#. Create an overrides yaml file defining the new namespaces.
In this example, create the file /home/sysadmin/update-namespaces.yaml with the following content:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
classes:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
claim_root: /pvc-volumes
crush_rule_name: storage_tier_ruleset
data_pool_name: kube-cephfs-data
fs_name: kube-cephfs
metadata_pool_name: kube-cephfs-metadata
name: cephfs
replication: 2
userId: ceph-pool-kube-cephfs-data
userSecretName: ceph-pool-kube-cephfs-data
EOF
#. Apply the overrides file to the chart.
.. code-block:: none
~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps cephfs-provisioner kube-system
+----------------+----------------------------------------------+
| Property | Value |
+----------------+----------------------------------------------+
| name | cephfs-provisioner |
| namespace | kube-system |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | claim_root: /pvc-volumes |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | replication: 2 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
+----------------+----------------------------------------------+
#. Confirm that the new overrides have been applied to the chart.
The following output has been edited for brevity.
.. code-block:: none
~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+---------------------------------------------+
| Property | Value |
+--------------------+---------------------------------------------+
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | claim_root: /pvc-volumes |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | replication: 2 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data|
+--------------------+---------------------------------------------+
#. Apply the overrides.
#. Run the :command:`application-apply` command.
.. code-block:: none
~(keystone_admin)]$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-24 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2019-05-26T22:27:26.547181+00:00 |
+---------------+----------------------------------+
#. Monitor progress using the :command:`application-list` command.
.. code-block:: none
~(keystone_admin)]$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-24 | platform | manifest.yaml | applied | completed |
| integ-apps | | -integration | | | |
| | | -manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
You can now create and mount PVCs from the default CephFS provisioner's
**cephfs** storage class, from within these application-specific
namespaces.
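To verify, you can create an RWX claim in one of the newly enabled namespaces; a minimal sketch that reuses the claim layout from :ref:`Create ReadWriteMany Persistent Volume Claims <create-readwritemany-persistent-volume-claims>`:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF | kubectl apply -n new-app -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwx-test-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs
EOF
persistentvolumeclaim/rwx-test-claim created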

View File

@ -1,22 +1,21 @@
.. vqw1561030204071
.. _enable-pvc-support-in-additional-namespaces:
.. _enable-readwriteonce-pvc-support-in-additional-namespaces:
===========================================
Enable PVC Support in Additional Namespaces
===========================================
=========================================================
Enable ReadWriteOnce PVC Support in Additional Namespaces
=========================================================
The default general **rbd-provisioner** storage class is enabled for the
default, kube-system, and kube-public namespaces. To enable an additional
namespace, for example for an application-specific namespace, a
modification to the configuration \(helm overrides\) of the
**rbd-provisioner** service is required.
|RBD| provisioner service is required.
.. rubric:: |context|
The following example illustrates the configuration of three additional
application-specific namespaces to access the rbd-provisioner's **general**
storage class.
application-specific namespaces to access the |RBD| provisioner's **general** storage class.
.. note::
Due to limitations with templating and merging of overrides, the entire
@ -30,13 +29,14 @@ storage class.
.. code-block:: none
~(keystone_admin)$ system helm-override-list platform-integ-apps
+------------------+----------------------+
+--------------------+----------------------+
| chart name | overrides namespaces |
+------------------+----------------------+
+--------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| cephfs-provisioner | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
+------------------+----------------------+
+--------------------+----------------------+
#. Review existing overrides for the rbd-provisioner chart. You will refer
to this information in the following step.
@ -94,29 +94,28 @@ storage class.
+--------------------+--------------------------------------------------+
#. Create an overrides yaml file defining the new namespaces.
In this example we will create the file
/home/sysadmin/update-namespaces.yaml with the following content:
#. Create an overrides yaml file defining the new namespaces. In this example we will create the file /home/sysadmin/update-namespaces.yaml with the following content:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
classes:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: general
pool_name: kube-rbd
replication: 1
replication: 2
userId: ceph-pool-kube-rbd
userSecretName: ceph-pool-kube-rbd
EOF
#. Apply the overrides file to the chart.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \
platform-integ-apps rbd-provisioner kube-system
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps rbd-provisioner kube-system
+----------------+-----------------------------------------+
| Property | Value |
+----------------+-----------------------------------------+
@ -133,7 +132,7 @@ storage class.
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | replication: 2 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
+----------------+-----------------------------------------+
@ -166,14 +165,13 @@ storage class.
| | crush_rule_name: storage_tier_ruleset|
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | replication: 2 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
+--------------------+----------------------------------------+
#. Apply the overrides.
#. Run the :command:`application-apply` command.
.. code-block:: none
@ -183,7 +181,7 @@ storage class.
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-5 |
| app_version | 1.0-24 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
@ -201,18 +199,12 @@ storage class.
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-5 | platform | manifest.yaml | applied | completed |
| platform- | 1.0-24 | platform | manifest.yaml | applied | completed |
| integ-apps | | -integration | | | |
| | | -manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
You can now create and mount PVCs from the default |RBD| provisioner's
**general** storage class, from within these application-specific namespaces.
You can now create and mount PVCs from the default
**rbd-provisioner's general** storage class, from within these
application-specific namespaces.
#. Apply the secret to the new **rbd-provisioner** namespace.
.. code-block:: none
~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n <namespace> -f -
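For example, to copy the secret into the **new-app** namespace enabled earlier in this procedure and confirm that it exists there:
.. code-block:: none
~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n default -o yaml | grep -v '^\s*namespace:\s' | kubectl apply -n new-app -f -
~(keystone_admin)$ kubectl get secret ceph-pool-kube-rbd -n new-app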

View File

@ -60,7 +60,6 @@ Storage Backends
storage-backends
configure-the-internal-ceph-storage-backend
configure-ceph-file-system-for-internal-ceph-storage-backend
configure-an-external-netapp-deployment-as-the-storage-backend
configure-netapps-using-a-private-docker-registry
uninstall-the-netapp-backend
@ -121,13 +120,33 @@ Persistent Volume Support
:maxdepth: 1
about-persistent-volume-support
***************
RBD Provisioner
***************
.. toctree::
:maxdepth: 1
default-behavior-of-the-rbd-provisioner
storage-configuration-create-persistent-volume-claims
storage-configuration-mount-persistent-volumes-in-containers
enable-pvc-support-in-additional-namespaces
enable-additional-storage-classes
storage-configuration-create-readwriteonce-persistent-volume-claims
storage-configuration-mount-readwriteonce-persistent-volumes-in-containers
enable-readwriteonce-pvc-support-in-additional-namespaces
enable-rbd-readwriteonly-additional-storage-classes
install-additional-rbd-provisioners
****************************
Ceph File System Provisioner
****************************
.. toctree::
:maxdepth: 1
default-behavior-of-the-cephfs-provisioner
create-readwritemany-persistent-volume-claims
mount-readwritemany-persistent-volumes-in-containers
enable-readwritemany-pvc-support-in-additional-namespaces
----------------
Storage Profiles
----------------

View File

@ -6,7 +6,7 @@
Install Additional RBD Provisioners
===================================
You can launch additional dedicated rdb-provisioners to support specific
You can launch additional dedicated |RBD| provisioners to support specific
applications using dedicated pools, storage classes, and namespaces.
.. rubric:: |context|
@ -14,11 +14,11 @@ applications using dedicated pools, storage classes, and namespaces.
This can be useful, for example, to allow an application to have control
over its own persistent volume provisioner, that is, managing the Ceph
pool, storage tier, allowed namespaces, and so on, without requiring the
kubernetes admin to modify the default rbd-provisioner service in the
kubernetes admin to modify the default |RBD| provisioner service in the
kube-system namespace.
This procedure uses standard Helm mechanisms to install a second
rbd-provisioner.
|RBD| provisioner.
.. rubric:: |proc|
@ -101,6 +101,6 @@ rbd-provisioner.
general (default) ceph.com/rbd 6h39m
special-storage-class ceph.com/rbd 5h58m
You can now create and mount PVCs from the new rbd-provisioner's
You can now create and mount PVCs from the new |RBD| provisioner's
**2nd-storage** storage class, from within the **isolated-app**
namespace.
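For example, a claim against the new storage class might look like the following \(a minimal sketch; the claim name is an illustrative assumption\):
.. code-block:: none
~(keystone_admin)$ cat <<EOF | kubectl apply -n isolated-app -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: 2nd-storage
EOF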

View File

@ -0,0 +1,169 @@
.. fkk1616520068837
.. _mount-readwritemany-persistent-volumes-in-containers:
====================================================
Mount ReadWriteMany Persistent Volumes in Containers
====================================================
You can attach a ReadWriteMany |PVC| to multiple containers, and that |PVC| can
be written to by all containers.
.. rubric:: |context|
This example shows how a volume is claimed and mounted by each container
replica of a deployment with 2 replicas, and each container replica can read
and write to the |PVC|. It is the responsibility of an individual micro-service
within an application to make a volume claim, mount it, and use it.
.. rubric:: |prereq|
You must have created the |PVCs|. This procedure uses |PVCs| with names and
configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persistent Volume Claims <create-readwritemany-persistent-volume-claims>` .
.. rubric:: |proc|
.. _fkk1616520068837-steps-fqj-flr-tkb:
#. Create the busybox container with the persistent volumes created from the PVCs mounted. This deployment will create two replicas mounting the same persistent volume.
#. Create a yaml file definition for the busybox container.
.. code-block:: none
% cat <<EOF > wrx-busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: wrx-busybox
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 2
selector:
matchLabels:
run: busybox
template:
metadata:
labels:
run: busybox
spec:
containers:
- args:
- sh
image: busybox
imagePullPolicy: Always
name: busybox
stdin: true
tty: true
volumeMounts:
- name: pvc1
mountPath: "/mnt1"
restartPolicy: Always
volumes:
- name: pvc1
persistentVolumeClaim:
claimName: rwx-test-claim
EOF
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f wrx-busybox.yaml
deployment.apps/wrx-busybox created
#. Attach to the busybox and create files on the Persistent Volumes.
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
wrx-busybox-6455997c76-4kg8v 1/1 Running 0 108s
wrx-busybox-6455997c76-crmw6 1/1 Running 0 108s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t
#. From the container's console, list the disks to verify that the Persistent Volume is attached.
.. code-block:: none
% df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 1783748 29658172 6% /
tmpfs 65536 0 65536 0% /dev
tmpfs 5033188 0 5033188 0% /sys/fs/cgroup
ceph-fuse 516542464 643072 515899392 0% /mnt1
The PVC is mounted as /mnt1.
#. Create files in the mount.
.. code-block:: none
# cd /mnt1
# touch i-was-here-${HOSTNAME}
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v
#. End the container session.
.. code-block:: none
% exit
Session ended, resume using 'kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t' command when the pod is running
#. Connect to the other busybox container.
.. code-block:: none
% kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t
#. Optional: From the container's console list the disks to verify that the PVC is attached.
.. code-block:: none
% df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 1783888 29658032 6% /
tmpfs 65536 0 65536 0% /dev
tmpfs 5033188 0 5033188 0% /sys/fs/cgroup
ceph-fuse 516542464 643072 515899392 0% /mnt1
#. Verify that the file created from the other container exists and that this container can also write to the Persistent Volume.
.. code-block:: none
# cd /mnt1
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v
# echo ${HOSTNAME}
wrx-busybox-6455997c76-crmw6
# touch i-was-here-${HOSTNAME}
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v i-was-here-wrx-busybox-6455997c76-crmw6
#. End the container session.
.. code-block:: none
% exit
Session ended, resume using 'kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t' command when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f wrx-busybox.yaml
For more information on Persistent Volume Support, see :ref:`About Persistent Volume Support <about-persistent-volume-support>`.

View File

@ -116,12 +116,7 @@ For more information about Trident, see
- :ref:`Configure the Internal Ceph Storage Backend
<configure-the-internal-ceph-storage-backend>`
- :ref:`Configuring Ceph File System for Internal Ceph Storage Backend
<configure-ceph-file-system-for-internal-ceph-storage-backend>`
- :ref:`Configure an External Netapp Deployment as the Storage Backend
<configure-an-external-netapp-deployment-as-the-storage-backend>`
- :ref:`Uninstall the Netapp Backend <uninstall-the-netapp-backend>`

View File

@ -1,98 +0,0 @@
.. xco1564696647432
.. _storage-configuration-create-persistent-volume-claims:
===============================
Create Persistent Volume Claims
===============================
Container images have an ephemeral file system by default. For data to
survive beyond the lifetime of a container, it can read and write files to
a persistent volume obtained with a |PVC| created to provide persistent
storage.
.. rubric:: |context|
The following steps create two 1Gb persistent volume claims.
.. rubric:: |proc|
.. _storage-configuration-create-persistent-volume-claims-d891e32:
#. Create the **test-claim1** persistent volume claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f claim1.yaml
persistentvolumeclaim/test-claim1 created
#. Create the **test-claim2** persistent volume claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > claim2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f claim2.yaml
persistentvolumeclaim/test-claim2 created
.. rubric:: |result|
Two 1Gb persistent volume claims have been created. You can view them with
the following command.
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim1 Bound pvc-aaca.. 1Gi RWO general 2m56s
test-claim2 Bound pvc-e93f.. 1Gi RWO general 68s
For more information on using CephFS for internal Ceph backends, see
:ref:`Configure Ceph File System for Internal Ceph Storage Backend <configure-ceph-file-system-for-internal-ceph-storage-backend>`.

View File

@ -0,0 +1,105 @@
.. xco1564696647432
.. _storage-configuration-create-readwriteonce-persistent-volume-claims:
=============================================
Create ReadWriteOnce Persistent Volume Claims
=============================================
Container images have an ephemeral file system by default. For data to
survive beyond the lifetime of a container, it can read and write files to
a persistent volume obtained with a |PVC| created to provide persistent
storage.
.. rubric:: |context|
The following steps show an example of creating two 1GB |PVCs| with
ReadWriteOnce accessMode.
.. rubric:: |proc|
.. _storage-configuration-create-readwriteonce-persistent-volume-claims-d891e32:
#. Create the **rwo-test-claim1** Persistent Volume Claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > rwo-claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rwo-test-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f rwo-claim1.yaml
persistentvolumeclaim/rwo-test-claim1 created
#. Create the **rwo-test-claim2** Persistent Volume Claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > rwo-claim2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rwo-test-claim2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f rwo-claim2.yaml
persistentvolumeclaim/rwo-test-claim2 created
.. rubric:: |result|
Two 1GB |PVCs| have been created. You can view the |PVCs| using
the following command.
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
rwo-test-claim1 Bound pvc-aaca.. 1Gi RWO general
rwo-test-claim2 Bound pvc-e93f.. 1Gi RWO general
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolume
NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS
pvc-08d8.. 1Gi RWO Delete Bound default/rwo-test-claim1 general
pvc-af10.. 1Gi RWO Delete Bound default/rwo-test-claim2 general

View File

@ -1,28 +1,26 @@
.. pjw1564749970685
.. _storage-configuration-mount-persistent-volumes-in-containers:
.. _storage-configuration-mount-readwriteonce-persistent-volumes-in-containers:
======================================
Mount Persistent Volumes in Containers
======================================
====================================================
Mount ReadWriteOnce Persistent Volumes in Containers
====================================================
You can launch, attach, and terminate a busybox container to mount |PVCs| in
your cluster.
You can attach ReadWriteOnce |PVCs| to a container when launching a container,
and changes to those |PVCs| will persist even if that container gets terminated
and restarted.
.. rubric:: |context|
This example shows how a volume is claimed and mounted by a simple running
container. It is the responsibility of an individual micro-service within
an application to make a volume claim, mount it, and use it. For example,
the stx-openstack application will make volume claims for **mariadb** and
**rabbitmq** via their helm charts to orchestrate this.
container, and the contents of the volume claim persists across restarts of
the container. It is the responsibility of an individual micro-service within
an application to make a volume claim, mount it, and use it.
.. rubric:: |prereq|
You must have created the persistent volume claims. This procedure uses
PVCs with names and configurations created in |stor-doc|: :ref:`Create
Persistent Volume Claims
<storage-configuration-create-persistent-volume-claims>`.
You should refer to the Volume Claim examples. For more information, see
:ref:`Create ReadWriteOnce Persistent Volume Claims <storage-configuration-create-readwriteonce-persistent-volume-claims>`.
.. rubric:: |proc|
@ -30,18 +28,18 @@ Persistent Volume Claims
.. _storage-configuration-mount-persistent-volumes-in-containers-d583e55:
#. Create the busybox container with the persistent volumes created from
the PVCs mounted.
the |PVCs| mounted.
#. Create a yaml file definition for the busybox container.
.. code-block:: none
% cat <<EOF > busybox.yaml
% cat <<EOF > rwo-busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
name: rwo-busybox
namespace: default
spec:
progressDeadlineSeconds: 600
@ -71,10 +69,10 @@ Persistent Volume Claims
volumes:
- name: pvc1
persistentVolumeClaim:
claimName: test-claim1
claimName: rwo-test-claim1
- name: pvc2
persistentVolumeClaim:
claimName: test-claim2
claimName: rwo-test-claim2
EOF
@ -82,10 +80,11 @@ Persistent Volume Claims
.. code-block:: none
% kubectl apply -f busybox.yaml
% kubectl apply -f rwo-busybox.yaml
deployment.apps/rwo-busybox created
#. Attach to the busybox and create files on the persistent volumes.
#. Attach to the busybox and create files on the Persistent Volumes.
#. List the available pods.
@ -94,17 +93,16 @@ Persistent Volume Claims
% kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-5c4f877455-gkg2s 1/1 Running 0 19s
rwo-busybox-5c4f877455-gkg2s 1/1 Running 0 19s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t
% kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t
#. From the container's console, list the disks to verify that the
persistent volumes are attached.
Persistent Volumes are attached.
.. code-block:: none
@ -116,11 +114,9 @@ Persistent Volume Claims
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
...
The PVCs are mounted as /mnt1 and /mnt2.
#. Create files in the mounted volumes.
.. code-block:: none
@ -140,22 +136,24 @@ Persistent Volume Claims
.. code-block:: none
# exit
Session ended, resume using 'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when the pod is running
Session ended, resume using
'kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t' command when
the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f busybox.yaml
% kubectl delete -f rwo-busybox.yaml
#. Recreate the busybox container, again attached to persistent volumes.
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f busybox.yaml
% kubectl apply -f rwo-busybox.yaml
deployment.apps/rwo-busybox created
#. List the available pods.
@ -163,8 +161,7 @@ Persistent Volume Claims
% kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-5c4f877455-jgcc4 1/1 Running 0 19s
rwo-busybox-5c4f877455-jgcc4 1/1 Running 0 19s
#. Connect to the pod shell for CLI access.
@ -197,5 +194,3 @@ Persistent Volume Claims
i-was-here lost+found
# ls /mnt2
i-was-here-too lost+found

View File

@ -2,17 +2,18 @@
Contents
========
*************
-------------
System access
*************
-------------
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-access-overview
-----------------
Remote CLI access
*****************
-----------------
.. toctree::
:maxdepth: 1
@ -24,34 +25,36 @@ Remote CLI access
configuring-remote-helm-client
using-container-based-remote-clis-and-clients
----------
GUI access
**********
----------
.. toctree::
:maxdepth: 1
accessing-the-kubernetes-dashboard
----------
API access
**********
----------
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-rest-api-access
**********************
----------------------
Application management
**********************
----------------------
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-helm-package-manager
*********************
Local docker registry
*********************
---------------------
Local Docker registry
---------------------
.. toctree::
:maxdepth: 1
@ -59,18 +62,18 @@ Local docker registry
kubernetes-user-tutorials-authentication-and-authorization
using-an-image-from-the-local-docker-registry-in-a-container-spec
***************************
---------------------------
NodePort usage restrictions
***************************
---------------------------
.. toctree::
:maxdepth: 1
nodeport-usage-restrictions
************
------------
Cert Manager
************
------------
.. toctree::
:maxdepth: 1
@ -78,9 +81,9 @@ Cert Manager
kubernetes-user-tutorials-cert-manager
letsencrypt-example
********************************
--------------------------------
Vault secret and data management
********************************
--------------------------------
.. toctree::
:maxdepth: 1
@ -89,9 +92,9 @@ Vault secret and data management
vault-aware
vault-unaware
****************************
Using Kata container runtime
****************************
-----------------------------
Using Kata Containers runtime
-----------------------------
.. toctree::
:maxdepth: 1
@ -100,19 +103,38 @@ Using Kata container runtime
specifying-kata-container-runtime-in-pod-spec
known-limitations
*******************************
-------------------------------
Adding persistent volume claims
*******************************
-------------------------------
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-creating-persistent-volume-claims
kubernetes-user-tutorials-mounting-persistent-volumes-in-containers
kubernetes-user-tutorials-about-persistent-volume-support
****************************************
***************
RBD Provisioner
***************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims
kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers
****************************
Ceph File System Provisioner
****************************
.. toctree::
:maxdepth: 1
kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims
kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers
----------------------------------------
Adding an SRIOV interface to a container
****************************************
----------------------------------------
.. toctree::
:maxdepth: 1
@ -120,9 +142,9 @@ Adding an SRIOV interface to a container
creating-network-attachment-definitions
using-network-attachment-definitions-in-a-container
**************************
--------------------------
CPU Manager for Kubernetes
**************************
--------------------------
.. toctree::
:maxdepth: 1

View File

@ -0,0 +1,67 @@
.. rhb1561120463240
.. _kubernetes-user-tutorials-about-persistent-volume-support:
===============================
About Persistent Volume Support
===============================
Persistent Volume Claims \(PVCs\) are requests for storage resources in your
cluster. By default, container images have an ephemeral file system. In order
for containers to persist files beyond the lifetime of the container, a
Persistent Volume Claim can be created to obtain a persistent volume which the
container can mount and read/write files.
Management and customization tasks for Kubernetes |PVCs|
can be accomplished by using StorageClasses set up by two Helm charts:
**rbd-provisioner** and **cephfs-provisioner**. The **rbd-provisioner**
and **cephfs-provisioner** Helm charts are included in the
**platform-integ-apps** system application, which is automatically loaded and
applied as part of the |prod| installation.
PVCs are supported with the following options:
- with accessMode of ReadWriteOnce backed by Ceph RBD
- only one container can attach to these PVCs
- management and customization tasks related to these PVCs are done
through the **rbd-provisioner** Helm chart provided by
platform-integ-apps
- with accessMode of ReadWriteMany backed by CephFS
- multiple containers can attach to these PVCs
- management and customization tasks related to these PVCs are done
through the **cephfs-provisioner** Helm chart provided by
platform-integ-apps
After **platform-integ-apps** is applied, the following system configurations
are created:
- **Ceph Pools**
.. code-block:: none
~(keystone_admin)]$ ceph osd lspools
kube-rbd
kube-cephfs-data
kube-cephfs-metadata
- **CephFS**
.. code-block:: none
~(keystone_admin)]$ ceph fs ls
name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
- **Kubernetes StorageClasses**
.. code-block:: none
~(keystone_admin)]$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
cephfs ceph.com/cephfs Delete Immediate false
general (default) ceph.com/rbd Delete Immediate false
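
The access mode of a claim determines which of these StorageClasses it should
use. The following is a minimal sketch pairing each access mode with the
default class names shown above; the claim names are hypothetical:

.. code-block:: yaml

   # RWO claim: single attachment, backed by Ceph RBD (kube-rbd pool)
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: sketch-rwo-claim       # hypothetical name
   spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: general
   ---
   # RWX claim: multiple attachments, backed by CephFS
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: sketch-rwx-claim       # hypothetical name
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     storageClassName: cephfs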
.. xms1617036308112
.. _kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims:
=============================================
Create ReadWriteMany Persistent Volume Claims
=============================================
Container images have an ephemeral file system by default. For data to survive
beyond the lifetime of a container, the container can read and write files on
a persistent volume obtained with a Persistent Volume Claim \(PVC\) created to
provide persistent storage.
.. rubric:: |context|
For multiple containers to mount the same PVC, create a PVC with accessMode of
ReadWriteMany \(RWX\).
The following steps show an example of creating a 1GB |PVC|
with ReadWriteMany accessMode.
.. rubric:: |proc|
.. _xms1617036308112-steps-bdr-qnm-tkb:
#. Create the **rwx-test-claim** Persistent Volume Claim.
#. Create a yaml file defining the claim and its attributes. For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > rwx-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rwx-test-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: cephfs
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f rwx-claim.yaml
persistentvolumeclaim/rwx-test-claim created
A 1GB PVC has been created. You can view the PVC and its bound persistent
volume using the following commands.
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
rwx-test-claim Bound pvc-df9f.. 1Gi RWX cephfs
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolume
NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS
pvc-df9f.. 1Gi RWX Delete Bound default/rwx-test-claim cephfs
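
When the claim is no longer needed, it can be deleted using the same yaml
file. Note that the **cephfs** StorageClass uses a reclaim policy of Delete,
so deleting the claim also deletes the backing volume and its data. A cleanup
sketch, assuming no pod still mounts the claim:

.. code-block:: none

   ~(keystone_admin)]$ kubectl delete -f rwx-claim.yaml
   persistentvolumeclaim "rwx-test-claim" deleted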
For more information on using CephFS for internal Ceph backends, see
|stor-doc| :ref:`About Persistent Volume Support <about-persistent-volume-support>`.
.. rqy1582055871598
.. _kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims:
=============================================
Create ReadWriteOnce Persistent Volume Claims
=============================================
Container images have an ephemeral file system by default. For data to survive
beyond the lifetime of a container, the container can read and write files on
a persistent volume obtained with a Persistent Volume Claim \(PVC\) created to
provide persistent storage.
.. rubric:: |context|
For the use case of a single container mounting a specific |PVC|, create a PVC
with accessMode of ReadWriteOnce (RWO).
The following steps show an example of creating two 1GB |PVCs| with
ReadWriteOnce accessMode.
.. rubric:: |proc|
.. _kubernetes-user-tutorials-creating-persistent-volume-claims-d380e32:
#. Create the **rwo-test-claim1** Persistent Volume Claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > rwo-claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rwo-test-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f rwo-claim1.yaml
persistentvolumeclaim/rwo-test-claim1 created
#. Create the **rwo-test-claim2** Persistent Volume Claim.
#. Create a yaml file defining the claim and its attributes.
For example:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > rwo-claim2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rwo-test-claim2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: general
EOF
#. Apply the settings created above.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f rwo-claim2.yaml
persistentvolumeclaim/rwo-test-claim2 created
.. rubric:: |result|

Two 1GB Persistent Volume Claims have been created. You can view the PVCs and
their bound persistent volumes using the following commands.
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
rwo-test-claim1 Bound pvc-08d8.. 1Gi RWO general
rwo-test-claim2 Bound pvc-af10.. 1Gi RWO general
.. code-block:: none
~(keystone_admin)]$ kubectl get persistentvolume
NAME CAPACITY ACCESS.. RECLAIM.. STATUS CLAIM STORAGECLASS
pvc-08d8.. 1Gi RWO Delete Bound default/rwo-test-claim1 general
pvc-af10.. 1Gi RWO Delete Bound default/rwo-test-claim2 general
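
If a claim remains in a Pending state rather than Bound, you can inspect its
events for provisioning errors. A typical check, shown here for the first
claim:

.. code-block:: none

   ~(keystone_admin)]$ kubectl describe persistentvolumeclaim rwo-test-claim1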
.. iqs1617036367453
.. _kubernetes-user-tutorials-mount-readwritemany-persistent-volumes-in-containers:
====================================================
Mount ReadWriteMany Persistent Volumes in Containers
====================================================
You can attach a ReadWriteMany PVC to multiple containers, and all of those
containers can read and write the same PVC.
.. rubric:: |context|
This example shows how a volume is claimed and mounted by each container
replica of a deployment with two replicas, and how each replica can read and
write to the same PVC. It is the responsibility of an individual
micro-service within an application to make a volume claim, mount it, and use
it.
.. rubric:: |prereq|
This procedure uses the **rwx-test-claim** |PVC|. For an example of creating
it, see :ref:`Create ReadWriteMany Persistent Volume Claims <kubernetes-user-tutorials-create-readwritemany-persistent-volume-claims>`.
.. rubric:: |proc|
.. _iqs1617036367453-steps-fqj-flr-tkb:
#. Create the busybox container with the persistent volume created from the
   |PVC| mounted. This deployment will create two replicas mounting the same
   persistent volume.
#. Create a yaml file definition for the busybox container.
.. code-block:: none
% cat <<EOF > wrx-busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: wrx-busybox
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 2
selector:
matchLabels:
run: busybox
template:
metadata:
labels:
run: busybox
spec:
containers:
- args:
- sh
image: busybox
imagePullPolicy: Always
name: busybox
stdin: true
tty: true
volumeMounts:
- name: pvc1
mountPath: "/mnt1"
restartPolicy: Always
volumes:
- name: pvc1
persistentVolumeClaim:
claimName: rwx-test-claim
EOF
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f wrx-busybox.yaml
deployment.apps/wrx-busybox created
#. Attach to the busybox and create files on the Persistent Volume.
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
wrx-busybox-6455997c76-4kg8v 1/1 Running 0 108s
wrx-busybox-6455997c76-crmw6 1/1 Running 0 108s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t
#. From the container's console, list the disks to verify that the
Persistent Volume is attached.
.. code-block:: none
% df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 1783748 29658172 6% /
tmpfs 65536 0 65536 0% /dev
tmpfs 5033188 0 5033188 0% /sys/fs/cgroup
ceph-fuse 516542464 643072 515899392 0% /mnt1
The PVC is mounted as /mnt1.
#. Create files in the mounted volume.
.. code-block:: none
# cd /mnt1
# touch i-was-here-${HOSTNAME}
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v
#. End the container session.
.. code-block:: none
% exit
Session ended, resume using
:command:`kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t`
command when the pod is running
#. Connect to the other busybox container.
.. code-block:: none
% kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t
#. \(Optional\) From the container's console, list the disks to verify that
   the |PVC| is attached.
.. code-block:: none
% df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 1783888 29658032 6% /
tmpfs 65536 0 65536 0% /dev
tmpfs 5033188 0 5033188 0% /sys/fs/cgroup
ceph-fuse 516542464 643072 515899392 0% /mnt1
#. Verify that the file created from the other container exists and that this
container can also write to the Persistent Volume.
.. code-block:: none
# cd /mnt1
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v
# echo ${HOSTNAME}
wrx-busybox-6455997c76-crmw6
# touch i-was-here-${HOSTNAME}
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v i-was-here-wrx-busybox-6455997c76-crmw6
#. End the container session.
.. code-block:: none
% exit
Session ended, resume using
:command:`kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t`
command when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f wrx-busybox.yaml
For more information on Persistent Volume Support, see |prod| |stor-doc|
:ref:`About Persistent Volume Support <about-persistent-volume-support>`.
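
Because the volume has ReadWriteMany access, the deployment can also be
scaled up so that additional replicas mount the same volume concurrently. A
sketch, assuming the deployment from this procedure is still running:

.. code-block:: none

   % kubectl scale deployment wrx-busybox --replicas=3
   deployment.apps/wrx-busybox scaled

Each new pod mounts the same CephFS volume at /mnt1 and sees the files
created earlier.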
.. nos1582114374670
.. _kubernetes-user-tutorials-mount-readwriteonce-persistent-volumes-in-containers:
====================================================
Mount ReadWriteOnce Persistent Volumes in Containers
====================================================
You can attach ReadWriteOnce |PVCs| to a container when launching it, and
changes to those PVCs will persist even if the container is terminated and
restarted.
.. note::
A ReadWriteOnce PVC can only be mounted by a single container.
.. rubric:: |context|
This example shows how a volume is claimed and mounted by a simple running
container, and how the contents of the volume claim persist across restarts
of the container. It is the responsibility of an individual micro-service
within an application to make a volume claim, mount it, and use it.
.. rubric:: |prereq|
This procedure uses the **rwo-test-claim1** and **rwo-test-claim2** |PVCs|.
For an example of creating them, see
:ref:`Create ReadWriteOnce Persistent Volume Claims <kubernetes-user-tutorials-create-readwriteonce-persistent-volume-claims>`.
.. rubric:: |proc|
.. _kubernetes-user-tutorials-mounting-persistent-volumes-in-containers-d18e44:
#. Create the busybox container with the persistent volumes created from the
   |PVCs| mounted.
#. Create a yaml file definition for the busybox container.
.. code-block:: none
% cat <<EOF > rwo-busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: rwo-busybox
namespace: default
spec:
progressDeadlineSeconds: 600
volumes:
- name: pvc1
persistentVolumeClaim:
claimName: rwo-test-claim1
- name: pvc2
persistentVolumeClaim:
claimName: rwo-test-claim2
EOF
#. Apply the busybox configuration.
.. code-block:: none
% kubectl apply -f rwo-busybox.yaml
deployment.apps/rwo-busybox created
#. Attach to the busybox and create files on the persistent volumes.
% kubectl get pods
NAME READY STATUS RESTARTS AGE
rwo-busybox-5c4f877455-gkg2s   1/1     Running   0          19s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t
#. From the container's console, list the disks to verify that the
   Persistent Volumes are attached.
.. code-block:: none
The PVCs are mounted as /mnt1 and /mnt2.
#. Create files in the mounted volumes.
.. code-block:: none
# touch i-was-here
# ls /mnt1
i-was-here lost+found
# cd /mnt2
# touch i-was-here-too
# ls /mnt2
.. code-block:: none
# exit
Session ended, resume using :command:`kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t`
when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f rwo-busybox.yaml
#. Re-create the busybox container, again attached to persistent volumes.
.. code-block:: none
% kubectl apply -f rwo-busybox.yaml
deployment.apps/rwo-busybox created
#. List the available pods.
% kubectl get pods
NAME READY STATUS RESTARTS AGE
rwo-busybox-5c4f877455-jgcc4   1/1     Running   0          19s
#. Connect to the pod shell for CLI access.
% kubectl attach rwo-busybox-5c4f877455-jgcc4 -c busybox -i -t
#. From the container's console, list the disks to verify that the PVCs are
   attached.
.. code-block:: none
/dev/sda4 20027216 4952208 14034624 26%
...
#. Verify that the files created during the earlier container session still
   exist.
.. code-block:: none
i-was-here lost+found
# ls /mnt2
i-was-here-too lost+found
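
When you are finished with the example, you can remove the deployment and, if
the data is no longer needed, the claims themselves. A cleanup sketch; note
that the **general** StorageClass uses a Delete reclaim policy, so deleting
the claims also deletes the stored files:

.. code-block:: none

   % kubectl delete -f rwo-busybox.yaml
   % kubectl delete pvc rwo-test-claim1 rwo-test-claim2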