Instead of relying on the dashboard's availability, we check the ceph-mgr map.
Signed-off-by: Alexandre Marangone <amarango@redhat.com>
Change-Id: I78d33a4b522ed085ed85a638b3784c2d07026e39
This PS sets the Ceph MGR pod's .spec.strategy to Recreate.
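For reference, the resulting stanza in the mgr manifest looks roughly like the following (a minimal sketch; surrounding fields omitted, exact placement in the chart may differ):

    spec:
      # Use Recreate instead of the default RollingUpdate so the old
      # mgr pod is stopped before its replacement is started.
      strategy:
        type: Recreate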
Signed-off-by: Alexandre Marangone <amarango@redhat.com>
Change-Id: I14a817dbf8e0d1ec86345cf97911302f5acc3466
When the monmap is persisted, don't overwrite it when the mon pod restarts.
This helps when there is just one mon, or when all mons reboot.
Change-Id: I9119379f4bc026c315a2fa7507a1664b12ea6205
Signed-off-by: Huamin Chen <hchen@redhat.com>
The Cinder chart can now manage its storage init itself. This PS
removes the now-unneeded pool creation from the Ceph bootstrap job.
It also updates the `ensure_pool` function to better support Luminous.
Change-Id: I4a71df9a6d3a0e45c6ef6812926d66455055ae9f
This PS updates the dependency tree in ceph to take into account
the keyring jobs, and also the tolerance for unready endpoints
introduced in the k8s 1.6 era.
Change-Id: If76efeafdbcbe88ee699e7553f0effd5da7ce624
The ceph mon daemonset had a typo, and referenced the osd
resource limit specification in Values instead of its own.
Change-Id: I06433b9039842322786e73eff89da2836c17bb7f
init osd: The Ceph Luminous release initializes OSDs differently. This fix
detects the Ceph release and uses the right process to initialize the OSD directory.
mgr: Add a daemonset for the mgr daemon introduced in Luminous.
Change-Id: I99a102f24c4a8ba18a0bba873e9f752368bea594
Signed-off-by: Huamin Chen <hchen@redhat.com>
Depends-On: I17359df62a720cbd0b3ff79b1d642f99b3e81b3f
Replace socket-based liveness checks with scripts
The current TCP socket-based liveness/readiness check for Ceph
doesn't accurately reflect when daemons are live, doesn't handle
multiple OSDs on a host, and doesn't work when hostNetworking is
in use and the Ceph network is different from the one associated
with the hostname. This change adds new scripts for checking
Ceph monitor and OSD liveness/readiness that query the Ceph Unix
domain sockets to get daemon status and exit 0 iff all sockets
report that their daemons are in an "active" state.
This isn't perfect: we don't know how many daemons SHOULD be
active, so if only a subset is live and the others have no
sockets (yet?), we'll still claim the pod is ready. The scripts
also don't distinguish between liveness and readiness for OSDs.
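For illustration, the daemonset probes change from a tcpSocket check to an exec check along these lines (script name, path and timings here are placeholders, not necessarily the exact values added by this change):

    livenessProbe:
      exec:
        command:
          # Query the Ceph admin socket(s) on this host; the script
          # exits 0 only if every daemon reports an "active" state.
          - /tmp/mon-check.sh
      initialDelaySeconds: 60
      periodSeconds: 60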
Change-Id: I5d370b4bc4025fece2e640355c3a29167afca871
This PS makes the service-specific images for Ceph have
explicit names, allowing simple overriding of images for an
entire site.
Change-Id: I735c5fdc08c2a83893f25e6f6f9824089916507f
This PS updates the values file layout for images to allow simple
parsing of the images in use by charts, allowing them to be queried
and modified much more simply. By moving the image tags to a 'tags'
key, we can extend the options to accommodate extras (e.g. prefixing
the tag for use with an internal registry) or pre-pull the images to
reduce chart deploy failures.
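The resulting layout groups every image under a single key, roughly as below (the image names and references shown are placeholders):

    images:
      # A single 'tags' key holds every image the chart uses, so tooling
      # can list, pre-pull or rewrite them (e.g. to point at an internal
      # registry) without knowing each chart's internal layout.
      tags:
        ceph_daemon: docker.io/ceph/daemon:latest
        dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1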
Change-Id: I9ec1dbb00d997ab6cb021bf0b698f7aae740e95d
Currently, "general" storage class always created even if
provision_storage_class is set to false. This patch fixed
storageclass template to check the option is enabled.
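A sketch of the guard added to the storageclass template (the exact values path shown is illustrative):

    {{- if .Values.storageclass.provision_storage_class }}
    # ... existing StorageClass manifest, unchanged ...
    {{- end }}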
Change-Id: I6397b24fa9c6517f2646e53ea0f601ad2aa4b9f8
A new kubernetes-entrypoint version was released. The k8s-entrypoint
authors maintain images at Quay. The image is based on CoreOS, which
is more lightweight than the current Ubuntu image, so it should
lessen the burden on the infrastructure.
Change-Id: Id8c2a4d065550ffbd64476377247cccf213b58e1
Partial-Implements: blueprint entrypoint-namespaces
Kubernetes 1.8 is stricter about the fields in a secrets manifest;
this PS updates OpenStack-Helm to be compliant.
Change-Id: I9e19d07060d8517e0f4fd3056013191b1b4ba2da
Log the filesystem type of the directory-backed OSD to help diagnose the root cause of OSD failures.
Change-Id: I8c8de033afeeb7e6e33f88db33dc962d03ed3ba9
Signed-off-by: Huamin Chen <hchen@redhat.com>
This PS enables the following backends for glance:
* PVC
* RBD
* RadosGW (direct)
* Swift
It also moves the creation of the RBD pool, when required, to a storage
init job. This job also creates the credentials glance needs to access
the chosen backend, rather than using the admin keyring.
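For illustration only, backend selection might be driven by a values toggle along these lines (the key name and allowed values here are hypothetical, not the chart's actual schema):

    # Hypothetical values snippet: pick one backend for glance images.
    storage: rbd   # one of: pvc, rbd, radosgw, swift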
Change-Id: I90fead961ff73a9263826acc794128fa73ead2e1
Currently CLUSTER and the deployment namespace both default to ceph, so these
variables can be used interchangeably. But once the deployment namespace changes,
the MON daemonset will not be able to get its IP from the ceph namespace.
This fix swaps CLUSTER with NAMESPACE and solves this problem.
Change-Id: I0cf6afafb71f3972e24d13d479192e7a4e155de4
Signed-off-by: Huamin Chen <hchen@redhat.com>
This PS implements the ceph radosgw and also provides keystone
integration, allowing ceph to provide a Swift-like service for
object storage if desired.
In addition, it updates the endpoint lookups to use valid yaml when
dealing with keystone services with a '-' in their name.
Change-Id: I9162ad657df2f77c1bc1afa93a8b999894b1b470
This PS provides the same level of configuration tunability and control to
the ceph chart as other charts within openstack-helm.
Change-Id: I620c3fdb31abe67ee5b4b4766b1523e02bb7f814
This PS adds namespace and fqdn support to the endpoint lookup functions;
it also permits overriding of the public endpoint for ingress.
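As a hedged sketch of the public override (key names follow the common endpoints pattern; the exact schema may differ):

    endpoints:
      identity:
        # Example only: override the public endpoint FQDN used by the
        # ingress; namespaced in-cluster lookups are left untouched.
        host_fqdn_override:
          public: keystone.example.com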
Change-Id: Ib61c5c00a214d75fe85fbffe9080c2ae88bd8cb9
Add the dnsPolicy parameter in daemonset-mon.yaml; ceph-mon should have
dnsPolicy ClusterFirstWithHostNet because it uses hostNetwork.
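The resulting pod spec combination looks roughly like this (a minimal sketch):

    spec:
      # With hostNetwork enabled, ClusterFirstWithHostNet is required so
      # the pod can still resolve cluster-internal service names.
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet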
Closes-bug: 1713383
Change-Id: I14aba0f5caeb6cb7057aeadb18c60337b130da90
This PS updates the ceph namespace client key script to hard fail
if it cannot get the admin storage key from the namespace ceph is
deployed into.
Change-Id: Ieefe6d800a678d721294561b25bbebc874cfa74d