Currently there is a bug in the Beast code that makes it fail
during the initial lookup for a keystone user map. For the time
being we will continue to use civetweb when keystone is present
until this issue is resolved.
Change-Id: I56bcd77f38adb3763d35f46443c1403816d1dcea
This updates the helm-toolkit script for creating rgw s3 users
to first check whether a user exists, then create the user if it
does not exist or modify the user's keys if it does. This is
accomplished by using jq to identify all existing access keys for
the specified user, removing those key pairs by their access keys,
and then modifying the existing user with the supplied access/secret
key pair.
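A minimal sketch of that check-then-create-or-modify flow, assuming
illustrative user and key values rather than the chart's actual
defaults:

    #!/bin/bash
    # Hypothetical inputs; the real script receives these from the chart.
    S3_USERNAME="s3-admin"
    S3_ACCESS_KEY="example-access-key"
    S3_SECRET_KEY="example-secret-key"

    if USER_JSON=$(radosgw-admin user info --uid="${S3_USERNAME}" 2>/dev/null); then
      # User exists: remove every existing S3 access key, then attach the supplied pair.
      for KEY in $(echo "${USER_JSON}" | jq -r '.keys[].access_key'); do
        radosgw-admin key rm --uid="${S3_USERNAME}" --key-type=s3 --access-key="${KEY}"
      done
      radosgw-admin key create --uid="${S3_USERNAME}" --key-type=s3 \
        --access-key="${S3_ACCESS_KEY}" --secret-key="${S3_SECRET_KEY}"
    else
      # User does not exist: create it with the supplied key pair.
      radosgw-admin user create --uid="${S3_USERNAME}" --display-name="${S3_USERNAME}" \
        --access-key="${S3_ACCESS_KEY}" --secret-key="${S3_SECRET_KEY}"
    fi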
This also updates the ceph-rgw chart to use the helm-toolkit s3
user script for creating the admin s3 user instead of using a
similar script defined directly in the ceph-rgw chart
Change-Id: I575b66415d44db7bb752102e45595305d86e623b
- Since the admin key has been removed, we need to also replace
radosgw-admin with openstack container commands.
- Additionally expand the helm tests for keystone to also upload
and validate an object in RGW (similar to the S3 helm tests).
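The expanded keystone helm test amounts to a round trip like the
following sketch (container and object names are placeholders, and
the OS_* auth variables are assumed to already be exported):

    #!/bin/bash
    set -e
    cd /tmp
    echo "hello-rgw" > rgw-test-object.txt
    # Create a container and upload the object through the keystone-authenticated API.
    openstack container create rgw-helm-test
    openstack object create rgw-helm-test rgw-test-object.txt
    # Validate by downloading the object again and comparing contents.
    openstack object save --file rgw-test-object.down rgw-helm-test rgw-test-object.txt
    diff rgw-test-object.txt rgw-test-object.down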
Change-Id: I4be603121fc227dd48f83704e99bba94341c4c09
This changes the application label for the ceph-rgw storage init
job to 'ceph' to match the other jobs defined for the chart, rather
than using 'ceph-rgw'.
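For illustration, the job metadata now carries the shared label (the
component value shown is a placeholder, not the chart's exact label
set):

    metadata:
      labels:
        application: ceph
        component: rgw-storage-init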
Change-Id: Ia0b679567161e91241250f0c250d24a45c5ebb92
- Support using custom client params for S3 configurations (see the
sketch after this list)
- Move common tuning for S3 and Keystone into their own
configuration option
- Cleanup the rgw helm tests, since copying the ceph admin key is
no longer required
- Cleanup duplicate portions of the code for configuring the RGW
backend and frontend port
- Add an rgw helm test check for the osh-infra-logging gates
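A hypothetical values override showing the general shape this change
enables; the key names here are illustrative and not the chart's
exact schema:

    conf:
      rgw_s3:
        enabled: true
        # Custom client params applied on top of the shared S3/Keystone tuning.
        config:
          rgw_list_buckets_max_chunk: 1000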
Change-Id: I46dbb4c45b0b96f5cf555077e49d2e09a1171424
This PS updates the default image in the chart to the latest OSH image.
Change-Id: Ib8d2a72ad48049fe02560dc4405f0088890b6f64
Signed-off-by: Pete Birley <pete@port.direct>
This PS updates the helm test driven pod template:
* places rgw keystone conditional to correct location
* removes unneeded roles and bindings
* adds dependency on the rgw being running
* corrects spelling error
* corrects s3cmd to work with version 1.6.1
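For version 1.6.1 the s3cmd calls take roughly this shape (endpoint,
port, and credentials are placeholders):

    S3_OPTS="--access_key=${S3_ACCESS_KEY} --secret_key=${S3_SECRET_KEY} --no-ssl --signature-v2"
    S3_OPTS="${S3_OPTS} --host=${RGW_HOST}:8088 --host-bucket=${RGW_HOST}:8088"
    s3cmd ${S3_OPTS} mb s3://helm-test-bucket
    s3cmd ${S3_OPTS} put /tmp/test-object s3://helm-test-bucket
    s3cmd ${S3_OPTS} ls s3://helm-test-bucket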
Change-Id: I665dba9fdca1d840f4d864e32f07b6185af51d25
Signed-off-by: Pete Birley <pete@port.direct>
Use the Beast backend only when Mimic binaries are installed.
Otherwise use civetweb if the binaries are from Ceph Luminous.
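Conceptually the resulting RGW configuration differs only in the
frontends line; the section name and port below are placeholders:

    # Mimic binaries present
    [client.rgw.example-host]
    rgw_frontends = "beast port=8088"

    # Luminous binaries present
    [client.rgw.example-host]
    rgw_frontends = "civetweb port=8088"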
Change-Id: Ia7cb64d8db7eed2fc0c57387b26a27163af34520
Change the release of Ceph from 12.2.3 (Luminous) to the latest 13.2.2
(Mimic). Additionally use supported RHEL/CentOS images rather than
Ubuntu images, which are now considered deprecated by Red Hat.
- Uplift all Ceph images to the latest 13.2.2 ceph-container images.
- RadosGW by default will now use the Beast backend.
- RadosGW has relaxed settings enabled for S3 naming conventions (see
the sketch after this list).
- Increased RadosGW resource limits due to backend change.
- All Luminous specific tests now test for both Luminous/Mimic.
- Gate scripts will remove all non-required Ceph packages. This is
required to avoid conflicts with the pid/gid that the Red Hat
container uses.
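As a sketch of the relaxed S3 naming setting mentioned above (section
name is a placeholder):

    [client.rgw.example-host]
    # Accept bucket names that do not strictly follow the S3 naming rules.
    rgw_relaxed_s3_bucket_names = true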
Change-Id: I9c00f3baa6c427e6223596ade95c65c331e763fb
Set rgw_override_bucket_index_max_shards to 8 (default: 0)
By default create 8 shards per bucket with Ceph RadosGW. This allows
up to ~800k-1M objects to be in a bucket before seeing performance
slowdowns. The only downside to this change is that a directory
listing for a bucket may take slightly longer to finish.
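In ceph.conf terms this corresponds to (section name is a placeholder):

    [client.rgw.example-host]
    # Pre-shard new bucket indexes into 8 shards instead of a single unsharded index.
    rgw_override_bucket_index_max_shards = 8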
Change-Id: I96c7ac81501a41d29927e102a6029bf432bd3d21
This PS implements the helm-toolkit function to generate the Egress
rules in the Kubernetes network policy manifest based on overridable
values. It also enables the K8s network policy in the osh-infra gate.
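The override surface looks roughly like the following; the top-level
application key and the example rule are placeholders, not the exact
schema:

    network_policy:
      ceph:
        egress:
          - to:
              - ipBlock:
                  cidr: 0.0.0.0/0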
Change-Id: Icbe2a18c98dba795d15398dcdcac64228f6a7b4c
This PS allows multiple ceph test pods to be present in a cluster with
more than one ceph deployment.
Change-Id: I002a8b4681d97ed6ab95af23e1938870c28f5a83
Signed-off-by: Pete Birley <pete@port.direct>
- Throttle down snap trimming so as to lessen its performance impact
(Setting just osd_snap_trim_priority isn't effective enough to throttle
down the impact)
osd_snap_trim_sleep: 0.1 (default 0)
osd_pg_max_concurrent_snap_trims: 1 (default 2)
- Align filestore_merge_threshold with upstream Ceph values
(A negative number disables this function, no change in behavior)
filestore_merge_threshold: -10 (formerly -50, default 10)
- Increase RGW pool thread size for more concurrent connections
rgw_thread_pool_size: 512 (default 100)
- Disable in-memory logs for the ms subsystem.
debug_ms: 0/0 (default 0/5)
- Formatting cleanups
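Collected into ceph.conf form, assuming illustrative section
placement:

    [osd]
    osd_snap_trim_sleep = 0.1
    osd_pg_max_concurrent_snap_trims = 1
    filestore_merge_threshold = -10
    debug_ms = 0/0

    [client.rgw.example-host]
    rgw_thread_pool_size = 512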
Change-Id: I4aefcb6e774cb3e1252e52ca6003cec495556467
This PS moves to use the hostname, not the pod name, for the
instance-specific config sections.
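Illustratively (both section names below are made-up examples):

    # Before: section keyed by the ephemeral pod name
    [client.rgw.ceph-rgw-7d9f8c-abcde]
    # After: section keyed by the stable node hostname
    [client.rgw.worker-node-01]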
Change-Id: If2bc60c9f4f12038e8aa70fbd33a009cdf652b75
Signed-off-by: Pete Birley <pete@port.direct>
A problem was discovered regarding issues caused by RGW dynamic bucket
resharding. It is recommended at this time to disable this feature.
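The corresponding setting, with a placeholder section name:

    [client.rgw.example-host]
    # Dynamic bucket index resharding disabled per the recommendation above.
    rgw_dynamic_resharding = false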
Change-Id: Id524415f4ed08ee5374f7fd3b53f6e36c3ab084e
Adding a configmap hash to the following ds/deployments to trigger
rolling updates if there are any updates to the configmap (see the
sketch after this list):
- ceph-mon
- ceph-mds
- ceph-mgr
- ceph-rgw
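The usual openstack-helm pattern for this is a checksum annotation on
the pod template, roughly as follows (template names are illustrative):

    annotations:
      configmap-bin-hash: {{ tuple "configmap-bin.yaml" . | include "helm-toolkit.utils.hash" }}
      configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "helm-toolkit.utils.hash" }}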
Change-Id: I4173cb12c18640c9b1a0e5a698d48f4735e250fb
This PS adds the ability to attach a release uuid to pods and rc
objects as desired. A follow-up PS will add the ability to add
arbitrary annotations to the same objects.
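One plausible shape for this, where both the values key and the label
name are assumptions rather than the exact helper this PS adds:

    # values.yaml
    release_uuid: 7a6b1c9e-0000-0000-0000-example0uuid

    # rendered pod metadata
    metadata:
      labels:
        release_uuid: 7a6b1c9e-0000-0000-0000-example0uuid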
Change-Id: Iceedba457a03387f6fc44eb763a00fd57f9d84a5
Signed-off-by: Pete Birley <pete@port.direct>
This updates the ceph-rgw s3 admin access and secret keys to more
generic default values to avoid the possibility of a user assuming
the default keys are acceptable to use
Change-Id: I618ec16059e12c8ce74513da7580a9853af707df
This changes the conditional check for including the configmap-bin
template in the ceph-rgw chart back to its original state, and also
adds back the rgw-s3-admin.sh script that was removed unintentionally
Change-Id: I60c3660a5bca37199effcf74f3060059345a327b
This continues the work of moving infrastructure-related services
out of openstack-helm, by moving the ceph charts to
openstack-helm-infra instead.
Change-Id: I306ccd9d494f72a7946a7850f96d5c22f36eb8a0