CEPH_CONF does not exist anymore, resulting in both cinder-volume and
cinder-backup being configured with an empty rbd_ceph_conf option.
Use CEPH_CONF_FILE to fix this.
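A minimal sketch of the resulting configuration calls, assuming
devstack's iniset helper; the backend section and backup option names
are assumptions here:

    # cinder-volume: point the RBD driver at the cluster config
    iniset $CINDER_CONF ceph rbd_ceph_conf "$CEPH_CONF_FILE"
    # cinder-backup: point the backup driver at the same config
    iniset $CINDER_CONF DEFAULT backup_ceph_conf "$CEPH_CONF_FILE"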
Change-Id: I1aa590aba900a4a94698917e45a0ea5c6f497f18
Signed-off-by: Sébastien Han <seb@redhat.com>
We cannot restore a backup to a larger volume on Ceph because it
fails with status "error_restoring". This patch adds read/write
permissions on the volumes pool for the backup user. These permissions
are needed to resize the volume while restoring a backup that is
smaller than the volume.
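A sketch of the widened capability grant, assuming devstack's default
backup user and pool names (client.cinder-bak, backups, volumes):

    # Grant the backup user read/write access to the volumes pool in
    # addition to its own backups pool
    ceph auth caps client.cinder-bak \
        mon 'allow r' \
        osd 'allow rwx pool=backups, allow rwx pool=volumes'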
Change-Id: I794c1126bcee4e07baf5a9dcfee779fd61da5636
Closes-Bug: 1519749
I noticed this when debugging some grenade failures.
An include of grenade/functions stores the current value of XTRACE
(on) and disables xtrace for the rest of the import.
We then include devstack's "functions" library, which now overwrites
the stored value of XTRACE with the current state; i.e. disabled.
When it finishes, it restores the prior state (disabled), and then
grenade restores the same value of XTRACE (disabled).
The result is that xtrace is incorrectly disabled until the next time
it just happens to be turned on.
The solution is to namespace the stored current value of xtrace so
that when we finish sourcing a file, we always restore the tracing value
to what it was when we entered.
Some files had already discovered this. In general there is
inconsistency around the setting of the variable, and a lot of obvious
copy-paste. This brings consistency across all files by using
_XTRACE_* prefixes for the store/restore of tracing values.
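The resulting pattern looks like this (the namespaced variable name is
illustrative; each file gets its own):

    # Save the current xtrace setting and disable tracing while this
    # file is being sourced
    _XTRACE_LIB_FOO=$(set +o | grep xtrace)
    set +o xtrace

    # ... rest of the sourced file ...

    # Restore the xtrace setting saved on entry
    $_XTRACE_LIB_FOO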
Change-Id: Iba7739eada5711d9c269cb4127fa712e9f961695
Sometimes we want to run some benchmarks on virtual machines that will be
backed by a Ceph cluster. The first idea that comes to mind is to
use devstack to quickly get an OpenStack environment up and running,
but how do we configure devstack to use this remote cluster?
Thanks to this commit, it's now possible to use an already existing Ceph
cluster. In this case Devstack just needs two things:
* the location of the Ceph config file (by default devstack will look
for /etc/ceph/ceph.conf)
* the admin key of the remote ceph cluster (by default devstack will
look for /etc/ceph/ceph.client.admin.keyring)
Devstack will then create the necessary pools, users and keys, and will
connect the OpenStack environment to it as usual. During the unstack
phase all pools, users and keys will be deleted from the remote cluster,
while local files and the ceph-common package will be removed from the
current Devstack host.
To enable this mode simply add REMOTE_CEPH=True to your localrc file.
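For example, a minimal localrc for this mode (the CEPH_CONF_FILE
override is only needed if your paths differ from the defaults above):

    # Use an existing remote Ceph cluster instead of building one
    REMOTE_CEPH=True
    # Optional: location of the cluster's config file (default shown)
    CEPH_CONF_FILE=/etc/ceph/ceph.conf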
Change-Id: I1a4b6fd676d50b6a41a09e7beba9b11f8d1478f7
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
With gerrit 2.8 and the new change screen, this will trigger syntax
highlighting in gerrit, making code review a lot nicer.
Change-Id: Id238748417ffab53e02d59413dba66f61e724383
The new lib installs a full Ceph cluster. It can be managed
by the service init scripts. Ceph can also be installed standalone,
without any other components.
This implementation adds the auto-configuration for
the following services with Ceph:
* Glance
* Cinder
* Cinder backup
* Nova
To enable Ceph simply add: ENABLED_SERVICES+=,ceph to your localrc.
If you want to play with Ceph replication, you can use the
CEPH_REPLICAS option to set a replica count. This count will be used
for every pool (Glance, Cinder, Cinder backup and Nova). The size of the
loopback disk used for Ceph can also be managed thanks to the
CEPH_LOOPBACK_DISK_SIZE option.
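For instance, in localrc (replica count and disk size are illustrative
values):

    # Enable the Ceph service and tune the locally built cluster
    ENABLED_SERVICES+=,ceph
    CEPH_REPLICAS=2
    CEPH_LOOPBACK_DISK_SIZE=8G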
Going further, pools, users and PGs are configurable as well. The
convention is <SERVICE_NAME_IN_CAPITAL>_CEPH_<OPTION> where services are
GLANCE, CINDER, NOVA, CINDER_BAK. Let's take the example of Cinder:
* CINDER_CEPH_POOL
* CINDER_CEPH_USER
* CINDER_CEPH_POOL_PG
* CINDER_CEPH_POOL_PGP
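For example, overriding the Cinder defaults in localrc (values are
illustrative):

    CINDER_CEPH_POOL=volumes
    CINDER_CEPH_USER=cinder
    CINDER_CEPH_POOL_PG=64
    CINDER_CEPH_POOL_PGP=64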
** Only works on Ubuntu Trusty, Fedora 19/20 or later **
Change-Id: Ifec850ba8e1e5263234ef428669150c76cfdb6ad
Implements: blueprint implement-ceph-backend
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>