Introduce --ceph-vip option for tripleo deployed Ceph

The current status of networkv2 allows provisioning only one VIP per
network, which means that using the new Ceph ingress daemon (which
requires one VIP per service) can break components that are still
using the VIP provisioned on the storage network (or on any other
network, depending on the tripleo-heat-templates overrides specified)
and are managed by Pacemaker.
Other services, like Redis and OVN, instead create their own VIPs
during the overcloud deploy, which is the same approach taken by this
patch.
A new option '--ceph-vip' has been added to the "openstack overcloud
ceph deploy" command. It may be used to reserve a VIP for each Ceph
service specified by the 'service/network' mapping provided as input.
This option is required by the ingress daemon because VIPs must be
reserved in advance.
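A hypothetical invocation (file names are illustrative and follow the
usual deployed-ceph workflow) could look like:

  openstack overcloud ceph deploy \
      deployed_metal.yaml \
      -o deployed_ceph.yaml \
      --ceph-vip ~/ceph_services.yaml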

Change-Id: I4450814bc2d25424a955150036f4cefc13847837
Francesco Pantano
2022-01-20 22:07:40 +01:00
parent 75b27c6a9d
commit f911410b89
2 changed files with 63 additions and 0 deletions


@@ -0,0 +1,50 @@
---
features:
- |
A new option --ceph-vip for the "openstack overcloud ceph deploy" command
has been added. This option may be used to reserve VIP(s) for each Ceph
service specified by the 'service/network' mapping defined as input.
For instance, a generic ceph service mapping can be something like the
following::
    ---
    ceph_services:
      - service: ceph_nfs
        network: storage_cloud_0
      - service: ceph_rgw
        network: storage_cloud_0
For each service added to the list above, a virtual IP is created on
the specified network and used as the frontend_vip of the ingress
daemon. When no subnet is specified, a default `<network>_subnet`
pattern is assumed. If the subnet does not follow the
`<network>_subnet` pattern, a subnet for the VIP may be specified per
service::
    ---
    ceph_services:
      - service: ceph_nfs
        network: storage_cloud_0
      - service: ceph_rgw
        network: storage_cloud_0
        subnet: storage_leafX
When the `subnet` parameter is provided, it is used by the Ansible
module; otherwise the default pattern is followed. This feature also
supports fixed IPs: when fixed_ip(s) are defined, the module uses that
input to reserve the VIP on the given network. A valid input can be
something like the following::
    ---
    fixed: true
    ceph_services:
      - service: ceph_nfs
        network: storage_cloud_0
        ip_address: 172.16.11.159
      - service: ceph_rgw
        network: storage_cloud_0
        ip_address: 172.16.11.160
When the boolean `fixed` is set to true, the subnet pattern is ignored
and a sanity check on the user input is performed, looking for the
ip_address keys associated with the specified services. If the `fixed`
keyword is missing, the subnet pattern is followed.
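To make that precedence concrete, the following is a minimal Python
sketch of the resolution described above; it is illustrative only (the
function name is hypothetical, and the real logic lives in the
tripleo-ansible module)::

    def resolve_service_target(service, fixed=False):
        """Resolve where the VIP for one ceph_services entry is reserved."""
        if fixed:
            # fixed mode: the subnet pattern is ignored and an explicit
            # ip_address is required for every service (sanity check).
            if 'ip_address' not in service:
                raise ValueError("fixed is true but service %s has no "
                                 "ip_address" % service['service'])
            return {'ip_address': service['ip_address']}
        # non-fixed mode: an explicit 'subnet' key wins, otherwise the
        # default '<network>_subnet' pattern is applied.
        return {'subnet': service.get('subnet',
                                      '%s_subnet' % service['network'])}

    services = [
        {'service': 'ceph_nfs', 'network': 'storage_cloud_0'},
        {'service': 'ceph_rgw', 'network': 'storage_cloud_0',
         'subnet': 'storage_leafX'},
    ]
    for svc in services:
        print(svc['service'], resolve_service_target(svc))
    # ceph_nfs {'subnet': 'storage_cloud_0_subnet'}
    # ceph_rgw {'subnet': 'storage_leafX'}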


@@ -161,6 +161,11 @@ class OvercloudCephDeploy(command.Command):
"Path to an existing ceph.conf with settings "
"to be assimilated by the new cluster via "
"'cephadm bootstrap --config' ")),
parser.add_argument('--ceph-vip',
help=_(
"Path to an existing Ceph services/network "
"mapping file."),
default=None),
spec_group = parser.add_mutually_exclusive_group()
spec_group.add_argument('--ceph-spec',
help=_(
@@ -357,6 +362,14 @@ class OvercloudCephDeploy(command.Command):
        else:
            extra_vars['crush_hierarchy_path'] = \
                os.path.abspath(parsed_args.crush_hierarchy)
        if parsed_args.ceph_vip:
            if not os.path.exists(parsed_args.ceph_vip):
                raise oscexc.CommandError(
                    "ceph vip mapping file not found --ceph-vip %s."
                    % os.path.abspath(parsed_args.ceph_vip))
            else:
                extra_vars['tripleo_cephadm_ha_services_path'] = \
                    os.path.abspath(parsed_args.ceph_vip)
        # optional container vars to pass to playbook
        keys = ['ceph_namespace', 'ceph_image', 'ceph_tag']
        key = 'ContainerImagePrepare'
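Taken together, the new branch validates the mapping file path and
forwards its absolute path to the playbook. A condensed, self-contained
restatement (the wrapper function is hypothetical; variable names
mirror the diff)::

    import os


    def ceph_vip_extra_vars(ceph_vip_path):
        # Mirrors the --ceph-vip handling above: None/empty means the
        # option was not passed, a missing file is fatal, otherwise the
        # absolute path is handed to the playbook as
        # tripleo_cephadm_ha_services_path.
        extra_vars = {}
        if ceph_vip_path:
            if not os.path.exists(ceph_vip_path):
                raise RuntimeError(
                    "ceph vip mapping file not found --ceph-vip %s."
                    % os.path.abspath(ceph_vip_path))
            extra_vars['tripleo_cephadm_ha_services_path'] = \
                os.path.abspath(ceph_vip_path)
        return extra_vars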