openstack-ansible/tests/roles/bootstrap-host/templates/user_variables_ceph.yml.j2
Damian Dabrowski bb1287555c Make ceph use storage network
With the current "Ceph production example", the difference between Ceph's
public and storage networks is not clear.

We assign the Storage Network to compute nodes, but it is not used there.
We also assign the Storage Network to Ceph monitors, but it is not used
there either.

The same problems apply to the AIO environment.

As Dmitriy suggested in [1], Ceph should not use the mgmt network for
storage traffic.

This change makes Ceph use the storage network for:
- OSD<>OSD communication
- client<>OSD communication
- client<>MON communication

I think this is the most common scenario, where all Ceph-related traffic
uses the dedicated (storage) network and does not depend on the mgmt
network; see the sketch below.
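
For reference, a minimal sketch of the intended result in
user_variables.yml, using ceph-ansible's public_network/cluster_network
variables (the 172.29.244.0/22 subnet is the conventional OSA storage
range and is illustrative only; deployments may use a different subnet):

    # MON and client<>OSD traffic binds to the storage bridge/subnet
    monitor_interface: br-storage
    public_network: 172.29.244.0/22
    # OSD<>OSD replication traffic stays on the same dedicated network
    cluster_network: 172.29.244.0/22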

This change affects both "Ceph production example" docs and AIO
environments.

[1] https://review.opendev.org/c/openstack/openstack-ansible/+/856566

Change-Id: I74387a2e961e2b8355ea6a0c889b2f5674233ebf
2022-11-11 10:45:51 +01:00


---
# Copyright 2017, Logan Vig <logan2211@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## ceph-ansible AIO settings
is_hci: true
common_single_host_mode: true
monitor_interface: "{{ ('metal' in bootstrap_host_scenarios_expanded) | ternary('br-storage', 'eth2') }}" # Storage network in the AIO
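# Derive Ceph's public network from the AIO storage subnet so that
# client<>MON and client<>OSD traffic stays off the mgmt network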
public_network: "{{ (storage_range ~ '.0/' ~ netmask) | ansible.utils.ipaddr('net') }}"
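# Collocated scenario: the journal shares the OSD data device; a small
# journal (size in MB) is enough for a single-host test build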
journal_size: 100
osd_scenario: collocated
ceph_conf_overrides_custom:
  global:
    # Raise the PG-per-OSD limit: an AIO has a single collocated OSD
    # hosting every pool
    mon_max_pg_per_osd: 500
openstack_config: true # Ceph ansible automatically creates pools & keys
cinder_default_volume_type: aio_ceph
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
nova_libvirt_images_rbd_pool: vms
# NOTE(noonedeadpunk): the test list is restricted due to a Ceph bug,
# tracked at https://tracker.ceph.com/issues/46295
tempest_test_includelist:
  - tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern