
Merge "Use appropriate allocation pools for StorageNFS" into stable/rocky

changes/47/760147/1
Zuul, 7 months ago; committed by Gerrit Code Review
commit f87ebe935c
2 changed files with 37 additions and 3 deletions:

  1. network_data_ganesha.yaml (+17, -3)
  2. releasenotes/notes/fix-allocation-range-for-StorageNFS-net.yaml-bd77be924e8b7056.yaml (+20, -0)

network_data_ganesha.yaml (+17, -3)

@@ -5,6 +5,9 @@
 # name: Name of the network (mandatory)
 # name_lower: lowercase version of name used for filenames
 # (optional, defaults to name.lower())
+# service_net_map_replace: if name_lower is set to a custom name this should be set
+# to original default (optional). This field is only necessary when
+# changing the default network names, not when adding a new custom network.
 # enabled: Is the network enabled (optional, defaults to true)
 # NOTE: False will use noop.yaml for unused legacy networks to support upgrades.
 # vlan: vlan for the network (optional)
@@ -109,7 +112,18 @@
   vip: true
   name_lower: storage_nfs
   vlan: 70
-  ip_subnet: '172.16.4.0/24'
-  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
+  ip_subnet: '172.17.0.0/20'
   ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
-  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]
+  # This network is shared by the overcloud deployment and a Neutron provider network
+  # that is set up post-deployment for consumers like Nova VMs to use to mount shares.
+  # The allocation pool specified here is used for the overcloud deployment for interfaces
+  # on the ControllerStorageNfs role nodes and for the VIP where the Ganesha service itself is
+  # exposed. With a default three-controller node deployment, only four IPs are actually needed
+  # for this allocation pool.
+  # When you adapt this file for your own deployment you can of course change the /20 CIDR
+  # and adjust the allocation pool -- just make sure to leave a good-sized range outside the
+  # allocation pool specified here for use in the allocation pool for the overcloud Neutron
+  # StorageNFS provider network's subnet definition.
+  allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]
+  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::4', 'end': 'fd00:fd00:fd00:7000::fffe'}]
+  mtu: 1500
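To see why the old /24 sample starved the provider network, the pool sizes can be checked with Python's standard ipaddress module. This script is illustrative only (it is not part of the commit); the figures it computes follow directly from the old and new pool boundaries shown in the diff above.

```python
import ipaddress

def pool_size(start: str, end: str) -> int:
    """Number of addresses in an inclusive allocation pool."""
    return int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1

# Old sample values (before this change)
old_net = ipaddress.ip_network("172.16.4.0/24")
old_pool = pool_size("172.16.4.4", "172.16.4.250")

# New sample values (after this change)
new_net = ipaddress.ip_network("172.17.0.0/20")
new_pool = pool_size("172.17.0.4", "172.17.0.250")

# Usable host addresses (excluding network and broadcast addresses)
# left outside the overcloud allocation pool -- this is what remains
# for the Neutron StorageNFS provider subnet's DHCP pool.
old_leftover = old_net.num_addresses - 2 - old_pool
new_leftover = new_net.num_addresses - 2 - new_pool

print(old_pool, old_leftover)  # 247 addresses in the pool, 7 left over
print(new_pool, new_leftover)  # 247 addresses in the pool, 3847 left over
```

With the /24 CIDR, the overcloud pool consumed 247 of the 254 usable host addresses, leaving only 7 for the provider subnet; the /20 leaves thousands while keeping the same size overcloud pool.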

releasenotes/notes/fix-allocation-range-for-StorageNFS-net.yaml-bd77be924e8b7056.yaml (+20, -0)

@@ -0,0 +1,20 @@
+---
+upgrade:
+  - |
+    The CIDR for the StorageNFS network in the sample network_data_ganesha.yaml file
+    has been modified to provide more usable IPs for the corresponding Neutron
+    overcloud StorageNFS provider network. Since the CIDR of an existing network
+    cannot be modified, deployments with existing StorageNFS networks should be
+    sure to customize the StorageNFS network definition to use the same CIDR
+    as that in their existing deployment in order to avoid a heat resource failure
+    when updating or upgrading the overcloud.
+fixes:
+  - |
+    Fixed issue in the sample network_data_ganesha.yaml file where the
+    IPv4 allocation range for the StorageNFS network occupies almost
+    the whole of its CIDR. If network_data_ganesha.yaml is used
+    without modification in a customer deployment then there are too
+    few IPs left over in its CIDR for use by the corresponding
+    overcloud Neutron StorageNFS provider network for its overcloud
+    DHCP service.
+    (See `bug: #1889682 <https://bugs.launchpad.net/tripleo/+bug/1889682>`_)
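The comment block added to network_data_ganesha.yaml tells deployers to leave a good-sized range outside the overcloud allocation pool for the provider subnet created post-deployment. A minimal sketch of how such a provider pool could be sanity-checked before use; the 172.17.1.0-172.17.15.254 range here is an illustrative assumption, not a value taken from the commit:

```python
import ipaddress

cidr = ipaddress.ip_network("172.17.0.0/20")

# Overcloud deployment pool from the sample network_data_ganesha.yaml.
overcloud_pool = (ipaddress.ip_address("172.17.0.4"),
                  ipaddress.ip_address("172.17.0.250"))

# Hypothetical pool for the post-deployment Neutron StorageNFS provider
# subnet; this exact range is an assumption for illustration only.
provider_pool = (ipaddress.ip_address("172.17.1.0"),
                 ipaddress.ip_address("172.17.15.254"))

def in_cidr(pool, net):
    """Both pool endpoints fall inside the network's CIDR."""
    return pool[0] in net and pool[1] in net

def overlaps(a, b):
    """Inclusive ranges a and b share at least one address."""
    return a[0] <= b[1] and b[0] <= a[1]

# Both pools must fit in the /20, and must not overlap each other,
# or the overcloud and the provider subnet's DHCP service would
# hand out conflicting addresses.
assert in_cidr(overcloud_pool, cidr) and in_cidr(provider_pool, cidr)
assert not overlaps(overcloud_pool, provider_pool)
```

The same disjointness requirement applies to the IPv6 pools if the provider subnet is dual-stack.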
