
Use appropriate allocation pools for StorageNFS

The StorageNFS network was defined in network_data_ganesha.yaml
using allocation ranges that mirrored those of the other isolated
networks.  But TripleO and the undercloud only need to use a small
number of addresses for overcloud deployment -- in the default
three-controller case, one for the regular StorageNFS interface on
each ControllerNfs role node, and one for the VIP on which the NFS
service is offered.  The bulk of the addresses in the StorageNFS CIDR
should be left out of the allocation range defined in network_data
so that they can be used by the allocation pool for the overcloud
Neutron StorageNFS provider network's subnet without danger of
overlap.

This change uses a CIDR with a shorter prefix so that there will
be sufficient IPs left over after the undercloud's TripleO allocation
pool to deploy almost 4000 clients on the Neutron StorageNFS provider
network's subnet.
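As a sanity check on the arithmetic above (a standalone sketch using the stdlib `ipaddress` module, not part of the change itself):

```python
import ipaddress

# The new StorageNFS CIDR and the TripleO allocation pool from this change.
cidr = ipaddress.ip_network('172.17.0.0/20')
pool_start = ipaddress.ip_address('172.17.0.4')
pool_end = ipaddress.ip_address('172.17.0.250')

# Usable host addresses in the /20 (network and broadcast excluded).
usable = cidr.num_addresses - 2                       # 4094

# Size of the TripleO pool reserved for overcloud deployment.
tripleo_pool = int(pool_end) - int(pool_start) + 1    # 247

# Addresses left over for the Neutron StorageNFS provider subnet's pool.
leftover = usable - tripleo_pool
print(usable, tripleo_pool, leftover)                 # 4094 247 3847
```

So roughly 3850 addresses remain outside the TripleO pool, consistent with "almost 4000 clients" above.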

This commit also includes some minor changes to synchronize
network_data_ganesha.yaml with network_data.yaml since the former
was derived from the latter and should differ from it only by
inclusion of the StorageNFS network.

Closes-bug: #1889682

Change-Id: Ibb50dad42ec3dc154cd27ae9094a9be5d0a2dd28
(cherry picked from commit 4e8a05833c)
(cherry picked from commit 6efb29dcc7)
(cherry picked from commit 688b59301d)
changes/71/750071/1
Tom Barron, 1 year ago
parent commit 2026fa444e

Changed files:
  1. network_data_ganesha.yaml (50)
  2. releasenotes/notes/fix-allocation-range-for-StorageNFS-net.yaml-bd77be924e8b7056.yaml (20)

network_data_ganesha.yaml

@@ -5,7 +5,22 @@
 # name: Name of the network (mandatory)
 # name_lower: lowercase version of name used for filenames
 #             (optional, defaults to name.lower())
+# service_net_map_replace: if name_lower is set to a custom name this should be set
+#                          to original default (optional). This field is only necessary when
+#                          changing the default network names, not when adding a new custom network.
 # enabled: Is the network enabled (optional, defaults to true)
+# external_resource_network_id: Optional. If set, it should be the UUID of an existing already
+#                               created Neutron network that will be used in place of creating a
+#                               new network.
+# external_resource_vip_id: Optional. If set, it should be the UUID of an existing already
+#                           created Neutron port for the VIP that will be used
+#                           in place of creating a new port.
+# external_resource_subnet_id: Optional. If set, it should be the UUID of an existing already
+#                              created Neutron subnet that will be used in place of creating a
+#                              new subnet for the network.
+# external_resource_segment_id: Optional. If set, it should be the UUID of an existing already
+#                               created Neutron segment that will be used in place of creating a
+#                               new segment for the network.
 # NOTE: False will use noop.yaml for unused legacy networks to support upgrades.
 # vlan: vlan for the network (optional)
 # vip: Enable creation of a virtual IP on this network
@@ -20,6 +35,9 @@
 # ipv6_allocation_pools: Set default IPv6 allocation pools if IPv4 allocation pools
 #                        are already defined.
 # gateway_ipv6: Set an IPv6 gateway if IPv4 gateway already defined.
+# routes_ipv6: Optional, list of networks that should be routed via network gateway.
+#              Example: [{'destination':'fd00:fd00:fd00:3004::/64',
+#                         'nexthop':'fd00:fd00:fd00:3000::1'}]
 # ipv6: If ip_subnet not defined, this specifies that the network is IPv6-only.
 # NOTE: IP-related values set parameter defaults in templates, may be overridden,
 # either by operators, or e.g in environments/network-isolation-v6.yaml where we
@@ -29,6 +47,20 @@
 # mtu: Set the maximum transmission unit (MTU) that is guaranteed to pass
 #      through the data path of the segments in the network.
 #      (optional, defaults to 1500)
+# subnets: A map of additional subnets for the network (optional). The map
+#          takes the following format:
+#          {'<subnet name>': {'enabled': '<true|false>',
+#                             'vlan': '<vlan-id>',
+#                             'ip_subnet': '<IP/CIDR>',
+#                             'allocation_pools': '<IP range list>',
+#                             'gateway_ip': '<gateway IP>',
+#                             'routes': '<Routes list>',
+#                             'ipv6_subnet': '<IPv6/CIDR>',
+#                             'ipv6_allocation_pools': '<IPv6 range list>',
+#                             'gateway_ipv6': '<IPv6 gateway>',
+#                             'routes_ipv6': '<Routes list>',
+#                             'external_resource_subnet_id': '<Existing subnet UUID (optional)>'}}
+#                             'external_resource_segment_id': '<Existing segment UUID (optional)>'}}
 #
 # Example:
 # - name Example
@@ -120,8 +152,18 @@
   vip: true
   name_lower: storage_nfs
   vlan: 70
-  ip_subnet: '172.16.4.0/24'
-  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
+  ip_subnet: '172.17.0.0/20'
   ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
-  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]
-  mtu: 1500
+  # This network is shared by the overcloud deployment and a Neutron provider network
+  # that is set up post-deployment for consumers like Nova VMs to use to mount shares.
+  # The allocation pool specified here is used for the overcloud deployment for interfaces
+  # on the ControllerStorageNfs role nodes and for the VIP where the Ganesha service itself is
+  # exposed. With a default three-controller node deployment, only four IPs are actually needed
+  # for this allocation pool.
+  # When you adapt this file for your own deployment you can of course change the /20 CIDR
+  # and adjust the allocation pool -- just make sure to leave a good-sized range outside the
+  # allocation pool specified here for use in the allocation pool for the overcloud Neutron
+  # StorageNFS provider network's subnet definition.
+  allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]
+  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::4', 'end': 'fd00:fd00:fd00:7000::fffe'}]
+  mtu: 1500

releasenotes/notes/fix-allocation-range-for-StorageNFS-net.yaml-bd77be924e8b7056.yaml

@@ -0,0 +1,20 @@
+---
+upgrade:
+  - |
+    The CIDR for the StorageNFS network in the sample network_data_ganesha.yaml file
+    has been modified to provide more usable IPs for the corresponding Neutron
+    overcloud StorageNFS provider network. Since the CIDR of an existing network
+    cannot be modified, deployments with existing StorageNFS networks should be
+    sure to customize the StorageNFS network definition to use the same CIDR
+    as that in their existing deployment in order to avoid a heat resource failure
+    when updating or upgrading the overcloud.
+fixes:
+  - |
+    Fixed issue in the sample network_data_ganesha.yaml file where the
+    IPv4 allocation range for the StorageNFS network occupies almost
+    the whole of its CIDR. If network_data_ganesha.yaml is used
+    without modification in a customer deployment then there are too
+    few IPs left over in its CIDR for use by the corresponding
+    overcloud Neutron StorageNFS provider network for its overcloud
+    DHCP service.
+    (See `bug: #1889682 <https://bugs.launchpad.net/tripleo/+bug/1889682>`_)