Add action for configuring mgmt network for comms with lb instances

End user is expected to run the `configure-resources` action
after the deployment is complete and the cloud APIs are
reliably available.

The end user may rely on our defaults or may create the
required resources themselves.  The same action is used to
prompt configuration of the Octavia service; whether it
performs creation or just discovery depends on the settings
of the `create-mgmt-network` and `custom-amp-flavor-id`
configuration options.

Switch to using the more recent ``section-database`` part
for database configuration.

Change-Id: I9529a8a633ef0ba696c22570ec388991ba408ac4
Frode Nordahl 2018-10-24 14:22:50 +02:00
parent 3f1480e8c9
commit a562b391bd
28 changed files with 2222 additions and 137 deletions


@@ -8,13 +8,100 @@ OpenStack Rocky or later is required.
Octavia relies on services from a fully functional OpenStack Cloud and expects
to be able to add images to glance, create networks in Neutron, store
-certificate secrets in Vault (optionally) and spin up instances with Nova.
+certificate secrets in Barbican (preferably utilizing a Vault backend) and spin
+up instances with Nova.
juju deploy octavia --config openstack-origin=bionic:rocky
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia vault
An example overlay bundle to be used in conjunction with the
[OpenStack Base bundle](https://jujucharms.com/openstack-base/) can be found
[here](https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/overlays/loadbalancer-octavia.yaml)
## Required configuration
After the deployment is complete and has settled, you must run the `configure-resources` action on the lead unit.
This will prompt it to configure required resources in the deployed cloud for Octavia to operate.
You must also configure certificates for internal communication between the controller and its load balancer instances.
Excerpt from the upstream [operator maintenance guide](https://docs.openstack.org/octavia/latest/admin/guides/operator-maintenance.html#rotating-cryptographic-certificates):
> Octavia secures the communication between the amphora agent and the control plane with two-way SSL encryption. To accomplish that, several certificates are distributed in the system:
>
> * Control plane:
> * Amphora certificate authority (CA) certificate: Used to validate amphora certificates if Octavia acts as a Certificate Authority to issue new amphora certificates
> * Client certificate: Used to authenticate with the amphora
> * Amphora:
> * Client CA certificate: Used to validate control plane client certificate
> * Amphora certificate: Presented to control plane processes to prove amphora identity.
The charm represents this with the following mandatory configuration options:
* `lb-mgmt-issuing-cacert`
* `lb-mgmt-issuing-ca-private-key`
* `lb-mgmt-issuing-ca-key-passphrase`
* `lb-mgmt-controller-cacert`
* `lb-mgmt-controller-cert`
You must issue/request certificates that meet your organization's requirements.
__NOTE__ It is important not to use the same CA certificate for both `lb-mgmt-issuing-cacert` and `lb-mgmt-controller-cacert` configuration options. Failing to keep them separate may lead to abuse of certificate data to gain access to other ``Amphora`` instances in the event one of them is compromised.
To get you started we include an example of generating your own certificates:
mkdir -p demoCA/newcerts
touch demoCA/index.txt
touch demoCA/index.txt.attr
openssl genrsa -passout pass:foobar -des3 -out issuing_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes -key issuing_ca_key.pem \
-config /etc/ssl/openssl.cnf \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-days 30 \
-out issuing_ca.pem
openssl genrsa -passout pass:foobar -des3 -out controller_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes \
-key controller_ca_key.pem \
-config /etc/ssl/openssl.cnf \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-days 30 \
-out controller_ca.pem
openssl req \
-newkey rsa:2048 -nodes -keyout controller_key.pem \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-out controller.csr
openssl ca -passin pass:foobar -config /etc/ssl/openssl.cnf \
-cert controller_ca.pem -keyfile controller_ca_key.pem \
-create_serial -batch \
-in controller.csr -days 30 -out controller_cert.pem
cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem
To apply the configuration execute:
juju config octavia \
lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
lb-mgmt-issuing-ca-key-passphrase=foobar \
lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
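If you script the deployment instead of using the shell commands above, the same base64 values can be produced from Python. A minimal sketch (the helper name is ours, not part of the charm):

```python
import base64


def b64_file(path):
    """Return a file's contents base64-encoded, in the form the charm's
    certificate configuration options expect."""
    with open(path, 'rb') as f:
        return base64.b64encode(f.read()).decode('ascii')
```

For example, `b64_file('controller_ca.pem')` yields the value to pass as `lb-mgmt-issuing-cacert`.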
## Optional resource configuration
By executing the `configure-resources` action the charm will create the resources
required for operation of the Octavia service. If you want to manage these
resources yourself you must set the `create-mgmt-network` configuration option to False.
You can at any time use the `configure-resources` action to prompt immediate resource
discovery.
To let the charm discover the resources and apply the appropriate configuration
to Octavia, you must use [Neutron resource tags](https://docs.openstack.org/neutron/latest/contributor/internals/tag.html).
The UUID of the Nova flavor you want to use must be set with the
`custom-amp-flavor-id` configuration option.
| Resource type | Tag | Note |
| ------------------------- | -------------------- | ------------------------ |
| Neutron Network | charm-octavia | |
| Neutron Subnet | charm-octavia | |
| Neutron Router | charm-octavia | |
| `Amphora` Security Group | charm-octavia | |
| Controller Security Group | charm-octavia-health | |
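With `create-mgmt-network` set to False the charm only discovers resources, so each pre-created resource must carry the tag from the table above. A sketch of applying those tags with python-neutronclient's `add_tag` call (the same call the charm itself uses internally); the function name and the ID arguments are illustrative:

```python
def tag_octavia_resources(nc, network_id, subnet_id, router_id,
                          amp_secgrp_id, health_secgrp_id):
    """Tag pre-created Neutron resources so the octavia charm can
    discover them.  ``nc`` is a neutronclient.v2_0.client.Client."""
    nc.add_tag('networks', network_id, 'charm-octavia')
    nc.add_tag('subnets', subnet_id, 'charm-octavia')
    nc.add_tag('routers', router_id, 'charm-octavia')
    nc.add_tag('security_groups', amp_secgrp_id, 'charm-octavia')
    nc.add_tag('security_groups', health_secgrp_id, 'charm-octavia-health')
```

After tagging, run the `configure-resources` action to prompt immediate discovery.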
# Bugs

src/actions.yaml (new file, 6 lines)

@@ -0,0 +1,6 @@
configure-resources:
description: |
Create or discover Neutron and Nova resources for Octavia in the deployed
cloud. This action must be run after deployment is complete and is a
prerequisite for successful consumption of the Octavia load balancer
service.

src/actions/actions.py (new executable file, 107 lines)

@@ -0,0 +1,107 @@
#!/usr/local/sbin/charm-env python3
# Copyright 2018 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
# Load modules from $CHARM_DIR/lib
sys.path.append('lib')
sys.path.append('reactive')
from charms.layer import basic
basic.bootstrap_charm_deps()
basic.init_config_states()
import charms.reactive as reactive
import charms.leadership as leadership
import charms_openstack.bus
import charms_openstack.charm as charm
import charmhelpers.core as ch_core
import charm.openstack.api_crud as api_crud
charms_openstack.bus.discover()
def configure_resources(*args):
"""Create/discover resources for management of load balancer instances."""
if not reactive.is_flag_set('leadership.is_leader'):
return ch_core.hookenv.action_fail('action must be run on the leader '
'unit.')
if not reactive.all_flags_set('identity-service.available',
'neutron-api.available',
'neutron-openvswitch.connected',
'amqp.available'):
return ch_core.hookenv.action_fail('all required relations not '
'available, please defer action '
'until deployment is complete.')
identity_service = reactive.endpoint_from_flag(
'identity-service.available')
try:
(network, secgrp) = api_crud.get_mgmt_network(
identity_service,
create=reactive.is_flag_set('config.default.create-mgmt-network'),
)
except api_crud.APIUnavailable as e:
ch_core.hookenv.action_fail('Neutron API not available yet, deferring '
'network creation/discovery. ("{}")'
.format(e))
return
if network and secgrp:
leadership.leader_set({'amp-boot-network-list': network['id'],
'amp-secgroup-list': secgrp['id']})
if reactive.is_flag_set('config.default.custom-amp-flavor-id'):
# NOTE(fnordahl): custom flavor provided through configuration is
# handled in the charm class configuration property.
try:
flavor = api_crud.get_nova_flavor(identity_service)
except api_crud.APIUnavailable as e:
ch_core.hookenv.action_fail('Nova API not available yet, '
'deferring flavor '
'creation. ("{}")'
.format(e))
return
else:
leadership.leader_set({'amp-flavor-id': flavor.id})
# execute port setup for leader, the followers will execute theirs on
# `leader-settings-changed` hook
with charm.provide_charm_instance() as octavia_charm:
api_crud.setup_hm_port(identity_service, octavia_charm)
octavia_charm.render_all_configs()
octavia_charm._assess_status()
ACTIONS = {
'configure-resources': configure_resources,
}
def main(args):
action_name = os.path.basename(args[0])
try:
action = ACTIONS[action_name]
except KeyError:
return 'Action {} undefined'.format(action_name)
else:
try:
action(args)
except Exception as e:
ch_core.hookenv.action_fail(str(e))
if __name__ == '__main__':
sys.exit(main(sys.argv))


@@ -0,0 +1 @@
actions.py
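The file above is a symlink named after the action and pointing at actions.py; `main()` in actions.py dispatches on `os.path.basename(sys.argv[0])`, so one script can back any number of actions. The pattern in isolation (paths and handler name are illustrative):

```python
import os

# Dispatch table maps an action name to its handler, as in actions.py.
ACTIONS = {'configure-resources': 'configure_resources_handler'}


def resolve_action(argv0, actions):
    """Map the invoked script name (the symlink) to its handler."""
    return actions.get(os.path.basename(argv0))

# The unit agent executes e.g. <charm-dir>/actions/configure-resources,
# so argv[0] carries the action name.
```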


@@ -10,6 +10,8 @@ options:
type: string
default:
description: |
Note that setting this configuration option is mandatory.
.
Certificate Authority Certificate used to issue new certificates stored
on the ``Amphora`` load balancer instances. The ``Amphorae`` use them to
authenticate themselves to the ``Octavia`` controller services.
@@ -26,6 +28,8 @@ options:
type: string
default:
description: |
Note that setting this configuration option is mandatory.
.
Private key for the Certificate Authority set in ``lb-mgmt-issuing-ca``.
.
Note that these certificates are not used for any load balancer payload
@@ -34,6 +38,8 @@ options:
type: string
default:
description: |
Note that setting this configuration option is mandatory.
.
Passphrase for the key set in ``lb-mgmt-ca-private-key``.
.
NOTE: As of this writing Octavia requires the private key to be protected
@@ -45,6 +51,8 @@ options:
type: string
default:
description: |
Note that setting this configuration option is mandatory.
.
Certificate Authority Certificate installed on ``Amphorae`` with the
purpose of the ``Amphora`` agent using it to authenticate connections
from ``Octavia`` controller services.
@@ -61,6 +69,8 @@ options:
type: string
default:
description: |
Note that setting this configuration option is mandatory.
.
Certificate used by the ``Octavia`` controller to authenticate itself to
its ``Amphorae``.
.
@@ -74,3 +84,31 @@ options:
instances.
.
The default behaviour is to let the charm create and maintain the flavor.
create-mgmt-network:
type: boolean
default: True
description: |
The ``octavia`` charm utilizes Neutron Resource tags to locate networks,
security groups and ports for use with the service.
.
If none are found the default behaviour is to create the resources
required for management of the load balancer instances.
.
Set this to False if you want to be in control of creation and management
of these resources yourself. Please note that the service will not be
fully operational until they are available.
.
Refer to the documentation on https://jujucharms.com/octavia/ for a
complete list of resources required and how they should be tagged.
amp-image-tag:
type: string
default: octavia-amphora
description: |
Glance image tag for selection of Amphorae image to boot load balancer
instances.
amp-image-owner-id:
type: string
default:
description: |
Restrict glance image selection to a specific owner ID. This is a
recommended security setting.


@@ -5,6 +5,7 @@ includes:
- interface:rabbitmq
- interface:keystone
- interface:neutron-load-balancer
- interface:neutron-plugin
options:
basic:
use_venv: True

src/lib/__init__.py (new file, 13 lines)

@@ -0,0 +1,13 @@
# Copyright 2018 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -21,20 +21,64 @@
# and YAML for the ``fixed_ips`` field when providing details for a Neutron
# port.
import neutronclient
import subprocess
from keystoneauth1 import identity as keystone_identity
from keystoneauth1 import session as keystone_session
from keystoneauth1 import exceptions as keystone_exceptions
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client
import charm.openstack.octavia as octavia # for constants
import charmhelpers.core as ch_core
import charmhelpers.contrib.network.ip as ch_net_ip
NEUTRON_TEMP_EXCS = (keystone_exceptions.catalog.EndpointNotFound,
keystone_exceptions.connection.ConnectFailure,
neutronclient.common.exceptions.ServiceUnavailable)
class APIUnavailable(Exception):
"""Exception raised when a temporary availability issue occurs."""
def __init__(self, service_type, resource_type, upstream_exception):
"""Initialize APIUnavailable exception.
:param service_type: Name of service we had issues with (e.g. `nova`).
:type service_type: str
:param resource_type: Name of resource we had issues with
(e.g. `flavors`)
:type resource_type: str
:param upstream_exception: Reference to the exception caught
:type upstream_exception: BaseException derived object
"""
self.service_type = service_type
self.resource_type = resource_type
self.upstream_exception = upstream_exception
class DuplicateResource(Exception):
"""Exception raised when resource query result in multiple entries."""
def __init__(self, service_type, resource_type, data=None):
"""Initialize DuplicateResource exception.
:param service_type: Name of service we had issues with (e.g. `nova`).
:type service_type: str
:param resource_type: Name of resource we had issues with
(e.g. `flavors`)
:type resource_type: str
:param data: Data from search result
:type data: (Optional)any
"""
self.service_type = service_type
self.resource_type = resource_type
self.data = data
def session_from_identity_service(identity_service):
"""Get Keystone Session from `identity-service` relation.
@@ -63,11 +107,6 @@ def get_nova_flavor(identity_service):
A side effect of calling this function is that Nova flavors are
created if they do not already exist.
Handle exceptions ourself without Tenacity so we can detect Nova API
readiness. At present we do not have a relation or interface to inform us
about Nova API readiness. This function also executes just one or two API
calls.
:param identity_service: reactive Endpoint of type ``identity-service``
:type identity_service: RelationBase class
:returns: Nova Flavor Resource object
@@ -89,3 +128,468 @@ def get_nova_flavor(identity_service):
nova_client.exceptions.ConnectionRefused,
nova_client.exceptions.ClientException) as e:
raise APIUnavailable('nova', 'flavors', e)
def get_hm_port(identity_service, local_unit_name, local_unit_address):
"""Get or create a per unit Neutron port for Octavia Health Manager.
A side effect of calling this function is that a port is created if one
does not already exist.
:param identity_service: reactive Endpoint of type ``identity-service``
:type identity_service: RelationBase class
:param local_unit_name: Name of juju unit, used to build tag name for port
:type local_unit_name: str
:param local_unit_address: DNS resolvable IP address of unit, used to
build Neutron port ``binding:host_id``
:type local_unit_address: str
:returns: Port details extracted from result of call to
neutron_client.list_ports or neutron_client.create_port
:rtype: dict
:raises: api_crud.APIUnavailable, api_crud.DuplicateResource
"""
session = session_from_identity_service(identity_service)
try:
nc = neutron_client.Client(session=session)
resp = nc.list_networks(tags='charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'networks', e)
network = None
n_resp = len(resp.get('networks', []))
if n_resp == 1:
network = resp['networks'][0]
elif n_resp > 1:
raise DuplicateResource('neutron', 'networks', data=resp)
else:
ch_core.hookenv.log('No network tagged with `charm-octavia` exists, '
'deferring port setup awaiting network and port '
'(re-)creation.', level=ch_core.hookenv.WARNING)
return
health_secgrp = None
try:
resp = nc.list_security_groups(tags='charm-octavia-health')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_groups', e)
n_resp = len(resp.get('security_groups', []))
if n_resp == 1:
health_secgrp = resp['security_groups'][0]
elif n_resp > 1:
raise DuplicateResource('neutron', 'security_groups', data=resp)
else:
ch_core.hookenv.log('No security group tagged with '
'`charm-octavia-health` exists, deferring '
'port setup awaiting network and port '
'(re-)creation...',
level=ch_core.hookenv.WARNING)
return
try:
resp = nc.list_ports(tags='charm-octavia-{}'
.format(local_unit_name))
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'ports', e)
n_resp = len(resp.get('ports', []))
if n_resp == 1:
hm_port = resp['ports'][0]
elif n_resp > 1:
raise DuplicateResource('neutron', 'ports', data=resp)
else:
# create new port
try:
resp = nc.create_port(
{
'port': {
# avoid race with OVS agent attempting to bind port
# before it is created in the local unit's OVSDB
'admin_state_up': False,
'binding:host_id': ch_net_ip.get_hostname(
local_unit_address, fqdn=False),
'device_owner': 'Octavia:health-mgr',
'security_groups': [
health_secgrp['id'],
],
'name': 'octavia-health-manager-{}-listen-port'
.format(local_unit_name),
'network_id': network['id'],
},
})
hm_port = resp['port']
ch_core.hookenv.log('Created port {}'.format(hm_port['id']),
ch_core.hookenv.INFO)
# unit specific tag is used by each unit to load their state
nc.add_tag('ports', hm_port['id'],
'charm-octavia-{}'
.format(local_unit_name))
# charm-wide tag is used by leader to load cluster state and build
# ``controller_ip_port_list`` configuration property
nc.add_tag('ports', hm_port['id'], 'charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'ports', e)
return hm_port
def toggle_hm_port(identity_service, local_unit_name, enabled=True):
"""Toggle administrative state of Neutron port for local unit.
:param identity_service: reactive Endpoint of type ``identity-service``
:type identity_service: RelationBase class
:param local_unit_name: Name of juju unit, used to build tag name for port
:type local_unit_name: str
:param enabled: Desired state
:type enabled: bool
:raises: api_crud.APIUnavailable
"""
session = session_from_identity_service(identity_service)
try:
nc = neutron_client.Client(session=session)
resp = nc.list_ports(tags='charm-octavia-{}'
.format(local_unit_name))
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'ports', e)
for port in (resp['ports']):
nc.update_port(port['id'], {'port': {'admin_state_up': enabled}})
def setup_hm_port(identity_service, octavia_charm):
"""Create a per unit Neutron and OVS port for Octavia Health Manager.
This is used to plug the unit into the overlay network for direct
communication with the octavia managed load balancer instances running
within the deployed cloud.
:param identity_service: reactive Endpoint of type ``identity-service``
:type identity_service: RelationBase class
:param octavia_charm: charm instance
:type octavia_charm: OctaviaCharm class instance
:returns: True on change to local unit, False otherwise
:rtype: bool
:raises: api_crud.APIUnavailable, api_crud.DuplicateResource
"""
unit_changed = False
hm_port = get_hm_port(
identity_service,
octavia_charm.local_unit_name,
octavia_charm.local_address)
if not hm_port:
ch_core.hookenv.log('No network tagged with `charm-octavia` '
'exists, deferring port setup awaiting '
'network and port (re-)creation...',
level=ch_core.hookenv.WARNING)
return
HM_PORT_MAC = hm_port['mac_address']
HM_PORT_ID = hm_port['id']
try:
subprocess.check_output(
['ip', 'link', 'show', octavia.OCTAVIA_MGMT_INTF],
stderr=subprocess.STDOUT, universal_newlines=True)
except subprocess.CalledProcessError as e:
if 'does not exist' in e.output:
subprocess.check_call(
['ovs-vsctl', '--', 'add-port',
octavia.OCTAVIA_INT_BRIDGE, octavia.OCTAVIA_MGMT_INTF,
'--', 'set', 'Interface', octavia.OCTAVIA_MGMT_INTF,
'type=internal',
'--', 'set', 'Interface', octavia.OCTAVIA_MGMT_INTF,
'external-ids:iface-status=active',
'--', 'set', 'Interface', octavia.OCTAVIA_MGMT_INTF,
'external-ids:attached-mac={}'.format(HM_PORT_MAC),
'--', 'set', 'Interface', octavia.OCTAVIA_MGMT_INTF,
'external-ids:iface-id={}'.format(HM_PORT_ID),
'--', 'set', 'Interface', octavia.OCTAVIA_MGMT_INTF,
'external-ids:skip_cleanup=true',
])
ch_core.hookenv.log('add OVS port', level=ch_core.hookenv.INFO)
# post boot reconfiguration of systemd-networkd does not appear to
# set the MAC address on the interface, do it ourselves.
subprocess.check_call(
['ip', 'link', 'set', octavia.OCTAVIA_MGMT_INTF,
'up', 'address', HM_PORT_MAC])
# Signal that change has been made to local unit
unit_changed = True
else:
# unknown error, raise
raise e
if not hm_port['admin_state_up'] or hm_port['status'] == 'DOWN':
# NOTE(fnordahl) there appears to be a handful of race conditions
# hitting us sometimes making the newly created ports unusable.
# as a workaround we toggle the port belonging to us.
# a disable/enable round trip makes Neutron reset the port
# configuration which resolves these situations.
ch_core.hookenv.log('toggling port {} (admin_state_up: {} '
'status: {})'
.format(hm_port['id'],
hm_port['admin_state_up'],
hm_port['status']),
level=ch_core.hookenv.INFO)
toggle_hm_port(identity_service,
octavia_charm.local_unit_name,
enabled=False)
toggle_hm_port(identity_service,
octavia_charm.local_unit_name,
enabled=True)
return unit_changed
def get_port_ips(identity_service):
"""Extract IP information from Neutron ports tagged with ``charm-octavia``
:param identity_service: reactive Endpoint of type ``identity-service``
:type identity_service: RelationBase class
:returns: List of IP addresses extracted from port details in search result
:rtype: list of str
:raises: api_crud.APIUnavailable
"""
session = session_from_identity_service(identity_service)
try:
nc = neutron_client.Client(session=session)
resp = nc.list_ports(tags='charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'ports', e)
neutron_ip_list = []
for port in resp['ports']:
for ip_info in port['fixed_ips']:
neutron_ip_list.append(ip_info['ip_address'])
return neutron_ip_list
def get_mgmt_network(identity_service, create=True):
"""Get or create Neutron network resources for Octavia.
A side effect of calling this function is that network resources are
created if they do not already exist, unless ``create`` is set to False.
:param identity_service: reactive Endpoint of type ``identity-service``
:type identity_service: RelationBase class
:param create: (Optional) Create resources that do not exist, default True
:type create: bool
:returns: Tuple of network and security group resources
:rtype: tuple
:raises: api_crud.APIUnavailable, api_crud.DuplicateResource
"""
session = session_from_identity_service(identity_service)
try:
nc = neutron_client.Client(session=session)
resp = nc.list_networks(tags='charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'networks', e)
n_resp = len(resp.get('networks', []))
if n_resp == 1:
network = resp['networks'][0]
elif n_resp > 1:
raise DuplicateResource('neutron', 'networks', data=resp)
elif not create:
ch_core.hookenv.log('No network tagged with `charm-octavia` exists, '
'and we are configured to not create resources. '
'Awaiting end user resource creation.',
level=ch_core.hookenv.WARNING)
return
else:
try:
resp = nc.create_network({
'network': {'name': octavia.OCTAVIA_MGMT_NET}})
network = resp['network']
nc.add_tag('networks', network['id'], 'charm-octavia')
ch_core.hookenv.log('Created network {}'.format(network['id']),
level=ch_core.hookenv.INFO)
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'networks', e)
try:
resp = nc.list_subnets(tags='charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'subnets', e)
n_resp = len(resp.get('subnets', []))
subnets = None
if n_resp < 1 and create:
# make rfc4193 Unique Local IPv6 Unicast Addresses from network UUID
rfc4193_addr = 'fc00'
for n in [0, 4, 8]:
rfc4193_addr += ':' + network['id'].split('-')[4][n:n + 4]
rfc4193_addr += '::/64'
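# A worked example (ours, not part of the original code): for a network
# ID of '123e4567-e89b-12d3-a456-426614174000' the last UUID field is
# '426614174000', and the slices taken at n = 0, 4, 8 produce the
# Unique Local Address prefix 'fc00:4266:1417:4000::/64'.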
try:
resp = nc.create_subnet(
{
'subnets': [
{
'name': octavia.OCTAVIA_MGMT_SUBNET + 'v6',
'ip_version': 6,
'ipv6_address_mode': 'slaac',
'ipv6_ra_mode': 'slaac',
'cidr': rfc4193_addr,
'network_id': network['id'],
},
],
})
subnets = resp['subnets']
for subnet in resp['subnets']:
nc.add_tag('subnets', subnet['id'], 'charm-octavia')
ch_core.hookenv.log('Created subnet {} with cidr {}'
.format(subnet['id'], subnet['cidr']),
level=ch_core.hookenv.INFO)
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'subnets', e)
try:
resp = nc.list_routers(tags='charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'routers', e)
n_resp = len(resp.get('routers', []))
router = None
if n_resp < 1 and create:
try:
resp = nc.create_router(
{
'router': {
'name': octavia.OCTAVIA_MGMT_NAME_PREFIX,
}
})
router = resp['router']
nc.add_tag('routers', router['id'], 'charm-octavia')
ch_core.hookenv.log('Created router {}'.format(router['id']),
level=ch_core.hookenv.INFO)
for subnet in subnets:
nc.add_interface_router(router['id'],
{'subnet_id': subnet['id']})
ch_core.hookenv.log('Added interface from router {} '
'to subnet {}'
.format(router['id'], subnet['id']),
level=ch_core.hookenv.INFO)
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'routers', e)
try:
resp = nc.list_security_groups(tags='charm-octavia')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_groups', e)
n_resp = len(resp.get('security_groups', []))
if n_resp == 1:
secgrp = resp['security_groups'][0]
elif n_resp > 1:
raise DuplicateResource('neutron', 'security_groups', data=resp)
elif not create:
ch_core.hookenv.log('No security group tagged with `charm-octavia` '
'exists, and we are configured to not create '
'resources. Awaiting end user resource '
'creation.',
level=ch_core.hookenv.WARNING)
return
else:
try:
resp = nc.create_security_group(
{
'security_group': {
'name': octavia.OCTAVIA_MGMT_SECGRP,
},
})
secgrp = resp['security_group']
nc.add_tag('security_groups', secgrp['id'], 'charm-octavia')
ch_core.hookenv.log('Created security group "{}"'
.format(secgrp['id']),
level=ch_core.hookenv.INFO)
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_groups', e)
if create:
security_group_rules = [
{
'direction': 'ingress',
'protocol': 'icmpv6',
'ethertype': 'IPv6',
'security_group_id': secgrp['id'],
},
{
'direction': 'ingress',
'protocol': 'tcp',
'ethertype': 'IPv6',
'port_range_min': '22',
'port_range_max': '22',
'security_group_id': secgrp['id'],
},
{
'direction': 'ingress',
'protocol': 'tcp',
'ethertype': 'IPv6',
'port_range_min': '9443',
'port_range_max': '9443',
'security_group_id': secgrp['id'],
},
]
for rule in security_group_rules:
try:
nc.create_security_group_rule({'security_group_rule': rule})
except neutronclient.common.exceptions.Conflict:
pass
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_group_rules', e)
try:
resp = nc.list_security_groups(tags='charm-octavia-health')
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_groups', e)
n_resp = len(resp.get('security_groups', []))
if n_resp == 1:
health_secgrp = resp['security_groups'][0]
elif n_resp > 1:
raise DuplicateResource('neutron', 'security_groups', data=resp)
elif not create:
ch_core.hookenv.log('No security group tagged with '
'`charm-octavia-health` exists, and we are '
'configured to not create resources. Awaiting '
'end user resource creation.',
level=ch_core.hookenv.WARNING)
return
else:
try:
resp = nc.create_security_group(
{
'security_group': {
'name': octavia.OCTAVIA_HEALTH_SECGRP,
},
})
health_secgrp = resp['security_group']
nc.add_tag('security_groups', health_secgrp['id'],
'charm-octavia-health')
ch_core.hookenv.log('Created security group "{}"'
.format(health_secgrp['id']),
level=ch_core.hookenv.INFO)
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_groups', e)
if create:
health_security_group_rules = [
{
'direction': 'ingress',
'protocol': 'icmpv6',
'ethertype': 'IPv6',
'security_group_id': health_secgrp['id'],
},
{
'direction': 'ingress',
'protocol': 'udp',
'ethertype': 'IPv6',
'port_range_min': octavia.OCTAVIA_HEALTH_LISTEN_PORT,
'port_range_max': octavia.OCTAVIA_HEALTH_LISTEN_PORT,
'security_group_id': health_secgrp['id'],
},
]
for rule in health_security_group_rules:
try:
nc.create_security_group_rule({'security_group_rule': rule})
except neutronclient.common.exceptions.Conflict:
pass
except NEUTRON_TEMP_EXCS as e:
raise APIUnavailable('neutron', 'security_groups', e)
resp = nc.list_security_group_rules(security_group_id=health_secgrp['id'])
return (network, secgrp)


@@ -14,6 +14,7 @@
import base64
import collections
import json
import os
import subprocess
@@ -22,8 +23,10 @@ import charms_openstack.adapters
import charms_openstack.ip as os_ip
import charms.leadership as leadership
import charms.reactive as reactive
import charmhelpers.core as ch_core
import charmhelpers.contrib.network.ip as ch_net_ip
OCTAVIA_DIR = '/etc/octavia'
OCTAVIA_CACERT_DIR = os.path.join(OCTAVIA_DIR, 'certs')
@@ -31,25 +34,145 @@ OCTAVIA_CONF = os.path.join(OCTAVIA_DIR, 'octavia.conf')
OCTAVIA_WEBSERVER_SITE = 'octavia-api'
OCTAVIA_WSGI_CONF = '/etc/apache2/sites-available/octavia-api.conf'
OCTAVIA_INT_BRIDGE = 'br-int'
OCTAVIA_MGMT_INTF = 'o-hm0'
OCTAVIA_MGMT_INTF_CONF = ('/etc/systemd/network/99-charm-octavia-{}.network'
.format(OCTAVIA_MGMT_INTF))
OCTAVIA_MGMT_NAME_PREFIX = 'lb-mgmt'
OCTAVIA_MGMT_NET = OCTAVIA_MGMT_NAME_PREFIX + '-net'
OCTAVIA_MGMT_SUBNET = OCTAVIA_MGMT_NAME_PREFIX + '-subnet'
OCTAVIA_MGMT_SECGRP = OCTAVIA_MGMT_NAME_PREFIX + '-sec-grp'
OCTAVIA_HEALTH_SECGRP = 'lb-health-mgr-sec-grp'
OCTAVIA_HEALTH_LISTEN_PORT = '5555'
charms_openstack.charm.use_defaults('charm.default-select-release')
class OctaviaAdapters(charms_openstack.adapters.OpenStackAPIRelationAdapters):
"""Adapters class for the Octavia charm."""
def __init__(self, relations, charm_instance=None):
super(OctaviaAdapters, self).__init__(
relations,
options_instance=charms_openstack.adapters.APIConfigurationAdapter(
service_name='octavia',
port_map=OctaviaCharm.api_ports),
charm_instance=charm_instance)
@charms_openstack.adapters.config_property
def health_manager_hwaddr(cls):
"""Return hardware address for Health Manager interface.
:param cls: charms_openstack.adapters.ConfigurationAdapter derived class
instance. Charm class instance is at cls.charm_instance.
:type cls: charms_openstack.adapters.ConfigurationAdapter
:returns: hardware address for unit local Health Manager interface.
:rtype: str
"""
try:
external_ids = json.loads(
subprocess.check_output(['ovs-vsctl', 'get', 'Interface',
OCTAVIA_MGMT_INTF,
'external_ids:attached-mac'],
universal_newlines=True))
except (subprocess.CalledProcessError, OSError) as e:
ch_core.hookenv.log('Unable to query OVS, not ready? ("{}")'
.format(e),
level=ch_core.hookenv.DEBUG)
return
return external_ids
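The ``external_ids:attached-mac`` value above comes back from ``ovs-vsctl`` as a JSON-quoted string, which is why the code decodes it with ``json.loads()``. A minimal sketch of that decode step (the MAC address is made up for illustration):

```python
import json

# ovs-vsctl prints the external_ids:attached-mac column as a JSON-quoted
# string, e.g. '"fa:16:3e:aa:bb:cc"\n'; json.loads() both validates the
# output and strips the surrounding quotes, yielding the bare address.
raw = '"fa:16:3e:aa:bb:cc"\n'
mac = json.loads(raw)
print(mac)  # → fa:16:3e:aa:bb:cc
```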
@charms_openstack.adapters.config_property
def health_manager_bind_ip(cls):
"""IP address health manager process should bind to.
The value is configured individually per unit and reflects the IP
address assigned to the specific units tunnel port.
:param cls: charms_openstack.adapters.ConfigurationAdapter derived class
instance. Charm class instance is at cls.charm_instance.
:type cls: charms_openstack.adapters.ConfigurationAdapter
:returns: IP address of unit local Health Manager interface.
:rtype: str
"""
ip_list = []
for af in ['AF_INET6', 'AF_INET']:
try:
ip_list.extend(
(ip for ip in
ch_net_ip.get_iface_addr(iface=OCTAVIA_MGMT_INTF,
inet_type=af)
if '%' not in ip))
except Exception:
# ch_net_ip.get_iface_addr() throws an exception of type
# Exception when the requested interface does not exist or if
# it has no addresses in the requested address family.
pass
if ip_list:
return ip_list[0]
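A sketch of the address-selection rule above: scoped link-local addresses carry a ``%<scope>`` suffix and are filtered out, and because ``AF_INET6`` is tried first an IPv6 address wins when both families are configured. The addresses below are hypothetical stand-ins for what ``get_iface_addr()`` might report:

```python
# Hypothetical addresses for the o-hm0 interface; the scoped link-local
# entry contains '%' and is skipped.  IPv6 results are extended into the
# list first, so ip_list[0] prefers IPv6 over IPv4.
af_inet6 = ['fe80::f816:3eff:fe2a:1%o-hm0', 'fc00:100::18']
af_inet = ['10.100.0.24']
ip_list = []
for addrs in (af_inet6, af_inet):
    ip_list.extend(ip for ip in addrs if '%' not in ip)
print(ip_list[0])  # → fc00:100::18
```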
@charms_openstack.adapters.config_property
def heartbeat_key(cls):
"""Key used to validate Amphorae heartbeat messages.
The value is generated by the charm and is shared among all units
through leader storage.
:param cls: charms_openstack.adapters.ConfigurationAdapter derived class
instance. Charm class instance is at cls.charm_instance.
:type cls: charms_openstack.adapters.ConfigurationAdapter
:returns: Key as retrieved from Juju leader storage.
:rtype: str
"""
return leadership.leader_get('heartbeat-key')
@charms_openstack.adapters.config_property
def controller_ip_port_list(cls):
"""List of ip:port pairs for Amphorae instances health reporting.
The list is built based on information from individual Octavia units
coordinated, stored, and shared among all units through leader storage.
:param cls: charms_openstack.adapters.ConfigurationAdapter derived class
instance. Charm class instance is at cls.charm_instance.
:type cls: charms_openstack.adapters.ConfigurationAdapter
:returns: Comma separated list of ip:port pairs.
:rtype: str
"""
try:
ip_list = json.loads(
leadership.leader_get('controller-ip-port-list'))
except TypeError:
return
if ip_list:
port_suffix = ':' + OCTAVIA_HEALTH_LISTEN_PORT
return (port_suffix + ', ').join(sorted(ip_list)) + port_suffix
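The ``join`` expression above interleaves the port suffix so that every address, including the last one, ends up with a port. A sketch with hypothetical unit addresses:

```python
OCTAVIA_HEALTH_LISTEN_PORT = '5555'
ip_list = ['10.100.0.25', '10.100.0.24']  # hypothetical unit addresses
port_suffix = ':' + OCTAVIA_HEALTH_LISTEN_PORT
# join() only places the separator between elements, so the final
# concatenation supplies the port suffix for the last address.
result = (port_suffix + ', ').join(sorted(ip_list)) + port_suffix
print(result)  # → 10.100.0.24:5555, 10.100.0.25:5555
```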
@charms_openstack.adapters.config_property
def amp_secgroup_list(cls):
"""List of security groups to attach to Amphorae instances.
The list is built from IDs of charm managed security groups shared
among all units through leader storage.
:param cls: charms_openstack.adapters.ConfigurationAdapter derived class
instance. Charm class instance is at cls.charm_instance.
:type cls: charms_openstack.adapters.ConfigurationAdapter
:returns: Comma separated list of Neutron security group UUIDs.
:rtype: str
"""
return leadership.leader_get('amp-secgroup-list')
@charms_openstack.adapters.config_property
def amp_boot_network_list(cls):
"""Networks to attach when creating Amphorae instances.
IDs from charm managed networks shared among all units through leader
storage.
:param cls: charms_openstack.adapters.ConfigurationAdapter derived class
instance. Charm class instance is at cls.charm_instance.
:type cls: charms_openstack.adapters.ConfigurationAdapter
:returns: Comma separated list of Neutron network UUIDs.
:rtype: str
"""
return leadership.leader_get('amp-boot-network-list')
@charms_openstack.adapters.config_property
def issuing_cacert(cls):
"""Get path to certificate provided in ``lb-mgmt-issuing-cacert`` option.
@ -156,7 +279,7 @@ def amp_flavor_id(cls):
class OctaviaCharm(charms_openstack.charm.HAOpenStackCharm):
"""Charm class for the Octavia charm."""
# layer-openstack-api uses service_type as service name in endpoint catalog
service_type = 'octavia'
name = service_type = 'octavia'
release = 'rocky'
packages = ['octavia-api', 'octavia-health-manager',
'octavia-housekeeping', 'octavia-worker',
@ -172,8 +295,10 @@ class OctaviaCharm(charms_openstack.charm.HAOpenStackCharm):
default_service = 'octavia-api'
services = ['apache2', 'octavia-health-manager', 'octavia-housekeeping',
'octavia-worker']
required_relations = ['shared-db', 'amqp', 'identity-service']
required_relations = ['shared-db', 'amqp', 'identity-service',
'neutron-openvswitch']
restart_map = {
OCTAVIA_MGMT_INTF_CONF: services + ['systemd-networkd'],
OCTAVIA_CONF: services,
OCTAVIA_WSGI_CONF: ['apache2'],
}
@ -187,6 +312,75 @@ class OctaviaCharm(charms_openstack.charm.HAOpenStackCharm):
}
group = 'octavia'
def install(self):
"""Custom install function.
We need to add user `systemd-network` to `octavia` group so it can
read the systemd-networkd config we write.
We run octavia as a WSGI service and need to disable the `octavia-api`
service in the init system so it does not steal the port from haproxy /
apache2.
"""
super().install()
ch_core.host.add_user_to_group('systemd-network', 'octavia')
ch_core.host.service_pause('octavia-api')
def states_to_check(self, required_relations=None):
"""Custom state check function for charm specific state check needs.
The interface used for the ``neutron_openvswitch`` subordinate lacks an
``available`` state.
The ``Octavia`` service will not operate normally until Nova and
Neutron resources have been created; this needs to be tracked in
workload status.
"""
states_to_check = super().states_to_check(required_relations)
override_relation = 'neutron-openvswitch'
if override_relation in states_to_check:
states_to_check[override_relation] = [
("{}.connected".format(override_relation),
"blocked",
"'{}' missing".format(override_relation))]
if not leadership.leader_get('amp-boot-network-list'):
if not reactive.is_flag_set('config.default.create-mgmt-network'):
# we are configured to not create required resources and they
# are not present, prompt end-user to create them.
states_to_check['crud'] = [
('crud.available', # imaginary ``crud`` relation
'blocked',
'Awaiting end-user to create required resources and '
'execute `configure-resources` action')]
else:
if reactive.is_flag_set('leadership.is_leader'):
who = 'end-user execution of `configure-resources` action'
else:
who = 'leader'
states_to_check['crud'] = [
('crud.available', # imaginary ``crud`` relation
'blocked',
'Awaiting {} to create required resources'.format(who))]
# if these configuration options are at their default values it means
# they are not set by the end-user; they are required for successful
# creation of load balancer instances.
if (reactive.is_flag_set('config.default.lb-mgmt-issuing-cacert') or
reactive.is_flag_set(
'config.default.lb-mgmt-issuing-ca-private-key') or
reactive.is_flag_set(
'config.default.lb-mgmt-issuing-ca-key-passphrase') or
reactive.is_flag_set(
'config.default.lb-mgmt-controller-cacert') or
reactive.is_flag_set(
'config.default.lb-mgmt-controller-cert')):
# set workload status to prompt end-user attention
states_to_check['config'] = [
('config._required_certs', # imaginary flag
'blocked',
'Missing required certificate configuration, please '
'examine documentation')]
return states_to_check
def get_amqp_credentials(self):
"""Configure the AMQP credentials for Octavia."""
return ('octavia', 'openstack')
@ -226,3 +420,13 @@ class OctaviaCharm(charms_openstack.charm.HAOpenStackCharm):
ch_core.host.write_file(filename, base64.b64decode(encoded_data),
group=self.group, perms=0o440)
return filename
@property
def local_address(self):
"""Return local address as provided by our ConfigurationClass."""
return self.configuration_class().local_address
@property
def local_unit_name(self):
"""Return local unit name as provided by our ConfigurationClass."""
return self.configuration_class().local_unit_name


@ -5,9 +5,8 @@ description: |
Octavia is an open source, operator-scale load balancing solution designed to
work with OpenStack.
.
Octavia was borne out of the Neutron LBaaS project. Starting with the
Liberty release of OpenStack, Octavia has become the reference implementation
for Neutron LBaaS version 2.
Octavia was borne out of the Neutron LBaaS project. Octavia has become the
reference implementation for Neutron LBaaS version 2.
.
Octavia accomplishes its delivery of load balancing services by managing a
fleet of virtual machines, containers, or bare metal servers collectively
@ -20,7 +19,11 @@ tags:
- openstack
series:
- bionic
- cosmic
subordinate: false
requires:
neutron-load-balancer:
neutron-api:
interface: neutron-load-balancer
neutron-openvswitch:
interface: neutron-plugin
scope: container


@ -12,19 +12,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import uuid
import charms.reactive as reactive
import charms.leadership as leadership
import charms_openstack.bus
import charms_openstack.charm as charm
import charms_openstack.ip as os_ip
import charmhelpers.core as ch_core
import charm.openstack.octavia as octavia # noqa
import charm.openstack.api_crud as api_crud
charms_openstack.bus.discover()
charm.use_defaults(
'charm.installed',
'amqp.connected',
@ -42,9 +45,20 @@ def generate_heartbeat_key():
leadership.leader_set({'heartbeat-key': str(uuid.uuid4())})
@reactive.when('neutron-load-balancer.available')
@reactive.when('neutron-api.available')
def setup_neutron_lbaas_proxy():
neutron = reactive.endpoint_from_flag('neutron-load-balancer.available')
"""Publish our URL to Neutron API units.
The Neutron API unit will use this information to set up the
``lbaasv2-proxy`` service_plugin.
This is to help migrate workloads expecting to talk to the Neutron API for
their Load Balancing needs.
Software should be updated to look up the load balancer service in the
Keystone service catalog and talk directly to the Octavia endpoint.
"""
neutron = reactive.endpoint_from_flag('neutron-api.available')
with charm.provide_charm_instance() as octavia_charm:
octavia_url = '{}:{}'.format(
os_ip.canonical_url(endpoint_type=os_ip.INTERNAL),
@ -52,22 +66,58 @@ def setup_neutron_lbaas_proxy():
neutron.publish_load_balancer_info('octavia', octavia_url)
@reactive.when('identity-service.available')
@reactive.when('neutron-api.available')
@reactive.when('neutron-openvswitch.connected')
# Neutron API calls will consistently fail as long as AMQP is unavailable
@reactive.when('amqp.available')
def setup_hm_port():
"""Create a per unit Neutron and OVS port for Octavia Health Manager.
This is used to plug the unit into the overlay network for direct
communication with the Octavia-managed load balancer instances running
within the deployed cloud.
"""
with charm.provide_charm_instance() as octavia_charm:
identity_service = reactive.endpoint_from_flag(
'identity-service.available')
try:
if api_crud.setup_hm_port(
identity_service,
octavia_charm):
# trigger config render to make systemd-networkd bring up
# automatic IP configuration of the new port right now.
reactive.set_flag('config.changed')
except api_crud.APIUnavailable as e:
ch_core.hookenv.log('Neutron API not available yet, deferring '
'port discovery. ("{}")'
.format(e),
level=ch_core.hookenv.DEBUG)
return
@reactive.when('leadership.is_leader')
@reactive.when('identity-service.available')
@reactive.when('config.default.custom-amp-flavor-id')
def get_nova_flavor():
"""Get or create private Nova flavor for use with Octavia."""
@reactive.when('neutron-api.available')
# Neutron API calls will consistently fail as long as AMQP is unavailable
@reactive.when('amqp.available')
def update_controller_ip_port_list():
"""Load state from Neutron and update ``controller-ip-port-list``."""
identity_service = reactive.endpoint_from_flag(
'identity-service.available')
leader_ip_list = json.loads(
leadership.leader_get('controller-ip-port-list') or '[]')
try:
flavor = api_crud.get_nova_flavor(identity_service)
neutron_ip_list = sorted(api_crud.get_port_ips(identity_service))
except api_crud.APIUnavailable as e:
ch_core.hookenv.log('Nova API not available yet, deferring '
'flavor discovery/creation. ("{}")'
ch_core.hookenv.log('Neutron API not available yet, deferring '
'port discovery. ("{}")'
.format(e),
level=ch_core.hookenv.DEBUG)
else:
leadership.leader_set({'amp-flavor-id': flavor.id})
return
if neutron_ip_list != sorted(leader_ip_list):
leadership.leader_set(
{'controller-ip-port-list': json.dumps(neutron_ip_list)})
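The list lives in leader storage as a JSON string, so a round trip through ``json`` is needed before the comparison against the freshly queried Neutron addresses is meaningful. A sketch with made-up addresses:

```python
import json

# Leader storage holds the list as a JSON string; decode it before
# comparing with the sorted list of addresses queried from Neutron.
stored = json.dumps(['10.100.0.24', '10.100.0.25'])
neutron_ip_list = sorted(['10.100.0.25', '10.100.0.24'])
changed = neutron_ip_list != sorted(json.loads(stored))
print(changed)  # → False
```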
@reactive.when('shared-db.available')
@ -75,8 +125,7 @@ def get_nova_flavor():
@reactive.when('amqp.available')
@reactive.when('leadership.set.heartbeat-key')
def render(*args):
"""
Render the configuration for Octavia when all interfaces are available.
"""Render the configuration for Octavia when all interfaces are available.
"""
with charm.provide_charm_instance() as octavia_charm:
octavia_charm.render_with_interfaces(args)


@ -0,0 +1,16 @@
[Match]
Name=o-hm0
[Link]
{% if options.health_manager_hwaddr -%}
MACAddress={{ options.health_manager_hwaddr }}
{% endif -%}
RequiredForOnline=no
[Network]
DHCP=yes
[DHCP]
UseMTU=true
UseRoutes=false
RouteMetric=100


@ -2,10 +2,14 @@
debug = {{ options.debug }}
[health_manager]
{% if options.health_manager_bind_ip -%}
bind_ip = {{ options.health_manager_bind_ip }}
{% endif -%}
{% if options.controller_ip_port_list -%}
controller_ip_port_list = {{ options.controller_ip_port_list }}
{% endif -%}
heartbeat_key = {{ options.heartbeat_key }}
[database]
{% include "parts/database" %}
[controller_worker]
{% if options.amp_image_owner_id -%}
@ -20,9 +24,6 @@ amp_flavor_id = {{ options.amp_flavor_id }}
{% if options.amp_boot_network_list -%}
amp_boot_network_list = {{ options.amp_boot_network_list }}
{% endif -%}
{% if options.amp_ssh_key_name -%}
amp_ssh_key_name = {{ options.amp_ssh_key_name }}
{% endif -%}
{% if options.amp_image_tag -%}
amp_image_tag = {{ options.amp_image_tag }}
{% endif -%}
@ -63,6 +64,8 @@ server_ca = {{ options.issuing_cacert }}
# role as a "client" connecting to the ``Amphorae``.
client_cert = {{ options.controller_cert }}
{% include "parts/section-database" %}
[service_auth]
auth_type = password
auth_uri = {{ identity_service.service_protocol }}://{{ identity_service.service_host }}:{{ identity_service.service_port }}


@ -0,0 +1,121 @@
series: bionic
relations:
- - glance:image-service
- nova-cloud-controller:image-service
- - glance:image-service
- nova-compute:image-service
- - mysql:shared-db
- glance:shared-db
- - mysql:shared-db
- keystone:shared-db
- - mysql:shared-db
- neutron-api:shared-db
- - mysql:shared-db
- nova-cloud-controller:shared-db
- - mysql:shared-db
- octavia:shared-db
- - keystone:identity-service
- glance:identity-service
- - keystone:identity-service
- nova-cloud-controller:identity-service
- - keystone:identity-service
- neutron-api:identity-service
- - keystone:identity-service
- octavia:identity-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- glance:amqp
- - rabbitmq-server:amqp
- neutron-gateway:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - neutron-gateway:quantum-network-service
- nova-cloud-controller:quantum-network-service
- - rabbitmq-server:amqp
- neutron-openvswitch:amqp
- - neutron-api:neutron-plugin-api
- neutron-openvswitch:neutron-plugin-api
- - neutron-openvswitch:neutron-plugin
- nova-compute:neutron-plugin
- - neutron-openvswitch:neutron-plugin
- octavia:neutron-openvswitch
- - hacluster-octavia:ha
- octavia:ha
applications:
glance:
charm: cs:~openstack-charmers-next/glance
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
mysql:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
flat-network-providers: physnet1
neutron-security-groups: True
neutron-gateway:
charm: cs:~openstack-charmers-next/neutron-gateway
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
bridge-mappings: physnet1:br-ex
debug: True
neutron-openvswitch:
series: bionic
charm: cs:~openstack-charmers-next/neutron-openvswitch
num_units: 0
options:
debug: True
prevent-arp-spoofing: False
firewall-driver: openvswitch
nova-cloud-controller:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/nova-cloud-controller
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
network-manager: Neutron
nova-compute:
constraints: mem=7168M
charm: cs:~openstack-charmers-next/nova-compute
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
hacluster-octavia:
charm: cs:~openstack-charmers-next/hacluster
num_units: 0
octavia:
series: bionic
charm: ../../../octavia
num_units: 3
options:
debug: True
openstack-origin: cloud:bionic-rocky
vip: 'ADD YOUR VIP HERE'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1


@ -0,0 +1,148 @@
series: bionic
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
constraints: mem=3072M
relations:
- - glance:image-service
- nova-cloud-controller:image-service
- - glance:image-service
- nova-compute:image-service
- - mysql:shared-db
- glance:shared-db
- - mysql:shared-db
- keystone:shared-db
- - mysql:shared-db
- neutron-api:shared-db
- - mysql:shared-db
- nova-cloud-controller:shared-db
- - mysql:shared-db
- octavia:shared-db
- - keystone:identity-service
- glance:identity-service
- - keystone:identity-service
- nova-cloud-controller:identity-service
- - keystone:identity-service
- neutron-api:identity-service
- - keystone:identity-service
- octavia:identity-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- glance:amqp
- - rabbitmq-server:amqp
- neutron-gateway:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - neutron-gateway:quantum-network-service
- nova-cloud-controller:quantum-network-service
- - rabbitmq-server:amqp
- neutron-openvswitch:amqp
- - neutron-api:neutron-plugin-api
- neutron-openvswitch:neutron-plugin-api
- - neutron-openvswitch:neutron-plugin
- nova-compute:neutron-plugin
- - neutron-openvswitch:neutron-plugin
- octavia:neutron-openvswitch
applications:
glance:
charm: cs:~openstack-charmers-next/glance
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
to:
- lxd:1
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
to:
- lxd:0
mysql:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
to:
- lxd:0
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
flat-network-providers: physnet1
neutron-security-groups: True
to:
- lxd:1
neutron-gateway:
charm: cs:~openstack-charmers-next/neutron-gateway
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
bridge-mappings: physnet1:br-ex
os-data-network: 252.0.0.0/8
instance-mtu: 1300
debug: True
to:
- 3
neutron-openvswitch:
series: bionic
charm: cs:~openstack-charmers-next/neutron-openvswitch
num_units: 0
options:
debug: True
os-data-network: 252.0.0.0/8
instance-mtu: 1300
prevent-arp-spoofing: False
firewall-driver: openvswitch
nova-cloud-controller:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/nova-cloud-controller
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
network-manager: Neutron
to:
- lxd:1
nova-compute:
constraints: mem=7168M
charm: cs:~openstack-charmers-next/nova-compute
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
to:
- 2
octavia:
series: bionic
charm: ../../../octavia
num_units: 3
options:
openstack-origin: cloud:bionic-rocky
debug: True
to:
- lxd:0
- lxd:1
- lxd:2
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
to:
- lxd:0


@ -0,0 +1,115 @@
series: bionic
relations:
- - glance:image-service
- nova-cloud-controller:image-service
- - glance:image-service
- nova-compute:image-service
- - mysql:shared-db
- glance:shared-db
- - mysql:shared-db
- keystone:shared-db
- - mysql:shared-db
- neutron-api:shared-db
- - mysql:shared-db
- nova-cloud-controller:shared-db
- - mysql:shared-db
- octavia:shared-db
- - keystone:identity-service
- glance:identity-service
- - keystone:identity-service
- nova-cloud-controller:identity-service
- - keystone:identity-service
- neutron-api:identity-service
- - keystone:identity-service
- octavia:identity-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- glance:amqp
- - rabbitmq-server:amqp
- neutron-gateway:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - neutron-gateway:quantum-network-service
- nova-cloud-controller:quantum-network-service
- - rabbitmq-server:amqp
- neutron-openvswitch:amqp
- - neutron-api:neutron-plugin-api
- neutron-openvswitch:neutron-plugin-api
- - neutron-openvswitch:neutron-plugin
- nova-compute:neutron-plugin
- - neutron-openvswitch:neutron-plugin
- octavia:neutron-openvswitch
applications:
glance:
charm: cs:~openstack-charmers-next/glance
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
mysql:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
flat-network-providers: physnet1
neutron-security-groups: True
neutron-gateway:
charm: cs:~openstack-charmers-next/neutron-gateway
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
bridge-mappings: physnet1:br-ex
debug: True
neutron-openvswitch:
series: bionic
charm: cs:~openstack-charmers-next/neutron-openvswitch
num_units: 0
options:
debug: True
prevent-arp-spoofing: False
firewall-driver: openvswitch
nova-cloud-controller:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/nova-cloud-controller
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
network-manager: Neutron
nova-compute:
constraints: mem=7168M
charm: cs:~openstack-charmers-next/nova-compute
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
debug: True
octavia:
series: bionic
charm: ../../../octavia
num_units: 3
options:
debug: True
openstack-origin: cloud:bionic-rocky
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1


@ -0,0 +1,112 @@
series: cosmic
relations:
- - glance:image-service
- nova-cloud-controller:image-service
- - glance:image-service
- nova-compute:image-service
- - mysql:shared-db
- glance:shared-db
- - mysql:shared-db
- keystone:shared-db
- - mysql:shared-db
- neutron-api:shared-db
- - mysql:shared-db
- nova-cloud-controller:shared-db
- - mysql:shared-db
- octavia:shared-db
- - keystone:identity-service
- glance:identity-service
- - keystone:identity-service
- nova-cloud-controller:identity-service
- - keystone:identity-service
- neutron-api:identity-service
- - keystone:identity-service
- octavia:identity-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- glance:amqp
- - rabbitmq-server:amqp
- neutron-gateway:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - neutron-gateway:quantum-network-service
- nova-cloud-controller:quantum-network-service
- - rabbitmq-server:amqp
- neutron-openvswitch:amqp
- - neutron-api:neutron-plugin-api
- neutron-openvswitch:neutron-plugin-api
- - neutron-openvswitch:neutron-plugin
- nova-compute:neutron-plugin
- - neutron-openvswitch:neutron-plugin
- octavia:neutron-openvswitch
- - hacluster-octavia:ha
- octavia:ha
applications:
glance:
charm: cs:~openstack-charmers-next/glance
num_units: 1
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
mysql:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
num_units: 1
options:
debug: True
flat-network-providers: physnet1
neutron-security-groups: True
neutron-gateway:
charm: cs:~openstack-charmers-next/neutron-gateway
num_units: 1
options:
bridge-mappings: physnet1:br-ex
debug: True
neutron-openvswitch:
series: cosmic
charm: cs:~openstack-charmers-next/neutron-openvswitch
num_units: 0
options:
debug: True
prevent-arp-spoofing: False
firewall-driver: openvswitch
nova-cloud-controller:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/nova-cloud-controller
num_units: 1
options:
debug: True
network-manager: Neutron
nova-compute:
constraints: mem=7168M
charm: cs:~openstack-charmers-next/nova-compute
num_units: 1
options:
debug: True
hacluster-octavia:
charm: cs:~openstack-charmers-next/hacluster
num_units: 0
octavia:
series: cosmic
charm: ../../../octavia
num_units: 3
options:
debug: True
vip: 'ADD YOUR VIP HERE'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1


@ -0,0 +1,139 @@
series: cosmic
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
constraints: mem=3072M
relations:
- - glance:image-service
- nova-cloud-controller:image-service
- - glance:image-service
- nova-compute:image-service
- - mysql:shared-db
- glance:shared-db
- - mysql:shared-db
- keystone:shared-db
- - mysql:shared-db
- neutron-api:shared-db
- - mysql:shared-db
- nova-cloud-controller:shared-db
- - mysql:shared-db
- octavia:shared-db
- - keystone:identity-service
- glance:identity-service
- - keystone:identity-service
- nova-cloud-controller:identity-service
- - keystone:identity-service
- neutron-api:identity-service
- - keystone:identity-service
- octavia:identity-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- glance:amqp
- - rabbitmq-server:amqp
- neutron-gateway:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - neutron-gateway:quantum-network-service
- nova-cloud-controller:quantum-network-service
- - rabbitmq-server:amqp
- neutron-openvswitch:amqp
- - neutron-api:neutron-plugin-api
- neutron-openvswitch:neutron-plugin-api
- - neutron-openvswitch:neutron-plugin
- nova-compute:neutron-plugin
- - neutron-openvswitch:neutron-plugin
- octavia:neutron-openvswitch
applications:
glance:
charm: cs:~openstack-charmers-next/glance
num_units: 1
to:
- lxd:1
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
to:
- lxd:0
mysql:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
to:
- lxd:0
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
num_units: 1
options:
debug: True
flat-network-providers: physnet1
neutron-security-groups: True
to:
- lxd:1
neutron-gateway:
charm: cs:~openstack-charmers-next/neutron-gateway
num_units: 1
options:
bridge-mappings: physnet1:br-ex
os-data-network: 252.0.0.0/8
instance-mtu: 1300
debug: True
to:
- 3
neutron-openvswitch:
series: cosmic
charm: cs:~openstack-charmers-next/neutron-openvswitch
num_units: 0
options:
debug: True
os-data-network: 252.0.0.0/8
instance-mtu: 1300
prevent-arp-spoofing: False
firewall-driver: openvswitch
nova-cloud-controller:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/nova-cloud-controller
num_units: 1
options:
debug: True
network-manager: Neutron
to:
- lxd:1
nova-compute:
constraints: mem=7168M
charm: cs:~openstack-charmers-next/nova-compute
num_units: 1
options:
debug: True
to:
- 2
octavia:
series: cosmic
charm: ../../../octavia
num_units: 3
options:
debug: True
to:
- lxd:0
- lxd:1
- lxd:2
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
to:
- lxd:0


@ -0,0 +1,106 @@
series: cosmic
relations:
- - glance:image-service
- nova-cloud-controller:image-service
- - glance:image-service
- nova-compute:image-service
- - mysql:shared-db
- glance:shared-db
- - mysql:shared-db
- keystone:shared-db
- - mysql:shared-db
- neutron-api:shared-db
- - mysql:shared-db
- nova-cloud-controller:shared-db
- - mysql:shared-db
- octavia:shared-db
- - keystone:identity-service
- glance:identity-service
- - keystone:identity-service
- nova-cloud-controller:identity-service
- - keystone:identity-service
- neutron-api:identity-service
- - keystone:identity-service
- octavia:identity-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- glance:amqp
- - rabbitmq-server:amqp
- neutron-gateway:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - neutron-gateway:quantum-network-service
- nova-cloud-controller:quantum-network-service
- - rabbitmq-server:amqp
- neutron-openvswitch:amqp
- - neutron-api:neutron-plugin-api
- neutron-openvswitch:neutron-plugin-api
- - neutron-openvswitch:neutron-plugin
- nova-compute:neutron-plugin
- - neutron-openvswitch:neutron-plugin
- octavia:neutron-openvswitch
applications:
glance:
charm: cs:~openstack-charmers-next/glance
num_units: 1
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
mysql:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
num_units: 1
options:
debug: True
flat-network-providers: physnet1
neutron-security-groups: True
neutron-gateway:
charm: cs:~openstack-charmers-next/neutron-gateway
num_units: 1
options:
bridge-mappings: physnet1:br-ex
debug: True
neutron-openvswitch:
series: cosmic
charm: cs:~openstack-charmers-next/neutron-openvswitch
num_units: 0
options:
debug: True
prevent-arp-spoofing: False
firewall-driver: openvswitch
nova-cloud-controller:
constraints: mem=3072M
charm: cs:~openstack-charmers-next/nova-cloud-controller
num_units: 1
options:
debug: True
network-manager: Neutron
nova-compute:
constraints: mem=7168M
charm: cs:~openstack-charmers-next/nova-compute
num_units: 1
options:
debug: True
octavia:
series: cosmic
charm: ../../../octavia
num_units: 3
options:
debug: True
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1


@ -1,34 +0,0 @@
series: bionic
relations:
- - keystone
- mysql
- - octavia
- mysql
- - octavia
- keystone
- - octavia
- rabbitmq-server
- - octavia
- hacluster-octavia
applications:
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
hacluster-octavia:
charm: cs:~openstack-charmers-next/hacluster
num_units: 0
octavia:
series: bionic
charm: octavia
num_units: 3
options:
openstack-origin: cloud:bionic-rocky
vip: 'ADD YOUR VIP HERE'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1

@@ -1,28 +0,0 @@
series: bionic
relations:
- - keystone
- mysql
- - octavia
- mysql
- - octavia
- keystone
- - octavia
- rabbitmq-server
applications:
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
octavia:
series: bionic
charm: ../../../octavia
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1

@@ -1,10 +1,31 @@
charm_name: octavia
gate_bundles:
- bionic-rocky
- bionic-rocky-ha
- cosmic-rocky
- cosmic-rocky-ha
smoke_bundles:
- bionic-rocky
comment: |
The `bionic-rocky-lxd` bundle currently fails due to a bug in LXD.
https://github.com/lxc/lxd/issues/4947
Holding the `cosmic-rocky-lxd` bundle awaiting required changes to CI.
dev_bundles:
- bionic-rocky-lxd
- cosmic-rocky-lxd
target_deploy_status:
octavia:
workload-status: blocked
workload-status-message: Awaiting
configure:
- zaza.charm_tests.noop.setup.basic_setup
- zaza.charm_tests.octavia.setup.configure_octavia
- zaza.charm_tests.octavia.setup.add_amphora_image
- zaza.charm_tests.glance.setup.add_lts_image
- zaza.charm_tests.nova.setup.create_flavors
- zaza.charm_tests.nova.setup.manage_ssh_key
- zaza.charm_tests.neutron.setup.basic_overcloud_network
tests:
- zaza.charm_tests.noop.tests.NoopTest
- zaza.charm_tests.neutron_openvswitch.tests.NeutronOpenvSwitchOverlayTest
- zaza.charm_tests.nova.tests.LTSGuestCreateTest
- zaza.charm_tests.octavia.tests.LBAASv2Test
- zaza.charm_tests.octavia.tests.CharmOperationTest

@@ -1,4 +1,4 @@
tenacity
keystoneauth1
pbr
python-novaclient
python-neutronclient

@@ -24,9 +24,13 @@ charms_openstack.test_mocks.mock_charmhelpers()
import mock
import charms
charms.leadership = mock.MagicMock()
keystoneauth1 = mock.MagicMock()
neutronclient = mock.MagicMock()
novaclient = mock.MagicMock()
sys.modules['charms.leadership'] = charms.leadership
sys.modules['keystoneauth1'] = keystoneauth1
sys.modules['novaclient'] = novaclient
sys.modules['neutronclient'] = neutronclient
sys.modules['neutronclient.v2_0'] = neutronclient.v2_0
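The unit-test bootstrap above makes heavyweight OpenStack client libraries importable without installing them by planting `MagicMock` objects in `sys.modules` before the code under test is imported. A minimal standalone sketch of the same technique (the module name `fakeclient` is illustrative, not from the charm):

```python
import sys
from unittest import mock

# Plant a mock module before anything imports it; later "import fakeclient"
# statements resolve to this MagicMock instead of a real package.
fakeclient = mock.MagicMock()
sys.modules['fakeclient'] = fakeclient
sys.modules['fakeclient.v2_0'] = fakeclient.v2_0

import fakeclient  # resolves to the MagicMock planted above

# Attribute access works to any depth, and calls are recorded for asserts.
client = fakeclient.v2_0.client.Client(username='admin')
fakeclient.v2_0.client.Client.assert_called_once_with(username='admin')
```

This is why the stubbing must run at module import time, before `reactive.octavia_handlers` or `charm.openstack.api_crud` is imported.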

@@ -13,14 +13,63 @@
# limitations under the License.
import mock
import subprocess
import charms_openstack.test_utils as test_utils
import charm.openstack.octavia as octavia # for constants
import charm.openstack.api_crud as api_crud
class FakeNeutronConflictException(Exception):
pass
class TestAPICrud(test_utils.PatchHelper):
def setUp(self):
super().setUp()
self.secgrp_uuid = 'fake-secgrp-uuid'
self.health_secgrp_uuid = 'fake-health_secgrp-uuid'
self.security_group_rule_calls = [
mock.call(
{'security_group_rule': {
'direction': 'ingress',
'protocol': 'icmpv6',
'ethertype': 'IPv6',
'security_group_id': self.secgrp_uuid}}),
mock.call(
{'security_group_rule': {
'direction': 'ingress',
'protocol': 'tcp',
'ethertype': 'IPv6',
'port_range_min': '22',
'port_range_max': '22',
'security_group_id': self.secgrp_uuid}}),
mock.call(
{'security_group_rule': {
'direction': 'ingress',
'protocol': 'tcp',
'ethertype': 'IPv6',
'port_range_min': '9443',
'port_range_max': '9443',
'security_group_id': self.secgrp_uuid}}),
mock.call(
{'security_group_rule': {
'direction': 'ingress',
'protocol': 'icmpv6',
'ethertype': 'IPv6',
'security_group_id': self.health_secgrp_uuid}}),
mock.call(
{'security_group_rule': {
'direction': 'ingress',
'protocol': 'udp',
'ethertype': 'IPv6',
'port_range_min': octavia.OCTAVIA_HEALTH_LISTEN_PORT,
'port_range_max': octavia.OCTAVIA_HEALTH_LISTEN_PORT,
'security_group_id': self.health_secgrp_uuid}}),
]
def test_session_from_identity_service(self):
self.patch_object(api_crud, 'keystone_identity')
self.patch_object(api_crud, 'keystone_session')
@@ -74,3 +123,190 @@ class TestAPICrud(test_utils.PatchHelper):
api_crud.get_nova_flavor(identity_service)
nova.flavors.create.assert_called_with('charm-octavia', 1024, 1, 8,
is_public=False)
def test_get_hm_port(self):
self.patch_object(api_crud, 'neutron_client')
nc = mock.MagicMock()
self.neutron_client.Client.return_value = nc
network_uuid = 'fake-network-uuid'
nc.list_networks.return_value = {'networks': [{'id': network_uuid}]}
health_secgrp_uuid = 'fake-secgrp-uuid'
nc.list_security_groups.return_value = {
'security_groups': [{'id': health_secgrp_uuid}]}
self.patch_object(api_crud.ch_net_ip, 'get_hostname')
port_uuid = 'fake-port-uuid'
port_mac_address = 'fake-mac-address'
nc.create_port.return_value = {
'port': {'id': port_uuid, 'mac_address': port_mac_address}}
self.patch('subprocess.check_output', 'check_output')
self.patch('charms.reactive.set_flag', 'set_flag')
identity_service = mock.MagicMock()
result = api_crud.get_hm_port(identity_service,
'fake-unit-name',
'192.0.2.42')
self.neutron_client.Client.assert_called()
nc.list_networks.assert_called_with(tags='charm-octavia')
nc.list_security_groups.assert_called_with(
tags='charm-octavia-health')
nc.list_ports.assert_called_once_with(
tags='charm-octavia-fake-unit-name')
nc.create_port.assert_called_once_with(
{
'port': {
'admin_state_up': False,
'binding:host_id': self.get_hostname(),
'device_owner': 'Octavia:health-mgr',
'security_groups': ['fake-secgrp-uuid'],
'name': 'octavia-health-manager-'
'fake-unit-name-listen-port',
'network_id': 'fake-network-uuid',
},
})
nc.add_tag.assert_called_with('ports', port_uuid, 'charm-octavia')
self.assertEqual(result, {'id': 'fake-port-uuid',
'mac_address': 'fake-mac-address'})
def test_toggle_hm_port(self):
self.patch_object(api_crud, 'neutron_client')
identity_service = mock.MagicMock()
nc = mock.MagicMock()
self.neutron_client.Client.return_value = nc
nc.list_ports.return_value = {'ports': [{'id': 'fake-port-uuid'}]}
api_crud.toggle_hm_port(identity_service, 'fake-unit-name')
nc.list_ports.assert_called_with(tags='charm-octavia-fake-unit-name')
nc.update_port.assert_called_with('fake-port-uuid',
{'port': {'admin_state_up': True}})
def test_setup_hm_port(self):
self.patch('subprocess.check_output', 'check_output')
self.patch('subprocess.check_call', 'check_call')
self.patch_object(api_crud, 'get_hm_port')
self.patch_object(api_crud, 'toggle_hm_port')
identity_service = mock.MagicMock()
octavia_charm = mock.MagicMock()
port_uuid = 'fake-port-uuid'
port_mac_address = 'fake-mac-address'
self.get_hm_port.return_value = {
'id': port_uuid,
'mac_address': port_mac_address,
'admin_state_up': False,
'status': 'DOWN',
}
e = subprocess.CalledProcessError(returncode=1, cmd=None)
e.output = ('Device "{}" does not exist.'
.format(api_crud.octavia.OCTAVIA_MGMT_INTF))
self.check_output.side_effect = e
api_crud.setup_hm_port(identity_service, octavia_charm)
self.get_hm_port.assert_called_with(
identity_service,
octavia_charm.local_unit_name,
octavia_charm.local_address)
self.check_output.assert_called_with(
['ip', 'link', 'show', api_crud.octavia.OCTAVIA_MGMT_INTF],
stderr=-2, universal_newlines=True)
self.check_call.assert_has_calls([
mock.call(
['ovs-vsctl', '--', 'add-port',
api_crud.octavia.OCTAVIA_INT_BRIDGE,
api_crud.octavia.OCTAVIA_MGMT_INTF,
'--', 'set', 'Interface', api_crud.octavia.OCTAVIA_MGMT_INTF,
'type=internal',
'--', 'set', 'Interface', api_crud.octavia.OCTAVIA_MGMT_INTF,
'external-ids:iface-status=active',
'--', 'set', 'Interface', api_crud.octavia.OCTAVIA_MGMT_INTF,
'external-ids:attached-mac={}'.format(port_mac_address),
'--', 'set', 'Interface', api_crud.octavia.OCTAVIA_MGMT_INTF,
'external-ids:iface-id={}'.format(port_uuid),
'--', 'set', 'Interface', api_crud.octavia.OCTAVIA_MGMT_INTF,
'external-ids:skip_cleanup=true']),
mock.call(['ip', 'link', 'set', 'o-hm0', 'up', 'address',
'fake-mac-address']),
])
self.check_call.assert_called_with(
['ip', 'link', 'set', api_crud.octavia.OCTAVIA_MGMT_INTF,
'up', 'address', port_mac_address])
self.toggle_hm_port.assert_called()
def test_get_port_ips(self):
self.patch_object(api_crud, 'neutron_client')
nc = mock.MagicMock()
self.neutron_client.Client.return_value = nc
nc.list_ports.return_value = {
'ports': [
{'fixed_ips': [{'ip_address': '2001:db8:42::42'}]},
{'fixed_ips': [{'ip_address': '2001:db8:42::51'}]},
],
}
identity_service = mock.MagicMock()
self.assertEqual(api_crud.get_port_ips(identity_service),
['2001:db8:42::42',
'2001:db8:42::51'])
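`api_crud.get_port_ips`, exercised above, evidently flattens the `fixed_ips` entries of every port returned by Neutron into a plain list of addresses. A standalone sketch of that transformation over the same fake payload (the function name here is illustrative):

```python
def port_ips(list_ports_response):
    # Collect every fixed IP address across all returned ports.
    return [fixed_ip['ip_address']
            for port in list_ports_response['ports']
            for fixed_ip in port['fixed_ips']]

# Same shape as the mocked neutronclient list_ports() payload above.
fake_response = {
    'ports': [
        {'fixed_ips': [{'ip_address': '2001:db8:42::42'}]},
        {'fixed_ips': [{'ip_address': '2001:db8:42::51'}]},
    ],
}
assert port_ips(fake_response) == ['2001:db8:42::42', '2001:db8:42::51']
```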
def test_get_mgmt_network_create(self):
resource_tag = 'charm-octavia'
self.patch_object(api_crud, 'neutron_client')
identity_service = mock.MagicMock()
nc = mock.MagicMock()
self.neutron_client.Client.return_value = nc
network_uuid = '83f1a860-9aed-4c0b-8b72-47195580a0c1'
nc.create_network.return_value = {'network': {'id': network_uuid}}
nc.create_subnet.return_value = {
'subnets': [{'id': 'fake-subnet-uuid', 'cidr': 'fake-cidr'}]}
nc.create_router.return_value = {
'router': {'id': 'fake-router-uuid'}}
nc.create_security_group.side_effect = [
{'security_group': {'id': self.secgrp_uuid}},
{'security_group': {'id': self.health_secgrp_uuid}},
]
result = api_crud.get_mgmt_network(identity_service)
nc.list_networks.assert_called_once_with(tags=resource_tag)
nc.create_network.assert_called_once_with({
'network': {'name': octavia.OCTAVIA_MGMT_NET}})
nc.list_subnets.assert_called_once_with(tags=resource_tag)
nc.list_routers.assert_called_once_with(tags=resource_tag)
nc.list_security_groups.assert_any_call(tags=resource_tag)
nc.list_security_groups.assert_any_call(tags=resource_tag + '-health')
nc.create_security_group_rule.assert_has_calls(
self.security_group_rule_calls)
self.assertEqual(result, (
{'id': network_uuid},
{'id': self.secgrp_uuid},),
)
def test_get_mgmt_network_exists(self):
resource_tag = 'charm-octavia'
self.patch_object(api_crud, 'neutron_client')
identity_service = mock.MagicMock()
nc = mock.MagicMock()
self.neutron_client.Client.return_value = nc
network_uuid = '83f1a860-9aed-4c0b-8b72-47195580a0c1'
nc.list_networks.return_value = {'networks': [{'id': network_uuid}]}
nc.list_subnets.return_value = {
'subnets': [{'id': 'fake-subnet-uuid'}]}
nc.list_routers.return_value = {
'routers': [{'id': 'fake-router-uuid'}]}
nc.list_security_groups.side_effect = [
{'security_groups': [{'id': self.secgrp_uuid}]},
{'security_groups': [{'id': self.health_secgrp_uuid}]},
]
self.patch_object(api_crud.neutronclient.common, 'exceptions',
name='neutron_exceptions')
self.neutron_exceptions.Conflict = FakeNeutronConflictException
nc.create_security_group_rule.side_effect = \
FakeNeutronConflictException
result = api_crud.get_mgmt_network(identity_service)
nc.list_networks.assert_called_once_with(tags=resource_tag)
nc.list_subnets.assert_called_once_with(tags=resource_tag)
nc.list_routers.assert_called_once_with(tags=resource_tag)
nc.list_security_groups.assert_has_calls([
mock.call(tags=resource_tag),
mock.call(tags=resource_tag + '-health'),
])
nc.create_security_group_rule.assert_has_calls(
self.security_group_rule_calls)
self.assertEqual(result, (
{'id': network_uuid},
{'id': self.secgrp_uuid},),
)

@@ -31,6 +31,37 @@ class Helper(test_utils.PatchHelper):
class TestOctaviaCharmConfigProperties(Helper):
def test_health_manager_hwaddr(self):
cls = mock.MagicMock()
self.patch('json.loads', 'json_loads')
self.patch('subprocess.check_output', 'check_output')
self.check_output.side_effect = OSError
self.check_output.return_value = '""'
self.assertEqual(octavia.health_manager_hwaddr(cls), None)
self.check_output.side_effect = None
self.assertEqual(octavia.health_manager_hwaddr(cls), self.json_loads())
self.json_loads.assert_called()
self.check_output.assert_any_call(
['ovs-vsctl', 'get', 'Interface', octavia.OCTAVIA_MGMT_INTF,
'external_ids:attached-mac'], universal_newlines=True)
def test_health_manager_bind_ip(self):
cls = mock.MagicMock()
self.patch_object(octavia.ch_net_ip, 'get_iface_addr')
data = ['fe80:db8:42%eth0', '2001:db8:42::42', '127.0.0.1']
self.get_iface_addr.return_value = data
self.assertEqual(octavia.health_manager_bind_ip(cls), data[1])
self.get_iface_addr.assert_any_call(iface=octavia.OCTAVIA_MGMT_INTF,
inet_type='AF_INET6')
self.get_iface_addr.assert_any_call(iface=octavia.OCTAVIA_MGMT_INTF,
inet_type='AF_INET')
self.get_iface_addr.return_value = [data[2]]
self.assertEqual(octavia.health_manager_bind_ip(cls), data[2])
self.get_iface_addr.assert_any_call(iface=octavia.OCTAVIA_MGMT_INTF,
inet_type='AF_INET6')
self.get_iface_addr.assert_any_call(iface=octavia.OCTAVIA_MGMT_INTF,
inet_type='AF_INET')
def test_heartbeat_key(self):
cls = mock.MagicMock()
self.patch('charms.leadership.leader_get', 'leader_get')
@@ -52,9 +83,69 @@ class TestOctaviaCharmConfigProperties(Helper):
octavia.amp_flavor_id(cls)
self.leader_get.assert_called_with('amp-flavor-id')
def test_controller_ip_port_list(self):
cls = mock.MagicMock()
self.patch('json.loads', 'json_loads')
self.patch('charms.leadership.leader_get', 'leader_get')
ip_list = ['2001:db8:42::42', '2001:db8:42::51']
self.json_loads.return_value = ip_list
self.assertEqual(
octavia.controller_ip_port_list(cls),
'2001:db8:42::42:{}, 2001:db8:42::51:{}'
.format(octavia.OCTAVIA_HEALTH_LISTEN_PORT,
octavia.OCTAVIA_HEALTH_LISTEN_PORT))
self.json_loads.assert_called_with(
self.leader_get('controller-ip-port-list'))
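The expected string above shows the shape `controller_ip_port_list` must produce: the leader-stored IP list joined into a comma-separated `ip:port` sequence for rendering into `octavia.conf`. A sketch of that formatting, with the listen-port value assumed (Octavia's health manager conventionally listens on UDP 5555):

```python
OCTAVIA_HEALTH_LISTEN_PORT = '5555'  # assumed value of the charm constant

def controller_ip_port_list(ip_list, port=OCTAVIA_HEALTH_LISTEN_PORT):
    # Suffix each health-manager controller IP with the listen port.
    return ', '.join('{}:{}'.format(ip, port) for ip in ip_list)

assert controller_ip_port_list(['2001:db8:42::42', '2001:db8:42::51']) == \
    '2001:db8:42::42:5555, 2001:db8:42::51:5555'
```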
def test_amp_secgroup_list(self):
cls = mock.MagicMock()
self.patch('charms.leadership.leader_get', 'leader_get')
octavia.amp_secgroup_list(cls)
self.leader_get.assert_called_with('amp-secgroup-list')
def test_amp_boot_network_list(self):
cls = mock.MagicMock()
self.patch('charms.leadership.leader_get', 'leader_get')
octavia.amp_boot_network_list(cls)
self.leader_get.assert_called_with('amp-boot-network-list')
class TestOctaviaCharm(Helper):
def test_install(self):
# we do not care about the internals of the function we are overriding
# and expanding so mock out the call to super()
self.patch('builtins.super', 'super')
self.patch_object(octavia.ch_core, 'host')
c = octavia.OctaviaCharm()
c.install()
self.super.assert_called()
self.host.add_user_to_group.assert_called_once_with('systemd-network',
'octavia')
self.host.service_pause.assert_called_once_with('octavia-api')
def test_states_to_check(self):
# we do not care about the internals of the function we are overriding
# and expanding so mock out the call to super()
self.patch('builtins.super', 'super')
self.patch_object(octavia.leadership, 'leader_get')
self.patch_object(octavia.reactive, 'is_flag_set')
self.leader_get.return_value = True
self.is_flag_set.return_value = True
override_relation = 'neutron-openvswitch'
states_to_check = {
override_relation: 'something-we-are-replacing',
}
self.super().states_to_check.return_value = states_to_check
c = octavia.OctaviaCharm()
c.states_to_check()
self.super().states_to_check.assert_called_once_with(None)
self.leader_get.assert_called_once_with(
'amp-boot-network-list')
self.is_flag_set.assert_has_calls([
mock.call('config.default.lb-mgmt-issuing-cacert')])
self.super.assert_called()
def test_get_amqp_credentials(self):
c = octavia.OctaviaCharm()
result = c.get_amqp_credentials()
@@ -81,3 +172,16 @@ class TestOctaviaCharm(Helper):
self.sp_check_call.assert_called_with(['a2ensite', 'octavia-api'])
self.service_reload.assert_called_with(
'apache2', restart_on_failure=True)
def test_local_address(self):
c = octavia.OctaviaCharm()
configuration_class = mock.MagicMock()
c.configuration_class = configuration_class
self.assertEqual(c.local_address, configuration_class().local_address)
def test_local_unit_name(self):
c = octavia.OctaviaCharm()
configuration_class = mock.MagicMock()
c.configuration_class = configuration_class
self.assertEqual(c.local_unit_name,
configuration_class().local_unit_name)

@@ -15,6 +15,7 @@
from __future__ import absolute_import
from __future__ import print_function
import json
import mock
import reactive.octavia_handlers as handlers
@@ -43,11 +44,17 @@ class TestRegisteredHooks(test_utils.TestRegisteredHooks):
'cluster_connected': ('ha.connected',),
'generate_heartbeat_key': ('leadership.is_leader',),
'setup_neutron_lbaas_proxy': (
'neutron-load-balancer.available',),
'get_nova_flavor': (
'neutron-api.available',),
'setup_hm_port': (
'identity-service.available',
'neutron-api.available',
'neutron-openvswitch.connected',
'amqp.available',),
'update_controller_ip_port_list': (
'leadership.is_leader',
'identity-service.available',
'neutron-api.available',
'amqp.available',),
},
'when_not': {
'init_db': ('db.synced',),
@@ -81,40 +88,42 @@ class TestOctaviaHandlers(test_utils.PatchHelper):
self.uuid4.assert_called_once_with()
def test_neutron_lbaas_proxy(self):
self.patch('charms.reactive.endpoint_from_flag', 'endpoint_from_flag')
endpoint = mock.MagicMock()
self.endpoint_from_flag.return_value = endpoint
self.patch('charms_openstack.ip.canonical_url', 'canonical_url')
self.canonical_url.return_value = 'http://1.2.3.4'
self.octavia_charm.api_port.return_value = '1234'
handlers.setup_neutron_lbaas_proxy()
self.canonical_url.assert_called_with(endpoint_type='int')
endpoint.publish_load_balancer_info.assert_called_with(
'octavia', 'http://1.2.3.4:1234')
def test_setup_hm_port(self):
self.patch('charms.reactive.endpoint_from_flag', 'endpoint_from_flag')
self.patch('charms.reactive.set_flag', 'set_flag')
self.patch_object(handlers.api_crud, 'setup_hm_port')
handlers.setup_hm_port()
self.setup_hm_port.assert_called_with(self.endpoint_from_flag(),
self.octavia_charm)
self.set_flag.assert_called_once_with('config.changed')
def test_update_controller_ip_port_list(self):
self.patch('charms.reactive.endpoint_from_flag', 'endpoint_from_flag')
self.patch('charms.leadership.leader_set', 'leader_set')
self.patch('charms.leadership.leader_get', 'leader_get')
self.patch_object(handlers.api_crud, 'get_port_ips')
self.get_port_ips.return_value = [
'2001:db8:42::42',
'2001:db8:42::51',
]
handlers.update_controller_ip_port_list()
self.leader_set.assert_called_once_with(
{
'controller-ip-port-list': json.dumps([
'2001:db8:42::42',
'2001:db8:42::51',
])})
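The handler serializes the IP list with `json.dumps` before storing it in Juju leader settings, which hold flat string key/value pairs; `controller_ip_port_list` on the consuming side decodes it with `json.loads`. The round-trip the two tests rely on:

```python
import json

ips = ['2001:db8:42::42', '2001:db8:42::51']
# Leader settings only store strings, hence the JSON encoding on write
# (update_controller_ip_port_list) and decoding on read
# (controller_ip_port_list).
stored = json.dumps(ips)
assert json.loads(stored) == ips
```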
def test_render(self):
self.patch('charms.reactive.set_state', 'set_state')