Merge telco into master

This commit is contained in:
Yuriy Taraday 2015-09-02 11:37:40 +03:00
commit 925b76c61a
49 changed files with 1526 additions and 883 deletions

View File

@@ -3,7 +3,7 @@
## Prerequisites
In this manual we assume that the user manages their environment with Fuel 5.1.1 and
has successfully upgraded it to Fuel 6.1 with the standard procedure.
has successfully upgraded it to Fuel 7.0 with the standard procedure.
Environments with the following configuration can be upgraded with Octane:
@@ -11,7 +11,7 @@ Environments with the following configuration can be upgraded with Octane:
- HA Multinode deployment mode
- KVM Compute
- Neutron with VLAN segmentation
- Ceph backend for Cinder AND Glance
- Ceph backend for Cinder AND Glance (Optional)
- No additional services installed (like Sahara, Murano, Ceilometer, etc.)
## Install Octane
@@ -29,10 +29,11 @@ Run Octane script to install necessary packages on Fuel master and patch
manifests and source code of components.
```
[root@fuel bin]# yum install -y git python-pip python-paramiko
[root@fuel bin]# ./octane prepare
```
## Install 6.1 Seed environment
## Install 7.0 Seed environment
First, pick the environment of version 5.1.1 you want to upgrade. Log in to Fuel
Master node and run:
@@ -47,7 +48,7 @@ to it as `ORIG_ID` below.
Use Octane script to create Upgrade Seed environment.
```
[root@fuel bin]# ./octane upgrade-env <ORIG_ID>
[root@fuel bin]# octane upgrade-env <ORIG_ID>
```
Remember the ID of resulting environment for later use, or store it to variable.
@@ -55,43 +56,44 @@ We will refer to it as <SEED_ID> later on.
### Upgrade controller #1
Choose a controller node from the list of nodes in 5.1.1 environment:
Choose added controller nodes from the list of unallocated nodes:
```
[root@fuel bin]# fuel node --env <ORIG_ID>
[root@fuel bin]# fuel node | grep discover
```
Remember the ID of the node and run the following command replacing <NODE_ID>
Remember the IDs of the nodes and run the following command replacing <NODE_ID>
with those numbers:
```
[root@fuel bin]# ./octane upgrade-node --isolated <SEED_ID> <NODE_ID>
[root@fuel bin]# octane -v --debug install-node --isolated <ORIG_ID> <SEED_ID> \
<NODE_ID> [<NODE_ID>, ...]
```
This command will move the node to Seed environment and install it as a primary
controller with version 6.1.
This command will install controller(s) with version 7.0 in the Upgrade Seed
environment.
### Upgrade State DB
State Database contains all metadata and status data of virtual resources in
your cloud environment. Octane transfers that data to 6.1 environment as a part
your cloud environment. Octane transfers that data to 7.0 environment as a part
of upgrade of CIC using the following command:
```
[root@fuel bin]# ./octane upgrade-db <ORIG_ID> <SEED_ID>
[root@fuel bin]# octane upgrade-db <ORIG_ID> <SEED_ID>
```
Before it starts data transfer, Octane stops all services on 6.1 CICs, and
Before it starts data transfer, Octane stops all services on 7.0 CICs, and
disables APIs on 5.1.1 CICs, putting the environment into **Maintenance mode**.
### Upgrade Ceph cluster
### Upgrade Ceph cluster (OPTIONAL)
Configuration of original Ceph cluster must be replicated to the 6.1
Configuration of original Ceph cluster must be replicated to the 7.0
environment. Use the following command to update configuration and restart
Ceph monitor at 6.1 controller:
Ceph monitor at 7.0 controller:
```
[root@fuel bin]# ./octane upgrade-ceph <ORIG_ID> <SEED_ID>
[root@fuel bin]# octane upgrade-ceph <ORIG_ID> <SEED_ID>
```
Verify the successful update using the following command:
@@ -100,30 +102,17 @@ Verify the successful update using the following command:
[root@fuel bin]# ssh root@node-<NODE_ID> "ceph health"
```
## Replace CICs 5.1.1 with 6.1
## Replace CICs 5.1.1 with 7.0
Now start all services on 6.1 CICs with upgraded data and redirect Compute
nodes from 5.1.1 CICs to 6.1 CICs.
Now start all services on 7.0 CICs with upgraded data and redirect Compute
nodes from 5.1.1 CICs to 7.0 CICs.
The following Octane script will start all services on 6.1 CICs, then disconnect 5.1
The following Octane script will start all services on 7.0 CICs, then disconnect 5.1
CICs from Management and Public networks, while keeping connection between CICs
themselves, and connect 6.1 CICs to those networks:
themselves, and connect 7.0 CICs to those networks:
```
[root@fuel bin]# ./octane upgrade-cics ORIG_ID SEED_ID
```
## Upgrade nodes
### Upgrade controllers
Now you have all your hypervisors working with single 6.1 controller from the
Seed environment. Upgrade your 5.1.1 controllers to get HA 6.1 OpenStack
cluster. Use the following command, replacing <NODE_ID> with ID of controller
you are going to upgrade this time:
```
[root@fuel bin]# ./octane upgrade-node <SEED_ID> <NODE_ID>
[root@fuel bin]# octane upgrade-control ORIG_ID SEED_ID
```
### Upgrade compute nodes
@@ -134,27 +123,18 @@ Select a node to upgrade from the list of nodes in 5.1.1 environment:
[root@fuel bin]# fuel node --env <ORIG_ID>
```
Run Octane script with 'upgrade-node' command to reassign node to 6.1
Run Octane script with 'upgrade-node' command to reassign node to 7.0
environment and upgrade it. You need to specify the IDs of the nodes as
additional arguments.
```
[root@fuel bin]# ./octane upgrade-node <SEED_ID> <NODE_ID>
[root@fuel bin]# octane upgrade-node <SEED_ID> <NODE_ID> [<NODE_ID> ...]
```
Repeat this process until all nodes are reassigned from 5.1.1 to 6.1 environment.
Repeat this process until all nodes are reassigned from 5.1.1 to 7.0 environment.
## Finish upgrade
### Clean up 6.1 environment
Run Octane script with 'cleanup' command to delete pending services data from
state database.
```
[root@fuel bin]# ./octane cleanup <SEED_ID>
```
### Clean up the Fuel Master node
Run 'cleanup-fuel' command to revert all changes made to components of the Fuel

View File

@@ -0,0 +1,123 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from octane.handlers.upgrade import controller as controller_upgrade
from octane.helpers import network
from octane.helpers.node_attributes import copy_disks
from octane.helpers.node_attributes import copy_ifaces
from octane import magic_consts
from octane.util import env as env_util
from cliff import command as cmd
from fuelclient.objects import environment as environment_obj
from fuelclient.objects import node as node_obj
LOG = logging.getLogger(__name__)
def isolate(nodes, env):
nodes.sort(key=lambda node: node.id, reverse=True)
hub = nodes[0]
deployment_info = env.get_default_facts(
'deployment', nodes=[hub.data['id']])
network.create_bridges(hub, env, deployment_info)
for node in nodes[1:]:
deployment_info = env.get_default_facts(
'deployment', nodes=[node.data['id']])
network.setup_isolation(hub, node, env, deployment_info)
for node in nodes:
network.flush_arp(node)
def update_node_settings(node, disks_fixture, ifaces_fixture):
if not magic_consts.DEFAULT_DISKS:
LOG.info("Updating node %s disk settings with fixture: %s",
str(node.id), disks_fixture)
disks = node.get_attribute('disks')
LOG.info("Original node %s disk settings: %s",
str(node.id), disks)
new_disks = list(copy_disks(disks_fixture, disks, 'by_name'))
LOG.info("New disk info generated: %s", new_disks)
node.upload_node_attribute('disks', new_disks)
else:
LOG.warn("Using default volumes for node %s", node)
LOG.warn("To keep custom volumes layout, change DEFAULT_DISKS const "
"in magic_consts.py module")
if not magic_consts.DEFAULT_NETS:
LOG.info("Updating node %s network settings with fixture: %s",
str(node.id), ifaces_fixture)
ifaces = node.get_attribute('interfaces')
LOG.info("Original node %s network settings: %s",
str(node.id), ifaces)
new_ifaces = list(copy_ifaces(ifaces_fixture, ifaces))
LOG.info("New interfaces info generated: %s", new_ifaces)
node.upload_node_attribute('interfaces', new_ifaces)
else:
LOG.warn("Using default networks for node %s", node)
def install_node(orig_id, seed_id, node_ids, isolated=False):
env = environment_obj.Environment
nodes = [node_obj.Node(node_id) for node_id in node_ids]
if orig_id == seed_id:
raise Exception("Original and seed environments have the same ID: %s"
% orig_id)
orig_env = env(orig_id)
orig_node = env_util.get_one_controller(orig_env)
seed_env = env(seed_id)
seed_env.assign(nodes, orig_node.data['roles'])
for node in nodes:
disk_info_fixture = orig_node.get_attribute('disks')
nic_info_fixture = orig_node.get_attribute('interfaces')
update_node_settings(node, disk_info_fixture, nic_info_fixture)
env_util.provision_nodes(seed_env, nodes)
for node in nodes:
# FIXME: properly call all handlers all over the place
controller_upgrade.ControllerUpgrade(
node, seed_env, isolated=isolated).predeploy()
if len(nodes) > 1:
isolate(nodes, seed_env)
env_util.deploy_changes(seed_env, nodes)
for node in nodes:
controller_upgrade.ControllerUpgrade(
node, seed_env, isolated=isolated).postdeploy()
class InstallNodeCommand(cmd.Command):
"""Install nodes to environment based on settings of orig environment"""
def get_parser(self, prog_name):
parser = super(InstallNodeCommand, self).get_parser(prog_name)
parser.add_argument(
'--isolated', action='store_true',
help="Isolate node's network from original cluster")
parser.add_argument(
'orig_id', type=int, metavar='ORIG_ID',
help="ID of original environment")
parser.add_argument(
'seed_id', type=int, metavar='SEED_ID',
help="ID of upgrade seed environment")
parser.add_argument(
'node_ids', type=int, metavar='NODE_ID', nargs='+',
help="IDs of nodes to be moved")
return parser
def take_action(self, parsed_args):
install_node(parsed_args.orig_id, parsed_args.seed_id,
parsed_args.node_ids, isolated=parsed_args.isolated)
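`update_node_settings` above copies disk and interface layouts from a reference controller onto the nodes being installed. A minimal, self-contained sketch of the assumed "by name" copy semantics (the real logic lives in `octane.helpers.node_attributes.copy_disks`, which may differ in detail):

```python
def copy_volumes_by_name(fixture_disks, target_disks):
    # Hypothetical simplification: reuse the reference node's volume
    # layout on every target disk whose name matches.
    by_name = {d['name']: d for d in fixture_disks}
    for disk in target_disks:
        ref = by_name.get(disk['name'])
        if ref is not None:
            # keep the target disk identity, borrow the reference layout
            disk = dict(disk, volumes=ref['volumes'])
        yield disk

fixture = [{'name': 'sda', 'volumes': [{'name': 'os', 'size': 50}]}]
target = [{'name': 'sda', 'volumes': []},
          {'name': 'sdb', 'volumes': [{'name': 'vm', 'size': 100}]}]
new_disks = list(copy_volumes_by_name(fixture, target))
```

Disks with no counterpart on the reference node (here `sdb`) keep their original layout.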

View File

@@ -10,7 +10,6 @@
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os.path
from cliff import command as cmd
@@ -32,16 +31,6 @@ def patch_puppet(revert=False):
cwd=magic_consts.PUPPET_DIR)
def install_octane_nailgun():
octane_nailgun = os.path.join(magic_consts.CWD, '..', 'octane_nailgun')
subprocess.call(["python", "setup.py", "bdist_wheel"], cwd=octane_nailgun)
wheel = glob.glob(os.path.join(octane_nailgun, 'dist', '*.whl'))[0]
subprocess.call(["dockerctl", "copy", wheel, "nailgun:/root/"])
docker.run_in_container("nailgun", ["pip", "install", "-U",
"/root/" + os.path.basename(wheel)])
docker.run_in_container("nailgun", ["pkill", "-f", "wsgi"])
def apply_patches(revert=False):
for container, prefix, patch in magic_consts.PATCHES:
docker.apply_patches(container, prefix,
@@ -54,10 +43,8 @@ def prepare():
os.makedirs(magic_consts.FUEL_CACHE)
subprocess.call(["yum", "-y", "install"] + magic_consts.PACKAGES)
subprocess.call(["pip", "install", "wheel"])
patch_puppet()
# From patch_all_containers
apply_patches()
install_octane_nailgun()
class PrepareCommand(cmd.Command):
@@ -72,4 +59,3 @@ class RevertCommand(cmd.Command):
def take_action(self, parsed_args):
apply_patches(revert=True)
patch_puppet(revert=True)

View File

@@ -0,0 +1,35 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cliff import command as cmd
from octane.helpers.sync_glance_images import sync_glance_images
class SyncImagesCommand(cmd.Command):
"""Sync glance images between ORIG and SEED environments"""
def get_parser(self, prog_name):
parser = super(SyncImagesCommand, self).get_parser(prog_name)
parser.add_argument(
'orig_id', type=int, metavar='ORIG_ID',
help="ID of original environment")
parser.add_argument(
'seed_id', type=int, metavar='SEED_ID',
help="ID of seed environment")
parser.add_argument(
'swift_ep', type=str,
help="Endpoint's name where swift-proxy service is listening on")
return parser
def take_action(self, parsed_args):
sync_glance_images(parsed_args.orig_id, parsed_args.seed_id,
parsed_args.swift_ep)

View File

@@ -0,0 +1,95 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import logging
from cliff import command as cmd
from fuelclient import objects
from requests import HTTPError
LOG = logging.getLogger(__name__)
ADMIN_NETWORK_NAME = 'fuelweb_admin'
def get_env_networks(env_id):
env = objects.Environment(env_id)
network_data = env.get_network_data()
return network_data['networks']
def update_env_networks(env_id, networks):
fields_to_update = ['meta', 'ip_ranges']
env = objects.Environment(env_id)
release_id = env.get_fresh_data()['release_id']
network_data = env.get_network_data()
node_group_id = None
for ng in network_data['networks']:
if ng['name'] == ADMIN_NETWORK_NAME:
continue
if node_group_id is None:
# for now we'll have only one node group
# so just take it id from any network
node_group_id = ng['group_id']
objects.NetworkGroup(ng['id']).delete()
data_to_update = {}
for ng in networks:
if ng['name'] == ADMIN_NETWORK_NAME:
continue
try:
objects.NetworkGroup.create(
ng['name'],
release_id,
ng['vlan_start'],
ng['cidr'],
ng['gateway'],
node_group_id,
ng['meta']
)
except HTTPError:
LOG.error("Cannot sync network '{0}'".format(ng['name']))
continue
data = {}
for key in fields_to_update:
data[key] = ng[key]
data_to_update[ng['name']] = data
# now we need to update new networks with
# correct ip_ranges and meta
network_data = env.get_network_data()
for ng in network_data['networks']:
if ng['name'] in data_to_update:
for k in fields_to_update:
ng[k] = data_to_update[ng['name']][k]
env.set_network_data(network_data)
class SyncNetworksCommand(cmd.Command):
"""Synchronize network groups in original and seed environments"""
def get_parser(self, prog_name):
parser = super(SyncNetworksCommand, self).get_parser(prog_name)
parser.add_argument(
'original_env', type=int, metavar='ORIGINAL_ENV_ID',
help="ID of original environment")
parser.add_argument(
'seed_env', type=int, metavar='SEED_ENV_ID',
help="ID of seed environment")
return parser
def take_action(self, parsed_args):
networks = get_env_networks(parsed_args.original_env)
update_env_networks(parsed_args.seed_env, networks)
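The delete-and-recreate flow above carries only `meta` and `ip_ranges` over from the source networks onto the freshly created ones. That merge step can be exercised in isolation (simplified sketch; field names follow the code above):

```python
FIELDS_TO_UPDATE = ['meta', 'ip_ranges']

def merge_network_fields(source_networks, fresh_networks):
    # Collect the fields to carry over, keyed by network name.
    data_to_update = {ng['name']: {k: ng[k] for k in FIELDS_TO_UPDATE}
                      for ng in source_networks}
    # Apply them onto the freshly created networks in place.
    for ng in fresh_networks:
        if ng['name'] in data_to_update:
            ng.update(data_to_update[ng['name']])
    return fresh_networks

src = [{'name': 'public', 'meta': {'vlan': 100},
        'ip_ranges': [['10.0.0.2', '10.0.0.126']]}]
dst = [{'name': 'public', 'meta': {}, 'ip_ranges': []}]
merged = merge_network_fields(src, dst)
```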

View File

@@ -0,0 +1,181 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import logging
import pyzabbix
import re
import requests
import yaml
from cliff import command as cmd
from fuelclient.objects import environment
from fuelclient.objects import node as node_obj
from octane.util import env as env_util
from octane.util import ssh
LOG = logging.getLogger(__name__)
def get_template_hosts_by_name(client, plugin_name):
return client.template.get(filter={'name': plugin_name},
selectHosts=['name'])[0]['hosts']
def get_host_snmp_ip(client, host_id):
# second type is SNMP type
return client.hostinterface.get(hostids=host_id,
output=['ip'],
filter={'type': 2})[0]['ip']
def get_zabbix_url(astute):
return 'http://{0}/zabbix'.format(astute['public_vip'])
def get_zabbix_credentials(astute):
return astute['zabbix']['username'], astute['zabbix']['password']
def get_astute_yaml(env):
node = env_util.get_one_controller(env)
with ssh.sftp(node).open('/etc/astute.yaml') as f:
data = f.read()
return yaml.safe_load(data)
def zabbix_monitoring_settings(astute):
return {'username': {'value': astute['zabbix']['username']},
'password': {'value': astute['zabbix']['password']},
'db_password': {'value': astute['zabbix']['db_password']},
'metadata': {'enabled': astute['zabbix']['enabled']}}
def emc_vnx_settings(astute):
return {'emc_sp_a_ip': {'value': astute['storage']['emc_sp_a_ip']},
'emc_sp_b_ip': {'value': astute['storage']['emc_sp_b_ip']},
'emc_password': {'value': astute['storage']['emc_password']},
'emc_username': {'value': astute['storage']['emc_username']},
'emc_pool_name': {'value': astute['storage']['emc_pool_name']},
'metadata': {'enabled': astute['storage']['volumes_emc']}}
def zabbix_snmptrapd_settings(astute):
node = node_obj.Node(astute['uid'])
with ssh.sftp(node).open('/etc/snmp/snmptrapd.conf') as f:
data = f.read()
template = re.compile(r"authCommunity\s[a-z-,]+\s([a-z-]+)")
match = template.search(data)
return {'community': {'value': match.group(1)},
'metadata': {'enabled': True}}
def get_zabbix_client(astute):
url = get_zabbix_url(astute)
user, password = get_zabbix_credentials(astute)
session = requests.Session()
node_cidr = astute['network_scheme']['endpoints']['br-fw-admin']['IP'][0]
node_ip = node_cidr.split('/')[0]
session.proxies = {
'http': 'http://{0}:8888'.format(node_ip)
}
client = pyzabbix.ZabbixAPI(server=url, session=session)
client.login(user=user, password=password)
return client
def zabbix_monitoring_emc_settings(astute):
client = get_zabbix_client(astute)
hosts = get_template_hosts_by_name(client, 'Template EMC VNX')
for host in hosts:
host['ip'] = get_host_snmp_ip(client, host['hostid'])
settings = ','.join('{0}:{1}'.format(host['name'], host['ip'])
for host in hosts)
return {'hosts': {'value': settings},
'metadata': {'enabled': True}}
def zabbix_monitoring_extreme_networks_settings(astute):
client = get_zabbix_client(astute)
hosts = get_template_hosts_by_name(client, 'Template Extreme Networks')
for host in hosts:
host['ip'] = get_host_snmp_ip(client, host['hostid'])
settings = ','.join('{0}:{1}'.format(host['name'], host['ip'])
for host in hosts)
return {'hosts': {'value': settings},
'metadata': {'enabled': True}}
def transfer_plugins_settings(orig_env_id, seed_env_id, plugins):
orig_env = environment.Environment(orig_env_id)
seed_env = environment.Environment(seed_env_id)
astute = get_astute_yaml(orig_env)
attrs = {}
for plugin in plugins:
LOG.info("Fetching settings for plugin '%s'", plugin)
attrs[plugin] = PLUGINS[plugin](astute)
seed_env.update_attributes({'editable': attrs})
PLUGINS = {
'zabbix_monitoring': zabbix_monitoring_settings,
'emc_vnx': emc_vnx_settings,
'zabbix_snmptrapd': zabbix_snmptrapd_settings,
'zabbix_monitoring_emc': zabbix_monitoring_emc_settings,
'zabbix_monitoring_extreme_networks':
zabbix_monitoring_extreme_networks_settings,
}
def plugin_names(s):
plugins = s.split(',')
for plugin in plugins:
if plugin not in PLUGINS:
raise argparse.ArgumentTypeError("Unknown plugin '{0}'"
.format(plugin))
return plugins
class UpdatePluginSettingsCommand(cmd.Command):
"""Transfer settings for specified plugin from ORIG_ENV to SEED_ENV"""
def get_parser(self, prog_name):
parser = super(UpdatePluginSettingsCommand, self).get_parser(prog_name)
parser.add_argument(
'orig_env',
type=int,
metavar='ORIG_ID',
help="ID of original environment")
parser.add_argument(
'seed_env',
type=int,
metavar='SEED_ID',
help="ID of seed environment")
parser.add_argument(
'--plugins',
type=plugin_names,
help="Comma separated values: {0}".format(', '.join(PLUGINS)))
return parser
def take_action(self, parsed_args):
transfer_plugins_settings(parsed_args.orig_env,
parsed_args.seed_env,
parsed_args.plugins)
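The `zabbix_snmptrapd_settings` handler above recovers the SNMP community string from `snmptrapd.conf` with a regular expression. Its behavior can be checked against a sample config line:

```python
import re

# Same pattern as in zabbix_snmptrapd_settings above.
template = re.compile(r"authCommunity\s[a-z-,]+\s([a-z-]+)")

# A typical snmptrapd.conf directive: actions list, then community.
sample_conf = "authCommunity log,execute,net public\n"
match = template.search(sample_conf)
community = match.group(1)
```

The character class `[a-z-,]+` matches the comma-separated action list but not whitespace, so the captured group is the community string that follows it.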

View File

@@ -0,0 +1,121 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import subprocess
import yaml
from cliff import command as cmd
from fuelclient.objects import environment as environment_obj
from octane.helpers import network
from octane import magic_consts
from octane.util import env as env_util
from octane.util import maintenance
from octane.util import ssh
def start_corosync_services(env):
node = next(env_util.get_controllers(env))
status_out, _ = ssh.call(['crm', 'resource', 'list'],
stdout=ssh.PIPE,
node=node)
for service in maintenance.parse_crm_status(status_out):
while True:
try:
ssh.call(['crm', 'resource', 'start', service],
node=node)
except subprocess.CalledProcessError:
pass
else:
break
def start_upstart_services(env):
controllers = list(env_util.get_controllers(env))
for node in controllers:
sftp = ssh.sftp(node)
try:
svc_file = sftp.open('/root/services_list')
except IOError:
raise
else:
with svc_file:
to_start = svc_file.read().splitlines()
for service in to_start:
ssh.call(['start', service], node=node)
def disconnect_networks(env):
controllers = list(env_util.get_controllers(env))
for node in controllers:
deployment_info = env.get_default_facts('deployment',
nodes=[node.data['id']])
for info in deployment_info:
network.delete_patch_ports(node, info)
def connect_to_networks(env):
deployment_info = []
controllers = list(env_util.get_controllers(env))
backup_path = os.path.join(magic_consts.FUEL_CACHE,
'deployment_{0}.orig'
.format(env.id))
for filename in os.listdir(backup_path):
filepath = os.path.join(backup_path, filename)
with open(filepath) as info_file:
info = yaml.safe_load(info_file)
deployment_info.append(info)
for node in controllers:
for info in deployment_info:
if info['role'] in ('primary-controller', 'controller'):
network.delete_overlay_networks(node, info)
network.create_patch_ports(node, info)
def update_neutron_config(env):
controllers = list(env_util.get_controllers(env))
tenant_file = '%s/env-%s-service-tenant-id' % (magic_consts.FUEL_CACHE,
str(env.id))
with open(tenant_file) as f:
tenant_id = f.read()
sed_script = r's/^(nova_admin_tenant_id )=.*/\1 =%s/' % (tenant_id,)
for node in controllers:
ssh.call(['sed', '-re', sed_script, '-i', '/etc/neutron/neutron.conf'],
node=node)
def upgrade_control_plane(orig_id, seed_id):
orig_env = environment_obj.Environment(orig_id)
seed_env = environment_obj.Environment(seed_id)
start_corosync_services(seed_env)
start_upstart_services(seed_env)
disconnect_networks(orig_env)
connect_to_networks(seed_env)
update_neutron_config(seed_env)
class UpgradeControlPlaneCommand(cmd.Command):
"""Switch control plane to the seed environment"""
def get_parser(self, prog_name):
parser = super(UpgradeControlPlaneCommand, self).get_parser(prog_name)
parser.add_argument(
'orig_id', type=int, metavar='ORIG_ID',
help="ID of original environment")
parser.add_argument(
'seed_id', type=int, metavar='SEED_ID',
help="ID of seed environment")
return parser
def take_action(self, parsed_args):
upgrade_control_plane(parsed_args.orig_id, parsed_args.seed_id)
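`update_neutron_config` above shells out to `sed` to pin `nova_admin_tenant_id` in `neutron.conf`. The same substitution can be modeled with Python's `re` for a quick sanity check (a sketch only; the production code runs `sed` over SSH):

```python
import re

def set_admin_tenant_id(conf_text, tenant_id):
    # Equivalent of: sed -re 's/^(nova_admin_tenant_id\s*)=.*/\1= <id>/'
    return re.sub(r'^(nova_admin_tenant_id\s*)=.*$',
                  r'\g<1>= ' + tenant_id,
                  conf_text,
                  flags=re.MULTILINE)

conf = "debug = True\nnova_admin_tenant_id = old\n"
new_conf = set_admin_tenant_id(conf, "abc123")
```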

View File

@@ -18,7 +18,9 @@ from cliff import command as cmd
from fuelclient.objects import environment as environment_obj
from fuelclient.objects import release as release_obj
from octane import magic_consts
from octane.util import env as env_util
from octane.util import ssh
LOG = logging.getLogger(__name__)
@@ -33,6 +35,16 @@ def find_release(operating_system, version):
operating_system, version)
def find_deployable_release(operating_system):
for release in release_obj.Release.get_all():
if release.data['operating_system'] == operating_system and \
release.data['is_deployable']:
return release
else:
raise Exception("Deployable release not found for os %s" %
operating_system)
def set_cobbler_provision(env_id):
env = environment_obj.Environment(env_id)
settings = env.get_settings_data()
@@ -41,10 +53,24 @@ def set_cobbler_provision(env_id):
def upgrade_env(env_id):
target_release = find_release("Ubuntu", "2014.2.2-6.1")
target_release = find_deployable_release("Ubuntu")
return env_util.clone_env(env_id, target_release)
def write_service_tenant_id(env_id):
env = environment_obj.Environment(env_id)
node = env_util.get_one_controller(env)
tenant_id, _ = ssh.call(["bash", "-c", ". /root/openrc; "
"keystone tenant-list | "
"awk -F'|' '$2 ~ /id/ {print $3}' | tr -d ' '"],
stdout=ssh.PIPE,
node=node)
tenant_file = '%s/env-%s-service-tenant-id' % (magic_consts.FUEL_CACHE,
str(env_id))
with open(tenant_file, 'w') as f:
f.write(tenant_id)
class UpgradeEnvCommand(cmd.Command):
"""Create upgrade seed env for env ENV_ID and copy settings to it"""
@@ -56,6 +82,6 @@ class UpgradeEnvCommand(cmd.Command):
return parser
def take_action(self, parsed_args):
write_service_tenant_id(parsed_args.env_id)
seed_id = upgrade_env(parsed_args.env_id)
print(seed_id) # TODO: This shouldn't be needed
set_cobbler_provision(seed_id)

View File

@@ -50,7 +50,6 @@ def upgrade_node(env_id, node_ids, isolated=False):
call_handlers('preupgrade')
call_handlers('prepare')
env_util.move_nodes(env, nodes)
env_util.provision_nodes(env, nodes)
call_handlers('predeploy')
env_util.deploy_nodes(env, nodes)
call_handlers('postdeploy')

View File

@@ -1,4 +0,0 @@
FROM fuel/cobbler_6.1
ADD resources/pmanager.py.patch /tmp/pmanager.py.patch
ADD resources/patch /usr/bin/
RUN patch -d /usr/lib/python2.6/site-packages/cobbler < /tmp/pmanager.py.patch

View File

@@ -1,39 +0,0 @@
--- pmanager.py 2015-03-13 16:14:43.955031037 +0300
+++ pmanager.py 2015-03-13 18:57:27.173407487 +0300
@@ -672,8 +672,26 @@
def num_ceph_osds(self):
return self.get_partition_count('ceph')
+ def contains_save_ceph(self, disk=None, part=None):
+ def is_save_partition(part):
+ return (part.get("name") == "ceph" and
+ part["type"] == "partition" and
+ part["size"] > 0 and
+ part.get("keep"))
+ if disk is not None:
+ for volume in disk["volumes"]:
+ if is_save_partition(volume):
+ return True
+ return False
+ elif part is not None:
+ return is_save_partition(part)
+
def erase_partition_table(self):
for disk in self.iterdisks():
+ if self.contains_save_ceph(disk=disk):
+ self.early("parted -s {0} print free"
+ .format(self._disk_dev(disk)))
+ continue
self.early("test -e {0} && "
"dd if=/dev/zero of={0} "
"bs=1M count=10".format(self._disk_dev(disk)))
@@ -861,6 +879,9 @@
self.late("cat /proc/mdstat")
self.late("cat /proc/partitions")
+ if self.contains_save_ceph(part=part):
+ continue
+
# clear any fs info that may remain on newly created partition
self.late("dd if=/dev/zero of={0}{1}{2} bs=1M count=10"
"".format(self._disk_dev(disk),

View File

@@ -1,4 +0,0 @@
FROM fuel/nailgun_6.1
ADD resources/manager.py.patch /tmp/manager.py.patch
ADD resources/patch /usr/bin/
RUN patch -d /usr/lib/python2.6/site-packages/nailgun/volumes/ < /tmp/manager.py.patch

View File

@@ -1,82 +0,0 @@
diff --git a/manager.py b/manager.py
index 91200f1..b60aea3 100644
--- manager.py
+++ manager.py
@@ -224,8 +224,10 @@ class DisksFormatConvertor(object):
volume_manager = node.volume_manager
for disk in disks:
for volume in disk['volumes']:
- full_format = volume_manager.set_volume_size(
- disk['id'], volume['name'], volume['size'])
+ volume_manager.set_volume_size(disk['id'], volume['name'],
+ volume['size'])
+ full_format = volume_manager.set_volume_flags(
+ disk['id'], volume)
return full_format
@@ -547,6 +549,14 @@ class Disk(object):
volume['size'] = size
self.free_space -= size
+ def set_keep_flag(self, name, value):
+ """Set keep flag
+ """
+ for volume in self.volumes:
+ if (volume.get('type') == 'partition' and
+ volume.get('name') == name):
+ volume['keep'] = bool(value)
+
def reset(self):
self.volumes = []
self.free_space = self.size
@@ -594,16 +604,21 @@ class VolumeManager(object):
boot_is_raid = True if disks_count > 1 else False
existing_disk = filter(
- lambda disk: d['disk'] == disk['id'],
+ lambda disk: set(d['extra']) == set(disk['extra']),
only_disks(self.volumes))
+ try:
+ disk_id = existing_disk[0]['id']
+ except KeyError as exc:
+ self.__logger('Cannot find existing disk for disk %r' % d)
+ raise exc
disk_volumes = existing_disk[0].get(
'volumes', []) if existing_disk else []
disk = Disk(
disk_volumes,
self.call_generator,
- d["disk"],
+ disk_id,
d["name"],
byte_to_megabyte(d["size"]),
boot_is_raid=boot_is_raid,
@@ -650,6 +665,25 @@ class VolumeManager(object):
self.__logger('Updated volume size %s' % self.volumes)
return self.volumes
+ def set_volume_flags(self, disk_id, volume):
+ """Set flags of volume
+ """
+ volume_name = volume['name']
+ self.__logger('Update volume flags for disk=%s volume_name=%s' %
+ (disk_id, volume_name))
+
+ disk = filter(lambda disk: disk.id == disk_id, self.disks)[0]
+
+ if volume_name == 'ceph':
+ disk.set_keep_flag(volume_name, volume.get('keep'))
+
+ for idx, volume in enumerate(self.volumes):
+ if volume.get('id') == disk.id:
+ self.volumes[idx] = disk.render()
+
+ self.__logger('Updated volume flags %s' % self.volumes)
+ return self.volumes
+
def get_space_type(self, volume_name):
"""Get type of space which represents on disk
as volume with volume_name

View File

@@ -16,11 +16,25 @@ import stat
from octane.handlers import upgrade
from octane import magic_consts
from octane.util import env as env_util
from octane.util import node as node_util
from octane.util import ssh
class ComputeUpgrade(upgrade.UpgradeHandler):
def prepare(self):
self.evacuate_host()
self.preserve_partition()
def postdeploy(self):
controller = env_util.get_one_controller(self.env)
ssh.call(
["sh", "-c", ". /root/openrc; "
"nova service-enable node-{0} nova-compute".format(
self.node.data['id'])],
node=controller,
)
def evacuate_host(self):
controller = env_util.get_one_controller(self.env)
with ssh.tempdir(controller) as tempdir:
local_path = os.path.join(
@@ -34,11 +48,8 @@ class ComputeUpgrade(upgrade.UpgradeHandler):
node=controller,
)
def postdeploy(self):
controller = env_util.get_one_controller(self.env)
ssh.call(
["sh", "-c", ". /root/openrc; "
"nova service-enable node-{0} nova-compute".format(
self.node.data['id'])],
node=controller,
)
# TODO(ogelbukh): move this action to base handler and set a list of
# partitions to preserve as an attribute of a role.
def preserve_partition(self):
partition = 'vm'
node_util.preserve_partition(self.node, partition)

View File

@@ -33,8 +33,7 @@ class ControllerUpgrade(upgrade.UpgradeHandler):
self.env, self.node)
def predeploy(self):
deployment_info = self.env.get_default_facts(
'deployment', nodes=[self.node.data['id']])
deployment_info = env_util.merge_deployment_info(self.env)
if self.isolated:
# From backup_deployment_info
backup_path = os.path.join(
@@ -45,6 +44,8 @@ class ControllerUpgrade(upgrade.UpgradeHandler):
os.makedirs(backup_path)
# Roughly taken from Environment.write_facts_to_dir
for info in deployment_info:
if not info['uid'] == str(self.node.id):
continue
fname = os.path.join(
backup_path,
"{0}_{1}.yaml".format(info['role'], info['uid']),
@ -52,14 +53,16 @@ class ControllerUpgrade(upgrade.UpgradeHandler):
with open(fname, 'w') as f:
yaml.safe_dump(info, f, default_flow_style=False)
for info in deployment_info:
if not info['uid'] == str(self.node.id):
continue
if self.isolated:
transformations.remove_physical_ports(info)
transformations.remove_ports(info)
endpoints = deployment_info[0]["network_scheme"]["endpoints"]
self.gateway = endpoints["br-ex"]["gateway"]
transformations.reset_gw_admin(info)
# From run_ping_checker
info['run_ping_checker'] = False
transformations.remove_predefined_nets(info)
transformations.reset_gw_admin(info)
self.env.upload_facts('deployment', deployment_info)
tasks = self.env.get_deployment_tasks()
@ -82,3 +85,9 @@ class ControllerUpgrade(upgrade.UpgradeHandler):
ssh.call(['ip', 'route', 'delete', 'default'], node=self.node)
ssh.call(['ip', 'route', 'add', 'default', 'via', self.gateway],
node=self.node)
def get_admin_gateway(environment):
for net in environment.get_network_data()['networks']:
if net["name"] == "fuelweb_admin":
return net["gateway"]
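The predeploy logic above repeatedly narrows the environment's deployment facts down to the entries for one node by comparing `uid` values as strings; a minimal standalone sketch of that filtering (the fact dicts here are made up for illustration):

```python
def facts_for_node(deployment_info, node_id):
    """Keep only the fact entries whose 'uid' matches the given node id."""
    return [info for info in deployment_info
            if info['uid'] == str(node_id)]

deployment_info = [
    {'uid': '1', 'role': 'controller'},
    {'uid': '2', 'role': 'compute'},
    {'uid': '1', 'role': 'mongo'},
]

print([i['role'] for i in facts_for_node(deployment_info, 1)])
```

Note the `str()` conversion: Nailgun facts carry `uid` as a string while node objects expose a numeric id, which is why the handler compares against `str(self.node.id)`.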

octane/helpers/network.py (new file, 313 lines)

@ -0,0 +1,313 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import re
import subprocess
from octane import magic_consts
from octane.util import ssh
from octane.helpers import transformations as ts
LOG = logging.getLogger(__name__)
def install_openvswitch(node, master_ip):
cmds = []
cmds.append(
['sh', '-c',
'echo'
' "deb http://{0}:8080/2015.1.0-7.0/ubuntu/x86_64 mos7.0'
' main restricted" >> /etc/apt/sources.list'.format(master_ip)])
cmds.append(['apt-get', 'update'])
cmds.append(
['apt-get', 'install', '-y', '--force-yes', 'openvswitch-switch'])
cmds.append(['sh', '-c', 'sed -i 1,2d /etc/apt/sources.list'])
for cmd in cmds:
ssh.call(cmd, node=node)
def set_bridge_mtu(node, bridge):
ssh.call(['ip', 'link', 'set', 'dev', bridge, 'mtu', '1450'], node=node)
def create_ovs_bridge(node, bridge):
cmds = []
cmds.append(['ovs-vsctl', 'add-br', bridge])
cmds.append(['ip', 'link', 'set', 'up', 'dev', bridge])
cmds.append(['ip', 'link', 'set', 'mtu', '1450', 'dev', bridge])
for cmd in cmds:
ssh.call(cmd, node=node)
def create_lnx_bridge(node, bridge):
cmd = ['brctl', 'addbr', bridge]
ssh.call(cmd, node=node)
def create_tunnel_from_node_ovs(local, remote, bridge, key, admin_iface):
def check_tunnel(node, bridge, port):
cmd = ['sh', '-c',
'ovs-vsctl list-ports %s | grep -q %s' % (bridge, port)]
try:
ssh.call(cmd, node=node)
except subprocess.CalledProcessError:
return False
else:
return True
gre_port = '%s--gre-%s' % (bridge, remote.data['ip'])
if check_tunnel(local, bridge, gre_port):
return
cmd = ['ovs-vsctl', 'add-port', bridge, gre_port,
'--', 'set', 'Interface', gre_port,
'type=gre',
'options:remote_ip=%s' % (remote.data['ip'],),
'options:key=%d' % (key,)]
ssh.call(cmd, node=local)
def create_tunnel_from_node_lnx(local, remote, bridge, key, admin_iface):
def check_tunnel(node, port):
cmd = ['sh', '-c',
'ip link show dev %s' % (port,)]
try:
ssh.call(cmd, node=node)
except subprocess.CalledProcessError:
return False
else:
return True
gre_port = 'gre%s-%s' % (remote.id, key)
if check_tunnel(local, gre_port):
return
cmds = []
cmds.append(['ip', 'link', 'add', gre_port,
'type', 'gretap',
'remote', remote.data['ip'],
'local', local.data['ip'],
'key', str(key)])
cmds.append(['ip', 'link', 'set', 'up', 'dev', gre_port])
cmds.append(['ip', 'link', 'set', 'mtu', '1450', 'dev', gre_port])
cmds.append(['ip', 'link', 'set', 'up', 'dev', bridge])
cmds.append(['brctl', 'addif', bridge, gre_port])
for cmd in cmds:
ssh.call(cmd, node=local)
create_tunnel_providers = {
'lnx': create_tunnel_from_node_lnx,
'ovs': create_tunnel_from_node_ovs
}
create_bridge_providers = {
'lnx': create_lnx_bridge,
'ovs': create_ovs_bridge
}
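The two provider tables above turn the provider string read from the network scheme into a concrete implementation. A minimal sketch of the same dispatch pattern, with stub functions standing in for the real ssh-driven ones:

```python
def create_lnx_bridge(node, bridge):
    return 'brctl addbr %s' % bridge  # stub for illustration

def create_ovs_bridge(node, bridge):
    return 'ovs-vsctl add-br %s' % bridge  # stub for illustration

create_bridge_providers = {
    'lnx': create_lnx_bridge,
    'ovs': create_ovs_bridge,
}

provider = 'ovs'  # as read from the add-br action in the deployment facts
create_bridge = create_bridge_providers[provider]
print(create_bridge(None, 'br-mgmt'))
```

Keeping the provider-specific commands behind a dict lookup means `create_bridges` and `create_overlay_networks` stay provider-agnostic.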
def create_bridges(node, env, deployment_info):
for info in deployment_info:
actions = ts.get_actions(info)
LOG.info("Network scheme actions for node %s: %s",
node.id, actions)
master_ip = info["master_ip"]
for bridge in magic_consts.BRIDGES:
provider = ts.get_bridge_provider(actions, bridge)
LOG.info("Found provider for bridge %s: %s", bridge, provider)
if provider == 'ovs' and bridge == magic_consts.BRIDGES[0]:
LOG.info("Installing openvswitch to node %s", node.id)
install_openvswitch(node, master_ip)
create_bridge = create_bridge_providers[provider]
create_bridge(node, bridge)
def create_overlay_networks(node, remote, env, deployment_info, key=0):
"""Create GRE tunnels between a node and other nodes in the environment
Builds tunnels for all bridges listed in the BRIDGES constant.
:param: node
:param: remote
:param: env
:param: deployment_info
:param: key
"""
for info in deployment_info:
actions = ts.get_actions(info)
for bridge in magic_consts.BRIDGES:
provider = ts.get_bridge_provider(actions, bridge)
admin_iface = ts.get_admin_iface(actions)
create_tunnel_from_node = create_tunnel_providers[provider]
LOG.info("Creating tunnel for bridge %s on node %s, remote %s",
bridge, node.id, remote.id)
create_tunnel_from_node(node, remote, bridge, key,
admin_iface)
key += 1
def setup_isolation(hub, node, env, deployment_info):
"""Create bridges and overlay networks for the given node
Isolate the given node from the networks attached to the bridges in the
magic_consts.BRIDGES list. Create bridges on the node and
create tunnels that constitute overlay network on top of the admin network.
It ensures that nodes are connected during the deployment, as required.
If there's only 1 controller node in the environment, there's no need to
create any tunnels.
:param: node
:param: env
:param: deployment_info
"""
create_bridges(node, env, deployment_info)
create_overlay_networks(hub,
node,
env,
deployment_info,
node.id)
create_overlay_networks(node,
hub,
env,
deployment_info,
node.id)
def list_tunnels_ovs(node, bridge):
tunnels = []
stdout, _ = ssh.call(['ovs-vsctl', 'list-ports', bridge],
stdout=ssh.PIPE,
node=node)
for match in re.finditer(r"\S+\n", stdout):
tunnels.append(match.group(0)[:-1])
return tunnels
def delete_tunnels_ovs(node, bridge):
tunnels = list_tunnels_ovs(node, bridge)
for tun in tunnels:
ssh.call(['ovs-vsctl', 'del-port', bridge, tun],
node=node)
def list_tunnels_lnx(node, bridge):
tunnels = []
gre_port_re = r"gre[0-9]+-[0-9]+"
stdout, _ = ssh.call(['brctl', 'show', bridge],
stdout=ssh.PIPE,
node=node)
for match in re.finditer(gre_port_re, stdout):
tunnels.append(match.group(0))
return tunnels
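`list_tunnels_lnx` recovers tunnel ports by matching the `gre<remote-id>-<key>` naming convention (set in `create_tunnel_from_node_lnx`) against `brctl show` output. Exercised standalone, with made-up `brctl` output:

```python
import re

# Sample `brctl show br-mgmt` output (made up for illustration):
stdout = (
    "bridge name\tbridge id\t\tSTP enabled\tinterfaces\n"
    "br-mgmt\t\t8000.000000000000\tno\t\tgre3-3\n"
    "\t\t\t\t\t\tgre5-5\n"
)

tunnels = [m.group(0) for m in re.finditer(r"gre[0-9]+-[0-9]+", stdout)]
print(tunnels)
```

Matching on the port-name pattern rather than parsing the table layout keeps the code robust against `brctl`'s column wrapping.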
def delete_tunnels_lnx(node, bridge):
tunnels = list_tunnels_lnx(node, bridge)
for tun in tunnels:
ssh.call(['brctl', 'delif', bridge, tun], node=node)
ssh.call(['ip', 'link', 'delete', tun], node=node)
delete_tunnels = {
'lnx': delete_tunnels_lnx,
'ovs': delete_tunnels_ovs
}
def delete_overlay_networks(node, host_config):
for bridge in magic_consts.BRIDGES:
actions = ts.get_actions(host_config)
provider = ts.get_bridge_provider(actions, bridge)
delete_tunnels_cmd = delete_tunnels[provider]
delete_tunnels_cmd(node, bridge)
def delete_port_ovs(bridge, port):
bridges = port['bridges']
port_name = "%s--%s" % (bridges[0], bridges[1])
return ['ovs-vsctl', 'del-port', bridges[0], port_name]
def delete_port_lnx(bridge, port):
return ['brctl', 'delif', bridge, port['name']]
delete_port_providers = {
'ovs': delete_port_ovs,
'lnx': delete_port_lnx
}
def delete_patch_ports(node, host_config):
for bridge in magic_consts.BRIDGES:
port, provider = ts.get_patch_port_action(host_config, bridge)
delete_port_cmd = delete_port_providers[provider]
cmd = delete_port_cmd(bridge, port)
ssh.call(cmd, node=node)
def create_port_ovs(bridge, port):
cmds = []
tags = port.get('tags', ['', ''])
trunks = port.get('trunks', [])
bridges = port.get('bridges', [])
tags = ["tag=%s" % (tag,) if tag else '' for tag in tags]
trunk = ''
trunk_str = ','.join(str(t) for t in trunks)
if trunk_str:
trunk = 'trunks=[%s]' % (trunk_str,)
if bridges:
br_patch = "%s--%s" % (bridges[0], bridges[1])
ph_patch = "%s--%s" % (bridges[1], bridges[0])
cmds.append(['ovs-vsctl', 'add-port', bridge, br_patch, tags[0], trunk,
'--', 'set', 'interface', br_patch, 'type=patch',
'options:peer=%s' % ph_patch])
cmds.append(['ovs-vsctl', 'add-port', bridge, ph_patch, tags[1], trunk,
'--', 'set', 'interface', ph_patch, 'type=patch',
'options:peer=%s' % br_patch])
return cmds
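`create_port_ovs` builds a pair of symmetric patch ports whose names encode both bridges, each peered to the other. A simplified, self-contained re-derivation of that command generation (the port dict values are made up):

```python
def ovs_patch_cmds(bridge, port):
    # Simplified sketch of create_port_ovs for illustration.
    tags = ["tag=%s" % t if t else '' for t in port.get('tags', ['', ''])]
    trunks = ','.join(str(t) for t in port.get('trunks', []))
    trunk = 'trunks=[%s]' % trunks if trunks else ''
    b = port['bridges']
    br_patch = "%s--%s" % (b[0], b[1])
    ph_patch = "%s--%s" % (b[1], b[0])
    return [
        ['ovs-vsctl', 'add-port', bridge, br_patch, tags[0], trunk,
         '--', 'set', 'interface', br_patch, 'type=patch',
         'options:peer=%s' % ph_patch],
        ['ovs-vsctl', 'add-port', bridge, ph_patch, tags[1], trunk,
         '--', 'set', 'interface', ph_patch, 'type=patch',
         'options:peer=%s' % br_patch],
    ]

cmds = ovs_patch_cmds('br-ex', {'bridges': ['br-ex', 'br-prv'],
                                'tags': [101, '']})
print(cmds[0][3], cmds[1][3])
```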
def create_port_lnx(bridge, port):
port_name = port.get('name')
if port_name:
return [
['brctl', 'addif', bridge, port['name']],
['ip', 'link', 'set', 'up', 'dev', port['name']]
]
else:
raise Exception("No name for port: %s" % (port,))
create_port_providers = {
'lnx': create_port_lnx,
'ovs': create_port_ovs
}
def create_patch_ports(node, host_config):
for bridge in magic_consts.BRIDGES:
port, provider = ts.get_patch_port_action(host_config, bridge)
create_port_cmd = create_port_providers[provider]
cmds = create_port_cmd(bridge, port)
for cmd in cmds:
ssh.call(cmd, node=node)
def flush_arp(node):
cmd = ['ip', 'neigh', 'flush', 'all']
ssh.call(cmd, node=node)


@ -0,0 +1,72 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def copy_ifaces(src, dst):
def pull(ifaces):
for iface in ifaces:
yield (iface['name'],
iface['assigned_networks'])
def push(ifaces, assignments, nets):
for iface in ifaces:
networks = assignments.get(iface['name'], [])
networks = [{'id': nets[net['name']],
'name': net['name']} for net in networks]
yield dict(iface,
assigned_networks=networks,
)
nets = {}
for iface in dst:
nets.update(dict([(net['name'], net['id'])
for net in iface['assigned_networks']]))
assignments = pull(src)
ifaces = push(dst, dict(assignments), nets)
return ifaces
def by_extra(disk):
return ''.join(sorted(disk['extra']))
def by_name(disk):
return disk['name']
KEY_FUNCS = {
'by_extra': by_extra,
'by_name': by_name,
}
def copy_disks(src, dst, method):
key_func = KEY_FUNCS[method]
def pull(disks):
for disk in disks:
yield (key_func(disk),
disk['volumes'])
def push(disks1, disks2):
def to_dict(attrs):
return dict((key_func(attr), attr) for attr in attrs)
dict_disks1 = to_dict(disks1)
for extra, volumes in disks2:
dict_disks1[extra].update(volumes=volumes)
yield dict_disks1[extra]
fixture_disks = pull(src)
disks = push(dst, fixture_disks)
return disks
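`copy_disks` carries volume layouts from one node's disks to another's, pairing disks either by name or by their sorted `extra` IDs. A minimal sketch of the same key-based pairing, with made-up disk dicts:

```python
def merge_volumes(src_disks, dst_disks, key):
    """Copy volume layouts from src onto dst, matching disks by key(disk)."""
    dst_by_key = {key(d): d for d in dst_disks}
    for disk in src_disks:
        dst_by_key[key(disk)]['volumes'] = disk['volumes']
    return dst_disks

src = [{'name': 'sda', 'volumes': [{'name': 'os', 'size': 100}]}]
dst = [{'name': 'sda', 'volumes': []}]
merged = merge_volumes(src, dst, key=lambda d: d['name'])
print(merged[0]['volumes'])
```

Matching by `extra` (hardware IDs) survives device renames across reboots, which is why it is the alternative to plain name matching.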


@ -0,0 +1,226 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import yaml
from fuelclient.objects import environment as environment_obj
from octane.util import env as env_util
from octane.util import ssh
LOG = logging.getLogger(__name__)
def get_astute_yaml(node):
data = None
with ssh.sftp(node).open('/etc/astute.yaml') as f:
data = f.read()
return yaml.load(data)
def get_endpoint_ip(ep_name, yaml_data):
endpoint = yaml_data['network_scheme']['endpoints'].get(ep_name)
if not endpoint:
return None
net_data = endpoint["IP"][0]
if net_data:
return net_data.split('/')[0]
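The endpoints section of astute.yaml stores addresses in CIDR form, so `get_endpoint_ip` strips the prefix length. A standalone sketch with made-up endpoint data:

```python
# Made-up astute.yaml fragment for illustration:
yaml_data = {
    'network_scheme': {
        'endpoints': {
            'br-mgmt': {'IP': ['192.168.0.2/24']},
        }
    }
}

endpoint = yaml_data['network_scheme']['endpoints'].get('br-mgmt')
ip = endpoint['IP'][0].split('/')[0]  # drop the /24 prefix length
print(ip)
```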
def get_glance_password(yaml_data):
return yaml_data['glance']['user_password']
def parse_swift_out(output, field):
for line in output.splitlines()[1:-1]:
parts = line.split(': ')
if parts[0].strip() == field:
return parts[1]
raise Exception(
"Field {0} not found in output:\n{1}".format(field, output))
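`parse_swift_out` walks `swift stat`-style `key: value` output, skipping the first and last lines. Exercised standalone against made-up output:

```python
def parse_swift_out(output, field):
    # Same parsing as above: skip the first and last lines, split on ': '.
    for line in output.splitlines()[1:-1]:
        parts = line.split(': ')
        if parts[0].strip() == field:
            return parts[1]
    raise Exception(
        "Field {0} not found in output:\n{1}".format(field, output))

# Made-up `swift stat` output for illustration:
sample = (
    "   Account: AUTH_123\n"
    " Container: glance\n"
    "      ETag: d41d8cd98f00b204e9800998ecf8427e\n"
    "(trailer)\n"
)
print(parse_swift_out(sample, 'ETag'))
```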
def get_swift_objects(node, tenant, user, password, token, container):
cmd = ". /root/openrc; swift --os-project-name {0} --os-username {1}"\
" --os-password {2} --os-auth-token {3} list {4}".format(tenant,
user,
password,
token,
container)
objects_list, _ = ssh.call(["sh", "-c", cmd],
stdout=ssh.PIPE,
node=node)
return objects_list.split('\n')[:-1]
def get_object_property(node, tenant, user, password, token, container,
object_id, prop):
cmd = ". /root/openrc; swift --os-project-name {0} --os-username {1}"\
" --os-password {2} --os-auth-token {3} stat {4} {5}"\
.format(tenant,
user,
password,
token,
container,
object_id)
object_data, _ = ssh.call(["sh", "-c", cmd],
stdout=ssh.PIPE,
node=node)
return parse_swift_out(object_data, prop)
def get_auth_token(node, tenant, user, password):
cmd = ". /root/openrc; keystone --os-tenant-name {0}"\
" --os-username {1} --os-password {2} token-get".format(tenant,
user,
password)
token_info, _ = ssh.call(["sh", "-c", cmd],
stdout=ssh.PIPE,
node=node)
return env_util.parse_tenant_get(token_info, 'id')
def download_image(node, tenant, user, password, token, container, object_id):
cmd = ". /root/openrc; swift --os-project-name {0} --os-username {1}"\
" --os-password {2} --os-auth-token {3} download {4} {5}"\
.format(tenant,
user,
password,
token,
container,
object_id)
ssh.call(["sh", "-c", cmd], node=node)
LOG.info("Swift %s image has been downloaded" % object_id)
def delete_image(node, tenant, user, password, token, container, object_id):
cmd = ". /root/openrc; swift --os-project-name {0}"\
" --os-username {1} --os-password {2} --os-auth-token {3}"\
" delete {4} {5}".format(tenant, user, password, token,
container, object_id)
ssh.call(["sh", "-c", cmd], node=node)
LOG.info("Swift %s image has been deleted" % object_id)
def transfer_image(node, tenant, user, password, token, container, object_id,
storage_ip, tenant_id):
storage_url = "http://{0}:8080/v1/AUTH_{1}".format(storage_ip, tenant_id)
cmd = ['swift', '--os-project-name', tenant, '--os-username', user,
'--os-password', password, '--os-auth-token', token,
'--os-storage-url', storage_url, 'upload', container,
object_id]
ssh.call(cmd, node=node)
LOG.info("Swift %s image has been transferred" % object_id)
def sync_glance_images(source_env_id, seed_env_id, seed_swift_ep):
"""Sync glance images from original ENV to seed ENV
Args:
source_env_id (int): ID of original ENV.
seed_env_id (int): ID of seed ENV.
seed_swift_ep (str): name of the endpoint on which the swift-proxy
service listens.
Examples:
sync_glance_images(2, 3, 'br-mgmt')
"""
# set glance username
glance_user = "glance"
# set swift container value
container = "glance"
# choose tenant
tenant = "services"
# get clusters by id
source_env = environment_obj.Environment(source_env_id)
seed_env = environment_obj.Environment(seed_env_id)
# gather cics admin IPs
source_node = next(env_util.get_controllers(source_env))
seed_node = next(env_util.get_controllers(seed_env))
# get cics yaml files
source_yaml = get_astute_yaml(source_node)
seed_yaml = get_astute_yaml(seed_node)
# get glance passwords
source_glance_pass = get_glance_password(source_yaml)
seed_glance_pass = get_glance_password(seed_yaml)
# get seed node swift ip
seed_swift_ip = get_endpoint_ip(seed_swift_ep, seed_yaml)
# get service tenant id & lists of objects for source env
source_token = get_auth_token(source_node, tenant, glance_user,
source_glance_pass)
source_swift_list = set(get_swift_objects(source_node,
tenant,
glance_user,
source_glance_pass,
source_token,
container))
# get service tenant id & lists of objects for seed env
seed_token = get_auth_token(seed_node, tenant, glance_user,
seed_glance_pass)
seed_swift_list = set(get_swift_objects(seed_node,
tenant,
glance_user,
seed_glance_pass,
seed_token,
container))
# get service tenant for seed env
seed_tenant = env_util.get_service_tenant_id(seed_env)
# check consistency of matched images
source_token = get_auth_token(source_node, tenant, glance_user,
source_glance_pass)
seed_token = get_auth_token(seed_node, tenant, glance_user,
seed_glance_pass)
for image in source_swift_list & seed_swift_list:
source_obj_etag = get_object_property(source_node,
tenant,
glance_user,
source_glance_pass,
source_token,
container,
image,
'ETag')
seed_obj_etag = get_object_property(seed_node, tenant,
glance_user, seed_glance_pass,
seed_token, container, image,
'ETag')
if source_obj_etag != seed_obj_etag:
# image should be resynced
delete_image(seed_node, tenant, glance_user, seed_glance_pass,
seed_token, container, image)
LOG.info("Swift %s image should be resynced" % image)
seed_swift_list.remove(image)
# migrate new images
for image in source_swift_list - seed_swift_list:
# download image on source's node local drive
source_token = get_auth_token(source_node, tenant, glance_user,
source_glance_pass)
download_image(source_node, tenant, glance_user, source_glance_pass,
source_token, container, image)
# transfer image
source_token = get_auth_token(source_node, tenant,
glance_user, source_glance_pass)
seed_token = get_auth_token(seed_node, tenant, glance_user,
seed_glance_pass)
transfer_image(source_node, tenant, glance_user, seed_glance_pass,
seed_token, container, image, seed_swift_ip,
seed_tenant)
# remove transferred image
ssh.sftp(source_node).remove(image)
# delete outdated images
for image in seed_swift_list - source_swift_list:
token = get_auth_token(seed_node, tenant, glance_user,
seed_glance_pass)
delete_image(seed_node, tenant, glance_user, seed_glance_pass,
token, container, image)
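The loops above split the two Swift listings into three groups using set algebra; a compact sketch of the categorization, with made-up object names:

```python
# Object names are made up for illustration.
source = {'img-a', 'img-b', 'img-c'}  # objects in the source env container
seed = {'img-b', 'img-c', 'img-d'}    # objects in the seed env container

to_check = source & seed    # in both: compare ETags, resync on mismatch
to_migrate = source - seed  # only in source: download, then upload to seed
to_delete = seed - source   # only in seed: outdated, remove

print(sorted(to_migrate), sorted(to_delete))
```

An image whose ETags differ is deleted from the seed side and dropped from the intersection, which moves it into the migrate group on the next difference.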


@ -15,8 +15,8 @@ import os
import re
import yaml
BRIDGES = ('br-ex', 'br-mgmt')
from distutils.version import LooseVersion
from octane import magic_consts
def get_parser():
@ -59,6 +59,10 @@ def dump_yaml_file(dict_obj, filename):
yaml.dump(dict_obj, f, default_flow_style=False)
def get_actions(host_config):
return host_config['network_scheme']['transformations']
def remove_patch_port(host_config, bridge_name):
transformations = host_config['network_scheme']['transformations']
for action in transformations:
@ -72,19 +76,20 @@ def remove_physical_port(host_config, bridge_name):
transformations = host_config['network_scheme']['transformations']
for action in transformations:
if (action['action'] == 'add-port') and (
bridge_name in action['bridge']):
transformations.remove(action)
action.get('bridge')) and (
bridge_name == action['bridge']):
action.pop('bridge')
return host_config
def remove_patch_ports(host_config):
for bridge_name in BRIDGES:
for bridge_name in magic_consts.BRIDGES:
host_config = remove_patch_port(host_config, bridge_name)
return host_config
def remove_physical_ports(host_config):
for bridge_name in BRIDGES:
for bridge_name in magic_consts.BRIDGES:
host_config = remove_physical_port(host_config, bridge_name)
return host_config
@ -94,8 +99,11 @@ def remove_predefined_nets(host_config):
return host_config
def reset_gw_admin(host_config):
gw = host_config["master_ip"]
def reset_gw_admin(host_config, gateway=None):
if gateway:
gw = gateway
else:
gw = host_config["master_ip"]
endpoints = host_config["network_scheme"]["endpoints"]
if endpoints["br-ex"].get("gateway"):
endpoints["br-ex"]["gateway"] = 'none'
@ -117,12 +125,33 @@ def update_env_deployment_info(dirname, action):
def get_bridge_provider(actions, bridge):
add_br_actions = [action for action in actions
if action.get("action") == "add-br"]
providers = [action.get("provider") for action in add_br_actions
providers = [action.get("provider", "lnx") for action in add_br_actions
if action.get("name") == bridge]
if len(providers):
return providers[-1]
else:
return None
return 'lnx'
def get_admin_iface(actions):
return 'br-fw-admin'
def get_patch_port_action(host_config, bridge):
actions = get_actions(host_config)
version = LooseVersion(host_config.get('openstack_version'))
if version < LooseVersion('2014.2-6.1'):
provider = 'ovs'
else:
provider = get_bridge_provider(actions, bridge)
for action in actions:
if provider == 'ovs' and action.get('action') == 'add-patch':
bridges = action.get('bridges', [])
if bridge in bridges:
return action, provider
elif provider == 'lnx' and action.get('action') == 'add-port':
if action.get('bridge') == bridge:
return action, provider
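The patched `get_bridge_provider` now falls back to the `lnx` provider whenever an `add-br` action omits the `provider` key, or when no matching action exists at all. Its behavior can be checked in isolation:

```python
def get_bridge_provider(actions, bridge):
    # Mirrors the patched logic above: a missing 'provider' key and a
    # missing add-br action both default to 'lnx'.
    providers = [a.get('provider', 'lnx') for a in actions
                 if a.get('action') == 'add-br' and a.get('name') == bridge]
    return providers[-1] if providers else 'lnx'

actions = [
    {'action': 'add-br', 'name': 'br-ex', 'provider': 'ovs'},
    {'action': 'add-br', 'name': 'br-mgmt'},  # no provider key
]
print(get_bridge_provider(actions, 'br-mgmt'))
```

Defaulting to `lnx` matches Fuel's behavior of treating bridges without an explicit provider as Linux bridges.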
def lnx_add_port(actions, bridge):
@ -159,6 +188,16 @@ def ovs_add_patch_ports(actions, bridge):
.format(bridges[0], bridges[1], tags[1], trunk_param)]
def remove_ports(host_config):
actions = host_config['network_scheme']['transformations']
for bridge_name in magic_consts.BRIDGES:
provider = get_bridge_provider(actions, bridge_name)
if provider == 'ovs':
remove_patch_port(host_config, bridge_name)
else:
remove_physical_port(host_config, bridge_name)
def main():
args = get_parser().parse_args()


@ -3,7 +3,8 @@
pycmd() {
if ! python -c 'import octane'; then
yum install -y python-paramiko
pip install --no-index -e "$CWD/.."
pip install --no-index -e "$CWD/.." ||
die "Cannot install octane, exiting"
fi
local opts=""
if shopt -qo xtrace; then


@ -13,13 +13,7 @@
import os.path
PACKAGES = ["postgresql.x86_64", "pssh", "patch", "python-pip"]
PATCHES = [
("cobbler", "/usr/lib/python2.6/site-packages/cobbler",
"docker/cobbler/resources/pmanager.py.patch"),
("nailgun", "/usr/lib/python2.6/site-packages/nailgun/volumes",
"docker/nailgun/resources/manager.py.patch"),
("nailgun", "/", "../octane_nailgun/tools/urls.py.patch"),
]
PATCHES = []
# TODO: use pkg_resources for patches
CWD = os.path.dirname(__file__) # FIXME
FUEL_CACHE = "/tmp/octane/deployment" # TODO: we shouldn't need this
@ -27,7 +21,6 @@ PUPPET_DIR = "/etc/puppet/2014.2.2-6.1/modules"
SSH_KEYS = ['/root/.ssh/id_rsa', '/root/.ssh/bootstrap.rsa']
OS_SERVICES = ["nova", "keystone", "heat", "neutron", "cinder", "glance"]
MCOLLECTIVE_PATCH = os.path.join(CWD, "patches/pman/erase_node.rb.patch")
MCOLLECTIVE_PATCH_TARGET = \
"/usr/share/mcollective/plugins/mcollective/agent/erase_node.rb"
BRIDGES = ['br-ex', 'br-mgmt']
DEFAULT_DISKS = True
DEFAULT_NETS = True


@ -1,122 +0,0 @@
diff --git a/deployment/puppet/ceph/lib/facter/ceph_osd.rb b/deployment/puppet/ceph/lib/facter/ceph_osd.rb
index c9b7aae..f3d7cdc 100644
--- a/deployment/puppet/ceph/lib/facter/ceph_osd.rb
+++ b/deployment/puppet/ceph/lib/facter/ceph_osd.rb
@@ -22,7 +22,15 @@ Facter.add("osd_devices_list") do
when "4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D"
# Only use unmounted devices
if %x{grep -c #{device}#{p} /proc/mounts}.to_i == 0
- osds << "#{device}#{p}"
+ mp = %x{mktemp -d}.strip()
+ begin
+ mp_code = %x{mount #{device}#{p} #{mp} && test -f #{mp}/fsid && echo 0 || echo 1}.to_i
+ rescue
+ else
+ osds << ["#{device}#{p}", !mp_code.zero?]
+ ensure
+ %x{umount -f #{mp}}
+ end
end
when "45B0969E-9B03-4F30-B4C6-B4B80CEFF106"
if %x{grep -c #{device}#{p} /proc/mounts}.to_i == 0
@@ -32,21 +40,21 @@ Facter.add("osd_devices_list") do
}
}
- if journals.length > 0
- osds.each { |osd|
- journal = journals.shift
- if (not journal.nil?) && (not journal.empty?)
- devlink = %x{udevadm info -q property -n #{journal} | awk 'BEGIN {FS="="} {if ($1 == "DEVLINKS") print $2}'}
- devlink = devlink.split(' ')
- journal = (devlink.find { |s| s.include? 'by-id' } or journal)
- output << "#{osd}:#{journal}"
- else
- output << osd
- end
- }
- else
- output = osds
- end
+ osds.each { |osd, prep|
+ journal = journals.shift
+ if (not journal.nil?) && (not journal.empty?)
+ devlink = %x{udevadm info -q property -n #{journal} | awk 'BEGIN {FS="="} {if ($1 == "DEVLINKS") print $2}'}
+ devlink = devlink.split(' ')
+ journal = (devlink.find { |s| s.include? 'by-id' } or journal)
+ osd_disk = "#{osd}:#{journal}"
+ else
+ osd_disk = osd
+ end
+ if prep == true
+ osd_disk += "!prep"
+ end
+ output << osd_disk
+ }
output.join(" ")
end
end
diff --git a/deployment/puppet/ceph/manifests/osds/osd.pp b/deployment/puppet/ceph/manifests/osds/osd.pp
index 814ccab..f8cd740 100644
--- a/deployment/puppet/ceph/manifests/osds/osd.pp
+++ b/deployment/puppet/ceph/manifests/osds/osd.pp
@@ -1,22 +1,34 @@
define ceph::osds::osd () {
- $deploy_device_name = "${::hostname}:${name}"
+ $prepare_device = split($name, '!')
+ $deploy_device_name = "${::hostname}:${prepare_device[0]}"
- exec { "ceph-deploy osd prepare ${deploy_device_name}":
- # ceph-deploy osd prepare is ensuring there is a filesystem on the
- # disk according to the args passed to ceph.conf (above).
- #
- # It has a long timeout because of the format taking forever. A
- # resonable amount of time would be around 300 times the length of
- # $osd_nodes. Right now its 0 to prevent puppet from aborting it.
- command => "ceph-deploy osd prepare ${deploy_device_name}",
- returns => 0,
- timeout => 0, # TODO: make this something reasonable
- tries => 2, # This is necessary because of race for mon creating keys
- try_sleep => 1,
- logoutput => true,
- unless => "grep -q ${name} /proc/mounts",
- } ->
+# $prepare_device = delete_at($prepare_device, 0)
+# $prepare_device = grep($prepare_device, 'prep')
+# $prepare_device = intersection($prepare_device, ['prep'])
+# $prepare_device = prefix($prepare_device, 'prep')
+# $prepare_device = suffix($prepare_device, 'prep')
+# $prepare_device = reject($prepare_device, $prepare_device[0])
+
+# if ! empty($prepare_device) {
+# if member($prepare_device, ['prep']) {
+ if size($prepare_device) == 2 {
+ exec { "ceph-deploy osd prepare ${deploy_device_name}":
+ # ceph-deploy osd prepare is ensuring there is a filesystem on the
+ # disk according to the args passed to ceph.conf (above).
+ #
+ # It has a long timeout because of the format taking forever. A
+ # resonable amount of time would be around 300 times the length of
+ # $osd_nodes. Right now its 0 to prevent puppet from aborting it.
+ command => "ceph-deploy osd prepare ${deploy_device_name}",
+ returns => 0,
+ timeout => 0, # TODO: make this something reasonable
+ tries => 2, # This is necessary because of race for mon creating keys
+ try_sleep => 1,
+ logoutput => true,
+ unless => "grep -q ${name} /proc/mounts",
+ }
+ }
exec { "ceph-deploy osd activate ${deploy_device_name}":
command => "ceph-deploy osd activate ${deploy_device_name}",
@@ -27,4 +39,8 @@ define ceph::osds::osd () {
unless => "ceph osd dump | grep -q \"osd.$(sed -nEe 's|${name}\\ .*ceph-([0-9]+).*$|\\1|p' /proc/mounts)\\ up\\ .*\\ in\\ \"",
}
+ if size($prepare_device) == 2 {
+ Exec["ceph-deploy osd prepare ${deploy_device_name}"] ->
+ Exec["ceph-deploy osd activate ${deploy_device_name}"]
+ }
}


@ -1,36 +0,0 @@
diff --git a/deployment/puppet/osnailyfacter/modular/openstack-network/openstack-network-controller.pp b/deployment/puppet/osnailyfacter/modular/openstack-network/openstack-network-controller.pp
index 2a261d5..c42a9eb 100644
--- a/deployment/puppet/osnailyfacter/modular/openstack-network/openstack-network-controller.pp
+++ b/deployment/puppet/osnailyfacter/modular/openstack-network/openstack-network-controller.pp
@@ -86,19 +86,19 @@ if $network_provider == 'neutron' {
Service<| title == 'neutron-server' |> ->
Openstack::Network::Create_router <||>
-
- openstack::network::create_network{'net04':
- netdata => $nets['net04']
- } ->
- openstack::network::create_network{'net04_ext':
- netdata => $nets['net04_ext']
- } ->
- openstack::network::create_router{'router04':
- internal_network => 'net04',
- external_network => 'net04_ext',
- tenant_name => $keystone_admin_tenant
+ if ! empty($nets) {
+ openstack::network::create_network{'net04':
+ netdata => $nets['net04']
+ } ->
+ openstack::network::create_network{'net04_ext':
+ netdata => $nets['net04_ext']
+ } ->
+ openstack::network::create_router{'router04':
+ internal_network => 'net04',
+ external_network => 'net04_ext',
+ tenant_name => $keystone_admin_tenant
+ }
}
-
}
nova_config { 'DEFAULT/default_floating_pool': value => 'net04_ext' }
$pnets = $neutron_settings['L2']['phys_nets']


@ -0,0 +1,19 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def test_parser(mocker, octane_app):
m = mocker.patch('octane.commands.install_node.install_node')
octane_app.run(["install-node", "--isolated", "1", "2", "3", "4"])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
m.assert_called_once_with(1, 2, [3, 4], isolated=True)


@ -0,0 +1,97 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import subprocess
from mock import call
from octane.helpers import network
def test_parser(mocker, octane_app):
m = mocker.patch('octane.commands.upgrade_node.upgrade_node')
octane_app.run(["upgrade-node", "--isolated", "1", "2", "3"])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
m.assert_called_once_with(1, [2, 3], isolated=True)
def test_create_overlay_network(mocker):
node1 = mocker.MagicMock()
node1.id = 2
node1.data = {
'id': node1.id,
'cluster': 101,
'roles': ['controller'],
'ip': '10.10.10.1',
}
node2 = mocker.MagicMock()
node2.id = 3
node2.data = {
'id': node2.id,
'cluster': 101,
'roles': [],
'pending_roles': ['controller'],
'ip': '10.10.10.2',
}
env = mocker.MagicMock()
env.data = {
'id': 101,
}
deployment_info = [{
'network_scheme': {
'transformations': [{
'action': 'add-br',
'name': 'br-ex',
'provider': 'ovs',
}, {
'action': 'add-br',
'name': 'br-mgmt',
}]
}
}]
mock_ssh = mocker.patch('octane.util.ssh.call')
mock_ssh.side_effect = [subprocess.CalledProcessError('', ''), None,
subprocess.CalledProcessError('', ''), None,
None, None, None, None]
expected_args = [
call(['sh', '-c',
'ovs-vsctl list-ports br-ex | grep -q br-ex--gre-10.10.10.2'],
node=node1),
call(['ovs-vsctl', 'add-port', 'br-ex', 'br-ex--gre-10.10.10.2',
'--', 'set', 'Interface', 'br-ex--gre-10.10.10.2',
'type=gre',
'options:remote_ip=10.10.10.2',
'options:key=2'],
node=node1),
call(['sh', '-c',
'ip link show dev gre3-3'],
node=node1),
call(['ip', 'link', 'add', 'gre3-3',
'type', 'gretap',
'remote', '10.10.10.2',
'local', '10.10.10.1',
'key', '3'],
node=node1),
call(['ip', 'link', 'set', 'up', 'dev', 'gre3-3'],
node=node1),
call(['ip', 'link', 'set', 'mtu', '1450', 'dev', 'gre3-3'],
node=node1),
call(['ip', 'link', 'set', 'up', 'dev', 'br-mgmt'], node=node1),
call(['brctl', 'addif', 'br-mgmt', 'gre3-3'],
node=node1),
]
network.create_overlay_networks(node1, node2, env, deployment_info,
node1.id)
assert mock_ssh.call_args_list == expected_args


@ -20,10 +20,8 @@ def test_prepare_parser(mocker, octane_app):
def test_revert_parser(mocker, octane_app):
mock_puppet = mocker.patch('octane.commands.prepare.patch_puppet')
mock_apply = mocker.patch('octane.commands.prepare.apply_patches')
octane_app.run(["revert-prepare"])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
mock_apply.assert_called_once_with(revert=True)
mock_puppet.assert_called_once_with(revert=True)

View File

@@ -0,0 +1,19 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def test_parser(mocker, octane_app):
m = mocker.patch('octane.commands.sync_images.sync_glance_images')
octane_app.run(['sync-images', '1', '2', 'br-mgmt'])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
m.assert_called_once_with(1, 2, 'br-mgmt')

View File

@@ -0,0 +1,25 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def test_parser(mocker, octane_app):
networks = [{'key': 'value'}]
m1 = mocker.patch('octane.commands.sync_networks.get_env_networks')
m1.return_value = networks
m2 = mocker.patch('octane.commands.sync_networks.update_env_networks')
octane_app.run(["sync-networks", "1", "2"])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
m1.assert_called_once_with(1)
m2.assert_called_once_with(2, networks)

View File

@@ -0,0 +1,24 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from octane.commands.update_plugin_settings import PLUGINS
def test_parser(mocker, octane_app):
m = mocker.patch('octane.commands.update_plugin_settings'
'.transfer_plugins_settings')
plugins_str = ','.join(PLUGINS)
octane_app.run(["update-plugin-settings", "--plugins", plugins_str,
"1", "2"])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
m.assert_called_once_with(1, 2, PLUGINS.keys())

View File

@@ -14,9 +14,9 @@
def test_parser(mocker, octane_app):
m1 = mocker.patch('octane.commands.upgrade_env.upgrade_env')
m1.return_value = 2
-m2 = mocker.patch('octane.commands.upgrade_env.set_cobbler_provision')
+m2 = mocker.patch('octane.commands.upgrade_env.write_service_tenant_id')
octane_app.run(["upgrade-env", "1"])
assert not octane_app.stdout.getvalue()
assert not octane_app.stderr.getvalue()
m1.assert_called_once_with(1)
-m2.assert_called_once_with(2)
+m2.assert_called_once_with(1)

View File

@@ -10,6 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import fuelclient
import json
import logging
import os.path
@@ -103,6 +104,9 @@ def get_service_tenant_id(env, node=None):
node=node,
)
tenant_id = parse_tenant_get(tenant_out, 'id')
dname = os.path.dirname(fname)
if not os.path.exists(dname):
os.makedirs(dname)
with open(fname, 'w') as f:
f.write(tenant_id)
return tenant_id
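The added lines cache the tenant id on disk, creating the parent directory first if it does not exist. The same ensure-parent-then-write pattern in isolation (the file name here is made up):

```python
import os
import tempfile

def write_cached(fname, data):
    # Create the parent directory on first use, then persist the value
    # so later runs can read it back without querying keystone again.
    dname = os.path.dirname(fname)
    if dname and not os.path.exists(dname):
        os.makedirs(dname)
    with open(fname, 'w') as f:
        f.write(data)

path = os.path.join(tempfile.mkdtemp(), 'env-1', 'tenant_id')
write_cached(path, 'abc123')
```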
@@ -137,7 +141,7 @@ def move_nodes(env, nodes):
node_id = node.data['id']
subprocess.call(
["fuel2", "env", "move", "node", str(node_id), str(env_id)])
-wait_for_nodes(nodes, "discover")
+wait_for_nodes(nodes, "provisioned")
def provision_nodes(env, nodes):
@@ -148,3 +152,22 @@ def provision_nodes(env, nodes):
def deploy_nodes(env, nodes):
env.install_selected_nodes('deploy', nodes)
wait_for_nodes(nodes, "ready")
def deploy_changes(env, nodes):
env.deploy_changes()
wait_for_nodes(nodes, "ready")
def merge_deployment_info(env):
default_info = env.get_default_facts('deployment')
try:
deployment_info = env.get_facts('deployment')
except fuelclient.cli.error.ServerDataException:
LOG.warn('Deployment info is unchanged for env: %s',
env.id)
deployment_info = []
for info in default_info:
if not info['uid'] in [i['uid'] for i in deployment_info]:
deployment_info.append(info)
return deployment_info
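`merge_deployment_info` above falls back to the default facts for every node uid that is missing from the uploaded facts. The merge rule in isolation (the fact dicts are made up for illustration):

```python
def merge_by_uid(default_info, deployment_info):
    # Uploaded facts win; defaults fill in nodes that have none.
    merged = list(deployment_info)
    seen = {i['uid'] for i in deployment_info}
    for info in default_info:
        if info['uid'] not in seen:
            merged.append(info)
    return merged

default = [{'uid': '1', 'role': 'controller'},
           {'uid': '2', 'role': 'compute'}]
uploaded = [{'uid': '1', 'role': 'controller', 'edited': True}]
merged = merge_by_uid(default, uploaded)
```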

View File

@@ -1 +0,0 @@
.git

View File

@@ -1,7 +0,0 @@
*.egg-info
*.pyc
*.pyo
.coverage
.tox
build
htmlcov

View File

@@ -1,7 +0,0 @@
FROM fuel/nailgun_6.1:latest
ENV OCTANE_WHEEL_VERSION 0.0.0
COPY dist/octane_nailgun-${OCTANE_WHEEL_VERSION}-py2-none-any.whl /root/
COPY tools/*.patch /root/
RUN pip install octane_nailgun-${OCTANE_WHEEL_VERSION}-py2-none-any.whl \
&& yum install -y patch \
&& cat /root/*.patch | patch -bNp1 -d /

View File

@@ -1,315 +0,0 @@
import copy
from nailgun.api.v1.handlers import base
from nailgun import consts
from nailgun.db import db
from nailgun.db.sqlalchemy import models
from nailgun.errors import errors
from nailgun.logger import logger
from nailgun import objects
from nailgun.objects.serializers import network_configuration
from nailgun import rpc
from nailgun.settings import settings
from nailgun.task import task as tasks
from nailgun import utils
from octane import validators
class ClusterCloneHandler(base.BaseHandler):
single = objects.Cluster
validator = validators.ClusterCloneValidator
network_serializers = {
consts.CLUSTER_NET_PROVIDERS.neutron:
network_configuration.NeutronNetworkConfigurationSerializer,
consts.CLUSTER_NET_PROVIDERS.nova_network:
network_configuration.NovaNetworkConfigurationSerializer,
}
@base.content
def POST(self, cluster_id):
"""Create a clone of the cluster.
Creates a new cluster with specified name and release_id. The
new cluster are created with parameters that are copied from the
cluster with cluster_id. The values of the generated and
editable attributes are just copied from one to the other.
:param cluster_id: ID of the cluster to copy parameters from it
:returns: JSON representation of the created cluster
:http: * 200 (OK)
* 404 (cluster or release did not found in db)
"""
data = self.checked_data()
orig_cluster = self.get_object_or_404(self.single, cluster_id)
release = self.get_object_or_404(objects.Release, data["release_id"])
# TODO(ikharin): Here should be more elegant code that verifies
# release versions of the original cluster and
# the future cluster. The upgrade process itself
# is meaningful only to upgrade the cluster
# between the major releases.
# TODO(ikharin): This method should properly handle the upgrade
# from one major release to another but now it's
# hardcoded to perform the upgrade from 5.1.1 to
# 6.1 release.
data = {
"name": data["name"],
"mode": orig_cluster.mode,
"status": consts.CLUSTER_STATUSES.new,
"net_provider": orig_cluster.net_provider,
"grouping": consts.CLUSTER_GROUPING.roles,
"release_id": release.id,
}
if orig_cluster.net_provider == consts.CLUSTER_NET_PROVIDERS.neutron:
data["net_segment_type"] = \
orig_cluster.network_config.segmentation_type
data["net_l23_provider"] = \
orig_cluster.network_config.net_l23_provider
new_cluster = self.single.create(data)
new_cluster.attributes.generated = utils.dict_merge(
new_cluster.attributes.generated,
orig_cluster.attributes.generated)
new_cluster.attributes.editable = self.merge_attributes(
orig_cluster.attributes.editable,
new_cluster.attributes.editable)
nets_serializer = self.network_serializers[orig_cluster.net_provider]
nets = self.merge_nets(
nets_serializer.serialize_for_cluster(orig_cluster),
nets_serializer.serialize_for_cluster(new_cluster))
net_manager = self.single.get_network_manager(instance=new_cluster)
net_manager.update(new_cluster, nets)
self.copy_vips(orig_cluster, new_cluster)
net_manager.assign_vips_for_net_groups(new_cluster)
logger.debug("The cluster %s was created as a clone of the cluster %s",
new_cluster.id, orig_cluster.id)
return self.single.to_json(new_cluster)
@staticmethod
def copy_vips(orig_cluster, new_cluster):
orig_vips = {}
for ng in orig_cluster.network_groups:
vips = db.query(models.IPAddr).filter(
models.IPAddr.network == ng.id,
models.IPAddr.node.is_(None),
models.IPAddr.vip_type.isnot(None),
).all()
orig_vips[ng.name] = list(vips)
new_vips = []
for ng in new_cluster.network_groups:
orig_ng_vips = orig_vips.get(ng.name)
for vip in orig_ng_vips:
ip_addr = models.IPAddr(
network=ng.id,
ip_addr=vip.ip_addr,
vip_type=vip.vip_type,
)
new_vips.append(ip_addr)
db.add_all(new_vips)
db.commit()
@staticmethod
def merge_attributes(a, b):
"""Merge values of editable attributes.
The values of the b attributes have precedence over the values
of the a attributes.
Added:
common.
puppet_debug = true
additional_components.
mongo = false
external_dns.
dns_list = "8.8.8.8"
external_mongo.
host_ip = ""
mongo_db_name = "ceilometer"
mongo_password = "ceilometer"
mongo_replset = ""
mongo_user = "ceilometer"
external_ntp.
ntp_list = "0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org"
murano_settings.
murano_repo_url = "http://storage.apps.openstack.org/"
provision.
method = "image"
storage.images_vcenter = false
workloads_collector.
password = "..."
tenant = "services"
user = "fuel_stats_user"
Renamed:
common.
start_guests_on_host_boot ->
resume_guests_state_on_host_boot
Changed:
repo_setup.repos (extended by additional items)
common.libvirt_type = kvm | data (removed vcenter)
Removed:
common.
compute_scheduler_driver
nsx_plugin.
connector_type
l3_gw_service_uuid
nsx_controllers
nsx_password
nsx_username
packages_url
transport_zone_uuid
storage.volumes_vmdk = false
vcenter.
cluster
host_ip
use_vcenter
vc_password
vc_user
zabbix.
password
username
:param a: a dict with editable attributes
:param b: a dict with editable attributes
:returns: a dict with merged editable attributes
"""
attrs = copy.deepcopy(b)
for section, pairs in attrs.iteritems():
if section == "repo_setup" or section not in a:
continue
a_values = a[section]
for key, values in pairs.iteritems():
if key != "metadata" and key in a_values:
values["value"] = a_values[key]["value"]
return attrs
@classmethod
def merge_nets(cls, a, b):
"""Merge network settings.
Some parameters are copied from a to b.
:param a: a dict with network settings
:param b: a dict with network settings
:returns: a dict with merged network settings
"""
new_settings = copy.deepcopy(b)
source_networks = dict((n["name"], n) for n in a["networks"])
for net in new_settings["networks"]:
if net["name"] not in source_networks:
continue
source_net = source_networks[net["name"]]
for key, value in net.iteritems():
if (key not in ("cluster_id", "id", "meta", "group_id") and
key in source_net):
net[key] = source_net[key]
networking_params = new_settings["networking_parameters"]
source_params = a["networking_parameters"]
for key, value in networking_params.iteritems():
if key not in source_params:
continue
networking_params[key] = source_params[key]
return new_settings
class UpgradeNodeAssignmentHandler(base.BaseHandler):
validator = validators.UpgradeNodeAssignmentValidator
@classmethod
def get_netgroups_map(cls, orig_cluster, new_cluster):
netgroups = dict((ng.name, ng.id)
for ng in orig_cluster.network_groups)
mapping = dict((netgroups[ng.name], ng.id)
for ng in new_cluster.network_groups)
orig_admin_ng = cls.get_admin_network_group(orig_cluster)
admin_ng = cls.get_admin_network_group(new_cluster)
mapping[orig_admin_ng.id] = admin_ng.id
return mapping
@staticmethod
def get_admin_network_group(cluster):
query = db().query(models.NetworkGroup).filter_by(
name="fuelweb_admin",
)
default_group = objects.Cluster.get_default_group(cluster)
admin_ng = query.filter_by(group_id=default_group.id).first()
if admin_ng is None:
admin_ng = query.filter_by(group_id=None).first()
if admin_ng is None:
raise errors.AdminNetworkNotFound()
return admin_ng
@base.content
def POST(self, cluster_id):
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
data = self.checked_data()
node_id = data["node_id"]
node = self.get_object_or_404(objects.Node, node_id)
netgroups_mapping = self.get_netgroups_map(node.cluster, cluster)
orig_roles = node.roles
objects.Node.update_roles(node, []) # flush
objects.Node.update_pending_roles(node, []) # flush
node.replaced_deployment_info = []
node.deployment_info = []
node.kernel_params = None
node.cluster_id = cluster.id
node.group_id = None
objects.Node.assign_group(node) # flush
objects.Node.update_pending_roles(node, orig_roles) # flush
for ip in node.ip_addrs:
ip.network = netgroups_mapping[ip.network]
nic_assignments = db.query(models.NetworkNICAssignment).\
join(models.NodeNICInterface).\
filter(models.NodeNICInterface.node_id == node.id).\
all()
for nic_assignment in nic_assignments:
nic_assignment.network_id = \
netgroups_mapping[nic_assignment.network_id]
bond_assignments = db.query(models.NetworkBondAssignment).\
join(models.NodeBondInterface).\
filter(models.NodeBondInterface.node_id == node.id).\
all()
for bond_assignment in bond_assignments:
bond_assignment.network_id = \
netgroups_mapping[bond_assignment.network_id]
objects.Node.add_pending_change(node,
consts.CLUSTER_CHANGES.interfaces)
node.pending_addition = True
node.pending_deletion = False
task = models.Task(name=consts.TASK_NAMES.node_deletion,
cluster=cluster)
db.commit()
self.delete_node_by_astute(task, node)
@staticmethod
def delete_node_by_astute(task, node):
node_to_delete = tasks.DeletionTask.format_node_to_delete(node)
msg_delete = tasks.make_astute_message(
task,
'remove_nodes',
'remove_nodes_resp',
{
'nodes': [node_to_delete],
'check_ceph': False,
'engine': {
'url': settings.COBBLER_URL,
'username': settings.COBBLER_USER,
'password': settings.COBBLER_PASSWORD,
'master_ip': settings.MASTER_IP,
}
}
)
rpc.cast('naily', msg_delete)
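The `merge_attributes` docstring in the removed handler above spells out a precedence rule: start from the new cluster's editable attributes and copy back old values for every key present on both sides, skipping the `repo_setup` section and `metadata` keys. A Python 3 rendition of that rule on toy data (the section and key names are illustrative):

```python
import copy

def merge_attributes(a, b):
    # b (the new cluster) wins structurally; a (the old cluster)
    # supplies the values for keys that exist on both sides.
    attrs = copy.deepcopy(b)
    for section, pairs in attrs.items():
        if section == 'repo_setup' or section not in a:
            continue
        a_values = a[section]
        for key, values in pairs.items():
            if key != 'metadata' and key in a_values:
                values['value'] = a_values[key]['value']
    return attrs

old = {'common': {'debug': {'value': True}}}
new = {'common': {'debug': {'value': False},
                  'puppet_debug': {'value': True}},
       'repo_setup': {'repos': {'value': ['7.0-repo']}}}
merged = merge_attributes(old, new)
```

The `deepcopy` keeps the merge side-effect free, so the new cluster's attribute dict can still be compared against the merged result.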

View File

@@ -1,9 +0,0 @@
from octane import handlers
urls = (
r'/clusters/(?P<cluster_id>\d+)/upgrade/clone/?$',
handlers.ClusterCloneHandler,
r'/clusters/(?P<cluster_id>\d+)/upgrade/assign/?$',
handlers.UpgradeNodeAssignmentHandler,
)
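The `merge_nets` method of the removed `ClusterCloneHandler` above applies the same old-values-win rule to network settings: per-network keys are copied from the old environment except the identity keys `cluster_id`, `id`, `meta`, and `group_id`, and shared `networking_parameters` are copied wholesale. A standalone Python 3 sketch (toy data, not real Fuel network facts):

```python
import copy

def merge_nets(a, b):
    new_settings = copy.deepcopy(b)
    source = {n['name']: n for n in a['networks']}
    for net in new_settings['networks']:
        if net['name'] not in source:
            continue
        src = source[net['name']]
        for key in net:
            # Identity keys stay with the new cluster.
            if key not in ('cluster_id', 'id', 'meta', 'group_id') \
                    and key in src:
                net[key] = src[key]
    # networking_parameters: old values win for every shared key.
    params = new_settings['networking_parameters']
    for key in params:
        if key in a['networking_parameters']:
            params[key] = a['networking_parameters'][key]
    return new_settings

old = {'networks': [{'name': 'public', 'id': 1, 'cidr': '172.16.0.0/24'}],
       'networking_parameters': {'vlan_range': [1000, 1030]}}
new = {'networks': [{'name': 'public', 'id': 7, 'cidr': '10.0.0.0/24'}],
       'networking_parameters': {'vlan_range': [2000, 2030],
                                 'base_mac': 'fa:16:3e'}}
merged = merge_nets(old, new)
```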

View File

@@ -1,26 +0,0 @@
from nailgun.api.v1.validators import base
class ClusterCloneValidator(base.BasicValidator):
single_schema = {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Cluster Clone Parameters",
"description": "Serialized parameters to clone clusters",
"type": "object",
"properties": {
"name": {"type": "string"},
"release_id": {"type": "number"},
},
}
class UpgradeNodeAssignmentValidator(base.BasicValidator):
single_schema = {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Assign Node Parameters",
"description": "Serialized parameters to assign node",
"type": "object",
"properties": {
"node_id": {"type": "number"},
},
}

View File

@@ -1,10 +0,0 @@
from setuptools import find_packages
from setuptools import setup
setup(name="octane_nailgun",
version="0.0.0",
packages=find_packages(),
zip_safe=False,
# install_requires=["nailgun==6.1.0"],
)

View File

@@ -1,20 +0,0 @@
#!/bin/bash -ex
host=${1:-"cz5545-fuel"}
location=${2:-"/root/octane"}
branch=${3:-$(git rev-parse --abbrev-ref HEAD)}
ssh $host \
"set -ex;" \
"yum install -y git python-pip patch;" \
"pip install wheel;" \
"mkdir -p ${location};" \
"git init ${location};" \
"git config --file ${location}/.git/config receive.denyCurrentBranch warn;"
[ -z "$(git remote | grep ${host})" ] &&
git remote add "$host" "ssh://${host}${location}"
git push --force "$host" "$branch"
ssh $host \
"set -ex;" \
"cd ${location};" \
"git reset --hard $branch;"

View File

@@ -1,17 +0,0 @@
#!/bin/bash -ex
host=${1:-"cz5545-fuel"}
branch=${2:-$(git rev-parse --abbrev-ref HEAD)}
remote="$(git remote -v | awk "/$host/ && /fetch/{print \$2}")"
location="${remote#ssh://$host}/octane_nailgun"
git push --force $host HEAD
ssh $host \
"set -ex;" \
"cd ${location};" \
"git reset --hard $branch;" \
"git clean -x -d -f;" \
"python setup.py bdist_wheel;" \
"docker build -t octane/nailgun_6.1 .;"

View File

@@ -1,26 +0,0 @@
#!/bin/bash -ex
host=${1:-"cz5545-fuel"}
branch=${2:-$(git rev-parse --abbrev-ref HEAD)}
version=$(python setup.py --version)
wheel="octane_nailgun-${version}-py2-none-any.whl"
remote="$(git remote -v | awk "/$host/ && /fetch/{print \$2}")"
location="${remote#ssh://$host}/octane_nailgun"
container="fuel-core-6.1-nailgun"
git push --force "$host" "$branch"
ssh $host \
"set -ex;" \
"cd ${location};" \
"git reset --hard $branch;" \
"git clean -x -d -f;" \
"python setup.py bdist_wheel;" \
"id=\"\$(docker inspect -f='{{if .ID}}{{.ID}}{{else}}{{.Id}}{{end}}' ${container})\";" \
"rootfs=\"/var/lib/docker/devicemapper/mnt/\${id}/rootfs\";" \
"cp \"${location}/dist/${wheel}\" \"\${rootfs}/root/${wheel}\";" \
"docker exec ${container} pip install -U ${wheel};" \
"patch -bV numbered -Np1 -d \"\${rootfs}\" < ${location}/tools/urls.py.patch ||:;" \
"dockerctl shell ${container} pkill -f wsgi;"

View File

@@ -1,14 +0,0 @@
diff -Nura a/usr/lib/python2.6/site-packages/nailgun/api/v1/urls.py b/usr/lib/python2.6/site-packages/nailgun/api/v1/urls.py
--- a/usr/lib/python2.6/site-packages/nailgun/api/v1/urls.py 2015-06-05 11:32:58.786062277 +0000
+++ b/usr/lib/python2.6/site-packages/nailgun/api/v1/urls.py 2015-06-05 11:30:49.053210286 +0000
@@ -264,6 +264,10 @@
MasterNodeSettingsHandler,
)
+import octane.urls
+urls += octane.urls.urls
+from octane.handlers import *
+
urls = [i if isinstance(i, str) else i.__name__ for i in urls]
_locals = locals()

View File

@@ -1,40 +0,0 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = py26,py27,pep8
[testenv]
basepython = python2
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = pytest
commands =
py.test {posargs:octane/test}
[tox:jenkins]
downloadcache = ~/cache/pip
[testenv:pep8]
deps = hacking==0.7
usedevelop = False
commands =
flake8 {posargs:octane}
[testenv:cover]
deps = pytest-cov
commands =
py.test --cov-report html --cov-report term-missing --cov octane {posargs:octane/test}
[testenv:venv]
commands = {posargs:}
[flake8]
ignore = H234,H302,H802
exclude = .venv,.git,.tox,dist,doc,*lib/python*,*egg,build,tools,__init__.py,docs
show-pep8 = True
show-source = True
count = True
[hacking]
import_exceptions = testtools.matchers

View File

@@ -16,3 +16,5 @@ python-keystoneclient<=0.11.1 # the last version without too fresh requirements
python-fuelclient>=6.1
cliff>=1.7.0,<=1.9.0 # should already be pulled by python-fuelclient
paramiko==1.13.0
pyzabbix==0.7.3

View File

@@ -33,6 +33,11 @@ octane =
upgrade-env = octane.commands.upgrade_env:UpgradeEnvCommand
upgrade-node = octane.commands.upgrade_node:UpgradeNodeCommand
upgrade-db = octane.commands.upgrade_db:UpgradeDBCommand
install-node = octane.commands.install_node:InstallNodeCommand
upgrade-control = octane.commands.upgrade_controlplane:UpgradeControlPlaneCommand
sync-networks = octane.commands.sync_networks:SyncNetworksCommand
sync-images = octane.commands.sync_images:SyncImagesCommand
update-plugin-settings = octane.commands.update_plugin_settings:UpdatePluginSettingsCommand
octane.handlers.upgrade =
controller = octane.handlers.upgrade.controller:ControllerUpgrade
compute = octane.handlers.upgrade.compute:ComputeUpgrade