Separate plugins from Spyglass

This change removes plugins from Spyglass and places them in separate
repositories. Formation, a proprietary plugin, is removed entirely, and
Tugboat becomes its own OpenDev-maintained repo, spyglass-plugin-xls.
With plugin management streamlined this way, end users should be able to
create their own plugins for different data sources more easily.
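With entry points doing the plugin discovery, an external plugin only has to
register itself under the data_extractor_plugins group. A rough sketch of the
lookup (mirroring what spyglass/cli.py does below; 'excel' is the name
registered by spyglass-plugin-xls):

    import pkg_resources

    def load_extractor(plugin_name):
        # Find the data extractor class registered under plugin_name.
        for entry_point in pkg_resources.iter_entry_points(
                'data_extractor_plugins'):
            if entry_point.name == plugin_name:
                return entry_point.load()
        raise LookupError(
            "No data_extractor_plugins entry point named %s" % plugin_name)

    plugin_class = load_extractor('excel')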

Related change https://review.opendev.org/#/c/659116/

Depends-On: Ib2f75878b1a29e835cb8e2323aebe9d431c479e7

Change-Id: Ie0eb2e5aefe6bb764e1aa608e53371adaabb9a17
Ian Pittwood 2019-04-17 15:55:21 -05:00
parent bad2472bc5
commit a002e4203d
16 changed files with 17 additions and 1702 deletions


@@ -65,12 +65,7 @@ Architecture
Supported Features
------------------
1. Tugboat Plugin: Supports extracting site data from Excel files and
then generating site manifests for sitetype:airship-seaworthy.
For more documentation on Tugboat, see :ref:`tugboatinfo`.
2. Remote Data Source Plugin: Supports extracting site data from a REST
endpoint.
1. Spyglass XLS Plugin: https://opendev.org/airship/spyglass-plugin-xls
Future Work
-----------
@@ -135,4 +130,4 @@ Before using Spyglass you must:
.. code-block:: console
pip3 install -r airship-spyglass/requirements.txt
pip3 install -r spyglass/requirements.txt


@@ -34,4 +34,3 @@ fed to Shipyard for site deployment / updates.
getting_started
developer_quickstart
cli
tugboat


@@ -1,108 +0,0 @@
..
Copyright 2019 AT&T Intellectual Property.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tugboatinfo:
=======
Tugboat
=======
What is Tugboat?
----------------
Tugboat is a Spyglass plugin that generates airship-seaworthy site manifest
files from an Excel-based engineering spec. The plugin takes an Excel sheet
and its corresponding Excel specification as inputs. Spyglass uses this
plugin to construct an intermediary YAML, which is then processed with J2
(Jinja2) templates to generate the site manifests.
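A minimal sketch of that templating step (the intermediary content and
template string below are illustrative placeholders, not files shipped with
the plugin):

.. code-block:: python

   import jinja2

   # Placeholder intermediary data, normally built by the plugin.
   intermediary = {
       'site_info': {
           'name': 'airship-seaworthy',
           'domain': 'example.com',
       },
   }

   # Render a single J2 template against the intermediary data.
   template = jinja2.Template(
       "site: {{ data.site_info.name }}\n"
       "domain: {{ data.site_info.domain }}\n")
   print(template.render(data=intermediary))
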
Excel specification
-------------------
The Excel spec acts as an index into the Excel sheet, telling the tool where
to look for the data it collects. The sample Excel spec lists all the details
that need to be filled in by the deployment engineer.
Below is the definition for each key in the Excel spec; a short sketch of how
the keys are used follows the list.
* ipmi_sheet_name - name of the sheet from where IPMI and host profile
information is to be read
* start_row - row number from where the IPMI and host profile information
starts
* end_row - row number from where the IPMI and host profile information ends
* hostname_col - column number where the hostnames are to be read from
* ipmi_address_col - column number from where the ipmi addresses are to be read
* host_profile_col - column number from where the host profiles are to be read
* ipmi_gateway_col - column number from where the ipmi gateways are to be read
* private_ip_sheet - name of the sheet which has the private IP information
* net_type_col - column number from where the network type is to be read
* vlan_col - column number from where the network vlan is to be read
* vlan_start_row - row number from where the vlan information starts
* vlan_end_row - row number from where the vlan information ends
* net_start_row - row number from where the network information starts
* net_end_row - row number from where the network information ends
* net_col - column number from where the IP ranges for the network are to be
read
* net_vlan_col - column number where the vlan information is present in the
pod-wise network section
* public_ip_sheet - name of the sheet which has the public IP information
* oam_vlan_col - column number from where the OAM vlan information is to be
read
* oam_ip_row - row number from where the OAM network information is to be read
* oam_ip_col - column number from where the OAM network information is to be
read
* oob_net_row - row number which has the OOB network subnet ranges
* oob_net_start_col - column number from where the OOB network ranges start
* oob_net_end_col - column number from where the OOB network ranges end
* ingress_ip_row - row number from where the Ingress network information is to
be read
* dns_ntp_ldap_sheet - name of the sheet which has the DNS, NTP and LDAP
information
* login_domain_row - row number which has the ldap login domain
* ldap_col - column number which has all the ldap-related information
* global_group - row number which has the ldap group information
* ldap_search_url_row - row number which has the ldap url
* ntp_row - row number which has the ntp information
* ntp_col - column number which has the ntp information
* dns_row - row number which has the dns information
* dns_col - column number which has the dns information
* domain_row - row number which has the domain information
* domain_col - column number which has the domain information
* location_sheet - name of the sheet which has the location information
* column - column number which has all the information
* corridor_row - row number which has the corridor information
* site_name_row - row number which has the site name
* state_name_row - row number which has the state name
* country_name_row - row number which has the country name
* clli_name_row - row number which has CLLI information
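As a sketch of how these keys drive extraction (using the example spec and
workbook from the 'spyglass/examples' folder; this is not code shipped by the
plugin itself), one hostname cell can be read with openpyxl like so:

.. code-block:: python

   import yaml
   from openpyxl import load_workbook

   with open('excel_spec_upstream.yaml') as f:
       spec = yaml.safe_load(f)['specs']['xl_spec']

   wb = load_workbook('SiteDesignSpec_v0.1.xlsx', data_only=True)
   ws = wb[spec['ipmi_sheet_name']]

   # Read the first hostname using the row/column indices from the spec.
   print(ws.cell(row=spec['start_row'], column=spec['hostname_col']).value)
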
Example: Tugboat Plugin Usage
-----------------------------
1. Required Input (refer to the 'spyglass/examples' folder for these inputs)
a) Excel File: SiteDesignSpec_v0.1.xlsx
b) Excel Spec: excel_spec_upstream.yaml
c) Site Config: site_config.yaml
d) Template_dir: '../examples/templates'
e) Site name: airship-seaworthy
2. Spyglass CLI Command:
.. code-block:: bash
spyglass m -i -p tugboat -x SiteDesignSpec_v0.1.xlsx \
-e excel_spec_upstream.yaml -c site_config.yaml \
-s airship-seaworthy -t <relative path to J2 templates dir>


@@ -1,7 +1,10 @@
click==7.0
click-plugins==1.1.1
jinja2==2.10
jsonschema==3.0.1
openpyxl==2.5.4
netaddr==0.7.19
pyyaml==5.1
requests==2.21.0
git+https://opendev.org/airship/spyglass-plugin-xls.git#egg=spyglass-plugin-xls


@@ -27,8 +27,9 @@ packages =
console_scripts =
spyglass = spyglass.cli:main
data_extractor_plugins =
tugboat = spyglass.data_extractor.plugins.tugboat.tugboat:TugboatPlugin
formation = spyglass.data_extractor.plugins.formation.formation:FormationPlugin
excel = spyglass_plugin_xls.excel:ExcelPlugin
cli_plugins =
excel = spyglass_plugin_xls.cli:excel
[yapf]
based_on_style = pep8
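For context on the entry points above: a third-party plugin registers its own
extractor and CLI group the same way. A hypothetical sketch (package, module,
and class names are made up) expressed as a setuptools call:

    from setuptools import find_packages, setup

    setup(
        name='spyglass-plugin-foo',  # hypothetical plugin package
        packages=find_packages(),
        entry_points={
            'data_extractor_plugins': [
                'foo = spyglass_plugin_foo.foo:FooPlugin',
            ],
            'cli_plugins': [
                'foo = spyglass_plugin_foo.cli:foo',
            ],
        },
    )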


@@ -16,6 +16,7 @@ import logging
import pprint
import click
from click_plugins import with_plugins
import pkg_resources
import yaml
@@ -31,96 +32,6 @@ CONTEXT_SETTINGS = {
'help_option_names': ['-h', '--help'],
}
def tugboat_required_callback(ctx, param, value):
LOG.debug('Evaluating %s: %s', param.name, value)
if 'plugin_type' not in ctx.params or \
ctx.params['plugin_type'] == 'tugboat':
if not value:
raise click.UsageError(
'%s is required for the tugboat '
'plugin.' % str(param.name),
ctx=ctx)
return value
def formation_required_callback(ctx, param, value):
LOG.debug('Evaluating %s: %s', param.name, value)
if 'plugin_type' in ctx.params:
if ctx.params['plugin_type'] == 'formation':
if not value:
raise click.UsageError(
'%s is required for the '
'formation plugin.' % str(param.name),
ctx=ctx)
return value
return ['', '', '']
PLUGIN_TYPE_OPTION = click.option(
'-p',
'--plugin-type',
'plugin_type',
type=click.Choice(['formation', 'tugboat']),
default='tugboat',
show_default=True,
help='The plugin type to use.')
# TODO(ianp): Either provide a prompt for passwords or use environment
# variable so passwords are no longer plain text
FORMATION_TARGET_OPTION = click.option(
'-f',
'--formation-target',
'formation_target',
nargs=3,
help=(
'Target URL, username, and password for formation plugin. Required '
'for formation plugin.'),
callback=formation_required_callback)
INTERMEDIARY_DIR_OPTION = click.option(
'-d',
'--intermediary-dir',
'intermediary_dir',
type=click.Path(exists=True, file_okay=False, writable=True),
default='./',
help='Directory in which the intermediary file will be created.')
EXCEL_FILE_OPTION = click.option(
'-x',
'--excel-file',
'excel_file',
multiple=True,
type=click.Path(exists=True, readable=True, dir_okay=False),
help='Path to the engineering Excel file. Required for tugboat plugin.',
callback=tugboat_required_callback)
EXCEL_SPEC_OPTION = click.option(
'-e',
'--excel-spec',
'excel_spec',
type=click.Path(exists=True, readable=True, dir_okay=False),
help=(
'Path to the Excel specification YAML file for the engineering '
'Excel file. Required for tugboat plugin.'),
callback=tugboat_required_callback)
SITE_CONFIGURATION_FILE_OPTION = click.option(
'-c',
'--site-configuration',
'site_configuration',
type=click.Path(exists=True, readable=True, dir_okay=False),
required=False,
help='Path to site specific configuration details YAML file.')
SITE_NAME_CONFIGURATION_OPTION = click.option(
'-s',
'--site-name',
'site_name',
type=click.STRING,
required=False,
help='Name of the site for which the intermediary is being generated.')
TEMPLATE_DIR_OPTION = click.option(
'-t',
'--template-dir',
@@ -138,13 +49,14 @@ MANIFEST_DIR_OPTION = click.option(
help='Path to place created manifest files.')
@click.group(context_settings=CONTEXT_SETTINGS)
@click.option(
'-v',
'--verbose',
is_flag=True,
default=False,
help='Enable debug messages in log.')
@with_plugins(pkg_resources.iter_entry_points('cli_plugins'))
@click.group()
def main(*, verbose):
"""CLI for Airship Spyglass"""
if verbose:
@@ -154,9 +66,7 @@ def main(*, verbose):
logging.basicConfig(format=LOG_FORMAT, level=log_level)
def _intermediary_helper(
plugin_type, formation_data, site, excel_file, excel_spec,
additional_configuration):
def intermediary_processor(plugin_type, **kwargs):
LOG.info("Generating Intermediary yaml")
plugin_type = plugin_type
plugin_class = None
@@ -165,6 +75,7 @@ def _intermediary_helper(
LOG.info("Load the plugin class")
for entry_point in \
pkg_resources.iter_entry_points('data_extractor_plugins'):
LOG.debug("Entry point '%s' found", entry_point.name)
if entry_point.name == plugin_type:
plugin_class = entry_point.load()
@@ -175,20 +86,13 @@ def _intermediary_helper(
# Extract data from plugin data source
LOG.info("Extract data from plugin data source")
data_extractor = plugin_class(site)
plugin_conf = data_extractor.get_plugin_conf(
{
'excel': excel_file,
'excel_spec': excel_spec,
'formation_url': formation_data[0],
'formation_user': formation_data[1],
'formation_password': formation_data[2]
})
data_extractor = plugin_class(kwargs['site_name'])
plugin_conf = data_extractor.get_plugin_conf(**kwargs)
data_extractor.set_config_opts(plugin_conf)
data_extractor.extract_data()
# Apply any additional_config provided by user
additional_config = additional_configuration
additional_config = kwargs.get('site_configuration', None)
if additional_config is not None:
with open(additional_config, 'r') as config:
raw_data = config.read()
@@ -204,75 +108,12 @@ def _intermediary_helper(
# Apply design rules to the data
LOG.info("Apply design rules to the extracted data")
process_input_ob = ProcessDataSource(site)
process_input_ob = ProcessDataSource(kwargs['site_name'])
process_input_ob.load_extracted_data_from_data_source(
data_extractor.site_data)
return process_input_ob
@main.command(
'i',
short_help='generate intermediary',
help='Generates an intermediary file from passed excel data.')
@PLUGIN_TYPE_OPTION
@FORMATION_TARGET_OPTION
@INTERMEDIARY_DIR_OPTION
@EXCEL_FILE_OPTION
@EXCEL_SPEC_OPTION
@SITE_CONFIGURATION_FILE_OPTION
@SITE_NAME_CONFIGURATION_OPTION
def generate_intermediary(
*, plugin_type, formation_target, intermediary_dir, excel_file,
excel_spec, site_configuration, site_name):
process_input_ob = _intermediary_helper(
plugin_type, formation_target, site_name, excel_file, excel_spec,
site_configuration)
LOG.info("Generate intermediary yaml")
process_input_ob.generate_intermediary_yaml()
process_input_ob.dump_intermediary_file(intermediary_dir)
@main.command(
'm',
short_help='generates manifest and intermediary',
help='Generates manifest and intermediary files.')
@click.option(
'-i',
'--save-intermediary',
'save_intermediary',
is_flag=True,
default=False,
help='Flag to save the generated intermediary file used for the manifests.'
)
@PLUGIN_TYPE_OPTION
@FORMATION_TARGET_OPTION
@INTERMEDIARY_DIR_OPTION
@EXCEL_FILE_OPTION
@EXCEL_SPEC_OPTION
@SITE_CONFIGURATION_FILE_OPTION
@SITE_NAME_CONFIGURATION_OPTION
@TEMPLATE_DIR_OPTION
@MANIFEST_DIR_OPTION
def generate_manifests_and_intermediary(
*, save_intermediary, plugin_type, formation_target, intermediary_dir,
excel_file, excel_spec, site_configuration, site_name, template_dir,
manifest_dir):
process_input_ob = _intermediary_helper(
plugin_type, formation_target, site_name, excel_file, excel_spec,
site_configuration)
LOG.info("Generate intermediary yaml")
intermediary_yaml = process_input_ob.generate_intermediary_yaml()
if save_intermediary:
LOG.debug("Dumping intermediary yaml")
process_input_ob.dump_intermediary_file(intermediary_dir)
else:
LOG.debug("Skipping dump for intermediary yaml")
LOG.info("Generating site Manifests")
processor_engine = SiteProcessor(intermediary_yaml, manifest_dir)
processor_engine.render_template(template_dir)
@main.command(
'mi',
short_help='generates manifest from intermediary',


@@ -1,508 +0,0 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pprint
import re
import formation_client
import requests
import urllib3
from spyglass.data_extractor.base import BaseDataSourcePlugin
import spyglass.data_extractor.custom_exceptions as exceptions
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
LOG = logging.getLogger(__name__)
class FormationPlugin(BaseDataSourcePlugin):
def __init__(self, region):
# Save site name if it is valid
if not region:
LOG.error("Site: None! Spyglass exited!")
LOG.info("Check spyglass --help for details")
exit()
super().__init__(region)
self.source_type = "rest"
self.source_name = "formation"
# Configuration parameters
self.formation_api_url = None
self.user = None
self.password = None
self.token = None
# Formation objects
self.client_config = None
self.formation_api_client = None
# Site related data
self.region_zone_map = {}
self.site_name_id_mapping = {}
self.zone_name_id_mapping = {}
self.region_name_id_mapping = {}
self.rack_name_id_mapping = {}
self.device_name_id_mapping = {}
LOG.info("Initiated data extractor plugin:{}".format(self.source_name))
def set_config_opts(self, conf):
"""Sets the config params passed by CLI"""
LOG.info("Plugin params passed:\n{}".format(pprint.pformat(conf)))
self._validate_config_options(conf)
self.formation_api_url = conf["url"]
self.user = conf["user"]
self.password = conf["password"]
self.token = conf.get("token", None)
self._get_formation_client()
self._update_site_and_zone(self.region)
def get_plugin_conf(self, kwargs):
"""Validates the plugin param and return if success"""
if not kwargs["formation_url"]:
LOG.error("formation_url not specified! Spyglass exited!")
exit()
url = kwargs["formation_url"]
if not kwargs["formation_user"]:
LOG.error("formation_user not specified! Spyglass exited!")
exit()
user = kwargs["formation_user"]
if not kwargs["formation_password"]:
LOG.error("formation_password not specified! Spyglass exited!")
exit()
password = kwargs['formation_password']
plugin_conf = {"url": url, "user": user, "password": password}
return plugin_conf
def _validate_config_options(self, conf):
"""Validate the CLI params passed
The method checks for missing parameters and terminates
Spyglass execution if found so.
"""
missing_params = []
for key in conf.keys():
if conf[key] is None:
missing_params.append(key)
if len(missing_params) != 0:
LOG.error("Missing Plugin Params{}:".format(missing_params))
exit()
# Implement helper classes
def _generate_token(self):
"""Generate token for Formation
Formation API does not provide separate resource to generate
token. This is a workaround to call directly Formation API
to get token instead of using Formation client.
"""
# Create formation client config object
self.client_config = formation_client.Configuration()
self.client_config.host = self.formation_api_url
self.client_config.username = self.user
self.client_config.password = self.password
self.client_config.verify_ssl = False
# Assumes token is never expired in the execution of this tool
if self.token:
return self.token
url = self.formation_api_url + "/zones"
try:
token_response = requests.get(
url,
auth=(self.user, self.password),
verify=self.client_config.verify_ssl,
)
except requests.exceptions.ConnectionError:
raise exceptions.FormationConnectionError(
"Incorrect URL: {}".format(url))
if token_response.status_code == 200:
self.token = token_response.json().get("X-Subject-Token", None)
else:
raise exceptions.TokenGenerationError(
"Unable to generate token because {}".format(
token_response.reason))
return self.token
def _get_formation_client(self):
"""Create formation client object
Formation uses X-Auth-Token for authentication and should be in
format "user|token".
Generate the token and add it to the formation config object.
"""
token = self._generate_token()
self.client_config.api_key = {"X-Auth-Token": self.user + "|" + token}
self.formation_api_client = \
formation_client.ApiClient(self.client_config)
def _update_site_and_zone(self, region):
"""Get Zone name and Site name from region"""
zone = self._get_zone_by_region_name(region)
site = self._get_site_by_zone_name(zone)
# zone = region[:-1]
# site = zone[:-1]
self.region_zone_map[region] = {}
self.region_zone_map[region]["zone"] = zone
self.region_zone_map[region]["site"] = site
def _get_zone_by_region_name(self, region_name):
zone_api = formation_client.ZonesApi(self.formation_api_client)
zones = zone_api.zones_get()
# Walk through each zone and get regions
# Return when region name matches
for zone in zones:
self.zone_name_id_mapping[zone.name] = zone.id
zone_regions = self.get_regions(zone.name)
if region_name in zone_regions:
return zone.name
return None
def _get_site_by_zone_name(self, zone_name):
site_api = formation_client.SitesApi(self.formation_api_client)
sites = site_api.sites_get()
# Walk through each site and get zones
# Return when site name matches
for site in sites:
self.site_name_id_mapping[site.name] = site.id
site_zones = self.get_zones(site.name)
if zone_name in site_zones:
return site.name
return None
def _get_site_id_by_name(self, site_name):
if site_name in self.site_name_id_mapping:
return self.site_name_id_mapping.get(site_name)
site_api = formation_client.SitesApi(self.formation_api_client)
sites = site_api.sites_get()
for site in sites:
self.site_name_id_mapping[site.name] = site.id
if site.name == site_name:
return site.id
def _get_zone_id_by_name(self, zone_name):
if zone_name in self.zone_name_id_mapping:
return self.zone_name_id_mapping.get(zone_name)
zone_api = formation_client.ZonesApi(self.formation_api_client)
zones = zone_api.zones_get()
for zone in zones:
if zone.name == zone_name:
self.zone_name_id_mapping[zone.name] = zone.id
return zone.id
def _get_region_id_by_name(self, region_name):
if region_name in self.region_name_id_mapping:
return self.region_name_id_mapping.get(region_name)
for zone in self.zone_name_id_mapping:
self.get_regions(zone)
return self.region_name_id_mapping.get(region_name, None)
def _get_rack_id_by_name(self, rack_name):
if rack_name in self.rack_name_id_mapping:
return self.rack_name_id_mapping.get(rack_name)
for zone in self.zone_name_id_mapping:
self.get_racks(zone)
return self.rack_name_id_mapping.get(rack_name, None)
def _get_device_id_by_name(self, device_name):
if device_name in self.device_name_id_mapping:
return self.device_name_id_mapping.get(device_name)
self.get_hosts(self.zone)
return self.device_name_id_mapping.get(device_name, None)
def _get_racks(self, zone, rack_type="compute"):
zone_id = self._get_zone_id_by_name(zone)
rack_api = formation_client.RacksApi(self.formation_api_client)
racks = rack_api.zones_zone_id_racks_get(zone_id)
racks_list = []
for rack in racks:
rack_name = rack.name
self.rack_name_id_mapping[rack_name] = rack.id
if rack.rack_type.name == rack_type:
racks_list.append(rack_name)
return racks_list
# Functions that will be used internally within this plugin
def get_zones(self, site=None):
zone_api = formation_client.ZonesApi(self.formation_api_client)
if site is None:
zones = zone_api.zones_get()
else:
site_id = self._get_site_id_by_name(site)
zones = zone_api.sites_site_id_zones_get(site_id)
zones_list = []
for zone in zones:
zone_name = zone.name
self.zone_name_id_mapping[zone_name] = zone.id
zones_list.append(zone_name)
return zones_list
def get_regions(self, zone):
zone_id = self._get_zone_id_by_name(zone)
region_api = formation_client.RegionApi(self.formation_api_client)
regions = region_api.zones_zone_id_regions_get(zone_id)
regions_list = []
for region in regions:
region_name = region.name
self.region_name_id_mapping[region_name] = region.id
regions_list.append(region_name)
return regions_list
# Implement Abstract functions
def get_racks(self, region):
zone = self.region_zone_map[region]["zone"]
return self._get_racks(zone, rack_type="compute")
def get_hosts(self, region, rack=None):
zone = self.region_zone_map[region]["zone"]
zone_id = self._get_zone_id_by_name(zone)
device_api = formation_client.DevicesApi(self.formation_api_client)
control_hosts = device_api.zones_zone_id_control_nodes_get(zone_id)
compute_hosts = device_api.zones_zone_id_devices_get(
zone_id, type="KVM")
hosts_list = []
for host in control_hosts:
self.device_name_id_mapping[host.aic_standard_name] = host.id
hosts_list.append(
{
"name": host.aic_standard_name,
"type": "controller",
"rack_name": host.rack_name,
"host_profile": host.host_profile_name,
})
for host in compute_hosts:
self.device_name_id_mapping[host.aic_standard_name] = host.id
hosts_list.append(
{
"name": host.aic_standard_name,
"type": "compute",
"rack_name": host.rack_name,
"host_profile": host.host_profile_name,
})
"""
for host in itertools.chain(control_hosts, compute_hosts):
self.device_name_id_mapping[host.aic_standard_name] = host.id
hosts_list.append({
'name': host.aic_standard_name,
'type': host.categories[0],
'rack_name': host.rack_name,
'host_profile': host.host_profile_name
})
"""
return hosts_list
def get_networks(self, region):
zone = self.region_zone_map[region]["zone"]
zone_id = self._get_zone_id_by_name(zone)
region_id = self._get_region_id_by_name(region)
vlan_api = formation_client.VlansApi(self.formation_api_client)
vlans = vlan_api.zones_zone_id_regions_region_id_vlans_get(
zone_id, region_id)
# Case when vlans list is empty from
# zones_zone_id_regions_region_id_vlans_get
if len(vlans) == 0:
# get device-id from the first host and get the network details
hosts = self.get_hosts(self.region)
host = hosts[0]["name"]
device_id = self._get_device_id_by_name(host)
vlans = \
vlan_api.zones_zone_id_devices_device_id_vlans_get(zone_id,
device_id)
LOG.debug("Extracted region network information\n{}".format(vlans))
vlans_list = []
for vlan_ in vlans:
if len(vlan_.vlan.ipv4) != 0:
tmp_vlan = {
"name": self._get_network_name_from_vlan_name(
vlan_.vlan.name),
"vlan": vlan_.vlan.vlan_id,
"subnet": vlan_.vlan.subnet_range,
"gateway": vlan_.ipv4_gateway,
"subnet_level": vlan_.vlan.subnet_level
}
vlans_list.append(tmp_vlan)
return vlans_list
def get_ips(self, region, host=None):
zone = self.region_zone_map[region]["zone"]
zone_id = self._get_zone_id_by_name(zone)
if host:
hosts = [host]
else:
hosts = []
hosts_dict = self.get_hosts(zone)
for host in hosts_dict:
hosts.append(host["name"])
vlan_api = formation_client.VlansApi(self.formation_api_client)
ip_ = {}
for host in hosts:
device_id = self._get_device_id_by_name(host)
vlans = \
vlan_api.zones_zone_id_devices_device_id_vlans_get(zone_id,
device_id)
LOG.debug("Received VLAN Network Information\n{}".format(vlans))
ip_[host] = {}
for vlan_ in vlans:
# TODO(pg710r) We need to handle the case when incoming ipv4
# list is empty
if len(vlan_.vlan.ipv4) != 0:
name = self._get_network_name_from_vlan_name(
vlan_.vlan.name)
ipv4 = vlan_.vlan.ipv4[0].ip
LOG.debug(
"vlan:{},name:{},ip:{},vlan_name:{}".format(
vlan_.vlan.vlan_id, name, ipv4, vlan_.vlan.name))
# TODD(pg710r) This code needs to extended to support ipv4
# and ipv6
# ip_[host][name] = {'ipv4': ipv4}
ip_[host][name] = ipv4
return ip_
def _get_network_name_from_vlan_name(self, vlan_name):
"""Network names are ksn, oam, oob, overlay, storage, pxe
The following mapping rules apply:
vlan_name contains "ksn" the network name is "calico"
vlan_name contains "storage" the network name is "storage"
vlan_name contains "server" the network name is "oam"
vlan_name contains "ovs" the network name is "overlay"
vlan_name contains "ILO" the network name is "oob"
"""
network_names = {
"ksn": "calico",
"storage": "storage",
"server": "oam",
"ovs": "overlay",
"ILO": "oob",
"pxe": "pxe",
}
for name in network_names:
# Make a pattern that would ignore case.
# if name is 'ksn' pattern name is '(?i)(ksn)'
name_pattern = "(?i)({})".format(name)
if re.search(name_pattern, vlan_name):
return network_names[name]
# Return empty string if vlan_name is not matched with network_names
return ""
def get_dns_servers(self, region):
try:
zone = self.region_zone_map[region]["zone"]
zone_id = self._get_zone_id_by_name(zone)
zone_api = formation_client.ZonesApi(self.formation_api_client)
zone_ = zone_api.zones_zone_id_get(zone_id)
except formation_client.rest.ApiException as e:
raise exceptions.ApiClientError(e.msg)
if not zone_.ipv4_dns:
LOG.warning("No dns server")
return []
dns_list = []
for dns in zone_.ipv4_dns:
dns_list.append(dns.ip)
return dns_list
def get_ntp_servers(self, region):
return []
def get_ldap_information(self, region):
return {}
def get_location_information(self, region):
"""Get location information for a zone and return"""
site = self.region_zone_map[region]["site"]
site_id = self._get_site_id_by_name(site)
site_api = formation_client.SitesApi(self.formation_api_client)
site_info = site_api.sites_site_id_get(site_id)
try:
return {
# 'corridor': site_info.corridor,
"name": site_info.city,
"state": site_info.state,
"country": site_info.country,
"physical_location_id": site_info.clli,
}
except AttributeError as e:
raise exceptions.MissingAttributeError(
"Missing {} information in {}".format(e, site_info.city))
def get_domain_name(self, region):
try:
zone = self.region_zone_map[region]["zone"]
zone_id = self._get_zone_id_by_name(zone)
zone_api = formation_client.ZonesApi(self.formation_api_client)
zone_ = zone_api.zones_zone_id_get(zone_id)
except formation_client.rest.ApiException as e:
raise exceptions.ApiClientError(e.msg)
if not zone_.dns:
LOG.warning("Got None while running get domain name")
return None
return zone_.dns


@@ -1,38 +0,0 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class BaseError(Exception):
pass
class NotEnoughIp(BaseError):
def __init__(self, cidr, total_nodes):
self.cidr = cidr
self.total_nodes = total_nodes
def display_error(self):
print("{} can not handle {} nodes".format(self.cidr, self.total_nodes))
class NoSpecMatched(BaseError):
def __init__(self, excel_specs):
self.specs = excel_specs
def display_error(self):
print(
"No spec matched. Following are the available specs:\n{}".format(
self.specs))


@@ -1,417 +0,0 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pprint
import re
import sys
from openpyxl import load_workbook
from openpyxl import Workbook
import yaml
from spyglass.data_extractor.custom_exceptions import NoSpecMatched
LOG = logging.getLogger(__name__)
class ExcelParser(object):
"""Parse data from excel into a dict"""
def __init__(self, file_name, excel_specs):
self.file_name = file_name
with open(excel_specs, "r") as f:
spec_raw_data = f.read()
self.excel_specs = yaml.safe_load(spec_raw_data)
# A combined design spec, returns a workbook object after combining
# all the input excel specs
combined_design_spec = self.combine_excel_design_specs(file_name)
self.wb_combined = combined_design_spec
self.filenames = file_name
self.spec = "xl_spec"
@staticmethod
def sanitize(string):
"""Remove extra spaces and convert string to lower case"""
return string.replace(" ", "").lower()
def compare(self, string1, string2):
"""Compare the strings"""
return bool(re.search(self.sanitize(string1), self.sanitize(string2)))
def validate_sheet(self, spec, sheet):
"""Check if the sheet is correct or not"""
ws = self.wb_combined[sheet]
header_row = self.excel_specs["specs"][spec]["header_row"]
ipmi_header = self.excel_specs["specs"][spec]["ipmi_address_header"]
ipmi_column = self.excel_specs["specs"][spec]["ipmi_address_col"]
header_value = ws.cell(row=header_row, column=ipmi_column).value
return bool(self.compare(ipmi_header, header_value))
def find_correct_spec(self):
"""Find the correct spec"""
for spec in self.excel_specs["specs"]:
sheet_name = self.excel_specs["specs"][spec]["ipmi_sheet_name"]
for sheet in self.wb_combined.sheetnames:
if self.compare(sheet_name, sheet):
self.excel_specs["specs"][spec]["ipmi_sheet_name"] = sheet
if self.validate_sheet(spec, sheet):
return spec
raise NoSpecMatched(self.excel_specs)
def get_ipmi_data(self):
"""Read IPMI data from the sheet"""
ipmi_data = {}
hosts = []
spec_ = self.excel_specs["specs"][self.spec]
provided_sheetname = spec_["ipmi_sheet_name"]
workbook_object, extracted_sheetname = \
self.get_xl_obj_and_sheetname(provided_sheetname)
if workbook_object is not None:
ws = workbook_object[extracted_sheetname]
else:
ws = self.wb_combined[provided_sheetname]
row = spec_["start_row"]
end_row = spec_["end_row"]
hostname_col = spec_["hostname_col"]
ipmi_address_col = spec_["ipmi_address_col"]
host_profile_col = spec_["host_profile_col"]
ipmi_gateway_col = spec_["ipmi_gateway_col"]
previous_server_gateway = None
while row <= end_row:
hostname = \
self.sanitize(ws.cell(row=row, column=hostname_col).value)
hosts.append(hostname)
ipmi_address = ws.cell(row=row, column=ipmi_address_col).value
if "/" in ipmi_address:
ipmi_address = ipmi_address.split("/")[0]
ipmi_gateway = ws.cell(row=row, column=ipmi_gateway_col).value
if ipmi_gateway:
previous_server_gateway = ipmi_gateway
else:
ipmi_gateway = previous_server_gateway
host_profile = ws.cell(row=row, column=host_profile_col).value
try:
if host_profile is None:
raise RuntimeError(
"No value read from "
"{} sheet:{} row:{}, col:{}".format(
self.file_name, self.spec, row, host_profile_col))
except RuntimeError as rerror:
LOG.critical(rerror)
sys.exit("Tugboat exited!!")
ipmi_data[hostname] = {
"ipmi_address": ipmi_address,
"ipmi_gateway": ipmi_gateway,
"host_profile": host_profile,
"type": type, # FIXME (Ian Pittwood): shadows type built-in
}
row += 1
LOG.debug(
"ipmi data extracted from excel:\n{}".format(
pprint.pformat(ipmi_data)))
LOG.debug(
"host data extracted from excel:\n{}".format(
pprint.pformat(hosts)))
return [ipmi_data, hosts]
def get_private_vlan_data(self, ws):
"""Get private vlan data from private IP sheet"""
vlan_data = {}
row = self.excel_specs["specs"][self.spec]["vlan_start_row"]
end_row = self.excel_specs["specs"][self.spec]["vlan_end_row"]
type_col = self.excel_specs["specs"][self.spec]["net_type_col"]
vlan_col = self.excel_specs["specs"][self.spec]["vlan_col"]
while row <= end_row:
cell_value = ws.cell(row=row, column=type_col).value
if cell_value:
vlan = ws.cell(row=row, column=vlan_col).value
if vlan:
vlan = vlan.lower()
vlan_data[vlan] = cell_value
row += 1
LOG.debug(
"vlan data extracted from excel:\n%s" % pprint.pformat(vlan_data))
return vlan_data
def get_private_network_data(self):
"""Read network data from the private ip sheet"""
spec_ = self.excel_specs["specs"][self.spec]
provided_sheetname = spec_["private_ip_sheet"]
workbook_object, extracted_sheetname = \
self.get_xl_obj_and_sheetname(provided_sheetname)
if workbook_object is not None:
ws = workbook_object[extracted_sheetname]
else:
ws = self.wb_combined[provided_sheetname]
vlan_data = self.get_private_vlan_data(ws)
network_data = {}
row = spec_["net_start_row"]
end_row = spec_["net_end_row"]
col = spec_["net_col"]
vlan_col = spec_["net_vlan_col"]
old_vlan = ""
while row <= end_row:
vlan = ws.cell(row=row, column=vlan_col).value
if vlan:
vlan = vlan.lower()
network = ws.cell(row=row, column=col).value
if vlan and network:
net_type = vlan_data[vlan]
if "vlan" not in network_data:
network_data[net_type] = {"vlan": vlan, "subnet": []}
elif not vlan and network:
# If vlan is not present then assign old vlan to vlan as vlan
# value is spread over several rows
vlan = old_vlan
else:
row += 1
continue
network_data[vlan_data[vlan]]["subnet"].append(network)
old_vlan = vlan
row += 1
for network in network_data:
network_data[network]["is_common"] = True
"""
if len(network_data[network]['subnet']) > 1:
network_data[network]['is_common'] = False
else:
network_data[network]['is_common'] = True
LOG.debug("private network data extracted from excel:\n%s"
% pprint.pformat(network_data))
"""
return network_data
def get_public_network_data(self):
"""Read public network data from public ip data"""
spec_ = self.excel_specs["specs"][self.spec]
provided_sheetname = spec_["public_ip_sheet"]
workbook_object, extracted_sheetname = self.get_xl_obj_and_sheetname(
provided_sheetname)
if workbook_object is not None:
ws = workbook_object[extracted_sheetname]
else:
ws = self.wb_combined[provided_sheetname]
oam_row = spec_["oam_ip_row"]
oam_col = spec_["oam_ip_col"]
oam_vlan_col = spec_["oam_vlan_col"]
ingress_row = spec_["ingress_ip_row"]
oob_row = spec_["oob_net_row"]
col = spec_["oob_net_start_col"]
end_col = spec_["oob_net_end_col"]
network_data = {
"oam": {
"subnet": [ws.cell(row=oam_row, column=oam_col).value],
"vlan": ws.cell(row=oam_row, column=oam_vlan_col).value,
},
"ingress": ws.cell(row=ingress_row, column=oam_col).value,
"oob": {
"subnet": [],
}
}
while col <= end_col:
cell_value = ws.cell(row=oob_row, column=col).value
if cell_value:
network_data["oob"]["subnet"].append(self.sanitize(cell_value))
col += 1
LOG.debug(
"public network data extracted from excel:\n%s" %
pprint.pformat(network_data))
return network_data
def get_site_info(self):
"""Read location, dns, ntp and ldap data"""
spec_ = self.excel_specs["specs"][self.spec]
provided_sheetname = spec_["dns_ntp_ldap_sheet"]
workbook_object, extracted_sheetname = \
self.get_xl_obj_and_sheetname(provided_sheetname)
if workbook_object is not None:
ws = workbook_object[extracted_sheetname]
else:
ws = self.wb_combined[provided_sheetname]
dns_row = spec_["dns_row"]
dns_col = spec_["dns_col"]
ntp_row = spec_["ntp_row"]
ntp_col = spec_["ntp_col"]
domain_row = spec_["domain_row"]
domain_col = spec_["domain_col"]
login_domain_row = spec_["login_domain_row"]
ldap_col = spec_["ldap_col"]
global_group = spec_["global_group"]
ldap_search_url_row = spec_["ldap_search_url_row"]
dns_servers = ws.cell(row=dns_row, column=dns_col).value
ntp_servers = ws.cell(row=ntp_row, column=ntp_col).value
try:
if dns_servers is None:
raise RuntimeError(
"No value for dns_server from: "
"{} Sheet:'{}' Row:{} Col:{}".format(
self.file_name, provided_sheetname, dns_row, dns_col))
if ntp_servers is None:
raise RuntimeError(
"No value for ntp_server from: "
"{} Sheet:'{}' Row:{} Col:{}".format(
self.file_name, provided_sheetname, ntp_row, ntp_col))
except RuntimeError as rerror:
LOG.critical(rerror)
sys.exit("Tugboat exited!!")
dns_servers = dns_servers.replace("\n", " ")
ntp_servers = ntp_servers.replace("\n", " ")
if "," in dns_servers:
dns_servers = dns_servers.split(",")
else:
dns_servers = dns_servers.split()
if "," in ntp_servers:
ntp_servers = ntp_servers.split(",")
else:
ntp_servers = ntp_servers.split()
site_info = {
"location": self.get_location_data(),
"dns": dns_servers,
"ntp": ntp_servers,
"domain": ws.cell(row=domain_row, column=domain_col).value,
"ldap": {
"subdomain": ws.cell(row=login_domain_row,
column=ldap_col).value,
"common_name": ws.cell(row=global_group,
column=ldap_col).value,
"url": ws.cell(row=ldap_search_url_row, column=ldap_col).value,
},
}
LOG.debug(
"Site Info extracted from\
excel:\n%s",
pprint.pformat(site_info),
)
return site_info
def get_location_data(self):
"""Read location data from the site and zone sheet"""
spec_ = self.excel_specs["specs"][self.spec]
provided_sheetname = spec_["location_sheet"]
workbook_object, extracted_sheetname = \
self.get_xl_obj_and_sheetname(provided_sheetname)
if workbook_object is not None:
ws = workbook_object[extracted_sheetname]
else:
ws = self.wb_combined[provided_sheetname]
corridor_row = spec_["corridor_row"]
column = spec_["column"]
site_name_row = spec_["site_name_row"]
state_name_row = spec_["state_name_row"]
country_name_row = spec_["country_name_row"]
clli_name_row = spec_["clli_name_row"]
return {
"corridor": ws.cell(row=corridor_row, column=column).value,
"name": ws.cell(row=site_name_row, column=column).value,
"state": ws.cell(row=state_name_row, column=column).value,
"country": ws.cell(row=country_name_row, column=column).value,
"physical_location": ws.cell(row=clli_name_row,
column=column).value,
}
def validate_sheet_names_with_spec(self):
"""Checks is sheet name in spec file matches with excel file"""
spec = list(self.excel_specs["specs"].keys())[0]
spec_item = self.excel_specs["specs"][spec]
sheet_name_list = []
ipmi_header_sheet_name = spec_item["ipmi_sheet_name"]
sheet_name_list.append(ipmi_header_sheet_name)
private_ip_sheet_name = spec_item["private_ip_sheet"]
sheet_name_list.append(private_ip_sheet_name)
public_ip_sheet_name = spec_item["public_ip_sheet"]
sheet_name_list.append(public_ip_sheet_name)
dns_ntp_ldap_sheet_name = spec_item["dns_ntp_ldap_sheet"]
sheet_name_list.append(dns_ntp_ldap_sheet_name)
location_sheet_name = spec_item["location_sheet"]
sheet_name_list.append(location_sheet_name)
try:
for sheetname in sheet_name_list:
workbook_object, extracted_sheetname = \
self.get_xl_obj_and_sheetname(sheetname)
if workbook_object is not None:
wb = workbook_object
sheetname = extracted_sheetname
else:
wb = self.wb_combined
if sheetname not in wb.sheetnames:
raise RuntimeError(
"SheetName '{}' not found ".format(sheetname))
except RuntimeError as rerror:
LOG.critical(rerror)
sys.exit("Tugboat exited!!")
LOG.info("Sheet names in excel spec validated")
def get_data(self):
"""Create a dict with combined data"""
self.validate_sheet_names_with_spec()
ipmi_data = self.get_ipmi_data()
network_data = self.get_private_network_data()
public_network_data = self.get_public_network_data()
site_info_data = self.get_site_info()
data = {
"ipmi_data": ipmi_data,
"network_data": {
"private": network_data,
"public": public_network_data,
},
"site_info": site_info_data,
}
LOG.debug(
"Location data extracted from excel:\n%s" % pprint.pformat(data))
return data
def combine_excel_design_specs(self, filenames):
"""Combines multiple excel file to a single design spec"""
design_spec = Workbook()
for excel_file in filenames:
loaded_workbook = load_workbook(excel_file, data_only=True)
for names in loaded_workbook.sheetnames:
design_spec_worksheet = design_spec.create_sheet(names)
loaded_workbook_ws = loaded_workbook[names]
for row in loaded_workbook_ws:
for cell in row:
design_spec_worksheet[cell.coordinate].value = \
cell.value
return design_spec
def get_xl_obj_and_sheetname(self, sheetname):
"""The logic confirms if the sheetname is specified for example as:
'MTN57a_AEC_Network_Design_v1.6.xlsx:Public IPs'
"""
if re.search(".xlsx", sheetname) or re.search(".xls", sheetname):
# Extract file name
source_xl_file = sheetname.split(":")[0]
wb = load_workbook(source_xl_file, data_only=True)
return [wb, sheetname.split(":")[1]]
else:
return [None, sheetname]


@@ -1,357 +0,0 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import logging
import pprint
import re
from spyglass.data_extractor.base import BaseDataSourcePlugin
from spyglass.data_extractor.plugins.tugboat.excel_parser import ExcelParser
LOG = logging.getLogger(__name__)
class TugboatPlugin(BaseDataSourcePlugin):
def __init__(self, region):
LOG.info("Tugboat Initializing")
self.source_type = "excel"
self.source_name = "tugboat"
# Configuration parameters
self.excel_path = None
self.excel_spec = None
# Site related data
self.region = region
# Raw data from excel
self.parsed_xl_data = None
LOG.info("Initiated data extractor plugin:{}".format(self.source_name))
def set_config_opts(self, conf):
"""Placeholder to set configuration options specific to each plugin.
:param dict conf: Configuration options as dict
Example: conf = { 'excel_spec': 'spec1.yaml',
'excel_path': 'excel.xls' }
Each plugin will have their own config opts.
"""
self.excel_path = conf["excel_path"]
self.excel_spec = conf["excel_spec"]
# Extract raw data from excel sheets
self._get_excel_obj()
self._extract_raw_data_from_excel()
return
def get_plugin_conf(self, kwargs):
"""Validates the plugin param from CLI and return if correct
Ideally the CLICK module shall report an error if excel file
and excel specs are not specified. The below code has been
written as an additional safeguard.
"""
if not kwargs["excel"]:
LOG.error("Engineering excel file not specified: Spyglass exited!")
exit()
excel_file_info = kwargs["excel"]
if not kwargs["excel_spec"]:
LOG.error("Engineering spec file not specified: Spyglass exited!")
exit()
excel_spec_info = kwargs["excel_spec"]
plugin_conf = {
"excel_path": excel_file_info,
"excel_spec": excel_spec_info,
}
return plugin_conf
def get_hosts(self, region, rack=None):
"""Return list of hosts in the region
:param string region: Region name
:param string rack: Rack name
:returns: list of hosts information
:rtype: list of dict
Example: [
{
'name': 'host01',
'type': 'controller',
'host_profile': 'hp_01'
},
{
'name': 'host02',
'type': 'compute',
'host_profile': 'hp_02'}
]
"""
LOG.info("Get Host Information")
ipmi_data = self.parsed_xl_data["ipmi_data"][0]
rackwise_hosts = self._get_rackwise_hosts()
host_list = []
for rack in rackwise_hosts.keys():
for host in rackwise_hosts[rack]:
host_list.append(
{
"rack_name": rack,
"name": host,
"host_profile": ipmi_data[host]["host_profile"],
})
return host_list
def get_networks(self, region):
"""Extracts vlan network info from raw network data from excel"""
vlan_list = []
# Network data extracted from xl is formatted to have a predictable
# data type. For example, 'VLAN 45' extracted from xl is formatted as 45
vlan_pattern = r"\d+"
private_net = self.parsed_xl_data["network_data"]["private"]
public_net = self.parsed_xl_data["network_data"]["public"]
# Extract network information from private and public network data
for net_type, net_val in itertools.chain(private_net.items(),
public_net.items()):
tmp_vlan = {}
# Ingress is special network that has no vlan, only a subnet string
# So treatment for ingress is different
if net_type != "ingress":
# standardize the network name as net_type may be different.
# For example, instead of pxe it may be PXE, or instead of calico
# it may be ksn. Valid network names are pxe, calico, oob, oam,
# overlay, storage, ingress
tmp_vlan["name"] = \
self._get_network_name_from_vlan_name(net_type)
# extract vlan tag. It was extracted from the xl file as 'VLAN 45'
# The code below extracts the numeric data from net_val['vlan']
if net_val.get("vlan", "") != "":
value = re.findall(vlan_pattern, net_val["vlan"])
tmp_vlan["vlan"] = value[0]
else:
tmp_vlan["vlan"] = "#CHANGE_ME"
tmp_vlan["subnet"] = net_val.get("subnet", "#CHANGE_ME")
tmp_vlan["gateway"] = net_val.get("gateway", "#CHANGE_ME")
else:
tmp_vlan["name"] = "ingress"
tmp_vlan["subnet"] = net_val
vlan_list.append(tmp_vlan)
LOG.debug(
"vlan list extracted from tugboat:\n{}".format(
pprint.pformat(vlan_list)))
return vlan_list
def get_ips(self, region, host=None):
"""Return list of IPs on the host
:param string region: Region name
:param string host: Host name
:returns: Dict of IPs per network on the host
:rtype: dict
Example: {'oob': {'ipv4': '192.168.1.10'},
'pxe': {'ipv4': '192.168.2.10'}}
The network names from get_networks are expected to be the keys of this
dict. In case some networks are missing, they are expected to be either
DHCP or internally generated in the next steps by the design rules.
"""
ip_ = {}
ipmi_data = self.parsed_xl_data["ipmi_data"][0]
ip_[host] = {
"oob": ipmi_data[host].get("ipmi_address", "#CHANGE_ME"),
"oam": ipmi_data[host].get("oam", "#CHANGE_ME"),
"calico": ipmi_data[host].get("calico", "#CHANGE_ME"),
"overlay": ipmi_data[host].get("overlay", "#CHANGE_ME"),
"pxe": ipmi_data[host].get("pxe", "#CHANGE_ME"),
"storage": ipmi_data[host].get("storage", "#CHANGE_ME"),
}
return ip_
def get_ldap_information(self, region):
"""Extract ldap information from excel"""
ldap_raw_data = self.parsed_xl_data["site_info"]["ldap"]
ldap_info = {}
# raw url is 'url: ldap://example.com' so we are converting to
# 'ldap://example.com'
url = ldap_raw_data.get("url", "#CHANGE_ME")
try:
ldap_info["url"] = url.split(" ")[1]
ldap_info["domain"] = url.split(".")[1]
except IndexError as e:
LOG.error("url.split:{}".format(e))
ldap_info["common_name"] = \
ldap_raw_data.get("common_name", "#CHANGE_ME")
ldap_info["subdomain"] = ldap_raw_data.get("subdomain", "#CHANGE_ME")
return ldap_info
def get_ntp_servers(self, region):
"""Returns a comma separated list of ntp ip addresses"""
ntp_server_list = \
self._get_formatted_server_list(self.parsed_xl_data["site_info"]
["ntp"])
return ntp_server_list
def get_dns_servers(self, region):
"""Returns a comma separated list of dns ip addresses"""
dns_server_list = \
self._get_formatted_server_list(self.parsed_xl_data["site_info"]
["dns"])
return dns_server_list
def get_domain_name(self, region):
"""Returns domain name extracted from excel file"""
return self.parsed_xl_data["site_info"]["domain"]
def get_location_information(self, region):
"""Prepare location data from information extracted by ExcelParser"""
location_data = self.parsed_xl_data["site_info"]["location"]
corridor_pattern = r"\d+"
corridor_number = \
re.findall(corridor_pattern, location_data["corridor"])[0]
name = location_data.get("name", "#CHANGE_ME")
state = location_data.get("state", "#CHANGE_ME")
country = location_data.get("country", "#CHANGE_ME")
physical_location_id = location_data.get("physical_location", "")
return {
"name": name,
"physical_location_id": physical_location_id,
"state": state,
"country": country,
"corridor": "c{}".format(corridor_number),
}
def get_racks(self, region):
# This function is not required since the excel plugin
# already provide rack information.
pass
def _get_excel_obj(self):
"""Creation of an ExcelParser object to store site information.
The information is obtained based on a excel spec yaml file.
This spec contains row, column and sheet information of
the excel file from where site specific data can be extracted.
"""
self.excel_obj = ExcelParser(self.excel_path, self.excel_spec)
def _extract_raw_data_from_excel(self):
"""Extracts raw information from excel file based on excel spec"""
self.parsed_xl_data = self.excel_obj.get_data()
def _get_network_name_from_vlan_name(self, vlan_name):
"""Network names are ksn, oam, oob, overlay, storage, pxe
This is a utility function to determine the acceptable network
name from the vlan name extracted from the excel file
The following mapping rules apply:
vlan_name contains "ksn or calico" the network name is "calico"
vlan_name contains "storage" the network name is "storage"
vlan_name contains "server" the network name is "oam"
vlan_name contains "ovs" the network name is "overlay"
vlan_name contains "oob" the network name is "oob"
vlan_name contains "pxe" the network name is "pxe"
"""
network_names = [
"ksn|calico",
"storage",
"oam|server",
"ovs|overlay",
"oob",
"pxe",
]
for name in network_names:
# Make a pattern that would ignore case.
# if name is 'ksn' pattern name is '(?i)(ksn)'
name_pattern = "(?i)({})".format(name)
if re.search(name_pattern, vlan_name):
if name == "ksn|calico":
return "calico"
if name == "storage":
return "storage"
if name == "oam|server":
return "oam"
if name == "ovs|overlay":
return "overlay"
if name == "oob":
return "oob"
if name == "pxe":
return "pxe"
# if nothing matches
LOG.error(
"Unable to recognize VLAN name extracted from Plugin data source")
return ""
def _get_formatted_server_list(self, server_list):
"""Format dns and ntp server list as comma separated string"""
# dns/ntp server info from excel is of the format
# 'xxx.xxx.xxx.xxx, (aaa.bbb.ccc.com)'
# The function returns a list of comma separated dns ip addresses
servers = []
for data in server_list:
if "(" not in data:
servers.append(data)
formatted_server_list = ",".join(servers)
return formatted_server_list
def _get_rack(self, host):
"""Get rack id from the rack string extracted from xl"""
rack_pattern = r"\w.*(r\d+)\w.*"
rack = re.findall(rack_pattern, host)[0]
if not self.region:
self.region = host.split(rack)[0]
return rack
def _get_rackwise_hosts(self):
"""Mapping hosts with rack ids"""
rackwise_hosts = {}
hostnames = self.parsed_xl_data["ipmi_data"][1]
racks = self._get_rack_data()
for rack in racks:
if rack not in rackwise_hosts:
rackwise_hosts[racks[rack]] = []
for host in hostnames:
if rack in host:
rackwise_hosts[racks[rack]].append(host)
LOG.debug("rackwise hosts:\n%s", pprint.pformat(rackwise_hosts))
return rackwise_hosts
def _get_rack_data(self):
"""Format rack name"""
LOG.info("Getting rack data")
racks = {}
hostnames = self.parsed_xl_data["ipmi_data"][1]
for host in hostnames:
rack = self._get_rack(host)
racks[rack] = rack.replace("r", "rack")
return racks


@@ -1,63 +0,0 @@
# Copyright 2018 The Openstack-Helm Authors.
# Copyright (c) 2018 AT&T Intellectual Property. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Important: Please modify this dictionary to match the appropriate
# design spec file.
---
specs:
# Design Spec file name: SiteDesignSpec_v0.1.xlsx
xl_spec:
ipmi_sheet_name: 'Site-Information'
start_row: 4
end_row: 15
hostname_col: 2
ipmi_address_col: 3
host_profile_col: 5
ipmi_gateway_col: 4
private_ip_sheet: 'Site-Information'
net_type_col: 1
vlan_col: 2
vlan_start_row: 19
vlan_end_row: 30
net_start_row: 33
net_end_row: 40
net_col: 2
net_vlan_col: 1
public_ip_sheet: 'Site-Information'
oam_vlan_col: 1
oam_ip_row: 43
oam_ip_col: 2
oob_net_row: 48
oob_net_start_col: 2
oob_net_end_col: 5
ingress_ip_row: 45
dns_ntp_ldap_sheet: 'Site-Information'
login_domain_row: 52
ldap_col: 2
global_group: 53
ldap_search_url_row: 54
ntp_row: 55
ntp_col: 2
dns_row: 56
dns_col: 2
domain_row: 51
domain_col: 2
location_sheet: 'Site-Information'
column: 2
corridor_row: 59
site_name_row: 58
state_name_row: 60
country_name_row: 61
clli_name_row: 62


@@ -1,33 +0,0 @@
##################################
# Site Specific Tugboat Settings #
##################################
---
site_info:
ldap:
common_name: test
url: ldap://ldap.example.com
subdomain: test
ntp:
servers: 10.10.10.10,20.20.20.20,30.30.30.30
sitetype: foundry
domain: atlantafoundry.com
dns:
servers: 8.8.8.8,8.8.4.4,208.67.222.222
network:
vlan_network_data:
ingress:
subnet:
- 132.68.226.72/29
bgp :
peers:
- '172.29.0.2'
- '172.29.0.3'
asnumber: 64671
peer_asnumber: 64688
storage:
ceph:
controller:
osd_count: 6
...