Base code for spyglass
- Spyglass skeleton with engine, site processor
- Spyglass data extractor with formation plugin
- Docker files and scripts to run spyglass
This commit is contained in:
parent b59ff4cd03
commit 296705a0a5
4
.dockerignore
Normal file
@@ -0,0 +1,4 @@
**/__pycache__
**/.tox
**/.eggs
**/spyglass.egg-info
116
.gitignore
vendored
Normal file
@@ -0,0 +1,116 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
*.tgz

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.testrepository/*
cover/*
results/*
.stestr/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

# pycharm-ide
.idea/

# osx
.DS_Store

# git
Changelog
AUTHORS

# Ansible
*.retry
201
LICENSE
Normal file
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
84
Makefile
Normal file
@@ -0,0 +1,84 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

SPYGLASS_BUILD_CTX ?= .
IMAGE_NAME ?= spyglass
IMAGE_PREFIX ?= att-comdev
DOCKER_REGISTRY ?= quay.io
IMAGE_TAG ?= latest
PROXY ?= http://proxy.foo.com:8000
NO_PROXY ?= localhost,127.0.0.1,.svc.cluster.local
USE_PROXY ?= false
PUSH_IMAGE ?= false
LABEL ?= commit-id
IMAGE ?= $(DOCKER_REGISTRY)/$(IMAGE_PREFIX)/$(IMAGE_NAME):$(IMAGE_TAG)
PYTHON_BASE_IMAGE ?= python:3.6
export

# Build all docker images for this project
.PHONY: images
images: build_spyglass

# Run an image locally and exercise simple tests
.PHONY: run_images
run_images: run_spyglass

.PHONY: run_spyglass
run_spyglass: build_spyglass
	tools/spyglass.sh --help

.PHONY: security
security:
	tox -c tox.ini -e bandit

# Perform linting
.PHONY: lint
lint: py_lint

# Perform auto formatting
.PHONY: format
format: py_format

.PHONY: build_spyglass
build_spyglass:
ifeq ($(USE_PROXY), true)
	docker build -t $(IMAGE) --network=host --label $(LABEL) -f images/spyglass/Dockerfile \
		--build-arg FROM=$(PYTHON_BASE_IMAGE) \
		--build-arg http_proxy=$(PROXY) \
		--build-arg https_proxy=$(PROXY) \
		--build-arg HTTP_PROXY=$(PROXY) \
		--build-arg HTTPS_PROXY=$(PROXY) \
		--build-arg no_proxy=$(NO_PROXY) \
		--build-arg NO_PROXY=$(NO_PROXY) \
		--build-arg ctx_base=$(SPYGLASS_BUILD_CTX) .
else
	docker build -t $(IMAGE) --network=host --label $(LABEL) -f images/spyglass/Dockerfile \
		--build-arg FROM=$(PYTHON_BASE_IMAGE) \
		--build-arg ctx_base=$(SPYGLASS_BUILD_CTX) .
endif
ifeq ($(PUSH_IMAGE), true)
	docker push $(IMAGE)
endif

.PHONY: clean
clean:
	rm -rf build

.PHONY: py_lint
py_lint:
	tox -e pep8

.PHONY: py_format
py_format:
	tox -e fmt
31
README.md
@@ -1,2 +1,29 @@
# spyglass
staging for the spyglass airship-spyglass repo

What is Spyglass?
-----------------

Spyglass is a data extraction tool that can interface with
different input data sources to generate site manifest YAML files.
The data sources provide all the configuration data needed
for a site deployment. The site manifest YAML files generated
by Spyglass are saved in a Git repository, from where Pegleg
can access and aggregate them. The aggregated file can then be
fed to Shipyard for site deployment and updates.

Spyglass follows a plugin model to support multiple input data
sources. The currently supported plugins are the Formation plugin
and Tugboat. Formation is a REST API based service that acts as the
source of information about hardware, networking, and site data;
the Formation plugin interacts with the Formation API to gather the
necessary configuration. Similarly, Tugboat accepts an engineering
spec in the form of a spreadsheet, plus an index file describing how
to read the spreadsheet, and generates the site-level manifests.
As an optional step, Spyglass can generate an intermediary YAML
containing all the information that will be rendered into the Airship
site manifests. This optional step helps the deployment engineer
modify any data if required.

Basic Usage
-----------

TODO
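The README above describes a pipeline of plugin extraction followed by an optional intermediary step. A minimal sketch of that flow, using illustrative names only (not the actual Spyglass API):

```python
# Sketch of the Spyglass flow described above. SpreadsheetPlugin and
# build_intermediary are hypothetical names for illustration.
class SpreadsheetPlugin:
    """Stand-in for a data-source plugin such as Formation or Tugboat."""

    def extract_data(self):
        # A real plugin would query the Formation API or parse the
        # engineering-spec spreadsheet here.
        return {
            "site_info": {"name": "site1", "domain": "example.com"},
            "network": {},
            "baremetal": {},
        }


def build_intermediary(plugin):
    """The optional intermediary step: one dict holding everything that
    will later be rendered into Airship site manifests."""
    return plugin.extract_data()
```

A deployment engineer could inspect and edit the returned dict before the manifests are rendered, which is the point of the intermediary step.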
13
images/spyglass/Dockerfile
Normal file
@@ -0,0 +1,13 @@
ARG FROM=python:3.6
FROM ${FROM}

VOLUME /var/spyglass
WORKDIR /var/spyglass

ARG ctx_base=.

COPY ${ctx_base}/requirements.txt /opt/spyglass/requirements.txt
RUN pip3 install --no-cache-dir -r /opt/spyglass/requirements.txt

COPY ${ctx_base} /opt/spyglass
RUN pip3 install -e /opt/spyglass
7
requirements.txt
Normal file
@@ -0,0 +1,7 @@
jinja2==2.10
jsonschema
netaddr
openpyxl==2.5.4
pyyaml==3.12
requests
six
45
setup.py
Normal file
@@ -0,0 +1,45 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from setuptools import find_packages
from setuptools import setup

setup(
    name='spyglass',
    version='0.0.1',
    description='Generate Airship specific yaml manifests from data sources',
    url='http://github.com/att-comdev/tugboat',
    python_requires='>=3.5.0',
    license='Apache 2.0',
    packages=find_packages(),
    install_requires=[
        'jsonschema',
        'Click',
        'openpyxl',
        'netaddr',
        'pyyaml',
        'jinja2',
        'flask',
        'flask-bootstrap',
    ],
    entry_points={
        'console_scripts': [
            'spyglass=spyglass.spyglass:main',
        ],
        'data_extractor_plugins': [
            'formation=spyglass.data_extractor.plugins.formation:'
            'FormationPlugin',
        ],
    },
    include_package_data=True,
)
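The `data_extractor_plugins` entry-point group above is how Spyglass discovers data-source plugins by name. A setuptools entry-point spec has the shape `name = module:attr`; a simplified sketch of how such a spec decomposes (setuptools/importlib.metadata do this for real installations):

```python
def parse_entry_point(spec):
    """Split an entry-point spec 'name = module:attr' into its parts.

    Simplified sketch for illustration; it skips the extras syntax that
    real entry-point specs may carry.
    """
    name, target = spec.split("=", 1)
    module, attr = target.strip().split(":")
    return name.strip(), module, attr


# The formation plugin spec from setup.py above decomposes into the
# plugin name, the module to import, and the class to load from it.
plugin = parse_entry_point(
    "formation=spyglass.data_extractor.plugins.formation:FormationPlugin")
```

At runtime, the engine would import the named module and fetch the named class to instantiate the plugin.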
0
spyglass/data_extractor/__init__.py
Normal file
450
spyglass/data_extractor/base.py
Normal file
@@ -0,0 +1,450 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import abc
import logging
import pprint

import six

from spyglass.utils import utils

LOG = logging.getLogger(__name__)


@six.add_metaclass(abc.ABCMeta)
class BaseDataSourcePlugin(object):
    """Provide basic hooks for data source plugins"""

    def __init__(self, region):
        self.source_type = None
        self.source_name = None
        self.region = region
        self.site_data = {}

    @abc.abstractmethod
    def set_config_opts(self, conf):
        """Placeholder to set configuration options
        specific to each plugin.

        :param dict conf: Configuration options as dict

        Example: conf = {'excel_spec': 'spec1.yaml',
                         'excel_path': 'excel.xls'}

        Each plugin will have its own config opts.
        """
        return

    @abc.abstractmethod
    def get_plugin_conf(self, kwargs):
        """Validate and return the plugin config parameters.

        If validation fails, Spyglass exits.

        :param kwargs: Spyglass CLI parameters

        :returns: plugin conf if successfully validated

        Each plugin implements its own validation mechanism.
        """
        return {}

    @abc.abstractmethod
    def get_racks(self, region):
        """Return list of racks in the region

        :param string region: Region name

        :returns: list of rack names

        :rtype: list

        Example: ['rack01', 'rack02']
        """
        return []

    @abc.abstractmethod
    def get_hosts(self, region, rack):
        """Return list of hosts in the region

        :param string region: Region name
        :param string rack: Rack name

        :returns: list of hosts information

        :rtype: list of dict

        Example: [
                     {
                         'name': 'host01',
                         'type': 'controller',
                         'host_profile': 'hp_01'
                     },
                     {
                         'name': 'host02',
                         'type': 'compute',
                         'host_profile': 'hp_02'
                     }
                 ]
        """
        return []

    @abc.abstractmethod
    def get_networks(self, region):
        """Return list of networks in the region

        :param string region: Region name

        :returns: list of networks and their vlans

        :rtype: list of dict

        Example: [
                     {
                         'name': 'oob',
                         'vlan': '41',
                         'subnet': '192.168.1.0/24',
                         'gateway': '192.168.1.1'
                     },
                     {
                         'name': 'pxe',
                         'vlan': '42',
                         'subnet': '192.168.2.0/24',
                         'gateway': '192.168.2.1'
                     },
                     {
                         'name': 'oam',
                         'vlan': '43',
                         'subnet': '192.168.3.0/24',
                         'gateway': '192.168.3.1'
                     },
                     {
                         'name': 'ksn',
                         'vlan': '44',
                         'subnet': '192.168.4.0/24',
                         'gateway': '192.168.4.1'
                     },
                     {
                         'name': 'storage',
                         'vlan': '45',
                         'subnet': '192.168.5.0/24',
                         'gateway': '192.168.5.1'
                     },
                     {
                         'name': 'overlay',
                         'vlan': '45',
                         'subnet': '192.168.6.0/24',
                         'gateway': '192.168.6.1'
                     }
                 ]
        """
        # TODO(nh863p): Expand the return type if they are rack level subnets
        # TODO(nh863p): Can ingress information be provided here?
        return []

    @abc.abstractmethod
    def get_ips(self, region, host):
        """Return list of IPs on the host

        :param string region: Region name
        :param string host: Host name

        :returns: dict of IPs per network on the host

        :rtype: dict

        Example: {'oob': {'ipv4': '192.168.1.10'},
                  'pxe': {'ipv4': '192.168.2.10'}}

        The network name from get_networks is expected to be the keys of this
        dict. In case some networks are missed, they are expected to be either
        DHCP or internally generated in the next steps by the design rules.
        """
        return {}

    @abc.abstractmethod
    def get_dns_servers(self, region):
        """Return the DNS servers

        :param string region: Region name

        :returns: list of DNS servers to be configured on host

        :rtype: list

        Example: ['8.8.8.8', '8.8.8.4']
        """
        return []

    @abc.abstractmethod
    def get_ntp_servers(self, region):
        """Return the NTP servers

        :param string region: Region name

        :returns: list of NTP servers to be configured on host

        :rtype: list

        Example: ['ntp1.ubuntu1.example', 'ntp2.ubuntu.example']
        """
        return []

    @abc.abstractmethod
    def get_ldap_information(self, region):
        """Return the LDAP server information

        :param string region: Region name

        :returns: LDAP server information

        :rtype: dict

        Example: {'url': 'ldap.example.com',
                  'common_name': 'ldap-site1',
                  'domain': 'test',
                  'subdomain': 'test_sub1'}
        """
        return {}

    @abc.abstractmethod
    def get_location_information(self, region):
        """Return location information

        :param string region: Region name

        :returns: dict of location information

        :rtype: dict

        Example: {'name': 'Dallas',
                  'physical_location': 'DAL01',
                  'state': 'Texas',
                  'country': 'US',
                  'corridor': 'CR1'}
        """
        return {}

    @abc.abstractmethod
    def get_domain_name(self, region):
        """Return the domain name

        :param string region: Region name

        :returns: domain name

        :rtype: string

        Example: example.com
        """
        return ""

    def extract_baremetal_information(self):
        """Get baremetal information from plugin

        :returns: dict of baremetal nodes

        :rtype: dict

        Return dict should be in the format
        {
            'EXAMR06': {          # rack name
                'examr06c036': {  # host name
                    'host_profile': None,
                    'ip': {
                        'overlay': {},
                        'oob': {},
                        'calico': {},
                        'oam': {},
                        'storage': {},
                        'pxe': {}
                    },
                    'rack': 'EXAMR06',
                    'type': 'compute'
                }
            }
        }
        """
        LOG.info("Extract baremetal information from plugin")
        baremetal = {}
        is_genesis = False
        hosts = self.get_hosts(self.region)

        # For each host in the list fill host profile and network IPs
        for host in hosts:
            host_name = host['name']
            rack_name = host['rack_name']

            if rack_name not in baremetal:
                baremetal[rack_name] = {}

            # Prepare temp dict for each host and append it to baremetal
            # at a rack level
            temp_host = {}
            if host['host_profile'] is None:
                temp_host['host_profile'] = "#CHANGE_ME"
            else:
                temp_host['host_profile'] = host['host_profile']

            # Get host IPs from plugin
            temp_host_ips = self.get_ips(self.region, host_name)

            # Fill network IPs for this host
            temp_host['ip'] = {}
            temp_host['ip']['oob'] = temp_host_ips[host_name].get('oob', "")
            temp_host['ip']['calico'] = temp_host_ips[host_name].get(
                'calico', "")
            temp_host['ip']['oam'] = temp_host_ips[host_name].get('oam', "")
            temp_host['ip']['storage'] = temp_host_ips[host_name].get(
                'storage', "")
            temp_host['ip']['overlay'] = temp_host_ips[host_name].get(
                'overlay', "")
            temp_host['ip']['pxe'] = temp_host_ips[host_name].get(
                'pxe', "#CHANGE_ME")

            # Fill host type (compute/controller/genesis):
            # "cp" host profile is controller,
            # "ns" host profile is compute
            if temp_host['host_profile'] == 'cp':
                # The first controller node is designated as genesis
                if is_genesis is False:
                    is_genesis = True
                    temp_host['type'] = 'genesis'
                else:
                    temp_host['type'] = 'controller'
            else:
                temp_host['type'] = 'compute'

            baremetal[rack_name][host_name] = temp_host
        LOG.debug("Baremetal information:\n{}".format(
            pprint.pformat(baremetal)))

        return baremetal

    def extract_site_information(self):
        """Get site information from plugin

        :returns: dict of site information

        :rtype: dict

        Return dict should be in the format
        {
            'name': '',
            'country': '',
            'state': '',
            'corridor': '',
            'sitetype': '',
            'dns': [],
            'ntp': [],
            'ldap': {},
            'domain': None
        }
        """
        LOG.info("Extract site information from plugin")
        site_info = {}

        # Extract location information
        location_data = self.get_location_information(self.region)
        if location_data is not None:
            site_info = location_data

        dns_data = self.get_dns_servers(self.region)
        site_info['dns'] = dns_data

        ntp_data = self.get_ntp_servers(self.region)
        site_info['ntp'] = ntp_data

        ldap_data = self.get_ldap_information(self.region)
        site_info['ldap'] = ldap_data

        domain_data = self.get_domain_name(self.region)
        site_info['domain'] = domain_data

        LOG.debug("Extracted site information:\n{}".format(
            pprint.pformat(site_info)))

        return site_info

def extract_network_information(self):
|
||||
"""Get network information from plugin
|
||||
like Subnets, DNS, NTP, LDAP details.
|
||||
|
||||
:returns: dict of baremetal nodes
|
||||
|
||||
:rtype: dict
|
||||
|
||||
Return dict should be in the format
|
||||
{
|
||||
'vlan_network_data': {
|
||||
'oam': {},
|
||||
'ingress': {},
|
||||
'oob': {}
|
||||
'calico': {},
|
||||
'storage': {},
|
||||
'pxe': {},
|
||||
'overlay': {}
|
||||
}
|
||||
}
|
||||
"""
|
||||
LOG.info("Extract network information from plugin")
|
||||
network_data = {}
|
||||
networks = self.get_networks(self.region)
|
||||
|
||||
# We are interested in only the below networks mentioned in
|
||||
# networks_to_scan, so look for these networks from the data
|
||||
# returned by plugin
|
||||
networks_to_scan = [
|
||||
'calico', 'overlay', 'pxe', 'storage', 'oam', 'oob', 'ingress'
|
||||
]
|
||||
network_data['vlan_network_data'] = {}
|
||||
|
||||
for net in networks:
|
||||
tmp_net = {}
|
||||
if net['name'] in networks_to_scan:
|
||||
tmp_net['subnet'] = net['subnet']
|
||||
tmp_net['vlan'] = net['vlan']
|
||||
|
||||
network_data['vlan_network_data'][net['name']] = tmp_net
|
||||
|
||||
LOG.debug("Extracted network data:\n{}".format(
|
||||
pprint.pformat(network_data)))
|
||||
return network_data
|
||||
|

    def extract_data(self):
        """Extract data from plugin

        Gather data related to baremetal, networks, storage and other
        site-related information from the plugin.
        """
        LOG.info("Extract data from plugin")
        site_data = {}
        site_data['baremetal'] = self.extract_baremetal_information()
        site_data['site_info'] = self.extract_site_information()
        site_data['network'] = self.extract_network_information()
        self.site_data = site_data
        return site_data

    def apply_additional_data(self, extra_data):
        """Apply any additional inputs from user

        In case the plugin does not provide some data, the user can
        supply it as additional data in the form of a dict. The
        user-provided dict is merged recursively into site_data;
        where the two overlap, the additional data takes precedence.
        """
        LOG.info("Update site data with additional input")
        tmp_site_data = utils.dict_merge(self.site_data, extra_data)
        self.site_data = tmp_site_data
        return self.site_data
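The recursive merge described in the docstring above can be sketched as follows. `utils.dict_merge` itself is not part of this diff, so the helper below is a hypothetical stand-in with the documented semantics (additional data takes precedence on conflicts):

```python
import copy


def dict_merge(base, override):
    """Recursively merge override into base; override wins on conflicts.

    Hypothetical stand-in for spyglass utils.dict_merge, which this
    diff references but does not include.
    """
    result = copy.deepcopy(base)
    for key, value in override.items():
        if (key in result and isinstance(result[key], dict)
                and isinstance(value, dict)):
            result[key] = dict_merge(result[key], value)
        else:
            result[key] = value
    return result


site_data = {'site_info': {'dns': ['8.8.8.8'], 'domain': None}}
extra_data = {'site_info': {'domain': 'example.com'}}
merged = dict_merge(site_data, extra_data)
# The user-supplied domain fills the gap; the extracted dns survives.
```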
46
spyglass/data_extractor/custom_exceptions.py
Normal file
@@ -0,0 +1,46 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import sys

LOG = logging.getLogger(__name__)


class BaseError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def display_error(self):
        LOG.info(self.msg)
        sys.exit(1)


class MissingAttributeError(BaseError):
    pass


class MissingValueError(BaseError):
    pass


class ApiClientError(BaseError):
    pass


class TokenGenerationError(BaseError):
    pass


class ConnectionError(BaseError):
    pass
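A minimal illustration of how this exception hierarchy is consumed. The `fetch_token` call site below is hypothetical; only the exception classes mirror the module above:

```python
import logging

LOG = logging.getLogger(__name__)


# Same shape as the classes in custom_exceptions.py above
class BaseError(Exception):
    def __init__(self, msg):
        self.msg = msg


class TokenGenerationError(BaseError):
    pass


def fetch_token(response_ok):
    # Hypothetical call site: raise when the token request fails
    if not response_ok:
        raise TokenGenerationError('Unable to generate token')
    return 'token-123'


try:
    fetch_token(response_ok=False)
except TokenGenerationError as e:
    LOG.error(e.msg)  # or e.display_error() to log and exit(1)
```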
0
spyglass/data_extractor/plugins/__init__.py
Normal file
496
spyglass/data_extractor/plugins/formation.py
Normal file
@@ -0,0 +1,496 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import pprint
import re

import formation_client
import requests
import urllib3

from spyglass.data_extractor.base import BaseDataSourcePlugin
from spyglass.data_extractor.custom_exceptions import (
    ApiClientError, ConnectionError, MissingAttributeError,
    TokenGenerationError)

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

LOG = logging.getLogger(__name__)

class FormationPlugin(BaseDataSourcePlugin):
    def __init__(self, region):
        # Save the site name if it is valid
        try:
            assert region is not None
            super().__init__(region)
        except AssertionError:
            LOG.error("Site: None! Spyglass exited!")
            LOG.info("Check spyglass --help for details")
            exit()

        self.source_type = 'rest'
        self.source_name = 'formation'

        # Configuration parameters
        self.formation_api_url = None
        self.user = None
        self.password = None
        self.token = None

        # Formation objects
        self.client_config = None
        self.formation_api_client = None

        # Site related data
        self.region_zone_map = {}
        self.site_name_id_mapping = {}
        self.zone_name_id_mapping = {}
        self.region_name_id_mapping = {}
        self.rack_name_id_mapping = {}
        self.device_name_id_mapping = {}
        LOG.info("Initiated data extractor plugin:{}".format(self.source_name))

    def set_config_opts(self, conf):
        """Sets the config params passed by CLI"""
        LOG.info("Plugin params passed:\n{}".format(pprint.pformat(conf)))
        self._validate_config_options(conf)
        self.formation_api_url = conf['url']
        self.user = conf['user']
        self.password = conf['password']
        self.token = conf.get('token', None)

        self._get_formation_client()
        self._update_site_and_zone(self.region)

    def get_plugin_conf(self, kwargs):
        """Validates the plugin params and returns them on success"""
        try:
            assert kwargs['formation_url'] is not None, \
                "formation_url is not specified"
            url = kwargs['formation_url']
            assert kwargs['formation_user'] is not None, \
                "formation_user is not specified"
            user = kwargs['formation_user']
            assert kwargs['formation_password'] is not None, \
                "formation_password is not specified"
            password = kwargs['formation_password']
        except AssertionError:
            LOG.error("Insufficient plugin parameters! Spyglass exited!")
            raise

        plugin_conf = {'url': url, 'user': user, 'password': password}
        return plugin_conf

    def _validate_config_options(self, conf):
        """Validate the CLI params passed

        The method checks for missing parameters and terminates
        Spyglass execution if any are found.
        """
        missing_params = []
        for key in conf.keys():
            if conf[key] is None:
                missing_params.append(key)
        if len(missing_params) != 0:
            LOG.error("Missing plugin params: {}".format(missing_params))
            exit()

    # Helper functions used internally within this plugin

    def _generate_token(self):
        """Generate token for Formation

        The Formation API does not provide a separate resource to
        generate a token, so as a workaround we call the Formation
        API directly to get a token instead of using the Formation
        client.
        """
        # Create formation client config object
        self.client_config = formation_client.Configuration()
        self.client_config.host = self.formation_api_url
        self.client_config.username = self.user
        self.client_config.password = self.password
        self.client_config.verify_ssl = False

        # Assumes the token never expires during the execution of this tool
        if self.token:
            return self.token

        url = self.formation_api_url + '/zones'
        try:
            token_response = requests.get(
                url,
                auth=(self.user, self.password),
                verify=self.client_config.verify_ssl)
        except requests.exceptions.ConnectionError:
            raise ConnectionError('Incorrect URL: {}'.format(url))

        if token_response.status_code == 200:
            self.token = token_response.json().get('X-Subject-Token', None)
        else:
            raise TokenGenerationError(
                'Unable to generate token because {}'.format(
                    token_response.reason))

        return self.token

    def _get_formation_client(self):
        """Create formation client object

        Formation uses X-Auth-Token for authentication, which must be
        in the format "user|token".
        Generate the token and add it to the formation config object.
        """
        token = self._generate_token()
        self.client_config.api_key = {'X-Auth-Token': self.user + '|' + token}
        self.formation_api_client = formation_client.ApiClient(
            self.client_config)

    def _update_site_and_zone(self, region):
        """Get zone name and site name from region"""
        zone = self._get_zone_by_region_name(region)
        site = self._get_site_by_zone_name(zone)

        # zone = region[:-1]
        # site = zone[:-1]

        self.region_zone_map[region] = {}
        self.region_zone_map[region]['zone'] = zone
        self.region_zone_map[region]['site'] = site

    def _get_zone_by_region_name(self, region_name):
        zone_api = formation_client.ZonesApi(self.formation_api_client)
        zones = zone_api.zones_get()

        # Walk through each zone and get its regions;
        # return when the region name matches
        for zone in zones:
            self.zone_name_id_mapping[zone.name] = zone.id
            zone_regions = self.get_regions(zone.name)
            if region_name in zone_regions:
                return zone.name

        return None

    def _get_site_by_zone_name(self, zone_name):
        site_api = formation_client.SitesApi(self.formation_api_client)
        sites = site_api.sites_get()

        # Walk through each site and get its zones;
        # return when the zone name matches
        for site in sites:
            self.site_name_id_mapping[site.name] = site.id
            site_zones = self.get_zones(site.name)
            if zone_name in site_zones:
                return site.name

        return None

    def _get_site_id_by_name(self, site_name):
        if site_name in self.site_name_id_mapping:
            return self.site_name_id_mapping.get(site_name)

        site_api = formation_client.SitesApi(self.formation_api_client)
        sites = site_api.sites_get()
        for site in sites:
            self.site_name_id_mapping[site.name] = site.id
            if site.name == site_name:
                return site.id

    def _get_zone_id_by_name(self, zone_name):
        if zone_name in self.zone_name_id_mapping:
            return self.zone_name_id_mapping.get(zone_name)

        zone_api = formation_client.ZonesApi(self.formation_api_client)
        zones = zone_api.zones_get()
        for zone in zones:
            if zone.name == zone_name:
                self.zone_name_id_mapping[zone.name] = zone.id
                return zone.id

    def _get_region_id_by_name(self, region_name):
        if region_name in self.region_name_id_mapping:
            return self.region_name_id_mapping.get(region_name)

        for zone in self.zone_name_id_mapping:
            self.get_regions(zone)

        return self.region_name_id_mapping.get(region_name, None)

    def _get_rack_id_by_name(self, rack_name):
        if rack_name in self.rack_name_id_mapping:
            return self.rack_name_id_mapping.get(rack_name)

        for zone in self.zone_name_id_mapping:
            self.get_racks(zone)

        return self.rack_name_id_mapping.get(rack_name, None)

    def _get_device_id_by_name(self, device_name):
        if device_name in self.device_name_id_mapping:
            return self.device_name_id_mapping.get(device_name)

        # get_hosts() expects a region name; self.zone is never set on
        # this class, so use the region saved at construction time
        self.get_hosts(self.region)

        return self.device_name_id_mapping.get(device_name, None)

    def _get_racks(self, zone, rack_type='compute'):
        zone_id = self._get_zone_id_by_name(zone)
        rack_api = formation_client.RacksApi(self.formation_api_client)
        racks = rack_api.zones_zone_id_racks_get(zone_id)

        racks_list = []
        for rack in racks:
            rack_name = rack.name
            self.rack_name_id_mapping[rack_name] = rack.id
            if rack.rack_type.name == rack_type:
                racks_list.append(rack_name)

        return racks_list

    # Functions that will be used internally within this plugin

    def get_zones(self, site=None):
        zone_api = formation_client.ZonesApi(self.formation_api_client)

        if site is None:
            zones = zone_api.zones_get()
        else:
            site_id = self._get_site_id_by_name(site)
            zones = zone_api.sites_site_id_zones_get(site_id)

        zones_list = []
        for zone in zones:
            zone_name = zone.name
            self.zone_name_id_mapping[zone_name] = zone.id
            zones_list.append(zone_name)

        return zones_list

    def get_regions(self, zone):
        zone_id = self._get_zone_id_by_name(zone)
        region_api = formation_client.RegionApi(self.formation_api_client)
        regions = region_api.zones_zone_id_regions_get(zone_id)
        regions_list = []
        for region in regions:
            region_name = region.name
            self.region_name_id_mapping[region_name] = region.id
            regions_list.append(region_name)

        return regions_list

    # Implement abstract functions

    def get_racks(self, region):
        zone = self.region_zone_map[region]['zone']
        return self._get_racks(zone, rack_type='compute')

    def get_hosts(self, region, rack=None):
        zone = self.region_zone_map[region]['zone']
        zone_id = self._get_zone_id_by_name(zone)
        device_api = formation_client.DevicesApi(self.formation_api_client)
        control_hosts = device_api.zones_zone_id_control_nodes_get(zone_id)
        compute_hosts = device_api.zones_zone_id_devices_get(
            zone_id, type='KVM')

        hosts_list = []
        for host in control_hosts:
            self.device_name_id_mapping[host.aic_standard_name] = host.id
            hosts_list.append({
                'name': host.aic_standard_name,
                'type': 'controller',
                'rack_name': host.rack_name,
                'host_profile': host.host_profile_name
            })

        for host in compute_hosts:
            self.device_name_id_mapping[host.aic_standard_name] = host.id
            hosts_list.append({
                'name': host.aic_standard_name,
                'type': 'compute',
                'rack_name': host.rack_name,
                'host_profile': host.host_profile_name
            })
        """
        for host in itertools.chain(control_hosts, compute_hosts):
            self.device_name_id_mapping[host.aic_standard_name] = host.id
            hosts_list.append({
                'name': host.aic_standard_name,
                'type': host.categories[0],
                'rack_name': host.rack_name,
                'host_profile': host.host_profile_name
            })
        """

        return hosts_list

    def get_networks(self, region):
        zone = self.region_zone_map[region]['zone']
        zone_id = self._get_zone_id_by_name(zone)
        region_id = self._get_region_id_by_name(region)
        vlan_api = formation_client.VlansApi(self.formation_api_client)
        vlans = vlan_api.zones_zone_id_regions_region_id_vlans_get(
            zone_id, region_id)

        # Case when the vlans list returned by
        # zones_zone_id_regions_region_id_vlans_get is empty
        if len(vlans) == 0:
            # Get the device-id from the first host and use it to
            # fetch the network details
            hosts = self.get_hosts(region)
            host = hosts[0]['name']
            device_id = self._get_device_id_by_name(host)
            vlans = vlan_api.zones_zone_id_devices_device_id_vlans_get(
                zone_id, device_id)

        LOG.debug("Extracted region network information\n{}".format(vlans))
        vlans_list = []
        for vlan_ in vlans:
            if len(vlan_.vlan.ipv4) != 0:
                tmp_vlan = {}
                tmp_vlan['name'] = self._get_network_name_from_vlan_name(
                    vlan_.vlan.name)
                tmp_vlan['vlan'] = vlan_.vlan.vlan_id
                tmp_vlan['subnet'] = vlan_.vlan.subnet_range
                tmp_vlan['gateway'] = vlan_.ipv4_gateway
                tmp_vlan['subnet_level'] = vlan_.vlan.subnet_level
                vlans_list.append(tmp_vlan)

        return vlans_list

    def get_ips(self, region, host=None):
        zone = self.region_zone_map[region]['zone']
        zone_id = self._get_zone_id_by_name(zone)

        if host:
            hosts = [host]
        else:
            hosts = []
            # get_hosts() expects a region name, not a zone name
            hosts_dict = self.get_hosts(region)
            for host_ in hosts_dict:
                hosts.append(host_['name'])

        vlan_api = formation_client.VlansApi(self.formation_api_client)
        ip_ = {}

        for host in hosts:
            device_id = self._get_device_id_by_name(host)
            vlans = vlan_api.zones_zone_id_devices_device_id_vlans_get(
                zone_id, device_id)
            LOG.debug("Received VLAN Network Information\n{}".format(vlans))
            ip_[host] = {}
            for vlan_ in vlans:
                # TODO(pg710r) We need to handle the case when the
                # incoming ipv4 list is empty
                if len(vlan_.vlan.ipv4) != 0:
                    name = self._get_network_name_from_vlan_name(
                        vlan_.vlan.name)
                    ipv4 = vlan_.vlan.ipv4[0].ip
                    LOG.debug("vlan:{},name:{},ip:{},vlan_name:{}".format(
                        vlan_.vlan.vlan_id, name, ipv4, vlan_.vlan.name))
                    # TODO(pg710r) This code needs to be extended to
                    # support both ipv4 and ipv6
                    # ip_[host][name] = {'ipv4': ipv4}
                    ip_[host][name] = ipv4

        return ip_

    def _get_network_name_from_vlan_name(self, vlan_name):
        """Map a vlan name to a network name

        Network names are calico, oam, oob, overlay, storage and pxe.
        The following mapping rules apply:
        vlan_name contains "ksn": the network name is "calico"
        vlan_name contains "storage": the network name is "storage"
        vlan_name contains "server": the network name is "oam"
        vlan_name contains "ovs": the network name is "overlay"
        vlan_name contains "ILO": the network name is "oob"
        vlan_name contains "pxe": the network name is "pxe"
        """
        network_names = {
            'ksn': 'calico',
            'storage': 'storage',
            'server': 'oam',
            'ovs': 'overlay',
            'ILO': 'oob',
            'pxe': 'pxe'
        }

        for name in network_names:
            # Build a case-insensitive pattern:
            # if name is 'ksn' the pattern is '(?i)(ksn)'
            name_pattern = "(?i)({})".format(name)
            if re.search(name_pattern, vlan_name):
                return network_names[name]

        return ""

    def get_dns_servers(self, region):
        try:
            zone = self.region_zone_map[region]['zone']
            zone_id = self._get_zone_id_by_name(zone)
            zone_api = formation_client.ZonesApi(self.formation_api_client)
            zone_ = zone_api.zones_zone_id_get(zone_id)
        except formation_client.rest.ApiException as e:
            raise ApiClientError(e.msg)

        if not zone_.ipv4_dns:
            LOG.warning("No dns server")
            return []

        dns_list = []
        for dns in zone_.ipv4_dns:
            dns_list.append(dns.ip)

        return dns_list

    def get_ntp_servers(self, region):
        return []

    def get_ldap_information(self, region):
        return {}

    def get_location_information(self, region):
        """Get location information for a zone and return it"""
        site = self.region_zone_map[region]['site']
        site_id = self._get_site_id_by_name(site)
        site_api = formation_client.SitesApi(self.formation_api_client)
        site_info = site_api.sites_site_id_get(site_id)

        try:
            return {
                # 'corridor': site_info.corridor,
                'name': site_info.city,
                'state': site_info.state,
                'country': site_info.country,
                'physical_location_id': site_info.clli,
            }
        except AttributeError as e:
            raise MissingAttributeError('Missing {} information in {}'.format(
                e, site_info.city))

    def get_domain_name(self, region):
        try:
            zone = self.region_zone_map[region]['zone']
            zone_id = self._get_zone_id_by_name(zone)
            zone_api = formation_client.ZonesApi(self.formation_api_client)
            zone_ = zone_api.zones_zone_id_get(zone_id)
        except formation_client.rest.ApiException as e:
            raise ApiClientError(e.msg)

        if not zone_.dns:
            LOG.warning('Got None while running get domain name')
            return None

        return zone_.dns
0
spyglass/parser/__init__.py
Normal file
289
spyglass/parser/engine.py
Normal file
@@ -0,0 +1,289 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import json
import logging
import pprint
import sys

import jsonschema
import netaddr
import pkg_resources
import yaml

LOG = logging.getLogger(__name__)


class ProcessDataSource():
    def __init__(self, sitetype):
        # Initialize intermediary and save site type
        self._initialize_intermediary()
        self.region_name = sitetype

    @staticmethod
    def _read_file(file_name):
        with open(file_name, 'r') as f:
            raw_data = f.read()
        return raw_data

    def _initialize_intermediary(self):
        self.host_type = {}
        self.data = {
            'network': {},
            'baremetal': {},
            'region_name': '',
            'storage': {},
            'site_info': {},
        }
        self.sitetype = None
        self.genesis_node = None
        self.region_name = None

    def _get_network_subnets(self):
        # Extract subnet information for networks
        LOG.info("Extracting network subnets")
        network_subnets = {}
        for net_type in self.data['network']['vlan_network_data']:
            # One of the types is ingress, which we don't want here
            if net_type != 'ingress':
                network_subnets[net_type] = netaddr.IPNetwork(
                    self.data['network']['vlan_network_data'][net_type]
                    ['subnet'])

        LOG.debug("Network subnets:\n{}".format(
            pprint.pformat(network_subnets)))
        return network_subnets

    def _get_genesis_node_details(self):
        # Find and save the genesis node details
        LOG.info("Getting Genesis Node Details")
        for rack in self.data['baremetal'].keys():
            rack_hosts = self.data['baremetal'][rack]
            for host in rack_hosts:
                if rack_hosts[host]['type'] == 'genesis':
                    self.genesis_node = rack_hosts[host]
                    self.genesis_node['name'] = host

        LOG.debug("Genesis Node Details:{}".format(
            pprint.pformat(self.genesis_node)))

    def _validate_extracted_data(self, data):
        """Validates the data extracted from the input source.

        It checks whether the data types and data format are as
        expected. The method validates this with the regex patterns
        defined for each data type in the schema.
        """
        LOG.info('Validating data read from extracted data')
        temp_data = {}
        temp_data = copy.deepcopy(data)

        # Converting baremetal dict to list.
        baremetal_list = []
        for rack in temp_data['baremetal'].keys():
            temp = [{k: v} for k, v in temp_data['baremetal'][rack].items()]
            baremetal_list = baremetal_list + temp

        temp_data['baremetal'] = baremetal_list
        schema_dir = pkg_resources.resource_filename('spyglass', 'schemas/')
        schema_file = schema_dir + "data_schema.json"
        json_data = json.loads(json.dumps(temp_data))
        with open(schema_file, 'r') as f:
            json_schema = json.load(f)

        try:
            # Writes data2.json as a debugging aid
            with open('data2.json', 'w') as outfile:
                json.dump(temp_data, outfile, sort_keys=True, indent=4)
            jsonschema.validate(json_data, json_schema)
        except jsonschema.exceptions.ValidationError as e:
            LOG.error("Validation Error")
            LOG.error("Message:{}".format(e.message))
            LOG.error("Validator_path:{}".format(e.path))
            LOG.error("Validator_pattern:{}".format(e.validator_value))
            LOG.error("Validator:{}".format(e.validator))
            sys.exit()
        except jsonschema.exceptions.SchemaError as e:
            LOG.error("Schema Validation Error!!")
            LOG.error("Message:{}".format(e.message))
            LOG.error("Schema:{}".format(e.schema))
            LOG.error("Validator_value:{}".format(e.validator_value))
            LOG.error("Validator:{}".format(e.validator))
            LOG.error("path:{}".format(e.path))
            sys.exit()

        LOG.info("Data validation Passed!")

    def _apply_design_rules(self):
        """Applies design rules from rules.yaml

        These rules are used to determine ip address allocation
        ranges, host profile interfaces and also to create hardware
        profile information. The method calls the corresponding rule
        handler function based on the rule name and applies it to the
        appropriate data objects.
        """
        LOG.info("Apply design rules")
        rules_dir = pkg_resources.resource_filename('spyglass', 'config/')
        rules_file = rules_dir + 'rules.yaml'
        rules_data_raw = self._read_file(rules_file)
        rules_yaml = yaml.safe_load(rules_data_raw)
        rules_data = {}
        rules_data.update(rules_yaml)

        for rule in rules_data.keys():
            rule_name = rules_data[rule]['name']
            function_str = "_apply_rule_" + rule_name
            rule_data_name = rules_data[rule][rule_name]
            function = getattr(self, function_str)
            LOG.info("Applying rule:{}".format(rule_name))
            function(rule_data_name)

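The `getattr`-based dispatch above resolves each rule name to a handler method at runtime. Below is a self-contained sketch of that pattern; the rule name and offset values are illustrative, not the shipped contents of rules.yaml:

```python
class RuleEngine:
    """Minimal sketch of name-based rule dispatch."""

    def __init__(self):
        self.applied = []

    def apply_rules(self, rules_data):
        for rule in rules_data:
            rule_name = rules_data[rule]['name']
            # Resolve '_apply_rule_<name>' to a bound method at runtime
            handler = getattr(self, '_apply_rule_' + rule_name)
            handler(rules_data[rule][rule_name])

    def _apply_rule_ip_alloc_offset(self, rule_data):
        self.applied.append(('ip_alloc_offset', rule_data['default']))


# Illustrative rule data mirroring the rules.yaml shape implied above
rules = {
    'rule_ip_alloc_offset': {
        'name': 'ip_alloc_offset',
        'ip_alloc_offset': {'default': 10},
    },
}
engine = RuleEngine()
engine.apply_rules(rules)
```

Adding a new rule only requires a new `_apply_rule_<name>` method plus a matching entry in rules.yaml; no dispatch table needs editing.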
    def _apply_rule_host_profile_interfaces(self, rule_data):
        pass

    def _apply_rule_hardware_profile(self, rule_data):
        pass

    def _apply_rule_ip_alloc_offset(self, rule_data):
        """Offset allocation rules to determine ip address range(s)

        This rule is applied to the incoming network data to determine
        the network address, gateway ip and other address ranges.
        """
        LOG.info("Apply network design rules")
        vlan_network_data = {}

        # Collect rules
        default_ip_offset = rule_data['default']
        oob_ip_offset = rule_data['oob']
        gateway_ip_offset = rule_data['gateway']
        ingress_vip_offset = rule_data['ingress_vip']
        # static_ip_end_offset for non-pxe networks
        static_ip_end_offset = rule_data['static_ip_end']
        # dhcp_ip_end_offset for the pxe network
        dhcp_ip_end_offset = rule_data['dhcp_ip_end']

        # Set ingress vip and CIDR for bgp
        LOG.info("Applying rule to network bgp data")
        subnet = netaddr.IPNetwork(
            self.data['network']['vlan_network_data']['ingress']['subnet'][0])
        ips = list(subnet)
        self.data['network']['bgp']['ingress_vip'] = str(
            ips[ingress_vip_offset])
        self.data['network']['bgp']['public_service_cidr'] = self.data[
            'network']['vlan_network_data']['ingress']['subnet'][0]
        LOG.debug("Updated network bgp data:\n{}".format(
            pprint.pformat(self.data['network']['bgp'])))

        LOG.info("Applying rule to vlan network data")
        # Get network subnets
        network_subnets = self._get_network_subnets()
        # Apply rules to vlan networks
        for net_type in network_subnets:
            if net_type == 'oob':
                ip_offset = oob_ip_offset
            else:
                ip_offset = default_ip_offset
            vlan_network_data[net_type] = {}
            subnet = network_subnets[net_type]
            ips = list(subnet)

            vlan_network_data[net_type]['network'] = str(
                network_subnets[net_type])

            vlan_network_data[net_type]['gateway'] = str(
                ips[gateway_ip_offset])

            vlan_network_data[net_type]['reserved_start'] = str(ips[1])
            vlan_network_data[net_type]['reserved_end'] = str(ips[ip_offset])

            static_start = str(ips[ip_offset + 1])
            static_end = str(ips[static_ip_end_offset])

            if net_type == 'pxe':
                mid = len(ips) // 2
                static_end = str(ips[mid - 1])
                dhcp_start = str(ips[mid])
                dhcp_end = str(ips[dhcp_ip_end_offset])

                vlan_network_data[net_type]['dhcp_start'] = dhcp_start
                vlan_network_data[net_type]['dhcp_end'] = dhcp_end

            vlan_network_data[net_type]['static_start'] = static_start
            vlan_network_data[net_type]['static_end'] = static_end

            # There is no vlan for the oob network
            if net_type != 'oob':
                vlan_network_data[net_type]['vlan'] = self.data['network'][
                    'vlan_network_data'][net_type]['vlan']

            # OAM has default routes. Only for cruiser. TBD
            if net_type == 'oam':
                routes = ["0.0.0.0/0"]
            else:
                routes = []
            vlan_network_data[net_type]['routes'] = routes

            # Update network data in self.data
            self.data['network']['vlan_network_data'][
                net_type] = vlan_network_data[net_type]

        LOG.debug("Updated vlan network data:\n{}".format(
            pprint.pformat(vlan_network_data)))

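The offset arithmetic above can be seen in isolation. The engine uses `netaddr`; the stdlib `ipaddress` module indexes addresses the same way for this sketch, and the subnet and offsets here are illustrative values, not the shipped rules:

```python
import ipaddress

# Illustrative offsets; the real values come from rules.yaml
default_ip_offset = 10
gateway_ip_offset = 1
static_ip_end_offset = -2

subnet = ipaddress.ip_network('10.0.100.0/24')
# Enumerate every address in the subnet, like list(netaddr.IPNetwork(...))
ips = [subnet.network_address + i for i in range(subnet.num_addresses)]

gateway = str(ips[gateway_ip_offset])           # first usable address
reserved_start = str(ips[1])
reserved_end = str(ips[default_ip_offset])      # addresses held back
static_start = str(ips[default_ip_offset + 1])  # static range begins here
static_end = str(ips[static_ip_end_offset])     # stop short of broadcast
```

Negative offsets count back from the end of the subnet, which is why `static_ip_end` of -2 leaves the broadcast address untouched.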
    def load_extracted_data_from_data_source(self, extracted_data):
        """Called from spyglass.py to pass in the data extracted from
        the input data source.
        """
        LOG.info("Load extracted data from data source")
        self._validate_extracted_data(extracted_data)
        self.data = extracted_data
        LOG.debug("Extracted data from plugin data source:\n{}".format(
            pprint.pformat(extracted_data)))
        extracted_file = "extracted_file.yaml"
        yaml_file = yaml.dump(extracted_data, default_flow_style=False)
        with open(extracted_file, 'w') as f:
            f.write(yaml_file)

        # Append region_data supplied from CLI to self.data
        self.data['region_name'] = self.region_name

    def dump_intermediary_file(self, intermediary_dir):
        """Dump intermediary yaml"""
        LOG.info("Dumping intermediary yaml")
        intermediary_file = "{}_intermediary.yaml".format(
            self.data['region_name'])

        # Check if the output dir intermediary_dir exists
        if intermediary_dir is not None:
            outfile = "{}/{}".format(intermediary_dir, intermediary_file)
        else:
            outfile = intermediary_file
        LOG.info("Intermediary file dir:{}".format(outfile))
        yaml_file = yaml.dump(self.data, default_flow_style=False)
        with open(outfile, 'w') as f:
            f.write(yaml_file)

|
||||
def generate_intermediary_yaml(self):
|
||||
""" Generating intermediary yaml """
|
||||
LOG.info("Generating intermediary yaml")
|
||||
self._apply_design_rules()
|
||||
self._get_genesis_node_details()
|
||||
self.intermediary_yaml = self.data
|
||||
return self.intermediary_yaml
|
362
spyglass/schemas/data_schema.json
Normal file
@@ -0,0 +1,362 @@
{
    "$schema": "http://json-schema.org/schema#",
    "title": "All",
    "description": "All information",
    "type": "object",
    "properties": {
        "baremetal": {
            "type": "array",
            "items": {
                "type": "object",
                "$ref": "#/definitions/baremetal_list"
            }
        },
        "network": {
            "type": "object",
            "properties": {
                "bgp": {
                    "type": "object",
                    "$ref": "#/definitions/bgp"
                },
                "vlan_network_data": {
                    "type": "array",
                    "$ref": "#/definitions/vlan_network_data"
                }
            },
            "required": [
                "bgp",
                "vlan_network_data"
            ]
        },
        "site_info": {
            "type": "object",
            "$ref": "#/definitions/site_info"
        },
        "storage": {
            "type": "object",
            "$ref": "#/definitions/storage"
        }
    },
    "required": [
        "baremetal",
        "network",
        "site_info",
        "storage"
    ],
    "definitions": {
        "baremetal_list": {
            "type": "object",
            "patternProperties": {
                ".*": {
                    "properties": {
                        "ip": {
                            "type": "object",
                            "properties": {
                                "calico": {
                                    "type": "string",
                                    "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
                                },
                                "oam": {
                                    "type": "string",
                                    "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
                                },
                                "oob": {
                                    "type": "string",
                                    "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
                                },
                                "overlay": {
                                    "type": "string",
                                    "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
                                },
                                "pxe": {
                                    "type": "string",
                                    "pattern": "^((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])|#CHANGE_ME)$"
                                },
                                "storage": {
                                    "type": "string",
                                    "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
                                }
                            },
                            "required": [
                                "calico",
                                "oam",
                                "oob",
                                "overlay",
                                "pxe",
                                "storage"
                            ]
                        },
                        "host_profile": {
                            "description": "Host profile of the host",
                            "type": "string",
                            "pattern": "^(([a-zA-Z]+)|(#CHANGE_ME))$"
                        },
                        "type": {
                            "description": "Host profile type: compute, controller or genesis",
                            "type": "string",
                            "pattern": "^(?i)(compute|controller|genesis)$"
                        }
                    },
                    "required": [
                        "ip",
                        "host_profile",
                        "type"
                    ]
                }
            }
        },
        "bgp": {
            "type": "object",
            "properties": {
                "asnumber": {
                    "type": "integer",
                    "pattern": "^[0-9]{1,10}$"
                },
                "peer_asnumber": {
                    "type": "integer",
                    "pattern": "^[0-9]{1,10}$"
                },
                "peers": {
                    "type": "array",
                    "items": [
                        {
                            "type": "string",
                            "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
                        }
                    ]
                }
            },
            "required": [
                "asnumber",
                "peer_asnumber",
                "peers"
            ]
        },
        "vlan_network_data": {
            "type": "object",
            "properties": {
                "calico": {
                    "type": "object",
                    "properties": {
                        "subnet": {
                            "description": "Subnet address of the network",
                            "type": "string",
                            "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-2][0-9]|3[0-2])$"
                        },
                        "vlan": {
                            "description": "Vlan id of the network",
                            "type": "string",
                            "pattern": "^([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-3][0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$"
                        }
                    },
                    "required": [
                        "subnet",
                        "vlan"
                    ]
                },
                "ingress": {
                    "type": "object",
                    "properties": {
                        "subnet": {
                            "description": "Subnet address of the network",
                            "type": "array",
                            "items": [
                                {
                                    "type": "string",
                                    "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-2][0-9]|3[0-2])$"
                                }
                            ]
                        },
                        "vlan": {
                            "description": "Vlan id of the network",
                            "type": "string",
                            "pattern": "^([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-3][0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$"
                        }
                    },
                    "required": [
                        "subnet"
                    ]
                },
                "oam": {
                    "type": "object",
                    "properties": {
                        "subnet": {
                            "description": "Subnet address of the network",
                            "type": "string",
                            "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-2][0-9]|3[0-2])$"
                        },
                        "vlan": {
                            "description": "Vlan id of the network",
                            "type": "string",
                            "pattern": "^([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-3][0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$"
                        }
                    },
                    "required": [
                        "subnet",
                        "vlan"
                    ]
                },
                "oob": {
                    "type": "object",
                    "properties": {
                        "subnet": {
                            "description": "Subnet address of the network",
                            "type": "string",
                            "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-2][0-9]|3[0-2])$"
                        },
                        "vlan": {
                            "description": "Vlan id of the network",
                            "type": "string",
                            "pattern": "^([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-3][0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$"
                        }
                    },
                    "required": [
                        "subnet",
                        "vlan"
                    ]
                },
                "pxe": {
                    "type": "object",
                    "properties": {
                        "subnet": {
                            "description": "Subnet address of the network",
                            "type": "string",
                            "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-2][0-9]|3[0-2])$"
                        },
                        "vlan": {
                            "description": "Vlan id of the network",
                            "type": "string",
                            "pattern": "^([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-3][0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$"
                        }
                    },
                    "required": [
                        "subnet",
                        "vlan"
                    ]
                },
                "storage": {
                    "type": "object",
                    "properties": {
                        "subnet": {
                            "description": "Subnet address of the network",
                            "type": "string",
                            "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-2][0-9]|3[0-2])$"
                        },
                        "vlan": {
                            "description": "Vlan id of the network",
                            "type": "string",
                            "pattern": "^([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-3][0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$"
                        }
                    },
                    "required": [
                        "subnet",
                        "vlan"
                    ]
                }
            },
            "required": [
                "calico",
                "ingress",
                "oam",
                "oob",
                "overlay",
                "pxe",
                "storage"
            ]
        },
        "site_info": {
            "type": "object",
            "properties": {
                "dns": {
                    "type": "object",
                    "properties": {
                        "servers": {
                            "type": "string",
                            "pattern": "^((((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]),)+)|(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))+))+$"
                        }
                    }
                },
                "ntp": {
                    "type": "object",
                    "properties": {
                        "servers": {
                            "type": "string",
                            "pattern": "^((((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]),)+)|(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))+))+$"
                        }
                    }
                },
                "ldap": {
                    "type": "object",
                    "properties": {
                        "common_name": {
                            "type": "string",
                            "pattern": "\\W+|\\w+"
                        },
                        "subdomain": {
                            "type": "string",
                            "pattern": "(?i)\\w+"
                        },
                        "url": {
                            "type": "string",
                            "pattern": "^\\w+://\\w+.*\\.[a-zA-Z]{2,3}$"
                        }
                    },
                    "required": [
                        "common_name",
                        "subdomain",
                        "url"
                    ]
                },
                "country": {
                    "type": "string",
                    "pattern": "(?i)\\w+"
                },
                "name": {
                    "type": "string",
                    "pattern": "(?i)\\w+"
                },
                "state": {
                    "type": "string",
                    "pattern": "(?i)\\w+"
                },
                "sitetype": {
                    "type": "string",
                    "pattern": "(?i)\\w+"
                },
                "physical_location_id": {
                    "type": "string",
                    "pattern": "^\\w+"
                },
                "domain": {
                    "type": "string",
                    "pattern": "^\\w+.*\\.[a-zA-Z]{2,3}$"
                }
            },
            "required": [
                "dns",
                "ntp",
                "ldap",
                "country",
                "name",
                "state",
                "sitetype",
                "physical_location_id",
                "domain"
            ]
        },
        "storage": {
            "type": "object",
            "patternProperties": {
                "ceph": {
                    "controller": {
                        "osd_count": {
                            "type": "integer",
                            "pattern": "^[0-9]{1,2}$"
                        }
                    }
                }
            }
        }
    }
}
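The dotted-quad IPv4 pattern repeated throughout the schema above can be exercised directly. This is a minimal sketch using Python's `re`, with the octet pattern copied from the schema; the `is_valid_ip` helper is just for illustration:

```python
import re

# The dotted-quad pattern the schema uses for IP fields: each octet
# matches 0-255 without extra leading zeros.
OCTET = r"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
IPV4 = r"^(%s\.){3}%s$" % (OCTET, OCTET)

def is_valid_ip(value):
    return re.match(IPV4, value) is not None

print(is_valid_ip("10.0.0.1"))   # True
print(is_valid_ip("256.0.0.1"))  # False: 256 exceeds the octet range
print(is_valid_ip("10.0.0"))     # False: only three octets
```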
0
spyglass/site_processors/__init__.py
Normal file
44
spyglass/site_processors/base.py
Normal file
@@ -0,0 +1,44 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class BaseProcessor:
    def __init__(self, file_name):
        pass

    def render_template(self, template):
        pass

    @staticmethod
    def get_role_wise_nodes(yaml_data):
        hosts = {
            'genesis': {},
            'masters': [],
            'workers': [],
        }

        for rack in yaml_data['baremetal']:
            for host in yaml_data['baremetal'][rack]:
                if yaml_data['baremetal'][rack][host]['type'] == 'genesis':
                    hosts['genesis'] = {
                        'name': host,
                        'pxe': yaml_data['baremetal'][rack][host]['ip']['pxe'],
                        'oam': yaml_data['baremetal'][rack][host]['ip']['oam'],
                    }
                elif yaml_data['baremetal'][rack][host][
                        'type'] == 'controller':
                    hosts['masters'].append(host)
                else:
                    hosts['workers'].append(host)
        return hosts
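A quick run of `get_role_wise_nodes` over a made-up intermediary fragment. The function body is reproduced here so the sketch is self-contained, and the rack/host names and IPs are invented for illustration:

```python
# Self-contained copy of get_role_wise_nodes: group hosts by type into
# genesis / masters (controllers) / workers (everything else).
def get_role_wise_nodes(yaml_data):
    hosts = {'genesis': {}, 'masters': [], 'workers': []}
    for rack, rack_hosts in yaml_data['baremetal'].items():
        for host, info in rack_hosts.items():
            if info['type'] == 'genesis':
                hosts['genesis'] = {
                    'name': host,
                    'pxe': info['ip']['pxe'],
                    'oam': info['ip']['oam'],
                }
            elif info['type'] == 'controller':
                hosts['masters'].append(host)
            else:
                hosts['workers'].append(host)
    return hosts

sample = {'baremetal': {'rack72': {
    'cab72r1': {'type': 'genesis',
                'ip': {'pxe': '172.30.1.11', 'oam': '10.0.220.11'}},
    'cab72r2': {'type': 'controller', 'ip': {}},
    'cab72r3': {'type': 'compute', 'ip': {}},
}}}
roles = get_role_wise_nodes(sample)
print(roles['genesis']['name'])  # cab72r1
print(roles['masters'])          # ['cab72r2']
print(roles['workers'])          # ['cab72r3']
```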
79
spyglass/site_processors/site_processor.py
Normal file
@@ -0,0 +1,79 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os

import pkg_resources
from jinja2 import Environment
from jinja2 import FileSystemLoader

from .base import BaseProcessor

LOG = logging.getLogger(__name__)


class SiteProcessor(BaseProcessor):
    def __init__(self, intermediary_yaml, manifest_dir):
        self.yaml_data = intermediary_yaml
        self.manifest_dir = manifest_dir

    def render_template(self):
        """ Render network config yaml from j2 templates.

        Network configs common to all racks (i.e. oam, overlay, storage,
        calico) are generated in a single file. Rack-specific
        configs (pxe and oob) are generated per rack.
        """
        # Check if manifest_dir exists
        if self.manifest_dir is not None:
            site_manifest_dir = self.manifest_dir + '/pegleg_manifests/site/'
        else:
            site_manifest_dir = 'pegleg_manifests/site/'
        LOG.info("Site manifest output dir:{}".format(site_manifest_dir))

        template_software_dir = pkg_resources.resource_filename(
            'spyglass', 'templates/')
        template_dir_abspath = os.path.dirname(template_software_dir)
        LOG.debug("Template Path:%s", template_dir_abspath)

        for dirpath, dirs, files in os.walk(template_dir_abspath):
            for filename in files:
                j2_env = Environment(
                    autoescape=False,
                    loader=FileSystemLoader(dirpath),
                    trim_blocks=True)
                j2_env.filters[
                    'get_role_wise_nodes'] = self.get_role_wise_nodes
                templatefile = os.path.join(dirpath, filename)
                outdirs = dirpath.split('templates')[1]

                outfile_path = '{}{}{}'.format(
                    site_manifest_dir, self.yaml_data['region_name'], outdirs)
                outfile_yaml = templatefile.split('.j2')[0].split('/')[-1]
                outfile = outfile_path + '/' + outfile_yaml
                outfile_dir = os.path.dirname(outfile)
                if not os.path.exists(outfile_dir):
                    os.makedirs(outfile_dir)
                template_j2 = j2_env.get_template(filename)
                try:
                    out = open(outfile, "w")
                    template_j2.stream(data=self.yaml_data).dump(out)
                    LOG.info("Rendering {}".format(outfile_yaml))
                    out.close()
                except IOError as ioe:
                    LOG.error(
                        "IOError during rendering:{}".format(outfile_yaml))
                    raise SystemExit(
                        "Error when generating {:s}:\n{:s}".format(
                            outfile, ioe.strerror))
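The template-to-manifest path mapping inside `render_template` is pure string manipulation and can be checked in isolation. The template path, manifest dir, and region name below are illustrative values, not taken from a real site:

```python
import os

# Sketch of the path mapping in render_template: a template at
# <...>/templates/<subdirs>/<name>.yaml.j2 is rendered to
# <site_manifest_dir><region_name>/<subdirs>/<name>.yaml.
def manifest_outfile(templatefile, site_manifest_dir, region_name):
    dirpath, filename = os.path.split(templatefile)
    outdirs = dirpath.split('templates')[1]
    outfile_path = '{}{}{}'.format(site_manifest_dir, region_name, outdirs)
    outfile_yaml = filename.split('.j2')[0]
    return outfile_path + '/' + outfile_yaml

print(manifest_outfile(
    'spyglass/templates/networks/common_addresses.yaml.j2',
    'pegleg_manifests/site/', 'region00'))
# pegleg_manifests/site/region00/networks/common_addresses.yaml
```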
172
spyglass/spyglass.py
Normal file
@@ -0,0 +1,172 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import pkg_resources
import pprint

import click
import yaml

from spyglass.parser.engine import ProcessDataSource
from spyglass.site_processors.site_processor import SiteProcessor

LOG = logging.getLogger('spyglass')


@click.command()
@click.option(
    '--site',
    '-s',
    help='Specify the site for which manifests are to be generated')
@click.option(
    '--type', '-t', help='Specify the plugin type: formation or tugboat')
@click.option('--formation_url', '-f', help='Specify the formation url')
@click.option('--formation_user', '-u', help='Specify the formation user id')
@click.option(
    '--formation_password', '-p', help='Specify the formation user password')
@click.option(
    '--intermediary',
    '-i',
    type=click.Path(exists=True),
    help=
    'Intermediary file path to generate manifests; use -m along with this option')
@click.option(
    '--additional_config',
    '-d',
    type=click.Path(exists=True),
    help='Site-specific configuration details')
@click.option(
    '--generate_intermediary',
    '-g',
    is_flag=True,
    help='Dump intermediary file from the passed excel and excel spec')
@click.option(
    '--intermediary_dir',
    '-idir',
    type=click.Path(exists=True),
    help='The path where the intermediary file is to be generated')
@click.option(
    '--generate_manifests',
    '-m',
    is_flag=True,
    help='Generate manifests from the generated intermediary file')
@click.option(
    '--manifest_dir',
    '-mdir',
    type=click.Path(exists=True),
    help='The path where manifest files are to be generated')
@click.option(
    '--loglevel',
    '-l',
    default=20,
    multiple=False,
    show_default=True,
    help='Loglevel NOTSET:0, DEBUG:10, \
    INFO:20, WARNING:30, ERROR:40, CRITICAL:50')
def main(*args, **kwargs):
    # Extract user-provided inputs
    generate_intermediary = kwargs['generate_intermediary']
    intermediary_dir = kwargs['intermediary_dir']
    generate_manifests = kwargs['generate_manifests']
    manifest_dir = kwargs['manifest_dir']
    intermediary = kwargs['intermediary']
    site = kwargs['site']
    loglevel = kwargs['loglevel']

    # Set logging format
    LOG.setLevel(loglevel)
    stream_handle = logging.StreamHandler()
    formatter = logging.Formatter(
        '(%(name)s): %(asctime)s %(levelname)s %(message)s')
    stream_handle.setFormatter(formatter)
    LOG.addHandler(stream_handle)

    LOG.info("Spyglass start")
    LOG.debug("CLI Parameters passed:\n{}".format(kwargs))

    if not (generate_intermediary or generate_manifests):
        LOG.error("Invalid CLI parameters passed! Spyglass exited")
        LOG.error("One of the options -m/-g is mandatory")
        LOG.info("CLI Parameters:\n{}".format(kwargs))
        exit()

    # Generate intermediary yaml and manifests, extracting data
    # from the data source specified by the plugin type
    intermediary_yaml = {}
    if intermediary is None:
        LOG.info("Generating Intermediary yaml")
        plugin_type = kwargs.get('type', None)
        plugin_class = None

        # Discover the plugin and load the plugin class
        LOG.info("Load the plugin class")
        for entry_point in pkg_resources.iter_entry_points(
                'data_extractor_plugins'):
            if entry_point.name == plugin_type:
                plugin_class = entry_point.load()

        if plugin_class is None:
            LOG.error(
                "Unsupported Plugin type. Plugin type:{}".format(plugin_type))
            exit()

        # Extract data from the plugin data source
        LOG.info("Extract data from plugin data source")
        data_extractor = plugin_class(site)
        plugin_conf = data_extractor.get_plugin_conf(kwargs)
        data_extractor.set_config_opts(plugin_conf)
        data_extractor.extract_data()

        # Apply any additional_config provided by the user
        additional_config = kwargs.get('additional_config', None)
        if additional_config is not None:
            with open(additional_config, 'r') as config:
                raw_data = config.read()
                additional_config_data = yaml.safe_load(raw_data)
            LOG.debug("Additional config data:\n{}".format(
                pprint.pformat(additional_config_data)))

            LOG.info("Apply additional configuration from:{}".format(
                additional_config))
            data_extractor.apply_additional_data(additional_config_data)
            LOG.debug(pprint.pformat(data_extractor.site_data))

        # Apply design rules to the data
        LOG.info("Apply design rules to the extracted data")
        process_input_ob = ProcessDataSource(site)
        process_input_ob.load_extracted_data_from_data_source(
            data_extractor.site_data)

        LOG.info("Generate intermediary yaml")
        intermediary_yaml = process_input_ob.generate_intermediary_yaml()

        # Dumping applies only when the intermediary was generated here;
        # a user-supplied intermediary file is left untouched
        if generate_intermediary:
            process_input_ob.dump_intermediary_file(intermediary_dir)
    else:
        LOG.info("Loading intermediary from user provided input")
        with open(intermediary, 'r') as intermediary_file:
            raw_data = intermediary_file.read()
            intermediary_yaml = yaml.safe_load(raw_data)

    if generate_manifests:
        LOG.info("Generating site Manifests")
        processor_engine = SiteProcessor(intermediary_yaml, manifest_dir)
        processor_engine.render_template()

    LOG.info("Spyglass Execution Completed")


if __name__ == '__main__':
    main()
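The entry-point discovery in `main()` can be sketched as a small helper. The `data_extractor_plugins` group name comes from the code above; `load_plugin` itself is a hypothetical wrapper written for illustration:

```python
import pkg_resources

# Sketch of the plugin discovery in main(): scan the
# 'data_extractor_plugins' entry-point group for a name matching the
# requested --type and load its class; return None when nothing matches.
def load_plugin(plugin_type, group='data_extractor_plugins'):
    for entry_point in pkg_resources.iter_entry_points(group):
        if entry_point.name == plugin_type:
            return entry_point.load()
    return None

# With no spyglass plugins installed this yields None, which is the case
# main() treats as "Unsupported Plugin type".
print(load_plugin('formation'))
```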
0
spyglass/utils/__init__.py
Normal file
41
spyglass/utils/utils.py
Normal file
@@ -0,0 +1,41 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# Merge two dictionaries
def dict_merge(dictA, dictB, path=None):
    """ Recursively merge dictionary dictB into dictA.

    DictA represents the data extracted by a plugin and dictB
    represents the additional site config dictionary that is passed
    to the CLI. The merge process compares the dictionary keys; if they
    are the same and the values they point to differ, then
    dictB's value is copied to dictA. If a key is unique
    to dictB, then it is copied to dictA.
    """
    if path is None:
        path = []

    for key in dictB:
        if key in dictA:
            if isinstance(dictA[key], dict) and isinstance(dictB[key], dict):
                dict_merge(dictA[key], dictB[key], path + [str(key)])
            elif dictA[key] == dictB[key]:
                pass  # values are the same, so no processing here
            else:
                dictA[key] = dictB[key]
        else:
            dictA[key] = dictB[key]
    return dictA
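A usage sketch for `dict_merge`: site overrides (dictB) win over plugin-extracted data (dictA), nested dicts merge recursively, and keys unique to either side survive. The body is reproduced (slightly condensed but equivalent) so the example is self-contained; the site values are invented:

```python
# Condensed but behaviorally equivalent copy of dict_merge above.
def dict_merge(dictA, dictB, path=None):
    if path is None:
        path = []
    for key in dictB:
        if (key in dictA and isinstance(dictA[key], dict)
                and isinstance(dictB[key], dict)):
            dict_merge(dictA[key], dictB[key], path + [str(key)])
        elif key not in dictA or dictA[key] != dictB[key]:
            dictA[key] = dictB[key]
    return dictA

extracted = {'site_info': {'domain': 'example.com', 'sitetype': 'foundry'}}
overrides = {'site_info': {'domain': 'atl.example.com'},
             'region_name': 'atl01'}
merged = dict_merge(extracted, overrides)
print(merged['site_info']['domain'])    # atl.example.com (override wins)
print(merged['site_info']['sitetype'])  # foundry (untouched)
print(merged['region_name'])            # atl01 (unique to dictB)
```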
21
tools/spyglass.sh
Executable file
@@ -0,0 +1,21 @@
#!/usr/bin/env bash

set -e

: ${WORKSPACE:=$(pwd)}
: ${IMAGE:=quay.io/att-comdev/spyglass:latest}

echo
echo "== NOTE: Workspace $WORKSPACE is the execution directory in the container =="
echo

# Working directory inside the container to execute commands from and mount
# from the host OS
container_workspace_path='/var/spyglass'

docker run --rm -t \
    --net=none \
    --workdir="$container_workspace_path" \
    -v "${WORKSPACE}:$container_workspace_path" \
    "${IMAGE}" \
    spyglass "${@}"
45
tox.ini
Normal file
@@ -0,0 +1,45 @@
[tox]
envlist = py35, py36, pep8, docs
skipsdist = True

[testenv]
deps =
    -r{toxinidir}/requirements.txt
basepython = python3
whitelist_externals =
    find
commands =
    find . -type f -name "*.pyc" -delete
    pytest \
        {posargs}

[testenv:fmt]
deps = yapf
commands =
    yapf --style=pep8 -ir {toxinidir}/spyglass {toxinidir}/tests

[testenv:pep8]
deps =
    yapf
    flake8
commands =
    #yapf --style=.style.yapf -rd {toxinidir}/spyglass
    flake8 {toxinidir}/spyglass

[testenv:bandit]
deps =
    bandit
commands = bandit -r spyglass -n 5

[flake8]
ignore = E125,E251,W503

[testenv:docs]
basepython = python3
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/doc/requirements.txt
commands =
    rm -rf doc/build
    sphinx-build -b html doc/source doc/build -n -W -v
whitelist_externals = rm