Retire Packaging Deb project repos
This commit is part of a series to retire the Packaging Deb project. Step 2 is to remove all content from the project repos, replacing it with a README notification explaining where to find ongoing work and how to recover the repo if it is needed at some future point (as described in https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I683c8ce852336c719cbbab0bcd25ad4dbd664dff
This commit is contained in:
parent
b765ac3814
commit
779c84672b
@@ -1,31 +0,0 @@
*.pyc
*.sqlite

*.gem

# vim swap files
.*.swp

# services' runtime files
*.log
*.pid

# Vagrant housekeeping file
.vagrant

build
dist
lock

*.egg
.testrepository
.tox
.venv
.idea
.DS_Store
test_run/*

*.egg-info

fuelclient-*.xml
.coverage
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/python-fuelclient.git
@@ -1,5 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./fuelclient/tests/unit} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
test_run_concurrency=echo 1
69 MAINTAINERS
@@ -1,69 +0,0 @@
---
description:
  For Fuel team structure and contribution policy, see [1].

  This is the repository-level MAINTAINERS file. All contributions to this
  repository must be approved by one or more Core Reviewers [2].
  If you are contributing to files (or creating new directories) in the
  root folder of this repository, please contact the Core Reviewers for
  review and merge requests.

  If you are contributing to subfolders of this repository, please
  check the 'maintainers' section of this file in order to find maintainers
  for those specific modules.

  It is mandatory to get +1 from one or more maintainers before asking
  Core Reviewers for review/merge in order to decrease the load on Core
  Reviewers [3]. Exceptions are when maintainers are actually cores, or
  when maintainers are not available for some reason (e.g. on vacation).

  [1] http://specs.openstack.org/openstack/fuel-specs/policy/team-structure.html
  [2] https://review.openstack.org/#/admin/groups/551,members
  [3] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html

  Please keep this file in YAML format in order to allow helper scripts
  to read it as configuration data.

maintainers:

- ./:
  - name: Alexander Saprykin
    email: asaprykin@mirantis.com
    IRC: asaprykin

  - name: Bulat Gaifullin
    email: bgaifullin@mirantis.com
    IRC: bgaifullin

  - name: Maciej Kwiek
    email: mkwiek@mirantis.com
    IRC: mkwiek

  - name: Sylwester Brzeczkowski
    email: sbrzeczkowski@mirantis.com
    IRC: sylwesterB

- specs/:
  - name: Mikhail Ivanov
    email: mivanov@mirantis.com
    IRC: mivanov

  - name: Artem Silenkov
    email: asilenkov@mirantis.com
    IRC: asilenkov

  - name: Alexander Tsamutali
    email: atsamutali@mirantis.com
    IRC: astsmtl

  - name: Daniil Trishkin
    email: dtrishkin@mirantis.com
    IRC: dtrishkin

  - name: Ivan Udovichenko
    email: iudovichenko@mirantis.com
    IRC: tlbr

  - name: Igor Yozhikov
    email: iyozhikov@mirantis.com
    IRC: IgorYozhikov
@@ -1 +0,0 @@
include fuelclient/fuel_client.yaml
@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
38 README.rst
@@ -1,38 +0,0 @@
========================
Team and repository tags
========================

.. image:: http://governance.openstack.org/badges/python-fuelclient.svg
    :target: http://governance.openstack.org/reference/tags/index.html

.. Change things from this point on

python-fuelclient
=================

python-fuelclient provides a CLI tool and a Python API wrapper for interacting
with `Fuel <https://github.com/stackforge/fuel-web>`_.

-----------------
Project resources
-----------------

Project status, bugs, and blueprints are tracked on Launchpad:
https://launchpad.net/fuel

Development documentation is hosted here:
https://docs.fuel-infra.org/fuel-dev

The user guide can be found here:
http://docs.mirantis.com

Any additional information can be found on the Fuel project wiki:
https://wiki.openstack.org/wiki/Fuel

Anyone wishing to contribute to python-fuelclient should follow the general
OpenStack process. A good reference for it can be found here:
https://wiki.openstack.org/wiki/How_To_Contribute

http://docs.openstack.org/infra/manual/developers.html
@@ -1,90 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# DO NOT PUT ANY IMPORTS HERE BECAUSE THIS FILE IS USED
# DURING THE INSTALLATION.

try:
    import pkg_resources
    try:
        __version__ = pkg_resources.get_distribution(
            "python-fuelclient").version
    except pkg_resources.DistributionNotFound:
        __version__ = ""
except ImportError:
    __version__ = ""


def connect(host, port, http_proxy=None, os_username=None, os_password=None,
            os_tenant_name=None, debug=False):
    """Creates API connection."""
    from fuelclient import client

    return client.APIClient(
        host, port, http_proxy=http_proxy, os_username=os_username,
        os_password=os_password, os_tenant_name=os_tenant_name, debug=debug)


def get_client(resource, version='v1', connection=None):
    """Gets an API client for a resource

    python-fuelclient provides access to Fuel's API
    through a set of per-resource facades. In order to
    get a proper facade it's necessary to specify the name
    of the API resource and the version of Fuel's API.

    :param resource: Name of the resource to get a facade for.
    :type resource: str
                    Valid values are environment, node and task
    :param version: Version of the Fuel's API
    :type version: str
                   Available: v1. Default: v1.
    :param connection: API connection
    :type connection: fuelclient.client.APIClient
    :return: Facade to the specified resource that wraps
             calls to the specified version of the API.
    """
    from fuelclient import v1

    version_map = {
        'v1': {
            'cluster-settings': v1.cluster_settings,
            'deployment_history': v1.deployment_history,
            'deployment-info': v1.deployment_info,
            'environment': v1.environment,
            'extension': v1.extension,
            'fuel-version': v1.fuelversion,
            'graph': v1.graph,
            'health': v1.health,
            'network-configuration': v1.network_configuration,
            'network-group': v1.network_group,
            'node': v1.node,
            'openstack-config': v1.openstack_config,
            'plugins': v1.plugins,
            'release': v1.release,
            'role': v1.role,
            'sequence': v1.sequence,
            'snapshot': v1.snapshot,
            'task': v1.task,
            'tag': v1.tag,
            'vip': v1.vip
        }
    }

    try:
        return version_map[version][resource].get_client(connection)
    except KeyError:
        msg = 'Cannot load API client for "{r}" in the API version "{v}".'
        raise ValueError(msg.format(r=resource, v=version))
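The `get_client` factory above resolves a (version, resource) pair through a nested dict and converts a lookup miss into a `ValueError`. A minimal, standalone sketch of that dispatch pattern, with hypothetical stub classes standing in for the real `fuelclient.v1` facade modules:

```python
# Sketch of the version_map dispatch used by get_client().
# NodeClient/TaskClient are illustrative stand-ins for fuelclient.v1.* facades.

class NodeClient(object):
    pass


class TaskClient(object):
    pass


VERSION_MAP = {
    'v1': {
        'node': NodeClient,
        'task': TaskClient,
    },
}


def get_client(resource, version='v1'):
    """Look up a facade; unknown names become a ValueError, as in fuelclient."""
    try:
        return VERSION_MAP[version][resource]()
    except KeyError:
        msg = 'Cannot load API client for "{r}" in the API version "{v}".'
        raise ValueError(msg.format(r=resource, v=version))
```

Because `KeyError` is caught for both the version and the resource lookup, a bad version string and a bad resource name produce the same uniform error message.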
@@ -1,19 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""fuelclient.cli sub-module contains functionality of
fuelclient command line interface
"""
@@ -1,80 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""fuelclient.cli.actions sub-module contains files with action classes
which implement command line interface logic

All action classes must be added to actions_tuple to be used by the parser
"""
from fuelclient.cli.actions.deploy import DeployChangesAction
from fuelclient.cli.actions.deploy import RedeployChangesAction
from fuelclient.cli.actions.deployment_tasks import DeploymentTasksAction
from fuelclient.cli.actions.environment import EnvironmentAction
from fuelclient.cli.actions.fact import DeploymentAction
from fuelclient.cli.actions.fact import ProvisioningAction
from fuelclient.cli.actions.graph import GraphAction
from fuelclient.cli.actions.token import TokenAction
from fuelclient.cli.actions.health import HealthCheckAction
from fuelclient.cli.actions.interrupt import ResetAction
from fuelclient.cli.actions.interrupt import StopAction
from fuelclient.cli.actions.network import NetworkAction
from fuelclient.cli.actions.network import NetworkTemplateAction
from fuelclient.cli.actions.network_group import NetworkGroupAction
from fuelclient.cli.actions.node import NodeAction
from fuelclient.cli.actions.nodegroup import NodeGroupAction
from fuelclient.cli.actions.notifications import NotificationsAction
from fuelclient.cli.actions.notifications import NotifyAction
from fuelclient.cli.actions.openstack_config import OpenstackConfigAction
from fuelclient.cli.actions.release import ReleaseAction
from fuelclient.cli.actions.role import RoleAction
from fuelclient.cli.actions.settings import SettingsAction
from fuelclient.cli.actions.snapshot import SnapshotAction
from fuelclient.cli.actions.user import UserAction
from fuelclient.cli.actions.plugins import PluginAction
from fuelclient.cli.actions.fuelversion import FuelVersionAction
from fuelclient.cli.actions.vip import VIPAction

actions_tuple = (
    DeployChangesAction,
    DeploymentAction,
    DeploymentTasksAction,
    EnvironmentAction,
    FuelVersionAction,
    GraphAction,
    HealthCheckAction,
    NetworkAction,
    NetworkGroupAction,
    NetworkTemplateAction,
    NodeAction,
    NodeGroupAction,
    NotificationsAction,
    NotifyAction,
    OpenstackConfigAction,
    PluginAction,
    ProvisioningAction,
    RedeployChangesAction,
    ReleaseAction,
    ResetAction,
    RoleAction,
    SettingsAction,
    SnapshotAction,
    StopAction,
    TokenAction,
    UserAction,
    VIPAction,
)

actions = dict(
    (action.action_name, action())
    for action in actions_tuple
)
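The module closes by building an `actions` dict keyed on each class's `action_name` attribute, so the parser can look up a handler instance by name. A small self-contained sketch of that registry idiom (the class names here are illustrative, not the real fuelclient actions):

```python
# Sketch of the action registry: each class carries an action_name attribute,
# and a name -> instance mapping is built from a tuple of classes.

class ListAction(object):
    action_name = 'list'


class DeleteAction(object):
    action_name = 'delete'


actions_tuple = (ListAction, DeleteAction)

# Instantiate every action once and index it by its declared name.
actions = dict(
    (action.action_name, action())
    for action in actions_tuple
)
```

Adding a new action then only requires defining the class with an `action_name` and appending it to `actions_tuple`; no lookup table has to be edited by hand.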
@@ -1,129 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from functools import partial
from functools import wraps
import os

import six

from fuelclient.cli import error
from fuelclient.cli.formatting import quote_and_join
from fuelclient.cli.serializers import Serializer
from fuelclient.client import DefaultAPIClient


class Action(object):
    """Action class generalizes logic of action execution
    method action_func - entry point of parser with parsed arguments

    flag_func_map - is tuple of pairs ("flag", self.some_method) where
    "flag" is name of argument which causes "some_method" to be called.
    None is used as "flag" when method will be called without any flag.

    serializer - is Serializer class instance which supposed to be the
    only way to read and write to output or file system.

    args - tuple of function calls of functions from arguments module,
    is a manifest of all arguments used in action, and is used to initialize
    argparse subparser of that action.
    """
    def __init__(self):
        # Mapping of flags to methods
        self.flag_func_map = None
        self.serializer = Serializer()

    def action_func(self, params):
        """Entry point for all actions subclasses
        """
        DefaultAPIClient.debug_mode(debug=params.debug)

        self.serializer = Serializer.from_params(params)
        if self.flag_func_map is not None:
            for flag, method in self.flag_func_map:
                if flag is None or getattr(params, flag):
                    method(params)
                    break

    @property
    def examples(self):
        """examples property is concatenation of __doc__ strings from
        methods in child action classes, and is added as epilog of help
        output
        """
        methods_with_docs = set(
            method
            for _, method in self.flag_func_map
        )
        return "Examples:\n\n" + \
            "\n".join(
                six.moves.map(
                    lambda method: (
                        "\t" + method.__doc__.replace("\n ", "\n")
                    ),
                    methods_with_docs
                )
            ).format(
                action_name=self.action_name
            )

    def full_path_directory(self, directory, base_name):
        full_path = os.path.join(directory, base_name)
        if not os.path.exists(full_path):
            try:
                os.mkdir(full_path)
            except OSError as e:
                raise error.ActionException(six.text_type(e))
        return full_path

    def default_directory(self, directory=None):
        return os.path.abspath(os.curdir if directory is None else directory)


def wrap(method, args, f):
    """wrap - is second order function, purpose of which is to
    generalize argument checking for methods in actions in form
    of decorator with arguments.

    'check_all' and 'check_any' are partial functions of wrap.
    """
    @wraps(f)
    def wrapped_f(self, params):
        if method(getattr(params, _arg) for _arg in args):
            return f(self, params)
        else:
            raise error.ArgumentException(
                "{0} required!".format(
                    quote_and_join(
                        "--" + arg for arg in args
                    )
                )
            )
    return wrapped_f


def check_all(*args):
    """check_all - decorator with arguments, which checks that
    all arguments are given before running action method; if
    not all arguments are given, it raises an ArgumentException.
    """
    return partial(wrap, all, args)


def check_any(*args):
    """check_any - decorator with arguments, which checks that
    at least one argument is given before running action method;
    if no arguments were given, it raises an ArgumentException.
    """
    return partial(wrap, any, args)
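`check_all` and `check_any` above are both partial applications of `wrap`, differing only in whether `all` or `any` is applied to the extracted argument values. The same decorator-factory pattern can be exercised in isolation; this sketch substitutes a plain `ValueError` for `error.ArgumentException` and a simple `Params` holder for the parsed-arguments object, both of which are assumptions for the sake of a runnable example:

```python
from functools import partial, wraps


def wrap(method, args, f):
    """Run f only when method() over the named params attributes is truthy."""
    @wraps(f)
    def wrapped_f(self, params):
        if method(getattr(params, _arg) for _arg in args):
            return f(self, params)
        # ValueError stands in for fuelclient's error.ArgumentException here.
        raise ValueError(
            "{0} required!".format(
                ", ".join('"--' + arg + '"' for arg in args)
            )
        )
    return wrapped_f


def check_all(*args):
    return partial(wrap, all, args)


def check_any(*args):
    return partial(wrap, any, args)


class Params(object):
    """Minimal stand-in for an argparse Namespace."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


class Demo(object):
    @check_all("env", "name")
    def rename(self, params):
        return "renamed %s" % params.name

    @check_any("download", "upload")
    def transfer(self, params):
        return "ok"
```

Because `partial(wrap, all, args)` fixes the first two parameters of `wrap`, the leftover single-argument callable is exactly a decorator, which is what makes both factories one line each.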
@@ -1,74 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.objects.environment import Environment


class ChangesAction(Action):

    action_name = None
    actions_func_map = {}

    def __init__(self):
        super(ChangesAction, self).__init__()
        self.args = (
            Args.get_env_arg(required=True),
            Args.get_dry_run_deployment_arg(),
        )
        self.flag_func_map = (
            (None, self.deploy_changes),
        )

    def print_deploy_info(self, deploy_task):
        six.print_("Deployment task with id {t} for the environment {e} "
                   "has been started.".format(t=deploy_task.id,
                                              e=deploy_task.env.id)
                   )

    def deploy_changes(self, params):
        """To apply all changes to some environment:
                fuel --env 1 {action_name}
        """
        env = Environment(params.env)

        deploy_task = getattr(
            env, self.actions_func_map[self.action_name])(
            dry_run=params.dry_run)
        self.serializer.print_to_output(
            deploy_task.data,
            deploy_task,
            print_method=self.print_deploy_info)


class DeployChangesAction(ChangesAction):
    """Deploy changes to environments
    """
    action_name = "deploy-changes"

    def __init__(self):
        super(DeployChangesAction, self).__init__()
        self.actions_func_map[self.action_name] = 'deploy_changes'


class RedeployChangesAction(ChangesAction):
    """Redeploy changes to environment which is in the operational state
    """
    action_name = "redeploy-changes"

    def __init__(self):
        super(RedeployChangesAction, self).__init__()
        self.actions_func_map[self.action_name] = 'redeploy_changes'
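Both subclasses above route through `actions_func_map`, which maps the CLI action name to the *name* of the `Environment` method to invoke, resolved at call time with `getattr`. A minimal sketch of that string-keyed dispatch (the `FakeEnv` class is an illustrative stand-in, not fuelclient's `Environment`):

```python
# Sketch of the actions_func_map dispatch: the action name selects which
# environment method to call, resolved by name via getattr().

class FakeEnv(object):
    def deploy_changes(self, dry_run=False):
        return 'deploy dry_run=%s' % dry_run

    def redeploy_changes(self, dry_run=False):
        return 'redeploy dry_run=%s' % dry_run


actions_func_map = {
    'deploy-changes': 'deploy_changes',
    'redeploy-changes': 'redeploy_changes',
}


def run(action_name, env, dry_run=False):
    """Resolve the method name for action_name and call it on env."""
    return getattr(env, actions_func_map[action_name])(dry_run=dry_run)
```

This lets both action classes share one `deploy_changes` implementation in the base class while each subclass registers only a name-to-method entry.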
@@ -1,108 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.formatting import format_table

from fuelclient.v1.deployment_history import DeploymentHistoryClient


class DeploymentTasksAction(Action):
    """Show deployment tasks
    """
    action_name = "deployment-tasks"

    def __init__(self):
        super(DeploymentTasksAction, self).__init__()
        self.args = [
            group(
                Args.get_list_arg("List all deployment tasks"),
            ),
            Args.get_single_task_arg(required=True),
            Args.get_deployment_node_arg(
                "Node ids."
            ),
            Args.get_status_arg(
                "Statuses: pending, error, ready, running, skipped"
            ),
            Args.get_tasks_names_arg(
                "Show deployment history for specific deployment tasks names "
                "and group output by task"
            ),
            Args.get_show_parameters_arg(
                "Show deployment tasks parameters"
            ),
            Args.get_include_summary_arg(
                "Show deployment tasks summary"
            ),
        ]
        self.flag_func_map = (
            (None, self.list),
        )

    def list(self, params):
        """To display all deployment tasks for a task:
                fuel deployment-tasks --task-id 5

        To display deployment tasks for some nodes:
                fuel deployment-tasks --task-id 5 --node 1,2

        To display deployment tasks for some statuses (pending, error,
        ready, running):
                fuel deployment-tasks --task-id 5 --status pending,running

        To display deployment tasks for some statuses (pending, error,
        ready, running) on some nodes:
                fuel deployment-tasks --task-id 5 --status error --node 1,2

        To display certain deployment tasks results only:
                fuel deployment-tasks --task-id 5
                    --task-name task-name1,task-name2

        To display tasks parameters use:
                fuel deployment-tasks --task-id 5 --show-parameters
        """
        client = DeploymentHistoryClient()
        tasks_names = getattr(params, 'task-name', None)
        show_parameters = getattr(params, 'show-parameters')
        statuses = params.status.split(',') if params.status else []
        nodes = params.node.split(',') if params.node else []
        tasks_names = tasks_names.split(',') if tasks_names else []
        include_summary = getattr(params, 'include-summary')

        data = client.get_all(
            transaction_id=params.task,
            nodes=nodes,
            statuses=statuses,
            tasks_names=tasks_names,
            show_parameters=show_parameters,
            include_summary=include_summary
        )

        if show_parameters:
            table_keys = client.tasks_records_keys
        else:
            table_keys = client.history_records_keys
        if include_summary:
            table_keys += ('summary',)
        self.serializer.print_to_output(
            data,
            format_table(
                data,
                acceptable_keys=table_keys
            )
        )
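The `list` method above normalizes the optional comma-separated CLI values (`--status`, `--node`, `--task-name`) into lists, treating a missing or empty value as an empty list. That normalization idiom can be sketched as a tiny helper:

```python
def split_csv(value):
    """Turn an optional comma-separated string into a list ('' or None -> [])."""
    return value.split(',') if value else []
```

The truthiness check matters: calling `''.split(',')` directly would yield `['']`, whereas an absent filter should mean "no filtering" and thus an empty list.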
@@ -1,228 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys

import six

from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
from fuelclient.cli.actions.base import check_any
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.formatting import format_table
from fuelclient.objects.environment import Environment


class EnvironmentAction(Action):
    """Create, list and modify currently existing environments (clusters)
    """
    action_name = "environment"

    def __init__(self):
        super(EnvironmentAction, self).__init__()
        self.args = [
            Args.get_env_arg(),
            group(
                Args.get_list_arg(
                    "List all available environments"
                ),
                Args.get_set_arg(
                    "Set environment parameters, e.g. its name"
                ),
                Args.get_delete_arg(
                    "Delete environment with a specific id or name"
                ),
                Args.get_create_arg(
                    "Create a new environment with "
                    "specific release id and name"
                ),
            ),
            Args.get_release_arg(
                "Release id"
            ),
            Args.get_force_arg(
                "Do it anyway"
            ),
            Args.get_name_arg(
                "Environment name"
            ),
            Args.get_nst_arg(
                "Set network segment type"
            ),
            Args.get_deployment_tasks_arg("Environment tasks configuration"),
            Args.get_attributes_arg("Environment attributes"),
            group(
                Args.get_download_arg(
                    "Download configuration of specific cluster"),
                Args.get_upload_arg(
                    "Upload configuration to specific cluster")
            ),
            Args.get_dir_arg(
                "Select directory to which download release attributes"),
        ]
        self.flag_func_map = (
            ("deployment-tasks", self.deployment_tasks),
            ("attributes", self.attributes),
            ("create", self.create),
            ("set", self.set),
            ("delete", self.delete),
            (None, self.list)
        )

    @check_all("name", "release")
    def create(self, params):
        """To create an environment with name MyEnv and release id=1 run:
                fuel env create --name MyEnv --rel 1

        By default, it creates an environment using neutron with VLAN
        network segmentation as the network provider.
        To specify other modes, add optional arguments:
                fuel env create --name MyEnv --rel 1 --net-segment-type vlan
        """

        if params.nst == 'gre':
            six.print_(
                "WARNING: GRE network segmentation type is deprecated "
                "since 7.0 release.", file=sys.stderr)

        env = Environment.create(
            params.name,
            params.release,
            params.nst,
        )

        data = env.get_fresh_data()

        self.serializer.print_to_output(
            data,
            u"Environment '{name}' with id={id} was created!"
            .format(**data)
        )

    @check_all("env")
    def set(self, params):
        """To change the environment name:
                fuel --env 1 env set --name NewEnvName
        """
        acceptable_params = ('name', )

        env = Environment(params.env, params=params)

        # forming message for output and data structure for request body
        # TODO(aroma): make it less ugly
        msg_template = ("Following attributes are changed for "
                        "the environment: {env_attributes}")

        env_attributes = []
        update_kwargs = dict()
        for param_name in acceptable_params:
            attr_value = getattr(params, param_name, None)
            if attr_value:
                update_kwargs[param_name] = attr_value
                env_attributes.append(
                    ''.join([param_name, '=', str(attr_value)])
                )

        data = env.set(update_kwargs)
        env_attributes = ', '.join(env_attributes)
        self.serializer.print_to_output(
            data,
            msg_template.format(env_attributes=env_attributes)
        )

    @check_all("env")
    def delete(self, params):
        """To delete the environment:
                fuel --env 1 env --force delete
        """
        env = Environment(params.env, params=params)

        if env.status == "operational" and not params.force:
            self.serializer.print_to_output(env.data,
                                            "Deleting an operational "
                                            "environment is a dangerous "
                                            "operation. Please use --force to "
                                            "bypass this message.")
            return

        data = env.delete()
        self.serializer.print_to_output(
            data,
            "Environment with id={0} was deleted"
            .format(env.id)
        )

    def list(self, params):
        """Print all available environments:
                fuel env
        """
        acceptable_keys = ("id", "status", "name", "release_id", )
        data = Environment.get_all_data()
        if params.env:
            data = filter(
                lambda x: x[u"id"] == int(params.env),
                data
            )
        self.serializer.print_to_output(
            data,
            format_table(
                data,
                acceptable_keys=acceptable_keys
            )
        )

    @check_all("env")
    @check_any("download", "upload")
    def deployment_tasks(self, params):
        """Modify deployment_tasks for an environment:
                fuel env --env 1 --deployment-tasks --download
                fuel env --env 1 --deployment-tasks --upload
        """
        cluster = Environment(params.env)
        dir_path = self.full_path_directory(
params.dir, 'cluster_{0}'.format(params.env))
|
|
||||||
full_path = '{0}/deployment_tasks'.format(dir_path)
|
|
||||||
if params.download:
|
|
||||||
tasks = cluster.get_deployment_tasks()
|
|
||||||
self.serializer.write_to_path(full_path, tasks)
|
|
||||||
print("Deployment tasks for cluster {0} "
|
|
||||||
"downloaded into {1}.yaml.".format(cluster.id, full_path))
|
|
||||||
elif params.upload:
|
|
||||||
tasks = self.serializer.read_from_file(full_path)
|
|
||||||
cluster.update_deployment_tasks(tasks)
|
|
||||||
print("Deployment tasks for cluster {0} "
|
|
||||||
"uploaded from {1}.yaml".format(cluster.id, full_path))
|
|
||||||
|
|
||||||
@check_all("env")
|
|
||||||
@check_any("download", "upload")
|
|
||||||
def attributes(self, params):
|
|
||||||
"""Modify attributes of the environment:
|
|
||||||
fuel env --env 1 --attributes --download
|
|
||||||
fuel env --env 1 --attributes --upload
|
|
||||||
"""
|
|
||||||
cluster = Environment(params.env)
|
|
||||||
dir_path = self.full_path_directory(
|
|
||||||
params.dir, 'cluster_{0}'.format(params.env))
|
|
||||||
full_path = '{0}/attributes'.format(dir_path)
|
|
||||||
|
|
||||||
if params.download:
|
|
||||||
attributes = cluster.get_attributes()
|
|
||||||
self.serializer.write_to_path(full_path, attributes)
|
|
||||||
print("Attributes of cluster {0} "
|
|
||||||
"downloaded into {1}.yaml.".format(cluster.id, full_path))
|
|
||||||
elif params.upload:
|
|
||||||
attributes = self.serializer.read_from_file(full_path)
|
|
||||||
cluster.update_attributes(attributes, params.force)
|
|
||||||
print("Attributes of cluster {0} "
|
|
||||||
"uploaded from {1}.yaml".format(cluster.id, full_path))
|
|
|
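Every action class above pairs command-line flags with handler methods through a `flag_func_map` tuple. A minimal, self-contained sketch of that dispatch pattern (the `Action`, `EnvAction`, and `Params` names here are illustrative stand-ins, not the fuelclient implementation):

```python
# Minimal sketch of the flag -> handler dispatch used by the action
# classes in this diff. All names are illustrative.
class Action(object):
    flag_func_map = ()

    def serve(self, params):
        # Walk the (flag, handler) pairs; the first flag set on the
        # parsed params wins. A flag of None acts as the default.
        for flag, handler in self.flag_func_map:
            if flag is None or getattr(params, flag, False):
                return handler(params)


class EnvAction(Action):
    def __init__(self):
        self.flag_func_map = (
            ("create", self.create),
            ("delete", self.delete),
            (None, self.list),
        )

    def create(self, params):
        return "created"

    def delete(self, params):
        return "deleted"

    def list(self, params):
        return "listed"


class Params(object):
    """Stand-in for an argparse.Namespace holding parsed flags."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
```

With this shape, `fuel env create ...` would set `params.create` and hit the first entry, while bare `fuel env` would fall through to the `None` default.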
@@ -1,136 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.objects.environment import Environment


class FactAction(Action):

    action_name = None

    def __init__(self):
        super(FactAction, self).__init__()
        self.args = [
            Args.get_env_arg(required=True),
            group(
                Args.get_delete_arg(
                    "Delete current {0} data.".format(self.action_name)
                ),
                Args.get_download_arg(
                    "Download current {0} data.".format(self.action_name)
                ),
                Args.get_upload_arg(
                    "Upload current {0} data.".format(self.action_name)
                ),
                Args.get_default_arg(
                    "Download default {0} data.".format(self.action_name)
                ),
                required=True
            ),
            Args.get_dir_arg(
                "Directory with {0} data.".format(self.action_name)
            ),
            Args.get_node_arg(
                "Node ids."
            ),
            Args.get_not_split_facts_args(),
        ]
        self.flag_func_map = (
            ("default", self.default),
            ("upload", self.upload),
            ("delete", self.delete),
            ("download", self.download)
        )

    def default(self, params):
        """To get default {action_name} information for some environment:
                fuel --env 1 {action_name} --default

        It's possible to get default {action_name} information
        just for some nodes:
                fuel --env 1 {action_name} --default --node 1,2,3
        """
        env = Environment(params.env)
        dir_name = env.write_facts_to_dir(
            self.action_name,
            env.get_default_facts(
                self.action_name, nodes=params.node, split=params.split
            ),
            directory=params.dir,
            serializer=self.serializer
        )
        print(
            "Default {0} info was downloaded to {1}".format(
                self.action_name,
                dir_name
            )
        )

    def upload(self, params):
        """To upload {action_name} information for some environment:
                fuel --env 1 {action_name} --upload
        """
        env = Environment(params.env)
        facts = env.read_fact_info(
            self.action_name,
            directory=params.dir,
            serializer=self.serializer
        )
        env.upload_facts(self.action_name, facts)
        print("{0} facts were uploaded.".format(self.action_name))

    def delete(self, params):
        """Also, {action_name} information can be written to or
        read from a specific directory:
                fuel --env 1 {action_name} --upload \\
                --dir path/to/some/directory
        """
        env = Environment(params.env)
        env.delete_facts(self.action_name)
        print("{0} facts deleted.".format(self.action_name))

    def download(self, params):
        """To download {action_name} information for some environment:
                fuel --env 1 {action_name} --download
        """
        env = Environment(params.env)
        dir_name = env.write_facts_to_dir(
            self.action_name,
            env.get_facts(
                self.action_name, nodes=params.node, split=params.split
            ),
            directory=params.dir,
            serializer=self.serializer
        )
        print(
            "Current {0} info was downloaded to {1}".format(
                self.action_name,
                dir_name
            )
        )


class DeploymentAction(FactAction):
    """Show computed deployment facts for orchestrator
    """
    action_name = "deployment"


class ProvisioningAction(FactAction):
    """Show computed provisioning facts for orchestrator
    """
    action_name = "provisioning"
@@ -1,42 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import fuelclient.cli.arguments as Args

from fuelclient.cli.actions.base import Action
from fuelclient.objects.fuelversion import FuelVersion


class FuelVersionAction(Action):
    """Show Fuel server's version
    """

    action_name = "fuel-version"

    def __init__(self):
        super(FuelVersionAction, self).__init__()

        self.args = [
            Args.get_list_arg("Show fuel version"),
        ]
        self.flag_func_map = (
            (None, self.version),
        )

    def version(self, params):
        """To show fuel version data:
                fuel fuel-version
        """
        version = FuelVersion.get_all_data()
        print(self.serializer.serialize(version))
@@ -1,163 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

import six

from fuelclient.cli.actions import base
import fuelclient.cli.arguments as Args
from fuelclient.cli import error
from fuelclient.objects import environment


class GraphAction(base.Action):
    """Manipulate deployment graph's representation."""

    action_name = 'graph'

    task_types = ('skipped', 'group', 'stage')

    def __init__(self):
        super(GraphAction, self).__init__()
        self.args = (
            Args.get_env_arg(),
            Args.get_render_arg(
                "Render graph from DOT to PNG"
            ),
            Args.get_download_arg(
                "Download graph of specific cluster"
            ),
            Args.get_dir_arg(
                "Select target dir to render graph."
            ),
            Args.group(
                Args.get_skip_tasks(),
                Args.get_tasks()
            ),
            Args.get_graph_endpoint(),
            Args.get_graph_startpoint(),
            Args.get_remove_type_arg(self.task_types),
            Args.get_parents_arg(),
            Args.get_tred_arg("Apply transitive reduction filter for graph."),
        )
        self.flag_func_map = (
            ('render', self.render),
            ('download', self.download),
        )

    @base.check_all("env")
    def download(self, params):
        """Download deployment graph to stdout

        fuel graph --env 1 --download
        fuel graph --env 1 --download --tasks A B C
        fuel graph --env 1 --download --skip X Y --end pre_deployment
        fuel graph --env 1 --download --skip X Y --start post_deployment

        Specify output:
        fuel graph --env 1 --download > output/dir/file.gv

        Get parents only for task A:

        fuel graph --env 1 --download --parents-for A
        """
        env = environment.Environment(params.env)

        parents_for = getattr(params, 'parents-for')

        used_params = "# params:\n"
        for param in ('start', 'end', 'skip', 'tasks', 'parents-for',
                      'remove'):
            used_params += "# - {0}: {1}\n".format(param,
                                                   getattr(params, param))

        tasks = params.tasks

        if not tasks or (params.skip or params.end or params.start):
            tasks = env.get_tasks(
                skip=params.skip, end=params.end,
                start=params.start, include=params.tasks)

        dotraph = env.get_deployment_tasks_graph(tasks,
                                                 parents_for=parents_for,
                                                 remove=params.remove)
        sys.stdout.write(six.text_type(used_params))
        sys.stdout.write(six.text_type(dotraph))

    @base.check_all("render")
    def render(self, params):
        """Render graph in PNG format

        fuel graph --render graph.gv
        fuel graph --render graph.gv --dir ./output/dir/

        To apply transitive reduction filter on rendered graph:

        fuel graph --render graph.gv --tred
        fuel graph --render graph.gv --dir ./output/dir/ --tred

        Read graph from stdin:
        some_process | fuel graph --render -
        """
        if params.render == '-':
            dot_data = sys.stdin.read()
            out_filename = 'graph.gv'
        elif not os.path.exists(params.render):
            raise error.ArgumentException(
                "Input file does not exist"
            )
        else:
            out_filename = os.path.basename(params.render)
            with open(params.render, 'r') as f:
                dot_data = f.read()

        target_dir = self.full_path_directory(
            self.default_directory(params.dir),
            ''
        )
        target_file = os.path.join(
            target_dir,
            '{0}.png'.format(out_filename),
        )

        if not os.access(os.path.dirname(target_file), os.W_OK):
            raise error.ActionException(
                'Path {0} is not writable'.format(target_file))
        render_graph(dot_data, target_file, params.tred)
        print('Graph saved in "{0}"'.format(target_file))


def render_graph(input_data, output_path, tred=False):
    """Renders DOT graph using pygraphviz.

    :param input_data: DOT graph representation
    :param output_path: path to the rendered graph
    :param tred: applies transitive reduction of graph
    """
    try:
        import pygraphviz
    except ImportError:
        raise error.WrongEnvironmentError(
            "This action requires Graphviz installed "
            "together with 'pygraphviz' Python library")

    graph = pygraphviz.AGraph(string=input_data)
    if tred:
        graph = graph.tred()

    graph.draw(output_path, prog='dot', format='png')
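The `--tred` flag above delegates transitive reduction to pygraphviz. The idea itself is simple: drop any edge (u, v) when v is already reachable from u through a longer path. A dependency-free sketch over a plain adjacency mapping (this is illustrative, not the pygraphviz implementation, and assumes the graph is a DAG):

```python
def transitive_reduction(graph):
    """Drop edges (u, v) when v is reachable from u via a longer path.

    graph: dict mapping node -> set of direct successors (assumed a DAG).
    Returns a new dict with the redundant edges removed.
    """
    def reachable(start, target, skip_edge):
        # Depth-first search that ignores the single edge under test.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            for nxt in graph.get(node, ()):
                if (node, nxt) != skip_edge:
                    stack.append(nxt)
        return False

    return {u: {v for v in succs
                if not reachable(u, v, skip_edge=(u, v))}
            for u, succs in graph.items()}
```

For a graph a→b, a→c, b→c the edge a→c is implied by a→b→c and would be removed, which is exactly the visual decluttering the rendered PNG benefits from.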
@@ -1,105 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.error import EnvironmentException
from fuelclient.cli.formatting import format_table
from fuelclient.cli.formatting import print_health_check
from fuelclient.objects.environment import Environment
import six


class HealthCheckAction(Action):
    """Run health check on environment
    """
    action_name = "health"

    _allowed_statuses = (
        'error',
        'operational',
        'update_error',
    )

    def __init__(self):
        super(HealthCheckAction, self).__init__()
        self.args = (
            Args.get_env_arg(required=True),
            Args.get_list_arg("List all available checks"),
            Args.get_force_arg("Forced test run"),
            Args.get_check_arg("Run check for some testset."),
            Args.get_ostf_username_arg(),
            Args.get_ostf_password_arg(),
            Args.get_ostf_tenant_name_arg()
        )

        self.flag_func_map = (
            ("check", self.check),
            (None, self.list)
        )

    def check(self, params):
        """To run some health checks:
                fuel --env 1 health --check smoke,sanity
        """
        env = Environment(params.env)

        if env.status not in self._allowed_statuses and not params.force:
            raise EnvironmentException(
                "Environment is not ready to run health check "
                "because it is in {0} state. "
                "Health check is likely to fail because of "
                "this. Use --force flag to proceed anyway.".format(env.status)
            )

        if env.is_customized and not params.force:
            raise EnvironmentException(
                "Environment deployment facts were updated. "
                "Health check is likely to fail because of "
                "that. Use --force flag to proceed anyway."
            )
        test_sets_to_check = params.check or set(
            ts["id"] for ts in env.get_testsets())
        ostf_credentials = {}
        if params.ostf_tenant_name is not None:
            ostf_credentials['tenant'] = params.ostf_tenant_name
        if params.ostf_username is not None:
            ostf_credentials['username'] = params.ostf_username
        if params.ostf_password is not None:
            ostf_credentials['password'] = params.ostf_password
        if not ostf_credentials:
            six.print_("WARNING: OSTF credentials are going to be",
                       "mandatory in the next release.", file=sys.stderr)
        env.run_test_sets(test_sets_to_check, ostf_credentials)
        tests_state = env.get_state_of_tests()
        self.serializer.print_to_output(
            tests_state,
            env,
            print_method=print_health_check
        )

    def list(self, params):
        """To list all health check test sets:
                fuel --env 1 health
        or:
                fuel --env 1 health --list
        """
        env = Environment(params.env)
        test_sets = env.get_testsets()
        self.serializer.print_to_output(
            test_sets,
            format_table(test_sets)
        )
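`check` assembles `ostf_credentials` from only the options the user explicitly supplied, so unset credentials are omitted rather than sent as nulls. That pattern generalizes; a small sketch (the helper name and the `Opts` holder are illustrative, not fuelclient code):

```python
def collect_set_options(params, names):
    """Return a dict of only those attributes on params that are not None.

    Mirrors how the optional OSTF credential flags are gathered: options
    left unset stay out of the resulting payload entirely.
    """
    return {name: getattr(params, name, None)
            for name in names
            if getattr(params, name, None) is not None}


class Opts(object):
    """Stand-in for an argparse.Namespace of parsed options."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
```

Omitting unset keys (instead of sending `None` values) lets the server apply its own defaults and keeps "not provided" distinct from "provided as empty".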
@@ -1,58 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.objects.environment import Environment


class InterruptAction(Action):

    def __init__(self):
        super(InterruptAction, self).__init__()
        self.args = [
            Args.get_env_arg(required=True)
        ]
        self.flag_func_map = (
            (None, self.interrupt),
        )

    def interrupt(self, params):
        """To {action_name} some environment:
                fuel --env 1 {action_name}
        """
        env = Environment(params.env)
        intercept_task = getattr(env, self.action_name)()
        self.serializer.print_to_output(
            intercept_task.data,
            "{0} task of environment with id={1} started. "
            "To check task status run 'fuel task --tid {2}'.".format(
                self.action_name.title(),
                params.env,
                intercept_task.data["id"]
            )
        )


class StopAction(InterruptAction):
    """Stop deployment process for specific environment
    """
    action_name = "stop"


class ResetAction(InterruptAction):
    """Reset deployed environment
    """
    action_name = "reset"
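`InterruptAction.interrupt` resolves which environment method to call from the subclass's `action_name` via `getattr`, so `StopAction` and `ResetAction` need only declare a name. A minimal illustration of that pattern (the `Environment` stub here is an illustrative stand-in, not fuelclient's class):

```python
class Environment(object):
    # Illustrative stand-in: each interrupt method returns a task label.
    def stop(self):
        return "stop-task"

    def reset(self):
        return "reset-task"


class InterruptAction(object):
    action_name = None

    def interrupt(self, env):
        # Look up the method named by the subclass on the environment
        # object and invoke it.
        return getattr(env, self.action_name)()


class StopAction(InterruptAction):
    action_name = "stop"


class ResetAction(InterruptAction):
    action_name = "reset"
```

The same `action_name` string also drives the shared docstring templates (`"To {action_name} some environment"`), so one base class serves every interrupt-style command.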
@@ -1,151 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.objects.environment import Environment


class NetworkAction(Action):
    """Show or modify network settings of specific environments
    """
    action_name = "network"

    def __init__(self):
        super(NetworkAction, self).__init__()
        self.args = (
            Args.get_env_arg(required=True),
            Args.get_dir_arg("Directory with network data."),
            group(
                Args.get_download_arg(
                    "Download current network configuration."),
                Args.get_verify_arg(
                    "Verify current network configuration."),
                Args.get_upload_arg(
                    "Upload changed network configuration."),
                required=True
            )
        )
        self.flag_func_map = (
            ("upload", self.upload),
            ("verify", self.verify),
            ("download", self.download)
        )

    def upload(self, params):
        """To upload network configuration from some
        directory for some environment:
                fuel --env 1 network --upload --dir path/to/directory
        """
        env = Environment(params.env)
        network_data = env.read_network_data(
            directory=params.dir,
            serializer=self.serializer
        )
        env.set_network_data(network_data)
        print("Network configuration uploaded.")

    def verify(self, params):
        """To verify network configuration from some directory
        for some environment:
                fuel --env 1 network --verify --dir path/to/directory
        """
        env = Environment(params.env)
        response = env.verify_network()
        print(
            "Verification status is '{status}'. Message: {message}"
            .format(**response)
        )

    def download(self, params):
        """To download network configuration into the current
        directory for some environment:
                fuel --env 1 network --download
        """
        env = Environment(params.env)
        network_data = env.get_network_data()
        network_file_path = env.write_network_data(
            network_data,
            directory=params.dir,
            serializer=self.serializer)
        print(
            "Network configuration for environment with id={0}"
            " downloaded to {1}"
            .format(env.id, network_file_path)
        )


class NetworkTemplateAction(Action):
    """Manipulate network templates for a specific environment
    """
    action_name = 'network-template'

    def __init__(self):
        super(NetworkTemplateAction, self).__init__()
        self.args = (
            Args.get_env_arg(required=True),
            Args.get_dir_arg("Directory with network templates."),
            group(
                Args.get_download_arg(
                    "Download current network template configuration."),
                Args.get_upload_arg(
                    "Upload changed network template configuration."),
                Args.get_delete_arg(
                    "Delete network template configuration."),
                required=True))

        self.flag_func_map = (
            ("upload", self.upload),
            ("download", self.download),
            ("delete", self.delete))

    def upload(self, params):
        """Uploads network template from filesystem path
        for specified environment:
                fuel --env 1 network-template --upload --dir path/to/directory
        """
        env = Environment(params.env)
        network_template_data = env.read_network_template_data(
            directory=params.dir,
            serializer=self.serializer)
        env.set_network_template_data(network_template_data)
        full_path = self.serializer.prepare_path(
            env.get_network_template_data_path(directory=params.dir))
        print("Network template {0} has been uploaded.".format(full_path))

    def download(self, params):
        """Downloads network template into the current
        directory for specified environment:
                fuel --env 1 network-template --download
        """
        env = Environment(params.env)
        template_data = env.get_network_template_data()
        network_template_file_path = env.write_network_template_data(
            template_data,
            directory=params.dir,
            serializer=self.serializer)

        print("Network template configuration for environment with id={0}"
              " downloaded to {1}".format(env.id, network_template_file_path))

    def delete(self, params):
        """Deletes network template for specified environment:
                fuel --env 1 network-template --delete
        """
        env = Environment(params.env)
        env.delete_network_template_data()

        print("Network template configuration for environment id={0}"
              " has been deleted.".format(env.id))
@@ -1,147 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

import six

from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.formatting import format_table
from fuelclient.commands.network_group import get_args_for_update
from fuelclient.objects.network_group import NetworkGroup
from fuelclient.objects.network_group import NetworkGroupCollection


class NetworkGroupAction(Action):
    """Show or modify network groups
    """
    action_name = "network-group"
    acceptable_keys = ("id", "name", "vlan_start", "cidr",
                       "gateway", "group_id")

    def __init__(self):
        super(NetworkGroupAction, self).__init__()
        self.args = (
            Args.get_env_arg(),
            Args.get_name_arg("Name of new network group."),
            Args.get_node_group_arg("ID of node group."),
            Args.get_release_arg("Release ID this network group belongs to."),
            Args.get_vlan_arg("VLAN of network."),
            Args.get_cidr_arg("CIDR of network."),
            Args.get_gateway_arg("Gateway of network."),
            Args.get_network_group_arg("ID of network group."),
            Args.get_meta_arg("Metadata in JSON format to override default "
                              "network metadata."),
            group(
                Args.get_create_network_arg(
                    "Create a new network group for the specified "
                    "node group."
                ),
                Args.get_delete_arg("Delete specified network groups."),
                Args.get_list_arg("List all network groups."),
                Args.get_set_arg("Set network group parameters.")
            )
        )
        self.flag_func_map = (
            ("create", self.create),
            ("delete", self.delete),
            ("set", self.set),
            (None, self.list),
        )

    @check_all('nodegroup', 'name', 'cidr')
    def create(self, params):
        """Create a new network group
                fuel network-group --create --node-group 1 --name "new network"
                --release 2 --vlan 100 --cidr 10.0.0.0/24

                fuel network-group --create --node-group 2 --name "new network"
                --release 2 --vlan 100 --cidr 10.0.0.0/24 --gateway 10.0.0.1
                --meta 'meta information in JSON format'
        """
        meta = self.serializer.deserialize(params.meta) if params.meta else {}

        NetworkGroup.create(
            params.name,
            params.release,
            params.vlan,
            params.cidr,
            params.gateway,
            int(params.nodegroup.pop()),
            meta
        )
        self.serializer.print_to_output(
            {},
            "Network group {0} has been created".format(params.name)
        )

    @check_all('network')
    def delete(self, params):
        """Delete the specified network groups
                fuel network-group --delete --network 1
                fuel network-group --delete --network 2,3,4
        """
        ngs = NetworkGroup.get_by_ids(params.network)
        for network_group in ngs:
            network_group.delete()

        self.serializer.print_to_output(
            {},
            "Network groups with IDs {0} have been deleted.".format(
                ','.join(params.network))
        )

    @check_all('network')
    def set(self, params):
        """Set parameters for the specified network group:
                fuel network-group --set --network 1 --name new_name
        """
        # Since network has set type and we cannot update multiple network
        # groups at once, we pick the first network group id from the set.
        ng_id = next(iter(params.network))

        if len(params.network) > 1:
            msg = ("Warning: Only the first network with id={0}"
                   " will be updated.".format(ng_id))
            six.print_(msg, file=sys.stderr)

        ng = NetworkGroup(ng_id)

        update_params = get_args_for_update(params, self.serializer)
        data = ng.set(update_params)

        self.serializer.print_to_output(
            data,
            "Network group id={0} has been updated".format(ng_id))

    def list(self, params):
        """To list all available network groups:
                fuel network-group list
|
|
||||||
|
|
||||||
To filter them by node group:
|
|
||||||
fuel network-group --node-group 1
|
|
||||||
"""
|
|
||||||
group_collection = NetworkGroupCollection.get_all()
|
|
||||||
if params.nodegroup:
|
|
||||||
group_collection.filter_by_group_id(int(params.nodegroup.pop()))
|
|
||||||
self.serializer.print_to_output(
|
|
||||||
group_collection.data,
|
|
||||||
format_table(
|
|
||||||
group_collection.data,
|
|
||||||
acceptable_keys=self.acceptable_keys,
|
|
||||||
)
|
|
||||||
)
|
|
|
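The Action classes in this client route a parsed CLI flag to a handler through a `flag_func_map` tuple, with a `None` flag marking the default handler. A minimal standalone sketch of that dispatch pattern (the class and parameter shape here are illustrative, not part of fuelclient):

```python
class MiniAction(object):
    """Dispatch the first matching flag to its handler, flag_func_map style."""

    def __init__(self):
        # (flag_name, handler) pairs; the None entry is the fallback.
        self.flag_func_map = (
            ("create", self.create),
            ("delete", self.delete),
            (None, self.list),
        )

    def act(self, params):
        # Walk the map in order; the first truthy flag (or None) wins.
        for flag, func in self.flag_func_map:
            if flag is None or params.get(flag):
                return func(params)

    def create(self, params):
        return "created {0}".format(params["name"])

    def delete(self, params):
        return "deleted"

    def list(self, params):
        return "listing"
```

With this ordering, `MiniAction().act({"create": True, "name": "net1"})` dispatches to `create`, while an empty params dict falls through to the default `list` handler.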
@@ -1,418 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from itertools import groupby
from operator import attrgetter

from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
from fuelclient.cli.actions.base import check_any
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli import error
from fuelclient.cli.formatting import format_table
from fuelclient.objects.environment import Environment
from fuelclient.objects.node import Node
from fuelclient.objects.node import NodeCollection


class NodeAction(Action):
    """List and assign available nodes to environments
    """
    action_name = "node"
    acceptable_keys = ("id", "status", "name", "cluster", "ip",
                       "mac", "roles", "pending_roles", "online", "group_id")

    def __init__(self):
        super(NodeAction, self).__init__()
        self.args = [
            Args.get_env_arg(),
            group(
                Args.get_list_arg("List all nodes."),
                Args.get_set_arg("Set role for specific node."),
                Args.get_delete_arg("Delete specific node from environment."),
                Args.get_attributes_arg("Node attributes."),
                Args.get_network_arg("Node network configuration."),
                Args.get_disk_arg("Node disk configuration."),
                Args.get_deploy_arg("Deploy specific nodes."),
                Args.get_hostname_arg("Set node hostname."),
                Args.get_node_name_arg("Set node name."),
                Args.get_delete_from_db_arg(
                    "Delete specific nodes only from the Fuel db.\n"
                    "The user should still delete the node from cobbler."),
                Args.get_provision_arg("Provision specific nodes."),
            ),
            group(
                Args.get_default_arg(
                    "Get the default configuration of a node"),
                Args.get_download_arg(
                    "Download the configuration of a specific node"),
                Args.get_upload_arg(
                    "Upload a configuration to a specific node")
            ),
            Args.get_dir_arg(
                "Select directory to which node attributes are downloaded"),
            Args.get_node_arg("Node id."),
            Args.get_force_arg("Bypass parameter validation."),
            Args.get_noop_run_deployment_arg(),
            Args.get_all_arg("Select all nodes."),
            Args.get_role_arg("Role to assign to the node."),
            group(
                Args.get_skip_tasks(),
                Args.get_tasks()
            ),
            Args.get_graph_endpoint(),
            Args.get_graph_startpoint(),
        ]

        self.flag_func_map = (
            ("set", self.set),
            ("delete", self.delete),
            ("network", self.attributes),
            ("disk", self.attributes),
            ("deploy", self.start),
            ("provision", self.start),
            ("hostname", self.set_hostname),
            ("name", self.set_name),
            ("delete-from-db", self.delete_from_db),
            ("tasks", self.execute_tasks),
            ("skip", self.execute_tasks),
            ("end", self.execute_tasks),
            ("start", self.execute_tasks),
            ("attributes", self.node_attributes),
            (None, self.list)
        )

    @check_all("node", "role", "env")
    def set(self, params):
        """Assign nodes to an environment with specific roles:
            fuel --env 1 node set --node 1 --role controller
            fuel --env 1 node set --node 2,3,4 --role compute,cinder
        """
        env = Environment(params.env)
        nodes = Node.get_by_ids(params.node)
        roles = map(str.lower, params.role)
        env.assign(nodes, roles)
        self.serializer.print_to_output(
            {},
            "Nodes {0} with roles {1} "
            "were added to environment {2}"
            .format(params.node, roles, params.env)
        )

    @check_any("node", "env")
    def delete(self, params):
        """Remove nodes from an environment:
            fuel --env 1 node remove --node 2,3

            Remove nodes regardless of which environment they were assigned to:
            fuel node remove --node 2,3,6,7

            Remove all nodes from an environment:
            fuel --env 1 node remove --all
        """
        if params.env:
            env = Environment(params.env)
            if params.node:
                env.unassign(params.node)
                self.serializer.print_to_output(
                    {},
                    "Nodes with ids {0} were removed "
                    "from environment with id {1}."
                    .format(params.node, params.env))
            else:
                if params.all:
                    env.unassign_all()
                else:
                    raise error.ArgumentException(
                        "You have to select which nodes to remove "
                        "with --node-id. Try --all for removing all nodes."
                    )
                self.serializer.print_to_output(
                    {},
                    "All nodes from environment with id {0} were removed."
                    .format(params.env))
        else:
            nodes = map(Node, params.node)
            for env_id, _nodes in groupby(nodes, attrgetter("env_id")):
                list_of_nodes = [n.id for n in _nodes]
                if env_id:
                    Environment(env_id).unassign(list_of_nodes)
                    self.serializer.print_to_output(
                        {},
                        "Nodes with ids {0} were removed "
                        "from environment with id {1}."
                        .format(list_of_nodes, env_id)
                    )
                else:
                    self.serializer.print_to_output(
                        {},
                        "Nodes with ids {0} aren't added to "
                        "any environment.".format(list_of_nodes)
                    )

    @check_all("node")
    @check_any("default", "download", "upload")
    def attributes(self, params):
        """Download the current or default disk or network
        configuration for a node:
            fuel node --node-id 2 --disk --default
            fuel node --node-id 2 --network --download \\
                --dir path/to/directory

        Upload a disk or network configuration for a node:
            fuel node --node-id 2 --network --upload
            fuel node --node-id 2 --disk --upload --dir path/to/directory
        """
        nodes = Node.get_by_ids(params.node)
        attribute_type = "interfaces" if params.network else "disks"
        attributes = []
        files = []
        if params.default:
            for node in nodes:
                default_attribute = node.get_default_attribute(attribute_type)
                file_path = node.write_attribute(
                    attribute_type,
                    default_attribute,
                    params.dir,
                    serializer=self.serializer
                )
                files.append(file_path)
                attributes.append(default_attribute)
            message = "Default node attributes for {0} were written" \
                      " to:\n{1}".format(attribute_type, "\n".join(files))
        elif params.upload:
            for node in nodes:
                attribute = node.read_attribute(
                    attribute_type,
                    params.dir,
                    serializer=self.serializer
                )
                node.upload_node_attribute(
                    attribute_type,
                    attribute
                )
                attributes.append(attribute)
            message = "Node attributes for {0} were uploaded" \
                      " from {1}".format(attribute_type, params.dir)
        else:
            for node in nodes:
                downloaded_attribute = node.get_attribute(attribute_type)
                file_path = node.write_attribute(
                    attribute_type,
                    downloaded_attribute,
                    params.dir,
                    serializer=self.serializer
                )
                attributes.append(downloaded_attribute)
                files.append(file_path)
            message = "Node attributes for {0} were written" \
                      " to:\n{1}".format(attribute_type, "\n".join(files))
        print(message)

    def get_env_id(self, node_collection):
        env_ids = set(n.env_id for n in node_collection)
        if len(env_ids) != 1:
            raise error.ActionException(
                "Input nodes are assigned to multiple environments!")
        else:
            return env_ids.pop()

    @check_all("node")
    def start(self, params):
        """Deploy or provision nodes:
            fuel node --node-id 2 --provision
            fuel node --node-id 2 --deploy
        """
        node_collection = NodeCollection.init_with_ids(params.node)
        method_type = "deploy" if params.deploy else "provision"
        env_id_to_start = self.get_env_id(node_collection)

        if not env_id_to_start:
            raise error.ActionException(
                "Input nodes are not assigned to any environment!")

        task = Environment(env_id_to_start).install_selected_nodes(
            method_type, node_collection.collection)

        self.serializer.print_to_output(
            task.data,
            "Started {0}ing {1}."
            .format(method_type, node_collection))

    @check_all("node")
    def execute_tasks(self, params):
        """Execute deployment tasks:
            fuel node --node 2 --tasks hiera netconfig
            fuel node --node 2 --tasks netconfig --force
            fuel node --node 2 --skip hiera netconfig
            fuel node --node 2 --skip rsync --end pre_deployment_end
            fuel node --node 2 --end netconfig
            fuel node --node 2 --start hiera --end neutron
            fuel node --node 2 --start post_deployment_start
        """
        node_collection = NodeCollection.init_with_ids(params.node)
        env_id_to_start = self.get_env_id(node_collection)

        env = Environment(env_id_to_start)

        tasks = params.tasks or None
        force = params.force or None
        noop_run = params.noop_run or None

        if params.skip or params.end or params.start:
            tasks = env.get_tasks(
                skip=params.skip,
                end=params.end,
                start=params.start,
                include=tasks)

        if not tasks:
            self.serializer.print_to_output({}, "Nothing to run.")
            return

        task = env.execute_tasks(
            node_collection.collection, tasks=tasks, force=force,
            noop_run=noop_run)

        self.serializer.print_to_output(
            task.data,
            "Started tasks {0} for nodes {1}.".format(tasks, node_collection))

    def list(self, params):
        """To list all available nodes:
            fuel node

            To filter them by environment:
            fuel --env-id 1 node

            It's possible to address nodes by their short MAC addresses:
            fuel node --node-id 80:ac
            fuel node remove --node-id 80:ac,5d:a2
        """
        if params.node:
            node_collection = NodeCollection.init_with_ids(params.node)
        else:
            node_collection = NodeCollection.get_all()
        if params.env:
            node_collection.filter_by_env_id(int(params.env))
        self.serializer.print_to_output(
            node_collection.data,
            format_table(
                node_collection.data,
                acceptable_keys=self.acceptable_keys,
                column_to_join=("roles", "pending_roles")
            )
        )

    @check_all("node")
    def delete_from_db(self, params):
        """To delete nodes from the Fuel db:
            fuel node --node-id 1 --delete-from-db
            fuel node --node-id 1 2 --delete-from-db
            (this works only for offline nodes)
            fuel node --node-id 1 --delete-from-db --force
            (this forces deletion of nodes regardless of their state)
        """
        if not params.force:
            node_collection = NodeCollection.init_with_ids(params.node)

            online_nodes = [node for node in node_collection.data
                            if node['online']]

            if online_nodes:
                raise error.ActionException(
                    "Nodes with ids {0} cannot be deleted from the cluster "
                    "because they are online. You might want to use the "
                    "--force option.".format(
                        [node['id'] for node in online_nodes]))

        NodeCollection.delete_by_ids(params.node)

        self.serializer.print_to_output(
            {},
            "Nodes with ids {0} have been deleted from the Fuel db.".format(
                params.node)
        )

    @staticmethod
    def _get_one_node(params):
        """Ensures that only one node was passed in the command and returns it.

        :raises ArgumentException: When more than one node is provided.
        """
        if len(params.node) > 1:
            raise error.ArgumentException(
                "You should select only one node to change.")

        return Node(params.node[0])

    @check_all("node", "name")
    def set_name(self, params):
        """To set the node name:
            fuel node --node-id 1 --name NewName
        """
        node = self._get_one_node(params)
        node.set({"name": params.name})
        self.serializer.print_to_output(
            {},
            u"Name for node with id {0} has been changed to {1}."
            .format(node.id, params.name)
        )

    @check_all("node", "hostname")
    def set_hostname(self, params):
        """To set the node hostname:
            fuel node --node-id 1 --hostname ctrl-01
        """
        node = self._get_one_node(params)
        node.set({"hostname": params.hostname})
        self.serializer.print_to_output(
            {},
            "Hostname for node with id {0} has been changed to {1}."
            .format(node.id, params.hostname)
        )

    @check_all("node")
    @check_any("upload", "download")
    def node_attributes(self, params):
        """Download node attributes for the specified node:
            fuel node --node-id 1 --attributes --download [--dir download-dir]

        Upload node attributes for the specified node:
            fuel node --node-id 1 --attributes --upload [--dir upload-dir]
        """
        node = self._get_one_node(params)
        if params.upload:
            data = node.read_attribute(
                'attributes', params.dir, serializer=self.serializer)
            node.update_node_attributes(data)
            self.serializer.print_to_output(
                {},
                "Attributes for node {0} were uploaded."
                .format(node.id))
        elif params.download:
            attributes = node.get_node_attributes()
            file_path = node.write_attribute(
                'attributes', attributes,
                params.dir, serializer=self.serializer)
            self.serializer.print_to_output(
                {},
                "Attributes for node {0} were written to {1}"
                .format(node.id, file_path))
        else:
            raise error.ArgumentException(
                "--upload or --download action should be specified.")
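NodeAction.delete groups nodes by `env_id` with `itertools.groupby`. A detail worth remembering about that function: it only merges runs of *adjacent* equal keys, so input generally has to be sorted by the same key first. A small self-contained illustration (plain dicts stand in for node objects):

```python
from itertools import groupby
from operator import itemgetter

nodes = [
    {"id": 1, "env_id": 2},
    {"id": 2, "env_id": 1},
    {"id": 3, "env_id": 2},
]

# Sort first: without this, env_id 2 would appear as two separate groups.
nodes.sort(key=itemgetter("env_id"))
grouped = {
    env_id: [n["id"] for n in items]
    for env_id, items in groupby(nodes, key=itemgetter("env_id"))
}
# grouped == {1: [2], 2: [1, 3]}
```

The same holds for `attrgetter("env_id")` over real node objects: sorting the collection by `env_id` before grouping guarantees one group per environment.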
@@ -1,137 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

import six

from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.error import ActionException
from fuelclient.cli.formatting import format_table
from fuelclient.objects import Environment
from fuelclient.objects.nodegroup import NodeGroup
from fuelclient.objects.nodegroup import NodeGroupCollection


class NodeGroupAction(Action):
    """Show or modify node groups
    """
    action_name = "nodegroup"
    acceptable_keys = ("id", "cluster_id", "name")

    def __init__(self):
        super(NodeGroupAction, self).__init__()
        self.args = (
            Args.get_env_arg(),
            Args.get_list_arg("List all node groups."),
            Args.get_name_arg("Name of new node group."),
            Args.get_group_arg("ID of node group."),
            Args.get_node_arg("List of nodes to assign specified group to."),
            group(
                Args.get_create_arg(
                    "Create a new node group in the specified environment."
                ),
                Args.get_assign_arg(
                    "Assign nodes to the specified node group."),
                Args.get_delete_arg(
                    "Delete specified node groups."),
            )
        )
        self.flag_func_map = (
            ("create", self.create),
            ("delete", self.delete),
            ("assign", self.assign),
            (None, self.list)
        )

    @check_all("env", "name")
    def create(self, params):
        """Create a new node group
            fuel --env 1 nodegroup --create --name "group 1"
        """
        env_id = int(params.env)
        data = NodeGroup.create(params.name, env_id)
        env = Environment(env_id)
        network_data = env.get_network_data()
        seg_type = network_data['networking_parameters'].get(
            'segmentation_type'
        )
        if seg_type == 'vlan':
            six.print_("WARNING: With the VLAN segmentation type there is no "
                       "connectivity over the private network between "
                       "instances running on hypervisors in different "
                       "segments; it is the user's responsibility to handle "
                       "this situation.",
                       file=sys.stderr)

        self.serializer.print_to_output(
            data,
            u"Node group '{name}' with id={id} "
            u"in environment {env} was created!"
            .format(env=env_id, **data)
        )

    @check_all("group")
    def delete(self, params):
        """Delete the specified node groups
            fuel nodegroup --delete --group 1
            fuel nodegroup --delete --group 2,3,4
        """
        ngs = NodeGroup.get_by_ids(params.group)
        for n in ngs:
            if n.name == "default":
                raise ActionException(
                    "Default node groups cannot be deleted."
                )
            data = NodeGroup.delete(n.id)
            self.serializer.print_to_output(
                data,
                u"Node group with id={id} was deleted!"
                .format(id=n.id)
            )

    @check_all("node", "group")
    def assign(self, params):
        """Assign nodes to the specified node group:
            fuel nodegroup --assign --node 1 --group 1
            fuel nodegroup --assign --node 2,3,4 --group 1
        """
        if len(params.group) > 1:
            raise ActionException(
                "Nodes can only be assigned to one node group."
            )

        group = NodeGroup(params.group.pop())
        group.assign(params.node)

    def list(self, params):
        """To list all available node groups:
            fuel nodegroup

            To filter them by environment:
            fuel --env-id 1 nodegroup
        """
        group_collection = NodeGroupCollection.get_all()
        if params.env:
            group_collection.filter_by_env_id(int(params.env))
        self.serializer.print_to_output(
            group_collection.data,
            format_table(
                group_collection.data,
                acceptable_keys=self.acceptable_keys,
            )
        )
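The `@check_all` / `@check_any` decorators used throughout these actions validate that required CLI parameters are present before the handler body runs. A hedged sketch of how a `check_all`-style decorator can work (this is an illustration, not the fuelclient implementation):

```python
from functools import wraps


class ArgumentException(Exception):
    """Raised when a required CLI argument is missing (illustrative)."""


def check_all(*required):
    """Fail early unless every named attribute on params is truthy."""
    def decorator(func):
        @wraps(func)
        def wrapper(self, params):
            missing = [name for name in required
                       if not getattr(params, name, None)]
            if missing:
                raise ArgumentException(
                    "Missing required arguments: {0}".format(
                        ", ".join(missing)))
            return func(self, params)
        return wrapper
    return decorator
```

Applied as `@check_all("env", "name")` on a handler, the wrapper raises before the handler ever runs when either parameter is absent, which keeps each action body free of repetitive argument checks.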
@@ -1,113 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.formatting import format_table
from fuelclient.objects.notifications import Notifications


class NotificationsAction(Action):
    """List and create notifications
    """
    action_name = "notifications"

    acceptable_keys = (
        "id",
        "message",
        "status",
        "topic",
    )

    def __init__(self):
        super(NotificationsAction, self).__init__()
        self.args = [
            Args.group(
                Args.get_list_arg("List all available notifications."),
                Args.get_notify_send_arg("Send notification"),
                Args.get_notify_mark_as_read_arg(
                    "Mark notification(s) as read ('*' to mark all as read)."),
            ),
            Args.group(
                Args.get_notify_topic_arg("Notification topic (severity)"),
            ),
            Args.get_notify_all_messages_arg(
                "Select all messages (only unread by default)."),
        ]
        self.flag_func_map = (
            ("send", self.send),
            ("mark-as-read", self.mark_as_read),
            (None, self.list),
        )

    def list(self, params):
        """Print all available notifications:
            fuel notifications
            fuel notifications --list
        """
        notifications = Notifications.get_all_data()

        if not params.all:
            notifications = [notification for notification in notifications
                             if notification['status'] == 'unread']

        self.serializer.print_to_output(
            notifications,
            format_table(
                notifications,
                acceptable_keys=self.acceptable_keys
            )
        )

    def mark_as_read(self, params):
        """Mark the given notifications as read:
            fuel notifications --mark-as-read 1 2
            fuel notifications -r 1 2
        """
        result = Notifications.mark_as_read(
            ids=getattr(params, 'mark-as-read'))

        self.serializer.print_to_output(
            result,
            'Notification(s) marked as read'
        )

    def send(self, params):
        """Send a notification:
            fuel notifications --send "message" --topic done
        """
        message = params.send
        if isinstance(message, list):
            message = ' '.join(message)
        result = Notifications.send(message, topic=params.topic)
        self.serializer.print_to_output(
            result,
            "Notification sent")


class NotifyAction(NotificationsAction):
    """Shortcut for quickly sending a notification.
    """

    action_name = "notify"

    def __init__(self):
        super(NotifyAction, self).__init__()
        self.args = [
            Args.get_notify_message_arg("Notification message"),
            Args.get_notify_topic_arg("Notification topic (severity)"),
        ]
        self.flag_func_map = (
            (None, self.send),
        )
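`mark_as_read` above reads its value with `getattr(params, 'mark-as-read')` rather than dot access: an argparse `dest` that contains a hyphen is a perfectly valid attribute name on the namespace, but it is only reachable via `getattr`. A small sketch of that mechanic (the option name mirrors the one above; the explicit `dest` is the point of the example):

```python
import argparse

parser = argparse.ArgumentParser()
# An explicit dest may contain a hyphen; argparse stores the value via
# setattr, so the attribute exists but cannot be read with dot syntax.
parser.add_argument("--mark-as-read", dest="mark-as-read", nargs="+")

params = parser.parse_args(["--mark-as-read", "1", "2"])
ids = getattr(params, "mark-as-read")
# ids == ["1", "2"]
```

Without the explicit `dest`, argparse would default to `mark_as_read` (dashes converted to underscores), and normal attribute access would work instead.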
@@ -1,155 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.formatting import format_table
from fuelclient.objects.openstack_config import OpenstackConfig


class OpenstackConfigAction(Action):
    """Manage openstack configuration"""

    action_name = 'openstack-config'
    acceptable_keys = ('id', 'is_active', 'config_type',
                       'cluster_id', 'node_id', 'node_role')

    def __init__(self):
        super(OpenstackConfigAction, self).__init__()
        self.args = (
            Args.get_env_arg(),
            Args.get_file_arg("Openstack configuration file"),
            Args.get_node_arg("Node IDs list"),
            Args.get_single_role_arg("Node role"),
            Args.get_config_id_arg("Openstack config ID"),
            Args.get_deleted_arg("Get deleted configurations"),
            Args.get_force_arg("Force configuration update"),
            group(
                Args.get_list_arg("List openstack configurations"),
                Args.get_download_arg(
                    "Download current openstack configuration"),
                Args.get_upload_arg("Upload new openstack configuration"),
                Args.get_delete_arg("Delete openstack configuration"),
                Args.get_execute_arg("Apply openstack configuration"),
                required=True,
            )
        )

        self.flag_func_map = (
            ('list', self.list),
            ('download', self.download),
            ('upload', self.upload),
            ('delete', self.delete),
            ('execute', self.execute)
        )

    @check_all('env')
    def list(self, params):
        """List all available configurations:
            fuel openstack-config --list --env 1
            fuel openstack-config --list --env 1 --node 1[,2,3,...]
            fuel openstack-config --list --env 1 --deleted
        """
        filters = {'cluster_id': params.env}

        if 'deleted' in params:
            filters['is_active'] = int(not params.deleted)

        if 'node' in params:
            filters['node_ids'] = params.node

        if 'role' in params:
            filters['node_role'] = params.role

        configs = OpenstackConfig.get_filtered_data(**filters)

        self.serializer.print_to_output(
            configs,
            format_table(
                configs,
                acceptable_keys=self.acceptable_keys
            )
        )

    @check_all('config-id', 'file')
    def download(self, params):
        """Download an existing configuration to a file:
            fuel openstack-config --download --config-id 1 --file config.yaml
        """
        config_id = getattr(params, 'config-id')
        config = OpenstackConfig(config_id)
        data = config.data
        OpenstackConfig.write_file(params.file, {
            'configuration': data['configuration']})

    @check_all('env', 'file')
    def upload(self, params):
        """Upload a new configuration from a file:
            fuel openstack-config --upload --env 1 --file config.yaml
            fuel openstack-config --upload --env 1 --node 1[,2,3,...]
                --file config.yaml
            fuel openstack-config --upload --env 1
                --role controller --file config.yaml
        """
        node_ids = getattr(params, 'node', None)
        node_role = getattr(params, 'role', None)
        data = OpenstackConfig.read_file(params.file)

        configs = OpenstackConfig.create(
            cluster_id=params.env,
            configuration=data['configuration'],
            node_ids=node_ids, node_role=node_role)
        configs = [c.data for c in configs]
        self.serializer.print_to_output(
            configs,
            format_table(
                configs,
                acceptable_keys=self.acceptable_keys
            )
        )

    @check_all('config-id')
    def delete(self, params):
        """Delete an existing configuration:
            fuel openstack-config --delete --config-id 1
        """
        config_id = getattr(params, 'config-id')
        config = OpenstackConfig(config_id)
        config.delete()
        print("Openstack configuration '{0}' "
              "has been deleted.".format(config_id))

    @check_all('env')
    def execute(self, params):
        """Deploy a configuration:
            fuel openstack-config --execute --env 1
            fuel openstack-config --execute --env 1 --node 1[,2,3,...]
            fuel openstack-config --execute --env 1 --role controller
            fuel openstack-config --execute --env 1 --force
        """
        node_ids = getattr(params, 'node', None)
        node_role = getattr(params, 'role', None)
|
||||||
force = getattr(params, 'force', False)
|
|
||||||
task_result = OpenstackConfig.execute(
|
|
||||||
cluster_id=params.env, node_ids=node_ids,
|
|
||||||
node_role=node_role, force=force)
|
|
||||||
if task_result['status'] == 'error':
|
|
||||||
print(
|
|
||||||
'Error applying openstack configuration: {0}.'.format(
|
|
||||||
task_result['message'])
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
print('Openstack configuration update is started.')
|
|
|
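The filter-building pattern in `list()` above — start from the required `cluster_id` and add a key only for each optional flag the user actually passed — can be sketched as a standalone helper. The function name and keyword arguments here are illustrative, not part of fuelclient's API:

```python
def build_config_filters(env, deleted=None, node=None, role=None):
    """Build the keyword filters that list() passes on to the
    configuration query; optional flags are only included when
    the caller supplied them."""
    filters = {'cluster_id': env}
    if deleted is not None:
        # --deleted lists inactive configurations, so invert the flag
        filters['is_active'] = int(not deleted)
    if node is not None:
        filters['node_ids'] = node
    if role is not None:
        filters['node_role'] = role
    return filters
```

Omitted flags leave the result minimal, so the server-side query is only narrowed by what the user asked for.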
@@ -1,209 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import six

import fuelclient.cli.arguments as Args

from fuelclient.cli.actions.base import Action
from fuelclient.cli import error
from fuelclient.cli.formatting import format_table
from fuelclient.objects.plugins import Plugins
from fuelclient import utils


class PluginAction(Action):
    """List and modify currently installed plugins
    """
    action_name = "plugins"

    acceptable_keys = (
        "id",
        "name",
        "version",
        "package_version",
        "releases"
    )

    def __init__(self):
        super(PluginAction, self).__init__()
        self.args = [
            Args.group(
                Args.get_list_arg(
                    "List all registered plugins."),
                Args.get_plugin_install_arg(
                    "Install and register plugin package"),
                Args.get_plugin_remove_arg(
                    "Remove and unregister plugin"),
                Args.get_plugin_register_arg(
                    "Register installed plugin"),
                Args.get_plugin_unregister_arg(
                    "Unregister plugin"),
                Args.get_plugin_update_arg(
                    "Update installed plugin"),
                Args.get_plugin_downgrade_arg(
                    "Downgrade installed plugin"),
                Args.get_plugin_sync_arg(
                    "Synchronise plugins with API service")),
            Args.get_plugin_arg("Plugin id."),
            Args.get_force_arg("Force action")
        ]
        self.flag_func_map = (
            ("install", self.install),
            ("remove", self.remove),
            ("update", self.update),
            ("downgrade", self.downgrade),
            ("sync", self.sync),
            ("register", self.register),
            ("unregister", self.unregister),
            (None, self.list),
        )

    def list(self, params):
        """Print all available plugins

        fuel plugins
        fuel plugins --list
        """
        plugins = Plugins.get_all_data()
        # Replace the nested 'releases' dictionaries from the plugin
        # metadata with a flat summary of the release info (os, versions)
        for plugin in plugins:
            releases = collections.defaultdict(list)
            for release in plugin['releases']:
                releases[release['os']].append(release['version'])
            plugin['releases'] = ', '.join(
                '{} ({})'.format(k, ', '.join(v))
                for k, v in six.iteritems(releases))
        self.serializer.print_to_output(
            plugins,
            format_table(plugins, acceptable_keys=self.acceptable_keys))

    def install(self, params):
        """Install plugin archive and register in API service

        fuel plugins --install plugin-name-2.0-2.0.1-0.noarch.rpm
        """
        file_path = params.install
        self.check_file(file_path)
        results = Plugins.install(file_path, force=params.force)
        self.serializer.print_to_output(
            results,
            "Plugin {0} was successfully installed.".format(
                params.install))

    def remove(self, params):
        """Remove plugin from file system and from API service

        fuel plugins --remove plugin-name==1.0.1
        """
        name, version = self.parse_name_version(params.remove)
        results = Plugins.remove(name, version)

        self.serializer.print_to_output(
            results,
            "Plugin {0} was successfully removed.".format(params.remove))

    def update(self, params):
        """Update plugin from one minor version to another.
        For example, a plugin with version 2.0.0 can be updated
        to version 2.0.1, but not to version 2.1.0. Note that
        update is supported only for plugins with
        package_version 2.0.0 or newer.

        fuel plugins --update plugin-name-2.0-2.0.1-0.noarch.rpm
        """
        plugin_path = params.update
        self.check_file(plugin_path)
        result = Plugins.update(plugin_path)
        self.serializer.print_to_output(
            result,
            "Plugin {0} was successfully updated.".format(plugin_path))

    def downgrade(self, params):
        """Downgrade plugin from one minor version to another.
        For example, a plugin with version 2.0.1 can be downgraded
        to version 2.0.0, but not to version 1.0.0. Note that
        downgrade is supported only for plugins with
        package_version 2.0.0 or newer.

        fuel plugins --downgrade plugin-name-2.0-2.0.1-0.noarch.rpm
        """
        plugin_path = params.downgrade
        self.check_file(plugin_path)
        result = Plugins.downgrade(plugin_path)
        self.serializer.print_to_output(
            result,
            "Plugin {0} was successfully downgraded.".format(plugin_path))

    def sync(self, params):
        """Synchronise plugins on the file system with plugins in the
        API service: creates a plugin if it does not exist and
        updates existing plugins

        fuel plugins --sync
        fuel plugins --sync --plugin-id=1,2
        """
        Plugins.sync(plugin_ids=params.plugin)
        self.serializer.print_to_output(
            None, "Plugins were successfully synchronized.")

    def register(self, params):
        """Register plugin in API service

        fuel plugins --register plugin-name==1.0.1
        """
        name, version = self.parse_name_version(params.register)
        result = Plugins.register(name, version, force=params.force)
        self.serializer.print_to_output(
            result,
            "Plugin {0} was successfully registered.".format(params.register))

    def unregister(self, params):
        """Delete plugin from API service

        fuel plugins --unregister plugin-name==1.0.1
        """
        name, version = self.parse_name_version(params.unregister)
        result = Plugins.unregister(name, version)
        self.serializer.print_to_output(
            result,
            "Plugin {0} was successfully unregistered."
            "".format(params.unregister))

    def parse_name_version(self, param):
        """Takes a string and returns the name and version

        :param str param: string with name and version
        :raises: error.ArgumentException if version is not specified
        """
        attrs = param.split('==')

        if len(attrs) != 2:
            raise error.ArgumentException(
                'Syntax: fuel plugins <action> fuel_plugin==1.0.0')

        return attrs

    def check_file(self, file_path):
        """Checks if the file exists

        :param str file_path: path to the file
        :raises: error.ArgumentException if the file does not exist
        """
        if not utils.file_exists(file_path):
            raise error.ArgumentException(
                'File "{0}" does not exist'.format(file_path))
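The `name==version` convention used by `--remove`, `--register`, and `--unregister` reduces to a single split-and-validate step. This sketch uses a plain ValueError where fuelclient raises `error.ArgumentException`, so it runs standalone:

```python
def parse_name_version(param):
    """Split 'plugin-name==1.0.1' into (name, version);
    reject anything without exactly one '==' separator."""
    attrs = param.split('==')
    if len(attrs) != 2:
        raise ValueError(
            'Syntax: fuel plugins <action> fuel_plugin==1.0.0')
    return tuple(attrs)
```

Requiring exactly two parts also rejects inputs like `a==b==c`, which would otherwise silently truncate the version.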
@@ -1,172 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from collections import defaultdict
import os

from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
from fuelclient.cli.actions.base import check_any
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.formatting import format_table
from fuelclient.objects.release import Release
from fuelclient import utils


class ReleaseAction(Action):
    """List and modify currently available releases
    """
    action_name = "release"

    def __init__(self):
        super(ReleaseAction, self).__init__()
        self.args = [
            Args.get_release_arg('Specify particular release id'),
            Args.get_list_arg("List all available releases."),
            Args.get_network_arg("Release network configuration."),
            Args.get_deployment_tasks_arg("Release tasks configuration."),
            Args.get_sync_deployment_tasks_arg(),
            Args.get_file_pattern_arg(),
            Args.get_dir_arg(
                "Select directory to which download release attributes"),
            group(
                Args.get_download_arg(
                    "Download configuration of specific release"),
                Args.get_upload_arg(
                    "Upload configuration to specific release")
            )
        ]
        self.flag_func_map = (
            ('sync-deployment-tasks', self.sync_deployment_tasks),
            ('deployment-tasks', self.deployment_tasks),
            ('network', self.network),
            (None, self.list),
        )

    def list(self, params):
        """Print all available releases:
                fuel release --list

        Print release with specific id=1:
                fuel release --rel 1
        """
        acceptable_keys = (
            "id",
            "name",
            "state",
            "operating_system",
            "version"
        )
        if params.release:
            release = Release(params.release)
            data = [release.get_fresh_data()]
        else:
            data = Release.get_all_data()
        self.serializer.print_to_output(
            data,
            format_table(
                data,
                acceptable_keys=acceptable_keys
            )
        )

    @check_all("release")
    @check_any("download", "upload")
    def network(self, params):
        """Modify release networks configuration.
            fuel rel --rel 1 --network --download
            fuel rel --rel 2 --network --upload
        """
        release = Release(params.release)
        dir_path = self.full_path_directory(
            params.dir, 'release_{0}'.format(params.release))
        full_path = '{0}/networks'.format(dir_path)
        if params.download:
            networks = release.get_networks()
            self.serializer.write_to_path(full_path, networks)
            print("Networks for release {0} "
                  "downloaded into {1}.yaml".format(release.id, full_path))
        elif params.upload:
            networks = self.serializer.read_from_file(full_path)
            release.update_networks(networks)
            print("Networks for release {0} uploaded from {1}.yaml".format(
                release.id, full_path))

    @check_all("release")
    @check_any("download", "upload")
    def deployment_tasks(self, params):
        """Modify deployment_tasks for release.
            fuel rel --rel 1 --deployment-tasks --download
            fuel rel --rel 1 --deployment-tasks --upload
        """
        release = Release(params.release)
        dir_path = self.full_path_directory(
            params.dir, 'release_{0}'.format(params.release))
        full_path = '{0}/deployment_tasks'.format(dir_path)
        if params.download:
            tasks = release.get_deployment_tasks()
            self.serializer.write_to_path(full_path, tasks)
            print("Deployment tasks for release {0} "
                  "downloaded into {1}.yaml.".format(release.id, full_path))
        elif params.upload:
            tasks = self.serializer.read_from_file(full_path)
            release.update_deployment_tasks(tasks)
            print("Deployment tasks for release {0}"
                  " uploaded from {1}.yaml".format(release.id, dir_path))

    @check_all("dir")
    def sync_deployment_tasks(self, params):
        """Upload tasks for different releases based on directories.
        The string identifier for the release(s) to update is expected to be
        found in the path (see `fuel release`), like:

            /etc/puppet/<release>/
            /etc/puppet/liberty-9.0

            fuel rel --sync-deployment-tasks --dir /etc/puppet/liberty-9.0/
            fuel rel --sync-deployment-tasks --fp '*tasks.yaml'

        In case no directory was provided, the current directory
        will be used:

            fuel rel --sync-deployment-tasks
        """
        all_rels = Release.get_all_data()
        real_path = os.path.realpath(params.dir)
        serialized_tasks = defaultdict(list)
        versions = set(r['version'] for r in all_rels)

        for file_name in utils.iterfiles(real_path, params.filepattern):
            for version in versions:
                if version in file_name:
                    serialized_tasks[version].extend(
                        self.serializer.read_from_full_path(file_name))

        for rel in all_rels:
            release = Release(rel['id'])
            data = serialized_tasks.get(rel['version'])
            if data:
                release.update_deployment_tasks(data)
                print("Deployment tasks synchronized for release"
                      " {0} of version {1}.".format(rel['name'],
                                                    rel['version']))
            else:
                print("No tasks were synchronized for release {0} "
                      "of version {1}. (Hint: nothing matched "
                      "{2}/{1}/{3})".format(rel['name'],
                                            rel['version'],
                                            real_path,
                                            params.filepattern))
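The matching step in `sync_deployment_tasks()` — associating each task file with a release by looking for the release's version string inside the file path — can be sketched on its own. File contents are replaced by the file names themselves here, purely for illustration:

```python
from collections import defaultdict

def group_tasks_by_version(file_names, versions):
    """Map each release version to the task files whose path
    contains that version string as a substring."""
    grouped = defaultdict(list)
    for file_name in file_names:
        for version in versions:
            if version in file_name:
                grouped[version].append(file_name)
    return grouped
```

Because the match is a plain substring test, a path can land in more than one bucket if one version string is a prefix of another; the directory layout (`/etc/puppet/<release>/`) is what keeps the mapping unambiguous in practice.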
@@ -1,155 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from fuelclient.cli.actions.base import Action
from fuelclient.cli.actions.base import check_all
from fuelclient.cli.actions.base import check_any
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.cli.formatting import format_table
from fuelclient.cli.serializers import FileFormatBasedSerializer
from fuelclient.objects.role import Role


class RoleAction(Action):
    """List all roles for specific release or cluster
    """
    action_name = "role"

    fields_mapper = (
        ('env', 'clusters'),
        ('release', 'releases')
    )

    def __init__(self):
        # NOTE(dshulyak) these serializers are really messed up;
        # the serializer gets overwritten in several places
        self.file_serializer = FileFormatBasedSerializer()
        self.args = [
            Args.get_list_arg("List all roles"),
            group(
                Args.get_env_arg(),
                Args.get_release_arg("Release id"),
                required=True
            ),
            Args.get_str_arg("role", help="Name of the role"),
            Args.get_file_arg("File with role description"),

            group(
                Args.get_create_arg("Create role from file"),
                Args.get_boolean_arg("update", help="Update role from file"),
                Args.get_delete_arg("Delete role from fuel")
            )
        ]
        self.flag_func_map = (
            ("delete", self.delete),
            ("create", self.create),
            ("update", self.update),
            ("role", self.item),
            (None, self.list),
        )

    def parse_model(self, args):
        for param, role_class in self.fields_mapper:
            model_id = getattr(args, param)
            if model_id:
                return role_class, model_id

    @check_any('release', 'env')
    def list(self, params):
        """Print all available roles for a release or cluster

        fuel role --rel 1
        fuel role --env 1
        """
        model, model_id = self.parse_model(params)
        roles = Role(owner_type=model, owner_id=model_id).get_all()

        acceptable_keys = ("name", )

        self.serializer.print_to_output(
            roles,
            format_table(
                roles,
                acceptable_keys=acceptable_keys
            )
        )

    @check_all('role', 'file')
    @check_any('release', 'env')
    def item(self, params):
        """Save full role description to file
        fuel role --rel 1 --role controller --file some.yaml
        fuel role --env 1 --role controller --file some.yaml
        """
        model, model_id = self.parse_model(params)
        role = Role(owner_type=model, owner_id=model_id).get_role(params.role)
        self.file_serializer.write_to_file(params.file, role)
        self.file_serializer.print_to_output(
            role,
            "Role {0} for {1} successfully saved to {2}.".format(
                params.role,
                model,
                params.file))

    @check_all('file')
    @check_any('release', 'env')
    def create(self, params):
        """Create a role from a file description
        fuel role --rel 1 --create --file some.yaml
        fuel role --env 1 --create --file some.yaml
        """
        model, model_id = self.parse_model(params)
        role = self.file_serializer.read_from_file(params.file)
        role = Role(owner_type=model, owner_id=model_id).create_role(role)
        self.file_serializer.print_to_output(
            role,
            "Role {0} for {1} successfully created from {2}.".format(
                role['name'], model, params.file))

    @check_all('file')
    @check_any('release', 'env')
    def update(self, params):
        """Update a role from a file description
        fuel role --rel 1 --update --file some.yaml
        fuel role --env 1 --update --file some.yaml
        """
        model, model_id = self.parse_model(params)
        role = self.file_serializer.read_from_file(params.file)
        role = Role(owner_type=model, owner_id=model_id).update_role(
            role['name'],
            role)
        self.file_serializer.print_to_output(
            role,
            "Role {0} for {1} successfully updated from {2}.".format(
                role['name'],
                model,
                params.file))

    @check_all('role')
    @check_any('release', 'env')
    def delete(self, params):
        """Delete a role from fuel
        fuel role --delete --role controller --rel 1
        fuel role --delete --role controller --env 1
        """
        model, model_id = self.parse_model(params)
        Role(owner_type=model, owner_id=model_id).delete_role(params.role)
        self.file_serializer.print_to_output(
            {},
            "Role {0} for {1} with id {2} successfully deleted.".format(
                params.role,
                model,
                model_id))
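Every action class in this series dispatches through the same `flag_func_map` convention: the tuple is scanned in order, the first flag set on the parsed arguments wins, and a `None` entry is the default handler. A minimal sketch of that dispatcher follows; it is a simplification (the real base `Action` also wires up serializers and argument groups), and the dash-to-underscore mapping mirrors how argparse stores flags like `change-password`:

```python
def dispatch(flag_func_map, params):
    """Call the handler for the first flag that is truthy on params;
    a None flag acts as the fallback when nothing else matched."""
    for flag, func in flag_func_map:
        if flag is None or getattr(params, flag.replace('-', '_'), None):
            return func(params)
```

Ordering matters: more specific flags must precede the `(None, ...)` default, exactly as in the `flag_func_map` tuples above.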
@@ -1,86 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.arguments import group
from fuelclient.objects.environment import Environment


class SettingsAction(Action):
    """Show or modify environment settings
    """
    action_name = "settings"

    def __init__(self):
        super(SettingsAction, self).__init__()
        self.args = (
            Args.get_env_arg(required=True),
            group(
                Args.get_download_arg("Modify current configuration."),
                Args.get_default_arg("Open default configuration."),
                Args.get_upload_arg("Save current changes in configuration."),
                required=True
            ),
            Args.get_dir_arg("Directory with configuration data."),
            Args.get_force_arg("Force settings upload.")
        )
        self.flag_func_map = (
            ("upload", self.upload),
            ("default", self.default),
            ("download", self.download)
        )

    def upload(self, params):
        """To upload settings for an environment from a directory:
                fuel --env 1 settings --upload --dir path/to/directory
        """
        env = Environment(params.env)
        settings_data = env.read_settings_data(
            directory=params.dir,
            serializer=self.serializer
        )
        env.set_settings_data(settings_data, params.force)
        print("Settings configuration uploaded.")

    def default(self, params):
        """To download default settings for an environment into a directory:
                fuel --env 1 settings --default --dir path/to/directory
        """
        env = Environment(params.env)
        default_data = env.get_default_settings_data()
        settings_file_path = env.write_settings_data(
            default_data,
            directory=params.dir,
            serializer=self.serializer)
        print(
            "Default settings configuration downloaded to {0}."
            .format(settings_file_path)
        )

    def download(self, params):
        """To download settings for an environment into this directory:
                fuel --env 1 settings --download
        """
        env = Environment(params.env)
        settings_data = env.get_settings_data()
        settings_file_path = env.write_settings_data(
            settings_data,
            directory=params.dir,
            serializer=self.serializer)
        print(
            "Settings configuration for environment with id={0}"
            " downloaded to {1}."
            .format(env.id, settings_file_path)
        )
@@ -1,84 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

import six
import yaml

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.objects.task import SnapshotTask


class SnapshotAction(Action):
    """Generate and download snapshot.
    """
    action_name = "snapshot"

    def __init__(self):
        super(SnapshotAction, self).__init__()
        self.args = (
            Args.get_boolean_arg("conf",
                                 help_="Provide this flag to generate conf"),
        )
        self.flag_func_map = (
            ('conf', self.get_snapshot_config),
            (None, self.create_snapshot),
        )

    def create_snapshot(self, params):
        """To create diagnostic snapshot:
                fuel snapshot

        To specify config for snapshotting:
                fuel snapshot < conf.yaml
        """
        if sys.stdin.isatty():
            conf = {}
        else:
            # safe_load: the snapshot config comes from untrusted stdin
            conf = yaml.safe_load(sys.stdin.read())

        snapshot_task = SnapshotTask.start_snapshot_task(conf)
        self.serializer.print_to_output(
            snapshot_task.data,
            "Generating diagnostic snapshot..."
        )
        snapshot_task.wait()

        if snapshot_task.status == 'ready':
            self.serializer.print_to_output(
                snapshot_task.data,
                "...Completed...\n"
                "Diagnostic snapshot can be downloaded from " +
                snapshot_task.connection.root +
                snapshot_task.data["message"]
            )
        elif snapshot_task.status == 'error':
            six.print_(
                "Snapshot generating task ended with error. Task message: {0}"
                .format(snapshot_task.data["message"]),
                file=sys.stderr
            )

    def get_snapshot_config(self, params):
        """Download default config for snapshot:
                fuel snapshot --conf > dump_conf.yaml

        To use the json formatter:
                fuel snapshot --conf --json
        """
        conf = SnapshotTask.get_default_config()
        self.serializer.write_to_file(sys.stdout, conf)
@@ -1,37 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from fuelclient.cli.actions.base import Action
from fuelclient.client import DefaultAPIClient


class TokenAction(Action):
    """Return a valid keystone auth token
    """
    action_name = "token"

    def __init__(self):
        super(TokenAction, self).__init__()

        self.args = []
        self.flag_func_map = (
            (None, self.get_token),
        )

    def get_token(self, params):
        """Print out a valid Keystone auth token
        """
        sys.stdout.write(DefaultAPIClient.auth_token)
@@ -1,65 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from getpass import getpass

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli.error import ArgumentException
from fuelclient.client import DefaultAPIClient
from fuelclient import fuelclient_settings


class UserAction(Action):
    """Change password for user
    """
    action_name = "user"

    def __init__(self):
        super(UserAction, self).__init__()
        self.args = (
            Args.get_new_password_arg(
                "WARNING: This method of changing the "
                "password is dangerous - it may be saved in bash history."),
            Args.get_change_password_arg(
                "Change user password using interactive prompt")
        )

        self.flag_func_map = (
            ("change-password", self.change_password),
        )

    def _get_password_from_prompt(self):
        password1 = getpass("Changing password for Fuel User.\nNew Password:")
        password2 = getpass("Retype new Password:")
        if password1 != password2:
            raise ArgumentException("Passwords are not the same.")
        return password1
|
|
||||||
def change_password(self, params):
|
|
||||||
"""To change user password:
|
|
||||||
fuel user change-password
|
|
||||||
"""
|
|
||||||
if params.newpass:
|
|
||||||
password = params.newpass
|
|
||||||
else:
|
|
||||||
password = self._get_password_from_prompt()
|
|
||||||
|
|
||||||
DefaultAPIClient.update_own_password(password)
|
|
||||||
settings = fuelclient_settings.get_settings()
|
|
||||||
self.serializer.print_to_output(
|
|
||||||
None, "\nPassword changed.\nPlease note that configuration "
|
|
||||||
"is not automatically updated.\nYou may want to update "
|
|
||||||
"{0}.".format(
|
|
||||||
settings.user_settings))
|
|
|
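The deleted `_get_password_from_prompt` above reads the password twice and rejects mismatches. A minimal, standalone sketch of that pattern follows; the injectable `prompt` parameter is our addition (not part of fuelclient) so the logic can be exercised without a terminal:

```python
from getpass import getpass


def read_new_password(prompt=getpass):
    """Ask for a password twice and refuse mismatched entries."""
    password1 = prompt("New Password:")
    password2 = prompt("Retype new Password:")
    if password1 != password2:
        raise ValueError("Passwords are not the same.")
    return password1


# Exercising the logic with canned answers instead of a real terminal:
answers = iter(["s3cret", "s3cret"])
assert read_new_password(prompt=lambda msg: next(answers)) == "s3cret"
```

With the default `prompt=getpass`, input is read without echoing, which is why the `--new-pass` flag carries the bash-history warning while the interactive path does not.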
@@ -1,106 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.actions.base import Action
import fuelclient.cli.arguments as Args
from fuelclient.cli import serializers
from fuelclient.objects.environment import Environment


class VIPAction(Action):
    """Download or upload VIP settings of specific environments.
    """
    action_name = "vip"
    acceptable_keys = ("id", "upload", "download", "network", "network-role",)

    def __init__(self):
        super(VIPAction, self).__init__()
        # NOTE(aroma): 'serializer' attribute for action objects is
        # overwritten while building parser object
        # (fuelclient.cli.parser.Parser)
        self.file_serializer = serializers.FileFormatBasedSerializer()
        self.args = (
            Args.get_env_arg(required=True),
            Args.get_create_arg("Create VIP"),
            Args.get_upload_file_arg("Upload changed VIP configuration "
                                     "from given file"),
            Args.get_download_arg("Download VIP configuration"),
            Args.get_file_arg("Target file with vip data."),
            Args.get_ip_id_arg("IP address entity identifier"),
            Args.get_ip_address_arg("IP address string"),
            Args.get_network_id_arg("Network identifier"),
            Args.get_network_role_arg("Network role string"),
            Args.get_vip_name_arg("VIP name string"),
            Args.get_vip_namespace_arg("VIP namespace string"),
        )
        self.flag_func_map = (
            ("create", self.create),
            ("upload", self.upload),
            ("download", self.download)
        )

    def create(self, params):
        """To create VIP for environment:
                fuel --env 1 vip create --address 172.16.0.10 --network 1 \\
                    --name public_vip --namespace haproxy
        """
        env = Environment(params.env)
        vip_kwargs = {
            "ip_addr": getattr(params, 'ip-address'),
            "network": getattr(params, 'network'),
            "vip_name": getattr(params, 'vip-name'),
        }

        vip_namespace = getattr(params, 'vip-namespace', None)
        if vip_namespace is not None:
            vip_kwargs['vip_namespace'] = vip_namespace

        env.create_vip(**vip_kwargs)
        print("VIP has been created")

    def upload(self, params):
        """To upload VIP configuration from some
        file for some environment:
                fuel --env 1 vip --upload vip.yaml
        """
        env = Environment(params.env)
        vips_data = env.read_vips_data_from_file(
            file_path=params.upload,
            serializer=self.file_serializer
        )
        env.set_vips_data(vips_data)
        print("VIP configuration uploaded.")

    def download(self, params):
        """To download VIP configuration in this
        file for some environment:
                fuel --env 1 vip --download --file vip.yaml
        where --file param is optional
        """

        env = Environment(params.env)
        vips_data = env.get_vips_data(
            ip_address_id=getattr(params, 'ip-address-id'),
            network=getattr(params, 'network'),
            network_role=getattr(params, 'network-role')
        )
        vips_data_file_path = env.write_vips_data_to_file(
            vips_data,
            file_path=params.file,
            serializer=self.serializer
        )
        print(
            "VIP configuration for environment with id={0}"
            " downloaded to {1}".format(env.id, vips_data_file_path)
        )
@@ -1,787 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import argparse
from itertools import chain
import os

from fuelclient import __version__
from fuelclient.cli.error import ArgumentException
from fuelclient.client import DefaultAPIClient

substitutions = {
    # replace from: to
    "env": "environment",
    "nodes": "node",
    "statuses": "status",
    "net": "network",
    "rel": "release",
    "list": "--list",
    "set": "--set",
    "delete": "--delete",
    "download": "--download",
    "upload": "--upload",
    "default": "--default",
    "create": "--create",
    "remove": "--delete",
    "config": "--config",
    "--roles": "--role",
    "help": "--help",
    "change-password": "--change-password",
    "hostname": "--hostname",
}


def group(*args, **kwargs):
    required = kwargs.get("required", False)
    return (required,) + args
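The `substitutions` table above maps command aliases to their canonical spellings before argparse sees them. A minimal sketch of how such a table is presumably applied to argv (the actual preprocessing lives in `fuelclient.cli.parser`, not shown here):

```python
def substitute(argv, table):
    """Replace each known alias in argv with its canonical spelling."""
    return [table.get(token, token) for token in argv]


# A small stand-in table with the same shape as `substitutions`:
aliases = {"env": "environment", "list": "--list", "rel": "release"}
print(substitute(["env", "1", "list"], aliases))  # → ['environment', '1', '--list']
```

Tokens absent from the table pass through untouched, so positional values such as IDs are never rewritten.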
class ArrayAction(argparse.Action):
    """Custom argparse.Action subclass to store ids

    :returns: list of ids
    """
    def __call__(self, parser, namespace, values, option_string=None):
        list_ids = [int(value) for value in chain(*values)]
        setattr(namespace, self.dest, list_ids)


class NodeAction(argparse.Action):
    """Custom argparse.Action subclass to store node identity

    :returns: list of ids
    """
    def __call__(self, parser, namespace, values, option_string=None):
        if values:
            node_identities = set(chain(*values))
            input_macs = set(n for n in node_identities if ":" in n)
            only_ids = set()
            for _id in (node_identities - input_macs):
                try:
                    only_ids.add(int(_id))
                except ValueError:
                    raise ArgumentException(
                        "'{0}' is not valid node id.".format(_id))
            if input_macs:
                nodes_mac_to_id_map = dict(
                    (n["mac"], n["id"])
                    for n in DefaultAPIClient.get_request("nodes/")
                )
                for short_mac in input_macs:
                    target_node = None
                    for mac in nodes_mac_to_id_map:
                        if mac.endswith(short_mac):
                            target_node = mac
                            break
                    if target_node:
                        only_ids.add(nodes_mac_to_id_map[target_node])
                    else:
                        raise ArgumentException(
                            'Node with mac endfix "{0}" was not found.'
                            .format(short_mac)
                        )
            node_ids = [int(node_id) for node_id in only_ids]
            setattr(namespace, self.dest, node_ids)


class SetAction(argparse.Action):
    """Custom argparse.Action subclass to store distinct values

    :returns: Set of arguments
    """
    def __call__(self, _parser, namespace, values, option_string=None):
        try:
            getattr(namespace, self.dest).update(values)
        except AttributeError:
            setattr(namespace, self.dest, set(values))
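These Action subclasses pair with a `type=` callable that splits each token on commas, so `--node 1,2 3` arrives as nested lists that the action flattens. A self-contained sketch (mirroring `ArrayAction` above for illustration):

```python
import argparse
from itertools import chain


class ArrayAction(argparse.Action):
    """Mirror of the ArrayAction above: flatten comma-split groups to ints."""
    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, [int(v) for v in chain(*values)])


parser = argparse.ArgumentParser()
# type= splits each token on commas, so "1,2" "3" arrives as [["1", "2"], ["3"]]
parser.add_argument("--node", nargs="+", type=lambda v: v.split(","),
                    action=ArrayAction)
args = parser.parse_args(["--node", "1,2", "3"])
print(args.node)  # → [1, 2, 3]
```

`NodeAction` follows the same flattening step but additionally resolves MAC-suffix identifiers against the API, and `SetAction` deduplicates values across repeated occurrences of the flag.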
def get_debug_arg():
    return {
        "args": ["--debug"],
        "params": {
            "dest": "debug",
            "action": "store_true",
            "help": "prints details of all HTTP request",
            "default": False
        }
    }


def get_version_arg():
    return {
        "args": ["-v", "--version"],
        "params": {
            "action": "version",
            "version": __version__
        }
    }


def get_arg(name, flags=None, aliases=None, help_=None, **kwargs):
    name = name.replace("_", "-")
    args = ["--" + name, ]
    if flags is not None:
        args.extend(flags)
    if aliases is not None:
        substitutions.update(
            dict((alias, args[0]) for alias in aliases)
        )
    all_args = {
        "args": args,
        "params": {
            "dest": name,
            "help": help_ or name
        }
    }
    all_args["params"].update(kwargs)
    return all_args
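Every helper in this module returns the same `{"args": [...], "params": {...}}` shape, which the parser can hand straight to `argparse.ArgumentParser.add_argument`. A simplified mirror of `get_arg` (without the alias side effect) showing that handoff; how the real parser consumes these dicts is an assumption based on the dict shape:

```python
import argparse


def get_arg(name, flags=None, help_=None, **kwargs):
    """Simplified mirror of get_arg above: build an argparse spec dict."""
    name = name.replace("_", "-")
    args = ["--" + name]
    if flags is not None:
        args.extend(flags)
    params = {"dest": name, "help": help_ or name}
    params.update(kwargs)
    return {"args": args, "params": params}


spec = get_arg("env", flags=("--env-id",), type=int)
parser = argparse.ArgumentParser()
# The spec dict unpacks directly into add_argument:
parser.add_argument(*spec["args"], **spec["params"])
ns = parser.parse_args(["--env-id", "1"])
print(ns.env)  # → 1
```

Note that `dest` is the hyphenated name, which is why the VIP action above reads values with `getattr(params, 'ip-address')` rather than attribute access.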
def get_boolean_arg(name, **kwargs):
    kwargs.update({
        "action": "store_true",
        "default": False
    })
    return get_arg(name, **kwargs)


def get_env_arg(required=False):
    return get_int_arg(
        "env",
        flags=("--env-id",),
        help="environment id",
        required=required
    )


def get_single_task_arg(required=False):
    return get_int_arg(
        "task",
        flags=("--task-id", "--tid"),
        help="task id",
        required=required
    )


def get_new_password_arg(help_msg):
    return get_str_arg(
        "newpass",
        flags=("--new-pass",),
        help=help_msg,
        required=False
    )


def get_str_arg(name, **kwargs):
    default_kwargs = {
        "action": "store",
        "type": str,
        "default": None
    }
    default_kwargs.update(kwargs)
    return get_arg(name, **default_kwargs)


def get_int_arg(name, **kwargs):
    default_kwargs = {
        "action": "store",
        "type": int,
        "default": None
    }
    default_kwargs.update(kwargs)
    return get_arg(name, **default_kwargs)


def get_array_arg(name, **kwargs):
    default_kwargs = {
        "action": ArrayAction,
        "nargs": '+',
        "type": lambda v: v.split(","),
        "default": None
    }
    default_kwargs.update(kwargs)
    return get_arg(name, **default_kwargs)


def get_set_type_arg(name, **kwargs):
    default_kwargs = {
        "type": lambda v: v.split(','),
        "action": SetAction,
        "default": None
    }
    default_kwargs.update(kwargs)
    return get_arg(name, **default_kwargs)


def get_delete_from_db_arg(help_msg):
    return get_boolean_arg("delete-from-db", help=help_msg)


def get_deployment_tasks_arg(help_msg):
    return get_boolean_arg(
        "deployment-tasks", help=help_msg)


def get_attributes_arg(help_msg):
    return get_boolean_arg("attributes", help=help_msg)


def get_sync_deployment_tasks_arg():
    return get_boolean_arg(
        "sync-deployment-tasks",
        help="Update tasks for each release.")


def get_dry_run_deployment_arg():
    return get_boolean_arg(
        "dry-run",
        dest='dry_run',
        help="Specifies to dry-run a deployment by configuring task executor"
             "to dump the deployment graph to a dot file.")


def get_noop_run_deployment_arg():
    return get_boolean_arg(
        "noop",
        dest='noop_run',
        help="Specifies noop-run deployment configuring tasks executor to run "
             "puppet and shell tasks in noop mode and skip all other. "
             "Stores noop-run result summary in nailgun database")


def get_file_pattern_arg():
    return get_str_arg(
        "filepattern",
        flags=("--fp", "--file-pattern"),
        default="*tasks.yaml",
        help="Provide unix file pattern to filter tasks with files.")


def get_node_name_arg(help_msg):
    return get_str_arg("name", help=help_msg)


def get_hostname_arg(help_msg):
    return get_str_arg("hostname", help=help_msg)


def get_network_arg(help_msg):
    return get_boolean_arg("network", flags=("--net",), help=help_msg)


def get_force_arg(help_msg):
    return get_boolean_arg("force", flags=("-f",), help=help_msg)


def get_disk_arg(help_msg):
    return get_boolean_arg("disk", help=help_msg)


def get_deploy_arg(help_msg):
    return get_boolean_arg("deploy", help=help_msg)


def get_provision_arg(help_msg):
    return get_boolean_arg("provision", help=help_msg)


def get_role_arg(help_msg):
    return get_set_type_arg("role", flags=("-r",), help=help_msg)


def get_single_role_arg(help_msg):
    return get_str_arg("role", flags=('--role', ), help=help_msg)


def get_check_arg(help_msg):
    return get_set_type_arg("check", help=help_msg)


def get_ostf_username_arg():
    return get_str_arg(
        "ostf_username",
        dest="ostf_username",
        help="OSTF username",
        required=False
    )


def get_ostf_password_arg():
    return get_str_arg(
        "ostf_password",
        dest="ostf_password",
        help="OSTF password",
        required=False
    )


def get_ostf_tenant_name_arg():
    return get_str_arg(
        "ostf_tenant_name",
        dest="ostf_tenant_name",
        help="OSTF tenant name",
        required=False
    )


def get_change_password_arg(help_msg):
    return get_boolean_arg("change-password", help=help_msg)


def get_name_arg(help_msg):
    return get_str_arg("name", flags=("--env-name",), help=help_msg)


def get_graph_endpoint():
    return get_arg(
        'end',
        action="store",
        default=None,
        help="Specify endpoint for the graph traversal.",
        metavar='TASK',
    )


def get_graph_startpoint():
    return get_arg(
        'start',
        action="store",
        default=None,
        help="Specify start point for the graph traversal.",
        metavar='TASK',
    )


def get_skip_tasks():
    return get_arg(
        'skip',
        nargs='+',
        default=[],
        help="Get list of tasks to be skipped.",
        metavar='TASK',
    )


def get_tasks():
    return get_arg(
        'tasks',
        nargs='+',
        default=[],
        help="Get list of tasks to be executed.",
        metavar='TASK',
    )


def get_parents_arg():
    return get_arg(
        'parents-for',
        help="Get parent for given task",
        metavar='TASK',
    )


def get_remove_type_arg(types):
    return get_arg(
        'remove',
        nargs='+',
        default=[],
        choices=types,
        help="Select task types to remove from graph.",
    )


def get_nst_arg(help_msg):
    return get_arg("nst",
                   flags=("--net-segment-type",),
                   action="store",
                   choices=("gre", "vlan", "tun"),
                   help_=help_msg,
                   default="vlan")


def get_all_arg(help_msg):
    return get_boolean_arg("all", help=help_msg)


def get_create_arg(help_msg):
    return get_boolean_arg(
        "create",
        flags=("-c", "--env-create"),
        help=help_msg)


def get_download_arg(help_msg):
    return get_boolean_arg("download", flags=("-d",), help=help_msg)


def get_list_arg(help_msg):
    return get_boolean_arg("list", flags=("-l",), help=help_msg)


def get_dir_arg(help_msg):
    return get_str_arg("dir", default=os.curdir, help=help_msg)


def get_file_arg(help_msg):
    return get_str_arg("file", help=help_msg)


def get_verify_arg(help_msg):
    return get_boolean_arg("verify", flags=("-v",), help=help_msg)


def get_upload_arg(help_msg):
    return get_boolean_arg("upload", flags=("-u",), help=help_msg)


def get_default_arg(help_msg):
    return get_boolean_arg("default", help=help_msg)


def get_set_arg(help_msg):
    return get_boolean_arg("set", flags=("-s",), help=help_msg)


def get_delete_arg(help_msg):
    return get_boolean_arg("delete", help=help_msg)


def get_execute_arg(help_msg):
    return get_boolean_arg("execute", help=help_msg)


def get_assign_arg(help_msg):
    return get_boolean_arg("assign", help=help_msg)


def get_group_arg(help_msg):
    return get_set_type_arg("group", help=help_msg)


def get_node_group_arg(help_msg):
    return get_set_type_arg("nodegroup", flags=("--node-group",),
                            help=help_msg)


def get_vlan_arg(help_msg):
    return get_int_arg("vlan", help=help_msg)


def get_cidr_arg(help_msg):
    return get_str_arg("cidr", help=help_msg)


def get_gateway_arg(help_msg):
    return get_str_arg("gateway", help=help_msg)


def get_meta_arg(help_msg):
    return get_str_arg("meta", help=help_msg)


def get_create_network_arg(help_msg):
    return get_boolean_arg(
        "create",
        flags=("-c", "--create"),
        help=help_msg)


def get_network_group_arg(help_msg):
    return get_set_type_arg("network", help=help_msg)


def get_release_arg(help_msg, required=False):
    return get_int_arg(
        "release",
        flags=("--rel",),
        required=required,
        help=help_msg)


def get_render_arg(help_msg):
    return get_str_arg(
        "render",
        metavar='INPUT',
        help=help_msg)


def get_tred_arg(help_msg):
    return get_boolean_arg("tred", help=help_msg)


def get_node_arg(help_msg):
    default_kwargs = {
        "action": NodeAction,
        "flags": ("--node-id",),
        "nargs": '+',
        "type": lambda v: v.split(","),
        "default": None,
        "help": help_msg
    }
    return get_arg("node", **default_kwargs)


def get_single_node_arg(help_msg):
    return get_int_arg('node', flags=('--node-id',), help=help_msg)


def get_task_arg(help_msg):
    return get_array_arg(
        'task',
        flags=("--task-id", "--tid"),
        help=help_msg
    )


def get_config_id_arg(help_msg):
    return get_int_arg(
        'config-id',
        help=help_msg)


def get_deleted_arg(help_msg):
    return get_boolean_arg(
        'deleted', help=help_msg)


def get_plugin_install_arg(help_msg):
    return get_str_arg(
        "install",
        metavar='PLUGIN_FILE',
        help=help_msg
    )


def get_plugin_remove_arg(help_msg):
    return get_str_arg(
        "remove",
        metavar='PLUGIN_NAME==VERSION',
        help=help_msg
    )


def get_plugin_register_arg(help_msg):
    return get_str_arg(
        "register",
        metavar='PLUGIN_NAME==VERSION',
        help=help_msg
    )


def get_plugin_unregister_arg(help_msg):
    return get_str_arg(
        "unregister",
        metavar='PLUGIN_NAME==VERSION',
        help=help_msg
    )


def get_plugin_update_arg(help_msg):
    return get_str_arg(
        "update",
        metavar='PLUGIN_FILE',
        help=help_msg
    )


def get_plugin_downgrade_arg(help_msg):
    return get_str_arg(
        "downgrade",
        metavar='PLUGIN_FILE',
        help=help_msg
    )


def get_plugin_sync_arg(help_msg):
    return get_boolean_arg(
        "sync",
        help=help_msg
    )


def get_plugin_arg(help_msg):
    return get_array_arg(
        'plugin',
        flags=('--plugin-id',),
        help=help_msg
    )


def get_notify_all_messages_arg(help_msg):
    return get_boolean_arg(
        'all',
        flags=('-a',),
        help=help_msg
    )


def get_notify_mark_as_read_arg(help_msg):
    return get_str_arg(
        "mark-as-read",
        flags=('-r',),
        nargs='+',
        help=help_msg,
    )


def get_notify_message_arg(help_msg):
    return get_str_arg(
        "send",
        nargs='+',
        flags=('-m',),
        help=help_msg,
    )


def get_notify_send_arg(help_msg):
    return get_str_arg(
        "send",
        flags=("--send",),
        help=help_msg
    )


def get_notify_topic_arg(help_msg):
    return get_str_arg(
        "topic",
        flags=("--topic",),
        choices=(
            'discover',
            'done',
            'error',
            'warning',
            'release'
        ),
        help=help_msg
    )


def get_vip_arg(help_msg):
    return get_boolean_arg(
        "vip",
        flags=("--vip",),
        help=help_msg
    )


def get_vip_name_arg(help_msg):
    return get_str_arg(
        "vip-name",
        flags=("--name",),
        help=help_msg
    )


def get_vip_namespace_arg(help_msg, required=False):
    return get_str_arg(
        "vip-namespace",
        flags=("--namespace",),
        required=required,
        help=help_msg
    )


def get_ip_address_arg(help_msg):
    return get_str_arg(
        "ip-address",
        flags=("--address", "--ip-addr"),
        help=help_msg
    )


def get_ip_id_arg(help_msg):
    return get_int_arg(
        "ip-address-id",
        flags=("--ip-address-id",),
        help=help_msg
    )


def get_network_id_arg(help_msg):
    return get_int_arg(
        "network",
        flags=("--network",),
        help=help_msg
    )


def get_network_role_arg(help_msg):
    return get_str_arg(
        "network-role",
        flags=("--network-role",),
        help=help_msg
    )


def get_upload_file_arg(help_msg):
    return get_str_arg(
        "upload",
        flags=("-u", "--upload",),
        help=help_msg
    )


def get_status_arg(help_msg):
    default_kwargs = {
        "flags": ("--status",),
        "default": None,
        "help": help_msg
    }
    return get_arg("status", **default_kwargs)


def get_deployment_node_arg(help_msg):
    default_kwargs = {
        "flags": ("--node-id",),
        "default": None,
        "help": help_msg
    }
    return get_arg("node", **default_kwargs)


def get_tasks_names_arg(help_msg):
    default_kwargs = {
        "flags": ("-d", "--task-name",),
        "default": None,
        "help": help_msg
    }
    return get_arg("task-name", **default_kwargs)


def get_show_parameters_arg(help_msg):
    default_kwargs = {
        "flags": ("-p", "--show-parameters",),
        "help": help_msg
    }
    return get_boolean_arg("show-parameters", **default_kwargs)


def get_include_summary_arg(help_msg):
    default_kwargs = {
        "flags": ("--include-summary",),
        "help": help_msg
    }
    return get_boolean_arg("include-summary", **default_kwargs)


def get_not_split_facts_args():
    kwargs = {
        "action": "store_false",
        "default": True,
        "dest": "split",
        "help": "Do not split deployment info for node and cluster parts."
    }
    return get_arg('no-split', **kwargs)
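Helpers such as `get_role_arg` build on `SetAction`, which accumulates comma-split values into a set across repeated flags. A self-contained sketch of that behavior (mirroring the `SetAction` class from this file for illustration):

```python
import argparse


class SetAction(argparse.Action):
    """Mirror of SetAction above: accumulate comma-split values into a set."""
    def __call__(self, parser, namespace, values, option_string=None):
        try:
            getattr(namespace, self.dest).update(values)
        except AttributeError:
            # First occurrence: dest is still None, so start a fresh set.
            setattr(namespace, self.dest, set(values))


parser = argparse.ArgumentParser()
parser.add_argument("--role", type=lambda v: v.split(","), action=SetAction)
ns = parser.parse_args(["--role", "controller,compute", "--role", "compute"])
print(sorted(ns.role))  # → ['compute', 'controller']
```

Repeating `--role compute` does not duplicate the value, which is the point of using a set here rather than the list that `ArrayAction` produces.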
@@ -1,155 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from functools import wraps
import json
import os
import sys
import textwrap

from keystoneclient.exceptions import Unauthorized
import requests


def exit_with_error(message):
    """Writes the message to stderr and exits with exit code 1."""
    sys.stderr.write("{}{}".format(message, os.linesep))
    exit(1)


class FuelClientException(Exception):
    """Base exception for Fuel Client.

    All child classes must be instantiated before raising.
    """
    def __init__(self, *args, **kwargs):
        super(FuelClientException, self).__init__(*args, **kwargs)
        self.message = args[0]


class BadDataException(FuelClientException):
    """Should be raised when the user provides corrupted data."""


class WrongEnvironmentError(FuelClientException):
    """Raised when a particular action is not supported on an environment."""


class ServerDataException(FuelClientException):
    """Must be raised when data returned from the server cannot be
    processed by Fuel Client methods.
    """


class DeployProgressError(FuelClientException):
    """Must be raised when the deployment process is interrupted on the
    server.
    """


class ArgumentException(FuelClientException):
    """Must be raised when incorrect arguments are passed through
    argparse or some function.
    """


class ActionException(FuelClientException):
    """Must be raised when the arguments passed to an action are correct
    by themselves but contradict the action's logic.
    """


class ParserException(FuelClientException):
    """Must be raised when a problem occurs during argument parsing,
    in the argparse extension or in the Fuel Client Parser submodule.
    """


class ProfilingError(FuelClientException):
    """Indicates errors and other issues related to performance profiling."""


class SettingsException(FuelClientException):
    """Indicates errors or unexpected behaviour in processing settings."""


class ExecutedErrorNonZeroExitCode(FuelClientException):
    """Subshell command returned a non-zero exit code."""


class LabelEmptyKeyError(BadDataException):
    """Should be raised when the user provides labels with an empty key."""


class InvalidDirectoryException(FuelClientException):
    pass


class InvalidFileException(FuelClientException):
    pass


class HTTPError(FuelClientException):
    pass


class EnvironmentException(Exception):
    pass


def exceptions_decorator(func):
    """Handles HTTP errors and expected exceptions that may occur
    in methods of the DefaultAPIClient class.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)

        # when the server returns a bad request, report a meaningful reason
        except HTTPError as exc:
            exit_with_error(exc)
        except requests.ConnectionError:
            message = """
            Can't connect to Nailgun server!
            Please check connection settings in your configuration file."""
            exit_with_error(textwrap.dedent(message).strip())
        except Unauthorized:
            message = """
            Unauthorized: need authentication!
            Please provide user and password via client
               fuel --os-username=user --os-password=pass [action]
            or modify your credentials in your configuration file."""
            exit_with_error(textwrap.dedent(message).strip())
        except FuelClientException as exc:
            exit_with_error(exc.message)

    return wrapper


def get_error_body(error):
    try:
        error_body = json.loads(error.response.text)['message']
    except (ValueError, TypeError, KeyError):
        error_body = error.response.text

    return error_body


def get_full_error_message(error):
    return "{} ({})".format(error, get_error_body(error))
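The decorator above converts every expected failure into a clean message on stderr plus exit code 1 instead of a raw traceback. A minimal, self-contained sketch of the same pattern (the `ClientError` and `risky` names are illustrative, not part of fuelclient):

```python
import functools
import sys


class ClientError(Exception):
    """Stand-in for FuelClientException: carries a .message attribute."""
    def __init__(self, *args):
        super(ClientError, self).__init__(*args)
        self.message = args[0]


def exceptions_decorator(func):
    """Turn expected exceptions into a stderr message plus exit code 1."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ClientError as exc:
            sys.stderr.write(exc.message + "\n")
            sys.exit(1)
    return wrapper


@exceptions_decorator
def risky():
    raise ClientError("something went wrong")
```

Calling `risky()` writes the message to stderr and raises `SystemExit(1)`, so the CLI terminates without an unhandled traceback.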
@@ -1,161 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from itertools import chain
from operator import itemgetter
from time import sleep

import six


def format_table(data, acceptable_keys=None, column_to_join=None):
    """Format a list of dicts into a table in string form.

    :acceptable_keys list(str): list of keys for which to create the table;
                                also specifies their order
    """
    # prepare columns
    if column_to_join is not None:
        for data_dict in data:
            for column_name in column_to_join:
                data_dict[column_name] = u", ".join(
                    sorted(data_dict[column_name])
                )
    if acceptable_keys is not None:
        rows = [tuple(value.get(key, "") for key in acceptable_keys)
                for value in data]
        header = tuple(acceptable_keys)
    else:
        rows = [tuple(x.values()) for x in data]
        header = tuple(data[0].keys())
    number_of_columns = len(header)

    # split multi-line cells if there is no automatic column merge
    if column_to_join:
        def format_cell(cell):
            return [cell or ""]
    else:
        def format_cell(cell):
            return six.text_type(cell).split('\n')
    rows = [
        [format_cell(cell) if cell is not None else [''] for cell in row]
        for row in rows
    ]

    # calculate column widths
    column_widths = dict(
        zip(
            range(number_of_columns),
            (len(str(x)) for x in header)
        )
    )
    for row in rows:
        column_widths.update(
            (
                index,
                max(
                    column_widths[index],
                    max(len(six.text_type(line)) for line in cell)
                )
            )
            for index, cell in enumerate(row)
        )

    # make output
    hor_delimeter = u'-+-'.join(column_widths[column_index] * u'-'
                                for column_index in range(number_of_columns))

    row_template = u' | '.join(
        u"{{{0}:{1}}}".format(idx, width)
        for idx, width in column_widths.items()
    )

    output_lines = [
        row_template.format(*header),
        hor_delimeter
    ]

    for row in rows:
        max_cell_lines = max(len(cell) for cell in row)
        for cell_line_no in range(max_cell_lines):
            output_lines.append(
                row_template.format(
                    *(
                        cell[cell_line_no] if len(cell) > cell_line_no else u""
                        for cell in row
                    )
                )
            )
    return u'\n'.join(output_lines)


def quote_and_join(words):
    """Lists the given objects, quoting each, and returns the string."""
    words = list(words)
    if len(words) > 1:
        return '{0} and "{1}"'.format(
            ", ".join(
                ['"{0}"'.format(x) for x in words][:-1]
            ),
            words[-1]
        )
    else:
        return '"{0}"'.format(words[0])


# TODO(vkulanov): remove when deprecating the old CLI
def print_health_check(env):
    tests_states = [{"status": "not finished"}]
    finished_tests = set()
    test_counter, total_tests_count = 1, None
    while not all(map(
        lambda t: t["status"] == "finished",
        tests_states
    )):
        tests_states = env.get_state_of_tests()
        all_tests = list(chain(*map(
            itemgetter("tests"),
            filter(
                env.is_in_running_test_sets,
                tests_states
            ))))
        if total_tests_count is None:
            total_tests_count = len(all_tests)
        all_finished_tests = filter(
            lambda t: "running" not in t["status"],
            all_tests
        )
        # materialized to a list: the result is iterated twice below, which
        # would silently yield nothing for a Python 3 filter iterator
        new_finished_tests = list(filter(
            lambda t: t["name"] not in finished_tests,
            all_finished_tests
        ))
        finished_tests.update(
            map(
                itemgetter("name"),
                new_finished_tests
            )
        )
        for test in new_finished_tests:
            print(
                u"[{0:2} of {1}] [{status}] '{name}' "
                u"({taken:.4} s) {message}".format(
                    test_counter,
                    total_tests_count,
                    **test
                )
            )
            test_counter += 1
        sleep(1)
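The core of `format_table` is the width calculation and row templating; stripped of multi-line cells and column joining, it reduces to a short self-contained sketch (simplified and illustrative, not the fuelclient function itself):

```python
def format_table(data, keys):
    """Minimal sketch: per-column max width, then header, delimiter, rows."""
    rows = [tuple(str(d.get(k, "")) for k in keys) for d in data]
    # widest of the header and every cell in that column
    widths = [max([len(k)] + [len(r[i]) for r in rows])
              for i, k in enumerate(keys)]
    template = " | ".join("{%d:%d}" % (i, w) for i, w in enumerate(widths))
    lines = [template.format(*keys),
             "-+-".join("-" * w for w in widths)]
    lines += [template.format(*r) for r in rows]
    return "\n".join(lines)


print(format_table([{"id": 1, "name": "node-1"},
                    {"id": 2, "name": "controller"}], ["id", "name"]))
```

The `-+-` delimiter and ` | ` separator mirror the output conventions of the full implementation above.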
@@ -1,246 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import argparse
import sys

from fuelclient.cli.actions import actions
from fuelclient.cli.arguments import get_version_arg
from fuelclient.cli.arguments import substitutions
from fuelclient.cli.error import exceptions_decorator
from fuelclient.cli.error import ParserException
from fuelclient.cli.serializers import Serializer
from fuelclient import consts
from fuelclient import fuelclient_settings
from fuelclient import profiler
from fuelclient import utils


class Parser(object):
    """Parser class - encapsulates argparse's ArgumentParser and
    populates it based on the available actions, serializers and
    additional flags.
    """
    def __init__(self, argv):
        self.args = argv
        self.parser = argparse.ArgumentParser(
            usage="""
    fuel [optional args] <namespace> [action] [flags]

    DEPRECATION WARNING:

        In an upcoming release of Fuel Client, the syntax will
        be changed to the following:

        fuel [general flags] <entity> <action> [action flags]

        where both [general flags] and [action flags] are derivatives
        from [optional args] and [flags]; <entity> is a derivative from
        <namespace>. Keep in mind that specifying <action> will be
        mandatory.

        Some of the [optional args] are going to be specific to a
        particular <entity> and <action> context in the upcoming
        release of Fuel Client, so specifying them
        before either <namespace> or <action> will not be possible.

        Example:
            Correct: fuel node list --env 1
            Wrong:   fuel --env 1 node list

        The table below describes the upcoming changes to commands
        which will be removed or changed significantly.

        +------------------------+-------------------------------+
        |      Old command       |          New command          |
        +------------------------+-------------------------------+
        | fuel deploy-changes    | fuel env deploy               |
        +------------------------+-------------------------------+
        | fuel node --set --env  | fuel env add nodes            |
        +------------------------+-------------------------------+
        | fuel network <> --env  | fuel env network <>           |
        +------------------------+-------------------------------+
        | fuel settings <> --env | fuel env settings <>          |
        +------------------------+-------------------------------+
        | fuel stop              | fuel env stop-deploy          |
        +------------------------+-------------------------------+

        Further information will be located in Fuel Documentation and
        on our Wiki page: https://wiki.openstack.org/wiki/Fuel_CLI

        You can check out an experimental version of the new
        Fuel Client by using the following command:

            fuel2 --help

"""
        )
        self.universal_flags = []
        self.credential_flags = []
        self.subparsers = self.parser.add_subparsers(
            title="Namespaces",
            metavar="",
            dest="action",
            help='actions'
        )
        self.generate_actions()
        self.add_version_args()
        self.add_debug_arg()
        self.add_serializers_args()
        utils.add_os_cli_parameters(self.parser)

    def generate_actions(self):
        for action, action_object in actions.items():
            action_parser = self.subparsers.add_parser(
                action,
                prog="fuel {0}".format(action),
                help=action_object.__doc__,
                formatter_class=argparse.RawTextHelpFormatter,
                epilog=action_object.examples
            )
            for argument in action_object.args:
                if isinstance(argument, dict):
                    action_parser.add_argument(
                        *argument["args"],
                        **argument["params"]
                    )
                elif isinstance(argument, tuple):
                    required = argument[0]
                    group = action_parser.add_mutually_exclusive_group(
                        required=required)
                    for argument_in_group in argument[1:]:
                        group.add_argument(
                            *argument_in_group["args"],
                            **argument_in_group["params"]
                        )

    def parse(self):
        self.prepare_args()
        if len(self.args) < 2:
            self.parser.print_help()
            sys.exit(0)

        parsed_params = self.parser.parse_args(self.args[1:])

        settings = fuelclient_settings.get_settings()
        settings.update_from_command_line_options(parsed_params)

        if parsed_params.action not in actions:
            self.parser.print_help()
            sys.exit(0)

        if profiler.profiling_enabled():
            handler_name = parsed_params.action
            method_name = ''.join([method for method in parsed_params.__dict__
                                   if getattr(parsed_params, method) is True])
            prof = profiler.Profiler(method_name, handler_name)

        actions[parsed_params.action].action_func(parsed_params)

        if profiler.profiling_enabled():
            prof.save_data()

    def add_serializers_args(self):
        serializers = self.parser.add_mutually_exclusive_group()
        for format_name in Serializer.serializers.keys():
            serialization_flag = "--{0}".format(format_name)
            self.universal_flags.append(serialization_flag)
            serializers.add_argument(
                serialization_flag,
                dest=consts.SERIALIZATION_FORMAT_FLAG,
                action="store_const",
                const=format_name,
                help="prints only {0} to stdout".format(format_name),
                default=False
            )

    def add_debug_arg(self):
        self.universal_flags.append("--debug")
        self.parser.add_argument(
            "--debug",
            dest="debug",
            action="store_true",
            help="prints details of all HTTP requests",
            default=False
        )

    def add_version_args(self):
        arg = get_version_arg()
        self.parser.add_argument(*arg["args"], **arg["params"])

    def prepare_args(self):
        # replace some args from the substitutions dict
        self.args = [substitutions.get(arg, arg) for arg in self.args]

        # move generally used flags before actions, otherwise they will be
        # treated as a part of the action by action_generator
        for flag in self.credential_flags:
            self.move_argument_before_action(flag)

        for flag in self.universal_flags:
            self.move_argument_before_action(flag, has_value=False)

        self.move_argument_after_action("--env",)

    def move_argument_before_action(self, flag, has_value=True):
        """General arguments need to be moved before the action; they are
        used not directly in the action but in DefaultAPIClient.
        """
        for arg in self.args:
            if flag in arg:
                if "=" in arg or not has_value:
                    index_of_flag = self.args.index(arg)
                    flag = self.args.pop(index_of_flag)
                    self.args.insert(1, flag)
                else:
                    try:
                        index_of_flag = self.args.index(arg)
                        flag = self.args.pop(index_of_flag)
                        value = self.args.pop(index_of_flag)
                        self.args.insert(1, value)
                        self.args.insert(1, flag)
                    except IndexError:
                        raise ParserException(
                            'Corresponding value must follow "{0}" flag'
                            .format(arg)
                        )
                break

    def move_argument_after_action(self, flag):
        for arg in self.args:
            if flag in arg:
                # if declared with an '=' sign (e.g. --env-id=1)
                if "=" in arg:
                    index_of_flag = self.args.index(arg)
                    flag = self.args.pop(index_of_flag)
                    self.args.append(flag)
                else:
                    try:
                        index_of_flag = self.args.index(arg)
                        self.args.pop(index_of_flag)
                        flag = self.args.pop(index_of_flag)
                        self.args.append(arg)
                        self.args.append(flag)
                    except IndexError:
                        raise ParserException(
                            'Corresponding value must follow "{0}" flag'
                            .format(arg)
                        )
                break


@exceptions_decorator
def main(args=sys.argv):
    parser = Parser(args)
    parser.parse()
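`prepare_args` rewrites `sys.argv` before argparse ever sees it, hoisting general flags to position 1 so they are not swallowed by an action subparser. A standalone sketch of that rewriting (simplified: no error handling for a missing value, and the function name is illustrative):

```python
def move_flag_before_action(args, flag, has_value=True):
    """Hoist a general flag (and its value, if any) to position 1,
    i.e. before the <namespace>/<action> words."""
    args = list(args)
    for i, arg in enumerate(args):
        if arg.startswith(flag):
            if "=" in arg or not has_value:
                # single token: '--debug' or '--user=admin'
                args.insert(1, args.pop(i))
            else:
                # two tokens: '--user admin' - move both, keeping order
                value = args[i + 1]
                del args[i:i + 2]
                args[1:1] = [flag, value]
            break
    return args


print(move_flag_before_action(
    ["fuel", "node", "list", "--debug"], "--debug", has_value=False))
# → ['fuel', '--debug', 'node', 'list']
```

The real implementation additionally raises `ParserException` when a value-taking flag appears last with no value after it.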
@@ -1,152 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function

import json
import os

import six
import yaml

from fuelclient.cli import error
from fuelclient import consts
from fuelclient import utils


class Serializer(object):
    """Serializer class - contains all logic responsible for
    printing to stdout and for reading and writing files on the
    file system.
    """
    serializers = {
        "json": {
            "w": lambda d: json.dumps(d, indent=4),
            "r": utils.safe_deserialize(json.loads)
        },
        "yaml": {
            "w": lambda d: yaml.safe_dump(d, default_flow_style=False),
            "r": utils.safe_deserialize(yaml.load)
        }
    }

    format_flags = False
    default_format = "yaml"
    format = default_format

    def __init__(self, format=None):
        if format and format in self.serializers:
            self.format = format
            self.format_flags = True

    @property
    def serializer(self):
        """Returns the dict with methods for loading/dumping the current
        format.

        Returned dict's keys:
        * 'w' - from 'writing', method for serializing/dumping data
        * 'r' - from 'reading', method for deserializing/loading data
        """
        return self.serializers[self.format]

    def serialize(self, data):
        """Shortcut for serializing data with the current format."""
        return self.serializer['w'](data)

    def deserialize(self, data):
        """Shortcut for deserializing data with the current format."""
        return self.serializer['r'](data)

    @classmethod
    def from_params(cls, params):
        return cls(format=getattr(params,
                                  consts.SERIALIZATION_FORMAT_FLAG, None))

    def print_formatted(self, data):
        print(self.serializer["w"](data))

    def print_to_output(self, formatted_data, arg, print_method=print):
        if self.format_flags:
            self.print_formatted(formatted_data)
        else:
            if six.PY2 and isinstance(arg, six.text_type):
                arg = arg.encode('utf-8')
            print_method(arg)

    def prepare_path(self, path):
        return "{0}.{1}".format(
            path, self.format
        )

    def write_to_path(self, path, data):
        full_path = self.prepare_path(path)
        return self.write_to_full_path(full_path, data)

    def write_to_full_path(self, path, data):
        try:
            with open(path, "w") as file_to_write:
                self.write_to_file(file_to_write, data)
        except IOError as e:
            raise error.InvalidFileException(
                "Can't write to file '{0}': {1}.".format(
                    path, e.strerror))
        return path

    def read_from_file(self, path):
        return self.read_from_full_path(self.prepare_path(path))

    def read_from_full_path(self, full_path):
        try:
            with open(full_path, "r") as file_to_read:
                return self.serializer["r"](file_to_read.read())
        except IOError as e:
            raise error.InvalidFileException(
                "Can't open file '{0}': {1}.".format(full_path, e.strerror))

    def write_to_file(self, file_obj, data):
        """Writes to an opened file or file-like object.

        :param file_obj: opened file
        :param data: any serializable object
        """
        serialized = self.serializer["w"](data)
        file_obj.write(serialized)


class FileFormatBasedSerializer(Serializer):

    def get_serializer(self, path):
        extension = os.path.splitext(path)[1][1:]
        if extension not in self.serializers:
            raise error.BadDataException(
                'No serializer for provided file {0}'.format(path))
        return self.serializers[extension]

    def write_to_file(self, full_path, data):
        serializer = self.get_serializer(full_path)
        with open(full_path, "w+") as f:
            f.write(serializer["w"](data))
        return full_path

    def read_from_file(self, full_path):
        serializer = self.get_serializer(full_path)
        with open(full_path, "r") as f:
            return serializer["r"](f.read())


def listdir_without_extensions(dir_path):
    return six.moves.filter(
        lambda f: f != "",
        six.moves.map(
            lambda f: f.split(".")[0],
            os.listdir(dir_path)
        )
    )
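The serializer is a small dispatch table: each format name maps to a `'w'` (dump) and `'r'` (load) callable, and everything else just looks up the current format. A minimal sketch of the pattern, JSON-only so it has no external dependencies (names mirror the class above but this is not the fuelclient code):

```python
import json


class Serializer(object):
    """Minimal format-dispatch sketch: 'w' dumps, 'r' loads."""
    serializers = {
        "json": {"w": lambda d: json.dumps(d, indent=4),
                 "r": json.loads},
    }

    def __init__(self, format="json"):
        self.format = format

    def serialize(self, data):
        return self.serializers[self.format]["w"](data)

    def deserialize(self, text):
        return self.serializers[self.format]["r"](text)


s = Serializer()
round_tripped = s.deserialize(s.serialize({"a": 1}))
print(round_tripped)  # → {'a': 1}
```

Adding a format is a one-line table entry, which is how the real class supports YAML alongside JSON.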
@@ -1,248 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import requests

from keystoneclient.v2_0 import client as auth_client
from six.moves.urllib import parse as urlparse

from fuelclient.cli import error
from fuelclient import fuelclient_settings


class APIClient(object):
    """This class handles API requests."""

    def __init__(self, host, port, http_proxy=None, http_timeout=None,
                 os_username=None, os_password=None,
                 os_tenant_name=None, debug=False):
        self.debug = debug

        self._http_proxy = http_proxy
        self._http_timeout = http_timeout
        self._os_username = os_username
        self._os_password = os_password
        self._os_tenant_name = os_tenant_name

        self.root = "http://{host}:{port}".format(host=host, port=port)

        self.keystone_base = urlparse.urljoin(self.root, "/keystone/v2.0")
        self.api_root = urlparse.urljoin(self.root, "/api/v1/")
        self.ostf_root = urlparse.urljoin(self.root, "/ostf/")

        self._keystone_client = None
        self._auth_required = None
        self._session = None

    @classmethod
    def default_client(cls):
        conf = fuelclient_settings.get_settings()
        return cls(
            host=conf.SERVER_ADDRESS,
            port=conf.SERVER_PORT,
            http_proxy=conf.HTTP_PROXY,
            http_timeout=conf.HTTP_TIMEOUT,
            os_username=conf.OS_USERNAME,
            os_password=conf.OS_PASSWORD,
            os_tenant_name=conf.OS_TENANT_NAME
        )

    def _make_common_headers(self):
        """Returns a dict of HTTP headers common to all requests."""
        return {'Content-Type': 'application/json',
                'Accept': 'application/json',
                'X-Auth-Token': self.auth_token}

    def _make_proxies(self):
        """Provides HTTP proxy configuration for the requests module."""
        if self._http_proxy is None:
            return None

        return {'http': self._http_proxy,
                'https': self._http_proxy}

    def _make_session(self):
        """Initializes an HTTP session."""
        session = requests.Session()
        session.headers.update(self._make_common_headers())
        session.timeout = self._http_timeout
        session.proxies = self._make_proxies()

        return session

    @property
    def session(self):
        """Lazy initialization of a session

        Since the HTTP client is a singleton, test runners cannot
        collect tests due to keystone authentication issues.

        TODO(romcheg): remove lazy initialization for the session
        when the HTTP client is not a singleton.
        """
        if self._session is None:
            self._session = self._make_session()

        return self._session

    @property
    def auth_token(self):
        if self.auth_required:
            if not self.keystone_client.auth_token:
                self.keystone_client.authenticate()
            return self.keystone_client.auth_token
        return ''

    @property
    def auth_required(self):
        if self._auth_required is None:
            url = self.api_root + 'version'
            resp = requests.get(url)
            if resp.status_code == 401:
                self._auth_required = True
            else:
                self._raise_for_status_with_info(resp)
                self._auth_required = resp.json().get('auth_required', False)

        return self._auth_required

    @property
    def keystone_client(self):
        if not self._keystone_client:
            self.initialize_keystone_client()
        return self._keystone_client

    def update_own_password(self, new_pass):
        if self.auth_token:
            self.keystone_client.users.update_own_password(
                self._os_password, new_pass)

    def initialize_keystone_client(self):
        if self.auth_required:
            self._keystone_client = auth_client.Client(
                auth_url=self.keystone_base,
                username=self._os_username,
                password=self._os_password,
                tenant_name=self._os_tenant_name)

            self._keystone_client.session.auth = self._keystone_client
            self._keystone_client.authenticate()

    def debug_mode(self, debug=False):
        self.debug = debug
        return self

    def print_debug(self, message):
        if self.debug:
            print(message)

    def delete_request(self, api):
        """Make a DELETE request to a specific API."""
        url = self.api_root + api
        self.print_debug('DELETE {0}'.format(url))

        resp = self.session.delete(url)
        self._raise_for_status_with_info(resp)

        return self._decode_content(resp)

    def put_request(self, api, data, ostf=False, **params):
        """Make a PUT request to a specific API with some data.

        :param api: API endpoint (path)
        :param data: data sent in the request, will be serialized to JSON
        :param ostf: is this a call to the OSTF API
        :param params: params of the query string
        """
        url = (self.ostf_root if ostf else self.api_root) + api
        data_json = json.dumps(data)
        resp = self.session.put(url, data=data_json, params=params)

        self.print_debug('PUT {0} data={1}'.format(resp.url, data_json))
        self._raise_for_status_with_info(resp)
        return self._decode_content(resp)

    def get_request_raw(self, api, ostf=False, params=None):
        """Make a GET request to a specific API and return the raw response.

        :param api: API endpoint (path)
        :param ostf: is this a call to the OSTF API
        :param params: params passed to the GET request
        """
        url = (self.ostf_root if ostf else self.api_root) + api
        self.print_debug('GET {0}'.format(url))

        return self.session.get(url, params=params)

    def get_request(self, api, ostf=False, params=None):
        """Make a GET request to a specific API."""
        params = params or {}

        resp = self.get_request_raw(api, ostf, params)
        self._raise_for_status_with_info(resp)

        return resp.json()

    def post_request_raw(self, api, data=None, ostf=False):
        """Make a POST request to a specific API and return the raw response.

        :param api: API endpoint (path)
        :param data: data sent in the request, will be serialized to JSON
        :param ostf: is this a call to the OSTF API
        """
        url = (self.ostf_root if ostf else self.api_root) + api
        data_json = None if data is None else json.dumps(data)

        self.print_debug('POST {0} data={1}'.format(url, data_json))

        return self.session.post(url, data=data_json)

    def post_request(self, api, data=None, ostf=False):
        """Make a POST request to a specific API with some data."""
        resp = self.post_request_raw(api, data, ostf=ostf)
        self._raise_for_status_with_info(resp)
        return self._decode_content(resp)

    def get_fuel_version(self):
        return self.get_request("version")

    def _raise_for_status_with_info(self, response):
        try:
            response.raise_for_status()
        except requests.exceptions.HTTPError as e:
            raise error.HTTPError(error.get_full_error_message(e))

    def _decode_content(self, response):
        if response.status_code == 204:
            return {}

        self.print_debug(response.text)
        return response.json()


# This line is the single point of instantiation for the 'APIClient' class,
# which is intended to implement the Singleton design pattern.
DefaultAPIClient = APIClient.default_client()
|
||||||
.. deprecated:: Use fuelclient.client.APIClient instead
|
|
||||||
"""
|
|
|
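The module-level `DefaultAPIClient` above realizes the Singleton pattern by instantiating the client once through a `default_client()` class method. A minimal standalone sketch of the same pattern (the `Config` class and its `api_root` field are hypothetical, not part of fuelclient):

```python
class Config(object):
    """A class whose default instance is shared by every caller."""

    _default = None

    def __init__(self, api_root='/api/v1/'):
        self.api_root = api_root

    @classmethod
    def default_client(cls):
        # Create the single module-wide instance lazily, then reuse it.
        if cls._default is None:
            cls._default = cls()
        return cls._default


# Single point of instantiation, mirroring DefaultAPIClient above.
DefaultConfig = Config.default_client()

print(DefaultConfig is Config.default_client())  # True
```

Because `default_client()` always returns the cached instance, every importer sees the same session state, which is what the deprecated `DefaultAPIClient` alias relies on.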
@@ -1,253 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import os

from cliff import command
from cliff import lister
from cliff import show
import six

import fuelclient
from fuelclient.cli.serializers import Serializer
from fuelclient.common import data_utils

VERSION = 'v1'


@six.add_metaclass(abc.ABCMeta)
class BaseCommand(command.Command):
    """Base Fuel Client command."""

    def get_attributes_path(self, attr_type, file_format, ent_id, directory):
        """Return the path to the attributes of an entity.

        :param attr_type: Type of the attribute, e.g. disks, networks.
        :param file_format: Format of the file that contains or will
                            contain the attributes, e.g. json or yaml.
        :param ent_id: Id of the entity
        :param directory: Directory that is used to store the attributes
        """
        if attr_type not in self.allowed_attr_types:
            raise ValueError('attr_type must be '
                             'one of {}'.format(self.allowed_attr_types))

        if file_format not in self.supported_file_formats:
            raise ValueError('file_format must be '
                             'one of {}'.format(self.supported_file_formats))

        return os.path.join(os.path.abspath(directory),
                            '{ent}_{id}'.format(ent=self.entity_name,
                                                id=ent_id),
                            '{}.{}'.format(attr_type, file_format))
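`get_attributes_path` composes paths of the form `<directory>/<entity>_<id>/<attr_type>.<file_format>`. A standalone sketch of the same layout, without the validation (the entity name and ids are illustrative):

```python
import os


def attributes_path(entity_name, attr_type, file_format, ent_id, directory):
    # Mirrors BaseCommand.get_attributes_path:
    # <dir>/<entity>_<id>/<attr_type>.<file_format>
    return os.path.join(os.path.abspath(directory),
                        '{ent}_{id}'.format(ent=entity_name, id=ent_id),
                        '{}.{}'.format(attr_type, file_format))


path = attributes_path('node', 'disks', 'yaml', 42, '.')
print(path.endswith(os.path.join('node_42', 'disks.yaml')))  # True
```

This is why the upload and download commands below agree on where a given entity's configuration file lives: both sides derive the path from the same four inputs.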

    def __init__(self, *args, **kwargs):
        super(BaseCommand, self).__init__(*args, **kwargs)
        self.client = fuelclient.get_client(self.entity_name, VERSION)

    @abc.abstractproperty
    def entity_name(self):
        """Name of the Fuel entity."""
        pass

    @property
    def supported_file_formats(self):
        raise NotImplementedError()

    @property
    def allowed_attr_types(self):
        raise NotImplementedError()


@six.add_metaclass(abc.ABCMeta)
class BaseListCommand(lister.Lister, BaseCommand):
    """Lists all entities, showing some information."""

    filters = {}

    @property
    def default_sorting_by(self):
        return ['id']

    @abc.abstractproperty
    def columns(self):
        """Names of the columns in the resulting table."""
        pass

    def get_parser(self, prog_name):
        parser = super(BaseListCommand, self).get_parser(prog_name)

        # Add the sorting key argument to the output formatters group
        # if it exists. If not, add it to the general group.
        matching_groups = (gr
                           for gr in parser._action_groups
                           if gr.title == 'output formatters')

        group = next(matching_groups, None) or parser

        group.add_argument('-s',
                           '--sort-columns',
                           type=str,
                           nargs='+',
                           choices=self.columns,
                           metavar='SORT_COLUMN',
                           default=self.default_sorting_by,
                           help='Space separated list of keys for sorting '
                                'the data. Defaults to {}. Wrong values '
                                'are ignored.'.format(
                                    ', '.join(self.default_sorting_by)))

        return parser

    def _sort_data(self, parsed_args, data):
        scolumn_ids = [self.columns.index(col)
                       for col in parsed_args.sort_columns]
        data.sort(key=lambda x: [x[scolumn_id] for scolumn_id in scolumn_ids])
        return data
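`_sort_data` builds a composite sort key from the indices of the requested columns, so rows are ordered by the first column, with later columns breaking ties. The same idea in isolation (the column names and rows are illustrative):

```python
columns = ('id', 'status', 'name')
rows = [(3, 'ready', 'c'), (1, 'error', 'b'), (2, 'ready', 'a')]

# Sort by 'status' first, then by 'name', mirroring _sort_data above.
sort_columns = ['status', 'name']
indices = [columns.index(col) for col in sort_columns]
rows.sort(key=lambda row: [row[i] for i in indices])

print(rows)  # [(1, 'error', 'b'), (2, 'ready', 'a'), (3, 'ready', 'c')]
```

Mapping column names to indices once, outside the key function, keeps the per-row key cheap: each comparison only indexes into the row.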

    def take_action(self, parsed_args):
        filters = {}
        for name, prop in self.filters.items():
            value = getattr(parsed_args, prop, None)
            if value is not None:
                filters[name] = value

        data = self.client.get_all(**filters)
        data = data_utils.get_display_data_multi(self.columns, data)
        data = self._sort_data(parsed_args, data)

        return self.columns, data


@six.add_metaclass(abc.ABCMeta)
class BaseShowCommand(show.ShowOne, BaseCommand):
    """Shows detailed information about the entity."""

    @abc.abstractproperty
    def columns(self):
        """Names of the columns in the resulting table."""
        pass

    def get_parser(self, prog_name):
        parser = super(BaseShowCommand, self).get_parser(prog_name)

        parser.add_argument('id', type=int,
                            help='Id of the {0}.'.format(self.entity_name))

        return parser

    def take_action(self, parsed_args):
        data = self.client.get_by_id(parsed_args.id)
        data = data_utils.get_display_data_single(self.columns, data)

        return (self.columns, data)


@six.add_metaclass(abc.ABCMeta)
class BaseDeleteCommand(BaseCommand):
    """Deletes the entity with the specified id."""

    def get_parser(self, prog_name):
        parser = super(BaseDeleteCommand, self).get_parser(prog_name)

        parser.add_argument(
            'id',
            type=int,
            help='Id of the {0} to delete.'.format(self.entity_name))

        return parser

    def take_action(self, parsed_args):
        self.client.delete_by_id(parsed_args.id)

        msg = '{ent} with id {ent_id} was deleted\n'

        self.app.stdout.write(
            msg.format(
                ent=self.entity_name.capitalize(),
                ent_id=parsed_args.id))


@six.add_metaclass(abc.ABCMeta)
class BaseTasksExecuteCommand(BaseCommand):

    def get_parser(self, prog_name):
        parser = super(BaseTasksExecuteCommand, self).get_parser(prog_name)

        parser.add_argument(
            '-e', '--env',
            type=int,
            required=True,
            help='Id of the environment'
        )
        parser.add_argument(
            '--force',
            action="store_true",
            default=False,
            help='Force run all deployment tasks without skipping.')

        parser.add_argument(
            '--trace',
            action="store_true",
            default=False,
            help='Enable debugging mode in the tasks executor.'
        )
        parser.add_argument(
            '--format',
            choices=['json', 'yaml'],
            help='Select the output format; by default a text message is '
                 'produced.'
        )

        mode_group = parser.add_mutually_exclusive_group()
        mode_group.add_argument(
            '--dry-run',
            action="store_true",
            default=False,
            help='Dry-run the deployment by configuring the task executor '
                 'to dump the deployment graph to a dot file.'
        )
        mode_group.add_argument(
            '--noop',
            action="store_true",
            default=False,
            help='Noop-run the deployment, configuring the tasks executor '
                 'to run all tasks in noop mode. '
                 'The execution result summary can be obtained via the '
                 'history of tasks.')

        return parser

    def take_action(self, parsed_args):
        task = self.client.execute(
            env_id=parsed_args.env,
            dry_run=parsed_args.dry_run,
            noop_run=parsed_args.noop,
            force=parsed_args.force,
            debug=parsed_args.trace,
            **self.get_options(parsed_args)
        )
        if parsed_args.format:
            msg = Serializer(parsed_args.format).serialize(task.data) + '\n'
        else:
            msg = (
                'Deployment task with id {0} for the environment {1} '
                'has been started.\n'
                .format(task.data['id'], task.data['cluster'])
            )
        self.app.stdout.write(msg)

    def get_options(self, parsed_args):
        """Produce additional options from command line arguments."""
        raise NotImplementedError
@@ -1,933 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import argparse
import functools
import os
import shutil

import six

from cliff import show
from oslo_utils import fileutils

from fuelclient.cli import error
from fuelclient.commands import base
from fuelclient.common import data_utils


class EnvMixIn(object):
    entity_name = 'environment'

    supported_file_formats = ('json', 'yaml')
    allowed_attr_types = ('network', 'settings')

    @staticmethod
    def source_dir(directory):
        """Check that the source directory exists and is readable.

        :param directory: Path to the source directory
        :type directory: str
        :return: Absolute path to the source directory
        :rtype: str
        """
        path = os.path.abspath(directory)
        if not os.path.isdir(path):
            raise argparse.ArgumentTypeError(
                '"{0}" is not a directory.'.format(path))
        if not os.access(path, os.R_OK):
            raise argparse.ArgumentTypeError(
                'directory "{0}" is not readable'.format(path))
        return path

    @staticmethod
    def destination_dir(directory):
        """Check that the destination directory exists and is writable.

        :param directory: Path to the destination directory
        :type directory: str
        :return: Absolute path to the destination directory
        :rtype: str
        """
        path = os.path.abspath(directory)
        if not os.path.isdir(path):
            raise argparse.ArgumentTypeError(
                '"{0}" is not a directory.'.format(path))
        if not os.access(path, os.W_OK):
            raise argparse.ArgumentTypeError(
                'directory "{0}" is not writable'.format(path))
        return path


@six.add_metaclass(abc.ABCMeta)
class BaseUploadCommand(EnvMixIn, base.BaseCommand):

    @abc.abstractproperty
    def uploader(self):
        pass

    @abc.abstractproperty
    def attribute(self):
        pass

    def get_parser(self, prog_name):
        parser = super(BaseUploadCommand, self).get_parser(prog_name)
        parser.add_argument('id',
                            type=int,
                            help='Id of the environment.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of the serialized '
                                 '{}.'.format(self.attribute))
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.curdir,
                            help='Source directory. Defaults to the '
                                 'current directory.')

        return parser

    def take_action(self, parsed_args):
        directory = parsed_args.directory
        file_path = self.get_attributes_path(self.attribute,
                                             parsed_args.format,
                                             parsed_args.id,
                                             directory)
        try:
            with open(file_path, 'r') as stream:
                attribute = data_utils.safe_load(parsed_args.format, stream)
        except (IOError, OSError):
            msg = 'Could not read the configuration of {} at {}.'
            raise error.InvalidFileException(msg.format(self.attribute,
                                                        file_path))

        self.uploader(parsed_args.id, attribute)

        msg = ('Configuration of {t} for the environment with id '
               '{env} was loaded from {path}\n')

        self.app.stdout.write(msg.format(t=self.attribute,
                                         env=parsed_args.id,
                                         path=file_path))


@six.add_metaclass(abc.ABCMeta)
class BaseDownloadCommand(EnvMixIn, base.BaseCommand):

    @abc.abstractproperty
    def downloader(self):
        pass

    @abc.abstractproperty
    def attribute(self):
        pass

    def get_parser(self, prog_name):
        parser = super(BaseDownloadCommand, self).get_parser(prog_name)
        parser.add_argument('id',
                            type=int,
                            help='Id of the environment.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of the serialized '
                                 '{}.'.format(self.attribute))
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.curdir,
                            help='Destination directory. Defaults to the '
                                 'current directory.')

        return parser

    def take_action(self, parsed_args):
        directory = parsed_args.directory or os.curdir
        attributes = self.downloader(parsed_args.id)

        file_path = self.get_attributes_path(self.attribute,
                                             parsed_args.format,
                                             parsed_args.id,
                                             directory)

        try:
            fileutils.ensure_tree(os.path.dirname(file_path))
            fileutils.delete_if_exists(file_path)

            with open(file_path, 'w') as stream:
                data_utils.safe_dump(parsed_args.format, stream, attributes)
        except (IOError, OSError):
            msg = 'Could not store the configuration of {} at {}.'
            raise error.InvalidFileException(msg.format(self.attribute,
                                                        file_path))

        msg = ('Configuration of {t} for the environment with id '
               '{env} was stored in {path}\n')
        self.app.stdout.write(msg.format(t=self.attribute,
                                         env=parsed_args.id,
                                         path=file_path))


class EnvList(EnvMixIn, base.BaseListCommand):
    """Show the list of all available environments."""

    columns = ("id",
               "status",
               "name",
               "release_id")


class EnvShow(EnvMixIn, base.BaseShowCommand):
    """Show info about the environment with the given id."""
    columns = ("id",
               "status",
               "fuel_version",
               "name",
               "release_id",
               "is_customized",
               "changes")


class EnvCreate(EnvMixIn, base.BaseShowCommand):
    """Create an environment with the given attributes."""

    columns = EnvShow.columns

    def get_parser(self, prog_name):
        # Avoid adding the id argument from BaseShowCommand
        parser = show.ShowOne.get_parser(self, prog_name)

        parser.add_argument(
            'name',
            type=str,
            help='Name of the new environment'
        )

        parser.add_argument('-r',
                            '--release',
                            type=int,
                            required=True,
                            help='Id of the release to deploy')

        parser.add_argument('-nst',
                            '--net-segmentation-type',
                            type=str,
                            choices=['vlan', 'gre', 'tun'],
                            dest='nst',
                            default='vlan',
                            help='Network segmentation type.\n'
                                 'WARNING: The GRE network segmentation type '
                                 'is deprecated since the 7.0 release.')

        return parser

    def take_action(self, parsed_args):
        if parsed_args.nst == 'gre':
            self.app.stderr.write('WARNING: The GRE network segmentation type '
                                  'is deprecated since the 7.0 release')

        new_env = self.client.create(name=parsed_args.name,
                                     release_id=parsed_args.release,
                                     net_segment_type=parsed_args.nst)

        new_env = data_utils.get_display_data_single(self.columns, new_env)

        return (self.columns, new_env)


class EnvDelete(EnvMixIn, base.BaseDeleteCommand):
    """Delete the environment with the given id."""

    def get_parser(self, prog_name):
        parser = super(EnvDelete, self).get_parser(prog_name)

        parser.add_argument('-f',
                            '--force',
                            action='store_true',
                            help='Force-delete the environment.')

        return parser

    def take_action(self, parsed_args):
        env = self.client.get_by_id(parsed_args.id)

        if env['status'] == 'operational' and not parsed_args.force:
            self.app.stdout.write("Deleting an operational environment is a "
                                  "dangerous operation.\n"
                                  "Please use --force to bypass this message.")
            return

        return super(EnvDelete, self).take_action(parsed_args)


class EnvUpdate(EnvMixIn, base.BaseShowCommand):
    """Change the given attributes of an environment."""

    columns = EnvShow.columns

    def get_parser(self, prog_name):
        # Avoid adding the id argument from BaseShowCommand
        parser = show.ShowOne.get_parser(self, prog_name)

        parser.add_argument('id',
                            type=int,
                            help='Id of the nailgun entity to be processed.')

        parser.add_argument('-n',
                            '--name',
                            type=str,
                            dest='name',
                            default=None,
                            help='New name for the environment')

        return parser

    def take_action(self, parsed_args):
        updates = {}
        for attr in self.client._updatable_attributes:
            if getattr(parsed_args, attr, None):
                updates[attr] = getattr(parsed_args, attr)

        updated_env = self.client.update(environment_id=parsed_args.id,
                                         **updates)
        updated_env = data_utils.get_display_data_single(self.columns,
                                                         updated_env)

        return (self.columns, updated_env)


class EnvReset(EnvMixIn, base.BaseCommand):
    """Reset a deployed environment."""

    def get_parser(self, prog_name):
        parser = super(EnvReset, self).get_parser(prog_name)

        parser.add_argument('id',
                            type=int,
                            help='Id of the environment to reset.')
        parser.add_argument('-f',
                            '--force',
                            action='store_true',
                            help='Force-reset the environment.')

        return parser

    def take_action(self, parsed_args):
        result = self.client.reset(parsed_args.id, force=parsed_args.force)

        msg = ('Reset task with id {t} for the environment {e} '
               'has been started.\n'.format(t=result.data['id'],
                                            e=result.data['cluster']))

        self.app.stdout.write(msg)


class EnvStopDeploy(EnvMixIn, base.BaseCommand):
    """Stop the deployment process for a specific environment."""

    def get_parser(self, prog_name):
        parser = super(EnvStopDeploy, self).get_parser(prog_name)

        parser.add_argument('id',
                            type=int,
                            help='Id of the environment to stop '
                                 'deployment for.')

        return parser

    def take_action(self, parsed_args):
        result = self.client.stop(parsed_args.id)

        msg = ('Stop deployment task with id {t} for the environment '
               '{e} has been started.\n'.format(t=result.data['id'],
                                                e=result.data['cluster']))
        self.app.stdout.write(msg)


class EnvAddNodes(EnvMixIn, base.BaseCommand):
    """Add nodes to an environment with the specified roles."""

    def get_parser(self, prog_name):

        parser = super(EnvAddNodes, self).get_parser(prog_name)

        parser.add_argument('-e',
                            '--env',
                            type=int,
                            required=True,
                            help='Id of the environment to add nodes to')

        parser.add_argument('-n',
                            '--nodes',
                            type=int,
                            nargs='+',
                            required=True,
                            help='Ids of the nodes to add.')

        parser.add_argument('-r',
                            '--roles',
                            type=str,
                            nargs='+',
                            required=True,
                            help='Target roles of the nodes.')

        return parser

    def take_action(self, parsed_args):
        env_id = parsed_args.env

        self.client.add_nodes(environment_id=env_id,
                              nodes=parsed_args.nodes,
                              roles=parsed_args.roles)

        msg = 'Nodes {n} were added to the environment {e} with roles {r}\n'
        self.app.stdout.write(msg.format(n=parsed_args.nodes,
                                         e=parsed_args.env,
                                         r=parsed_args.roles))


class EnvRemoveNodes(EnvMixIn, base.BaseCommand):
    """Remove nodes from an environment."""

    def get_parser(self, prog_name):

        parser = super(EnvRemoveNodes, self).get_parser(prog_name)

        parser.add_argument('-e',
                            '--env',
                            type=int,
                            required=True,
                            help='Id of the environment to remove nodes from')

        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-n',
                           '--nodes',
                           type=int,
                           nargs='+',
                           help='Ids of the nodes to remove.')

        group.add_argument('--nodes-all',
                           action='store_true',
                           help='Remove all nodes from the environment')

        return parser

    def take_action(self, parsed_args):
        nodes = None if parsed_args.nodes_all else parsed_args.nodes
        self.client.remove_nodes(environment_id=parsed_args.env,
                                 nodes=nodes)

        msg = 'Nodes were removed from the environment with id={e}\n'.format(
            e=parsed_args.env)

        self.app.stdout.write(msg)


class EnvDeploy(EnvMixIn, base.BaseCommand):
    """Deploy changes to the specified environment."""

    def get_parser(self, prog_name):
        parser = super(EnvDeploy, self).get_parser(prog_name)

        parser.add_argument('id',
                            type=int,
                            help='Id of the environment to be deployed.')

        dry_run_help_string = ('Dry-run the deployment: store the cluster '
                               'settings and serialized data in the db and '
                               'ask the task executor to dump the resulting '
                               'deployment graph into a dot file.')
        noop_run_help_string = ('Noop-run the deployment: configure the '
                                'tasks executor to run puppet and shell '
                                'tasks in noop mode and skip all others. '
                                'Stores the noop-run result summary in the '
                                'nailgun database.')
        parser.add_argument(
            '-d', '--dry-run', dest="dry_run",
            action='store_true', help=dry_run_help_string)
        parser.add_argument(
            '--noop', dest="noop_run",
            action='store_true', help=noop_run_help_string)

        return parser

    def take_action(self, parsed_args):
        task_id = self.client.deploy_changes(parsed_args.id,
                                             dry_run=parsed_args.dry_run,
                                             noop_run=parsed_args.noop_run)

        msg = ('Deployment task with id {t} for the environment {e} '
               'has been started.\n'.format(t=task_id, e=parsed_args.id))

        self.app.stdout.write(msg)


class EnvRedeploy(EnvDeploy):
    """Redeploy changes to the specified environment."""

    def take_action(self, parsed_args):
        task_id = self.client.redeploy_changes(parsed_args.id,
                                               dry_run=parsed_args.dry_run,
                                               noop_run=parsed_args.noop_run)

        msg = ('Deployment task with id {t} for the environment {e} '
               'has been started.\n'.format(t=task_id, e=parsed_args.id))

        self.app.stdout.write(msg)


class EnvProvisionNodes(EnvMixIn, base.BaseCommand):
    """Provision the specified nodes of a specified environment."""

    def get_parser(self, prog_name):
        parser = super(EnvProvisionNodes, self).get_parser(prog_name)

        parser.add_argument('-e',
                            '--env',
                            required=True,
                            type=int,
                            help='Id of the environment.')
        parser.add_argument('-n',
                            '--nodes',
                            required=True,
                            type=int,
                            nargs='+',
                            help='Ids of the nodes to provision.')

        return parser

    def take_action(self, parsed_args):
        node_ids = parsed_args.nodes
        task = self.client.provision_nodes(parsed_args.env, node_ids)

        msg = ('Provisioning task with id {t} for the nodes {n} '
               'within the environment {e} has been '
               'started.\n').format(t=task['id'],
                                    e=parsed_args.env,
                                    n=', '.join(str(i) for i in node_ids))

        self.app.stdout.write(msg)


class EnvDeployNodes(EnvMixIn, base.BaseCommand):
    """Deploy the specified nodes of a specified environment."""

    def get_parser(self, prog_name):
        parser = super(EnvDeployNodes, self).get_parser(prog_name)

        parser.add_argument('-e',
                            '--env',
                            required=True,
                            type=int,
                            help='Id of the environment.')
        parser.add_argument('-n',
                            '--nodes',
                            required=True,
                            type=int,
                            nargs='+',
                            help='Ids of the nodes to deploy.')
        parser.add_argument('-f',
                            '--force',
                            action='store_true',
                            help='Force-deploy the nodes.')

        noop_run_help_string = ('Noop-run the deployment: configure the '
                                'tasks executor to run puppet and shell '
                                'tasks in noop mode and skip all others. '
                                'Stores the noop-run result summary in the '
                                'nailgun database.')
        parser.add_argument('--noop', dest="noop_run", action='store_true',
                            help=noop_run_help_string)
        return parser

    def take_action(self, parsed_args):
        node_ids = parsed_args.nodes
        task = self.client.deploy_nodes(parsed_args.env, node_ids,
                                        force=parsed_args.force,
                                        noop_run=parsed_args.noop_run)

        msg = ('Deployment task with id {t} for the nodes {n} within '
               'the environment {e} has been '
               'started.\n').format(t=task['id'],
                                    e=parsed_args.env,
                                    n=', '.join(str(i) for i in node_ids))

        self.app.stdout.write(msg)


class EnvSpawnVms(EnvMixIn, base.BaseCommand):
    """Provision the specified environment."""

    def get_parser(self, prog_name):
        parser = super(EnvSpawnVms, self).get_parser(prog_name)

        parser.add_argument('id',
                            type=int,
                            help='Id of the environment to be provisioned.')

        return parser

    def take_action(self, parsed_args):
        return self.client.spawn_vms(parsed_args.id)


class EnvNetworkVerify(EnvMixIn, base.BaseCommand):
    """Run network verification for the specified environment."""

    def get_parser(self, prog_name):
        parser = super(EnvNetworkVerify, self).get_parser(prog_name)

        parser.add_argument('id',
                            type=int,
                            help='Id of the environment to verify the '
                                 'network of.')

        return parser

    def take_action(self, parsed_args):
        task = self.client.verify_network(parsed_args.id)
        msg = ('Network verification task with id {t} for the environment '
               '{e} has been started.\n'.format(t=task['id'],
                                                e=parsed_args.id))

        self.app.stdout.write(msg)


class EnvNetworkUpload(BaseUploadCommand):
    """Upload a network configuration and apply it to an environment."""

    attribute = 'network'

    @property
    def uploader(self):
        return self.client.set_network_configuration


class EnvNetworkDownload(BaseDownloadCommand):
    """Download and store the network configuration of an environment."""

    attribute = 'network'

    @property
    def downloader(self):
        return self.client.get_network_configuration


class EnvSettingsUpload(BaseUploadCommand):
    """Upload and apply environment settings."""

    attribute = 'settings'

    @property
    def uploader(self):
        return functools.partial(self.client.set_settings,
                                 force=self.force_flag)

    def get_parser(self, prog_name):
        parser = super(EnvSettingsUpload, self).get_parser(prog_name)
        parser.add_argument('--force',
|
|
||||||
action='store_true',
|
|
||||||
help='Force applying the settings.')
|
|
||||||
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, parsed_args):
|
|
||||||
self.force_flag = parsed_args.force
|
|
||||||
|
|
||||||
super(EnvSettingsUpload, self).take_action(parsed_args)
|
|
||||||
|
|
||||||
|
|
||||||
class EnvSettingsDownload(BaseDownloadCommand):
|
|
||||||
"""Download and store environment settings."""
|
|
||||||
|
|
||||||
attribute = 'settings'
|
|
||||||
|
|
||||||
@property
|
|
||||||
def downloader(self):
|
|
||||||
return self.client.get_settings
|
|
||||||
|
|
||||||
|
|
||||||
class FactsMixIn(object):
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def _get_fact_dir(env_id, fact_type, directory):
|
|
||||||
return os.path.join(directory, "{0}_{1}".format(fact_type, env_id))
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def _read_deployment_facts_from_file(directory, file_format):
|
|
||||||
return list(six.moves.map(
|
|
||||||
lambda f: data_utils.read_from_file(f),
|
|
||||||
[os.path.join(directory, file_name)
|
|
||||||
for file_name in os.listdir(directory)
|
|
||||||
if file_format == os.path.splitext(file_name)[1].lstrip('.')]
|
|
||||||
))
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def _read_provisioning_facts_from_file(directory, file_format):
|
|
||||||
node_facts = list(six.moves.map(
|
|
||||||
lambda f: data_utils.read_from_file(f),
|
|
||||||
[os.path.join(directory, file_name)
|
|
||||||
for file_name in os.listdir(directory)
|
|
||||||
if file_format == os.path.splitext(file_name)[1].lstrip('.')
|
|
||||||
and 'engine' != os.path.splitext(file_name)[0]]
|
|
||||||
))
|
|
||||||
|
|
||||||
engine_facts = None
|
|
||||||
engine_file = os.path.join(directory,
|
|
||||||
"{}.{}".format('engine', file_format))
|
|
||||||
if os.path.lexists(engine_file):
|
|
||||||
engine_facts = data_utils.read_from_file(engine_file)
|
|
||||||
|
|
||||||
return {'engine': engine_facts, 'nodes': node_facts}
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def _write_deployment_facts_to_file(facts, directory, file_format):
|
|
||||||
# from 9.0 the deployment info is serialized only per node
|
|
||||||
for _fact in facts:
|
|
||||||
file_name = "{role}_{uid}." if 'role' in _fact else "{uid}."
|
|
||||||
file_name += file_format
|
|
||||||
data_utils.write_to_file(
|
|
||||||
os.path.join(directory, file_name.format(**_fact)),
|
|
||||||
_fact)
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def _write_provisioning_facts_to_file(facts, directory, file_format):
|
|
||||||
file_name = "{uid}."
|
|
||||||
file_name += file_format
|
|
||||||
data_utils.write_to_file(
|
|
||||||
os.path.join(directory, file_name.format(uid='engine')),
|
|
||||||
facts['engine'])
|
|
||||||
|
|
||||||
for _fact in facts['nodes']:
|
|
||||||
data_utils.write_to_file(
|
|
||||||
os.path.join(directory, file_name.format(**_fact)),
|
|
||||||
_fact)
|
|
||||||
|
|
||||||
def download(self, env_id, fact_type, destination_dir, file_format,
|
|
||||||
nodes=None, default=False, split=None):
|
|
||||||
facts = self.client.download_facts(
|
|
||||||
env_id, fact_type, nodes=nodes, default=default, split=split)
|
|
||||||
|
|
||||||
facts_dir = self._get_fact_dir(env_id, fact_type, destination_dir)
|
|
||||||
if os.path.exists(facts_dir):
|
|
||||||
shutil.rmtree(facts_dir)
|
|
||||||
os.makedirs(facts_dir)
|
|
||||||
|
|
||||||
getattr(self, "_write_{0}_facts_to_file".format(fact_type))(
|
|
||||||
facts, facts_dir, file_format)
|
|
||||||
|
|
||||||
return facts_dir
|
|
||||||
|
|
||||||
def upload(self, env_id, fact_type, source_dir, file_format):
|
|
||||||
facts_dir = self._get_fact_dir(env_id, fact_type, source_dir)
|
|
||||||
facts = getattr(self, "_read_{0}_facts_from_file".format(fact_type))(
|
|
||||||
facts_dir, file_format)
|
|
||||||
|
|
||||||
if not facts \
|
|
||||||
or isinstance(facts, dict) and not six.moves.reduce(
|
|
||||||
lambda a, b: a or b, facts.values()):
|
|
||||||
raise error.ServerDataException(
|
|
||||||
"There are no {} facts for this environment!".format(
|
|
||||||
fact_type))
|
|
||||||
|
|
||||||
return self.client.upload_facts(env_id, fact_type, facts)
|
|
||||||
|
|
||||||
|
|
||||||
class BaseEnvFactsDelete(EnvMixIn, base.BaseCommand):
|
|
||||||
"""Delete current various facts for orchestrator."""
|
|
||||||
|
|
||||||
fact_type = ''
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(BaseEnvFactsDelete, self).get_parser(prog_name)
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'id',
|
|
||||||
type=int,
|
|
||||||
help='ID of the environment')
|
|
||||||
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, parsed_args):
|
|
||||||
self.client.delete_facts(parsed_args.id, self.fact_type)
|
|
||||||
self.app.stdout.write(
|
|
||||||
"{0} facts for the environment {1} were deleted "
|
|
||||||
"successfully.\n".format(self.fact_type.capitalize(),
|
|
||||||
parsed_args.id)
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class EnvDeploymentFactsDelete(BaseEnvFactsDelete):
|
|
||||||
"""Delete current deployment facts."""
|
|
||||||
|
|
||||||
fact_type = 'deployment'
|
|
||||||
|
|
||||||
|
|
||||||
class EnvProvisioningFactsDelete(BaseEnvFactsDelete):
|
|
||||||
"""Delete current provisioning facts."""
|
|
||||||
|
|
||||||
fact_type = 'provisioning'
|
|
||||||
|
|
||||||
|
|
||||||
class BaseEnvFactsDownload(FactsMixIn, EnvMixIn, base.BaseCommand):
|
|
||||||
"""Download various facts for orchestrator."""
|
|
||||||
|
|
||||||
fact_type = ''
|
|
||||||
fact_default = False
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(BaseEnvFactsDownload, self).get_parser(prog_name)
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-e', '--env',
|
|
||||||
type=int,
|
|
||||||
required=True,
|
|
||||||
help='ID of the environment')
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-d', '--directory',
|
|
||||||
type=self.destination_dir,
|
|
||||||
default=os.path.curdir,
|
|
||||||
help='Path to directory to save {} facts. '
|
|
||||||
'Defaults to the current directory'.format(self.fact_type))
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-n', '--nodes',
|
|
||||||
type=int,
|
|
||||||
nargs='+',
|
|
||||||
help='Get {} facts for nodes with given IDs'.format(
|
|
||||||
self.fact_type))
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-f', '--format',
|
|
||||||
choices=self.supported_file_formats,
|
|
||||||
required=True,
|
|
||||||
help='Format of serialized {} facts'.format(self.fact_type))
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'--no-split',
|
|
||||||
action='store_false',
|
|
||||||
dest='split',
|
|
||||||
default=True,
|
|
||||||
help='Do not split deployment info for node and cluster parts.'
|
|
||||||
)
|
|
||||||
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, parsed_args):
|
|
||||||
facts_dir = self.download(
|
|
||||||
parsed_args.env,
|
|
||||||
self.fact_type,
|
|
||||||
parsed_args.directory,
|
|
||||||
parsed_args.format,
|
|
||||||
nodes=parsed_args.nodes,
|
|
||||||
default=self.fact_default,
|
|
||||||
split=parsed_args.split
|
|
||||||
)
|
|
||||||
self.app.stdout.write(
|
|
||||||
"{0} {1} facts for the environment {2} "
|
|
||||||
"were downloaded to {3}\n".format(
|
|
||||||
'Default' if self.fact_default else 'User-defined',
|
|
||||||
self.fact_type,
|
|
||||||
parsed_args.env,
|
|
||||||
facts_dir)
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class EnvDeploymentFactsDownload(BaseEnvFactsDownload):
|
|
||||||
"""Download the user-defined deployment facts."""
|
|
||||||
|
|
||||||
fact_type = 'deployment'
|
|
||||||
fact_default = False
|
|
||||||
|
|
||||||
|
|
||||||
class EnvDeploymentFactsGetDefault(BaseEnvFactsDownload):
|
|
||||||
"""Download the default deployment facts."""
|
|
||||||
|
|
||||||
fact_type = 'deployment'
|
|
||||||
fact_default = True
|
|
||||||
|
|
||||||
|
|
||||||
class EnvProvisioningFactsDownload(BaseEnvFactsDownload):
|
|
||||||
"""Download the user-defined provisioning facts."""
|
|
||||||
|
|
||||||
fact_type = 'provisioning'
|
|
||||||
fact_default = False
|
|
||||||
|
|
||||||
|
|
||||||
class EnvProvisioningFactsGetDefault(BaseEnvFactsDownload):
|
|
||||||
"""Download the default provisioning facts."""
|
|
||||||
|
|
||||||
fact_type = 'provisioning'
|
|
||||||
fact_default = True
|
|
||||||
|
|
||||||
|
|
||||||
class BaseEnvFactsUpload(FactsMixIn, EnvMixIn, base.BaseCommand):
|
|
||||||
"""Upload various facts for orchestrator."""
|
|
||||||
|
|
||||||
fact_type = ''
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(BaseEnvFactsUpload, self).get_parser(prog_name)
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-e', '--env',
|
|
||||||
type=int,
|
|
||||||
required=True,
|
|
||||||
help='ID of the environment')
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-d', '--directory',
|
|
||||||
type=self.source_dir,
|
|
||||||
default=os.path.curdir,
|
|
||||||
help='Path to directory to read {} facts. '
|
|
||||||
'Defaults to the current directory'.format(self.fact_type))
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-f', '--format',
|
|
||||||
choices=self.supported_file_formats,
|
|
||||||
required=True,
|
|
||||||
help='Format of serialized {} facts'.format(self.fact_type))
|
|
||||||
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, parsed_args):
|
|
||||||
self.upload(
|
|
||||||
parsed_args.env,
|
|
||||||
self.fact_type,
|
|
||||||
parsed_args.directory,
|
|
||||||
parsed_args.format
|
|
||||||
)
|
|
||||||
self.app.stdout.write(
|
|
||||||
"{0} facts for the environment {1} were uploaded "
|
|
||||||
"successfully.\n".format(self.fact_type.capitalize(),
|
|
||||||
parsed_args.env)
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class EnvDeploymentFactsUpload(BaseEnvFactsUpload):
|
|
||||||
"""Upload deployment facts."""
|
|
||||||
|
|
||||||
fact_type = 'deployment'
|
|
||||||
|
|
||||||
|
|
||||||
class EnvProvisioningFactsUpload(BaseEnvFactsUpload):
|
|
||||||
"""Upload provisioning facts."""
|
|
||||||
|
|
||||||
fact_type = 'provisioning'
|
|
|
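The `FactsMixIn` helpers above derive a per-environment facts directory from a fixed `<fact_type>_<env_id>` naming scheme. A minimal standalone sketch of that convention (not part of fuelclient, plain stdlib only):

```python
import os


def get_fact_dir(env_id, fact_type, directory):
    # Mirrors FactsMixIn._get_fact_dir: facts for an environment land in a
    # "<fact_type>_<env_id>" subdirectory of the chosen base directory.
    return os.path.join(directory, "{0}_{1}".format(fact_type, env_id))


print(get_fact_dir(42, 'deployment', '/tmp'))  # → /tmp/deployment_42
```

This is why `fuel2 env deployment-facts download` and `upload` can round-trip without the user naming the directory explicitly: both sides recompute the same path from the environment id and fact type.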
@ -1,98 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from cliff import show

from fuelclient.commands import base


class ExtensionMixIn(object):
    entity_name = 'extension'


class ExtensionList(ExtensionMixIn, base.BaseListCommand):
    """Show list of all available extensions."""

    columns = ("name",
               "version",
               "description",
               "provides")
    default_sorting_by = ["name"]


class EnvExtensionShow(ExtensionMixIn, base.BaseShowCommand):
    """Show list of enabled extensions for environment with given id."""

    columns = ("extensions", )

    def get_parser(self, prog_name):
        # Avoid adding id argument by BaseShowCommand
        # Because it adds 'id' with wrong help message for this class
        parser = show.ShowOne.get_parser(self, prog_name)

        parser.add_argument('id', type=int, help='Id of the environment.')

        return parser


class EnvExtensionEnable(ExtensionMixIn, base.BaseCommand):
    """Enable specified extensions for environment with given id."""

    def get_parser(self, prog_name):
        parser = super(EnvExtensionEnable, self).get_parser(prog_name)

        parser.add_argument('id', type=int, help='Id of the environment.')
        parser.add_argument('-E',
                            '--extensions',
                            required=True,
                            nargs='+',
                            help='Names of extensions to enable.')

        return parser

    def take_action(self, parsed_args):
        self.client.enable_extensions(parsed_args.id, parsed_args.extensions)

        msg = ('The following extensions: {e} have been enabled for '
               'the environment with id {id}.\n'.format(
                   e=', '.join(parsed_args.extensions), id=parsed_args.id))

        self.app.stdout.write(msg)


class EnvExtensionDisable(ExtensionMixIn, base.BaseCommand):
    """Disable specified extensions for environment with given id."""

    def get_parser(self, prog_name):
        parser = super(EnvExtensionDisable, self).get_parser(prog_name)

        parser.add_argument('id', type=int, help='Id of the environment.')
        parser.add_argument('-E',
                            '--extensions',
                            required=True,
                            nargs='+',
                            help='Names of extensions to disable.')

        return parser

    def take_action(self, parsed_args):
        self.client.disable_extensions(parsed_args.id, parsed_args.extensions)

        msg = ('The following extensions: {e} have been disabled for '
               'the environment with id {id}.\n'.format(
                   e=', '.join(parsed_args.extensions), id=parsed_args.id))

        self.app.stdout.write(msg)
@ -1,35 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from cliff import show

from fuelclient.commands import base
from fuelclient.common import data_utils


class FuelVersion(show.ShowOne, base.BaseCommand):
    """Show the version of Fuel."""

    entity_name = 'fuel-version'
    columns = ('api',
               'auth_required',
               'feature_groups',
               'openstack_version',
               'release')

    def take_action(self, parsed_args):
        data = self.client.get_all()
        data = data_utils.get_display_data_single(self.columns, data)

        return (self.columns, data)
@ -1,414 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

from fuelclient.cli import error
from fuelclient.cli.serializers import Serializer
from fuelclient.commands import base
from fuelclient.common import data_utils
from fuelclient.utils import iterfiles


class FileMethodsMixin(object):
    @classmethod
    def check_file_path(cls, file_path):
        if not os.path.exists(file_path):
            raise error.InvalidFileException(
                "File '{0}' doesn't exist.".format(file_path))

    @classmethod
    def check_dir(cls, directory):
        if not os.path.exists(directory):
            raise error.InvalidDirectoryException(
                "Directory '{0}' doesn't exist.".format(directory))
        if not os.path.isdir(directory):
            raise error.InvalidDirectoryException(
                "Error: '{0}' is not a directory.".format(directory))


class GraphUpload(base.BaseCommand, FileMethodsMixin):
    """Upload deployment graph configuration."""
    entity_name = 'graph'

    @classmethod
    def read_data_from_file(cls, file_path=None, serializer=None):
        """Read graph data from the given path.

        :param file_path: path
        :type file_path: str
        :param serializer: serializer object
        :type serializer: object
        :return: data
        :rtype: list|object
        """
        cls.check_file_path(file_path)
        return (serializer or Serializer()).read_from_full_path(file_path)

    @classmethod
    def read_data_from_dir(cls, dir_path=None, serializer=None):
        """Read graph data from a directory.

        :param dir_path: path
        :type dir_path: str
        :param serializer: serializer object
        :type serializer: object
        :return: data
        :rtype: list|object
        """
        cls.check_dir(dir_path)
        serializer = serializer or Serializer()

        metadata_filepath = os.path.join(dir_path, 'metadata.yaml')
        if os.path.exists(metadata_filepath):
            data = serializer.read_from_full_path(metadata_filepath)
        else:
            data = {}

        tasks = []
        for file_name in iterfiles(dir_path, 'tasks.yaml'):
            task_data = serializer.read_from_full_path(file_name)
            if task_data:
                tasks.extend(task_data)

        if tasks:
            data['tasks'] = tasks

        if not data:
            msg = ("Nothing to upload. Check that at least one non-empty "
                   "'tasks.yaml' file exists under the '{path}' "
                   "directory".format(path=dir_path))
            raise error.ActionException(msg)
        return data

    def get_parser(self, prog_name):
        parser = super(GraphUpload, self).get_parser(prog_name)
        graph_class = parser.add_mutually_exclusive_group(required=True)

        graph_class.add_argument('-e',
                                 '--env',
                                 type=int,
                                 required=False,
                                 help='Id of the environment')
        graph_class.add_argument('-r',
                                 '--release',
                                 type=int,
                                 required=False,
                                 help='Id of the release')
        graph_class.add_argument('-p',
                                 '--plugin',
                                 type=int,
                                 required=False,
                                 help='Id of the plugin')

        parser.add_argument('-t',
                            '--graph-type',
                            required=True,
                            help='Type of the deployment graph')

        graph_source = parser.add_mutually_exclusive_group(required=True)
        graph_source.add_argument(
            '-f',
            '--file',
            default=None,
            help='YAML file that contains deployment graph data.'
        )
        graph_source.add_argument(
            '-d',
            '--dir',
            default=None,
            help='The directory that includes tasks.yaml and metadata.yaml.'
        )
        return parser

    def take_action(self, args):
        parameters_to_graph_class = (
            ('env', 'clusters'),
            ('release', 'releases'),
            ('plugin', 'plugins'),
        )

        if args.file:
            data = self.read_data_from_file(args.file)
        else:
            data = self.read_data_from_dir(args.dir)

        for parameter, graph_class in parameters_to_graph_class:
            model_id = getattr(args, parameter)
            if model_id:
                self.client.upload(
                    data=data,
                    related_model=graph_class,
                    related_id=model_id,
                    graph_type=args.graph_type
                )
                break

        self.app.stdout.write("Deployment graph was successfully uploaded.\n")


class GraphExecute(base.BaseTasksExecuteCommand):
    """Start deployment with given graph type."""
    entity_name = 'graph'

    def get_parser(self, prog_name):
        parser = super(GraphExecute, self).get_parser(prog_name)
        parser.add_argument(
            '-t',
            '--graph-types',
            nargs='+',
            required=True,
            help='Types of the deployment graph in order of execution'
        )
        parser.add_argument(
            '-n',
            '--nodes',
            type=int,
            nargs='+',
            help='Ids of the nodes to use for deployment.'
        )
        parser.add_argument(
            '-T',
            '--task-names',
            nargs='+',
            help='List of deployment tasks to run.'
        )
        parser.add_argument('-S',
                            '--subgraphs',
                            type=str,
                            nargs='+',
                            required=False,
                            help='List of subgraphs to execute. '
                                 'Format is: '
                                 '[<start_task>[/<node_ids>]]'
                                 '[:<end_task>/[<node_ids>]]')
        return parser

    def get_options(self, parsed_args):
        return {
            'graph_types': parsed_args.graph_types,
            'nodes': parsed_args.nodes,
            'task_names': parsed_args.task_names,
            'subgraphs': parsed_args.subgraphs
        }


class GraphDownload(base.BaseCommand):
    """Download deployment graph configuration."""
    entity_name = 'graph'
    supported_file_formats = ('json', 'yaml')

    def get_parser(self, prog_name):
        parser = super(GraphDownload, self).get_parser(prog_name)
        tasks_level = parser.add_mutually_exclusive_group(required=True)
        parser.add_argument('-e',
                            '--env',
                            type=int,
                            required=True,
                            help='Id of the environment')

        tasks_level.add_argument('-a',
                                 '--all',
                                 action="store_true",
                                 required=False,
                                 default=False,
                                 help='Download merged graph for the '
                                      'environment')
        tasks_level.add_argument('-c',
                                 '--cluster',
                                 action="store_true",
                                 required=False,
                                 default=False,
                                 help='Download cluster-specific tasks')
        tasks_level.add_argument('-p',
                                 '--plugins',
                                 action="store_true",
                                 required=False,
                                 default=False,
                                 help='Download plugins-specific tasks')
        tasks_level.add_argument('-r',
                                 '--release',
                                 action="store_true",
                                 required=False,
                                 default=False,
                                 help='Download release-specific tasks')

        parser.add_argument('-t',
                            '--graph-type',
                            type=str,
                            default=None,
                            required=False,
                            help='Graph type string')
        parser.add_argument('-f',
                            '--file',
                            type=str,
                            required=False,
                            default=None,
                            help='File in {} format that contains tasks '
                                 'data.'.format(self.supported_file_formats))
        parser.add_argument('--format',
                            required=False,
                            choices=self.supported_file_formats,
                            default='yaml',
                            help='Format of serialized tasks data. '
                                 'Defaults to YAML.')
        return parser

    @classmethod
    def get_default_tasks_data_path(cls, env_id, task_level_name, file_format):
        return os.path.join(
            os.path.abspath(os.curdir),
            '{}_graph_{}.{}'.format(task_level_name, env_id, file_format)
        )

    @classmethod
    def write_tasks_to_file(cls, tasks_data, serializer, file_path):
        return serializer.write_to_full_path(file_path, tasks_data)

    def take_action(self, args):
        tasks_data = []
        tasks_level_name = ''
        for tasks_level_name in ('all', 'cluster', 'release', 'plugins'):
            if getattr(args, tasks_level_name):
                tasks_data = self.client.download(
                    env_id=args.env,
                    level=tasks_level_name,
                    graph_type=args.graph_type
                )
                break

        # write to file
        file_path = args.file or self.get_default_tasks_data_path(
            args.env, tasks_level_name, args.format)
        graph_data_file_path = self.write_tasks_to_file(
            tasks_data=tasks_data,
            serializer=Serializer(format=args.format),
            file_path=file_path)

        self.app.stdout.write(
            "Tasks were downloaded to {0}\n".format(graph_data_file_path)
        )


class GraphList(base.BaseListCommand):
    """List deployment graphs."""
    entity_name = 'graph'
    columns = ("id",
               "name",
               "tasks",
               "relations")

    def get_parser(self, prog_name):
        parser = super(GraphList, self).get_parser(prog_name)
        parser.add_argument(
            '-e',
            '--env',
            type=int,
            help='Id of the environment'
        )
        parser.add_argument(
            '--cluster',
            dest='filters',
            action='append_const',
            const='cluster',
            help='Include cluster-specific graphs'
        )
        parser.add_argument(
            '--plugins',
            dest='filters',
            action='append_const',
            const='plugin',
            help='Include plugins-specific graphs'
        )
        parser.add_argument(
            '--release',
            dest='filters',
            action='append_const',
            const='release',
            help='Include release-specific graphs'
        )
        return parser

    def take_action(self, args):
        data = self.client.list(env_id=args.env, filters=args.filters)

        # make table context applying special formatting to data copy
        display_data = []
        for d in data:
            d = d.copy()
            d.update({
                'relations': "\n".join(
                    'as "{type}" to {model}(ID={model_id})'.format(**r)
                    for r in d['relations']
                ),
                'tasks': ', '.join(sorted(t['id'] for t in d['tasks']))
            })
            display_data.append(d)

        data = data_utils.get_display_data_multi(self.columns, display_data)
        scolumn_ids = [self.columns.index(col) for col in args.sort_columns]
        data.sort(key=lambda x: [x[scolumn_id] for scolumn_id in scolumn_ids])
        return self.columns, data


class GraphDelete(base.BaseCommand):
    """Delete deployment graph."""
    entity_name = 'graph'

    def get_parser(self, prog_name):
        parser = super(GraphDelete, self).get_parser(prog_name)
        graph_class = parser.add_mutually_exclusive_group(required=True)
        graph_class.add_argument('-e',
                                 '--environment',
                                 type=int,
                                 help='Id of the environment')
        graph_class.add_argument('-r',
                                 '--release',
                                 type=int,
                                 help='Id of the release')
        graph_class.add_argument('-p',
                                 '--plugin',
                                 type=int,
                                 help='Id of the plugin')
        parser.add_argument('-t',
                            '--graph-type',
                            required=True,
                            help='Type of the deployment graph')
        return parser

    def take_action(self, parsed_args):
        parameters_to_graph_class = (
            ('environment', 'clusters'),
            ('release', 'releases'),
            ('plugin', 'plugins'),
        )

        msg = ''
        for parameter, graph_class in parameters_to_graph_class:
            model_id = getattr(parsed_args, parameter)
            if model_id:
                self.client.delete(
                    related_model=graph_class,
                    related_id=model_id,
                    graph_type=parsed_args.graph_type
                )
                msg = ("Deployment graph '{0}' for {1} with id {2} was "
                       "deleted.\n".format(parsed_args.graph_type,
                                           parameter,
                                           model_id))
                break

        self.app.stdout.write(msg)
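The `GraphList.take_action` method above flattens each graph's `relations` list of dicts into one readable line per relation before handing the rows to the table formatter. A standalone sketch of that formatting step, with hypothetical sample data:

```python
# Sample relation dicts shaped like those returned by the graphs API
# (the values here are illustrative, not real API output).
relations = [
    {'type': 'default', 'model': 'cluster', 'model_id': 1},
    {'type': 'default', 'model': 'release', 'model_id': 2},
]

# Same format string as GraphList.take_action: each dict is unpacked
# into the template, one relation per line.
formatted = "\n".join(
    'as "{type}" to {model}(ID={model_id})'.format(**r)
    for r in relations
)
print(formatted)
# → as "default" to cluster(ID=1)
# → as "default" to release(ID=2)
```

Joining with `"\n"` keeps all of a graph's relations inside a single table cell, which is why the command formats a copy of the data rather than mutating the API response.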
@ -1,177 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Vitalii Kulanov
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import six

from fuelclient.commands import base
from fuelclient.common import data_utils


class HealthMixIn(object):

    entity_name = 'health'


class HealthTestSetsList(HealthMixIn, base.BaseListCommand):
    """List of all available test sets for a given environment."""

    columns = ("id",
               "name")

    filters = {'environment_id': 'env'}

    def get_parser(self, prog_name):
        parser = super(HealthTestSetsList, self).get_parser(prog_name)
        parser.add_argument('-e',
                            '--env',
                            type=int,
                            required=True,
                            help='Id of the environment.')
        return parser


class HealthCheckStart(HealthMixIn, base.BaseListCommand):
    """Run specified test sets for a given environment."""

    columns = ("id",
               "testset",
               "cluster_id")

    def get_parser(self, prog_name):
        parser = super(HealthCheckStart, self).get_parser(prog_name)
        parser.add_argument('-e',
                            '--env',
                            type=int,
                            required=True,
                            help='Id of the environment.')
        parser.add_argument('--force',
                            action='store_true',
                            help='Force run health test sets.')
        parser.add_argument('-t',
                            '--tests',
                            nargs='+',
                            help='Name of the test sets to run.')
        parser.add_argument('--ostf-username',
                            default=None,
                            help='OSTF username.')
        parser.add_argument('--ostf-password',
                            default=None,
                            help='OSTF password.')
        parser.add_argument('--ostf-tenant-name',
                            default=None,
                            help='OSTF tenant name.')
        return parser

    def take_action(self, parsed_args):
        ostf_credentials = {}
        if parsed_args.ostf_tenant_name is not None:
            ostf_credentials['tenant'] = parsed_args.ostf_tenant_name
        if parsed_args.ostf_username is not None:
            ostf_credentials['username'] = parsed_args.ostf_username
        if parsed_args.ostf_password is not None:
            ostf_credentials['password'] = parsed_args.ostf_password

        if not ostf_credentials:
            self.app.stdout.write("WARNING: ostf credentials are going to be "
                                  "mandatory in the next release.\n")

        data = self.client.start(parsed_args.env,
                                 ostf_credentials=ostf_credentials,
                                 test_sets=parsed_args.tests,
                                 force=parsed_args.force)

        msg = ("\nHealth check tests for environment with id {0} have been "
               "started:\n".format(parsed_args.env))
        self.app.stdout.write(msg)
        data = data_utils.get_display_data_multi(self.columns, data)
        return self.columns, data


@six.add_metaclass(abc.ABCMeta)
class HealthCheckBaseAction(HealthMixIn, base.BaseShowCommand):
    """Base class for implementing action over a given test set."""

    columns = ("id",
               "testset",
               "cluster_id",
               "status")

    @abc.abstractproperty
    def action_status(self):
        """String with the name of the action."""
        pass

    def take_action(self, parsed_args):
        data = self.client.action(parsed_args.id, self.action_status)

        data = data_utils.get_display_data_single(self.columns, data)
        return self.columns, data


class HealthCheckStop(HealthCheckBaseAction):
    """Stop test set with given id."""

    action_status = "stopped"


class HealthCheckRestart(HealthCheckBaseAction):
    """Restart test set with given id."""

    action_status = "restarted"


class HealthTestSetsStatusList(HealthMixIn, base.BaseListCommand):
    """Show list of statuses of all test sets ever executed in Fuel."""

    columns = ("id",
               "testset",
               "cluster_id",
               "status",
               "started_at",
               "ended_at")

    def get_parser(self, prog_name):
        parser = super(HealthTestSetsStatusList, self).get_parser(prog_name)
        parser.add_argument('-e',
                            '--env',
                            type=int,
                            help='Id of the environment.')
        return parser

    def take_action(self, parsed_args):
        data = self.client.get_status_all(parsed_args.env)

        data = data_utils.get_display_data_multi(self.columns, data)
        return self.columns, data


class HealthTestSetsStatusShow(HealthMixIn, base.BaseShowCommand):
    """Show status about a test set with given id."""

    columns = ("id",
               "testset",
               "cluster_id",
               "status",
               "started_at",
               "ended_at",
               "tests")

    def take_action(self, parsed_args):
        data = self.client.get_status_single(parsed_args.id)

        data = data_utils.get_display_data_single(self.columns, data)
        return self.columns, data
@@ -1,174 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from cliff import show

from fuelclient.cli.serializers import Serializer
from fuelclient.commands import base
from fuelclient.common import data_utils


_updatable_keys = (
    'name', 'vlan', 'cidr', 'gateway', 'group_id', 'meta')


def get_args_for_update(params, serializer=None):
    result = {}
    for attr in _updatable_keys:
        value = getattr(params, attr, None)
        if value is not None:
            result[attr] = value

    if 'meta' in result:
        serializer = serializer or Serializer.from_params(params)
        result['meta'] = serializer.deserialize(result['meta'])

    return result


class NetworkGroupMixin(object):
    entity_name = 'network-group'

    @staticmethod
    def add_parser_arguments(parser, for_update=False):
        parser.add_argument(
            '-N', '--node-group',
            type=int,
            required=not for_update,
            help='ID of the network group'
        )

        parser.add_argument(
            '-C', '--cidr',
            type=str,
            required=not for_update,
            help='CIDR of the network'
        )

        parser.add_argument(
            '-V', '--vlan',
            type=int,
            help='VLAN of the network',
        )

        if not for_update:
            parser.add_argument(
                '-r', '--release',
                type=int,
                help='Release ID this network group belongs to'
            )

        parser.add_argument(
            '-g', '--gateway',
            type=str,
            help='Gateway of the network'
        )

        parser.add_argument(
            '-m', '--meta',
            type=str,
            help='Metadata in JSON format to override default network metadata'
        )


class NetworkGroupList(NetworkGroupMixin, base.BaseListCommand):
    """List all network groups."""

    columns = (
        'id',
        'name',
        'vlan_start',
        'cidr',
        'gateway',
        'group_id'
    )


class NetworkGroupShow(NetworkGroupMixin, base.BaseShowCommand):
    """Show network group."""

    columns = NetworkGroupList.columns + ('meta',)


class NetworkGroupCreate(NetworkGroupMixin, base.BaseShowCommand):
    """Create a new network group."""

    columns = NetworkGroupList.columns

    def get_parser(self, prog_name):
        parser = show.ShowOne.get_parser(self, prog_name)

        parser.add_argument(
            'name',
            type=str,
            help='Name of the new network group'
        )

        self.add_parser_arguments(parser)

        return parser

    def take_action(self, parsed_args):
        meta = None
        if parsed_args.meta:
            serializer = Serializer.from_params(parsed_args)
            meta = serializer.deserialize(parsed_args.meta)

        net_group = self.client.create(
            name=parsed_args.name,
            release=parsed_args.release,
            vlan=parsed_args.vlan,
            cidr=parsed_args.cidr,
            gateway=parsed_args.gateway,
            group_id=parsed_args.node_group,
            meta=meta)

        net_group = data_utils.get_display_data_single(self.columns, net_group)
        return self.columns, net_group


class NetworkGroupUpdate(NetworkGroupMixin, base.BaseShowCommand):
    """Set parameters for the specified network group."""

    columns = NetworkGroupList.columns

    def get_parser(self, prog_name):
        parser = show.ShowOne.get_parser(self, prog_name)

        parser.add_argument(
            'id',
            type=int,
            help='ID of the network group to update')
        parser.add_argument(
            '-n',
            '--name',
            type=str,
            help='New name for network group')

        self.add_parser_arguments(parser, for_update=True)

        return parser

    def take_action(self, parsed_args):
        to_update = get_args_for_update(parsed_args)

        network_group = self.client.update(parsed_args.id, **to_update)
        network_group = data_utils.get_display_data_single(
            self.columns, network_group)

        return self.columns, network_group


class NetworkGroupDelete(NetworkGroupMixin, base.BaseDeleteCommand):
    """Delete specified network group."""
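`get_args_for_update` above forwards only the attributes the user actually supplied, deserializing `meta` on the way. A minimal standalone sketch of that filtering pattern (hypothetical names; JSON is assumed for the `meta` payload, whereas the real helper delegates to a fuelclient `Serializer`):

```python
import json

_updatable_keys = ('name', 'vlan', 'cidr', 'gateway', 'group_id', 'meta')


class Params(object):
    """Stand-in for an argparse.Namespace (hypothetical)."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


def get_args_for_update(params):
    # Collect only the updatable attributes that were actually set;
    # unset options default to None and are skipped entirely.
    result = {}
    for attr in _updatable_keys:
        value = getattr(params, attr, None)
        if value is not None:
            result[attr] = value

    # 'meta' arrives as a serialized string; deserialize it (JSON assumed).
    if 'meta' in result:
        result['meta'] = json.loads(result['meta'])
    return result


print(get_args_for_update(Params(name='net1', vlan=100,
                                 meta='{"notation": "cidr"}')))
```

Filtering out `None` values keeps a partial update from clobbering server-side fields the user never mentioned.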
@@ -1,103 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.commands import base


class NetworkTemplateMixin(object):

    entity_name = 'environment'

    @staticmethod
    def add_env_argument(parser):
        parser.add_argument(
            'env',
            type=int,
            help='Id of the environment'
        )

    @staticmethod
    def add_dir_argument(parser):
        parser.add_argument(
            '-d', '--dir',
            type=str,
            help='Directory for the network template'
        )

    @staticmethod
    def add_file_argument(parser):
        parser.add_argument(
            '-f', '--file',
            required=True,
            type=str,
            help='Yaml file containing the network template'
        )


class NetworkTemplateUpload(NetworkTemplateMixin, base.BaseCommand):
    """Upload network configuration for specified environment."""

    def get_parser(self, prog_name):
        parser = super(NetworkTemplateUpload, self).get_parser(prog_name)

        self.add_env_argument(parser)
        self.add_file_argument(parser)

        return parser

    def take_action(self, parsed_args):
        file_path = self.client.upload_network_template(
            parsed_args.env, parsed_args.file)
        msg = "Network template {0} has been uploaded.\n".format(file_path)
        self.app.stdout.write(msg)


class NetworkTemplateDownload(NetworkTemplateMixin, base.BaseCommand):
    """Download network configuration for specified environment."""

    def get_parser(self, prog_name):
        parser = super(NetworkTemplateDownload, self).get_parser(prog_name)

        self.add_dir_argument(parser)
        self.add_env_argument(parser)

        return parser

    def take_action(self, parsed_args):
        file_path = self.client.download_network_template(
            parsed_args.env, parsed_args.dir)

        msg = ("Network template configuration for environment with id={0}"
               " downloaded to {1}\n").format(parsed_args.env, file_path)
        self.app.stdout.write(msg)


class NetworkTemplateDelete(NetworkTemplateMixin, base.BaseCommand):
    """Delete the network template of the specified environment."""

    def get_parser(self, prog_name):
        parser = super(NetworkTemplateDelete, self).get_parser(prog_name)

        self.add_env_argument(parser)

        return parser

    def take_action(self, parsed_args):
        self.client.delete_network_template(parsed_args.env)

        msg = ("Network template for environment id={0}"
               " has been deleted.\n".format(parsed_args.env))
        self.app.stdout.write(msg)
@@ -1,609 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import collections
import json
import operator
import os

from oslo_utils import fileutils
import six

from fuelclient.cli import error
from fuelclient.commands import base
from fuelclient.common import data_utils
from fuelclient import utils


class NodeMixIn(object):
    entity_name = 'node'

    numa_fields = (
        'numa_nodes',
        'supported_hugepages',
        'distances')

    supported_file_formats = ('json', 'yaml')
    allowed_attr_types = ('attributes', 'disks', 'interfaces')

    @classmethod
    def get_numa_topology_info(cls, data):
        numa_topology_info = {}
        numa_topology = data['meta'].get('numa_topology', {})
        for key in cls.numa_fields:
            numa_topology_info[key] = numa_topology.get(key)
        return numa_topology_info


@six.add_metaclass(abc.ABCMeta)
class BaseUploadCommand(NodeMixIn, base.BaseCommand):
    """Base class for uploading attributes of a node."""

    @abc.abstractproperty
    def attribute(self):
        """String with the name of the attribute."""
        pass

    @abc.abstractproperty
    def uploader(self):
        """Callable for uploading data."""
        pass

    def get_parser(self, prog_name):
        parser = super(BaseUploadCommand, self).get_parser(prog_name)
        parser.add_argument('id',
                            type=int,
                            help='Id of a node.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized '
                                 '{} data.'.format(self.attribute))
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.curdir,
                            help='Source directory. Defaults to '
                                 'the current directory.')

        return parser

    def take_action(self, parsed_args):
        directory = parsed_args.directory

        file_path = self.get_attributes_path(self.attribute,
                                             parsed_args.format,
                                             parsed_args.id,
                                             directory)

        try:
            with open(file_path, 'r') as stream:
                attributes = data_utils.safe_load(parsed_args.format,
                                                  stream)
            self.uploader(parsed_args.id, attributes)
        except (OSError, IOError):
            msg = 'Could not read configuration of {} at {}.'
            raise error.InvalidFileException(msg.format(self.attribute,
                                                        file_path))

        msg = ('Configuration of {t} for node with id '
               '{node} was loaded from {path}\n')
        self.app.stdout.write(msg.format(t=self.attribute,
                                         node=parsed_args.id,
                                         path=file_path))


@six.add_metaclass(abc.ABCMeta)
class BaseDownloadCommand(NodeMixIn, base.BaseCommand):
    """Base class for downloading attributes of a node."""

    @abc.abstractproperty
    def attribute(self):
        """String with the name of the attribute."""
        pass

    @abc.abstractproperty
    def downloader(self):
        """Callable for downloading data."""
        pass

    def get_parser(self, prog_name):
        parser = super(BaseDownloadCommand, self).get_parser(prog_name)
        parser.add_argument('id',
                            type=int,
                            help='Id of a node.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized '
                                 '{} data.'.format(self.attribute))
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.curdir,
                            help='Destination directory. Defaults to '
                                 'the current directory.')
        return parser

    def take_action(self, parsed_args):
        directory = parsed_args.directory
        attributes = self.downloader(parsed_args.id)
        file_path = self.get_attributes_path(self.attribute,
                                             parsed_args.format,
                                             parsed_args.id,
                                             directory)

        try:
            fileutils.ensure_tree(os.path.dirname(file_path))
            fileutils.delete_if_exists(file_path)

            with open(file_path, 'w') as stream:
                data_utils.safe_dump(parsed_args.format, stream, attributes)
        except (OSError, IOError):
            msg = 'Could not store configuration of {} at {}.'
            raise error.InvalidFileException(msg.format(self.attribute,
                                                        file_path))

        msg = ('Configuration of {t} for node with id '
               '{node} was stored in {path}\n')
        self.app.stdout.write(msg.format(t=self.attribute,
                                         node=parsed_args.id,
                                         path=file_path))


class NodeList(NodeMixIn, base.BaseListCommand):
    """Show list of all available nodes."""

    columns = ('id',
               'name',
               'status',
               'os_platform',
               'roles',
               'ip',
               'mac',
               'cluster',
               'platform_name',
               'online')

    filters = {
        'environment_id': 'env',
        'labels': 'labels'
    }

    def get_parser(self, prog_name):
        parser = super(NodeList, self).get_parser(prog_name)

        parser.add_argument(
            '-e',
            '--env',
            type=int,
            help='Show only nodes that are in the specified environment')

        parser.add_argument(
            '-l',
            '--labels',
            type=utils.str_to_unicode,
            nargs='+',
            help='Show only nodes that have specific labels')

        return parser


class NodeShow(NodeMixIn, base.BaseShowCommand):
    """Show info about node with given id."""

    columns = ('id',
               'name',
               'status',
               'os_platform',
               'roles',
               'kernel_params',
               'pending_roles',
               'ip',
               'mac',
               'error_type',
               'pending_addition',
               'hostname',
               'fqdn',
               'platform_name',
               'cluster',
               'online',
               'progress',
               'pending_deletion',
               'group_id',
               # TODO(romcheg): network_data mostly never fits the screen
               # 'network_data',
               'manufacturer')
    columns += NodeMixIn.numa_fields

    def take_action(self, parsed_args):
        data = self.client.get_by_id(parsed_args.id)
        numa_topology = self.get_numa_topology_info(data)
        data.update(numa_topology)
        data = data_utils.get_display_data_single(self.columns, data)
        return self.columns, data


class NodeUpdate(NodeMixIn, base.BaseShowCommand):
    """Change given attributes for a node."""

    columns = NodeShow.columns

    def get_parser(self, prog_name):
        parser = super(NodeUpdate, self).get_parser(prog_name)

        parser.add_argument(
            '-H',
            '--hostname',
            type=str,
            default=None,
            help='New hostname for node')

        parser.add_argument(
            '--name',
            type=lambda x: x.decode('utf-8') if six.PY2 else x,
            default=None,
            help='New name for node')

        return parser

    def take_action(self, parsed_args):
        updates = {}
        for attr in self.client._updatable_attributes:
            if getattr(parsed_args, attr, None):
                updates[attr] = getattr(parsed_args, attr)

        updated_node = self.client.update(
            parsed_args.id, **updates)
        numa_topology = self.get_numa_topology_info(updated_node)
        updated_node.update(numa_topology)
        updated_node = data_utils.get_display_data_single(
            self.columns, updated_node)

        return self.columns, updated_node


class NodeVmsList(NodeMixIn, base.BaseShowCommand):
    """Show list of VMs for a node."""

    columns = ('vms_conf',)

    def take_action(self, parsed_args):
        data = self.client.get_node_vms_conf(parsed_args.id)
        data = data_utils.get_display_data_single(self.columns, data)

        return (self.columns, data)


class NodeCreateVMsConf(NodeMixIn, base.BaseCommand):
    """Create VMs config in metadata for the selected node."""

    def get_parser(self, prog_name):
        parser = super(NodeCreateVMsConf, self).get_parser(prog_name)
        parser.add_argument('id', type=int,
                            help='Id of the {0}.'.format(self.entity_name))
        parser.add_argument(
            '--conf',
            type=json.loads,
            required=True,
            nargs='+',
            help='JSONs with VMs configuration',
        )

        return parser

    def take_action(self, parsed_args):
        try:
            confs = utils.parse_to_list_of_dicts(parsed_args.conf)
        except TypeError:
            raise error.BadDataException(
                'VM configuration should be a dictionary '
                'or a list of dictionaries')
        data = self.client.node_vms_create(parsed_args.id, confs)
        msg = "{0}".format(data)
        self.app.stdout.write(msg)


class NodeLabelList(NodeMixIn, base.BaseListCommand):
    """Show list of all labels."""

    columns = (
        'node_id',
        'label_name',
        'label_value')

    @property
    def default_sorting_by(self):
        return ['node_id']

    def get_parser(self, prog_name):
        parser = super(NodeLabelList, self).get_parser(prog_name)

        parser.add_argument(
            '-n',
            '--nodes',
            nargs='+',
            help='Show labels for specific nodes')

        return parser

    def take_action(self, parsed_args):
        data = self.client.get_all_labels_for_nodes(
            node_ids=parsed_args.nodes)
        data = data_utils.get_display_data_multi(self.columns, data)
        data = self._sort_data(parsed_args, data)

        return self.columns, data


class NodeLabelSet(NodeMixIn, base.BaseCommand):
    """Create or update specific labels on nodes."""

    def get_parser(self, prog_name):
        parser = super(NodeLabelSet, self).get_parser(prog_name)

        parser.add_argument(
            '-l',
            '--labels',
            required=True,
            nargs='+',
            help='List of labels for create or update')

        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument(
            '-n',
            '--nodes',
            nargs='+',
            help='Create or update labels only for specific nodes')
        group.add_argument(
            '--nodes-all',
            action='store_true',
            help='Create or update labels for all nodes')

        return parser

    def take_action(self, parsed_args):
        nodes_ids = None if parsed_args.nodes_all else parsed_args.nodes
        data = self.client.set_labels_for_nodes(
            labels=parsed_args.labels, node_ids=nodes_ids)
        msg = "Labels have been updated on nodes: {0} \n".format(
            ','.join(data))
        self.app.stdout.write(msg)


class NodeLabelDelete(NodeMixIn, base.BaseCommand):
    """Delete specific labels on nodes."""

    def get_parser(self, prog_name):
        parser = super(NodeLabelDelete, self).get_parser(prog_name)

        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument(
            '-l',
            '--labels',
            nargs='+',
            help='List of labels keys for delete')
        group.add_argument(
            '--labels-all',
            action='store_true',
            help='Delete all labels for node')

        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument(
            '-n',
            '--nodes',
            nargs='+',
            help='Delete labels only for specific nodes')
        group.add_argument(
            '--nodes-all',
            action='store_true',
            help='Delete labels for all nodes')

        return parser

    def take_action(self, parsed_args):
        nodes_ids = None if parsed_args.nodes_all else parsed_args.nodes
        labels = None if parsed_args.labels_all \
            else parsed_args.labels
        data = self.client.delete_labels_for_nodes(
            labels=labels, node_ids=nodes_ids)
        msg = "Labels have been deleted on nodes: {0} \n".format(
            ','.join(data))
        self.app.stdout.write(msg)


class NodeAttributesDownload(NodeMixIn, base.BaseCommand):
    """Download node attributes."""

    def get_parser(self, prog_name):
        parser = super(NodeAttributesDownload, self).get_parser(prog_name)

        parser.add_argument(
            'id', type=int, help='Node ID')
        parser.add_argument(
            '--dir', type=str, help='Directory to save attributes')

        return parser

    def take_action(self, parsed_args):
        file_path = self.client.download_attributes(
            parsed_args.id, parsed_args.dir)
        self.app.stdout.write(
            "Attributes for node {0} were written to {1}"
            .format(parsed_args.id, file_path) + os.linesep)


class NodeAttributesUpload(NodeMixIn, base.BaseCommand):
    """Upload node attributes."""

    def get_parser(self, prog_name):
        parser = super(NodeAttributesUpload, self).get_parser(prog_name)

        parser.add_argument(
            'id', type=int, help='Node ID')
        parser.add_argument(
            '--dir', type=str, help='Directory to read attributes from')

        return parser

    def take_action(self, parsed_args):
        self.client.upload_attributes(parsed_args.id, parsed_args.dir)
        self.app.stdout.write(
            "Attributes for node {0} were uploaded."
            .format(parsed_args.id) + os.linesep)


class NodeAnsibleInventory(NodeMixIn, base.BaseCommand):
    """Generate ansible inventory file based on the nodes list."""

    def get_parser(self, prog_name):
        parser = super(NodeAnsibleInventory, self).get_parser(prog_name)

        # if this is a required argument, we'll avoid ambiguity of having
        # nodes of multiple different clusters in the same inventory file
        parser.add_argument(
            '-e',
            '--env',
            type=int,
            required=True,
            help='Use only nodes that are in the specified environment')

        parser.add_argument(
            '-l',
            '--labels',
            type=utils.str_to_unicode,
            nargs='+',
            help='Use only nodes that have specific labels')

        return parser

    def take_action(self, parsed_args):
        data = self.client.get_all(environment_id=parsed_args.env,
                                   labels=parsed_args.labels)

        nodes_by_role = collections.defaultdict(list)
        for node in data:
            for role in node['roles']:
                nodes_by_role[role].append(node)

        for role, nodes in sorted(nodes_by_role.items()):
            self.app.stdout.write(u'[{role}]\n'.format(role=role))
            self.app.stdout.write(
                u'\n'.join(
                    u'{name} ansible_host={ip}'.format(name=node['hostname'],
                                                       ip=node['ip'])
                    for node in sorted(nodes_by_role[role],
                                       key=operator.itemgetter('hostname'))
                )
            )
            self.app.stdout.write(u'\n\n')


class NodeInterfacesDownload(BaseDownloadCommand):
    """Download and store configuration of interfaces for a node to a file."""

    attribute = 'interfaces'

    @property
    def downloader(self):
        return self.client.get_interfaces


class NodeInterfacesGetDefault(BaseDownloadCommand):
|
|
||||||
"""Download default configuration of interfaces for a node to a file."""
|
|
||||||
|
|
||||||
attribute = 'interfaces'
|
|
||||||
|
|
||||||
@property
|
|
||||||
def downloader(self):
|
|
||||||
return self.client.get_default_interfaces
|
|
||||||
|
|
||||||
|
|
||||||
class NodeInterfacesUpload(BaseUploadCommand):
|
|
||||||
"""Upload stored configuration of interfaces for a node from a file."""
|
|
||||||
|
|
||||||
attribute = 'interfaces'
|
|
||||||
|
|
||||||
@property
|
|
||||||
def uploader(self):
|
|
||||||
return self.client.set_interfaces
|
|
||||||
|
|
||||||
|
|
||||||
class NodeDisksDownload(BaseDownloadCommand):
|
|
||||||
"""Download and store configuration of disks for a node to a file."""
|
|
||||||
|
|
||||||
attribute = 'disks'
|
|
||||||
|
|
||||||
@property
|
|
||||||
def downloader(self):
|
|
||||||
return self.client.get_disks
|
|
||||||
|
|
||||||
|
|
||||||
class NodeDisksGetDefault(BaseDownloadCommand):
|
|
||||||
"""Download default configuration of disks for a node to a file."""
|
|
||||||
|
|
||||||
attribute = 'disks'
|
|
||||||
|
|
||||||
@property
|
|
||||||
def downloader(self):
|
|
||||||
return self.client.get_default_disks
|
|
||||||
|
|
||||||
|
|
||||||
class NodeDisksUpload(BaseUploadCommand):
|
|
||||||
"""Upload stored configuration of disks for a node from a file."""
|
|
||||||
|
|
||||||
attribute = 'disks'
|
|
||||||
|
|
||||||
@property
|
|
||||||
def uploader(self):
|
|
||||||
return self.client.set_disks
|
|
||||||
|
|
||||||
|
|
||||||
class NodeUndiscover(NodeMixIn, base.BaseCommand):
|
|
||||||
"""Remove nodes from database."""
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(NodeUndiscover, self).get_parser(prog_name)
|
|
||||||
|
|
||||||
group = parser.add_mutually_exclusive_group(required=True)
|
|
||||||
group.add_argument('-e',
|
|
||||||
'--env',
|
|
||||||
type=int,
|
|
||||||
help='Id of environment to remove all nodes '
|
|
||||||
'from database.')
|
|
||||||
group.add_argument('-n',
|
|
||||||
'--node',
|
|
||||||
type=int,
|
|
||||||
help='Id of the node to remove from database.')
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
'-f',
|
|
||||||
'--force',
|
|
||||||
action='store_true',
|
|
||||||
help='Forces deletion of nodes from database '
|
|
||||||
'regardless of their state.')
|
|
||||||
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, parsed_args):
|
|
||||||
node_ids = self.client.undiscover_nodes(env_id=parsed_args.env,
|
|
||||||
node_id=parsed_args.node,
|
|
||||||
force=parsed_args.force)
|
|
||||||
|
|
||||||
self.app.stdout.write(
|
|
||||||
'Nodes {0} were deleted from the database\n'.format(node_ids)
|
|
||||||
)
|
|
|
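The `NodeAnsibleInventory` command above groups nodes by role and then prints one INI-style `[role]` section per role, hosts sorted by hostname. A minimal standalone sketch of that grouping, using made-up sample node dicts in place of the data `client.get_all()` would return:

```python
import collections
import operator

# Hypothetical sample of the node dicts the API client would return
nodes = [
    {'hostname': 'node-2', 'ip': '10.0.0.2', 'roles': ['compute']},
    {'hostname': 'node-1', 'ip': '10.0.0.1', 'roles': ['controller', 'compute']},
]

# Group nodes by role, as NodeAnsibleInventory.take_action does;
# a node with several roles appears in several groups
nodes_by_role = collections.defaultdict(list)
for node in nodes:
    for role in node['roles']:
        nodes_by_role[role].append(node)

# Render one "[role]" section per role, hosts sorted by hostname
lines = []
for role, members in sorted(nodes_by_role.items()):
    lines.append('[{0}]'.format(role))
    for node in sorted(members, key=operator.itemgetter('hostname')):
        lines.append('{0} ansible_host={1}'.format(node['hostname'], node['ip']))
inventory = '\n'.join(lines)
```

With the sample data this yields a `[compute]` group listing both nodes followed by a `[controller]` group with only node-1, which is exactly the shape Ansible expects for a static INI inventory.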
@@ -1,182 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.commands import base
from fuelclient.common import data_utils


class OpenstackConfigMixin(object):
    entity_name = 'openstack-config'

    columns = (
        'id', 'is_active', 'config_type',
        'cluster_id', 'node_id', 'node_role')

    @staticmethod
    def add_env_arg(parser):
        parser.add_argument(
            '-e', '--env',
            type=int, required=True,
            help='Environment ID.')

    @staticmethod
    def add_file_arg(parser):
        parser.add_argument(
            '--file', required=True,
            type=str, help='YAML file that contains openstack configuration.')

    @staticmethod
    def add_config_id_arg(parser):
        parser.add_argument(
            'config',
            type=int, help='Id of the OpenStack configuration.'
        )

    @staticmethod
    def add_node_ids_arg(parser):
        parser.add_argument(
            '-n', '--node',
            type=int, nargs='+', default=None, help='Ids of the nodes.'
        )

    @staticmethod
    def add_node_role_arg(parser):
        parser.add_argument(
            '-r', '--role',
            type=str, default=None, help='Role of the nodes.'
        )

    @staticmethod
    def add_deleted_arg(parser):
        parser.add_argument(
            '-D', '--deleted',
            type=bool, default=False, help='Show deleted configurations.'
        )

    @staticmethod
    def add_force_arg(parser):
        parser.add_argument(
            '-f', '--force',
            action='store_true', help='Force the update of the configuration.'
        )


class OpenstackConfigList(OpenstackConfigMixin, base.BaseListCommand):
    """List all OpenStack configurations."""

    def get_parser(self, prog_name):
        parser = super(OpenstackConfigList, self).get_parser(prog_name)

        self.add_env_arg(parser)
        self.add_node_ids_arg(parser)
        self.add_node_role_arg(parser)
        self.add_deleted_arg(parser)

        return parser

    def take_action(self, args):
        data = self.client.get_filtered(
            cluster_id=args.env, node_ids=args.node,
            node_role=args.role, is_active=(not args.deleted))
        data = data_utils.get_display_data_multi(self.columns, data)
        data = self._sort_data(args, data)

        return self.columns, data


class OpenstackConfigDownload(OpenstackConfigMixin, base.BaseCommand):
    """Download specified configuration file."""

    def get_parser(self, prog_name):
        parser = super(OpenstackConfigDownload, self).get_parser(prog_name)

        self.add_config_id_arg(parser)
        self.add_file_arg(parser)

        return parser

    def take_action(self, args):
        file_path = self.client.download(args.config, args.file)

        msg = 'OpenStack configuration with id={c} '\
              'downloaded to {p}.\n'.format(c=args.config, p=file_path)

        self.app.stdout.write(msg)


class OpenstackConfigUpload(OpenstackConfigMixin, base.BaseListCommand):
    """Upload new OpenStack configuration from file."""

    def get_parser(self, prog_name):
        parser = super(OpenstackConfigUpload, self).get_parser(prog_name)

        self.add_env_arg(parser)
        self.add_node_ids_arg(parser)
        self.add_node_role_arg(parser)
        self.add_file_arg(parser)

        return parser

    def take_action(self, args):
        configs = self.client.upload(path=args.file,
                                     cluster_id=args.env,
                                     node_ids=args.node,
                                     node_role=args.role)

        data = data_utils.get_display_data_multi(self.columns, configs)
        return self.columns, data


class OpenstackConfigExecute(OpenstackConfigMixin, base.BaseCommand):
    """Execute OpenStack configuration deployment."""

    def get_parser(self, prog_name):
        parser = super(OpenstackConfigExecute, self).get_parser(prog_name)

        self.add_env_arg(parser)
        self.add_node_ids_arg(parser)
        self.add_node_role_arg(parser)
        self.add_force_arg(parser)

        return parser

    def take_action(self, args):
        task = self.client.execute(cluster_id=args.env,
                                   node_ids=args.node,
                                   node_role=args.role,
                                   force=args.force)

        msg = ('Deployment of the OpenStack configuration was started within '
               'task with id {task_id}.\n').format(task_id=task['id'])

        self.app.stdout.write(msg)


class OpenstackConfigDelete(OpenstackConfigMixin, base.BaseCommand):
    """Delete OpenStack configuration with given id."""

    def get_parser(self, prog_name):
        parser = super(OpenstackConfigDelete, self).get_parser(prog_name)

        self.add_config_id_arg(parser)

        return parser

    def take_action(self, args):
        self.client.delete(args.config)

        msg = 'Openstack configuration with id {c} '\
              'was deleted.\n'.format(c=args.config)

        self.app.stdout.write(msg)
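One subtlety in `add_deleted_arg` above: the `--deleted` option is declared with `type=bool`, and argparse simply calls `bool()` on the raw command-line string, so any non-empty value, including the literal string `False`, parses as true; only omitting the option yields the `False` default. A small sketch of that behaviour with the same declaration:

```python
import argparse

parser = argparse.ArgumentParser()
# Same declaration as OpenstackConfigMixin.add_deleted_arg() above
parser.add_argument('-D', '--deleted', type=bool, default=False,
                    help='Show deleted configurations.')

# Omitting the option gives the declared default
assert parser.parse_args([]).deleted is False
# But bool('False') is True: any non-empty string enables the flag
assert parser.parse_args(['--deleted', 'False']).deleted is True
```

This is why boolean switches are usually declared with `action='store_true'` instead, as `add_force_arg` does.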
@@ -1,117 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.commands import base


class PluginsMixIn(object):
    entity_name = 'plugins'

    @staticmethod
    def add_plugin_file_argument(parser):
        parser.add_argument(
            'file',
            type=str,
            help='Path to plugin file to install'
        )

    @staticmethod
    def add_plugin_name_argument(parser):
        parser.add_argument(
            'name',
            type=str,
            help='Name of plugin to remove'
        )

    @staticmethod
    def add_plugin_version_argument(parser):
        parser.add_argument(
            'version',
            type=str,
            help='Version of plugin to remove'
        )

    @staticmethod
    def add_plugin_ids_argument(parser):
        parser.add_argument(
            'ids',
            type=int,
            nargs='*',
            metavar='plugin-id',
            help='Synchronise only plugins with specified ids'
        )

    @staticmethod
    def add_plugin_install_force_argument(parser):
        parser.add_argument(
            '-f', '--force',
            action='store_true',
            help='Used for reinstall plugin with the same version'
        )


class PluginsList(PluginsMixIn, base.BaseListCommand):
    """Show list of all available plugins."""

    columns = ('id',
               'name',
               'version',
               'package_version',
               'releases')


class PluginsSync(PluginsMixIn, base.BaseCommand):
    """Synchronise plugins on file system with plugins in API service."""

    def get_parser(self, prog_name):
        parser = super(PluginsSync, self).get_parser(prog_name)
        self.add_plugin_ids_argument(parser)
        return parser

    def take_action(self, parsed_args):
        ids = parsed_args.ids if len(parsed_args.ids) > 0 else None
        self.client.sync(ids=ids)
        self.app.stdout.write("Plugins were successfully synchronized.\n")


class PluginInstall(PluginsMixIn, base.BaseCommand):
    """Install plugin archive and register in API service."""

    def get_parser(self, prog_name):
        parser = super(PluginInstall, self).get_parser(prog_name)
        self.add_plugin_file_argument(parser)
        self.add_plugin_install_force_argument(parser)
        return parser

    def take_action(self, parsed_args):
        self.client.install(parsed_args.file, force=parsed_args.force)
        self.app.stdout.write(
            "Plugin {0} was successfully installed.\n".format(parsed_args.file)
        )


class PluginRemove(PluginsMixIn, base.BaseCommand):
    """Remove the plugin package, and update data in API service."""

    def get_parser(self, prog_name):
        parser = super(PluginRemove, self).get_parser(prog_name)
        self.add_plugin_name_argument(parser)
        self.add_plugin_version_argument(parser)
        return parser

    def take_action(self, parsed_args):
        self.client.remove(parsed_args.name, parsed_args.version)
        self.app.stdout.write(
            "Plugin {0} was successfully removed.\n".format(parsed_args.name)
        )
@@ -1,146 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.commands import base
from fuelclient.common import data_utils
from fuelclient import utils


class ReleaseMixIn(object):
    entity_name = 'release'


class ReleaseList(ReleaseMixIn, base.BaseListCommand):
    """Show list of all available releases."""

    columns = ("id",
               "name",
               "state",
               "operating_system",
               "version")


class ReleaseReposList(ReleaseMixIn, base.BaseListCommand):
    """Show repos for a given release."""

    columns = (
        'name',
        'priority',
        'uri',
        'section',
        'suite',
        'type')

    @property
    def default_sorting_by(self):
        return ['priority']

    def get_parser(self, prog_name):
        parser = super(ReleaseReposList, self).get_parser(prog_name)
        parser.add_argument('id', type=int,
                            help='Id of the {0}.'.format(self.entity_name))
        return parser

    def take_action(self, parsed_args):
        data = self.client.get_attributes_metadata_by_id(parsed_args.id)
        repos = data["editable"]["repo_setup"]["repos"]["value"]
        repos = data_utils.get_display_data_multi(self.columns, repos)
        repos = self._sort_data(parsed_args, repos)
        return self.columns, repos


class ReleaseReposUpdate(ReleaseMixIn, base.BaseCommand):
    """Update repos for a given release."""

    def get_parser(self, prog_name):
        parser = super(ReleaseReposUpdate, self).get_parser(prog_name)
        parser.add_argument(
            '-f', '--file', action='store', required=True,
            help='Input yaml file with list of repositories')
        parser.add_argument(
            'id', type=int, help='Id of the {0}.'.format(self.entity_name))
        return parser

    def take_action(self, parsed_args):
        data = self.client.get_attributes_metadata_by_id(parsed_args.id)
        new_repos = utils.parse_yaml_file(parsed_args.file)
        data["editable"]["repo_setup"]["repos"]["value"] = new_repos
        self.client.update_attributes_metadata_by_id(parsed_args.id, data)
        self.app.stdout.write(
            "Repositories for the release with "
            "id {rel_id} were set from {file}.\n".format(
                rel_id=parsed_args.id,
                file=parsed_args.file
            )
        )


class ReleaseComponentList(ReleaseMixIn, base.BaseListCommand):
    """Show list of components for a given release."""

    columns = ("name",
               "requires",
               "compatible",
               "incompatible",
               "default")

    @property
    def default_sorting_by(self):
        return ['name']

    @staticmethod
    def retrieve_predicates(statement):
        """Retrieve predicates with respective 'items' components

        :param statement: the dictionary to extract predicate values from
        :return: retrieval result as a string
        """
        predicates = ('any_of', 'all_of', 'one_of', 'none_of')
        for predicate in predicates:
            if predicate in statement:
                result = ', '.join(statement[predicate].get('items'))
                return "{0} ({1})".format(predicate, result)
        raise ValueError('Predicates not found.')

    @classmethod
    def retrieve_data(cls, value):
        """Retrieve names of components or predicates from nested data

        :param value: data to extract name or to retrieve predicates from
        :return: names of components or predicates as a string
        """
        if isinstance(value, list):
            # get only "name" of components otherwise retrieve predicates
            return ', '.join([v['name'] if 'name' in v
                              else cls.retrieve_predicates(v)
                              for v in value])
        return value

    def get_parser(self, prog_name):
        parser = super(ReleaseComponentList, self).get_parser(prog_name)
        parser.add_argument('id', type=int,
                            help='Id of the {0}.'.format(self.entity_name))
        return parser

    def take_action(self, parsed_args):
        data = self.client.get_components_by_id(parsed_args.id)
        # some keys (columns) can be missed in origin data
        # then create them with respective '-' value
        data = [{k: self.retrieve_data(d.get(k, '-')) for k in self.columns}
                for d in data]
        data = data_utils.get_display_data_multi(self.columns, data)
        data = self._sort_data(parsed_args, data)
        return self.columns, data
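`ReleaseComponentList.retrieve_data` above flattens nested component metadata into display strings: list cells mix plain components (`{'name': ...}`) with predicate dicts (`any_of`, `all_of`, ...), and each predicate is rendered with its joined `items`. A standalone sketch of the same logic, with a made-up input cell for illustration:

```python
def retrieve_predicates(statement):
    # First matching predicate wins; its 'items' are joined for display
    for predicate in ('any_of', 'all_of', 'one_of', 'none_of'):
        if predicate in statement:
            items = ', '.join(statement[predicate].get('items'))
            return '{0} ({1})'.format(predicate, items)
    raise ValueError('Predicates not found.')


def retrieve_data(value):
    # Lists mix plain components ({'name': ...}) and predicate dicts;
    # anything else (e.g. the '-' placeholder) passes through unchanged
    if isinstance(value, list):
        return ', '.join(v['name'] if 'name' in v else retrieve_predicates(v)
                         for v in value)
    return value


# Hypothetical "requires" cell as the components API might return it
cell = [{'name': 'hypervisor:libvirt'},
        {'any_of': {'items': ['network:neutron:ml2:vlan',
                              'network:neutron:ml2:tun']}}]
flat = retrieve_data(cell)
```

Here `flat` becomes a single comma-separated string suitable for one table column, which is what `take_action` feeds into `get_display_data_multi`.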
@@ -1,268 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Vitalii Kulanov
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import os

from oslo_utils import fileutils
import six

from fuelclient.cli import error
from fuelclient.commands import base
from fuelclient.common import data_utils


class RoleMixIn(object):
    entity_name = 'role'
    supported_file_formats = ('json', 'yaml')
    fields_mapper = (
        ('env', 'clusters'),
        ('release', 'releases')
    )

    def parse_model(self, args):
        for param, role_class in self.fields_mapper:
            model_id = getattr(args, param)
            if model_id:
                return role_class, model_id

    @staticmethod
    def get_file_path(directory, owner_type, owner_id, role_name, file_format):
        return os.path.join(os.path.abspath(directory),
                            '{owner}_{id}'.format(owner=owner_type,
                                                  id=owner_id),
                            '{}.{}'.format(role_name, file_format))


@six.add_metaclass(abc.ABCMeta)
class BaseUploadCommand(RoleMixIn, base.BaseCommand):
    """Base class for uploading metadata of a role."""

    @abc.abstractproperty
    def action(self):
        """String with the name of the action."""
        pass

    @abc.abstractproperty
    def uploader(self):
        """Callable for uploading data."""
        pass

    def get_parser(self, prog_name):
        parser = super(BaseUploadCommand, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment')
        parser.add_argument('-n',
                            '--name',
                            required=True,
                            help='Name of role.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized role description.')
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.path.curdir,
                            help='Source directory. Defaults to '
                                 'the current directory.')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        params = {"owner_type": model,
                  "owner_id": model_id,
                  "role_name": parsed_args.name}

        file_path = self.get_file_path(parsed_args.directory,
                                       model,
                                       model_id,
                                       parsed_args.name,
                                       parsed_args.format)

        try:
            with open(file_path, 'r') as stream:
                data = data_utils.safe_load(parsed_args.format, stream)
            self.uploader(data, **params)
        except (OSError, IOError):
            msg = "Could not read description for role '{}' at {}".format(
                parsed_args.name, file_path)
            raise error.InvalidFileException(msg)

        msg = ("Description of role '{role}' for {owner} with id {id} was "
               "{action}d from {file_path}\n".format(role=parsed_args.name,
                                                     owner=model,
                                                     id=model_id,
                                                     action=self.action,
                                                     file_path=file_path))
        self.app.stdout.write(msg)


class RoleList(RoleMixIn, base.BaseListCommand):
    """Show list of all available roles for release or cluster."""

    columns = ("name",
               "group",
               "conflicts",
               "description")

    @property
    def default_sorting_by(self):
        return ['name']

    def get_parser(self, prog_name):
        parser = super(RoleList, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment'
                           )
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        data = self.client.get_all(model, model_id)

        data = data_utils.get_display_data_multi(self.columns, data)
        data = self._sort_data(parsed_args, data)
        return self.columns, data


class RoleDownload(RoleMixIn, base.BaseCommand):
    """Download full role description to file."""

    def get_parser(self, prog_name):
        parser = super(RoleDownload, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment'
                           )
        parser.add_argument('-n',
                            '--name',
                            required=True,
                            help='Name of role.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized role description.')
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.path.curdir,
                            help='Destination directory. Defaults to '
                                 'the current directory.')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        file_path = self.get_file_path(parsed_args.directory,
                                       model,
                                       model_id,
                                       parsed_args.name,
                                       parsed_args.format)
        data = self.client.get_one(model,
                                   model_id,
                                   parsed_args.name)

        try:
            fileutils.ensure_tree(os.path.dirname(file_path))
            fileutils.delete_if_exists(file_path)

            with open(file_path, 'w') as stream:
                data_utils.safe_dump(parsed_args.format, stream, data)
        except (OSError, IOError):
            msg = ("Could not store description data "
                   "for role {} at {}".format(parsed_args.name, file_path))
            raise error.InvalidFileException(msg)

        msg = ("Description data of role '{}' within {} id {} "
               "was stored in {}\n".format(parsed_args.name,
                                           model,
                                           model_id,
                                           file_path))
        self.app.stdout.write(msg)


class RoleUpdate(BaseUploadCommand):
    """Update a role from file description."""

    action = "update"

    @property
    def uploader(self):
        return self.client.update


class RoleCreate(BaseUploadCommand):
    """Create a role from file description"""

    action = "create"

    @property
    def uploader(self):
        return self.client.create


class RoleDelete(RoleMixIn, base.BaseCommand):
    """Delete a role from release or cluster"""

    def get_parser(self, prog_name):
        parser = super(RoleDelete, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment'
                           )
        parser.add_argument('-n',
                            '--name',
                            required=True,
                            help='Name of role.')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        self.client.delete(model,
                           model_id,
                           parsed_args.name)

        msg = "Role '{}' was deleted from {} with id {}\n".format(
            parsed_args.name, model, model_id)
        self.app.stdout.write(msg)
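`RoleMixIn.get_file_path` above derives the on-disk location of a role description from the owner type, owner id, role name, and serialization format, giving a `<directory>/<owner>_<id>/<role>.<format>` layout. A quick standalone sketch (the argument values are made up for illustration):

```python
import os


def get_file_path(directory, owner_type, owner_id, role_name, file_format):
    # Same layout as RoleMixIn.get_file_path:
    # <abs(directory)>/<owner_type>_<owner_id>/<role_name>.<file_format>
    return os.path.join(os.path.abspath(directory),
                        '{owner}_{id}'.format(owner=owner_type, id=owner_id),
                        '{}.{}'.format(role_name, file_format))


path = get_file_path('/tmp/roles', 'releases', 2, 'controller', 'yaml')
```

Because both `BaseUploadCommand` and `RoleDownload` call the same helper, a role downloaded with `-d /tmp/roles -f yaml` can be re-uploaded with identical flags without renaming anything.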
@@ -1,203 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli import serializers
from fuelclient.commands import base
from fuelclient.common import data_utils


class SequenceMixIn(object):
    entity_name = 'sequence'


class SequenceCreate(SequenceMixIn, base.show.ShowOne, base.BaseCommand):
    """Create a new deployment sequence."""

    columns = ("id", "release_id", "name")

    def get_parser(self, prog_name):
        parser = super(SequenceCreate, self).get_parser(prog_name)

        parser.add_argument(
            "-r", "--release",
            type=int,
            required=True,
            help="Release object id, sequence will be linked to."
        )
        parser.add_argument(
            '-n', '--name',
            required=True,
            help='The unique name for sequence.'
        )
        parser.add_argument(
            '-t', '--graph-type',
            dest='graph_types',
            nargs='+',
            required=True,
            help='Graph types, which will be included to sequence.\n'
                 'Note: Order is important.'
        )
        return parser

    def take_action(self, args):
        new_sequence = self.client.create(
            args.release, args.name, args.graph_types
        )
        self.app.stdout.write("Sequence was successfully created:\n")
        data = data_utils.get_display_data_single(self.columns, new_sequence)

        return self.columns, data


class SequenceUpload(SequenceMixIn, base.show.ShowOne, base.BaseCommand):
    """Upload a new deployment sequence."""

    columns = ("id", "release_id", "name")

    def get_parser(self, prog_name):
        parser = super(SequenceUpload, self).get_parser(prog_name)

        parser.add_argument(
            "-r", "--release",
            type=int,
            required=True,
            help="Release object id, sequence will be linked to."
        )
        parser.add_argument(
            '--file',
            required=True,
            help='YAML file which contains deployment sequence properties.'
        )
        return parser

    def take_action(self, args):
|
|
||||||
serializer = serializers.FileFormatBasedSerializer()
|
|
||||||
new_sequence = self.client.upload(
|
|
||||||
args.release, serializer.read_from_file(args.file)
|
|
||||||
)
|
|
||||||
self.app.stdout.write("Sequence was successfully created:\n")
|
|
||||||
data = data_utils.get_display_data_single(self.columns, new_sequence)
|
|
||||||
return self.columns, data
|
|
||||||
|
|
||||||
|
|
||||||
class SequenceDownload(SequenceMixIn, base.BaseCommand):
|
|
||||||
"""Download deployment sequence data."""
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(SequenceDownload, self).get_parser(prog_name)
|
|
||||||
|
|
||||||
parser.add_argument(
|
|
||||||
"id",
|
|
||||||
type=int,
|
|
||||||
help="Sequence ID."
|
|
||||||
)
|
|
||||||
parser.add_argument(
|
|
||||||
'--file',
|
|
||||||
help='The file path where data will be saved.'
|
|
||||||
)
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, args):
|
|
||||||
data = self.client.download(args.id)
|
|
||||||
if args.file:
|
|
||||||
serializer = serializers.FileFormatBasedSerializer()
|
|
||||||
serializer.write_to_file(args.file, data)
|
|
||||||
else:
|
|
||||||
serializer = serializers.Serializer("yaml")
|
|
||||||
serializer.write_to_file(self.app.stdout, data)
|
|
||||||
|
|
||||||
|
|
||||||
class SequenceUpdate(SequenceMixIn, base.BaseShowCommand):
|
|
||||||
"""Update existing sequence."""
|
|
||||||
|
|
||||||
columns = ("id", "name")
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(SequenceUpdate, self).get_parser(prog_name)
|
|
||||||
parser.add_argument(
|
|
||||||
'-n', '--name',
|
|
||||||
required=False,
|
|
||||||
help='The unique name for sequence.'
|
|
||||||
)
|
|
||||||
parser.add_argument(
|
|
||||||
'-t', '--graph-type',
|
|
||||||
dest='graph_types',
|
|
||||||
nargs='+',
|
|
||||||
required=False,
|
|
||||||
help='Graph types, which will be included to sequence.\n'
|
|
||||||
'Note: Order is important.'
|
|
||||||
)
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def take_action(self, args):
|
|
||||||
sequence = self.client.update(
|
|
||||||
args.id, name=args.name, graph_types=args.graph_types
|
|
||||||
)
|
|
||||||
|
|
||||||
if sequence:
|
|
||||||
self.app.stdout.write("Sequence was successfully updated:\n")
|
|
||||||
data = data_utils.get_display_data_single(self.columns, sequence)
|
|
||||||
return self.columns, data
|
|
||||||
else:
|
|
||||||
self.app.stdout.write("Nothing to update.\n")
|
|
||||||
|
|
||||||
|
|
||||||
class SequenceDelete(SequenceMixIn, base.BaseDeleteCommand):
|
|
||||||
"""Delete existing sequence."""
|
|
||||||
|
|
||||||
|
|
||||||
class SequenceShow(SequenceMixIn, base.BaseShowCommand):
|
|
||||||
"""Display information about sequence."""
|
|
||||||
columns = ("id", "release_id", "name", "graphs")
|
|
||||||
|
|
||||||
|
|
||||||
class SequenceList(SequenceMixIn, base.BaseListCommand):
|
|
||||||
"""Show list of all existing sequences."""
|
|
||||||
columns = ("id", "release_id", "name")
|
|
||||||
filters = {'release': 'release', 'cluster': 'env'}
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(SequenceList, self).get_parser(prog_name)
|
|
||||||
group = parser.add_mutually_exclusive_group(required=True)
|
|
||||||
group.add_argument(
|
|
||||||
'-r', '--release',
|
|
||||||
type=int,
|
|
||||||
help='The Release object ID.'
|
|
||||||
)
|
|
||||||
group.add_argument(
|
|
||||||
'-e', '--env',
|
|
||||||
type=int,
|
|
||||||
help='The environment object id.'
|
|
||||||
)
|
|
||||||
return parser
|
|
||||||
|
|
||||||
|
|
||||||
class SequenceExecute(SequenceMixIn, base.BaseTasksExecuteCommand):
|
|
||||||
"""Executes sequence on specified environment."""
|
|
||||||
|
|
||||||
def get_parser(self, prog_name):
|
|
||||||
parser = super(SequenceExecute, self).get_parser(prog_name)
|
|
||||||
parser.add_argument(
|
|
||||||
'id',
|
|
||||||
type=int,
|
|
||||||
help='Id of the Sequence.'
|
|
||||||
)
|
|
||||||
return parser
|
|
||||||
|
|
||||||
def get_options(self, parsed_args):
|
|
||||||
return {
|
|
||||||
'sequence_id': parsed_args.id,
|
|
||||||
}
|
|
|
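Several of the commands in this diff (SequenceList, and the tag and role commands) accept exactly one of `--release` or `--env` by putting both options into a required mutually exclusive argparse group. A minimal stand-alone sketch of that pattern, outside of cliff and fuelclient (the `prog` name here is illustrative):

```python
import argparse

# Stand-alone sketch of the required mutually-exclusive --release/--env
# pattern used by SequenceList, TagList, etc.
parser = argparse.ArgumentParser(prog='sequence-list')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-r', '--release', type=int, help='The Release object ID.')
group.add_argument('-e', '--env', type=int, help='The environment object ID.')

# Exactly one of the two options must be given; the other stays None.
args = parser.parse_args(['--env', '42'])
print(args.env)      # 42
print(args.release)  # None
```

Passing both options, or neither, makes `parse_args` exit with a usage error, which is why helpers like `parse_model` can safely pick whichever attribute is set.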
@@ -1,132 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Vitalii Kulanov
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import argparse
import os

from oslo_utils import fileutils

from fuelclient.cli import error
from fuelclient.commands import base
from fuelclient.common import data_utils
from fuelclient import utils


class SnapshotMixIn(object):

    entity_name = 'snapshot'
    supported_file_formats = ('json', 'yaml')

    @staticmethod
    def config_file(file_path):
        if not utils.file_exists(file_path):
            raise argparse.ArgumentTypeError(
                'File "{0}" does not exist'.format(file_path))
        return file_path

    @staticmethod
    def get_config_path(directory, file_format):
        return os.path.join(os.path.abspath(directory),
                            'snapshot_conf.{}'.format(file_format))


class SnapshotGenerate(SnapshotMixIn, base.BaseCommand):
    """Generate diagnostic snapshot."""

    def get_parser(self, prog_name):
        parser = super(SnapshotGenerate, self).get_parser(prog_name)
        parser.add_argument('-c',
                            '--config',
                            required=False,
                            type=self.config_file,
                            help='Configuration file.')
        return parser

    def take_action(self, parsed_args):
        file_path = parsed_args.config

        config = dict()
        if file_path:
            file_format = os.path.splitext(file_path)[1].lstrip('.')
            try:
                with open(file_path, 'r') as stream:
                    config = data_utils.safe_load(file_format, stream)
            except (OSError, IOError):
                msg = 'Could not read configuration at {}.'
                raise error.InvalidFileException(msg.format(file_path))

        result = self.client.create_snapshot(config)

        msg = "Diagnostic snapshot generation task with id {id} was started\n"
        self.app.stdout.write(msg.format(id=result.id))


class SnapshotConfigGetDefault(SnapshotMixIn, base.BaseCommand):
    """Download default config to generate custom diagnostic snapshot."""

    def get_parser(self, prog_name):
        parser = super(SnapshotConfigGetDefault, self).get_parser(prog_name)
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized diagnostic snapshot '
                                 'configuration data.')
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.path.curdir,
                            help='Destination directory. Defaults to '
                                 'the current directory.')
        return parser

    def take_action(self, parsed_args):
        file_path = self.get_config_path(parsed_args.directory,
                                         parsed_args.format)
        config = self.client.get_default_config()

        try:
            fileutils.ensure_tree(os.path.dirname(file_path))
            fileutils.delete_if_exists(file_path)

            with open(file_path, 'w') as stream:
                data_utils.safe_dump(parsed_args.format, stream, config)
        except (OSError, IOError):
            msg = 'Could not store configuration at {}.'
            raise error.InvalidFileException(msg.format(file_path))

        msg = "Configuration was stored in {path}\n"
        self.app.stdout.write(msg.format(path=file_path))


class SnapshotGetLink(SnapshotMixIn, base.BaseShowCommand):
    """Show link to download diagnostic snapshot."""

    columns = ('status',
               'link')

    def take_action(self, parsed_args):
        data = self.client.get_by_id(parsed_args.id)
        if data['name'] != 'dump':
            msg = "Task with id {0} is not a snapshot generation task"
            raise error.ActionException(msg.format(data['id']))
        if data['status'] != 'ready':
            data['link'] = None
        else:
            data['link'] = self.client.connection.root + data['message']

        data = data_utils.get_display_data_single(self.columns, data)
        return self.columns, data
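The `get_config_path` helper in `SnapshotMixIn` just joins the destination directory with a fixed `snapshot_conf.<format>` file name, which is why `SnapshotConfigGetDefault` only needs `--directory` and `--format` to decide where the config lands. The same logic as a free function, for illustration:

```python
import os

def get_config_path(directory, file_format):
    # Mirror of SnapshotMixIn.get_config_path: absolute destination
    # directory plus a fixed 'snapshot_conf.<format>' file name.
    return os.path.join(os.path.abspath(directory),
                        'snapshot_conf.{}'.format(file_format))

print(os.path.basename(get_config_path('.', 'yaml')))  # snapshot_conf.yaml
```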
@@ -1,265 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import os

from oslo_utils import fileutils
import six

from fuelclient.cli import error
from fuelclient.commands import base
from fuelclient.common import data_utils


class TagMixIn(object):
    entity_name = 'tag'
    supported_file_formats = ('json', 'yaml')
    fields_mapper = (
        ('env', 'clusters'),
        ('release', 'releases')
    )

    def parse_model(self, args):
        for param, tag_class in self.fields_mapper:
            model_id = getattr(args, param)
            if model_id:
                return tag_class, model_id

    @staticmethod
    def get_file_path(directory, owner_type, owner_id, tag_name, file_format):
        return os.path.join(os.path.abspath(directory),
                            '{owner}_{id}'.format(owner=owner_type,
                                                  id=owner_id),
                            '{}.{}'.format(tag_name, file_format))


@six.add_metaclass(abc.ABCMeta)
class BaseUploadCommand(TagMixIn, base.BaseCommand):
    """Base class for uploading metadata of a tag."""

    @abc.abstractproperty
    def action(self):
        """String with the name of the action."""
        pass

    @abc.abstractproperty
    def uploader(self):
        """Callable for uploading data."""
        pass

    def get_parser(self, prog_name):
        parser = super(BaseUploadCommand, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment')
        parser.add_argument('-n',
                            '--name',
                            required=True,
                            help='Name of tag.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized tag description.')
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.path.curdir,
                            help='Source directory. Defaults to '
                                 'the current directory.')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        params = {"owner_type": model,
                  "owner_id": model_id,
                  "tag_name": parsed_args.name}

        file_path = self.get_file_path(parsed_args.directory,
                                       model,
                                       model_id,
                                       parsed_args.name,
                                       parsed_args.format)

        try:
            with open(file_path, 'r') as stream:
                data = data_utils.safe_load(parsed_args.format, stream)
                self.uploader(data, **params)
        except (OSError, IOError):
            msg = "Could not read description for tag '{}' at {}".format(
                parsed_args.name, file_path)
            raise error.InvalidFileException(msg)

        msg = ("Description of tag '{tag}' for {owner} with id {id} was "
               "{action}d from {file_path}\n".format(tag=parsed_args.name,
                                                     owner=model,
                                                     id=model_id,
                                                     action=self.action,
                                                     file_path=file_path))
        self.app.stdout.write(msg)


class TagList(TagMixIn, base.BaseListCommand):
    """Show list of all available tags for release or cluster."""

    columns = ("name",
               "group",
               "conflicts",
               "description")

    @property
    def default_sorting_by(self):
        return ['name']

    def get_parser(self, prog_name):
        parser = super(TagList, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        data = self.client.get_all(model, model_id)

        data = data_utils.get_display_data_multi(self.columns, data)
        data = self._sort_data(parsed_args, data)
        return self.columns, data


class TagDownload(TagMixIn, base.BaseCommand):
    """Download full tag description to file."""

    def get_parser(self, prog_name):
        parser = super(TagDownload, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment')
        parser.add_argument('-n',
                            '--name',
                            required=True,
                            help='Name of tag.')
        parser.add_argument('-f',
                            '--format',
                            required=True,
                            choices=self.supported_file_formats,
                            help='Format of serialized tag description.')
        parser.add_argument('-d',
                            '--directory',
                            required=False,
                            default=os.path.curdir,
                            help='Destination directory. Defaults to '
                                 'the current directory.')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        file_path = self.get_file_path(parsed_args.directory,
                                       model,
                                       model_id,
                                       parsed_args.name,
                                       parsed_args.format)
        data = self.client.get_tag(model,
                                   model_id,
                                   parsed_args.name)

        try:
            fileutils.ensure_tree(os.path.dirname(file_path))
            fileutils.delete_if_exists(file_path)

            with open(file_path, 'w') as stream:
                data_utils.safe_dump(parsed_args.format, stream, data)
        except (OSError, IOError):
            msg = ("Could not store description data "
                   "for tag {} at {}".format(parsed_args.name, file_path))
            raise error.InvalidFileException(msg)

        msg = ("Description data of tag '{}' within {} id {} "
               "was stored in {}\n".format(parsed_args.name,
                                           model,
                                           model_id,
                                           file_path))
        self.app.stdout.write(msg)


class TagUpdate(BaseUploadCommand):
    """Update a tag from file description."""

    action = "update"

    @property
    def uploader(self):
        return self.client.update


class TagCreate(BaseUploadCommand):
    """Create a tag from file description."""

    action = "create"

    @property
    def uploader(self):
        return self.client.create


class TagDelete(TagMixIn, base.BaseCommand):
    """Delete a tag from release or cluster."""

    def get_parser(self, prog_name):
        parser = super(TagDelete, self).get_parser(prog_name)
        group = parser.add_mutually_exclusive_group(required=True)
        group.add_argument('-r',
                           '--release',
                           type=int,
                           help='Id of the release')
        group.add_argument('-e',
                           '--env',
                           type=int,
                           help='Id of the environment')
        parser.add_argument('-n',
                            '--name',
                            required=True,
                            help='Name of tag.')
        return parser

    def take_action(self, parsed_args):
        model, model_id = self.parse_model(parsed_args)
        self.client.delete(model,
                           model_id,
                           parsed_args.name)

        msg = "Tag '{}' was deleted from {} with id {}\n".format(
            parsed_args.name, model, model_id)
        self.app.stdout.write(msg)
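`TagMixIn.get_file_path` fixes the on-disk layout that both the upload and download commands share: `<directory>/<owner>_<id>/<tag>.<format>`, where the owner is `clusters` or `releases` as resolved by `parse_model`. A free-standing sketch of the same layout (the sample values below are illustrative):

```python
import os

def get_file_path(directory, owner_type, owner_id, tag_name, file_format):
    # Mirror of TagMixIn.get_file_path:
    #   <abs directory>/<owner_type>_<owner_id>/<tag_name>.<file_format>
    return os.path.join(os.path.abspath(directory),
                        '{owner}_{id}'.format(owner=owner_type, id=owner_id),
                        '{}.{}'.format(tag_name, file_format))

path = get_file_path('.', 'releases', 2, 'controller', 'yaml')
print(path.endswith(os.path.join('releases_2', 'controller.yaml')))  # True
```

So `TagDownload -r 2 -n controller -f yaml` writes to `releases_2/controller.yaml` under the chosen directory, and `TagUpdate`/`TagCreate` read the description back from the same place.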
@@ -1,309 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

from fuelclient.cli.serializers import Serializer
from fuelclient.commands import base
from fuelclient.common import data_utils


class TaskMixIn(object):
    entity_name = 'task'

    @staticmethod
    def add_file_arg(parser):
        parser.add_argument(
            '-f',
            '--file',
            required=False,
            type=str,
            help='Output file in YAML format.'
        )

    @classmethod
    def write_info_to_file(cls, info_type, data, transaction_id,
                           serializer=None, file_path=None):
        """Write additional info to the given path.

        :param info_type: deployment_info | cluster_settings |
                          network_configuration
        :type info_type: str
        :param data: data
        :type data: list of dict
        :param serializer: serializer
        :param transaction_id: Transaction ID
        :type transaction_id: str or int
        :param file_path: path
        :type file_path: str
        :return: path to the resulting file
        :rtype: str
        """
        serializer = serializer or Serializer()
        if file_path:
            return serializer.write_to_full_path(
                file_path,
                data
            )
        else:
            return serializer.write_to_path(
                cls.get_default_info_path(info_type, transaction_id),
                data
            )

    @staticmethod
    def get_default_info_path(info_type, transaction_id):
        """Generate the default path for task info, e.g. deployment info.

        :param info_type: deployment_info | cluster_settings |
                          network_configuration
        :type info_type: str
        :param transaction_id: Transaction ID
        :type transaction_id: str or int
        :return: path
        :rtype: str
        """
        return os.path.join(
            os.path.abspath(os.curdir),
            "{info_type}_{transaction_id}".format(
                info_type=info_type,
                transaction_id=transaction_id)
        )

    def download_info_to_file(self, transaction_id, info_type, file_path):
        """Download task info, e.g. deployment info, and save it to a path.

        :param transaction_id: Transaction ID
        :type transaction_id: str or int
        :param info_type: deployment_info | cluster_settings |
                          network_configuration
        :type info_type: str
        :param file_path: path
        :type file_path: str
        :return: path
        :rtype: str
        """
        data = self.client.download(transaction_id=transaction_id)
        return self.write_info_to_file(
            info_type=info_type,
            data=data,
            transaction_id=transaction_id,
            serializer=Serializer(),
            file_path=file_path)


class TaskInfoFileMixIn(TaskMixIn):

    def get_parser(self, prog_name):
        parser = super(TaskInfoFileMixIn, self).get_parser(prog_name)
        parser.add_argument('id', type=int, help='Id of the Task.')
        self.add_file_arg(parser)
        return parser

    def download_info(self, parsed_args):
        data_file_path = self.download_info_to_file(
            transaction_id=parsed_args.id,
            info_type=self.info_type,
            file_path=parsed_args.file)

        return data_file_path


class TaskList(TaskMixIn, base.BaseListCommand):
    """Show list of all available tasks."""

    columns = ('id',
               'status',
               'name',
               'graph_type',
               'cluster',
               'result',
               'dry_run',
               'progress')
    filters = {'cluster_id': 'env',
               'statuses': 'statuses',
               'transaction_types': 'names'}

    def get_parser(self, prog_name):
        parser = super(TaskList, self).get_parser(prog_name)

        parser.add_argument(
            '-e',
            '--env',
            type=int,
            help='Show list of tasks that belong to specified environment')

        parser.add_argument(
            '-t',
            '--statuses',
            type=str,
            choices=['pending', 'error', 'ready', 'running'],
            nargs='+',
            help='Show list of tasks with specified statuses')

        parser.add_argument(
            '-n',
            '--names',
            type=str,
            nargs='+',
            help='Show list of tasks with specified names')

        return parser


class TaskShow(TaskMixIn, base.BaseShowCommand):
    """Show info about task with given id."""

    columns = ('id',
               'uuid',
               'status',
               'name',
               'graph_type',
               'cluster',
               'result',
               'dry_run',
               'progress',
               'message')


class TaskDelete(TaskMixIn, base.BaseDeleteCommand):
    """Delete task with given id."""

    def get_parser(self, prog_name):
        parser = super(TaskDelete, self).get_parser(prog_name)

        parser.add_argument('-f',
                            '--force',
                            action='store_true',
                            default=False,
                            help='Force deletion of a task without '
                                 'considering its state.')

        return parser

    def take_action(self, parsed_args):
        self.client.delete_by_id(parsed_args.id, parsed_args.force)

        msg = 'Task with id {ent_id} was deleted\n'
        self.app.stdout.write(msg.format(ent_id=parsed_args.id))


class TaskHistoryShow(TaskMixIn, base.BaseListCommand):
    """Show deployment history about task with given ID."""

    entity_name = 'deployment_history'

    columns = ()

    def get_parser(self, prog_name):
        parser = super(TaskHistoryShow, self).get_parser(prog_name)

        parser.add_argument('id', type=int, help='Id of the Task')

        parser.add_argument(
            '-n',
            '--nodes',
            type=str,
            nargs='+',
            help='Show deployment history for specific nodes')

        parser.add_argument(
            '-t',
            '--statuses',
            type=str,
            choices=['pending', 'error', 'ready', 'running', 'skipped'],
            nargs='+',
            help='Show deployment history for specific statuses')

        parser.add_argument(
            '-d',
            '--tasks-names',
            type=str,
            nargs='+',
            help='Show deployment history for specific deployment tasks names')

        parser.add_argument(
            '-p',
            '--show-parameters',
            action='store_true',
            default=False,
            help='Show deployment tasks parameters')
        parser.add_argument(
            '--include-summary',
            action='store_true',
            default=False,
            help='Show deployment tasks summary')
        return parser

    def take_action(self, parsed_args):
        show_parameters = parsed_args.show_parameters
        include_summary = parsed_args.include_summary
        data = self.client.get_all(
            transaction_id=parsed_args.id,
            nodes=parsed_args.nodes,
            statuses=parsed_args.statuses,
            tasks_names=parsed_args.tasks_names,
            include_summary=include_summary,
            show_parameters=show_parameters
        )
        if show_parameters:
            self.columns = self.client.tasks_records_keys
        else:
            self.columns = self.client.history_records_keys
        if include_summary:
            self.columns += ('summary',)
        data = data_utils.get_display_data_multi(self.columns, data)
        return self.columns, data


class TaskNetworkConfigurationDownload(TaskInfoFileMixIn, base.BaseCommand):
    """Save task network configuration to a file."""

    entity_name = 'network-configuration'
    info_type = 'network_configuration'

    def take_action(self, parsed_args):
        self.app.stdout.write(
            "Network configuration for task with id={0}"
            " downloaded to {1}\n".format(parsed_args.id,
                                          self.download_info(parsed_args))
        )


class TaskDeploymentInfoDownload(TaskInfoFileMixIn, base.BaseCommand):
    """Save task deployment info to a file."""

    entity_name = 'deployment-info'
    info_type = 'deployment_info'

    def take_action(self, parsed_args):
        self.app.stdout.write(
            "Deployment info for task with id={0}"
            " downloaded to {1}\n".format(parsed_args.id,
                                          self.download_info(parsed_args))
        )


class TaskClusterSettingsDownload(TaskInfoFileMixIn, base.BaseCommand):
    """Save task settings to a file."""

    entity_name = 'cluster-settings'
    info_type = 'cluster_settings'

    def take_action(self, parsed_args):
        self.app.stdout.write(
            "Cluster settings for task with id={0}"
            " downloaded to {1}\n".format(parsed_args.id,
                                          self.download_info(parsed_args))
        )
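When no `--file` is given, `TaskMixIn.write_info_to_file` falls back to `get_default_info_path`, which places the download in the current working directory under the name `<info_type>_<transaction_id>`. The naming scheme as a free function, for illustration:

```python
import os

def get_default_info_path(info_type, transaction_id):
    # Mirror of TaskMixIn.get_default_info_path: the download lands in the
    # current working directory as '<info_type>_<transaction_id>'.
    return os.path.join(
        os.path.abspath(os.curdir),
        "{info_type}_{transaction_id}".format(
            info_type=info_type,
            transaction_id=transaction_id))

print(os.path.basename(get_default_info_path('deployment_info', 42)))
# deployment_info_42
```

So `TaskDeploymentInfoDownload` for task 42, run without `--file`, writes to `./deployment_info_42` (the serializer may append a format extension).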
@ -1,182 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.commands import base


class VipMixIn(object):
    entity_name = 'vip'

    @staticmethod
    def add_env_id_arg(parser):
        parser.add_argument(
            '-e',
            '--env',
            type=int,
            required=True,
            help='Environment identifier'
        )

    @staticmethod
    def add_network_id_arg(parser):
        parser.add_argument(
            "-n",
            "--network",
            type=int,
            default=None,
            required=False,
            help="Network identifier"
        )


class VipDownload(VipMixIn, base.BaseCommand):
    """Download VIPs configuration."""

    @staticmethod
    def add_ip_address_id_arg(parser):
        parser.add_argument(
            "-a",
            "--ip-address-id",
            type=int,
            default=None,
            required=False,
            help="IP address entity identifier"
        )

    @staticmethod
    def add_network_role_arg(parser):
        parser.add_argument(
            "-r",
            "--network-role",
            type=str,
            default=None,
            required=False,
            help="Network role string"
        )

    @staticmethod
    def add_file_arg(parser):
        parser.add_argument(
            '-f',
            '--file',
            type=str,
            required=False,
            default=None,
            help='YAML file that contains openstack configuration.'
        )

    def get_parser(self, prog_name):
        parser = super(VipDownload, self).get_parser(prog_name)
        self.add_env_id_arg(parser)
        self.add_ip_address_id_arg(parser)
        self.add_file_arg(parser)
        self.add_network_id_arg(parser)
        self.add_network_role_arg(parser)
        return parser

    def take_action(self, args):
        vips_data_file_path = self.client.download(
            env_id=args.env,
            ip_addr_id=args.ip_address_id,
            network_id=args.network,
            network_role=args.network_role,
            file_path=args.file
        )

        self.app.stdout.write(
            "VIP configuration for environment with id={0}"
            " downloaded to {1}".format(args.env, vips_data_file_path)
        )


class VipUpload(VipMixIn, base.BaseCommand):
    """Upload new VIPs configuration from file."""

    @staticmethod
    def add_file_arg(parser):
        parser.add_argument(
            '-f',
            '--file',
            required=True,
            type=str,
            help='YAML file that contains openstack configuration.'
        )

    def get_parser(self, prog_name):
        parser = super(VipUpload, self).get_parser(prog_name)
        self.add_env_id_arg(parser)
        self.add_file_arg(parser)
        return parser

    def take_action(self, args):
        self.client.upload(env_id=args.env, file_path=args.file)
        self.app.stdout.write("VIP configuration uploaded.")


class VipCreate(VipMixIn, base.BaseCommand):
    """Create VIP"""

    @staticmethod
    def add_vip_name_arg(parser):
        parser.add_argument(
            '-N',
            '--name',
            required=True,
            type=str,
            help="VIP name"
        )

    @staticmethod
    def add_ip_addr_arg(parser):
        parser.add_argument(
            '-a',
            '--address',
            required=True,
            type=str,
            help="IP-address for the VIP"
        )

    @staticmethod
    def add_vip_namespace_arg(parser):
        parser.add_argument(
            '--namespace',
            required=False,
            type=str,
            help="VIP namespace"
        )

    def get_parser(self, prog_name):
        parser = super(VipCreate, self).get_parser(prog_name)
        self.add_env_id_arg(parser)
        self.add_network_id_arg(parser)
        self.add_vip_name_arg(parser)
        self.add_ip_addr_arg(parser)
        self.add_vip_namespace_arg(parser)
        return parser

    def take_action(self, args):
        vip_kwargs = {
            "env_id": args.env,
            "ip_addr": args.address,
            "network": args.network,
            "vip_name": args.name,
        }
        if args.namespace is not None:
            vip_kwargs['vip_namespace'] = args.namespace

        self.client.create(**vip_kwargs)

        self.app.stdout.write("VIP has been created.")
@ -1,81 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import os
import yaml

from fuelclient import utils


def get_display_data_single(fields, data, missing_field_value=None):
    """Performs slicing of data by a set of given fields

    :param fields: Iterable containing names of fields to be retrieved
                   from data
    :param data: Collection of JSON objects representing some
                 external entities
    :param missing_field_value: the value to be used for all missing fields

    :return: list containing the collection of values of the
             supplied attributes.

    """
    return [data.get(field, missing_field_value) for field in fields]


def get_display_data_multi(fields, data):
    """Performs slicing of data by a set of given fields for multiple objects."""

    return [get_display_data_single(fields, elem) for elem in data]


def safe_load(data_format, stream):
    loaders = {'json': utils.safe_deserialize(json.load),
               'yaml': utils.safe_deserialize(yaml.safe_load)}

    if data_format not in loaders:
        raise ValueError('Unsupported data format.')

    loader = loaders[data_format]
    return loader(stream)


def safe_dump(data_format, stream, data):
    # The dumpers are assigned to individual variables to keep
    # the PEP8 check happy.
    yaml_dumper = lambda data, stream: yaml.safe_dump(data,
                                                      stream,
                                                      default_flow_style=False)
    json_dumper = lambda data, stream: json.dump(data, stream, indent=4)
    dumpers = {'json': json_dumper,
               'yaml': yaml_dumper}

    if data_format not in dumpers:
        raise ValueError('Unsupported data format.')

    dumper = dumpers[data_format]
    dumper(data, stream)


def read_from_file(file_path):
    data_format = os.path.splitext(file_path)[1].lstrip('.')
    with open(file_path, 'r') as stream:
        return safe_load(data_format, stream)


def write_to_file(file_path, data):
    data_format = os.path.splitext(file_path)[1].lstrip('.')
    with open(file_path, 'w') as stream:
        safe_dump(data_format, stream, data)
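The `get_display_data_single`/`get_display_data_multi` helpers removed above slice JSON-like dicts by a field list, substituting a default for anything missing. A minimal standalone sketch of the same behavior (re-implemented here rather than imported from the deleted module):

```python
def get_display_data_single(fields, data, missing_field_value=None):
    # Pick the listed fields out of one dict, substituting a default
    # for any field the dict does not contain.
    return [data.get(field, missing_field_value) for field in fields]


def get_display_data_multi(fields, data):
    # Apply the single-object slice to every element of a collection.
    return [get_display_data_single(fields, elem) for elem in data]


rows = get_display_data_multi(
    ('id', 'name'),
    [{'id': 1, 'name': 'node-1'}, {'id': 2}],
)
print(rows)  # [[1, 'node-1'], [2, None]]
```

This is what lets CLI table output stay rectangular even when some API objects lack optional attributes.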
@ -1,34 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from collections import namedtuple


def Enum(*values, **kwargs):
    names = kwargs.get('names')
    if names:
        return namedtuple('Enum', names)(*values)
    return namedtuple('Enum', values)(*values)


SERIALIZATION_FORMAT_FLAG = 'serialization_format'

TASK_STATUSES = Enum(
    'error',
    'pending',
    'ready',
    'running'
)
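The removed `Enum` helper builds an immutable namedtuple instance whose attribute names default to the values themselves (unless explicit `names` are supplied). A self-contained sketch showing how it behaves:

```python
from collections import namedtuple


def Enum(*values, **kwargs):
    # Attribute names default to the values themselves; an optional
    # 'names' kwarg supplies different attribute names for the values.
    names = kwargs.get('names')
    if names:
        return namedtuple('Enum', names)(*values)
    return namedtuple('Enum', values)(*values)


TASK_STATUSES = Enum('error', 'pending', 'ready', 'running')
print(TASK_STATUSES.ready)  # prints "ready"
```

Because the result is a namedtuple instance, the members are read-only and the whole enum is iterable, which is why the codebase can use it both as a constant namespace and as a value list.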
@ -1,18 +0,0 @@
# Connection settings
SERVER_ADDRESS: "127.0.0.1"
SERVER_PORT: "8000"
OS_USERNAME:
OS_PASSWORD:
# There's a need to provide a default value for
# the tenant name until fuel-library is updated.
# After that, it will be removed.
OS_TENANT_NAME: "admin"
HTTP_PROXY: null
HTTP_TIMEOUT: 10

# Performance tests settings
PERFORMANCE_PROFILING_TESTS: 0
PERF_TESTS_PATHS:
  perf_tests_base: "/tmp/fuelclient_performance_tests/tests/"
  last_performance_test: "/tmp/fuelclient_performance_tests/tests/last/"
  perf_tests_results: "/tmp/fuelclient_performance_tests/results/"
@ -1,198 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2013-2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import pkg_resources
import shutil
import sys

import six
import yaml

from fuelclient.cli import error


_SETTINGS = None

# Format: old parameter: new parameter or None
DEPRECATION_TABLE = {'LISTEN_PORT': 'SERVER_PORT',
                     'KEYSTONE_USER': 'OS_USERNAME',
                     'KEYSTONE_PASS': 'OS_PASSWORD'}

# Format: parameter: fallback parameter
FALLBACK_TABLE = {DEPRECATION_TABLE[p]: p for p in DEPRECATION_TABLE}


class FuelClientSettings(object):
    """Represents a model of Fuel Client's settings

    Default settings are saved to $XDG_CONFIG_HOME/fuel/fuel_client.yaml on
    the first run. If $XDG_CONFIG_HOME is not set, the ~/.config/ directory
    is used by default.

    Custom settings may be stored in any YAML-formatted file, the path to
    which should be supplied via the $FUELCLIENT_CUSTOM_SETTINGS environment
    variable. Custom settings override default ones.

    Top level values may also be set as environment variables, e.g.
    export SERVER_PORT=8080. These values have the highest priority.

    """
    def __init__(self):
        settings_files = []

        user_conf_dir = os.getenv('XDG_CONFIG_HOME',
                                  os.path.expanduser('~/.config/'))

        # Look up the default file distributed with the source code
        default_settings = pkg_resources.resource_filename('fuelclient',
                                                           'fuel_client.yaml')

        self.user_settings = os.path.join(user_conf_dir, 'fuel',
                                          'fuel_client.yaml')
        custom_settings = os.getenv('FUELCLIENT_CUSTOM_SETTINGS')

        if not os.path.exists(self.user_settings) and not custom_settings:
            self.populate_default_settings(default_settings,
                                           self.user_settings)
            six.print_('Settings for Fuel Client have been saved to {0}.\n'
                       'Consider changing default values to the ones which '
                       'are appropriate for you.'.format(self.user_settings))

        self._add_file_if_exists(default_settings, settings_files)
        self._add_file_if_exists(self.user_settings, settings_files)

        # Add a custom settings file specified by the user
        self._add_file_if_exists(custom_settings, settings_files)

        self.config = {}
        for sf in settings_files:
            try:
                self._update_from_file(sf)
            except Exception as e:
                msg = ('Error while reading config file '
                       '%(file)s: %(err)s') % {'file': sf, 'err': str(e)}

                raise error.SettingsException(msg)

        self._update_from_env()
        self._check_deprecated()

    def _add_file_if_exists(self, path_to_file, file_list):
        if path_to_file and os.access(path_to_file, os.R_OK):
            file_list.append(path_to_file)

    def _update_from_file(self, path):
        with open(path, 'r') as custom_config:
            self.config.update(
                yaml.load(custom_config.read())
            )

    def _update_from_env(self):
        for k in self.config:
            if k in os.environ:
                self.config[k] = os.environ[k]

    def _print_deprecation_warning(self, old_option, new_option=None):
        """Print a deprecation warning for an option."""

        deprecation_tpl = ('DEPRECATION WARNING: {} parameter was '
                           'deprecated and will not be supported in the next '
                           'version of python-fuelclient.')
        replace_tpl = ' Please replace this parameter with {}'

        deprecation = deprecation_tpl.format(old_option)
        replace = '' if new_option is None else replace_tpl.format(new_option)

        six.print_(deprecation, end='', file=sys.stderr)
        six.print_(replace, file=sys.stderr)

    def _check_deprecated(self):
        """Looks for deprecated options in the user's configuration."""

        dep_opts = [opt for opt in self.config if opt in DEPRECATION_TABLE]

        for opt in dep_opts:

            new_opt = DEPRECATION_TABLE.get(opt)

            # Clean up the new option if it was not set by the user.
            # Produce a warning if both old and new options are set.
            if self.config.get(new_opt) is None:
                self.config.pop(new_opt, None)
            else:
                six.print_('WARNING: configuration contains both {old} and '
                           '{new} options set. Since {old} was deprecated, '
                           'only the value of {new} '
                           'will be used.'.format(old=opt, new=new_opt),
                           file=sys.stderr
                           )

            self._print_deprecation_warning(opt, new_opt)

    def populate_default_settings(self, source, destination):
        """Copies the default configuration file to a user's home directory."""

        try:
            dst_dir = os.path.dirname(destination)

            if not os.path.exists(dst_dir):
                os.makedirs(dst_dir, 0o700)

            shutil.copy(source, destination)
            os.chmod(destination, 0o600)
        except (IOError, OSError):
            msg = ('Could not save settings to {0}. Please make sure the '
                   'directory is writable')
            raise error.SettingsException(msg.format(dst_dir))

    def update_from_command_line_options(self, options):
        """Update parameters from valid command line options."""

        for param in self.config:
            opt_name = param.lower()

            value = getattr(options, opt_name, None)
            if value is not None:
                self.config[param] = value

    def dump(self):
        return yaml.dump(self.config)

    def __getattr__(self, name):
        if name in self.config:
            return self.config[name]

        if name in FALLBACK_TABLE:
            return self.config[FALLBACK_TABLE[name]]

        raise error.SettingsException('Value for {0} option is not '
                                      'configured'.format(name))

    def __repr__(self):
        return '<settings object>'


def _init_settings():
    global _SETTINGS
    _SETTINGS = FuelClientSettings()


def get_settings():
    if _SETTINGS is None:
        _init_settings()

    return _SETTINGS
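The settings class removed above layers configuration sources with later sources winning: packaged defaults, then the user's file, then a custom file, then environment variables. A hedged sketch of just that precedence logic, with hypothetical dicts standing in for the parsed YAML files and for `os.environ`:

```python
def merge_settings(defaults, user, custom, environ):
    # Later sources win: defaults < user file < custom file < env vars,
    # mirroring the precedence described in the class docstring above.
    config = {}
    for source in (defaults, user, custom):
        config.update(source)
    # Environment variables override only keys that already exist,
    # matching the behavior of _update_from_env().
    for key in config:
        if key in environ:
            config[key] = environ[key]
    return config


cfg = merge_settings(
    {'SERVER_PORT': '8000', 'OS_USERNAME': None},   # packaged defaults
    {'OS_USERNAME': 'admin'},                       # user settings file
    {},                                             # no custom file
    {'SERVER_PORT': '8080'},                        # environment
)
print(cfg)  # {'SERVER_PORT': '8080', 'OS_USERNAME': 'admin'}
```

Note the environment pass only touches keys already present in the merged config, so stray environment variables cannot introduce unknown settings.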
@ -1,21 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


def setup_hook(config):
    import pbr
    import pbr.packaging

    # this monkey patch is to avoid appending the git version to the version
    pbr.packaging._get_version_from_git = lambda pre_version: pre_version
@ -1,81 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import sys

from cliff import app
from cliff.commandmanager import CommandManager

from fuelclient import fuelclient_settings
from fuelclient import utils


LOG = logging.getLogger(__name__)


class FuelClient(app.App):
    """Main cliff application class.

    Performs initialization of the command manager and
    configuration of basic engines.

    """

    def build_option_parser(self, description, version, argparse_kwargs=None):
        """Overrides default options for backwards compatibility."""

        p_inst = super(FuelClient, self)
        parser = p_inst.build_option_parser(description=description,
                                            version=version,
                                            argparse_kwargs=argparse_kwargs)

        utils.add_os_cli_parameters(parser)

        return parser

    def configure_logging(self):
        super(FuelClient, self).configure_logging()

        # There is an issue with management url processing by keystone client
        # code in our workflow, so we have to mute the appropriate keystone
        # loggers in order to get rid of unprocessable errors
        logging.getLogger('keystoneclient.httpclient').setLevel(logging.ERROR)

        # Raise the log level for urllib3 to avoid displaying useless
        # messages. The list of logger names is needed for consistency
        # across execution environments, which may have different versions
        # of the requests package (which in turn uses urllib3) installed
        for logger_name in ('requests.packages.urllib3.connectionpool',
                            'urllib3.connectionpool'):
            logging.getLogger(logger_name).setLevel(logging.WARNING)

    def run(self, argv):
        options, _ = self.parser.parse_known_args(argv)

        settings = fuelclient_settings.get_settings()
        settings.update_from_command_line_options(options)

        return super(FuelClient, self).run(argv)


def main(argv=sys.argv[1:]):
    fuelclient_app = FuelClient(
        description='Command line interface and Python API wrapper for Fuel.',
        version='10.0.0',
        command_manager=CommandManager('fuelclient', convert_underscores=True),
        deferred_help=True
    )
    return fuelclient_app.run(argv)
@ -1,35 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""fuelclient.objects sub-module contains classes that mirror
functionality from nailgun objects.
"""

from fuelclient.objects.base import BaseObject
from fuelclient.objects.environment import Environment
from fuelclient.objects.extension import Extension
from fuelclient.objects.health import Health
from fuelclient.objects.node import Node
from fuelclient.objects.node import NodeCollection
from fuelclient.objects.openstack_config import OpenstackConfig
from fuelclient.objects.release import Release
from fuelclient.objects.role import Role
from fuelclient.objects.task import DeployTask
from fuelclient.objects.task import SnapshotTask
from fuelclient.objects.task import Task
from fuelclient.objects.tag import Tag
from fuelclient.objects.fuelversion import FuelVersion
from fuelclient.objects.network_group import NetworkGroup
from fuelclient.objects.plugins import Plugins
from fuelclient.objects.sequence import Sequence
@ -1,68 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli.serializers import Serializer
from fuelclient.client import DefaultAPIClient


class BaseObject(object):
    """BaseObject class - base class for fuelclient.objects object classes

    'class_api_path' - url path to the object handler on the Nailgun server.
    'instance_api_path' - url path template which, formatted with an object
    id, returns only one serialized object.
    'connection' - 'APIClient' class instance from fuelclient.client
    """
    class_api_path = None
    instance_api_path = None
    connection = DefaultAPIClient

    def __init__(self, obj_id, **kwargs):
        self.connection = DefaultAPIClient
        self.serializer = Serializer.from_params(kwargs.get('params'))
        self.id = obj_id
        self._data = None

    @classmethod
    def init_with_data(cls, data):
        instance = cls(data["id"])
        instance._data = data
        return instance

    @classmethod
    def get_by_ids(cls, ids):
        return map(cls, ids)

    def update(self):
        self._data = self.connection.get_request(
            self.instance_api_path.format(self.id))

    def get_fresh_data(self):
        self.update()
        return self.data

    @property
    def data(self):
        if self._data is None:
            return self.get_fresh_data()
        else:
            return self._data

    @classmethod
    def get_all_data(cls, **kwargs):
        return cls.connection.get_request(cls.class_api_path, params=kwargs)

    @classmethod
    def get_all(cls, **kwargs):
        return map(cls.init_with_data, cls.get_all_data(**kwargs))
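`BaseObject.data` above is a lazy property: the first access triggers an API request and caches the result, and later accesses reuse the cache until `update()` refetches it. The pattern in isolation, with a hypothetical callable standing in for the API client:

```python
class LazyResource(object):
    """Lazily fetch and cache remote data on first attribute access."""

    def __init__(self, obj_id, fetch):
        self.id = obj_id
        self._fetch = fetch   # callable standing in for the API client
        self._data = None

    @property
    def data(self):
        # Fetch once, then serve the cached copy on subsequent accesses.
        if self._data is None:
            self._data = self._fetch(self.id)
        return self._data


calls = []
res = LazyResource(42, lambda i: calls.append(i) or {'id': i})
res.data
res.data
print(len(calls))  # the backend is hit only once: prints 1
```

This is why constructing a fuelclient object is cheap: no HTTP request happens until some code actually reads `.data`.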
@ -1,646 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from operator import attrgetter
import os
import shutil

from fuelclient.cli import error
from fuelclient.cli.serializers import listdir_without_extensions
from fuelclient.objects.base import BaseObject
from fuelclient.objects.task import DeployTask
from fuelclient.objects.task import Task


class Environment(BaseObject):

    class_api_path = "clusters/"
    instance_api_path = "clusters/{0}/"
    deployment_tasks_path = 'clusters/{0}/deployment_tasks'
    deployment_tasks_graph_path = 'clusters/{0}/deploy_tasks/graph.gv'
    attributes_path = 'clusters/{0}/attributes'
    network_template_path = 'clusters/{0}/network_configuration/template'

    @classmethod
    def create(cls, name, release_id, net_segment_type):
        data = {
            "nodes": [],
            "tasks": [],
            "name": name,
            "release_id": release_id,
            "net_segment_type": net_segment_type,
        }

        data = cls.connection.post_request("clusters/", data)
        return cls.init_with_data(data)

    def __init__(self, *args, **kwargs):
        super(Environment, self).__init__(*args, **kwargs)
        self._testruns_ids = []

    def set(self, data):
        return self.connection.put_request(
            "clusters/{0}/".format(self.id),
            data
        )

    def delete(self):
        return self.connection.delete_request(
            "clusters/{0}/".format(self.id)
        )

    def assign(self, nodes, roles):
        return self.connection.post_request(
            "clusters/{0}/assignment/".format(self.id),
            [{'id': node.id, 'roles': roles} for node in nodes]
        )

    def unassign(self, nodes):
        return self.connection.post_request(
            "clusters/{0}/unassignment/".format(self.id),
            [{"id": n} for n in nodes]
        )

    def get_all_nodes(self):
        from fuelclient.objects.node import Node
        return sorted(map(
            Node.init_with_data,
            self.connection.get_request(
                "nodes/?cluster_id={0}".format(self.id)
            )
        ), key=attrgetter('id'))

    def unassign_all(self):
        nodes = self.get_all_nodes()
        if not nodes:
            raise error.ActionException(
                "Environment with id={0} doesn't have nodes to remove."
                .format(self.id)
            )
        return self.connection.post_request(
            "clusters/{0}/unassignment/".format(self.id),
            [{"id": n.id} for n in nodes]
        )

    def deploy_changes(self, dry_run=False, noop_run=False):
        deploy_data = self.connection.put_request(
            "clusters/{0}/changes".format(self.id),
            {}, dry_run=int(dry_run), noop_run=int(noop_run)
        )
        return DeployTask.init_with_data(deploy_data)

    def redeploy_changes(self, dry_run=False, noop_run=False):
        deploy_data = self.connection.put_request(
            "clusters/{0}/changes/redeploy".format(self.id),
            {}, dry_run=int(dry_run), noop_run=int(noop_run)
        )
        return DeployTask.init_with_data(deploy_data)

    def get_network_data_path(self, directory=os.curdir):
        return os.path.join(
            os.path.abspath(directory),
            "network_{0}".format(self.id)
        )

    def get_settings_data_path(self, directory=os.curdir):
        return os.path.join(
            os.path.abspath(directory),
            "settings_{0}".format(self.id)
        )

    def get_network_template_data_path(self, directory=None):
        directory = directory or os.curdir
        return os.path.join(
            os.path.abspath(directory),
            "network_template_{0}".format(self.id)
        )

    def write_network_data(self, network_data, directory=os.curdir,
                           serializer=None):
        self._check_dir(directory)
        return (serializer or self.serializer).write_to_path(
            self.get_network_data_path(directory),
            network_data
        )

    def write_settings_data(self, settings_data, directory=os.curdir,
                            serializer=None):
        self._check_dir(directory)
        return (serializer or self.serializer).write_to_path(
            self.get_settings_data_path(directory),
            settings_data
        )

    def write_network_template_data(self, template_data, directory=None,
                                    serializer=None):
        directory = directory or os.curdir
        return (serializer or self.serializer).write_to_path(
            self.get_network_template_data_path(directory),
            template_data
        )

    def read_network_data(self, directory=os.curdir,
                          serializer=None):
        self._check_dir(directory)
        network_file_path = self.get_network_data_path(directory)
        return (serializer or self.serializer).read_from_file(
            network_file_path)

    def read_settings_data(self, directory=os.curdir, serializer=None):
        self._check_dir(directory)
        settings_file_path = self.get_settings_data_path(directory)
        return (serializer or self.serializer).read_from_file(
            settings_file_path)

    def _check_file_path(self, file_path):
        if not os.path.exists(file_path):
            raise error.InvalidFileException(
                "File '{0}' doesn't exist.".format(file_path))

    def _check_dir(self, directory):
        if not os.path.exists(directory):
            raise error.InvalidDirectoryException(
                "Directory '{0}' doesn't exist.".format(directory))
        if not os.path.isdir(directory):
|
|
||||||
raise error.InvalidDirectoryException(
|
|
||||||
"Error: '{0}' is not a directory.".format(directory))
|
|
||||||
|
|
||||||
def read_network_template_data(self, directory=os.curdir,
|
|
||||||
serializer=None):
|
|
||||||
"""Used by 'fuel' command line utility."""
|
|
||||||
self._check_dir(directory)
|
|
||||||
network_template_file_path = self.get_network_template_data_path(
|
|
||||||
directory)
|
|
||||||
return (serializer or self.serializer).\
|
|
||||||
read_from_file(network_template_file_path)
|
|
||||||
|
|
||||||
def read_network_template_data_from_file(self, file_path=None,
|
|
||||||
serializer=None):
|
|
||||||
"""Used by 'fuel2' command line utility."""
|
|
||||||
return (serializer or self.serializer).\
|
|
||||||
read_from_full_path(file_path)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def status(self):
|
|
||||||
return self.get_fresh_data()['status']
|
|
||||||
|
|
||||||
@property
|
|
||||||
def settings_url(self):
|
|
||||||
return self.attributes_path.format(self.id)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def default_settings_url(self):
|
|
||||||
return self.settings_url + "/defaults"
|
|
||||||
|
|
||||||
@property
|
|
||||||
def network_url(self):
|
|
||||||
return "clusters/{id}/network_configuration/neutron".format(
|
|
||||||
**self.data
|
|
||||||
)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def network_template_url(self):
|
|
||||||
return self.network_template_path.format(self.id)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def network_verification_url(self):
|
|
||||||
return self.network_url + "/verify"
|
|
||||||
|
|
||||||
def get_network_data(self):
|
|
||||||
return self.connection.get_request(self.network_url)
|
|
||||||
|
|
||||||
def get_settings_data(self):
|
|
||||||
return self.connection.get_request(self.settings_url)
|
|
||||||
|
|
||||||
def get_default_settings_data(self):
|
|
||||||
return self.connection.get_request(self.default_settings_url)
|
|
||||||
|
|
||||||
def get_network_template_data(self):
|
|
||||||
return self.connection.get_request(self.network_template_url)
|
|
||||||
|
|
||||||
def set_network_data(self, data):
|
|
||||||
return self.connection.put_request(
|
|
||||||
self.network_url, data)
|
|
||||||
|
|
||||||
def set_settings_data(self, data, force=False):
|
|
||||||
if force:
|
|
||||||
result = self.connection.put_request(
|
|
||||||
self.settings_url, data, force=1)
|
|
||||||
else:
|
|
||||||
result = self.connection.put_request(
|
|
||||||
self.settings_url, data)
|
|
||||||
return result
|
|
||||||
|
|
||||||
def verify_network(self):
|
|
||||||
return self.connection.put_request(
|
|
||||||
self.network_verification_url, self.get_network_data())
|
|
||||||
|
|
||||||
def set_network_template_data(self, data):
|
|
||||||
return self.connection.put_request(
|
|
||||||
self.network_template_url, data)
|
|
||||||
|
|
||||||
def delete_network_template_data(self):
|
|
||||||
return self.connection.delete_request(self.network_template_url)
|
|
||||||
|
|
||||||
def _get_fact_dir_name(self, fact_type, directory=os.path.curdir):
|
|
||||||
return os.path.join(
|
|
||||||
os.path.abspath(directory),
|
|
||||||
"{0}_{1}".format(fact_type, self.id))
|
|
||||||
|
|
||||||
def _get_fact_url(self, fact_type, default=False):
|
|
||||||
fact_url = "clusters/{0}/orchestrator/{1}/{2}".format(
|
|
||||||
self.id, fact_type, 'defaults/' if default else ''
|
|
||||||
)
|
|
||||||
return fact_url
|
|
||||||
|
|
||||||
def get_default_facts(self, fact_type, **kwargs):
|
|
||||||
"""Gets default facts for cluster.
|
|
||||||
:param fact_type: the type of facts (deployment, provision)
|
|
||||||
"""
|
|
||||||
return self.get_facts(fact_type, default=True, **kwargs)
|
|
||||||
|
|
||||||
def get_facts(self, fact_type, default=False, nodes=None, split=None):
|
|
||||||
"""Gets facts for cluster.
|
|
||||||
:param fact_type: the type of facts (deployment, provision)
|
|
||||||
:param default: if True, the default facts will be retrieved
|
|
||||||
:param nodes: if specified, get facts only for selected nodes
|
|
||||||
:param split: if True, the node part and common part will be split
|
|
||||||
"""
|
|
||||||
params = {}
|
|
||||||
if nodes is not None:
|
|
||||||
params['nodes'] = ','.join(str(x) for x in nodes)
|
|
||||||
if split is not None:
|
|
||||||
params['split'] = str(int(split))
|
|
||||||
|
|
||||||
facts = self.connection.get_request(
|
|
||||||
self._get_fact_url(fact_type, default=default), params=params
|
|
||||||
)
|
|
||||||
if not facts:
|
|
||||||
raise error.ServerDataException(
|
|
||||||
"There is no {0} info for this "
|
|
||||||
"environment!".format(fact_type)
|
|
||||||
)
|
|
||||||
return facts
|
|
||||||
|
|
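The query parameters `get_facts` sends can be sketched as a standalone function; `build_fact_params` below is a hypothetical helper (not part of the client API) mirroring the `nodes`/`split` handling above.

```python
def build_fact_params(nodes=None, split=None):
    # Mirrors get_facts: node ids are joined into a comma-separated
    # string, and the boolean split flag is sent as "0"/"1".
    params = {}
    if nodes is not None:
        params['nodes'] = ','.join(str(x) for x in nodes)
    if split is not None:
        params['split'] = str(int(split))
    return params
```

Omitting both arguments yields an empty dict, so nothing extraneous reaches the query string.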
    def upload_facts(self, fact_type, facts):
        self.connection.put_request(self._get_fact_url(fact_type), facts)

    def delete_facts(self, fact_type):
        self.connection.delete_request(self._get_fact_url(fact_type))

    def read_fact_info(self, fact_type, directory, serializer=None):
        return getattr(
            self, "read_{0}_info".format(fact_type)
        )(fact_type, directory=directory, serializer=serializer)

    def write_facts_to_dir(self, fact_type, facts,
                           directory=os.path.curdir, serializer=None):
        dir_name = self._get_fact_dir_name(fact_type, directory=directory)
        if os.path.exists(dir_name):
            shutil.rmtree(dir_name)
        os.makedirs(dir_name)
        if isinstance(facts, dict):
            engine_file_path = os.path.join(dir_name, "engine")
            (serializer or self.serializer).write_to_path(
                engine_file_path, facts["engine"])
            facts = facts["nodes"]

            def name_builder(fact):
                return fact['name']
        else:
            def name_builder(fact):
                if 'role' in fact:
                    # from 9.0 the deployment info is serialized only per node
                    return "{role}_{uid}".format(**fact)
                return fact['uid']

        for _fact in facts:
            fact_path = os.path.join(
                dir_name,
                name_builder(_fact)
            )
            (serializer or self.serializer).write_to_path(fact_path, _fact)
        return dir_name

    def read_deployment_info(self, fact_type,
                             directory=os.path.curdir, serializer=None):
        self._check_dir(directory)
        dir_name = self._get_fact_dir_name(fact_type, directory=directory)
        self._check_dir(dir_name)
        return map(
            lambda f: (serializer or self.serializer).read_from_file(f),
            [os.path.join(dir_name, json_file)
             for json_file in listdir_without_extensions(dir_name)]
        )

    def read_provisioning_info(self, fact_type,
                               directory=os.path.curdir, serializer=None):
        dir_name = self._get_fact_dir_name(fact_type, directory=directory)
        node_facts = map(
            lambda f: (serializer or self.serializer).read_from_file(f),
            [os.path.join(dir_name, fact_file)
             for fact_file in listdir_without_extensions(dir_name)
             if "engine" != fact_file]
        )
        engine = (serializer or self.serializer).read_from_file(
            os.path.join(dir_name, "engine"))
        return {
            "engine": engine,
            "nodes": node_facts
        }

    # TODO(vkulanov): remove method when deprecate old cli
    def get_testsets(self):
        return self.connection.get_request(
            'testsets/{0}'.format(self.id),
            ostf=True
        )

    @property
    def is_customized(self):
        data = self.get_fresh_data()
        return data["is_customized"]

    # TODO(vkulanov): remove method when deprecate old cli
    def is_in_running_test_sets(self, test_set):
        return test_set["testset"] in self._test_sets_to_run

    # TODO(vkulanov): remove method when deprecate old cli
    def run_test_sets(self, test_sets_to_run, ostf_credentials=None):
        self._test_sets_to_run = test_sets_to_run

        def make_test_set(name):
            result = {
                "testset": name,
                "metadata": {
                    "config": {},
                    "cluster_id": self.id,
                }
            }
            if ostf_credentials:
                creds = result['metadata'].setdefault(
                    'ostf_os_access_creds', {})
                if 'tenant' in ostf_credentials:
                    creds['ostf_os_tenant_name'] = ostf_credentials['tenant']
                if 'username' in ostf_credentials:
                    creds['ostf_os_username'] = ostf_credentials['username']
                if 'password' in ostf_credentials:
                    creds['ostf_os_password'] = ostf_credentials['password']
            return result

        tests_data = [make_test_set(ts) for ts in test_sets_to_run]
        testruns = self.connection.post_request(
            "testruns", tests_data, ostf=True)
        self._testruns_ids = [tr['id'] for tr in testruns]
        return testruns

    # TODO(vkulanov): remove method when deprecate old cli
    def get_state_of_tests(self):
        return [
            self.connection.get_request(
                "testruns/{0}".format(testrun_id), ostf=True)
            for testrun_id in self._testruns_ids
        ]

    def stop(self):
        return Task.init_with_data(
            self.connection.put_request(
                "clusters/{0}/stop_deployment/".format(self.id),
                {}
            )
        )

    def reset(self, force=False):
        return Task.init_with_data(
            self.connection.put_request(
                "clusters/{0}/reset/?force={force}".format(
                    self.id, force=int(force)),
                {}
            )
        )

    def _get_method_url(self, method_type, nodes, force=False, noop_run=False):
        endpoint = "clusters/{0}/{1}/?nodes={2}".format(
            self.id,
            method_type,
            ','.join(map(lambda n: str(n.id), nodes)))

        if force:
            endpoint += '&force=1'
        if noop_run:
            endpoint += '&noop_run=1'

        return endpoint

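The endpoint construction in `_get_method_url` can be shown in isolation; `build_method_url` is a hypothetical standalone version taking plain node ids instead of node objects.

```python
def build_method_url(cluster_id, method_type, node_ids,
                     force=False, noop_run=False):
    # Mirrors _get_method_url: node ids go into the 'nodes' query
    # parameter, and optional flags are appended as '&force=1' /
    # '&noop_run=1'.
    endpoint = "clusters/{0}/{1}/?nodes={2}".format(
        cluster_id, method_type, ','.join(str(n) for n in node_ids))
    if force:
        endpoint += '&force=1'
    if noop_run:
        endpoint += '&noop_run=1'
    return endpoint
```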
    def install_selected_nodes(self, method_type, nodes):
        return Task.init_with_data(
            self.connection.put_request(
                self._get_method_url(method_type, nodes),
                {}
            )
        )

    def execute_tasks(self, nodes, tasks, force, noop_run):
        return Task.init_with_data(
            self.connection.put_request(
                self._get_method_url('deploy_tasks', nodes=nodes, force=force,
                                     noop_run=noop_run),
                tasks
            )
        )

    def get_tasks(self, skip=None, end=None, start=None, include=None):
        """Filters deployment tasks by the known parameters.

        :param skip: list of tasks or None
        :param end: string or None
        :param start: string or None
        :param include: list or None
        """
        tasks = [t['id'] for t in self.get_deployment_tasks(
            end=end, start=start, include=include)]
        if skip:
            tasks_to_execute = set(tasks) - set(skip)
            return list(tasks_to_execute)
        return tasks

    def get_deployment_tasks(self, end=None, start=None, include=None):
        url = self.deployment_tasks_path.format(self.id)
        return self.connection.get_request(
            url, params={
                'end': end,
                'start': start,
                'include': include})

    def update_deployment_tasks(self, data):
        url = self.deployment_tasks_path.format(self.id)
        return self.connection.put_request(url, data)

    def get_attributes(self):
        return self.connection.get_request(self.settings_url)

    def update_attributes(self, data, force=False):
        if force:
            result = self.connection.put_request(
                self.settings_url, data, force=1)
        else:
            result = self.connection.put_request(
                self.settings_url, data)
        return result

    def get_deployment_tasks_graph(self, tasks, parents_for=None, remove=None):
        url = self.deployment_tasks_graph_path.format(self.id)
        params = {
            'tasks': ','.join(tasks),
            'parents_for': parents_for,
            'remove': ','.join(remove) if remove else None,
        }
        resp = self.connection.get_request_raw(url, params=params)
        resp.raise_for_status()
        return resp.text

    def spawn_vms(self):
        url = 'clusters/{0}/spawn_vms/'.format(self.id)
        return self.connection.put_request(url, {})

    def _get_ip_addrs_url(self, vips=True, ip_addr_id=None):
        """Generate an ip address management url.

        :param vips: manage vip properties of the ip address
        :type vips: bool
        :param ip_addr_id: ip address identifier
        :type ip_addr_id: int
        :return: url
        :rtype: str
        """
        ip_addr_url = "clusters/{0}/network_configuration/ips/".format(self.id)
        if ip_addr_id:
            ip_addr_url += '{0}/'.format(ip_addr_id)
        if vips:
            ip_addr_url += 'vips/'

        return ip_addr_url

    def get_default_vips_data_path(self):
        """Get the path where VIPs data is located.

        :return: path
        :rtype: str
        """
        return os.path.join(
            os.path.abspath(os.curdir),
            "vips_{0}".format(self.id)
        )

    def get_vips_data(self, ip_address_id=None, network=None,
                      network_role=None):
        """Get one or multiple VIP data records.

        :param ip_address_id: if specified, a single VIP record is
                              downloaded; otherwise multiple records are
                              returned, respecting the network and
                              network_role filters
        :type ip_address_id: int
        :param network: network id to filter VIPs by
        :type network: int
        :param network_role: network role to filter VIPs by
        :type network_role: string
        :return: response JSON
        :rtype: list of dict
        """
        params = {}
        if network:
            params['network'] = network
        if network_role:
            params['network-role'] = network_role

        result = self.connection.get_request(
            self._get_ip_addrs_url(True, ip_addr_id=ip_address_id),
            params=params
        )
        if ip_address_id is not None:  # single vip is returned
            # wrapping with a list is required for the case when an
            # administrator downloads vip address info to change it and
            # upload it back; uploading works only with lists of records.
            result = [result]
        return result

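The URL layout produced by `_get_ip_addrs_url` is easiest to see standalone; `build_ip_addrs_url` below is a hypothetical helper reproducing the same segment ordering (optional id segment before the trailing `vips/`).

```python
def build_ip_addrs_url(cluster_id, vips=True, ip_addr_id=None):
    # Mirrors _get_ip_addrs_url: an optional ip address id segment is
    # appended first, then the 'vips/' suffix when vip management is
    # requested.
    url = "clusters/{0}/network_configuration/ips/".format(cluster_id)
    if ip_addr_id:
        url += '{0}/'.format(ip_addr_id)
    if vips:
        url += 'vips/'
    return url
```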
    def write_vips_data_to_file(self, vips_data, serializer=None,
                                file_path=None):
        """Write VIP data to the given path.

        :param vips_data: vip data
        :type vips_data: list of dict
        :param serializer: serializer
        :param file_path: path
        :type file_path: str
        :return: path to the resulting file
        :rtype: str
        """
        serializer = serializer or self.serializer

        if file_path:
            return serializer.write_to_full_path(
                file_path,
                vips_data
            )
        else:
            return serializer.write_to_path(
                self.get_default_vips_data_path(),
                vips_data
            )

    def read_vips_data_from_file(self, file_path=None, serializer=None):
        """Read VIPs data from the given path.

        :param file_path: path
        :type file_path: str
        :param serializer: serializer object
        :type serializer: object
        :return: data
        :rtype: list|object
        """
        self._check_file_path(file_path)
        return (serializer or self.serializer).read_from_file(file_path)

    def set_vips_data(self, data):
        """Send VIPs data to the Nailgun API.

        :param data: VIPs data
        :type data: list of dict
        :return: request result
        :rtype: object
        """
        return self.connection.put_request(self._get_ip_addrs_url(), data)

    def create_vip(self, **vip_kwargs):
        """Create a VIP through a request to the Nailgun API.

        :param vip_kwargs: attributes of the VIP to be created
        """
        return self.connection.post_request(self._get_ip_addrs_url(),
                                            vip_kwargs)

    def get_enabled_plugins(self):
        """Get the list of enabled plugin ids.

        :returns: plugins ids list
        :rtype: list[int]
        """
        attrs = self.get_attributes()['editable']
        enabled_plugins_ids = []
        for attr_name in attrs:
            metadata = attrs[attr_name].get('metadata', {})
            if metadata.get('class') == 'plugin' and metadata.get('enabled'):
                enabled_plugins_ids.append(metadata['chosen_id'])
        return enabled_plugins_ids
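The filtering inside `get_enabled_plugins` works on the `editable` attributes dict alone, so it can be exercised without a server; `enabled_plugin_ids` is a hypothetical standalone version operating on sample data.

```python
def enabled_plugin_ids(editable_attrs):
    # Mirrors get_enabled_plugins: a plugin attribute group is marked
    # by metadata class 'plugin'; the 'enabled' flag selects it and
    # 'chosen_id' identifies the active plugin version.
    ids = []
    for attrs in editable_attrs.values():
        metadata = attrs.get('metadata', {})
        if metadata.get('class') == 'plugin' and metadata.get('enabled'):
            ids.append(metadata['chosen_id'])
    return ids
```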
@@ -1,47 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.objects.base import BaseObject


class Extension(BaseObject):

    class_api_path = "extensions/"
    instance_api_path = "clusters/{0}/extensions/"

    @property
    def extensions_url(self):
        return self.instance_api_path.format(self.id)

    def get_env_extensions(self):
        """Get the list of extensions through a request to the Nailgun API."""
        return self.connection.get_request(self.extensions_url)

    def enable_env_extensions(self, extensions):
        """Enable extensions through a request to the Nailgun API.

        :param extensions: list of extensions to be enabled
        """
        return self.connection.put_request(self.extensions_url, extensions)

    def disable_env_extensions(self, extensions):
        """Disable extensions through a request to the Nailgun API.

        :param extensions: list of extensions to be disabled
        """
        url = '{0}?extension_names={1}'.format(self.extensions_url,
                                               ','.join(extensions))
        return self.connection.delete_request(url)
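The DELETE URL built by `disable_env_extensions` packs the extension names into one query parameter; `build_disable_extensions_url` below is a hypothetical standalone sketch of that construction.

```python
def build_disable_extensions_url(cluster_id, extensions):
    # Mirrors disable_env_extensions: extension names are passed as a
    # comma-separated 'extension_names' query parameter on the DELETE
    # request.
    base = "clusters/{0}/extensions/".format(cluster_id)
    return '{0}?extension_names={1}'.format(base, ','.join(extensions))
```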
@@ -1,24 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.objects.base import BaseObject


class FuelVersion(BaseObject):

    class_api_path = "version/"

    @classmethod
    def get_feature_groups(cls):
        return cls.get_all_data()['feature_groups']
@@ -1,86 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Vitalii Kulanov
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.objects.base import BaseObject


class Health(BaseObject):

    class_api_path = "testruns/"
    instance_api_path = "testruns/{0}/"
    test_sets_api_path = "testsets/{0}/"

    @classmethod
    def get_test_sets(cls, environment_id):
        return cls.connection.get_request(
            cls.test_sets_api_path.format(environment_id),
            ostf=True
        )

    @classmethod
    def get_tests_status_all(cls):
        return cls.connection.get_request(cls.class_api_path, ostf=True)

    def get_tests_status_single(self):
        return self.connection.get_request(
            self.instance_api_path.format(self.id),
            ostf=True
        )

    @classmethod
    def get_last_tests_status(cls, environment_id):
        return cls.connection.get_request(
            'testruns/last/{0}'.format(environment_id),
            ostf=True
        )

    @classmethod
    def run_test_sets(cls, environment_id, test_sets_to_run,
                      ostf_credentials=None):

        def make_test_set(name):
            result = {
                "testset": name,
                "metadata": {
                    "config": {},
                    "cluster_id": environment_id,
                }
            }
            if ostf_credentials:
                creds = result['metadata'].setdefault(
                    'ostf_os_access_creds', {})
                if 'tenant' in ostf_credentials:
                    creds['ostf_os_tenant_name'] = ostf_credentials['tenant']
                if 'username' in ostf_credentials:
                    creds['ostf_os_username'] = ostf_credentials['username']
                if 'password' in ostf_credentials:
                    creds['ostf_os_password'] = ostf_credentials['password']
            return result

        tests_data = [make_test_set(ts) for ts in test_sets_to_run]
        test_runs = cls.connection.post_request(cls.class_api_path,
                                                tests_data,
                                                ostf=True)
        return test_runs

    def action_test(self, action_status):
        data = [{
            "id": self.id,
            "status": action_status
        }]
        return self.connection.put_request(
            'testruns/', data, ostf=True
        )
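The payload shape produced by the nested `make_test_set` helper in `run_test_sets` can be checked without a connection; the module-level version below is a hypothetical sketch of the same mapping from OSTF credentials to metadata keys.

```python
def make_test_set(environment_id, name, ostf_credentials=None):
    # Mirrors the nested helper in run_test_sets: each test set is
    # wrapped with metadata carrying the cluster id and, optionally,
    # OSTF access credentials under 'ostf_os_access_creds'.
    result = {
        "testset": name,
        "metadata": {"config": {}, "cluster_id": environment_id},
    }
    if ostf_credentials:
        creds = result['metadata'].setdefault('ostf_os_access_creds', {})
        mapping = {'tenant': 'ostf_os_tenant_name',
                   'username': 'ostf_os_username',
                   'password': 'ostf_os_password'}
        for key, target in mapping.items():
            if key in ostf_credentials:
                creds[target] = ostf_credentials[key]
    return result
```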
@@ -1,108 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from operator import attrgetter

from fuelclient.objects.base import BaseObject


class NetworkGroup(BaseObject):

    class_api_path = "networks/"
    instance_api_path = "networks/{0}/"

    @property
    def name(self):
        return self.get_fresh_data()["name"]

    @classmethod
    def create(cls, name, release, vlan, cidr, gateway,
               group_id, meta=None):

        metadata = {
            'notation': 'cidr',
            'render_type': None,
            'map_priority': 2,
            'configurable': True,
            'use_gateway': False,
            'name': name,
            'cidr': cidr,
            'vlan_start': vlan
        }
        if meta:
            metadata.update(meta)

        network_group = {
            'name': name,
            'release': release,
            'vlan_start': vlan,
            'cidr': cidr,
            'gateway': gateway,
            'meta': metadata,
            'group_id': group_id,
        }

        data = cls.connection.post_request(
            cls.class_api_path,
            network_group,
        )
        return cls.init_with_data(data)

    def set(self, data):
        vlan = data.pop('vlan', None)
        if vlan is not None:
            data['vlan_start'] = vlan

        return self.connection.put_request(
            self.instance_api_path.format(self.id), data)

    def delete(self):
        return self.connection.delete_request(
            self.instance_api_path.format(self.id)
        )


class NetworkGroupCollection(object):

    def __init__(self, networks):
        self.collection = networks

    @classmethod
    def init_with_ids(cls, ids):
        return cls(map(NetworkGroup, ids))

    @classmethod
    def init_with_data(cls, data):
        return cls(map(NetworkGroup.init_with_data, data))

    def __str__(self):
        return "{0} [{1}]".format(
            self.__class__.__name__,
            ", ".join(map(lambda n: str(n.id), self.collection))
        )

    def __iter__(self):
        return iter(self.collection)

    @property
    def data(self):
        return map(attrgetter("data"), self.collection)

    @classmethod
    def get_all(cls):
        return cls(NetworkGroup.get_all())

    def filter_by_group_id(self, group_id):
        self.collection = filter(lambda net: net.data['group_id'] == group_id,
                                 self.collection)
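`NetworkGroup.set` quietly renames the CLI-facing `vlan` key to the API field `vlan_start` before the PUT request; `normalize_network_update` below is a hypothetical side-effect-free sketch of that rename (the real method mutates its argument in place).

```python
def normalize_network_update(data):
    # Mirrors NetworkGroup.set: 'vlan' is popped and re-added as
    # 'vlan_start'; all other keys pass through untouched. Copying
    # first keeps the caller's dict intact.
    data = dict(data)
    vlan = data.pop('vlan', None)
    if vlan is not None:
        data['vlan_start'] = vlan
    return data
```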
@@ -1,206 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from operator import attrgetter
import os

from fuelclient.cli.error import InvalidDirectoryException
from fuelclient.objects.base import BaseObject
from fuelclient.objects.environment import Environment


class Node(BaseObject):

    class_api_path = "nodes/"
    instance_api_path = "nodes/{0}/"
    attributes_api_path = "nodes/{0}/attributes/"

    attributes_urls = {
        "interfaces": ("interfaces", "default_assignment"),
        "disks": ("disks", "defaults")
    }

    @property
    def env_id(self):
        return self.get_fresh_data()["cluster"]

    @property
    def env(self):
        return Environment(self.env_id)

    def get_attributes_path(self, directory):
        return os.path.join(
            os.path.abspath(
                os.curdir if directory is None else directory),
            "node_{0}".format(self.id)
        )

    def is_finished(self, latest=True):
        if latest:
            data = self.get_fresh_data()
        else:
            data = self.data
        return data["status"] in ("ready", "error")

    @property
    def progress(self):
        data = self.get_fresh_data()
        return data["progress"]

    @property
    def labels(self):
        return self.get_fresh_data().get('labels', {})

    def get_attribute_default_url(self, attributes_type):
        url_path, default_url_path = self.attributes_urls[attributes_type]
        return "nodes/{0}/{1}/{2}".format(self.id, url_path, default_url_path)

||||||
def get_attribute_url(self, attributes_type):
|
|
||||||
url_path, _ = self.attributes_urls[attributes_type]
|
|
||||||
return "nodes/{0}/{1}/".format(self.id, url_path)
|
|
||||||
|
|
||||||
def get_default_attribute(self, attributes_type):
|
|
||||||
return self.connection.get_request(
|
|
||||||
self.get_attribute_default_url(attributes_type)
|
|
||||||
)
|
|
||||||
|
|
||||||
def get_node_attributes(self):
|
|
||||||
return self.connection.get_request(
|
|
||||||
self.attributes_api_path.format(self.id)
|
|
||||||
)
|
|
||||||
|
|
||||||
def update_node_attributes(self, data):
|
|
||||||
return self.connection.put_request(
|
|
||||||
self.attributes_api_path.format(self.id),
|
|
||||||
data
|
|
||||||
)
|
|
||||||
|
|
||||||
def get_attribute(self, attributes_type):
|
|
||||||
return self.connection.get_request(
|
|
||||||
self.get_attribute_url(attributes_type)
|
|
||||||
)
|
|
||||||
|
|
||||||
def upload_node_attribute(self, attributes_type, attributes):
|
|
||||||
url = self.get_attribute_url(attributes_type)
|
|
||||||
return self.connection.put_request(
|
|
||||||
url,
|
|
||||||
attributes
|
|
||||||
)
|
|
||||||
|
|
||||||
def write_attribute(self, attribute_type, attributes,
|
|
||||||
directory, serializer=None):
|
|
||||||
attributes_directory = self.get_attributes_path(directory)
|
|
||||||
if not os.path.exists(attributes_directory):
|
|
||||||
os.mkdir(attributes_directory)
|
|
||||||
attribute_path = os.path.join(
|
|
||||||
attributes_directory,
|
|
||||||
attribute_type
|
|
||||||
)
|
|
||||||
if os.path.exists(attribute_path):
|
|
||||||
os.remove(attribute_path)
|
|
||||||
return (serializer or self.serializer).write_to_path(
|
|
||||||
attribute_path,
|
|
||||||
attributes
|
|
||||||
)
|
|
||||||
|
|
||||||
def read_attribute(self, attributes_type, directory, serializer=None):
|
|
||||||
attributes_directory = self.get_attributes_path(directory)
|
|
||||||
if not os.path.exists(attributes_directory):
|
|
||||||
raise InvalidDirectoryException(
|
|
||||||
"Folder {0} doesn't contain node folder '{1}'"
|
|
||||||
.format(directory, "node_{0}".format(self.id))
|
|
||||||
)
|
|
||||||
return (serializer or self.serializer).read_from_file(
|
|
||||||
os.path.join(
|
|
||||||
attributes_directory,
|
|
||||||
attributes_type
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
def deploy(self):
|
|
||||||
self.env.install_selected_nodes("deploy", (self,))
|
|
||||||
|
|
||||||
def provision(self):
|
|
||||||
self.env.install_selected_nodes("provision", (self,))
|
|
||||||
|
|
||||||
def delete(self):
|
|
||||||
self.connection.delete_request(self.instance_api_path.format(self.id))
|
|
||||||
|
|
||||||
def node_vms_create(self, config):
|
|
||||||
url = "nodes/{0}/vms_conf/".format(self.id)
|
|
||||||
return self.connection.put_request(url, {'vms_conf': config})
|
|
||||||
|
|
||||||
def get_node_vms_conf(self):
|
|
||||||
url = "nodes/{0}/vms_conf/".format(self.id)
|
|
||||||
return self.connection.get_request(url)
|
|
||||||
|
|
||||||
def set(self, data):
|
|
||||||
return self.connection.put_request(
|
|
||||||
self.instance_api_path.format(self.id),
|
|
||||||
data
|
|
||||||
)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def get_by_env_id(cls, cluster_id):
|
|
||||||
params = {'cluster_id': cluster_id}
|
|
||||||
return cls.connection.get_request(cls.class_api_path, params=params)
|
|
||||||
|
|
||||||
|
|
||||||
class NodeCollection(object):
|
|
||||||
|
|
||||||
class_api_path = "nodes/"
|
|
||||||
|
|
||||||
def __init__(self, nodes):
|
|
||||||
self.collection = nodes
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def init_with_ids(cls, ids):
|
|
||||||
return cls(list(map(Node, ids)))
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def init_with_data(cls, data):
|
|
||||||
return cls(list(map(Node.init_with_data, data)))
|
|
||||||
|
|
||||||
def __str__(self):
|
|
||||||
return "nodes [{0}]".format(
|
|
||||||
", ".join(map(lambda n: str(n.id), self.collection))
|
|
||||||
)
|
|
||||||
|
|
||||||
def __iter__(self):
|
|
||||||
return iter(self.collection)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def data(self):
|
|
||||||
return map(attrgetter("data"), self.collection)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def get_all(cls):
|
|
||||||
return cls(Node.get_all())
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def update(cls, data):
|
|
||||||
return BaseObject.connection.put_request(cls.class_api_path, data)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def delete_by_ids(cls, ids):
|
|
||||||
url = '{0}?ids={1}'.format(
|
|
||||||
cls.class_api_path,
|
|
||||||
','.join(map(str, ids))
|
|
||||||
)
|
|
||||||
|
|
||||||
return BaseObject.connection.delete_request(url)
|
|
||||||
|
|
||||||
def filter_by_env_id(self, env_id):
|
|
||||||
predicate = lambda node: node.data['cluster'] == env_id
|
|
||||||
self.collection = filter(predicate, self.collection)
@@ -1,83 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from operator import attrgetter

from fuelclient.objects.base import BaseObject
from fuelclient.objects import NodeCollection


class NodeGroup(BaseObject):

    class_api_path = "nodegroups/"
    instance_api_path = "nodegroups/{0}/"

    @property
    def env_id(self):
        return self.get_fresh_data()["cluster_id"]

    @property
    def name(self):
        return self.get_fresh_data()["name"]

    @classmethod
    def create(cls, name, cluster_id):
        return cls.connection.post_request(
            cls.class_api_path,
            {'cluster_id': cluster_id, 'name': name},
        )

    @classmethod
    def delete(cls, group_id):
        return cls.connection.delete_request(
            cls.instance_api_path.format(group_id)
        )

    def assign(self, nodes):
        data = [{"id": n, "group_id": int(self.id)} for n in nodes]
        NodeCollection.update(data)


class NodeGroupCollection(object):

    def __init__(self, groups):
        self.collection = groups

    @classmethod
    def init_with_ids(cls, ids):
        return cls(map(NodeGroup, ids))

    @classmethod
    def init_with_data(cls, data):
        return cls(map(NodeGroup.init_with_data, data))

    def __str__(self):
        return "node groups [{0}]".format(
            ", ".join(map(lambda n: str(n.id), self.collection))
        )

    def __iter__(self):
        return iter(self.collection)

    @property
    def data(self):
        return map(attrgetter("data"), self.collection)

    @classmethod
    def get_all(cls):
        return cls(NodeGroup.get_all())

    def filter_by_env_id(self, env_id):
        predicate = lambda group: group.data['cluster_id'] == env_id
        self.collection = filter(predicate, self.collection)
@@ -1,67 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.cli import error

from fuelclient.objects import base


class Notifications(base.BaseObject):

    class_api_path = "notifications/"
    instance_api_path = "notifications/{0}"

    default_topic = 'done'

    @classmethod
    def mark_as_read(cls, ids=None):
        if not ids:
            raise error.BadDataException('Message id not specified.')

        if '*' in ids:
            data = Notifications.get_all_data()
        else:
            try:
                # Force evaluation here so a bad id raises ValueError
                # inside this try block.
                ids = list(map(int, ids))
            except ValueError:
                raise error.BadDataException(
                    "Numerical ids expected or the '*' symbol.")
            notifications = Notifications.get_by_ids(ids)

            data = [notification.get_fresh_data()
                    for notification in notifications]

        for notification in data:
            notification['status'] = 'read'

        resp = cls.connection.put_request(
            cls.class_api_path, data)

        return resp

    @classmethod
    def send(cls, message, topic=default_topic):
        if not topic:
            topic = cls.default_topic

        if not message:
            raise error.BadDataException('Message not specified.')

        resp = cls.connection.post_request(
            cls.class_api_path, {
                'message': message,
                'topic': topic,
            })

        return resp
@@ -1,72 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import six

from fuelclient.cli import error
from fuelclient.cli.serializers import Serializer
from fuelclient.objects.base import BaseObject


class OpenstackConfig(BaseObject):

    class_api_path = 'openstack-config/'
    instance_api_path = 'openstack-config/{0}/'
    execute_api_path = 'openstack-config/execute/'

    @classmethod
    def _prepare_params(cls, filters):
        return dict((k, v) for k, v in six.iteritems(filters) if v is not None)

    @classmethod
    def create(cls, **kwargs):
        params = cls._prepare_params(kwargs)
        data = cls.connection.post_request(cls.class_api_path, params)
        return [cls.init_with_data(item) for item in data]

    def delete(self):
        return self.connection.delete_request(
            self.instance_api_path.format(self.id))

    @classmethod
    def execute(cls, **kwargs):
        params = cls._prepare_params(kwargs)
        return cls.connection.put_request(cls.execute_api_path, params)

    @classmethod
    def get_filtered_data(cls, **kwargs):
        url = cls.class_api_path
        params = cls._prepare_params(kwargs)

        node_ids = params.get('node_ids')
        if node_ids is not None:
            params['node_ids'] = ','.join([str(n) for n in node_ids])

        return cls.connection.get_request(url, params=params)

    @classmethod
    def read_file(cls, path):
        if not os.path.exists(path):
            raise error.InvalidFileException(
                "File '{0}' doesn't exist.".format(path))

        serializer = Serializer()
        return serializer.read_from_full_path(path)

    @classmethod
    def write_file(cls, path, data):
        serializer = Serializer()
        return serializer.write_to_full_path(path, data)
@@ -1,526 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import json
import os
import shutil
import subprocess
import sys
import tarfile

from distutils.version import StrictVersion

import six
import yaml

from fuelclient.cli import error
from fuelclient.objects import base
from fuelclient import utils

IS_MASTER = None
FUEL_PACKAGE = 'fuel'
PLUGINS_PATH = '/var/www/nailgun/plugins/'
METADATA_MASK = '/var/www/nailgun/plugins/*/metadata.yaml'


def raise_error_if_not_master():
    """Raises an error if not run on the Fuel master node.

    :raises: error.WrongEnvironmentError
    """
    msg_tail = 'Action can be performed from Fuel master node only.'
    global IS_MASTER
    if IS_MASTER is None:
        IS_MASTER = False
        rpm_exec = utils.find_exec('rpm')
        if not rpm_exec:
            msg = 'Command "rpm" not found. ' + msg_tail
            raise error.WrongEnvironmentError(msg)
        command = [rpm_exec, '-q', FUEL_PACKAGE]
        p = subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE)
        p.communicate()
        if p.poll() == 0:
            IS_MASTER = True
    if not IS_MASTER:
        msg = 'Package "fuel" is not installed. ' + msg_tail
        raise error.WrongEnvironmentError(msg)


def master_only(f):
    """Decorator that raises an error if the decorated method
    is called on a node which is not the Fuel master.
    """
    @six.wraps(f)
    def print_message(*args, **kwargs):
        raise_error_if_not_master()
        return f(*args, **kwargs)

    return print_message


@six.add_metaclass(abc.ABCMeta)
class BasePlugin(object):

    @abc.abstractmethod
    def install(cls, plugin_path, force=False):
        """Installs plugin package
        """

    @abc.abstractmethod
    def update(cls, plugin_path):
        """Updates the plugin
        """

    @abc.abstractmethod
    def remove(cls, plugin_name, plugin_version):
        """Removes the plugin from file system
        """

    @abc.abstractmethod
    def downgrade(cls, plugin_path):
        """Downgrades the plugin
        """

    @abc.abstractmethod
    def name_from_file(cls, file_path):
        """Retrieves name from plugin package
        """

    @abc.abstractmethod
    def version_from_file(cls, file_path):
        """Retrieves version from plugin package
        """


class PluginV1(BasePlugin):

    metadata_config = 'metadata.yaml'

    def deprecated(f):
        """Prints deprecation warning for old plugins
        """
        @six.wraps(f)
        def print_message(*args, **kwargs):
            six.print_(
                'DEPRECATION WARNING: The plugin has old 1.0 package format, '
                'this format does not support many features, such as '
                'plugins updates, find plugin in new format or migrate '
                'and rebuild this one.', file=sys.stderr)
            return f(*args, **kwargs)

        return print_message

    @classmethod
    @master_only
    @deprecated
    def install(cls, plugin_path, force=False):
        plugin_tar = tarfile.open(plugin_path, 'r')
        try:
            plugin_tar.extractall(PLUGINS_PATH)
        finally:
            plugin_tar.close()

    @classmethod
    @master_only
    @deprecated
    def remove(cls, plugin_name, plugin_version):
        plugin_path = os.path.join(
            PLUGINS_PATH, '{0}-{1}'.format(plugin_name, plugin_version))
        shutil.rmtree(plugin_path)

    @classmethod
    def update(cls, _):
        raise error.BadDataException(
            'Update action is not supported for old plugins with '
            'package version "1.0.0", you can install your plugin '
            'or use newer plugin format.')

    @classmethod
    def downgrade(cls, _):
        raise error.BadDataException(
            'Downgrade action is not supported for old plugins with '
            'package version "1.0.0", you can install your plugin '
            'or use newer plugin format.')

    @classmethod
    def name_from_file(cls, file_path):
        """Retrieves plugin name from plugin archive.

        :param str file_path: path to the plugin
        :returns: plugin name
        """
        return cls._get_metadata(file_path)['name']

    @classmethod
    def version_from_file(cls, file_path):
        """Retrieves plugin version from plugin archive.

        :param str file_path: path to the plugin
        :returns: plugin version
        """
        return cls._get_metadata(file_path)['version']

    @classmethod
    def _get_metadata(cls, plugin_path):
        """Retrieves metadata from plugin archive

        :param str plugin_path: path to the plugin
        :returns: metadata from the plugin
        """
        plugin_tar = tarfile.open(plugin_path, 'r')

        try:
            for member_name in plugin_tar.getnames():
                if cls.metadata_config in member_name:
                    return yaml.load(
                        plugin_tar.extractfile(member_name).read())
        finally:
            plugin_tar.close()


class PluginV2(BasePlugin):

    @classmethod
    @master_only
    def install(cls, plugin_path, force=False):
        if force:
            utils.exec_cmd(
                'yum -y install --disablerepo=\'*\' {0} || '
                'yum -y reinstall --disablerepo=\'*\' {0}'
                .format(plugin_path))
        else:
            utils.exec_cmd('yum -y install --disablerepo=\'*\' {0}'
                           .format(plugin_path))

    @classmethod
    @master_only
    def remove(cls, name, version):
        rpm_name = '{0}-{1}'.format(name, utils.major_plugin_version(version))
        utils.exec_cmd('yum -y remove {0}'.format(rpm_name))

    @classmethod
    @master_only
    def update(cls, plugin_path):
        utils.exec_cmd('yum -y update {0}'.format(plugin_path))

    @classmethod
    @master_only
    def downgrade(cls, plugin_path):
        utils.exec_cmd('yum -y downgrade {0}'.format(plugin_path))

    @classmethod
    def name_from_file(cls, file_path):
        """Retrieves plugin name from RPM. The RPM name contains
        the version of the plugin, which should be removed.

        :param str file_path: path to rpm file
        :returns: name of the plugin
        """
        for line in utils.exec_cmd_iterator(
                "rpm -qp --queryformat '%{{name}}' {0}".format(file_path)):
            name = line
            break

        return cls._remove_major_plugin_version(name)

    @classmethod
    def version_from_file(cls, file_path):
        """Retrieves plugin version from RPM.

        :param str file_path: path to rpm file
        :returns: version of the plugin
        """
        for line in utils.exec_cmd_iterator(
                "rpm -qp --queryformat '%{{version}}' {0}".format(file_path)):
            version = line
            break

        return version

    @classmethod
    def _remove_major_plugin_version(cls, name):
        """Removes the version from plugin name.
        Here is an example: "name-1.0" -> "name"

        :param str name: plugin name
        :returns: the name without version
        """
        name_wo_version = name

        if '-' in name_wo_version:
            name_wo_version = '-'.join(name.split('-')[:-1])

        return name_wo_version


class Plugins(base.BaseObject):

    class_api_path = 'plugins/'
    class_instance_path = 'plugins/{id}'

    @classmethod
    def register(cls, name, version, force=False):
        """Tries to find the plugin on the file system and, if it
        exists, creates it in the API service.

        :param str name: plugin name
        :param str version: plugin version
        :param bool force: if True, updates meta information
                           about the plugin even if it does not
                           support updates
        """
        metadata = None
        for m in utils.glob_and_parse_yaml(METADATA_MASK):
            if m.get('version') == version and \
                    m.get('name') == name:
                metadata = m
                break

        if not metadata:
            raise error.BadDataException(
                'Plugin {0} with version {1} does '
                'not exist, install it and try again'.format(
                    name, version))

        return cls.update_or_create(metadata, force=force)

    @classmethod
    def sync(cls, plugin_ids=None):
        """Checks all of the plugins on the file system
        and makes sure that they have consistent information
        in the API service.

        :param plugin_ids: list of ids for plugins which should be synced
        :type plugin_ids: list
        :returns: None
        """
        post_data = None
        if plugin_ids is not None:
            post_data = {'ids': plugin_ids}

        cls.connection.post_request(
            api='plugins/sync/', data=post_data)

    @classmethod
    def unregister(cls, name, version):
        """Removes the plugin from the API service

        :param str name: plugin name
        :param str version: plugin version
        """
        plugin = cls.get_plugin(name, version)
        return cls.connection.delete_request(
            cls.class_instance_path.format(**plugin))

    @classmethod
    def install(cls, plugin_path, force=False):
        """Installs the package and creates data in the API service

        :param str plugin_path: path to the plugin file
        :param bool force: updates an existing plugin even if it is
                           not updatable
        :return: plugin information
        :rtype: dict
        """
        if not utils.file_exists(plugin_path):
            raise error.BadDataException(
                "No such plugin file: {0}".format(plugin_path)
            )
        plugin = cls.make_obj_by_file(plugin_path)

        name = plugin.name_from_file(plugin_path)
        version = plugin.version_from_file(plugin_path)

        plugin.install(plugin_path, force=force)
        response = cls.register(name, version, force=force)

        return response

    @classmethod
    def remove(cls, plugin_name, plugin_version):
        """Removes the package and updates data in the API service

        :param str plugin_name: plugin name
        :param str plugin_version: plugin version
        """
        plugin = cls.make_obj_by_name(plugin_name, plugin_version)
        cls.unregister(plugin_name, plugin_version)
        return plugin.remove(plugin_name, plugin_version)

    @classmethod
    def update(cls, plugin_path):
        """Updates the package and updates data in the API service

        :param str plugin_path: path to the plugin
        """
        plugin = cls.make_obj_by_file(plugin_path)

        name = plugin.name_from_file(plugin_path)
        version = plugin.version_from_file(plugin_path)

        plugin.update(plugin_path)
        return cls.register(name, version)

    @classmethod
    def downgrade(cls, plugin_path):
        """Downgrades the package and updates data in the API service

        :param str plugin_path: path to the plugin
        """
        plugin = cls.make_obj_by_file(plugin_path)

        name = plugin.name_from_file(plugin_path)
        version = plugin.version_from_file(plugin_path)

        plugin.downgrade(plugin_path)
        return cls.register(name, version)

    @classmethod
    def make_obj_by_name(cls, name, version):
        """Finds the appropriate plugin class
        by plugin name and version.

        :param str name: plugin name
        :param str version: plugin version
        :returns: plugin class
        :raises: error.BadDataException on unsupported package version
        """
        plugin = cls.get_plugin(name, version)
        package_version = plugin['package_version']

        if StrictVersion('1.0.0') <= \
                StrictVersion(package_version) < \
                StrictVersion('2.0.0'):
            return PluginV1
        elif StrictVersion('2.0.0') <= StrictVersion(package_version):
            return PluginV2

        raise error.BadDataException(
            'Plugin {0}=={1} has unsupported package version {2}'.format(
                name, version, package_version))

    @classmethod
    def make_obj_by_file(cls, file_path):
        """Finds the appropriate plugin class
        by plugin file extension.

        :param str file_path: plugin path
        :returns: plugin class
        :raises: error.BadDataException on unsupported package format
        """
        _, ext = os.path.splitext(file_path)

        if ext == '.fp':
            return PluginV1
        elif ext == '.rpm':
            return PluginV2

        raise error.BadDataException(
            'Plugin {0} has unsupported format {1}'.format(
                file_path, ext))

    @classmethod
    def update_or_create(cls, metadata, force=False):
        """Tries to update an existing plugin or create a new one.

        :param dict metadata: plugin information
        :param bool force: updates an existing plugin even if
                           it is not updatable
        """
        # Try to update the plugin
        plugin_for_update = cls.get_plugin_for_update(metadata)
        if plugin_for_update:
            url = cls.class_instance_path.format(id=plugin_for_update['id'])
            resp = cls.connection.put_request(url, metadata)
            return resp

        # If the plugin is not updatable, create a new
        # instance in Nailgun
        resp_raw = cls.connection.post_request_raw(
            cls.class_api_path, metadata)
        resp = resp_raw.json()

        if resp_raw.status_code == 409 and force:
            # Replace the plugin information
            message = json.loads(resp['message'])
            url = cls.class_instance_path.format(id=message['id'])
            resp = cls.connection.put_request(url, metadata)
        elif resp_raw.status_code == 409:
            error.exit_with_error(
                "Nothing to do: %(title)s, version "
                "%(package_version)s, does not update "
                "installed plugin." % metadata)
        else:
            resp_raw.raise_for_status()

        return resp

    @classmethod
    def get_plugin_for_update(cls, metadata):
        """Retrieves a plugin which can be updated

        :param dict metadata: plugin metadata
        :returns: dict with the plugin which can be updated, or None
        """
        if not cls.is_updatable(metadata['package_version']):
            return

        plugins = [p for p in cls.get_all_data()
                   if (p['name'] == metadata['name'] and
                       cls.is_updatable(p['package_version']) and
                       utils.major_plugin_version(metadata['version']) ==
                       utils.major_plugin_version(p['version']))]

        plugin = None
        if plugins:
            # The list should contain only one plugin, but just
            # in case, make sure we pick the plugin with the
            # highest version
            plugin = sorted(
                plugins,
                key=lambda p: StrictVersion(p['version']))[-1]

        return plugin

    @classmethod
    def is_updatable(cls, package_version):
        """Checks if the plugin's package version supports updates.

        :param str package_version: package version of the plugin
        :returns: True if the plugin can be updated,
                  False otherwise
        """
        return StrictVersion('2.0.0') <= StrictVersion(package_version)

    @classmethod
    def get_plugin(cls, name, version):
        """Returns the plugin fetched by name and version.

        :param str name: plugin name
        :param str version: plugin version
        :returns: dictionary with plugin data
        :raises: error.BadDataException if no plugin was found
        """
        plugins = [p for p in cls.get_all_data()
                   if (p['name'], p['version']) == (name, version)]

        if not plugins:
            raise error.BadDataException(
                'Plugin "{name}" with version {version} does '
                'not exist'.format(name=name, version=version))

        return plugins[0]
|
|
|
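The plugin lookup above depends on version-aware ordering: a plain string sort would rank "1.10.0" below "1.2.0". A minimal sketch of the same idea, using a hypothetical integer-tuple key instead of the (now deprecated) `distutils` `StrictVersion`:

```python
def version_key(version):
    """Split 'X.Y.Z' into an integer tuple so comparison is numeric."""
    return tuple(int(part) for part in version.split('.'))


def newest_plugin(plugins):
    """Return the plugin dict with the highest version, or None."""
    if not plugins:
        return None
    return sorted(plugins, key=lambda p: version_key(p['version']))[-1]


# Lexicographic string comparison would pick '1.2.0' as "newer" here;
# numeric tuples correctly pick '1.10.0'.
plugins = [{'name': 'example', 'version': '1.2.0'},
           {'name': 'example', 'version': '1.10.0'}]
```

The names `version_key` and `newest_plugin` are illustrative, not part of the fuelclient API.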
@@ -1,53 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.objects.base import BaseObject


class Release(BaseObject):

    class_api_path = "releases/"
    instance_api_path = "releases/{0}/"
    networks_path = 'releases/{0}/networks'
    attributes_metadata_path = 'releases/{0}/attributes_metadata'
    deployment_tasks_path = 'releases/{0}/deployment_tasks'
    components_path = 'releases/{0}/components'

    def get_networks(self):
        url = self.networks_path.format(self.id)
        return self.connection.get_request(url)

    def update_networks(self, data):
        url = self.networks_path.format(self.id)
        return self.connection.put_request(url, data)

    def update_attributes_metadata(self, data):
        url = self.attributes_metadata_path.format(self.id)
        self.connection.put_request(url, data)

    def get_attributes_metadata(self):
        url = self.attributes_metadata_path.format(self.id)
        return self.connection.get_request(url)

    def get_deployment_tasks(self):
        url = self.deployment_tasks_path.format(self.id)
        return self.connection.get_request(url)

    def update_deployment_tasks(self, data):
        url = self.deployment_tasks_path.format(self.id)
        return self.connection.put_request(url, data)

    def get_components(self):
        url = self.components_path.format(self.id)
        return self.connection.get_request(url)
@@ -1,59 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from fuelclient.objects.base import BaseObject


class Role(BaseObject):

    instance_api_path = "{owner_type}/{owner_id}/roles/"
    class_api_path = "{owner_type}/{owner_id}/roles/{role_name}/"

    def __init__(self, owner_type, owner_id, **kwargs):
        super(Role, self).__init__(owner_id, **kwargs)
        self.owner_type = owner_type

    def get_all(self):
        return self.connection.get_request(
            self.instance_api_path.format(
                owner_type=self.owner_type,
                owner_id=self.id))

    def get_role(self, role_name):
        return self.connection.get_request(
            self.class_api_path.format(
                owner_type=self.owner_type,
                owner_id=self.id,
                role_name=role_name))

    def update_role(self, role_name, data):
        return self.connection.put_request(
            self.class_api_path.format(
                owner_type=self.owner_type,
                owner_id=self.id,
                role_name=role_name),
            data)

    def create_role(self, data):
        return self.connection.post_request(
            self.instance_api_path.format(
                owner_type=self.owner_type, owner_id=self.id), data)

    def delete_role(self, role_name):
        return self.connection.delete_request(
            self.class_api_path.format(
                owner_type=self.owner_type,
                owner_id=self.id,
                role_name=role_name))
@@ -1,21 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.objects.base import BaseObject


class Sequence(BaseObject):

    class_api_path = "sequences/"
    instance_api_path = "sequences/{0}/"
@@ -1,56 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from fuelclient.objects.base import BaseObject


class Tag(BaseObject):

    instance_api_path = "{owner_type}/{owner_id}/tags/"
    class_api_path = "{owner_type}/{owner_id}/tags/{tag_name}/"

    def __init__(self, owner_type, owner_id, **kwargs):
        super(Tag, self).__init__(owner_id, **kwargs)
        self.owner_type = owner_type

    def get_all(self):
        return self.connection.get_request(
            self.instance_api_path.format(owner_type=self.owner_type,
                                          owner_id=self.id))

    def get_tag(self, tag_name):
        return self.connection.get_request(
            self.class_api_path.format(owner_type=self.owner_type,
                                       owner_id=self.id,
                                       tag_name=tag_name))

    def update_tag(self, tag_name, data):
        return self.connection.put_request(
            self.class_api_path.format(owner_type=self.owner_type,
                                       owner_id=self.id,
                                       tag_name=tag_name),
            data)

    def create_tag(self, data):
        return self.connection.post_request(
            self.instance_api_path.format(owner_type=self.owner_type,
                                          owner_id=self.id),
            data)

    def delete_tag(self, tag_name):
        return self.connection.delete_request(
            self.class_api_path.format(owner_type=self.owner_type,
                                       owner_id=self.id,
                                       tag_name=tag_name))
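Role and Tag above build their endpoints from the same owner-scoped path templates, filled in with `str.format`. A tiny standalone illustration of that templating (the template value is copied from the Tag class; the `build_url` helper is hypothetical):

```python
# Owner-scoped template, as used by the Tag object above.
class_api_path = "{owner_type}/{owner_id}/tags/{tag_name}/"


def build_url(owner_type, owner_id, tag_name):
    """Fill the owner-scoped template the way Role/Tag objects do."""
    return class_api_path.format(owner_type=owner_type,
                                 owner_id=owner_id,
                                 tag_name=tag_name)
```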
@@ -1,133 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from operator import methodcaller
from time import sleep

from fuelclient.cli.error import DeployProgressError
from fuelclient.objects.base import BaseObject


class Task(BaseObject):

    class_api_path = "transactions/"
    instance_api_path = "transactions/{0}/"
    info_types_url_map = {
        'deployment_info': 'deployment_info',
        'cluster_settings': 'settings',
        'network_configuration': 'network_configuration'}

    def delete(self, force=False):
        return self.connection.delete_request(
            "transactions/{0}/?force={1}".format(
                self.id,
                int(force),
            ))

    @property
    def progress(self):
        return self.get_fresh_data()["progress"]

    @property
    def status(self):
        return self.get_fresh_data()["status"]

    @property
    def is_finished(self):
        return self.status in ("ready", "error")

    def wait(self):
        while not self.is_finished:
            sleep(0.5)

    def deployment_info(self):
        return self.connection.get_request(
            self._get_additional_info_url('deployment_info'))

    def network_configuration(self):
        return self.connection.get_request(
            self._get_additional_info_url('network_configuration'))

    def cluster_settings(self):
        return self.connection.get_request(
            self._get_additional_info_url('cluster_settings'))

    def _get_additional_info_url(self, info_type):
        """Generate additional info url.

        :param info_type: one of deployment_info, cluster_settings,
                          network_configuration
        :type info_type: str
        :return: url
        :rtype: str
        """
        return self.instance_api_path.format(self.id) + \
            self.info_types_url_map[info_type]


class DeployTask(Task):

    def __init__(self, obj_id, env_id):
        from fuelclient.objects.environment import Environment
        super(DeployTask, self).__init__(obj_id)
        self.env = Environment(env_id)
        self.nodes = self.env.get_all_nodes()

    @classmethod
    def init_with_data(cls, data):
        return cls(data["id"], data["cluster"])

    @property
    def not_finished_nodes(self):
        return filter(
            lambda n: not n.is_finished(latest=False),
            self.nodes
        )

    @property
    def is_finished(self):
        return super(DeployTask, self).is_finished and all(
            map(
                methodcaller("is_finished"),
                self.not_finished_nodes
            )
        )

    def __iter__(self):
        return self

    def next(self):
        if not self.is_finished:
            sleep(1)
            deploy_task_data = self.get_fresh_data()
            if deploy_task_data["status"] == "error":
                raise DeployProgressError(deploy_task_data["message"])
            for node in self.not_finished_nodes:
                node.update()
            return self.progress, self.nodes
        else:
            raise StopIteration


class SnapshotTask(Task):

    @classmethod
    def start_snapshot_task(cls, conf):
        dump_task = cls.connection.put_request("logs/package", conf)
        return cls(dump_task["id"])

    @classmethod
    def get_default_config(cls):
        return cls.connection.get_request("logs/package/config/default/")
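`Task.wait` above blocks by re-fetching the task status until it reaches a terminal state ("ready" or "error"). A generic sketch of that polling pattern, with an injectable status source and sleep function so it can be exercised without a server (all names here are illustrative, not fuelclient API):

```python
import time


def wait_until_finished(get_status, terminal=("ready", "error"),
                        timeout=60.0, interval=0.5, sleep=time.sleep):
    """Poll get_status() until it returns a terminal status; return it.

    Raises TimeoutError if no terminal status appears within `timeout`
    seconds. `sleep` is injectable so tests can skip real waiting.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(
                "task did not finish within {0:.1f}s".format(timeout))
        sleep(interval)
```

Injecting `sleep=lambda s: None` in tests keeps the loop structure identical while making polling instantaneous.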
@@ -1,99 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import cProfile
import os
from pstats import Stats
import time

from fuelclient.cli import error
from fuelclient import fuelclient_settings


def profiling_enabled():
    settings = fuelclient_settings.get_settings()
    return bool(settings.PERFORMANCE_PROFILING_TESTS)


class Profiler(object):
    """Runs profiler and saves results."""

    def __init__(self, method='', handler_name=''):
        self.method = method
        self.handler_name = handler_name
        settings = fuelclient_settings.get_settings()
        self.paths = settings.PERF_TESTS_PATHS

        if not os.path.exists(self.paths['last_performance_test']):
            os.makedirs(self.paths['last_performance_test'])

        self.profiler = cProfile.Profile()
        self.profiler.enable()
        self.start = time.time()

    def save_data(self):
        try:
            import gprof2dot
            import pyprof2calltree
        except ImportError:
            msg = ('Unable to start profiling.\nPlease either '
                   'disable performance profiling in settings.yaml or '
                   'install all modules listed in test-requirements.txt.')
            raise error.ProfilingError(msg)

        self.profiler.disable()
        elapsed = time.time() - self.start
        pref_filename = os.path.join(
            self.paths['last_performance_test'],
            '{method:s}.{handler_name:s}.{elapsed_time:.0f}ms.{t_time}.'.
            format(
                method=self.method,
                handler_name=self.handler_name or 'root',
                elapsed_time=elapsed * 1000.0,
                t_time=time.time()))
        tree_file = pref_filename + 'prof'
        stats_file = pref_filename + 'txt'
        callgraph_file = pref_filename + 'dot'

        # write pstats
        with open(stats_file, 'w') as file_o:
            stats = Stats(self.profiler, stream=file_o)
            stats.sort_stats('time', 'cumulative').print_stats()

        # write callgraph in dot format
        parser = gprof2dot.PstatsParser(self.profiler)

        def get_function_name(args):
            filename, line, name = args
            module = os.path.splitext(filename)[0]
            module_pieces = module.split(os.path.sep)
            return "{module:s}:{line:d}:{name:s}".format(
                module="/".join(module_pieces[-4:]),
                line=line,
                name=name)

        parser.get_function_name = get_function_name
        gprof = parser.parse()

        with open(callgraph_file, 'w') as file_o:
            dot = gprof2dot.DotWriter(file_o)
            theme = gprof2dot.TEMPERATURE_COLORMAP
            dot.graph(gprof, theme)

        # write calltree
        call_tree = pyprof2calltree.CalltreeConverter(stats)
        with open(tree_file, 'wb') as file_o:
            call_tree.output(file_o)
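The Profiler class above wraps `cProfile` and writes pstats output to files. The core enable/run/disable/report flow can be sketched standalone, without the gprof2dot/pyprof2calltree dependencies (`profile_call` is an illustrative helper, not fuelclient code):

```python
import cProfile
import io
import pstats


def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, stats report text)."""
    profiler = cProfile.Profile()
    profiler.enable()
    try:
        result = func(*args, **kwargs)
    finally:
        # Always stop profiling, even if func raises.
        profiler.disable()
    stream = io.StringIO()
    # Same sort order the Profiler class uses for its pstats dump.
    stats = pstats.Stats(profiler, stream=stream)
    stats.sort_stats('time', 'cumulative').print_stats()
    return result, stream.getvalue()
```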
@@ -1,218 +0,0 @@
# Copyright 2013-2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import logging
import os
import re
import shutil
import subprocess
import sys
import tempfile
import time

from fuelclient import consts
from fuelclient.objects import Release

from oslotest import base as oslo_base

logging.basicConfig(stream=sys.stderr)
log = logging.getLogger("CliTest.ExecutionLog")
log.setLevel(logging.DEBUG)


class CliExecutionResult(object):
    def __init__(self, process_handle, out, err):
        self.return_code = process_handle.returncode
        self.stdout = out
        self.stderr = err

    @property
    def has_errors(self):
        return self.return_code != 0

    @property
    def is_return_code_zero(self):
        return self.return_code == 0


class BaseTestCase(oslo_base.BaseTestCase):

    handler = ''
    nailgun_root = os.environ.get('NAILGUN_ROOT', '/tmp/fuel_web/nailgun')
    fuel_web_root = os.environ.get('FUEL_WEB_ROOT', '/tmp/fuel_web')

    def setUp(self):
        super(BaseTestCase, self).setUp()

        self.reload_nailgun_server()
        self.temp_directory = tempfile.mkdtemp()

        self.addCleanup(shutil.rmtree, self.temp_directory)

    @staticmethod
    def run_command(*args, **kwargs):
        handle = subprocess.Popen(
            [" ".join(args)],
            stdout=kwargs.pop('stdout', subprocess.PIPE),
            stderr=kwargs.pop('stderr', subprocess.PIPE),
            shell=kwargs.pop('shell', True),
            **kwargs
        )
        log.debug("Running " + " ".join(args))
        out, err = handle.communicate()
        log.debug("Finished command with {0} - {1}".format(out, err))

    def upload_command(self, cmd):
        return "{0} --upload --dir {1}".format(cmd, self.temp_directory)

    def download_command(self, cmd):
        return "{0} --download --dir {1}".format(cmd, self.temp_directory)

    @classmethod
    def reload_nailgun_server(cls):
        for action in ("dropdb", "syncdb", "loaddefault"):
            cmd = 'tox -evenv -- {0}/manage.py {1}'.format(
                cls.nailgun_root, action)
            cls.run_command(cmd, cwd=cls.fuel_web_root)

    @classmethod
    def load_data_to_nailgun_server(cls):
        file_path = os.path.join(cls.nailgun_root,
                                 'nailgun/fixtures/sample_environment.json')
        cmd = 'tox -evenv -- {0}/manage.py loaddata {1}'.format(
            cls.nailgun_root, file_path)
        cls.run_command(cmd, cwd=cls.fuel_web_root)

    def run_cli_command(self, command_line, handler=None,
                        check_errors=True, env=os.environ.copy()):

        command_args = [" ".join((handler or self.handler, command_line))]
        process_handle = subprocess.Popen(
            command_args,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            shell=True,
            env=env
        )
        out, err = process_handle.communicate()
        result = CliExecutionResult(process_handle, out, err)
        log.debug("command_args: '%s', stdout: '%s', stderr: '%s'",
                  command_args[0], out, err)
        if check_errors:
            if not result.is_return_code_zero or result.has_errors:
                self.fail(err)
        return result

    def get_first_deployable_release_id(self):
        releases = sorted(Release.get_all_data(),
                          key=lambda x: x['id'])
        for r in releases:
            if r['is_deployable']:
                return r['id']
        self.fail("There are no deployable releases.")

    def run_cli_commands(self, command_lines, **kwargs):
        for command in command_lines:
            self.run_cli_command(command, **kwargs)

    def check_if_required(self, command):
        call = self.run_cli_command(command, check_errors=False)
        # should not work without env id
        self.assertIn("required", call.stderr)

    def check_for_stdout(self, command, msg, check_errors=True):
        call = self.run_cli_command(command, check_errors=check_errors)
        self.assertEqual(call.stdout, msg)

    def check_for_stdout_by_regexp(self, command, pattern, check_errors=True):
        call = self.run_cli_command(command, check_errors=check_errors)
        result = re.search(pattern, call.stdout)
        self.assertIsNotNone(result)
        return result

    def check_for_stderr(self, command, msg, check_errors=True):
        call = self.run_cli_command(command, check_errors=check_errors)
        self.assertIn(msg, call.stderr)

    def check_all_in_msg(self, command, substrings, **kwargs):
        output = self.run_cli_command(command, **kwargs)
        for substring in substrings:
            self.assertIn(substring, output.stdout)

    def check_for_rows_in_table(self, command):
        output = self.run_cli_command(command)
        message = output.stdout.split("\n")
        # no env
        self.assertEqual(message[2], '')

    def check_number_of_rows_in_table(self, command, number_of_rows):
        output = self.run_cli_command(command)
        self.assertEqual(len(output.stdout.split("\n")), number_of_rows + 3)

    def _get_task_info(self, task_id):
        """Get info about task with given ID.

        :param task_id: Task ID
        :type task_id: str or int
        :return: Task info
        :rtype: dict
        """
        return {}

    def wait_task_ready(self, task_id, timeout=60, interval=3):
        """Wait for changing task status to 'ready'.

        :param task_id: Task ID
        :type task_id: str or int
        :param timeout: Max time of waiting, in seconds
        :type timeout: int
        :param interval: Interval of getting task info, in seconds
        :type interval: int
        """
        wait_until_in_statuses = (consts.TASK_STATUSES.running,
                                  consts.TASK_STATUSES.pending)
        timer = time.time()
        while True:
            task = self._get_task_info(task_id)
            status = task.get('status', '')
            if status not in wait_until_in_statuses:
                self.assertEqual(status, consts.TASK_STATUSES.ready)
                break

            if time.time() - timer > timeout:
                raise Exception(
                    "Task '{0}' seems to be hung".format(task['name'])
                )
            time.sleep(interval)


class CLIv1TestCase(BaseTestCase):

    handler = 'fuel'

    def _get_task_info(self, task_id):
        command = "task show -f json {0}".format(str(task_id))
        call = self.run_cli_command(command, handler='fuel2')
        return json.loads(call.stdout)


class CLIv2TestCase(BaseTestCase):

    handler = 'fuel2'

    def _get_task_info(self, task_id):
        command = "task show -f json {0}".format(str(task_id))
        call = self.run_cli_command(command)
        return json.loads(call.stdout)
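The `run_cli_command` helper above shells out and wraps the return code and both output streams in a result object. A condensed sketch of the same pattern using `subprocess` directly (the `Result` class and `run_shell` function are illustrative, and the example assumes a POSIX shell with `echo` available):

```python
import subprocess


class Result(object):
    """Holds what a shelled-out command produced."""
    def __init__(self, return_code, stdout, stderr):
        self.return_code = return_code
        self.stdout = stdout
        self.stderr = stderr


def run_shell(command):
    """Run a shell command and capture its exit code and both streams."""
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return Result(proc.returncode, out.decode(), err.decode())
```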
@@ -1,516 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2013-2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import tempfile

from fuelclient.tests.functional import base


class TestHandlers(base.CLIv1TestCase):

    def test_env_action(self):
        # check env help
        help_msgs = ["usage: fuel environment [-h]",
                     "[--list | --set | --delete | --create]",
                     "optional arguments:", "--help", "--list", "--set",
                     "--delete", "--rel", "--env-create",
                     "--create", "--name", "--env-name", "--nst",
                     "--net-segment-type"]
        self.check_all_in_msg("env --help", help_msgs)
        # no clusters
        self.check_for_rows_in_table("env")

        for action in ("set", "create", "delete"):
            self.check_if_required("env {0}".format(action))

        release_id = self.get_first_deployable_release_id()

        # list of tuples (<fuel CLI command>, <expected output of a command>)
        expected_stdout = \
            [(
                "env --create --name=TestEnv --release={0}".format(release_id),
                "Environment 'TestEnv' with id=1 was created!\n"
            ), (
                "--env-id=1 env set --name=NewEnv",
                ("Following attributes are changed for "
                 "the environment: name=NewEnv\n")
            )]

        for cmd, msg in expected_stdout:
            self.check_for_stdout(cmd, msg)

    def test_node_action(self):
        help_msg = ["fuel node [-h] [--env ENV]",
                    "[--list | --set | --delete | --attributes |"
                    " --network | --disk | --deploy |"
                    " --hostname HOSTNAME | --name NAME |"
                    " --delete-from-db | --provision]", "-h", "--help", " -s",
                    "--default", " -d", "--download", " -u",
                    "--upload", "--dir", "--node", "--node-id", " -r",
                    "--role", "--net", "--hostname", "--name"]
        self.check_all_in_msg("node --help", help_msg)

        self.check_for_rows_in_table("node")

        for action in ("set", "remove", "--network", "--disk"):
            self.check_if_required("node {0}".format(action))

        self.load_data_to_nailgun_server()
        self.check_number_of_rows_in_table("node --node 9f:b6,9d:24,ab:aa", 3)

    def test_selected_node_provision(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.run_cli_commands((
            "env create --name=NewEnv --release={0}".format(release_id),
            "--env-id=1 node set --node 1 --role=controller"
        ))
        cmd = "--env-id=1 node --provision --node=1"
        msg = "Started provisioning nodes [1].\n"

        self.check_for_stdout(cmd, msg)

    def test_help_works_without_connection(self):
        fake_config = 'SERVER_ADDRESS: "333.333.333.333"'

        c_handle, c_path = tempfile.mkstemp(suffix='.json', text=True)
        with open(c_path, 'w') as f:
            f.write(fake_config)

        env = os.environ.copy()
        env['FUELCLIENT_CUSTOM_SETTINGS'] = c_path

        try:
            result = self.run_cli_command("--help", env=env)
            self.assertEqual(result.return_code, 0)
        finally:
            os.remove(c_path)

    def test_error_when_destroying_online_node(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.run_cli_commands((
            "env create --name=NewEnv --release={0}".format(release_id),
            "--env-id=1 node set --node 1 --role=controller"
        ), check_errors=False)
        msg = ("Nodes with ids [1] cannot be deleted from cluster because "
               "they are online. You might want to use the --force option.\n")
        self.check_for_stderr(
            "node --node 1 --delete-from-db",
            msg,
            check_errors=False
        )

    def test_force_destroy_online_node(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.run_cli_commands((
            "env create --name=NewEnv --release={0}".format(release_id),
            "--env-id=1 node set --node 1 --role=controller"
        ))
        msg = ("Nodes with ids [1] have been deleted from Fuel db.\n")
        self.check_for_stdout(
            "node --node 1 --delete-from-db --force",
            msg
        )

    def test_destroy_offline_node(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        node_id = 4
        self.run_cli_commands((
            "env create --name=NewEnv --release={0}".format(release_id),
            "--env-id=1 node set --node {0} --role=controller".format(node_id)
        ))
        msg = ("Nodes with ids [{0}] have been deleted from Fuel db.\n".format(
            node_id))
        self.check_for_stdout(
            "node --node {0} --delete-from-db".format(node_id),
            msg
        )

    def test_node_change_hostname(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.run_cli_commands((
            "env create --name=NewEnv --release={0}".format(release_id),
            "--env-id=1 node set --node 2 --role=controller"
        ))
        msg = "Hostname for node with id 2 has been changed to test-name.\n"
        self.check_for_stdout(
            "node --node 2 --hostname test-name",
            msg
        )

    def test_env_create_neutron_tun(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.check_for_stdout(
            "env create --name=NewEnv --release={0} --nst=tun"
            .format(release_id),
            "Environment 'NewEnv' with id=1 was created!\n")

    def test_destroy_multiple_nodes(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.run_cli_commands((
            "env create --name=NewEnv --release={0}".format(release_id),
            "--env-id=1 node set --node 1 2 --role=controller"
        ))
        msg = ("Nodes with ids [1, 2] have been deleted from Fuel db.\n")
        self.check_for_stdout(
            "node --node 1 2 --delete-from-db --force",
            msg
        )

    def test_for_examples_in_action_help(self):
        actions = (
            "node", "stop", "deployment", "reset", "network",
            "settings", "provisioning", "environment", "deploy-changes",
            "role", "release", "snapshot", "health", "vip"
        )
        for action in actions:
            self.check_all_in_msg("{0} -h".format(action), ("Examples",))

    def test_get_release_list_without_errors(self):
        cmd = 'release --list'
        self.run_cli_command(cmd)

    def test_reassign_node_group(self):
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.run_cli_commands((
            "env create --name=NewEnv --release={0} --nst=gre"
            .format(release_id),
            "--env-id=1 node set --node 1 2 --role=controller",
            "nodegroup --create --env 1 --name 'new group'"
        ))
        msg = ['PUT http://127.0.0.1',
               '/api/v1/nodes/ data=',
               '"id": 1',
               '"group_id": 2']
        self.check_all_in_msg(
            "nodegroup --assign --group 2 --node 1 --debug",
            msg
        )

    def test_node_group_creation_prints_warning_w_seg_type_vlan(self):
        warn_msg = ("WARNING: In VLAN segmentation type, there will be no
|
|
||||||
"connectivity over private network between instances "
|
|
||||||
"running on hypervisors in different segments and that "
|
|
||||||
"it's a user's responsibility to handle this "
|
|
||||||
"situation.")
|
|
||||||
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=NewEnv --release={0} --nst=vlan"
|
|
||||||
.format(release_id),
|
|
||||||
|
|
||||||
))
|
|
||||||
self.check_for_stderr(
|
|
||||||
"nodegroup create --name tor1 --env 1",
|
|
||||||
warn_msg,
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_create_network_group_fails_w_duplicate_name(self):
|
|
||||||
err_msg = ("(Network with name storage already exists "
|
|
||||||
"in node group default)\n")
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=NewEnv --release={0} --nst=gre"
|
|
||||||
.format(release_id),
|
|
||||||
))
|
|
||||||
self.check_for_stderr(
|
|
||||||
("network-group --create --name storage --node-group 1 "
|
|
||||||
"--vlan 10 --cidr 10.0.0.0/24"),
|
|
||||||
err_msg,
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_create_network_group_fails_w_invalid_group(self):
|
|
||||||
err_msg = "(Node group with ID 997755 does not exist)\n"
|
|
||||||
|
|
||||||
self.check_for_stderr(
|
|
||||||
("network-group --create --name test --node-group 997755 "
|
|
||||||
"--vlan 10 --cidr 10.0.0.0/24"),
|
|
||||||
err_msg,
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class TestCharset(base.CLIv1TestCase):
|
|
||||||
|
|
||||||
def test_charset_problem(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=привет --release={0}".format(release_id),
|
|
||||||
"--env-id=1 node set --node 1 --role=controller",
|
|
||||||
"env"
|
|
||||||
))
|
|
||||||
|
|
||||||
|
|
||||||
class TestFiles(base.CLIv1TestCase):
|
|
||||||
|
|
||||||
def test_file_creation(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=NewEnv --release={0}".format(release_id),
|
|
||||||
"--env-id=1 node set --node 1 --role=controller",
|
|
||||||
"--env-id=1 node set --node 2,3 --role=compute"
|
|
||||||
))
|
|
||||||
for action in ("network", "settings"):
|
|
||||||
for format_ in ("yaml", "json"):
|
|
||||||
self.check_if_files_created(
|
|
||||||
"--env 1 {0} --download --{1}".format(action, format_),
|
|
||||||
("{0}_1.{1}".format(action, format_),)
|
|
||||||
)
|
|
||||||
command_to_files_map = (
|
|
||||||
(
|
|
||||||
"--env 1 deployment --default",
|
|
||||||
(
|
|
||||||
"deployment_1",
|
|
||||||
"deployment_1/1.yaml",
|
|
||||||
"deployment_1/2.yaml",
|
|
||||||
"deployment_1/3.yaml"
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"--env 1 provisioning --default",
|
|
||||||
(
|
|
||||||
"provisioning_1",
|
|
||||||
"provisioning_1/engine.yaml",
|
|
||||||
"provisioning_1/node-1.yaml",
|
|
||||||
"provisioning_1/node-2.yaml",
|
|
||||||
"provisioning_1/node-3.yaml"
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"--env 1 deployment --default --json",
|
|
||||||
(
|
|
||||||
"deployment_1/1.json",
|
|
||||||
"deployment_1/2.json",
|
|
||||||
"deployment_1/3.json"
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"--env 1 provisioning --default --json",
|
|
||||||
(
|
|
||||||
"provisioning_1/engine.json",
|
|
||||||
"provisioning_1/node-1.json",
|
|
||||||
"provisioning_1/node-2.json",
|
|
||||||
"provisioning_1/node-3.json"
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"node --node 1 --disk --default",
|
|
||||||
(
|
|
||||||
"node_1",
|
|
||||||
"node_1/disks.yaml"
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"node --node 1 --network --default",
|
|
||||||
(
|
|
||||||
"node_1",
|
|
||||||
"node_1/interfaces.yaml"
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"node --node 1 --disk --default --json",
|
|
||||||
(
|
|
||||||
"node_1/disks.json",
|
|
||||||
)
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"node --node 1 --network --default --json",
|
|
||||||
(
|
|
||||||
"node_1/interfaces.json",
|
|
||||||
)
|
|
||||||
)
|
|
||||||
)
|
|
||||||
for command, files in command_to_files_map:
|
|
||||||
self.check_if_files_created(command, files)
|
|
||||||
|
|
||||||
def check_if_files_created(self, command, paths):
|
|
||||||
command_in_dir = "{0} --dir={1}".format(command, self.temp_directory)
|
|
||||||
self.run_cli_command(command_in_dir)
|
|
||||||
for path in paths:
|
|
||||||
self.assertTrue(os.path.exists(
|
|
||||||
os.path.join(self.temp_directory, path)
|
|
||||||
))
|
|
||||||
|
|
||||||
|
|
||||||
class TestDownloadUploadNodeAttributes(base.CLIv1TestCase):
|
|
||||||
|
|
||||||
def test_upload_download_interfaces(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
env_create = "env create --name=test --release={0}".format(release_id)
|
|
||||||
add_node = "--env-id=1 node set --node 1 --role=controller"
|
|
||||||
|
|
||||||
cmd = "node --node-id 1 --network"
|
|
||||||
self.run_cli_commands((env_create,
|
|
||||||
add_node,
|
|
||||||
self.download_command(cmd),
|
|
||||||
self.upload_command(cmd)))
|
|
||||||
|
|
||||||
def test_upload_download_disks(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
cmd = "node --node-id 1 --disk"
|
|
||||||
self.run_cli_commands((self.download_command(cmd),
|
|
||||||
self.upload_command(cmd)))
|
|
||||||
|
|
||||||
|
|
||||||
class TestDeployChanges(base.CLIv1TestCase):
|
|
||||||
|
|
||||||
cmd_create_env = "env create --name=test --release={0}"
|
|
||||||
cmd_add_node = "--env-id=1 node set --node 1 --role=controller"
|
|
||||||
cmd_deploy_changes = "deploy-changes --env 1"
|
|
||||||
cmd_redeploy_changes = "redeploy-changes --env 1"
|
|
||||||
|
|
||||||
pattern_success = (r"^Deployment task with id (\d{1,}) "
|
|
||||||
r"for the environment 1 has been started.\n$")
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestDeployChanges, self).setUp()
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.cmd_create_env = self.cmd_create_env.format(release_id)
|
|
||||||
self.run_cli_commands((
|
|
||||||
self.cmd_create_env,
|
|
||||||
self.cmd_add_node
|
|
||||||
))
|
|
||||||
|
|
||||||
def test_deploy_changes(self):
|
|
||||||
self.check_for_stdout_by_regexp(self.cmd_deploy_changes,
|
|
||||||
self.pattern_success)
|
|
||||||
|
|
||||||
def test_redeploy_changes(self):
|
|
||||||
result = self.check_for_stdout_by_regexp(self.cmd_deploy_changes,
|
|
||||||
self.pattern_success)
|
|
||||||
task_id = result.group(1)
|
|
||||||
self.wait_task_ready(task_id)
|
|
||||||
self.check_for_stdout_by_regexp(self.cmd_redeploy_changes,
|
|
||||||
self.pattern_success)
|
|
||||||
|
|
||||||
|
|
||||||
class TestDirectoryDoesntExistErrorMessages(base.CLIv1TestCase):
|
|
||||||
|
|
||||||
def test_settings_upload(self):
|
|
||||||
self.check_for_stderr(
|
|
||||||
"settings --upload --dir /foo/bar/baz --env 1",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_deployment_upload(self):
|
|
||||||
self.check_for_stderr(
|
|
||||||
"deployment --upload --dir /foo/bar/baz --env 1",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_net_upload(self):
|
|
||||||
self.check_for_stderr(
|
|
||||||
"network --upload --dir /foo/bar/baz --env 1",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_env_download(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=NewEnv --release={0}".format(release_id),
|
|
||||||
"--env-id=1 node set --node 2 --role=controller"
|
|
||||||
))
|
|
||||||
self.check_for_stderr(
|
|
||||||
"network --download --dir /foo/bar/baz --env 1",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_download_network_configuration(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=NewEnv --release={0}".format(release_id),
|
|
||||||
"--env-id=1 node set --node 2 --role=controller"
|
|
||||||
))
|
|
||||||
self.check_for_stderr(
|
|
||||||
"--env 1 network --download --dir /foo/bar/baz",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_download_default_settings(self):
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.run_cli_commands((
|
|
||||||
"env create --name=NewEnv --release={0}".format(release_id),
|
|
||||||
"--env-id=1 node set --node 2 --role=controller"
|
|
||||||
))
|
|
||||||
self.check_for_stderr(
|
|
||||||
"--env 1 settings --default --dir /foo/bar/baz",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_upload_network_configuration(self):
|
|
||||||
self.check_for_stderr(
|
|
||||||
"--env 1 network --upload --dir /foo/bar/baz",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_upload_network_template(self):
|
|
||||||
self.check_for_stderr(
|
|
||||||
"--env 1 network-template --upload --dir /foo/bar/baz",
|
|
||||||
"Directory '/foo/bar/baz' doesn't exist.\n",
|
|
||||||
check_errors=False
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class TestUploadSettings(base.CLIv1TestCase):
|
|
||||||
|
|
||||||
create_env = "env create --name=test --release={0}"
|
|
||||||
add_node = "--env-id=1 node set --node 1 --role=controller"
|
|
||||||
deploy_changes = "deploy-changes --env 1"
|
|
||||||
cmd = "settings --env 1"
|
|
||||||
cmd_force = "settings --env 1 --force"
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestUploadSettings, self).setUp()
|
|
||||||
self.load_data_to_nailgun_server()
|
|
||||||
release_id = self.get_first_deployable_release_id()
|
|
||||||
self.create_env = self.create_env.format(release_id)
|
|
||||||
self.run_cli_commands((
|
|
||||||
self.create_env,
|
|
||||||
self.add_node,
|
|
||||||
self.download_command(self.cmd)
|
|
||||||
))
|
|
||||||
|
|
||||||
def test_upload_settings(self):
|
|
||||||
msg_success = "Settings configuration uploaded.\n"
|
|
||||||
self.check_for_stdout(self.upload_command(self.cmd),
|
|
||||||
msg_success)
|
|
|
@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2013-2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.tests.functional import base


class TestDeployChanges(base.CLIv2TestCase):

    cmd_create_env = "env create -r {0} cluster-test"
    cmd_add_node = "env add nodes -e 1 -n 1 -r controller"
    cmd_deploy_changes = "env deploy 1"
    cmd_redeploy_changes = "env redeploy 1"

    pattern_success = (r"^Deployment task with id (\d{1,}) "
                       r"for the environment 1 has been started.\n$")

    def setUp(self):
        super(TestDeployChanges, self).setUp()
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.cmd_create_env = self.cmd_create_env.format(release_id)
        self.run_cli_commands((
            self.cmd_create_env,
            self.cmd_add_node
        ))

    def test_deploy_changes(self):
        self.check_for_stdout_by_regexp(self.cmd_deploy_changes,
                                        self.pattern_success)

    def test_redeploy_changes(self):
        result = self.check_for_stdout_by_regexp(self.cmd_deploy_changes,
                                                 self.pattern_success)
        task_id = result.group(1)
        self.wait_task_ready(task_id)

        self.check_for_stdout_by_regexp(self.cmd_redeploy_changes,
                                        self.pattern_success)


class TestExtensionManagement(base.CLIv2TestCase):

    cmd_create_env = "env create -r {0} cluster-test-extensions-mgmt"
    cmd_disable_exts = "env extension disable 1 --extensions volume_manager"
    cmd_enable_exts = "env extension enable 1 --extensions volume_manager"

    pattern_enable_success = (r"^The following extensions: volume_manager "
                              r"have been enabled for the environment with "
                              r"id 1.\n$")
    pattern_disable_success = (r"^The following extensions: volume_manager "
                               r"have been disabled for the environment with "
                               r"id 1.\n$")

    def setUp(self):
        super(TestExtensionManagement, self).setUp()
        self.load_data_to_nailgun_server()
        release_id = self.get_first_deployable_release_id()
        self.cmd_create_env = self.cmd_create_env.format(release_id)
        self.run_cli_commands((
            self.cmd_create_env,
        ))

    def test_disable_extensions(self):
        self.check_for_stdout_by_regexp(self.cmd_disable_exts,
                                        self.pattern_disable_success)

    def test_enable_extensions(self):
        self.check_for_stdout_by_regexp(self.cmd_enable_exts,
                                        self.pattern_enable_success)
@@ -1,145 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

import fixtures
import mock

from fuelclient.cli import error
from fuelclient import fuelclient_settings
from fuelclient.tests.unit.v1 import base


class TestSettings(base.UnitTestCase):

    def setUp(self):
        super(TestSettings, self).setUp()

        self.useFixture(fixtures.MockPatchObject(fuelclient_settings,
                                                 '_SETTINGS',
                                                 None))

    @mock.patch('os.makedirs')
    @mock.patch('shutil.copy')
    @mock.patch('os.chmod')
    @mock.patch('os.path.exists')
    def test_config_generation(self, m_exists, m_chmod, m_copy, m_makedirs):
        project_dir = os.path.dirname(fuelclient_settings.__file__)

        expected_fmode = 0o600
        expected_dmode = 0o700
        expected_default = os.path.join(project_dir,
                                        'fuel_client.yaml')
        expected_path = os.path.expanduser('~/.config/fuel/fuel_client.yaml')
        conf_home = os.path.expanduser('~/.config/')
        conf_dir = os.path.dirname(expected_path)

        m_exists.return_value = False
        f_confdir = fixtures.EnvironmentVariable('XDG_CONFIG_HOME', conf_home)
        f_settings = fixtures.EnvironmentVariable('FUELCLIENT_CUSTOM_SETTINGS')

        self.useFixture(f_confdir)
        self.useFixture(f_settings)

        fuelclient_settings.get_settings()

        m_makedirs.assert_called_once_with(conf_dir, expected_dmode)
        m_copy.assert_called_once_with(expected_default, expected_path)
        m_chmod.assert_called_once_with(expected_path, expected_fmode)

    @mock.patch('os.makedirs')
    @mock.patch('os.path.exists')
    def test_config_generation_write_error(self, m_exists, m_makedirs):
        m_exists.return_value = False
        m_makedirs.side_effect = OSError('[Errno 13] Permission denied')

        f_settings = fixtures.EnvironmentVariable('FUELCLIENT_CUSTOM_SETTINGS')
        self.useFixture(f_settings)

        self.assertRaises(error.SettingsException,
                          fuelclient_settings.get_settings)

    @mock.patch('six.print_')
    def test_deprecated_option_produces_warning(self, m_print):
        expected_warnings = [mock.call('DEPRECATION WARNING: LISTEN_PORT '
                                       'parameter was deprecated and will not '
                                       'be supported in the next version of '
                                       'python-fuelclient.', end='',
                                       file=sys.stderr),
                             mock.call(' Please replace this '
                                       'parameter with SERVER_PORT',
                                       file=sys.stderr)]

        m = mock.mock_open(read_data='LISTEN_PORT: 9000')
        with mock.patch('fuelclient.fuelclient_settings.open', m):
            fuelclient_settings.get_settings()

        m_print.assert_has_calls(expected_warnings, any_order=False)

    @mock.patch('six.print_')
    def test_both_deprecated_and_new_options_produce_warning(self, m_print):
        expected_warning = ('WARNING: configuration contains both '
                            'LISTEN_PORT and SERVER_PORT options set. Since '
                            'LISTEN_PORT was deprecated, only the value of '
                            'SERVER_PORT will be used.')

        m = mock.mock_open(read_data='LISTEN_PORT: 9000\nSERVER_PORT: 9000')
        with mock.patch('fuelclient.fuelclient_settings.open', m):
            fuelclient_settings.get_settings()

        m_print.assert_has_calls([mock.call(expected_warning,
                                            file=sys.stderr)])

    @mock.patch('six.print_')
    def test_set_deprecated_option_overwrites_unset_new_option(self, m_print):
        m = mock.mock_open(read_data='KEYSTONE_PASS: "admin"\n'
                                     'OS_PASSWORD:\n')
        with mock.patch('fuelclient.fuelclient_settings.open', m):
            settings = fuelclient_settings.get_settings()

        self.assertEqual('admin', settings.OS_PASSWORD)
        self.assertNotIn('OS_PASSWORD', settings.config)

    def test_fallback_to_deprecated_option(self):
        m = mock.mock_open(read_data='LISTEN_PORT: 9000')
        with mock.patch('fuelclient.fuelclient_settings.open', m):
            settings = fuelclient_settings.get_settings()

        self.assertEqual(9000, settings.LISTEN_PORT)

    def test_update_from_cli_params(self):
        test_config_text = ('SERVER_ADDRESS: "127.0.0.1"\n'
                            'SERVER_PORT: "8000"\n'
                            'OS_USERNAME: "admin"\n'
                            'OS_PASSWORD:\n'
                            'OS_TENANT_NAME:\n')

        test_parsed_args = mock.Mock(os_password='test_password',
                                     server_port="3000",
                                     os_username=None)
        del test_parsed_args.server_address
        del test_parsed_args.os_tenant_name

        m = mock.mock_open(read_data=test_config_text)
        with mock.patch('fuelclient.fuelclient_settings.open', m):
            settings = fuelclient_settings.get_settings()

        settings.update_from_command_line_options(test_parsed_args)

        self.assertEqual('3000', settings.SERVER_PORT)
        self.assertEqual('test_password', settings.OS_PASSWORD)
        self.assertEqual('admin', settings.OS_USERNAME)
@@ -1,127 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import os

from fuelclient import objects

from fuelclient.tests.unit.v1 import base


class TestEnvironmentObject(base.UnitTestCase):

    def setUp(self):
        super(TestEnvironmentObject, self).setUp()
        self.env_object = objects.Environment(1)

    def _setup_os_mock(self, os_mock):
        os_mock.path.exists.return_value = False
        os_mock.path.join = os.path.join
        os_mock.path.abspath = lambda x: x

    @mock.patch("fuelclient.objects.environment.os")
    def test_write_facts_to_dir_for_legacy_envs(self, os_mock):
        facts = [
            {
                "uid": "1",
                "role": "controller",
                "data": "data1"
            },
            {
                "uid": "2",
                "role": "compute",
                "data": "data2"
            },
        ]

        self._setup_os_mock(os_mock)
        serializer = mock.MagicMock()

        self.env_object.write_facts_to_dir(
            "deployment", facts, serializer=serializer
        )

        serializer.write_to_path.assert_has_calls(
            [
                mock.call("./deployment_1/controller_1", facts[0]),
                mock.call("./deployment_1/compute_2", facts[1])
            ]
        )

    @mock.patch("fuelclient.objects.environment.os")
    def test_write_facts_to_dir_for_new_envs(self, os_mock):
        facts = [
            {
                "uid": "1",
                "roles": ["controller"],
                "data": "data1"
            },
            {
                "uid": "2",
                "roles": ["compute"],
                "data": "data2"
            },
        ]

        self._setup_os_mock(os_mock)
        serializer = mock.MagicMock()

        self.env_object.write_facts_to_dir(
            "deployment", facts, serializer=serializer
        )

        serializer.write_to_path.assert_has_calls(
            [
                mock.call("./deployment_1/1", facts[0]),
                mock.call("./deployment_1/2", facts[1])
            ]
        )

    @mock.patch("fuelclient.objects.environment.os")
    def test_write_facts_to_dir_if_facts_is_dict(self, os_mock):
        facts = {
            "engine": "test_engine",
            "nodes": [
                {
                    "uid": "1",
                    "name": "node-1",
                    "roles": ["controller"],
                    "data": "data1"
                },
                {
                    "uid": "2",
                    "name": "node-2",
                    "roles": ["compute"],
                    "data": "data2"
                },
            ]
        }

        self._setup_os_mock(os_mock)
        serializer = mock.MagicMock()

        self.env_object.write_facts_to_dir(
            "deployment", facts, serializer=serializer
        )

        serializer.write_to_path.assert_has_calls(
            [
                mock.call("./deployment_1/engine", facts['engine']),
                mock.call("./deployment_1/node-1", facts['nodes'][0]),
                mock.call("./deployment_1/node-2", facts['nodes'][1])
            ]
        )
@@ -1,157 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import mock
import six
import yaml

from fuelclient.tests.unit.v1 import base

YAML_TEMPLATE = """adv_net_template:
  default:
    network_assignments:
      fuelweb_admin:
        ep: br-fw-admin
      management:
        ep: br-mgmt
      private:
        ep: br-prv
      public:
        ep: br-ex
      storage:
        ep: br-storage
    nic_mapping:
      default:
        if1: eth0
    templates_for_node_role:
      ceph-osd:
      - common
      - storage
      compute:
      - common
      - private
      - storage
      controller:
      - public
      - private
      - storage
      - common
"""

JSON_TEMPLATE = """{
    "adv_net_template": {
        "default": {
            "nic_mapping": {
                "default": {
                    "if1": "eth0"
                }
            },
            "templates_for_node_role": {
                "controller": [
                    "public",
                    "private",
                    "storage",
                    "common"
                ],
                "compute": [
                    "common",
                    "private",
                    "storage"
                ],
                "ceph-osd": [
                    "common",
                    "storage"
                ]
            },
            "network_assignments": {
                "storage": {
                    "ep": "br-storage"
                },
                "private": {
                    "ep": "br-prv"
                },
                "public": {
                    "ep": "br-ex"
                },
                "management": {
                    "ep": "br-mgmt"
                },
                "fuelweb_admin": {
                    "ep": "br-fw-admin"
                }
            }
        }
    }
}
"""


class TestNetworkTemplate(base.UnitTestCase):
    def setUp(self):
        super(TestNetworkTemplate, self).setUp()

        self.env_id = 42
        self.req_path = ('/api/v1/clusters/{0}/network_configuration/'
                         'template'.format(self.env_id))

    def test_upload_action(self):
        mput = self.m_request.put(self.req_path, json={})
        test_command = [
            'fuel', 'network-template', '--env', str(self.env_id), '--upload']

        m_open = mock.mock_open(read_data=YAML_TEMPLATE)
        with mock.patch('fuelclient.cli.serializers.open',
                        m_open,
                        create=True):
            self.execute(test_command)

        self.assertTrue(mput.called)
        self.assertEqual(mput.last_request.json(), json.loads(JSON_TEMPLATE))
        m_open().read.assert_called_once_with()

    def test_download_action(self):
        mget = self.m_request.get(self.req_path, text=JSON_TEMPLATE)

        test_command = [
            'fuel', 'network-template', '--env', str(self.env_id),
            '--download']

        m_open = mock.mock_open()
        with mock.patch('fuelclient.cli.serializers.open', m_open,
                        create=True):
            self.execute(test_command)

        self.assertTrue(mget.called)

        written_yaml = yaml.safe_load(m_open().write.mock_calls[0][1][0])
        expected_yaml = yaml.safe_load(YAML_TEMPLATE)
        self.assertEqual(written_yaml, expected_yaml)

    def test_delete_action(self):
        mdelete = self.m_request.delete(self.req_path, json={})

        cmd = ['fuel', 'network-template', '--env', str(self.env_id),
               '--delete']

        with mock.patch('sys.stdout', new=six.StringIO()) as m_out:
            self.execute(cmd)

        self.assertTrue(mdelete.called)

        msg = ("Network template configuration for environment id={0}"
               " has been deleted.".format(self.env_id))
        self.assertIn(msg, m_out.getvalue())
@@ -1,88 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from fuelclient.commands.release import ReleaseComponentList
from fuelclient.tests.unit.v1 import base


class TestReleaseComponent(base.UnitTestCase):

    def test_retrieve_predicates(self):
        predicates = ('any_of', 'all_of', 'one_of', 'none_of')
        items = {
            "items": ["fake:component:1",
                      "fake:component:2"]
        }

        for predicate in predicates:
            test_data = {predicate: items}
            real_data = ReleaseComponentList.retrieve_predicates(test_data)
            expected_data = "{} (fake:component:1, fake:component:2)".format(
                predicate)
            self.assertEqual(expected_data, real_data)

    def test_retrieve_predicates_w_wrong_predicate(self):
        test_data = {
            "bad_predicate": {
                "items": ["fake:component:1",
                          "fake:component:2"],
            }
        }

        self.assertRaisesRegexp(ValueError,
                                "Predicates not found.",
                                ReleaseComponentList.retrieve_predicates,
                                test_data)

    def test_retrieve_data(self):
        test_data = "fake:component:1"
        real_data = ReleaseComponentList.retrieve_data(test_data)
        self.assertEqual("fake:component:1", real_data)

        test_data = [{"name": "fake:component:1"}]
        real_data = ReleaseComponentList.retrieve_data(test_data)
        self.assertEqual("fake:component:1", real_data)

        test_data = [
            {
                "one_of": {
                    "items": ["fake:component:1"]
                }
            },
            {
                "any_of": {
                    "items": ["fake:component:1",
                              "fake:component:2"]
                }
            },
            {
                "all_of": {
                    "items": ["fake:component:1",
                              "fake:component:2"]
                }
            },
            {
                "none_of": {
                    "items": ["fake:component:1"]
                }
            }
        ]
        real_data = ReleaseComponentList.retrieve_data(test_data)
        expected_data = ("one_of (fake:component:1), "
                         "any_of (fake:component:1, fake:component:2), "
                         "all_of (fake:component:1, fake:component:2), "
                         "none_of (fake:component:1)")
        self.assertEqual(expected_data, real_data)
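The expectations in these tests imply how `retrieve_predicates` renders a predicate dict as a display string. A simplified re-implementation of that behavior (a hypothetical helper written from the test expectations, not fuelclient's actual code):

```python
# Predicate names taken from test_retrieve_predicates above.
PREDICATES = ('any_of', 'all_of', 'one_of', 'none_of')

def retrieve_predicates(data):
    """Render {"any_of": {"items": [a, b]}} as 'any_of (a, b)'."""
    for predicate in PREDICATES:
        if predicate in data:
            items = ', '.join(data[predicate]['items'])
            return '{} ({})'.format(predicate, items)
    # Unknown keys trigger the error message asserted in
    # test_retrieve_predicates_w_wrong_predicate.
    raise ValueError('Predicates not found.')

result = retrieve_predicates(
    {'any_of': {'items': ['fake:component:1', 'fake:component:2']}})
assert result == 'any_of (fake:component:1, fake:component:2)'
```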
@@ -1,74 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

import mock
import yaml

from fuelclient.cli import error
from fuelclient.cli.serializers import Serializer
from fuelclient.tests.unit.v1 import base


class TestSerializers(base.UnitTestCase):

    DATA = {
        'a': 1,
        'b': {
            'c': [2, 3, 4],
            'd': 'string',
        }
    }

    def test_get_from_params(self):
        params_to_formats = (
            ('yaml', 'yaml'),
            ('json', 'json'),
            ('xyz', Serializer.default_format),
        )
        for param, format in params_to_formats:
            params = mock.Mock(serialization_format=format)
            serializer = Serializer.from_params(params)
            self.assertEqual(serializer.format, format)

    def test_serialize(self):
        deserializers = {'json': json.loads, 'yaml': yaml.load}
        for format, deserialize in deserializers.items():
            serialized = Serializer(format).serialize(self.DATA)
            self.assertEqual(self.DATA, deserialize(serialized))

    def test_deserialize(self):
        serializers = {'json': json.dumps, 'yaml': yaml.safe_dump}
        for format, serialize in serializers.items():
            serialized = serialize(self.DATA)
            deserialized = Serializer(format).deserialize(serialized)
            self.assertEqual(self.DATA, deserialized)

    def test_deserialize_fail(self):

        broken_data = '{foo: bar: buzz:}'
        for format in ('json', 'yaml'):
            self.assertRaises(error.BadDataException,
                              Serializer(format).deserialize, broken_data)

    def test_write_to_path_invalid_file_exception(self):
        serializer = Serializer('json')
        mo = mock.mock_open()
        with mock.patch('__main__.open', mo, create=True) as mocked_open:
            mocked_open.side_effect = IOError()
            self.assertRaises(error.InvalidFileException,
                              serializer.write_to_path,
                              '/foo/bar/baz', self.DATA)
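`test_serialize` and `test_deserialize` check a round-trip property: dumping `DATA` in a format and loading it back with the matching loader must return the original structure. A minimal sketch of the same property for the JSON case, using the plain `json` module rather than fuelclient's `Serializer`:

```python
import json

DATA = {'a': 1, 'b': {'c': [2, 3, 4], 'd': 'string'}}

# Round-trip: serialize, then deserialize with the matching loader.
serialized = json.dumps(DATA)
restored = json.loads(serialized)
assert restored == DATA

# The broken_data string from test_deserialize_fail is invalid JSON as well:
# unquoted keys and stacked colons make the parser raise.
try:
    json.loads('{foo: bar: buzz:}')
    raised = False
except ValueError:  # json.JSONDecodeError subclasses ValueError
    raised = True
assert raised
```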
@@ -1,97 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from mock import mock_open
from mock import patch

from fuelclient.tests.unit.v1 import base


YAML_SETTINGS_DATA = """editable:
  access:
    user:
      value: test_user
"""
JSON_SETTINGS_DATA = {
    'editable': {
        'access': {
            'user': {
                'value': 'test_user'
            }
        }
    }
}


class BaseSettings(base.UnitTestCase):

    def check_upload_action(self, test_command, test_url):
        m = mock_open(read_data=YAML_SETTINGS_DATA)
        put = self.m_request.put(test_url, json={})

        with patch('six.moves.builtins.open', m, create=True):
            self.execute(test_command)

        m().read.assert_called_once_with()
        self.assertTrue(put.called)
        self.assertDictEqual(put.last_request.json(), JSON_SETTINGS_DATA)

    def check_default_action(self, test_command, test_url):
        m = mock_open()
        get = self.m_request.get(test_url, json=JSON_SETTINGS_DATA)

        with patch('six.moves.builtins.open', m, create=True):
            self.execute(test_command)

        self.assertTrue(get.called)
        m().write.assert_called_once_with(YAML_SETTINGS_DATA)

    def check_download_action(self, test_command, test_url):
        m = mock_open()
        get = self.m_request.get(test_url, json=JSON_SETTINGS_DATA)

        with patch('six.moves.builtins.open', m, create=True):
            self.execute(test_command)

        m().write.assert_called_once_with(YAML_SETTINGS_DATA)
        self.assertTrue(get.called)


class TestSettings(BaseSettings):

    def test_upload_action(self):
        self.check_upload_action(
            test_command=[
                'fuel', 'settings', '--env', '1', '--upload'],
            test_url='/api/v1/clusters/1/attributes')

    def test_upload_force_action(self):
        self.check_upload_action(
            test_command=[
                'fuel', 'settings', '--env', '1', '--upload', '--force'],
            test_url='/api/v1/clusters/1/attributes?force=1')

    def test_default_action(self):
        self.check_default_action(
            test_command=[
                'fuel', 'settings', '--env', '1', '--default'],
            test_url='/api/v1/clusters/1/attributes/defaults')

    def test_download_action(self):
        self.check_download_action(
            test_command=[
                'fuel', 'settings', '--env', '1', '--download'],
            test_url='/api/v1/clusters/1/attributes')
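`check_default_action` and `check_download_action` assert on what was written through the mocked `open()` handle. A standalone sketch of that write-side pattern with the standard library (`save_settings` is a hypothetical stand-in for the CLI code under test):

```python
from unittest import mock  # the file above imports mock_open/patch from "mock"

SETTINGS_YAML = "editable:\n  access:\n    user:\n      value: test_user\n"

# Hypothetical stand-in for the code under test, which writes a file via open().
def save_settings(path, text):
    with open(path, 'w') as f:
        f.write(text)

m = mock.mock_open()
with mock.patch('builtins.open', m, create=True):
    save_settings('/fake/settings.yaml', SETTINGS_YAML)

# The mock handle records every write; assert exactly one call, full text.
m().write.assert_called_once_with(SETTINGS_YAML)
```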
@@ -1,331 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import os
import six
import subprocess
import yaml

import mock
import requests

from fuelclient.cli import error
from fuelclient import client
from fuelclient.common import data_utils
from fuelclient.tests.unit.v1 import base
from fuelclient import utils


class TestUtils(base.UnitTestCase):

    @mock.patch('fuelclient.utils.os.walk')
    def test_iterfiles(self, mwalk):
        mwalk.return_value = [
            ('/some_directory/', [], ['valid.yaml', 'invalid.yml'])]

        pattern = '*.yaml'
        directory = '/some_directory'

        expected_result = [os.path.join(directory, 'valid.yaml')]
        files = list(utils.iterfiles(directory, pattern))

        mwalk.assert_called_once_with(directory, followlinks=True)
        self.assertEqual(expected_result, files)

    def make_process_mock(self, return_code=0):
        process_mock = mock.Mock()
        process_mock.stdout = ['Stdout line 1', 'Stdout line 2']
        process_mock.returncode = return_code

        return process_mock

    def test_exec_cmd(self):
        cmd = 'some command'

        process_mock = self.make_process_mock()
        with mock.patch.object(
                subprocess, 'Popen', return_value=process_mock) as popen_mock:
            utils.exec_cmd(cmd)

        popen_mock.assert_called_once_with(
            cmd,
            stdout=None,
            stderr=subprocess.STDOUT,
            shell=True,
            cwd=None)

    def test_exec_cmd_raises_error(self):
        cmd = 'some command'
        return_code = 1

        process_mock = self.make_process_mock(return_code=return_code)

        with mock.patch.object(
                subprocess, 'Popen', return_value=process_mock) as popen_mock:
            self.assertRaisesRegexp(
                error.ExecutedErrorNonZeroExitCode,
                'Shell command executed with "{0}" '
                'exit code: {1} '.format(return_code, cmd),
                utils.exec_cmd, cmd)

        popen_mock.assert_called_once_with(
            cmd,
            stdout=None,
            stderr=subprocess.STDOUT,
            shell=True,
            cwd=None)

    def test_exec_cmd_iterator(self):
        cmd = 'some command'

        process_mock = self.make_process_mock()
        with mock.patch.object(
                subprocess, 'Popen', return_value=process_mock) as popen_mock:
            for line in utils.exec_cmd_iterator(cmd):
                self.assertTrue(line.startswith('Stdout line '))

        popen_mock.assert_called_once_with(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            shell=True)

    def test_exec_cmd_iterator_raises_error(self):
        cmd = 'some command'
        return_code = 1

        process_mock = self.make_process_mock(return_code=return_code)
        with mock.patch.object(subprocess, 'Popen', return_value=process_mock):
            with self.assertRaisesRegexp(
                    error.ExecutedErrorNonZeroExitCode,
                    'Shell command executed with "{0}" '
                    'exit code: {1} '.format(return_code, cmd)):
                for line in utils.exec_cmd_iterator(cmd):
                    self.assertTrue(line.startswith('Stdout line '))

    def test_parse_yaml_file(self):
        mock_open = self.mock_open("key: value")

        with mock.patch('fuelclient.utils.io.open', mock_open):
            self.assertEqual(
                utils.parse_yaml_file('some_file_name'),
                {'key': 'value'})

    @mock.patch('fuelclient.utils.glob.iglob',
                return_value=['file1', 'file2'])
    @mock.patch('fuelclient.utils.parse_yaml_file',
                side_effect=['content_file1', 'content_file2'])
    def test_glob_and_parse_yaml(self, parse_mock, iglob_mock):
        path = '/tmp/path/mask*'

        content = []
        for data in utils.glob_and_parse_yaml(path):
            content.append(data)

        iglob_mock.assert_called_once_with(path)
        self.assertEqual(
            parse_mock.call_args_list,
            [mock.call('file1'),
             mock.call('file2')])

        self.assertEqual(content, ['content_file1', 'content_file2'])
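The `exec_cmd` tests above rely on `make_process_mock`: a fake `Popen` whose `stdout` and `returncode` the test controls. A self-contained sketch of that technique (`run_cmd` is a hypothetical stand-in for `utils.exec_cmd_iterator`):

```python
import subprocess
from unittest import mock

# Hypothetical stand-in for the code under test: spawns a shell command
# and yields its stdout lines.
def run_cmd(cmd):
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, shell=True)
    for line in process.stdout:
        yield line

# Fake process: stdout is just a list, so iteration yields canned lines
# without ever spawning a real shell.
process_mock = mock.Mock()
process_mock.stdout = ['Stdout line 1', 'Stdout line 2']

with mock.patch.object(subprocess, 'Popen',
                       return_value=process_mock) as popen_mock:
    lines = list(run_cmd('some command'))

popen_mock.assert_called_once_with('some command', stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE, shell=True)
assert lines == ['Stdout line 1', 'Stdout line 2']
```

Patching with `mock.patch.object(subprocess, 'Popen', ...)` keeps `subprocess.PIPE` and the rest of the module real; only process creation is intercepted.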

    def test_major_plugin_version(self):
        pairs = [
            ['1.2.3', '1.2'],
            ['123456789.123456789.12121', '123456789.123456789'],
            ['1.2', '1.2']]

        for arg, expected in pairs:
            self.assertEqual(
                utils.major_plugin_version(arg),
                expected)

    @mock.patch('fuelclient.utils.os.path.lexists', side_effect=[True, False])
    def test_file_exists(self, lexists_mock):
        self.assertTrue(utils.file_exists('file1'))
        self.assertFalse(utils.file_exists('file2'))

        self.assertEqual(
            lexists_mock.call_args_list,
            [mock.call('file1'), mock.call('file2')])

    def test_get_error_body_get_from_json(self):
        error_body = 'This is error body'

        body_json = json.dumps({
            'message': error_body
        })
        if isinstance(body_json, six.text_type):
            body_json = body_json.encode('utf-8')

        resp = requests.Response()
        resp._content = body_json

        exception = requests.HTTPError()
        exception.response = resp

        self.assertEqual(error.get_error_body(exception), error_body)

    def test_get_error_body_get_from_plaintext(self):
        error_body = b'This is error body'

        resp = requests.Response()
        resp._content = error_body

        exception = requests.HTTPError()
        exception.response = resp

        self.assertEqual(error.get_error_body(exception),
                         error_body.decode('utf-8'))

    def test_get_display_data_single(self):
        test_data = {'a': 1, 'b': [], 'c': [1, 2, 3], 'd': 4}
        fields = ('a', 'b', 'c')

        result = data_utils.get_display_data_single(fields, test_data)
        self.assertEqual([1, [], [1, 2, 3]], result)

    def test_get_display_data_bad_key(self):
        test_data = {'a': 1, 'b': 2, 'c': 3}
        fields = ('b', 'bad_key')
        self.assertEqual(
            [2, None],
            data_utils.get_display_data_single(fields, test_data)
        )

    def test_get_display_data_multi(self):
        test_data = [{'a': 1, 'b': 2, 'c': 3}, {'b': 8, 'c': 9}]
        fields = ('b', 'c')

        result = data_utils.get_display_data_multi(fields, test_data)
        self.assertEqual([[2, 3], [8, 9]], result)

    @mock.patch('sys.getfilesystemencoding', return_value='utf-8')
    def test_str_to_unicode(self, _):
        test_data = 'тест'
        expected_data = test_data if six.PY3 else u'тест'
        result = utils.str_to_unicode(test_data)
        self.assertIsInstance(result, six.text_type)
        self.assertEqual(result, expected_data)

    @mock.patch('fuelclient.utils.sys')
    def test_latin_str_to_unicode(self, sys_mock):
        sys_mock.getfilesystemencoding.return_value = 'iso-8859-16'

        test_data = 'czegoś' if six.PY3 else u'czegoś'.encode('iso-8859-16')
        expected_data = test_data if six.PY3 else u'czegoś'
        result = utils.str_to_unicode(test_data)
        self.assertIsInstance(result, six.text_type)
        self.assertEqual(result, expected_data)

    def test_HTTP_error_message(self):
        text = 'message text'

        self.m_request.post('/api/v1/address',
                            json={'message': text},
                            status_code=403)

        with self.assertRaisesRegexp(error.HTTPError,
                                     '403.*{}'.format(text)):
            client.DefaultAPIClient.post_request('address')

    def test_parse_to_list_of_dicts(self):
        items = utils.parse_to_list_of_dicts([{"id": 1}])
        self.assertEqual(items, [{"id": 1}])

        items = utils.parse_to_list_of_dicts([{"id": 2}, {"id": 3}])
        self.assertEqual(items, [{"id": 2}, {"id": 3}])

        items = utils.parse_to_list_of_dicts([[{"id": 4}]])
        self.assertEqual(items, [{"id": 4}])

        items = utils.parse_to_list_of_dicts(
            [[{"id": 5}, {"id": 6}], {"id": 7}])
        self.assertEqual(items, [{"id": 5}, {"id": 6}, {"id": 7}])

        self.assertRaisesRegexp(
            TypeError, 'A dict or list instance expected',
            utils.parse_to_list_of_dicts, [42])
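The two `get_error_body` tests above pin down a common pattern: prefer the `message` field of a JSON error body, and fall back to the raw text when the body is plain text. A simplified re-implementation of that behavior (a hypothetical helper written from the test expectations, not fuelclient's actual code):

```python
import json

def get_error_body(text):
    # Prefer the 'message' field of a JSON body; fall back to the raw text
    # when the body is not JSON or has no such field.
    try:
        return json.loads(text)['message']
    except (ValueError, KeyError, TypeError):
        return text

assert get_error_body('{"message": "This is error body"}') == 'This is error body'
assert get_error_body('This is error body') == 'This is error body'
```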

    def test_safe_load_json(self):
        test_data = {'test_key': 'test_val'}

        m_open = mock.mock_open(read_data=json.dumps(test_data))
        with mock.patch('fuelclient.tests.unit.common.test_utils.open',
                        m_open):
            stream = open('/a/random/file', 'r')
            loaded = data_utils.safe_load('json', stream)

        self.assertEqual(test_data, loaded)

    def test_safe_load_yaml(self):
        test_data = {'test_key': 'test_val'}

        m_open = mock.mock_open(read_data=yaml.dump(test_data))
        with mock.patch('fuelclient.tests.unit.common.test_utils.open',
                        m_open):
            stream = open('/a/random/file', 'r')
            loaded = data_utils.safe_load('yaml', stream)

        self.assertEqual(test_data, loaded)

    @mock.patch('json.dump')
    def test_safe_dump_json(self, m_dump):
        test_data = {'test_key': 'test_val'}

        m_open = mock.mock_open()
        with mock.patch('fuelclient.tests.unit.common.test_utils.open',
                        m_open):
            stream = open('/a/random/file', 'w')
            data_utils.safe_dump('json', stream, test_data)

        m_dump.assert_called_once_with(test_data, stream, indent=4)

    @mock.patch('yaml.safe_dump')
    def test_safe_dump_yaml(self, m_dump):
        test_data = {'test_key': 'test_val'}

        m_open = mock.mock_open()
        with mock.patch('fuelclient.tests.unit.common.test_utils.open',
                        m_open):
            stream = open('/a/random/file', 'w')
            data_utils.safe_dump('yaml', stream, test_data)

        m_dump.assert_called_once_with(test_data,
                                       stream,
                                       default_flow_style=False)

    def test_safe_dump_wrong_format(self):
        test_data = {'test_key': 'test_val'}

        m_open = mock.mock_open()
        with mock.patch('fuelclient.tests.unit.common.test_utils.open',
                        m_open):
            stream = open('/a/random/file', 'w')
            self.assertRaises(ValueError,
                              data_utils.safe_dump,
                              'bad', stream, test_data)

    def test_safe_load_wrong_format(self):
        m_open = mock.mock_open()
        with mock.patch('fuelclient.tests.unit.common.test_utils.open',
                        m_open):
            stream = open('/a/random/file', 'w')
            self.assertRaises(ValueError,
                              data_utils.safe_load,
                              'bad', stream)
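The `safe_load`/`safe_dump` tests above imply dispatch on a format string, with `ValueError` for anything unknown. A simplified, JSON-only sketch of that dispatch (standard library only; the real `data_utils` also maps `'yaml'` to PyYAML loaders):

```python
import io
import json

# Simplified dispatch tables; hypothetical, modeled on the test expectations.
_LOADERS = {'json': json.load}
_DUMPERS = {'json': lambda data, stream: json.dump(data, stream, indent=4)}

def safe_load(fmt, stream):
    if fmt not in _LOADERS:
        raise ValueError('Unsupported format: {}'.format(fmt))
    return _LOADERS[fmt](stream)

def safe_dump(fmt, stream, data):
    if fmt not in _DUMPERS:
        raise ValueError('Unsupported format: {}'.format(fmt))
    _DUMPERS[fmt](data, stream)

# Round-trip through an in-memory stream instead of a mocked open().
buf = io.StringIO()
safe_dump('json', buf, {'test_key': 'test_val'})
buf.seek(0)
assert safe_load('json', buf) == {'test_key': 'test_val'}
```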