Retire repository

All Fuel repositories in the openstack namespace have already been retired;
retire the remaining Fuel repos in the x namespace as well, since they are
unused now.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is: https://review.opendev.org/699752 .

Change-Id: Ic6009c318eaf23b13524e3d7ae9c2639372d9ce0
Andreas Jaeger 2019-12-18 19:32:38 +01:00
parent 60f7ff4008
commit 6acce1549b
66 changed files with 10 additions and 5737 deletions
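The retirement process linked above boils down to removing every tracked file and committing a single pointer README. A minimal sketch of those steps, run here against a throwaway repository (the file name and messages are invented for illustration, not taken from the real repo):

```shell
# Sketch of the retirement steps, run in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "code" > module.py
git add module.py
git commit -qm "Initial content"

# Retirement: remove all tracked content, leave only the notice README.
git rm -qr .
printf 'This project is no longer maintained.\n' > README.rst
git add README.rst
git commit -qm "Retire repository"

ls   # only README.rst remains
```

The old content stays reachable through the previous commit, which is what the new README points readers at.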

24
.gitignore vendored

@@ -1,24 +0,0 @@
.tox
.build
*.pyc
# Packages
*.rpm
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
# Logs
logs
sys_test.log
# IDE settings
.idea/*
# Documentation build files
doc/user_guide/_build

4
.gitmodules vendored

@@ -1,4 +0,0 @@
[submodule "plugin_test/fuel-qa"]
path = plugin_test/fuel-qa
url = https://github.com/openstack/fuel-qa.git
branch = stable/mitaka

202
LICENSE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,4 +0,0 @@
fuel-plugin-cinder-gcs
======================
Plugin description

10
README.rst Normal file

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.
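The recovery step the README describes can be exercised like this, again in a throwaway repository standing in for the retired one (file name and contents are made up for the demonstration):

```shell
# Demonstrate recovering pre-retirement content with "git checkout HEAD^1".
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "plugin code" > plugin.py
git add plugin.py
git commit -qm "content"
git rm -q plugin.py
git commit -qm "Retire repository"

# Detach at the last commit that still carried the content:
git checkout -q "HEAD^1"
cat plugin.py   # prints: plugin code
```

`HEAD^1` is the first parent of the retirement commit, i.e. the last state of the repository before it was emptied.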


@@ -1,10 +0,0 @@
- name: additional_service:fuel-plugin-cinder-gcs
label: "Enable Google Cloud Storage Fuel plugin"
description: "Configure Cinder to use Google Cloud Storage backup driver."
compatible:
- name: 'hypervisor:qemu'
- name: 'storage:block:lvm'
- name: 'storage:block:ceph'
- name: 'additional_service:ceilometer'
requires: []
incompatible: []


@@ -1,19 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('MODULAR: fuel-plugin-cinder-gcs/config.pp')
include gcs
class { 'gcs::config': }
class { 'gcs::package_utils': }
class { 'gcs::services': }


@@ -1,17 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('MODULAR: fuel-plugin-cinder-gcs/gcs_horizon.pp')
include gcs
class { 'gcs::horizon': }


@@ -1,40 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class gcs::config {
cinder_config {
'DEFAULT/backup_driver': value => $gcs::backup_driver;
'DEFAULT/backup_gcs_bucket': value => $gcs::settings['backup_gcs_bucket'];
'DEFAULT/backup_gcs_project_id': value => $gcs::settings['backup_gcs_project_id'];
'DEFAULT/backup_gcs_credential_file': value => $gcs::credential_file;
'DEFAULT/backup_gcs_bucket_location': value => $gcs::settings['backup_gcs_bucket_location'];
'DEFAULT/backup_gcs_enable_progress_timer': value => $gcs::settings['backup_gcs_enable_progress_timer'];
'DEFAULT/backup_gcs_storage_class': value => $gcs::settings['backup_gcs_storage_class'];
'DEFAULT/backup_gcs_user_agent': value => $gcs::settings['backup_gcs_user_agent'];
'DEFAULT/backup_gcs_block_size': value => $gcs::settings['backup_gcs_block_size'];
'DEFAULT/backup_gcs_object_size': value => $gcs::settings['backup_gcs_object_size'];
'DEFAULT/backup_gcs_writer_chunk_size': value => $gcs::settings['backup_gcs_writer_chunk_size'];
'DEFAULT/backup_gcs_reader_chunk_size': value => $gcs::settings['backup_gcs_reader_chunk_size'];
'DEFAULT/backup_gcs_retry_error_codes': value => $gcs::settings['backup_gcs_retry_error_codes'];
'DEFAULT/backup_gcs_num_retries': value => $gcs::settings['backup_gcs_num_retries'];
}
file { $gcs::credential_file:
owner => 'cinder',
group => 'cinder',
content => template('gcs/credentials.json.erb'),
mode => '0600',
}
}
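For reference, the cinder_config resources above end up as plain options in the [DEFAULT] section of /etc/cinder/cinder.conf. A hand-written fragment with the values the class hard-codes plus placeholder bucket/project names (the placeholders are invented for illustration, not plugin defaults):

```ini
# /etc/cinder/cinder.conf fragment as rendered by gcs::config
# (bucket and project values below are placeholders)
[DEFAULT]
backup_driver = cinder.backup.drivers.google
backup_gcs_bucket = example-backup-bucket
backup_gcs_project_id = example-project
backup_gcs_credential_file = /var/lib/cinder/credentials.json
```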


@@ -1,29 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class gcs::horizon {
if $gcs::configure_horizon {
file_line { 'configure_horizon':
path => '/etc/openstack-dashboard/local_settings.py',
after => '^OPENSTACK_CINDER_FEATURES',
match => '^\s*.enable_backup',
line => '"enable_backup": True,',
} ~>
service { 'apache2':
ensure => running,
}
}
}
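The file_line resource above just rewrites one line of Horizon's local_settings.py and then bounces apache2. The edit itself can be sketched in plain shell, demonstrated on a temporary stand-in file (the settings content here is a minimal stub, not the real Horizon file):

```shell
# Flip Horizon's enable_backup flag the way the file_line resource does,
# against a scratch copy of local_settings.py.
set -e
settings=$(mktemp)
cat > "$settings" <<'EOF'
OPENSTACK_CINDER_FEATURES = {
    "enable_backup": False,
}
EOF
# Match any existing enable_backup line and force it to True.
sed -i 's/^\([[:space:]]*\)"enable_backup":.*/\1"enable_backup": True,/' "$settings"
grep '"enable_backup"' "$settings"
```

In the manifest this edit is followed by ensuring the apache2 service is running so Horizon picks up the change; here only the scratch file is touched.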


@@ -1,45 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class gcs {
$services = 'cinder-backup'
$plugin_hash = hiera_hash('fuel-plugin-cinder-gcs')
$backup_driver = 'cinder.backup.drivers.google'
$user_agent = 'gcscinder'
$credential_file = '/var/lib/cinder/credentials.json'
$pip_packages = ['google-api-python-client']
$python_package_provider = ['python-pip']
$configure_horizon = true
if $plugin_hash['backup_gcs_advanced_settings'] {
$settings = $plugin_hash
}
else {
$settings = {
backup_gcs_bucket => $plugin_hash['backup_gcs_bucket'],
backup_gcs_project_id => $plugin_hash['backup_gcs_project_id'],
backup_gcs_bucket_location => $plugin_hash['backup_gcs_bucket_location'],
backup_gcs_storage_class => $plugin_hash['backup_gcs_storage_class'],
gcs_private_key_id => $plugin_hash['gcs_private_key_id'],
gcs_private_key => $plugin_hash['gcs_private_key'],
gcs_client_email => $plugin_hash['gcs_client_email'],
gcs_client_id => $plugin_hash['gcs_client_id'],
gcs_auth_uri => $plugin_hash['gcs_auth_uri'],
gcs_token_uri => $plugin_hash['gcs_token_uri'],
gcs_auth_provider_x509_cert_url => $plugin_hash['gcs_auth_provider_x509_cert_url'],
gcs_client_x509_cert_url => $plugin_hash['gcs_client_x509_cert_url'],
gcs_account_type => $plugin_hash['gcs_account_type']
}
}
}


@@ -1,68 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class gcs::package_utils (
$action = 'install',
$packages = $gcs::packages,
$pip_packages = $gcs::pip_packages,
$pip_flags = '',
$python_package_provider = $gcs::python_package_provider,
) {
define gcs::package_utils::exec_pip (
$pip_action = $gcs::package_utils::action,
$flags = $gcs::package_utils::pip_flags,
) {
exec { "pip_install_${name}":
command => "pip ${pip_action} ${flags} ${name}",
provider => shell,
path => '/usr/local/bin:/usr/bin:/bin'
}
}
package { $python_package_provider:
ensure => installed,
}
case $action {
'install': {
if ($packages) {
package { $packages:
ensure => installed,
}
}
if ($pip_packages) {
gcs::package_utils::exec_pip { $pip_packages:
flags => '-U',
require => Package[$python_package_provider],
}
}
}
'uninstall': {
if ($packages) {
package { $packages:
ensure => purged,
}
}
if ($pip_packages) {
package { $pip_packages:
ensure => absent,
provider => pip,
}
}
}
default: {
fail("Option ${action} is not supported by class package_utils")
}
}
}
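The exec resource inside exec_pip simply interpolates a pip command line from its parameters. The string it would build can be previewed in plain shell, using the values the class passes for the plugin's pip package (flags mirror the `flags => '-U'` argument above):

```shell
# Preview the command the exec_pip defined type would run for the
# google-api-python-client pip package.
pip_action="install"
flags="-U"
name="google-api-python-client"
cmd="pip ${pip_action} ${flags} ${name}"
echo "$cmd"   # prints: pip install -U google-api-python-client
```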


@@ -1,20 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class gcs::services {
$services = $gcs::services
service { $services: ensure => running }
Cinder_config <||> ~> Service[$services]
Class['gcs::package_utils'] ~> Service[$services]
}


@@ -1,13 +0,0 @@
<% @settings = scope.lookupvar('gcs::settings') -%>
{
"type": "<%= @settings['gcs_account_type'] %>",
"project_id": "<%= @settings['backup_gcs_project_id'] %>",
"private_key": "<%= @settings['gcs_private_key'] %>",
"private_key_id": "<%= @settings['gcs_private_key_id'] %>",
"client_email": "<%= @settings['gcs_client_email'] %>",
"client_id": "<%= @settings['gcs_client_id'] %>",
"auth_uri": "<%= @settings['gcs_auth_uri'] %>",
"token_uri": "<%= @settings['gcs_token_uri'] %>",
"auth_provider_x509_cert_url": "<%= @settings['gcs_auth_provider_x509_cert_url'] %>",
"client_x509_cert_url": "<%= @settings['gcs_client_x509_cert_url'] %>"
}
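Since the rendered credentials file must be valid JSON for the Google client library to load it, a quick sanity check of the output is useful. A sketch with stub values in a temporary file (real deployments would check /var/lib/cinder/credentials.json instead; the field values below are invented):

```shell
# Validate a (stubbed) rendered credentials file as JSON.
set -e
cred=$(mktemp)
cat > "$cred" <<'EOF'
{
  "type": "service_account",
  "project_id": "example-project",
  "client_email": "backup@example-project.iam.gserviceaccount.com"
}
EOF
python3 -m json.tool "$cred" > /dev/null && echo "valid JSON"
```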


@@ -1,28 +0,0 @@
- id: gcs_config
type: puppet
role: ['primary-controller','controller','cinder']
requires: [deploy_start]
version: 2.0.0
cross-depends:
- name: /.*cinder.*/
role: self
parameters:
puppet_manifest: puppet/manifests/config.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
condition:
yaql_exp: $.storage.volumes_ceph or ('cinder' in $.roles )
- id: gcs_horizon
type: puppet
role: ['primary-controller','controller']
requires: [deploy_start]
version: 2.0.0
cross-depends:
- name: gcs_config
role: self
parameters:
puppet_manifest: puppet/manifests/horizon.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300


@@ -1,229 +0,0 @@
=====================================================
Master test plan for Google cloud storage fuel plugin
=====================================================
1. Introduction
---------------
1.1 Purpose
###########
This document describes the Master Test Plan for the GCS Fuel Plugin. The
scope of this plan covers the following objectives:
- describe testing activities;
- outline the testing approach, test types, and test cycles that will be used;
- state the test mission;
- list the deliverables.
1.2 Intended Audience
#####################
This document is intended for GCS project team staff (QA and Dev engineers
and managers) and for all other persons interested in the testing results.
2. Governing Evaluation Mission
-------------------------------
The GCS plugin for Fuel adds a Google Cloud backup option to Mirantis
OpenStack. It uses the Fuel plugin architecture along with the pluggable
architecture enhancements introduced in the latest Mirantis OpenStack Fuel.
The plugin must be compatible with version 9.0 of Mirantis OpenStack.
2.1 Evaluation Test Mission
###########################
- Lab environment deployment.
- Deploy MOS with developed plugin installed.
- Create and run specific tests for plugin/deployment.
- Documentation
2.2 Test Items
##############
- Fuel GCS plugin UI setup page: default values are tested automatically,
other scenarios will be tested manually;
- Fuel CLI;
- Fuel API;
- Fuel UI;
- MOS;
- MOS API.
3. Test Approach
----------------
The project test approach consists of BVT, Integration/System, Regression and
Acceptance test levels.
3.1 Criteria for test process starting
######################################
Before the test process can start, some preparatory actions must be taken
and important preconditions satisfied. The following steps must be completed
successfully before the test phase starts:
- all project requirements are reviewed and confirmed;
- implementation of testing features has finished (a new build is ready for
testing);
- implementation code is stored in Git;
- bvt-tests are executed successfully (100% success);
- test environment is prepared with correct configuration;
- test environment contains the last delivered build for testing;
- test plan is ready and confirmed internally;
- implementation of manual tests and necessary autotests has finished.
3.2 Suspension Criteria
#######################
Testing of a particular feature is suspended if there is a blocking issue
which prevents test execution. A blocking issue can be one of the following:
- The feature has a blocking defect which prevents further usage of this
feature, and there is no workaround available;
- CI test automation scripts failure.
3.3 Feature Testing Exit Criteria
#################################
Testing of a feature can be finished when:
- All planned tests (prepared before) for the feature are executed; no
defects are found during this run;
- All planned tests for the feature are executed; defects found during this
run are verified or confirmed to be acceptable (known issues);
- The time for testing that feature according to the project plan has run
out, and the Project Manager confirms that no changes to the schedule are
possible.
4. Deliverables
---------------
4.1 List of deliverables
########################
Project testing activities result in the following reporting
documents:
- Test plan;
- Test run report;
4.2 Acceptance criteria
#######################
90% of test cases must have the status "passed". Critical and high-priority
issues must be fixed. The following manual tests must be executed and pass
(100% of them):
- Deploy a cluster with the GCS plugin enabled.
- Boot a VM with a proper image.
- Create a snapshot of the recently booted VM.
- Back up that snapshot to GCS.
- Destroy the VM.
- Download the snapshot from GCS.
- Boot a VM from the snapshot downloaded from GCS.
5. Test Cycle Structure
-----------------------
An ordinary test cycle for each iteration consists of the following steps:
- Smoke testing of each build ready for testing;
- Verification testing of each build ready for testing;
- Regression testing cycles at the end of the iteration;
- Creation of a new test case to cover each newly found bug (if such a test
does not exist).
5.1.1 Smoke Testing
###################
Smoke testing is intended to check that the system works correctly after a
new build is delivered. Smoke tests make sure that all main system
functions/features work correctly according to customer requirements.
5.1.2 Verification testing
##########################
Verification testing includes functional testing covering the following:
- new functionality (implemented in the current build);
- critical and major defect fixes (introduced in the current build).
Some iteration test cycles also include non-functional testing types
described in Overview of Planned Tests.
5.1.3 Regression testing
########################
Regression testing includes execution of a set of test cases for features
implemented before the current iteration, to ensure that subsequent
modifications of the system have not introduced or uncovered software
defects. It also includes verification of minor defect fixes introduced in
the current iteration.
5.1.4 Bug coverage by new test case
###################################
Bug detection starts after all manual and automated tests are prepared and
the test process is initiated. Ideally, each bug must be clearly documented
and covered by a test case. If a bug without test coverage is found, it must
be clearly documented and covered by a custom test case to prevent this bug
from recurring in future deployments, releases, etc. All custom manual test
cases are supposed to be added to TestRail, and automated tests are supposed
to be pushed to the Git/Gerrit repo.
5.2 Performance testing
#######################
Performance testing will be executed on the scale lab; a custom set of
Rally scenarios (or another performance tool) must be executed against the
GCS environment.
5.3 Metrics
###########
Test case metrics are aimed at estimating the quality of bug fixing and at
detecting unexecuted tests so that their execution can be scheduled.
Passed/Failed test cases - this metric shows the results of test case
execution, in particular the ratio between test cases that passed
successfully and those that failed. Such statistics must be gathered after
each delivered build is tested. This helps to track progress in bug fixing;
ideally, the count of failed test cases should tend toward zero.
Not Run test cases - this metric shows the count of test cases that should
be run within the current test phase but have not been run yet. With such
statistics, there is an opportunity to detect and analyze the scope of
unexecuted test cases and the causes of their non-execution, and to plan
their further execution (set time frames, assign responsible QA).
6. Test scope
-------------
.. include:: test_suite_smoke_bvt.rst
.. include:: test_gcs_gui.rst
.. include:: test_suite_integration.rst
.. include:: test_suite_functional.rst
.. include:: test_suite_destructive.rst
.. include:: test_suite_gui_negative.rst
.. include:: test_suite_system.rst
.. include:: test_gcs_integration_with_mistral.rst


@@ -1,38 +0,0 @@
===================
GUI verify defaults
===================
UI test
-------
ID
##
gcs_gui_defaults
Description
###########
Test case designed to verify that the plugin is deployed with the correct
default values set.
Complexity
##########
core
Steps
#####
1. Upload the plugin to the master node
2. Install the plugin
3. Create a cluster
4. Verify the default values
Expected results
################
All steps must be completed successfully, without any errors.


@@ -1,47 +0,0 @@
=======================================
Integration with Mistral plugin testing
=======================================
Verify GCS plugin working correctly in integration with Mistral plugin
----------------------------------------------------------------------
ID
##
gcs_mistral_integration
Description
###########
Check deployment of an environment with the Mistral and GCS Fuel plugins
installed
Complexity
##########
manual
Steps
#####
1. Create cluster
2. Install Mistral Fuel plugin
3. Install GCS Fuel plugin
4. Configure GCS Fuel plugin
5. Add 3 Controller-mistral nodes
6. Add 1 compute and cinder +LVM node
7. Deploy cluster
8. Run OSTF
9. Create a volume
10. Using a workbook from the examples in the GCS plugin on the master node,
back up the volume
Expected results
################
All steps must be completed successfully, without any errors.


@@ -1,144 +0,0 @@
===================
Destructive testing
===================
Verify that a master controller failure in an HA cluster will not crash the system
------------------------------------------------------------------------------------
ID
##
gcs_controller_failover
Description
###########
Verify that after a non-graceful shutoff of a controller node the cluster
stays operational, and that it is operational again after the node is
turned back online.
Complexity
##########
manual
Steps
#####
1. Create an environment with 3 controller nodes at least
2. Install and configure GCS plugin
3. Deploy cluster
4. Verify Cluster using OSTF
5. Verify GCS plugin
6. Power off main controller (non-gracefully)
7. Run OSTF
8. Verify GCS plugin
9. Power on controller which was powered off in step 6.
10. Run OSTF
11. Verify GCS plugin
Expected results
################
All steps except step 7 must be completed successfully, without any errors.
In step 7, one OSTF HA test will fail because one of the controllers is
offline; this is expected.
Verify that a compute node failure in a non-HA cluster will not crash the system
----------------------------------------------------------------------------------
ID
##
gcs_compute_failover
Description
###########
Verify that after a non-graceful shutoff of a compute node the cluster
stays operational, and that it is operational again after the node is
turned back online.
Complexity
##########
manual
Steps
#####
1. Create an environment with 1 controller, cinder and 2 compute nodes
2. Install and configure GCS plugin
3. Deploy cluster
4. Run OSTF
5. Verify GCS plugin
6. Power off one of the computes (non-gracefully)
7. Run OSTF
8. Verify GCS plugin
9. Power on compute which was powered off in step 6
10. Run OSTF
11. Verify GCS plugin
Expected results
################
All steps except step 7 must be completed successfully, without any errors.
In step 7, one OSTF test will fail because one of the nodes is offline; this
is expected.
Verification of Cinder node non-graceful shutoff in HA cluster
--------------------------------------------------------------
ID
##
gcs_cinder_failover
Description
###########
Verify that the cluster stays operational after a non-graceful shutoff of a
cinder node, and remains operational after the node is brought back online.
Complexity
##########
manual
Steps
#####
1. Create an environment with 1 controller, 1 compute, and 2 cinder nodes
2. Install and configure GCS plugin
3. Deploy cluster
4. Run OSTF
5. Verify GCS plugin
6. Power off one of the cinder nodes (non-gracefully)
7. Run OSTF
8. Verify GCS plugin
9. Power on cinder node which was powered off in step 6
10. Run OSTF
11. Verify GCS plugin
Expected results
################
All steps except step 7 must be completed successfully, without any errors.
In step 7, one OSTF test will fail because one of the nodes is offline; this
is expected.

==================
Functional testing
==================
Check that a Controller node can be deleted and added again
-----------------------------------------------------------
ID
##
gcs_delete_add_controller
Description
###########
Verify that a controller node can be deleted and added after deployment
Complexity
##########
advanced
Steps
#####
1. Create an environment with at least 3 controller nodes
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
6. Delete a Controller node and deploy changes
7. Run OSTF tests
8. Verify GCS plugin
9. Add a node with "Controller" role and deploy changes
10. Run OSTF tests
11. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Check that a Compute node can be deleted and added again
--------------------------------------------------------
ID
##
gcs_delete_add_compute
Description
###########
Verify that a compute node can be deleted and added after deployment
Complexity
##########
advanced
Steps
#####
1. Create an environment with at least 2 compute nodes
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
6. Delete a compute node and deploy changes
7. Run OSTF tests
8. Verify GCS plugin
9. Add a node with "compute" role and deploy changes
10. Run OSTF tests
11. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Check that a Cinder node can be deleted and added again
-------------------------------------------------------
ID
##
gcs_delete_add_cinder
Description
###########
Verify that a cinder node can be deleted and added after deployment
Complexity
##########
advanced
Steps
#####
1. Create an environment with at least 2 cinder nodes
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
6. Delete a cinder node and deploy changes
7. Run OSTF tests
8. Verify GCS plugin
9. Add a node with cinder role and deploy changes
10. Run OSTF tests
11. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Check that the only cinder node can be deleted and added again
--------------------------------------------------------------
ID
##
gcs_delete_add_single_cinder
Description
###########
Verify that the only cinder node can be deleted and added after deployment
Complexity
##########
advanced
Steps
#####
1. Create an environment with 1 cinder node
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
6. Delete the cinder node and deploy changes
7. Run OSTF tests
8. Add a node with cinder role and deploy changes
9. Run OSTF tests
10. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Check that a Ceph-OSD node can be added again
---------------------------------------------
ID
##
gcs_add_ceph
Description
###########
Verify that a Ceph-OSD node can be added after deployment
Complexity
##########
advanced
Steps
#####
1. Create an environment with Ceph-OSD as a storage backend
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
6. Add a node with Ceph-OSD role and deploy changes
7. Run OSTF tests
8. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.

====================
GUI negative testing
====================
Check the plugin's reaction to inconsistent data in plugin configuration fields
-------------------------------------------------------------------------------
ID
##
gcs_non_consistent_configuration
Description
###########
Verify that inconsistent input in the plugin configuration fields is handled
properly during plugin configuration.
Complexity
##########
manual
Steps
#####
1. Deploy the Fuel master node
2. Enable the GCS plugin
3. Verify that multiple lines in plugin fields are handled correctly
4. Verify that special characters in URL fields are handled properly
5. Verify the 'Client E-mail' field with incorrectly formatted e-mail addresses
Expected results
################
All incorrect inputs must be handled properly, and a warning must be displayed.
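Step 5 above exercises e-mail pattern validation. As a minimal illustration only (the plugin's actual validation rules are not published in this test plan, so the regex below is an assumption), a check of this kind can be sketched in Python:

```python
import re

# Illustrative pattern only: the GCS plugin's real 'Client E-mail' validation
# is not shown in this test plan. This sketch rejects the obvious bad shapes
# exercised by step 5 (missing '@', multiple '@', embedded whitespace).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def is_valid_client_email(value):
    """Return True if the value looks like a plausible e-mail address."""
    return bool(EMAIL_RE.match(value.strip()))


# Sample inputs of the kind a manual tester would try in the UI field.
valid = "svc-account@project.iam.gserviceaccount.com"
invalid = ["no-at-sign.example.com", "two@@signs.io", "spaces in@mail.com", ""]

assert is_valid_client_email(valid)
assert not any(is_valid_client_email(v) for v in invalid)
```

Each rejected input corresponds to a case where the UI should show a validation warning rather than accept the field.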
Check the plugin's reaction to an invalid configuration (typo verification)
---------------------------------------------------------------------------
ID
##
gcs_invalid_configuration
Description
###########
Verify that when the plugin is configured with invalid data, proper errors are
displayed after a deployment attempt.
Complexity
##########
manual
Steps
#####
1. Create an environment with 1 controller, 1 compute, and 1 cinder node
2. Enable the GCS plugin
3. Configure the plugin with an invalid bucket location and try to create a backup in a new bucket
4. Verify that during deployment of changes proper error/warning is shown
5. Configure plugin with invalid private key
6. Verify that during deployment of changes proper error/warning is shown
Expected results
################
Steps 4 and 6 will fail with a proper error message.

===================
Integration testing
===================
Deploy GCS plugin with standalone Ceph-OSD nodes
------------------------------------------------
ID
##
gcs_ceph
Description
###########
Check deployment of an environment with standalone Ceph-OSD nodes
Complexity
##########
Core
Steps
#####
1. Create an environment with at least 3 Ceph-OSD nodes
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Deploy with GCS plugin and cinder-multirole
-------------------------------------------
ID
##
gcs_cinder_multirole
Description
###########
Check deployment of an environment with cinder-multirole
Complexity
##########
Core
Steps
#####
1. Create an environment with controller+cinder and compute+cinder nodes
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Deploy with GCS plugin and cinder+Ceph-OSD multiroles
-----------------------------------------------------
ID
##
gcs_cinder_ceph_multirole
Description
###########
Check deployment of an environment with cinder+Ceph-OSD multirole
Complexity
##########
Core
Steps
#####
1. Create an environment with controller+cinder+Ceph-OSD and compute+cinder+Ceph-OSD nodes
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.
Deploy an environment with the GCS plugin and Ceilometer
--------------------------------------------------------
ID
##
gcs_ceilometer
Description
###########
Check deployment of an environment with the GCS plugin and Ceilometer
Complexity
##########
Core
Steps
#####
1. Create an environment with Ceilometer
2. Enable and configure GCS plugin
3. Deploy cluster with plugin
4. Run OSTF tests
5. Verify GCS plugin
Expected results
################
All steps must be completed successfully, without any errors.

=========
BVT tests
=========
Smoke test
----------
ID
##
gcs_smoke
Description
###########
Smoke test for the Google Cloud Storage Fuel plugin. Deploy a cluster with
controller, compute, and cinder nodes and install the plugin.
Complexity
##########
core
Steps
#####
1. Upload plugin to the master node
2. Install plugin
3. Create cluster
4. Add 1 node with controller role
5. Add 1 node with compute role
6. Add 1 node with cinder role
7. Deploy the cluster
Expected results
################
All steps must be completed successfully, without any errors.
BVT test
--------
ID
##
gcs_bvt
Description
###########
BVT test for the Google Cloud Storage Fuel plugin. Deploy a cluster in HA mode
with 3 controllers, a compute, and a cinder node and install the plugin.
Complexity
##########
core
Steps
#####
1. Upload plugin to the master node
2. Install plugin
3. Create cluster
4. Add 3 nodes with controller role
5. Add 1 node with compute role
6. Add 1 node with cinder role
7. Deploy the cluster
8. Run network verification
9. Check plugin installation
10. Run OSTF
Expected results
################
All steps must be completed successfully, without any errors.

==============
System testing
==============
Check data consistency of backed up volume
------------------------------------------
ID
##
gcs_data_consistency_verification
Description
###########
Verify that data written to a volume stays consistent after backup restoration.
Complexity
##########
manual
Steps
#####
1. Boot a VM and attach a volume to it
2. Write a test file to the volume and record its md5sum value
3. Back up the volume
4. Destroy the VM and volume
5. Boot a VM and restore the volume from GCS
6. Attach the restored volume to a VM
7. Verify file consistency by comparing the md5sum value with the value obtained in step 2
Expected results
################
All steps must be completed successfully, without any errors. The md5sum
values obtained in steps 2 and 7 must be identical.
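The checksum comparison in steps 2 and 7 can be sketched as follows. This is a local stand-in only: a plain file copy replaces the real GCS backup/restore cycle, and temporary files stand in for the attached Cinder volume.

```python
import hashlib
import tempfile
from pathlib import Path


def md5sum(path):
    """Compute the md5 hex digest of a file, as `md5sum` would (steps 2 and 7)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


# Local files stand in for the attached volume; in the real test the file
# lives on the Cinder volume and the copy is produced by the GCS
# backup/restore cycle (steps 3-6).
with tempfile.TemporaryDirectory() as tmp:
    original = Path(tmp) / "testfile.bin"
    original.write_bytes(b"payload written before backup")
    before = md5sum(original)                    # step 2: checksum before backup

    restored = Path(tmp) / "restored.bin"
    restored.write_bytes(original.read_bytes())  # stands in for backup + restore
    after = md5sum(restored)                     # step 7: checksum after restore

assert before == after, "restored volume data is not consistent"
```

If the two digests differ, the restored volume does not contain the same bytes that were backed up, and the test case fails.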

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/GCSFuelplugin.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/GCSFuelplugin.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/GCSFuelplugin"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/GCSFuelplugin"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

Useful links
------------
For more information about Google Cloud Storage(GCS) Fuel plugin
described in this document, see:
* `Specification <https://github.com/openstack/fuel-plugin-cinder-gcs/blob/master/specs/spec.rst>`_
* `GitHub project <https://github.com/openstack/fuel-plugin-cinder-gcs>`_
* `Launchpad project <https://launchpad.net/fuel-plugin-cinder-gcs>`_

"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
# -*- coding: utf-8 -*- # noqa
#
# Google Cloud Storage(GCS) Fuel plugin documentation build configuration file, created by
# sphinx-quickstart on Fri Aug 14 12:14:29 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Google Cloud Storage(GCS) Fuel plugin'
copyright = u'2016, Mirantis Inc.' # noqa
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0.0'
# The full version, including alpha/beta/rc tags.
release = '1.0-1.0.0-0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'GCSFuelplugindoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
'classoptions': ',openany,oneside', 'babel': '\\usepackage[english]{babel}'
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'GCSFuelplugin.tex', u'Google Cloud Storage(GCS) Fuel Plugin\
Guide', u'Mirantis Inc.', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'gcsfuelplugin', u'Google Cloud Storage(GCS) Fuel plugin user\
guide', [u'Mirantis Inc.'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'GCSFuelplugin', u'Google Cloud Storage(GCS) Fuel plugin user guide',
u'Mirantis Inc.', 'GCSFuelplugin',
'One line description of project.', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# Insert footnotes where they are defined instead of at the end.
pdf_inline_footnotes = True
# -- Options for Epub output ----------------------------------------------
# Bibliographic Dublin Core info.
epub_title = u'Google Cloud Storage(GCS) Plugin for Fuel'
epub_author = u'Mirantis Inc.'
epub_publisher = u'Mirantis Inc.'
epub_copyright = u'2016, Mirantis Inc.'
# The basename for the epub file. It defaults to the project name.
#epub_basename = u'fuel-plugin-openbook'
# The HTML theme for the epub output. Since the default themes are not optimized
# for small screen space, using the same theme for HTML and epub output is
# usually not wise. This defaults to 'epub', a theme designed to save visual
# space.
#epub_theme = 'epub'
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# A sequence of (type, uri, title) tuples for the guide element of content.opf.
#epub_guide = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files shat should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True
# Choose between 'default' and 'includehidden'.
#epub_tocscope = 'default'
# Fix unsupported image types using the PIL.
#epub_fix_images = False
# Scale large images.
#epub_max_image_width = 0
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#epub_show_urls = 'inline'
# If false, no index is generated.
#epub_use_index = True

.. _configure:
Configure an environment with GCS Fuel plugin
---------------------------------------------
To create and configure an environment with the GCS Fuel plugin,
follow the steps below:
#. `Create a new OpenStack environment <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_
in Fuel web UI.
#. Use Cinder with the LVM backend or Ceph for block storage on the
Storage Backends tab. Additional details on storage planning can be found in the
`Mirantis OpenStack Planning guide <https://docs.mirantis.com/openstack/fuel/fuel-8.0/mos-planning-guide.html#plan-the-storage>`_.
.. image:: images/storage.png
#. Enable the Google Cloud Storage Fuel plugin in `Additional services` tab:
.. image:: images/plugin.png
#. Add nodes and assign them roles:
* if the LVM backend for Cinder is enabled
* At least 1 Controller
* Desired number of Compute hosts
* At least 1 Cinder node; the Cinder role can also be added to a Compute or
Controller node
* if the Ceph backend is enabled for volumes
* At least 1 Controller
* Desired number of Compute hosts
* At least 3 Ceph OSD hosts; this role can be co-located with other roles
#. Navigate to the `Settings` tab to configure the Fuel GCS plugin parameters.
All of the plugin settings fields must be filled with correct values,
most of them do not have default values as they are environment-specific.
Google Cloud Storage(GCS) Fuel plugin settings are logically divided into
two parts:
* Mandatory settings
.. image:: images/settings.png
* The project ID
* The default bucket name to store backup data.
The bucket is created if it does not exist. It is used as the *container*
parameter value when the Cinder CLI or API is invoked to create a backup.
*Note for Horizon users:* make sure *Container Name* in the
*Create Volume Backup* window is filled with the appropriate bucket name.
An improper or empty *Container Name* can cause creation of a new bucket.
* The storage class for the bucket; it can be selected from a drop-down list
* Bucket location; a list of locations can be found in the
`Google Cloud storage documentation <https://cloud.google.com/storage/docs/bucket-locations>`_
* Credentials-related settings such as `GCS Account type`, `Private Key ID`,
`Private Key`, `Client E-mail`, `Client ID`, `Auth URI`, `Token URI`,
`Auth Provider X509 Cert URL`, and `Client X509 Cert URL` should be copied from
the corresponding fields of the credentials JSON file. This file is downloaded
from the `Google Cloud Console <https://console.cloud.google.com/apis/credentials>`_
when a new service account is created on the API management page.
* Advanced settings
.. image:: images/advanced_settings.png
This section is visible only when the `Show advanced settings` checkbox is
enabled. Changing values here may be required to override the default settings
of the Google Cloud Cinder backup driver.
The fields have reasonable default values, which correspond to the driver defaults.
Please see `OpenStack documentation <http://docs.openstack.org/mitaka/config-reference/block-storage/backup/gcs-backup-driver.html>`_
for a list of GCS backup driver configuration options.
#. Press the `Save Settings` button.
#. Make additional
`configuration adjustments <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment.html>`__.
#. Proceed to the
`environment deployment <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/deploy-environment.html>`__.
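On deployment, the plugin settings above are expected to be rendered as Google Cloud Storage backup driver options in ``cinder.conf`` on the controller nodes. A hedged sketch of the resulting section is shown below; the option names come from the GCS backup driver documentation linked above, and every value is a placeholder, not a recommendation:

```ini
# Sketch of the backup options the plugin is expected to render into
# /etc/cinder/cinder.conf; all values below are environment-specific
# placeholders.
[DEFAULT]
backup_driver = cinder.backup.drivers.google
backup_gcs_project_id = my-gcs-project
backup_gcs_bucket = my-backup-bucket
backup_gcs_bucket_location = US
backup_gcs_storage_class = NEARLINE
backup_gcs_credential_file = /etc/cinder/gcs-credentials.json
```

Inspecting these options on a controller is one way to perform the "Verify GCS plugin" step that appears throughout the test plans.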

.. _definitions:
Key terms
---------
The table below lists the key terms, acronyms, and abbreviations that are used
in this document.
.. tabularcolumns:: |p{4cm}|p{12.5cm}|
====================== ========================================================
**Term/abbreviation** **Definition**
====================== ========================================================
Cinder `Block Storage service for OpenStack <https://github.com/openstack/cinder>`__
GCS `Google Cloud Storage <https://cloud.google.com/storage/>`__
GCS backup driver `Google Cloud Storage backup driver <http://docs.openstack.org/mitaka/config-reference/block-storage/backup/gcs-backup-driver.html>`__
Mistral `OpenStack workflow service <https://github.com/openstack/mistral>`__
====================== ========================================================
.. raw:: latex
\pagebreak
=================================================
Google Cloud Storage (GCS) Fuel Plugin User Guide
=================================================
Overview
~~~~~~~~
.. toctree::
:maxdepth: 2
intro
definitions
requirements
prerequisites
limitations
release_notes
license
appendix
Install and configure GCS Fuel Plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
install
config
Use GCS Fuel Plugin
~~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
mistral
troubleshooting
Install the Google Cloud Storage (GCS) Fuel plugin
--------------------------------------------------
Before you proceed with the Google Cloud Storage (GCS) Fuel plugin installation, verify the following:
#. You have completed steps from :ref:`prerequisites` section.
#. All the nodes of your future environment are *DISCOVERED* by the Fuel Master node.
**To install the Google Cloud Storage (GCS) Fuel plugin:**
#. Download the Google Cloud Storage (GCS) Fuel plugin from the
`Fuel Plugin Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/>`__.
#. Copy the plugin ``.rpm`` package to the Fuel Master node:
.. code-block:: console
$ scp fuel-plugin-cinder-gcs-1.0-1.0.0-1.noarch.rpm <Fuel Master node IP>:/tmp
#. Log in to the Fuel Master node CLI as root.
#. Install the plugin:
.. code-block:: console
# fuel plugins --install /tmp/fuel-plugin-cinder-gcs-1.0-1.0.0-1.noarch.rpm
#. Verify that the plugin was installed successfully:
.. code-block:: console
# fuel plugins
id | name | version | package_version | releases
---+------------------------+---------+-----------------+--------------------
2 | fuel-plugin-cinder-gcs | 1.0.0 | 4.0.0 | ubuntu (mitaka-9.0)
#. Proceed to :ref:`configure` section.
Introduction
------------
The purpose of this document is to describe how to install, configure, and use
the Google Cloud Storage (GCS) plugin 1.0.0 for Fuel 9.0.
Since the Mitaka OpenStack release, Cinder supports the Google Cloud Storage
backup driver. The plugin enables Fuel to deploy OpenStack clouds with
Cinder configured to use the GCS backup driver.
Licenses
--------
===================================== ============
**Component** **License**
===================================== ============
Google Cloud Storage(GCS) Fuel plugin Apache 2.0
google-api-python-client Apache 2.0
===================================== ============
Limitations
-----------
The Google Cloud Storage (GCS) Fuel plugin 1.0.0 has the following limitations:
* Cinder does not support multiple backup backends at the same time, so switching
the backup backend for a cloud that already has backups created by another driver
may not be possible without losing access to the previously created backups.
* Only a single GCS bucket can be used per OpenStack environment.
#!/bin/bash
lsb_release -a 2>/dev/null | grep 'Distributor ID:' | grep -q Ubuntu || \
{ echo 'Not an Ubuntu'; exit 1; }
apt-cache policy texlive-latex-extra | grep 'Installed:' | grep -q '(none)' && \
{ echo 'Please install texlive-latex-extra package'; exit 1; }
rm -rf _build
make latexpdf
echo -e -n '\nCreated pdf : '
find _build/ | grep -e '.*\.pdf'
echo ''
Automate with Mistral
---------------------
Many backup strategies require taking backups on a regular basis,
and it is good to have these repeatable actions automated.
Taking a volume backup is often considered a single action, but it
usually requires taking a snapshot, backing up the snapshot, and
then deleting the snapshot. So taking a volume backup is actually a workflow.
Mistral is a workflow service for OpenStack clouds, and the plugin provides a
sample Mistral workbook.
The workflow provided by the sample basically does the following:
* Create a list of Cinder volumes to backup
* Create snapshots for the volumes
* Create backups for the snapshots
* Wait until backups are created
* Remove the snapshots
* Send a report (optional)
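A compact Python sketch of the same sequence, written against a hypothetical client object (the ``client`` method names here are stand-ins for illustration, not the real python-cinderclient API):

```python
import time


def backup_volumes(client, volume_ids, incremental=False,
                   snapshot_name='by_create_backups_workflow',
                   poll_interval=10):
    """Sketch: snapshot each volume, back up the snapshots, clean up."""
    # 1. Create a temporary snapshot for every volume.
    snapshots = [client.snapshot_create(vol_id, name=snapshot_name)
                 for vol_id in volume_ids]
    # 2. Create a backup from every snapshot.
    backups = [client.backup_create(snap['volume_id'],
                                    snapshot_id=snap['id'],
                                    incremental=incremental)
               for snap in snapshots]
    # 3. Wait until no backup is in the 'creating' state any more.
    while any(client.backup_status(b['id']) == 'creating' for b in backups):
        time.sleep(poll_interval)
    # 4. Remove the temporary snapshots.
    for snap in snapshots:
        client.snapshot_delete(snap['id'])
    return [b['id'] for b in backups]
```

The sample workbook implements the same logic declaratively, with retries and optional e-mail reporting on top.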
After the plugin is installed, the sample can be found in
``/var/www/nailgun/plugins/fuel-plugin-cinder-gcs-1.0/examples/mistral_workbook.yaml``
on the Fuel Master node.
To use the sample, the Mistral service must be installed and running.
Copying the Mistral workbook to an OpenStack controller
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Copy the sample from the Fuel Master node to an OpenStack controller::
root@fuel-master# scp /var/www/nailgun/plugins/fuel-plugin-cinder-gcs-1.0/examples/mistral_workbook.yaml root@<CONTROLLER_NAME_OR_IP>:~
Customizing the sample workbook
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mistral can send e-mails via an MTA that supports SSL/TLS and authentication.
It is not possible to use an MTA without SSL/TLS and authentication support.
Proper MTA credentials should be set in the sample file before creating the
Mistral workbook from the sample.
#. Log in to the controller and edit the sample file
::
root@controller:~# vi mistral_workbook.yaml
...
from_addr: '<USERNAME>@<DOMAIN>'
smtp_server: '<MTA_HOSTNAME_OR_IP>'
smtp_password: '<PASSWORD>'
*Note:* This step can be skipped if the workflow is not supposed to send
e-mails.
Creating Mistral workbook from the sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Verify that the Mistral CLI works
::
root@controller:~# openstack workbook list
+------------------------+--------+---------------------+------------+
| Name | Tags | Created at | Updated at |
+------------------------+--------+---------------------+------------+
+------------------------+--------+---------------------+------------+
root@controller:~# openstack workflow list -c Name
+---------------------+
| Name |
+---------------------+
| std.create_instance |
| std.delete_instance |
+---------------------+
*Note:* It may be required to source the appropriate *openrc* file to get the
commands working.
#. Create Mistral workbook from the sample
::
root@controller:~# openstack workbook create mistral_workbook.yaml
+------------+----------------------------+
| Field | Value |
+------------+----------------------------+
| Name | sample_backup_workbook |
| Tags | <none> |
| Created at | 2016-09-08 13:59:10.306180 |
| Updated at | None |
+------------+----------------------------+
#. Verify the workbook and the workflow are added
::
root@controller:~# openstack workbook list
+------------------------+--------+---------------------+------------+
| Name | Tags | Created at | Updated at |
+------------------------+--------+---------------------+------------+
| sample_backup_workbook | <none> | 2016-09-08 13:59:10 | None |
+------------------------+--------+---------------------+------------+
root@controller:~# openstack workflow list -c Name
+------------------------------------------------+
| Name |
+------------------------------------------------+
| std.create_instance |
| std.delete_instance |
| sample_backup_workbook.create_backups_workflow | <---
+------------------------------------------------+
Using workflow
^^^^^^^^^^^^^^
The workflow accepts the following parameters:
* *projects_id_list*
* Optional
* Default: null
* Mutually exclusive with *volumes_id_list*
* Comment: If *projects_id_list* is provided, all volumes of the listed
projects are backed up. If *volumes_id_list* is provided, only volumes from
the list are backed up. If neither *projects_id_list* nor *volumes_id_list*
is provided, all volumes of all projects will be backed up.
* *volumes_id_list*
* Optional
* Default: null
* Comment: Mutually exclusive with *projects_id_list*. If *projects_id_list*
is provided, all volumes of the listed projects are backed up. If
*volumes_id_list* is provided, only volumes from the list are backed up. If
neither *projects_id_list* nor *volumes_id_list* is provided, all volumes of
all projects will be backed up.
* *incremental*
* Optional
* Default: false
* Comment: Full backups are created if not provided.
* *report_to_list*
* Optional
* Default: null
* Comment: E-mails are not sent if not provided.
* *snapshot_name*
* Optional
* Default: 'by_create_backups_workflow'
* Comment: It becomes the name of the temporary Cinder snapshots. Useful for
detecting snapshots that were not deleted.
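The parameter constraints above can be captured in a small helper that builds the JSON argument for ``openstack workflow execution create`` (an illustrative sketch; the parameter names follow the list above):

```python
import json


def build_execution_params(projects_id_list=None, volumes_id_list=None,
                           incremental=False, report_to_list=None,
                           snapshot_name='by_create_backups_workflow'):
    """Build the JSON workflow input, enforcing mutual exclusion."""
    if projects_id_list and volumes_id_list:
        raise ValueError('projects_id_list and volumes_id_list '
                         'are mutually exclusive')
    params = {
        'projects_id_list': projects_id_list,
        'volumes_id_list': volumes_id_list,
        'incremental': incremental,
        'report_to_list': report_to_list,
        'snapshot_name': snapshot_name,
    }
    # Drop unset parameters so the workflow defaults apply.
    return json.dumps({k: v for k, v in params.items()
                       if v not in (None, False)})
```

The returned string is what gets passed as the last argument to ``openstack workflow execution create``.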
Executing workflow without parameters (test only)
"""""""""""""""""""""""""""""""""""""""""""""""""
*Note:* Executing the workflow without parameters causes full backups to be
taken of all volumes of all projects (tenants), which can take a lot of time
and resources.
::
root@controller:~# openstack workflow execution create sample_backup_workbook.create_backups_workflow
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| ID | 93fc32a1-d285-4934-9b14-9a58b395e5d1 | <---ID
| Workflow ID | c5816326-ae05-43cc-8732-943ace7b5947 |
| Workflow name | sample_backup_workbook.create_backups_workflow |
| Description | |
| Task Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2016-09-09 13:06:27 |
| Updated at | 2016-09-09 13:06:26.626167 |
+-------------------+------------------------------------------------+
Executing workflow with parameters
""""""""""""""""""""""""""""""""""
The next example shows providing the *volumes_id_list* parameter
when creating an execution.
::
root@controller:~# openstack workflow execution create sample_backup_workbook.create_backups_workflow '{"volumes_id_list": ["0774de3c-092a-4eb3-a25f-04c0790f51c6"]}'
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| ID | ec017763-11c6-421f-b7e9-7774bc2a7fa3 |
| Workflow ID | c5816326-ae05-43cc-8732-943ace7b5947 |
| Workflow name | sample_backup_workbook.create_backups_workflow |
| Description | |
| Task Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2016-09-09 13:18:14 |
| Updated at | 2016-09-09 13:18:14.044925 |
+-------------------+------------------------------------------------+
Checking execution and execution tasks status
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To check an execution status, the execution ID is required. The ID can be found
in the ``openstack workflow execution create`` command output::
root@node-1:~# openstack workflow execution show 9822a1c0-bd79-4bb2-9c91-c0accf96e60e
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| ID | 9822a1c0-bd79-4bb2-9c91-c0accf96e60e |
| Workflow ID | c5816326-ae05-43cc-8732-943ace7b5947 |
| Workflow name | sample_backup_workbook.create_backups_workflow |
| Description | |
| Task Execution ID | <none> |
| State | SUCCESS |
| State info | None |
| Created at | 2016-09-09 12:54:03 |
| Updated at | 2016-09-09 12:55:23 |
+-------------------+------------------------------------------------+
To list the execution tasks, run the following command with the execution ID::
root@node-1:~# openstack task execution list 9822a1c0-bd79-4bb2-9c91-c0accf96e60e
+-----..-+------..-+---------------..-+--------------..-+---------+------..-+
| ID .. | Name .. | Workflow name .. | Execution ID .. | State | State.. |
+-----..-+------..-+---------------..-+--------------..-+---------+------..-+
| c4c3.. | analy.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
| c1e0.. | analy.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
| 81de.. | get_a.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
| cd74.. | creat.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
| df6f.. | creat.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
| 8513.. | wait_.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
| fc62.. | delet.. | sample_backup_.. | 9822a1c0-bd79.. | SUCCESS | None .. |
+-----..-+------..-+---------------..-+--------------..-+---------+------..-+
.. _prerequisites:
Prerequisites
-------------
Before you install and start using the Google Cloud Storage (GCS) Fuel plugin,
complete the following steps:
#. Install and set up
`Fuel 9.0 <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide.html>`__.
#. Make sure Internet access is available on Cinder nodes.
#. Download the JSON credentials file for the service account from the `Google Cloud Console <https://console.cloud.google.com/apis/credentials>`_.
Release notes
-------------
The Google Cloud Storage (GCS) Fuel plugin 1.0.0 provides:
* The possibility of using Google Cloud Storage as a backup storage for Cinder backups.
* A sample Mistral workbook for backup process automation.
Requirements
------------
The Google Cloud Storage (GCS) Fuel plugin 1.0.0 has the following requirements:
* Fuel 9.0 with Mitaka support
* Internet connection to Cinder nodes
* Valid credentials for GCS service account
Check `Planning Guide <https://docs.mirantis.com/openstack/fuel/fuel-9.0/mos-planning-guide.html>`__
and `Planning hardware for your OpenStack cluster: the answers to your questions <https://www.mirantis.com/blog/planning-hardware-for-your-openstack-cluster-the-answers-to-your-questions/>`__
for additional requirements to consider.
Troubleshooting
---------------
This section provides guidance on how to verify that Cinder is configured to use
the Google Cloud Storage backup driver and where to look for logs.
Finding logs
^^^^^^^^^^^^
LVM as backend for Cinder volumes
"""""""""""""""""""""""""""""""""
Backup-related Cinder logs can be found in ``/var/log/cinder/cinder-backup.log``
on nodes with *cinder* role.
Ceph as backend for Cinder volumes
""""""""""""""""""""""""""""""""""
Backup-related Cinder logs can be found in ``/var/log/cinder/cinder-backup.log``
on nodes with *controller* role.
Finding configuration files
^^^^^^^^^^^^^^^^^^^^^^^^^^^
LVM as backend for Cinder volumes
"""""""""""""""""""""""""""""""""
Backup-related Cinder parameters are stored in ``/etc/cinder/cinder.conf`` on
*cinder* nodes.
Ceph as backend for Cinder volumes
""""""""""""""""""""""""""""""""""
Backup-related Cinder parameters are stored in ``/etc/cinder/cinder.conf`` on
*controller* nodes.
Verifying GCS Cinder backup driver is enabled
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the GCS Cinder backup driver is enabled, the *[DEFAULT]* section of
*cinder.conf* should contain
``backup_driver = cinder.backup.drivers.google``
and
``backup_gcs_credential_file = /var/lib/cinder/credentials.json``
The file ``/var/lib/cinder/credentials.json`` should contain the same
information as the credentials file downloaded from the Google Cloud Console.
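A quick scripted check of these two options, using Python's ``configparser`` on the contents of ``cinder.conf`` (an illustrative sketch, not part of the plugin):

```python
import configparser


def gcs_backup_enabled(cinder_conf_text):
    """Return True when cinder.conf points at the GCS backup driver."""
    parser = configparser.ConfigParser(strict=False)
    parser.read_string(cinder_conf_text)
    default = parser['DEFAULT']
    # Both options quoted above must be present with the expected values.
    return (default.get('backup_driver') == 'cinder.backup.drivers.google'
            and default.get('backup_gcs_credential_file')
                == '/var/lib/cinder/credentials.json')
```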
attributes:
backup_gcs_project_id:
value: ''
label: 'GCS project ID'
description: ''
weight: 10
type: "text"
regex:
source: '^[A-Za-z\d_-]+$'
error: 'The value should not be empty. Only letters, digits, underscore and dash symbols are allowed.'
backup_gcs_bucket:
value: ''
label: 'Default GCS bucket name'
description: 'Default GCS bucket name to use for backups. The bucket is created if it does not exist. Please refer to the official bucket naming guidelines https://cloud.google.com/storage/docs/naming . Used as the container parameter value when the Cinder CLI or API is invoked for creating a backup.'
weight: 15
type: "text"
regex:
source: '^[A-Za-z\d_-]+$'
error: 'Default GCS bucket name to use for backups. The bucket is created if it does not exist. Please refer to the official bucket naming guidelines https://cloud.google.com/storage/docs/naming . Used as the container parameter value when the Cinder CLI or API is invoked for creating a backup. The value should not be empty. Only letters, digits, underscore and dash symbols are allowed.'
backup_gcs_storage_class:
value: 'NEARLINE'
label: 'GCS storage class'
weight: 40
type: 'select'
description: 'Storage class of GCS bucket'
values:
- data: 'NEARLINE'
label: 'NEARLINE'
- data: 'STANDARD'
label: 'STANDARD'
- data: 'DURABLE_REDUCED_AVAILABILITY'
label: 'DURABLE_REDUCED_AVAILABILITY'
backup_gcs_bucket_location:
type: "text"
weight: 50
value: "US"
label: "GCS bucket location"
description: "Enter GCS bucket location"
regex:
source: '^[A-Za-z\d_-]+$'
error: 'The value should not be empty. Only letters, digits, underscore and dash symbols are allowed.'
gcs_account_type:
label: "GCS Account type"
description: "type parameter value from the GCS credentials file"
type: text
weight: 51
value: 'service_account'
regex:
source: '^[A-Za-z\d_-]+$'
error: 'The value should not be empty. Only letters, digits, underscore and dash symbols are allowed.'
gcs_private_key_id:
label: "Private Key ID"
description: "Private_key_id parameter value from the GCS credentials file."
type: text
weight: 51
value: ''
regex:
source: '^[A-Za-z\d]+$'
error: 'Only alphanumeric characters are allowed'
gcs_private_key:
label: "Private Key"
description: "Private_key parameter value from the GCS credentials file."
type: password
weight: 52
value: ''
regex:
source: '^.+$'
error: 'Should not be empty'
gcs_client_email:
label: "Client E-mail"
description: "Client_email parameter value from the GCS credentials file."
type: text
weight: 53
value: ''
regex:
source: '^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$'
error: 'Please enter correct e-mail address'
gcs_client_id:
label: "Client ID"
description: "Client_id parameter value from the GCS credentials file."
type: text
weight: 54
value: ''
regex:
source: '^\d+$'
error: 'Only digits [0-9] are allowed'
gcs_auth_uri:
label: "Auth URI"
description: "Auth_uri parameter value from the GCS credentials file."
type: text
weight: 55
value: 'https://accounts.google.com/o/oauth2/auth'
regex:
source: '\w+:\/\/[\w\-.\/]+(?::\d+)?[\w\-.\/]+$'
error: 'Please enter valid URI'
gcs_token_uri:
label: "Token URI"
description: "Token_uri parameter value from the GCS credentials file."
type: text
weight: 56
value: 'https://accounts.google.com/o/oauth2/token'
regex:
source: '\w+:\/\/[\w\-.\/]+(?::\d+)?[\w\-.\/]+$'
error: 'Please enter valid URI'
gcs_auth_provider_x509_cert_url:
label: "Auth Provider X509 Cert URL"
description: "Auth_provider_x509_cert_url parameter value from the GCS credentials file."
type: text
weight: 57
value: 'https://www.googleapis.com/oauth2/v1/certs'
regex:
source: '\w+:\/\/[\w\-.\/]+(?::\d+)?[\w\-.\/]+$'
error: 'Please enter valid URI'
gcs_client_x509_cert_url:
label: "Client X509 Cert URL"
description: "Client_x509_cert_url parameter value from the GCS credentials file."
type: text
weight: 58
value: ''
regex:
source: '\w+:\/\/[\w\-.\/]+(?::\d+)?[\w\-.\/%]+$'
error: 'Please enter valid URI'
backup_gcs_advanced_settings:
type: "checkbox"
weight: 60
value: False
label: 'Show advanced settings'
description: 'When selected, all GCS Cinder driver settings are shown'
backup_gcs_object_size:
label: 'GCS Object Size'
description: 'The size of GCS backup objects in bytes; must be a multiple of the GCS block size value, default: 52428800'
type: text
value: '52428800'
weight: 80
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
regex:
source: '^[1-9]\d*$'
error: 'Must be a positive integer'
backup_gcs_block_size:
label: 'GCS Block Size'
description: 'The size in bytes at which changes are tracked for incremental backups, default: 32768'
type: text
value: '32768'
weight: 75
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
regex:
source: '^[1-9]\d*$'
error: 'Must be a positive integer'
backup_gcs_user_agent:
label: 'HTTP User-Agent'
description: 'HTTP User-Agent string for the GCS API, default: gcscinder'
type: text
value: 'gcscinder'
weight: 85
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
backup_gcs_writer_chunk_size:
label: 'GCS writer Chunk Size'
description: 'Chunk size for GCS object uploads in bytes, -1 for single chunk, default value: 2097152, maximum value: 52428800'
type: text
value: '2097152'
weight: 90
regex:
source: '^((-1$)|(([1-9])\d{0,7}$))'
error: 'Must be a positive integer or -1'
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
backup_gcs_reader_chunk_size:
label: 'GCS Reader Chunk Size'
description: 'Chunk size for GCS object downloads in bytes, default: 2097152'
type: text
value: '2097152'
weight: 95
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
regex:
source: '^((-1$)|(([1-9])\d*$))'
error: 'Must be a positive integer or -1'
backup_gcs_retry_error_codes:
label: 'GCS Retry Error Codes'
description: "List of GCS error codes for which to initiate a retry, default: 429"
type: text
value: '429'
weight: 100
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
regex:
source: '^([4-5][0-9]{2})(,\s{1}[4-5][0-9]{2})*$'
error: "Enter list of valid error codes, example: 403, 404, 503"
backup_gcs_num_retries:
label: 'GCS Retries Number'
description: 'Number of times to retry transfers'
type: text
value: '3'
weight: 105
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
regex:
source: '^[1-9]\d{0,2}$'
error: 'Must be an integer in range of 1-999'
backup_gcs_enable_progress_timer:
type: "checkbox"
weight: 110
value: True
label: "GCS progress update timer"
description: "Timer to send the periodic progress notifications to Ceilometer when backing up the volume."
restrictions:
- condition: "not ( settings:fuel-plugin-cinder-gcs.backup_gcs_advanced_settings.value == true )"
action: "hide"
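The form validation above is regex-based; the patterns can be exercised directly, for example the bucket-name and retry-error-codes expressions (copied verbatim from the settings above):

```python
import re

# Pattern used by the bucket name, bucket location and account type fields.
BUCKET_NAME_RE = re.compile(r'^[A-Za-z\d_-]+$')
# Pattern used by the 'GCS Retry Error Codes' field.
RETRY_CODES_RE = re.compile(r'^([4-5][0-9]{2})(,\s{1}[4-5][0-9]{2})*$')

assert BUCKET_NAME_RE.match('my-backup_bucket1')
assert not BUCKET_NAME_RE.match('bad.bucket.name')  # dots are rejected
assert RETRY_CODES_RE.match('429')
assert RETRY_CODES_RE.match('403, 404, 503')        # comma plus one space
assert not RETRY_CODES_RE.match('403,404')          # space after comma required
```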
---
version: '2.0'
name: sample_backup_workbook
description: Sample backup workflows
# Ad-hoc Actions
actions:
send_email:
input:
- to_addrs
- subject
- body
base: std.email
base-input:
to_addrs: <% $.to_addrs %>
subject: <% $.subject %>
body: <% $.body %>
from_addr: 'mistral@localhost'
smtp_server: 'localhost'
smtp_password: 'SECRET'
# Workflows
workflows:
create_backups_workflow:
type: direct
description: Sample workflow to create backups
input:
- projects_id_list: null
- volumes_id_list: null
- incremental: false
- report_to_list: null
- snapshot_name: 'by_create_backups_workflow'
tasks:
analyze_volumes_id_list:
action: std.echo output=<% $.volumes_id_list %>
publish:
volumes_id_list_to_snapshot: <% $.volumes_id_list %>
on-success:
- analyze_projects_id_list: <% isList($.volumes_id_list) = false %>
- create_snapshots: <% isList($.volumes_id_list) %>
analyze_projects_id_list:
action: std.echo output=<% $.projects_id_list %>
on-success:
- get_all_projects_volumes_list: <% $.projects_id_list = null %>
- get_volumes_list: <% $.projects_id_list != null %>
get_all_projects_volumes_list:
action: cinder.volumes_list search_opts=<% {'all_tenants'=>1} %>
publish:
volumes_id_list_to_snapshot: <% task(get_all_projects_volumes_list).result.id %>
on-success:
- create_snapshots: <% task(get_all_projects_volumes_list).result != [] %>
get_volumes_list:
with-items: project_id in <% $.projects_id_list %>
action: cinder.volumes_list search_opts=<% {'all_tenants'=>1,'project_id'=> $.project_id} %>
publish:
volumes_id_list_to_snapshot: <% task(get_volumes_list).result.id %>
on-success:
- create_snapshots: <% task(get_volumes_list).result != [] %>
create_snapshots:
with-items: volume_id_to_snapshot in <% $.volumes_id_list_to_snapshot %>
action: cinder.volume_snapshots_create volume_id=<% $.volume_id_to_snapshot %> force=true name=<% $.snapshot_name %> description='Temporary snapshot created by Mistral for backup purposes'
on-complete: create_backups
create_backups:
with-items:
- snap_id in <% task(create_snapshots).result.id %>
- vol_id in <% task(create_snapshots).result.volume_id %>
action: cinder.backups_create snapshot_id=<% $.snap_id %> volume_id=<% $.vol_id %> incremental=<% $.incremental %>
publish:
backups_id_list: <% task(create_backups).result.id %>
on-complete: wait_for_backups_completion
wait_for_backups_completion:
with-items: backup_id in <% $.backups_id_list %>
action: cinder.backups_get backup_id=<% $.backup_id %>
publish:
snap_id_to_del_list: <% task(wait_for_backups_completion).result.where($.status != 'creating').snapshot_id %>
on-complete: delete_snapshots
wait-before: 10
timeout: 300
retry:
count: 30
delay: 10
continue-on: <% 'creating' in task(wait_for_backups_completion).result.status %>
delete_snapshots:
description: Deletes snapshots for backups which are not in creating state
with-items: snap_id_to_del in <% $.snap_id_to_del_list %>
action: cinder.volume_snapshots_delete snapshot=<% $.snap_id_to_del %>
on-success:
- report_success: <% ( $.volumes_id_list_to_snapshot.len() = $.snap_id_to_del_list.len() ) and ( $.report_to_list != null ) %>
- report_error: <% ( $.volumes_id_list_to_snapshot.len() != $.snap_id_to_del_list.len() ) and ( $.report_to_list != null ) %>
on-error:
- report_error: <% $.report_to_list != null %>
report_error:
action: send_email
input:
to_addrs: <% $.report_to_list %>
subject: 'Sample backup workflow - Error'
body: |
Hi,
Please take a look at Mistral Dashboard to find out what's wrong
with your workflow execution <% execution().id %>.
Everything's going to be alright!
-- Regards, Sample backup workflow.
report_success:
action: send_email
input:
to_addrs: <% $.report_to_list %>
subject: 'Sample backup workflow - Success'
body: |
Hi,
The backups have been created.
-- Regards, Sample backup workflow.
# Plugin name
name: fuel-plugin-cinder-gcs
title: Fuel Cinder GCS plugin
# Plugin version
version: '1.0.0'
# Description
description: The plugin allows using Google Cloud Storage as a backend for Cinder backup.
# Required fuel version
fuel_version: ['9.0']
licenses: ['Apache License Version 2.0']
authors: ['Mirantis Inc.']
homepage: 'https://github.com/openstack/fuel-plugin-cinder-gcs'
groups: ['storage::cinder']
is_hotpluggable: false
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: mitaka-9.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
# Version of plugin package
package_version: '4.0.0'
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
Subproject commit e1ae7be2b30e1e27d0b7e6a3a2ae98909734044b
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base module whose classes and methods are used in test cases."""
import os
from proboscis.asserts import assert_equal
from proboscis.asserts import assert_false
from fuelweb_test.tests.base_test_case import TestBasic
from fuelweb_test import logger
from fuelweb_test.helpers import utils
from helpers import gcs_settings
class GcsTestBase(TestBasic):
"""GcsTestBase.
Base class for GCS verification testing, methods in this class will be used
by test cases.
"""
# TODO(unknown) documentation
def get_remote(self, node):
"""Return a remote connection to the given node."""
logger.info('Getting a remote to {0}'.format(node))
if node == 'master':
environment = self.env
remote = environment.d_env.get_admin_remote()
else:
remote = self.fuel_web.get_ssh_for_node(node)
return remote
def install_plugin(self):
"""Install the plugin on the Fuel Master node."""
master_remote = self.get_remote('master')
utils.upload_tarball(master_remote.host,
os.environ['GCS_PLUGIN_PATH'],
'/var')
utils.install_plugin_check_code(
master_remote.host,
os.path.basename(os.environ['GCS_PLUGIN_PATH']))
def verify_defaults(self, cluster_id):
"""Method designed to verify plugin default values."""
attr = self.fuel_web.client.get_cluster_attributes(cluster_id)
assert_false(
attr['editable'][gcs_settings.plugin_name]['metadata']['enabled'],
'Plugin should be disabled by default.')
# attr value is being assigned twice in order to fit PEP8 restriction:
# lines in file can not be longer than 80 characters.
attr = attr['editable'][gcs_settings.plugin_name]['metadata']
attr = attr['versions'][0]
error_list = []
for key in gcs_settings.default_values.keys():
next_key = 'value'
if key == 'metadata':
next_key = 'hot_pluggable'
msg = 'Default value is incorrect, got {} = {} instead of {}'
try:
assert_equal(gcs_settings.default_values[key],
attr[key][next_key],
msg.format(key,
attr[key][next_key],
gcs_settings.default_values[key]))
except AssertionError as e:
error_list.append(''.join(('\n', str(e))))
error_msg = ''.join(error_list)
assert_equal(len(error_msg), 0, error_msg)
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Module with GCS plugin settings."""
import os
plugin_name = 'fuel-plugin-cinder-gcs'
plugin_version = '1.0.0'
default_tenant = 'gcs'
default_user = 'gcs'
default_user_pass = 'gcs123GCS'
options = {
'backup_gcs_bucket_location/value': os.environ['GCS_LOCATION'],
'backup_gcs_bucket/value': os.environ['GCS_BUCKET_NAME'],
'backup_gcs_project_id/value': os.environ['GCS_PROJECT_ID'],
'gcs_private_key/value': os.environ['GCS_PRIVATE_KEY'],
'gcs_private_key_id/value': os.environ['GCS_KEY_ID'],
'gcs_client_x509_cert_url/value': os.environ['GCS_CERT_URL'],
'gcs_client_email/value': os.environ['GCS_CLIENT_EMAIL'],
'gcs_client_id/value': os.environ['GCS_CLIENT_ID']
}
default_values = {
'backup_gcs_advanced_settings': False,
'backup_gcs_enable_progress_timer': True,
'backup_gcs_retry_error_codes': '429',
'backup_gcs_writer_chunk_size': '2097152',
'backup_gcs_bucket_location': 'US',
'backup_gcs_bucket': '',
'backup_gcs_project_id': '',
'backup_gcs_block_size': '32768',
'backup_gcs_object_size': '52428800',
'backup_gcs_storage_class': 'NEARLINE',
'backup_gcs_user_agent': 'gcscinder',
'backup_gcs_reader_chunk_size': '2097152',
'backup_gcs_num_retries': '3',
'metadata': False,
'gcs_private_key': '',
'gcs_private_key_id': '',
'gcs_token_uri': 'https://accounts.google.com/o/oauth2/token',
'gcs_client_x509_cert_url': '',
'gcs_auth_provider_x509_cert_url': 'https://www.googleapis.com/'
'oauth2/v1/certs',
'gcs_client_email': '',
'gcs_auth_uri': 'https://accounts.google.com/o/oauth2/auth',
'gcs_client_id': '',
'gcs_account_type': 'service_account'
}
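The options dict above indexes os.environ directly, so a missing GCS_* variable surfaces as a bare KeyError at import time. A hedged sketch of the same lookup with an explicit precondition check (load_gcs_options is a hypothetical helper; only three of the eight variables are shown):

```python
import os

REQUIRED_VARS = ('GCS_LOCATION', 'GCS_BUCKET_NAME', 'GCS_PROJECT_ID')


def load_gcs_options(env=None):
    """Build a subset of the plugin options, failing early if vars are absent."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if name not in env]
    if missing:
        raise RuntimeError('Missing environment variables: '
                           + ', '.join(missing))
    return {
        'backup_gcs_bucket_location/value': env['GCS_LOCATION'],
        'backup_gcs_bucket/value': env['GCS_BUCKET_NAME'],
        'backup_gcs_project_id/value': env['GCS_PROJECT_ID'],
    }


opts = load_gcs_options({'GCS_LOCATION': 'US', 'GCS_BUCKET_NAME': 'bkt',
                         'GCS_PROJECT_ID': 'proj'})
```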


@@ -1,76 +0,0 @@
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import os
import re
from nose.plugins import Plugin
from paramiko.transport import _join_lingering_threads
class CloseSSHConnectionsPlugin(Plugin):
"""Closes all paramiko's ssh connections after each test case.
The plugin works around proboscis's inability to run cleanups of any kind.
'afterTest' calls _join_lingering_threads function from paramiko,
which stops all threads (set the state to inactive and joins for 10s)
"""
name = 'closesshconnections'
def options(self, parser, env=os.environ):
"""Options."""
super(CloseSSHConnectionsPlugin, self).options(parser, env=env)
def configure(self, options, conf):
"""Configure env."""
super(CloseSSHConnectionsPlugin, self).configure(options, conf)
self.enabled = True
def afterTest(self, *args, **kwargs):
"""After_Test.
After_Test calls _join_lingering_threads function from paramiko,
which stops all threads (set the state to inactive and joins for 10s).
"""
_join_lingering_threads()
def import_tests():
"""Import test suite of project."""
from tests import test_smoke_bvt
from tests import test_gcs_gui
from tests import test_integration
from tests import test_functional
def run_tests():
"""Run test cases."""
from proboscis import TestProgram # noqa
import_tests()
# Run Proboscis and exit.
TestProgram(
addplugins=[CloseSSHConnectionsPlugin()]
).run_and_exit()
if __name__ == '__main__':
sys.path.append(sys.path[0] + "/fuel-qa")
import_tests()
from fuelweb_test.helpers.patching import map_test
if any(re.search(r'--group=patching_master_tests', arg)
for arg in sys.argv):
map_test('master')
elif any(re.search(r'--group=patching.*', arg) for arg in sys.argv):
map_test('environment')
run_tests()
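The __main__ block above picks a patching mode by regex-scanning sys.argv; the selection logic can be isolated like this (select_patching_mode is a hypothetical name for illustration):

```python
import re


def select_patching_mode(argv):
    """Mirror the argv scan above: master tests first, then any patching group."""
    if any(re.search(r'--group=patching_master_tests', arg) for arg in argv):
        return 'master'
    if any(re.search(r'--group=patching.*', arg) for arg in argv):
        return 'environment'
    return None
```

Order matters here: --group=patching_master_tests also matches the broader patching.* pattern, so the specific check must run first, exactly as in the if/elif chain above.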


@@ -1,13 +0,0 @@
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,559 +0,0 @@
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Module with set of basic test cases."""
from proboscis import test
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.tests.base_test_case import SetupEnvironment
from helpers.gcs_base import GcsTestBase
from helpers import gcs_settings
from tests.test_plugin_check import TestPluginCheck
@test(groups=["gcs_functional_tests"])
class GcsTestClass(GcsTestBase):
"""GcsTestBase.""" # TODO(unknown) documentation
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_delete_add_controller"])
@log_snapshot_after_test
def gcs_delete_add_controller(self):
"""Delete a controller node and add again.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 1 controller
* 2 controller+ceph-osd
* 1 compute+ceph-osd
* 1 compute
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
8. Delete node with controller role
9. Deploy changes
10. Run OSTF
11. Verify GCS plugin
12. Add a node with controller role
13. Deploy changes
14. Run OSTF
15. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
'images_ceph': True,
'volumes_ceph': True,
'ephemeral_ceph': True,
'objects_ceph': True,
'volumes_lvm': False
}
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['controller', 'ceph-osd'],
'slave-03': ['ceph-osd', 'controller'],
'slave-04': ['ceph-osd', 'compute'],
'slave-05': ['compute'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(7)
TestPluginCheck(self).plugin_check()
self.show_step(8)
self.fuel_web.update_nodes(
cluster_id, {'slave-01': ['controller']},
pending_addition=False, pending_deletion=True)
self.show_step(9)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(10)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(11)
TestPluginCheck(self).plugin_check()
self.show_step(12)
self.fuel_web.update_nodes(
cluster_id, {'slave-01': ['controller']})
self.show_step(13)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(14)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(15)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_3],
groups=["gcs_delete_add_compute"])
@log_snapshot_after_test
def gcs_delete_add_compute(self):
"""Delete a compute node and add again.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 1 controller
* 1 compute+cinder
* 1 compute
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
8. Delete a node with compute role
9. Deploy changes
10. Run OSTF
11. Verify GCS plugin
12. Add a node with compute role
13. Deploy changes
14. Run OSTF
15. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_3_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
"net_provider": 'neutron',
"net_segment_type": 'tun',
}
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['compute', 'cinder'],
'slave-03': ['compute'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity'])
self.show_step(7)
TestPluginCheck(self).plugin_check()
self.show_step(8)
self.fuel_web.update_nodes(
cluster_id, {'slave-03': ['compute']},
pending_addition=False, pending_deletion=True)
self.show_step(9)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(10)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity'])
self.show_step(11)
TestPluginCheck(self).plugin_check()
self.show_step(12)
self.fuel_web.update_nodes(
cluster_id, {'slave-03': ['compute']})
self.show_step(13)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(14)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity'])
self.show_step(15)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_3],
groups=["gcs_delete_add_cinder"])
@log_snapshot_after_test
def gcs_delete_add_cinder(self):
"""Delete a cinder node and add again.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 1 controller+cinder
* 1 compute+cinder
* 1 cinder
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
8. Delete a node with cinder role
9. Deploy changes
10. Run OSTF
11. Verify GCS plugin
12. Add a node with cinder role
13. Deploy changes
14. Run OSTF
15. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_3_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller', 'cinder'],
'slave-02': ['compute', 'cinder'],
'slave-03': ['cinder'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity'])
self.show_step(7)
TestPluginCheck(self).plugin_check()
self.show_step(8)
self.fuel_web.update_nodes(
cluster_id, {'slave-03': ['cinder']},
pending_addition=False, pending_deletion=True)
self.show_step(9)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(10)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity'])
self.show_step(11)
TestPluginCheck(self).plugin_check()
self.show_step(12)
self.fuel_web.update_nodes(
cluster_id, {'slave-03': ['cinder']})
self.show_step(13)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(14)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity'])
self.show_step(15)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_3],
groups=["gcs_delete_add_single_cinder"])
@log_snapshot_after_test
def gcs_delete_add_single_cinder(self):
"""Delete the only cinder node and add again.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 1 controller
* 1 compute
* 1 cinder
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
8. Delete a node with cinder role
9. Deploy changes
10. Run OSTF
11. Add a node with cinder role
12. Deploy changes
13. Run OSTF
14. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_3_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
"net_provider": 'neutron',
"net_segment_type": 'tun',
}
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['compute'],
'slave-03': ['cinder'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity'])
self.show_step(7)
TestPluginCheck(self).plugin_check()
self.show_step(8)
self.fuel_web.update_nodes(
cluster_id, {'slave-03': ['cinder']},
pending_addition=False, pending_deletion=True)
self.show_step(9)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(10)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity'])
self.show_step(11)
self.fuel_web.update_nodes(
cluster_id, {'slave-03': ['cinder']})
self.show_step(12)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(13)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
should_fail=1,
failed_test_name=['Check that required services are running'],
test_sets=['smoke', 'sanity'])
self.show_step(14)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_add_ceph"])
@log_snapshot_after_test
def gcs_add_ceph(self):
"""Adding a ceph-osd node.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 3 controller+ceph-osd
* 1 compute+ceph-osd
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
8. Add a node with compute+ceph-osd roles
9. Deploy changes
10. Run OSTF
11. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
'images_ceph': True,
'volumes_ceph': True,
'ephemeral_ceph': True,
'objects_ceph': True,
'volumes_lvm': False,
"net_provider": 'neutron',
"net_segment_type": 'tun',
}
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller', 'ceph-osd'],
'slave-02': ['controller', 'ceph-osd'],
'slave-03': ['controller', 'ceph-osd'],
'slave-04': ['ceph-osd', 'compute'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(7)
TestPluginCheck(self).plugin_check()
self.show_step(8)
self.fuel_web.update_nodes(
cluster_id, {'slave-05': ['ceph-osd', 'compute']})
self.show_step(9)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(10)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(11)
TestPluginCheck(self).plugin_check()


@@ -1,71 +0,0 @@
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Module with ui defaults verification test."""
from proboscis import test
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.tests.base_test_case import SetupEnvironment
from helpers.gcs_base import GcsTestBase
from helpers import gcs_settings
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test import logger
@test(groups=["test_gcs_all"])
class TestGCSPlugin(GcsTestBase):
"""TestGCSPlugin.""" # TODO(unknown) documentation
@test(depends_on=[SetupEnvironment.prepare_slaves_3],
groups=["gcs_gui_defaults"])
@log_snapshot_after_test
def gcs_gui_defaults(self):
"""Create non HA cluster with GCS plugin installed.
Scenario:
1. Create cluster
2. Install GCS plugin
3. Create cluster
4. Verify default values
"""
self.env.revert_snapshot("ready_with_3_slaves")
logger.info('Creating GCS non HA cluster...')
segment_type = 'vlan'
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings={
"net_provider": 'neutron',
"net_segment_type": segment_type,
'tenant': gcs_settings.default_tenant,
'user': gcs_settings.default_user,
'password': gcs_settings.default_user_pass,
'assign_to_all_nodes': True
}
)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['compute'],
'slave-03': ['cinder']
}
)
self.install_plugin()
self.verify_defaults(cluster_id)
self.env.make_snapshot("gcs_gui_defaults")


@@ -1,286 +0,0 @@
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Module with set of basic test cases."""
from proboscis import test
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.tests.base_test_case import SetupEnvironment
from helpers.gcs_base import GcsTestBase
from helpers import gcs_settings
from tests.test_plugin_check import TestPluginCheck
@test(groups=["gcs_integration_tests"])
class GcsTestClass(GcsTestBase):
"""GcsTestBase.""" # TODO(unknown) documentation
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_ceph"])
@log_snapshot_after_test
def gcs_ceph(self):
"""Deploy with GCS plugin and CEPH standalone roles.
Scenario:
1. Install GCS plugin
2. Create an environment with tunneling segmentation
3. Add a node with controller role
4. Add a node with compute role
5. Add 3 nodes with Ceph-OSD roles
6. Configure GCS plugin
7. Deploy the cluster
8. Run OSTF
9. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
"net_provider": 'neutron',
"net_segment_type": 'tun',
'images_ceph': True,
'volumes_ceph': True,
'ephemeral_ceph': True,
'objects_ceph': True,
'volumes_lvm': False
}
)
self.show_step(3)
self.show_step(4)
self.show_step(5)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['compute'],
'slave-03': ['ceph-osd'],
'slave-04': ['ceph-osd'],
'slave-05': ['ceph-osd'],
}
)
self.show_step(6)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(7)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(8)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity'])
self.show_step(9)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_cinder_multirole"])
@log_snapshot_after_test
def gcs_cinder_multirole(self):
"""Deploy with GCS plugin and cinder multirole.
Scenario:
1. Install GCS plugin
2. Create an environment with tunneling segmentation
3. Add 3 nodes with controller+cinder roles
4. Add 2 nodes with compute+cinder roles
5. Configure GCS plugin
6. Deploy the cluster
7. Run OSTF
8. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
"net_provider": 'neutron',
"net_segment_type": 'tun',
}
)
self.show_step(3)
self.show_step(4)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller', 'cinder'],
'slave-02': ['controller', 'cinder'],
'slave-03': ['controller', 'cinder'],
'slave-04': ['compute', 'cinder'],
'slave-05': ['compute', 'cinder'],
}
)
self.show_step(5)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(6)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(7)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(8)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_cinder_ceph_multirole"])
@log_snapshot_after_test
def gcs_cinder_ceph_multirole(self):
"""Deploy with GCS plugin and cinder+Ceph-OSD multiroles.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 1 controller + ceph + cinder
* 1 controller + ceph
* 1 controller + cinder
* 1 compute + ceph + cinder
* 1 compute
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={'images_ceph': True}
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller', 'cinder', 'ceph-osd'],
'slave-02': ['controller', 'cinder'],
'slave-03': ['controller', 'ceph-osd'],
'slave-04': ['compute', 'cinder', 'ceph-osd'],
'slave-05': ['compute', 'cinder'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(7)
TestPluginCheck(self).plugin_check()
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_ceilometer"])
@log_snapshot_after_test
def gcs_ceilometer(self):
"""Deploy an environment with GCS plugin and ceilometer.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add following nodes:
* 1 controller + mongo-db
* 1 mongo-db
* 1 cinder + mongo-db
* 2 compute
4. Configure GCS plugin
5. Deploy the cluster
6. Run OSTF
7. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={'ceilometer': True}
)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller', 'mongo'],
'slave-02': ['mongo'],
'slave-03': ['cinder', 'mongo'],
'slave-04': ['compute'],
'slave-05': ['compute'],
}
)
self.show_step(4)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(6)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'tests_platform'])
self.show_step(7)
TestPluginCheck(self).plugin_check()


@@ -1,91 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
from proboscis.asserts import assert_true
from devops.helpers.helpers import wait
from fuelweb_test import logger
from fuelweb_test.helpers import os_actions
from fuelweb_test.settings import SERVTEST_PASSWORD
from fuelweb_test.settings import SERVTEST_TENANT
from fuelweb_test.settings import SERVTEST_USERNAME
from helpers.gcs_settings import options
class TestPluginCheck(object):
"""Test suite for GCS plugin check."""
def __init__(self, obj):
"""Create Test client for run tests.
:param obj: Test case object
"""
self.obj = obj
cluster_id = self.obj.fuel_web.get_last_created_cluster()
ip = self.obj.fuel_web.get_public_vip(cluster_id)
self.os_conn = os_actions.OpenStackActions(
ip, SERVTEST_USERNAME, SERVTEST_PASSWORD, SERVTEST_TENANT)
def plugin_check(self):
"""TestPluginCheck test suite.
Scenario:
1. Create volume
2. Create backup
3. Verify type of backup
4. Restore volume from backup
5. Delete backup
6. Delete volumes
Duration 5 min
"""
os_cinder = self.os_conn.cinder
os_volumes = os_cinder.volumes
logger.info('#' * 10 +
' Run check_create_backup_and_restore ' +
'#' * 10)
logger.info('Create volume ...')
volume = os_volumes.create(size=1)
wait(lambda: os_volumes.get(volume.id).status == 'available',
timeout=120, timeout_msg='Volume is not created')
logger.info('Create backup ...')
backup = os_cinder.backups.create(volume.id)
wait(lambda: os_cinder.backups.get(backup.id).status == 'available',
timeout=600, timeout_msg='Backup is not created')
logger.info('Verify type of backup ...')
assert_true(backup.container == options['backup_gcs_bucket/value'],
"This doesn't look like GCS backup")
logger.info('Restore volume from backup ...')
restore = os_cinder.restores.restore(backup.id)
wait(lambda: os_volumes.get(restore.volume_id).status == 'available',
timeout=600, timeout_msg='Backup is not restored')
logger.info('Delete backup ...')
os_cinder.backups.delete(backup.id)
wait(lambda: len(os_cinder.backups.list()) == 0,
timeout=600, timeout_msg='Backup is not deleted')
logger.info('Delete volumes ...')
os_volumes.delete(restore.volume_id)
os_volumes.delete(volume.id)
wait(lambda: len(os_volumes.list()) == 0,
timeout=600, timeout_msg='Volumes are not deleted')
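wait(...) used throughout plugin_check comes from fuel-devops helpers; a minimal stand-in shows the predicate-plus-timeout contract the checks rely on (a sketch under that assumption, not the real devops implementation):

```python
import time


def wait(predicate, timeout=60, interval=1, timeout_msg='Timed out'):
    """Poll predicate() until it returns a truthy value or timeout elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise AssertionError(timeout_msg)


# Example: a resource that becomes "available" on the third poll.
state = {'polls': 0}


def _available():
    state['polls'] += 1
    return state['polls'] >= 3


wait(_available, timeout=5, interval=0.01)
```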


@@ -1,149 +0,0 @@
# Copyright 2016 Mirantis, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Module with set of basic test cases."""
from proboscis import test
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.tests.base_test_case import SetupEnvironment
from helpers.gcs_base import GcsTestBase
from helpers import gcs_settings
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test import logger
from tests.test_plugin_check import TestPluginCheck
@test(groups=["gcs_smoke_bvt_tests"])
class GcsTestClass(GcsTestBase):
"""GcsTestBase.""" # TODO(unknown) documentation
@test(depends_on=[SetupEnvironment.prepare_slaves_3],
groups=["gcs_smoke"])
@log_snapshot_after_test
def gcs_smoke(self):
"""Deploy non HA cluster with GCS plugin installed and enabled.
Scenario:
1. Create cluster
2. Add 1 node with controller role
3. Add 1 node with compute role
4. Add 1 node with cinder role
5. Install GCS plugin
6. Deploy the cluster
"""
self.env.revert_snapshot("ready_with_3_slaves")
logger.info('Creating GCS non HA cluster...')
segment_type = 'vlan'
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings={
"net_provider": 'neutron',
"net_segment_type": segment_type,
'tenant': gcs_settings.default_tenant,
'user': gcs_settings.default_user,
'password': gcs_settings.default_user_pass,
'assign_to_all_nodes': True
}
)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['compute'],
'slave-03': ['cinder']
}
)
self.install_plugin()
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.env.make_snapshot("gcs_smoke")
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["gcs_bvt"])
@log_snapshot_after_test
def gcs_bvt(self):
"""Deploy HA cluster with GCS plugin installed and enabled.
Scenario:
1. Install GCS plugin
2. Create an environment
3. Add 3 nodes with controller+ceph-osd roles
4. Add 2 nodes with compute+ceph-osd role
5. Configure GCS plugin
6. Deploy the cluster
7. Run OSTF
8. Verify GCS plugin
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_plugin()
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
settings={
'images_ceph': True,
'volumes_ceph': True,
'ephemeral_ceph': True,
'objects_ceph': True,
'volumes_lvm': False
}
)
self.show_step(3)
self.show_step(4)
self.fuel_web.update_nodes(
cluster_id,
{
'slave-01': ['controller', 'ceph-osd'],
'slave-02': ['controller', 'ceph-osd'],
'slave-03': ['controller', 'ceph-osd'],
'slave-04': ['compute', 'ceph-osd'],
'slave-05': ['compute', 'ceph-osd'],
}
)
self.show_step(5)
self.fuel_web.update_plugin_settings(cluster_id,
gcs_settings.plugin_name,
gcs_settings.plugin_version,
gcs_settings.options)
self.show_step(6)
self.fuel_web.deploy_cluster_wait(
cluster_id,
check_services=False
)
self.show_step(7)
self.fuel_web.run_ostf(
cluster_id=cluster_id,
test_sets=['smoke', 'sanity', 'ha'])
self.show_step(8)
TestPluginCheck(self).plugin_check()


@@ -1,487 +0,0 @@
#!/bin/sh
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# functions
INVALIDOPTS_ERR=100
NOJOBNAME_ERR=101
NOISOPATH_ERR=102
NOTASKNAME_ERR=103
NOWORKSPACE_ERR=104
DEEPCLEAN_ERR=105
MAKEISO_ERR=106
NOISOFOUND_ERR=107
COPYISO_ERR=108
SYMLINKISO_ERR=109
CDWORKSPACE_ERR=110
ISODOWNLOAD_ERR=111
INVALIDTASK_ERR=112
# Defaults
export REBOOT_TIMEOUT=${REBOOT_TIMEOUT:-5000}
export ALWAYS_CREATE_DIAGNOSTIC_SNAPSHOT=${ALWAYS_CREATE_DIAGNOSTIC_SNAPSHOT:-true}
# Export specified settings
if [ -z "$OPENSTACK_RELEASE" ]; then export OPENSTACK_RELEASE=Ubuntu; fi
if [ -z "$ENV_NAME" ]; then export ENV_NAME="gcs"; fi
if [ -z "$ADMIN_NODE_MEMORY" ]; then export ADMIN_NODE_MEMORY=4096; fi
if [ -z "$ADMIN_NODE_CPU" ]; then export ADMIN_NODE_CPU=4; fi
if [ -z "$SLAVE_NODE_MEMORY" ]; then export SLAVE_NODE_MEMORY=4096; fi
if [ -z "$SLAVE_NODE_CPU" ]; then export SLAVE_NODE_CPU=4; fi
# Init and update submodule
git submodule update --init --recursive --remote
#sudo /sbin/iptables -F
#sudo /sbin/iptables -t nat -F
#sudo /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ShowHelp() {
cat << EOF
System Tests Script
It can perform several actions depending on the Jenkins JOB_NAME it is run from,
or it can take names from exported environment variables or command-line options
if you need to override them.
-w (dir) - Path to workspace where fuelweb git repository was checked out.
Uses Jenkins' WORKSPACE if not set
-e (name) - Directly specify environment name used in tests
Uses ENV_NAME variable if set.
-j (name) - Name of this job. Determines the ISO name and task name, and is used by tests.
Uses Jenkins' JOB_NAME if not set
-v - Do not use virtual environment
-V (dir) - Path to python virtual environment
-i (file) - Full path to ISO file to build or use for tests.
Made from iso dir and name if not set.
-t (name) - Name of task this script should perform. Should be one of defined ones.
Taken from Jenkins' job's suffix if not set.
-o (str) - Allows you any extra command line option to run test job if you
want to use some parameters.
-a (str) - Allows you to pass NOSE_ATTR to the test job if you want
to use some parameters.
-A (str) - Allows you to pass NOSE_EVAL_ATTR if you want to enter attributes
as python expressions.
-m (name) - Use this mirror to build ISO from.
Uses 'srt' if not set.
-U - ISO URL for tests.
Null by default.
-r (yes/no) - Should the built ISO file be placed with a build number tag and
symlinked to the last build or just copied over the last file.
-b (num) - Allows you to override Jenkins' build number if you need to.
-l (dir) - Path to logs directory. Can be set by LOGS_DIR environment variable.
Uses WORKSPACE/logs if not set.
-d - Dry run mode. Only show what would be done and do nothing.
Useful for debugging.
-k - Keep previously created test environment before tests run
-K - Keep test environment after tests are finished
-h - Show this help page
Most variables use guessing from Jenkins' job name but can be overridden
by an exported variable before the script is run or by one of the command line options.
You can override following variables using export VARNAME="value" before running this script
WORKSPACE - path to directory where Fuelweb repository was checked out by Jenkins or manually
JOB_NAME - name of Jenkins job that determines which task should be done and ISO file name.
If task name is "iso" it will make iso file
Other defined names will run Nose tests using previously built ISO file.
ISO file name is taken from job name prefix
Task name is taken from job name suffix
Separator is one dot '.'
For example if JOB_NAME is:
mytest.somestring.iso
ISO name: mytest.iso
Task name: iso
If ran with such JOB_NAME iso file with name mytest.iso will be created
If JOB_NAME is:
mytest.somestring.node
ISO name: mytest.iso
Task name: node
If script was run with this JOB_NAME node tests will be using ISO file mytest.iso.
First you should run mytest.somestring.iso job to create mytest.iso.
Then you can ran mytest.somestring.node job to start tests using mytest.iso and other tests too.
EOF
}
GlobalVariables() {
# where built iso's should be placed
# use hardcoded default if not set before by export
ISO_DIR="${ISO_DIR:=/var/www/fuelweb-iso}"
# name of iso file
# taken from jenkins job prefix
# if not set before by variable export
if [ -z "${ISO_NAME}" ]; then
ISO_NAME="${JOB_NAME%.*}.iso"
fi
# full path where iso file should be placed
# make from iso name and path to iso shared directory
# if it was not overridden by options or export
if [ -z "${ISO_PATH}" ]; then
ISO_PATH="${ISO_DIR}/${ISO_NAME}"
fi
# what task should be run
# it's taken from jenkins job name suffix if not set by options
if [ -z "${TASK_NAME}" ]; then
TASK_NAME="${JOB_NAME##*.}"
fi
# do we want to keep iso's for each build or just copy over single file
ROTATE_ISO="${ROTATE_ISO:=yes}"
# choose mirror to build iso from. Default is 'srt' for Saratov's mirror
# you can change mirror by exporting USE_MIRROR variable before running this script
USE_MIRROR="${USE_MIRROR:=srt}"
# only show what commands would be executed but do nothing
# this feature is useful if you want to debug this script's behaviour
DRY_RUN="${DRY_RUN:=no}"
VENV="${VENV:=yes}"
}
GetoptsVariables() {
while getopts ":w:j:i:t:o:a:A:m:U:r:b:V:l:dkKe:v:h" opt; do
case $opt in
w)
WORKSPACE="${OPTARG}"
;;
j)
JOB_NAME="${OPTARG}"
;;
i)
ISO_PATH="${OPTARG}"
;;
t)
TASK_NAME="${OPTARG}"
;;
o)
TEST_OPTIONS="${TEST_OPTIONS} ${OPTARG}"
;;
a)
NOSE_ATTR="${OPTARG}"
;;
A)
NOSE_EVAL_ATTR="${OPTARG}"
;;
m)
USE_MIRROR="${OPTARG}"
;;
U)
ISO_URL="${OPTARG}"
;;
r)
ROTATE_ISO="${OPTARG}"
;;
b)
BUILD_NUMBER="${OPTARG}"
;;
V)
VENV_PATH="${OPTARG}"
;;
l)
LOGS_DIR="${OPTARG}"
;;
k)
KEEP_BEFORE="yes"
;;
K)
KEEP_AFTER="yes"
;;
e)
ENV_NAME="${OPTARG}"
;;
d)
DRY_RUN="yes"
;;
v)
VENV="no"
;;
h)
ShowHelp
exit 0
;;
\?)
echo "Invalid option: -$OPTARG"
ShowHelp
exit $INVALIDOPTS_ERR
;;
:)
echo "Option -$OPTARG requires an argument."
ShowHelp
exit $INVALIDOPTS_ERR
;;
esac
done
}
CheckVariables() {
if [ -z "${JOB_NAME}" ]; then
echo "Error! JOB_NAME is not set!"
exit $NOJOBNAME_ERR
fi
if [ -z "${ISO_PATH}" ]; then
echo "Error! ISO_PATH is not set!"
exit $NOISOPATH_ERR
fi
if [ -z "${TASK_NAME}" ]; then
echo "Error! TASK_NAME is not set!"
exit $NOTASKNAME_ERR
fi
if [ -z "${WORKSPACE}" ]; then
echo "Error! WORKSPACE is not set!"
exit $NOWORKSPACE_ERR
fi
}
MakeISO() {
# Create iso file to be used in tests
# clean previous garbage
if [ "${DRY_RUN}" = "yes" ]; then
echo make deep_clean
else
make deep_clean
fi
ec="${?}"
if [ "${ec}" -gt "0" ]; then
echo "Error! Deep clean failed!"
exit $DEEPCLEAN_ERR
fi
# create ISO file
export USE_MIRROR
if [ "${DRY_RUN}" = "yes" ]; then
echo make iso
else
make iso
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error making ISO!"
exit $MAKEISO_ERR
fi
if [ "${DRY_RUN}" = "yes" ]; then
ISO="${WORKSPACE}/build/iso/fuel.iso"
else
ISO="$(find "${WORKSPACE}/build/iso/"*".iso" | head -n 1)"
# check that ISO file exists
if [ ! -f "${ISO}" ]; then
echo "Error! ISO file not found!"
exit $NOISOFOUND_ERR
fi
fi
# copy ISO file to storage dir
# if rotation is enabled and build number is available
# save iso to tagged file and symlink to the last build
# if rotation is not enabled just copy iso to iso_dir
if [ "${ROTATE_ISO}" = "yes" -a "${BUILD_NUMBER}" != "" ]; then
# copy iso file to shared dir with revision tagged name
# note: "%.iso" strips the trailing .iso suffix ("#.iso" would strip a prefix and never matches)
NEW_BUILD_ISO_PATH="${ISO_PATH%.iso}_${BUILD_NUMBER}.iso"
if [ "${DRY_RUN}" = "yes" ]; then
echo cp "${ISO}" "${NEW_BUILD_ISO_PATH}"
else
cp "${ISO}" "${NEW_BUILD_ISO_PATH}"
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Copy ${ISO} to ${NEW_BUILD_ISO_PATH} failed!"
exit $COPYISO_ERR
fi
# create symlink to the last built ISO file
if [ "${DRY_RUN}" = "yes" ]; then
echo ln -sf "${NEW_BUILD_ISO_PATH}" "${ISO_PATH}"
else
ln -sf "${NEW_BUILD_ISO_PATH}" "${ISO_PATH}"
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Create symlink from ${NEW_BUILD_ISO_PATH} to ${ISO_PATH} failed!"
exit $SYMLINKISO_ERR
fi
else
# just copy file to shared dir
if [ "${DRY_RUN}" = "yes" ]; then
echo cp "${ISO}" "${ISO_PATH}"
else
cp "${ISO}" "${ISO_PATH}"
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Copy ${ISO} to ${ISO_PATH} failed!"
exit $COPYISO_ERR
fi
fi
if [ "${ec}" -gt "0" ]; then
echo "Error! Copy ISO from ${ISO} to ${ISO_PATH} failed!"
exit $COPYISO_ERR
fi
echo "Finished building ISO: ${ISO_PATH}"
exit 0
}
CdWorkSpace() {
# chdir into workspace or fail if could not
if [ "${DRY_RUN}" != "yes" ]; then
cd "${WORKSPACE}"
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Cannot cd to WORKSPACE!"
exit $CDWORKSPACE_ERR
fi
else
echo cd "${WORKSPACE}"
fi
}
RunTest() {
# Run test selected by task name
# check if iso file exists
if [ ! -f "${ISO_PATH}" ]; then
if [ -z "${ISO_URL}" -a "${DRY_RUN}" != "yes" ]; then
echo "Error! File ${ISO_PATH} not found and no ISO_URL (-U key) for downloading!"
exit $NOISOFOUND_ERR
else
if [ "${DRY_RUN}" = "yes" ]; then
echo wget -c "${ISO_URL}" -O "${ISO_PATH}"
else
echo "No ${ISO_PATH} found. Trying to download file."
wget -c "${ISO_URL}" -O "${ISO_PATH}"
rc=$?
if [ $rc -ne 0 ]; then
echo "Failed to fetch ISO from ${ISO_URL}"
exit $ISODOWNLOAD_ERR
fi
fi
fi
fi
if [ -z "${VENV_PATH}" ]; then
VENV_PATH="/home/jenkins/venv-nailgun-tests"
fi
# run python virtualenv
if [ "${VENV}" = "yes" ]; then
if [ "${DRY_RUN}" = "yes" ]; then
echo . $VENV_PATH/bin/activate
else
. $VENV_PATH/bin/activate
fi
fi
if [ "${ENV_NAME}" = "" ]; then
ENV_NAME="${JOB_NAME}_system_test"
fi
if [ "${LOGS_DIR}" = "" ]; then
LOGS_DIR="${WORKSPACE}/logs"
fi
if [ ! -d "$LOGS_DIR" ]; then
mkdir -p "$LOGS_DIR"
fi
export ENV_NAME
export LOGS_DIR
export ISO_PATH
if [ "${KEEP_BEFORE}" != "yes" ]; then
# remove previous environment
if [ "${DRY_RUN}" = "yes" ]; then
echo dos.py erase "${ENV_NAME}"
else
if dos.py list | grep "^${ENV_NAME}\$"; then
dos.py erase "${ENV_NAME}"
fi
fi
fi
# gather additional option for this nose test run
OPTS=""
if [ -n "${NOSE_ATTR}" ]; then
OPTS="${OPTS} -a ${NOSE_ATTR}"
fi
if [ -n "${NOSE_EVAL_ATTR}" ]; then
OPTS="${OPTS} -A ${NOSE_EVAL_ATTR}"
fi
if [ -n "${TEST_OPTIONS}" ]; then
OPTS="${OPTS} ${TEST_OPTIONS}"
fi
# run python test set to create environments, deploy and test product
if [ "${DRY_RUN}" = "yes" ]; then
echo export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}${WORKSPACE}"
echo python plugin_test/run_tests.py -q --nologcapture --with-xunit ${OPTS}
else
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}${WORKSPACE}"
echo "${PYTHONPATH}"
python plugin_test/run_tests.py -q --nologcapture --with-xunit ${OPTS}
fi
ec=$?
if [ "${KEEP_AFTER}" != "yes" ]; then
# remove environment after tests
if [ "${DRY_RUN}" = "yes" ]; then
echo dos.py destroy "${ENV_NAME}"
else
dos.py destroy "${ENV_NAME}"
fi
fi
exit "${ec}"
}
RouteTasks() {
# this selector defines task names that are recognised by this script
# and runs corresponding jobs for them
# running any jobs should exit this script
case "${TASK_NAME}" in
test)
RunTest
;;
iso)
MakeISO
;;
*)
echo "Unknown task: ${TASK_NAME}!"
exit $INVALIDTASK_ERR
;;
esac
exit 0
}
# MAIN
# first we want to get variable from command line options
GetoptsVariables "${@}"
# then we define global variables and their defaults when needed
GlobalVariables
# check that we have all critical variables set
CheckVariables
# first we chdir into our working directory unless we dry run
CdWorkSpace
# finally we can choose what to do according to TASK_NAME
RouteTasks
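
The JOB_NAME parsing convention described in the help text above (ISO name from the job name, task name from its suffix) can be sketched in Python; this is a hypothetical helper mirroring the shell expansions `${JOB_NAME%.*}.iso` and `${JOB_NAME##*.}`, not part of the script:

```python
def parse_job_name(job_name):
    """Mirror ISO_NAME="${JOB_NAME%.*}.iso" and TASK_NAME="${JOB_NAME##*.}".

    Note that "%.*" strips only the last dot-separated component, so the
    derived ISO name keeps everything before the final dot.
    """
    if '.' not in job_name:
        # With no separator, both shell expansions leave the name unchanged.
        return job_name + '.iso', job_name
    base, _, task_name = job_name.rpartition('.')
    return base + '.iso', task_name
```

For example, `parse_job_name("mytest.somestring.iso")` yields the task name `iso`, while the derived ISO name keeps the full prefix `mytest.somestring`.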


@ -1,5 +0,0 @@
#!/bin/bash
# Add here any the actions which are required before plugin build
# like packages building, packages downloading from mirrors and so on.
# The script should return 0 if there were no errors.


@ -1,243 +0,0 @@
==============================
Sample Mistral backup workflow
==============================
The workflow can be imported to Mistral to perform backups in an automated way.
Problem description
===================
Any backup strategy requires taking backups on a regular basis,
and it is good to have these repeatable actions automated.
Taking a drive backup is often considered a single action, but it
usually requires taking a snapshot, backing up the snapshot and
then deleting the snapshot, so taking a drive backup is actually a workflow.
Mistral is a workflow service for OpenStack cloud.
Creating a workflow for Mistral requires some practice, and
a working workflow example should make it easier to start using Mistral
for backup process automation.
Proposed changes
================
Provide an example of Mistral workflow for creating backups.
The sample is written in Mistral DSL v2.
The workflow accepts the following input parameters:
- project_id_list - list of project identifiers.
Backups of all volumes of the projects in project_id_list
will be taken.
Mutually exclusive with volume_id_list.
Optional.
- volume_id_list - list of volume identifiers.
Backup of volumes from volume_id_list will be taken.
Mutually exclusive with project_id_list.
Optional.
- is_incremental - whether to create an incremental or a full backup.
Default is false.
Optional.
- report_to - list of e-mail addresses to send reports to
Reports are not e-mailed if not provided.
Optional.
If neither project_id_list nor volume_id_list is provided, then
backups of all volumes of all projects will be taken.
If both project_id_list and volume_id_list are provided,
the workflow does not take backups.
create_backups workflow tasks:
- analyze_input
- chooses the task to execute next according to the input
- if neither project_id_list nor volume_id_list are provided
then run get_all_projects_volumes_list
- if project_id_list is provided but volume_id_list is not
then run get_volumes_list
- if volume_id_list is provided but project_id_list is not
then run create_snapshots task
- if both project_id_list and volume_id_list are provided
then report error
- get_all_projects_volumes_list
- provides list of volumes to backup
- runs create_snapshots task
- get_volumes_list
- provides list of volumes to backup according to project_id_list
- runs create_snapshots task
- create_snapshots
- creates snapshots of selected volumes
- runs create_backups task
- create_backups
- creates backups using the snapshots
- runs wait_for_backups_completion task
- wait_for_backups_completion
- verifies that the backups are in available state
- runs delete_snapshots task
- delete_snapshots
- deletes snapshots
- runs send_report task
- send_report
- sends report if report_to is provided
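
The analyze_input routing rules above can be sketched as a plain Python function; this is a hypothetical illustration of the branching logic, not the Mistral DSL v2 workflow itself:

```python
def analyze_input(project_id_list=None, volume_id_list=None):
    """Choose the next workflow task from the mutually exclusive inputs."""
    if project_id_list and volume_id_list:
        # Both inputs at once is reported as an error by the workflow.
        raise ValueError(
            "project_id_list and volume_id_list are mutually exclusive")
    if volume_id_list:
        return "create_snapshots"
    if project_id_list:
        return "get_volumes_list"
    return "get_all_projects_volumes_list"
```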
Web UI
------
None
Nailgun
-------
None
Data model
----------
None
REST API
--------
None
Orchestration
-------------
None
Fuel Client
-----------
None
Fuel Library
------------
None
Limitations
-----------
None
Alternatives
============
A set of separate OpenStack API calls can be invoked by a self written script.
Upgrade impact
==============
None
Security impact
===============
None
Notifications impact
====================
None
End user impact
===============
None
Performance impact
==================
None
Deployment impact
=================
None
Developer impact
================
None
Infrastructure impact
=====================
None
Documentation impact
====================
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
- Taras Kostyuk <tkostyuk@mirantis.com> - developer
Other contributors:
- Oleksandr Martsyniuk <omartsyniuk@mirantis.com> - feature lead, developer
- Kostiantyn Kalynovskyi <kkalynovskyi@mirantis.com> - developer
Project manager:
- Andrian Noga <anoga@mirantis.com>
Quality assurance:
- Vitaliy Yerys <vyerys@mirantis.com> - qa
- Oleksandr Kosse <okosse@mirantis.com> - qa
Work Items
----------
* Prepare development environment
* Create Mistral workflow
* Test the workflow
Dependencies
============
* Mistral >= 2.0
Testing
=======
TODO
Acceptance criteria
--------------------
* The workflow can be imported to Mistral
* Backups can be created by Mistral
* A report e-mail is received
References
==========
* Mistral documentation: http://docs.openstack.org/developer/mistral/
* YAQL documentation: https://yaql.readthedocs.io/en/latest/


@ -1,541 +0,0 @@
=====================================
Google Cloud Storage(GCS) Fuel plugin
=====================================
Google Cloud Storage(GCS) Fuel plugin allows Fuel to deploy Mirantis OpenStack
with a possibility to store VM backups in Google Cloud Storage using
Cinder Google Cloud Storage backup driver.
Problem description
===================
Since the Mitaka OpenStack release, Cinder has supported a Google Cloud
Storage backup driver.
Users who decide to store backups of their OpenStack VMs in
GCS have to configure Cinder to use the GCS backup driver manually.
Fuel is a widely used automation tool for deploying OpenStack clouds but
currently does not support Cinder GCS backup driver configuration
out of the box.
Proposed changes
================
Develop Fuel plugin to automate Cinder configuration
for using GCS backup driver.
VM backups are stored as objects in an object storage such as Swift or Ceph,
alongside other kinds of objects. The Fuel GCS plugin affects only
backups; other objects will be stored in the object storage selected during
the environment creation.
Before deployment, the user has to download and install the Fuel GCS plugin
on the Fuel Master node. Driver inclusion and configuration are done by Puppet
manifests included in the plugin.
The Cinder backup driver for GCS has been included in the Cinder package since
the Mitaka OpenStack release. The Fuel GCS plugin should support environments
with either LVM or Ceph used as the block storage backend.
Before deploying an environment, the Fuel GCS plugin has to be configured in
the Fuel UI or via the Fuel API.
The Fuel GCS plugin will deploy changes in the following way:
* Create a credentials file on all cinder nodes with respective permissions,
readable only by Cinder
* Install python packages for Google Cloud Storage client on cinder nodes
* Modify cinder.conf:
* Overwrite backup_driver parameter value to enable GCS backup driver
* Set up configuration options for driver such as bucket name, project ID,
path to credentials file, etc.
* Restart cinder services to use updated parameters
Volume Backup Workflow
----------------------
The following steps occur when a user requests that a Cinder volume be backed up.
#. The user requests a backup of a Cinder volume by invoking the REST API
(the client may use the python-cinderclient CLI utility).
#. The cinder-api process validates the request and the user's credentials;
once validated, it posts a message to the backup manager over AMQP.
#. cinder-backup reads the message from the queue, creates a database record
for the backup, and fetches information from the database about the volume
to be backed up.
#. cinder-backup invokes the backup_volume method of the Cinder volume driver
corresponding to the volume to be backed up, passing the backup record and
the connection for the backup service to be used.
#. The appropriate Cinder volume driver attaches to the source Cinder volume.
#. The volume driver invokes the backup method for the configured
backup service, handing off the volume attachment.
#. The backup service transfers the Cinder volume's data and metadata to
GCS using the GCS driver.
#. The backup service updates the database with the completed record for
this backup and posts response information to the cinder-api process via
the AMQP queue.
#. The cinder-api process reads the response message from the queue and passes
the results in a RESTful response to the client.
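
The hand-off between cinder-backup, the volume driver and the backup service in steps 4 through 8 can be summarized in a minimal Python sketch (hypothetical class and method names for illustration, not actual Cinder internals):

```python
class BackupManager:
    """Sketch of the cinder-backup hand-off described in steps 4-8."""

    def __init__(self, volume_driver, backup_service):
        self.volume_driver = volume_driver
        self.backup_service = backup_service

    def run_backup(self, volume, backup_record):
        # Step 5: the volume driver attaches to the source volume.
        attachment = self.volume_driver.attach(volume)
        try:
            # Steps 6-7: the backup service transfers the volume's data
            # and metadata (to GCS when the GCS driver is configured).
            self.backup_service.backup(backup_record, attachment)
        finally:
            self.volume_driver.detach(attachment)
        # Step 8: the completed record is stored and reported back.
        backup_record["status"] = "available"
        return backup_record
```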
Web UI
------
Fuel Web UI is extended with plugin-specific settings.
The settings are:
* A checkbox to enable Google Cloud Storage Fuel Plugin.
* GCS Project ID
* name: backup_gcs_project_id
* label: GCS Project ID
* description: Denotes the project ID where the backup bucket will be created
* type: text
* default value: ''
* valid values: not empty
* Default GCS Bucket name
* name: backup_gcs_bucket
* label: Default GCS Bucket name
* description: Default GCS bucket name to use for backups. The bucket is
created if it does not exist. Please refer to the official bucket naming
guidelines https://cloud.google.com/storage/docs/naming .
Used as *container* parameter value when Cinder CLI or API is invoked for
creating a backup.
* type: text
* default value: ''
* valid values: not empty
* GCS Bucket Location
* name: backup_gcs_bucket_location
* label: GCS Bucket Location
* description: Location of GCS bucket.
Check available locations at
https://cloud.google.com/storage/docs/bucket-locations
* type: text
* default value: 'us'
* valid values: alphanumeric with dashes and underscores
* GCS Storage Class
* name: backup_gcs_storage_class
* label: GCS Storage Class
* description: Storage class of GCS bucket
* type: drop-down list
* default value: 'NEARLINE'
* list values: STANDARD, NEARLINE, DURABLE_REDUCED_AVAILABILITY
* GCS Account type
* name: gcs_account_type
* label: GCS Account type
* description: type parameter value from the GCS credentials file.
* type: text
* default value: 'service_account'
* valid values: alphanumeric and symbols -_
* GCS Private Key ID
* name: gcs_private_key_id
* label: GCS Private Key ID
* description: private_key_id parameter value from the GCS credentials file.
* type: text
* default value: ''
* valid values: alphanumeric
* GCS Private Key
* name: gcs_private_key
* label: GCS Private Key
* description: private_key parameter value from the GCS credentials file.
* type: text
* default value: ''
* valid values: alphanumeric and symbols +-/\ and space
* GCS Client E-mail
* name: gcs_client_email
* label: GCS Client E-mail
* description: client_email parameter value from the GCS credentials file.
* type: text
* default value: ''
* valid values: alphanumeric and symbols -.@
* GCS Client ID
* name: gcs_client_id
* label: GCS Client ID
* description: client_id parameter value from the GCS credentials file.
* type: text
* default value: ''
* valid values: digits
* GCS Auth URI
* name: gcs_auth_uri
* label: GCS Auth URI
* description: auth_uri parameter value from the GCS credentials file.
* type: text
* default value: 'https://accounts.google.com/o/oauth2/auth'
* valid values: https://[a-zA-Z][a-zA-Z0-9-_.!~*'() ;/?:@&=+$,%]*
* GCS Token URI
* name: gcs_token_uri
* label: GCS Token URI
* description: token_uri parameter value from the GCS credentials file.
* type: text
* default value: 'https://accounts.google.com/o/oauth2/token'
* valid values: https://[a-zA-Z][a-zA-Z0-9-_.!~*'() ;/?:@&=+$,%]*
* GCS Auth Provider X509 Cert URL
* name: gcs_auth_provider_x509_cert_url
* label: GCS Auth Provider X509 Cert URL
* description: auth_provider_x509_cert_url parameter value from
the GCS credentials file.
* type: text
* default value: 'https://www.googleapis.com/oauth2/v1/certs'
* valid values: https://[a-zA-Z][a-zA-Z0-9-_.!~*'() ;/?:@&=+$,%]*
* GCS Client X509 Cert URL
* name: gcs_client_x509_cert_url
* label: GCS Client X509 Cert URL
* description: client_x509_cert_url parameter value from
the GCS credentials file.
* type: text
* default value: ''
* valid values: https://[a-zA-Z][a-zA-Z0-9-_.!~*'() ;/?:@&=+$,%]*
* Show advanced settings
* name: backup_gcs_advanced_settings
* label: Show advanced settings
* description: Show advanced settings. The driver defaults are used
if not selected
* type: checkbox
* GCS Object Size
* name: backup_gcs_object_size
* label: GCS Object Size
* description: The size of GCS backup objects in bytes.
Must be a multiple of GCS Block Size. Default is 52428800
* type: text
* default value: 52428800
* valid values: positive integer
* visibility: only when backup_gcs_advanced_settings is selected
* GCS Block Size
* name: backup_gcs_block_size
* label: GCS Block Size
* description: The change tracking size for incremental backup in bytes.
Default is 32768
* type: text
* default value: 32768
* valid values: positive integer
* visibility: only when backup_gcs_advanced_settings is selected
* HTTP User-Agent
* name: backup_gcs_user_agent
* label: HTTP User-Agent
* description: HTTP User-Agent string for the GCS API.
* type: text
* default value: gcscinder
* valid values: a valid string according to the HTTP 1.1 RFC
http://www.faqs.org/rfcs/rfc2068.html
* visibility: only when backup_gcs_advanced_settings is selected
* GCS Reader Chunk Size
* name: backup_gcs_reader_chunk_size
* label: GCS Reader Chunk Size
* description: Chunk size for GCS object downloads in bytes.
Pass in a value of -1 to cause the file to be downloaded
as a single chunk. Default is 2097152
* type: text
* default value: 2097152
* valid values: positive integer OR -1
* visibility: only when backup_gcs_advanced_settings is selected
* GCS Writer Chunk Size
* name: backup_gcs_writer_chunk_size
* label: GCS Writer Chunk Size
* description: Chunk size for GCS object uploads in bytes.
Pass in a value of -1 to cause the file to be uploaded
as a single chunk. Default is 2097152.
* type: text
* default value: 2097152
* valid values: a number in a range from 1 to 5242880 OR -1
* visibility: only when backup_gcs_advanced_settings is selected
* GCS Retries Number
* name: backup_gcs_num_retries
* label: GCS Retries Number
* description: Number of times to retry transfers.
Default is 3
* type: text
* default value: 3
* valid values: positive integer
* visibility: only when backup_gcs_advanced_settings is selected
* GCS Retry Error Codes
* name: backup_gcs_retry_error_codes
* label: GCS Retry Error Codes
* description: A comma-separated list of GCS error codes for which
to initiate a retry. Default is 429
* type: text
* default value: 429
* valid values: valid list of HTTP v1.1 error codes (4xx and 5xx)
* visibility: only when backup_gcs_advanced_settings is selected
* Enable GCS progress Timer
* name: backup_gcs_enable_progress_timer
* label: GCS progress Timer
* description: Enable the timer to send the periodic progress notifications
to Ceilometer when backing up the volume to the GCS backend storage.
* type: checkbox
* default value: true
* visibility: only when backup_gcs_advanced_settings is selected
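
Many of the "valid values" constraints above are simple whole-string patterns. A hypothetical validator sketch (the field names and patterns come from the list above; the validation code itself is illustrative, not the plugin's actual implementation):

```python
import re

# Patterns mirroring a few of the "valid values" notes above.
VALIDATORS = {
    "backup_gcs_project_id": r".+",  # not empty
    "gcs_client_id": r"\d+",         # digits only
    "gcs_auth_uri": r"https://[a-zA-Z][a-zA-Z0-9_.!~*'() ;/?:@&=+$,%-]*",
}

def is_valid(name, value):
    """Return True when the value matches the field's whole-string pattern."""
    return re.fullmatch(VALIDATORS[name], value) is not None
```

For example, `is_valid("gcs_client_id", "12345")` is True, while a value containing letters is rejected.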
Nailgun
-------
None
Data model
----------
None
REST API
--------
None
Orchestration
-------------
None
Fuel Client
-----------
None
Fuel Library
------------
None
Limitations
-----------
Cinder does not support multiple backup backends at the same time, so switching
the backup backend for a cloud that already has backups created by another
driver may not be possible without losing access to previously created backups.
A single GCS bucket can be used per OpenStack environment.
Alternatives
============
The plugin could also be implemented as part of Fuel core, but it was decided
to create a plugin, since any new functionality makes the project and its
testing more difficult, which is an additional risk for the Fuel release.
Upgrade impact
==============
Compatibility of new Fuel components and the Plugin should be checked before
upgrading Fuel Master.
Security impact
===============
Google Cloud Storage credentials are stored on Fuel Master and
Cinder/Compute nodes and need to be protected from unauthorized use.
Notifications impact
====================
None
End user impact
===============
End users get a more distributed, hybrid cloud: the backup storage function
is delegated to a reliable external storage service provider.
Performance impact
==================
Backup operation performance depends on the Google Cloud Storage storage class
and the Internet connection speed.
Deployment impact
=================
The plugin can be installed and enabled either during Fuel Master installation
or after an environment is deployed.
Developer impact
================
None
Infrastructure impact
=====================
::
Diagram showing Cinder components and the GCS driver (Fig. 1):
...............................................
. ________ __________ .
.| | | | . O
.| SQL DB | |Cinder API|<----REST-API---> /|\
.|________| |__________| . / \
. A .
. | .
. | .
. _____V__ .
. | | .
. AMQP----->|RabbitMQ|<-----AMQP--- .
. | |________| | .
. | | .
. | ________________V_____ .
. | | |.
. _____V_______ | Cinder Backup |.
.| | | |.
.|Cinder Volume| | ________________ |.
.|_____________| | | Google Cinder | |.
. A | | Backup Driver | |.
. | |___|________________|_|.
.......|.........................A.............
| |
| | JSON-RPC
_____V______ |
| | ______V_____________
|Storage node| | |
|____________| |Google Cloud Storage|
|____________________|
Fig.1 Cinder components and GCS driver
Documentation impact
====================
* Deployment Guide
* User Guide
* Test Plan
* Test Report
Implementation
==============
Assignee(s)
-----------
Primary assignee:
- Taras Kostyuk <tkostyuk@mirantis.com> - developer
Other contributors:
- Oleksandr Martsyniuk <omartsyniuk@mirantis.com> - feature lead, developer
- Kostiantyn Kalynovskyi <kkalynovskyi@mirantis.com> - developer
Project manager:
- Andrian Noga <anoga@mirantis.com>
Quality assurance:
- Vitaliy Yerys <vyerys@mirantis.com> - qa
Work Items
----------
* Prepare development environment
* Create Fuel plugin bundle which allows setting plugin parameters
and pass them to OpenStack nodes via Hiera
* Implement Puppet manifests to configure Cinder and
Google Cloud Storage backup driver
* Test Google Cloud Storage Fuel plugin
* Prepare Documentation
Dependencies
============
* Fuel 9.0
* Ubuntu 14.04
* OpenStack Mitaka
* Internet connection on Controller and Cinder nodes
* Valid GCS credentials
Testing
=======
* Acceptance testing (these cases will be executed along with CI tests during
the acceptance testing stage):
* Verification of an active OpenStack cloud with the GCS Fuel plugin installed,
using the Tempest test framework
* Performance testing to verify the OpenStack cloud with the GCS Fuel plugin
installed under heavy load. This testing will be performed using the Rally
benchmark.
* Failover testing:
- Destroy controller node in HA mode cluster with plugin
- Destroy compute node in HA/non-HA mode cluster with plugin
- Destroy cinder node in HA/non-HA mode cluster with plugin
- Destroy controller/cinder node in cluster with plugin
- Destroy compute/cinder node in cluster with plugin
* CI test cases:
* System tests including deployment with different options enabled and plugin
installation included; both the LVM and Ceph options have to be verified as a
Cinder backend for all these cases:
- Install plugin and deploy environment
- Install plugin and deploy environment with controller/cinder role
assigned to a node
- Install plugin and deploy environment with compute/cinder role assigned to
a node
- Remove, add controller node in cluster with plugin
- Remove, add compute node in cluster with plugin
- Remove, add cinder node in cluster with plugin
- Remove, add controller/cinder node in cluster with plugin
- Remove, add compute/cinder node in cluster with plugin
* Functional tests to verify plugin functionality works correctly:
- Backup Volume and reattach it to the VM
- Write/Read data to/from volume
* UI test cases:
- Verify all default values are correct
- Manual verification of plugin UI dashboard
Acceptance criteria
-------------------
* A VM disk backup can be:
- stored to Google Cloud Storage
- restored from Google Cloud Storage object
- removed from Google Cloud Storage
- scheduled using Mistral
* All blocker, critical and major issues are fixed
* Documentation delivered
* Block, system and functional tests passed successfully
* Test results delivered
References
==========
OpenStack users: Backup your Cinder volumes to Google Cloud Storage
https://cloudplatform.googleblog.com/2016/04/OpenStack-users-backup-your-Cinder-volumes-to-Google-Cloud-Storage.html


@ -1,26 +0,0 @@
# WARNING: `tasks.yaml` will be deprecated in future releases.
# Please, use `deployment_tasks.yaml` to describe tasks instead.
# These tasks will be applied on controller nodes,
# here you can also specify several roles, for example
# ['cinder', 'compute'] will be applied only on
# cinder and compute nodes
- role: ['controller']
stage: post_deployment
type: shell
parameters:
cmd: bash deploy.sh
timeout: 42
# Task is applied for all roles
- role: '*'
stage: pre_deployment
type: shell
parameters:
cmd: echo all > /tmp/plugin.all
timeout: 42
# "reboot" task reboots the nodes and waits until they get back online
# - role: '*'
# stage: pre_deployment
# type: reboot
# parameters:
# timeout: 600