Add source code to Tricircle

Initial PoC source code for Tricircle, the project for OpenStack cascading solution.

Change-Id: I8abc93839a26446cb61c8d9004dfd812bd91de6e
Gerrit change: changes/33/122933/9
Author: openstack, committed by joehuang
parent e75f01b9a6
commit 0885397d10

.gitignore

@@ -0,0 +1,45 @@
*.DS_Store
*.egg*
*.log
*.mo
*.pyc
*.swo
*.swp
*.sqlite
*.iml
*~
.autogenerated
.coverage
.nova-venv
.project
.pydevproject
.ropeproject
.testrepository/
.tox
.idea
.venv
AUTHORS
Authors
build-stamp
build/*
CA/
ChangeLog
coverage.xml
cover/*
covhtml
dist/*
doc/source/api/*
doc/build/*
etc/nova/nova.conf.sample
instances
keeper
keys
local_settings.py
MANIFEST
nosetests.xml
nova/tests/cover/*
nova/vcsversion.py
tools/conf/nova.conf*
tools/lintstack.head.py
tools/pylint_exceptions
etc/nova/nova.conf.sample

LICENSE
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md
@@ -0,0 +1,397 @@
Tricircle
===============================
Tricircle is a project for the [Openstack cascading solution](https://wiki.openstack.org/wiki/OpenStack_cascading_solution), including the source code of the Nova proxy, Cinder proxy, Neutron L2/L3 proxy, Glance sync manager and Ceilometer proxy (not implemented yet).
The project name "Tricircle" comes from a fractal. See the blog ["OpenStack cascading and fractal"](https://www.linkedin.com/today/post/article/20140729022031-23841540-openstack-cascading-and-fractal) for more information.
Important to know
-----------
* The initial source code is for PoC only. Refactoring will be done continuously to reach the OpenStack acceptance standard.
* The PoC source code is based on the Icehouse version, except Neutron, which is a master-branch snapshot from July 1, 2014 that includes the DVR feature. The Neutron code was downloaded from GitHub while DVR was still under development and review; the DVR code is not stable, and not all DVR features are included (for example, north-south functions are not ready).
* Neutron cascading uses the provider network feature, but Horizon does not support provider networks very well, so you have to use the Neutron CLI to create networks (see the example after this list). Alternatively, set the default provider network type to VxLAN, or remove the "local", "flat", "VLAN" and "GRE" type drivers from the ML2 plugin configuration.
* For Neutron L2/L3 features, only VxLAN networking and L3 routing across cascaded OpenStack instances are supported in the current source code. VLAN2VLAN, VLAN2VxLAN and VxLAN2VxLAN across cascaded OpenStack instances are also implemented for the Icehouse version, but the patch is not ready yet; that source code is in the VLAN2VLAN folder.
* The tunneling network for the cross-OpenStack data path uses VxLAN, which requires modifications to the L2 agent and L3 agent. We will refactor it to use GRE for the tunneling network to reduce the patch size for the Juno version.
* If you want to experience VLAN2VLAN, VLAN2VxLAN and VxLAN2VxLAN across cascaded OpenStack instances, please ask a PoC team member for help; see the wiki page [Openstack cascading solution](https://wiki.openstack.org/wiki/OpenStack_cascading_solution) for contact information.
* Glance cascading uses the Glance V2 API. Only the CLI/python client supports the V2 API; Horizon does not. Image management should therefore be done through the CLI, using V2 only; otherwise Glance cascading cannot work properly.
* Glance cascading is not enabled by default, i.e. a global Glance is used by default. If Glance cascading is required, extra configuration is needed.
* Refactoring the Tricircle source code based on the Juno version will start as soon as the Juno version is available.
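For reference, a minimal sketch of creating a network with the Neutron CLI, assuming a VxLAN provider network (the names, segmentation ID and CIDR are illustrative):
```
neutron net-create demo-net --provider:network_type vxlan \
    --provider:segmentation_id 1001
neutron subnet-create demo-net 192.168.10.0/24 --name demo-subnet
```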
Key modules
-----------
* Nova proxy
Plays a role similar to Nova-Compute. Forwards VM operations to the cascaded Nova, and is also responsible for attaching volumes and networks to the VM in the cascaded OpenStack.
* Cinder proxy
Plays a role similar to Cinder-Volume. Forwards volume operations to the cascaded Cinder.
* Neutron proxy
Includes the L2 proxy and L3 proxy, which play roles similar to the OVS-Agent/L3-Agent. They complete L2/L3 networking in the cascaded OpenStack, including cross-OpenStack networking.
* Glance sync
Synchronizes images between the cascading OpenStack and the cascaded OpenStacks determined by policy.
Patches required
------------------
* IceHouse-Patches
Patches for the OpenStack Icehouse version, including patches for both the cascading level and the cascaded level.
Features Supported
------------------
* Nova cascading
Launch/Reboot/Terminate/Resize/Rescue/Pause/Un-pause/Suspend/Resume/VNC Console/Attach Volume/Detach Volume/Snapshot/KeyPair/Flavor
* Cinder cascading
Create Volume/Delete Volume/Attach Volume/Detach Volume/Extend Volume/Create Snapshot/Delete Snapshot/List Snapshots/Create Volume from Snapshot/Create Volume from Image/Create Volume from Volume (Clone)/Create Image from Volume
* Neutron cascading
Network/Subnet/Port/Router
* Glance cascading
Only the V2 API is supported: Create Image/Delete Image/List Image/Update Image/Upload Image/Patch Location/VM Snapshot/Image Synchronization
Known Issues
------------------
* Use "admin" role to experience these feature first, multi-tenancy has not been tested well.
* Launch VM only support "boot from image", "boot from volume", "boot from snapshot"
* Flavor only support new created flavor synchronized to the cascaded OpenStack, does not support flavor update synchronization to cascaded OpenStack yet.
* Must make a patch for "Create a volume from image", the patch link: https://bugs.launchpad.net/cinder/+bug/1308058
Installation without Glance cascading
------------
* **Prerequisites**
- The minimal installation requires three OpenStack Icehouse installations to experience the L2/L3 functions across cascaded OpenStack instances. The minimal setup needs four nodes, see the following picture:
![minimal_setup](./minimal_setup.png?raw=true)
- The cascading OpenStack needs two nodes, Node1 and Node2. Add Node1 to AZ1 and Node2 to AZ2 in the cascading OpenStack, for both Nova and Cinder.
- It's recommended to name the cascading OpenStack region "Cascading_OpenStack" or "Region1".
- Node1 is an all-in-one OpenStack installation with Keystone and Glance. Node1 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node and will be converted into the proxy node for AZ1.
- Node2 is a general Nova-Compute node with the Cinder-Volume and Neutron OVS-Agent/L3-Agent functions installed, and will be converted into the proxy node for AZ2.
- The all-in-one cascaded OpenStack installed on Node3 functions as AZ1. Node3 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node in order to create VMs/volumes/networking in AZ1. Glance only needs to be installed if Glance cascading is required. Add Node3 to AZ1 in the cascaded OpenStack, both for Nova and Cinder. It's recommended to name the cascaded OpenStack region for Node3 "AZ1".
- The all-in-one cascaded OpenStack installed on Node4 functions as AZ2. Node4 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node in order to create VMs/volumes/networking in AZ2. Glance only needs to be installed if Glance cascading is required. Add Node4 to AZ2 in the cascaded OpenStack, both for Nova and Cinder. It's recommended to name the cascaded OpenStack region for Node4 "AZ2".
Make sure the clocks of these four nodes are synchronized, for example via NTP as sketched below. Because the Nova proxy/Cinder proxy/Neutron L2/L3 proxies query the cascaded OpenStack using timestamps, incorrect time will prevent VM/volume/port status synchronization from working properly.
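A minimal sketch of keeping the clocks synchronized (the NTP server name is illustrative; use whatever time source your environment provides):
```
ntpdate pool.ntp.org    # one-shot sync, run on each of the four nodes
service ntpd start      # keep the clock synchronized afterwards
```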
Register all service endpoints in the globally shared Keystone, for example:
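A sketch of registering one service and endpoint with the Icehouse-era keystone CLI (the service name, region and URLs are illustrative, here for the cascading Cinder):
```
keystone service-create --name=cinder --type=volume \
    --description="Cinder Volume Service"
keystone endpoint-create --region=Cascading_OpenStack \
    --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
    --publicurl='http://node1:8776/v1/%(tenant_id)s' \
    --internalurl='http://node1:8776/v1/%(tenant_id)s' \
    --adminurl='http://node1:8776/v1/%(tenant_id)s'
```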
Make sure the three OpenStack instances can work independently before cascading is introduced, e.g. you can boot a VM with a network, create a volume and attach it in each OpenStack, as sketched below. After verifying that the three OpenStack instances work independently, clean up all created resources (VMs/volumes/networks).
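A quick independence check on each OpenStack could look like this (image, flavor, names and IDs are illustrative):
```
nova boot --image cirros --flavor m1.tiny --nic net-id=$NET_ID test-vm
cinder create --display-name test-vol 1
nova volume-attach test-vm $VOLUME_ID auto
```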
After all OpenStack installations are ready, it's time to install the Icehouse patches for both the cascading OpenStack and the cascaded OpenStacks, and then replace the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent with the Nova proxy/Cinder proxy/Neutron L2/L3 proxies.
* **Icehouse patches installation step by step**
1. Node1
- Patches for Nova - instance_mapping_uuid_patch
This patch enables the Nova proxy to translate a cascading-level VM's UUID to the corresponding cascaded-level VM's UUID.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch
```
Follow the README.md instructions to install the patch.
- Patches for Cinder - Volume/Snapshot/Backup UUID mapping patch
This patch enables the Cinder proxy to translate a cascading-level volume/snapshot/backup UUID to the corresponding cascaded-level UUID.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/instance_mapping_uuid_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - DVR patch
This patch adds the DVR (distributed virtual router) feature to Neutron. Through DVR, every L2/L3 proxy node at the cascading level can receive the corresponding RPC messages and then convert the commands into RESTful API calls to the cascaded Neutron.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/dvr-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - ml2-mech-driver-cascading patch
This patch enables the L2 population driver to populate the VM's host IP, stored in the port binding profile of one cascaded OpenStack, to another cascaded OpenStack.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/ml2-mech-driver-cascading-patch
```
Follow the README.md instructions to install the patch.
2. Node3
- Patches for Nova - port binding profile update bug: https://bugs.launchpad.net/neutron/+bug/1338202.
Because ml2-mech-driver-cascaded-patch updates the binding profile in the port, the profile will be flushed to null if you don't fix this bug.
You can also fix the bug via:
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch/nova/network/neutronv2/
cp api.py $python_installation_path/site-packages/nova/network/neutronv2/
```
The patch preserves what has been saved in the port binding profile.
- Patches for Cinder - timestamp-query-patch
This patch enables the cascaded Cinder to execute queries with a timestamp filter rather than returning all objects.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/timestamp-query-patch_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - DVR patch
This patch adds the DVR (distributed virtual router) feature to Neutron.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/dvr-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - ml2-mech-driver-cascaded patch
This patch enables the L2 population driver to populate the virtual remote port for a VM located in another OpenStack.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/ml2-mech-driver-cascaded-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - openvswitch-agent patch
This patch propagates the DVR MAC across OpenStack instances for cross-OpenStack L3 networking (VLAN-VLAN/VLAN-VxLAN/VxLAN-VxLAN).
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/openvswitch-agent-patch
```
Follow the README.md instructions to install the patch.
3. Node4
- Patches for Nova - port binding profile update bug: https://bugs.launchpad.net/neutron/+bug/1338202.
Because ml2-mech-driver-cascaded-patch updates the binding profile in the port, the profile will be flushed to null if you don't fix this bug.
You can also fix the bug via:
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch/nova/network/neutronv2/
cp api.py $python_installation_path/site-packages/nova/network/neutronv2/
```
The patch preserves what has been saved in the port binding profile.
- Patches for Cinder - timestamp-query-patch
This patch enables the cascaded Cinder to execute queries with a timestamp filter rather than returning all objects.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/timestamp-query-patch_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - DVR patch
This patch adds the DVR (distributed virtual router) feature to Neutron.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/dvr-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - ml2-mech-driver-cascaded patch
This patch enables the L2 population driver to populate the virtual remote port for a VM located in another OpenStack.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/ml2-mech-driver-cascaded-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - openvswitch-agent patch
This patch propagates the DVR MAC across OpenStack instances for cross-OpenStack L3 networking (VLAN-VLAN/VLAN-VxLAN/VxLAN-VxLAN).
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/openvswitch-agent-patch
```
Follow the README.md instructions to install the patch.
* **Proxy installation step by step**
1. Node1
- Nova proxy
Navigate to the folder
```
cd ./tricircle/novaproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
- Cinder proxy
Navigate to the folder
```
cd ./tricircle/cinderproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
- L2 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l2-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
- L3 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l3-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
2. Node2
- Nova proxy
Navigate to the folder
```
cd ./tricircle/novaproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch/nova/objects
cp instance.py $python_installation_path/site-packages/nova/objects/
```
This file is a patch for the instance UUID mapping used on the proxy nodes.
- Cinder proxy
Navigate to the folder
```
cd ./tricircle/cinderproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/uuid-mapping-patch/cinder/db/sqlalchemy
cp models.py $python_installation_path/site-packages/cinder/db/sqlalchemy
```
This file is a patch for the UUID mapping used on the proxy nodes.
- L2 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l2-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
- L3 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l3-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment.
Upgrade to Glance cascading
------------
* **Prerequisites**
- To experience the Glance cascading feature, you can simply upgrade the current installation in a few steps, see the following picture:
![minimal_setup_with_glance_cascading](./minimal_setup_with_glance_cascading.png?raw=true)
1. Node1
- Patches for Glance - glance_location_patch
This patch enables Glance to handle http URL locations and inserts the sync manager into the chain of responsibility.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/glance/glance_location_patch
```
Follow the README.md instructions to install the patch.
- Sync Manager
Navigate to the folder
```
cd ./tricircle/glancesync
```
Modify the storage scheme configuration for the cascading and cascaded levels:
```
vi ./tricircle/glancesync/etc/glance/glance_store.yaml
```
Follow the README.md instructions to install the sync manager. Please change the configuration values in install.sh according to your environment, especially the following options:
sync_enabled=True
sync_server_port=9595
sync_server_host=127.0.0.1
2. Node3
- Glance Installation
Please install Glance on Node3 as the cascaded Glance.
Register the service endpoint in Keystone.
Change the glance endpoint in nova.conf and cinder.conf to the Glance located on Node3.
3. Node4
- Glance Installation
Please install Glance on Node4 as the cascaded Glance.
Register the service endpoint in Keystone.
Change the glance endpoint in nova.conf and cinder.conf to the Glance located on Node4.
4. Configuration
- Change the Nova proxy configuration on Node1: set "cascaded_glance_flag" to True and add the "cascaded_glance_url" of Node3, according to the Nova proxy README.md instructions
- Change the Cinder proxy configuration on Node1: set "glance_cascading_flag" to True and add the "cascaded_glance_url" of Node3, according to the Cinder proxy README.md instructions
- Change the Nova proxy configuration on Node2: set "cascaded_glance_flag" to True and add the "cascaded_glance_url" of Node4, according to the Nova proxy README.md instructions
- Change the Cinder proxy configuration on Node2: set "glance_cascading_flag" to True and add the "cascaded_glance_url" of Node4, according to the Cinder proxy README.md instructions
5. Experience Glance cascading
- Restart all related services
- Use the Glance V2 API to create an image, upload image data or patch a location for an image; see the CLI example after this list. Images should sync to the distributed Glances if sync_enabled is set to True.
- Syncing an image only on first use (rather than at upload or patch-location time) is still in the testing phase and may not work properly.
- Create VMs/volumes/etc. from Horizon
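A minimal sketch of V2-only image management with the Icehouse-era python-glanceclient (image name and formats are illustrative):
```
glance --os-image-api-version 2 image-create --name cirros \
    --disk-format qcow2 --container-format bare
glance --os-image-api-version 2 image-list
```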

cinderproxy/README.md
@@ -0,0 +1,148 @@
Openstack Cinder Proxy
===============================
Cinder-Proxy plays the same role as Cinder-Volume in the cascading OpenStack.
Cinder-Proxy treats the cascaded Cinder as its volume backend and converts internal request messages from the message bus into RESTful API calls to the cascaded Cinder.
Key modules
-----------
* The new cinder proxy module cinder_proxy, which treats the cascaded Cinder as its volume backend and converts internal request messages from the message bus into RESTful API calls to the cascaded Cinder:
cinder/volume/cinder_proxy.py
Requirements
------------
* openstack-cinder-volume-2014.1-14.1 has been installed
Installation
------------
We provide two ways to install the cinder proxy code. In this section, we will guide you through installing the cinder proxy with the minimum configuration.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
- We recommend that you back up at least the following files before installation, because they will be overwritten or modified:
$CINDER_CONFIG_PARENT_DIR/cinder.conf
(replace the $... with actual directory names.)
* **Manual Installation**
- Make sure you have performed backups properly.
- Navigate to the local repository and copy the contents in 'cinder' sub-directory to the corresponding places in existing cinder, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/cinder $CINDER_PARENT_DIR```
(replace the $... with actual directory name.)
- Update the cinder configuration file (e.g. /etc/cinder/cinder.conf) with the minimum options below. If an option already exists, modify its value; otherwise add it to the config file. Check the "Configurations" section below for a full configuration guide.
```
[DEFAULT]
...
###configuration for Cinder cascading ###
volume_manager=cinder.volume.cinder_proxy.CinderProxy
volume_sync_interval=5
cinder_tenant_name=$CASCADED_ADMIN_TENANT
cinder_username=$CASCADED_ADMIN_NAME
cinder_password=$CASCADED_ADMIN_PASSWORD
keystone_auth_url=http://$GLOBAL_KEYSTONE_IP:5000/v2.0/
cascading_glance_url=$CASCADING_GLANCE
cascaded_glance_url=http://$CASCADED_GLANCE
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
cascaded_region_name=$CASCADED_REGION_NAME
```
- Restart the cinder proxy.
```service openstack-cinder-volume restart```
- Done. The cinder proxy should be working with a demo configuration.
* **Automatic Installation**
- Make sure you have performed backups properly.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
- Done. The installation script should set up the cinder proxy with the minimum configuration below. Check the "Configurations" section for a full configuration guide.
```
[DEFAULT]
...
###cascade info ###
...
###configuration for Cinder cascading ###
volume_manager=cinder.volume.cinder_proxy.CinderProxy
volume_sync_interval=5
cinder_tenant_name=$CASCADED_ADMIN_TENANT
cinder_username=$CASCADED_ADMIN_NAME
cinder_password=$CASCADED_ADMIN_PASSWORD
keystone_auth_url=http://$GLOBAL_KEYSTONE_IP:5000/v2.0/
cascading_glance_url=$CASCADING_GLANCE
cascaded_glance_url=http://$CASCADED_GLANCE
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
cascaded_region_name=$CASCADED_REGION_NAME
```
* **Troubleshooting**
In case the automatic installation process does not complete, please check the following:
- Make sure your OpenStack version is Icehouse.
- Check the variables at the beginning of the install.sh script. Your installation directories may be different from the default values we provide.
- The installation script automatically adds the related code to $CINDER_PARENT_DIR/cinder and modifies the related configuration.
- In case the automatic installation does not work, try to install manually.
Configurations
--------------
* This is a (default) configuration sample for the cinder proxy. Please add/modify these options in /etc/cinder/cinder.conf.
* Note:
- Please carefully make sure that options in the configuration file are not duplicated. If an option name already exists, modify its value instead of adding a new one of the same name.
- Please refer to the 'Configuration Details' section below for proper configuration and usage of costs and constraints.
```
[DEFAULT]
...
#
#Options defined in cinder.volume.manager
#
# Default driver to use for the cinder proxy (string value)
volume_manager=cinder.volume.cinder_proxy.CinderProxy
#The cascading level keystone component service url, by which the cinder proxy
#can access the cascading level keystone service
keystone_auth_url=$keystone_auth_url
#The cascading level glance component service url, by which the cinder proxy
#can access the cascading level glance service
cascading_glance_url=$CASCADING_GLANCE
#The cascaded level glance component service url, by which the cinder proxy
#can judge whether the cascading glance image has a location for this cascaded glance
cascaded_glance_url=http://$CASCADED_GLANCE
#The cascaded level region name, which will be set as a parameter when
#the cascaded level component services register endpoints to keystone
cascaded_region_name=$CASCADED_REGION_NAME
#The cascaded level availability zone name, which will be set as a parameter when
#forwarding requests to the cascaded level cinder. Note that the value of
#cascaded_available_zone in cinder-proxy must be the same as storage_availability_zone
#on the cascaded level node. This option may be removed in the future in favour of
#cinder-proxy's own storage_availability_zone option, but for now it is up to the admin
#to keep storage_availability_zone in cinder-proxy and the cascaded cinder consistent.
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
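#For illustration, the zone pairing described above means the cascading and
#cascaded cinder.conf files must agree (the zone name "AZ1" is an example):
#  cascading node (cinder-proxy): cascaded_available_zone=AZ1, storage_availability_zone=AZ1
#  cascaded node:                 storage_availability_zone=AZ1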

(File diff suppressed because it is too large.)

cinderproxy/installation/install.sh
@@ -0,0 +1,130 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder-proxy/installation/install.log"
# please set the option list set in cinder configure file
_CINDER_CONF_OPTION=("volume_manager=cinder.volume.cinder_proxy.CinderProxy volume_sync_interval=5 periodic_interval=5 cinder_tenant_name=admin cinder_username=admin cinder_password=1234 keystone_auth_url=http://10.67.148.210:5000/v2.0/ glance_cascading_flag=False cascading_glance_url=10.67.148.210:9292 cascaded_glance_url=http://10.67.148.201:9292 cascaded_cinder_url=http://10.67.148.201:8776/v2/%(project_id)s cascaded_region_name=Region_AZ1 cascaded_available_zone=AZ1")
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder/"
_BACKUP_DIR="${_CINDER_DIR}/cinder-proxy-installation-backup"
function log()
{
    if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
        mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
        touch $_CINDER_INSTALL_LOG
        chmod 777 $_CINDER_INSTALL_LOG
    fi
    echo "$@"
    echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}

if [[ ${EUID} -ne 0 ]]; then
    log "Please run as root."
    exit 1
fi

cd `dirname $0`

log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
    log "Could not find the cinder installation. Please check the variables in the beginning of the script."
    log "aborted."
    exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
    log "Could not find cinder config file. Please check the variables in the beginning of the script."
    log "aborted."
    exit 1
fi

log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/cinder" ] ; then
    log "It seems cinder-proxy has already been installed!"
    log "Please check README for solution if this is not true."
    exit 1
fi

log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/cinder"
mkdir -p "${_BACKUP_DIR}/etc/cinder"
cp -r "${_CINDER_DIR}/volume" "${_BACKUP_DIR}/cinder/"
if [ $? -ne 0 ] ; then
    rm -r "${_BACKUP_DIR}/cinder"
    log "Error in code backup, aborted."
    exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/etc/cinder/"
if [ $? -ne 0 ] ; then
    rm -r "${_BACKUP_DIR}/cinder"
    rm -r "${_BACKUP_DIR}/etc"
    log "Error in config backup, aborted."
    exit 1
fi

log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
    log "Error in copying, aborted."
    log "Recovering original files..."
    cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
    if [ $? -ne 0 ] ; then
        log "Recovering failed! Please install manually."
    fi
    exit 1
fi

log "updating config file..."
sed -i.backup -e "/volume_manager *=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
sed -i.backup -e "/periodic_interval *=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
for option in $_CINDER_CONF_OPTION
do
    sed -i -e "/\[DEFAULT\]/a \\"$option "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
done
if [ $? -ne 0 ] ; then
    log "Error in updating, aborted."
    log "Recovering original files..."
    cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
    if [ $? -ne 0 ] ; then
        log "Recovering /cinder failed! Please install manually."
    fi
    cp "${_BACKUP_DIR}/etc/cinder/${_CINDER_CONF_FILE}" "${_CINDER_CONF_DIR}" && rm -r "${_BACKUP_DIR}/etc"
    if [ $? -ne 0 ] ; then
        log "Recovering config failed! Please install manually."
    fi
    exit 1
fi

log "restarting cinder proxy..."
service openstack-cinder-volume restart
if [ $? -ne 0 ] ; then
    log "There was an error in restarting the service, please restart cinder proxy manually."
    exit 1
fi

log "Cinder proxy Completed."
log "See README to get started."
exit 0

cinderproxy/installation/uninstall.sh
@@ -0,0 +1,129 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_CONF_OPTION=("volume_manager volume_sync_interval periodic_interval cinder_tenant_name cinder_username cinder_password keystone_auth_url glance_cascading_flag cascading_glance_url cascaded_glance_url cascaded_cinder_url cascaded_region_name cascaded_available_zone")
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder-proxy-installation-backup"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder-proxy/installation/install.log"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
function log()
{
    if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
        mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
        touch $_CINDER_INSTALL_LOG
        chmod 777 $_CINDER_INSTALL_LOG
    fi
    echo "$@"
    echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}

if [[ ${EUID} -ne 0 ]]; then
    log "Please run as root."
    exit 1
fi

cd `dirname $0`

log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
    log "Could not find the cinder installation. Please check the variables in the beginning of the script."
    log "aborted."
    exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
    log "Could not find cinder config file. Please check the variables in the beginning of the script."
    log "aborted."
    exit 1
fi

log "checking backup..."
if [ ! -d "${_BACKUP_DIR}/cinder" ] ; then
    log "Could not find backup files. It is possible that the cinder-proxy has been uninstalled."
    log "If this is not the case, then please uninstall manually."
    exit 1
fi

log "backing up current files that might be overwritten..."
if [ -d "${_BACKUP_DIR}/uninstall" ] ; then
    rm -r "${_BACKUP_DIR}/uninstall"
fi
mkdir -p "${_BACKUP_DIR}/uninstall/cinder"
mkdir -p "${_BACKUP_DIR}/uninstall/etc/cinder"
cp -r "${_CINDER_DIR}/volume" "${_BACKUP_DIR}/uninstall/cinder/"
if [ $? -ne 0 ] ; then
    rm -r "${_BACKUP_DIR}/uninstall/cinder"
    log "Error in code backup, aborted."
    exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/uninstall/etc/cinder/"
if [ $? -ne 0 ] ; then
    rm -r "${_BACKUP_DIR}/uninstall/cinder"
    rm -r "${_BACKUP_DIR}/uninstall/etc"
    log "Error in config backup, aborted."
    exit 1
fi

log "restoring code to the status before installing cinder-proxy..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
    log "Error in copying, aborted."
    log "Recovering current files..."
    cp -r "${_BACKUP_DIR}/uninstall/cinder" `dirname ${_CINDER_DIR}`
    if [ $? -ne 0 ] ; then
        log "Recovering failed! Please uninstall manually."
    fi
    exit 1
fi

log "updating config file..."
for option in $_CINDER_CONF_OPTION
do
    sed -i.uninstall.backup -e "/"$option "*=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
done
if [ $? -ne 0 ] ; then
    log "Error in updating, aborted."
    log "Recovering current files..."
    cp "${_BACKUP_DIR}/uninstall/etc/cinder/${_CINDER_CONF_FILE}" "${_CINDER_CONF_DIR}"
    if [ $? -ne 0 ] ; then
        log "Recovering failed! Please uninstall manually."
    fi
    exit 1
fi

log "cleaning up backup files..."
rm -r "${_BACKUP_DIR}/cinder" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
    log "There was an error when cleaning up the backup files."
fi

log "restarting cinder volume..."
service openstack-cinder-volume restart
if [ $? -ne 0 ] ; then
    log "There was an error in restarting the service, please restart cinder volume manually."
    exit 1
fi

log "Completed."
exit 0

glancesync/README.md
@@ -0,0 +1,140 @@
Glance Sync Manager
===============================
This is a submodule of the Tricircle project that adds a sync function to support syncing Glance images between the cascading and cascaded OpenStacks.
When launching an instance, Nova will look for the image in the same region as the instance to download it, which speeds up the overall launch time of the instance.
Key modules
-----------
* Primarily, there is only one new module in glance cascading: sync, which is in the glance/sync package.
glance/sync/__init__.py : Adds an ImageRepoProxy class, like store, policy, etc., to augment the api request handling chain with a sync mechanism layer.
glance/sync/base.py : Contains the SyncManager object, which executes the sync operations.
glance/sync/utils.py : Some helper functions.
glance/sync/api/ : Supports a web server for sync.
glance/sync/client/: Supports a client to visit the web server; ImageRepoProxy uses this client to issue sync requests.
glance/sync/task/: Each sync operation is transformed into a task; a queue stores the tasks and an eventlet handles them concurrently.
glance/sync/store/: Implements the independent glance store, separating the handling of image data from image metadata.
glance/cmd/sync.py: Entry point for starting the sync server (referenced by /usr/bin/glance-sync).
* **Note:**
At present, glance cascading only supports the v2 version of the glance-api.
Requirements
------------
* pexpect>=2.3
Installation
------------
* **Note:**
- The installation and configuration guidelines below are only for the cascading layer of Glance. For the cascaded layer, Glance is installed as normal.
* **Prerequisites**
- Please install the Python package pexpect>=2.3, for example as shown below. (We use pxssh for logging in; there is a bug in pxssh, see https://mail.python.org/pipermail/python-list/2008-February/510054.html, which you should fix before launching the service.)
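For example, assuming pip is available on the node:
```
pip install "pexpect>=2.3"
```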
* **Manual Installation**
- Make sure you have performed backups properly.
1. Under the cascading OpenStack, copy these files from the glance-patch directory and the glancesync directory to the suitable places:
| COMPONENT | FROM | TO |
| ------------- |:-----------------|:-------------------------------------------|
| glancesync | glance/ | ${python_install_dir}/glance |
| glancesync | etc/glance/ | /etc/glance/ |
| glancesync | glance-sync | /usr/bin/ |
|${glance-patch}| glance/ | ${python_install_dir}/glance |
|${glance-patch}|glance.egg-info/entry_points.txt | ${glance_install_egg.info}/ |
${glance-patch} = `icehouse-patches/glance/glance_location_patch`; ${python_install_dir} is where OpenStack is installed, e.g. `/usr/lib64/python2.6/site-packages`.
2. Add/modify the config options
| CONFIG_FILE | OPTION | ADD or MODIFY |
| ----------------|:---------------------------------------------------|:--------------:|
|glance-api.conf | show_multiple_locations=True | M |
|glance-api.conf | sync_server_host=${sync_mgr_host} | A |
|glance-api.conf | sync_server_port=9595 | A |
|glance-api.conf | sync_enabled=True | A |
|glance-sync.conf | cascading_endpoint_url=${glance_api_endpoint_url} | M |
|glance-sync.conf | sync_strategy=ALL | M |
|glance-sync.conf | auth_host=${keystone_host} | M |
3. Re-launch the services on the cascading OpenStack, e.g.:
`service openstack-glance-api restart `
`service openstack-glance-registry restart `
`python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf & `
* **Automatic Installation**
1. Enter the glance-patch installation dir: `cd ./tricircle/icehouse-patches/glance/glance_location_patch/installation` .
2. Optionally, modify the shell script variable: `_PYTHON_INSTALL_DIR` .
3. Run the install script: `sh install.sh`
4. Enter the glancesync installation dir: `cd ./tricircle/glancesync/installation` .
5. Modify the cascading and cascaded Glances' store scheme configuration, which is in the file `./tricircle/glancesync/etc/glance/glance_store.yaml` .
6. Optionally, modify the config options in the shell script: `sync_enabled=True`, `sync_server_port=9595`, `sync_server_host=127.0.0.1`, with the proper values.
7. Run the install script: `sh install.sh`
Configurations
--------------
Besides the glance-api.conf file, we add some new config files; they are described separately below.
- In glance-api.conf, three options are added:
[DEFAULT]
#Indicates whether to use image sync; the default value is False.
#If configuring the cascading layer, this value should be True.
sync_enabled = True
#The sync server's port number; the default is 9595.
sync_server_port = 9595
#The sync server's host name (or IP address)
sync_server_host = 127.0.0.1
* Besides, the option show_multiple_locations should be set to True.
- In the newly added glance-sync.conf, the options are similar to glance-registry.conf, except:
[DEFAULT]
#How to sync the image; the value can be ["None", "ALL", "USER"].
#When "ALL" is chosen, sync to all the cascaded glances;
#when "USER" is chosen, sync according to the user's role, project, etc.
sync_strategy = ALL
#The cascading glance endpoint url. (Note that this value should be consistent with what is in keystone.)
cascading_endpoint_url = http://127.0.0.1:9292/
#When syncing a snapshot, the timeout time (seconds) for the snapshot's status
#to change into 'active'.
snapshot_timeout = 300
#When syncing a snapshot, the polling interval time (seconds) to check the
#snapshot's status.
snapshot_sleep_interval = 10
#When a sync task fails, the number of retries.
task_retry_times = 0
#When copying image data between filesystems using 'scp', the timeout
#time for the copy.
scp_copy_timeout = 3600
#For snapshots, one can set the specific regions to which the snapshot
#will be synced. (e.g. physicalOpenstack001, physicalOpenstack002)
snapshot_region_names =
- Last but also important, we add a YAML file, glance_store.yaml, in the cascading Glance to configure the store backends' copy settings.
These settings correspond to the various store schemes (at present, only filesystem is supported). The values
depend on your environment, so you must configure the file before installation and restart glance-sync
whenever you modify it.

glancesync/glance-sync
@@ -0,0 +1,10 @@
#!/usr/bin/python
# PBR Generated from 'console_scripts'
import sys
from glance.cmd.sync import main
if __name__ == "__main__":
    sys.exit(main())

glancesync/etc/glance/glance-sync-paste.ini
@@ -0,0 +1,35 @@
# Use this pipeline for no auth - DEFAULT
[pipeline:glance-sync]
pipeline = versionnegotiation unauthenticated-context rootapp
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
# Use this pipeline for keystone auth
[pipeline:glance-sync-keystone]
pipeline = versionnegotiation authtoken context rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-sync-trusted-auth]
pipeline = versionnegotiation context rootapp
[composite:rootapp]
paste.composite_factory = glance.sync.api:root_app_factory
/v1: syncv1app
[app:syncv1app]
paste.app_factory = glance.sync.api.v1:API.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

glancesync/etc/glance/glance-sync.conf
@@ -0,0 +1,57 @@
[DEFAULT]
# Show debugging output in logs (sets DEBUG log level output)
debug = True
# Address to bind the API server
bind_host = 0.0.0.0
# Port to bind the API server to
bind_port = 9595
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/sync.log
# Backlog requests when creating socket
backlog = 4096
#How to sync the image; the value can be ["None", "ALL", "USER"].
#When "ALL" is chosen, sync to all the cascaded glances;
#when "USER" is chosen, sync according to the user's role, project, etc.
sync_strategy = None
#The cascading glance endpoint url.
cascading_endpoint_url = http://127.0.0.1:9292/
#When syncing a snapshot, the timeout time (seconds) for the snapshot's status
#to change into 'active'.
snapshot_timeout = 300
#When syncing a snapshot, the polling interval time (seconds) to check the
#snapshot's status.
snapshot_sleep_interval = 10
#When a sync task fails, the number of retries.
task_retry_times = 0
#When copying image data between filesystems using 'scp', the timeout
#time for the copy.
scp_copy_timeout = 3600
#For snapshots, one can set the specific regions to which the snapshot
#will be synced.
snapshot_region_names = physicalOpenstack001, physicalOpenstack002
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = glance
admin_password = glance
[paste_deploy]
config_file = /etc/glance/glance-sync-paste.ini
flavor=keystone

glancesync/etc/glance/glance_store.yaml
@@ -0,0 +1,29 @@
---
glances:
  - name: master
    service_ip: "127.0.0.1"
    schemes:
      - name: http
        parameters:
          netloc: '127.0.0.1:8800'
          path: '/'
          image_name: 'test.img'
      - name: filesystem
        parameters:
          host: '127.0.0.1'
          datadir: '/var/lib/glance/images/'
          login_user: 'glance'
          login_password: 'glance'
  - name: slave1
    service_ip: "0.0.0.0"
    schemes:
      - name: http
        parameters:
          netloc: '0.0.0.0:8800'
          path: '/'
      - name: filesystem
        parameters:
          host: '0.0.0.0'
          datadir: '/var/lib/glance/images/'
          login_user: 'glance'
          login_password: 'glance'

glancesync/glance/cmd/sync.py
@@ -0,0 +1,59 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""
Reference implementation server for Glance Sync
"""
import eventlet
import os
import sys
from oslo.config import cfg
# Monkey patch socket and time
eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=True)
# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)

from glance.common import config
from glance.common import exception
from glance.common import wsgi
from glance.openstack.common import log
import glance.sync


def main():
    try:
        config.parse_args(default_config_files='glance-sync.conf')
        log.setup('glance')
        server = wsgi.Server()
        server.start(config.load_paste_app('glance-sync'), default_port=9595)
        server.wait()
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)


if __name__ == '__main__':
    main()

glancesync/glance/sync/__init__.py
@@ -0,0 +1,257 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
import glance.context
import glance.domain.proxy
import glance.openstack.common.log as logging
from glance.sync.clients import Clients as clients
from glance.sync import utils
LOG = logging.getLogger(__name__)
_V2_IMAGE_CREATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
                               'min_ram', 'name', 'virtual_size', 'visibility',
                               'protected']
_V2_IMAGE_UPDATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
                               'min_ram', 'name']


def _check_trigger_sync(pre_image, image):
    """
    check if it is the case that the cascaded glance has upload or first patch
    location.
    """
    return pre_image.status in ('saving', 'queued') and image.size and \
        [l for l in image.locations if not utils.is_glance_location(l['url'])]


def _from_snapshot_request(pre_image, image):
    """
    when patch location, check if it's snapshot-sync case.
    """
    if pre_image.status == 'queued' and len(image.locations) == 1:
        loc_meta = image.locations[0]['metadata']
        return loc_meta and loc_meta.get('image_from', None) in ['snapshot',
                                                                 'volume']


def get_adding_image_properties(image):
    _tags = list(image.tags) or []
    kwargs = {}
    kwargs['body'] = {}
    for key in _V2_IMAGE_CREATE_PROPERTIES:
        try:
            value = getattr(image, key, None)
            if value and value != 'None':
                kwargs['body'][key] = value
        except KeyError:
            pass
    _properties = getattr(image, 'extra_properties') or None
    if _properties:
        extra_keys = _properties.keys()
        for _key in extra_keys:
            kwargs['body'][_key] = _properties[_key]
    if _tags:
        kwargs['body']['tags'] = _tags
    return kwargs


def get_existing_image_locations(image):
    return {'locations': image.locations}


class ImageRepoProxy(glance.domain.proxy.Repo):

    def __init__(self, image_repo, context, sync_api):
        self.image_repo = image_repo
        self.context = context
        self.sync_client = sync_api.get_sync_client(context)
        proxy_kwargs = {'context': context, 'sync_api': sync_api}
        super(ImageRepoProxy, self).__init__(image_repo,
                                             item_proxy_class=ImageProxy,
                                             item_proxy_kwargs=proxy_kwargs)

    def _sync_saving_metadata(self, pre_image, image):
        kwargs = {}
        remove_keys = []
        changes = {}
        """
        image base properties
        """
        for key in _V2_IMAGE_UPDATE_PROPERTIES:
            pre_value = getattr(pre_image, key, None)
            my_value = getattr(image, key, None)
            if not my_value and not pre_value or my_value == pre_value:
                continue
            if not my_value and pre_value:
                remove_keys.append(key)
            else:
                changes[key] = my_value
        """
        image extra_properties
        """
        pre_props = pre_image.extra_properties or {}
        _properties = image.extra_properties or {}
        addset = set(_properties.keys()).difference(set(pre_props.keys()))
        removeset = set(pre_props.keys()).difference(set(_properties.keys()))
        mayrepset = set(pre_props.keys()).intersection(set(_properties.keys()))
        for key in addset:
            changes[key] = _properties[key]
        for key in removeset:
            remove_keys.append(key)
        for key in mayrepset:
            if _properties[key] == pre_props[key]:
                continue
            changes[key] = _properties[key]
        """
        image tags
        """
        tag_dict = {}
        pre_tags = pre_image.tags
        new_tags = image.tags
        added_tags = set(new_tags) - set(pre_tags)
        removed_tags = set(pre_tags) - set(new_tags)
        if added_tags:
            tag_dict['add'] = added_tags
        if removed_tags:
            tag_dict['delete'] = removed_tags
        if tag_dict:
            kwargs['tags'] = tag_dict
        kwargs['changes'] = changes
        kwargs['removes'] = remove_keys
        if not changes and not remove_keys and not tag_dict:
            return
        LOG.debug(_('In image %s, some properties changed, sync...')
                  % (image.image_id))
        self.sync_client.update_image_matedata(image.image_id, **kwargs)

    def _try_sync_locations(self, pre_image, image):
        image_id = image.image_id
        """
        image locations
        """
        locations_dict = {}
        pre_locs = pre_image.locations
        _locs = image.locations
        """
        if all locations of cascading removed, the image status become 'queued'
        so the cascaded images should be 'queued' too. we replace all locations
        with '[]'
        """
        if pre_locs and not _locs:
            LOG.debug(_('The image %s all locations removed, sync...')
                      % (image_id))
            self.sync_client.sync_locations(image_id,
                                            action='CLEAR',
                                            locs=pre_locs)
            return
        added_locs = []
        removed_locs = []
        for _loc in pre_locs:
            if _loc in _locs:
                continue
            removed_locs.append(_loc)
        for _loc in _locs:
            if _loc in pre_locs:
                continue
            added_locs.append(_loc)
        if added_locs:
            if _from_snapshot_request(pre_image, image):
                add_kwargs = get_adding_image_properties(image)
            else:
                add_kwargs = {}
            LOG.debug(_('The image %s add locations, sync...') % (image_id))
            self.sync_client.sync_locations(image_id,
                                            action='INSERT',
                                            locs=added_locs,
                                            **add_kwargs)
        elif removed_locs:
            LOG.debug(_('The image %s remove some locations, sync...')
                      % (image_id))
            self.sync_client.sync_locations(image_id,
                                            action='DELETE',
                                            locs=removed_locs)

    def save(self, image):
        pre_image = self.get(image.image_id)
        result = super(ImageRepoProxy, self).save(image)
        image_id = image.image_id
        if _check_trigger_sync(pre_image, image):
            add_kwargs = get_adding_image_properties(image)
            self.sync_client.sync_data(image_id, **add_kwargs)
            LOG.debug(_('Sync data when image status changes ACTIVE, the '
                        'image id is %s.' % (image_id)))
        else:
            """
            In case of add/remove/replace locations property.
            """
            self._try_sync_locations(pre_image, image)
            """
            In case of sync the glance's properties
            """
            if image.status == 'active':
                self._sync_saving_metadata(pre_image, image)
        return result

    def remove(self, image):
        result = super(ImageRepoProxy, self).remove(image)
        LOG.debug(_('Image %s removed, sync...') % (image.image_id))
        delete_kwargs = get_existing_image_locations(image)
        self.sync_client.remove_image(image.image_id, **delete_kwargs)
        return result


class ImageFactoryProxy(glance.domain.proxy.ImageFactory):

    def __init__(self, factory, context, sync_api):
        self.context = context
        self.sync_api = sync_api
        proxy_kwargs = {'context': context, 'sync_api': sync_api}
        super(ImageFactoryProxy, self).__init__(factory,
                                                proxy_class=ImageProxy,
                                                proxy_kwargs=proxy_kwargs)

    def new_image(self, **kwargs):
        return super(ImageFactoryProxy, self).new_image(**kwargs)


class ImageProxy(glance.domain.proxy.Image):

    def __init__(self, image, context, sync_api=None):
        self.image = image
        self.sync_api = sync_api
        self.context = context
        super(ImageProxy, self).__init__(image)

glancesync/glance/sync/api/__init__.py
@@ -0,0 +1,22 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import paste.urlmap
def root_app_factory(loader, global_conf, **local_conf):
    return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)