Add source code to Tricircle

Initial PoC source code for Tricircle, the project for OpenStack cascading solution.

Change-Id: I8abc93839a26446cb61c8d9004dfd812bd91de6e
openstack 2014-09-20 17:01:18 +08:00 committed by joehuang
parent e75f01b9a6
commit 0885397d10
730 changed files with 187027 additions and 0 deletions

.gitignore

@@ -0,0 +1,45 @@
*.DS_Store
*.egg*
*.log
*.mo
*.pyc
*.swo
*.swp
*.sqlite
*.iml
*~
.autogenerated
.coverage
.nova-venv
.project
.pydevproject
.ropeproject
.testrepository/
.tox
.idea
.venv
AUTHORS
Authors
build-stamp
build/*
CA/
ChangeLog
coverage.xml
cover/*
covhtml
dist/*
doc/source/api/*
doc/build/*
etc/nova/nova.conf.sample
instances
keeper
keys
local_settings.py
MANIFEST
nosetests.xml
nova/tests/cover/*
nova/vcsversion.py
tools/conf/nova.conf*
tools/lintstack.head.py
tools/pylint_exceptions
etc/nova/nova.conf.sample

LICENSE

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md

@@ -0,0 +1,397 @@
Tricircle
===============================
Tricircle is a project for the [OpenStack cascading solution](https://wiki.openstack.org/wiki/OpenStack_cascading_solution), including the source code of the Nova Proxy, Cinder Proxy, Neutron L2/L3 Proxy, Glance sync manager, and Ceilometer Proxy (not implemented yet).
The project name "Tricircle" comes from a fractal. See the blog ["OpenStack cascading and fractal"](https://www.linkedin.com/today/post/article/20140729022031-23841540-openstack-cascading-and-fractal) for more information.
Important to know
-----------
* The initial source code is for PoC only. Refactoring will be done continuously to reach the OpenStack acceptance standard.
* The PoC source code is based on the IceHouse version, except that Neutron is a master-branch snapshot from July 1, 2014 that includes the DVR feature, not the IceHouse version. The Neutron code was downloaded from GitHub while it was still under development and review. The DVR part of the source code is not stable, and not all DVR features are included; for example, N-S functions are not ready.
* Neutron cascading uses the provider network feature, but Horizon doesn't support provider networks very well, so you have to use the Neutron CLI to create networks. Alternatively, set the default provider network type to VxLAN, or remove the "local", "flat", "VLAN", and "GRE" network type drivers from the ML2 plugin configuration.
* For Neutron L2/L3 features, only VxLAN/L3 across cascaded OpenStacks is supported in the current source code. VLAN2VLAN, VLAN2VxLAN and VxLAN2VxLAN across cascaded OpenStacks are also implemented for the IceHouse version, but the patch is not ready yet; the source code is in the VLAN2VLAN folder.
* The tunneling network for the cross-OpenStack piggyback data path uses VxLAN, which requires modifications to the L2 agent and L3 agent. We will refactor it to use GRE for the tunneling network to reduce the patch size for the Juno version.
* If you want to experience VLAN2VLAN, VLAN2VxLAN and VxLAN2VxLAN across cascaded OpenStacks, please ask a PoC team member for help; see the wiki page [Openstack cascading solution](https://wiki.openstack.org/wiki/OpenStack_cascading_solution) for contact information.
* Glance cascading uses the Glance V2 API. Only the CLI/python client supports the V2 API; Horizon doesn't support that version. Image management should therefore be done through the CLI, using V2 only; otherwise, Glance cascading cannot work properly.
* Glance cascading is not used by default, i.e., a global Glance is used by default. If Glance cascading is required, extra configuration is needed.
* Refactoring of the Tricircle source code based on the Juno version will start as soon as Juno is available.
Key modules
-----------
* Nova proxy
Plays a role similar to Nova-Compute. Forwards VM operations to the cascaded Nova, and is also responsible for attaching volumes and networks to VMs in the cascaded OpenStack.
* Cinder proxy
Plays a role similar to Cinder-Volume. Forwards volume operations to the cascaded Cinder.
* Neutron proxy
Includes the L2 proxy and L3 proxy, playing a role similar to the OVS-Agent/L3-Agent. Completes L2/L3 networking in the cascaded OpenStack, including cross-OpenStack networking.
* Glance sync
Synchronizes images between the cascading OpenStack and the policy-determined cascaded OpenStacks.
Patches required
------------------
* IceHouse-Patches
Patches for the OpenStack IceHouse version, including patches for both the cascading level and the cascaded level.
Feature Supported
------------------
* Nova cascading
Launch/Reboot/Terminate/Resize/Rescue/Pause/Un-pause/Suspend/Resume/VNC Console/Attach Volume/Detach Volume/Snapshot/KeyPair/Flavor
* Cinder cascading
Create Volume/Delete Volume/Attach Volume/Detach Volume/Extend Volume/Create Snapshot/Delete Snapshot/List Snapshots/Create Volume from Snapshot/Create Volume from Image/Create Volume from Volume (Clone)/Create Image from Volume
* Neutron cascading
Network/Subnet/Port/Router
* Glance cascading
Only the V2 API is supported: Create Image/Delete Image/List Image/Update Image/Upload Image/Patch Location/VM Snapshot/Image Synchronization
Known Issues
------------------
* Use "admin" role to experience these feature first, multi-tenancy has not been tested well.
* Launch VM only support "boot from image", "boot from volume", "boot from snapshot"
* Flavor only support new created flavor synchronized to the cascaded OpenStack, does not support flavor update synchronization to cascaded OpenStack yet.
* Must make a patch for "Create a volume from image", the patch link: https://bugs.launchpad.net/cinder/+bug/1308058
Installation without Glance cascading
------------
* **Prerequisites**
- The minimal installation requires three OpenStack IceHouse installations to experience the cross-cascaded-OpenStack L2/L3 functions. The minimal setup needs four nodes; see the following picture:
![minimal_setup](./minimal_setup.png?raw=true)
- The cascading OpenStack needs two nodes, Node1 and Node2. Add Node1 to AZ1 and Node2 to AZ2 in the cascading OpenStack, for both Nova and Cinder.
- It's recommended to name the cascading OpenStack region "Cascading_OpenStack" or "Region1".
- Node1 is an all-in-one OpenStack installation with KeyStone and Glance. Node1 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node, and will be replaced by the proxy node for AZ1.
- Node2 is a general Nova-Compute node with the Cinder-Volume and Neutron OVS-Agent/L3-Agent functions installed, and will be replaced by the proxy node for AZ2.
- The all-in-one cascaded OpenStack installed on Node3 functions as AZ1. Node3 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node so that VMs/volumes/networking can be created in AZ1. Glance only needs to be installed if Glance cascading is needed. Add Node3 to AZ1 in the cascaded OpenStack, both for Nova and Cinder. It's recommended to name the cascaded OpenStack region for Node3 "AZ1".
- The all-in-one cascaded OpenStack installed on Node4 functions as AZ2. Node4 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node so that VMs/volumes/networking can be created in AZ2. Glance only needs to be installed if Glance cascading is needed. Add Node4 to AZ2 in the cascaded OpenStack, both for Nova and Cinder. It's recommended to name the cascaded OpenStack region for Node4 "AZ2".
Make sure the clocks of these four nodes are synchronized, because the Nova Proxy/Cinder Proxy/Neutron L2/L3 Proxy query the cascaded OpenStacks using timestamps; incorrect time will cause VM/Volume/Port status synchronization to misbehave (a quick offset check is sketched below).
Register all service endpoints in the globally shared KeyStone.
Make sure the three OpenStacks can work independently before cascading is introduced, e.g., you can boot a VM with a network, create a volume, and attach a volume in each OpenStack. After verifying that the three OpenStacks work independently, clean up all created resources (VM/Volume/Network).
After all OpenStack installations are ready, it's time to install the IceHouse patches for both the cascading OpenStack and the cascaded OpenStacks, and then replace the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent with the Nova Proxy / Cinder Proxy / Neutron L2/L3 Proxy.
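To sanity-check clock synchronization across the four nodes, you can compare each node's offset against a common NTP server. Below is a minimal sketch, assuming the third-party ntplib package is installed (it is not part of Tricircle); run it on every node:
```
# Hedged sketch: report this node's clock offset against one NTP server.
import ntplib

client = ntplib.NTPClient()
response = client.request('pool.ntp.org', version=3)
# Offsets of more than about a second will break timestamp-based polling.
print('local clock offset: %.3f seconds' % response.offset)
```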
* **IceHouse patches installation step by step**
1. Node1
- Patches for Nova - instance_mapping_uuid_patch
This patch enables the Nova proxy to translate a cascading-level VM's UUID to the corresponding cascaded-level VM's UUID (a toy sketch of the mapping idea follows this node's patch list).
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch
```
Follow the README.md instructions to install the patch.
- Patches for Cinder - Volume/Snapshot/Backup UUID mapping patch
This patch enables the Cinder proxy to translate a cascading-level Volume/Snapshot/Backup UUID to the corresponding cascaded-level UUID.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/instance_mapping_uuid_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - DVR patch
This patch adds the DVR (distributed virtual router) feature to Neutron. Through DVR, all L2/L3 proxy nodes at the cascading level can receive the corresponding RPC messages and then convert the commands into RESTful API calls to the cascaded Neutron.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/dvr-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - ml2-mech-driver-cascading patch
This patch enables the L2 population driver to populate the VM's host IP, which is stored in the port binding profile of one cascaded OpenStack, to another cascaded OpenStack.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/ml2-mech-driver-cascading-patch
```
Follow the README.md instructions to install the patch.
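To illustrate the UUID translation the two mapping patches above enable, here is a toy sketch; the names are illustrative only, and the real patches persist the mapping in the Nova/Cinder databases rather than in memory:
```
# Toy sketch of the cascading-to-cascaded UUID mapping, in-memory only.
class UuidMapper(object):

    def __init__(self):
        self._cascading_to_cascaded = {}

    def record(self, cascading_uuid, cascaded_uuid):
        # Stored when the proxy creates the resource in the cascaded OpenStack.
        self._cascading_to_cascaded[cascading_uuid] = cascaded_uuid

    def to_cascaded(self, cascading_uuid):
        # Used when forwarding an operation (reboot, delete, ...) downstream.
        return self._cascading_to_cascaded[cascading_uuid]
```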
2. Node3
- Patches for Nova - port binding profile update bug: https://bugs.launchpad.net/neutron/+bug/1338202.
The ml2-mech-driver-cascaded-patch updates the binding profile in the port, and the profile will be flushed to null if you don't fix this bug.
You can also fix the bug via:
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch/nova/network/neutronv2/
cp api.py $python_installation_path/site-packages/nova/network/neutronv2/
```
The patch will preserve what has been saved in the port binding profile.
- Patches for Cinder - timestamp-query-patch
This patch enables the cascaded Cinder to execute queries with a timestamp filter instead of returning all objects (see the sketch after this node's patch list).
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/timestamp-query-patch_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - DVR patch
This patch adds the DVR (distributed virtual router) feature to Neutron.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/dvr-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - ml2-mech-driver-cascaded patch
This patch enables the L2 population driver to populate the virtual remote port for a VM located in another OpenStack.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/ml2-mech-driver-cascaded-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - openvswitch-agent patch
This patch propagates DVR MACs across OpenStacks for cross-OpenStack L3 networking (VLAN-VLAN/VLAN-VxLAN/VxLAN-VxLAN).
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/openvswitch-agent-patch
```
Follow the README.md instructions to install the patch.
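The timestamp-query patch above matters because the proxies poll the cascaded OpenStack periodically. A minimal sketch of the filtering idea, with made-up data (not the actual patch code):
```
# Hedged sketch: return only volumes updated since the proxy's last poll.
from datetime import datetime, timedelta

def changed_since(volumes, since):
    # volumes: iterable of dicts with an 'updated_at' datetime field.
    return [v for v in volumes if v['updated_at'] and v['updated_at'] > since]

last_poll = datetime.utcnow() - timedelta(seconds=5)  # volume_sync_interval=5
volumes = [
    {'id': 'vol-1', 'updated_at': datetime.utcnow()},
    {'id': 'vol-2', 'updated_at': datetime.utcnow() - timedelta(hours=1)},
]
print(changed_since(volumes, last_poll))  # only vol-1 is returned
```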
3. Node4
- Patches for Nova - port binding profile update bug: https://bugs.launchpad.net/neutron/+bug/1338202.
The ml2-mech-driver-cascaded-patch updates the binding profile in the port, and the profile will be flushed to null if you don't fix this bug.
You can also fix the bug via:
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch/nova/network/neutronv2/
cp api.py $python_installation_path/site-packages/nova/network/neutronv2/
```
The patch will preserve what has been saved in the port binding profile.
- Patches for Cinder - timestamp-query-patch
This patch enables the cascaded Cinder to execute queries with a timestamp filter instead of returning all objects.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/timestamp-query-patch_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - DVR patch
This patch adds the DVR (distributed virtual router) feature to Neutron.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/dvr-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - ml2-mech-driver-cascaded patch
This patch enables the L2 population driver to populate the virtual remote port for a VM located in another OpenStack.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/ml2-mech-driver-cascaded-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - openvswitch-agent patch
This patch propagates DVR MACs across OpenStacks for cross-OpenStack L3 networking (VLAN-VLAN/VLAN-VxLAN/VxLAN-VxLAN).
Navigate to the folder
```
cd ./tricircle/icehouse-patches/neutron/openvswitch-agent-patch
```
Follow the README.md instructions to install the patch.
* **Proxy installation step by step**
1. Node1
- Nova proxy
Navigate to the folder
```
cd ./tricircle/novaproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- Cinder proxy
Navigate to the folder
```
cd ./tricircle/cinderproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L2 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l2-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L3 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l3-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
2. Node2
- Nova proxy
Navigate to the folder
```
cd ./tricircle/novaproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/nova/instance_mapping_uuid_patch/nova/objects
cp instance.py $python_installation_path/site-packages/nova/objects/
```
This file is a patch for instance UUID mapping used in the proxy nodes.
- Cinder proxy
Navigate to the folder
```
cd ./tricircle/cinderproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/cinder/uuid-mapping-patch/cinder/db/sqlalchemy
cp models.py $python_installation_path/site-packages/cinder/db/sqlalchemy
```
This file is a patch for volume UUID mapping used in the proxy nodes.
- L2 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l2-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L3 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l3-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
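After the proxies are installed, you can verify that each proxy node registered as a nova-compute service in its availability zone in the cascading Nova. A minimal sketch using the IceHouse-era python-novaclient; the credentials, URL, and region are placeholders for your environment:
```
# Hedged sketch: list nova-compute services seen by the cascading Nova.
from novaclient.v1_1 import client

nova = client.Client('admin', 'ADMIN_PASSWORD', 'admin',
                     'http://GLOBAL_KEYSTONE_IP:5000/v2.0/',
                     region_name='Cascading_OpenStack')
# Each proxy node should appear here, one per availability zone.
for svc in nova.services.list(binary='nova-compute'):
    print(svc.host, svc.zone, svc.state)
```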
Upgrade to Glance cascading
------------
* **Prerequisites**
- To experience the Glance cascading feature, you can simply upgrade the current installation in a few steps, as shown in the following picture:
![minimal_setup_with_glance_cascading](./minimal_setup_with_glance_cascading.png?raw=true)
1. Node1
- Patches for Glance - glance_location_patch
This patch enables Glance to handle HTTP URL locations. The patch also inserts the sync manager into the chain of responsibility.
Navigate to the folder
```
cd ./tricircle/icehouse-patches/glance/glance_location_patch
```
Follow the README.md instructions to install the patch.
- Sync Manager
Navigate to the folder
```
cd ./tricircle/glancesync
```
Modify the storage scheme configuration for the cascading and cascaded levels:
```
vi ./tricircle/glancesync/etc/glance/glance_store.yaml
```
Follow the README.md instructions to install the sync manager. Please change the configuration values in install.sh according to your environment settings, especially for the configuration:
sync_enabled=True
sync_server_port=9595
sync_server_host=127.0.0.1
2. Node3
- Glance Installation
Please install Glance on Node3 as the cascaded Glance.
Register the service endpoint in KeyStone.
Change the Glance endpoint in nova.conf and cinder.conf to the Glance located on Node3.
3. Node4
- Glance Installation
Please install Glance on Node4 as the cascaded Glance.
Register the service endpoint in KeyStone.
Change the Glance endpoint in nova.conf and cinder.conf to the Glance located on Node4.
4. Configuration
- Change the Nova proxy configuration on Node1: set "cascaded_glance_flag" to True and add the "cascaded_glance_url" of Node3, according to the Nova-proxy README.md instructions
- Change the Cinder proxy configuration on Node1: set "glance_cascading_flag" to True and add the "cascaded_glance_url" of Node3, according to the Cinder-proxy README.md instructions
- Change the Nova proxy configuration on Node2: set "cascaded_glance_flag" to True and add the "cascaded_glance_url" of Node4, according to the Nova-proxy README.md instructions
- Change the Cinder proxy configuration on Node2: set "glance_cascading_flag" to True and add the "cascaded_glance_url" of Node4, according to the Cinder-proxy README.md instructions
5. Experience Glance cascading
- Restart all related services
- Use the Glance V2 API to create an image, upload image data, or patch a location for an image. The image should sync to the distributed Glances if sync_enabled is set to True (see the sketch after this list)
- Syncing an image only on first use, rather than at upload or patch-location time, is still in the testing phase and may not work properly
- Create VM/Volume/etc. from Horizon
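For reference, here is a minimal sketch of the create/upload/patch-location flow through the V2 API using python-glanceclient; the endpoint, token, file name, and location URL are placeholders, and add_location requires show_multiple_locations=True on the Glance server:
```
# Hedged sketch: drive the Glance V2 API with python-glanceclient.
from glanceclient import Client

glance = Client('2', endpoint='http://127.0.0.1:9292', token='ADMIN_TOKEN')

# Create the image record; it stays 'queued' until data or a location exists.
image = glance.images.create(name='cirros', disk_format='qcow2',
                             container_format='bare')

# Either upload the image data ...
with open('cirros.img', 'rb') as f:
    glance.images.upload(image.id, f)

# ... or patch in an existing location instead of uploading:
# glance.images.add_location(image.id, 'http://127.0.0.1:8800/test.img', {})
```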

cinderproxy/README.md

@@ -0,0 +1,148 @@
OpenStack Cinder Proxy
===============================
Cinder-Proxy plays the same role as Cinder-Volume in the cascading OpenStack.
Cinder-Proxy treats the cascaded Cinder as its volume backend, converting internal request messages from the message bus into RESTful API calls to the cascaded Cinder.
Key modules
-----------
* The new cinder proxy module cinder_proxy, which treats the cascaded Cinder as its volume backend and converts internal request messages from the message bus into RESTful API calls to the cascaded Cinder (a minimal sketch of this pattern follows):
cinder/volume/cinder_proxy.py
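Below is a minimal sketch of this message-bus-to-REST conversion pattern, assuming python-cinderclient; the class and all names are illustrative, not the actual cinder_proxy.py code:
```
# Hedged sketch of the RPC-to-REST conversion the Cinder-Proxy performs.
from cinderclient.v2 import client as cinder_client

class MiniCinderProxy(object):
    """Receives casts from the message bus, forwards them as REST calls."""

    def __init__(self, username, password, tenant, auth_url, region):
        # One client per cascaded Cinder; the real proxy manages tokens itself.
        self._client = cinder_client.Client(username, password, tenant,
                                            auth_url, region_name=region)

    def create_volume(self, context, volume):
        # 'volume' stands in for the cascading-level record handed over by RPC.
        cascaded = self._client.volumes.create(
            volume['size'],
            name=volume['display_name'],
            availability_zone=volume['availability_zone'])
        # The real proxy stores this mapping so later polls can translate
        # cascaded UUIDs back to cascading-level UUIDs.
        return {'cascading_id': volume['id'], 'cascaded_id': cascaded.id}
```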
Requirements
------------
* openstack-cinder-volume-2014.1-14.1 has been installed
Installation
------------
We provide two ways to install the cinder proxy code. In this section, we will guide you through installing the cinder proxy with the minimum configuration.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
- We recommend that you back up at least the following files before installation, because they will be overwritten or modified:
$CINDER_CONFIG_PARENT_DIR/cinder.conf
(replace the $... with actual directory names.)
* **Manual Installation**
- Make sure you have performed backups properly.
- Navigate to the local repository and copy the contents of the 'cinder' sub-directory to the corresponding places in the existing cinder, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/cinder $CINDER_PARENT_DIR```
(replace the $... with actual directory name.)
- Update the cinder configuration file (e.g. /etc/cinder/cinder.conf) with the minimum options below. If an option already exists, modify its value; otherwise add it to the config file. Check the "Configurations" section below for a full configuration guide.
```
[DEFAULT]
...
###configuration for Cinder cascading ###
volume_manager=cinder.volume.cinder_proxy.CinderProxy
volume_sync_interval=5
cinder_tenant_name=$CASCADED_ADMIN_TENANT
cinder_username=$CASCADED_ADMIN_NAME
cinder_password=$CASCADED_ADMIN_PASSWORD
keystone_auth_url=http://$GLOBAL_KEYSTONE_IP:5000/v2.0/
cascading_glance_url=$CASCADING_GLANCE
cascaded_glance_url=http://$CASCADED_GLANCE
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
cascaded_region_name=$CASCADED_REGION_NAME
```
- Restart the cinder proxy.
```service openstack-cinder-volume restart```
- Done. The cinder proxy should be working with a demo configuration.
* **Automatic Installation**
- Make sure you have performed backups properly.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
- Done. The installation script should set up the cinder proxy with the minimum configuration below. Check the "Configurations" section for a full configuration guide.
```
[DEFAULT]
...
###cascade info ###
...
###configuration for Cinder cascading ###
volume_manager=cinder.volume.cinder_proxy.CinderProxy
volume_sync_interval=5
cinder_tenant_name=$CASCADED_ADMIN_TENANT
cinder_username=$CASCADED_ADMIN_NAME
cinder_password=$CASCADED_ADMIN_PASSWORD
keystone_auth_url=http://$GLOBAL_KEYSTONE_IP:5000/v2.0/
cascading_glance_url=$CASCADING_GLANCE
cascaded_glance_url=http://$CASCADED_GLANCE
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE
cascaded_region_name=$CASCADED_REGION_NAME
```
* **Troubleshooting**
In case the automatic installation process does not complete, please check the following:
- Make sure your OpenStack version is Icehouse.
- Check the variables in the beginning of the install.sh scripts. Your installation directories may be different from the default values we provide.
- The installation script will automatically add the related code to $CINDER_PARENT_DIR/cinder and modify the related configuration.
- In case the automatic installation does not work, try to install manually.
Configurations
--------------
* This is a (default) configuration sample for the cinder proxy. Please add/modify these options in /etc/cinder/cinder.conf.
* Note:
- Please carefully make sure that options in the configuration file are not duplicated. If an option name already exists, modify its value instead of adding a new one of the same name.
- Please refer to the option descriptions below for proper configuration and usage.
```
[DEFAULT]
...
#
#Options defined in cinder.volume.manager
#
# Default driver to use for the cinder proxy (string value)
volume_manager=cinder.volume.cinder_proxy.CinderProxy
#The cascading level keystone component service url, by which the cinder proxy
#can access the cascading level keystone service
keystone_auth_url=$keystone_auth_url
#The cascading level glance component service url, by which the cinder proxy
#can access the cascading level glance service
cascading_glance_url=$CASCADING_GLANCE
#The cascaded level glance component service url, by which the cinder proxy
#can judge whether the cascading glance image has a location for this cascaded glance
cascaded_glance_url=http://$CASCADED_GLANCE
#The cascaded level region name, which will be set as a parameter when
#the cascaded level component services register endpoints to keystone
cascaded_region_name=$CASCADED_REGION_NAME
#The cascaded level availability zone name, which will be set as a parameter
#when forwarding requests to the cascaded level cinder. Note that the value of
#cascaded_available_zone in cinder-proxy must be the same as
#storage_availability_zone in the cascaded level node.
#This configuration could be removed in the future in favor of the cinder-proxy
#storage_availability_zone configuration item, but it is up to the admin to
#make sure storage_availability_zone in cinder-proxy and the cascaded cinder
#keep the same value.
cascaded_available_zone=$CASCADED_AVAILABLE_ZONE

(File diff suppressed because it is too large.)

cinderproxy/installation/install.sh

@@ -0,0 +1,130 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder-proxy/installation/install.log"
# please set the option list to be written into the cinder config file
_CINDER_CONF_OPTION=("volume_manager=cinder.volume.cinder_proxy.CinderProxy volume_sync_interval=5 periodic_interval=5 cinder_tenant_name=admin cinder_username=admin cinder_password=1234 keystone_auth_url=http://10.67.148.210:5000/v2.0/ glance_cascading_flag=False cascading_glance_url=10.67.148.210:9292 cascaded_glance_url=http://10.67.148.201:9292 cascaded_cinder_url=http://10.67.148.201:8776/v2/%(project_id)s cascaded_region_name=Region_AZ1 cascaded_available_zone=AZ1")
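# Note: the single-string "array" above works because the unquoted expansion
# in the "for option in $_CINDER_CONF_OPTION" loop below word-splits it into
# one key=value pair per iteration.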
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder/"
_BACKUP_DIR="${_CINDER_DIR}/cinder-proxy-installation-backup"
function log()
{
if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
touch $_CINDER_INSTALL_LOG
chmod 777 $_CINDER_INSTALL_LOG
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
log "Could not find cinder config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/cinder" ] ; then
log "It seems cinder-proxy has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/cinder"
mkdir -p "${_BACKUP_DIR}/etc/cinder"
cp -r "${_CINDER_DIR}/volume" "${_BACKUP_DIR}/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
log "Error in code backup, aborted."
exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/etc/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
rm -r "${_BACKUP_DIR}/etc"
log "Error in config backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please install manually."
fi
exit 1
fi
log "updating config file..."
sed -i.backup -e "/volume_manager *=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
sed -i.backup -e "/periodic_interval *=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
for option in $_CINDER_CONF_OPTION
do
sed -i -e "/\[DEFAULT\]/a \\"$option "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
done
if [ $? -ne 0 ] ; then
log "Error in updating, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering /cinder failed! Please install manually."
fi
cp "${_BACKUP_DIR}/etc/cinder/${_CINDER_CONF_FILE}" "${_CINDER_CONF_DIR}" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
log "Recovering config failed! Please install manually."
fi
exit 1
fi
log "restarting cinder proxy..."
service openstack-cinder-volume restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder proxy manually."
exit 1
fi
log "Cinder proxy Completed."
log "See README to get started."
exit 0

cinderproxy/installation/uninstall.sh

@@ -0,0 +1,129 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_CONF_OPTION=("volume_manager volume_sync_interval periodic_interval cinder_tenant_name cinder_username cinder_password keystone_auth_url glance_cascading_flag cascading_glance_url cascaded_glance_url cascaded_cinder_url cascaded_region_name cascaded_available_zone")
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder-proxy-installation-backup"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder-proxy/installation/install.log"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
function log()
{
if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
touch $_CINDER_INSTALL_LOG
chmod 777 $_CINDER_INSTALL_LOG
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
log "Could not find cinder config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking backup..."
if [ ! -d "${_BACKUP_DIR}/cinder" ] ; then
log "Could not find backup files. It is possible that the cinder-proxy has been uninstalled."
log "If this is not the case, then please uninstall manually."
exit 1
fi
log "backing up current files that might be overwritten..."
if [ -d "${_BACKUP_DIR}/uninstall" ] ; then
rm -r "${_BACKUP_DIR}/uninstall"
fi
mkdir -p "${_BACKUP_DIR}/uninstall/cinder"
mkdir -p "${_BACKUP_DIR}/uninstall/etc/cinder"
cp -r "${_CINDER_DIR}/volume" "${_BACKUP_DIR}/uninstall/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/cinder"
log "Error in code backup, aborted."
exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/uninstall/etc/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/cinder"
rm -r "${_BACKUP_DIR}/uninstall/etc"
log "Error in config backup, aborted."
exit 1
fi
log "restoring code to the status before installing cinder-proxy..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering current files..."
cp -r "${_BACKUP_DIR}/uninstall/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Recovering failed! Please uninstall manually."
fi
exit 1
fi
log "updating config file..."
for option in $_CINDER_CONF_OPTION
do
sed -i.uninstall.backup -e "/"$option "*=/d" "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}"
done
if [ $? -ne 0 ] ; then
log "Error in updating, aborted."
log "Recovering current files..."
cp "${_BACKUP_DIR}/uninstall/etc/cinder/${_CINDER_CONF_FILE}" "${_CINDER_CONF_DIR}"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please uninstall manually."
fi
exit 1
fi
log "cleaning up backup files..."
rm -r "${_BACKUP_DIR}/cinder" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
log "There was an error when cleaning up the backup files."
fi
log "restarting cinder volume..."
service openstack-cinder-volume restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder volume manually."
exit 1
fi
log "Completed."
exit 0

glancesync/README.md

@@ -0,0 +1,140 @@
Glance Sync Manager
===============================
This is a submodule of the Tricircle project, which adds a sync function to support Glance image synchronization between the cascading and cascaded OpenStacks.
When launching an instance, Nova looks for the image in the same region as the instance to download it; this can speed up the instance's overall launch time.
Key modules
-----------
* Primarily, there is only one new module in Glance cascading: Sync, which is in the glance/sync package.
glance/sync/__init__.py : Adds an ImageRepoProxy class (like store, policy, etc.) to add a sync mechanism layer on top of the API request handling chain.
glance/sync/base.py : Contains the SyncManager object, which executes the sync operations.
glance/sync/utils.py : Some helper functions.
glance/sync/api/ : Supports the sync web server.
glance/sync/client/: Supports a client for the web server; ImageRepoProxy uses this client to issue sync requests.
glance/sync/task/: Each sync operation is transformed into a task; we use a queue to store the tasks and eventlet to handle them concurrently (see the sketch after this list).
glance/sync/store/: Implements the independent glance store, separating the handling of image data from image metadata.
glance/cmd/sync.py: The sync server entry point (referenced by /usr/bin/glance-sync).
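A minimal sketch of the queue-plus-eventlet task pattern described above; the real implementation lives in glance/sync/task/, and all names here are made up:
```
# Hedged sketch of the queue + eventlet worker pattern, not the actual code.
import eventlet
from eventlet import queue

task_queue = queue.Queue()

def worker():
    # Each green thread pulls sync tasks off the queue and runs them.
    while True:
        task = task_queue.get()
        try:
            task()  # e.g. copy image data, then patch the cascaded location
        finally:
            task_queue.task_done()

pool = eventlet.GreenPool()
for _ in range(4):
    pool.spawn_n(worker)

def sample_task():
    print('sync image to physicalOpenstack001')

task_queue.put(sample_task)
task_queue.join()  # wait until every queued task is done
```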
* **Note:**
At present, Glance cascading only supports the v2 version of the glance-api;
Requirements
------------
* pexpect>=2.3
Installation
------------
* **Note:**
- The installation and configuration guidelines below are just for the cascading layer of Glance. For the cascaded layer, Glance is installed as normal.
* **Prerequisites**
- Please install the python package pexpect>=2.3 (we use pxssh for logging in, and there is a bug in pxssh, see https://mail.python.org/pipermail/python-list/2008-February/510054.html; you should fix it before launching the service).
* **Manual Installation**
- Make sure you have performed backups properly.
1. Under the cascading OpenStack, copy these files from the glance-patch directory and the glancesync directory to the appropriate places:
| DIR | FROM | TO |
| ------------- |:-----------------|:-------------------------------------------|
| glancesync | glance/ | ${python_install_dir}/glance |
| glancesync | etc/glance/ | /etc/glance/ |
| glancesync | glance-sync | /usr/bin/ |
|${glance-patch}| glance/ | ${python_install_dir}/glance |
|${glance-patch}|glance.egg-info/entry_points.txt | ${glance_install_egg.info}/ |
${glance-patch} = `icehouse-patches/glance/glance_location_patch`; ${python_install_dir} is where OpenStack is installed, e.g. `/usr/lib64/python2.6/site-packages` .
2. Add/modify the config options
| CONFIG_FILE | OPTION | ADD or MODIFY |
| ----------------|:---------------------------------------------------|:--------------:|
|glance-api.conf | show_multiple_locations=True | M |
|glance-api.conf | sync_server_host=${sync_mgr_host} | A |
|glance-api.conf | sync_server_port=9595 | A |
|glance-api.conf | sync_enabled=True | A |
|glance-sync.conf | cascading_endpoint_url=${glance_api_endpoint_url} | M |
|glance-sync.conf | sync_strategy=ALL | M |
|glance-sync.conf | auth_host=${keystone_host} | M |
3. Re-launch the services on the cascading OpenStack, e.g.:
`service openstack-glance-api restart `
`service openstack-glance-registry restart `
`python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf & `
* **Automatic Installation**
1. Enter the glance-patch installation dir: `cd ./tricircle/icehouse-patches/glance/glance_location_patch/installation` .
2. Optionally, modify the shell script variable: `_PYTHON_INSTALL_DIR` .
3. Run the install script: `sh install.sh`
4. Enter the glancesync installation dir: `cd ./tricircle/glancesync/installation` .
5. Modify the cascading and cascaded Glances' store scheme configuration, which is in the file: `./tricircle/glancesync/etc/glance/glance_store.yaml` .
6. Optionally, modify the config options in the shell script: `sync_enabled=True`, `sync_server_port=9595`, `sync_server_host=127.0.0.1` with the proper values.
7. Run the install script: `sh install.sh`
Configurations
--------------
Besides the glance-api.conf file, we add some new config files; they are described separately.
- In glance-api.conf, three options are added:
[DEFAULT]
# Indicates whether to use image sync; the default value is False.
#If configuring on the cascading layer, this value should be True.
sync_enabled = True
#The sync server's port number, default 9595.
sync_server_port = 9595
#The sync server's host name (or IP address)
sync_server_host = 127.0.0.1
*Besides, the option show_multiple_locations should be set to true.
- In the newly added glance-sync.conf, the options are similar to glance-registry.conf except:
[DEFAULT]
#How to sync the image; the value can be ["None", "ALL", "USER"]
#When "ALL" is chosen, sync to all the cascaded glances;
#When "USER" is chosen, sync according to the user's role, project, etc.
sync_strategy = ALL
#The cascading glance endpoint URL. (Note that this value should be
#consistent with what is in keystone.)
cascading_endpoint_url = http://127.0.0.1:9292/
#For snapshot sync, the timeout (seconds) for the snapshot's status
#to change to 'active'.
snapshot_timeout = 300
#For snapshot sync, the polling interval (seconds) to check the
#snapshot's status.
snapshot_sleep_interval = 10
#The retry count when a sync task fails.
task_retry_times = 0
#The timeout (seconds) when copying image data between filesystems
#using 'scp'.
scp_copy_timeout = 3600
#For snapshots, one can set the specific regions to which the snapshot
#will sync. (e.g. physicalOpenstack001, physicalOpenstack002)
snapshot_region_names =
- Last but also important, we add a YAML file to configure the store backends for copying: glance_store.yaml in the cascading Glance.
These configs correspond to the various store schemes (at present, only filesystem is supported). The values
depend on your environment, so you have to configure the file before installation, and restart glance-sync
whenever you modify it.

glancesync/glance-sync

@@ -0,0 +1,10 @@
#!/usr/bin/python
# PBR Generated from 'console_scripts'
import sys

from glance.cmd.sync import main

if __name__ == "__main__":
    sys.exit(main())

glancesync/etc/glance/glance-sync-paste.ini

@@ -0,0 +1,35 @@
# Use this pipeline for no auth - DEFAULT
[pipeline:glance-sync]
pipeline = versionnegotiation unauthenticated-context rootapp
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
# Use this pipeline for keystone auth
[pipeline:glance-sync-keystone]
pipeline = versionnegotiation authtoken context rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-sync-trusted-auth]
pipeline = versionnegotiation context rootapp
[composite:rootapp]
paste.composite_factory = glance.sync.api:root_app_factory
/v1: syncv1app
[app:syncv1app]
paste.app_factory = glance.sync.api.v1:API.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

glancesync/etc/glance/glance-sync.conf

@@ -0,0 +1,57 @@
[DEFAULT]
# Show debugging output in logs (sets DEBUG log level output)
debug = True
# Address to bind the API server
bind_host = 0.0.0.0
# Port the bind the API server to
bind_port = 9595
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/sync.log
# Backlog requests when creating socket
backlog = 4096
#How to sync the image; the value can be ["None", "ALL", "USER"]
#When "ALL" is chosen, sync to all the cascaded glances;
#When "USER" is chosen, sync according to the user's role, project, etc.
sync_strategy = None
#The cascading glance endpoint URL.
cascading_endpoint_url = http://127.0.0.1:9292/
#For snapshot sync, the timeout (seconds) for the snapshot's status
#to change to 'active'.
snapshot_timeout = 300
#For snapshot sync, the polling interval (seconds) to check the
#snapshot's status.
snapshot_sleep_interval = 10
#The retry count when a sync task fails.
task_retry_times = 0
#The timeout (seconds) when copying image data between filesystems
#using 'scp'.
scp_copy_timeout = 3600
#For snapshots, one can set the specific regions to which the snapshot
#will sync.
snapshot_region_names = physicalOpenstack001, physicalOpenstack002
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = glance
admin_password = glance
[paste_deploy]
config_file = /etc/glance/glance-sync-paste.ini
flavor=keystone

glancesync/etc/glance/glance_store.yaml

@@ -0,0 +1,29 @@
---
glances:
  - name: master
    service_ip: "127.0.0.1"
    schemes:
      - name: http
        parameters:
          netloc: '127.0.0.1:8800'
          path: '/'
          image_name: 'test.img'
      - name: filesystem
        parameters:
          host: '127.0.0.1'
          datadir: '/var/lib/glance/images/'
          login_user: 'glance'
          login_password: 'glance'
  - name: slave1
    service_ip: "0.0.0.0"
    schemes:
      - name: http
        parameters:
          netloc: '0.0.0.0:8800'
          path: '/'
      - name: filesystem
        parameters:
          host: '0.0.0.0'
          datadir: '/var/lib/glance/images/'
          login_user: 'glance'
          login_password: 'glance'

glancesync/glance/cmd/sync.py

@@ -0,0 +1,59 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""
Reference implementation server for Glance Sync
"""
import eventlet
import os
import sys
from oslo.config import cfg
# Monkey patch socket and time
eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=True)
# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)
from glance.common import config
from glance.common import exception
from glance.common import wsgi
from glance.openstack.common import log
import glance.sync
def main():
    try:
        # default_config_files must be a list of file names, not a string
        config.parse_args(default_config_files=['glance-sync.conf'])
        log.setup('glance')
        server = wsgi.Server()
        server.start(config.load_paste_app('glance-sync'), default_port=9595)
        server.wait()
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)


if __name__ == '__main__':
    main()

glancesync/glance/sync/__init__.py

@@ -0,0 +1,257 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
import glance.context
import glance.domain.proxy
import glance.openstack.common.log as logging
from glance.sync.clients import Clients as clients
from glance.sync import utils
LOG = logging.getLogger(__name__)
_V2_IMAGE_CREATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
'min_ram', 'name', 'virtual_size', 'visibility',
'protected']
_V2_IMAGE_UPDATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
'min_ram', 'name']
def _check_trigger_sync(pre_image, image):
"""
check if it is the case that the cascaded glance has upload or first patch
location.
"""
return pre_image.status in ('saving', 'queued') and image.size and \
[l for l in image.locations if not utils.is_glance_location(l['url'])]
def _from_snapshot_request(pre_image, image):
"""
when patch location, check if it's snapshot-sync case.
"""
if pre_image.status == 'queued' and len(image.locations) == 1:
loc_meta = image.locations[0]['metadata']
return loc_meta and loc_meta.get('image_from', None) in ['snapshot',
'volume']
def get_adding_image_properties(image):
_tags = list(image.tags) or []
kwargs = {}
kwargs['body'] = {}
for key in _V2_IMAGE_CREATE_PROPERTIES:
try:
value = getattr(image, key, None)
if value and value != 'None':
kwargs['body'][key] = value
except KeyError:
pass
_properties = getattr(image, 'extra_properties') or None
if _properties:
extra_keys = _properties.keys()
for _key in extra_keys:
kwargs['body'][_key] = _properties[_key]
if _tags:
kwargs['body']['tags'] = _tags
return kwargs
def get_existing_image_locations(image):
return {'locations': image.locations}
class ImageRepoProxy(glance.domain.proxy.Repo):
def __init__(self, image_repo, context, sync_api):
self.image_repo = image_repo
self.context = context
self.sync_client = sync_api.get_sync_client(context)
proxy_kwargs = {'context': context, 'sync_api': sync_api}
super(ImageRepoProxy, self).__init__(image_repo,
item_proxy_class=ImageProxy,
item_proxy_kwargs=proxy_kwargs)
def _sync_saving_metadata(self, pre_image, image):
kwargs = {}
remove_keys = []
changes = {}
"""
image base properties
"""
for key in _V2_IMAGE_UPDATE_PROPERTIES:
pre_value = getattr(pre_image, key, None)
my_value = getattr(image, key, None)
if not my_value and not pre_value or my_value == pre_value:
continue
if not my_value and pre_value:
remove_keys.append(key)
else:
changes[key] = my_value
"""
image extra_properties
"""
pre_props = pre_image.extra_properties or {}
_properties = image.extra_properties or {}
addset = set(_properties.keys()).difference(set(pre_props.keys()))
removeset = set(pre_props.keys()).difference(set(_properties.keys()))
mayrepset = set(pre_props.keys()).intersection(set(_properties.keys()))
for key in addset:
changes[key] = _properties[key]
for key in removeset:
remove_keys.append(key)
for key in mayrepset:
if _properties[key] == pre_props[key]:
continue
changes[key] = _properties[key]
"""
image tags
"""
tag_dict = {}
pre_tags = pre_image.tags
new_tags = image.tags
added_tags = set(new_tags) - set(pre_tags)
removed_tags = set(pre_tags) - set(new_tags)
if added_tags:
tag_dict['add'] = added_tags
if removed_tags:
tag_dict['delete'] = removed_tags
if tag_dict:
kwargs['tags'] = tag_dict
kwargs['changes'] = changes
kwargs['removes'] = remove_keys
if not changes and not remove_keys and not tag_dict:
return
LOG.debug(_('In image %s, some properties changed, sync...')
% (image.image_id))
self.sync_client.update_image_metadata(image.image_id, **kwargs)
def _try_sync_locations(self, pre_image, image):
image_id = image.image_id
"""
image locations
"""
locations_dict = {}
pre_locs = pre_image.locations
_locs = image.locations
"""
if all locations of cascading removed, the image status become 'queued'
so the cascaded images should be 'queued' too. we replace all locations
with '[]'
"""
if pre_locs and not _locs:
LOG.debug(_('The image %s all locations removed, sync...')
% (image_id))
self.sync_client.sync_locations(image_id,
action='CLEAR',
locs=pre_locs)
return
added_locs = []
removed_locs = []
for _loc in pre_locs:
if _loc in _locs:
continue
removed_locs.append(_loc)
for _loc in _locs:
if _loc in pre_locs:
continue
added_locs.append(_loc)
if added_locs:
if _from_snapshot_request(pre_image, image):
add_kwargs = get_adding_image_properties(image)
else:
add_kwargs = {}
LOG.debug(_('The image %s add locations, sync...') % (image_id))
self.sync_client.sync_locations(image_id,
action='INSERT',
locs=added_locs,
**add_kwargs)
elif removed_locs:
LOG.debug(_('The image %s remove some locations, sync...')
% (image_id))
self.sync_client.sync_locations(image_id,
action='DELETE',
locs=removed_locs)
def save(self, image):
pre_image = self.get(image.image_id)
result = super(ImageRepoProxy, self).save(image)
image_id = image.image_id
if _check_trigger_sync(pre_image, image):
add_kwargs = get_adding_image_properties(image)
self.sync_client.sync_data(image_id, **add_kwargs)
LOG.debug(_('Sync data when the image status changes to ACTIVE, the '
'image id is %s.') % image_id)
else:
"""
In case of add/remove/replace locations property.
"""
self._try_sync_locations(pre_image, image)
"""
In case of sync the glance's properties
"""
if image.status == 'active':
self._sync_saving_metadata(pre_image, image)
return result
def remove(self, image):
result = super(ImageRepoProxy, self).remove(image)
LOG.debug(_('Image %s removed, sync...') % (image.image_id))
delete_kwargs = get_existing_image_locations(image)
self.sync_client.remove_image(image.image_id, **delete_kwargs)
return result
class ImageFactoryProxy(glance.domain.proxy.ImageFactory):
def __init__(self, factory, context, sync_api):
self.context = context
self.sync_api = sync_api
proxy_kwargs = {'context': context, 'sync_api': sync_api}
super(ImageFactoryProxy, self).__init__(factory,
proxy_class=ImageProxy,
proxy_kwargs=proxy_kwargs)
def new_image(self, **kwargs):
return super(ImageFactoryProxy, self).new_image(**kwargs)
class ImageProxy(glance.domain.proxy.Image):
def __init__(self, image, context, sync_api=None):
self.image = image
self.sync_api = sync_api
self.context = context
super(ImageProxy, self).__init__(image)
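# An illustrative walk-through of the save() hook above (all values are
# hypothetical): a cascaded upload flips the image from 'queued' to having
# data plus one non-glance location, which makes _check_trigger_sync()
# truthy and kicks off sync_data(); pure metadata edits on an 'active'
# image fall through to _sync_saving_metadata() instead.
#
# pre_image.status = 'queued'
# image.size = 1024
# image.locations = [{'url': 'file:///var/lib/glance/images/<uuid>',
#                     'metadata': {}}]
# _check_trigger_sync(pre_image, image)  # -> truthy, so sync_data() runs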

View File

@ -0,0 +1,22 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import paste.urlmap
def root_app_factory(loader, global_conf, **local_conf):
return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)

View File

@ -0,0 +1,59 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from glance.common import wsgi
from glance.sync.api.v1 import images
def init(mapper):
images_resource = images.create_resource()
mapper.connect("/cascaded-eps",
controller=images_resource,
action="endpoints",
conditions={'method': ['POST']})
mapper.connect("/images/{id}",
controller=images_resource,
action="update",
conditions={'method': ['PATCH']})
mapper.connect("/images/{id}",
controller=images_resource,
action="remove",
conditions={'method': ['DELETE']})
mapper.connect("/images/{id}",
controller=images_resource,
action="upload",
conditions={'method': ['PUT']})
mapper.connect("/images/{id}/location",
controller=images_resource,
action="sync_loc",
conditions={'method': ['PUT']})
class API(wsgi.Router):
"""WSGI entry point for all Registry requests."""
def __init__(self, mapper):
mapper = mapper or wsgi.APIMapper()
init(mapper)
super(API, self).__init__(mapper)
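# A minimal sketch of driving these routes over plain HTTP (it assumes a
# glance-sync server on 127.0.0.1:9595, the v1 app mounted under /v1 by the
# paste urlmap, and a valid token; all of these are placeholders):
import httplib
import json
conn = httplib.HTTPConnection('127.0.0.1', 9595)
headers = {'Content-Type': 'application/json', 'x-auth-token': '<token>'}
conn.request('POST', '/v1/cascaded-eps', json.dumps({'regions': []}), headers)
print conn.getresponse().read()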

View File

@ -0,0 +1,95 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
from glance.common import exception
from glance.common import wsgi
import glance.openstack.common.log as logging
from glance.sync.base import SyncManagerV2 as sync_manager
from glance.sync import utils as utils
LOG = logging.getLogger(__name__)
class Controller(object):
def __init__(self):
self.sync_manager = sync_manager()
self.sync_manager.start()
def test(self, req):
return {'body': 'for test'}
def update(self, req, id, body):
LOG.debug(_('sync client start run UPDATE metadata operation for '
'image_id: %s') % id)
self.sync_manager.sync_image_metadata(id, req.context.auth_tok, 'SAVE',
**body)
return dict({'body': id})
def remove(self, req, id, body):
LOG.debug(_('sync client start run DELETE operation for image_id: %s'
% (id)))
self.sync_manager.sync_image_metadata(id, req.context.auth_tok,
'DELETE', **body)
return dict({'body': id})
def upload(self, req, id, body):
LOG.debug(_('sync client start run UPLOAD operation for image_id: %s'
% (id)))
self.sync_manager.sync_image_data(id, req.context.auth_tok, **body)
return dict({'body': id})
def sync_loc(self, req, id, body):
action = body['action']
locs = body['locations']
LOG.debug(_('sync client start run SYNC-LOC operation for image_id: %s'
% (id)))
if action == 'INSERT':
self.sync_manager.adding_locations(id, req.context.auth_tok, locs,
**body)
elif action == 'DELETE':
self.sync_manager.removing_locations(id,
req.context.auth_tok,
locs)
elif action == 'CLEAR':
self.sync_manager.clear_all_locations(id,
req.context.auth_tok,
locs)
return dict({'body': id})
def endpoints(self, req, body):
regions = req.params.get('regions', [])
if not regions:
regions = body.pop('regions', [])
if not isinstance(regions, list):
regions = [regions]
LOG.debug(_('get cascaded endpoints of user/tenant: %s')
% (req.context.user or req.context.tenant or 'NONE'))
return dict(eps=utils.get_endpoints(req.context.auth_tok,
req.context.tenant,
region_names=regions) or [])
def create_resource():
"""Images resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = wsgi.JSONResponseSerializer()
return wsgi.Resource(Controller(), deserializer, serializer)

View File

@ -0,0 +1,606 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import copy
import httplib
import Queue
import threading
import time
import eventlet
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
from glance.common import exception
from glance.common import utils
from glance.openstack.common import importutils
from glance.openstack.common import jsonutils
from glance.openstack.common import threadgroup
from glance.openstack.common import timeutils
import glance.openstack.common.log as logging
from glance.sync import utils as s_utils
from glance.sync.clients import Clients as clients
from glance.sync.store.driver import StoreFactory as s_factory
from glance.sync.store.location import LocationFactory as l_factory
import glance.sync.store.glance_store as glance_store
from glance.sync.task import TaskObject
from glance.sync.task import PeriodicTask
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('sync_strategy', 'glance.common.config', group='sync')
CONF.import_opt('task_retry_times', 'glance.common.config', group='sync')
CONF.import_opt('snapshot_timeout', 'glance.common.config', group='sync')
CONF.import_opt('snapshot_sleep_interval', 'glance.common.config',
group='sync')
def get_image_service():
return ImageService
def create_glance_client(auth_token, url):
return clients(auth_token).glance(url=url)
def create_self_glance_client(auth_token):
return create_glance_client(auth_token,
s_utils.get_cascading_endpoint_url())
def create_restful_client(auth_token, url):
pieces = urlparse.urlparse(url)
return _create_restful_client(auth_token, pieces.netloc)
def create_self_restful_client(auth_token):
return create_restful_client(auth_token,
s_utils.get_cascading_endpoint_url())
def _create_restful_client(auth_token, url):
server, port = url.split(':')
conn = httplib.HTTPConnection(server.encode(), port.encode())
image_service = get_image_service()
glance_client = image_service(conn, auth_token)
return glance_client
def get_mappings_from_image(auth_token, image_id):
client = create_self_glance_client(auth_token)
image = client.images.get(image_id)
locations = image.locations
if not locations:
return {}
return get_mappings_from_locations(locations)
def get_mappings_from_locations(locations):
mappings = {}
for loc in locations:
if s_utils.is_glance_location(loc['url']):
id = loc['metadata'].get('image_id')
if not id:
continue
ep_url = s_utils.create_ep_by_loc(loc)
mappings[ep_url] = id
# endpoints.append(utils.create_ep_by_loc(loc))
return mappings
class AuthenticationException(Exception):
pass
class ImageAlreadyPresentException(Exception):
pass
class ServerErrorException(Exception):
pass
class UploadException(Exception):
pass
class ImageService(object):
def __init__(self, conn, auth_token):
"""Initialize the ImageService.
conn: a httplib.HTTPConnection to the glance server
auth_token: authentication token to pass in the x-auth-token header
"""
self.auth_token = auth_token
self.conn = conn
def _http_request(self, method, url, headers, body,
ignore_result_body=False):
"""Perform an HTTP request against the server.
method: the HTTP method to use
url: the URL to request (not including server portion)
headers: headers for the request
body: body to send with the request
ignore_result_body: the body of the result will be ignored
Returns: a httplib response object
"""
if self.auth_token:
headers.setdefault('x-auth-token', self.auth_token)
LOG.debug(_('Request: %(method)s http://%(server)s:%(port)s'
'%(url)s with headers %(headers)s')
% {'method': method,
'server': self.conn.host,
'port': self.conn.port,
'url': url,
'headers': repr(headers)})
self.conn.request(method, url, body, headers)
response = self.conn.getresponse()
headers = self._header_list_to_dict(response.getheaders())
code = response.status
code_description = httplib.responses[code]
LOG.debug(_('Response: %(code)s %(status)s %(headers)s')
% {'code': code,
'status': code_description,
'headers': repr(headers)})
if code in [400, 500]:
raise ServerErrorException(response.read())
if code in [401, 403]:
raise AuthenticationException(response.read())
if code == 409:
raise ImageAlreadyPresentException(response.read())
if ignore_result_body:
# NOTE: because we are pipelining requests through a single HTTP
# connection, httplib requires that we read the response body
# before we can make another request. If the caller knows they
# don't care about the body, they can ask us to do that for them.
response.read()
return response
@staticmethod
def _header_list_to_dict(headers):
"""Expand a list of headers into a dictionary.
headers: a list of [(key, value), (key, value), (key, value)]
Returns: a dictionary representation of the list
"""
d = {}
for (header, value) in headers:
if header.startswith('x-image-meta-property-'):
prop = header.replace('x-image-meta-property-', '')
d.setdefault('properties', {})
d['properties'][prop] = value
else:
d[header.replace('x-image-meta-', '')] = value
return d
@staticmethod
def _dict_to_headers(d):
"""Convert a dictionary into one suitable for a HTTP request.
d: a dictionary
Returns: the same dictionary, with x-image-meta added to every key
"""
h = {}
for key in d:
if key == 'properties':
for subkey in d[key]:
if d[key][subkey] is None:
h['x-image-meta-property-%s' % subkey] = ''
else:
h['x-image-meta-property-%s' % subkey] = d[key][subkey]
else:
h['x-image-meta-%s' % key] = d[key]
return h
def add_location(self, image_uuid, path_val, metadata=None):
"""
add an actual location
"""
LOG.debug(_('call restful api to add location: url is %s' % path_val))
metadata = metadata or {}
url = '/v2/images/%s' % image_uuid
hdrs = {'Content-Type': 'application/openstack-images-v2.1-json-patch'}
body = []
value = {'url': path_val, 'metadata': metadata}
body.append({'op': 'add', 'path': '/locations/-', 'value': value})
return self._http_request('PATCH', url, hdrs, jsonutils.dumps(body))
def clear_locations(self, image_uuid):
"""
clear all the location infos, make the image status be 'queued'.
"""
LOG.debug(_('call restful api to clear image location: image id is %s'
% image_uuid))
url = '/v2/images/%s' % image_uuid
hdrs = {'Content-Type': 'application/openstack-images-v2.1-json-patch'}
body = []
body.append({'op': 'replace', 'path': '/locations', 'value': []})
return self._http_request('PATCH', url, hdrs, jsonutils.dumps(body))
class MetadataHelper(object):
def execute(self, auth_token, endpoint, action_name='CREATE',
image_id=None, **kwargs):
glance_client = create_glance_client(auth_token, endpoint)
if action_name.upper() == 'CREATE':
return self._do_create_action(glance_client, **kwargs)
if action_name.upper() == 'SAVE':
return self._do_save_action(glance_client, image_id, **kwargs)
if action_name.upper() == 'DELETE':
return self._do_delete_action(glance_client, image_id, **kwargs)
return None
@staticmethod
def _fetch_params(keys, **kwargs):
return tuple([kwargs.get(key, None) for key in keys])
def _do_create_action(self, glance_client, **kwargs):
body = kwargs['body']
new_image = glance_client.images.create(**body)
return new_image.id
def _do_save_action(self, glance_client, image_id, **kwargs):
keys = ['changes', 'removes', 'tags']
changes, removes, tags = self._fetch_params(keys, **kwargs)
if changes or removes:
glance_client.images.update(image_id,
remove_props=removes,
**changes)
if tags:
if tags.get('add', None):
added = tags.get('add')
for tag in added:
glance_client.image_tags.update(image_id, tag)
# not 'elif': the same save may both add and delete tags
if tags.get('delete', None):
removed = tags.get('delete')
for tag in removed:
glance_client.image_tags.delete(image_id, tag)
return glance_client.images.get(image_id)
def _do_delete_action(self, glance_client, image_id, **kwargs):
return glance_client.images.delete(image_id)
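# Illustrative call shapes for the helper above (endpoint, ids and values
# are hypothetical):
# helper.execute(token, 'http://10.0.0.10:9292', 'CREATE',
#                body={'name': 'demo', 'disk_format': 'qcow2',
#                      'container_format': 'bare'})
# helper.execute(token, 'http://10.0.0.10:9292', 'SAVE', image_id='<uuid>',
#                changes={'name': 'renamed'}, removes=['foo'],
#                tags={'add': ['t1']})
# helper.execute(token, 'http://10.0.0.10:9292', 'DELETE', image_id='<uuid>')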
_task_queue = Queue.Queue(maxsize=150)
class SyncManagerV2(object):
MAX_TASK_RETRY_TIMES = 1
def __init__(self):
global _task_queue
self.meta_helper = MetadataHelper()
self.location_factory = l_factory()
self.store_factory = s_factory()
self.task_queue = _task_queue
self.task_handler = None
self.unhandled_task_list = []
self.periodic_add_id_list = []
self.periodic_add_done = True
self._load_glance_store_cfg()
self.ks_client = clients().keystone()
self.create_new_periodic_task = False
def _load_glance_store_cfg(self):
glance_store.setup_glance_stores()
def sync_image_metadata(self, image_id, auth_token, action, **kwargs):
if not action or CONF.sync.sync_strategy == 'None':
return
kwargs['image_id'] = image_id
if action == 'SAVE':
self.task_queue.put_nowait(TaskObject.get_instance('meta_update',
kwargs))
elif action == 'DELETE':
self.task_queue.put_nowait(TaskObject.get_instance('meta_remove',
kwargs))
def sync_image_data(self, image_id, auth_token, eps=None, **kwargs):
if CONF.sync.sync_strategy == 'None':
return
kwargs['image_id'] = image_id
cascading_ep = s_utils.get_cascading_endpoint_url()
kwargs['cascading_ep'] = cascading_ep
self.task_queue.put_nowait(TaskObject.get_instance('sync', kwargs))
def adding_locations(self, image_id, auth_token, locs, **kwargs):
if CONF.sync.sync_strategy == 'None':
return
for loc in locs:
if s_utils.is_glance_location(loc['url']):
if s_utils.is_snapshot_location(loc):
snapshot_ep = s_utils.create_ep_by_loc(loc)
snapshot_id = s_utils.get_id_from_glance_loc(loc)
snapshot_client = create_glance_client(auth_token,
snapshot_ep)
snapshot_image = snapshot_client.images.get(snapshot_id)
_pre_check_time = timeutils.utcnow()
_timeout = CONF.sync.snapshot_timeout
while not timeutils.is_older_than(_pre_check_time,
_timeout):
if snapshot_image.status == 'active':
break
LOG.debug(_('Snapshot not active yet, wait for %i '
'seconds.')
% CONF.sync.snapshot_sleep_interval)
time.sleep(CONF.sync.snapshot_sleep_interval)
snapshot_image = snapshot_client.images.get(
snapshot_id)
if snapshot_image.status != 'active':
LOG.error(_('Timed out waiting for the snapshot to become active.'))
return
kwargs['image_id'] = image_id
kwargs['snapshot_ep'] = snapshot_ep
kwargs['snapshot_id'] = snapshot_id
snapshot_task = TaskObject.get_instance('snapshot', kwargs)
self.task_queue.put_nowait(snapshot_task)
else:
LOG.debug(_('patch a normal location %s to image %s'
% (loc['url'], image_id)))
input = {'image_id': image_id, 'location': loc}
self.task_queue.put_nowait(TaskObject.get_instance('patch',
input))
def removing_locations(self, image_id, auth_token, locs):
if CONF.sync.sync_strategy == 'None':
return
locs = filter(lambda loc: s_utils.is_glance_location(loc['url']), locs)
if not locs:
return
input = {'image_id': image_id, 'locations': locs}
remove_locs_task = TaskObject.get_instance('locs_remove', input)
self.task_queue.put_nowait(remove_locs_task)
def clear_all_locations(self, image_id, auth_token, locs):
locs = filter(lambda loc: not s_utils.is_snapshot_location(loc), locs)
self.removing_locations(image_id, auth_token, locs)
def create_new_cascaded_task(self, last_run_time=None):
LOG.debug(_('new_cascaded periodic task has been created.'))
glance_client = create_self_glance_client(self.ks_client.auth_token)
filters = {'status': 'active'}
image_list = glance_client.images.list(filters=filters)
input = {}
run_images = {}
cascading_ep = s_utils.get_cascading_endpoint_url()
input['cascading_ep'] = cascading_ep
input['image_id'] = 'ffffffff-ffff-ffff-ffff-ffffffffffff'
all_ep_urls = s_utils.get_endpoints()
for image in image_list:
glance_urls = [loc['url'] for loc in image.locations
if s_utils.is_glance_location(loc['url'])]
lack_ep_urls = s_utils.calculate_lack_endpoints(all_ep_urls,
glance_urls)
if lack_ep_urls:
image_core_props = s_utils.get_core_properties(image)
run_images[image.id] = {'body': image_core_props,
'locations': lack_ep_urls}
if not run_images:
LOG.debug(_('No images need to sync to new cascaded glances.'))
input['images'] = run_images
return TaskObject.get_instance('periodic_add', input,
last_run_time=last_run_time)
@staticmethod
def _fetch_params(keys, **kwargs):
return tuple([kwargs.get(key, None) for key in keys])
def _get_candidate_path(self, auth_token, from_ep, image_id,
scheme='file'):
g_client = create_glance_client(auth_token, from_ep)
image = g_client.images.get(image_id)
locs = image.locations or []
for loc in locs:
if s_utils.is_glance_location(loc['url']):
continue
if loc['url'].startswith(scheme):
if scheme == 'file':
return loc['url'][len('file://'):]
return loc['url']
return None
def _do_image_data_copy(self, s_ep, d_ep, from_image_id, to_image_id,
candidate_path=None):
from_scheme, to_scheme = glance_store.choose_best_store_schemes(s_ep,
d_ep)
store_driver = self.store_factory.get_instance(from_scheme['name'],
to_scheme['name'])
from_params = from_scheme['parameters']
from_params['image_id'] = from_image_id
to_params = to_scheme['parameters']
to_params['image_id'] = to_image_id
from_location = self.location_factory.get_instance(from_scheme['name'],
**from_params)
to_location = self.location_factory.get_instance(to_scheme['name'],
**to_params)
return store_driver.copy_to(from_location, to_location,
candidate_path=candidate_path)
def _patch_cascaded_location(self, auth_token, image_id,
cascaded_ep, cascaded_id, action=None):
self_restful_client = create_self_restful_client(auth_token)
path = s_utils.generate_glance_location(cascaded_ep, cascaded_id)
# add the auth_token, so this url can be visited, otherwise 404 error
path += '?auth_token=' + auth_token
metadata = {'image_id': cascaded_id}
if action:
metadata['action'] = action
self_restful_client.add_location(image_id, path, metadata)
def meta_update(self, auth_token, cascaded_ep, image_id, **kwargs):
return self.meta_helper.execute(auth_token, cascaded_ep, 'SAVE',
image_id, **kwargs)
def meta_delete(self, auth_token, cascaded_ep, image_id):
return self.meta_helper.execute(auth_token, cascaded_ep, 'DELETE',
image_id)
def sync_image(self, auth_token, copy_ep, cascaded_ep, copy_image_id,
cascading_image_id, **kwargs):
# First, create an image object with the cascading image's properties.
cascaded_id = self.meta_helper.execute(auth_token, cascaded_ep,
**kwargs)
try:
c_path = self._get_candidate_path(auth_token, copy_ep,
copy_image_id)
# execute copy operation to copy the image data.
copy_image_loc = self._do_image_data_copy(copy_ep,
cascaded_ep,
copy_image_id,
cascaded_id,
candidate_path=c_path)
# patch the copied image_data to the image
glance_client = create_restful_client(auth_token, cascaded_ep)
glance_client.add_location(cascaded_id, copy_image_loc)
# patch the glance location to cascading glance
msg = _("patch glance location to cascading image, with cascaded "
"endpoint : %s, cascaded id: %s, cascading image id: %s." %
(cascaded_ep, cascaded_id, cascading_image_id))
LOG.debug(msg)
self._patch_cascaded_location(auth_token,
cascading_image_id,
cascaded_ep,
cascaded_id,
action='upload')
return cascaded_id
except exception.SyncStoreCopyError as e:
LOG.error(_("Exception occurs when syncing store copy."))
raise exception.SyncServiceOperationError(reason=e.msg)
def do_snapshot(self, auth_token, snapshot_ep, cascaded_ep,
snapshot_image_id, cascading_image_id, **kwargs):
return self.sync_image(auth_token, snapshot_ep, cascaded_ep,
snapshot_image_id, cascading_image_id, **kwargs)
def patch_location(self, image_id, cascaded_id, auth_token, cascaded_ep,
location):
g_client = create_glance_client(auth_token, cascaded_ep)
cascaded_image = g_client.images.get(cascaded_id)
glance_client = create_restful_client(auth_token, cascaded_ep)
try:
glance_client.add_location(cascaded_id, location['url'])
if cascaded_image.status == 'queued':
self._patch_cascaded_location(auth_token,
image_id,
cascaded_ep,
cascaded_id,
action='patch')
except Exception:
LOG.exception(_('Failed to patch location %s to image %s.')
% (location['url'], image_id))
def remove_loc(self, cascaded_id, auth_token, cascaded_ep):
glance_client = create_glance_client(auth_token, cascaded_ep)
glance_client.images.delete(cascaded_id)
def start(self):
# launch a new thread to consume the task queue.
_thread = threading.Thread(target=self.tasks_handle)
_thread.setDaemon(True)
_thread.start()
def tasks_handle(self):
while True:
_task = self.task_queue.get()
if not isinstance(_task, TaskObject):
LOG.error(_('invalid task type.'))
continue
LOG.debug(_('Task starts to run, task id is %s') % _task.id)
_task.start_time = timeutils.strtime()
self.unhandled_task_list.append(copy.deepcopy(_task))
eventlet.spawn(_task.execute, self, self.ks_client.auth_token)
def handle_tasks(self, task_result):
t_image_id = task_result.get('image_id')
t_type = task_result.get('type')
t_start_time = task_result.get('start_time')
t_status = task_result.get('status')
handling_tasks = filter(lambda t: t.image_id == t_image_id and
t.start_time == t_start_time,
self.unhandled_task_list)
if not handling_tasks or len(handling_tasks) > 1:
LOG.error(_('The task does not exist or is duplicated, cannot '
'handle it. Info is image: %(id)s, op_type: %(type)s, '
'run time: %(time)s'
% {'id': t_image_id,
'type': t_type,
'time': t_start_time}
))
return
task = handling_tasks[0]
self.unhandled_task_list.remove(task)
if isinstance(task, PeriodicTask):
LOG.debug(_('The periodic task finished, with op %(type)s '
'run at time: %(start_time)s, the status is '
'%(status)s.' %
{'type': t_type,
'start_time': t_start_time,
'status': t_status
}))
else:
if t_status == 'terminal':
LOG.debug(_('The task executed successfully for image:'
'%(image_id)s with op %(type)s, which runs '
'at time: %(start_time)s' %
{'image_id': t_image_id,
'type': t_type,
'start_time': t_start_time
}))
elif t_status == 'param_error':
LOG.error(_('The task failed due to a parameter error. Image:'
'%(image_id)s with op %(type)s, which runs '
'at time: %(start_time)s' %
{'image_id': t_image_id,
'type': t_type,
'start_time': t_start_time
}))
elif t_status == 'error':
LOG.error(_('The task failed to execute. Detail info is: '
'%(image_id)s with op %(op_type)s run_time:'
'%(start_time)s' %
{'image_id': t_image_id,
'op_type': t_type,
'start_time': t_start_time
}))

View File

@ -0,0 +1,46 @@
from oslo.config import cfg
sync_client_opts = [
cfg.StrOpt('sync_client_protocol', default='http',
help=_('The protocol to use for communication with the '
'sync server. Either http or https.')),
cfg.StrOpt('sync_client_key_file',
help=_('The path to the key file to use in SSL connections '
'to the sync server.')),
cfg.StrOpt('sync_client_cert_file',
help=_('The path to the cert file to use in SSL connections '
'to the sync server.')),
cfg.StrOpt('sync_client_ca_file',
help=_('The path to the certifying authority cert file to '
'use in SSL connections to the sync server.')),
cfg.BoolOpt('sync_client_insecure', default=False,
help=_('When using SSL in connections to the sync server, '
'do not require validation via a certifying '
'authority.')),
cfg.IntOpt('sync_client_timeout', default=600,
help=_('The period of time, in seconds, that the API server '
'will wait for a sync request to complete. A '
'value of 0 implies no timeout.')),
]
sync_client_ctx_opts = [
cfg.BoolOpt('sync_use_user_token', default=True,
help=_('Whether to pass through the user token when '
'making requests to the sync.')),
cfg.StrOpt('sync_admin_user', secret=True,
help=_("The administrator's user name.")),
cfg.StrOpt('sync_admin_password', secret=True,
help=_("The administrator's password.")),
cfg.StrOpt('sync_admin_tenant_name', secret=True,
help=_('The tenant name of the administrative user.')),
cfg.StrOpt('sync_auth_url',
help=_('The URL to the keystone service.')),
cfg.StrOpt('sync_auth_strategy', default='noauth',
help=_('The strategy to use for authentication.')),
cfg.StrOpt('sync_auth_region',
help=_('The region for the authentication service.')),
]
CONF = cfg.CONF
CONF.register_opts(sync_client_opts)
CONF.register_opts(sync_client_ctx_opts)

View File

@ -0,0 +1,124 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import os
from oslo.config import cfg
from glance.common import exception
from glance.openstack.common import jsonutils
import glance.openstack.common.log as logging
from glance.sync.client.v1 import client
CONF = cfg.CONF
CONF.import_opt('sync_server_host', 'glance.common.config')
CONF.import_opt('sync_server_port', 'glance.common.config')
sync_client_ctx_opts = [
cfg.BoolOpt('sync_send_identity_headers', default=False,
help=_("Whether to pass through headers containing user "
"and tenant information when making requests to "
"the sync. This allows the sync to use the "
"context middleware without the keystoneclients' "
"auth_token middleware, removing calls to the keystone "
"auth service. It is recommended that when using this "
"option, secure communication between glance api and "
"glance sync is ensured by means other than "
"auth_token middleware.")),
]
CONF.register_opts(sync_client_ctx_opts)
_sync_client = 'glance.sync.client'
CONF.import_opt('sync_client_protocol', _sync_client)
CONF.import_opt('sync_client_key_file', _sync_client)
CONF.import_opt('sync_client_cert_file', _sync_client)
CONF.import_opt('sync_client_ca_file', _sync_client)
CONF.import_opt('sync_client_insecure', _sync_client)
CONF.import_opt('sync_client_timeout', _sync_client)
CONF.import_opt('sync_use_user_token', _sync_client)
CONF.import_opt('sync_admin_user', _sync_client)
CONF.import_opt('sync_admin_password', _sync_client)
CONF.import_opt('sync_admin_tenant_name', _sync_client)
CONF.import_opt('sync_auth_url', _sync_client)
CONF.import_opt('sync_auth_strategy', _sync_client)
CONF.import_opt('sync_auth_region', _sync_client)
CONF.import_opt('metadata_encryption_key', 'glance.common.config')
_CLIENT_CREDS = None
_CLIENT_HOST = None
_CLIENT_PORT = None
_CLIENT_KWARGS = {}
def get_sync_client(cxt):
global _CLIENT_CREDS, _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
kwargs = _CLIENT_KWARGS.copy()
if CONF.sync_use_user_token:
kwargs['auth_tok'] = cxt.auth_tok
if _CLIENT_CREDS:
kwargs['creds'] = _CLIENT_CREDS
if CONF.sync_send_identity_headers:
identity_headers = {
'X-User-Id': cxt.user,
'X-Tenant-Id': cxt.tenant,
'X-Roles': ','.join(cxt.roles),
'X-Identity-Status': 'Confirmed',
'X-Service-Catalog': jsonutils.dumps(cxt.service_catalog),
}
kwargs['identity_headers'] = identity_headers
return client.SyncClient(_CLIENT_HOST, _CLIENT_PORT, **kwargs)
def configure_sync_client():
global _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
host, port = CONF.sync_server_host, CONF.sync_server_port
_CLIENT_HOST = host
_CLIENT_PORT = port
_METADATA_ENCRYPTION_KEY = CONF.metadata_encryption_key
_CLIENT_KWARGS = {
'use_ssl': CONF.sync_client_protocol.lower() == 'https',
'key_file': CONF.sync_client_key_file,
'cert_file': CONF.sync_client_cert_file,
'ca_file': CONF.sync_client_ca_file,
'insecure': CONF.sync_client_insecure,
'timeout': CONF.sync_client_timeout,
}
if not CONF.sync_use_user_token:
configure_sync_admin_creds()
def configure_sync_admin_creds():
global _CLIENT_CREDS
if CONF.sync_auth_url or os.getenv('OS_AUTH_URL'):
strategy = 'keystone'
else:
strategy = CONF.sync_auth_strategy
_CLIENT_CREDS = {
'user': CONF.sync_admin_user,
'password': CONF.sync_admin_password,
'username': CONF.sync_admin_user,
'tenant': CONF.sync_admin_tenant_name,
'auth_url': CONF.sync_auth_url,
'strategy': strategy,
'region': CONF.sync_auth_region,
}
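# A minimal usage sketch (the request context is hypothetical; it only
# needs the attributes read above, such as auth_tok, user, tenant and
# roles):
# configure_sync_client()
# sync_client = get_sync_client(request_context)
# sync_client.sync_data('<image-uuid>')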

View File

@ -0,0 +1,106 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from glance.common.client import BaseClient
from glance.openstack.common import jsonutils
import glance.openstack.common.log as logging
LOG = logging.getLogger(__name__)
class SyncClient(BaseClient):
DEFAULT_PORT = 9595
def __init__(self, host=None, port=DEFAULT_PORT, identity_headers=None,
**kwargs):
self.identity_headers = identity_headers
BaseClient.__init__(self, host, port, configure_via_auth=False,
**kwargs)
def do_request(self, method, action, **kwargs):
try:
kwargs['headers'] = kwargs.get('headers', {})
res = super(SyncClient, self).do_request(method, action, **kwargs)
status = res.status
request_id = res.getheader('x-openstack-request-id')
msg = (_("Sync request %(method)s %(action)s HTTP %(status)s"
" request id %(request_id)s") %
{'method': method, 'action': action,
'status': status, 'request_id': request_id})
LOG.debug(msg)
except Exception as exc:
exc_name = exc.__class__.__name__
LOG.info(_("Sync client request %(method)s %(action)s "
"raised %(exc_name)s"),
{'method': method, 'action': action,
'exc_name': exc_name})
raise
return res
def _add_common_params(self, id, kwargs):
pass
def update_image_metadata(self, image_id, **kwargs):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps(kwargs)
res = self.do_request("PATCH", "/v1/images/%s" % (image_id), body=body,
headers=headers)
return res
def remove_image(self, image_id, **kwargs):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps(kwargs)
res = self.do_request("DELETE", "/v1/images/%s" %
(image_id), body=body, headers=headers)
return res
def sync_data(self, image_id, **kwargs):
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps(kwargs)
res = self.do_request("PUT", "/v1/images/%s" % (image_id), body=body,
headers=headers)
return res
def sync_locations(self, image_id, action=None, locs=None, **kwargs):
headers = {
'Content-Type': 'application/json',
}
kwargs['action'] = action
kwargs['locations'] = locs
body = jsonutils.dumps(kwargs)
res = self.do_request("PUT", "/v1/images/%s/location" % (image_id),
body=body, headers=headers)
return res
def get_cascaded_endpoints(self, regions=None):
# avoid a mutable default argument; treat None as 'all regions'
headers = {
'Content-Type': 'application/json',
}
body = jsonutils.dumps({'regions': regions or []})
res = self.do_request('POST', '/v1/cascaded-eps', body=body,
headers=headers)
return jsonutils.loads(res.read())['eps']
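# A minimal usage sketch (host, port and token are placeholders; auth_tok
# is accepted by glance.common.client.BaseClient, which SyncClient extends):
client = SyncClient(host='127.0.0.1', port=9595, auth_tok='<token>')
client.update_image_metadata('<image-uuid>', changes={'name': 'renamed'},
removes=[], tags={'add': ['demo']})
client.sync_locations('<image-uuid>', action='INSERT',
locs=[{'url': 'http://10.0.0.10:9292/v2/images/<cascaded-uuid>',
'metadata': {}}])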

View File

@ -0,0 +1,89 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from oslo.config import cfg
from keystoneclient.v2_0 import client as ksclient
import glance.openstack.common.log as logging
from glanceclient.v2 import client as gclient2
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class Clients(object):
def __init__(self, auth_token=None, tenant_id=None):
self._keystone = None
self._glance = None
self._cxt_token = auth_token
self._tenant_id = tenant_id
self._ks_conf = cfg.CONF.keystone_authtoken
@property
def auth_token(self):
# NOTE: a property getter takes no extra arguments; the old 'token'
# parameter could never be supplied and always defaulted to None.
return self.keystone().auth_token
@property
def ks_url(self):
protocol = self._ks_conf.auth_protocol or 'http'
auth_host = self._ks_conf.auth_host or '127.0.0.1'
auth_port = self._ks_conf.auth_port or '35357'
return protocol + '://' + auth_host + ':' + str(auth_port) + '/v2.0/'
def url_for(self, **kwargs):
return self.keystone().service_catalog.url_for(**kwargs)
def get_urls(self, **kwargs):
return self.keystone().service_catalog.get_urls(**kwargs)
def keystone(self):
if self._keystone:
return self._keystone
if self._cxt_token and self._tenant_id:
creds = {'token': self._cxt_token,
'auth_url': self.ks_url,
'project_id': self._tenant_id
}
else:
creds = {'username': self._ks_conf.admin_user,
'password': self._ks_conf.admin_password,
'auth_url': self.ks_url,
'project_name': self._ks_conf.admin_tenant_name}
try:
self._keystone = ksclient.Client(**creds)
except Exception as e:
LOG.error(_('create keystone client error: reason: %s') % (e))
return None
return self._keystone
def glance(self, auth_token=None, url=None):
if self._glance:
return self._glance
args = {
'token': auth_token or self.auth_token,
'endpoint': url or self.url_for(service_type='image')
}
self._glance = gclient2.Client(**args)
return self._glance
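# A minimal usage sketch (token and tenant are placeholders; with no token
# the keystone() call falls back to the [keystone_authtoken] admin
# credentials from the service configuration):
helper = Clients(auth_token='<token>', tenant_id='<tenant-id>')
glance_client = helper.glance(url='http://127.0.0.1:9292')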

View File

@ -0,0 +1,33 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
from concurrent.futures import ThreadPoolExecutor
import glance.openstack.common.log as logging
LOG = logging.getLogger(__name__)
class ThreadPool(object):
def __init__(self):
self.pool = ThreadPoolExecutor(128)
def execute(self, func, *args, **kwargs):
LOG.info(_('execute %s in a thread pool') % (func.__name__))
self.pool.submit(func, *args, **kwargs)
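# A minimal usage sketch: fire-and-forget execution of any callable.
pool = ThreadPool()
pool.execute(LOG.info, _('ran inside the sync thread pool'))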

View File

View File

@ -0,0 +1,171 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""
A simple filesystem-backed store
"""
import logging
import os
import sys
from oslo.config import cfg
import pxssh
import pexpect
from glance.common import exception
import glance.sync.store.driver
import glance.sync.store.location
from glance.sync.store.location import Location
from glance.sync import utils as s_utils
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('scp_copy_timeout', 'glance.common.config', group='sync')
def _login_ssh(host, passwd):
child_ssh = pexpect.spawn('ssh -p 22 %s' % (host))
child_ssh.logfile = sys.stdout
login_flag = True
while True:
ssh_index = child_ssh.expect(['.yes/no.', '.assword:.',
pexpect.TIMEOUT])
if ssh_index == 0:
child_ssh.sendline('yes')
elif ssh_index == 1:
child_ssh.sendline(passwd)
break
else:
login_flag = False
break
if not login_flag:
return None
return child_ssh
def _get_ssh(hostname, username, password):
s = pxssh.pxssh()
s.login(hostname, username, password, original_prompt='[#$>]')
s.logfile = sys.stdout
return s
class LocationCreator(glance.sync.store.location.LocationCreator):
def __init__(self):
self.scheme = 'file'
def create(self, **kwargs):
image_id = kwargs.get('image_id')
image_file_name = kwargs.get('image_name', None) or image_id
datadir = kwargs.get('datadir')
path = os.path.join(datadir, str(image_file_name))
login_user = kwargs.get('login_user')
login_password = kwargs.get('login_password')
host = kwargs.get('host')
store_specs = {'scheme': self.scheme, 'path': path, 'host': host,
'login_user': login_user,
'login_password': login_password}
return Location(self.scheme, StoreLocation, image_id=image_id,
store_specs=store_specs)
class StoreLocation(glance.sync.store.location.StoreLocation):
def process_specs(self):
self.scheme = self.specs.get('scheme', 'file')
self.path = self.specs.get('path')
self.host = self.specs.get('host')
self.login_user = self.specs.get('login_user')
self.login_password = self.specs.get('login_password')
class Store(glance.sync.store.driver.Store):
def copy_to(self, from_location, to_location, candidate_path=None):
from_store_loc = from_location.store_location
to_store_loc = to_location.store_location
if from_store_loc.host == to_store_loc.host and \
from_store_loc.path == to_store_loc.path:
LOG.info(_('The from_loc is the same as to_loc, no need to copy. The '
'host:path is %s:%s') % (from_store_loc.host,
from_store_loc.path))
return 'file://%s' % to_store_loc.path
from_host = r"""{username}@{host}""".format(
username=from_store_loc.login_user,
host=from_store_loc.host)
to_host = r"""{username}@{host}""".format(
username=to_store_loc.login_user,
host=to_store_loc.host)
to_path = r"""{to_host}:{path}""".format(to_host=to_host,
path=to_store_loc.path)
copy_path = from_store_loc.path
try:
from_ssh = _get_ssh(from_store_loc.host,
from_store_loc.login_user,
from_store_loc.login_password)
except Exception:
raise exception.SyncStoreCopyError(reason="ssh login failed.")
from_ssh.sendline('ls %s' % copy_path)
from_ssh.prompt()
if 'cannot access' in from_ssh.before or \
'No such file' in from_ssh.before:
if candidate_path:
from_ssh.sendline('ls %s' % candidate_path)
from_ssh.prompt()
if 'cannot access' not in from_ssh.before and \
'No such file' not in from_ssh.before:
copy_path = candidate_path
else:
msg = _("the image path for copy to is not exists, file copy"
"failed: path is %s" % (copy_path))
raise exception.SyncStoreCopyError(reason=msg)
from_ssh.sendline('scp -P 22 %s %s' % (copy_path, to_path))
while True:
scp_index = from_ssh.expect(['.yes/no.', '.assword:.',
pexpect.TIMEOUT])
if scp_index == 0:
from_ssh.sendline('yes')
from_ssh.prompt()
elif scp_index == 1:
from_ssh.sendline(to_store_loc.login_password)
from_ssh.prompt(timeout=CONF.sync.scp_copy_timeout)
break
else:
msg = _("scp commond execute failed, with copy_path %s and "
"to_path %s" % (copy_path, to_path))
raise exception.SyncStoreCopyError(reason=msg)
break
if from_ssh:
from_ssh.logout()
return 'file://%s' % to_store_loc.path
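# Illustrative flow of copy_to() above (hosts and paths are hypothetical):
# it logs in to the source host over ssh, falls back to candidate_path when
# the primary path is missing, then runs one scp such as
#   scp -P 22 /var/lib/glance/images/<uuid> glance@10.0.0.20:/var/lib/glance/images/<uuid>
# and finally returns 'file:///var/lib/glance/images/<uuid>' as the new
# location URL for the destination glance.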

View File

@ -0,0 +1,63 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
"""Base class for all storage backends"""
from oslo.config import cfg
from stevedore import extension
from glance.common import exception
import glance.openstack.common.log as logging
from glance.openstack.common.gettextutils import _
from glance.openstack.common import importutils
from glance.openstack.common import strutils
LOG = logging.getLogger(__name__)
class StoreFactory(object):
SYNC_STORE_NAMESPACE = "glance.sync.store.driver"
def __init__(self):
self._stores = {}
self._load_store_drivers()
def _load_store_drivers(self):
extension_manager = extension.ExtensionManager(
namespace=self.SYNC_STORE_NAMESPACE,
invoke_on_load=True,
)
for ext in extension_manager:
if ext.name in self._stores:
continue
ext.obj.name = ext.name
self._stores[ext.name] = ext.obj
def get_instance(self, from_scheme='filesystem', to_scheme=None):
_store_driver = self._stores.get(from_scheme)
if to_scheme and to_scheme != from_scheme and _store_driver:
func_name = 'copy_to_%s' % to_scheme
if not getattr(_store_driver, func_name, None):
return None
return _store_driver
class Store(object):
def copy_to(self, source_location, dest_location, candidate_path=None):
pass

View File

@ -0,0 +1,111 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import fnmatch
import operator
import os
from oslo.config import cfg
import yaml
from glance.sync import utils as s_utils
OPTS = [
cfg.StrOpt('glance_store_cfg_file',
default="glance_store.yaml",
help="Configuration file for glance's store location "
"definition."
),
]
PRIOR_STORE_SCHEMES = ['filesystem', 'http', 'swift']
cfg.CONF.register_opts(OPTS)
def choose_best_store_schemes(source_endpoint, dest_endpoint):
global GLANCE_STORES
source_host = s_utils.get_host_from_ep(source_endpoint)
dest_host = s_utils.get_host_from_ep(dest_endpoint)
source_store = GLANCE_STORES.get_glance_store(source_host)
dest_store = GLANCE_STORES.get_glance_store(dest_host)
tmp_dict = {}
for s_scheme in source_store.schemes:
s_scheme_name = s_scheme['name']
for d_scheme in dest_store.schemes:
d_scheme_name = d_scheme['name']
if s_scheme_name == d_scheme_name:
tmp_dict[s_scheme_name] = (s_scheme, d_scheme)
if tmp_dict:
return tmp_dict[sorted(tmp_dict, key=lambda scheme:
PRIOR_STORE_SCHEMES.index(scheme))[0]]
return (source_store.schemes[0], dest_store.schemes[0])
class GlanceStore(object):
def __init__(self, service_ip, name, schemes):
self.service_ip = service_ip
self.name = name
self.schemes = schemes
class ImageObject(object):
def __init__(self, image_id, glance_store):
self.image_id = image_id
self.glance_store = glance_store
class GlanceStoreManager(object):
def __init__(self, cfg):
self.cfg = cfg
self.g_stores = []
cfg_items = cfg['glances']
for item in cfg_items:
self.g_stores.append(GlanceStore(item['service_ip'],
item['name'],
item['schemes']))
def get_glance_store(self, service_ip):
for g_store in self.g_stores:
if service_ip == g_store.service_ip:
return g_store
return None
def generate_Image_obj(self, image_id, endpoint):
g_store = self.get_glance_store(s_utils.get_host_from_ep(endpoint))
return ImageObject(image_id, g_store)
GLANCE_STORES = None
def setup_glance_stores():
global GLANCE_STORES
cfg_file = cfg.CONF.glance_store_cfg_file
if not os.path.exists(cfg_file):
cfg_file = cfg.CONF.find_file(cfg_file)
with open(cfg_file) as fap:
data = fap.read()
locs_cfg = yaml.safe_load(data)
GLANCE_STORES = GlanceStoreManager(locs_cfg)
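# A hypothetical glance_store.yaml sketch, shown as the structure that
# yaml.safe_load() hands to GlanceStoreManager; the scheme 'parameters' keys
# mirror the filesystem LocationCreator in this commit, and all values are
# placeholders:
example_cfg = {'glances': [
{'name': 'cascaded-1',
'service_ip': '10.0.0.10',
'schemes': [{'name': 'filesystem',
'parameters': {'datadir': '/var/lib/glance/images/',
'host': '10.0.0.10',
'login_user': 'glance',
'login_password': '<password>'}}]}]}
manager = GlanceStoreManager(example_cfg)
assert manager.get_glance_store('10.0.0.10').name == 'cascaded-1'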

View File

@ -0,0 +1,95 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import logging
import urlparse
from stevedore import extension
LOG = logging.getLogger(__name__)
class LocationCreator(object):
def __init__(self):
self.scheme = None
def create(self, **kwargs):
pass
class Location(object):
"""
Class describing the location of an image that Glance knows about
"""
def __init__(self, store_name, store_location_class,
uri=None, image_id=None, store_specs=None):
"""
Create a new Location object.
:param store_name: The string identifier/scheme of the storage backend
:param store_location_class: The store location class to use
for this location instance.
:param image_id: The identifier of the image in whatever storage
backend is used.
:param uri: Optional URI to construct location from
:param store_specs: Dictionary of information about the location
of the image that is dependent on the backend
store
"""
self.store_name = store_name
self.image_id = image_id
self.store_specs = store_specs or {}
self.store_location = store_location_class(self.store_specs)
class StoreLocation(object):
"""
Base class that must be implemented by each store
"""
def __init__(self, store_specs):
self.specs = store_specs
if self.specs:
self.process_specs()
class LocationFactory(object):
SYNC_LOCATION_NAMESPACE = "glance.sync.store.location"
def __init__(self):
self._locations = {}
self._load_locations()
def _load_locations(self):
extension_manager = extension.ExtensionManager(
namespace=self.SYNC_LOCATION_NAMESPACE,
invoke_on_load=True,
)
for ext in extension_manager:
if ext.name in self._locations:
continue
ext.obj.name = ext.name
self._locations[ext.name] = ext.obj
def get_instance(self, scheme, **kwargs):
loc_creator = self._locations.get(scheme, None)
return loc_creator.create(**kwargs)
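# A minimal usage sketch (the scheme name must match a stevedore entry
# point under 'glance.sync.store.location'; the kwargs mirror the
# filesystem LocationCreator and every value is a placeholder):
factory = LocationFactory()
loc = factory.get_instance('filesystem', image_id='<uuid>',
datadir='/var/lib/glance/images/', host='10.0.0.10',
login_user='glance', login_password='<password>')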

View File

@ -0,0 +1,349 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import threading
import Queue
import uuid
import eventlet
from oslo.config import cfg
import glance.openstack.common.log as logging
from glance.openstack.common import timeutils
from glance.sync import utils as s_utils
LOG = logging.getLogger(__name__)
snapshot_opt = [
cfg.ListOpt('snapshot_region_names',
default=[],
help=_("for what regions the snapshot sync to"),
deprecated_opts=[cfg.DeprecatedOpt('snapshot_region_names',
group='DEFAULT')]),
]
CONF = cfg.CONF
CONF.register_opts(snapshot_opt)
class TaskObject(object):
def __init__(self, type, input, retry_times=0):
self.id = str(uuid.uuid4())
self.type = type
self.input = input
self.image_id = self.input.get('image_id')
self.status = 'new'
self.retry_times = retry_times
self.start_time = None
@classmethod
def get_instance(cls, type, input, **kwargs):
_type_cls_dict = {'meta_update': MetaUpdateTask,
'meta_remove': MetaDeleteTask,
'sync': ImageActiveTask,
'snapshot': PatchSnapshotLocationTask,
'patch': PatchLocationTask,
'locs_remove': RemoveLocationsTask,
'periodic_add': ChkNewCascadedsPeriodicTask}
if _type_cls_dict.get(type):
return _type_cls_dict[type](input, **kwargs)
return None
def _handle_result(self, sync_manager):
return sync_manager.handle_tasks({'image_id': self.image_id,
'type': self.type,
'start_time': self.start_time,
'status': self.status
})
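# Illustrative dispatch (hypothetical input): TaskObject.get_instance('patch',
# {'image_id': '<uuid>', 'location': {'url': '<url>', 'metadata': {}}})
# returns a PatchLocationTask, whose create_green_threads() spawns one
# green thread per cascaded glance that already holds the image.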
def execute(self, sync_manager, auth_token):
if not self.checkInput():
self.status = 'param_error'
LOG.error(_('the input content is not valid: %s.') % self.input)
return self._handle_result(sync_manager)
try:
self.status = 'running'
green_threads = self.create_green_threads(sync_manager, auth_token)
for gt in green_threads:
gt.wait()
except Exception as e:
msg = _("Unable to execute task of image %(image_id)s: %(e)s") % \
{'image_id': self.image_id, 'e': unicode(e)}
LOG.exception(msg)
self.status = 'error'
else:
self.status = 'terminal'
return self._handle_result(sync_manager)
def checkInput(self):
if not self.input.pop('image_id', None):
LOG.warn(_('No cascading image_id specified.'))
return False
return self.do_checkInput()
class MetaUpdateTask(TaskObject):
def __init__(self, input):
super(MetaUpdateTask, self).__init__('meta_update', input)
def do_checkInput(self):
params = self.input
changes = params.get('changes')
removes = params.get('removes')
tags = params.get('tags')
if not changes and not removes and not tags:
LOG.warn(_('No changes, removes, or tags for the glance image.'))
return True
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_image(auth_token,
self.image_id)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.meta_update,
auth_token,
cascaded_ep,
image_id=cascaded_id,
**self.input))
return green_threads
class MetaDeleteTask(TaskObject):
def __init__(self, input):
super(MetaDeleteTask, self).__init__('meta_remove', input)
def do_checkInput(self):
self.locations = self.input.get('locations')
return self.locations is not None
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_locations(self.locations)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.meta_delete,
auth_token,
cascaded_ep,
image_id=cascaded_id))
return green_threads
class ImageActiveTask(TaskObject):
def __init__(self, input):
super(ImageActiveTask, self).__init__('sync', input)
def do_checkInput(self):
image_data = self.input.get('body')
self.cascading_endpoint = self.input.get('cascading_ep')
return image_data and self.cascading_endpoint
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_eps = s_utils.get_endpoints(auth_token)
for cascaded_ep in cascaded_eps:
green_threads.append(eventlet.spawn(sync_manager.sync_image,
auth_token,
self.cascading_endpoint,
cascaded_ep,
self.image_id,
self.image_id,
**self.input))
return green_threads
class PatchSnapshotLocationTask(TaskObject):
def __init__(self, input):
super(PatchSnapshotLocationTask, self).__init__('snapshot', input)
def do_checkInput(self):
image_metadata = self.input.get('body')
self.snapshot_endpoint = self.input.pop('snapshot_ep', None)
self.snapshot_id = self.input.pop('snapshot_id', None)
return image_metadata and self.snapshot_endpoint and self.snapshot_id
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
_region_names = CONF.snapshot_region_names
cascaded_mapping = s_utils.get_endpoints(auth_token,
region_names=_region_names)
try:
if self.snapshot_endpoint in cascaded_mapping:
cascaded_mapping.remove(self.snapshot_endpoint)
except TypeError:
pass
for cascaded_ep in cascaded_mapping:
green_threads.append(eventlet.spawn(sync_manager.do_snapshot,
auth_token,
self.snapshot_endpoint,
cascaded_ep,
self.snapshot_id,
self.image_id,
**self.input))
return green_threads
class PatchLocationTask(TaskObject):
def __init__(self, input):
super(PatchLocationTask, self).__init__('patch', input)
def do_checkInput(self):
self.location = self.input.get('location')
return self.location is not None
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_image(auth_token,
self.image_id)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.patch_location,
self.image_id,
cascaded_id,
auth_token,
cascaded_ep,
self.location))
return green_threads
class RemoveLocationsTask(TaskObject):
def __init__(self, input):
super(RemoveLocationsTask, self).__init__('locs_remove', input)
def do_checkInput(self):
self.locations = self.input.get('locations')
return self.locations is not None
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
cascaded_mapping = s_utils.get_mappings_from_locations(self.locations)
for cascaded_ep in cascaded_mapping:
cascaded_id = cascaded_mapping[cascaded_ep]
green_threads.append(eventlet.spawn(sync_manager.remove_loc,
cascaded_id,
auth_token,
cascaded_ep))
return green_threads
class PeriodicTask(TaskObject):
MAX_SLEEP_SECONDS = 15
def __init__(self, type, input, interval, last_run_time, run_immediately):
super(PeriodicTask, self).__init__(type, input)
self.interval = interval
self.last_run_time = last_run_time
self.run_immediately = run_immediately
def do_checkInput(self):
if not self.interval or self.interval < 0:
LOG.error(_('The periodic task interval is invalid.'))
return False
return True
def ready(self):
# first time to run
if self.last_run_time is None:
self.last_run_time = timeutils.strtime()
return self.run_immediately
return timeutils.is_older_than(self.last_run_time, self.interval)
def execute(self, sync_manager, auth_token):
while not self.ready():
LOG.debug(_('the periodic task is not ready yet, sleeping a while. '
'current_start_time is %s, last_run_time is %s, and '
'the interval is %i.' % (self.start_time,
self.last_run_time,
self.interval)))
_max_sleep_time = self.MAX_SLEEP_SECONDS
# cap the polling sleep at MAX_SLEEP_SECONDS
eventlet.sleep(seconds=min(self.interval / 10, _max_sleep_time))
super(PeriodicTask, self).execute(sync_manager, auth_token)
class ChkNewCascadedsPeriodicTask(PeriodicTask):
def __init__(self, input, interval=60, last_run_time=None,
run_immediately=False):
super(ChkNewCascadedsPeriodicTask, self).__init__('periodic_add',
input, interval,
last_run_time,
run_immediately)
LOG.debug(_('create ChkNewCascadedsPeriodicTask.'))
def do_checkInput(self):
self.images = self.input.get('images')
self.cascading_endpoint = self.input.get('cascading_ep')
if self.images is None or not self.cascading_endpoint:
return False
return super(ChkNewCascadedsPeriodicTask, self).do_checkInput()
def _still_need_synced(self, cascaded_ep, image_id, auth_token):
g_client = s_utils.create_self_glance_client(auth_token)
try:
image = g_client.images.get(image_id)
except Exception:
LOG.warn(_('The add-cascaded periodic task found the image '
'has been deleted; no need to sync. image id is %s.' % image_id))
return False
else:
if image.status != 'active':
LOG.warn(_('The add-cascaded periodic task found the image '
'status is not active; no need to sync. '
'image id is %s.' % image_id))
return False
ep_list = [loc['url'] for loc in image.locations
if s_utils.is_glance_location(loc['url'])]
return not s_utils.is_ep_contains(cascaded_ep, ep_list)
def create_green_threads(self, sync_manager, auth_token):
green_threads = []
for image_id in self.images:
cascaded_eps = self.images[image_id].get('locations')
kwargs = {'body': self.images[image_id].get('body')}
for cascaded_ep in cascaded_eps:
if not self._still_need_synced(cascaded_ep,
image_id, auth_token):
continue
green_threads.append(eventlet.spawn(sync_manager.sync_image,
auth_token,
self.cascading_endpoint,
cascaded_ep,
image_id,
image_id,
**kwargs))
return green_threads

View File

@ -0,0 +1,215 @@
# Copyright (c) 2014 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Jia Dong, HuaWei
import re
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
from glance.sync.clients import Clients as clients
CONF = cfg.CONF
CONF.import_opt('cascading_endpoint_url', 'glance.common.config', group='sync')
CONF.import_opt('sync_strategy', 'glance.common.config', group='sync')
def create_glance_client(auth_token, url):
"""
create a glance client for the given endpoint url
"""
return clients(auth_token).glance(url=url)
def create_self_glance_client(auth_token):
return create_glance_client(auth_token, get_cascading_endpoint_url())
def get_mappings_from_image(auth_token, image_id):
"""
get image's patched glance-locations
"""
client = create_self_glance_client(auth_token)
image = client.images.get(image_id)
locations = image.locations
if not locations:
return {}
return get_mappings_from_locations(locations)
def get_mappings_from_locations(locations):
mappings = {}
for loc in locations:
if is_glance_location(loc['url']):
id = loc['metadata'].get('image_id')
if not id:
continue
ep_url = create_ep_by_loc(loc)
mappings[ep_url] = id
return mappings
def get_cascading_endpoint_url():
return CONF.sync.cascading_endpoint_url
def get_host_from_ep(ep_url):
if not ep_url:
return None
pieces = urlparse.urlparse(ep_url)
return pieces.netloc.split(':')[0]
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')
def get_default_location(locations):
for location in locations:
if is_default_location(location):
return location
return None
def is_glance_location(loc_url):
return pattern.match(loc_url)
def is_snapshot_location(location):
l_meta = location['metadata']
return l_meta and l_meta.get('image_from', None) in ['snapshot', 'volume']
def get_id_from_glance_loc(location):
if not is_glance_location(location['url']):
return None
loc_meta = location['metadata']
if not loc_meta:
return None
return loc_meta.get('image_id', None)
def is_default_location(location):
try:
return not is_glance_location(location['url']) \
and location['metadata']['is_default'] == 'true'
except (KeyError, TypeError):
return False
def get_snapshot_glance_loc(locations):
for location in locations:
if is_snapshot_location(location):
return location
return None
def create_ep_by_loc(location):
loc_url = location['url']
if not is_glance_location(loc_url):
return None
piece = urlparse.urlparse(loc_url)
return piece.scheme + '://' + piece.netloc + '/'
def generate_glance_location(ep, image_id, port=None):
default_port = port or '9292'
piece = urlparse.urlparse(ep)
paths = []
paths.append(piece.scheme)
paths.append('://')
paths.append(piece.netloc.split(':')[0])
paths.append(':')
paths.append(default_port)
paths.append('/v2/images/')
paths.append(image_id)
return ''.join(paths)
def get_endpoints(auth_token=None, tenant_id=None, **kwargs):
"""
find which glance endpoints should be synced, per the strategy config
"""
strategy = CONF.sync.sync_strategy
if strategy not in ['All', 'User']:
return None
openstack_clients = clients(auth_token, tenant_id)
ksclient = openstack_clients.keystone()
'''
suppose that the cascading glance is the 'public' endpoint type, and the
cascaded glance endpoints are 'internal'
'''
regions = kwargs.pop('region_names', [])
if strategy == 'All' and not regions:
urls = ksclient.service_catalog.get_urls(service_type='image',
endpoint_type='publicURL')
if urls:
result = [u for u in urls if u != get_cascading_endpoint_url()]
else:
result = []
return result
else:
user_urls = []
for region_name in regions:
urls = ksclient.service_catalog.get_urls(service_type='image',
endpoint_type='publicURL',
region_name=region_name)
if urls:
user_urls.extend(urls)
result = [u for u in set(user_urls) if u !=
get_cascading_endpoint_url()]
return result
_V2_IMAGE_CREATE_PROPERTIES = ['container_format',
'disk_format', 'min_disk', 'min_ram', 'name',
'virtual_size', 'visibility', 'protected']
def get_core_properties(image):
"""
collect the core properties needed to create the image object when syncing
"""
_tags = list(image.tags) or []
kwargs = {}
for key in _V2_IMAGE_CREATE_PROPERTIES:
try:
value = getattr(image, key, None)
if value and value != 'None':
kwargs[key] = value
except KeyError:
pass
if _tags:
kwargs['tags'] = _tags
return kwargs
def calculate_lack_endpoints(all_ep_urls, glance_urls):
"""
calculate endpoints which exist in all_ep_urls but not in glance_urls
"""
if not glance_urls:
return all_ep_urls
def _contain(ep):
_hosts = [urlparse.urlparse(_ep).netloc for _ep in glance_urls]
return not urlparse.urlparse(ep).netloc in _hosts
return filter(_contain, all_ep_urls)
def is_ep_contains(ep_url, glance_urls):
_hosts = [urlparse.urlparse(_ep).netloc for _ep in glance_urls]
return urlparse.urlparse(ep_url).netloc in _hosts

View File

@ -0,0 +1,152 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
CURPATH=$(cd "$(dirname "$0")"; pwd)
_GLANCE_CONF_DIR="/etc/glance"
_GLANCE_API_CONF_FILE="glance-api.conf"
_GLANCE_SYNC_CMD_FILE="glance-sync"
_PYTHON_INSTALL_DIR="/usr/lib64/python2.6/site-packages"
_GLANCE_DIR="${_PYTHON_INSTALL_DIR}/glance"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="${CURPATH}/../glance"
_CONF_DIR="${CURPATH}/../etc"
_BACKUP_DIR="${_GLANCE_DIR}/glance-sync-backup"
_SCRIPT_LOGFILE="/var/log/glance/installation/install.log"
api_config_option_list="sync_enabled=True sync_server_port=9595 sync_server_host=127.0.0.1"
export PS4='+{$LINENO:${FUNCNAME[0]}}'
ERRTRAP()
{
echo "[LINE:$1] Error: Command or function exited with status $?"
}
function log()
{
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
function process_stop
{
PID=`ps -efw|grep "$1"|grep -v grep|awk '{print $2}'`
echo "PID is: $PID">>$_SCRIPT_LOGFILE
if [ "x${PID}" != "x" ]; then
for kill_id in $PID
do
kill -9 ${kill_id}
if [ $? -ne 0 ]; then
echo "[[stop glance-sync]]$1 stop failed.">>$_SCRIPT_LOGFILE
exit 1
fi
done
echo "[[stop glance-sync]]$1 stop ok.">>$_SCRIPT_LOGFILE
fi
}
trap 'ERRTRAP $LINENO' ERR
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
if [ ! -d "/var/log/glance/installation" ]; then
mkdir /var/log/glance/installation
touch $_SCRIPT_LOGFILE
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_GLANCE_DIR}" ] ; then
log "Could not find the glance installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_GLANCE_CONF_DIR}/${_GLANCE_API_CONF_FILE}" ] ; then
log "Could not find glance-api config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CONF_DIR}/${_GLANCE_SYNC_CMD_FILE}" ]; then
log "Could not find the glance-sync file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/glance" ] ; then
log "It seems glance cascading has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/glance"
mkdir -p "${_BACKUP_DIR}/etc"
mkdir -p "${_BACKUP_DIR}/etc/glance"
cp -rf "${_GLANCE_CONF_DIR}/${_GLANCE_API_CONF_FILE}" "${_BACKUP_DIR}/etc/glance/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/glance"
rm -r "${_BACKUP_DIR}/etc"
log "Error in config backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_GLANCE_DIR}`
cp -r "${_CONF_DIR}/glance" "/etc"
cp "${_CONF_DIR}/${_GLANCE_SYNC_CMD_FILE}" "/usr/bin/"
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/glance" `dirname ${_GLANCE_DIR}` && rm -r "${_BACKUP_DIR}/glance"
cp "${_BACKUP_DIR}/etc/glance/*.conf" `dirname ${_GLANCE_CONF_DIR}` && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please install manually."
fi
exit 1
fi
log "updating config file..."
for option in $api_config_option_list
do
sed -i -e "/$option/d" "${_GLANCE_CONF_DIR}/${_GLANCE_API_CONF_FILE}"
sed -i -e "/DEFAULT/a $option" "${_GLANCE_CONF_DIR}/${_GLANCE_API_CONF_FILE}"
done
log "restarting glance ..."
service openstack-glance-api restart
service openstack-glance-registry restart
process_stop "glance-sync"
python /usr/bin/glance-sync --config-file=/etc/glance/glance-sync.conf &
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart glance manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0

View File

@ -0,0 +1,43 @@
Cinder create volume from image bug
===============================
OpenStack cascading is currently developed against the Icehouse release, which
contains a bug in creating a volume from an image and uploading a volume to an image.
Please refer to https://bugs.launchpad.net/cinder/+bug/1308058 for details.
It is recommended to fix this bug in the cascaded cinder.
Key modules
-----------
* when creating a volume from an image or uploading a volume to an image, cinder calls a function in glance.py
to check the image metadata, but not all of the metadata is included in the glance image information.
As a result, in the function _extract_attributes in the file below, elements such as "checksum"
may be absent and fail validation:
cinder/image/glance.py
Requirements
------------
* openstack icehouse has been installed
Installation
------------
We suggest a way to fix the cinder-image-metadata bug. In this section,
we will guide you through fixing this image metadata bug.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
- We recommend that you back up at least the files listed below before installation,
because they will be overwritten or modified.
* **Manual Installation as the OpenStack community suggests**
modify "output[attr] = getattr(image, attr)" to "output[attr] = getattr(image, attr, None)"
in _extract_attributes in cinder/image/glance.py, around line 434, as sketched below.
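For illustration, a minimal sketch of the patched loop; the attribute list is abbreviated and the surrounding function body is simplified, so treat this as a sketch rather than the exact upstream code:
```
# Sketch of the fix in _extract_attributes (cinder/image/glance.py).
# The attribute list below is abbreviated for illustration.
IMAGE_ATTRIBUTES = ['size', 'disk_format', 'container_format',
                    'checksum', 'id', 'name', 'status',
                    'min_disk', 'min_ram']

def _extract_attributes(image):
    output = {}
    for attr in IMAGE_ATTRIBUTES:
        # getattr() with a default no longer raises when glance
        # omits an attribute such as 'checksum'.
        output[attr] = getattr(image, attr, None)
    output['properties'] = getattr(image, 'properties', {})
    return output
```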

View File

@ -0,0 +1,54 @@
Cinder timestamp-query-patch
===============================
This patch is applied to the cascaded level's control node.
The cinder Icehouse database has an updated_at attribute for the
changes-since query filter, but the cinder db API in this version does not support
timestamp queries. This patch is therefore needed at the cascaded level
so that state can be synchronized between the cascading and cascaded OpenStack levels.
Key modules
-----------
* adds a timestamp query function when listing volumes (see the sketch after this list):
cinder\db\sqlalchemy\api.py
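As an illustration only, a minimal sketch of what such a filter could look like; the helper name and the 'changes-since' filter key are assumptions, not the patch's actual code:
```
# Hypothetical sketch of a changes-since filter on updated_at in
# cinder/db/sqlalchemy/api.py; names here are illustrative only.
from cinder.db.sqlalchemy import models
from cinder.openstack.common import timeutils

def _filter_changes_since(query, filters):
    """Narrow a volume query to rows updated after a timestamp."""
    changes_since = filters.pop('changes-since', None)
    if changes_since is not None:
        stamp = timeutils.parse_isotime(changes_since)
        query = query.filter(models.Volume.updated_at > stamp)
    return query
```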
Requirements
------------
* openstack icehouse has been installed
Installation
------------
We provide two ways to install the timestamp query patch code. In this section, we will guide you through installing the timestamp query patch.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
- We recommend that you back up at least the files listed below before installation, because they will be overwritten or modified:
* **Manual Installation**
- Make sure you have performed backups properly.
- Navigate to the local repository and copy the contents in 'cinder' sub-directory to the corresponding places in existing cinder, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/cinder $CINDER_PARENT_DIR```
(replace the $... with actual directory name.)
- restart cinder api service
- Done. The timestamp query patch should be working with a demo configuration.
* **Automatic Installation**
- Make sure you have performed backups properly.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)

File diff suppressed because it is too large

View File

@ -0,0 +1,87 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder_timestamp_query_patch-installation-backup"
_SCRIPT_LOGFILE="/var/log/cinder/cinder_timestamp_query_patch/installation/install.log"
function log()
{
log_path=`dirname ${_SCRIPT_LOGFILE}`
if [ ! -d $log_path ] ; then
mkdir -p $log_path
touch $_SCRIPT_LOGFILE
chmod 777 $_SCRIPT_LOGFILE
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/cinder" ] ; then
log "It seems cinder timestamp query has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/cinder"
mkdir -p "${_BACKUP_DIR}/etc/cinder"
cp -r "${_CINDER_DIR}/db" "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
echo "Error in code backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please install manually."
fi
exit 1
fi
service openstack-cinder-api restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder api manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0

View File

@ -0,0 +1,115 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder-proxy/installation/install.log"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder_timestamp_query_patch-installation-backup"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
function log()
{
if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
touch $_CINDER_INSTALL_LOG
chmod 777 $_CINDER_INSTALL_LOG
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
log "Could not find cinder config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking backup..."
if [ ! -d "${_BACKUP_DIR}/cinder" ] ; then
log "Could not find backup files. It is possible that the cinder-proxy has been uninstalled."
log "If this is not the case, then please uninstall manually."
exit 1
fi
log "backing up current files that might be overwritten..."
if [ -d "${_BACKUP_DIR}/uninstall" ] ; then
rm -r "${_BACKUP_DIR}/uninstall"
fi
mkdir -p "${_BACKUP_DIR}/uninstall/cinder"
mkdir -p "${_BACKUP_DIR}/uninstall/etc/cinder"
cp -r "${_CINDER_DIR}/volume" "${_BACKUP_DIR}/uninstall/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/cinder"
log "Error in code backup, aborted."
exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/uninstall/etc/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/cinder"
rm -r "${_BACKUP_DIR}/uninstall/etc"
log "Error in config backup, aborted."
exit 1
fi
log "restoring code to the status before installing cinder-proxy..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering current files..."
cp -r "${_BACKUP_DIR}/uninstall/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Recovering failed! Please uninstall manually."
fi
exit 1
fi
log "cleaning up backup files..."
rm -r "${_BACKUP_DIR}/cinder" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
log "There was an error when cleaning up the backup files."
fi
log "restarting cinder api..."
service openstack-cinder-api restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder volume manually."
exit 1
fi
log "Completed."
exit 0

View File

@ -0,0 +1,65 @@
Cinder uuid-mapping-patch
===============================
This patch is applied to the cascading level's control node.
A cascading level node can manage volumes/snapshots/backups in a cascaded level node
because the mapping_uuid stored at the cascading level records the relationship for each
volume/snapshot/backup.
Key modules
-----------
* adds a mapping_uuid column to the cinder volume, snapshot and backup tables
when cinder synchronizes the db (see the sketch after this list):
cinder\db\sqlalchemy\migrate_repo\versions\023_add_mapping_uuid.py
cinder\db\sqlalchemy\migrate_repo\versions\024_snapshots_add_mapping_uuid.py
cinder\db\sqlalchemy\migrate_repo\versions\025_backup_add_mapping_uuid.py
cinder\db\sqlalchemy\models.py
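For illustration, a hypothetical lookup showing how the new column links the two levels; the function below is illustrative only and not part of the patch:
```
# Hypothetical illustration: resolving the cascaded volume id from a
# cascading-level volume row via the new mapping_uuid column.
from cinder import db

def cascaded_volume_id(context, cascading_volume_id):
    """Return the uuid of the matching volume in the cascaded cinder."""
    volume = db.volume_get(context, cascading_volume_id)
    # mapping_uuid holds the id of the corresponding cascaded resource;
    # None means it has not been created at the cascaded level yet.
    return volume.get('mapping_uuid')
```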
Requirements
------------
* openstack icehouse has been installed
Installation
------------
We provide two ways to install the mapping-uuid patch code. In this section, we will guide you through installing the mapping_uuid patch.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
- We recommend that you back up at least the files listed below before installation, because they will be overwritten or modified:
* **Manual Installation**
- Make sure you have performed backups properly.
- Navigate to the local repository and copy the contents in the 'cinder' sub-directory to the corresponding places in the existing cinder, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/cinder $CINDER_PARENT_DIR```
(replace the $... with actual directory name.)
- synchronize the cinder db.
```
mysql -u root -p$MYSQL_PASS -e "DROP DATABASE if exists cinder;
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY $MYSQL_PASS;
GRANT ALL PRIVILEGES ON *.* TO 'cinder'@'%'IDENTIFIED BY $MYSQL_PASS;
cinder-manage db sync
```
- Done. The mapping-uuid patch should be working with a demo configuration.
* **Automatic Installation**
- Make sure you have performed backups properly.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)

View File

@ -0,0 +1,36 @@
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import String, Column, MetaData, Table
def upgrade(migrate_engine):
"""Add mapping_uuid column to volumes."""
meta = MetaData()
meta.bind = migrate_engine
volumes = Table('volumes', meta, autoload=True)
mapping_uuid = Column('mapping_uuid', String(36))
volumes.create_column(mapping_uuid)
def downgrade(migrate_engine):
"""Remove mapping_uuid column from volumes."""
meta = MetaData()
meta.bind = migrate_engine
volumes = Table('volumes', meta, autoload=True)
mapping_uuid = volumes.columns.mapping_uuid
volumes.drop_column(mapping_uuid)

View File

@ -0,0 +1,34 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Column
from sqlalchemy import MetaData, String, Table
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
snapshots = Table('snapshots', meta, autoload=True)
mapping_uuid = Column('mapping_uuid', String(36))
snapshots.create_column(mapping_uuid)
snapshots.update().values(mapping_uuid=None).execute()
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
snapshots = Table('snapshots', meta, autoload=True)
mapping_uuid = snapshots.columns.mapping_uuid
snapshots.drop_column(mapping_uuid)

View File

@ -0,0 +1,34 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Column
from sqlalchemy import MetaData, String, Table
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
backups = Table('backups', meta, autoload=True)
mapping_uuid = Column('mapping_uuid', String(36))
backups.create_column(mapping_uuid)
backups.update().values(mapping_uuid=None).execute()
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
backups = Table('backups', meta, autoload=True)
mapping_uuid = backups.columns.mapping_uuid
backups.drop_column(mapping_uuid)

View File

@ -0,0 +1,515 @@
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Piston Cloud Computing, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
SQLAlchemy models for cinder data.
"""
from sqlalchemy import Column, Integer, String, Text, schema
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import ForeignKey, DateTime, Boolean
from sqlalchemy.orm import relationship, backref
from oslo.config import cfg
from cinder.openstack.common.db.sqlalchemy import models
from cinder.openstack.common import timeutils
CONF = cfg.CONF
BASE = declarative_base()
class CinderBase(models.TimestampMixin,
models.ModelBase):
"""Base class for Cinder Models."""
__table_args__ = {'mysql_engine': 'InnoDB'}
# TODO(rpodolyaka): reuse models.SoftDeleteMixin in the next stage
# of implementing of BP db-cleanup
deleted_at = Column(DateTime)
deleted = Column(Boolean, default=False)
metadata = None
def delete(self, session=None):
"""Delete this object."""
self.deleted = True
self.deleted_at = timeutils.utcnow()
self.save(session=session)
class Service(BASE, CinderBase):
"""Represents a running service on a host."""
__tablename__ = 'services'
id = Column(Integer, primary_key=True)
host = Column(String(255)) # , ForeignKey('hosts.id'))
binary = Column(String(255))
topic = Column(String(255))
report_count = Column(Integer, nullable=False, default=0)
disabled = Column(Boolean, default=False)
availability_zone = Column(String(255), default='cinder')
disabled_reason = Column(String(255))
class Volume(BASE, CinderBase):
"""Represents a block storage device that can be attached to a vm."""
__tablename__ = 'volumes'
id = Column(String(36), primary_key=True)
_name_id = Column(String(36)) # Don't access/modify this directly!
@property
def name_id(self):
return self.id if not self._name_id else self._name_id
@name_id.setter
def name_id(self, value):
self._name_id = value
@property
def name(self):
return CONF.volume_name_template % self.name_id
ec2_id = Column(Integer)
user_id = Column(String(255))
project_id = Column(String(255))
snapshot_id = Column(String(36))
host = Column(String(255)) # , ForeignKey('hosts.id'))
size = Column(Integer)
availability_zone = Column(String(255)) # TODO(vish): foreign key?
instance_uuid = Column(String(36))
attached_host = Column(String(255))
mountpoint = Column(String(255))
attach_time = Column(String(255)) # TODO(vish): datetime
status = Column(String(255)) # TODO(vish): enum?
attach_status = Column(String(255)) # TODO(vish): enum
migration_status = Column(String(255))
scheduled_at = Column(DateTime)
launched_at = Column(DateTime)
terminated_at = Column(DateTime)
display_name = Column(String(255))
display_description = Column(String(255))
provider_location = Column(String(255))
provider_auth = Column(String(255))
provider_geometry = Column(String(255))
volume_type_id = Column(String(36))
source_volid = Column(String(36))
encryption_key_id = Column(String(36))
deleted = Column(Boolean, default=False)
bootable = Column(Boolean, default=False)
mapping_uuid = Column(String(36))
class VolumeMetadata(BASE, CinderBase):
"""Represents a metadata key/value pair for a volume."""
__tablename__ = 'volume_metadata'
id = Column(Integer, primary_key=True)
key = Column(String(255))
value = Column(String(255))
volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
volume = relationship(Volume, backref="volume_metadata",
foreign_keys=volume_id,
primaryjoin='and_('
'VolumeMetadata.volume_id == Volume.id,'
'VolumeMetadata.deleted == False)')
class VolumeAdminMetadata(BASE, CinderBase):
"""Represents a administrator metadata key/value pair for a volume."""
__tablename__ = 'volume_admin_metadata'
id = Column(Integer, primary_key=True)
key = Column(String(255))
value = Column(String(255))
volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
volume = relationship(Volume, backref="volume_admin_metadata",
foreign_keys=volume_id,
primaryjoin='and_('
'VolumeAdminMetadata.volume_id == Volume.id,'
'VolumeAdminMetadata.deleted == False)')
class VolumeTypes(BASE, CinderBase):
"""Represent possible volume_types of volumes offered."""
__tablename__ = "volume_types"
id = Column(String(36), primary_key=True)
name = Column(String(255))
# A reference to qos_specs entity
qos_specs_id = Column(String(36),
ForeignKey('quality_of_service_specs.id'))
volumes = relationship(Volume,
backref=backref('volume_type', uselist=False),
foreign_keys=id,
primaryjoin='and_('
'Volume.volume_type_id == VolumeTypes.id, '
'VolumeTypes.deleted == False)')
class VolumeTypeExtraSpecs(BASE, CinderBase):
"""Represents additional specs as key/value pairs for a volume_type."""
__tablename__ = 'volume_type_extra_specs'
id = Column(Integer, primary_key=True)
key = Column(String(255))
value = Column(String(255))
volume_type_id = Column(String(36),
ForeignKey('volume_types.id'),
nullable=False)
volume_type = relationship(
VolumeTypes,
backref="extra_specs",
foreign_keys=volume_type_id,
primaryjoin='and_('
'VolumeTypeExtraSpecs.volume_type_id == VolumeTypes.id,'
'VolumeTypeExtraSpecs.deleted == False)'
)
class QualityOfServiceSpecs(BASE, CinderBase):
"""Represents QoS specs as key/value pairs.
QoS specs are a standalone entity that can be associated/disassociated
with volume types (one-to-many relation). The adjacency list relationship
pattern is used in this model to represent the following hierarchical
data within a flat table, e.g. the following structure
qos-specs-1 'Rate-Limit'
|
+------> consumer = 'front-end'
+------> total_bytes_sec = 1048576
+------> total_iops_sec = 500
qos-specs-2 'QoS_Level1'
|
+------> consumer = 'back-end'
+------> max-iops = 1000
+------> min-iops = 200
is represented by:
id specs_id key value
------ -------- ------------- -----
UUID-1 NULL QoSSpec_Name Rate-Limit
UUID-2 UUID-1 consumer front-end
UUID-3 UUID-1 total_bytes_sec 1048576
UUID-4 UUID-1 total_iops_sec 500
UUID-5 NULL QoSSpec_Name QoS_Level1
UUID-6 UUID-5 consumer back-end
UUID-7 UUID-5 max-iops 1000
UUID-8 UUID-5 min-iops 200
"""
__tablename__ = 'quality_of_service_specs'
id = Column(String(36), primary_key=True)
specs_id = Column(String(36), ForeignKey(id))
key = Column(String(255))
value = Column(String(255))
specs = relationship(
"QualityOfServiceSpecs",
cascade="all, delete-orphan",
backref=backref("qos_spec", remote_side=id),
)
vol_types = relationship(
VolumeTypes,
backref=backref('qos_specs'),
foreign_keys=id,
primaryjoin='and_('
'or_(VolumeTypes.qos_specs_id == '
'QualityOfServiceSpecs.id,'
'VolumeTypes.qos_specs_id == '
'QualityOfServiceSpecs.specs_id),'
'QualityOfServiceSpecs.deleted == False)')
class VolumeGlanceMetadata(BASE, CinderBase):
"""Glance metadata for a bootable volume."""
__tablename__ = 'volume_glance_metadata'
id = Column(Integer, primary_key=True, nullable=False)
volume_id = Column(String(36), ForeignKey('volumes.id'))
snapshot_id = Column(String(36), ForeignKey('snapshots.id'))
key = Column(String(255))
value = Column(Text)
volume = relationship(Volume, backref="volume_glance_metadata",
foreign_keys=volume_id,
primaryjoin='and_('
'VolumeGlanceMetadata.volume_id == Volume.id,'
'VolumeGlanceMetadata.deleted == False)')
class Quota(BASE, CinderBase):
"""Represents a single quota override for a project.
If there is no row for a given project id and resource, then the
default for the quota class is used. If there is no row for a
given quota class and resource, then the default for the
deployment is used. If the row is present but the hard limit is
Null, then the resource is unlimited.
"""
__tablename__ = 'quotas'
id = Column(Integer, primary_key=True)
project_id = Column(String(255), index=True)
resource = Column(String(255))
hard_limit = Column(Integer, nullable=True)
class QuotaClass(BASE, CinderBase):
"""Represents a single quota override for a quota class.
If there is no row for a given quota class and resource, then the
default for the deployment is used. If the row is present but the
hard limit is Null, then the resource is unlimited.
"""
__tablename__ = 'quota_classes'
id = Column(Integer, primary_key=True)
class_name = Column(String(255), index=True)
resource = Column(String(255))
hard_limit = Column(Integer, nullable=True)
class QuotaUsage(BASE, CinderBase):
"""Represents the current usage for a given resource."""
__tablename__ = 'quota_usages'
id = Column(Integer, primary_key=True)
project_id = Column(String(255), index=True)
resource = Column(String(255))
in_use = Column(Integer)
reserved = Column(Integer)
@property
def total(self):
return self.in_use + self.reserved
until_refresh = Column(Integer, nullable=True)
class Reservation(BASE, CinderBase):
"""Represents a resource reservation for quotas."""
__tablename__ = 'reservations'
id = Column(Integer, primary_key=True)
uuid = Column(String(36), nullable=False)
usage_id = Column(Integer, ForeignKey('quota_usages.id'), nullable=False)
project_id = Column(String(255), index=True)
resource = Column(String(255))
delta = Column(Integer)
expire = Column(DateTime, nullable=False)
usage = relationship(
"QuotaUsage",
foreign_keys=usage_id,
primaryjoin='and_(Reservation.usage_id == QuotaUsage.id,'
'QuotaUsage.deleted == 0)')
class Snapshot(BASE, CinderBase):
"""Represents a snapshot of volume."""
__tablename__ = 'snapshots'
id = Column(String(36), primary_key=True)
@property
def name(self):
return CONF.snapshot_name_template % self.id
@property
def volume_name(self):
return self.volume.name # pylint: disable=E1101
user_id = Column(String(255))
project_id = Column(String(255))
volume_id = Column(String(36))
status = Column(String(255))
progress = Column(String(255))
volume_size = Column(Integer)
display_name = Column(String(255))
display_description = Column(String(255))
encryption_key_id = Column(String(36))
volume_type_id = Column(String(36))
provider_location = Column(String(255))
volume = relationship(Volume, backref="snapshots",
foreign_keys=volume_id,
primaryjoin='Snapshot.volume_id == Volume.id')
mapping_uuid = Column(String(36))
class SnapshotMetadata(BASE, CinderBase):
"""Represents a metadata key/value pair for a snapshot."""
__tablename__ = 'snapshot_metadata'
id = Column(Integer, primary_key=True)
key = Column(String(255))
value = Column(String(255))
snapshot_id = Column(String(36),
ForeignKey('snapshots.id'),
nullable=False)
snapshot = relationship(Snapshot, backref="snapshot_metadata",
foreign_keys=snapshot_id,
primaryjoin='and_('
'SnapshotMetadata.snapshot_id == Snapshot.id,'
'SnapshotMetadata.deleted == False)')
class IscsiTarget(BASE, CinderBase):
"""Represents an iscsi target for a given host."""
__tablename__ = 'iscsi_targets'
__table_args__ = (schema.UniqueConstraint("target_num", "host"),
{'mysql_engine': 'InnoDB'})
id = Column(Integer, primary_key=True)
target_num = Column(Integer)
host = Column(String(255))
volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=True)
volume = relationship(Volume,
backref=backref('iscsi_target', uselist=False),
foreign_keys=volume_id,
primaryjoin='and_(IscsiTarget.volume_id==Volume.id,'
'IscsiTarget.deleted==False)')
class Backup(BASE, CinderBase):
"""Represents a backup of a volume to Swift."""
__tablename__ = 'backups'
id = Column(String(36), primary_key=True)
@property
def name(self):
return CONF.backup_name_template % self.id
user_id = Column(String(255), nullable=False)
project_id = Column(String(255), nullable=False)
volume_id = Column(String(36), nullable=False)
host = Column(String(255))
availability_zone = Column(String(255))
display_name = Column(String(255))
display_description = Column(String(255))
container = Column(String(255))
status = Column(String(255))
fail_reason = Column(String(255))
service_metadata = Column(String(255))
service = Column(String(255))
size = Column(Integer)
object_count = Column(Integer)
mapping_uuid = Column(String(36))
class Encryption(BASE, CinderBase):
"""Represents encryption requirement for a volume type.
Encryption here is a set of performance characteristics describing
cipher, provider, and key_size for a certain volume type.
"""
__tablename__ = 'encryption'
cipher = Column(String(255))
key_size = Column(Integer)
provider = Column(String(255))
control_location = Column(String(255))
volume_type_id = Column(String(36),
ForeignKey('volume_types.id'),
primary_key=True)
volume_type = relationship(
VolumeTypes,
backref="encryption",
foreign_keys=volume_type_id,
primaryjoin='and_('
'Encryption.volume_type_id == VolumeTypes.id,'
'Encryption.deleted == False)'
)
class Transfer(BASE, CinderBase):
"""Represents a volume transfer request."""
__tablename__ = 'transfers'
id = Column(String(36), primary_key=True)
volume_id = Column(String(36), ForeignKey('volumes.id'))
display_name = Column(String(255))
salt = Column(String(255))
crypt_hash = Column(String(255))
expires_at = Column(DateTime)
volume = relationship(Volume, backref="transfer",
foreign_keys=volume_id,
primaryjoin='and_('
'Transfer.volume_id == Volume.id,'
'Transfer.deleted == False)')
def register_models():
"""Register Models and create metadata.
Called from cinder.db.sqlalchemy.__init__ as part of loading the driver,
it will never need to be called explicitly elsewhere unless the
connection is lost and needs to be reestablished.
"""
from sqlalchemy import create_engine
models = (Backup,
Service,
Volume,
VolumeMetadata,
VolumeAdminMetadata,
SnapshotMetadata,
Transfer,
VolumeTypeExtraSpecs,
VolumeTypes,
VolumeGlanceMetadata,
)
engine = create_engine(CONF.database.connection, echo=False)
for model in models:
model.metadata.create_all(engine)

View File

@ -0,0 +1,92 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_MYSQL_PASS="1234"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder_mapping_uuid_patch-installation-backup"
_SCRIPT_LOGFILE="/var/log/cinder/cinder_mapping_uuid_patch/installation/install.log"
function log()
{
log_path=`dirname ${_SCRIPT_LOGFILE}`
if [ ! -d $log_path ] ; then
mkdir -p $log_path
chmod 777 $log_path
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/cinder" ] ; then
log "It seems cinder mapping-uuid-patch has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/cinder"
mkdir -p "${_BACKUP_DIR}/etc/cinder"
cp -r "${_CINDER_DIR}/db" "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/cinder"
echo "Error in code backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering original files..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}` && rm -r "${_BACKUP_DIR}/cinder"
if [ $? -ne 0 ] ; then
log "Recovering failed! Please install manually."
fi
exit 1
fi
log "syc cinder db..."
mysql -u root -p$_MYSQL_PASS -e "DROP DATABASE if exists cinder;CREATE DATABASE cinder;"
cinder-manage db sync
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder api manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0

View File

@ -0,0 +1,118 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_MYSQL_PASS="1234"
_CINDER_CONF_DIR="/etc/cinder"
_CINDER_CONF_FILE="cinder.conf"
_CINDER_DIR="/usr/lib64/python2.6/site-packages/cinder"
_CINDER_INSTALL_LOG="/var/log/cinder/cinder_mapping_uuid_patch/installation/install.log"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../cinder"
_BACKUP_DIR="${_CINDER_DIR}/cinder_mapping_uuid_patch-installation-backup"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
function log()
{
if [ ! -f "${_CINDER_INSTALL_LOG}" ] ; then
mkdir -p `dirname ${_CINDER_INSTALL_LOG}`
touch $_CINDER_INSTALL_LOG
chmod 777 $_CINDER_INSTALL_LOG
fi
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_CINDER_INSTALL_LOG
}
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_CINDER_DIR}" ] ; then
log "Could not find the cinder installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" ] ; then
log "Could not find cinder config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking backup..."
if [ ! -d "${_BACKUP_DIR}/cinder" ] ; then
log "Could not find backup files. It is possible that the cinder-proxy has been uninstalled."
log "If this is not the case, then please uninstall manually."
exit 1
fi
log "backing up current files that might be overwritten..."
if [ -d "${_BACKUP_DIR}/uninstall" ] ; then
rm -r "${_BACKUP_DIR}/uninstall"
fi
mkdir -p "${_BACKUP_DIR}/uninstall/cinder"
mkdir -p "${_BACKUP_DIR}/uninstall/etc/cinder"
cp -r "${_CINDER_DIR}/db" "${_BACKUP_DIR}/uninstall/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/cinder"
log "Error in code backup, aborted."
exit 1
fi
cp "${_CINDER_CONF_DIR}/${_CINDER_CONF_FILE}" "${_BACKUP_DIR}/uninstall/etc/cinder/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/cinder"
rm -r "${_BACKUP_DIR}/uninstall/etc"
log "Error in config backup, aborted."
exit 1
fi
log "restoring code to the status before installing cinder-proxy..."
cp -r "${_BACKUP_DIR}/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Error in copying, aborted."
log "Recovering current files..."
cp -r "${_BACKUP_DIR}/uninstall/cinder" `dirname ${_CINDER_DIR}`
if [ $? -ne 0 ] ; then
log "Recovering failed! Please uninstall manually."
fi
exit 1
fi
log "cleaning up backup files..."
rm -r "${_BACKUP_DIR}/cinder" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
log "There was an error when cleaning up the backup files."
fi
log "restarting cinder api..."
mysql -u root -p$_MYSQL_PASS -e "DROP DATABASE if exists cinder;CREATE DATABASE cinder;"
cinder-manage db sync
service openstack-cinder-api restart
if [ $? -ne 0 ] ; then
log "There was an error in restarting the service, please restart cinder volume manually."
exit 1
fi
log "Completed."
exit 0

View File

@ -0,0 +1,23 @@
Glance-Cascading Patch
================
Introduction
-----------------------------
*For glance cascading, we have to create a relationship between one cascading glance and several cascaded glances. To achieve this we use glance's multi-location feature: the relationship is stored as a location with a special format (see the sketch after the file list below). Besides, we modify the image status-changing rule: the image only becomes 'active' once the cascaded glances have been synced. For these two reasons, a few existing source files were modified to adapt to cascading:
glance/store/http.py
glance/store/__init__.py
glance/api/v2/image.py
glance/gateway.py
glance/common/utils.py
glance/common/config.py
glance/common/exception.py
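For illustration, a location entry in the special format, as implied by the sync utility helpers (a URL matching https?://<host>/v2/images/<id>, with the cascaded image id carried in the location metadata); the concrete host and uuid values here are made up:
```
# Hypothetical cascaded-glance location entry; the URL shape and the
# 'image_id' metadata key follow the sync utils, the values are examples.
location = {
    'url': 'http://cascaded-glance.example.com:9292/v2/images/'
           '9f1c7203-1234-5678-9abc-000000000001',
    'metadata': {
        # id of the copy of this image in the cascaded glance
        'image_id': '9f1c7203-1234-5678-9abc-000000000001',
    },
}
```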
Install
------------------------------
*To implement this patch, just replace the original files with these files, or run install.sh in the glancesync/installation/ directory.

View File

@ -0,0 +1,21 @@
[console_scripts]
glance-api = glance.cmd.api:main
glance-cache-cleaner = glance.cmd.cache_cleaner:main
glance-cache-manage = glance.cmd.cache_manage:main
glance-cache-prefetcher = glance.cmd.cache_prefetcher:main
glance-cache-pruner = glance.cmd.cache_pruner:main
glance-control = glance.cmd.control:main
glance-manage = glance.cmd.manage:main
glance-registry = glance.cmd.registry:main
glance-replicator = glance.cmd.replicator:main
glance-scrubber = glance.cmd.scrubber:main
[glance.common.image_location_strategy.modules]
location_order_strategy = glance.common.location_strategy.location_order
store_type_strategy = glance.common.location_strategy.store_type
[glance.sync.store.location]
filesystem = glance.sync.store._drivers.filesystem:LocationCreator
[glance.sync.store.driver]
filesystem = glance.sync.store._drivers.filesystem:Store

View File

@ -0,0 +1,822 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
import webob.exc
from glance.api import policy
from glance.common import exception
from glance.common import location_strategy
from glance.common import utils
from glance.common import wsgi
import glance.db
import glance.gateway
import glance.notifier
from glance.openstack.common import jsonutils as json
import glance.openstack.common.log as logging
from glance.openstack.common import timeutils
import glance.schema
import glance.store
import glance.sync.client.v1.api as sync_api
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('disk_formats', 'glance.common.config', group='image_format')
CONF.import_opt('container_formats', 'glance.common.config',
group='image_format')
CONF.import_opt('sync_enabled', 'glance.common.config')
class ImagesController(object):
def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
store_api=None):
self.db_api = db_api or glance.db.get_api()
self.policy = policy_enforcer or policy.Enforcer()
self.notifier = notifier or glance.notifier.Notifier()
self.store_api = store_api or glance.store
self.sync_api = sync_api
self.sync_api.configure_sync_client()
self.gateway = glance.gateway.Gateway(self.db_api, self.store_api,
self.notifier, self.policy,
self.sync_api)
@utils.mutating
def create(self, req, image, extra_properties, tags):
image_factory = self.gateway.get_image_factory(req.context)
image_repo = self.gateway.get_repo(req.context)
try:
image = image_factory.new_image(extra_properties=extra_properties,
tags=tags, **image)
image_repo.add(image)
except exception.DuplicateLocation as dup:
raise webob.exc.HTTPBadRequest(explanation=dup.msg)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.InvalidParameterValue as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
except exception.LimitExceeded as e:
LOG.info(unicode(e))
raise webob.exc.HTTPRequestEntityTooLarge(
explanation=e.msg, request=req, content_type='text/plain')
return image
def index(self, req, marker=None, limit=None, sort_key='created_at',
sort_dir='desc', filters=None, member_status='accepted'):
result = {}
if filters is None:
filters = {}
filters['deleted'] = False
if limit is None:
limit = CONF.limit_param_default
limit = min(CONF.api_limit_max, limit)
image_repo = self.gateway.get_repo(req.context)
try:
images = image_repo.list(marker=marker, limit=limit,
sort_key=sort_key, sort_dir=sort_dir,
filters=filters,
member_status=member_status)
if len(images) != 0 and len(images) == limit:
result['next_marker'] = images[-1].image_id
except (exception.NotFound, exception.InvalidSortKey,
exception.InvalidFilterRangeValue) as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
result['images'] = images
return result
def show(self, req, image_id):
image_repo = self.gateway.get_repo(req.context)
try:
image = image_repo.get(image_id)
if CONF.sync_enabled:
sync_client = sync_api.get_sync_client(req.context)
eps = sync_client.get_cascaded_endpoints()
utils.check_synced(image, eps)
return image
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.NotFound as e:
raise webob.exc.HTTPNotFound(explanation=e.msg)
@utils.mutating
def update(self, req, image_id, changes):
image_repo = self.gateway.get_repo(req.context)
try:
image = image_repo.get(image_id)
for change in changes:
change_method_name = '_do_%s' % change['op']
assert hasattr(self, change_method_name)
change_method = getattr(self, change_method_name)
change_method(req, image, change)
if changes:
image_repo.save(image)
except exception.NotFound as e:
raise webob.exc.HTTPNotFound(explanation=e.msg)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.InvalidParameterValue as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
except exception.StorageQuotaFull as e:
msg = (_("Denying attempt to upload image because it exceeds the ."
"quota: %s") % e)
LOG.info(msg)
raise webob.exc.HTTPRequestEntityTooLarge(
explanation=msg, request=req, content_type='text/plain')
except exception.LimitExceeded as e:
LOG.info(unicode(e))
raise webob.exc.HTTPRequestEntityTooLarge(
explanation=e.msg, request=req, content_type='text/plain')
return image
def _do_replace(self, req, image, change):
path = change['path']
path_root = path[0]
value = change['value']
if path_root == 'locations':
self._do_replace_locations(image, value)
else:
if hasattr(image, path_root):
setattr(image, path_root, value)
elif path_root in image.extra_properties:
image.extra_properties[path_root] = value
else:
msg = _("Property %s does not exist.")
raise webob.exc.HTTPConflict(msg % path_root)
def _do_add(self, req, image, change):
path = change['path']
path_root = path[0]
value = change['value']
if path_root == 'locations':
self._do_add_locations(image, path[1], value)
else:
if (hasattr(image, path_root) or
path_root in image.extra_properties):
msg = _("Property %s already present.")
raise webob.exc.HTTPConflict(msg % path_root)
image.extra_properties[path_root] = value
def _do_remove(self, req, image, change):
path = change['path']
path_root = path[0]
if path_root == 'locations':
self._do_remove_locations(image, path[1])
else:
if hasattr(image, path_root):
msg = _("Property %s may not be removed.")
raise webob.exc.HTTPForbidden(msg % path_root)
elif path_root in image.extra_properties:
del image.extra_properties[path_root]
else:
msg = _("Property %s does not exist.")
raise webob.exc.HTTPConflict(msg % path_root)
@utils.mutating
def delete(self, req, image_id):
image_repo = self.gateway.get_repo(req.context)
try:
image = image_repo.get(image_id)
image.delete()
image_repo.remove(image)
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
except exception.NotFound as e:
msg = (_("Failed to find image %(image_id)s to delete") %
{'image_id': image_id})
LOG.info(msg)
raise webob.exc.HTTPNotFound(explanation=msg)
def _get_locations_op_pos(self, path_pos, max_pos, allow_max):
if path_pos is None or max_pos is None:
return None
pos = max_pos if allow_max else max_pos - 1
if path_pos.isdigit():
pos = int(path_pos)
elif path_pos != '-':
return None
if (not allow_max) and (pos not in range(max_pos)):
return None
return pos
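# Illustrative examples (not part of the original change) of how the
# position parsing above behaves for 'add' (allow_max=True) and
# 'remove' (allow_max=False) on a 3-element locations list:
#
#   _get_locations_op_pos('1', 3, True)   # -> 1  (insert before index 1)
#   _get_locations_op_pos('-', 3, True)   # -> 3  (append at the end)
#   _get_locations_op_pos('-', 3, False)  # -> 2  (remove the last entry)
#   _get_locations_op_pos('5', 3, False)  # -> None (out of range)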
def _do_replace_locations(self, image, value):
if len(image.locations) > 0 and len(value) > 0:
msg = _("Cannot replace locations from a non-empty "
"list to a non-empty list.")
raise webob.exc.HTTPBadRequest(explanation=msg)
if len(value) == 0:
# NOTE(zhiyan): this actually deletes the location
# from the backend store.
del image.locations[:]
if image.status == 'active':
image.status = 'queued'
else: # NOTE(zhiyan): len(image.locations) == 0
try:
image.locations = value
if image.status == 'queued':
image.status = 'active'
except (exception.BadStoreUri, exception.DuplicateLocation) as bse:
raise webob.exc.HTTPBadRequest(explanation=bse.msg)
except ValueError as ve: # update image status failed.
raise webob.exc.HTTPBadRequest(explanation=unicode(ve))
def _do_add_locations(self, image, path_pos, value):
pos = self._get_locations_op_pos(path_pos,
len(image.locations), True)
if pos is None:
msg = _("Invalid position for adding a location.")
raise webob.exc.HTTPBadRequest(explanation=msg)
try:
image.locations.insert(pos, value)
if image.status == 'queued':
image.status = 'active'
except (exception.BadStoreUri, exception.DuplicateLocation) as bse:
raise webob.exc.HTTPBadRequest(explanation=bse.msg)
except ValueError as ve: # update image status failed.
raise webob.exc.HTTPBadRequest(explanation=unicode(ve))
def _do_remove_locations(self, image, path_pos):
pos = self._get_locations_op_pos(path_pos,
len(image.locations), False)
if pos is None:
msg = _("Invalid position for removing a location.")
raise webob.exc.HTTPBadRequest(explanation=msg)
try:
# NOTE(zhiyan): this actually deletes the location
# from the backend store.
image.locations.pop(pos)
except Exception as e:
raise webob.exc.HTTPInternalServerError(explanation=unicode(e))
if (len(image.locations) == 0) and (image.status == 'active'):
image.status = 'queued'
class RequestDeserializer(wsgi.JSONRequestDeserializer):
_disallowed_properties = ['direct_url', 'self', 'file', 'schema']
_readonly_properties = ['created_at', 'updated_at', 'status', 'checksum',
'size', 'virtual_size', 'direct_url', 'self',
'file', 'schema']
_reserved_properties = ['owner', 'is_public', 'location', 'deleted',
'deleted_at']
_base_properties = ['checksum', 'created_at', 'container_format',
'disk_format', 'id', 'min_disk', 'min_ram', 'name',
'size', 'virtual_size', 'status', 'tags',
'updated_at', 'visibility', 'protected']
_path_depth_limits = {'locations': {'add': 2, 'remove': 2, 'replace': 1}}
def __init__(self, schema=None):
super(RequestDeserializer, self).__init__()
self.schema = schema or get_schema()
def _get_request_body(self, request):
output = super(RequestDeserializer, self).default(request)
if 'body' not in output:
msg = _('Body expected in request.')
raise webob.exc.HTTPBadRequest(explanation=msg)
return output['body']
@classmethod
def _check_allowed(cls, image):
for key in cls._disallowed_properties:
if key in image:
msg = _("Attribute '%s' is read-only.") % key
raise webob.exc.HTTPForbidden(explanation=unicode(msg))
def create(self, request):
body = self._get_request_body(request)
self._check_allowed(body)
try:
self.schema.validate(body)
except exception.InvalidObject as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
image = {}
properties = body
tags = properties.pop('tags', None)
for key in self._base_properties:
try:
image[key] = properties.pop(key)
except KeyError:
pass
return dict(image=image, extra_properties=properties, tags=tags)
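# Illustrative walkthrough (assumed request body): given
#   {"name": "cirros", "disk_format": "qcow2",
#    "tags": ["x86"], "arch": "i686"}
# the loop above splits known base properties from free-form ones, so
# create() returns:
#   {'image': {'name': 'cirros', 'disk_format': 'qcow2'},
#    'extra_properties': {'arch': 'i686'}, 'tags': ['x86']}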
def _get_change_operation_d10(self, raw_change):
try:
return raw_change['op']
except KeyError:
msg = _("Unable to find '%s' in JSON Schema change") % 'op'
raise webob.exc.HTTPBadRequest(explanation=msg)
def _get_change_operation_d4(self, raw_change):
op = None
for key in ['replace', 'add', 'remove']:
if key in raw_change:
if op is not None:
msg = _('Operation objects must contain only one member'
' named "add", "remove", or "replace".')
raise webob.exc.HTTPBadRequest(explanation=msg)
op = key
if op is None:
msg = _('Operation objects must contain exactly one member'
' named "add", "remove", or "replace".')
raise webob.exc.HTTPBadRequest(explanation=msg)
return op
def _get_change_path_d10(self, raw_change):
try:
return raw_change['path']
except KeyError:
msg = _("Unable to find '%s' in JSON Schema change") % 'path'
raise webob.exc.HTTPBadRequest(explanation=msg)
def _get_change_path_d4(self, raw_change, op):
return raw_change[op]
def _decode_json_pointer(self, pointer):
"""Parse a json pointer.
Json Pointers are defined in
http://tools.ietf.org/html/draft-pbryan-zyp-json-pointer .
The pointers use '/' for separation between object attributes, such
that '/A/B' would evaluate to C in {"A": {"B": "C"}}. A '/' character
in an attribute name is encoded as "~1" and a '~' character is encoded
as "~0".
"""
self._validate_json_pointer(pointer)
ret = []
for part in pointer.lstrip('/').split('/'):
ret.append(part.replace('~1', '/').replace('~0', '~').strip())
return ret
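# Illustrative example of the escaping rules described above:
#
#   self._decode_json_pointer('/a~1b/c~0d')  # -> ['a/b', 'c~d']
#
# '~1' is decoded back to '/' and '~0' back to '~', in that order, so an
# attribute name may itself contain either character.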
def _validate_json_pointer(self, pointer):
"""Validate a json pointer.
We only accept a limited form of json pointers.
"""
if not pointer.startswith('/'):
msg = _('Pointer `%s` does not start with "/".') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if re.search(r'/\s*?/', pointer[1:]):
msg = _('Pointer `%s` contains adjacent "/".') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if len(pointer) > 1 and pointer.endswith('/'):
msg = _('Pointer `%s` ends with "/".') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if pointer[1:].strip() == '/':
msg = _('Pointer `%s` does not contain a valid token.') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
if re.search('~[^01]', pointer) or pointer.endswith('~'):
msg = _('Pointer `%s` contains "~" not part of'
' a recognized escape sequence.') % pointer
raise webob.exc.HTTPBadRequest(explanation=msg)
def _get_change_value(self, raw_change, op):
if 'value' not in raw_change:
msg = _('Operation "%s" requires a member named "value".')
raise webob.exc.HTTPBadRequest(explanation=msg % op)
return raw_change['value']
def _validate_change(self, change):
path_root = change['path'][0]
if path_root in self._readonly_properties:
msg = _("Attribute '%s' is read-only.") % path_root
raise webob.exc.HTTPForbidden(explanation=unicode(msg))
if path_root in self._reserved_properties:
msg = _("Attribute '%s' is reserved.") % path_root
raise webob.exc.HTTPForbidden(explanation=unicode(msg))
if change['op'] == 'delete':
return
partial_image = None
if len(change['path']) == 1:
partial_image = {path_root: change['value']}
elif ((path_root in _get_base_properties().keys()) and
(_get_base_properties()[path_root].get('type', '') == 'array')):
# NOTE(zhiyan): client can use the PATCH API to add an element
# to the image's existing set property directly.
# Such as: 1. using a '/locations/N' path to add a location
# to the image's 'locations' list at position N.
# (implemented)
# 2. using a '/tags/-' path to append a tag to the end
# of the image's 'tags' list. (Not implemented)
partial_image = {path_root: [change['value']]}
if partial_image:
try:
self.schema.validate(partial_image)
except exception.InvalidObject as e:
raise webob.exc.HTTPBadRequest(explanation=e.msg)
def _validate_path(self, op, path):
path_root = path[0]
limits = self._path_depth_limits.get(path_root, {})
if len(path) != limits.get(op, 1):
msg = _("Invalid JSON pointer for this resource: "
"'/%s'") % '/'.join(path)
raise webob.exc.HTTPBadRequest(explanation=unicode(msg))
def _parse_json_schema_change(self, raw_change, draft_version):
if draft_version == 10:
op = self._get_change_operation_d10(raw_change)
path = self._get_change_path_d10(raw_change)
elif draft_version == 4:
op = self._get_change_operation_d4(raw_change)
path = self._get_change_path_d4(raw_change, op)
else:
msg = _('Unrecognized JSON Schema draft version')
raise webob.exc.HTTPBadRequest(explanation=msg)
path_list = self._decode_json_pointer(path)
return op, path_list
def update(self, request):
changes = []
content_types = {
'application/openstack-images-v2.0-json-patch': 4,
'application/openstack-images-v2.1-json-patch': 10,
}
if request.content_type not in content_types:
headers = {'Accept-Patch': ', '.join(content_types.keys())}
raise webob.exc.HTTPUnsupportedMediaType(headers=headers)
json_schema_version = content_types[request.content_type]
body = self._get_request_body(request)
if not isinstance(body, list):
msg = _('Request body must be a JSON array of operation objects.')
raise webob.exc.HTTPBadRequest(explanation=msg)
for raw_change in body:
if not isinstance(raw_change, dict):
msg = _('Operations must be JSON objects.')
raise webob.exc.HTTPBadRequest(explanation=msg)
(op, path) = self._parse_json_schema_change(raw_change,
json_schema_version)
# NOTE(zhiyan): the 'path' is a list.
self._validate_path(op, path)
change = {'op': op, 'path': path}
if not op == 'remove':
change['value'] = self._get_change_value(raw_change, op)
self._validate_change(change)
changes.append(change)
return {'changes': changes}
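# Illustrative request (assumed image name and location URL): a v2.1
# JSON-patch body such as
#
#   [{"op": "replace", "path": "/name", "value": "renamed"},
#    {"op": "add", "path": "/locations/-",
#     "value": {"url": "http://example.com/img", "metadata": {}}}]
#
# deserializes to
#
#   {'changes': [{'op': 'replace', 'path': ['name'], 'value': 'renamed'},
#                {'op': 'add', 'path': ['locations', '-'],
#                 'value': {'url': 'http://example.com/img',
#                           'metadata': {}}}]}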
def _validate_limit(self, limit):
try:
limit = int(limit)
except ValueError:
msg = _("limit param must be an integer")
raise webob.exc.HTTPBadRequest(explanation=msg)
if limit < 0:
msg = _("limit param must be positive")
raise webob.exc.HTTPBadRequest(explanation=msg)
return limit
def _validate_sort_dir(self, sort_dir):
if sort_dir not in ['asc', 'desc']:
msg = _('Invalid sort direction: %s') % sort_dir
raise webob.exc.HTTPBadRequest(explanation=msg)
return sort_dir
def _validate_member_status(self, member_status):
if member_status not in ['pending', 'accepted', 'rejected', 'all']:
msg = _('Invalid status: %s') % member_status
raise webob.exc.HTTPBadRequest(explanation=msg)
return member_status
def _get_filters(self, filters):
visibility = filters.get('visibility')
if visibility:
if visibility not in ['public', 'private', 'shared']:
msg = _('Invalid visibility value: %s') % visibility
raise webob.exc.HTTPBadRequest(explanation=msg)
return filters
def index(self, request):
params = request.params.copy()
limit = params.pop('limit', None)
marker = params.pop('marker', None)
sort_dir = params.pop('sort_dir', 'desc')
member_status = params.pop('member_status', 'accepted')
# NOTE(flwang): To avoid using a comma or other predefined characters
# to split multiple tags, we allow the user to specify multiple 'tag'
# parameters in the URL, e.g. v2/images?tag=x86&tag=64bit.
tags = []
while 'tag' in params:
tags.append(params.pop('tag').strip())
query_params = {
'sort_key': params.pop('sort_key', 'created_at'),
'sort_dir': self._validate_sort_dir(sort_dir),
'filters': self._get_filters(params),
'member_status': self._validate_member_status(member_status),
}
if marker is not None:
query_params['marker'] = marker
if limit is not None:
query_params['limit'] = self._validate_limit(limit)
if tags:
query_params['filters']['tags'] = tags
return query_params
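# Illustrative example (assumed query string): a request like
#   GET /v2/images?limit=10&sort_dir=asc&tag=x86&tag=64bit
# yields
#   {'sort_key': 'created_at', 'sort_dir': 'asc',
#    'member_status': 'accepted', 'limit': 10,
#    'filters': {'tags': ['x86', '64bit']}}
# since every 'tag' parameter is popped individually before the remaining
# params become filters.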
class ResponseSerializer(wsgi.JSONResponseSerializer):
def __init__(self, schema=None):
super(ResponseSerializer, self).__init__()
self.schema = schema or get_schema()
def _get_image_href(self, image, subcollection=''):
base_href = '/v2/images/%s' % image.image_id
if subcollection:
base_href = '%s/%s' % (base_href, subcollection)
return base_href
def _format_image(self, image):
image_view = dict()
try:
image_view = dict(image.extra_properties)
attributes = ['name', 'disk_format', 'container_format',
'visibility', 'size', 'virtual_size', 'status',
'checksum', 'protected', 'min_ram', 'min_disk',
'owner']
for key in attributes:
image_view[key] = getattr(image, key)
image_view['id'] = image.image_id
image_view['created_at'] = timeutils.isotime(image.created_at)
image_view['updated_at'] = timeutils.isotime(image.updated_at)
if CONF.show_multiple_locations:
if image.locations:
image_view['locations'] = list(image.locations)
else:
# NOTE (flwang): We will still show "locations": [] if
# image.locations is None to indicate it's allowed to show
# locations but it's just non-existent.
image_view['locations'] = []
if CONF.show_image_direct_url and image.locations:
# Choose best location configured strategy
best_location = (
location_strategy.choose_best_location(image.locations))
image_view['direct_url'] = best_location['url']
image_view['tags'] = list(image.tags)
image_view['self'] = self._get_image_href(image)
image_view['file'] = self._get_image_href(image, 'file')
image_view['schema'] = '/v2/schemas/image'
image_view = self.schema.filter(image_view) # domain
except exception.Forbidden as e:
raise webob.exc.HTTPForbidden(explanation=e.msg)
return image_view
def create(self, response, image):
response.status_int = 201
self.show(response, image)
response.location = self._get_image_href(image)
def show(self, response, image):
image_view = self._format_image(image)
body = json.dumps(image_view, ensure_ascii=False)
response.unicode_body = unicode(body)
response.content_type = 'application/json'
def update(self, response, image):
image_view = self._format_image(image)
body = json.dumps(image_view, ensure_ascii=False)
response.unicode_body = unicode(body)
response.content_type = 'application/json'
def index(self, response, result):
params = dict(response.request.params)
params.pop('marker', None)
query = urlparse.urlencode(params)
body = {
'images': [self._format_image(i) for i in result['images']],
'first': '/v2/images',
'schema': '/v2/schemas/images',
}
if query:
body['first'] = '%s?%s' % (body['first'], query)
if 'next_marker' in result:
params['marker'] = result['next_marker']
next_query = urlparse.urlencode(params)
body['next'] = '/v2/images?%s' % next_query
response.unicode_body = unicode(json.dumps(body, ensure_ascii=False))
response.content_type = 'application/json'
def delete(self, response, result):
response.status_int = 204
def _get_base_properties():
return {
'id': {
'type': 'string',
'description': _('An identifier for the image'),
'pattern': ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}'
'-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'),
},
'name': {
'type': 'string',
'description': _('Descriptive name for the image'),
'maxLength': 255,
},
'status': {
'type': 'string',
'description': _('Status of the image (READ-ONLY)'),
'enum': ['queued', 'saving', 'active', 'killed',
'deleted', 'pending_delete'],
},
'visibility': {
'type': 'string',
'description': _('Scope of image accessibility'),
'enum': ['public', 'private'],
},
'protected': {
'type': 'boolean',
'description': _('If true, image will not be deletable.'),
},
'checksum': {
'type': 'string',
'description': _('md5 hash of image contents. (READ-ONLY)'),
'maxLength': 32,
},
'owner': {
'type': 'string',
'description': _('Owner of the image'),
'maxLength': 255,
},
'size': {
'type': 'integer',
'description': _('Size of image file in bytes (READ-ONLY)'),
},
'virtual_size': {
'type': 'integer',
'description': _('Virtual size of image in bytes (READ-ONLY)'),
},
'container_format': {
'type': 'string',
'description': _('Format of the container'),
'enum': CONF.image_format.container_formats,
},
'disk_format': {
'type': 'string',
'description': _('Format of the disk'),
'enum': CONF.image_format.disk_formats,
},
'created_at': {
'type': 'string',
'description': _('Date and time of image registration'
' (READ-ONLY)'),
# TODO(bcwaldon): our jsonschema library doesn't seem to like the
# format attribute, figure out why!
#'format': 'date-time',
},
'updated_at': {
'type': 'string',
'description': _('Date and time of the last image modification'
' (READ-ONLY)'),
#'format': 'date-time',
},
'tags': {
'type': 'array',
'description': _('List of strings related to the image'),
'items': {
'type': 'string',
'maxLength': 255,
},
},
'direct_url': {
'type': 'string',
'description': _('URL to access the image file kept in external '
'store (READ-ONLY)'),
},
'min_ram': {
'type': 'integer',
'description': _('Amount of ram (in MB) required to boot image.'),
},
'min_disk': {
'type': 'integer',
'description': _('Amount of disk space (in GB) required to boot '
'image.'),
},
'self': {
'type': 'string',
'description': '(READ-ONLY)'
},
'file': {
'type': 'string',
'description': '(READ-ONLY)'
},
'schema': {
'type': 'string',
'description': '(READ-ONLY)'
},
'locations': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'url': {
'type': 'string',
'maxLength': 255,
},
'metadata': {
'type': 'object',
},
},
'required': ['url', 'metadata'],
},
'description': _('A set of URLs to access the image file kept in '
'external store'),
},
}
def _get_base_links():
return [
{'rel': 'self', 'href': '{self}'},
{'rel': 'enclosure', 'href': '{file}'},
{'rel': 'describedby', 'href': '{schema}'},
]
def get_schema(custom_properties=None):
properties = _get_base_properties()
links = _get_base_links()
if CONF.allow_additional_image_properties:
schema = glance.schema.PermissiveSchema('image', properties, links)
else:
schema = glance.schema.Schema('image', properties)
schema.merge_properties(custom_properties or {})
return schema
def get_collection_schema(custom_properties=None):
image_schema = get_schema(custom_properties)
return glance.schema.CollectionSchema('images', image_schema)
def load_custom_properties():
"""Find the schema properties files and load them into a dict."""
filename = 'schema-image.json'
match = CONF.find_file(filename)
if match:
schema_file = open(match)
schema_data = schema_file.read()
return json.loads(schema_data)
else:
msg = _('Could not find schema properties file %s. Continuing '
'without custom properties')
LOG.warn(msg % filename)
return {}
def create_resource(custom_properties=None):
"""Images resource factory method"""
schema = get_schema(custom_properties)
deserializer = RequestDeserializer(schema)
serializer = ResponseSerializer(schema)
controller = ImagesController()
return wsgi.Resource(controller, deserializer, serializer)

View File

@ -0,0 +1,260 @@
#!/usr/bin/env python
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Routines for configuring Glance
"""
import logging
import logging.config
import logging.handlers
import os
from oslo.config import cfg
from paste import deploy
from glance.version import version_info as version
paste_deploy_opts = [
cfg.StrOpt('flavor',
help=_('Partial name of a pipeline in your paste configuration '
'file with the service name removed. For example, if '
'your paste section name is '
'[pipeline:glance-api-keystone] use the value '
'"keystone"')),
cfg.StrOpt('config_file',
help=_('Name of the paste configuration file.')),
]
image_format_opts = [
cfg.ListOpt('container_formats',
default=['ami', 'ari', 'aki', 'bare', 'ovf', 'ova'],
help=_("Supported values for the 'container_format' "
"image attribute"),
deprecated_opts=[cfg.DeprecatedOpt('container_formats',
group='DEFAULT')]),
cfg.ListOpt('disk_formats',
default=['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2',
'vdi', 'iso'],
help=_("Supported values for the 'disk_format' "
"image attribute"),
deprecated_opts=[cfg.DeprecatedOpt('disk_formats',
group='DEFAULT')]),
]
task_opts = [
cfg.IntOpt('task_time_to_live',
default=48,
help=_("Time in hours for which a task lives after, either "
"succeeding or failing"),
deprecated_opts=[cfg.DeprecatedOpt('task_time_to_live',
group='DEFAULT')]),
]
common_opts = [
cfg.BoolOpt('allow_additional_image_properties', default=True,
help=_('Whether to allow users to specify image properties '
'beyond what the image schema provides')),
cfg.IntOpt('image_member_quota', default=128,
help=_('Maximum number of image members per image. '
'Negative values evaluate to unlimited.')),
cfg.IntOpt('image_property_quota', default=128,
help=_('Maximum number of properties allowed on an image. '
'Negative values evaluate to unlimited.')),
cfg.IntOpt('image_tag_quota', default=128,
help=_('Maximum number of tags allowed on an image. '
'Negative values evaluate to unlimited.')),
cfg.IntOpt('image_location_quota', default=10,
help=_('Maximum number of locations allowed on an image. '
'Negative values evaluate to unlimited.')),
cfg.StrOpt('data_api', default='glance.db.sqlalchemy.api',
help=_('Python module path of data access API')),
cfg.IntOpt('limit_param_default', default=25,
help=_('Default value for the number of items returned by a '
'request if not specified explicitly in the request')),
cfg.IntOpt('api_limit_max', default=1000,
help=_('Maximum permissible number of items that could be '
'returned by a request')),
cfg.BoolOpt('show_image_direct_url', default=False,
help=_('Whether to include the backend image storage location '
'in image properties. Revealing storage location can '
'be a security risk, so use this setting with '
'caution!')),
cfg.BoolOpt('show_multiple_locations', default=False,
help=_('Whether to include the backend image locations '
'in image properties. Revealing storage location can '
'be a security risk, so use this setting with '
'caution! This overrides show_image_direct_url.')),
cfg.IntOpt('image_size_cap', default=1099511627776,
help=_("Maximum size of image a user can upload in bytes. "
"Defaults to 1099511627776 bytes (1 TB).")),
cfg.IntOpt('user_storage_quota', default=0,
help=_("Set a system wide quota for every user. This value is "
"the total number of bytes that a user can use across "
"all storage systems. A value of 0 means unlimited.")),
cfg.BoolOpt('enable_v1_api', default=True,
help=_("Deploy the v1 OpenStack Images API.")),
cfg.BoolOpt('enable_v2_api', default=True,
help=_("Deploy the v2 OpenStack Images API.")),
cfg.BoolOpt('enable_v1_registry', default=True,
help=_("Deploy the v1 OpenStack Registry API.")),
cfg.BoolOpt('enable_v2_registry', default=True,
help=_("Deploy the v2 OpenStack Registry API.")),
cfg.StrOpt('pydev_worker_debug_host', default=None,
help=_('The hostname/IP of the pydev process listening for '
'debug connections')),
cfg.IntOpt('pydev_worker_debug_port', default=5678,
help=_('The port on which a pydev process is listening for '
'connections.')),
cfg.StrOpt('metadata_encryption_key', secret=True,
help=_('Key used for encrypting sensitive metadata while '
'talking to the registry or database.')),
cfg.BoolOpt('sync_enabled', default=False,
help=_("Whether to launch the Sync function.")),
cfg.StrOpt('sync_server_host', default='127.0.0.1',
help=_('IP address on which the sync web server listens.')),
cfg.IntOpt('sync_server_port', default=9595,
help=_('Port on which the sync web server listens.')),
]
sync_opts = [
cfg.StrOpt('cascading_endpoint_url', default='http://127.0.0.1:9292/',
help=_('Endpoint URL of the cascading Glance service.'),
deprecated_opts=[cfg.DeprecatedOpt('cascading_endpoint_url',
group='DEFAULT')]),
cfg.StrOpt('sync_strategy', default='None',
help=_("Defines the sync strategy; valid values are "
"All, User, or None."),
deprecated_opts=[cfg.DeprecatedOpt('sync_strategy',
group='DEFAULT')]),
cfg.IntOpt('snapshot_timeout', default=300,
help=_('Maximum time (in seconds) to wait for a snapshot '
'to become active.'),
deprecated_opts=[cfg.DeprecatedOpt('snapshot_timeout',
group='DEFAULT')]),
cfg.IntOpt('snapshot_sleep_interval', default=10,
help=_('Sleep interval (in seconds) between checks while '
'waiting for a snapshot to become active.'),
deprecated_opts=[cfg.DeprecatedOpt('snapshot_sleep_interval',
group='DEFAULT')]),
cfg.IntOpt('task_retry_times', default=0,
help=_('Number of times a failed sync task is retried.'),
deprecated_opts=[cfg.DeprecatedOpt('task_retry_times',
group='DEFAULT')]),
cfg.IntOpt('scp_copy_timeout', default=3600,
help=_('Maximum time (in seconds) to wait for an scp copy '
'of image data to complete.'),
deprecated_opts=[cfg.DeprecatedOpt('scp_copy_timeout',
group='DEFAULT')]),
]
CONF = cfg.CONF
CONF.register_opts(paste_deploy_opts, group='paste_deploy')
CONF.register_opts(image_format_opts, group='image_format')
CONF.register_opts(task_opts, group='task')
CONF.register_opts(sync_opts, group='sync')
CONF.register_opts(common_opts)
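# Illustrative sketch (assumed values): the sync feature is switched on by
# 'sync_enabled' and friends in [DEFAULT] plus the [sync] group registered
# above; a test could exercise them through oslo.config overrides:
#
#   CONF.set_override('sync_enabled', True)
#   CONF.set_override('sync_server_host', '192.0.2.10')
#   CONF.set_override('sync_strategy', 'All', group='sync')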
def parse_args(args=None, usage=None, default_config_files=None):
CONF(args=args,
project='glance',
version=version.cached_version_string(),
usage=usage,
default_config_files=default_config_files)
def parse_cache_args(args=None):
config_files = cfg.find_config_files(project='glance', prog='glance-cache')
parse_args(args=args, default_config_files=config_files)
def _get_deployment_flavor(flavor=None):
"""
Retrieve the paste_deploy.flavor config item, formatted appropriately
for appending to the application name.
:param flavor: if specified, use this setting rather than the
paste_deploy.flavor configuration setting
"""
if not flavor:
flavor = CONF.paste_deploy.flavor
return '' if not flavor else ('-' + flavor)
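# Illustrative example: with paste_deploy.flavor = 'keystone', an app name
# of 'glance-api' becomes 'glance-api-keystone', which selects the
# [pipeline:glance-api-keystone] section of the paste config:
#
#   'glance-api' + _get_deployment_flavor()  # -> 'glance-api-keystone'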
def _get_paste_config_path():
paste_suffix = '-paste.ini'
conf_suffix = '.conf'
if CONF.config_file:
# Assume paste config is in a paste.ini file corresponding
# to the last config file
path = CONF.config_file[-1].replace(conf_suffix, paste_suffix)
else:
path = CONF.prog + paste_suffix
return CONF.find_file(os.path.basename(path))
def _get_deployment_config_file():
"""
Retrieve the deployment_config_file config item, formatted as an
absolute pathname.
"""
path = CONF.paste_deploy.config_file
if not path:
path = _get_paste_config_path()
if not path:
msg = _("Unable to locate paste config file for %s.") % CONF.prog
raise RuntimeError(msg)
return os.path.abspath(path)
def load_paste_app(app_name, flavor=None, conf_file=None):
"""
Builds and returns a WSGI app from a paste config file.
We assume the last config file specified in the supplied ConfigOpts
object is the paste config file, if conf_file is None.
:param app_name: name of the application to load
:param flavor: name of the variant of the application to load
:param conf_file: path to the paste config file
:raises RuntimeError when config file cannot be located or application
cannot be loaded from config file
"""
# append the deployment flavor to the application name,
# in order to identify the appropriate paste pipeline
app_name += _get_deployment_flavor(flavor)
if not conf_file:
conf_file = _get_deployment_config_file()
try:
logger = logging.getLogger(__name__)
logger.debug(_("Loading %(app_name)s from %(conf_file)s"),
{'conf_file': conf_file, 'app_name': app_name})
app = deploy.loadapp("config:%s" % conf_file, name=app_name)
# Log the options used when starting if we're in debug mode...
if CONF.debug:
CONF.log_opt_values(logger, logging.DEBUG)
return app
except (LookupError, ImportError) as e:
msg = (_("Unable to load %(app_name)s from "
"configuration file %(conf_file)s."
"\nGot: %(e)r") % {'app_name': app_name,
'conf_file': conf_file,
'e': e})
logger.error(msg)
raise RuntimeError(msg)

View File

@ -0,0 +1,362 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Glance exception subclasses"""
import six
import six.moves.urllib.parse as urlparse
_FATAL_EXCEPTION_FORMAT_ERRORS = False
class RedirectException(Exception):
def __init__(self, url):
self.url = urlparse.urlparse(url)
class GlanceException(Exception):
"""
Base Glance Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred")
def __init__(self, message=None, *args, **kwargs):
if not message:
message = self.message
try:
if kwargs:
message = message % kwargs
except Exception:
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise
else:
# at least get the core message out if something happened
pass
self.msg = message
super(GlanceException, self).__init__(message)
def __unicode__(self):
# NOTE(flwang): By default, self.msg is an instance of Message, which
# can't be converted by str(). Based on the definition of
# __unicode__, it should return unicode always.
return six.text_type(self.msg)
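# Illustrative usage (hypothetical subclass): 'message' acts as a printf
# template for the constructor's keyword arguments:
#
#   class ImageUnavailable(GlanceException):
#       message = _("Image %(image_id)s is unavailable")
#
#   raise ImageUnavailable(image_id='1234')
#   # -> "Image 1234 is unavailable"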
class MissingCredentialError(GlanceException):
message = _("Missing required credential: %(required)s")
class BadAuthStrategy(GlanceException):
message = _("Incorrect auth strategy, expected \"%(expected)s\" but "
"received \"%(received)s\"")
class NotFound(GlanceException):
message = _("An object with the specified identifier was not found.")
class UnknownScheme(GlanceException):
message = _("Unknown scheme '%(scheme)s' found in URI")
class BadStoreUri(GlanceException):
message = _("The Store URI was malformed.")
class Duplicate(GlanceException):
message = _("An object with the same identifier already exists.")
class Conflict(GlanceException):
message = _("An object with the same identifier is currently being "
"operated on.")
class StorageFull(GlanceException):
message = _("There is not enough disk space on the image storage media.")
class StorageQuotaFull(GlanceException):
message = _("The size of the data %(image_size)s will exceed the limit. "
"%(remaining)s bytes remaining.")
class StorageWriteDenied(GlanceException):
message = _("Permission to write image storage media denied.")
class AuthBadRequest(GlanceException):
message = _("Connect error/bad request to Auth service at URL %(url)s.")
class AuthUrlNotFound(GlanceException):
message = _("Auth service at URL %(url)s not found.")
class AuthorizationFailure(GlanceException):
message = _("Authorization failed.")
class NotAuthenticated(GlanceException):
message = _("You are not authenticated.")
class Forbidden(GlanceException):
message = _("You are not authorized to complete this action.")
class ForbiddenPublicImage(Forbidden):
message = _("You are not authorized to complete this action.")
class ProtectedImageDelete(Forbidden):
message = _("Image %(image_id)s is protected and cannot be deleted.")
class Invalid(GlanceException):
message = _("Data supplied was not valid.")
class InvalidSortKey(Invalid):
message = _("Sort key supplied was not valid.")
class InvalidPropertyProtectionConfiguration(Invalid):
message = _("Invalid configuration in property protection file.")
class InvalidFilterRangeValue(Invalid):
message = _("Unable to filter using the specified range.")
class ReadonlyProperty(Forbidden):
message = _("Attribute '%(property)s' is read-only.")
class ReservedProperty(Forbidden):
message = _("Attribute '%(property)s' is reserved.")
class AuthorizationRedirect(GlanceException):
message = _("Redirecting to %(uri)s for authorization.")
class ClientConnectionError(GlanceException):
message = _("There was an error connecting to a server")
class ClientConfigurationError(GlanceException):
message = _("There was an error configuring the client.")
class MultipleChoices(GlanceException):
message = _("The request returned a 302 Multiple Choices. This generally "
"means that you have not included a version indicator in a "
"request URI.\n\nThe body of response returned:\n%(body)s")
class LimitExceeded(GlanceException):
message = _("The request returned a 413 Request Entity Too Large. This "
"generally means that rate limiting or a quota threshold was "
"breached.\n\nThe response body:\n%(body)s")
def __init__(self, *args, **kwargs):
self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
else None)
super(LimitExceeded, self).__init__(*args, **kwargs)
class ServiceUnavailable(GlanceException):
message = _("The request returned 503 Service Unavilable. This "
"generally occurs on service overload or other transient "
"outage.")
def __init__(self, *args, **kwargs):
self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
else None)
super(ServiceUnavailable, self).__init__(*args, **kwargs)
class ServerError(GlanceException):
message = _("The request returned 500 Internal Server Error.")
class UnexpectedStatus(GlanceException):
message = _("The request returned an unexpected status: %(status)s."
"\n\nThe response body:\n%(body)s")
class InvalidContentType(GlanceException):
message = _("Invalid content type %(content_type)s")
class BadRegistryConnectionConfiguration(GlanceException):
message = _("Registry was not configured correctly on API server. "
"Reason: %(reason)s")
class BadStoreConfiguration(GlanceException):
message = _("Store %(store_name)s could not be configured correctly. "
"Reason: %(reason)s")
class BadDriverConfiguration(GlanceException):
message = _("Driver %(driver_name)s could not be configured correctly. "
"Reason: %(reason)s")
class StoreDeleteNotSupported(GlanceException):
message = _("Deleting images from this store is not supported.")
class StoreGetNotSupported(GlanceException):
message = _("Getting images from this store is not supported.")
class StoreAddNotSupported(GlanceException):
message = _("Adding images to this store is not supported.")
class StoreAddDisabled(GlanceException):
message = _("Configuration for store failed. Adding images to this "
"store is disabled.")
class MaxRedirectsExceeded(GlanceException):
message = _("Maximum redirects (%(redirects)s) was exceeded.")
class InvalidRedirect(GlanceException):
message = _("Received invalid HTTP redirect.")
class NoServiceEndpoint(GlanceException):
message = _("Response from Keystone does not contain a Glance endpoint.")
class RegionAmbiguity(GlanceException):
message = _("Multiple 'image' service matches for region %(region)s. This "
"generally means that a region is required and you have not "
"supplied one.")
class WorkerCreationFailure(GlanceException):
message = _("Server worker creation failed: %(reason)s.")
class SchemaLoadError(GlanceException):
message = _("Unable to load schema: %(reason)s")
class InvalidObject(GlanceException):
message = _("Provided object does not match schema "
"'%(schema)s': %(reason)s")
class UnsupportedHeaderFeature(GlanceException):
message = _("Provided header feature is unsupported: %(feature)s")
class InUseByStore(GlanceException):
message = _("The image cannot be deleted because it is in use through "
"the backend store outside of Glance.")
class ImageSizeLimitExceeded(GlanceException):
message = _("The provided image is too large.")
class ImageMemberLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"members for this image. Attempted: %(attempted)s, "
"Maximum: %(maximum)s")
class ImagePropertyLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"properties. Attempted: %(attempted)s, Maximum: %(maximum)s")
class ImageTagLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"tags. Attempted: %(attempted)s, Maximum: %(maximum)s")
class ImageLocationLimitExceeded(LimitExceeded):
message = _("The limit has been exceeded on the number of allowed image "
"locations. Attempted: %(attempted)s, Maximum: %(maximum)s")
class RPCError(GlanceException):
message = _("%(cls)s exception was raised in the last rpc call: %(val)s")
class TaskException(GlanceException):
message = _("An unknown task exception occurred")
class TaskNotFound(TaskException, NotFound):
message = _("Task with the given id %(task_id)s was not found")
class InvalidTaskStatus(TaskException, Invalid):
message = _("Provided status of task is unsupported: %(status)s")
class InvalidTaskType(TaskException, Invalid):
message = _("Provided type of task is unsupported: %(type)s")
class InvalidTaskStatusTransition(TaskException, Invalid):
message = _("Status transition from %(cur_status)s to"
" %(new_status)s is not allowed")
class DuplicateLocation(Duplicate):
message = _("The location %(location)s already exists")
class ImageDataNotFound(NotFound):
message = _("No image data could be found")
class InvalidParameterValue(Invalid):
message = _("Invalid value '%(value)s' for parameter '%(param)s': "
"%(extra_msg)s")
class InvalidImageStatusTransition(Invalid):
message = _("Image status transition from %(cur_status)s to"
" %(new_status)s is not allowed")
class FunctionNameNotFound(GlanceException):
message = _("Can not found the function name: %(func_name)s "
"in object/module: %(owner_name)s")
class SyncServiceOperationError(GlanceException):
message = _("Image sync service execute failed with reason: %(reason)s")
class SyncStoreCopyError(GlanceException):
message = _("Image sync store failed with reason: %(reason)s")

View File

@ -0,0 +1,602 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
System-level utilities and helper functions.
"""
import errno
try:
from eventlet import sleep
except ImportError:
from time import sleep
from eventlet.green import socket
import functools
import os
import platform
import re
import subprocess
import sys
import urlparse
import uuid
from OpenSSL import crypto
from oslo.config import cfg
from webob import exc
from glance.common import exception
import glance.openstack.common.log as logging
from glance.openstack.common import strutils
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
FEATURE_BLACKLIST = ['content-length', 'content-type', 'x-image-meta-size']
# Whitelist of v1 API headers of form x-image-meta-xxx
IMAGE_META_HEADERS = ['x-image-meta-location', 'x-image-meta-size',
'x-image-meta-is_public', 'x-image-meta-disk_format',
'x-image-meta-container_format', 'x-image-meta-name',
'x-image-meta-status', 'x-image-meta-copy_from',
'x-image-meta-uri', 'x-image-meta-checksum',
'x-image-meta-created_at', 'x-image-meta-updated_at',
'x-image-meta-deleted_at', 'x-image-meta-min_ram',
'x-image-meta-min_disk', 'x-image-meta-owner',
'x-image-meta-store', 'x-image-meta-id',
'x-image-meta-protected', 'x-image-meta-deleted']
GLANCE_TEST_SOCKET_FD_STR = 'GLANCE_TEST_SOCKET_FD'
def chunkreadable(iter, chunk_size=65536):
"""
Wrap a readable iterator with a reader yielding chunks of
a preferred size, otherwise leave iterator unchanged.
:param iter: an iter which may also be readable
:param chunk_size: maximum size of chunk
"""
return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter
def chunkiter(fp, chunk_size=65536):
"""
Return an iterator to a file-like obj which yields fixed size chunks
:param fp: a file-like object
:param chunk_size: maximum size of chunk
"""
while True:
chunk = fp.read(chunk_size)
if chunk:
yield chunk
else:
break
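# Illustrative example (Python 2 StringIO, assumed data): a 150000-byte
# file-like object is yielded back in fixed-size chunks:
#
#   from StringIO import StringIO
#   [len(c) for c in chunkiter(StringIO('x' * 150000))]
#   # -> [65536, 65536, 18928]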
def cooperative_iter(iter):
"""
Return an iterator which schedules after each
iteration. This can prevent eventlet thread starvation.
:param iter: an iterator to wrap
"""
try:
for chunk in iter:
sleep(0)
yield chunk
except Exception as err:
msg = _("Error: cooperative_iter exception %s") % err
LOG.error(msg)
raise
def cooperative_read(fd):
"""
Wrap a file descriptor's read with a partial function which schedules
after each read. This can prevent eventlet thread starvation.
:param fd: a file descriptor to wrap
"""
def readfn(*args):
result = fd.read(*args)
sleep(0)
return result
return readfn
class CooperativeReader(object):
"""
An eventlet thread friendly class for reading in image data.
When accessing data either through the iterator or the read method
we perform a sleep to allow a co-operative yield. When there is more than
one image being uploaded/downloaded this prevents eventlet thread
starvation, i.e. allows all threads to be scheduled periodically rather than
having the same thread be continuously active.
"""
def __init__(self, fd):
"""
:param fd: Underlying image file object
"""
self.fd = fd
self.iterator = None
# NOTE(markwash): if the underlying supports read(), overwrite the
# default iterator-based implementation with cooperative_read which
# is more straightforward
if hasattr(fd, 'read'):
self.read = cooperative_read(fd)
def read(self, length=None):
"""Return the next chunk of the underlying iterator.
This is replaced with cooperative_read in __init__ if the underlying
fd already supports read().
"""
if self.iterator is None:
self.iterator = self.__iter__()
try:
return self.iterator.next()
except StopIteration:
return ''
def __iter__(self):
return cooperative_iter(self.fd.__iter__())
class LimitingReader(object):
"""
Reader designed to fail when reading image data past the configured
allowable amount.
"""
def __init__(self, data, limit):
"""
:param data: Underlying image data object
:param limit: maximum number of bytes the reader should allow
"""
self.data = data
self.limit = limit
self.bytes_read = 0
def __iter__(self):
for chunk in self.data:
self.bytes_read += len(chunk)
if self.bytes_read > self.limit:
raise exception.ImageSizeLimitExceeded()
else:
yield chunk
def read(self, i):
result = self.data.read(i)
self.bytes_read += len(result)
if self.bytes_read > self.limit:
raise exception.ImageSizeLimitExceeded()
return result
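# Illustrative sketch (hypothetical 'data' and 'store' objects): wrapping an
# upload stream enforces the byte cap as a side effect of iteration:
#
#   reader = LimitingReader(data, CONF.image_size_cap)
#   for chunk in reader:
#       store.write(chunk)  # ImageSizeLimitExceeded raised past the cap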
def image_meta_to_http_headers(image_meta):
"""
Converts a mapping of image metadata into a dict
of HTTP headers that can be fed to either a Webob
Request object or an httplib.HTTP(S)Connection object
:param image_meta: Mapping of image metadata
"""
headers = {}
for k, v in image_meta.items():
if v is not None:
if k == 'properties':
for pk, pv in v.items():
if pv is not None:
headers["x-image-meta-property-%s"
% pk.lower()] = unicode(pv)
else:
headers["x-image-meta-%s" % k.lower()] = unicode(v)
return headers
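# Illustrative example: nested 'properties' fan out into one header apiece:
#
#   image_meta_to_http_headers({'name': 'cirros',
#                               'properties': {'kernel_id': '4471'}})
#   # -> {'x-image-meta-name': u'cirros',
#   #     'x-image-meta-property-kernel_id': u'4471'}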
def add_features_to_http_headers(features, headers):
"""
Adds additional headers representing glance features to be enabled.
:param headers: Base set of headers
:param features: Map of enabled features
"""
if features:
for k, v in features.items():
if k.lower() in FEATURE_BLACKLIST:
raise exception.UnsupportedHeaderFeature(feature=k)
if v is not None:
headers[k.lower()] = unicode(v)
def get_image_meta_from_headers(response):
"""
Processes HTTP headers from a supplied response that
match the x-image-meta and x-image-meta-property and
returns a mapping of image metadata and properties
:param response: Response to process
"""
result = {}
properties = {}
if hasattr(response, 'getheaders'): # httplib.HTTPResponse
headers = response.getheaders()
else: # webob.Response
headers = response.headers.items()
for key, value in headers:
key = str(key.lower())
if key.startswith('x-image-meta-property-'):
field_name = key[len('x-image-meta-property-'):].replace('-', '_')
properties[field_name] = value or None
elif key.startswith('x-image-meta-'):
field_name = key[len('x-image-meta-'):].replace('-', '_')
if 'x-image-meta-' + field_name not in IMAGE_META_HEADERS:
msg = _("Bad header: %(header_name)s") % {'header_name': key}
raise exc.HTTPBadRequest(msg, content_type="text/plain")
result[field_name] = value or None
result['properties'] = properties
for key in ('size', 'min_disk', 'min_ram'):
if key in result:
try:
result[key] = int(result[key])
except ValueError:
extra = (_("Cannot convert image %(key)s '%(value)s' "
"to an integer.")
% {'key': key, 'value': result[key]})
raise exception.InvalidParameterValue(value=result[key],
param=key,
extra_msg=extra)
if result[key] < 0:
extra = (_("Image %(key)s must be >= 0 "
"('%(value)s' specified).")
% {'key': key, 'value': result[key]})
raise exception.InvalidParameterValue(value=result[key],
param=key,
extra_msg=extra)
for key in ('is_public', 'deleted', 'protected'):
if key in result:
result[key] = strutils.bool_from_string(result[key])
return result
def safe_mkdirs(path):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
def safe_remove(path):
try:
os.remove(path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
class PrettyTable(object):
"""Creates an ASCII art table for use in bin/glance
Example:
ID Name Size Hits
--- ----------------- ------------ -----
122 image 22 0
"""
def __init__(self):
self.columns = []
def add_column(self, width, label="", just='l'):
"""Add a column to the table
:param width: number of characters wide the column should be
:param label: column heading
:param just: justification for the column, 'l' for left,
'r' for right
"""
self.columns.append((width, label, just))
def make_header(self):
label_parts = []
break_parts = []
for width, label, _ in self.columns:
# NOTE(sirp): headers are always left justified
label_part = self._clip_and_justify(label, width, 'l')
label_parts.append(label_part)
break_part = '-' * width
break_parts.append(break_part)
label_line = ' '.join(label_parts)
break_line = ' '.join(break_parts)
return '\n'.join([label_line, break_line])
def make_row(self, *args):
row = args
row_parts = []
for data, (width, _, just) in zip(row, self.columns):
row_part = self._clip_and_justify(data, width, just)
row_parts.append(row_part)
row_line = ' '.join(row_parts)
return row_line
@staticmethod
def _clip_and_justify(data, width, just):
# clip field to column width
clipped_data = str(data)[:width]
if just == 'r':
# right justify
justified = clipped_data.rjust(width)
else:
# left justify
justified = clipped_data.ljust(width)
return justified
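# Illustrative usage, a two-column variant of the table in the class
# docstring:
#
#   table = PrettyTable()
#   table.add_column(3, label='ID', just='r')
#   table.add_column(17, label='Name')
#   print table.make_header()
#   print table.make_row(122, 'image 22')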
def get_terminal_size():
def _get_terminal_size_posix():
import fcntl
import struct
import termios
height_width = None
try:
height_width = struct.unpack(
'hh',
fcntl.ioctl(
sys.stderr.fileno(),
termios.TIOCGWINSZ,
struct.pack(
'HH',
0,
0)))
except Exception:
pass
if not height_width:
try:
p = subprocess.Popen(['stty', 'size'],
shell=False,
stdout=subprocess.PIPE,
stderr=open(os.devnull, 'w'))
result = p.communicate()
if p.returncode == 0:
return tuple(int(x) for x in result[0].split())
except Exception:
pass
return height_width
def _get_terminal_size_win32():
try:
from ctypes import create_string_buffer
from ctypes import windll
handle = windll.kernel32.GetStdHandle(-12)
csbi = create_string_buffer(22)
res = windll.kernel32.GetConsoleScreenBufferInfo(handle, csbi)
except Exception:
return None
if res:
import struct
unpack_tmp = struct.unpack("hhhhHhhhhhh", csbi.raw)
(bufx, bufy, curx, cury, wattr,
left, top, right, bottom, maxx, maxy) = unpack_tmp
height = bottom - top + 1
width = right - left + 1
return (height, width)
else:
return None
def _get_terminal_size_unknownOS():
raise NotImplementedError
func = {'posix': _get_terminal_size_posix,
'win32': _get_terminal_size_win32}
height_width = func.get(platform.os.name, _get_terminal_size_unknownOS)()
if height_width is None:
raise exception.Invalid()
for i in height_width:
if not isinstance(i, int) or i <= 0:
raise exception.Invalid()
return height_width[0], height_width[1]
def mutating(func):
"""Decorator to enforce read-only logic"""
@functools.wraps(func)
def wrapped(self, req, *args, **kwargs):
if req.context.read_only:
msg = _("Read-only access")
LOG.debug(msg)
raise exc.HTTPForbidden(msg, request=req,
content_type="text/plain")
return func(self, req, *args, **kwargs)
return wrapped
def setup_remote_pydev_debug(host, port):
error_msg = ('Error setting up the debug environment. Verify that the'
' option pydev_worker_debug_host is pointing to a valid '
'hostname or IP on which a pydev server is listening on'
' the port indicated by pydev_worker_debug_port.')
try:
try:
from pydev import pydevd
except ImportError:
import pydevd
pydevd.settrace(host,
port=port,
stdoutToServer=True,
stderrToServer=True)
return True
except Exception:
LOG.exception(error_msg)
raise
class LazyPluggable(object):
"""A pluggable backend loaded lazily based on some value."""
def __init__(self, pivot, config_group=None, **backends):
self.__backends = backends
self.__pivot = pivot
self.__backend = None
self.__config_group = config_group
def __get_backend(self):
if not self.__backend:
if self.__config_group is None:
backend_name = CONF[self.__pivot]
else:
backend_name = CONF[self.__config_group][self.__pivot]
if backend_name not in self.__backends:
msg = _('Invalid backend: %s') % backend_name
raise exception.GlanceException(msg)
backend = self.__backends[backend_name]
if isinstance(backend, tuple):
name = backend[0]
fromlist = backend[1]
else:
name = backend
fromlist = backend
self.__backend = __import__(name, None, None, fromlist)
return self.__backend
def __getattr__(self, key):
backend = self.__get_backend()
return getattr(backend, key)
def validate_key_cert(key_file, cert_file):
try:
error_key_name = "private key"
error_filename = key_file
key_str = open(key_file, "r").read()
key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_str)
error_key_name = "certficate"
error_filename = cert_file
cert_str = open(cert_file, "r").read()
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_str)
except IOError as ioe:
raise RuntimeError(_("There is a problem with your %(error_key_name)s "
"%(error_filename)s. Please verify it."
" Error: %(ioe)s") %
{'error_key_name': error_key_name,
'error_filename': error_filename,
'ioe': ioe})
except crypto.Error as ce:
raise RuntimeError(_("There is a problem with your %(error_key_name)s "
"%(error_filename)s. Please verify it. OpenSSL"
" error: %(ce)s") %
{'error_key_name': error_key_name,
'error_filename': error_filename,
'ce': ce})
try:
data = str(uuid.uuid4())
digest = "sha1"
out = crypto.sign(key, data, digest)
crypto.verify(cert, out, data, digest)
except crypto.Error as ce:
raise RuntimeError(_("There is a problem with your key pair. "
"Please verify that cert %(cert_file)s and "
"key %(key_file)s belong together. OpenSSL "
"error %(ce)s") % {'cert_file': cert_file,
'key_file': key_file,
'ce': ce})
def get_test_suite_socket():
global GLANCE_TEST_SOCKET_FD_STR
if GLANCE_TEST_SOCKET_FD_STR in os.environ:
fd = int(os.environ[GLANCE_TEST_SOCKET_FD_STR])
sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
sock = socket.SocketType(_sock=sock)
sock.listen(CONF.backlog)
del os.environ[GLANCE_TEST_SOCKET_FD_STR]
os.close(fd)
return sock
return None
def is_uuid_like(val):
"""Returns validation of a value as a UUID.
For our purposes, a UUID is a canonical form string:
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
"""
try:
return str(uuid.UUID(val)) == val
except (TypeError, ValueError, AttributeError):
return False
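# Illustrative examples: only the canonical lower-case form round-trips:
#
#   is_uuid_like('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa')  # True
#   is_uuid_like('AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA')  # False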
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')
def is_glance_location(loc_url):
return pattern.match(loc_url)
def check_synced(image, ep_url_list):
if image.status != 'active':
return
is_synced = True
if not ep_url_list:
is_synced = False
else:
all_host_list = [urlparse.urlparse(url).netloc for url in ep_url_list]
synced_host_list = [urlparse.urlparse(loc['url']).netloc
for loc in image.locations
if is_glance_location(loc['url'])]
is_synced = set(all_host_list) == set(synced_host_list)
if not is_synced:
image.status = 'queued'
image.size = None
image.virtual_size = None
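# Illustrative walkthrough (assumed endpoints): with cascaded endpoints
#   ['http://glance-az1:9292/', 'http://glance-az2:9292/']
# an active image is considered synced only if its glance-style locations
# cover both netlocs; otherwise it is reset to 'queued' and its size fields
# are cleared so the sync service will transfer the data again.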

View File

@ -0,0 +1,125 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from glance.api import authorization
from glance.api import policy
from glance.api import property_protections
from glance.common import property_utils
import glance.db
import glance.domain
import glance.notifier
import glance.quota
import glance.store
from glance.sync.client.v1 import api as syncapi
CONF = cfg.CONF
CONF.import_opt('sync_enabled', 'glance.common.config')
class Gateway(object):
def __init__(self, db_api=None, store_api=None, notifier=None,
policy_enforcer=None, sync_api=None):
self.db_api = db_api or glance.db.get_api()
self.store_api = store_api or glance.store
self.notifier = notifier or glance.notifier.Notifier()
self.policy = policy_enforcer or policy.Enforcer()
self.sync_api = sync_api or syncapi
def get_image_factory(self, context):
image_factory = glance.domain.ImageFactory()
store_image_factory = glance.store.ImageFactoryProxy(
image_factory, context, self.store_api)
quota_image_factory = glance.quota.ImageFactoryProxy(
store_image_factory, context, self.db_api)
policy_image_factory = policy.ImageFactoryProxy(
quota_image_factory, context, self.policy)
notifier_image_factory = glance.notifier.ImageFactoryProxy(
policy_image_factory, context, self.notifier)
if property_utils.is_property_protection_enabled():
property_rules = property_utils.PropertyRules(self.policy)
protected_image_factory = property_protections.\
ProtectedImageFactoryProxy(notifier_image_factory, context,
property_rules)
authorized_image_factory = authorization.ImageFactoryProxy(
protected_image_factory, context)
else:
authorized_image_factory = authorization.ImageFactoryProxy(
notifier_image_factory, context)
if CONF.sync_enabled:
sync_image_factory = glance.sync.ImageFactoryProxy(
authorized_image_factory, context, self.sync_api)
return sync_image_factory
return authorized_image_factory
def get_image_member_factory(self, context):
image_factory = glance.domain.ImageMemberFactory()
quota_image_factory = glance.quota.ImageMemberFactoryProxy(
image_factory, context, self.db_api)
policy_member_factory = policy.ImageMemberFactoryProxy(
quota_image_factory, context, self.policy)
authorized_image_factory = authorization.ImageMemberFactoryProxy(
policy_member_factory, context)
return authorized_image_factory
def get_repo(self, context):
image_repo = glance.db.ImageRepo(context, self.db_api)
store_image_repo = glance.store.ImageRepoProxy(
image_repo, context, self.store_api)
quota_image_repo = glance.quota.ImageRepoProxy(
store_image_repo, context, self.db_api)
policy_image_repo = policy.ImageRepoProxy(
quota_image_repo, context, self.policy)
notifier_image_repo = glance.notifier.ImageRepoProxy(
policy_image_repo, context, self.notifier)
if property_utils.is_property_protection_enabled():
property_rules = property_utils.PropertyRules(self.policy)
protected_image_repo = property_protections.\
ProtectedImageRepoProxy(notifier_image_repo, context,
property_rules)
authorized_image_repo = authorization.ImageRepoProxy(
protected_image_repo, context)
else:
authorized_image_repo = authorization.ImageRepoProxy(
notifier_image_repo, context)
if CONF.sync_enabled:
sync_image_repo = glance.sync.ImageRepoProxy(
authorized_image_repo, context, self.sync_api)
return sync_image_repo
return authorized_image_repo
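# The repo built above is a chain of domain proxies around the raw DB repo,
# applied inside-out as: db -> store -> quota -> policy -> notifier ->
# [property protection] -> authorization -> [sync, when sync_enabled].
# Callers only ever touch the outermost proxy, e.g. (illustrative):
#
#   image = gateway.get_repo(req.context).get(image_id)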
def get_task_factory(self, context):
task_factory = glance.domain.TaskFactory()
policy_task_factory = policy.TaskFactoryProxy(
task_factory, context, self.policy)
notifier_task_factory = glance.notifier.TaskFactoryProxy(
policy_task_factory, context, self.notifier)
authorized_task_factory = authorization.TaskFactoryProxy(
notifier_task_factory, context)
return authorized_task_factory
def get_task_repo(self, context):
task_repo = glance.db.TaskRepo(context, self.db_api)
policy_task_repo = policy.TaskRepoProxy(
task_repo, context, self.policy)
notifier_task_repo = glance.notifier.TaskRepoProxy(
policy_task_repo, context, self.notifier)
authorized_task_repo = authorization.TaskRepoProxy(
notifier_task_repo, context)
return authorized_task_repo
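# Illustrative note (not part of the original code): each get_* method above
# wraps the raw domain object in the same onion of proxies.  For get_repo()
# the order, innermost first, is:
#
#     db -> store -> quota -> policy -> notifier
#         -> [property protection] -> authorization -> [sync]
#
# so every call on the returned repo passes through the authorization (and,
# when CONF.sync_enabled is set, sync) layers before reaching the database.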

View File

@ -0,0 +1,814 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import copy
import re
import sys
from oslo.config import cfg
import six
from glance.common import exception
from glance.common import utils
import glance.context
import glance.domain.proxy
from glance.openstack.common import importutils
import glance.openstack.common.log as logging
from glance import scrubber
from glance.store import location
LOG = logging.getLogger(__name__)
store_opts = [
cfg.ListOpt('known_stores',
default=[
'glance.store.filesystem.Store',
'glance.store.http.Store'
],
help=_('List of which store classes and store class locations '
'are currently known to glance at startup.')),
cfg.StrOpt('default_store', default='file',
help=_("Default scheme to use to store image data. The "
"scheme must be registered by one of the stores "
"defined by the 'known_stores' config option.")),
cfg.StrOpt('scrubber_datadir',
default='/var/lib/glance/scrubber',
help=_('Directory that the scrubber will use to track '
'information about what to delete. '
'Make sure this is set in glance-api.conf and '
'glance-scrubber.conf.')),
cfg.BoolOpt('delayed_delete', default=False,
help=_('Turn on/off delayed delete.')),
cfg.BoolOpt('use_user_token', default=True,
help=_('Whether to pass through the user token when '
'making requests to the registry.')),
cfg.IntOpt('scrub_time', default=0,
help=_('The amount of time in seconds to delay before '
'performing a delete.')),
]
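# For illustration only: with the defaults above, the corresponding entries
# in glance-api.conf would look like
#
#     [DEFAULT]
#     default_store = file
#     known_stores = glance.store.filesystem.Store,glance.store.http.Store
#     delayed_delete = False
#     scrub_time = 0
#     scrubber_datadir = /var/lib/glance/scrubber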
REGISTERED_STORES = set()
CONF = cfg.CONF
CONF.register_opts(store_opts)
_ALL_STORES = [
'glance.store.filesystem.Store',
'glance.store.http.Store',
'glance.store.rbd.Store',
'glance.store.s3.Store',
'glance.store.swift.Store',
'glance.store.sheepdog.Store',
'glance.store.cinder.Store',
'glance.store.gridfs.Store',
'glance.store.vmware_datastore.Store'
]
class BackendException(Exception):
pass
class UnsupportedBackend(BackendException):
pass
class Indexable(object):
"""
    Wrapper that allows an iterator or file-like object to be treated as an
    indexable data structure. This is required in the case where the return
    value from Store.get() is passed to Store.add() when adding a Copy-From
    image to a Store whose client library relies on eventlet GreenSockets,
    in which case the data to be written is indexed over.
"""
def __init__(self, wrapped, size):
"""
Initialize the object
        :param wrapped: the wrapped iterator or file-like object.
:param size: the size of data available
"""
self.wrapped = wrapped
self.size = int(size) if size else (wrapped.len
if hasattr(wrapped, 'len') else 0)
self.cursor = 0
self.chunk = None
def __iter__(self):
"""
Delegate iteration to the wrapped instance.
"""
for self.chunk in self.wrapped:
yield self.chunk
def __getitem__(self, i):
"""
Index into the next chunk (or previous chunk in the case where
the last data returned was not fully consumed).
:param i: a slice-to-the-end
"""
start = i.start if isinstance(i, slice) else i
if start < self.cursor:
return self.chunk[(start - self.cursor):]
self.chunk = self.another()
if self.chunk:
self.cursor += len(self.chunk)
return self.chunk
def another(self):
"""Implemented by subclasses to return the next element"""
raise NotImplementedError
def getvalue(self):
"""
Return entire string value... used in testing
"""
return self.wrapped.getvalue()
def __len__(self):
"""
Length accessor.
"""
return self.size
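# A minimal sketch (illustrative) of a concrete Indexable: subclasses only
# need to implement another() to fetch the next chunk.  The
# ResponseIndexable in glance.store.http follows exactly this pattern:
#
#     class ResponseIndexable(glance.store.Indexable):
#         def another(self):
#             try:
#                 return self.wrapped.next()
#             except StopIteration:
#                 return ''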
def _register_stores(store_classes):
"""
Given a set of store names, add them to a globally available set
of store names.
"""
for store_cls in store_classes:
REGISTERED_STORES.add(store_cls.__module__.split('.')[2])
# NOTE (spredzy): The actual class name is filesystem but in order
# to maintain backward compatibility we need to keep the 'file' store
# as a known store
if 'filesystem' in REGISTERED_STORES:
REGISTERED_STORES.add('file')
def _get_store_class(store_entry):
store_cls = None
try:
LOG.debug("Attempting to import store %s", store_entry)
store_cls = importutils.import_class(store_entry)
except exception.NotFound:
raise BackendException('Unable to load store. '
'Could not find a class named %s.'
% store_entry)
return store_cls
def create_stores():
"""
Registers all store modules and all schemes
from the given config. Duplicates are not re-registered.
"""
store_count = 0
store_classes = set()
for store_entry in set(CONF.known_stores + _ALL_STORES):
store_entry = store_entry.strip()
if not store_entry:
continue
store_cls = _get_store_class(store_entry)
try:
store_instance = store_cls()
except exception.BadStoreConfiguration as e:
if store_entry in CONF.known_stores:
LOG.warn(_("%s Skipping store driver.") % unicode(e))
continue
finally:
# NOTE(flaper87): To be removed in Juno
if store_entry not in CONF.known_stores:
                LOG.deprecated(_("%s not found in `known_stores`. "
                                 "Stores need to be explicitly enabled in "
                                 "the configuration file.") % store_entry)
schemes = store_instance.get_schemes()
if not schemes:
raise BackendException('Unable to register store %s. '
'No schemes associated with it.'
% store_cls)
else:
if store_cls not in store_classes:
LOG.debug("Registering store %s with schemes %s",
store_cls, schemes)
store_classes.add(store_cls)
scheme_map = {}
for scheme in schemes:
loc_cls = store_instance.get_store_location_class()
scheme_map[scheme] = {
'store_class': store_cls,
'location_class': loc_cls,
}
location.register_scheme_map(scheme_map)
store_count += 1
else:
LOG.debug("Store %s already registered", store_cls)
_register_stores(store_classes)
return store_count
def verify_default_store():
scheme = cfg.CONF.default_store
context = glance.context.RequestContext()
try:
get_store_from_scheme(context, scheme)
except exception.UnknownScheme:
msg = _("Store for scheme %s not found") % scheme
raise RuntimeError(msg)
def get_known_schemes():
"""Returns list of known schemes"""
return location.SCHEME_TO_CLS_MAP.keys()
def get_known_stores():
"""Returns list of known stores"""
return list(REGISTERED_STORES)
def get_store_from_scheme(context, scheme, loc=None):
"""
Given a scheme, return the appropriate store object
for handling that scheme.
"""
if scheme not in location.SCHEME_TO_CLS_MAP:
raise exception.UnknownScheme(scheme=scheme)
scheme_info = location.SCHEME_TO_CLS_MAP[scheme]
store = scheme_info['store_class'](context, loc)
return store
def get_store_from_uri(context, uri, loc=None):
"""
Given a URI, return the store object that would handle
operations on the URI.
:param uri: URI to analyze
"""
scheme = uri[0:uri.find('/') - 1]
store = get_store_from_scheme(context, scheme, loc)
return store
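# Example (illustrative): for the uri 'file:///var/lib/glance/images/1234'
# the slice above yields the scheme 'file', which get_store_from_scheme()
# resolves through location.SCHEME_TO_CLS_MAP.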
def get_from_backend(context, uri, **kwargs):
"""Yields chunks of data from backend specified by uri"""
loc = location.get_location_from_uri(uri)
store = get_store_from_uri(context, uri, loc)
try:
return store.get(loc)
except NotImplementedError:
raise exception.StoreGetNotSupported
def get_size_from_backend(context, uri):
"""Retrieves image size from backend specified by uri"""
if utils.is_glance_location(uri):
uri += ('?auth_token=' + context.auth_tok)
loc = location.get_location_from_uri(uri)
store = get_store_from_uri(context, uri, loc)
return store.get_size(loc)
def _check_glance_loc(context, location):
uri = location['url']
if not utils.is_glance_location(uri):
return False
if 'auth_token=' in uri:
return True
location['url'] = uri + ('?auth_token=' + context.auth_tok)
return True
def delete_from_backend(context, uri, **kwargs):
"""Removes chunks of data from backend specified by uri"""
loc = location.get_location_from_uri(uri)
store = get_store_from_uri(context, uri, loc)
try:
return store.delete(loc)
except NotImplementedError:
raise exception.StoreDeleteNotSupported
def get_store_from_location(uri):
"""
Given a location (assumed to be a URL), attempt to determine
the store from the location. We use here a simple guess that
the scheme of the parsed URL is the store...
:param uri: Location to check for the store
"""
loc = location.get_location_from_uri(uri)
return loc.store_name
def safe_delete_from_backend(context, uri, image_id, **kwargs):
"""Given a uri, delete an image from the store."""
try:
return delete_from_backend(context, uri, **kwargs)
except exception.NotFound:
msg = _('Failed to delete image %s in store from URI')
LOG.warn(msg % image_id)
except exception.StoreDeleteNotSupported as e:
LOG.warn(six.text_type(e))
except UnsupportedBackend:
exc_type = sys.exc_info()[0].__name__
msg = (_('Failed to delete image %(image_id)s from store '
'(%(error)s)') % {'image_id': image_id,
'error': exc_type})
LOG.error(msg)
def schedule_delayed_delete_from_backend(context, uri, image_id, **kwargs):
"""Given a uri, schedule the deletion of an image location."""
(file_queue, _db_queue) = scrubber.get_scrub_queues()
    # NOTE(zhiyan): By default, ask glance-api to use the file-based queue.
    # In the future we could switch to a DB-based queue instead, e.g. by
    # using the image location's status to record a pending-delete flag
    # once that property is added.
if CONF.use_user_token is False:
context = None
file_queue.add_location(image_id, uri, user_context=context)
def delete_image_from_backend(context, store_api, image_id, uri):
if CONF.delayed_delete:
store_api.schedule_delayed_delete_from_backend(context, uri, image_id)
else:
store_api.safe_delete_from_backend(context, uri, image_id)
def check_location_metadata(val, key=''):
if isinstance(val, dict):
for key in val:
check_location_metadata(val[key], key=key)
elif isinstance(val, list):
ndx = 0
for v in val:
check_location_metadata(v, key='%s[%d]' % (key, ndx))
ndx = ndx + 1
elif not isinstance(val, unicode):
raise BackendException(_("The image metadata key %(key)s has an "
"invalid type of %(val)s. Only dict, list, "
"and unicode are supported.") %
{'key': key,
'val': type(val)})
def store_add_to_backend(image_id, data, size, store):
"""
    A wrapper around a call to each store's add() method. This gives glance
    a common place to check the output
    :param image_id: The image to which the data is added
:param data: The data to be stored
:param size: The length of the data in bytes
:param store: The store to which the data is being added
    :return: The url location of the file,
             the size amount of data,
             the checksum of the data, and
             the storage system's metadata dictionary for the location
"""
(location, size, checksum, metadata) = store.add(image_id, data, size)
if metadata is not None:
if not isinstance(metadata, dict):
msg = (_("The storage driver %(store)s returned invalid metadata "
"%(metadata)s. This must be a dictionary type") %
{'store': six.text_type(store),
'metadata': six.text_type(metadata)})
LOG.error(msg)
raise BackendException(msg)
try:
check_location_metadata(metadata)
except BackendException as e:
e_msg = (_("A bad metadata structure was returned from the "
"%(store)s storage driver: %(metadata)s. %(error)s.") %
{'store': six.text_type(store),
'metadata': six.text_type(metadata),
'error': six.text_type(e)})
LOG.error(e_msg)
raise BackendException(e_msg)
return (location, size, checksum, metadata)
def add_to_backend(context, scheme, image_id, data, size):
store = get_store_from_scheme(context, scheme)
try:
return store_add_to_backend(image_id, data, size, store)
except NotImplementedError:
raise exception.StoreAddNotSupported
def set_acls(context, location_uri, public=False, read_tenants=[],
write_tenants=[]):
loc = location.get_location_from_uri(location_uri)
scheme = get_store_from_location(location_uri)
store = get_store_from_scheme(context, scheme, loc)
try:
store.set_acls(loc, public=public, read_tenants=read_tenants,
write_tenants=write_tenants)
except NotImplementedError:
LOG.debug(_("Skipping store.set_acls... not implemented."))
class ImageRepoProxy(glance.domain.proxy.Repo):
def __init__(self, image_repo, context, store_api):
self.context = context
self.store_api = store_api
proxy_kwargs = {'context': context, 'store_api': store_api}
super(ImageRepoProxy, self).__init__(image_repo,
item_proxy_class=ImageProxy,
item_proxy_kwargs=proxy_kwargs)
def _set_acls(self, image):
public = image.visibility == 'public'
member_ids = []
if image.locations and not public:
member_repo = image.get_member_repo()
member_ids = [m.member_id for m in member_repo.list()]
for location in image.locations:
self.store_api.set_acls(self.context, location['url'], public,
read_tenants=member_ids)
def add(self, image):
result = super(ImageRepoProxy, self).add(image)
self._set_acls(image)
return result
def save(self, image):
result = super(ImageRepoProxy, self).save(image)
self._set_acls(image)
return result
def _check_location_uri(context, store_api, uri):
"""
Check if an image location uri is valid.
:param context: Glance request context
:param store_api: store API module
:param uri: location's uri string
"""
is_ok = True
try:
size = store_api.get_size_from_backend(context, uri)
        # NOTE(zhiyan): Some stores return zero when they catch an exception
is_ok = size > 0
except (exception.UnknownScheme, exception.NotFound):
is_ok = False
if not is_ok:
raise exception.BadStoreUri(_('Invalid location: %s') % uri)
def _check_image_location(context, store_api, location):
if not _check_glance_loc(context, location):
_check_location_uri(context, store_api, location['url'])
store_api.check_location_metadata(location['metadata'])
def _remove_extra_info(location):
url = location['url']
if url.startswith('http'):
start = url.find('auth_token')
if start == -1:
return
end = url.find('&', start)
if end == -1:
if url[start - 1] == '?':
url = re.sub(r'\?auth_token=\S+', r'', url)
elif url[start - 1] == '&':
url = re.sub(r'&auth_token=\S+', r'', url)
else:
url = re.sub(r'auth_token=\S+&', r'', url)
location['url'] = url
def _set_image_size(context, image, locations):
if not image.size:
for location in locations:
size_from_backend = glance.store.get_size_from_backend(
context, location['url'])
if size_from_backend:
# NOTE(flwang): This assumes all locations have the same size
image.size = size_from_backend
break
class ImageFactoryProxy(glance.domain.proxy.ImageFactory):
def __init__(self, factory, context, store_api):
self.context = context
self.store_api = store_api
proxy_kwargs = {'context': context, 'store_api': store_api}
super(ImageFactoryProxy, self).__init__(factory,
proxy_class=ImageProxy,
proxy_kwargs=proxy_kwargs)
def new_image(self, **kwargs):
locations = kwargs.get('locations', [])
for l in locations:
_check_image_location(self.context, self.store_api, l)
if locations.count(l) > 1:
raise exception.DuplicateLocation(location=l['url'])
return super(ImageFactoryProxy, self).new_image(**kwargs)
class StoreLocations(collections.MutableSequence):
"""
    The proxy for the store location property. It takes responsibility for:
    1. Checking location URI correctness when adding a new location.
    2. Removing the image data from the store when a location is removed
       from an image.
"""
def __init__(self, image_proxy, value):
self.image_proxy = image_proxy
if isinstance(value, list):
self.value = value
else:
self.value = list(value)
def append(self, location):
# NOTE(flaper87): Insert this
# location at the very end of
# the value list.
self.insert(len(self.value), location)
def extend(self, other):
if isinstance(other, StoreLocations):
locations = other.value
else:
locations = list(other)
for location in locations:
self.append(location)
def insert(self, i, location):
_check_image_location(self.image_proxy.context,
self.image_proxy.store_api, location)
_remove_extra_info(location)
if location in self.value:
raise exception.DuplicateLocation(location=location['url'])
self.value.insert(i, location)
_set_image_size(self.image_proxy.context,
self.image_proxy,
[location])
def pop(self, i=-1):
location = self.value.pop(i)
try:
delete_image_from_backend(self.image_proxy.context,
self.image_proxy.store_api,
self.image_proxy.image.image_id,
location['url'])
except Exception:
self.value.insert(i, location)
raise
return location
def count(self, location):
return self.value.count(location)
def index(self, location, *args):
return self.value.index(location, *args)
def remove(self, location):
if self.count(location):
self.pop(self.index(location))
else:
self.value.remove(location)
def reverse(self):
self.value.reverse()
# Mutable sequence, so not hashable
__hash__ = None
def __getitem__(self, i):
return self.value.__getitem__(i)
def __setitem__(self, i, location):
_check_image_location(self.image_proxy.context,
self.image_proxy.store_api, location)
self.value.__setitem__(i, location)
_set_image_size(self.image_proxy.context,
self.image_proxy,
[location])
def __delitem__(self, i):
location = None
try:
location = self.value.__getitem__(i)
except Exception:
return self.value.__delitem__(i)
delete_image_from_backend(self.image_proxy.context,
self.image_proxy.store_api,
self.image_proxy.image.image_id,
location['url'])
self.value.__delitem__(i)
def __delslice__(self, i, j):
i = max(i, 0)
j = max(j, 0)
locations = []
try:
locations = self.value.__getslice__(i, j)
except Exception:
return self.value.__delslice__(i, j)
        for location in locations:
            delete_image_from_backend(self.image_proxy.context,
                                      self.image_proxy.store_api,
                                      self.image_proxy.image.image_id,
                                      location['url'])
            # Deleting at index i on each pass removes the whole slice,
            # since the remaining items shift left after every deletion.
            self.value.__delitem__(i)
def __iadd__(self, other):
self.extend(other)
return self
def __contains__(self, location):
return location in self.value
def __len__(self):
return len(self.value)
def __cast(self, other):
if isinstance(other, StoreLocations):
return other.value
else:
return other
def __cmp__(self, other):
return cmp(self.value, self.__cast(other))
def __iter__(self):
return iter(self.value)
def __copy__(self):
return type(self)(self.image_proxy, self.value)
def __deepcopy__(self, memo):
# NOTE(zhiyan): Only copy location entries, others can be reused.
value = copy.deepcopy(self.value, memo)
self.image_proxy.image.locations = value
return type(self)(self.image_proxy, value)
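# Usage sketch (illustrative): ImageProxy exposes image locations through
# this class (see _locations_proxy below), so a plain list operation such as
#
#     image.locations.append({'url': 'file:///path/to/image', 'metadata': {}})
#
# validates the URI, strips any auth_token query part and updates the image
# size, while  del image.locations[0]  also deletes the data from the store.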
def _locations_proxy(target, attr):
"""
Make a location property proxy on the image object.
:param target: the image object on which to add the proxy
:param attr: the property proxy we want to hook
"""
def get_attr(self):
value = getattr(getattr(self, target), attr)
return StoreLocations(self, value)
def set_attr(self, value):
if not isinstance(value, (list, StoreLocations)):
raise exception.BadStoreUri(_('Invalid locations: %s') % value)
ori_value = getattr(getattr(self, target), attr)
if ori_value != value:
            # NOTE(zhiyan): Enforce that the locations list was previously empty.
if len(ori_value) > 0:
raise exception.Invalid(_('Original locations is not empty: '
'%s') % ori_value)
# NOTE(zhiyan): Check locations are all valid.
for location in value:
_check_image_location(self.context, self.store_api,
location)
if value.count(location) > 1:
raise exception.DuplicateLocation(location=location['url'])
_set_image_size(self.context, getattr(self, target), value)
return setattr(getattr(self, target), attr, list(value))
def del_attr(self):
value = getattr(getattr(self, target), attr)
while len(value):
delete_image_from_backend(self.context, self.store_api,
self.image.image_id, value[0]['url'])
del value[0]
setattr(getattr(self, target), attr, value)
return delattr(getattr(self, target), attr)
return property(get_attr, set_attr, del_attr)
pattern = re.compile(r'^https?://\S+/v2/images/\S+$')
class ImageProxy(glance.domain.proxy.Image):
locations = _locations_proxy('image', 'locations')
def __init__(self, image, context, store_api):
self.image = image
self.context = context
self.store_api = store_api
proxy_kwargs = {
'context': context,
'image': self,
'store_api': store_api,
}
super(ImageProxy, self).__init__(
image, member_repo_proxy_class=ImageMemberRepoProxy,
member_repo_proxy_kwargs=proxy_kwargs)
def delete(self):
self.image.delete()
if self.image.locations:
for location in self.image.locations:
self.store_api.delete_image_from_backend(self.context,
self.store_api,
self.image.image_id,
location['url'])
def set_data(self, data, size=None):
if size is None:
size = 0 # NOTE(markwash): zero -> unknown size
location, size, checksum, loc_meta = self.store_api.add_to_backend(
self.context, CONF.default_store,
self.image.image_id, utils.CooperativeReader(data), size)
loc_meta = loc_meta or {}
loc_meta['is_default'] = 'true'
self.image.locations = [{'url': location, 'metadata': loc_meta}]
self.image.size = size
self.image.checksum = checksum
self.image.status = 'active'
def get_data(self):
if not self.image.locations:
raise exception.NotFound(_("No image data could be found"))
err = None
for loc in self.image.locations:
if pattern.match(loc['url']):
continue
try:
data, size = self.store_api.get_from_backend(self.context,
loc['url'])
return data
except Exception as e:
                LOG.warn(_('Failed to get image %(id)s data: '
                           '%(err)s.') % {'id': self.image.image_id,
                                          'err': six.text_type(e)})
err = e
# tried all locations
LOG.error(_('Glance tried all locations to get data for image %s '
'but all have failed.') % self.image.image_id)
raise err
class ImageMemberRepoProxy(glance.domain.proxy.Repo):
def __init__(self, repo, image, context, store_api):
self.repo = repo
self.image = image
self.context = context
self.store_api = store_api
super(ImageMemberRepoProxy, self).__init__(repo)
def _set_acls(self):
public = self.image.visibility == 'public'
if self.image.locations and not public:
member_ids = [m.member_id for m in self.repo.list()]
for location in self.image.locations:
self.store_api.set_acls(self.context, location['url'], public,
read_tenants=member_ids)
def add(self, member):
super(ImageMemberRepoProxy, self).add(member)
self._set_acls()
def remove(self, member):
super(ImageMemberRepoProxy, self).remove(member)
self._set_acls()

View File

@ -0,0 +1,216 @@
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import httplib
import six.moves.urllib.parse as urlparse
from glance.common import exception
from glance.openstack.common import jsonutils
import glance.openstack.common.log as logging
import glance.store.base
import glance.store.location
LOG = logging.getLogger(__name__)
MAX_REDIRECTS = 5
class StoreLocation(glance.store.location.StoreLocation):
"""Class describing an HTTP(S) URI"""
def process_specs(self):
self.scheme = self.specs.get('scheme', 'http')
self.netloc = self.specs['netloc']
self.user = self.specs.get('user')
self.password = self.specs.get('password')
self.path = self.specs.get('path')
def _get_credstring(self):
if self.user:
return '%s:%s@' % (self.user, self.password)
return ''
def get_uri(self):
return "%s://%s%s%s" % (
self.scheme,
self._get_credstring(),
self.netloc,
self.path)
def parse_uri(self, uri):
"""
Parse URLs. This method fixes an issue where credentials specified
in the URL are interpreted differently in Python 2.6.1+ than prior
versions of Python.
"""
pieces = urlparse.urlparse(uri)
assert pieces.scheme in ('https', 'http')
self.scheme = pieces.scheme
netloc = pieces.netloc
path = pieces.path
try:
if '@' in netloc:
creds, netloc = netloc.split('@')
else:
creds = None
except ValueError:
# Python 2.6.1 compat
# see lp659445 and Python issue7904
if '@' in path:
creds, path = path.split('@')
else:
creds = None
if creds:
try:
self.user, self.password = creds.split(':')
except ValueError:
reason = (_("Credentials '%s' not well-formatted.")
% "".join(creds))
LOG.debug(reason)
raise exception.BadStoreUri()
else:
self.user = None
if netloc == '':
reason = _("No address specified in HTTP URL")
LOG.debug(reason)
raise exception.BadStoreUri(message=reason)
self.netloc = netloc
self.path = path
self.token = None
if pieces.query:
params = pieces.query.split('&')
for param in params:
if 'auth_token' == param.split("=")[0].strip():
self.token = param.split("=")[1]
break
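# Example (illustrative): parsing the uri
#     'https://user:pass@example.com/v2/images/1234?auth_token=abc'
# yields scheme 'https', user 'user', password 'pass',
# netloc 'example.com', path '/v2/images/1234' and token 'abc'.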
def http_response_iterator(conn, response, size):
"""
Return an iterator for a file-like object.
:param conn: HTTP(S) Connection
:param response: httplib.HTTPResponse object
:param size: Chunk size to iterate with
"""
chunk = response.read(size)
while chunk:
yield chunk
chunk = response.read(size)
conn.close()
class Store(glance.store.base.Store):
"""An implementation of the HTTP(S) Backend Adapter"""
def get(self, location):
"""
Takes a `glance.store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location `glance.store.location.Location` object, supplied
from glance.store.location.get_location_from_uri()
"""
conn, resp, content_length = self._query(location, 'GET')
iterator = http_response_iterator(conn, resp, self.CHUNKSIZE)
class ResponseIndexable(glance.store.Indexable):
def another(self):
try:
return self.wrapped.next()
except StopIteration:
return ''
return (ResponseIndexable(iterator, content_length), content_length)
def get_schemes(self):
return ('http', 'https')
def get_size(self, location):
"""
Takes a `glance.store.location.Location` object that indicates
where to find the image file, and returns the size
:param location `glance.store.location.Location` object, supplied
from glance.store.location.get_location_from_uri()
"""
try:
return self._query(location, 'HEAD')[2]
except Exception:
return 0
def _query(self, location, verb, depth=0):
if depth > MAX_REDIRECTS:
reason = (_("The HTTP URL exceeded %s maximum "
"redirects.") % MAX_REDIRECTS)
LOG.debug(reason)
raise exception.MaxRedirectsExceeded(redirects=MAX_REDIRECTS)
loc = location.store_location
conn_class = self._get_conn_class(loc)
conn = conn_class(loc.netloc)
        headers = {}
        if loc.token:
            headers.setdefault('x-auth-token', loc.token)
            verb = 'GET'
            conn.request(verb, loc.path, "", headers)
            resp = conn.getresponse()
            try:
                size = jsonutils.loads(resp.read())['size']
            except Exception:
                reason = (_("Could not retrieve image size from "
                            "location %s.") % loc.path)
                LOG.debug(reason)
                raise exception.BadStoreUri(loc.path, reason)
            return (conn, resp, size)
        conn.request(verb, loc.path, "", headers)
resp = conn.getresponse()
# Check for bad status codes
if resp.status >= 400:
reason = _("HTTP URL returned a %s status code.") % resp.status
LOG.debug(reason)
raise exception.BadStoreUri(loc.path, reason)
location_header = resp.getheader("location")
if location_header:
if resp.status not in (301, 302):
reason = (_("The HTTP URL attempted to redirect with an "
"invalid %s status code.") % resp.status)
LOG.debug(reason)
raise exception.BadStoreUri(loc.path, reason)
location_class = glance.store.location.Location
new_loc = location_class(location.store_name,
location.store_location.__class__,
uri=location_header,
image_id=location.image_id,
store_specs=location.store_specs)
return self._query(new_loc, verb, depth + 1)
content_length = int(resp.getheader('content-length', 0))
return (conn, resp, content_length)
def _get_conn_class(self, loc):
"""
Returns connection class for accessing the resource. Useful
for dependency injection and stubouts in testing...
"""
return {'http': httplib.HTTPConnection,
'https': httplib.HTTPSConnection}[loc.scheme]

View File

@ -0,0 +1,111 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
CURPATH=$(cd "$(dirname "$0")"; pwd)
_GLANCE_CONF_DIR="/etc/glance"
_GLANCE_API_CONF_FILE="glance-api.conf"
_PYTHON_INSTALL_DIR="/usr/lib64/python2.6/site-packages"
_GLANCE_DIR="${_PYTHON_INSTALL_DIR}/glance"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="${CURPATH}/../glance"
_CONF_DIR="${CURPATH}/../etc"
_PATCH_DIR="${CURPATH}/.."
_BACKUP_DIR="${_GLANCE_DIR}/glance-installation-backup"
_SCRIPT_LOGFILE="/var/log/glance/installation/install.log"
api_config_option_list="sync_enabled=True sync_server_port=9595 sync_server_host=127.0.0.1"
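# For reference (illustrative): the option list above is meant to map to
# these glance-api.conf entries once applied:
#   sync_enabled = True
#   sync_server_port = 9595
#   sync_server_host = 127.0.0.1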
export PS4='+{$LINENO:${FUNCNAME[0]}}'
ERRTRAP()
{
echo "[LINE:$1] Error: Command or function exited with status $?"
}
function log()
{
echo "$@"
echo "`date -u +'%Y-%m-%d %T.%N'`: $@" >> $_SCRIPT_LOGFILE
}
trap 'ERRTRAP $LINENO' ERR
if [[ ${EUID} -ne 0 ]]; then
log "Please run as root."
exit 1
fi
if [ ! -d "/var/log/glance/installation" ]; then
mkdir /var/log/glance/installation
touch _SCRIPT_LOGFILE
fi
cd `dirname $0`
log "checking installation directories..."
if [ ! -d "${_GLANCE_DIR}" ] ; then
log "Could not find the glance installation. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
if [ ! -f "${_GLANCE_CONF_DIR}/${_GLANCE_API_CONF_FILE}" ] ; then
log "Could not find glance-api config file. Please check the variables in the beginning of the script."
log "aborted."
exit 1
fi
log "checking previous installation..."
if [ -d "${_BACKUP_DIR}/glance" ] ; then
log "It seems glance cascading has already been installed!"
log "Please check README for solution if this is not true."
exit 1
fi
log "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/glance"
mkdir -p "${_BACKUP_DIR}/etc"
mkdir -p "${_BACKUP_DIR}/etc/glance"
cp -rf "${_GLANCE_CONF_DIR}/${_GLANCE_API_CONF_FILE}" "${_BACKUP_DIR}/etc/glance/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/glance"
rm -r "${_BACKUP_DIR}/etc"
log "Error in config backup, aborted."
exit 1
fi
log "copying in new files..."
cp -r "${_PATCH_DIR}/glance" `dirname ${_GLANCE_DIR}`
glanceEggDir=`ls ${_PYTHON_INSTALL_DIR} |grep -e glance- |grep -e egg-info `
if [ ! -d ${_PYTHON_INSTALL_DIR}/${glanceEggDir} ]; then
log "glance install dir not exist. Pleas check manually."
exit 1
fi
cp "${_PATCH_DIR}/glance.egg-info/entry_points.txt" "${_PYTHON_INSTALL_DIR}/${glanceEggDir}/"
if [ $? -ne 0 ] ; then
log "Error in copying, aborted. Please install manually."
exit 1
fi
log "Completed."
log "See README to get started."
exit 0

View File

@ -0,0 +1,165 @@
OpenStack Neutron DVR patch
===============================
To solve the scalability problem in OpenStack Neutron deployments and to distribute the Network Node load to other Compute Nodes, a solution named DVR (Distributed Virtual Router) was proposed. DVR addresses both problems while fitting into the existing model.
The DVR feature code has been merged into the neutron master branch, and the DVR feature is expected in the Neutron Juno release. This patch was downloaded from the DVR branch on 1st June.
Key modules
-----------
* L2 Agent Doc
https://docs.google.com/document/d/1depasJSnGZPOnRLxEC_PYsVLcGVFXZLqP52RFTe21BE/edit#heading=h.5w7clq272tji
* L3 Agent Doc
https://docs.google.com/document/d/1jCmraZGirmXq5V1MtRqhjdZCbUfiwBhRkUjDXGt5QUQ/edit
Addressed by: https://review.openstack.org/84223
* Add L3 Extension for Distributed Routers
Addressed by: https://review.openstack.org/87730
* L2 Agent/ML2 Plugin changes for L3 DVR
Addressed by: https://review.openstack.org/88442
* Add 'ip neigh' to ip_lib
Addressed by: https://review.openstack.org/89413
* Modify L3 Agent for Distributed Routers
Addressed by: https://review.openstack.org/89694
* Add L3 Scheduler Changes for Distributed Routers
Addressed by: https://review.openstack.org/93233
* Add 'ip rule add from' to ip_lib
Addressed by: https://review.openstack.org/96389
* Addressed merge conflict
Addressed by: https://review.openstack.org/97028
* Refactor some router-related methods
Addressed by: https://review.openstack.org/97275
* Allow L3 base to handle extensions on router creation
Addressed by: https://review.openstack.org/102101
* L2 Model additions to support DVR
Addressed by: https://review.openstack.org/102332
* RPC additions to support DVR
Addressed by: https://review.openstack.org/102398
* ML2 additions to support DVR
Requirements
------------
* openstack-neutron-server-2014.1-1.1 has been installed
* oslo.db-0.2.0 has been installed
* sqlalchemy-migrate-0.9.1 has been installed
Installation
------------
We provide two ways to install the DVR patch code. In this section, we will guide you through installing the neutron DVR code with the minimum configuration.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
  - We recommend that you back up at least the following files before installation, because they will be overwritten or modified:
$NEUTRON_CONFIG_PARENT_DIR/neutron.conf
(replace the $... with actual directory names.)
* **Manual Installation**
  - Navigate to the local repository and copy the contents of the 'neutron' sub-directory to the corresponding places in the existing neutron installation, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/neutron $NEUTRON_PARENT_DIR```
(replace the $... with actual directory name.)
  - Navigate to the local repository and copy the contents of the 'etc' sub-directory to the corresponding places in the existing neutron configuration, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/etc $NEUTRON_CONFIG_DIR```
(replace the $... with actual directory name.)
- Update the neutron configuration file (e.g. /etc/neutron/l3_agent.ini, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini) with the minimum option below. If the option already exists, modify its value, otherwise add it to the config file. Check the "Configurations" section below for a full configuration guide.
    1) update l3 agent configurations (/etc/neutron/l3_agent.ini)
```
[DEFAULT]
...
distributed_agent=True
```
    2) update openvswitch agent configurations (/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini)
```
[AGENT]
...
enable_distributed_routing = True
```
- Remove the neutron DB
- Create the neutron DB
```neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head```
- Restart the neutron-server/openvswitch-agent/l3-agent.
```service openstack-neutron restart```
```service openstack-neutron-openvswitch-agent restart```
```service openstack-neutron-l3-agent restart```
- Done.
* **Automatic Installation**
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
  - Done. The installation script sets up the DVR code and applies the minimum configuration changes for you. Check the "Configurations" section for a full configuration guide.
    1) update l3 agent configurations (/etc/neutron/l3_agent.ini)
```
[DEFAULT]
...
distributed_agent=True
```
    2) update openvswitch agent configurations (/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini)
```
[AGENT]
...
enable_distributed_routing = True
```
* **Troubleshooting**
In case the automatic installation process does not complete, please check the following:
- Make sure your OpenStack version is Icehouse.
- Check the variables in the beginning of the install.sh scripts. Your installation directories may be different from the default values we provide.
  - The installation script automatically copies the related code to $NEUTRON_PARENT_DIR/neutron; if the configuration update step did not complete, update the related configuration manually as described above.
- In case the automatic installation does not work, try to install manually.
Configurations
--------------
* This is a (default) configuration sample for the DVR-enabled l2/l3 agents. Please add/modify these options in /etc/neutron/l3_agent.ini and /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.
* Note:
  - Please carefully make sure that options in the configuration file are not duplicated. If an option name already exists, modify its value instead of adding a new one of the same name.
  - Please refer to the configuration samples below for the proper usage of each option.
1) add or update l3 agent configurations (/etc/neutron/l3_agent.ini)
```
[DEFAULT]
...
#Enables distributed router agent function
distributed_agent=True
```
2) add or update openvswitch agent configurations (/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini)
```
[AGENT]
...
#Make the l2 agent run in dvr mode
enable_distributed_routing = True
```
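* After restarting the agents you can verify the result, e.g. (assuming the neutron CLI is available on the node):
```
neutron agent-list
ip netns | grep qrouter
```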

View File

@ -0,0 +1,148 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copyright (c) 2014 Huawei Technologies.
_MYSQL_PASS="Galax8800"
_NEUTRON_CONF_DIR="/etc/neutron"
_NEUTRON_CONF_FILE='neutron.conf'
_NEUTRON_INSTALL="/usr/lib64/python2.6/site-packages"
_NEUTRON_DIR="${_NEUTRON_INSTALL}/neutron"
_NEUTRON_L2_CONFIG_FILE='/plugins/openvswitch/ovs_neutron_plugin.ini'
_NEUTRON_L3_CONFIG_FILE='l3_agent.ini'
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../neutron/"
_BACKUP_DIR="${_NEUTRON_INSTALL}/.neutron-dvr-code-installation-backup"
l2_config_option_list="\[SECURITYGROUP\]:firewall_driver=neutron.agent.firewall.NoopFirewallDriver \[AGENT\]:enable_distributed_routing=True"
l3_config_option_list="\[DEFAULT\]:distributed_agent=True"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/neutron-dvr-code/installation/${_SCRIPT_NAME}.log"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NEUTRON_DIR}" ] ; then
echo "Could not find the neutron installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NEUTRON_CONF_DIR}/${_NEUTRON_CONF_FILE}" ] ; then
echo "Could not find neutron config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/neutron" ] ; then
echo "It seems neutron-dvr-code-cascaded has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current code files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}"
cp -r "${_NEUTRON_DIR}/" "${_BACKUP_DIR}/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/neutron"
echo "Error in code backup code files, aborted."
exit 1
fi
echo "backing up current config code files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/etc"
cp -r "${_NEUTRON_CONF_DIR}/" "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/etc"
echo "Error in code backup config files, aborted."
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NEUTRON_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/neutron" `dirname ${_NEUTRON_DIR}` && rm -r "${_BACKUP_DIR}/neutron"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
if [ -d "${_NEUTRON_DIR}/openstack/common/db/rpc" ] ; then
rm -r "${_NEUTRON_DIR}/openstack/common/db/rpc"
fi
echo "updating l2 config file..."
for option in $l2_config_option_list
do
option_branch=`echo $option|awk -F ":" '{print $1}'`
option_config=`echo $option|awk -F ":" '{print $2}'`
option_key=`echo $option_config|awk -F "=" '{print $1}'`
option_value=`echo $option_config|awk -F "=" '{print $2}'`
sed -i.backup -e "/$option_key *=/d" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_CONFIG_FILE}"
echo "$option_key,***************$option_value"
sed -i "/$option_branch/a\\$option_key=$option_value" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L2_CONFIG_FILE}"
done
echo "updating l3 config file..."
for option in $l3_config_option_list
do
option_branch=`echo $option|awk -F ":" '{print $1}'`
option_config=`echo $option|awk -F ":" '{print $2}'`
option_key=`echo $option_config|awk -F "=" '{print $1}'`
option_value=`echo $option_config|awk -F "=" '{print $2}'`
sed -i.backup -e "/$option_key *=/d" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONFIG_FILE}"
echo "$option_key,***************$option_value"
sed -i "/$option_branch/a\\$option_key=$option_value" "${_NEUTRON_CONF_DIR}/${_NEUTRON_L3_CONFIG_FILE}"
done
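# For illustration: each entry in the option lists above has the form
# "\[SECTION\]:key=value".  The first sed deletes any existing "key =" line
# (keeping a .backup copy of the file) and the second re-inserts
# "key=value" on the line directly after the matching [SECTION] header.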
echo "create neutron db..."
exec_sql_str="DROP DATABASE if exists neutron;CREATE DATABASE neutron;GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY \"$_MYSQL_PASS\";GRANT ALL PRIVILEGES ON *.* TO 'neutron'@'%'IDENTIFIED BY \"$_MYSQL_PASS\";"
mysql -u root -p$_MYSQL_PASS -e "$exec_sql_str"
echo "syc neutron db..."
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
if [ $? -ne 0 ] ; then
log "There was an error in sync neutron db, please sync neutron db manually."
exit 1
fi
#echo "restarting neutron server..."
#service openstack-neutron stop
#if [ $? -ne 0 ] ; then
# echo "There was an error in restarting the service, please restart neutron server manually."
# exit 1
#fi
echo "Completed."
echo "See README to get started."
exit 0

View File

@ -0,0 +1,19 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import gettext
gettext.install('neutron', unicode=1)

View File

@ -0,0 +1,14 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@ -0,0 +1,14 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@ -0,0 +1,121 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo.config import cfg
from neutron.common import config
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
ROOT_HELPER_OPTS = [
cfg.StrOpt('root_helper', default='sudo',
help=_('Root helper application.')),
]
AGENT_STATE_OPTS = [
cfg.FloatOpt('report_interval', default=30,
help=_('Seconds between nodes reporting state to server; '
'should be less than agent_down_time, best if it '
'is half or less than agent_down_time.')),
]
INTERFACE_DRIVER_OPTS = [
cfg.StrOpt('interface_driver',
help=_("The driver used to manage the virtual interface.")),
]
USE_NAMESPACES_OPTS = [
cfg.BoolOpt('use_namespaces', default=True,
help=_("Allow overlapping IP.")),
]
def get_log_args(conf, log_file_name):
cmd_args = []
if conf.debug:
cmd_args.append('--debug')
if conf.verbose:
cmd_args.append('--verbose')
if (conf.log_dir or conf.log_file):
cmd_args.append('--log-file=%s' % log_file_name)
log_dir = None
if conf.log_dir and conf.log_file:
log_dir = os.path.dirname(
os.path.join(conf.log_dir, conf.log_file))
elif conf.log_dir:
log_dir = conf.log_dir
elif conf.log_file:
log_dir = os.path.dirname(conf.log_file)
if log_dir:
cmd_args.append('--log-dir=%s' % log_dir)
else:
if conf.use_syslog:
cmd_args.append('--use-syslog')
if conf.syslog_log_facility:
cmd_args.append(
'--syslog-log-facility=%s' % conf.syslog_log_facility)
return cmd_args
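# Example (illustrative): for a conf with debug=True, log_file='dhcp.log'
# and log_dir='/var/log/neutron', get_log_args(conf, 'dhcp-agent.log')
# returns ['--debug', '--log-file=dhcp-agent.log',
#          '--log-dir=/var/log/neutron'].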
def register_root_helper(conf):
# The first call is to ensure backward compatibility
conf.register_opts(ROOT_HELPER_OPTS)
conf.register_opts(ROOT_HELPER_OPTS, 'AGENT')
def register_agent_state_opts_helper(conf):
conf.register_opts(AGENT_STATE_OPTS, 'AGENT')
def register_interface_driver_opts_helper(conf):
conf.register_opts(INTERFACE_DRIVER_OPTS)
def register_use_namespaces_opts_helper(conf):
conf.register_opts(USE_NAMESPACES_OPTS)
def get_root_helper(conf):
root_helper = conf.AGENT.root_helper
if root_helper != 'sudo':
return root_helper
root_helper = conf.root_helper
if root_helper != 'sudo':
LOG.deprecated(_('DEFAULT.root_helper is deprecated! Please move '
'root_helper configuration to [AGENT] section.'))
return root_helper
return 'sudo'
def setup_conf():
bind_opts = [
cfg.StrOpt('state_path',
default='/var/lib/neutron',
help=_('Top-level directory for maintaining dhcp state')),
]
conf = cfg.ConfigOpts()
conf.register_opts(bind_opts)
return conf
# add a logging setup method here for convenience
setup_logging = config.setup_logging

View File

@ -0,0 +1,620 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import sys
import eventlet
eventlet.monkey_patch()
import netaddr
from oslo.config import cfg
from neutron.agent.common import config
from neutron.agent.linux import dhcp
from neutron.agent.linux import external_process
from neutron.agent.linux import interface
from neutron.agent.linux import ovs_lib # noqa
from neutron.agent import rpc as agent_rpc
from neutron.common import config as common_config
from neutron.common import constants
from neutron.common import exceptions
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.common import utils
from neutron import context
from neutron import manager
from neutron.openstack.common import importutils
from neutron.openstack.common import log as logging
from neutron.openstack.common import loopingcall
from neutron.openstack.common import service
from neutron import service as neutron_service
LOG = logging.getLogger(__name__)
class DhcpAgent(manager.Manager):
OPTS = [
cfg.IntOpt('resync_interval', default=5,
help=_("Interval to resync.")),
cfg.StrOpt('dhcp_driver',
default='neutron.agent.linux.dhcp.Dnsmasq',
help=_("The driver used to manage the DHCP server.")),
cfg.BoolOpt('enable_isolated_metadata', default=False,
help=_("Support Metadata requests on isolated networks.")),
cfg.BoolOpt('enable_metadata_network', default=False,
help=_("Allows for serving metadata requests from a "
"dedicated network. Requires "
"enable_isolated_metadata = True")),
cfg.IntOpt('num_sync_threads', default=4,
help=_('Number of threads to use during sync process.')),
cfg.StrOpt('metadata_proxy_socket',
default='$state_path/metadata_proxy',
help=_('Location of Metadata Proxy UNIX domain '
'socket')),
]
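    # For illustration: the defaults above correspond to these
    # dhcp_agent.ini entries:
    #   resync_interval = 5
    #   dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    #   enable_isolated_metadata = False
    #   enable_metadata_network = False
    #   num_sync_threads = 4
    #   metadata_proxy_socket = $state_path/metadata_proxy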
def __init__(self, host=None):
super(DhcpAgent, self).__init__(host=host)
self.needs_resync_reasons = []
self.conf = cfg.CONF
self.cache = NetworkCache()
self.root_helper = config.get_root_helper(self.conf)
self.dhcp_driver_cls = importutils.import_class(self.conf.dhcp_driver)
ctx = context.get_admin_context_without_session()
self.plugin_rpc = DhcpPluginApi(topics.PLUGIN,
ctx, self.conf.use_namespaces)
# create dhcp dir to store dhcp info
dhcp_dir = os.path.dirname("/%s/dhcp/" % self.conf.state_path)
if not os.path.isdir(dhcp_dir):
os.makedirs(dhcp_dir, 0o755)
self.dhcp_version = self.dhcp_driver_cls.check_version()
self._populate_networks_cache()
def _populate_networks_cache(self):
"""Populate the networks cache when the DHCP-agent starts."""
try:
existing_networks = self.dhcp_driver_cls.existing_dhcp_networks(
self.conf,
self.root_helper
)
for net_id in existing_networks:
net = dhcp.NetModel(self.conf.use_namespaces,
{"id": net_id,
"subnets": [],
"ports": []})
self.cache.put(net)
except NotImplementedError:
# just go ahead with an empty networks cache
LOG.debug(
_("The '%s' DHCP-driver does not support retrieving of a "
"list of existing networks"),
self.conf.dhcp_driver
)
def after_start(self):
self.run()
LOG.info(_("DHCP agent started"))
def run(self):
"""Activate the DHCP agent."""
self.sync_state()
self.periodic_resync()
def call_driver(self, action, network, **action_kwargs):
"""Invoke an action on a DHCP driver instance."""
LOG.debug(_('Calling driver for network: %(net)s action: %(action)s'),
{'net': network.id, 'action': action})
try:
# the Driver expects something that is duck typed similar to
# the base models.
driver = self.dhcp_driver_cls(self.conf,
network,
self.root_helper,
self.dhcp_version,
self.plugin_rpc)
getattr(driver, action)(**action_kwargs)
return True
except exceptions.Conflict:
# No need to resync here, the agent will receive the event related
# to a status update for the network
LOG.warning(_('Unable to %(action)s dhcp for %(net_id)s: there is '
'a conflict with its current state; please check '
'that the network and/or its subnet(s) still exist.')
% {'net_id': network.id, 'action': action})
except Exception as e:
self.schedule_resync(e)
if (isinstance(e, n_rpc.RemoteError)
and e.exc_type == 'NetworkNotFound'
or isinstance(e, exceptions.NetworkNotFound)):
LOG.warning(_("Network %s has been deleted."), network.id)
else:
LOG.exception(_('Unable to %(action)s dhcp for %(net_id)s.')
% {'net_id': network.id, 'action': action})
def schedule_resync(self, reason):
"""Schedule a resync for a given reason."""
self.needs_resync_reasons.append(reason)
@utils.synchronized('dhcp-agent')
def sync_state(self):
"""Sync the local DHCP state with Neutron."""
LOG.info(_('Synchronizing state'))
pool = eventlet.GreenPool(cfg.CONF.num_sync_threads)
known_network_ids = set(self.cache.get_network_ids())
try:
active_networks = self.plugin_rpc.get_active_networks_info()
active_network_ids = set(network.id for network in active_networks)
for deleted_id in known_network_ids - active_network_ids:
try:
self.disable_dhcp_helper(deleted_id)
except Exception as e:
self.schedule_resync(e)
LOG.exception(_('Unable to sync network state on deleted '
'network %s'), deleted_id)
for network in active_networks:
pool.spawn(self.safe_configure_dhcp_for_network, network)
pool.waitall()
LOG.info(_('Synchronizing state complete'))
except Exception as e:
self.schedule_resync(e)
LOG.exception(_('Unable to sync network state.'))
def _periodic_resync_helper(self):
"""Resync the dhcp state at the configured interval."""
while True:
eventlet.sleep(self.conf.resync_interval)
if self.needs_resync_reasons:
# be careful to avoid a race with additions to list
# from other threads
reasons = self.needs_resync_reasons
self.needs_resync_reasons = []
for r in reasons:
LOG.debug(_("resync: %(reason)s"),
{"reason": r})
self.sync_state()
def periodic_resync(self):
"""Spawn a thread to periodically resync the dhcp state."""
eventlet.spawn(self._periodic_resync_helper)
def safe_get_network_info(self, network_id):
try:
network = self.plugin_rpc.get_network_info(network_id)
if not network:
LOG.warn(_('Network %s has been deleted.'), network_id)
return network
except Exception as e:
self.schedule_resync(e)
LOG.exception(_('Network %s info call failed.'), network_id)
def enable_dhcp_helper(self, network_id):
"""Enable DHCP for a network that meets enabling criteria."""
network = self.safe_get_network_info(network_id)
if network:
self.configure_dhcp_for_network(network)
def safe_configure_dhcp_for_network(self, network):
try:
self.configure_dhcp_for_network(network)
except (exceptions.NetworkNotFound, RuntimeError):
LOG.warn(_('Network %s may have been deleted and its resources '
'may have already been disposed.'), network.id)
def configure_dhcp_for_network(self, network):
if not network.admin_state_up:
return
for subnet in network.subnets:
if subnet.enable_dhcp:
if self.call_driver('enable', network):
if (self.conf.use_namespaces and
self.conf.enable_isolated_metadata):
self.enable_isolated_metadata_proxy(network)
self.cache.put(network)
break
def disable_dhcp_helper(self, network_id):
"""Disable DHCP for a network known to the agent."""
network = self.cache.get_network_by_id(network_id)
if network:
if (self.conf.use_namespaces and
self.conf.enable_isolated_metadata):
self.disable_isolated_metadata_proxy(network)
if self.call_driver('disable', network):
self.cache.remove(network)
def refresh_dhcp_helper(self, network_id):
"""Refresh or disable DHCP for a network depending on the current state
of the network.
"""
old_network = self.cache.get_network_by_id(network_id)
if not old_network:
            # DHCP is not currently running for this network.
return self.enable_dhcp_helper(network_id)
network = self.safe_get_network_info(network_id)
if not network:
return
old_cidrs = set(s.cidr for s in old_network.subnets if s.enable_dhcp)
new_cidrs = set(s.cidr for s in network.subnets if s.enable_dhcp)
if new_cidrs and old_cidrs == new_cidrs:
self.call_driver('reload_allocations', network)
self.cache.put(network)
elif new_cidrs:
if self.call_driver('restart', network):
self.cache.put(network)
else:
self.disable_dhcp_helper(network.id)
@utils.synchronized('dhcp-agent')
def network_create_end(self, context, payload):
"""Handle the network.create.end notification event."""
network_id = payload['network']['id']
self.enable_dhcp_helper(network_id)
@utils.synchronized('dhcp-agent')
def network_update_end(self, context, payload):
"""Handle the network.update.end notification event."""
network_id = payload['network']['id']
if payload['network']['admin_state_up']:
self.enable_dhcp_helper(network_id)
else:
self.disable_dhcp_helper(network_id)
@utils.synchronized('dhcp-agent')
def network_delete_end(self, context, payload):
"""Handle the network.delete.end notification event."""
self.disable_dhcp_helper(payload['network_id'])
@utils.synchronized('dhcp-agent')
def subnet_update_end(self, context, payload):
"""Handle the subnet.update.end notification event."""
network_id = payload['subnet']['network_id']
self.refresh_dhcp_helper(network_id)
# Use the update handler for the subnet create event.
subnet_create_end = subnet_update_end
@utils.synchronized('dhcp-agent')
def subnet_delete_end(self, context, payload):
"""Handle the subnet.delete.end notification event."""
subnet_id = payload['subnet_id']
network = self.cache.get_network_by_subnet_id(subnet_id)
if network:
self.refresh_dhcp_helper(network.id)
@utils.synchronized('dhcp-agent')
def port_update_end(self, context, payload):
"""Handle the port.update.end notification event."""
updated_port = dhcp.DictModel(payload['port'])
network = self.cache.get_network_by_id(updated_port.network_id)
if network:
self.cache.put_port(updated_port)
self.call_driver('reload_allocations', network)
# Use the update handler for the port create event.
port_create_end = port_update_end
@utils.synchronized('dhcp-agent')
def port_delete_end(self, context, payload):
"""Handle the port.delete.end notification event."""
port = self.cache.get_port_by_id(payload['port_id'])
if port:
network = self.cache.get_network_by_id(port.network_id)
self.cache.remove_port(port)
self.call_driver('reload_allocations', network)
def enable_isolated_metadata_proxy(self, network):
# The proxy might work for either a single network
# or all the networks connected via a router
# to the one passed as a parameter
neutron_lookup_param = '--network_id=%s' % network.id
meta_cidr = netaddr.IPNetwork(dhcp.METADATA_DEFAULT_CIDR)
has_metadata_subnet = any(netaddr.IPNetwork(s.cidr) in meta_cidr
for s in network.subnets)
if (self.conf.enable_metadata_network and has_metadata_subnet):
router_ports = [port for port in network.ports
if (port.device_owner ==
constants.DEVICE_OWNER_ROUTER_INTF)]
if router_ports:
# Multiple router ports should not be allowed
if len(router_ports) > 1:
LOG.warning(_("%(port_num)d router ports found on the "
"metadata access network. Only the port "
"%(port_id)s, for router %(router_id)s "
"will be considered"),
{'port_num': len(router_ports),
'port_id': router_ports[0].id,
'router_id': router_ports[0].device_id})
neutron_lookup_param = ('--router_id=%s' %
router_ports[0].device_id)
def callback(pid_file):
metadata_proxy_socket = cfg.CONF.metadata_proxy_socket
proxy_cmd = ['neutron-ns-metadata-proxy',
'--pid_file=%s' % pid_file,
'--metadata_proxy_socket=%s' % metadata_proxy_socket,
neutron_lookup_param,
'--state_path=%s' % self.conf.state_path,
'--metadata_port=%d' % dhcp.METADATA_PORT]
proxy_cmd.extend(config.get_log_args(
cfg.CONF, 'neutron-ns-metadata-proxy-%s.log' % network.id))
return proxy_cmd
pm = external_process.ProcessManager(
self.conf,
network.id,
self.root_helper,
network.namespace)
pm.enable(callback)
def disable_isolated_metadata_proxy(self, network):
pm = external_process.ProcessManager(
self.conf,
network.id,
self.root_helper,
network.namespace)
pm.disable()
class DhcpPluginApi(n_rpc.RpcProxy):
"""Agent side of the dhcp rpc API.
API version history:
1.0 - Initial version.
1.1 - Added get_active_networks_info, create_dhcp_port,
and update_dhcp_port methods.
"""
BASE_RPC_API_VERSION = '1.1'
def __init__(self, topic, context, use_namespaces):
super(DhcpPluginApi, self).__init__(
topic=topic, default_version=self.BASE_RPC_API_VERSION)
self.context = context
self.host = cfg.CONF.host
self.use_namespaces = use_namespaces
def get_active_networks_info(self):
"""Make a remote process call to retrieve all network info."""
networks = self.call(self.context,
self.make_msg('get_active_networks_info',
host=self.host),
topic=self.topic)
return [dhcp.NetModel(self.use_namespaces, n) for n in networks]
def get_network_info(self, network_id):
"""Make a remote process call to retrieve network info."""
network = self.call(self.context,
self.make_msg('get_network_info',
network_id=network_id,
host=self.host),
topic=self.topic)
if network:
return dhcp.NetModel(self.use_namespaces, network)
def get_dhcp_port(self, network_id, device_id):
"""Make a remote process call to get the dhcp port."""
port = self.call(self.context,
self.make_msg('get_dhcp_port',
network_id=network_id,
device_id=device_id,
host=self.host),
topic=self.topic)
if port:
return dhcp.DictModel(port)
def create_dhcp_port(self, port):
"""Make a remote process call to create the dhcp port."""
port = self.call(self.context,
self.make_msg('create_dhcp_port',
port=port,
host=self.host),
topic=self.topic)
if port:
return dhcp.DictModel(port)
def update_dhcp_port(self, port_id, port):
"""Make a remote process call to update the dhcp port."""
port = self.call(self.context,
self.make_msg('update_dhcp_port',
port_id=port_id,
port=port,
host=self.host),
topic=self.topic)
if port:
return dhcp.DictModel(port)
def release_dhcp_port(self, network_id, device_id):
"""Make a remote process call to release the dhcp port."""
return self.call(self.context,
self.make_msg('release_dhcp_port',
network_id=network_id,
device_id=device_id,
host=self.host),
topic=self.topic)
def release_port_fixed_ip(self, network_id, device_id, subnet_id):
"""Make a remote process call to release a fixed_ip on the port."""
return self.call(self.context,
self.make_msg('release_port_fixed_ip',
network_id=network_id,
subnet_id=subnet_id,
device_id=device_id,
host=self.host),
topic=self.topic)
class NetworkCache(object):
"""Agent cache of the current network state."""
def __init__(self):
self.cache = {}
self.subnet_lookup = {}
self.port_lookup = {}
def get_network_ids(self):
return self.cache.keys()
def get_network_by_id(self, network_id):
return self.cache.get(network_id)
def get_network_by_subnet_id(self, subnet_id):
return self.cache.get(self.subnet_lookup.get(subnet_id))
def get_network_by_port_id(self, port_id):
return self.cache.get(self.port_lookup.get(port_id))
def put(self, network):
if network.id in self.cache:
self.remove(self.cache[network.id])
self.cache[network.id] = network
for subnet in network.subnets:
self.subnet_lookup[subnet.id] = network.id
for port in network.ports:
self.port_lookup[port.id] = network.id
def remove(self, network):
del self.cache[network.id]
for subnet in network.subnets:
del self.subnet_lookup[subnet.id]
for port in network.ports:
del self.port_lookup[port.id]
def put_port(self, port):
network = self.get_network_by_id(port.network_id)
for index in range(len(network.ports)):
if network.ports[index].id == port.id:
network.ports[index] = port
break
else:
network.ports.append(port)
self.port_lookup[port.id] = network.id
def remove_port(self, port):
network = self.get_network_by_port_id(port.id)
for index in range(len(network.ports)):
if network.ports[index] == port:
del network.ports[index]
del self.port_lookup[port.id]
break
def get_port_by_id(self, port_id):
network = self.get_network_by_port_id(port_id)
if network:
for port in network.ports:
if port.id == port_id:
return port
def get_state(self):
net_ids = self.get_network_ids()
num_nets = len(net_ids)
num_subnets = 0
num_ports = 0
for net_id in net_ids:
network = self.get_network_by_id(net_id)
num_subnets += len(network.subnets)
num_ports += len(network.ports)
return {'networks': num_nets,
'subnets': num_subnets,
'ports': num_ports}
class DhcpAgentWithStateReport(DhcpAgent):
def __init__(self, host=None):
super(DhcpAgentWithStateReport, self).__init__(host=host)
self.state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)
self.agent_state = {
'binary': 'neutron-dhcp-agent',
'host': host,
'topic': topics.DHCP_AGENT,
'configurations': {
'dhcp_driver': cfg.CONF.dhcp_driver,
'use_namespaces': cfg.CONF.use_namespaces,
'dhcp_lease_duration': cfg.CONF.dhcp_lease_duration},
'start_flag': True,
'agent_type': constants.AGENT_TYPE_DHCP}
report_interval = cfg.CONF.AGENT.report_interval
self.use_call = True
if report_interval:
self.heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
self.heartbeat.start(interval=report_interval)
def _report_state(self):
try:
self.agent_state.get('configurations').update(
self.cache.get_state())
ctx = context.get_admin_context_without_session()
self.state_rpc.report_state(ctx, self.agent_state, self.use_call)
self.use_call = False
except AttributeError:
# This means the server does not support report_state
LOG.warn(_("Neutron server does not support state report."
" State report for this agent will be disabled."))
self.heartbeat.stop()
self.run()
return
except Exception:
LOG.exception(_("Failed reporting state!"))
return
if self.agent_state.pop('start_flag', None):
self.run()
def agent_updated(self, context, payload):
"""Handle the agent_updated notification event."""
self.schedule_resync(_("Agent updated: %(payload)s") %
{"payload": payload})
LOG.info(_("agent_updated by server side %s!"), payload)
def after_start(self):
LOG.info(_("DHCP agent started"))
def register_options():
cfg.CONF.register_opts(DhcpAgent.OPTS)
config.register_interface_driver_opts_helper(cfg.CONF)
config.register_use_namespaces_opts_helper(cfg.CONF)
config.register_agent_state_opts_helper(cfg.CONF)
config.register_root_helper(cfg.CONF)
cfg.CONF.register_opts(dhcp.OPTS)
cfg.CONF.register_opts(interface.OPTS)
def main():
register_options()
common_config.init(sys.argv[1:])
config.setup_logging(cfg.CONF)
server = neutron_service.Service.create(
binary='neutron-dhcp-agent',
topic=topics.DHCP_AGENT,
report_interval=cfg.CONF.AGENT.report_interval,
manager='neutron.agent.dhcp_agent.DhcpAgentWithStateReport')
service.launch(server).wait()

View File

@ -0,0 +1,136 @@
# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import contextlib
import six
@six.add_metaclass(abc.ABCMeta)
class FirewallDriver(object):
"""Firewall Driver base class.
Defines methods that any driver providing security groups
and provider firewall functionality should implement.
    Note: the port attribute should carry the security group ids and
    security group rules.
    The port dict should have
    device : interface name
    fixed_ips: ips of the device
    mac_address: mac_address of the device
    security_groups: [sgid, sgid]
    security_group_rules : [ rule, rule ]
    Each rule must contain ethertype and direction.
    A rule may contain security_group_id,
    protocol, port_min, port_max,
    source_ip_prefix, source_port_min,
    source_port_max, dest_ip_prefix, and
    remote_group_id.
    Note: source_group_ip in the REST API is converted as follows:
    if direction is ingress:
    remote_group_ip will be a source_ip_prefix
    if direction is egress:
    remote_group_ip will be a dest_ip_prefix
    Note: remote_group_id in the REST API is converted as follows:
    if direction is ingress:
    remote_group_id will be a list of source_ip_prefix
    if direction is egress:
    remote_group_id will be a list of dest_ip_prefix
    remote_group_id will also require membership update management.
"""
def prepare_port_filter(self, port):
"""Prepare filters for the port.
This method should be called before the port is created.
"""
raise NotImplementedError()
def apply_port_filter(self, port):
"""Apply port filter.
Once this method returns, the port should be firewalled
appropriately. This method should as far as possible be a
no-op. It's vastly preferred to get everything set up in
prepare_port_filter.
"""
raise NotImplementedError()
def update_port_filter(self, port):
"""Refresh security group rules from data store
        Gets called when a port gets added to or removed from
        the security group the port is a member of, or if the
        group gains or loses a rule.
"""
raise NotImplementedError()
def remove_port_filter(self, port):
"""Stop filtering port."""
raise NotImplementedError()
def filter_defer_apply_on(self):
"""Defer application of filtering rule."""
pass
def filter_defer_apply_off(self):
"""Turn off deferral of rules and apply the rules now."""
pass
@property
def ports(self):
"""Returns filtered ports."""
pass
@contextlib.contextmanager
def defer_apply(self):
"""Defer apply context."""
self.filter_defer_apply_on()
try:
yield
finally:
self.filter_defer_apply_off()
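    # Typical usage (sketch): batch several filter updates and apply them
    # once when the context exits:
    #
    #   with driver.defer_apply():
    #       driver.update_port_filter(port_a)
    #       driver.update_port_filter(port_b)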
class NoopFirewallDriver(FirewallDriver):
"""Noop Firewall Driver.
Firewall driver which does nothing.
This driver is for disabling the firewall functionality.
"""
def prepare_port_filter(self, port):
pass
def apply_port_filter(self, port):
pass
def update_port_filter(self, port):
pass
def remove_port_filter(self, port):
pass
def filter_defer_apply_on(self):
pass
def filter_defer_apply_off(self):
pass
@property
def ports(self):
return {}

File diff suppressed because it is too large

View File

@ -0,0 +1,14 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@ -0,0 +1,221 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
import eventlet.event
import eventlet.queue
from neutron.agent.linux import utils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
class AsyncProcessException(Exception):
pass
class AsyncProcess(object):
"""Manages an asynchronous process.
This class spawns a new process via subprocess and uses
greenthreads to read stderr and stdout asynchronously into queues
that can be read via repeatedly calling iter_stdout() and
iter_stderr().
If respawn_interval is non-zero, any error in communicating with
the managed process will result in the process and greenthreads
being cleaned up and the process restarted after the specified
interval.
Example usage:
>>> import time
>>> proc = AsyncProcess(['ping'])
>>> proc.start()
>>> time.sleep(5)
>>> proc.stop()
>>> for line in proc.iter_stdout():
... print line
"""
def __init__(self, cmd, root_helper=None, respawn_interval=None):
"""Constructor.
:param cmd: The list of command arguments to invoke.
:param root_helper: Optional, utility to use when running shell cmds.
:param respawn_interval: Optional, the interval in seconds to wait
to respawn after unexpected process death. Respawn will
only be attempted if a value of 0 or greater is provided.
"""
self.cmd = cmd
self.root_helper = root_helper
if respawn_interval is not None and respawn_interval < 0:
raise ValueError(_('respawn_interval must be >= 0 if provided.'))
self.respawn_interval = respawn_interval
self._process = None
self._kill_event = None
self._reset_queues()
self._watchers = []
def _reset_queues(self):
self._stdout_lines = eventlet.queue.LightQueue()
self._stderr_lines = eventlet.queue.LightQueue()
def start(self):
"""Launch a process and monitor it asynchronously."""
if self._kill_event:
raise AsyncProcessException(_('Process is already started'))
else:
LOG.debug(_('Launching async process [%s].'), self.cmd)
self._spawn()
def stop(self):
"""Halt the process and watcher threads."""
if self._kill_event:
LOG.debug(_('Halting async process [%s].'), self.cmd)
self._kill()
else:
raise AsyncProcessException(_('Process is not running.'))
def _spawn(self):
"""Spawn a process and its watchers."""
self._kill_event = eventlet.event.Event()
self._process, cmd = utils.create_process(self.cmd,
root_helper=self.root_helper)
self._watchers = []
for reader in (self._read_stdout, self._read_stderr):
# Pass the stop event directly to the greenthread to
# ensure that assignment of a new event to the instance
# attribute does not prevent the greenthread from using
# the original event.
watcher = eventlet.spawn(self._watch_process,
reader,
self._kill_event)
self._watchers.append(watcher)
def _kill(self, respawning=False):
"""Kill the process and the associated watcher greenthreads.
:param respawning: Optional, whether respawn will be subsequently
attempted.
"""
# Halt the greenthreads
self._kill_event.send()
pid = self._get_pid_to_kill()
if pid:
self._kill_process(pid)
if not respawning:
# Clear the kill event to ensure the process can be
# explicitly started again.
self._kill_event = None
def _get_pid_to_kill(self):
pid = self._process.pid
# If root helper was used, two or more processes will be created:
#
# - a root helper process (e.g. sudo myscript)
# - possibly a rootwrap script (e.g. neutron-rootwrap)
# - a child process (e.g. myscript)
#
# Killing the root helper process will leave the child process
# running, re-parented to init, so the only way to ensure that both
# die is to target the child process directly.
if self.root_helper:
try:
pid = utils.find_child_pids(pid)[0]
except IndexError:
# Process is already dead
return None
while True:
try:
# We shouldn't have more than one child per process
# so keep getting the children of the first one
pid = utils.find_child_pids(pid)[0]
except IndexError:
# Last process in the tree, return it
break
return pid
def _kill_process(self, pid):
try:
# A process started by a root helper will be running as
# root and need to be killed via the same helper.
utils.execute(['kill', '-9', pid], root_helper=self.root_helper)
except Exception as ex:
stale_pid = (isinstance(ex, RuntimeError) and
'No such process' in str(ex))
if not stale_pid:
LOG.exception(_('An error occurred while killing [%s].'),
self.cmd)
return False
return True
def _handle_process_error(self):
"""Kill the async process and respawn if necessary."""
LOG.debug(_('Halting async process [%s] in response to an error.'),
self.cmd)
        respawning = (self.respawn_interval is not None and
                      self.respawn_interval >= 0)
self._kill(respawning=respawning)
if respawning:
eventlet.sleep(self.respawn_interval)
LOG.debug(_('Respawning async process [%s].'), self.cmd)
self._spawn()
def _watch_process(self, callback, kill_event):
while not kill_event.ready():
try:
if not callback():
break
except Exception:
LOG.exception(_('An error occurred while communicating '
'with async process [%s].'), self.cmd)
break
# Ensure that watching a process with lots of output does
# not block execution of other greenthreads.
eventlet.sleep()
# The kill event not being ready indicates that the loop was
# broken out of due to an error in the watched process rather
# than the loop condition being satisfied.
if not kill_event.ready():
self._handle_process_error()
def _read(self, stream, queue):
data = stream.readline()
if data:
data = data.strip()
queue.put(data)
return data
def _read_stdout(self):
return self._read(self._process.stdout, self._stdout_lines)
def _read_stderr(self):
return self._read(self._process.stderr, self._stderr_lines)
def _iter_queue(self, queue):
while True:
try:
yield queue.get_nowait()
except eventlet.queue.Empty:
break
def iter_stdout(self):
return self._iter_queue(self._stdout_lines)
def iter_stderr(self):
return self._iter_queue(self._stderr_lines)

View File

@ -0,0 +1,149 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Mark McClain, DreamHost
import atexit
import fcntl
import os
import signal
import sys
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
class Pidfile(object):
def __init__(self, pidfile, procname, uuid=None):
self.pidfile = pidfile
self.procname = procname
self.uuid = uuid
try:
self.fd = os.open(pidfile, os.O_CREAT | os.O_RDWR)
fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
LOG.exception(_("Error while handling pidfile: %s"), pidfile)
sys.exit(1)
def __str__(self):
return self.pidfile
def unlock(self):
        # fcntl.flock() raises IOError on failure rather than returning
        # an error code, so detect the failure via the exception.
        try:
            fcntl.flock(self.fd, fcntl.LOCK_UN)
        except IOError:
            raise IOError(_('Unable to unlock pid file'))
def write(self, pid):
os.ftruncate(self.fd, 0)
os.write(self.fd, "%d" % pid)
os.fsync(self.fd)
def read(self):
try:
pid = int(os.read(self.fd, 128))
os.lseek(self.fd, 0, os.SEEK_SET)
return pid
except ValueError:
return
def is_running(self):
pid = self.read()
if not pid:
return False
cmdline = '/proc/%s/cmdline' % pid
try:
with open(cmdline, "r") as f:
exec_out = f.readline()
return self.procname in exec_out and (not self.uuid or
self.uuid in exec_out)
except IOError:
return False
class Daemon(object):
"""A generic daemon class.
Usage: subclass the Daemon class and override the run() method
"""
def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null',
stderr='/dev/null', procname='python', uuid=None):
self.stdin = stdin
self.stdout = stdout
self.stderr = stderr
self.procname = procname
self.pidfile = Pidfile(pidfile, procname, uuid)
def _fork(self):
try:
pid = os.fork()
if pid > 0:
sys.exit(0)
except OSError:
LOG.exception(_('Fork failed'))
sys.exit(1)
def daemonize(self):
"""Daemonize process by doing Stevens double fork."""
# fork first time
self._fork()
# decouple from parent environment
os.chdir("/")
os.setsid()
os.umask(0)
# fork second time
self._fork()
# redirect standard file descriptors
sys.stdout.flush()
sys.stderr.flush()
stdin = open(self.stdin, 'r')
stdout = open(self.stdout, 'a+')
stderr = open(self.stderr, 'a+', 0)
os.dup2(stdin.fileno(), sys.stdin.fileno())
os.dup2(stdout.fileno(), sys.stdout.fileno())
os.dup2(stderr.fileno(), sys.stderr.fileno())
# write pidfile
atexit.register(self.delete_pid)
signal.signal(signal.SIGTERM, self.handle_sigterm)
self.pidfile.write(os.getpid())
def delete_pid(self):
os.remove(str(self.pidfile))
def handle_sigterm(self, signum, frame):
sys.exit(0)
def start(self):
"""Start the daemon."""
if self.pidfile.is_running():
self.pidfile.unlock()
            message = _('Pidfile %s already exists. Daemon already running?')
LOG.error(message, self.pidfile)
sys.exit(1)
# Start the daemon
self.daemonize()
self.run()
def run(self):
"""Override this method when subclassing Daemon.
start() will call this method after the process has daemonized.
"""
pass

View File

@ -0,0 +1,921 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import os
import re
import shutil
import socket
import sys
import netaddr
from oslo.config import cfg
import six
from neutron.agent.linux import ip_lib
from neutron.agent.linux import utils
from neutron.common import constants
from neutron.common import exceptions
from neutron.common import utils as commonutils
from neutron.openstack.common import importutils
from neutron.openstack.common import jsonutils
from neutron.openstack.common import log as logging
from neutron.openstack.common import uuidutils
LOG = logging.getLogger(__name__)
OPTS = [
cfg.StrOpt('dhcp_confs',
default='$state_path/dhcp',
help=_('Location to store DHCP server config files')),
cfg.StrOpt('dhcp_domain',
default='openstacklocal',
help=_('Domain to use for building the hostnames')),
cfg.StrOpt('dnsmasq_config_file',
default='',
help=_('Override the default dnsmasq settings with this file')),
cfg.ListOpt('dnsmasq_dns_servers',
help=_('Comma-separated list of the DNS servers which will be '
'used as forwarders.'),
deprecated_name='dnsmasq_dns_server'),
cfg.BoolOpt('dhcp_delete_namespaces', default=False,
help=_("Delete namespace after removing a dhcp server.")),
cfg.IntOpt(
'dnsmasq_lease_max',
default=(2 ** 24),
help=_('Limit number of leases to prevent a denial-of-service.')),
]
IPV4 = 4
IPV6 = 6
UDP = 'udp'
TCP = 'tcp'
DNS_PORT = 53
DHCPV4_PORT = 67
DHCPV6_PORT = 547
METADATA_DEFAULT_PREFIX = 16
METADATA_DEFAULT_IP = '169.254.169.254'
METADATA_DEFAULT_CIDR = '%s/%d' % (METADATA_DEFAULT_IP,
METADATA_DEFAULT_PREFIX)
METADATA_PORT = 80
WIN2k3_STATIC_DNS = 249
NS_PREFIX = 'qdhcp-'
class DictModel(dict):
"""Convert dict into an object that provides attribute access to values."""
def __init__(self, *args, **kwargs):
"""Convert dict values to DictModel values."""
super(DictModel, self).__init__(*args, **kwargs)
def needs_upgrade(item):
"""Check if `item` is a dict and needs to be changed to DictModel.
"""
return isinstance(item, dict) and not isinstance(item, DictModel)
def upgrade(item):
"""Upgrade item if it needs to be upgraded."""
if needs_upgrade(item):
return DictModel(item)
else:
return item
for key, value in self.iteritems():
if isinstance(value, (list, tuple)):
# Keep the same type but convert dicts to DictModels
self[key] = type(value)(
(upgrade(item) for item in value)
)
elif needs_upgrade(value):
# Change dict instance values to DictModel instance values
self[key] = DictModel(value)
def __getattr__(self, name):
try:
return self[name]
except KeyError as e:
raise AttributeError(e)
def __setattr__(self, name, value):
self[name] = value
def __delattr__(self, name):
del self[name]
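# Illustrative example (hypothetical data): DictModel provides attribute
# access to nested dict values:
#
#   net = DictModel({'id': 'net-1', 'subnets': [{'cidr': '10.0.0.0/24'}]})
#   net.id               # 'net-1'
#   net.subnets[0].cidr  # '10.0.0.0/24'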
class NetModel(DictModel):
def __init__(self, use_namespaces, d):
super(NetModel, self).__init__(d)
self._ns_name = (use_namespaces and
"%s%s" % (NS_PREFIX, self.id) or None)
@property
def namespace(self):
return self._ns_name
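# For example, NetModel(True, {'id': 'net-1'}).namespace evaluates to
# 'qdhcp-net-1', while NetModel(False, {'id': 'net-1'}).namespace is None.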
@six.add_metaclass(abc.ABCMeta)
class DhcpBase(object):
def __init__(self, conf, network, root_helper='sudo',
version=None, plugin=None):
self.conf = conf
self.network = network
self.root_helper = root_helper
self.device_manager = DeviceManager(self.conf,
self.root_helper, plugin)
self.version = version
@abc.abstractmethod
def enable(self):
"""Enables DHCP for this network."""
@abc.abstractmethod
def disable(self, retain_port=False):
"""Disable dhcp for this network."""
def restart(self):
"""Restart the dhcp service for the network."""
self.disable(retain_port=True)
self.enable()
@abc.abstractproperty
def active(self):
"""Boolean representing the running state of the DHCP server."""
@abc.abstractmethod
def reload_allocations(self):
"""Force the DHCP server to reload the assignment database."""
@classmethod
def existing_dhcp_networks(cls, conf, root_helper):
"""Return a list of existing networks ids that we have configs for."""
raise NotImplementedError
@classmethod
def check_version(cls):
"""Execute version checks on DHCP server."""
raise NotImplementedError
class DhcpLocalProcess(DhcpBase):
PORTS = []
def _enable_dhcp(self):
"""check if there is a subnet within the network with dhcp enabled."""
for subnet in self.network.subnets:
if subnet.enable_dhcp:
return True
return False
def enable(self):
"""Enables DHCP for this network by spawning a local process."""
interface_name = self.device_manager.setup(self.network)
if self.active:
self.restart()
elif self._enable_dhcp():
self.interface_name = interface_name
self.spawn_process()
def disable(self, retain_port=False):
"""Disable DHCP for this network by killing the local process."""
pid = self.pid
if pid:
if self.active:
cmd = ['kill', '-9', pid]
utils.execute(cmd, self.root_helper)
else:
LOG.debug(_('DHCP for %(net_id)s is stale, pid %(pid)d '
'does not exist, performing cleanup'),
{'net_id': self.network.id, 'pid': pid})
if not retain_port:
self.device_manager.destroy(self.network,
self.interface_name)
else:
LOG.debug(_('No DHCP started for %s'), self.network.id)
self._remove_config_files()
if not retain_port:
if self.conf.dhcp_delete_namespaces and self.network.namespace:
ns_ip = ip_lib.IPWrapper(self.root_helper,
self.network.namespace)
try:
ns_ip.netns.delete(self.network.namespace)
except RuntimeError:
msg = _('Failed trying to delete namespace: %s')
LOG.exception(msg, self.network.namespace)
def _remove_config_files(self):
confs_dir = os.path.abspath(os.path.normpath(self.conf.dhcp_confs))
conf_dir = os.path.join(confs_dir, self.network.id)
shutil.rmtree(conf_dir, ignore_errors=True)
def get_conf_file_name(self, kind, ensure_conf_dir=False):
"""Returns the file name for a given kind of config file."""
confs_dir = os.path.abspath(os.path.normpath(self.conf.dhcp_confs))
conf_dir = os.path.join(confs_dir, self.network.id)
if ensure_conf_dir:
if not os.path.isdir(conf_dir):
os.makedirs(conf_dir, 0o755)
return os.path.join(conf_dir, kind)
def _get_value_from_conf_file(self, kind, converter=None):
"""A helper function to read a value from one of the state files."""
file_name = self.get_conf_file_name(kind)
msg = _('Error while reading %s')
try:
with open(file_name, 'r') as f:
try:
                    content = f.read()
                    return converter(content) if converter else content
except ValueError:
msg = _('Unable to convert value in %s')
except IOError:
msg = _('Unable to access %s')
LOG.debug(msg % file_name)
return None
@property
def pid(self):
"""Last known pid for the DHCP process spawned for this network."""
return self._get_value_from_conf_file('pid', int)
@property
def active(self):
pid = self.pid
if pid is None:
return False
cmdline = '/proc/%s/cmdline' % pid
try:
with open(cmdline, "r") as f:
return self.network.id in f.readline()
except IOError:
return False
@property
def interface_name(self):
return self._get_value_from_conf_file('interface')
@interface_name.setter
def interface_name(self, value):
interface_file_path = self.get_conf_file_name('interface',
ensure_conf_dir=True)
utils.replace_file(interface_file_path, value)
@abc.abstractmethod
def spawn_process(self):
pass
class Dnsmasq(DhcpLocalProcess):
# The ports that need to be opened when security policies are active
# on the Neutron port used for DHCP. These are provided as a convenience
# for users of this class.
PORTS = {IPV4: [(UDP, DNS_PORT), (TCP, DNS_PORT), (UDP, DHCPV4_PORT)],
IPV6: [(UDP, DNS_PORT), (TCP, DNS_PORT), (UDP, DHCPV6_PORT)],
}
_TAG_PREFIX = 'tag%d'
NEUTRON_NETWORK_ID_KEY = 'NEUTRON_NETWORK_ID'
NEUTRON_RELAY_SOCKET_PATH_KEY = 'NEUTRON_RELAY_SOCKET_PATH'
MINIMUM_VERSION = 2.59
@classmethod
def check_version(cls):
ver = 0
try:
cmd = ['dnsmasq', '--version']
out = utils.execute(cmd)
            ver = re.findall(r"\d+\.\d+", out)[0]
is_valid_version = float(ver) >= cls.MINIMUM_VERSION
if not is_valid_version:
LOG.warning(_('FAILED VERSION REQUIREMENT FOR DNSMASQ. '
'DHCP AGENT MAY NOT RUN CORRECTLY! '
'Please ensure that its version is %s '
'or above!'), cls.MINIMUM_VERSION)
except (OSError, RuntimeError, IndexError, ValueError):
LOG.warning(_('Unable to determine dnsmasq version. '
'Please ensure that its version is %s '
'or above!'), cls.MINIMUM_VERSION)
return float(ver)
@classmethod
def existing_dhcp_networks(cls, conf, root_helper):
"""Return a list of existing networks ids that we have configs for."""
confs_dir = os.path.abspath(os.path.normpath(conf.dhcp_confs))
return [
c for c in os.listdir(confs_dir)
if uuidutils.is_uuid_like(c)
]
def spawn_process(self):
"""Spawns a Dnsmasq process for the network."""
env = {
self.NEUTRON_NETWORK_ID_KEY: self.network.id,
}
cmd = [
'dnsmasq',
'--no-hosts',
'--no-resolv',
'--strict-order',
'--bind-interfaces',
'--interface=%s' % self.interface_name,
'--except-interface=lo',
'--pid-file=%s' % self.get_conf_file_name(
'pid', ensure_conf_dir=True),
'--dhcp-hostsfile=%s' % self._output_hosts_file(),
'--addn-hosts=%s' % self._output_addn_hosts_file(),
'--dhcp-optsfile=%s' % self._output_opts_file(),
'--leasefile-ro',
]
possible_leases = 0
for i, subnet in enumerate(self.network.subnets):
# if a subnet is specified to have dhcp disabled
if not subnet.enable_dhcp:
continue
if subnet.ip_version == 4:
mode = 'static'
else:
# Note(scollins) If the IPv6 attributes are not set, set it as
# static to preserve previous behavior
if (not getattr(subnet, 'ipv6_ra_mode', None) and
not getattr(subnet, 'ipv6_address_mode', None)):
mode = 'static'
elif getattr(subnet, 'ipv6_ra_mode', None) is None:
# RA mode is not set - do not launch dnsmasq
continue
if self.version >= self.MINIMUM_VERSION:
set_tag = 'set:'
else:
set_tag = ''
cidr = netaddr.IPNetwork(subnet.cidr)
if self.conf.dhcp_lease_duration == -1:
lease = 'infinite'
else:
lease = '%ss' % self.conf.dhcp_lease_duration
cmd.append('--dhcp-range=%s%s,%s,%s,%s' %
(set_tag, self._TAG_PREFIX % i,
cidr.network, mode, lease))
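            # e.g. '--dhcp-range=set:tag0,10.0.0.0,static,86400s' for the
            # first subnet, 10.0.0.0/24, with the default lease duration
            # (illustrative values).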
possible_leases += cidr.size
# Cap the limit because creating lots of subnets can inflate
# this possible lease cap.
cmd.append('--dhcp-lease-max=%d' %
min(possible_leases, self.conf.dnsmasq_lease_max))
cmd.append('--conf-file=%s' % self.conf.dnsmasq_config_file)
if self.conf.dnsmasq_dns_servers:
cmd.extend(
'--server=%s' % server
for server in self.conf.dnsmasq_dns_servers)
if self.conf.dhcp_domain:
cmd.append('--domain=%s' % self.conf.dhcp_domain)
ip_wrapper = ip_lib.IPWrapper(self.root_helper,
self.network.namespace)
ip_wrapper.netns.execute(cmd, addl_env=env)
def _release_lease(self, mac_address, ip):
"""Release a DHCP lease."""
cmd = ['dhcp_release', self.interface_name, ip, mac_address]
ip_wrapper = ip_lib.IPWrapper(self.root_helper,
self.network.namespace)
ip_wrapper.netns.execute(cmd)
def reload_allocations(self):
"""Rebuild the dnsmasq config and signal the dnsmasq to reload."""
# If all subnets turn off dhcp, kill the process.
if not self._enable_dhcp():
self.disable()
            LOG.debug(_('Killing dnsmasq for network since all subnets have '
'turned off DHCP: %s'), self.network.id)
return
self._release_unused_leases()
self._output_hosts_file()
self._output_addn_hosts_file()
self._output_opts_file()
if self.active:
cmd = ['kill', '-HUP', self.pid]
utils.execute(cmd, self.root_helper)
else:
LOG.debug(_('Pid %d is stale, relaunching dnsmasq'), self.pid)
LOG.debug(_('Reloading allocations for network: %s'), self.network.id)
self.device_manager.update(self.network, self.interface_name)
def _iter_hosts(self):
"""Iterate over hosts.
For each host on the network we yield a tuple containing:
(
port, # a DictModel instance representing the port.
alloc, # a DictModel instance of the allocated ip and subnet.
host_name, # Host name.
name, # Host name and domain name in the format 'hostname.domain'.
)
"""
v6_nets = dict((subnet.id, subnet) for subnet in
self.network.subnets if subnet.ip_version == 6)
for port in self.network.ports:
for alloc in port.fixed_ips:
# Note(scollins) Only create entries that are
# associated with the subnet being managed by this
# dhcp agent
if alloc.subnet_id in v6_nets:
ra_mode = v6_nets[alloc.subnet_id].ipv6_ra_mode
addr_mode = v6_nets[alloc.subnet_id].ipv6_address_mode
if (ra_mode is None and addr_mode == constants.IPV6_SLAAC):
continue
hostname = 'host-%s' % alloc.ip_address.replace(
'.', '-').replace(':', '-')
fqdn = '%s.%s' % (hostname, self.conf.dhcp_domain)
yield (port, alloc, hostname, fqdn)
def _output_hosts_file(self):
"""Writes a dnsmasq compatible dhcp hosts file.
The generated file is sent to the --dhcp-hostsfile option of dnsmasq,
and lists the hosts on the network which should receive a dhcp lease.
Each line in this file is in the form::
'mac_address,FQDN,ip_address'
IMPORTANT NOTE: a dnsmasq instance does not resolve hosts defined in
this file if it did not give a lease to a host listed in it (e.g.:
multiple dnsmasq instances on the same network if this network is on
multiple network nodes). This file is only defining hosts which
should receive a dhcp lease, the hosts resolution in itself is
defined by the `_output_addn_hosts_file` method.
"""
buf = six.StringIO()
filename = self.get_conf_file_name('host')
LOG.debug(_('Building host file: %s'), filename)
for (port, alloc, hostname, name) in self._iter_hosts():
set_tag = ''
            # (dzyu) If this is a valid IPv6 address, wrap it in '[]' so
            # that dnsmasq can distinguish it from a MAC address.
ip_address = alloc.ip_address
if netaddr.valid_ipv6(ip_address):
ip_address = '[%s]' % ip_address
LOG.debug(_('Adding %(mac)s : %(name)s : %(ip)s'),
{"mac": port.mac_address, "name": name,
"ip": ip_address})
if getattr(port, 'extra_dhcp_opts', False):
if self.version >= self.MINIMUM_VERSION:
set_tag = 'set:'
buf.write('%s,%s,%s,%s%s\n' %
(port.mac_address, name, ip_address,
set_tag, port.id))
else:
buf.write('%s,%s,%s\n' %
(port.mac_address, name, ip_address))
utils.replace_file(filename, buf.getvalue())
LOG.debug(_('Done building host file %s'), filename)
return filename
def _read_hosts_file_leases(self, filename):
leases = set()
if os.path.exists(filename):
with open(filename) as f:
for l in f.readlines():
host = l.strip().split(',')
leases.add((host[2], host[0]))
return leases
def _release_unused_leases(self):
filename = self.get_conf_file_name('host')
old_leases = self._read_hosts_file_leases(filename)
new_leases = set()
for port in self.network.ports:
for alloc in port.fixed_ips:
new_leases.add((alloc.ip_address, port.mac_address))
for ip, mac in old_leases - new_leases:
self._release_lease(mac, ip)
def _output_addn_hosts_file(self):
"""Writes a dnsmasq compatible additional hosts file.
The generated file is sent to the --addn-hosts option of dnsmasq,
and lists the hosts on the network which should be resolved even if
        the dnsmasq instance did not give a lease to the host (see the
`_output_hosts_file` method).
Each line in this file is in the same form as a standard /etc/hosts
file.
"""
buf = six.StringIO()
for (port, alloc, hostname, fqdn) in self._iter_hosts():
# It is compulsory to write the `fqdn` before the `hostname` in
# order to obtain it in PTR responses.
buf.write('%s\t%s %s\n' % (alloc.ip_address, fqdn, hostname))
addn_hosts = self.get_conf_file_name('addn_hosts')
utils.replace_file(addn_hosts, buf.getvalue())
return addn_hosts
def _output_opts_file(self):
"""Write a dnsmasq compatible options file."""
if self.conf.enable_isolated_metadata:
subnet_to_interface_ip = self._make_subnet_interface_ip_map()
options = []
dhcp_ips = collections.defaultdict(list)
subnet_idx_map = {}
for i, subnet in enumerate(self.network.subnets):
if not subnet.enable_dhcp:
continue
if subnet.dns_nameservers:
options.append(
self._format_option(i, 'dns-server',
','.join(subnet.dns_nameservers)))
else:
                # Use the dnsmasq IP as the nameserver only if no
                # dns-server was submitted by the server.
subnet_idx_map[subnet.id] = i
gateway = subnet.gateway_ip
host_routes = []
for hr in subnet.host_routes:
if hr.destination == "0.0.0.0/0":
if not gateway:
gateway = hr.nexthop
else:
host_routes.append("%s,%s" % (hr.destination, hr.nexthop))
# Add host routes for isolated network segments
if self._enable_metadata(subnet):
subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]
host_routes.append(
'%s/32,%s' % (METADATA_DEFAULT_IP, subnet_dhcp_ip)
)
if host_routes:
if gateway and subnet.ip_version == 4:
host_routes.append("%s,%s" % ("0.0.0.0/0", gateway))
options.append(
self._format_option(i, 'classless-static-route',
','.join(host_routes)))
options.append(
self._format_option(i, WIN2k3_STATIC_DNS,
','.join(host_routes)))
if subnet.ip_version == 4:
if gateway:
options.append(self._format_option(i, 'router', gateway))
else:
options.append(self._format_option(i, 'router'))
for port in self.network.ports:
if getattr(port, 'extra_dhcp_opts', False):
options.extend(
self._format_option(port.id, opt.opt_name, opt.opt_value)
for opt in port.extra_dhcp_opts)
            # Provide all the dnsmasq IPs as dns-servers if there is more
            # than one dnsmasq for a subnet and no dns-server was submitted
            # by the server.
if port.device_owner == constants.DEVICE_OWNER_DHCP:
for ip in port.fixed_ips:
i = subnet_idx_map.get(ip.subnet_id)
if i is None:
continue
dhcp_ips[i].append(ip.ip_address)
for i, ips in dhcp_ips.items():
if len(ips) > 1:
options.append(self._format_option(i,
'dns-server',
','.join(ips)))
name = self.get_conf_file_name('opts')
utils.replace_file(name, '\n'.join(options))
return name
def _make_subnet_interface_ip_map(self):
ip_dev = ip_lib.IPDevice(
self.interface_name,
self.root_helper,
self.network.namespace
)
subnet_lookup = dict(
(netaddr.IPNetwork(subnet.cidr), subnet.id)
for subnet in self.network.subnets
)
retval = {}
for addr in ip_dev.addr.list():
ip_net = netaddr.IPNetwork(addr['cidr'])
if ip_net in subnet_lookup:
retval[subnet_lookup[ip_net]] = addr['cidr'].split('/')[0]
return retval
def _format_option(self, tag, option, *args):
"""Format DHCP option by option name or code."""
if self.version >= self.MINIMUM_VERSION:
set_tag = 'tag:'
else:
set_tag = ''
option = str(option)
if isinstance(tag, int):
tag = self._TAG_PREFIX % tag
if not option.isdigit():
option = 'option:%s' % option
return ','.join((set_tag + tag, '%s' % option) + args)
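        # For example, with a dnsmasq at or above MINIMUM_VERSION,
        # _format_option(0, 'dns-server', '8.8.8.8') returns
        # 'tag:tag0,option:dns-server,8.8.8.8' and
        # _format_option(0, 249, '10.0.0.0/24,10.0.0.1') returns
        # 'tag:tag0,249,10.0.0.0/24,10.0.0.1' (illustrative values).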
def _enable_metadata(self, subnet):
'''Determine if the metadata route will be pushed to hosts on subnet.
If subnet has a Neutron router attached, we want the hosts to get
metadata from the router's proxy via their default route instead.
'''
if self.conf.enable_isolated_metadata and subnet.ip_version == 4:
if subnet.gateway_ip is None:
return True
else:
for port in self.network.ports:
if port.device_owner == constants.DEVICE_OWNER_ROUTER_INTF:
for alloc in port.fixed_ips:
if alloc.subnet_id == subnet.id:
return False
return True
else:
return False
@classmethod
def lease_update(cls):
network_id = os.environ.get(cls.NEUTRON_NETWORK_ID_KEY)
dhcp_relay_socket = os.environ.get(cls.NEUTRON_RELAY_SOCKET_PATH_KEY)
action = sys.argv[1]
if action not in ('add', 'del', 'old'):
sys.exit()
mac_address = sys.argv[2]
ip_address = sys.argv[3]
if action == 'del':
lease_remaining = 0
else:
lease_remaining = int(os.environ.get('DNSMASQ_TIME_REMAINING', 0))
data = dict(network_id=network_id, mac_address=mac_address,
ip_address=ip_address, lease_remaining=lease_remaining)
if os.path.exists(dhcp_relay_socket):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(dhcp_relay_socket)
sock.send(jsonutils.dumps(data))
sock.close()
class DeviceManager(object):
def __init__(self, conf, root_helper, plugin):
self.conf = conf
self.root_helper = root_helper
self.plugin = plugin
if not conf.interface_driver:
msg = _('An interface driver must be specified')
LOG.error(msg)
raise SystemExit(1)
try:
self.driver = importutils.import_object(
conf.interface_driver, conf)
except Exception as e:
msg = (_("Error importing interface driver '%(driver)s': "
"%(inner)s") % {'driver': conf.interface_driver,
'inner': e})
LOG.error(msg)
raise SystemExit(1)
def get_interface_name(self, network, port):
"""Return interface(device) name for use by the DHCP process."""
return self.driver.get_device_name(port)
def get_device_id(self, network):
"""Return a unique DHCP device ID for this host on the network."""
# There could be more than one dhcp server per network, so create
# a device id that combines host and network ids
return commonutils.get_dhcp_agent_device_id(network.id, self.conf.host)
def _set_default_route(self, network, device_name):
"""Sets the default gateway for this dhcp namespace.
This method is idempotent and will only adjust the route if adjusting
it would change it from what it already is. This makes it safe to call
and avoids unnecessary perturbation of the system.
"""
device = ip_lib.IPDevice(device_name,
self.root_helper,
network.namespace)
gateway = device.route.get_gateway()
if gateway:
gateway = gateway['gateway']
for subnet in network.subnets:
skip_subnet = (
subnet.ip_version != 4
or not subnet.enable_dhcp
or subnet.gateway_ip is None)
if skip_subnet:
continue
if gateway != subnet.gateway_ip:
m = _('Setting gateway for dhcp netns on net %(n)s to %(ip)s')
LOG.debug(m, {'n': network.id, 'ip': subnet.gateway_ip})
device.route.add_gateway(subnet.gateway_ip)
return
# No subnets on the network have a valid gateway. Clean it up to avoid
# confusion from seeing an invalid gateway here.
if gateway is not None:
msg = _('Removing gateway for dhcp netns on net %s')
LOG.debug(msg, network.id)
device.route.delete_gateway(gateway)
def setup_dhcp_port(self, network):
"""Create/update DHCP port for the host if needed and return port."""
device_id = self.get_device_id(network)
subnets = {}
dhcp_enabled_subnet_ids = []
for subnet in network.subnets:
if subnet.enable_dhcp:
dhcp_enabled_subnet_ids.append(subnet.id)
subnets[subnet.id] = subnet
dhcp_port = None
for port in network.ports:
port_device_id = getattr(port, 'device_id', None)
if port_device_id == device_id:
port_fixed_ips = []
for fixed_ip in port.fixed_ips:
port_fixed_ips.append({'subnet_id': fixed_ip.subnet_id,
'ip_address': fixed_ip.ip_address})
if fixed_ip.subnet_id in dhcp_enabled_subnet_ids:
dhcp_enabled_subnet_ids.remove(fixed_ip.subnet_id)
# If there are dhcp_enabled_subnet_ids here that means that
# we need to add those to the port and call update.
if dhcp_enabled_subnet_ids:
port_fixed_ips.extend(
[dict(subnet_id=s) for s in dhcp_enabled_subnet_ids])
dhcp_port = self.plugin.update_dhcp_port(
port.id, {'port': {'network_id': network.id,
'fixed_ips': port_fixed_ips}})
if not dhcp_port:
raise exceptions.Conflict()
else:
dhcp_port = port
# break since we found port that matches device_id
break
# check for a reserved DHCP port
if dhcp_port is None:
LOG.debug(_('DHCP port %(device_id)s on network %(network_id)s'
' does not yet exist. Checking for a reserved port.'),
{'device_id': device_id, 'network_id': network.id})
for port in network.ports:
port_device_id = getattr(port, 'device_id', None)
if port_device_id == constants.DEVICE_ID_RESERVED_DHCP_PORT:
dhcp_port = self.plugin.update_dhcp_port(
port.id, {'port': {'network_id': network.id,
'device_id': device_id}})
if dhcp_port:
break
# DHCP port has not yet been created.
if dhcp_port is None:
LOG.debug(_('DHCP port %(device_id)s on network %(network_id)s'
' does not yet exist.'), {'device_id': device_id,
'network_id': network.id})
port_dict = dict(
name='',
admin_state_up=True,
device_id=device_id,
network_id=network.id,
tenant_id=network.tenant_id,
fixed_ips=[dict(subnet_id=s) for s in dhcp_enabled_subnet_ids])
dhcp_port = self.plugin.create_dhcp_port({'port': port_dict})
if not dhcp_port:
raise exceptions.Conflict()
# Convert subnet_id to subnet dict
fixed_ips = [dict(subnet_id=fixed_ip.subnet_id,
ip_address=fixed_ip.ip_address,
subnet=subnets[fixed_ip.subnet_id])
for fixed_ip in dhcp_port.fixed_ips]
ips = [DictModel(item) if isinstance(item, dict) else item
for item in fixed_ips]
dhcp_port.fixed_ips = ips
return dhcp_port
def setup(self, network):
"""Create and initialize a device for network's DHCP on this host."""
port = self.setup_dhcp_port(network)
interface_name = self.get_interface_name(network, port)
if ip_lib.ensure_device_is_ready(interface_name,
self.root_helper,
network.namespace):
LOG.debug(_('Reusing existing device: %s.'), interface_name)
else:
self.driver.plug(network.id,
port.id,
interface_name,
port.mac_address,
namespace=network.namespace)
ip_cidrs = []
for fixed_ip in port.fixed_ips:
subnet = fixed_ip.subnet
net = netaddr.IPNetwork(subnet.cidr)
ip_cidr = '%s/%s' % (fixed_ip.ip_address, net.prefixlen)
ip_cidrs.append(ip_cidr)
if (self.conf.enable_isolated_metadata and
self.conf.use_namespaces):
ip_cidrs.append(METADATA_DEFAULT_CIDR)
self.driver.init_l3(interface_name, ip_cidrs,
namespace=network.namespace)
# ensure that the dhcp interface is first in the list
if network.namespace is None:
device = ip_lib.IPDevice(interface_name,
self.root_helper)
device.route.pullup_route(interface_name)
if self.conf.use_namespaces:
self._set_default_route(network, interface_name)
return interface_name
def update(self, network, device_name):
"""Update device settings for the network's DHCP on this host."""
if self.conf.use_namespaces:
self._set_default_route(network, device_name)
def destroy(self, network, device_name):
"""Destroy the device used for the network's DHCP on this host."""
self.driver.unplug(device_name, namespace=network.namespace)
self.plugin.release_dhcp_port(network.id,
self.get_device_id(network))

View File

@ -0,0 +1,102 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Mark McClain, DreamHost
import os
from oslo.config import cfg
from neutron.agent.linux import ip_lib
from neutron.agent.linux import utils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
OPTS = [
cfg.StrOpt('external_pids',
default='$state_path/external/pids',
help=_('Location to store child pid files')),
]
cfg.CONF.register_opts(OPTS)
class ProcessManager(object):
"""An external process manager for Neutron spawned processes.
Note: The manager expects uuid to be in cmdline.
"""
def __init__(self, conf, uuid, root_helper='sudo', namespace=None):
self.conf = conf
self.uuid = uuid
self.root_helper = root_helper
self.namespace = namespace
def enable(self, cmd_callback):
if not self.active:
cmd = cmd_callback(self.get_pid_file_name(ensure_pids_dir=True))
ip_wrapper = ip_lib.IPWrapper(self.root_helper, self.namespace)
ip_wrapper.netns.execute(cmd)
def disable(self):
pid = self.pid
if self.active:
cmd = ['kill', '-9', pid]
utils.execute(cmd, self.root_helper)
elif pid:
LOG.debug(_('Process for %(uuid)s pid %(pid)d is stale, ignoring '
'command'), {'uuid': self.uuid, 'pid': pid})
else:
LOG.debug(_('No process started for %s'), self.uuid)
def get_pid_file_name(self, ensure_pids_dir=False):
"""Returns the file name for a given kind of config file."""
pids_dir = os.path.abspath(os.path.normpath(self.conf.external_pids))
if ensure_pids_dir and not os.path.isdir(pids_dir):
os.makedirs(pids_dir, 0o755)
return os.path.join(pids_dir, self.uuid + '.pid')
@property
def pid(self):
"""Last known pid for this external process spawned for this uuid."""
file_name = self.get_pid_file_name()
msg = _('Error while reading %s')
try:
with open(file_name, 'r') as f:
return int(f.read())
except IOError:
msg = _('Unable to access %s')
except ValueError:
msg = _('Unable to convert value in %s')
LOG.debug(msg, file_name)
return None
@property
def active(self):
pid = self.pid
if pid is None:
return False
cmdline = '/proc/%s/cmdline' % pid
try:
with open(cmdline, "r") as f:
return self.uuid in f.readline()
except IOError:
return False

View File

@ -0,0 +1,448 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import netaddr
from oslo.config import cfg
import six
from neutron.agent.common import config
from neutron.agent.linux import ip_lib
from neutron.agent.linux import ovs_lib
from neutron.agent.linux import utils
from neutron.common import exceptions
from neutron.extensions import flavor
from neutron.openstack.common import importutils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
OPTS = [
cfg.StrOpt('ovs_integration_bridge',
default='br-int',
help=_('Name of Open vSwitch bridge to use')),
cfg.BoolOpt('ovs_use_veth',
default=False,
help=_('Uses veth for an interface or not')),
cfg.IntOpt('network_device_mtu',
help=_('MTU setting for device.')),
cfg.StrOpt('meta_flavor_driver_mappings',
help=_('Mapping between flavor and LinuxInterfaceDriver')),
cfg.StrOpt('admin_user',
help=_("Admin username")),
cfg.StrOpt('admin_password',
help=_("Admin password"),
secret=True),
cfg.StrOpt('admin_tenant_name',
help=_("Admin tenant name")),
cfg.StrOpt('auth_url',
help=_("Authentication URL")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.StrOpt('auth_region',
help=_("Authentication region")),
]
@six.add_metaclass(abc.ABCMeta)
class LinuxInterfaceDriver(object):
# from linux IF_NAMESIZE
DEV_NAME_LEN = 14
DEV_NAME_PREFIX = 'tap'
def __init__(self, conf):
self.conf = conf
self.root_helper = config.get_root_helper(conf)
def init_l3(self, device_name, ip_cidrs, namespace=None,
preserve_ips=[], gateway=None, extra_subnets=[]):
"""Set the L3 settings for the interface using data from the port.
ip_cidrs: list of 'X.X.X.X/YY' strings
preserve_ips: list of ip cidrs that should not be removed from device
"""
device = ip_lib.IPDevice(device_name,
self.root_helper,
namespace=namespace)
previous = {}
for address in device.addr.list(scope='global', filters=['permanent']):
previous[address['cidr']] = address['ip_version']
# add new addresses
for ip_cidr in ip_cidrs:
net = netaddr.IPNetwork(ip_cidr)
# Convert to compact IPv6 address because the return values of
# "ip addr list" are compact.
if net.version == 6:
ip_cidr = str(net)
if ip_cidr in previous:
del previous[ip_cidr]
continue
device.addr.add(net.version, ip_cidr, str(net.broadcast))
# clean up any old addresses
for ip_cidr, ip_version in previous.items():
if ip_cidr not in preserve_ips:
device.addr.delete(ip_version, ip_cidr)
if gateway:
device.route.add_gateway(gateway)
new_onlink_routes = set(s['cidr'] for s in extra_subnets)
existing_onlink_routes = set(device.route.list_onlink_routes())
for route in new_onlink_routes - existing_onlink_routes:
device.route.add_onlink_route(route)
for route in existing_onlink_routes - new_onlink_routes:
device.route.delete_onlink_route(route)
def check_bridge_exists(self, bridge):
if not ip_lib.device_exists(bridge):
raise exceptions.BridgeDoesNotExist(bridge=bridge)
def get_device_name(self, port):
return (self.DEV_NAME_PREFIX + port.id)[:self.DEV_NAME_LEN]
@abc.abstractmethod
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
"""Plug in the interface."""
@abc.abstractmethod
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
"""Unplug the interface."""
class NullDriver(LinuxInterfaceDriver):
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
pass
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
pass
class OVSInterfaceDriver(LinuxInterfaceDriver):
"""Driver for creating an internal interface on an OVS bridge."""
DEV_NAME_PREFIX = 'tap'
def __init__(self, conf):
super(OVSInterfaceDriver, self).__init__(conf)
if self.conf.ovs_use_veth:
self.DEV_NAME_PREFIX = 'ns-'
def _get_tap_name(self, dev_name, prefix=None):
if self.conf.ovs_use_veth:
dev_name = dev_name.replace(prefix or self.DEV_NAME_PREFIX, 'tap')
return dev_name
def _ovs_add_port(self, bridge, device_name, port_id, mac_address,
internal=True):
cmd = ['ovs-vsctl', '--', '--if-exists', 'del-port', device_name, '--',
'add-port', bridge, device_name]
if internal:
cmd += ['--', 'set', 'Interface', device_name, 'type=internal']
cmd += ['--', 'set', 'Interface', device_name,
'external-ids:iface-id=%s' % port_id,
'--', 'set', 'Interface', device_name,
'external-ids:iface-status=active',
'--', 'set', 'Interface', device_name,
'external-ids:attached-mac=%s' % mac_address]
utils.execute(cmd, self.root_helper)
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
"""Plug in the interface."""
if not bridge:
bridge = self.conf.ovs_integration_bridge
if not ip_lib.device_exists(device_name,
self.root_helper,
namespace=namespace):
self.check_bridge_exists(bridge)
ip = ip_lib.IPWrapper(self.root_helper)
tap_name = self._get_tap_name(device_name, prefix)
if self.conf.ovs_use_veth:
# Create ns_dev in a namespace if one is configured.
root_dev, ns_dev = ip.add_veth(tap_name,
device_name,
namespace2=namespace)
else:
ns_dev = ip.device(device_name)
internal = not self.conf.ovs_use_veth
self._ovs_add_port(bridge, tap_name, port_id, mac_address,
internal=internal)
ns_dev.link.set_address(mac_address)
if self.conf.network_device_mtu:
ns_dev.link.set_mtu(self.conf.network_device_mtu)
if self.conf.ovs_use_veth:
root_dev.link.set_mtu(self.conf.network_device_mtu)
# Add an interface created by ovs to the namespace.
if not self.conf.ovs_use_veth and namespace:
namespace_obj = ip.ensure_namespace(namespace)
namespace_obj.add_device_to_namespace(ns_dev)
ns_dev.link.set_up()
if self.conf.ovs_use_veth:
root_dev.link.set_up()
else:
LOG.info(_("Device %s already exists"), device_name)
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
"""Unplug the interface."""
if not bridge:
bridge = self.conf.ovs_integration_bridge
tap_name = self._get_tap_name(device_name, prefix)
self.check_bridge_exists(bridge)
ovs = ovs_lib.OVSBridge(bridge, self.root_helper)
try:
ovs.delete_port(tap_name)
if self.conf.ovs_use_veth:
device = ip_lib.IPDevice(device_name,
self.root_helper,
namespace)
device.link.delete()
LOG.debug(_("Unplugged interface '%s'"), device_name)
except RuntimeError:
LOG.error(_("Failed unplugging interface '%s'"),
device_name)
class MidonetInterfaceDriver(LinuxInterfaceDriver):
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
"""This method is called by the Dhcp agent or by the L3 agent
when a new network is created
"""
if not ip_lib.device_exists(device_name,
self.root_helper,
namespace=namespace):
ip = ip_lib.IPWrapper(self.root_helper)
tap_name = device_name.replace(prefix or 'tap', 'tap')
# Create ns_dev in a namespace if one is configured.
root_dev, ns_dev = ip.add_veth(tap_name, device_name,
namespace2=namespace)
ns_dev.link.set_address(mac_address)
# Move the namespace end of the veth pair into the namespace.
namespace_obj = ip.ensure_namespace(namespace)
namespace_obj.add_device_to_namespace(ns_dev)
ns_dev.link.set_up()
root_dev.link.set_up()
cmd = ['mm-ctl', '--bind-port', port_id, device_name]
utils.execute(cmd, self.root_helper)
else:
LOG.info(_("Device %s already exists"), device_name)
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
# the port will be deleted by the dhcp agent that will call the plugin
device = ip_lib.IPDevice(device_name,
self.root_helper,
namespace)
try:
device.link.delete()
except RuntimeError:
LOG.error(_("Failed unplugging interface '%s'"), device_name)
LOG.debug(_("Unplugged interface '%s'"), device_name)
ip_lib.IPWrapper(
self.root_helper, namespace).garbage_collect_namespace()
class IVSInterfaceDriver(LinuxInterfaceDriver):
"""Driver for creating an internal interface on an IVS bridge."""
DEV_NAME_PREFIX = 'tap'
def __init__(self, conf):
super(IVSInterfaceDriver, self).__init__(conf)
self.DEV_NAME_PREFIX = 'ns-'
def _get_tap_name(self, dev_name, prefix=None):
dev_name = dev_name.replace(prefix or self.DEV_NAME_PREFIX, 'tap')
return dev_name
def _ivs_add_port(self, device_name, port_id, mac_address):
cmd = ['ivs-ctl', 'add-port', device_name]
utils.execute(cmd, self.root_helper)
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
"""Plug in the interface."""
if not ip_lib.device_exists(device_name,
self.root_helper,
namespace=namespace):
ip = ip_lib.IPWrapper(self.root_helper)
tap_name = self._get_tap_name(device_name, prefix)
root_dev, ns_dev = ip.add_veth(tap_name, device_name)
self._ivs_add_port(tap_name, port_id, mac_address)
ns_dev = ip.device(device_name)
ns_dev.link.set_address(mac_address)
if self.conf.network_device_mtu:
ns_dev.link.set_mtu(self.conf.network_device_mtu)
root_dev.link.set_mtu(self.conf.network_device_mtu)
if namespace:
namespace_obj = ip.ensure_namespace(namespace)
namespace_obj.add_device_to_namespace(ns_dev)
ns_dev.link.set_up()
root_dev.link.set_up()
else:
LOG.info(_("Device %s already exists"), device_name)
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
"""Unplug the interface."""
tap_name = self._get_tap_name(device_name, prefix)
try:
cmd = ['ivs-ctl', 'del-port', tap_name]
utils.execute(cmd, self.root_helper)
device = ip_lib.IPDevice(device_name,
self.root_helper,
namespace)
device.link.delete()
LOG.debug(_("Unplugged interface '%s'"), device_name)
except RuntimeError:
LOG.error(_("Failed unplugging interface '%s'"),
device_name)
class BridgeInterfaceDriver(LinuxInterfaceDriver):
"""Driver for creating bridge interfaces."""
DEV_NAME_PREFIX = 'ns-'
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
"""Plugin the interface."""
if not ip_lib.device_exists(device_name,
self.root_helper,
namespace=namespace):
ip = ip_lib.IPWrapper(self.root_helper)
# Enable agent to define the prefix
if prefix:
tap_name = device_name.replace(prefix, 'tap')
else:
tap_name = device_name.replace(self.DEV_NAME_PREFIX, 'tap')
# Create ns_veth in a namespace if one is configured.
root_veth, ns_veth = ip.add_veth(tap_name, device_name,
namespace2=namespace)
ns_veth.link.set_address(mac_address)
if self.conf.network_device_mtu:
root_veth.link.set_mtu(self.conf.network_device_mtu)
ns_veth.link.set_mtu(self.conf.network_device_mtu)
root_veth.link.set_up()
ns_veth.link.set_up()
else:
LOG.info(_("Device %s already exists"), device_name)
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
"""Unplug the interface."""
device = ip_lib.IPDevice(device_name, self.root_helper, namespace)
try:
device.link.delete()
LOG.debug(_("Unplugged interface '%s'"), device_name)
except RuntimeError:
LOG.error(_("Failed unplugging interface '%s'"),
device_name)
class MetaInterfaceDriver(LinuxInterfaceDriver):
def __init__(self, conf):
super(MetaInterfaceDriver, self).__init__(conf)
from neutronclient.v2_0 import client
self.neutron = client.Client(
username=self.conf.admin_user,
password=self.conf.admin_password,
tenant_name=self.conf.admin_tenant_name,
auth_url=self.conf.auth_url,
auth_strategy=self.conf.auth_strategy,
region_name=self.conf.auth_region
)
self.flavor_driver_map = {}
for net_flavor, driver_name in [
driver_set.split(':')
for driver_set in
self.conf.meta_flavor_driver_mappings.split(',')]:
self.flavor_driver_map[net_flavor] = self._load_driver(driver_name)
def _get_flavor_by_network_id(self, network_id):
network = self.neutron.show_network(network_id)
return network['network'][flavor.FLAVOR_NETWORK]
def _get_driver_by_network_id(self, network_id):
net_flavor = self._get_flavor_by_network_id(network_id)
return self.flavor_driver_map[net_flavor]
def _set_device_plugin_tag(self, network_id, device_name, namespace=None):
plugin_tag = self._get_flavor_by_network_id(network_id)
device = ip_lib.IPDevice(device_name, self.conf.root_helper, namespace)
device.link.set_alias(plugin_tag)
def _get_device_plugin_tag(self, device_name, namespace=None):
device = ip_lib.IPDevice(device_name, self.conf.root_helper, namespace)
return device.link.alias
def get_device_name(self, port):
driver = self._get_driver_by_network_id(port.network_id)
return driver.get_device_name(port)
def plug(self, network_id, port_id, device_name, mac_address,
bridge=None, namespace=None, prefix=None):
driver = self._get_driver_by_network_id(network_id)
ret = driver.plug(network_id, port_id, device_name, mac_address,
bridge=bridge, namespace=namespace, prefix=prefix)
self._set_device_plugin_tag(network_id, device_name, namespace)
return ret
def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
plugin_tag = self._get_device_plugin_tag(device_name, namespace)
driver = self.flavor_driver_map[plugin_tag]
return driver.unplug(device_name, bridge, namespace, prefix)
def _load_driver(self, driver_provider):
LOG.debug(_("Driver location: %s"), driver_provider)
plugin_klass = importutils.import_class(driver_provider)
return plugin_klass(self.conf)
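# Illustrative sketch (assumed values): MetaInterfaceDriver expects
# meta_flavor_driver_mappings as a comma-separated list of
# <flavor>:<driver class path> pairs, e.g. in the agent config:
#
#   meta_flavor_driver_mappings = \
#       openvswitch:neutron.agent.linux.interface.OVSInterfaceDriver,\
#       linuxbridge:neutron.agent.linux.interface.BridgeInterfaceDriver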


@@ -0,0 +1,567 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from oslo.config import cfg
from neutron.agent.linux import utils
from neutron.common import exceptions
OPTS = [
cfg.BoolOpt('ip_lib_force_root',
default=False,
help=_('Force ip_lib calls to use the root helper')),
]
LOOPBACK_DEVNAME = 'lo'
# NOTE(ethuleau): depending on the version of iproute2, the vlan
# interface details vary.
VLAN_INTERFACE_DETAIL = ['vlan protocol 802.1q',
'vlan protocol 802.1Q',
'vlan id']
class SubProcessBase(object):
def __init__(self, root_helper=None, namespace=None):
self.root_helper = root_helper
self.namespace = namespace
try:
self.force_root = cfg.CONF.ip_lib_force_root
except cfg.NoSuchOptError:
# Only callers that need to force use of the root helper
# need to register the option.
self.force_root = False
def _run(self, options, command, args):
if self.namespace:
return self._as_root(options, command, args)
elif self.force_root:
# Force use of the root helper to ensure that commands
# will execute in dom0 when running under XenServer/XCP.
return self._execute(options, command, args, self.root_helper)
else:
return self._execute(options, command, args)
def _as_root(self, options, command, args, use_root_namespace=False):
if not self.root_helper:
raise exceptions.SudoRequired()
namespace = self.namespace if not use_root_namespace else None
return self._execute(options,
command,
args,
self.root_helper,
namespace)
@classmethod
def _execute(cls, options, command, args, root_helper=None,
namespace=None):
opt_list = ['-%s' % o for o in options]
if namespace:
ip_cmd = ['ip', 'netns', 'exec', namespace, 'ip']
else:
ip_cmd = ['ip']
return utils.execute(ip_cmd + opt_list + [command] + list(args),
root_helper=root_helper)
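# Illustrative expansion (assumed values): with namespace 'qrouter-1234',
# options 'o' and command 'link', _execute() composes and runs:
#
#   ip netns exec qrouter-1234 ip -o link list
#
# via the root helper, which is how all the wrappers below shell out.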
class IPWrapper(SubProcessBase):
def __init__(self, root_helper=None, namespace=None):
super(IPWrapper, self).__init__(root_helper=root_helper,
namespace=namespace)
self.netns = IpNetnsCommand(self)
def device(self, name):
return IPDevice(name, self.root_helper, self.namespace)
def get_devices(self, exclude_loopback=False):
retval = []
output = self._execute(['o', 'd'], 'link', ('list',),
self.root_helper, self.namespace)
for line in output.split('\n'):
if '<' not in line:
continue
tokens = line.split(' ', 2)
if len(tokens) == 3:
if any(v in tokens[2] for v in VLAN_INTERFACE_DETAIL):
delimiter = '@'
else:
delimiter = ':'
name = tokens[1].rpartition(delimiter)[0].strip()
if exclude_loopback and name == LOOPBACK_DEVNAME:
continue
retval.append(IPDevice(name,
self.root_helper,
self.namespace))
return retval
def add_tuntap(self, name, mode='tap'):
self._as_root('', 'tuntap', ('add', name, 'mode', mode))
return IPDevice(name, self.root_helper, self.namespace)
def add_veth(self, name1, name2, namespace2=None):
args = ['add', name1, 'type', 'veth', 'peer', 'name', name2]
if namespace2 is None:
namespace2 = self.namespace
else:
self.ensure_namespace(namespace2)
args += ['netns', namespace2]
self._as_root('', 'link', tuple(args))
return (IPDevice(name1, self.root_helper, self.namespace),
IPDevice(name2, self.root_helper, namespace2))
def ensure_namespace(self, name):
if not self.netns.exists(name):
ip = self.netns.add(name)
lo = ip.device(LOOPBACK_DEVNAME)
lo.link.set_up()
else:
ip = IPWrapper(self.root_helper, name)
return ip
def namespace_is_empty(self):
return not self.get_devices(exclude_loopback=True)
def garbage_collect_namespace(self):
"""Conditionally destroy the namespace if it is empty."""
if self.namespace and self.netns.exists(self.namespace):
if self.namespace_is_empty():
self.netns.delete(self.namespace)
return True
return False
def add_device_to_namespace(self, device):
if self.namespace:
device.link.set_netns(self.namespace)
def add_vxlan(self, name, vni, group=None, dev=None, ttl=None, tos=None,
local=None, port=None, proxy=False):
cmd = ['add', name, 'type', 'vxlan', 'id', vni]
if group:
cmd.extend(['group', group])
if dev:
cmd.extend(['dev', dev])
if ttl:
cmd.extend(['ttl', ttl])
if tos:
cmd.extend(['tos', tos])
if local:
cmd.extend(['local', local])
if proxy:
cmd.append('proxy')
# tuple: min,max
if port and len(port) == 2:
cmd.extend(['port', port[0], port[1]])
elif port:
raise exceptions.NetworkVxlanPortRangeError(vxlan_range=port)
self._as_root('', 'link', cmd)
return (IPDevice(name, self.root_helper, self.namespace))
@classmethod
def get_namespaces(cls, root_helper):
output = cls._execute('', 'netns', ('list',), root_helper=root_helper)
return [l.strip() for l in output.split('\n')]
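# Illustrative usage sketch (assumed names): wiring a veth pair into a
# namespace and garbage-collecting the namespace once it is empty.
#
#   ip = IPWrapper(root_helper='sudo')
#   ns = ip.ensure_namespace('qdhcp-5678')
#   root_dev, ns_dev = ip.add_veth('tap0', 'ns-0', namespace2='qdhcp-5678')
#   ...
#   ns.garbage_collect_namespace()  # deletes 'qdhcp-5678' once empty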
class IpRule(IPWrapper):
def add_rule_from(self, ip, table, rule_pr):
args = ['add', 'from', ip, 'lookup', table, 'priority', rule_pr]
ip = self._as_root('', 'rule', tuple(args))
return ip
def delete_rule_priority(self, rule_pr):
args = ['del', 'priority', rule_pr]
ip = self._as_root('', 'rule', tuple(args))
return ip
class IPDevice(SubProcessBase):
def __init__(self, name, root_helper=None, namespace=None):
super(IPDevice, self).__init__(root_helper=root_helper,
namespace=namespace)
self.name = name
self.link = IpLinkCommand(self)
self.addr = IpAddrCommand(self)
self.route = IpRouteCommand(self)
self.neigh = IpNeighCommand(self)
def __eq__(self, other):
return (other is not None and self.name == other.name
and self.namespace == other.namespace)
def __str__(self):
return self.name
class IpCommandBase(object):
COMMAND = ''
def __init__(self, parent):
self._parent = parent
def _run(self, *args, **kwargs):
return self._parent._run(kwargs.get('options', []), self.COMMAND, args)
def _as_root(self, *args, **kwargs):
return self._parent._as_root(kwargs.get('options', []),
self.COMMAND,
args,
kwargs.get('use_root_namespace', False))
class IpDeviceCommandBase(IpCommandBase):
@property
def name(self):
return self._parent.name
class IpLinkCommand(IpDeviceCommandBase):
COMMAND = 'link'
def set_address(self, mac_address):
self._as_root('set', self.name, 'address', mac_address)
def set_mtu(self, mtu_size):
self._as_root('set', self.name, 'mtu', mtu_size)
def set_up(self):
self._as_root('set', self.name, 'up')
def set_down(self):
self._as_root('set', self.name, 'down')
def set_netns(self, namespace):
self._as_root('set', self.name, 'netns', namespace)
self._parent.namespace = namespace
def set_name(self, name):
self._as_root('set', self.name, 'name', name)
self._parent.name = name
def set_alias(self, alias_name):
self._as_root('set', self.name, 'alias', alias_name)
def delete(self):
self._as_root('delete', self.name)
@property
def address(self):
return self.attributes.get('link/ether')
@property
def state(self):
return self.attributes.get('state')
@property
def mtu(self):
return self.attributes.get('mtu')
@property
def qdisc(self):
return self.attributes.get('qdisc')
@property
def qlen(self):
return self.attributes.get('qlen')
@property
def alias(self):
return self.attributes.get('alias')
@property
def attributes(self):
return self._parse_line(self._run('show', self.name, options='o'))
def _parse_line(self, value):
if not value:
return {}
device_name, settings = value.replace("\\", '').split('>', 1)
tokens = settings.split()
keys = tokens[::2]
values = [int(v) if v.isdigit() else v for v in tokens[1::2]]
retval = dict(zip(keys, values))
return retval
class IpAddrCommand(IpDeviceCommandBase):
COMMAND = 'addr'
def add(self, ip_version, cidr, broadcast, scope='global'):
self._as_root('add',
cidr,
'brd',
broadcast,
'scope',
scope,
'dev',
self.name,
options=[ip_version])
def delete(self, ip_version, cidr):
self._as_root('del',
cidr,
'dev',
self.name,
options=[ip_version])
def flush(self):
self._as_root('flush', self.name)
def list(self, scope=None, to=None, filters=None):
if filters is None:
filters = []
retval = []
if scope:
filters += ['scope', scope]
if to:
filters += ['to', to]
for line in self._run('show', self.name, *filters).split('\n'):
line = line.strip()
if not line.startswith('inet'):
continue
parts = line.split()
if parts[0] == 'inet6':
version = 6
scope = parts[3]
broadcast = '::'
else:
version = 4
if parts[2] == 'brd':
broadcast = parts[3]
scope = parts[5]
else:
# sometimes output of 'ip a' might look like:
# inet 192.168.100.100/24 scope global eth0
# and broadcast needs to be calculated from CIDR
broadcast = str(netaddr.IPNetwork(parts[1]).broadcast)
scope = parts[3]
retval.append(dict(cidr=parts[1],
broadcast=broadcast,
scope=scope,
ip_version=version,
dynamic=('dynamic' == parts[-1])))
return retval
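# Illustrative result (assumed device state): for an 'ip addr show' line such
# as 'inet 192.168.100.100/24 brd 192.168.100.255 scope global eth0',
# list() yields:
#
#   {'cidr': '192.168.100.100/24', 'broadcast': '192.168.100.255',
#    'scope': 'global', 'ip_version': 4, 'dynamic': False}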
class IpRouteCommand(IpDeviceCommandBase):
COMMAND = 'route'
def add_gateway(self, gateway, metric=None, table=None):
args = ['replace', 'default', 'via', gateway]
if metric:
args += ['metric', metric]
args += ['dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
def delete_gateway(self, gateway=None, table=None):
args = ['del', 'default']
if gateway:
args += ['via', gateway]
args += ['dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
def list_onlink_routes(self):
def iterate_routes():
output = self._run('list', 'dev', self.name, 'scope', 'link')
for line in output.split('\n'):
line = line.strip()
if line and not line.count('src'):
yield line
return [x for x in iterate_routes()]
def add_onlink_route(self, cidr):
self._as_root('replace', cidr, 'dev', self.name, 'scope', 'link')
def delete_onlink_route(self, cidr):
self._as_root('del', cidr, 'dev', self.name, 'scope', 'link')
def get_gateway(self, scope=None, filters=None):
if filters is None:
filters = []
retval = None
if scope:
filters += ['scope', scope]
route_list_lines = self._run('list', 'dev', self.name,
*filters).split('\n')
default_route_line = next((x.strip() for x in
route_list_lines if
x.strip().startswith('default')), None)
if default_route_line:
gateway_index = 2
parts = default_route_line.split()
retval = dict(gateway=parts[gateway_index])
if 'metric' in parts:
metric_index = parts.index('metric') + 1
retval.update(metric=int(parts[metric_index]))
return retval
def pullup_route(self, interface_name):
"""Ensures that the route entry for the interface is before all
others on the same subnet.
"""
device_list = []
device_route_list_lines = self._run('list', 'proto', 'kernel',
'dev', interface_name).split('\n')
for device_route_line in device_route_list_lines:
try:
subnet = device_route_line.split()[0]
except Exception:
continue
subnet_route_list_lines = self._run('list', 'proto', 'kernel',
'match', subnet).split('\n')
for subnet_route_line in subnet_route_list_lines:
i = iter(subnet_route_line.split())
while(i.next() != 'dev'):
pass
device = i.next()
try:
while(i.next() != 'src'):
pass
src = i.next()
except Exception:
src = ''
if device != interface_name:
device_list.append((device, src))
else:
break
for (device, src) in device_list:
self._as_root('del', subnet, 'dev', device)
if (src != ''):
self._as_root('append', subnet, 'proto', 'kernel',
'src', src, 'dev', device)
else:
self._as_root('append', subnet, 'proto', 'kernel',
'dev', device)
def add_route(self, cidr, ip, table=None):
args = ['replace', cidr, 'via', ip, 'dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
def delete_route(self, cidr, ip, table=None):
args = ['del', cidr, 'via', ip, 'dev', self.name]
if table:
args += ['table', table]
self._as_root(*args)
class IpNeighCommand(IpDeviceCommandBase):
COMMAND = 'neigh'
def add(self, ip_version, ip_address, mac_address):
self._as_root('replace',
ip_address,
'lladdr',
mac_address,
'nud',
'permanent',
'dev',
self.name,
options=[ip_version])
def delete(self, ip_version, ip_address, mac_address):
self._as_root('del',
ip_address,
'lladdr',
mac_address,
'dev',
self.name,
options=[ip_version])
class IpNetnsCommand(IpCommandBase):
COMMAND = 'netns'
def add(self, name):
self._as_root('add', name, use_root_namespace=True)
return IPWrapper(self._parent.root_helper, name)
def delete(self, name):
self._as_root('delete', name, use_root_namespace=True)
def execute(self, cmds, addl_env=None, check_exit_code=True):
if not self._parent.root_helper:
raise exceptions.SudoRequired()
ns_params = []
if self._parent.namespace:
ns_params = ['ip', 'netns', 'exec', self._parent.namespace]
env_params = []
if addl_env:
env_params = (['env'] +
['%s=%s' % pair for pair in addl_env.items()])
return utils.execute(
ns_params + env_params + list(cmds),
root_helper=self._parent.root_helper,
check_exit_code=check_exit_code)
def exists(self, name):
output = self._parent._execute('o', 'netns', ['list'])
for line in output.split('\n'):
if name == line.strip():
return True
return False
def device_exists(device_name, root_helper=None, namespace=None):
try:
address = IPDevice(device_name, root_helper, namespace).link.address
except RuntimeError:
return False
return bool(address)
def ensure_device_is_ready(device_name, root_helper=None, namespace=None):
dev = IPDevice(device_name, root_helper, namespace)
try:
# Ensure the device is up, even if it is already up. If the device
# doesn't exist, a RuntimeError will be raised.
dev.link.set_up()
except RuntimeError:
return False
return True
def iproute_arg_supported(command, arg, root_helper=None):
command += ['help']
stdout, stderr = utils.execute(command, root_helper=root_helper,
check_exit_code=False, return_stderr=True)
return any(arg in line for line in stderr.split('\n'))
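# Illustrative usage sketch (assumed arguments; 'wrapper' is an assumed
# IPWrapper instance): probing whether the local iproute2 build understands
# the vxlan 'proxy' argument before relying on it.
#
#   if iproute_arg_supported(['ip', 'link', 'add', 'type', 'vxlan'],
#                            'proxy', root_helper='sudo'):
#       wrapper.add_vxlan('vxlan-1001', '1001', proxy=True)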


@@ -0,0 +1,381 @@
# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from oslo.config import cfg
from neutron.agent import firewall
from neutron.agent.linux import iptables_manager
from neutron.common import constants
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
SG_CHAIN = 'sg-chain'
INGRESS_DIRECTION = 'ingress'
EGRESS_DIRECTION = 'egress'
SPOOF_FILTER = 'spoof-filter'
CHAIN_NAME_PREFIX = {INGRESS_DIRECTION: 'i',
EGRESS_DIRECTION: 'o',
SPOOF_FILTER: 's'}
LINUX_DEV_LEN = 14
class IptablesFirewallDriver(firewall.FirewallDriver):
"""Driver which enforces security groups through iptables rules."""
IPTABLES_DIRECTION = {INGRESS_DIRECTION: 'physdev-out',
EGRESS_DIRECTION: 'physdev-in'}
def __init__(self):
self.iptables = iptables_manager.IptablesManager(
root_helper=cfg.CONF.AGENT.root_helper,
use_ipv6=True)
# map of device name -> port for ports that have a security group
self.filtered_ports = {}
self._add_fallback_chain_v4v6()
self._defer_apply = False
self._pre_defer_filtered_ports = None
@property
def ports(self):
return self.filtered_ports
def prepare_port_filter(self, port):
LOG.debug(_("Preparing device (%s) filter"), port['device'])
self._remove_chains()
self.filtered_ports[port['device']] = port
# each security group has its own chains
self._setup_chains()
self.iptables.apply()
def update_port_filter(self, port):
LOG.debug(_("Updating device (%s) filter"), port['device'])
if port['device'] not in self.filtered_ports:
LOG.info(_('Attempted to update port filter which is not '
'filtered %s'), port['device'])
return
self._remove_chains()
self.filtered_ports[port['device']] = port
self._setup_chains()
self.iptables.apply()
def remove_port_filter(self, port):
LOG.debug(_("Removing device (%s) filter"), port['device'])
if not self.filtered_ports.get(port['device']):
LOG.info(_('Attempted to remove port filter which is not '
'filtered %r'), port)
return
self._remove_chains()
self.filtered_ports.pop(port['device'], None)
self._setup_chains()
self.iptables.apply()
def _setup_chains(self):
"""Setup ingress and egress chain for a port."""
if not self._defer_apply:
self._setup_chains_apply(self.filtered_ports)
def _setup_chains_apply(self, ports):
self._add_chain_by_name_v4v6(SG_CHAIN)
for port in ports.values():
self._setup_chain(port, INGRESS_DIRECTION)
self._setup_chain(port, EGRESS_DIRECTION)
self.iptables.ipv4['filter'].add_rule(SG_CHAIN, '-j ACCEPT')
self.iptables.ipv6['filter'].add_rule(SG_CHAIN, '-j ACCEPT')
def _remove_chains(self):
"""Remove ingress and egress chain for a port."""
if not self._defer_apply:
self._remove_chains_apply(self.filtered_ports)
def _remove_chains_apply(self, ports):
for port in ports.values():
self._remove_chain(port, INGRESS_DIRECTION)
self._remove_chain(port, EGRESS_DIRECTION)
self._remove_chain(port, SPOOF_FILTER)
self._remove_chain_by_name_v4v6(SG_CHAIN)
def _setup_chain(self, port, DIRECTION):
self._add_chain(port, DIRECTION)
self._add_rule_by_security_group(port, DIRECTION)
def _remove_chain(self, port, DIRECTION):
chain_name = self._port_chain_name(port, DIRECTION)
self._remove_chain_by_name_v4v6(chain_name)
def _add_fallback_chain_v4v6(self):
self.iptables.ipv4['filter'].add_chain('sg-fallback')
self.iptables.ipv4['filter'].add_rule('sg-fallback', '-j DROP')
self.iptables.ipv6['filter'].add_chain('sg-fallback')
self.iptables.ipv6['filter'].add_rule('sg-fallback', '-j DROP')
def _add_chain_by_name_v4v6(self, chain_name):
self.iptables.ipv6['filter'].add_chain(chain_name)
self.iptables.ipv4['filter'].add_chain(chain_name)
def _remove_chain_by_name_v4v6(self, chain_name):
self.iptables.ipv4['filter'].ensure_remove_chain(chain_name)
self.iptables.ipv6['filter'].ensure_remove_chain(chain_name)
def _add_rule_to_chain_v4v6(self, chain_name, ipv4_rules, ipv6_rules):
for rule in ipv4_rules:
self.iptables.ipv4['filter'].add_rule(chain_name, rule)
for rule in ipv6_rules:
self.iptables.ipv6['filter'].add_rule(chain_name, rule)
def _get_device_name(self, port):
return port['device']
def _add_chain(self, port, direction):
chain_name = self._port_chain_name(port, direction)
self._add_chain_by_name_v4v6(chain_name)
# Note(nati) jump to the security group chain (SG_CHAIN).
# This is needed because a packet may match rules on two ports
# when both ports live on the same host; the packet is accepted
# once, at the end of SG_CHAIN.
# jump to the security group chain
device = self._get_device_name(port)
jump_rule = ['-m physdev --%s %s --physdev-is-bridged '
'-j $%s' % (self.IPTABLES_DIRECTION[direction],
device,
SG_CHAIN)]
self._add_rule_to_chain_v4v6('FORWARD', jump_rule, jump_rule)
# jump to the chain based on the device
jump_rule = ['-m physdev --%s %s --physdev-is-bridged '
'-j $%s' % (self.IPTABLES_DIRECTION[direction],
device,
chain_name)]
self._add_rule_to_chain_v4v6(SG_CHAIN, jump_rule, jump_rule)
if direction == EGRESS_DIRECTION:
self._add_rule_to_chain_v4v6('INPUT', jump_rule, jump_rule)
def _split_sgr_by_ethertype(self, security_group_rules):
ipv4_sg_rules = []
ipv6_sg_rules = []
for rule in security_group_rules:
if rule.get('ethertype') == constants.IPv4:
ipv4_sg_rules.append(rule)
elif rule.get('ethertype') == constants.IPv6:
if rule.get('protocol') == 'icmp':
rule['protocol'] = 'icmpv6'
ipv6_sg_rules.append(rule)
return ipv4_sg_rules, ipv6_sg_rules
def _select_sgr_by_direction(self, port, direction):
return [rule
for rule in port.get('security_group_rules', [])
if rule['direction'] == direction]
def _setup_spoof_filter_chain(self, port, table, mac_ip_pairs, rules):
if mac_ip_pairs:
chain_name = self._port_chain_name(port, SPOOF_FILTER)
table.add_chain(chain_name)
for mac, ip in mac_ip_pairs:
if ip is None:
# If fixed_ips is [] this rule will be added to the end
# of the list after the allowed_address_pair rules.
table.add_rule(chain_name,
'-m mac --mac-source %s -j RETURN'
% mac)
else:
table.add_rule(chain_name,
'-m mac --mac-source %s -s %s -j RETURN'
% (mac, ip))
table.add_rule(chain_name, '-j DROP')
rules.append('-j $%s' % chain_name)
def _build_ipv4v6_mac_ip_list(self, mac, ip_address, mac_ipv4_pairs,
mac_ipv6_pairs):
if netaddr.IPNetwork(ip_address).version == 4:
mac_ipv4_pairs.append((mac, ip_address))
else:
mac_ipv6_pairs.append((mac, ip_address))
def _spoofing_rule(self, port, ipv4_rules, ipv6_rules):
#Note(nati) allow dhcp or RA packet
ipv4_rules += ['-p udp -m udp --sport 68 --dport 67 -j RETURN']
ipv6_rules += ['-p icmpv6 -j RETURN']
ipv6_rules += ['-p udp -m udp --sport 546 --dport 547 -j RETURN']
mac_ipv4_pairs = []
mac_ipv6_pairs = []
if isinstance(port.get('allowed_address_pairs'), list):
for address_pair in port['allowed_address_pairs']:
self._build_ipv4v6_mac_ip_list(address_pair['mac_address'],
address_pair['ip_address'],
mac_ipv4_pairs,
mac_ipv6_pairs)
for ip in port['fixed_ips']:
self._build_ipv4v6_mac_ip_list(port['mac_address'], ip,
mac_ipv4_pairs, mac_ipv6_pairs)
if not port['fixed_ips']:
mac_ipv4_pairs.append((port['mac_address'], None))
mac_ipv6_pairs.append((port['mac_address'], None))
self._setup_spoof_filter_chain(port, self.iptables.ipv4['filter'],
mac_ipv4_pairs, ipv4_rules)
self._setup_spoof_filter_chain(port, self.iptables.ipv6['filter'],
mac_ipv6_pairs, ipv6_rules)
def _drop_dhcp_rule(self, ipv4_rules, ipv6_rules):
#Note(nati) Drop dhcp packet from VM
ipv4_rules += ['-p udp -m udp --sport 67 --dport 68 -j DROP']
ipv6_rules += ['-p udp -m udp --sport 547 --dport 546 -j DROP']
def _accept_inbound_icmpv6(self):
# Allow multicast listener, neighbor solicitation and
# neighbor advertisement into the instance
icmpv6_rules = []
for icmp6_type in constants.ICMPV6_ALLOWED_TYPES:
icmpv6_rules += ['-p icmpv6 --icmpv6-type %s -j RETURN' %
icmp6_type]
return icmpv6_rules
def _add_rule_by_security_group(self, port, direction):
chain_name = self._port_chain_name(port, direction)
# select rules for current direction
security_group_rules = self._select_sgr_by_direction(port, direction)
# split groups by ip version
# for ipv4, iptables command is used
# for ipv6, iptables6 command is used
ipv4_sg_rules, ipv6_sg_rules = self._split_sgr_by_ethertype(
security_group_rules)
ipv4_iptables_rule = []
ipv6_iptables_rule = []
if direction == EGRESS_DIRECTION:
self._spoofing_rule(port,
ipv4_iptables_rule,
ipv6_iptables_rule)
self._drop_dhcp_rule(ipv4_iptables_rule, ipv6_iptables_rule)
if direction == INGRESS_DIRECTION:
ipv6_iptables_rule += self._accept_inbound_icmpv6()
ipv4_iptables_rule += self._convert_sgr_to_iptables_rules(
ipv4_sg_rules)
ipv6_iptables_rule += self._convert_sgr_to_iptables_rules(
ipv6_sg_rules)
self._add_rule_to_chain_v4v6(chain_name,
ipv4_iptables_rule,
ipv6_iptables_rule)
def _convert_sgr_to_iptables_rules(self, security_group_rules):
iptables_rules = []
self._drop_invalid_packets(iptables_rules)
self._allow_established(iptables_rules)
for rule in security_group_rules:
# These arguments MUST be in the format iptables-save will
# display them: source/dest, protocol, sport, dport, target
# Otherwise the iptables_manager code won't be able to find
# them to preserve their [packet:byte] counts.
args = self._ip_prefix_arg('s',
rule.get('source_ip_prefix'))
args += self._ip_prefix_arg('d',
rule.get('dest_ip_prefix'))
args += self._protocol_arg(rule.get('protocol'))
args += self._port_arg('sport',
rule.get('protocol'),
rule.get('source_port_range_min'),
rule.get('source_port_range_max'))
args += self._port_arg('dport',
rule.get('protocol'),
rule.get('port_range_min'),
rule.get('port_range_max'))
args += ['-j RETURN']
iptables_rules += [' '.join(args)]
iptables_rules += ['-j $sg-fallback']
return iptables_rules
def _drop_invalid_packets(self, iptables_rules):
# Always drop invalid packets
iptables_rules += ['-m state --state INVALID -j DROP']
return iptables_rules
def _allow_established(self, iptables_rules):
# Allow established connections
iptables_rules += ['-m state --state RELATED,ESTABLISHED -j RETURN']
return iptables_rules
def _protocol_arg(self, protocol):
if not protocol:
return []
iptables_rule = ['-p', protocol]
# iptables always adds '-m protocol' for udp and tcp
if protocol in ['udp', 'tcp']:
iptables_rule += ['-m', protocol]
return iptables_rule
def _port_arg(self, direction, protocol, port_range_min, port_range_max):
if (protocol not in ['udp', 'tcp', 'icmp', 'icmpv6']
or not port_range_min):
return []
if protocol in ['icmp', 'icmpv6']:
# Note(xuhanp): port_range_min/port_range_max represent
# icmp type/code when protocol is icmp or icmpv6
# icmp code can be 0 so we cannot use "if port_range_max" here
if port_range_max is not None:
return ['--%s-type' % protocol,
'%s/%s' % (port_range_min, port_range_max)]
return ['--%s-type' % protocol, '%s' % port_range_min]
elif port_range_min == port_range_max:
return ['--%s' % direction, '%s' % (port_range_min,)]
else:
return ['-m', 'multiport',
'--%ss' % direction,
'%s:%s' % (port_range_min, port_range_max)]
def _ip_prefix_arg(self, direction, ip_prefix):
#NOTE (nati) : source_group_id is converted to list of source_
# ip_prefix in server side
if ip_prefix:
return ['-%s' % direction, ip_prefix]
return []
def _port_chain_name(self, port, direction):
return iptables_manager.get_chain_name(
'%s%s' % (CHAIN_NAME_PREFIX[direction], port['device'][3:]))
def filter_defer_apply_on(self):
if not self._defer_apply:
self.iptables.defer_apply_on()
self._pre_defer_filtered_ports = dict(self.filtered_ports)
self._defer_apply = True
def filter_defer_apply_off(self):
if self._defer_apply:
self._defer_apply = False
self._remove_chains_apply(self._pre_defer_filtered_ports)
self._pre_defer_filtered_ports = None
self._setup_chains_apply(self.filtered_ports)
self.iptables.defer_apply_off()
class OVSHybridIptablesFirewallDriver(IptablesFirewallDriver):
OVS_HYBRID_TAP_PREFIX = 'tap'
def _port_chain_name(self, port, direction):
return iptables_manager.get_chain_name(
'%s%s' % (CHAIN_NAME_PREFIX[direction], port['device']))
def _get_device_name(self, port):
return (self.OVS_HYBRID_TAP_PREFIX + port['device'])[:LINUX_DEV_LEN]
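# Illustrative naming sketch (assumed device id): for a port whose device is
# 'tap1234567890a', IptablesFirewallDriver derives per-direction chain names
# by stripping the 'tap' prefix and prepending the direction marker:
#
#   _port_chain_name(port, INGRESS_DIRECTION)  ->  'i1234567890a'[:11]
#   _port_chain_name(port, EGRESS_DIRECTION)   ->  'o1234567890a'[:11]
#
# iptables_manager later wraps these with the agent's binary name.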


@@ -0,0 +1,666 @@
# Copyright 2012 Locaweb.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author: Juliano Martinez, Locaweb.
# based on
# https://github.com/openstack/nova/blob/master/nova/network/linux_net.py
"""Implements iptables rules using linux utilities."""
import inspect
import os
import re
from neutron.agent.linux import utils as linux_utils
from neutron.common import utils
from neutron.openstack.common import excutils
from neutron.openstack.common import lockutils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
# NOTE(vish): Iptables supports chain names of up to 28 characters, and we
# add up to 12 characters to binary_name which is used as a prefix,
# so we limit it to 16 characters.
# (max_chain_name_length - len('-POSTROUTING') == 16)
def get_binary_name():
"""Grab the name of the binary we're running in."""
return os.path.basename(inspect.stack()[-1][1])[:16]
binary_name = get_binary_name()
# The length of a chain name must be less than or equal to 11 characters.
# <max length of iptables chain name> - (<binary_name> + '-') = 28-(16+1) = 11
MAX_CHAIN_LEN_WRAP = 11
MAX_CHAIN_LEN_NOWRAP = 28
# Number of iptables rules to print before and after a rule that causes
# a failure during iptables-restore
IPTABLES_ERROR_LINES_OF_CONTEXT = 5
def get_chain_name(chain_name, wrap=True):
if wrap:
return chain_name[:MAX_CHAIN_LEN_WRAP]
else:
return chain_name[:MAX_CHAIN_LEN_NOWRAP]
class IptablesRule(object):
"""An iptables rule.
You shouldn't need to use this class directly, it's only used by
IptablesManager.
"""
def __init__(self, chain, rule, wrap=True, top=False,
binary_name=binary_name, tag=None):
self.chain = get_chain_name(chain, wrap)
self.rule = rule
self.wrap = wrap
self.top = top
self.wrap_name = binary_name[:16]
self.tag = tag
def __eq__(self, other):
return ((self.chain == other.chain) and
(self.rule == other.rule) and
(self.top == other.top) and
(self.wrap == other.wrap))
def __ne__(self, other):
return not self == other
def __str__(self):
if self.wrap:
chain = '%s-%s' % (self.wrap_name, self.chain)
else:
chain = self.chain
return '-A %s %s' % (chain, self.rule)
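# Illustrative rendering (assumed binary name 'neutron-l3-agent'): a wrapped
# rule stringifies with the component prefix applied to its chain:
#
#   str(IptablesRule('INPUT', '-j DROP', binary_name='neutron-l3-agent'))
#   ->  '-A neutron-l3-agent-INPUT -j DROP'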
class IptablesTable(object):
"""An iptables table."""
def __init__(self, binary_name=binary_name):
self.rules = []
self.remove_rules = []
self.chains = set()
self.unwrapped_chains = set()
self.remove_chains = set()
self.wrap_name = binary_name[:16]
def add_chain(self, name, wrap=True):
"""Adds a named chain to the table.
The chain name is wrapped to be unique for the component creating
it, so different components of Nova can safely create identically
named chains without interfering with one another.
At the moment, its wrapped name is <binary name>-<chain name>,
so if nova-compute creates a chain named 'OUTPUT', it'll actually
end up named 'nova-compute-OUTPUT'.
"""
name = get_chain_name(name, wrap)
if wrap:
self.chains.add(name)
else:
self.unwrapped_chains.add(name)
def _select_chain_set(self, wrap):
if wrap:
return self.chains
else:
return self.unwrapped_chains
def ensure_remove_chain(self, name, wrap=True):
"""Ensure the chain is removed.
This removal "cascades". All rule in the chain are removed, as are
all rules in other chains that jump to it.
"""
name = get_chain_name(name, wrap)
chain_set = self._select_chain_set(wrap)
if name not in chain_set:
return
self.remove_chain(name, wrap)
def remove_chain(self, name, wrap=True):
"""Remove named chain.
This removal "cascades". All rule in the chain are removed, as are
all rules in other chains that jump to it.
If the chain is not found, this is merely logged.
"""
name = get_chain_name(name, wrap)
chain_set = self._select_chain_set(wrap)
if name not in chain_set:
LOG.warn(_('Attempted to remove chain %s which does not exist'),
name)
return
chain_set.remove(name)
if not wrap:
# non-wrapped chains and rules need to be dealt with specially,
# so we keep a list of them to be iterated over in apply()
self.remove_chains.add(name)
# first, add rules to remove that have a matching chain name
self.remove_rules += [r for r in self.rules if r.chain == name]
# next, remove rules from list that have a matching chain name
self.rules = [r for r in self.rules if r.chain != name]
if not wrap:
jump_snippet = '-j %s' % name
# next, add rules to remove that have a matching jump chain
self.remove_rules += [r for r in self.rules
if jump_snippet in r.rule]
else:
jump_snippet = '-j %s-%s' % (self.wrap_name, name)
# finally, remove rules from list that have a matching jump chain
self.rules = [r for r in self.rules
if jump_snippet not in r.rule]
def add_rule(self, chain, rule, wrap=True, top=False, tag=None):
"""Add a rule to the table.
This is just like what you'd feed to iptables, just without
the '-A <chain name>' bit at the start.
However, if you need to jump to one of your wrapped chains,
prepend its name with a '$' which will ensure the wrapping
is applied correctly.
"""
chain = get_chain_name(chain, wrap)
if wrap and chain not in self.chains:
raise LookupError(_('Unknown chain: %r') % chain)
if '$' in rule:
rule = ' '.join(
self._wrap_target_chain(e, wrap) for e in rule.split(' '))
self.rules.append(IptablesRule(chain, rule, wrap, top, self.wrap_name,
tag))
def _wrap_target_chain(self, s, wrap):
if s.startswith('$'):
s = ('%s-%s' % (self.wrap_name, get_chain_name(s[1:], wrap)))
return s
def remove_rule(self, chain, rule, wrap=True, top=False):
"""Remove a rule from a chain.
Note: The rule must be exactly identical to the one that was added.
You cannot switch arguments around like you can with the iptables
CLI tool.
"""
chain = get_chain_name(chain, wrap)
try:
if '$' in rule:
rule = ' '.join(
self._wrap_target_chain(e, wrap) for e in rule.split(' '))
self.rules.remove(IptablesRule(chain, rule, wrap, top,
self.wrap_name))
if not wrap:
self.remove_rules.append(IptablesRule(chain, rule, wrap, top,
self.wrap_name))
except ValueError:
LOG.warn(_('Tried to remove rule that was not there:'
' %(chain)r %(rule)r %(wrap)r %(top)r'),
{'chain': chain, 'rule': rule,
'top': top, 'wrap': wrap})
def empty_chain(self, chain, wrap=True):
"""Remove all rules from a chain."""
chain = get_chain_name(chain, wrap)
chained_rules = [rule for rule in self.rules
if rule.chain == chain and rule.wrap == wrap]
for rule in chained_rules:
self.rules.remove(rule)
def clear_rules_by_tag(self, tag):
if not tag:
return
rules = [rule for rule in self.rules if rule.tag == tag]
for rule in rules:
self.rules.remove(rule)
class IptablesManager(object):
"""Wrapper for iptables.
See IptablesTable for some usage docs
A number of chains are set up to begin with.
First, neutron-filter-top. It's added at the top of FORWARD and OUTPUT. Its
name is not wrapped, so it's shared between the various nova workers. It's
intended for rules that need to live at the top of the FORWARD and OUTPUT
chains. It's in both the ipv4 and ipv6 set of tables.
For ipv4 and ipv6, the built-in INPUT, OUTPUT, and FORWARD filter chains
are wrapped, meaning that the "real" INPUT chain has a rule that jumps to
the wrapped INPUT chain, etc. Additionally, there's a wrapped chain named
"local" which is jumped to from neutron-filter-top.
For ipv4, the built-in PREROUTING, OUTPUT, and POSTROUTING nat chains are
wrapped in the same way as the built-in filter chains. Additionally,
there's a snat chain that is applied after the POSTROUTING chain.
"""
def __init__(self, _execute=None, state_less=False,
root_helper=None, use_ipv6=False, namespace=None,
binary_name=binary_name):
if _execute:
self.execute = _execute
else:
self.execute = linux_utils.execute
self.use_ipv6 = use_ipv6
self.root_helper = root_helper
self.namespace = namespace
self.iptables_apply_deferred = False
self.wrap_name = binary_name[:16]
self.ipv4 = {'filter': IptablesTable(binary_name=self.wrap_name)}
self.ipv6 = {'filter': IptablesTable(binary_name=self.wrap_name)}
# Add a neutron-filter-top chain. It's intended to be shared
# among the various nova components. It sits at the very top
# of FORWARD and OUTPUT.
for tables in [self.ipv4, self.ipv6]:
tables['filter'].add_chain('neutron-filter-top', wrap=False)
tables['filter'].add_rule('FORWARD', '-j neutron-filter-top',
wrap=False, top=True)
tables['filter'].add_rule('OUTPUT', '-j neutron-filter-top',
wrap=False, top=True)
tables['filter'].add_chain('local')
tables['filter'].add_rule('neutron-filter-top', '-j $local',
wrap=False)
# Wrap the built-in chains
builtin_chains = {4: {'filter': ['INPUT', 'OUTPUT', 'FORWARD']},
6: {'filter': ['INPUT', 'OUTPUT', 'FORWARD']}}
if not state_less:
self.ipv4.update(
{'nat': IptablesTable(binary_name=self.wrap_name)})
builtin_chains[4].update({'nat': ['PREROUTING',
'OUTPUT', 'POSTROUTING']})
for ip_version in builtin_chains:
if ip_version == 4:
tables = self.ipv4
elif ip_version == 6:
tables = self.ipv6
for table, chains in builtin_chains[ip_version].iteritems():
for chain in chains:
tables[table].add_chain(chain)
tables[table].add_rule(chain, '-j $%s' %
(chain), wrap=False)
if not state_less:
# Add a neutron-postrouting-bottom chain. It's intended to be
# shared among the various nova components. We set it as the last
# chain of POSTROUTING chain.
self.ipv4['nat'].add_chain('neutron-postrouting-bottom',
wrap=False)
self.ipv4['nat'].add_rule('POSTROUTING',
'-j neutron-postrouting-bottom',
wrap=False)
# We add a snat chain to the shared neutron-postrouting-bottom
# chain so that it's applied last.
self.ipv4['nat'].add_chain('snat')
self.ipv4['nat'].add_rule('neutron-postrouting-bottom',
'-j $snat', wrap=False)
# And then we add a float-snat chain and jump to first thing in
# the snat chain.
self.ipv4['nat'].add_chain('float-snat')
self.ipv4['nat'].add_rule('snat', '-j $float-snat')
def defer_apply_on(self):
self.iptables_apply_deferred = True
def defer_apply_off(self):
self.iptables_apply_deferred = False
self._apply()
def apply(self):
if self.iptables_apply_deferred:
return
self._apply()
def _apply(self):
lock_name = 'iptables'
if self.namespace:
lock_name += '-' + self.namespace
try:
with lockutils.lock(lock_name, utils.SYNCHRONIZED_PREFIX, True):
LOG.debug(_('Got semaphore / lock "%s"'), lock_name)
return self._apply_synchronized()
finally:
LOG.debug(_('Semaphore / lock released "%s"'), lock_name)
def _apply_synchronized(self):
"""Apply the current in-memory set of iptables rules.
This will blow away any rules left over from previous runs of the
same component of Nova, and replace them with our current set of
rules. This happens atomically, thanks to iptables-restore.
"""
s = [('iptables', self.ipv4)]
if self.use_ipv6:
s += [('ip6tables', self.ipv6)]
for cmd, tables in s:
args = ['%s-save' % (cmd,), '-c']
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
all_tables = self.execute(args, root_helper=self.root_helper)
all_lines = all_tables.split('\n')
for table_name, table in tables.iteritems():
start, end = self._find_table(all_lines, table_name)
all_lines[start:end] = self._modify_rules(
all_lines[start:end], table, table_name)
args = ['%s-restore' % (cmd,), '-c']
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
try:
self.execute(args, process_input='\n'.join(all_lines),
root_helper=self.root_helper)
except RuntimeError as r_error:
with excutils.save_and_reraise_exception():
try:
line_no = int(re.search(
'iptables-restore: line ([0-9]+?) failed',
str(r_error)).group(1))
context = IPTABLES_ERROR_LINES_OF_CONTEXT
log_start = max(0, line_no - context)
log_end = line_no + context
except AttributeError:
# line error wasn't found, print all lines instead
log_start = 0
log_end = len(all_lines)
log_lines = ('%7d. %s' % (idx, l)
for idx, l in enumerate(
all_lines[log_start:log_end],
log_start + 1)
)
LOG.error(_("IPTablesManager.apply failed to apply the "
"following set of iptables rules:\n%s"),
'\n'.join(log_lines))
LOG.debug(_("IPTablesManager.apply completed with success"))
def _find_table(self, lines, table_name):
if len(lines) < 3:
# length only <2 when fake iptables
return (0, 0)
try:
start = lines.index('*%s' % table_name) - 1
except ValueError:
# Couldn't find table_name
LOG.debug(_('Unable to find table %s'), table_name)
return (0, 0)
end = lines[start:].index('COMMIT') + start + 2
return (start, end)
def _find_rules_index(self, lines):
seen_chains = False
rules_index = 0
for rules_index, rule in enumerate(lines):
if not seen_chains:
if rule.startswith(':'):
seen_chains = True
else:
if not rule.startswith(':'):
break
if not seen_chains:
rules_index = 2
return rules_index
def _find_last_entry(self, filter_list, match_str):
# find a matching entry, starting from the bottom
for s in reversed(filter_list):
s = s.strip()
if match_str in s:
return s
def _modify_rules(self, current_lines, table, table_name):
unwrapped_chains = table.unwrapped_chains
chains = table.chains
remove_chains = table.remove_chains
rules = table.rules
remove_rules = table.remove_rules
if not current_lines:
fake_table = ['# Generated by iptables_manager',
'*' + table_name, 'COMMIT',
'# Completed by iptables_manager']
current_lines = fake_table
# Fill old_filter with any chains or rules we might have added,
# they could have a [packet:byte] count we want to preserve.
# Fill new_filter with any chains or rules without our name in them.
old_filter, new_filter = [], []
for line in current_lines:
(old_filter if self.wrap_name in line else
new_filter).append(line.strip())
rules_index = self._find_rules_index(new_filter)
all_chains = [':%s' % name for name in unwrapped_chains]
all_chains += [':%s-%s' % (self.wrap_name, name) for name in chains]
# Iterate through all the chains, trying to find an existing
# match.
our_chains = []
for chain in all_chains:
chain_str = str(chain).strip()
old = self._find_last_entry(old_filter, chain_str)
if not old:
dup = self._find_last_entry(new_filter, chain_str)
new_filter = [s for s in new_filter if chain_str not in s.strip()]
# if no old or duplicates, use original chain
if old or dup:
chain_str = str(old or dup)
else:
# add-on the [packet:bytes]
chain_str += ' - [0:0]'
our_chains += [chain_str]
# Iterate through all the rules, trying to find an existing
# match.
our_rules = []
bot_rules = []
for rule in rules:
rule_str = str(rule).strip()
# Further down, we weed out duplicates from the bottom of the
# list, so here we remove the dupes ahead of time.
old = self._find_last_entry(old_filter, rule_str)
if not old:
dup = self._find_last_entry(new_filter, rule_str)
new_filter = [s for s in new_filter if rule_str not in s.strip()]
# if no old or duplicates, use original rule
if old or dup:
rule_str = str(old or dup)
# backup one index so we write the array correctly
if not old:
rules_index -= 1
else:
# add-on the [packet:bytes]
rule_str = '[0:0] ' + rule_str
if rule.top:
# rule.top == True means we want this rule to be at the top.
our_rules += [rule_str]
else:
bot_rules += [rule_str]
our_rules += bot_rules
new_filter[rules_index:rules_index] = our_rules
new_filter[rules_index:rules_index] = our_chains
def _strip_packets_bytes(line):
# strip any [packet:byte] counts at start or end of lines
if line.startswith(':'):
# it's a chain, for example, ":neutron-billing - [0:0]"
line = line.split(':')[1]
line = line.split(' - [', 1)[0]
elif line.startswith('['):
# it's a rule, for example, "[0:0] -A neutron-billing..."
line = line.split('] ', 1)[1]
line = line.strip()
return line
seen_chains = set()
def _weed_out_duplicate_chains(line):
# ignore [packet:byte] counts at end of lines
if line.startswith(':'):
line = _strip_packets_bytes(line)
if line in seen_chains:
return False
else:
seen_chains.add(line)
# Leave it alone
return True
seen_rules = set()
def _weed_out_duplicate_rules(line):
if line.startswith('['):
line = _strip_packets_bytes(line)
if line in seen_rules:
return False
else:
seen_rules.add(line)
# Leave it alone
return True
def _weed_out_removes(line):
# We need to find exact matches here
if line.startswith(':'):
line = _strip_packets_bytes(line)
for chain in remove_chains:
if chain == line:
remove_chains.remove(chain)
return False
elif line.startswith('['):
line = _strip_packets_bytes(line)
for rule in remove_rules:
rule_str = _strip_packets_bytes(str(rule))
if rule_str == line:
remove_rules.remove(rule)
return False
# Leave it alone
return True
# We filter duplicates. Go through the chains and rules, letting
# the *last* occurrence take precedence since it could have a
# non-zero [packet:byte] count we want to preserve. We also filter
# out anything in the "remove" list.
new_filter.reverse()
new_filter = [line for line in new_filter
if _weed_out_duplicate_chains(line) and
_weed_out_duplicate_rules(line) and
_weed_out_removes(line)]
new_filter.reverse()
# flush lists, just in case we didn't find something
remove_chains.clear()
# delete in place; removing items while iterating over the same
# list would skip every other entry
del remove_rules[:]
return new_filter
def _get_traffic_counters_cmd_tables(self, chain, wrap=True):
name = get_chain_name(chain, wrap)
cmd_tables = [('iptables', key) for key, table in self.ipv4.items()
if name in table._select_chain_set(wrap)]
cmd_tables += [('ip6tables', key) for key, table in self.ipv6.items()
if name in table._select_chain_set(wrap)]
return cmd_tables
def get_traffic_counters(self, chain, wrap=True, zero=False):
"""Return the sum of the traffic counters of all rules of a chain."""
cmd_tables = self._get_traffic_counters_cmd_tables(chain, wrap)
if not cmd_tables:
LOG.warn(_('Attempted to get traffic counters of chain %s which '
'does not exist'), chain)
return
name = get_chain_name(chain, wrap)
acc = {'pkts': 0, 'bytes': 0}
for cmd, table in cmd_tables:
args = [cmd, '-t', table, '-L', name, '-n', '-v', '-x']
if zero:
args.append('-Z')
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
current_table = (self.execute(args,
root_helper=self.root_helper))
current_lines = current_table.split('\n')
for line in current_lines[2:]:
if not line:
break
data = line.split()
if (len(data) < 2 or
not data[0].isdigit() or
not data[1].isdigit()):
break
acc['pkts'] += int(data[0])
acc['bytes'] += int(data[1])
return acc
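# Illustrative usage sketch (assumed values): the typical lifecycle is to
# stage chains and rules in memory, then push them atomically via apply().
#
#   manager = IptablesManager(root_helper='sudo', use_ipv6=True)
#   manager.ipv4['filter'].add_chain('sg-chain')
#   manager.ipv4['filter'].add_rule('sg-chain', '-j ACCEPT')
#   manager.apply()  # runs iptables-save / iptables-restore under a lock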


@@ -0,0 +1,564 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from neutron.agent.linux import ip_lib
from neutron.agent.linux import utils
from neutron.common import exceptions
from neutron.common import utils as common_utils
from neutron.openstack.common import excutils
from neutron.openstack.common import jsonutils
from neutron.openstack.common import log as logging
from neutron.plugins.common import constants as p_const
# TODO(JLH) Should we remove the explicit include of the ovs plugin here
from neutron.plugins.openvswitch.common import constants
# Default timeout for ovs-vsctl command
DEFAULT_OVS_VSCTL_TIMEOUT = 10
OPTS = [
cfg.IntOpt('ovs_vsctl_timeout',
default=DEFAULT_OVS_VSCTL_TIMEOUT,
help=_('Timeout in seconds for ovs-vsctl commands')),
]
cfg.CONF.register_opts(OPTS)
LOG = logging.getLogger(__name__)
class VifPort(object):
def __init__(self, port_name, ofport, vif_id, vif_mac, switch):
self.port_name = port_name
self.ofport = ofport
self.vif_id = vif_id
self.vif_mac = vif_mac
self.switch = switch
def __str__(self):
return ("iface-id=" + self.vif_id + ", vif_mac=" +
self.vif_mac + ", port_name=" + self.port_name +
", ofport=" + str(self.ofport) + ", bridge_name=" +
self.switch.br_name)
class BaseOVS(object):
def __init__(self, root_helper):
self.root_helper = root_helper
self.vsctl_timeout = cfg.CONF.ovs_vsctl_timeout
def run_vsctl(self, args, check_error=False):
full_args = ["ovs-vsctl", "--timeout=%d" % self.vsctl_timeout] + args
try:
return utils.execute(full_args, root_helper=self.root_helper)
except Exception as e:
with excutils.save_and_reraise_exception() as ctxt:
LOG.error(_("Unable to execute %(cmd)s. "
"Exception: %(exception)s"),
{'cmd': full_args, 'exception': e})
if not check_error:
ctxt.reraise = False
def add_bridge(self, bridge_name):
self.run_vsctl(["--", "--may-exist", "add-br", bridge_name])
return OVSBridge(bridge_name, self.root_helper)
def delete_bridge(self, bridge_name):
self.run_vsctl(["--", "--if-exists", "del-br", bridge_name])
def bridge_exists(self, bridge_name):
try:
self.run_vsctl(['br-exists', bridge_name], check_error=True)
except RuntimeError as e:
with excutils.save_and_reraise_exception() as ctxt:
if 'Exit code: 2\n' in str(e):
ctxt.reraise = False
return False
return True
def get_bridge_name_for_port_name(self, port_name):
try:
return self.run_vsctl(['port-to-br', port_name], check_error=True)
except RuntimeError as e:
with excutils.save_and_reraise_exception() as ctxt:
if 'Exit code: 1\n' in str(e):
ctxt.reraise = False
def port_exists(self, port_name):
return bool(self.get_bridge_name_for_port_name(port_name))
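# Illustrative usage sketch (assumed bridge name): BaseOVS wraps ovs-vsctl,
# so creating and probing a bridge reduces to:
#
#   ovs = BaseOVS(root_helper='sudo')
#   br = ovs.add_bridge('br-int')   # returns an OVSBridge
#   assert ovs.bridge_exists('br-int')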
class OVSBridge(BaseOVS):
def __init__(self, br_name, root_helper):
super(OVSBridge, self).__init__(root_helper)
self.br_name = br_name
self.defer_apply_flows = False
self.deferred_flows = {'add': '', 'mod': '', 'del': ''}
def set_controller(self, controller_names):
vsctl_command = ['--', 'set-controller', self.br_name]
vsctl_command.extend(controller_names)
self.run_vsctl(vsctl_command, check_error=True)
def del_controller(self):
self.run_vsctl(['--', 'del-controller', self.br_name],
check_error=True)
def get_controller(self):
res = self.run_vsctl(['--', 'get-controller', self.br_name],
check_error=True)
if res:
return res.strip().split('\n')
return res
def set_secure_mode(self):
self.run_vsctl(['--', 'set-fail-mode', self.br_name, 'secure'],
check_error=True)
def set_protocols(self, protocols):
self.run_vsctl(['--', 'set', 'bridge', self.br_name,
"protocols=%s" % protocols],
check_error=True)
def create(self):
self.add_bridge(self.br_name)
def destroy(self):
self.delete_bridge(self.br_name)
def reset_bridge(self):
self.destroy()
self.create()
def add_port(self, port_name):
self.run_vsctl(["--", "--may-exist", "add-port", self.br_name,
port_name])
return self.get_port_ofport(port_name)
def delete_port(self, port_name):
self.run_vsctl(["--", "--if-exists", "del-port", self.br_name,
port_name])
def set_db_attribute(self, table_name, record, column, value):
args = ["set", table_name, record, "%s=%s" % (column, value)]
self.run_vsctl(args)
def clear_db_attribute(self, table_name, record, column):
args = ["clear", table_name, record, column]
self.run_vsctl(args)
def run_ofctl(self, cmd, args, process_input=None):
full_args = ["ovs-ofctl", cmd, self.br_name] + args
try:
return utils.execute(full_args, root_helper=self.root_helper,
process_input=process_input)
except Exception as e:
LOG.error(_("Unable to execute %(cmd)s. Exception: %(exception)s"),
{'cmd': full_args, 'exception': e})
def count_flows(self):
flow_list = self.run_ofctl("dump-flows", []).split("\n")[1:]
return len(flow_list) - 1
def remove_all_flows(self):
self.run_ofctl("del-flows", [])
def get_port_ofport(self, port_name):
ofport = self.db_get_val("Interface", port_name, "ofport")
# This can return a non-integer string, like '[]', so normalize any
# failure to a common invalid value
try:
int(ofport)
return ofport
except ValueError:
return constants.INVALID_OFPORT
def get_datapath_id(self):
return self.db_get_val('Bridge',
self.br_name, 'datapath_id').strip('"')
def add_flow(self, **kwargs):
flow_str = _build_flow_expr_str(kwargs, 'add')
if self.defer_apply_flows:
self.deferred_flows['add'] += flow_str + '\n'
else:
self.run_ofctl("add-flow", [flow_str])
def mod_flow(self, **kwargs):
flow_str = _build_flow_expr_str(kwargs, 'mod')
if self.defer_apply_flows:
self.deferred_flows['mod'] += flow_str + '\n'
else:
self.run_ofctl("mod-flows", [flow_str])
def delete_flows(self, **kwargs):
flow_expr_str = _build_flow_expr_str(kwargs, 'del')
if self.defer_apply_flows:
self.deferred_flows['del'] += flow_expr_str + '\n'
else:
self.run_ofctl("del-flows", [flow_expr_str])
def dump_flows_for_table(self, table):
retval = None
flow_str = "table=%s" % table
flows = self.run_ofctl("dump-flows", [flow_str])
if flows:
retval = '\n'.join(item for item in flows.splitlines()
if 'NXST' not in item)
return retval
def defer_apply_on(self):
LOG.debug(_('defer_apply_on'))
self.defer_apply_flows = True
def defer_apply_off(self):
LOG.debug(_('defer_apply_off'))
# Note(ethuleau): stash flows and disable deferred mode. Then apply
# flows from the stashed reference to be sure to not purge flows that
# were added between two ofctl commands.
stashed_deferred_flows, self.deferred_flows = (
self.deferred_flows, {'add': '', 'mod': '', 'del': ''}
)
self.defer_apply_flows = False
for action, flows in stashed_deferred_flows.items():
if flows:
LOG.debug(_('Applying following deferred flows '
'to bridge %s'), self.br_name)
for line in flows.splitlines():
LOG.debug(_('%(action)s: %(flow)s'),
{'action': action, 'flow': line})
self.run_ofctl('%s-flows' % action, ['-'], flows)
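# Example (illustrative sketch): batching flow changes so that each action
# is applied in a single ovs-ofctl invocation; the flow arguments are made
# up for illustration.
#
#   br.defer_apply_on()
#   br.add_flow(priority=10, in_port=1, actions='normal')
#   br.delete_flows(in_port=2)
#   br.defer_apply_off()   # flushes the stashed 'add' and 'del' flows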
def add_tunnel_port(self, port_name, remote_ip, local_ip,
tunnel_type=p_const.TYPE_GRE,
vxlan_udp_port=constants.VXLAN_UDP_PORT,
dont_fragment=True):
vsctl_command = ["--", "--may-exist", "add-port", self.br_name,
port_name]
vsctl_command.extend(["--", "set", "Interface", port_name,
"type=%s" % tunnel_type])
if tunnel_type == p_const.TYPE_VXLAN:
# Only set the VXLAN UDP port if it's not the default
if vxlan_udp_port != constants.VXLAN_UDP_PORT:
vsctl_command.append("options:dst_port=%s" % vxlan_udp_port)
vsctl_command.append(("options:df_default=%s" %
bool(dont_fragment)).lower())
vsctl_command.extend(["options:remote_ip=%s" % remote_ip,
"options:local_ip=%s" % local_ip,
"options:in_key=flow",
"options:out_key=flow"])
self.run_vsctl(vsctl_command)
ofport = self.get_port_ofport(port_name)
if (tunnel_type == p_const.TYPE_VXLAN and
ofport == constants.INVALID_OFPORT):
LOG.error(_('Unable to create VXLAN tunnel port. Please ensure '
'that an openvswitch version that supports VXLAN is '
'installed.'))
return ofport
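# Example (illustrative sketch): creating a VXLAN tunnel port between two
# hypervisors; the bridge name and IP addresses are made up.
#
#   br = OVSBridge('br-tun', root_helper='sudo')
#   ofport = br.add_tunnel_port('vxlan-0a000001', remote_ip='10.0.0.1',
#                               local_ip='10.0.0.2',
#                               tunnel_type=p_const.TYPE_VXLAN)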
def add_patch_port(self, local_name, remote_name):
self.run_vsctl(["add-port", self.br_name, local_name,
"--", "set", "Interface", local_name,
"type=patch", "options:peer=%s" % remote_name])
return self.get_port_ofport(local_name)
def db_get_map(self, table, record, column, check_error=False):
output = self.run_vsctl(["get", table, record, column], check_error)
if output:
output_str = output.rstrip("\n\r")
return self.db_str_to_map(output_str)
return {}
def db_get_val(self, table, record, column, check_error=False):
output = self.run_vsctl(["get", table, record, column], check_error)
if output:
return output.rstrip("\n\r")
def db_str_to_map(self, full_str):
elems = full_str.strip("{}").split(", ")
ret = {}
for e in elems:
if e.find("=") == -1:
continue
arr = e.split("=")
ret[arr[0]] = arr[1].strip("\"")
return ret
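# Example: db_str_to_map('{csum="true", key="flow"}') returns
# {'csum': 'true', 'key': 'flow'}; entries without '=' are skipped.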
def get_port_name_list(self):
res = self.run_vsctl(["list-ports", self.br_name], check_error=True)
if res:
return res.strip().split("\n")
return []
def get_port_stats(self, port_name):
return self.db_get_map("Interface", port_name, "statistics")
def get_xapi_iface_id(self, xs_vif_uuid):
args = ["xe", "vif-param-get", "param-name=other-config",
"param-key=nicira-iface-id", "uuid=%s" % xs_vif_uuid]
try:
return utils.execute(args, root_helper=self.root_helper).strip()
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_("Unable to execute %(cmd)s. "
"Exception: %(exception)s"),
{'cmd': args, 'exception': e})
# returns a VIF object for each VIF port
def get_vif_ports(self):
edge_ports = []
port_names = self.get_port_name_list()
for name in port_names:
external_ids = self.db_get_map("Interface", name, "external_ids",
check_error=True)
ofport = self.db_get_val("Interface", name, "ofport",
check_error=True)
if "iface-id" in external_ids and "attached-mac" in external_ids:
p = VifPort(name, ofport, external_ids["iface-id"],
external_ids["attached-mac"], self)
edge_ports.append(p)
elif ("xs-vif-uuid" in external_ids and
"attached-mac" in external_ids):
# if this is a xenserver and iface-id is not automatically
# synced to OVS from XAPI, we grab it from XAPI directly
iface_id = self.get_xapi_iface_id(external_ids["xs-vif-uuid"])
p = VifPort(name, ofport, iface_id,
external_ids["attached-mac"], self)
edge_ports.append(p)
return edge_ports
def get_vif_port_set(self):
port_names = self.get_port_name_list()
edge_ports = set()
args = ['--format=json', '--', '--columns=name,external_ids,ofport',
'list', 'Interface']
result = self.run_vsctl(args, check_error=True)
if not result:
return edge_ports
for row in jsonutils.loads(result)['data']:
name = row[0]
if name not in port_names:
continue
external_ids = dict(row[1][1])
# Do not consider VIFs which aren't yet ready
# This can happen when ofport values are either [] or ["set", []]
# We will therefore consider only integer values for ofport
ofport = row[2]
try:
int_ofport = int(ofport)
except (ValueError, TypeError):
LOG.warn(_("Found not yet ready openvswitch port: %s"), row)
else:
if int_ofport > 0:
if ("iface-id" in external_ids and
"attached-mac" in external_ids):
edge_ports.add(external_ids['iface-id'])
elif ("xs-vif-uuid" in external_ids and
"attached-mac" in external_ids):
# if this is a xenserver and iface-id is not
# automatically synced to OVS from XAPI, we grab it
# from XAPI directly
iface_id = self.get_xapi_iface_id(
external_ids["xs-vif-uuid"])
edge_ports.add(iface_id)
else:
LOG.warn(_("Found failed openvswitch port: %s"), row)
return edge_ports
def get_port_tag_dict(self):
"""Get a dict of port names and associated vlan tags.
e.g. the returned dict is of the following form::
{u'int-br-eth2': [],
u'patch-tun': [],
u'qr-76d9e6b6-21': 1,
u'tapce5318ff-78': 1,
u'tape1400310-e6': 1}
The TAG ID is only available in the "Port" table and is not available
in the "Interface" table queried by the get_vif_port_set() method.
"""
port_names = self.get_port_name_list()
args = ['--format=json', '--', '--columns=name,tag', 'list', 'Port']
result = self.run_vsctl(args, check_error=True)
port_tag_dict = {}
if not result:
return port_tag_dict
for name, tag in jsonutils.loads(result)['data']:
if name not in port_names:
continue
# 'tag' can be [u'set', []] or an integer
if isinstance(tag, list):
tag = tag[1]
port_tag_dict[name] = tag
return port_tag_dict
def get_vif_port_by_id(self, port_id):
args = ['--format=json', '--', '--columns=external_ids,name,ofport',
'find', 'Interface',
'external_ids:iface-id="%s"' % port_id]
result = self.run_vsctl(args)
if not result:
return
json_result = jsonutils.loads(result)
try:
# Retrieve the indexes of the columns we're looking for
headings = json_result['headings']
ext_ids_idx = headings.index('external_ids')
name_idx = headings.index('name')
ofport_idx = headings.index('ofport')
# If the data attribute is missing or empty, the line below will raise
# an exception which will be captured in this block.
# We won't deal with the possibility of ovs-vsctl returning multiple
# rows, since the interface identifier is unique
data = json_result['data'][0]
port_name = data[name_idx]
switch = get_bridge_for_iface(self.root_helper, port_name)
if switch != self.br_name:
LOG.info(_("Port: %(port_name)s is on %(switch)s,"
" not on %(br_name)s"), {'port_name': port_name,
'switch': switch,
'br_name': self.br_name})
return
ofport = data[ofport_idx]
# ofport must be integer otherwise return None
if not isinstance(ofport, int) or ofport == -1:
LOG.warn(_("ofport: %(ofport)s for VIF: %(vif)s is not a "
"positive integer"), {'ofport': ofport,
'vif': port_id})
return
# Find VIF's mac address in external ids
ext_id_dict = dict((item[0], item[1]) for item in
data[ext_ids_idx][1])
vif_mac = ext_id_dict['attached-mac']
return VifPort(port_name, ofport, port_id, vif_mac, self)
except Exception as e:
LOG.warn(_("Unable to parse interface details. Exception: %s"), e)
return
def delete_ports(self, all_ports=False):
if all_ports:
port_names = self.get_port_name_list()
else:
port_names = (port.port_name for port in self.get_vif_ports())
for port_name in port_names:
self.delete_port(port_name)
def get_local_port_mac(self):
"""Retrieve the mac of the bridge's local port."""
address = ip_lib.IPDevice(self.br_name, self.root_helper).link.address
if address:
return address
else:
msg = _('Unable to determine mac address for %s') % self.br_name
raise Exception(msg)
def __enter__(self):
self.create()
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.destroy()
def get_bridge_for_iface(root_helper, iface):
args = ["ovs-vsctl", "--timeout=%d" % cfg.CONF.ovs_vsctl_timeout,
"iface-to-br", iface]
try:
return utils.execute(args, root_helper=root_helper).strip()
except Exception:
LOG.exception(_("Interface %s not found."), iface)
return None
def get_bridges(root_helper):
args = ["ovs-vsctl", "--timeout=%d" % cfg.CONF.ovs_vsctl_timeout,
"list-br"]
try:
return utils.execute(args, root_helper=root_helper).strip().split("\n")
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.exception(_("Unable to retrieve bridges. Exception: %s"), e)
def get_bridge_external_bridge_id(root_helper, bridge):
args = ["ovs-vsctl", "--timeout=2", "br-get-external-id",
bridge, "bridge-id"]
try:
return utils.execute(args, root_helper=root_helper).strip()
except Exception:
LOG.exception(_("Bridge %s not found."), bridge)
return None
def _build_flow_expr_str(flow_dict, cmd):
flow_expr_arr = []
actions = None
if cmd == 'add':
flow_expr_arr.append("hard_timeout=%s" %
flow_dict.pop('hard_timeout', '0'))
flow_expr_arr.append("idle_timeout=%s" %
flow_dict.pop('idle_timeout', '0'))
flow_expr_arr.append("priority=%s" %
flow_dict.pop('priority', '1'))
elif 'priority' in flow_dict:
msg = _("Cannot match priority on flow deletion or modification")
raise exceptions.InvalidInput(error_message=msg)
if cmd != 'del':
if "actions" not in flow_dict:
msg = _("Must specify one or more actions on flow addition"
" or modification")
raise exceptions.InvalidInput(error_message=msg)
actions = "actions=%s" % flow_dict.pop('actions')
for key, value in flow_dict.iteritems():
if key == 'proto':
flow_expr_arr.append(value)
else:
flow_expr_arr.append("%s=%s" % (key, str(value)))
if actions:
flow_expr_arr.append(actions)
return ','.join(flow_expr_arr)
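# Example: for an 'add' command the timeouts and priority are emitted
# first, then the remaining matches, then the actions, e.g.
#   _build_flow_expr_str({'in_port': 1, 'actions': 'normal'}, 'add')
# returns 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=normal'.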
def ofctl_arg_supported(root_helper, cmd, args):
'''Verify if the ovs-ofctl binary supports a command with specific args.
:param root_helper: utility to use when running shell cmds.
:param cmd: ovs-ofctl command to use for the test.
:param args: arguments to test with the command.
:returns: a boolean indicating whether the args are supported.
'''
supported = True
br_name = 'br-test-%s' % common_utils.get_random_string(6)
test_br = OVSBridge(br_name, root_helper)
test_br.reset_bridge()
full_args = ["ovs-ofctl", cmd, test_br.br_name] + args
try:
utils.execute(full_args, root_helper=root_helper)
except Exception:
supported = False
test_br.destroy()
return supported
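# Example (illustrative sketch): probing whether the installed ovs-ofctl
# understands a given flow syntax; the match shown here is made up.
#
#   if ofctl_arg_supported('sudo', 'add-flow',
#                          ['tcp,tcp_flags=+syn,actions=drop']):
#       ...  # safe to rely on tcp_flags matching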

View File

@ -0,0 +1,105 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
from neutron.agent.linux import async_process
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
class OvsdbMonitor(async_process.AsyncProcess):
"""Manages an invocation of 'ovsdb-client monitor'."""
def __init__(self, table_name, columns=None, format=None,
root_helper=None, respawn_interval=None):
cmd = ['ovsdb-client', 'monitor', table_name]
if columns:
cmd.append(','.join(columns))
if format:
cmd.append('--format=%s' % format)
super(OvsdbMonitor, self).__init__(cmd,
root_helper=root_helper,
respawn_interval=respawn_interval)
def _read_stdout(self):
data = self._process.stdout.readline()
if not data:
return
self._stdout_lines.put(data)
LOG.debug(_('Output received from ovsdb monitor: %s') % data)
return data
def _read_stderr(self):
data = super(OvsdbMonitor, self)._read_stderr()
if data:
LOG.error(_('Error received from ovsdb monitor: %s') % data)
# Do not return value to ensure that stderr output will
# stop the monitor.
class SimpleInterfaceMonitor(OvsdbMonitor):
"""Monitors the Interface table of the local host's ovsdb for changes.
The has_updates() method indicates whether changes to the ovsdb
Interface table have been detected since the monitor started or
since the previous access.
"""
def __init__(self, root_helper=None, respawn_interval=None):
super(SimpleInterfaceMonitor, self).__init__(
'Interface',
columns=['name', 'ofport'],
format='json',
root_helper=root_helper,
respawn_interval=respawn_interval,
)
self.data_received = False
@property
def is_active(self):
return (self.data_received and
self._kill_event and
not self._kill_event.ready())
@property
def has_updates(self):
"""Indicate whether the ovsdb Interface table has been updated.
True will be returned if the monitor process is not active.
This 'failing open' minimizes the risk of falsely indicating
the absence of updates at the expense of potential false
positives.
"""
return bool(list(self.iter_stdout())) or not self.is_active
def start(self, block=False, timeout=5):
super(SimpleInterfaceMonitor, self).start()
if block:
eventlet.timeout.Timeout(timeout)
while not self.is_active:
eventlet.sleep()
def _kill(self, *args, **kwargs):
self.data_received = False
super(SimpleInterfaceMonitor, self)._kill(*args, **kwargs)
def _read_stdout(self):
data = super(SimpleInterfaceMonitor, self)._read_stdout()
if data and not self.data_received:
self.data_received = True
return data
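# Example (illustrative sketch): an agent can use the monitor to skip
# expensive scans when no interfaces changed; 'sudo' is illustrative.
#
#   monitor = SimpleInterfaceMonitor(root_helper='sudo')
#   monitor.start(block=True)
#   if monitor.has_updates:
#       ...  # rescan ports; has_updates also fails open when inactive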

View File

@ -0,0 +1,112 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import eventlet
from neutron.agent.linux import ovsdb_monitor
from neutron.plugins.openvswitch.common import constants
@contextlib.contextmanager
def get_polling_manager(minimize_polling=False,
root_helper=None,
ovsdb_monitor_respawn_interval=(
constants.DEFAULT_OVSDBMON_RESPAWN)):
if minimize_polling:
pm = InterfacePollingMinimizer(
root_helper=root_helper,
ovsdb_monitor_respawn_interval=ovsdb_monitor_respawn_interval)
pm.start()
else:
pm = AlwaysPoll()
try:
yield pm
finally:
if minimize_polling:
pm.stop()
class BasePollingManager(object):
def __init__(self):
self._force_polling = False
self._polling_completed = True
def force_polling(self):
self._force_polling = True
def polling_completed(self):
self._polling_completed = True
def _is_polling_required(self):
raise NotImplementedError()
@property
def is_polling_required(self):
# Always consume the updates to minimize polling.
polling_required = self._is_polling_required()
# Polling is required regardless of whether updates have been
# detected.
if self._force_polling:
self._force_polling = False
polling_required = True
# Polling is required if not yet done for previously detected
# updates.
if not self._polling_completed:
polling_required = True
if polling_required:
# Track whether polling has been completed to ensure that
# polling can be required until the caller indicates via a
# call to polling_completed() that polling has been
# successfully performed.
self._polling_completed = False
return polling_required
class AlwaysPoll(BasePollingManager):
@property
def is_polling_required(self):
return True
class InterfacePollingMinimizer(BasePollingManager):
"""Monitors ovsdb to determine when polling is required."""
def __init__(self, root_helper=None,
ovsdb_monitor_respawn_interval=(
constants.DEFAULT_OVSDBMON_RESPAWN)):
super(InterfacePollingMinimizer, self).__init__()
self._monitor = ovsdb_monitor.SimpleInterfaceMonitor(
root_helper=root_helper,
respawn_interval=ovsdb_monitor_respawn_interval)
def start(self):
self._monitor.start()
def stop(self):
self._monitor.stop()
def _is_polling_required(self):
# Maximize the chances of update detection having a chance to
# collect output.
eventlet.sleep()
return self._monitor.has_updates
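# Example (illustrative sketch): an agent main loop consuming a polling
# manager from get_polling_manager() above.
#
#   with get_polling_manager(minimize_polling=True,
#                            root_helper='sudo') as pm:
#       while True:
#           if pm.is_polling_required:
#               ...  # scan for port changes
#               pm.polling_completed()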

View File

@ -0,0 +1,128 @@
# Copyright 2012 Locaweb.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Juliano Martinez, Locaweb.
import fcntl
import os
import shlex
import socket
import struct
import tempfile
from eventlet.green import subprocess
from eventlet import greenthread
from neutron.common import constants
from neutron.common import utils
from neutron.openstack.common import excutils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
def create_process(cmd, root_helper=None, addl_env=None):
"""Create a process object for the given command.
The return value will be a tuple of the process object and the
list of command arguments used to create it.
"""
if root_helper:
cmd = shlex.split(root_helper) + cmd
cmd = map(str, cmd)
LOG.debug(_("Running command: %s"), cmd)
env = os.environ.copy()
if addl_env:
env.update(addl_env)
obj = utils.subprocess_popen(cmd, shell=False,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env)
return obj, cmd
def execute(cmd, root_helper=None, process_input=None, addl_env=None,
check_exit_code=True, return_stderr=False):
try:
obj, cmd = create_process(cmd, root_helper=root_helper,
addl_env=addl_env)
_stdout, _stderr = (process_input and
obj.communicate(process_input) or
obj.communicate())
obj.stdin.close()
m = _("\nCommand: %(cmd)s\nExit code: %(code)s\nStdout: %(stdout)r\n"
"Stderr: %(stderr)r") % {'cmd': cmd, 'code': obj.returncode,
'stdout': _stdout, 'stderr': _stderr}
if obj.returncode:
LOG.error(m)
if check_exit_code:
raise RuntimeError(m)
else:
LOG.debug(m)
finally:
# NOTE(termie): this appears to be necessary to let the subprocess
# call clean something up in between calls, without
# it two execute calls in a row hangs the second one
greenthread.sleep(0)
return return_stderr and (_stdout, _stderr) or _stdout
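# Example (illustrative sketch): run a command through the root helper and
# capture both output streams; failures raise RuntimeError by default.
#
#   out, err = execute(['ip', 'link', 'show'], root_helper='sudo',
#                      return_stderr=True)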
def get_interface_mac(interface):
MAC_START = 18
MAC_END = 24
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
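# 0x8927 is the Linux SIOCGIFHWADDR ioctl, which returns the hardware
# (MAC) address of the interface.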
info = fcntl.ioctl(s.fileno(), 0x8927,
struct.pack('256s', interface[:constants.DEVICE_NAME_MAX_LEN]))
return ''.join(['%02x:' % ord(char)
for char in info[MAC_START:MAC_END]])[:-1]
def replace_file(file_name, data):
"""Replaces the contents of file_name with data in a safe manner.
First write to a temp file and then rename. Since POSIX renames are
atomic, the file is unlikely to be corrupted by competing writes.
We create the tempfile on the same device to ensure that it can be renamed.
"""
base_dir = os.path.dirname(os.path.abspath(file_name))
tmp_file = tempfile.NamedTemporaryFile('w+', dir=base_dir, delete=False)
tmp_file.write(data)
tmp_file.close()
os.chmod(tmp_file.name, 0o644)
os.rename(tmp_file.name, file_name)
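# Example (illustrative): atomically (re)write a config file consumed by
# another process, e.g. replace_file('/path/to/hosts', data); readers see
# either the old or the new contents, never a partial write.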
def find_child_pids(pid):
"""Retrieve a list of the pids of child processes of the given pid."""
try:
raw_pids = execute(['ps', '--ppid', pid, '-o', 'pid='])
except RuntimeError as e:
# Unexpected errors are the responsibility of the caller
with excutils.save_and_reraise_exception() as ctxt:
# Exception has already been logged by execute
no_children_found = 'Exit code: 1' in str(e)
if no_children_found:
ctxt.reraise = False
return []
return [x.strip() for x in raw_pids.split('\n') if x.strip()]
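# Example: find_child_pids('4242') parses 'ps --ppid 4242 -o pid=' output
# into a list such as ['4243', '4244']; ps exits with code 1 when the
# process has no children, which is treated as an empty result above.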

View File

@ -0,0 +1,15 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Mark McClain, DreamHost

View File

@ -0,0 +1,390 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Mark McClain, DreamHost
import hashlib
import hmac
import os
import socket
import sys
import eventlet
eventlet.monkey_patch()
import httplib2
from neutronclient.v2_0 import client
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
import webob
from neutron.agent.common import config as agent_conf
from neutron.agent import rpc as agent_rpc
from neutron.common import config
from neutron.common import constants as n_const
from neutron.common import topics
from neutron.common import utils
from neutron import context
from neutron.openstack.common.cache import cache
from neutron.openstack.common import excutils
from neutron.openstack.common import log as logging
from neutron.openstack.common import loopingcall
from neutron.openstack.common import service
from neutron import wsgi
LOG = logging.getLogger(__name__)
class MetadataProxyHandler(object):
OPTS = [
cfg.StrOpt('admin_user',
help=_("Admin user")),
cfg.StrOpt('admin_password',
help=_("Admin password"),
secret=True),
cfg.StrOpt('admin_tenant_name',
help=_("Admin tenant name")),
cfg.StrOpt('auth_url',
help=_("Authentication URL")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.StrOpt('auth_region',
help=_("Authentication region")),
cfg.BoolOpt('auth_insecure',
default=False,
help=_("Turn off verification of the certificate for"
" ssl")),
cfg.StrOpt('auth_ca_cert',
help=_("Certificate Authority public key (CA cert) "
"file for ssl")),
cfg.StrOpt('endpoint_type',
default='adminURL',
help=_("Network service endpoint type to pull from "
"the keystone catalog")),
cfg.StrOpt('nova_metadata_ip', default='127.0.0.1',
help=_("IP address used by Nova metadata server.")),
cfg.IntOpt('nova_metadata_port',
default=8775,
help=_("TCP Port used by Nova metadata server.")),
cfg.StrOpt('metadata_proxy_shared_secret',
default='',
help=_('Shared secret to sign instance-id request'),
secret=True),
cfg.StrOpt('nova_metadata_protocol',
default='http',
choices=['http', 'https'],
help=_("Protocol to access nova metadata, http or https")),
cfg.BoolOpt('nova_metadata_insecure', default=False,
help=_("Allow to perform insecure SSL (https) requests to "
"nova metadata")),
cfg.StrOpt('nova_client_cert',
default='',
help=_("Client certificate for nova metadata api server.")),
cfg.StrOpt('nova_client_priv_key',
default='',
help=_("Private key of client certificate."))
]
def __init__(self, conf):
self.conf = conf
self.auth_info = {}
if self.conf.cache_url:
self._cache = cache.get_cache(self.conf.cache_url)
else:
self._cache = False
def _get_neutron_client(self):
qclient = client.Client(
username=self.conf.admin_user,
password=self.conf.admin_password,
tenant_name=self.conf.admin_tenant_name,
auth_url=self.conf.auth_url,
auth_strategy=self.conf.auth_strategy,
region_name=self.conf.auth_region,
token=self.auth_info.get('auth_token'),
insecure=self.conf.auth_insecure,
ca_cert=self.conf.auth_ca_cert,
endpoint_url=self.auth_info.get('endpoint_url'),
endpoint_type=self.conf.endpoint_type
)
return qclient
@webob.dec.wsgify(RequestClass=webob.Request)
def __call__(self, req):
try:
LOG.debug(_("Request: %s"), req)
instance_id, tenant_id = self._get_instance_and_tenant_id(req)
if instance_id:
return self._proxy_request(instance_id, tenant_id, req)
else:
return webob.exc.HTTPNotFound()
except Exception:
LOG.exception(_("Unexpected error."))
msg = _('An unknown error has occurred. '
'Please try your request again.')
return webob.exc.HTTPInternalServerError(explanation=unicode(msg))
@utils.cache_method_results
def _get_router_networks(self, router_id):
"""Find all networks connected to given router."""
qclient = self._get_neutron_client()
internal_ports = qclient.list_ports(
device_id=router_id,
device_owner=n_const.DEVICE_OWNER_ROUTER_INTF)['ports']
return tuple(p['network_id'] for p in internal_ports)
@utils.cache_method_results
def _get_ports_for_remote_address(self, remote_address, networks):
"""Get list of ports that has given ip address and are part of
given networks.
:param networks: list of networks in which the ip address will be
searched for
"""
qclient = self._get_neutron_client()
return qclient.list_ports(
network_id=networks,
fixed_ips=['ip_address=%s' % remote_address])['ports']
def _get_ports(self, remote_address, network_id=None, router_id=None):
"""Search for all ports that contain passed ip address and belongs to
given network.
If no network is passed ports are searched on all networks connected to
given router. Either one of network_id or router_id must be passed.
"""
if network_id:
networks = (network_id,)
elif router_id:
networks = self._get_router_networks(router_id)
else:
raise TypeError(_("Either one of parameter network_id or router_id"
" must be passed to _get_ports method."))
return self._get_ports_for_remote_address(remote_address, networks)
def _get_instance_and_tenant_id(self, req):
qclient = self._get_neutron_client()
remote_address = req.headers.get('X-Forwarded-For')
network_id = req.headers.get('X-Neutron-Network-ID')
router_id = req.headers.get('X-Neutron-Router-ID')
ports = self._get_ports(remote_address, network_id, router_id)
self.auth_info = qclient.get_auth_info()
if len(ports) == 1:
return ports[0]['device_id'], ports[0]['tenant_id']
return None, None
def _proxy_request(self, instance_id, tenant_id, req):
headers = {
'X-Forwarded-For': req.headers.get('X-Forwarded-For'),
'X-Instance-ID': instance_id,
'X-Tenant-ID': tenant_id,
'X-Instance-ID-Signature': self._sign_instance_id(instance_id)
}
nova_ip_port = '%s:%s' % (self.conf.nova_metadata_ip,
self.conf.nova_metadata_port)
url = urlparse.urlunsplit((
self.conf.nova_metadata_protocol,
nova_ip_port,
req.path_info,
req.query_string,
''))
h = httplib2.Http(ca_certs=self.conf.auth_ca_cert,
disable_ssl_certificate_validation=
self.conf.nova_metadata_insecure)
if self.conf.nova_client_cert and self.conf.nova_client_priv_key:
h.add_certificate(self.conf.nova_client_priv_key,
self.conf.nova_client_cert,
nova_ip_port)
resp, content = h.request(url, method=req.method, headers=headers,
body=req.body)
if resp.status == 200:
LOG.debug(str(resp))
req.response.content_type = resp['content-type']
req.response.body = content
return req.response
elif resp.status == 403:
msg = _(
'The remote metadata server responded with Forbidden. This '
'response usually occurs when shared secrets do not match.'
)
LOG.warn(msg)
return webob.exc.HTTPForbidden()
elif resp.status == 404:
return webob.exc.HTTPNotFound()
elif resp.status == 409:
return webob.exc.HTTPConflict()
elif resp.status == 500:
msg = _(
'Remote metadata server experienced an internal server error.'
)
LOG.warn(msg)
return webob.exc.HTTPInternalServerError(explanation=unicode(msg))
else:
raise Exception(_('Unexpected response code: %s') % resp.status)
def _sign_instance_id(self, instance_id):
return hmac.new(self.conf.metadata_proxy_shared_secret,
instance_id,
hashlib.sha256).hexdigest()
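# The Nova metadata service recomputes this HMAC-SHA256 over the instance
# id with the same shared secret and rejects mismatches, which surfaces
# here as the 403/Forbidden branch handled in _proxy_request() above.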
class UnixDomainHttpProtocol(eventlet.wsgi.HttpProtocol):
def __init__(self, request, client_address, server):
if client_address == '':
client_address = ('<local>', 0)
# base class is old-style, so super does not work properly
eventlet.wsgi.HttpProtocol.__init__(self, request, client_address,
server)
class WorkerService(wsgi.WorkerService):
def start(self):
self._server = self._service.pool.spawn(self._service._run,
self._application,
self._service._socket)
class UnixDomainWSGIServer(wsgi.Server):
def __init__(self, name):
self._socket = None
self._launcher = None
self._server = None
super(UnixDomainWSGIServer, self).__init__(name)
def start(self, application, file_socket, workers, backlog):
self._socket = eventlet.listen(file_socket,
family=socket.AF_UNIX,
backlog=backlog)
if workers < 1:
# For the case where only one process is required.
self._server = self.pool.spawn_n(self._run, application,
self._socket)
else:
# Minimize the cost of checking for child exit by extending the
# wait interval past the default of 0.01s.
self._launcher = service.ProcessLauncher(wait_interval=1.0)
self._server = WorkerService(self, application)
self._launcher.launch_service(self._server, workers=workers)
def _run(self, application, socket):
"""Start a WSGI service in a new green thread."""
logger = logging.getLogger('eventlet.wsgi.server')
eventlet.wsgi.server(socket,
application,
custom_pool=self.pool,
protocol=UnixDomainHttpProtocol,
log=logging.WritableLogger(logger))
class UnixDomainMetadataProxy(object):
OPTS = [
cfg.StrOpt('metadata_proxy_socket',
default='$state_path/metadata_proxy',
help=_('Location for Metadata Proxy UNIX domain socket')),
cfg.IntOpt('metadata_workers',
default=utils.cpu_count() // 2,
help=_('Number of separate worker processes for metadata '
'server')),
cfg.IntOpt('metadata_backlog',
default=4096,
help=_('Number of backlog requests to configure the '
'metadata server socket with'))
]
def __init__(self, conf):
self.conf = conf
dirname = os.path.dirname(cfg.CONF.metadata_proxy_socket)
if os.path.isdir(dirname):
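# Remove a stale socket left over from a previous run; the unlink
# failure is only re-raised if the socket still exists afterwards.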
try:
os.unlink(cfg.CONF.metadata_proxy_socket)
except OSError:
with excutils.save_and_reraise_exception() as ctxt:
if not os.path.exists(cfg.CONF.metadata_proxy_socket):
ctxt.reraise = False
else:
os.makedirs(dirname, 0o755)
self._init_state_reporting()
def _init_state_reporting(self):
self.context = context.get_admin_context_without_session()
self.state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)
self.agent_state = {
'binary': 'neutron-metadata-agent',
'host': cfg.CONF.host,
'topic': 'N/A',
'configurations': {
'metadata_proxy_socket': cfg.CONF.metadata_proxy_socket,
'nova_metadata_ip': cfg.CONF.nova_metadata_ip,
'nova_metadata_port': cfg.CONF.nova_metadata_port,
},
'start_flag': True,
'agent_type': n_const.AGENT_TYPE_METADATA}
report_interval = cfg.CONF.AGENT.report_interval
if report_interval:
self.heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
self.heartbeat.start(interval=report_interval)
def _report_state(self):
try:
self.state_rpc.report_state(
self.context,
self.agent_state,
use_call=self.agent_state.get('start_flag'))
except AttributeError:
# This means the server does not support report_state
LOG.warn(_('Neutron server does not support state report.'
' State report for this agent will be disabled.'))
self.heartbeat.stop()
return
except Exception:
LOG.exception(_("Failed reporting state!"))
return
self.agent_state.pop('start_flag', None)
def run(self):
server = UnixDomainWSGIServer('neutron-metadata-agent')
server.start(MetadataProxyHandler(self.conf),
self.conf.metadata_proxy_socket,
workers=self.conf.metadata_workers,
backlog=self.conf.metadata_backlog)
server.wait()
def main():
cfg.CONF.register_opts(UnixDomainMetadataProxy.OPTS)
cfg.CONF.register_opts(MetadataProxyHandler.OPTS)
cache.register_oslo_configs(cfg.CONF)
cfg.CONF.set_default(name='cache_url', default='memory://?default_ttl=5')
agent_conf.register_agent_state_opts_helper(cfg.CONF)
config.init(sys.argv[1:])
config.setup_logging(cfg.CONF)
utils.log_opt_values(LOG)
proxy = UnixDomainMetadataProxy(cfg.CONF)
proxy.run()

View File

@ -0,0 +1,182 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Mark McClain, DreamHost
import httplib
import socket
import eventlet
eventlet.monkey_patch()
import httplib2
from oslo.config import cfg
import six.moves.urllib.parse as urlparse
import webob
from neutron.agent.linux import daemon
from neutron.common import config
from neutron.common import utils
from neutron.openstack.common import log as logging
from neutron import wsgi
LOG = logging.getLogger(__name__)
class UnixDomainHTTPConnection(httplib.HTTPConnection):
"""Connection class for HTTP over UNIX domain socket."""
def __init__(self, host, port=None, strict=None, timeout=None,
proxy_info=None):
httplib.HTTPConnection.__init__(self, host, port, strict)
self.timeout = timeout
def connect(self):
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
if self.timeout:
self.sock.settimeout(self.timeout)
self.sock.connect(cfg.CONF.metadata_proxy_socket)
class NetworkMetadataProxyHandler(object):
"""Proxy AF_INET metadata request through Unix Domain socket.
The Unix domain socket allows the proxy access resource that are not
accessible within the isolated tenant context.
"""
def __init__(self, network_id=None, router_id=None):
self.network_id = network_id
self.router_id = router_id
if network_id is None and router_id is None:
msg = _('network_id and router_id are None. One must be provided.')
raise ValueError(msg)
@webob.dec.wsgify(RequestClass=webob.Request)
def __call__(self, req):
LOG.debug(_("Request: %s"), req)
try:
return self._proxy_request(req.remote_addr,
req.method,
req.path_info,
req.query_string,
req.body)
except Exception:
LOG.exception(_("Unexpected error."))
msg = _('An unknown error has occurred. '
'Please try your request again.')
return webob.exc.HTTPInternalServerError(explanation=unicode(msg))
def _proxy_request(self, remote_address, method, path_info,
query_string, body):
headers = {
'X-Forwarded-For': remote_address,
}
if self.router_id:
headers['X-Neutron-Router-ID'] = self.router_id
else:
headers['X-Neutron-Network-ID'] = self.network_id
url = urlparse.urlunsplit((
'http',
'169.254.169.254', # a dummy value to make the request proper
path_info,
query_string,
''))
h = httplib2.Http()
resp, content = h.request(
url,
method=method,
headers=headers,
body=body,
connection_type=UnixDomainHTTPConnection)
if resp.status == 200:
LOG.debug(resp)
LOG.debug(content)
response = webob.Response()
response.status = resp.status
response.headers['Content-Type'] = resp['content-type']
response.body = content
return response
elif resp.status == 404:
return webob.exc.HTTPNotFound()
elif resp.status == 409:
return webob.exc.HTTPConflict()
elif resp.status == 500:
msg = _(
'Remote metadata server experienced an internal server error.'
)
LOG.debug(msg)
return webob.exc.HTTPInternalServerError(explanation=unicode(msg))
else:
raise Exception(_('Unexpected response code: %s') % resp.status)
class ProxyDaemon(daemon.Daemon):
def __init__(self, pidfile, port, network_id=None, router_id=None):
uuid = network_id or router_id
super(ProxyDaemon, self).__init__(pidfile, uuid=uuid)
self.network_id = network_id
self.router_id = router_id
self.port = port
def run(self):
handler = NetworkMetadataProxyHandler(
self.network_id,
self.router_id)
proxy = wsgi.Server('neutron-network-metadata-proxy')
proxy.start(handler, self.port)
proxy.wait()
def main():
opts = [
cfg.StrOpt('network_id',
help=_('Network that will have instance metadata '
'proxied.')),
cfg.StrOpt('router_id',
help=_('Router that will have connected instances\' '
'metadata proxied.')),
cfg.StrOpt('pid_file',
help=_('Location of pid file of this process.')),
cfg.BoolOpt('daemonize',
default=True,
help=_('Run as daemon.')),
cfg.IntOpt('metadata_port',
default=9697,
help=_("TCP Port to listen for metadata server "
"requests.")),
cfg.StrOpt('metadata_proxy_socket',
default='$state_path/metadata_proxy',
help=_('Location of Metadata Proxy UNIX domain '
'socket'))
]
cfg.CONF.register_cli_opts(opts)
# Don't get the default configuration file
cfg.CONF(project='neutron', default_config_files=[])
config.setup_logging(cfg.CONF)
utils.log_opt_values(LOG)
proxy = ProxyDaemon(cfg.CONF.pid_file,
cfg.CONF.metadata_port,
network_id=cfg.CONF.network_id,
router_id=cfg.CONF.router_id)
if cfg.CONF.daemonize:
proxy.start()
else:
proxy.run()

View File

@ -0,0 +1,174 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
import eventlet
eventlet.monkey_patch()
from oslo.config import cfg
from neutron.agent.common import config as agent_config
from neutron.agent import dhcp_agent
from neutron.agent import l3_agent
from neutron.agent.linux import dhcp
from neutron.agent.linux import interface
from neutron.agent.linux import ip_lib
from neutron.agent.linux import ovs_lib
from neutron.api.v2 import attributes
from neutron.common import config
from neutron.openstack.common import importutils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
NS_MANGLING_PATTERN = ('(%s|%s)' % (dhcp.NS_PREFIX, l3_agent.NS_PREFIX) +
attributes.UUID_PATTERN)
class FakeDhcpPlugin(object):
"""Fake RPC plugin to bypass any RPC calls."""
def __getattribute__(self, name):
def fake_method(*args):
pass
return fake_method
def setup_conf():
"""Setup the cfg for the clean up utility.
Use separate setup_conf for the utility because there are many options
from the main config that do not apply during clean-up.
"""
cli_opts = [
cfg.BoolOpt('force',
default=False,
help=_('Delete the namespace by removing all devices.')),
]
conf = cfg.CONF
conf.register_cli_opts(cli_opts)
agent_config.register_interface_driver_opts_helper(conf)
agent_config.register_use_namespaces_opts_helper(conf)
agent_config.register_root_helper(conf)
conf.register_opts(dhcp.OPTS)
conf.register_opts(dhcp_agent.DhcpAgent.OPTS)
conf.register_opts(interface.OPTS)
return conf
def kill_dhcp(conf, namespace):
"""Disable DHCP for a network if DHCP is still active."""
root_helper = agent_config.get_root_helper(conf)
network_id = namespace.replace(dhcp.NS_PREFIX, '')
dhcp_driver = importutils.import_object(
conf.dhcp_driver,
conf=conf,
network=dhcp.NetModel(conf.use_namespaces, {'id': network_id}),
root_helper=root_helper,
plugin=FakeDhcpPlugin())
if dhcp_driver.active:
dhcp_driver.disable()
def eligible_for_deletion(conf, namespace, force=False):
"""Determine whether a namespace is eligible for deletion.
A namespace is eligible when it contains only the lo device, or when
force is passed as a parameter.
"""
# filter out namespaces without UUID as the name
if not re.match(NS_MANGLING_PATTERN, namespace):
return False
root_helper = agent_config.get_root_helper(conf)
ip = ip_lib.IPWrapper(root_helper, namespace)
return force or ip.namespace_is_empty()
def unplug_device(conf, device):
try:
device.link.delete()
except RuntimeError:
root_helper = agent_config.get_root_helper(conf)
# Maybe the device is OVS port, so try to delete
bridge_name = ovs_lib.get_bridge_for_iface(root_helper, device.name)
if bridge_name:
bridge = ovs_lib.OVSBridge(bridge_name, root_helper)
bridge.delete_port(device.name)
else:
LOG.debug(_('Unable to find bridge for device: %s'), device.name)
def destroy_namespace(conf, namespace, force=False):
"""Destroy a given namespace.
If force is True, then dhcp (if it exists) will be disabled and all
devices will be forcibly removed.
"""
try:
root_helper = agent_config.get_root_helper(conf)
ip = ip_lib.IPWrapper(root_helper, namespace)
if force:
kill_dhcp(conf, namespace)
# NOTE: The dhcp driver will remove the namespace if it is empty,
# so a second check is required here.
if ip.netns.exists(namespace):
for device in ip.get_devices(exclude_loopback=True):
unplug_device(conf, device)
ip.garbage_collect_namespace()
except Exception:
LOG.exception(_('Error unable to destroy namespace: %s'), namespace)
def main():
"""Main method for cleaning up network namespaces.
This method will make two passes checking for namespaces to delete. The
process will identify candidates, sleep, and call garbage collect. The
garbage collection will re-verify that the namespace meets the criteria for
deletion (i.e. it is empty). The period of sleep and the second pass allow
time for the namespace state to settle, so that the check prior to deletion
will re-confirm the namespace is empty.
The utility is designed to clean-up after the forced or unexpected
termination of Neutron agents.
The --force flag should only be used as part of the cleanup of a devstack
installation as it will blindly purge namespaces and their devices. This
option also kills any lingering DHCP instances.
"""
conf = setup_conf()
conf()
config.setup_logging(conf)
root_helper = agent_config.get_root_helper(conf)
# Identify namespaces that are candidates for deletion.
candidates = [ns for ns in
ip_lib.IPWrapper.get_namespaces(root_helper)
if eligible_for_deletion(conf, ns, conf.force)]
if candidates:
eventlet.sleep(2)
for namespace in candidates:
destroy_namespace(conf, namespace, conf.force)

View File

@ -0,0 +1,110 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from neutron.agent.common import config as agent_config
from neutron.agent import l3_agent
from neutron.agent.linux import interface
from neutron.agent.linux import ip_lib
from neutron.agent.linux import ovs_lib
from neutron.common import config
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
def setup_conf():
"""Setup the cfg for the clean up utility.
Use separate setup_conf for the utility because there are many options
from the main config that do not apply during clean-up.
"""
opts = [
cfg.BoolOpt('ovs_all_ports',
default=False,
help=_('True to delete all ports on all the OpenvSwitch '
'bridges. False to delete ports created by '
'Neutron on integration and external network '
'bridges.'))
]
conf = cfg.CONF
conf.register_cli_opts(opts)
conf.register_opts(l3_agent.L3NATAgent.OPTS)
conf.register_opts(interface.OPTS)
agent_config.register_interface_driver_opts_helper(conf)
agent_config.register_use_namespaces_opts_helper(conf)
agent_config.register_root_helper(conf)
return conf
def collect_neutron_ports(bridges, root_helper):
"""Collect ports created by Neutron from OVS."""
ports = []
for bridge in bridges:
ovs = ovs_lib.OVSBridge(bridge, root_helper)
ports += [port.port_name for port in ovs.get_vif_ports()]
return ports
def delete_neutron_ports(ports, root_helper):
"""Delete non-internal ports created by Neutron
Non-internal OVS ports need to be removed manually.
"""
for port in ports:
if ip_lib.device_exists(port):
device = ip_lib.IPDevice(port, root_helper)
device.link.delete()
LOG.info(_("Delete %s"), port)
def main():
"""Main method for cleaning up OVS bridges.
The utility cleans up the integration bridges used by Neutron.
"""
conf = setup_conf()
conf()
config.setup_logging(conf)
configuration_bridges = set([conf.ovs_integration_bridge,
conf.external_network_bridge])
ovs_bridges = set(ovs_lib.get_bridges(conf.AGENT.root_helper))
available_configuration_bridges = configuration_bridges & ovs_bridges
if conf.ovs_all_ports:
bridges = ovs_bridges
else:
bridges = available_configuration_bridges
# Collect existing ports created by Neutron on configuration bridges.
# After deleting ports from OVS bridges, we cannot determine which
# ports were created by Neutron, so port information is collected now.
ports = collect_neutron_ports(available_configuration_bridges,
conf.AGENT.root_helper)
for bridge in bridges:
LOG.info(_("Cleaning %s"), bridge)
ovs = ovs_lib.OVSBridge(bridge, conf.AGENT.root_helper)
ovs.delete_ports(all_ports=conf.ovs_all_ports)
# Remove remaining ports created by Neutron (usually veth pair)
delete_neutron_ports(ports, conf.AGENT.root_helper)
LOG.info(_("OVS cleanup completed successfully"))

View File

@ -0,0 +1,134 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from oslo import messaging
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.openstack.common import log as logging
from neutron.openstack.common import timeutils
LOG = logging.getLogger(__name__)
def create_consumers(endpoints, prefix, topic_details):
"""Create agent RPC consumers.
:param endpoints: The list of endpoints to process the incoming messages.
:param prefix: Common prefix for the plugin/agent message queues.
:param topic_details: A list of topics. Each topic has a name, an
operation, and an optional host param keying the
subscription to topic.host for plugin calls.
:returns: A common Connection.
"""
connection = n_rpc.create_connection(new=True)
for details in topic_details:
topic, operation, node_name = itertools.islice(
itertools.chain(details, [None]), 3)
topic_name = topics.get_topic_name(prefix, topic, operation)
connection.create_consumer(topic_name, endpoints, fanout=True)
if node_name:
node_topic_name = '%s.%s' % (topic_name, node_name)
connection.create_consumer(node_topic_name,
endpoints,
fanout=False)
connection.consume_in_threads()
return connection
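# Example (illustrative sketch): an L2 agent consuming fanout queues for
# port updates plus a host-scoped tunnel queue; topic names are made up.
#
#   create_consumers([endpoint], topics.AGENT,
#                    [[topics.PORT, topics.UPDATE],
#                     [topics.NETWORK, topics.DELETE],
#                     ['tunnel', topics.UPDATE, cfg.CONF.host]])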
class PluginReportStateAPI(n_rpc.RpcProxy):
BASE_RPC_API_VERSION = '1.0'
def __init__(self, topic):
super(PluginReportStateAPI, self).__init__(
topic=topic, default_version=self.BASE_RPC_API_VERSION)
def report_state(self, context, agent_state, use_call=False):
msg = self.make_msg('report_state',
agent_state={'agent_state':
agent_state},
time=timeutils.strtime())
if use_call:
return self.call(context, msg, topic=self.topic)
else:
return self.cast(context, msg, topic=self.topic)
class PluginApi(n_rpc.RpcProxy):
'''Agent side of the rpc API.
API version history:
1.0 - Initial version.
1.3 - get_device_details rpc signature upgrade to obtain 'host' and
return value to include fixed_ips and device_owner for
the device port
'''
BASE_RPC_API_VERSION = '1.3'
def __init__(self, topic):
super(PluginApi, self).__init__(
topic=topic, default_version=self.BASE_RPC_API_VERSION)
def get_device_details(self, context, device, agent_id, host=None):
return self.call(context,
self.make_msg('get_device_details', device=device,
agent_id=agent_id,
host=host),
topic=self.topic)
def get_devices_details_list(self, context, devices, agent_id, host=None):
res = []
try:
res = self.call(context,
self.make_msg('get_devices_details_list',
devices=devices,
agent_id=agent_id,
host=host),
topic=self.topic,
version=self.BASE_RPC_API_VERSION)
except messaging.UnsupportedVersion:
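# The server does not support the batched call, so fall back to
# one get_device_details call per device.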
res = [
self.call(context,
self.make_msg('get_device_details', device=device,
agent_id=agent_id, host=host),
topic=self.topic)
for device in devices
]
return res
def update_device_down(self, context, device, agent_id, host=None):
return self.call(context,
self.make_msg('update_device_down', device=device,
agent_id=agent_id, host=host),
topic=self.topic)
def update_device_up(self, context, device, agent_id, host=None):
return self.call(context,
self.make_msg('update_device_up', device=device,
agent_id=agent_id, host=host),
topic=self.topic)
def tunnel_sync(self, context, tunnel_ip, tunnel_type=None):
return self.call(context,
self.make_msg('tunnel_sync', tunnel_ip=tunnel_ip,
tunnel_type=tunnel_type),
topic=self.topic)

View File

@ -0,0 +1,301 @@
# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from oslo.config import cfg
from neutron.common import topics
from neutron.openstack.common import importutils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
SG_RPC_VERSION = "1.1"
security_group_opts = [
cfg.StrOpt(
'firewall_driver',
help=_('Driver for security groups firewall in the L2 agent')),
cfg.BoolOpt(
'enable_security_group',
default=True,
help=_(
'Controls whether the neutron security group API is enabled '
'in the server. It should be false when using no security '
'groups or using the nova security group API.'))
]
cfg.CONF.register_opts(security_group_opts, 'SECURITYGROUP')
# This is a backward-compatibility check for Havana
def _is_valid_driver_combination():
return ((cfg.CONF.SECURITYGROUP.enable_security_group and
(cfg.CONF.SECURITYGROUP.firewall_driver and
cfg.CONF.SECURITYGROUP.firewall_driver !=
'neutron.agent.firewall.NoopFirewallDriver')) or
(not cfg.CONF.SECURITYGROUP.enable_security_group and
(cfg.CONF.SECURITYGROUP.firewall_driver ==
'neutron.agent.firewall.NoopFirewallDriver' or
cfg.CONF.SECURITYGROUP.firewall_driver is None)
))
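# In short, the combination is valid when security groups are enabled with
# a real (non-noop) firewall driver, or disabled with the noop driver or
# no driver at all; anything else triggers the mismatch warnings below.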
def is_firewall_enabled():
if not _is_valid_driver_combination():
LOG.warn(_("Driver configuration doesn't match with "
"enable_security_group"))
return cfg.CONF.SECURITYGROUP.enable_security_group
def _disable_extension(extension, aliases):
if extension in aliases:
aliases.remove(extension)
def disable_security_group_extension_by_config(aliases):
if not is_firewall_enabled():
LOG.info(_('Disabled security-group extension.'))
_disable_extension('security-group', aliases)
LOG.info(_('Disabled allowed-address-pairs extension.'))
_disable_extension('allowed-address-pairs', aliases)
class SecurityGroupServerRpcApiMixin(object):
"""A mix-in that enable SecurityGroup support in plugin rpc."""
def security_group_rules_for_devices(self, context, devices):
LOG.debug(_("Get security group rules "
"for devices via rpc %r"), devices)
return self.call(context,
self.make_msg('security_group_rules_for_devices',
devices=devices),
version=SG_RPC_VERSION,
topic=self.topic)
class SecurityGroupAgentRpcCallbackMixin(object):
"""A mix-in that enable SecurityGroup agent
support in agent implementations.
"""
#mix-in object should be have sg_agent
sg_agent = None
def _security_groups_agent_not_set(self):
LOG.warning(_("Security group agent binding currently not set. "
"This should be set by the end of the init "
"process."))
def security_groups_rule_updated(self, context, **kwargs):
"""Callback for security group rule update.
:param security_groups: list of updated security_groups
"""
security_groups = kwargs.get('security_groups', [])
LOG.debug(
_("Security group rule updated on remote: %s"), security_groups)
if not self.sg_agent:
return self._security_groups_agent_not_set()
self.sg_agent.security_groups_rule_updated(security_groups)
def security_groups_member_updated(self, context, **kwargs):
"""Callback for security group member update.
:param security_groups: list of updated security_groups
"""
security_groups = kwargs.get('security_groups', [])
LOG.debug(
_("Security group member updated on remote: %s"), security_groups)
if not self.sg_agent:
return self._security_groups_agent_not_set()
self.sg_agent.security_groups_member_updated(security_groups)
def security_groups_provider_updated(self, context, **kwargs):
"""Callback for security group provider update."""
LOG.debug(_("Provider rule updated"))
if not self.sg_agent:
return self._security_groups_agent_not_set()
self.sg_agent.security_groups_provider_updated()
class SecurityGroupAgentRpcMixin(object):
"""A mix-in that enable SecurityGroup agent
support in agent implementations.
"""
def init_firewall(self, defer_refresh_firewall=False):
firewall_driver = cfg.CONF.SECURITYGROUP.firewall_driver
LOG.debug(_("Init firewall settings (driver=%s)"), firewall_driver)
if not _is_valid_driver_combination():
LOG.warn(_("Driver configuration doesn't match "
"with enable_security_group"))
if not firewall_driver:
firewall_driver = 'neutron.agent.firewall.NoopFirewallDriver'
self.firewall = importutils.import_object(firewall_driver)
# The following flag will be set to true if port filter must not be
# applied as soon as a rule or membership notification is received
self.defer_refresh_firewall = defer_refresh_firewall
# Stores devices for which firewall should be refreshed when
# deferred refresh is enabled.
self.devices_to_refilter = set()
# Flag raised when a global refresh is needed
self.global_refresh_firewall = False
def prepare_devices_filter(self, device_ids):
if not device_ids:
return
LOG.info(_("Preparing filters for devices %s"), device_ids)
devices = self.plugin_rpc.security_group_rules_for_devices(
self.context, list(device_ids))
with self.firewall.defer_apply():
for device in devices.values():
self.firewall.prepare_port_filter(device)
def security_groups_rule_updated(self, security_groups):
LOG.info(_("Security group "
"rule updated %r"), security_groups)
self._security_group_updated(
security_groups,
'security_groups')
def security_groups_member_updated(self, security_groups):
LOG.info(_("Security group "
"member updated %r"), security_groups)
self._security_group_updated(
security_groups,
'security_group_source_groups')
def _security_group_updated(self, security_groups, attribute):
devices = []
sec_grp_set = set(security_groups)
for device in self.firewall.ports.values():
if sec_grp_set & set(device.get(attribute, [])):
devices.append(device['device'])
if devices:
if self.defer_refresh_firewall:
LOG.debug(_("Adding %s devices to the list of devices "
"for which firewall needs to be refreshed"),
devices)
self.devices_to_refilter |= set(devices)
else:
self.refresh_firewall(devices)
def security_groups_provider_updated(self):
LOG.info(_("Provider rule updated"))
if self.defer_refresh_firewall:
# NOTE(salv-orlando): A 'global refresh' might not be
# necessary if the subnet for which the provider rules
# were updated is known
self.global_refresh_firewall = True
else:
self.refresh_firewall()
def remove_devices_filter(self, device_ids):
if not device_ids:
return
LOG.info(_("Remove device filter for %r"), device_ids)
with self.firewall.defer_apply():
for device_id in device_ids:
device = self.firewall.ports.get(device_id)
if not device:
continue
self.firewall.remove_port_filter(device)
def refresh_firewall(self, device_ids=None):
LOG.info(_("Refresh firewall rules"))
if not device_ids:
device_ids = self.firewall.ports.keys()
if not device_ids:
LOG.info(_("No ports here to refresh firewall"))
return
devices = self.plugin_rpc.security_group_rules_for_devices(
self.context, device_ids)
with self.firewall.defer_apply():
for device in devices.values():
LOG.debug(_("Update port filter for %s"), device['device'])
self.firewall.update_port_filter(device)
def firewall_refresh_needed(self):
return self.global_refresh_firewall or self.devices_to_refilter
def setup_port_filters(self, new_devices, updated_devices):
"""Configure port filters for devices.
This routine applies filters for new devices and refreshes firewall
rules when devices have been updated, or when there are changes in
security group membership or rules.
:param new_devices: set containing identifiers for new devices
:param updated_devices: set containing identifiers for
updated devices
"""
if new_devices:
LOG.debug(_("Preparing device filters for %d new devices"),
len(new_devices))
self.prepare_devices_filter(new_devices)
# These data structures are cleared here in order to avoid
# losing updates occurring during firewall refresh
devices_to_refilter = self.devices_to_refilter
global_refresh_firewall = self.global_refresh_firewall
self.devices_to_refilter = set()
self.global_refresh_firewall = False
# TODO(salv-orlando): Avoid if possible ever performing the global
# refresh providing a precise list of devices for which firewall
# should be refreshed
if global_refresh_firewall:
LOG.debug(_("Refreshing firewall for all filtered devices"))
self.refresh_firewall()
else:
# If a device is both in new and updated devices
# avoid reprocessing it
updated_devices = ((updated_devices | devices_to_refilter) -
new_devices)
if updated_devices:
LOG.debug(_("Refreshing firewall for %d devices"),
len(updated_devices))
self.refresh_firewall(updated_devices)
class SecurityGroupAgentRpcApiMixin(object):
def _get_security_group_topic(self):
return topics.get_topic_name(self.topic,
topics.SECURITY_GROUP,
topics.UPDATE)
def security_groups_rule_updated(self, context, security_groups):
"""Notify rule updated security groups."""
if not security_groups:
return
self.fanout_cast(context,
self.make_msg('security_groups_rule_updated',
security_groups=security_groups),
version=SG_RPC_VERSION,
topic=self._get_security_group_topic())
def security_groups_member_updated(self, context, security_groups):
"""Notify member updated security groups."""
if not security_groups:
return
self.fanout_cast(context,
self.make_msg('security_groups_member_updated',
security_groups=security_groups),
version=SG_RPC_VERSION,
topic=self._get_security_group_topic())
def security_groups_provider_updated(self, context):
"""Notify provider updated security groups."""
self.fanout_cast(context,
self.make_msg('security_groups_provider_updated'),
version=SG_RPC_VERSION,
topic=self._get_security_group_topic())
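# A minimal wiring sketch (hypothetical names except the mix-ins; both
# API mix-ins also assume an oslo RpcProxy base providing call/cast and
# make_msg). It shows how an L2 agent is expected to combine the pieces:
#
#   class SGPluginApi(proxy.RpcProxy, SecurityGroupServerRpcApiMixin):
#       pass  # agent-side stub: asks the plugin for rules
#
#   class MyL2Agent(SecurityGroupAgentRpcMixin,
#                   SecurityGroupAgentRpcCallbackMixin):
#       def __init__(self, context, plugin_rpc):
#           self.context = context
#           self.plugin_rpc = plugin_rpc   # an SGPluginApi instance
#           self.sg_agent = self           # consumed by the callbacks
#           self.init_firewall(defer_refresh_firewall=True)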


@ -0,0 +1,327 @@
# Copyright 2011 Citrix System.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import urllib
from oslo.config import cfg
from webob import exc
from neutron.common import constants
from neutron.common import exceptions
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
def get_filters(request, attr_info, skips=[]):
"""Extracts the filters from the request string.
Returns a dict of lists for the filters:
check=a&check=b&name=Bob&
becomes:
{'check': [u'a', u'b'], 'name': [u'Bob']}
"""
res = {}
for key, values in request.GET.dict_of_lists().iteritems():
if key in skips:
continue
values = [v for v in values if v]
key_attr_info = attr_info.get(key, {})
if 'convert_list_to' in key_attr_info:
values = key_attr_info['convert_list_to'](values)
elif 'convert_to' in key_attr_info:
convert_to = key_attr_info['convert_to']
values = [convert_to(v) for v in values]
if values:
res[key] = values
return res
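# Illustrative example (hypothetical request): for the query string
#   ?name=web&name=db&admin_state_up=true&fields=id
# with skips=['fields'] and a 'convert_to' boolean converter registered
# for admin_state_up in attr_info, get_filters() would return:
#   {'name': [u'web', u'db'], 'admin_state_up': [True]}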
def get_previous_link(request, items, id_key):
params = request.GET.copy()
params.pop('marker', None)
if items:
marker = items[0][id_key]
params['marker'] = marker
params['page_reverse'] = True
return "%s?%s" % (request.path_url, urllib.urlencode(params))
def get_next_link(request, items, id_key):
params = request.GET.copy()
params.pop('marker', None)
if items:
marker = items[-1][id_key]
params['marker'] = marker
params.pop('page_reverse', None)
return "%s?%s" % (request.path_url, urllib.urlencode(params))
def get_limit_and_marker(request):
"""Return marker, limit tuple from request.
:param request: `wsgi.Request` possibly containing 'marker' and 'limit'
GET variables. 'marker' is the id of the last element
the client has seen, and 'limit' is the maximum number
of items to return. If limit == 0, it means we needn't
pagination, then return None.
"""
max_limit = _get_pagination_max_limit()
limit = _get_limit_param(request, max_limit)
if max_limit > 0:
limit = min(max_limit, limit) or max_limit
if not limit:
return None, None
marker = request.GET.get('marker', None)
return limit, marker
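# Illustrative behaviour:
#   with pagination_max_limit = 1000:
#     ?limit=5&marker=<id>  ->  (5, '<id>')
#     ?limit=5000           ->  (1000, <marker>)  # clamped to the max
#     ?limit=0              ->  (1000, <marker>)  # falls back to the max
#   with pagination_max_limit = 'infinite':
#     ?limit=0              ->  (None, None)      # pagination disabled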
def _get_pagination_max_limit():
max_limit = -1
if (cfg.CONF.pagination_max_limit.lower() !=
constants.PAGINATION_INFINITE):
try:
max_limit = int(cfg.CONF.pagination_max_limit)
if max_limit == 0:
raise ValueError()
except ValueError:
LOG.warn(_("Invalid value for pagination_max_limit: %s. It "
"should be an integer greater to 0"),
cfg.CONF.pagination_max_limit)
return max_limit
def _get_limit_param(request, max_limit):
"""Extract integer limit from request or fail."""
try:
limit = int(request.GET.get('limit', 0))
if limit >= 0:
return limit
except ValueError:
pass
msg = _("Limit must be an integer 0 or greater and not '%d'")
raise exceptions.BadRequest(resource='limit', msg=msg)
def list_args(request, arg):
"""Extracts the list of arg from request."""
return [v for v in request.GET.getall(arg) if v]
def get_sorts(request, attr_info):
"""Extract sort_key and sort_dir from request.
Return as: [(key1, value1), (key2, value2)]
"""
sort_keys = list_args(request, "sort_key")
sort_dirs = list_args(request, "sort_dir")
if len(sort_keys) != len(sort_dirs):
msg = _("The number of sort_keys and sort_dirs must be same")
raise exc.HTTPBadRequest(explanation=msg)
valid_dirs = [constants.SORT_DIRECTION_ASC, constants.SORT_DIRECTION_DESC]
absent_keys = [x for x in sort_keys if x not in attr_info]
if absent_keys:
msg = _("%s is invalid attribute for sort_keys") % absent_keys
raise exc.HTTPBadRequest(explanation=msg)
invalid_dirs = [x for x in sort_dirs if x not in valid_dirs]
if invalid_dirs:
msg = (_("%(invalid_dirs)s is invalid value for sort_dirs, "
"valid value is '%(asc)s' and '%(desc)s'") %
{'invalid_dirs': invalid_dirs,
'asc': constants.SORT_DIRECTION_ASC,
'desc': constants.SORT_DIRECTION_DESC})
raise exc.HTTPBadRequest(explanation=msg)
return zip(sort_keys,
[x == constants.SORT_DIRECTION_ASC for x in sort_dirs])
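# Illustrative example:
#   ?sort_key=name&sort_dir=asc&sort_key=id&sort_dir=desc
# yields [('name', True), ('id', False)], True meaning ascending.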
def get_page_reverse(request):
data = request.GET.get('page_reverse', 'False')
return data.lower() == "true"
def get_pagination_links(request, items, limit,
marker, page_reverse, key="id"):
key = key if key else 'id'
links = []
if not limit:
return links
if not (len(items) < limit and not page_reverse):
links.append({"rel": "next",
"href": get_next_link(request, items,
key)})
if not (len(items) < limit and page_reverse):
links.append({"rel": "previous",
"href": get_previous_link(request, items,
key)})
return links
class PaginationHelper(object):
def __init__(self, request, primary_key='id'):
self.request = request
self.primary_key = primary_key
def update_fields(self, original_fields, fields_to_add):
pass
def update_args(self, args):
pass
def paginate(self, items):
return items
def get_links(self, items):
return {}
class PaginationEmulatedHelper(PaginationHelper):
def __init__(self, request, primary_key='id'):
super(PaginationEmulatedHelper, self).__init__(request, primary_key)
self.limit, self.marker = get_limit_and_marker(request)
self.page_reverse = get_page_reverse(request)
def update_fields(self, original_fields, fields_to_add):
if not original_fields:
return
if self.primary_key not in original_fields:
original_fields.append(self.primary_key)
fields_to_add.append(self.primary_key)
def paginate(self, items):
if not self.limit:
return items
i = -1
if self.marker:
for item in items:
i = i + 1
if item[self.primary_key] == self.marker:
break
if self.page_reverse:
return items[i - self.limit:i]
return items[i + 1:i + self.limit + 1]
def get_links(self, items):
return get_pagination_links(
self.request, items, self.limit, self.marker,
self.page_reverse, self.primary_key)
class PaginationNativeHelper(PaginationEmulatedHelper):
def update_args(self, args):
if self.primary_key not in dict(args.get('sorts', [])).keys():
args.setdefault('sorts', []).append((self.primary_key, True))
args.update({'limit': self.limit, 'marker': self.marker,
'page_reverse': self.page_reverse})
def paginate(self, items):
return items
class NoPaginationHelper(PaginationHelper):
pass
class SortingHelper(object):
def __init__(self, request, attr_info):
pass
def update_args(self, args):
pass
def update_fields(self, original_fields, fields_to_add):
pass
def sort(self, items):
return items
class SortingEmulatedHelper(SortingHelper):
def __init__(self, request, attr_info):
super(SortingEmulatedHelper, self).__init__(request, attr_info)
self.sort_dict = get_sorts(request, attr_info)
def update_fields(self, original_fields, fields_to_add):
if not original_fields:
return
for key in dict(self.sort_dict).keys():
if key not in original_fields:
original_fields.append(key)
fields_to_add.append(key)
def sort(self, items):
def cmp_func(obj1, obj2):
for key, direction in self.sort_dict:
ret = cmp(obj1[key], obj2[key])
if ret:
return ret * (1 if direction else -1)
return 0
return sorted(items, cmp=cmp_func)
class SortingNativeHelper(SortingHelper):
def __init__(self, request, attr_info):
self.sort_dict = get_sorts(request, attr_info)
def update_args(self, args):
args['sorts'] = self.sort_dict
class NoSortingHelper(SortingHelper):
pass
class NeutronController(object):
"""Base controller class for Neutron API."""
# _resource_name will be redefined in sub concrete controller
_resource_name = None
def __init__(self, plugin):
self._plugin = plugin
super(NeutronController, self).__init__()
def _prepare_request_body(self, body, params):
"""Verifies required parameters are in request body.
Sets default value for missing optional parameters.
Body argument must be the deserialized body.
"""
try:
if body is None:
# Initialize empty resource for setting default value
body = {self._resource_name: {}}
data = body[self._resource_name]
except KeyError:
# raise if _resource_name is not in req body.
raise exc.HTTPBadRequest(_("Unable to find '%s' in request body") %
self._resource_name)
for param in params:
param_name = param['param-name']
param_value = data.get(param_name)
# If the parameter wasn't found and it was required, return 400
if param_value is None and param['required']:
msg = (_("Failed to parse request. "
"Parameter '%s' not specified") % param_name)
LOG.error(msg)
raise exc.HTTPBadRequest(msg)
data[param_name] = param_value or param.get('default-value')
return body
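# A hedged sketch (not part of this module) of how a controller might pick
# among the helpers above; the native-support flag is hypothetical and
# would normally come from the plugin's capabilities.
def _choose_pagination_helper(request, native_pagination_supported,
                              allow_pagination=True):
    if not allow_pagination:
        return NoPaginationHelper(request)
    if native_pagination_supported:
        # the plugin paginates its own queries
        return PaginationNativeHelper(request)
    # fall back to slicing the full result set in memory
    return PaginationEmulatedHelper(request)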


@ -0,0 +1,684 @@
# Copyright 2011 OpenStack Foundation.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import imp
import itertools
import os
from oslo.config import cfg
import routes
import six
import webob.dec
import webob.exc
from neutron.api.v2 import attributes
from neutron.common import exceptions
import neutron.extensions
from neutron import manager
from neutron.openstack.common import log as logging
from neutron import policy
from neutron import wsgi
LOG = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class PluginInterface(object):
@classmethod
def __subclasshook__(cls, klass):
"""Checking plugin class.
The __subclasshook__ method is a class method
that will be called every time a class is tested
using issubclass(klass, PluginInterface).
In that case, it will check that every method
marked with the abstractmethod decorator is
provided by the plugin class.
"""
if not cls.__abstractmethods__:
return NotImplemented
for method in cls.__abstractmethods__:
if any(method in base.__dict__ for base in klass.__mro__):
continue
return NotImplemented
return True
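# Illustrative use (hypothetical names): an extension declares the plugin
# methods it needs, and any plugin that implements them all is accepted by
# issubclass() through the duck-typed __subclasshook__ above, with no
# inheritance required:
#
#   class FoxPluginInterface(PluginInterface):
#       @abc.abstractmethod
#       def get_foxes(self, context):
#           pass
#
#   issubclass(PluginImplementingGetFoxes, FoxPluginInterface)  # True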
class ExtensionDescriptor(object):
"""Base class that defines the contract for extensions.
Note that you don't have to derive from this class to have a valid
extension; it is purely a convenience.
"""
def get_name(self):
"""The name of the extension.
e.g. 'Fox In Socks'
"""
raise NotImplementedError()
def get_alias(self):
"""The alias for the extension.
e.g. 'FOXNSOX'
"""
raise NotImplementedError()
def get_description(self):
"""Friendly description for the extension.
e.g. 'The Fox In Socks Extension'
"""
raise NotImplementedError()
def get_namespace(self):
"""The XML namespace for the extension.
e.g. 'http://www.fox.in.socks/api/ext/pie/v1.0'
"""
raise NotImplementedError()
def get_updated(self):
"""The timestamp when the extension was last updated.
e.g. '2011-01-22T13:25:27-06:00'
"""
# NOTE(justinsb): Not sure what the purpose of this is, vs the XML NS
raise NotImplementedError()
def get_resources(self):
"""List of extensions.ResourceExtension extension objects.
Resources define new nouns, and are accessible through URLs.
"""
resources = []
return resources
def get_actions(self):
"""List of extensions.ActionExtension extension objects.
Actions are verbs callable from the API.
"""
actions = []
return actions
def get_request_extensions(self):
"""List of extensions.RequestException extension objects.
Request extensions are used to handle custom request data.
"""
request_exts = []
return request_exts
def get_extended_resources(self, version):
"""Retrieve extended resources or attributes for core resources.
Extended attributes are implemented by a core plugin similarly
to the attributes defined in the core, and can appear in
request and response messages. Their names are scoped with the
extension's prefix. The core API version is passed to this
function, which must return a
map[<resource_name>][<attribute_name>][<attribute_property>]
specifying the extended resource attribute properties required
by that API version.
Extensions can also add new resources and their attr definitions.
The returned map can be integrated into RESOURCE_ATTRIBUTE_MAP.
"""
return {}
def get_plugin_interface(self):
"""Returns an abstract class which defines contract for the plugin.
The abstract class should inherit from extensions.PluginInterface.
Methods in this abstract class should be decorated as abstractmethod
"""
return None
def update_attributes_map(self, extended_attributes,
extension_attrs_map=None):
"""Update attributes map for this extension.
This is the default method for extending an extension's attributes map.
An extension can use this method, supplying its own resource
attribute map in the extension_attrs_map argument, to extend all
attributes that need to be extended.
If an extension does not implement update_attributes_map, the method
does nothing and just returns.
if not extension_attrs_map:
return
for resource, attrs in extension_attrs_map.iteritems():
extended_attrs = extended_attributes.get(resource)
if extended_attrs:
attrs.update(extended_attrs)
def get_alias_namespace_compatibility_map(self):
"""Returns mappings between extension aliases and XML namespaces.
The mappings are XML namespaces that should, for backward compatibility
reasons, be added to the XML serialization of extended attributes.
This allows an established extended attribute to be provided by
an extension other than the original one while keeping its old alias
in the name.
:return: A dictionary of extension_aliases and namespace strings.
"""
return {}
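# A minimal extension sketch (hypothetical names) in the shape the loader
# below expects -- a module widgets.py exposing a class Widgets:
#
#   class Widgets(ExtensionDescriptor):
#       def get_name(self):        return "Widgets"
#       def get_alias(self):       return "widgets"
#       def get_description(self): return "Adds widget resources"
#       def get_namespace(self):   return "http://example.org/ext/widgets"
#       def get_updated(self):     return "2014-01-01T00:00:00-00:00"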
class ActionExtensionController(wsgi.Controller):
def __init__(self, application):
self.application = application
self.action_handlers = {}
def add_action(self, action_name, handler):
self.action_handlers[action_name] = handler
def action(self, request, id):
input_dict = self._deserialize(request.body,
request.get_content_type())
for action_name, handler in self.action_handlers.iteritems():
if action_name in input_dict:
return handler(input_dict, request, id)
# no action handler found (bump to downstream application)
response = self.application
return response
class RequestExtensionController(wsgi.Controller):
def __init__(self, application):
self.application = application
self.handlers = []
def add_handler(self, handler):
self.handlers.append(handler)
def process(self, request, *args, **kwargs):
res = request.get_response(self.application)
# currently request handlers are un-ordered
for handler in self.handlers:
response = handler(request, res)
return response
class ExtensionController(wsgi.Controller):
def __init__(self, extension_manager):
self.extension_manager = extension_manager
def _translate(self, ext):
ext_data = {}
ext_data['name'] = ext.get_name()
ext_data['alias'] = ext.get_alias()
ext_data['description'] = ext.get_description()
ext_data['namespace'] = ext.get_namespace()
ext_data['updated'] = ext.get_updated()
ext_data['links'] = [] # TODO(dprince): implement extension links
return ext_data
def index(self, request):
extensions = []
for _alias, ext in self.extension_manager.extensions.iteritems():
extensions.append(self._translate(ext))
return dict(extensions=extensions)
def show(self, request, id):
# NOTE(dprince): the extensions alias is used as the 'id' for show
ext = self.extension_manager.extensions.get(id, None)
if not ext:
raise webob.exc.HTTPNotFound(
_("Extension with alias %s does not exist") % id)
return dict(extension=self._translate(ext))
def delete(self, request, id):
msg = _('Resource not found.')
raise webob.exc.HTTPNotFound(msg)
def create(self, request):
msg = _('Resource not found.')
raise webob.exc.HTTPNotFound(msg)
class ExtensionMiddleware(wsgi.Middleware):
"""Extensions middleware for WSGI."""
def __init__(self, application,
ext_mgr=None):
self.ext_mgr = (ext_mgr
or ExtensionManager(get_extensions_path()))
mapper = routes.Mapper()
# extended resources
for resource in self.ext_mgr.get_resources():
path_prefix = resource.path_prefix
if resource.parent:
path_prefix = (resource.path_prefix +
"/%s/{%s_id}" %
(resource.parent["collection_name"],
resource.parent["member_name"]))
LOG.debug(_('Extended resource: %s'),
resource.collection)
for action, method in resource.collection_actions.iteritems():
conditions = dict(method=[method])
path = "/%s/%s" % (resource.collection, action)
with mapper.submapper(controller=resource.controller,
action=action,
path_prefix=path_prefix,
conditions=conditions) as submap:
submap.connect(path)
submap.connect("%s.:(format)" % path)
mapper.resource(resource.collection, resource.collection,
controller=resource.controller,
member=resource.member_actions,
parent_resource=resource.parent,
path_prefix=path_prefix)
# extended actions
action_controllers = self._action_ext_controllers(application,
self.ext_mgr, mapper)
for action in self.ext_mgr.get_actions():
LOG.debug(_('Extended action: %s'), action.action_name)
controller = action_controllers[action.collection]
controller.add_action(action.action_name, action.handler)
# extended requests
req_controllers = self._request_ext_controllers(application,
self.ext_mgr, mapper)
for request_ext in self.ext_mgr.get_request_extensions():
LOG.debug(_('Extended request: %s'), request_ext.key)
controller = req_controllers[request_ext.key]
controller.add_handler(request_ext.handler)
self._router = routes.middleware.RoutesMiddleware(self._dispatch,
mapper)
super(ExtensionMiddleware, self).__init__(application)
@classmethod
def factory(cls, global_config, **local_config):
"""Paste factory."""
def _factory(app):
return cls(app, global_config, **local_config)
return _factory
def _action_ext_controllers(self, application, ext_mgr, mapper):
"""Return a dict of ActionExtensionController-s by collection."""
action_controllers = {}
for action in ext_mgr.get_actions():
if action.collection not in action_controllers.keys():
controller = ActionExtensionController(application)
mapper.connect("/%s/:(id)/action.:(format)" %
action.collection,
action='action',
controller=controller,
conditions=dict(method=['POST']))
mapper.connect("/%s/:(id)/action" % action.collection,
action='action',
controller=controller,
conditions=dict(method=['POST']))
action_controllers[action.collection] = controller
return action_controllers
def _request_ext_controllers(self, application, ext_mgr, mapper):
"""Returns a dict of RequestExtensionController-s by collection."""
request_ext_controllers = {}
for req_ext in ext_mgr.get_request_extensions():
if req_ext.key not in request_ext_controllers.keys():
controller = RequestExtensionController(application)
mapper.connect(req_ext.url_route + '.:(format)',
action='process',
controller=controller,
conditions=req_ext.conditions)
mapper.connect(req_ext.url_route,
action='process',
controller=controller,
conditions=req_ext.conditions)
request_ext_controllers[req_ext.key] = controller
return request_ext_controllers
@webob.dec.wsgify(RequestClass=wsgi.Request)
def __call__(self, req):
"""Route the incoming request with router."""
req.environ['extended.app'] = self.application
return self._router
@staticmethod
@webob.dec.wsgify(RequestClass=wsgi.Request)
def _dispatch(req):
"""Dispatch the request.
Returns the routed WSGI app's response or defers to the extended
application.
"""
match = req.environ['wsgiorg.routing_args'][1]
if not match:
return req.environ['extended.app']
app = match['controller']
return app
def plugin_aware_extension_middleware_factory(global_config, **local_config):
"""Paste factory."""
def _factory(app):
ext_mgr = PluginAwareExtensionManager.get_instance()
return ExtensionMiddleware(app, ext_mgr=ext_mgr)
return _factory
class ExtensionManager(object):
"""Load extensions from the configured extension path.
See tests/unit/extensions/foxinsocks.py for an
example extension implementation.
"""
def __init__(self, path):
LOG.info(_('Initializing extension manager.'))
self.path = path
self.extensions = {}
self._load_all_extensions()
policy.reset()
def get_resources(self):
"""Returns a list of ResourceExtension objects."""
resources = []
resources.append(ResourceExtension('extensions',
ExtensionController(self)))
for ext in self.extensions.itervalues():
try:
resources.extend(ext.get_resources())
except AttributeError:
# NOTE(dprince): Extensions aren't required to have resource
# extensions
pass
return resources
def get_actions(self):
"""Returns a list of ActionExtension objects."""
actions = []
for ext in self.extensions.itervalues():
try:
actions.extend(ext.get_actions())
except AttributeError:
# NOTE(dprince): Extensions aren't required to have action
# extensions
pass
return actions
def get_request_extensions(self):
"""Returns a list of RequestExtension objects."""
request_exts = []
for ext in self.extensions.itervalues():
try:
request_exts.extend(ext.get_request_extensions())
except AttributeError:
# NOTE(dprince): Extensions aren't required to have request
# extensions
pass
return request_exts
def extend_resources(self, version, attr_map):
"""Extend resources with additional resources or attributes.
:param attr_map: the existing mapping from resource name to
attrs definition.
After this function, we will extend the attr_map if an extension
wants to extend this map.
"""
update_exts = []
processed_exts = set()
exts_to_process = self.extensions.copy()
# Iterate until all extensions are processed or no progress
# is made in a whole iteration
while exts_to_process:
processed_ext_count = len(processed_exts)
for ext_name, ext in exts_to_process.items():
if not hasattr(ext, 'get_extended_resources'):
del exts_to_process[ext_name]
continue
if hasattr(ext, 'update_attributes_map'):
update_exts.append(ext)
if hasattr(ext, 'get_required_extensions'):
# Process extension only if all required extensions
# have been processed already
required_exts_set = set(ext.get_required_extensions())
if required_exts_set - processed_exts:
continue
try:
extended_attrs = ext.get_extended_resources(version)
for resource, resource_attrs in extended_attrs.iteritems():
if attr_map.get(resource, None):
attr_map[resource].update(resource_attrs)
else:
attr_map[resource] = resource_attrs
if extended_attrs:
attributes.EXT_NSES[ext.get_alias()] = (
ext.get_namespace())
except AttributeError:
LOG.exception(_("Error fetching extended attributes for "
"extension '%s'"), ext.get_name())
try:
comp_map = ext.get_alias_namespace_compatibility_map()
attributes.EXT_NSES_BC.update(comp_map)
except AttributeError:
LOG.info(_("Extension '%s' provides no backward "
"compatibility map for extended attributes"),
ext.get_name())
processed_exts.add(ext_name)
del exts_to_process[ext_name]
if len(processed_exts) == processed_ext_count:
# Exit loop as no progress was made
break
if exts_to_process:
# NOTE(salv-orlando): Consider whether this error should be fatal
LOG.error(_("It was impossible to process the following "
"extensions: %s because of missing requirements."),
','.join(exts_to_process.keys()))
# Extending extensions' attributes map.
for ext in update_exts:
ext.update_attributes_map(attr_map)
def _check_extension(self, extension):
"""Checks for required methods in extension objects."""
try:
LOG.debug(_('Ext name: %s'), extension.get_name())
LOG.debug(_('Ext alias: %s'), extension.get_alias())
LOG.debug(_('Ext description: %s'), extension.get_description())
LOG.debug(_('Ext namespace: %s'), extension.get_namespace())
LOG.debug(_('Ext updated: %s'), extension.get_updated())
except AttributeError as ex:
LOG.exception(_("Exception loading extension: %s"), unicode(ex))
return False
return True
def _load_all_extensions(self):
"""Load extensions from the configured path.
The extension name is constructed from the module_name. If your
extension module is named widgets.py, the extension class within that
module should be 'Widgets'.
See tests/unit/extensions/foxinsocks.py for an example extension
implementation.
"""
for path in self.path.split(':'):
if os.path.exists(path):
self._load_all_extensions_from_path(path)
else:
LOG.error(_("Extension path '%s' doesn't exist!"), path)
def _load_all_extensions_from_path(self, path):
# Sorting the extension list makes the order in which they
# are loaded predictable across a cluster of load-balanced
# Neutron Servers
for f in sorted(os.listdir(path)):
try:
LOG.debug(_('Loading extension file: %s'), f)
mod_name, file_ext = os.path.splitext(os.path.split(f)[-1])
ext_path = os.path.join(path, f)
if file_ext.lower() == '.py' and not mod_name.startswith('_'):
mod = imp.load_source(mod_name, ext_path)
ext_name = mod_name[0].upper() + mod_name[1:]
new_ext_class = getattr(mod, ext_name, None)
if not new_ext_class:
LOG.warn(_('Did not find expected name '
'"%(ext_name)s" in %(file)s'),
{'ext_name': ext_name,
'file': ext_path})
continue
new_ext = new_ext_class()
self.add_extension(new_ext)
except Exception as exception:
LOG.warn(_("Extension file %(f)s wasn't loaded due to "
"%(exception)s"), {'f': f, 'exception': exception})
def add_extension(self, ext):
# Do nothing if the extension doesn't check out
if not self._check_extension(ext):
return
alias = ext.get_alias()
LOG.info(_('Loaded extension: %s'), alias)
if alias in self.extensions:
raise exceptions.DuplicatedExtension(alias=alias)
self.extensions[alias] = ext
class PluginAwareExtensionManager(ExtensionManager):
_instance = None
def __init__(self, path, plugins):
self.plugins = plugins
super(PluginAwareExtensionManager, self).__init__(path)
self.check_if_plugin_extensions_loaded()
def _check_extension(self, extension):
"""Check if an extension is supported by any plugin."""
extension_is_valid = super(PluginAwareExtensionManager,
self)._check_extension(extension)
return (extension_is_valid and
self._plugins_support(extension) and
self._plugins_implement_interface(extension))
def _plugins_support(self, extension):
alias = extension.get_alias()
supports_extension = any((hasattr(plugin,
"supported_extension_aliases") and
alias in plugin.supported_extension_aliases)
for plugin in self.plugins.values())
if not supports_extension:
LOG.warn(_("Extension %s not supported by any of loaded plugins"),
alias)
return supports_extension
def _plugins_implement_interface(self, extension):
if(not hasattr(extension, "get_plugin_interface") or
extension.get_plugin_interface() is None):
return True
for plugin in self.plugins.values():
if isinstance(plugin, extension.get_plugin_interface()):
return True
LOG.warn(_("Loaded plugins do not implement extension %s interface"),
extension.get_alias())
return False
@classmethod
def get_instance(cls):
if cls._instance is None:
cls._instance = cls(get_extensions_path(),
manager.NeutronManager.get_service_plugins())
return cls._instance
def check_if_plugin_extensions_loaded(self):
"""Check if an extension supported by a plugin has been loaded."""
plugin_extensions = set(itertools.chain.from_iterable([
getattr(plugin, "supported_extension_aliases", [])
for plugin in self.plugins.values()]))
missing_aliases = plugin_extensions - set(self.extensions)
if missing_aliases:
raise exceptions.ExtensionsNotFound(
extensions=list(missing_aliases))
class RequestExtension(object):
"""Extend requests and responses of core Neutron OpenStack API controllers.
Provide a way to add data to responses and handle custom request data
that is sent to core Neutron OpenStack API controllers.
"""
def __init__(self, method, url_route, handler):
self.url_route = url_route
self.handler = handler
self.conditions = dict(method=[method])
self.key = "%s-%s" % (method, url_route)
class ActionExtension(object):
"""Add custom actions to core Neutron OpenStack API controllers."""
def __init__(self, collection, action_name, handler):
self.collection = collection
self.action_name = action_name
self.handler = handler
class ResourceExtension(object):
"""Add top level resources to the OpenStack API in Neutron."""
def __init__(self, collection, controller, parent=None, path_prefix="",
collection_actions={}, member_actions={}, attr_map={}):
self.collection = collection
self.controller = controller
self.parent = parent
self.collection_actions = collection_actions
self.member_actions = member_actions
self.path_prefix = path_prefix
self.attr_map = attr_map
# Returns the extension paths from a config entry and the __path__
# of neutron.extensions
def get_extensions_path():
paths = ':'.join(neutron.extensions.__path__)
if cfg.CONF.api_extensions_path:
paths = ':'.join([cfg.CONF.api_extensions_path, paths])
return paths
def append_api_extensions_path(paths):
paths = [cfg.CONF.api_extensions_path] + paths
cfg.CONF.set_override('api_extensions_path',
':'.join([p for p in paths if p]))
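# A hedged usage note: with api_extensions_path = '/opt/my_exts' configured,
# get_extensions_path() might yield (paths illustrative):
#   '/opt/my_exts:/usr/lib/python2.7/site-packages/neutron/extensions'
# and every *.py module on that path is then loaded by:
#   ext_mgr = ExtensionManager(get_extensions_path())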


@ -0,0 +1,177 @@
# Copyright (c) 2013 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron.common import constants
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.common import utils
from neutron import manager
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
class DhcpAgentNotifyAPI(n_rpc.RpcProxy):
"""API for plugin to notify DHCP agent."""
BASE_RPC_API_VERSION = '1.0'
# It seems the DHCP agent does not support bulk operations
VALID_RESOURCES = ['network', 'subnet', 'port']
VALID_METHOD_NAMES = ['network.create.end',
'network.update.end',
'network.delete.end',
'subnet.create.end',
'subnet.update.end',
'subnet.delete.end',
'port.create.end',
'port.update.end',
'port.delete.end']
def __init__(self, topic=topics.DHCP_AGENT, plugin=None):
super(DhcpAgentNotifyAPI, self).__init__(
topic=topic, default_version=self.BASE_RPC_API_VERSION)
self._plugin = plugin
@property
def plugin(self):
if self._plugin is None:
self._plugin = manager.NeutronManager.get_plugin()
return self._plugin
def _schedule_network(self, context, network, existing_agents):
"""Schedule the network to new agents
:return: all agents associated with the network
"""
new_agents = self.plugin.schedule_network(context, network) or []
if new_agents:
for agent in new_agents:
self._cast_message(
context, 'network_create_end',
{'network': {'id': network['id']}}, agent['host'])
elif not existing_agents:
LOG.warn(_('Unable to schedule network %s: no agents available; '
'will retry on subsequent port creation events.'),
network['id'])
return new_agents + existing_agents
def _get_enabled_agents(self, context, network, agents, method, payload):
"""Get the list of agents whose admin_state is UP."""
network_id = network['id']
enabled_agents = [x for x in agents if x.admin_state_up]
active_agents = [x for x in agents if x.is_active]
len_enabled_agents = len(enabled_agents)
len_active_agents = len(active_agents)
if len_active_agents < len_enabled_agents:
LOG.warn(_("Only %(active)d of %(total)d DHCP agents associated "
"with network '%(net_id)s' are marked as active, so "
" notifications may be sent to inactive agents.")
% {'active': len_active_agents,
'total': len_enabled_agents,
'net_id': network_id})
if not enabled_agents:
num_ports = self.plugin.get_ports_count(
context, {'network_id': [network_id]})
notification_required = (
num_ports > 0 and len(network['subnets']) >= 1)
if notification_required:
LOG.error(_("Will not send event %(method)s for network "
"%(net_id)s: no agent available. Payload: "
"%(payload)s")
% {'method': method,
'net_id': network_id,
'payload': payload})
return enabled_agents
def _notify_agents(self, context, method, payload, network_id):
"""Notify all the agents that are hosting the network."""
# fanout is required as we do not know who is "listening"
no_agents = not utils.is_extension_supported(
self.plugin, constants.DHCP_AGENT_SCHEDULER_EXT_ALIAS)
fanout_required = method == 'network_delete_end' or no_agents
# we do nothing on network creation because we want to give the
# admin the chance to associate an agent to the network manually
cast_required = method != 'network_create_end'
if fanout_required:
self._fanout_message(context, method, payload)
elif cast_required:
admin_ctx = (context if context.is_admin else context.elevated())
network = self.plugin.get_network(admin_ctx, network_id)
agents = self.plugin.get_dhcp_agents_hosting_networks(
context, [network_id])
# schedule the network first, if needed
schedule_required = method == 'port_create_end'
if schedule_required:
agents = self._schedule_network(admin_ctx, network, agents)
enabled_agents = self._get_enabled_agents(
context, network, agents, method, payload)
for agent in enabled_agents:
self._cast_message(
context, method, payload, agent.host, agent.topic)
def _cast_message(self, context, method, payload, host,
topic=topics.DHCP_AGENT):
"""Cast the payload to the dhcp agent running on the host."""
self.cast(
context, self.make_msg(method,
payload=payload),
topic='%s.%s' % (topic, host))
def _fanout_message(self, context, method, payload):
"""Fanout the payload to all dhcp agents."""
self.fanout_cast(
context, self.make_msg(method,
payload=payload),
topic=topics.DHCP_AGENT)
def network_removed_from_agent(self, context, network_id, host):
self._cast_message(context, 'network_delete_end',
{'network_id': network_id}, host)
def network_added_to_agent(self, context, network_id, host):
self._cast_message(context, 'network_create_end',
{'network': {'id': network_id}}, host)
def agent_updated(self, context, admin_state_up, host):
self._cast_message(context, 'agent_updated',
{'admin_state_up': admin_state_up}, host)
def notify(self, context, data, method_name):
# data is {'key' : 'value'} with only one key
if method_name not in self.VALID_METHOD_NAMES:
return
obj_type = data.keys()[0]
if obj_type not in self.VALID_RESOURCES:
return
obj_value = data[obj_type]
network_id = None
if obj_type == 'network' and 'id' in obj_value:
network_id = obj_value['id']
elif obj_type in ['port', 'subnet'] and 'network_id' in obj_value:
network_id = obj_value['network_id']
if not network_id:
return
method_name = method_name.replace(".", "_")
if method_name.endswith("_delete_end"):
if 'id' in obj_value:
self._notify_agents(context, method_name,
{obj_type + '_id': obj_value['id']},
network_id)
else:
self._notify_agents(context, method_name, data, network_id)
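# A hedged usage sketch (ids illustrative): after creating a port, the core
# plugin is expected to call
#
#   notifier = DhcpAgentNotifyAPI()
#   notifier.notify(context,
#                   {'port': {'id': 'PORT_ID', 'network_id': 'NET_ID'}},
#                   'port.create.end')
#
# which internally becomes a 'port_create_end' cast to each enabled DHCP
# agent hosting network NET_ID (scheduling the network first if needed).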


@ -0,0 +1,149 @@
# Copyright (c) 2013 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron.common import constants
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.common import utils
from neutron import manager
from neutron.openstack.common import log as logging
from neutron.plugins.common import constants as service_constants
LOG = logging.getLogger(__name__)
class L3AgentNotifyAPI(n_rpc.RpcProxy):
"""API for plugin to notify L3 agent."""
BASE_RPC_API_VERSION = '1.0'
def __init__(self, topic=topics.L3_AGENT):
super(L3AgentNotifyAPI, self).__init__(
topic=topic, default_version=self.BASE_RPC_API_VERSION)
def _notification_host(self, context, method, payload, host):
"""Notify the agent that is hosting the router."""
LOG.debug(_('Notify agent at %(host)s the message '
'%(method)s'), {'host': host,
'method': method})
self.cast(
context, self.make_msg(method,
payload=payload),
topic='%s.%s' % (topics.L3_AGENT, host))
def _agent_notification(self, context, method, router_ids,
operation, data):
"""Notify changed routers to hosting l3 agents."""
adminContext = context.is_admin and context or context.elevated()
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
for router_id in router_ids:
l3_agents = plugin.get_l3_agents_hosting_routers(
adminContext, [router_id],
admin_state_up=True,
active=True)
for l3_agent in l3_agents:
LOG.debug(_('Notify agent at %(topic)s.%(host)s the message '
'%(method)s'),
{'topic': l3_agent.topic,
'host': l3_agent.host,
'method': method})
self.cast(
context, self.make_msg(method,
routers=[router_id]),
topic='%s.%s' % (l3_agent.topic, l3_agent.host),
version='1.1')
def _agent_notification_arp(self, context, method, router_id,
operation, data):
"""Notify arp details to l3 agents hosting router."""
if not router_id:
return
adminContext = (context.is_admin and
context or context.elevated())
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
l3_agents = (plugin.
get_l3_agents_hosting_routers(adminContext,
[router_id],
admin_state_up=True,
active=True))
for l3_agent in l3_agents:
LOG.debug(_('Notify agent at %(topic)s.%(host)s the message '
'%(method)s'),
{'topic': l3_agent.topic,
'host': l3_agent.host,
'method': method})
dvr_arptable = {'router_id': router_id,
'arp_table': data}
self.cast(
context, self.make_msg(method,
payload=dvr_arptable),
topic='%s.%s' % (l3_agent.topic, l3_agent.host),
version='1.1')
def _notification(self, context, method, router_ids, operation, data):
"""Notify all the agents that are hosting the routers."""
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
if not plugin:
LOG.error(_('No plugin for L3 routing registered. Cannot notify '
'agents with the message %s'), method)
return
if utils.is_extension_supported(
plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
adminContext = (context.is_admin and
context or context.elevated())
plugin.schedule_routers(adminContext, router_ids, hints=data)
self._agent_notification(
context, method, router_ids, operation, data)
else:
self.fanout_cast(
context, self.make_msg(method,
routers=router_ids),
topic=topics.L3_AGENT)
def _notification_fanout(self, context, method, router_id):
"""Fanout the deleted router to all L3 agents."""
LOG.debug(_('Fanout notify agent at %(topic)s the message '
'%(method)s on router %(router_id)s'),
{'topic': topics.L3_AGENT,
'method': method,
'router_id': router_id})
self.fanout_cast(
context, self.make_msg(method,
router_id=router_id),
topic=topics.L3_AGENT)
def agent_updated(self, context, admin_state_up, host):
self._notification_host(context, 'agent_updated',
{'admin_state_up': admin_state_up},
host)
def router_deleted(self, context, router_id):
self._notification_fanout(context, 'router_deleted', router_id)
def routers_updated(self, context, router_ids, operation=None, data=None):
if router_ids:
self._notification(context, 'routers_updated', router_ids,
operation, data)
def router_removed_from_agent(self, context, router_id, host):
self._notification_host(context, 'router_removed_from_agent',
{'router_id': router_id}, host)
def router_added_to_agent(self, context, router_ids, host):
self._notification_host(context, 'router_added_to_agent',
router_ids, host)
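# A hedged usage sketch (ids illustrative):
#
#   l3_notifier = L3AgentNotifyAPI()
#   l3_notifier.router_deleted(context, 'ROUTER_ID')   # fanout to all agents
#   l3_notifier.routers_updated(context, ['ROUTER_ID'])
#
# routers_updated() schedules the routers first when the plugin supports the
# l3_agent_scheduler extension, then casts only to the hosting agents.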


@ -0,0 +1,99 @@
# Copyright (C) 2013 eNovance SAS <licensing@enovance.com>
#
# Author: Sylvain Afchain <sylvain.afchain@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.common import constants
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.common import utils
from neutron import manager
from neutron.openstack.common import log as logging
from neutron.plugins.common import constants as service_constants
LOG = logging.getLogger(__name__)
class MeteringAgentNotifyAPI(n_rpc.RpcProxy):
"""API for plugin to notify L3 metering agent."""
BASE_RPC_API_VERSION = '1.0'
def __init__(self, topic=topics.METERING_AGENT):
super(MeteringAgentNotifyAPI, self).__init__(
topic=topic, default_version=self.BASE_RPC_API_VERSION)
def _agent_notification(self, context, method, routers):
"""Notify l3 metering agents hosted by l3 agent hosts."""
adminContext = context.is_admin and context or context.elevated()
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
l3_routers = {}
for router in routers:
l3_agents = plugin.get_l3_agents_hosting_routers(
adminContext, [router['id']],
admin_state_up=True,
active=True)
for l3_agent in l3_agents:
LOG.debug(_('Notify metering agent at %(topic)s.%(host)s '
'the message %(method)s'),
{'topic': self.topic,
'host': l3_agent.host,
'method': method})
l3_router = l3_routers.get(l3_agent.host, [])
l3_router.append(router)
l3_routers[l3_agent.host] = l3_router
for host, routers in l3_routers.iteritems():
self.cast(context, self.make_msg(method, routers=routers),
topic='%s.%s' % (self.topic, host))
def _notification_fanout(self, context, method, router_id):
LOG.debug(_('Fanout notify metering agent at %(topic)s the message '
'%(method)s on router %(router_id)s'),
{'topic': self.topic,
'method': method,
'router_id': router_id})
self.fanout_cast(
context, self.make_msg(method,
router_id=router_id),
topic=self.topic)
def _notification(self, context, method, routers):
"""Notify all the agents that are hosting the routers."""
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
if utils.is_extension_supported(
plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
self._agent_notification(context, method, routers)
else:
self.fanout_cast(context, self.make_msg(method, routers=routers),
topic=self.topic)
def router_deleted(self, context, router_id):
self._notification_fanout(context, 'router_deleted', router_id)
def routers_updated(self, context, routers):
if routers:
self._notification(context, 'routers_updated', routers)
def update_metering_label_rules(self, context, routers):
self._notification(context, 'update_metering_label_rules', routers)
def add_metering_label(self, context, routers):
self._notification(context, 'add_metering_label', routers)
def remove_metering_label(self, context, routers):
self._notification(context, 'remove_metering_label', routers)


@ -0,0 +1,122 @@
# Copyright 2014, Hewlett Packard, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.common import log
from neutron.common import topics
from neutron import manager
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
class DVRServerRpcApiMixin(object):
"""Agent-side RPC (stub) for agent-to-plugin interaction."""
DVR_RPC_VERSION = "1.0"
@log.log
def get_dvr_mac_address_by_host(self, context, host):
return self.call(context,
self.make_msg('get_dvr_mac_address_by_host',
host=host),
version=self.DVR_RPC_VERSION,
topic=self.topic)
@log.log
def get_dvr_mac_address_list(self, context):
return self.call(context,
self.make_msg('get_dvr_mac_address_list'),
version=self.DVR_RPC_VERSION,
topic=self.topic)
@log.log
def get_compute_ports_on_host_by_subnet(self, context, host, subnet):
return self.call(context,
self.make_msg('get_compute_ports_on_host_by_subnet',
host=host,
subnet=subnet),
version=self.DVR_RPC_VERSION,
topic=self.topic)
@log.log
def get_subnet_for_dvr(self, context, subnet):
return self.call(context,
self.make_msg('get_subnet_for_dvr',
subnet=subnet),
version=self.DVR_RPC_VERSION,
topic=self.topic)
class DVRServerRpcCallbackMixin(object):
"""Plugin-side RPC (implementation) for agent-to-plugin interaction."""
@property
def plugin(self):
if not getattr(self, '_plugin', None):
self._plugin = manager.NeutronManager.get_plugin()
return self._plugin
def get_dvr_mac_address_list(self, context):
return self.plugin.get_dvr_mac_address_list(context)
def get_dvr_mac_address_by_host(self, context, host):
return self.plugin.get_dvr_mac_address_by_host(context, host)
def get_compute_ports_on_host_by_subnet(self, context, host, subnet):
return self.plugin.get_compute_ports_on_host_by_subnet(context,
host,
subnet)
def get_subnet_for_dvr(self, context, subnet):
return self.plugin.get_subnet_for_dvr(context, subnet)
class DVRAgentRpcApiMixin(object):
"""Plugin-side RPC (stub) for plugin-to-agent interaction."""
DVR_RPC_VERSION = "1.0"
def _get_dvr_update_topic(self):
return topics.get_topic_name(self.topic,
topics.DVR,
topics.UPDATE)
def dvr_mac_address_update(self, context, dvr_macs):
"""Notify dvr mac address updates."""
if not dvr_macs:
return
self.fanout_cast(context,
self.make_msg('dvr_mac_address_update',
dvr_macs=dvr_macs),
version=self.DVR_RPC_VERSION,
topic=self._get_dvr_update_topic())
class DVRAgentRpcCallbackMixin(object):
"""Agent-side RPC (implementation) for plugin-to-agent interaction."""
dvr_agent = None
def dvr_mac_address_update(self, context, **kwargs):
"""Callback for dvr_mac_addresses update.
:param dvr_macs: list of updated dvr_macs
"""
dvr_macs = kwargs.get('dvr_macs', [])
LOG.debug("dvr_macs updated on remote: %s", dvr_macs)
if not self.dvr_agent:
LOG.warn(_("DVR agent binding currently not set."))
return
self.dvr_agent.dvr_mac_address_update(dvr_macs)
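# A hedged pairing sketch (names other than the mix-ins are hypothetical;
# the API stubs assume an oslo RpcProxy base providing call/fanout_cast):
#
#   class DVRPluginApi(proxy.RpcProxy, DVRServerRpcApiMixin):
#       pass   # agent side: queries DVR MACs and ports from the plugin
#
#   class DVRCallbacks(DVRServerRpcCallbackMixin):
#       pass   # server side: answers those queries via the core plugin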


@ -0,0 +1,777 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
import re
from neutron.common import constants
from neutron.common import exceptions as n_exc
from neutron.openstack.common import log as logging
from neutron.openstack.common import uuidutils
LOG = logging.getLogger(__name__)
ATTR_NOT_SPECIFIED = object()
# Defining a constant to avoid repeating string literal in several modules
SHARED = 'shared'
# Used by range check to indicate no limit for a bound.
UNLIMITED = None
def _verify_dict_keys(expected_keys, target_dict, strict=True):
"""Allows to verify keys in a dictionary.
:param expected_keys: A list of keys expected to be present.
:param target_dict: The dictionary which should be verified.
:param strict: Specifies whether additional keys are allowed to be present.
:return: True, if keys in the dictionary correspond to the specification.
"""
if not isinstance(target_dict, dict):
msg = (_("Invalid input. '%(target_dict)s' must be a dictionary "
"with keys: %(expected_keys)s") %
{'target_dict': target_dict, 'expected_keys': expected_keys})
return msg
expected_keys = set(expected_keys)
provided_keys = set(target_dict.keys())
predicate = expected_keys.__eq__ if strict else expected_keys.issubset
if not predicate(provided_keys):
msg = (_("Validation of dictionary's keys failed."
"Expected keys: %(expected_keys)s "
"Provided keys: %(provided_keys)s") %
{'expected_keys': expected_keys,
'provided_keys': provided_keys})
return msg
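# Illustrative behaviour:
#   _verify_dict_keys(['start', 'end'], {'start': 'a', 'end': 'b'}) -> None
#   _verify_dict_keys(['start', 'end'], {'start': 'a'})  -> error message
#   _verify_dict_keys(['start'], {'start': 'a', 'x': 1}, strict=False)
#       -> None ('x' is tolerated because strict is False)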
def is_attr_set(attribute):
return not (attribute is None or attribute is ATTR_NOT_SPECIFIED)
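# ATTR_NOT_SPECIFIED is a sentinel distinct from None, so is_attr_set()
# is False for both None and "not given at all", while legitimate falsy
# values such as False, 0 or '' still count as set.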
def _validate_values(data, valid_values=None):
if data not in valid_values:
msg = (_("'%(data)s' is not in %(valid_values)s") %
{'data': data, 'valid_values': valid_values})
LOG.debug(msg)
return msg
def _validate_not_empty_string_or_none(data, max_len=None):
if data is not None:
return _validate_not_empty_string(data, max_len=max_len)
def _validate_not_empty_string(data, max_len=None):
msg = _validate_string(data, max_len=max_len)
if msg:
return msg
if not data.strip():
return _("'%s' Blank strings are not permitted") % data
def _validate_string_or_none(data, max_len=None):
if data is not None:
return _validate_string(data, max_len=max_len)
def _validate_string(data, max_len=None):
if not isinstance(data, basestring):
msg = _("'%s' is not a valid string") % data
LOG.debug(msg)
return msg
if max_len is not None and len(data) > max_len:
msg = (_("'%(data)s' exceeds maximum length of %(max_len)s") %
{'data': data, 'max_len': max_len})
LOG.debug(msg)
return msg
def _validate_boolean(data, valid_values=None):
try:
convert_to_boolean(data)
except n_exc.InvalidInput:
msg = _("'%s' is not a valid boolean value") % data
LOG.debug(msg)
return msg
def _validate_range(data, valid_values=None):
"""Check that integer value is within a range provided.
Test is inclusive. Allows either limit to be ignored, to allow
checking ranges where only the lower or upper limit matter.
It is expected that the limits provided are valid integers or
the value None.
"""
min_value = valid_values[0]
max_value = valid_values[1]
try:
data = int(data)
except (ValueError, TypeError):
msg = _("'%s' is not an integer") % data
LOG.debug(msg)
return msg
if min_value is not UNLIMITED and data < min_value:
msg = _("'%(data)s' is too small - must be at least "
"'%(limit)d'") % {'data': data, 'limit': min_value}
LOG.debug(msg)
return msg
if max_value is not UNLIMITED and data > max_value:
msg = _("'%(data)s' is too large - must be no larger than "
"'%(limit)d'") % {'data': data, 'limit': max_value}
LOG.debug(msg)
return msg
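# Illustrative behaviour (UNLIMITED skips a bound):
#   _validate_range(5, [0, 10])          -> None (valid)
#   _validate_range(-1, [0, UNLIMITED])  -> "'-1' is too small ..."
#   _validate_range('x', [0, 10])        -> "'x' is not an integer"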
def _validate_no_whitespace(data):
"""Validates that input has no whitespace."""
if len(data.split()) > 1:
msg = _("'%s' contains whitespace") % data
LOG.debug(msg)
raise n_exc.InvalidInput(error_message=msg)
return data
def _validate_mac_address(data, valid_values=None):
valid_mac = False
try:
valid_mac = netaddr.valid_mac(_validate_no_whitespace(data))
except Exception:
pass
finally:
# TODO(arosen): The code in this file should be refactored
# so it catches the correct exceptions. _validate_no_whitespace
# raises AttributeError if data is None.
if valid_mac is False:
msg = _("'%s' is not a valid MAC address") % data
LOG.debug(msg)
return msg
def _validate_mac_address_or_none(data, valid_values=None):
if data is None:
return
return _validate_mac_address(data, valid_values)
def _validate_ip_address(data, valid_values=None):
try:
netaddr.IPAddress(_validate_no_whitespace(data))
except Exception:
msg = _("'%s' is not a valid IP address") % data
LOG.debug(msg)
return msg
def _validate_ip_pools(data, valid_values=None):
"""Validate that start and end IP addresses are present.
In addition to this, the IP addresses will also be validated.
"""
if not isinstance(data, list):
msg = _("Invalid data format for IP pool: '%s'") % data
LOG.debug(msg)
return msg
expected_keys = ['start', 'end']
for ip_pool in data:
msg = _verify_dict_keys(expected_keys, ip_pool)
if msg:
LOG.debug(msg)
return msg
for k in expected_keys:
msg = _validate_ip_address(ip_pool[k])
if msg:
LOG.debug(msg)
return msg
def _validate_fixed_ips(data, valid_values=None):
if not isinstance(data, list):
msg = _("Invalid data format for fixed IP: '%s'") % data
LOG.debug(msg)
return msg
ips = []
for fixed_ip in data:
if not isinstance(fixed_ip, dict):
msg = _("Invalid data format for fixed IP: '%s'") % fixed_ip
LOG.debug(msg)
return msg
if 'ip_address' in fixed_ip:
# Ensure that duplicate entries are not set - just checking IP
# suffices. Duplicate subnet_ids are legitimate.
fixed_ip_address = fixed_ip['ip_address']
if fixed_ip_address in ips:
msg = _("Duplicate IP address '%s'") % fixed_ip_address
else:
msg = _validate_ip_address(fixed_ip_address)
if msg:
LOG.debug(msg)
return msg
ips.append(fixed_ip_address)
if 'subnet_id' in fixed_ip:
msg = _validate_uuid(fixed_ip['subnet_id'])
if msg:
LOG.debug(msg)
return msg
def _validate_nameservers(data, valid_values=None):
if not hasattr(data, '__iter__'):
msg = _("Invalid data format for nameserver: '%s'") % data
LOG.debug(msg)
return msg
ips = []
for ip in data:
msg = _validate_ip_address(ip)
if msg:
# This may be a hostname
msg = _validate_regex(ip, HOSTNAME_PATTERN)
if msg:
msg = _("'%s' is not a valid nameserver") % ip
LOG.debug(msg)
return msg
if ip in ips:
msg = _("Duplicate nameserver '%s'") % ip
LOG.debug(msg)
return msg
ips.append(ip)
def _validate_hostroutes(data, valid_values=None):
if not isinstance(data, list):
msg = _("Invalid data format for hostroute: '%s'") % data
LOG.debug(msg)
return msg
expected_keys = ['destination', 'nexthop']
hostroutes = []
for hostroute in data:
msg = _verify_dict_keys(expected_keys, hostroute)
if msg:
LOG.debug(msg)
return msg
msg = _validate_subnet(hostroute['destination'])
if msg:
LOG.debug(msg)
return msg
msg = _validate_ip_address(hostroute['nexthop'])
if msg:
LOG.debug(msg)
return msg
if hostroute in hostroutes:
msg = _("Duplicate hostroute '%s'") % hostroute
LOG.debug(msg)
return msg
hostroutes.append(hostroute)
def _validate_ip_address_or_none(data, valid_values=None):
if data is None:
return None
return _validate_ip_address(data, valid_values)
def _validate_subnet(data, valid_values=None):
msg = None
try:
net = netaddr.IPNetwork(_validate_no_whitespace(data))
if '/' not in data:
msg = _("'%(data)s' isn't a recognized IP subnet cidr,"
" '%(cidr)s' is recommended") % {"data": data,
"cidr": net.cidr}
else:
return
except Exception:
msg = _("'%s' is not a valid IP subnet") % data
if msg:
LOG.debug(msg)
return msg
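# Illustrative usage (values are hypothetical): a CIDR prefix is required,
# so a bare address is rejected with a recommended form:
#   _validate_subnet('10.0.0.0/24')  # -> None
#   _validate_subnet('10.0.0.1')     # -> "... '10.0.0.1/32' is recommended"
#   _validate_subnet('no-subnet')    # -> "'no-subnet' is not a valid IP subnet"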
def _validate_subnet_list(data, valid_values=None):
if not isinstance(data, list):
msg = _("'%s' is not a list") % data
LOG.debug(msg)
return msg
if len(set(data)) != len(data):
msg = _("Duplicate items in the list: '%s'") % ', '.join(data)
LOG.debug(msg)
return msg
for item in data:
msg = _validate_subnet(item)
if msg:
return msg
def _validate_subnet_or_none(data, valid_values=None):
if data is None:
return
return _validate_subnet(data, valid_values)
def _validate_regex(data, valid_values=None):
try:
if re.match(valid_values, data):
return
except TypeError:
pass
msg = _("'%s' is not a valid input") % data
LOG.debug(msg)
return msg
def _validate_regex_or_none(data, valid_values=None):
if data is None:
return
return _validate_regex(data, valid_values)
def _validate_uuid(data, valid_values=None):
if not uuidutils.is_uuid_like(data):
msg = _("'%s' is not a valid UUID") % data
LOG.debug(msg)
return msg
def _validate_uuid_or_none(data, valid_values=None):
if data is not None:
return _validate_uuid(data)
def _validate_uuid_list(data, valid_values=None):
if not isinstance(data, list):
msg = _("'%s' is not a list") % data
LOG.debug(msg)
return msg
for item in data:
msg = _validate_uuid(item)
if msg:
LOG.debug(msg)
return msg
if len(set(data)) != len(data):
msg = _("Duplicate items in the list: '%s'") % ', '.join(data)
LOG.debug(msg)
return msg
def _validate_dict_item(key, key_validator, data):
# Find conversion function, if any, and apply it
conv_func = key_validator.get('convert_to')
if conv_func:
data[key] = conv_func(data.get(key))
# Find validator function
# TODO(salv-orlando): Structure of dict attributes should be improved
# to avoid iterating over items
val_func = val_params = None
for (k, v) in key_validator.iteritems():
if k.startswith('type:'):
# ask forgiveness, not permission
try:
val_func = validators[k]
except KeyError:
return _("Validator '%s' does not exist.") % k
val_params = v
break
# Process validation
if val_func:
return val_func(data.get(key), val_params)
def _validate_dict(data, key_specs=None):
if not isinstance(data, dict):
msg = _("'%s' is not a dictionary") % data
LOG.debug(msg)
return msg
# Do not perform any further validation if no constraints are supplied
if not key_specs:
return
# Check whether all required keys are present
required_keys = [key for key, spec in key_specs.iteritems()
if spec.get('required')]
if required_keys:
msg = _verify_dict_keys(required_keys, data, False)
if msg:
LOG.debug(msg)
return msg
# Perform validation and conversion of all values
# according to the specifications.
for key, key_validator in [(k, v) for k, v in key_specs.iteritems()
if k in data]:
msg = _validate_dict_item(key, key_validator, data)
if msg:
LOG.debug(msg)
return msg
def _validate_dict_or_none(data, key_specs=None):
if data is not None:
return _validate_dict(data, key_specs)
def _validate_dict_or_empty(data, key_specs=None):
if data != {}:
return _validate_dict(data, key_specs)
def _validate_dict_or_nodata(data, key_specs=None):
if data:
return _validate_dict(data, key_specs)
def _validate_non_negative(data, valid_values=None):
try:
data = int(data)
except (ValueError, TypeError):
msg = _("'%s' is not an integer") % data
LOG.debug(msg)
return msg
if data < 0:
msg = _("'%s' should be non-negative") % data
LOG.debug(msg)
return msg
def convert_to_boolean(data):
if isinstance(data, basestring):
val = data.lower()
if val == "true" or val == "1":
return True
if val == "false" or val == "0":
return False
elif isinstance(data, bool):
return data
elif isinstance(data, int):
if data == 0:
return False
elif data == 1:
return True
msg = _("'%s' cannot be converted to boolean") % data
raise n_exc.InvalidInput(error_message=msg)
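# Illustrative conversions (values are hypothetical): only the strings
# 'true'/'false' (any case), '1'/'0', booleans and the ints 0/1 are accepted:
#   convert_to_boolean('TRUE')  # -> True
#   convert_to_boolean(0)       # -> False
#   convert_to_boolean('yes')   # -> raises n_exc.InvalidInput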
def convert_to_boolean_if_not_none(data):
if data is not None:
return convert_to_boolean(data)
def convert_to_int(data):
try:
return int(data)
except (ValueError, TypeError):
msg = _("'%s' is not a integer") % data
raise n_exc.InvalidInput(error_message=msg)
def convert_kvp_str_to_list(data):
"""Convert a value of the form 'key=value' to ['key', 'value'].
:raises: n_exc.InvalidInput if any of the strings are malformed
(e.g. do not contain a key).
"""
kvp = [x.strip() for x in data.split('=', 1)]
if len(kvp) == 2 and kvp[0]:
return kvp
msg = _("'%s' is not of the form <key>=[value]") % data
raise n_exc.InvalidInput(error_message=msg)
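# Illustrative usage (values are hypothetical): the split is bounded to the
# first '=', so values may themselves contain '=':
#   convert_kvp_str_to_list('k=v')    # -> ['k', 'v']
#   convert_kvp_str_to_list('k=a=b')  # -> ['k', 'a=b']
#   convert_kvp_str_to_list('=v')     # -> raises n_exc.InvalidInput (no key)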
def convert_kvp_list_to_dict(kvp_list):
"""Convert a list of 'key=value' strings to a dict.
:raises: n_exc.InvalidInput if any of the strings are malformed
(e.g. do not contain a key). Values for keys that appear
more than once are collected into a list.
"""
if kvp_list == ['True']:
# No values were provided (i.e. '--flag-name')
return {}
kvp_map = {}
for kvp_str in kvp_list:
key, value = convert_kvp_str_to_list(kvp_str)
kvp_map.setdefault(key, set())
kvp_map[key].add(value)
return dict((x, list(y)) for x, y in kvp_map.iteritems())
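# Illustrative usage (values are hypothetical): repeated keys are collected
# into a list of distinct values (set semantics, so ordering may vary):
#   convert_kvp_list_to_dict(['a=1', 'a=2', 'b=3'])
#   # -> {'a': ['1', '2'], 'b': ['3']}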
def convert_none_to_empty_list(value):
return [] if value is None else value
def convert_none_to_empty_dict(value):
return {} if value is None else value
def convert_to_list(data):
if data is None:
return []
elif hasattr(data, '__iter__'):
return list(data)
else:
return [data]
HOSTNAME_PATTERN = ("(?=^.{1,254}$)(^(?:(?!\d+\.|-)[a-zA-Z0-9_\-]"
"{1,63}(?<!-)\.?)+(?:[a-zA-Z]{2,})$)")
HEX_ELEM = '[0-9A-Fa-f]'
UUID_PATTERN = '-'.join([HEX_ELEM + '{8}', HEX_ELEM + '{4}',
HEX_ELEM + '{4}', HEX_ELEM + '{4}',
HEX_ELEM + '{12}'])
# Note: in order to ensure that the MAC address is unicast, the least
# significant bit of the first octet must be 0 (i.e. the first byte is even).
MAC_PATTERN = "^%s[aceACE02468](:%s{2}){5}$" % (HEX_ELEM, HEX_ELEM)
# Dictionary that maintains a list of validation functions
validators = {'type:dict': _validate_dict,
'type:dict_or_none': _validate_dict_or_none,
'type:dict_or_empty': _validate_dict_or_empty,
'type:dict_or_nodata': _validate_dict_or_nodata,
'type:fixed_ips': _validate_fixed_ips,
'type:hostroutes': _validate_hostroutes,
'type:ip_address': _validate_ip_address,
'type:ip_address_or_none': _validate_ip_address_or_none,
'type:ip_pools': _validate_ip_pools,
'type:mac_address': _validate_mac_address,
'type:mac_address_or_none': _validate_mac_address_or_none,
'type:nameservers': _validate_nameservers,
'type:non_negative': _validate_non_negative,
'type:range': _validate_range,
'type:regex': _validate_regex,
'type:regex_or_none': _validate_regex_or_none,
'type:string': _validate_string,
'type:string_or_none': _validate_string_or_none,
'type:not_empty_string': _validate_not_empty_string,
'type:not_empty_string_or_none':
_validate_not_empty_string_or_none,
'type:subnet': _validate_subnet,
'type:subnet_list': _validate_subnet_list,
'type:subnet_or_none': _validate_subnet_or_none,
'type:uuid': _validate_uuid,
'type:uuid_or_none': _validate_uuid_or_none,
'type:uuid_list': _validate_uuid_list,
'type:values': _validate_values,
'type:boolean': _validate_boolean}
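# Illustrative extension point (the 'type:vlan_tag' validator below is
# hypothetical): extensions can register additional checks by adding
# entries keyed 'type:<name>', which _validate_dict_item and
# prepare_request_body then look up by that key:
#   def _validate_vlan_tag(data, valid_values=None):
#       return _validate_range(data, [1, 4094])
#   validators['type:vlan_tag'] = _validate_vlan_tag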
# Define constants for base resource name
NETWORK = 'network'
NETWORKS = '%ss' % NETWORK
PORT = 'port'
PORTS = '%ss' % PORT
SUBNET = 'subnet'
SUBNETS = '%ss' % SUBNET
# Note: a default of ATTR_NOT_SPECIFIED indicates that an
# attribute is not required, but will be generated by the plugin
# if it is not specified. Particularly, a value of ATTR_NOT_SPECIFIED
# is different from an attribute that has been specified with a value of
# None. For example, if 'gateway_ip' is omitted in a request to
# create a subnet, the plugin will receive ATTR_NOT_SPECIFIED
# and the default gateway_ip will be generated.
# However, if gateway_ip is specified as None, this means that
# the subnet does not have a gateway IP.
# The following is a short reference for understanding attribute info:
# default: default value of the attribute (if missing, the attribute
# becomes mandatory).
# allow_post: the attribute can be used on POST requests.
# allow_put: the attribute can be used on PUT requests.
# validate: specifies rules for validating data in the attribute.
# convert_to: transformation to apply to the value before it is returned
# is_visible: the attribute is returned in GET responses.
# required_by_policy: the attribute is required by the policy engine and
# should therefore be filled by the API layer even if not present in
# request body.
# enforce_policy: the attribute is actively part of the policy enforcing
# mechanism, i.e. there might be rules which refer to this attribute.
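# For illustration, a hypothetical attribute declared with these keys (not
# part of the map below) might look like:
#   'description': {'allow_post': True, 'allow_put': True,
#                   'validate': {'type:string': None},
#                   'default': '', 'is_visible': True}
# i.e. optional on POST and PUT (it has a default), validated as a string,
# and returned in GET responses.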
RESOURCE_ATTRIBUTE_MAP = {
NETWORKS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '', 'is_visible': True},
'subnets': {'allow_post': False, 'allow_put': False,
'default': [],
'is_visible': True},
'admin_state_up': {'allow_post': True, 'allow_put': True,
'default': True,
'convert_to': convert_to_boolean,
'is_visible': True},
'status': {'allow_post': False, 'allow_put': False,
'is_visible': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
SHARED: {'allow_post': True,
'allow_put': True,
'default': False,
'convert_to': convert_to_boolean,
'is_visible': True,
'required_by_policy': True,
'enforce_policy': True},
},
PORTS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True, 'default': '',
'validate': {'type:string': None},
'is_visible': True},
'network_id': {'allow_post': True, 'allow_put': False,
'required_by_policy': True,
'validate': {'type:uuid': None},
'is_visible': True},
'admin_state_up': {'allow_post': True, 'allow_put': True,
'default': True,
'convert_to': convert_to_boolean,
'is_visible': True},
'mac_address': {'allow_post': True, 'allow_put': False,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:mac_address': None},
'enforce_policy': True,
'is_visible': True},
'fixed_ips': {'allow_post': True, 'allow_put': True,
'default': ATTR_NOT_SPECIFIED,
'convert_list_to': convert_kvp_list_to_dict,
'validate': {'type:fixed_ips': None},
'enforce_policy': True,
'is_visible': True},
'device_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '',
'is_visible': True},
'device_owner': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '',
'is_visible': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
'status': {'allow_post': False, 'allow_put': False,
'is_visible': True},
},
SUBNETS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True, 'default': '',
'validate': {'type:string': None},
'is_visible': True},
'ip_version': {'allow_post': True, 'allow_put': False,
'convert_to': convert_to_int,
'validate': {'type:values': [4, 6]},
'is_visible': True},
'network_id': {'allow_post': True, 'allow_put': False,
'required_by_policy': True,
'validate': {'type:uuid': None},
'is_visible': True},
'cidr': {'allow_post': True, 'allow_put': False,
'validate': {'type:subnet': None},
'is_visible': True},
'gateway_ip': {'allow_post': True, 'allow_put': True,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:ip_address_or_none': None},
'is_visible': True},
'allocation_pools': {'allow_post': True, 'allow_put': True,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:ip_pools': None},
'is_visible': True},
'dns_nameservers': {'allow_post': True, 'allow_put': True,
'convert_to': convert_none_to_empty_list,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:nameservers': None},
'is_visible': True},
'host_routes': {'allow_post': True, 'allow_put': True,
'convert_to': convert_none_to_empty_list,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:hostroutes': None},
'is_visible': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
'enable_dhcp': {'allow_post': True, 'allow_put': True,
'default': True,
'convert_to': convert_to_boolean,
'is_visible': True},
'ipv6_ra_mode': {'allow_post': True, 'allow_put': True,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:values': constants.IPV6_MODES},
'is_visible': True},
'ipv6_address_mode': {'allow_post': True, 'allow_put': True,
'default': ATTR_NOT_SPECIFIED,
'validate': {'type:values':
constants.IPV6_MODES},
'is_visible': True},
SHARED: {'allow_post': False,
'allow_put': False,
'default': False,
'convert_to': convert_to_boolean,
'is_visible': False,
'required_by_policy': True,
'enforce_policy': True},
}
}
# Identify the attribute used by a resource to reference another resource
RESOURCE_FOREIGN_KEYS = {
NETWORKS: 'network_id'
}
PLURALS = {NETWORKS: NETWORK,
PORTS: PORT,
SUBNETS: SUBNET,
'dns_nameservers': 'dns_nameserver',
'host_routes': 'host_route',
'allocation_pools': 'allocation_pool',
'fixed_ips': 'fixed_ip',
'extensions': 'extension'}
EXT_NSES = {}
# Namespaces to be added for backward compatibility
# when existing extended resource attributes are
# provided by an extension other than the original one.
EXT_NSES_BC = {}
def get_attr_metadata():
return {'plurals': PLURALS,
'xmlns': constants.XML_NS_V20,
constants.EXT_NS: EXT_NSES,
constants.EXT_NS_COMP: EXT_NSES_BC}

View File

@ -0,0 +1,677 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import netaddr
import webob.exc
from oslo.config import cfg
from neutron.api import api_common
from neutron.api.rpc.agentnotifiers import dhcp_rpc_agent_api
from neutron.api.v2 import attributes
from neutron.api.v2 import resource as wsgi_resource
from neutron.common import constants as const
from neutron.common import exceptions
from neutron.common import rpc as n_rpc
from neutron.openstack.common import log as logging
from neutron import policy
from neutron import quota
LOG = logging.getLogger(__name__)
FAULT_MAP = {exceptions.NotFound: webob.exc.HTTPNotFound,
exceptions.Conflict: webob.exc.HTTPConflict,
exceptions.InUse: webob.exc.HTTPConflict,
exceptions.BadRequest: webob.exc.HTTPBadRequest,
exceptions.ServiceUnavailable: webob.exc.HTTPServiceUnavailable,
exceptions.NotAuthorized: webob.exc.HTTPForbidden,
netaddr.AddrFormatError: webob.exc.HTTPBadRequest,
}
class Controller(object):
LIST = 'list'
SHOW = 'show'
CREATE = 'create'
UPDATE = 'update'
DELETE = 'delete'
def __init__(self, plugin, collection, resource, attr_info,
allow_bulk=False, member_actions=None, parent=None,
allow_pagination=False, allow_sorting=False):
if member_actions is None:
member_actions = []
self._plugin = plugin
self._collection = collection.replace('-', '_')
self._resource = resource.replace('-', '_')
self._attr_info = attr_info
self._allow_bulk = allow_bulk
self._allow_pagination = allow_pagination
self._allow_sorting = allow_sorting
self._native_bulk = self._is_native_bulk_supported()
self._native_pagination = self._is_native_pagination_supported()
self._native_sorting = self._is_native_sorting_supported()
self._policy_attrs = [name for (name, info) in self._attr_info.items()
if info.get('required_by_policy')]
self._notifier = n_rpc.get_notifier('network')
# use plugin's dhcp notifier, if this is already instantiated
agent_notifiers = getattr(plugin, 'agent_notifiers', {})
self._dhcp_agent_notifier = (
agent_notifiers.get(const.AGENT_TYPE_DHCP) or
dhcp_rpc_agent_api.DhcpAgentNotifyAPI()
)
if cfg.CONF.notify_nova_on_port_data_changes:
from neutron.notifiers import nova
self._nova_notifier = nova.Notifier()
self._member_actions = member_actions
self._primary_key = self._get_primary_key()
if self._allow_pagination and self._native_pagination:
# Native pagination needs native sorting support
if not self._native_sorting:
raise exceptions.Invalid(
_("Native pagination depend on native sorting")
)
if not self._allow_sorting:
LOG.info(_("Allow sorting is enabled because native "
"pagination requires native sorting"))
self._allow_sorting = True
if parent:
self._parent_id_name = '%s_id' % parent['member_name']
parent_part = '_%s' % parent['member_name']
else:
self._parent_id_name = None
parent_part = ''
self._plugin_handlers = {
self.LIST: 'get%s_%s' % (parent_part, self._collection),
self.SHOW: 'get%s_%s' % (parent_part, self._resource)
}
for action in [self.CREATE, self.UPDATE, self.DELETE]:
self._plugin_handlers[action] = '%s%s_%s' % (action, parent_part,
self._resource)
def _get_primary_key(self, default_primary_key='id'):
for key, value in self._attr_info.iteritems():
if value.get('primary_key', False):
return key
return default_primary_key
def _is_native_bulk_supported(self):
native_bulk_attr_name = ("_%s__native_bulk_support"
% self._plugin.__class__.__name__)
return getattr(self._plugin, native_bulk_attr_name, False)
def _is_native_pagination_supported(self):
native_pagination_attr_name = ("_%s__native_pagination_support"
% self._plugin.__class__.__name__)
return getattr(self._plugin, native_pagination_attr_name, False)
def _is_native_sorting_supported(self):
native_sorting_attr_name = ("_%s__native_sorting_support"
% self._plugin.__class__.__name__)
return getattr(self._plugin, native_sorting_attr_name, False)
def _exclude_attributes_by_policy(self, context, data):
"""Identifies attributes to exclude according to authZ policies.
Return a list of attribute names which should be stripped from the
response returned to the user because the user is not authorized
to see them.
"""
attributes_to_exclude = []
for attr_name in data.keys():
attr_data = self._attr_info.get(attr_name)
if attr_data and attr_data['is_visible']:
if policy.check(
context,
'%s:%s' % (self._plugin_handlers[self.SHOW], attr_name),
data,
might_not_exist=True):
# this attribute is visible, check next one
continue
# if the code reaches this point then either the policy check
# failed or the attribute was not visible in the first place
attributes_to_exclude.append(attr_name)
return attributes_to_exclude
def _view(self, context, data, fields_to_strip=None):
"""Build a view of an API resource.
:param context: the neutron context
:param data: the object for which a view is being created
:param fields_to_strip: attributes to remove from the view
:returns: a view of the object which includes only attributes
visible according to API resource declaration and authZ policies.
"""
fields_to_strip = ((fields_to_strip or []) +
self._exclude_attributes_by_policy(context, data))
return self._filter_attributes(context, data, fields_to_strip)
def _filter_attributes(self, context, data, fields_to_strip=None):
if not fields_to_strip:
return data
return dict(item for item in data.iteritems()
if (item[0] not in fields_to_strip))
def _do_field_list(self, original_fields):
fields_to_add = None
# don't do anything if fields were not specified in the request
if original_fields:
fields_to_add = [attr for attr in self._policy_attrs
if attr not in original_fields]
original_fields.extend(self._policy_attrs)
return original_fields, fields_to_add
def __getattr__(self, name):
if name in self._member_actions:
def _handle_action(request, id, **kwargs):
arg_list = [request.context, id]
# Ensure policy engine is initialized
policy.init()
# Fetch the resource and verify if the user can access it
try:
resource = self._item(request, id, True)
except exceptions.PolicyNotAuthorized:
msg = _('The resource could not be found.')
raise webob.exc.HTTPNotFound(msg)
body = kwargs.pop('body', None)
# Explicit comparison with None to distinguish from {}
if body is not None:
arg_list.append(body)
# It is ok to raise a 403 because accessibility to the
# object was checked earlier in this method
policy.enforce(request.context, name, resource)
return getattr(self._plugin, name)(*arg_list, **kwargs)
return _handle_action
else:
raise AttributeError
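# Illustrative dispatch (the 'add_router_interface' action is hypothetical):
# if it is listed in member_actions, controller.add_router_interface returns
# a handler that fetches the resource, enforces policy on it, and then
# delegates to plugin.add_router_interface(context, id[, body], **kwargs).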
def _get_pagination_helper(self, request):
if self._allow_pagination and self._native_pagination:
return api_common.PaginationNativeHelper(request,
self._primary_key)
elif self._allow_pagination:
return api_common.PaginationEmulatedHelper(request,
self._primary_key)
return api_common.NoPaginationHelper(request, self._primary_key)
def _get_sorting_helper(self, request):
if self._allow_sorting and self._native_sorting:
return api_common.SortingNativeHelper(request, self._attr_info)
elif self._allow_sorting:
return api_common.SortingEmulatedHelper(request, self._attr_info)
return api_common.NoSortingHelper(request, self._attr_info)
def _items(self, request, do_authz=False, parent_id=None):
"""Retrieves and formats a list of elements of the requested entity."""
# NOTE(salvatore-orlando): The following ensures that fields which
# are needed for authZ policy validation are not stripped away by the
# plugin before returning.
original_fields, fields_to_add = self._do_field_list(
api_common.list_args(request, 'fields'))
filters = api_common.get_filters(request, self._attr_info,
['fields', 'sort_key', 'sort_dir',
'limit', 'marker', 'page_reverse'])
kwargs = {'filters': filters,
'fields': original_fields}
sorting_helper = self._get_sorting_helper(request)
pagination_helper = self._get_pagination_helper(request)
sorting_helper.update_args(kwargs)
sorting_helper.update_fields(original_fields, fields_to_add)
pagination_helper.update_args(kwargs)
pagination_helper.update_fields(original_fields, fields_to_add)
if parent_id:
kwargs[self._parent_id_name] = parent_id
obj_getter = getattr(self._plugin, self._plugin_handlers[self.LIST])
obj_list = obj_getter(request.context, **kwargs)
obj_list = sorting_helper.sort(obj_list)
obj_list = pagination_helper.paginate(obj_list)
# Check authz
if do_authz:
# FIXME(salvatore-orlando): obj_getter might return references to
# other resources. Must check authZ on them too.
# Omit items from list that should not be visible
obj_list = [obj for obj in obj_list
if policy.check(request.context,
self._plugin_handlers[self.SHOW],
obj,
plugin=self._plugin)]
# Use the first element in the list for discriminating which attributes
# should be filtered out because of authZ policies
# fields_to_add contains a list of attributes added for request policy
# checks but that were not required by the user. They should
# therefore be stripped.
fields_to_strip = fields_to_add or []
if obj_list:
fields_to_strip += self._exclude_attributes_by_policy(
request.context, obj_list[0])
collection = {self._collection:
[self._filter_attributes(
request.context, obj,
fields_to_strip=fields_to_strip)
for obj in obj_list]}
pagination_links = pagination_helper.get_links(obj_list)
if pagination_links:
collection[self._collection + "_links"] = pagination_links
return collection
def _item(self, request, id, do_authz=False, field_list=None,
parent_id=None):
"""Retrieves and formats a single element of the requested entity."""
kwargs = {'fields': field_list}
action = self._plugin_handlers[self.SHOW]
if parent_id:
kwargs[self._parent_id_name] = parent_id
obj_getter = getattr(self._plugin, action)
obj = obj_getter(request.context, id, **kwargs)
# Check authz
# FIXME(salvatore-orlando): obj_getter might return references to
# other resources. Must check authZ on them too.
if do_authz:
policy.enforce(request.context, action, obj)
return obj
def _send_dhcp_notification(self, context, data, methodname):
if cfg.CONF.dhcp_agent_notification:
if self._collection in data:
for body in data[self._collection]:
item = {self._resource: body}
self._dhcp_agent_notifier.notify(context, item, methodname)
else:
self._dhcp_agent_notifier.notify(context, data, methodname)
def _send_nova_notification(self, action, orig, returned):
if hasattr(self, '_nova_notifier'):
self._nova_notifier.send_network_change(action, orig, returned)
def index(self, request, **kwargs):
"""Returns a list of the requested entity."""
parent_id = kwargs.get(self._parent_id_name)
# Ensure policy engine is initialized
policy.init()
return self._items(request, True, parent_id)
def show(self, request, id, **kwargs):
"""Returns detailed information about the requested entity."""
try:
# NOTE(salvatore-orlando): The following ensures that fields
# which are needed for authZ policy validation are not stripped
# away by the plugin before returning.
field_list, added_fields = self._do_field_list(
api_common.list_args(request, "fields"))
parent_id = kwargs.get(self._parent_id_name)
# Ensure policy engine is initialized
policy.init()
return {self._resource:
self._view(request.context,
self._item(request,
id,
do_authz=True,
field_list=field_list,
parent_id=parent_id),
fields_to_strip=added_fields)}
except exceptions.PolicyNotAuthorized:
# To avoid giving away information, pretend that it
# doesn't exist
msg = _('The resource could not be found.')
raise webob.exc.HTTPNotFound(msg)
def _emulate_bulk_create(self, obj_creator, request, body, parent_id=None):
objs = []
try:
for item in body[self._collection]:
kwargs = {self._resource: item}
if parent_id:
kwargs[self._parent_id_name] = parent_id
fields_to_strip = self._exclude_attributes_by_policy(
request.context, item)
objs.append(self._filter_attributes(
request.context,
obj_creator(request.context, **kwargs),
fields_to_strip=fields_to_strip))
return objs
# Note(salvatore-orlando): broad catch as in theory a plugin
# could raise any kind of exception
except Exception as ex:
for obj in objs:
obj_deleter = getattr(self._plugin,
self._plugin_handlers[self.DELETE])
try:
kwargs = ({self._parent_id_name: parent_id} if parent_id
else {})
obj_deleter(request.context, obj['id'], **kwargs)
except Exception:
# broad catch as our only purpose is to log the exception
LOG.exception(_("Unable to undo add for "
"%(resource)s %(id)s"),
{'resource': self._resource,
'id': obj['id']})
# TODO(salvatore-orlando): The object being processed when the
# plugin raised might have been created or not in the db.
# We need a way for ensuring that if it has been created,
# it is then deleted
raise ex
def create(self, request, body=None, **kwargs):
"""Creates a new instance of the requested entity."""
parent_id = kwargs.get(self._parent_id_name)
self._notifier.info(request.context,
self._resource + '.create.start',
body)
body = Controller.prepare_request_body(request.context, body, True,
self._resource, self._attr_info,
allow_bulk=self._allow_bulk)
action = self._plugin_handlers[self.CREATE]
# Check authz
if self._collection in body:
# Have to account for bulk create
items = body[self._collection]
deltas = {}
bulk = True
else:
items = [body]
bulk = False
# Ensure policy engine is initialized
policy.init()
for item in items:
self._validate_network_tenant_ownership(request,
item[self._resource])
policy.enforce(request.context,
action,
item[self._resource])
try:
tenant_id = item[self._resource]['tenant_id']
count = quota.QUOTAS.count(request.context, self._resource,
self._plugin, self._collection,
tenant_id)
if bulk:
delta = deltas.get(tenant_id, 0) + 1
deltas[tenant_id] = delta
else:
delta = 1
kwargs = {self._resource: count + delta}
except exceptions.QuotaResourceUnknown as e:
# We don't want to quota this resource
LOG.debug(e)
else:
quota.QUOTAS.limit_check(request.context,
item[self._resource]['tenant_id'],
**kwargs)
def notify(create_result):
notifier_method = self._resource + '.create.end'
self._notifier.info(request.context,
notifier_method,
create_result)
self._send_dhcp_notification(request.context,
create_result,
notifier_method)
return create_result
kwargs = {self._parent_id_name: parent_id} if parent_id else {}
if self._collection in body and self._native_bulk:
# plugin does atomic bulk create operations
obj_creator = getattr(self._plugin, "%s_bulk" % action)
objs = obj_creator(request.context, body, **kwargs)
# Use first element of list to discriminate attributes which
# should be removed because of authZ policies
fields_to_strip = self._exclude_attributes_by_policy(
request.context, objs[0])
return notify({self._collection: [self._filter_attributes(
request.context, obj, fields_to_strip=fields_to_strip)
for obj in objs]})
else:
obj_creator = getattr(self._plugin, action)
if self._collection in body:
# Emulate atomic bulk behavior
objs = self._emulate_bulk_create(obj_creator, request,
body, parent_id)
return notify({self._collection: objs})
else:
kwargs.update({self._resource: body})
obj = obj_creator(request.context, **kwargs)
self._send_nova_notification(action, {},
{self._resource: obj})
return notify({self._resource: self._view(request.context,
obj)})
def delete(self, request, id, **kwargs):
"""Deletes the specified entity."""
self._notifier.info(request.context,
self._resource + '.delete.start',
{self._resource + '_id': id})
action = self._plugin_handlers[self.DELETE]
# Check authz
policy.init()
parent_id = kwargs.get(self._parent_id_name)
obj = self._item(request, id, parent_id=parent_id)
try:
policy.enforce(request.context,
action,
obj)
except exceptions.PolicyNotAuthorized:
# To avoid giving away information, pretend that it
# doesn't exist
msg = _('The resource could not be found.')
raise webob.exc.HTTPNotFound(msg)
obj_deleter = getattr(self._plugin, action)
obj_deleter(request.context, id, **kwargs)
notifier_method = self._resource + '.delete.end'
self._notifier.info(request.context,
notifier_method,
{self._resource + '_id': id})
result = {self._resource: self._view(request.context, obj)}
self._send_nova_notification(action, {}, result)
self._send_dhcp_notification(request.context,
result,
notifier_method)
def update(self, request, id, body=None, **kwargs):
"""Updates the specified entity's attributes."""
parent_id = kwargs.get(self._parent_id_name)
try:
payload = body.copy()
except AttributeError:
msg = _("Invalid format: %s") % request.body
raise exceptions.BadRequest(resource='body', msg=msg)
payload['id'] = id
self._notifier.info(request.context,
self._resource + '.update.start',
payload)
body = Controller.prepare_request_body(request.context, body, False,
self._resource, self._attr_info,
allow_bulk=self._allow_bulk)
action = self._plugin_handlers[self.UPDATE]
# Load object to check authz
# but pass only attributes in the original body and required
# by the policy engine to the policy 'brain'
field_list = [name for (name, value) in self._attr_info.iteritems()
if (value.get('required_by_policy') or
value.get('primary_key') or
'default' not in value)]
# Ensure policy engine is initialized
policy.init()
orig_obj = self._item(request, id, field_list=field_list,
parent_id=parent_id)
orig_object_copy = copy.copy(orig_obj)
orig_obj.update(body[self._resource])
try:
policy.enforce(request.context,
action,
orig_obj)
except exceptions.PolicyNotAuthorized:
# To avoid giving away information, pretend that it
# doesn't exist
msg = _('The resource could not be found.')
raise webob.exc.HTTPNotFound(msg)
obj_updater = getattr(self._plugin, action)
kwargs = {self._resource: body}
if parent_id:
kwargs[self._parent_id_name] = parent_id
obj = obj_updater(request.context, id, **kwargs)
result = {self._resource: self._view(request.context, obj)}
notifier_method = self._resource + '.update.end'
self._notifier.info(request.context, notifier_method, result)
self._send_dhcp_notification(request.context,
result,
notifier_method)
self._send_nova_notification(action, orig_object_copy, result)
return result
@staticmethod
def _populate_tenant_id(context, res_dict, is_create):
if (('tenant_id' in res_dict and
res_dict['tenant_id'] != context.tenant_id and
not context.is_admin)):
msg = _("Specifying 'tenant_id' other than authenticated "
"tenant in request requires admin privileges")
raise webob.exc.HTTPBadRequest(msg)
if is_create and 'tenant_id' not in res_dict:
if context.tenant_id:
res_dict['tenant_id'] = context.tenant_id
else:
msg = _("Running without keystone AuthN requires "
" that tenant_id is specified")
raise webob.exc.HTTPBadRequest(msg)
@staticmethod
def prepare_request_body(context, body, is_create, resource, attr_info,
allow_bulk=False):
"""Verifies required attributes are in request body.
Also checks that an attribute is only specified if it is allowed
for the given operation (create/update).
Attributes with default values are considered to be optional.
body argument must be the deserialized body.
"""
collection = resource + "s"
if not body:
raise webob.exc.HTTPBadRequest(_("Resource body required"))
LOG.debug(_("Request body: %(body)s"), {'body': body})
prep_req_body = lambda x: Controller.prepare_request_body(
context,
x if resource in x else {resource: x},
is_create,
resource,
attr_info,
allow_bulk)
if collection in body:
if not allow_bulk:
raise webob.exc.HTTPBadRequest(_("Bulk operation "
"not supported"))
bulk_body = [prep_req_body(item) for item in body[collection]]
if not bulk_body:
raise webob.exc.HTTPBadRequest(_("Resources required"))
return {collection: bulk_body}
res_dict = body.get(resource)
if res_dict is None:
msg = _("Unable to find '%s' in request body") % resource
raise webob.exc.HTTPBadRequest(msg)
Controller._populate_tenant_id(context, res_dict, is_create)
Controller._verify_attributes(res_dict, attr_info)
if is_create: # POST
for attr, attr_vals in attr_info.iteritems():
if attr_vals['allow_post']:
if ('default' not in attr_vals and
attr not in res_dict):
msg = _("Failed to parse request. Required "
"attribute '%s' not specified") % attr
raise webob.exc.HTTPBadRequest(msg)
res_dict[attr] = res_dict.get(attr,
attr_vals.get('default'))
else:
if attr in res_dict:
msg = _("Attribute '%s' not allowed in POST") % attr
raise webob.exc.HTTPBadRequest(msg)
else: # PUT
for attr, attr_vals in attr_info.iteritems():
if attr in res_dict and not attr_vals['allow_put']:
msg = _("Cannot update read-only attribute %s") % attr
raise webob.exc.HTTPBadRequest(msg)
for attr, attr_vals in attr_info.iteritems():
if (attr not in res_dict or
res_dict[attr] is attributes.ATTR_NOT_SPECIFIED):
continue
# Convert values if necessary
if 'convert_to' in attr_vals:
res_dict[attr] = attr_vals['convert_to'](res_dict[attr])
# Check that configured values are correct
if 'validate' not in attr_vals:
continue
for rule in attr_vals['validate']:
res = attributes.validators[rule](res_dict[attr],
attr_vals['validate'][rule])
if res:
msg_dict = dict(attr=attr, reason=res)
msg = _("Invalid input for %(attr)s. "
"Reason: %(reason)s.") % msg_dict
raise webob.exc.HTTPBadRequest(msg)
return body
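# Illustrative flow (the request body is hypothetical): for a network create
# such as {'network': {'name': 'net1'}}, prepare_request_body fills in
# tenant_id from the context, applies the declared defaults (for example
# 'admin_state_up': True), runs every 'validate' rule on the supplied values
# and raises HTTPBadRequest on the first failure.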
@staticmethod
def _verify_attributes(res_dict, attr_info):
extra_keys = set(res_dict.keys()) - set(attr_info.keys())
if extra_keys:
msg = _("Unrecognized attribute(s) '%s'") % ', '.join(extra_keys)
raise webob.exc.HTTPBadRequest(msg)
def _validate_network_tenant_ownership(self, request, resource_item):
# TODO(salvatore-orlando): consider whether this check can be folded
# in the policy engine
if (request.context.is_admin or
self._resource not in ('port', 'subnet')):
return
network = self._plugin.get_network(
request.context,
resource_item['network_id'])
# do not perform the check on shared networks
if network.get('shared'):
return
network_owner = network['tenant_id']
if network_owner != resource_item['tenant_id']:
msg = _("Tenant %(tenant_id)s not allowed to "
"create %(resource)s on this network")
raise webob.exc.HTTPForbidden(msg % {
"tenant_id": resource_item['tenant_id'],
"resource": self._resource,
})
def create_resource(collection, resource, plugin, params, allow_bulk=False,
member_actions=None, parent=None, allow_pagination=False,
allow_sorting=False):
controller = Controller(plugin, collection, resource, params, allow_bulk,
member_actions=member_actions, parent=parent,
allow_pagination=allow_pagination,
allow_sorting=allow_sorting)
return wsgi_resource.Resource(controller, FAULT_MAP)
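# Illustrative wiring (hypothetical caller): a router could expose the core
# 'networks' collection with something like:
#   res = create_resource('networks', 'network', plugin,
#                         attributes.RESOURCE_ATTRIBUTE_MAP['networks'],
#                         allow_bulk=True, allow_pagination=True,
#                         allow_sorting=True)
# which returns a wsgi_resource.Resource that maps plugin exceptions to HTTP
# errors through FAULT_MAP.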

View File

@ -0,0 +1,69 @@
# Copyright 2011 Citrix Systems.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import webob.dec
import webob.exc
from neutron.api.views import versions as versions_view
from neutron.openstack.common import gettextutils
from neutron.openstack.common import log as logging
from neutron import wsgi
LOG = logging.getLogger(__name__)
class Versions(object):
@classmethod
def factory(cls, global_config, **local_config):
return cls()
@webob.dec.wsgify(RequestClass=wsgi.Request)
def __call__(self, req):
"""Respond to a request for all Neutron API versions."""
version_objs = [
{
"id": "v2.0",
"status": "CURRENT",
},
]
if req.path != '/':
language = req.best_match_language()
msg = _('Unknown API version specified')
msg = gettextutils.translate(msg, language)
return webob.exc.HTTPNotFound(explanation=msg)
builder = versions_view.get_view_builder(req)
versions = [builder.build(version) for version in version_objs]
response = dict(versions=versions)
metadata = {
"application/xml": {
"attributes": {
"version": ["status", "id"],
"link": ["rel", "href"],
}
}
}
content_type = req.best_match_content_type()
body = (wsgi.Serializer(metadata=metadata).
serialize(response, content_type))
response = webob.Response()
response.content_type = content_type
response.body = body
return response

View File

@ -0,0 +1,58 @@
# Copyright 2010-2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
def get_view_builder(req):
base_url = req.application_url
return ViewBuilder(base_url)
class ViewBuilder(object):
def __init__(self, base_url):
"""Object initialization.
:param base_url: URL of the root WSGI application
"""
self.base_url = base_url
def build(self, version_data):
"""Generic method used to generate a version entity."""
version = {
"id": version_data["id"],
"status": version_data["status"],
"links": self._build_links(version_data),
}
return version
def _build_links(self, version_data):
"""Generate a container of links that refer to the provided version."""
href = self.generate_href(version_data["id"])
links = [
{
"rel": "self",
"href": href,
},
]
return links
def generate_href(self, version_number):
"""Create an url that refers to a specific version_number."""
return os.path.join(self.base_url, version_number)
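# Illustrative result (the base_url is hypothetical): with base_url
# 'http://controller:9696' the href built for 'v2.0' is
# 'http://controller:9696/v2.0'.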

View File

@ -0,0 +1,71 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
import webob.dec
import webob.exc
from neutron import context
from neutron.openstack.common import log as logging
from neutron.openstack.common.middleware import request_id
from neutron import wsgi
LOG = logging.getLogger(__name__)
class NeutronKeystoneContext(wsgi.Middleware):
"""Make a request context from keystone headers."""
@webob.dec.wsgify
def __call__(self, req):
# Determine the user ID
user_id = req.headers.get('X_USER_ID')
if not user_id:
LOG.debug(_("X_USER_ID is not found in request"))
return webob.exc.HTTPUnauthorized()
# Determine the tenant
tenant_id = req.headers.get('X_PROJECT_ID')
# Suck out the roles
roles = [r.strip() for r in req.headers.get('X_ROLES', '').split(',')]
# Human-friendly names
tenant_name = req.headers.get('X_PROJECT_NAME')
user_name = req.headers.get('X_USER_NAME')
# Use request_id if already set
req_id = req.environ.get(request_id.ENV_REQUEST_ID)
# Create a context with the authentication data
ctx = context.Context(user_id, tenant_id, roles=roles,
user_name=user_name, tenant_name=tenant_name,
request_id=req_id)
# Inject the context...
req.environ['neutron.context'] = ctx
return self.application
def pipeline_factory(loader, global_conf, **local_conf):
"""Create a paste pipeline based on the 'auth_strategy' config option."""
pipeline = local_conf[cfg.CONF.auth_strategy]
pipeline = pipeline.split()
filters = [loader.get_filter(n) for n in pipeline[:-1]]
app = loader.get_app(pipeline[-1])
filters.reverse()
for filter in filters:
app = filter(app)
return app
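# Illustrative paste config (the section below is hypothetical): with
# auth_strategy = keystone, a composite such as
#   [composite:neutronapi_v2_0]
#   use = call:neutron.auth:pipeline_factory
#   noauth = request_id catch_errors extensions neutronapiapp_v2_0
#   keystone = request_id catch_errors authtoken keystonecontext extensions
#       neutronapiapp_v2_0
# selects the 'keystone' line, treats the last name as the app, and wraps it
# in the remaining filters from right to left, so the first name listed
# becomes the outermost middleware.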

View File

@ -0,0 +1,14 @@
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

Some files were not shown because too many files have changed in this diff.