Remove trailing whitespaces in regular file

Fixes bug #945346

Change-Id: I07a303c2e503e50d7138585c683e0d1310339276
This commit is contained in:
Hengqing Hu 2012-03-07 13:43:37 +08:00
parent d954b11944
commit 9a042d3c50
22 changed files with 693 additions and 693 deletions

View File

@@ -1,5 +1,5 @@
(function($) {
$.fn.tweet = function(o){
var s = {
username: ["seaofclouds"], // [string] required, unless you want to display our tweets. :) it can be an array, just do ["username1","username2","etc"]
@@ -17,9 +17,9 @@
loading_text: null, // [string] optional loading text, displayed while tweets load
query: null // [string] optional search query
};
if(o) $.extend(s, o);
$.fn.extend({
linkUrl: function() {
var returning = [];

View File

@@ -81,7 +81,7 @@ http://twitter.com/necolas
Created: 02 March 2010
Version: 1.1 (21 October 2010)
Dual licensed under MIT and GNU GPLv2 © Nicolas Gallagher
------------------------------------------ */
/* THE SPEECH BUBBLE
------------------------------------------------------------------------------------------------------------------------------- */
@@ -96,7 +96,7 @@ Dual licensed under MIT and GNU GPLv2 © Nicolas Gallagher
border:5px solid #BC1518;
color:#333;
background:#fff;
/* css3 */
-moz-border-radius:10px;
-webkit-border-radius:10px;

View File

@@ -20,13 +20,13 @@ Compute API Extensions
In this section you will find extension reference information. If you need to write an extension's reference page, you can find an RST template in doc/source/api_ext/rst_extension_template.rst.
The Compute API specification is published to http://docs.openstack.org/api and the source is found in https://github.com/openstack/compute-api. These extensions extend the core API.
Extensions
----------
.. toctree::
:maxdepth: 3
ext_config_drive.rst
ext_floating_ip_dns.rst
ext_floating_ips.rst

View File

@@ -1,6 +1,6 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -27,8 +27,8 @@ How to Join the OpenStack Community
Our community welcomes all people interested in open source cloud computing, and there are no formal
membership requirements. The best way to join the community is to talk with others online or at a meetup
and offer contributions through Launchpad, the wiki, or blogs. We welcome all types of contributions,
from blueprint designs to documentation to testing to deployment scripts.
Contributing Code
-----------------
@@ -89,14 +89,14 @@ aggregation with your blog posts, there are instructions for `adding your blog <
Twitter
-------
Because all the cool kids do it: `@openstack <http://twitter.com/openstack>`_. Also follow the
`#openstack <http://search.twitter.com/search?q=%23openstack>`_ tag for relevant tweets.
OpenStack Docs Site
-------------------
The `nova.openstack.org <http://nova.openstack.org>`_ site is geared towards developer documentation,
and the `docs.openstack.org <http://docs.openstack.org>`_ site is intended for cloud administrators
who are standing up and running OpenStack Compute in production. You can contribute to the Docs Site
by using git and Gerrit and contributing to the openstack-manuals project at http://github.com/openstack/openstack-manuals.

View File

@@ -110,7 +110,7 @@ modindex_common_prefix = ['nova.']
# -- Options for man page output -----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'
man_pages = [

View File

@@ -104,7 +104,7 @@ modindex_common_prefix = ['nova.']
# -- Options for man page output -----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'
man_pages = [

View File

@@ -1,5 +1,5 @@
..
Copyright 2011 OpenStack LLC
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -17,13 +17,13 @@
Source for illustrations in doc/source/image_src/zone_distsched_illustrations.odp
(OpenOffice Impress format) Illustrations are "exported" to png and then scaled
to 400x300 or 640x480 as needed and placed in the doc/source/images directory.
Filter Scheduler
=====================
The Scheduler is akin to a Dating Service. Requests for the creation of new instances come in and the most applicable Compute nodes are selected from a large pool of potential candidates. In a small deployment we may be happy with the currently available Chance Scheduler which randomly selects a Host from the available pool. Or if you need something a little more fancy you may want to use the Filter Scheduler, which selects Compute hosts from a logical partitioning of available hosts.
.. image:: /images/dating_service.png
The Filter Scheduler supports filtering and weighing to make informed decisions on where a new instance should be created.
@@ -31,9 +31,9 @@ So, how does this all work?
Costs & Weights
---------------
When deciding where to place an Instance, we compare a Weighted Cost for each Host. The Weighting, currently, is just the sum of each Cost. Costs are nothing more than integers from `0 - max_int`. Costs are computed by looking at the various Capabilities of the Host relative to the specs of the Instance being asked for. Trying to put a plain vanilla instance on a high performance host should have a very high cost. But putting a vanilla instance on a vanilla Host should have a low cost.
Some Costs are more esoteric. Consider a rule that says we should prefer Hosts that don't already have an instance on it that is owned by the user requesting it (to mitigate against machine failures). Here we have to look at all the other Instances on the host to compute our cost.
An example of some other costs might include selecting:
* a GPU-based host over a standard CPU
@@ -42,15 +42,15 @@ An example of some other costs might include selecting:
* a host in the EU vs North America
* etc
This Weight is computed for each Instance requested. If the customer asked for 1000 instances, the consumed resources on each Host are "virtually" depleted so the Cost can change accordingly.
.. image:: /images/costs_weights.png
Filtering and Weighing
----------------------
The filtering (excluding compute nodes incapable of fulfilling the request) and weighing (computing the relative "fitness" of a compute node to fulfill the request) rules used are very subjective operations ... Service Providers will probably have a very different set of filtering and weighing rules than private cloud administrators. The filtering and weighing aspects of the `FilterScheduler` are flexible and extensible.
.. image:: /images/filtering.png
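As a minimal sketch of the filter-then-weigh pass described above (the host fields, cost rules, and function names here are hypothetical illustrations, not the real `FilterScheduler` interfaces):

```python
# Illustrative filter-then-weigh host selection; hypothetical fields/costs.

def filter_hosts(hosts, spec):
    """Filtering: exclude hosts incapable of fulfilling the request."""
    return [h for h in hosts
            if h["free_ram_mb"] >= spec["ram_mb"]
            and h["free_disk_gb"] >= spec["disk_gb"]]

def weighted_cost(host, spec):
    """Weighing: the Weight is just the sum of the individual Costs."""
    # High cost for wasting a high-capacity host on a small instance.
    waste_cost = host["free_ram_mb"] - spec["ram_mb"]
    # High cost if this user already has an instance here (spread for HA).
    affinity_cost = 1000 if spec["user"] in host["users"] else 0
    return waste_cost + affinity_cost

def pick_host(hosts, spec):
    candidates = filter_hosts(hosts, spec)
    if not candidates:
        raise RuntimeError("no host can fulfill the request")
    return min(candidates, key=lambda h: weighted_cost(h, spec))

hosts = [
    {"name": "a", "free_ram_mb": 4096, "free_disk_gb": 100, "users": {"alice"}},
    {"name": "b", "free_ram_mb": 2048, "free_disk_gb": 50, "users": set()},
]
spec = {"ram_mb": 1024, "disk_gb": 20, "user": "alice"}
print(pick_host(hosts, spec)["name"])  # "b": fits the request at lowest cost
```

Note how the cost functions, not the selection loop, carry the policy: swapping in different rules changes the placement behavior without touching `pick_host`.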
Host Filter
-----------

View File

@@ -25,7 +25,7 @@ Programming HowTos and Tutorials
--------------------------------
.. toctree::
:maxdepth: 3
development.environment
unit_tests
addmethod.openstackapi

View File

@@ -18,7 +18,7 @@
Networking
==========
.. todo::
* document hardware specific commands (maybe in admin guide?) (todd)
* document a map between flags and managers/backends (todd)

View File

@@ -8,7 +8,7 @@ through using the Python `eventlet <http://eventlet.net/>`_ and
Green threads use a cooperative model of threading: thread context
switches can only occur when specific eventlet or greenlet library calls are
made (e.g., sleep, certain I/O calls). From the operating system's point of
view, each OpenStack service runs in a single thread.
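The cooperative model can be sketched with nothing but stdlib generators: control switches only at explicit yield points, analogous to eventlet switching only on calls such as sleep or blocking I/O. The scheduler and worker below are illustrative, not eventlet's API.

```python
# Stdlib-only sketch of cooperative threading: context switches happen only
# at explicit yield points, never preemptively (illustrative, not eventlet).

def worker(name, log):
    log.append(f"{name}: step 1")
    yield                       # explicit switch point (like eventlet.sleep(0))
    log.append(f"{name}: step 2")
    yield
    log.append(f"{name}: done")

def run(threads):
    """Round-robin scheduler: a thread runs until it yields or finishes."""
    while threads:
        t = threads.pop(0)
        try:
            next(t)
            threads.append(t)   # re-queue; it resumes after its yield
        except StopIteration:
            pass                # generator finished; drop it

log = []
run([worker("A", log), worker("B", log)])
print(log)  # A and B interleave deterministically at the yield points
```

Because switches happen only at the `yield` statements, the code between two yields runs atomically with respect to the other "threads" — which is exactly why green threads reduce, but do not eliminate, race conditions.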
The use of green threads reduces the likelihood of race conditions, but does
not completely eliminate them. In some cases, you may need to use the

View File

@@ -1,6 +1,6 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -18,8 +18,8 @@
Welcome to Nova's documentation!
================================
Nova is a cloud computing fabric controller, the main part of an IaaS system.
Individuals and organizations can use Nova to host and manage their own cloud
computing systems. Nova originated as a project out of NASA Ames Research Laboratory.
Nova is written with the following design guidelines in mind:
@@ -33,7 +33,7 @@ Nova is written with the following design guidelines in mind:
This documentation is generated by the Sphinx toolkit and lives in the source
tree. Additional draft and project documentation on Nova and other components of OpenStack can
be found on the `OpenStack wiki`_. Cloud administrators, refer to `docs.openstack.org`_.
Also see the :doc:`community` page for other ways to interact with the community.

View File

@@ -77,13 +77,13 @@ Concept: System Architecture
Nova consists of seven main components, with the Cloud Controller component representing the global state and interacting with all other components. API Server acts as the Web services front end for the cloud controller. Compute Controller provides compute server resources, and the Object Store component provides storage services. Auth Manager provides authentication and authorization services. Volume Controller provides fast and permanent block-level storage for the compute servers. Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. Scheduler selects the most suitable compute controller to host an instance.
.. image:: images/Novadiagram.png
Nova is built on a shared-nothing, messaging-based architecture. All of the major components, that is Compute Controller, Volume Controller, Network Controller, and Object Store can be run on multiple servers. Cloud Controller communicates with Object Store via HTTP (Hyper Text Transfer Protocol), but it communicates with Scheduler, Network Controller, and Volume Controller via AMQP (Advanced Message Queue Protocol). To avoid blocking each component while waiting for a response, Nova uses asynchronous calls, with a call-back that gets triggered when a response is received.
To achieve the shared-nothing property with multiple copies of the same component, Nova keeps all the cloud system state in a distributed data store. Updates to system state are written into this store, using atomic transactions when required. Requests for system state are read out of this store. In limited cases, the read results are cached within controllers for short periods of time (for example, the current list of system users.)
.. note:: The database schema is available on the `OpenStack Wiki <http://wiki.openstack.org/NovaDatabaseSchema>`_.
Concept: Storage
----------------
@@ -171,7 +171,7 @@ details.
Concept: Flags
--------------
Nova uses python-gflags for a distributed command line system, and the flags can either be set when running a command at the command line or within a flag file. When you install Nova packages for the Austin release, each nova service gets its own flag file. For example, nova-network.conf is used for configuring the nova-network service, and so forth. In releases beyond Austin which was released in October 2010, all flags are set in nova.conf.
Concept: Plugins
----------------
@@ -213,7 +213,7 @@ When launching VM instances, the project manager specifies which security groups
A security group can be thought of as a security profile or a security role - it promotes the good practice of managing firewalls by role, not by machine. For example, a user could stipulate that servers with the "webapp" role must be able to connect to servers with the "mysql" role on port 3306. Going further with the security profile analogy, an instance can be launched with membership of multiple security groups - similar to a server with multiple roles. Because all rules in security groups are ACCEPT rules, it's trivial to combine them.
Each rule in a security group must specify the source of packets to be allowed, which can either be a subnet anywhere on the Internet (in CIDR notation, with 0.0.0.0/0 representing the entire Internet) or another security group. In the latter case, the source security group can be any user's group. This makes it easy to grant selective access to one user's instances from instances run by the user's friends, partners, and vendors.
The creation of rules with other security groups specified as sources helps users deal with dynamic IP addressing. Without this feature, the user would have had to adjust the security groups each time a new instance is launched. This practice would become cumbersome if an application running in Nova is very dynamic and elastic, for example scales up or down frequently.
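The two rule source kinds described above (a CIDR subnet or another security group) can be modeled in a few lines. The class and method names here are hypothetical illustrations, not Nova's actual schema:

```python
# Toy model of security-group rules with the two source kinds described
# above; class/field names are hypothetical, not Nova's database schema.
import ipaddress

class SecurityGroup:
    def __init__(self, name):
        self.name = name
        self.rules = []        # (kind, source, port) triples, all ACCEPT rules
        self.members = set()   # IPs of instances currently in this group

    def allow_cidr(self, cidr, port):
        self.rules.append(("cidr", ipaddress.ip_network(cidr), port))

    def allow_group(self, group, port):
        self.rules.append(("group", group, port))

    def permits(self, src_ip, port):
        ip = ipaddress.ip_address(src_ip)
        for kind, source, rule_port in self.rules:
            if rule_port != port:
                continue
            if kind == "cidr" and ip in source:
                return True
            # Group-sourced rules follow membership, so they keep working
            # even when member instances change IP addresses.
            if kind == "group" and src_ip in source.members:
                return True
        return False

webapp = SecurityGroup("webapp")
mysql = SecurityGroup("mysql")
mysql.allow_group(webapp, 3306)     # "webapp" role may reach "mysql" on 3306
webapp.allow_cidr("0.0.0.0/0", 80)  # port 80 open to the entire Internet
webapp.members.add("10.0.0.5")

print(mysql.permits("10.0.0.5", 3306))     # True  (member of webapp)
print(mysql.permits("10.0.0.9", 3306))     # False (not a member)
print(webapp.permits("203.0.113.7", 80))   # True  (matches 0.0.0.0/0)
```

The group-sourced rule is what makes the dynamic-addressing point concrete: membership, not a fixed IP list, decides whether traffic is accepted.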

View File

@@ -23,4 +23,4 @@ With Nova, you can manage images either using the built-in object store or using
* Ability to store and retrieve virtual machine images
* Ability to store and retrieve metadata about these virtual machine images
Refer to http://glance.openstack.org for additional details.

View File

@@ -16,12 +16,12 @@
Managing Instance Types and Flavors
===================================
You can manage instance types and instance flavors using the nova-manage command-line interface coupled with the instance_type subcommand for nova-manage.
What are Instance Types or Flavors ?
------------------------------------
Instance types describe the compute, memory and storage capacity of nova computing instances. In layman terms, this is the size (in terms of vCPUs, RAM, etc.) of the virtual server that you will be launching. In the EC2 API, these are called by names such as "m1.large" or "m1.tiny", while the OpenStack API terms these "flavors" with names like "512 MB Server".
In Nova, "flavor" and "instance type" are equivalent terms. When you create an EC2 instance type, you are also creating an OpenStack API flavor. To reduce repetition, for the rest of this document I will refer to these as instance types.
@@ -34,8 +34,8 @@ In the current (Cactus) version of nova, instance types can only be created by t
Basic Management
----------------
Instance types / flavors are managed through the nova-manage binary with
the "instance_type" command and an appropriate subcommand. Note that you can also use
the "flavor" command as a synonym for "instance_types".
To see all currently active instance types, use the list subcommand::
@@ -58,7 +58,7 @@ By default, the list subcommand only shows active instance types. To see all ins
m1.deleted: Memory: 2048MB, VCPUS: 1, Storage: 20GB, FlavorID: 2, Swap: 0GB, RXTX Quota: 0GB, RXTX Cap: 0MB, inactive
To create an instance type, use the "create" subcommand with the following positional arguments:
* memory (expressed in megabytes)
* vcpu(s) (integer)
* local storage (expressed in gigabytes)
* flavorid (unique integer)
@@ -76,10 +76,10 @@ To delete an instance type, use the "delete" subcommand and specify the name::
# nova-manage instance_type delete m1.xxlarge
m1.xxlarge deleted
Please note that the "delete" command only marks the instance type as
inactive in the database; it does not actually remove the instance type. This is done
to preserve the instance type definition for long running instances (which may not
terminate for months or years). If you are sure that you want to delete this instance
type from the database, pass the "--purge" flag after the name::
# nova-manage instance_type delete m1.xxlarge --purge

View File

@@ -1,6 +1,6 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may

View File

@@ -32,8 +32,8 @@ IP addresses for VM instances are grabbed from a subnet specified by the network
* Each compute host creates a single bridge for all instances to use to attach to the external network.
* The networking configuration is injected into the instance before it is booted or it is obtained by a guest agent installed in the instance.
Note that the configuration injection currently only works on linux-style systems that keep networking
configuration in /etc/network/interfaces.

View File

@@ -22,21 +22,21 @@ VLAN Network Mode is the default mode for Nova. It provides a private network
segment for each project's instances that can be accessed via a dedicated
VPN connection from the Internet.
In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. The Linux networking bridges and VLANs are created by Nova when required, described in more detail in Nova VLAN Network Management Implementation.
..
(this text revised above)
Because the flat network and flat DHCP network are simple to understand and yet do not scale well enough for real-world cloud systems, this section focuses on the VLAN network implementation by the VLAN Network Manager.
In the VLAN network mode, all the VM instances of a project are connected together in a VLAN with the specified private subnet. Each running VM instance is assigned an IP address within the given private subnet.
.. image:: /images/Novadiagram.png
:width: 790
While network traffic between VM instances belonging to the same VLAN is always open, Nova can enforce isolation of network traffic between different projects by enforcing one VLAN per project.
In addition, the network administrator can specify a pool of public IP addresses that users may allocate and then assign to VMs, either at boot or dynamically at run-time. This capability is similar to Amazon's 'elastic IPs'. A public IP address may be associated with a running instance, allowing the VM instance to be accessed from the public network. The public IP addresses are accessible from the network host and NATed to the private IP address of the project. A public IP address can be associated with a project using the euca-allocate-address command.
This is the default networking mode and supports the most features. For multiple machine installation, it requires a switch that supports host-managed vlan tagging. In this mode, nova will create a vlan and bridge for each project. The project gets a range of private ips that are only accessible from inside the vlan. In order for a user to access the instances in their project, a special vpn instance (code named :ref:`cloudpipe <cloudpipe>`) needs to be created. Nova generates a certificate and key for the user to access the vpn and starts the vpn automatically. More information on cloudpipe can be found :ref:`here <cloudpipe>`.
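As a toy illustration of the per-project allocation this mode performs, the sketch below hands each project its own VLAN id, bridge name, and private /24. The starting VLAN number, bridge naming scheme, and subnet carving are assumptions for illustration, not Nova's actual allocator:

```python
# Hypothetical per-project VLAN/bridge/subnet allocation, mirroring the
# "one VLAN, one bridge, one subnet per project" description above.
import ipaddress

VLAN_START = 100                                   # assumed starting VLAN id
SUBNETS = list(ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=24))

allocations = {}

def allocate(project):
    """Assign a VLAN id, bridge, and private subnet on first request."""
    if project not in allocations:
        idx = len(allocations)
        vlan = VLAN_START + idx
        allocations[project] = {
            "vlan": vlan,
            "bridge": "br%d" % vlan,
            "subnet": SUBNETS[idx],
        }
    return allocations[project]

a = allocate("project_a")
b = allocate("project_b")
print(a["vlan"], a["bridge"], a["subnet"])  # 100 br100 10.0.0.0/24
print(b["vlan"], b["bridge"], b["subnet"])  # 101 br101 10.0.1.0/24
```

Repeated calls for the same project return the same allocation, which matches the text: resources are created on demand and then reused for every instance in that project.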
@@ -65,22 +65,22 @@ We also keep as a goal a common DMZ segment for support services, meaning these
Limitations
-----------
We kept in mind some of these limitations:
* Projects / cluster limited to available VLANs in switching infrastructure
* Requires VPN for access to project segment
Implementation
--------------
Currently Nova segregates project VLANs using 802.1q VLAN tagging in the
switching layer. Compute hosts create VLAN-specific interfaces and bridges
as required.
The network nodes act as default gateway for project networks and contain
all of the routing and firewall rules implementing security groups. The
network node also handles DHCP to provide instance IPs for each project.
VPN access is provided by running a small instance called CloudPipe
on the IP immediately following the gateway IP for each project. The
network node maps a dedicated public IP/port to the CloudPipe instance.

View File

@@ -152,7 +152,7 @@ Important Options
management ip on the same network as the proxies.
.. todo::
Reformat command line app instructions for commands using
``:command:``, ``:option:``, and ``.. program::``. (bug-947261)

View File

@@ -32,7 +32,7 @@ Novas Cloud Fabric is composed of the following major components:
.. image:: /images/fabric.png
:width: 790
API Server
--------------------------------------------------
At the heart of the cloud framework is an API Server. This API Server makes command and control of the hypervisor, storage, and networking programmatically available to users in realization of the definition of cloud computing.

View File

@@ -1,226 +1,226 @@
..
Copyright (c) 2010 Citrix Systems, Inc.
Copyright 2010 OpenStack LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
VMware ESX/ESXi Server Support for OpenStack Compute
====================================================
Introduction
------------
A module named 'vmwareapi' is added to 'nova.virt' to add support for the VMware ESX/ESXi hypervisor to OpenStack Compute (Nova). Nova may now use VMware vSphere as a compute provider.
The basic requirement is to support VMware vSphere 4.1 as a compute provider within Nova, and the deployment architecture supports both ESX and ESXi. VM storage is restricted to VMFS volumes on local drives. vCenter is not required by the current design and is not currently supported; instead, Nova Compute talks directly to ESX/ESXi.
The 'vmwareapi' module is integrated with Glance, so that VM images can be streamed from there for boot on ESXi using Glance server for image storage & retrieval.
The module currently supports Nova's flat networking model (Flat Manager) and VLAN networking model.
.. image:: images/vmwareapi_blockdiagram.jpg
System Requirements
-------------------
The following software components are required for building the cloud using OpenStack on top of ESX/ESXi Server(s):
* OpenStack
* Glance Image service
* VMware ESX v4.1 or VMware ESXi(licensed) v4.1
VMware ESX Requirements
-----------------------
* ESX credentials with administration/root privileges
* Single local hard disk at the ESX host
* An ESX Virtual Machine Port Group (For Flat Networking)
* An ESX physical network adapter (For VLAN networking)
* Need to enable "vSphere Web Access" in "vSphere client" UI at Configuration->Security Profile->Firewall
Python dependencies
-------------------
* suds-0.4
* Installation procedure on Ubuntu/Debian
::
easy_install suds==0.4
Configuration flags required for nova-compute
---------------------------------------------
::
--connection_type=vmwareapi
--vmwareapi_host_ip=<VMware ESX Host IP>
--vmwareapi_host_username=<VMware ESX Username>
--vmwareapi_host_password=<VMware ESX Password>
--vmwareapi_vlan_interface=<Physical ethernet adapter name in VMware ESX host for vlan networking E.g vmnic0> [Optional, only for VLAN Networking]
Configuration flags required for nova-network
---------------------------------------------
::
--network_manager=nova.network.manager.FlatManager [or nova.network.manager.VlanManager]
--flat_network_bridge=<ESX Virtual Machine Port Group> [Optional, only for Flat Networking]
Configuration flags required for nova-console
---------------------------------------------
::
--console_manager=nova.console.vmrc_manager.ConsoleVMRCManager
--console_driver=nova.console.vmrc.VMRCSessionConsole [Optional, only for OTP (One time Passwords) as against host credentials]
Other flags
-----------
::
--image_service=nova.image.glance.GlanceImageService
--glance_host=<Glance Host>
--vmwareapi_wsdl_loc=<http://<WEB SERVER>/vimService.wsdl>
Note: Due to a faulty WSDL shipped with ESX vSphere 4.1, a working WSDL needs to be hosted on a web server. Follow the steps below to download the SDK:
* Go to http://www.vmware.com/support/developer/vc-sdk/
* Go to section VMware vSphere Web Services SDK 4.0
* Click "Downloads"
* Enter VMware credentials when prompted for download
* Unzip the downloaded file vi-sdk-4.0.0-xxx.zip
* Go to SDK->WSDL->vim25 & host the files "vimService.wsdl" and "vim.wsdl" in a WEB SERVER
* Set the flag "--vmwareapi_wsdl_loc" with url, "http://<WEB SERVER>/vimService.wsdl"
Debug flag
----------
.. note::
suds logging is very verbose and turned off by default. If you need to
debug the VMware API calls, change the default_log_levels flag appropriately.
VLAN Network Manager
--------------------
VLAN network support is added through a custom network driver in the nova-compute node, "nova.network.vmwareapi_net", which uses a physical ethernet adapter on the VMware ESX/ESXi host for VLAN networking (the adapter name is specified by the vlan_interface flag in the nova-compute configuration).
Using the physical adapter name, the associated Virtual Switch is determined. In VMware ESX there can be only one Virtual Switch associated with a physical adapter.
When VM Spawn request is issued with a VLAN ID the work flow looks like,
1. Check that a Physical adapter with the given name exists. If no, throw an error.If yes, goto next step.
2. Check if a Virtual Switch is associated with the Physical ethernet adapter with vlan interface name. If no, throw an error. If yes, goto next step.
3. Check if a port group with the network bridge name exists. If no, create a port group in the Virtual switch with the give name and VLAN id and goto step 6. If yes, goto next step.
4. Check if the port group is associated with the Virtual Switch. If no, throw an error. If yes, goto next step.
5. Check if the port group is associated with the given VLAN Id. If no, throw an error. If yes, goto next step.
6. Spawn the VM using this Port Group as the Network Name for the VM.
Guest console Support
---------------------
| VMware VMRC console is a built-in console method providing graphical control of the VM remotely.
|
| VMRC Console types supported:
| # Host based credentials
| Not secure (Sends ESX admin credentials in clear text)
|
| # OTP (One time passwords)
| Secure but creates multiple session entries in DB for each OpenStack console create request.
| Console sessions created is can be used only once.
|
| Install browser based VMware ESX plugins/activex on the client machine to connect
|
| Windows:-
| Internet Explorer:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-win32-x86.exe
|
| Mozilla Firefox:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-win32-x86.xpi
|
| Linux:-
| Mozilla Firefox
| 32-Bit Linux:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-linux-x86.xpi
|
| 64-Bit Linux:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-linux-x64.xpi
|
| OpenStack Console Details:
| console_type = vmrc+credentials | vmrc+session
| host = <VMware ESX Host>
| port = <VMware ESX Port>
| password = {'vm_id': <VMware VM ID>,'username':<VMware ESX Username>, 'password':<VMware ESX Password>} //base64 + json encoded
|
| Instantiate the plugin/activex object
| # In Internet Explorer
| <object id='vmrc' classid='CLSID:B94C2238-346E-4C5E-9B36-8CC627F35574'>
| </object>
|
| # Mozilla Firefox and other browsers
| <object id='vmrc' type='application/x-vmware-vmrc;version=2.5.0.0'>
| </object>
|
| Open vmrc connection
| # Host based credentials [type=vmrc+credentials]
| <script type="text/javascript">
| var MODE_WINDOW = 2;
| var vmrc = document.getElementById('vmrc');
| vmrc.connect(<VMware ESX Host> + ':' + <VMware ESX Port>, <VMware ESX Username>, <VMware ESX Password>, '', <VMware VM ID>, MODE_WINDOW);
| </script>
|
| # OTP (One time passwords) [type=vmrc+session]
| <script type="text/javascript">
| var MODE_WINDOW = 2;
| var vmrc = document.getElementById('vmrc');
| vmrc.connectWithSession(<VMware ESX Host> + ':' + <VMware ESX Port>, <VMware VM ID>, <VMware ESX Password>, MODE_WINDOW);
| </script>
Assumptions
-----------
1. The VMware images uploaded to the image repositories have VMware Tools installed.
FAQ
---
1. What type of disk images are supported?
* Only VMware VMDK's are currently supported and of that support is available only for thick disks, thin provisioned disks are not supported.
2. How is IP address information injected into the guest?
* IP address information is injected through 'machine.id' vmx parameter (equivalent to XenStore in XenServer). This information can be retrived inside the guest using VMware tools.
3. What is the guest tool?
* The guest tool is a small python script that should be run either as a service or added to system startup. This script configures networking on the guest. The guest tool is available at tools/esx/guest_tool.py
4. What type of consoles are supported?
* VMware VMRC based consoles are supported. There are 2 options for credentials one is OTP (Secure but creates multiple session entries in DB for each OpenStack console create request.) & other is host based credentials (It may not be secure as ESX credentials are transmitted as clear text).
5. What does 'Vim' refer to as far as vmwareapi module is concerned?
* Vim refers to VMware Virtual Infrastructure Methodology. This is not to be confused with "VIM" editor.
..
Copyright (c) 2010 Citrix Systems, Inc.
Copyright 2010 OpenStack LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
VMware ESX/ESXi Server Support for OpenStack Compute
====================================================
Introduction
------------
A module named 'vmwareapi' is added to 'nova.virt' to add support for the VMware ESX/ESXi hypervisor to OpenStack Compute (Nova). Nova can now use VMware vSphere as a compute provider.
The basic requirement is to support VMware vSphere 4.1 as a compute provider within Nova; the deployment architecture supports both ESX and ESXi. VM storage is restricted to VMFS volumes on local drives. vCenter is not required by the current design and is not currently supported; instead, Nova Compute talks directly to ESX/ESXi.
The 'vmwareapi' module is integrated with Glance, so that VM images can be streamed from the Glance server, which handles image storage and retrieval, for booting on ESX/ESXi.
The module currently supports Nova's flat networking model (FlatManager) and the VLAN networking model (VlanManager).
.. image:: images/vmwareapi_blockdiagram.jpg
System Requirements
-------------------
The following software components are required to build a cloud using OpenStack on top of ESX/ESXi server(s):
* OpenStack
* Glance Image service
* VMware ESX v4.1 or VMware ESXi(licensed) v4.1
VMware ESX Requirements
-----------------------
* ESX credentials with administration/root privileges
* Single local hard disk at the ESX host
* An ESX Virtual Machine Port Group (For Flat Networking)
* An ESX physical network adapter (For VLAN networking)
* "vSphere Web Access" must be enabled in the vSphere Client UI under Configuration->Security Profile->Firewall
Python dependencies
-------------------
* suds-0.4
* Installation procedure on Ubuntu/Debian
::
easy_install suds==0.4
Configuration flags required for nova-compute
---------------------------------------------
::
--connection_type=vmwareapi
--vmwareapi_host_ip=<VMware ESX Host IP>
--vmwareapi_host_username=<VMware ESX Username>
--vmwareapi_host_password=<VMware ESX Password>
--vmwareapi_vlan_interface=<Physical ethernet adapter name in VMware ESX host for vlan networking E.g vmnic0> [Optional, only for VLAN Networking]
Configuration flags required for nova-network
---------------------------------------------
::
--network_manager=nova.network.manager.FlatManager [or nova.network.manager.VlanManager]
--flat_network_bridge=<ESX Virtual Machine Port Group> [Optional, only for Flat Networking]
Configuration flags required for nova-console
---------------------------------------------
::
--console_manager=nova.console.vmrc_manager.ConsoleVMRCManager
--console_driver=nova.console.vmrc.VMRCSessionConsole [Optional, only for OTP (One time Passwords) as against host credentials]
Other flags
-----------
::
--image_service=nova.image.glance.GlanceImageService
--glance_host=<Glance Host>
--vmwareapi_wsdl_loc=<http://<WEB SERVER>/vimService.wsdl>
Note: Because a faulty WSDL is shipped with ESX vSphere 4.1, a working WSDL must be hosted on a web server. Follow the steps below to download the SDK:
* Go to http://www.vmware.com/support/developer/vc-sdk/
* Go to section VMware vSphere Web Services SDK 4.0
* Click "Downloads"
* Enter VMware credentials when prompted for download
* Unzip the downloaded file vi-sdk-4.0.0-xxx.zip
* Go to SDK->WSDL->vim25 and host the files "vimService.wsdl" and "vim.wsdl" on a web server
* Set the flag "--vmwareapi_wsdl_loc" to the URL "http://<WEB SERVER>/vimService.wsdl"
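Putting the preceding flag sections together, a combined nova-compute flag file might look like the following. All host addresses, credentials, and the port group name below are placeholder values for illustration::

    --connection_type=vmwareapi
    --vmwareapi_host_ip=192.168.1.100
    --vmwareapi_host_username=root
    --vmwareapi_host_password=secret
    --vmwareapi_wsdl_loc=http://192.168.1.200/vimService.wsdl
    --network_manager=nova.network.manager.FlatManager
    --flat_network_bridge=VM Network
    --image_service=nova.image.glance.GlanceImageService
    --glance_host=192.168.1.200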
Debug flag
----------
.. note::
suds logging is very verbose and turned off by default. If you need to
debug the VMware API calls, change the default_log_levels flag appropriately.
VLAN Network Manager
--------------------
VLAN network support is added through a custom network driver in the nova-compute node, "nova.network.vmwareapi_net", which uses a physical ethernet adapter on the VMware ESX/ESXi host for VLAN networking. The name of that adapter is specified via the vlan_interface flag in the nova-compute configuration.
The associated Virtual Switch is determined from the physical adapter name; in VMware ESX, only one Virtual Switch can be associated with a physical adapter.
When a VM spawn request is issued with a VLAN ID, the workflow is as follows:
1. Check that a physical adapter with the given name exists. If not, throw an error; otherwise, go to the next step.
2. Check that a Virtual Switch is associated with the physical ethernet adapter named by the vlan interface flag. If not, throw an error; otherwise, go to the next step.
3. Check whether a port group with the network bridge name exists. If not, create a port group in the Virtual Switch with the given name and VLAN ID and go to step 6; otherwise, go to the next step.
4. Check that the port group is associated with the Virtual Switch. If not, throw an error; otherwise, go to the next step.
5. Check that the port group is associated with the given VLAN ID. If not, throw an error; otherwise, go to the next step.
6. Spawn the VM using this port group as the network name for the VM.
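The workflow above can be sketched in Python. This is a minimal illustration only: the ``host`` object and its helper methods (``find_physical_adapter``, ``find_vswitch_for_adapter``, ``find_port_group``, ``create_port_group``) are hypothetical stand-ins for the actual vSphere API calls made by the driver.

```python
class VLANNetworkError(Exception):
    """Raised when a VLAN networking precondition fails."""


def ensure_port_group(host, vlan_interface, bridge_name, vlan_id):
    """Validate (or create) the ESX port group for a VLAN spawn request."""
    # Step 1: the physical adapter named by vlan_interface must exist.
    adapter = host.find_physical_adapter(vlan_interface)
    if adapter is None:
        raise VLANNetworkError("No physical adapter named %r" % vlan_interface)
    # Step 2: on ESX exactly one vSwitch is associated with a physical adapter.
    vswitch = host.find_vswitch_for_adapter(adapter)
    if vswitch is None:
        raise VLANNetworkError("No vSwitch uses adapter %r" % vlan_interface)
    # Step 3: create the port group if it does not exist yet.
    port_group = host.find_port_group(bridge_name)
    if port_group is None:
        return host.create_port_group(vswitch, bridge_name, vlan_id)
    # Steps 4 and 5: an existing port group must match the vSwitch and VLAN ID.
    if port_group.vswitch != vswitch:
        raise VLANNetworkError("Port group %r is on another vSwitch" % bridge_name)
    if port_group.vlan_id != vlan_id:
        raise VLANNetworkError("Port group %r has VLAN %s, expected %s"
                               % (bridge_name, port_group.vlan_id, vlan_id))
    return port_group
```

Step 6 is left to the caller, which spawns the VM using the returned port group's name as the VM's network name.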
Guest console Support
---------------------
| VMware VMRC console is a built-in console method providing graphical control of the VM remotely.
|
| VMRC Console types supported:
| # Host based credentials
| Not secure (Sends ESX admin credentials in clear text)
|
| # OTP (One time passwords)
| Secure, but creates multiple session entries in the DB for each OpenStack console create request.
| A console session can be used only once.
|
| Install the browser-based VMware ESX plugin/ActiveX control on the client machine to connect:
|
| Windows:-
| Internet Explorer:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-win32-x86.exe
|
| Mozilla Firefox:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-win32-x86.xpi
|
| Linux:-
| Mozilla Firefox
| 32-Bit Linux:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-linux-x86.xpi
|
| 64-Bit Linux:
| https://<VMware ESX Host>/ui/plugin/vmware-vmrc-linux-x64.xpi
|
| OpenStack Console Details:
| console_type = vmrc+credentials | vmrc+session
| host = <VMware ESX Host>
| port = <VMware ESX Port>
| password = {'vm_id': <VMware VM ID>,'username':<VMware ESX Username>, 'password':<VMware ESX Password>} //base64 + json encoded
|
| Instantiate the plugin/activex object
| # In Internet Explorer
| <object id='vmrc' classid='CLSID:B94C2238-346E-4C5E-9B36-8CC627F35574'>
| </object>
|
| # Mozilla Firefox and other browsers
| <object id='vmrc' type='application/x-vmware-vmrc;version=2.5.0.0'>
| </object>
|
| Open vmrc connection
| # Host based credentials [type=vmrc+credentials]
| <script type="text/javascript">
| var MODE_WINDOW = 2;
| var vmrc = document.getElementById('vmrc');
| vmrc.connect(<VMware ESX Host> + ':' + <VMware ESX Port>, <VMware ESX Username>, <VMware ESX Password>, '', <VMware VM ID>, MODE_WINDOW);
| </script>
|
| # OTP (One time passwords) [type=vmrc+session]
| <script type="text/javascript">
| var MODE_WINDOW = 2;
| var vmrc = document.getElementById('vmrc');
| vmrc.connectWithSession(<VMware ESX Host> + ':' + <VMware ESX Port>, <VMware VM ID>, <VMware ESX Password>, MODE_WINDOW);
| </script>
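As noted above, the console password field is "base64 + json encoded". The following is a minimal sketch of building and decoding such a blob; the field names follow the format shown above, but the exact encoding used by nova may differ in detail.

```python
import base64
import json


def encode_vmrc_password(vm_id, username, password):
    """Build a base64 + JSON encoded password blob for a VMRC console."""
    blob = {'vm_id': vm_id, 'username': username, 'password': password}
    return base64.b64encode(json.dumps(blob).encode('utf-8')).decode('ascii')


def decode_vmrc_password(encoded):
    """Recover the credential dict from the encoded blob."""
    return json.loads(base64.b64decode(encoded).decode('utf-8'))
```

Decoding an encoded blob round-trips back to the original credential dict.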
Assumptions
-----------
1. The VMware images uploaded to the image repositories have VMware Tools installed.
FAQ
---
1. What type of disk images are supported?
* Only VMware VMDKs are currently supported, and only thick disks; thin-provisioned disks are not supported.
2. How is IP address information injected into the guest?
* IP address information is injected through the 'machine.id' vmx parameter (equivalent to XenStore in XenServer). This information can be retrieved inside the guest using VMware Tools.
3. What is the guest tool?
* The guest tool is a small python script that should be run either as a service or added to system startup. This script configures networking on the guest. The guest tool is available at tools/esx/guest_tool.py
4. What type of consoles are supported?
* VMware VMRC-based consoles are supported. There are two credential options: OTP (secure, but creates multiple session entries in the DB for each OpenStack console create request) and host-based credentials (not secure, since ESX credentials are transmitted in clear text).
5. What does 'Vim' refer to as far as vmwareapi module is concerned?
* Vim refers to VMware Virtual Infrastructure Methodology. This is not to be confused with the "vim" editor.
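For FAQ item 2, the value passed through the 'machine.id' vmx parameter is a '#'-separated list of per-NIC records, each a ';'-separated field list in the order MAC;IP;Netmask;Gateway;Broadcast;DNS (the DNS field itself comma-separated), mirroring the format the guest tool parses. A minimal parser sketch:

```python
def parse_machine_id(machine_id):
    """Split a machine.id string into per-NIC dicts.

    Records are separated by '#'; fields by ';' in the order
    MAC;IP;Netmask;Gateway;Broadcast;DNS (DNS is comma-separated).
    """
    nics = []
    for record in machine_id.split('#'):
        fields = record.split(';')
        if len(fields) != 6:
            continue  # skip the empty trailing record and malformed entries
        mac, ip, netmask, gateway, broadcast, dns = (f.strip() for f in fields)
        nics.append({'mac': mac.lower(), 'ip': ip, 'netmask': netmask,
                     'gateway': gateway, 'broadcast': broadcast,
                     'dns': dns.split(',')})
    return nics
```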
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2011 Citrix Systems, Inc.
# Copyright 2011 OpenStack LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Guest tools for ESX to set up network in the guest.
On Windows we require pyWin32 installed on Python.
"""
import array
import gettext
import logging
import os
import platform
import socket
import struct
import subprocess
import sys
import time
gettext.install('nova', unicode=1)
PLATFORM_WIN = 'win32'
PLATFORM_LINUX = 'linux2'
ARCH_32_BIT = '32bit'
ARCH_64_BIT = '64bit'
NO_MACHINE_ID = 'No machine id'
# Logging
FORMAT = "%(asctime)s - %(levelname)s - %(message)s"
if sys.platform == PLATFORM_WIN:
LOG_DIR = os.path.join(os.environ.get('ALLUSERSPROFILE'), 'openstack')
elif sys.platform == PLATFORM_LINUX:
LOG_DIR = '/var/log/openstack'
else:
LOG_DIR = 'logs'
if not os.path.exists(LOG_DIR):
os.mkdir(LOG_DIR)
LOG_FILENAME = os.path.join(LOG_DIR, 'openstack-guest-tools.log')
logging.basicConfig(filename=LOG_FILENAME, format=FORMAT)
if sys.hexversion < 0x3000000:
_byte = ord # 2.x chr to integer
else:
_byte = int # 3.x byte to integer
class ProcessExecutionError:
"""Process Execution Error Class."""
def __init__(self, exit_code, stdout, stderr, cmd):
self.exit_code = exit_code
self.stdout = stdout
self.stderr = stderr
self.cmd = cmd
def __str__(self):
return str(self.exit_code)
def _bytes2int(bytes):
"""Convert bytes to int."""
intgr = 0
for byt in bytes:
intgr = (intgr << 8) + _byte(byt)
return intgr
def _parse_network_details(machine_id):
"""
Parse the machine_id to get MAC, IP, Netmask and Gateway fields per NIC.
machine_id is of the form ('NIC_record#NIC_record#', '')
Each of the NIC will have record NIC_record in the form
'MAC;IP;Netmask;Gateway;Broadcast;DNS' where ';' is field separator.
Each record is separated by '#' from next record.
"""
logging.debug(_("Received machine_id from vmtools : %s") % machine_id[0])
network_details = []
if machine_id[1].strip() == "1":
pass
else:
for machine_id_str in machine_id[0].split('#'):
network_info_list = machine_id_str.split(';')
if len(network_info_list) % 6 != 0:
break
no_grps = len(network_info_list) / 6
i = 0
while i < no_grps:
k = i * 6
network_details.append((
network_info_list[k].strip().lower(),
network_info_list[k + 1].strip(),
network_info_list[k + 2].strip(),
network_info_list[k + 3].strip(),
network_info_list[k + 4].strip(),
network_info_list[k + 5].strip().split(',')))
i += 1
logging.debug(_("NIC information from vmtools : %s") % network_details)
return network_details
def _get_windows_network_adapters():
"""Get the list of windows network adapters."""
import win32com.client
wbem_locator = win32com.client.Dispatch('WbemScripting.SWbemLocator')
wbem_service = wbem_locator.ConnectServer('.', 'root\cimv2')
wbem_network_adapters = wbem_service.InstancesOf('Win32_NetworkAdapter')
network_adapters = []
for wbem_network_adapter in wbem_network_adapters:
if wbem_network_adapter.NetConnectionStatus == 2 or \
wbem_network_adapter.NetConnectionStatus == 7:
adapter_name = wbem_network_adapter.NetConnectionID
mac_address = wbem_network_adapter.MacAddress.lower()
wbem_network_adapter_config = \
wbem_network_adapter.associators_(
'Win32_NetworkAdapterSetting',
'Win32_NetworkAdapterConfiguration')[0]
ip_address = ''
subnet_mask = ''
if wbem_network_adapter_config.IPEnabled:
ip_address = wbem_network_adapter_config.IPAddress[0]
subnet_mask = wbem_network_adapter_config.IPSubnet[0]
#wbem_network_adapter_config.DefaultIPGateway[0]
network_adapters.append({'name': adapter_name,
'mac-address': mac_address,
'ip-address': ip_address,
'subnet-mask': subnet_mask})
return network_adapters
def _get_linux_network_adapters():
"""Get the list of Linux network adapters."""
import fcntl
max_bytes = 8096
arch = platform.architecture()[0]
if arch == ARCH_32_BIT:
offset1 = 32
offset2 = 32
elif arch == ARCH_64_BIT:
offset1 = 16
offset2 = 40
else:
raise OSError(_("Unknown architecture: %s") % arch)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
names = array.array('B', '\0' * max_bytes)
outbytes = struct.unpack('iL', fcntl.ioctl(
sock.fileno(),
0x8912,
struct.pack('iL', max_bytes, names.buffer_info()[0])))[0]
adapter_names = \
[names.tostring()[n_counter:n_counter + offset1].split('\0', 1)[0]
for n_counter in xrange(0, outbytes, offset2)]
network_adapters = []
for adapter_name in adapter_names:
ip_address = socket.inet_ntoa(fcntl.ioctl(
sock.fileno(),
0x8915,
struct.pack('256s', adapter_name))[20:24])
subnet_mask = socket.inet_ntoa(fcntl.ioctl(
sock.fileno(),
0x891b,
struct.pack('256s', adapter_name))[20:24])
raw_mac_address = '%012x' % _bytes2int(fcntl.ioctl(
sock.fileno(),
0x8927,
struct.pack('256s', adapter_name))[18:24])
mac_address = ":".join([raw_mac_address[m_counter:m_counter + 2]
for m_counter in range(0, len(raw_mac_address), 2)]).lower()
network_adapters.append({'name': adapter_name,
'mac-address': mac_address,
'ip-address': ip_address,
'subnet-mask': subnet_mask})
return network_adapters
def _get_adapter_name_and_ip_address(network_adapters, mac_address):
"""Get the adapter name based on the MAC address."""
adapter_name = None
ip_address = None
for network_adapter in network_adapters:
if network_adapter['mac-address'] == mac_address.lower():
adapter_name = network_adapter['name']
ip_address = network_adapter['ip-address']
break
return adapter_name, ip_address
def _get_win_adapter_name_and_ip_address(mac_address):
"""Get Windows network adapter name."""
network_adapters = _get_windows_network_adapters()
return _get_adapter_name_and_ip_address(network_adapters, mac_address)
def _get_linux_adapter_name_and_ip_address(mac_address):
"""Get Linux network adapter name."""
network_adapters = _get_linux_network_adapters()
return _get_adapter_name_and_ip_address(network_adapters, mac_address)
def _execute(cmd_list, process_input=None, check_exit_code=True):
"""Executes the command with the list of arguments specified."""
cmd = ' '.join(cmd_list)
logging.debug(_("Executing command: '%s'") % cmd)
env = os.environ.copy()
obj = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
result = None
if process_input is not None:
result = obj.communicate(process_input)
else:
result = obj.communicate()
obj.stdin.close()
if obj.returncode:
logging.debug(_("Result was %s") % obj.returncode)
if check_exit_code and obj.returncode != 0:
(stdout, stderr) = result
raise ProcessExecutionError(exit_code=obj.returncode,
stdout=stdout,
stderr=stderr,
cmd=cmd)
time.sleep(0.1)
return result
def _windows_set_networking():
"""Set IP address for the windows VM."""
program_files = os.environ.get('PROGRAMFILES')
program_files_x86 = os.environ.get('PROGRAMFILES(X86)')
vmware_tools_bin = None
if os.path.exists(os.path.join(program_files, 'VMware', 'VMware Tools',
'vmtoolsd.exe')):
vmware_tools_bin = os.path.join(program_files, 'VMware',
'VMware Tools', 'vmtoolsd.exe')
elif os.path.exists(os.path.join(program_files, 'VMware', 'VMware Tools',
'VMwareService.exe')):
vmware_tools_bin = os.path.join(program_files, 'VMware',
'VMware Tools', 'VMwareService.exe')
elif program_files_x86 and os.path.exists(os.path.join(program_files_x86,
'VMware', 'VMware Tools',
'VMwareService.exe')):
vmware_tools_bin = os.path.join(program_files_x86, 'VMware',
'VMware Tools', 'VMwareService.exe')
if vmware_tools_bin:
cmd = ['"' + vmware_tools_bin + '"', '--cmd', 'machine.id.get']
for network_detail in _parse_network_details(_execute(cmd,
check_exit_code=False)):
mac_address, ip_address, subnet_mask, gateway, broadcast,\
dns_servers = network_detail
adapter_name, current_ip_address = \
_get_win_adapter_name_and_ip_address(mac_address)
if adapter_name and not ip_address == current_ip_address:
cmd = ['netsh', 'interface', 'ip', 'set', 'address',
'name="%s"' % adapter_name, 'source=static', ip_address,
subnet_mask, gateway, '1']
_execute(cmd)
# Windows doesn't let you manually set the broadcast address
for dns_server in dns_servers:
if dns_server:
cmd = ['netsh', 'interface', 'ip', 'add', 'dns',
'name="%s"' % adapter_name, dns_server]
_execute(cmd)
else:
logging.warn(_("VMware Tools is not installed"))
def _filter_duplicates(all_entries):
final_list = []
for entry in all_entries:
if entry and entry not in final_list:
final_list.append(entry)
return final_list
def _set_rhel_networking(network_details=None):
"""Set IPv4 network settings for RHEL distros."""
network_details = network_details or []
all_dns_servers = []
for network_detail in network_details:
mac_address, ip_address, subnet_mask, gateway, broadcast,\
dns_servers = network_detail
all_dns_servers.extend(dns_servers)
adapter_name, current_ip_address = \
_get_linux_adapter_name_and_ip_address(mac_address)
if adapter_name and not ip_address == current_ip_address:
interface_file_name = \
'/etc/sysconfig/network-scripts/ifcfg-%s' % adapter_name
# Remove file
os.remove(interface_file_name)
# Touch file
_execute(['touch', interface_file_name])
interface_file = open(interface_file_name, 'w')
interface_file.write('\nDEVICE=%s' % adapter_name)
interface_file.write('\nUSERCTL=yes')
interface_file.write('\nONBOOT=yes')
interface_file.write('\nBOOTPROTO=static')
interface_file.write('\nBROADCAST=%s' % broadcast)
interface_file.write('\nNETWORK=')
interface_file.write('\nGATEWAY=%s' % gateway)
interface_file.write('\nNETMASK=%s' % subnet_mask)
interface_file.write('\nIPADDR=%s' % ip_address)
interface_file.write('\nMACADDR=%s' % mac_address)
interface_file.close()
if all_dns_servers:
dns_file_name = "/etc/resolv.conf"
os.remove(dns_file_name)
_execute(['touch', dns_file_name])
dns_file = open(dns_file_name, 'w')
dns_file.write("; generated by OpenStack guest tools")
unique_entries = _filter_duplicates(all_dns_servers)
for dns_server in unique_entries:
dns_file.write("\nnameserver %s" % dns_server)
dns_file.close()
_execute(['/sbin/service', 'network', 'restart'])
def _set_ubuntu_networking(network_details=None):
"""Set IPv4 network settings for Ubuntu."""
network_details = network_details or []
all_dns_servers = []
interface_file_name = '/etc/network/interfaces'
# Remove file
os.remove(interface_file_name)
# Touch file
_execute(['touch', interface_file_name])
interface_file = open(interface_file_name, 'w')
for device, network_detail in enumerate(network_details):
mac_address, ip_address, subnet_mask, gateway, broadcast,\
dns_servers = network_detail
all_dns_servers.extend(dns_servers)
adapter_name, current_ip_address = \
_get_linux_adapter_name_and_ip_address(mac_address)
if adapter_name:
interface_file.write('\nauto %s' % adapter_name)
interface_file.write('\niface %s inet static' % adapter_name)
interface_file.write('\nbroadcast %s' % broadcast)
interface_file.write('\ngateway %s' % gateway)
interface_file.write('\nnetmask %s' % subnet_mask)
interface_file.write('\naddress %s\n' % ip_address)
logging.debug(_("Successfully configured NIC %d with "
"NIC info %s") % (device, network_detail))
interface_file.close()
if all_dns_servers:
dns_file_name = "/etc/resolv.conf"
os.remove(dns_file_name)
_execute(['touch', dns_file_name])
dns_file = open(dns_file_name, 'w')
dns_file.write("; generated by OpenStack guest tools")
unique_entries = _filter_duplicates(all_dns_servers)
for dns_server in unique_entries:
dns_file.write("\nnameserver %s" % dns_server)
dns_file.close()
logging.debug(_("Restarting networking....\n"))
_execute(['/etc/init.d/networking', 'restart'])
def _linux_set_networking():
"""Set IP address for the Linux VM."""
vmware_tools_bin = None
if os.path.exists('/usr/sbin/vmtoolsd'):
vmware_tools_bin = '/usr/sbin/vmtoolsd'
elif os.path.exists('/usr/bin/vmtoolsd'):
vmware_tools_bin = '/usr/bin/vmtoolsd'
elif os.path.exists('/usr/sbin/vmware-guestd'):
vmware_tools_bin = '/usr/sbin/vmware-guestd'
elif os.path.exists('/usr/bin/vmware-guestd'):
vmware_tools_bin = '/usr/bin/vmware-guestd'
if vmware_tools_bin:
cmd = [vmware_tools_bin, '--cmd', 'machine.id.get']
network_details = _parse_network_details(_execute(cmd,
check_exit_code=False))
# TODO(sateesh): For other distros like suse, debian, BSD, etc.
if(platform.dist()[0] == 'Ubuntu'):
_set_ubuntu_networking(network_details)
elif (platform.dist()[0] == 'redhat'):
_set_rhel_networking(network_details)
else:
logging.warn(_("Distro '%s' not supported") % platform.dist()[0])
else:
logging.warn(_("VMware Tools is not installed"))
if __name__ == '__main__':
pltfrm = sys.platform
if pltfrm == PLATFORM_WIN:
_windows_set_networking()
elif pltfrm == PLATFORM_LINUX:
_linux_set_networking()
else:
raise NotImplementedError(_("Platform not implemented: '%s'") % pltfrm)
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2011 Citrix Systems, Inc.
# Copyright 2011 OpenStack LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Guest tools for ESX to set up network in the guest.
On Windows we require pyWin32 installed on Python.
"""
import array
import gettext
import logging
import os
import platform
import socket
import struct
import subprocess
import sys
import time
gettext.install('nova', unicode=1)
PLATFORM_WIN = 'win32'
PLATFORM_LINUX = 'linux2'
ARCH_32_BIT = '32bit'
ARCH_64_BIT = '64bit'
NO_MACHINE_ID = 'No machine id'
# Logging
FORMAT = "%(asctime)s - %(levelname)s - %(message)s"
if sys.platform == PLATFORM_WIN:
LOG_DIR = os.path.join(os.environ.get('ALLUSERSPROFILE'), 'openstack')
elif sys.platform == PLATFORM_LINUX:
LOG_DIR = '/var/log/openstack'
else:
LOG_DIR = 'logs'
if not os.path.exists(LOG_DIR):
os.mkdir(LOG_DIR)
LOG_FILENAME = os.path.join(LOG_DIR, 'openstack-guest-tools.log')
logging.basicConfig(filename=LOG_FILENAME, format=FORMAT)
if sys.hexversion < 0x3000000:
_byte = ord # 2.x chr to integer
else:
_byte = int # 3.x byte to integer
class ProcessExecutionError:
"""Process Execution Error Class."""
def __init__(self, exit_code, stdout, stderr, cmd):
self.exit_code = exit_code
self.stdout = stdout
self.stderr = stderr
self.cmd = cmd
def __str__(self):
return str(self.exit_code)
def _bytes2int(bytes):
"""Convert bytes to int."""
intgr = 0
for byt in bytes:
intgr = (intgr << 8) + _byte(byt)
return intgr
def _parse_network_details(machine_id):
"""
Parse the machine_id to get MAC, IP, Netmask and Gateway fields per NIC.
machine_id is of the form ('NIC_record#NIC_record#', '')
Each of the NIC will have record NIC_record in the form
'MAC;IP;Netmask;Gateway;Broadcast;DNS' where ';' is field separator.
Each record is separated by '#' from next record.
"""
logging.debug(_("Received machine_id from vmtools : %s") % machine_id[0])
network_details = []
if machine_id[1].strip() == "1":
pass
else:
for machine_id_str in machine_id[0].split('#'):
network_info_list = machine_id_str.split(';')
if len(network_info_list) % 6 != 0:
break
no_grps = len(network_info_list) / 6
i = 0
while i < no_grps:
k = i * 6
network_details.append((
network_info_list[k].strip().lower(),
network_info_list[k + 1].strip(),
network_info_list[k + 2].strip(),
network_info_list[k + 3].strip(),
network_info_list[k + 4].strip(),
network_info_list[k + 5].strip().split(',')))
i += 1
logging.debug(_("NIC information from vmtools : %s") % network_details)
return network_details
def _get_windows_network_adapters():
"""Get the list of windows network adapters."""
import win32com.client
wbem_locator = win32com.client.Dispatch('WbemScripting.SWbemLocator')
wbem_service = wbem_locator.ConnectServer('.', 'root\cimv2')
wbem_network_adapters = wbem_service.InstancesOf('Win32_NetworkAdapter')
network_adapters = []
for wbem_network_adapter in wbem_network_adapters:
if wbem_network_adapter.NetConnectionStatus == 2 or \
wbem_network_adapter.NetConnectionStatus == 7:
adapter_name = wbem_network_adapter.NetConnectionID
mac_address = wbem_network_adapter.MacAddress.lower()
wbem_network_adapter_config = \
wbem_network_adapter.associators_(
'Win32_NetworkAdapterSetting',
'Win32_NetworkAdapterConfiguration')[0]
ip_address = ''
subnet_mask = ''
if wbem_network_adapter_config.IPEnabled:
ip_address = wbem_network_adapter_config.IPAddress[0]
subnet_mask = wbem_network_adapter_config.IPSubnet[0]
#wbem_network_adapter_config.DefaultIPGateway[0]
network_adapters.append({'name': adapter_name,
'mac-address': mac_address,
'ip-address': ip_address,
'subnet-mask': subnet_mask})
return network_adapters
def _get_linux_network_adapters():
"""Get the list of Linux network adapters."""
import fcntl
max_bytes = 8096
arch = platform.architecture()[0]
if arch == ARCH_32_BIT:
offset1 = 32
offset2 = 32
elif arch == ARCH_64_BIT:
offset1 = 16
offset2 = 40
else:
raise OSError(_("Unknown architecture: %s") % arch)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
names = array.array('B', '\0' * max_bytes)
outbytes = struct.unpack('iL', fcntl.ioctl(
sock.fileno(),
0x8912,
struct.pack('iL', max_bytes, names.buffer_info()[0])))[0]
adapter_names = \
[names.tostring()[n_counter:n_counter + offset1].split('\0', 1)[0]
for n_counter in xrange(0, outbytes, offset2)]
network_adapters = []
for adapter_name in adapter_names:
ip_address = socket.inet_ntoa(fcntl.ioctl(
sock.fileno(),
0x8915,
struct.pack('256s', adapter_name))[20:24])
subnet_mask = socket.inet_ntoa(fcntl.ioctl(
sock.fileno(),
0x891b,
struct.pack('256s', adapter_name))[20:24])
raw_mac_address = '%012x' % _bytes2int(fcntl.ioctl(
sock.fileno(),
0x8927,
struct.pack('256s', adapter_name))[18:24])
mac_address = ":".join([raw_mac_address[m_counter:m_counter + 2]
for m_counter in range(0, len(raw_mac_address), 2)]).lower()
network_adapters.append({'name': adapter_name,
'mac-address': mac_address,
'ip-address': ip_address,
'subnet-mask': subnet_mask})
return network_adapters


def _get_adapter_name_and_ip_address(network_adapters, mac_address):
"""Get the adapter name based on the MAC address."""
adapter_name = None
ip_address = None
for network_adapter in network_adapters:
if network_adapter['mac-address'] == mac_address.lower():
adapter_name = network_adapter['name']
ip_address = network_adapter['ip-address']
break
return adapter_name, ip_address


def _get_win_adapter_name_and_ip_address(mac_address):
"""Get Windows network adapter name."""
network_adapters = _get_windows_network_adapters()
return _get_adapter_name_and_ip_address(network_adapters, mac_address)


def _get_linux_adapter_name_and_ip_address(mac_address):
"""Get Linux network adapter name."""
network_adapters = _get_linux_network_adapters()
return _get_adapter_name_and_ip_address(network_adapters, mac_address)


def _execute(cmd_list, process_input=None, check_exit_code=True):
"""Executes the command with the list of arguments specified."""
cmd = ' '.join(cmd_list)
logging.debug(_("Executing command: '%s'") % cmd)
env = os.environ.copy()
obj = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
result = None
if process_input is not None:
result = obj.communicate(process_input)
else:
result = obj.communicate()
obj.stdin.close()
if obj.returncode:
logging.debug(_("Result was %s") % obj.returncode)
if check_exit_code and obj.returncode != 0:
(stdout, stderr) = result
raise ProcessExecutionError(exit_code=obj.returncode,
stdout=stdout,
stderr=stderr,
cmd=cmd)
    # Give the child process a short grace period to release resources
    time.sleep(0.1)
return result


def _windows_set_networking():
"""Set IP address for the windows VM."""
program_files = os.environ.get('PROGRAMFILES')
program_files_x86 = os.environ.get('PROGRAMFILES(X86)')
vmware_tools_bin = None
    # PROGRAMFILES should always be set on Windows, but guard against a
    # missing environment variable to avoid os.path.join(None, ...)
    if program_files and os.path.exists(
            os.path.join(program_files, 'VMware', 'VMware Tools',
                         'vmtoolsd.exe')):
        vmware_tools_bin = os.path.join(program_files, 'VMware',
                                        'VMware Tools', 'vmtoolsd.exe')
    elif program_files and os.path.exists(
            os.path.join(program_files, 'VMware', 'VMware Tools',
                         'VMwareService.exe')):
        vmware_tools_bin = os.path.join(program_files, 'VMware',
                                        'VMware Tools', 'VMwareService.exe')
    elif program_files_x86 and os.path.exists(
            os.path.join(program_files_x86, 'VMware', 'VMware Tools',
                         'VMwareService.exe')):
        vmware_tools_bin = os.path.join(program_files_x86, 'VMware',
                                        'VMware Tools', 'VMwareService.exe')
if vmware_tools_bin:
cmd = ['"' + vmware_tools_bin + '"', '--cmd', 'machine.id.get']
for network_detail in _parse_network_details(_execute(cmd,
check_exit_code=False)):
mac_address, ip_address, subnet_mask, gateway, broadcast,\
dns_servers = network_detail
adapter_name, current_ip_address = \
_get_win_adapter_name_and_ip_address(mac_address)
            if adapter_name and ip_address != current_ip_address:
cmd = ['netsh', 'interface', 'ip', 'set', 'address',
'name="%s"' % adapter_name, 'source=static', ip_address,
subnet_mask, gateway, '1']
_execute(cmd)
# Windows doesn't let you manually set the broadcast address
for dns_server in dns_servers:
if dns_server:
cmd = ['netsh', 'interface', 'ip', 'add', 'dns',
'name="%s"' % adapter_name, dns_server]
_execute(cmd)
else:
logging.warn(_("VMware Tools is not installed"))


def _filter_duplicates(all_entries):
final_list = []
for entry in all_entries:
if entry and entry not in final_list:
final_list.append(entry)
return final_list


def _set_rhel_networking(network_details=None):
"""Set IPv4 network settings for RHEL distros."""
network_details = network_details or []
all_dns_servers = []
for network_detail in network_details:
mac_address, ip_address, subnet_mask, gateway, broadcast,\
dns_servers = network_detail
all_dns_servers.extend(dns_servers)
adapter_name, current_ip_address = \
_get_linux_adapter_name_and_ip_address(mac_address)
        if adapter_name and ip_address != current_ip_address:
            interface_file_name = \
                '/etc/sysconfig/network-scripts/ifcfg-%s' % adapter_name
            # Open with 'w' to create the file or truncate an existing one;
            # removing and re-touching it first is unnecessary and raises
            # OSError if the file does not already exist
            interface_file = open(interface_file_name, 'w')
interface_file.write('\nDEVICE=%s' % adapter_name)
interface_file.write('\nUSERCTL=yes')
interface_file.write('\nONBOOT=yes')
interface_file.write('\nBOOTPROTO=static')
interface_file.write('\nBROADCAST=%s' % broadcast)
interface_file.write('\nNETWORK=')
interface_file.write('\nGATEWAY=%s' % gateway)
interface_file.write('\nNETMASK=%s' % subnet_mask)
interface_file.write('\nIPADDR=%s' % ip_address)
interface_file.write('\nMACADDR=%s' % mac_address)
interface_file.close()
if all_dns_servers:
        dns_file_name = "/etc/resolv.conf"
        # Open with 'w' to truncate and rewrite resolv.conf in place
        dns_file = open(dns_file_name, 'w')
dns_file.write("; generated by OpenStack guest tools")
unique_entries = _filter_duplicates(all_dns_servers)
for dns_server in unique_entries:
dns_file.write("\nnameserver %s" % dns_server)
dns_file.close()
_execute(['/sbin/service', 'network', 'restart'])


def _set_ubuntu_networking(network_details=None):
"""Set IPv4 network settings for Ubuntu."""
network_details = network_details or []
all_dns_servers = []
    interface_file_name = '/etc/network/interfaces'
    # Open with 'w' to create the file or truncate an existing one;
    # removing and re-touching it first is unnecessary and raises
    # OSError if the file does not already exist
    interface_file = open(interface_file_name, 'w')
for device, network_detail in enumerate(network_details):
mac_address, ip_address, subnet_mask, gateway, broadcast,\
dns_servers = network_detail
all_dns_servers.extend(dns_servers)
adapter_name, current_ip_address = \
_get_linux_adapter_name_and_ip_address(mac_address)
if adapter_name:
interface_file.write('\nauto %s' % adapter_name)
interface_file.write('\niface %s inet static' % adapter_name)
interface_file.write('\nbroadcast %s' % broadcast)
interface_file.write('\ngateway %s' % gateway)
interface_file.write('\nnetmask %s' % subnet_mask)
interface_file.write('\naddress %s\n' % ip_address)
logging.debug(_("Successfully configured NIC %d with "
"NIC info %s") % (device, network_detail))
interface_file.close()
if all_dns_servers:
        dns_file_name = "/etc/resolv.conf"
        # Open with 'w' to truncate and rewrite resolv.conf in place
        dns_file = open(dns_file_name, 'w')
dns_file.write("; generated by OpenStack guest tools")
unique_entries = _filter_duplicates(all_dns_servers)
for dns_server in unique_entries:
dns_file.write("\nnameserver %s" % dns_server)
dns_file.close()
    logging.debug(_("Restarting networking..."))
_execute(['/etc/init.d/networking', 'restart'])


def _linux_set_networking():
"""Set IP address for the Linux VM."""
vmware_tools_bin = None
if os.path.exists('/usr/sbin/vmtoolsd'):
vmware_tools_bin = '/usr/sbin/vmtoolsd'
elif os.path.exists('/usr/bin/vmtoolsd'):
vmware_tools_bin = '/usr/bin/vmtoolsd'
elif os.path.exists('/usr/sbin/vmware-guestd'):
vmware_tools_bin = '/usr/sbin/vmware-guestd'
elif os.path.exists('/usr/bin/vmware-guestd'):
vmware_tools_bin = '/usr/bin/vmware-guestd'
if vmware_tools_bin:
cmd = [vmware_tools_bin, '--cmd', 'machine.id.get']
network_details = _parse_network_details(_execute(cmd,
check_exit_code=False))
# TODO(sateesh): For other distros like suse, debian, BSD, etc.
        if platform.dist()[0] == 'Ubuntu':
            _set_ubuntu_networking(network_details)
        elif platform.dist()[0] == 'redhat':
            _set_rhel_networking(network_details)
else:
logging.warn(_("Distro '%s' not supported") % platform.dist()[0])
else:
logging.warn(_("VMware Tools is not installed"))


if __name__ == '__main__':
pltfrm = sys.platform
if pltfrm == PLATFORM_WIN:
_windows_set_networking()
elif pltfrm == PLATFORM_LINUX:
_linux_set_networking()
else:
raise NotImplementedError(_("Platform not implemented: '%s'") % pltfrm)