Removing duplicate installation docs and adding flag file information, plus pointing to docs.openstack.org for Admin-audience docs

This commit is contained in:
Anne Gentle 2011-02-21 14:27:37 -06:00
parent bd0ca93866
commit 3392f6b4b0
24 changed files with 24 additions and 2178 deletions


@ -1,57 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _binaries:
Nova Daemons
=============
The configuration of these binaries relies on "flagfiles" using the Google
gflags package::
$ nova-xxxxx --flagfile flagfile
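A flagfile is just a plain text file listing one flag per line. A minimal sketch (the flags shown are examples used elsewhere in these docs; the ``/tmp`` path is illustrative only):

```shell
# Write a minimal example flagfile (flags and path are illustrative)
cat > /tmp/nova-example.conf << 'EOF'
--verbose
--network_manager=nova.network.manager.FlatManager
EOF
# A daemon would then be launched as:
#   nova-network --flagfile=/tmp/nova-example.conf
```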
The binaries can all run on the same machine or be spread out amongst multiple boxes in a large deployment.
nova-api
--------
The nova-api daemon receives XML requests and sends them to the rest of the system. It is a WSGI app that routes and authenticates requests. It supports the EC2 and OpenStack APIs.
nova-objectstore
----------------
Nova objectstore is an ultra-simple file-based storage system for images that replicates most of the S3 API. It will soon be replaced with Glance and a simple image manager.
nova-compute
------------
Nova compute is responsible for managing virtual machines. It loads a Service object which exposes the public methods on ComputeManager via rpc.
nova-volume
-----------
Nova volume is responsible for managing attachable block storage devices. It loads a Service object which exposes the public methods on VolumeManager via rpc.
nova-network
------------
Nova network is responsible for managing floating and fixed IPs, DHCP, bridging, and VLANs. It loads a Service object which exposes the public methods on one of the subclasses of NetworkManager. Different networking strategies are as simple as changing the network_manager flag::
$ nova-network --network_manager=nova.network.manager.FlatManager
IMPORTANT: Make sure that you also set the network_manager on nova-api and nova-compute, since they make some calls to the network manager in-process instead of through RPC. More information on the interactions between services, managers, and drivers can be found :ref:`here <service_manager_driver>`.


@ -1,88 +0,0 @@
Installation on other distros (like Debian, Fedora, or CentOS)
==============================================================
Feel free to add additional notes for additional distributions.
Nova installation on CentOS 5.5
-------------------------------
These are notes for installing OpenStack Compute on CentOS 5.5 and will be updated, but they are NOT final. Please test for accuracy and edit as you see fit.
The principal bottleneck for running Nova on CentOS is Python 2.6: Nova is written in Python 2.6, while CentOS 5.5 comes with Python 2.4. We cannot update Python system-wide because some core utilities (like yum) depend on Python 2.4. Also, very few Python 2.6 modules are available in the CentOS/EPEL repos.
Pre-reqs
--------
Add the euca2ools and EPEL repos first::
cat >/etc/yum.repos.d/euca2ools.repo << EUCA_REPO_CONF_EOF
[eucalyptus]
name=euca2ools
baseurl=http://www.eucalyptussoftware.com/downloads/repo/euca2ools/1.3.1/yum/centos/
enabled=1
gpgcheck=0
EUCA_REPO_CONF_EOF
::
rpm -Uvh 'http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm'
Now install python2.6, kvm, and a few other libraries through yum::
yum -y install dnsmasq vblade kpartx kvm gawk iptables ebtables bzr screen euca2ools curl rabbitmq-server gcc gcc-c++ autoconf automake swig openldap openldap-servers nginx python26 python26-devel python26-distribute git openssl-devel python26-tools mysql-server qemu kmod-kvm libxml2 libxslt libxslt-devel mysql-devel
Then download the latest aoetools and build (and install) them. Check for the latest version on SourceForge; the exact URL will change if there is a new release::
wget -c http://sourceforge.net/projects/aoetools/files/aoetools/32/aoetools-32.tar.gz/download
tar -zxvf aoetools-32.tar.gz
cd aoetools-32
make
make install
Add the udev rules for aoetools::
cat > /etc/udev/rules.d/60-aoe.rules << AOE_RULES_EOF
SUBSYSTEM=="aoe", KERNEL=="discover", NAME="etherd/%k", GROUP="disk", MODE="0220"
SUBSYSTEM=="aoe", KERNEL=="err", NAME="etherd/%k", GROUP="disk", MODE="0440"
SUBSYSTEM=="aoe", KERNEL=="interfaces", NAME="etherd/%k", GROUP="disk", MODE="0220"
SUBSYSTEM=="aoe", KERNEL=="revalidate", NAME="etherd/%k", GROUP="disk", MODE="0220"
# aoe block devices
KERNEL=="etherd*", NAME="%k", GROUP="disk"
AOE_RULES_EOF
Load the kernel modules::
modprobe aoe
::
modprobe kvm
Now install the Python modules using easy_install-2.6; this ensures the installations are done against Python 2.6::
easy_install-2.6 twisted sqlalchemy mox greenlet carrot daemon eventlet tornado IPy routes lxml MySQL-python
python-gflags needs to be downloaded and installed manually; use these commands (check the exact URL for newer releases):
::
wget -c "http://python-gflags.googlecode.com/files/python-gflags-1.4.tar.gz"
tar -zxvf python-gflags-1.4.tar.gz
cd python-gflags-1.4
python2.6 setup.py install
cd ..
Do the same for the python2.6-libxml2 module; notice the --with-python and --prefix flags. --with-python ensures we are building against Python 2.6 (otherwise it will build against Python 2.4, which is the default)::
wget -c "ftp://xmlsoft.org/libxml2/libxml2-2.7.3.tar.gz"
tar -zxvf libxml2-2.7.3.tar.gz
cd libxml2-2.7.3
./configure --with-python=/usr/bin/python26 --prefix=/usr
make all
make install
cd python
python2.6 setup.py install
cd ..
Once you've done this, continue at Step 3 here: :doc:`../single.node.install`


@ -1,40 +0,0 @@
Installing on Ubuntu 10.04 (Lucid)
==================================
Step 1: Get the code
--------------------
Grab the latest code from launchpad:
::
bzr clone lp:nova
Here's a script you can use to install (and then run) Nova on Ubuntu or Debian (when using Debian, edit nova.sh to set USE_PPA=0):
.. todo:: give a link to a stable releases page
Step 2: Install dependencies
----------------------------
Nova requires RabbitMQ for messaging, so install that first.
*Note:* You must have sudo installed to run these commands as shown here.
::
sudo apt-get install rabbitmq-server
You'll see messages starting with "Reading package lists... Done" and you must confirm by typing Y that you want to continue.
If you're running on Ubuntu 10.04, you'll need to install Twisted and python-gflags, which are included in the OpenStack PPA.
::
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:nova-core/trunk
sudo apt-get update
sudo apt-get install python-twisted python-gflags
Once you've done this, continue at Step 3 here: :doc:`../single.node.install`


@ -1,41 +0,0 @@
Installing on Ubuntu 10.10 (Maverick)
=====================================
Single Machine Installation (Ubuntu 10.10)
While we wouldn't expect you to put OpenStack Compute into production on a non-LTS version of Ubuntu, these instructions are up-to-date with the latest version of Ubuntu.
Make sure you are running Ubuntu 10.10 so that the packages will be available. This install requires more than 70 MB of free disk space.
These instructions are based on Soren Hansen's blog entry, Openstack on Maverick. A script is in progress as well.
Step 1: Install required prerequisites
--------------------------------------
Nova requires RabbitMQ for messaging and Redis for storing state (for now), so we'll install these first::
sudo apt-get install rabbitmq-server redis-server
You'll see messages starting with "Reading package lists... Done" and you must confirm by typing Y that you want to continue.
Step 2: Install Nova packages available in Maverick Meerkat
-----------------------------------------------------------
Type or copy/paste the following lines to get the packages that you use to run OpenStack Compute::
sudo apt-get install python-nova
sudo apt-get install nova-api nova-objectstore nova-compute nova-scheduler nova-network euca2ools unzip
You'll see messages starting with "Reading package lists... Done" and you must confirm by typing Y that you want to continue. This operation may take a while as many dependent packages will be installed. Note: there is a dependency problem with python-nova, which can be worked around by installing python-nova first.
When the installation is complete, you'll see the following lines confirming::
Adding system user `nova' (UID 106) ...
Adding new user `nova' (UID 106) with group `nogroup' ...
Not creating home directory `/var/lib/nova'.
Setting up nova-scheduler (0.9.1~bzr331-0ubuntu2) ...
* Starting nova scheduler nova-scheduler
WARNING:root:Starting scheduler node
...done.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Processing triggers for python-support ...
Once you've done this, continue at Step 3 here: :doc:`../single.node.install`


@ -1,49 +0,0 @@
Euca2ools
=========
Nova is compatible with most of the euca2ools command line utilities. Both Administrators and Users will find these tools helpful for day-to-day administration.
* euca-add-group
* euca-delete-bundle
* euca-describe-instances
* euca-register
* euca-add-keypair
* euca-delete-group
* euca-describe-keypairs
* euca-release-address
* euca-allocate-address
* euca-delete-keypair
* euca-describe-regions
* euca-reset-image-attribute
* euca-associate-address
* euca-delete-snapshot
* euca-describe-snapshots
* euca-revoke
* euca-attach-volume
* euca-delete-volume
* euca-describe-volumes
* euca-run-instances
* euca-authorize
* euca-deregister
* euca-detach-volume
* euca-terminate-instances
* euca-bundle-image
* euca-describe-addresses
* euca-disassociate-address
* euca-unbundle
* euca-bundle-vol
* euca-describe-availability-zones
* euca-download-bundle
* euca-upload-bundle
* euca-confirm-product-instance
* euca-describe-groups
* euca-get-console-output
* euca-version
* euca-create-snapshot
* euca-describe-image-attribute
* euca-modify-image-attribute
* euca-create-volume
* euca-describe-images
* euca-reboot-instances


@ -1,23 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Flags and Flagfiles
===================
* python-gflags
* flagfiles
* list of flags by component (see concepts list)


@ -1,167 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Getting Started with Nova
=========================
This code base is continually changing, so dependencies also change. If you
encounter any problems, see the :doc:`../community` page.
The `contrib/nova.sh` script should be kept up to date, and may be a good
resource to review when debugging.
The purpose of this document is to get a system installed that you can use to
test your setup assumptions. Working from this base installation you can
tweak configurations and work with different flags to monitor interaction with
your hardware, network, and other factors that will allow you to determine
suitability for your deployment. After following this setup method, you should
be able to experiment with different managers, drivers, and flags to get the
best performance.
Dependencies
------------
Related servers we rely on
* **RabbitMQ**: messaging queue, used for all communication between components
Optional servers
* **OpenLDAP**: By default, the auth server uses the RDBMS-backed datastore by
setting FLAGS.auth_driver to `nova.auth.dbdriver.DbDriver`. But OpenLDAP
(or LDAP) could be configured by specifying `nova.auth.ldapdriver.LdapDriver`.
There is a script in the sources (`nova/auth/slap.sh`) to install a very basic
OpenLDAP server on Ubuntu.
* **Redis**: There is a fake LDAP auth driver
`nova.auth.ldapdriver.FakeLdapDriver` that is backed by Redis. This was
created for testing the LDAP implementation on systems that don't have an easy
means to install LDAP.
* **MySQL**: Either MySQL or another database supported by sqlalchemy needs to
be available. Currently, only sqlite3 and MySQL have been tested.
Python libraries that we use (from pip-requires):
.. literalinclude:: ../../../tools/pip-requires
Other libraries:
* **XenAPI**: Needed only for Xen Cloud Platform or XenServer support. Available
from http://wiki.xensource.com/xenwiki/XCP_SDK or
http://community.citrix.com/cdn/xs/sdks.
External unix tools that are required:
* iptables
* ebtables
* gawk
* curl
* kvm
* libvirt
* dnsmasq
* vlan
* open-iscsi and iscsitarget (if you use iscsi volumes)
* aoetools and vblade-persist (if you use aoe-volumes)
Nova uses cutting-edge versions of many packages. There are Ubuntu packages in
the nova-core trunk PPA. You can add this PPA to your sources list on an
Ubuntu machine with the following commands::
sudo apt-get install -y python-software-properties
sudo add-apt-repository ppa:nova-core/trunk
Recommended
-----------
* euca2ools: Python implementation of the AWS ec2-tools and ami tools
* build tornado to use C module for evented section
Installation
--------------
You can install from packages for your particular Linux distribution if they are
available. Otherwise you can install from source by checking out the source
files from the `Nova Source Code Repository <http://code.launchpad.net/nova>`_
and running::
python setup.py install
Configuration
---------------
Configuring the host system
~~~~~~~~~~~~~~~~~~~~~~~~~~~
As you read through the Administration Guide you will notice configuration hints
inline with documentation on the subsystem you are configuring. Presented in
this "Getting Started with Nova" document, we only provide what you need to
get started as quickly as possible. For a more detailed description of system
configuration, start reading through :doc:`multi.node.install`.
* Create a volume group (you can use an actual disk for the volume group as
well)::
# This creates a 1GB file to create volumes out of
dd if=/dev/zero of=MY_FILE_PATH bs=100M count=10
losetup --show -f MY_FILE_PATH
# replace /dev/loop0 below with whatever losetup returns
# nova-volumes is the default for the --volume_group flag
vgcreate nova-volumes /dev/loop0
Configuring Nova
~~~~~~~~~~~~~~~~
Configuration of the entire system is performed through python-gflags. The
best way to track configuration is through the use of a flagfile.
A flagfile is specified with the ``--flagfile=FILEPATH`` argument to the binary
when you launch it. Flagfiles for nova are typically stored in
``/etc/nova/nova.conf``, and flags specific to a certain program are stored in
``/etc/nova/nova-COMMAND.conf``. Each configuration file can include another
flagfile, so typically a file like ``nova-manage.conf`` would have as its first
line ``--flagfile=/etc/nova/nova.conf`` to load the common flags before
specifying overrides or additional options.
A sample configuration to test the system follows::
--verbose
--nodaemon
--auth_driver=nova.auth.dbdriver.DbDriver
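The include mechanism described above can be sketched as follows (written under ``/tmp`` so it runs unprivileged; real deployments use the ``/etc/nova`` paths):

```shell
# Sketch of the flagfile include convention (paths are illustrative)
mkdir -p /tmp/etc-nova
# Common flags shared by all services
cat > /tmp/etc-nova/nova.conf << 'EOF'
--verbose
--auth_driver=nova.auth.dbdriver.DbDriver
EOF
# Per-command file: load the common flags first, then add overrides
cat > /tmp/etc-nova/nova-manage.conf << 'EOF'
--flagfile=/tmp/etc-nova/nova.conf
--nodaemon
EOF
head -n 1 /tmp/etc-nova/nova-manage.conf
# -> --flagfile=/tmp/etc-nova/nova.conf
```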
Running
---------
There are many parts to the nova system, each with a specific function. They
are built to be highly available, so there are many configurations they can be
run in (i.e., on many machines, many listeners per machine, etc.). This part
of the guide only gets you started quickly, to learn about HA options, see
:doc:`multi.node.install`.
Launch supporting services
* rabbitmq
* redis (optional)
* mysql (optional)
* openldap (optional)
Launch nova components, each should have ``--flagfile=/etc/nova/nova.conf``
* nova-api
* nova-compute
* nova-objectstore
* nova-volume
* nova-scheduler
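As a sketch, each component is launched the same way, pointing at the common flagfile (illustrative only; requires an installed Nova):

```shell
# Not runnable standalone -- requires the Nova binaries on PATH:
#   nova-api --flagfile=/etc/nova/nova.conf
#   nova-compute --flagfile=/etc/nova/nova.conf
#   nova-objectstore --flagfile=/etc/nova/nova.conf
#   nova-volume --flagfile=/etc/nova/nova.conf
#   nova-scheduler --flagfile=/etc/nova/nova.conf
```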


@ -1,91 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Administration Guide
====================
This guide describes the basics of running and managing Nova.
Running the Cloud
-----------------
The fastest way to get a test cloud running is by following the directions in the :doc:`../quickstart`.
Nova's cloud works via the interaction of a series of daemon processes that reside persistently on the host machine(s). Fortunately, the :doc:`../quickstart` process launches sample versions of all these daemons for you. Once you are familiar with basic Nova usage, you can learn more about daemons by reading :doc:`../service.architecture` and :doc:`binaries`.
Administration Utilities
------------------------
There are two main tools that a system administrator will find useful to manage their Nova cloud:
.. toctree::
:maxdepth: 1
nova.manage
euca2ools
The nova-manage command may only be run by users with admin privileges. Commands for euca2ools can be used by all users, though specific commands may be restricted by Role Based Access Control. You can read more about creating and managing users in :doc:`managing.users`.
User and Resource Management
----------------------------
The nova-manage and euca2ools commands provide the basic interface to perform a broad range of administration functions. In this section, you can read more about how to accomplish specific administration tasks.
For background on the core objects referenced in this section, see :doc:`../object.model`
.. toctree::
:maxdepth: 1
managing.users
managing.projects
managing.instances
managing.images
managing.volumes
managing.networks
Deployment
----------
For a starting multi-node architecture, you would start with two nodes - a cloud controller node and a compute node. The cloud controller node contains the nova- services plus the Nova database. The compute node installs all the nova-services but then refers to the database installation, which is hosted by the cloud controller node. Ensure that the nova.conf file is identical on each node. If you find performance issues not related to database reads or writes, but due to the messaging queue backing up, you could add additional messaging services (rabbitmq).
.. toctree::
:maxdepth: 1
multi.node.install
dbsync
Networking
^^^^^^^^^^
.. toctree::
:maxdepth: 1
multi.node.install
network.vlan.rst
network.flat.rst
Advanced Topics
---------------
.. toctree::
:maxdepth: 1
flags
monitoring


@ -1,21 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Managing Images
===============
.. todo:: Put info on managing images here!


@ -1,59 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Managing Instances
==================
Keypairs
--------
Images can be shared by many users, so it is dangerous to put passwords into the images. Nova therefore supports injecting ssh keys into instances before they are booted. This allows a user to login to the instances that he or she creates securely. Generally the first thing that a user does when using the system is create a keypair. Nova generates a public and private key pair, and sends the private key to the user. The public key is stored so that it can be injected into instances.
Keypairs are created through the api. They can be created on the command line using the euca2ools script euca-add-keypair. Refer to the man page for the available options. Example usage::
euca-add-keypair test > test.pem
chmod 600 test.pem
euca-run-instances -k test -t m1.tiny ami-tiny
# wait for boot
ssh -i test.pem root@ip.of.instance
Basic Management
----------------
Instance management can be accomplished with euca commands:
To run an instance:
::
euca-run-instances
To terminate an instance:
::
euca-terminate-instances
To reboot an instance:
::
euca-reboot-instances
See the euca2ools documentation for more information.


@ -1,70 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
Overview Sections Copyright 2010-2011 Citrix
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Networking Overview
===================
In Nova, users organize their cloud resources in projects. A Nova project consists of a number of VM instances created by a user. For each VM instance, Nova assigns to it a private IP address. (Currently, Nova only supports Linux bridge networking that allows the virtual interfaces to connect to the outside network through the physical interface. Other virtual network technologies, such as Open vSwitch, could be supported in the future.) The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network.
Nova Network Strategies
-----------------------
Currently, Nova supports three kinds of networks, implemented in three "Network Manager" types respectively: Flat Network Manager, Flat DHCP Network Manager, and VLAN Network Manager. The three kinds of networks can co-exist in a cloud system. However, the scheduler for selecting the type of network for a given project is not yet implemented. Here is a brief description of each of the different network strategies, with a focus on the VLAN Manager in a separate section.
Read more about Nova network strategies here:
.. toctree::
:maxdepth: 1
network.flat.rst
network.vlan.rst
Network Management Commands
---------------------------
Admins and Network Administrators can use the 'nova-manage' command to manage network resources:
VPN Management
~~~~~~~~~~~~~~
* vpn list: Print a listing of the VPNs for all projects.
* arguments: none
* vpn run: Start the VPN for a given project.
* arguments: project
* vpn spawn: Run all VPNs.
* arguments: none
Floating IP Management
~~~~~~~~~~~~~~~~~~~~~~
* floating create: Creates floating ips for host by range
* arguments: host ip_range
* floating delete: Deletes floating ips by range
* arguments: range
* floating list: Prints a listing of all floating ips
* arguments: none
Network Management
~~~~~~~~~~~~~~~~~~
* network create: Creates fixed ips for host by range
* arguments: [fixed_range=FLAG], [num_networks=FLAG],
[network_size=FLAG], [vlan_start=FLAG],
[vpn_start=FLAG]
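For example, registering and listing a floating IP range might look like this (a sketch; the hostname and the CIDR-style range format are assumptions, not from this guide):

```shell
# Illustrative session -- requires a running Nova deployment:
#   nova-manage floating create compute01 10.0.0.32/28
#   nova-manage floating list
```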


@ -1,68 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Managing Projects
=================
Projects are isolated resource containers forming the principal organizational structure within Nova. They consist of a separate VLAN, volumes, instances, images, keys, and users.
Although the original EC2 API only supports users, Nova adds the concept of projects. A user can specify which project he or she wishes to use by appending `:project_id` to his or her access key. If no project is specified in the API request, Nova will attempt to use a project with the same id as the user.
The API will return NotAuthorized if a normal user attempts to make requests for a project that he or she is not a member of. Note that admins or users with special admin roles skip this check and can make requests for any project.
To create a project, use the `project create` command of nova-manage. The syntax is nova-manage project create projectname manager_id [description]. You must specify a projectname and a manager_id. For example::
nova-manage project create john_project john "This is a sample project"
You can add and remove users from projects with `project add` and `project remove`::
nova-manage project add john_project john
nova-manage project remove john_project john
Project Commands
----------------
Admins and Project Managers can use the 'nova-manage project' command to manage project resources:
* project add: Adds user to project
* arguments: project user
* project create: Creates a new project
* arguments: name project_manager [description]
* project delete: Deletes an existing project
* arguments: project_id
* project environment: Exports environment variables to a sourceable file
* arguments: project_id user_id [filename=novarc]
* project list: lists all projects
* arguments: none
* project remove: Removes user from project
* arguments: project user
* project scrub: Deletes data associated with project
* arguments: project
* project zipfile: Exports credentials for project to a zip file
* arguments: project_id user_id [filename=nova.zip]
Setting Quotas
--------------
Nova utilizes a quota system at the project level to control resource consumption across available hardware resources. Current quota controls are available to limit the:
* Number of volumes which may be created
* Total size of all volumes within a project as measured in GB
* Number of instances which may be launched
* Number of processor cores which may be allocated
* Publicly accessible IP addresses
Use the following command to set quotas for a project:
* project quota: Set or display quotas for project
* arguments: project_id [key] [value]
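Per the arguments above, displaying and setting a project quota might look like this (a sketch; the key name `volumes` is an assumption based on the quota controls listed):

```shell
# Illustrative session -- requires a running Nova deployment:
#   nova-manage project quota john_project               # display current quotas
#   nova-manage project quota john_project volumes 10    # limit project to 10 volumes
```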


@ -1,82 +0,0 @@
Managing Users
==============
Users and Access Keys
---------------------
Access to the EC2 API is controlled by an access and secret key. The user's access key needs to be included in the request, and the request must be signed with the secret key. Upon receipt of API requests, Nova will verify the signature and execute commands on behalf of the user.
In order to begin using Nova, you will need to create a user. This can be easily accomplished using the user create or user admin commands in nova-manage. `user create` will create a regular user, whereas `user admin` will create an admin user. The syntax of the command is nova-manage user create username [access] [secret]. For example::
nova-manage user create john my-access-key a-super-secret-key
If you do not specify an access or secret key, a random uuid will be created automatically.
Credentials
-----------
Nova can generate a handy set of credentials for a user. These credentials include a CA for bundling images and a file for setting environment variables to be used by euca2ools. If you don't need to bundle images, just the environment script is required. You can export one with the `project environment` command. The syntax of the command is nova-manage project environment project_id user_id [filename]. If you don't specify a filename, it will be exported as novarc. After generating the file, you can simply source it in bash to add the variables to your environment::
nova-manage project environment john_project john
. novarc
If you do need to bundle images, you will need to get all of the credentials using `project zipfile`. Note that zipfile will give you an error message if networks haven't been created yet. Otherwise zipfile has the same syntax as environment, only the default file name is nova.zip. Example usage::
nova-manage project zipfile john_project john
unzip nova.zip
. novarc
Role Based Access Control
-------------------------
Roles control the API actions that a user is allowed to perform. For example, a user cannot allocate a public IP without the `netadmin` role. It is important to remember that a user's de facto permissions in a project are the intersection of user (global) roles and project (local) roles. So for john to have netadmin permissions in his project, he needs the role specified both globally and for the project. You can add roles with `role add`. The syntax is nova-manage role add user_id role [project_id]. Let's give john the netadmin role for his project::
nova-manage role add john netadmin
nova-manage role add john netadmin john_project
Role-based access control (RBAC) is an approach to restricting system access to authorized users based on an individual's role within an organization. Various employee functions require certain levels of system access in order to be successful. These functions are mapped to defined roles and individuals are categorized accordingly. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of assigning appropriate roles to the user. This simplifies common operations, such as adding a user, or changing a user's department.
Nova's rights management system employs the RBAC model and currently supports the following five roles:
* **Cloud Administrator.** (admin) Users of this class enjoy complete system access.
* **IT Security.** (itsec) This role is limited to IT security personnel. It permits role holders to quarantine instances.
* **Project Manager.** (projectmanager) The default for project owners, this role affords users the ability to add other users to a project, interact with project images, and launch and terminate instances.
* **Network Administrator.** (netadmin) Users with this role are permitted to allocate and assign publicly accessible IP addresses as well as create and modify firewall rules.
* **Developer.** This is a general purpose role that is assigned to users by default.
RBAC management is exposed through the dashboard for simplified user management.
User Commands
~~~~~~~~~~~~~
Users, including admins, are created through the ``user`` commands.
* user admin: creates a new admin and prints exports
* arguments: name [access] [secret]
* user create: creates a new user and prints exports
* arguments: name [access] [secret]
* user delete: deletes an existing user
* arguments: name
* user exports: prints access and secrets for user in export format
* arguments: name
* user list: lists all users
* arguments: none
* user modify: updates a user's keys & admin flag
* arguments: accesskey secretkey admin
* leave any field blank to ignore it, admin should be 'T', 'F', or blank
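The blank-means-ignore convention for `user modify` can be sketched as follows. This is a hypothetical illustration of the behavior described above, not Nova's actual code:

```python
# Hypothetical sketch: blank fields are ignored; admin is 'T', 'F', or blank.
def modify_user(user, access="", secret="", admin=""):
    """Update only the non-blank fields of a user record (a plain dict here)."""
    if access:
        user["access"] = access
    if secret:
        user["secret"] = secret
    if admin:
        user["admin"] = (admin == "T")
    return user

u = {"access": "a1", "secret": "s1", "admin": False}
modify_user(u, secret="s2", admin="T")   # access key left untouched
```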
User Role Management
~~~~~~~~~~~~~~~~~~~~
* role add: adds role to user
* if project is specified, adds project specific role
* arguments: user, role [project]
* role has: checks to see if user has role
* if project is specified, returns True if user has
the global role and the project role
* arguments: user, role [project]
* role remove: removes role from user
* if project is specified, removes project specific role
* arguments: user, role [project]

Security Considerations
=======================
.. todo:: This doc is vague and just high-level right now. Describe architecture that enables security.
Securing a cloud computing system involves protecting the instances and the data on them, ensuring users are authenticated for actions, and making sure borders are understood by both the users and the system. Protecting the system from intrusion or attack involves authentication, network protections, and compromise detection.
Key Concepts
------------
Authentication - Each instance is authenticated with a key pair.
Network - Instances can communicate with each other but you can configure the boundaries through firewall
configuration.
Monitoring - Log all API commands and audit those logs.
Encryption - Data transfer between instances is not encrypted.

Monitoring
==========
* components
* throughput
* exceptions
* hardware
* ganglia
* syslog

Installing Nova on Multiple Servers
===================================
When you move beyond evaluating the technology and into building an actual
production environment, you will need to know how to configure your datacenter
and how to deploy components across your clusters. This guide should help you
through that process.
You can install multiple nodes to increase performance and availability of the OpenStack Compute installation.
This setup is based on an Ubuntu Lucid 10.04 installation with the latest updates. Most of this works around issues that need to be resolved either in packaging or bug-fixing. It also needs to eventually be generalized, but the intent here is to get the multi-node configuration bootstrapped so folks can move forward.
For a starting architecture, these instructions describe installing a cloud controller node and a compute node. The cloud controller node contains the nova- services plus the database. The compute node runs all the nova- services but refers to the database installation, which is hosted by the cloud controller node.
Requirements for a multi-node installation
------------------------------------------
* You need a real database, compatible with SQLAlchemy (MySQL, PostgreSQL). There's no specific reason to choose one over the other; it basically depends on which you know. MySQL is easier to do High Availability (HA) with, but people may already know PostgreSQL. We should document both configurations, though.
* For a recommended HA setup, consider MySQL master/slave replication, with as many slaves as you like, and probably a heartbeat to promote one of the slaves to master if the master dies.
* For performance optimization, split reads and writes to the database. MySQL proxy is the easiest way to make this work if running MySQL.
Assumptions
-----------
* Networking is configured between/through the physical machines on a single subnet.
* Installation and execution are both performed by the root user.
Scripted Installation
---------------------
A script is available to get your OpenStack cloud running quickly. You can copy the file to the server where you want to install OpenStack Compute services - typically you would install a compute node and a cloud controller node.
You must run these scripts with root permissions.
From a server you intend to use as a cloud controller node, use this command to get the cloud controller script. This script is a work-in-progress and the maintainer plans to keep it up, but it is offered "as-is." Feel free to collaborate on it in GitHub - https://github.com/dubsquared/OpenStack-NOVA-Installer-Script/.
::
wget --no-check-certificate https://github.com/dubsquared/OpenStack-NOVA-Installer-Script/raw/master/nova-CC-install-v1.1.sh
Ensure you can execute the script by modifying the permissions on the script file.
::
sudo chmod 755 nova-CC-install-v1.1.sh
Run the script:
::
sudo ./nova-CC-install-v1.1.sh
Next, from a server you intend to use as a compute node (doesn't contain the database), install the nova services. You can use the nova-NODE-installer.sh script from the above github-hosted project for the compute node installation.
Copy the nova.conf from the cloud controller node to the compute node.
Restart related services::
service libvirt-bin restart; service nova-network restart; service nova-compute restart; service nova-api restart; service nova-objectstore restart; service nova-scheduler restart
You can go to the `Configuration section`_ for next steps.
Manual Installation - Step-by-Step
----------------------------------
The following sections show you how to install Nova manually with a cloud controller node and a separate compute node. The cloud controller node contains the database plus all nova- services, and the compute node runs nova- services only.
Cloud Controller Installation
`````````````````````````````
On the cloud controller node, you install nova services and the related helper applications, and then configure with the nova.conf file. You will then copy the nova.conf file to the compute node, which you install as a second node in the `Compute Installation`_.
Step 1 - Use apt-get to get the latest code
-------------------------------------------
1. Set up the Nova PPA from https://launchpad.net/~nova-core/+archive/trunk. The python-software-properties package is a prerequisite for setting up the nova package repo:
::
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:nova-core/trunk
2. Run update.
::
sudo apt-get update
3. Install the required python packages, nova packages, and helper apps.
::
sudo apt-get install python-greenlet python-mysqldb python-nova nova-common nova-doc nova-api nova-network nova-objectstore nova-scheduler nova-compute euca2ools unzip
It is highly likely that there will be errors when the nova services come up since they are not yet configured. Don't worry, you're only at step 1!
Step 2 - Set up configuration file (installed in /etc/nova)
------------------------------------------------------------
1. Nova development has consolidated all config files to nova.conf as of November 2010. There is a default set of options that are already configured in nova.conf:
::
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
The following items ALSO need to be defined in /etc/nova/nova.conf. I've added some explanation of the variables, as comments CANNOT be in nova.conf; there seems to be an issue with nova-manage not processing the comments/whitespace correctly:
--sql_connection ### Location of Nova SQL DB
--s3_host ### This is where Nova is hosting the objectstore service, which will contain the VM images and buckets
--rabbit_host ### This is where the rabbit AMQP messaging service is hosted
--cc_host ### This is where the nova-api service lives
--verbose ### Optional but very helpful during initial setup
--ec2_url ### The location to interface nova-api
--network_manager ### Many options here, discussed below. This is how your controller will communicate with additional Nova nodes and VMs:
nova.network.manager.FlatManager # Simple, no-vlan networking type
nova.network.manager.FlatDHCPManager # Flat networking with DHCP
nova.network.manager.VlanManager # Vlan networking with DHCP /DEFAULT/ if no network manager is defined in nova.conf
--fixed_range=<network/prefix> ### This will be the IP network that ALL the projects for future VM guests will reside on. E.g. 192.168.0.0/12
--network_size=<# of addrs> ### This is the total number of IP Addrs to use for VM guests, of all projects. E.g. 5000
The following code can be cut and pasted, then edited for your setup:
Note: CC_ADDR=<the external IP address of your cloud controller>
Detailed explanation of the following example is available above.
::
--sql_connection=mysql://root:nova@<CC_ADDR>/nova
--s3_host=<CC_ADDR>
--rabbit_host=<CC_ADDR>
--cc_host=<CC_ADDR>
--verbose
--ec2_url=http://<CC_ADDR>:8773/services/Cloud
--network_manager=nova.network.manager.VlanManager
--fixed_range=<network/prefix>
--network_size=<# of addrs>
2. Create a “nova” group, and set permissions::
addgroup nova
The Nova config file should have its owner set to root:nova, and mode set to 0644, since it contains your MySQL server's root password. ::
chown -R root:nova /etc/nova
chmod 644 /etc/nova/nova.conf
Step 3 - Setup the SQL DB (MySQL for this setup)
------------------------------------------------
1. First you 'preseed' to bypass all the installation prompts::
bash
MYSQL_PASS=nova
cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED
2. Install MySQL::
apt-get install -y mysql-server
3. Edit /etc/mysql/my.cnf to change bind-address from localhost to any::
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
4. MySQL DB configuration:
Create NOVA database::
mysql -uroot -p$MYSQL_PASS -e 'CREATE DATABASE nova;'
Update the DB to include user 'root'@'%' with super user privileges::
mysql -uroot -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;"
Set the MySQL root password::
mysql -uroot -p$MYSQL_PASS -e "SET PASSWORD FOR 'root'@'%' = PASSWORD('$MYSQL_PASS');"
Compute Node Installation
`````````````````````````
Repeat steps 1 and 2 from the Cloud Controller Installation section above, then configure the network for your Compute instances on the Compute node. Copy the nova.conf file from the Cloud Controller node to this node.
Network Configuration
---------------------
If you use FlatManager as your network manager (as opposed to VlanManager, which is shown in the nova.conf example above), there are some additional networking changes you'll have to make to ensure connectivity between your nodes and VMs. If you chose VlanManager or FlatDHCP, you may skip this section, as it's set up for you automatically.
Nova defaults to a bridge device named 'br100'. This bridge needs to be created and integrated into YOUR network. To keep things as simple as possible, have all the VM guests on the same network as the VM hosts (the compute nodes): set the compute node's external IP address to be on the bridge and add eth0 to that bridge. To do this, edit your network interfaces configuration to look like the following::
< begin /etc/network/interfaces >
# The loopback network interface
auto lo
iface lo inet loopback
# Networking for NOVA
auto br100
iface br100 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
< end /etc/network/interfaces >
Next, restart networking to apply the changes::
sudo /etc/init.d/networking restart
Configuration
`````````````
On the Compute node, you should continue with these configuration steps.
Step 1 - Set up the Nova environment
------------------------------------
These are the commands you run to update the database if needed, and then set up a user and project::
/usr/bin/python /usr/bin/nova-manage db sync
/usr/bin/python /usr/bin/nova-manage user admin <user_name>
/usr/bin/python /usr/bin/nova-manage project create <project_name> <user_name>
/usr/bin/python /usr/bin/nova-manage network create <project-network> <number-of-networks-in-project> <IPs in project>
Here is an example of what this looks like with real data::
/usr/bin/python /usr/bin/nova-manage db sync
/usr/bin/python /usr/bin/nova-manage user admin dub
/usr/bin/python /usr/bin/nova-manage project create dubproject dub
/usr/bin/python /usr/bin/nova-manage network create 192.168.0.0/24 1 255
(I chose a /24 since that falls inside my /12 range I set in fixed-range in nova.conf. Currently, there can only be one network, and I am using the max IPs available in a /24. You can choose to use any valid amount that you would like.)
Note: The nova-manage service assumes that the first IP address is your network (like 192.168.0.0), that the 2nd IP is your gateway (192.168.0.1), and that the broadcast is the very last IP in the range you defined (192.168.0.255). If this is not the case you will need to manually edit the sql db 'networks' table.
On running the "nova-manage network create" command, entries are made in the 'networks' and 'fixed_ips' tables. However, one of the networks listed in the 'networks' table needs to be marked as a bridge in order for the code to know that a bridge exists. The network is marked as bridged automatically based on the type of network manager selected. You only need to mark the network as a bridge manually if you chose FlatManager as your network type. More information can be found at the end of this document discussing setting up the bridge device.
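The address-layout assumptions above are easy to sanity-check for your chosen range. This short Python sketch (not part of Nova; it uses the standard ipaddress module) reproduces the layout nova-manage expects:

```python
# Sanity check: for a given range, the first address is the network,
# the second is the gateway, and the last is the broadcast.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
network = net.network_address          # first address in the range
gateway = net.network_address + 1      # second address
broadcast = net.broadcast_address      # last address

print(network, gateway, broadcast)
```

If your desired layout differs from this, that is the case where you must edit the 'networks' table by hand.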
Step 2 - Create Nova certificates
-----------------------------------
1. Generate the certs as a zip file. These are the certs you will use to launch instances, bundle images, and perform all the other assorted API functions.
::
mkdir -p /root/creds
/usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip
2. Unzip them in your home directory, and add them to your environment.
::
unzip /root/creds/novacreds.zip -d /root/creds/
cat /root/creds/novarc >> ~/.bashrc
source ~/.bashrc
Step 3 - Restart all relevant services
--------------------------------------
Restart all six services in total, just to cover the entire spectrum::
service libvirt-bin restart; service nova-network restart; service nova-compute restart; service nova-api restart; service nova-objectstore restart; service nova-scheduler restart
Step 4 - Closing steps, and cleaning up
---------------------------------------
One of the most commonly missed configuration steps is allowing proper access to VMs. Use the 'euca-authorize' command to enable access. Below, you will find the commands to allow 'ping' and 'ssh' to your VMs::
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
Another common issue is not being able to ping or SSH to your instances after issuing the 'euca-authorize' commands. Something to look at is the number of 'dnsmasq' processes that are running. If you have a running instance, check to see that TWO 'dnsmasq' processes are running. If not, perform the following::
killall dnsmasq
service nova-network restart
To avoid issues with KVM and permissions with Nova, run the following commands to ensure we have VMs that are running optimally::
chgrp kvm /dev/kvm
chmod g+rwx /dev/kvm
If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs this iptables entry so that UEC images can get metadata info. On compute nodes, configure the iptables with this next step::
# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773
Testing the Installation
````````````````````````
You can confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query::
mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'
In return, you should see something similar to this::
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
You can see that 'osdemo0{1,2,4,5}' are all running 'nova-compute'. When you start spinning up instances, they will be allocated to any node that is running nova-compute from this list.
You can then use `euca2ools` to test some items::
euca-describe-images
euca-describe-instances
If you have issues with the API key, you may need to re-source your creds file::
. /root/creds/novarc
If you don't get any immediate errors, you're successfully making calls to your cloud!
Spinning up a VM for Testing
````````````````````````````
(This excerpt is from Thierry Carrez's blog, with reference to http://wiki.openstack.org/GettingImages.)
The image that you will use here will be a ttylinux image, so this is a limited function server. You will be able to ping and SSH to this instance, but it is in no way a full production VM.
UPDATE: Due to `bug 661159 <https://bugs.launchpad.net/nova/+bug/661159>`_, we can't yet use images without ramdisks, so we can't use the classic Ubuntu cloud images from http://uec-images.ubuntu.com/releases/. For the sake of this tutorial, we'll use the `ttylinux images from Scott Moser instead <http://smoser.brickies.net/ubuntu/ttylinux-uec/>`_.
Download the image, and publish to your bucket:
::
image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
uec-publish-tarball $image mybucket
This will output three references: an "emi" (image), an "eri" (ramdisk), and an "eki" (kernel). The emi is the one used to launch instances, so take note of it.
Create a keypair to SSH to the server:
::
euca-add-keypair mykey > mykey.priv
chmod 0600 mykey.priv
Boot your instance:
::
euca-run-instances $emi -k mykey -t m1.tiny
($emi is replaced with the output from the previous command)
Checking status, and confirming communication:
Once you have booted the instance, you can check its status with the `euca-describe-instances` command. Here you can view the instance ID, IP, and current status of the VM.
::
euca-describe-instances
Once in a "running" state, you can use your SSH key to connect:
::
ssh -i mykey.priv root@$ipaddress
When you are ready to terminate the instance, you may do so with the `euca-terminate-instances` command:
::
euca-terminate-instances $instance-id
You can determine the instance-id with `euca-describe-instances`; the format is "i-" followed by a series of letters and numbers, e.g. i-a4g9d.
For more information on creating your own custom (production-ready) instance images, please visit http://wiki.openstack.org/GettingImages.
Enjoy your new private cloud, and play responsibly!

Flat Network Mode (Original and Flat)
=====================================
Flat network mode removes most of the complexity of VLAN mode by simply
bridging all instance interfaces onto a single network.
There are two variations of flat mode that differ mostly in how IP addresses
are given to instances.
Original Flat Mode
------------------
IP addresses for VM instances are grabbed from a subnet specified by the network administrator, and injected into the image on launch. All instances of the system are attached to the same Linux networking bridge, configured manually by the network administrator both on the network controller hosting the network and on the compute controllers hosting the instances. To recap:
* Each compute host creates a single bridge for all instances to use to attach to the external network.
* The networking configuration is injected into the instance before it is booted or it is obtained by a guest agent installed in the instance.
Note that the configuration injection currently only works on Linux-style systems that keep networking configuration in /etc/network/interfaces.
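For a concrete picture, the injected configuration is an ordinary Debian-style interfaces stanza. The sketch below is a hypothetical illustration (the addresses are placeholders, and it writes to a temp file rather than a real guest image):

```shell
# Hypothetical example of the static networking config flat mode injects
# into a guest's /etc/network/interfaces. Addresses are placeholders.
cat > /tmp/interfaces.example <<'EOF'
auto eth0
iface eth0 inet static
    address 192.168.0.5
    netmask 255.255.255.0
    broadcast 192.168.0.255
    gateway 192.168.0.1
EOF
cat /tmp/interfaces.example
```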
Flat DHCP Mode
--------------
IP addresses for VM instances are grabbed from a subnet specified by the network administrator. Similar to the flat network, a single Linux networking bridge is created and configured manually by the network administrator and used for all instances. A DHCP server is started to pass out IP addresses to VM instances from the specified subnet. To recap:
* Like flat mode, all instances are attached to a single bridge on the compute node.
* In addition a DHCP server is running to configure instances.
Implementation
--------------
The network nodes do not act as a default gateway in flat mode. Instances
are given public IP addresses.
Compute nodes have iptables/ebtables entries created per project and
instance to protect against IP/MAC address spoofing and ARP poisoning.
Examples
--------
.. todo:: add flat network mode configuration examples

VLAN Network Mode
=================
VLAN Network Mode is the default mode for Nova. It provides a private network
segment for each project's instances that can be accessed via a dedicated
VPN connection from the Internet.
In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. The Linux networking bridges and VLANs are created by Nova when required, described in more detail in Nova VLAN Network Management Implementation.
.. image:: /images/Novadiagram.png
:width: 790
While network traffic between VM instances belonging to the same VLAN is always open, Nova can enforce isolation of network traffic between different projects by enforcing one VLAN per project.
In addition, the network administrator can specify a pool of public IP addresses that users may allocate and then assign to VMs, either at boot or dynamically at run-time. This capability is similar to Amazon's 'elastic IPs'. A public IP address may be associated with a running instance, allowing the VM instance to be accessed from the public network. The public IP addresses are accessible from the network host and NATed to the private IP address of the project.
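The NAT described above can be sketched as a pair of iptables rules. This is a hedged illustration of the general technique, not the exact chains or rules Nova installs (those differ by release); the addresses are placeholders, and applying the rules requires root on a configured network host:

```shell
# Hedged sketch of public-to-private NAT for an associated public IP.
PUBLIC_IP=1.2.3.4        # placeholder: address allocated from the public pool
PRIVATE_IP=10.0.0.3      # placeholder: instance's fixed address on the project VLAN
iptables -t nat -A PREROUTING  -d $PUBLIC_IP  -j DNAT --to-destination $PRIVATE_IP
iptables -t nat -A POSTROUTING -s $PRIVATE_IP -j SNAT --to-source $PUBLIC_IP
```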
.. todo:: Describe how a public IP address could be associated with a project (a VLAN)
This is the default networking mode and supports the most features. For multiple-machine installations, it requires a switch that supports host-managed VLAN tagging. In this mode, nova will create a VLAN and bridge for each project. The project gets a range of private IPs that are only accessible from inside the VLAN. In order for a user to access the instances in their project, a special VPN instance (code named :ref:`cloudpipe <cloudpipe>`) needs to be created. Nova generates a certificate and key for the user to access the VPN and starts the VPN automatically. More information on cloudpipe can be found :ref:`here <cloudpipe>`.
The following diagram illustrates the communication that occurs between the VLAN (the dashed box) and the public Internet (represented by the two clouds):
.. image:: /images/cloudpipe.png
:width: 100%
Goals
-----
For our implementation of Nova, our goal is that each project is in a protected network segment. Here are the specifications we keep in mind for meeting this goal.
* RFC-1918 IP space
* public IP via NAT
* no default inbound Internet access without public NAT
* limited (project-admin controllable) outbound Internet access
* limited (project-admin controllable) access to other project segments
* all connectivity to instance and cloud API is via VPN into the project segment
We also keep as a goal a common DMZ segment for support services, meaning these items are only visible from the project segment:
* metadata
* dashboard
Limitations
-----------
We kept in mind some of these limitations:
* Projects / cluster limited to available VLANs in switching infrastructure
* Requires VPN for access to project segment
Implementation
--------------
Currently Nova segregates project VLANs using 802.1q VLAN tagging in the
switching layer. Compute hosts create VLAN-specific interfaces and bridges
as required.
The network nodes act as default gateway for project networks and contain
all of the routing and firewall rules implementing security groups. The
network node also handles DHCP to provide instance IPs for each project.
VPN access is provided by running a small instance called CloudPipe
on the IP immediately following the gateway IP for each project. The
network node maps a dedicated public IP/port to the CloudPipe instance.
Compute nodes have per-VLAN interfaces and bridges created as required.
These do NOT have IP addresses in the host to protect host access.
Compute nodes have iptables/ebtables entries created per project and
instance to protect against IP/MAC address spoofing and ARP poisoning.
The network assignment to a project, and the IP address assignment to a VM instance, are triggered when a user starts to run a VM instance. When running a VM instance, a user needs to specify the project for the instance and the security groups (described in Security Groups) the instance should join. If this is the first instance to be created for the project, then Nova (the cloud controller) needs to find a network controller to be the network host for the project. It then sets up a private network by finding an unused VLAN id and an unused subnet and assigning them to the project; it also assigns a name to the project's Linux bridge (br100, stored in the Nova database) and allocates a private IP within the project's subnet for the new instance.
If the instance the user wants to start is not the project's first, a subnet and a VLAN must have already been assigned to the project; therefore the system needs only to find an available IP address within the subnet and assign it to the new starting instance. If there is no private IP available within the subnet, an exception will be raised to the cloud controller, and the VM creation cannot proceed.
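The allocation flow described in the last two paragraphs can be sketched as follows. This is a hypothetical Python illustration of the logic, not Nova's actual code; the pools, the tiny /29 subnet, and the data structures are invented for the example:

```python
# Hypothetical sketch: first instance in a project claims an unused VLAN and
# subnet; later instances just take a free private IP; exhaustion raises.
import ipaddress

free_vlans = [100, 101]
free_subnets = [ipaddress.ip_network("10.0.0.0/29")]
projects = {}  # project name -> {"vlan", "subnet", "used"}

def allocate_ip(project):
    if project not in projects:
        # First instance for this project: claim a VLAN and a subnet.
        if not free_vlans or not free_subnets:
            raise RuntimeError("no unused VLAN/subnet available")
        projects[project] = {"vlan": free_vlans.pop(0),
                             "subnet": free_subnets.pop(0),
                             "used": set()}
    net = projects[project]["subnet"]
    gateway = net.network_address + 1
    for host in net.hosts():                 # skips network/broadcast addresses
        if host != gateway and host not in projects[project]["used"]:
            projects[project]["used"].add(host)
            return str(host)
    raise RuntimeError("no private IP available within the project subnet")
```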
External Infrastructure
-----------------------
Nova assumes the following is available:
* DNS
* NTP
* Internet connectivity
Example
-------
This example network configuration demonstrates most of the capabilities
of VLAN Mode. It splits administrative access to the nodes onto a dedicated
management network and uses dedicated network nodes to handle all
routing and gateway functions.
It uses a 10GB network for instance traffic and a 1GB network for management.
Hardware
~~~~~~~~
* All nodes have a minimum of two NICs for management and production.
* management is 1GB
* production is 10GB
* add additional NICs for bonding or HA/performance
* network nodes should have an additional NIC dedicated to public Internet traffic
* switch needs to support enough simultaneous VLANs for number of projects
* production network configured as 802.1q trunk on switch
Operation
~~~~~~~~~
The network node controls the project network configuration:
* assigns each project a VLAN and private IP range
* starts dnsmasq on project VLAN to serve private IP range
* configures iptables on network node for default project access
* launches CloudPipe instance and configures iptables access
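Nova assembles the per-project dnsmasq invocation in Python. A rough sketch of that step follows; the flag set and the reserved-address convention here are assumptions for illustration, not Nova's exact invocation:

```python
import ipaddress

def dnsmasq_cmd(bridge, cidr, lease_time="120s"):
    """Build a dnsmasq command line serving a project's private IP range.

    Illustrative only: Nova launches one dnsmasq per project VLAN, but its
    exact flags and reserved addresses may differ from this sketch.
    """
    hosts = list(ipaddress.ip_network(cidr).hosts())
    # Skip the gateway and the CloudPipe IP immediately following it.
    start, end = hosts[2], hosts[-1]
    return [
        "dnsmasq", "--strict-order", "--bind-interfaces",
        f"--interface={bridge}",
        f"--dhcp-range={start},{end},{lease_time}",
    ]

cmd = dnsmasq_cmd("br100", "10.0.0.0/26")
```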
When starting an instance the network node:
* sets up a VLAN interface and bridge on each host as required when an
instance is started on that host
* assigns private IP to instance
* generates MAC address for instance
* updates dnsmasq with IP/MAC for instance
When starting an instance the compute node:
* sets up a VLAN interface and bridge on each host as required when an
instance is started on that host
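The VLAN interface and bridge setup in the bullets above corresponds roughly to the following sequence of Linux commands, sketched here as a Python helper. This illustrates the steps only; the interface and bridge naming is assumed, and Nova's real implementation differs:

```python
def vlan_bridge_cmds(vlan_id, trunk_if="eth1"):
    """Commands a host would run to create a per-project VLAN interface
    and bridge. Note the bridge gets no host IP address, which protects
    access to the host itself."""
    vlan_if = f"vlan{vlan_id}"
    bridge = f"br{vlan_id}"
    return [
        f"vconfig add {trunk_if} {vlan_id}",  # tag traffic with the VLAN id
        f"ip link set {vlan_if} up",
        f"brctl addbr {bridge}",              # bridge for the project's instances
        f"brctl addif {bridge} {vlan_if}",
        f"ip link set {bridge} up",
    ]

cmds = vlan_bridge_cmds(100)
```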
Setup
~~~~~
* Assign VLANs in the switch:
* public Internet segment
* production network
* management network
* cluster DMZ
* Assign a contiguous range of VLANs to Nova for project use.
* Configure management NIC ports as management VLAN access ports.
* Configure management VLAN with Internet access as required
* Configure production NIC ports as 802.1q trunk ports.
* Configure Nova (need to add specifics here)
* public IPs
* instance IPs
* project network size
* DMZ network
.. todo:: need specific Nova configuration added
@ -1,239 +0,0 @@
..
Copyright 2010-2011 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
The nova-manage command
=======================
Introduction
~~~~~~~~~~~~
The nova-manage command is used to perform many essential functions for
administration and ongoing maintenance of nova, such as user creation,
vpn management, and much more.
The standard pattern for executing a nova-manage command is:
``nova-manage <category> <command> [<args>]``
For example, to obtain a list of all projects:
``nova-manage project list``
Run without arguments to see a list of available command categories:
``nova-manage``
Categories are db, user, project, role, shell, vpn, and floating. Detailed descriptions are below.
You can also run with a category argument such as user to see a list of all commands in that category:
``nova-manage user``
These sections describe the available categories and arguments for nova-manage.
Nova Db
~~~~~~~
``nova-manage db version``
Print the current database version.
``nova-manage db sync``
Sync the database up to the most recent version. This is the standard way to create the db as well.
Nova User
~~~~~~~~~
``nova-manage user admin <username>``
Create an admin user with the name <username>.
``nova-manage user create <username>``
Create a normal user with the name <username>.
``nova-manage user delete <username>``
Delete the user with the name <username>.
``nova-manage user exports <username>``
Outputs the named user's access key and secret key to the screen.
``nova-manage user list``
Outputs a list of all the user names to the screen.
``nova-manage user modify <accesskey> <secretkey> <admin?T/F>``
Updates the indicated user keys, indicating with T or F if the user is an admin user. Leave any argument blank if you do not want to update it.
Nova Project
~~~~~~~~~~~~
``nova-manage project add <projectname>``
Add a nova project with the name <projectname> to the database.
``nova-manage project create <projectname>``
Create a new nova project with the name <projectname> (you still need to do nova-manage project add <projectname> to add it to the database).
``nova-manage project delete <projectname>``
Delete a nova project with the name <projectname>.
``nova-manage project environment <projectname> <username>``
Exports environment variables for the named project to a file named novarc.
``nova-manage project list``
Outputs a list of all the projects to the screen.
``nova-manage project quota <projectname>``
Outputs the size and specs of the project's instances including gigabytes, instances, floating IPs, volumes, and cores.
``nova-manage project remove <projectname>``
Deletes the project with the name <projectname>.
``nova-manage project zipfile``
Compresses all related files for a created project into a zip file nova.zip.
Nova Role
~~~~~~~~~
``nova-manage role <action> [<argument>]``
``nova-manage role add <username> <rolename> <(optional) projectname>``
Add a user to either a global or project-based role with the indicated <rolename> assigned to the named user. Role names can be one of the following five roles: admin, itsec, projectmanager, netadmin, developer. If you add the project name as the last argument then the role is assigned just for that project, otherwise the user is assigned the named role for all projects.
``nova-manage role has <username> <projectname>``
Checks whether the named user has a role in the given project and responds with True if so.
``nova-manage role remove <username> <rolename>``
Remove the indicated role from the user.
Nova Shell
~~~~~~~~~~
``nova-manage shell bpython``
Starts a new bpython shell.
``nova-manage shell ipython``
Starts a new ipython shell.
``nova-manage shell python``
Starts a new python shell.
``nova-manage shell run``
Starts a new shell using python.
``nova-manage shell script <path/scriptname>``
Runs the named script from the specified path with flags set.
Nova VPN
~~~~~~~~
``nova-manage vpn list``
Displays a list of projects, their IP and port numbers, and what state they're in.
``nova-manage vpn run <projectname>``
Starts the VPN for the named project.
``nova-manage vpn spawn``
Runs all VPNs.
Nova Floating IPs
~~~~~~~~~~~~~~~~~
``nova-manage floating create <host> <ip_range>``
Creates floating IP addresses for the named host from the given range.
``nova-manage floating delete <ip_range>``
Deletes floating IP addresses in the range given.
``nova-manage floating list``
Displays a list of all floating IP addresses.
Concept: Flags
--------------
python-gflags
Concept: Plugins
----------------
* Managers/Drivers: utils.import_object from string flag
* virt/connections: conditional loading from string flag
* db: LazyPluggable via string flag
* auth_manager: utils.import_class based on string flag
* Volumes: moving to pluggable driver instead of manager
* Network: pluggable managers
* Compute: same driver used, but pluggable at connection
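Most of these plugin points reduce to the same pattern: a flag holds a dotted path, and a small helper resolves it to a class at startup. A minimal sketch of that pattern follows; it is not Nova's actual ``utils.import_class``, which adds error handling, and the flag name shown in the comment is only an example:

```python
import importlib

def import_class(import_str):
    """Resolve a dotted string like 'nova.network.manager.VlanManager'
    to the class object it names."""
    mod_str, _, class_str = import_str.rpartition(".")
    module = importlib.import_module(mod_str)
    return getattr(module, class_str)

# Stdlib example standing in for a flag value such as
# --network_manager=nova.network.manager.VlanManager:
cls = import_class("collections.OrderedDict")
```

Because the flag is just a string, swapping a manager or driver is a configuration change rather than a code change.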
Concept: IPC/RPC
----------------
Rabbit!
Concept: Fakes
--------------
* auth
* ldap
Concept: Scheduler
------------------
* simple
* random
Concept: Security Groups
------------------------
Security groups
Concept: Certificate Authority
------------------------------
Nova does a small amount of certificate management. These certificates are used for :doc:`project vpns <../cloudpipe>` and decrypting bundled images.
Concept: Images
---------------
* launching
* bundling
@ -1,362 +0,0 @@
Installing Nova on a Single Host
================================
Nova can be run on a single machine, and it is recommended that new users practice managing this type of installation before graduating to multi-node systems.
The fastest way to get a test cloud running is through our :doc:`../quickstart`. But for more detail on installing the system, read this doc.
Steps 1 and 2: Get the latest Nova code and system software
-----------------------------------------------------------
Depending on your system, the method for accomplishing this varies.
.. toctree::
:maxdepth: 1
distros/ubuntu.10.04
distros/ubuntu.10.10
distros/others
Step 3: Build and install Nova services
---------------------------------------
Switch to the base nova source directory.
Then type or copy/paste in the following lines to build and install the Python code for OpenStack Compute.
::
sudo python setup.py build
sudo python setup.py install
When the installation is complete, you'll see the following lines:
::
Installing nova-network script to /usr/local/bin
Installing nova-volume script to /usr/local/bin
Installing nova-objectstore script to /usr/local/bin
Installing nova-manage script to /usr/local/bin
Installing nova-scheduler script to /usr/local/bin
Installing nova-dhcpbridge script to /usr/local/bin
Installing nova-compute script to /usr/local/bin
Installing nova-instancemonitor script to /usr/local/bin
Installing nova-api script to /usr/local/bin
Installing nova-import-canonical-imagestore script to /usr/local/bin
Installed /usr/local/lib/python2.6/dist-packages/nova-2010.1-py2.6.egg
Processing dependencies for nova==2010.1
Finished processing dependencies for nova==2010.1
Step 4: Create the Nova Database
--------------------------------
Type or copy/paste in the following line to create your nova db::
sudo nova-manage db sync
Step 5: Create a Nova administrator
-----------------------------------
Type or copy/paste in the following line to create a user named "anne"::
sudo nova-manage user admin anne
You see an access key and a secret key export, such as these made-up ones::
export EC2_ACCESS_KEY=4e6498a2-blah-blah-blah-17d1333t97fd
export EC2_SECRET_KEY=0a520304-blah-blah-blah-340sp34k05bbe9a7
Step 6: Create the network
--------------------------
Type or copy/paste in the following line to create a network prior to creating a project.
::
sudo nova-manage network create 10.0.0.0/8 1 64
For this command, the first argument is your fixed IP range in CIDR notation, such as 192.168.1.0/24. The value 1 is the total number of networks you want created, and 64 is the total number of IPs in all networks.
After running this command, entries are made in the 'networks' and 'fixed_ips' tables in the database.
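The arithmetic behind those arguments can be checked with Python's ``ipaddress`` module. This is an illustration of the math only; Nova performs this carving internally:

```python
import ipaddress

# "nova-manage network create 10.0.0.0/8 1 64" asks Nova to carve
# 1 network of 64 addresses (a /26) out of the 10.0.0.0/8 range.
fixed_range = ipaddress.ip_network("10.0.0.0/8")
num_networks, network_size = 1, 64

prefix = 32 - (network_size - 1).bit_length()      # 64 addresses -> /26
subnet_gen = fixed_range.subnets(new_prefix=prefix)
subnets = [next(subnet_gen) for _ in range(num_networks)]
```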
Step 7: Create a project with the user you created
--------------------------------------------------
Type or copy/paste in the following line to create a project named IRT (for Ice Road Truckers, of course) with the newly-created user named anne.
::
sudo nova-manage project create IRT anne
::
Generating RSA private key, 1024 bit long modulus
.....++++++
..++++++
e is 65537 (0x10001)
Using configuration from ./openssl.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'California'
localityName :PRINTABLE:'MountainView'
organizationName :PRINTABLE:'AnsoLabs'
organizationalUnitName:PRINTABLE:'NovaDev'
commonName :PRINTABLE:'anne-2010-10-12T21:12:35Z'
Certificate is to be certified until Oct 12 21:12:35 2011 GMT (365 days)
Write out database with 1 new entries
Data Base Updated
Step 8: Unzip the nova.zip
--------------------------
You should have a nova.zip file in your current working directory. Unzip it with this command:
::
unzip nova.zip
You'll see these files extract.
::
Archive: nova.zip
extracting: novarc
extracting: pk.pem
extracting: cert.pem
extracting: nova-vpn.conf
extracting: cacert.pem
Step 9: Source the rc file
--------------------------
Type or copy/paste the following to source the novarc file in your current working directory.
::
. novarc
Step 10: Pat yourself on the back :)
-----------------------------------
Congratulations, your cloud is up and running: you've created an admin user, created a network, retrieved the user's credentials and put them in your environment.
Now you need an image.
Step 11: Get an image
---------------------
To make things easier, we've provided a small image on the Rackspace CDN. Use this command to get it on your server.
::
wget http://c2477062.cdn.cloudfiles.rackspacecloud.com/images.tgz
::
--2010-10-12 21:40:55-- http://c2477062.cdn.cloudfiles.rackspacecloud.com/images.tgz
Resolving cblah2.cdn.cloudfiles.rackspacecloud.com... 208.111.196.6, 208.111.196.7
Connecting to cblah2.cdn.cloudfiles.rackspacecloud.com|208.111.196.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 58520278 (56M) [application/x-gzip]
Saving to: `images.tgz'
100%[======================================>] 58,520,278 14.1M/s in 3.9s
2010-10-12 21:40:59 (14.1 MB/s) - `images.tgz' saved [58520278/58520278]
Step 12: Decompress the image file
----------------------------------
Use this command to extract the image files::
tar xvzf images.tgz
You get a directory listing like so::
images
|-- aki-lucid
| |-- image
| `-- info.json
|-- ami-tiny
| |-- image
| `-- info.json
`-- ari-lucid
|-- image
`-- info.json
Step 13: Send commands to upload sample image to the cloud
----------------------------------------------------------
Type or copy/paste the following command to create a manifest for the kernel::
euca-bundle-image -i images/aki-lucid/image -p kernel --kernel true
You should see this in response::
Checking image
Tarring image
Encrypting image
Splitting image...
Part: kernel.part.0
Generating manifest /tmp/kernel.manifest.xml
Type or copy/paste the following command to create a manifest for the ramdisk::
euca-bundle-image -i images/ari-lucid/image -p ramdisk --ramdisk true
You should see this in response::
Checking image
Tarring image
Encrypting image
Splitting image...
Part: ramdisk.part.0
Generating manifest /tmp/ramdisk.manifest.xml
Type or copy/paste the following command to upload the kernel bundle::
euca-upload-bundle -m /tmp/kernel.manifest.xml -b mybucket
You should see this in response::
Checking bucket: mybucket
Creating bucket: mybucket
Uploading manifest file
Uploading part: kernel.part.0
Uploaded image as mybucket/kernel.manifest.xml
Type or copy/paste the following command to upload the ramdisk bundle::
euca-upload-bundle -m /tmp/ramdisk.manifest.xml -b mybucket
You should see this in response::
Checking bucket: mybucket
Uploading manifest file
Uploading part: ramdisk.part.0
Uploaded image as mybucket/ramdisk.manifest.xml
Type or copy/paste the following command to register the kernel and get its ID::
euca-register mybucket/kernel.manifest.xml
You should see this in response::
IMAGE ami-fcbj2non
Type or copy/paste the following command to register the ramdisk and get its ID::
euca-register mybucket/ramdisk.manifest.xml
You should see this in response::
IMAGE ami-orukptrc
Type or copy/paste the following command to create a manifest for the machine image associated with the ramdisk and kernel IDs that you got from the previous commands::
euca-bundle-image -i images/ami-tiny/image -p machine --kernel ami-fcbj2non --ramdisk ami-orukptrc
You should see this in response::
Checking image
Tarring image
Encrypting image
Splitting image...
Part: machine.part.0
Part: machine.part.1
Part: machine.part.2
Part: machine.part.3
Part: machine.part.4
Generating manifest /tmp/machine.manifest.xml
Type or copy/paste the following command to upload the machine image bundle::
euca-upload-bundle -m /tmp/machine.manifest.xml -b mybucket
You should see this in response::
Checking bucket: mybucket
Uploading manifest file
Uploading part: machine.part.0
Uploading part: machine.part.1
Uploading part: machine.part.2
Uploading part: machine.part.3
Uploading part: machine.part.4
Uploaded image as mybucket/machine.manifest.xml
Type or copy/paste the following command to register the machine image and get its ID::
euca-register mybucket/machine.manifest.xml
You should see this in response::
IMAGE ami-g06qbntt
Type or copy/paste the following commands to register an SSH keypair for use in starting and accessing the instances::
euca-add-keypair mykey > mykey.priv
chmod 600 mykey.priv
Type or copy/paste the following command to run an instance using the keypair and IDs that we previously created::
euca-run-instances ami-g06qbntt --kernel ami-fcbj2non --ramdisk ami-orukptrc -k mykey
You should see this in response::
RESERVATION r-0at28z12 IRT
INSTANCE i-1b0bh8n ami-g06qbntt 10.0.0.3 10.0.0.3 scheduling mykey (IRT, None) m1.small 2010-10-18 19:02:10.443599
Type or copy/paste the following command to watch as the scheduler launches and completes booting your instance::
euca-describe-instances
You should see this in response::
RESERVATION r-0at28z12 IRT
INSTANCE i-1b0bh8n ami-g06qbntt 10.0.0.3 10.0.0.3 launching mykey (IRT, cloud02) m1.small 2010-10-18 19:02:10.443599
Type or copy/paste the following command to see when loading is completed and the instance is running::
euca-describe-instances
You should see this in response::
RESERVATION r-0at28z12 IRT
INSTANCE i-1b0bh8n ami-g06qbntt 10.0.0.3 10.0.0.3 running mykey (IRT, cloud02) 0 m1.small 2010-10-18 19:02:10.443599
Type or copy/paste the following command to check that the virtual machine is running::
virsh list
You should see this in response::
Id Name State
----------------------------------
1 2842445831 running
Type or copy/paste the following command to ssh to the instance using your private key::
ssh -i mykey.priv root@10.0.0.3
Troubleshooting Installation
----------------------------
If you see an "error loading the config file './openssl.cnf'" message, copy the openssl.cnf file to the location where Nova expects it and reboot, then try the command again.
::
cp /etc/ssl/openssl.cnf ~
sudo reboot
@ -18,7 +18,7 @@
Getting Involved
================
The Nova community is a very friendly group and there are places online to join in with the
The OpenStack community for Nova is a very friendly group and there are places online to join in with the
community. Feel free to ask questions. This document points you to some of the places where you can
communicate with people.
@ -83,3 +83,13 @@ Twitter
Because all the cool kids do it: `@openstack <http://twitter.com/openstack>`_. Also follow the
`#openstack <http://search.twitter.com/search?q=%23openstack>`_ tag for relevant tweets.
OpenStack Docs Site
-------------------
The `nova.openstack.org <http://nova.openstack.org>`_ site is geared towards developer documentation,
and the `docs.openstack.org <http://docs.openstack.org>`_ site is intended for cloud administrators
who are standing up and running OpenStack Compute in production. You can contribute to the Docs Site
by using bzr and Launchpad and contributing to the openstack-manuals project at http://launchpad.net/openstack-manuals.
@ -32,11 +32,13 @@ Nova is written with the following design guidelines in mind:
* **API Compatibility**: Nova strives to provide API compatibility with popular systems like Amazon EC2
This documentation is generated by the Sphinx toolkit and lives in the source
tree. Additional documentation on Nova and other components of OpenStack can
be found on the `OpenStack wiki`_. Also see the :doc:`community` page for
other ways to interact with the community.
tree. Additional draft and project documentation on Nova and other components of OpenStack can
be found on the `OpenStack wiki`_. Cloud administrators, refer to `docs.openstack.org`_.
Also see the :doc:`community` page for other ways to interact with the community.
.. _`OpenStack wiki`: http://wiki.openstack.org
.. _`docs.openstack.org`: http://docs.openstack.org
Key Concepts
@ -50,17 +52,7 @@ Key Concepts
service.architecture
nova.object.model
swift.object.model
Administrator's Documentation
=============================
.. toctree::
:maxdepth: 1
livecd
adminguide/index
adminguide/single.node.install
adminguide/multi.node.install
runnova/index
Developer Docs
==============
@ -18,8 +18,6 @@
Object Model
============
.. todo:: Add brief description for core models
.. graphviz::
digraph foo {
@ -42,27 +40,27 @@ Object Model
Users
-----
Each Nova User is authorized based on their access key and secret key, assigned per-user. Read more at :doc:`/adminguide/managing.users`.
Each Nova User is authorized based on their access key and secret key, assigned per-user. Read more at :doc:`/runnova/managing.users`.
Projects
--------
For Nova, access to images is based on the project. Read more at :doc:`/adminguide/managing.projects`.
For Nova, access to images is based on the project. Read more at :doc:`/runnova/managing.projects`.
Images
------
Images are binary files that run the operating system. Read more at :doc:`/adminguide/managing.images`.
Images are binary files that run the operating system. Read more at :doc:`/runnova/managing.images`.
Instances
---------
Instances are running virtual servers. Read more at :doc:`/adminguide/managing.instances`.
Instances are running virtual servers. Read more at :doc:`/runnova/managing.instances`.
Volumes
-------
.. todo:: Write doc about volumes
Volumes offer extra block level storage to instances. Read more at `Managing Volumes <http://docs.openstack.org/openstack-compute/admin/content/ch05s07.html>`_.
Security Groups
---------------
@ -72,7 +70,7 @@ In Nova, a security group is a named collection of network access rules, like fi
VLANs
-----
VLAN is the default network mode for Nova. Read more at :doc:`/adminguide/network.vlan`.
VLAN is the default network mode for Nova. Read more at :doc:`/runnova/network.vlan`.
IP Addresses
------------
@ -54,7 +54,7 @@ Environment Variables
By tweaking the environment that nova.sh runs in, you can build slightly
different configurations (though for more complex setups you should see
:doc:`/adminguide/getting.started` and :doc:`/adminguide/multi.node.install`).
`Installing and Configuring OpenStack Compute <http://docs.openstack.org/openstack-compute/admin/content/ch03.html>`_).
* HOST_IP
* Default: address of first interface from the ifconfig command