diff --git a/doc/src/docbkx/openstack-compute-admin/computeinstall.xml b/doc/src/docbkx/openstack-compute-admin/computeinstall.xml index 39472f3f89..35d110c488 100644 --- a/doc/src/docbkx/openstack-compute-admin/computeinstall.xml +++ b/doc/src/docbkx/openstack-compute-admin/computeinstall.xml @@ -25,7 +25,8 @@ nova- service on an independent server means there are many possible methods for installing OpenStack Compute. The only co-dependency between possible multi-node installations is that the Dashboard must be installed - nova-api server. Here are the types of installation architectures: + on the nova-api server. Here are the types of + installation architectures: @@ -36,20 +37,22 @@ Two nodes: A cloud controller node runs the nova- services - except for nova-compute, and a compute node runs nova-compute. A - client computer is likely needed to bundle images and interfacing to - the servers, but a client is not required. Use this configuration for - proof of concepts or development environments. + except for nova-compute, and a compute node runs + nova-compute. A client computer is likely needed to + bundle images and interface with the servers, but a client is not + required. Use this configuration for proofs of concept or development + environments. Multiple nodes: You can add more compute nodes to the two node - installation by simply installing nova-compute on an additional server - and copying a nova.conf file to the added node. This would result in a - multiple node installation. You can also add a volume controller and a - network controller as additional nodes in a more complex multiple node - installation. A minimum of 4 nodes is best for running multiple - virtual instances that require a lot of processing power. + installation by simply installing nova-compute on + an additional server and copying a nova.conf file + to the added node. This would result in a multiple node installation. + You can also add a volume controller and a network controller as + additional nodes in a more complex multiple node installation. A + minimum of 4 nodes is best for running multiple virtual instances that + require a lot of processing power. @@ -68,8 +71,8 @@ problems. In that case you would add an additional RabbitMQ server in addition to or instead of scaling up the database server. Your installation can run any nova- service on any server as long as the - nova.conf is configured to point to the RabbitMQ server and the server can - send messages to the server. + nova.conf is configured to point to the RabbitMQ + server and the server can send messages to it. Multiple installation architectures are possible; here is another example illustration. @@ -110,8 +113,9 @@ Linux Server 10.04 LTS distribution containing only the components needed to run OpenStack Compute. See http://sourceforge.net/projects/stackops/files/ for download files and information, license information, and a README file. For documentation on the StackOps distro, see http://docs.stackops.org. For free support, go to http://getsatisfaction.com/stackops.
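As the paragraph on scaling above notes, the only thing a nova- service needs in order to join an installation is a nova.conf that points at the shared database and the RabbitMQ server. A minimal sketch of the two lines involved is shown here with hypothetical addresses; both flags are described in the flag table later in this chapter:
--sql_connection=mysql://root:nova@192.168.0.10/nova
--rabbit_host=192.168.0.10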
@@ -139,7 +143,7 @@ Download DevStack: - git clone git://github.com/cloudbuilders/devstack.git + $ git clone git://github.com/cloudbuilders/devstack.git The devstack repo contains a script that installs OpenStack Compute, the Image Service, and the Identity Service and offers @@ -149,7 +153,7 @@ Start the install: - cd devstack; ./stack.sh + $ cd devstack; ./stack.sh It takes a few minutes; we recommend reading the @@ -358,21 +362,23 @@ for n in node1 node2 node3; do
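The stack.sh script reads its settings from a localrc file in the devstack directory if one exists; otherwise it prompts for the passwords it needs. The variable names below are the ones DevStack used in this period, and the values are purely illustrative, not recommendations:
# devstack/localrc (illustrative values only)
ADMIN_PASSWORD=nova
MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=tokentoken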
Post-Installation Configuration for OpenStack Compute - Configuring your Compute installation involves nova-manage commands - plus editing the nova.conf file to ensure the correct flags are set. This - section contains the basics for a simple multi-node installation, but + Configuring your Compute installation involves + nova-manage commands plus editing the + nova.conf file to ensure the correct flags are set. + This section contains the basics for a simple multi-node installation, but Compute can be configured many ways. You can find networking options and hypervisor options described in separate chapters, and you will read about additional configuration information in a separate chapter as well.
- Setting Flags in the nova.conf File + Setting Flags in the <filename>nova.conf</filename> File - The configuration file nova.conf is installed in /etc/nova by - default. You only need to do these steps when installing manually, the - scripted installation above does this configuration during the - installation. A default set of options are already configured in - nova.conf when you install manually. The defaults are as follows: + The configuration file nova.conf is installed + in /etc/nova by default. You only need to do these + steps when installing manually. The scripted installation above does + this configuration during the installation. A default set of options is + already configured in nova.conf when you install + manually. The defaults are as follows: --daemonize=1 @@ -383,14 +389,16 @@ for n in node1 node2 node3; do Starting with the default file, you must define the following - required items in /etc/nova/nova.conf. The flag variables are described - below. You can place comments in the nova.conf file by entering a new - line with a # sign at the beginning of the line. To see a listing of all - possible flag settings, see the output of running /bin/nova-api - --help. + required items in /etc/nova/nova.conf. The flag + variables are described below. You can place comments in the + nova.conf file by entering a new line with a + # sign at the beginning of the line. To see a listing + of all possible flag settings, see the output of running + /bin/nova-api --help. @@ -402,14 +410,14 @@ for n in node1 node2 node3; do @@ -505,9 +519,9 @@ for n in node1 node2 node3; do
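To make the comment syntax just described concrete, here is a short sketch of a nova.conf fragment; the flags shown are the same default and logging options discussed in this chapter, and the values are not recommendations:
# state and log locations for this node
--logdir=/var/log/nova
--state_path=/var/lib/nova
# uncomment while troubleshooting the initial setup
#--verbose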
Description of nova.conf flags (not comprehensive)
--sql_connection SQLAlchemy connect string (reference); Location of OpenStack Compute SQL database
--s3_host IP address; Location where OpenStack Compute is hosting the objectstore service, which will contain the virtual machine images @@ -417,38 +425,44 @@ for n in node1 node2 node3; do
--rabbit_host IP address; Location of RabbitMQ server
--verbose Set to 1 to turn on; Optional but helpful during initial setup
--network_manager Configures how your controller will communicate with additional OpenStack Compute nodes and virtual machines (see the example after this table). Options: nova.network.manager.FlatManager (simple, non-VLAN networking); nova.network.manager.FlatDHCPManager (flat networking with DHCP); nova.network.manager.VlanManager (VLAN networking with DHCP; this is the default if no network manager is defined here in nova.conf). @@ -457,47 +471,47 @@ for n in node1 node2 node3; do
--fixed_range IP address/range; Network prefix for the IP network that all the projects for future VM guests reside on. Example: 192.168.0.0/12
--ec2_host IP address; Indicates where the nova-api service is installed.
--ec2_url URL; Indicates the service for EC2 requests.
--osapi_host IP address; Indicates where the nova-api service is installed.
--network_size Number value; Number of addresses in each private subnet.
--glance_api_servers IP and port; Address for Image Service.
--use_deprecated_auth If this flag is present, the Cactus method of authentication is used with the novarc file containing credentials.
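To make the --network_manager choice from the table concrete, selecting flat networking with DHCP rather than the VLAN default would mean a line like the following in nova.conf; this is only an illustration, and the example files later in this chapter show the flag in context:
--network_manager=nova.network.manager.FlatDHCPManager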
- Here is a simple example nova.conf file for a small private cloud, - with all the cloud controller services, database server, and messaging - server on the same server. + Here is a simple example nova.conf file for a + small private cloud, with all the cloud controller services, database + server, and messaging server on the same server. --dhcpbridge_flagfile=/etc/nova/nova.conf @@ -531,18 +545,19 @@ for n in node1 node2 node3; do Create a “nova” group, so you can set permissions on the configuration file: - sudo addgroup nova + $ sudo addgroup nova - The nova.config file should have its owner set to root:nova, and - mode set to 0640, since the file contains your MySQL server’s username - and password. You also want to ensure that the nova user belongs to the - nova group. + The nova.conf file should have its owner + set to root:nova, and mode set to + 0640, since the file contains your MySQL server’s + username and password. You also want to ensure that the + nova user belongs to the + nova group. - -sudo usermod -g nova nova -chown -R root:nova /etc/nova -chmod 640 /etc/nova/nova.conf - + $ sudo usermod -g nova nova +$ chown -R root:nova /etc/nova +$ chmod 640 /etc/nova/nova.conf +
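A quick way to confirm the ownership and mode you just set is to list the file; the size and timestamp below are placeholders, but the permissions and owner should correspond to 0640 and root:nova:
$ ls -l /etc/nova/nova.conf
-rw-r----- 1 root nova 1250 2011-02-03 06:54 /etc/nova/nova.conf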
@@ -551,41 +566,40 @@ chmod 640 /etc/nova/nova.conf These are the commands you run to ensure the database schema is current, and then set up a user and project, if you are using built-in - auth with the --use_deprecated_auth flag - rather than the Identity Service: + auth with the --use_deprecated_auth flag rather than + the Identity Service: - -nova-manage db sync -nova-manage user admin <user_name> -nova-manage project create <project_name> <user_name> -nova-manage network create <network-label> <project-network> <number-of-networks-in-project> <addresses-in-each-network> - + $ nova-manage db sync +$ nova-manage user admin <user_name> +$ nova-manage project create <project_name> <user_name> +$ nova-manage network create <network-label> <project-network> <number-of-networks-in-project> <addresses-in-each-network> + Here is an example of what this looks like with real values entered: - -nova-manage db sync -nova-manage user admin dub -nova-manage project create dubproject dub -nova-manage network create novanet 192.168.0.0/24 1 256 - + $ nova-manage db sync +$ nova-manage user admin dub +$ nova-manage project create dubproject dub +$ nova-manage network create novanet 192.168.0.0/24 1 256 - For this example, the number of IPs is /24 since that falls inside - the /16 range that was set in ‘fixed-range’ in nova.conf. Currently, - there can only be one network, and this set up would use the max IPs - available in a /24. You can choose values that let you use any valid - amount that you would like. + For this example, the number of IPs is /24 + since that falls inside the /16 range that was set in + fixed_range in nova.conf. + Currently, there can only be one network, and this setup would use the + maximum number of IPs available in a /24. You can choose values + that let you use any valid number of addresses that you would like. The nova-manage service assumes that the first IP address is your network (like 192.168.0.0), that the 2nd IP is your gateway (192.168.0.1), and that the broadcast is the very last IP in the range you defined (192.168.0.255). If this is not the case you will need to - manually edit the sql db ‘networks’ table.o. + manually edit the sql db networks table. - When you run the nova-manage network create command, entries are - made in the ‘networks’ and ‘fixed_ips’ table. However, one of the - networks listed in the ‘networks’ table needs to be marked as bridge in + When you run the nova-manage network create + command, entries are made in the networks and + fixed_ips tables. However, one of the networks listed + in the networks table needs to be marked as bridge in order for the code to know that a bridge exists. The network in the Nova networks table is marked as bridged automatically for Flat Manager. @@ -598,111 +612,111 @@ nova-manage network create novanet 192.168.0.0/24 1 256 will use to launch instances, bundle images, and all the other assorted API functions. - -mkdir –p /root/creds -/usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip - + # mkdir -p /root/creds +# nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip If you are using one of the Flat modes for networking, you may see - a Warning message "No vpn data for project <project_name>" which - you can safely ignore. + a Warning message "No vpn data for project + <project_name>" which you can + safely ignore. Unzip them in your home directory, and add them to your environment.
- -unzip /root/creds/novacreds.zip -d /root/creds/ -cat /root/creds/novarc >> ~/.bashrc -source ~/.bashrc - + # unzip /root/creds/novacreds.zip -d /root/creds/ +# cat /root/creds/novarc >> ~/.bashrc +# source ~/.bashrc If you already have Nova credentials present in your environment, you can use a script included with Glance, the Image Service, - tools/nova_to_os_env.sh, to create Glance-style credentials. This script - adds OS_AUTH credentials to the environment which are used by the Image - Service to enable private images when the Identity Service is configured - as the authentication system for Compute and the Image Service. + tools/nova_to_os_env.sh, to create Glance-style + credentials. This script adds OS_AUTH credentials to + the environment, which are used by the Image Service to enable private + images when the Identity Service is configured as the authentication + system for Compute and the Image Service.
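To confirm that the credentials were actually picked up by your shell, you can look for the EC2 variables that novarc exports; the variable names shown are the usual ones from novarc, but check them against your own file:
# env | grep -E 'EC2_ACCESS_KEY|EC2_SECRET_KEY|EC2_URL'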
Enabling Access to VMs on the Compute Node One of the most commonly missed configuration areas is not - allowing the proper access to VMs. Use the ‘euca-authorize’ command to - enable access. Below, you will find the commands to allow 'ping' and - 'ssh' to your VMs : + allowing the proper access to VMs. Use the + nova secgroup-add-rule command to enable access. Below, you + will find the commands to allow ping and + ssh to your VMs: These commands need to be run as root only if the credentials - used to interact with nova-api have been put under /root/.bashrc. If - the EC2 credentials have been put into another user's .bashrc file, - then, it is necessary to run these commands as the user. + used to interact with nova-api have been put under + /root/.bashrc. If the EC2 credentials have been + put into another user's .bashrc file, then it is + necessary to run these commands as that user. - - nova secgroup-add-rule default icmp - 1 -1 0.0.0.0/0 - nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - + $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 +$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - Another common issue is you cannot ping or SSH your instances - after issuing the ‘euca-authorize’ commands. Something to look at is the - amount of ‘dnsmasq’ processes that are running. If you have a running - instance, check to see that TWO "dnsmasq’" processes are running. If - not, perform the following: + Another common issue is that you cannot ping or SSH to your instances + after adding the security group rules. Something + to look at is the number of dnsmasq processes that + are running. If you have a running instance, check to see that TWO + dnsmasq processes are running. If not, perform the + following: - -sudo killall dnsmasq -sudo service nova-network restart - + $ sudo killall dnsmasq +$ sudo service nova-network restart - If you get the instance not found - message while performing the restart, that means the service was not - previously running. You simply need to start it instead of restarting it - : sudo service nova-network start + If you get the instance not found message while + performing the restart, that means the service was not previously + running. You simply need to start it instead of restarting it: + + $ sudo service nova-network start
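If you want a quick way to count the dnsmasq processes mentioned above, pgrep can do it; the expected count of 2 assumes the single-network setup described in this chapter:
$ pgrep -c dnsmasq
2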
Configuring Multiple Compute Nodes If your goal is to split your VM load across more than one server, - you can connect an additional nova-compute node to a cloud controller - node. This configuring can be reproduced on multiple compute servers to - start building a true multi-node OpenStack Compute cluster. + you can connect an additional nova-compute node to a + cloud controller node. This configuration can be reproduced on multiple + compute servers to start building a true multi-node OpenStack Compute + cluster. To build out and scale the Compute platform, you spread out services amongst many servers. While there are additional ways to accomplish the build-out, this section describes adding compute nodes, - and the service we are scaling out is called 'nova-compute.' + and the service we are scaling out is called + nova-compute. - For a multi-node install you only make changes to nova.conf and - copy it to additional compute nodes. Ensure each nova.conf file points - to the correct IP addresses for the respective services. Customize the - nova.conf example below to match your environment. The CC_ADDR is the - Cloud Controller IP Address. + For a multi-node install, you only make changes to + nova.conf and copy it to additional compute nodes. + Ensure each nova.conf file points to the correct IP + addresses for the respective services. Customize the + nova.conf example below to match your environment. + The CC_ADDR is the Cloud + Controller IP Address. - ---dhcpbridge_flagfile=/etc/nova/nova.conf + --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --flat_network_bridge=br100 --logdir=/var/log/nova --state_path=/var/lib/nova --verbose ---sql_connection=mysql://root:nova@CC_ADDR/nova ---s3_host=CC_ADDR ---rabbit_host=CC_ADDR ---ec2_api=CC_ADDR +--sql_connection=mysql://root:nova@CC_ADDR/nova +--s3_host=CC_ADDR +--rabbit_host=CC_ADDR +--ec2_api=CC_ADDR --ec2_url=http://CC_ADDR:8773/services/Cloud --network_manager=nova.network.manager.FlatManager --fixed_range= network/CIDR ---network_size=number of addresses - +--network_size=number of addresses By default, Nova sets the bridge device based on the setting in - --flat_network_bridge. Now you can edit /etc/network/interfaces with the - following template, updated with your IP information. + --flat_network_bridge. Now you can edit + /etc/network/interfaces with the following + template, updated with your IP information. - -# The loopback network interface + # The loopback network interface auto lo iface lo inet loopback @@ -713,52 +727,51 @@ iface br100 inet static bridge_stp off bridge_maxwait 0 bridge_fd 0 - address xxx.xxx.xxx.xxx - netmask xxx.xxx.xxx.xxx - network xxx.xxx.xxx.xxx - broadcast xxx.xxx.xxx.xxx - gateway xxx.xxx.xxx.xxx + address xxx.xxx.xxx.xxx + netmask xxx.xxx.xxx.xxx + network xxx.xxx.xxx.xxx + broadcast xxx.xxx.xxx.xxx + gateway xxx.xxx.xxx.xxx # dns-* options are implemented by the resolvconf package, if installed - dns-nameservers xxx.xxx.xxx.xxx + dns-nameservers xxx.xxx.xxx.xxx Restart networking: - /etc/init.d/networking restart + $ sudo service networking restart - With nova.conf updated and networking set, configuration is nearly - complete. First, bounce the relevant services to take the latest - updates: + With nova.conf updated and networking set, + configuration is nearly complete.
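Before restarting the Compute services, you can confirm that the br100 bridge defined in the interfaces template actually came up; brctl is part of the bridge-utils package, and the bridge id and attached interface in this output are purely illustrative:
$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.001122334455       no              eth0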
First, bounce the relevant services to + take the latest updates: - restart libvirt-bin; service nova-compute restart + $ sudo service libvirtd restart +$ sudo service nova-compute restart To avoid issues with KVM and permissions with Nova, run the following commands to ensure we have VMs that are running optimally: - -chgrp kvm /dev/kvm -chmod g+rwx /dev/kvm - + # chgrp kvm /dev/kvm +# chmod g+rwx /dev/kvm If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into - delays with booting. Any server that does not have nova-api running on - it needs this iptables entry so that UEC images can get metadata info. - On compute nodes, configure the iptables with this next step: + delays with booting. Any server that does not have + nova-api running on it needs this iptables entry so + that UEC images can get metadata info. On compute nodes, configure the + iptables with this next step: - iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773 + # iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773 Lastly, confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query: - mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;' + $ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;' In return, you should see something similar to this: - -+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ + +---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ | created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone | +---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ | 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova | @@ -767,21 +780,21 @@ chmod g+rwx /dev/kvm | 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova | | 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova | | 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova | -+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ - ++---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ - You can see that 'osdemo0{1,2,4,5} are all running 'nova-compute.' - When you start spinning up instances, they will allocate on any node - that is running nova-compute from this list. + You can see that osdemo0{1,2,4,5} are all + running nova-compute. When you start spinning up + instances, they will be allocated on any node that is running + nova-compute from this list.
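Depending on your release, nova-manage can report the same service information without a raw SQL query; the output format varies between versions, so the line below is only an illustration of its shape:
$ nova-manage service list
nova-compute   osdemo01   nova   enabled   :-)   2011-02-03 06:54:26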
Determining the Version of Compute You can find the version of the installation by using the - nova-manage command: + nova-manage command: - nova-manage version list + $ nova-manage version list
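The output is a single version string; its exact form depends on the release and packaging, so the line below is only an example of what it looks like:
$ nova-manage version list
2011.3 (2011.3-LOCALBRANCH:LOCALREVISION)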