Adding new install script plus changes to multinode install doc

Anne Gentle
2010-12-17 16:53:27 -06:00
parent 1658d4a419
commit 26c3941556


..
Copyright 2010 United States Government as represented by the
Administrator of the National Aeronautics and Space Administration.
Copyright 2010 OpenStack LLC
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
under the License.
Installing Nova on Multiple Servers
===================================
When you move beyond evaluating the technology and into building an actual
production environment, you will need to know how to configure your datacenter
and how to deploy components across your clusters. This guide should help you
through that process.
You can install multiple nodes to increase performance and availability of the OpenStack Compute installation.
This setup is based on an Ubuntu Lucid 10.04 installation with the latest updates. Most of this works around issues that need to be resolved in the installation and configuration scripts as of October 18th 2010. It also needs to eventually be generalized, but the intent here is to get the multi-node configuration bootstrapped so folks can move forward.
Requirements for a multi-node installation
------------------------------------------
* You need a real database, compatible with SQLAlchemy (MySQL, PostgreSQL). There's no specific reason to choose one over the other; it basically depends on what you know. MySQL is easier to do High Availability (HA) with, but people may already know PostgreSQL. We should document both configurations, though.
* For a recommended HA setup, consider MySQL master/slave replication, with as many slaves as you like, and probably a heartbeat to kick one of the slaves into being a master if it dies; a minimal replication sketch follows this list.
* For performance optimization, split reads and writes to the database. MySQL proxy is the easiest way to make this work if running MySQL.
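The replication recommendation above boils down to standard MySQL master/slave configuration. Here is a minimal sketch of the my.cnf pieces involved, assuming one master and one slave; the server-id values and binlog path are illustrative only:
::
# master: /etc/mysql/my.cnf
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
# each slave: /etc/mysql/my.cnf (use a unique server-id per slave)
[mysqld]
server-id = 2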
Assumptions
-----------
* Networking is configured between/through the physical machines on a single subnet.
* Installation and execution are both performed by the root user.
Step 1 - Use apt-get to get the latest code
-------------------------------------------
1. Setup Nova PPA with https://launchpad.net/~nova-core/+archive/trunk. The python-software-properties package is a pre-requisite for setting up the nova package repo:
::
apt-get -y install python-software-properties
add-apt-repository ppa:nova-core/trunk
2. Update apt-get:
::
apt-get update
3. Install nova-packages (dependencies should be automatically installed).
::
apt-get -y install bzr nova-common nova-doc python-mysqldb python-greenlet python-nova nova-api nova-network nova-objectstore nova-scheduler nova-compute unzip vim euca2ools rabbitmq-server dnsmasq open-iscsi kpartx kvm gawk iptables ebtables user-mode-linux libvirt-bin screen iscsitarget vlan curl python-twisted python-sqlalchemy python-mox python-carrot python-daemon python-eventlet python-gflags python-libvirt python-libxml2 python-routes
It is highly likely that there will be errors when the nova services come up since they are not yet configured. Don't worry, you're only at step 1!
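To confirm the packages landed before moving on, a quick, optional check (nothing here is required by the install itself):
::
# list the nova packages that were just installed
dpkg -l | grep nova
# the packages start the services immediately; errors are expected until nova.conf is configured
ps aux | grep nova-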
Step 2 - Setting up nova.conf (installed in /etc/nova)
---------------------------------------------------------
1. Nova development has consolidated all config files to nova.conf as of November 2010. There is a default set of options that are already configured in nova.conf:
::
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
The following items ALSO need to be defined in /etc/nova/nova.conf. I've added some explanation of the variables here, as comments CANNOT be in nova.conf itself. There seems to be an issue with nova-manage not processing the comments/whitespace correctly:
--sql_connection ### Location of Nova SQL DB
--s3_host ### This is where Nova is hosting the objectstore service, which will contain the VM images and buckets
--rabbit_host ### This is where the rabbit AMQP messaging service is hosted
--cc_host ### This is where the nova-api service lives
--verbose ### Optional but very helpful during initial setup
--ec2_url ### The URL where the nova-api EC2 interface can be reached
--network_manager ### Many options here, discussed below. This is how your controller will communicate with additional Nova nodes and VMs:
nova.network.manager.FlatManager # Simple, no-vlan networking type
nova.network.manager.FlatDHCPManager # Flat networking with DHCP
nova.network.manager.VlanManager # VLAN networking with DHCP; the DEFAULT if no network manager is defined in nova.conf
--fixed_range=<network/prefix> ### This will be the IP network that ALL the projects for future VM guests will reside on. E.g. 192.168.0.0/12
--network_size=<# of addrs> ### This is the total number of IP Addrs to use for VM guests, of all projects. E.g. 5000
The following code can be cut and pasted, then edited for your setup:
## Note: CC_ADDR=<the external IP address of your cloud controller> ##
## A detailed explanation of the following entries is right above this ##
::
--sql_connection=mysql://root:nova@<CC_ADDR>/nova
--s3_host=<CC_ADDR>
--rabbit_host=<CC_ADDR>
--cc_host=<CC_ADDR>
--verbose
--ec2_url=http://<CC_ADDR>:8773/services/Cloud
--network_manager=nova.network.manager.VlanManager
--fixed_range=<network/prefix>
--network_size=<# of addrs>
2. Create a “nova” group, and set permissions:
::
addgroup nova
The Nova config file should have its owner set to root:nova, and mode set to 0644, since it contains your MySQL server's root password.
::
chown -R root:nova /etc/nova
chmod 644 /etc/nova/nova.conf
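To make the pieces above concrete, here is one possible complete /etc/nova/nova.conf. It assumes a hypothetical cloud controller at 192.168.0.10 and reuses the example 192.168.0.0/12 fixed range and network size of 5000 from above; substitute your own values, and remember that comments cannot go in this file:
::
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--sql_connection=mysql://root:nova@192.168.0.10/nova
--s3_host=192.168.0.10
--rabbit_host=192.168.0.10
--cc_host=192.168.0.10
--verbose
--ec2_url=http://192.168.0.10:8773/services/Cloud
--network_manager=nova.network.manager.VlanManager
--fixed_range=192.168.0.0/12
--network_size=5000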
Step 3 - Setup the SQL DB (MySQL for this setup)
------------------------------------------------
1. First you 'preseed' to bypass all the installation prompts
::
bash
MYSQL_PASS=nova
cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED
2. Install MySQL:
::
apt-get install -y mysql-server
3. Edit /etc/mysql/my.cnf to change bind-address from localhost to any:
::
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
4. Network Configuration
If you use FlatManager (as opposed to the VlanManager we set above) as your network manager, there are some additional networking changes you'll have to make to ensure connectivity between your nodes and VMs. If you chose VlanManager or FlatDHCPManager, you may skip this section, as it's set up for you automatically.
Nova defaults to a bridge device named 'br100'. This needs to be created and somehow integrated into YOUR network. To keep things as simple as possible, have all the VM guests on the same network as the VM hosts (the compute nodes). To do so, set the compute node's external IP address to be on the bridge and add eth0 to that bridge. To do this, edit your network interfaces config to look like the following
::
< begin /etc/network/interfaces >
# The loopback network interface
auto lo
iface lo inet loopback
# Networking for NOVA
auto br100
iface br100 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
< end /etc/network/interfaces >
Next, restart networking to apply the changes::
sudo /etc/init.d/networking restart
5. MySQL DB configuration:
Create NOVA database:
::
mysql -uroot -p$MYSQL_PASS -e 'CREATE DATABASE nova;'
Update the DB to include user 'root'@'%' with super user privileges
::
mysql -uroot -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;"
Set the MySQL root password
::
mysql -uroot -p$MYSQL_PASS -e "SET PASSWORD FOR 'root'@'%' = PASSWORD('$MYSQL_PASS');"
Step 4 - Setup Nova environment
-------------------------------
::
/usr/bin/python /usr/bin/nova-manage user admin <user_name>
/usr/bin/python /usr/bin/nova-manage project create <project_name> <user_name>
/usr/bin/python /usr/bin/nova-manage network create <project-network> <number-of-networks-in-project> <IPs in project>
Here is an example of what this looks like with real data:
::
/usr/bin/python /usr/bin/nova-manage user admin dub
/usr/bin/python /usr/bin/nova-manage project create dubproject dub
/usr/bin/python /usr/bin/nova-manage network create 192.168.0.0/24 1 255
(I chose a /24 since that falls inside my /12 range I set in fixed-range in nova.conf. Currently, there can only be one network, and I am using the max IPs available in a /24. You can choose to use any valid amount that you would like.)
Note: The nova-manage service assumes that the first IP address is your network (like 192.168.0.0), that the 2nd IP is your gateway (192.168.0.1), and that the broadcast is the very last IP in the range you defined (192.168.0.255). If this is not the case you will need to manually edit the SQL db 'networks' table.
On running this command, entries are made in the 'networks' and 'fixed_ips' table. However, one of the networks listed in the 'networks' table needs to be marked as bridge in order for the code to know that a bridge exists. The Network is marked as bridged automatically based on the type of network manager selected. This is ONLY necessary if you chose FlatManager as your network type. More information can be found at the end of this document discussing setting up the bridge device.
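If you want to see what nova-manage wrote, or need to hand-edit it as described above, you can inspect the 'networks' table directly. This assumes the MySQL root password of 'nova' set earlier:
::
mysql -uroot -pnova nova -e 'SELECT * FROM networks;'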
Step 5 - Create Nova certs
--------------------------
1. Generate the certs as a zip file. These are the certs you will use to launch instances, bundle images, and all the other assorted api functions:
::
mkdir -p /root/creds
/usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip
2. Unzip them in your home directory, and add them to your environment:
::
unzip /root/creds/novacreds.zip -d /root/creds/
cat /root/creds/novarc >> ~/.bashrc
source ~/.bashrc
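As an optional check that the credentials were picked up, you can look for the variables that novarc exports; the exact names come from the generated file, so treat this grep pattern as an approximation:
::
env | grep -E 'EC2|S3|NOVA'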
Step 6 - Restart all relevant services
--------------------------------------
Restart all six services in total, just to cover the entire spectrum:
::
service libvirt-bin restart; service nova-network restart; service nova-compute restart; service nova-api restart; service nova-objectstore restart; service nova-scheduler restart
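A quick, optional way to confirm the daemons came back up is to look for the processes and check the log directory configured earlier (/var/log/nova):
::
ps aux | grep nova-
ls /var/log/nova/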
Step 7 - Closing steps, and cleaning up
---------------------------------------
One of the most commonly missed configuration areas is not allowing the proper access to VMs. Use the 'euca-authorize' command to enable access. Below, you will find the commands to allow 'ping' and 'ssh' to your VMs:
::
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
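To double-check that the rules took effect, euca2ools can list your security groups; this assumes the novarc credentials from Step 5 are still loaded in your shell:
::
euca-describe-groups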
Another common issue is that you cannot ping or SSH to your instances after issuing the 'euca-authorize' commands. Something to look at is the number of 'dnsmasq' processes that are running. If you have a running instance, check to see that TWO 'dnsmasq' processes are running. If not, perform the following:
::
killall dnsmasq
service nova-network restart
Step 8 - Testing the installation
---------------------------------
You can then use `euca2ools` to test some items:
::
euca-describe-images
euca-describe-instances
If you have issues with the API key, you may need to re-source your creds file:
::
. /root/creds/novarc
If you don't get any immediate errors, you're successfully making calls to your cloud!
The next thing you are going to need is an image to test. There will soon be an update on how to capture an image and use it as a bootable AMI so you can ping, ssh, show instances spinning up, etc.
Enjoy your new private cloud, and play responsibly!
.. todo:: do we still need the content below?
Bare-metal Provisioning Notes
-----------------------------
To install the base operating system you can use PXE booting.
Types of Hosts
--------------
A single machine in your cluster can act as one or more of the following types
of host:
Nova Services
* Network
* Compute
* Volume
* API
* Objectstore
Other supporting services
* Message Queue
* Database (optional)
* Authentication database (optional)
Initial Setup
-------------
* Networking
* Cloudadmin User Creation
Deployment Technologies
-----------------------
Once you have machines with a base operating system installation, you can deploy
code and configuration with your favorite tools to specify which machines in
your cluster have which roles:
* Puppet
* Chef