diff --git a/doc/glossary/glossary-terms.xml b/doc/glossary/glossary-terms.xml index 4e5189320e..49776e07fa 100644 --- a/doc/glossary/glossary-terms.xml +++ b/doc/glossary/glossary-terms.xml @@ -906,6 +906,13 @@ storage services for VMs. + + CirrOS + + A minimal Linux distribution designed for use as a test + image on clouds such as OpenStack. + + Cisco neutron plug-in @@ -1876,6 +1883,13 @@ + + external network + + A network segment typically used for instance Internet + access. + + extra specs @@ -2523,6 +2537,13 @@ The current state of a guest VM image. + + instance tunnels network + + A network segment used for instance traffic tunnels + between compute nodes and the network node. + + instance type @@ -2811,6 +2832,14 @@ requests evenly between designated instances. + + Logical Volume Manager (LVM) + + Provides a method of allocating space on mass-storage + devices that is more flexible than conventional + partitioning schemes. + + @@ -3573,6 +3602,14 @@ Alternative term for a cloudpipe. + + promiscuous mode + + Causes the network interface to pass all traffic it + receives to the host rather than passing only the frames + addressed to it. + + provider diff --git a/doc/install-guide/ch_basics.xml b/doc/install-guide/ch_basics.xml index 26b43f4e93..bfa3f65bf0 100644 --- a/doc/install-guide/ch_basics.xml +++ b/doc/install-guide/ch_basics.xml @@ -2,659 +2,36 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_basics"> - Basic operating system configuration + Basic environment configuration We are updating this material for Icehouse. You may find structure and/or content issues during this process. - This guide shows you how to create a controller node to host most - services and a compute node to run virtual machine instances. Subsequent - chapters create additional nodes to run more services. OpenStack is flexible - about how and where you run each service, so other configurations are - possible. However, you must configure certain operating system settings on - each node. + This chapter explains how to configure each node in the + example architectures + including the + two-node architecture with legacy networking and + three-node + architecture with OpenStack Networking (neutron). - You can install OpenStack Object Storage with OpenStack Identity as a - starting point rather than installing OpenStack Compute. You cannot use - the OpenStack dashboard unless you also install Compute and the Image - Service. If object storage is your use case, you can skip these operating - system configuration requirements and refer to instead. + Although most environments include OpenStack Identity, Image Service, + Compute, at least one networking service, and the dashboard, OpenStack + Object Storage can operate independently of most other services. If your + use case only involves Object Storage, you can skip to + . However, the + dashboard will not work without at least OpenStack Image Service and + Compute. - This chapter details a sample configuration for the controller node and - any additional nodes. You can configure the operating system in other ways, - but this guide assumes that your configuration is compatible with the one - described here. - All example commands assume you have administrative privileges. Either - run the commands as the root user or prefix them with the - sudo command. - - -
- Before you begin
- 
- We strongly recommend that you install a 64-bit operating system on
- your compute nodes. If you use a 32-bit operating system, attempting to
- start a virtual machine using a 64-bit image will fail with an
- error.
- 
- For more information about system requirements, see the OpenStack Operations
- Guide.
-
- Networking - For an OpenStack production deployment, most nodes must have these - network interface cards: - - - One network interface card for external network traffic. - - - Another card to communicate with other OpenStack nodes. - - - For simple test cases, you can use machines with a single network - interface card. - The following example configures Networking on two networks with - static IP addresses - and manually manages a list of host names on each machine. If you manage a - large network, you might already have systems in place to manage this. If - so, you can skip this section but note that the rest of this guide assumes - that each node can reach the other nodes on the internal network by using - the controller and compute1 host - names. - - Disable the NetworkManager service and enable the network service. The network service is more suitable for the - static network configuration done in this guide. - - # service NetworkManager stop -# service network start -# chkconfig NetworkManager off -# chkconfig network on - - Since Fedora 19, firewalld replaces - iptables as the default firewall system. - You can use firewalld successfully, but this - guide recommends and demonstrates the use of the default - iptables. - For Fedora 19 systems, run the following commands to disable - firewalld and enable - iptables: - # service firewalld stop -# service iptables start -# chkconfig firewalld off -# chkconfig iptables on - - - RHEL and derivatives including CentOS and Scientific Linux enable a - restrictive firewall by default. During this - installation, certain steps will fail unless you alter this setting or - disable the firewall. For further information about securing your - installation, refer to the OpenStack Security - Guide. - - When you set up your system, use the traditional - network scripts and do not use NetworkManager. You can change the settings after - installation with the YaST network module: - # yast2 network - Configure both eth0 and eth1. - The examples in this guide use the - 192.168.0.x IP addresses - for the internal network and the - 10.0.0.x IP addresses - for the external network. Make sure to connect your network devices to the - correct network. - In this guide, the controller node uses the - 192.168.0.10 and 10.0.0.10 IP - addresses. When you create the compute node, use the - 192.168.0.11 and 10.0.0.11 - addresses instead. Additional nodes that you add in subsequent chapters - also follow this pattern. -
- Basic architecture - - - - - -
- - <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> - # Internal Network -DEVICE=eth0 -TYPE=Ethernet -BOOTPROTO=static -IPADDR=192.168.0.10 -NETMASK=255.255.255.0 -DEFROUTE=yes -ONBOOT=yes - - - <filename>/etc/sysconfig/network-scripts/ifcfg-eth1</filename> - # External Network -DEVICE=eth1 -TYPE=Ethernet -BOOTPROTO=static -IPADDR=10.0.0.10 -NETMASK=255.255.255.0 -DEFROUTE=yes -ONBOOT=yes - - To configure the network interfaces, start the YaST - network module, as follows: - # yast2 network - - - Use these parameters to set up the eth0 - Ethernet card for the internal network: - Statically assigned IP Address -IP Address: 192.168.0.10 -Subnet Mask: 255.255.255.0 - - - Use these parameters to set up the eth1 - Ethernet card for the external network: - Statically assigned IP Address -IP Address: 10.0.0.10 -Subnet Mask: 255.255.255.0 - - - Set up a default route on the external network. - - - - - <filename>/etc/network/interfaces</filename> - # Internal Network -auto eth0 -iface eth0 inet static - address 192.168.0.10 - netmask 255.255.255.0 - -# External Network -auto eth1 -iface eth1 inet static - address 10.0.0.10 - netmask 255.255.255.0 - - After you configure the network, restart the daemon for changes to - take effect: - # service networking restart - # service network restart - Set the host name of each machine. Name the controller node - controller and the first compute node - compute1. The examples in this guide use these host - names. - Use the - hostname command to set the host name: - # hostname controller - Use yast network to set the host - name with YaST. - To have the host name change persist when the - system reboots, you must specify it in the proper configuration file. In - Red Hat Enterprise Linux, CentOS, and older versions of Fedora, you set - this in the file /etc/sysconfig/network. Change the - line starting with HOSTNAME=. - HOSTNAME=controller - As of Fedora 18, Fedora uses the - /etc/hostname file, which contains a single line - with the host name. - To configure this host name to be available when - the system reboots, you must specify it in the - /etc/hostname file, which contains a single line - with the host name. - Finally, ensure that each node can reach the other nodes by using host - names. You must manually edit the /etc/hosts file on - each system. For large-scale deployments, use DNS or a configuration - management system like Puppet. - 127.0.0.1 localhost -192.168.0.10 controller -192.168.0.11 compute1 -
-
- Network Time Protocol (NTP) - To synchronize services across multiple machines, you must install - NTP. The - examples in this guide configure the controller node as the reference - server and any additional nodes to set their time from the controller - node. - Install the ntp package on each system running - OpenStack services. - # apt-get install ntp - # yum install ntp - # zypper install ntp - Set up the NTP server on your - controller node so that it receives data by modifying the - ntp.conf file and restarting the service. - # service ntpd start -# chkconfig ntpd on - # service ntp start -# chkconfig ntp on - On additional nodes, it is advised that you configure the other nodes - to synchronize their time from the controller node rather than from - outside of your LAN. To do so, install the ntp daemon as above, then edit - /etc/ntp.conf and change the server - directive to use the controller node as internet time source. -
-
- Passwords - The various OpenStack services and the required software like the - database and the messaging server have to be password protected. You use - these passwords when configuring a service and then again to access the - service. You have to choose a password while configuring the service and - later remember to use the same password when accessing it. Optionally, you - can generate random passwords with the pwgen - program. Or, to create passwords one at a time, use the output of this - command repeatedly: - $ openssl rand -hex 10 - - This guide uses the convention that - SERVICE_PASS is password - to access the service SERVICE and - SERVICE_DBPASS is the - database password used by the service SERVICE to access the database. - The complete list of passwords you need to define in this guide are: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Passwords
Password nameDescription
Database password (no variable used)Root password for the database
RABBIT_PASSPassword of user guest of RabbitMQ
KEYSTONE_DBPASSDatabase password of Identity service
ADMIN_PASSPassword of user admin
GLANCE_DBPASSDatabase password for Image Service
GLANCE_PASSPassword of Image Service user glance
NOVA_DBPASSDatabase password for Compute service
NOVA_PASSPassword of Compute service user nova
DASH_DBPASSDatabase password for the dashboard
CINDER_DBPASSDatabase password for the Block Storage service
CINDER_PASSPassword of Block Storage service user - cinder
NEUTRON_DBPASSDatabase password for the Networking service
NEUTRON_PASSPassword of Networking service user - neutron
HEAT_DBPASSDatabase password for the Orchestration service
HEAT_PASSPassword of Orchestration service user - heat
CEILOMETER_DBPASSDatabase password for the Telemetry service
CEILOMETER_PASSPassword of Telemetry service user - ceilometer
-
-
- -
- - MySQL database - Most OpenStack services require - a database to store information. These examples use a MySQL database that - runs on the controller node. You must install the MySQL database on the - controller node. You must install MySQL client software on any additional - nodes that access MySQL. - Most OpenStack services require a database to store - information. This guide uses a MySQL database on SUSE Linux Enterprise - Server and a compatible database on openSUSE running on the controller - node. This compatible database for openSUSE is MariaDB. You must install - the MariaDB database on the controller node. You must install the MariaDB - client software on any nodes that access the MariaDB database. - -
- Controller setup - For SUSE Linux Enterprise Server: On the - controller node, install the MySQL client and server packages, and the - Python library. - # zypper install mysql-client mysql python-mysql - For openSUSE: On the controller node, install the - MariaDB client and database server packages, and the MySQL Python - library. - # zypper install mariadb-client mariadb python-mysql - # apt-get install python-mysqldb mysql-server - # yum install mysql mysql-server MySQL-python - - When you install the server package, you are prompted for the root - password for the database. Choose a strong password and remember - it. - - The MySQL configuration requires some changes to work with - OpenStack. - - - Edit the - /etc/mysql/my.cnf file: - Edit the - /etc/my.cnf file: - - - Under the [mysqld] section, set the - bind-address key to the management IP - address of the controller node to enable access by other nodes - via the management network: - [mysqld] -... -bind-address = 192.168.0.10 - - - Under the [mysqld] section, set the - following keys to enable InnoDB, UTF-8 character set, and - UTF-8 collation by default: - [mysqld] -... -default-storage-engine = innodb -collation-server = utf8_general_ci -init-connect = 'SET NAMES utf8' -character-set-server = utf8 - - - - - Restart the MySQL service to apply the - changes: - # service mysql restart - Start the MySQL - MariaDB or MySQL database server and - set it to start automatically when the system boots. - # service mysqld start -# chkconfig mysqld on - # service mysql start -# chkconfig mysql on - Finally, you should set a root - password for your MySQL - MariaDB or MySQL database. The - OpenStack programs that set up databases and tables prompt you for this - password if it is set. - You must delete - the anonymous users that are created when the database is first started. - Otherwise, database connection problems occur when you follow the - instructions in this guide. To do this, use the - mysql_secure_installation command. Note that if - mysql_secure_installation fails you might need to - use mysql_install_db first: - # mysql_install_db -# mysql_secure_installation - If you have not - already set a root database password, press ENTER - when you are prompted for the password. This command presents - a number of options for you to secure your database installation. - Respond yes to all prompts unless you have a good - reason to do otherwise. -
- -
- Node setup - On all nodes other than the controller node, install the MySQL - MariaDB (on openSUSE) client and the - MySQL Python library on any system that does not host a MySQL - database: - # apt-get install python-mysqldb - # yum install mysql MySQL-python - # zypper install mariadb-client python-mysql - For SUSE Linux Enterprise, install MySQL: - # zypper install mysql-client python-mysql -
-
- -
- OpenStack packages - Distributions might release OpenStack packages as part of their - distribution or through other methods because the OpenStack and - distribution release times are independent of each other. - This section describes the configuration you must complete after you - configure machines to install the latest OpenStack packages. - The examples in this guide use the OpenStack - packages from the RDO repository. These packages work on Red Hat - Enterprise Linux 6, compatible versions of CentOS, and Fedora 20. To - enable the RDO repository, download and install the - rdo-release-icehouse package. - # yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-1.noarch.rpm - The EPEL package includes GPG keys for package - signing and repository information. This should only be installed on Red - Hat Enterprise Linux and CentOS, not Fedora. Install the latest - epel-release package (see http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html). - For example: - # yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm - The openstack-utils package - contains utility programs that make installation and configuration easier. - These programs are used throughout this guide. Install - openstack-utils. This verifies that you can access - the RDO repository. - # yum install openstack-utils - Use the Open Build Service repositories for - Icehouse based on your openSUSE or SUSE Linux - Enterprise Server version, for example if you run openSUSE 12.3 - use: - # zypper addrepo -f obs://Cloud:OpenStack:Icehouse/openSUSE_12.3 Icehouse - For openSUSE 13.1 use: - # zypper addrepo -f obs://Cloud:OpenStack:Icehouse/openSUSE_13.1 Icehouse - If you use SUSE Linux Enterprise Server 11 SP3, use: - # zypper addrepo -f obs://Cloud:OpenStack:Icehouse/SLE_11_SP3 Icehouse - The openstack-utils package - contains utility programs that make installation and configuration easier. - These programs are used throughout this guide. Install - openstack-utils. This verifies that you can access - the Open Build Service repository: - # zypper install openstack-utils - - The openstack-config program in the - openstack-utils package uses - crudini to manipulate configuration files. - However, crudini version 0.3 does not support - multi valued options. See https://bugs.launchpad.net/openstack-manuals/+bug/1269271. As - a work around, you must manually set any multi valued options or the new - value overwrites the previous value instead of creating a new - option. - - The openstack-selinux - package includes the policy files that are required to configure SELinux - during OpenStack installation. Install - openstack-selinux. - # yum install openstack-selinux - Upgrade your system - packages: - # yum upgrade - # zypper refresh -# zypper update - If the upgrade included a new - kernel package, reboot the system to ensure the new kernel is - running: - # reboot - - To use the Ubuntu Cloud Archive for Icehouse - The Ubuntu - Cloud Archive is a special repository that allows you to - install newer releases of OpenStack on the stable supported version of - Ubuntu. 
- - Install the Ubuntu Cloud Archive for - Icehouse: - # apt-get install python-software-properties -# add-apt-repository cloud-archive:icehouse - - - Update the package database, upgrade your system, and reboot for - all changes to take effect: - # apt-get update && apt-get dist-upgrade -# reboot - - - - To use the Debian Wheezy backports archive for Icehouse - The Icehouse release is available only in - Debian Sid (otherwise called Unstable). However, the Debian maintainers - of OpenStack also maintain a non-official Debian repository for - OpenStack containing Wheezy backports. - - Install the Debian Wheezy backport repository Icehouse: - # echo "deb http://archive.gplhost.com/debian icehouse-backports main" >>/etc/apt/sources.list - - - Install the Debian Wheezy OpenStack repository for - Icehouse: - # echo "deb http://archive.gplhost.com/debian icehouse main" >>/etc/apt/sources.list - - - Update the repository database and install the key: - # apt-get update && apt-get install gplhost-archive-keyring - - - Update the package database, upgrade your system, and reboot for - all changes to take effect: - # apt-get update && apt-get dist-upgrade -# reboot - - - Numerous archive.gplhost.com mirrors are available around - the world. All are available with both FTP and HTTP protocols (you should - use the closest mirror). The list of mirrors is available at http://archive.gplhost.com/readme.mirrors. -
-
- Manually install python-argparse - The Debian OpenStack packages are maintained on Debian Sid (also known - as Debian Unstable) - the current development version. Backported packages - run correctly on Debian Wheezy with one caveat: - All OpenStack packages are written in Python. Wheezy uses Python 2.6 - and 2.7, with Python 2.6 as the default interpreter; Sid has only Python - 2.7. There is one packaging change between these two. In Python 2.6, you - installed the python-argparse package separately. In - Python 2.7, this package is installed by default. Unfortunately, in Python - 2.7, this package does not include Provides: python-argparse - directive. - Because the packages are maintained in Sid where the Provides: - python-argparse directive causes an error, and the Debian - OpenStack maintainer wants to maintain one version of the OpenStack - packages, you must manually install the python-argparse - on each OpenStack system that runs Debian Wheezy before you install the - other OpenStack packages. Use the following command to install the - package: - # apt-get install python-argparse - This caveat applies to most OpenStack packages in Wheezy. -
-
- Messaging server - On the controller node, install the messaging queue server. Typically - this is RabbitMQ - - Qpid but Qpid - RabbitMQ - and ZeroMQ (0MQ) are also available. - # apt-get install rabbitmq-server - # zypper install rabbitmq-server - # yum install qpid-cpp-server - - Important security consideration - The rabbitmq-server package configures the - RabbitMQ service to start automatically and creates a - guest user with a default guest - password. The RabbitMQ examples in this guide use the - guest account, though it is strongly advised to - change its default password, especially if you have IPv6 available: by - default the RabbitMQ server enables anyone to connect to it by using - guest as login and password, and with IPv6, it is reachable from the - outside. - To change the default guest password of RabbitMQ: - # rabbitmqctl change_password guest RABBIT_PASS - - Disable Qpid authentication by editing - /etc/qpidd.conf file and changing the - auth option to no. - auth=no - - To simplify configuration, the Qpid examples in this guide do not - use authentication. However, we strongly advise enabling authentication - for production deployments. For more information on securing Qpid refer - to the Qpid Documentation. - After you enable Qpid authentication, you must update the - configuration file of each OpenStack service to ensure that the - qpid_username and qpid_password - configuration keys refer to a valid Qpid username and password, - respectively. - - Start Qpid and set it to start automatically - when the system boots. - # service qpidd start -# chkconfig qpidd on - Start the messaging service and set it to start - automatically when the system boots: - # service rabbitmq-server start -# chkconfig rabbitmq-server on - Congratulations, now you are ready to install OpenStack - services! -
+ + You must use an account with administrative privileges to configure + each node. Either run the commands as the root user + or configure the sudo utility. + + + + + + + + diff --git a/doc/install-guide/section_basics-database.xml b/doc/install-guide/section_basics-database.xml new file mode 100644 index 0000000000..3a04bf3f9c --- /dev/null +++ b/doc/install-guide/section_basics-database.xml @@ -0,0 +1,120 @@ +
+ 
+ Database
+ Most OpenStack
+ services require a database to store information. These examples
+ use a MySQL database that runs on the controller node. Install
+ the MySQL database server on the controller node, and install
+ the MySQL client software on any additional node that accesses
+ MySQL.
+ Most OpenStack services require a
+ database to store information. This guide uses a MySQL database
+ on SUSE Linux Enterprise Server and the compatible MariaDB
+ database on openSUSE, both running on the controller node.
+ Install the database server on the controller node, and install
+ the MariaDB client software on any node that accesses the
+ database.
+ Controller setup
+ For SUSE Linux Enterprise Server:
+ On the controller node, install the MySQL client and
+ server packages, and the MySQL Python library.
+ # zypper install mysql-client mysql python-mysql
+ For openSUSE: On the controller node,
+ install the MariaDB client and database server packages,
+ and the MySQL Python library.
+ # zypper install mariadb-client mariadb python-mysql
+ # apt-get install python-mysqldb mysql-server
+ # yum install mysql mysql-server MySQL-python
+ 
+ When you install the server package, you are prompted
+ for the root password for the database. Choose a strong
+ password and remember it.
+ 
+ The MySQL configuration requires some changes to work with
+ OpenStack.
+ 
+ 
+ Edit the
+ /etc/mysql/my.cnf file:
+ Edit the
+ /etc/my.cnf file:
+ 
+ 
+ Under the [mysqld] section, set the
+ bind-address key to the management IP
+ address of the controller node to enable access by other
+ nodes via the management network:
+ [mysqld]
+...
+bind-address = 10.0.0.11
+ 
+ 
+ Under the [mysqld] section, set the
+ following keys to enable InnoDB, UTF-8 character set, and
+ UTF-8 collation by default:
+ [mysqld]
+...
+default-storage-engine = innodb
+collation-server = utf8_general_ci
+init-connect = 'SET NAMES utf8'
+character-set-server = utf8
+ 
+ 
+ 
+ 
+ Restart the MySQL service to apply
+ the changes:
+ # service mysql restart
+ Start the MariaDB or MySQL database
+ server and set it to start automatically when the system
+ boots.
+ # service mysqld start
+# chkconfig mysqld on
+ # service mysql start
+# chkconfig mysql on
+ Finally, you should
+ set a root password for your MariaDB or MySQL database.
+ The OpenStack programs that set up databases and tables prompt
+ you for this password if it is set.
+ You must
+ delete the anonymous users that are created when the database is
+ first started. Otherwise, database connection problems occur
+ when you follow the instructions in this guide. To do this, use
+ the mysql_secure_installation command.
+ Note that if mysql_secure_installation fails,
+ you might need to use mysql_install_db first:
+ # mysql_install_db
+# mysql_secure_installation
+ If you have
+ not already set a root database password, press
+ ENTER when you are prompted for the
+ password. This command presents a number of options
+ for you to secure your database installation. Respond
+ yes to all prompts unless you have a
+ good reason to do otherwise.
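+ To confirm that the database server listens on the management
+ network rather than only on localhost, you can check the
+ listening sockets (a quick sanity check; 3306 is the default
+ MySQL port):
+ # netstat -ntl | grep 3306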
+
+ Node setup
+ On all nodes other than the controller node, install the
+ MySQL client (the
+ MariaDB client on openSUSE)
+ and the MySQL Python library:
+ # apt-get install python-mysqldb
+ # yum install mysql MySQL-python
+ # zypper install mariadb-client python-mysql
+ For SUSE Linux Enterprise, install the
+ MySQL client:
+ # zypper install mysql-client python-mysql
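+ To verify that a node can reach the database server on the
+ controller node, you can attempt a connection with the client
+ (a minimal check; enter the root password that you set
+ earlier):
+ $ mysql -u root -p -h controller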
+
diff --git a/doc/install-guide/section_basics-networking-neutron.xml b/doc/install-guide/section_basics-networking-neutron.xml new file mode 100644 index 0000000000..568996232c --- /dev/null +++ b/doc/install-guide/section_basics-networking-neutron.xml @@ -0,0 +1,325 @@ +
+ + OpenStack Networking + The example architecture with OpenStack Networking (neutron) requires + one controller node, one network node, and at least one compute node. + The controller node contains one network interface on the + management network. The network node contains + one network interface on the management network, one on the + instance tunnels network, and one on the + external network. The compute node contains + one network interface on the management network and one on the + instance tunnels network. +
+ Three-node architecture with OpenStack Networking + + + + + +
+ Unless you intend to use the exact configuration provided in this
+ example architecture, you must modify the networks in this procedure to
+ match your environment. Also, each node must be able to resolve the
+ other nodes by name in addition to IP address. For example, the
+ controller name must resolve to
+ 10.0.0.11, the IP address of the management
+ interface on the controller node.
+ 
+ Reconfiguring network interfaces will interrupt network
+ connectivity. We recommend using a local terminal session for these
+ procedures.
+ 
+ Controller node
+ 
+ To configure networking:
+ 
+ Configure the management interface, as shown in the example
+ after these procedures:
+ IP address: 10.0.0.11
+ Network mask: 255.255.255.0 (or /24)
+ Default gateway: 10.0.0.1
+ 
+ 
+ 
+ To configure name resolution:
+ 
+ Edit the /etc/hosts file to contain the
+ following:
+ # controller
+10.0.0.11 controller
+
+# network
+10.0.0.21 network
+
+# compute1
+10.0.0.31 compute1
+ 
+ You must remove or comment the line beginning with
+ 127.0.1.1.
+ 
+ 
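+ For example, on Debian or Ubuntu, a static configuration for
+ the management interface in the
+ /etc/network/interfaces file might look like the
+ following sketch (the interface name eth0 is an
+ assumption; adjust the name and addresses for your distribution
+ and environment):
+ # The management network interface
+auto eth0
+iface eth0 inet static
+    address 10.0.0.11
+    netmask 255.255.255.0
+    gateway 10.0.0.1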
+
+ Network node + + To configure networking: + + Configure the management interface: + IP address: 10.0.0.21 + Network mask: 255.255.255.0 (or /24) + Default gateway: 10.0.0.1 + + + Configure the instance tunnels interface: + IP address: 10.0.1.21 + Network mask: 255.255.255.0 (or /24) + + + The external interface uses a special configuration without an + IP address assigned to it. Configure the external interface: + + + Edit the /etc/network/interfaces file + to contain the following: + # The external network interface +auto eth2 +iface eth2 inet manual + up ip link set dev $IFACE up + down ip link set dev $IFACE down + + + Edit the + /etc/sysconfig/network-scripts/ifcfg-eth2 + file to contain the following: + Do not change the HWADDR and + UUID keys. + DEVICE=eth2 +TYPE=Ethernet +ONBOOT="yes" +BOOTPROTO="none" + + + Edit the + /etc/sysconfig/network/ifcfg-eth2 file to + contain the following: + STARTMODE='auto' +BOOTPROTO='static' + + + + + Restart networking: + # service networking stop && service networking start + # service network restart + + + + To configure name resolution: + + Edit the /etc/hosts file to contain the + following: + # network +10.0.0.21 network + +# controller +10.0.0.11 controller + +# compute1 +10.0.0.31 compute1 + + You must remove or comment the line beginning with + 127.0.1.1. + + + +
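+ After restarting networking, you can optionally confirm that
+ the external interface is up and has no IP address assigned
+ (a quick check; eth2 is this example's external
+ interface name):
+ # ip addr show eth2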
+
+ Compute node + + To configure networking: + + Configure the management interface: + IP address: 10.0.0.31 + Network mask: 255.255.255.0 (or /24) + Default gateway: 10.0.0.1 + + Additional compute nodes should use 10.0.0.32, 10.0.0.33, + and so on. + + + + Configure the instance tunnels interface: + IP address: 10.0.1.31 + Network mask: 255.255.255.0 (or /24) + + Additional compute nodes should use 10.0.1.32, 10.0.1.33, + and so on. + + + + + To configure name resolution: + + Edit the /etc/hosts file to contain the + following: + # compute1 +10.0.0.31 compute1 + +# controller +10.0.0.11 controller + +# network +10.0.0.21 network + + You must remove or comment the line beginning with + 127.0.1.1. + + + +
+
+ Verify connectivity
+ We recommend that you verify network connectivity to the internet
+ and among the nodes before proceeding further.
+ 
+ 
+ From the controller node,
+ ping a site on the internet:
+ # ping -c 4 openstack.org
+PING openstack.org (174.143.194.225) 56(84) bytes of data.
+64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
+64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
+64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
+64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
+
+--- openstack.org ping statistics ---
+4 packets transmitted, 4 received, 0% packet loss, time 3022ms
+rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
+ 
+ 
+ From the controller node,
+ ping the management interface on the
+ network node:
+ # ping -c 4 network
+PING network (10.0.0.21) 56(84) bytes of data.
+64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.263 ms
+64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=0.202 ms
+64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.203 ms
+64 bytes from network (10.0.0.21): icmp_seq=4 ttl=64 time=0.202 ms
+
+--- network ping statistics ---
+4 packets transmitted, 4 received, 0% packet loss, time 3000ms
+rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
+ 
+ 
+ From the controller node,
+ ping the management interface on the
+ compute node:
+ # ping -c 4 compute1
+PING compute1 (10.0.0.31) 56(84) bytes of data.
+64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
+64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
+64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
+64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
+
+--- compute1 ping statistics ---
+4 packets transmitted, 4 received, 0% packet loss, time 3000ms
+rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
+ 
+ 
+ From the network node,
+ ping a site on the internet:
+ # ping -c 4 openstack.org
+PING openstack.org (174.143.194.225) 56(84) bytes of data.
+64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
+64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
+64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
+64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
+
+--- openstack.org ping statistics ---
+4 packets transmitted, 4 received, 0% packet loss, time 3022ms
+rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
+ 
+ 
+ From the network node,
+ ping the management interface on the
+ controller node:
+ # ping -c 4 controller
+PING controller (10.0.0.11) 56(84) bytes of data.
+64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
+64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
+64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
+64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
+
+--- controller ping statistics ---
+4 packets transmitted, 4 received, 0% packet loss, time 3000ms
+rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
+ 
+ 
+ From the network node,
+ ping the instance tunnels interface on the
+ compute node:
+ # ping -c 4 10.0.1.31
+PING 10.0.1.31 (10.0.1.31) 56(84) bytes of data.
+64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=1 ttl=64 time=0.263 ms +64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=2 ttl=64 time=0.202 ms +64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=3 ttl=64 time=0.203 ms +64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=4 ttl=64 time=0.202 ms + +--- 10.0.1.31 ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3000ms +rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + + From the compute node, + ping a site on the internet: + # ping -c 4 openstack.org +PING openstack.org (174.143.194.225) 56(84) bytes of data. +64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms +64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms +64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms +64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + +--- openstack.org ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3022ms +rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + + From the compute node, + ping the management interface on the + controller node: + # ping -c 4 controller +PING controller (10.0.0.11) 56(84) bytes of data. +64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms +64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms +64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms +64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms + +--- controller ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3000ms +rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + + From the compute node, + ping the instance tunnels interface on the + network node: + # ping -c 4 10.0.1.21 +PING 10.0.1.21 (10.0.1.21) 56(84) bytes of data. +64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=1 ttl=64 time=0.263 ms +64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=2 ttl=64 time=0.202 ms +64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=3 ttl=64 time=0.203 ms +64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=4 ttl=64 time=0.202 ms + +--- 10.0.1.21 ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3000ms +rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + +
+
diff --git a/doc/install-guide/section_basics-networking-nova.xml b/doc/install-guide/section_basics-networking-nova.xml new file mode 100644 index 0000000000..4e5895d089 --- /dev/null +++ b/doc/install-guide/section_basics-networking-nova.xml @@ -0,0 +1,197 @@ +
+ + Legacy networking + The example architecture with legacy networking (nova) requires one + controller node and at least one compute node. The controller node + contains one network interface on the + management network. The compute node contains + one network interface on the management network and one on the + external network. +
+ Two-node architecture with legacy networking + + + + + +
+ Unless you intend to use the exact configuration provided in this
+ example architecture, you must modify the networks in this procedure to
+ match your environment. Also, each node must be able to resolve the
+ other nodes by name in addition to IP address. For example, the
+ controller name must resolve to
+ 10.0.0.11, the IP address of the management
+ interface on the controller node.
+ 
+ Reconfiguring network interfaces will interrupt network
+ connectivity. We recommend using a local terminal session for these
+ procedures.
+ 
+ Controller node
+ 
+ To configure networking:
+ 
+ Configure the management interface, as shown in the example
+ after these procedures:
+ IP address: 10.0.0.11
+ Network mask: 255.255.255.0 (or /24)
+ Default gateway: 10.0.0.1
+ 
+ 
+ 
+ To configure name resolution:
+ 
+ Edit the /etc/hosts file to contain the
+ following:
+ # controller
+10.0.0.11 controller
+
+# compute1
+10.0.0.31 compute1
+ 
+ You must remove or comment the line beginning with
+ 127.0.1.1.
+ 
+ 
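+ For example, on RHEL, CentOS, or Fedora, the management
+ interface configuration in
+ /etc/sysconfig/network-scripts/ifcfg-eth0 might
+ look like the following sketch (the interface name
+ eth0 is an assumption; keep any existing
+ HWADDR and UUID lines):
+ DEVICE=eth0
+TYPE=Ethernet
+ONBOOT="yes"
+BOOTPROTO="none"
+IPADDR=10.0.0.11
+NETMASK=255.255.255.0
+GATEWAY=10.0.0.1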
+
+ Compute node + + To configure networking: + + Configure the management interface: + IP address: 10.0.0.31 + Network mask: 255.255.255.0 (or /24) + Default gateway: 10.0.0.1 + + Additional compute nodes should use 10.0.0.32, 10.0.0.33, + and so on. + + + + The external interface uses a special configuration without an + IP address assigned to it. Configure the external interface: + + + Edit the /etc/network/interfaces file + to contain the following: + # The external network interface +auto eth1 +iface eth1 inet manual + up ip link set dev $IFACE up + down ip link set dev $IFACE down + + + Edit the + /etc/sysconfig/network-scripts/ifcfg-eth1 + file to contain the following: + Do not change the HWADDR and + UUID keys. + DEVICE=eth1 +TYPE=Ethernet +ONBOOT="yes" +BOOTPROTO="none" + + + Edit the + /etc/sysconfig/network/ifcfg-eth1 file to + contain the following: + STARTMODE='auto' +BOOTPROTO='static' + + + + + Restart networking: + # service networking stop && service networking start + # service network restart + + + + To configure name resolution: + + Edit the /etc/hosts file to contain the + following: + # compute1 +10.0.0.31 compute1 + +# controller +10.0.0.11 controller + + You must remove or comment the line beginning with + 127.0.1.1. + + + +
+
+ Verify connectivity + We recommend that you verify network connectivity to the internet + and among the nodes before proceeding further. + + + From the controller node, + ping a site on the internet: + # ping -c 4 openstack.org +PING openstack.org (174.143.194.225) 56(84) bytes of data. +64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms +64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms +64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms +64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + +--- openstack.org ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3022ms +rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + + From the controller node, + ping the management interface on the + compute node: + # ping -c 4 compute1 +PING compute1 (10.0.0.31) 56(84) bytes of data. +64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms +64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms +64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms +64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms + +--- compute1 ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3000ms +rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + + From the compute node, + ping a site on the internet: + # ping -c 4 openstack.org +PING openstack.org (174.143.194.225) 56(84) bytes of data. +64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms +64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms +64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms +64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + +--- openstack.org ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3022ms +rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + + From the compute node, + ping the management interface on the + controller node: + # ping -c 4 controller +PING controller (10.0.0.11) 56(84) bytes of data. +64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms +64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms +64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms +64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms + +--- controller ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3000ms +rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + +
+
diff --git a/doc/install-guide/section_basics-networking.xml b/doc/install-guide/section_basics-networking.xml new file mode 100644 index 0000000000..f0af453e23 --- /dev/null +++ b/doc/install-guide/section_basics-networking.xml @@ -0,0 +1,80 @@ +
+ 
+ Networking
+ After installing the operating system on each node for the
+ architecture that you choose to deploy, you must configure the network
+ interfaces. We recommend that you disable any automated network
+ management tools and manually edit the appropriate configuration files
+ for your distribution. For more information on how to configure networking
+ on your distribution, see the
+ documentation.
+ 
+ To disable <systemitem class="service">NetworkManager</systemitem>
+ and enable the <systemitem class="service">network</systemitem>
+ service:
+ 
+ # service NetworkManager stop
+# service network start
+# chkconfig NetworkManager off
+# chkconfig network on
+ 
+ 
+ 
+ To disable <systemitem class="service">NetworkManager</systemitem>:
+ 
+ Use the YaST network module:
+ # yast2 network
+ For more information, see the
+ documentation.
+ 
+ 
+ 
+ RHEL and derivatives including CentOS and Scientific
+ Linux enable a restrictive firewall by default.
+ During this installation, certain steps will fail unless you alter or
+ disable the firewall. For further information about securing your
+ installation, refer to the
+ 
+ OpenStack Security Guide.
+ On Fedora, firewalld replaces
+ iptables as the default firewall system. While you
+ can use firewalld successfully, this guide
+ references iptables for compatibility with other
+ distributions.
+ 
+ To disable <literal>firewalld</literal> and enable
+ <literal>iptables</literal>:
+ 
+ # service firewalld stop
+# service iptables start
+# chkconfig firewalld off
+# chkconfig iptables on
+ 
+ 
+ Proceed to network configuration for the example
+ OpenStack Networking
+ or legacy
+ networking architecture.
+ 
+ 
diff --git a/doc/install-guide/section_basics-ntp.xml b/doc/install-guide/section_basics-ntp.xml new file mode 100644 index 0000000000..52f45b30ed --- /dev/null +++ b/doc/install-guide/section_basics-ntp.xml @@ -0,0 +1,31 @@ +
+ 
+ Network Time Protocol (NTP)
+ To synchronize services across multiple machines, you must
+ install NTP.
+ The examples in this guide configure the controller
+ node as the reference server and any additional nodes to set
+ their time from the controller node.
+ Install the ntp package on each system
+ running OpenStack services.
+ # apt-get install ntp
+ # yum install ntp
+ # zypper install ntp
+ Set up the NTP server
+ on your controller node so that it synchronizes time from
+ upstream servers by modifying the ntp.conf
+ file and restarting the service.
+ # service ntpd start
+# chkconfig ntpd on
+ # service ntp start
+# chkconfig ntp on
+ We advise that you configure additional nodes to
+ synchronize their time from the controller node rather than
+ from servers outside of your LAN. To do so, install the ntp
+ daemon as above, then edit /etc/ntp.conf
+ and change the server directive to use the
+ controller node as the time source, as shown in the following
+ example.
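+ On each additional node, the server
+ directive in /etc/ntp.conf might look like the
+ following sketch (replace any existing server
+ lines; the iburst option is optional and speeds up
+ initial synchronization):
+ server controller iburst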
diff --git a/doc/install-guide/section_basics-packages.xml b/doc/install-guide/section_basics-packages.xml new file mode 100644 index 0000000000..2b8cab7c44 --- /dev/null +++ b/doc/install-guide/section_basics-packages.xml @@ -0,0 +1,160 @@ +
+ 
+ OpenStack packages
+ Distributions might release OpenStack packages as part of
+ their distribution or through other methods because the
+ OpenStack and distribution release times are independent of each
+ other.
+ This section describes the configuration you must
+ complete after you configure machines to install the latest
+ OpenStack packages.
+ The examples in this guide use the
+ OpenStack packages from the RDO repository. These packages work
+ on Red Hat Enterprise Linux 6, compatible versions of CentOS,
+ and Fedora 20. To enable the RDO repository, download and
+ install the rdo-release-icehouse
+ package.
+ # yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-1.noarch.rpm
+ The EPEL package includes GPG keys
+ for package signing and repository information. This should only
+ be installed on Red Hat Enterprise Linux and CentOS, not Fedora.
+ Install the latest epel-release package (see
+ http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html).
+ For example:
+ # yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+ The
+ openstack-utils package contains utility
+ programs that make installation and configuration easier. These
+ programs are used throughout this guide. Install
+ openstack-utils. This verifies that you can
+ access the RDO repository.
+ # yum install openstack-utils
+ Use the Open Build Service repositories
+ for Icehouse based on your openSUSE or
+ SUSE Linux Enterprise
+ Server version. For example, if you run openSUSE 12.3, use:
+ # zypper addrepo -f obs://Cloud:OpenStack:Icehouse/openSUSE_12.3 Icehouse
+ For openSUSE 13.1, use:
+ # zypper addrepo -f obs://Cloud:OpenStack:Icehouse/openSUSE_13.1 Icehouse
+ If you use SUSE Linux Enterprise Server 11 SP3,
+ use:
+ # zypper addrepo -f obs://Cloud:OpenStack:Icehouse/SLE_11_SP3 Icehouse
+ The openstack-utils
+ package contains utility programs that make installation and
+ configuration easier. These programs are used throughout this
+ guide. Install openstack-utils. This verifies
+ that you can access the Open Build Service repository:
+ # zypper install openstack-utils
+ 
+ The openstack-config program
+ in the openstack-utils package uses
+ crudini to manipulate configuration
+ files. However, crudini version 0.3
+ does not support multi-valued options. See
+ https://bugs.launchpad.net/openstack-manuals/+bug/1269271.
+ As a workaround, you must set any multi-valued options
+ manually; otherwise, the new value overwrites the previous
+ value instead of being added as a new option.
+ 
+ The
+ openstack-selinux package includes the
+ policy files that are required to configure SELinux during
+ OpenStack installation.
+ Install openstack-selinux.
+ # yum install openstack-selinux
+ Upgrade your system packages:
+ # yum upgrade
+ # zypper refresh
+# zypper update
+ If the upgrade included a new
+ kernel package, reboot the system to ensure the new kernel is running:
+ # reboot
+ 
+ To use the Ubuntu Cloud Archive for Icehouse
+ The Ubuntu Cloud Archive is a special repository that
+ allows you to install newer releases of OpenStack on the
+ stable supported version of Ubuntu.
+ + Install the Ubuntu Cloud Archive for + Icehouse: + # apt-get install python-software-properties +# add-apt-repository cloud-archive:icehouse + + + Update the package database, upgrade your system, and reboot + for all changes to take effect: + # apt-get update && apt-get dist-upgrade +# reboot + + + + To use the Debian Wheezy backports archive for + Icehouse + The Icehouse release is available + only in Debian Sid + (otherwise called Unstable). However, the Debian maintainers + of OpenStack also maintain a non-official Debian repository + for OpenStack containing Wheezy backports. + + Install the Debian Wheezy backport repository + Icehouse: + # echo "deb http://archive.gplhost.com/debian icehouse-backports main" >>/etc/apt/sources.list + + + Install the Debian Wheezy OpenStack repository for + Icehouse: + # echo "deb http://archive.gplhost.com/debian icehouse main" >>/etc/apt/sources.list + + + Update the repository database and install the key: + # apt-get update && apt-get install gplhost-archive-keyring + + + Update the package database, upgrade your system, and reboot + for all changes to take effect: + # apt-get update && apt-get dist-upgrade +# reboot + + + Numerous archive.gplhost.com mirrors are + available around the world. All are available with both FTP and + HTTP protocols (you should use the closest mirror). The list of + mirrors is available at http://archive.gplhost.com/readme.mirrors. +
+ Manually install python-argparse
+ The Debian OpenStack packages are maintained on Debian Sid
+ (also known as Debian Unstable), the current development
+ version. Backported packages run correctly on Debian Wheezy with
+ one caveat:
+ All OpenStack packages are written in Python. Wheezy uses
+ Python 2.6 and 2.7, with Python 2.6 as the default interpreter;
+ Sid has only Python 2.7. There is one packaging change between
+ these two. In Python 2.6, you installed the
+ python-argparse package separately. In
+ Python 2.7, this package is installed by default. Unfortunately,
+ in Python 2.7, this package does not include the Provides:
+ python-argparse directive.
+ Because the packages are maintained in Sid, where the
+ Provides: python-argparse directive causes an
+ error, and the Debian OpenStack maintainer wants to maintain one
+ version of the OpenStack packages, you must manually install the
+ python-argparse package on each OpenStack
+ system that runs Debian Wheezy before you install the other
+ OpenStack packages. Use the following command to install the
+ package:
+ # apt-get install python-argparse
+ This caveat applies to most OpenStack packages in
+ Wheezy.
+
diff --git a/doc/install-guide/section_basics-passwords.xml b/doc/install-guide/section_basics-passwords.xml new file mode 100644 index 0000000000..9db32b6eb8 --- /dev/null +++ b/doc/install-guide/section_basics-passwords.xml @@ -0,0 +1,105 @@ +
+ 
+ Passwords
+ The various OpenStack services and the required software,
+ such as the database and the messaging server, must be password
+ protected. You use these passwords when configuring a service and
+ then again to access the service, so choose each password when
+ you configure the service and use the same password when you
+ access it later.
+ Optionally, you can generate random passwords with the
+ pwgen program (see the example after the
+ table). Or, to create passwords one at a
+ time, use the output of this command repeatedly:
+ $ openssl rand -hex 10
+ 
+ This guide uses the convention that
+ SERVICE_PASS is
+ the password to access the service SERVICE and
+ SERVICE_DBPASS is
+ the database password used by the service SERVICE to access the
+ database.
+ 
+ The complete list of passwords you need to define in this guide is:
Passwords
Password name | Description
Database password (no variable used) | Root password for the database
RABBIT_PASS | Password of user guest of RabbitMQ
KEYSTONE_DBPASS | Database password of Identity service
ADMIN_PASS | Password of user admin
GLANCE_DBPASS | Database password for Image Service
GLANCE_PASS | Password of Image Service user glance
NOVA_DBPASS | Database password for Compute service
NOVA_PASS | Password of Compute service user nova
DASH_DBPASS | Database password for the dashboard
CINDER_DBPASS | Database password for the Block Storage service
CINDER_PASS | Password of Block Storage service user cinder
NEUTRON_DBPASS | Database password for the Networking service
NEUTRON_PASS | Password of Networking service user neutron
HEAT_DBPASS | Database password for the Orchestration service
HEAT_PASS | Password of Orchestration service user heat
CEILOMETER_DBPASS | Database password for the Telemetry service
CEILOMETER_PASS | Password of Telemetry service user ceilometer
+
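+ For example, the following pwgen command
+ generates one 16-character password (a sketch; the
+ -s option requests a fully random password, and
+ this assumes the pwgen package is
+ installed):
+ $ pwgen -s 16 1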
+
diff --git a/doc/install-guide/section_basics-prerequisites.xml b/doc/install-guide/section_basics-prerequisites.xml new file mode 100644 index 0000000000..23f16d1f80 --- /dev/null +++ b/doc/install-guide/section_basics-prerequisites.xml @@ -0,0 +1,63 @@ +
+ 
+ Before you begin
+ For a functional environment, OpenStack does not require a
+ significant amount of resources. We recommend that your environment
+ meet or exceed the following minimum requirements, which can support
+ several minimal CirrOS instances:
+ 
+ 
+ Controller Node: 1 processor, 2 GB memory, and 5 GB
+ storage
+ 
+ 
+ Network Node: 1 processor, 512 MB memory, and 5 GB
+ storage
+ 
+ 
+ Compute Node: 1 processor, 2 GB memory, and 10 GB
+ storage
+ 
+ 
+ To minimize clutter and provide more resources for OpenStack, we
+ recommend a minimal installation of your Linux distribution. Also, we
+ strongly recommend that you install a 64-bit version of your distribution
+ on at least the compute node. If you install a 32-bit version of your
+ distribution on the compute node, attempting to start an instance using
+ a 64-bit image will fail.
+ 
+ A single disk partition on each node works for most basic
+ installations. However, you should consider the
+ Logical Volume Manager (LVM) for installations
+ with optional services such as Block Storage.
+ 
+ Many users build their test environments on
+ virtual machines
+ (VMs). The primary benefits of VMs include the
+ following:
+ 
+ 
+ One physical server can support multiple nodes, each with almost
+ any number of network interfaces.
+ 
+ 
+ You can take periodic "snapshots" throughout the installation
+ process and "roll back" to a working configuration if a problem
+ occurs.
+ 
+ 
+ However, VMs reduce the performance of your instances, particularly
+ if your hypervisor or processor lacks support for hardware
+ acceleration of nested VMs.
+ 
+ If you choose to install on VMs, make sure your hypervisor
+ permits promiscuous mode on the
+ external network, as shown in the example
+ after this section.
+ 
+ For more information about system requirements, see the OpenStack Operations
+ Guide.
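+ For example, if your nodes are KVM guests on a Linux host, you
+ can allow promiscuous mode on the host interface attached to the
+ external network (a sketch; the interface name eth2
+ is an assumption, and other hypervisors expose an equivalent
+ setting in their virtual network configuration):
+ # ip link set eth2 promisc on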
diff --git a/doc/install-guide/section_basics-queue.xml b/doc/install-guide/section_basics-queue.xml new file mode 100644 index 0000000000..fa1abc0028 --- /dev/null +++ b/doc/install-guide/section_basics-queue.xml @@ -0,0 +1,66 @@ +
+ 
+ Messaging server
+ On the controller node, install the messaging queue server.
+ Typically this is RabbitMQ, or
+ Qpid on Red Hat Enterprise Linux and
+ derivatives, but other options, including
+ ZeroMQ (0MQ), are also available.
+ # apt-get install rabbitmq-server
+ # zypper install rabbitmq-server
+ # yum install qpid-cpp-server
+ 
+ Important security consideration
+ The rabbitmq-server package configures
+ the RabbitMQ service to start automatically and creates a
+ guest user with a default
+ guest password. The RabbitMQ examples in
+ this guide use the guest account, though we
+ strongly advise that you change its default password, especially
+ if you have IPv6 available: by default, the RabbitMQ server
+ allows anyone to connect to it by using guest as both login and
+ password, and with IPv6, it is reachable from the
+ outside.
+ To change the default guest password of RabbitMQ (see also
+ the example at the end of this section):
+ # rabbitmqctl change_password guest RABBIT_PASS
+ 
+ Disable Qpid authentication by
+ editing the /etc/qpidd.conf file and changing
+ the auth option to
+ no.
+ auth=no
+ 
+ 
+ To simplify configuration, the Qpid examples in this guide do not use
+ authentication. However, we strongly advise enabling authentication
+ for production deployments. For more information on securing Qpid,
+ refer to the
+ Qpid Documentation.
+ 
+ 
+ After you enable Qpid authentication, you must update the configuration
+ file of each OpenStack service to ensure that the
+ qpid_username and qpid_password
+ configuration keys refer to a valid Qpid username and password,
+ respectively.
+ 
+ 
+ Start Qpid and set it to start
+ automatically when the system boots.
+ # service qpidd start
+# chkconfig qpidd on
+ Start the messaging service and set it to
+ start automatically when the system boots:
+ # service rabbitmq-server start
+# chkconfig rabbitmq-server on
+ Congratulations, now you are ready to install OpenStack
+ services!
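+ As a more secure alternative to the default
+ guest account (a sketch, not required by this
+ guide; the openstack user name is an assumption),
+ you can create a dedicated RabbitMQ user for OpenStack services,
+ grant it full permissions on the default virtual host, and remove
+ the guest account:
+ # rabbitmqctl add_user openstack RABBIT_PASS
+# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
+# rabbitmqctl delete_user guest
+ If you do this, configure each OpenStack service to use the
+ new credentials instead of guest.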