diff --git a/doc/admin-guide-cloud-rst/source/compute-configure-migrations.rst b/doc/admin-guide-cloud-rst/source/compute-configure-migrations.rst deleted file mode 100644 index 4de5575bde..0000000000 --- a/doc/admin-guide-cloud-rst/source/compute-configure-migrations.rst +++ /dev/null @@ -1,382 +0,0 @@ -.. _section_configuring-compute-migrations: - -==================== -Configure migrations -==================== - -.. :ref:`_configuring-migrations-kvm-libvirt` -.. :ref:`_configuring-migrations-xenserver` - -.. note:: - - Only cloud administrators can perform live migrations. If your cloud - is configured to use cells, you can perform live migration within - but not between cells. - -Migration enables an administrator to move a virtual-machine instance -from one compute host to another. This feature is useful when a compute -host requires maintenance. Migration can also be useful to redistribute -the load when many VM instances are running on a specific physical -machine. - -The migration types are: - -- **Non-live migration** (sometimes referred to simply as 'migration'). - The instance is shut down for a period of time to be moved to another - hypervisor. In this case, the instance recognizes that it was - rebooted. - -- **Live migration** (or 'true live migration'). Almost no instance - downtime. Useful when the instances must be kept running during the - migration. The different types of live migration are: - - - **Shared storage-based live migration**. Both hypervisors have - access to shared storage. - - - **Block live migration**. No shared storage is required. - Incompatible with read-only devices such as CD-ROMs and - `Configuration Drive (config\_drive) `_. - - - **Volume-backed live migration**. Instances are backed by volumes - rather than ephemeral disk, no shared storage is required, and - migration is supported (currently only available for libvirt-based - hypervisors). - -The following sections describe how to configure your hosts and compute -nodes for migrations by using the KVM and XenServer hypervisors. - -.. _configuring-migrations-kvm-libvirt: - -KVM-Libvirt -~~~~~~~~~~~ - -.. :ref:`_configuring-migrations-kvm-shared-storage` -.. :ref:`_configuring-migrations-kvm-block-migration` - -.. _configuring-migrations-kvm-shared-storage: - -Shared storage --------------- - -.. :ref:`_section_example-compute-install` -.. :ref:`_true-live-migration-kvm-libvirt` - -**Prerequisites** - -- **Hypervisor:** KVM with libvirt - -- **Shared storage:** :file:`NOVA-INST-DIR/instances/` (for example, - :file:`/var/lib/nova/instances`) has to be mounted by shared storage. - This guide uses NFS but other options, including the - `OpenStack Gluster Connector `_ - are available. - -- **Instances:** Instance can be migrated with iSCSI-based volumes. - -.. note:: - - - Because the Compute service does not use the libvirt live - migration functionality by default, guests are suspended before - migration and might experience several minutes of downtime. For - details, see :ref:Enabling True Live Migration. - - - This guide assumes the default value for ``instances_path`` in - your :file:`nova.conf` file (:file:`NOVA-INST-DIR/instances`). If you - have changed the ``state_path`` or ``instances_path`` variables, - modify the commands accordingly. - - - You must specify ``vncserver_listen=0.0.0.0`` or live migration - will not work correctly. - - - You must specify the ``instances_path`` in each node that runs - nova-compute. 
The mount point for ``instances_path`` must be the - same value for each node, or live migration will not work - correctly. - -.. _section_example-compute-install: - -Example Compute installation environment -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- Prepare at least three servers. In this example, we refer to the - servers as ``HostA``, ``HostB``, and ``HostC``: - - - ``HostA`` is the Cloud Controller, and should run these services: - nova-api, nova-scheduler, ``nova-network``, cinder-volume, and - ``nova-objectstore``. - - - ``HostB`` and ``HostC`` are the compute nodes that run - nova-compute. - - Ensure that ``NOVA-INST-DIR`` (set with ``state_path`` in the - :file:`nova.conf` file) is the same on all hosts. - -- In this example, ``HostA`` is the NFSv4 server that exports - ``NOVA-INST-DIR/instances`` directory. ``HostB`` and ``HostC`` are - NFSv4 clients that mount ``HostA``. - -**Configuring your system** - -#. Configure your DNS or ``/etc/hosts`` and ensure it is consistent across - all hosts. Make sure that the three hosts can perform name resolution - with each other. As a test, use the :command:`ping` command to ping each host - from one another: - - .. code:: console - - $ ping HostA - $ ping HostB - $ ping HostC - -#. Ensure that the UID and GID of your Compute and libvirt users are - identical between each of your servers. This ensures that the - permissions on the NFS mount works correctly. - -#. Export ``NOVA-INST-DIR/instances`` from ``HostA``, and ensure it is - readable and writable by the Compute user on ``HostB`` and ``HostC``. - - For more information, see: `SettingUpNFSHowTo `_ - or `CentOS/Red Hat: Setup NFS v4.0 File Server `_ - -#. Configure the NFS server at ``HostA`` by adding the following line to - the :file:`/etc/exports` file: - - .. code:: ini - - NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash) - - Change the subnet mask (``255.255.0.0``) to the appropriate value to - include the IP addresses of ``HostB`` and ``HostC``. Then restart the - NFS server: - - .. code:: console - - # /etc/init.d/nfs-kernel-server restart - # /etc/init.d/idmapd restart - -#. On both compute nodes, enable the 'execute/search' bit on your shared - directory to allow qemu to be able to use the images within the - directories. On all hosts, run the following command: - - .. code:: console - - $ chmod o+x NOVA-INST-DIR/instances - -#. Configure NFS on ``HostB`` and ``HostC`` by adding the following line to - the :file:`/etc/fstab` file - - .. code:: console - - HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0 - - Ensure that you can mount the exported directory - - .. code:: console - - $ mount -a -v - - Check that ``HostA`` can see the :file:`NOVA-INST-DIR/instances/` - directory - - .. code:: console - - $ ls -ld NOVA-INST-DIR/instances/ - drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/ - - Perform the same check on ``HostB`` and ``HostC``, paying special - attention to the permissions (Compute should be able to write) - - .. 
code-block:: console - :linenos: - - $ ls -ld NOVA-INST-DIR/instances/ - drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/ - - $ df -k - Filesystem 1K-blocks Used Available Use% Mounted on - /dev/sda1 921514972 4180880 870523828 1% / - none 16498340 1228 16497112 1% /dev - none 16502856 0 16502856 0% /dev/shm - none 16502856 368 16502488 1% /var/run - none 16502856 0 16502856 0% /var/lock - none 16502856 0 16502856 0% /lib/init/rw - HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( <--- this line is important.) - -#. Update the libvirt configurations so that the calls can be made - securely. These methods enable remote access over TCP and are not - documented here. - - - SSH tunnel to libvirtd's UNIX socket - - - libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption - - - libvirtd TCP socket, with TLS for encryption and x509 client certs - for authentication - - - libvirtd TCP socket, with TLS for encryption and Kerberos for - authentication - - Restart libvirt. After you run the command, ensure that libvirt is - successfully restarted - - .. code:: console - - # stop libvirt-bin && start libvirt-bin - $ ps -ef | grep libvirt - root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l\ - -#. Configure your firewall to allow libvirt to communicate between nodes. - By default, libvirt listens on TCP port 16509, and an ephemeral TCP - range from 49152 to 49261 is used for the KVM communications. Based on - the secure remote access TCP configuration you chose, be careful which - ports you open, and always understand who has access. For information - about ports that are used with libvirt, - see `the libvirt documentation `_. - -#. You can now configure options for live migration. In most cases, you - will not need to configure any options. The following chart is for - advanced users only. - -.. TODO :include :: /../../common/tables/nova-livemigration.xml/ - -.. _true-live-migration-kvm-libvirt: - -Enabling true live migration -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Prior to the Kilo release, the Compute service did not use the libvirt -live migration function by default. To enable this function, add the -following line to the ``[libvirt]`` section of the :file:`nova.conf` file: - -.. code:: ini - - live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED - -On versions older than Kilo, the Compute service does not use libvirt's -live migration by default because there is a risk that the migration -process will never end. This can happen if the guest operating system -uses blocks on the disk faster than they can be migrated. - -.. _configuring-migrations-kvm-block-migration: - -Block Migration -^^^^^^^^^^^^^^^ - -Configuring KVM for block migration is exactly the same as the above -configuration in :ref:`configuring-migrations-kvm-shared-storage` -the section called shared storage, except that ``NOVA-INST-DIR/instances`` -is local to each host rather than shared. No NFS client or server -configuration is required. - -.. note:: - - - To use block migration, you must use the :option:`--block-migrate` - parameter with the live migration command. - - - Block migration is incompatible with read-only devices such as - CD-ROMs and `Configuration Drive (config_drive) `_. - - - Since the ephemeral drives are copied over the network in block - migration, migrations of instances with heavy I/O loads may never - complete if the drives are writing faster than the data can be - copied over the network. - -.. 
_configuring-migrations-xenserver: - -XenServer -~~~~~~~~~ - -.. :ref:Shared Storage -.. :ref:Block migration - -.. _configuring-migrations-xenserver-shared-storage: - -Shared storage --------------- - -Prerequisites -^^^^^^^^^^^^^ - -- **Compatible XenServer hypervisors**. For more information, see the - `Requirements for Creating Resource Pools `_ section of the XenServer - Administrator's Guide. - -- **Shared storage**. An NFS export, visible to all XenServer hosts. - - .. note:: - - For the supported NFS versions, see the - `NFS VHD `_ - section of the XenServer Administrator's Guide. - -To use shared storage live migration with XenServer hypervisors, the -hosts must be joined to a XenServer pool. To create that pool, a host -aggregate must be created with specific metadata. This metadata is used -by the XAPI plug-ins to establish the pool. - -**Using shared storage live migrations with XenServer Hypervisors** - -#. Add an NFS VHD storage to your master XenServer, and set it as the - default storage repository. For more information, see NFS VHD in the - XenServer Administrator's Guide. - -#. Configure all compute nodes to use the default storage repository - (``sr``) for pool operations. Add this line to your :file:`nova.conf` - configuration files on all compute nodes: - - .. code:: ini - - sr_matching_filter=default-sr:true - -#. Create a host aggregate. This command creates the aggregate, and then - displays a table that contains the ID of the new aggregate - - .. code:: console - - $ nova aggregate-create POOL_NAME AVAILABILITY_ZONE - - Add metadata to the aggregate, to mark it as a hypervisor pool - - .. code:: console - - $ nova aggregate-set-metadata AGGREGATE_ID hypervisor_pool=true - - $ nova aggregate-set-metadata AGGREGATE_ID operational_state=created - - Make the first compute node part of that aggregate - - .. code:: console - - $ nova aggregate-add-host AGGREGATE_ID MASTER_COMPUTE_NAME - - The host is now part of a XenServer pool. - -#. Add hosts to the pool - - .. code:: console - - $ nova aggregate-add-host AGGREGATE_ID COMPUTE_HOST_NAME - - .. note:: - - The added compute node and the host will shut down to join the host - to the XenServer pool. The operation will fail if any server other - than the compute node is running or suspended on the host. - -.. _configuring-migrations-xenserver-block-migration: - -Block migration -^^^^^^^^^^^^^^^ - -- **Compatible XenServer hypervisors**. - The hypervisors must support the Storage XenMotion feature. - See your XenServer manual to make sure your edition - has this feature. - - .. note:: - - - To use block migration, you must use the :option:`--block-migrate` - parameter with the live migration command. - - - Block migration works only with EXT local storage storage - repositories, and the server must not have any volumes attached. diff --git a/doc/admin-guide-cloud-rst/source/compute-configure-service-groups.rst b/doc/admin-guide-cloud-rst/source/compute-configure-service-groups.rst deleted file mode 100644 index 7745871549..0000000000 --- a/doc/admin-guide-cloud-rst/source/compute-configure-service-groups.rst +++ /dev/null @@ -1,120 +0,0 @@ -.. highlight:: ini - :linenothreshold: 5 - -.. _configuring-compute-service-groups: - -================================ -Configure Compute service groups -================================ - -The Compute service must know the status of each compute node to -effectively manage and use them. 
This can include events like a user -launching a new VM, the scheduler sending a request to a live node, or a -query to the ServiceGroup API to determine if a node is live. - -When a compute worker running the nova-compute daemon starts, it calls -the join API to join the compute group. Any service (such as the -scheduler) can query the group's membership and the status of its nodes. -Internally, the ServiceGroup client driver automatically updates the -compute worker status. - -.. _database-servicegroup-driver: - -Database ServiceGroup driver -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -By default, Compute uses the database driver to track if a node is live. -In a compute worker, this driver periodically sends a ``db update`` -command to the database, saying “I'm OK” with a timestamp. Compute uses -a pre-defined timeout (``service_down_time``) to determine if a node is -dead. - -The driver has limitations, which can be problematic depending on your -environment. If a lot of compute worker nodes need to be checked, the -database can be put under heavy load, which can cause the timeout to -trigger, and a live node could incorrectly be considered dead. By -default, the timeout is 60 seconds. Reducing the timeout value can help -in this situation, but you must also make the database update more -frequently, which again increases the database workload. - -The database contains data that is both transient (such as whether the -node is alive) and persistent (such as entries for VM owners). With the -ServiceGroup abstraction, Compute can treat each type separately. - -.. _zookeeper-servicegroup-driver: - -ZooKeeper ServiceGroup driver ------------------------------ - -The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral -nodes. ZooKeeper, unlike databases, is a distributed system, with its -load divided among several servers. On a compute worker node, the driver -can establish a ZooKeeper session, then create an ephemeral znode in the -group directory. Ephemeral znodes have the same lifespan as the session. -If the worker node or the nova-compute daemon crashes, or a network -partition is in place between the worker and the ZooKeeper server -quorums, the ephemeral znodes are removed automatically. The driver -can be given group membership by running the :command:`ls` command in the -group directory. - -The ZooKeeper driver requires the ZooKeeper servers and client -libraries. Setting up ZooKeeper servers is outside the scope of this -guide (for more information, see `Apache Zookeeper `_). These client-side -Python libraries must be installed on every compute node: - -**python-zookeeper** - The official Zookeeper Python binding - -**evzookeeper** - This library makes the binding work with the eventlet threading model. - -This example assumes the ZooKeeper server addresses and ports are -``192.168.2.1:2181``, ``192.168.2.2:2181``, and ``192.168.2.3:2181``. - -These values in the :file:`/etc/nova/nova.conf` file are required on every -node for the ZooKeeper driver: - -.. code:: ini - - # Driver for the ServiceGroup service - servicegroup_driver="zk" - - [zookeeper] - address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181" - -To customize the Compute Service groups, use these configuration option -settings: - -.. TODO ../../common/tables/nova-zookeeper.xml - -.. _memcache-servicegroup-driver: - -Memcache ServiceGroup driver ----------------------------- - -The memcache ServiceGroup driver uses memcached, a distributed memory -object caching system that is used to increase site performance. 
For -more details, see `memcached.org `_. - -To use the memcache driver, you must install memcached. You might -already have it installed, as the same driver is also used for the -OpenStack Object Storage and OpenStack dashboard. If you need to install -memcached, see the instructions in the `OpenStack Installation Guide `_. - -These values in the :file:`/etc/nova/nova.conf` file are required on every -node for the memcache driver: - -.. code:: ini - - # Driver for the ServiceGroup service - servicegroup_driver="mc" - - # Memcached servers. Use either a list of memcached servers to use for - caching (list value), - # or "" for in-process caching (default). - memcached_servers= - - # Timeout; maximum time since last check-in for up service - (integer value). - # Helps to define whether a node is dead - service_down_time=60 diff --git a/doc/admin-guide-cloud-rst/source/compute-images-instances.rst b/doc/admin-guide-cloud-rst/source/compute-images-instances.rst index ac9c3df2e0..7eeda580ee 100644 --- a/doc/admin-guide-cloud-rst/source/compute-images-instances.rst +++ b/doc/admin-guide-cloud-rst/source/compute-images-instances.rst @@ -4,9 +4,6 @@ Images and instances .. TODO (bmoss) - compute-image-mgt.rst - compute-instance-building-blocks.rst - compute-instance-mgt-tools.rst instance-scheduling-constraints.rst @@ -39,14 +36,14 @@ flavors that you can edit or add to. services `__ section of the OpenStack Configuration Reference. - - For more information about flavors, see ? or + - For more information about flavors, see :ref:`compute-flavors` or `Flavors `__ in the OpenStack Operations Guide. You can add and remove additional resources from running instances, such as persistent volume storage, or public IP addresses. The example used in this chapter is of a typical virtual system within an OpenStack -cloud. It uses the cinder-volume service, which provides persistent +cloud. It uses the ``cinder-volume`` service, which provides persistent block storage, instead of the ephemeral storage provided by the selected instance flavor. @@ -54,14 +51,14 @@ This diagram shows the system state prior to launching an instance. The image store, fronted by the Image service (glance) has a number of predefined images. Inside the cloud, a compute node contains the available vCPU, memory, and local disk resources. Additionally, the -cinder-volume service provides a number of predefined volumes. +``cinder-volume`` service provides a number of predefined volumes. |Base image state with no running instances| To launch an instance select an image, flavor, and any optional attributes. The selected flavor provides a root volume, labeled ``vda`` in this diagram, and additional ephemeral storage, labeled ``vdb``. In -this example, the cinder-volume store is mapped to the third virtual +this example, the ``cinder-volume`` store is mapped to the third virtual disk on this instance, ``vdc``. |Instance creation from image and runtime state| @@ -74,8 +71,8 @@ images, as less data needs to be copied across the network. A new empty ephemeral disk is also created, labeled ``vdb`` in this diagram. This disk is destroyed when you delete the instance. -The compute node connects to the attached cinder-volume using ISCSI. The -cinder-volume is mapped to the third disk, labeled ``vdc`` in this +The compute node connects to the attached ``cinder-volume`` using ISCSI. The +``cinder-volume`` is mapped to the third disk, labeled ``vdc`` in this diagram. 
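For example, an existing Block Storage volume can be attached to a running
instance as its third disk with the :command:`nova volume-attach` command.
This is only an illustrative sketch; the instance and volume IDs shown here
are placeholders:

.. code:: console

   $ nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdc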
After the compute node provisions the vCPU and memory resources, the instance boots up from root volume ``vda``. The instance runs, and changes data on the disks (highlighted in red on the diagram). @@ -98,6 +95,65 @@ process. |End state of image and volume after instance exits| + +Image management +~~~~~~~~~~~~~~~~ + +The OpenStack Image service discovers, registers, and retrieves virtual +machine images. The service also includes a RESTful API that allows you +to query VM image metadata and retrieve the actual image with HTTP +requests. For more information about the API, see the `OpenStack API +Complete Reference `__ and +the `Python +API `__. + +The OpenStack Image service can be controlled using a command-line tool. +For more information about using the OpenStack Image command-line tool, +see the `Manage +Images `__ +section in the OpenStack End User Guide. + +Virtual images that have been made available through the Image service +can be stored in a variety of ways. In order to use these services, you +must have a working installation of the Image service, with a working +endpoint, and users that have been created in OpenStack Identity. +Additionally, you must meet the environment variables required by the +Compute and Image service clients. + +The Image service supports these back-end stores: + +File system + The OpenStack Image service stores virtual machine images in the + file system back end by default. This simple back end writes image + files to the local file system. + +Object Storage + The OpenStack highly available service for storing objects. + +Block Storage + The OpenStack highly available service for storing blocks. + +VMware + ESX/ESXi or vCenter Server target system. + +S3 + The Amazon S3 service. + +HTTP + OpenStack Image service can read virtual machine images that are + available on the Internet using HTTP. This store is read only. + +RADOS block device (RBD) + Stores images inside of a Ceph storage cluster using Ceph's RBD + interface. + +Sheepdog + A distributed storage system for QEMU/KVM. + +GridFS + Stores images using MongoDB. + + Image properties and property protection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ An image property is a key and value pair that the cloud administrator @@ -280,6 +336,97 @@ Information on the configuration options for caching on compute nodes can be found in the `Configuration Reference `__. +Instance building blocks +~~~~~~~~~~~~~~~~~~~~~~~~ + +In OpenStack, the base operating system is usually copied from an image +stored in the OpenStack Image service. This results in an ephemeral +instance that starts from a known template state and loses all +accumulated states on shutdown. + +You can also put an operating system on a persistent volume in Compute +or the Block Storage volume system. This gives a more traditional, +persistent system that accumulates states that are preserved across +restarts. To get a list of available images on your system, run: + +.. 
code:: console + + $ nova image-list + +---------------------------+------------------+--------+----------------+ + | ID | Name | Status | Server | + +---------------------------+------------------+--------+----------------+ + | aee1d242-730f-431f-88c1- | | | | + | 87630c0f07ba | Ubuntu 14.04 | | | + | | cloudimg amd64 | ACTIVE | | + | 0b27baa1-0ca6-49a7-b3f4- | | | | + | 48388e440245 | Ubuntu 14.10 | | | + | | cloudimg amd64 | ACTIVE | | + | df8d56fc-9cea-4dfd-a8d3- | | | | + | 28764de3cb08 | jenkins | ACTIVE | | + +---------------------------+------------------+--------+----------------+ + +The displayed image attributes are: + +``ID`` + Automatically generated UUID of the image. + +``Name`` + Free form, human-readable name for the image. + +``Status`` + The status of the image. Images marked ``ACTIVE`` are available for + use. + +``Server`` + For images that are created as snapshots of running instances, this + is the UUID of the instance the snapshot derives from. For uploaded + images, this field is blank. + +Virtual hardware templates are called ``flavors``. The default +installation provides five predefined flavors. + +For a list of flavors that are available on your system, run: + +.. code:: console + + $ nova flavor-list + +----+----------+----------+-----+----------+-----+------+------------+----------+ + | ID | Name | Memory_MB| Disk| Ephemeral| Swap| VCPUs| RXTX_Factor| Is_Public| + +----+----------+----------+-----+----------+-----+------+------------+----------+ + | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | + | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | + | 3 | m1.medium| 4096 | 40 | 0 | | 2 | 1.0 | True | + | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | + | 5 | m1.xlarge| 16384 | 160 | 0 | | 8 | 1.0 | True | + +----+----------+----------+-----+----------+-----+------+------------+----------+ + +By default, administrative users can configure the flavors. You can +change this behavior by redefining the access controls for +``compute_extension:flavormanage`` in :file:`/etc/nova/policy.json` on the +``compute-api`` server. + + +Instance management tools +~~~~~~~~~~~~~~~~~~~~~~~~~ + +OpenStack provides command-line, web interface, and API-based instance +management tools. Third-party management tools are also available, using +either the native API or the provided EC2-compatible API. + +The OpenStack python-novaclient package provides a basic command-line +utility, which uses the :command:`nova` command. This is available as a native +package for most Linux distributions, or you can install the latest +version using the pip python package installer: + +:: + + # pip install python-novaclient + +For more information about python-novaclient and other command-line +tools, see the `OpenStack End User +Guide `__. + + Control where instances run ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The `OpenStack Configuration diff --git a/doc/admin-guide-cloud-rst/source/compute-networking-nova.rst b/doc/admin-guide-cloud-rst/source/compute-networking-nova.rst new file mode 100644 index 0000000000..6c69dec3cb --- /dev/null +++ b/doc/admin-guide-cloud-rst/source/compute-networking-nova.rst @@ -0,0 +1,1011 @@ +============================ +Networking with nova-network +============================ + +Understanding the networking configuration options helps you design the +best configuration for your Compute instances. + +You can choose to either install and configure nova-network or use the +OpenStack Networking service (neutron). This section contains a brief +overview of nova-network. 
For more information about OpenStack +Networking, see :ref:`networking`. + +Networking concepts +~~~~~~~~~~~~~~~~~~~ + +Compute assigns a private IP address to each VM instance. Compute makes +a distinction between fixed IPs and floating IP. Fixed IPs are IP +addresses that are assigned to an instance on creation and stay the same +until the instance is explicitly terminated. Floating IPs are addresses +that can be dynamically associated with an instance. A floating IP +address can be disassociated and associated with another instance at any +time. A user can reserve a floating IP for their project. + +.. note:: + + Currently, Compute with ``nova-network`` only supports Linux bridge + networking that allows virtual interfaces to connect to the outside + network through the physical interface. + +The network controller with ``nova-network`` provides virtual networks to +enable compute servers to interact with each other and with the public +network. Compute with ``nova-network`` supports the following network modes, +which are implemented as Network Manager types: + +Flat Network Manager + In this mode, a network administrator specifies a subnet. IP + addresses for VM instances are assigned from the subnet, and then + injected into the image on launch. Each instance receives a fixed IP + address from the pool of available addresses. A system administrator + must create the Linux networking bridge (typically named ``br100``, + although this is configurable) on the systems running the + ``nova-network`` service. All instances of the system are attached to + the same bridge, which is configured manually by the network + administrator. + +.. note:: + + Configuration injection currently only works on Linux-style + systems that keep networking configuration in + :file:`/etc/network/interfaces`. + +Flat DHCP Network Manager + In this mode, OpenStack starts a DHCP server (dnsmasq) to allocate + IP addresses to VM instances from the specified subnet, in addition + to manually configuring the networking bridge. IP addresses for VM + instances are assigned from a subnet specified by the network + administrator. + + Like flat mode, all instances are attached to a single bridge on the + compute node. Additionally, a DHCP server configures instances + depending on single-/multi-host mode, alongside each ``nova-network``. + In this mode, Compute does a bit more configuration. It attempts to + bridge into an Ethernet device (``flat_interface``, eth0 by + default). For every instance, Compute allocates a fixed IP address + and configures dnsmasq with the MAC ID and IP address for the VM. + dnsmasq does not take part in the IP address allocation process, it + only hands out IPs according to the mapping done by Compute. + Instances receive their fixed IPs with the :command:`dhcpdiscover` command. + These IPs are not assigned to any of the host's network interfaces, + only to the guest-side interface for the VM. + + In any setup with flat networking, the hosts providing the + ``nova-network`` service are responsible for forwarding traffic from the + private network. They also run and configure dnsmasq as a DHCP + server listening on this bridge, usually on IP address 10.0.0.1 (see + :ref:`compute-dnsmasq`). Compute can determine + the NAT entries for each network, although sometimes NAT is not + used, such as when the network has been configured with all public + IPs, or if a hardware router is used (which is a high availability + option). 
In this case, hosts need to have ``br100`` configured and + physically connected to any other nodes that are hosting VMs. You + must set the ``flat_network_bridge`` option or create networks with + the bridge parameter in order to avoid raising an error. Compute + nodes have iptables or ebtables entries created for each project and + instance to protect against MAC ID or IP address spoofing and ARP + poisoning. + +.. note:: + + In single-host Flat DHCP mode you will be able to ping VMs + through their fixed IP from the ``nova-network`` node, but you + cannot ping them from the compute nodes. This is expected + behavior. + +VLAN Network Manager + This is the default mode for OpenStack Compute. In this mode, + Compute creates a VLAN and bridge for each tenant. For + multiple-machine installations, the VLAN Network Mode requires a + switch that supports VLAN tagging (IEEE 802.1Q). The tenant gets a + range of private IPs that are only accessible from inside the VLAN. + In order for a user to access the instances in their tenant, a + special VPN instance (code named cloudpipe) needs to be created. + Compute generates a certificate and key for the user to access the + VPN and starts the VPN automatically. It provides a private network + segment for each tenant's instances that can be accessed through a + dedicated VPN connection from the internet. In this mode, each + tenant gets its own VLAN, Linux networking bridge, and subnet. + + The subnets are specified by the network administrator, and are + assigned dynamically to a tenant when required. A DHCP server is + started for each VLAN to pass out IP addresses to VM instances from + the subnet assigned to the tenant. All instances belonging to one + tenant are bridged into the same VLAN for that tenant. OpenStack + Compute creates the Linux networking bridges and VLANs when + required. + +These network managers can co-exist in a cloud system. However, because +you cannot select the type of network for a given tenant, you cannot +configure multiple network types in a single Compute installation. + +All network managers configure the network using network drivers. For +example, the Linux L3 driver (``l3.py`` and ``linux_net.py``), which +makes use of ``iptables``, ``route`` and other network management +facilities, and the libvirt `network filtering +facilities `__. The driver is +not tied to any particular network manager; all network managers use the +same driver. The driver usually initializes only when the first VM lands +on this host node. + +All network managers operate in either single-host or multi-host mode. +This choice greatly influences the network configuration. In single-host +mode, a single ``nova-network`` service provides a default gateway for VMs +and hosts a single DHCP server (dnsmasq). In multi-host mode, each +compute node runs its own ``nova-network`` service. In both cases, all +traffic between VMs and the internet flows through ``nova-network``. Each +mode has benefits and drawbacks. For more on this, see the Network +Topology section in the `OpenStack Operations Guide +`__. + +All networking options require network connectivity to be already set up +between OpenStack physical nodes. OpenStack does not configure any +physical network interfaces. All network managers automatically create +VM virtual interfaces. Some network managers can also create network +bridges such as ``br100``. + +The internal network interface is used for communication with VMs. 
The +interface should not have an IP address attached to it before OpenStack +installation, it serves only as a fabric where the actual endpoints are +VMs and dnsmasq. Additionally, the internal network interface must be in +``promiscuous`` mode, so that it can receive packets whose target MAC +address is the guest VM, not the host. + +All machines must have a public and internal network interface +(controlled by these options: ``public_interface`` for the public +interface, and ``flat_interface`` and ``vlan_interface`` for the +internal interface with flat or VLAN managers). This guide refers to the +public network as the external network and the private network as the +internal or tenant network. + +For flat and flat DHCP modes, use the :command:`nova network-create` command +to create a network: + +.. code:: console + + $ nova network-create vmnet \ + --fixed-range-v4 10.0.0.0/16 --fixed-cidr 10.0.20.0/24 --bridge br100 + +This example uses the following parameters: + --fixed-range-v4 specifies the network subnet. + --fixed-cidr specifies a range of fixed IP addresses to allocate, + and can be a subset of the ``--fixed-range-v4`` + argument. + --bridge specifies the bridge device to which this network is + connected on every compute node. + +.. _compute-dnsmasq: + +DHCP server: dnsmasq +~~~~~~~~~~~~~~~~~~~~ + +The Compute service uses +`dnsmasq `__ as the DHCP +server when using either Flat DHCP Network Manager or VLAN Network +Manager. For Compute to operate in IPv4/IPv6 dual-stack mode, use at +least dnsmasq v2.63. The ``nova-network`` service is responsible for +starting dnsmasq processes. + +The behavior of dnsmasq can be customized by creating a dnsmasq +configuration file. Specify the configuration file using the +``dnsmasq_config_file`` configuration option: + +.. code:: ini + + dnsmasq_config_file=/etc/dnsmasq-nova.conf + +For more information about creating a dnsmasq configuration file, see +the `OpenStack Configuration +Reference `__, +and `the dnsmasq +documentation `__. + +dnsmasq also acts as a caching DNS server for instances. You can specify +the DNS server that dnsmasq uses by setting the ``dns_server`` +configuration option in :file:`/etc/nova/nova.conf`. This example configures +dnsmasq to use Google's public DNS server: + +.. code:: ini + + dns_server=8.8.8.8 + +dnsmasq logs to syslog (typically :file:`/var/log/syslog` or +:file:`/var/log/messages`, depending on Linux distribution). Logs can be +useful for troubleshooting, especially in a situation where VM instances +boot successfully but are not reachable over the network. + +Administrators can specify the starting point IP address to reserve with +the DHCP server (in the format n.n.n.n) with this command: + +.. code:: console + + $nova-manage fixed reserve --address IP_ADDRESS + +This reservation only affects which IP address the VMs start at, not the +fixed IP addresses that ``nova-network`` places on the bridges. + + +Configure Compute to use IPv6 addresses +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If you are using OpenStack Compute with ``nova-network``, you can put +Compute into dual-stack mode, so that it uses both IPv4 and IPv6 +addresses for communication. In dual-stack mode, instances can acquire +their IPv6 global unicast address by using a stateless address +auto-configuration mechanism [RFC 4862/2462]. IPv4/IPv6 dual-stack mode +works with both ``VlanManager`` and ``FlatDHCPManager`` networking +modes. + +In ``VlanManager`` networking mode, each project uses a different 64-bit +global routing prefix. 
In ``FlatDHCPManager`` mode, all instances use +one 64-bit global routing prefix. + +This configuration was tested with virtual machine images that have an +IPv6 stateless address auto-configuration capability. This capability is +required for any VM to run with an IPv6 address. You must use an EUI-64 +address for stateless address auto-configuration. Each node that +executes a ``nova-*`` service must have ``python-netaddr`` and ``radvd`` +installed. + +**Switch into IPv4/IPv6 dual-stack mode** + +#. For every node running a ``nova-*`` service, install python-netaddr: + + .. code:: console + + # apt-get install python-netaddr + +#. For every node running ``nova-network``, install ``radvd`` and configure + IPv6 networking: + + .. code:: console + + # apt-get install radvd + # echo 1 > /proc/sys/net/ipv6/conf/all/forwarding + # echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra + +#. On all nodes, edit the :file:`nova.conf` file and specify + ``use_ipv6 = True``. + +#. Restart all ``nova-*`` services. + +**IPv6 configuration options** + +You can use the following options with the :command:`nova network-create` +command: + +- Add a fixed range for IPv6 addresses to the :command:`nova network-create` + command. Specify ``public`` or ``private`` after the ``network-create`` + parameter. + + .. code:: console + + $ nova network-create public --fixed-range-v4 FIXED_RANGE_V4 \ + --vlan VLAN_ID --vpn VPN_START --fixed-range-v6 FIXED_RANGE_V6 + +- Set the IPv6 global routing prefix by using the + ``--fixed_range_v6`` parameter. The default value for the parameter + is ``fd00::/48``. + + When you use ``FlatDHCPManager``, the command uses the original + ``--fixed_range_v6`` value. For example: + + .. code:: console + + $ nova network-create public --fixed-range-v4 10.0.2.0/24 \ + --fixed-range-v6 fd00:1::/48 + +- When you use ``VlanManager``, the command increments the subnet ID + to create subnet prefixes. Guest VMs use this prefix to generate + their IPv6 global unicast address. For example: + + .. code:: console + + $ nova network-create public --fixed-range-v4 10.0.1.0/24 --vlan 100 \ + --vpn 1000 --fixed-range-v6 fd00:1::/48 + +.. list-table:: Description of IPv6 configuration options + :header-rows: 2 + + * - Configuration option = Default value + - Description + * - [DEFAULT] + - + * - fixed_range_v6 = fd00::/48 + - (StrOpt) Fixed IPv6 address block + * - gateway_v6 = None + - (StrOpt) Default IPv6 gateway + * - ipv6_backend = rfc2462 + - (StrOpt) Backend to use for IPv6 generation + * - use_ipv6 = False + - (BoolOpt) Use IPv6 + +Metadata service +~~~~~~~~~~~~~~~~ + +Compute uses a metadata service for virtual machine instances to +retrieve instance-specific data. Instances access the metadata service +at ``http://169.254.169.254``. The metadata service supports two sets of +APIs: an OpenStack metadata API and an EC2-compatible API. Both APIs are +versioned by date. + +To retrieve a list of supported versions for the OpenStack metadata API, +make a GET request to ``http://169.254.169.254/openstack``: + +.. code:: console + + $ curl http://169.254.169.254/openstack + 2012-08-10 + 2013-04-04 + 2013-10-17 + latest + +To list supported versions for the EC2-compatible metadata API, make a +GET request to ``http://169.254.169.254``: + +.. 
code:: console + + $ curl http://169.254.169.254 + 1.0 + 2007-01-19 + 2007-03-01 + 2007-08-29 + 2007-10-10 + 2007-12-15 + 2008-02-01 + 2008-09-01 + 2009-04-04 + latest + +If you write a consumer for one of these APIs, always attempt to access +the most recent API version supported by your consumer first, then fall +back to an earlier version if the most recent one is not available. + +Metadata from the OpenStack API is distributed in JSON format. To +retrieve the metadata, make a GET request to +``http://169.254.169.254/openstack/2012-08-10/meta_data.json``: + +.. code:: console + + $ curl http://169.254.169.254/openstack/2012-08-10/meta_data.json + +.. code-block:: json + :linenos: + + { + "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", + "availability_zone": "nova", + "hostname": "test.novalocal", + "launch_index": 0, + "meta": { + "priority": "low", + "role": "webserver" + }, + "project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f", + "public_keys": { + "mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKV\ + VRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTH\ + bsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCe\ + uMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n" + }, + "name": "test" + } + +Instances also retrieve user data (passed as the ``user_data`` parameter +in the API call or by the ``--user_data`` flag in the :command:`nova boot` +command) through the metadata service, by making a GET request to +``http://169.254.169.254/openstack/2012-08-10/user_data``: + +.. code:: json + + $ curl http://169.254.169.254/openstack/2012-08-10/user_data + #!/bin/bash + echo 'Extra user data here' + +The metadata service has an API that is compatible with version +2009-04-04 of the `Amazon EC2 metadata +service `__. +This means that virtual machine images designed for EC2 will work +properly with OpenStack. + +The EC2 API exposes a separate URL for each metadata element. Retrieve a +listing of these elements by making a GET query to +``http://169.254.169.254/2009-04-04/meta-data/``: + +.. code:: console + + $ curl http://169.254.169.254/2009-04-04/meta-data/ + ami-id + ami-launch-index + ami-manifest-path + block-device-mapping/ + hostname + instance-action + instance-id + instance-type + kernel-id + local-hostname + local-ipv4 + placement/ + public-hostname + public-ipv4 + public-keys/ + ramdisk-id + reservation-id + security-groups + +.. code:: console + + $ curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/ + ami + +.. code:: console + + $ curl http://169.254.169.254/2009-04-04/meta-data/placement/ + availability-zone + +.. code:: console + + $ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/ + 0=mykey + +Instances can retrieve the public SSH key (identified by keypair name +when a user requests a new instance) by making a GET request to +``http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key``: + +.. code:: console + + $ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key + ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+US\ + LGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3B\ + ISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated\ + by Nova + +Instances can retrieve user data by making a GET request to +``http://169.254.169.254/2009-04-04/user-data``: + +.. 
code:: console + + $ curl http://169.254.169.254/2009-04-04/user-data + #!/bin/bash + echo 'Extra user data here' + +The metadata service is implemented by either the nova-api service or +the nova-api-metadata service. Note that the nova-api-metadata service +is generally only used when running in multi-host mode, as it retrieves +instance-specific metadata. If you are running the nova-api service, you +must have ``metadata`` as one of the elements listed in the +``enabled_apis`` configuration option in :file:`/etc/nova/nova.conf`. The +default ``enabled_apis`` configuration setting includes the metadata +service, so you should not need to modify it. + +Hosts access the service at ``169.254.169.254:80``, and this is +translated to ``metadata_host:metadata_port`` by an iptables rule +established by the ``nova-network`` service. In multi-host mode, you can set +``metadata_host`` to ``127.0.0.1``. + +For instances to reach the metadata service, the ``nova-network`` service +must configure iptables to NAT port ``80`` of the ``169.254.169.254`` +address to the IP address specified in ``metadata_host`` (this defaults +to ``$my_ip``, which is the IP address of the ``nova-network`` service) and +port specified in ``metadata_port`` (which defaults to ``8775``) in +:file:`/etc/nova/nova.conf`. + +.. note:: + + The ``metadata_host`` configuration option must be an IP address, + not a host name. + +The default Compute service settings assume that ``nova-network`` and +``nova-api`` are running on the same host. If this is not the case, in the +:file:`/etc/nova/nova.conf` file on the host running ``nova-network``, set the +``metadata_host`` configuration option to the IP address of the host +where nova-api is running. + +.. list-table:: Description of metadata configuration options + :header-rows: 2 + + * - Configuration option = Default value + - Description + * - [DEFAULT] + - + * - metadata_cache_expiration = 15 + - (IntOpt) Time in seconds to cache metadata; 0 to disable metadata + caching entirely (not recommended). Increasing this should improve + response times of the metadata API when under heavy load. Higher values + may increase memoryusage and result in longer times for host metadata + changes to take effect. + * - metadata_host = $my_ip + - (StrOpt) The IP address for the metadata API server + * - metadata_listen = 0.0.0.0 + - (StrOpt) The IP address on which the metadata API will listen. + * - metadata_listen_port = 8775 + - (IntOpt) The port on which the metadata API will listen. + * - metadata_manager = nova.api.manager.MetadataManager + - (StrOpt) OpenStack metadata service manager + * - metadata_port = 8775 + - (IntOpt) The port for the metadata API port + * - metadata_workers = None + - (IntOpt) Number of workers for metadata service. The default will be the number of CPUs available. + * - vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData + - (StrOpt) Driver to use for vendor data + * - vendordata_jsonfile_path = None + - (StrOpt) File to load JSON formatted vendor data from + +Enable ping and SSH on VMs +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You need to enable ``ping`` and ``ssh`` on your VMs for network access. +This can be done with either the :command:`nova` or :command:`euca2ools` +commands. + +.. note:: + + Run these commands as root only if the credentials used to interact + with nova-api are in :file:`/root/.bashrc`. If the EC2 credentials in + the :file:`.bashrc` file are for an unprivileged user, you must run + these commands as that user instead. 
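Before running the commands below, you can check which credentials are
exported in your current shell. The variable names shown are only an
example and depend on how your environment was set up:

.. code:: console

   $ env | grep -E "OS_|EC2_"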
+ +Enable ping and SSH with :command:`nova` commands: + +.. code:: console + + $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 + $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 + +Enable ping and SSH with ``euca2ools``: + +.. code:: console + + $ euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default + $ euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default + +If you have run these commands and still cannot ping or SSH your +instances, check the number of running ``dnsmasq`` processes, there +should be two. If not, kill the processes and restart the service with +these commands: + +.. code:: console + + # killall dnsmasq + # service nova-network restart + +Configure public (floating) IP addresses +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to configure floating IP addresses with +``nova-network``. For information about doing this with OpenStack +Networking, see :ref:`L3-routing-and-NAT`. + +Private and public IP addresses +------------------------------- + +In this section, the term floating IP address is used to refer to an IP +address, usually public, that you can dynamically add to a running +virtual instance. + +Every virtual instance is automatically assigned a private IP address. +You can choose to assign a public (or floating) IP address instead. +OpenStack Compute uses network address translation (NAT) to assign +floating IPs to virtual instances. + +To be able to assign a floating IP address, edit the +:file:`/etc/nova/nova.conf` file to specify which interface the +``nova-network`` service should bind public IP addresses to: + +.. code:: ini + + public_interface=VLAN100 + +If you make changes to the :file:`/etc/nova/nova.conf` file while the +``nova-network`` service is running, you will need to restart the service to +pick up the changes. + +.. Note:: + + Floating IPs are implemented by using a source NAT (SNAT rule in + iptables), so security groups can sometimes display inconsistent + behavior if VMs use their floating IP to communicate with other VMs, + particularly on the same physical host. Traffic from VM to VM across + the fixed network does not have this issue, and so this is the + recommended setup. To ensure that traffic does not get SNATed to the + floating range, explicitly set: + + .. code:: ini + + dmz_cidr=x.x.x.x/y + + The ``x.x.x.x/y`` value specifies the range of floating IPs for each + pool of floating IPs that you define. This configuration is also + required if the VMs in the source group have floating IPs. + +Enable IP forwarding +-------------------- + +IP forwarding is disabled by default on most Linux distributions. You +will need to enable it in order to use floating IPs. + +.. Note:: + + IP forwarding only needs to be enabled on the nodes that run + ``nova-network``. However, you will need to enable it on all compute + nodes if you use ``multi_host`` mode. + +To check if IP forwarding is enabled, run: + +.. code:: console + + $ cat /proc/sys/net/ipv4/ip_forward + 0 + +Alternatively, run: + +.. code:: console + + $ sysctl net.ipv4.ip_forward + net.ipv4.ip_forward = 0 + +In these examples, IP forwarding is disabled. + +To enable IP forwarding dynamically, run: + +.. code:: console + + # sysctl -w net.ipv4.ip_forward=1 + +Alternatively, run: + +.. code:: console + + # echo 1 > /proc/sys/net/ipv4/ip_forward + +To make the changes permanent, edit the ``/etc/sysctl.conf`` file and +update the IP forwarding setting: + +.. code:: ini + + net.ipv4.ip_forward = 1 + +Save the file and run this command to apply the changes: + +.. 
code:: console + + # sysctl -p + +You can also apply the changes by restarting the network service: + +- on Ubuntu, Debian: + + .. code:: console + + # /etc/init.d/networking restart + +- on RHEL, Fedora, CentOS, openSUSE and SLES: + + .. code:: console + + # service network restart + +Create a list of available floating IP addresses +------------------------------------------------ + +Compute maintains a list of floating IP addresses that are available for +assigning to instances. Use the :command:`nova-manage floating` commands +to perform floating IP operations: + +- Add entries to the list: + + .. code:: console + + # nova-manage floating create --pool nova --ip_range 68.99.26.170/31 + +- List the floating IP addresses in the pool: + + .. code:: console + + # nova-manage floating list + +- Create specific floating IPs for either a single address or a + subnet: + + .. code:: console + + # nova-manage floating create --pool POOL_NAME --ip_range CIDR + +- Remove floating IP addresses using the same parameters as the create + command: + + .. code:: console + + # nova-manage floating delete CIDR + +For more information about how administrators can associate floating IPs +with instances, see `Manage IP +addresses `__ +in the OpenStack Admin User Guide. + +Automatically add floating IPs +------------------------------ + +You can configure ``nova-network`` to automatically allocate and assign a +floating IP address to virtual instances when they are launched. Add +this line to the :file:`/etc/nova/nova.conf` file: + +.. code:: ini + + auto_assign_floating_ip=True + +Save the file, and restart ``nova-network`` + +.. note:: + + If this option is enabled, but all floating IP addresses have + already been allocated, the :command:`nova boot` command will fail. + +Remove a network from a project +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You cannot delete a network that has been associated to a project. This +section describes the procedure for dissociating it so that it can be +deleted. + +In order to disassociate the network, you will need the ID of the +project it has been associated to. To get the project ID, you will need +to be an administrator. + +Disassociate the network from the project using the :command:`scrub` command, +with the project ID as the final parameter: + +.. code:: console + + # nova-manage project scrub --project ID + +Multiple interfaces for instances (multinic) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The multinic feature allows you to use more than one interface with your +instances. This is useful in several scenarios: + +- SSL Configurations (VIPs) + +- Services failover/HA + +- Bandwidth Allocation + +- Administrative/Public access to your instances + +Each VIP represents a separate network with its own IP block. Every +network mode has its own set of changes regarding multinic usage: + +|multinic flat manager| + +|multinic flatdhcp manager| + +|multinic VLAN manager| + +Using multinic +-------------- + +In order to use multinic, create two networks, and attach them to the +tenant (named ``project`` on the command line): + +.. code:: console + + $ nova network-create first-net --fixed-range-v4 20.20.0.0/24 --project-id $your-project + $ nova network-create second-net --fixed-range-v4 20.20.10.0/24 --project-id $your-project + +Each new instance will now receive two IP addresses from their +respective DHCP servers: + +.. 
code:: console + + $ nova list + +-----+------------+--------+----------------------------------------+ + | ID | Name | Status | Networks | + +-----+------------+--------+----------------------------------------+ + | 124 | Server 124 | ACTIVE | network2=20.20.0.3; private=20.20.10.14| + +-----+------------+--------+----------------------------------------+ + +.. note:: + + Make sure you start the second interface on the instance, or it + won't be reachable through the second IP. + +This example demonstrates how to set up the interfaces within the +instance. This is the configuration that needs to be applied inside the +image. + +Edit the :file:`/etc/network/interfaces` file: + +.. code-block:: bash + :linenos: + + # The loopback network interface + auto lo + iface lo inet loopback + + auto eth0 + iface eth0 inet dhcp + + auto eth1 + iface eth1 inet dhcp + +If the Virtual Network Service Neutron is installed, you can specify the +networks to attach to the interfaces by using the ``--nic`` flag with +the :command:`nova` command: + +.. code:: console + + $ nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=NETWORK1_ID --nic net-id=NETWORK2_ID test-vm1 + +Troubleshooting Networking +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Cannot reach floating IPs +------------------------- + +If you cannot reach your instances through the floating IP address: + +- Check that the default security group allows ICMP (ping) and SSH + (port 22), so that you can reach the instances: + + .. code:: console + + $ nova secgroup-list-rules default + +-------------+-----------+---------+-----------+--------------+ + | IP Protocol | From Port | To Port | IP Range | Source Group | + +-------------+-----------+---------+-----------+--------------+ + | icmp | -1 | -1 | 0.0.0.0/0 | | + | tcp | 22 | 22 | 0.0.0.0/0 | | + +-------------+-----------+---------+-----------+--------------+ + +- Check the NAT rules have been added to iptables on the node that is + running ``nova-network``: + + .. code:: console + + # iptables -L -nv -t nat + -A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3 + -A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170 + +- Check that the public address (`68.99.26.170 <68.99.26.170>`__ in + this example), has been added to your public interface. You should + see the address in the listing when you use the :command:`ip addr` command: + + .. code:: console + + $ ip addr + 2: eth0: mtu 1500 qdisc mq state UP qlen 1000 + link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff + inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0 + inet 68.99.26.170/32 scope global eth0 + inet6 fe80::82b:2bf:fe1:4b2/64 scope link + valid_lft forever preferred_lft forever + + .. note:: + + You cannot use ``SSH`` to access an instance with a public IP from within + the same server because the routing configuration does not allow + it. + +- Use ``tcpdump`` to identify if packets are being routed to the + inbound interface on the compute host. If the packets are reaching + the compute hosts but the connection is failing, the issue may be + that the packet is being dropped by reverse path filtering. Try + disabling reverse-path filtering on the inbound interface. For + example, if the inbound interface is ``eth2``, run: + + .. code:: console + + # sysctl -w net.ipv4.conf.ETH2.rp_filter=0 + + If this solves the problem, add the following line to + :file:`/etc/sysctl.conf` so that the reverse-path filter is persistent: + + .. 
code:: ini + + net.ipv4.conf.ETH2.rp_filter=0 + +Temporarily disable firewall +---------------------------- + +To help debug networking issues with reaching VMs, you can disable the +firewall by setting this option in :file:`/etc/nova/nova.conf`: + +.. code:: ini + + firewall_driver=nova.virt.firewall.NoopFirewallDriver + +We strongly recommend you remove this line to re-enable the firewall +once your networking issues have been resolved. + +Packet loss from instances to nova-network server (VLANManager mode) +-------------------------------------------------------------------- + +If you can access your instances with ``SSH`` but the network to your instance +is slow, or if you find that certain operations are slower than +they should be (for example, ``sudo``), packet loss could be occurring +on the connection to the instance. + +Packet loss can be caused by Linux networking configuration settings +related to bridges. Certain settings can cause packets to be dropped +between the VLAN interface (for example, ``vlan100``) and the associated +bridge interface (for example, ``br100``) on the host running +``nova-network``. + +One way to check whether this is the problem is to open three terminals +and run the following commands: + +#. In the first terminal, on the host running ``nova-network``, use + ``tcpdump`` on the VLAN interface to monitor DNS-related traffic + (UDP, port 53). As root, run: + + .. code:: console + + # tcpdump -K -p -i vlan100 -v -vv udp port 53 + +#. In the second terminal, also on the host running ``nova-network``, use + ``tcpdump`` to monitor DNS-related traffic on the bridge interface. + As root, run: + + .. code:: console + + # tcpdump -K -p -i br100 -v -vv udp port 53 + +#. In the third terminal, use ``SSH`` to access the instance and generate DNS + requests by using the :command:`nslookup` command: + + .. code:: console + + $ nslookup www.google.com + + The symptoms may be intermittent, so try running :command:`nslookup` + multiple times. If the network configuration is correct, the command + should return immediately each time. If it is not correct, the + command hangs for several seconds before returning. + +#. If the :command:`nslookup` command sometimes hangs, and there are packets + that appear in the first terminal but not the second, then the + problem may be due to filtering done on the bridges. Try disabling + filtering, and running these commands as root: + + .. code:: console + + # sysctl -w net.bridge.bridge-nf-call-arptables=0 + # sysctl -w net.bridge.bridge-nf-call-iptables=0 + # sysctl -w net.bridge.bridge-nf-call-ip6tables=0 + + If this solves your issue, add the following lines to + :file:`/etc/sysctl.conf` so that the changes are persistent: + + .. code:: ini + + net.bridge.bridge-nf-call-arptables=0 + net.bridge.bridge-nf-call-iptables=0 + net.bridge.bridge-nf-call-ip6tables=0 + +KVM: Network connectivity works initially, then fails +----------------------------------------------------- + +With KVM hypervisors, instances running Ubuntu 12.04 sometimes lose +network connectivity after functioning properly for a period of time. +Try loading the ``vhost_net`` kernel module as a workaround for this +issue (see `bug +#997978 `__). +This kernel module may also `improve network +performance `__ on KVM. To load +the kernel module: + +.. code:: console + + # modprobe vhost_net + +.. note:: + + Loading the module has no effect on running instances. + +.. 
|multinic flat manager| image:: ../../common/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-Flat-manager.jpg + :width: 600 +.. |multinic flatdhcp manager| image:: ../../common/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-Flat-DHCP-manager.jpg + :width: 600 +.. |multinic VLAN manager| image:: ../../common/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-VLAN-manager.jpg + :width: 600 diff --git a/doc/admin-guide-cloud-rst/source/compute-recover-nodes.rst b/doc/admin-guide-cloud-rst/source/compute-recover-nodes.rst deleted file mode 100644 index 16e535b442..0000000000 --- a/doc/admin-guide-cloud-rst/source/compute-recover-nodes.rst +++ /dev/null @@ -1,333 +0,0 @@ -.. _section_nova-compute-node-down: - -================================== -Recover from a failed compute node -================================== - -If Compute is deployed with a shared file system, and a node fails, -there are several methods to quickly recover from the failure. This -section discusses manual recovery. - -.. TODO ../../common/section_cli_nova_evacuate.xml - -.. _nova-compute-node-down-manual-recovery: - -Manual Recovery -~~~~~~~~~~~~~~~ - -To recover a KVM or libvirt compute node, see -the section called :ref:`nova-compute-node-down-manual-recovery`. For -all other hypervisors, use this procedure: - -**Recovering from a failed compute node manually** - -#. Identify the VMs on the affected hosts. To do this, you can use a - combination of ``nova list`` and ``nova show`` or - ``euca-describe-instances``. For example, this command displays - information about instance i-000015b9 that is running on node np-rcc54: - - .. code:: console - - $ euca-describe-instances - i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60 - -#. Query the Compute database to check the status of the host. This example - converts an EC2 API instance ID into an OpenStack ID. If you use the - :command:`nova` commands, you can substitute the ID directly (the output in - this example has been truncated): - - .. code:: ini - - mysql> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G; - *************************** 1. row *************************** - created_at: 2012-06-19 00:48:11 - updated_at: 2012-07-03 00:35:11 - deleted_at: NULL - ... - id: 5561 - ... - power_state: 5 - vm_state: shutoff - ... - hostname: at3-ui02 - host: np-rcc54 - ... - uuid: 3f57699a-e773-4650-a443-b4b37eed5a06 - ... - task_state: NULL - ... - - .. note:: - - The credentials for your database can be found in :file:`/etc/nova.conf`. - -#. Decide which compute host the affected VM should be moved to, and run - this database command to move the VM to the new host: - - .. code:: console - - mysql> UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; - -#. If you are using a hypervisor that relies on libvirt (such as KVM), - update the :file:`libvirt.xml` file (found in - :file:`/var/lib/nova/instances/[instance ID]`) with these changes: - - - Change the ``DHCPSERVER`` value to the host IP address of the new - compute host. - - - Update the VNC IP to `0.0.0.0` - -#. Reboot the VM: - - .. code:: console - - $ nova reboot 3f57699a-e773-4650-a443-b4b37eed5a06 - -The database update and :command:`nova reboot` command should be all that is -required to recover a VM from a failed host. 
However, if you continue to -have problems try recreating the network filter configuration using -``virsh``, restarting the Compute services, or updating the ``vm_state`` -and ``power_state`` in the Compute database. - -.. _section_nova-uid-mismatch: - -**Recover from a UID/GID mismatch** - -In some cases, files on your compute node can end up using the wrong UID -or GID. This can happen when running OpenStack Compute, using a shared -file system, or with an automated configuration tool. This can cause a -number of problems, such as inability to perform live migrations, or -start virtual machines. - -This procedure runs on nova-compute hosts, based on the KVM hypervisor: - -#. Set the nova UID in :file:`/etc/passwd` to the same number on all hosts (for - example, 112). - - .. note:: - - Make sure you choose UIDs or GIDs that are not in use for other - users or groups. - -#. Set the ``libvirt-qemu`` UID in :file:`/etc/passwd` to the same number on - all hosts (for example, 119). - -#. Set the ``nova`` group in :file:`/etc/group` file to the same number on all - hosts (for example, 120). - -#. Set the ``libvirtd`` group in :file:`/etc/group` file to the same number on - all hosts (for example, 119). - -#. Stop the services on the compute node. - -#. Change all the files owned by user or group nova. For example: - - .. code:: console - - # find / -uid 108 -exec chown nova {} \; - # note the 108 here is the old nova UID before the change - # find / -gid 120 -exec chgrp nova {} \; - -#. Repeat all steps for the :file:`libvirt-qemu` files, if required. - -#. Restart the services. - -#. Run the :command:`find` command to verify that all files use the correct - identifiers. - -.. _section_nova-disaster-recovery-process: - -Recover cloud after disaster ----------------------------- - -This section covers procedures for managing your cloud after a disaster, -and backing up persistent storage volumes. Backups are mandatory, even -outside of disaster scenarios. - -For a definition of a disaster recovery plan (DRP), see -`http://en.wikipedia.org/wiki/Disaster\_Recovery\_Plan `_. - -A disaster could happen to several components of your architecture (for -example, a disk crash, network loss, or a power failure). In this -example, the following components are configured: - -- A cloud controller (nova-api, nova-objectstore, nova-network) - -- A compute node (nova-compute) - -- A storage area network (SAN) used by OpenStack Block Storage - (cinder-volumes) - -The worst disaster for a cloud is power loss, which applies to all three -components. Before a power loss: - -- Create an active iSCSI session from the SAN to the cloud controller - (used for the ``cinder-volumes`` LVM's VG). - -- Create an active iSCSI session from the cloud controller to the compute - node (managed by cinder-volume). - -- Create an iSCSI session for every volume (so 14 EBS volumes requires 14 - iSCSI sessions). - -- Create iptables or ebtables rules from the cloud controller to the - compute node. This allows access from the cloud controller to the - running instance. - -- Save the current state of the database, the current state of the running - instances, and the attached volumes (mount point, volume ID, volume - status, etc), at least from the cloud controller to the compute node. - -After power is recovered and all hardware components have restarted: - -- The iSCSI session from the SAN to the cloud no longer exists. - -- The iSCSI session from the cloud controller to the compute node no - longer exists. 
- -- The iptables and ebtables from the cloud controller to the compute - node are recreated. This is because nova-network reapplies - configurations on boot. - -- Instances are no longer running. - - Note that instances will not be lost, because neither ``destroy`` nor - ``terminate`` was invoked. The files for the instances will remain on - the compute node. - -- The database has not been updated. - -**Begin recovery** - -.. warning:: - - Do not add any extra steps to this procedure, or perform the steps - out of order. - -#. Check the current relationship between the volume and its instance, so - that you can recreate the attachment. - - This information can be found using the :command:`nova volume-list` command. - Note that the ``nova`` client also includes the ability to get volume - information from OpenStack Block Storage. - -#. Update the database to clean the stalled state. Do this for every - volume, using these queries: - - .. code:: console - - mysql> use cinder; - mysql> update volumes set mountpoint=NULL; - mysql> update volumes set status="available" where status <>"error_deleting"; - mysql> update volumes set attach_status="detached"; - mysql> update volumes set instance_id=0; - - Use ``nova volume-list`` commands to list all volumes. - -#. Restart the instances using the :command:`nova reboot INSTANCE` command. - - .. important:: - - Some instances will completely reboot and become reachable, while - some might stop at the plymouth stage. This is expected behavior, DO - NOT reboot a second time. - - Instance state at this stage depends on whether you added an - `/etc/fstab` entry for that volume. Images built with the - cloud-init package remain in a ``pending`` state, while others skip - the missing volume and start. This step is performed in order to ask - Compute to reboot every instance, so that the stored state is - preserved. It does not matter if not all instances come up - successfully. For more information about cloud-init, see - `help.ubuntu.com/community/CloudInit/ `__. - -#. Reattach the volumes to their respective instances, if required, using - the :command:`nova volume-attach` command. This example uses a file of - listed volumes to reattach them: - - .. code:: bash - - #!/bin/bash - - while read line; do - volume=`echo $line | $CUT -f 1 -d " "` - instance=`echo $line | $CUT -f 2 -d " "` - mount_point=`echo $line | $CUT -f 3 -d " "` - echo "ATTACHING VOLUME FOR INSTANCE - $instance" - nova volume-attach $instance $volume $mount_point - sleep 2 - done < $volumes_tmp_file - - Instances that were stopped at the plymouth stage will now automatically - continue booting and start normally. Instances that previously started - successfully will now be able to see the volume. - -#. Log in to the instances with SSH and reboot them. - - If some services depend on the volume, or if a volume has an entry in - fstab, you should now be able to restart the instance. Restart directly - from the instance itself, not through ``nova``: - - .. code:: console - - # shutdown -r now - - When you are planning for and performing a disaster recovery, follow - these tips: - -- Use the ``errors=remount`` parameter in the :file:`fstab` file to prevent - data corruption. - - This parameter will cause the system to disable the ability to write - to the disk if it detects an I/O error. This configuration option - should be added into the cinder-volume server (the one which performs - the iSCSI connection to the SAN), and into the instances' :file:`fstab` - files. 
- -- Do not add the entry for the SAN's disks to the cinder-volume's - :file:`fstab` file. - - Some systems hang on that step, which means you could lose access to - your cloud-controller. To re-run the session manually, run this - command before performing the mount: - - .. code:: console - - # iscsiadm -m discovery -t st -p $SAN_IP $ iscsiadm -m node --target-name $IQN -p $SAN_IP -l - -- On your instances, if you have the whole ``/home/`` directory on the - disk, leave a user's directory with the user's bash files and the - :file:`authorized_keys` file (instead of emptying the ``/home`` directory - and mapping the disk on it). - - This allows you to connect to the instance even without the volume - attached, if you allow only connections through public keys. - -If you want to script the disaster recovery plan (DRP), a bash script is -available from `https://github.com/Razique `_ -which performs the following steps: - -#. An array is created for instances and their attached volumes. - -#. The MySQL database is updated. - -#. All instances are restarted with euca2ools. - -#. The volumes are reattached. - -#. An SSH connection is performed into every instance using Compute - credentials. - -The script includes a ``test mode``, which allows you to perform that -whole sequence for only one instance. - -To reproduce the power loss, connect to the compute node which runs that -instance and close the iSCSI session. Do not detach the volume using the -:command:`nova volume-detach` command, manually close the iSCSI session. -This example closes an iSCSI session with the number 15: - -.. code:: console - - # iscsiadm -m session -u -r 15 - -Do not forget the ``-r`` flag. Otherwise, you will close all sessions. diff --git a/doc/admin-guide-cloud-rst/source/compute-rootwrap.rst b/doc/admin-guide-cloud-rst/source/compute-rootwrap.rst deleted file mode 100644 index eb36bd68be..0000000000 --- a/doc/admin-guide-cloud-rst/source/compute-rootwrap.rst +++ /dev/null @@ -1,102 +0,0 @@ -.. _root-wrap-reference: - -==================== -Secure with rootwrap -==================== - -Rootwrap allows unprivileged users to safely run Compute actions as the -root user. Compute previously used :command:`sudo` for this purpose, but this -was difficult to maintain, and did not allow advanced filters. The -:command:`rootwrap` command replaces :command:`sudo` for Compute. - -To use rootwrap, prefix the Compute command with :command:`nova-rootwrap`. For -example: - -.. code:: console - - $ sudo nova-rootwrap /etc/nova/rootwrap.conf command - -A generic ``sudoers`` entry lets the Compute user run :command:`nova-rootwrap` -as root. The :command:`nova-rootwrap` code looks for filter definition -directories in its configuration file, and loads command filters from -them. It then checks if the command requested by Compute matches one of -those filters and, if so, executes the command (as root). If no filter -matches, it denies the request. - -.. note:: - - Be aware of issues with using NFS and root-owned files. The NFS - share must be configured with the ``no_root_squash`` option enabled, - in order for rootwrap to work correctly. - -Rootwrap is fully controlled by the root user. The root user -owns the sudoers entry which allows Compute to run a specific -rootwrap executable as root, and only with a specific -configuration file (which should also be owned by root). -The :command:`nova-rootwrap` command imports the Python -modules it needs from a cleaned, system-default PYTHONPATH. 
-The root-owned configuration file points to root-owned -filter definition directories, which contain root-owned -filters definition files. This chain ensures that the Compute -user itself is not in control of the configuration or modules -used by the :command:`nova-rootwrap` executable. - -Rootwrap is configured using the :file:`rootwrap.conf` file. Because -it's in the trusted security path, it must be owned and writable -by only the root user. The file's location is specified in both -the sudoers entry and in the :file:`nova.conf` configuration file -with the ``rootwrap_config=entry`` parameter. - -The :file:`rootwrap.conf` file uses an INI file format with these -sections and parameters: - -.. list-table:: rootwrap.conf configuration options - :widths: 64 31 - - - * - Configuration option=Default value - - (Type) Description - * - [DEFAULT] - filters\_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap - - (ListOpt) Comma-separated list of directories - containing filter definition files. - Defines where rootwrap filters are stored. - Directories defined on this line should all - exist, and be owned and writable only by the - root user. - -If the root wrapper is not performing correctly, you can add a -workaround option into the :file:`nova.conf` configuration file. This -workaround re-configures the root wrapper configuration to fall back to -running commands as sudo, and is a Kilo release feature. - -Including this workaround in your configuration file safeguards your -environment from issues that can impair root wrapper performance. Tool -changes that have impacted -`Python Build Reasonableness (PBR) `__ -for example, are a known issue that affects root wrapper performance. - -To set up this workaround, configure the ``disable_rootwrap`` option in -the ``[workaround]`` section of the :file:`nova.conf` configuration file. - -The filters definition files contain lists of filters that rootwrap will -use to allow or deny a specific command. They are generally suffixed by -``.filters`` . Since they are in the trusted security path, they need to -be owned and writable only by the root user. Their location is specified -in the :file:`rootwrap.conf` file. - -Filter definition files use an INI file format with a ``[Filters]`` -section and several lines, each with a unique parameter name, which -should be different for each filter you define: - -.. list-table:: filters configuration options - :widths: 72 39 - - - * - Configuration option=Default value - - (Type) Description - * - [Filters] - filter\_name=kpartx: CommandFilter, /sbin/kpartx, root - - (ListOpt) Comma-separated list containing the filter class to - use, followed by the Filter arguments (which vary depending - on the Filter class selected). diff --git a/doc/admin-guide-cloud-rst/source/compute-security.rst b/doc/admin-guide-cloud-rst/source/compute-security.rst deleted file mode 100644 index ad5f8feb6e..0000000000 --- a/doc/admin-guide-cloud-rst/source/compute-security.rst +++ /dev/null @@ -1,36 +0,0 @@ -.. _section-compute-security: - -================== -Security hardening -================== - -OpenStack Compute can be integrated with various third-party -technologies to increase security. For more information, see the -`OpenStack Security Guide `_. - -.. :ref:section_trusted-compute-pools.rst - -Encrypt Compute metadata traffic -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -**Enabling SSL encryption** - -OpenStack supports encrypting Compute metadata traffic with HTTPS. -Enable SSL encryption in the :file:`metadata_agent.ini` file. 
- -#. Enable the HTTPS protocol:: - - nova_metadata_protocol = https - -#. Determine whether insecure SSL connections are accepted for Compute - metadata server requests. The default value is ``False``:: - - nova_metadata_insecure = False - -#. Specify the path to the client certificate:: - - nova_client_cert = PATH_TO_CERT - -#. Specify the path to the private key:: - - nova_client_priv_key = PATH_TO_KEY diff --git a/doc/admin-guide-cloud-rst/source/compute-system-admin.rst b/doc/admin-guide-cloud-rst/source/compute-system-admin.rst index cf3af54198..8528f1eefc 100644 --- a/doc/admin-guide-cloud-rst/source/compute-system-admin.rst +++ b/doc/admin-guide-cloud-rst/source/compute-system-admin.rst @@ -1,4 +1,4 @@ -.. _section_compute-system-admin: +.. _compute-trusted-pools.rst: ===================== System administration @@ -17,47 +17,56 @@ deployment. The responsibilities of services and drivers are: **Services** -- ``nova-api``: receives XML requests and sends them to the rest of the +``nova-api`` + receives XML requests and sends them to the rest of the system. A WSGI app routes and authenticates requests. Supports the EC2 and OpenStack APIs. A :file:`nova.conf` configuration file is created when Compute is installed. -- ``nova-cert``: manages certificates. +``nova-cert`` + manages certificates. -- ``nova-compute``: manages virtual machines. Loads a Service object, and +``nova-compute`` + manages virtual machines. Loads a Service object, and exposes the public methods on ComputeManager through a Remote Procedure Call (RPC). -- ``nova-conductor``: provides database-access support for Compute nodes - (thereby reducing security risks).10D0-9355 +``nova-conductor`` + provides database-access support for Compute nodes + (thereby reducing security risks). -- ``nova-consoleauth``: manages console authentication. +``nova-consoleauth`` + manages console authentication. -- ``nova-objectstore``: a simple file-based storage system for images that +``nova-objectstore`` + a simple file-based storage system for images that replicates most of the S3 API. It can be replaced with OpenStack Image service and either a simple image manager or OpenStack Object Storage as the virtual machine image storage facility. It must exist on the same node as nova-compute. -- ``nova-network``: manages floating and fixed IPs, DHCP, bridging and +``nova-network`` + manages floating and fixed IPs, DHCP, bridging and VLANs. Loads a Service object which exposes the public methods on one of the subclasses of NetworkManager. Different networking strategies are available by changing the ``network_manager`` configuration option to ``FlatManager``, ``FlatDHCPManager``, or ``VLANManager`` (defaults to ``VLANManager`` if nothing is specified). -- ``nova-scheduler``: dispatches requests for new virtual machines to the +``nova-scheduler`` + dispatches requests for new virtual machines to the correct node. -- ``nova-novncproxy``: provides a VNC proxy for browsers, allowing VNC +``nova-novncproxy`` + provides a VNC proxy for browsers, allowing VNC consoles to access virtual machines. -.. note:: +.. note:: - Some services have drivers that change how the service implements - its core functionality. For example, the nova-compute service - supports drivers that let you choose which hypervisor type it can - use. nova-network and nova-scheduler also have drivers. + Some services have drivers that change how the service implements + its core functionality. 
For example, the nova-compute service + supports drivers that let you choose which hypervisor type it can + use. nova-network and nova-scheduler also have drivers. .. _section_manage-compute-users: @@ -73,12 +82,378 @@ the user. To begin using Compute, you must create a user with the Identity Service. -.. TODO/doc/admin-guide-cloud-rst/source/compute_config-firewalls.rst +Manage Volumes +~~~~~~~~~~~~~~ + +Depending on the setup of your cloud provider, they may give you an +endpoint to use to manage volumes, or there may be an extension under +the covers. In either case, you can use the ``nova`` CLI to manage +volumes. + +.. list-table:: nova volume commands + :header-rows: 1 + + * - Command + - Description + * - volume-attach + - Attach a volume to a server. + * - volume-create + - Add a new volume. + * - volume-delete + - Remove a volume. + * - volume-detach + - Detach a volume from a server. + * - volume-list + - List all the volumes. + * - volume-show + - Show details about a volume. + * - volume-snapshot-create + - Add a new snapshot. + * - volume-snapshot-delete + - Remove a snapshot. + * - volume-snapshot-list + - List all the snapshots. + * - volume-snapshot-show + - Show details about a snapshot. + * - volume-type-create + - Create a new volume type. + * - volume-type-delete + - Delete a specific flavor + * - volume-type-list + - Print a list of available 'volume types'. + * - volume-update + - Update an attached volume. + +| + +For example, to list IDs and names of Compute volumes, run: + +.. code:: console + + $ nova volume-list + +-----------+-----------+--------------+------+-------------+-------------+ + | ID | Status | Display Name | Size | Volume Type | Attached to | + +-----------+-----------+--------------+------+-------------+-------------+ + | 1af4cb9...| available | PerfBlock | 1 | Performance | | + +-----------+-----------+--------------+------+-------------+-------------+ + +.. _compute-flavors: + +Flavors +~~~~~~~ + +Admin users can use the :command:`nova flavor-` commands to customize and +manage flavors. To see the available flavor-related commands, run: + +.. code:: console + + $ nova help | grep flavor- + flavor-access-add Add flavor access for the given tenant. + flavor-access-list Print access information about the given flavor. + flavor-access-remove Remove flavor access for the given tenant. + flavor-create Create a new flavor + flavor-delete Delete a specific flavor + flavor-key Set or unset extra_spec for a flavor. + flavor-list Print a list of available 'flavors' (sizes of + flavor-show Show details about the given flavor. + +.. note:: + + - Configuration rights can be delegated to additional users by + redefining the access controls for + ``compute_extension:flavormanage`` in :file:`/etc/nova/policy.json` + on the nova-api server. + + - To modify an existing flavor in the dashboard, you must delete + the flavor and create a modified one with the same name. + +Flavors define these elements: + +**Identity Service configuration file sections** + ++-------------+---------------------------------------------------------------+ +| Element | Description | ++=============+===============================================================+ +| Name | A descriptive name. XX.SIZE_NAME is typically not required, | +| | though some third party tools may rely on it. | ++-------------+---------------------------------------------------------------+ +| Memory_MB | Virtual machine memory in megabytes. 
| ++-------------+---------------------------------------------------------------+ +| Disk | Virtual root disk size in gigabytes. This is an ephemeral di\ | +| | sk that the base image is copied into. When booting from a p\ | +| | ersistent volume it is not used. The "0" size is a special c\ | +| | ase which uses the native base image size as the size of the | +| | ephemeral root volume. | ++-------------+---------------------------------------------------------------+ +| Ephemeral | Specifies the size of a secondary ephemeral data disk. This | +| | is an empty, unformatted disk and exists only for the life o\ | +| | f the instance. | ++-------------+---------------------------------------------------------------+ +| Swap | Optional swap space allocation for the instance. | ++-------------+---------------------------------------------------------------+ +| VCPUs | Number of virtual CPUs presented to the instance. | ++-------------+---------------------------------------------------------------+ +| RXTX_Factor | Optional property allows created servers to have a different | +| | bandwidth cap than that defined in the network they are att\ | +| | ached to. This factor is multiplied by the rxtx_base propert\ | +| | y of the network. Default value is 1.0. That is, the same as | +| | attached network. This parameter is only available for Xen | +| | or NSX based systems. | ++-------------+---------------------------------------------------------------+ +| Is_Public | Boolean value, whether flavor is available to all users or p\ | +| | rivate to the tenant it was created in. Defaults to True. | ++-------------+---------------------------------------------------------------+ +| extra_specs | Key and value pairs that define on which compute nodes a fla\ | +| | vor can run. These pairs must match corresponding pairs on t\ | +| | he compute nodes. Use to implement special resources, such a\ | +| | s flavors that run on only compute nodes with GPU hardware. | ++-------------+---------------------------------------------------------------+ + +| + +Flavor customization can be limited by the hypervisor in use. For +example the libvirt driver enables quotas on CPUs available to a VM, +disk tuning, bandwidth I/O, watchdog behavior, random number generator +device control, and instance VIF traffic control. + +CPU limits + You can configure the CPU limits with control parameters with the + ``nova`` client. For example, to configure the I/O limit, use: + + .. code:: console + + $ nova flavor-key m1.small set quota:read_bytes_sec=10240000 + $ nova flavor-key m1.small set quota:write_bytes_sec=10240000 + + Use these optional parameters to control weight shares, enforcement + intervals for runtime quotas, and a quota for maximum allowed + bandwidth: + + - ``cpu_shares``. Specifies the proportional weighted share for the + domain. If this element is omitted, the service defaults to the + OS provided defaults. There is no unit for the value; it is a + relative measure based on the setting of other VMs. For example, + a VM configured with value 2048 gets twice as much CPU time as a + VM configured with value 1024. + + - ``cpu_period``. Specifies the enforcement interval (unit: + microseconds) for QEMU and LXC hypervisors. Within a period, each + VCPU of the domain is not allowed to consume more than the quota + worth of runtime. The value should be in range ``[1000, 1000000]``. + A period with value 0 means no value. + + - ``cpu_limit``. Specifies the upper limit for VMware machine CPU + allocation in MHz. 
This parameter ensures that a machine never + uses more than the defined amount of CPU time. It can be used to + enforce a limit on the machine's CPU performance. + + - ``cpu_reservation``. Specifies the guaranteed minimum CPU + reservation in MHz for VMware. This means that, if needed, the + machine is guaranteed to be allocated the reserved amount of CPU + cycles. + + - ``cpu_quota``. Specifies the maximum allowed bandwidth (unit: + microseconds). A domain with a negative-value quota indicates + that the domain has infinite bandwidth, which means that it is + not bandwidth controlled. The value should be in range ``[1000, + 18446744073709551]`` or less than 0. A quota with value 0 means no + value. You can use this feature to ensure that all vCPUs run at the + same speed. For example: + + .. code:: console + + $ nova flavor-key m1.low_cpu set quota:cpu_quota=10000 + $ nova flavor-key m1.low_cpu set quota:cpu_period=20000 + + In this example, an instance of the ``m1.low_cpu`` flavor can consume + a maximum of 50% of a physical CPU's computing capability. + +Disk tuning + Using disk I/O quotas, you can set a maximum disk write rate of 10 MB + per second for a VM user. For example: + + .. code:: console + + $ nova flavor-key m1.medium set quota:disk_write_bytes_sec=10485760 + + The disk I/O options are: + + - disk\_read\_bytes\_sec + + - disk\_read\_iops\_sec + + - disk\_write\_bytes\_sec + + - disk\_write\_iops\_sec + + - disk\_total\_bytes\_sec + + - disk\_total\_iops\_sec + +Bandwidth I/O + The vif I/O options are: + + - vif\_inbound\_average + + - vif\_inbound\_burst + + - vif\_inbound\_peak + + - vif\_outbound\_average + + - vif\_outbound\_burst + + - vif\_outbound\_peak + + Incoming and outgoing traffic can be shaped independently. The + bandwidth element can have at most one inbound and at most one + outbound child element. If you leave either of these child elements + out, no quality of service (QoS) is applied on that traffic + direction. So, if you want to shape only the network's incoming + traffic, use inbound only (and vice versa). Each element has one + mandatory attribute, ``average``, which specifies the average bit rate + on the interface being shaped. + + There are also two optional attributes (integer): ``peak``, which + specifies the maximum rate at which a bridge can send data + (kilobytes/second), and ``burst``, the number of bytes that can be + sent in a burst at peak speed (kilobytes). The rate is shared equally + within domains connected to the network. + + The following example sets network traffic bandwidth limits for an + existing flavor as follows: + + - Outbound traffic: + + - average: 256 Mbps (32768 kilobytes/second) + + - peak: 512 Mbps (65536 kilobytes/second) + + - burst: 100 ms + + - Inbound traffic: + + - average: 128 Mbps (16384 kilobytes/second) + + - peak: 256 Mbps (32768 kilobytes/second) + + - burst: 100 ms + + .. code:: console + + $ nova flavor-key nlimit set quota:vif_outbound_average=32768 + $ nova flavor-key nlimit set quota:vif_outbound_peak=65536 + $ nova flavor-key nlimit set quota:vif_outbound_burst=6553 + $ nova flavor-key nlimit set quota:vif_inbound_average=16384 + $ nova flavor-key nlimit set quota:vif_inbound_peak=32768 + $ nova flavor-key nlimit set quota:vif_inbound_burst=3276 + + + .. note:: + + All the speed limit values in the above example are specified in + kilobytes/second, and the burst values are in kilobytes. + +Watchdog behavior + For the libvirt driver, you can enable and set the behavior of a + virtual hardware watchdog device for each flavor. 
Watchdog devices + keep an eye on the guest server, and carry out the configured + action, if the server hangs. The watchdog uses the i6300esb device + (emulating a PCI Intel 6300ESB). If ``hw:watchdog_action`` is not + specified, the watchdog is disabled. + + To set the behavior, use: + + .. code:: console + + $ nova flavor-key FLAVOR-NAME set hw:watchdog_action=ACTION + + Valid ACTION values are: + + - ``disabled``—(default) The device is not attached. + + - ``reset``—Forcefully reset the guest. + + - ``poweroff``—Forcefully power off the guest. + + - ``pause``—Pause the guest. + + - ``none``—Only enable the watchdog; do nothing if the server + hangs. + + .. note:: + + Watchdog behavior set using a specific image's properties will + override behavior set using flavors. + +Random-number generator + If a random-number generator device has been added to the instance + through its image properties, the device can be enabled and + configured using: + + .. code:: console + + $ nova flavor-key FLAVOR-NAME set hw_rng:allowed=True + $ nova flavor-key FLAVOR-NAME set hw_rng:rate_bytes=RATE-BYTES + $ nova flavor-key FLAVOR-NAME set hw_rng:rate_period=RATE-PERIOD + + Where: + + - RATE-BYTES—(Integer) Allowed amount of bytes that the guest can + read from the host's entropy per period. + + - RATE-PERIOD—(Integer) Duration of the read period in seconds. + +Project private flavors + Flavors can also be assigned to particular projects. By default, a + flavor is public and available to all projects. Private flavors are + only accessible to those on the access list and are invisible to + other projects. To create and assign a private flavor to a project, + run these commands: + + .. code:: console + + $ nova flavor-create --is-public false p1.medium auto 512 40 4 + $ nova flavor-access-add 259d06a0-ba6d-4e60-b42d-ab3144411d58 86f94150ed744e08be565c2ff608eef9 + + +.. _default_ports: + +Compute service node firewall requirements +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Console connections for virtual machines, whether direct or through a +proxy, are received on ports ``5900`` to ``5999``. The firewall on each +Compute service node must allow network traffic on these ports. + +This procedure modifies the iptables firewall to allow incoming +connections to the Compute services. + +**Configuring the service-node firewall** + +#. Log in to the server that hosts the Compute service, as root. + +#. Edit the :file:`/etc/sysconfig/iptables` file, to add an INPUT rule that + allows TCP traffic on ports from ``5900`` to ``5999``. Make sure the new + rule appears before any INPUT rules that REJECT traffic: + + .. code:: ini + + -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT + +#. Save the changes to :file:`/etc/sysconfig/iptables` file, and restart the + iptables service to pick up the changes: + + .. code:: console + + $ service iptables restart + +#. Repeat this process for each Compute service node. .. _admin-password-injection: Injecting the administrator password ------------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Compute can generate a random administrator (root) password and inject that password into an instance. If this feature is enabled, users can @@ -122,11 +497,11 @@ editing the :file:`/etc/shadow` file inside the virtual machine instance. .. note:: - Users can only :command:`ssh` to the instance by using the admin password - if the virtual machine image is a Linux distribution, and it has - been configured to allow users to :command:`ssh` as the root user. 
This is - not the case for `Ubuntu cloud images`_ which, by default, does not - allow users to :command:`ssh` to the root account. + Users can only use :command:`ssh` to access the instance by using the admin + password if the virtual machine image is a Linux distribution, and it has + been configured to allow users to use :command:`ssh` as the root user. This + is not the case for `Ubuntu cloud images`_ which, by default, does not + allow users to use :command:`ssh` to access the root account. **Password injection and XenAPI (XenServer/XCP)** @@ -147,10 +522,10 @@ the admin password on boot by installing an agent such as .. _section_manage-the-cloud: Manage the cloud ----------------- +~~~~~~~~~~~~~~~~ -System administrators can use ``nova`` client and ``Euca2ools`` commands -to manage their clouds. +System administrators can use :command:`nova` client and :command:`Euca2ools` +commands to manage their clouds. ``nova`` client and ``euca2ools`` can be used by all users, though specific commands might be restricted by Role Based Access Control in @@ -242,10 +617,10 @@ EC2 API calls. For more information about ``euca2ools``, see .. _section_manage-logs: Logging -------- +~~~~~~~ Logging module -^^^^^^^^^^^^^^ +-------------- Logging behavior can be changed by creating a configuration file. To specify the configuration file, add this line to the @@ -431,8 +806,8 @@ also requires a websocket client to access the serial console. **Accessing the serial console on an instance** -#. Use the ``nova get-serial-proxy`` command to retrieve the websocket URL - for the serial console on the instance: +#. Use the :command:`nova get-serial-proxy` command to retrieve the websocket + URL for the serial console on the instance: .. code: console @@ -479,13 +854,493 @@ Alternatively, use a `Python websocket client `__ +for example, are a known issue that affects root wrapper performance. + +To set up this workaround, configure the ``disable_rootwrap`` option in +the ``[workaround]`` section of the :file:`nova.conf` configuration file. + +The filters definition files contain lists of filters that rootwrap will +use to allow or deny a specific command. They are generally suffixed by +``.filters`` . Since they are in the trusted security path, they need to +be owned and writable only by the root user. Their location is specified +in the :file:`rootwrap.conf` file. + +Filter definition files use an INI file format with a ``[Filters]`` +section and several lines, each with a unique parameter name, which +should be different for each filter you define: + +.. list-table:: filters configuration options + :widths: 72 39 + + + * - Configuration option=Default value + - (Type) Description + * - [Filters] + filter\_name=kpartx: CommandFilter, /sbin/kpartx, root + - (ListOpt) Comma-separated list containing the filter class to + use, followed by the Filter arguments (which vary depending + on the Filter class selected). + +.. _section_configuring-compute-migrations: + +Configure migrations +~~~~~~~~~~~~~~~~~~~~ + +.. :ref:`_configuring-migrations-kvm-libvirt` +.. :ref:`_configuring-migrations-xenserver` + +.. note:: + + Only cloud administrators can perform live migrations. If your cloud + is configured to use cells, you can perform live migration within + but not between cells. + +Migration enables an administrator to move a virtual-machine instance +from one compute host to another. This feature is useful when a compute +host requires maintenance. 
Migration can also be useful to redistribute +the load when many VM instances are running on a specific physical +machine. + +The migration types are: + +- **Non-live migration** (sometimes referred to simply as 'migration'). + The instance is shut down for a period of time to be moved to another + hypervisor. In this case, the instance recognizes that it was + rebooted. + +- **Live migration** (or 'true live migration'). Almost no instance + downtime. Useful when the instances must be kept running during the + migration. The different types of live migration are: + + - **Shared storage-based live migration**. Both hypervisors have + access to shared storage. + + - **Block live migration**. No shared storage is required. + Incompatible with read-only devices such as CD-ROMs and + `Configuration Drive (config\_drive) `_. + + - **Volume-backed live migration**. Instances are backed by volumes + rather than ephemeral disk, no shared storage is required, and + migration is supported (currently only available for libvirt-based + hypervisors). + +The following sections describe how to configure your hosts and compute +nodes for migrations by using the KVM and XenServer hypervisors. + +.. _configuring-migrations-kvm-libvirt: + +KVM-Libvirt +----------- + +.. :ref:`_configuring-migrations-kvm-shared-storage` +.. :ref:`_configuring-migrations-kvm-block-migration` + +.. _configuring-migrations-kvm-shared-storage: + +Shared storage +^^^^^^^^^^^^^^ + +.. :ref:`_section_example-compute-install` +.. :ref:`_true-live-migration-kvm-libvirt` + +**Prerequisites** + +- **Hypervisor:** KVM with libvirt + +- **Shared storage:** :file:`NOVA-INST-DIR/instances/` (for example, + :file:`/var/lib/nova/instances`) has to be mounted by shared storage. + This guide uses NFS but other options, including the + `OpenStack Gluster Connector `_ + are available. + +- **Instances:** Instance can be migrated with iSCSI-based volumes. + +.. note:: + + - Because the Compute service does not use the libvirt live + migration functionality by default, guests are suspended before + migration and might experience several minutes of downtime. For + details, see :ref:Enabling True Live Migration. + + - This guide assumes the default value for ``instances_path`` in + your :file:`nova.conf` file (:file:`NOVA-INST-DIR/instances`). If you + have changed the ``state_path`` or ``instances_path`` variables, + modify the commands accordingly. + + - You must specify ``vncserver_listen=0.0.0.0`` or live migration + will not work correctly. + + - You must specify the ``instances_path`` in each node that runs + nova-compute. The mount point for ``instances_path`` must be the + same value for each node, or live migration will not work + correctly. + +.. _section_example-compute-install: + +Example Compute installation environment +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +- Prepare at least three servers. In this example, we refer to the + servers as ``HostA``, ``HostB``, and ``HostC``: + + - ``HostA`` is the Cloud Controller, and should run these services: + nova-api, nova-scheduler, ``nova-network``, cinder-volume, and + ``nova-objectstore``. + + - ``HostB`` and ``HostC`` are the compute nodes that run + nova-compute. + + Ensure that ``NOVA-INST-DIR`` (set with ``state_path`` in the + :file:`nova.conf` file) is the same on all hosts. + +- In this example, ``HostA`` is the NFSv4 server that exports + ``NOVA-INST-DIR/instances`` directory. ``HostB`` and ``HostC`` are + NFSv4 clients that mount ``HostA``. + +**Configuring your system** + +#. 
Configure your DNS or ``/etc/hosts`` and ensure it is consistent across + all hosts. Make sure that the three hosts can perform name resolution + with each other. As a test, use the :command:`ping` command to ping each host + from one another: + + .. code:: console + + $ ping HostA + $ ping HostB + $ ping HostC + +#. Ensure that the UID and GID of your Compute and libvirt users are + identical between each of your servers. This ensures that the + permissions on the NFS mount works correctly. + +#. Export ``NOVA-INST-DIR/instances`` from ``HostA``, and ensure it is + readable and writable by the Compute user on ``HostB`` and ``HostC``. + + For more information, see: `SettingUpNFSHowTo `_ + or `CentOS/Red Hat: Setup NFS v4.0 File Server `_ + +#. Configure the NFS server at ``HostA`` by adding the following line to + the :file:`/etc/exports` file: + + .. code:: ini + + NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash) + + Change the subnet mask (``255.255.0.0``) to the appropriate value to + include the IP addresses of ``HostB`` and ``HostC``. Then restart the + NFS server: + + .. code:: console + + # /etc/init.d/nfs-kernel-server restart + # /etc/init.d/idmapd restart + +#. On both compute nodes, enable the 'execute/search' bit on your shared + directory to allow qemu to be able to use the images within the + directories. On all hosts, run the following command: + + .. code:: console + + $ chmod o+x NOVA-INST-DIR/instances + +#. Configure NFS on ``HostB`` and ``HostC`` by adding the following line to + the :file:`/etc/fstab` file + + .. code:: console + + HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0 + + Ensure that you can mount the exported directory + + .. code:: console + + $ mount -a -v + + Check that ``HostA`` can see the :file:`NOVA-INST-DIR/instances/` + directory + + .. code:: console + + $ ls -ld NOVA-INST-DIR/instances/ + drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/ + + Perform the same check on ``HostB`` and ``HostC``, paying special + attention to the permissions (Compute should be able to write) + + .. code-block:: console + :linenos: + + $ ls -ld NOVA-INST-DIR/instances/ + drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/ + + $ df -k + Filesystem 1K-blocks Used Available Use% Mounted on + /dev/sda1 921514972 4180880 870523828 1% / + none 16498340 1228 16497112 1% /dev + none 16502856 0 16502856 0% /dev/shm + none 16502856 368 16502488 1% /var/run + none 16502856 0 16502856 0% /var/lock + none 16502856 0 16502856 0% /lib/init/rw + HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( <--- this line is important.) + +#. Update the libvirt configurations so that the calls can be made + securely. These methods enable remote access over TCP and are not + documented here. + + - SSH tunnel to libvirtd's UNIX socket + + - libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption + + - libvirtd TCP socket, with TLS for encryption and x509 client certs + for authentication + + - libvirtd TCP socket, with TLS for encryption and Kerberos for + authentication + + Restart libvirt. After you run the command, ensure that libvirt is + successfully restarted + + .. code:: console + + # stop libvirt-bin && start libvirt-bin + $ ps -ef | grep libvirt + root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l\ + +#. Configure your firewall to allow libvirt to communicate between nodes. 
+ By default, libvirt listens on TCP port 16509, and an ephemeral TCP + range from 49152 to 49261 is used for the KVM communications. Based on + the secure remote access TCP configuration you chose, be careful which + ports you open, and always understand who has access. For information + about ports that are used with libvirt, + see `the libvirt documentation `_. + +#. You can now configure options for live migration. In most cases, you + will not need to configure any options. The following chart is for + advanced users only. + +.. TODO :include :: /../../common/tables/nova-livemigration.xml/ + +.. _true-live-migration-kvm-libvirt: + +Enabling true live migration +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Prior to the Kilo release, the Compute service did not use the libvirt +live migration function by default. To enable this function, add the +following line to the ``[libvirt]`` section of the :file:`nova.conf` file: + +.. code:: ini + + live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED + +On versions older than Kilo, the Compute service does not use libvirt's +live migration by default because there is a risk that the migration +process will never end. This can happen if the guest operating system +uses blocks on the disk faster than they can be migrated. + +.. _configuring-migrations-kvm-block-migration: + +Block Migration +^^^^^^^^^^^^^^^ + +Configuring KVM for block migration is exactly the same as the above +configuration in :ref:`configuring-migrations-kvm-shared-storage` +the section called shared storage, except that ``NOVA-INST-DIR/instances`` +is local to each host rather than shared. No NFS client or server +configuration is required. + +.. note:: + + - To use block migration, you must use the :option:`--block-migrate` + parameter with the live migration command. + + - Block migration is incompatible with read-only devices such as + CD-ROMs and `Configuration Drive (config_drive) `_. + + - Since the ephemeral drives are copied over the network in block + migration, migrations of instances with heavy I/O loads may never + complete if the drives are writing faster than the data can be + copied over the network. + +.. _configuring-migrations-xenserver: + +XenServer +--------- + +.. :ref:Shared Storage +.. :ref:Block migration + +.. _configuring-migrations-xenserver-shared-storage: + +Shared storage +^^^^^^^^^^^^^^ + +**Prerequisites** + +- **Compatible XenServer hypervisors**. For more information, see the + `Requirements for Creating Resource Pools `_ section of the XenServer + Administrator's Guide. + +- **Shared storage**. An NFS export, visible to all XenServer hosts. + + .. note:: + + For the supported NFS versions, see the + `NFS VHD `_ + section of the XenServer Administrator's Guide. + +To use shared storage live migration with XenServer hypervisors, the +hosts must be joined to a XenServer pool. To create that pool, a host +aggregate must be created with specific metadata. This metadata is used +by the XAPI plug-ins to establish the pool. + +**Using shared storage live migrations with XenServer Hypervisors** + +#. Add an NFS VHD storage to your master XenServer, and set it as the + default storage repository. For more information, see NFS VHD in the + XenServer Administrator's Guide. + +#. Configure all compute nodes to use the default storage repository + (``sr``) for pool operations. Add this line to your :file:`nova.conf` + configuration files on all compute nodes: + + .. 
code:: ini + + sr_matching_filter=default-sr:true + +#. Create a host aggregate. This command creates the aggregate, and then + displays a table that contains the ID of the new aggregate + + .. code:: console + + $ nova aggregate-create POOL_NAME AVAILABILITY_ZONE + + Add metadata to the aggregate, to mark it as a hypervisor pool + + .. code:: console + + $ nova aggregate-set-metadata AGGREGATE_ID hypervisor_pool=true + + $ nova aggregate-set-metadata AGGREGATE_ID operational_state=created + + Make the first compute node part of that aggregate + + .. code:: console + + $ nova aggregate-add-host AGGREGATE_ID MASTER_COMPUTE_NAME + + The host is now part of a XenServer pool. + +#. Add hosts to the pool + + .. code:: console + + $ nova aggregate-add-host AGGREGATE_ID COMPUTE_HOST_NAME + + .. note:: + + The added compute node and the host will shut down to join the host + to the XenServer pool. The operation will fail if any server other + than the compute node is running or suspended on the host. + +.. _configuring-migrations-xenserver-block-migration: + +Block migration +^^^^^^^^^^^^^^^ + +- **Compatible XenServer hypervisors**. + The hypervisors must support the Storage XenMotion feature. + See your XenServer manual to make sure your edition + has this feature. + + .. note:: + + - To use block migration, you must use the :option:`--block-migrate` + parameter with the live migration command. + + - Block migration works only with EXT local storage storage + repositories, and the server must not have any volumes attached. + .. _section_live-migration-usage: Migrate instances ------------------ +~~~~~~~~~~~~~~~~~ This section discusses how to migrate running instances from one OpenStack Compute server to another OpenStack Compute server. @@ -702,8 +1557,623 @@ Before starting a migration, review the Configure migrations section. check the log files at :file:`src/dest` for nova-compute and nova-scheduler) to determine why. -.. TODO /source/common/compute_configure_console.rst +.. _configuring-compute-service-groups: -.. TODO /source/compute_configure_service_groups.rst -.. TODO /source/compute_security.rst -.. TODO /source/compute_recover_nodes.rst +Configure Compute service groups +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The Compute service must know the status of each compute node to +effectively manage and use them. This can include events like a user +launching a new VM, the scheduler sending a request to a live node, or a +query to the ServiceGroup API to determine if a node is live. + +When a compute worker running the nova-compute daemon starts, it calls +the join API to join the compute group. Any service (such as the +scheduler) can query the group's membership and the status of its nodes. +Internally, the ServiceGroup client driver automatically updates the +compute worker status. + +.. _database-servicegroup-driver: + +Database ServiceGroup driver +---------------------------- + +By default, Compute uses the database driver to track if a node is live. +In a compute worker, this driver periodically sends a ``db update`` +command to the database, saying “I'm OK” with a timestamp. Compute uses +a pre-defined timeout (``service_down_time``) to determine if a node is +dead. + +The driver has limitations, which can be problematic depending on your +environment. If a lot of compute worker nodes need to be checked, the +database can be put under heavy load, which can cause the timeout to +trigger, and a live node could incorrectly be considered dead. By +default, the timeout is 60 seconds. 
Reducing the timeout value can help +in this situation, but you must also make the database update more +frequently, which again increases the database workload. + +The database contains data that is both transient (such as whether the +node is alive) and persistent (such as entries for VM owners). With the +ServiceGroup abstraction, Compute can treat each type separately. + +.. _zookeeper-servicegroup-driver: + +ZooKeeper ServiceGroup driver +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral +nodes. ZooKeeper, unlike databases, is a distributed system, with its +load divided among several servers. On a compute worker node, the driver +can establish a ZooKeeper session, then create an ephemeral znode in the +group directory. Ephemeral znodes have the same lifespan as the session. +If the worker node or the nova-compute daemon crashes, or a network +partition is in place between the worker and the ZooKeeper server +quorums, the ephemeral znodes are removed automatically. The driver +can be given group membership by running the :command:`ls` command in the +group directory. + +The ZooKeeper driver requires the ZooKeeper servers and client +libraries. Setting up ZooKeeper servers is outside the scope of this +guide (for more information, see `Apache Zookeeper `_). These client-side +Python libraries must be installed on every compute node: + +**python-zookeeper** + + The official Zookeeper Python binding + +**evzookeeper** + + This library makes the binding work with the eventlet threading model. + +This example assumes the ZooKeeper server addresses and ports are +``192.168.2.1:2181``, ``192.168.2.2:2181``, and ``192.168.2.3:2181``. + +These values in the :file:`/etc/nova/nova.conf` file are required on every +node for the ZooKeeper driver: + +.. code:: ini + + # Driver for the ServiceGroup service + servicegroup_driver="zk" + + [zookeeper] + address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181" + +To customize the Compute Service groups, use these configuration option +settings: + +.. TODO ../../common/tables/nova-zookeeper.xml + +.. _memcache-servicegroup-driver: + +Memcache ServiceGroup driver +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The memcache ServiceGroup driver uses memcached, a distributed memory +object caching system that is used to increase site performance. For +more details, see `memcached.org `_. + +To use the memcache driver, you must install memcached. You might +already have it installed, as the same driver is also used for the +OpenStack Object Storage and OpenStack dashboard. If you need to install +memcached, see the instructions in the `OpenStack Installation Guide `_. + +These values in the :file:`/etc/nova/nova.conf` file are required on every +node for the memcache driver: + +.. code:: ini + + # Driver for the ServiceGroup service + servicegroup_driver="mc" + + # Memcached servers. Use either a list of memcached servers to use for + caching (list value), + # or "" for in-process caching (default). + memcached_servers= + + # Timeout; maximum time since last check-in for up service + (integer value). + # Helps to define whether a node is dead + service_down_time=60 + +.. _section-compute-security: + +Security hardening +~~~~~~~~~~~~~~~~~~ + +OpenStack Compute can be integrated with various third-party +technologies to increase security. For more information, see the +`OpenStack Security Guide `_. 
+ +Trusted compute pools +--------------------- + +Administrators can designate a group of compute hosts as trusted using +trusted compute pools. The trusted hosts use hardware-based security +features, such as the Intel Trusted Execution Technology (TXT), to +provide an additional level of security. Combined with an external +stand-alone, web-based remote attestation server, cloud providers can +ensure that the compute node runs only software with verified +measurements and can ensure a secure cloud stack. + +Trusted compute pools provide the ability for cloud subscribers to +request services run only on verified compute nodes. + +The remote attestation server performs node verification like this: + +1. Compute nodes boot with Intel TXT technology enabled. + +2. The compute node BIOS, hypervisor, and operating system are measured. + +3. When the attestation server challenges the compute node, the measured + data is sent to the attestation server. + +4. The attestation server verifies the measurements against a known good + database to determine node trustworthiness. + +A description of how to set up an attestation service is beyond the +scope of this document. For an open source project that you can use to +implement an attestation service, see the `Open +Attestation `__ +project. + +|image0| + +**Configuring Compute to use trusted compute pools** + +#. Enable scheduling support for trusted compute pools by adding these + lines to the ``DEFAULT`` section of the :file:`/etc/nova/nova.conf` file: + + .. code:: ini + + [DEFAULT] + compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler + scheduler_available_filters=nova.scheduler.filters.all_filters + scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter + +#. Specify the connection information for your attestation service by + adding these lines to the ``trusted_computing`` section of the + :file:`/etc/nova/nova.conf` file: + + .. code-block:: ini + :linenos: + + [trusted_computing] + attestation_server = 10.1.71.206 + attestation_port = 8443 + # If using OAT v2.0 after, use this port: + # attestation_port = 8181 + attestation_server_ca_file = /etc/nova/ssl.10.1.71.206.crt + # If using OAT v1.5, use this api_url: + attestation_api_url = /AttestationService/resources + # If using OAT pre-v1.5, use this api_url: + # attestation_api_url = /OpenAttestationWebServices/V1.0 + attestation_auth_blob = i-am-openstack + + In this example: + + server + Host name or IP address of the host that runs the attestation + service + + port + HTTPS port for the attestation service + + server_ca_file + Certificate file used to verify the attestation server's identity + + api_url + The attestation service's URL path + + auth_blob + An authentication blob, required by the attestation service. + +#. Save the file, and restart the nova-compute and nova-scheduler services + to pick up the changes. + +To customize the trusted compute pools, use these configuration option +settings: + +.. 
list-table:: Description of trusted computing configuration options + :header-rows: 2 + + * - Configuration option = Default value + - Description + * - [trusted_computing] + - + * - attestation_api_url = /OpenAttestationWebServices/V1.0 + - (StrOpt) Attestation web API URL + * - attestation_auth_blob = None + - (StrOpt) Attestation authorization blob - must change + * - attestation_auth_timeout = 60 + - (IntOpt) Attestation status cache valid period length + * - attestation_insecure_ssl = False + - (BoolOpt) Disable SSL cert verification for Attestation service + * - attestation_port = 8443 + - (StrOpt) Attestation server port + * - attestation_server = None + - (StrOpt) Attestation server HTTP + * - attestation_server_ca_file = None + - (StrOpt) Attestation server Cert file for Identity verification + +**Specifying trusted flavors** + +#. Flavors can be designated as trusted using the ``nova flavor-key set`` + command. In this example, the ``m1.tiny`` flavor is being set as + trusted: + + .. code:: console + + $ nova flavor-key m1.tiny set trust:trusted_host=trusted + +#. You can request that your instance is run on a trusted host by + specifying a trusted flavor when booting the instance: + + .. code:: console + + $ nova boot --flavor m1.tiny --key_name myKeypairName --image myImageID newInstanceName + +|Trusted compute pool| + +.. |image0| image:: ../../common/figures/OpenStackTrustedComputePool1.png +.. |Trusted compute pool| image:: ../../common/figures/OpenStackTrustedComputePool2.png + + +Encrypt Compute metadata traffic +-------------------------------- + +**Enabling SSL encryption** + +OpenStack supports encrypting Compute metadata traffic with HTTPS. +Enable SSL encryption in the :file:`metadata_agent.ini` file. + +#. Enable the HTTPS protocol:: + + nova_metadata_protocol = https + +#. Determine whether insecure SSL connections are accepted for Compute + metadata server requests. The default value is ``False``:: + + nova_metadata_insecure = False + +#. Specify the path to the client certificate:: + + nova_client_cert = PATH_TO_CERT + +#. Specify the path to the private key:: + + nova_client_priv_key = PATH_TO_KEY + +.. _section_nova-compute-node-down: + +Recover from a failed compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If Compute is deployed with a shared file system, and a node fails, +there are several methods to quickly recover from the failure. This +section discusses manual recovery. + +.. TODO ../../common/section_cli_nova_evacuate.xml + +.. _nova-compute-node-down-manual-recovery: + +Manual Recovery +--------------- + +To recover a KVM or libvirt compute node, see +the section called :ref:`nova-compute-node-down-manual-recovery`. For +all other hypervisors, use this procedure: + +**Recovering from a failed compute node manually** + +#. Identify the VMs on the affected hosts. To do this, you can use a + combination of :command:`nova list` and :command:`nova show` or + :command:`euca-describe-instances`. For example, this command displays + information about instance i-000015b9 that is running on node np-rcc54: + + .. code:: console + + $ euca-describe-instances + i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60 + +#. Query the Compute database to check the status of the host. This example + converts an EC2 API instance ID into an OpenStack ID. If you use the + :command:`nova` commands, you can substitute the ID directly (the output in + this example has been truncated): + + .. 
code:: ini + + mysql> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G; + *************************** 1. row *************************** + created_at: 2012-06-19 00:48:11 + updated_at: 2012-07-03 00:35:11 + deleted_at: NULL + ... + id: 5561 + ... + power_state: 5 + vm_state: shutoff + ... + hostname: at3-ui02 + host: np-rcc54 + ... + uuid: 3f57699a-e773-4650-a443-b4b37eed5a06 + ... + task_state: NULL + ... + + .. note:: + + The credentials for your database can be found in :file:`/etc/nova.conf`. + +#. Decide which compute host the affected VM should be moved to, and run + this database command to move the VM to the new host: + + .. code:: console + + mysql> UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; + +#. If you are using a hypervisor that relies on libvirt (such as KVM), + update the :file:`libvirt.xml` file (found in + :file:`/var/lib/nova/instances/[instance ID]`) with these changes: + + - Change the ``DHCPSERVER`` value to the host IP address of the new + compute host. + + - Update the VNC IP to `0.0.0.0` + +#. Reboot the VM: + + .. code:: console + + $ nova reboot 3f57699a-e773-4650-a443-b4b37eed5a06 + +The database update and :command:`nova reboot` command should be all that is +required to recover a VM from a failed host. However, if you continue to +have problems try recreating the network filter configuration using +``virsh``, restarting the Compute services, or updating the ``vm_state`` +and ``power_state`` in the Compute database. + +.. _section_nova-uid-mismatch: + +Recover from a UID/GID mismatch +------------------------------- + +In some cases, files on your compute node can end up using the wrong UID +or GID. This can happen when running OpenStack Compute, using a shared +file system, or with an automated configuration tool. This can cause a +number of problems, such as inability to perform live migrations, or +start virtual machines. + +This procedure runs on nova-compute hosts, based on the KVM hypervisor: + +#. Set the nova UID in :file:`/etc/passwd` to the same number on all hosts (for + example, 112). + + .. note:: + + Make sure you choose UIDs or GIDs that are not in use for other + users or groups. + +#. Set the ``libvirt-qemu`` UID in :file:`/etc/passwd` to the same number on + all hosts (for example, 119). + +#. Set the ``nova`` group in :file:`/etc/group` file to the same number on all + hosts (for example, 120). + +#. Set the ``libvirtd`` group in :file:`/etc/group` file to the same number on + all hosts (for example, 119). + +#. Stop the services on the compute node. + +#. Change all the files owned by user or group nova. For example: + + .. code:: console + + # find / -uid 108 -exec chown nova {} \; + # note the 108 here is the old nova UID before the change + # find / -gid 120 -exec chgrp nova {} \; + +#. Repeat all steps for the :file:`libvirt-qemu` files, if required. + +#. Restart the services. + +#. Run the :command:`find` command to verify that all files use the correct + identifiers. + +.. _section_nova-disaster-recovery-process: + +Recover cloud after disaster +---------------------------- + +This section covers procedures for managing your cloud after a disaster, +and backing up persistent storage volumes. Backups are mandatory, even +outside of disaster scenarios. + +For a definition of a disaster recovery plan (DRP), see +`http://en.wikipedia.org/wiki/Disaster\_Recovery\_Plan `_. 
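Several of the recovery steps below depend on knowing which volume was
attached to which instance, and at which mount point, so it is worth
capturing that mapping regularly while the cloud is healthy. The
following is only an illustrative sketch, not part of the official
procedure; the backup directory is an arbitrary choice, and the script
assumes the ``nova`` client and the Block Storage database credentials
are available on the host where it runs:

.. code:: bash

   #!/bin/bash
   # Illustrative pre-disaster snapshot of instance and volume state.
   backup_dir=/var/backups/cloud-state      # arbitrary, adjust as needed
   mkdir -p "$backup_dir"
   stamp=$(date +%Y%m%d-%H%M%S)

   # Running instances, so their IDs and hosts are known afterwards.
   nova list > "$backup_dir/instances-$stamp.txt"

   # Volumes, their status, and the instance each one is attached to.
   nova volume-list > "$backup_dir/volumes-$stamp.txt"

   # Mount points and attachment state live in the Block Storage
   # database; keep a raw copy of the volumes table as well.
   mysql -e "SELECT * FROM volumes;" cinder > "$backup_dir/cinder-volumes-$stamp.txt"

After a power loss, these files make it much easier to rebuild the
``volume instance mount_point`` list that the reattachment script at the
end of this section expects.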
+ +A disaster could happen to several components of your architecture (for +example, a disk crash, network loss, or a power failure). In this +example, the following components are configured: + +- A cloud controller (nova-api, nova-objectstore, nova-network) + +- A compute node (nova-compute) + +- A storage area network (SAN) used by OpenStack Block Storage + (cinder-volumes) + +The worst disaster for a cloud is power loss, which applies to all three +components. Before a power loss: + +- Create an active iSCSI session from the SAN to the cloud controller + (used for the ``cinder-volumes`` LVM's VG). + +- Create an active iSCSI session from the cloud controller to the compute + node (managed by cinder-volume). + +- Create an iSCSI session for every volume (so 14 EBS volumes requires 14 + iSCSI sessions). + +- Create iptables or ebtables rules from the cloud controller to the + compute node. This allows access from the cloud controller to the + running instance. + +- Save the current state of the database, the current state of the running + instances, and the attached volumes (mount point, volume ID, volume + status, etc), at least from the cloud controller to the compute node. + +After power is recovered and all hardware components have restarted: + +- The iSCSI session from the SAN to the cloud no longer exists. + +- The iSCSI session from the cloud controller to the compute node no + longer exists. + +- The iptables and ebtables from the cloud controller to the compute + node are recreated. This is because nova-network reapplies + configurations on boot. + +- Instances are no longer running. + + Note that instances will not be lost, because neither ``destroy`` nor + ``terminate`` was invoked. The files for the instances will remain on + the compute node. + +- The database has not been updated. + +**Begin recovery** + +.. warning:: + + Do not add any extra steps to this procedure, or perform the steps + out of order. + +#. Check the current relationship between the volume and its instance, so + that you can recreate the attachment. + + This information can be found using the :command:`nova volume-list` command. + Note that the ``nova`` client also includes the ability to get volume + information from OpenStack Block Storage. + +#. Update the database to clean the stalled state. Do this for every + volume, using these queries: + + .. code:: console + + mysql> use cinder; + mysql> update volumes set mountpoint=NULL; + mysql> update volumes set status="available" where status <>"error_deleting"; + mysql> update volumes set attach_status="detached"; + mysql> update volumes set instance_id=0; + + Use :command:`nova volume-list` commands to list all volumes. + +#. Restart the instances using the :command:`nova reboot INSTANCE` command. + + .. important:: + + Some instances will completely reboot and become reachable, while + some might stop at the plymouth stage. This is expected behavior, DO + NOT reboot a second time. + + Instance state at this stage depends on whether you added an + `/etc/fstab` entry for that volume. Images built with the + cloud-init package remain in a ``pending`` state, while others skip + the missing volume and start. This step is performed in order to ask + Compute to reboot every instance, so that the stored state is + preserved. It does not matter if not all instances come up + successfully. For more information about cloud-init, see + `help.ubuntu.com/community/CloudInit/ `__. + +#. 
Reattach the volumes to their respective instances, if required, using
   the :command:`nova volume-attach` command. This example uses a file of
   listed volumes to reattach them:

   .. code:: bash

      #!/bin/bash

      # Each line of $volumes_tmp_file is expected to list a volume ID,
      # an instance ID, and a mount point, separated by single spaces.
      while read line; do
          volume=`echo $line | cut -f 1 -d " "`
          instance=`echo $line | cut -f 2 -d " "`
          mount_point=`echo $line | cut -f 3 -d " "`
          echo "ATTACHING VOLUME FOR INSTANCE - $instance"
          nova volume-attach $instance $volume $mount_point
          sleep 2
      done < $volumes_tmp_file

   Instances that were stopped at the plymouth stage will now automatically
   continue booting and start normally. Instances that previously started
   successfully will now be able to see the volume.

#. Log in to the instances with SSH and reboot them.

   If some services depend on the volume, or if a volume has an entry in
   :file:`fstab`, you should now be able to restart the instance. Restart
   directly from the instance itself, not through ``nova``:

   .. code:: console

      # shutdown -r now

When you are planning for and performing a disaster recovery, follow
these tips:

- Use the ``errors=remount`` parameter in the :file:`fstab` file to
  prevent data corruption.

  This parameter will cause the system to disable the ability to write
  to the disk if it detects an I/O error. This configuration option
  should be added into the cinder-volume server (the one which performs
  the iSCSI connection to the SAN), and into the instances' :file:`fstab`
  files.

- Do not add the entry for the SAN's disks to the cinder-volume's
  :file:`fstab` file.

  Some systems hang on that step, which means you could lose access to
  your cloud controller. To re-run the session manually, run these
  commands before performing the mount:

  .. code:: console

     # iscsiadm -m discovery -t st -p $SAN_IP
     # iscsiadm -m node --target-name $IQN -p $SAN_IP -l

- On your instances, if you have the whole ``/home/`` directory on the
  disk, leave a user's directory with the user's bash files and the
  :file:`authorized_keys` file (instead of emptying the ``/home``
  directory and mapping the disk on it).

  This allows you to connect to the instance even without the volume
  attached, if you allow only connections through public keys.

If you want to script the disaster recovery plan (DRP), a bash script is
available from `https://github.com/Razique `_
which performs the following steps:

#. An array is created for instances and their attached volumes.

#. The MySQL database is updated.

#. All instances are restarted with euca2ools.

#. The volumes are reattached.

#. An SSH connection is performed into every instance using Compute
   credentials.

The script includes a ``test mode``, which allows you to perform the
whole sequence for only one instance.

To reproduce the power loss, connect to the compute node that runs the
instance and close the iSCSI session. Do not detach the volume using the
:command:`nova volume-detach` command; instead, manually close the iSCSI
session. This example closes an iSCSI session with the number 15:

.. code:: console

   # iscsiadm -m session -u -r 15

Do not forget the ``-r`` flag. Otherwise, you will close all sessions.

diff --git a/doc/admin-guide-cloud-rst/source/compute.rst b/doc/admin-guide-cloud-rst/source/compute.rst
index 3ecf352a39..6326c9faf1 100644
--- a/doc/admin-guide-cloud-rst/source/compute.rst
+++ b/doc/admin-guide-cloud-rst/source/compute.rst
@@ -17,21 +17,11 @@ web-based API.
compute_arch.rst compute-images-instances.rst - common/support-compute.rst + compute-networking-nova.rst compute-system-admin.rst - compute_config-firewalls.rst - compute-rootwrap.rst - compute-configure-migrations.rst - compute-configure-service-groups.rst - compute-security.rst - compute-recover-nodes.rst + common/support-compute.rst .. TODO (bmoss) - compute/section_compute-networking-nova.xml - compute/section_compute-system-admin.xml - ../common/section_support-compute.xml ../common/section_cli_nova_usage_statistics.xml - ../common/section_cli_nova_volumes.xml - ../common/section_cli_nova_customize_flavors.xml ../common/section_compute-configure-console.xml diff --git a/doc/admin-guide-cloud-rst/source/compute_config-firewalls.rst b/doc/admin-guide-cloud-rst/source/compute_config-firewalls.rst deleted file mode 100644 index 722bcd810d..0000000000 --- a/doc/admin-guide-cloud-rst/source/compute_config-firewalls.rst +++ /dev/null @@ -1,33 +0,0 @@ -.. _default_ports: - -========================================== -Compute service node firewall requirements -========================================== - -Console connections for virtual machines, whether direct or through a -proxy, are received on ports ``5900`` to ``5999``. The firewall on each -Compute service node must allow network traffic on these ports. - -This procedure modifies the iptables firewall to allow incoming -connections to the Compute services. - -**Configuring the service-node firewall** - -#. Log in to the server that hosts the Compute service, as root. - -#. Edit the :file:`/etc/sysconfig/iptables` file, to add an INPUT rule that - allows TCP traffic on ports from ``5900`` to ``5999``. Make sure the new - rule appears before any INPUT rules that REJECT traffic: - - .. code:: ini - - -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT - -#. Save the changes to :file:`/etc/sysconfig/iptables` file, and restart the - iptables service to pick up the changes: - - .. code:: console - - $ service iptables restart - -#. Repeat this process for each Compute service node. diff --git a/doc/admin-guide-cloud-rst/source/networking.rst b/doc/admin-guide-cloud-rst/source/networking.rst index 21b859121c..f79faf1c75 100644 --- a/doc/admin-guide-cloud-rst/source/networking.rst +++ b/doc/admin-guide-cloud-rst/source/networking.rst @@ -1,3 +1,5 @@ +.. _networking: + ========== Networking ========== diff --git a/doc/admin-guide-cloud-rst/source/networking_adv-features.rst b/doc/admin-guide-cloud-rst/source/networking_adv-features.rst index 0aa5ce739c..6b484d49e7 100644 --- a/doc/admin-guide-cloud-rst/source/networking_adv-features.rst +++ b/doc/admin-guide-cloud-rst/source/networking_adv-features.rst @@ -165,6 +165,8 @@ administrative user can view and set the provider extended attributes through Networking API calls. See the section called :ref:`Authentication and authorization` for details on policy configuration. +.. _L3-routing-and-NAT: + L3 routing and NAT ~~~~~~~~~~~~~~~~~~