diff --git a/doc/source/admin/configuration/api.rst b/doc/source/admin/configuration/api.rst new file mode 100644 index 000000000000..21048265259b --- /dev/null +++ b/doc/source/admin/configuration/api.rst @@ -0,0 +1,25 @@ +========================= +Compute API configuration +========================= + +The Compute API, run by the ``nova-api`` daemon, is the component of OpenStack +Compute that receives and responds to user requests, whether they be direct API +calls, or via the CLI tools or dashboard. + +Configure Compute API password handling +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The OpenStack Compute API enables users to specify an administrative password +when they create or rebuild a server instance. If the user does not specify a +password, a random password is generated and returned in the API response. + +In practice, how the admin password is handled depends on the hypervisor in use +and might require additional configuration of the instance. For example, you +might have to install an agent to handle the password setting. If the +hypervisor and instance configuration do not support setting a password at +server create time, the password that is returned by the create API call is +misleading because it was ignored. + +To prevent this confusion, use the ``enable_instance_password`` configuration +option to disable the return of the admin password for installations that do +not support setting instance passwords. diff --git a/doc/source/admin/configuration/cells.rst b/doc/source/admin/configuration/cells.rst new file mode 100644 index 000000000000..ddc2c1775aad --- /dev/null +++ b/doc/source/admin/configuration/cells.rst @@ -0,0 +1,295 @@ +========== +Cells (v1) +========== + +.. warning:: + + Configuring and implementing Cells v1 is not recommended for new deployments + of the Compute service (nova). Cells v2 replaces cells v1, and v2 is + required to install or upgrade the Compute service to the 15.0.0 Ocata + release. More information on cells v2 can be found in :doc:`/user/cells`. + +`Cells` functionality enables you to scale an OpenStack Compute cloud in a more +distributed fashion without having to use complicated technologies like +database and message queue clustering. It supports very large deployments. + +When this functionality is enabled, the hosts in an OpenStack Compute cloud are +partitioned into groups called cells. Cells are configured as a tree. The +top-level cell should have a host that runs a ``nova-api`` service, but no +``nova-compute`` services. Each child cell should run all of the typical +``nova-*`` services in a regular Compute cloud except for ``nova-api``. You can +think of cells as a normal Compute deployment in that each cell has its own +database server and message queue broker. + +The ``nova-cells`` service handles communication between cells and selects +cells for new instances. This service is required for every cell. Communication +between cells is pluggable, and currently the only option is communication +through RPC. + +Cells scheduling is separate from host scheduling. ``nova-cells`` first picks +a cell. Once a cell is selected and the new build request reaches its +``nova-cells`` service, it is sent over to the host scheduler in that cell and +the build proceeds as it would have without cells. + +Cell configuration options +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. todo:: This is duplication. We should be able to use the + oslo.config.sphinxext module to generate this for us + +Cells are disabled by default. 
All cell-related configuration options appear in +the ``[cells]`` section in ``nova.conf``. The following cell-related options +are currently supported: + +``enable`` + Set to ``True`` to turn on cell functionality. Default is ``false``. + +``name`` + Name of the current cell. Must be unique for each cell. + +``capabilities`` + List of arbitrary ``key=value`` pairs defining capabilities of the current + cell. Values include ``hypervisor=xenserver;kvm,os=linux;windows``. + +``call_timeout`` + How long in seconds to wait for replies from calls between cells. + +``scheduler_filter_classes`` + Filter classes that the cells scheduler should use. By default, uses + ``nova.cells.filters.all_filters`` to map to all cells filters included with + Compute. + +``scheduler_weight_classes`` + Weight classes that the scheduler for cells uses. By default, uses + ``nova.cells.weights.all_weighers`` to map to all cells weight algorithms + included with Compute. + +``ram_weight_multiplier`` + Multiplier used to weight RAM. Negative numbers indicate that Compute should + stack VMs on one host instead of spreading out new VMs to more hosts in the + cell. The default value is 10.0. + +Configure the API (top-level) cell +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The cell type must be changed in the API cell so that requests can be proxied +through ``nova-cells`` down to the correct cell properly. Edit the +``nova.conf`` file in the API cell, and specify ``api`` in the ``cell_type`` +key: + +.. code-block:: ini + + [DEFAULT] + compute_api_class=nova.compute.cells_api.ComputeCellsAPI + # ... + + [cells] + cell_type= api + +Configure the child cells +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Edit the ``nova.conf`` file in the child cells, and specify ``compute`` in the +``cell_type`` key: + +.. code-block:: ini + + [DEFAULT] + # Disable quota checking in child cells. Let API cell do it exclusively. + quota_driver=nova.quota.NoopQuotaDriver + + [cells] + cell_type = compute + +Configure the database in each cell +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Before bringing the services online, the database in each cell needs to be +configured with information about related cells. In particular, the API cell +needs to know about its immediate children, and the child cells must know about +their immediate agents. The information needed is the ``RabbitMQ`` server +credentials for the particular cell. + +Use the :command:`nova-manage cell create` command to add this information to +the database in each cell: + +.. code-block:: console + + # nova-manage cell create -h + usage: nova-manage cell create [-h] [--name ] + [--cell_type ] + [--username ] [--password ] + [--broker_hosts ] + [--hostname ] [--port ] + [--virtual_host ] + [--woffset ] [--wscale ] + + optional arguments: + -h, --help show this help message and exit + --name Name for the new cell + --cell_type + Whether the cell is parent/api or child/compute + --username + Username for the message broker in this cell + --password + Password for the message broker in this cell + --broker_hosts + Comma separated list of message brokers in this cell. + Each Broker is specified as hostname:port with both + mandatory. This option overrides the --hostname and + --port options (if provided). + --hostname + Address of the message broker in this cell + --port Port number of the message broker in this cell + --virtual_host + The virtual host of the message broker in this cell + --woffset + --wscale + +As an example, assume an API cell named ``api`` and a child cell named +``cell1``. 
+ +Within the ``api`` cell, specify the following ``RabbitMQ`` server information: + +.. code-block:: ini + + rabbit_host=10.0.0.10 + rabbit_port=5672 + rabbit_username=api_user + rabbit_password=api_passwd + rabbit_virtual_host=api_vhost + +Within the ``cell1`` child cell, specify the following ``RabbitMQ`` server +information: + +.. code-block:: ini + + rabbit_host=10.0.1.10 + rabbit_port=5673 + rabbit_username=cell1_user + rabbit_password=cell1_passwd + rabbit_virtual_host=cell1_vhost + +You can run this in the API cell as root: + +.. code-block:: console + + # nova-manage cell create --name cell1 --cell_type child \ + --username cell1_user --password cell1_passwd --hostname 10.0.1.10 \ + --port 5673 --virtual_host cell1_vhost --woffset 1.0 --wscale 1.0 + +Repeat the previous steps for all child cells. + +In the child cell, run the following, as root: + +.. code-block:: console + + # nova-manage cell create --name api --cell_type parent \ + --username api_user --password api_passwd --hostname 10.0.0.10 \ + --port 5672 --virtual_host api_vhost --woffset 1.0 --wscale 1.0 + +To customize the Compute cells, use the configuration option settings +documented above. + +Cell scheduling configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To determine the best cell to use to launch a new instance, Compute uses a set +of filters and weights defined in the ``/etc/nova/nova.conf`` file. The +following options are available to prioritize cells for scheduling: + +``scheduler_filter_classes`` + List of filter classes. By default ``nova.cells.filters.all_filters`` + is specified, which maps to all cells filters included with Compute + (see the section called :ref:`Filters `). + +``scheduler_weight_classes`` + List of weight classes. By default ``nova.cells.weights.all_weighers`` is + specified, which maps to all cell weight algorithms included with Compute. + The following modules are available: + + ``mute_child`` + Downgrades the likelihood of child cells being chosen for scheduling + requests, which haven't sent capacity or capability updates in a while. + Options include ``mute_weight_multiplier`` (multiplier for mute children; + value should be negative). + + ``ram_by_instance_type`` + Select cells with the most RAM capacity for the instance type being + requested. Because higher weights win, Compute returns the number of + available units for the instance type requested. The + ``ram_weight_multiplier`` option defaults to 10.0 that adds to the weight + by a factor of 10. + + Use a negative number to stack VMs on one host instead of spreading + out new VMs to more hosts in the cell. + + ``weight_offset`` + Allows modifying the database to weight a particular cell. You can use this + when you want to disable a cell (for example, '0'), or to set a default + cell by making its ``weight_offset`` very high (for example, + ``999999999999999``). The highest weight will be the first cell to be + scheduled for launching an instance. + +Additionally, the following options are available for the cell scheduler: + +``scheduler_retries`` + Specifies how many times the scheduler tries to launch a new instance when no + cells are available (default=10). + +``scheduler_retry_delay`` + Specifies the delay (in seconds) between retries (default=2). + +As an admin user, you can also add a filter that directs builds to a particular +cell. 
The ``policy.json`` file must have a line with
+``"cells_scheduler_filter:TargetCellFilter" : "is_admin:True"`` to let an admin
+user specify a scheduler hint to direct a build to a particular cell.
+
+Optional cell configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Cells store all inter-cell communication data, including user names and
+passwords, in the database. Because the cells data is not updated very
+frequently, use the ``[cells]cells_config`` option to specify a JSON file to
+store cells data. With this configuration, the database is no longer consulted
+when reloading the cells data. The file must have columns present in the Cell
+model (excluding common database fields and the ``id`` column). You must
+specify the queue connection information through a ``transport_url`` field,
+instead of ``username``, ``password``, and so on.
+
+The ``transport_url`` has the following form::
+
+   rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
+
+The scheme can only be ``rabbit``.
+
+The following sample shows this optional configuration:
+
+.. code-block:: json
+
+   {
+       "parent": {
+           "name": "parent",
+           "api_url": "http://api.example.com:8774",
+           "transport_url": "rabbit://rabbit.example.com",
+           "weight_offset": 0.0,
+           "weight_scale": 1.0,
+           "is_parent": true
+       },
+       "cell1": {
+           "name": "cell1",
+           "api_url": "http://api.example.com:8774",
+           "transport_url": "rabbit://rabbit1.example.com",
+           "weight_offset": 0.0,
+           "weight_scale": 1.0,
+           "is_parent": false
+       },
+       "cell2": {
+           "name": "cell2",
+           "api_url": "http://api.example.com:8774",
+           "transport_url": "rabbit://rabbit2.example.com",
+           "weight_offset": 0.0,
+           "weight_scale": 1.0,
+           "is_parent": false
+       }
+   }
diff --git a/doc/source/admin/configuration/fibre-channel.rst b/doc/source/admin/configuration/fibre-channel.rst
new file mode 100644
index 000000000000..a579c443d76b
--- /dev/null
+++ b/doc/source/admin/configuration/fibre-channel.rst
@@ -0,0 +1,28 @@
+=================================
+Configuring Fibre Channel Support
+=================================
+
+Fibre Channel support in OpenStack Compute takes the form of remote block
+storage attached to compute nodes for VMs.
+
+.. todo:: The statement below needs to be verified for the current release
+
+Fibre Channel is supported only with the KVM hypervisor.
+
+Compute and Block Storage support Fibre Channel automatic zoning on Brocade and
+Cisco switches. On other hardware, Fibre Channel arrays must be pre-zoned or
+directly attached to the KVM hosts.
+
+KVM host requirements
+~~~~~~~~~~~~~~~~~~~~~
+
+You must install these packages on the KVM host:
+
+``sysfsutils``
+  Nova uses the ``systool`` application in this package.
+
+``sg3-utils`` or ``sg3_utils``
+  Nova uses the ``sg_scan`` and ``sginfo`` applications.
+
+Installing the ``multipath-tools`` or ``device-mapper-multipath`` package is
+optional.
diff --git a/doc/source/admin/configuration/hypervisor-basics.rst b/doc/source/admin/configuration/hypervisor-basics.rst
new file mode 100644
index 000000000000..9ac1e785e1ff
--- /dev/null
+++ b/doc/source/admin/configuration/hypervisor-basics.rst
@@ -0,0 +1,14 @@
+===============================
+Hypervisor Configuration Basics
+===============================
+
+The ``nova-compute`` service is installed and runs on the same node that
+hosts all of the virtual machines. This node is referred to as the compute
+node in this guide.
+
+By default, the selected hypervisor is KVM.
To change to another hypervisor, +change the ``virt_type`` option in the ``[libvirt]`` section of ``nova.conf`` +and restart the ``nova-compute`` service. + +Specific options for particular hypervisors can be found in +the following sections. diff --git a/doc/source/admin/configuration/hypervisor-hyper-v.rst b/doc/source/admin/configuration/hypervisor-hyper-v.rst new file mode 100644 index 000000000000..5fb8a9fb9533 --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-hyper-v.rst @@ -0,0 +1,464 @@ +=============================== +Hyper-V virtualization platform +=============================== + +.. todo:: This is really installation guide material and should probably be + moved. + +It is possible to use Hyper-V as a compute node within an OpenStack Deployment. +The ``nova-compute`` service runs as ``openstack-compute``, a 32-bit service +directly upon the Windows platform with the Hyper-V role enabled. The necessary +Python components as well as the ``nova-compute`` service are installed +directly onto the Windows platform. Windows Clustering Services are not needed +for functionality within the OpenStack infrastructure. The use of the Windows +Server 2012 platform is recommend for the best experience and is the platform +for active development. The following Windows platforms have been tested as +compute nodes: + +- Windows Server 2012 +- Windows Server 2012 R2 Server and Core (with the Hyper-V role enabled) +- Hyper-V Server + +Hyper-V configuration +~~~~~~~~~~~~~~~~~~~~~ + +The only OpenStack services required on a Hyper-V node are ``nova-compute`` and +``neutron-hyperv-agent``. Regarding the resources needed for this host you have +to consider that Hyper-V will require 16 GB - 20 GB of disk space for the OS +itself, including updates. Two NICs are required, one connected to the +management network and one to the guest data network. + +The following sections discuss how to prepare the Windows Hyper-V node for +operation as an OpenStack compute node. Unless stated otherwise, any +configuration information should work for the Windows 2012 and 2012 R2 +platforms. + +Local storage considerations +---------------------------- + +The Hyper-V compute node needs to have ample storage for storing the virtual +machine images running on the compute nodes. You may use a single volume for +all, or partition it into an OS volume and VM volume. + +.. _configure-ntp-windows: + +Configure NTP +------------- + +Network time services must be configured to ensure proper operation of the +OpenStack nodes. To set network time on your Windows host you must run the +following commands: + +.. code-block:: bat + + C:\>net stop w32time + C:\>w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL + C:\>net start w32time + +Keep in mind that the node will have to be time synchronized with the other +nodes of your OpenStack environment, so it is important to use the same NTP +server. Note that in case of an Active Directory environment, you may do this +only for the AD Domain Controller. + +Configure Hyper-V virtual switching +----------------------------------- + +Information regarding the Hyper-V virtual Switch can be found in the `Hyper-V +Virtual Switch Overview`__. + +To quickly enable an interface to be used as a Virtual Interface the +following PowerShell may be used: + +.. code-block:: none + + PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface + PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false + +.. 
note:: + + It is very important to make sure that when you are using a Hyper-V node + with only 1 NIC the -AllowManagementOS option is set on ``True``, otherwise + you will lose connectivity to the Hyper-V node. + +__ https://technet.microsoft.com/en-us/library/hh831823.aspx + +Enable iSCSI initiator service +------------------------------ + +To prepare the Hyper-V node to be able to attach to volumes provided by cinder +you must first make sure the Windows iSCSI initiator service is running and +started automatically. + +.. code-block:: none + + PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic + PS C:\> Start-Service MSiSCSI + +Configure shared nothing live migration +--------------------------------------- + +Detailed information on the configuration of live migration can be found in +`this guide`__ + +The following outlines the steps of shared nothing live migration. + +#. The target host ensures that live migration is enabled and properly + configured in Hyper-V. + +#. The target host checks if the image to be migrated requires a base VHD and + pulls it from the Image service if not already available on the target host. + +#. The source host ensures that live migration is enabled and properly + configured in Hyper-V. + +#. The source host initiates a Hyper-V live migration. + +#. The source host communicates to the manager the outcome of the operation. + +The following three configuration options are needed in order to support +Hyper-V live migration and must be added to your ``nova.conf`` on the Hyper-V +compute node: + +* This is needed to support shared nothing Hyper-V live migrations. It is used + in ``nova/compute/manager.py``. + + .. code-block:: ini + + instances_shared_storage = False + +* This flag is needed to support live migration to hosts with different CPU + features. This flag is checked during instance creation in order to limit the + CPU features used by the VM. + + .. code-block:: ini + + limit_cpu_features = True + +* This option is used to specify where instances are stored on disk. + + .. code-block:: ini + + instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES + +Additional Requirements: + +* Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled + +* A Windows domain controller with the Hyper-V compute nodes as domain members + +* The instances_path command-line option/flag needs to be the same on all hosts + +* The ``openstack-compute`` service deployed with the setup must run with + domain credentials. You can set the service credentials with: + +.. code-block:: bat + + C:\>sc config openstack-compute obj="DOMAIN\username" password="password" + +__ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine + +How to setup live migration on Hyper-V +-------------------------------------- + +To enable 'shared nothing live' migration, run the 3 instructions below on each +Hyper-V host: + +.. code-block:: none + + PS C:\> Enable-VMMigration + PS C:\> Set-VMMigrationNetwork IP_ADDRESS + PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationTypeKerberos + +.. note:: + + Replace the ``IP_ADDRESS`` with the address of the interface which will + provide live migration. 
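+
+For example, assuming the hypothetical address ``10.0.0.21`` is assigned to the
+migration interface of the host being configured, the sequence might look like
+this:
+
+.. code-block:: none
+
+   PS C:\> # 10.0.0.21 is a placeholder; use this host's migration interface address
+   PS C:\> Enable-VMMigration
+   PS C:\> Set-VMMigrationNetwork 10.0.0.21
+   PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationTypeKerberos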
+ +Additional Reading +------------------ + +This article clarifies the various live migration options in Hyper-V: + +`Hyper-V Live Migration of Yesterday +`_ + +Install nova-compute using OpenStack Hyper-V installer +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In case you want to avoid all the manual setup, you can use Cloudbase +Solutions' installer. You can find it here: + +`HyperVNovaCompute_Beta download +`_ + +The tool installs an independent Python environment in order to avoid conflicts +with existing applications, and dynamically generates a ``nova.conf`` file +based on the parameters provided by you. + +The tool can also be used for an automated and unattended mode for deployments +on a massive number of servers. More details about how to use the installer and +its features can be found here: + +`Cloudbase `_ + +.. _windows-requirements: + +Requirements +~~~~~~~~~~~~ + +Python +------ + +Python 2.7 32bit must be installed as most of the libraries are not working +properly on the 64bit version. + +**Setting up Python prerequisites** + +#. Download and install Python 2.7 using the MSI installer from here: + + `python-2.7.3.msi download + `_ + + .. code-block:: none + + PS C:\> $src = "https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi" + PS C:\> $dest = "$env:temp\python-2.7.3.msi" + PS C:\> Invoke-WebRequest –Uri $src –OutFile $dest + PS C:\> Unblock-File $dest + PS C:\> Start-Process $dest + +#. Make sure that the ``Python`` and ``Python\Scripts`` paths are set up in the + ``PATH`` environment variable. + + .. code-block:: none + + PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path") + PS C:\> $newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\" + PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User + +Python dependencies +------------------- + +The following packages need to be downloaded and manually installed: + +``setuptools`` + https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe + +``pip`` + https://pip.pypa.io/en/latest/installing/ + +``PyMySQL`` + http://codegood.com/download/10/ + +``PyWin32`` + https://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe + +``Greenlet`` + http://www.lfd.uci.edu/~gohlke/pythonlibs/#greenlet + +``PyCryto`` + http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe + +The following packages must be installed with pip: + +* ``ecdsa`` +* ``amqp`` +* ``wmi`` + +.. code-block:: none + + PS C:\> pip install ecdsa + PS C:\> pip install amqp + PS C:\> pip install wmi + +Other dependencies +------------------ + +``qemu-img`` is required for some of the image related operations. You can get +it from here: http://qemu.weilnetz.de/. You must make sure that the +``qemu-img`` path is set in the PATH environment variable. + +Some Python packages need to be compiled, so you may use MinGW or Visual +Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/. +You must configure which compiler is to be used for this purpose by using the +``distutils.cfg`` file in ``$Python27\Lib\distutils``, which can contain: + +.. code-block:: ini + + [build] + compiler = mingw32 + +As a last step for setting up MinGW, make sure that the MinGW binaries' +directories are set up in PATH. + +Install nova-compute +~~~~~~~~~~~~~~~~~~~~ + +Download the nova code +---------------------- + +#. Use Git to download the necessary source code. 
The installer to run Git on + Windows can be downloaded here: + + https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe + +#. Download the installer. Once the download is complete, run the installer and + follow the prompts in the installation wizard. The default should be + acceptable for the purposes of this guide. + + .. code-block:: none + + PS C:\> $src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe" + PS C:\> $dest = "$env:temp\Git-1.9.2-preview20140411.exe" + PS C:\> Invoke-WebRequest –Uri $src –OutFile $dest + PS C:\> Unblock-File $dest + PS C:\> Start-Process $dest + +#. Run the following to clone the nova code. + + .. code-block:: none + + PS C:\> git.exe clone https://git.openstack.org/openstack/nova + +Install nova-compute service +---------------------------- + +To install ``nova-compute``, run: + +.. code-block:: none + + PS C:\> cd c:\nova + PS C:\> python setup.py install + +Configure nova-compute +---------------------- + +The ``nova.conf`` file must be placed in ``C:\etc\nova`` for running OpenStack +on Hyper-V. Below is a sample ``nova.conf`` for Windows: + +.. code-block:: ini + + [DEFAULT] + auth_strategy = keystone + image_service = nova.image.glance.GlanceImageService + compute_driver = nova.virt.hyperv.driver.HyperVDriver + volume_api_class = nova.volume.cinder.API + fake_network = true + instances_path = C:\Program Files (x86)\OpenStack\Instances + use_cow_images = true + force_config_drive = false + injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template + policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.json + mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe + allow_resize_to_same_host = true + running_deleted_instance_action = reap + running_deleted_instance_poll_interval = 120 + resize_confirm_window = 5 + resume_guests_state_on_host_boot = true + rpc_response_timeout = 1800 + lock_path = C:\Program Files (x86)\OpenStack\Log\ + rpc_backend = nova.openstack.common.rpc.impl_kombu + rabbit_host = IP_ADDRESS + rabbit_port = 5672 + rabbit_userid = guest + rabbit_password = Passw0rd + logdir = C:\Program Files (x86)\OpenStack\Log\ + logfile = nova-compute.log + instance_usage_audit = true + instance_usage_audit_period = hour + use_neutron = True + [glance] + api_servers = http://IP_ADDRESS:9292 + [neutron] + url = http://IP_ADDRESS:9696 + auth_strategy = keystone + admin_tenant_name = service + admin_username = neutron + admin_password = Passw0rd + admin_auth_url = http://IP_ADDRESS:35357/v2.0 + [hyperv] + vswitch_name = newVSwitch0 + limit_cpu_features = false + config_drive_inject_password = false + qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe + config_drive_cdrom = true + dynamic_memory_ratio = 1 + enable_instance_metrics_collection = true + [rdp] + enabled = true + html5_proxy_base_url = https://IP_ADDRESS:4430 + +Prepare images for use with Hyper-V +----------------------------------- + +Hyper-V currently supports only the VHD and VHDX file format for virtual +machine instances. Detailed instructions for installing virtual machines on +Hyper-V can be found here: + +`Create Virtual Machines +`_ + +Once you have successfully created a virtual machine, you can then upload the +image to `glance` using the `openstack-client`: + +.. 
code-block:: none + + PS C:\> openstack image create --name "VM_IMAGE_NAME" --property hypervisor_type=hyperv --public \ + --container-format bare --disk-format vhd + +.. note:: + + VHD and VHDX files sizes can be bigger than their maximum internal size, + as such you need to boot instances using a flavor with a slightly bigger + disk size than the internal size of the disk file. + To create VHDs, use the following PowerShell cmdlet: + + .. code-block:: none + + PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE + +Inject interfaces and routes +---------------------------- + +The ``interfaces.template`` file describes the network interfaces and routes +available on your system and how to activate them. You can specify the location +of the file with the ``injected_network_template`` configuration option in +``/etc/nova/nova.conf``. + +.. code-block:: ini + + injected_network_template = PATH_TO_FILE + +A default template exists in ``nova/virt/interfaces.template``. + +Run Compute with Hyper-V +------------------------ + +To start the ``nova-compute`` service, run this command from a console in the +Windows server: + +.. code-block:: none + + PS C:\> C:\Python27\python.exe c:\Python27\Scripts\nova-compute --config-file c:\etc\nova\nova.conf + +Troubleshoot Hyper-V configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* I ran the :command:`nova-manage service list` command from my controller; + however, I'm not seeing smiley faces for Hyper-V compute nodes, what do I do? + + Verify that you are synchronized with a network time source. For + instructions about how to configure NTP on your Hyper-V compute node, see + :ref:`configure-ntp-windows`. + +* How do I restart the compute service? + + .. code-block:: none + + PS C:\> net stop nova-compute && net start nova-compute + +* How do I restart the iSCSI initiator service? + + .. code-block:: none + + PS C:\> net stop msiscsi && net start msiscsi diff --git a/doc/source/admin/configuration/hypervisor-kvm.rst b/doc/source/admin/configuration/hypervisor-kvm.rst new file mode 100644 index 000000000000..701e5fefe0a2 --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-kvm.rst @@ -0,0 +1,372 @@ +=== +KVM +=== + +.. todo:: This is really installation guide material and should probably be + moved. + +KVM is configured as the default hypervisor for Compute. + +.. note:: + + This document contains several sections about hypervisor selection. If you + are reading this document linearly, you do not want to load the KVM module + before you install ``nova-compute``. The ``nova-compute`` service depends + on qemu-kvm, which installs ``/lib/udev/rules.d/45-qemu-kvm.rules``, which + sets the correct permissions on the ``/dev/kvm`` device node. + +To enable KVM explicitly, add the following configuration options to the +``/etc/nova/nova.conf`` file: + +.. code-block:: ini + + compute_driver = libvirt.LibvirtDriver + + [libvirt] + virt_type = kvm + +The KVM hypervisor supports the following virtual machine image formats: + +* Raw +* QEMU Copy-on-write (QCOW2) +* QED Qemu Enhanced Disk +* VMware virtual machine disk format (vmdk) + +This section describes how to enable KVM on your system. For more information, +see the following distribution-specific documentation: + +* `Fedora: Virtualization Getting Started Guide `_ + from the Fedora 22 documentation. +* `Ubuntu: KVM/Installation `_ from the Community Ubuntu documentation. +* `Debian: Virtualization with KVM `_ from the Debian handbook. 
+* `Red Hat Enterprise Linux: Installing virtualization packages on an existing + Red Hat Enterprise Linux system `_ from the ``Red Hat Enterprise Linux + Virtualization Host Configuration and Guest Installation Guide``. +* `openSUSE: Installing KVM `_ + from the openSUSE Virtualization with KVM manual. +* `SLES: Installing KVM `_ from the SUSE Linux Enterprise Server + ``Virtualization Guide``. + +Enable KVM +~~~~~~~~~~ + +The following sections outline how to enable KVM based hardware virtualization +on different architectures and platforms. To perform these steps, you must be +logged in as the ``root`` user. + +For x86 based systems +--------------------- + +#. To determine whether the ``svm`` or ``vmx`` CPU extensions are present, run + this command: + + .. code-block:: console + + # grep -E 'svm|vmx' /proc/cpuinfo + + This command generates output if the CPU is capable of + hardware-virtualization. Even if output is shown, you might still need to + enable virtualization in the system BIOS for full support. + + If no output appears, consult your system documentation to ensure that your + CPU and motherboard support hardware virtualization. Verify that any + relevant hardware virtualization options are enabled in the system BIOS. + + The BIOS for each manufacturer is different. If you must enable + virtualization in the BIOS, look for an option containing the words + ``virtualization``, ``VT``, ``VMX``, or ``SVM``. + +#. To list the loaded kernel modules and verify that the ``kvm`` modules are + loaded, run this command: + + .. code-block:: console + + # lsmod | grep kvm + + If the output includes ``kvm_intel`` or ``kvm_amd``, the ``kvm`` hardware + virtualization modules are loaded and your kernel meets the module + requirements for OpenStack Compute. + + If the output does not show that the ``kvm`` module is loaded, run this + command to load it: + + .. code-block:: console + + # modprobe -a kvm + + Run the command for your CPU. For Intel, run this command: + + .. code-block:: console + + # modprobe -a kvm-intel + + For AMD, run this command: + + .. code-block:: console + + # modprobe -a kvm-amd + + Because a KVM installation can change user group membership, you might need + to log in again for changes to take effect. + + If the kernel modules do not load automatically, use the procedures listed + in these subsections. + +If the checks indicate that required hardware virtualization support or kernel +modules are disabled or unavailable, you must either enable this support on the +system or find a system with this support. + +.. note:: + + Some systems require that you enable VT support in the system BIOS. If you + believe your processor supports hardware acceleration but the previous + command did not produce output, reboot your machine, enter the system BIOS, + and enable the VT option. + +If KVM acceleration is not supported, configure Compute to use a different +hypervisor, such as ``QEMU`` or ``Xen``. See :ref:`compute_qemu` or +:ref:`compute_xen_api` for details. + +These procedures help you load the kernel modules for Intel-based and AMD-based +processors if they do not load automatically during KVM installation. + +.. rubric:: Intel-based processors + +If your compute host is Intel-based, run these commands as root to load the +kernel modules: + +.. code-block:: console + + # modprobe kvm + # modprobe kvm-intel + +Add these lines to the ``/etc/modules`` file so that these modules load on +reboot: + +.. code-block:: console + + kvm + kvm-intel + +.. 
rubric:: AMD-based processors + +If your compute host is AMD-based, run these commands as root to load the +kernel modules: + +.. code-block:: console + + # modprobe kvm + # modprobe kvm-amd + +Add these lines to ``/etc/modules`` file so that these modules load on reboot: + +.. code-block:: console + + kvm + kvm-amd + +For POWER based systems +----------------------- + +KVM as a hypervisor is supported on POWER system's PowerNV platform. + +#. To determine if your POWER platform supports KVM based virtualization run + the following command: + + .. code-block:: console + + # cat /proc/cpuinfo | grep PowerNV + + If the previous command generates the following output, then CPU supports + KVM based virtualization. + + .. code-block:: console + + platform: PowerNV + + If no output is displayed, then your POWER platform does not support KVM + based hardware virtualization. + +#. To list the loaded kernel modules and verify that the ``kvm`` modules are + loaded, run the following command: + + .. code-block:: console + + # lsmod | grep kvm + + If the output includes ``kvm_hv``, the ``kvm`` hardware virtualization + modules are loaded and your kernel meets the module requirements for + OpenStack Compute. + + If the output does not show that the ``kvm`` module is loaded, run the + following command to load it: + + .. code-block:: console + + # modprobe -a kvm + + For PowerNV platform, run the following command: + + .. code-block:: console + + # modprobe -a kvm-hv + + Because a KVM installation can change user group membership, you might need + to log in again for changes to take effect. + +Configure Compute backing storage +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Backing Storage is the storage used to provide the expanded operating system +image, and any ephemeral storage. Inside the virtual machine, this is normally +presented as two virtual hard disks (for example, ``/dev/vda`` and ``/dev/vdb`` +respectively). However, inside OpenStack, this can be derived from one of these +methods: ``lvm``, ``qcow``, ``rbd`` or ``flat``, chosen using the +``images_type`` option in ``nova.conf`` on the compute node. + +.. note:: + + The option ``raw`` is acceptable but deprecated in favor of ``flat``. The + Flat back end uses either raw or QCOW2 storage. It never uses a backing + store, so when using QCOW2 it copies an image rather than creating an + overlay. By default, it creates raw files but will use QCOW2 when creating a + disk from a QCOW2 if ``force_raw_images`` is not set in configuration. + +QCOW is the default backing store. It uses a copy-on-write philosophy to delay +allocation of storage until it is actually needed. This means that the space +required for the backing of an image can be significantly less on the real disk +than what seems available in the virtual machine operating system. + +Flat creates files without any sort of file formatting, effectively creating +files with the plain binary one would normally see on a real disk. This can +increase performance, but means that the entire size of the virtual disk is +reserved on the physical disk. + +Local `LVM volumes +`__ can also be +used. Set ``images_volume_group = nova_local`` where ``nova_local`` is the name +of the LVM group you have created. + +Specify the CPU model of KVM guests +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The Compute service enables you to control the guest CPU model that is exposed +to KVM virtual machines. 
Use cases include: + +* To maximize performance of virtual machines by exposing new host CPU features + to the guest + +* To ensure a consistent default CPU across all machines, removing reliance of + variable QEMU defaults + +In libvirt, the CPU is specified by providing a base CPU model name (which is a +shorthand for a set of feature flags), a set of additional feature flags, and +the topology (sockets/cores/threads). The libvirt KVM driver provides a number +of standard CPU model names. These models are defined in the +``/usr/share/libvirt/cpu_map.xml`` file. Check this file to determine which +models are supported by your local installation. + +Two Compute configuration options in the ``[libvirt]`` group of ``nova.conf`` +define which type of CPU model is exposed to the hypervisor when using KVM: +``cpu_mode`` and ``cpu_model``. + +The ``cpu_mode`` option can take one of the following values: ``none``, +``host-passthrough``, ``host-model``, and ``custom``. + +Host model (default for KVM & QEMU) +----------------------------------- + +If your ``nova.conf`` file contains ``cpu_mode=host-model``, libvirt identifies +the CPU model in ``/usr/share/libvirt/cpu_map.xml`` file that most closely +matches the host, and requests additional CPU flags to complete the match. This +configuration provides the maximum functionality and performance and maintains +good reliability and compatibility if the guest is migrated to another host +with slightly different host CPUs. + +Host pass through +----------------- + +If your ``nova.conf`` file contains ``cpu_mode=host-passthrough``, libvirt +tells KVM to pass through the host CPU with no modifications. The difference +to host-model, instead of just matching feature flags, every last detail of the +host CPU is matched. This gives the best performance, and can be important to +some apps which check low level CPU details, but it comes at a cost with +respect to migration. The guest can only be migrated to a matching host CPU. + +Custom +------ + +If your ``nova.conf`` file contains ``cpu_mode=custom``, you can explicitly +specify one of the supported named models using the cpu_model configuration +option. For example, to configure the KVM guests to expose Nehalem CPUs, your +``nova.conf`` file should contain: + +.. code-block:: ini + + [libvirt] + cpu_mode = custom + cpu_model = Nehalem + +None (default for all libvirt-driven hypervisors other than KVM & QEMU) +----------------------------------------------------------------------- + +If your ``nova.conf`` file contains ``cpu_mode=none``, libvirt does not specify +a CPU model. Instead, the hypervisor chooses the default model. + +Guest agent support +------------------- + +Use guest agents to enable optional access between compute nodes and guests +through a socket, using the QMP protocol. + +To enable this feature, you must set ``hw_qemu_guest_agent=yes`` as a metadata +parameter on the image you wish to use to create the guest-agent-capable +instances from. You can explicitly disable the feature by setting +``hw_qemu_guest_agent=no`` in the image metadata. + +KVM performance tweaks +~~~~~~~~~~~~~~~~~~~~~~ + +The `VHostNet `_ kernel module improves +network performance. To load the kernel module, run the following command as +root: + +.. code-block:: console + + # modprobe vhost_net + +Troubleshoot KVM +~~~~~~~~~~~~~~~~ + +Trying to launch a new virtual machine instance fails with the ``ERROR`` state, +and the following error appears in the ``/var/log/nova/nova-compute.log`` file: + +.. 
code-block:: console + + libvirtError: internal error no supported architecture for os type 'hvm' + +This message indicates that the KVM kernel modules were not loaded. + +If you cannot start VMs after installation without rebooting, the permissions +might not be set correctly. This can happen if you load the KVM module before +you install ``nova-compute``. To check whether the group is set to ``kvm``, +run: + +.. code-block:: console + + # ls -l /dev/kvm + +If it is not set to ``kvm``, run: + +.. code-block:: console + + # udevadm trigger diff --git a/doc/source/admin/configuration/hypervisor-lxc.rst b/doc/source/admin/configuration/hypervisor-lxc.rst new file mode 100644 index 000000000000..eb8d51f83ef0 --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-lxc.rst @@ -0,0 +1,38 @@ +====================== +LXC (Linux containers) +====================== + +LXC (also known as Linux containers) is a virtualization technology that works +at the operating system level. This is different from hardware virtualization, +the approach used by other hypervisors such as KVM, Xen, and VMware. LXC (as +currently implemented using libvirt in the Compute service) is not a secure +virtualization technology for multi-tenant environments (specifically, +containers may affect resource quotas for other containers hosted on the same +machine). Additional containment technologies, such as AppArmor, may be used to +provide better isolation between containers, although this is not the case by +default. For all these reasons, the choice of this virtualization technology +is not recommended in production. + +If your compute hosts do not have hardware support for virtualization, LXC will +likely provide better performance than QEMU. In addition, if your guests must +access specialized hardware, such as GPUs, this might be easier to achieve with +LXC than other hypervisors. + +.. note:: + + Some OpenStack Compute features might be missing when running with LXC as + the hypervisor. See the `hypervisor support matrix + `_ for details. + +To enable LXC, ensure the following options are set in ``/etc/nova/nova.conf`` +on all hosts running the ``nova-compute`` service. + +.. code-block:: ini + + compute_driver = libvirt.LibvirtDriver + + [libvirt] + virt_type = lxc + +On Ubuntu, enable LXC support in OpenStack by installing the +``nova-compute-lxc`` package. diff --git a/doc/source/admin/configuration/hypervisor-qemu.rst b/doc/source/admin/configuration/hypervisor-qemu.rst new file mode 100644 index 000000000000..6849b89c2804 --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-qemu.rst @@ -0,0 +1,56 @@ +.. _compute_qemu: + +==== +QEMU +==== + +From the perspective of the Compute service, the QEMU hypervisor is +very similar to the KVM hypervisor. Both are controlled through libvirt, +both support the same feature set, and all virtual machine images that +are compatible with KVM are also compatible with QEMU. +The main difference is that QEMU does not support native virtualization. +Consequently, QEMU has worse performance than KVM and is a poor choice +for a production deployment. + +The typical uses cases for QEMU are + +* Running on older hardware that lacks virtualization support. +* Running the Compute service inside of a virtual machine for + development or testing purposes, where the hypervisor does not + support native virtualization for guests. + +To enable QEMU, add these settings to ``nova.conf``: + +.. 
code-block:: ini + + compute_driver = libvirt.LibvirtDriver + + [libvirt] + virt_type = qemu + +For some operations you may also have to install the +:command:`guestmount` utility: + +On Ubuntu: + +.. code-block:: console + + # apt-get install guestmount + +On Red Hat Enterprise Linux, Fedora, or CentOS: + +.. code-block:: console + + # yum install libguestfs-tools + +On openSUSE: + +.. code-block:: console + + # zypper install guestfs-tools + +The QEMU hypervisor supports the following virtual machine image formats: + +* Raw +* QEMU Copy-on-write (qcow2) +* VMware virtual machine disk format (vmdk) diff --git a/doc/source/admin/configuration/hypervisor-virtuozzo.rst b/doc/source/admin/configuration/hypervisor-virtuozzo.rst new file mode 100644 index 000000000000..13c63daba625 --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-virtuozzo.rst @@ -0,0 +1,39 @@ +========= +Virtuozzo +========= + +Virtuozzo 7.0.0 (or newer), or its community edition OpenVZ, provides both +types of virtualization: Kernel Virtual Machines and OS Containers. The type +of instance to span is chosen depending on the ``hw_vm_type`` property of an +image. + +.. note:: + + Some OpenStack Compute features may be missing when running with Virtuozzo + as the hypervisor. See :doc:`/user/support-matrix` for details. + +To enable Virtuozzo Containers, set the following options in +``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. + +.. code-block:: ini + + compute_driver = libvirt.LibvirtDriver + force_raw_images = False + + [libvirt] + virt_type = parallels + images_type = ploop + connection_uri = parallels:///system + inject_partition = -2 + +To enable Virtuozzo Virtual Machines, set the following options in +``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. + +.. code-block:: ini + + compute_driver = libvirt.LibvirtDriver + + [libvirt] + virt_type = parallels + images_type = qcow2 + connection_uri = parallels:///system diff --git a/doc/source/admin/configuration/hypervisor-vmware.rst b/doc/source/admin/configuration/hypervisor-vmware.rst new file mode 100644 index 000000000000..d8d3ec40cade --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-vmware.rst @@ -0,0 +1,1130 @@ +============== +VMware vSphere +============== + +Introduction +~~~~~~~~~~~~ + +OpenStack Compute supports the VMware vSphere product family and enables access +to advanced features such as vMotion, High Availability, and Dynamic Resource +Scheduling (DRS). + +This section describes how to configure VMware-based virtual machine images for +launch. The VMware driver supports vCenter version 5.5.0 and later. + +The VMware vCenter driver enables the ``nova-compute`` service to communicate +with a VMware vCenter server that manages one or more ESX host clusters. The +driver aggregates the ESX hosts in each cluster to present one large hypervisor +entity for each cluster to the Compute scheduler. Because individual ESX hosts +are not exposed to the scheduler, Compute schedules to the granularity of +clusters and vCenter uses DRS to select the actual ESX host within the cluster. +When a virtual machine makes its way into a vCenter cluster, it can use all +vSphere features. + +The following sections describe how to configure the VMware vCenter driver. + +High-level architecture +~~~~~~~~~~~~~~~~~~~~~~~ + +The following diagram shows a high-level view of the VMware driver +architecture: + +.. rubric:: VMware driver architecture + +.. 
figure:: /figures/vmware-nova-driver-architecture.jpg + :width: 100% + +As the figure shows, the OpenStack Compute Scheduler sees three hypervisors +that each correspond to a cluster in vCenter. ``nova-compute`` contains the +VMware driver. You can run with multiple ``nova-compute`` services. It is +recommended to run with one ``nova-compute`` service per ESX cluster thus +ensuring that while Compute schedules at the granularity of the +``nova-compute`` service it is also in effect able to schedule at the cluster +level. In turn the VMware driver inside ``nova-compute`` interacts with the +vCenter APIs to select an appropriate ESX host within the cluster. Internally, +vCenter uses DRS for placement. + +The VMware vCenter driver also interacts with the Image service to copy VMDK +images from the Image service back-end store. The dotted line in the figure +represents VMDK images being copied from the OpenStack Image service to the +vSphere data store. VMDK images are cached in the data store so the copy +operation is only required the first time that the VMDK image is used. + +After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in +vCenter and can access vSphere advanced features. At the same time, the VM is +visible in the OpenStack dashboard and you can manage it as you would any other +OpenStack VM. You can perform advanced vSphere operations in vCenter while you +configure OpenStack resources such as VMs through the OpenStack dashboard. + +The figure does not show how networking fits into the architecture. Both +``nova-network`` and the OpenStack Networking Service are supported. For +details, see :ref:`vmware-networking`. + +Configuration overview +~~~~~~~~~~~~~~~~~~~~~~ + +To get started with the VMware vCenter driver, complete the following +high-level steps: + +#. Configure vCenter. See :ref:`vmware-prereqs`. + +#. Configure the VMware vCenter driver in the ``nova.conf`` file. + See :ref:`vmware-vcdriver`. + +#. Load desired VMDK images into the Image service. See :ref:`vmware-images`. + +#. Configure networking with either ``nova-network`` or + the Networking service. See :ref:`vmware-networking`. + +.. _vmware-prereqs: + +Prerequisites and limitations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Use the following list to prepare a vSphere environment that runs with the +VMware vCenter driver: + +Copying VMDK files + In vSphere 5.1, copying large image files (for example, 12 GB and greater) + from the Image service can take a long time. To improve performance, VMware + recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. + For more information, see the `Release Notes + `_. + +DRS + For any cluster that contains multiple ESX hosts, enable DRS and enable fully + automated placement. + +Shared storage + Only shared storage is supported and data stores must be shared among all + hosts in a cluster. It is recommended to remove data stores not intended for + OpenStack from clusters being configured for OpenStack. + +Clusters and data stores + Do not use OpenStack clusters and data stores for other purposes. If you do, + OpenStack displays incorrect usage information. + +Networking + The networking configuration depends on the desired networking model. See + :ref:`vmware-networking`. + +Security groups + If you use the VMware driver with OpenStack Networking and the NSX plug-in, + security groups are supported. If you use ``nova-network``, security groups + are not supported. + + .. 
note:: + + The NSX plug-in is the only plug-in that is validated for vSphere. + +VNC + The port range 5900 - 6105 (inclusive) is automatically enabled for VNC + connections on every ESX host in all clusters under OpenStack control. + + .. note:: + + In addition to the default VNC port numbers (5900 to 6000) specified in + the above document, the following ports are also used: 6101, 6102, and + 6105. + + You must modify the ESXi firewall configuration to allow the VNC ports. + Additionally, for the firewall modifications to persist after a reboot, you + must create a custom vSphere Installation Bundle (VIB) which is then + installed onto the running ESXi host or added to a custom image profile used + to install ESXi hosts. For details about how to create a VIB for persisting + the firewall configuration modifications, see `Knowledge Base + `_. + + .. note:: + + The VIB can be downloaded from `openstack-vmwareapi-team/Tools + `_. + +To use multiple vCenter installations with OpenStack, each vCenter must be +assigned to a separate availability zone. This is required as the OpenStack +Block Storage VMDK driver does not currently work across multiple vCenter +installations. + +VMware vCenter service account +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +OpenStack integration requires a vCenter service account with the following +minimum permissions. Apply the permissions to the ``Datacenter`` root object, +and select the :guilabel:`Propagate to Child Objects` option. + +.. list-table:: vCenter permissions tree + :header-rows: 1 + :widths: 12, 12, 40, 36 + + * - All Privileges + - + - + - + * - + - Datastore + - + - + * - + - + - Allocate space + - + * - + - + - Browse datastore + - + * - + - + - Low level file operation + - + * - + - + - Remove file + - + * - + - Extension + - + - + * - + - + - Register extension + - + * - + - Folder + - + - + * - + - + - Create folder + - + * - + - Host + - + - + * - + - + - Configuration + - + * - + - + - + - Maintenance + * - + - + - + - Network configuration + * - + - + - + - Storage partition configuration + * - + - Network + - + - + * - + - + - Assign network + - + * - + - Resource + - + - + * - + - + - Assign virtual machine to resource pool + - + * - + - + - Migrate powered off virtual machine + - + * - + - + - Migrate powered on virtual machine + - + * - + - Virtual Machine + - + - + * - + - + - Configuration + - + * - + - + - + - Add existing disk + * - + - + - + - Add new disk + * - + - + - + - Add or remove device + * - + - + - + - Advanced + * - + - + - + - CPU count + * - + - + - + - Change resource + * - + - + - + - Disk change tracking + * - + - + - + - Host USB device + * - + - + - + - Memory + * - + - + - + - Modify device settings + * - + - + - + - Raw device + * - + - + - + - Remove disk + * - + - + - + - Rename + * - + - + - + - Set annotation + * - + - + - + - Swapfile placement + * - + - + - Interaction + - + * - + - + - + - Configure CD media + * - + - + - + - Power Off + * - + - + - + - Power On + * - + - + - + - Reset + * - + - + - + - Suspend + * - + - + - Inventory + - + * - + - + - + - Create from existing + * - + - + - + - Create new + * - + - + - + - Move + * - + - + - + - Remove + * - + - + - + - Unregister + * - + - + - Provisioning + - + * - + - + - + - Clone virtual machine + * - + - + - + - Customize + * - + - + - + - Create template from virtual machine + * - + - + - Snapshot management + - + * - + - + - + - Create snapshot + * - + - + - + - Remove snapshot + * - + - Sessions + - + - + * - + - + - + - Validate session + * - + - + - + - 
View and stop sessions + * - + - vApp + - + - + * - + - + - Export + - + * - + - + - Import + - + +.. _vmware-vcdriver: + +VMware vCenter driver +~~~~~~~~~~~~~~~~~~~~~ + +Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute +with vCenter. This recommended configuration enables access through vCenter to +advanced vSphere features like vMotion, High Availability, and Dynamic Resource +Scheduling (DRS). + +VMwareVCDriver configuration options +------------------------------------ + +Add the following VMware-specific configuration options to the ``nova.conf`` +file: + +.. code-block:: ini + + [DEFAULT] + compute_driver = vmwareapi.VMwareVCDriver + + [vmware] + host_ip = + host_username = + host_password = + cluster_name = + datastore_regex = + +.. note:: + + * Clusters: The vCenter driver can support only a single cluster. Clusters + and data stores used by the vCenter driver should not contain any VMs + other than those created by the driver. + + * Data stores: The ``datastore_regex`` setting specifies the data stores to + use with Compute. For example, ``datastore_regex="nas.*"`` selects all + the data stores that have a name starting with "nas". If this line is + omitted, Compute uses the first data store returned by the vSphere API. It + is recommended not to use this field and instead remove data stores that + are not intended for OpenStack. + + * Reserved host memory: The ``reserved_host_memory_mb`` option value is 512 + MB by default. However, VMware recommends that you set this option to 0 MB + because the vCenter driver reports the effective memory available to the + virtual machines. + + * The vCenter driver generates instance name by instance ID. Instance name + template is ignored. + + * The minimum supported vCenter version is 5.5.0. Starting in the OpenStack + Ocata release any version lower than 5.5.0 will be logged as a warning. In + the OpenStack Pike release this will be enforced. + +A ``nova-compute`` service can control one or more clusters containing multiple +ESXi hosts, making ``nova-compute`` a critical service from a high availability +perspective. Because the host that runs ``nova-compute`` can fail while the +vCenter and ESX still run, you must protect the ``nova-compute`` service +against host failures. + +.. note:: + + Many ``nova.conf`` options are relevant to libvirt but do not apply to this + driver. + +.. _vmware-images: + +Images with VMware vSphere +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The vCenter driver supports images in the VMDK format. Disks in this format can +be obtained from VMware Fusion or from an ESX environment. It is also possible +to convert other formats, such as qcow2, to the VMDK format using the +``qemu-img`` utility. After a VMDK disk is available, load it into the Image +service. Then, you can use it with the VMware vCenter driver. The following +sections provide additional details on the supported disks and the commands +used for conversion and upload. + +Supported image types +--------------------- + +Upload images to the OpenStack Image service in VMDK format. The following +VMDK disk types are supported: + +* ``VMFS Flat Disks`` (includes thin, thick, zeroedthick, and + eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a + non-VMFS location, like the OpenStack Image service, it becomes a + preallocated flat disk. This impacts the transfer time from the Image service + to the data store when the full preallocated flat disk, rather than the thin + disk, must be transferred. 
+ +* ``Monolithic Sparse disks``. Sparse disks get imported from the Image service + into ESXi as thin provisioned disks. Monolithic Sparse disks can be obtained + from VMware Fusion or can be created by converting from other virtual disk + formats using the ``qemu-img`` utility. + +* ``Stream-optimized disks``. Stream-optimized disks are compressed sparse + disks. They can be obtained from VMware vCenter/ESXi when exporting vm to + ovf/ova template. + +The following table shows the ``vmware_disktype`` property that applies to each +of the supported VMDK disk types: + +.. list-table:: OpenStack Image service disk type settings + :header-rows: 1 + + * - vmware_disktype property + - VMDK disk type + * - sparse + - Monolithic Sparse + * - thin + - VMFS flat, thin provisioned + * - preallocated (default) + - VMFS flat, thick/zeroedthick/eagerzeroedthick + * - streamOptimized + - Compressed Sparse + +The ``vmware_disktype`` property is set when an image is loaded into the Image +service. For example, the following command creates a Monolithic Sparse image +by setting ``vmware_disktype`` to ``sparse``: + +.. code-block:: console + + $ openstack image create \ + --disk-format vmdk \ + --container-format bare \ + --property vmware_disktype="sparse" \ + --property vmware_ostype="ubuntu64Guest" \ + ubuntu-sparse < ubuntuLTS-sparse.vmdk + +.. note:: + + Specifying ``thin`` does not provide any advantage over ``preallocated`` + with the current version of the driver. Future versions might restore the + thin properties of the disk after it is downloaded to a vSphere data store. + +The following table shows the ``vmware_ostype`` property that applies to each +of the supported guest OS: + +.. note:: + + If a glance image has a ``vmware_ostype`` property which does not correspond + to a valid VMware guestId, VM creation will fail, and a warning will be + logged. + +.. 
list-table:: OpenStack Image service OS type settings + :header-rows: 1 + + * - vmware_ostype property + - Retail Name + * - asianux3_64Guest + - Asianux Server 3 (64 bit) + * - asianux3Guest + - Asianux Server 3 + * - asianux4_64Guest + - Asianux Server 4 (64 bit) + * - asianux4Guest + - Asianux Server 4 + * - darwin64Guest + - Darwin 64 bit + * - darwinGuest + - Darwin + * - debian4_64Guest + - Debian GNU/Linux 4 (64 bit) + * - debian4Guest + - Debian GNU/Linux 4 + * - debian5_64Guest + - Debian GNU/Linux 5 (64 bit) + * - debian5Guest + - Debian GNU/Linux 5 + * - dosGuest + - MS-DOS + * - freebsd64Guest + - FreeBSD x64 + * - freebsdGuest + - FreeBSD + * - mandrivaGuest + - Mandriva Linux + * - netware4Guest + - Novell NetWare 4 + * - netware5Guest + - Novell NetWare 5.1 + * - netware6Guest + - Novell NetWare 6.x + * - nld9Guest + - Novell Linux Desktop 9 + * - oesGuest + - Open Enterprise Server + * - openServer5Guest + - SCO OpenServer 5 + * - openServer6Guest + - SCO OpenServer 6 + * - opensuse64Guest + - openSUSE (64 bit) + * - opensuseGuest + - openSUSE + * - os2Guest + - OS/2 + * - other24xLinux64Guest + - Linux 2.4x Kernel (64 bit) (experimental) + * - other24xLinuxGuest + - Linux 2.4x Kernel + * - other26xLinux64Guest + - Linux 2.6x Kernel (64 bit) (experimental) + * - other26xLinuxGuest + - Linux 2.6x Kernel (experimental) + * - otherGuest + - Other Operating System + * - otherGuest64 + - Other Operating System (64 bit) (experimental) + * - otherLinux64Guest + - Linux (64 bit) (experimental) + * - otherLinuxGuest + - Other Linux + * - redhatGuest + - Red Hat Linux 2.1 + * - rhel2Guest + - Red Hat Enterprise Linux 2 + * - rhel3_64Guest + - Red Hat Enterprise Linux 3 (64 bit) + * - rhel3Guest + - Red Hat Enterprise Linux 3 + * - rhel4_64Guest + - Red Hat Enterprise Linux 4 (64 bit) + * - rhel4Guest + - Red Hat Enterprise Linux 4 + * - rhel5_64Guest + - Red Hat Enterprise Linux 5 (64 bit) (experimental) + * - rhel5Guest + - Red Hat Enterprise Linux 5 + * - rhel6_64Guest + - Red Hat Enterprise Linux 6 (64 bit) + * - rhel6Guest + - Red Hat Enterprise Linux 6 + * - sjdsGuest + - Sun Java Desktop System + * - sles10_64Guest + - SUSE Linux Enterprise Server 10 (64 bit) (experimental) + * - sles10Guest + - SUSE Linux Enterprise Server 10 + * - sles11_64Guest + - SUSE Linux Enterprise Server 11 (64 bit) + * - sles11Guest + - SUSE Linux Enterprise Server 11 + * - sles64Guest + - SUSE Linux Enterprise Server 9 (64 bit) + * - slesGuest + - SUSE Linux Enterprise Server 9 + * - solaris10_64Guest + - Solaris 10 (64 bit) (experimental) + * - solaris10Guest + - Solaris 10 (32 bit) (experimental) + * - solaris6Guest + - Solaris 6 + * - solaris7Guest + - Solaris 7 + * - solaris8Guest + - Solaris 8 + * - solaris9Guest + - Solaris 9 + * - suse64Guest + - SUSE Linux (64 bit) + * - suseGuest + - SUSE Linux + * - turboLinux64Guest + - Turbolinux (64 bit) + * - turboLinuxGuest + - Turbolinux + * - ubuntu64Guest + - Ubuntu Linux (64 bit) + * - ubuntuGuest + - Ubuntu Linux + * - unixWare7Guest + - SCO UnixWare 7 + * - win2000AdvServGuest + - Windows 2000 Advanced Server + * - win2000ProGuest + - Windows 2000 Professional + * - win2000ServGuest + - Windows 2000 Server + * - win31Guest + - Windows 3.1 + * - win95Guest + - Windows 95 + * - win98Guest + - Windows 98 + * - windows7_64Guest + - Windows 7 (64 bit) + * - windows7Guest + - Windows 7 + * - windows7Server64Guest + - Windows Server 2008 R2 (64 bit) + * - winLonghorn64Guest + - Windows Longhorn (64 bit) (experimental) + * - winLonghornGuest + - Windows 
Longhorn (experimental) + * - winMeGuest + - Windows Millennium Edition + * - winNetBusinessGuest + - Windows Small Business Server 2003 + * - winNetDatacenter64Guest + - Windows Server 2003, Datacenter Edition (64 bit) (experimental) + * - winNetDatacenterGuest + - Windows Server 2003, Datacenter Edition + * - winNetEnterprise64Guest + - Windows Server 2003, Enterprise Edition (64 bit) + * - winNetEnterpriseGuest + - Windows Server 2003, Enterprise Edition + * - winNetStandard64Guest + - Windows Server 2003, Standard Edition (64 bit) + * - winNetEnterpriseGuest + - Windows Server 2003, Enterprise Edition + * - winNetStandard64Guest + - Windows Server 2003, Standard Edition (64 bit) + * - winNetStandardGuest + - Windows Server 2003, Standard Edition + * - winNetWebGuest + - Windows Server 2003, Web Edition + * - winNTGuest + - Windows NT 4 + * - winVista64Guest + - Windows Vista (64 bit) + * - winVistaGuest + - Windows Vista + * - winXPHomeGuest + - Windows XP Home Edition + * - winXPPro64Guest + - Windows XP Professional Edition (64 bit) + * - winXPProGuest + - Windows XP Professional + +Convert and load images +----------------------- + +Using the ``qemu-img`` utility, disk images in several formats (such as, +qcow2) can be converted to the VMDK format. + +For example, the following command can be used to convert a `qcow2 Ubuntu +Trusty cloud image `_: + +.. code-block:: console + + $ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \ + -O vmdk trusty-server-cloudimg-amd64-disk1.vmdk + +VMDK disks converted through ``qemu-img`` are ``always`` monolithic sparse VMDK +disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty +image after the ``qemu-img`` conversion, the command to upload the VMDK disk +should be something like: + +.. code-block:: console + + $ openstack image create \ + --container-format bare --disk-format vmdk \ + --property vmware_disktype="sparse" \ + --property vmware_adaptertype="ide" \ + trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk + +Note that the ``vmware_disktype`` is set to ``sparse`` and the +``vmware_adaptertype`` is set to ``ide`` in the previous command. + +If the image did not come from the ``qemu-img`` utility, the +``vmware_disktype`` and ``vmware_adaptertype`` might be different. To +determine the image adapter type from an image file, use the following command +and look for the ``ddb.adapterType=`` line: + +.. code-block:: console + + $ head -20 + +Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the +following command uploads the VMDK disk: + +.. code-block:: console + + $ openstack image create \ + --disk-format vmdk \ + --container-format bare \ + --property vmware_adaptertype="lsiLogic" \ + --property vmware_disktype="preallocated" \ + --property vmware_ostype="ubuntu64Guest" \ + ubuntu-thick-scsi < ubuntuLTS-flat.vmdk + +Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a +virtual SCSI controller and likewise disks with one of the SCSI adapter types +(such as, busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to +the IDE controller. Therefore, as the previous examples show, it is important +to set the ``vmware_adaptertype`` property correctly. The default adapter type +is lsiLogic, which is SCSI, so you can omit the ``vmware_adaptertype`` property +if you are certain that the image adapter type is lsiLogic. 
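+
+As noted above, you can confirm the adapter type recorded in a VMDK
+descriptor before uploading it by searching for the ``ddb.adapterType``
+line. The file name and output below are illustrative only:
+
+.. code-block:: console
+
+   $ head -20 ubuntuLTS.vmdk | grep ddb.adapterType
+   ddb.adapterType = "lsiLogic"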
+ +Tag VMware images +----------------- + +In a mixed hypervisor environment, OpenStack Compute uses the +``hypervisor_type`` tag to match images to the correct hypervisor type. For +VMware images, set the hypervisor type to ``vmware``. Other valid hypervisor +types include: ``hyperv``, ``ironic``, ``lxc``, ``qemu``, ``uml``, and ``xen``. +Note that ``qemu`` is used for both QEMU and KVM hypervisor types. + +.. code-block:: console + + $ openstack image create \ + --disk-format vmdk \ + --container-format bare \ + --property vmware_adaptertype="lsiLogic" \ + --property vmware_disktype="preallocated" \ + --property hypervisor_type="vmware" \ + --property vmware_ostype="ubuntu64Guest" \ + ubuntu-thick-scsi < ubuntuLTS-flat.vmdk + +Optimize images +--------------- + +Monolithic Sparse disks are considerably faster to download but have the +overhead of an additional conversion step. When imported into ESX, sparse disks +get converted to VMFS flat thin provisioned disks. The download and conversion +steps only affect the first launched instance that uses the sparse disk image. +The converted disk image is cached, so subsequent instances that use this disk +image can simply use the cached version. + +To avoid the conversion step (at the cost of longer download times) consider +converting sparse disks to thin provisioned or preallocated disks before +loading them into the Image service. + +Use one of the following tools to pre-convert sparse disks. + +vSphere CLI tools + Sometimes called the remote CLI or rCLI. + + Assuming that the sparse disk is made available on a data store accessible by + an ESX host, the following command converts it to preallocated format: + + .. code-block:: console + + vmkfstools --server=ip_of_some_ESX_host -i \ + /vmfs/volumes/datastore1/sparse.vmdk \ + /vmfs/volumes/datastore1/converted.vmdk + + Note that the vifs tool from the same CLI package can be used to upload the + disk to be converted. The vifs tool can also be used to download the + converted disk if necessary. + +``vmkfstools`` directly on the ESX host + If the SSH service is enabled on an ESX host, the sparse disk can be uploaded + to the ESX data store through scp and the vmkfstools local to the ESX host + can use used to perform the conversion. After you log in to the host through + ssh, run this command: + + .. code-block:: console + + vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk + +``vmware-vdiskmanager`` + ``vmware-vdiskmanager`` is a utility that comes bundled with VMware Fusion + and VMware Workstation. The following example converts a sparse disk to + preallocated format: + + .. code-block:: console + + '/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk + +In the previous cases, the converted vmdk is actually a pair of files: + +* The descriptor file ``converted.vmdk``. + +* The actual virtual disk data file ``converted-flat.vmdk``. + +The file to be uploaded to the Image service is ``converted-flat.vmdk``. + +Image handling +-------------- + +The ESX hypervisor requires a copy of the VMDK file in order to boot up a +virtual machine. As a result, the vCenter OpenStack Compute driver must +download the VMDK via HTTP from the Image service to a data store that is +visible to the hypervisor. To optimize this process, the first time a VMDK file +is used, it gets cached in the data store. A cached image is stored in a +folder named after the image ID. 
Subsequent virtual machines that need the +VMDK use the cached version and don't have to copy the file again from the +Image service. + +Even with a cached VMDK, there is still a copy operation from the cache +location to the hypervisor file directory in the shared data store. To avoid +this copy, boot the image in linked_clone mode. To learn how to enable this +mode, see :ref:`vmware-config`. + +.. note:: + + You can also use the ``img_linked_clone`` property (or legacy property + ``vmware_linked_clone``) in the Image service to override the linked_clone + mode on a per-image basis. + + If spawning a virtual machine image from ISO with a VMDK disk, the image is + created and attached to the virtual machine as a blank disk. In that case + ``img_linked_clone`` property for the image is just ignored. + +If multiple compute nodes are running on the same host, or have a shared file +system, you can enable them to use the same cache folder on the back-end data +store. To configure this action, set the ``cache_prefix`` option in the +``nova.conf`` file. Its value stands for the name prefix of the folder where +cached images are stored. + +.. note:: + + This can take effect only if compute nodes are running on the same host, or + have a shared file system. + +You can automatically purge unused images after a specified period of time. To +configure this action, set these options in the ``DEFAULT`` section in the +``nova.conf`` file: + +``remove_unused_base_images`` + Set this option to ``True`` to specify that unused images should be removed + after the duration specified in the + ``remove_unused_original_minimum_age_seconds`` option. The default is + ``True``. + +``remove_unused_original_minimum_age_seconds`` + Specifies the duration in seconds after which an unused image is purged from + the cache. The default is ``86400`` (24 hours). + +.. _vmware-networking: + +Networking with VMware vSphere +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The VMware driver supports networking with the ``nova-network`` service or the +Networking Service. Depending on your installation, complete these +configuration steps before you provision VMs: + +#. **The nova-network service with the FlatManager or FlatDHCPManager**. + Create a port group with the same name as the ``flat_network_bridge`` value + in the ``nova.conf`` file. The default value is ``br100``. If you specify + another value, the new value must be a valid Linux bridge identifier that + adheres to Linux bridge naming conventions. + + All VM NICs are attached to this port group. + + Ensure that the flat interface of the node that runs the ``nova-network`` + service has a path to this network. + + .. note:: + + When configuring the port binding for this port group in vCenter, specify + ``ephemeral`` for the port binding type. For more information, see + `Choosing a port binding type in ESX/ESXi `_ in the VMware Knowledge Base. + +#. **The nova-network service with the VlanManager**. + Set the ``vlan_interface`` configuration option to match the ESX host + interface that handles VLAN-tagged VM traffic. + + OpenStack Compute automatically creates the corresponding port groups. + +#. If you are using the OpenStack Networking Service: + Before provisioning VMs, create a port group with the same name as the + ``vmware.integration_bridge`` value in ``nova.conf`` (default is + ``br-int``). All VM NICs are attached to this port group for management by + the OpenStack Networking plug-in. 
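+
+As a summary of the options referenced in the steps above, the relevant
+``nova.conf`` settings might look like the following sketch. Only the
+settings that match your networking mode are needed, and the
+``vlan_interface`` value shown here is an example:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   # nova-network FlatManager/FlatDHCPManager: port group name must match this
+   flat_network_bridge = br100
+   # nova-network VlanManager: ESX interface that handles VLAN-tagged VM traffic
+   vlan_interface = vmnic0
+
+   [vmware]
+   # OpenStack Networking service: port group name must match this bridge
+   integration_bridge = br-int
+
+See :ref:`vmware-config` for details on when ``integration_bridge`` should be
+set explicitly.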
+ +Volumes with VMware vSphere +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The VMware driver supports attaching volumes from the Block Storage service. +The VMware VMDK driver for OpenStack Block Storage is recommended and should be +used for managing volumes based on vSphere data stores. For more information +about the VMware VMDK driver, see Cinder's manual on the VMDK Driver (TODO: +this has not yet been imported and published). Also an +iSCSI volume driver provides limited support and can be used only for +attachments. + +.. _vmware-config: + +Configuration reference +~~~~~~~~~~~~~~~~~~~~~~~ + +To customize the VMware driver, use the configuration option settings below. + +.. TODO(sdague): for the import we just copied this in from the auto generated + file. We probably need a strategy for doing equivalent autogeneration, but + we don't as of yet. + + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _nova-vmware: + +.. list-table:: Description of VMware configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[vmware]** + - + * - ``api_retry_count`` = ``10`` + - (Integer) Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc. + * - ``ca_file`` = ``None`` + - (String) Specifies the CA bundle file to be used in verifying the vCenter server certificate. + * - ``cache_prefix`` = ``None`` + - (String) This option adds a prefix to the folder where cached images are stored + + This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. + + Note: This should only be used when the compute nodes are running on same host or they have a shared file system. + + Possible values: + + * Any string representing the cache prefix to the folder + * - ``cluster_name`` = ``None`` + - (String) Name of a VMware Cluster ComputeResource. + * - ``console_delay_seconds`` = ``None`` + - (Integer) Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. + * - ``datastore_regex`` = ``None`` + - (String) Regular expression pattern to match the name of datastore. + + The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". + + NOTE: If no regex is given, it just picks the datastore with the most freespace. + + Possible values: + + * Any matching regular expression to a datastore must be given + * - ``host_ip`` = ``None`` + - (String) Hostname or IP address for connection to VMware vCenter host. + * - ``host_password`` = ``None`` + - (String) Password for connection to VMware vCenter host. + * - ``host_port`` = ``443`` + - (Port number) Port for connection to VMware vCenter host. + * - ``host_username`` = ``None`` + - (String) Username for connection to VMware vCenter host. + * - ``insecure`` = ``False`` + - (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. 
+ + Related options: + + * ca_file: This option is ignored if "ca_file" is set. + * - ``integration_bridge`` = ``None`` + - (String) This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. + + Possible values: + + * Any valid string representing the name of the integration bridge + * - ``maximum_objects`` = ``100`` + - (Integer) This option specifies the limit on the maximum number of objects to return in a single result. + + A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. + * - ``pbm_default_policy`` = ``None`` + - (String) This option specifies the default policy to be used. + + If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. + + Possible values: + + * Any valid storage policy such as VSAN default storage policy + + Related options: + + * pbm_enabled + * - ``pbm_enabled`` = ``False`` + - (Boolean) This option enables or disables storage policy based placement of instances. + + Related options: + + * pbm_default_policy + * - ``pbm_wsdl_location`` = ``None`` + - (String) This option specifies the PBM service WSDL file location URL. + + Setting this will disable storage policy based placement of instances. + + Possible values: + + * Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl + * - ``serial_port_proxy_uri`` = ``None`` + - (String) Identifies a proxy service that provides network access to the serial_port_service_uri. + + Possible values: + + * Any valid URI + + Related options: This option is ignored if serial_port_service_uri is not specified. + + * serial_port_service_uri + * - ``serial_port_service_uri`` = ``None`` + - (String) Identifies the remote system where the serial port traffic will be sent. + + This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. + + Possible values: + + * Any valid URI + * - ``task_poll_interval`` = ``0.5`` + - (Floating point) Time interval in seconds to poll remote tasks invoked on VMware VC server. + * - ``use_linked_clone`` = ``True`` + - (Boolean) This option enables/disables the use of linked clone. + + The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service. + + If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM. + * - ``wsdl_location`` = ``None`` + - (String) This option specifies VIM Service WSDL Location + + If vSphere API versions 5.1 and later is being used, this section can be ignored. 
If version is less than 5.1, WSDL files must be hosted locally and their location must be specified in the above section. + + Optional over-ride to default location for bug work-arounds. + + Possible values: + + * http:///vimService.wsdl + + * file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl diff --git a/doc/source/admin/configuration/hypervisor-xen-api.rst b/doc/source/admin/configuration/hypervisor-xen-api.rst new file mode 100644 index 000000000000..81588672fb4c --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-xen-api.rst @@ -0,0 +1,434 @@ +.. _compute_xen_api: + +============================================= +XenServer (and other XAPI based Xen variants) +============================================= + +This section describes XAPI managed hypervisors, and how to use them with +OpenStack. + +Terminology +~~~~~~~~~~~ + +Xen +--- + +A hypervisor that provides the fundamental isolation between virtual machines. +Xen is open source (GPLv2) and is managed by `XenProject.org +`_, a cross-industry organization and a Linux +Foundation Collaborative project. + +Xen is a component of many different products and projects. The hypervisor +itself is very similar across all these projects, but the way that it is +managed can be different, which can cause confusion if you're not clear which +toolstack you are using. Make sure you know what `toolstack +`_ you want before you get +started. If you want to use Xen with libvirt in OpenStack Compute refer to +:doc:`hypervisor-xen-libvirt`. + +XAPI +---- + +XAPI is one of the toolstacks that could control a Xen based hypervisor. +XAPI's role is similar to libvirt's in the KVM world. The API provided by XAPI +is called XenAPI. To learn more about the provided interface, look at `XenAPI +Object Model Overview `_ for definitions of XAPI +specific terms such as SR, VDI, VIF and PIF. + +OpenStack has a compute driver which talks to XAPI, therefore all XAPI managed +servers could be used with OpenStack. + +XenAPI +------ + +XenAPI is the API provided by XAPI. This name is also used by the python +library that is a client for XAPI. A set of packages to use XenAPI on existing +distributions can be built using the `xenserver/buildroot +`_ project. + +XenServer +--------- + +An Open Source virtualization platform that delivers all features needed for +any server and datacenter implementation including the Xen hypervisor and XAPI +for the management. For more information and product downloads, visit +`xenserver.org `_. + +XCP +--- + +XCP is not supported anymore. XCP project recommends all XCP users to upgrade +to the latest version of XenServer by visiting `xenserver.org +`_. + +Privileged and unprivileged domains +----------------------------------- + +A Xen host runs a number of virtual machines, VMs, or domains (the terms are +synonymous on Xen). One of these is in charge of running the rest of the +system, and is known as domain 0, or dom0. It is the first domain to boot after +Xen, and owns the storage and networking hardware, the device drivers, and the +primary control software. Any other VM is unprivileged, and is known as a domU +or guest. All customer VMs are unprivileged, but you should note that on +XenServer (and other XenAPI using hypervisors), the OpenStack Compute service +(``nova-compute``) also runs in a domU. This gives a level of security +isolation between the privileged system software and the OpenStack software +(much of which is customer-facing). This architecture is described in more +detail later. 
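+
+For example, listing the virtual machines on a XAPI-managed host shows both
+the control domain (dom0) and the unprivileged domU that hosts
+``nova-compute``. The names and formatting in this output are illustrative:
+
+.. code-block:: console
+
+   # xe vm-list params=name-label,power-state
+   name-label ( RW)    : Control domain on host: xenserver1
+   power-state ( RO)   : running
+
+   name-label ( RW)    : OpenStack compute domU
+   power-state ( RO)   : running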
+ +Paravirtualized versus hardware virtualized domains +--------------------------------------------------- + +A Xen virtual machine can be paravirtualized (PV) or hardware virtualized +(HVM). This refers to the interaction between Xen, domain 0, and the guest VM's +kernel. PV guests are aware of the fact that they are virtualized and will +co-operate with Xen and domain 0; this gives them better performance +characteristics. HVM guests are not aware of their environment, and the +hardware has to pretend that they are running on an unvirtualized machine. HVM +guests do not need to modify the guest operating system, which is essential +when running Windows. + +In OpenStack, customer VMs may run in either PV or HVM mode. However, the +OpenStack domU (that's the one running ``nova-compute``) must be running in PV +mode. + +XenAPI deployment architecture +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A basic OpenStack deployment on a XAPI-managed server, assuming that the +network provider is neutron network, looks like this: + +.. figure:: /figures/xenserver_architecture.png + :width: 100% + +Key things to note: + +* The hypervisor: Xen + +* Domain 0: runs XAPI and some small pieces from OpenStack, + the XAPI plug-ins. + +* OpenStack VM: The ``Compute`` service runs in a paravirtualized virtual + machine, on the host under management. Each host runs a local instance of + ``Compute``. It is also running neutron plugin-agent + (``neutron-openvswitch-agent``) to perform local vSwitch configuration. + +* OpenStack Compute uses the XenAPI Python library to talk to XAPI, and it uses + the Management Network to reach from the OpenStack VM to Domain 0. + +Some notes on the networking: + +* The above diagram assumes DHCP networking. + +* There are three main OpenStack networks: + + * Management network: RabbitMQ, MySQL, inter-host communication, and + compute-XAPI communication. Please note that the VM images are downloaded + by the XenAPI plug-ins, so make sure that the OpenStack Image service is + accessible through this network. It usually means binding those services to + the management interface. + + * Tenant network: controlled by neutron, this is used for tenant traffic. + + * Public network: floating IPs, public API endpoints. + +* The networks shown here must be connected to the corresponding physical + networks within the data center. In the simplest case, three individual + physical network cards could be used. It is also possible to use VLANs to + separate these networks. Please note, that the selected configuration must be + in line with the networking model selected for the cloud. (In case of VLAN + networking, the physical channels have to be able to forward the tagged + traffic.) + +* With the Networking service, you should enable Linux bridge in ``Dom0`` which + is used for Compute service. ``nova-compute`` will create Linux bridges for + security group and ``neutron-openvswitch-agent`` in Compute node will apply + security group rules on these Linux bridges. To implement this, you need to + remove ``/etc/modprobe.d/blacklist-bridge*`` in ``Dom0``. + +Further reading +~~~~~~~~~~~~~~~ + +Here are some of the resources available to learn more about Xen: + +* `Citrix XenServer official documentation + `_ +* `What is Xen? by XenProject.org + `_ +* `Xen Hypervisor project + `_ +* `Xapi project `_ +* `Further XenServer and OpenStack information + `_ + +Install XenServer +~~~~~~~~~~~~~~~~~ + +Before you can run OpenStack with XenServer, you must install the hypervisor on +`an appropriate server `_. 
+ +.. note:: + + Xen is a type 1 hypervisor: When your server starts, Xen is the first + software that runs. Consequently, you must install XenServer before you + install the operating system where you want to run OpenStack code. You then + install ``nova-compute`` into a dedicated virtual machine on the host. + +Use the following link to download XenServer's installation media: + +* http://xenserver.org/open-source-virtualization-download.html + +When you install many servers, you might find it easier to perform `PXE boot +installations `_. You can also package any +post-installation changes that you want to make to your XenServer by following +the instructions of `creating your own XenServer supplemental pack +`_. + +.. important:: + + Make sure you use the EXT type of storage repository (SR). Features that + require access to VHD files (such as copy on write, snapshot and migration) + do not work when you use the LVM SR. Storage repository (SR) is a + XAPI-specific term relating to the physical storage where virtual disks are + stored. + + On the XenServer installation screen, choose the :guilabel:`XenDesktop + Optimized` option. If you use an answer file, make sure you use + ``srtype="ext"`` in the ``installation`` tag of the answer file. + +Post-installation steps +~~~~~~~~~~~~~~~~~~~~~~~ + +The following steps need to be completed after the hypervisor's installation: + +#. For resize and migrate functionality, enable password-less SSH + authentication and set up the ``/images`` directory on dom0. + +#. Install the XAPI plug-ins. + +#. To support AMI type images, you must set up ``/boot/guest`` + symlink/directory in dom0. + +#. Create a paravirtualized virtual machine that can run ``nova-compute``. + +#. Install and configure ``nova-compute`` in the above virtual machine. + +Install XAPI plug-ins +--------------------- + +When you use a XAPI managed hypervisor, you can install a Python script (or any +executable) on the host side, and execute that through XenAPI. These scripts +are called plug-ins. The OpenStack related XAPI plug-ins live in OpenStack +os-xenapi code repository. These plug-ins have to be copied to dom0's +filesystem, to the appropriate directory, where XAPI can find them. It is +important to ensure that the version of the plug-ins are in line with the +OpenStack Compute installation you are using. + +The plugins should typically be copied from the Nova installation running in +the Compute's DomU (``pip show os-xenapi`` to find its location), but if you +want to download the latest version the following procedure can be used. + +**Manually installing the plug-ins** + +#. Create temporary files/directories: + + .. code-block:: console + + $ OS_XENAPI_TARBALL=$(mktemp) + $ OS_XENAPI_SOURCES=$(mktemp -d) + +#. Get the source from the openstack.org archives. The example assumes the + latest release is used, and the XenServer host is accessible as xenserver. + Match those parameters to your setup. + + .. code-block:: console + + $ OS_XENAPI_URL=https://tarballs.openstack.org/os-xenapi/os-xenapi-0.1.1.tar.gz + $ wget -qO "$OS_XENAPI_TARBALL" "$OS_XENAPI_URL" + $ tar xvf "$OS_XENAPI_TARBALL" -d "$OS_XENAPI_SOURCES" + +#. Copy the plug-ins to the hypervisor: + + .. code-block:: console + + $ PLUGINPATH=$(find $OS_XENAPI_SOURCES -path '*/xapi.d/plugins' -type d -print) + $ tar -czf - -C "$PLUGINPATH" ./ | + > ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins + +#. Remove temporary files/directories: + + .. 
code-block:: console + + $ rm "$OS_XENAPI_TARBALL" + $ rm -rf "$OS_XENAPI_SOURCES" + +Prepare for AMI type images +--------------------------- + +To support AMI type images in your OpenStack installation, you must create the +``/boot/guest`` directory on dom0. One of the OpenStack XAPI plugins will +extract the kernel and ramdisk from AKI and ARI images and put them to that +directory. + +OpenStack maintains the contents of this directory and its size should not +increase during normal operation. However, in case of power failures or +accidental shutdowns, some files might be left over. To prevent these files +from filling up dom0's filesystem, set up this directory as a symlink that +points to a subdirectory of the local SR. + +Run these commands in dom0 to achieve this setup: + +.. code-block:: console + + # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal) + # LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels" + # mkdir -p "$LOCALPATH" + # ln -s "$LOCALPATH" /boot/guest + +Modify dom0 for resize/migration support +---------------------------------------- + +To resize servers with XenServer you must: + +* Establish a root trust between all hypervisor nodes of your deployment: + + To do so, generate an ssh key-pair with the :command:`ssh-keygen` command. + Ensure that each of your dom0's ``authorized_keys`` file (located in + ``/root/.ssh/authorized_keys``) contains the public key fingerprint (located + in ``/root/.ssh/id_rsa.pub``). + +* Provide a ``/images`` mount point to the dom0 for your hypervisor: + + dom0 space is at a premium so creating a directory in dom0 is potentially + dangerous and likely to fail especially when you resize large servers. The + least you can do is to symlink ``/images`` to your local storage SR. The + following instructions work for an English-based installation of XenServer + and in the case of ext3-based SR (with which the resize functionality is + known to work correctly). + + .. code-block:: console + + # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal) + # IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images" + # mkdir -p "$IMG_DIR" + # ln -s "$IMG_DIR" /images + +XenAPI configuration reference +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following section discusses some commonly changed options when using the +XenAPI driver. The table below provides a complete reference of all +configuration options available for configuring XAPI with OpenStack. + +The recommended way to use XAPI with OpenStack is through the XenAPI driver. +To enable the XenAPI driver, add the following configuration options to +``/etc/nova/nova.conf`` and restart ``OpenStack Compute``: + +.. code-block:: ini + + compute_driver = xenapi.XenAPIDriver + [xenserver] + connection_url = http://your_xenapi_management_ip_address + connection_username = root + connection_password = your_password + ovs_integration_bridge = br-int + vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver + +These connection details are used by OpenStack Compute service to contact your +hypervisor and are the same details you use to connect XenCenter, the XenServer +management console, to your XenServer node. + +.. note:: + + The ``connection_url`` is generally the management network IP + address of the XenServer. + +Networking configuration +------------------------ + +The Networking service in the Compute node is running +``neutron-openvswitch-agent``, this manages dom0's OVS. 
You can refer +Networking `openvswitch_agent.ini.sample `_ for details, however there are several specific +items to look out for. + +.. code-block:: ini + + [agent] + minimize_polling = False + root_helper_daemon = xenapi_root_helper + + [ovs] + of_listen_address = management_ip_address + ovsdb_connection = tcp:your_xenapi_management_ip_address:6640 + bridge_mappings = :, ... + integration_bridge = br-int + + [xenapi] + connection_url = http://your_xenapi_management_ip_address + connection_username = root + connection_password = your_pass_word + +.. note:: + + The ``ovsdb_connection`` is the connection string for the native OVSDB + backend, you need to enable port 6640 in dom0. + +Agent +----- + +The agent is a piece of software that runs on the instances, and communicates +with OpenStack. In case of the XenAPI driver, the agent communicates with +OpenStack through XenStore (see `the Xen Project Wiki +`_ for more information on XenStore). + +If you don't have the guest agent on your VMs, it takes a long time for +OpenStack Compute to detect that the VM has successfully started. Generally a +large timeout is required for Windows instances, but you may want to adjust: +``agent_version_timeout`` within the ``[xenserver]`` section. + +VNC proxy address +----------------- + +Assuming you are talking to XAPI through a management network, and XenServer is +on the address: 10.10.1.34 specify the same address for the vnc proxy address: +``vncserver_proxyclient_address=10.10.1.34`` + +Storage +------- + +You can specify which Storage Repository to use with nova by editing the +following flag. To use the local-storage setup by the default installer: + +.. code-block:: ini + + sr_matching_filter = "other-config:i18n-key=local-storage" + +Another alternative is to use the "default" storage (for example if you have +attached NFS or any other shared storage): + +.. code-block:: ini + + sr_matching_filter = "default-sr:true" + +Image upload in TGZ compressed format +------------------------------------- + +To start uploading ``tgz`` compressed raw disk images to the Image service, +configure ``xenapi_image_upload_handler`` by replacing ``GlanceStore`` with +``VdiThroughDevStore``. + +.. code-block:: ini + + xenapi_image_upload_handler=nova.virt.xenapi.image.vdi_through_dev.VdiThroughDevStore + +As opposed to: + +.. code-block:: ini + + xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore diff --git a/doc/source/admin/configuration/hypervisor-xen-libvirt.rst b/doc/source/admin/configuration/hypervisor-xen-libvirt.rst new file mode 100644 index 000000000000..2c28cf03d40f --- /dev/null +++ b/doc/source/admin/configuration/hypervisor-xen-libvirt.rst @@ -0,0 +1,249 @@ +=============== +Xen via libvirt +=============== + +OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be +integrated with OpenStack Compute via the `libvirt `_ +`toolstack `_ or via the `XAPI +`_ `toolstack +`_. This section describes how +to set up OpenStack Compute with Xen and libvirt. For information on how to +set up Xen with XAPI refer to :doc:`hypervisor-xen-api`. + +Installing Xen with libvirt +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +At this stage we recommend using the baseline that we use for the `Xen Project +OpenStack CI Loop +`_, which +contains the most recent stability fixes to both Xen and libvirt. + +`Xen 4.5.1 +`_ +(or newer) and `libvirt 1.2.15 `_ (or newer) +contain the minimum required OpenStack improvements for Xen. 
Although libvirt +1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary +Xen changes have also been backported to the Xen 4.4.3 stable branch. Please +check with the Linux and FreeBSD distros you are intending to use as `Dom 0 +`_, whether the relevant +version of Xen and libvirt are available as installable packages. + +The latest releases of Xen and libvirt packages that fulfil the above minimum +requirements for the various openSUSE distributions can always be found and +installed from the `Open Build Service +`_ Virtualization +project. To install these latest packages, add the Virtualization repository +to your software management stack and get the newest packages from there. More +information about the latest Xen and libvirt packages are available `here +`__ and `here +`__. + +Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen Package +**4.4.1-0ubuntu0.14.04.4** (Xen 4.4.1) and apply the patches outlined `here +`__. +You can also use the Ubuntu LTS 14.04 libvirt package **1.2.2 +libvirt_1.2.2-0ubuntu13.1.7** as baseline and update it to libvirt version +1.2.15, or 1.2.14 with the patches outlined `here +`__ +applied. Note that this will require rebuilding these packages partly from +source. + +For further information and latest developments, you may want to consult the +Xen Project's `mailing lists for OpenStack related issues and questions +`_. + +Configuring Xen with libvirt +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To enable Xen via libvirt, ensure the following options are set in +``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service. + +.. code-block:: ini + + compute_driver = libvirt.LibvirtDriver + + [libvirt] + virt_type = xen + +Additional configuration options +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Use the following as a guideline for configuring Xen for use in OpenStack: + +#. **Dom0 memory**: Set it between 1GB and 4GB by adding the following + parameter to the Xen Boot Options in the `grub.conf `_ file. + + .. code-block:: ini + + dom0_mem=1024M + + .. note:: + + The above memory limits are suggestions and should be based on the + available compute host resources. For large hosts that will run many + hundreds of instances, the suggested values may need to be higher. + + .. note:: + + The location of the grub.conf file depends on the host Linux distribution + that you are using. Please refer to the distro documentation for more + details (see `Dom 0 `_ for more resources). + +#. **Dom0 vcpus**: Set the virtual CPUs to 4 and employ CPU pinning by adding + the following parameters to the Xen Boot Options in the `grub.conf + `_ file. + + .. code-block:: ini + + dom0_max_vcpus=4 dom0_vcpus_pin + + .. note:: + + Note that the above virtual CPU limits are suggestions and should be + based on the available compute host resources. For large hosts, that will + run many hundred of instances, the suggested values may need to be + higher. + +#. **PV vs HVM guests**: A Xen virtual machine can be paravirtualized (PV) or + hardware virtualized (HVM). The virtualization mode determines the + interaction between Xen, Dom 0, and the guest VM's kernel. PV guests are + aware of the fact that they are virtualized and will co-operate with Xen and + Dom 0. The choice of virtualization mode determines performance + characteristics. For an overview of Xen virtualization modes, see `Xen Guest + Types `_. + + In OpenStack, customer VMs may run in either PV or HVM mode. 
The mode is a + property of the operating system image used by the VM, and is changed by + adjusting the image metadata stored in the Image service. The image + metadata can be changed using the :command:`openstack` commands. + + To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use + :command:`openstack` to set the ``vm_mode`` property to ``hvm``. + + To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use one + of the following two commands: + + .. code-block:: console + + $ openstack image set --property vm_mode=hvm IMAGE + + To chose PV mode, which is supported by NetBSD, FreeBSD and Linux, use one + of the following two commands + + .. code-block:: console + + $ openstack image set --property vm_mode=xen IMAGE + + .. note:: + + The default for virtualization mode in nova is PV mode. + +#. **Image formats**: Xen supports raw, qcow2 and vhd image formats. For more + information on image formats, refer to the `OpenStack Virtual Image Guide + `__ and the + `Storage Options Guide on the Xen Project Wiki + `_. + +#. **Image metadata**: In addition to the ``vm_mode`` property discussed above, + the ``hypervisor_type`` property is another important component of the image + metadata, especially if your cloud contains mixed hypervisor compute nodes. + Setting the ``hypervisor_type`` property allows the nova scheduler to select + a compute node running the specified hypervisor when launching instances of + the image. Image metadata such as ``vm_mode``, ``hypervisor_type``, + architecture, and others can be set when importing the image to the Image + service. The metadata can also be changed using the :command:`openstack` + commands: + + .. code-block:: console + + $ openstack image set --property hypervisor_type=xen vm_mode=hvm IMAGE + + For more more information on image metadata, refer to the `OpenStack Virtual + Image Guide `__. + +#. **Libguestfs file injection**: OpenStack compute nodes can use `libguestfs + `_ to inject files into an instance's image prior to + launching the instance. libguestfs uses libvirt's QEMU driver to start a + qemu process, which is then used to inject files into the image. When using + libguestfs for file injection, the compute node must have the libvirt qemu + driver installed, in addition to the Xen driver. In RPM based distributions, + the qemu driver is provided by the ``libvirt-daemon-qemu`` package. In + Debian and Ubuntu, the qemu driver is provided by the ``libvirt-bin`` + package. + +Troubleshoot Xen with libvirt +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +**Important log files**: When an instance fails to start, or when you come +across other issues, you should first consult the following log files: + +* ``/var/log/nova/nova-compute.log`` + +* ``/var/log/libvirt/libxl/libxl-driver.log``, + +* ``/var/log/xen/qemu-dm-${instancename}.log``, + +* ``/var/log/xen/xen-hotplug.log``, + +* ``/var/log/xen/console/guest-${instancename}`` (to enable see `Enabling Guest + Console Logs + `_) + +* Host Console Logs (read `Enabling and Retrieving Host Console Logs + `_). + +If you need further help you can ask questions on the mailing lists `xen-users@ +`_, +`wg-openstack@ `_ or `raise a bug `_ against Xen. + +Known issues +~~~~~~~~~~~~ + +* **Networking**: Xen via libvirt is currently only supported with + nova-network. Fixes for a number of bugs are currently being worked on to + make sure that Xen via libvirt will also work with OpenStack Networking + (neutron). + + .. todo:: Is this still true? 
+ +* **Live migration**: Live migration is supported in the libvirt libxl driver + since version 1.2.5. However, there were a number of issues when used with + OpenStack, in particular with libvirt migration protocol compatibility. It is + worth mentioning that libvirt 1.3.0 addresses most of these issues. We do + however recommend using libvirt 1.3.2, which is fully supported and tested as + part of the Xen Project CI loop. It addresses live migration monitoring + related issues and adds support for peer-to-peer migration mode, which nova + relies on. + +* **Live migration monitoring**: On compute nodes running Kilo or later, live + migration monitoring relies on libvirt APIs that are only implemented from + libvirt version 1.3.1 onwards. When attempting to live migrate, the migration + monitoring thread would crash and leave the instance state as "MIGRATING". If + you experience such an issue and you are running on a version released before + libvirt 1.3.1, make sure you backport libvirt commits ad71665 and b7b4391 + from upstream. + +Additional information and resources +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following section contains links to other useful resources. + +* `wiki.xenproject.org/wiki/OpenStack `_ - OpenStack Documentation on the Xen Project wiki + +* `wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt + `_ - + Information about the Xen Project OpenStack CI Loop + +* `wiki.xenproject.org/wiki/OpenStack_via_DevStack + `_ - How to set up + OpenStack via DevStack + +* `Mailing lists for OpenStack related issues and questions + `_ - This + list is dedicated to coordinating bug fixes and issues across Xen, libvirt + and OpenStack and the CI loop. diff --git a/doc/source/admin/configuration/hypervisors.rst b/doc/source/admin/configuration/hypervisors.rst new file mode 100644 index 000000000000..79b7e3a5df62 --- /dev/null +++ b/doc/source/admin/configuration/hypervisors.rst @@ -0,0 +1,63 @@ +=========== +Hypervisors +=========== + +.. toctree:: + :maxdepth: 1 + + hypervisor-basics.rst + hypervisor-kvm.rst + hypervisor-qemu.rst + hypervisor-xen-api.rst + hypervisor-xen-libvirt.rst + hypervisor-lxc.rst + hypervisor-vmware.rst + hypervisor-hyper-v.rst + hypervisor-virtuozzo.rst + +OpenStack Compute supports many hypervisors, which might make it difficult for +you to choose one. Most installations use only one hypervisor. However, you +can use :ref:`ComputeFilter` and :ref:`ImagePropertiesFilter` to schedule +different hypervisors within the same installation. The following links help +you choose a hypervisor. See :doc:`/user/support-matrix` for a detailed list +of features and support across the hypervisors. + +The following hypervisors are supported: + +* `KVM`_ - Kernel-based Virtual Machine. The virtual disk formats that it + supports is inherited from QEMU since it uses a modified QEMU program to + launch the virtual machine. The supported formats include raw images, the + qcow2, and VMware formats. + +* `LXC`_ - Linux Containers (through libvirt), used to run Linux-based virtual + machines. + +* `QEMU`_ - Quick EMUlator, generally only used for development purposes. + +* `VMware vSphere`_ 5.1.0 and newer - Runs VMware-based Linux and Windows + images through a connection with a vCenter server. + +* `Xen (using libvirt) `_ - Xen Project Hypervisor using libvirt as + management interface into ``nova-compute`` to run Linux, Windows, FreeBSD and + NetBSD virtual machines. 
+ +* `XenServer`_ - XenServer, Xen Cloud Platform (XCP) and other XAPI based Xen + variants runs Linux or Windows virtual machines. You must install the + ``nova-compute`` service in a para-virtualized VM. + +* `Hyper-V`_ - Server virtualization with Microsoft Hyper-V, use to run + Windows, Linux, and FreeBSD virtual machines. Runs ``nova-compute`` natively + on the Windows virtualization platform. + +* `Virtuozzo`_ 7.0.0 and newer - OS Containers and Kernel-based Virtual + Machines supported via libvirt virt_type=parallels. The supported formats + include ploop and qcow2 images. + +.. _KVM: http://www.linux-kvm.org/page/Main_Page +.. _LXC: https://linuxcontainers.org/ +.. _QEMU: http://wiki.qemu.org/Manual +.. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor +.. _Xen: (using libvirt) ,,, + iser iser,,,, + bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,, + + The output is in the format:: + + iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname + +* Individual iface configuration can be viewed via + + .. code-block:: console + + # iscsiadm -m iface -I IFACE_NAME + # BEGIN RECORD 2.0-873 + iface.iscsi_ifacename = cxgb4i.00:07:43:28:b2:58 + iface.net_ifacename = + iface.ipaddress = 102.50.50.80 + iface.hwaddress = 00:07:43:28:b2:58 + iface.transport_name = cxgb4i + iface.initiatorname = + # END RECORD + + Configuration can be updated as desired via + + .. code-block:: console + + # iscsiadm -m iface-I IFACE_NAME--op=update -n iface.SETTING -v VALUE + +* All iface configurations need a minimum of ``iface.iface_name``, + ``iface.transport_name`` and ``iface.hwaddress`` to be correctly configured + to work. Some transports may require ``iface.ipaddress`` and + ``iface.net_ifacename`` as well to bind correctly. + + Detailed configuration instructions can be found at + http://www.open-iscsi.org/docs/README. diff --git a/doc/source/admin/configuration/logs.rst b/doc/source/admin/configuration/logs.rst new file mode 100644 index 000000000000..396eab9b88cf --- /dev/null +++ b/doc/source/admin/configuration/logs.rst @@ -0,0 +1,47 @@ +================= +Compute log files +================= + +The corresponding log file of each Compute service is stored in the +``/var/log/nova/`` directory of the host on which each service runs. + +.. list-table:: Log files used by Compute services + :widths: 35 35 30 + :header-rows: 1 + + * - Log file + - Service name (CentOS/Fedora/openSUSE/Red Hat Enterprise + Linux/SUSE Linux Enterprise) + - Service name (Ubuntu/Debian) + * - ``nova-api.log`` + - ``openstack-nova-api`` + - ``nova-api`` + * - ``nova-cert.log`` [#a]_ + - ``openstack-nova-cert`` + - ``nova-cert`` + * - ``nova-compute.log`` + - ``openstack-nova-compute`` + - ``nova-compute`` + * - ``nova-conductor.log`` + - ``openstack-nova-conductor`` + - ``nova-conductor`` + * - ``nova-consoleauth.log`` + - ``openstack-nova-consoleauth`` + - ``nova-consoleauth`` + * - ``nova-network.log`` [#b]_ + - ``openstack-nova-network`` + - ``nova-network`` + * - ``nova-manage.log`` + - ``nova-manage`` + - ``nova-manage`` + * - ``nova-scheduler.log`` + - ``openstack-nova-scheduler`` + - ``nova-scheduler`` + +.. rubric:: Footnotes + +.. [#a] The X509 certificate service (``openstack-nova-cert``/``nova-cert``) + is only required by the EC2 API to the Compute service. +.. [#b] The ``nova`` network service (``openstack-nova-network``/ + ``nova-network``) only runs in deployments that are not configured + to use the Networking service (``neutron``). 
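+
+For example, to follow the Compute agent's log on a host, using the directory
+and file names listed above:
+
+.. code-block:: console
+
+   # tail -f /var/log/nova/nova-compute.log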
diff --git a/doc/source/admin/configuration/resize.rst b/doc/source/admin/configuration/resize.rst new file mode 100644 index 000000000000..fd787af6491c --- /dev/null +++ b/doc/source/admin/configuration/resize.rst @@ -0,0 +1,26 @@ +================ +Configure resize +================ + +Resize (or Server resize) is the ability to change the flavor of a server, thus +allowing it to upscale or downscale according to user needs. For this feature +to work properly, you might need to configure some underlying virt layers. + +KVM +~~~ + +Resize on KVM is implemented currently by transferring the images between +compute nodes over ssh. For KVM you need hostnames to resolve properly and +passwordless ssh access between your compute hosts. Direct access from one +compute host to another is needed to copy the VM file across. + +Cloud end users can find out how to resize a server by reading the `OpenStack +End User Guide `_. + +XenServer +~~~~~~~~~ + +To get resize to work with XenServer (and XCP), you need to establish a root +trust between all hypervisor nodes and provide an ``/image`` mount point to +your hypervisors dom0. diff --git a/doc/source/admin/configuration/samples/api-paste.ini.rst b/doc/source/admin/configuration/samples/api-paste.ini.rst new file mode 100644 index 000000000000..b702904eef1f --- /dev/null +++ b/doc/source/admin/configuration/samples/api-paste.ini.rst @@ -0,0 +1,8 @@ +============= +api-paste.ini +============= + +The Compute service stores its API configuration settings in the +``api-paste.ini`` file. + +.. literalinclude:: /../../etc/nova/api-paste.ini diff --git a/doc/source/admin/configuration/samples/index.rst b/doc/source/admin/configuration/samples/index.rst new file mode 100644 index 000000000000..6db5a16a4882 --- /dev/null +++ b/doc/source/admin/configuration/samples/index.rst @@ -0,0 +1,12 @@ +========================================== +Compute service sample configuration files +========================================== + +Files in this section can be found in ``/etc/nova``. + +.. toctree:: + :maxdepth: 2 + + api-paste.ini.rst + policy.yaml.rst + rootwrap.conf.rst diff --git a/doc/source/admin/configuration/samples/policy.yaml.rst b/doc/source/admin/configuration/samples/policy.yaml.rst new file mode 100644 index 000000000000..7c195a9bb3d2 --- /dev/null +++ b/doc/source/admin/configuration/samples/policy.yaml.rst @@ -0,0 +1,9 @@ +=========== +policy.yaml +=========== + +The ``policy.yaml`` file defines additional access controls +that apply to the Compute service. + +.. literalinclude:: /_static/nova.policy.yaml.sample + :language: yaml diff --git a/doc/source/admin/configuration/samples/rootwrap.conf.rst b/doc/source/admin/configuration/samples/rootwrap.conf.rst new file mode 100644 index 000000000000..82975b824916 --- /dev/null +++ b/doc/source/admin/configuration/samples/rootwrap.conf.rst @@ -0,0 +1,13 @@ +============= +rootwrap.conf +============= + +The ``rootwrap.conf`` file defines configuration values +used by the rootwrap script when the Compute service needs +to escalate its privileges to those of the root user. + +It is also possible to disable the root wrapper, and default +to sudo only. Configure the ``disable_rootwrap`` option in the +``[workaround]`` section of the ``nova.conf`` configuration file. + +.. 
literalinclude:: /../../etc/nova/rootwrap.conf diff --git a/doc/source/admin/configuration/schedulers.rst b/doc/source/admin/configuration/schedulers.rst new file mode 100644 index 000000000000..1a559cc035de --- /dev/null +++ b/doc/source/admin/configuration/schedulers.rst @@ -0,0 +1,1198 @@ +================== +Compute schedulers +================== + +Compute uses the ``nova-scheduler`` service to determine how to dispatch +compute requests. For example, the ``nova-scheduler`` service determines on +which host a VM should launch. In the context of filters, the term ``host`` +means a physical node that has a ``nova-compute`` service running on it. You +can configure the scheduler through a variety of options. + +Compute is configured with the following default scheduler options in the +``/etc/nova/nova.conf`` file: + +.. code-block:: ini + + scheduler_driver_task_period = 60 + scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler + scheduler_available_filters = nova.scheduler.filters.all_filters + scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter + +By default, the ``scheduler_driver`` is configured as a filter scheduler, as +described in the next section. In the default configuration, this scheduler +considers hosts that meet all the following criteria: + +* Have not been attempted for scheduling purposes (``RetryFilter``). + +* Are in the requested availability zone (``AvailabilityZoneFilter``). + +* Have sufficient RAM available (``RamFilter``). + +* Have sufficient disk space available for root and ephemeral storage + (``DiskFilter``). + +* Can service the request (``ComputeFilter``). + +* Satisfy the extra specs associated with the instance type + (``ComputeCapabilitiesFilter``). + +* Satisfy any architecture, hypervisor type, or virtual machine mode properties + specified on the instance's image properties (``ImagePropertiesFilter``). + +* Are on a different host than other instances of a group (if requested) + (``ServerGroupAntiAffinityFilter``). + +* Are in a set of group hosts (if requested) (``ServerGroupAffinityFilter``). + +The scheduler caches its list of available hosts; use the +``scheduler_driver_task_period`` option to specify how often the list is +updated. + +.. note:: + + Do not configure ``service_down_time`` to be much smaller than + ``scheduler_driver_task_period``; otherwise, hosts appear to be dead while + the host list is being cached. + +For information about the volume scheduler, see the `Block Storage section +`_ of +OpenStack Administrator Guide. + +The scheduler chooses a new host when an instance is migrated. + +When evacuating instances from a host, the scheduler service honors the target +host defined by the administrator on the :command:`nova evacuate` command. If +a target is not defined by the administrator, the scheduler determines the +target host. For information about instance evacuation, see `Evacuate instances +`_ section of the OpenStack +Administrator Guide. + +.. _compute-scheduler-filters: + +Filter scheduler +~~~~~~~~~~~~~~~~ + +The filter scheduler (``nova.scheduler.filter_scheduler.FilterScheduler``) is +the default scheduler for scheduling virtual machine instances. It supports +filtering and weighting to make informed decisions on where a new instance +should be created. 
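+
+The filtering step described below is implemented by small Python plugins. As
+an illustration, a minimal custom filter might look like the following sketch
+(the module name ``myfilter`` and the RAM threshold are illustrative
+assumptions, not a shipped filter; the ``BaseHostFilter`` base class and the
+``host_passes`` hook are provided by the scheduler filter framework, and
+custom filters are enabled through the ``scheduler_available_filters`` option
+covered below):
+
+.. code-block:: python
+
+   from nova.scheduler import filters
+
+
+   class MyFilter(filters.BaseHostFilter):
+       """Accept only hosts that still have at least 1 GB of free RAM."""
+
+       def host_passes(self, host_state, spec_obj):
+           # Filters are binary: return True to keep the host for weighing,
+           # or False to reject it for this request.
+           return host_state.free_ram_mb >= 1024
+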
+ +When the filter scheduler receives a request for a resource, it first applies +filters to determine which hosts are eligible for consideration when +dispatching a resource. Filters are binary: either a host is accepted by the +filter, or it is rejected. Hosts that are accepted by the filter are then +processed by a different algorithm to decide which hosts to use for that +request, described in the :ref:`weights` section. + +**Filtering** + +.. figure:: /figures/filteringWorkflow1.png + +The ``scheduler_available_filters`` configuration option in ``nova.conf`` +provides the Compute service with the list of the filters that are used by the +scheduler. The default setting specifies all of the filter that are included +with the Compute service: + +.. code-block:: ini + + scheduler_available_filters = nova.scheduler.filters.all_filters + +This configuration option can be specified multiple times. For example, if you +implemented your own custom filter in Python called ``myfilter.MyFilter`` and +you wanted to use both the built-in filters and your custom filter, your +``nova.conf`` file would contain: + +.. code-block:: ini + + scheduler_available_filters = nova.scheduler.filters.all_filters + scheduler_available_filters = myfilter.MyFilter + +The ``scheduler_default_filters`` configuration option in ``nova.conf`` defines +the list of filters that are applied by the ``nova-scheduler`` service. The +default filters are: + +.. code-block:: ini + + scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter + +Compute filters +~~~~~~~~~~~~~~~ + +The following sections describe the available compute filters. + +AggregateCoreFilter +------------------- + +Filters host by CPU core numbers with a per-aggregate ``cpu_allocation_ratio`` +value. If the per-aggregate value is not found, the value falls back to the +global setting. If the host is in more than one aggregate and more than one +value is found, the minimum value will be used. For information about how to +use this filter, see :ref:`host-aggregates`. See also :ref:`CoreFilter`. + +AggregateDiskFilter +------------------- + +Filters host by disk allocation with a per-aggregate ``disk_allocation_ratio`` +value. If the per-aggregate value is not found, the value falls back to the +global setting. If the host is in more than one aggregate and more than one +value is found, the minimum value will be used. For information about how to +use this filter, see :ref:`host-aggregates`. See also :ref:`DiskFilter`. + +AggregateImagePropertiesIsolation +--------------------------------- + +Matches properties defined in an image's metadata against those of aggregates +to determine host matches: + +* If a host belongs to an aggregate and the aggregate defines one or more + metadata that matches an image's properties, that host is a candidate to boot + the image's instance. + +* If a host does not belong to any aggregate, it can boot instances from all + images. + +For example, the following aggregate ``myWinAgg`` has the Windows operating +system as metadata (named 'windows'): + +.. 
code-block:: console + + $ openstack aggregate show MyWinAgg + +-------------------+----------------------------+ + | Field | Value | + +-------------------+----------------------------+ + | availability_zone | zone1 | + | created_at | 2017-01-01T15:36:44.000000 | + | deleted | False | + | deleted_at | None | + | hosts | [u'sf-devel'] | + | id | 1 | + | name | test | + | properties | | + | updated_at | None | + +-------------------+----------------------------+ + +In this example, because the following Win-2012 image has the ``windows`` +property, it boots on the ``sf-devel`` host (all other filters being equal): + +.. code-block:: console + + $ openstack image show Win-2012 + +------------------+------------------------------------------------------+ + | Field | Value | + +------------------+------------------------------------------------------+ + | checksum | ee1eca47dc88f4879d8a229cc70a07c6 | + | container_format | bare | + | created_at | 2016-12-13T09:30:30Z | + | disk_format | qcow2 | + | ... + +You can configure the ``AggregateImagePropertiesIsolation`` filter by using the +following options in the ``nova.conf`` file: + +.. code-block:: ini + + # Considers only keys matching the given namespace (string). + # Multiple values can be given, as a comma-separated list. + aggregate_image_properties_isolation_namespace = + + # Separator used between the namespace and keys (string). + aggregate_image_properties_isolation_separator = . + +.. _AggregateInstanceExtraSpecsFilter: + +AggregateInstanceExtraSpecsFilter +--------------------------------- + +Matches properties defined in extra specs for an instance type against +admin-defined properties on a host aggregate. Works with specifications that +are scoped with ``aggregate_instance_extra_specs``. Multiple values can be +given, as a comma-separated list. For backward compatibility, also works with +non-scoped specifications; this action is highly discouraged because it +conflicts with :ref:`ComputeCapabilitiesFilter` filter when you enable both +filters. For information about how to use this filter, see the +:ref:`host-aggregates` section. + +AggregateIoOpsFilter +-------------------- + +Filters host by disk allocation with a per-aggregate ``max_io_ops_per_host`` +value. If the per-aggregate value is not found, the value falls back to the +global setting. If the host is in more than one aggregate and more than one +value is found, the minimum value will be used. For information about how to +use this filter, see :ref:`host-aggregates`. See also :ref:`IoOpsFilter`. + +AggregateMultiTenancyIsolation +------------------------------ + +Ensures that the tenant (or list of tenants) creates all instances only on +specific :ref:`host-aggregates`. If a host is in an aggregate that has the +``filter_tenant_id`` metadata key, the host creates instances from only that +tenant or list of tenants. A host can be in different aggregates. If a host +does not belong to an aggregate with the metadata key, the host can create +instances from all tenants. This setting does not isolate the aggregate from +other tenants. Any other tenant can continue to build instances on the +specified aggregate. + +AggregateNumInstancesFilter +--------------------------- + +Filters host by number of instances with a per-aggregate +``max_instances_per_host`` value. If the per-aggregate value is not found, the +value falls back to the global setting. If the host is in more than one +aggregate and thus more than one value is found, the minimum value will be +used. 
For information about how to use this filter, see +:ref:`host-aggregates`. See also :ref:`NumInstancesFilter`. + +AggregateRamFilter +------------------ + +Filters host by RAM allocation of instances with a per-aggregate +``ram_allocation_ratio`` value. If the per-aggregate value is not found, the +value falls back to the global setting. If the host is in more than one +aggregate and thus more than one value is found, the minimum value will be +used. For information about how to use this filter, see +:ref:`host-aggregates`. See also :ref:`ramfilter`. + +AggregateTypeAffinityFilter +--------------------------- + +This filter passes hosts if no ``instance_type`` key is set or the +``instance_type`` aggregate metadata value contains the name of the +``instance_type`` requested. The value of the ``instance_type`` metadata entry +is a string that may contain either a single ``instance_type`` name or a +comma-separated list of ``instance_type`` names, such as ``m1.nano`` or +``m1.nano,m1.small``. For information about how to use this filter, see +:ref:`host-aggregates`. See also :ref:`TypeAffinityFilter`. + +AllHostsFilter +-------------- + +This is a no-op filter. It does not eliminate any of the available hosts. + +AvailabilityZoneFilter +---------------------- + +Filters hosts by availability zone. You must enable this filter for the +scheduler to respect availability zones in requests. + +.. _ComputeCapabilitiesFilter: + +ComputeCapabilitiesFilter +------------------------- + +Matches properties defined in extra specs for an instance type against compute +capabilities. If an extra specs key contains a colon (``:``), anything before +the colon is treated as a namespace and anything after the colon is treated as +the key to be matched. If a namespace is present and is not ``capabilities``, +the filter ignores the namespace. For backward compatibility, also treats the +extra specs key as the key to be matched if no namespace is present; this +action is highly discouraged because it conflicts with +:ref:`AggregateInstanceExtraSpecsFilter` filter when you enable both filters. + +.. _ComputeFilter: + +ComputeFilter +------------- + +Passes all hosts that are operational and enabled. + +In general, you should always enable this filter. + +.. _CoreFilter: + +CoreFilter +---------- + +Only schedules instances on hosts if sufficient CPU cores are available. If +this filter is not set, the scheduler might over-provision a host based on +cores. For example, the virtual cores running on an instance may exceed the +physical cores. + +You can configure this filter to enable a fixed amount of vCPU overcommitment +by using the ``cpu_allocation_ratio`` configuration option in ``nova.conf``. +The default setting is: + +.. code-block:: ini + + cpu_allocation_ratio = 16.0 + +With this setting, if 8 vCPUs are on a node, the scheduler allows instances up +to 128 vCPU to be run on that node. + +To disallow vCPU overcommitment set: + +.. code-block:: ini + + cpu_allocation_ratio = 1.0 + +.. note:: + + The Compute API always returns the actual number of CPU cores available on a + compute node regardless of the value of the ``cpu_allocation_ratio`` + configuration key. As a result changes to the ``cpu_allocation_ratio`` are + not reflected via the command line clients or the dashboard. Changes to + this configuration key are only taken into account internally in the + scheduler. + +DifferentHostFilter +------------------- + +Schedules the instance on a different host from a set of instances. 
To take +advantage of this filter, the requester must pass a scheduler hint, using +``different_host`` as the key and a list of instance UUIDs as the value. This +filter is the opposite of the ``SameHostFilter``. Using the +:command:`openstack server create` command, use the ``--hint`` flag. For +example: + +.. code-block:: console + + $ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \ + --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \ + --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 + +With the API, use the ``os:scheduler_hints`` key. For example: + +.. code-block:: json + + { + "server": { + "name": "server-1", + "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef": "1" + }, + "os:scheduler_hints": { + "different_host": [ + "a0cf03a5-d921-4877-bb5c-86d26cf818e1", + "8c19174f-4220-44f0-824a-cd1eeef10287" + ] + } + } + +.. _DiskFilter: + +DiskFilter +---------- + +Only schedules instances on hosts if there is sufficient disk space available +for root and ephemeral storage. + +You can configure this filter to enable a fixed amount of disk overcommitment +by using the ``disk_allocation_ratio`` configuration option in the +``nova.conf`` configuration file. The default setting disables the possibility +of the overcommitment and allows launching a VM only if there is a sufficient +amount of disk space available on a host: + +.. code-block:: ini + + disk_allocation_ratio = 1.0 + +DiskFilter always considers the value of the ``disk_available_least`` property +and not the one of the ``free_disk_gb`` property of a hypervisor's statistics: + +.. code-block:: console + + $ openstack hypervisor stats show + +----------------------+-------+ + | Field | Value | + +----------------------+-------+ + | count | 1 | + | current_workload | 0 | + | disk_available_least | 14 | + | free_disk_gb | 27 | + | free_ram_mb | 15374 | + | local_gb | 27 | + | local_gb_used | 0 | + | memory_mb | 15886 | + | memory_mb_used | 512 | + | running_vms | 0 | + | vcpus | 8 | + | vcpus_used | 0 | + +----------------------+-------+ + +As it can be viewed from the command output above, the amount of the available +disk space can be less than the amount of the free disk space. It happens +because the ``disk_available_least`` property accounts for the virtual size +rather than the actual size of images. If you use an image format that is +sparse or copy on write so that each virtual instance does not require a 1:1 +allocation of a virtual disk to a physical storage, it may be useful to allow +the overcommitment of disk space. + +To enable scheduling instances while overcommitting disk resources on the node, +adjust the value of the ``disk_allocation_ratio`` configuration option to +greater than ``1.0``: + +.. code-block:: none + + disk_allocation_ratio > 1.0 + +.. note:: + + If the value is set to ``>1``, we recommend keeping track of the free disk + space, as the value approaching ``0`` may result in the incorrect + functioning of instances using it at the moment. + +ExactCoreFilter +--------------- + +Only schedules instances on hosts if host has the exact number of CPU cores. + +ExactDiskFilter +--------------- + +Only schedules instances on hosts if host has the exact amount of disk +available. + +ExactRamFilter +-------------- + +Only schedules instances on hosts if host has the exact number of RAM +available. + +.. 
_ImagePropertiesFilter: + +ImagePropertiesFilter +--------------------- + +Filters hosts based on properties defined on the instance's image. It passes +hosts that can support the specified image properties contained in the +instance. Properties include the architecture, hypervisor type, hypervisor +version (for Xen hypervisor type only), and virtual machine mode. + +For example, an instance might require a host that runs an ARM-based processor, +and QEMU as the hypervisor. You can decorate an image with these properties by +using: + +.. code-block:: console + + $ openstack image set --architecture arm --property hypervisor_type=qemu \ + img-uuid + +The image properties that the filter checks for are: + +``architecture`` + describes the machine architecture required by the image. Examples are + ``i686``, ``x86_64``, ``arm``, and ``ppc64``. + +``hypervisor_type`` + describes the hypervisor required by the image. Examples are ``xen``, + ``qemu``, and ``xenapi``. + + .. note:: + + ``qemu`` is used for both QEMU and KVM hypervisor types. + +``hypervisor_version_requires`` + describes the hypervisor version required by the image. The property is + supported for Xen hypervisor type only. It can be used to enable support for + multiple hypervisor versions, and to prevent instances with newer Xen tools + from being provisioned on an older version of a hypervisor. If available, the + property value is compared to the hypervisor version of the compute host. + + To filter the hosts by the hypervisor version, add the + ``hypervisor_version_requires`` property on the image as metadata and pass an + operator and a required hypervisor version as its value: + + .. code-block:: console + + $ openstack image set --property hypervisor_type=xen --property \ + hypervisor_version_requires=">=4.3" img-uuid + +``vm_mode`` + describes the hypervisor application binary interface (ABI) required by the + image. Examples are ``xen`` for Xen 3.0 paravirtual ABI, ``hvm`` for native + ABI, ``uml`` for User Mode Linux paravirtual ABI, ``exe`` for container virt + executable ABI. + +IsolatedHostsFilter +------------------- + +Allows the admin to define a special (isolated) set of images and a special +(isolated) set of hosts, such that the isolated images can only run on the +isolated hosts, and the isolated hosts can only run isolated images. The flag +``restrict_isolated_hosts_to_isolated_images`` can be used to force isolated +hosts to only run isolated images. + +The admin must specify the isolated set of images and hosts in the +``nova.conf`` file using the ``isolated_hosts`` and ``isolated_images`` +configuration options. For example: + +.. code-block:: ini + + isolated_hosts = server1, server2 + isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09 + +.. _IoOpsFilter: + +IoOpsFilter +----------- + +The IoOpsFilter filters hosts by concurrent I/O operations on it. Hosts with +too many concurrent I/O operations will be filtered out. The +``max_io_ops_per_host`` option specifies the maximum number of I/O intensive +instances allowed to run on a host. A host will be ignored by the scheduler if +more than ``max_io_ops_per_host`` instances in build, resize, snapshot, +migrate, rescue or unshelve task states are running on it. + +JsonFilter +---------- + +The JsonFilter allows a user to construct a custom filter by passing a +scheduler hint in JSON format. 
The following operators are supported: + +* = +* < +* > +* in +* <= +* >= +* not +* or +* and + +The filter supports the following variables: + +* ``$free_ram_mb`` +* ``$free_disk_mb`` +* ``$total_usable_ram_mb`` +* ``$vcpus_total`` +* ``$vcpus_used`` + +Using the :command:`openstack server create` command, use the ``--hint`` flag: + +.. code-block:: console + + $ openstack server create --image 827d564a-e636-4fc4-a376-d36f7ebe1747 \ + --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1 + +With the API, use the ``os:scheduler_hints`` key: + +.. code-block:: json + + { + "server": { + "name": "server-1", + "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef": "1" + }, + "os:scheduler_hints": { + "query": "[>=,$free_ram_mb,1024]" + } + } + +MetricsFilter +------------- + +Filters hosts based on meters ``weight_setting``. Only hosts with the +available meters are passed so that the metrics weigher will not fail due to +these hosts. + +NUMATopologyFilter +------------------ + +Filters hosts based on the NUMA topology that was specified for the instance +through the use of flavor ``extra_specs`` in combination with the image +properties, as described in detail in the `related nova-spec document +`_. Filter +will try to match the exact NUMA cells of the instance to those of the host. It +will consider the standard over-subscription limits each cell, and provide +limits to the compute host accordingly. + +.. note:: + + If instance has no topology defined, it will be considered for any host. If + instance has a topology defined, it will be considered only for NUMA capable + hosts. + +.. _NumInstancesFilter: + +NumInstancesFilter +------------------ + +Hosts that have more instances running than specified by the +``max_instances_per_host`` option are filtered out when this filter is in +place. + +PciPassthroughFilter +-------------------- + +The filter schedules instances on a host if the host has devices that meet the +device requests in the ``extra_specs`` attribute for the flavor. + +.. _RamFilter: + +RamFilter +--------- + +Only schedules instances on hosts that have sufficient RAM available. If this +filter is not set, the scheduler may over provision a host based on RAM (for +example, the RAM allocated by virtual machine instances may exceed the physical +RAM). + +You can configure this filter to enable a fixed amount of RAM overcommitment by +using the ``ram_allocation_ratio`` configuration option in ``nova.conf``. The +default setting is: + +.. code-block:: ini + + ram_allocation_ratio = 1.5 + +This setting enables 1.5 GB instances to run on any compute node with 1 GB of +free RAM. + +RetryFilter +----------- + +Filters out hosts that have already been attempted for scheduling purposes. If +the scheduler selects a host to respond to a service request, and the host +fails to respond to the request, this filter prevents the scheduler from +retrying that host for the service request. + +This filter is only useful if the ``scheduler_max_attempts`` configuration +option is set to a value greater than zero. + +SameHostFilter +-------------- + +Schedules the instance on the same host as another instance in a set of +instances. To take advantage of this filter, the requester must pass a +scheduler hint, using ``same_host`` as the key and a list of instance UUIDs as +the value. This filter is the opposite of the ``DifferentHostFilter``. Using +the :command:`openstack server create` command, use the ``--hint`` flag: + +.. 
code-block:: console + + $ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \ + --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \ + --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 + +With the API, use the ``os:scheduler_hints`` key: + +.. code-block:: json + + { + "server": { + "name": "server-1", + "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef": "1" + }, + "os:scheduler_hints": { + "same_host": [ + "a0cf03a5-d921-4877-bb5c-86d26cf818e1", + "8c19174f-4220-44f0-824a-cd1eeef10287" + ] + } + } + +.. _ServerGroupAffinityFilter: + +ServerGroupAffinityFilter +------------------------- + +The ServerGroupAffinityFilter ensures that an instance is scheduled on to a +host from a set of group hosts. To take advantage of this filter, the requester +must create a server group with an ``affinity`` policy, and pass a scheduler +hint, using ``group`` as the key and the server group UUID as the value. Using +the :command:`openstack server create` command, use the ``--hint`` flag. For +example: + +.. code-block:: console + + $ openstack server group create --policy affinity group-1 + $ openstack server create --image IMAGE_ID --flavor 1 \ + --hint group=SERVER_GROUP_UUID server-1 + +.. _ServerGroupAntiAffinityFilter: + +ServerGroupAntiAffinityFilter +----------------------------- + +The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a +different host. To take advantage of this filter, the requester must create a +server group with an ``anti-affinity`` policy, and pass a scheduler hint, using +``group`` as the key and the server group UUID as the value. Using the +:command:`openstack server create` command, use the ``--hint`` flag. For +example: + +.. code-block:: console + + $ openstack server group create --policy anti-affinity group-1 + $ openstack server create --image IMAGE_ID --flavor 1 \ + --hint group=SERVER_GROUP_UUID server-1 + +SimpleCIDRAffinityFilter +------------------------ + +Schedules the instance based on host IP subnet range. To take advantage of +this filter, the requester must specify a range of valid IP address in CIDR +format, by passing two scheduler hints: + +``build_near_host_ip`` + The first IP address in the subnet (for example, ``192.168.1.1``) + +``cidr`` + The CIDR that corresponds to the subnet (for example, ``/24``) + +Using the :command:`openstack server create` command, use the ``--hint`` flag. +For example, to specify the IP subnet ``192.168.1.1/24``: + +.. code-block:: console + + $ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \ + --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1 + +With the API, use the ``os:scheduler_hints`` key: + +.. code-block:: json + + { + "server": { + "name": "server-1", + "imageRef": "cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef": "1" + }, + "os:scheduler_hints": { + "build_near_host_ip": "192.168.1.1", + "cidr": "24" + } + } + +TrustedFilter +------------- + +Filters hosts based on their trust. Only passes hosts that meet the trust +requirements specified in the instance properties. + +.. _TypeAffinityFilter: + +TypeAffinityFilter +------------------ + +Dynamically limits hosts to one instance type. An instance can only be launched +on a host, if no instance with different instances types are running on it, or +if the host has no running instances at all. + +Cell filters +~~~~~~~~~~~~ + +The following sections describe the available cell filters. 
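+
+Cell filters only take effect when the cells scheduler is configured to load
+them. As a minimal sketch, assuming the cells v1 setup described elsewhere in
+this guide, the ``scheduler_filter_classes`` option in the ``[cells]`` section
+selects the filters to use (``nova.cells.filters.all_filters`` is the default
+and maps to all cell filters shipped with Compute):
+
+.. code-block:: ini
+
+   [cells]
+   scheduler_filter_classes = nova.cells.filters.all_filters
+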
+ +DifferentCellFilter +------------------- + +Schedules the instance on a different cell from a set of instances. To take +advantage of this filter, the requester must pass a scheduler hint, using +``different_cell`` as the key and a list of instance UUIDs as the value. + +ImagePropertiesFilter +--------------------- + +Filters cells based on properties defined on the instance’s image. This +filter works specifying the hypervisor required in the image metadata and the +supported hypervisor version in cell capabilities. + +TargetCellFilter +---------------- + +Filters target cells. This filter works by specifying a scheduler hint of +``target_cell``. The value should be the full cell path. + +.. _weights: + +Weights +~~~~~~~ + +When resourcing instances, the filter scheduler filters and weights each host +in the list of acceptable hosts. Each time the scheduler selects a host, it +virtually consumes resources on it, and subsequent selections are adjusted +accordingly. This process is useful when the customer asks for the same large +amount of instances, because weight is computed for each requested instance. + +All weights are normalized before being summed up; the host with the largest +weight is given the highest priority. + +**Weighting hosts** + +.. figure:: /figures/nova-weighting-hosts.png + +If cells are used, cells are weighted by the scheduler in the same manner as +hosts. + +Hosts and cells are weighted based on the following options in the +``/etc/nova/nova.conf`` file: + +.. list-table:: Host weighting options + :header-rows: 1 + :widths: 10, 25, 60 + + * - Section + - Option + - Description + * - [DEFAULT] + - ``ram_weight_multiplier`` + - By default, the scheduler spreads instances across all hosts evenly. + Set the ``ram_weight_multiplier`` option to a negative number if you + prefer stacking instead of spreading. Use a floating-point value. + * - [DEFAULT] + - ``scheduler_host_subset_size`` + - New instances are scheduled on a host that is chosen randomly from a + subset of the N best hosts. This property defines the subset size from + which a host is chosen. A value of 1 chooses the first host returned by + the weighting functions. This value must be at least 1. A value less + than 1 is ignored, and 1 is used instead. Use an integer value. + * - [DEFAULT] + - ``scheduler_weight_classes`` + - Defaults to ``nova.scheduler.weights.all_weighers``. Hosts are then + weighted and sorted with the largest weight winning. + * - [DEFAULT] + - ``io_ops_weight_multiplier`` + - Multiplier used for weighing host I/O operations. A negative value means + a preference to choose light workload compute hosts. + * - [DEFAULT] + - ``soft_affinity_weight_multiplier`` + - Multiplier used for weighing hosts for group soft-affinity. Only a + positive value is meaningful. Negative means that the behavior will + change to the opposite, which is soft-anti-affinity. + * - [DEFAULT] + - ``soft_anti_affinity_weight_multiplier`` + - Multiplier used for weighing hosts for group soft-anti-affinity. Only a + positive value is meaningful. Negative means that the behavior will + change to the opposite, which is soft-affinity. + * - [metrics] + - ``weight_multiplier`` + - Multiplier for weighting meters. Use a floating-point value. + * - [metrics] + - ``weight_setting`` + - Determines how meters are weighted. Use a comma-separated list of + metricName=ratio. 
For example: ``name1=1.0, name2=-1.0`` results in: + ``name1.value * 1.0 + name2.value * -1.0`` + * - [metrics] + - ``required`` + - Specifies how to treat unavailable meters: + + * True - Raises an exception. To avoid the raised exception, you should + use the scheduler filter ``MetricFilter`` to filter out hosts with + unavailable meters. + * False - Treated as a negative factor in the weighting process (uses + the ``weight_of_unavailable`` option). + * - [metrics] + - ``weight_of_unavailable`` + - If ``required`` is set to False, and any one of the meters set by + ``weight_setting`` is unavailable, the ``weight_of_unavailable`` value + is returned to the scheduler. + +For example: + +.. code-block:: ini + + [DEFAULT] + scheduler_host_subset_size = 1 + scheduler_weight_classes = nova.scheduler.weights.all_weighers + ram_weight_multiplier = 1.0 + io_ops_weight_multiplier = 2.0 + soft_affinity_weight_multiplier = 1.0 + soft_anti_affinity_weight_multiplier = 1.0 + [metrics] + weight_multiplier = 1.0 + weight_setting = name1=1.0, name2=-1.0 + required = false + weight_of_unavailable = -10000.0 + +.. list-table:: Cell weighting options + :header-rows: 1 + :widths: 10, 25, 60 + + * - Section + - Option + - Description + * - [cells] + - ``mute_weight_multiplier`` + - Multiplier to weight mute children (hosts which have not sent + capacity or capacity updates for some time). + Use a negative, floating-point value. + * - [cells] + - ``offset_weight_multiplier`` + - Multiplier to weight cells, so you can specify a preferred cell. + Use a floating point value. + * - [cells] + - ``ram_weight_multiplier`` + - By default, the scheduler spreads instances across all cells evenly. + Set the ``ram_weight_multiplier`` option to a negative number if you + prefer stacking instead of spreading. Use a floating-point value. + * - [cells] + - ``scheduler_weight_classes`` + - Defaults to ``nova.cells.weights.all_weighers``, which maps to all + cell weighers included with Compute. Cells are then weighted and + sorted with the largest weight winning. + +For example: + +.. code-block:: ini + + [cells] + scheduler_weight_classes = nova.cells.weights.all_weighers + mute_weight_multiplier = -10.0 + ram_weight_multiplier = 1.0 + offset_weight_multiplier = 1.0 + +Chance scheduler +~~~~~~~~~~~~~~~~ + +As an administrator, you work with the filter scheduler. However, the Compute +service also uses the Chance Scheduler, +``nova.scheduler.chance.ChanceScheduler``, which randomly selects from lists of +filtered hosts. + +Utilization aware scheduling +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +It is possible to schedule VMs using advanced scheduling decisions. These +decisions are made based on enhanced usage statistics encompassing data like +memory cache utilization, memory bandwidth utilization, or network bandwidth +utilization. This is disabled by default. The administrator can configure how +the metrics are weighted in the configuration file by using the +``weight_setting`` configuration option in the ``nova.conf`` configuration +file. For example to configure metric1 with ratio1 and metric2 with ratio2: + +.. code-block:: ini + + weight_setting = "metric1=ratio1, metric2=ratio2" + +.. _host-aggregates: + +Host aggregates and availability zones +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud, +or a region of an OpenStack cloud, based on arbitrary characteristics. 
+Examples where an administrator may want to do this include where a group of +hosts have additional hardware or performance characteristics. + +Host aggregates are not explicitly exposed to users. Instead administrators +map flavors to host aggregates. Administrators do this by setting metadata on +a host aggregate, and matching flavor extra specifications. The scheduler then +endeavors to match user requests for instance of the given flavor to a host +aggregate with the same key-value pair in its metadata. Compute nodes can be +in more than one host aggregate. + +Administrators are able to optionally expose a host aggregate as an +availability zone. Availability zones are different from host aggregates in +that they are explicitly exposed to the user, and hosts can only be in a single +availability zone. Administrators can configure a default availability zone +where instances will be scheduled when the user fails to specify one. + +Command-line interface +---------------------- + +The :command:`nova` command-line client supports the following +aggregate-related commands. + +nova aggregate-list + Print a list of all aggregates. + +nova aggregate-create [availability-zone] + Create a new aggregate named ````, and optionally in availability zone + ``[availability-zone]`` if specified. The command returns the ID of the newly + created aggregate. Hosts can be made available to multiple host aggregates. + Be careful when adding a host to an additional host aggregate when the host + is also in an availability zone. Pay attention when using the :command:`nova + aggregate-set-metadata` and :command:`nova aggregate-update` commands to + avoid user confusion when they boot instances in different availability + zones. An error occurs if you cannot add a particular host to an aggregate + zone for which it is not intended. + +nova aggregate-delete + Delete an aggregate with its ```` or ````. + +nova aggregate-show + Show details of the aggregate with its ```` or ````. + +nova aggregate-add-host + Add host with name ```` to aggregate with its ```` or ````. + +nova aggregate-remove-host + Remove the host with name ```` from the aggregate with its ```` + or ````. + +nova aggregate-set-metadata [ ...] + Add or update metadata (key-value pairs) associated with the aggregate with + its ```` or ````. + +nova aggregate-update [] + Update the name and availability zone (optional) for the aggregate. + +nova host-list + List all hosts by service. + +nova host-update --maintenance [enable | disable] + Put/resume host into/from maintenance. + +.. note:: + + Only administrators can access these commands. If you try to use these + commands and the user name and tenant that you use to access the Compute + service do not have the ``admin`` role or the appropriate privileges, these + errors occur: + + .. code-block:: console + + ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864) + + .. code-block:: console + + ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1) + +Configure scheduler to support host aggregates +---------------------------------------------- + +One common use case for host aggregates is when you want to support scheduling +instances to a subset of compute hosts because they have a specific capability. 
+For example, you may want to allow users to request compute hosts that have SSD +drives if they need access to faster disk I/O, or access to compute hosts that +have GPU cards to take advantage of GPU-accelerated code. + +To configure the scheduler to support host aggregates, the +``scheduler_default_filters`` configuration option must contain the +``AggregateInstanceExtraSpecsFilter`` in addition to the other filters used by +the scheduler. Add the following line to ``/etc/nova/nova.conf`` on the host +that runs the ``nova-scheduler`` service to enable host aggregates filtering, +as well as the other filters that are typically enabled: + +.. code-block:: ini + + scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter + +Example: Specify compute hosts with SSDs +---------------------------------------- + +This example configures the Compute service to enable users to request nodes +that have solid-state drives (SSDs). You create a ``fast-io`` host aggregate in +the ``nova`` availability zone and you add the ``ssd=true`` key-value pair to +the aggregate. Then, you add the ``node1``, and ``node2`` compute nodes to it. + +.. code-block:: console + + $ openstack aggregate create --zone nova fast-io + +-------------------+----------------------------+ + | Field | Value | + +-------------------+----------------------------+ + | availability_zone | nova | + | created_at | 2016-12-22T07:31:13.013466 | + | deleted | False | + | deleted_at | None | + | id | 1 | + | name | fast-io | + | updated_at | None | + +-------------------+----------------------------+ + + $ openstack aggregate set --property ssd=true 1 + +-------------------+----------------------------+ + | Field | Value | + +-------------------+----------------------------+ + | availability_zone | nova | + | created_at | 2016-12-22T07:31:13.000000 | + | deleted | False | + | deleted_at | None | + | hosts | [] | + | id | 1 | + | name | fast-io | + | properties | ssd='true' | + | updated_at | None | + +-------------------+----------------------------+ + + $ openstack aggregate add host 1 node1 + +-------------------+--------------------------------------------------+ + | Field | Value | + +-------------------+--------------------------------------------------+ + | availability_zone | nova | + | created_at | 2016-12-22T07:31:13.000000 | + | deleted | False | + | deleted_at | None | + | hosts | [u'node1'] | + | id | 1 | + | metadata | {u'ssd': u'true', u'availability_zone': u'nova'} | + | name | fast-io | + | updated_at | None | + +-------------------+--------------------------------------------------+ + + $ openstack aggregate add host 1 node2 + +-------------------+--------------------------------------------------+ + | Field | Value | + +-------------------+--------------------------------------------------+ + | availability_zone | nova | + | created_at | 2016-12-22T07:31:13.000000 | + | deleted | False | + | deleted_at | None | + | hosts | [u'node2'] | + | id | 1 | + | metadata | {u'ssd': u'true', u'availability_zone': u'nova'} | + | name | fast-io | + | updated_at | None | + +-------------------+--------------------------------------------------+ + +Use the :command:`openstack flavor create` command to create the ``ssd.large`` +flavor called with an ID of 6, 8 GB of RAM, 80 GB root disk, and 4 vCPUs. + +.. 
code-block:: console + + $ openstack flavor create --id 6 --ram 8192 --disk 80 --vcpus 4 ssd.large + +----------------------------+-----------+ + | Field | Value | + +----------------------------+-----------+ + | OS-FLV-DISABLED:disabled | False | + | OS-FLV-EXT-DATA:ephemeral | 0 | + | disk | 80 | + | id | 6 | + | name | ssd.large | + | os-flavor-access:is_public | True | + | ram | 8192 | + | rxtx_factor | 1.0 | + | swap | | + | vcpus | 4 | + +----------------------------+-----------+ + +Once the flavor is created, specify one or more key-value pairs that match the +key-value pairs on the host aggregates with scope +``aggregate_instance_extra_specs``. In this case, that is the +``aggregate_instance_extra_specs:ssd=true`` key-value pair. Setting a +key-value pair on a flavor is done using the :command:`openstack flavor set` +command. + +.. code-block:: console + + $ openstack flavor set --property aggregate_instance_extra_specs:ssd=true ssd.large + +Once it is set, you should see the ``extra_specs`` property of the +``ssd.large`` flavor populated with a key of ``ssd`` and a corresponding value +of ``true``. + +.. code-block:: console + + $ openstack flavor show ssd.large + +----------------------------+-------------------------------------------+ + | Field | Value | + +----------------------------+-------------------------------------------+ + | OS-FLV-DISABLED:disabled | False | + | OS-FLV-EXT-DATA:ephemeral | 0 | + | disk | 80 | + | id | 6 | + | name | ssd.large | + | os-flavor-access:is_public | True | + | properties | aggregate_instance_extra_specs:ssd='true' | + | ram | 8192 | + | rxtx_factor | 1.0 | + | swap | | + | vcpus | 4 | + +----------------------------+-------------------------------------------+ + +Now, when a user requests an instance with the ``ssd.large`` flavor, +the scheduler only considers hosts with the ``ssd=true`` key-value pair. +In this example, these are ``node1`` and ``node2``. + +XenServer hypervisor pools to support live migration +---------------------------------------------------- + +When using the XenAPI-based hypervisor, the Compute service uses host +aggregates to manage XenServer Resource pools, which are used in supporting +live migration. diff --git a/doc/source/index.rst b/doc/source/index.rst index 4380860ebae9..a142ffe87b06 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -167,6 +167,10 @@ Reference Material * **Configuration**: + * :doc:`Configuration Guide `: detailed + configuration guides for various parts of you Nova system. Helpful + reference for setting up specific hypervisor backends. + * :doc:`Config Reference `: a complete reference of all configuration options available in the nova.conf file. @@ -210,6 +214,7 @@ looking parts of our architecture. These are collected below. :hidden: admin/index + admin/configuration/index cli/index configuration/config configuration/sample-config