doc: Import configuration reference

Import the following files from the former config-reference [1]:

  api.rst
  cells.rst
  fibre-channel.rst
  hypervisor-basics.rst
  hypervisor-hyper-v.rst
  hypervisor-kvm.rst
  hypervisor-lxc.rst
  hypervisor-qemu.rst
  hypervisor-virtuozzo.rst
  hypervisor-vmware.rst
  hypervisor-xen-api.rst
  hypervisor-xen-libvirt.rst
  hypervisors.rst
  index.rst
  iscsi-offload.rst
  logs.rst
  resize.rst
  samples/api-paste.ini.rst
  samples/index.rst
  samples/policy.yaml.rst
  samples/rootwrap.conf.rst
  schedulers.rst

The below files are skipped as they're already included, in slightly
different forms, in the nova documentation.

  config-options.rst
  nova-conf-samples.rst
  nova-conf.rst
  nova.conf

Part of bp: doc-migration

Change-Id: I145e38149bf20a5e068f8cfe913f90c7ebeaad36
Stephen Finucane 2017-08-08 16:10:45 +01:00 committed by Sean Dague
parent 6b21efbc7f
commit da224b3a05
23 changed files with 4628 additions and 0 deletions


@ -0,0 +1,25 @@
=========================
Compute API configuration
=========================
The Compute API, run by the ``nova-api`` daemon, is the component of OpenStack
Compute that receives and responds to user requests, whether they be direct API
calls, or via the CLI tools or dashboard.
Configure Compute API password handling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack Compute API enables users to specify an administrative password
when they create or rebuild a server instance. If the user does not specify a
password, a random password is generated and returned in the API response.
In practice, how the admin password is handled depends on the hypervisor in use
and might require additional configuration of the instance. For example, you
might have to install an agent to handle the password setting. If the
hypervisor and instance configuration do not support setting a password at
server create time, the password that is returned by the create API call is
misleading because it was ignored.
To prevent this confusion, use the ``enable_instance_password`` configuration
option to disable the return of the admin password for installations that do
not support setting instance passwords.
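For example, a minimal sketch of disabling the returned password (depending on
your release, the option may live in the ``[api]`` section or, on older
releases, in ``[DEFAULT]``):
.. code-block:: ini
[api]
enable_instance_password = False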


@ -0,0 +1,295 @@
==========
Cells (v1)
==========
.. warning::
Configuring and implementing Cells v1 is not recommended for new deployments
of the Compute service (nova). Cells v2 replaces cells v1, and v2 is
required to install or upgrade the Compute service to the 15.0.0 Ocata
release. More information on cells v2 can be found in :doc:`/user/cells`.
`Cells` functionality enables you to scale an OpenStack Compute cloud in a more
distributed fashion without having to use complicated technologies like
database and message queue clustering. It supports very large deployments.
When this functionality is enabled, the hosts in an OpenStack Compute cloud are
partitioned into groups called cells. Cells are configured as a tree. The
top-level cell should have a host that runs a ``nova-api`` service, but no
``nova-compute`` services. Each child cell should run all of the typical
``nova-*`` services in a regular Compute cloud except for ``nova-api``. You can
think of cells as a normal Compute deployment in that each cell has its own
database server and message queue broker.
The ``nova-cells`` service handles communication between cells and selects
cells for new instances. This service is required for every cell. Communication
between cells is pluggable, and currently the only option is communication
through RPC.
Cells scheduling is separate from host scheduling. ``nova-cells`` first picks
a cell. Once a cell is selected and the new build request reaches its
``nova-cells`` service, it is sent over to the host scheduler in that cell and
the build proceeds as it would have without cells.
Cell configuration options
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. todo:: This is duplication. We should be able to use the
oslo.config.sphinxext module to generate this for us
Cells are disabled by default. All cell-related configuration options appear in
the ``[cells]`` section in ``nova.conf``. The following cell-related options
are currently supported:
``enable``
Set to ``True`` to turn on cell functionality. Default is ``false``.
``name``
Name of the current cell. Must be unique for each cell.
``capabilities``
List of arbitrary ``key=value`` pairs defining capabilities of the current
cell. For example: ``hypervisor=xenserver;kvm,os=linux;windows``.
``call_timeout``
How long in seconds to wait for replies from calls between cells.
``scheduler_filter_classes``
Filter classes that the cells scheduler should use. By default, uses
``nova.cells.filters.all_filters`` to map to all cells filters included with
Compute.
``scheduler_weight_classes``
Weight classes that the scheduler for cells uses. By default, uses
``nova.cells.weights.all_weighers`` to map to all cells weight algorithms
included with Compute.
``ram_weight_multiplier``
Multiplier used to weight RAM. Negative numbers indicate that Compute should
stack VMs on one host instead of spreading out new VMs to more hosts in the
cell. The default value is 10.0.
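For example, these options might be combined in the ``[cells]`` section of
``nova.conf`` as follows (the values shown are purely illustrative):
.. code-block:: ini
[cells]
enable = True
name = cell1
capabilities = hypervisor=xenserver;kvm,os=linux;windows
call_timeout = 60
ram_weight_multiplier = 10.0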
Configure the API (top-level) cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The cell type must be changed in the API cell so that requests can be proxied
through ``nova-cells`` down to the correct cell properly. Edit the
``nova.conf`` file in the API cell, and specify ``api`` in the ``cell_type``
key:
.. code-block:: ini
[DEFAULT]
compute_api_class=nova.compute.cells_api.ComputeCellsAPI
# ...
[cells]
cell_type = api
Configure the child cells
~~~~~~~~~~~~~~~~~~~~~~~~~
Edit the ``nova.conf`` file in the child cells, and specify ``compute`` in the
``cell_type`` key:
.. code-block:: ini
[DEFAULT]
# Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver
[cells]
cell_type = compute
Configure the database in each cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before bringing the services online, the database in each cell needs to be
configured with information about related cells. In particular, the API cell
needs to know about its immediate children, and the child cells must know about
their immediate parents. The information needed is the ``RabbitMQ`` server
credentials for the particular cell.
Use the :command:`nova-manage cell create` command to add this information to
the database in each cell:
.. code-block:: console
# nova-manage cell create -h
usage: nova-manage cell create [-h] [--name <name>]
[--cell_type <parent|api|child|compute>]
[--username <username>] [--password <password>]
[--broker_hosts <broker_hosts>]
[--hostname <hostname>] [--port <number>]
[--virtual_host <virtual_host>]
[--woffset <float>] [--wscale <float>]
optional arguments:
-h, --help show this help message and exit
--name <name> Name for the new cell
--cell_type <parent|api|child|compute>
Whether the cell is parent/api or child/compute
--username <username>
Username for the message broker in this cell
--password <password>
Password for the message broker in this cell
--broker_hosts <broker_hosts>
Comma separated list of message brokers in this cell.
Each Broker is specified as hostname:port with both
mandatory. This option overrides the --hostname and
--port options (if provided).
--hostname <hostname>
Address of the message broker in this cell
--port <number> Port number of the message broker in this cell
--virtual_host <virtual_host>
The virtual host of the message broker in this cell
--woffset <float>
--wscale <float>
As an example, assume an API cell named ``api`` and a child cell named
``cell1``.
Within the ``api`` cell, specify the following ``RabbitMQ`` server information:
.. code-block:: ini
rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhost
Within the ``cell1`` child cell, specify the following ``RabbitMQ`` server
information:
.. code-block:: ini
rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
rabbit_virtual_host=cell1_vhost
You can run this in the API cell as root:
.. code-block:: console
# nova-manage cell create --name cell1 --cell_type child \
--username cell1_user --password cell1_passwd --hostname 10.0.1.10 \
--port 5673 --virtual_host cell1_vhost --woffset 1.0 --wscale 1.0
Repeat the previous steps for all child cells.
In the child cell, run the following, as root:
.. code-block:: console
# nova-manage cell create --name api --cell_type parent \
--username api_user --password api_passwd --hostname 10.0.0.10 \
--port 5672 --virtual_host api_vhost --woffset 1.0 --wscale 1.0
To customize the Compute cells, use the configuration option settings
documented above.
Cell scheduling configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To determine the best cell to use to launch a new instance, Compute uses a set
of filters and weights defined in the ``/etc/nova/nova.conf`` file. The
following options are available to prioritize cells for scheduling:
``scheduler_filter_classes``
List of filter classes. By default ``nova.cells.filters.all_filters``
is specified, which maps to all cells filters included with Compute
(see the section called :ref:`Filters <compute-scheduler-filters>`).
``scheduler_weight_classes``
List of weight classes. By default ``nova.cells.weights.all_weighers`` is
specified, which maps to all cell weight algorithms included with Compute.
The following modules are available:
``mute_child``
Downgrades the likelihood of child cells that have not sent capacity or
capability updates in a while being chosen for scheduling requests.
Options include ``mute_weight_multiplier`` (multiplier for mute children;
value should be negative).
``ram_by_instance_type``
Select cells with the most RAM capacity for the instance type being
requested. Because higher weights win, Compute returns the number of
available units for the instance type requested. The
``ram_weight_multiplier`` option defaults to 10.0, which multiplies the weight
by a factor of 10.
Use a negative number to stack VMs on one host instead of spreading
out new VMs to more hosts in the cell.
``weight_offset``
Allows modifying the database to weight a particular cell. You can use this
when you want to disable a cell (for example, by setting it to ``0``), or to set a default
cell by making its ``weight_offset`` very high (for example,
``999999999999999``). The highest weight will be the first cell to be
scheduled for launching an instance.
Additionally, the following options are available for the cell scheduler:
``scheduler_retries``
Specifies how many times the scheduler tries to launch a new instance when no
cells are available (default=10).
``scheduler_retry_delay``
Specifies the delay (in seconds) between retries (default=2).
As an admin user, you can also add a filter that directs builds to a particular
cell. The ``policy.json`` file must have a line with
``"cells_scheduler_filter:TargetCellFilter" : "is_admin:True"`` to let an admin
user specify a scheduler hint to direct a build to a particular cell.
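With that policy in place, an admin can target a build at a specific cell with
a scheduler hint. A sketch, assuming an API cell named ``api`` and a child cell
named ``cell1`` (the hint value uses the ``parent!child`` cell path syntax):
.. code-block:: console
$ openstack server create --image IMAGE --flavor FLAVOR \
  --hint target_cell='api!cell1' NEW_SERVER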
Optional cell configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cells store all inter-cell communication data, including user names and
passwords, in the database. Because the cells data is not updated very
frequently, use the ``[cells]cells_config`` option to specify a JSON file to
store cells data. With this configuration, the database is no longer consulted
when reloading the cells data. The file must have columns present in the Cell
model (excluding common database fields and the ``id`` column). You must
specify the queue connection information through a ``transport_url`` field,
instead of ``username``, ``password``, and so on.
The ``transport_url`` has the following form::
rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
The scheme can only be ``rabbit``.
The following sample shows this optional configuration:
.. code-block:: json
{
"parent": {
"name": "parent",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": true
},
"cell1": {
"name": "cell1",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit1.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": false
},
"cell2": {
"name": "cell2",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit2.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": false
}
}


@ -0,0 +1,28 @@
=================================
Configuring Fibre Channel Support
=================================
Fibre Channel support in OpenStack Compute provides remote block storage
attached to compute nodes for VMs.
.. todo:: The below statement needs to be verified for the current release
Fibre Channel is supported only by the KVM hypervisor.
Compute and Block Storage support Fibre Channel automatic zoning on Brocade and
Cisco switches. On other hardware, Fibre Channel arrays must be pre-zoned or
directly attached to the KVM hosts.
KVM host requirements
~~~~~~~~~~~~~~~~~~~~~
You must install these packages on the KVM host:
``sysfsutils``
Nova uses the ``systool`` application in this package.
``sg3-utils`` or ``sg3_utils``
Nova uses the ``sg_scan`` and ``sginfo`` applications.
Installing the ``multipath-tools`` or ``device-mapper-multipath`` package is
optional.
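For example, on an Ubuntu-based KVM host the required and optional packages
might be installed as follows (package names vary by distribution):
.. code-block:: console
# apt-get install sysfsutils sg3-utils multipath-tools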


@ -0,0 +1,14 @@
===============================
Hypervisor Configuration Basics
===============================
The node where the ``nova-compute`` service is installed and running is the
same node that runs all of the virtual machines. This node is referred to as
the compute node in this guide.
By default, the selected hypervisor is KVM. To change to another hypervisor,
change the ``virt_type`` option in the ``[libvirt]`` section of ``nova.conf``
and restart the ``nova-compute`` service.
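For example, a minimal sketch of switching the compute node from KVM to QEMU:
.. code-block:: ini
[libvirt]
virt_type = qemu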
Specific options for particular hypervisors can be found in
the following sections.


@ -0,0 +1,464 @@
===============================
Hyper-V virtualization platform
===============================
.. todo:: This is really installation guide material and should probably be
moved.
It is possible to use Hyper-V as a compute node within an OpenStack deployment.
The ``nova-compute`` service runs as ``openstack-compute``, a 32-bit service
that runs directly on the Windows platform with the Hyper-V role enabled. The
necessary Python components as well as the ``nova-compute`` service are
installed directly onto the Windows platform. Windows Clustering Services are
not needed for functionality within the OpenStack infrastructure. Use of the
Windows Server 2012 platform is recommended for the best experience, as it is
the platform for active development. The following Windows platforms have been
tested as compute nodes:
- Windows Server 2012
- Windows Server 2012 R2 Server and Core (with the Hyper-V role enabled)
- Hyper-V Server
Hyper-V configuration
~~~~~~~~~~~~~~~~~~~~~
The only OpenStack services required on a Hyper-V node are ``nova-compute`` and
``neutron-hyperv-agent``. When sizing this host, consider that Hyper-V will
require 16-20 GB of disk space for the OS itself, including updates. Two NICs
are required, one connected to the
management network and one to the guest data network.
The following sections discuss how to prepare the Windows Hyper-V node for
operation as an OpenStack compute node. Unless stated otherwise, any
configuration information should work for the Windows 2012 and 2012 R2
platforms.
Local storage considerations
----------------------------
The Hyper-V compute node needs to have ample storage for storing the virtual
machine images running on the compute nodes. You may use a single volume for
all, or partition it into an OS volume and VM volume.
.. _configure-ntp-windows:
Configure NTP
-------------
Network time services must be configured to ensure proper operation of the
OpenStack nodes. To set network time on your Windows host you must run the
following commands:
.. code-block:: bat
C:\>net stop w32time
C:\>w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
C:\>net start w32time
Keep in mind that the node will have to be time synchronized with the other
nodes of your OpenStack environment, so it is important to use the same NTP
server. Note that in case of an Active Directory environment, you may do this
only for the AD Domain Controller.
Configure Hyper-V virtual switching
-----------------------------------
Information regarding the Hyper-V virtual Switch can be found in the `Hyper-V
Virtual Switch Overview`__.
To quickly enable an interface to be used as a Virtual Interface the
following PowerShell may be used:
.. code-block:: none
PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
.. note::
It is very important to make sure that when you are using a Hyper-V node
with only one NIC, the ``-AllowManagementOS`` option is set to ``True``;
otherwise, you will lose connectivity to the Hyper-V node.
__ https://technet.microsoft.com/en-us/library/hh831823.aspx
Enable iSCSI initiator service
------------------------------
To prepare the Hyper-V node to attach to volumes provided by cinder, you must
first make sure the Windows iSCSI initiator service is running and starts
automatically.
.. code-block:: none
PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
PS C:\> Start-Service MSiSCSI
Configure shared nothing live migration
---------------------------------------
Detailed information on the configuration of live migration can be found in
`this guide`__.
The following outlines the steps of shared nothing live migration.
#. The target host ensures that live migration is enabled and properly
configured in Hyper-V.
#. The target host checks if the image to be migrated requires a base VHD and
pulls it from the Image service if not already available on the target host.
#. The source host ensures that live migration is enabled and properly
configured in Hyper-V.
#. The source host initiates a Hyper-V live migration.
#. The source host communicates to the manager the outcome of the operation.
The following three configuration options are needed in order to support
Hyper-V live migration and must be added to your ``nova.conf`` on the Hyper-V
compute node:
* This is needed to support shared nothing Hyper-V live migrations. It is used
in ``nova/compute/manager.py``.
.. code-block:: ini
instances_shared_storage = False
* This flag is needed to support live migration to hosts with different CPU
features. This flag is checked during instance creation in order to limit the
CPU features used by the VM.
.. code-block:: ini
limit_cpu_features = True
* This option is used to specify where instances are stored on disk.
.. code-block:: ini
instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES
Additional Requirements:
* Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled
* A Windows domain controller with the Hyper-V compute nodes as domain members
* The ``instances_path`` configuration option needs to be the same on all hosts
* The ``openstack-compute`` service deployed with the setup must run with
domain credentials. You can set the service credentials with:
.. code-block:: bat
C:\>sc config openstack-compute obj="DOMAIN\username" password="password"
__ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine
How to setup live migration on Hyper-V
--------------------------------------
To enable shared nothing live migration, run the three commands below on each
Hyper-V host:
.. code-block:: none
PS C:\> Enable-VMMigration
PS C:\> Set-VMMigrationNetwork IP_ADDRESS
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationTypeKerberos
.. note::
Replace the ``IP_ADDRESS`` with the address of the interface which will
provide live migration.
Additional Reading
------------------
This article clarifies the various live migration options in Hyper-V:
`Hyper-V Live Migration of Yesterday
<https://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html>`_
Install nova-compute using OpenStack Hyper-V installer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to avoid all the manual setup, you can use Cloudbase
Solutions' installer. You can find it here:
`HyperVNovaCompute_Beta download
<https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi>`_
The tool installs an independent Python environment in order to avoid conflicts
with existing applications, and dynamically generates a ``nova.conf`` file
based on the parameters provided by you.
The tool can also be used in automated and unattended mode for deployments
on a large number of servers. More details about how to use the installer and
its features can be found here:
`Cloudbase <https://www.cloudbase.it>`_
.. _windows-requirements:
Requirements
~~~~~~~~~~~~
Python
------
The 32-bit version of Python 2.7 must be installed, as most of the libraries do
not work properly on the 64-bit version.
**Setting up Python prerequisites**
#. Download and install Python 2.7 using the MSI installer from here:
`python-2.7.3.msi download
<https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi>`_
.. code-block:: none
PS C:\> $src = "https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
PS C:\> $dest = "$env:temp\python-2.7.3.msi"
PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
PS C:\> Unblock-File $dest
PS C:\> Start-Process $dest
#. Make sure that the ``Python`` and ``Python\Scripts`` paths are set up in the
``PATH`` environment variable.
.. code-block:: none
PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path")
PS C:\> $newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\"
PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User)
Python dependencies
-------------------
The following packages need to be downloaded and manually installed:
``setuptools``
https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe
``pip``
https://pip.pypa.io/en/latest/installing/
``PyMySQL``
http://codegood.com/download/10/
``PyWin32``
https://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe
``Greenlet``
http://www.lfd.uci.edu/~gohlke/pythonlibs/#greenlet
``PyCrypto``
http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe
The following packages must be installed with pip:
* ``ecdsa``
* ``amqp``
* ``wmi``
.. code-block:: none
PS C:\> pip install ecdsa
PS C:\> pip install amqp
PS C:\> pip install wmi
Other dependencies
------------------
``qemu-img`` is required for some of the image related operations. You can get
it from here: http://qemu.weilnetz.de/. You must make sure that the
``qemu-img`` path is set in the PATH environment variable.
Some Python packages need to be compiled, so you may use MinGW or Visual
Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/.
You must configure which compiler is to be used for this purpose by using the
``distutils.cfg`` file in ``$Python27\Lib\distutils``, which can contain:
.. code-block:: ini
[build]
compiler = mingw32
As a last step for setting up MinGW, make sure that the MinGW binaries'
directories are set up in PATH.
Install nova-compute
~~~~~~~~~~~~~~~~~~~~
Download the nova code
----------------------
#. Use Git to download the necessary source code. The installer to run Git on
Windows can be downloaded here:
https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe
#. Download the installer. Once the download is complete, run the installer and
follow the prompts in the installation wizard. The default should be
acceptable for the purposes of this guide.
.. code-block:: none
PS C:\> $src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe"
PS C:\> $dest = "$env:temp\Git-1.9.2-preview20140411.exe"
PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
PS C:\> Unblock-File $dest
PS C:\> Start-Process $dest
#. Run the following to clone the nova code.
.. code-block:: none
PS C:\> git.exe clone https://git.openstack.org/openstack/nova
Install nova-compute service
----------------------------
To install ``nova-compute``, run:
.. code-block:: none
PS C:\> cd c:\nova
PS C:\> python setup.py install
Configure nova-compute
----------------------
The ``nova.conf`` file must be placed in ``C:\etc\nova`` for running OpenStack
on Hyper-V. Below is a sample ``nova.conf`` for Windows:
.. code-block:: ini
[DEFAULT]
auth_strategy = keystone
image_service = nova.image.glance.GlanceImageService
compute_driver = nova.virt.hyperv.driver.HyperVDriver
volume_api_class = nova.volume.cinder.API
fake_network = true
instances_path = C:\Program Files (x86)\OpenStack\Instances
use_cow_images = true
force_config_drive = false
injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template
policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.json
mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe
allow_resize_to_same_host = true
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 120
resize_confirm_window = 5
resume_guests_state_on_host_boot = true
rpc_response_timeout = 1800
lock_path = C:\Program Files (x86)\OpenStack\Log\
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = Passw0rd
logdir = C:\Program Files (x86)\OpenStack\Log\
logfile = nova-compute.log
instance_usage_audit = true
instance_usage_audit_period = hour
use_neutron = True
[glance]
api_servers = http://IP_ADDRESS:9292
[neutron]
url = http://IP_ADDRESS:9696
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = Passw0rd
admin_auth_url = http://IP_ADDRESS:35357/v2.0
[hyperv]
vswitch_name = newVSwitch0
limit_cpu_features = false
config_drive_inject_password = false
qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom = true
dynamic_memory_ratio = 1
enable_instance_metrics_collection = true
[rdp]
enabled = true
html5_proxy_base_url = https://IP_ADDRESS:4430
Prepare images for use with Hyper-V
-----------------------------------
Hyper-V currently supports only the VHD and VHDX file formats for virtual
machine instances. Detailed instructions for installing virtual machines on
Hyper-V can be found here:
`Create Virtual Machines
<http://technet.microsoft.com/en-us/library/cc772480.aspx>`_
Once you have successfully created a virtual machine, you can then upload the
image to `glance` using the ``openstack`` client:
.. code-block:: none
PS C:\> openstack image create --name "VM_IMAGE_NAME" --property hypervisor_type=hyperv --public \
--container-format bare --disk-format vhd
.. note::
VHD and VHDX file sizes can be bigger than their maximum internal size;
as such, you need to boot instances using a flavor with a slightly bigger
disk size than the internal size of the disk file.
To create VHDs, use the following PowerShell cmdlet:
.. code-block:: none
PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE
Inject interfaces and routes
----------------------------
The ``interfaces.template`` file describes the network interfaces and routes
available on your system and how to activate them. You can specify the location
of the file with the ``injected_network_template`` configuration option in
``/etc/nova/nova.conf``.
.. code-block:: ini
injected_network_template = PATH_TO_FILE
A default template exists in ``nova/virt/interfaces.template``.
Run Compute with Hyper-V
------------------------
To start the ``nova-compute`` service, run this command from a console in the
Windows server:
.. code-block:: none
PS C:\> C:\Python27\python.exe c:\Python27\Scripts\nova-compute --config-file c:\etc\nova\nova.conf
Troubleshoot Hyper-V configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* I ran the :command:`nova-manage service list` command from my controller;
however, I'm not seeing smiley faces for Hyper-V compute nodes, what do I do?
Verify that you are synchronized with a network time source. For
instructions about how to configure NTP on your Hyper-V compute node, see
:ref:`configure-ntp-windows`.
* How do I restart the compute service?
.. code-block:: none
PS C:\> net stop nova-compute && net start nova-compute
* How do I restart the iSCSI initiator service?
.. code-block:: none
PS C:\> net stop msiscsi && net start msiscsi


@ -0,0 +1,372 @@
===
KVM
===
.. todo:: This is really installation guide material and should probably be
moved.
KVM is configured as the default hypervisor for Compute.
.. note::
This document contains several sections about hypervisor selection. If you
are reading this document linearly, you do not want to load the KVM module
before you install ``nova-compute``. The ``nova-compute`` service depends
on qemu-kvm, which installs ``/lib/udev/rules.d/45-qemu-kvm.rules``, which
sets the correct permissions on the ``/dev/kvm`` device node.
To enable KVM explicitly, add the following configuration options to the
``/etc/nova/nova.conf`` file:
.. code-block:: ini
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
* Raw
* QEMU Copy-on-write (QCOW2)
* QED (QEMU Enhanced Disk)
* VMware virtual machine disk format (vmdk)
This section describes how to enable KVM on your system. For more information,
see the following distribution-specific documentation:
* `Fedora: Virtualization Getting Started Guide <http://docs.fedoraproject.org/
en-US/Fedora/22/html/Virtualization_Getting_Started_Guide/index.html>`_
from the Fedora 22 documentation.
* `Ubuntu: KVM/Installation <https://help.ubuntu.com/community/KVM/
Installation>`_ from the Community Ubuntu documentation.
* `Debian: Virtualization with KVM <http://static.debian-handbook.info/browse/
stable/sect.virtualization.html#idp11279352>`_ from the Debian handbook.
* `Red Hat Enterprise Linux: Installing virtualization packages on an existing
Red Hat Enterprise Linux system <http://docs.redhat.com/docs/en-US/
Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_
Installation_Guide/sect-Virtualization_Host_Configuration_and_Guest_Installa
tion_Guide-Host_Installation-Installing_KVM_packages_on_an_existing_Red_Hat_
Enterprise_Linux_system.html>`_ from the ``Red Hat Enterprise Linux
Virtualization Host Configuration and Guest Installation Guide``.
* `openSUSE: Installing KVM <http://doc.opensuse.org/documentation/html/
openSUSE/opensuse-kvm/cha.kvm.requires.html#sec.kvm.requires.install>`_
from the openSUSE Virtualization with KVM manual.
* `SLES: Installing KVM <https://www.suse.com/documentation/sles-12/book_virt/
data/sec_vt_installation_kvm.html>`_ from the SUSE Linux Enterprise Server
``Virtualization Guide``.
Enable KVM
~~~~~~~~~~
The following sections outline how to enable KVM based hardware virtualization
on different architectures and platforms. To perform these steps, you must be
logged in as the ``root`` user.
For x86 based systems
---------------------
#. To determine whether the ``svm`` or ``vmx`` CPU extensions are present, run
this command:
.. code-block:: console
# grep -E 'svm|vmx' /proc/cpuinfo
This command generates output if the CPU is capable of
hardware-virtualization. Even if output is shown, you might still need to
enable virtualization in the system BIOS for full support.
If no output appears, consult your system documentation to ensure that your
CPU and motherboard support hardware virtualization. Verify that any
relevant hardware virtualization options are enabled in the system BIOS.
The BIOS for each manufacturer is different. If you must enable
virtualization in the BIOS, look for an option containing the words
``virtualization``, ``VT``, ``VMX``, or ``SVM``.
#. To list the loaded kernel modules and verify that the ``kvm`` modules are
loaded, run this command:
.. code-block:: console
# lsmod | grep kvm
If the output includes ``kvm_intel`` or ``kvm_amd``, the ``kvm`` hardware
virtualization modules are loaded and your kernel meets the module
requirements for OpenStack Compute.
If the output does not show that the ``kvm`` module is loaded, run this
command to load it:
.. code-block:: console
# modprobe -a kvm
Run the command for your CPU. For Intel, run this command:
.. code-block:: console
# modprobe -a kvm-intel
For AMD, run this command:
.. code-block:: console
# modprobe -a kvm-amd
Because a KVM installation can change user group membership, you might need
to log in again for changes to take effect.
If the kernel modules do not load automatically, use the procedures listed
in these subsections.
If the checks indicate that required hardware virtualization support or kernel
modules are disabled or unavailable, you must either enable this support on the
system or find a system with this support.
.. note::
Some systems require that you enable VT support in the system BIOS. If you
believe your processor supports hardware acceleration but the previous
command did not produce output, reboot your machine, enter the system BIOS,
and enable the VT option.
If KVM acceleration is not supported, configure Compute to use a different
hypervisor, such as ``QEMU`` or ``Xen``. See :ref:`compute_qemu` or
:ref:`compute_xen_api` for details.
These procedures help you load the kernel modules for Intel-based and AMD-based
processors if they do not load automatically during KVM installation.
.. rubric:: Intel-based processors
If your compute host is Intel-based, run these commands as root to load the
kernel modules:
.. code-block:: console
# modprobe kvm
# modprobe kvm-intel
Add these lines to the ``/etc/modules`` file so that these modules load on
reboot:
.. code-block:: console
kvm
kvm-intel
.. rubric:: AMD-based processors
If your compute host is AMD-based, run these commands as root to load the
kernel modules:
.. code-block:: console
# modprobe kvm
# modprobe kvm-amd
Add these lines to ``/etc/modules`` file so that these modules load on reboot:
.. code-block:: console
kvm
kvm-amd
For POWER based systems
-----------------------
KVM as a hypervisor is supported on POWER system's PowerNV platform.
#. To determine if your POWER platform supports KVM based virtualization run
the following command:
.. code-block:: console
# cat /proc/cpuinfo | grep PowerNV
If the previous command generates the following output, then the CPU supports
KVM based virtualization.
.. code-block:: console
platform: PowerNV
If no output is displayed, then your POWER platform does not support KVM
based hardware virtualization.
#. To list the loaded kernel modules and verify that the ``kvm`` modules are
loaded, run the following command:
.. code-block:: console
# lsmod | grep kvm
If the output includes ``kvm_hv``, the ``kvm`` hardware virtualization
modules are loaded and your kernel meets the module requirements for
OpenStack Compute.
If the output does not show that the ``kvm`` module is loaded, run the
following command to load it:
.. code-block:: console
# modprobe -a kvm
For PowerNV platform, run the following command:
.. code-block:: console
# modprobe -a kvm-hv
Because a KVM installation can change user group membership, you might need
to log in again for changes to take effect.
Configure Compute backing storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Backing Storage is the storage used to provide the expanded operating system
image, and any ephemeral storage. Inside the virtual machine, this is normally
presented as two virtual hard disks (for example, ``/dev/vda`` and ``/dev/vdb``
respectively). However, inside OpenStack, this can be derived from one of these
methods: ``lvm``, ``qcow``, ``rbd`` or ``flat``, chosen using the
``images_type`` option in ``nova.conf`` on the compute node.
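For example, a minimal sketch that selects the QCOW2 back end explicitly:
.. code-block:: ini
[libvirt]
images_type = qcow2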
.. note::
The option ``raw`` is acceptable but deprecated in favor of ``flat``. The
Flat back end uses either raw or QCOW2 storage. It never uses a backing
store, so when using QCOW2 it copies an image rather than creating an
overlay. By default, it creates raw files but will use QCOW2 when creating a
disk from a QCOW2 if ``force_raw_images`` is not set in configuration.
QCOW is the default backing store. It uses a copy-on-write philosophy to delay
allocation of storage until it is actually needed. This means that the space
required for the backing of an image can be significantly less on the real disk
than what seems available in the virtual machine operating system.
Flat creates files without any sort of file formatting, effectively creating
files with the plain binary one would normally see on a real disk. This can
increase performance, but means that the entire size of the virtual disk is
reserved on the physical disk.
Local `LVM volumes
<https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)>`__ can also be
used. Set ``images_volume_group = nova_local`` where ``nova_local`` is the name
of the LVM group you have created.
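For example, assuming an LVM volume group named ``nova_local`` has already been
created on the compute node:
.. code-block:: ini
[libvirt]
images_type = lvm
images_volume_group = nova_local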
Specify the CPU model of KVM guests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Compute service enables you to control the guest CPU model that is exposed
to KVM virtual machines. Use cases include:
* To maximize performance of virtual machines by exposing new host CPU features
to the guest
* To ensure a consistent default CPU across all machines, removing reliance on
variable QEMU defaults
In libvirt, the CPU is specified by providing a base CPU model name (which is a
shorthand for a set of feature flags), a set of additional feature flags, and
the topology (sockets/cores/threads). The libvirt KVM driver provides a number
of standard CPU model names. These models are defined in the
``/usr/share/libvirt/cpu_map.xml`` file. Check this file to determine which
models are supported by your local installation.
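One quick way to list the model names defined there (a sketch; the exact output
depends on your libvirt version):
.. code-block:: console
$ grep 'model name' /usr/share/libvirt/cpu_map.xml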
Two Compute configuration options in the ``[libvirt]`` group of ``nova.conf``
define which type of CPU model is exposed to the hypervisor when using KVM:
``cpu_mode`` and ``cpu_model``.
The ``cpu_mode`` option can take one of the following values: ``none``,
``host-passthrough``, ``host-model``, and ``custom``.
Host model (default for KVM & QEMU)
-----------------------------------
If your ``nova.conf`` file contains ``cpu_mode=host-model``, libvirt identifies
the CPU model in ``/usr/share/libvirt/cpu_map.xml`` file that most closely
matches the host, and requests additional CPU flags to complete the match. This
configuration provides the maximum functionality and performance and maintains
good reliability and compatibility if the guest is migrated to another host
with slightly different host CPUs.
Host pass through
-----------------
If your ``nova.conf`` file contains ``cpu_mode=host-passthrough``, libvirt
tells KVM to pass through the host CPU with no modifications. The difference
from host-model is that instead of just matching feature flags, every last
detail of the host CPU is matched. This gives the best performance, and can be
important to some applications that check low-level CPU details, but it comes
at a cost with respect to migration: the guest can only be migrated to a
matching host CPU.
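For example, a minimal sketch of enabling pass-through mode:
.. code-block:: ini
[libvirt]
cpu_mode = host-passthrough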
Custom
------
If your ``nova.conf`` file contains ``cpu_mode=custom``, you can explicitly
specify one of the supported named models using the ``cpu_model`` configuration
option. For example, to configure the KVM guests to expose Nehalem CPUs, your
``nova.conf`` file should contain:
.. code-block:: ini
[libvirt]
cpu_mode = custom
cpu_model = Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
-----------------------------------------------------------------------
If your ``nova.conf`` file contains ``cpu_mode=none``, libvirt does not specify
a CPU model. Instead, the hypervisor chooses the default model.
Guest agent support
-------------------
Use guest agents to enable optional access between compute nodes and guests
through a socket, using the QMP protocol.
To enable this feature, you must set ``hw_qemu_guest_agent=yes`` as a metadata
parameter on the image you wish to use to create the guest-agent-capable
instances from. You can explicitly disable the feature by setting
``hw_qemu_guest_agent=no`` in the image metadata.
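For example, assuming an image named ``IMAGE_NAME`` already exists in the Image
service, the property can be set with the ``openstack`` client:
.. code-block:: console
$ openstack image set --property hw_qemu_guest_agent=yes IMAGE_NAME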
KVM performance tweaks
~~~~~~~~~~~~~~~~~~~~~~
The `VHostNet <http://www.linux-kvm.org/page/VhostNet>`_ kernel module improves
network performance. To load the kernel module, run the following command as
root:
.. code-block:: console
# modprobe vhost_net
Troubleshoot KVM
~~~~~~~~~~~~~~~~
Trying to launch a new virtual machine instance fails with the ``ERROR`` state,
and the following error appears in the ``/var/log/nova/nova-compute.log`` file:
.. code-block:: console
libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions
might not be set correctly. This can happen if you load the KVM module before
you install ``nova-compute``. To check whether the group is set to ``kvm``,
run:
.. code-block:: console
# ls -l /dev/kvm
If it is not set to ``kvm``, run:
.. code-block:: console
# udevadm trigger


@ -0,0 +1,38 @@
======================
LXC (Linux containers)
======================
LXC (also known as Linux containers) is a virtualization technology that works
at the operating system level. This is different from hardware virtualization,
the approach used by other hypervisors such as KVM, Xen, and VMware. LXC (as
currently implemented using libvirt in the Compute service) is not a secure
virtualization technology for multi-tenant environments (specifically,
containers may affect resource quotas for other containers hosted on the same
machine). Additional containment technologies, such as AppArmor, may be used to
provide better isolation between containers, although this is not the case by
default. For all these reasons, the choice of this virtualization technology
is not recommended in production.
If your compute hosts do not have hardware support for virtualization, LXC will
likely provide better performance than QEMU. In addition, if your guests must
access specialized hardware, such as GPUs, this might be easier to achieve with
LXC than other hypervisors.
.. note::
Some OpenStack Compute features might be missing when running with LXC as
the hypervisor. See the `hypervisor support matrix
<http://wiki.openstack.org/HypervisorSupportMatrix>`_ for details.
To enable LXC, ensure the following options are set in ``/etc/nova/nova.conf``
on all hosts running the ``nova-compute`` service.
.. code-block:: ini
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = lxc
On Ubuntu, enable LXC support in OpenStack by installing the
``nova-compute-lxc`` package.
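For example:
.. code-block:: console
# apt-get install nova-compute-lxc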


@ -0,0 +1,56 @@
.. _compute_qemu:
====
QEMU
====
From the perspective of the Compute service, the QEMU hypervisor is
very similar to the KVM hypervisor. Both are controlled through libvirt,
both support the same feature set, and all virtual machine images that
are compatible with KVM are also compatible with QEMU.
The main difference is that QEMU does not support native virtualization.
Consequently, QEMU has worse performance than KVM and is a poor choice
for a production deployment.
The typical use cases for QEMU are:
* Running on older hardware that lacks virtualization support.
* Running the Compute service inside of a virtual machine for
development or testing purposes, where the hypervisor does not
support native virtualization for guests.
To enable QEMU, add these settings to ``nova.conf``:
.. code-block:: ini
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = qemu
For some operations you may also have to install the
:command:`guestmount` utility:
On Ubuntu:
.. code-block:: console
# apt-get install guestmount
On Red Hat Enterprise Linux, Fedora, or CentOS:
.. code-block:: console
# yum install libguestfs-tools
On openSUSE:
.. code-block:: console
# zypper install guestfs-tools
The QEMU hypervisor supports the following virtual machine image formats:
* Raw
* QEMU Copy-on-write (qcow2)
* VMware virtual machine disk format (vmdk)


@ -0,0 +1,39 @@
=========
Virtuozzo
=========
Virtuozzo 7.0.0 (or newer), or its community edition OpenVZ, provides both
types of virtualization: Kernel Virtual Machines and OS Containers. The type
of instance to spawn is chosen depending on the ``hw_vm_type`` property of an
image.
.. note::
Some OpenStack Compute features may be missing when running with Virtuozzo
as the hypervisor. See :doc:`/user/support-matrix` for details.
To enable Virtuozzo Containers, set the following options in
``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service.
.. code-block:: ini
compute_driver = libvirt.LibvirtDriver
force_raw_images = False
[libvirt]
virt_type = parallels
images_type = ploop
connection_uri = parallels:///system
inject_partition = -2
To enable Virtuozzo Virtual Machines, set the following options in
``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service.
.. code-block:: ini
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = parallels
images_type = qcow2
connection_uri = parallels:///system

File diff suppressed because it is too large.


@ -0,0 +1,434 @@
.. _compute_xen_api:
=============================================
XenServer (and other XAPI based Xen variants)
=============================================
This section describes XAPI managed hypervisors, and how to use them with
OpenStack.
Terminology
~~~~~~~~~~~
Xen
---
A hypervisor that provides the fundamental isolation between virtual machines.
Xen is open source (GPLv2) and is managed by `XenProject.org
<http://www.xenproject.org>`_, a cross-industry organization and a Linux
Foundation Collaborative project.
Xen is a component of many different products and projects. The hypervisor
itself is very similar across all these projects, but the way that it is
managed can be different, which can cause confusion if you're not clear which
toolstack you are using. Make sure you know what `toolstack
<http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_ you want before you get
started. If you want to use Xen with libvirt in OpenStack Compute refer to
:doc:`hypervisor-xen-libvirt`.
XAPI
----
XAPI is one of the toolstacks that can control a Xen-based hypervisor.
XAPI's role is similar to libvirt's in the KVM world. The API provided by XAPI
is called XenAPI. To learn more about the provided interface, look at `XenAPI
Object Model Overview <http://docs.vmd.citrix.com/XenServer/
6.2.0/1.0/en_gb/sdk.html#object_model_overview>`_ for definitions of XAPI
specific terms such as SR, VDI, VIF and PIF.
OpenStack has a compute driver that talks to XAPI; therefore, all XAPI-managed
servers can be used with OpenStack.
XenAPI
------
XenAPI is the API provided by XAPI. This name is also used by the python
library that is a client for XAPI. A set of packages to use XenAPI on existing
distributions can be built using the `xenserver/buildroot
<https://github.com/xenserver/buildroot>`_ project.
XenServer
---------
An Open Source virtualization platform that delivers all features needed for
any server and datacenter implementation including the Xen hypervisor and XAPI
for the management. For more information and product downloads, visit
`xenserver.org <http://xenserver.org/>`_.
XCP
---
XCP is not supported anymore. The XCP project recommends that all XCP users
upgrade to the latest version of XenServer by visiting `xenserver.org
<http://xenserver.org/>`_.
Privileged and unprivileged domains
-----------------------------------
A Xen host runs a number of virtual machines, VMs, or domains (the terms are
synonymous on Xen). One of these is in charge of running the rest of the
system, and is known as domain 0, or dom0. It is the first domain to boot after
Xen, and owns the storage and networking hardware, the device drivers, and the
primary control software. Any other VM is unprivileged, and is known as a domU
or guest. All customer VMs are unprivileged, but you should note that on
XenServer (and other XenAPI using hypervisors), the OpenStack Compute service
(``nova-compute``) also runs in a domU. This gives a level of security
isolation between the privileged system software and the OpenStack software
(much of which is customer-facing). This architecture is described in more
detail later.
Paravirtualized versus hardware virtualized domains
---------------------------------------------------
A Xen virtual machine can be paravirtualized (PV) or hardware virtualized
(HVM). This refers to the interaction between Xen, domain 0, and the guest VM's
kernel. PV guests are aware of the fact that they are virtualized and will
co-operate with Xen and domain 0; this gives them better performance
characteristics. HVM guests are not aware of their environment, and the
hardware has to pretend that they are running on an unvirtualized machine. HVM
guests do not need to modify the guest operating system, which is essential
when running Windows.
In OpenStack, customer VMs may run in either PV or HVM mode. However, the
OpenStack domU (that's the one running ``nova-compute``) must be running in PV
mode.
XenAPI deployment architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A basic OpenStack deployment on a XAPI-managed server, assuming that the
network provider is neutron, looks like this:
.. figure:: /figures/xenserver_architecture.png
:width: 100%
Key things to note:
* The hypervisor: Xen
* Domain 0: runs XAPI and some small pieces from OpenStack,
the XAPI plug-ins.
* OpenStack VM: The ``Compute`` service runs in a paravirtualized virtual
machine, on the host under management. Each host runs a local instance of
``Compute``. It is also running neutron plugin-agent
(``neutron-openvswitch-agent``) to perform local vSwitch configuration.
* OpenStack Compute uses the XenAPI Python library to talk to XAPI, and it uses
the Management Network to reach from the OpenStack VM to Domain 0.
Some notes on the networking:
* The above diagram assumes DHCP networking.
* There are three main OpenStack networks:
* Management network: RabbitMQ, MySQL, inter-host communication, and
compute-XAPI communication. Please note that the VM images are downloaded
by the XenAPI plug-ins, so make sure that the OpenStack Image service is
accessible through this network. It usually means binding those services to
the management interface.
* Tenant network: controlled by neutron, this is used for tenant traffic.
* Public network: floating IPs, public API endpoints.
* The networks shown here must be connected to the corresponding physical
networks within the data center. In the simplest case, three individual
physical network cards could be used. It is also possible to use VLANs to
separate these networks. Please note, that the selected configuration must be
in line with the networking model selected for the cloud. (In case of VLAN
networking, the physical channels have to be able to forward the tagged
traffic.)
* With the Networking service, you should enable the Linux bridge in ``Dom0``,
which is used by the Compute service. ``nova-compute`` will create Linux
bridges for security groups, and ``neutron-openvswitch-agent`` on the compute
node will apply security group rules on these Linux bridges. To implement
this, you need to remove ``/etc/modprobe.d/blacklist-bridge*`` in ``Dom0``, as
shown below.
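The removal itself is a single command run in ``Dom0`` (a sketch):
.. code-block:: console
# rm -f /etc/modprobe.d/blacklist-bridge*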
Further reading
~~~~~~~~~~~~~~~
Here are some of the resources available to learn more about Xen:
* `Citrix XenServer official documentation
<http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/>`_
* `What is Xen? by XenProject.org
<http://www.xenproject.org/users/cloud.html>`_
* `Xen Hypervisor project
<http://www.xenproject.org/developers/teams/hypervisor.html>`_
* `Xapi project <http://www.xenproject.org/developers/teams/xapi.html>`_
* `Further XenServer and OpenStack information
<http://wiki.openstack.org/XenServer>`_
Install XenServer
~~~~~~~~~~~~~~~~~
Before you can run OpenStack with XenServer, you must install the hypervisor on
`an appropriate server <http://docs.vmd.citrix.com/XenServer/
6.2.0/1.0/en_gb/installation.html#sys_requirements>`_.
.. note::
Xen is a type 1 hypervisor: When your server starts, Xen is the first
software that runs. Consequently, you must install XenServer before you
install the operating system where you want to run OpenStack code. You then
install ``nova-compute`` into a dedicated virtual machine on the host.
Use the following link to download XenServer's installation media:
* http://xenserver.org/open-source-virtualization-download.html
When you install many servers, you might find it easier to perform `PXE boot
installations <http://docs.vmd.citrix.com/XenServer/6.2.0/
1.0/en_gb/installation.html#pxe_boot_install>`_. You can also package any
post-installation changes that you want to make to your XenServer by following
the instructions of `creating your own XenServer supplemental pack
<http://docs.vmd.citrix.com/
XenServer/6.2.0/1.0/en_gb/supplemental_pack_ddk.html>`_.
.. important::
Make sure you use the EXT type of storage repository (SR). Features that
require access to VHD files (such as copy on write, snapshot and migration)
do not work when you use the LVM SR. Storage repository (SR) is a
XAPI-specific term relating to the physical storage where virtual disks are
stored.
On the XenServer installation screen, choose the :guilabel:`XenDesktop
Optimized` option. If you use an answer file, make sure you use
``srtype="ext"`` in the ``installation`` tag of the answer file.
Post-installation steps
~~~~~~~~~~~~~~~~~~~~~~~
The following steps need to be completed after the hypervisor's installation:
#. For resize and migrate functionality, enable password-less SSH
authentication and set up the ``/images`` directory on dom0.
#. Install the XAPI plug-ins.
#. To support AMI type images, you must set up ``/boot/guest``
symlink/directory in dom0.
#. Create a paravirtualized virtual machine that can run ``nova-compute``.
#. Install and configure ``nova-compute`` in the above virtual machine.
Install XAPI plug-ins
---------------------
When you use a XAPI managed hypervisor, you can install a Python script (or any
executable) on the host side, and execute that through XenAPI. These scripts
are called plug-ins. The OpenStack-related XAPI plug-ins live in the OpenStack
os-xenapi code repository. These plug-ins have to be copied to dom0's
filesystem, to the appropriate directory, where XAPI can find them. It is
important to ensure that the version of the plug-ins is in line with the
OpenStack Compute installation you are using.
The plug-ins should typically be copied from the Nova installation running in
the Compute's DomU (run ``pip show os-xenapi`` to find its location), but if you
want to download the latest version, the following procedure can be used.
**Manually installing the plug-ins**
#. Create temporary files/directories:
.. code-block:: console
$ OS_XENAPI_TARBALL=$(mktemp)
$ OS_XENAPI_SOURCES=$(mktemp -d)
#. Get the source from the openstack.org archives. The example assumes the
latest release is used, and the XenServer host is accessible as xenserver.
Match those parameters to your setup.
.. code-block:: console
$ OS_XENAPI_URL=https://tarballs.openstack.org/os-xenapi/os-xenapi-0.1.1.tar.gz
$ wget -qO "$OS_XENAPI_TARBALL" "$OS_XENAPI_URL"
$ tar xvf "$OS_XENAPI_TARBALL" -C "$OS_XENAPI_SOURCES"
#. Copy the plug-ins to the hypervisor:
.. code-block:: console
$ PLUGINPATH=$(find $OS_XENAPI_SOURCES -path '*/xapi.d/plugins' -type d -print)
$ tar -czf - -C "$PLUGINPATH" ./ |
> ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins
#. Remove temporary files/directories:
.. code-block:: console
$ rm "$OS_XENAPI_TARBALL"
$ rm -rf "$OS_XENAPI_SOURCES"
Prepare for AMI type images
---------------------------
To support AMI type images in your OpenStack installation, you must create the
``/boot/guest`` directory on dom0. One of the OpenStack XAPI plugins will
extract the kernel and ramdisk from AKI and ARI images and put them in that
directory.
OpenStack maintains the contents of this directory and its size should not
increase during normal operation. However, in case of power failures or
accidental shutdowns, some files might be left over. To prevent these files
from filling up dom0's filesystem, set up this directory as a symlink that
points to a subdirectory of the local SR.
Run these commands in dom0 to achieve this setup:
.. code-block:: console
# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels"
# mkdir -p "$LOCALPATH"
# ln -s "$LOCALPATH" /boot/guest
Modify dom0 for resize/migration support
----------------------------------------
To resize servers with XenServer you must:
* Establish a root trust between all hypervisor nodes of your deployment:
To do so, generate an ssh key-pair with the :command:`ssh-keygen` command.
Ensure that the ``authorized_keys`` file on each of your dom0s (located at
``/root/.ssh/authorized_keys``) contains the public key (found in
``/root/.ssh/id_rsa.pub``) of every other dom0. A minimal sketch is shown
after this list.
* Provide a ``/images`` mount point to the dom0 for your hypervisor:
dom0 space is at a premium, so creating a directory in dom0 is potentially
dangerous and likely to fail, especially when you resize large servers. The
least you can do is symlink ``/images`` to your local storage SR. The
following instructions work for an English-based installation of XenServer
and in the case of ext3-based SR (with which the resize functionality is
known to work correctly).
.. code-block:: console
# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
# mkdir -p "$IMG_DIR"
# ln -s "$IMG_DIR" /images
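A minimal sketch of establishing the root SSH trust described in the first
item above, assuming two dom0 hosts reachable as ``xen1`` and ``xen2``
(the host names are placeholders for your deployment):

.. code-block:: console

   # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
   # cat /root/.ssh/id_rsa.pub | ssh root@xen2 "cat >> /root/.ssh/authorized_keys"

Repeat the key exchange for every pair of dom0 hosts so that each node can
reach every other node without a password.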
XenAPI configuration reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following section discusses some commonly changed options when using the
XenAPI driver. The table below provides a complete reference of all
configuration options available for configuring XAPI with OpenStack.
The recommended way to use XAPI with OpenStack is through the XenAPI driver.
To enable the XenAPI driver, add the following configuration options to
``/etc/nova/nova.conf`` and restart ``OpenStack Compute``:
.. code-block:: ini
compute_driver = xenapi.XenAPIDriver
[xenserver]
connection_url = http://your_xenapi_management_ip_address
connection_username = root
connection_password = your_password
ovs_integration_bridge = br-int
vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
These connection details are used by the OpenStack Compute service to contact your
hypervisor and are the same details you use to connect XenCenter, the XenServer
management console, to your XenServer node.
.. note::
The ``connection_url`` is generally the management network IP
address of the XenServer.
Networking configuration
------------------------
The Compute node runs ``neutron-openvswitch-agent``, which manages dom0's OVS.
Refer to the Networking `openvswitch_agent.ini.sample
<https://github.com/openstack/openstack-manuals/blob/master/doc/config-reference/source/samples/neutron/openvswitch_agent.ini.sample>`_
for details; however, there are several specific items to look out for.
.. code-block:: ini
[agent]
minimize_polling = False
root_helper_daemon = xenapi_root_helper
[ovs]
of_listen_address = management_ip_address
ovsdb_connection = tcp:your_xenapi_management_ip_address:6640
bridge_mappings = <physical_network>:<physical_bridge>, ...
integration_bridge = br-int
[xenapi]
connection_url = http://your_xenapi_management_ip_address
connection_username = root
connection_password = your_pass_word
.. note::
The ``ovsdb_connection`` is the connection string for the native OVSDB
backend. You need to enable port 6640 in dom0.
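A hedged sketch of opening this port in dom0, assuming dom0 filters traffic
with iptables (how the rule is persisted depends on your XenServer release):

.. code-block:: console

   # iptables -I INPUT -p tcp --dport 6640 -j ACCEPT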
Agent
-----
The agent is a piece of software that runs on the instances, and communicates
with OpenStack. In the case of the XenAPI driver, the agent communicates with
OpenStack through XenStore (see `the Xen Project Wiki
<http://wiki.xenproject.org/wiki/XenStore>`_ for more information on XenStore).
If you don't have the guest agent on your VMs, it takes a long time for
OpenStack Compute to detect that the VM has successfully started. Generally a
large timeout is required for Windows instances, but you may want to adjust
``agent_version_timeout`` within the ``[xenserver]`` section.
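For example, a minimal sketch of raising the timeout in ``nova.conf`` (the
value shown is only illustrative):

.. code-block:: ini

   [xenserver]
   agent_version_timeout = 600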
VNC proxy address
-----------------
Assuming you are talking to XAPI through a management network, and XenServer
is on the address 10.10.1.34, specify the same address for the VNC proxy
address: ``vncserver_proxyclient_address=10.10.1.34``
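A minimal sketch of the corresponding ``nova.conf`` entry (in recent releases
this option lives in the ``[vnc]`` section; adjust if your release differs):

.. code-block:: ini

   [vnc]
   vncserver_proxyclient_address = 10.10.1.34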
Storage
-------
You can specify which Storage Repository to use with nova by setting the
following flag. To use the local storage set up by the default installer:
.. code-block:: ini
sr_matching_filter = "other-config:i18n-key=local-storage"
Another alternative is to use the "default" storage (for example if you have
attached NFS or any other shared storage):
.. code-block:: ini
sr_matching_filter = "default-sr:true"
Image upload in TGZ compressed format
-------------------------------------
To start uploading ``tgz`` compressed raw disk images to the Image service,
configure ``xenapi_image_upload_handler`` by replacing ``GlanceStore`` with
``VdiThroughDevStore``.
.. code-block:: ini
xenapi_image_upload_handler=nova.virt.xenapi.image.vdi_through_dev.VdiThroughDevStore
As opposed to:
.. code-block:: ini
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore

@ -0,0 +1,249 @@
===============
Xen via libvirt
===============
OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be
integrated with OpenStack Compute via the `libvirt <http://libvirt.org/>`_
`toolstack <http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_ or via the `XAPI
<http://xenproject.org/developers/teams/xapi.html>`_ `toolstack
<http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_. This section describes how
to set up OpenStack Compute with Xen and libvirt. For information on how to
set up Xen with XAPI refer to :doc:`hypervisor-xen-api`.
Installing Xen with libvirt
~~~~~~~~~~~~~~~~~~~~~~~~~~~
At this stage we recommend using the baseline that we use for the `Xen Project
OpenStack CI Loop
<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt>`_, which
contains the most recent stability fixes to both Xen and libvirt.
`Xen 4.5.1
<http://www.xenproject.org/downloads/xen-archives/xen-45-series/xen-451.html>`_
(or newer) and `libvirt 1.2.15 <http://libvirt.org/sources/>`_ (or newer)
contain the minimum required OpenStack improvements for Xen. Although libvirt
1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary
Xen changes have also been backported to the Xen 4.4.3 stable branch. Please
check with the Linux and FreeBSD distros you are intending to use as `Dom 0
<http://wiki.xenproject.org/wiki/Category:Host_Install>`_, whether the relevant
version of Xen and libvirt are available as installable packages.
The latest releases of Xen and libvirt packages that fulfil the above minimum
requirements for the various openSUSE distributions can always be found and
installed from the `Open Build Service
<https://build.opensuse.org/project/show/Virtualization>`_ Virtualization
project. To install these latest packages, add the Virtualization repository
to your software management stack and get the newest packages from there. More
information about the latest Xen and libvirt packages are available `here
<https://build.opensuse.org/package/show/Virtualization/xen>`__ and `here
<https://build.opensuse.org/package/show/Virtualization/libvirt>`__.
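A hedged sketch of adding the repository on openSUSE with :command:`zypper`
(the repository URL depends on your openSUSE release and is shown here only
as an example):

.. code-block:: console

   # zypper addrepo https://download.opensuse.org/repositories/Virtualization/openSUSE_Leap_42.3/Virtualization.repo
   # zypper refresh
   # zypper install xen libvirt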
Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen Package
**4.4.1-0ubuntu0.14.04.4** (Xen 4.4.1) and apply the patches outlined `here
<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt#Baseline>`__.
You can also use the Ubuntu LTS 14.04 libvirt package **1.2.2
libvirt_1.2.2-0ubuntu13.1.7** as baseline and update it to libvirt version
1.2.15, or 1.2.14 with the patches outlined `here
<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt#Baseline>`__
applied. Note that this will require rebuilding these packages partly from
source.
For further information and latest developments, you may want to consult the
Xen Project's `mailing lists for OpenStack related issues and questions
<http://lists.xenproject.org/cgi-bin/mailman/listinfo/wg-openstack>`_.
Configuring Xen with libvirt
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable Xen via libvirt, ensure the following options are set in
``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service.
.. code-block:: ini
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = xen
Additional configuration options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the following as a guideline for configuring Xen for use in OpenStack:
#. **Dom0 memory**: Set it between 1GB and 4GB by adding the following
parameter to the Xen Boot Options in the `grub.conf <http://
xenbits.xen.org/docs/unstable/misc/xen-command-line.html>`_ file.
.. code-block:: ini
dom0_mem=1024M
.. note::
The above memory limits are suggestions and should be based on the
available compute host resources. For large hosts that will run many
hundreds of instances, the suggested values may need to be higher.
.. note::
The location of the grub.conf file depends on the host Linux distribution
that you are using. Please refer to the distro documentation for more
details (see `Dom 0 <http://wiki.xenproject.org
/wiki/Category:Host_Install>`_ for more resources).
#. **Dom0 vcpus**: Set the virtual CPUs to 4 and employ CPU pinning by adding
the following parameters to the Xen Boot Options in the `grub.conf
<http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html>`_ file.
.. code-block:: ini
dom0_max_vcpus=4 dom0_vcpus_pin
.. note::
Note that the above virtual CPU limits are suggestions and should be
based on the available compute host resources. For large hosts that will
run many hundreds of instances, the suggested values may need to be
higher.
#. **PV vs HVM guests**: A Xen virtual machine can be paravirtualized (PV) or
hardware virtualized (HVM). The virtualization mode determines the
interaction between Xen, Dom 0, and the guest VM's kernel. PV guests are
aware of the fact that they are virtualized and will co-operate with Xen and
Dom 0. The choice of virtualization mode determines performance
characteristics. For an overview of Xen virtualization modes, see `Xen Guest
Types <http://wiki.xen.org/wiki/Xen_Overview#Guest_Types>`_.
In OpenStack, customer VMs may run in either PV or HVM mode. The mode is a
property of the operating system image used by the VM, and is changed by
adjusting the image metadata stored in the Image service. The image
metadata can be changed using the :command:`openstack` commands.
To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use
:command:`openstack` to set the ``vm_mode`` property to ``hvm``:
.. code-block:: console
$ openstack image set --property vm_mode=hvm IMAGE
To choose PV mode, which is supported by NetBSD, FreeBSD and Linux, use the
following command:
.. code-block:: console
$ openstack image set --property vm_mode=xen IMAGE
.. note::
The default for virtualization mode in nova is PV mode.
#. **Image formats**: Xen supports raw, qcow2 and vhd image formats. For more
information on image formats, refer to the `OpenStack Virtual Image Guide
<https://docs.openstack.org/image-guide/introduction.html>`__ and the
`Storage Options Guide on the Xen Project Wiki
<http://wiki.xenproject.org/wiki/Storage_options>`_.
#. **Image metadata**: In addition to the ``vm_mode`` property discussed above,
the ``hypervisor_type`` property is another important component of the image
metadata, especially if your cloud contains mixed hypervisor compute nodes.
Setting the ``hypervisor_type`` property allows the nova scheduler to select
a compute node running the specified hypervisor when launching instances of
the image. Image metadata such as ``vm_mode``, ``hypervisor_type``,
architecture, and others can be set when importing the image to the Image
service. The metadata can also be changed using the :command:`openstack`
commands:
.. code-block:: console
$ openstack image set --property hypervisor_type=xen --property vm_mode=hvm IMAGE
For more information on image metadata, refer to the `OpenStack Virtual
Image Guide <https://docs.openstack.org/image-guide/image-metadata.html>`__.
#. **Libguestfs file injection**: OpenStack compute nodes can use `libguestfs
<http://libguestfs.org/>`_ to inject files into an instance's image prior to
launching the instance. libguestfs uses libvirt's QEMU driver to start a
qemu process, which is then used to inject files into the image. When using
libguestfs for file injection, the compute node must have the libvirt qemu
driver installed, in addition to the Xen driver. In RPM based distributions,
the qemu driver is provided by the ``libvirt-daemon-qemu`` package. In
Debian and Ubuntu, the qemu driver is provided by the ``libvirt-bin``
package.
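A hedged sketch of installing the qemu driver alongside the Xen driver, using
the package names from the item above (exact package names may vary between
distribution releases):

.. code-block:: console

   # yum install libvirt-daemon-qemu      # RPM-based distributions
   # apt-get install libvirt-bin          # Debian and Ubuntu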
Troubleshoot Xen with libvirt
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Important log files**: When an instance fails to start, or when you come
across other issues, you should first consult the following log files:
* ``/var/log/nova/nova-compute.log``
* ``/var/log/libvirt/libxl/libxl-driver.log``
* ``/var/log/xen/qemu-dm-${instancename}.log``
* ``/var/log/xen/xen-hotplug.log``
* ``/var/log/xen/console/guest-${instancename}`` (to enable it, see `Enabling Guest
Console Logs
<http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen#Guest_console_logs>`_)
* Host Console Logs (read `Enabling and Retrieving Host Console Logs
<http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen#Host_console_logs>`_).
If you need further help you can ask questions on the mailing lists `xen-users@
<http://lists.xenproject.org/cgi-bin/mailman/listinfo/ xen-users>`_,
`wg-openstack@ <http://lists.xenproject.org/cgi-bin/mailman/
listinfo/wg-openstack>`_ or `raise a bug <http://wiki.xen.org/wiki/
Reporting_Bugs_against_Xen>`_ against Xen.
Known issues
~~~~~~~~~~~~
* **Networking**: Xen via libvirt is currently only supported with
nova-network. Fixes for a number of bugs are currently being worked on to
make sure that Xen via libvirt will also work with OpenStack Networking
(neutron).
.. todo:: Is this still true?
* **Live migration**: Live migration is supported in the libvirt libxl driver
since version 1.2.5. However, there were a number of issues when used with
OpenStack, in particular with libvirt migration protocol compatibility. It is
worth mentioning that libvirt 1.3.0 addresses most of these issues. We do
however recommend using libvirt 1.3.2, which is fully supported and tested as
part of the Xen Project CI loop. It addresses live migration monitoring
related issues and adds support for peer-to-peer migration mode, which nova
relies on.
* **Live migration monitoring**: On compute nodes running Kilo or later, live
migration monitoring relies on libvirt APIs that are only implemented from
libvirt version 1.3.1 onwards. When attempting to live migrate, the migration
monitoring thread would crash and leave the instance state as "MIGRATING". If
you experience such an issue and you are running on a version released before
libvirt 1.3.1, make sure you backport libvirt commits ad71665 and b7b4391
from upstream.
Additional information and resources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following section contains links to other useful resources.
* `wiki.xenproject.org/wiki/OpenStack <http://wiki.xenproject.org/wiki/
OpenStack>`_ - OpenStack Documentation on the Xen Project wiki
* `wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt
<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt>`_ -
Information about the Xen Project OpenStack CI Loop
* `wiki.xenproject.org/wiki/OpenStack_via_DevStack
<http://wiki.xenproject.org/wiki/OpenStack_via_DevStack>`_ - How to set up
OpenStack via DevStack
* `Mailing lists for OpenStack related issues and questions
<http://lists.xenproject.org/cgi-bin/mailman/listinfo/wg-openstack>`_ - This
list is dedicated to coordinating bug fixes and issues across Xen, libvirt
and OpenStack and the CI loop.

@ -0,0 +1,63 @@
===========
Hypervisors
===========
.. toctree::
:maxdepth: 1
hypervisor-basics.rst
hypervisor-kvm.rst
hypervisor-qemu.rst
hypervisor-xen-api.rst
hypervisor-xen-libvirt.rst
hypervisor-lxc.rst
hypervisor-vmware.rst
hypervisor-hyper-v.rst
hypervisor-virtuozzo.rst
OpenStack Compute supports many hypervisors, which might make it difficult for
you to choose one. Most installations use only one hypervisor. However, you
can use :ref:`ComputeFilter` and :ref:`ImagePropertiesFilter` to schedule
different hypervisors within the same installation. The following links help
you choose a hypervisor. See :doc:`/user/support-matrix` for a detailed list
of features and support across the hypervisors.
The following hypervisors are supported:
* `KVM`_ - Kernel-based Virtual Machine. The virtual disk formats that it
supports is inherited from QEMU since it uses a modified QEMU program to
launch the virtual machine. The supported formats include raw images, the
qcow2, and VMware formats.
* `LXC`_ - Linux Containers (through libvirt), used to run Linux-based virtual
machines.
* `QEMU`_ - Quick EMUlator, generally only used for development purposes.
* `VMware vSphere`_ 5.1.0 and newer - Runs VMware-based Linux and Windows
images through a connection with a vCenter server.
* `Xen (using libvirt) <Xen>`_ - Xen Project Hypervisor using libvirt as
management interface into ``nova-compute`` to run Linux, Windows, FreeBSD and
NetBSD virtual machines.
* `XenServer`_ - XenServer, Xen Cloud Platform (XCP) and other XAPI-based Xen
  variants run Linux or Windows virtual machines. You must install the
``nova-compute`` service in a para-virtualized VM.
* `Hyper-V`_ - Server virtualization with Microsoft Hyper-V, used to run
Windows, Linux, and FreeBSD virtual machines. Runs ``nova-compute`` natively
on the Windows virtualization platform.
* `Virtuozzo`_ 7.0.0 and newer - OS Containers and Kernel-based Virtual
Machines supported via libvirt ``virt_type=parallels``. The supported formats
include ploop and qcow2 images.
.. _KVM: http://www.linux-kvm.org/page/Main_Page
.. _LXC: https://linuxcontainers.org/
.. _QEMU: http://wiki.qemu.org/Manual
.. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor
.. _Xen: http://www.xenproject.org
.. _XenServer: http://xenserver.org
.. _Hyper-V: http://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx
.. _Virtuozzo: https://virtuozzo.com/products/#product-virtuozzo/

@ -0,0 +1,30 @@
===============
Configuration
===============
To configure your Compute installation, you must define configuration options
in these files:
* ``nova.conf`` contains most of the Compute configuration options and resides
in the ``/etc/nova`` directory.
* ``api-paste.ini`` defines Compute limits and resides in the ``/etc/nova``
directory.
* Configuration files for related services, such as the Image and Identity
services.
A list of config options based on different topics can be found below:
.. toctree::
:maxdepth: 1
/admin/configuration/api.rst
/admin/configuration/resize.rst
/admin/configuration/fibre-channel.rst
/admin/configuration/iscsi-offload.rst
/admin/configuration/hypervisors.rst
/admin/configuration/schedulers.rst
/admin/configuration/cells.rst
/admin/configuration/logs.rst
/admin/configuration/samples/index.rst

@ -0,0 +1,73 @@
===============================================
Configuring iSCSI interface and offload support
===============================================
Compute supports open-iscsi iSCSI interfaces for offload cards. Offload
hardware must be present and configured on every compute node where offload is
desired. Once an open-iscsi interface is configured, the iface name
(``iface.iscsi_ifacename``) should be passed to libvirt via the ``iscsi_iface``
parameter for use. All iSCSI sessions will be bound to this iSCSI interface.
Currently supported transports (``iface.transport_name``) are ``be2iscsi``,
``bnx2i``, ``cxgb3i``, ``cxgb4i``, ``qla4xxx``, ``ocs``. Configuration changes
are required on the compute node only.
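For example, a minimal sketch of pointing nova at a configured iface in
``nova.conf`` (the iface name is an example taken from the ``iscsiadm``
output shown later in this document):

.. code-block:: ini

   [libvirt]
   iscsi_iface = cxgb4i.00:07:43:28:b2:58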
iSER is supported using the separate iSER LibvirtISERVolumeDriver and will be
rejected if used via the ``iscsi_iface`` parameter.
iSCSI iface configuration
~~~~~~~~~~~~~~~~~~~~~~~~~
* Note the distinction between the transport name (``iface.transport_name``)
and iface name (``iface.iscsi_ifacename``). The actual iface name must be
specified via the ``iscsi_iface`` parameter to libvirt for offload to work.
* The default name for an iSCSI iface (open-iscsi parameter
``iface.iscsi_ifacename``) is in the format transport_name.hwaddress when
generated by ``iscsiadm``.
* ``iscsiadm`` can be used to view and generate current iface configuration.
Every network interface that supports an open-iscsi transport can have one or
more iscsi ifaces associated with it. If no ifaces have been configured for a
network interface supported by an open-iscsi transport, this command will
create a default iface configuration for that network interface. For example:
.. code-block:: console
# iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,<empty>,<empty>
The output is in the format::
iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname
* Individual iface configuration can be viewed via
.. code-block:: console
# iscsiadm -m iface -I IFACE_NAME
# BEGIN RECORD 2.0-873
iface.iscsi_ifacename = cxgb4i.00:07:43:28:b2:58
iface.net_ifacename = <empty>
iface.ipaddress = 102.50.50.80
iface.hwaddress = 00:07:43:28:b2:58
iface.transport_name = cxgb4i
iface.initiatorname = <empty>
# END RECORD
Configuration can be updated as desired via
.. code-block:: console
# iscsiadm -m iface -I IFACE_NAME --op=update -n iface.SETTING -v VALUE
* All iface configurations need a minimum of ``iface.iscsi_ifacename``,
``iface.transport_name`` and ``iface.hwaddress`` to be correctly configured
to work. Some transports may require ``iface.ipaddress`` and
``iface.net_ifacename`` as well to bind correctly.
Detailed configuration instructions can be found at
http://www.open-iscsi.org/docs/README.

@ -0,0 +1,47 @@
=================
Compute log files
=================
The corresponding log file of each Compute service is stored in the
``/var/log/nova/`` directory of the host on which each service runs.
.. list-table:: Log files used by Compute services
:widths: 35 35 30
:header-rows: 1
* - Log file
- Service name (CentOS/Fedora/openSUSE/Red Hat Enterprise
Linux/SUSE Linux Enterprise)
- Service name (Ubuntu/Debian)
* - ``nova-api.log``
- ``openstack-nova-api``
- ``nova-api``
* - ``nova-cert.log`` [#a]_
- ``openstack-nova-cert``
- ``nova-cert``
* - ``nova-compute.log``
- ``openstack-nova-compute``
- ``nova-compute``
* - ``nova-conductor.log``
- ``openstack-nova-conductor``
- ``nova-conductor``
* - ``nova-consoleauth.log``
- ``openstack-nova-consoleauth``
- ``nova-consoleauth``
* - ``nova-network.log`` [#b]_
- ``openstack-nova-network``
- ``nova-network``
* - ``nova-manage.log``
- ``nova-manage``
- ``nova-manage``
* - ``nova-scheduler.log``
- ``openstack-nova-scheduler``
- ``nova-scheduler``
.. rubric:: Footnotes
.. [#a] The X509 certificate service (``openstack-nova-cert``/``nova-cert``)
        is only required by the EC2 API of the Compute service.
.. [#b] The ``nova`` network service (``openstack-nova-network``/
``nova-network``) only runs in deployments that are not configured
to use the Networking service (``neutron``).

@ -0,0 +1,26 @@
================
Configure resize
================
Resize (or Server resize) is the ability to change the flavor of a server, thus
allowing it to upscale or downscale according to user needs. For this feature
to work properly, you might need to configure some underlying virt layers.
KVM
~~~
Resize on KVM is currently implemented by transferring the images between
compute nodes over SSH. For KVM, you need hostnames to resolve properly and
passwordless SSH access between your compute hosts. Direct access from one
compute host to another is needed to copy the VM file across.
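A quick hedged check from one compute host to another (the host name
``compute2`` and the ``nova`` user are examples; use the user that runs
``nova-compute`` in your deployment):

.. code-block:: console

   $ getent hosts compute2          # the peer's hostname must resolve
   $ ssh nova@compute2 true         # must succeed without a password prompt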
Cloud end users can find out how to resize a server by reading the `OpenStack
End User Guide <https://docs.openstack.org/user-guide/
cli_change_the_size_of_your_server.html>`_.
XenServer
~~~~~~~~~
To get resize to work with XenServer (and XCP), you need to establish a root
trust between all hypervisor nodes and provide an ``/images`` mount point to
your hypervisor's dom0.

@ -0,0 +1,8 @@
=============
api-paste.ini
=============
The Compute service stores its API configuration settings in the
``api-paste.ini`` file.
.. literalinclude:: /../../etc/nova/api-paste.ini

@ -0,0 +1,12 @@
==========================================
Compute service sample configuration files
==========================================
Files in this section can be found in ``/etc/nova``.
.. toctree::
:maxdepth: 2
api-paste.ini.rst
policy.yaml.rst
rootwrap.conf.rst

@ -0,0 +1,9 @@
===========
policy.yaml
===========
The ``policy.yaml`` file defines additional access controls
that apply to the Compute service.
.. literalinclude:: /_static/nova.policy.yaml.sample
:language: yaml

@ -0,0 +1,13 @@
=============
rootwrap.conf
=============
The ``rootwrap.conf`` file defines configuration values
used by the rootwrap script when the Compute service needs
to escalate its privileges to those of the root user.
It is also possible to disable the root wrapper and default to sudo only. To
do so, configure the ``disable_rootwrap`` option in the ``[workarounds]``
section of the ``nova.conf`` configuration file.
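A minimal sketch of the ``nova.conf`` setting described above:

.. code-block:: ini

   [workarounds]
   disable_rootwrap = True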
.. literalinclude:: /../../etc/nova/rootwrap.conf

File diff suppressed because it is too large
@ -167,6 +167,10 @@ Reference Material
* **Configuration**:
* :doc:`Configuration Guide </admin/configuration/index>`: detailed
configuration guides for various parts of your Nova system. Helpful
reference for setting up specific hypervisor backends.
* :doc:`Config Reference </configuration/config>`: a complete reference of all
configuration options available in the nova.conf file.
@ -210,6 +214,7 @@ looking parts of our architecture. These are collected below.
:hidden:
admin/index
admin/configuration/index
cli/index
configuration/config
configuration/sample-config