Merge "doc: Add additional content to admin guide"

This commit is contained in:
Jenkins 2017-08-08 19:06:37 +00:00 committed by Gerrit Code Review
commit 3f12c8badd
12 changed files with 1335 additions and 1 deletion


@@ -0,0 +1,70 @@
=========================================
Select hosts where instances are launched
=========================================
With the appropriate permissions, you can select which host instances are
launched on and which roles can boot instances on this host.
#. To select the host where instances are launched, use the
``--availability-zone ZONE:HOST:NODE`` parameter on the :command:`openstack
server create` command.
For example:
.. code-block:: console
$ openstack server create --image IMAGE --flavor m1.tiny \
--key-name KEY --availability-zone ZONE:HOST:NODE \
--nic net-id=UUID SERVER
.. note::
``HOST`` and ``NODE`` are optional parameters. If you omit one or both,
use ``--availability-zone ZONE::NODE``, ``--availability-zone ZONE:HOST``,
or ``--availability-zone ZONE`` instead, as shown below.
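For example, a sketch that omits NODE, assuming a hypothetical ``nova`` zone and ``compute01`` host:
.. code-block:: console
$ openstack server create --image IMAGE --flavor m1.tiny \
--key-name KEY --availability-zone nova:compute01 \
--nic net-id=UUID SERVER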
#. To specify which roles can launch an instance on a specified host, enable
the ``create:forced_host`` option in the ``policy.json`` file. By default,
this option is enabled only for the admin role. If the command returns
``Forbidden (HTTP 403)``, you are not using admin credentials.
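As a sketch, the corresponding rule in a ``policy.json`` file of this era looks roughly like the following; the exact policy key can vary by release:
.. code-block:: json
{
    "os_compute_api:servers:create:forced_host": "rule:admin_api"
}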
#. To view the list of valid zones, use the :command:`openstack availability
zone list` command.
.. code-block:: console
$ openstack availability zone list
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| zone1 | available |
| zone2 | available |
+-----------+-------------+
#. To view the list of valid compute hosts, use the :command:`openstack host
list` command.
.. code-block:: console
$ openstack host list
+----------------+-------------+----------+
| Host Name | Service | Zone |
+----------------+-------------+----------+
| compute01 | compute | nova |
| compute02 | compute | nova |
+----------------+-------------+----------+
#. To view the list of valid compute nodes, use the :command:`openstack
hypervisor list` command.
.. code-block:: console
$ openstack hypervisor list
+----+---------------------+
| ID | Hypervisor Hostname |
+----+---------------------+
| 1 | server2 |
| 2 | server3 |
| 3 | server4 |
+----+---------------------+


@@ -0,0 +1,49 @@
==================
Evacuate instances
==================
If a hardware malfunction or other error causes a cloud compute node to fail,
you can evacuate instances to make them available again. You can optionally
include the target host on the :command:`nova evacuate` command. If you omit
the host, the scheduler chooses the target host.
To preserve user data on the server disk, configure shared storage on the
target host. When you evacuate the instance, Compute detects whether shared
storage is available on the target host. You must also verify that the
current VM host is not operational; otherwise, the evacuation fails.
#. To find a host for the evacuated instance, list all hosts:
.. code-block:: console
$ openstack host list
#. Evacuate the instance. You can use the ``--password PWD`` option to pass the
instance password to the command. If you do not specify a password, the
command generates and prints one after it finishes successfully. The
following command evacuates a server from a failed host to ``HOST_B``.
.. code-block:: console
$ nova evacuate EVACUATED_SERVER_NAME HOST_B
The command rebuilds the instance from the original image or volume and
returns a password. The command preserves the original configuration, which
includes the instance ID, name, uid, IP address, and so on.
.. code-block:: console
+-----------+--------------+
| Property | Value |
+-----------+--------------+
| adminPass | kRAJpErnT4xZ |
+-----------+--------------+
#. To preserve the user disk data on the evacuated server, deploy Compute with
a shared file system. To configure your system, see
:ref:`section_configuring-compute-migrations`. The following example does
not change the password.
.. code-block:: console
$ nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage


@@ -0,0 +1,155 @@
==============
Manage flavors
==============
.. todo:: Merge this into 'flavors'
In OpenStack, flavors define the compute, memory, and storage capacity of nova
computing instances. To put it simply, a flavor is an available hardware
configuration for a server. It defines the *size* of a virtual server that can
be launched.
.. note::
Flavors can also determine on which compute host a flavor can be used to
launch an instance. For information about customizing flavors, refer to
:doc:`flavors`.
A flavor consists of the following parameters:
Flavor ID
Unique ID (integer or UUID) for the new flavor. If specifying 'auto', a UUID
will be automatically generated.
Name
Name for the new flavor.
VCPUs
Number of virtual CPUs to use.
Memory MB
Amount of RAM to use (in megabytes).
Root Disk GB
Amount of disk space (in gigabytes) to use for the root (``/``) partition.
Ephemeral Disk GB
Amount of disk space (in gigabytes) to use for the ephemeral partition. If
unspecified, the value is ``0`` by default. Ephemeral disks offer machine
local disk storage linked to the lifecycle of a VM instance. When a VM is
terminated, all data on the ephemeral disk is lost. Ephemeral disks are not
included in any snapshots.
Swap
Amount of swap space (in megabytes) to use. If unspecified, the value is
``0`` by default.
RXTX Factor
Optional property that allows servers to be created with a different
bandwidth cap by applying the RXTX factor. The default value is ``1.0``.
That is, the new bandwidth is the same as that of the attached network. The
RXTX Factor is available only for Xen or NSX based systems.
Is Public
Boolean value that defines whether the flavor is available to all users. Defaults
to ``True``.
Extra Specs
Key and value pairs that define on which compute nodes a flavor can run.
These pairs must match corresponding pairs on the compute nodes. They can be
used to implement special resources, such as flavors that run only on compute
nodes with GPU hardware.
As of Newton, there are no default flavors. The following table lists the
default flavors for Mitaka and earlier.
============ ========= =============== ===============
Flavor VCPUs Disk (in GB) RAM (in MB)
============ ========= =============== ===============
m1.tiny 1 1 512
m1.small 1 20 2048
m1.medium 2 40 4096
m1.large 4 80 8192
m1.xlarge 8 160 16384
============ ========= =============== ===============
You can create and manage flavors with the :command:`openstack flavor` commands
provided by the ``python-openstackclient`` package.
Create a flavor
~~~~~~~~~~~~~~~
#. List flavors to show the ID and name, the amount of memory, the amount of
disk space for the root partition and for the ephemeral partition, the swap,
and the number of virtual CPUs for each flavor:
.. code-block:: console
$ openstack flavor list
#. To create a flavor, specify a name, ID, RAM size, disk size, and the number
of VCPUs for the flavor, as follows:
.. code-block:: console
$ openstack flavor create FLAVOR_NAME --id FLAVOR_ID \
--ram RAM_IN_MB --disk ROOT_DISK_IN_GB --vcpus NUMBER_OF_VCPUS
.. note::
The flavor ID is a unique integer or UUID. If you specify ``auto``, a UUID
is generated automatically.
Here is an example with additional optional parameters filled in that
creates a public ``m1.extra_tiny`` flavor that automatically gets an ID
assigned, with 256 MB memory, no disk space, and one VCPU. The rxtx-factor
indicates the slice of bandwidth that the instances with this flavor can use
(through the Virtual Interface (vif) creation in the hypervisor):
.. code-block:: console
$ openstack flavor create --public m1.extra_tiny --id auto \
--ram 256 --disk 0 --vcpus 1 --rxtx-factor 1
#. If an individual user or group of users needs a custom flavor that you do
not want other projects to have access to, you can change the flavor's
access to make it a private flavor. See `Private Flavors in the OpenStack
Operations Guide
<https://docs.openstack.org/ops-guide/ops-user-facing-operations.html#private-flavors>`_.
For a list of optional parameters, run this command:
.. code-block:: console
$ openstack help flavor create
#. After you create a flavor, assign it to a project by specifying the flavor
name or ID and the project ID:
.. code-block:: console
$ nova flavor-access-add FLAVOR TENANT_ID
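For example, a sketch that grants a hypothetical private flavor to a hypothetical project ID:
.. code-block:: console
$ nova flavor-access-add custom_flavor 5669caad86a04256994cdf755df4d3c1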
#. In addition, you can set or unset ``extra_spec`` key/value pairs on an
existing flavor. The ``extra_spec`` metadata keys can influence the instance
directly when it is launched. For example, if a flavor sets the extra spec
``quota:vif_outbound_peak=65536``, the instance's outbound peak bandwidth I/O
should be less than or equal to 512 Mbps. Several aspects of an instance can
be tuned this way, including *CPU limits*, *Disk tuning*, *Bandwidth I/O*,
*Watchdog behavior*, and *Random-number generator*. For information about
supported metadata keys, see :doc:`flavors`.
For a list of optional parameters, run this command:
.. code-block:: console
$ nova help flavor-key
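As a sketch, the bandwidth cap from the example above could be applied to the ``m1.extra_tiny`` flavor created earlier:
.. code-block:: console
$ nova flavor-key m1.extra_tiny set quota:vif_outbound_peak=65536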
Delete a flavor
~~~~~~~~~~~~~~~
Delete a specified flavor, as follows:
.. code-block:: console
$ openstack flavor delete FLAVOR_ID


@@ -14,7 +14,36 @@ operating system, and exposes functionality over a web-based API.
.. toctree::
:maxdepth: 2
admin-password-injection.rst
adv-config.rst
arch.rst
availability-zones.rst
configuring-migrations.rst
cpu-topologies.rst
default-ports.rst
euca2ools.rst
evacuate.rst
flavors2.rst
flavors.rst
huge-pages.rst
live-migration-usage.rst
manage-logs.rst
manage-the-cloud.rst
manage-users.rst
manage-volumes.rst
migration.rst
networking-nova.rst
node-down.rst
numa.rst
pci-passthrough.rst
quotas2.rst
quotas.rst
remote-console-access.rst
root-wrap-reference.rst
security-groups.rst
security.rst
service-groups.rst
services.rst
ssh-configuration.rst
support-compute.rst
system-admin.rst


@@ -0,0 +1,79 @@
=================
Migrate instances
=================
When you want to move an instance from one compute host to another, you can use
the :command:`openstack server migrate` command. The scheduler chooses the
destination compute host based on its settings. This process does not assume
that the instance has shared storage available on the target host. If you are
using SSH tunneling, you must ensure that each node is configured with SSH key
authentication so that the Compute service can use SSH to move disks to other
nodes. For more information, see :ref:`cli-os-migrate-cfg-ssh`.
#. To list the VMs you want to migrate, run:
.. code-block:: console
$ openstack server list
#. Use the :command:`openstack server migrate` command.
.. code-block:: console
$ openstack server migrate --live TARGET_HOST VM_INSTANCE
#. To migrate an instance and watch the status, use this example script:
.. code-block:: bash
#!/bin/bash
# Provide usage
usage() {
echo "Usage: $0 VM_ID"
exit 1
}
[[ $# -eq 0 ]] && usage
# Migrate the VM to an alternate hypervisor
echo -n "Migrating instance to alternate host"
VM_ID=$1
openstack server migrate $VM_ID
VM_OUTPUT=$(openstack server show $VM_ID)
VM_STATUS=$(echo "$VM_OUTPUT" | grep status | awk '{print $4}')
while [[ "$VM_STATUS" != "VERIFY_RESIZE" ]]; do
echo -n "."
sleep 2
VM_OUTPUT=$(openstack server show $VM_ID)
VM_STATUS=$(echo "$VM_OUTPUT" | grep status | awk '{print $4}')
done
nova resize-confirm $VM_ID
echo " instance migrated and resized."
echo;
# Show the details for the VM
echo "Updated instance details:"
openstack server show $VM_ID
# Pause to allow users to examine VM details
read -p "Pausing, press <enter> to exit."
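Assuming the script is saved as ``migrate-vm.sh`` (a hypothetical name), it can be invoked with the instance ID:
.. code-block:: console
$ bash migrate-vm.sh VM_ID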
.. note::
If you see the following error, it means you are either running the command
with the wrong credentials, such as a non-admin user, or the ``policy.json``
file prevents migration for your user::
ERROR (Forbidden): Policy doesn't allow compute_extension:admin_actions:migrate to be performed. (HTTP 403)
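To allow additional roles to migrate instances, the rule named in the error can be relaxed in ``policy.json``; a sketch, assuming a hypothetical ``operator`` role:
.. code-block:: json
{
    "compute_extension:admin_actions:migrate": "rule:admin_api or role:operator"
}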
.. note::
If you see an error similar to the following message, SSH tunneling was not
set up between the compute nodes::
ProcessExecutionError: Unexpected error while running command.
Stderr: u'Host key verification failed.\r\n'
The instance is booted on a new host but preserves its configuration,
including its instance ID, name, IP address, metadata, and other properties.

doc/source/admin/numa.rst Normal file

@@ -0,0 +1,26 @@
=============================================
Consider NUMA topology when booting instances
=============================================
.. todo:: Merge this into 'cpu-topologies.rst'
NUMA topology can exist on both the physical hardware of the host, and the
virtual hardware of the instance. OpenStack Compute uses libvirt to tune
instances to take advantage of NUMA topologies. The libvirt driver boot
process looks at the NUMA topology field of both the instance and the host it
is being booted on, and uses that information to generate an appropriate
configuration.
If the host is NUMA capable, but the instance has not requested a NUMA
topology, Compute attempts to pack the instance into a single cell.
If this fails, Compute does not continue to try.
If the host is NUMA capable, and the instance has requested a specific NUMA
topology, Compute will try to pin the vCPUs of different NUMA cells
on the instance to the corresponding NUMA cells on the host. It will also
expose the NUMA topology of the instance to the guest OS.
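An instance requests a NUMA topology through flavor extra specs; a minimal sketch, assuming an existing ``m1.large`` flavor:
.. code-block:: console
$ openstack flavor set m1.large --property hw:numa_nodes=2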
If you want Compute to pin a particular vCPU as part of this process,
set the ``vcpu_pin_set`` parameter in the ``nova.conf`` configuration
file. For more information about the ``vcpu_pin_set`` parameter, see the
Configuration Reference Guide.
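A minimal sketch of the setting in ``nova.conf``; the CPU list here is illustrative only:
.. code-block:: ini
[DEFAULT]
vcpu_pin_set = 4-12,^8,15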

doc/source/admin/quotas.rst Normal file

@@ -0,0 +1,304 @@
=============================
Manage Compute service quotas
=============================
As an administrative user, you can use the :command:`nova quota-*` commands,
which are provided by the ``python-novaclient`` package, to update the Compute
service quotas for a specific project or project user, as well as update the
quota defaults for a new project.
.. todo::
At some point, probably in Queens, we need to scrub this page and mention
the microversions that remove the proxy and network-related resource quotas.
.. rubric:: Compute quota descriptions
.. list-table::
:header-rows: 1
:widths: 10 40
* - Quota name
- Description
* - cores
- Number of instance cores (VCPUs) allowed per project.
* - fixed-ips
- Number of fixed IP addresses allowed per project. This number
must be equal to or greater than the number of allowed
instances.
* - floating-ips
- Number of floating IP addresses allowed per project.
* - injected-file-content-bytes
- Number of content bytes allowed per injected file.
* - injected-file-path-bytes
- Length of injected file path.
* - injected-files
- Number of injected files allowed per project.
* - instances
- Number of instances allowed per project.
* - key-pairs
- Number of key pairs allowed per user.
* - metadata-items
- Number of metadata items allowed per instance.
* - ram
- Megabytes of instance ram allowed per project.
* - security-groups
- Number of security groups per project.
* - security-group-rules
- Number of security group rules per project.
* - server-groups
- Number of server groups per project.
* - server-group-members
- Number of servers per server group.
View and update Compute quotas for a project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To view and update default quota values
---------------------------------------
#. List all default quotas for all projects:
.. code-block:: console
$ openstack quota show --default
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 10 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
#. Update a default value for a new project, for example:
.. code-block:: console
$ openstack quota set --instances 15 default
To view quota values for an existing project
--------------------------------------------
#. List the currently set quota values for a project:
.. code-block:: console
$ openstack quota show PROJECT_NAME
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 10 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
To update quota values for an existing project
----------------------------------------------
#. Obtain the project ID.
.. code-block:: console
$ project=$(openstack project show -f value -c id PROJECT_NAME)
#. Update a particular quota value.
.. code-block:: console
$ openstack quota set --QUOTA_NAME QUOTA_VALUE PROJECT_OR_CLASS
For example:
.. code-block:: console
$ openstack quota set --floating-ips 20 PROJECT_OR_CLASS
$ openstack quota show PROJECT_NAME
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 20 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
.. note::
To view a list of options for the :command:`openstack quota set` command,
run:
.. code-block:: console
$ openstack help quota set
View and update Compute quotas for a project user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To view quota values for a project user
---------------------------------------
#. Place the user ID in a usable variable.
.. code-block:: console
$ projectUser=$(openstack user show -f value -c id USER_NAME)
#. Place the user's project ID in a usable variable, as follows:
.. code-block:: console
$ project=$(openstack project show -f value -c id PROJECT_NAME)
#. List the currently set quota values for a project user.
.. code-block:: console
$ nova quota-show --user $projectUser --tenant $project
For example:
.. code-block:: console
$ nova quota-show --user $projectUser --tenant $project
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 20 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
To update quota values for a project user
-----------------------------------------
#. Place the user ID in a usable variable.
.. code-block:: console
$ projectUser=$(openstack user show -f value -c id USER_NAME)
#. Place the user's project ID in a usable variable, as follows:
.. code-block:: console
$ project=$(openstack project show -f value -c id PROJECT_NAME)
#. Update a particular quota value, as follows:
.. code-block:: console
$ nova quota-update --user $projectUser --QUOTA_NAME QUOTA_VALUE $project
For example:
.. code-block:: console
$ nova quota-update --user $projectUser --floating-ips 12 $project
$ nova quota-show --user $projectUser --tenant $project
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 12 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
.. note::
To view a list of options for the :command:`nova quota-update` command,
run:
.. code-block:: console
$ nova help quota-update
To display the current quota usage for a project user
-----------------------------------------------------
Use :command:`nova limits` to get a list of the
current quota values and the current quota usage:
.. code-block:: console
$ nova limits --tenant PROJECT_NAME
+------+-----+-------+--------+------+----------------+
| Verb | URI | Value | Remain | Unit | Next_Available |
+------+-----+-------+--------+------+----------------+
+------+-----+-------+--------+------+----------------+
+--------------------+------+-------+
| Name | Used | Max |
+--------------------+------+-------+
| Cores | 0 | 20 |
| Instances | 0 | 10 |
| Keypairs | - | 100 |
| Personality | - | 5 |
| Personality Size | - | 10240 |
| RAM | 0 | 51200 |
| Server Meta | - | 128 |
| ServerGroupMembers | - | 10 |
| ServerGroups | 0 | 10 |
+--------------------+------+-------+
.. note::
The first table that the :command:`nova limits` command prints is empty
because the Compute API returns an empty rate-limit list for backward
compatibility purposes.


@@ -0,0 +1,54 @@
.. _manage-quotas:
=============
Manage quotas
=============
.. todo:: Merge this into 'quotas.rst'
To prevent system capacities from being exhausted without notification, you can
set up quotas. Quotas are operational limits. For example, the number of
gigabytes allowed for each project can be controlled so that cloud resources
are optimized. Quotas can be enforced at both the project and the project-user
level.
Using the command-line interface, you can manage quotas for the OpenStack
Compute service, the OpenStack Block Storage service, and the OpenStack
Networking service.
The cloud operator typically changes default values because a project requires
more than ten volumes or 1 TB on a compute node.
.. note::
To view all projects, run:
.. code-block:: console
$ openstack project list
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| e66d97ac1b704897853412fc8450f7b9 | admin |
| bf4a37b885fe46bd86e999e50adad1d3 | services |
| 21bd1c7c95234fd28f589b60903606fa | tenant01 |
| f599c5cd1cba4125ae3d7caed08e288c | tenant02 |
+----------------------------------+----------+
To display all current users for a project, run:
.. code-block:: console
$ openstack user list --project PROJECT_NAME
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| ea30aa434ab24a139b0e85125ec8a217 | demo00 |
| 4f8113c1d838467cad0c2f337b3dfded | demo01 |
+----------------------------------+--------+
Use :samp:`openstack quota show {PROJECT_NAME}` to list all quotas for a
project.
Use :samp:`openstack quota set {PROJECT_NAME} {--parameters}` to set quota
values.
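For example, a sketch using the ``tenant01`` project listed above (the quota names and values are illustrative):
.. code-block:: console
$ openstack quota show tenant01
$ openstack quota set --volumes 20 --gigabytes 2048 tenant01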


@@ -0,0 +1,251 @@
=======================
Manage project security
=======================
Security groups are sets of IP filter rules that are applied to all project
instances, which define networking access to the instance. Group rules are
project specific; project members can edit the default rules for their group
and add new rule sets.
All projects have a ``default`` security group which is applied to any instance
that has no other defined security group. Unless you change the default, this
security group denies all incoming traffic and allows only outgoing traffic to
your instance.
You can use the ``allow_same_net_traffic`` option in the
``/etc/nova/nova.conf`` file to globally control whether the rules apply to
hosts which share a network. There are two possible values:
``True`` (default)
Hosts on the same subnet are not filtered and are allowed to pass all types
of traffic between them. On a flat network, this allows unfiltered
communication among all instances from all projects. With VLAN networking,
this allows access between instances within the same project. You can also simulate this
setting by configuring the default security group to allow all traffic from
the subnet.
``False``
Security groups are enforced for all connections.
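A minimal sketch of enforcing security groups between all hosts in ``/etc/nova/nova.conf``:
.. code-block:: ini
[DEFAULT]
allow_same_net_traffic = False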
Additionally, the maximum number of rules per security group is controlled by
the ``security_group_rules`` quota, and the number of allowed security groups
per project is controlled by the ``security_groups`` quota (see
:ref:`manage-quotas`).
List and view current security groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From the command-line you can get a list of security groups for the project,
using the :command:`openstack` and :command:`nova` commands:
#. Ensure your system variables are set for the user and project for which you
are checking security group rules. For example:
.. code-block:: console
export OS_USERNAME=demo00
export OS_TENANT_NAME=tenant01
#. Output security groups, as follows:
.. code-block:: console
$ openstack security group list
+--------------------------------------+---------+-------------+
| Id | Name | Description |
+--------------------------------------+---------+-------------+
| 73580272-d8fa-4927-bd55-c85e43bc4877 | default | default |
| 6777138a-deb7-4f10-8236-6400e7aff5b0 | open | all ports |
+--------------------------------------+---------+-------------+
#. View the details of a group, as follows:
.. code-block:: console
$ openstack security group rule list GROUPNAME
For example:
.. code-block:: console
$ openstack security group rule list open
+--------------------------------------+-------------+-----------+-----------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------------+-----------------------+
| 353d0611-3f67-4848-8222-a92adbdb5d3a | udp | 0.0.0.0/0 | 1:65535 | None |
| 63536865-e5b6-4df1-bac5-ca6d97d8f54d | tcp | 0.0.0.0/0 | 1:65535 | None |
+--------------------------------------+-------------+-----------+-----------------+-----------------------+
These rules are all ``allow`` type rules, as the default is deny. The first
column is the rule ID, the second is the IP protocol (one of ICMP, TCP, or
UDP), the third specifies the IP range in CIDR format, and the fourth
specifies the affected port range. This example shows the full port range for
all protocols allowed from all IPs.
Create a security group
~~~~~~~~~~~~~~~~~~~~~~~
When adding a new security group, you should pick a descriptive but brief name.
This name shows up in brief descriptions of the instances that use it where the
longer description field often does not. For example, seeing that an instance
is using security group "http" is much easier to understand than "bobs\_group"
or "secgrp1".
#. Ensure your system variables are set for the user and project for which you
are creating security group rules.
#. Add the new security group, as follows:
.. code-block:: console
$ openstack security group create GroupName --description Description
For example:
.. code-block:: console
$ openstack security group create global_http --description "Allows Web traffic anywhere on the Internet."
+-----------------+--------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------------------------------+
| created_at | 2016-11-03T13:50:53Z |
| description | Allows Web traffic anywhere on the Internet. |
| headers | |
| id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
| name | global_http |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| revision_number | 1 |
| rules | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv4', id='4d8cec94-e0ee-4c20-9f56-8fb67c21e4df', |
| | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z' |
| | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv6', id='31be2ad1-be14-4aef-9492-ecebede2cf12', |
| | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z' |
| updated_at | 2016-11-03T13:50:53Z |
+-----------------+--------------------------------------------------------------------------------------------------------------------------+
#. Add a new group rule, as follows:
.. code-block:: console
$ openstack security group rule create SEC_GROUP_NAME \
--protocol PROTOCOL --dst-port FROM_PORT:TO_PORT --remote-ip CIDR
The ``FROM_PORT:TO_PORT`` range specifies the local port range that
connections are allowed to access, not the source and destination ports of
the connection. For example:
.. code-block:: console
$ openstack security group rule create global_http \
--protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-11-06T14:02:00Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 2ba06233-d5c8-43eb-93a9-8eaa94bc9eb5 |
| port_range_max | 80 |
| port_range_min | 80 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
| updated_at | 2016-11-06T14:02:00Z |
+-------------------+--------------------------------------+
You can create complex rule sets by creating additional rules. For example,
if you want to pass both HTTP and HTTPS traffic, run:
.. code-block:: console
$ openstack security group rule create global_http \
--protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-11-06T14:09:20Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 821c3ef6-9b21-426b-be5b-c8a94c2a839c |
| port_range_max | 443 |
| port_range_min | 443 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
| updated_at | 2016-11-06T14:09:20Z |
+-------------------+--------------------------------------+
Despite only outputting the newly added rule, this operation is additive
(both rules are created and enforced).
#. View all rules for the new security group, as follows:
.. code-block:: console
$ openstack security group rule list global_http
+--------------------------------------+-------------+-----------+-----------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------------+-----------------------+
| 353d0611-3f67-4848-8222-a92adbdb5d3a | tcp | 0.0.0.0/0 | 80:80 | None |
| 63536865-e5b6-4df1-bac5-ca6d97d8f54d | tcp | 0.0.0.0/0 | 443:443 | None |
+--------------------------------------+-------------+-----------+-----------------+-----------------------+
Delete a security group
~~~~~~~~~~~~~~~~~~~~~~~
#. Ensure your system variables are set for the user and project for which you
are deleting a security group.
#. Delete the new security group, as follows:
.. code-block:: console
$ openstack security group delete GROUPNAME
For example:
.. code-block:: console
$ openstack security group delete global_http
Create security group rules for a cluster of instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Source groups are a special, dynamic way of defining the CIDR of allowed
sources. The user specifies a source group (security group name), and all of
the user's other instances using the specified source group are selected
dynamically. This alleviates the need for individual rules to allow each new
member of the cluster.
#. Make sure to set the system variables for the user and project for which you
are creating a security group rule.
#. Add a source group, as follows:
.. code-block:: console
$ openstack security group rule create secGroupName \
--remote-group source-group --protocol ip-protocol \
--dst-port from-port:to-port
For example:
.. code-block:: console
$ openstack security group rule create cluster \
--remote-group global_http --protocol tcp --dst-port 22:22
The ``cluster`` rule allows SSH access from any other instance that uses the
``global_http`` group.


@@ -0,0 +1,72 @@
=======================
Manage Compute services
=======================
You can enable and disable Compute services. The following examples disable and
enable the ``nova-compute`` service.
#. List the Compute services:
.. code-block:: console
$ openstack compute service list
+----+--------------+------------+----------+---------+-------+--------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+------------+----------+---------+-------+--------------+
| 4 | nova- | controller | internal | enabled | up | 2016-12-20T0 |
| | consoleauth | | | | | 0:44:48.0000 |
| | | | | | | 00 |
| 5 | nova- | controller | internal | enabled | up | 2016-12-20T0 |
| | scheduler | | | | | 0:44:48.0000 |
| | | | | | | 00 |
| 6 | nova- | controller | internal | enabled | up | 2016-12-20T0 |
| | conductor | | | | | 0:44:54.0000 |
| | | | | | | 00 |
| 9 | nova-compute | compute | nova | enabled | up | 2016-10-21T0 |
| | | | | | | 2:35:03.0000 |
| | | | | | | 00 |
+----+--------------+------------+----------+---------+-------+--------------+
#. Disable a nova service:
.. code-block:: console
$ openstack compute service set --disable --disable-reason "trial log" compute nova-compute
+----------+--------------+----------+-------------------+
| Host | Binary | Status | Disabled Reason |
+----------+--------------+----------+-------------------+
| compute | nova-compute | disabled | trial log |
+----------+--------------+----------+-------------------+
#. Check the service list:
.. code-block:: console
$ openstack compute service list
+----+--------------+------------+----------+---------+-------+--------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+------------+----------+---------+-------+--------------+
| 4 | nova- | controller | internal | enabled | up | 2016-12-20T0 |
| | consoleauth | | | | | 0:44:48.0000 |
| | | | | | | 00 |
| 5 | nova- | controller | internal | enabled | up | 2016-12-20T0 |
| | scheduler | | | | | 0:44:48.0000 |
| | | | | | | 00 |
| 6 | nova- | controller | internal | enabled | up | 2016-12-20T0 |
| | conductor | | | | | 0:44:54.0000 |
| | | | | | | 00 |
| 9 | nova-compute | compute | nova | disabled| up | 2016-10-21T0 |
| | | | | | | 2:35:03.0000 |
| | | | | | | 00 |
+----+--------------+------------+----------+---------+-------+--------------+
#. Enable the service:
.. code-block:: console
$ openstack compute service set --enable compute nova-compute
+----------+--------------+---------+
| Host | Binary | Status |
+----------+--------------+---------+
| compute | nova-compute | enabled |
+----------+--------------+---------+


@@ -0,0 +1,77 @@
.. _cli-os-migrate-cfg-ssh:
===================================
Configure SSH between compute nodes
===================================
.. todo::
Consider merging this into a larger "live-migration" document or to the
installation guide
If you are resizing or migrating an instance between hypervisors, you might
encounter an SSH (Permission denied) error. Ensure that each node is configured
with SSH key authentication so that the Compute service can use SSH to move
disks to other nodes.
To share a key pair between compute nodes, complete the following steps:
#. On the first node, obtain a key pair (public key and private key). Use the
root key pair in the ``/root/.ssh/id_rsa`` and ``/root/.ssh/id_rsa.pub``
files, or generate a new key pair.
#. Run :command:`setenforce 0` to put SELinux into permissive mode.
#. Enable login abilities for the nova user:
.. code-block:: console
# usermod -s /bin/bash nova
Switch to the nova account.
.. code-block:: console
# su nova
#. As root, create the folder that is needed by SSH and place the private key
that you obtained in step 1 into this folder:
.. code-block:: console
# mkdir -p /var/lib/nova/.ssh
# cp <private key> /var/lib/nova/.ssh/id_rsa
# echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
# chmod 600 /var/lib/nova/.ssh/id_rsa /var/lib/nova/.ssh/authorized_keys
#. Repeat steps 2-4 on each node.
.. note::
The nodes must share the same key pair, so do not generate a new key pair
for any subsequent nodes.
#. From the first node, where you created the SSH key, run:
.. code-block:: console
# ssh-copy-id -i <pub key> nova@remote-host
This command installs your public key in the remote machine's
``authorized_keys`` file.
#. Ensure that the nova user can now log in to each node without using a
password:
.. code-block:: console
# su nova
$ ssh *computeNodeAddress*
$ exit
#. As root on each node, restart both libvirt and the Compute services:
.. code-block:: console
# systemctl restart libvirtd.service
# systemctl restart openstack-nova-compute.service


@@ -257,3 +257,171 @@ off the live snapshotting mechanism by setting up its value to ``True`` in the
[workarounds]
disable_libvirt_livesnapshot = True
Cannot find suitable emulator for x86_64
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
When you attempt to create a VM, the error shows the VM is in the ``BUILD``
then ``ERROR`` state.
Solution
--------
On the KVM host, run :command:`cat /proc/cpuinfo`. Make sure the ``vmx`` or
``svm`` flags are set.
Follow the instructions in the `Enable KVM
<https://docs.openstack.org/ocata/config-reference/compute/hypervisor-kvm.html#enable-kvm>`__
section in the OpenStack Configuration Reference to enable hardware
virtualization support in your BIOS.
Failed to attach volume after detaching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
Failed to attach a volume after detaching the same volume.
Solution
--------
You must change the device name on the :command:`nova volume-attach` command.
The VM might not clean up after a :command:`nova volume-detach` command runs.
This example shows how the :command:`nova volume-attach` command fails when
you use the ``vdb``, ``vdc``, or ``vdd`` device names:
.. code-block:: console
# ls -al /dev/disk/by-path/
total 0
drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1
You might also have this problem after attaching and detaching the same volume
from the same VM with the same mount point multiple times. In this case,
restart the KVM host.
Failed to attach volume, systool is not installed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
This warning and error occurs if you do not have the required ``sysfsutils``
package installed on the compute node:
.. code-block:: console
WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb\
admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool\
is not installed
ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin\
admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
[instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-47\
7a-be9b-47c97626555c]
Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.
Solution
--------
Install the ``sysfsutils`` package on the compute node. For example:
.. code-block:: console
# apt-get install sysfsutils
Failed to connect volume in FC SAN
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
The compute node failed to connect to a volume in a Fibre Channel (FC) SAN
configuration. The WWN may not be zoned correctly in your FC SAN that links the
compute host to the storage array:
.. code-block:: console
ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin\
demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd\
6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3\
d5f3]
Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while\
attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4\
bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The\
server has either erred or is incapable of performing the requested\
operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00)
Solution
--------
The network administrator must configure the FC SAN fabric by correctly zoning
the WWN (port names) from your compute node HBAs.
Multipath call failed exit
~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
Multipath call failed exit. This warning occurs in the Compute log if you do
not have the optional ``multipath-tools`` package installed on the compute
node. This is an optional package and the volume attachment does work without
the multipath tools installed. If the ``multipath-tools`` package is installed
on the compute node, it is used to perform the volume attachment. The IDs in
your message are unique to your system.
.. code-block:: console
WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571 \
admin admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin] \
Multipath call failed exit (96)
Solution
--------
Install the ``multipath-tools`` package on the compute node. For example:
.. code-block:: console
# apt-get install multipath-tools
Failed to attach volume, missing sg_scan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
Failed to attach volume to an instance, ``sg_scan`` file not found. This error
occurs when the ``sg3-utils`` package is not installed on the compute node. The
IDs in your message are unique to your system:
.. code-block:: console
ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin]
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
Failed to attach volume 4cc104c4-ac92-4bd6-9b95-c6686746414a at /dev/vdcTRACE nova.compute.manager
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan'
Solution
--------
Install the ``sg3-utils`` package on the compute node. For example:
.. code-block:: console
# apt-get install sg3-utils