Remove cells v1 (for the most part) from the docs

As discussed on the mailing list [1], cells v1
has been deprecated since Pike and its biggest
known user (CERN) moved to cells v2 in Queens,
so we can start removing the cells v1 specific
documentation. This avoids confusing people new
to nova about what cells is, and stops suggesting
that there is an optional v1 to choose.

A few mentions of cells v1 remain for things
like adding a new cell, which need to be
re-written; I've left a todo for those.

Users can still get at cells v1 specific docs from
published stable branches and/or rebuilding the
docs from before this change.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002569.html

Change-Id: Idaa04a88b6883254cad9a8c6665e1c63a67e88d3
Matt Riedemann 2019-02-13 13:59:09 -05:00
parent 3ba8311f6f
commit bc5ef2ff06
11 changed files with 6 additions and 504 deletions


@ -1,295 +0,0 @@
==========
Cells (v1)
==========

.. warning::

   Configuring and implementing Cells v1 is not recommended for new
   deployments of the Compute service (nova). Cells v2 replaces cells v1, and
   v2 is required to install or upgrade the Compute service to the 15.0.0
   Ocata release. More information on cells v2 can be found in
   :doc:`/user/cells`.
`Cells` functionality enables you to scale an OpenStack Compute cloud in a more
distributed fashion without having to use complicated technologies like
database and message queue clustering. It supports very large deployments.

When this functionality is enabled, the hosts in an OpenStack Compute cloud are
partitioned into groups called cells. Cells are configured as a tree. The
top-level cell should have a host that runs a ``nova-api`` service, but no
``nova-compute`` services. Each child cell should run all of the typical
``nova-*`` services in a regular Compute cloud except for ``nova-api``. You can
think of cells as a normal Compute deployment in that each cell has its own
database server and message queue broker.

The ``nova-cells`` service handles communication between cells and selects
cells for new instances. This service is required for every cell. Communication
between cells is pluggable, and currently the only option is communication
through RPC.

Cells scheduling is separate from host scheduling. ``nova-cells`` first picks
a cell. Once a cell is selected and the new build request reaches its
``nova-cells`` service, it is sent over to the host scheduler in that cell and
the build proceeds as it would have without cells.
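The per-cell service layout described above can be summarized as follows. This
is an illustrative sketch only; cell names and exact host placement vary by
deployment:

```text
API (top-level) cell           child cell (e.g. cell1)
--------------------           -----------------------
nova-api                       nova-scheduler
nova-cells                     nova-conductor
(no nova-compute)              nova-compute
                               nova-cells
                               (no nova-api)
own database + message queue   own database + message queue
```

Note that ``nova-cells`` appears in every cell, since it is the service that
ties the tree together.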
Cell configuration options
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. todo::

   This is duplication. We should be able to use the oslo.config.sphinxext
   module to generate this for us.

Cells are disabled by default. All cell-related configuration options appear in
the ``[cells]`` section in ``nova.conf``. The following cell-related options
are currently supported:
``enable``
  Set to ``True`` to turn on cell functionality. Default is ``false``.

``name``
  Name of the current cell. Must be unique for each cell.

``capabilities``
  List of arbitrary ``key=value`` pairs defining capabilities of the current
  cell. Values include ``hypervisor=xenserver;kvm,os=linux;windows``.

``call_timeout``
  How long in seconds to wait for replies from calls between cells.

``scheduler_filter_classes``
  Filter classes that the cells scheduler should use. By default, uses
  ``nova.cells.filters.all_filters`` to map to all cells filters included with
  Compute.

``scheduler_weight_classes``
  Weight classes that the scheduler for cells uses. By default, uses
  ``nova.cells.weights.all_weighers`` to map to all cells weight algorithms
  included with Compute.

``ram_weight_multiplier``
  Multiplier used to weight RAM. Negative numbers indicate that Compute should
  stack VMs on one host instead of spreading out new VMs to more hosts in the
  cell. The default value is 10.0.
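Pulled together, a minimal ``[cells]`` section using these options might look
like the following. The values are illustrative only, not recommendations:

```ini
[cells]
# Turn on cell functionality (off by default).
enable = True
# Unique name for this cell.
name = cell1
# Arbitrary key=value capabilities advertised by this cell.
capabilities = hypervisor=xenserver;kvm,os=linux;windows
# Seconds to wait for replies from calls between cells.
call_timeout = 60
# Negative values stack VMs on fewer hosts; positive values spread them.
ram_weight_multiplier = 10.0
```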
Configure the API (top-level) cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The cell type must be changed in the API cell so that requests can be proxied
through ``nova-cells`` down to the correct cell properly. Edit the
``nova.conf`` file in the API cell, and specify ``api`` in the ``cell_type``
key:

.. code-block:: ini

   [DEFAULT]
   compute_api_class=nova.compute.cells_api.ComputeCellsAPI
   # ...

   [cells]
   cell_type = api
Configure the child cells
~~~~~~~~~~~~~~~~~~~~~~~~~

Edit the ``nova.conf`` file in the child cells, and specify ``compute`` in the
``cell_type`` key:

.. code-block:: ini

   [DEFAULT]
   # Disable quota checking in child cells. Let API cell do it exclusively.
   quota_driver=nova.quota.NoopQuotaDriver

   [cells]
   cell_type = compute
Configure the database in each cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before bringing the services online, the database in each cell needs to be
configured with information about related cells. In particular, the API cell
needs to know about its immediate children, and the child cells must know about
their immediate parents. The information needed is the ``RabbitMQ`` server
credentials for the particular cell.

Use the :command:`nova-manage cell create` command to add this information to
the database in each cell:
.. code-block:: console

   # nova-manage cell create -h
   usage: nova-manage cell create [-h] [--name <name>]
                                  [--cell_type <parent|api|child|compute>]
                                  [--username <username>] [--password <password>]
                                  [--broker_hosts <broker_hosts>]
                                  [--hostname <hostname>] [--port <number>]
                                  [--virtual_host <virtual_host>]
                                  [--woffset <float>] [--wscale <float>]

   optional arguments:
     -h, --help            show this help message and exit
     --name <name>         Name for the new cell
     --cell_type <parent|api|child|compute>
                           Whether the cell is parent/api or child/compute
     --username <username>
                           Username for the message broker in this cell
     --password <password>
                           Password for the message broker in this cell
     --broker_hosts <broker_hosts>
                           Comma separated list of message brokers in this cell.
                           Each broker is specified as hostname:port with both
                           mandatory. This option overrides the --hostname and
                           --port options (if provided).
     --hostname <hostname>
                           Address of the message broker in this cell
     --port <number>       Port number of the message broker in this cell
     --virtual_host <virtual_host>
                           The virtual host of the message broker in this cell
     --woffset <float>
     --wscale <float>
As an example, assume an API cell named ``api`` and a child cell named
``cell1``.

Within the ``api`` cell, specify the following ``RabbitMQ`` server information:

.. code-block:: ini

   rabbit_host=10.0.0.10
   rabbit_port=5672
   rabbit_username=api_user
   rabbit_password=api_passwd
   rabbit_virtual_host=api_vhost

Within the ``cell1`` child cell, specify the following ``RabbitMQ`` server
information:

.. code-block:: ini

   rabbit_host=10.0.1.10
   rabbit_port=5673
   rabbit_username=cell1_user
   rabbit_password=cell1_passwd
   rabbit_virtual_host=cell1_vhost
You can run this in the API cell as root:

.. code-block:: console

   # nova-manage cell create --name cell1 --cell_type child \
     --username cell1_user --password cell1_passwd --hostname 10.0.1.10 \
     --port 5673 --virtual_host cell1_vhost --woffset 1.0 --wscale 1.0

Repeat the previous steps for all child cells.

In the child cell, run the following, as root:

.. code-block:: console

   # nova-manage cell create --name api --cell_type parent \
     --username api_user --password api_passwd --hostname 10.0.0.10 \
     --port 5672 --virtual_host api_vhost --woffset 1.0 --wscale 1.0

To customize the Compute cells, use the configuration option settings
documented above.
Cell scheduling configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To determine the best cell to use to launch a new instance, Compute uses a set
of filters and weights defined in the ``/etc/nova/nova.conf`` file. The
following options are available to prioritize cells for scheduling:

``scheduler_filter_classes``
  List of filter classes. By default ``nova.cells.filters.all_filters``
  is specified, which maps to all cells filters included with Compute
  (see the section called :ref:`Filters <compute-scheduler-filters>`).

``scheduler_weight_classes``
  List of weight classes. By default ``nova.cells.weights.all_weighers`` is
  specified, which maps to all cell weight algorithms included with Compute.
  The following modules are available:

  ``mute_child``
    Downgrades the likelihood of choosing child cells that have not sent
    capacity or capability updates in a while. Options include
    ``mute_weight_multiplier`` (multiplier for mute children; the value
    should be negative).

  ``ram_by_instance_type``
    Selects cells with the most RAM capacity for the requested instance type.
    Because higher weights win, Compute returns the number of available units
    for the requested instance type. The ``ram_weight_multiplier`` option
    defaults to 10.0, which scales the weight by a factor of 10. Use a
    negative number to stack VMs on one host instead of spreading out new VMs
    to more hosts in the cell.

  ``weight_offset``
    Allows modifying the database to weight a particular cell. You can use
    this when you want to disable a cell (for example, ``0``), or to set a
    default cell by making its ``weight_offset`` very high (for example,
    ``999999999999999``). The cell with the highest weight is the first to be
    scheduled for launching an instance.
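To make the interplay of these weighers concrete, here is a minimal sketch in
Python of how a cell's weight could combine free RAM units with
``weight_offset``. This is an illustration only, not nova's actual weigher
implementation; the function name and inputs are hypothetical:

```python
def cell_weight(free_ram_units, weight_offset,
                ram_weight_multiplier=10.0,
                offset_weight_multiplier=1.0):
    """Higher weight wins; the cell with the largest weight is chosen."""
    return (free_ram_units * ram_weight_multiplier
            + weight_offset * offset_weight_multiplier)

# Default (spreading): the cell with more free RAM units wins.
assert cell_weight(8, 0.0) > cell_weight(2, 0.0)

# Stacking: a negative ram_weight_multiplier prefers the fuller cell.
assert cell_weight(8, 0.0, ram_weight_multiplier=-10.0) < \
    cell_weight(2, 0.0, ram_weight_multiplier=-10.0)

# A very large weight_offset pins a "default" cell regardless of RAM.
assert cell_weight(2, 999999999999999.0) > cell_weight(8, 0.0)
```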
Additionally, the following options are available for the cell scheduler:

``scheduler_retries``
  Specifies how many times the scheduler tries to launch a new instance when no
  cells are available (default=10).

``scheduler_retry_delay``
  Specifies the delay (in seconds) between retries (default=2).

As an admin user, you can also add a filter that directs builds to a particular
cell. The ``policy.json`` file must have a line with
``"cells_scheduler_filter:TargetCellFilter" : "is_admin:True"`` to let an admin
user specify a scheduler hint to direct a build to a particular cell.
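With that policy rule in place, an admin could direct a build with a scheduler
hint such as the following. The image, flavor, and cell path ``api!cell1`` are
illustrative placeholders:

```console
$ nova boot --image <image-uuid> --flavor m1.small \
    --hint target_cell='api!cell1' my-instance
```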
Optional cell configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Cells store all inter-cell communication data, including user names and
passwords, in the database. Because the cells data is not updated very
frequently, you can use the ``cells_config`` option in the ``[cells]`` section
to specify a JSON file to store the cells data instead. With this
configuration, the database is no longer consulted when reloading the cells
data. The file must have columns present in the Cell model (excluding common
database fields and the ``id`` column). You must specify the queue connection
information through a ``transport_url`` field, instead of ``username``,
``password``, and so on.

The ``transport_url`` has the following form::

   rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
The scheme can only be ``rabbit``.
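The URL pieces map directly onto the discrete fields they replace. A quick
check with Python's standard library, using the ``cell1`` credentials from the
earlier example, illustrates the correspondence:

```python
from urllib.parse import urlsplit

url = urlsplit("rabbit://cell1_user:cell1_passwd@10.0.1.10:5673/cell1_vhost")

assert url.scheme == "rabbit"          # must be rabbit
assert url.username == "cell1_user"    # replaces the username field
assert url.password == "cell1_passwd"  # replaces the password field
assert url.hostname == "10.0.1.10"     # broker address
assert url.port == 5673                # broker port
assert url.path == "/cell1_vhost"      # virtual host, with leading slash
```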
The following sample shows this optional configuration:

.. code-block:: json

   {
       "parent": {
           "name": "parent",
           "api_url": "http://api.example.com:8774",
           "transport_url": "rabbit://rabbit.example.com",
           "weight_offset": 0.0,
           "weight_scale": 1.0,
           "is_parent": true
       },
       "cell1": {
           "name": "cell1",
           "api_url": "http://api.example.com:8774",
           "transport_url": "rabbit://rabbit1.example.com",
           "weight_offset": 0.0,
           "weight_scale": 1.0,
           "is_parent": false
       },
       "cell2": {
           "name": "cell2",
           "api_url": "http://api.example.com:8774",
           "transport_url": "rabbit://rabbit2.example.com",
           "weight_offset": 0.0,
           "weight_scale": 1.0,
           "is_parent": false
       }
   }
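A small sanity check for such a file can be written with Python's standard
``json`` module. The checks below simply restate the constraints described
above (``transport_url`` required, ``rabbit`` scheme only); this is a
hypothetical helper, not part of nova:

```python
import json

# A trimmed version of the sample cells configuration above.
sample = '''
{
    "parent": {"name": "parent", "api_url": "http://api.example.com:8774",
               "transport_url": "rabbit://rabbit.example.com",
               "weight_offset": 0.0, "weight_scale": 1.0, "is_parent": true},
    "cell1": {"name": "cell1", "api_url": "http://api.example.com:8774",
              "transport_url": "rabbit://rabbit1.example.com",
              "weight_offset": 0.0, "weight_scale": 1.0, "is_parent": false}
}
'''

cells = json.loads(sample)
for name, cell in cells.items():
    # Connection info must come via transport_url, and only rabbit works.
    assert cell["transport_url"].startswith("rabbit://")
    # Each top-level key should match the cell's own name field.
    assert cell["name"] == name

parents = [n for n, c in cells.items() if c["is_parent"]]
assert parents == ["parent"]
```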


@ -25,6 +25,5 @@ A list of config options based on different topics can be found below:
/admin/configuration/iscsi-offload.rst
/admin/configuration/hypervisors.rst
/admin/configuration/schedulers.rst
/admin/configuration/cells.rst
/admin/configuration/logs.rst
/admin/configuration/samples/index.rst


@ -791,35 +791,6 @@ With the API, use the ``os:scheduler_hints`` key:
}
}
Cell filters
~~~~~~~~~~~~

The following sections describe the available cell filters.

.. note::

   These filters are only available for cells v1, which is deprecated.

DifferentCellFilter
-------------------

Schedules the instance on a different cell from a set of instances. To take
advantage of this filter, the requester must pass a scheduler hint, using
``different_cell`` as the key and a list of instance UUIDs as the value.

ImagePropertiesFilter
---------------------

Filters cells based on properties defined on the instance's image. This
filter works by specifying the required hypervisor in the image metadata and
the supported hypervisor version in cell capabilities.

TargetCellFilter
----------------

Filters target cells. This filter works by specifying a scheduler hint of
``target_cell``. The value should be the full cell path.
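As an illustration, the ``different_cell`` hint described above could be
passed at boot time roughly as follows. The instance UUID and names are
placeholders, and the exact hint encoding accepted by the client may vary:

```console
$ nova boot --image <image-uuid> --flavor m1.small \
    --hint different_cell='["b2a14863-7a34-4f33-9d32-77eef8d2b2a3"]' \
    my-instance
```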
.. _weights:
Weights
@ -838,10 +809,7 @@ weight is given the highest priority.
.. figure:: /figures/nova-weighting-hosts.png
If cells are used, cells are weighted by the scheduler in the same manner as
hosts.
Hosts and cells are weighted based on the following options in the
Hosts are weighted based on the following options in the
``/etc/nova/nova.conf`` file:
.. list-table:: Host weighting options
@ -957,43 +925,6 @@ For example:
required = false
weight_of_unavailable = -10000.0
.. list-table:: Cell weighting options
   :header-rows: 1
   :widths: 10, 25, 60

   * - Section
     - Option
     - Description
   * - [cells]
     - ``mute_weight_multiplier``
     - Multiplier to weight mute children (hosts which have not sent
       capacity or capability updates for some time).
       Use a negative, floating-point value.
   * - [cells]
     - ``offset_weight_multiplier``
     - Multiplier to weight cells, so you can specify a preferred cell.
       Use a floating-point value.
   * - [cells]
     - ``ram_weight_multiplier``
     - By default, the scheduler spreads instances across all cells evenly.
       Set the ``ram_weight_multiplier`` option to a negative number if you
       prefer stacking instead of spreading. Use a floating-point value.
   * - [cells]
     - ``scheduler_weight_classes``
     - Defaults to ``nova.cells.weights.all_weighers``, which maps to all
       cell weighers included with Compute. Cells are then weighted and
       sorted with the largest weight winning.

For example:

.. code-block:: ini

   [cells]
   scheduler_weight_classes = nova.cells.weights.all_weighers
   mute_weight_multiplier = -10.0
   ram_weight_multiplier = 1.0
   offset_weight_multiplier = 1.0
Utilization aware scheduling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -85,7 +85,6 @@ deployments, but are documented for existing ones.
.. toctree::
:maxdepth: 1
nova-cells
nova-dhcpbridge
nova-network
nova-consoleauth


@ -1,54 +0,0 @@
==========
nova-cells
==========

-------------------------
Server for the Nova Cells
-------------------------

:Author: openstack@lists.openstack.org
:Copyright: OpenStack Foundation
:Manual section: 1
:Manual group: cloud computing

Synopsis
========

::

   nova-cells [options]

Description
===========

:program:`nova-cells` is a server daemon that serves the Nova Cells service,
which handles communication between cells and selects cells for new instances.

.. deprecated:: 16.0.0

   Everything in this document refers to Cells v1, which is not recommended
   for new deployments and is deprecated in favor of Cells v2 as of the
   16.0.0 Pike release. For information about commands to use with Cells v2,
   see the man page for :ref:`man-page-cells-v2`.

Options
=======

**General options**

Files
=====

* ``/etc/nova/nova.conf``
* ``/etc/nova/policy.json``
* ``/etc/nova/rootwrap.conf``
* ``/etc/nova/rootwrap.d/``

See Also
========

* :nova-doc:`OpenStack Nova <>`

Bugs
====

* Nova bugs are managed at `Launchpad <https://bugs.launchpad.net/nova>`__


@ -120,7 +120,6 @@ _man_pages = [
('nova-api-metadata', u'Cloud controller fabric'),
('nova-api-os-compute', u'Cloud controller fabric'),
('nova-api', u'Cloud controller fabric'),
('nova-cells', u'Cloud controller fabric'),
('nova-compute', u'Cloud controller fabric'),
('nova-console', u'Cloud controller fabric'),
('nova-consoleauth', u'Cloud controller fabric'),


@ -46,7 +46,7 @@ these documents will move into the "Internals" section.
If you want to get involved in shaping the future of nova's architecture,
these are a great place to start reading up on the current plans.
* :doc:`/user/cells`: Comparison of Cells v1 and v2, and how v2 is evolving
* :doc:`/user/cells`: How cells v2 is evolving
* :doc:`/reference/policy-enforcement`: How we want policy checks on API actions
to work in the future
* :doc:`/reference/stable-api`: What stable api means to nova


@ -22,59 +22,6 @@ Andrew Laski gave at the Austin (Newton) summit which is worth watching.
.. _presentation: https://www.openstack.org/videos/summits/austin-2016/nova-cells-v2-whats-going-on
Cells V1
========

Historically, Nova has depended on a single logical database and message queue
that all nodes depend on for communication and data persistence. This becomes
an issue for deployers, as scaling and providing fault tolerance for these
systems is difficult.

We have an experimental feature in Nova called "cells", hereafter referred to
as "cells v1", which is used by some large deployments to partition compute
nodes into smaller groups, each coupled with a database and queue. This seems
to be a well-liked and easy-to-understand arrangement of resources, but the
implementation of it has issues for maintenance and correctness.
See `Comparison with Cells V1`_ for more detail.

Status
~~~~~~

.. deprecated:: 16.0.0

   Cells v1 is deprecated in favor of Cells v2 as of the 16.0.0 Pike release.

Cells v1 is considered experimental and receives much less testing than the
rest of Nova. For example, there is no job for testing cells v1 with Neutron.

The priority for the core team is implementation of and migration to cells v2.
Because of this, there are a few restrictions placed on cells v1:

#. Cells v1 is in feature freeze. This means no new feature proposals for
   cells v1 will be accepted by the core team. This includes, but is not
   limited to, API parity, e.g. supporting virtual interface attach/detach
   with Neutron.
#. Latent bugs caused by the cells v1 design will not be fixed, e.g.
   `bug 1489581 <https://bugs.launchpad.net/nova/+bug/1489581>`_. So if new
   tests are added to Tempest which trigger a latent bug in cells v1, it may
   not be fixed. However, regressions in working function should be tracked
   with bugs and fixed.
#. Changes proposed to nova will not be automatically tested against a Cells
   v1 environment. To manually trigger Cells v1 integration testing on a nova
   change in Gerrit, leave a review comment of "check experimental" and the
   *nova-cells-v1* job will run on it. The job is non-voting, meaning a
   failure will not prevent the patch from being merged if it is approved by
   the core team.

**Suffice it to say, new deployments of cells v1 are not encouraged.**

The restrictions above are basically meant to prioritize effort and focus on
getting cells v2 completed; feature requests and hard-to-fix latent bugs
detract from that effort. Further discussion on this can be found in the
`2015/11/12 Nova meeting minutes
<http://eavesdrop.openstack.org/meetings/nova/2015/nova.2015-11-12-14.00.log.html>`_.

There are no plans to remove Cells V1 until V2 is usable by existing
deployments and there is a migration path.
.. _cells-v2:
Cells V2
@ -165,23 +112,6 @@ The benefits of this new organization are:
* Adding new sets of hosts as a new "cell" allows them to be plugged into a
deployment and tested before allowing builds to be scheduled to them.
Comparison with Cells V1
------------------------

In reality, the proposed organization is nearly the same as what we currently
have in cells today. A cell mostly consists of a database, queue, and set of
compute nodes. The primary difference is that current cells require a
``nova-cells`` service that synchronizes information up and down from the top
level to the child cell. Additionally, there are alternate code paths in
``compute/api.py`` which handle routing messages to cells instead of directly
down to a compute host. Both of these differences are relevant to why we have
a hard time achieving feature and test parity with regular nova (because many
things take an alternate path with cells) and why it's hard to understand
what is going on (all the extra synchronization of data). The new proposed
cells v2 organization avoids both of these problems by letting things live
where they should, teaching nova to natively find the right database, queue,
and compute node to handle a given request.
Database split
~~~~~~~~~~~~~~
@ -572,6 +502,9 @@ database. This will set up a single cell Nova deployment.
Upgrade with Cells V1
~~~~~~~~~~~~~~~~~~~~~
.. todo::

   This needs to be removed, but `Adding a new cell to an existing
   deployment`_ is still using it.
You are upgrading an existing Nova install that has Cells V1 enabled and have
compute hosts in your databases. This will set up a multiple cell Nova
deployment. At this time, it is recommended to keep Cells V1 enabled during and


@ -22,12 +22,6 @@ is geared towards people who want to have multiple cells for whatever
reason, the nature of the cellsv2 support in Nova means that it
applies in some way to all deployments.
.. note::

   The concepts laid out in this document do not in any way relate to CellsV1,
   which includes the ``nova-cells`` service, and the ``[cells]`` section of
   the configuration file. For more information on the differences, see the
   main :ref:`cells` page.
Concepts
========


@ -279,10 +279,6 @@ RPC version pinning
.. note::

   This does not apply to cells v1 deployments since cells v1 does not
   support rolling upgrades. It is assumed that cells v1 deployments are
   upgraded in lockstep so n-1 cells compatibility does not work.

The procedure for rolling upgrades with multiple cells v2 cells is not
yet determined.


@ -27,8 +27,8 @@
/nova/latest/man/nova-api-metadata.html 301 /nova/latest/cli/nova-api-metadata.html
/nova/latest/man/nova-api-os-compute.html 301 /nova/latest/cli/nova-api-os-compute.html
/nova/latest/man/nova-api.html 301 /nova/latest/cli/nova-api.html
/nova/latest/man/nova-cells.html 301 /nova/latest/cli/nova-cells.html
# this is gone and never coming back, indicate that to the end users
/nova/latest/man/nova-cells.html 301 /nova/latest/cli/nova-cells.html
/nova/latest/man/nova-compute.html 301 /nova/latest/cli/nova-compute.html
/nova/latest/man/nova-conductor.html 301 /nova/latest/cli/nova-conductor.html
/nova/latest/man/nova-console.html 301 /nova/latest/cli/nova-console.html