We currently have three cells v2 documents in-tree:

- A 'user/cellsv2-layout' document that details the structure or architecture of a cells v2 deployment (which is to say, any modern nova deployment)
- A 'user/cells' document, which is written from a pre-cells v2 viewpoint and details the changes that cells v2 *will* require and the benefits it *would* bring. It also includes steps for upgrading from a pre-cells v2 (that is, pre-Pike) deployment or a deployment with cells v1 (which we removed in Train and probably broke long before)
- An 'admin/cells' document, which doesn't contain much other than some advice for handling down cells

Clearly there's a lot of cruft to be cleared out as well as some centralization of information that's possible. As such, we combine all of these documents into one document, 'admin/cells'. This is chosen over 'user/cells' since cells are not an end-user-facing feature. References to cells v1 and details on upgrading from pre-cells v2 deployments are mostly dropped, as are some duplicated installation/configuration steps. Formatting is fixed and Sphinx-isms are used to cross-reference config options where possible. Finally, redirects are added so that people can continue to find the relevant resources.

The result is (hopefully) a one-stop shop for all things cells v2-related that operators can use to configure and understand their deployments.

Change-Id: If39db50fd8b109a5a13dec70f8030f3663555065
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
User Documentation
End user guide
- availability-zones
- launch-instances
- metadata
- manage-ip-addresses
- certificate-validation
- resize
- reboot
- rescue
- block-device-mapping
- /reference/api-microversion-history
The rest of this document should probably move to the admin guide.
Architecture Overview
- Nova architecture </user/architecture>: An overview of how all the parts in nova fit together.
- Block Device Mapping </user/block-device-mapping>: One of the more complicated parts to understand is the Block Device Mapping parameters used to connect specific block devices to computes. This deserves its own deep dive.
See the reference guide <reference-internals> for
details about more internal subsystems.
Deployment Considerations
There is information you might want to consider before doing your
deployment, especially if it is going to be a larger deployment. For
smaller deployments the defaults from the install guide </install/index> will be
sufficient.
- Compute Driver Features Supported: While the
majority of nova deployments use libvirt/kvm, you can use nova with
other compute drivers. Nova attempts to provide a unified feature set
across these, however, not all features are implemented on all backends,
and not all features are equally well tested.
- Feature Support by Use Case </user/feature-classification>: A view of what features each driver supports based on what's important to some large use cases (General Purpose Cloud, NFV Cloud, HPC Cloud).
- Feature Support full list </user/support-matrix>: A detailed dive through features in each compute driver backend.
- Cells v2 configuration </admin/cells>: For large deployments, cells v2 cells allow sharding of your compute environment. Upfront planning is key to a successful cells v2 layout.
- Placement service <>: Overview of the placement service, including how it fits in with the rest of nova.
- Running nova-api on wsgi </user/wsgi>: Considerations for using a real WSGI container instead of the baked-in eventlet web server.
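Each cell in a cells v2 deployment has its own database and message queue, while the API database is shared across all cells. As a rough sketch of what that separation looks like in configuration, the services in one cell might point at that cell's backends like so (hostnames and credentials here are placeholders, not recommendations):

```ini
# nova.conf fragment for services in a single cell (hypothetical hosts/credentials)
[DEFAULT]
# This cell's message queue
transport_url = rabbit://nova:SECRET@cell1-rabbit.example.com:5672/

[database]
# This cell's nova database
connection = mysql+pymysql://nova:SECRET@cell1-db.example.com/nova

[api_database]
# The global API database, shared by all cells
connection = mysql+pymysql://nova:SECRET@api-db.example.com/nova_api
```

A second cell would use the same layout with its own `transport_url` and `[database] connection`, while keeping the same `[api_database]` settings; see the cells v2 configuration guide linked above for the full procedure.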
Maintenance
Once you are running nova, the following information is extremely useful.
- Admin Guide </admin/index>: A collection of guides for administering nova.
- Quotas </user/quotas>: Managing project quotas in nova.
- Availability Zones </admin/availability-zones>: Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. They can be used to partition a cloud on arbitrary factors, such as location (country, datacenter, rack), network layout and/or power source.
- Scheduling </admin/scheduling>: How the scheduler is configured, and how that will impact where compute instances land in your environment. If you are seeing unexpected distribution of compute instances in your hosts, you'll want to dive into this configuration.
- Exposing custom metadata to compute instances </admin/vendordata>: How and when you might want to extend the basic metadata exposed to compute instances (either via metadata server or config drive) for your specific purposes.
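Availability zones are built on top of host aggregates, and the scheduler only honours a zone requested at boot time when the relevant filter is enabled. A minimal sketch of the scheduler side, assuming the long-standing default filter names (tune the list for your own site):

```ini
# nova.conf fragment for the scheduler (a sketch, not a recommended production set)
[filter_scheduler]
# AvailabilityZoneFilter enforces the zone an instance requested at boot
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter

[DEFAULT]
# Zone reported for hosts that do not belong to any zoned aggregate
default_availability_zone = nova
```

The scheduling and availability zones guides linked above cover how these options interact with host aggregates in detail.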