openstack-manuals/doc/admin-guide/source/compute-adv-config.rst
Stephen Finucane deff6c812e [admin-guide] Document NUMA and CPU pinning
The NUMA and CPU pinning features are closely related but have
different use cases. Document how these features are useful, and
when they should be used.

Change-Id: I3c31c1a132c8e87fe864c8b68fcf2e33ec42750d
2016-05-16 16:37:02 +01:00


Advanced configuration
======================

OpenStack clouds run on platforms that differ greatly in the capabilities that they provide. By default, the Compute service seeks to abstract the underlying hardware that it runs on, rather than exposing specifics about the underlying host platforms. This abstraction manifests itself in many ways. For example, rather than exposing the types and topologies of the CPUs running on hosts, the service exposes a number of generic CPUs (virtual CPUs, or vCPUs) and allows these to be overcommitted. In a similar manner, rather than exposing the individual types of network devices available on hosts, generic software-powered network ports are provided. These features are designed to allow high resource utilization and to provide a generic, cost-effective, and highly scalable cloud upon which to build applications.
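For example, the degree of vCPU overcommit is controlled by the
``cpu_allocation_ratio`` option in ``nova.conf``. A minimal sketch, assuming
the historical default ratio of ``16.0`` (the value shown here is
illustrative; check the configuration reference for your release):

.. code-block:: ini

   [DEFAULT]
   # Allow the scheduler to place up to 16 vCPUs per physical CPU core.
   # Lower this value for hosts running workloads that need more
   # deterministic access to CPU time.
   cpu_allocation_ratio = 16.0

A higher ratio increases utilization at the cost of per-instance
performance predictability, which is the trade-off discussed below.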

This abstraction is beneficial for most workloads. However, there are some workloads where determinism and per-instance performance are important, if not vital. In these cases, instances are expected to deliver near-native performance. The Compute service provides features to improve the performance of individual instances for these kinds of workloads.
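As a concrete example, CPU pinning (covered in ``compute-numa-cpu-pinning``)
is requested by setting a property on a flavor. A minimal sketch, assuming an
existing flavor named ``pinned.large`` (a hypothetical name):

.. code-block:: console

   $ openstack flavor set pinned.large --property hw:cpu_policy=dedicated

Instances booted with this flavor have each vCPU pinned to a dedicated
physical CPU core, trading overall utilization for determinism.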

.. toctree::
   :maxdepth: 2

   compute-pci-passthrough
   compute-numa-cpu-pinning