+
+ | Dashboard (horizon) |
+ Controller |
+ Configured to use memcached as a session store,
+ neutron support is enabled,
+ can_set_mount_point = False |
+ The Dashboard is run on all controller nodes, ensuring at least one
+ instance will be available in case of node failure. It also sits behind
+ HAProxy, which detects when the software fails and routes requests
+ around the failing instance. |
+ The Dashboard is run on all controller nodes, so scalability can be achieved with additional
+ controller nodes. HAProxy allows scalability for the Dashboard as more nodes are added. |
+
+
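+ As a rough illustration, the Dashboard settings above might appear as follows in
+ horizon's local_settings.py. The memcached endpoint is a placeholder, and option
+ names should be checked against the deployed release.
+
+ ----
+ # /etc/openstack-dashboard/local_settings.py (sketch; address is hypothetical)
+ SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+ CACHES = {
+     'default': {
+         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
+         'LOCATION': '192.168.1.11:11211',
+     }
+ }
+ # Prevent users from setting volume mount points from the Dashboard
+ OPENSTACK_HYPERVISOR_FEATURES = {
+     'can_set_mount_point': False,
+ }
+ ----
+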
+ | Identity (keystone) |
+ Controller |
+ Configured to use memcached for caching and PKI for tokens. |
+ Identity is run on all controller nodes, ensuring at least one instance will be available
+ in case of node failure. Identity also sits behind HAProxy, which detects when the software fails
+ and routes requests around the failing instance. |
+ Identity is run on all controller nodes, so scalability can be achieved with additional
+ controller nodes. HAProxy allows scalability for Identity as more nodes are added. |
+
+
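+ A minimal sketch of the corresponding keystone.conf entries is shown below; the
+ exact option names for PKI tokens varied between releases, and the memcached
+ addresses are placeholders.
+
+ ----
+ # /etc/keystone/keystone.conf (sketch)
+ [token]
+ # PKI-signed tokens, with token data kept in memcached
+ provider = keystone.token.providers.pki.Provider
+ driver = keystone.token.backends.memcache.Token
+
+ [memcache]
+ # memcached endpoints used for caching (hypothetical addresses)
+ servers = 192.168.1.11:11211,192.168.1.12:11211
+ ----
+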
+ | Image Service (glance) |
+ Controller |
+ /var/lib/glance/images is a GlusterFS native
+ mount to a Gluster volume off the storage layer. |
+ The Image Service is run on all controller nodes, ensuring at least one
+ instance will be available in case of node failure. It also sits behind
+ HAProxy, which detects when the software fails and routes requests
+ around the failing instance. |
+ The Image Service is run on all controller nodes, so scalability can be achieved with additional controller
+ nodes. HAProxy allows scalability for the Image Service as more nodes are added. |
+
+
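+ The image store itself needs no special Image Service configuration beyond the
+ default filesystem store; the Gluster volume is simply mounted at the expected
+ path. A sketch, with hypothetical server and volume names:
+
+ ----
+ # /etc/glance/glance-api.conf (sketch)
+ [DEFAULT]
+ default_store = file
+ filesystem_store_datadir = /var/lib/glance/images/
+
+ # /etc/fstab entry mounting the Gluster volume at that path (hypothetical names)
+ storage01:/glance-vol  /var/lib/glance/images  glusterfs  defaults,_netdev  0 0
+ ----
+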
+ | Compute (nova) |
+ Controller, Compute |
+ Configured to use Qpid, qpid_heartbeat = 10,
+ configured to use memcached for caching,
+ configured to use libvirt, configured to use
+ neutron.
+ Configured nova-consoleauth to use
+ memcached for session management (so
+ that multiple copies can run behind a load balancer). |
+ The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are
+ run on all controller nodes, ensuring at least one instance will be
+ available in case of node failure. Compute is also behind HAProxy,
+ which detects when the software fails and routes requests around the
+ failing instance.
+ Compute's compute and conductor services, which run on the compute nodes, serve only the node
+ they run on, so their availability is tied directly to the availability of that node. As long
+ as a compute node is up, it will have the needed services running on top of it.
+ |
+ The nova API, scheduler, objectstore, cert, consoleauth, conductor, and
+ vncproxy services are run on all controller nodes, so scalability can be
+ achieved with additional controller nodes. HAProxy allows scalability
+ for Compute as more nodes are added. The scalability of services running
+ on the compute nodes (compute, conductor) is achieved linearly by adding
+ in more compute nodes. |
+
+
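+ The Compute settings described above map onto nova.conf roughly as follows; host
+ names and addresses are placeholders, and some option names changed in later
+ releases.
+
+ ----
+ # /etc/nova/nova.conf (sketch)
+ [DEFAULT]
+ rpc_backend = nova.openstack.common.rpc.impl_qpid
+ qpid_hostname = 192.168.1.10
+ qpid_heartbeat = 10
+ # memcached used for caching and for nova-consoleauth tokens,
+ # allowing multiple consoleauth copies behind the load balancer
+ memcached_servers = 192.168.1.11:11211,192.168.1.12:11211
+ compute_driver = libvirt.LibvirtDriver
+ libvirt_type = kvm
+ # Use OpenStack Networking (neutron) rather than nova-network
+ network_api_class = nova.network.neutronv2.api.API
+ ----
+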
+ | Block Storage (cinder) |
+ Controller |
+ Configured to use Qpid, qpid_heartbeat = 10,
+ configured to use a Gluster volume from the storage layer as the backend
+ for Block Storage, using the Gluster native client. |
+ Block Storage API, scheduler, and volume services are run on all controller nodes,
+ ensuring at least one instance will be available in case of node failure. Block
+ Storage also sits behind HAProxy, which detects if the software fails and routes
+ requests around the failing instance. |
+ Block Storage API, scheduler and volume services are run on all
+ controller nodes, so scalability can be achieved with additional
+ controller nodes. HAProxy allows scalability for Block Storage as more
+ nodes are added. |
+
+
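+ A corresponding cinder.conf sketch, assuming the in-tree GlusterFS driver of that
+ era; server and volume names are placeholders.
+
+ ----
+ # /etc/cinder/cinder.conf (sketch)
+ [DEFAULT]
+ rpc_backend = cinder.openstack.common.rpc.impl_qpid
+ qpid_heartbeat = 10
+ volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
+ glusterfs_shares_config = /etc/cinder/shares.conf
+ glusterfs_mount_point_base = /var/lib/cinder/volumes
+
+ # /etc/cinder/shares.conf lists one Gluster volume per line, for example:
+ # storage01:/cinder-vol
+ ----
+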
+ | OpenStack Networking (neutron) |
+ Controller, Compute, Network |
+ Configured to use Qpid, qpid_heartbeat = 10,
+ kernel namespace support enabled, tenant_network_type =
+ vlan, allow_overlapping_ips =
+ true, bridge_uplinks =
+ br-ex:em2, bridge_mappings =
+ physnet1:br-ex |
+ The OpenStack Networking service is run on all controller nodes,
+ ensuring at least one instance will be available in case of node
+ failure. It also sits behind HAProxy, which detects if the software
+ fails and routes requests around the failing instance.
+ OpenStack Networking's ovs-agent,
+ l3-agent, dhcp-agent, and
+ metadata-agent services run on the
+ network nodes, as lsb resources inside of
+ Pacemaker. This means that in the case of network node failure,
+ services are kept running on another node. Finally, the
+ ovs-agent service is also run on all
+ compute nodes, and in case of compute node failure, the other nodes
+ will continue to function using the copy of the service running on
+ them. |
+ The OpenStack Networking server service is run on all controller nodes,
+ so scalability can be achieved with additional controller nodes. HAProxy
+ allows scalability for OpenStack Networking as more nodes are added.
+ Scalability of services running on the network nodes is not currently
+ supported by OpenStack Networking, so they are not considered here. One
+ copy of the services should be sufficient to handle the workload.
+ Scalability of the ovs-agent running on compute
+ nodes is achieved by adding in more compute nodes as necessary. |
+
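+ A sketch of the matching OpenStack Networking configuration is shown below. The
+ VLAN range is a placeholder, and bridge_uplinks is typically applied by the
+ deployment tooling (for example, by adding em2 to br-ex) rather than set directly
+ in the plug-in file.
+
+ ----
+ # /etc/neutron/neutron.conf (sketch)
+ [DEFAULT]
+ rpc_backend = neutron.openstack.common.rpc.impl_qpid
+ qpid_heartbeat = 10
+ allow_overlapping_ips = True
+
+ # /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini (sketch)
+ [ovs]
+ tenant_network_type = vlan
+ network_vlan_ranges = physnet1:1000:2999
+ bridge_mappings = physnet1:br-ex
+ ----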
+
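+ The availability pattern used throughout this table relies on HAProxy fronting
+ each API on the controllers. A minimal fragment, with hypothetical virtual IP and
+ controller addresses, might look like this:
+
+ ----
+ # /etc/haproxy/haproxy.cfg (fragment; addresses and ports are hypothetical)
+ listen dashboard_cluster
+     bind 192.168.1.100:80
+     balance source
+     option httpchk
+     # Health checks pull failed instances out of rotation
+     server controller1 192.168.1.11:80 check inter 2000 rise 2 fall 5
+     server controller2 192.168.1.12:80 check inter 2000 rise 2 fall 5
+ ----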
+