[[ha-aa-controllers]]
=== OpenStack Controller Nodes

OpenStack controller nodes contain:

* All OpenStack API services
* All OpenStack schedulers
* The Memcached service

==== Running OpenStack API & schedulers

===== API Services

Every OpenStack project provides an API service for controlling the resources it manages in the cloud.
In Active / Active mode, the most common setup is to scale these services out on at least two nodes and to
use load balancing and a virtual IP (provided by HAProxy and Keepalived in this setup).

*Configuring OpenStack API services*

To make the API services highly available and scalable, we need to ensure that:

* the virtual IP is used when configuring OpenStack Identity endpoints;
* all OpenStack configuration files refer to the virtual IP.

A sketch of registering endpoints against the virtual IP is given at the end of this section.

*In case of failure*

The monitor check is quite simple: it only establishes a TCP connection to the API port. Compared to the
Active / Passive mode using Corosync and resource agents, it does not verify that the service is actually
working. That is why every OpenStack API should also be monitored by another tool (for example, Nagios) in
order to detect failures inside the cloud framework infrastructure. A minimal HAProxy configuration
illustrating this TCP check is sketched at the end of this section.

===== Schedulers

OpenStack schedulers determine how to dispatch compute, network and volume requests. The most common setup
is to use RabbitMQ as the messaging system, as already documented in this guide.
These services connect to the messaging back end and can scale out:

* nova-scheduler
* nova-conductor
* cinder-scheduler
* neutron-server
* ceilometer-collector
* heat-engine

Please refer to the RabbitMQ section for configuring these services with multiple messaging servers
(a short configuration sketch is also included at the end of this section).

==== Memcached

Most OpenStack services use an application to offer persistence and to store ephemeral data (such as
tokens). Memcached is one of them and can scale out easily without any special trick.

To install and configure it, you can read the http://code.google.com/p/memcached/wiki/NewStart[official documentation].

Memory caching is managed through oslo-incubator, so the way to use multiple memcached servers is the same
for all projects. Example with two hosts:

----
memcached_servers = controller1:11211,controller2:11211
----

By default, controller1 handles the caching service. If that host goes down, controller2 takes over.
More information about memcached installation can be found in the OpenStack Compute Manual.
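
As an illustration of the virtual IP endpoint configuration described earlier in this section, an Identity
endpoint could be registered roughly as follows. This is only a sketch: the virtual IP (192.168.1.100), the
Compute API port and SERVICE_ID are placeholders, not values prescribed by this guide.

----
# Register the Compute API endpoint against the virtual IP instead of a
# single controller hostname (service ID and addresses are examples only)
keystone endpoint-create \
  --region RegionOne \
  --service-id=SERVICE_ID \
  --publicurl='http://192.168.1.100:8774/v2/%(tenant_id)s' \
  --internalurl='http://192.168.1.100:8774/v2/%(tenant_id)s' \
  --adminurl='http://192.168.1.100:8774/v2/%(tenant_id)s'
----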
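
The TCP-only health check mentioned under *In case of failure* can be illustrated with a minimal HAProxy
configuration. Again a sketch only: the virtual IP, the hostnames and the use of the Identity public port
(5000) are assumptions made for this example.

----
# Load-balance the Identity public API across two controllers behind the
# virtual IP managed by Keepalived. The "check" option only verifies that
# a TCP connection to the port succeeds, not that the service is healthy.
listen keystone_public
    bind 192.168.1.100:5000
    balance roundrobin
    option tcpka
    option tcplog
    server controller1 controller1:5000 check inter 2000 rise 2 fall 5
    server controller2 controller2:5000 check inter 2000 rise 2 fall 5
----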
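
Finally, for the scheduler services listed above, pointing them at several RabbitMQ brokers is a matter of
messaging configuration. A minimal sketch, assuming two brokers named controller1 and controller2; refer to
the RabbitMQ section for the authoritative settings.

----
# In nova.conf, cinder.conf, neutron.conf, etc.
rabbit_hosts = controller1:5672,controller2:5672
rabbit_ha_queues = true
----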