Install the Data processing service

This procedure installs the Data processing service (sahara) on the controller node.

To install the Data processing service on the controller:

Install the required packages.

On RHEL, CentOS, and Fedora:

  # yum install openstack-sahara python-saharaclient

On SLES and openSUSE:

  # zypper install openstack-sahara python-saharaclient

On Ubuntu:

  # apt-get install sahara python-saharaclient

  Respond to prompts for database management, Identity service credentials, service endpoint registration, and message broker credentials.

  Note: For now, sahara does not have packages for Ubuntu. This documentation will be updated once the packages are available.

The rest of this document assumes that you have the sahara service packages installed on the system.

Edit the /etc/sahara/sahara.conf configuration file:

First, edit the connection parameter in the [database] section. The URL provided here should point to an empty database. For instance, the connection string for a MySQL database is:

  connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

Switch to the [keystone_authtoken] section. The auth_uri parameter should point to the public Identity API endpoint, and identity_uri should point to the admin Identity API endpoint. For example:

  auth_uri = http://controller:5000/v2.0
  identity_uri = http://controller:35357

Next, specify admin_user, admin_password, and admin_tenant_name. These parameters must specify a keystone user that has the admin role in the given tenant. Sahara uses these credentials to authenticate and authorize its users.

Switch to the [DEFAULT] section and proceed to the networking parameters. If you are using Neutron for networking, set use_neutron = true. If you are using nova-network instead, set this parameter to false.

That should be enough for the first run. If you want to increase the logging level for troubleshooting, there are two parameters in the config: verbose and debug. If the former is set to true, sahara writes logs of INFO level and above.
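Collected together, the edits above amount to a configuration fragment like the following. This is a minimal sketch, not a complete sahara.conf: the host name controller and the values SAHARA_DBPASS, SAHARA_PASS, and the service tenant name are placeholders you must replace with your own deployment's values.

```ini
[DEFAULT]
# Set to false if you use nova-network instead of Neutron.
use_neutron = true
# Raise logging verbosity for troubleshooting (optional).
verbose = true
debug = false

[database]
# Must point to an empty database; SAHARA_DBPASS is a placeholder.
connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

[keystone_authtoken]
# Public and admin Identity API endpoints.
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
# A keystone user with the admin role in the given tenant.
# User, password, and tenant names here are example values.
admin_user = sahara
admin_password = SAHARA_PASS
admin_tenant_name = service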
If debug is set to true, sahara writes all logs, including DEBUG-level messages.

If you use the Data processing service with a MySQL database, you must configure the maximum allowed packet size so that big job binaries can be stored in the sahara internal database. Edit the my.cnf file and change the parameter:

  [mysqld]
  max_allowed_packet = 256M

Then restart the MySQL server.

Create the database schema:

  # sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head

You must register the Data processing service with the Identity service so that other OpenStack services can locate it. Register the service and specify the endpoint:

  $ keystone service-create --name sahara --type data_processing \
    --description "Data processing service"
  $ keystone endpoint-create \
    --service-id $(keystone service-list | awk '/ sahara / {print $2}') \
    --publicurl http://controller:8386/v1.1/%\(tenant_id\)s \
    --internalurl http://controller:8386/v1.1/%\(tenant_id\)s \
    --adminurl http://controller:8386/v1.1/%\(tenant_id\)s \
    --region regionOne

Start the sahara service.

On distributions with systemd:

  # systemctl start openstack-sahara-all

Otherwise:

  # service openstack-sahara-all start

(Optional) Enable the Data processing service to start on boot.

On distributions with systemd:

  # systemctl enable openstack-sahara-all

Otherwise:

  # chkconfig openstack-sahara-all on
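The %\(tenant_id\)s piece of the endpoint URLs above is not expanded by the shell (the backslashes protect the parentheses from it); it is stored literally and later filled in with the requesting tenant's ID using Python's %-style string interpolation. A small illustration, with a made-up tenant ID:

```python
# The endpoint URL is stored with a %(tenant_id)s placeholder and
# expanded per request via ordinary Python %-interpolation.
# The tenant ID below is an invented example value.
template = "http://controller:8386/v1.1/%(tenant_id)s"

url = template % {"tenant_id": "b5f7a5e2c0d34a9b8f1e"}
print(url)  # http://controller:8386/v1.1/b5f7a5e2c0d34a9b8f1e
```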