Install the Data processing service

This procedure installs the Data processing service (sahara) on the controller node.

To install the Data processing service on the controller:

Install the required packages.

On Red Hat Enterprise Linux, CentOS, and Fedora:

  # yum install openstack-sahara python-saharaclient

On SLES and openSUSE:

  # zypper install openstack-sahara python-saharaclient

On Debian:

  # apt-get install sahara python-saharaclient

  Respond to the prompts for database management, Identity service credentials, service endpoint registration, and message broker credentials.

Note: For now, sahara does not have packages for Ubuntu. This documentation will be updated once the packages are available. The rest of this document assumes that the sahara service packages are installed on the system.

Edit the /etc/sahara/sahara.conf configuration file.

First, set the connection option in the [database] section. The URL provided here must point to an empty database. For example, the connection string for a MySQL database is:

  connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

Switch to the [keystone_authtoken] section. The auth_uri option should point to the public Identity API endpoint and the identity_uri option should point to the admin Identity API endpoint. For example:

  auth_uri = http://controller:5000/v2.0
  identity_uri = http://controller:35357

Next, specify admin_user, admin_password, and admin_tenant_name. These parameters must identify a keystone user that has the admin role in the given tenant. Sahara uses these credentials to authenticate and authorize its users.

Switch to the [DEFAULT] section and set the networking parameters. If you are using neutron for networking, set use_neutron = true. If you are using nova-network, set it to false.

That is enough for a first run. If you want to increase the logging level for troubleshooting, two options control it: verbose and debug. If verbose is set to true, sahara writes logs of INFO level and above. If debug is set to true, sahara writes all logs, including DEBUG messages. A consolidated example configuration appears after the database creation steps below.

If you use the Data processing service with a MySQL or MariaDB database, you must configure the maximum allowed packet size so that large job binaries can be stored in the service's internal database. Edit the /etc/mysql/my.cnf file (Debian and Ubuntu) or the /etc/my.cnf file (other distributions) and change the max_allowed_packet option:

  [mysqld]
  max_allowed_packet = 256M

Restart the database service, using the command appropriate to your distribution:

  # service mysql restart
  # systemctl restart mariadb.service

To create the database, complete these steps:

1. Use the database access client to connect to the database server as the root user:

     $ mysql -u root -p

2. Create the sahara database:

     CREATE DATABASE sahara;

3. Grant proper access to the sahara database:

     GRANT ALL PRIVILEGES ON sahara.* TO 'sahara'@'localhost' \
       IDENTIFIED BY 'SAHARA_DBPASS';
     GRANT ALL PRIVILEGES ON sahara.* TO 'sahara'@'%' \
       IDENTIFIED BY 'SAHARA_DBPASS';

   Replace SAHARA_DBPASS with a suitable password.

4. Exit the database access client.
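For reference, the configuration options described above can be collected in one place. The snippet below is a minimal illustrative /etc/sahara/sahara.conf, not a complete configuration: it assumes a neutron-based deployment, the SAHARA_DBPASS placeholder used in this guide, and SAHARA_PASS as a stand-in for the password you choose for the sahara keystone user created later in this procedure. Adjust every value to match your environment.

  [DEFAULT]
  # Set to false if the deployment uses nova-network instead of neutron
  use_neutron = true
  # Raise the logging level while troubleshooting
  verbose = true
  debug = false

  [database]
  # Must point to the empty sahara database created above
  connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

  [keystone_authtoken]
  # Public and admin Identity API endpoints
  auth_uri = http://controller:5000/v2.0
  identity_uri = http://controller:35357
  # Keystone user with the admin role in the service tenant
  admin_user = sahara
  admin_password = SAHARA_PASS
  admin_tenant_name = service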
Create the database schema:

  # sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head

Source the admin credentials to gain access to admin-only CLI commands:

  $ source admin-openrc.sh

To create the service credentials, complete these steps:

1. Create the sahara user:

     $ openstack user create --password-prompt sahara
     User Password:
     Repeat User Password:
     +----------+----------------------------------+
     | Field    | Value                            |
     +----------+----------------------------------+
     | email    | None                             |
     | enabled  | True                             |
     | id       | 92fbc405a237420fbc92e9bbd697b8c8 |
     | name     | sahara                           |
     | username | sahara                           |
     +----------+----------------------------------+

2. Add the admin role to the sahara user:

     $ openstack role add --project service --user sahara admin
     +-------+----------------------------------+
     | Field | Value                            |
     +-------+----------------------------------+
     | id    | cd2cb9a39e874ea69e5d4b896eb16128 |
     | name  | admin                            |
     +-------+----------------------------------+

3. Create the sahara service entity:

     $ openstack service create --name sahara \
       --description "Data processing service" data_processing
     +-------------+----------------------------------+
     | Field       | Value                            |
     +-------------+----------------------------------+
     | description | Data processing service          |
     | enabled     | True                             |
     | id          | 6e867d0a062e4c1fb1c335b74ab7a8f4 |
     | name        | sahara                           |
     | type        | data_processing                  |
     +-------------+----------------------------------+

Create the Data processing service API endpoint:

  $ openstack endpoint create \
    --publicurl http://controller:8386/v1.1/%\(tenant_id\)s \
    --internalurl http://controller:8386/v1.1/%\(tenant_id\)s \
    --adminurl http://controller:8386/v1.1/%\(tenant_id\)s \
    --region RegionOne \
    data_processing
  +--------------+-------------------------------------------+
  | Field        | Value                                     |
  +--------------+-------------------------------------------+
  | adminurl     | http://controller:8386/v1.1/%(tenant_id)s |
  | id           | dbb754f069c949e19fc40171112bfc01          |
  | internalurl  | http://controller:8386/v1.1/%(tenant_id)s |
  | publicurl    | http://controller:8386/v1.1/%(tenant_id)s |
  | region       | RegionOne                                 |
  | service_id   | 6e867d0a062e4c1fb1c335b74ab7a8f4          |
  | service_name | sahara                                    |
  | service_type | data_processing                           |
  +--------------+-------------------------------------------+

Start the Data processing service and configure it to start when the system boots:

  # systemctl enable openstack-sahara-all.service
  # systemctl start openstack-sahara-all.service
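To confirm that the installation succeeded, check that the sahara service is running and that its API answers on the endpoint registered above. The commands below are a minimal sketch: they assume the controller host name and port 8386 used in this guide, and that your sahara release answers unauthenticated requests to the API root with a list of the API versions it supports.

  # systemctl status openstack-sahara-all.service
  $ curl http://controller:8386

The curl request should return a short JSON document listing the available API versions; the exact output varies by release. After sourcing admin-openrc.sh, you can also run openstack endpoint list to confirm that the data_processing endpoint is registered.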