Format INSTALLATION document. New TRY_IT document with basic commands.

Change-Id: Id855ea88cfd5574ad534f842eef5dca7484beb79
Signed-off-by: Pino de Candia <giuseppe.decandia@gmail.com>
INSTALLATION.rst

===============
Installing Tatu
===============

Devstack
--------

After installing Tatu, look at the TRY_IT document in this repository for step
by step instructions on using it for the first time.

So far (March 2018) I've been developing Tatu on my devstack instance. The
devstack plugin is mostly working. See the README under tatu/devstack.

Installation Tools
------------------
No work has been done to automate Tatu installation for production. We plan
to provide Ansible and Kolla installers, but this is just a vague intent at the
moment (March 2018).

Manual Installation
-------------------

A good guide to manual installation is to look at Tatu's devstack plugin (in
tatu/devstack/plugin.sh). This document's steps may become stale, but the
steps are given in more detail and with some motivation.

Note that there are 2 daemons: an API daemon and a Notifications daemon.

Get the code
------------
On your controller node, in a development directory::

    git clone https://github.com/openstack/tatu
    cd tatu
    python setup.py develop

Modify Tatu's cloud-init script
-------------------------------

**WARNING: user-cloud-config has only been tested on Fedora-Cloud-Base-25-1.3.x86_64**

tatu/files/user-cloud-config is a cloud-init script that needs to run once on
every VM.

* It extracts Tatu's **dynamic** vendor data from ConfigDrive;
* Finds the one-time-token and uses it in the call to Tatu's /noauth/hostcerts
  API;
* Does the user account and SSH configuration;
* Finally, sets up a cron job to periodically refresh the revoked-keys file
  from Tatu.

If you're using my branch of Dragonflow
(https://github.com/pinodeca/dragonflow/tree/tatu) then a VM can reach the Tatu
API at http://169.254.169.254/noauth via the Metadata Proxy. However, if you're
using any other Neutron driver, you'll need to modify the cloud-init script.
Replace::

    url=http://169.254.169.254/….

in tatu/files/user-cloud-config **in 2 places**, with::

    url=http://<Tatu API's VM-accessible address>/….

And make sure any VMs you deploy are in Tenants and Networks that have SNAT
enabled (or give every VM a FloatingIP).
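
As an illustration of the substitution above: assuming your Tatu API is
reachable from VMs at 203.0.113.10:18322 (a placeholder address, not from this
repository), something like this edits both occurrences in one shot::

    # Placeholder address; use your Tatu API's VM-accessible address and port.
    sed -i 's|http://169.254.169.254|http://203.0.113.10:18322|g' files/user-cloud-config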

Prepare the cloud-init script as static vendor data...
------------------------------------------------------

How does Tatu's cloud-init script get into the VMs you deploy? There are two
ways.

The first and recommended way (and what I did in the video demo) is to use
**static** vendor data. First, convert the (possibly modified) cloud-init to
vendor-data by running the following command from the tatu directory::

    scripts/cloud-config-to-vendor-data files/user-cloud-config > /etc/nova/tatu_static_vd.json
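
A quick sanity check that the generated file is well-formed JSON::

    python -m json.tool /etc/nova/tatu_static_vd.json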

And now modify /etc/nova/nova-cpu.conf as follows::

    [api]
    vendordata_providers = StaticJSON,DynamicJSON
    vendordata_jsonfile_path = /etc/nova/tatu_static_vd.json

...or pass it as user-data for each VM launch
---------------------------------------------

The second/alternative way to get the cloud-init script into your VM is to pass
it as user-data at launch time. The Horizon instance launch panel has a tab
with a text field to paste a cloud-init user data script. Users will have to
paste Tatu's user-cloud-config script at every launch. Obviously, this isn't as
good a user experience.

Configure dynamic vendor data
-----------------------------

In order to configure SSH, Tatu's cloud-init script needs some data unique
to each VM:

* A one-time-token generated by Tatu for the specific VM;
* The list of user accounts to configure (based on Keystone roles in the VM's
  project);
* The list of user accounts that need sudo access.

As well as some data that's common to VMs in the project:

* The project's public key for validating User SSH certificates.
* A non-standard SSH port (if configured).

All this information is passed to the VM as follows:

* At launch time, Nova Compute calls Tatu's dynamic vendordata API using
  Keystone authentication with tokens.
* Nova writes the vendordata to ConfigDrive.

  * Note: to protect the one-time-token and the user account names, it's best
    not to expose this information via the metadata API.

To enable ConfigDrive, add this to /etc/nova/nova-cpu.conf::

    [DEFAULT]
    force_config_drive=True
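
To confirm the data actually reaches a VM, you can mount the ConfigDrive from
inside a booted instance and inspect it; the paths below are the standard
ConfigDrive layout, and dynamic vendor data lands in vendor_data2.json keyed
by the target name ("tatu") configured further down::

    # Inside the VM: the ConfigDrive is the block device labeled config-2.
    sudo mount -r /dev/disk/by-label/config-2 /mnt
    cat /mnt/openstack/latest/vendor_data2.json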

**TODO: disable Tatu vendor data availability via MetaData API. May require
Nova changes.**

To get Nova Compute talking to Tatu, add this to /etc/nova/nova-cpu.conf::

    [api]
    vendordata_providers = StaticJSON, DynamicJSON
    vendordata_dynamic_targets = 'tatu@http://127.0.0.1:18322/novavendordata'
    vendordata_dynamic_connect_timeout = 5
    vendordata_dynamic_read_timeout = 30

    [vendordata_dynamic_auth]
    auth_url = http://127.0.0.1/identity
    auth_type = password
    username = admin
    password = pinot
    project_id = 2e6c998ad16f4045821304470a57d160
    user_domain_name = default

Of course, modify the IP addresses, project ID, username and password as
appropriate.

Prepare /etc/tatu/tatu.conf
---------------------------

::

    cd tatu
    mkdir /etc/tatu
    cp files/tatu.conf /etc/tatu/

Edit /etc/tatu/tatu.conf::

    use_pat_bastions = False
    sqlalchemy_engine = <URI for your database, e.g. mysql+pymysql://root:pinot@127.0.0.1/tatu>
    auth_url = <location of identity API>
    user_id = <ID of the Admin user>
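
For illustration only, a filled-in tatu.conf on a single-node devstack might
look like this (the database password and admin user ID below are
placeholders, not values from this repository)::

    use_pat_bastions = False
    sqlalchemy_engine = mysql+pymysql://root:pinot@127.0.0.1/tatu
    auth_url = http://127.0.0.1/identity
    user_id = 1ca9178a284a4ca6a17d8b6722989feb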

Launch Tatu's notification daemon
---------------------------------

Tatu's notification daemon only needs tatu.conf, so we can launch it now.
Tatu listens on topic "tatu_notifications" for:

* Project creation and deletion events from Keystone.

  * To create new CA key pairs or clean up unused ones.

* Role assignment deletion events from Keystone.

  * To revoke user SSH certificates that are too permissive.

* VM deletion events from Nova.

  * To clean up per-VM bastion and DNS state.

Edit both /etc/keystone/keystone.conf and /etc/nova/nova.conf as follows::

    [oslo_messaging_notifications]
    topics = notifications,tatu_notifications
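
Keystone and Nova only pick up this change on restart. On a devstack node the
unit names are typically the following (they may differ in your deployment)::

    sudo systemctl restart devstack@keystone.service
    sudo systemctl restart 'devstack@n-*'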

Now launch Tatu's notification listener daemon::

    python tatu/notifications.py

At first launch you should see debug messages indicating that CA key pairs are
being created for all existing projects.

Prepare /etc/tatu/paste.ini
---------------------------

::

    cd tatu
    mkdir /etc/tatu
    cp files/paste.ini /etc/tatu/

paste.ini should only need these modifications:

* Host (address the daemon will listen on)
* Port (port the daemon will listen on)
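
As a sketch of where those two settings usually live, assuming files/paste.ini
follows the standard PasteDeploy layout (the section and egg names in the
actual file may differ)::

    [server:main]
    use = egg:Paste#http
    host = 0.0.0.0
    port = 18322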

Launch Tatu's API daemon
------------------------

Tatu's API daemon needs both tatu.conf and paste.ini. We can launch it now.
I have done all my testing with Pylons (no good reason, I'm new to wsgi
frameworks)::

    pip install pylons
    pserve files/paste.ini

Note the API serves /noauth/hostcerts and /noauth/revokeduserkeys without
authorization (so that newly bootstrapped servers can get their
certificates and the list of revoked keys).
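
A quick smoke test (any HTTP response, even an error status, shows the daemon
is listening on the configured host and port; the exact parameters the
endpoint expects may differ)::

    curl -i http://127.0.0.1:18322/noauth/revokeduserkeys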

Register Tatu API in Keystone
-----------------------------

Create the service first, then the endpoint::

    openstack service create --name tatu --description "OpenStack SSH Management" ssh
    openstack endpoint create --region RegionOne ssh public http://147.75.72.229:18322/

Thanks to this registration, neither the dashboard nor CLI need configuration
to find Tatu.
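
You can verify the registration with the standard catalog commands::

    openstack service list
    openstack endpoint list --service ssh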

Installing tatu-dashboard
=========================

Do the following wherever horizon is installed::

    git clone https://github.com/openstack/tatu-dashboard
    cd tatu-dashboard
    python setup.py develop

Copy (or soft link) files from tatu-dashboard/tatudashboard/enabled to
horizon/openstack_dashboard/local/enabled/. Then, from the horizon directory,
run::

    python manage.py compress
    service apache2 restart

Installing python-tatuclient
============================

On any host where you want to run "openstack ssh"::

    git clone https://github.com/pinodeca/python-tatuclient
    cd python-tatuclient
    python setup.py develop

README.rst

Installation
------------
Please see the INSTALLATION document in this repository. Then see the TRY_IT
document as well for step by step instructions on using it.

APIs, Horizon Panels, and OpenStack CLIs
----------------------------------------

TRY_IT.rst

Notes on using Tatu for the first time
======================================

If you don't already have one, generate an ssh key pair on your client
machine::

    ssh-keygen

Now generate a certificate for your public key (this can also be done in
Horizon). First set your environment variables to select your user and
project::

    source openrc demo demo
    openstack ssh usercert create -f value -c Certificate "`cat ~/.ssh/id_rsa.pub`" > ~/.ssh/id_rsa-cert.pub
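
To sanity-check the certificate you just generated (key ID, principals,
validity window), decode it with OpenSSH's own tooling::

    ssh-keygen -L -f ~/.ssh/id_rsa-cert.pub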

Now get the host CA public key for your project. This command appends the key
to your known_hosts file and configures it to be trusted for any hostname in
any domain::

    echo '@cert-authority * ' `openstack ssh ca show 626bfa8fd12b48d8b674caf4ef3a0cd7 -f value -c 'Host Public Key'` >> ~/.ssh/known_hosts
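
The long hex argument is the project's ID (the value above is from the
author's environment); you can look up your own with::

    openstack project show demo -f value -c id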
Now launch a VM without a Key Pair. Unless you're using Dragonflow and Tatu's
experimental PAT bastion feature, assign a floating IP to the VM, for example
172.24.4.10.
Use the following to