
Use already Deployed/Installed servers

This patch provides a set of templates that enable
tripleo-heat-templates to be used with a set of already deployed,
installed, and running servers. In this method, Nova and Ironic are not
used to deploy any servers.

This approach is attractive for POC deployments where dedicated
provisioning networks are not available, or other server install methods
are dictated for various reasons.

There are also assumptions that currently must be made about the software
installed on the already deployed servers: effectively, it must match the
standard TripleO overcloud-full image.

Co-Authored-By: Steve Hardy <shardy@redhat.com>

Change-Id: I4ab1531f69c73457653f1cca3fe30cc32a04c129
changes/72/222772/21
James Slagle (7 years ago)
parent commit: c3d595c49a
 1. deployed-server/README.rst (129)
 2. deployed-server/deployed-server-config.yaml (22)
 3. deployed-server/deployed-server.yaml (122)
 4. deployed-server/scripts/get-occ-config.sh (113)
 5. environments/deployed-server-environment.yaml (3)
 6. net-config-static-bridge-with-external-dhcp.yaml (99)
 7. overcloud-resource-registry-puppet.yaml (3)
 8. puppet/ceph-storage.yaml (2)
 9. puppet/cinder-storage.yaml (2)
10. puppet/compute.yaml (2)
11. puppet/controller.yaml (2)

129
deployed-server/README.rst

@@ -0,0 +1,129 @@
TripleO with Deployed Servers
=============================

The deployed-server set of templates can be used to deploy TripleO via
tripleo-heat-templates to servers that are already installed with a base
operating system.

When OS::TripleO::Server is mapped to the deployed-server.yaml template via the
provided deployed-server-environment.yaml resource registry, Nova and Ironic
are not used to create any server instances. Heat continues to create the
SoftwareDeployment resources, and they are made available to the already
deployed and running servers.

Template Usage
--------------

To use these templates, pass the included environment file to the deployment
command::

    -e deployed-server/deployed-server-environment.yaml
Deployed Server configuration
-----------------------------

It is currently assumed that the deployed servers being used have the required
set of software and packages already installed on them. These requirements
must match how such a server would look if it were deployed the standard way
via Ironic using the TripleO overcloud-full image.

An easy way to get this set up for development is to use an overcloud-full
image from an already existing TripleO setup. Create the VMs for the already
deployed servers, and use the overcloud-full image as their disk.

Each server must have an FQDN set that resolves to an IP address on a routable
network (e.g., the hostname should not resolve to 127.0.0.1). The hostname
will be detected on each server via the ``hostnamectl --static`` command.
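The hostname requirement can be verified up front with a small shell check. A sketch only: the function name is ours, not part of the templates, and python3 is used for the address lookup:

```shell
# check_resolves_routable HOST: succeed only if HOST resolves to a
# non-loopback address, as the requirement above demands.
check_resolves_routable() {
    local host=$1 ip
    ip=$(python3 -c "import socket; print(socket.gethostbyname('$host'))") || return 2
    case "$ip" in
        127.*) echo "$host resolves to loopback ($ip): not usable"; return 1 ;;
        *)     echo "$host resolves to $ip: ok"; return 0 ;;
    esac
}

# localhost fails the check, since it resolves to 127.0.0.1.
check_resolves_routable localhost || true
```

Running this with the server's actual `hostnamectl --static` value should print the "ok" branch before you proceed.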
Each server must also have a route to the configured IP address on the
undercloud where the OpenStack services are listening. This is the value of
local_ip in undercloud.conf.

It's recommended that each server have at least two NICs: one for external
management such as ssh, and one for the OpenStack deployment itself. Since the
overcloud deployment will reconfigure networking on the configured NIC for use
by OpenStack, the external management NIC is needed as a fallback so that all
connectivity is not lost in case of a configuration error. Be sure to use the
correct NIC config templates as needed, since the nodes will not receive DHCP
from the undercloud neutron-dhcp-agent service.

For example, the net-config-static-bridge.yaml template could be used for
controllers, and the net-config-static.yaml template could be used for
computes, by specifying::

    resource_registry:
      OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/deployed-server/tripleo-heat-templates/net-config-static-bridge.yaml
      OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/deployed-server/tripleo-heat-templates/net-config-static.yaml

In a setup where the first NIC on the servers is used for external management,
set the NIC to be used for OpenStack to nic2::

    parameter_defaults:
      NeutronPublicInterface: nic2
      HypervisorNeutronPublicInterface: nic2

The above NIC config templates also require a route to the ctlplane network to
be defined. Define the needed parameters as necessary for your environment,
for example::

    parameter_defaults:
      ControlPlaneDefaultRoute: 192.168.122.130
      ControlPlaneSubnetCidr: "24"
      EC2MetadataIp: "192.0.2.1"

In this example, 192.168.122.130 is the external management IP of the
undercloud, and is thus the default route for the configured local_ip value of
192.0.2.1.
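The relationship between these values can be sanity-checked before deploying. A sketch, not part of the templates: 192.0.2.5 is an illustrative server ctlplane IP, and python3's ipaddress module does the subnet math:

```shell
# Verify that the EC2MetadataIp next hop sits on the server's ctlplane
# subnet, so the 169.254.169.254/32 route is actually reachable.
server_ip=192.0.2.5        # illustrative ctlplane IP of a deployed server
subnet_cidr=24             # ControlPlaneSubnetCidr
ec2_metadata_ip=192.0.2.1  # EC2MetadataIp (the undercloud local_ip)

python3 - "$server_ip" "$subnet_cidr" "$ec2_metadata_ip" <<'EOF'
import ipaddress
import sys

server_ip, cidr, metadata_ip = sys.argv[1:4]
net = ipaddress.ip_interface(f"{server_ip}/{cidr}").network
ok = ipaddress.ip_address(metadata_ip) in net
print("metadata next hop on ctlplane subnet:", ok)
EOF
```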
os-collect-config
-----------------

os-collect-config on each deployed server must be manually configured to poll
the Heat API for the available SoftwareDeployments. An example configuration
for /etc/os-collect-config.conf looks like::

    [DEFAULT]
    collectors=heat
    command=os-refresh-config

    [heat]
    # You can get these values from stackrc on the undercloud.
    user_id=<a user that can connect to heat> # Note: this must be the ID, not the username.
    password=<a password>
    auth_url=<keystone url>
    project_id=<project_id> # Note: this must be the ID, not the project name.
    stack_id=<stack_id>
    resource_name=<resource_name>

Note that the stack_id value is the ID of the nested stack containing the
resource (identified by resource_name) implemented by the deployed-server.yaml
templates.

Once the configuration for os-collect-config has been defined, the service
needs to be restarted. Once restarted, it will start polling Heat and applying
the SoftwareDeployments.

A sample script at deployed-server/scripts/get-occ-config.sh is included that
will automatically generate the os-collect-config configuration needed on each
server, ssh to each server, copy the configuration, and restart the
os-collect-config service.

.. warning::
   The get-occ-config.sh script is not intended for production use, as it
   copies admin credentials to each of the deployed nodes.

The script can only be used once the stack IDs of the nested deployed-server
stacks have been created via Heat. This usually takes only a couple of minutes
once the deployment command has been started. Once the following output is
seen from the deployment command, the script should be ready to run::

    [Controller]: CREATE_IN_PROGRESS state changed
    [NovaCompute]: CREATE_IN_PROGRESS state changed

The user running the script must be able to ssh as root to each server. Define
the hostnames of the deployed servers you intend to use for each role type::

    export controller_hosts="controller0 controller1 controller2"
    export compute_hosts="compute0"

Then run the script on the undercloud with a stackrc file sourced. The script
will copy the needed os-collect-config.conf configuration to each server and
restart the os-collect-config service.

22
deployed-server/deployed-server-config.yaml

@@ -0,0 +1,22 @@
heat_template_version: 2014-10-16

parameters:
  user_data_format:
    type: string
    default: SOFTWARE_CONFIG

resources:
  # We just need something which returns a unique ID, but we can't
  # use RandomString because RefId returns the value, not the physical
  # resource ID. SoftwareConfig should work as it returns a UUID.
  deployed-server-config:
    type: OS::Heat::SoftwareConfig

outputs:
  # FIXME(shardy) this is needed because TemplateResource returns an
  # ARN not a UUID, which overflows the Deployment server_id column..
  user_data_format:
    value: SOFTWARE_CONFIG
  OS::stack_id:
    value: {get_resource: deployed-server-config}

122
deployed-server/deployed-server.yaml

@@ -0,0 +1,122 @@
heat_template_version: 2014-10-16

parameters:
  image:
    type: string
    default: unused
  flavor:
    type: string
    default: unused
  key_name:
    type: string
    default: unused
  security_groups:
    type: json
    default: []
  # Require this so we can validate the parent passes the
  # correct value
  user_data_format:
    type: string
  user_data:
    type: string
    default: ''
  name:
    type: string
    default: ''
  image_update_policy:
    type: string
    default: ''
  networks:
    type: comma_delimited_list
    default: ''
  metadata:
    type: json
    default: {}
  software_config_transport:
    default: POLL_SERVER_CFN
    type: string
  scheduler_hints:
    type: json
    description: Optional scheduler hints to pass to nova
    default: {}

resources:
  # We just need something which returns a unique ID, but we can't
  # use RandomString because RefId returns the value, not the physical
  # resource ID. SoftwareConfig should work as it returns a UUID.
  deployed-server:
    type: OS::TripleO::DeployedServerConfig
    properties:
      user_data_format: SOFTWARE_CONFIG

  InstanceIdConfig:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        instance-id: {get_attr: [deployed-server, "OS::stack_id"]}

  InstanceIdDeployment:
    type: OS::Heat::StructuredDeployment
    properties:
      config: {get_resource: InstanceIdConfig}
      server: {get_resource: deployed-server}

  HostsEntryConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -eux
        mkdir -p $heat_outputs_path
        host=$(hostnamectl --static)
        echo -n "$host " > $heat_outputs_path.hosts_entry
        host_ip=$(python -c "import socket; print socket.gethostbyname(\"$host\")")
        echo -n "$host_ip " >> $heat_outputs_path.hosts_entry
        echo >> $heat_outputs_path.hosts_entry
        cat $heat_outputs_path.hosts_entry
        echo -n $host_ip > $heat_outputs_path.ip_address
        cat $heat_outputs_path.ip_address
        echo -n $host > $heat_outputs_path.hostname
        cat $heat_outputs_path.hostname
      outputs:
        - name: hosts_entry
          description: hosts_entry
        - name: ip_address
          description: ip_address
        - name: hostname
          description: hostname

  HostsEntryDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: HostsEntryConfig}
      server: {get_resource: deployed-server}

  ControlPlanePort:
    type: OS::Neutron::Port
    properties:
      network: ctlplane
      name:
        list_join:
          - '-'
          - - {get_attr: [HostsEntryDeployment, hostname]}
            - ctlplane
            - port
      replacement_policy: AUTO

outputs:
  # FIXME(shardy) this is needed because TemplateResource returns an
  # ARN not a UUID, which overflows the Deployment server_id column..
  OS::stack_id:
    value: {get_attr: [deployed-server, "OS::stack_id"]}
  networks:
    value:
      ctlplane:
        - {get_attr: [ControlPlanePort, fixed_ips, 0, ip_address]}
  name:
    value: {get_attr: [HostsEntryDeployment, hostname]}
  hosts_entry:
    value: {get_attr: [HostsEntryDeployment, hosts_entry]}
  ip_address:
    value: {get_attr: [HostsEntryDeployment, ip_address]}
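The HostsEntryConfig script above can be exercised standalone to see what the deployment outputs look like. A sketch under stated assumptions: localhost stands in for the real `hostnamectl --static` value, python3 replaces the template's python 2 one-liner, and the output path is normally supplied by Heat:

```shell
heat_outputs_path=/tmp/deployed-server-demo/output   # normally provided by Heat
mkdir -p "$(dirname "$heat_outputs_path")"

host=localhost   # stand-in for $(hostnamectl --static)

# Mirror the template: write "<hostname> <ip> " into the hosts_entry output.
echo -n "$host " > "$heat_outputs_path.hosts_entry"
host_ip=$(python3 -c "import socket; print(socket.gethostbyname('$host'))")
echo -n "$host_ip " >> "$heat_outputs_path.hosts_entry"
echo >> "$heat_outputs_path.hosts_entry"

cat "$heat_outputs_path.hosts_entry"
```

With a real FQDN in place of localhost, the hosts_entry output is what the overcloud templates aggregate into /etc/hosts entries.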

113
deployed-server/scripts/get-occ-config.sh

@@ -0,0 +1,113 @@
#!/bin/bash

set -eux

SLEEP_TIME=5

CONTROLLER_HOSTS=${CONTROLLER_HOSTS:-""}
COMPUTE_HOSTS=${COMPUTE_HOSTS:-""}
BLOCKSTORAGE_HOSTS=${BLOCKSTORAGE_HOSTS:-""}
OBJECTSTORAGE_HOSTS=${OBJECTSTORAGE_HOSTS:-""}
CEPHSTORAGE_HOSTS=${CEPHSTORAGE_HOSTS:-""}
SUBNODES_SSH_KEY=${SUBNODES_SSH_KEY:-"~/.ssh/id_rsa"}
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=Verbose -o PasswordAuthentication=no -o ConnectionAttempts=32"

read -a Controller_hosts_a <<< $CONTROLLER_HOSTS
read -a Compute_hosts_a <<< $COMPUTE_HOSTS
read -a BlockStorage_hosts_a <<< $BLOCKSTORAGE_HOSTS
read -a ObjectStorage_hosts_a <<< $OBJECTSTORAGE_HOSTS
read -a CephStorage_hosts_a <<< $CEPHSTORAGE_HOSTS

roles="Controller Compute BlockStorage ObjectStorage CephStorage"

admin_user_id=$(openstack user show admin -c id -f value)
admin_project_id=$(openstack project show admin -c id -f value)

function check_stack {
  local stack_to_check=$1

  if [ "$stack_to_check" = "|" ]; then
    echo Stack not created
    return 1
  fi

  echo Checking if $1 stack is created
  set +e
  heat resource-list $stack_to_check
  rc=$?
  set -e

  if [ ! "$rc" = "0" ]; then
    echo Stack $1 not yet created
  fi

  return $rc
}

for role in $roles; do
  while ! check_stack overcloud; do
    sleep $SLEEP_TIME
  done

  rg_stack=$(heat resource-list overcloud | grep " $role " | awk '{print $4}')
  while ! check_stack $rg_stack; do
    sleep $SLEEP_TIME
    rg_stack=$(heat resource-list overcloud | grep " $role " | awk '{print $4}')
  done

  stacks=$(heat resource-list $rg_stack | grep OS::TripleO::$role | awk '{print $4}')

  i=0

  for stack in $stacks; do
    server_resource_name=$role
    if [ "$server_resource_name" = "Compute" ]; then
      server_resource_name="NovaCompute"
    fi

    server_stack=$(heat resource-list $stack | grep " $server_resource_name " | awk '{print $4}')
    while ! check_stack $server_stack; do
      sleep $SLEEP_TIME
      server_stack=$(heat resource-list $stack | grep " $server_resource_name " | awk '{print $4}')
    done

    deployed_server_stack=$(heat resource-list $server_stack | grep "deployed-server" | awk '{print $4}')

    echo "======================"
    echo "$role$i os-collect-config.conf configuration:"

    config="
[DEFAULT]
collectors=heat
command=os-refresh-config
polling_interval=30

[heat]
user_id=$admin_user_id
password=$OS_PASSWORD
auth_url=$OS_AUTH_URL
project_id=$admin_project_id
stack_id=$deployed_server_stack
resource_name=deployed-server-config"

    echo "$config"
    echo "======================"
    echo

    host=
    eval host=\${${role}_hosts_a[i]}

    if [ -n "$host" ]; then
      # Delete the os-collect-config.conf template so our file won't get
      # overwritten
      ssh $SSH_OPTIONS -i $SUBNODES_SSH_KEY $host sudo /bin/rm -f /usr/libexec/os-apply-config/templates/etc/os-collect-config.conf
      ssh $SSH_OPTIONS -i $SUBNODES_SSH_KEY $host "echo \"$config\" > os-collect-config.conf"
      ssh $SSH_OPTIONS -i $SUBNODES_SSH_KEY $host sudo cp os-collect-config.conf /etc/os-collect-config.conf
      ssh $SSH_OPTIONS -i $SUBNODES_SSH_KEY $host sudo systemctl restart os-collect-config
    fi

    let i+=1
  done
done
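The per-role host lookup in the script relies on `read -a` splitting the whitespace-separated host list into a bash array, then pairing the i-th nested stack with the i-th host via `eval`. A minimal sketch of that indexing (host names are illustrative):

```shell
# Same parsing the script performs for each *_HOSTS variable.
CONTROLLER_HOSTS="controller0 controller1 controller2"
read -a Controller_hosts_a <<< $CONTROLLER_HOSTS

# For the second nested Controller stack (i=1), the script resolves the
# matching host through an indirect array reference.
role=Controller
i=1
eval host=\${${role}_hosts_a[i]}
echo "$host"   # -> controller1
```

This is why the order of names in `controller_hosts`/`compute_hosts` must match the order of the nested stacks in each resource group.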

3
environments/deployed-server-environment.yaml

@@ -0,0 +1,3 @@
resource_registry:
  OS::TripleO::Server: ../deployed-server/deployed-server.yaml
  OS::TripleO::DeployedServerConfig: ../deployed-server/deployed-server-config.yaml

99
net-config-static-bridge-with-external-dhcp.yaml

@@ -0,0 +1,99 @@
heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config for a simple bridge configured
  with a static IP address for the ctlplane network.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet:
    default: ''
    description: IP address/subnet on the management network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              use_dhcp: true
              members:
                -
                  type: interface
                  name: {get_input: interface_name}
                  # force the MAC address of the bridge to this interface
                  primary: true
            -
              type: interface
              # would like to do the following, but can't b/c of:
              # https://bugs.launchpad.net/heat/+bug/1344284
              # name:
              #   list_join:
              #     - '/'
              #     - - {get_input: bridge_name}
              #       - ':0'
              # So, just hardcode to br-ex:0 for now, br-ex is hardcoded in
              # controller.yaml anyway.
              name: br-ex:0
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                -
                  default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
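For reference, the list_join address and the two static routes above render into os-net-config input along these lines. A sketch only: 192.0.2.5 is an illustrative ControlPlaneIp, the other values come from the README's parameter example:

```shell
# Compose the values the way the template's list_join does.
control_plane_ip=192.0.2.5         # illustrative ControlPlaneIp
control_plane_cidr=24              # ControlPlaneSubnetCidr
ec2_metadata_ip=192.0.2.1          # EC2MetadataIp
default_route=192.168.122.130      # ControlPlaneDefaultRoute

echo "ip_netmask: ${control_plane_ip}/${control_plane_cidr}"
echo "route 169.254.169.254/32 via ${ec2_metadata_ip}"
echo "default route via ${default_route}"
```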

3
overcloud-resource-registry-puppet.yaml

@@ -28,6 +28,9 @@ resource_registry:
  OS::TripleO::Tasks::ControllerPrePuppet: OS::Heat::None
  OS::TripleO::Tasks::ControllerPostPuppet: OS::Heat::None

  OS::TripleO::Server: OS::Nova::Server

  # This creates the "heat-admin" user for all OS images by default
  # To disable, replace with firstboot/userdata_default.yaml
  OS::TripleO::NodeAdminUserData: firstboot/userdata_heat_admin.yaml

2
puppet/ceph-storage.yaml

@@ -98,7 +98,7 @@ parameters:
 resources:
   CephStorage:
-    type: OS::Nova::Server
+    type: OS::TripleO::Server
     metadata:
       os-collect-config:
         command: {get_param: ConfigCommand}

2
puppet/cinder-storage.yaml

@@ -98,7 +98,7 @@ parameters:
 resources:
   BlockStorage:
-    type: OS::Nova::Server
+    type: OS::TripleO::Server
     metadata:
       os-collect-config:
         command: {get_param: ConfigCommand}

2
puppet/compute.yaml

@@ -324,7 +324,7 @@ parameters:
 resources:
   NovaCompute:
-    type: OS::Nova::Server
+    type: OS::TripleO::Server
     metadata:
       os-collect-config:
         command: {get_param: ConfigCommand}

2
puppet/controller.yaml

@@ -404,7 +404,7 @@ parameter_groups:
 resources:
   Controller:
-    type: OS::Nova::Server
+    type: OS::TripleO::Server
     metadata:
       os-collect-config:
         command: {get_param: ConfigCommand}
