Retire TripleO: remove repo content

The TripleO project is retiring:
- https://review.opendev.org/c/openstack/governance/+/905145

This commit removes the content of this project's repo.

Change-Id: Ib988f3b567e31c2b9402f41e5dd222b7fc006756
Ghanshyam Mann 2024-02-24 11:32:14 -08:00
parent d01331db06
commit 8325044e7a
181 changed files with 9 additions and 19698 deletions

.gitignore

@@ -1,15 +0,0 @@
nodes.json
env-*.yaml
bmc_bm_pairs
*.pyc
*.pyo
.coverage
cover
.testrepository
.tox
*.egg-info
.eggs
doc/build
AUTHORS
ChangeLog
.stestr/

.gitmodules

@@ -1,3 +0,0 @@
[submodule "ipxe/ipxe"]
path = ipxe/ipxe
url = https://git.ipxe.org/ipxe.git

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=${OS_TEST_PATH:-./openstack_virtual_baremetal/tests}
top_path=./

@@ -1,6 +0,0 @@
- project:
templates:
- openstack-python3-zed-jobs
- docs-on-readthedocs
vars:
rtd_webhook_id: '64851'

@@ -1,7 +1,10 @@
OpenStack Virtual Baremetal
===========================
This project is no longer maintained.
OpenStack Virtual Baremetal is a way to use OpenStack instances to do
simulated baremetal deployments. For more details, see the `full
documentation
<http://openstack-virtual-baremetal.readthedocs.io/en/latest/index.html>`_.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

@@ -1 +0,0 @@
../openstack_virtual_baremetal/build_nodes_json.py

@@ -1,28 +0,0 @@
#!/bin/bash
set -e
# Report an error if the generated sample environments are not in sync with
# the current configuration and templates.
echo 'Verifying that generated environments are in sync'
tmpdir=$(mktemp -d)
trap "rm -rf $tmpdir" EXIT
./bin/environment-generator.py sample-env-generator/ $tmpdir/environments
base=$PWD
retval=0
cd $tmpdir
file_list=$(find environments -type f)
for f in $file_list; do
if ! diff -q $f $base/$f; then
echo "ERROR: $base/$f is not up to date"
diff $f $base/$f
retval=1
fi
done
exit $retval

@@ -1 +0,0 @@
../openstack_virtual_baremetal/deploy.py

@@ -1,260 +0,0 @@
#!/usr/bin/env python
# Copyright 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# *************************************************************************
# This is a copy of the environment-generator.py in tripleo-heat-templates.
# At some point that version should be generalized enough to be used by
# other projects in a less hacky way.
# *************************************************************************
import argparse
import errno
import json
import os
import re
import yaml
_PARAM_FORMAT = u""" # %(description)s
%(mandatory)s# Type: %(type)s
%(name)s:%(default)s
"""
_STATIC_MESSAGE_START = (
' # ******************************************************\n'
' # Static parameters - these are values that must be\n'
' # included in the environment but should not be changed.\n'
' # ******************************************************\n'
)
_STATIC_MESSAGE_END = (' # *********************\n'
' # End static parameters\n'
' # *********************\n'
)
_FILE_HEADER = (
'# *******************************************************************\n'
'# This file was created automatically by the sample environment\n'
'# generator. Developers should use `tox -e genconfig` to update it.\n'
'# Users are recommended to make changes to a copy of the file instead\n'
'# of the original, if any customizations are needed.\n'
'# *******************************************************************\n'
)
# Certain parameter names can't be changed, but shouldn't be shown because
# they are never intended for direct user input.
_PRIVATE_OVERRIDES = []
# Hidden params are not included by default when the 'all' option is used,
# but can be explicitly included by referencing them in sample_defaults or
# static. This allows us to generate sample environments using them when
# necessary, but they won't be improperly included by accident.
_HIDDEN_PARAMS = []
# We also want to hide some patterns by default. If a parameter name matches
# one of the patterns in this list (a "match" being defined by Python's
# re.match function returning a value other than None), then the parameter
# will be omitted by default.
_HIDDEN_RE = []
_index_data = {}
def _create_output_dir(target_file):
try:
os.makedirs(os.path.dirname(target_file))
except OSError as e:
if e.errno == errno.EEXIST:
pass
else:
raise
def _generate_environment(input_env, output_path, parent_env=None):
if parent_env is None:
parent_env = {}
env = dict(parent_env)
env.pop('children', None)
env.update(input_env)
parameter_defaults = {}
param_names = []
sample_values = env.get('sample_values', {})
static_names = env.get('static', [])
for template_file, template_data in env.get('files', {}).items():
with open(template_file) as f:
f_data = yaml.safe_load(f)
f_params = f_data['parameters']
parameter_defaults.update(f_params)
if template_data['parameters'] == 'all':
new_names = [k for k, v in f_params.items()]
for hidden in _HIDDEN_PARAMS:
if (hidden not in (static_names + sample_values.keys()) and
hidden in new_names):
new_names.remove(hidden)
for hidden_re in _HIDDEN_RE:
new_names = [n for n in new_names
if n in (static_names +
sample_values.keys()) or
not re.match(hidden_re, n)]
else:
new_names = template_data['parameters']
missing_params = [name for name in new_names
if name not in f_params]
if missing_params:
raise RuntimeError('Did not find specified parameter names %s '
'in file %s for environment %s' %
(missing_params, template_file,
env['name']))
param_names += new_names
static_defaults = {k: v for k, v in parameter_defaults.items()
if k in param_names and
k in static_names
}
parameter_defaults = {k: v for k, v in parameter_defaults.items()
if k in param_names and
k not in _PRIVATE_OVERRIDES and
not k.startswith('_') and
k not in static_names
}
for k, v in sample_values.items():
if k in parameter_defaults:
parameter_defaults[k]['sample'] = v
if k in static_defaults:
static_defaults[k]['sample'] = v
def write_sample_entry(f, name, value):
default = value.get('default')
mandatory = ''
if default is None:
mandatory = ('# Mandatory. This parameter must be set by the '
'user.\n ')
default = '<None>'
if value.get('sample') is not None:
default = value['sample']
if isinstance(default, dict):
# We need to explicitly sort these so the order doesn't change
# from one run to the next
default = json.dumps(default, sort_keys=True)
# We ultimately cast this to str for output anyway
default = str(default)
if default == '':
default = "''"
# If the default value is something like %index%, yaml won't
# parse the output correctly unless we wrap it in quotes.
# However, not all default values can be wrapped so we need to
# do it conditionally.
if default.startswith('%') or default.startswith('*'):
default = "'%s'" % default
if not default.startswith('\n'):
default = ' ' + default
values = {'name': name,
'type': value['type'],
'description':
value.get('description', '').rstrip().replace('\n',
'\n # '),
'default': default,
'mandatory': mandatory,
}
f.write(_PARAM_FORMAT % values + '\n')
target_file = os.path.join(output_path, env['name'] + '.yaml')
_create_output_dir(target_file)
with open(target_file, 'w') as env_file:
env_file.write(_FILE_HEADER)
# TODO(bnemec): Once Heat allows the title and description to live in
# the environment itself, uncomment these entries and make them
# top-level keys in the YAML.
env_title = env.get('title', '')
env_file.write(u'# title: %s\n' % env_title)
env_desc = env.get('description', '')
env_file.write(u'# description: |\n')
for line in env_desc.splitlines():
env_file.write(u'# %s\n' % line)
_index_data[target_file] = {'title': env_title,
'description': env_desc
}
if parameter_defaults:
env_file.write(u'parameter_defaults:\n')
for name, value in sorted(parameter_defaults.items()):
write_sample_entry(env_file, name, value)
if static_defaults:
env_file.write(_STATIC_MESSAGE_START)
for name, value in sorted(static_defaults.items()):
write_sample_entry(env_file, name, value)
if static_defaults:
env_file.write(_STATIC_MESSAGE_END)
if env.get('resource_registry'):
env_file.write(u'resource_registry:\n')
for res, value in sorted(env.get('resource_registry', {}).items()):
env_file.write(u' %s: %s\n' % (res, value))
print('Wrote sample environment "%s"' % target_file)
for e in env.get('children', []):
_generate_environment(e, output_path, env)
def generate_environments(config_path, output_path):
if os.path.isdir(config_path):
config_files = os.listdir(config_path)
config_files = [os.path.join(config_path, i) for i in config_files
if os.path.splitext(i)[1] == '.yaml']
else:
config_files = [config_path]
for config_file in config_files:
print('Reading environment definitions from %s' % config_file)
with open(config_file) as f:
config = yaml.safe_load(f)
for env in config['environments']:
_generate_environment(env, output_path)
def generate_index(index_path):
with open(index_path, 'w') as f:
f.write('Sample Environment Index\n')
f.write('========================\n\n')
for filename, details in sorted(_index_data.items()):
f.write(details['title'] + '\n')
f.write('-' * len(details['title']) + '\n\n')
f.write('**File:** ' + filename + '\n\n')
f.write('**Description:** ' + details['description'] + '\n\n')
def _parse_args():
parser = argparse.ArgumentParser(description='Generate Heat sample '
'environments.')
parser.add_argument('config_path',
help='Filename or directory containing the sample '
'environment definitions.')
parser.add_argument('output_path',
help='Location to write generated files.',
default='environments',
nargs='?')
parser.add_argument('--index',
help='Specify the output path for an index file '
'listing all the generated environments. '
'The file will be in RST format. '
'If not specified, no index will be generated.')
return parser.parse_args()
def main():
args = _parse_args()
generate_environments(args.config_path, args.output_path)
if args.index:
generate_index(args.index)
if __name__ == '__main__':
main()
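
For reference, the generator above consumes definition files from
sample-env-generator/ shaped roughly like the sketch below. Only the keys
(environments, name, title, description, files, parameters, sample_values,
static, resource_registry, children) come from the script itself; the template
path and parameter names are illustrative placeholders:

    environments:
      - name: base
        title: Base Configuration Options
        description: |
          Basic configuration options needed for all OVB environments.
        files:
          templates/quintupleo.yaml:        # placeholder template path
            parameters: all                 # or an explicit list of names
        sample_values:
          baremetal_flavor: baremetal       # becomes the rendered default
        static:
          - bmc_use_cache                   # pinned in the static section
        resource_registry:
          OS::OVB::BMCPort: templates/bmc-port.yaml
        children:
          - name: base-role                 # inherits the parent's settings
            files:
              templates/quintupleo.yaml:
                parameters:
                  - baremetal_flavor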

@@ -1 +0,0 @@
start-env

@@ -1,192 +0,0 @@
#!/bin/bash
set -x
centos_ver=$(rpm --eval %{centos_ver})
if [ "$centos_ver" == "7" ] ; then
curl -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7/current/delorean.repo
yum install -y python2-tripleo-repos ca-certificates
tripleo-repos current-tripleo
yum install -y python-crypto python2-novaclient python2-neutronclient python2-pyghmi os-net-config python2-os-client-config python2-openstackclient
elif [ "$centos_ver" == "9" ] ; then
curl -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos${centos_ver}-master/current-tripleo/delorean.repo
dnf install -y python3-tripleo-repos
tripleo-repos current-tripleo
dnf install -y openstack-network-scripts python3-cryptography python3-novaclient python3-pyghmi os-net-config python3-os-client-config python3-openstackclient
else
set +x
$signal_command --data-binary '{"status": "FAILURE"}'
echo "Unsupported CentOS version $centos_ver"
exit 1
fi
cat <<EOF >/usr/local/bin/openstackbmc
$openstackbmc_script
EOF
chmod +x /usr/local/bin/openstackbmc
# Configure clouds.yaml so we can authenticate to the host cloud
mkdir -p ~/.config/openstack
# Passing this as an argument is problematic because it has quotes inline that
# cause syntax errors. Reading from a file should be easier.
cat <<'EOF' >/tmp/bmc-cloud-data
$cloud_data
EOF
python -c 'import json
import sys
import yaml
with open("/tmp/bmc-cloud-data") as f:
data=json.loads(f.read())
clouds={"clouds": {"host_cloud": data}}
print(yaml.safe_dump(clouds, default_flow_style=False))' > ~/.config/openstack/clouds.yaml
rm -f /tmp/bmc-cloud-data
export OS_CLOUD=host_cloud
# python script to query the cloud and write out the bmc services/configs
$(command -v python3 || command -v python2) <<EOF
import json
import openstack
import os
import sys
cache_status = ''
if not $bmc_use_cache:
cache_status = '--cache-status'
conn = openstack.connect(cloud='host_cloud')
print('Fetching private network')
items = conn.network.networks(name='$private_net')
private_net = next(items, None)
print('Fetching private subnet')
private_subnet = conn.network.find_subnet(private_net.subnet_ids[0])
if not private_subnet:
print('[ERROR] Could not find private subnet')
sys.exit(1)
default_gw = private_subnet.gateway_ip
prefix_len = private_subnet.cidr.split('/')[1]
mtu = private_net.mtu
os_net_config = {
'network_config': [{
'type': 'interface',
'name': 'eth0',
'use_dhcp': False,
'mtu': mtu,
'routes': [{
'default': True,
'next_hop': default_gw,
}],
'addresses': [
{ 'ip_netmask': '$bmc_utility/{}'.format(prefix_len) }
]
}]
}
os_net_config_unit = """
[Unit]
Description=config-bmc-ips Service
Requires=network.target
After=network.target
[Service]
ExecStart=/bin/os-net-config -c /etc/os-net-config/config.json -v
Type=oneshot
User=root
StandardOutput=kmsg+console
StandardError=inherit
[Install]
WantedBy=multi-user.target
"""
print('Writing out config-bmc-ips.service')
with open('/usr/lib/systemd/system/config-bmc-ips.service', 'w') as f:
f.write(os_net_config_unit)
print('Fetching bm ports')
bmc_port_names = [('$bmc_prefix_{}'.format(x), '$bm_prefix_{}'.format(x)) for x in range(0, $bm_node_count)]
bmc_ports = {}
for (bmc_port_name, bm_port_name) in bmc_port_names:
print('Finding {} port'.format(bmc_port_name))
bmc_ports[bmc_port_name] = conn.network.find_port(bmc_port_name)
print('Finding {} port'.format(bm_port_name))
bmc_ports[bm_port_name] = conn.network.find_port(bm_port_name)
unit_template = """
[Unit]
Description=openstack-bmc {port_name} Service
Requires=config-bmc-ips.service
After=config-bmc-ips.service
[Service]
ExecStart=/usr/local/bin/openstackbmc --os-cloud host_cloud --instance {port_instance} --address {port_ip} {cache_status}
Restart=always
User=root
StandardOutput=kmsg+console
StandardError=inherit
[Install]
WantedBy=multi-user.target
"""
for (bmc_port_name, bm_port_name) in bmc_port_names:
port_name = bm_port_name
unit_file = os.path.join('/usr/lib/systemd/system', 'openstack-bmc-{}.service'.format(port_name))
device_id = bmc_ports[bm_port_name].device_id
port = bmc_ports[bmc_port_name]
if isinstance(port.fixed_ips, list):
port_ip = port.fixed_ips[0].get('ip_address')
else:
# TODO: test older openstacksdk
port_ip = port.fixed_ips.split('\'')[1]
print('Writing out {}'.format(unit_file))
with open(unit_file, "w") as unit:
unit.write(unit_template.format(port_name=port_name,
port_instance=device_id,
port_ip=port_ip,
cache_status=cache_status))
addr = { 'ip_netmask': '{}/{}'.format(port_ip, prefix_len) }
os_net_config['network_config'][0]['addresses'].append(addr)
if not os.path.isdir('/etc/os-net-config'):
os.mkdir('/etc/os-net-config')
print('Writing /etc/os-net-config/config.json')
with open('/etc/os-net-config/config.json', 'w') as f:
json.dump(os_net_config, f)
EOF
# reload systemd
systemctl daemon-reload
# enable and start bmc ip service
systemctl enable config-bmc-ips
systemctl start config-bmc-ips
# enable bmcs
for i in $(seq 1 $bm_node_count)
do
unit="openstack-bmc-$bm_prefix_$(($i-1)).service"
systemctl enable $unit
systemctl start $unit
done
sleep 5
if ! systemctl is-active openstack-bmc-* >/dev/null
then
systemctl status openstack-bmc-*
set +x
$signal_command --data-binary '{"status": "FAILURE"}'
echo "********** $unit failed to start **********"
exit 1
fi
set +x
$signal_command --data-binary '{"status": "SUCCESS"}'
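
The clouds.yaml written above follows the standard os-client-config layout,
with the single cloud entry named host_cloud as in the script; the auth values
below are placeholders for whatever $cloud_data actually carries:

    clouds:
      host_cloud:
        auth:
          auth_url: http://host-cloud.example:5000/v3   # placeholder
          username: ovb-user                            # placeholder
          password: secret                              # placeholder
          project_name: ovb-project                     # placeholder
        region_name: regionOne                          # placeholder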

@@ -1,18 +0,0 @@
#!/bin/bash
set -ex
JOBS=${1:-1}
DELAY=${2:-60}
BIN_DIR=$(cd $(dirname $0); pwd -P)
for i in $(seq $JOBS)
do
jobid=$(uuidgen | md5sum)
jobid=${jobid:1:8}
date
echo "Starting job $jobid, logging to /tmp/$jobid.log"
$BIN_DIR/test-job $jobid > /tmp/$jobid.log 2>&1 &
sleep $DELAY
done

@@ -1 +0,0 @@
../openstack_virtual_baremetal/openstackbmc.py

@@ -1,130 +0,0 @@
#!/bin/bash
function timer
{
name=${1:-}
seconds=$(date +%s)
if [ -n "$name" ]
then
echo "${name}: $((seconds - start_time))" >> ~/timer-results
else
start_time=$seconds
fi
}
set -ex
timer
# When not running on my local cloud we don't want these set
if [ ${LOCAL:-1} -eq 1 ]
then
export http_proxy=http://roxy:3128
curl -O http://openstack/CentOS-7-x86_64-GenericCloud-1901.qcow2
export DIB_LOCAL_IMAGE=CentOS-7-x86_64-GenericCloud-1901.qcow2
export DIB_DISTRIBUTION_MIRROR=http://mirror.centos.org/centos
export no_proxy=192.168.24.1,192.168.24.2,192.168.24.3,127.0.0.1
fi
# Set up the undercloud in preparation for running the deployment
sudo yum install -y git wget
rm -rf git-tripleo-ci
git clone https://git.openstack.org/openstack-infra/tripleo-ci git-tripleo-ci
echo '#!/bin/bash' > tripleo.sh
echo 'git-tripleo-ci/scripts/tripleo.sh $@' >> tripleo.sh
chmod +x tripleo.sh
if [ ! -d overcloud-templates ]
then
git clone https://git.openstack.org/openstack/openstack-virtual-baremetal
cp -r openstack-virtual-baremetal/overcloud-templates .
fi
export OVERCLOUD_PINGTEST_OLD_HEATCLIENT=0
export TRIPLEOSH=/home/centos/tripleo.sh
# Do the tripleo deployment
# Repo setup
wget -r --no-parent -nd -e robots=off -l 1 -A 'python2-tripleo-repos-*' https://trunk.rdoproject.org/centos7/current/
sudo yum install -y python2-tripleo-repos-*
sudo tripleo-repos -b rocky current --rdo-mirror http://mirror.regionone.rdo-cloud.rdoproject.org:8080/rdo --mirror http://mirror.regionone.rdo-cloud.rdoproject.org
timer 'system setup'
timer
# Undercloud install
cat << EOF > undercloud.conf
[DEFAULT]
undercloud_hostname=undercloud.localdomain
enable_telemetry = false
enable_legacy_ceilometer_api = false
enable_ui = false
enable_validations = false
enable_tempest = false
local_mtu = 1450
[ctlplane-subnet]
masquerade = true
EOF
sudo yum install -y python-tripleoclient
openstack undercloud install
. stackrc
# Undercloud networking
cat >> /tmp/eth2.cfg <<EOF_CAT
network_config:
- type: interface
name: eth2
use_dhcp: false
mtu: 1450
addresses:
- ip_netmask: 10.0.0.1/24
- ip_netmask: 2001:db8:fd00:1000::1/64
EOF_CAT
sudo os-net-config -c /tmp/eth2.cfg -v
sudo iptables -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat
sudo iptables -I FORWARD -s 10.0.0.0/24 -j ACCEPT
sudo iptables -I FORWARD -d 10.0.0.0/24 -j ACCEPT
timer 'undercloud install'
timer
# Image creation
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean*"
openstack overcloud image build
openstack overcloud image upload --update-existing
timer 'image build'
timer
# Node registration and introspection
openstack overcloud node import --introspect --provide instackenv.json
timer 'node introspection'
sleep 60
timer
# Overcloud deploy
export OVERCLOUD_DEPLOY_ARGS="--libvirt-type qemu -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml"
if [ ${VERSION:-1} -eq 2 ]
then
OVERCLOUD_DEPLOY_ARGS="$OVERCLOUD_DEPLOY_ARGS -e /home/centos/overcloud-templates/network-templates-v2/network-isolation-absolute.yaml -e /home/centos/overcloud-templates/network-templates-v2/network-environment.yaml"
fi
openstack overcloud deploy --templates $OVERCLOUD_DEPLOY_ARGS
timer 'overcloud deploy'
timer
# Overcloud validation
if [ ${VERSION:-1} -eq 2 ]
then
export FLOATING_IP_CIDR=10.0.0.0/24
export FLOATING_IP_START=10.0.0.50
export FLOATING_IP_END=10.0.0.70
export EXTERNAL_NETWORK_GATEWAY=10.0.0.1
fi
$TRIPLEOSH --overcloud-pingtest --skip-pingtest-cleanup
timer 'ping test'
cat ~/timer-results

@@ -1,26 +0,0 @@
#!/bin/bash
# Script to rebuild baremetal instances to the ipxe-boot image.
# When using OVB without the Nova PXE boot patch, this is required after
# each deployment to ensure the nodes can PXE boot for the next one.
# Usage: rebuild-baremetal <number of nodes> [baremetal_base] [environment ID]
# Examples: rebuild-baremetal 2
# rebuild-baremetal 5 my-baremetal-name
# rebuild-baremetal 5 baremetal test
node_num=$1
baremetal_base=${2:-'baremetal'}
env_id=${3:-}
name_base="$baremetal_base"
if [ -n "$env_id" ]
then
name_base="$baremetal_base-$env_id"
fi
for i in `seq 0 $((node_num - 1))`
do
echo nova rebuild "${instance_list}${name_base}_$i" ipxe-boot
nova rebuild "${instance_list}${name_base}_$i" ipxe-boot
done

@@ -1,7 +0,0 @@
#!/bin/bash
# Usage: start-env <ID>
# Example: start-env test
# This will start the undercloud-test and bmc-test instances
nova start bmc-$1 undercloud-$1

@@ -1,8 +0,0 @@
#!/bin/bash
# Usage: stop-env <ID>
# Example: stop-env test
# This will stop the undercloud and bmc, as well as up to 5 baremetal instances
# TODO: Make this smarter so it will handle an arbitrary number of baremetal instances
nova stop undercloud-$1 bmc-$1 baremetal-$1_0 baremetal-$1_1 baremetal-$1_2 baremetal-$1_3 baremetal-$1_4

@@ -1,76 +0,0 @@
#!/bin/bash
# Edit these to match your environment
BMC_IMAGE=bmc-base
BMC_FLAVOR=bmc
KEY_NAME=default
UNDERCLOUD_IMAGE=centos7-base
UNDERCLOUD_FLAVOR=undercloud-16
BAREMETAL_FLAVOR=baremetal
EXTERNAL_NET=external
# Set to 0 if you're not running in bnemec's private cloud
LOCAL=1
set -ex
date
start_seconds=$(date +%s)
BIN_DIR=$(cd $(dirname $0); pwd -P)
MY_ID=$1
BMC_PREFIX=bmc-$MY_ID
BAREMETAL_PREFIX=baremetal-$MY_ID
UNDERCLOUD_NAME=undercloud-$MY_ID
PUBLIC_NET=public-$MY_ID
PROVISION_NET=provision-$MY_ID
PRIVATE_NET=private-$MY_ID
# NOTE(bnemec): I'm intentionally not adding a trap to clean this up because
# if something fails it may be helpful to look at the contents of the tempdir.
TEMPDIR=$(mktemp -d)
echo "Working in $TEMPDIR"
cd $TEMPDIR
cp -r $BIN_DIR/../templates .
cp templates/env.yaml.example ./env.yaml
sed -i "s/bmc_image: .*/bmc_image: $BMC_IMAGE/" env.yaml
sed -i "s/bmc_flavor: .*/bmc_flavor: $BMC_FLAVOR/" env.yaml
sed -i "s/key_name: .*/key_name: $KEY_NAME/" env.yaml
sed -i "s/baremetal_flavor: .*/baremetal_flavor: $BAREMETAL_FLAVOR/" env.yaml
sed -i "s/undercloud_image: .*/undercloud_image: $UNDERCLOUD_IMAGE/" env.yaml
sed -i "s/undercloud_flavor: .*/undercloud_flavor: $UNDERCLOUD_FLAVOR/" env.yaml
sed -i "s|external_net: .*|external_net: $EXTERNAL_NET|" env.yaml
sed -i "s/private_net: .*/private_net: $PRIVATE_NET/" env.yaml
if [ $LOCAL -eq 1 ]
then
echo 'parameter_defaults:' >> env.yaml
echo ' dns_nameservers: 11.1.1.1' >> env.yaml
fi
echo 'resource_registry:' >> env.yaml
echo ' OS::OVB::UndercloudFloating: templates/undercloud-floating.yaml' >> env.yaml
echo ' OS::OVB::BaremetalPorts: templates/baremetal-ports-default.yaml' >> env.yaml
echo ' OS::OVB::BMCPort: templates/bmc-port.yaml' >> env.yaml
echo ' OS::OVB::UndercloudPorts: templates/undercloud-ports.yaml' >> env.yaml
echo ' OS::OVB::PrivateNetwork: templates/private-net-create.yaml' >> env.yaml
cp -r $BIN_DIR ./bin
cp -r $BIN_DIR/../openstack_virtual_baremetal .
STACK_NAME=$MY_ID
$BIN_DIR/deploy.py --quintupleo --id $MY_ID --name $STACK_NAME --poll
UNDERCLOUD_IP=$(heat output-show $STACK_NAME undercloud_host_floating_ip | sed -e 's/"//g')
bin/build-nodes-json --env env-$MY_ID.yaml --driver ipmi
#bin/build-nodes-json --bmc_prefix $BMC_PREFIX --baremetal_prefix $BAREMETAL_PREFIX --provision_net $PROVISION_NET
SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=Verbose -o PasswordAuthentication=no -o ConnectionAttempts=32 "
# Canary command to make sure the undercloud is ready
until ssh -t -t $SSH_OPTS centos@$UNDERCLOUD_IP ls; do sleep 1; done
scp $SSH_OPTS bin/ovb-instack centos@$UNDERCLOUD_IP:/tmp
scp $SSH_OPTS nodes.json centos@$UNDERCLOUD_IP:~/instackenv.json
ssh -t -t $SSH_OPTS centos@$UNDERCLOUD_IP LOCAL=$LOCAL /tmp/ovb-instack
heat stack-delete -y $STACK_NAME
date
end_seconds=$(date +%s)
elapsed_seconds=$(($end_seconds - $start_seconds))
echo "Finished in $elapsed_seconds seconds"
rm -rf $TEMPDIR

@@ -1,71 +0,0 @@
#!/bin/bash
# Edit these to match your environment
BMC_IMAGE=bmc-base
BMC_FLAVOR=bmc
KEY_NAME=default
UNDERCLOUD_IMAGE=centos7-base
UNDERCLOUD_FLAVOR=undercloud-16
BAREMETAL_FLAVOR=baremetal
EXTERNAL_NET=external
# Set to 0 if you're not running in bnemec's private cloud
LOCAL=1
set -ex
date
start_seconds=$(date +%s)
BIN_DIR=$(cd $(dirname $0); pwd -P)
MY_ID=$1
BMC_PREFIX=bmc-$MY_ID
BAREMETAL_PREFIX=baremetal-$MY_ID
UNDERCLOUD_NAME=undercloud-$MY_ID
PUBLIC_NET=public-$MY_ID
PROVISION_NET=provision-$MY_ID
PRIVATE_NET=private-$MY_ID
# NOTE(bnemec): I'm intentionally not adding a trap to clean this up because
# if something fails it may be helpful to look at the contents of the tempdir.
TEMPDIR=$(mktemp -d)
echo "Working in $TEMPDIR"
cd $TEMPDIR
cp -r $BIN_DIR/../templates .
cp -r $BIN_DIR/../environments .
cp -r $BIN_DIR/../overcloud-templates .
cp environments/base.yaml ./env.yaml
sed -i "s/bmc_image: .*/bmc_image: $BMC_IMAGE/" env.yaml
sed -i "s/bmc_flavor: .*/bmc_flavor: $BMC_FLAVOR/" env.yaml
sed -i "s/key_name: .*/key_name: $KEY_NAME/" env.yaml
sed -i "s/baremetal_flavor: .*/baremetal_flavor: $BAREMETAL_FLAVOR/" env.yaml
sed -i "s/undercloud_image: .*/undercloud_image: $UNDERCLOUD_IMAGE/" env.yaml
sed -i "s/undercloud_flavor: .*/undercloud_flavor: $UNDERCLOUD_FLAVOR/" env.yaml
sed -i "s|external_net: .*|external_net: $EXTERNAL_NET|" env.yaml
sed -i "s/private_net: .*/private_net: $PRIVATE_NET/" env.yaml
if [ $LOCAL -eq 1 ]
then
sed -i "s/dns_nameservers: .*/dns_nameservers: 11.1.1.1/" environments/create-private-network.yaml
fi
cp -r $BIN_DIR ./bin
cp -r $BIN_DIR/../openstack_virtual_baremetal .
STACK_NAME=$MY_ID
$BIN_DIR/deploy.py --quintupleo --id $MY_ID --name $STACK_NAME --poll -e env.yaml -e environments/create-private-network.yaml -e environments/all-networks.yaml
UNDERCLOUD_IP=$(heat output-show $STACK_NAME undercloud_host_floating_ip | sed -e 's/"//g')
bin/build-nodes-json --env env-$MY_ID.yaml --driver ipmi
SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=Verbose -o PasswordAuthentication=no -o ConnectionAttempts=32 "
# Canary command to make sure the undercloud is ready
until ssh -t -t $SSH_OPTS centos@$UNDERCLOUD_IP ls; do sleep 1; done
scp $SSH_OPTS bin/ovb-instack centos@$UNDERCLOUD_IP:/tmp
scp $SSH_OPTS nodes.json centos@$UNDERCLOUD_IP:~/instackenv.json
scp $SSH_OPTS -r overcloud-templates centos@$UNDERCLOUD_IP:~
ssh -t -t $SSH_OPTS centos@$UNDERCLOUD_IP LOCAL=$LOCAL VERSION=2 /tmp/ovb-instack
heat stack-delete -y $STACK_NAME
date
end_seconds=$(date +%s)
elapsed_seconds=$(($end_seconds - $start_seconds))
echo "Finished in $elapsed_seconds seconds"
rm -rf $TEMPDIR

@@ -1,8 +0,0 @@
sphinx!=1.6.6,>=1.6.2,<2.0.0;python_version=='2.7' # BSD
sphinx!=1.6.6,>=1.6.2;python_version>='3.4' # BSD
sphinx_rtd_theme
PyYAML
os_client_config
python-heatclient
python-novaclient
pyghmi

@@ -1,28 +0,0 @@
Python API
==========
build_nodes_json
----------------
.. automodule:: build_nodes_json
:members:
:private-members:
deploy
------
.. automodule:: deploy
:members:
:private-members:
openstackbmc
------------
.. autoclass:: openstackbmc.OpenStackBmc
:members:
auth
------
.. automodule:: auth
:members:
:private-members:

@@ -1,125 +0,0 @@
# openstack-virtual-baremetal documentation build configuration file, created by
# sphinx-quickstart on Wed Feb 25 10:56:57 2015.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
import sphinx_rtd_theme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../../openstack_virtual_baremetal'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.autosectionlabel',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = []
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'OpenStack Virtual Baremetal'
copyright = u'2015, Red Hat Inc.'
bug_tracker = u'Launchpad'
bug_tracker_url = u'https://bugs.launchpad.net/tripleo/'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.0.1'
# The full version, including alpha/beta/rc tags.
release = '0.0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
html_theme = 'sphinx_rtd_theme'
html_static_path = []
# html_style = 'custom.css'
templates_path = []
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
rst_prolog = """
.. |project| replace:: %s
.. |bug_tracker| replace:: %s
.. |bug_tracker_url| replace:: %s
""" % (project, bug_tracker, bug_tracker_url)

@@ -1,121 +0,0 @@
Deploying a Standalone Baremetal Stack
======================================
The process described here will create a very minimal OVB environment, and the
user will be responsible for creating most of the resources manually. In most
cases it will be easier to use the :doc:`QuintupleO <quintupleo>` deployment
method, which creates most of the resources needed automatically.
#. Create private network.
If your cloud provider has already created a private network for your use
then you can skip this step and reference the existing network in your
OVB environment file.
::
neutron net-create private
neutron subnet-create --name private private 10.0.1.0/24 --dns-nameserver 8.8.8.8
You will also need to create a router so traffic from your private network
can get to the external network. The external network should have been
created by the cloud provider::
neutron router-create router
neutron router-gateway-set router [external network name or id]
neutron router-interface-add router private
#. Create provisioning network.
.. note:: The CIDR used for the subnet does not matter.
Standard tenant and external networks are also needed to
provide floating IP access to the undercloud and bmc instances.
.. warning:: Do not enable DHCP on this network. Addresses will be
assigned by the undercloud Neutron.
::
neutron net-create provision
neutron subnet-create --name provision --no-gateway --disable-dhcp provision 192.168.24.0/24
#. Create "public" network.
.. note:: The CIDR used for the subnet does not matter.
This can be used as the network for the public API endpoints
on the overcloud, but it does not have to be accessible
externally. Only the undercloud VM will need to have access
to this network.
.. warning:: Do not enable DHCP on this network. Doing so may cause
conflicts between the host cloud metadata service and the
undercloud metadata service. Overcloud nodes will be
assigned addresses on this network by the undercloud Neutron.
::
neutron net-create public
neutron subnet-create --name public --no-gateway --disable-dhcp public 10.0.0.0/24
#. Copy the example env file and edit it to reflect the host environment (a sketch of a filled-in file appears after these steps):
.. note:: Some of the parameters in the base environment file are only
used for QuintupleO deployments. Their values will be ignored
in a plain virtual-baremetal deployment.
::
cp environments/base.yaml env.yaml
vi env.yaml
#. Deploy the stack::
bin/deploy.py
#. Wait for Heat stack to complete:
.. note:: The BMC instance does post-deployment configuration that can
take a while to complete, so the Heat stack completing does
not necessarily mean the environment is entirely ready for
use. To determine whether the BMC is finished starting up,
run ``nova console-log bmc``. The BMC service outputs a
message like "Managing instance [uuid]" when it is fully
configured. There should be one of these messages for each
baremetal instance.
::
heat stack-show baremetal
#. Boot a VM to serve as the undercloud::
nova boot undercloud --flavor m1.xlarge --image centos7 --nic net-id=[tenant net uuid] --nic net-id=[provisioning net uuid]
neutron floatingip-create [external net uuid]
neutron port-list
neutron floatingip-associate [floatingip uuid] [undercloud instance port id]
#. Turn off port-security on the undercloud provisioning port::
neutron port-update [UUID of undercloud port on the provision network] --no-security-groups --port-security-enabled=False
#. Build a nodes.json file that can be imported into Ironic::
bin/build-nodes-json
scp nodes.json centos@[undercloud floating ip]:~/instackenv.json
.. note:: ``build-nodes-json`` also outputs a file named ``bmc_bm_pairs``
that lists which BMC address corresponds to a given baremetal
instance.
#. The undercloud VM can now be used with something like TripleO
to do a baremetal-style deployment to the virtual baremetal instances
deployed previously.
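As a rough sketch of the environment file copied and edited in the earlier
step, the values typically customized are shown below. All values are
placeholders for the host cloud; the parameter names are the ones substituted
by the helper scripts in ``bin/``, assuming the ``parameter_defaults`` layout
of ``environments/base.yaml``::

    parameter_defaults:
      bmc_image: bmc-base              # image used for the BMC instance
      bmc_flavor: bmc
      key_name: default
      undercloud_image: centos7-base
      undercloud_flavor: undercloud-16
      baremetal_flavor: baremetal
      external_net: external           # existing external/provider network
      private_net: private
      # dns_nameservers: 8.8.8.8       # optional resolver override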
Deleting an OVB Environment
---------------------------
All of the OpenStack resources created by OVB are part of the Heat stack, so
to delete the environment just delete the Heat stack. There are a few local
files that may also have been created as part of the deployment, such as
nodes.json files and bmc_bm_pairs. Once the stack is deleted these can be
removed safely as well.

@@ -1,11 +0,0 @@
Deploying the Heat stack
========================
There are two options for deploying the Heat stack.
.. toctree::
quintupleo
baremetal
heterogeneous
environment-index

@@ -1,216 +0,0 @@
Sample Environment Index
========================
Deploy with All Networks Enabled and Two Public Interfaces
----------------------------------------------------------
**File:** environments/all-networks-public-bond.yaml
**Description:** Deploy an OVB stack that adds interfaces for all the standard TripleO
network isolation networks. This version will deploy duplicate
public network interfaces on the baremetal instances so that the
public network can be configured as a bond.
Deploy with All Networks Enabled
--------------------------------
**File:** environments/all-networks.yaml
**Description:** Deploy an OVB stack that adds interfaces for all the standard TripleO
network isolation networks.
Base Configuration Options for Extra Nodes with All Ports Open
--------------------------------------------------------------
**File:** environments/base-extra-node-all.yaml
**Description:** Configuration options that need to be set when deploying an OVB
environment with extra undercloud-like nodes. This environment
should be used like a role file, but will deploy an undercloud-like
node instead of more baremetal nodes.
Base Configuration Options for Extra Nodes
------------------------------------------
**File:** environments/base-extra-node.yaml
**Description:** Configuration options that need to be set when deploying an OVB
environment with extra undercloud-like nodes. This environment
should be used like a role file, but will deploy an undercloud-like
node instead of more baremetal nodes.
Base Configuration Options for Secondary Roles
----------------------------------------------
**File:** environments/base-role.yaml
**Description:** Configuration options that need to be set when deploying an OVB
environment that has multiple roles.
Base Configuration Options
--------------------------
**File:** environments/base.yaml
**Description:** Basic configuration options needed for all OVB environments
Enable Instance Status Caching in BMC
-------------------------------------
**File:** environments/bmc-use-cache.yaml
**Description:** Enable caching of instance status in the BMC. This should reduce load on
the host cloud, but at the cost of potential inconsistency if the state
of a baremetal instance is changed without using the BMC.
Boot Baremetal Instances from Volume
------------------------------------
**File:** environments/boot-baremetal-from-volume.yaml
**Description:** Boot the baremetal instances from Cinder volumes instead of
ephemeral storage.
Boot Undercloud and Baremetal Instances from Volume
---------------------------------------------------
**File:** environments/boot-from-volume.yaml
**Description:** Boot the undercloud and baremetal instances from Cinder volumes instead of
ephemeral storage.
Boot Undercloud Instance from Volume
------------------------------------
**File:** environments/boot-undercloud-from-volume.yaml
**Description:** Boot the undercloud instance from a Cinder volume instead of
ephemeral storage.
Create a Private Network
------------------------
**File:** environments/create-private-network.yaml
**Description:** Create the private network as part of the OVB stack instead of using an
existing one.
Disable BMC
-----------
**File:** environments/disable-bmc.yaml
**Description:** Deploy a stack without a BMC. This will obviously make it impossible to
control the instances via IPMI. It will also prevent use of
ovb-build-nodes-json because there will be no BMC addresses.
Configuration for router advertisement daemon (radvd)
-----------------------------------------------------
**File:** environments/ipv6-radvd-configuration.yaml
**Description:** Contains the available parameters that need to be configured when using
an IPv6 network. Requires the ipv6-radvd.yaml environment.
Enable router advertisement daemon (radvd)
------------------------------------------
**File:** environments/ipv6-radvd.yaml
**Description:** Deploy the stack with a router advertisement daemon running for the
provisioning network.
Public Network External Router
------------------------------
**File:** environments/public-router.yaml
**Description:** Deploy a router that connects the public and external networks. This
allows the public network to be used as a gateway instead of routing all
traffic through the undercloud.
Disable the Undercloud in a QuintupleO Stack
--------------------------------------------
**File:** environments/quintupleo-no-undercloud.yaml
**Description:** Deploy a QuintupleO environment, but do not create the undercloud
instance.
Configuration for Routed Networks
---------------------------------
**File:** environments/routed-networks-configuration.yaml
**Description:** Contains the available parameters that need to be configured when using
a routed networks environment. Requires the routed-networks.yaml or
routed-networks-ipv6.yaml environment.
Enable Routed Networks IPv6
---------------------------
**File:** environments/routed-networks-ipv6.yaml
**Description:** Enable use of routed IPv6 networks, where there may be multiple separate
networks connected with a router, router advertisement daemon (radvd),
and DHCP relay. Do not pass any other network configuration environments
after this one or they may override the changes made by this environment.
When this environment is in use, the routed-networks-configuration
environment should usually be included as well.
Base Role Configuration for Routed Networks
-------------------------------------------
**File:** environments/routed-networks-role.yaml
**Description:** A base role environment that contains the necessary parameters for
deploying with routed networks.
Enable Routed Networks
----------------------
**File:** environments/routed-networks.yaml
**Description:** Enable use of routed networks, where there may be multiple separate
networks connected with a router and DHCP relay. Do not pass any other
network configuration environments after this one or they may override
the changes made by this environment. When this environment is in use,
the routed-networks-configuration environment should usually be
included as well.
Assign the Undercloud an Existing Floating IP
---------------------------------------------
**File:** environments/undercloud-floating-existing.yaml
**Description:** When deploying the undercloud, assign it an existing floating IP instead
of creating a new one.
Do Not Assign a Floating IP to the Undercloud
---------------------------------------------
**File:** environments/undercloud-floating-none.yaml
**Description:** When deploying the undercloud, do not assign a floating ip to it.

@@ -1,54 +0,0 @@
Deploying Heterogeneous Environments
====================================
It is possible to deploy an OVB environment with multiple "baremetal"
node types. The :doc:`QuintupleO <quintupleo>` deployment method must be used, so it
would be best to start with a working configuration for that before
moving on to heterogeneous deployments.
Each node type will be identified as a ``role``. A simple QuintupleO
deployment can be thought of as a single-role deployment. To deploy
multiple roles, additional environment files describing the extra roles
are required. These environments are simplified versions of the
standard environment file. See ``environments/base-role.yaml``
for a starting point when writing these role files.
.. note:: Each extra role consists of exactly one environment file. This
means that the standalone option environments cannot be used with
roles. To override the options specified for the primary role in
a secondary role, the parameter_defaults and resource_registry
entries from the option environment must be copied into the role
environment.
However, note that most resource_registry entries are filtered out
of role environments anyway since they are not relevant for a
secondary stack.
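As a sketch only, a secondary role environment for compute-style nodes might
look like the following; apart from ``role``, which the steps below rely on,
the parameter names and values are placeholders patterned on the base
environment::

    parameter_defaults:
      role: compute                  # assigned to these nodes in nodes.json
      key_name: default              # placeholder
      baremetal_flavor: baremetal    # placeholder; often differs per role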
Steps for deploying the environment:
#. Customize the environment files. Make sure all environments have a ``role``
key in the ``parameter_defaults`` section. When building nodes.json, this
role will be automatically assigned to the node, so it is simplest to use
one of the default TripleO roles (control, compute, cephstorage, etc.).
#. Deploy with both roles::
bin/deploy.py --quintupleo --env env-control.yaml --role env-compute.yaml
#. One Heat stack will be created for each role being deployed. Wait for them
all to complete before proceeding.
.. note:: Be aware that the extra role stacks will be connected to networks
in the primary role stack, so the extra stacks must be deleted
before the primary one or the neutron subnets will not delete cleanly.
#. Build a nodes.json file that can be imported into Ironic::
bin/build-nodes-json --env env-control.yaml
.. note:: Only the primary environment file needs to be passed here. The
resources deployed as part of the secondary roles will be named
such that they appear to be part of the primary environment.
.. note:: If ``--id`` was used when deploying, remember to pass the generated
environment file to this command instead of the original.

@@ -1,306 +0,0 @@
Deploying with QuintupleO
=========================
QuintupleO is short for OpenStack on OpenStack on OpenStack. It was the
original name for OVB, and has been repurposed to indicate that this
deployment method is able to deploy a full TripleO development environment
in one command. It should be useful for non-TripleO users of OVB as well,
however.
#. Copy the example env file and edit it to reflect the host environment::
cp environments/base.yaml env.yaml
vi env.yaml
#. Deploy a QuintupleO stack. The example command includes a number of
environment files intended to simplify the deployment process or make
it compatible with a broader set of host clouds. However, these
environments are not necessary in every situation and may not even work
with some older clouds. See below for details on customizing an OVB
deployment for your particular situation::
bin/deploy.py --quintupleo -e env.yaml -e environments/all-networks.yaml -e environments/create-private-network.yaml
.. note:: There is a quintupleo-specific option ``--id`` in deploy.py.
It appends the value passed in to the name of all resources
in the stack. For example, if ``undercloud_name`` is set to
'undercloud' and ``--id foo`` is passed to deploy.py, the
resulting undercloud VM will be named 'undercloud-foo'. It is
recommended that this be used any time multiple environments are
being deployed in the same cloud/tenant to avoid name collisions.
Be aware that when ``--id`` is used, a new environment file will
be generated that reflects the new names. The name of the new
file will be ``env-${id}.yaml``. This new file should be passed
to build-nodes-json instead of the original.
.. note:: See :ref:`advanced-options` for other ways to customize an OVB
deployment.
#. Wait for Heat stack to complete. To make this easier, the ``--poll``
option can be passed to ``deploy.py``.
.. note:: The BMC instance does post-deployment configuration that can
take a while to complete, so the Heat stack completing does
not necessarily mean the environment is entirely ready for
use. To determine whether the BMC is finished starting up,
run ``nova console-log bmc``. The BMC service outputs a
message like "Managing instance [uuid]" when it is fully
configured. There should be one of these messages for each
baremetal instance.
::
heat stack-show quintupleo
#. Build a nodes.json file that can be imported into Ironic::
bin/build-nodes-json
scp nodes.json centos@[undercloud floating ip]:~/instackenv.json
.. note:: Only the base environment file needs to be passed to this command.
Additional option environments that may have been passed to the
deploy command should *not* be included here.
.. note:: If ``--id`` was used to deploy the stack, make sure to pass the
generated ``env-${id}.yaml`` file to build-nodes-json using the
``--env`` parameter. Example::
bin/build-nodes-json --env env-foo.yaml
.. note:: If roles were used for the deployment, separate node files named
``nodes-<profile>.json`` will also be output that list only the
nodes for that particular profile. Nodes with no profile
specified will go in ``nodes-no-profile.json``. The base
``nodes.json`` will still contain all of the nodes in the
deployment, regardless of profile.
.. note:: ``build-nodes-json`` also outputs a file named ``bmc_bm_pairs``
that lists which BMC address corresponds to a given baremetal
instance.
Deleting a QuintupleO Environment
---------------------------------
All of the OpenStack resources created by OVB are part of the Heat stack, so
to delete the environment just delete the Heat stack. There are a few local
files that may also have been created as part of the deployment, such as
ID environment files, nodes.json files, and bmc_bm_pairs. Once the stack is
deleted these can be removed safely as well.
.. _advanced-options:
Advanced Options
----------------
There are also a number of advanced options that can be enabled for a
QuintupleO deployment. For each such option there is a sample environment
to be passed to the deploy command.
For example, to deploy all networks needed for TripleO network isolation, the
following command could be used::
bin/deploy.py --quintupleo -e env.yaml -e environments/all-networks.yaml
.. important:: When deploying with multiple environment files, ``env.yaml``
*must* be explicitly passed to the deploy command.
``deploy.py`` will only default to using ``env.yaml`` if no
environments are specified.
Some options may have additional configuration parameters. These parameters
will be listed in the environment file.
A full list of the environments available can be found at
:doc:`environment-index`.
Network Isolation
-----------------