Retire tripleo-incubator

This project is no longer maintained or used as part of TripleO.

Change-Id: I0c9d052d2b3e3a3e656342042461b330c74139f0
Related-Bug: #1768590
Depends-On: https://review.openstack.org/#/c/565836/
Alex Schultz 2018-05-02 10:39:11 -06:00
parent b445339881
commit ed8fed7aa2
117 changed files with 8 additions and 9512 deletions

.gitignore

@ -1,33 +0,0 @@
*.swp
*~
*.qcow2
.DS_Store
*.egg
*.egg-info
*.pyc
doc/source/devtest*.rst
openstack-tools
scripts/ceilometer
scripts/cinder
scripts/generate-keystone-pki
scripts/glance
scripts/heat
scripts/init-keystone
scripts/ironic
scripts/keystone
scripts/nova
scripts/neutron
scripts/openstack
scripts/os-apply-config
scripts/register-nodes
scripts/setup-neutron
scripts/swift
.tox
doc/build
AUTHORS
ChangeLog


@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/tripleo-incubator.git


@ -1,20 +0,0 @@
Contributing
============
If you would like to contribute to the development of OpenStack,
you must follow the steps documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will not be seen.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/tripleo


@ -1,70 +0,0 @@
TripleO Style Guidelines
========================
- Step 1: Read the OpenStack Style Guidelines [1]_
- Step 2: Read Bashate [2]_
- Step 3: See specific guidelines below
TripleO Specific Guidelines
---------------------------
There is plenty of code that does not adhere to these conventions currently.
However it is useful to have conventions as consistently formatted code is
easier to read and less likely to hide bugs. New code should adhere to these
conventions, and developers should consider sensible adjustment of existing
code when working nearby.
Formatting
~~~~~~~~~~
Please follow conventions described in OpenStack style guidelines [1]_ and Bashate [2]_.
- Order lists whenever possible, whether in code or data. If the order doesn't
matter, use a case-insensitive alphabetical sort. This makes them easier to
compare with ``diff``-like tools.
- "a" < "B" < "c"
- "a" < "ab"
- "2" < "10"
Bash
~~~~
As well as those rules described in Bashate [2]_:
- The interpreter is ``/bin/bash``.
- Provide a shebang ``#!/bin/bash`` if you intend your script to be run rather than sourced.
- Use ``set -e`` and ``set -o pipefail`` to exit early on errors.
- Use ``set -u`` to catch typos in variable names.
- Use ``$()``, not backquotes, for subshell commands.
- Double quote substitutions by default. It's OK to omit quotes if it's
  important that the result be multiple words. E.g. given ``VAR="a b"``:

  ``echo "${VAR}"``
    Quote variables.

  ``echo "$(echo a b)"``
    Quote subshells.

  ``echo "$(echo "${VAR}")"``
    In subshells, the inner quotes must not be escaped.

  ``function print_b() { echo "$2"; }; print_b ${VAR}``
    You must omit quotes for a variable to be passed as multiple arguments.

  ``ARRAY=(${VAR}); echo "${#ARRAY[@]}" = 2``
    You must omit quotes to form a multi-element array.
- Avoid repeated/copy-pasted code. Make it a function, or a shared script, etc.
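A runnable sketch of the quoting rules above (``count_args`` is an illustrative helper, not part of any TripleO script):

```shell
#!/bin/bash
set -eu -o pipefail

VAR="a b"
count_args() { echo "$#"; }

count_args "${VAR}"   # quoted: passed as one argument, prints 1
count_args ${VAR}     # unquoted: word-split into two arguments, prints 2

ARRAY=(${VAR})        # quotes omitted deliberately to get two elements
echo "${#ARRAY[@]}"   # prints 2
```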
Script Input
~~~~~~~~~~~~
- Avoid environment variables as input. Prefer command-line arguments.
- If passing structured data, use JSON.
- Avoid passing substantial amounts of bare data (e.g. JSON) on the command
line. It is preferred to place the data in a file and pass the filename.
Using process substitution ``<()`` can help with this.
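As a sketch of the filename-passing pattern (``show_name`` is a hypothetical consumer, and python3 is assumed available for JSON parsing):

```shell
#!/bin/bash
set -eu -o pipefail

# A consumer that expects a *filename* holding JSON, not bare JSON text.
show_name() {
    python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["name"])' "$1"
}

# Process substitution supplies a transient file-descriptor path,
# so no temporary file has to be managed by hand.
show_name <(echo '{"name": "seed"}')   # prints: seed
```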
Variables
~~~~~~~~~
- Within a shell script, variables that are defined for local use should be
lower_cased. Variables that are passed in or come from outside the script
should be UPPER_CASED.
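A minimal sketch of that convention (``NODE_COUNT`` stands in for an externally supplied value):

```shell
#!/bin/bash
set -eu

# Passed in from outside the script: UPPER_CASED.
NODE_COUNT=${NODE_COUNT:-3}

# Defined for local use only: lower_cased.
node_index=0
while [ "$node_index" -lt "$NODE_COUNT" ]; do
    node_index=$((node_index + 1))
done
echo "$node_index"
```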
References
----------
.. [1] http://docs.openstack.org/developer/hacking/
.. [2] http://git.openstack.org/cgit/openstack-dev/bashate/tree/README.rst

LICENSE

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -1,13 +1,10 @@
========================
Team and repository tags
========================
This project is no longer maintained.
.. image:: http://governance.openstack.org/badges/tripleo-incubator.svg
:target: http://governance.openstack.org/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
.. Change things from this point on
This Repo is Deprecated
=======================
Please see the `current TripleO docs <http://docs.openstack.org/developer/tripleo-docs/>`_.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.


@ -1,5 +0,0 @@
# Add OS_CLOUDNAME to PS1
if [ -z "${OS_CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}$PS1
    export OS_CLOUDPROMPT_ENABLED=1
fi


@ -1,32 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os

from sphinx import errors


def builder_inited(app):
    app.info('In: ' + os.path.abspath('.'))
    source_dir = app.srcdir
    build_dir = app.outdir
    app.info('Generating devtest from %s into %s' % (source_dir, build_dir))
    ret = os.system('scripts/extract-docs')
    if ret:
        raise errors.ExtensionError(
            "Error generating %s/devtest.rst" % build_dir)


def setup(app):
    app.connect('builder-inited', builder_inited)


@ -1 +0,0 @@
.. include:: ../../CONTRIBUTING.rst


@ -1 +0,0 @@
.. include:: ../../HACKING.rst


@ -1 +0,0 @@
.. include:: ../../README.rst


@ -1,50 +0,0 @@
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'oslosphinx',
    'ext.extract_docs'
]
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'TripleO'
copyright = u'2013, OpenStack Foundation'
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}


@ -1,494 +0,0 @@
Deploying TripleO
=================
Components
----------
Essential Components
^^^^^^^^^^^^^^^^^^^^
Essential components make up the self-deploying infrastructure that is
the heart of TripleO.
- Baremetal machine deployment (Nova Baremetal, soon to be 'Ironic')
- Baremetal volume management (Cinder - not available yet)
- Cluster orchestration (Heat)
- Machine image creation (Diskimage-builder)
- In-instance configuration management
(os-apply-config+os-refresh-config, and/or Chef/Puppet/Salt)
- Image management (Glance)
- Network management (Neutron)
- Authentication and service catalog (Keystone)
Additional Components
^^^^^^^^^^^^^^^^^^^^^
These components add value to the TripleO story, making it safer to
upgrade and evolve an environment, but are secondary to the core thing
itself.
- Continuous integration (Zuul/Jenkins)
- Monitoring and alerting (Ceilometer/nagios/etc)
Dependencies
------------
Each component can only be deployed once its dependencies are available.
TripleO is built on a Linux platform, so a Linux environment is required
both to create images and as the OS that will run on the machines. If
you have no Linux machines at all, you can download a live CD from a
number of vendors, which will permit you to run diskimage-builder to get
going.
Diskimage-builder
^^^^^^^^^^^^^^^^^
An internet connection is also required to download the various packages
used in preparing each image.
The machine images built *can* depend on Heat metadata, or they can just
contain configured Chef/Puppet/Salt credentials, depending on how much
of TripleO is in use. Avoiding Heat is useful when doing an incremental
adoption of TripleO (see later in this document).
Baremetal machine deployment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Baremetal deployments are delivered via Nova. Additionally, the network
must be configured so that the baremetal host machine can receive TFTP
from any physical machine that is being booted.
Nova
^^^^
Nova depends on Keystone, Glance and Neutron. In future Cinder will be
one of the dependencies.
There are three ways the service can be deployed:
- Via diskimage-builder built machine images, configured via a running
Heat cluster. This is the normal TripleO deployment.
- Via the special bootstrap node image, which is built by
diskimage-builder and contains a full working stack - nova, glance,
keystone and neutron, configured by statically generated Heat
metadata. This approach is used to get TripleO up and running.
- By hand - e.g. using devstack, or manually/chef/puppet/packages on a
dedicated machine. This can be useful for incremental adoption of
TripleO.
Cinder
^^^^^^
Cinder is needed for persistent storage on bare metal machines. That
aspect of TripleO is not yet available: when an instance is deleted,
the storage is deleted with it.
Neutron
^^^^^^^
Neutron depends on Keystone. The same three deployment options exist as
for Nova. The Neutron network node(s) must be the only DHCP servers on
the network.
Glance
^^^^^^
Glance depends on Keystone. The same three deployment options exist as
for Nova.
Keystone
^^^^^^^^
Keystone has no external dependencies. The same three deployment options
exist as for Nova.
Heat
^^^^
Heat depends on Nova, Cinder and Keystone. The same three deployment
options exist as for Nova.
In-instance configuration
^^^^^^^^^^^^^^^^^^^^^^^^^
The os-apply-config and os-refresh-config tools depend on Heat to
provide cluster configuration metadata. They can be used before Heat is
functional if a statically prepared metadata file is placed in the Heat
path: this is how the bootstrap node works.
os-apply-config and os-refresh-config can be used in concert with
Chef/Puppet/Salt, or not used at all, if you configure your services via
Chef/Puppet/Salt.
The reference TripleO elements do not depend on Chef/Puppet/Salt, to
avoid conflicting when organisations with an investment in
Chef/Puppet/Salt start using TripleO.
Deploying TripleO incrementally
-------------------------------
The general sequence is:
- Examine the current state of TripleO and assess where non-automated
solutions will be needed for your environment. E.g. at the time of
writing VLAN support requires baking the VLAN configuration into your
built disk images.
- Decide how much of TripleO you will adopt. See `Example deployments (possible today)`_
below.
- Install diskimage-builder somewhere and use it to build the disk
images your configuration will require.
- Bring up the aspects of TripleO you will be using, starting with a
boot-stack node (which you can run in a KVM VM in your datacentre),
using that to bring up an actual machine and transfer bare metal
services onto it, and then continuing up the stack.
Current caveats / workarounds
-----------------------------
These are all documented in README.rst and in the
`TripleO bugtracker`_.
.. _`TripleO bugtracker`: https://launchpad.net/tripleo
No API driven persistent storage
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Every 'nova boot' will reset the data on the machine it deploys to. To
do incremental image based updates they have to be done within the
runnning image. 'takeovernode' can do that, but as yet we have not
written rules to split out persistent data into another partition - so
some assembly required.
VLANs for physical nodes require customised images (rather than just metadata).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you require VLANs you should create a diskimage-builder element to
add the vlan package and vlan configuration to /etc/network/interfaces
as a first-boot rule.
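A sketch of what such an element's first-boot script might contain (the element name, path, VLAN id, and interface are assumptions; ``install-packages`` is diskimage-builder's package helper):

```shell
#!/bin/bash
# Hypothetical path: elements/vlan-networking/install.d/75-vlan
set -eu

# Pull in the vlan package so 8021q tagging works on first boot.
install-packages vlan

# Bake a tagged interface into /etc/network/interfaces; eth0 and
# VLAN id 25 are placeholders for your real NIC and VLAN.
cat >> /etc/network/interfaces <<'EOF'
auto eth0.25
iface eth0.25 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    vlan-raw-device eth0
EOF
```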
New seed image creation returns tmpfs space errors (systems with < 9GB of RAM)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Creating a new seed image takes up to 4.5GB of space inside a /tmp/imageXXXXX
directory. tmpfs can take up to 50% of RAM and systems with less than 9GB of
RAM will fail in this step. When using ``diskimage-builder`` directly, you can
prevent the space errors by:
- avoiding tmpfs with ``--no-tmpfs`` or
- specifying a minimum tmpfs size required with ``--min-tmpfs`` (which can be used
in conjunction with setting the environment variable ``TMP_DIR`` to override the
default temporary directory)
If you are using ``boot-seed-vm``, set the environment variable ``DIB_NO_TMPFS=1``.
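Put together, the workarounds above look roughly like this (the image and element names are illustrative, not prescriptive):

```shell
# Avoid tmpfs entirely:
disk-image-create --no-tmpfs -o seed vm boot-stack ubuntu

# Or declare the minimum tmpfs size (in GB) required, redirecting
# scratch space via TMP_DIR when RAM is tight:
TMP_DIR=/var/tmp disk-image-create --min-tmpfs 5 -o seed vm boot-stack ubuntu

# When going through boot-seed-vm instead:
DIB_NO_TMPFS=1 boot-seed-vm
```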
Example deployments (possible today)
------------------------------------
Baremetal only
^^^^^^^^^^^^^^
In this scenario you make use of the baremetal driver to deploy
unspecialised machine images, and perform specialisation using
Chef/Puppet/Salt - whatever configuration management toolchain you
prefer. The baremetal host system is installed manually, but a TripleO
image is used to deploy it.
It scales within any one broadcast domain to the capacity of the single
baremetal host.
Prerequisites
~~~~~~~~~~~~~
- A boot-stack image setup to run in KVM.
- A vanilla image.
- A userdata script to configure new instances to run however you want.
- A machine installed with your OS of choice in your datacentre.
- Physical machines configured to netboot in preference to local boot.
- A list of the machines + their IPMI details + mac addresses.
- A network range larger than the maximum number of concurrent deploy
operations to run in parallel.
- A network to run the instances on large enough to supply one ip per
instance.
HOWTO
~~~~~
- Build the images you need (add any local elements you need to the
commands)
- Copy ``tripleo-image-elements/elements/seed-stack-config/config.json`` to
``tripleo-image-elements/elements/seed-stack-config/local.json`` and
change the virtual power manager to 'nova...ipmi.IPMI'.
https://bugs.launchpad.net/tripleo/+bug/1178547::
disk-image-create -o bootstrap vm boot-stack local-config ubuntu
disk-image-create -o ubuntu ubuntu
The ``local-config`` element will copy your ssh key and your HTTP proxy
settings in the disk image during the creation process.
The ``stackuser`` element will create a user ``stack`` with the password ``stack``.
``disk-image-create`` will create an image with a very small disk size
that has to be resized, for example by cloud-init. You can use
``DIB_IMAGE_SIZE`` to increase this initial size, in GB.
- Setup a VM using bootstrap.qcow2 on your existing machine, with eth1
bridged into your datacentre LAN.
- Run up that VM, which will create a self contained nova baremetal
install.
- Reconfigure the networking within the VM to match your physical
network. https://bugs.launchpad.net/tripleo/+bug/1178397
https://bugs.launchpad.net/tripleo/+bug/1178099
- If you had exotic hardware needs, replace the deploy images that the
bootstack creates. https://bugs.launchpad.net/tripleo/+bug/1178094
- Enroll your vanilla image into the glance of that install. Be sure to
use ``tripleo-incubator/scripts/load-image`` as that will extract the
kernel and ramdisk and register them appropriately with glance.
- Enroll your other datacentre machines into that nova baremetal
install. A script that takes your machine inventory and prints out
something like::
nova baremetal-node-create --pm_user XXX --pm_address YYY --pm_password ZZZ COMPUTEHOST 24 98304 2048 MAC
can be a great help - and can be run from outside the environment.
- Setup admin users with SSH keypairs etc. e.g.::
nova keypair-add --pub-key .ssh/authorized_keys default
- Boot them using the ubuntu.qcow2 image, with appropriate user data to
connect to your Chef/Puppet/Salt environments.
Baremetal with Heat
^^^^^^^^^^^^^^^^^^^
In this scenario you use the baremetal driver to deploy specialised
machine images which are orchestrated by Heat.
Prerequisites.
~~~~~~~~~~~~~~
- A boot-stack image setup to run in KVM.
- A vanilla image with cfn-tools installed.
- A seed machine installed with your OS of choice in your datacentre.
HOWTO
~~~~~
- Build the images you need (add any local elements you need to the
commands)::
disk-image-create -o bootstrap vm boot-stack ubuntu heat-api
disk-image-create -o ubuntu ubuntu cfn-tools
- Setup a VM using bootstrap.qcow2 on your existing machine, with eth1
bridged into your datacentre LAN.
- Run up that VM, which will create a self contained nova baremetal
install.
- Enroll your vanilla image into the glance of that install.
- Enroll your other datacentre machines into that nova baremetal
install.
- Setup admin users with SSH keypairs etc.
- Create a Heat stack with your application topology. Be sure to use
the image id of your cfn-tools customised image.
GRE Neutron OpenStack managed by Heat
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In this scenario we build on Baremetal with Heat to deploy a full
OpenStack orchestrated by Heat, with specialised disk images for
different OpenStack node roles.
Prerequisites.
~~~~~~~~~~~~~~
- A boot-stack image setup to run in KVM.
- A vanilla image with cfn-tools installed.
- A seed machine installed with your OS of choice in your datacentre.
- At least 4 machines in your datacentre, one of which manually installed with
a recent Linux (libvirt 1.0+ or newer required).
- L2 network with private address range
- L3 accessible management network (via the L2 default router)
- VLAN with public IP ranges on it
Needed data
~~~~~~~~~~~
- a JSON file describing your baremetal machines in a format described
in :ref:`devtest-environment-configuration` (see: nodes), making sure to
include all MAC addresses for all network interface cards as well as the
IPMI (address, user, password) details for them.
- 2 spare contiguous ip addresses on your L2 network for seed deployment.
- 1 spare ip address for your seed VM, and one spare for talking to it on its
bridge (seedip, seediplink)
- 3 spare ip addresses for your undercloud tenant network + neutron services.
- Public IP address to be your undercloud endpoint
- Public IP address to be your overcloud endpoint
Install Seed
~~~~~~~~~~~~
Follow the 'devtest' guide but edit the seed config.json to:
- change the dnsmasq range to the seed deployment range
- change the heat endpoint details to refer to your seed ip address
- change the ctlplane ip and cidr to match your seed ip address
- change the power manager line to nova.virt.baremetal.ipmi.IPMI and
remove the virtual subsection.
- setup proxy arp (this and the related bits are used to avoid messing about
with the public NIC and bridging: you may choose to use that approach
instead...)::
sudo sysctl net/ipv4/conf/all/proxy_arp=1
arp -s <seedip> -i <external_interface> -D <external_interface> pub
ip addr add <seediplink>/32 dev brbm
ip route add <seedip>/32 dev brbm src <seediplink>
- setup ec2 metadata support::
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -i <external_interface> -p tcp -m tcp --dport 80 -j DNAT --to-destination <seedip>:8775
- setup DHCP relay::
sudo apt-get install dhcp-helper
and configure it with ``-s <seedip>``
Note that isc-dhcp-relay fails to forward responses correctly, so dhcp-helper is preferred
( https://bugs.launchpad.net/ubuntu/+bug/1233953 ).
Also note that dnsmasq may have to be stopped as they both listen to ``*:dhcps``
( https://bugs.launchpad.net/ubuntu/+bug/1233954 ).
Disable the ``filter-bootps`` cronjob (``./etc/cron.d/filter-bootp``) inside the seed vm and reset the table::
sudo iptables -F FILTERBOOTPS
edit /etc/init/novabm-dnsmasq.conf::
exec dnsmasq --conf-file= \
--keep-in-foreground \
--port=0 \
--dhcp-boot=pxelinux.0,<seedip>,<seedip> \
--bind-interfaces \
--pid-file=/var/run/dnsmasq.pid \
--interface=br-ctlplane \
--dhcp-range=<seed_deploy_start>,<seed_deploy_end>,<network_cidr>
- When you setup the seed, use <seedip> instead of 192.0.2.1, and you may need to edit seedrc.
- For setup-neutron::

    setup-neutron <start of seed deployment> <end of seed deployment> <cidr of network> <seedip> <metadata server> ctlplane
- Validate networking:
- From outside the seed host you should be able to ping <seedip>
- From the seed VM you should be able to ping <all ipmi addresses>
- From outside the seed host you should be able to get a response from the dnsmasq running on <seedip>
- Create your deployment ramdisk with baremetal in mind::
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST -a \
$NODE_ARCH -o $TRIPLEO_ROOT/undercloud boot-stack nova-baremetal \
os-collect-config stackuser $DHCP_DRIVER -p linux-image-generic mellanox \
serial-console --offline
- If your hardware has something other than eth0 plugged into the network,
fix your file injection template -
``/opt/stack/nova/nova/virt/baremetal/net-static.ubuntu.template`` inside the
seed VM, replacing the enumerated interface values with the right interface
to use (e.g. ``auto eth2`` ... ``iface eth2 inet static``).
Deploy Undercloud
~~~~~~~~~~~~~~~~~
Use ``heat stack-create`` per the devtest documentation to boot your undercloud,
but use the ``undercloud-bm.yaml`` file rather than ``undercloud-vm.yaml``.
Once it's booted:
- ``modprobe 8021q``
- edit ``/etc/network/interfaces`` and define your vlan
- delete the default route on your internal network
- add a targeted route to your management l3 range via the internal network router
- add a targeted route to ``169.254.169.254`` via <seedip>
- ``ifup`` the vlan interface
- fix your resolv.conf
- configure the undercloud per devtest.
- upgrade your quotas::
nova quota-update --cores node_size*machine_count --instances machine_count --ram node_size*machine_count admin-tenant-id
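The quota values are simple products of node size and machine count. A small sketch of the arithmetic (all sizes hypothetical; nova quotas take RAM in MB):

```shell
MACHINE_COUNT=10
CORES_PER_NODE=24
RAM_MB_PER_NODE=98304   # 96 GB per node, expressed in MB

# Totals to feed into nova quota-update
echo "cores=$((CORES_PER_NODE * MACHINE_COUNT))"   # cores=240
echo "instances=$MACHINE_COUNT"                    # instances=10
echo "ram=$((RAM_MB_PER_NODE * MACHINE_COUNT))"    # ram=983040
```

The resulting numbers are what you would substitute into the ``nova quota-update`` command above, along with your admin tenant id.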
Deploy Overcloud
~~~~~~~~~~~~~~~~
Follow devtest again, but modify the images you build per the undercloud notes, and for machines you put public services on, follow the undercloud notes to fix them up.
Example deployments (future)
----------------------------
WARNING: Here be draft notes.
VM seed + bare metal under cloud
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Be aware that nova metadata won't be available after booting, as the
default rule assumes this host never initiates requests
( https://bugs.launchpad.net/tripleo/+bug/1178487 ).

TripleO Incubator
=================
Getting Started
---------------
.. toctree::
:maxdepth: 1
README
userguide
devtest
HACKING
Detailed notes
---------------
.. tip::
The following docs each contain detailed notes about one of the scripts corresponding to one of the high-level stages of a TripleO deployment. You should be familiar with the content in the `Getting Started`_ section above before diving into these docs.
.. toctree::
:maxdepth: 1
devtest_variables
devtest_setup
devtest_testenv
devtest_update_network
devtest_ramdisk
devtest_seed
devtest_undercloud
devtest_overcloud
devtest_overcloud_images
devtest_end
Further Information
-------------------
.. toctree::
:maxdepth: 1
deploying
puppet
resources
troubleshooting
CONTRIBUTING
selinux-guide
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

TripleO Overcloud deployment with Puppet
========================================
Intro
-----
This document outlines how to deploy a TripleO overcloud using Puppet
for configuration. TripleO supports driving Puppet configuration
using Heat metadata directly via the normal os-collect-config/os-refresh-config
agents. No puppet master or PuppetDB infrastructure is required.
Building Images
---------------
When building TripleO images for use with Puppet the following elements
should be installed:
- ``hosts``
- ``os-net-config``
- ``os-collect-config``
- ``heat-config-puppet``
- ``puppet-modules``
- ``hiera``
The ``hosts`` and ``os-net-config`` elements are normal TripleO image elements and are
still used to deploy the basic physical networking configuration required to
bootstrap the node.
The ``os-collect-config`` and ``heat-config-puppet`` elements provide a mechanism
to run ``puppet apply`` commands that have been configured via Heat software
deployment configurations.
The ``puppet-modules`` element installs all of the required stackforge
``puppet-*`` modules. This element has two modes of operation: package or source
installs. The package mode assumes that all of the required modules exist in
a single distribution-provided package. The source mode deploys the puppet
modules from Git at image build time and automatically links them into
``/etc/puppet/modules``. The source mode makes use of source repositories, so
you can, for example, pin to a specific ``puppetlabs-mysql`` module version by setting::
DIB_REPOREF_puppetlabs_mysql=<GIT COMMIT HASH>
The ``hiera`` element provides a way to configure the hiera.yaml and hieradata
files on each node directly via Heat metadata. The ``tripleo-heat-templates``
are used to drive this configuration.
When building images for use with Puppet it is important to note that
regardless of whether you use source or package mode to install these core
elements the actual OpenStack service packages (Nova, Neutron, Keystone, etc)
will need to be installed via normal distro packages. This is required in
order to work with the stackforge puppet modules.
The OpenStack service packages can be installed at DIB time via the -p
option or at deployment time when Puppet is executed on each node.
Heat Templates
--------------
When deploying an overcloud with Heat, only the newer
``overcloud-without-mergepy.yaml`` template supports Puppet. To enable Puppet, simply use
``overcloud-resource-registry-puppet.yaml`` instead of the normal
``overcloud-resource-registry.yaml`` with your Heat ``stack-create`` command.
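A sketch of what that invocation might look like from a devtest environment (the stack name and paths are assumptions based on a standard checkout layout; adjust them for your setup):

```shell
# Boot the overcloud from the puppet-enabled template and resource registry
heat stack-create overcloud \
  -f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud-without-mergepy.yaml \
  -e $TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml
```

The ``-e`` flag passes the puppet resource registry as a Heat environment file, overriding the default implementations.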
Running Devtest Overcloud with Delorean on Fedora
-------------------------------------------------
This section describes the variables required in order to run
``devtest_overcloud.sh`` with Puppet. It assumes you have a fully working
TripleO undercloud (or seed) which has been preconfigured to work
in your environment.
.. note::
The following instructions assume this pre-existing config from a normal devtest Fedora setup::
export NODE_DIST='fedora selinux-permissive'
export DIB_RELEASE=21
export RDO_RELEASE=kilo
# Enable packages for all elements by default
export DIB_DEFAULT_INSTALLTYPE=package
# Do not manage /etc/hosts via cloud-init
export DIB_CLOUD_INIT_ETC_HOSTS=''
# Set ROOT_DISK == NODE_DISK (no ephemeral partition)
export ROOT_DISK=40
export NODE_DISK=40
By default TripleO uses puppet for configuration only. Packages (RPMs, etc)
are typically installed at image build time.
If you wish to have packages installed at deploy time via Puppet, it
is important to have a working undercloud nameserver. You can configure
this by adding the appropriate undercloud.nameserver setting
to your undercloud-env.json file. Alternately, if going directly
from the seed to the overcloud, you'll need to set seed.nameserver
in your testenv.json. If you wish to install packages at deploy
time you will also need to set EnablePackageInstall to true in your
overcloud-resource-registry-puppet.yaml (see below for instructions
on how to override your Heat resource registry).
1) Git clone the tripleo-puppet-elements [1]_ project into your $TRIPLEO_ROOT. This is currently a non-standard image elements repository and needs to be manually cloned in order to build Puppet images.
2) Add tripleo-puppet-elements to your ELEMENTS_PATH::
export ELEMENTS_PATH=$ELEMENTS_PATH:$TRIPLEO_ROOT/tripleo-puppet-elements/elements:$TRIPLEO_ROOT/heat-templates/hot/software-config/elements
3) Set a variable so that a custom puppet image gets built and loaded into Glance::
export OVERCLOUD_DISK_IMAGES_CONFIG=$TRIPLEO_ROOT/tripleo-incubator/scripts/overcloud_puppet_disk_images.yaml
4) Override the tripleo-heat-templates resource registry::
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"
5) Configure your Delorean repo URL. This is used to fetch more recently built upstream packages for your OpenStack services::
export DELOREAN_REPO_URL="http://trunk.rdoproject.org/f21/current/"
For more information on Delorean see [2]_
6) Enable the use of stackforge modules from Git. This is to work around the fact that the Fedora RPM doesn't have support for all the required modules yet::
export DIB_INSTALLTYPE_puppet_modules=source
7) Source your undercloud environment RC file (perhaps via the select-cloud script). Then execute devtest_overcloud.sh::
devtest_overcloud.sh
References
----------
.. [1] http://git.openstack.org/openstack/tripleo-puppet-elements/
.. [2] https://github.com/openstack-packages/delorean

Tripleo team resources
======================
- Launchpad team (lets you get our ssh keys etc easily):
::
https://launchpad.net/~tripleo
- Demo and staging PPAs (for custom binaries):
::
apt-add-repository ppa:tripleo/demo
apt-add-repository ppa:tripleo/demo-staging
- Git repositories:
::
https://git.openstack.org/cgit/?q=tripleo
https://git.openstack.org/cgit/?q=tuskar
https://git.openstack.org/cgit/openstack/diskimage-builder
- IRC: duh.
::
irc://irc.freenode.net/#tripleo

SELinux Developer Guide
=======================
Do I have a SELinux problem?
----------------------------
At the moment SELinux is set to run in permissive mode in TripleO. This means
that problems are logged but not blocked. To see if you have a SELinux problem
that needs to be fixed, examine /var/log/audit/audit.log in your local
development environment or from the TripleO-CI log archive. You may need to
examine the log files for multiple nodes (undercloud and/or overcloud).
Any line that has "denied" is a problem. This guide will talk about common
problems and how to fix them.
Workflow
--------
All changes are assumed to have been tested locally before a patch is submitted
upstream for review. Testing should include inspecting the local audit.log to
see that no new SELinux errors were logged.
If an error was logged, it should be fixed using the guidelines described below.
If no errors were logged, then the change is submitted for review. In addition
to getting the change to pass CI, the audit.log archived from the CI runs should
be inspected to see no new SELinux errors were logged. Problems should be fixed
until the audit.log is clear of new errors.
The archived audit.log file can be found in the logs directory for each
individual instance that is brought up. For example the seed instance log files
can be seen here:
http://logs.openstack.org/03/115303/1/check-tripleo/check-tripleo-novabm-overcloud-f20-nonha/e5bef5c/logs/seed_logs/
The audit.log is archived as audit.txt.gz.
The ``ps -efZ`` output can be found in host_info.txt.gz.
Updating SELinux file security contexts
---------------------------------------
The targeted policy expects directories and files to be placed in certain
locations. For example, nova normally has files under /var/log/nova and
/var/lib/nova. Its executables are placed under /usr/bin.
::
[user@server files]$ pwd
/etc/selinux/targeted/contexts/files
[user@server files]$ grep nova *
file_contexts:/var/lib/nova(/.*)? system_u:object_r:nova_var_lib_t:s0
file_contexts:/var/log/nova(/.*)? system_u:object_r:nova_log_t:s0
file_contexts:/var/run/nova(/.*)? system_u:object_r:nova_var_run_t:s0
file_contexts:/usr/bin/nova-api -- system_u:object_r:nova_api_exec_t:s0
file_contexts:/usr/bin/nova-cert -- system_u:object_r:nova_cert_exec_t:s0
TripleO diverges from what the targeted policy expects and places files and
executables in different locations. When a file or directory is not properly
labeled, the service may fail to start up. An SELinux AVC denial is logged to
/var/log/audit/audit.log when SELinux detects that a service doesn't have permission
to access a file or directory.
When the ephemeral element is active, upstream TripleO places /var/log and
/var/lib under the ephemeral mount point, /mnt/state. The directories and files
on these locations may not have the correct file security contexts if they were
installed outside of yum.
The directories and files in the ephemeral disk must be updated to have the
correct security context. Here is an example for nova:
https://github.com/openstack/tripleo-image-elements/blob/master/elements/nova/os-refresh-config/configure.d/20-nova-selinux#L6
::
semanage fcontext -a -t nova_var_lib_t "/mnt/state/var/lib/nova(/.*)?"
restorecon -Rv /mnt/state/var/lib/nova
semanage fcontext -a -t nova_log_t "/mnt/state/var/log/nova(/.*)?"
restorecon -Rv /mnt/state/var/log/nova
For nova we use semanage to relabel /mnt/state/var/lib/nova with the type
nova_var_lib_t and /mnt/state/var/log/nova with the type nova_log_t. Then
we call restorecon to apply the labels.
To see a file's security context run "ls -lZ <filename>".
::
[user@server]# ls -lZ /mnt/state/var/lib
drwxr-xr-x. root root system_u:object_r:file_t:s0 boot-stack
drwxrwx---. ceilometer ceilometer system_u:object_r:file_t:s0 ceilometer
drwxr-xr-x. root root system_u:object_r:file_t:s0 cinder
drwxrwx---. glance glance system_u:object_r:glance_var_lib_t:s0 glance
drwxr-xr-x. mysql mysql system_u:object_r:mysqld_db_t:s0 mysql
drwxrwx---. neutron neutron system_u:object_r:neutron_var_lib_t:s0 neutron
drwxrwxr-x. nova nova system_u:object_r:nova_var_lib_t:s0 nova
drwxrwx---. rabbitmq rabbitmq system_u:object_r:rabbitmq_var_lib_t:s0 rabbitmq
TripleO installs many components under /opt/stack/venvs/. Executables under
/opt/stack/venvs/<component>/bin need to be relabeled. For these we do a path
substitution to tell the SELinux policy that /usr/bin and
/opt/stack/venvs/<component>/bin are equivalent. When the image is relabeled
during image build or during first boot, SELinux will relabel the files under
/opt/stack/venvs/<component>/bin as if they were installed under /usr/bin.
An example of a path substitution for nova:
https://github.com/openstack/tripleo-image-elements/blob/master/elements/nova/install.d/nova-source-install/74-nova
::
add-selinux-path-substitution /usr/bin $NOVA_VENV_DIR/bin
Allowing port access
--------------------
Services are granted access to a pre-specified set of ports by the
SELinux policy. The list of ports for a service can be seen using
::
semanage port -l | grep http
You can grant a service access to additional ports by using semanage.
::
semanage port -a -t http_port_t -p tcp 9876
If the port you are adding is a standard or default port, then it would be
appropriate to also file a bug against upstream SELinux to ask for the policy
to include it by default.
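A quick sketch of the full cycle, using semanage to add, verify, and later remove a port assignment (port 9876 here is only an illustration):

```shell
# Label TCP port 9876 as an HTTP port so httpd-domain services may bind it
semanage port -a -t http_port_t -p tcp 9876

# Verify the assignment took effect
semanage port -l | grep 9876

# Remove the local customization again if it is no longer needed
semanage port -d -t http_port_t -p tcp 9876
```

Customizations added with ``semanage port -a`` persist across reboots until explicitly deleted.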
Using SELinux booleans
----------------------
Sometimes a problem can be fixed by toggling a SELinux boolean to allow certain
actions.
Currently we enable two booleans in TripleO.
https://github.com/openstack/tripleo-image-elements/blob/master/elements/keepalived/os-refresh-config/configure.d/20-keepalived-selinux
::
setsebool -P domain_kernel_load_modules 1
https://github.com/openstack/tripleo-image-elements/blob/master/elements/haproxy/os-refresh-config/configure.d/20-haproxy-selinux
::
setsebool -P haproxy_connect_any 1
domain_kernel_load_modules is used with the keepalived element to allow
keepalive to load kernel modules.
haproxy_connect_any is used with the haproxy element to allow it to proxy any
port.
When a boolean is enabled, it should be enabled within the element that requires
it.
"semanage boolean -l" lists the booleans that are available in the current
policy.
When would you know to use a boolean? Generating a custom policy for the denials
you are seeing will tell you whether a boolean can be used to fix the denials.
For example, when I generated a custom policy for the haproxy denials I was
seeing in audit.log, the custom policy stated that haproxy_connect_any could be
used to fix the denials.
::
#!!!! This avc can be allowed using the boolean 'haproxy_connect_any'
allow haproxy_t glance_registry_port_t:tcp_socket name_bind;
#!!!! This avc can be allowed using the boolean 'haproxy_connect_any'
allow haproxy_t neutron_port_t:tcp_socket name_bind;
How to generate a custom policy is discussed in the next section.
Generating a custom policy
--------------------------
If relabeling or toggling a boolean doesn't solve your problem, the next step is
to generate a custom policy, used as a hotfix to allow the actions that SELinux
denied.
To generate a custom policy, use this command
::
ausearch -m AVC | audit2allow -M <custom-policy-name>
.. note:: Not all AVCs should be allowed from an ausearch. In fact, most of
them likely stem from leaked file descriptors, mislabeled files, or bugs in code.
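Concretely, the workflow looks something like this (the module name is hypothetical; always review the generated .te file before loading anything):

```shell
# Generate a policy module from the AVC denials in the audit log
ausearch -m AVC | audit2allow -M tripleo-selinux-nova

# Inspect the generated rules before trusting them
cat tripleo-selinux-nova.te

# Load the compiled module that audit2allow produced
semodule -i tripleo-selinux-nova.pp
```

``audit2allow -M`` writes both the human-readable .te source and the compiled .pp module into the current directory.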
The custom policies are stored under
tripleo-image-elements/elements/selinux/custom-policies. We use a single policy
file for each component (one for nova, keystone, etc..). It is organized as per
component to mirror how the policies are organized upstream. When you generate
your custom policy, instead of dropping in a new file, you may need to edit an
existing policy file to include the new changes.
Each custom policy file must contain comments referencing the upstream bugs
(Launchpad and upstream SELinux) that the policy is intended to fix. The
comments help with housekeeping. When a bug is fixed upstream, a developer can
then quickly search for the bug number and delete the appropriate lines from the
custom policy file that are no longer needed.
Example: https://review.openstack.org/#/c/107233/3/elements/selinux/custom-policies/tripleo-selinux-ssh.te
Filing bugs for SELinux policy updates
--------------------------------------
The custom policy is meant to be used as a temporary solution until the
underlying problem is addressed. Most of the time, the upstream SELinux policy
needs to be updated to incorporate the rules suggested by the custom policy. To
ensure that the upstream policy is updated, we need to file a bug against the
selinux-policy package.
For Fedora, use this link to create a bug
https://bugzilla.redhat.com/enter_bug.cgi?component=selinux-policy&product=Fedora
For RHEL 7, use this link to create a bug, and file it against the
openstack-selinux component, not the selinux-policy component, because the
latter is released less frequently.
https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20OpenStack
Under "Version-Release number" include the package and version of the affected
component.
::
Example:
selinux-policy-3.12.1-179.fc20.noarch
selinux-policy-targeted-3.12.1-179.fc20.noarch
openssh-6.4p1-5.fc20.i686
openssh-clients-6.4p1-5.fc20.i686
openssh-server-6.4p1-5.fc20.i686
Include the ``ps -efZ`` output from the affected system, and most importantly,
attach /var/log/audit/audit.log to the bug.
Also file a bug in Launchpad, referencing the bugzilla. When you commit the
custom policy into github, the commit message should reference the Launchpad
bug ID. The Launchpad bug should also be tagged with "selinux" to make SELinux
bugs easier to find.
Setting SELinux to enforcing mode
---------------------------------
By default in TripleO, SELinux runs in permissive mode. This is set in the
NODE_DIST environment variable in the devtest scripts.
::
export NODE_DIST="fedora selinux-permissive"
To set SELinux to run in enforcing mode, remove the selinux-permissive element
by adding this line to your ~/.devtestrc file.
::
export NODE_DIST="fedora"
Additional Resources
--------------------
1. http://openstack.redhat.com/SELinux_issues
2. http://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/ch09.html

Troubleshooting tips
====================
VM won't boot
-------------
Make sure the partition table is correct. See
https://bugs.launchpad.net/nova/+bug/1088652.
Baremetal
---------
If you get a no hosts found error in the schedule/nova logs, check:
::
mysql nova -e 'select * from compute_nodes;'
After adding a bare metal node, the bare metal backend writes an entry
to the compute nodes table, but it can take about five seconds for the
entry to appear.
Be sure that the hostname in nova\_bm.bm\_nodes (service\_host) is the
same as the one used by nova. If no value has been specified using the
flag "host=" in nova.conf, the default one is:
::
python -c "import socket; print socket.getfqdn()"
You can override this value when populating the bm database using the -h
flag:
::
scripts/populate-nova-bm-db.sh -i "xx:xx:xx:xx:xx:xx" -j "yy:yy:yy:yy:yy:yy" -h "nova_hostname" add
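To cross-check the two values, compare the FQDN nova computes with what is stored in the bare metal database (a sketch only; the exact table layout may vary by release):

```shell
# The hostname nova defaults to (Python 2 era syntax, matching the snippet above)
python -c "import socket; print socket.getfqdn()"

# The service_host recorded for each bare metal node; these must match the above
mysql nova_bm -e 'select service_host from bm_nodes;'
```

If the two disagree, the scheduler will never match your bare metal nodes to the compute service.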
DHCP Server Work Arounds
------------------------
If you don't control the DHCP server on your flat network, you will need
to at least have someone put the MAC address of the server you're trying
to provision into their DHCP server.
::
host bm-compute001 {
hardware ethernet 78:e7:d1:XX:XX:XX ;
next-server 10.0.1.2 ;
filename "pxelinux.0";
}
Write down the MAC address for the IPMI management interface and the NIC
you're booting from. You will also need to know the IP address of both.
Most DHCP servers won't expire the leased IP too quickly, so if you're lucky
you will get the same IP each time you reboot. With that information
bare-metal can generate the correct pxelinux.cfg/. (???? Commands to
tell nova?)
In the provisional environment I had, there was another problem: the
DHCP server was already modified to point to a next-server. A quick
workaround was to redirect the connections using iptables.
::
modprobe nf_nat_tftp
baremetal_installer="<ip address>/<mask>"
iptables -t nat -A PREROUTING -i eth2 -p udp --dport 69 -j DNAT --to ${baremetal_installer}:69
iptables -t nat -A PREROUTING -i eth2 -p tcp --dport 10000 -j DNAT --to ${baremetal_installer}:10000
iptables -A FORWARD -p udp -i eth2 -o eth2 -d ${baremetal_installer} --dport 69 -j ACCEPT
iptables -A FORWARD -p tcp -i eth2 -o eth2 -d ${baremetal_installer} --dport 10000 -j ACCEPT
iptables -t nat -A POSTROUTING -j MASQUERADE
Notice the additional rules for port 10000. They are for the bare-metal
interface (???). You should have matching reverse DNS too. We experienced
problems connecting to port 10000 (????). That may be very unique to my
environment, btw.
Image Build Race Condition
--------------------------
Multiple times we experienced a failure to build a good bootable image.
This is because of a race condition hidden in the code currently. Just
remove the failed image and try to build it again.
Once you have a working image, check the Nova DB to make sure it is
not flagged as removed (???)
Virtual Machines
----------------
VMs booting terribly slowly in KVM?
------------------------------------
Check the console: if the slowdown happens right after probing for
consoles, wait 2m or so and you should see a serial console as the next
line of output after the VGA console. If so, you're likely running into
https://bugzilla.redhat.com/show_bug.cgi?id=750773. Remove the serial
device from your machine definition in libvirt, and it should fix it.

Using TripleO
=============
Learning
--------
Learning how TripleO all works is essential. Working through :doc:`devtest` is
highly recommended.
Overview
--------
.. image:: overview.svg
Setup
-----
The script ``install-dependencies`` from incubator will install the basic tools
needed to build and deploy images via TripleO. What it won't do is larger-scale
tasks like configuring an Ubuntu/Fedora/etc mirror, a PyPI mirror, Squid or
similar HTTP caches, etc. If you are deploying rarely, these things are
optional.
However, if you are building lots of images, having a local mirror of the
things you are installing can be extremely advantageous.
Operating
---------
The general design of TripleO is intended to produce small unix-like tools
that can be used to drive arbitrary cloud deployments. It is expected that
you will wrap them in higher-order tools (such as CM tools, custom UIs,
or even just targeted scripts). TripleO is building a dedicated API to unify
all these small tools for common-case deployments, called Tuskar, but that is
not yet ready for prime time. We'll start using it ourselves as it becomes
ready.
Take the time to learn the plumbing: nova, nova-bm or ironic, glance, keystone,
etc.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="1052.3622"
height="744.09448"
id="svg2"
version="1.1"
inkscape:version="0.48.3.1 r9886"
sodipodi:docname="tripleo-gospel.svg">
<title
id="title4133">TripleO Concept</title>
<defs
id="defs4" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.7"
inkscape:cx="553.2491"
inkscape:cy="432.78623"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1437"
inkscape:window-height="798"
inkscape:window-x="163"
inkscape:window-y="28"
inkscape:window-maximized="0" />
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title>TripleO Concept</dc:title>
<dc:creator>
<cc:Agent>
<dc:title>Clint Byrum</dc:title>
</cc:Agent>
</dc:creator>
<dc:rights>
<cc:Agent>
<dc:title>HP Cloud Services</dc:title>
</cc:Agent>
</dc:rights>
<cc:license
rdf:resource="http://creativecommons.org/licenses/by-sa/3.0/" />
<dc:date>2013-03-06</dc:date>
</cc:Work>
<cc:License
rdf:about="http://creativecommons.org/licenses/by-sa/3.0/">
<cc:permits
rdf:resource="http://creativecommons.org/ns#Reproduction" />
<cc:permits
rdf:resource="http://creativecommons.org/ns#Distribution" />
<cc:requires
rdf:resource="http://creativecommons.org/ns#Notice" />
<cc:requires
rdf:resource="http://creativecommons.org/ns#Attribution" />
<cc:permits
rdf:resource="http://creativecommons.org/ns#DerivativeWorks" />
<cc:requires
rdf:resource="http://creativecommons.org/ns#ShareAlike" />
</cc:License>
</rdf:RDF>
</metadata>
<g
inkscape:label="Needs"
inkscape:groupmode="layer"
id="layer1"
transform="translate(0,-308.2677)">
<g
id="g4084"
transform="translate(-17.971026,-202.85714)">
<rect
y="588.07648"
x="249.64288"
height="392.32117"
width="180"
id="rect2985"
style="fill:#aaffaa;fill-rule:evenodd;stroke:#000000;stroke-width:1.30200422px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<text
sodipodi:linespacing="125%"
id="text3021"
y="566.64789"
x="277.72296"
style="font-size:28px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="566.64789"
x="277.72296"
id="tspan3023"
sodipodi:role="line">Software</tspan></text>
</g>
<g
id="g4089"
transform="translate(-19.790057,-205.71426)">
<rect
y="588.07648"
x="450.27661"
height="392.32117"
width="180"
id="rect2985-2"
style="fill:#ff9955;fill-rule:evenodd;stroke:#000000;stroke-width:1.30200422px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<text
sodipodi:linespacing="125%"
id="text3025"
y="569.505"
x="445.67407"
style="font-size:28px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="569.505"
x="445.67407"
id="tspan3027"
sodipodi:role="line">Configuration</tspan></text>
</g>
<g
id="g4102"
transform="translate(-16.77022,-203.34933)">
<rect
y="588.07648"
x="646.07147"
height="392.32117"
width="180"
id="rect2985-7"
style="fill:#00ffff;fill-rule:evenodd;stroke:#000000;stroke-width:1.30200422px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<text
sodipodi:linespacing="125%"
id="text3029"
y="566.64789"
x="698.81561"
style="font-size:28px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="566.64789"
x="698.81561"
id="tspan3031"
sodipodi:role="line">State</tspan></text>
</g>
<g
id="g4107"
transform="translate(-18.571429,-204.28573)">
<rect
y="588.07648"
x="846.68738"
height="392.32117"
width="180"
id="rect2985-1"
style="fill:#e5d5ff;fill-rule:evenodd;stroke:#000000;stroke-width:1.30200422px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<text
sodipodi:linespacing="125%"
id="text3033"
y="568.07648"
x="841.42859"
style="font-size:28px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="568.07648"
x="841.42859"
id="tspan3035"
sodipodi:role="line">Orchestration</tspan></text>
</g>
<g
id="g4079"
transform="translate(-18.571429,-202.85714)">
<rect
y="588.07648"
x="51.42857"
height="392.32117"
width="180"
id="rect2985-9"
style="fill:#ff8080;fill-rule:evenodd;stroke:#000000;stroke-width:1.30200422px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
<text
sodipodi:linespacing="125%"
id="text3823"
y="566.64789"
x="56.642437"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
style="font-size:28px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Sans;-inkscape-font-specification:Sans"
y="566.64789"
x="56.642437"
id="tspan3825"
sodipodi:role="line">Provisioning</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer2"
inkscape:label="Packaging"
style="display:inline">
<g
id="g3853"
transform="matrix(1.0589377,0,0,0.77742143,-47.820973,99.91528)" />
<g
id="g4128"
transform="translate(-17.142857,-202.85714)">
<path
sodipodi:type="arc"
style="fill:#c4c8b7;stroke:#000000;stroke-opacity:1;display:inline"
id="path3847"
sodipodi:cx="532.85712"
sodipodi:cy="394.09448"
sodipodi:rx="254.28572"
sodipodi:ry="67.14286"
d="m 787.14284,394.09448 c 0,37.08198 -113.8476,67.14286 -254.28572,67.14286 -140.43813,0 -254.28572,-30.06088 -254.28572,-67.14286 0,-37.08198 113.84759,-67.14286 254.28572,-67.14286 140.43812,0 254.28572,30.06088 254.28572,67.14286 z"
transform="matrix(1.0589377,0,0,0.77742143,-42.106687,299.91528)" />
<text
sodipodi:linespacing="125%"
id="text3849"
y="617.32794"
x="427.78079"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
y="617.32794"
x="427.78079"
id="tspan3851"
sodipodi:role="line">Packages</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer3"
inkscape:label="Puppet/Chef"
style="display:inline">
<g
id="g4112"
transform="translate(-5.7142857,-200)">
<path
transform="matrix(1,0,0,0.74798414,-47.142857,250.6084)"
d="m 852.85715,389.80878 c 0,35.50402 -117.36536,64.28571 -262.14286,64.28571 -144.7775,0 -262.14285,-28.78169 -262.14285,-64.28571 0,-35.50402 117.36535,-64.28572 262.14285,-64.28572 144.7775,0 262.14286,28.7817 262.14286,64.28572 z"
sodipodi:ry="64.285713"
sodipodi:rx="262.14285"
sodipodi:cy="389.80878"
sodipodi:cx="590.71429"
id="path3857"
style="fill:#aca793;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text3859"
y="553.21436"
x="418.48355"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
y="553.21436"
x="418.48355"
id="tspan3861"
sodipodi:role="line">Puppet/Chef</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer4"
inkscape:label="Juju"
style="display:inline">
<g
id="g4117"
transform="matrix(0.85175805,0,0,1,147.02514,-185.71429)">
<path
d="m 1001.4286,458.38019 c 0,22.09139 -200.51252,40 -447.85719,40 -247.34467,0 -447.85715,-17.90861 -447.85715,-40 0,-22.09139 200.51248,-40 447.85715,-40 247.34467,0 447.85719,17.90861 447.85719,40 z"
sodipodi:ry="40"
sodipodi:rx="447.85715"
sodipodi:cy="458.38019"
sodipodi:cx="553.57141"
id="path3864"
style="fill:#e9ddaf;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text3866"
y="469.41534"
x="519.60657"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
y="469.41534"
x="519.60657"
id="tspan3868"
sodipodi:role="line">Juju</tspan></text>
</g>
<g
id="g4122"
transform="translate(-15.714286,-214.28571)">
<path
d="m 417.14285,440.52304 c 0,15.38508 -78.66996,27.85714 -175.71428,27.85714 -97.04431,0 -175.714276,-12.47206 -175.714276,-27.85714 0,-15.38507 78.669966,-27.85714 175.714276,-27.85714 97.04432,0 175.71428,12.47207 175.71428,27.85714 z"
sodipodi:ry="27.857143"
sodipodi:rx="175.71428"
sodipodi:cy="440.52304"
sodipodi:cx="241.42857"
id="path3870"
style="fill:#ffeeaa;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text3872"
y="455.08359"
x="186.08678"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
y="455.08359"
x="186.08678"
id="tspan3874"
sodipodi:role="line">MaaS</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer5"
inkscape:label="Nova"
style="display:inline">
<g
id="g4026"
transform="translate(-18.571429,-207.85714)">
<path
d="m 224.28571,349.09448 c 0,25.64179 -37.41621,46.42857 -83.57143,46.42857 -46.155225,0 -83.571427,-20.78678 -83.571427,-46.42857 0,-25.64179 37.416202,-46.42857 83.571427,-46.42857 46.15522,0 83.57143,20.78678 83.57143,46.42857 z"
sodipodi:ry="46.42857"
sodipodi:rx="83.571426"
sodipodi:cy="349.09448"
sodipodi:cx="140.71428"
id="path3877"
style="fill:#eeffaa;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text3879"
y="364.09448"
x="88.571426"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
y="364.09448"
x="88.571426"
id="tspan3881"
sodipodi:role="line">Nova</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer6"
inkscape:label="dib"
style="display:inline">
<g
id="g4031"
transform="translate(-19.821426,-204.28571)">
<path
d="m 425.7143,354.09448 c 0,23.66935 -39.01519,42.85715 -87.14286,42.85715 -48.12767,0 -87.14286,-19.1878 -87.14286,-42.85715 0,-23.66934 39.01519,-42.85714 87.14286,-42.85714 48.12767,0 87.14286,19.1878 87.14286,42.85714 z"
sodipodi:ry="42.857143"
sodipodi:rx="87.14286"
sodipodi:cy="354.09448"
sodipodi:cx="338.57144"
id="path3883"
style="fill:#eeffaa;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc"
transform="translate(1.4285585,-8.5714283)" />
<text
sodipodi:linespacing="125%"
id="text3885"
y="351.98834"
x="256.25797"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="351.98834"
x="256.25797"
id="tspan3887"
sodipodi:role="line">diskimage-builder</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer7"
inkscape:label="os-apply-config"
style="display:inline">
<g
id="g4036"
transform="translate(-22.177684,-204.35663)">
<path
transform="translate(2.5348367,-14.214809)"
d="m 625.71426,359.80878 c 0,22.09139 -39.33498,40 -87.85714,40 -48.52216,0 -87.85714,-17.90861 -87.85714,-40 0,-22.09139 39.33498,-40 87.85714,-40 48.52216,0 87.85714,17.90861 87.85714,40 z"
sodipodi:ry="40"
sodipodi:rx="87.85714"
sodipodi:cy="359.80878"
sodipodi:cx="537.85712"
id="path3889"
style="fill:#eeffaa;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text3885-2"
y="351.91745"
x="464.43744"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="351.91745"
x="464.43744"
id="tspan3887-4"
sodipodi:role="line">os-apply-config</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer8"
inkscape:label="os-refresh-config"
style="display:inline">
<g
id="g4041"
transform="translate(-17.259486,-205.23164)">
<path
transform="matrix(1.1533124,0,0,0.75737029,-113.35132,78.288337)"
d="m 807.14285,354.09448 c 0,17.35752 -32.61926,31.42857 -72.85714,31.42857 -40.23789,0 -72.85714,-14.07105 -72.85714,-31.42857 0,-17.35752 32.61925,-31.42857 72.85714,-31.42857 40.23788,0 72.85714,14.07105 72.85714,31.42857 z"
sodipodi:ry="31.428572"
sodipodi:rx="72.85714"
sodipodi:cy="354.09448"
sodipodi:cx="734.28571"
id="path4014"
style="fill:#eeffaa;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text4016"
y="352.66589"
x="662.85712"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
xml:space="preserve"><tspan
y="352.66589"
x="662.85712"
id="tspan4018"
sodipodi:role="line">os-refresh-config</tspan></text>
</g>
</g>
<g
inkscape:groupmode="layer"
id="layer9"
inkscape:label="Heat"
style="display:inline">
<g
id="g4046"
transform="translate(-22.142889,-230.71429)">
<path
d="m 1018.5714,371.95163 c 0,26.43077 -36.77656,47.85714 -82.14281,47.85714 -45.36625,0 -82.14286,-21.42637 -82.14286,-47.85714 0,-26.43077 36.77661,-47.85714 82.14286,-47.85714 45.36625,0 82.14281,21.42637 82.14281,47.85714 z"
sodipodi:ry="47.857143"
sodipodi:rx="82.14286"
sodipodi:cy="371.95163"
sodipodi:cx="936.42859"
id="path4020"
style="fill:#eeffaa;stroke:#000000;stroke-opacity:1"
sodipodi:type="arc" />
<text
sodipodi:linespacing="125%"
id="text4022"
y="384.09448"
x="885.71429"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
xml:space="preserve"><tspan
y="384.09448"
x="885.71429"
id="tspan4024"
sodipodi:role="line">Heat</tspan></text>
</g>
</g>
</svg>


Binary file not shown.

Binary file not shown.



@ -1,8 +0,0 @@
export NOVA_VERSION=1.1
export OS_PASSWORD=$(os-apply-config -m $TE_DATAFILE --type raw --key overcloud.password)
export OS_AUTH_URL=$(os-apply-config -m $TE_DATAFILE --type raw --key overcloud.endpoint)
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export COMPUTE_API_VERSION=1.1
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud


@ -1,8 +0,0 @@
export NOVA_VERSION=1.1
export OS_PASSWORD=$OVERCLOUD_DEMO_PASSWORD
export OS_AUTH_URL=$(os-apply-config -m $TE_DATAFILE --type raw --key overcloud.endpoint)
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export COMPUTE_API_VERSION=1.1
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud-user


@ -1,5 +0,0 @@
#!/bin/bash -
# Ignore the E012 bashate rule until the bug in bashate is fixed.
find scripts -type f -not -name '*.awk' -print0 | xargs -0 grep -HL '^#!/usr/bin/env python' | xargs bashate -v -i E012


@ -1,103 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options () {
echo "Usage: $SCRIPT_NAME --download BASE_URL [options] IMAGE_SET"
echo
echo "Acquire an image from cache/download it."
echo
echo "A BASE_URL must be supplied; the image is downloaded only if the local"
echo "copy differs. With -c, locally built images are not refreshed: while we"
echo "can do cache invalidation for downloaded images, we have no cache"
echo "invalidation logic yet for built images, so -c gives direct control."
echo
echo "What constitutes an image is determined by the images key of the"
echo "IMAGE_SET metadata file. This is a json file with the following structure:"
echo
echo " {\"images\": ["
echo " \"image1_control.qcow2\","
echo " \"image2_compute.qcow2\","
echo " \"image3_compute_ha.qcow2\","
echo " ..."
echo " ]}"
echo
echo "Options:"
echo " -c -- re-use existing images rather than rebuilding."
echo " --download BASE_URL -- download images from BASE_URL/\$imagename."
echo " -h, --help -- this text."
echo
exit $1
}
DOWNLOAD_BASE=
USE_CACHE=
TEMP=$(getopt -o ch -l download:,help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ] ; then show_options 1; fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
--download) DOWNLOAD_BASE=$2; shift 2;;
-c) USE_CACHE=1; shift 1;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
IMAGE_SET=${1:-''}
shift || true
if [ -z "$IMAGE_SET" -o -z "$DOWNLOAD_BASE" ]; then
show_options 1
fi
IMAGE_BASENAME=$(basename "${IMAGE_SET}")
IMAGE_DIRNAME=$(dirname "${IMAGE_SET}")
METADATA_PATH=$DOWNLOAD_BASE/$IMAGE_BASENAME
CACHE_URL="$TRIPLEO_ROOT/diskimage-builder/elements/cache-url/bin/cache-url"
function image_exists() {
if [ ! -e "${IMAGE_SET}" ]; then
return 1
fi
IMG_LIST=$(jq '.images' ${IMAGE_SET})
for pos in $(seq 0 $(($(jq length <<< $IMG_LIST) -1))); do
COMPONENT_NAME=$(jq -r ".[$pos]" <<< $IMG_LIST)
if [ ! -e "${IMAGE_DIRNAME}"/${COMPONENT_NAME} ]; then
return 1
fi
done
return 0
}
if image_exists && [ -n "$USE_CACHE" ]; then
exit 0
fi
set +e
"${CACHE_URL}" ${METADATA_PATH} ${IMAGE_SET}
RES=$?
set -e
if [ 0 -ne "$RES" -a 44 -ne "$RES" ]; then
exit $RES
elif [ 44 -ne "$RES" ]; then
IMG_LIST=$(jq '.images' ${IMAGE_SET})
for pos in $(seq 0 $(( $(jq length <<< $IMG_LIST) -1 )) ); do
COMPONENT_NAME=$(jq -r ".[$pos]" <<< $IMG_LIST)
"${CACHE_URL}" ${DOWNLOAD_BASE}/${COMPONENT_NAME} "${IMAGE_DIRNAME}"/${COMPONENT_NAME}
done
exit 0
else
echo "Failed to retrieve the IMAGE_SET metadata file."
exit 1
fi
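The cache check above keys off an IMAGE_SET metadata file whose `images` list names the component images, all of which must exist next to it locally. A minimal Python sketch of that check (paths and image names here are illustrative):

```python
import json
import os
import tempfile

# Sketch of acquire-image's cache test: the IMAGE_SET metadata names the
# component images, and the cached copy is usable only when every one of
# them exists alongside the metadata file.
def image_exists(image_set_path):
    if not os.path.exists(image_set_path):
        return False
    with open(image_set_path) as f:
        images = json.load(f)["images"]
    dirname = os.path.dirname(image_set_path)
    return all(os.path.exists(os.path.join(dirname, name)) for name in images)

workdir = tempfile.mkdtemp()
meta = os.path.join(workdir, "overcloud")
with open(meta, "w") as f:
    json.dump({"images": ["image1_control.qcow2", "image2_compute.qcow2"]}, f)
print(image_exists(meta))  # False: component images not downloaded yet
for name in ("image1_control.qcow2", "image2_compute.qcow2"):
    open(os.path.join(workdir, name), "w").close()
print(image_exists(meta))  # True: cache is complete
```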


@ -1,60 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] LISTFILE"
echo
echo "Ensure that every user listed in LISTFILE has an admin account."
echo "Admin accounts are made by creating a user \$USER-admin for every"
echo "user in LISTFILE."
echo
echo "Options:"
echo " -h -- this help"
echo
exit $1
}
TEMP=`getopt -o h -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
LISTFILE=${1:-''}
EXTRA_ARGS=${2:-''}
if [ -z "$LISTFILE" -o -n "$EXTRA_ARGS" ]; then
show_options 1
fi
assert-users -t admin <(awk 'BEGIN { FS = "," }{ print $1 "-admin," $2 "," $3 }' < $LISTFILE)
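The awk one-liner above rewrites each LISTFILE row (usercode,email,name) before piping it to assert-users; a sketch of the same transform:

```python
# Equivalent of: awk 'BEGIN { FS = "," }{ print $1 "-admin," $2 "," $3 }'
# Each row is usercode,email,name; only the usercode gains an -admin suffix.
# (awk would drop any fields past the third; this keeps the rest of the row.)
def admin_rows(lines):
    rows = []
    for line in lines:
        usercode, email, name = line.rstrip("\n").split(",", 2)
        rows.append("%s-admin,%s,%s" % (usercode, email, name))
    return rows

print(admin_rows(["alice,alice@example.com,Alice Admin"]))
# ['alice-admin,alice@example.com,Alice Admin']
```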


@ -1,101 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Ensure that a given user exists."
echo
echo "Options:"
echo " -h -- this help"
echo " -e -- email"
echo " -n -- name"
echo " -t -- tenant"
echo " -u -- usercode"
echo
exit $1
}
EMAIL=''
NAME=''
TENANT=''
USERCODE=''
TEMP=`getopt -o hu:e:n:t: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options 0;;
-e) EMAIL=$2; shift 2 ;;
-n) NAME=$2; shift 2 ;;
-t) TENANT=$2; shift 2 ;;
-u) USERCODE=$2; shift 2 ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
EXTRA_ARGS=${1:-''}
if [ -z "$EMAIL" -o -z "$NAME" -o -z "$TENANT" -o -z "$USERCODE" -o -n "$EXTRA_ARGS" ]; then
show_options 1
fi
echo "Checking for user $USERCODE"
#TODO: fix after bug 1392035 in the keystone client library
USER_ID=$(openstack user list | awk '{print tolower($0)}' |grep " ${USERCODE,,} " |awk '{print$2}')
if [ -z "$USER_ID" ]; then
PASSWORD=''
if [ -e os-asserted-users ]; then
PASSWORD=$(awk "\$1==\"$USERCODE\" { print \$2 }" < os-asserted-users)
fi
if [ -z "$PASSWORD" ]; then
PASSWORD=$(os-make-password)
echo "$USERCODE $PASSWORD" >> os-asserted-users
fi
USER_ID=$(openstack user create --pass "$PASSWORD" \
--email "$EMAIL" $USERCODE | awk '$2=="id" {print $4}')
fi
#TODO: fix after bug 1392035 in the keystone client library
TENANT_ID=$(openstack project list | awk '{print tolower($0)}' |grep " ${TENANT,,} " |awk '{print$2}')
if [ -z "$TENANT_ID" ]; then
TENANT_ID=$(openstack project create $TENANT | awk '$2=="id" {print $4}')
fi
if [ "$TENANT" = "admin" ]; then
ROLE="admin"
else
ROLE="_member_"
fi
ROLE_ID=$(openstack role show $ROLE | awk '$2=="id" {print $4}')
if openstack user role list --project $TENANT_ID $USER_ID | grep "${ROLE_ID}.*${ROLE}.*${USER_ID}" ; then
echo "User already has role '$ROLE'"
else
openstack role add --project $TENANT_ID --user $USER_ID $ROLE_ID
fi
echo "User $USERCODE configured."
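The password bookkeeping above reuses a password recorded in os-asserted-users for the usercode, and otherwise mints and records a new one. A sketch of that logic, with a stand-in generator where the script calls os-make-password:

```python
import secrets

# Sketch of assert-user's password reuse: os-asserted-users holds one
# "usercode password" pair per line; a recorded password wins, otherwise
# a fresh one is generated and appended. secrets.token_hex is only a
# stand-in for os-make-password.
def assert_password(records, usercode):
    for line in records:
        fields = line.split()
        if fields and fields[0] == usercode:
            return fields[1], records
    password = secrets.token_hex(8)
    return password, records + ["%s %s" % (usercode, password)]

records = ["alice s3cretpw"]
pw, records = assert_password(records, "alice")
print(pw)  # s3cretpw (reused, nothing appended)
pw, records = assert_password(records, "bob")
print(len(records))  # 2: bob's new password was appended
```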


@ -1,69 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] LISTFILE"
echo
echo "Ensure that every user listed in LISTFILE has a cloud account."
echo
echo "Options:"
echo " -h -- this help"
echo " -t -- Choose a tenant. Defaults to the usercode"
echo
exit $1
}
TENANT=''
TEMP=`getopt -o ht: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options 0;;
-t) TENANT=$2; shift 2 ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
LISTFILE=${1:-''}
EXTRA_ARGS=${2:-''}
if [ -z "$LISTFILE" -o -n "$EXTRA_ARGS" ]; then
show_options 1
fi
while IFS=, read -ra DETAILS; do
if [ -z "$TENANT" ] ; then
USER_TENANT=${DETAILS[0]}
else
USER_TENANT=$TENANT
fi
assert-user -u ${DETAILS[0]} -e ${DETAILS[1]} -t $USER_TENANT -n "${DETAILS[2]}"
done < $LISTFILE


@ -1,228 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
PATH=$PATH:/usr/sbin:/sbin
# Some defaults
ARCH=i386
export IMAGE_NAME=seed
export DIB_IMAGE_SIZE=30
BUILD_ONLY=
CREATE_IMAGE=yes
ALWAYS_ELEMENTS="vm cloud-init-nocloud local-config boot-stack seed-stack-config nova-ironic"
DIB_COMMON_ELEMENTS=${DIB_COMMON_ELEMENTS:-''}
SEED_DIB_EXTRA_ARGS=${SEED_DIB_EXTRA_ARGS:-'rabbitmq-server'}
if [ "${USE_MARIADB:-}" = 1 ] ; then
SEED_DIB_EXTRA_ARGS="$SEED_DIB_EXTRA_ARGS mariadb-rpm"
fi
if [[ "$DIB_COMMON_ELEMENTS $SEED_DIB_EXTRA_ARGS" != *enable-serial-console* ]]; then
SEED_DIB_EXTRA_ARGS="$SEED_DIB_EXTRA_ARGS remove-serial-console"
fi
export VM_IP=""
function show_options {
echo "Usage: $SCRIPT_NAME [options] <element> [<element> ...]"
echo
echo "Create and start a VM by combining the specified elements"
echo "with common default elements, assuming many things about"
echo "the local operating environment."
echo "See ../scripts/devtest.sh"
echo
echo "The environment variable TE_DATAFILE must be set, pointing at a test"
echo "environment JSON file. If seed-ip is present in the JSON then that is"
echo "used for the VM IP address, otherwise it is discovered by probing the"
echo "ARP table and then saved back into the JSON file."
echo
echo "If host-ip (and possibly ssh-user) is set in the JSON then those details"
echo "are used to construct a remote libvirt URL and spawn the VM remotely."
echo "Note that seed-ip *must* be present when doing this. When spawning remotely"
echo "the image is streamed to that host over SSH (via a custom copyseed"
echo "command), and a remote virsh URI is used. SSH access able to write to"
echo "/var/lib/libvirt/images/, permission to chattr, and the ability to run"
echo "virsh as the selected user are requirements."
echo
echo "Options:"
echo " -a i386|amd64 -- set the architecture of the VM (i386)"
echo " --build-only -- build the needed images but don't deploy them."
echo " -o name -- set the name of the VM and image file"
echo " (seed) - must match that from setup-seed-vm"
echo " -s size -- set the image size (30 GB)"
echo " -c -- use an image cache for the seed image"
echo " -i -- image file was built elsewhere, don't"
echo " create"
echo
exit $1
}
TEMP=$(getopt -o hcia:o:s: -l build-only -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-a) export ARCH=$2; shift 2 ;;
--build-only) BUILD_ONLY="1"; shift 1;;
-o) export IMAGE_NAME=$2; shift 2 ;;
-s) export DIB_IMAGE_SIZE=$2; shift 2 ;;
-h) show_options 0;;
-c) export IMAGE_CACHE_USE=1; shift ;;
-i) export CREATE_IMAGE=; shift ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
for arg; do
SEED_DIB_EXTRA_ARGS="$SEED_DIB_EXTRA_ARGS $arg";
done
SEED_ARCH=
case $ARCH in
i386) SEED_ARCH='i686'; ;;
amd64|x86_64) SEED_ARCH='x86_64'; ;;
*) echo "Unsupported arch $ARCH!" ; exit 1 ;;
esac
if [ -z "$TE_DATAFILE" ]; then
echo "Error: TE_DATAFILE not set."
show_options 1
fi
HOST_IP=$(os-apply-config -m $TE_DATAFILE --key host-ip --type netaddress --key-default '')
REMOTE_OPERATIONS=$(os-apply-config -m $TE_DATAFILE --key remote-operations --type raw --key-default '')
if [ -n "$HOST_IP" ]; then
SSH_USER=$(os-apply-config -m $TE_DATAFILE --key ssh-user --type raw --key-default '')
if [ -n "$SSH_USER" ]; then
SSH_USER="${SSH_USER}@"
fi
VM_HOST=${SSH_USER}${HOST_IP}
echo $VM_HOST
fi
ENV_NUM=$(os-apply-config -m $TE_DATAFILE --key env-num --type int --key-default 0)
if [ $CREATE_IMAGE ]; then
ELEMENTS_PATH=${ELEMENTS_PATH:-$SCRIPT_HOME/../../tripleo-image-elements/elements}
export ELEMENTS_PATH
DIB_PATH=${DIB_PATH:-$SCRIPT_HOME/../../diskimage-builder}
DIB=$(which disk-image-create || echo $DIB_PATH/bin/disk-image-create)
if [ ! -e $DIB ]; then
echo "Error: unable to locate disk-image-create"
exit 1
fi
fi
# Shutdown any running VM - writing to the image file of a running VM is a
# great way to get a corrupt image file.
if [ -z "$BUILD_ONLY" ]; then
if [ -n "$REMOTE_OPERATIONS" ]; then
ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no ${VM_HOST} virsh destroy ${IMAGE_NAME}_$ENV_NUM || true
# Ensure any existing VM's in the test environment are shutdown, so devtest always starts at a consistent point.
for NUM in $(seq 0 14) ; do
ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no ${VM_HOST} virsh destroy baremetalbrbm${ENV_NUM}_${NUM} || true
done
else
virsh destroy $IMAGE_NAME || true
for NUM in $(seq 0 14) ; do
virsh destroy baremetal_${NUM} || true
done
fi
fi
if [ $CREATE_IMAGE ]; then
IMAGE_CACHE_FILE=$TRIPLEO_ROOT/seed
# Create the image if it doesn't exist or we're not using image cache
if [ ! -e "$IMAGE_CACHE_FILE.qcow2" -o -z "$IMAGE_CACHE_USE" ] ; then
$DIB -x -u -a $ARCH $ALWAYS_ELEMENTS $DIB_COMMON_ELEMENTS $SEED_DIB_EXTRA_ARGS -o $IMAGE_CACHE_FILE 2>&1 | tee $IMAGE_CACHE_FILE.log
else
echo "Using cached seed image : $IMAGE_CACHE_FILE.qcow2"
fi
if [ -n "$BUILD_ONLY" ]; then
exit 0
fi
if [ -n "$REMOTE_OPERATIONS" ]; then
# rsync could be used here which may have been more efficient but using a
# custom command "copyseed" should be easier to restrict. Also we can
# take multiple steps on the server in this single command meaning we
# don't have to open up the ssh access even further.
dd if=$IMAGE_CACHE_FILE.qcow2 | ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no ${VM_HOST} copyseed $ENV_NUM
else
sudo cp $IMAGE_CACHE_FILE.qcow2 /var/lib/libvirt/images/$IMAGE_NAME.qcow2
sudo chattr +C /var/lib/libvirt/images/$IMAGE_NAME.qcow2 || true
fi
fi
function poll_vm {
if [ -z "$VM_IP" ]; then
MAC=$(sudo virsh dumpxml $IMAGE_NAME | grep "mac address" | head -1 | awk -F "'" '{print $2}')
VM_IP=$(arp -n | grep $MAC | awk '{print $1}')
fi
[ -z "$VM_IP" ] && return 1
ping -c 1 $VM_IP || return 1
return 0
}
export -f poll_vm
if [ -n "$REMOTE_OPERATIONS" ]; then
ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no ${VM_HOST} virsh start ${IMAGE_NAME}_$ENV_NUM
VM_IP=$(os-apply-config -m $TE_DATAFILE --key seed-ip --type netaddress --key-default '')
else
sudo virsh start $IMAGE_NAME
fi
echo "Waiting for $IMAGE_NAME VM to boot."
wait_for -w 100 --delay 1 -- poll_vm
poll_vm
echo
echo "Booted. Found IP: $VM_IP."
# hostkeys are generated by cloud-init as part of the boot sequence - can
# take a few seconds.
echo "Waiting for SSH hostkey."
wait_for -w 30 --delay 1 -- "ssh-keyscan $VM_IP 2>&1 | grep \"$VM_IP.*OpenSSH\""
# Remove the hostkey, new instance == new key.
ssh-keygen -R $(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.ip --type netaddress --key-default '192.0.2.1') || true
echo "element(s): $ALWAYS_ELEMENTS $DIB_COMMON_ELEMENTS $SEED_DIB_EXTRA_ARGS booted and ready."
echo "SEED_IP=$VM_IP"
echo
echo "to login: ssh root@$VM_IP"
NEW_JSON=$(jq '.["seed-ip"]="'${VM_IP}'"' $TE_DATAFILE)
echo "$NEW_JSON" > $TE_DATAFILE
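The final step above persists the discovered seed IP back into the TE_DATAFILE JSON, the equivalent of `jq '.["seed-ip"]="$VM_IP"'`. A sketch of that round-trip (the IP and keys shown are illustrative):

```python
import json

# Sketch of boot-seed-vm's write-back: load the test-environment JSON,
# set the "seed-ip" key, and serialise it again so later runs can reuse
# the address instead of probing the ARP table.
def record_seed_ip(datafile_text, vm_ip):
    data = json.loads(datafile_text)
    data["seed-ip"] = vm_ip
    return json.dumps(data)

updated = record_seed_ip('{"env-num": 0, "host-ip": "192.0.2.10"}', "192.0.2.5")
print(updated)
```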


@ -1,144 +0,0 @@
#!/usr/bin/env python
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import logging
import os
import subprocess
import sys
import yaml
logger = logging.getLogger(__name__)
env = os.environ.copy()
# YAML FILE FORMAT
# disk_images:
# -
# imagename: overcloud-compute
# arch: amd64
# type: qcow2
# elements:
# - overcloud-compute
# packages:
# - vim
# options:
def parse_opts(argv):
parser = argparse.ArgumentParser(
description='Create a set of disk images using a YAML/JSON config file'
' format.')
parser.add_argument('-c', '--config-file', metavar='CONFIG_FILE',
help="""path to the configuration file.""",
default='disk_images.yaml')
parser.add_argument('-o', '--output-directory', metavar='DIRECTORY',
help="""output directory for images. """
"""Defaults to $TRIPLEO_ROOT""",
default=env.get('TRIPLEO_ROOT'))
parser.add_argument('-s', '--skip', action='store_true',
help="""skip build if cached image exists. """
"""Or set USE_CACHE ENV variable to 1.""",
default=False)
parser.add_argument('-d', '--debug', dest="debug", action='store_true',
help="Print debugging output.", required=False)
parser.add_argument('-v', '--verbose', dest="verbose",
action='store_true', help="Print verbose output.",
required=False)
opts = parser.parse_args(argv[1:])
return opts
def configure_logger(verbose=False, debug=False):
LOG_FORMAT = '[%(asctime)s] [%(levelname)s] %(message)s'
DATE_FORMAT = '%Y/%m/%d %I:%M:%S %p'
log_level = logging.WARN
if debug:
log_level = logging.DEBUG
elif verbose:
log_level = logging.INFO
logging.basicConfig(format=LOG_FORMAT, datefmt=DATE_FORMAT,
level=log_level)
def main(argv=sys.argv):
opts = parse_opts(argv)
configure_logger(opts.verbose, opts.debug)
logger.info('Using config file at: %s' % opts.config_file)
if os.path.exists(opts.config_file):
with open(opts.config_file) as cf:
disk_images = yaml.safe_load(cf).get("disk_images")
logger.debug('disk_images JSON: %s' % str(disk_images))
else:
logger.error('No config file exists at: %s' % opts.config_file)
return 1
if not opts.output_directory:
logger.error('Please specify --output-directory.')
return 1
for image in disk_images:
arch = image.get('arch', 'amd64')
img_type = image.get('type', 'qcow2')
skip_base = image.get('skip_base', 'false')
docker_target = image.get('docker_target')
imagename = image.get('imagename')
logger.info('imagename: %s' % imagename)
image_path = '%s/%s.%s' % (opts.output_directory, imagename, img_type)
if opts.skip or env.get('USE_CACHE', '0') == '1':
logger.info('looking for image at path: %s' % image_path)
if os.path.exists(image_path):
logger.warning('Image file exists for image name: %s' % imagename)
logger.warning('Skipping image build')
continue
elements = image.get('elements', [])
options = image.get('options', [])
packages = image.get('packages', [])
cmd = ['disk-image-create', '-a', arch, '-o', image_path, '-t',
img_type]
if packages:
cmd.append('-p')
cmd.append(','.join(packages))
if docker_target:
cmd.append('--docker-target')
cmd.append(docker_target)
if skip_base in (True, 'true'):
cmd.append('-n')
if options:
cmd.extend(options)
# NODE_DIST provides a distro specific element hook
node_dist = image.get('distro') or env.get('NODE_DIST')
if node_dist:
cmd.append(node_dist)
cmd.extend(elements)
logger.info('Running %s' % cmd)
retval = subprocess.call(cmd)
if retval != 0:
logger.error('Failed to build image: %s' % imagename)
return 1
if __name__ == '__main__':
sys.exit(main(sys.argv))
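Each disk_images entry from the YAML above maps onto a single disk-image-create invocation. A sketch of that mapping, mirroring the flag handling in main() (the entry values here are illustrative):

```python
# Sketch of build-images' command assembly for one disk_images entry:
# -a/-o/-t come from arch, imagename and type, packages become a single
# -p flag, then options and elements are appended verbatim.
image = {
    "imagename": "overcloud-compute",
    "arch": "amd64",
    "type": "qcow2",
    "elements": ["overcloud-compute"],
    "packages": ["vim"],
}
output_directory = "/tmp"
image_path = "%s/%s.%s" % (output_directory, image["imagename"], image["type"])
cmd = ["disk-image-create", "-a", image["arch"], "-o", image_path,
       "-t", image["type"]]
if image.get("packages"):
    cmd += ["-p", ",".join(image["packages"])]
cmd += image.get("options", [])
cmd += image["elements"]
print(" ".join(cmd))
```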


@ -1,106 +0,0 @@
#!/usr/bin/env bash
#
# Copyright 2013 Red Hat
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
SCRIPT_NAME=$(basename $0)
LIBVIRT_VOL_POOL=${LIBVIRT_VOL_POOL:-"default"}
function show_options {
echo "Usage: $SCRIPT_NAME [-n NUM]"
echo
echo "Cleanup vm state left behind by previous runs"
echo
echo " -b -- Baremetal bridge name(s)."
echo " The create-nodes script names nodes and"
echo " volumes based on the attached"
echo " bridge name(s). This parameter provides"
echo " a way to cleanup nodes attached to the"
echo " associated bridge name(s). NOTE: when"
echo " cleaning up environments with multiple"
echo " bridges all bridge names must be"
echo " specified."
echo " -n -- Test environment number to clean up."
echo " -a -- Clean up all environments."
echo " Will delete all libvirt defined domains"
echo " that start with baremetal* and seed*"
echo " and their storage"
echo
echo "If provided, NUM is the environment number to be cleaned up."
echo "If not provided, the default environment will be cleaned."
echo ""
echo "If both baremetal bridge names and NUM (-n) are provided the NUM"
echo "is appended to the bridge names when searching for VMs to delete."
exit 1
}
NUM=
BRIDGE_NAMES=brbm
CLEANUP_ALL=
TEMP=$(getopt -o h,b:,n:,a -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
show_options;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options ;;
-b) BRIDGE_NAMES="$2" ; shift 2 ;;
-n) NUM="$2" ; shift 2 ;;
-a) CLEANUP_ALL=1 ; shift ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; show_options ;;
esac
done
SEED_NAME=seed
BAREMETAL_PREFIX="baremetal"
NUMBERED_BRIDGE_NAMES=
if [ -n "$NUM" ]; then
SEED_NAME="seed_${NUM}"
fi
for NAME in $BRIDGE_NAMES; do
NUMBERED_BRIDGE_NAMES="$NUMBERED_BRIDGE_NAMES$NAME${NUM}_"
done
# remove the last underscore
NUMBERED_BRIDGE_NAMES=${NUMBERED_BRIDGE_NAMES%_}
if [ -z "$CLEANUP_ALL" ]; then
BAREMETAL_PREFIX="baremetal${NUMBERED_BRIDGE_NAMES}"
fi
for NAME in $(sudo virsh list --name | grep "^\($SEED_NAME\|${BAREMETAL_PREFIX}\)"); do
sudo virsh destroy $NAME
done
for NAME in $(sudo virsh list --name --all | grep "^\($SEED_NAME\|${BAREMETAL_PREFIX}\)"); do
if [ "$NAME" = "$SEED_NAME" ]; then
# handle seeds differently since their storage is not managed by libvirt
sudo virsh undefine --managed-save $NAME
sudo rm /var/lib/libvirt/images/$NAME.qcow2
else
sudo virsh undefine --managed-save --remove-all-storage $NAME
fi
done
for NAME in $(sudo virsh vol-list $LIBVIRT_VOL_POOL 2>/dev/null | grep /var/ | awk '{print $1}' | grep "^\($SEED_NAME\|${BAREMETAL_PREFIX}\)"); do
sudo virsh vol-delete --pool $LIBVIRT_VOL_POOL $NAME
done
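The VM-name prefix the loops above match against is built from the bridge names: each name gets the environment number appended, the results are joined with underscores, and the whole thing is prefixed with "baremetal". A sketch of that naming rule:

```python
# Sketch of cleanup-env's prefix construction, equivalent to the
# NUMBERED_BRIDGE_NAMES accumulation plus the trailing-underscore strip.
def baremetal_prefix(bridge_names, num=""):
    return "baremetal" + "_".join("%s%s" % (name, num) for name in bridge_names)

print(baremetal_prefix(["brbm"]))              # baremetalbrbm
print(baremetal_prefix(["brbm"], 2))           # baremetalbrbm2
print(baremetal_prefix(["brbm", "brbm1"], 3))  # baremetalbrbm3_brbm13
```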


@ -1,159 +0,0 @@
#!/usr/bin/env python
import argparse
import math
import os.path
import random
import libvirt
templatedir = os.path.join(
os.path.dirname(
os.path.dirname(
os.path.abspath(__file__))), 'templates')
MAX_NUM_MACS = math.trunc(0xff/2)
def generate_baremetal_macs(count=1):
"""Generate an Ethernet MAC address suitable for baremetal testing."""
# NOTE(dprince): We generate our own bare metal MAC address's here
# instead of relying on libvirt so that we can ensure the
# locally administered bit is set low. (The libvirt default is
# to set the 2nd MSB high.) This effectively allows our
# fake baremetal VMs to more accurately behave like real hardware
# and fixes issues with bridge/DHCP configurations which rely
# on the fact that bridges assume the MAC address of the lowest
# attached NIC.
# MACs generated for a given machine will also be in sequential
# order, which matches how most BM machines are laid out as well.
# Additionally we increment each MAC by two places.
macs = []
if count > MAX_NUM_MACS:
raise ValueError("The MAX num of MACS supported is %i." % MAX_NUM_MACS)
base_nums = [0x00,
random.randint(0x00, 0xff),
random.randint(0x00, 0xff),
random.randint(0x00, 0xff),
random.randint(0x00, 0xff)]
base_mac = ':'.join(map(lambda x: "%02x" % x, base_nums))
start = random.randint(0x00, 0xff)
if (start + (count * 2)) > 0xff:
# leave room to generate macs in sequence
start = 0xff - count * 2
for num in range(0, count*2, 2):
mac = start + num
macs.append(base_mac + ":" + ("%02x" % mac))
return macs
def main():
parser = argparse.ArgumentParser(
description="Configure a kvm virtual machine for the seed image.")
parser.add_argument('--name', default='seed',
help='the name to give the machine in libvirt.')
parser.add_argument('--image',
help='Use a custom image file (must be qcow2).')
parser.add_argument('--diskbus', default='sata',
help='Choose an alternate bus type for the disk')
parser.add_argument('--baremetal-interface', nargs='+', default=['brbm'],
help='The interface which bare metal nodes will be connected to.')
parser.add_argument('--engine', default='kvm',
help='The virtualization engine to use')
parser.add_argument('--arch', default='i686',
help='The architecture to use')
parser.add_argument('--memory', default='2097152',
help="Maximum memory for the VM in KB.")
parser.add_argument('--cpus', default='1',
help="CPU count for the VM.")
parser.add_argument('--bootdev', default='hd',
help="What boot device to use (hd/network).")
parser.add_argument('--seed', default=False, action='store_true',
help='Create a seed vm with two interfaces.')
parser.add_argument('--ovsbridge', default="",
help='Place the seed public interface on this ovs bridge.')
parser.add_argument('--libvirt-nic-driver', default='virtio',
help='The libvirt network driver to use')
parser.add_argument('--enable-serial-console', action="store_true",
help='Enable a serial console')
parser.add_argument('--uri', default='qemu:///system',
help='The server uri with which to connect.')
args = parser.parse_args()
    with open(templatedir + '/domain.xml', 'r') as f:
source_template = f.read()
imagefile = '/var/lib/libvirt/images/seed.qcow2'
if args.image:
imagefile = args.image
imagefile = os.path.realpath(imagefile)
params = {
'name': args.name,
'imagefile': imagefile,
'engine': args.engine,
'arch': args.arch,
'memory': args.memory,
'cpus': args.cpus,
'bootdev': args.bootdev,
'network': '',
'enable_serial_console': '',
}
if args.image is not None:
params['imagefile'] = args.image
# Configure the bus type for the target disk device
params['diskbus'] = args.diskbus
nicparams = {
'nicdriver': args.libvirt_nic_driver,
'ovsbridge': args.ovsbridge,
}
if args.seed:
if args.ovsbridge:
params['network'] = """
<interface type='bridge'>
<source bridge='%(ovsbridge)s'/>
<virtualport type='openvswitch'/>
<model type='%(nicdriver)s'/>
</interface>""" % nicparams
else:
params['network'] = """
<!-- regular natted network, for access to the vm -->
<interface type='network'>
<source network='default'/>
<model type='%(nicdriver)s'/>
</interface>""" % nicparams
macs = generate_baremetal_macs(len(args.baremetal_interface))
params['bm_network'] = ""
for bm_interface, mac in zip(args.baremetal_interface, macs):
bm_interface_params = {
'bminterface': bm_interface,
'bmmacaddress': mac,
'nicdriver': args.libvirt_nic_driver,
}
params['bm_network'] += """
<!-- bridged 'bare metal' network on %(bminterface)s -->
<interface type='network'>
<mac address='%(bmmacaddress)s'/>
<source network='%(bminterface)s'/>
<model type='%(nicdriver)s'/>
</interface>""" % bm_interface_params
if args.enable_serial_console:
params['enable_serial_console'] = """
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
"""
libvirt_template = source_template % params
    conn = libvirt.open(args.uri)
a = conn.defineXML(libvirt_template)
print ("Created machine %s with UUID %s" % (args.name, a.UUIDString()))
if __name__ == '__main__':
main()
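The NOTE in `generate_baremetal_macs` above is worth making concrete: because the first octet is fixed at `0x00`, the locally-administered bit (bit 1 of the first octet) stays low, and successive MACs differ by exactly two in the last octet. A self-contained re-implementation of that function (a sketch mirroring the original, so the properties can be checked in isolation):

```python
import math
import random

MAX_NUM_MACS = math.trunc(0xff / 2)

def generate_baremetal_macs(count=1):
    """Generate MACs with the locally-administered bit low, spaced by two."""
    if count > MAX_NUM_MACS:
        raise ValueError("The MAX num of MACS supported is %i." % MAX_NUM_MACS)
    # First octet fixed at 0x00 keeps the locally-administered bit clear.
    base_nums = [0x00] + [random.randint(0x00, 0xff) for _ in range(4)]
    base_mac = ':'.join("%02x" % x for x in base_nums)
    start = random.randint(0x00, 0xff)
    if start + count * 2 > 0xff:
        # leave room to generate the whole run in sequence
        start = 0xff - count * 2
    return [base_mac + ":" + "%02x" % (start + num)
            for num in range(0, count * 2, 2)]
```

The even spacing keeps MACs for one machine sequential, which matches how most bare-metal NICs are laid out, while the low U/L bit lets bridges and DHCP treat the fake nodes like real hardware.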


@ -1,77 +0,0 @@
#!/bin/bash
set -eu
CPU=$1
MEM=$(( 1024 * $2 ))
# extra G to allow fuzz for partition table : flavor size and registered size
# need to be different to actual size.
DISK=$3
LIBVIRT_DISK_BUS_TYPE=${LIBVIRT_DISK_BUS_TYPE:-"sata"}
NODE_DISK=$(( $DISK + 1))
case $4 in
i386) ARCH='i686' ;;
amd64|x86_64) ARCH='x86_64' ;;
*) echo "Unsupported arch $4!" ; exit 1 ;;
esac
TOTAL=$(($5 - 1))
SSH_USER=$6
HOSTIP=$7
TE_DATAFILE=$8
BRIDGE_NAMES=${9:-""}
LIBVIRT_NIC_DRIVER=${LIBVIRT_NIC_DRIVER:-"virtio"}
LIBVIRT_VOL_POOL=${LIBVIRT_VOL_POOL:-"default"}
LIBVIRT_VOL_POOL_TARGET=${LIBVIRT_VOL_POOL_TARGET:-"/var/lib/libvirt/images"}
# define the $LIBVIRT_VOL_POOL storage pool if it's not there yet
if ! $(virsh pool-list --all --persistent | grep -q $LIBVIRT_VOL_POOL) ; then
if [ ! -d $LIBVIRT_VOL_POOL_TARGET ]; then
sudo mkdir -p $LIBVIRT_VOL_POOL_TARGET ;
fi
(virsh pool-define-as --name $LIBVIRT_VOL_POOL dir --target $LIBVIRT_VOL_POOL_TARGET ; \
virsh pool-autostart $LIBVIRT_VOL_POOL; virsh pool-start $LIBVIRT_VOL_POOL) >&2
fi
PREALLOC=
if [ "${TRIPLEO_OS_FAMILY:-}" = "debian" ]; then
PREALLOC="--prealloc-metadata"
fi
# Create empty json file if it doesn't exist
[ -s $TE_DATAFILE ] || echo "{}" > $TE_DATAFILE
JSON=$(jq .nodes=[] $TE_DATAFILE)
EXTRAOPTS=
if [[ ${DIB_COMMON_ELEMENTS:-} == *enable-serial-console* ]]; then
EXTRAOPTS="--enable-serial-console"
fi
for idx in $(seq 0 $TOTAL) ; do
vm_name="baremetal${BRIDGE_NAMES// /_}_$idx"
(virsh list --all --name | grep -q "^$vm_name\$") && continue
virsh vol-create-as $LIBVIRT_VOL_POOL $vm_name.qcow2 ${NODE_DISK}G --format qcow2 $PREALLOC >&2
volume_path=$(virsh vol-path --pool $LIBVIRT_VOL_POOL $vm_name.qcow2)
# Pre-touch the VM to set +C, as it can only be set on empty files.
sudo touch "$volume_path"
sudo chattr +C "$volume_path" || true
BAREMETAL_INTERFACE=
if [ -n "$BRIDGE_NAMES" ]; then
BAREMETAL_INTERFACE="--baremetal-interface $BRIDGE_NAMES"
fi
configure-vm $EXTRAOPTS \
--bootdev network \
--name $vm_name \
--image "$volume_path" \
--diskbus $LIBVIRT_DISK_BUS_TYPE \
--arch $ARCH \
--cpus $CPU \
--memory $MEM \
--libvirt-nic-driver $LIBVIRT_NIC_DRIVER $BAREMETAL_INTERFACE >&2
mac=$(get-vm-mac $vm_name)
JSON=$(jq ".nodes=(.nodes + [{mac:[\"$mac\"], cpu:\"$CPU\", memory:\"$2\", disk:\"$DISK\", arch:\"$4\", pm_user:\"$SSH_USER\", pm_addr:\"$HOSTIP\", pm_password:.[\"ssh-key\"], pm_type:\"pxe_ssh\"}])" <<< $JSON)
done
jq . <<< $JSON > $TE_DATAFILE
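The jq pipeline above appends one node record per VM to the `nodes` list in the test-environment JSON, reading `pm_password` from the datafile's `ssh-key` entry. A rough Python equivalent of that record-building step (field names are taken from the jq expression; the key and addresses below are illustrative values only):

```python
import json

def add_node(te_data, mac, cpu, memory, disk, arch, ssh_user, host_ip):
    """Append a pxe_ssh node record, as the jq expression above does."""
    te_data.setdefault("nodes", []).append({
        "mac": [mac],
        "cpu": str(cpu),
        "memory": str(memory),
        "disk": str(disk),
        "arch": arch,
        "pm_user": ssh_user,
        "pm_addr": host_ip,
        # jq reads .["ssh-key"] from the datafile for pm_password
        "pm_password": te_data.get("ssh-key"),
        "pm_type": "pxe_ssh",
    })
    return te_data

# Start from an empty datafile, as `echo "{}" > $TE_DATAFILE` does.
te = json.loads("{}")
te["ssh-key"] = "dummy-key"  # illustrative stand-in for the real key
add_node(te, "00:aa:bb:cc:dd:02", 1, 2048, 20, "x86_64",
         "stack", "192.168.122.1")
```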


@ -1,387 +0,0 @@
#!/bin/bash
#
# Demo script for Tripleo - the dev/test story.
# This can be run for CI purposes, by passing --trash-my-machine to it.
# Without that parameter, the script is a no-op.
# Set PS4 as early as possible if it is still at the default, so that
# we have a useful trace output for everything when running devtest.sh
# with bash -x ./devtest.sh
if [ "$PS4" = "+ " ]; then
export PS4='$(basename ${BASH_SOURCE})@${LINENO}: '
fi
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Test the core TripleO story."
echo
echo "Options:"
echo " --trash-my-machine -- make nontrivial destructive changes to the machine."
echo " For details read the source."
echo " -c -- re-use existing source/images if they exist."
echo " --existing-environment -- use an existing test environment. The JSON file"
echo " for it may be overridden via the TE_DATAFILE"
echo " environment variable."
echo " --bm-networks NETFILE -- You are supplying your own network layout."
echo " The schema for baremetal-network can be found in"
echo " the devtest_setup documentation."
echo
echo " --nodes NODEFILE -- You are supplying your own list of hardware."
echo " The schema for nodes can be found in the devtest_setup"
echo " documentation."
echo " --no-undercloud -- Use the seed as the baremetal cloud to deploy the"
echo " overcloud from."
echo " --build-only -- Builds images but doesn't attempt to run them."
echo " --no-mergepy -- Use the standalone Heat templates (default)."
echo " --debug-logging -- Enable debug logging in the undercloud and overcloud."
echo " This enables build time debug logs by setting the"
echo " OS_DEBUG_LOGGING env var and also sets the Debug"
echo " heat parameter."
echo " --heat-env-undercloud ENVFILE"
echo " -- heat environment file for the undercloud."
echo " --heat-env-overcloud ENVFILE"
echo " -- heat environment file for the overcloud."
echo
echo "Note that this script just chains devtest_variables, devtest_setup,"
echo "devtest_testenv, devtest_ramdisk, devtest_seed, devtest_undercloud,"
echo "devtest_overcloud, devtest_end. If you want to run fewer than all of them, just"
echo "run the steps you want in order after sourcing ~/.devtestrc and"
echo "devtest_variables.sh"
echo
exit $1
}
BUILD_ONLY=
DEBUG_LOGGING=
NODES_ARG=
NO_UNDERCLOUD=
NETS_ARG=
CONTINUE=
HEAT_ENV_UNDERCLOUD=
HEAT_ENV_OVERCLOUD=
USE_CACHE=0
export TRIPLEO_CLEANUP=1
DEVTEST_START=$(date +%s) #nodocs
TEMP=$(getopt -o h,c -l build-only,no-mergepy,debug-logging,existing-environment,help,trash-my-machine,nodes:,bm-networks:,no-undercloud,heat-env-overcloud:,heat-env-undercloud: -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
--build-only) BUILD_ONLY=--build-only; shift 1;;
--no-mergepy)
USE_MERGEPY=0
echo "Warning: --no-mergepy is the default now, option is DEPRECATED" >&2
shift 1
;;
--debug-logging)
DEBUG_LOGGING=--debug-logging
export OS_DEBUG_LOGGING="1"
shift 1
;;
--trash-my-machine) CONTINUE=--trash-my-machine; shift 1;;
--existing-environment) TRIPLEO_CLEANUP=0; shift 1;;
--nodes) NODES_ARG="--nodes $2"; shift 2;;
--bm-networks) NETS_ARG="--bm-networks $2"; shift 2;;
--no-undercloud) NO_UNDERCLOUD="true"; shift 1;;
--heat-env-undercloud) HEAT_ENV_UNDERCLOUD="--heat-env $2"; shift 2;;
--heat-env-overcloud) HEAT_ENV_OVERCLOUD="--heat-env $2"; shift 2;;
-c) USE_CACHE=1; shift 1;;
-h|--help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
if [ -z "$CONTINUE" ]; then
echo "Not running - this script is destructive and requires --trash-my-machine to run." >&2
exit 1
fi
export USE_CACHE
export USE_MERGEPY
# Source environment variables from .devtestrc, allowing defaults to be setup
# specific to users' environments
if [ -e ~/.devtestrc ] ; then
echo "sourcing ~/.devtestrc"
source ~/.devtestrc
fi
### --include
## devtest
## =======
## (Detailed instructions are available below; the overview and
## configuration sections provide background information.)
## Overview:
## * Define a VM that is your seed node
## * Define N VMs to pretend to be your cluster
## * Create a seed VM
## * Create an undercloud
## * Create an overcloud
## * Deploy a sample workload in the overcloud
## * Add environment variables to be included to ~/.devtestrc, e.g. http_proxy
## * Go to town testing deployments on them.
## * For troubleshooting see :doc:`troubleshooting`
## * For generic deployment information see :doc:`deploying`
## This document is extracted from devtest.sh, our automated bring-up story for
## CI/experimentation.
## More details about the TripleO project and its goals can be found in the
## :doc:`README <README>`
## .. tip::
## https://wiki.openstack.org/wiki/TripleO#Notes_for_new_developers contains
## notes on setting up a development environment. It's primarily aimed at
## people who intend to become contributors for tripleo, but many of its
## notes (such as those relating to setting up local mirrors for apt and
## pypi) will probably be helpful for everyone.
## .. note::
## See :ref:`tested_platforms` for an overview of which releases of which
## distros are tested in our CI system. We suggest you read that section
## before proceeding, to make sure you're running on a platform that we have
## extensively tested.
## Permissions
## -----------
## These scripts are designed to be run under your normal user account. The
## scripts make use of sudo when elevated privileges are needed. You will
## either need to run this attended, entering your password when sudo needs
## it, or enable passwordless sudo for your user. Another option is to extend
## the timeout of sudo sessions so that passwordless sudo will be allowed
## enough time on the controlling terminal to complete the devtest run. If
## there are any circumstances where running as a normal user, and not root,
## fails, this is considered a critical bug.
## Sudo
## ~~~~
## In order to set the sudo session timeout higher, add this to /etc/sudoers::
##
## Defaults timestamp_timeout=240 # 4 hours
##
## This will result in 4 hour timeouts for sudo session credentials. To
## reset the timeout run::
##
## sudo -k; sudo -v
##
## In order to set a user to full passwordless operation add this (typically
## near the end of /etc/sudoers)::
##
## username ALL = NOPASSWD: ALL
##
## Initial Checkout
## ----------------
## #. Choose a base location to put all of the source code.
## .. note::
## exports are ephemeral - they will not survive across new shell sessions
## or reboots. If you put these export commands in ``~/.devtestrc``, you
## can simply ``source ~/.devtestrc`` to reload them. Alternatively, you
## can run ``$TRIPLEO_ROOT/tripleo-incubator/scripts/write-tripleorc`` and then
## source the generated tripleorc file.
## ::
## export TRIPLEO_ROOT=~/tripleo
## .. note::
## This will be used by devtest.sh and other scripts to store the
## additional tools, images, packages, tarballs and everything else
## needed by the deployment process. The tripleo-incubator tools must
## be cloned within your ``$TRIPLEO_ROOT``.
## #. Create the directory and clone tripleo-incubator within ``$TRIPLEO_ROOT``
## ::
## mkdir -p $TRIPLEO_ROOT
## cd $TRIPLEO_ROOT
## git clone https://git.openstack.org/openstack/tripleo-incubator
## cd tripleo-incubator
## Optional: stable branch
## -----------------------
## Note that every effort is made to keep the published set of these instructions
## updated for use with only the master branches of the TripleO projects. There is
## **NO** guaranteed stability in master. There is also no guaranteed stable
## upgrade path from release to release or from one stable branch to a later
## stable branch. The stable branches are a point in time and make no
## guarantee about deploying older or newer branches of OpenStack projects
## correctly.
## If you wish to use the stable branches, you should instead checkout and clone
## the stable branch of tripleo-incubator you want, and then build the
## instructions yourself. For instance, to create a local branch named
## ``foo`` based on the upstream branch ``stable/foo``::
## git checkout -b foo origin/stable/foo
## tox -edocs
## # View doc/build/html/devtest.html in your browser and proceed from there
## Next Steps:
## -----------
## When run as a standalone script, devtest.sh runs the following commands
## to configure the devtest environment, bootstrap a seed, deploy under and
## overclouds. Many of these commands are also part of our documentation.
## Readers may choose to either run the commands given here, or instead follow
## the documentation for each command and walk through it step by step to see
## what is going on. This choice can be made on a case by case basis - for
## instance, if bootstrapping is not interesting, run that as devtest does,
## then step into the undercloud setup for granular details of bringing up a
## baremetal cloud.
### --end
#FIXME: This is a little weird. Perhaps we should identify whatever state we're
# accumulating and store it in files or something, rather than using
# source?
### --include
## #. See :doc:`devtest_variables` for documentation. Assuming you're still at
## the root of your checkout::
## source scripts/devtest_variables.sh
source $SCRIPT_HOME/devtest_variables.sh #nodocs
## #. See :doc:`devtest_setup` for documentation.
## $CONTINUE should be set to '--trash-my-machine' to have it execute
## unattended.
## ::
devtest_setup.sh $CONTINUE
## #. See :doc:`devtest_testenv` for documentation. This step creates the
## seed VM, as well as "baremetal" VMs for the under/overclouds. Details
## of the created VMs are written to ``$TE_DATAFILE``.
## .. warning::
## You should only run this step once, the first time the environment
## is being set up. Unless you remove the VMs and need to recreate
## them, you should skip this step on subsequent runs. Running this
## script with existing VMs will result in information about the existing
## nodes being removed from ``$TE_DATAFILE``.
## ::
if [ "$TRIPLEO_CLEANUP" = "1" ]; then #nodocs
#XXX: When updating, also update the header in devtest_testenv.sh #nodocs
devtest_testenv.sh $TE_DATAFILE $NODES_ARG $NETS_ARG
fi #nodocs
## #. See :doc:`devtest_ramdisk` for documentation::
DEVTEST_RD_START=$(date +%s) #nodocs
devtest_ramdisk.sh
DEVTEST_RD_END=$(date +%s) #nodocs
## #. See :doc:`devtest_seed` for documentation. If you are not deploying an
## undercloud, (see below) then you will want to add --all-nodes to your
## invocation of devtest_seed.sh, which will register all your nodes directly
## with the seed cloud.::
## devtest_seed.sh
## export no_proxy=${no_proxy:-},192.0.2.1
## source $TRIPLEO_ROOT/tripleo-incubator/seedrc
### --end
DEVTEST_SD_START=$(date +%s)
if [ -z "$NO_UNDERCLOUD" ]; then
ALLNODES=""
else
ALLNODES="--all-nodes"
fi
devtest_seed.sh $BUILD_ONLY $ALLNODES $DEBUG_LOGGING
DEVTEST_SD_END=$(date +%s)
export no_proxy=${no_proxy:-},$(os-apply-config --type netaddress -m $TE_DATAFILE --key baremetal-network.seed.ip --key-default '192.0.2.1')
if [ -z "$BUILD_ONLY" ]; then
source $TRIPLEO_ROOT/tripleo-incubator/seedrc
fi
### --include
## #. See :doc:`devtest_undercloud` for documentation. The undercloud doesn't
## have to be built - the seed is entirely capable of deploying any
## baremetal workload - but a production deployment would quite probably
## want to have a Heat-deployed (and thus reconfigurable) deployment
## infrastructure layer.
## If you are only building images you won't need to update your no_proxy
## line or source the undercloudrc file.
## ::
## devtest_undercloud.sh $TE_DATAFILE
## export no_proxy=$no_proxy,$(os-apply-config --type raw -m $TE_DATAFILE --key undercloud.endpointhost)
## source $TRIPLEO_ROOT/tripleo-incubator/undercloudrc
### --end
DEVTEST_UC_START=$(date +%s)
if [ -z "$NO_UNDERCLOUD" ]; then
devtest_undercloud.sh $TE_DATAFILE $BUILD_ONLY $DEBUG_LOGGING $HEAT_ENV_UNDERCLOUD
if [ -z "$BUILD_ONLY" ]; then
export no_proxy=$no_proxy,$(os-apply-config --type raw -m $TE_DATAFILE --key undercloud.endpointhost)
source $TRIPLEO_ROOT/tripleo-incubator/undercloudrc
fi
fi
DEVTEST_UC_END=$(date +%s)
### --include
## #. See :doc:`devtest_overcloud` for documentation.
## If you are only building images you won't need to update your no_proxy
## line or source the overcloudrc file.
## ::
## devtest_overcloud.sh
### --end
DEVTEST_OC_START=$(date +%s)
devtest_overcloud.sh $BUILD_ONLY $DEBUG_LOGGING $HEAT_ENV_OVERCLOUD
DEVTEST_OC_END=$(date +%s)
if [ -z "$BUILD_ONLY" ]; then
### --include
export no_proxy=$no_proxy,$(os-apply-config --type raw -m $TE_DATAFILE --key overcloud.endpointhost)
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
fi #nodocs
## #. See :doc:`devtest_end` for documentation::
devtest_end.sh
### --end
DEVTEST_END=$(date +%s) #nodocs
DEVTEST_PERF_LOG="${TRIPLEO_ROOT}/devtest_perf.log" #nodocs
TIMESTAMP=$(date "+[%Y-%m-%d %H:%M:%S]") #nodocs
echo "${TIMESTAMP} Run comment : ${DEVTEST_PERF_COMMENT:-"No Comment"}" >> ${DEVTEST_PERF_LOG} #nodocs
echo "${TIMESTAMP} Total runtime: $((DEVTEST_END - DEVTEST_START)) s" | tee -a ${DEVTEST_PERF_LOG} #nodocs
echo "${TIMESTAMP} ramdisk : $((DEVTEST_RD_END - DEVTEST_RD_START)) s" | tee -a ${DEVTEST_PERF_LOG} #nodocs
echo "${TIMESTAMP} seed : $((DEVTEST_SD_END - DEVTEST_SD_START)) s" | tee -a ${DEVTEST_PERF_LOG} #nodocs
echo "${TIMESTAMP} undercloud : $((DEVTEST_UC_END - DEVTEST_UC_START)) s" | tee -a ${DEVTEST_PERF_LOG} #nodocs
echo "${TIMESTAMP} overcloud : $((DEVTEST_OC_END - DEVTEST_OC_START)) s" | tee -a ${DEVTEST_PERF_LOG} #nodocs
echo "${TIMESTAMP} DIB_COMMON_ELEMENTS=${DIB_COMMON_ELEMENTS}" >> ${DEVTEST_PERF_LOG} #nodocs


@ -1,36 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
### --include
## devtest_end
## ============
## #. Save your devtest environment::
## write-tripleorc --overwrite $TRIPLEO_ROOT/tripleorc
### --end
if [ -e tripleorc ]; then
echo "Resetting existing $PWD/tripleorc with new values"
tripleorc_path=$PWD/tripleorc
else
tripleorc_path=$TRIPLEO_ROOT/tripleorc
fi
write-tripleorc --overwrite $tripleorc_path
echo "devtest.sh completed."
echo source $tripleorc_path to restore all values
echo ""
### --include
## #. If you need to recover the environment, you can source tripleorc.
## ::
## source $TRIPLEO_ROOT/tripleorc
## The End!
##
### --end


@ -1,716 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
BUILD_ONLY=
DEBUG_LOGGING=
HEAT_ENV=
DISK_IMAGES_CONFIG=${OVERCLOUD_DISK_IMAGES_CONFIG:-''}
COMPUTE_FLAVOR="baremetal"
CONTROL_FLAVOR="baremetal"
BLOCKSTORAGE_FLAVOR="baremetal"
SWIFTSTORAGE_FLAVOR="baremetal"
WITH_STEPS=
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Deploys a KVM cloud via heat."
echo
echo "Options:"
echo " -h -- this help"
echo " -c -- re-use existing source/images if they exist."
echo " --build-only -- build the needed images but don't deploy them."
echo " --no-mergepy -- use the standalone Heat templates (default)."
echo " --with-steps -- Deploy in steps, asking for confirmation between each."
echo " --debug-logging -- Turn on debug logging in the built overcloud."
echo " Sets both OS_DEBUG_LOGGING and the heat Debug parameter."
echo " --heat-env -- path to a JSON heat environment file."
echo " Defaults to \$TRIPLEO_ROOT/overcloud-env.json."
echo " --compute-flavor -- Nova flavor to use for compute nodes."
echo " Defaults to 'baremetal'."
echo " --control-flavor -- Nova flavor to use for control nodes."
echo " Defaults to 'baremetal'."
echo " --block-storage-flavor -- Nova flavor to use for block "
echo " storage nodes."
echo " Defaults to 'baremetal'."
echo " --swift-storage-flavor -- Nova flavor to use for swift "
echo " storage nodes."
echo " Defaults to 'baremetal'."
echo
exit $1
}
TEMP=$(getopt -o c,h -l build-only,no-mergepy,with-steps,debug-logging,heat-env:,compute-flavor:,control-flavor:,block-storage-flavor:,swift-storage-flavor:,help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ] ; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-c) USE_CACHE=1; shift 1;;
--build-only) BUILD_ONLY="1"; shift 1;;
--no-mergepy)
USE_MERGEPY=0
echo "Warning: --no-mergepy is the default now, option is DEPRECATED" >&2
shift 1
;;
--with-steps) WITH_STEPS="1"; shift 1;;
--debug-logging)
DEBUG_LOGGING="1"
export OS_DEBUG_LOGGING="1"
shift 1
;;
--heat-env) HEAT_ENV="$2"; shift 2;;
--disk-images-config) DISK_IMAGES_CONFIG="$2"; shift 2;;
--compute-flavor) COMPUTE_FLAVOR="$2"; shift 2;;
--control-flavor) CONTROL_FLAVOR="$2"; shift 2;;
--block-storage-flavor) BLOCKSTORAGE_FLAVOR="$2"; shift 2;;
--swift-storage-flavor) SWIFTSTORAGE_FLAVOR="$2"; shift 2;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
set -x
### --include
## devtest_overcloud
## =================
## #. Build images. There are two helper scripts which can be
## used to build images. The first method uses environment
## variables to create a specific image for each overcloud
## role. This method works best if you are using tripleo-image-elements
## for configuration (which requires per role image customization).
## See :doc:`devtest_overcloud_images` for documentation.
## This method is currently the default.
## Another option is to make use of the build-images script which
## dynamically creates a set of images using a YAML (or JSON) config
## file (see the build-images script for details and the expected config
## file format). This method is typically preferred when using
## tripleo-puppet-elements (Puppet) for configuration which
## allows the contents and number of images used to deploy an
## overcloud to be more flexibly defined. Example:
## build-images -d -c $DISK_IMAGES_CONFIG
### --end
USE_CACHE=${USE_CACHE:-0}
if [ -n "$DISK_IMAGES_CONFIG" ]; then
USE_CACHE=$USE_CACHE build-images -d -c $DISK_IMAGES_CONFIG
else
USE_CACHE=$USE_CACHE devtest_overcloud_images.sh
# use a default disk images YAML file to load images
DISK_IMAGES_CONFIG="$TRIPLEO_ROOT/tripleo-incubator/scripts/overcloud_disk_images.yaml"
fi
if [ -n "$BUILD_ONLY" ]; then
echo "--build-only is deprecated. Please use devtest_overcloud_images.sh instead."
exit 0
fi
OS_PASSWORD=${OS_PASSWORD:?"OS_PASSWORD is not set. Undercloud credentials are required"}
# Parameters for tripleo-cd - see the tripleo-cd element.
# NOTE(rpodolyaka): retain backwards compatibility by accepting both positional
# arguments and environment variables. Positional arguments
# take precedence over environment variables
NeutronPublicInterface=${1:-${NeutronPublicInterface:-'nic1'}}
NeutronPublicInterfaceIP=${2:-${NeutronPublicInterfaceIP:-''}}
NeutronPublicInterfaceRawDevice=${3:-${NeutronPublicInterfaceRawDevice:-''}}
NeutronPublicInterfaceDefaultRoute=${4:-${NeutronPublicInterfaceDefaultRoute:-''}}
FLOATING_START=${5:-${FLOATING_START:-'192.0.2.45'}}
FLOATING_END=${6:-${FLOATING_END:-'192.0.2.64'}}
FLOATING_CIDR=${7:-${FLOATING_CIDR:-'192.0.2.0/24'}}
ADMIN_USERS=${8:-${ADMIN_USERS:-''}}
USERS=${9:-${USERS:-''}}
STACKNAME=${10:-overcloud}
# If set, the base name for a .crt and .key file for SSL. This will trigger
# inclusion of openstack-ssl in the build and pass the contents of the files to heat.
# Note that PUBLIC_API_URL ($12) must also be set for SSL to actually be used.
SSLBASE=${11:-''}
OVERCLOUD_SSL_CERT=${SSLBASE:+$(<$SSLBASE.crt)}
OVERCLOUD_SSL_KEY=${SSLBASE:+$(<$SSLBASE.key)}
PUBLIC_API_URL=${12:-''}
TE_DATAFILE=${TE_DATAFILE:?"TE_DATAFILE must be defined before calling this script!"}
# A client-side timeout in minutes for creating or updating the overcloud
# Heat stack.
OVERCLOUD_STACK_TIMEOUT=${OVERCLOUD_STACK_TIMEOUT:-60}
# The private instance fixed IP network range
OVERCLOUD_FIXED_RANGE_CIDR=${OVERCLOUD_FIXED_RANGE_CIDR:-"10.0.0.0/8"}
OVERCLOUD_FIXED_RANGE_GATEWAY=${OVERCLOUD_FIXED_RANGE_GATEWAY:-"10.0.0.1"}
OVERCLOUD_FIXED_RANGE_NAMESERVER=${OVERCLOUD_FIXED_RANGE_NAMESERVER:-"8.8.8.8"}
NODE_ARCH=$(os-apply-config -m $TE_DATAFILE --key arch --type raw)
### --include
## #. Load all images into Glance (based on the provided disk images config).
## This captures all the Glance IDs into a Heat env file which maps
## them to the appropriate parameter names. This allows us some
## amount of flexibility in how many images to use for the overcloud
## deployment.
## ::
OVERCLOUD_IMAGE_IDS_ENV=${OVERCLOUD_IMAGE_IDS_ENV:-"${TRIPLEO_ROOT}/overcloud-images-env.yaml"}
load-images -d --remove -c $DISK_IMAGES_CONFIG -o $OVERCLOUD_IMAGE_IDS_ENV
## #. For running an overcloud in VMs, use qemu. For physical machines, set to kvm:
## ::
OVERCLOUD_LIBVIRT_TYPE=${OVERCLOUD_LIBVIRT_TYPE:-"qemu"}
## #. Set the public interface of overcloud network node::
## ::
NeutronPublicInterface=${NeutronPublicInterface:-'nic1'}
## #. Set the NTP server for the overcloud::
## ::
OVERCLOUD_NTP_SERVER=${OVERCLOUD_NTP_SERVER:-''}
## #. If you want to permit VMs access to bare metal networks, you need
## to define flat-networks and bridge mappings in Neutron. We default
## to creating one called datacentre, which we use to grant external
## network access to VMs::
## ::
OVERCLOUD_FLAT_NETWORKS=${OVERCLOUD_FLAT_NETWORKS:-'datacentre'}
OVERCLOUD_BRIDGE_MAPPINGS=${OVERCLOUD_BRIDGE_MAPPINGS:-'datacentre:br-ex'}
OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE=${OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE:-'br-ex'}
OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE=${OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE:-'nic1'}
OVERCLOUD_VIRTUAL_INTERFACE=${OVERCLOUD_VIRTUAL_INTERFACE:-'br-ex'}
## #. If you are using SSL, your compute nodes will need static mappings to your
## endpoint in ``/etc/hosts`` (because we don't do dynamic undercloud DNS yet).
## Set this to the DNS name you're using for your SSL certificate - the heat
## template looks up the controller address within the cloud::
OVERCLOUD_NAME=${OVERCLOUD_NAME:-''}
## #. Detect if we are deploying with a VLAN for API endpoints / floating IPs.
## This is done by looking for a 'public' network in Neutron, and if found
## we pull out the VLAN id and pass that into Heat, as well as using a VLAN
## enabled Heat template.
## ::
if (neutron net-list | grep -q public); then
VLAN_ID=$(neutron net-show public | awk '/provider:segmentation_id/ { print $4 }')
NeutronPublicInterfaceTag="$VLAN_ID"
# This should be in the heat template, but see
# https://bugs.launchpad.net/heat/+bug/1336656
    # note that this will break if there is more than one subnet, as if
# more reason to fix the bug is needed :).
PUBLIC_SUBNET_ID=$(neutron net-show public | awk '/subnets/ { print $4 }')
VLAN_GW=$(neutron subnet-show $PUBLIC_SUBNET_ID | awk '/gateway_ip/ { print $4}')
BM_VLAN_CIDR=$(neutron subnet-show $PUBLIC_SUBNET_ID | awk '/cidr/ { print $4}')
NeutronPublicInterfaceDefaultRoute="${VLAN_GW}"
export CONTROLEXTRA=overcloud-vlan-port.yaml
else
VLAN_ID=
NeutronPublicInterfaceTag=
fi
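The awk one-liners above scrape values out of the neutron CLI's ASCII-table output by printing column 4 of the matching row (the `|` separators count as fields). A small sketch of that field extraction, run against a fabricated table (the sample output below is illustrative, not captured from a real deployment):

```python
def table_field(output, key, column=4):
    """Mimic `awk '/key/ { print $4 }'` over the neutron CLI's table output."""
    for line in output.splitlines():
        fields = line.split()
        if key in line and len(fields) >= column:
            return fields[column - 1]
    return None

# Fabricated `neutron net-show public` output, for illustration only.
sample = """\
+---------------------------+----------+
| Field                     | Value    |
+---------------------------+----------+
| provider:segmentation_id  | 25       |
| subnets                   | abc-123  |
+---------------------------+----------+"""
```

As the comment in the script notes, this parsing (and the template bug it works around) breaks once a network has more than one subnet, since only the first matching row is read.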
## #. TripleO explicitly models key settings for OpenStack, as well as settings
## that require cluster awareness to configure. To configure arbitrary
## additional settings, provide a JSON string with them in the structure
## required by the template ExtraConfig parameter.
OVERCLOUD_EXTRA_CONFIG=${OVERCLOUD_EXTRA_CONFIG:-''}
## #. Choose whether to deploy or update. Use stack-update to update::
## HEAT_OP=stack-create
### --end
if heat stack-show $STACKNAME > /dev/null; then
HEAT_OP=stack-update
if (heat stack-show $STACKNAME | grep -q FAILED); then
echo "Updating a failed stack; this is a new ability and may cause problems." >&2
fi
else
HEAT_OP=stack-create
fi
### --include
## #. Wait for the BM cloud to register BM nodes with the scheduler::
expected_nodes=$(( $OVERCLOUD_COMPUTESCALE + $OVERCLOUD_CONTROLSCALE + $OVERCLOUD_BLOCKSTORAGESCALE ))
wait_for -w $((60 * $expected_nodes)) --delay 10 -- wait_for_hypervisor_stats $expected_nodes
## #. Set password for Overcloud SNMPd, same password needs to be set in Undercloud Ceilometer
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=$(os-apply-config -m $TE_DATAFILE --key undercloud.ceilometer_snmpd_password --type raw --key-default '')
if [ -z "$UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD" ]; then #nodocs
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=$(os-make-password)
fi #nodocs
## #. Create unique credentials::
### --end
# NOTE(tchaypo): We used to write these passwords in $CWD; so check to see
# if the file exists there first. As well as providing backwards
# compatibility, this allows for people to run multiple test environments on
# the same machine - just make sure to have a different directory for
# running the scripts for each different environment you wish to use.
#
# If we can't find the file in $CWD we look in the new default location.
if [ -e tripleo-overcloud-passwords ]; then
echo "Re-using existing passwords in $PWD/tripleo-overcloud-passwords"
# Add any new passwords since the file was generated
setup-overcloud-passwords tripleo-overcloud-passwords
source tripleo-overcloud-passwords
else
### --include
setup-overcloud-passwords $TRIPLEO_ROOT/tripleo-overcloud-passwords
source $TRIPLEO_ROOT/tripleo-overcloud-passwords
fi #nodocs
## #. We need an environment file to store the parameters we're going to give
## heat.::
HEAT_ENV=${HEAT_ENV:-"${TRIPLEO_ROOT}/overcloud-env.json"}
## #. Read the heat env in for updating.::
if [ -e "${HEAT_ENV}" ]; then
### --end
if [ "$(stat -c %a ${HEAT_ENV})" != "600" ]; then
echo "Error: Heat environment cache \"${HEAT_ENV}\" not set to permissions of 0600."
# We should exit 1 so all the users from before the permissions
# requirement don't have their HEAT_ENV files ignored in a nearly silent way
exit 1
fi
### --include
ENV_JSON=$(cat "${HEAT_ENV}")
else
ENV_JSON='{"parameters":{}}'
fi
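The permission check above relies on GNU stat's `%a` format, which prints the file mode in octal. A quick sketch of the same check against a throwaway file (assumes GNU coreutils):

```shell
# Create a scratch file, lock it down, and verify the mode the same way
# the script does with its cached Heat environment.
f=$(mktemp)
chmod 0600 "$f"
stat -c %a "$f"    # prints 600
rm -f "$f"
```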
## #. Set parameters we need to deploy a KVM cloud.::
NeutronControlPlaneID=$(neutron net-show ctlplane | grep ' id ' | awk '{print $4}')
ENV_JSON=$(jq '.parameters = {
"MysqlInnodbBufferPoolSize": 100
} + .parameters + {
"AdminPassword": "'"${OVERCLOUD_ADMIN_PASSWORD}"'",
"AdminToken": "'"${OVERCLOUD_ADMIN_TOKEN}"'",
"CeilometerPassword": "'"${OVERCLOUD_CEILOMETER_PASSWORD}"'",
"CeilometerMeteringSecret": "'"${OVERCLOUD_CEILOMETER_SECRET}"'",
"CinderPassword": "'"${OVERCLOUD_CINDER_PASSWORD}"'",
"CloudName": "'"${OVERCLOUD_NAME}"'",
"GlancePassword": "'"${OVERCLOUD_GLANCE_PASSWORD}"'",
"HeatPassword": "'"${OVERCLOUD_HEAT_PASSWORD}"'",
"HeatStackDomainAdminPassword": "'"${OVERCLOUD_HEAT_STACK_DOMAIN_PASSWORD}"'",
"HypervisorNeutronPhysicalBridge": "'"${OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE}"'",
"HypervisorNeutronPublicInterface": "'"${OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE}"'",
"NeutronBridgeMappings": "'"${OVERCLOUD_BRIDGE_MAPPINGS}"'",
"NeutronControlPlaneID": "'${NeutronControlPlaneID}'",
"NeutronFlatNetworks": "'"${OVERCLOUD_FLAT_NETWORKS}"'",
"NeutronPassword": "'"${OVERCLOUD_NEUTRON_PASSWORD}"'",
"NeutronPublicInterface": "'"${NeutronPublicInterface}"'",
"NeutronPublicInterfaceTag": "'"${NeutronPublicInterfaceTag}"'",
"NovaComputeLibvirtType": "'"${OVERCLOUD_LIBVIRT_TYPE}"'",
"NovaPassword": "'"${OVERCLOUD_NOVA_PASSWORD}"'",
"NtpServer": "'"${OVERCLOUD_NTP_SERVER}"'",
"SwiftHashSuffix": "'"${OVERCLOUD_SWIFT_HASH}"'",
"SwiftPassword": "'"${OVERCLOUD_SWIFT_PASSWORD}"'",
"SSLCertificate": "'"${OVERCLOUD_SSL_CERT}"'",
"SSLKey": "'"${OVERCLOUD_SSL_KEY}"'",
"OvercloudComputeFlavor": "'"${COMPUTE_FLAVOR}"'",
"OvercloudControlFlavor": "'"${CONTROL_FLAVOR}"'",
"OvercloudBlockStorageFlavor": "'"${BLOCKSTORAGE_FLAVOR}"'",
"OvercloudSwiftStorageFlavor": "'"${SWIFTSTORAGE_FLAVOR}"'"
}' <<< $ENV_JSON)
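The jq expression above layers the parameter map as defaults `+` cached parameters `+` overrides; jq's object addition is right-biased, so a key in a later object wins. A minimal sketch of that precedence (hypothetical keys, requires jq):

```shell
# "a" from the cached parameters survives the default of 99;
# "b" is filled in from the defaults because the cache lacks it.
ENV='{"parameters":{"a":1}}'
jq -c '.parameters = {"a": 99, "b": 2} + .parameters' <<< "$ENV"
# prints {"parameters":{"a":1,"b":2}}
```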
### --end
if [ "$DEBUG_LOGGING" = "1" ]; then
ENV_JSON=$(jq '.parameters = .parameters + {
"Debug": "True",
}' <<< $ENV_JSON)
fi
### --include
## #. We enable the automatic relocation of L3 routers in Neutron by default.
## Alternatively, you can use the L3 agents' high availability mechanism
## (only works with three or more controller nodes) or the distributed virtual
## routing mechanism (deploying routers on compute nodes). Set the environment
## variable ``OVERCLOUD_L3`` to ``relocate``, ``ha`` or ``dvr``.
## ::
OVERCLOUD_L3=${OVERCLOUD_L3:-'relocate'}
## #. If enabling distributed virtual routing on the overcloud, some values need
## to be set so that Neutron DVR will work.
## ::
if [ ${OVERCLOUD_DISTRIBUTED_ROUTERS:-'False'} == "True" -o $OVERCLOUD_L3 == "dvr" ]; then
ENV_JSON=$(jq '.parameters = {} + .parameters + {
"NeutronDVR": "True",
"NeutronTunnelTypes": "vxlan",
"NeutronNetworkType": "vxlan",
"NeutronMechanismDrivers": "openvswitch,l2population",
"NeutronAllowL3AgentFailover": "False",
}' <<< $ENV_JSON)
fi
if [ ${OVERCLOUD_L3_HA:-'False'} == "True" -o $OVERCLOUD_L3 == "ha" ]; then
ENV_JSON=$(jq '.parameters = {} + .parameters + {
"NeutronL3HA": "True",
"NeutronAllowL3AgentFailover": "False",
}' <<< $ENV_JSON)
fi
### --end
# Options we haven't documented as such
ENV_JSON=$(jq '.parameters = {
"ControlVirtualInterface": "'${OVERCLOUD_VIRTUAL_INTERFACE}'"
} + .parameters + {
"NeutronPublicInterfaceDefaultRoute": "'${NeutronPublicInterfaceDefaultRoute}'",
"NeutronPublicInterfaceIP": "'${NeutronPublicInterfaceIP}'",
"NeutronPublicInterfaceRawDevice": "'${NeutronPublicInterfaceRawDevice}'",
"SnmpdReadonlyUserPassword": "'${UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD}'",
}' <<< $ENV_JSON)
RESOURCE_REGISTRY=
RESOURCE_REGISTRY_PATH=${RESOURCE_REGISTRY_PATH:-"$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry.yaml"}
STEPFILE_PATH=${STEPFILE_PATH:-"$TRIPLEO_ROOT/tripleo-heat-templates/environments/overcloud-steps.yaml"}
if [ "$USE_MERGEPY" -eq 0 ]; then
RESOURCE_REGISTRY="-e $RESOURCE_REGISTRY_PATH"
ENV_JSON=$(jq '.parameters = .parameters + {
"ControllerCount": '${OVERCLOUD_CONTROLSCALE}',
"ComputeCount": '${OVERCLOUD_COMPUTESCALE}'
}' <<< $ENV_JSON)
if [ -e "$TRIPLEO_ROOT/tripleo-heat-templates/cinder-storage.yaml" ]; then
ENV_JSON=$(jq '.parameters = .parameters + {
"BlockStorageCount": '${OVERCLOUD_BLOCKSTORAGESCALE}'
}' <<< $ENV_JSON)
fi
if [ "$WITH_STEPS" = "1" ]; then
RESOURCE_REGISTRY="$RESOURCE_REGISTRY -e $STEPFILE_PATH"
fi
fi
CUSTOM_HEAT_ENVIRONMENT=
OVERCLOUD_CUSTOM_HEAT_ENV=${OVERCLOUD_CUSTOM_HEAT_ENV:-''}
if [ -n "$OVERCLOUD_CUSTOM_HEAT_ENV" ]; then
for NAME in $OVERCLOUD_CUSTOM_HEAT_ENV; do
CUSTOM_HEAT_ENVIRONMENT="$CUSTOM_HEAT_ENVIRONMENT -e $NAME"
done
fi
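Each whitespace-separated name in `OVERCLOUD_CUSTOM_HEAT_ENV` becomes an extra `-e` argument for heat. For example (hypothetical file names):

```shell
# Two extra Heat environment files expand to two -e flags.
OVERCLOUD_CUSTOM_HEAT_ENV="net-iso.yaml storage.yaml"
CUSTOM_HEAT_ENVIRONMENT=
for NAME in $OVERCLOUD_CUSTOM_HEAT_ENV; do
    CUSTOM_HEAT_ENVIRONMENT="$CUSTOM_HEAT_ENVIRONMENT -e $NAME"
done
echo "$CUSTOM_HEAT_ENVIRONMENT"    # prints " -e net-iso.yaml -e storage.yaml"
```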
### --include
## #. Save the finished environment file.::
jq . > "${HEAT_ENV}" <<< $ENV_JSON
chmod 0600 "${HEAT_ENV}"
## #. Add Keystone certs/key into the environment file.::
generate-keystone-pki --heatenv $HEAT_ENV
## #. Deploy an overcloud::
## heat $HEAT_OP -e "$HEAT_ENV" \
## -f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml \
## -P "ExtraConfig=${OVERCLOUD_EXTRA_CONFIG}" \
## overcloud
### --end
if [ "$USE_MERGEPY" -eq 1 ]; then
make -C $TRIPLEO_ROOT/tripleo-heat-templates overcloud.yaml \
COMPUTESCALE=$OVERCLOUD_COMPUTESCALE,${OVERCLOUD_COMPUTE_BLACKLIST:-} \
CONTROLSCALE=$OVERCLOUD_CONTROLSCALE,${OVERCLOUD_CONTROL_BLACKLIST:-} \
BLOCKSTORAGESCALE=$OVERCLOUD_BLOCKSTORAGESCALE
OVERCLOUD_TEMPLATE=$TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml
else
OVERCLOUD_TEMPLATE=$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-without-mergepy.yaml
fi
# create stack with a 6 hour timeout, and allow wait_for_stack_ready
# to impose a realistic timeout.
heat $HEAT_OP -e "$HEAT_ENV" \
-e $OVERCLOUD_IMAGE_IDS_ENV \
$RESOURCE_REGISTRY \
$CUSTOM_HEAT_ENVIRONMENT \
-t 360 \
-f "$OVERCLOUD_TEMPLATE" \
-P "ExtraConfig=${OVERCLOUD_EXTRA_CONFIG}" \
$STACKNAME
### --include
## You can watch the console via ``virsh``/``virt-manager`` to observe the PXE
## boot/deploy process. After the deploy is complete, the machines will reboot
## and be available.
## #. While we wait for the stack to come up, build an end user disk image and
## register it with glance.::
USER_IMG_NAME="user.qcow2"
### --end
USE_CIRROS=${USE_CIRROS:-0}
if [ "$USE_CIRROS" != "0" ]; then
USER_IMG_NAME="user-cirros.qcow2"
fi
TEST_IMAGE_DIB_EXTRA_ARGS=${TEST_IMAGE_DIB_EXTRA_ARGS:-''}
if [ ! -e $TRIPLEO_ROOT/$USER_IMG_NAME -o "$USE_CACHE" == "0" ] ; then
if [ "$USE_CIRROS" == "0" ] ; then
### --include
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST vm $TEST_IMAGE_DIB_EXTRA_ARGS \
-a $NODE_ARCH -o $TRIPLEO_ROOT/user 2>&1 | tee $TRIPLEO_ROOT/dib-user.log
### --end
else
VERSION=$($TRIPLEO_ROOT/diskimage-builder/elements/cache-url/bin/cache-url \
http://download.cirros-cloud.net/version/released >(cat) 1>&2)
IMAGE_ID=cirros-${VERSION}-${NODE_ARCH/amd64/x86_64}-disk.img
MD5SUM=$($TRIPLEO_ROOT/diskimage-builder/elements/cache-url/bin/cache-url \
http://download.cirros-cloud.net/${VERSION}/MD5SUMS >(cat) 1>&2 | awk "/$IMAGE_ID/ {print \$1}")
$TRIPLEO_ROOT/diskimage-builder/elements/cache-url/bin/cache-url \
http://download.cirros-cloud.net/${VERSION}/${IMAGE_ID} $TRIPLEO_ROOT/$USER_IMG_NAME
pushd $TRIPLEO_ROOT
echo "$MD5SUM *$USER_IMG_NAME" | md5sum --check -
popd
fi
fi
# If --with-steps is specified, we step through the deployment, waiting
# for user confirmation before proceeding, based on the heat hooks defined
# in tripleo-heat-templates/overcloud-steps.yaml
STEP_SLEEPTIME=${STEP_SLEEPTIME:-30}
STEP_NESTED_DEPTH=${STEP_NESTED_DEPTH:-5}
if [ "$WITH_STEPS" = "1" ]; then
set +x
while heat stack-show overcloud | grep stack_status | grep -q IN_PROGRESS
do
HEADER_DONE=0
HEADER_STR="| resource_name"
while read -r -u 3 line
do
if [[ $line =~ ^"$HEADER_STR" && $HEADER_DONE -eq 0 ]]; then
echo $line
HEADER_DONE=1
fi
if [[ $line == *"paused until Hook"* ]]; then
echo -e "Hit hook, event:\n$line"
# Prompt for confirmation before continuing
read -p "To clear hook and continue, press \"y\"" -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
HOOK_RSRC=$(echo $line | cut -d "|" -f2)
HOOK_STACK=$(echo $line | cut -d "|" -f7)
heat hook-clear $HOOK_STACK $HOOK_RSRC
fi
fi
done 3< <(heat hook-poll overcloud --nested-depth $STEP_NESTED_DEPTH)
echo "Waiting for hook to be reached"
sleep $STEP_SLEEPTIME
done
set -x
fi
### --include
## #. Get the overcloud IP from the heat stack
## ::
echo "Waiting for the overcloud stack to be ready" #nodocs
wait_for_stack_ready -w $(($OVERCLOUD_STACK_TIMEOUT * 60)) 10 $STACKNAME
OVERCLOUD_ENDPOINT=$(heat output-show $STACKNAME KeystoneURL|sed 's/^"\(.*\)"$/\1/')
OVERCLOUD_IP=$(echo $OVERCLOUD_ENDPOINT | awk -F '[/:]' '{print $4}')
### --end
# If we're forcing a specific public interface, we'll want to advertise that as
# the public endpoint for APIs.
if [ -n "$NeutronPublicInterfaceIP" ]; then
OVERCLOUD_IP=$(echo ${NeutronPublicInterfaceIP} | sed -e s,/.*,,)
OVERCLOUD_ENDPOINT="http://$OVERCLOUD_IP:5000/v2.0"
fi
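The awk invocation used to derive `OVERCLOUD_IP` splits the Keystone URL on both `/` and `:`, so the host always lands in field 4. For instance:

```shell
# Fields after splitting "http://192.0.2.10:5000/v2.0" on [/:] are:
# 1="http", 2="", 3="", 4="192.0.2.10", 5="5000", 6="v2.0"
echo "http://192.0.2.10:5000/v2.0" | awk -F '[/:]' '{print $4}'
# prints 192.0.2.10
```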
### --include
## #. We don't (yet) preserve ssh keys on rebuilds.
## ::
ssh-keygen -R $OVERCLOUD_IP
## #. Export the overcloud endpoint and credentials to your test environment.
## ::
NEW_JSON=$(jq '.overcloud.password="'${OVERCLOUD_ADMIN_PASSWORD}'" | .overcloud.endpoint="'${OVERCLOUD_ENDPOINT}'" | .overcloud.endpointhost="'${OVERCLOUD_IP}'"' $TE_DATAFILE)
echo $NEW_JSON > $TE_DATAFILE
## #. Source the overcloud configuration::
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
## #. Exclude the overcloud from proxies::
set +u #nodocs
export no_proxy=$no_proxy,$OVERCLOUD_IP
set -u #nodocs
## #. If we updated the cloud we don't need to do admin setup again - skip down to `Wait for Nova Compute`_.
if [ "stack-create" = "$HEAT_OP" ]; then #nodocs
## #. Perform admin setup of your overcloud.
## ::
init-keystone -o $OVERCLOUD_IP -t $OVERCLOUD_ADMIN_TOKEN \
-e admin@example.com -p $OVERCLOUD_ADMIN_PASSWORD \
${SSLBASE:+-s $PUBLIC_API_URL} --no-pki-setup
# Creating these roles to be used by tenants using swift
openstack role create swiftoperator
openstack role create ResellerAdmin
setup-endpoints $OVERCLOUD_IP \
--cinder-password $OVERCLOUD_CINDER_PASSWORD \
--glance-password $OVERCLOUD_GLANCE_PASSWORD \
--heat-password $OVERCLOUD_HEAT_PASSWORD \
--neutron-password $OVERCLOUD_NEUTRON_PASSWORD \
--nova-password $OVERCLOUD_NOVA_PASSWORD \
--swift-password $OVERCLOUD_SWIFT_PASSWORD \
--ceilometer-password $OVERCLOUD_CEILOMETER_PASSWORD \
${SSLBASE:+--ssl $PUBLIC_API_URL}
openstack role create heat_stack_user
user-config
BM_NETWORK_GATEWAY=$(OS_CONFIG_FILES=$TE_DATAFILE os-apply-config --key baremetal-network.gateway-ip --type raw --key-default '192.0.2.1')
OVERCLOUD_NAMESERVER=$(os-apply-config -m $TE_DATAFILE --key overcloud.nameserver --type netaddress --key-default "$OVERCLOUD_FIXED_RANGE_NAMESERVER")
NETWORK_JSON=$(mktemp)
jq "." <<EOF > $NETWORK_JSON
{
"float": {
"cidr": "$OVERCLOUD_FIXED_RANGE_CIDR",
"name": "default-net",
"nameserver": "$OVERCLOUD_NAMESERVER",
"segmentation_id": "$NeutronPublicInterfaceTag",
"physical_network": "datacentre",
"gateway": "$OVERCLOUD_FIXED_RANGE_GATEWAY"
},
"external": {
"name": "ext-net",
"provider:network_type": "flat",
"provider:physical_network": "datacentre",
"cidr": "$FLOATING_CIDR",
"allocation_start": "$FLOATING_START",
"allocation_end": "$FLOATING_END",
"gateway": "$BM_NETWORK_GATEWAY"
}
}
EOF
setup-neutron -n $NETWORK_JSON
rm $NETWORK_JSON
## #. If you want a demo user in your overcloud (probably a good idea).
## ::
os-adduser -p $OVERCLOUD_DEMO_PASSWORD demo demo@example.com
## #. Workaround https://bugs.launchpad.net/diskimage-builder/+bug/1211165.
## ::
nova flavor-delete m1.tiny
nova flavor-create m1.tiny 1 512 2 1
## #. Register the end user image with glance.
## ::
glance image-create --name user --visibility public --disk-format qcow2 \
--container-format bare --file $TRIPLEO_ROOT/$USER_IMG_NAME
fi #nodocs
## #. _`Wait for Nova Compute`
## ::
wait_for -w 300 --delay 10 -- nova service-list --binary nova-compute 2\>/dev/null \| grep 'enabled.*\ up\ '
## #. Wait for the L2 agent on Nova Compute
## ::
wait_for -w 300 --delay 10 -- neutron agent-list -f csv -c alive -c agent_type -c host \| grep "\":-).*Open vSwitch agent.*-novacompute\"" #nodocs
## wait_for 30 10 neutron agent-list -f csv -c alive -c agent_type -c host \| grep "\":-).*Open vSwitch agent.*-novacompute\""
## #. Log in as a user.
## ::
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc-user
## #. If you just created the cloud you need to add your keypair to your user.
## ::
if [ "stack-create" = "$HEAT_OP" ] ; then #nodocs
user-config
## #. So that you can deploy a VM.
## ::
IMAGE_ID=$(nova image-show user | awk '/ id / {print $4}')
nova boot --key-name default --flavor m1.tiny --block-device source=image,id=$IMAGE_ID,dest=volume,size=3,shutdown=preserve,bootindex=0 demo
## #. Add an external IP for it.
## ::
wait_for -w 50 --delay 5 -- neutron port-list -f csv -c id --quote none \| grep id
PORT=$(neutron port-list -f csv -c id --quote none | tail -n1)
FLOATINGIP=$(neutron floatingip-create ext-net \
--port-id "${PORT//[[:space:]]/}" \
| awk '$2=="floating_ip_address" {print $4}')
## #. And allow network access to it.
## ::
neutron security-group-rule-create default --protocol icmp \
--direction ingress --port-range-min 8
neutron security-group-rule-create default --protocol tcp \
--direction ingress --port-range-min 22 --port-range-max 22
### --end
else
FLOATINGIP=$(neutron floatingip-list \
--quote=none -f csv -c floating_ip_address | tail -n 1)
nova stop demo
sleep 5
nova start demo
fi
### --include
## #. After which, you should be able to ping it
## ::
wait_for -w 300 --delay 10 -- ping -c 1 $FLOATINGIP
### --end
if [ -n "$ADMIN_USERS" ]; then
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
assert-admin-users "$ADMIN_USERS"
assert-users "$ADMIN_USERS"
fi
if [ -n "$USERS" ] ; then
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
assert-users "$USERS"
fi


@ -1,124 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Builds overcloud images using defined environment variables."
echo
echo "Options:"
echo " -h -- this help"
echo " -c -- re-use existing source/images if they exist."
exit $1
}
TEMP=$(getopt -o c,h,help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ] ; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-c) USE_CACHE=1; shift 1;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
USE_CACHE=${USE_CACHE:-0}
DIB_COMMON_ELEMENTS=${DIB_COMMON_ELEMENTS:-'stackuser'}
OVERCLOUD_CONTROL_DIB_ELEMENTS=${OVERCLOUD_CONTROL_DIB_ELEMENTS:-'ntp hosts baremetal boot-stack cinder-api ceilometer-collector ceilometer-api ceilometer-agent-central ceilometer-agent-notification ceilometer-alarm-notifier ceilometer-alarm-evaluator os-collect-config horizon neutron-network-node dhcp-all-interfaces swift-proxy swift-storage keepalived haproxy sysctl'}
OVERCLOUD_CONTROL_DIB_EXTRA_ARGS=${OVERCLOUD_CONTROL_DIB_EXTRA_ARGS:-'rabbitmq-server cinder-tgt'}
OVERCLOUD_COMPUTE_DIB_ELEMENTS=${OVERCLOUD_COMPUTE_DIB_ELEMENTS:-'ntp hosts baremetal nova-compute nova-kvm neutron-openvswitch-agent os-collect-config dhcp-all-interfaces sysctl'}
OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS=${OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS:-''}
OVERCLOUD_BLOCKSTORAGE_DIB_ELEMENTS=${OVERCLOUD_BLOCKSTORAGE_DIB_ELEMENTS:-'ntp hosts baremetal os-collect-config dhcp-all-interfaces sysctl'}
OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS=${OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS:-'cinder-tgt'}
SSL_ELEMENT=${SSLBASE:+openstack-ssl}
TE_DATAFILE=${TE_DATAFILE:?"TE_DATAFILE must be defined before calling this script!"}
if [ "${USE_MARIADB:-}" = 1 ] ; then
OVERCLOUD_CONTROL_DIB_EXTRA_ARGS="$OVERCLOUD_CONTROL_DIB_EXTRA_ARGS mariadb-rpm"
OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS="$OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS mariadb-dev-rpm"
OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS="$OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS mariadb-dev-rpm"
fi
NODE_ARCH=$(os-apply-config -m $TE_DATAFILE --key arch --type raw)
### --include
## devtest_overcloud_images
## ========================
## Build images with environment variables. This script works best
## when using tripleo-image-elements for Overcloud configuration.
## #. The Undercloud UI needs SNMPd for monitoring every Overcloud node
## ::
if [ "$USE_UNDERCLOUD_UI" -ne 0 ] ; then
OVERCLOUD_CONTROL_DIB_EXTRA_ARGS="$OVERCLOUD_CONTROL_DIB_EXTRA_ARGS snmpd"
OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS="$OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS snmpd"
OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS="$OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS snmpd"
fi
## #. Create your overcloud control plane image.
## ``$OVERCLOUD_*_DIB_EXTRA_ARGS`` (CONTROL, COMPUTE, BLOCKSTORAGE) are
## meant to be used to pass additional build-time specific arguments to
## ``disk-image-create``.
## ``$SSL_ELEMENT`` is used when building a cloud with SSL endpoints - it should be
## set to openstack-ssl in that situation.
## ::
if [ ! -e $TRIPLEO_ROOT/overcloud-control.qcow2 -o "$USE_CACHE" == "0" ] ; then
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-control \
$OVERCLOUD_CONTROL_DIB_ELEMENTS \
$DIB_COMMON_ELEMENTS $OVERCLOUD_CONTROL_DIB_EXTRA_ARGS ${SSL_ELEMENT:-} 2>&1 | \
tee $TRIPLEO_ROOT/dib-overcloud-control.log
fi
## #. Create your block storage image if some block storage nodes are to be used. This
## is the image the undercloud deploys for the additional cinder-volume nodes.
## ::
if [ ${OVERCLOUD_BLOCKSTORAGESCALE:-0} -gt 0 ]; then
if [ ! -e $TRIPLEO_ROOT/overcloud-cinder-volume.qcow2 -o "$USE_CACHE" == "0" ]; then
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-cinder-volume \
$OVERCLOUD_BLOCKSTORAGE_DIB_ELEMENTS $DIB_COMMON_ELEMENTS \
$OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS 2>&1 | \
tee $TRIPLEO_ROOT/dib-overcloud-cinder-volume.log
fi
fi
## If enabling distributed virtual routing for Neutron on the overcloud, the compute node
## must have the ``neutron-router`` element installed.
## ::
OVERCLOUD_DISTRIBUTED_ROUTERS=${OVERCLOUD_DISTRIBUTED_ROUTERS:-'False'}
OVERCLOUD_L3=${OVERCLOUD_L3:-'relocate'}
if [ $OVERCLOUD_DISTRIBUTED_ROUTERS == "True" -o $OVERCLOUD_L3 == "dvr" ]; then
OVERCLOUD_COMPUTE_DIB_ELEMENTS="$OVERCLOUD_COMPUTE_DIB_ELEMENTS neutron-router"
fi
## #. Create your overcloud compute image. This is the image the undercloud
## deploys to host the overcloud Nova compute hypervisor components.
## ::
if [ ! -e $TRIPLEO_ROOT/overcloud-compute.qcow2 -o "$USE_CACHE" == "0" ]; then
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-compute \
$OVERCLOUD_COMPUTE_DIB_ELEMENTS $DIB_COMMON_ELEMENTS \
$OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS 2>&1 | \
tee $TRIPLEO_ROOT/dib-overcloud-compute.log
fi
### --end


@ -1,63 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Build a baremetal deployment ramdisk."
echo
echo "Options:"
echo " -h -- this help"
echo
exit $1
}
TEMP=$(getopt -o h -l help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
set -x
USE_CACHE=${USE_CACHE:-0}
DIB_COMMON_ELEMENTS=${DIB_COMMON_ELEMENTS:-'stackuser'}
### --include
## devtest_ramdisk
## ===============
## Deploy Ramdisk creation
## -----------------------
## #. Create a deployment ramdisk + kernel. These are used by the seed cloud and
## the undercloud for deployment to bare metal.
## ::
### --end
NODE_ARCH=$(os-apply-config -m $TE_DATAFILE --key arch)
if [ ! -e $TRIPLEO_ROOT/$DEPLOY_NAME.kernel -o \
! -e $TRIPLEO_ROOT/$DEPLOY_NAME.initramfs -o \
"$USE_CACHE" == "0" ] ; then
### --include
$TRIPLEO_ROOT/diskimage-builder/bin/ramdisk-image-create -a $NODE_ARCH \
$NODE_DIST $DEPLOY_IMAGE_ELEMENT -o $TRIPLEO_ROOT/$DEPLOY_NAME \
$DIB_COMMON_ELEMENTS 2>&1 | \
tee $TRIPLEO_ROOT/dib-deploy.log
### --end
fi


@ -1,437 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
PATH=$PATH:/usr/sbin:/sbin
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Deploys a baremetal cloud via virsh."
echo
echo "Options:"
echo " -h -- this help"
echo " -c -- re-use existing source/images if they exist."
echo " --build-only -- build the needed images but don't deploy them."
echo " --debug-logging -- Turn on debug logging in the seed. Sets both the"
echo " OS_DEBUG_LOGGING env var and the debug environment"
echo " json values."
echo " --all-nodes -- use all the nodes in the testenv rather than"
echo " just the first one."
echo
exit $1
}
BUILD_ONLY=
DEBUG_LOGGING=
TEMP=$(getopt -o c,h -l all-nodes,build-only,debug-logging,help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
--all-nodes) ALL_NODES="true"; shift 1;;
-c) SEED_USE_CACHE=1; shift 1;;
--build-only) BUILD_ONLY="--build-only"; shift 1;;
--debug-logging)
DEBUG_LOGGING="seed-debug-logging"
export OS_DEBUG_LOGGING="1"
shift 1
;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
set -x
USE_CACHE=${USE_CACHE:-0}
# SEED_USE_CACHE can be set independently of the global USE_CACHE
# This is useful if you want to rebuild your seed, but reuse the
# cached overcloud images. Do this via SEED_USE_CACHE=0 with
# the global USE_CACHE=1. Note this is only overriding the local
# USE_CACHE variable, not what is set in the environment.
USE_CACHE=${SEED_USE_CACHE:-$USE_CACHE}
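The comment above describes the override: the `${SEED_USE_CACHE:-$USE_CACHE}` parameter expansion lets `SEED_USE_CACHE` shadow the global value only when it is set. A minimal sketch:

```shell
# Global cache on, but force a seed rebuild: SEED_USE_CACHE wins when set.
USE_CACHE=1
SEED_USE_CACHE=0
USE_CACHE=${SEED_USE_CACHE:-$USE_CACHE}
echo "$USE_CACHE"    # prints 0
```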
### --include
## devtest_seed
## ============
## #. Create and start your seed VM. This script invokes diskimage-builder with
## suitable paths and options to create and start a VM that contains an
## all-in-one OpenStack cloud with the baremetal driver enabled, and
## preconfigures it for a development environment. Note that the seed has
## minimal variation in its configuration: the goal is to bootstrap with
## a known-solid config.
## ::
cd $TRIPLEO_ROOT/tripleo-image-elements/elements/seed-stack-config
## #. Ironic and Nova-Baremetal require different metadata to operate.
## ::
# Unsets .nova.baremetal as it's unused.
# TODO replace "baremetal": {} with del(.baremetal) when jq 1.3 is widely available.
# Sets:
# - ironic.virtual_power_ssh_key.
# - nova.compute_driver to ironic.nova.virt.ironic.driver.IronicDriver.
# - nova.compute_manager to avoid race conditions on ironic startup.
jq -s '
.[1] as $config
| .[0]
| . + {
"ironic": (.ironic + {
"virtual_power_ssh_key": $config["ssh-key"],
}),
"nova": (.nova + {
"baremetal": {},
"compute_driver": "nova.virt.ironic.driver.IronicDriver",
"compute_manager": "ironic.nova.compute.manager.ClusteredComputeManager",
"scheduler_host_manager": "nova.scheduler.ironic_host_manager.IronicHostManager",
})
}' config.json $TE_DATAFILE > tmp_local.json
# Add Keystone certs/key into the environment file
generate-keystone-pki --heatenv tmp_local.json -s
# Get details required to set-up a callback heat call back from the seed from os-collect-config.
HOST_IP=$(os-apply-config -m $TE_DATAFILE --key host-ip --type netaddress --key-default '192.168.122.1')
COMP_IP=$(ip route get "$HOST_IP" | awk '/'"$HOST_IP"'/ {print $NF}')
SEED_COMP_PORT="${SEED_COMP_PORT:-27410}"
SEED_IMAGE_ID="${SEED_IMAGE_ID:-seedImageID}"
# Firewalld interferes with our seed completion signal
if systemctl status firewalld; then
if ! sudo firewall-cmd --list-ports | grep "$SEED_COMP_PORT/tcp"; then
echo 'Firewalld is running and the seed completion port is not open.'
echo 'To continue you must either stop firewalld or open the port with:'
echo "sudo firewall-cmd --add-port=$SEED_COMP_PORT/tcp"
exit 1
fi
fi
# Apply custom BM network settings to the seeds local.json config
# Because the seed runs under libvirt and usually isn't in routing tables for
# access to the networks behind it, we setup masquerading for the bm networks,
# which permits outbound access from the machines we've deployed.
# If the seed is not the router (e.g. real machines are being used) then these
# rules are harmless.
BM_NETWORK_CIDR=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.cidr --type raw --key-default '192.0.2.0/24')
BM_VLAN_SEED_TAG=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.public_vlan.tag --type netaddress --key-default '')
BM_VLAN_SEED_IP=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.public_vlan.ip --type netaddress --key-default '')
if [ -n "$BM_VLAN_SEED_IP" ]; then
BM_VLAN_SEED_IP_ADDR=$(python -c "import netaddr; print(netaddr.IPNetwork('$BM_VLAN_SEED_IP').ip)")
BM_VLAN_SEED_IP_CIDR=$(python -c "import netaddr; print('%s/%s' % (netaddr.IPNetwork('$BM_VLAN_SEED_IP').network, netaddr.IPNetwork('$BM_VLAN_SEED_IP').prefixlen))")
echo "{ \"ovs\": {\"public_interface_tag\": \"${BM_VLAN_SEED_TAG}\", \"public_interface_tag_ip\": \"${BM_VLAN_SEED_IP}\"}, \"masquerade\": [\"${BM_VLAN_SEED_IP}\"] }" > bm-vlan.json
else
echo "{ \"ovs\": {}, \"masquerade\": [] }" > bm-vlan.json
fi
BM_BRIDGE_ROUTE=$(jq -r '.["baremetal-network"].seed.physical_bridge_route // {}' $TE_DATAFILE)
BM_CTL_ROUTE_PREFIX=$(jq -r '.["baremetal-network"].seed.physical_bridge_route.prefix // ""' $TE_DATAFILE)
BM_CTL_ROUTE_VIA=$(jq -r '.["baremetal-network"].seed.physical_bridge_route.via // ""' $TE_DATAFILE)
jq -s '
.[1]["baremetal-network"] as $bm
| ($bm.seed.ip // "192.0.2.1") as $bm_seed_ip
| .[2] as $bm_vlan
| .[3] as $bm_bridge_route
| .[0]
| . + {
"local-ipv4": $bm_seed_ip,
"completion-signal": ("http://'"${COMP_IP}"':'"${SEED_COMP_PORT}"'"),
"instance-id": "'"${SEED_IMAGE_ID}"'",
"bootstack": (.bootstack + {
"public_interface_ip": ($bm_seed_ip + "/'"${BM_NETWORK_CIDR##*/}"'"),
"masquerade_networks": ([$bm.cidr // "192.0.2.0/24"] + $bm_vlan.masquerade)
}),
"heat": (.heat + {
"watch_server_url": ("http://" + $bm_seed_ip + ":8003"),
"waitcondition_server_url": ("http://" + $bm_seed_ip + ":8000/v1/waitcondition"),
"metadata_server_url": ("http://" + $bm_seed_ip + ":8000")
}),
"neutron": (.neutron + {
"ovs": (.neutron.ovs + $bm_vlan.ovs + {"local_ip": $bm_seed_ip } + {
"physical_bridge_route": $bm_bridge_route
})
})
}' tmp_local.json $TE_DATAFILE bm-vlan.json <(echo "$BM_BRIDGE_ROUTE") > local.json
rm tmp_local.json
rm bm-vlan.json
### --end
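The merge above leans on jq's `//` alternative operator (e.g. `$bm.seed.ip // "192.0.2.1"`), which substitutes a default whenever the left-hand value is null or false (requires jq):

```shell
# .seed.ip is absent, so // falls back to the default address.
jq -rn '{"seed":{}} | (.seed.ip // "192.0.2.1")'
# prints 192.0.2.1
```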
# If running in a CI environment then the user and ip address should be read
# from the json describing the environment
REMOTE_OPERATIONS=$(os-apply-config -m $TE_DATAFILE --key remote-operations --type raw --key-default '')
if [ -n "$REMOTE_OPERATIONS" ] ; then
SSH_USER=$(os-apply-config -m $TE_DATAFILE --key ssh-user --type raw --key-default 'root')
sed -i "s/\"192.168.122.1\"/\"$HOST_IP\"/" local.json
sed -i "s/\"user\": \".*\?\",/\"user\": \"$SSH_USER\",/" local.json
fi
### --include
NODE_ARCH=$(os-apply-config -m $TE_DATAFILE --key arch --type raw)
## #. If you are only building disk images, there is no reason to boot the
## seed VM. Instead, pass ``--build-only`` to tell boot-seed-vm not to boot
## the vm it builds.
## If you want to use a previously built image rather than building a new
## one, passing ``-c`` will boot the existing image rather than creating
## a new one.
## ::
cd $TRIPLEO_ROOT
## boot-seed-vm -a $NODE_ARCH $NODE_DIST neutron-dhcp-agent
### --end
if [ "$USE_CACHE" == "0" ] ; then
CACHE_OPT=
else
CACHE_OPT="-c"
fi
boot-seed-vm $CACHE_OPT $BUILD_ONLY -a $NODE_ARCH $NODE_DIST $DEBUG_LOGGING neutron-dhcp-agent 2>&1 | \
tee $TRIPLEO_ROOT/dib-seed.log
if [ -n "${BUILD_ONLY}" ]; then
exit 0
fi
### --include
## #. If you're just building images, you're done with this script. Move on
## to :doc:`devtest_undercloud`
## ``boot-seed-vm`` will start a VM containing your SSH key for the root user.
##
## The IP address of the VM's eth0 is printed out at the end of boot-seed-vm, or
## you can query the testenv json which is updated by boot-seed-vm::
SEED_IP=$(os-apply-config -m $TE_DATAFILE --key seed-ip --type netaddress)
## #. Add a route to the baremetal bridge via the seed node (we do this so that
## your host is isolated from the networking of the test environment).
## We only add this route if the baremetal seed IP is used as the
## gateway (the route is typically not required if you are using
## a pre-existing baremetal network)
## ::
# These are not persistent, if you reboot, re-run them.
BM_NETWORK_SEED_IP=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.ip --type raw --key-default '192.0.2.1')
BM_NETWORK_GATEWAY=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.gateway-ip --type raw --key-default '192.0.2.1')
if [ $BM_NETWORK_GATEWAY = $BM_NETWORK_SEED_IP -o $BM_NETWORK_GATEWAY = ${BM_VLAN_SEED_IP_ADDR:-''} ]; then
ROUTE_DEV=$(os-apply-config -m $TE_DATAFILE --key seed-route-dev --type netdevice --key-default virbr0)
sudo ip route replace $BM_NETWORK_CIDR dev $ROUTE_DEV via $SEED_IP
if [ -n "$BM_VLAN_SEED_IP" ]; then
sudo ip route replace $BM_VLAN_SEED_IP_CIDR via $SEED_IP
fi
fi
## #. Mask the seed API endpoint out of your proxy settings
## ::
set +u #nodocs
export no_proxy=$no_proxy,$BM_NETWORK_SEED_IP
set -u #nodocs
## #. If you downloaded a pre-built seed image you will need to log into it
## and customise the configuration within it (see footnote [#f1]_).
##
## #. Setup a prompt clue so you can tell what cloud you have configured.
## (Do this once).
## ::
##
## source $TRIPLEO_ROOT/tripleo-incubator/cloudprompt
## #. Source the client configuration for the seed cloud.
## ::
source $TRIPLEO_ROOT/tripleo-incubator/seedrc
## #. Perform setup of your seed cloud.
## ::
echo "Waiting for seed node to configure br-ctlplane..." #nodocs
# Listen on SEED_COMP_PORT for a callback from os-collect-config. This is
# similar to how Heat waits, but Heat does not run on the seed.
timeout 480 sh -c 'printf "HTTP/1.0 200 OK\r\n\r\n\r\n" | nc -l '"$COMP_IP"' '"$SEED_COMP_PORT"' | grep '"$SEED_IMAGE_ID"
# Wait for network
wait_for -w 20 --delay 1 -- ping -c 1 $BM_NETWORK_SEED_IP
# If ssh-keyscan fails to connect, it returns 0. So grep to see if it succeeded
ssh-keyscan -t rsa $BM_NETWORK_SEED_IP | tee -a ~/.ssh/known_hosts | grep -q "^$BM_NETWORK_SEED_IP ssh-rsa "
init-keystone -o $BM_NETWORK_SEED_IP -t unset -e admin@example.com -p unset --no-pki-setup
setup-endpoints $BM_NETWORK_SEED_IP --glance-password unset --heat-password unset --neutron-password unset --nova-password unset --ironic-password unset
openstack role create heat_stack_user
# Creating these roles to be used by tenants using swift
openstack role create swiftoperator
openstack role create ResellerAdmin
echo "Waiting for nova to initialise..."
wait_for -w 500 --delay 10 -- nova list
user-config
echo "Waiting for Nova Compute to be available"
wait_for -w 300 --delay 10 -- nova service-list --binary nova-compute 2\>/dev/null \| grep 'enabled.*\ up\ '
echo "Waiting for neutron API and L2 agent to be available"
wait_for -w 300 --delay 10 -- neutron agent-list -f csv -c alive -c agent_type -c host \| grep "\":-).*Open vSwitch agent.*\"" #nodocs
BM_NETWORK_SEED_RANGE_START=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.range-start --type raw --key-default '192.0.2.2')
BM_NETWORK_SEED_RANGE_END=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.range-end --type raw --key-default '192.0.2.20')
if [ -n "$BM_VLAN_SEED_TAG" ]; then
# With a public VLAN, the gateway address is on the public LAN.
CTL_GATEWAY=
else
CTL_GATEWAY=$BM_NETWORK_GATEWAY
fi
SEED_NAMESERVER=$(os-apply-config -m $TE_DATAFILE --key seed.nameserver --type netaddress --key-default "${SEED_NAMESERVER:-}")
NETWORK_JSON=$(mktemp)
jq "." <<EOF > $NETWORK_JSON
{
"physical": {
"gateway": "$CTL_GATEWAY",
"metadata_server": "$BM_NETWORK_SEED_IP",
"cidr": "$BM_NETWORK_CIDR",
"allocation_start": "$BM_NETWORK_SEED_RANGE_START",
"allocation_end": "$BM_NETWORK_SEED_RANGE_END",
"name": "ctlplane",
"nameserver": "$SEED_NAMESERVER"
}
}
EOF
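The network definition above relies on heredoc interpolation: with an unquoted `EOF` delimiter, the shell expands variables inside the JSON body before handing it to jq. A minimal standalone sketch, with a made-up gateway value:

```shell
# With an unquoted delimiter (EOF, not 'EOF'), the shell expands
# variables inside the heredoc body before the consumer sees it.
CTL_GATEWAY=192.0.2.1    # hypothetical value for illustration
cat <<EOF
{"physical": {"gateway": "$CTL_GATEWAY"}}
EOF
```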
if [ -n "$BM_CTL_ROUTE_PREFIX" -a -n "$BM_CTL_ROUTE_VIA" ]; then
EXTRA_ROUTE="{\"destination\": \"$BM_CTL_ROUTE_PREFIX\", \"nexthop\": \"$BM_CTL_ROUTE_VIA\"}"
TMP_NETWORK=$(mktemp)
jq ".[\"physical\"][\"extra_routes\"]=[$EXTRA_ROUTE]" < $NETWORK_JSON > $TMP_NETWORK
mv $TMP_NETWORK $NETWORK_JSON
fi
setup-neutron -n $NETWORK_JSON
rm $NETWORK_JSON
# Is there a public network as well? If so configure it.
if [ -n "$BM_VLAN_SEED_TAG" ]; then
BM_VLAN_SEED_START=$(jq -r '.["baremetal-network"].seed.public_vlan.start' $TE_DATAFILE)
BM_VLAN_SEED_END=$(jq -r '.["baremetal-network"].seed.public_vlan.finish' $TE_DATAFILE)
BM_VLAN_SEED_TAG=$(jq -r '.["baremetal-network"].seed.public_vlan.tag' $TE_DATAFILE)
PUBLIC_NETWORK_JSON=$(mktemp)
jq "." <<EOF > $PUBLIC_NETWORK_JSON
{
"physical": {
"gateway": "$BM_NETWORK_GATEWAY",
"metadata_server": "$BM_NETWORK_SEED_IP",
"cidr": "$BM_VLAN_SEED_IP_CIDR",
"allocation_start": "$BM_VLAN_SEED_START",
"allocation_end": "$BM_VLAN_SEED_END",
"name": "public",
"nameserver": "$SEED_NAMESERVER",
"segmentation_id": "$BM_VLAN_SEED_TAG",
"physical_network": "ctlplane",
"enabled_dhcp": false
}
}
EOF
setup-neutron -n $PUBLIC_NETWORK_JSON
rm $PUBLIC_NETWORK_JSON
fi
## #. Nova starts with restrictive default quotas, so override them to
##    allow unlimited cores, instances, and RAM.
## ::
nova quota-update --cores -1 --instances -1 --ram -1 $(openstack project show admin | awk '$2=="id" {print $4}')
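The awk expression above pulls the project id out of the ASCII table that `openstack project show` prints. A sketch against a fabricated stand-in table (the real output has more rows and a border) shows the field positions awk sees:

```shell
# 'openstack project show' prints rows like '| id | 1234abcd |';
# with default whitespace splitting awk sees $1='|' $2='id' $3='|'
# $4='1234abcd', so matching on $2 and printing $4 extracts the value.
printf '| %s | %s |\n' enabled True id 1234abcd name admin |
    awk '$2=="id" {print $4}'
```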
## #. Register "bare metal" nodes with nova and setup Nova baremetal flavors.
## When using VMs Nova will PXE boot them as though they use physical
## hardware.
## If you want to create the VM yourself see footnote [#f2]_ for details
## on its requirements.
## If you want to use real baremetal see footnote [#f3]_ for details.
## If you are building an undercloud, register only the first node.
## ::
if [ -z "${ALL_NODES:-}" ]; then #nodocs
setup-baremetal --service-host seed --nodes <(jq '[.nodes[0]]' $TE_DATAFILE)
else #nodocs
## Otherwise, if you are skipping the undercloud, you should register all
## the nodes.::
setup-baremetal --service-host seed --nodes <(jq '.nodes' $TE_DATAFILE)
fi #nodocs
## If you need to collect the MAC address separately, see ``scripts/get-vm-mac``.
## .. rubric:: Footnotes
##
## .. [#f1] Customize a downloaded seed image.
##
## If you downloaded your seed VM image, you may need to configure it.
## Setup a network proxy, if you have one (e.g. 192.168.2.1 port 8080)
## ::
##
## # Run within the image!
## echo << EOF >> ~/.profile
## export no_proxy=192.0.2.1
## export http_proxy=http://192.168.2.1:8080/
## EOF
##
## Add an ~/.ssh/authorized_keys file. The image rejects password authentication
## for security, so you will need to ssh out from the VM console. Even if you
## don't copy your authorized_keys in, you will still need to ensure that
## /home/stack/.ssh/authorized_keys on your seed node has some kind of
## public SSH key in it, or the openstack configuration scripts will error.
##
## You can log into the console using the username 'stack' password 'stack'.
##
## .. [#f2] Requirements for the "baremetal node" VMs
##
## If you don't use create-nodes, but want to create your own VMs, here are some
## suggestions for what they should look like.
##
## * each VM should have 1 NIC
## * eth0 should be on brbm
## * record the MAC addresses for the NIC of each VM.
## * give each VM no less than 2GB of disk, and ideally give them
## more than NODE_DISK, which defaults to 20GB
## * 1GB RAM is probably enough (512MB is not enough to run an all-in-one
## OpenStack), and 768MB isn't enough to do repeated deploys with.
## * if using KVM, specify that you will install the virtual machine via PXE.
## This will avoid KVM prompting for a disk image or installation media.
##
## .. [#f3] Notes when using real bare metal
##
## If you want to use real bare metal see the following.
##
## * When calling setup-baremetal you can set the MAC, IP address, user,
## and password parameters which should all be space delimited lists
## that correspond to the MAC addresses and power management commands
## your real baremetal machines require. See scripts/setup-baremetal
## for details.
##
## * If you see over-mtu packets getting dropped when iscsi data is copied
## over the control plane you may need to increase the MTU on your brbm
## interfaces. Symptoms that this might be the cause include:
## ::
##
## iscsid: log shows repeated connection failed errors (and reconnects)
## dmesg shows:
## openvswitch: vnet1: dropped over-mtu packet: 1502 > 1500
##
### --end


@ -1,353 +0,0 @@
#!/bin/bash
#
# Idempotent one-time setup for devtest.
# This can be run for CI purposes, by passing --trash-my-machine to it.
# Without that parameter, the script will error.
set -eux
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Setup the TripleO devtest environment."
echo
echo "Options:"
echo " --trash-my-machine -- make nontrivial destructive changes to the machine."
echo " For details read the source."
echo " -c -- re-use existing source/images if they exist."
echo
exit $1
}
CONTINUE=0
USE_CACHE=${USE_CACHE:-0}
TEMP=`getopt -o h,c -l trash-my-machine -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
--trash-my-machine) CONTINUE=1; shift 1;;
-c) USE_CACHE=1; shift 1;;
-h) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
if [ "0" = "$CONTINUE" ]; then
echo "Not running - this script is destructive and requires --trash-my-machine to run." >&2
exit 1
fi
### --include
## devtest_setup
## =============
## Configuration
## -------------
## The seed instance expects to run with its eth0 connected to the outside world,
## via whatever IP range you choose to setup. You can run NAT, or not, as you
## choose. This is how we connect to it to run scripts etc - though you can
## equally log in on its console if you like.
## We use flat networking with all machines on one broadcast domain for dev-test.
## The eth1 of your seed instance should be connected to your bare metal cloud
## LAN. The seed VM uses this network to bring up nodes, and does its own
## DHCP etc, so do not connect it to a network
## shared with other DHCP servers or the like. The instructions in this document
## create a bridge device ('brbm') on your machine to emulate this with virtual
## machine 'bare metal' nodes.
## NOTE: We recommend using an apt/HTTP proxy and setting the http_proxy
## environment variable accordingly in order to speed up the image build
## times. See footnote [#f3]_ to set up Squid proxy.
## NOTE: Likewise, setup a pypi mirror and use the pypi element, or use the
## pip-cache element. (See diskimage-builder documentation for both of
## these). Add the relevant element name to the DIB_COMMON_ELEMENTS
## variable.
## .. _devtest-environment-configuration:
## Devtest test environment configuration
## --------------------------------------
## Devtest uses a JSON file to describe the test environment that OpenStack will
## run within. The JSON file path is given by $TE_DATAFILE. The JSON file contains
## the following keys:
## #. arch: The CPU arch which Nova-BM nodes will be registered with.
## This must be consistent when VMs are created (in devtest_testenv.sh)
## and when disk images are created (in devtest_seed / undercloud /
## overcloud). The images are controlled by this testenv key, and VMs
## are created by the same code that sets this key in the test environment
## description, so you should only need to change/set it once, when creating
## the test environment. We use 32-bit by default for the reduced memory
## footprint. If you are running on real hardware, or want to test with
## 64-bit arch, replace i386 => amd64 in all the commands below. You will of
## course need amd64 capable hardware to do this.
## #. host-ip: The IP address of the host which will run the seed VM using virsh.
## #. seed-ip: The IP address of the seed VM (if known). If not known, it is
## looked up locally in the ARP table.
## #. ssh-key: The private part of an SSH key to be used when performing virsh
## commands on $host-ip.
## #. ssh-user: The SSH username to use when performing virsh commands on
## $host-ip.
## #. nodes: A list of node metadata. Each node has "memory" in MiB, "cpu" in
## threads, "arch" (one of i386/amd64/etc), "disk" in GiB, a list of MAC
## addresses for the node, and "pm_type", "pm_user", "pm_addr", and
## "pm_password" fields. Future iterations may add more Ironic power and
## deploy driver selections here.
## See the `os-cloud-config documentation
## <http://docs.openstack.org/developer/os-cloud-config/usage.html#registering-nodes-with-a-baremetal-service>`_
## for a sample
## #. baremetal-network: A mapping of metadata describing the bare metal cloud
## network. This is a flat network which is used to bring up nodes via
## DHCP and transfer images. By default the rfc5735 TEST-NET-1 range -
## 192.0.2.0/24 is used. The following fields are available (along
## with the default values for each field):
## ::
## {
## "cidr": "192.0.2.0/24",
## "gateway-ip": "192.0.2.1",
## "seed": {
## "ip": "192.0.2.1",
## "range-start": "192.0.2.2",
## "range-end": "192.0.2.20",
## "physical_bridge_route": (null),
## "public_vlan": (null)
## },
## "undercloud": {
## "range-start": "192.0.2.21",
## "range-end": "192.0.2.40",
## "public_vlan": (null)
## }
## }
## The physical_bridge_route and public_vlan keys default to absent, which
## is suitable for a flat networking environment. When exterior access will
## be on a vlan they should be filled out. For instance, if TEST-NET-2 were
## our exterior subnet on VLAN 10, we might have the following as our
## baremetal network, to use a baremetal router on .1, the seed on .2, and a
## handful of addresses for both the seed and the undercloud dhcp pools. We
## would also expect a route to the IPMI network to allow control of machines.
## The gateway IP and physical_bridge_route if specified are also put into the
## initial network definitions created by the _seed script, and so are
## accessible via DHCP to the undercloud instances (and likewise overcloud).
## ::
## {
## "cidr": "192.0.2.0/25",
## "gateway-ip": "198.51.100.1",
## "seed": {
## "ip": "192.0.2.1",
## "range-start": "192.0.2.2",
## "range-end": "192.0.2.20",
## "physical_bridge_route": {
## "prefix": "192.0.2.0/24",
## "via": "192.0.2.126"
## },
## "public_vlan": {
## "tag": 10,
## "ip": "198.51.100.2/24",
## "start": "198.51.100.3",
## "finish": "198.51.100.10"
## }
## },
## "undercloud": {
## "range-start": "192.0.2.21",
## "range-end": "192.0.2.40",
## "public_vlan": {
## "start": "198.51.100.11",
## "finish": "198.51.100.20"
## }
## }
## }
## #. power_manager: The class path for a Nova Baremetal power manager.
## Note that this is specific to operating with Nova Baremetal and is ignored
## for use with Ironic. However, since this describes the test environment,
## not the code under test, it should always be present while we support
## using Nova Baremetal.
## #. seed-route-dev: What device to route traffic for the initial undercloud
## network. As our test network is unrouteable we require an explicit device
## to avoid accidentally routing it onto live networks. Defaults to virbr0.
## #. remote-operations: Whether to operate on the local machine only, or
## perform remote operations when starting VMs and copying disk images.
## A non-empty string means true; the default is '', which means false.
## #. remote-host: If the test environment is on a remote host, this may be
## set to the host name of the remote host. It is intended to help
## provide valuable debug information about where devtest is hosted.
## #. env-num: An opaque key used by the test environment hosts for identifying
## which environment seed images are being copied into.
## #. undercloud: an object with metadata for connecting to the undercloud in
## the environment.
## #. undercloud.password: The password for the currently deployed undercloud.
## #. undercloud.endpoint: The Keystone endpoint URL for the undercloud.
## #. undercloud.endpointhost: The host of the endpoint - used for noproxy settings.
## #. overcloud: an object with metadata for connecting to the overcloud in
## the environment.
## #. overcloud.password: The admin password for the currently deployed overcloud.
## #. overcloud.endpoint: The Keystone endpoint URL for the overcloud.
## #. overcloud.endpointhost: The host of the endpoint - used for noproxy settings.
## XXX: We're currently migrating to that structure - some code still uses
## environment variables instead.
## Detailed instructions
## ---------------------
## **(Note: all of the following commands should be run on your host machine, not inside the seed VM)**
## #. Before you start, check to see that your machine supports hardware
## virtualization, otherwise performance of the test environment will be poor.
## We are currently bringing up an LXC based alternative testing story, which
## will mitigate this, though the deployed instances will still be full virtual
## machines and so performance will be significantly less there without
## hardware virtualization.
## #. As you step through the instructions several environment
## variables are set in your shell. These variables will be lost if
## you exit out of your shell. After setting variables, use
## scripts/write-tripleorc to write out the variables to a file that
## can be sourced later to restore the environment.
## #. Also check that an SSH server is running on the host machine and port 22 is open for
## connections from virbr0 - VirtPowerManager will boot VMs by sshing into the
## host machine and issuing libvirt/virsh commands. The user these instructions
## use is your own, but you can also setup a dedicated user if you choose.
### --end
if [ "$USE_CACHE" == "0" ] ; then
if [ -z "${ZUUL_REF:-''}" ]; then
cd $TRIPLEO_ROOT/tripleo-incubator ; git pull
fi
fi
if [ "$NODE_DIST" == 'unsupported' ]; then
echo 'Unsupported OS distro.'
exit 1
fi
### --include
## #. Install required system packages
## ::
if [ "$USE_CACHE" = "0" ] ; then #nodocs
install-dependencies
fi #nodocs
## #. Clone/update the other needed tools which are not available as packages.
## The DIB_REPOLOCATION_* and DIB_REPOREF_* environment variables will be used,
## if set, to select the diskimage_builder, tripleo_image_elements and
## tripleo_heat_templates to check out. Setting TRIPLEO_ADDITIONAL_PULL_TOOLS
## to full git URLs will also allow you to add extra repositories to be cloned
## or updated by the pull-tools script.
## ::
if [ "$USE_CACHE" = "0" ] ; then #nodocs
pull-tools
fi #nodocs
## #. Install client tools
## ::
if [ "$USE_CACHE" = "0" ] ; then #nodocs
setup-clienttools
fi #nodocs
## #. Ensure current user can manage libvirt resources
## ::
set-usergroup-membership
## .. rubric:: Footnotes
## .. [#f3] Setting Up Squid Proxy
##
## * Install squid proxy
## ::
##
## apt-get install squid
##
## * Set `/etc/squid3/squid.conf` to the following
## ::
##
## acl localhost src 127.0.0.1/32 ::1
## acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
## acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
## acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
## acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
## acl SSL_ports port 443
## acl Safe_ports port 80 # http
## acl Safe_ports port 21 # ftp
## acl Safe_ports port 443 # https
## acl Safe_ports port 70 # gopher
## acl Safe_ports port 210 # wais
## acl Safe_ports port 1025-65535 # unregistered ports
## acl Safe_ports port 280 # http-mgmt
## acl Safe_ports port 488 # gss-http
## acl Safe_ports port 591 # filemaker
## acl Safe_ports port 777 # multiling http
## acl CONNECT method CONNECT
## http_access allow manager localhost
## http_access deny manager
## http_access deny !Safe_ports
## http_access deny CONNECT !SSL_ports
## http_access allow localnet
## http_access allow localhost
## http_access deny all
## http_port 3128
## maximum_object_size 1024 MB
## cache_dir aufs /var/spool/squid3 5000 24 256
## coredump_dir /var/spool/squid3
## refresh_pattern ^ftp: 1440 20% 10080
## refresh_pattern ^gopher: 1440 0% 1440
## refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
## refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
## refresh_pattern . 0 20% 4320
## refresh_all_ims on
##
## * Restart squid
## ::
##
## sudo service squid3 restart
##
## * Set http_proxy environment variable
## ::
##
## http_proxy=http://your_ip_or_localhost:3128/
##
##
### --end


@ -1,288 +0,0 @@
#!/bin/bash
#
# Test environment creation for devtest.
# This creates the bridge and VM's
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] {JSON-filename}"
echo
echo "Setup a TripleO devtest environment."
echo
echo "Options:"
echo " -b -- Name of an already existing OVS bridge to use for "
echo " the public interface of the seed."
echo " -h -- This help."
echo " -n -- Test environment number to add the seed to."
echo " -s -- SSH private key path to inject into the JSON."
echo " If not supplied, defaults to ~/.ssh/id_rsa_virt_power"
echo " --nodes NODEFILE -- You are supplying your own list of hardware."
echo " A sample nodes definition can be found in the os-cloud-config"
echo " usage documentation."
echo
echo " --bm-networks NETFILE -- You are supplying your own network layout."
echo " The schema for baremetal-network can be found in"
echo " the devtest_setup documentation."
echo " --baremetal-bridge-names BRIDGE_NAMES -- Name(s) of baremetal bridges"
echo " to create and attach to each VM."
echo " This should be a space delimited string"
echo " that contains 'brbm' as the first"
echo " entry so that the seed's ctlplane"
echo " network (also attached to brbm)"
echo " can provision instances."
echo " --keep-vms -- Prevent cleanup of virsh instances for"
echo " undercloud and overcloud"
echo "JSON-filename -- the path to write the environment description to."
echo
echo "Note: This adds a unique key to your authorized_keys file to permit "
echo "virtual-power-management calls to be made."
echo
exit $1
}
NODES_PATH=
NETS_PATH=
NUM=
OVSBRIDGE=
BRIDGE_NAMES=brbm
SSH_KEY=~/.ssh/id_rsa_virt_power
KEEP_VMS=
TEMP=$(getopt -o h,n:,b:,s: -l nodes:,bm-networks:,baremetal-bridge-names:,keep-vms -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
--nodes) NODES_PATH="$2"; shift 2;;
--bm-networks) NETS_PATH="$2"; shift 2;;
--keep-vms) KEEP_VMS=1; shift;;
--baremetal-bridge-names) BRIDGE_NAMES="$2" ; shift 2 ;;
-b) OVSBRIDGE="$2" ; shift 2 ;;
-h) show_options 0;;
-n) NUM="$2" ; shift 2 ;;
-s) SSH_KEY="$2" ; shift 2 ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
### --include
## devtest_testenv
## ===============
#XXX: When updating, sync with the call in devtest.sh #nodocs
## .. note::
## This script is usually called from ``devtest.sh`` as
## ``devtest_testenv.sh $TE_DATAFILE`` so we should declare
## a JSONFILE variable (which equals to the first positional
## argument) explicitly.
## ::
## JSONFILE=${JSONFILE:-$TE_DATAFILE}
### --end
JSONFILE=${1:-''}
EXTRA_ARGS=${2:-''}
if [ -z "$JSONFILE" -o -n "$EXTRA_ARGS" ]; then
show_options 1
fi
### --include
## #. Set HW resources for VMs used as 'baremetal' nodes. NODE_CPU is cpu count,
## NODE_MEM is memory (MB), NODE_DISK is disk size (GB), NODE_ARCH is
## architecture (i386, amd64). NODE_ARCH is used also for the seed VM.
## A note on memory sizing: TripleO images in raw form are currently
## ~2.7GB, which means that a tight node will end up with a thrashing page
## cache during glance -> local + local -> raw operations. This significantly
## impairs performance. Of the four minimum VMs for TripleO simulation, two
## are nova baremetal nodes (seed and undercloud) and these need to be 3G or
## larger. The hypervisor host in the overcloud also needs to be a decent size
## or it cannot host more than one VM. The NODE_DISK is set to support
## building 5 overcloud nodes when not using Ironic. If you are building a
## larger overcloud than this without using Ironic you may need to increase
## NODE_DISK.
## NODE_CNT specifies how many VMs to define using virsh. NODE_CNT
## defaults to 15, or 0 if NODES_PATH is provided.
### --end
## This number is intentionally higher than required as the
## definitions are cheap (until the VM is activated the only cost
## is a small amount of disk space) but growing this number in our
## CI environment is expensive.
### --include
## 32bit VMs
## ::
## NODE_CPU=1 NODE_MEM=3072 NODE_DISK=40 NODE_ARCH=i386
### --end
if [ -n "$NODES_PATH" ]; then
NODE_CNT=${NODE_CNT:-0}
else
NODE_CNT=${NODE_CNT:-15}
fi
NODE_CPU=${NODE_CPU:-1} NODE_MEM=${NODE_MEM:-3072} NODE_DISK=${NODE_DISK:-40} NODE_ARCH=${NODE_ARCH:-i386}
### --include
## For 64bit it is better to create VMs with more memory and storage because of
## increased memory footprint (we suggest 4GB)::
## NODE_CPU=1 NODE_MEM=4096 NODE_DISK=40 NODE_ARCH=amd64
## #. Configure a network for your test environment.
## This configures an openvswitch bridge and teaches libvirt about it.
## ::
setup-network -n "$NUM" -b "$BRIDGE_NAMES"
## #. Configure a seed VM. This VM has a disk image manually configured by
## later scripts, and hosts the statically configured seed which is used
## to bootstrap a full dynamically configured baremetal cloud. The seed VM
## specs can be configured with the environment variables SEED_CPU and
## SEED_MEM (MB). It defaults to the NODE_CPU and NODE_MEM values, since
## the seed is equivalent to an undercloud in resource requirements.
## ::
NUMBERED_BRIDGE_NAMES=
SEED_ARGS="-a $NODE_ARCH"
if [ -n "$NUM" ]; then
SEED_ARGS="$SEED_ARGS -o seed_${NUM}"
fi
if [ -n "$OVSBRIDGE" ]; then
SEED_ARGS="$SEED_ARGS -p $OVSBRIDGE"
fi
for NAME in $BRIDGE_NAMES; do
NUMBERED_BRIDGE_NAMES="$NUMBERED_BRIDGE_NAMES$NAME${NUM} "
done
# remove the last space
NUMBERED_BRIDGE_NAMES=${NUMBERED_BRIDGE_NAMES% }
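The loop-and-trim above is plain parameter expansion. A standalone sketch with hypothetical bridge names and environment number 2 shows the `${var% }` suffix removal:

```shell
# Build a space-separated list of numbered bridge names, then strip the
# trailing space with the ${var% } suffix-removal expansion.
NUM=2
NAMES=
for NAME in brbm brbm-extra; do    # hypothetical bridge names
    NAMES="${NAMES}${NAME}${NUM} "
done
NAMES=${NAMES% }
printf '[%s]\n' "$NAMES"
```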
SEED_CPU=${SEED_CPU:-${NODE_CPU}}
SEED_MEM=${SEED_MEM:-${NODE_MEM}}
## #. Clean up any prior environment. Unless the --keep-vms argument is
## passed to the script, VMs for the undercloud and overcloud are
## destroyed
## ::
if [ -z "$KEEP_VMS" ]; then
if [ -n "$NUM" ]; then
cleanup-env -n $NUM -b "$BRIDGE_NAMES"
else
cleanup-env -b "$BRIDGE_NAMES"
fi
fi
#Now start creating the new environment
setup-seed-vm $SEED_ARGS -c ${SEED_CPU} -m $((1024 * ${SEED_MEM}))
## #. What user will be used to ssh to run virt commands to control our
## emulated baremetal machines.
## ::
SSH_USER=$(whoami)
## #. What IP address to ssh to for virsh operations.
## ::
HOSTIP=${HOSTIP:-192.168.122.1}
## #. If a static SEEDIP is in use, define it here. If not defined it will be
## looked up in the ARP table by the seed MAC address during seed deployment.
## ::
if [ -n "$NETS_PATH" ]; then
# if the value is not set try the default 192.0.2.1.
SEEDIP=$(jq '.["seed"]["ip"] // "192.0.2.1"' -r $NETS_PATH)
else
SEEDIP=${SEEDIP:-''}
fi
## #. Set the default bare metal power manager. By default devtest uses
## nova.virt.baremetal.virtual_power_driver.VirtualPowerManager to
## support a fully virtualized TripleO test environment. You may
## optionally customize this setting if you are using real baremetal
## hardware with the devtest scripts. This setting controls the
## power manager used in both the seed VM and undercloud for Nova Baremetal.
## ::
POWER_MANAGER=${POWER_MANAGER:-'nova.virt.baremetal.virtual_power_driver.VirtualPowerManager'}
## #. Ensure we can ssh into the host machine to turn VMs on and off.
## The private key we create will be embedded in the seed VM, and delivered
## dynamically by heat to the undercloud VM.
## ::
# generate ssh authentication keys if they don't exist
if [ ! -f $SSH_KEY ]; then
ssh-keygen -t rsa -N "" -C virtual-power-key -f $SSH_KEY
fi
# make the local id_rsa_virt_power.pub be in ``.ssh/authorized_keys`` before
# that is copied into images via ``local-config``
if ! grep -qF "$(cat ${SSH_KEY}.pub)" ~/.ssh/authorized_keys; then
cat ${SSH_KEY}.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
fi
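The `grep -qF` guard above is what makes the authorized_keys append idempotent. Running the same pattern twice against a scratch file (with fabricated key material) adds the line only once:

```shell
# Append the key only when a literal (-F) search misses it, so
# repeated runs leave exactly one copy in the file.
KEYFILE=$(mktemp)
KEY='ssh-rsa AAAAB3Nza... virtual-power-key'   # fabricated key material
for run in 1 2; do                             # run twice to show idempotence
    grep -qF "$KEY" "$KEYFILE" || printf '%s\n' "$KEY" >> "$KEYFILE"
done
wc -l < "$KEYFILE"    # one line, despite two passes
rm -f "$KEYFILE"
```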
## #. Wrap this all up into JSON.
## ::
jq "." <<EOF > $JSONFILE
{
"arch":"$NODE_ARCH",
"host-ip":"$HOSTIP",
"power_manager":"$POWER_MANAGER",
"seed-ip":"$SEEDIP",
"ssh-key":"$(cat $SSH_KEY|sed 's,$,\\n,'|tr -d '\n')",
"ssh-user":"$SSH_USER"
}
EOF
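The ssh-key value above is flattened for JSON embedding by the `sed`/`tr` pair: each line gets a literal `\n` appended, then the real newlines are deleted. Applied to a two-line stand-in for the key file:

```shell
# Append a literal \n to every line, then delete the real newlines,
# leaving one JSON-embeddable string.
printf 'line1\nline2\n' | sed 's,$,\\n,' | tr -d '\n'
echo    # terminate the output line for readability
```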
## #. If you have an existing bare metal cloud network to use, use it. See
## `baremetal-network` section in :ref:`devtest-environment-configuration`
## for more details
## ::
devtest_update_network.sh ${NETS_PATH:+--bm-networks $NETS_PATH} $JSONFILE
## #. If you have an existing set of nodes to use, use them.
## ::
if [ -n "$NODES_PATH" ]; then
JSON=$(jq -s '.[0].nodes=.[1] | .[0]' $JSONFILE $NODES_PATH)
echo "${JSON}" > $JSONFILE
else
## #. Create baremetal nodes for the test cluster. If the required number of
## VMs changes in future, you can run cleanup-env and then recreate with
## more nodes.
## ::
create-nodes $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH $NODE_CNT $SSH_USER $HOSTIP $JSONFILE "$NUMBERED_BRIDGE_NAMES"
### --end
fi


@ -1,448 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
BUILD_ONLY=
DEBUG_LOGGING=
HEAT_ENV=
FLAVOR="baremetal"
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Deploys a baremetal cloud via heat."
echo
echo "Options:"
echo " -h -- this help"
echo " -c -- re-use existing source/images if they exist."
echo " --build-only -- build the needed images but don't deploy them."
echo " --debug-logging -- Turn on debug logging in the undercloud. Sets"
echo " both OS_DEBUG_LOGGING and the heat Debug parameter."
echo " --heat-env -- path to a JSON heat environment file."
echo " Defaults to \$TRIPLEO_ROOT/undercloud-env.json."
echo " --flavor -- flavor to use for the undercloud. Defaults"
echo " to 'baremetal'."
echo
exit $1
}
TEMP=$(getopt -o c,h -l build-only,debug-logging,heat-env:,flavor:,help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-c) USE_CACHE=1; shift 1;;
--build-only) BUILD_ONLY="1"; shift 1;;
--debug-logging)
DEBUG_LOGGING="1"
export OS_DEBUG_LOGGING="1"
shift 1
;;
--heat-env) HEAT_ENV="$2"; shift 2;;
--flavor) FLAVOR="$2"; shift 2;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
set -x
USE_CACHE=${USE_CACHE:-0}
TE_DATAFILE=${1:?"A test environment description is required as \$1."}
UNDERCLOUD_DIB_EXTRA_ARGS=${UNDERCLOUD_DIB_EXTRA_ARGS:-'rabbitmq-server'}
if [ "${USE_MARIADB:-}" = 1 ] ; then
UNDERCLOUD_DIB_EXTRA_ARGS="$UNDERCLOUD_DIB_EXTRA_ARGS mariadb-rpm"
fi
### --include
## devtest_undercloud
## ==================
## #. Add extra elements for Undercloud UI
## ::
if [ "$USE_UNDERCLOUD_UI" -ne 0 ] ; then
UNDERCLOUD_DIB_EXTRA_ARGS="$UNDERCLOUD_DIB_EXTRA_ARGS ceilometer-collector \
ceilometer-api ceilometer-agent-central ceilometer-agent-notification \
ceilometer-undercloud-config horizon nova-ironic"
fi
## #. Specify a client-side timeout in minutes for creating or updating the
## undercloud Heat stack.
## ::
UNDERCLOUD_STACK_TIMEOUT=${UNDERCLOUD_STACK_TIMEOUT:-60}
## #. Create your undercloud image. This is the image that the seed nova
## will deploy to become the baremetal undercloud. $UNDERCLOUD_DIB_EXTRA_ARGS is
## meant to be used to pass additional arguments to disk-image-create.
## ::
NODE_ARCH=$(os-apply-config -m $TE_DATAFILE --key arch --type raw)
if [ ! -e $TRIPLEO_ROOT/undercloud.qcow2 -o "$USE_CACHE" == "0" ] ; then #nodocs
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/undercloud \
ntp baremetal boot-stack os-collect-config dhcp-all-interfaces \
neutron-dhcp-agent $DIB_COMMON_ELEMENTS $UNDERCLOUD_DIB_EXTRA_ARGS 2>&1 | \
tee $TRIPLEO_ROOT/dib-undercloud.log
### --end
fi
if [ -n "$BUILD_ONLY" ]; then
exit 0
fi
### --include
## #. If you wanted to build the image and run it elsewhere, you can stop at
## this point and head onto the overcloud image building.
## #. Load the undercloud image into Glance:
## ::
UNDERCLOUD_ID=$(load-image -d $TRIPLEO_ROOT/undercloud.qcow2)
## #. Set the public interface of the undercloud network node:
## ::
NeutronPublicInterface=${NeutronPublicInterface:-'nic1'}
## #. Set the NTP server for the undercloud.
##    ::
UNDERCLOUD_NTP_SERVER=${UNDERCLOUD_NTP_SERVER:-''}
## #. Create secrets for the cloud. The secrets will be written to a file
## ($TRIPLEO_ROOT/tripleo-undercloud-passwords by default)
## that you need to source into your shell environment.
##
## .. note::
##
## You can also make or change these later and
## update the heat stack definition to inject them - as long as you also
## update the keystone recorded password.
##
## .. note::
##
## There will be a window between updating keystone and
## instances where they will disagree and service will be down. Instead
## consider adding a new service account and changing everything across
## to it, then deleting the old account after the cluster is updated.
##
## ::
### --end
# NOTE(tchaypo): We used to write these passwords in $CWD; so check to see if the
# file exists there first. As well as providing backwards compatibility, this
# allows for people to run multiple test environments on the same machine - just
# make sure to have a different directory for running the scripts for each
# different environment you wish to use.
#
if [ -e tripleo-undercloud-passwords ]; then
echo "Re-using existing passwords in $PWD/tripleo-undercloud-passwords"
# Add any new passwords since the file was generated
setup-undercloud-passwords tripleo-undercloud-passwords
source tripleo-undercloud-passwords
else
### --include
setup-undercloud-passwords $TRIPLEO_ROOT/tripleo-undercloud-passwords
source $TRIPLEO_ROOT/tripleo-undercloud-passwords
fi #nodocs
## #. Export UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD to your environment
## so it can be applied to the SNMPd of all Overcloud nodes.
NEW_JSON=$(jq '.undercloud.ceilometer_snmpd_password="'${UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD}'"' $TE_DATAFILE)
echo $NEW_JSON > $TE_DATAFILE
## #. Pull out needed variables from the test environment definition.
## ::
POWER_MANAGER=$(os-apply-config -m $TE_DATAFILE --key power_manager --type raw)
POWER_KEY=$(os-apply-config -m $TE_DATAFILE --key ssh-key --type raw)
POWER_HOST=$(os-apply-config -m $TE_DATAFILE --key host-ip --type raw)
POWER_USER=$(os-apply-config -m $TE_DATAFILE --key ssh-user --type raw)
## #. Wait for the BM cloud to register BM nodes with the scheduler::
wait_for -w 60 --delay 1 -- wait_for_hypervisor_stats
## #. We need an environment file to store the parameters we're going to give
## heat.::
HEAT_ENV=${HEAT_ENV:-"${TRIPLEO_ROOT}/undercloud-env.json"}
## #. Read the heat env in for updating.::
if [ -e "${HEAT_ENV}" ]; then
### --end
if [ "$(stat -c %a ${HEAT_ENV})" != "600" ]; then
echo "Error: Heat environment cache \"${HEAT_ENV}\" not set to permissions of 0600."
# We should exit 1 so that users from before the permissions
# requirement don't have their HEAT_ENV files ignored in a nearly silent way.
exit 1
fi
### --include
ENV_JSON=$(cat "${HEAT_ENV}")
else
ENV_JSON='{"parameters":{}}'
fi
## #. Detect if we are deploying with a VLAN for API endpoints / floating IPs.
## This is done by looking for a 'public' network in Neutron, and if found
## we pull out the VLAN id and pass that into Heat, as well as using a
## VLAN-enabled Heat template.
## ::
if (neutron net-list | grep -q public); then
VLAN_ID=$(neutron net-show public | awk '/provider:segmentation_id/ { print $4 }')
else
VLAN_ID=
fi
## #. Nova-baremetal and Ironic require different Heat templates
## and different options.
## ::
if [ -n "$VLAN_ID" ]; then
HEAT_UNDERCLOUD_TEMPLATE="undercloud-vm-ironic-vlan.yaml"
ENV_JSON=$(jq .parameters.NeutronPublicInterfaceTag=\"${VLAN_ID}\" <<< $ENV_JSON)
# This should be in the heat template, but see
# https://bugs.launchpad.net/heat/+bug/1336656
# Note that this will break if there is more than one subnet, as if
# more reason to fix the bug were needed :).
PUBLIC_SUBNET_ID=$(neutron net-show public | awk '/subnets/ { print $4 }')
VLAN_GW=$(neutron subnet-show $PUBLIC_SUBNET_ID | awk '/gateway_ip/ { print $4}')
BM_VLAN_CIDR=$(neutron subnet-show $PUBLIC_SUBNET_ID | awk '/cidr/ { print $4}')
ENV_JSON=$(jq .parameters.NeutronPublicInterfaceDefaultRoute=\"${VLAN_GW}\" <<< $ENV_JSON)
else
HEAT_UNDERCLOUD_TEMPLATE="undercloud-vm-ironic.yaml"
fi
ENV_JSON=$(jq .parameters.IronicPassword=\"${UNDERCLOUD_IRONIC_PASSWORD}\" <<< $ENV_JSON)
REGISTER_SERVICE_OPTS="--ironic-password $UNDERCLOUD_IRONIC_PASSWORD"
STACKNAME_UNDERCLOUD=${STACKNAME_UNDERCLOUD:-'undercloud'}
## #. Choose whether to deploy or update. Use stack-update to update::
## HEAT_OP=stack-create
## ::
if heat stack-show $STACKNAME_UNDERCLOUD > /dev/null; then
HEAT_OP=stack-update
if (heat stack-show $STACKNAME_UNDERCLOUD | grep -q FAILED); then
echo "Updating a failed stack. This is a new ability and may cause problems." >&2
fi
else
HEAT_OP=stack-create
fi
## #. Set parameters we need to deploy a baremetal undercloud::
ENV_JSON=$(jq '.parameters = {
"MysqlInnodbBufferPoolSize": 100
} + .parameters + {
"AdminPassword": "'"${UNDERCLOUD_ADMIN_PASSWORD}"'",
"AdminToken": "'"${UNDERCLOUD_ADMIN_TOKEN}"'",
"SnmpdReadonlyUserPassword": "'"${UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD}"'",
"GlancePassword": "'"${UNDERCLOUD_GLANCE_PASSWORD}"'",
"HeatPassword": "'"${UNDERCLOUD_HEAT_PASSWORD}"'",
"NovaPassword": "'"${UNDERCLOUD_NOVA_PASSWORD}"'",
"NeutronPassword": "'"${UNDERCLOUD_NEUTRON_PASSWORD}"'",
"NeutronPublicInterface": "'"${NeutronPublicInterface}"'",
"undercloudImage": "'"${UNDERCLOUD_ID}"'",
"BaremetalArch": "'"${NODE_ARCH}"'",
"PowerSSHPrivateKey": "'"${POWER_KEY}"'",
"NtpServer": "'"${UNDERCLOUD_NTP_SERVER}"'",
"Flavor": "'"${FLAVOR}"'"
}' <<< $ENV_JSON)
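The jq expression above layers three objects left to right, so the hard default for `MysqlInnodbBufferPoolSize` is overridden by any value already cached in `.parameters`, which is in turn overridden by the freshly computed values. Python dict merging has the same right-most-wins precedence:

```python
# Later dicts win, matching jq's right-biased `+` operator:
#   .parameters = {defaults} + .parameters + {computed}
defaults = {"MysqlInnodbBufferPoolSize": 100}
existing = {"MysqlInnodbBufferPoolSize": 200, "NtpServer": "old.ntp"}
computed = {"NtpServer": "pool.ntp.org", "AdminPassword": "secret"}

parameters = {**defaults, **existing, **computed}
print(parameters)
```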
### --end
if [ "$DEBUG_LOGGING" = "1" ]; then
ENV_JSON=$(jq '.parameters = .parameters + {
"Debug": "True"
}' <<< $ENV_JSON)
fi
### --include
# Add Ceilometer to the env only if USE_UNDERCLOUD_UI is specified
if [ "$USE_UNDERCLOUD_UI" -ne 0 ] ; then
ENV_JSON=$(jq '.parameters = .parameters + {
"CeilometerPassword": "'"${UNDERCLOUD_CEILOMETER_PASSWORD}"'"
}' <<< $ENV_JSON)
fi
## #. Save the finished environment file.::
jq . > "${HEAT_ENV}" <<< $ENV_JSON
chmod 0600 "${HEAT_ENV}"
## #. Add Keystone certs/key into the environment file.::
generate-keystone-pki --heatenv $HEAT_ENV
## #. Deploy an undercloud.
## ::
make -C $TRIPLEO_ROOT/tripleo-heat-templates $HEAT_UNDERCLOUD_TEMPLATE
heat $HEAT_OP -e $HEAT_ENV \
-t 360 \
-f $TRIPLEO_ROOT/tripleo-heat-templates/$HEAT_UNDERCLOUD_TEMPLATE \
$STACKNAME_UNDERCLOUD
## You can watch the console via ``virsh``/``virt-manager`` to observe the PXE
## boot/deploy process. After the deploy is complete, it will reboot into the
## image.
##
## #. Get the undercloud IP from ``nova list``
## ::
echo "Waiting for the undercloud stack to be ready" #nodocs
# Make the timeout 60 minutes, matching the Heat stack-create default.
wait_for_stack_ready -w $(($UNDERCLOUD_STACK_TIMEOUT * 60 )) 10 undercloud
UNDERCLOUD_CTL_IP=$(nova list | grep ctlplane | sed -e "s/.*=\\([0-9.]*\\).*/\1/")
## #. If we're deploying with a public VLAN we must use it, not the control plane
## network (which we may not even have access to) to ping and configure things.
## ::
if [ -n "$VLAN_ID" ]; then
UNDERCLOUD_IP=$(heat output-show undercloud PublicIP|sed 's/^"\(.*\)"$/\1/')
else
UNDERCLOUD_IP=$UNDERCLOUD_CTL_IP
fi
## #. We don't (yet) preserve ssh keys on rebuilds.
## ::
ssh-keygen -R $UNDERCLOUD_IP
ssh-keygen -R $UNDERCLOUD_CTL_IP
## #. Exclude the undercloud from proxies:
## ::
set +u #nodocs
export no_proxy=$no_proxy,$UNDERCLOUD_IP
set -u #nodocs
## #. Export the undercloud endpoint and credentials to your test environment.
## ::
UNDERCLOUD_ENDPOINT="http://$UNDERCLOUD_IP:5000/v2.0"
NEW_JSON=$(jq '.undercloud.password="'${UNDERCLOUD_ADMIN_PASSWORD}'" | .undercloud.endpoint="'${UNDERCLOUD_ENDPOINT}'" | .undercloud.endpointhost="'${UNDERCLOUD_IP}'"' $TE_DATAFILE)
echo $NEW_JSON > $TE_DATAFILE
## #. Source the undercloud configuration:
## ::
source $TRIPLEO_ROOT/tripleo-incubator/undercloudrc
## #. Perform setup of your undercloud.
## ::
init-keystone -o $UNDERCLOUD_CTL_IP -t $UNDERCLOUD_ADMIN_TOKEN \
-e admin@example.com -p $UNDERCLOUD_ADMIN_PASSWORD \
--public $UNDERCLOUD_IP --no-pki-setup
# Creating these roles to be used by tenants using swift
openstack role create swiftoperator
openstack role create ResellerAdmin
# Create service endpoints and optionally include Ceilometer for UI support
ENDPOINT_LIST="--glance-password $UNDERCLOUD_GLANCE_PASSWORD
--heat-password $UNDERCLOUD_HEAT_PASSWORD
--neutron-password $UNDERCLOUD_NEUTRON_PASSWORD
--nova-password $UNDERCLOUD_NOVA_PASSWORD
--tuskar-password $UNDERCLOUD_TUSKAR_PASSWORD"
if [ "$USE_UNDERCLOUD_UI" -ne 0 ] ; then
ENDPOINT_LIST="$ENDPOINT_LIST --ceilometer-password $UNDERCLOUD_CEILOMETER_PASSWORD"
fi
setup-endpoints $UNDERCLOUD_CTL_IP $ENDPOINT_LIST $REGISTER_SERVICE_OPTS \
--public $UNDERCLOUD_IP
openstack role create heat_stack_user
user-config
BM_NETWORK_CIDR=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.cidr --type raw --key-default '192.0.2.0/24')
if [ -n "$VLAN_ID" ]; then
# No ctl plane gateway - public net gateway is needed.
# XXX (lifeless) - Neutron still configures one, first position in the subnet.
BM_NETWORK_GATEWAY=
else
# Use a control plane gateway.
BM_NETWORK_GATEWAY=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.gateway-ip --type raw --key-default '192.0.2.1')
fi
BM_NETWORK_UNDERCLOUD_RANGE_START=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.undercloud.range-start --type raw --key-default '192.0.2.21')
BM_NETWORK_UNDERCLOUD_RANGE_END=$(os-apply-config -m $TE_DATAFILE --key baremetal-network.undercloud.range-end --type raw --key-default '192.0.2.40')
UNDERCLOUD_NAMESERVER=$(os-apply-config -m $TE_DATAFILE --key undercloud.nameserver --type netaddress --key-default "${UNDERCLOUD_NAMESERVER:-}")
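Each of the `os-apply-config --key ... --key-default ...` calls above reads one (possibly dotted) key out of the test-environment JSON and falls back to a default when it is absent. A minimal Python sketch of that lookup (the `lookup` helper is illustrative, not part of os-apply-config):

```python
import json

def lookup(data, dotted_key, default):
    """Fetch a dotted key such as 'baremetal-network.gateway-ip',
    returning `default` if any path component is missing."""
    node = data
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

te = json.loads('{"baremetal-network": {"cidr": "10.0.0.0/24"}}')
cidr = lookup(te, "baremetal-network.cidr", "192.0.2.0/24")       # present
gateway = lookup(te, "baremetal-network.gateway-ip", "192.0.2.1")  # default
```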
NETWORK_JSON=$(mktemp)
jq "." <<EOF > $NETWORK_JSON
{
"physical": {
"gateway": "$BM_NETWORK_GATEWAY",
"metadata_server": "$UNDERCLOUD_CTL_IP",
"cidr": "$BM_NETWORK_CIDR",
"allocation_start": "$BM_NETWORK_UNDERCLOUD_RANGE_START",
"allocation_end": "$BM_NETWORK_UNDERCLOUD_RANGE_END",
"name": "ctlplane",
"nameserver": "$UNDERCLOUD_NAMESERVER"
}
}
EOF
setup-neutron -n $NETWORK_JSON
rm $NETWORK_JSON
if [ -n "$VLAN_ID" ]; then
BM_VLAN_START=$(jq -r '.["baremetal-network"].undercloud.public_vlan.start' $TE_DATAFILE)
BM_VLAN_END=$(jq -r '.["baremetal-network"].undercloud.public_vlan.finish' $TE_DATAFILE)
PUBLIC_NETWORK_JSON=$(mktemp)
jq "." <<EOF > $PUBLIC_NETWORK_JSON
{
"physical": {
"gateway": "$VLAN_GW",
"metadata_server": "$UNDERCLOUD_CTL_IP",
"cidr": "$BM_VLAN_CIDR",
"allocation_start": "$BM_VLAN_START",
"allocation_end": "$BM_VLAN_END",
"name": "public",
"nameserver": "$UNDERCLOUD_NAMESERVER",
"segmentation_id": "$VLAN_ID",
"physical_network": "ctlplane",
"enable_dhcp": false
}
}
EOF
setup-neutron -n $PUBLIC_NETWORK_JSON
fi
## #. Nova runs up against the default quotas, so override them to
## allow unlimited cores, instances and ram.
## ::
nova quota-update --cores -1 --instances -1 --ram -1 $(openstack project show admin | awk '$2=="id" {print $4}')
## #. Register two baremetal nodes with your undercloud.
## ::
setup-baremetal --service-host undercloud --nodes <(jq '.nodes - [.nodes[0]]' $TE_DATAFILE)
### --end
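Note that jq's `.nodes - [.nodes[0]]` above is set-style subtraction: it removes every node equal to the first entry, not just the entry in first position. A positional slice behaves differently when duplicate node definitions exist:

```python
nodes = [{"memory": 2048}, {"memory": 4096}, {"memory": 2048}]

# jq: .nodes - [.nodes[0]]  (drops ALL entries equal to the first)
jq_style = [n for n in nodes if n != nodes[0]]

# A positional slice only drops the first entry:
slice_style = nodes[1:]

print(jq_style, slice_style)
```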


@ -1,66 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
function show_options {
echo "Usage: $SCRIPT_NAME --bm-networks NETFILE {JSON-filename}"
echo
echo "Reads the baremetal-network description in NETFILE and writes it into JSON-filename"
echo
echo "For instance, to read the file named bm-networks.json and update testenv.json:"
echo " ${SCRIPT_NAME} --bm-networks bm-networks.json testenv.json "
echo
echo "Options:"
echo " -h -- This help."
echo " --bm-networks NETFILE -- You are supplying your own network layout."
echo " The schema for baremetal-network can be found in"
echo " the devtest_setup documentation."
echo " For backwards compatibility, this argument is optional;"
echo " but if it's not provided this script does nothing."
echo
echo "JSON-filename -- the path to write the environment description to."
echo
exit $1
}
NETS_PATH=
TEMP=$(getopt -o h -l help,bm-networks: -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
--bm-networks) NETS_PATH="$2"; shift 2;;
-h|--help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
### --include
## devtest_update_network
## ======================
## This script updates the baremetal networks definition in the
## ``$TE_DATAFILE``.
### --end
JSONFILE=${1:-''}
EXTRA_ARGS=${2:-''}
if [ -z "$JSONFILE" -o -n "$EXTRA_ARGS" ]; then
show_options 1
fi
if [ -n "$NETS_PATH" ]; then
JSON=$(jq -s '.[0]["baremetal-network"]=.[1] | .[0]' $JSONFILE $NETS_PATH)
echo "${JSON}" > $JSONFILE
fi
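The `jq -s` slurp above reads both files into a two-element array, grafts the network description onto the environment, and writes the result back. A Python equivalent, shown against temporary files (the `update_network` name is illustrative):

```python
import json
import os
import tempfile

def update_network(env_path, nets_path):
    """Set the 'baremetal-network' key of the environment file to the
    full contents of the network-description file, in place."""
    with open(env_path) as f:
        env = json.load(f)
    with open(nets_path) as f:
        env["baremetal-network"] = json.load(f)
    with open(env_path, "w") as f:
        json.dump(env, f, indent=2)

# Demonstration with temp files standing in for testenv.json and
# bm-networks.json.
env_file = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump({"host-ip": "192.0.2.1"}, env_file)
env_file.close()
nets_file = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump({"cidr": "192.0.2.0/24"}, nets_file)
nets_file.close()

update_network(env_file.name, nets_file.name)
with open(env_file.name) as f:
    result = json.load(f)
os.unlink(env_file.name)
os.unlink(nets_file.name)
```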


@ -1,215 +0,0 @@
#!/bin/bash
#
# Variable definition for devtest.
### --include
## devtest_variables
## =================
## #. The devtest scripts require access to the libvirt system URI,
## qemu:///system, which is exported as ``LIBVIRT_DEFAULT_URI`` below.
## This configuration is necessary for consistency, as later steps assume
## qemu:///system is being used. If you must run against a different
## libvirt URI, export ``LIBVIRT_DEFAULT_URI`` before sourcing this
## script, but note that you may encounter errors.
## ::
export LIBVIRT_DEFAULT_URI=${LIBVIRT_DEFAULT_URI:-"qemu:///system"}
## #. The VMs created by devtest will use a virtio network device by
## default. This can be overridden to use a different network driver for
## interfaces instead, such as ``e1000`` if required.
## ::
export LIBVIRT_NIC_DRIVER=${LIBVIRT_NIC_DRIVER:-"virtio"}
## #. By default the node volumes will be created in a volume pool named
## 'default'. This variable can be used to specify a custom volume
## pool. This is useful in scenarios where the default volume pool cannot
## accommodate the storage requirements of the nodes.
## Note that this variable only changes the volume pool for the nodes.
## The seed image will still end up in /var/lib/libvirt/images.
## ::
export LIBVIRT_VOL_POOL=${LIBVIRT_VOL_POOL:-"default"}
## #. The tripleo-incubator tools must be available at
## ``$TRIPLEO_ROOT/tripleo-incubator``. See the :doc:`devtest` documentation
## which describes how to set that up correctly.
## ::
export TRIPLEO_ROOT=${TRIPLEO_ROOT:-} #nodocs
### --end
## NOTE(gfidente): Keep backwards compatibility by setting TRIPLEO_ROOT
## to ~/.cache/tripleo if the var is found empty and the dir exists.
if [ -z "$TRIPLEO_ROOT" -a -d ~/.cache/tripleo ]; then
echo "WARNING: Defaulting TRIPLEO_ROOT to ~/.cache/tripleo"
echo " Other environment variables are based on \$TRIPLEO_ROOT so"
echo " if you intend changing it, please source devtest_variables.sh"
echo " again afterwards."
TRIPLEO_ROOT=~/.cache/tripleo
fi
if [ -z "$TRIPLEO_ROOT" -o ! -d $TRIPLEO_ROOT/tripleo-incubator/scripts ]; then
echo 'WARNING: Cannot find $TRIPLEO_ROOT/tripleo-incubator/scripts'
echo ' To use devtest you must export the TRIPLEO_ROOT variable and have cloned tripleo-incubator within that directory.'
echo ' Check http://docs.openstack.org/developer/tripleo-incubator/devtest.html#initial-checkout for instructions.'
fi
### --include
if [ -n "$TRIPLEO_ROOT" ]; then
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/dib-utils/bin:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH
fi
## #. It's possible to deploy the Undercloud without a UI and its dependent
## elements. The UI-dependent image elements in the Undercloud are Horizon,
## Tuskar-UI (not yet included; the Tuskar-UI element is unfinished) and
## Ceilometer. In the Overcloud it is the SNMPd image element on every node.
## ::
export USE_UNDERCLOUD_UI=${USE_UNDERCLOUD_UI:-1}
## #. Set a list of image elements that should be included in all image builds.
## Note that stackuser is only for debugging support - it is not suitable for
## a production network. This is also the place to include elements such as
## pip-cache or pypi-openstack if you intend to use them.
## ::
export DIB_COMMON_ELEMENTS=${DIB_COMMON_ELEMENTS:-"stackuser common-venv use-ephemeral"}
## #. If you have a specific Ubuntu mirror you want to use when building
## images.
## ::
# export DIB_COMMON_ELEMENTS="${DIB_COMMON_ELEMENTS} apt-sources"
# export DIB_APT_SOURCES=/path/to/a/sources.list to use.
## #. Choose the deploy image element to be used. `deploy-kexec` will relieve you of
## the need to wait for long hardware POST times, however it has known stability
## issues (please see https://bugs.launchpad.net/diskimage-builder/+bug/1240933).
## If stability is preferred over speed, use the `deploy-ironic` image
## element.
## ::
export DEPLOY_IMAGE_ELEMENT=${DEPLOY_IMAGE_ELEMENT:-deploy-ironic}
export DEPLOY_NAME=deploy-ramdisk-ironic
## #. A messaging backend is required for the seed, undercloud, and overcloud
## control node. It is not required for overcloud computes. The backend is
## set through the ``*EXTRA_ARGS``.
## rabbitmq-server is enabled by default. Another option is qpidd.
## For overclouds we also use ``*EXTRA_ARGS`` to choose a cinder backend, set
## to cinder-tgt by default.
## ::
export SEED_DIB_EXTRA_ARGS=${SEED_DIB_EXTRA_ARGS:-"rabbitmq-server"}
export UNDERCLOUD_DIB_EXTRA_ARGS=${UNDERCLOUD_DIB_EXTRA_ARGS:-"rabbitmq-server"}
export OVERCLOUD_CONTROL_DIB_EXTRA_ARGS=${OVERCLOUD_CONTROL_DIB_EXTRA_ARGS:-'rabbitmq-server cinder-tgt'}
## #. The block storage nodes are deployed with the cinder-tgt backend by
## default too. Alternatives are cinder-lio and cinder-volume-nfs. Make sure
## to check the README files of these elements to configure them as needed.
## ::
export OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS=${OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS:-'cinder-tgt'}
## #. Set distribution used for VMs (fedora, opensuse, ubuntu). If unset, this
## will match TRIPLEO_OS_DISTRO, which is automatically gathered by devtest
## and represents your build host distro (where the devtest code runs).
##
## For Fedora, set SELinux permissive mode (currently the default when using Fedora)::
##
## export NODE_DIST="fedora selinux-permissive"
## For openSUSE, use::
##
## export NODE_DIST="opensuse"
## For Ubuntu, use::
##
## export NODE_DIST="ubuntu"
### --end
source $(dirname ${BASH_SOURCE[0]:-$0})/set-os-type
if [ -z "${NODE_DIST:-}" ]; then
if [ "$TRIPLEO_OS_DISTRO" = "fedora" ]; then
export NODE_DIST="fedora selinux-permissive"
else
export NODE_DIST=$TRIPLEO_OS_DISTRO
fi
fi
### --include
## #. Set the number of baremetal nodes to create in the virtual test
## environment.
## ::
# Node definitions are cheap but redeploying testenv's is not.
# Set NODE_CNT high enough for typical CI and Dev deployments for the
# foreseeable future
export NODE_CNT=${NODE_CNT:-15}
## #. Set size of root partition on our disk (GB). The remaining disk space
## will be used for the persistent ephemeral disk to store node state.
## ::
export ROOT_DISK=${ROOT_DISK:-10}
## #. Set the disk bus type. The default value is 'sata'. But if the VM is going
## to be migrated or saved to disk, then 'scsi' would be more appropriate
## for libvirt.
## ::
export LIBVIRT_DISK_BUS_TYPE=${LIBVIRT_DISK_BUS_TYPE:-"sata"}
## #. Set number of compute, control and block storage nodes for the overcloud.
## Only a value of 1 for OVERCLOUD_CONTROLSCALE is currently supported.
## ::
export OVERCLOUD_COMPUTESCALE=${OVERCLOUD_COMPUTESCALE:-1}
export OVERCLOUD_CONTROLSCALE=${OVERCLOUD_CONTROLSCALE:-1}
export OVERCLOUD_BLOCKSTORAGESCALE=${OVERCLOUD_BLOCKSTORAGESCALE:-0}
## #. These optional variables can be set to remove dead nodes. See the merge.py
## help for details of use. These example lines would remove Compute1 and
## Compute3, and Control2 and Control4.
## ::
## export OVERCLOUD_COMPUTE_BLACKLIST=1,3
## export OVERCLOUD_CONTROL_BLACKLIST=2,4
## #. You need to make the tripleo image elements accessible to diskimage-builder:
## ::
export ELEMENTS_PATH=${ELEMENTS_PATH:-"$TRIPLEO_ROOT/tripleo-image-elements/elements"}
## #. Set the datafile to use to describe the 'hardware' in the devtest
## environment. If this file already exists, you should skip running
## devtest_testenv.sh as it writes to the file
## ::
export TE_DATAFILE=${TE_DATAFILE:-"$TRIPLEO_ROOT/testenv.json"}
## #. By default Percona XtraDB Cluster is used when installing the MySQL
## database; set ``USE_MARIADB=1`` if you want to use MariaDB instead.
## MariaDB is used by default on Fedora-based distributions because MariaDB
## packages are included directly in the distribution.
## ::
if [[ $NODE_DIST =~ .*(fedora|rhel|centos).* ]] ; then
export USE_MARIADB=${USE_MARIADB:-1}
else
export USE_MARIADB=0
fi
## #. You can choose between using the old-style merge.py script for putting
## the templates together or the newer way of doing it directly via Heat.
## ::
export USE_MERGEPY=${USE_MERGEPY:-0}
### --end


@ -1,60 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Extract documentation from our demonstration scripts."
echo
echo "This will create devtest.rst from devtest.sh."
echo
echo "Options:"
echo " -h -- Show this help screen."
echo
exit 0
}
TEMP=`getopt -o h -l help -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h | --help) show_options;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
EXTRA=${1:-""}
for script in $(ls $SCRIPT_HOME/../scripts/devtest*.sh) ; do
bname=${script##*/}
noext=${bname%.sh}
awk -f $SCRIPT_HOME/extract-docs.awk $script > $SCRIPT_HOME/../doc/source/$noext.rst
done


@ -1,57 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#################
#
# Read a shell script with embedded comments and:
# - discard undesired portions
# - strip leading '## ' from lines
# - indent other non-empty lines by 8 spaces
# - output the result to a nominated file
# This allows a script to have copious documentation but also be presented as a
# markdown / ReST file.
#
/^### --include/ {
for (;;) {
if ((getline line) <= 0)
unexpected_eof()
if (line ~ /^### --end/)
break
if (match(line, ".* #nodocs$"))
continue
if (substr(line, 0, 3) == "## ") {
line = substr(line, 4)
} else if (line != "") {
line = " "line
}
print line > "/dev/stdout"
}
}
function unexpected_eof() {
printf("%s:%d: unexpected EOF or error\n", FILENAME, FNR) > "/dev/stderr"
exit 1
}
END {
if (curfile)
close(curfile)
}
# vim:sw=4:sts=4:expandtab:textwidth=79
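A Python sketch of the same extraction rules, handy for checking one's reading of the awk (it only handles complete `--include`/`--end` pairs and uses a slightly looser `#nodocs` match):

```python
def extract_docs(lines):
    """Replicate extract-docs.awk: keep text between '### --include' and
    '### --end', drop '#nodocs' lines, strip '## ' doc prefixes, and
    indent remaining non-empty code lines by 8 spaces for ReST."""
    out = []
    in_block = False
    for line in lines:
        if line.startswith("### --include"):
            in_block = True
            continue
        if line.startswith("### --end"):
            in_block = False
            continue
        if not in_block or line.endswith("#nodocs"):
            continue
        if line.startswith("## "):
            out.append(line[3:])
        elif line:
            out.append(" " * 8 + line)
        else:
            out.append(line)
    return out

script = ["### --include", "## Title", "## =====", "echo hi",
          "secret #nodocs", "### --end"]
docs = extract_docs(script)
```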


@ -1,20 +0,0 @@
#!/bin/sh
set -eu
PATH=$PATH:/usr/sbin:/sbin
if [ "$#" -lt 1 ]; then
echo "Usage: $(basename $0) <vm-name>"
exit 1
fi
VMNAME="$1"
vms=$(sudo virsh list --all | grep "$VMNAME" | awk '{ print $2 }')
macs=""
for vm in $vms ; do
macs="$(sudo virsh dumpxml $vm | grep "mac address" | head -1 | awk -F "'" '{ print $2 }') $macs"
done
echo $macs


@ -1,84 +0,0 @@
#!/bin/bash
set -eu
## This script should die: https://bugs.launchpad.net/tripleo/+bug/1195046.
# generate ssh key directory if it doesn't exist
if [ ! -d ~/.ssh ]; then
install --mode 700 -d ~/.ssh
fi
# generate ssh authentication keys if they don't exist
if [ ! -f ~/.ssh/id_rsa ]; then
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
fi
# Ensure the local id_rsa.pub is in .ssh/authorized_keys before that is copied
# into images via local-config. We are opening up ssh access to the host with
# a key that the user might not want, we should find another way to place the
# key onto the image. See https://bugs.launchpad.net/tripleo/+bug/1280052 for
# more details.
if ! grep "$(cat ~/.ssh/id_rsa.pub)" ~/.ssh/authorized_keys >/dev/null; then
echo "Adding public key to ~/.ssh/authorized_keys"
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
fi
# Make sure permissions are correct for ssh authorized_keys file.
chmod 0600 ~/.ssh/authorized_keys
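The grep-then-append guard above is a general idempotent-append idiom: running it twice leaves a single copy of the key. Sketched generically in Python:

```python
import os
import tempfile

def append_if_missing(path, line):
    """Append `line` to the file only if it is not already present,
    mirroring the grep guard on authorized_keys above."""
    existing = ""
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read()
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")

key = "ssh-rsa AAAA... user@host"  # placeholder public key
tmp = tempfile.NamedTemporaryFile("w", suffix="_keys", delete=False)
tmp.close()
append_if_missing(tmp.name, key)
append_if_missing(tmp.name, key)  # second call is a no-op
with open(tmp.name) as f:
    contents = f.read()
os.unlink(tmp.name)
```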
# packages
if [ "$TRIPLEO_OS_DISTRO" = "unsupported" ]; then
echo This script has not been tested outside of Fedora, RHEL/CentOS, and Ubuntu variants.
echo Make sure you have installed all the needed dependencies or subsequent steps will fail.
fi
if [ "$TRIPLEO_OS_FAMILY" = "debian" ]; then
if $(grep -Eqs 'Ubuntu 12.04' /etc/lsb-release); then
# Add the Ubuntu Cloud Archive repository only if not present: bug https://bugs.launchpad.net/tripleo/+bug/1212237
# Ubuntu 12.04 has a too-old libvirt-bin, but a newer one is present in the Ubuntu Cloud Archive.
sudo -E apt-get update
DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes ubuntu-cloud-keyring
(grep -Eqs "precise-updates/grizzly" /etc/apt/sources.list.d/cloud-archive.list) || echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
' | sudo tee -a /etc/apt/sources.list.d/cloud-archive.list
# Add the precise-backports universe repository for the jq package
if ! command -v add-apt-repository; then
DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes python-software-properties
fi
sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ precise-backports universe"
fi
# packages
sudo -E apt-get update
DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes python-lxml python-libvirt libvirt-bin qemu-utils qemu-system qemu-kvm git python-pip python-dev gcc python-virtualenv openvswitch-switch libssl-dev curl python-yaml parted lsb-release libxml2-dev libxslt1-dev jq openssh-server libffi-dev kpartx python-netaddr
if [ -f /lib/systemd/system/libvirtd.service ]; then
sudo service libvirtd restart
else
sudo service libvirt-bin restart
fi
fi
if [ "$TRIPLEO_OS_FAMILY" = "redhat" ]; then
sudo -E yum install -y python-lxml libvirt-python libvirt qemu-img qemu-kvm git python-pip openssl-devel python-devel gcc audit python-virtualenv openvswitch python-yaml net-tools redhat-lsb-core libxslt-devel jq openssh-server libffi-devel which glusterfs-api python-netaddr
sudo service libvirtd restart
sudo service openvswitch restart
sudo chkconfig openvswitch on
fi
if [ "$TRIPLEO_OS_FAMILY" = "suse" ]; then
# Need these in path for sudo service & usermod to work
PATH=/sbin:/usr/sbin:$PATH
# TODO: this is a bit fragile, and assumes openSUSE, not SLES
suse_version=$(awk '/VERSION/ { print $3 }' /etc/SuSE-release)
if [ ! -f /etc/zypp/repos.d/Cloud_OpenStack_Master.repo ]; then
# Add Cloud:OpenStack:Master (Project that follows master branch with daily updates)
sudo -E zypper -n ar -f http://download.opensuse.org/repositories/Cloud:/OpenStack:/Master/openSUSE_$suse_version/Cloud:OpenStack:Master.repo
sudo -E zypper -n --gpg-auto-import-keys ref
fi
sudo -E zypper --non-interactive install \
python-lxml libvirt-python libvirt qemu-tools kvm git python-pip libopenssl-devel \
python-devel gcc audit python-virtualenv openvswitch-switch python-PyYAML net-tools \
lsb-release libxslt-devel jq libffi-devel python-netaddr
sudo service libvirtd restart
sudo service openvswitch-switch restart
fi


@ -1,164 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
# save stdout for later then make fd 1 stderr
exec 3>&1 >&2
function show_options {
echo "Usage: $SCRIPT_NAME [options] <file>"
echo
echo "Load an image into Glance for use with Nova BareMetal driver"
echo
echo "Options:"
echo " -d -- delete duplicate images from glance before loading"
echo " -h -- print this help"
echo
exit 0
}
function cleanup {
rm -rf $TMP_IMAGE_DIR
}
function remove_image {
NAME=$1
UUIDS=$(glance image-list | awk "/$NAME/ {print \$2}")
for UUID in $UUIDS; do
echo "Removing image $1 ($UUID) from glance"
glance image-delete $UUID
done
}
function load_image {
FILE=$(readlink -f $1)
DIR=$(dirname ${FILE})
GLANCE_IMAGE_NAME=$(basename ${FILE%.*})
RAMDISK="${DIR}/${GLANCE_IMAGE_NAME}.initrd"
KERNEL="${DIR}/${GLANCE_IMAGE_NAME}.vmlinuz"
if [ ! -e "$FILE" ]; then
echo "Error: specified file $FILE not found"
exit 1
fi
CURRENT_CHECKSUM=$(nova image-show $GLANCE_IMAGE_NAME 2> /dev/null | awk '/ checksum / {print $4}')
NEW_CHECKSUM=$(md5sum $FILE | awk '{print $1}')
if [ "$CURRENT_CHECKSUM" = "$NEW_CHECKSUM" ]; then
echo "$FILE checksum matches glance checksum, not creating duplicate image."
nova image-show $GLANCE_IMAGE_NAME | awk '/ id / {print $4}' >&3
return
fi
if [ ! -e "$KERNEL" -o ! -e "$RAMDISK" ] ; then
DIGK=$(which disk-image-get-kernel || echo $DIB_PATH/bin/disk-image-get-kernel)
if [ ! -e $DIGK ]; then
echo "Error: unable to locate disk-image-get-kernel"
exit 1
fi
echo "Warning: Kernel ($KERNEL) or initrd ($RAMDISK) for specified file $FILE not found."
echo " Trying to extract them with disk-image-get-kernel now."
echo " Please add the \"baremetal\" element to your image-build."
export TMP_IMAGE_DIR=$(mktemp -t -d --tmpdir=${TMP_DIR:-/tmp} image.XXXXXXXX)
[ -d "$TMP_IMAGE_DIR" ] || { echo "Error: failed to create tmp directory" >&2; exit 1; }
trap cleanup EXIT
$DIGK -d ${TMP_IMAGE_DIR} -o 'tmp' -i $FILE
KERNEL=$TMP_IMAGE_DIR/tmp-vmlinuz
RAMDISK=$TMP_IMAGE_DIR/tmp-initrd
fi
if [ "$REMOVE_OLD_IMAGES" ]; then
remove_image "${GLANCE_IMAGE_NAME}-vmlinuz"
remove_image "${GLANCE_IMAGE_NAME}-initrd"
remove_image "${GLANCE_IMAGE_NAME}"
fi
kernel_id=$(glance image-create \
--name "${GLANCE_IMAGE_NAME}-vmlinuz" \
--visibility public \
--disk-format aki \
--container-format aki \
--file "$KERNEL" \
| grep ' id ' | awk '{print $4}')
ramdisk_id=$(glance image-create \
--name "${GLANCE_IMAGE_NAME}-initrd" \
--visibility public \
--disk-format ari \
--container-format ari \
--file "$RAMDISK" \
| grep ' id ' | awk '{print $4}')
# >&3 sends to the original stdout as this is what we are after
glance image-create --name $GLANCE_IMAGE_NAME \
--visibility public \
--disk-format qcow2 \
--container-format bare \
--property kernel_id=$kernel_id \
--property ramdisk_id=$ramdisk_id \
--file $FILE | awk '/ id / { print $4 }' >&3
cleanup
trap EXIT
}
TEMP=`getopt -o hd -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-d) export REMOVE_OLD_IMAGES=1 ; shift ;;
-h) show_options;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
for arg; do
FILES="$FILES $arg";
done
if [ ! "$FILES" ]; then
show_options
fi
which glance >/dev/null || ( echo "Error: unable to locate glance"; exit 1 )
DIB_PATH=${DIB_PATH:-$SCRIPT_HOME/../../diskimage-builder}
# Attempt to get the OS credentials, or die. Note: source and exit must run
# in the current shell, not a subshell, or they have no effect.
if [ -z "$OS_AUTH_URL" ] && [ -z "$OS_USERNAME" ] && [ -z "$OS_PASSWORD" ]; then
if [ -e ~/stackrc ]; then
source ~/stackrc
else
echo "Error: OS credentials not found. Please save them to ~/stackrc."
exit 1
fi
fi
# Load the images now
for FILE in $FILES; do
load_image $FILE
done
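The checksum short-circuit in `load_image` compares the local file's md5 against the checksum glance already records and skips the upload on a match. Sketched in Python, with `glance_checksum` standing in for the value parsed from `nova image-show`:

```python
import hashlib
import os
import tempfile

def file_md5(path):
    """md5 of a file's contents, as `md5sum` reports it."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def needs_upload(path, glance_checksum):
    """True unless the checksum glance records matches the local file."""
    return file_md5(path) != glance_checksum

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"fake qcow2 contents")
tmp.close()
local = file_md5(tmp.name)
skip = not needs_upload(tmp.name, local)   # checksums match: skip upload
redo = needs_upload(tmp.name, "0" * 32)    # stale checksum: re-upload
os.unlink(tmp.name)
```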


@ -1,130 +0,0 @@
#!/usr/bin/env python
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import logging
import os
import subprocess
import sys
import yaml
logger = logging.getLogger(__name__)
env = os.environ.copy()
# YAML FILE FORMAT (same format as build-images but only uses the name/type)
# disk_images:
# -
# type: qcow2
# imagename: overcloud
# heat_parameters:
# - controllerImage
# - NovaImage
def parse_opts(argv):
parser = argparse.ArgumentParser(
description='Load images into Glance using a YAML/JSON config file'
' format.')
parser.add_argument('-c', '--config-file', metavar='CONFIG_FILE',
help="""path to the configuration file.""",
default='disk_images.yaml')
parser.add_argument('-i', '--images-directory', metavar='DIRECTORY',
help="""images directory for images. """
"""Defaults to $TRIPLEO_ROOT""",
default=env.get('TRIPLEO_ROOT'))
parser.add_argument('-o', '--output-heat-env', metavar='PATH',
help="""Output path for a heat environment that
contains Glance image IDs set to the
respective heat input name specified in the
config file. """)
parser.add_argument('-r', '--remove', action='store_true',
help="""remove duplicate image names from glance.""",
default=False)
parser.add_argument('-d', '--debug', dest="debug", action='store_true',
help="Print debugging output.", required=False)
parser.add_argument('-v', '--verbose', dest="verbose",
action='store_true', help="Print verbose output.",
required=False)
opts = parser.parse_args(argv[1:])
return opts
def configure_logger(verbose=False, debug=False):
LOG_FORMAT = '[%(asctime)s] [%(levelname)s] %(message)s'
DATE_FORMAT = '%Y/%m/%d %I:%M:%S %p'
log_level = logging.WARN
if debug:
log_level = logging.DEBUG
elif verbose:
log_level = logging.INFO
logging.basicConfig(format=LOG_FORMAT, datefmt=DATE_FORMAT,
level=log_level)
def main(argv=sys.argv):
opts = parse_opts(argv)
configure_logger(opts.verbose, opts.debug)
logger.info('Using config file at: %s' % opts.config_file)
if os.path.exists(opts.config_file):
with open(opts.config_file) as cf:
disk_images = yaml.safe_load(cf.read()).get("disk_images")
logger.debug('disk_images JSON: %s' % str(disk_images))
else:
logger.error('No config file exists at: %s' % opts.config_file)
return 1
if not opts.images_directory:
logger.error('Please specify --images-directory.')
return 1
heat_parameters = {'parameters': {}}
for image in disk_images:
img_type = image.get('type', 'qcow2')
imagename = image.get('imagename')
image_path = '%s/%s.%s' % (opts.images_directory, imagename, img_type)
if os.path.exists(image_path):
logger.info('image path: %s' % image_path)
cmd = ['load-image']
if opts.remove:
cmd.append('-d')
cmd.append(image_path)
logger.info('Running %s' % cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env)
stdout, stderr = proc.communicate()
if proc.returncode != 0:
logger.error('Failed to load image: %s' % imagename)
return 1
if image.get('heat_parameters'):
for name in image.get('heat_parameters'):
heat_parameters['parameters'][name] = stdout.strip()
else:
logger.warning('No image file exists at path: %s' % image_path)
continue
if opts.output_heat_env:
with open(opts.output_heat_env, 'w') as of:
of.write(yaml.dump(heat_parameters))
if __name__ == '__main__':
sys.exit(main(sys.argv))
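The mapping this script performs, from its documented YAML config to a heat environment, can be sketched as follows. This is a minimal illustration, not the script itself: the dict literal stands in for what `yaml.safe_load` would return, and `"fake-glance-id"` is a placeholder for the image ID the real script reads from `load-image` output.

```python
# Sketch of load-images' config -> heat environment mapping.
# "fake-glance-id" is a stand-in for the real Glance image ID.
disk_images = [
    {"type": "qcow2", "imagename": "overcloud",
     "heat_parameters": ["controllerImage", "NovaImage"]},
]

heat_parameters = {"parameters": {}}
for image in disk_images:
    image_id = "fake-glance-id"  # the script gets this from `load-image`
    for name in image.get("heat_parameters", []):
        heat_parameters["parameters"][name] = image_id

print(heat_parameters)
```

Each heat parameter named in the config ends up keyed to the uploaded image's ID under `parameters`, ready to dump with `--output-heat-env`.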


@ -1,128 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
GROUP=""
PASSWORD=""
function show_options {
echo "Usage: $SCRIPT_NAME [options] <username> <useremail>"
echo
echo "Create a well formed user in a cloud."
echo "A tenant with the same name as the user is automatically created unless"
echo "it already exists."
echo
echo "The admin user is added to the tenant in the admin role."
echo
echo "Options:"
echo " -p, --password -- the password for the user."
echo
echo "For instance: $SCRIPT_NAME joe joe@example.com"
echo "would create a tenant 'joe', a user 'joe' with email joe@example.com"
echo "and a random password."
exit $1
}
TEMP=`getopt -o p: -l password: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-p | --password) export PASSWORD="$2"; shift 2 ;;
-h) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
NAME=${1:-""}
EMAIL=${2:-""}
if [ -z "$NAME" -o -z "$EMAIL" ]; then
show_options 1
fi
PASSWORD=${PASSWORD:-$(os-make-password)}
ADMIN_ROLE=$(openstack role show admin| awk '$2=="id" {print $4}')
if [ -z "$ADMIN_ROLE" ]; then
echo "Could not find admin role" >&2
exit 1
fi
MEMBER_ROLE=$(openstack role show _member_| awk '$2=="id" {print $4}')
# Role _member_ is implicitly created by Keystone only while creating a new user
# If no users were created, need to create a role explicitly
if [ -z "$MEMBER_ROLE" ]; then
MEMBER_ROLE=$(openstack role create _member_ | awk '$2=="id" {print $4}')
echo "Created role _member_ with id ${MEMBER_ROLE}" >&2
fi
ADMIN_USER_ID=$(openstack user show admin | awk '$2=="id" {print $4}')
if [ -z "$ADMIN_USER_ID" ]; then
echo "Could not find admin user" >&2
exit 1
fi
if ! openstack project show $NAME 1>/dev/null 2>&1 ; then
USER_TENANT_ID=$(openstack project create $NAME | awk '$2=="id" {print $4}')
if [ -z "$USER_TENANT_ID" ]; then
echo "Failed to create tenant $NAME" >&2
exit 1
fi
else
USER_TENANT_ID=$(openstack project show $NAME 2>/dev/null| awk '$2=="id" {print $4}')
if [ -z "$USER_TENANT_ID" ]; then
echo "Failed to retrieve existing tenant $NAME" >&2
exit 1
fi
fi
USER_ID=$(openstack user show $NAME | awk '$2=="id" {print $4}')
if [ -z "$USER_ID" ]; then
USER_ID=$(openstack user create \
--password "$PASSWORD" \
--email $EMAIL $NAME | awk '$2=="id" {print $4}')
if [ -z "$USER_ID" ]; then
echo "Failed to create user $NAME" >&2
exit 1
else
echo "Created user $NAME with password '$PASSWORD'"
fi
else
echo "User $NAME with id $USER_ID already exists"
fi
if openstack role list --user $USER_ID --project $USER_TENANT_ID | grep -q "\s$MEMBER_ROLE\s"; then
echo "Role $MEMBER_ROLE is already granted for user $USER_ID with tenant $USER_TENANT_ID"
else
openstack role add --user $USER_ID --project $USER_TENANT_ID $MEMBER_ROLE
fi
if openstack role list --user $ADMIN_USER_ID --project $USER_TENANT_ID | grep -q "\s$ADMIN_ROLE\s"; then
echo "Role $ADMIN_ROLE is already granted for user $ADMIN_USER_ID with tenant $USER_TENANT_ID"
else
openstack role add --user $ADMIN_USER_ID --project $USER_TENANT_ID $ADMIN_ROLE
fi


@ -1,60 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME"
echo
echo "Create a random password."
echo
echo "This outputs a random password."
echo
echo "The password is made by taking a uuid and passing it through sha1sum."
echo "We may change this in future to gain more entropy."
echo
exit $1
}
TEMP=`getopt -o h -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
EXTRA=${1:-""}
if [ -n "$EXTRA" ]; then
show_options 1
fi
uuidgen | sha1sum | awk '{print $1}'
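The same recipe can be sketched in Python. This is a rough equivalent only: `uuidgen | sha1sum` hashes the hyphenated UUID string plus a trailing newline, so the exact bytes differ, but the output shape (40 hex characters of SHA-1 over a random UUID) is the same.

```python
import hashlib
import uuid

# Hash a freshly generated UUID with SHA-1, as os-make-password does
# with `uuidgen | sha1sum | awk '{print $1}'`.
password = hashlib.sha1(str(uuid.uuid4()).encode()).hexdigest()
print(password)
```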


@ -1,60 +0,0 @@
#!/usr/bin/env bash
#
# Copyright 2014 Red Hat
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Script to spam tripleo-cd-admin on #tripleo while there is an outage of
# the tripleo-ci cloud. Sends messages to IRC built from the lines at
# https://etherpad.openstack.org/p/cloud-outage, each time the irc message
# changes or every 30 minutes if no change occurs.
if [ -z "$1" ] ; then
echo "Supply channel name"
exit 1
fi
SCRIPTDIR=$(dirname $0)
CURRENT=/var/tmp/outage-bot.current
LAST=/var/tmp/outage-bot.last
NEXTMESSAGE=0
CHANNEL=$1
touch $LAST
function sendmessage {
PEOPLE=$(cut -d , -f 1 $SCRIPTDIR/../tripleo-cloud/tripleo-cd-admins | xargs echo)
$SCRIPTDIR/send-irc $CHANNEL CLOUDOUTAGE "$PEOPLE $(sed -e 's/^ircmessage: \?//g' $CURRENT | xargs -0 -I LINE echo -n " --" LINE)"
NEXTMESSAGE=$(( $(date +%s) + 1800 ))
}
while true ; do
sleep 60
curl https://etherpad.openstack.org/p/cloud-outage/export/txt | grep "^ircmessage:" > $CURRENT
if [ ! -s $CURRENT ] ; then
continue
fi
if ! diff $CURRENT $LAST &> /dev/null ; then
sendmessage
fi
if [ $NEXTMESSAGE -lt $(date +%s) ] ; then
sendmessage
fi
cp $CURRENT $LAST
done


@ -1,18 +0,0 @@
# A default disk images YAML file that will load images
# created with devtest_overcloud_images.sh. The
# heat_parameters sections are used to output a heat
# environment file that maps heat parameter
# names to the Glance image IDs from each upload.
disk_images:
-
imagename: overcloud-control
heat_parameters:
- controllerImage
-
imagename: overcloud-compute
heat_parameters:
- NovaImage
-
imagename: overcloud-cinder-volume
heat_parameters:
- BlockStorageImage


@ -1,19 +0,0 @@
# A puppet images YAML file that will build and
# load a single puppet base image to be used for
# all roles.
#
# The heat_parameter section is used to output a heat
# environment file that maps heat parameter
# names to the Glance image IDs.
disk_images:
-
imagename: overcloud
arch: amd64
type: qcow2
elements:
- hosts baremetal dhcp-all-interfaces os-collect-config heat-config-puppet heat-config-script puppet-modules hiera overcloud-compute overcloud-controller stackuser os-net-config delorean-repo rdo-release
heat_parameters:
- controllerImage
- NovaImage
- CephStorageImage
- BlockStorageImage


@ -1,90 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
# This is a cheap mr/cm-alike. Perhaps we should use mr/cm.
TRIPLEO_ADDITIONAL_PULL_TOOLS=${TRIPLEO_ADDITIONAL_PULL_TOOLS:-}
TOOLS="https://git.openstack.org/openstack/diskimage-builder
https://git.openstack.org/openstack/dib-utils
https://git.openstack.org/openstack/heat-templates
https://git.openstack.org/openstack/tripleo-image-elements
https://git.openstack.org/openstack/tripleo-puppet-elements
https://git.openstack.org/openstack/tripleo-heat-templates
https://git.openstack.org/openstack/tripleo-incubator
https://git.openstack.org/openstack-infra/tripleo-ci
https://git.openstack.org/openstack/os-cloud-config ${TRIPLEO_ADDITIONAL_PULL_TOOLS}"
ZUUL_REF=${ZUUL_REF:-''}
if [ -n "$ZUUL_REF" ]; then
echo "SKIPPING pull-tools as ZUUL_REF is present."
exit 0
fi
# Create a manifest of tools that are in use
GIT_MANIFEST=$TRIPLEO_ROOT/dib-manifest-git-pull_tools
rm -f $GIT_MANIFEST
for TOOL in $TOOLS; do
TOOL_BASE=$(basename $TOOL)
echo pulling/updating $TOOL_BASE
LOCATION_OVERRIDE=DIB_REPOLOCATION_${TOOL_BASE//[^A-Za-z0-9]/_}
LOCATION=${!LOCATION_OVERRIDE:-$TOOL}
REF=master
REF_OVERRIDE=DIB_REPOREF_${TOOL_BASE//[^A-Za-z0-9]/_}
REF=${!REF_OVERRIDE:-$REF}
if [ ! -d $TRIPLEO_ROOT/$TOOL_BASE ] ; then
cd $TRIPLEO_ROOT
git clone $LOCATION
pushd $TOOL_BASE
git checkout $REF # for a branch or SHA1
popd
else
cd $TRIPLEO_ROOT/$TOOL_BASE
if echo "/$(git symbolic-ref -q HEAD)" | grep -q "/${REF}\$" ; then
if ! git pull --ff-only ; then
echo "***************************************************"
echo "* Perhaps you want to 'git rebase origin/$REF'? *"
echo "***************************************************"
exit 1
fi
else
echo "***************************************"
echo "* $TOOL_BASE is not on branch $REF; skipping pull *"
echo "***************************************"
fi
fi
echo -n $TRIPLEO_ROOT/$TOOL_BASE:
cd $TRIPLEO_ROOT/$TOOL_BASE
git --no-pager log -1 --pretty=oneline
# Write the manifest entry
# Make a best guess at the branch to get the remote in use
if ! branch=$(git symbolic-ref -q HEAD) ; then
# We are on a non-symbolic reference - try the first branch containing this ref
branch=$(git branch --contains HEAD | grep -v '(no branch)' | head -1 | tr -d ' ')
else
# Strip the leading refs/heads
branch=${branch##refs/heads/}
fi
[[ -z "$(git config branch.${branch}.remote)" ]] ||\
branch_remote=$(git config remote.$(git config branch.${branch}.remote).url)
branch_remote=${branch_remote:-"unknown"}
echo "$TOOL_BASE git $TRIPLEO_ROOT/$TOOL_BASE $branch_remote $(git rev-parse HEAD)" >> $GIT_MANIFEST
done
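The per-repo override naming used above, `DIB_REPOLOCATION_*` and `DIB_REPOREF_*` built from the repo basename with every non-alphanumeric character mapped to an underscore (the bash expansion `${TOOL_BASE//[^A-Za-z0-9]/_}`), can be sketched as:

```python
import re

def override_names(repo_url):
    # Basename of the repo URL, sanitized the way the bash
    # expansion ${TOOL_BASE//[^A-Za-z0-9]/_} does it.
    base = repo_url.rstrip("/").rsplit("/", 1)[-1]
    safe = re.sub(r"[^A-Za-z0-9]", "_", base)
    return "DIB_REPOLOCATION_" + safe, "DIB_REPOREF_" + safe

loc, ref = override_names("https://git.openstack.org/openstack/diskimage-builder")
print(loc, ref)
```

So exporting `DIB_REPOLOCATION_diskimage_builder` before running pull-tools redirects where that one tool is cloned from, and `DIB_REPOREF_diskimage_builder` pins its branch or SHA1.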


@ -1,44 +0,0 @@
#!/bin/bash
#
# Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__tripleo_refresh_env() {
export TRIPLEO_ROOT=$1
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$PATH
source $(dirname $BASH_SOURCE)/set-os-type
export NODE_DIST=${NODE_DIST:-"$TRIPLEO_OS_DISTRO"}
pull-tools
setup-clienttools
export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-image-elements/elements
echo "Make sure to source your stackrc file"
}
# Setup/update your undercloud environment to run devtest_overcloud.sh
#
if [ -z "${1:-}" ] ; then
echo "Usage:"
echo "source refresh-env TRIPLEO_ROOT"
echo "Ex:"
echo "source refresh-env ~/tripleo"
else
if [ -d "$1/tripleo-incubator/scripts" ] ; then
__tripleo_refresh_env $1
else
echo "TRIPLEO_ROOT must contain tripleo-incubator/scripts"
fi
fi


@ -1,188 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
DESCRIPTION=""
ADMIN_URL=""
INTERNAL_URL=""
REGION="regionOne" # NB: This is the default keystone uses.
DEBUG=""
function show_options {
echo "Usage: $SCRIPT_NAME [options] <name> <type> <public_url>"
echo
echo "Register a service and create an endpoint for it."
echo "The script assumes that the service tenant is called 'service' and "
echo "the admin role is called 'admin'."
echo
echo "Supported types are ec2, image, orchestration, identity,"
echo "network, compute, computev3, baremetal, volume, volumev2,"
echo "object-store, dashboard, management and metering."
echo
echo "Options:"
echo " -d, --description -- the description for the service."
echo " -a, --admin -- the admin URL prefix for this endpoint. If"
echo " not supplied, defaults to the internal url."
echo " -i, --internal -- the internal URL prefix for this endpoint."
echo " If not supplied, defaults to the public url."
echo " -r, --region -- Override the default region 'regionOne'."
echo " --debug -- Debug API calls made."
echo
echo "For instance: $SCRIPT_NAME nova compute https://api.cloud.com/nova/"
echo "would create a nova service and register"
echo "https://api.cloud.com/nova/v2/\$(tenant_id)s for all three endpoints."
exit ${1:-0}
}
TEMP=`getopt -o d:a:i:r: -l debug,description:,admin:,internal:,region: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-d | --description) export DESCRIPTION="$2"; shift 2 ;;
--debug) export DEBUG="--debug"; shift 1 ;;
-a | --admin) export ADMIN_URL="$2"; shift 2 ;;
-i | --internal) export INTERNAL_URL="$2"; shift 2 ;;
-r | --region) export REGION="$2"; shift 2 ;;
-h) show_options;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
NAME=${1:-""}
TYPE=${2:-""}
PUBLIC_URL=${3:-""}
EXTRA=${4:-""}
if [ -z "$NAME" -o -z "$TYPE" -o -z "$PUBLIC_URL" -o -n "$EXTRA" ]; then
show_options 1
fi
ADMIN_SUFFIX=
case "$TYPE" in
ec2)
SUFFIX="/services/Cloud"
ADMIN_SUFFIX="/services/Admin"
;;
image|baremetal|network|metering)
SUFFIX="/"
;;
orchestration|volume)
SUFFIX="/v1/%(tenant_id)s"
;;
volumev2)
SUFFIX="/v2/%(tenant_id)s"
;;
identity)
SUFFIX="/v2.0"
;;
compute)
SUFFIX="/v2/\$(tenant_id)s"
;;
computev3)
SUFFIX="/v3"
;;
object-store)
SUFFIX="/v1/AUTH_%(tenant_id)s"
ADMIN_SUFFIX="/v1"
;;
dashboard)
SUFFIX="/"
ADMIN_SUFFIX="/admin"
;;
management)
SUFFIX="/v2"
;;
*)
echo "Unknown service type" >&2
exit 1
esac
if [ -z "$ADMIN_SUFFIX" ]; then
ADMIN_SUFFIX="$SUFFIX"
fi
if [ -n "$DESCRIPTION" ]; then
DESCRIPTION="--description=$DESCRIPTION"
fi
if [ -z "$INTERNAL_URL" ]; then
INTERNAL_URL="$PUBLIC_URL"
fi
if [ -z "$ADMIN_URL" ]; then
ADMIN_URL="$INTERNAL_URL"
fi
ADMIN_ROLE=$(openstack $DEBUG role list | awk '/ admin / {print $2}')
if [ -z "$ADMIN_ROLE" ]; then
echo "Could not find admin role" >&2
exit 1
fi
# Some services don't need a user
if [ "dashboard" != "$TYPE" ]; then
SERVICE_TENANT=$(openstack $DEBUG project list | awk '/ service / {print $2}')
PASSWORD=${PASSWORD:-$(os-make-password)}
# Some services have multiple endpoints, the user doesn't need to be recreated
USER_ID=$(openstack $DEBUG user show $NAME | awk '$2=="id" { print $4 }')
if [ -z "$USER_ID" ]; then
USER_ID=$(openstack $DEBUG user create --password $PASSWORD --project $SERVICE_TENANT --email=nobody@example.com $NAME | awk ' / id / {print $4}')
fi
if ! openstack role list --project $SERVICE_TENANT --user $USER_ID | grep -q " $ADMIN_ROLE "; then
echo "Creating user-role assignment for user $NAME, role admin, tenant service"
openstack role add $DEBUG \
--project $SERVICE_TENANT \
--user $USER_ID \
$ADMIN_ROLE
fi
#Add the admin tenant role for ceilometer user to enable polling services
if [ "metering" == "$TYPE" ]; then
ADMIN_TENANT=$(openstack $DEBUG project list | awk '/ admin / {print $2}')
if ! openstack role list --project $ADMIN_TENANT --user $USER_ID | grep -q " $ADMIN_ROLE "; then
echo "Creating user-role assignment for user $NAME, role admin, tenant admin"
openstack role add $DEBUG \
--project $ADMIN_TENANT \
--user $USER_ID \
$ADMIN_ROLE
#swift polling requires ResellerAdmin role to be added to the Ceilometer user
RESELLER_ADMIN_ROLE=$(openstack $DEBUG role list | awk '/ ResellerAdmin / {print $2}')
openstack role add $DEBUG \
--project $ADMIN_TENANT \
--user $USER_ID \
$RESELLER_ADMIN_ROLE
fi
fi
fi
SERVICE_ID=$(openstack $DEBUG service create --name $NAME "$DESCRIPTION" $TYPE | awk '/ id / {print $4}')
openstack endpoint create $DEBUG \
--publicurl "${PUBLIC_URL}${SUFFIX}" \
--adminurl "${ADMIN_URL}${ADMIN_SUFFIX}" \
--internalurl "${INTERNAL_URL}${SUFFIX}" --region "$REGION" $SERVICE_ID
echo "Service $TYPE created"
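The service-type-to-URL-suffix table implemented by the case statement above can be sketched as a small lookup (a subset of the types only; as in the script, the admin suffix falls back to the public one when the type defines no separate value):

```python
# Subset of register-endpoint's type -> (public_suffix, admin_suffix)
# mapping; None means "same as the public suffix".
SUFFIXES = {
    "ec2": ("/services/Cloud", "/services/Admin"),
    "image": ("/", None),
    "identity": ("/v2.0", None),
    "object-store": ("/v1/AUTH_%(tenant_id)s", "/v1"),
    "dashboard": ("/", "/admin"),
}

def endpoint_suffixes(svc_type):
    public, admin = SUFFIXES[svc_type]
    return public, admin if admin is not None else public

print(endpoint_suffixes("identity"))
```

These suffixes are appended to the public, internal, and admin URL prefixes when the endpoint is registered.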


@ -1,134 +0,0 @@
#!/bin/bash
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Script to make cloud selection simple
#
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(cd $(dirname $0); pwd)
function show_options {
echo "Usage: $SCRIPT_NAME <cloud>"
echo
echo "Options:"
echo " -h, --help -- print this help."
echo " --root <dir> -- Use <dir> as TRIPLEO_ROOT"
echo
echo "Echos the appropriate setup to interact with the requested cloud."
echo "Choices for <cloud> are:"
echo " seed or s"
echo " undercloud or under or u"
echo " overcloud or over or o"
echo
echo "Run as follows to source the undercloud variables into the current shell:"
echo " source <( $SCRIPT_HOME/$SCRIPT_NAME undercloud )"
echo
exit $1
}
TEMP=`getopt -o h -l help,root: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h|--help) show_options 0 >&2;;
--root) TRIPLEO_ROOT=$2 ; shift 2;;
--) shift ; break;;
*) echo "Error: unsupported option $1." ; exit 1;;
esac
done
if [[ -z "${TRIPLEO_ROOT:-}" ]] ; then
echo "Error: You must have TRIPLEO_ROOT set in the environment, or specify it with --root" >&2
show_options 1 >&2
fi
if (( $# != 1 )); then echo "Cloud to interact with is required" >&2; show_options 1 >&2; fi
set_common() {
cat << EOF
source_config_file() {
filename=\$1
if [[ -e \$filename ]] ; then
source \$filename
elif [[ -e \${TRIPLEO_ROOT}/\$filename ]] ; then
source \${TRIPLEO_ROOT}/\$filename
else
echo "Could not find \$filename - sourcing may not work" >&2
fi
}
update_tripleo_no_proxy() {
add=\$1
echo \$no_proxy | grep -wq \$1 || export no_proxy=\$no_proxy,\$1
}
EOF
# This file may not be there, depending on if the undercloud/overcloud are started yet
echo "[ -f ${TRIPLEO_ROOT}/tripleorc ] && source ${TRIPLEO_ROOT}/tripleorc"
# Need to reset TRIPLEO_ROOT after sourcing tripleorc
echo "export TRIPLEO_ROOT=$TRIPLEO_ROOT"
echo "source ${TRIPLEO_ROOT}/tripleo-incubator/scripts/devtest_variables.sh"
}
set_seed() {
cat << EOF
source ${TRIPLEO_ROOT}/tripleo-incubator/seedrc
export SEED_IP=\$(os-apply-config -m \$TE_DATAFILE --type raw --key seed-ip)
export OS_AUTH_URL=http://\${SEED_IP}:5000/v2.0
update_tripleo_no_proxy \${SEED_IP}
export UNDERCLOUD_ID=\$(glance image-list | grep undercloud | grep qcow2 | awk '{print \$2}' | head -1)
EOF
# Don't proxy to the seeds IP on the baremetal network
echo "update_tripleo_no_proxy \$(OS_CONFIG_FILES=\$TE_DATAFILE os-apply-config \
--key baremetal-network.seed.ip --type raw --key-default '192.0.2.1')"
}
set_undercloud() {
cat << EOF
source_config_file tripleo-undercloud-passwords
source ${TRIPLEO_ROOT}/tripleo-incubator/undercloudrc
export UNDERCLOUD_IP=\$(os-apply-config -m \$TE_DATAFILE --type raw --key undercloud.endpointhost)
update_tripleo_no_proxy \$UNDERCLOUD_IP
EOF
}
set_overcloud() {
cat << EOF
source_config_file tripleo-overcloud-passwords
source ${TRIPLEO_ROOT}/tripleo-incubator/overcloudrc-user
export OVERCLOUD_IP=\$(os-apply-config -m \$TE_DATAFILE --type raw --key overcloud.endpointhost)
update_tripleo_no_proxy \$OVERCLOUD_IP
EOF
}
# Get the argument to show what cloud to interact with
case "$1" in
s|seed) cloud=seed;;
u|under|undercloud) cloud=undercloud;;
o|over|overcloud) cloud=overcloud;;
*) echo "Error: unsupported cloud $1." ; exit 1 ;;
esac
# Call the appropriate functions
set_common
set_${cloud}


@ -1,49 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Red Hat
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
function show_options {
echo "Usage: $SCRIPT_NAME IRC_CHANNEL IRC_USERNAME MESSAGE..."
echo
echo "Send a MESSAGE to a freenode channel IRC_CHANNEL from"
echo "the user IRC_USERNAME"
echo "Examples:"
echo " $SCRIPT_NAME tripleo toci \"WARNING : The build failed\""
exit 1
}
[ $# -lt 3 ] && show_options
exec 3<>/dev/tcp/irc.freenode.net/6667
IRC_CHANNEL=$1
IRC_USERNAME=$2
shift 2
MESSAGE=$@
echo "Nick $IRC_USERNAME" >&3
echo "User $IRC_USERNAME -i * : hi" >&3
sleep 2
echo "JOIN #$IRC_CHANNEL" >&3
echo "PRIVMSG #$IRC_CHANNEL :$MESSAGE" >&3
echo "QUIT" >&3
cat <&3 > /dev/null


@ -1,49 +0,0 @@
#!/bin/bash
TRIPLEO_OS_FAMILY='unsupported' # Generic OS Family: debian, redhat, suse
TRIPLEO_OS_DISTRO='unsupported' # Specific distro: centos, fedora, rhel,
# opensuse, sles, ubuntu
if [ -f /etc/redhat-release ]; then
TRIPLEO_OS_FAMILY='redhat'
if $(grep -Eqs 'Red Hat Enterprise Linux' /etc/redhat-release); then
TRIPLEO_OS_DISTRO='rhel'
fi
if $(grep -Eqs 'CentOS' /etc/redhat-release); then
TRIPLEO_OS_DISTRO='centos'
fi
if $(grep -Eqs 'Fedora' /etc/redhat-release); then
TRIPLEO_OS_DISTRO='fedora'
fi
fi
if [ -f /etc/debian_version ]; then
TRIPLEO_OS_FAMILY='debian'
if $(grep -Eqs 'Ubuntu' /etc/lsb-release); then
TRIPLEO_OS_DISTRO='ubuntu'
fi
if $(grep -Eqs 'Debian' /etc/os-release); then
TRIPLEO_OS_DISTRO='debian'
fi
fi
function get_os_release {
(
source /etc/os-release
echo $ID
)
}
if [ -f /etc/os-release ]; then
if [ "$(get_os_release)" = "opensuse" ]; then
TRIPLEO_OS_FAMILY='suse'
TRIPLEO_OS_DISTRO='opensuse'
fi
if [ "$(get_os_release)" = "sles" ]; then
TRIPLEO_OS_FAMILY='suse'
TRIPLEO_OS_DISTRO='sles'
fi
fi
export TRIPLEO_OS_FAMILY
export TRIPLEO_OS_DISTRO


@ -1,80 +0,0 @@
#!/bin/bash
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Script to setup pip manifest file location environment variables from a
# given directory tree
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(cd $(dirname $0); pwd)
function show_options {
echo
echo "Usage: $SCRIPT_NAME file|directory ..[file|directory]"
echo
echo " This script takes a list of manifest files and/or directory"
echo " trees and sets the matching DIB_PIP_MANIFEST_<name> environment"
echo " variable to the full path of each matching"
echo " dib-pip-manifest* file found"
echo
echo " To source the export commands produced by running this script"
echo " and set the variables for the current shell,"
echo " you can run the script as follows:"
echo
echo " source <( $SCRIPT_NAME /path/to/pip-manifests )"
echo
echo "Options:"
echo " -h, --help -- print this help."
echo
echo "Echo the appropriate environment variables to use a pip manifest"
echo "for input into image building with the pip-manifest element."
echo
echo "e.g. $SCRIPT_NAME ~/myTripleo/seed-manifests"
echo
exit $1
}
TEMP=`getopt -o h -l help -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h|--help) show_options 0 >&2;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
if (( $# <= 0 )); then
echo "One or more pip manifest directory or file name required" >&2;
show_options 1 >&2
fi
for target in $*; do
for ent in $(find ${target} -type f -name dib-pip-manifest\* ); do
echo DIB_PIP_MANIFEST_${ent##*dib-pip-manifest-}=${ent};
done
done 2>/dev/null | sort -t = -k1 -u


@ -1,170 +0,0 @@
#!/bin/bash
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Script to make git repository selection simple
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(cd $(dirname $0); pwd)
function show_options {
echo "Usage: $SCRIPT_NAME [options] local_repo [...]"
echo " e.g. use locally cloned repositories as input:"
echo " $SCRIPT_NAME -l /my/local/neutron /a/local/nova"
echo " e.g. use the url of remote \"origin\" of the locally cloned repositories as input:"
echo " $SCRIPT_NAME -r origin /my/local/neutron /a/local/nova"
echo " e.g. use the values from a git manifest generated by disk-image-builder:"
echo " $SCRIPT_NAME -m tripleo-git-manifest"
echo " The above examples will set the location and reference to use."
echo " The reference will be set to the current HEAD reference of the local repository"
echo
echo " To source the export commands produced by running this script and set the variables"
echo " for the current shell you can run the script as follows:"
echo " source <( $SCRIPT_NAME -l /path/to/repo )"
echo
echo "Display source-repositories element environment variables"
echo
echo "Options:"
echo " -h, --help -- print this help."
echo " -l, --local -- use the local path location of the repository"
echo " -r <remote>, --remote <remote> -- remote of the local repo to use to get the repo url"
echo " -m <manifest>, --manifest <manifest> -- git manifest file as produced by running devtest"
echo
echo "Echo the appropriate environment variables to use a local git repository"
echo "for input into image building via the source-repositories element"
echo
echo "The \"local_repo\" argument is the path to a locally cloned git repository from which to"
echo "take the settings. The name of the remote \"origin\" will be used to determine the name"
echo "of the repository (nova, etc) for use in the diskimage-builder environment variables."
echo
echo "Using the -l flag will result in the full path to the local repository being used"
echo "as the location to clone from for that repository in diskimage-builder"
echo
echo "Specifying a remote via the -r flag will result in the url associated with that remote"
echo "in the local repository being used for the location to clone from for that repository"
echo "in diskimage-builder"
echo
exit $1
}
TEMP=`getopt -o h,l,n,r:,m: -l help,local,no-echo,no_echo,remote:,manifest: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h|--help) show_options 0 >&2;;
-l|--local) USE_LOCAL=1; shift 1;;
-r|--remote) USE_REMOTE=1; REMOTE=$2; shift 2;;
-m|--manifest) USE_MANIFEST=1; MANIFEST=$2; shift 2;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
USE_LOCAL=${USE_LOCAL:-0}
USE_REMOTE=${USE_REMOTE:-0}
REMOTE=${REMOTE:-""}
USE_MANIFEST=${USE_MANIFEST:-0}
if [[ "$((USE_LOCAL + USE_REMOTE + USE_MANIFEST))" != "1" ]]; then
echo "Choose to either clone the local path (-l)" >&2
echo "OR" >&2
echo "to use the remote (-r <remote>) defined in the local repo to lookup the repo URL" >&2
echo "OR" >&2
echo "to parse repos and SHA1s from a manifest file (-m <manifest-file>)" >&2
show_options 1 >&2
fi
function transform_manifest {
while read name _type _dest loc ref; do
name_transformed=$(echo "$name" | tr '[:upper:]-' '[:lower:]_')
echo "export DIB_REPOLOCATION_${name_transformed}=$loc"
echo "export DIB_REPOREF_${name_transformed}=$ref"
done
}
function use_manifest {
for manifest in "${MANIFEST}"; do
transform_manifest < ${manifest}
done | sort -ut _ -k3
}
function get_location {
local remote_name=${1:-""}
if [[ -n "${remote_name}" ]] ; then
git config --get remote.${remote_name}.url
elif [[ "${USE_LOCAL}" == "1" ]]; then
# Find the .git directory
local dir="$(pwd)"
while [[ "${dir}" != "/" ]]; do
if [[ -d "${dir}/.git" ]]; then
echo ${dir}
break
fi
dir="$(dirname "${dir}")"
done
else
echo -n "Internal Error: get_location called with [${remote_name}]" >&2
echo " and USE_LOCAL is ${USE_LOCAL}" >&2
exit 1
fi
}
function get_ref {
git rev-parse HEAD
}
function get_name {
local remote_name=${1:-""}
name=$(get_location ${remote_name})
echo $(basename ${name##*:} .git)
}
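The name derivation in `get_name` leans on two expansions; a sketch with an illustrative ssh-style URL (the same pattern also handles https URLs, since `basename` discards the path):

```shell
# ${url##*:} drops everything up to the last ':' (the user@host part of
# an ssh-style URL); basename then strips the path and the .git suffix.
url='git@example.org:openstack/nova.git'
name=$(basename "${url##*:}" .git)
echo "$name"
```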
function use_repos {
declare -A a
for dir in "${REPOS[@]}"; do
if [[ ! -d ${dir} ]] ; then
echo "Not a directory: ${dir}" >&2
exit 1
fi
pushd ${dir} > /dev/null 2>&1
REPONAME=$(get_name ${REMOTE})
REPONAME_VAR=${REPONAME//[^a-zA-Z0-9]/_}
a[DIB_REPOLOCATION_${REPONAME_VAR}]=$(get_location ${REMOTE})
a[DIB_REPOREF_${REPONAME_VAR}]=$(get_ref)
echo "export DIB_REPOLOCATION_${REPONAME_VAR}=${a[DIB_REPOLOCATION_${REPONAME_VAR}]}"
echo "export DIB_REPOREF_${REPONAME_VAR}=${a[DIB_REPOREF_${REPONAME_VAR}]}"
popd > /dev/null 2>&1
done
}
if [[ "${USE_MANIFEST}" == "1" ]] ; then
use_manifest
else
if (( $# <= 0 )); then echo "Local repository location is required" >&2; show_options 1 >&2; fi
REPOS=( "${@}" )
use_repos
fi


@ -1,65 +0,0 @@
#!/bin/bash
set -eu
# libvirtd group
case "$TRIPLEO_OS_DISTRO" in
'debian' | 'opensuse' | 'sles')
LIBVIRTD_GROUP='libvirt'
;;
*)
LIBVIRTD_GROUP='libvirtd'
;;
esac
getent group $LIBVIRTD_GROUP || sudo groupadd $LIBVIRTD_GROUP
if [ "$TRIPLEO_OS_FAMILY" = "suse" ]; then
# kvm_intel/amd is autoloaded on SUSE, but without
# proper permissions. The kvm package installs a udev rule,
# so let's activate it:
if [ "$(sudo readlink -f /proc/1/root)" = "/" ]; then
sudo /sbin/udevadm control --reload-rules || :
sudo /sbin/udevadm trigger || :
fi
fi
if [ "$TRIPLEO_OS_FAMILY" = "redhat" ]; then
libvirtd_file=/etc/libvirt/libvirtd.conf
if ! sudo grep -q "^unix_sock_group" $libvirtd_file; then
sudo sed -i "s/^#unix_sock_group.*/unix_sock_group = \"$LIBVIRTD_GROUP\"/g" $libvirtd_file
sudo sed -i 's/^#auth_unix_rw.*/auth_unix_rw = "none"/g' $libvirtd_file
sudo sed -i 's/^#unix_sock_rw_perms.*/unix_sock_rw_perms = "0770"/g' $libvirtd_file
sudo service libvirtd restart
fi
fi
REMOTE_OPERATIONS=${REMOTE_OPERATIONS:-0}
if [ "$REMOTE_OPERATIONS" != 1 -a -n "$TE_DATAFILE" -a -e "$TE_DATAFILE" ]; then
REMOTE_OPERATIONS=$(jq '.["remote-operations"]' $TE_DATAFILE)
REMOTE_OPERATIONS=${REMOTE_OPERATIONS//\"}
fi
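jq prints JSON strings with their quotes intact, which is why the `${REMOTE_OPERATIONS//\"}` expansion follows the lookup above. The stripping can be seen on its own (no jq needed for the sketch; the value is illustrative):

```shell
# A value as jq would print it, double quotes included.
raw='"1"'
# ${var//\"} deletes every double quote from the value.
stripped=${raw//\"}
echo "$stripped"
```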
if [ $REMOTE_OPERATIONS != 1 ]; then
if ! id | grep -qw $LIBVIRTD_GROUP; then
echo "adding $USER to group $LIBVIRTD_GROUP"
sudo usermod -a -G $LIBVIRTD_GROUP $USER
echo "$USER was just added to the $LIBVIRTD_GROUP group. Devtest will not"
echo "be able to continue until you start a new session to pick up the"
echo "new group membership. This can be done by either logging out and"
echo "back in, or running:"
echo
echo "sudo su -l $USER"
echo
echo "To verify that your group membership is correct, you can use the"
echo "following command:"
echo
echo "id | grep $LIBVIRTD_GROUP"
echo
echo "Once you have verified your group membership, you should be able to"
echo "re-run devtest successfully or continue with devtest_testenv."
# We have to exit non-zero so the calling script knows to stop.
exit 1
fi
else
echo "$TE_DATAFILE says to use remote operations; not adding $USER to $LIBVIRTD_GROUP"
fi
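The re-login advice above exists because group membership is read at session start, so `usermod` alone is not enough. The check the message recommends can be scripted (the group name is the non-SUSE default assumed here):

```shell
# Test current-session membership in the libvirt group; `id` reports
# the groups of the running session, not the passwd database.
LIBVIRTD_GROUP=libvirtd
if id | grep -qw "$LIBVIRTD_GROUP"; then
    echo "session already has $LIBVIRTD_GROUP"
else
    echo "log out and back in (or: sudo su -l \$USER) to pick it up"
fi
```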


@ -1,115 +0,0 @@
#!/bin/bash
#
# Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options]"
echo
echo "Reads a JSON file describing machines for a baremetal cluster and"
echo "registers them all with Nova baremetal. Excess machines are removed"
echo "and flavors are created to match the machines that have been"
echo "registered using the local deploy-ramdisk and kernel, which are also"
echo "loaded into glance."
echo
echo "Options:"
echo " -h -- this help"
echo " --service-host -- nova bm service host to register nodes with"
echo " --nodes -- JSON list of nodes to register"
echo
exit $1
}
SERVICE_HOST=""
JSON_PATH=
TEMP=$(getopt -o h -l help,service-host:,nodes: -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h | --help) show_options 0;;
--service-host) SERVICE_HOST="$2"; shift 2 ;;
--nodes) JSON_PATH="$2"; shift 2 ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
if [ -z "$SERVICE_HOST" ]; then
echo "Ironic not supported, please specify --service-host."
exit 1
fi
if [ -z "$JSON_PATH" ]; then
echo "A node list is required."
exit 1
fi
deploy_kernel=$TRIPLEO_ROOT/deploy-ramdisk-ironic.kernel
deploy_ramdisk=$TRIPLEO_ROOT/deploy-ramdisk-ironic.initramfs
if ! nova image-show bm-deploy-kernel > /dev/null ; then
deploy_kernel_id=$(glance image-create --name bm-deploy-kernel --visibility public \
--disk-format aki --container-format aki < "$deploy_kernel" | awk ' / id / {print $4}')
deploy_ramdisk_id=$(glance image-create --name bm-deploy-ramdisk --visibility public \
--disk-format ari --container-format ari < "$deploy_ramdisk" | awk ' / id / {print $4}')
fi
NODES=$(cat $JSON_PATH)
register-nodes -s $SERVICE_HOST -n <(echo $NODES) -k bm-deploy-kernel -d bm-deploy-ramdisk
function cleanup_flavor {
local FLAVOR_NAME=${1:?"cleanup_flavor requires a flavor name"}
if nova flavor-show "$FLAVOR_NAME" &> /dev/null; then
nova flavor-delete "$FLAVOR_NAME"
fi
}
# While we can't mix hypervisors, having non-baremetal flavors will just
# confuse things.
cleanup_flavor 'm1.tiny'
cleanup_flavor 'm1.small'
cleanup_flavor 'm1.medium'
cleanup_flavor 'm1.large'
cleanup_flavor 'm1.xlarge'
cleanup_flavor 'baremetal'
# XXX(lifeless) this should be a loop making sure every node is represented
# with a flavor.
MEM=$(jq -r ".[0][\"memory\"]" <<< $NODES)
DISK=$(jq -r ".[0][\"disk\"]" <<< $NODES)
CPU=$(jq -r ".[0][\"cpu\"]" <<< $NODES)
ARCH=$(jq -r ".[0][\"arch\"]" <<< $NODES)
EPHEMERAL_DISK=$(( $DISK - $ROOT_DISK ))
if (( $EPHEMERAL_DISK < 0 )); then
echo "Error: NODE_DISK - ROOT_DISK must be >= 0 to specify size of ephemeral disk"
exit 1
fi
nova flavor-create baremetal \
--ephemeral $EPHEMERAL_DISK auto $MEM $ROOT_DISK $CPU
nova flavor-key baremetal set "cpu_arch"="$ARCH"
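The ephemeral-disk arithmetic above is the whole sizing rule: whatever the node disk holds beyond the root partition becomes the flavor's ephemeral space. A sketch with made-up sizes:

```shell
# Node disk and root partition sizes in GB (illustrative values).
DISK=40
ROOT_DISK=10
EPHEMERAL_DISK=$(( DISK - ROOT_DISK ))
# A node disk smaller than the root partition would go negative,
# which the script above rejects before creating the flavor.
echo "$EPHEMERAL_DISK"
```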


@ -1,38 +0,0 @@
#!/bin/bash
set -eu
BASE=$(readlink -f $(dirname $0)/..)
VENV_HOME=$BASE/openstack-tools
if [ ! -f $VENV_HOME/bin/activate ]; then
virtualenv --setuptools $VENV_HOME
fi
# NOTE(derekh): we need to use +u to workaround an issue with the activate script
# /opt/stack/new/tripleo-incubator/openstack-tools/bin/activate: line 8: _OLD_VIRTUAL_PATH: unbound variable
set +u
source $VENV_HOME/bin/activate
set -u
# Use latest versions of build/environment tooling.
pip install -U pip
pip install -U wheel setuptools pbr
pip install -U \
os-apply-config \
os-cloud-config \
python-barbicanclient \
python-ceilometerclient \
python-cinderclient \
python-glanceclient \
python-heatclient \
python-ironicclient \
python-neutronclient \
python-novaclient \
python-openstackclient \
python-swiftclient
for tool in os-apply-config cinder nova glance heat neutron swift ironic ceilometer openstack init-keystone generate-keystone-pki register-nodes setup-neutron; do
ln -sf $VENV_HOME/bin/$tool $BASE/scripts/$tool ;
done
echo "Installed openstack client tool symlinks in $BASE/scripts"
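The symlink loop is the part worth noting: the clients stay inside the virtualenv, and only links land in the scripts directory on callers' PATH. The same pattern in a throwaway directory:

```shell
# Build a fake venv bin/ and scripts/ dir, then link one tool across.
VENV_HOME=$(mktemp -d)
BASE=$(mktemp -d)
mkdir -p "$VENV_HOME/bin" "$BASE/scripts"
touch "$VENV_HOME/bin/nova"
for tool in nova; do
    # -sf: replace any stale link left over from a previous run.
    ln -sf "$VENV_HOME/bin/$tool" "$BASE/scripts/$tool"
done
readlink "$BASE/scripts/nova"
```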


@ -1,176 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] <controlplane-ip>"
echo
echo "Perform initial setup of a cloud running on <controlplane-ip>"
echo
echo "This will register ec2, image, orchestration, identity, network, "
echo "volume (optional), dashboard (optional), metering (optional) and "
echo "compute services as running on the default ports on controlplane-ip."
echo
echo "Options:"
echo " -r, --region -- Override the default region 'regionOne'."
echo "    --ceilometer-password -- Specify a password for ceilometer."
echo " --cinder-password -- Specify a password for cinder."
echo " --glance-password -- Specify a password for glance."
echo " --heat-password -- Specify a password for heat."
echo " --ironic-password -- Specify a password for ironic."
echo " --neutron-password -- Specify a password for neutron."
echo " --nova-password -- Specify a password for nova."
echo "    --swift-password -- Specify a password for swift."
echo "    --tuskar-password -- Specify a password for tuskar."
echo " --enable-horizon -- Enable horizon"
echo " --debug -- Debug the API calls made."
echo " --ssl -- Use SSL public endpoints. Takes the hostname to"
echo " use for the public endpoints."
echo " --public -- Use non-SSL public endpoints. Takes the ip/hostname"
echo " to use for the public endpoints."
echo
echo "For instance: $SCRIPT_NAME 192.0.2.1"
echo "For instance(ssl): $SCRIPT_NAME --ssl mysite.org 192.0.2.1"
exit $1
}
DEBUG=""
CEILOMETER_PASSWORD=""
CINDER_PASSWORD=""
GLANCE_PASSWORD=""
HEAT_PASSWORD=""
IRONIC_PASSWORD=""
NEUTRON_PASSWORD=""
NOVA_PASSWORD=""
SWIFT_PASSWORD=""
TUSKAR_PASSWORD=""
ENABLE_HORIZON=""
SSL=""
PUBLIC=""
REGION="regionOne" #NB: This is the keystone default.
TEMP=`getopt -o r: -l region:,debug,ceilometer-password:,cinder-password:,glance-password:,heat-password:,ironic-password:,public:,neutron-password:,nova-password:,swift-password:,tuskar-password:,enable-horizon,ssl: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-r|--region) export REGION=$2; shift 2 ;;
--debug) export DEBUG=--debug; set -x; shift 1;;
--ceilometer-password) export CEILOMETER_PASSWORD=$2; shift 2 ;;
--cinder-password) export CINDER_PASSWORD=$2; shift 2 ;;
--glance-password) export GLANCE_PASSWORD=$2; shift 2 ;;
--heat-password) export HEAT_PASSWORD=$2; shift 2 ;;
--ironic-password) export IRONIC_PASSWORD=$2; shift 2 ;;
--neutron-password) export NEUTRON_PASSWORD=$2; shift 2 ;;
--nova-password) export NOVA_PASSWORD=$2; shift 2 ;;
--public) export PUBLIC=$2; shift 2 ;;
--swift-password) export SWIFT_PASSWORD=$2; shift 2 ;;
--tuskar-password) export TUSKAR_PASSWORD=$2; shift 2 ;;
--enable-horizon) export ENABLE_HORIZON=--enable-horizon; shift 1;;
--ssl) export SSL=$2; shift 2 ;;
-h) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
HOST=${1:-""}
EXTRA=${2:-""}
if [ -z "$HOST" -o -n "$EXTRA" ]; then
show_options 1
fi
INTERNAL_HOST=http://${HOST}:
if [ -n "$SSL" ]; then
PUBLIC_HOST=https://${SSL}:
elif [ -n "$PUBLIC" ]; then
PUBLIC_HOST=http://${PUBLIC}:
else
PUBLIC_HOST=$INTERNAL_HOST
fi
NORMAL_PORT=8004
SSL_PORT=${SSL:+13004}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$HEAT_PASSWORD register-endpoint $DEBUG -r $REGION -d "Heat Service" heat orchestration -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
NORMAL_PORT=9696
SSL_PORT=${SSL:+13696}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$NEUTRON_PASSWORD register-endpoint $DEBUG -r $REGION -d "Neutron Service" neutron network -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
NORMAL_PORT=9292
SSL_PORT=${SSL:+13292}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$GLANCE_PASSWORD register-endpoint $DEBUG -r $REGION -d "Glance Image Service" glance image -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
NORMAL_PORT=8773
SSL_PORT=${SSL:+13773}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
register-endpoint $DEBUG -r $REGION -d "EC2 Compatibility Layer" ec2 ec2 -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
NORMAL_PORT=8774
SSL_PORT=${SSL:+13774}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$NOVA_PASSWORD register-endpoint $DEBUG -r $REGION -d "Nova Compute Service" nova compute -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
PASSWORD=$NOVA_PASSWORD register-endpoint $DEBUG -r $REGION -d "Nova Compute Service v3" nova computev3 -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
if [ -n "$CEILOMETER_PASSWORD" ]; then
# Updating Ceilometer to be like other services
NORMAL_PORT=8777
SSL_PORT=${SSL:+13777}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$CEILOMETER_PASSWORD register-endpoint $DEBUG -r $REGION -d "Ceilometer Service" ceilometer metering -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
fi
if [ -n "$CINDER_PASSWORD" ]; then
NORMAL_PORT=8776
SSL_PORT=${SSL:+13776}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$CINDER_PASSWORD register-endpoint $DEBUG -r $REGION -d "Cinder Volume Service" cinder volume -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
PASSWORD=$CINDER_PASSWORD register-endpoint $DEBUG -r $REGION -d "Cinder Volume Service V2" cinderv2 volumev2 -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
fi
if [ -n "$SWIFT_PASSWORD" ]; then
NORMAL_PORT=8080
SSL_PORT=${SSL:+13080}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
PASSWORD=$SWIFT_PASSWORD register-endpoint $DEBUG -r $REGION -d "Swift Object Storage Service" swift object-store -i ${INTERNAL_HOST}${NORMAL_PORT} ${PUBLIC_HOST}${SSL_PORT}
fi
if [ -n "$ENABLE_HORIZON" ]; then
# XXX: SSL not wired up yet.
register-endpoint $DEBUG -r $REGION -d "OpenStack Dashboard" horizon dashboard -i ${INTERNAL_HOST} ${INTERNAL_HOST}
fi
if [ -n "$IRONIC_PASSWORD" ]; then
# XXX: SSL not wired up yet.
PASSWORD=$IRONIC_PASSWORD register-endpoint $DEBUG -r $REGION -d "Ironic Service" ironic baremetal -i ${INTERNAL_HOST}6385 ${PUBLIC_HOST}6385
fi
if [ -n "$TUSKAR_PASSWORD" ]; then
# XXX: SSL not wired up yet.
PASSWORD=$TUSKAR_PASSWORD register-endpoint $DEBUG -r $REGION -d "Tuskar Service" tuskar management -i ${INTERNAL_HOST}8585 ${PUBLIC_HOST}8585
fi
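Every service block above repeats one port-selection idiom: `${SSL:+port}` yields the SSL port only when `$SSL` is non-empty, and `${SSL_PORT:-$NORMAL_PORT}` supplies the plain port otherwise. In isolation:

```shell
NORMAL_PORT=8004
# Without SSL: ${SSL:+13004} expands to nothing, so the fallback wins.
SSL=""
SSL_PORT=${SSL:+13004}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
plain=$SSL_PORT
# With SSL set to a hostname, the SSL port is chosen.
SSL=mysite.org
SSL_PORT=${SSL:+13004}
SSL_PORT=${SSL_PORT:-$NORMAL_PORT}
echo "$plain -> $SSL_PORT"
```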


@ -1 +0,0 @@
refresh-env


@ -1,74 +0,0 @@
#!/bin/bash
set -eu
BASE=$(dirname $0)/../
BRIDGE_SUFFIX=${1:-''} # support positional arg for legacy support
BRIDGE_NAMES='brbm'
VLAN_TRUNK_IDS=''
SCRIPT_NAME=$(basename $0)
function show_options {
echo "Usage: $SCRIPT_NAME [-n num] [-b space delimited bridge names ]"
echo
echo "Setup libvirt networking and OVS bridges for TripleO."
echo
echo " -n -- Bridge number/suffix. Added to all bridges."
echo " Useful when creating multiple environments"
echo " on the same machine."
echo " -b -- Space delimited list of baremetal bridge"
echo " name(s). Defaults to brbm."
echo
exit 1
}
TEMP=$(getopt -o h,n:,b: -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
show_options;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options ;;
-n) BRIDGE_SUFFIX="$2" ; shift 2 ;;
-b) BRIDGE_NAMES="$2" ; shift 2 ;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; show_options ;;
esac
done
function create_bridge {
local BRIDGE_NAME=$1
# Only add bridge if missing
(sudo ovs-vsctl list-br | grep ${BRIDGE_NAME}$) || sudo ovs-vsctl add-br ${BRIDGE_NAME}
# remove bridge before replacing it.
(virsh net-list --persistent | grep "${BRIDGE_NAME} ") && virsh net-destroy ${BRIDGE_NAME}
(virsh net-list --inactive --persistent | grep "${BRIDGE_NAME} ") && virsh net-undefine ${BRIDGE_NAME}
virsh net-define <(sed -e "s/%NETWORK_NAME%/$BRIDGE_NAME/" $BASE/templates/net.xml)
virsh net-autostart ${BRIDGE_NAME}
virsh net-start ${BRIDGE_NAME}
}
for NAME in $BRIDGE_NAMES; do
create_bridge "$NAME$BRIDGE_SUFFIX"
done
# start default if needed and configure it to autostart
default_net=$(sudo virsh net-list --all --persistent | grep default | awk 'BEGIN{OFS=":";} {print $2,$3}')
state=${default_net%%:*}
autostart=${default_net##*:}
if [ "$state" != "active" ]; then
virsh net-start default
fi
if [ "$autostart" != "yes" ]; then
virsh net-autostart default
fi


@ -1,99 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] FILENAME"
echo
echo "Generate passwords for devtest and write them out to a file"
echo "that can be sourced."
echo
echo "Options:"
echo " -f, --file -- Noop. For backwards compatibility only"
echo " -o, --overwrite -- Overwrite file if it already exists."
exit $1
}
FILE=
TEMP=`getopt -o hof -l help,overwrite,file -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-f | --file) shift 1 ;;
-o | --overwrite) OVERWRITE=--overwrite; shift 1 ;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
FILE=${FILE:-$1}
if [ -z "$FILE" ]; then
echo "ERROR: Must provide a filename"
exit 1
fi
OVERWRITE=${OVERWRITE:-""}
touch $FILE
# Make the file secure as reasonably possible.
chmod 0600 $FILE
if [ -n "$OVERWRITE" ]; then
echo -n "" > $FILE
fi
function generate_password {
local name=$1
if [ -z "$(grep "^$name=" $FILE)" ]; then
echo "$name=$(os-make-password)" >> $FILE
else
echo "Password $name in $FILE already exists, not overwriting."
echo "To overwrite all passwords in $FILE specify -o."
fi
}
PASSWORD_LIST="OVERCLOUD_ADMIN_PASSWORD
OVERCLOUD_ADMIN_TOKEN
OVERCLOUD_CEILOMETER_PASSWORD
OVERCLOUD_CEILOMETER_SECRET
OVERCLOUD_CINDER_PASSWORD
OVERCLOUD_DEMO_PASSWORD
OVERCLOUD_GLANCE_PASSWORD
OVERCLOUD_HEAT_PASSWORD
OVERCLOUD_HEAT_STACK_DOMAIN_PASSWORD
OVERCLOUD_NEUTRON_PASSWORD
OVERCLOUD_NOVA_PASSWORD
OVERCLOUD_SWIFT_HASH
OVERCLOUD_SWIFT_PASSWORD"
for name in $PASSWORD_LIST; do
generate_password $name
done
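`generate_password` is idempotent because of the grep guard: re-running the script leaves existing entries alone unless `-o` wiped the file first. A sketch with `os-make-password` stubbed out, since it may not be on PATH here:

```shell
FILE=$(mktemp)
chmod 0600 "$FILE"
generate_password() {
    local name=$1
    # Only append when no line already starts with "name=".
    if [ -z "$(grep "^$name=" "$FILE")" ]; then
        echo "$name=stub-password" >> "$FILE"
    fi
}
generate_password OVERCLOUD_DEMO_PASSWORD
generate_password OVERCLOUD_DEMO_PASSWORD   # no-op on the second call
grep -c '^OVERCLOUD_DEMO_PASSWORD=' "$FILE"
```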


@ -1,133 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
PATH=$PATH:/usr/sbin:/sbin
# Some defaults
ARCH=i386
BRIDGE=brbm
OVSBRIDGE=
MEMORY=2097152
CPUS=1
export IMAGE_NAME=seed
LIBVIRT_NIC_DRIVER=${LIBVIRT_NIC_DRIVER:-"virtio"}
LIBVIRT_DISK_BUS_TYPE=${LIBVIRT_DISK_BUS_TYPE:-"sata"}
function show_options {
echo "Usage: $SCRIPT_NAME [options] <element> [<element> ...]"
echo
echo "Create a VM definition for the seed VM."
echo "See ../scripts/devtest.sh"
echo
echo "Options:"
echo " -a i386|amd64 -- set the architecture of the VM (i386)"
echo " -o name -- set the name of the VM and image file"
echo " (seed) - must match that from boot-seed-vm"
echo " -m memory -- define amount of memory to use"
echo " -c cpus -- define number of CPUs to use"
echo " -b bridge -- define a baremetal bridge to use"
echo " -p bridge -- define an ovs bridge to use for the public interface"
echo " -e engine -- set the virt engine to use"
echo " (defaults to kvm if available, otherwise"
echo " qemu)"
echo
exit $1
}
TEMP=`getopt -o ha:o:m:c:b:p:e: -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-a) export ARCH=$2; shift 2 ;;
-o) export IMAGE_NAME=$2; shift 2 ;;
-m) export MEMORY=$2; shift 2 ;;
-c) export CPUS=$2; shift 2 ;;
-b) export BRIDGE=$2; shift 2 ;;
-p) export OVSBRIDGE=$2; shift 2 ;;
-e) export ENGINE=$2; shift 2 ;;
-h) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
EXTRA_ARGS=${1:-''}
if [ -n "$EXTRA_ARGS" ]; then
show_options 1
fi
if [[ -z "$ENGINE" ]]; then
if [ -d /sys/module/kvm ]; then
ENGINE=kvm
else
ENGINE=qemu
if test -r /proc/cpuinfo && grep -q "vmx\|svm" /proc/cpuinfo; then
echo 'CPU supports virtualization but the kvm module is not loaded.'
fi
echo 'Using qemu as the virtualization engine. Warning: things will be extremely slow.'
fi
fi
SEED_ARCH=
case $ARCH in
i386) SEED_ARCH='i686'; ;;
amd64|x86_64) SEED_ARCH='x86_64'; ;;
*) echo "Unsupported arch $ARCH!" ; exit 1 ;;
esac
which virsh >/dev/null || { echo "Error: virsh not found in path" >&2; exit 1; }
sudo virsh destroy $IMAGE_NAME 2>/dev/null || echo "$IMAGE_NAME VM not running"
sudo virsh undefine $IMAGE_NAME --managed-save 2>/dev/null || echo "$IMAGE_NAME VM not defined"
sudo touch /var/lib/libvirt/images/$IMAGE_NAME.qcow2
EXTRAOPTS=
if [ -n "$OVSBRIDGE" ] ; then
EXTRAOPTS="--ovsbridge $OVSBRIDGE"
fi
if [[ $DIB_COMMON_ELEMENTS == *enable-serial-console* ]]; then
EXTRAOPTS="${EXTRAOPTS} --enable-serial-console"
fi
configure-vm $EXTRAOPTS \
--name $IMAGE_NAME \
--image /var/lib/libvirt/images/$IMAGE_NAME.qcow2 \
--diskbus $LIBVIRT_DISK_BUS_TYPE \
--baremetal-interface $BRIDGE \
--engine $ENGINE \
--arch $SEED_ARCH \
--memory $MEMORY \
--cpus $CPUS \
--libvirt-nic-driver $LIBVIRT_NIC_DRIVER \
--seed
MAC=$(sudo virsh dumpxml $IMAGE_NAME | grep "mac address" | head -1 | awk -F "'" '{print $2}')
echo "Seed VM created with MAC ${MAC}"
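The MAC extraction relies on libvirt emitting single-quoted XML attributes, which is why awk splits on the single quote. With a canned fragment in place of a live `virsh dumpxml`:

```shell
# One line as virsh dumpxml emits it (single quotes around the value).
xml="<mac address='52:54:00:aa:bb:cc'/>"
# Split on the single quote: field 2 is the address itself.
MAC=$(echo "$xml" | grep "mac address" | head -1 | awk -F "'" '{print $2}')
echo "$MAC"
```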


@ -1,96 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] FILENAME"
echo
echo "Generate passwords for devtest and write them out to a file"
echo "that can be sourced."
echo
echo "Options:"
echo " -f, --file -- Noop. For backwards compatibility only"
echo " -o, --overwrite -- Overwrite file if it already exists."
exit $1
}
FILE=
TEMP=`getopt -o hof -l help,overwrite,file -n $SCRIPT_NAME -- "$@"`
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-f | --file) shift 1 ;;
-o | --overwrite) OVERWRITE=--overwrite; shift 1 ;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
FILE=${FILE:-$1}
if [ -z "$FILE" ]; then
echo "ERROR: Must provide a filename"
exit 1
fi
OVERWRITE=${OVERWRITE:-""}
touch $FILE
# Make the file secure as reasonably possible.
chmod 0600 $FILE
if [ -n "$OVERWRITE" ]; then
echo -n "" > $FILE
fi
function generate_password {
local name=$1
if [ -z "$(grep "^$name=" $FILE)" ]; then
echo "$name=$(os-make-password)" >> $FILE
else
echo "Password $name in $FILE already exists, not overwriting."
echo "To overwrite all passwords in $FILE specify -o."
fi
}
PASSWORD_LIST="UNDERCLOUD_ADMIN_TOKEN
UNDERCLOUD_ADMIN_PASSWORD
UNDERCLOUD_CEILOMETER_PASSWORD
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD
UNDERCLOUD_GLANCE_PASSWORD
UNDERCLOUD_HEAT_PASSWORD
UNDERCLOUD_NEUTRON_PASSWORD
UNDERCLOUD_NOVA_PASSWORD
UNDERCLOUD_IRONIC_PASSWORD
UNDERCLOUD_TUSKAR_PASSWORD"
for name in $PASSWORD_LIST; do
generate_password $name
done


@ -1,73 +0,0 @@
#!/bin/bash
#
# Copyright 2012 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Initial cut - no functions, JFDI.
if [ -z "$1" ]; then
echo "No host supplied" >&2
exit 1
fi
if [ -z "$2" ]; then
echo "No image id supplied" >&2
exit 1
fi
#ref image=481ddf40-8f9c-4175-a993-c11b070d6653
# NOT SAFE against /etc races.
commands="sudo su -
apt-get -y install python-pip qemu-utils
pip install python-glanceclient
rm /tmp/image.qcow2
http_proxy= /usr/local/bin/glance -v --os-username demo --os-password nomoresecrete --os-tenant-name demo --os-auth-url http://glance.tripleo.org:5000/v2.0 --os-image-url http://glance.tripleo.org:9292/ image-download $2 --file /tmp/image.qcow2
ls -lh /tmp/
modprobe nbd max_part=16
rmdir /tmp/newimage
mkdir -p /tmp/newimage
qemu-nbd -c /dev/nbd1 /tmp/image.qcow2
mount /dev/nbd1 /tmp/newimage
rm -rf /tmp/recover
mkdir -p /tmp/recover/ssh
cp -ta /tmp/recover /etc/mtab /etc/hosts
cp -a /etc/ssh/ssh_host_*key* /tmp/recover/ssh/
[ -e "/tmp/newimage/boot" ] && rsync -axHAXv /tmp/newimage/ / --exclude=/tmp --delete-after | tee -a /tmp/rsync.log
cp -at /etc /tmp/recover/*
# Rewrites e.g. /dev/nbd0 -> a FS UUID from the taken over system
# XXX: TODO: Relabel the taken over system rootfs label to match root=LABEL=cloudimg-rootfs
# XXX: TODO: make the built images use the label, not the device.
update-grub
grub-install /dev/vda
reboot -n
"
# Rewrites e.g. /dev/nbd0 -> a FS UUID from the taken over system
echo "$commands" | ssh ubuntu@$1
# TODO:
# permit either:
# reboot -n
# or (staying online)
# retrigger cloud-init
# then free up the device....
# apt-get install qemu-utils
# sudo umount /tmp/newimage
# sudo qemu-nbd -d /dev/nbd1
#
#ssh stack@host / ubuntu@host?
# for the bootstrap image:
# Add eth1 via modprobe dummy && dummy0 - edit localrc and /etc/network/interfaces
#sudo ifup dummy0
#tripleo-incubator/scripts/demo
#$profit


@ -1,74 +0,0 @@
#!/bin/bash
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eux
LOGFILE=undercloud-debug.log
exec > >(tee $LOGFILE)
exec 2>&1
OS_AUTH_URL=${OS_AUTH_URL:-""}
if [ -z "$OS_AUTH_URL" ]; then
echo "You must source a stackrc file for the Undercloud."
exit 1
fi
nova list
for i in $(nova list | head -n -1 | tail -n +4 | awk '{print $2}'); do nova show $i; done
nova flavor-list
for f in $(nova flavor-list | head -n -1 | tail -n +4 | awk '{print $2}'); do nova flavor-show $f; done
nova quota-show
nova hypervisor-list
nova hypervisor-stats
nova service-list
ironic node-list
for n in $(ironic node-list | head -n -1 | tail -n +4 | awk '{print $2}'); do ironic node-show $n; done
for n in $(ironic node-list | head -n -1 | tail -n +4 | awk '{print $2}'); do ironic node-port-list $n; done
glance image-list
for i in $(glance image-list | head -n -1 | tail -n +4 | awk '{print $2}'); do glance image-show $i; done
heat stack-list
if heat stack-list | grep overcloud; then
heat stack-show overcloud
heat resource-list -n 10 overcloud
for failed_deployment in $(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep -E 'OS::Heat::SoftwareDeployment |OS::Heat::StructuredDeployment ' | cut -d '|' -f 3); do
echo $failed_deployment;
heat deployment-show $failed_deployment;
done
fi
keystone endpoint-list
keystone catalog
neutron quota-list
neutron net-list
neutron port-list
neutron agent-list
sudo ovs-vsctl show
sudo ovs-ofctl dump-flows br-ctlplane
set +x
echo
echo
echo "###############################################################"
echo "# All output saved to undercloud-debug.log"
echo "# Finished."
echo "###############################################################"
exit
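The failed-deployment loop greps whole table rows before cutting out the physical resource id. Against one canned row of `heat resource-list` output (column layout assumed: name | id | type | status):

```shell
# A made-up FAILED row; note the space after the resource type, which
# the grep -E patterns above rely on to avoid partial matches.
row='| ControllerDeploy | 1234-abcd | OS::Heat::SoftwareDeployment | CREATE_FAILED |'
id=$(echo "$row" | grep FAILED \
    | grep -E 'OS::Heat::SoftwareDeployment |OS::Heat::StructuredDeployment ' \
    | cut -d '|' -f 3)
echo "$id"
```

The extracted field keeps the table's padding spaces, which `heat deployment-show` tolerates.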


@ -1,91 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME"
echo
echo "Pull the latest tripleo-cd-admin ssh keys into a user account."
echo
echo "Assumes it is running as that user."
echo
echo "Options:"
echo " -u|--users -- Update ssh keys for individual user accounts"
echo " instead of the root account."
echo " -h|--help -- This help."
echo
exit $1
}
TEMP=$(getopt -o hu -l help,users -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
INDIVIDUAL_USERS=
while true ; do
case "$1" in
-h|--help) show_options 0;;
-u|--users) shift ; INDIVIDUAL_USERS=1;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
if [ -n "${1:-}" ]; then
show_options 1
fi
cd ~
mkdir -p .ssh
chmod 0700 .ssh
mkdir -p .cache/tripleo-cd
# Get the keys
cd .cache/tripleo-cd
if [ ! -d tripleo-incubator ]; then
git clone https://git.openstack.org/openstack/tripleo-incubator
cd tripleo-incubator
else
cd tripleo-incubator
git pull
fi
TMP_SSH_KEYS=$(mktemp)
for FILE in tripleo-cloud/ssh-keys/*; do
if [ -n "$INDIVIDUAL_USERS" ]; then
USER=$(basename $FILE)
if ! getent passwd $USER &>/dev/null; then
useradd --create-home --user-group $USER
fi
eval mkdir -p ~$USER/.ssh
eval chown -R $USER:$USER ~$USER/.ssh
eval chmod 700 ~$USER/.ssh
eval cp -f $FILE ~$USER/.ssh/authorized_keys
eval chmod 600 ~$USER/.ssh/authorized_keys
touch /etc/sudoers.d/$USER
chmod 0440 /etc/sudoers.d/$USER
echo "$USER ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/$USER
else
cat $FILE >> $TMP_SSH_KEYS
fi
done
if [ -z "$INDIVIDUAL_USERS" ]; then
# Allow tripleo-incubator stuff that wants to add local keys...
# they'll get wiped on the next run (and obviously aren't relevant for bm
# access).
chmod 0600 $TMP_SSH_KEYS
mv $TMP_SSH_KEYS ~/.ssh/authorized_keys
else
# in individual users mode, validate the sudoers files we just wrote
visudo -c -q
rm $TMP_SSH_KEYS
fi
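The key-collection branch above (the non-`--users` path) can be sketched standalone, with a throwaway directory standing in for tripleo-cloud/ssh-keys/ and fake key material:

```shell
#!/bin/bash
set -eu
# Throwaway directory playing the role of tripleo-cloud/ssh-keys/;
# the key strings are fake placeholders, not real public keys.
keydir=$(mktemp -d)
echo 'ssh-rsa AAAAfake1 alice' > "$keydir/alice"
echo 'ssh-rsa AAAAfake2 bob' > "$keydir/bob"

# Concatenate one file per user into a single authorized_keys candidate,
# then lock down its permissions before it replaces the real file.
TMP_SSH_KEYS=$(mktemp)
for FILE in "$keydir"/*; do
    cat "$FILE" >> "$TMP_SSH_KEYS"
done
chmod 0600 "$TMP_SSH_KEYS"
echo "collected $(wc -l < "$TMP_SSH_KEYS") keys"
rm -rf "$keydir"
```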

View File

@ -1,5 +0,0 @@
#!/bin/bash
set -eu
# Assumes nova etc are on PATH.
nova keypair-add --pub-key ~/.ssh/id_rsa.pub default

View File

@ -1,179 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Red Hat
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e # exit on the first non-zero status
set -u # exit on unset variables
set -o pipefail
SCRIPT_NAME=$(basename $0)
function show_options {
EXITVAL=${1:-1}
echo "Usage: $SCRIPT_NAME [-h] [-w TIMEOUT] [-l LOOP_COUNT] [-f FAIL_MATCH] [-s SUCCESS_MATCH] --delay SLEEP_TIME -- COMMAND"
echo
echo "Waits for a command to fail, succeed, or timeout."
echo
echo "Options:"
echo " -h,--help -- this help"
echo " -w,--walltime TIMEOUT -- Timeout after TIMEOUT seconds."
echo " -l,--looptimeout LOOP_COUNT -- Timeout after checking COMMAND LOOP_COUNT times."
echo " -d,--delay SLEEP_TIME -- Seconds to sleep between checks of COMMAND."
echo " -s,--success-match -- Output that indicates a success."
echo " -f,--fail-match -- Output that indicates a short-circuit failure."
echo
echo "Execute the command in a loop until it succeeds, a timeout is reached, or"
echo "a short-circuit failure occurs. Between each check of the command sleep for"
echo "the number of seconds specified by SLEEP_TIME."
echo
echo "Examples:"
echo " wait_for -w 300 --delay 10 -- ping -c 1 192.0.2.2"
echo " wait_for -w 10 --delay 1 -- ls file_we_are_waiting_for"
echo " wait_for -w 30 --delay 3 -- date \| grep 8"
echo " wait_for -w 300 --delay 10 --fail-match CREATE_FAILED -- heat stack-show undercloud"
echo " wait_for -w 300 --delay 10 --success-match CREATE_COMPLETE -- heat stack-show undercloud"
exit $EXITVAL
}
USE_WALLTIME=
TIMEOUT=
DELAY=
if [ -n "${SUCCESSFUL_MATCH_OUTPUT:-}" ]; then
echo "DEPRECATION WARNING: Using env vars for specifying SUCCESSFUL_MATCH_OUTPUT is deprecated."
fi
SUCCESSFUL_MATCH_OUTPUT=${SUCCESSFUL_MATCH_OUTPUT:-""}
if [ -n "${FAIL_MATCH_OUTPUT:-}" ]; then
echo "DEPRECATION WARNING: Using env vars for specifying FAIL_MATCH_OUTPUT is deprecated."
fi
FAIL_MATCH_OUTPUT=${FAIL_MATCH_OUTPUT:-""}
USE_ARGPARSE=0
# We have to support positional arguments for backwards compat
if [ -n "${1:-}" ] && [ "${1:0:1}" == "-" ]; then
USE_ARGPARSE=1
else
echo "DEPRECATION WARNING: Using positional arguments for wait_for is deprecated."
fi
if [ $USE_ARGPARSE -eq 1 ]; then
set +e
TEMP=$(getopt -o h,w:,l:,d:,s:,f: -l help,walltime:,looptimeout:,delay:,success-match:,fail-match: -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
show_options;
fi
set -e
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h) show_options 0;;
--help) show_options 0;;
-w|--walltime) [ -n "$USE_WALLTIME" ] && show_options
USE_WALLTIME=1
TIMEOUT="$2"
shift 2
;;
-l|--looptimeout) [ -n "$USE_WALLTIME" ] && show_options
USE_WALLTIME=0
TIMEOUT="$2"
shift 2
;;
-d|--delay) DELAY="$2"; shift 2;;
-s|--success-match) SUCCESSFUL_MATCH_OUTPUT="$2"; shift 2;;
-f|--fail-match) FAIL_MATCH_OUTPUT="$2"; shift 2;;
--) shift ; break ;;
esac
done
else
TIMEOUT=${1:-""}
DELAY=${2:-""}
USE_WALLTIME=0
shift 2 || true
fi
COMMAND="$@"
if [ -z "$TIMEOUT" -o -z "$DELAY" -o -z "$COMMAND" ]; then
show_options
fi
ENDTIME=$(($(date +%s) + $TIMEOUT))
TIME_REMAINING=0
function update_time_remaining {
CUR_TIME="$(date +%s)"
TIME_REMAINING=$(($ENDTIME - $CUR_TIME))
}
OUTPUT=
function check_cmd {
STATUS=0
OUTPUT=$(eval $COMMAND 2>&1) || STATUS=$?
if [[ -n "$SUCCESSFUL_MATCH_OUTPUT" ]] \
&& [[ $OUTPUT =~ $SUCCESSFUL_MATCH_OUTPUT ]]; then
exit 0
elif [[ -n "$FAIL_MATCH_OUTPUT" ]] \
&& [[ $OUTPUT =~ $FAIL_MATCH_OUTPUT ]]; then
echo "Command output matched '$FAIL_MATCH_OUTPUT'. Exiting..."
exit 1
elif [[ -z "$SUCCESSFUL_MATCH_OUTPUT" ]] && [[ $STATUS -eq 0 ]]; then
# The command successfully completed and we aren't testing against
# its output so we have finished waiting.
exit 0
fi
}
i=0
while [ $USE_WALLTIME -eq 1 -o $i -lt $TIMEOUT ]; do
if [ $USE_WALLTIME -eq 1 ]; then
update_time_remaining
if [ $TIME_REMAINING -le 0 ]; then
break
fi
else
i=$((i + 1))
fi
check_cmd
if [ $USE_WALLTIME -eq 1 ]; then
update_time_remaining
if [ $TIME_REMAINING -lt $DELAY ]; then
if [ $TIME_REMAINING -gt 0 ]; then
sleep $TIME_REMAINING
check_cmd
fi
else
sleep $DELAY
fi
else
sleep $DELAY
fi
done
if [ $USE_WALLTIME -eq 1 ]; then
TOTAL_SECONDS=$TIMEOUT
else
TOTAL_SECONDS=$((TIMEOUT * DELAY))
fi
printf 'Timing out after %d seconds:\nCOMMAND=%s\nOUTPUT=%s\n' \
"$TOTAL_SECONDS" "$COMMAND" "$OUTPUT"
exit 1
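The poll/match/timeout loop that wait_for implements can be sketched minimally with the polled command stubbed out, so it runs anywhere; here the "service" is simulated to become ready on the third check:

```shell
#!/bin/bash
# Stub for the polled COMMAND: succeeds from the third invocation onward.
checks=0
check_ready() {
    checks=$((checks + 1))
    [ $checks -ge 3 ]
}

# Walltime-style loop: re-check until success or until the deadline passes.
deadline=$(( $(date +%s) + 30 ))
result=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
    if check_ready; then
        result=success
        break
    fi
    sleep 1
done
echo "$result after $checks checks"
```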

View File

@ -1,67 +0,0 @@
#!/bin/bash
set -eu
set -o pipefail
SCRIPT_NAME=$(basename $0)
function show_options {
echo "Usage: $SCRIPT_NAME [<nodes>] [options]"
echo
echo "Waits for \`nova hypervisor-stats\` to show some memory + vcpus are available."
echo
echo "Positional arguments:"
echo " nodes -- The number of nodes to wait for, defaults to 1."
echo " memory -- The amount of memory to wait for in MB,"
echo " defaults to the amount of memory for the"
echo " baremetal flavor times the number of nodes."
echo " vcpus -- The number of vcpus to wait for,"
echo " defaults to the number of vcpus for the"
echo " baremetal flavor times the number of nodes."
echo
echo "Options:"
echo " -h / --help -- this help"
echo
exit $1
}
TEMP=$(getopt -o h -l help -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2
exit 1
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
EXPECTED_NODES=${1:-1}
EXPECTED_MEM=${2:-""}
EXPECTED_VCPUS=${3:-""}
# NOTE(bnemec): If/when we have more flavors, this will need
# to be expanded.
if [ -z "$EXPECTED_VCPUS" ]; then
FLAVOR=$(nova flavor-show baremetal)
VCPUS=$(echo "$FLAVOR" | awk '$2=="vcpus" { print $4 }')
EXPECTED_VCPUS=$(($VCPUS*$EXPECTED_NODES))
fi
if [ -z "$EXPECTED_MEM" ]; then
FLAVOR=$(nova flavor-show baremetal)
MEM=$(echo "$FLAVOR" | awk '$2=="ram" { print $4 }')
EXPECTED_MEM=$(($MEM*$EXPECTED_NODES))
fi
nova hypervisor-stats | awk '
$2=="count" && $4 >= '"$EXPECTED_NODES"' { c++ };
$2=="memory_mb" && $4 >= '"$EXPECTED_MEM"' { c++ };
$2=="vcpus" && $4 >= '"$EXPECTED_VCPUS"' { c++ };
END { if (c != 3) exit 1 }'
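The awk threshold check above can be exercised against canned input; the table below is fabricated `nova hypervisor-stats` output (property/value rows in the usual two-column table):

```shell
#!/bin/bash
# Fabricated sample standing in for `nova hypervisor-stats`.
stats='+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 2     |
| memory_mb            | 8192  |
| vcpus                | 4     |
+----------------------+-------+'

EXPECTED_NODES=2
EXPECTED_MEM=8192
EXPECTED_VCPUS=4

# Same awk program as above: count how many thresholds are met and
# exit non-zero unless all three are.
if echo "$stats" | awk '
    $2=="count" && $4 >= '"$EXPECTED_NODES"' { c++ };
    $2=="memory_mb" && $4 >= '"$EXPECTED_MEM"' { c++ };
    $2=="vcpus" && $4 >= '"$EXPECTED_VCPUS"' { c++ };
    END { if (c != 3) exit 1 }'; then
    capacity_status="capacity available"
else
    capacity_status="still waiting"
fi
echo "$capacity_status"
```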

View File

@ -1,43 +0,0 @@
#!/bin/bash
#
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -eu
SCRIPT_NAME=$(basename $0)
USE_WALLTIME="-l"
if [ "${1:-}" = "-w" ]; then
USE_WALLTIME="-w"
shift 1
fi
LOOPS=${1:-""}
SLEEPTIME=${2:-""}
STACK_NAME=${3:-""}
if [ -z "$LOOPS" -o -z "$SLEEPTIME" -o -z "$STACK_NAME" ]; then
echo "Usage: $SCRIPT_NAME [-w] LOOPS_NUMBER SLEEP_TIME STACK_NAME"
exit 1
fi
SUCCESSFUL_MATCH_OUTPUT="(CREATE|UPDATE)_COMPLETE"
FAIL_MATCH_OUTPUT="(CREATE|UPDATE)_FAILED"
wait_for $USE_WALLTIME $LOOPS --delay $SLEEPTIME \
--success-match $SUCCESSFUL_MATCH_OUTPUT \
--fail-match $FAIL_MATCH_OUTPUT -- \
"heat stack-show $STACK_NAME | awk '/stack_status / { print \$4 }'"
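The status extraction that wait_for polls above can be tried against canned input; the two rows below are fabricated `heat stack-show` output:

```shell
#!/bin/bash
# Fabricated sample standing in for `heat stack-show overcloud`.
stack_show='| stack_name | overcloud |
| stack_status | CREATE_COMPLETE |'

# Same extraction as the polled command, then the same success regex.
stack_status=$(echo "$stack_show" | awk '/stack_status / { print $4 }')
if [[ $stack_status =~ (CREATE|UPDATE)_COMPLETE ]]; then
    echo "stack ready: $stack_status"
fi
```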

View File

@ -1,133 +0,0 @@
#!/bin/bash
#
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
set -o pipefail
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
function show_options {
echo "Usage: $SCRIPT_NAME [options] FILENAME"
echo
echo "Write devtest defined environment variables to a file."
echo
echo "Creates a tripleorc file that can be sourced later to restore"
echo "environment variables that are defined by devtest.md"
echo
echo "Options:"
echo " -f, --file -- Noop. For backwards compatibility only"
echo " -o, --overwrite -- Overwrite file if it already exists."
exit $1
}
FILE=
TEMP=$(getopt -o hof -l help,overwrite,file -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ]; then
echo "Terminating..." >&2;
exit 1;
fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-f | --file) shift 1 ;;
-o | --overwrite) OVERWRITE=--overwrite; shift 1 ;;
-h | --help) show_options 0;;
--) shift ; break ;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
FILE=${FILE:-$1}
if [ -z "$FILE" ]; then
echo "ERROR: Must provide a filename"
exit 1
fi
OVERWRITE=${OVERWRITE:-""}
# Don't overwrite $FILE if it already exists and the overwrite option
# wasn't provided.
if [ -f $FILE -a -z "$OVERWRITE" ]; then
echo $FILE exists, not overwriting.
echo Either delete the file first, or specify -o
exit 1
fi
rm -f $FILE
touch $FILE
ENV_VARS="
DEPLOY_IMAGE_ELEMENT
DEPLOY_NAME
DIB_COMMON_ELEMENTS
ELEMENTS_PATH
LIBVIRT_DEFAULT_URI
LIBVIRT_DISK_BUS_TYPE
LIBVIRT_NIC_DRIVER
LIBVIRT_VOL_POOL
NODE_CNT
NODE_DIST
OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS
OVERCLOUD_BLOCKSTORAGESCALE
OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS
OVERCLOUD_COMPUTESCALE
OVERCLOUD_CONTROL_DIB_EXTRA_ARGS
OVERCLOUD_CONTROLSCALE
OVERCLOUD_LIBVIRT_TYPE
ROOT_DISK
SEED_DIB_EXTRA_ARGS
TE_DATAFILE
TRIPLEO_ROOT
UNDERCLOUD_DIB_EXTRA_ARGS
USE_UNDERCLOUD_UI"
for env_var in $ENV_VARS; do
if [ ! -z "${!env_var}" ]; then
echo export $env_var=\"${!env_var}\" >> $FILE
fi
done
# Also write out updated $PATH and $ELEMENTS_PATH
if [ -n "$TRIPLEO_ROOT" ]; then
# Add a newline for some clarity in the tripleorc file.
echo >> $FILE
# When tripleorc is later sourced, we only want to update $PATH and
# $ELEMENTS_PATH if they haven't already been updated. Otherwise, we will
# keep making them longer each time tripleorc is sourced.
cat >> $FILE <<EOF
SCRIPTS_PATH=\$TRIPLEO_ROOT/tripleo-incubator/scripts
if [[ ! "\$PATH" =~ (^|:)"\$SCRIPTS_PATH"(:|$) ]]; then
export PATH=\$TRIPLEO_ROOT/tripleo-incubator/scripts:\$PATH
fi
TIE_PATH=\$TRIPLEO_ROOT/tripleo-image-elements/elements
if [[ ! "\${ELEMENTS_PATH:-}" =~ (^|:)"\$TIE_PATH"(:|$) ]]; then
export ELEMENTS_PATH=\$TIE_PATH\${ELEMENTS_PATH:+":\$ELEMENTS_PATH"}
fi
source devtest_variables.sh
EOF
fi
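The regex guard written into tripleorc above exists so that sourcing the file repeatedly does not keep prepending to $PATH. A standalone demonstration, using a made-up directory name purely for illustration:

```shell
#!/bin/bash
SCRIPTS_PATH=/opt/demo/scripts    # hypothetical path, for illustration only

# Source the guard three times; only the first pass should prepend.
for i in 1 2 3; do
    if [[ ! "$PATH" =~ (^|:)"$SCRIPTS_PATH"(:|$) ]]; then
        export PATH=$SCRIPTS_PATH:$PATH
    fi
done

# Count occurrences of the entry; the guard keeps it at exactly one.
count=$(echo "$PATH" | tr ':' '\n' | grep -cx "$SCRIPTS_PATH")
echo "occurrences: $count"
```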

8
seedrc
View File

@ -1,8 +0,0 @@
export NOVA_VERSION=1.1
export OS_PASSWORD=unset
export OS_AUTH_URL=http://$(os-apply-config -m $TE_DATAFILE --key baremetal-network.seed.ip --type raw --key-default '192.0.2.1'):5000/v2.0
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export COMPUTE_API_VERSION=1.1
export OS_NO_CACHE=True
export OS_CLOUDNAME=seed

View File

@ -1,30 +0,0 @@
[metadata]
name = tripleo-incubator
author = OpenStack
author-email = openstack-dev@lists.openstack.org
summary = Incubator for TripleO
description-file =
README.rst
home-page = http://docs.openstack.org/developer/tripleo-incubator/
classifier =
Environment :: OpenStack
Intended Audience :: Developers
Intended Audience :: Information Technology
License :: OSI Approved :: Apache Software License
Operating System :: OS Independent
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[wheel]
universal = 1
[pbr]
warnerrors = True

View File

@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

View File

@ -1,37 +0,0 @@
<domain type='%(engine)s'>
<name>%(name)s</name>
<memory unit='KiB'>%(memory)s</memory>
<vcpu>%(cpus)s</vcpu>
<cpu mode='host-passthrough' />
<os>
<type arch='%(arch)s'>hvm</type>
<boot dev='%(bootdev)s'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<controller type='scsi' model='virtio-scsi' index='0'/>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='unsafe'/>
<source file='%(imagefile)s'/>
<target dev='sda' bus='%(diskbus)s'/>
</disk>
%(network)s
%(bm_network)s
%(enable_serial_console)s
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<video>
<model type='cirrus' vram='9216' heads='1'/>
</video>
</devices>
</domain>

View File

@ -1,6 +0,0 @@
<network>
<name>%NETWORK_NAME%</name>
<forward mode='bridge'/>
<bridge name='%NETWORK_NAME%'/>
<virtualport type='openvswitch'/>
</network>
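The %NETWORK_NAME% token in the template above is a plain text placeholder; a minimal sketch of rendering it with sed (the network name "brbm" is an arbitrary example):

```shell
#!/bin/bash
# Inline copy of the network template, with the same placeholder token.
template='<network>
  <name>%NETWORK_NAME%</name>
  <forward mode="bridge"/>
  <bridge name="%NETWORK_NAME%"/>
  <virtualport type="openvswitch"/>
</network>'

# Substitute every occurrence of the placeholder with the chosen name.
rendered=$(echo "$template" | sed 's/%NETWORK_NAME%/brbm/g')
echo "$rendered"
```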

View File

@ -1,2 +0,0 @@
oslosphinx
sphinx>=1.5.1

24
tox.ini
View File

@ -1,24 +0,0 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = docs,pep8
[testenv]
usedevelop = True
install_command = pip install {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:pep8]
deps = bashate
whitelist_externals = bash
commands = bash -c "./run-bashate.sh"
[flake8]
exclude = .tox

View File

@ -1,41 +0,0 @@
This is a staging area for tools and information related to the
[https://wiki.openstack.org/wiki/TripleO/TripleOCloud production-quality cloud]
that the TripleO program is running in a continuous-delivery fashion.
Currently found here:
* tripleo-cd-admins: A list (ircname/username,email,human name,comment)
of people permitted root access to the tripleo cloud. This is used for
recording details and for automatically creating admin (and regular user)
accounts. Our convention is use the IRC name as the username for ssh
access.
* ssh-keys: (directory) SSH keys for TripleO CD Admins. The file names
in this directory correspond to the IRC/username in the tripleo-cd-admins
file. Multiple SSH keys may be listed in each file for a given user.
* tripleo-cd-users: A list of users of the TripleO CD overcloud - either
TripleO ATCs or other people to whom the TripleO PTL has granted access to
the cloud. This is used to populate users on the cloud automatically, and
new ATCs should ask for access by submitting a review to add their details.
The comment field should list why non-ATCs have access.
The script update-admin-ssh-keys will copy the keys collected under ssh-keys/
on top of the authorized\_keys file for the current user - making it an easy
way to self-maintain (as long as you trust the SSL infrastructure to ensure
the right repo is being copied :)).
Policy on adding / removing people:
- get consensus/supermajority for adds from existing tripleo-cd-admins members.
- remove people at their own request or if idle for an extended period.
Implementation of adding / removing people:
- Ssh into the seed VM host and add / remove a user for them.
- Ssh into the seed VM and update the root authorized-keys likewise.
- Update the 'default' keyring on the CD seed 'admin' user to the current
keyring here.
- Ssh into cd-undercloud.tripleo.org and update the heat-admin authorized-keys
file.
- Update the 'default' keyring on the CD undercloud 'admin' user to the
current keyring here.
- Add them to https://docs.google.com/spreadsheet/ccc?key=0AlLkXwa7a4bpdERqN0p5RjNMQUJJeDdhZ05fRVUxUnc&usp=sharing

View File

@ -1 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDB4ARQm2NlwqBvegBEQuFNEFnlydjwyQJzLbkxPcUTqBDBtqUvsCsRyxkJOgJZoXr/jNPGBxTaw7hLojdGVfS24U0Av2iS9Gq1wteDW271dI5hfTPgiawaSsbaz5sd07LwdfbcXg+TO1cdWRuA33bH5mNtUP7A3VTSHJ3hXO0LHwjM+jUbdmRdUEtu8IKtKZoVXOOYY9FOb0xPNbxaex0WZV70KOrbZiV5+0DkTXIyw+MLE2TZWUun7VLpaQ5qgprUx7zGe54JWv5QUWkFPx0dWy0T3hNOD3r9TKf8ieESiguQK28AXtagEqbXLQx15zfKGHF3VAn2Wcn11+KMBalD GheRivero

Some files were not shown because too many files have changed in this diff.