Retire training-labs: remove repo content

There is no maintainer for training-labs, and it has been
decided to retire the project [1].

Depends-On: https://review.opendev.org/c/openstack/project-config/+/817502

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025586.html

Change-Id: I02ef4109509b4a6b87979aedca367ca7f9dabc10
Ghanshyam Mann 2021-11-10 19:28:35 -06:00
parent 2585b12b87
commit e78d74f105
223 changed files with 11 additions and 21875 deletions

.gitignore
@@ -1,72 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
.eggs
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
.DS_Store
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
# Others
*.log
*.sqlite
tenvironment
.ropeproject/
# Labs artifacts
labs/osbash/autostart/
labs/osbash/img/
labs/osbash/log/
labs/osbash/wbatch/
labs/osbash/lib/vagrant-ssh-keys/
labs/osbash/test_tmp/
labs/autostart/
labs/img/
labs/log/
labs/wbatch/

@@ -1,17 +0,0 @@
- project:
    check:
      jobs:
        - training-labs-scripts
    gate:
      jobs:
        - training-labs-scripts
    post:
      jobs:
        - publish-training-labs-scripts

- job:
    name: training-labs-scripts
    description: |
      Build scripts for training-labs repository.
    parent: unittests
    run: playbooks/scripts/run.yaml

@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/labs

@@ -1,134 +0,0 @@
Contributing to training-labs scripts
=====================================
First things first
------------------
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
General
-------
Contributing code to training-labs scripts follows the usual OpenStack process
as described in `How To Contribute`__ in the OpenStack wiki.
Our `main blueprint`__ contains the usual links for blueprints, bugs, etc.
__ contribute_
.. _contribute: http://docs.openstack.org/infra/manual/developers.html
__ spec_
.. _spec: http://specs.openstack.org/openstack/docs-specs/specs/liberty/traininglabs.html
Getting started
---------------
.. TODO(psalunke): Fix Me. Add more content here.
Prerequisites
-------------
.. TODO(psalunke): Fix Me. Add more content here.
Coding style
------------
We follow the conventions of other OpenStack projects.
StackTrain
~~~~~~~~~~
.. TODO(psalunke): Fix me. Add more content here.
Osbash
~~~~~~
Osbash is written in Bash and follows the conventions of DevStack:
`devstack <https://docs.openstack.org/devstack/latest/>`_.
DevStack bash style guidelines can be found at the bottom of:
https://opendev.org/openstack/devstack/src/branch/master/HACKING.rst
Structure
---------
.. TODO(psalunke): Add more information as the repo gets merged.
OSBASH:
~~~~~~~
**autostart**
osbash/wbatch copy shell scripts (\*.sh) into this directory to have them
automatically executed (and removed) upon boot.
**config**
Contains the configuration files for all the scripts. The setup can be customized here.
**img**
By default osbash will put into this directory its base disk images
(base-\*-<distro>.vdi), the VM export images (labs-<distro>.ova),
and all installation ISO images it may download.
**lib**
This directory contains bash libraries used by scripts.
**log**
Contains the log files written (and removed) by osbash/wbatch and
the scripts running within the VMs.
**scripts**
All scripts in this directory run within the VMs.
**wbatch**
Files in this directory are Windows batch files generated by osbash to
configure host-only networks, produce a base disk, and build OpenStack
training-labs VMs as configured when osbash created them.
Testing
-------
Useful tools for checking scripts:
- `bashate <https://github.com/openstack-dev/bashate>`_ (must pass)
- `shellcheck <https://github.com/koalaman/shellcheck.git>`_ (optional)
.. TODO (psalunke): Add Python checks etc.
Submitting patches
------------------
These documents will help you submit patches to OpenStack projects (including
this one):
- https://docs.openstack.org/infra/manual/developers.html#development-workflow
- https://wiki.openstack.org/wiki/GitCommitMessages
If you change the behavior of the scripts as documented in the training-guides,
add a DocImpact flag to alert the documentation team. For instance, add a line
like this to your commit message:
DocImpact new option added to osbash.sh
- https://wiki.openstack.org/wiki/Documentation/DocImpact
Reviewing
---------
Learn how to review (or what to expect when having your patches reviewed) here:
- https://docs.openstack.org/infra/manual/developers.html#development-workflow
TODO
----
Anything not covered here
-------------------------
Check README.md and get in touch with other scripts developers.

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,209 +1,14 @@
========================
Team and repository tags
========================
This project is no longer maintained.
.. image:: https://governance.openstack.org/tc/badges/training-labs.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
.. Change things from this point on
(Optional:)
For an alternative project, please see <alternative project name> at
<alternative project URL>.
=============
Training labs
=============
About
-----
Training-labs provides an automated way to deploy Vanilla OpenStack, closely
following the
`OpenStack Install Guide <https://docs.openstack.org/install-guide>`_.
Training-labs offers an easy way to set up an OpenStack cluster which is a good
starting point for beginners to learn OpenStack, and for advanced users to test
out new features, and check out different capabilities of OpenStack.
On top of that training-labs is also a good way to test the installation
instructions on a regular basis.
Training-labs is a project under OpenStack Documentation. For more information
see the
`OpenStack wiki <https://wiki.openstack.org/wiki/Documentation/training-labs>`_.
* Free software: Apache license
* `Documentation:openstack-training-labs <https://docs.openstack.org/training_labs/>`_
* `Source:openstack/training-labs <https://opendev.org/openstack/training-labs>`_
* `Bugs:openstack-training-labs <https://bugs.launchpad.net/labs>`_
* `Release Notes:openstack-training-labs <https://docs.openstack.org/releasenotes/openstack-manuals/>`_
Pre-requisite
-------------
* Download and install `VirtualBox <https://www.virtualbox.org/wiki/Downloads>`_.
VirtualBox is the default hypervisor used by training-labs. Alternatively, you can use KVM (just set ``PROVIDER=kvm`` in ``labs/config/localrc``).
Getting the Code for an OpenStack Release
-----------------------------------------
The current release is master, which usually deploys the current stable
OpenStack release. Unless you have a reason to go with an older release,
we recommend using master.
For non-development purposes (training, etc.), the easiest way to get the code is to download the desired archive from
`OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.
Unpack the archive and you are good to go.
How to run the scripts for GNU/Linux and macOS
----------------------------------------------
Change directory::
$ cd training-labs/labs/
By default, the cluster is built on VirtualBox VMs.
Run the script::
$ ./st.py -b cluster
How to run the scripts for Windows
----------------------------------
The easiest and recommended way to get everything you need besides
VirtualBox is to download a zip file for Windows from the
`Training Labs page <https://docs.openstack.org/training_labs/>`_.
The zip files include pre-generated Windows batch files.
Creates the host-only networks used by the node VMs to communicate::
> create_hostnet.bat
Creates the base disk::
> create_base.bat
Creates the node VMs based on the base disk::
> create_ubuntu_cluster_node.bat
What the script installs
------------------------
Running this will automatically spin up 2 virtual machines in VirtualBox/KVM:
* Controller node
* Compute node
Now you have a multi-node deployment of OpenStack running with the following services installed.
* Keystone
* Nova
* Neutron
* Glance
* Cinder
* Horizon
How to access the services
--------------------------
There are two ways to access the services:
* OpenStack Dashboard (horizon)
You can access the dashboard at: http://10.0.0.11/horizon
Admin Login:
* Username: ``admin``
* Password: ``admin_pass``
Demo User Login:
* Username: ``demo``
* Password: ``demo_pass``
You can ssh to each of the nodes::
# Controller node
$ ssh osbash@10.0.0.11
# Compute node
$ ssh osbash@10.0.0.31
Credentials for all nodes:
* Username: ``osbash``
* Password: ``osbash``
After you have ssh access, you need to source the OpenStack credentials in order to access the services.
Two credential files are present on each of the nodes:
* ``demo-openstackrc.sh``
* ``admin-openstackrc.sh``
Source one of the following credential files.
For Admin user privileges::
$ source admin-openstackrc.sh
For Demo user privileges::
$ source demo-openstackrc.sh
Note: Instead of 'source' you can use '.', or define an alias.
Now you can access the OpenStack services via CLI.
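
For example, after sourcing ``admin-openstackrc.sh`` you can verify the
deployment with a few client commands (a sketch; it assumes the
python-openstackclient package is installed on the node, as the install
guide does)::

    $ openstack token issue
    $ openstack service list
    $ openstack image list
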
Specs
-----
To review specifications, see `Training-labs
<https://specs.openstack.org/openstack/docs-specs/specs/liberty/training-labs.html>`_
Mailing lists, IRC
------------------
To contribute, join the IRC channel ``#openstack-doc`` on freenode,
or write an e-mail to the OpenStack Development Mailing List
``openstack-discuss@lists.openstack.org``. Please use the ``[training-labs]``
tag in the subject of the email message.
You may have to
`subscribe to the OpenStack Development Mailing List <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss>`_
to have your mail accepted by the mailing list software.
Sub-team leads
--------------
Feel free to ping Roger, Julen, or Pranav via email or on the IRC channel
``#openstack-doc`` regarding any queries about training-labs.
* Roger Luethi
* Email: ``rl@patchworkscience.org``
* IRC: ``rluethi``
* Pranav Salunke
* Email: ``dguitarbite@gmail.com``
* IRC: ``dguitarbite``
* Julen Larrucea
* Email: ``julen@larrucea.eu``
* IRC: ``julen``, ``julenl``
Meetings
--------
Training-labs uses the Doc Team Meeting:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
Wiki
----
Follow various links on training-labs here:
https://wiki.openstack.org/wiki/Documentation/training-labs
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

@@ -1,64 +0,0 @@
# Edit this one. Remove all the non-required deps.
asciidoc
build-essential [platform:dpkg]
curl
gawk
# gettext and graphviz are needed by doc builds only. For transition,
# have them in both doc and test.
gettext [doc test]
graphviz [doc test]
language-pack-en [platform:ubuntu]
libcurl-devel [platform:rpm]
libcurl4-gnutls-dev [platform:dpkg]
liberasurecode-dev [platform:dpkg]
liberasurecode-devel [platform:rpm]
libevent-dev [platform:dpkg]
libevent-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
libjerasure-dev [platform:ubuntu-trusty]
libjpeg-dev [platform:dpkg]
libjpeg-turbo-devel [platform:rpm]
libldap2-dev [platform:dpkg]
libmysqlclient-dev [platform:dpkg]
libpcap-dev [platform:dpkg]
libpcap-devel [platform:rpm]
libpq-dev [platform:dpkg]
librrd-dev [platform:dpkg]
libsasl2-dev [platform:dpkg]
libselinux-python [platform:rpm]
libsqlite3-dev [platform:dpkg]
libuuid-devel [platform:rpm]
libvirt-dev [platform:dpkg]
libvirt-devel [platform:rpm]
libvirt-python [platform:rpm]
libxml2-dev [platform:dpkg]
libxml2-devel [platform:rpm]
libxml2-utils [platform:dpkg]
libxslt-devel [platform:rpm]
libxslt1-dev [platform:dpkg]
locales [platform:debian]
pkg-config [platform:dpkg]
pkgconfig [platform:rpm]
pypy [platform:ubuntu-trusty]
pypy-dev [platform:ubuntu-trusty]
python-dev [platform:dpkg]
python-devel [platform:rpm]
python-libvirt [platform:dpkg]
python-lxml
python-zmq
python3-all-dev [platform:ubuntu-trusty]
python3-dev [platform:dpkg]
python3-devel [platform:fedora]
python3.4 [platform:ubuntu-trusty]
python34-devel [platform:centos]
sqlite [platform:rpm]
sqlite-devel [platform:rpm]
sqlite3 [platform:dpkg]
unzip
uuid-dev [platform:dpkg]
xsltproc [platform:dpkg]
zip
zlib-devel [platform:rpm]
zlib1g-dev [platform:dpkg]
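
The list above uses the bindep format: one distribution package per line,
with optional profile and platform selectors in brackets. As a sketch
(assuming the ``bindep`` tool is installed), you can report which of these
packages are missing on the local machine:

    $ pip install bindep
    $ bindep           # check the default profile
    $ bindep doc test  # check the doc and test profiles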

@@ -1,5 +0,0 @@
Documentation for training-labs
===============================
See the "Building the Dcumenation" section of
doc/source/development.environment.rst.

@@ -1,7 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
sphinx!=1.6.6,!=1.6.7,!=2.1.0,>=1.6.2 # BSD
sphinx-testing # BSD
openstackdocstheme>=1.31.2 # Apache-2.0
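
These requirements pin the documentation toolchain. A minimal local-build
sketch using this toolchain (assuming a virtualenv; the output path is
illustrative):

    $ pip install -r doc/requirements.txt
    $ sphinx-build -b html doc/source doc/build/html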

@@ -1,34 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'openstackdocs'
]
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'OpenStack Training Labs'
copyright = u'2013, OpenStack Foundation'
# -- Options for HTML output --------------------------------------------------
html_theme = 'openstackdocs'

@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@@ -1,25 +0,0 @@
.. labs documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to labs's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,11 +0,0 @@
============
Installation
============
At the command line::
$ git clone https://git.openstack.org/openstack/training-labs
$ cd training-labs/labs
$ ./st.py -h
Make sure that you have VirtualBox installed on your system.

@@ -1,5 +0,0 @@
========
Usage
========

@@ -1,55 +0,0 @@
# The format of this file isn't really documented; just use --generate-rcfile
[MASTER]
# Add <file or directory> to the black list. It should be a base name, not a
# path. You may set this option multiple times.
ignore=.git,tests
[Messages Control]
# NOTE(justinsb): We might want to have a 2nd strict pylintrc in future
# C0111: Don't require docstrings on every method
# W0511: TODOs in code comments are fine.
# W0142: *args and **kwargs are fine.
# W0622: Redefining id is fine.
disable=C0111,W0511,W0142,W0622
[Basic]
# Variable names can be 1 to 31 characters long, with lowercase and underscores
variable-rgx=[a-z_][a-z0-9_]{0,30}$
# Argument names can be 2 to 31 characters long, with lowercase and underscores
argument-rgx=[a-z_][a-z0-9_]{1,30}$
# Method names should be at least 3 characters long
# and be lowercased with underscores
method-rgx=([a-z_][a-z0-9_]{2,50}|setUp|tearDown)$
# Module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Don't require docstrings on tests.
no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$
[Miscellaneous]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME
[Format]
# Maximum number of characters on a single line.
max-line-length=79
[Design]
max-public-methods=100
min-public-methods=0
max-args=6
[Variables]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
# _ is used by our localization
additional-builtins=_
[REPORTS]
# Tells whether to display a full report or only the messages
reports=no
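
A sketch of applying this rcfile by hand (assuming pylint is installed;
"stacktrain" stands in for whichever Python package is being checked):

    $ pip install pylint
    $ pylint --rcfile=pylintrc stacktrain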

@@ -1 +0,0 @@
osbash/autostart

@@ -1 +0,0 @@
osbash/config

@@ -1 +0,0 @@
osbash/lib

@@ -1,2 +0,0 @@
osbash/wbatch copy shell scripts (*.sh) into this directory to have them
automatically executed (and removed) upon boot.

@@ -1,2 +0,0 @@
The configuration files in this directory are used by osbash/wbatch and
by scripts running inside the VMs (scripts directory).

@@ -1,21 +0,0 @@
# The variables in this file are exported for use by OpenStack client
# applications.
# Use BASH_SOURCE so the file works when sourced from a shell, too; use
# $0 to make it work on zsh
CONFIG_DIR=$(dirname "${BASH_SOURCE[0]:-$0}")
source "$CONFIG_DIR/openstack"
source "$CONFIG_DIR/credentials"
#------------------------------------------------------------------------------
# OpenStack client environment scripts
# https://docs.openstack.org/keystone/train/install/keystone-openrc-ubuntu.html
#------------------------------------------------------------------------------
export OS_USERNAME=$ADMIN_USER_NAME
export OS_PASSWORD=$ADMIN_PASS
export OS_PROJECT_NAME=$ADMIN_PROJECT_NAME
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
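
The ``${BASH_SOURCE[0]:-$0}`` expansion above is what makes this file
locatable both ways: bash sets BASH_SOURCE when the file is sourced, while
zsh does not, so $0 serves as the fallback. A toy illustration of the same
idiom (the file name is hypothetical):

    # locate.sh -- print the directory this file lives in, even when sourced
    CONFIG_DIR=$(dirname "${BASH_SOURCE[0]:-$0}")
    echo "config files would be read from: $CONFIG_DIR"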

@@ -1,7 +0,0 @@
# Base disk VM configuration. Used by osbash/wbatch (host and guest).
# Port forwarding
VM_SSH_PORT=2229
# Our default RAM size (512 MB) is not sufficient for installation
VM_MEM=1024

@@ -1,32 +0,0 @@
# Node VM configuration. Used by osbash/wbatch (host and guest).
# Port forwarding
# ssh access to compute1: 127.0.0.1:2232
VM_SSH_PORT=2232
# Assign network interfaces to networks
NET_IF_0=dhcp
#------------------------------------------------------------------------------
# http://docs.openstack.org/mitaka/install-guide-ubuntu/environment-networking-compute.html
#------------------------------------------------------------------------------
# Mgmt network (elevate interface boot priority to 1; set when PXE booting)
NET_IF_1="static 10.0.0.31 1"
# Public network (select network, IP address configured manually)
NET_IF_2="manual 203.0.113.0"
#------------------------------------------------------------------------------
# Size of second disk in MB (/dev/sdb)
# Test volume is 1 GB; backing volume must be bigger
SECOND_DISK_SIZE=1280
#------------------------------------------------------------------------------
# http://docs.openstack.org/mitaka/install-guide-ubuntu/environment.html
#------------------------------------------------------------------------------
# A default instance on the compute node uses 512 MB RAM. The install-guide
# demands 2048 MB of RAM, but 1024 MB is enough for one CirrOS instance.
VM_MEM=1024
# Override number of virtual CPUs (default is 1)
# To edit uncomment the line below
# VM_CPUS=1

@@ -1,27 +0,0 @@
# Node VM configuration. Used by osbash/wbatch (host and guest).
# Port forwarding
# ssh access to controller: 127.0.0.1:2230
VM_SSH_PORT=2230
# Dashboard access: 127.0.0.1:8888
VM_WWW_PORT=8888
# Assign network interfaces to networks
NET_IF_0=dhcp
#------------------------------------------------------------------------------
# http://docs.openstack.org/mitaka/install-guide-ubuntu/environment-networking-controller.html
#------------------------------------------------------------------------------
# Mgmt network (elevate interface boot priority to 1; set when PXE booting)
NET_IF_1="static 10.0.0.11 1"
# Public network (select network, IP address configured manually)
NET_IF_2="manual 203.0.113.0"
#------------------------------------------------------------------------------
# http://docs.openstack.org/mitaka/install-guide-ubuntu/environment.html
#------------------------------------------------------------------------------
# Controller node is running many services.
VM_MEM=5120
# Override number of virtual CPUs (default is 1)
# To edit uncomment the line below
# VM_CPUS=1

@@ -1,14 +0,0 @@
# Node VM configuration. Used by osbash/wbatch (host and guest).
# Port forwarding
# ssh access to controller: 127.0.0.1:2250
VM_SSH_PORT=2250
# Assign network interfaces to networks
NET_IF_0=dhcp
#------------------------------------------------------------------------------
# Mgmt network
NET_IF_1="static 10.0.0.100"
PXE_GATEWAY="10.0.0.101"

@@ -1,81 +0,0 @@
# This file contains user names, passwords, and tokens that are set and used
# by OpenStack applications and related software running in the VMs.
# Note that the VM shell user and its password are not set here. By default,
# those are hard-coded in the preseed/kickstart files. The scripts get the
# shell user name from deploy.{osbash} and don't need a password
# (they use password-less sudo and -- if configured -- ssh keys).
# Used for MySQL or whatever other DBMS is configured
: ${DATABASE_PASSWORD:=secrete}
# Used for the RabbitMQ message broker
: ${RABBIT_PASS:=rabbitPass}
# Project and role for admin accounts
: ${ADMIN_ROLE_NAME:=admin}
: ${ADMIN_PROJECT_NAME:=admin}
# Member role for generic use
: ${MEMBER_ROLE_NAME:=_member_}
# User name and password for administrator
: ${ADMIN_USER_NAME:=admin}
#------------------------------------------------------------------------------
# Passwords for OpenStack services
# http://docs.openstack.org/mitaka/install-guide-ubuntu/environment-security.html
#------------------------------------------------------------------------------
: ${ADMIN_PASS:=admin_user_secret}
: ${CINDER_DB_USER:=cinder}
: ${CINDER_DBPASS:=cinder_db_secret}
: ${CINDER_PASS:=cinder_user_secret}
: ${GLANCE_DB_USER:=glance}
: ${GLANCE_DBPASS:=glance_db_secret}
: ${GLANCE_PASS:=glance_user_secret}
: ${HEAT_DB_USER:=heat}
: ${HEAT_DBPASS:=heat_db_secret}
: ${HEAT_DOMAIN_PASS:=heat_dom_pw}
: ${HEAT_PASS:=heat_user_secret}
: ${KEYSTONE_DB_USER:=keystone}
: ${KEYSTONE_DBPASS:=keystone_db_secret}
: ${NEUTRON_DB_USER:=neutron}
: ${NEUTRON_DBPASS:=neutron_db_secret}
: ${NEUTRON_PASS:=neutron_user_secret}
: ${NOVA_DB_USER:=nova}
: ${NOVA_DBPASS:=nova_db_secret}
: ${NOVA_PASS:=nova_user_secret}
: ${PLACEMENT_DB_USER:=placement}
: ${PLACEMENT_DBPASS:=placement_db_secret}
: ${PLACEMENT_PASS:=placement_user_secret}
# Project name, user name and password for normal (demo) user
: ${DEMO_PROJECT_NAME:=myproject}
: ${DEMO_USER_NAME:=myuser}
: ${DEMO_PASS:=myuser_user_pass}
# User role
: ${USER_ROLE_NAME:=myrole}
# OpenStack services need to be affiliated with a project (tenant) to
# authenticate to other OpenStack services. We create a "service" project
# for the OpenStack services; all OpenStack services get registered under
# this service project.
# Project and role for service accounts.
: ${SERVICE_PROJECT_NAME:=service}
# Domain to use for email addresses (e.g. admin@example.com)
: ${MAIL_DOMAIN:=example.com}
# Metadata secret used by neutron and nova.
: ${METADATA_SECRET:=osbash_training}
# vim: set ai ts=4 sw=4 et ft=sh:
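
All of the assignments above rely on the ``: ${VAR:=default}`` idiom: the
colon is a no-op command, and ``:=`` assigns the default only when the
variable is unset or empty, so every credential can be overridden from the
environment. A small sketch of the effect:

    # The default applies when the variable is not already set:
    : ${NOVA_PASS:=nova_user_secret}
    echo "$NOVA_PASS"    # prints: nova_user_secret

    # An environment override wins over the default:
    NOVA_PASS=my_own_secret
    : ${NOVA_PASS:=nova_user_secret}
    echo "$NOVA_PASS"    # prints: my_own_secret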

@@ -1,21 +0,0 @@
# The variables in this file are exported for use by OpenStack client
# applications.
# Use BASH_SOURCE so the file works when sourced from a shell, too; use
# $0 to make it work on zsh
CONFIG_DIR=$(dirname "${BASH_SOURCE[0]:-$0}")
source "$CONFIG_DIR/openstack"
source "$CONFIG_DIR/credentials"
#------------------------------------------------------------------------------
# OpenStack client environment scripts
# https://docs.openstack.org/keystone/train/install/keystone-openrc-ubuntu.html
#------------------------------------------------------------------------------
export OS_USERNAME=$DEMO_USER_NAME
export OS_PASSWORD=$DEMO_PASS
export OS_PROJECT_NAME=$DEMO_PROJECT_NAME
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

@@ -1,17 +0,0 @@
# Used by osbash.sh and guest scripts
: ${OSBASH_LIB_DIR:=$LIB_DIR/osbash}
: ${OSBASH_SCRIPTS_DIR:=$SCRIPTS_DIR/osbash}
: ${TEMPLATE_DIR:=$LIB_DIR/osbash/templates}
# Name of VirtualBox shared folder
: ${SHARE_NAME:=osbash}
# Note: shell user name and password are set in preseed.cfg
VM_SHELL_USER=osbash
# Override disk size in MB (default is 10000 MB, inherited by node VMs)
# BASE_DISK_SIZE=10000
# vim: set ai ts=4 sw=4 et ft=sh:

@@ -1,17 +0,0 @@
#------------------------------------------------------------------------------
# http://docs.openstack.org/mitaka/install-guide-ubuntu/environment-networking-controller.html
#------------------------------------------------------------------------------
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2

@@ -1,30 +0,0 @@
# Set this if you already have the install ISO, but in a location other
# than IMG_DIR (which defaults to TOP_DIR/img)
#INSTALL_ISO=/data/iso/ubuntu/ubuntu-12.04.4-server-amd64.iso
# VM_PROXY will be used by osbash to get the ISO image and by apt/yum/wget once
# the operating system is installed (i.e. for software updates and
# installation). It should be sufficient to build a base disk if you have to
# use a proxy to connect to the Internet. Building the cluster itself does not
# require an Internet connection at all.
#VM_PROXY="http://192.168.178.20:3128"
# Options:
# ubuntu-18.04-server-amd64 (default)
# ubuntu-18.04-server-i386
# ubuntu-16.04-server-amd64
# ubuntu-16.04-server-i386
# ubuntu-14.04-server-amd64
# ubuntu-14.04-server-i386
#
# example: DISTRO=ubuntu-18.04-server-i386
: ${DISTRO:=ubuntu-18.04-server-amd64}
# PROVIDER: virtualbox or kvm (defaults to virtualbox)
# KVM tends to give better performance (on Linux), but may be harder to set up
# and osbash does not (yet) support all features with a KVM backend.
#
# example: PROVIDER=kvm
: ${PROVIDER:=virtualbox}
# vim: set ai ts=4 sw=4 et ft=sh:
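
Since these settings use the same default-only assignment idiom as the
credentials file, they can also be overridden per invocation from the
environment instead of editing this file. A sketch, using the st.py entry
point named in the README:

    $ PROVIDER=kvm DISTRO=ubuntu-18.04-server-amd64 ./st.py -b cluster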

@@ -1,58 +0,0 @@
# This file contains OpenStack configuration data. It is used by both
# host (osbash, Windows batch) and VM guest scripts.
# train (production release; cloud-archive:train)
# train-proposed (pre-release testing: cloud-archive:train-proposed)
# train-staging (ppa:openstack-ubuntu-testing/train)
: ${OPENSTACK_RELEASE:=train}
# CirrOS image URL
if [ "$(uname -m)" = "x86_64" ]; then
arch=x86_64
else
arch=i386
fi
CIRROS_URL="http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-$arch-disk.img"
unset arch
# Name used for CirrOS image in glance
CIRROS_IMG_NAME="cirros"
#------------------------------------------------------------------------------
# https://docs.openstack.org/install-guide/environment-networking.html
#------------------------------------------------------------------------------
# Networks used by OpenStack training-labs setup
NETWORK_1="mgmt 10.0.0.0"
NETWORK_2="provider 203.0.113.0"
# Static IP used temporarily by PXE booted machines before being reconfigured
# by osbash
PXE_INITIAL_NODE_IP="10.0.0.240"
#------------------------------------------------------------------------------
# https://docs.openstack.org/install-guide/launch-instance-networks-provider.html
#------------------------------------------------------------------------------
# Public network
: ${PROVIDER_NETWORK_CIDR:=203.0.113.0/24}
: ${PROVIDER_NETWORK_GATEWAY:=203.0.113.1}
# Floating IP addresses
: ${START_IP_ADDRESS:=203.0.113.101}
: ${END_IP_ADDRESS:=203.0.113.200}
#------------------------------------------------------------------------------
# https://docs.openstack.org/install-guide/launch-instance-selfservice.html
#------------------------------------------------------------------------------
# Private demo network
: ${SELFSERVICE_NETWORK_CIDR:=172.16.1.0/24}
: ${SELFSERVICE_NETWORK_GATEWAY:=172.16.1.1}
# DNS name server used by instance VMs.
# Default is Google Public DNS (8.8.4.4).
: ${DNS_RESOLVER:=8.8.4.4}
: ${REGION:=RegionOne}
# vim: set ai ts=4 sw=4 et ft=sh:

@@ -1,33 +0,0 @@
# This file is used by all scripts to find the directories for the files they
# read or write. They find this file as "$TOP_DIR/config/paths".
# Configuration files
: ${CONFIG_DIR:=$TOP_DIR/config}
# Installation ISO images, basedisk images, exported VM cluster images, etc.
#
# TODO(rluethi): merge these directories in the code, the option to have them
# in separate directories doesn't seem very useful
: ${DISK_DIR:=$TOP_DIR/img}
: ${IMG_DIR:=$TOP_DIR/img}
: ${ISO_DIR:=$TOP_DIR/img}
# Code libraries, templates, preseed/kickstart files
: ${LIB_DIR:=$TOP_DIR/lib}
# Log files
: ${LOG_DIR:=$TOP_DIR/log}
# Status files (progress indicator for running scripts)
: ${STATUS_DIR:=$LOG_DIR/status}
# Scripts that run within the basedisk and node VMs
: ${SCRIPTS_DIR:=$TOP_DIR/scripts}
# Directory shared with VM guest
: ${SHARE_DIR:=$TOP_DIR}
# Autostart directory; files placed here are executed within the VM
: ${AUTOSTART_DIR:=$SHARE_DIR/autostart}
# vim: set ai ts=4 sw=4 et ft=sh:

@@ -1,14 +0,0 @@
# KVM specific settings; used by osbash
: ${KVM_VOL_POOL:=default}
: ${LIBVIRT_CONNECT_URI:=qemu:///system}
: ${VIRSH_CALL:=sudo virsh --connect=$LIBVIRT_CONNECT_URI}
: ${VIRT_INSTALL_CALL:=sudo virt-install --connect=$LIBVIRT_CONNECT_URI}
# KVM VM group (stored in VM description)
: ${VM_GROUP:=OpenStack training-labs}
# VM GUI type (one of headless, gui, vnc)
: ${VM_UI:=vnc}
# vim: set ai ts=4 sw=4 et ft=sh:
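
A sketch of how a provider library might consume these variables (the
function name is hypothetical; the real callers live in the osbash lib
directory):

    # Succeed if a libvirt domain with the given name exists
    vm_exists() {
        local vm_name=$1
        $VIRSH_CALL list --all --name | grep -qx "$vm_name"
    }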

@@ -1,21 +0,0 @@
# VirtualBox specific settings; used by osbash
# Type of NIC to use for network interfaces, one of:
# 82540EM for Intel PRO/1000 MT Desktop
# 82543GC for Intel PRO/1000 T Server
# 82545EM for Intel PRO/1000 MT Server
# Am79C970A for PCnet-PCI II
# Am79C973 for PCnet-FAST III
# virtio for Paravirtualized network
: ${NICTYPE:=virtio}
# Location of VBoxManage binary
: ${VBM_EXE:=$(which VBoxManage)}
# VirtualBox VM group
: ${VM_GROUP:=labs}
# VirtualBox VM GUI type
: ${VM_UI:=headless}
# vim: set ai ts=4 sw=4 et ft=sh:
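
A sketch of how VBM_EXE and NICTYPE might be applied when configuring a
VM's first network adapter (the VM name is illustrative):

    # Set NIC 1 of an existing, powered-off VM to the configured type
    "$VBM_EXE" modifyvm "controller" --nictype1 "$NICTYPE"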

@@ -1,3 +0,0 @@
# Scripts for CentOS installations
cmd queue centos/yum_init.sh
cmd queue centos/yum_update.sh

@@ -1,6 +0,0 @@
# Scripts for Ubuntu installations
cmd queue ubuntu/apt_init.sh
cmd queue ubuntu/apt_upgrade.sh
cmd queue pre-download.sh
cmd queue ubuntu/apt_pre-download.sh
cmd queue osbash/enable_osbash_ssh_keys.sh

@@ -1,102 +0,0 @@
#==============================================================================
# Scripts for controller node
cmd create_node -n controller
cmd queue_renamed -n controller osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
cmd queue osbash/copy_openrc.sh
cmd snapshot_cycle -n controller controller_node_init
# Environment
cmd queue ubuntu/apt_install_mysql.sh
cmd queue ubuntu/install_rabbitmq.sh
cmd queue ubuntu/install_memcached.sh
cmd snapshot_cycle -n controller pre-openstack_installed
# Identity
cmd queue ubuntu/setup_keystone.sh
cmd queue test/get_auth_token.sh
cmd snapshot_cycle -n controller keystone_installed
# Image
cmd queue ubuntu/setup_glance.sh
cmd snapshot_cycle -n controller glance_installed
# Compute
cmd queue ubuntu/setup_nova_controller.sh
cmd queue ubuntu/setup_placement_controller.sh
cmd snapshot_cycle -n controller nova-controller_installed
# Networking
cmd queue ubuntu/setup_neutron_controller.sh
cmd queue ubuntu/setup_self-service_controller.sh
cmd queue ubuntu/setup_neutron_controller_part_2.sh
cmd snapshot_cycle -n controller neutron-controller_installed
# Dashboard
cmd queue ubuntu/setup_horizon.sh
cmd snapshot_cycle -n controller horizon_installed
# Block Storage
cmd queue ubuntu/setup_cinder_controller.sh
cmd snapshot_cycle -n controller cinder_installed
# Orchestration
cmd queue ubuntu/setup_heat_controller.sh
cmd snapshot_cycle -n controller heat_controller_installed
cmd boot -n controller
#==============================================================================
# Scripts for compute1 node
cmd create_node -n compute1
cmd queue_renamed -n compute1 osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
cmd queue osbash/copy_openrc.sh
cmd snapshot_cycle -n compute1 compute1_node_init
# Compute
cmd queue ubuntu/setup_nova_compute.sh
cmd snapshot_cycle -n compute1 nova-compute1_installed
# Networking
cmd queue ubuntu/setup_neutron_compute.sh
cmd queue ubuntu/setup_self-service_compute.sh
cmd queue ubuntu/setup_neutron_compute_part_2.sh
cmd snapshot_cycle -n compute1 neutron-compute_installed
# Block Storage
cmd queue ubuntu/setup_cinder_volumes.sh
cmd snapshot_cycle -n compute1 cinder-volume_installed
cmd boot -n compute1
#==============================================================================
# Create networks
cmd shutdown -n controller
cmd queue config_public_network.sh
cmd queue config_private_network.sh
cmd boot -n controller
#==============================================================================
# Always take snapshots of finished cluster
cmd shutdown -n controller
cmd shutdown -n compute1
cmd snapshot -n controller controller_-_cluster_installed
cmd snapshot -n compute1 compute-_cluster_installed
# Boot cluster nodes -- cluster is ready for use
cmd boot -n compute1
# Enable extra services as needed:
#
#cmd queue ubuntu/barbican/setup_barbican_server.sh
#
#cmd queue ubuntu/mistral/setup_mistral_server.sh
#
# Note: tacker depends on mistral and barbican
#cmd queue ubuntu/tacker/setup_tacker_server.sh
#cmd queue ubuntu/tacker/create_vim.sh
#cmd queue ubuntu/tacker/create_vnf.sh
cmd boot -n controller
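
The scripts.* files in this directory are plain command lists interpreted
by the osbash host-side scripts: each line starts with ``cmd`` followed by
an action (create_node, queue, snapshot_cycle, boot, ...) and its
arguments. A minimal reader sketch (the dispatch targets are hypothetical;
the real implementation lives in the osbash lib):

    while read -r word action args; do
        case "$word" in
            ''|'#'*) continue ;;    # skip blank lines and comments
        esac
        case "$action" in
            queue*)    echo "queue guest script: $args" ;;
            boot)      echo "boot node: $args" ;;
            snapshot*) echo "snapshot node: $args" ;;
            *)         echo "action $action: $args" ;;
        esac
    done < scripts.ubuntu_cluster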

@@ -1,26 +0,0 @@
#==============================================================================
# Only create VMs (don't install any software)
#==============================================================================
# Scripts for controller node
cmd create_node -n controller
cmd queue_renamed -n controller osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
cmd queue osbash/copy_openrc.sh
cmd snapshot_cycle -n controller controller_node_init
#==============================================================================
# Scripts for compute1 node
cmd create_node -n compute1
cmd queue_renamed -n compute1 osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
cmd snapshot_cycle -n compute1 compute1_node_init
#==============================================================================
# Both nodes are built, boot them
cmd boot -n controller
cmd boot -n compute1

@@ -1,102 +0,0 @@
cmd boot -n pxeserver
#==============================================================================
# Scripts for controller node
cmd create_pxe_node -n controller
cmd boot_set_tmp_node_ip -n controller
cmd queue_renamed -n controller osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue ubuntu/apt_init.sh
cmd queue ubuntu/apt_upgrade.sh
cmd queue pre-download.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
# This reboot is not optional, we must switch from temporary PXE IP address to
# final address before installing servers
cmd queue shutdown.sh
cmd boot -n controller
cmd wait_for_shutdown -n controller
cmd snapshot -n controller controller_node_init
# Environment
cmd queue ubuntu/apt_install_mysql.sh
cmd queue ubuntu/install_rabbitmq.sh
cmd queue ubuntu/install_memcached.sh
cmd snapshot_cycle -n controller pre-openstack_installed
# Identity
cmd queue ubuntu/setup_keystone.sh
cmd queue test/get_auth_token.sh
cmd snapshot_cycle -n controller keystone_installed
# Image
cmd queue ubuntu/setup_glance.sh
cmd snapshot_cycle -n controller glance_installed
# Compute
cmd queue ubuntu/setup_nova_controller.sh
cmd snapshot_cycle -n controller nova-controller_installed
# Networking
cmd queue ubuntu/setup_neutron_controller.sh
cmd queue ubuntu/setup_self-service_controller.sh
cmd queue ubuntu/setup_neutron_controller_part_2.sh
cmd snapshot_cycle -n controller neutron-controller_installed
# Dashboard
cmd queue ubuntu/setup_horizon.sh
cmd snapshot_cycle -n controller horizon_installed
# Block Storage
cmd queue ubuntu/setup_cinder_controller.sh
cmd snapshot_cycle -n controller cinder_installed
# Orchestration
cmd queue ubuntu/setup_heat_controller.sh
cmd snapshot_cycle -n controller heat_controller_installed
cmd boot -n controller
#==============================================================================
# Scripts for compute1 node
cmd create_pxe_node -n compute1
cmd boot_set_tmp_node_ip -n compute1
cmd queue_renamed -n compute1 osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue ubuntu/apt_init.sh
cmd queue ubuntu/apt_upgrade.sh
cmd queue pre-download.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
# This reboot is not optional, we must switch from temporary PXE IP address to
# final address before installing servers
cmd queue shutdown.sh
cmd boot -n compute1
cmd wait_for_shutdown -n compute1
cmd snapshot -n compute1 compute1_node_init
# Compute
cmd queue ubuntu/setup_nova_compute.sh
cmd snapshot_cycle -n compute1 nova-compute1_installed
# Networking
cmd queue ubuntu/setup_neutron_compute.sh
cmd queue ubuntu/setup_self-service_compute.sh
cmd queue ubuntu/setup_neutron_compute_part_2.sh
cmd snapshot_cycle -n compute1 neutron-compute_installed
# Block Storage
cmd queue ubuntu/setup_cinder_volumes.sh
cmd snapshot_cycle -n compute1 cinder-volume_installed
cmd boot -n compute1
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Take snapshot of database changes on controller VM, too
cmd shutdown -n controller
cmd snapshot -n controller controller_-_compute1_node_installed
#==============================================================================
cmd queue config_public_network.sh
cmd queue config_private_network.sh
cmd snapshot_cycle -n controller public_private_networks
#==============================================================================
cmd boot -n controller

@@ -1,25 +0,0 @@
#==============================================================================
cmd create_node -n pxeserver
cmd queue_renamed -n pxeserver osbash/init_xxx_node.sh
cmd queue etc_hosts.sh
cmd queue osbash/enable_osbash_ssh_keys.sh
cmd queue osbash/copy_openrc.sh
cmd snapshot_cycle -n pxeserver pxe_server_node_init
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Copy ISO image to pxeserver
cmd boot -n pxeserver
cmd cp_iso -n pxeserver
cmd shutdown -n pxeserver
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
cmd queue pxe_server/install_pxeserver.sh
# Add scripts for creating netboot config file on PXE server
cmd queue_renamed -n controller ubuntu/create_xxx_node_pxeboot.sh
# Add scripts for creating netboot config file on PXE server
cmd queue_renamed -n compute1 ubuntu/create_xxx_node_pxeboot.sh
cmd snapshot_cycle -n pxeserver pxe_server_ready
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
cmd boot -n pxeserver

@@ -1,18 +0,0 @@
OsBash
------
About
-----
By default, osbash will put into this directory its base disk images
(base-*-<distro>.vdi), the VM export images (labs-<distro>.ova),
and all installation ISO images it may download.
- The 'img' folder stores all the base disk and ISO images.
- To find individual VirtualBox disk images, look in the VirtualBox
  default machine folder.
  - For Linux: "~/VirtualBox/labs/"
- If your default machine folder has been set manually to another
  location, you can look it up in the VirtualBox GUI under
  "File > Preferences > General > Default Machine Folder".