Move documentation into Sphinx
If it's in Sphinx, then we can publish it. If we can publish it, then we can link to it in other places.

Change-Id: I19ae8afdb70183de9f3f74e464dc9a8287e88ac7
Committed by Clint Byrum
parent 366595df2d
commit 1429f6c132
.gitignore (vendored; 5 lines changed):
@@ -4,8 +4,11 @@
 .DS_Store
 *.egg
 *.egg-info
 *.pyc
+doc/source/devtest.rst
 openstack-tools
 scripts/glance
 scripts/heat
@@ -15,3 +18,5 @@ scripts/neutron
 stackrc
 tripleo-passwords
+.tox
+doc/build
Deploying.md (deleted; 306 lines). Former content:
Deploying TripleO
=================

# Components

## Essential Components

Essential components make up the self-deploying infrastructure that is the
heart of TripleO.

* Baremetal machine deployment (Nova Baremetal, soon to be 'Ironic')

* Baremetal volume management (Cinder - not available yet)

* Cluster orchestration (Heat)

* Machine image creation (Diskimage-builder)

* In-instance configuration management (os-config-applier + os-refresh-config,
  and/or Chef/Puppet/Salt)

* Image management (Glance)

* Network management (Quantum)

* Authentication and service catalog (Keystone)

## Additional Components

These components add value to the TripleO story, making it safer to upgrade and
evolve an environment, but are secondary to the core thing itself.

* Continuous integration (Zuul/Jenkins)

* Monitoring and alerting (Ceilometer/nagios/etc)

# Dependencies

Each component can only be deployed once its dependencies are available.

TripleO is built on a Linux platform, so a Linux environment is required both
to create images and as the OS that will run on the machines. If you have no
Linux machines at all, you can download a live CD from a number of vendors,
which will permit you to run diskimage-builder to get going.

## Diskimage-builder

An internet connection is also required to download the various packages used
in preparing each image.

The machine images built *can* depend on Heat metadata, or they can just
contain configured Chef/Puppet/Salt credentials, depending on how much of
TripleO is in use. Avoiding Heat is useful when doing an incremental adoption
of TripleO (see later in this document).

## Baremetal machine deployment

Baremetal deployments are delivered via Nova. Additionally, the network must
be configured so that the baremetal host machine can receive TFTP requests from
any physical machine that is being booted.

## Nova

Nova depends on Keystone, Glance and Quantum. In future, Cinder
will be one of the dependencies.

There are three ways the service can be deployed:

* Via diskimage-builder built machine images, configured via a running Heat
  cluster. This is the normal TripleO deployment.

* Via the special bootstrap node image, which is built by diskimage-builder and
  contains a full working stack - nova, glance, keystone and neutron -
  configured by statically generated Heat metadata. This approach is used to
  get TripleO up and running.

* By hand - e.g. using devstack, or manually/chef/puppet/packages on a
  dedicated machine. This can be useful for incremental adoption of TripleO.

## Cinder

Cinder is needed for persistent storage on bare metal machines. That aspect of
TripleO is not yet available: when an instance is deleted, the storage is
deleted with it.

## Quantum

Quantum depends on Keystone. The same three deployment options exist as for
Nova. The Quantum network node(s) must be the only DHCP servers on the network.

## Glance

Glance depends on Keystone. The same three deployment options exist as for
Nova.

## Keystone

Keystone has no external dependencies. The same three deployment options exist
as for Nova.

## Heat

Heat depends on Nova, Cinder and Keystone. The same three deployment options
exist as for Nova.

## In-instance configuration

The os-config-applier and os-refresh-config tools depend on Heat to provide
cluster configuration metadata. They can be used before Heat is functional
if a statically prepared metadata file is placed in the Heat path: this is
how the bootstrap node works.
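To illustrate the static-metadata approach, a minimal hand-written metadata file might look like the sketch below. The path and key names are hypothetical stand-ins (the real location and schema come from your image elements and Heat templates); the point is only that a plain JSON document can substitute for Heat-served metadata on the bootstrap node.

```shell
# Hypothetical example: prepare a static metadata file for os-config-applier
# to consume. The path and keys are illustrative, not a canonical schema.
mkdir -p metadata-demo
cat > metadata-demo/cfn-init-data <<'EOF'
{
  "keystone": {"host": "192.0.2.1", "admin_token": "unset"},
  "glance": {"host": "192.0.2.1"}
}
EOF
# Sanity-check the hand-written JSON before baking it into an image.
python3 -m json.tool < metadata-demo/cfn-init-data > /dev/null && echo "valid JSON"
```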
os-config-applier and os-refresh-config can be used in concert with
Chef/Puppet/Salt, or not used at all, if you configure your services via
Chef/Puppet/Salt.

The reference TripleO elements do not depend on Chef/Puppet/Salt, to avoid
conflicts when organisations with an existing investment in Chef/Puppet/Salt
start using TripleO.

# Deploying TripleO incrementally

The general sequence is:

* Examine the current state of TripleO and assess where non-automated solutions
  will be needed for your environment. E.g. at the time of writing, VLAN support
  requires baking the VLAN configuration into your built disk images.

* Decide how much of TripleO you will adopt. See 'Example deployments' below.

* Install diskimage-builder somewhere and use it to build the disk images your
  configuration will require.

* Bring up the aspects of TripleO you will be using, starting with a boot-stack
  node (which you can run in a KVM VM in your datacentre), using that to bring
  up an actual machine and transfer bare metal services onto it, and then
  continuing up the stack.

# Current caveats / workarounds

These are all documented in README.md and in the
[TripleO bugtracker](https://launchpad.net/tripleo).

## No API driven persistent storage

Every 'nova boot' will reset the data on the machine it deploys to. To do
incremental image based updates, they have to be done within the running image.
'takeovernode' can do that, but as yet we have not written rules to split out
persistent data into another partition - so some assembly is required.

## VLANs for physical nodes require customised images (rather than just metadata)

If you require VLANs you should create a diskimage-builder element to add the
vlan package and the VLAN configuration to /etc/network/interfaces as a
first-boot rule.
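A sketch of such an element is shown below. The element name, script name, and VLAN tag are illustrative assumptions, not part of tripleo-image-elements; consult the diskimage-builder documentation for the exact phase directories your version supports.

```shell
# Hypothetical element layout: install the vlan package at image build time
# and append an example VLAN 25 stanza to /etc/network/interfaces.
mkdir -p elements/local-vlan/install.d
cat > elements/local-vlan/install.d/50-local-vlan <<'EOF'
#!/bin/bash
set -eu
# Install the 8021q userspace tooling into the image.
apt-get -y install vlan
# Append a VLAN interface; adjust the parent device and tag for your network.
cat >> /etc/network/interfaces <<'IFACES'

auto eth0.25
iface eth0.25 inet dhcp
    vlan-raw-device eth0
IFACES
EOF
chmod +x elements/local-vlan/install.d/50-local-vlan
```

The element would then be added to the disk-image-create invocation alongside the other elements you build with.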
# Example deployments (possible today)

## Baremetal only

In this scenario you make use of the baremetal driver to deploy unspecialised
machine images, and perform specialisation using Chef/Puppet/Salt -
whatever configuration management toolchain you prefer. The baremetal host
system is installed manually, but a TripleO image is used to deploy it.

It scales within any one broadcast domain to the capacity of the single
baremetal host.

### Prerequisites

* A boot-stack image set up to run in KVM.

* A vanilla image.

* A userdata script to configure new instances to run however you want.

* A machine installed with your OS of choice in your datacentre.

* Physical machines configured to netboot in preference to local boot.

* A list of the machines, their IPMI details and MAC addresses.

* A network range larger than the maximum number of concurrent deploy
  operations you will run in parallel.

* A network to run the instances on, large enough to supply one IP per
  instance.

### HOWTO

* Build the images you need (add any local elements you need to the commands).

  Edit tripleo-image-elements/elements/boot-stack/config.json and change
  the virtual power manager to 'nova...ipmi.IPMI'.
  https://bugs.launchpad.net/tripleo/+bug/1178547

        disk-image-create -o bootstrap vm boot-stack ubuntu
        disk-image-create -o ubuntu ubuntu

* Set up a VM using bootstrap.qcow2 on your existing machine, with eth1 bridged
  into your datacentre LAN.

* Run up that VM, which will create a self-contained nova baremetal install.

* Reconfigure the networking within the VM to match your physical network.
  https://bugs.launchpad.net/tripleo/+bug/1178397
  https://bugs.launchpad.net/tripleo/+bug/1178099

* If you have exotic hardware needs, replace the deploy images that the
  boot-stack creates.
  https://bugs.launchpad.net/tripleo/+bug/1178094

* Enroll your vanilla image into the glance of that install.
  Be sure to use tripleo-incubator/scripts/load-image, as that will extract the
  kernel and ramdisk and register them appropriately with glance.

* Enroll your other datacentre machines into that nova baremetal install.
  A script that takes your machine inventory and prints out something like:

        nova baremetal-node-create --pm_user XXX --pm_address YYY --pm_password ZZZ \
            COMPUTEHOST 24 98304 2048 MAC

  can be a great help - and can be run from outside the environment.
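Such a helper is not part of tripleo-incubator; a minimal sketch, assuming a simple comma-separated inventory format of your own devising, could look like:

```shell
#!/bin/bash
# Illustrative helper: turn an inventory CSV of
# "host,ipmi_user,ipmi_address,ipmi_password,cpus,ram_mb,disk_gb,mac"
# into the nova baremetal-node-create invocations shown above.
emit_enroll_commands() {
    while IFS=, read -r host user addr pass cpus ram disk mac; do
        printf 'nova baremetal-node-create --pm_user %s --pm_address %s --pm_password %s %s %s %s %s %s\n' \
            "$user" "$addr" "$pass" "$host" "$cpus" "$ram" "$disk" "$mac"
    done
}

# Example inventory line:
echo 'compute01,admin,10.1.1.5,secret,24,98304,2048,aa:bb:cc:dd:ee:01' | emit_enroll_commands
```

Piping the output through a shell (or into ssh) then performs the enrollment.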
* Set up admin users with SSH keypairs etc.
  E.g.:

        nova keypair-add --pub-key .ssh/authorized_keys default

* Boot them using the ubuntu.qcow2 image, with appropriate user data to
  connect to your Chef/Puppet/Salt environments.

## Baremetal with Heat

In this scenario you use the baremetal driver to deploy specialised machine
images which are orchestrated by Heat.

### Prerequisites

* A boot-stack image set up to run in KVM.

* A vanilla image with cfn-tools installed.

* A seed machine installed with your OS of choice in your datacentre.

### HOWTO

* Build the images you need (add any local elements you need to the commands).

        disk-image-create -o bootstrap vm boot-stack ubuntu heat-api
        disk-image-create -o ubuntu ubuntu cfn-tools

* Set up a VM using bootstrap.qcow2 on your existing machine, with eth1 bridged
  into your datacentre LAN.

* Run up that VM, which will create a self-contained nova baremetal install.

* Enroll your vanilla image into the glance of that install.

* Enroll your other datacentre machines into that nova baremetal install.

* Set up admin users with SSH keypairs etc.

* Create a Heat stack with your application topology. Be sure to use the image
  id of your cfn-tools customised image.

## Flat-networking OpenStack managed by Heat

In this scenario we build on Baremetal with Heat to deploy a full OpenStack
orchestrated by Heat, with specialised disk images for the different OpenStack
node roles.

### Prerequisites

* A boot-stack image set up to run in KVM.

* A vanilla image with cfn-tools installed.

* A seed machine installed with your OS of choice in your datacentre.

### HOWTO

* Build the images you need (add any local elements you need to the commands).

        disk-image-create -o bootstrap vm boot-stack ubuntu heat-api stackuser
        disk-image-create -o ubuntu ubuntu cfn-tools

* Set up a VM using bootstrap.qcow2 on your existing machine, with eth1 bridged
  into your datacentre LAN.

* Run up that VM, which will create a self-contained nova baremetal install.

* Enroll your vanilla image into the glance of that install.

* Enroll your other datacentre machines into that nova baremetal install.

* Set up admin users with SSH keypairs etc.

* Create a Heat stack with your application topology. Be sure to use the image
  id of your cfn-tools customised image.

# Example deployments (future)

WARNING: Here be draft notes.

## VM seed + bare metal under cloud

* Need to be aware that nova metadata won't be available after booting, as the
  default rule assumes this host never initiates requests.
  https://bugs.launchpad.net/tripleo/+bug/1178487
README.md (15 lines changed):
@@ -6,7 +6,7 @@ deployed on and with OpenStack. This repository is our staging area, where we
 incubate new ideas and new tools which get us closer to our goal.

 As an incubation area, we move tools to permanent homes in
-https://github.com/stackforge once they have proved that they do need to exist.
+OpenStack Infra once they have proved that they do need to exist.
 Other times we will propose the tool for inclusion in an existing project (such
 as nova or glance).

@@ -82,7 +82,7 @@ The lowest layer in the dependency stack, diskimage-builder can be used to
 customise generic disk images for use with Nova bare metal. It can also be
 used to provide build-time specialisation for disk images. Diskimage-builder
 is quite mature and can be downloaded from
-https://github.com/stackforge/diskimage-builder.
+https://github.com/openstack/diskimage-builder.

 ### Nova bare-metal / Ironic

@@ -191,8 +191,9 @@ manual or automated) within an organisation, some care is needed to make
 migration, or integration smooth.

 This is a sufficiently complex topic, we've created a dedicated document for
-it - [Deploying TripleO] (./Deploying.md). A related document is the
-instructions for doing [Dev/Test of TripleO] (./devtest.md).
+it - [Deploying TripleO] (http://docs.openstack.org/developer/tripleo-incubator/deploying.html).
+A related document is the
+instructions for doing [Dev/Test of TripleO] (http://docs.openstack.org/developer/tripleo-incubator/devtest.html)

 Architecture
 ------------

@@ -326,9 +327,3 @@ Currently there is no way to guarantee preservation (or deletion) of any of the
 drive contents on a machine if it is deleted in nova baremetal. The planned
 cinder driver will give us an API for describing what behaviour is needed on
 an instance by instance basis.
-
-See also
---------
-https://github.com/tripleo/incubator-bootstrap contains the scripts we run on
-the devstack based bootstrap node - but these are no longer maintained, as
-we have moved to tripleo-image-element based bootstrap nodes.
devtest.md (deleted; 473 lines). Former content:
VM
==============

(There are detailed instructions available below; the overview and
configuration sections provide background information.)

Overview:

* Set up SSH access to let the seed node turn on/off the other libvirt VMs.
* Set up a VM that is your seed node.
* Set up N VMs to pretend to be your cluster.
* Go to town testing deployments on them.
* For troubleshooting see [troubleshooting.md](troubleshooting.md)
* For generic deployment information see [Deploying.md](Deploying.md)

This document is extracted from devtest.sh, our automated bring-up story for
CI/experimentation.

Configuration
-------------

The seed instance expects to run with its eth0 connected to the outside world,
via whatever IP range you choose to set up. You can run NAT, or not, as you
choose. This is how we connect to it to run scripts etc - though you can
equally log in on its console if you like.

We use flat networking with all machines on one broadcast domain for dev-test.

The eth1 of your seed instance should be connected to your bare metal cloud
LAN. The seed VM uses the rfc5735 TEST-NET-1 range - 192.0.2.0/24 - for
bringing up nodes, and does its own DHCP etc, so do not connect it to a network
shared with other DHCP servers or the like. The instructions in this document
create a bridge device ('brbm') on your machine to emulate this with virtual
machine 'bare metal' nodes.

NOTE: We recommend using an apt/HTTP proxy and setting the http_proxy
environment variable accordingly in order to speed up the image build
times. See footnote [3] to set up a Squid proxy.

NOTE: Likewise, set up a pypi mirror and use the pypi element, or use the
pip-cache element. (See the diskimage-builder documentation for both of
these.) Add the relevant element name to the disk-image-builder and
boot-seed-vm script invocations.

NOTE: The CPU architecture specified in several places must be consistent.
The examples here use 32-bit arch for the reduced memory footprint. If
you are running on real hardware, or want to test with 64-bit arch,
replace i386 => amd64 and i686 => x86_64 in all the commands below. You
will of course need amd64 capable hardware to do this.

Detailed instructions
---------------------

__(Note: all of the following commands should be run on your host machine, not inside the seed VM)__

1. Before you start, check that your machine supports hardware
   virtualization; otherwise performance of the test environment will be poor.
   We are currently bringing up an LXC based alternative testing story, which
   will mitigate this, though the deployed instances will still be full virtual
   machines and so performance will be significantly worse there without
   hardware virtualization.
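On Linux, one quick way to perform that check is to count the virtualization CPU flags:

```shell
# Count vmx (Intel) or svm (AMD) flags in /proc/cpuinfo; a count of 0
# means no hardware virtualization (or it is disabled in the firmware).
grep -E -c '(vmx|svm)' /proc/cpuinfo || echo "no hardware virtualization flags found"
```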
1. Also check that an SSH server is running on the host machine and that port
   22 is open for connections from virbr0 - VirtPowerManager will boot VMs by
   sshing into the host machine and issuing libvirt/virsh commands. The user
   these instructions use is your own, but you can also set up a dedicated user
   if you choose.
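A minimal sketch of that check, assuming iproute2's `ss` is available on the host:

```shell
# Report whether anything is listening on TCP port 22.
if ss -ltn | grep -q ':22 '; then
    echo "a listener is present on port 22"
else
    echo "no listener on port 22 - start your SSH server"
fi
```

Checking that virbr0 traffic is allowed through any host firewall is left to your distribution's tooling.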
1. The devtest scripts require access to the libvirt system URI
   (qemu:///system). Check that this is the default libvirt connection for
   your user; if it is not, export LIBVIRT_DEFAULT_URI to configure it.
   This configuration is necessary for consistency, as later steps assume
   qemu:///system is being used.

        export LIBVIRT_DEFAULT_URI=${LIBVIRT_DEFAULT_URI:-"qemu:///system"}

1. Choose a base location to put all of the source code.

        mkdir ~/tripleo
        # exports are ephemeral - for new shell sessions, or after reboots,
        # you need to redo them.
        export TRIPLEO_ROOT=~/tripleo
        cd $TRIPLEO_ROOT

1. git clone this repository to your local machine.

        git clone https://github.com/openstack/tripleo-incubator.git

1. Nova tools get installed in $TRIPLEO_ROOT/tripleo-incubator/scripts - you
   need to add that to the PATH.

        export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$PATH

1. Set HW resources for VMs used as 'baremetal' nodes. NODE_CPU is cpu count,
   NODE_MEM is memory (MB), NODE_DISK is disk size (GB), NODE_ARCH is
   architecture (i386, amd64). NODE_ARCH is also used for the seed VM.
   A note on memory sizing: TripleO images in raw form are currently
   ~2.7Gb, which means that a tight node will end up with a thrashing page
   cache during glance -> local and local -> raw operations. This significantly
   impairs performance. Of the four minimum VMs for TripleO simulation, two
   are nova baremetal nodes (seed and undercloud) and these need to be 2G or
   larger. The hypervisor host in the overcloud also needs to be a decent size
   or it cannot host more than one VM.

   32bit VMs:

        export NODE_CPU=1 NODE_MEM=2048 NODE_DISK=20 NODE_ARCH=i386

   For 64bit it is better to create VMs with more memory and storage because of
   the increased memory footprint:

        export NODE_CPU=1 NODE_MEM=2048 NODE_DISK=20 NODE_ARCH=amd64
1. Set the distribution used for VMs (fedora, ubuntu).

        export NODE_DIST=ubuntu

   For Fedora, set SELinux to permissive mode:

        export NODE_DIST="fedora selinux-permissive"

1. A DHCP driver is used to do DHCP when booting nodes.
   The default bm-dnsmasq is deprecated and soon to be replaced by
   neutron-dhcp-agent.

        export DHCP_DRIVER=bm-dnsmasq

1. Ensure dependencies are installed and the required virsh configuration is
   performed:

        install-dependencies

1. Clone/update the other needed tools which are not available as packages.

        pull-tools

1. You need to make the tripleo image elements accessible to diskimage-builder:

        export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-image-elements/elements

1. Configure a network for your test environment.
   This configures an openvswitch bridge and teaches libvirt about it.

        setup-network

1. Create a deployment ramdisk + kernel. These are used by the seed cloud and
   the undercloud for deployment to bare metal.

        $TRIPLEO_ROOT/diskimage-builder/bin/ramdisk-image-create -a $NODE_ARCH \
            $NODE_DIST deploy -o $TRIPLEO_ROOT/deploy-ramdisk

1. Create and start your seed VM. This script invokes diskimage-builder with
   suitable paths and options to create and start a VM that contains an
   all-in-one OpenStack cloud with the baremetal driver enabled, and
   preconfigures it for a development environment. Note that the seed has
   minimal variation in its configuration: the goal is to bootstrap with
   a known-solid config.

        cd $TRIPLEO_ROOT/tripleo-image-elements/elements/seed-stack-config
        sed -i "s/\"user\": \"stack\",/\"user\": \"`whoami`\",/" config.json
        # If you use 64bit VMs (NODE_ARCH=amd64), also update the architecture.
        sed -i "s/\"arch\": \"i386\",/\"arch\": \"$NODE_ARCH\",/" config.json

        cd $TRIPLEO_ROOT
        boot-seed-vm -a $NODE_ARCH $NODE_DIST bm-dnsmasq

   boot-seed-vm will start a VM and copy your SSH pub key into the VM so that
   you can log into it with 'ssh root@192.0.2.1'.

   The IP address of the VM is printed out at the end of boot-seed-vm, or
   you can use the get-vm-ip script:

        export SEED_IP=`get-vm-ip seed`
1. Add a route to the baremetal bridge via the seed node (we do this so that
   your host is isolated from the networking of the test environment).

        # These are not persistent; if you reboot, re-run them.
        sudo ip route del 192.0.2.0/24 dev virbr0 || true
        sudo ip route add 192.0.2.0/24 dev virbr0 via $SEED_IP

1. Mask the SEED_IP out of your proxy settings:

        export no_proxy=$no_proxy,192.0.2.1,$SEED_IP

1. If you downloaded a pre-built seed image you will need to log into it
   and customise the configuration within it. See footnote [1].

1. Set up a prompt clue so you can tell what cloud you have configured.
   (Do this once.)

        source $TRIPLEO_ROOT/tripleo-incubator/cloudprompt

1. Source the client configuration for the seed cloud.

        source $TRIPLEO_ROOT/tripleo-incubator/seedrc

1. Create some 'baremetal' node(s) out of KVM virtual machines and collect
   their MAC addresses.
   Nova will PXE boot these VMs as though they were physical hardware.
   If you want to create the VMs yourself, see footnote [2] for details on
   their requirements. The last parameter to create-nodes is the VM count.

        export MACS=$(create-nodes $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH 3)

   If you need to collect MAC addresses separately, see scripts/get-vm-mac.

1. Perform setup of your seed cloud.

        init-keystone -p unset unset 192.0.2.1 admin@example.com root@192.0.2.1
        setup-endpoints 192.0.2.1 --glance-password unset --heat-password unset --neutron-password unset --nova-password unset
        keystone role-create --name heat_stack_user
        user-config
        setup-baremetal $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH seed
        setup-neutron 192.0.2.2 192.0.2.3 192.0.2.0/24 192.0.2.1 ctlplane

1. Allow the VirtualPowerManager to ssh into your host machine to power on VMs:

        ssh root@192.0.2.1 "cat /opt/stack/boot-stack/virtual-power-key.pub" >> ~/.ssh/authorized_keys

1. Create your undercloud image. This is the image that the seed nova
   will deploy to become the baremetal undercloud. Note that stackuser is only
   there for debugging support - it is not suitable for a production network.

        $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
            -a $NODE_ARCH -o $TRIPLEO_ROOT/undercloud \
            boot-stack nova-baremetal os-collect-config stackuser $DHCP_DRIVER

1. Load the undercloud image into Glance:

        load-image $TRIPLEO_ROOT/undercloud.qcow2

1. Create secrets for the cloud. The secrets will be written to a file
   (tripleo-passwords by default) that you need to source into your shell
   environment. Note that you can also make or change these later and
   update the heat stack definition to inject them - as long as you also
   update the password recorded in keystone. Note that there will be a window
   between updating keystone and updating the instances where they disagree,
   and service will be down. Instead, consider adding a new service account
   and changing everything across to it, then deleting the old account after
   the cluster is updated.

        setup-passwords
        source tripleo-passwords
1. Deploy an undercloud:

        heat stack-create -f $TRIPLEO_ROOT/tripleo-heat-templates/undercloud-vm.yaml \
            -P "PowerUserName=$(whoami);AdminToken=${UNDERCLOUD_ADMIN_TOKEN};AdminPassword=${UNDERCLOUD_ADMIN_PASSWORD};GlancePassword=${UNDERCLOUD_GLANCE_PASSWORD};HeatPassword=${UNDERCLOUD_HEAT_PASSWORD};NeutronPassword=${UNDERCLOUD_NEUTRON_PASSWORD};NovaPassword=${UNDERCLOUD_NOVA_PASSWORD};BaremetalArch=${NODE_ARCH}" \
            undercloud

   You can watch the console via virsh/virt-manager to observe the PXE
   boot/deploy process. After the deploy is complete, it will reboot into the
   image.

1. Get the undercloud IP from 'nova list':

        export UNDERCLOUD_IP=$(nova list | grep ctlplane | sed -e "s/.*=\\([0-9.]*\\).*/\1/")
        ssh-keygen -R $UNDERCLOUD_IP

1. Source the undercloud configuration:

        source $TRIPLEO_ROOT/tripleo-incubator/undercloudrc

1. Exclude the undercloud from proxies:

        export no_proxy=$no_proxy,$UNDERCLOUD_IP

1. Perform setup of your undercloud.

        init-keystone -p $UNDERCLOUD_ADMIN_PASSWORD $UNDERCLOUD_ADMIN_TOKEN \
            $UNDERCLOUD_IP admin@example.com heat-admin@$UNDERCLOUD_IP
        setup-endpoints $UNDERCLOUD_IP --glance-password $UNDERCLOUD_GLANCE_PASSWORD \
            --heat-password $UNDERCLOUD_HEAT_PASSWORD \
            --neutron-password $UNDERCLOUD_NEUTRON_PASSWORD \
            --nova-password $UNDERCLOUD_NOVA_PASSWORD
        keystone role-create --name heat_stack_user
        user-config
        setup-baremetal $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH undercloud
        setup-neutron 192.0.2.5 192.0.2.24 192.0.2.0/24 $UNDERCLOUD_IP ctlplane

1. Allow the VirtualPowerManager to ssh into your host machine to power on VMs:

        ssh heat-admin@$UNDERCLOUD_IP "cat /opt/stack/boot-stack/virtual-power-key.pub" >> ~/.ssh/authorized_keys

1. Create your overcloud control plane image. This is the image the undercloud
   will deploy to become the KVM (or Xen etc) cloud control plane. Note that
   stackuser is only there for debugging support - it is not suitable for a
   production network.

        $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
            -a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-control \
            boot-stack cinder os-collect-config neutron-network-node stackuser

1. Load the image into Glance:

        load-image $TRIPLEO_ROOT/overcloud-control.qcow2

1. Create your overcloud compute image. This is the image the undercloud
   deploys to host KVM instances. Note that stackuser is only there for
   debugging support - it is not suitable for a production network.

        $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
            -a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-compute \
            nova-compute nova-kvm neutron-openvswitch-agent os-collect-config stackuser

1. Load the image into Glance:

        load-image $TRIPLEO_ROOT/overcloud-compute.qcow2

1. For running an overcloud in VMs:

        OVERCLOUD_LIBVIRT_TYPE=${OVERCLOUD_LIBVIRT_TYPE:-";NovaComputeLibvirtType=qemu"}

1. Deploy an overcloud:

        make -C $TRIPLEO_ROOT/tripleo-heat-templates overcloud.yaml
        heat stack-create -f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml \
            -P "AdminToken=${OVERCLOUD_ADMIN_TOKEN};AdminPassword=${OVERCLOUD_ADMIN_PASSWORD};CinderPassword=${OVERCLOUD_CINDER_PASSWORD};GlancePassword=${OVERCLOUD_GLANCE_PASSWORD};HeatPassword=${OVERCLOUD_HEAT_PASSWORD};NeutronPassword=${OVERCLOUD_NEUTRON_PASSWORD};NovaPassword=${OVERCLOUD_NOVA_PASSWORD}${OVERCLOUD_LIBVIRT_TYPE}" \
            overcloud

   You can watch the console via virsh/virt-manager to observe the PXE
   boot/deploy process. After the deploy is complete, the machines will reboot
   and be available.
||||
|
||||
1. Get the overcloud IP from 'nova list'
|
||||
|
||||
export OVERCLOUD_IP=$(nova list | grep notcompute.*ctlplane | sed -e "s/.*=\\([0-9.]*\\).*/\1/")
|
||||
ssh-keygen -R $OVERCLOUD_IP
|
||||
|
||||
1. Source the overcloud configuration:
|
||||
|
||||
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
|
||||
|
||||
1. Exclude the undercloud from proxies:
|
||||
|
||||
export no_proxy=$no_proxy,$OVERCLOUD_IP
|
||||
|
||||
1. Perform admin setup of your overcloud.
|
||||
|
||||
init-keystone -p $OVERCLOUD_ADMIN_PASSWORD $OVERCLOUD_ADMIN_TOKEN \
|
||||
$OVERCLOUD_IP admin@example.com heat-admin@$OVERCLOUD_IP
|
||||
setup-endpoints $OVERCLOUD_IP --cinder-password $OVERCLOUD_CINDER_PASSWORD \
|
||||
--glance-password $OVERCLOUD_GLANCE_PASSWORD \
|
||||
--heat-password $UNDERCLOUD_HEAT_PASSWORD \
|
||||
--neutron-password $OVERCLOUD_NEUTRON_PASSWORD \
|
||||
--nova-password $OVERCLOUD_NOVA_PASSWORD
|
||||
keystone role-create --name heat_stack_user
|
||||
user-config
|
||||
setup-neutron "" "" 10.0.0.0/8 "" "" 192.0.2.45 192.0.2.64 192.0.2.0/24
1. If you want a demo user in your overcloud (probably a good idea):

        os-adduser -p $OVERCLOUD_DEMO_PASSWORD demo demo@example.com

1. Work around https://bugs.launchpad.net/diskimage-builder/+bug/1211165:

        nova flavor-delete m1.tiny
        nova flavor-create m1.tiny 1 512 2 1

1. Build an end user disk image and register it with glance:

        $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
            -a $NODE_ARCH -o $TRIPLEO_ROOT/user
        glance image-create --name user --public --disk-format qcow2 \
            --container-format bare --file $TRIPLEO_ROOT/user.qcow2

1. Log in as a user:

        source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc-user
        user-config

1. Deploy your image:

        nova boot --key-name default --flavor m1.tiny --image user demo

1. Add an external IP for it:

        PORT=$(neutron port-list -f csv -c id --quote none | tail -n1)
        neutron floatingip-create ext-net --port-id "${PORT//[[:space:]]/}"
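The `${PORT//[[:space:]]/}` expansion strips the whitespace that `tail` leaves around the port id; a standalone bash sketch with a made-up id:

```shell
# Made-up port id surrounded by stray whitespace, as the csv output might emit it.
PORT='  0a1b2c3d-1111-2222-3333-444455556666  '
# Bash pattern substitution: delete every whitespace character.
CLEAN="${PORT//[[:space:]]/}"
echo "$CLEAN"
```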
1. And allow network access to it:

        neutron security-group-rule-create default --protocol icmp \
            --direction ingress --port-range-min 8 --port-range-max 8
        neutron security-group-rule-create default --protocol tcp \
            --direction ingress --port-range-min 22 --port-range-max 22
The End!


Footnotes
=========

* [1] Customize a downloaded seed image.

  If you downloaded your seed VM image, you may need to configure it.
  Set up a network proxy, if you have one (e.g. 192.168.2.1 port 8080):

        # Run within the image!
        cat << EOF >> ~/.profile
        export no_proxy=192.0.2.1
        export http_proxy=http://192.168.2.1:8080/
        EOF

  Add an ~/.ssh/authorized_keys file. The image rejects password authentication
  for security, so you will need to ssh out from the VM console. Even if you
  don't copy your authorized_keys in, you will still need to ensure that
  /home/stack/.ssh/authorized_keys on your seed node has some kind of
  public SSH key in it, or the openstack configuration scripts will error.

  You can log into the console using the username 'stack', password 'stack'.
* [2] Requirements for the "baremetal node" VMs

  If you don't use create-nodes, but want to create your own VMs, here are some
  suggestions for what they should look like:

  - each VM should have 1 NIC
  - eth0 should be on brbm
  - record the MAC address for the NIC of each VM
  - give each VM no less than 2GB of disk, and ideally give them
    more than NODE_DISK, which defaults to 20GB
  - 1GB RAM is probably enough (512MB is not enough to run an all-in-one
    OpenStack), and 768MB isn't enough to do repeated deploys with
  - if using KVM, specify that you will install the virtual machine via PXE;
    this will avoid KVM prompting for a disk image or installation media
* [3] Setting Up Squid Proxy

  - Install the squid proxy: `apt-get install squid3`
  - Set `/etc/squid3/squid.conf` to the following:

    <pre><code>
    acl manager proto cache_object
    acl localhost src 127.0.0.1/32 ::1
    acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
    acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
    acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
    acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
    acl SSL_ports port 443
    acl Safe_ports port 80          # http
    acl Safe_ports port 21          # ftp
    acl Safe_ports port 443         # https
    acl Safe_ports port 70          # gopher
    acl Safe_ports port 210         # wais
    acl Safe_ports port 1025-65535  # unregistered ports
    acl Safe_ports port 280         # http-mgmt
    acl Safe_ports port 488         # gss-http
    acl Safe_ports port 591         # filemaker
    acl Safe_ports port 777         # multiling http
    acl CONNECT method CONNECT
    http_access allow manager localhost
    http_access deny manager
    http_access deny !Safe_ports
    http_access deny CONNECT !SSL_ports
    http_access allow localnet
    http_access allow localhost
    http_access deny all
    http_port 3128
    cache_dir aufs /var/spool/squid3 5000 24 256
    maximum_object_size 1024 MB
    coredump_dir /var/spool/squid3
    refresh_pattern ^ftp:           1440    20%     10080
    refresh_pattern ^gopher:        1440    0%      1440
    refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
    refresh_pattern (Release|Packages(.gz)*)$ 0    20%     2880
    refresh_pattern .               0       20%     4320
    refresh_all_ims on
    </code></pre>

  - Restart squid: `sudo service squid3 restart`
  - Set the http_proxy environment variable: `http_proxy=http://your_ip_or_localhost:3128/`
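Combined with the footnote-1 proxy setup, pointing a build environment at this squid might look like the following (the 192.168.2.1 address is an assumed squid host; substitute your own):

```shell
# Assumed address of the machine running squid3 on its default port.
export http_proxy=http://192.168.2.1:3128/
# The seed's own ctlplane address should bypass the proxy.
export no_proxy=192.0.2.1,localhost
echo "$http_proxy"
```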

0
doc/ext/__init__.py
Normal file
32
doc/ext/extract_docs.py
Normal file
@@ -0,0 +1,32 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

from sphinx import errors


def builder_inited(app):
    app.info('In: ' + os.path.abspath('.'))
    source_dir = app.srcdir
    build_dir = app.outdir
    app.info('Generating devtest from %s into %s' % (source_dir, build_dir))
    ret = os.system('scripts/extract-docs')
    if ret:
        raise errors.ExtensionError(
            "Error generating %s/devtest.rst" % build_dir)


def setup(app):
    app.connect('builder-inited', builder_inited)
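The `if ret:` check in the extension works because `os.system` returns the shell's wait status, which is non-zero whenever the command fails. A quick illustration, using `false` and `true` in place of the real extract-docs script:

```python
import os

# `false` exits with a non-zero status; os.system wraps that exit code in
# a wait status, so the result is truthy on failure.
ret = os.system('false')
assert ret != 0

# A succeeding command yields 0, so the extension would raise nothing.
assert os.system('true') == 0
```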
50
doc/source/conf.py
Normal file
@@ -0,0 +1,50 @@
# -*- coding: utf-8 -*-

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'oslo.sphinx',
    'ext.extract_docs'
]

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'TripleO'
copyright = u'2013, OpenStack Foundation'

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output --------------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}
351
doc/source/deploying.rst
Normal file
@@ -0,0 +1,351 @@
Deploying TripleO
=================

Components
==========

Essential Components
--------------------

Essential components make up the self-deploying infrastructure that is
the heart of TripleO.

- Baremetal machine deployment (Nova Baremetal, soon to be 'Ironic')

- Baremetal volume management (Cinder - not available yet)

- Cluster orchestration (Heat)

- Machine image creation (Diskimage-builder)

- In-instance configuration management
  (os-config-applier+os-refresh-config, and/or Chef/Puppet/Salt)

- Image management (Glance)

- Network management (Quantum)

- Authentication and service catalog (Keystone)

Additional Components
---------------------

These components add value to the TripleO story, making it safer to
upgrade and evolve an environment, but are secondary to the core thing
itself.

- Continuous integration (Zuul/Jenkins)

- Monitoring and alerting (Ceilometer/nagios/etc)
Dependencies
============

Each component can only be deployed once its dependencies are available.

TripleO is built on a Linux platform, so a Linux environment is required
both to create images and as the OS that will run on the machines. If
you have no Linux machines at all, you can download a live CD from a
number of vendors, which will permit you to run diskimage-builder to get
going.

Diskimage-builder
-----------------

An internet connection is also required to download the various packages
used in preparing each image.

The machine images built *can* depend on Heat metadata, or they can just
contain configured Chef/Puppet/Salt credentials, depending on how much
of TripleO is in use. Avoiding Heat is useful when doing an incremental
adoption of TripleO (see later in this document).

Baremetal machine deployment
----------------------------

Baremetal deployments are delivered via Nova. Additionally, the network
must be configured so that the baremetal host machine can receive TFTP
requests from any physical machine that is being booted.

Nova
----

Nova depends on Keystone, Glance and Quantum. In future Cinder will be
one of the dependencies.

There are three ways the service can be deployed:

- Via diskimage-builder built machine images, configured via a running
  Heat cluster. This is the normal TripleO deployment.

- Via the special bootstrap node image, which is built by
  diskimage-builder and contains a full working stack - nova, glance,
  keystone and neutron, configured by statically generated Heat
  metadata. This approach is used to get TripleO up and running.

- By hand - e.g. using devstack, or manually/chef/puppet/packages on a
  dedicated machine. This can be useful for incremental adoption of
  TripleO.

Cinder
------

Cinder is needed for persistent storage on bare metal machines. That
aspect of TripleO is not yet available: when an instance is deleted,
the storage is deleted with it.

Quantum
-------

Quantum depends on Keystone. The same three deployment options exist as
for Nova. The Quantum network node(s) must be the only DHCP servers on
the network.

Glance
------

Glance depends on Keystone. The same three deployment options exist as
for Nova.

Keystone
--------

Keystone has no external dependencies. The same three deployment options
exist as for Nova.

Heat
----

Heat depends on Nova, Cinder and Keystone. The same three deployment
options exist as for Nova.

In-instance configuration
-------------------------

The os-config-applier and os-refresh-config tools depend on Heat to
provide cluster configuration metadata. They can be used before Heat is
functional if a statically prepared metadata file is placed in the Heat
path: this is how the bootstrap node works.

os-config-applier and os-refresh-config can be used in concert with
Chef/Puppet/Salt, or not used at all, if you configure your services via
Chef/Puppet/Salt.

The reference TripleO elements do not depend on Chef/Puppet/Salt, to
avoid conflicts when organisations with an investment in
Chef/Puppet/Salt start using TripleO.
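The metadata-driven configuration described above can be pictured as template substitution: Heat supplies a JSON metadata blob, and templated config files are rendered from it. A toy Python sketch of the idea (not the tool's real template syntax or API; the metadata keys and template are invented for illustration):

```python
import json
import re

# Hypothetical Heat-style metadata blob.
metadata = json.loads('{"nova": {"db_host": "192.0.2.5"}}')

# Hypothetical config template with dotted lookup keys.
template = "sql_connection = mysql://nova@{{nova.db_host}}/nova"

def render(template, metadata):
    """Replace each {{dotted.key}} with the matching metadata value."""
    def lookup(match):
        value = metadata
        for part in match.group(1).split('.'):
            value = value[part]
        return str(value)
    return re.sub(r'\{\{([\w.]+)\}\}', lookup, template)

print(render(template, metadata))  # -> sql_connection = mysql://nova@192.0.2.5/nova
```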
Deploying TripleO incrementally
===============================

The general sequence is:

- Examine the current state of TripleO and assess where non-automated
  solutions will be needed for your environment. E.g. at the time of
  writing VLAN support requires baking the VLAN configuration into your
  built disk images.

- Decide how much of TripleO you will adopt. See 'Example deployments'
  below.

- Install diskimage-builder somewhere and use it to build the disk
  images your configuration will require.

- Bring up the aspects of TripleO you will be using, starting with a
  boot-stack node (which you can run in a KVM VM in your datacentre),
  using that to bring up an actual machine and transfer bare metal
  services onto it, and then continuing up the stack.

Current caveats / workarounds
=============================

These are all documented in README.rst and in the `TripleO bugtracker
<https://launchpad.net/tripleo>`_.

No API driven persistent storage
--------------------------------

Every 'nova boot' will reset the data on the machine it deploys to. To
do incremental image based updates they have to be done within the
running image. 'takeovernode' can do that, but as yet we have not
written rules to split out persistent data into another partition - so
some assembly is required.

VLANs for physical nodes require customised images (rather than just metadata)
------------------------------------------------------------------------------

If you require VLANs you should create a diskimage-builder element to
add the vlan package and vlan configuration to /etc/network/interfaces
as a first-boot rule.
Example deployments (possible today)
====================================

Baremetal only
--------------

In this scenario you make use of the baremetal driver to deploy
unspecialised machine images, and perform specialisation using
Chef/Puppet/Salt - whatever configuration management toolchain you
prefer. The baremetal host system is installed manually, but a TripleO
image is used to deploy it.

It scales within any one broadcast domain to the capacity of the single
baremetal host.

Prerequisites
~~~~~~~~~~~~~

- A boot-stack image set up to run in KVM.

- A vanilla image.

- A userdata script to configure new instances to run however you want.

- A machine installed with your OS of choice in your datacentre.

- Physical machines configured to netboot in preference to local boot.

- A list of the machines + their IPMI details + MAC addresses.

- A network range larger than the maximum number of concurrent deploy
  operations to run in parallel.

- A network to run the instances on, large enough to supply one IP per
  instance.

HOWTO
~~~~~

- Build the images you need (add any local elements you need to the
  commands).

- Edit tripleo-image-elements/elements/boot-stack/config.json and
  change the virtual power manager to 'nova...ipmi.IPMI'. See
  https://bugs.launchpad.net/tripleo/+bug/1178547.

  ::

      disk-image-create -o bootstrap vm boot-stack ubuntu
      disk-image-create -o ubuntu ubuntu

- Set up a VM using bootstrap.qcow2 on your existing machine, with eth1
  bridged into your datacentre LAN.

- Run up that VM, which will create a self contained nova baremetal
  install.

- Reconfigure the networking within the VM to match your physical
  network. See https://bugs.launchpad.net/tripleo/+bug/1178397 and
  https://bugs.launchpad.net/tripleo/+bug/1178099.

- If you have exotic hardware needs, replace the deploy images that the
  boot-stack creates. See https://bugs.launchpad.net/tripleo/+bug/1178094.

- Enroll your vanilla image into the glance of that install. Be sure to
  use tripleo-incubator/scripts/load-image, as that will extract the
  kernel and ramdisk and register them appropriately with glance.

- Enroll your other datacentre machines into that nova baremetal
  install. A script that takes your machine inventory and prints out
  something like::

      nova baremetal-node-create --pm_user XXX --pm_address YYY \
          --pm_password ZZZ COMPUTEHOST 24 98304 2048 MAC

  can be a great help - and can be run from outside the environment.

- Set up admin users with SSH keypairs etc., e.g.::

      nova keypair-add --pub-key .ssh/authorized_keys default

- Boot them using the ubuntu.qcow2 image, with appropriate user data to
  connect to your Chef/Puppet/Salt environments.
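The inventory-to-enrollment helper suggested in the HOWTO above can be sketched in a few lines. The inventory tuple layout here is an assumption; the argument order (user, address, password, host, cpus, memory, disk, mac) follows the example command in the text:

```python
# Hypothetical inventory rows:
# (pm_user, pm_address, pm_password, host, cpus, mem_mb, disk_gb, mac)
inventory = [
    ("admin", "10.1.0.5", "secret", "COMPUTEHOST", 24, 98304, 2048,
     "78:e7:d1:aa:bb:cc"),
]

def enroll_commands(inventory):
    """Render one baremetal-node-create command per inventory row."""
    template = ("nova baremetal-node-create --pm_user {0} --pm_address {1} "
                "--pm_password {2} {3} {4} {5} {6} {7}")
    return [template.format(*row) for row in inventory]

for cmd in enroll_commands(inventory):
    print(cmd)
```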
Baremetal with Heat
-------------------

In this scenario you use the baremetal driver to deploy specialised
machine images which are orchestrated by Heat.

Prerequisites
~~~~~~~~~~~~~

- A boot-stack image set up to run in KVM.

- A vanilla image with cfn-tools installed.

- A seed machine installed with your OS of choice in your datacentre.

HOWTO
~~~~~

- Build the images you need (add any local elements you need to the
  commands)::

      disk-image-create -o bootstrap vm boot-stack ubuntu heat-api
      disk-image-create -o ubuntu ubuntu cfn-tools

- Set up a VM using bootstrap.qcow2 on your existing machine, with eth1
  bridged into your datacentre LAN.

- Run up that VM, which will create a self contained nova baremetal
  install.

- Enroll your vanilla image into the glance of that install.

- Enroll your other datacentre machines into that nova baremetal
  install.

- Set up admin users with SSH keypairs etc.

- Create a Heat stack with your application topology. Be sure to use
  the image id of your cfn-tools customised image.

Flat-networking OpenStack managed by Heat
-----------------------------------------

In this scenario we build on Baremetal with Heat to deploy a full
OpenStack orchestrated by Heat, with specialised disk images for
different OpenStack node roles.

Prerequisites
~~~~~~~~~~~~~

- A boot-stack image set up to run in KVM.

- A vanilla image with cfn-tools installed.

- A seed machine installed with your OS of choice in your datacentre.

HOWTO
~~~~~

- Build the images you need (add any local elements you need to the
  commands)::

      disk-image-create -o bootstrap vm boot-stack ubuntu heat-api stackuser
      disk-image-create -o ubuntu ubuntu cfn-tools

- Set up a VM using bootstrap.qcow2 on your existing machine, with eth1
  bridged into your datacentre LAN.

- Run up that VM, which will create a self contained nova baremetal
  install.

- Enroll your vanilla image into the glance of that install.

- Enroll your other datacentre machines into that nova baremetal
  install.

- Set up admin users with SSH keypairs etc.

- Create a Heat stack with your application topology. Be sure to use
  the image id of your cfn-tools customised image.

Example deployments (future)
============================

WARNING: Here be draft notes.

VM seed + bare metal under cloud
--------------------------------

- Be aware that nova metadata won't be available after booting, as the
  default rule assumes this host never initiates requests.
  https://bugs.launchpad.net/tripleo/+bug/1178487
20
doc/source/index.rst
Normal file
@@ -0,0 +1,20 @@
TripleO Incubator
=================

.. toctree::
   :maxdepth: 1

   devtest

   deploying
   resources
   neutron_notes
   troubleshooting


Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
65
doc/source/neutron_notes.rst
Normal file
@@ -0,0 +1,65 @@
Neutron Notes
=============

The following notes have already been incorporated into the
baremetal-neutron-dhcp branches of devstack and nova, and this
(neutron-dhcp) branch of tripleo-incubator. Most of the differences are
contained in tripleo-incubator/localrc and devstack/lib/neutron. There
are also several patches to Nova.

These notes are preserved in a single place just for clarity. You do not
need to follow these instructions.

Instructions
------------

After starting devstack, fix up your neutron networking. You want a
provider network, connected through to eth1, with ip addresses on the
bridge.

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini should already
have the following two lines, but check just in case:

::

    network_vlan_ranges=ctlplane
    bridge_mappings=ctlplane:br-ctlplane

You will need to add a port to the bridge:

::

    sudo ovs-vsctl add-port br-ctlplane eth1

and move the ip addresses to it:

::

    sudo ip addr del 192.0.2.1/24 dev eth1
    sudo ip addr del 192.0.2.33/29 dev eth1
    sudo ip addr add 192.0.2.33/29 dev br-ctlplane
    sudo ip addr add 192.0.2.1/24 dev br-ctlplane

You need to replace the private network definition:

::

    export OS_USERNAME=admin
    neutron net-list

then run ``neutron net-show <uuid>`` on the existing 192.0.2.33 network
to get the tenant id, then delete and recreate it:

::

    neutron net-delete <uuid>
    neutron net-create ctlplane --tenant-id <uuid> --provider:network_type flat --provider:physical_network ctlplane
    neutron subnet-create --ip-version 4 --allocation-pool start=192.0.2.34,end=192.0.2.38 --gateway=192.0.2.33 <new-uuid> 192.0.2.32/29
    sudo ifconfig br-ctlplane up

Then kill and restart q-svc, q-agt and q-dhcp, and bm-helper's dnsmasq:
in ``screen -x stack`` ctrl-c and restart them; for bm-helper, run this
by hand after killing the old dnsmasq:

::

    dnsmasq --conf-file= --port=0 --enable-tftp --tftp-root=/tftpboot --dhcp-boot=pxelinux.0 --bind-interfaces --pid-file=/var/run/dnsmasq.pid --dhcp-range=192.0.2.65,192.0.2.69,29 --interface=br-ctlplane
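The subnet's allocation pool above sits inside 192.0.2.32/29; a quick check with Python's ipaddress module shows why the pool runs from .34 to .38 with .33 reserved as the gateway:

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.32/29")
# A /29 has 6 usable hosts: .33 (taken by the gateway) through .38.
hosts = [str(h) for h in net.hosts()]
print(hosts)
```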
27
doc/source/resources.rst
Normal file
@@ -0,0 +1,27 @@
Tripleo team resources
======================

- Launchpad team (lets you get our ssh keys etc easily):

  ::

      https://launchpad.net/~tripleo

- Demo and staging PPAs (for custom binaries):

  ::

      apt-add-repository ppa:tripleo/demo
      apt-add-repository ppa:tripleo/demo-staging

- Git repositories:

  ::

      https://github.com/tripleo

- IRC: duh.

  ::

      irc://irc.freenode.net/#tripleo
@@ -12,40 +12,57 @@ Baremetal

If you get a "no hosts found" error in the scheduler/nova logs, check:

::

    mysql nova -e 'select * from compute_nodes;'

After adding a bare metal node, the bare metal backend writes an entry
to the compute nodes table, but it takes about 5 seconds to go from A to
B.

Be sure that the hostname in nova_bm.bm_nodes (service_host) is the
same as the one used by nova. If no value has been specified using the
flag "host=" in nova.conf, the default one is:

::

    python -c "import socket; print socket.getfqdn()"

You can override this value when populating the bm database using the -h
flag:

::

    scripts/populate-nova-bm-db.sh -i "xx:xx:xx:xx:xx:xx" -j "yy:yy:yy:yy:yy:yy" -h "nova_hostname" add
DHCP Server Work Arounds
------------------------

If you don't control the DHCP server on your flat network, you will need
to at least have someone put the MAC address of the server you're trying
to provision into their DHCP server:

::

    host bm-compute001 {
        hardware ethernet 78:e7:d1:XX:XX:XX ;
        next-server 10.0.1.2 ;
        filename "pxelinux.0";
    }

Write down the MAC address for the IPMI management interface and the NIC
you're booting from. You will also need to know the IP address of both.
Most DHCP servers won't expire the leased IP too quickly, so if you're
lucky you will get the same IP each time you reboot. With that
information bare-metal can generate the correct pxelinux.cfg/<file>.
(???? Commands to tell nova?)

In the provisional environment I have there was another problem. The
DHCP server was already modified to point to a next-server. A quick work
around was to redirect the connections using iptables:

::

    modprobe nf_nat_tftp

@@ -56,21 +73,29 @@ In the provisional environment I have there was another problem. The DHCP Server

    iptables -A FORWARD -p tcp -i eth2 -o eth2 -d ${baremetal_installer} --dport 10000 -j ACCEPT
    iptables -t nat -A POSTROUTING -j MASQUERADE

Notice the additional rules for port 10000. It is for the bare-metal
interface (???). You should have matching reverse DNS too. We experienced
problems connecting to port 10000 (????). That may be very unique to my
environment btw.

Image Build Race Condition
--------------------------

Multiple times we experienced a failure to build a good bootable image.
This is because of a race condition currently hidden in the code. Just
remove the failed image and try to build it again.

Once you have a working image, check the Nova DB to make sure it is
not flagged as removed (???).

Virtual Machines
----------------

VMs booting terribly slowly in KVM?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check the console: if the slowdown happens right after probing for
consoles, wait 2m or so and you should see a serial console as the next
line of output after the vga console. If so you're likely running into
https://bugzilla.redhat.com/show_bug.cgi?id=750773. Remove the serial
device from your machine definition in libvirt, and it should fix it.
@@ -1,43 +0,0 @@

The following notes have already been incorporated into the baremetal-neutron-dhcp
branches of devstack and nova, and this (neutron-dhcp) branch of tripleo-incubator.
Most of the differences are contained in tripleo-incubator/localrc and devstack/lib/neutron.
There are also several patches to Nova.

These notes are preserved in a single place just for clarity.
You do not need to follow these instructions.

----------------------------------

After starting devstack, fixup your neutron networking.
You want a provider network, connected through to eth1, with ip addresses on the bridge.

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini should already have the following two lines, but check just in case:
    network_vlan_ranges=ctlplane
    bridge_mappings=ctlplane:br-ctlplane

you will need to add a port to the bridge
    sudo ovs-vsctl add-port br-ctlplane eth1

and move ip addresses to it
    sudo ip addr del 192.0.2.1/24 dev eth1
    sudo ip addr del 192.0.2.33/29 dev eth1
    sudo ip addr add 192.0.2.33/29 dev br-ctlplane
    sudo ip addr add 192.0.2.1/24 dev br-ctlplane

you need to replace the private network definition
    export OS_USERNAME=admin
    neutron net-list
then
    neutron net-show <uuid>
of the existing 192.0.2.33 network and get the tenant id, then delete and recreate it
    neutron net-delete <uuid>
    neutron net-create ctlplane --tenant-id <uuid> --provider:network_type flat --provider:physical_network ctlplane
    neutron subnet-create --ip-version 4 --allocation-pool start=192.0.2.34,end=192.0.2.38 --gateway=192.0.2.33 <new-uuid> 192.0.2.32/29
    sudo ifconfig br-ctlplane up

then kill and restart q-svc and q-agt and q-dhcp, bm-helper's dnsmasq:
- in screen -x stack ctrl-c and restart them
- for bm-helper, run this by hand after killing the old dnsmasq
    dnsmasq --conf-file= --port=0 --enable-tftp --tftp-root=/tftpboot --dhcp-boot=pxelinux.0 --bind-interfaces --pid-file=/var/run/dnsmasq.pid --dhcp-range=192.0.2.65,192.0.2.69,29 --interface=br-ctlplane
23
resources.md
@@ -1,23 +0,0 @@
Tripleo team resources
===========================

* Launchpad team (lets you get our ssh keys etc easily):

https://launchpad.net/~tripleo

* Demo and staging PPAs (for custom binaries):

apt-add-repository ppa:tripleo/demo
apt-add-repository ppa:tripleo/demo-staging

* Github project with various repositories (nova etc):

https://github.com/tripleo

* Trello - where we are coordinating what needs done:

https://trello.com/tripleo

* IRC: duh.

irc://irc.freenode.net/#tripleo
@@ -37,7 +37,7 @@ function show_options () {
echo "Create and start a VM by combining the specified elements"
echo "with common default elements, assuming many things about"
echo "the local operating environment."
echo "See ../devtest.md"
echo "See ../scripts/devtest.sh"
echo
echo "Options:"
echo " -a i386|amd64 -- set the architecture of the VM (i386)"

@@ -42,19 +42,20 @@ if [ "0" = "$CONTINUE" ]; then
fi

### --include
## VM
## ==============
## devtest
## =======

## (There are detailed instructions available below, the overview and
## configuration sections provide background information).

## Overview:
##
## * Setup SSH access to let the seed node turn on/off other libvirt VMs.
## * Setup a VM that is your seed node
## * Setup N VMs to pretend to be your cluster
## * Go to town testing deployments on them.
## * For troubleshooting see [troubleshooting.md](troubleshooting.md)
## * For generic deployment information see [Deploying.md](Deploying.md)
## * For troubleshooting see :doc:`troubleshooting`
## * For generic deployment information see :doc:`deploying`

## This document is extracted from devtest.sh, our automated bring-up story for
## CI/experimentation.
@@ -77,66 +78,69 @@ fi
## machine 'bare metal' nodes.
##
##
## NOTE: We recommend using an apt/HTTP proxy and setting the http_proxy
## environment variable accordingly in order to speed up the image build
## times. See footnote [3] to set up Squid proxy.
## NOTE: We recommend using an apt/HTTP proxy and setting the http_proxy
## environment variable accordingly in order to speed up the image build
## times. See footnote [#f3]_ to set up Squid proxy.
##
## NOTE: Likewise, setup a pypi mirror and use the pypi element, or use the
## pip-cache element. (See diskimage-builder documentation for both of
## these). Add the relevant element name to the disk-image-builder and
## boot-seed-vm script invocations.
## NOTE: Likewise, setup a pypi mirror and use the pypi element, or use the
## pip-cache element. (See diskimage-builder documentation for both of
## these). Add the relevant element name to the disk-image-builder and
## boot-seed-vm script invocations.
##
## NOTE: The CPU architecture specified in several places must be consistent.
## The examples here use 32-bit arch for the reduced memory footprint. If
## you are running on real hardware, or want to test with 64-bit arch,
## replace i386 => amd64 and i686 => x86_64 in all the commands below. You
## will of course need amd64 capable hardware to do this.
## NOTE: The CPU architecture specified in several places must be consistent.
## The examples here use 32-bit arch for the reduced memory footprint. If
## you are running on real hardware, or want to test with 64-bit arch,
## replace i386 => amd64 and i686 => x86_64 in all the commands below. You
## will of course need amd64 capable hardware to do this.
##
## Detailed instructions
## ---------------------
##
## __(Note: all of the following commands should be run on your host machine, not inside the seed VM)__
## **(Note: all of the following commands should be run on your host machine, not inside the seed VM)**
##
## 1. Before you start, check to see that your machine supports hardware
## #. Before you start, check to see that your machine supports hardware
## virtualization, otherwise performance of the test environment will be poor.
## We are currently bringing up an LXC based alternative testing story, which
## will mitigate this, though the deployed instances will still be full virtual
## machines and so performance will be significantly less there without
## hardware virtualization.
##
## 1. Also check ssh server is running on the host machine and port 22 is open for
## #. Also check ssh server is running on the host machine and port 22 is open for
## connections from virbr0 - VirtPowerManager will boot VMs by sshing into the
## host machine and issuing libvirt/virsh commands. The user these instructions
## use is your own, but you can also setup a dedicated user if you choose.
##
## 1. The devtest scripts require access to the libvirt system URI.
## #. The devtest scripts require access to the libvirt system URI.
## If running against a different libvirt URI you may encounter errors.
## Export LIBVIRT_DEFAULT_URI to prevent devtest using qemu:///system
## Check that the default libvirt connection for your user is qemu:///system.
## If it is not, set an environment variable to configure the connection.
## This configuration is necessary for consistency, as later steps assume
## qemu:///system is being used.
##
## ::

export LIBVIRT_DEFAULT_URI=${LIBVIRT_DEFAULT_URI:-"qemu:///system"}

## 1. Choose a base location to put all of the source code.
##
## #. Choose a base location to put all of the source code.
## ::
## mkdir ~/tripleo
## # exports are ephemeral - new shell sessions, or reboots, and you need
## # to redo them.
## export TRIPLEO_ROOT=~/tripleo
## cd $TRIPLEO_ROOT
##
## 1. git clone this repository to your local machine.
## #. git clone this repository to your local machine.
## ::
##
## git clone https://github.com/openstack/tripleo-incubator.git
##
## 1. Nova tools get installed in $TRIPLEO_ROOT/tripleo-incubator/scripts - you need to
## add that to the PATH.
## #. Nova tools get installed in $TRIPLEO_ROOT/tripleo-incubator/scripts
## - you need to add that to the PATH.
## ::
##
## export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$PATH
##
## 1. Set HW resources for VMs used as 'baremetal' nodes. NODE_CPU is cpu count,
## #. Set HW resources for VMs used as 'baremetal' nodes. NODE_CPU is cpu count,
## NODE_MEM is memory (MB), NODE_DISK is disk size (GB), NODE_ARCH is
## architecture (i386, amd64). NODE_ARCH is used also for the seed VM.
## A note on memory sizing: TripleO images in raw form are currently
@@ -147,59 +151,63 @@ export LIBVIRT_DEFAULT_URI=${LIBVIRT_DEFAULT_URI:-"qemu:///system"}
## larger. The hypervisor host in the overcloud also needs to be a decent size
## or it cannot host more than one VM.
##
## 32bit VMs:
## 32bit VMs::
##
## export NODE_CPU=1 NODE_MEM=2048 NODE_DISK=20 NODE_ARCH=i386
##
## For 64bit it is better to create VMs with more memory and storage because of
## increased memory footprint:
## increased memory footprint::
##
## export NODE_CPU=1 NODE_MEM=2048 NODE_DISK=20 NODE_ARCH=amd64
##
## 1. Set distribution used for VMs (fedora, ubuntu).
## #. Set distribution used for VMs (fedora, ubuntu).
## ::
##
## export NODE_DIST=ubuntu
##
## for Fedora set SELinux permissive mode.
## ::
##
## export NODE_DIST="fedora selinux-permissive"
##
## 1. A DHCP driver is used to do DHCP when booting nodes.
## #. A DHCP driver is used to do DHCP when booting nodes.
## The default bm-dnsmasq is deprecated and soon to be replaced by
## neutron-dhcp-agent.
## ::

export DHCP_DRIVER=bm-dnsmasq

## 1. Ensure dependencies are installed and required virsh configuration is
## #. Ensure dependencies are installed and required virsh configuration is
## performed:
##
## ::
## install-dependencies
##
## 1. Clone/update the other needed tools which are not available as packages.
##
## #. Clone/update the other needed tools which are not available as packages.
## ::
## pull-tools
##
## 1. You need to make the tripleo image elements accessible to diskimage-builder:
##
## #. You need to make the tripleo image elements accessible to diskimage-builder:
## ::
## export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-image-elements/elements
##
## 1. Configure a network for your test environment.
## #. Configure a network for your test environment.
## This configures an openvswitch bridge and teaches libvirt about it.
##
## ::
## setup-network
##
## 1. Create a deployment ramdisk + kernel. These are used by the seed cloud and
## #. Create a deployment ramdisk + kernel. These are used by the seed cloud and
## the undercloud for deployment to bare metal.
##
## ::
## $TRIPLEO_ROOT/diskimage-builder/bin/ramdisk-image-create -a $NODE_ARCH \
## $NODE_DIST deploy -o $TRIPLEO_ROOT/deploy-ramdisk
##
## 1. Create and start your seed VM. This script invokes diskimage-builder with
## #. Create and start your seed VM. This script invokes diskimage-builder with
## suitable paths and options to create and start a VM that contains an
## all-in-one OpenStack cloud with the baremetal driver enabled, and
## preconfigures it for a development environment. Note that the seed has
## minimal variation in it's configuration: the goal is to bootstrap with
## a known-solid config.
## ::

cd $TRIPLEO_ROOT/tripleo-image-elements/elements/seed-stack-config
sed -i "s/\"user\": \"stack\",/\"user\": \"`whoami`\",/" config.json
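The sed invocation above swaps the default `stack` user in the seed's config.json for whoever is running devtest. Its effect can be checked in isolation on a fabricated one-line sample (the path and file contents below are illustrative only, not the real seed-stack-config):

```shell
# Fabricated sample in the style of seed-stack-config/config.json.
echo '{"user": "stack",}' > /tmp/sample-config.json
# Same substitution as the devtest step above.
sed -i "s/\"user\": \"stack\",/\"user\": \"`whoami`\",/" /tmp/sample-config.json
cat /tmp/sample-config.json   # 'stack' is now the invoking user
```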
@@ -213,44 +221,50 @@ boot-seed-vm -a $NODE_ARCH $NODE_DIST bm-dnsmasq
## you can log into it with 'ssh root@192.0.2.1'.
##
## The IP address of the VM is printed out at the end of boot-elements, or
## you can use the get-vm-ip script:
## you can use the get-vm-ip script::
##
## export SEED_IP=`get-vm-ip seed`
##
## 1. Add a route to the baremetal bridge via the seed node (we do this so that
## #. Add a route to the baremetal bridge via the seed node (we do this so that
## your host is isolated from the networking of the test environment.
## ::
##
## # These are not persistent, if you reboot, re-run them.
## sudo ip route del 192.0.2.0/24 dev virbr0 || true
## sudo ip route add 192.0.2.0/24 dev virbr0 via $SEED_IP
##
## 1. Mask the SEED_IP out of your proxy settings
## #. Mask the SEED_IP out of your proxy settings
## ::
##
## export no_proxy=$no_proxy,192.0.2.1,$SEED_IP
##
## 1. If you downloaded a pre-built seed image you will need to log into it
## and customise the configuration within it. See footnote [1].)
## #. If you downloaded a pre-built seed image you will need to log into it
## and customise the configuration within it. See footnote [#f1]_.)
##
## 1. Setup a prompt clue so you can tell what cloud you have configured.
## #. Setup a prompt clue so you can tell what cloud you have configured.
## (Do this once).
## ::

source $TRIPLEO_ROOT/tripleo-incubator/cloudprompt

## 1. Source the client configuration for the seed cloud.
## #. Source the client configuration for the seed cloud.
## ::
##
## source $TRIPLEO_ROOT/tripleo-incubator/seedrc
##
## 1. Create some 'baremetal' node(s) out of KVM virtual machines and collect
## #. Create some 'baremetal' node(s) out of KVM virtual machines and collect
## their MAC addresses.
## Nova will PXE boot these VMs as though they were physical hardware.
## If you want to create the VMs yourself, see footnote [2] for details on
## If you want to create the VMs yourself, see footnote [#f2]_ for details on
## their requirements. The parameter to create-nodes is VM count.
## ::
##
## export MACS=$(create-nodes $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH 3)
##
## If you need to collect MAC addresses separately, see scripts/get-vm-mac.
##
## 1. Perform setup of your seed cloud.
## #. Perform setup of your seed cloud.
## ::
##
## init-keystone -p unset unset 192.0.2.1 admin@example.com root@192.0.2.1
## setup-endpoints 192.0.2.1 --glance-password unset --heat-password unset --neutron-password unset --nova-password unset
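Several steps in this walkthrough, including the SEED_IP masking step above, extend no_proxy by appending comma-separated addresses. The pattern can be sanity-checked with fabricated values (192.0.2.17 is made up for illustration; devtest gets the real address from get-vm-ip):

```shell
no_proxy=localhost          # pre-existing proxy exclusions, fabricated
SEED_IP=192.0.2.17          # fabricated seed address
# Same append pattern as the masking step above.
export no_proxy=$no_proxy,192.0.2.1,$SEED_IP
echo "$no_proxy"            # localhost,192.0.2.1,192.0.2.17
```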
@@ -259,23 +273,26 @@ source $TRIPLEO_ROOT/tripleo-incubator/cloudprompt
## setup-baremetal $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH seed
## setup-neutron 192.0.2.2 192.0.2.3 192.0.2.0/24 192.0.2.1 ctlplane
##
## 1. Allow the VirtualPowerManager to ssh into your host machine to power on vms:
## #. Allow the VirtualPowerManager to ssh into your host machine to power on vms:
## ::
##
## ssh root@192.0.2.1 "cat /opt/stack/boot-stack/virtual-power-key.pub" >> ~/.ssh/authorized_keys
##
## 1. Create your undercloud image. This is the image that the seed nova
## #. Create your undercloud image. This is the image that the seed nova
## will deploy to become the baremetal undercloud. Note that stackuser is only
## there for debugging support - it is not suitable for a production network.
## ::

$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/undercloud \
boot-stack nova-baremetal os-collect-config stackuser $DHCP_DRIVER

## 1. Load the undercloud image into Glance:
## #. Load the undercloud image into Glance:
## ::
##
## load-image $TRIPLEO_ROOT/undercloud.qcow2
##
## 1. Create secrets for the cloud. The secrets will be written to a file
## #. Create secrets for the cloud. The secrets will be written to a file
## (tripleo-passwords by default) that you need to source into your shell
## environment. Note that you can also make or change these later and
## update the heat stack definition to inject them - as long as you also
@@ -284,11 +301,12 @@ $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
## service will be down. Instead consider adding a new service account and
## changing everything across to it, then deleting the old account after
## the cluster is updated.
## ::

setup-passwords
source tripleo-passwords

## 1. Deploy an undercloud:
## #. Deploy an undercloud::

heat stack-create -f $TRIPLEO_ROOT/tripleo-heat-templates/undercloud-vm.yaml \
-P "PowerUserName=$(whoami);AdminToken=${UNDERCLOUD_ADMIN_TOKEN};AdminPassword=${UNDERCLOUD_ADMIN_PASSWORD};GlancePassword=${UNDERCLOUD_GLANCE_PASSWORD};HeatPassword=${UNDERCLOUD_HEAT_PASSWORD};NeutronPassword=${UNDERCLOUD_NEUTRON_PASSWORD};NovaPassword=${UNDERCLOUD_NOVA_PASSWORD};BaremetalArch=${NODE_ARCH}" \
@@ -298,20 +316,24 @@ heat stack-create -f $TRIPLEO_ROOT/tripleo-heat-templates/undercloud-vm.yaml \
## boot/deploy process. After the deploy is complete, it will reboot into the
## image.
##
## 1. Get the undercloud IP from 'nova list'
## #. Get the undercloud IP from 'nova list'
## ::
##
## export UNDERCLOUD_IP=$(nova list | grep ctlplane | sed -e "s/.*=\\([0-9.]*\\).*/\1/")
## ssh-keygen -R $UNDERCLOUD_IP
##
## 1. Source the undercloud configuration:
## #. Source the undercloud configuration:
## ::
##
## source $TRIPLEO_ROOT/tripleo-incubator/undercloudrc
##
## 1. Exclude the undercloud from proxies:
## #. Exclude the undercloud from proxies:
## ::
##
## export no_proxy=$no_proxy,$UNDERCLOUD_IP
##
## 1. Perform setup of your undercloud.
## #. Perform setup of your undercloud.
## ::

init-keystone -p $UNDERCLOUD_ADMIN_PASSWORD $UNDERCLOUD_ADMIN_TOKEN \
$UNDERCLOUD_IP admin@example.com heat-admin@$UNDERCLOUD_IP
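The UNDERCLOUD_IP step above pipes `nova list` output through grep and sed to keep only the address after `ctlplane=`. The sed expression can be exercised on its own against a fabricated table row (the row text is made up for illustration):

```shell
# A fabricated 'nova list' table row, for illustration only.
row="| 4f1e3d | undercloud | ACTIVE | ctlplane=192.0.2.21 |"
# Same extraction as the devtest step: strip everything but the IP after '='.
ip=$(echo "$row" | grep ctlplane | sed -e "s/.*=\([0-9.]*\).*/\1/")
echo "$ip"   # 192.0.2.21
```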
@@ -324,40 +346,46 @@ user-config
setup-baremetal $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH undercloud
setup-neutron 192.0.2.5 192.0.2.24 192.0.2.0/24 $UNDERCLOUD_IP ctlplane

## 1. Allow the VirtualPowerManager to ssh into your host machine to power on vms:
## #. Allow the VirtualPowerManager to ssh into your host machine to power on vms:
## ::
##
## ssh heat-admin@$UNDERCLOUD_IP "cat /opt/stack/boot-stack/virtual-power-key.pub" >> ~/.ssh/authorized_keys
##
## 1. Create your overcloud control plane image. This is the image the undercloud
## #. Create your overcloud control plane image. This is the image the undercloud
## will deploy to become the KVM (or Xen etc) cloud control plane. Note that
## stackuser is only there for debugging support - it is not suitable for a
## production network.
## ::
##
## $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
## -a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-control \
## boot-stack cinder os-collect-config neutron-network-node stackuser
##
## 1. Load the image into Glance:
## #. Load the image into Glance:
## ::
##
## load-image $TRIPLEO_ROOT/overcloud-control.qcow2
##
## 1. Create your overcloud compute image. This is the image the undercloud
## #. Create your overcloud compute image. This is the image the undercloud
## deploys to host KVM instances. Note that stackuser is only there for
## debugging support - it is not suitable for a production network.
## ::
##
## $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
## -a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-compute \
## nova-compute nova-kvm neutron-openvswitch-agent os-collect-config stackuser
##
## 1. Load the image into Glance:
## #. Load the image into Glance:
## ::
##
## load-image $TRIPLEO_ROOT/overcloud-compute.qcow2
##
## 1. For running an overcloud in VM's:
## #. For running an overcloud in VM's::
## ::

OVERCLOUD_LIBVIRT_TYPE=${OVERCLOUD_LIBVIRT_TYPE:-";NovaComputeLibvirtType=qemu"}

## 1. Deploy an overcloud:
## #. Deploy an overcloud::

make -C $TRIPLEO_ROOT/tripleo-heat-templates overcloud.yaml
heat stack-create -f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml \
@@ -368,20 +396,22 @@ heat stack-create -f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml \
## boot/deploy process. After the deploy is complete, the machines will reboot
## and be available.
##
## 1. Get the overcloud IP from 'nova list'
## #. Get the overcloud IP from 'nova list'
## ::
##
## export OVERCLOUD_IP=$(nova list | grep notcompute.*ctlplane | sed -e "s/.*=\\([0-9.]*\\).*/\1/")
## ssh-keygen -R $OVERCLOUD_IP
##
## 1. Source the overcloud configuration:
## #. Source the overcloud configuration::
##
## source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
##
## 1. Exclude the undercloud from proxies:
## #. Exclude the undercloud from proxies::
##
## export no_proxy=$no_proxy,$OVERCLOUD_IP
##
## 1. Perform admin setup of your overcloud.
## #. Perform admin setup of your overcloud.
## ::

init-keystone -p $OVERCLOUD_ADMIN_PASSWORD $OVERCLOUD_ADMIN_TOKEN \
$OVERCLOUD_IP admin@example.com heat-admin@$OVERCLOUD_IP
@@ -394,37 +424,44 @@ keystone role-create --name heat_stack_user
user-config
setup-neutron "" "" 10.0.0.0/8 "" "" 192.0.2.45 192.0.2.64 192.0.2.0/24

## 1. If you want a demo user in your overcloud (probably a good idea).
## #. If you want a demo user in your overcloud (probably a good idea).
## ::
##
## os-adduser -p $OVERCLOUD_DEMO_PASSWORD demo demo@example.com
##
## 1. Workaround https://bugs.launchpad.net/diskimage-builder/+bug/1211165.
## #. Workaround https://bugs.launchpad.net/diskimage-builder/+bug/1211165.
## ::
##
## nova flavor-delete m1.tiny
## nova flavor-create m1.tiny 1 512 2 1
##
## 1. Build an end user disk image and register it with glance.
## #. Build an end user disk image and register it with glance.
## ::
##
## $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
## -a $NODE_ARCH -o $TRIPLEO_ROOT/user
## glance image-create --name user --public --disk-format qcow2 \
## --container-format bare --file $TRIPLEO_ROOT/user.qcow2
##
## 1. Log in as a user.
## #. Log in as a user.
## ::
##
## source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc-user
## user-config
##
## 1. Deploy your image.
## #. Deploy your image.
## ::
##
## nova boot --key-name default --flavor m1.tiny --image user demo
##
## 1. Add an external IP for it.
## #. Add an external IP for it.
## ::
##
## PORT=$(neutron port-list -f csv -c id --quote none | tail -n1)
## neutron floatingip-create ext-net --port-id "${PORT//[[:space:]]/}"
##
## 1. And allow network access to it.
## #. And allow network access to it.
## ::
##
## neutron security-group-rule-create default --protocol icmp \
## --direction ingress --port-range-min 8 --port-range-max 8
@@ -434,13 +471,13 @@ setup-neutron "" "" 10.0.0.0/8 "" "" 192.0.2.45 192.0.2.64 192.0.2.0/24
## The End!
##
##
## Footnotes
## =========
## .. rubric:: Footnotes
##
## * [1] Customize a downloaded seed image.
## .. [#f1] Customize a downloaded seed image.
##
## If you downloaded your seed VM image, you may need to configure it.
## Setup a network proxy, if you have one (e.g. 192.168.2.1 port 8080)
## If you downloaded your seed VM image, you may need to configure it.
## Setup a network proxy, if you have one (e.g. 192.168.2.1 port 8080)
## ::
##
## # Run within the image!
## echo << EOF >> ~/.profile
@@ -448,71 +485,80 @@ setup-neutron "" "" 10.0.0.0/8 "" "" 192.0.2.45 192.0.2.64 192.0.2.0/24
## export http_proxy=http://192.168.2.1:8080/
## EOF
##
## Add an ~/.ssh/authorized_keys file. The image rejects password authentication
## for security, so you will need to ssh out from the VM console. Even if you
## don't copy your authorized_keys in, you will still need to ensure that
## /home/stack/.ssh/authorized_keys on your seed node has some kind of
## public SSH key in it, or the openstack configuration scripts will error.
## Add an ~/.ssh/authorized_keys file. The image rejects password authentication
## for security, so you will need to ssh out from the VM console. Even if you
## don't copy your authorized_keys in, you will still need to ensure that
## /home/stack/.ssh/authorized_keys on your seed node has some kind of
## public SSH key in it, or the openstack configuration scripts will error.
##
## You can log into the console using the username 'stack' password 'stack'.
## You can log into the console using the username 'stack' password 'stack'.
##
## * [2] Requirements for the "baremetal node" VMs
## .. [#f2] Requirements for the "baremetal node" VMs
##
## If you don't use create-nodes, but want to create your own VMs, here are some
## suggestions for what they should look like.
## - each VM should have 1 NIC
## - eth0 should be on brbm
## - record the MAC addresses for the NIC of each VM.
## - give each VM no less than 2GB of disk, and ideally give them
## If you don't use create-nodes, but want to create your own VMs, here are some
## suggestions for what they should look like.
##
## * each VM should have 1 NIC
## * eth0 should be on brbm
## * record the MAC addresses for the NIC of each VM.
## * give each VM no less than 2GB of disk, and ideally give them
## more than NODE_DISK, which defaults to 20GB
## - 1GB RAM is probably enough (512MB is not enough to run an all-in-one
## * 1GB RAM is probably enough (512MB is not enough to run an all-in-one
## OpenStack), and 768M isn't enough to do repeated deploys with.
## - if using KVM, specify that you will install the virtual machine via PXE.
## * if using KVM, specify that you will install the virtual machine via PXE.
## This will avoid KVM prompting for a disk image or installation media.
##
## * [3] Setting Up Squid Proxy
## .. [#f3] Setting Up Squid Proxy
##
## - Install squid proxy: `apt-get install squid`
## - Set `/etc/squid3/squid.conf` to the following:
## <pre><code>
## acl manager proto cache_object
## acl localhost src 127.0.0.1/32 ::1
## acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
## acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
## acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
## acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
## acl SSL_ports port 443
## acl Safe_ports port 80 # http
## acl Safe_ports port 21 # ftp
## acl Safe_ports port 443 # https
## acl Safe_ports port 70 # gopher
## acl Safe_ports port 210 # wais
## acl Safe_ports port 1025-65535 # unregistered ports
## acl Safe_ports port 280 # http-mgmt
## acl Safe_ports port 488 # gss-http
## acl Safe_ports port 591 # filemaker
## acl Safe_ports port 777 # multiling http
## acl CONNECT method CONNECT
## http_access allow manager localhost
## http_access deny manager
## http_access deny !Safe_ports
## http_access deny CONNECT !SSL_ports
## http_access allow localnet
## http_access allow localhost
## http_access deny all
## http_port 3128
## cache_dir aufs /var/spool/squid3 5000 24 256
## maximum_object_size 1024 MB
## coredump_dir /var/spool/squid3
## refresh_pattern ^ftp: 1440 20% 10080
## refresh_pattern ^gopher: 1440 0% 1440
## refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
## refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
## refresh_pattern . 0 20% 4320
## refresh_all_ims on
## </pre></code>
## * Install squid proxy
## ::
## apt-get install squid
##
## - Restart squid: `sudo service squid3 restart`
## - Set http_proxy environment variable: `http_proxy=http://your_ip_or_localhost:3128/`
## * Set `/etc/squid3/squid.conf` to the following
## ::
##
## acl manager proto cache_object
## acl localhost src 127.0.0.1/32 ::1
## acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
## acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
## acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
## acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
## acl SSL_ports port 443
## acl Safe_ports port 80 # http
## acl Safe_ports port 21 # ftp
## acl Safe_ports port 443 # https
## acl Safe_ports port 70 # gopher
## acl Safe_ports port 210 # wais
## acl Safe_ports port 1025-65535 # unregistered ports
## acl Safe_ports port 280 # http-mgmt
## acl Safe_ports port 488 # gss-http
## acl Safe_ports port 591 # filemaker
## acl Safe_ports port 777 # multiling http
## acl CONNECT method CONNECT
## http_access allow manager localhost
## http_access deny manager
## http_access deny !Safe_ports
## http_access deny CONNECT !SSL_ports
## http_access allow localnet
## http_access allow localhost
## http_access deny all
## http_port 3128
## cache_dir aufs /var/spool/squid3 5000 24 256
## maximum_object_size 1024 MB
## coredump_dir /var/spool/squid3
## refresh_pattern ^ftp: 1440 20% 10080
## refresh_pattern ^gopher: 1440 0% 1440
## refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
## refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
## refresh_pattern . 0 20% 4320
## refresh_all_ims on
##
## * Restart squid
## ::
## sudo service squid3 restart
##
## * Set http_proxy environment variable
## ::
## http_proxy=http://your_ip_or_localhost:3128/
##
### --end
@@ -26,7 +26,7 @@ function show_options () {
echo
echo "Extract documentation from our demonstration scripts."
echo
echo "This will create devtest.md from devtest.sh."
echo "This will create devtest.rst from devtest.sh."
echo
echo "Options:"
echo " -h -- Show this help screen."

@@ -38,7 +38,7 @@
} else if (line != "") {
line = " "line
}
print line > "devtest.md"
print line > "doc/source/devtest.rst"
}
}

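Only a fragment of the extract-docs awk program appears in the hunk above, but its core idea is visible: lines starting with `## ` become prose, and other non-empty lines become indented literal code for RST. A minimal sketch of that logic (the sample input lines and the awk program here are illustrative, not the real script):

```shell
# Sketch of the extract-docs transformation on two fabricated input lines.
printf '%s\n' '## Deploy an undercloud::' 'heat stack-create foo' |
  awk '{
    if (sub(/^## /, ""))      # prose line: drop the comment marker
      line = $0
    else if ($0 != "")        # code line: indent it for an RST literal block
      line = "    " $0
    else
      line = ""
    print line
  }'
# prints:
#   Deploy an undercloud::
#       heat stack-create foo
```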
24
setup.cfg
Normal file
@@ -0,0 +1,24 @@
[metadata]
name = tripleo-incubator
author = OpenStack
author-email = openstack-dev@lists.openstack.org
summary = Incubator for TripleO
description-file =
    README.md
home-page = http://git.openstack.org/cgit/openstack/tripleo-incubator
classifier =
    Environment :: OpenStack
    Intended Audience :: Developers
    Intended Audience :: Information Technology
    License :: OSI Approved :: Apache Software License
    Operating System :: OS Independent

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
22
setup.py
Normal file
@@ -0,0 +1,22 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)
2
test-requirements.txt
Normal file
@@ -0,0 +1,2 @@
oslo.sphinx
sphinx>=1.1.2
19
tox.ini
Normal file
@@ -0,0 +1,19 @@
[tox]
minversion = 1.6
skipsdist = True

[testenv]
usedevelop = True
install_command = pip install {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
         LANG=en_US.UTF-8
         LANGUAGE=en_US:en
         LC_ALL=C
deps = -r{toxinidir}/test-requirements.txt

[testenv:venv]
commands = {posargs}

[flake8]
ignore = H803
exclude = .tox