I managed to leave off the "--image" flag for a Xenial host, so the
script created a Bionic host by default. I let that play out, deleted
the host and tried again with the correct image, but the fact cache
still thought the new host was Bionic. Several ansible roles therefore
ran under that assumption, and we ended up with a bad Xenial/Bionic
mashup.
Clear the cache on node launch to avoid this sort of thing again.
I have launched a node with this new option, and it worked.
Change-Id: Ie37f562402bed3846f27fbdd4441b5f4dcec7eb2
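A minimal sketch of the cache clearing described above, assuming a
jsonfile fact cache keyed by hostname under /var/cache/ansible/facts
(the path and helper name are illustrative, not the script's actual
layout):

    import os

    FACT_CACHE = '/var/cache/ansible/facts'  # assumed cache location

    def clear_fact_cache(hostname):
        # Drop any cached facts for this hostname so a relaunched
        # server is re-discovered instead of inheriting stale facts.
        cache_file = os.path.join(FACT_CACHE, hostname)
        if os.path.exists(cache_file):
            os.unlink(cache_file)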
Passing '-i' with the jobdir inventory means we're overriding the
default inventory. That means variables that come from the /etc/ansible
vars, like sysadmins, are missing.
Add the global inventory to the command line for ansible-playbook.
We still have --limit specified via '-l', so we should still only run
on the host in question.
Change-Id: Ia67e65d25a1d961b619aa445303015fd577dee57
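Roughly what the resulting invocation looks like; the global inventory
path and playbook argument here are assumptions for illustration:

    import subprocess

    def run_playbook(jobdir_inventory, host, playbook):
        # Pass both the per-launch jobdir inventory and the global one;
        # --limit keeps the run scoped to the new host while variables
        # from /etc/ansible (e.g. sysadmins) still resolve.
        subprocess.check_call([
            'ansible-playbook',
            '-i', jobdir_inventory,
            '-i', '/etc/ansible/hosts',
            '--limit', host,
            playbook,
        ])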
When we're booting boot-from-volume servers and there are errors,
we leave the root volume around. Clean up after ourselves.
Change-Id: I6341cdbf21d659d043592f92ddf8ecf6be997802
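A rough sketch of the idea using the openstacksdk cloud layer; the
assumption that the root volume shares the server's name is only for
illustration:

    import openstack

    def boot_bfv_server(cloud_name, name, image, flavor, volume_size):
        cloud = openstack.connect(cloud=cloud_name)
        try:
            return cloud.create_server(
                name, image=image, flavor=flavor,
                boot_from_volume=True, volume_size=volume_size,
                wait=True)
        except Exception:
            # Don't leak the root volume if the boot failed.
            volume = cloud.get_volume(name)
            if volume:
                cloud.delete_volume(volume.id)
            raise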
When launching a new server we should make sure that all available
package updates are installed before we reboot the server. This way we
get available security updates applied to things like our kernel.
This change adds a new playbook that runs the unattended-upgrade command
on debuntu servers. Will need to add support for other platforms in a
followup change.
Change-Id: Idc88dc33afdd209c388452493e6a7f5731fa0974
We want to be launching opendev servers more and more now. Update the
launch docs to point out some of the differences with opendev servers.
Additionally point out that we need to update our static inventory file
so that ansible (and puppet) see the new host.
Change-Id: I425377c50007e11aa99cb53f3f5dc3068911ef7f
Some clouds may be a little slower than others at building images, and
overriding the create_server default timeout of 3 minutes (180 seconds)
currently requires hand-editing the script. Add a global timeout option
and use it consistently.
Change-Id: I66032ef929746739d07dca3fd178b8c43bb8174c
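The shape of the option, as a sketch (the 600 second default, flag name
and server details are illustrative):

    import argparse
    import openstack

    parser = argparse.ArgumentParser()
    parser.add_argument('--timeout', dest='timeout', type=int, default=600,
                        help='Seconds to wait for server creation '
                             '(create_server defaults to 180).')
    args = parser.parse_args()

    cloud = openstack.connect(cloud='example-cloud')
    server = cloud.create_server('example-server', image='Ubuntu 18.04',
                                 flavor='8GB', wait=True,
                                 timeout=args.timeout)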
Remove the section on launching nodes in the jenkins tenant. That
never happens.
Remove the bits about groups and sudo, as they aren't relevant
any more.
Remove the unused os_client_config import.
Change-Id: I676bb7450ec80df73b76ee7841f78eadbe179183
os.listdir returns dirents relative to the dir being listed. We need to
give full path to these entries when unlinking them. Do this by joining
the inventory_cache_dir path to each inventory_cache file.
Change-Id: I78376cfa3b2aa92641f2685b08616660f523dfaf
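The fix boils down to the following (the cache path here is an assumed
stand-in for whatever the script actually uses):

    import os

    inventory_cache_dir = '/var/cache/ansible/inventory'  # assumed location

    for cache_file in os.listdir(inventory_cache_dir):
        # os.listdir() returns bare filenames; join them back onto the
        # directory, otherwise unlink() looks in the current working
        # directory instead.
        os.unlink(os.path.join(inventory_cache_dir, cache_file))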
Update the launch node readme and script to use python3 on the new
bridge node; there is no python2 there. Also update ansible to pull in
python3 support, as the version we had been using wasn't happy under
python3.
Change-Id: I6122160eb70eb6b5f299a8adb6478a9046ff1725
Replace launch-node.py with launch-node-ansible.py. Update it to
delete the inventory cache correctly.
Also, update the docs to list Bionic by default rather than Trusty.
Change-Id: Iadda897b7e71dc12c8db4ced120894054169bbb8
The production directory is a relic from the puppet environment concept,
which we do not use. Remove it.
The puppet apply tests run puppet locally, where the production
environment is still needed, so don't update the paths in the
tools/prep-apply.sh.
Depends-On: https://review.openstack.org/592946
Change-Id: I82572cc616e3c994eab38b0de8c3c72cb5ec5413
We want to launch a new bastion host to run ansible on. Because we're
working on the transition to ansible, it seems like being able to do
that without needing puppet would be nice. This gets user management,
base repo setup and whatnot installed. It doesn't remove them from the
existing puppet, nor does it change the way we're calling anything that
currently exists.
Add bridge.openstack.org to the disabled group so that we don't try to
run puppet on it.
Change-Id: I3165423753009c639d9d2e2ed7d9adbe70360932
Change I76b1099bf0cf3bfead17f96e456cdce87d0e8a49 altered the name of
the inventory script, so reflect that in the corresponding
subprocess call in launch-node.py and a comment in the
expand-groups.sh script.
Change-Id: I4c2c762716813b5d59dcc1b623f5988c8aa7d490
The dns.py file uses openstack.connect to make the Connection, but
launch_node.py was still using shade.OpenStackCloud, so when the
connection was passed to dns.py it tried to use an SDK property but
got a shade object.
This is because, while sdk has been updated with all of the shade
functionality, we haven't updated shade yet to provide the sdk version
of the object, so shade objects from sdk have things that shade objects
from shade don't yet have.
Update launch_node.py to use the same Connection construction that
dns.py does.
Change-Id: I1c6bfe54f94effe0e592280ba179f61a6d983e7a
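In other words, build the connection the way dns.py does, for example
(cloud and region names are illustrative):

    import openstack

    # An SDK Connection, not a shade.OpenStackCloud.
    cloud = openstack.connect(cloud='example-cloud',
                              region_name='example-region')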
Shade no longer uses novaclient. It also strips links dicts from the
resources it returns, and it now depends on openstacksdk, which does
not strip links dicts.
Change-Id: Ifb6a8280e548cb55932cae4a2bba8e1fa5b34c3c
When booting servers with --boot-from-volume (vexxhost) it is helpful
to also provide the size of the volume we want to use.
Change-Id: I478e40ba129f267c0d2d5b54e90a6f84716018f0
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
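Illustrative usage of the new option (server name, image, flavor and
size are made up for the example):

    import openstack

    cloud = openstack.connect(cloud='vexxhost')
    server = cloud.create_server(
        'mirror01.example.org', image='Ubuntu 16.04', flavor='v2-highcpu-8',
        boot_from_volume=True,
        volume_size=80,   # GB for the root volume
        wait=True)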
virtualenv on the puppetmaster now defaults to python3, but all our
dev header files for python are for python2. Force python2 when creating
the virtualenv so that pycrypto can be built. Additionally, ansible
likely wants python2 here anyway.
Change-Id: I19bc1985fc4b6a722b10fb0b89a86127e27340fe
We're now launching xenial for control servers; let's update the
defaults.
Change-Id: I14dc26673c290ae37b7a9ef016d7a343d2763efe
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
eth0 might not exist, such as on Xenial hosts with interface-based
names. Since this is a bit of a platform/provider-specific hack, just
ignore failures.
Change-Id: Ie18b7f49ea2f1b72b496c61ac2576ae53f5ad3eb
os-client-config will construct a cloud called "envvars" if the
environment has environment variables that start with OS_ other than
OS_CLOUD and OS_REGION_NAME (those are singled out because they are
selectors). The convenience variable in our example code snippet here
is an OS_ variable that is neither of those, so it causes the
environment to produce an invalid cloud config, which then confuses the
ansible inventory that is trying to iterate over all the clouds that
exist.
Change-Id: I65324bc2f3ca71dd4ada2f39f322ccc5f13d6897
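As a sketch, one general way to avoid this class of problem (not the
docs fix itself) is to strip any non-selector OS_* variables from the
environment before anything that enumerates clouds runs:

    import os

    SELECTORS = ('OS_CLOUD', 'OS_REGION_NAME')
    env = {k: v for k, v in os.environ.items()
           if not k.startswith('OS_') or k in SELECTORS}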
This way we are able to stream the output from commands as it is
received, for better debugging. We can also move some new debug
statements inside the new run() function so they happen automatically.
Change-Id: I484f5cf70aa15923ea4bb866f3be536b2e8ed4ed
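A minimal sketch of such a run() helper (the script's actual
implementation may differ):

    import subprocess

    def run(cmd):
        # Echo the command, then stream its output line by line as it
        # arrives instead of buffering until the process exits.
        print('+ %s' % ' '.join(cmd))
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        for line in iter(proc.stdout.readline, b''):
            print(line.decode('utf-8', 'replace'), end='')
        proc.wait()
        if proc.returncode != 0:
            raise subprocess.CalledProcessError(proc.returncode, cmd)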
One problem with "shell script as python" is that there's no
equivalent of shell's "-x", which makes it really hard to extract
what's being called and where output came from.
This adds a bit more verbose logging around the ssh calls to try and
help someone parsing the logs.
Change-Id: I85e2415b47e044cfa1c678fc7786b4891fa1f93e
Avoid a bunch of warnings about unwritable /var/log/ansible.log (the
default) by setting the log path environment variable where we call
ansible.
Note expand-groups.sh is moved inside the JobDir() context so we can
use the environment var there too, as it calls ansible underneath.
Change-Id: I575d633a36db8cfb891c8903a7bfbea73a4cfb29
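Sketch of the environment handling (the log file name inside the jobdir
is an assumption):

    import os
    import subprocess

    def run_ansible(jobdir, cmd):
        # Point Ansible's log at a writable file inside the job directory
        # so it stops warning about the unwritable /var/log/ansible.log.
        env = os.environ.copy()
        env['ANSIBLE_LOG_PATH'] = os.path.join(jobdir, 'ansible.log')
        subprocess.check_call(cmd, env=env)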
Save the key to a file in /tmp when failing early with --keep.
Although it is put into the JobDir later, if we fail before that we're
locked out of the host.
While we're here, make the output in the error case a little clearer.
Change-Id: Ide601e2018302664bc4ad609c4483aa1451b3724
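Roughly what the early-failure path does, as a sketch (helper and file
names are illustrative):

    import os
    import tempfile

    def save_key_on_failure(private_key, keep):
        # With --keep, if we fail before the key lands in the JobDir,
        # stash it under /tmp so we aren't locked out of the new host.
        if keep:
            fd, path = tempfile.mkstemp(prefix='launch-key-', dir='/tmp')
            with os.fdopen(fd, 'w') as f:
                f.write(private_key)
            print('Private key saved to %s' % path)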
RAX nodes are exhibiting new behaviour of having ipv6 configured but
not active. Restart eth0 to pick up the address in
/etc/network/interfaces so the ping6 checks work.
Change-Id: I6b60bde34cc28ca60c5cbbb41de02cd89354cc32
There are potentially two related issues here which can result in
an empty generated groups file. The first is that if there are OS_
environment variables set, then os-client-config can create an 'envvars'
cloud. That cloud, in most cases here, will not be a valid cloud
since it won't have a full config, so iterating over all existing clouds
to get their servers will fail, meaning the inventory will be empty,
which in turn means the generated groups will be generated empty.
To deal with that, we can consume the newer upstream option that allows
the inventory to not bail out if it has a bad cloud, but instead get all
of the resources from the clouds that do work.
Additionally, we can do an explicit inventory run and check whether it
failed; if so, skip running the expand-groups.sh script, since we'd be
fairly assured that it would otherwise be running on top of a bad
inventory cache.
Change-Id: Ib18987b3083f6addc61934b435d7ecb14aa1d25a
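The second part amounts to something like the following sketch (the
inventory command and script paths are illustrative):

    import subprocess

    ret = subprocess.call(['shade-inventory', '--list'])
    if ret == 0:
        subprocess.check_call(['expand-groups.sh'])
    else:
        print('Inventory run failed; not regenerating groups from a '
              'bad cache.')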
For --config-drive to actually work as advertised in launch-node.py,
it needs to default to False. Otherwise this option is useless.
Change-Id: Ib29fa758779e89d3d25399615fd009b836dda598
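With argparse that is just (flag wiring shown as a sketch):

    import argparse

    parser = argparse.ArgumentParser()
    # store_true leaves the value False unless --config-drive is passed,
    # which is what makes the flag meaningful.
    parser.add_argument('--config-drive', dest='config_drive',
                        action='store_true', default=False,
                        help='Boot the server with a config drive.')
    args = parser.parse_args()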
When launching a server, ansible needs to know what groups the
new host is in so that it can copy the appropriate files. Figuring
that out is done based on the groups.txt file and the expand-groups
script. This change runs that script after creating a host, which
will update the global list of expanded groups. That is then
symlinked into a temporary inventory directory used by launch-node.
The JobDir concept is borrowed from Zuul as a simple way of creating
and deleting at the appropriate time a complex temporary directory.
Change-Id: Icce083ca67a3473b7d77401142f870fd28dd08f5
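The borrowed JobDir looks roughly like this (the generated-groups
symlink target is an assumption for illustration):

    import os
    import shutil
    import tempfile

    class JobDir(object):
        # A temporary directory holding a per-launch inventory that
        # cleans itself up when the context exits.
        def __init__(self, keep=False):
            self.keep = keep
            self.root = tempfile.mkdtemp()
            self.inventory_root = os.path.join(self.root, 'inventory')
            os.makedirs(self.inventory_root)

        def __enter__(self):
            return self

        def __exit__(self, etype, value, tb):
            if not self.keep:
                shutil.rmtree(self.root)

    # Usage: symlink the expanded groups into the temporary inventory.
    # with JobDir() as jobdir:
    #     os.symlink('/etc/ansible/hosts/generated-groups',
    #                os.path.join(jobdir.inventory_root, 'generated-groups'))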
As discussed during the "Launch Node, Ansible and Puppet" summit
session in Austin, we're making things unnecessarily hard on
ourselves by insisting on having multiple servers in our inventory
with the same name. In order to make server addition and replacement
automation simpler, start using an ordinal suffix on server short
names to differentiate them (we can still easily rely on DNS for
their non-numbered convenience names).
Change-Id: I040a5c3b5e1abc50c3e4676bcab0bf4eaa550f4b
We can only get the volume attach device if we are attaching a volume.
Check if the volume is being attached and only determine the attachment
location in that case to avoid errors.
Story: 2000569
Change-Id: I4adc5e23abdfc0627a0850f845e2333d3bd25e63
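The guard is essentially the following sketch (the volume dict field
names are assumptions for illustration):

    def get_attach_device(volume):
        # Only inspect attachments when a volume is actually being
        # attached; otherwise there is nothing to look up.
        if not volume:
            return None
        attachments = volume.get('attachments') or []
        if attachments:
            return attachments[0].get('device')
        return None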
Now that we have a shade version of the launch node script, adding
support for attaching a cinder volume is simple. Do this so that
launching mirrors which rely on cinder volumes is simpler.
This updates the mount_volume.sh script to set up the first cinder
volume with lvm and mount it under the specified path. It will also
install lvm2 packages since they may not be present on all base images.
This updates the make_swap.sh script to avoid blindly using /dev/vdb as
the location for swap as this may be a cinder volume or config drive.
We add availability zone, device specification, mount path, and
fs label support to shade-launch-node.py as these are all necessary
inputs to properly mount a cinder volume in a VM.
Change-Id: Ie95fd4bd5fca8df4f8046d43d1333935cad567e3
With Ansible-based launch-node.py now using clouds.yaml, we're no
longer setting up the environment variables expected by the rackdns
utility for setting PTR resource records. Add a line to the example
commands sourcing the appropriate variables into the environment
first.
Change-Id: Ia96296a8fac803d514e49eeeedaeebc76585d6fd
There is a bug in OCC that causes an envvars cloud to be created when
the only two env vars are the selectors OS_CLOUD and OS_REGION_NAME. So
exclude them from the environment when running the group creation
command.
Also, there is a bug in the invocation of the hostname playbook, in that
it was passing in the UUID as the target to run against, but we're
writing out a name-based inventory.
Change-Id: I0b524dc43ec96c6645ae82a090744eab463e7fb9
It looks like we solved the duplicate server problem twice in
conflicting ways. Using uuid in the inventory is not needed, because
we're making a specific inventory for the ansible commands and avoiding
the OpenStack inventory. So the ansible run has no idea of any servers
other than the one we're making right now. With that, we can use the
name as the hostname rather than the UUID.
Story: 2000520
Change-Id: Idb967e10fc00471923077e4e9caa32fdb4c1cc78