Change I76b1099bf0cf3bfead17f96e456cdce87d0e8a49 altered the name of
the inventory script, so reflect that in the corresponding
subprocess call in launch-node.py and a comment in the
expand-groups.sh script.
Change-Id: I4c2c762716813b5d59dcc1b623f5988c8aa7d490
The dns.py file uses openstack.connect to make the Connection, but
launch_node.py was still using shade.OpenStackCloud, so when the
connection was passed to dns.py, dns.py tried to use an SDK property
on what was actually a Shade object.
This is because while the sdk has been updated with all of the shade
functionality, we haven't yet updated shade to return the sdk version
of these objects, so objects that come from the sdk have things that
objects from shade don't yet have.
Update launch_node.py to use the same Connection construction that
dns.py does.
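A minimal sketch of the difference (the cloud and region names here
are placeholders):

    # before: a Shade object, missing the newer sdk attributes
    #   import shade
    #   cloud = shade.OpenStackCloud(cloud_config=cloud_config)
    # after: the same Connection construction dns.py uses
    import openstack
    cloud = openstack.connect(cloud='example-cloud', region_name='RegionOne')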
Change-Id: I1c6bfe54f94effe0e592280ba179f61a6d983e7a
Shade no longer uses novaclient. Shade also strips links dicts from
the resources it returns, and it now depends on openstacksdk, which
does not strip links dicts.
Change-Id: Ifb6a8280e548cb55932cae4a2bba8e1fa5b34c3c
virtualenv on the puppetmaster is defaulting to python3 now, but all
our dev header files for python are for python2. Force python2 when
creating the virtualenv so that pycrypto can be built. Additionally,
ansible likely wants python2 here anyway.
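For illustration, roughly what forcing the interpreter looks like
(the venv path here is hypothetical):

    import subprocess

    # -p selects the interpreter used to seed the virtualenv
    subprocess.check_call(
        ['virtualenv', '-p', '/usr/bin/python2', '/opt/ansible-env'])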
Change-Id: I19bc1985fc4b6a722b10fb0b89a86127e27340fe
We're now launching xenial for control servers, so let's update the
defaults.
Change-Id: I14dc26673c290ae37b7a9ef016d7a343d2763efe
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
eth0 might not exist, such as on Xenial hosts with interface-based
names. Since this is a bit of a platform/provider-specific hack, just
ignore failures.
Change-Id: Ie18b7f49ea2f1b72b496c61ac2576ae53f5ad3eb
os-client-config will construct a cloud called "envvars" if the
environment has environment variables that start with OS_ and are not
OS_CLOUD or OS_REGION_NAME (those are singled out because they are
selectors). The convenience variable in our example code snippet here
is an OS_ var that is neither of those, so it causes the environment
to produce an invalid cloud config, which then confuses the ansible
inventory as it tries to iterate over all the clouds that exist.
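A sketch of the failure mode, using a hypothetical convenience
variable:

    import os
    import os_client_config

    os.environ['OS_EXAMPLE'] = 'foo'  # any OS_* var besides the selectors
    config = os_client_config.OpenStackConfig()
    # an incomplete "envvars" cloud now shows up alongside the real ones
    print([c.name for c in config.get_all_clouds()])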
Change-Id: I65324bc2f3ca71dd4ada2f39f322ccc5f13d6897
This way we are able to stream the output from commands as it is
received, for better debugging. We can also move some new debug
statements inside the new run() function so they are more automatic.
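Roughly the shape of the new run() helper (details assumed):

    import subprocess

    def run(cmd):
        print('Running: %s' % ' '.join(cmd))  # automatic debug statement
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        # stream output line by line as it is received
        for line in iter(proc.stdout.readline, b''):
            print(line.rstrip().decode('utf-8', 'replace'))
        proc.stdout.close()
        return proc.wait()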
Change-Id: I484f5cf70aa15923ea4bb866f3be536b2e8ed4ed
One problem with "shell script as python" is that there's no
equivalent of shell's "-x", which makes it really hard to extract
what's being called and where output came from.
This adds a bit more verbose logging around the ssh calls to try and
help someone parsing the logs.
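The idea, sketched (helper and logger names assumed):

    import logging
    import subprocess

    log = logging.getLogger('launch')

    def ssh(host, command):
        log.info('Running on %s: %s', host, command)  # shell "-x" analogue
        out = subprocess.check_output(['ssh', host, command])
        log.info('Output from %s:\n%s', host, out.decode('utf-8', 'replace'))
        return out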
Change-Id: I85e2415b47e044cfa1c678fc7786b4891fa1f93e
Avoid a bunch of warnings about unwritable /var/log/ansible.log (the
default) by setting the log path environment variable where we call
ansible.
Note expand-groups.sh is moved inside the JobDir() context so we can
use the environment var there too, as it calls ansible underneath.
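ANSIBLE_LOG_PATH is the variable in question; a sketch of wiring it
up (the paths and playbook name are placeholders):

    import os
    import subprocess

    env = os.environ.copy()
    env['ANSIBLE_LOG_PATH'] = '/tmp/jobdir/ansible.log'  # inside the JobDir
    subprocess.check_call(['ansible-playbook', 'launch/setup.yml'], env=env)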
Change-Id: I575d633a36db8cfb891c8903a7bfbea73a4cfb29
Save the key to a file in /tmp when failing early with --keep.
Although it is put into the JobDir later, if we fail before that we're
locked out of the host.
While we're here, make the output in the error case a little clearer
about what just happened.
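A sketch of the early-failure path (naming assumed):

    import os
    import tempfile

    def save_key(private_key):
        # write the key somewhere durable before the failure locks us out
        fd, path = tempfile.mkstemp(prefix='launch-key-', dir='/tmp')
        with os.fdopen(fd, 'w') as f:
            f.write(private_key)
        return path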
Change-Id: Ide601e2018302664bc4ad609c4483aa1451b3724
RAX nodes are exhibiting new behaviour of having ipv6 configured but
not active. Restart eth0 to pick up the address in
/etc/network/interfaces so the ping6 checks work.
Change-Id: I6b60bde34cc28ca60c5cbbb41de02cd89354cc32
There are potentially two related issues here which can result in
an empty generated groups file. The first is that if there are OS_
environment variables set, then os-client-config can create an
'envvars' cloud. That cloud, in most cases here, will not be a valid
cloud since it won't be a full config, so iterating over all existing
clouds to get their servers will fail, meaning the inventory will be
empty, and the groups file will in turn be generated empty.
To deal with that, we can consume the newer upstream option that allows
the inventory to not bail out if it has a bad cloud, but instead get all
of the resources from the clouds that do work.
Additionally though, we can do an explicit inventory run so that we can
look to see if the inventory run failed, and if so, avoid running the
expand-groups.sh script, since we'd be fairly assured that it would be
running on top of a bad inventory cache.
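In outline, the guard looks something like this (the inventory script
path is a placeholder):

    import subprocess

    # refresh the inventory explicitly so a failure is visible here
    ret = subprocess.call(['/etc/ansible/hosts/openstack', '--list'])
    if ret == 0:
        subprocess.check_call(['./expand-groups.sh'])
    else:
        print('Inventory run failed; skipping expand-groups.sh')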
Change-Id: Ib18987b3083f6addc61934b435d7ecb14aa1d25a
For --config-drive to actually work as advertised in launch-node.py,
it needs to default to False. Otherwise this option is useless.
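With argparse that means a store_true option, so a config drive is
purely opt-in (a sketch):

    import argparse

    parser = argparse.ArgumentParser()
    # default=False: only request a config drive when explicitly asked
    parser.add_argument('--config-drive', dest='config_drive',
                        action='store_true', default=False,
                        help='boot the server with a config drive')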
Change-Id: Ib29fa758779e89d3d25399615fd009b836dda598
When launching a server, ansible needs to know what groups the
new host is in so that it can copy the appropriate files. Figuring
that out is done based on the groups.txt file and the expand-groups
script. This change runs that script after creating a host, which
will update the global list of expanded groups. That is then
symlinked into a temporary inventory directory used by launch-node.
The JobDir concept is borrowed from Zuul as a simple way of creating
and deleting at the appropriate time a complex temporary directory.
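The shape of the idea, as a sketch (attribute names assumed):

    import shutil
    import tempfile

    class JobDir(object):
        # a temporary work dir that deletes itself when the job is done
        def __init__(self, keep=False):
            self.keep = keep
            self.root = tempfile.mkdtemp()

        def __enter__(self):
            return self

        def __exit__(self, etype, value, tb):
            if not self.keep:
                shutil.rmtree(self.root)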
Change-Id: Icce083ca67a3473b7d77401142f870fd28dd08f5
As discussed during the "Launch Node, Ansible and Puppet" summit
session in Austin, we're making things unnecessarily hard on
ourselves by insisting on having multiple servers in our inventory
with the same name. In order to make server addition and replacement
automation simpler, start using an ordinal suffix on server short
names to differentiate them (we can still easily rely on DNS for
their non-numbered convenience names).
Change-Id: I040a5c3b5e1abc50c3e4676bcab0bf4eaa550f4b
We can only get the volume attach device if we are attaching a volume.
Check if the volume is being attached and only determine the attachment
location in that case to avoid errors.
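Sketched out, assuming shade's attach_volume returns an attachment
with a 'device' key:

    def get_attach_device(cloud, server, volume):
        # only resolve the attach device when a volume was requested
        if volume is None:
            return None
        attachment = cloud.attach_volume(server, volume, wait=True)
        return attachment['device']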
Story: 2000569
Change-Id: I4adc5e23abdfc0627a0850f845e2333d3bd25e63
Now that we have a shade version of the launch node script, adding in
support for attaching a cinder volume is simple. Do this so that
launching mirrors which rely on cinder volumes is simpler.
This updates the mount_volume.sh script to set up the first cinder
volume with lvm and mount it under the specified path. It will also
install lvm2 packages since they may not be present on all base
images.
This updates the make_swap.sh script to avoid blindly using /dev/vdb as
the location for swap as this may be a cinder volume or config drive.
We add availability zone, device specification, mount path, and
fs label support to shade-launch-node.py as these are all necessary
inputs to properly mount a cinder volume in a VM.
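Sketched as argparse additions (exact option names assumed):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--volume', help='cinder volume to attach')
    parser.add_argument('--availability-zone',
                        help='availability zone to boot the server in')
    parser.add_argument('--device',
                        help='device path for the volume attachment')
    parser.add_argument('--mount-path',
                        help='where mount_volume.sh should mount the volume')
    parser.add_argument('--fs-label',
                        help='filesystem label to use for the volume')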
Change-Id: Ie95fd4bd5fca8df4f8046d43d1333935cad567e3
With Ansible-based launch-node.py now using clouds.yaml, we're no
longer setting up the environment variables expected by the rackdns
utility for setting PTR resource records. Add a line to the example
commands sourcing the appropriate variables into the environment
first.
Change-Id: Ia96296a8fac803d514e49eeeedaeebc76585d6fd
There is a bug in OCC that causes an envvars cloud to be created when
the only two env vars are the selectors OS_CLOUD and OS_REGION_NAME. So
exclude them from the environment when running the group creation
command.
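A sketch of the exclusion (the group creation command here is a
placeholder):

    import os
    import subprocess

    # strip the two selector variables so OCC doesn't invent a cloud
    env = dict((k, v) for k, v in os.environ.items()
               if k not in ('OS_CLOUD', 'OS_REGION_NAME'))
    subprocess.check_call(['./expand-groups.sh'], env=env)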
Also, there is a bug in the invocation of the hostname playbook, in that
it was passing in the UUID as the target to run against, but we're
writing out a name-based inventory.
Change-Id: I0b524dc43ec96c6645ae82a090744eab463e7fb9
It looks like we solved the duplicate server problem twice in
conflicting ways. Using uuid in the inventory is not needed, because
we're making a specific inventory for the ansible commands and avoiding
the OpenStack inventory. So the ansible run has no idea of any other
servers other than the one we're making right now. With that, we can use
name as the hostname rather than UUID.
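So the per-run inventory can be as simple as (variable names
assumed):

    def write_inventory(path, name, address):
        # one-host inventory: this ansible run only ever sees this server
        with open(path, 'w') as inv:
            inv.write('%s ansible_host=%s\n' % (name, address))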
Story: 2000520
Change-Id: Idb967e10fc00471923077e4e9caa32fdb4c1cc78
We have a playbook that does the logic of setting the hostname. Rather
than implementing that logic in launch-node, just use the playbook.
Change-Id: I1a6c0ff12803bdac35631cb3bb2c8fe70cbd1904
https://github.com/ansible/ansible/pull/14882 landed, so the inventory
will understand that an empty cache means the inventory needs
refetching. Zero out the file, and start consuming inventory from the
master branch of ansible since mordred controls that file anyway.
Change-Id: I2a4f4b21c50bfa94a229dd109e3d21f47552f0a1
Shade supports all of the TODO items in here now, so use it. Also,
we have ansible playbooks that do the work of running puppet, and
since we're on puppet apply now, we can use them.
Change-Id: I6f57e9a31bf835ef2e22db1f5531d92e99806cf4
Since nova does not believe in the existence of hostnames, we need
to set them ourselves when we boot new servers in launch-node.
Change-Id: Ib318224a09c1b0b748ab31e1ed507975b3190784
Some clouds may not have a metadata service and need to retrieve key
info via config drive. Add a flag to specifically request that a config
drive is provided to the instance booted by nova to facilitate this
information injection.
Change-Id: Ic41df5b34ea67ad62949244e064db82410077453
Created as a new script so that we aren't hosed if this doesn't
work with current providers.
Change-Id: Ia8d35d0acbfb773ca710c9383d9b746e786766bc
Depends-On: I26bc94408441edf067493b7ffd50eebd9dd95e75
We have a set of hostname patterns which is not a thing that ansible
supports in inventory files. While we can put hostname patterns into
playbooks directly, that does not help us with copying hiera group files
since ansible doesn't know about the groups in site.pp and puppet
doesn't know about the ansible groups.
Instead, do a quick expansion any time the groups.txt file changes and
at the end of launch-node. It will be left to admins to run
expand-groups.sh whenever they delete a node.
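The expansion itself is straightforward; for instance, with
fnmatch-style patterns (a simplification of what expand-groups.sh
actually does):

    import fnmatch

    def expand(pattern, hostnames):
        # expand one groups.txt pattern against the known hostnames
        return sorted(h for h in hostnames if fnmatch.fnmatch(h, pattern))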
Change-Id: I00c60748ddb2d35a3b98f78d828dabebcf065118
With the puppetmaster not there anymore, we should consume inventory
from OpenStack rather than from puppet.
It turns out that because of the way static and dynamic inventories get
merged, the static file needs to stand alone. So, if you need to
disable a dynamic host from OpenStack (pretty much all of our hosts),
you need to not only add it to dynamic:children, you need to add an
empty group into the static file too, otherwise you'll get an error
like:
root@puppetmaster:~# ansible -i newinv '!disabled' --list-hosts
ERROR: newinv/static:4: child group is not defined: (jenkins-dev.openstack.org)
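In other words, the static file needs a stub like this so the child
group reference resolves (group names per the error above; the group
is left intentionally empty, since the host itself comes from the
OpenStack dynamic inventory):

    [disabled:children]
    jenkins-dev.openstack.org

    [jenkins-dev.openstack.org]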
Change-Id: Ic6809ed0b7014d7aebd414bf3a342e3a37eb10b6
This is a quick hack to handle public ip addrs on OVH. A better
fix is porting to shade or ansible.
Change-Id: I62bba7215249521307be5b42af1c6f8fc886d982
This parameter is usually not necessary, and over-specifying it
can be problematic because the value is not consistent.
Change-Id: I0a90631499294e7a6eb287f24739cf4884a8db7b
New systemd based distros reboot so quickly that the ssh connection
errors out, returning 255 (or -1 in python, because of signed
integers). Ignore return codes of -1 when rebooting over ssh as a
result. All other return codes will be propagated properly.
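A sketch of the tolerance, assuming a helper that reports the ssh
exit status as a signed byte:

    def reboot(host, ssh_run):
        # ssh returns 255 when the connection drops mid-reboot; the
        # helper reports that as -1, so treat it as success here
        ret = ssh_run(host, 'reboot')
        if ret not in (0, -1):
            raise RuntimeError('reboot failed with status %d' % ret)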
Change-Id: I272f00e9e07f1ed04f2b97d0e1609c6e8d49caf3