Also install the certs and the other clouds.yaml file, so that admins
can run openstackclient, etc., without sudo.
Change-Id: Ib8be3cd0601531284ec5d33cb5024b8363d924ca
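For illustration, with the system-wide clouds.yaml and certs in place an
admin can talk to the cloud directly with no sudo (the cloud name here
is a placeholder):

    openstack --os-cloud infracloud-west server list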
Since nova does not believe in the existence of hostnames, we need
to set them ourselves when we boot new servers in launch-node.
Change-Id: Ib318224a09c1b0b748ab31e1ed507975b3190784
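As a rough sketch of what that amounts to, assuming a systemd host and
root SSH access (the address and name are placeholders; launch-node
does the equivalent from Python):

    # after nova reports the new server's IP, set the hostname ourselves
    ssh root@203.0.113.10 "hostnamectl set-hostname mirror01.example.com"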
We are only deploying West for now, so this playbook only handles West.
When we get East into production, we will update this playbook.
Unfortunately there is no Ansible module or Puppet resource to set
per-project quotas, so we are using the regular shell module in Ansible.
Change-Id: Ib884508bebedc9f88fac242711af98fc0c4d95ec
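The command the shell module ends up running looks roughly like this
(the project name and limits are made up for illustration):

    # set per-project quotas via the CLI, since no native module exists
    openstack --os-cloud infracloud-west quota set \
        --instances 64 --cores 256 --ram 512000 openstackci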
This should be a noop change; we are just moving the settings into
puppet.
Change-Id: Ic533a5fb125125e9791c40312318be79cbbe4826
Depends-On: I1ad6da353c25aed8976806f00cc39d6c3c93e7ae
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Some clouds may not have a metadata service and need to retrieve key
info via config drive. Add a flag to specifically request that a config
drive is provided to the instance booted by nova to facilitate this
information injection.
Change-Id: Ic41df5b34ea67ad62949244e064db82410077453
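A rough CLI equivalent of what launch-node now requests (the image,
flavor and server names are placeholders):

    # ask nova to attach a config drive so the instance can read its
    # metadata from a local disk instead of the metadata service
    openstack server create --config-drive True \
        --image ubuntu-xenial --flavor performance8 \
        --key-name infra-root mirror01.example.com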
It turns out we have had many issues with random servers having the
wrong hostname and /etc/hosts info.
This playbook/role lets us configure both by passing
-e "target=<hostname>" as an ansible-playbook parameter.
Change-Id: I73939ebc65211a840bb41370c22b111112389716
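Usage looks something like this (the playbook and host names are
placeholders):

    ansible-playbook set-hostname.yaml -e "target=mirror01.example.com"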
The documentation we were referencing has been given a more descriptive
path, so let's use that instead of the broken link.
Change-Id: I42224a103cf35f84cf5ff331386ec28e6d84f136
We are installing a cert to trust the infracloud but we were trying to
put it in a dir that does not exist. Put it next to the clouds.yaml in
~nodepool/.config/openstack as that will exist because nodepool consumes
clouds.yaml from there.
Change-Id: I27e1a1d340e9864308c89c660ae014d7110fbe9f
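A sketch of the resulting layout, with a hypothetical CA file name;
clouds.yaml can then reference the cert via its cacert key:

    # drop the CA next to clouds.yaml, which nodepool already reads
    install -m 0644 -o nodepool -g nodepool infracloud-root-ca.crt \
        /home/nodepool/.config/openstack/
    # and point clouds.yaml at it:
    #   clouds:
    #     infracloud-west:
    #       cacert: /home/nodepool/.config/openstack/infracloud-root-ca.crt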
This uses the correct infra domain name, changes domain keys from id to
name, and fixes indentation for various keys.
Change-Id: Ic8a8f67bc2586ca640b8c3e500f6cdad1abf0ebd
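For reference, a minimal sketch of the name-based domain keys in
clouds.yaml (all values here are placeholders):

    clouds:
      infracloud-west:
        auth:
          auth_url: https://keystone.example.com:5000/v3
          user_domain_name: Default
          project_domain_name: Default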
Add definitions for the hpuswest baremetal nodes, plus a script to do a
one-time enroll and deploy of those nodes in bifrost.
Co-Authored-By: Clint Byrum <clint@fewbar.com>
Co-Authored-By: greghaynes <greg@greghaynes.net>
Co-Authored-By: Yolanda Robla <info@ysoft.biz>
Depends-On: I949344c16cf9ee3965b0bc96850eb208ac65b168
Change-Id: I947add7e8e8aa88fe6e881d77fd3278910b3b903
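The one-time run drives something like bifrost's stock dynamic
enroll/deploy playbooks (the inventory path is a placeholder):

    # from a bifrost checkout: enroll the baremetal inventory, then deploy
    export BIFROST_INVENTORY_SOURCE=/path/to/baremetal.json
    ansible-playbook -i inventory/bifrost_inventory.py enroll-dynamic.yaml
    ansible-playbook -i inventory/bifrost_inventory.py deploy-dynamic.yaml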
This adds the infracloud jenkins account credentials to nodepool.
I'm not pleased with the file resource in the node definition, but that
node definition has grown huge and needs a refactor anyway, so we can
clean this up then.
I have verified that the correct keys are in hiera.
Change-Id: Iafca5e86f72321c6aa7bef748ac2b1942539d15f
We have had an all-clouds.yaml file that was not being managed on disk
by puppet. Actually apply it to disk so that the template ends up on the
puppetmaster as expected.
Change-Id: I0136cab7c03b1932be5b24ff2e93ea8adb84c20d