Remove puppet and cron mentions from docs

We've got some old out of date docs in some places. This isn't even
a full reworking, but at least tries to remove some of the more
egregiously wrong things.

Change-Id: I9033acb9572e1ce1b3e4426564b92706a4385dcb
Monty Taylor 2020-04-09 15:27:14 -05:00 committed by James E. Blair
parent 8af7b47812
commit cba5129465
5 changed files with 140 additions and 181 deletions

doc/source/bridge.rst Normal file

@ -0,0 +1,124 @@
:title: Bridge
.. _bridge:
Bridge
######
Bridge is a bastion host that is the starting point for operational tasks in
OpenDev. It is the server from which Ansible is run, and it holds a
centralized database of secure information such as passwords.
The bridge server contains all of the ansible playbooks as well as the
scripts to create new servers.
Many of the systems in OpenDev are still configured using Puppet, although
the trend is to move away from Puppet toward Ansible. For the hosts still
using Puppet, the process is driven by Ansible.
At a Glance
===========
:Projects:
* https://ansible.com/
* https://puppetlabs.com/
:Bugs:
* https://storyboard.openstack.org/#!/project/748
* https://tickets.puppetlabs.com/
:Resources:
* `Puppet Language Reference <https://docs.puppetlabs.com/references/latest/type.html>`_
Ansible Hosts
-------------
In OpenDev, all host configuration is done via ansible playbooks.
Puppet Hosts
------------
For hosts still using puppet, ansible drives the running of ``puppet apply``
on hosts in the inventory in the ``puppet`` group. That process first
copies appropriate ``hiera`` data files to each host.
Hiera uses a systemwide configuration file in ``/etc/puppet/hiera.yaml``.
The hiera configuration is placed by ansible into common.yaml in
``/etc/puppet/hieradata/production`` on each puppet host.
The values are simple key-value pairs in yaml format. The keys needed are the
keys referenced in ``site.pp``; their values are typically obvious
(strings, lists of strings). ``/etc/puppet/hieradata/`` and below should be
owned by ``puppet:puppet`` and have mode ``0711``.
Below the ``hieradata`` directory, there should be a ``common.yaml`` file where
settings that should be available to all servers in the infrastructure go,
and then two directories full of files. The first is ``fqdn`` which should
contain a yaml file for every server in the infrastructure named
``${fqdn_of_server}.yaml``. That file has secrets that are only for that
server. Additionally, some servers can have a ``$group`` defined in
``manifests/site.pp``. There can be a correspondingly named yaml file in the
``group`` directory that contains secrets to be made available to each
server in the group.
All of the actual yaml files should have mode 0600 and be owned by root.
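As a concrete illustration, a minimal sketch of creating that layout by hand
(the host and group names here are hypothetical; in practice Ansible manages
these files):

.. code-block:: bash

   # Directories under hieradata are owned by puppet:puppet with mode 0711
   install -d -o puppet -g puppet -m 0711 \
       /etc/puppet/hieradata/production/fqdn \
       /etc/puppet/hieradata/production/group

   # Per-host secrets live in fqdn/${fqdn_of_server}.yaml (hypothetical host)
   touch /etc/puppet/hieradata/production/fqdn/review99.opendev.org.yaml

   # Group-wide secrets live in group/$group.yaml (hypothetical group "afs")
   touch /etc/puppet/hieradata/production/group/afs.yaml

   # The yaml files themselves are root-owned with mode 0600
   chown root:root /etc/puppet/hieradata/production/fqdn/review99.opendev.org.yaml
   chmod 0600 /etc/puppet/hieradata/production/fqdn/review99.opendev.org.yaml
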
Adding a node
-------------
Adding a new node should be done using the
``/opt/system-config/launch/launch-node.py`` script
(see :git_file:`launch/README.rst` for full details). If the host is put into
the puppet group in the Ansible inventory, puppet will be run on the host.
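For illustration, a hedged sketch of launching a server from the bridge; the
FQDN is hypothetical and the script's options are described in
:git_file:`launch/README.rst`:

.. code-block:: bash

   cd /opt/system-config/launch
   # Hypothetical hostname; supply the options documented in
   # launch/README.rst before running this for real.
   ./launch-node.py review99.opendev.org
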
.. _running-ansible-on-nodes:
Running Ansible on Nodes
------------------------
Each service that has been migrated fully to Ansible has its own playbook in
:git_file:`playbooks` named ``service-{ service_name }.yaml``.
Because the playbooks are normally run by zuul, to run them manually, first
touch the file ``/home/zuul/DISABLE-ANSIBLE``. Then make sure no jobs are
currently executing ansible. Ensure that ``/home/zuul/src/opendev.org/opendev/system-config``
and ``/home/zuul/src/opendev.org/openstack/project-config`` are in the
appropriate states, then run:
.. code-block:: bash

   cd /home/zuul/src/opendev.org/opendev/system-config
   ansible-playbook --limit="$HOST:localhost" playbooks/service-$SERVICE.yaml
as root, where `$HOST` is the host you want to run the playbook on.
The `:localhost` is important as some of the plays depend on performing a task
on the localhost before continuing to the host in question, and without it in
the limit section, the tasks for the host will have undefined values.
When done, don't forget to remove ``/home/zuul/DISABLE-ANSIBLE``.
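Putting those steps together, a minimal sketch of a manual service run
(``$HOST`` and ``$SERVICE`` are placeholders you fill in):

.. code-block:: bash

   # Keep zuul-triggered runs from starting while working by hand
   touch /home/zuul/DISABLE-ANSIBLE

   # ...wait for any in-flight runs to finish and check the repo states...
   cd /home/zuul/src/opendev.org/opendev/system-config
   ansible-playbook --limit="$HOST:localhost" playbooks/service-$SERVICE.yaml

   # Re-enable the zuul-triggered runs when finished
   rm /home/zuul/DISABLE-ANSIBLE
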
Running Puppet on Nodes
-----------------------
In OpenDev, puppet is run by Ansible from a Zuul job running on bridge,
which performs a single run of ``puppet apply`` on each host that still uses
puppet. The entry point for this process is
:git_file:`playbooks/remote_puppet_else.yaml`.
If an admin needs to run puppet by hand, it's a simple matter of following the
instructions in :ref:`running-ansible-on-nodes` but using ``playbooks/remote_puppet_else.yaml``
as the playbook.
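For example, after touching the ``DISABLE-ANSIBLE`` file as described above, a
single puppet host could be targeted with something like:

.. code-block:: bash

   cd /home/zuul/src/opendev.org/opendev/system-config
   ansible-playbook --limit="$HOST:localhost" playbooks/remote_puppet_else.yaml
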
Alternatively, if local iteration is desired, it's possible to log in to the
server in question and run `puppet apply /opt/system-config/manifests/site.pp`.
There is also a script, `tools/kick.sh`, that takes the host as an argument
and runs the above command.
Testing new puppet code can be done via `puppet apply --noop` or by
constructing a VM with a puppet install in it and just running `puppet apply`
on the code in question. This should actually make it fairly easy to test
how production works in a more self-contained manner.
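For instance, a dry run on the host itself, or a one-host run of the kick
script from the bridge checkout, might look like this (``$HOST`` is a
placeholder; paths assume the ``/opt/system-config`` checkout referenced
above):

.. code-block:: bash

   # On the host: show what puppet would change without applying it
   puppet apply --noop /opt/system-config/manifests/site.pp

   # On the bridge: kick a single host by name (runs the command described above)
   cd /opt/system-config
   tools/kick.sh $HOST
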
Disabling Ansible on Nodes
--------------------------
In the case of needing to disable the running of ansible on a node, it's a
simple matter of adding an entry to the ansible inventory "disabled" group.
See the :ref:`disable-enable-ansible` section for more details.


@ -153,7 +153,7 @@ An example is::
Ansible
=======
We have mqtt events emitted from ansible being run on :ref:`puppet-master`.
We have mqtt events emitted from ansible being run on :ref:`bridge`.
These events are generated using a `MQTT Ansible Callback Plugin`_.
.. _MQTT Ansible Callback Plugin: https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/files/puppetmaster/mqtt.py


@ -1,141 +0,0 @@
:title: Puppet Master
.. _puppet-master:
Puppet Master
#############
The puppetmaster server is named puppetmaster for historical reasons - it
no longer runs a puppetmaster process. There is a centralized 'hiera'
database that contains secure information such as passwords. The puppetmaster
server contains all of the ansible playbooks to run puppet apply
as well as the scripts to create new servers.
At a Glance
===========
:Projects:
* https://puppetlabs.com/
:Bugs:
* https://storyboard.openstack.org/#!/project/748
* https://tickets.puppetlabs.com/
:Resources:
* `Puppet Language Reference <https://docs.puppetlabs.com/references/latest/type.html>`_
Puppet Driving Ansible Driving Puppet
-------------------------------------
In OpenStack Infra, there are ansible playbooks that drive the running of
``puppet apply`` on all of the hosts in the inventory. That process first
copies appropriate ``hiera`` data files to each host.
The cron jobs, current configuration files and more can be managed with
``puppet apply``, but first some bootstrapping needs to be done.
Puppet should be installed from puppetlabs' apt repo. There is a script,
:git_file:`install_puppet.sh` in the root of the system-config repository that
will setup and install the puppet client. After that you must install the
ansible playbooks and hiera config (used to maintain secrets).
Ansible and Puppet 3 are known to run on Precise, Trusty, Centos 6 and Centos 7.
.. code-block:: bash

   sudo su -
   git clone https://opendev.org/opendev/system-config /opt/system-config
   bash /opt/system-config/install_puppet.sh
   bash /opt/system-config/install_modules.sh
   echo $REAL_HOSTNAME > /etc/hostname
   service hostname restart
   puppet apply --modulepath='/opt/system-config/modules:/etc/puppet/modules' -e 'include openstack_project::puppetmaster'
Hiera uses a systemwide configuration file in ``/etc/puppet/hiera.yaml``
and this setup supports multiple configurations. The two sets of environments
that OpenStack Infrastructure uses are ``production`` and ``development``.
``production`` is the default and the environment used when nothing else is
specified.
The hiera configuration is placed by puppet apply into common.yaml in
``/etc/puppet/hieradata/production`` and ``/etc/puppet/hieradata/development``.
The values are simple key-value pairs in yaml format. The keys needed are the
keys referenced in your ``site.pp``; their values are typically obvious
(strings, lists of strings). ``/etc/puppet/hieradata/`` and below should be
owned by ``puppet:puppet`` and have mode ``0711``.
Below the ``hieradata`` directory, there should be a ``common.yaml`` file where
settings that should be available to all servers in the infrastructure go,
and then two directories full of files. The first is ``fqdn`` which should
contain a yaml file for every server in the infrastructure named
``${fqdn_of_server}.yaml``. That file has secrets that are only for that
server. Additionally, some servers can have a ``$group`` defined in
``manifests/site.pp``. There can be a correspondingly named yaml file in the
``group`` directory that contains secrets to be made available to each
server in the group.
All of the actual yaml files should have mode 0600 and be owned by root.
Adding a node
-------------
For adding a new node to your puppet master, you can either use the
``/opt/system-config/launch/launch-node.py`` script
(see :git_file:`launch/README.rst` for full details) or bootstrap puppet manually.
For manual bootstrap, you need to run the following on the new server
(for example, review.opendev.org) that is connecting to the puppet master:
.. code-block:: bash

   sudo su -
   wget https://opendev.org/opendev/system-config/raw/branch/master/install_puppet.sh
   bash -x install_puppet.sh
Running Puppet on Nodes
-----------------------
In OpenStack's Infrastructure, puppet runs are triggered from a cronjob
running on the puppetmaster which in turn runs a single run of puppet apply on
each host we know about.
The entry point for this process is ``/opt/system-config/run_all.sh``
There are a few sets of nodes which have their own playbooks so that they
are run in sequence before the rest of the nodes are run in parallel.
At the moment, this allows creation of git repos on the git slaves before
creation of the master repos on the gerrit server.
If an admin needs to run puppet by hand, it's a simple matter of either
logging in to the server in question and running
`puppet apply /opt/system-config/manifests/site.pp` or, on the
puppetmaster, running:
.. code-block:: bash

   ansible-playbook --limit="$HOST:localhost" /opt/system-config/playbooks/remote_puppet_adhoc.yaml
as root, where `$HOST` is the host you want to run puppet on.
The `:localhost` is important as some of the plays depend on performing a task
on the localhost before continuing to the host in question, and without it in
the limit section, the tasks for the host will have undefined values.
There is also a script, `tools/kick.sh` that takes the host as an argument
and runs the above command.
Testing new puppet code can be done via `puppet apply --noop` or by
constructing a VM with a puppet install in it and just running `puppet apply`
on the code in question. This should actually make it fairly easy to test
how production works in a more self-contained manner.
Disabling Puppet on Nodes
-------------------------
In the case of needing to disable the running of puppet on a node, it's a
simple matter of adding an entry to the ansible inventory "disabled" group.
See the :ref:`disable-enable-puppet` section for more details.
Important Notes
---------------
#. Make sure the site manifest **does not** include the puppet cron job; this
   conflicts with puppet master and can cause issues. The initial puppet run
   that creates users should be done using the puppet apply configuration above.


@ -426,29 +426,26 @@ repository ``https://opendev.org/opendev/system-config``. This
tool is run from a checkout on the bridge - please see :git_file:`launch/README.rst`
for detailed instructions.
.. _disable-enable-puppet:
.. _disable-enable-ansible:
Disable/Enable Puppet
=====================
Disable/Enable Ansible
======================
You should normally not make manual changes to servers, but instead,
make changes through puppet. However, under some circumstances, you
may need to temporarily make a manual change to a puppet-managed
make changes through ansible or puppet. However, under some circumstances,
you may need to temporarily make a manual change to a managed
resource on a server.
OpenStack Infra uses a non-trivial combination of Dynamic and Static
Inventory in Ansible to control execution of puppet. A full understanding
OpenDev uses a Static Inventory in Ansible to control execution of Ansible
on hosts. A full understanding
of the concepts in
`Ansible Inventory Introduction
<http://docs.ansible.com/ansible/intro_inventory.html>`_
and
`Ansible Dynamic Inventory
<http://docs.ansible.com/ansible/intro_dynamic_inventory.html>`_
is essential for being able to make informed decisions about actions
to take.
In the case of needing to disable the running of puppet on a node, it's a
simple matter of adding an entry to the ansible inventory "disabled" group
In the case of needing to disable the running of ansible or puppet on a node,
it's a simple matter of adding an entry to the ansible inventory "disabled" group
in :git_file:`inventory/groups.yaml`. The
disabled entry is an input to `ansible --list-hosts` so you can check your
entry simply by running it with `ansible $hostlist --list-hosts` as root
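For example, to see exactly which hosts the ``disabled`` group currently
matches (reusing the ``foo.opendev.org`` example below), run as root on the
bridge:

.. code-block:: bash

   # List everything currently in the disabled group
   ansible disabled --list-hosts

   # Confirm a specific host resolves in the inventory
   ansible foo.opendev.org --list-hosts
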
@ -460,21 +457,15 @@ to `system-config`, there is a file on the bridge host,
`/etc/ansible/hosts/emergency.yaml` that can be edited directly.
`/etc/ansible/hosts/emergency.yaml` is a file that should normally be empty,
but the contents are not managed by puppet. Its purpose is to allow for
disabling puppet at times when landing a change to the puppet repo would be
but the contents are not managed by ansible. Its purpose is to allow for
disabling ansible at times when landing a change to the ansible repo would be
either unreasonable or impossible.
There are two sections in the emergency file, `disabled` and
`disabled:children`. To disable a single host, put it in `disabled`. If you
want to disable a group of hosts, put it in `disabled:children`. Any hosts we
have that have more than one host with the same name (such as in the case of
being in the midst of a migration) will show up as a group with the name of
the hostname and the individual servers will be listed by UUID.
Disabling puppet via ansible inventory does not disable puppet from being
able to be run directly on the host, it merely prevents ansible from
attempting to run it. If you choose to run puppet manually on a host, take care
to ensure that it has not been disabled at the bridge level first.
attempting to run it during the regular zuul jobs. If you choose to run
puppet manually on a host, take care to ensure that it has not been disabled
at the bridge level first.
Examples
--------
@ -491,21 +482,6 @@ without landing a puppet change, ensure the following is in
  disabled:
    - foo.opendev.org # 2020-05-23 bob is testing change 654321
To disable a group of hosts in the emergency file, such as all of the pypi
hosts.
::

  [disabled:children]
  pypi
To disable a statically defined host that is not an OpenStack host, such as
the Infra cloud controller hosts, update the ``disabled`` entry in
groups.yaml with something like:
::

  disabled: inventory_hostname == 'controller.useast.openstack.org'
.. _cinder:


@ -6,6 +6,7 @@ Major Systems
.. toctree::
:maxdepth: 2
bridge
cacti
certificate_authority
dns
@ -23,7 +24,6 @@ Major Systems
etherpad
paste
planet
puppet
static
reprepro
lists