docs: remove orphaned howto pages

The zuul-from-scratch page was removed with
I3c6327c9bc1a924f076ded06afc0afc4e3024384, but the files it linked to
were left behind.

At first glance this seemed a bit odd, because Sphinx should warn when
pages aren't linked to from a TOC.  It took me a while to realise
these pages were already marked with :orphan: at the top, which
suppressed that warning.  So they really are orphans now, but we
hadn't noticed.

This appears to go back well before the zuul-from-scratch removal to
some of the original organisation several years ago in
I206a2acf09eb8a2871ec61a00226654c798bb3eb -- it's not clear whether
this flag was intended to be left in the files or was a temporary
step, but it seems we've copied it into each new file as it was
created since then.

Most of this is old and part of the bit-rot described in the original
zuul-from-scratch removal.  The only recent part is the console
streaming documentation added with
I5bfb61323bf3219168d4d014cbb9703eed230e71 -- on reevaluation, I've
moved that content into the executor docs, where it seems to fit.

The other orphaned files are removed.

Change-Id: Id3669418189f1083a2fb690ada0b60043a77b1d6
Ian Wienand
2022-11-11 09:13:53 +11:00
parent 57451d04b9
commit 5d6a6fb1ad
12 changed files with 35 additions and 879 deletions

@@ -42,9 +42,6 @@ Then in the zuul.conf, set webhook_token and api_token.
Application
...........
.. NOTE Duplicate content here and in howtos/github_setup.rst. Keep them
in sync.
To create a `GitHub application
<https://developer.github.com/apps/building-integrations/setting-up-and-registering-github-apps/registering-github-apps/>`_:

@@ -1,75 +0,0 @@
:orphan:
CentOS 7
=========
We're going to be using CentOS 7 on a cloud server for this installation.
Prerequisites
-------------
- Port 9000 must be open and accessible from the Internet so that
GitHub can communicate with the Zuul web service; a sketch for opening
it with firewalld is shown below.
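If ``firewalld`` is running on the server, one way to open the port is
sketched below. This is an assumption about your environment; many
cloud images rely on security groups or another firewall instead, so
adjust accordingly.
.. code-block:: shell
# assumes firewalld is active; skip if your cloud uses security groups
sudo firewall-cmd --permanent --add-port=9000/tcp
sudo firewall-cmd --reload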
Login to your environment
-------------------------
Since we'll be using a cloud image for CentOS 7, our login user will
be ``centos`` which will also be the staging user for installation of
Zuul and Nodepool.
To get started, ssh to your machine as the ``centos`` user.
.. code-block:: shell
ssh centos@<ip_address>
Environment Setup
-----------------
Certain packages needed for Zuul and Nodepool are not available in the
upstream CentOS 7 repositories, so additional repositories need to be
enabled. The repositories, and the packages installed from them, are
listed below.
* ius-release: python35u, python35u-pip, python35u-devel
* bigtop: zookeeper
First, make sure the system packages are up to date, and then install
some packages which will be required later. Most of Zuul's binary
dependencies are handled by the bindep program, but a few additional
dependencies are needed to install bindep, and for other commands
which we will use in these instructions.
.. code-block:: shell
sudo yum update -y
sudo systemctl reboot
sudo yum install -y https://centos7.iuscommunity.org/ius-release.rpm
sudo yum install -y git2u-all python35u python35u-pip python35u-devel java-1.8.0-openjdk
sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.5 10
sudo alternatives --install /usr/bin/pip3 pip3 /usr/bin/pip3.5 10
sudo pip3 install python-openstackclient bindep
Install Zookeeper
-----------------
Nodepool uses Zookeeper to keep track of information about the
resources it manages, and it's also how Zuul makes requests to
Nodepool for nodes.
.. code-block:: console
sudo bash -c "cat << EOF > /etc/yum.repos.d/bigtop.repo
[bigtop]
name=Bigtop
enabled=1
gpgcheck=1
type=NONE
baseurl=http://repos.bigtop.apache.org/releases/1.2.1/centos/7/x86_64
gpgkey=https://dist.apache.org/repos/dist/release/bigtop/KEYS
EOF"
sudo yum install -y zookeeper zookeeper-server
sudo systemctl start zookeeper-server.service
sudo systemctl status zookeeper-server.service
sudo systemctl enable zookeeper-server.service

@@ -1,54 +0,0 @@
:orphan:
Fedora 27
=========
We're going to be using Fedora 27 on a cloud server for this installation.
Prerequisites
-------------
- Port 9000 must be open and accessible from the Internet so that
GitHub can communicate with the Zuul web service.
Login to your environment
-------------------------
Since we'll be using a cloud image for Fedora 27, our login user will
be ``fedora`` which will also be the staging user for installation of
Zuul and Nodepool.
To get started, ssh to your machine as the ``fedora`` user::
ssh fedora@<ip_address>
Environment Setup
-----------------
First, make sure the system packages are up to date, and then install
some packages which will be required later. Most of Zuul's binary
dependencies are handled by the bindep program, but a few additional
dependencies are needed to install bindep, and for other commands
which we will use in these instructions.
::
sudo dnf update -y
sudo systemctl reboot
sudo dnf install git redhat-lsb-core python3 python3-pip python3-devel make gcc openssl-devel python-openstackclient -y
pip3 install --user bindep
Install Zookeeper
-----------------
Nodepool uses Zookeeper to keep track of information about the
resources it manages, and it's also how Zuul makes requests to
Nodepool for nodes.
::
sudo dnf install zookeeper -y
sudo cp /etc/zookeeper/zoo_sample.cfg /etc/zookeeper/zoo.cfg
sudo systemctl start zookeeper.service
sudo systemctl status zookeeper.service
sudo systemctl enable zookeeper.service

@@ -1,103 +0,0 @@
:orphan:
Gerrit
======
Installation
------------
Gerrit can be downloaded from the `Gerrit Code Review
<https://www.gerritcodereview.com>`_ web site, which also contains the
Gerrit documentation with installation instructions.
Create a Zuul User
------------------
The Gerrit documentation walks you through adding a first user, which
will end up being the admin user. Once the admin user is created and
SSH access has been set up for that user, you can use that account to
create a new ``zuul`` user. This user, which will be used by our Zuul
installation, must have SSH access to Gerrit and have the
`stream-events <https://gerrit-review.googlesource.com/Documentation/access-control.html#global_capabilities>`_
ACL enabled.
.. TODO: Instructions to create the ssh key used here
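The TODO above is not filled in upstream; as a rough sketch (the key
name ``zuul_gerrit`` is arbitrary and only used for this example), an
SSH key for the ``zuul`` account could be generated like this:
.. code-block:: shell
# generate a dedicated key for the zuul Gerrit account (example name)
ssh-keygen -t ed25519 -f zuul_gerrit -N ''
# the command below reads $PUBKEY, so point it at the public half
PUBKEY=$(pwd)/zuul_gerrit.pub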
As the admin user, create the ``zuul`` user, and import an SSH key for
``zuul``:
.. code-block:: shell
cat $PUBKEY | ssh -p 29418 $USER@localhost gerrit create-account \
--group "'Registered Users'" --ssh-key - zuul
``$PUBKEY`` is the path to the SSH public key for the ``zuul``
user, and ``$USER`` is the username of the admin user.
The ``zuul`` user should now be able to stream events:
.. code-block:: shell
ssh -p 29418 zuul@localhost gerrit stream-events
Configure Gerrit
----------------
The ``zuul`` user (and any other users you may create, for that
matter) will need to be able to leave review votes on any project
hosted in your Gerrit. This is done with the use of Gerrit
`Review Labels <https://gerrit-review.googlesource.com/Documentation/access-control.html#category_review_labels>`_.
You may need to add the proper label permissions to the ``All-Projects``
project, which defines ACLs that all other projects will inherit.
.. TODO: Instructions to create a Verified label?
Visiting `Projects` -> `List` -> `All-Projects` -> `Access` in your
Gerrit lets you see the current access permissions. In the
``Reference: refs/heads/*`` section, you will need to add a permission
for the ``Label Code-Review`` for the ``Registered Users`` group (we
added the ``zuul`` user to this group when we created it).
.. note:: The label you configure here must match the label referenced in
your Zuul pipeline definitions. We've chosen the Code-Review label
here as an example.
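For reference, a minimal sketch of how a pipeline definition typically
references such a label (this pipeline is illustrative only, not part
of this guide's configuration):
.. code-block:: yaml
- pipeline:
    name: check
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        Code-Review: 1
    failure:
      gerrit:
        Code-Review: -1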
Create a New Project
--------------------
The admin user can create new projects in Gerrit, which users can then clone
and use to submit code changes. Zuul will monitor the Gerrit event stream for
these submissions.
To create a new project named 'demo-project':
.. code-block:: shell
ssh -p 29418 $USER@localhost gerrit create-project demo-project --empty-commit
Modify the Project
------------------
* Clone the project:
.. code-block:: shell
git clone ssh://$USER@localhost:29418/demo-project.git
* Install the change ID hook that Gerrit requires:
.. code-block:: shell
cd demo-project
scp -p -P 29418 $USER@localhost:hooks/commit-msg .git/hooks/
* Now you are ready to modify the project and push the changes to Gerrit:
.. code-block:: shell
echo "test" > README.txt
git add .
git commit -m "First commit"
git push origin HEAD:refs/for/master
You should now be able to see your change in Gerrit.

@@ -1,180 +0,0 @@
:orphan:
GitHub
======
Configure GitHub
----------------
The recommended way to use Zuul with GitHub is by creating a GitHub
App. This allows you to easily add it to GitHub projects and reduces
the likelihood of running into GitHub rate limits. You'll need an
organization in GitHub for this, so create one if you haven't already.
In this example we will use `my-org`.
.. NOTE Duplicate content here and in drivers/github.rst. Keep them
in sync.
To create a `GitHub application
<https://developer.github.com/apps/building-integrations/setting-up-and-registering-github-apps/registering-github-apps/>`_:
* Go to your organization settings page to create the application, e.g.:
https://github.com/organizations/my-org/settings/apps/new
* Set GitHub App name to "my-org-zuul"
* Set Setup URL to your setup documentation; when users install the
application they are redirected to this URL
* Set Webhook URL to
``http://<zuul-hostname>:<port>/api/connection/<connection-name>/payload``.
* Create a Webhook secret
* Set permissions:
* Repository administration: Read
* Checks: Read & Write
* Repository contents: Read & Write (write access lets Zuul merge changes)
* Issues: Read & Write
* Pull requests: Read & Write
* Commit statuses: Read & Write
* Set events subscription:
* Check run
* Commit comment
* Create
* Push
* Release
* Issue comment
* Issues
* Label
* Pull request
* Pull request review
* Pull request review comment
* Status
* Set Where can this GitHub App be installed to "Any account"
* Create the App
* Generate a Private key in the app settings page
.. TODO See if we can script this using GitHub API
Go back to the `General` settings page for the app,
https://github.com/organizations/my-org/settings/apps/my-org-zuul
and look for the app `ID` number, under the `About` section.
Edit ``/etc/zuul/zuul.conf`` to add the following:
.. code-block:: shell
sudo bash -c "cat >> /etc/zuul/zuul.conf <<EOF
[connection github]
driver=github
app_id=<APP ID NUMBER>
app_key=/etc/zuul/github.pem
webhook_token=<WEBHOOK SECRET>
EOF"
Upload the private key which was generated earlier, and save it in
``/etc/zuul/github.pem``.
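For example (a sketch: the downloaded key's filename is generated by
GitHub, so adjust it to match, and the ownership assumes the ``zuul``
user created earlier in this guide):
.. code-block:: shell
# copy the key from wherever it was downloaded to the Zuul server
scp my-org-zuul.*.private-key.pem <ip_address>:
# then, on the Zuul server itself
sudo mv ~/my-org-zuul.*.private-key.pem /etc/zuul/github.pem
sudo chown zuul:zuul /etc/zuul/github.pem
sudo chmod 0600 /etc/zuul/github.pem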
Restart all of Zuul:
.. code-block:: shell
sudo systemctl restart zuul-executor.service
sudo systemctl restart zuul-web.service
sudo systemctl restart zuul-scheduler.service
Go to the `Advanced` tab for the app in GitHub,
https://github.com/organizations/my-org/settings/apps/my-org-zuul/advanced,
and look for the initial ping from the app. It probably wasn't
delivered since Zuul wasn't configured at the time, so click
``Resend`` and verify that it is delivered now that Zuul is
configured.
Create two new repositories in your org. One will hold the
configuration for this tenant in Zuul, the other should be a normal
project repo to use for testing. We'll call them ``zuul-test-config``
and ``zuul-test``, respectively.
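If you have the GitHub CLI (``gh``) installed and authenticated, one
way to create them is sketched below; creating them through the web UI
works just as well.
.. code-block:: shell
gh repo create my-org/zuul-test-config --public
gh repo create my-org/zuul-test --public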
Visit the public app page on GitHub,
https://github.com/apps/my-org-zuul, and install the app into your org.
Edit ``/etc/zuul/main.yaml`` so that it looks like this:
.. code-block:: yaml
- tenant:
name: quickstart
source:
zuul-git:
config-projects:
- zuul/zuul-base-jobs
untrusted-projects:
- zuul/zuul-jobs
github:
config-projects:
- my-org/zuul-test-config
untrusted-projects:
- my-org/zuul-test
The first section, under ``zuul-git``, imports the standard library of
Zuul jobs that we configured earlier. This adds a number of jobs that
you can immediately use in your Zuul installation.
The second section is your GitHub configuration.
After updating the file, restart the Zuul scheduler:
.. code-block:: shell
sudo systemctl restart zuul-scheduler.service
Add an initial pipeline configuration to the `zuul-test-config`
repository. Inside that project, create a ``zuul.yaml`` file with the
following contents:
.. code-block:: yaml
- pipeline:
name: check
description: |
Newly opened pull requests enter this pipeline to receive an
initial verification
manager: independent
trigger:
github:
- event: pull_request
action:
- opened
- changed
- reopened
- event: pull_request
action: comment
comment: (?i)^\s*recheck\s*$
- event: check_run
start:
github:
check: 'in_progress'
comment: false
success:
github:
check: 'success'
failure:
github:
check: 'failure'
Merge that commit into the repository.
In the `zuul-test` project, create a `.zuul.yaml` file with the
following contents:
.. code-block:: yaml
- project:
check:
jobs:
- noop
Open a new pull request with that commit against the `zuul-test`
project and verify that Zuul reports a successful run of the `noop`
job.

@@ -1,73 +0,0 @@
:orphan:
Install Nodepool
================
Initial Setup
-------------
First we'll create the nodepool user and set up some directories it
needs. We also need to create an SSH key for Zuul to use when it logs
into the nodes that Nodepool provides.
.. code-block:: shell
sudo groupadd --system nodepool
sudo useradd --system nodepool --home-dir /var/lib/nodepool --create-home -g nodepool
ssh-keygen -t rsa -m PEM -b 2048 -f nodepool_rsa -N ''
sudo mkdir /etc/nodepool/
sudo mkdir /var/log/nodepool
sudo chgrp -R nodepool /var/log/nodepool/
sudo chmod 775 /var/log/nodepool/
Installation
------------
Clone the Nodepool git repository and install it. The ``bindep``
program is used to determine any additional binary dependencies which
are required.
.. code-block:: shell
# All:
git clone https://opendev.org/zuul/nodepool
pushd nodepool/
# For Fedora and CentOS:
sudo yum -y install $(bindep -b compile)
# For openSUSE:
sudo zypper install -y $(bindep -b compile)
# For Ubuntu:
sudo apt-get install -y $(bindep -b compile)
# All:
sudo pip3 install .
popd
Service File
------------
Nodepool includes a systemd service file for nodepool-launcher in the
``etc`` source directory. To use it, perform the following steps.
.. code-block:: shell
pushd nodepool/
sudo cp etc/nodepool-launcher.service /etc/systemd/system/nodepool-launcher.service
sudo chmod 0644 /etc/systemd/system/nodepool-launcher.service
popd
If you are installing Nodepool on ``CentOS 7`` and copied the provided
service file in the previous step, follow the steps below to install the
corresponding systemd drop-in file so that the Nodepool service can be
managed by systemd.
.. code-block:: shell
pushd nodepool/
sudo mkdir /etc/systemd/system/nodepool-launcher.service.d
sudo cp etc/nodepool-launcher.service.d/centos.conf \
/etc/systemd/system/nodepool-launcher.service.d/centos.conf
sudo chmod 0644 /etc/systemd/system/nodepool-launcher.service.d/centos.conf
popd

@@ -1,87 +0,0 @@
:orphan:
Nodepool - OpenStack
====================
Setup
-----
Before starting on this, you need to download your `openrc`
configuration from your OpenStack cloud. Put it on your server in the
staging user's home directory. It should be called
``<username>-openrc.sh``. Once that is done, create a new keypair
that will be installed when instantiating the servers:
.. code-block:: shell
cd ~
source <username>-openrc.sh # this may prompt for password - enter it
openstack keypair create --public-key nodepool_rsa.pub nodepool
We'll use the private key later when configuring Zuul. In the same
session, configure nodepool to talk to your cloud:
.. code-block:: shell
umask 0066
sudo mkdir -p ~nodepool/.config/openstack
cat > clouds.yaml <<EOF
clouds:
mycloud:
auth:
username: $OS_USERNAME
password: $OS_PASSWORD
project_name: ${OS_PROJECT_NAME:-$OS_TENANT_NAME}
auth_url: $OS_AUTH_URL
region_name: $OS_REGION_NAME
EOF
sudo mv clouds.yaml ~nodepool/.config/openstack/
sudo chown -R nodepool.nodepool ~nodepool/.config
umask 0002
Once you've written out the file, double-check that all the required
fields have been filled out.
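A quick way to sanity-check the file, and at the same time list the
flavor and image names you'll need in the next section, is sketched
below; it assumes ``python-openstackclient`` is installed (as in the
earlier environment setup) and uses ``sudo -H`` so that the nodepool
user's clouds.yaml is picked up.
.. code-block:: shell
sudo -H -u nodepool openstack --os-cloud mycloud flavor list
sudo -H -u nodepool openstack --os-cloud mycloud image list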
Configuration
-------------
You'll need the following information in order to create the Nodepool
configuration file:
* cloud name / region name - from clouds.yaml
* flavor-name
* image-name - from your cloud
.. code-block:: shell
sudo bash -c "cat >/etc/nodepool/nodepool.yaml <<EOF
zookeeper-servers:
- host: localhost
port: 2181
providers:
- name: myprovider # this is a nodepool identifier for this cloud provider (cloud+region combo)
region-name: regionOne # this needs to match the region name in clouds.yaml but is only needed if there is more than one region
cloud: mycloud # This needs to match the name in clouds.yaml
cloud-images:
- name: centos-7 # Defines a cloud-image for nodepool
image-name: CentOS-7-x86_64-GenericCloud-1706 # name of image from cloud
username: centos # The user Zuul should log in as
pools:
- name: main
max-servers: 4 # nodepool will never create more than this many servers
labels:
- name: centos-7-small # defines label that will be used to get one of these in a job
flavor-name: 'm1.small' # name of flavor from cloud
cloud-image: centos-7 # matches name from cloud-images
key-name: nodepool # name of the keypair to use for authentication
labels:
- name: centos-7-small # defines label that will be used in jobs
min-ready: 2 # nodepool will always keep this many booted and ready to go
EOF"
.. warning::
`min-ready:2` may incur costs in your cloud provider. This will result in
two instances always running, even when idle.

@@ -1,98 +0,0 @@
:orphan:
Nodepool - Static
=================
The static driver allows you to use existing compute resources, such as real
hardware or long-lived virtual machines, with nodepool.
Node Requirements
-----------------
Any nodes you set up for Nodepool (either real or virtual) must meet
the following requirements:
* Must be reachable by Zuul executors and have SSH access enabled.
* Must have a user that Zuul can use for SSH.
* Must have an Ansible-supported Python interpreter installed.
* Must be reachable by Zuul executors over TCP port 19885 for console
log streaming. See :ref:`nodepool_console_streaming`.
When setting up your nodepool.yaml file, you will need the host key of
each node for the ``host-key`` value. This can be obtained with the
command:
.. code-block:: shell
ssh-keyscan -t ed25519 <HOST>
Nodepool Configuration
----------------------
Below is a sample Nodepool configuration file that sets up static
nodes. Place this file in ``/etc/nodepool/nodepool.yaml``:
.. code-block:: shell
sudo bash -c "cat > /etc/nodepool/nodepool.yaml <<EOF
zookeeper-servers:
- host: localhost
labels:
- name: ubuntu-jammy
providers:
- name: static-vms
driver: static
pools:
- name: main
nodes:
- name: 192.168.1.10
labels: ubuntu-jammy
host-key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGXqY02bdYqg1BcIf2x08zs60rS6XhlBSQ4qE47o5gb"
username: zuul
- name: 192.168.1.11
labels: ubuntu-jammy
host-key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGXqY02bdYqg1BcIf2x08zs60rS6XhlBSQ5sE47o5gc"
username: zuul
EOF"
Make sure that ``username``, ``host-key``, IP addresses and label names are
customized for your environment.
.. _nodepool_console_streaming:
Log streaming
-------------
The log streaming service enables Zuul to show the live status of
long-running ``shell`` or ``command`` tasks. The server side is set up
by the ``zuul_console:`` task built in to Zuul's Ansible installation.
The executor requires the ability to communicate with this server on
the job nodes via port ``19885`` for this to work.
The log streaming service spools command output via files on the job
node in the format ``/tmp/console-<uuid>-<task_id>-<host>.log``. By
default, it will clean these files up automatically.
Occasionally, a streaming file may be left behind if a job is
interrupted. These may be safely removed after a short period of
inactivity with a command such as:
.. code-block:: shell
find /tmp -maxdepth 1 -name 'console-*-*-<host>.log' -mtime +2 -delete
If the executor is unable to reach port ``19885`` (for example due to
firewall rules), or the ``zuul_console`` daemon can not be run for
some other reason, the command to clean these spool files will not be
processed and they may be left behind; on an ephemeral node this is
not usually a problem, but on a static node these files will persist.
In this situation, Zuul can be instructed not to create any spool
files for ``shell`` and ``command`` tasks by setting
``zuul_console_disabled: True`` (usually via a global host variable in
inventory). Live streaming of ``shell`` and ``command`` calls will of
course be unavailable in this case, but no spool files will be
created.

@@ -1,61 +0,0 @@
:orphan:
openSUSE Leap 15
================
We're going to be using openSUSE Leap 15 for this installation.
Prerequisites
-------------
If you are using Zuul with GitHub,
- Port 9000 must be open and accessible from the Internet so that
GitHub can communicate with the Zuul web service.
Environment Setup
-----------------
First, make sure the system packages are up to date, and then install
some packages which will be required later. Most of Zuul's binary
dependencies are handled by the bindep program, but a few additional
dependencies are needed to install bindep, and for other commands
which we will use in these instructions.
.. code-block:: shell
sudo zypper install -y git python3-pip
Then install bindep:
.. code-block:: shell
pip3 install --user bindep
# Add it to your path
PATH=~/.local/bin:$PATH
Install Zookeeper
-----------------
Nodepool uses Zookeeper to keep track of information about the
resources it manages, and it's also how Zuul makes requests to
Nodepool for nodes.
You should follow the `official deployment instructions for zookeeper
<https://zookeeper.apache.org/doc/current/zookeeperAdmin.html>`_,
but to get started quickly, just download, unpack and run.
To download, follow the directions on `Zookeeper's releases page
<https://zookeeper.apache.org/releases.html>`_ to grab the latest
release of Zookeeper. Then:
.. code-block:: shell
sudo zypper install -y java-1_8_0-openjdk
tar -xzf zookeeper-3.4.12.tar.gz # Tarball downloaded from Zookeeper
cp zookeeper-3.4.12/conf/zoo_sample.cfg zookeeper-3.4.12/conf/zoo.cfg
./zookeeper-3.4.12/bin/zkServer.sh start
.. note:: Don't forget to follow `Apache's checksum instructions
<https://www.apache.org/dyn/closer.cgi#verify>`_ before
extracting.

@@ -1,54 +0,0 @@
:orphan:
Ubuntu
======
We're going to be using Ubuntu on a cloud server for this installation.
Prerequisites
-------------
- Port 9000 must be open and accessible from the Internet so that
GitHub can communicate with the Zuul web service.
Login to your environment
-------------------------
Since we'll be using a cloud image for Ubuntu, our login user will
be ``ubuntu`` which will also be the staging user for installation of
Zuul and Nodepool.
To get started, ssh to your machine as the ``ubuntu`` user.
.. code-block:: shell
ssh ubuntu@<ip_address>
Environment Setup
-----------------
First, make sure the system packages are up to date, and then install
some packages which will be required later. Most of Zuul's binary
dependencies are handled by the bindep program, but a few additional
dependencies are needed to install bindep, and for other commands
which we will use in these instructions.
.. code-block:: shell
sudo apt-get update
sudo apt-get install python3-pip git
# install bindep, the --user setting will install bindep only in
# the user profile not global.
pip3 install --user bindep
Install Zookeeper
-----------------
Nodepool uses Zookeeper to keep track of information about the
resources it manages, and it's also how Zuul makes requests to
Nodepool for nodes.
.. code-block:: console
sudo apt-get install -y zookeeper zookeeperd

@@ -1,91 +0,0 @@
:orphan:
Install Zuul
============
Initial Setup
-------------
First we'll create the zuul user and set up some directories it needs.
We'll also install the SSH private key that we previously created
during the Nodepool setup.
.. code-block:: console
$ sudo groupadd --system zuul
$ sudo useradd --system zuul --home-dir /var/lib/zuul --create-home -g zuul
$ sudo mkdir /etc/zuul/
$ sudo mkdir /var/log/zuul/
$ sudo chown zuul:zuul /var/log/zuul/
$ sudo mkdir /var/lib/zuul/.ssh
$ sudo chmod 0700 /var/lib/zuul/.ssh
$ sudo mv nodepool_rsa /var/lib/zuul/.ssh
$ sudo chown -R zuul:zuul /var/lib/zuul/.ssh
Installation
------------
Clone the Zuul git repository and install it. The ``bindep`` program
is used to determine any additional binary dependencies which are
required.
.. code-block:: console
# All:
$ git clone https://opendev.org/zuul/zuul
$ pushd zuul/
# For Fedora and CentOS:
$ sudo yum -y install $(bindep -b compile)
# For openSUSE:
$ zypper install -y $(bindep -b compile)
# For Ubuntu:
$ apt-get install -y $(bindep -b compile)
# All:
$ tools/install-js-tools.sh
# All:
$ sudo pip3 install .
$ sudo zuul-manage-ansible
$ popd
Service Files
-------------
Zuul includes systemd service files for the Zuul services in the ``etc``
source directory. To use them, perform the following steps.
.. code-block:: console
$ pushd zuul/
$ sudo cp etc/zuul-scheduler.service /etc/systemd/system/zuul-scheduler.service
$ sudo cp etc/zuul-executor.service /etc/systemd/system/zuul-executor.service
$ sudo cp etc/zuul-web.service /etc/systemd/system/zuul-web.service
$ sudo chmod 0644 /etc/systemd/system/zuul-scheduler.service
$ sudo chmod 0644 /etc/systemd/system/zuul-executor.service
$ sudo chmod 0644 /etc/systemd/system/zuul-web.service
$ popd
If you are installing Zuul on ``CentOS 7`` and copied the provided
service files in the previous step, follow the steps below to install
the corresponding systemd drop-in files so that the Zuul services can
be managed by systemd.
.. code-block:: console
$ pushd zuul/
$ sudo mkdir /etc/systemd/system/zuul-scheduler.service.d
$ sudo cp etc/zuul-scheduler.service.d/centos.conf \
/etc/systemd/system/zuul-scheduler.service.d/centos.conf
$ sudo chmod 0644 /etc/systemd/system/zuul-scheduler.service.d/centos.conf
$ sudo mkdir /etc/systemd/system/zuul-executor.service.d
$ sudo cp etc/zuul-executor.service.d/centos.conf \
/etc/systemd/system/zuul-executor.service.d/centos.conf
$ sudo chmod 0644 /etc/systemd/system/zuul-executor.service.d/centos.conf
$ sudo mkdir /etc/systemd/system/zuul-web.service.d
$ sudo cp etc/zuul-web.service.d/centos.conf \
/etc/systemd/system/zuul-web.service.d/centos.conf
$ sudo chmod 0644 /etc/systemd/system/zuul-web.service.d/centos.conf
$ popd

@@ -150,6 +150,41 @@ safe now, but there is still a small possibility of incompatibility.
See also the Ansible `Python 3 support page
<https://docs.ansible.com/ansible/latest/reference_appendices/python_3_support.html>`__.
.. _nodepool_console_streaming:
Log streaming
~~~~~~~~~~~~~
The log streaming service enables Zuul to show the live status of
long-running ``shell`` or ``command`` tasks. The server side is set up
by the ``zuul_console:`` task built in to Zuul's Ansible installation.
The executor requires the ability to communicate with this server on
the job nodes via port ``19885`` for this to work.
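If you need to verify that connectivity, a quick check from the
executor host is sketched below (``<node-ip>`` is a placeholder for a
job node's address; this uses bash's ``/dev/tcp`` so no extra tools
are required):
.. code-block:: shell
# prints the message only if the port is reachable within 5 seconds
timeout 5 bash -c '</dev/tcp/<node-ip>/19885' && echo "19885 reachable"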
The log streaming service spools command output via files on the job
node in the format ``/tmp/console-<uuid>-<task_id>-<host>.log``. By
default, it will clean these files up automatically.
Occasionally, a streaming file may be left behind if a job is
interrupted. These may be safely removed after a short period of
inactivity with a command such as:
.. code-block:: shell
find /tmp -maxdepth 1 -name 'console-*-*-<host>.log' -mtime +2 -delete
If the executor is unable to reach port ``19885`` (for example due to
firewall rules), or the ``zuul_console`` daemon can not be run for
some other reason, the command to clean these spool files will not be
processed and they may be left behind; on an ephemeral node this is
not usually a problem, but on a static node these files will persist.
In this situation, Zuul can be instructed not to create any spool
files for ``shell`` and ``command`` tasks by setting
``zuul_console_disabled: True`` (usually via a global host variable in
inventory). Live streaming of ``shell`` and ``command`` calls will of
course be unavailable in this case, but no spool files will be
created.
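As a rough sketch, one convenient place to set this is on a base job,
since job variables end up in the Ansible inventory for every host (the
job name here is only an example):
.. code-block:: yaml
- job:
    name: base
    vars:
      zuul_console_disabled: true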
Web Server
----------