14 Commits

Author SHA1 Message Date
Ian Wienand
ace1c39c61 gitea: use random time for git gc run
Randomising the time of this job should help avoid a thundering herd
of I/O intensive operations in the gitea environment.

Change-Id: I035f7781a397665357b6d039b989ab9fe6a46b8a
2019-09-04 05:15:21 +10:00
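The randomisation described above is typically done by seeding a random pick with the hostname so each host gets a stable but different cron slot. A minimal Python sketch of that idea (function name and hostname are illustrative, not the actual Ansible change):

```python
import random

def random_gc_time(hostname: str):
    """Pick a stable pseudo-random (hour, minute) cron slot for a host.

    Seeding with the hostname keeps the slot stable across runs while
    spreading hosts over the day, so the I/O-intensive gc jobs don't
    all fire at once.
    """
    rng = random.Random(hostname)  # deterministic per host
    return rng.randint(0, 23), rng.randint(0, 59)
```

The same pattern is commonly written in Ansible as `{{ 59 | random(seed=inventory_hostname) }}` in the cron task.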
Monty Taylor
5c6b3411b7 Run actual full project creation in gitea test
Add the full remote_puppet_git playbook that we actually use in
production so that we can test the whole kit and caboodle. For
now don't add a review.o.o server to the mix, because we aren't
testing anything about it.

Change-Id: If1112a363e96148c06f8edf1e3adeaa45fc7271c
2019-07-11 13:39:22 -07:00
Monty Taylor
caebf387b4 Translate gitea project creation to python
Sadly, as readable as the use of the uri module to do the interactions
with gitea is, more recent ansible changed how subprocesses are forked
and this makes iterating over all the projects in projects.yaml take
an incredibly long time.

Instead of doing it in yaml, make a python module that takes the list
once and does the looping and requests calls itself. This should make
it possible to run the actual gitea creation playbook in integration tests.

Change-Id: Ifff3291c1092e6df09ae339c9e7dddb5ee692685
2019-07-11 08:21:35 -04:00
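The approach described above can be sketched as a single Python loop over the project list with one persistent HTTP session, instead of one forked `uri` task per project. This is an illustrative sketch, not the actual module; the endpoint shape is assumed from Gitea's REST API, auth is omitted, and `post` is injectable so the loop can be exercised without a server:

```python
def create_projects(projects, gitea_url="https://localhost:3000", post=None):
    """Create each gitea project with one long-lived HTTP session.

    Looping inside a single python process replaces per-task `uri`
    calls, where recent ansible forks a subprocess per iteration.
    """
    if post is None:
        import requests  # only needed when talking to a real server
        post = requests.Session().post
    created = []
    for name in projects:
        org, _, repo = name.partition("/")
        # Assumed Gitea REST endpoint: POST /api/v1/org/{org}/repos
        resp = post(f"{gitea_url}/api/v1/org/{org}/repos", json={"name": repo})
        if resp.status_code in (201, 409):  # created, or already exists
            created.append(name)
    return created
```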
Clark Boylan
5727407486 Only backup the gitea database on gitea hosts
During a db recovery, rebuilding a host from the existing db backups
resulted in a corrupt mysql.proc table. The issue seemed to be
attempting to restore the internal mysql database. Instead of dumping
all databases, let's just back up the one we care about: gitea.

Change-Id: Ia2c87b62736fda1c8a9ce77126e383ec74990b4a
2019-06-27 09:53:34 -07:00
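The distinction above is between `mysqldump --all-databases` (which drags in MariaDB's internal `mysql` schema, including mysql.proc) and naming the single application database. A hedged sketch of building such an invocation; the flags are illustrative and the real backup runs via docker-compose against the mariadb container:

```python
def backup_command(database="gitea"):
    """Build a mysqldump invocation for just the application database.

    Dumping only the gitea database avoids restoring MariaDB's internal
    `mysql` schema on a rebuilt host, which is what corrupted mysql.proc.
    """
    return ["mysqldump", "--single-transaction", "--databases", database]
```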
Jeremy Stanley
d0ff3e48d1 Suppress progress for git gc cron on Gitea servers
The stdout progress feed from `git gc` is fairly verbose and
targeted at audiences running it interactively. Since our cron for
this iterates over thousands of repositories on our Gitea servers,
we don't need to send the progress info to all our sysadmins by
E-mail. Instead use the --quiet option to the gc subcommand so that
progress output will be suppressed.

If this still proves too verbose (as in, continues to result in
E-mail to root even when there are no failures), we can try
redirecting stdout to /dev/null.

Change-Id: Idc06e48cbf85e127a343c2a3cf51a35e6ed09685
2019-06-09 14:30:28 +00:00
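The cron loop described above can be sketched as one `git gc --quiet` invocation per bare repository. A minimal sketch, assuming an org/repo.git on-disk layout (the actual server path may differ):

```python
from pathlib import Path

def gc_commands(base):
    """One `git gc --quiet` invocation per bare repository under base.

    --quiet drops the progress feed aimed at interactive users, so the
    cron mail to root stays empty unless gc actually fails and prints
    an error.
    """
    return [["git", "-C", str(repo), "gc", "--quiet"]
            for repo in sorted(Path(base).glob("*/*.git"))]
```

A runner would then `subprocess.run()` each command without `check=True`, so one bad repository does not stop the sweep.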
Clark Boylan
e832987fca Add db backups to gitea
This isn't added as a separate role because it heavily relies on
gitea deployment specifics (docker-compose, service names, etc). If we
end up running more services with docker-compose and databases we can
probably make this reusable.

Change-Id: I7b9084a8a90a86f73f5b24de505978d3f286850b
2019-06-04 16:07:46 -07:00
James E. Blair
b87c2d02ab Add cron to gc on gitea servers
As new change refs accumulate, replication pushes and page loads
will take longer as git stats all of the refs/ files.  To avoid
that, pack refs and gc every week to keep the number of files
and space used minimal.

Change-Id: Iff273ebbc25a512ab7e12b8418ceb30e7c722f92
2019-05-23 15:33:55 -07:00
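The weekly maintenance described above amounts to two git invocations per repository. A sketch of the cron's intent, not the deployed script:

```python
def maintenance_commands(repo):
    """The weekly per-repository maintenance, as git invocations.

    pack-refs collapses the one-file-per-ref layout under refs/ into a
    single packed-refs file, and gc compacts loose objects, so
    replication pushes and page loads stop stat()ing thousands of
    loose ref files.
    """
    return [
        ["git", "-C", repo, "pack-refs", "--all"],
        ["git", "-C", repo, "gc"],
    ]
```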
Zuul
2c5847dad9 Merge "Split the base playbook into services" 2019-05-20 10:04:40 +00:00
James E. Blair
8ad300927e Split the base playbook into services
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.

Zuul should be able to handle missing projects now, so move it
from the puppet_git playbook into puppet.

Make the base playbook be merely the base roles.

Make service playbooks for each service.

Remove the run-docker job because it's covered by service jobs.

Stop testing that puppet is installed in testinfra. It's accidentally
working due to the selection of non-puppeted hosts only being on
bionic nodes and not installing puppet on bionic. Instead, we can now
rely on actually *running* puppet when it's important, such as in the
eavesdrop job. Also remove the installation of puppet on the nodes in
the base job, since it only serves to verify a synthetic test of
installing puppet on nodes we don't otherwise use.

Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.

Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
2019-05-19 07:31:00 -05:00
Clark Boylan
f4bf952f34 Prune docker images after docker-compose up
This ensures that we clean up images that are superseded and no longer
necessary. We do this to avoid filling the disk with docker images.

Note that we use the -f flag to avoid being prompted by docker image
prune for confirmation.

Change-Id: I8eb5bb97d8c66755e695498707220c9e6e7b2de0
2019-05-02 15:09:37 -07:00
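The ordering matters here: prune runs after `up` so only images no longer referenced by the freshly recreated containers are removed. A sketch of that sequence with an injectable runner for testing (paths and flags illustrative):

```python
import subprocess

def deploy(run=subprocess.run):
    """docker-compose up, then prune images that are now superseded.

    Pruning after up reclaims disk from old image layers; -f skips the
    interactive confirmation prompt, which an unattended ansible run
    cannot answer.
    """
    run(["docker-compose", "up", "-d"], check=True)
    run(["docker", "image", "prune", "-f"], check=True)
```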
Monty Taylor
930b64c96b Add a stop timeout to gitea docker-compose up
It's possible that we're not allowing long enough time for mariadb
to stop cleanly. https://github.com/docker-library/mariadb/issues/201
indicates that adding a stop timeout might be useful. The default is
10 seconds; bump it to 60.

Change-Id: Id7a815d1508fe6d8f79818c9109cbf89533bb2a6
2019-03-05 08:18:51 +00:00
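In the compose file this is the per-service `stop_grace_period` option. A minimal sketch, assuming the service name and image tag (both illustrative):

```yaml
services:
  mariadb:
    image: mariadb:10.3        # tag illustrative
    stop_grace_period: 60s     # default is 10s; give mariadb time to flush and exit
```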
James E. Blair
4d91f29b39 Run docker-compose pull before docker-compose up
This will make sure that the latest relevant images are in the
local image storage, and therefore, will cause docker-compose up
to recreate containers when the images are updated.

Change-Id: Ic6f0bc8c8aea5b5c16501f4ab5d3095fb70c0ff7
2019-03-04 14:40:35 -08:00
James E. Blair
480c7ebe37 Use host networking for gitea
Change-Id: If706c6f85022919add93e46eeb6eae1b6d948d75
2019-02-21 15:27:44 -08:00
James E. Blair
67cda2c7df Deploy gitea with docker-compose
This deploys a shared-nothing gitea server using docker-compose.
It includes a mariadb server.

Change-Id: I58aff016c7108c69dfc5f2ebd46667c4117ba5da
2019-02-18 08:46:40 -08:00
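A shared-nothing docker-compose deployment of this shape pairs the gitea container with its own mariadb. A minimal sketch under assumed service names, image tags, and volume paths (none of these are confirmed by the commit):

```yaml
services:
  mariadb:
    image: mariadb:10.3          # tag illustrative
    environment:
      MYSQL_DATABASE: gitea
    volumes:
      - /var/mariadb/db:/var/lib/mysql
  gitea-web:
    image: gitea/gitea:latest    # tag illustrative
    depends_on:
      - mariadb
    volumes:
      - /var/gitea/data:/data
```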