Ian Wienand 368466730c Migrate codesearch site to container
The hound project has undergone a small re-birth and moved to

 https://github.com/hound-search/hound

which has broken our deployment.  We've talked about leaving
codesearch up to gitea, but it's not quite there yet.  There seems to
be no point working on the puppet now.

This builds a container that runs houndd.  It's an opendev-specific
container; the config is pulled from project-config directly.

There are some custom scripts that drive things.  Some points for
reviewers:

 - update-hound-config.sh uses "create-hound-config" (which is in
   jeepyb for historical reasons) to generate the config file.  It
   grabs the latest projects.yaml from project-config and exits with a
   return code indicating whether the config changed.

 - when the container starts, it runs update-hound-config.sh to
   populate the initial config.  There is a testing-environment flag
   and a small config so it doesn't have to clone all of opendev for
   functional testing.

 - it runs under supervisord so we can restart the daemon when
   projects are updated.  Unlike earlier versions, which didn't start
   listening until indexing was done, this version puts up a "Hound
   is not ready yet" message while it is indexing; so we can drop
   all the magic we were doing to probe whether hound was listening
   via netstat and make Apache redirect to a status page.

 - resync-hound.sh is run daily from an external cron job and performs
   this update-and-restart check (see the sketch after this list).
   Since it only restarts if changes were made, restarts should be
   relatively rare anyway.

 - There is a PR to monitor the config file
   (https://github.com/hound-search/hound/pull/357) which would make
   the restart unnecessary.  This would be good to have in the near
   future, as we could then remove the cron job.

 - playbooks/roles/codesearch is unexciting and deploys the container,
   certificates and an apache proxy back to localhost:6080 where hound
   is listening.
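
To make the update/restart flow concrete, here is a minimal sketch of
what the cycle amounts to; the exit-status convention of
update-hound-config.sh and the supervisord program name are assumptions
for illustration only, the authoritative details live in the scripts
themselves:

    #!/bin/bash
    # Regenerate the hound config from the latest projects.yaml; assume
    # a non-zero exit status means the generated config changed.
    if ! update-hound-config.sh; then
        # Restart houndd under supervisord so it picks up the new
        # config and reindexes (program name "hound" is assumed here).
        supervisorctl restart hound
    fi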

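For reference, the Apache piece of that role is essentially a reverse
proxy of the following shape (hostname assumed and TLS directives
omitted; the actual vhost template is in the role, not copied here):

    <VirtualHost *:443>
        # Hostname assumed for illustration; TLS directives omitted.
        ServerName codesearch.opendev.org
        ProxyPass "/" "http://localhost:6080/"
        ProxyPassReverse "/" "http://localhost:6080/"
    </VirtualHost>
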
I've combined removal of the old puppet bits here as the "-codesearch"
namespace was already being used.

Change-Id: I8c773b5ea6b87e8f7dfd8db2556626f7b2500473

OpenDev System Configuration

This is the machinery that drives the configuration, testing, continuous integration and deployment of services provided by the OpenDev project.

Services are driven by Ansible playbooks and associated roles stored here. If you are interested in the configuration of a particular service, starting at playbooks/service-<name>.yaml will show you how it is configured.

Most services are deployed via containers; many of them are built or customised in this repository; see docker/.

A small number of legacy services are still configured with Puppet. Although the act of running puppet on these hosts is managed by Ansible, the actual core of their orchestration lives in manifests and modules.

Testing

OpenDev infrastructure runs a complete testing and continuous-integration environment, powered by Zuul.

Any changes to playbooks, roles or containers will trigger jobs to thoroughly test those changes.

Tests run the orchestration for the modified services on test nodes assigned to the job. After the testing deployment is configured (validating that the basic environment at least starts running), specific tests in the testinfra directory validate functionality.

Continuous Deployment

Once changes are reviewed and committed, they will be applied automatically to the production hosts. This is done by Zuul jobs running in the deploy pipeline. At any one time, you may see these jobs running live on the status page or you could check historical runs on the pipeline results (note there is also an opendev-prod-hourly pipeline, which ensures things like upstream package updates or certificate renewals are incorporated in a timely fashion).

Contributing

Contributions are welcome!

You do not need any special permissions to make contributions, even those that will affect production services. Your changes will be automatically tested, reviewed by humans and, once accepted, deployed automatically.

Bug fixes or modifications to existing code are great places to start, and you will see the results of your changes in CI testing.

You can develop all the playbooks, roles, containers and testing required for a new service just by uploading a change. Using a similar service as a template is generally a good place to start. If deploying to production will require new compute resources (servers, volumes, etc.) these will have to be deployed by an OpenDev administrator before your code is committed. Thus if you know you will need new resources, it is best to coordinate this before review.

The #opendev IRC channel is the main place for interactive discussion. Feel free to ask any questions and someone will try to help ASAP. The OpenDev meeting is a coordinated time to synchronize on infrastructure issues. Issues should be added to the agenda for discussion; even if you cannot attend, you can raise your issue and check back on the logs later. There is also the service-discuss mailing list where you are welcome to send queries or questions.

Documentation

The latest documentation is available at https://docs.opendev.org/opendev/system-config/latest/

That documentation is generated from this repository. You can generate it yourself with tox -e docs.
