
OpenDev System Configuration
This is the machinery that drives the configuration, testing, continuous integration and deployment of services provided by the OpenDev project.
Services are driven by Ansible playbooks and associated roles stored
here. If you are interested in the configuration of a particular
service, starting at playbooks/service-<name>.yaml
will show you how it is configured.
Most services are deployed via containers; many of them are built or customised in this repository; see docker/.
A small number of legacy services are still configured with Puppet. Although the act of running puppet on these hosts is managed by Ansible, the actual core of their orchestration lives in manifests and modules.
Testing
OpenDev infrastructure runs a complete testing and continuous-integration environment, powered by Zuul.
Any changes to playbooks, roles or containers will trigger jobs to thoroughly test those changes.
Tests run the orchestration for the modified services on test nodes
assigned to the job. After the testing deployment is configured
(validating the basic environment at least starts running), specific
tests are configured in the testinfra
directory to validate
functionality.
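
As a rough illustration only (this is not an actual test from this repository, and the container name is hypothetical), a testinfra check might look something like this:

    # Hypothetical sketch of a testinfra check; the container name is
    # illustrative and not taken from this repository.
    def test_gerrit_container_is_running(host):
        # testinfra's "host" fixture runs commands on the test node
        result = host.run("docker ps --format '{{.Names}}'")
        assert result.rc == 0
        assert "gerrit" in result.stdout

Checks like this run against the test nodes Zuul has just deployed with the same playbooks and roles a change modifies, so passing tests give reasonable confidence before anything reaches production.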
Continuous Deployment
Once changes are reviewed and committed, they will be applied
automatically to the production hosts. This is done by Zuul jobs running
in the deploy
pipeline. At any one time, you may see these jobs running live on the status page, or check historical runs on the pipeline results (note there is also an opendev-prod-hourly
pipeline, which ensures things like upstream package updates or
certificate renewals are incorporated in a timely fashion).
Contributing
Contributions are welcome!
You do not need any special permissions to make contributions, even those that will affect production services. Your changes will be automatically tested, reviewed by humans and, once accepted, deployed automatically.
Bug fixes or modifications to existing code are great places to start, and you will see the results of your changes in CI testing.
You can develop all the playbooks, roles, containers and testing required for a new service just by uploading a change. Using a similar service as a template is generally a good place to start. If deploying to production will require new compute resources (servers, volumes, etc.) these will have to be deployed by an OpenDev administrator before your code is committed. Thus if you know you will need new resources, it is best to coordinate this before review.
The #opendev IRC channel on OFTC is the main place for interactive discussion. Feel free to ask any questions and someone will try to help ASAP. The OpenDev meeting is a co-ordinated time to synchronize on infrastructure issues. Issues should be added to the agenda for discussion; even if you cannot attend, you can raise your issue and check back on the logs later. There is also the service-discuss mailing list where you are welcome to send queries or questions.
Documentation
The latest documentation is available at https://docs.opendev.org/opendev/system-config/latest/
That documentation is generated from this repository. You can generate it yourself with tox -e docs.