Team and repository tags
Image building rules for OpenStack images
These elements are used to build disk images for deploying OpenStack via Heat. They are built as part of the TripleO umbrella project.
Check out this source tree and the diskimage-builder tree, export an ELEMENTS_PATH that includes the elements from this tree, and build any disk images you need:
    virtualenv .
    source bin/activate
    pip install dib-utils pyyaml
    git clone https://opendev.org/openstack/diskimage-builder.git
    git clone https://opendev.org/openstack/tripleo-image-elements.git
    export ELEMENTS_PATH=tripleo-image-elements/elements
    diskimage-builder/bin/disk-image-create -u base vm bootstrap local-config stackuser heat-cfntools -a i386 -o bootstrap
Common element combinations
Always include heat-cfntools in images that you intend to boot via Heat: if it is missing, user ssh keys are not reliably pulled down from the metadata server due to interactions with cloud-init.
OpenStack images are intended to be deployed and maintained using Nova + Heat.
As such they should strive to be stateless, maintained entirely via automation.
In a running OpenStack there are several categories of config.
- per user - e.g. ssh key registration with nova: we repeat this sort of config every time we add a user.
- local node - e.g. nova.conf or ovs-vsctl add-br br-ex: settings that apply individually to machines
- inter-node - e.g. credentials on rabbitmq for a given nova compute node
- application state - e.g. 'neutron net-create ...' : settings that apply to the whole cluster not on a per-user / per-tenant basis
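As an illustrative sketch only (era-appropriate client commands; assumes an authenticated OpenStack client environment and hypothetical names such as the compute host and network), one example command per category might look like:

```shell
# Per-user: register an ssh keypair with Nova (repeated for every new user).
nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key

# Local node: create the external bridge on one individual machine.
ovs-vsctl add-br br-ex

# Inter-node: create rabbitmq credentials for a given compute node
# (run on the rabbit server, typically triggered by Heat).
rabbitmqctl add_user nova-compute-01 "$RABBIT_PASS"

# Application state: a cluster-wide setting applied through the API.
neutron net-create ext-net --router:external=True
```

These commands require live services and credentials, so they are shown as a fragment rather than a runnable script.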
We have four places we can do configuration in TripleO:
- image build time
- in-instance heat-driven (ORC scripts)
- from outside via APIs
- orchestrated by Heat
Our current heuristic for deciding where to do any particular configuration step:
- per user config should be done from the outside via APIs, even for users like 'admin' that we know we'll have. Note that service accounts are different - they are a form of inter-node configuration.
- local node configuration should be done via ORC driven by Heat and/or configuration management system metadata.
- inter-node configuration should be done by working through Heat. For instance, creating a rabbit account for a nova compute node is something that Heat should arrange, though the act of creating is probably done by a script on the rabbit server - triggered by Heat - and applying the config is done on the compute node by the local node script - again triggered by Heat.
- application state changes should be done from outside via APIs.
Copyright 2012,2013 Hewlett-Packard Development Company, L.P. Copyright (c) 2012 NTT DOCOMO, INC.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
- Release notes for the project can be found at: