This PS configures the swift-rsyncd process to use a non-default port
from the range above 1024.
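For illustration only, a sketch of running the daemon on an unprivileged port (the value 10873 below is hypothetical, not necessarily the port chosen by this change):

    # start the rsync daemon on a port above 1024 instead of the default 873
    rsync --daemon --no-detach --port=10873 --config=/etc/rsyncd.conf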
Change-Id: I7c37c548a5185a2ffac789383fe012619e401131
Closes-Bug: #1573137
Due to the lack of the /var/cache/swift directory, swift services
are unable to access the /var/cache/swift/container.recon file.
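As an illustrative sketch of the kind of fix this implies (the ownership shown is an assumption, not necessarily the exact commands used):

    # ensure the recon cache directory exists and is writable by the swift user
    mkdir -p /var/cache/swift
    chown swift:swift /var/cache/swift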
Closes-Bug: #1569182
Change-Id: Ifc4763d40256e43f51728e1dd4b3986c4f0ba0fc
Updates to ensure commands run in the swift containers
are done as the 'swift' user rather than root.
Change-Id: I8c5a12e24b9940200241dbe09d6bde8f1cc1cf05
Closes-Bug: #1553895
Co-Authored-By: Serguei Bezverkhi <sbezverk@cisco.com>
This prevents a failure when the directory already exists.
This commit fixes a failure in centos-binary because of a change in the
RDO packaging, where the required directory is now created for us.
Change-Id: Idd3e15802c3e3fd363e1295111ec12948d566781
Closes-Bug: #1543417
The current Swift playbook is based on the assumption of an AIO setup.
However, with the default multinode setup (ansible/inventory/multinode)
it follows the P + ACO deployment model, in which the proxy-server runs
on controller nodes while the ACO (account/container/object) services
run on storage nodes.
That breaks because the swift proxy-server no longer has access (nor
should it) to the /srv/node path. This change ensures the disk mounting
part only happens on storage nodes. It also moves the chown from the
proxy-server Dockerfile to rsyncd because, regardless of the PACO,
P+ACO or P+A+C+O model, rsyncd always runs on each storage node.
Change-Id: I3aa20454902caa9c84d3901bb91e4e4c93ac5f34
Partially-Implements: blueprint swift-physical-disk
Closes-Bug: #1537544
In Python 3.x there is no built-in named xrange(); it has been
replaced by range(), which is equivalent to xrange() in Python 2.x, so
we must fix this.
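For example, a minimal compatibility sketch (not the exact code changed here):

    # Python 3 dropped xrange(); range() is the lazy equivalent there
    try:
        xrange          # Python 2: the lazy built-in exists
    except NameError:
        xrange = range  # Python 3: use range() as a drop-in replacement
    for i in xrange(10):
        print(i)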
Closes-Bug: #1268439
Change-Id: I66f0a7f248ad77bf06e96ea7cfcb7ef5f050b13a
There were some inconsistencies with pip install instructions
throughout Kolla. We fix those here.
Additionally, we fix the virtualenv to properly use the site-packages
on the host if a library is not available in the venv.
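As a hedged sketch of the intended behaviour (the venv path below is illustrative, not the one used by Kolla):

    # create a venv that falls back to the host's site-packages when a
    # library is not installed in the venv itself
    virtualenv --system-site-packages /opt/kolla-venv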
Change-Id: Ib84d48e8826bb96060338b3fa0782620c98794a8
Related-Bug: #1524684
Closes-Bug: #1529434
Use virtualenv for installation of OpenStack projects and
dependencies to avoid conflicts with Python libraries installed
by non-OpenStack binary packages.
Change-Id: I21ecd673b2e93335b1d3dd4e279e940c9d694c3c
Implements: blueprint virtualenv
This brings the Kolla images in line with the FHS and should make
finding the locations of things more consistent and reliable, in
keeping with the Linux world at large.
Change-Id: Iece5b4da4bace0fb8b1f41a65ab2c852ec73e6f8
Closes-Bug: #1485742
The majority of the start.sh code is identical. This removes that
duplicate code while still maintaining the ability to call code in a
specific container.
The start.sh is moved into /usr/local/bin/kolla_start in the
container. The extend_start.sh script is called by the kolla_start
script at the location /usr/local/bin/kolla_extend_start. It always
exists because we create a noop kolla_extend_start in the base
directory. We override it with extend_start.sh in a specific image
should we need to.
Of note, the neutron-agents container is exempt from this new
structure because it is a fat container.
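Conceptually, the shared entry point looks roughly like this (a simplified sketch, not the verbatim script):

    #!/bin/bash
    # /usr/local/bin/kolla_start (simplified sketch)
    set -o errexit
    # the base image ships a no-op kolla_extend_start; an image that needs
    # extra setup overwrites it with its own extend_start.sh at build time
    source /usr/local/bin/kolla_extend_start
    # ...the image-specific service command is launched after this point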
Additionally, we fix the inconsistent permissions throughout. Repo
files are set to 644 and the scripts to 755 via a Docker RUN command
to ensure someone's local permission change won't break upstream
containers.
Change-Id: I7da8d19965463ad30ee522a71183e3f092e0d6ad
Closes-Bug: #1501295
This prepares for the RHEL OSP implementation by making the build
tool convert all binary-* into an install_type of binary and * into
an install_metatype variable substitution inside the Dockerfiles.
Further, binary-* is substituted as install_name to enable proper
building only.
Change-Id: Ib681b29176eb79a3cab12ec824313fdecb6e7a5f
Partially-Implements: blueprint rhel-based-image-support
Ubuntu binary is not supported and may never be. Installing from
cloud-archive packaging is only possible for the current stable
distros, and Ubuntu does not have a Delorean-type repo. We place a
fail message in the base image to catch this and remove the messages
throughout the project.
An additional fail message is placed to catch all other unsupported
cases.
Change-Id: Id2953f503ebd42226f6a08e75979ae56511c40f7
Implements: blueprint install-from-ubuntu
Add 'rhel' to the list of RPM-based distros. Also sort the distro
list for RPM packages on the affected lines.
Change-Id: Ied4cb3e9763d6c6359f314d16185383ac3e006ed
Partially-Implements: blueprint rhel-based-image-support
Currently we cannot import source archives whose names differ from
what a hardcoded line in the Dockerfiles expects. This worked well for
OpenStack services' tarballs, where we expected a SERVICE-* root folder
after extraction, or kanaka-noVNC for the nova-novncproxy Docker image.
The latter fails if one tries to clone or fetch the tarball under
another name. This fix allows any archive (tar, tgz, zip) or repo name
to be imported into the Dockerfile.
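One hedged way to make extraction independent of the top-level folder name, shown here for the tar case only (paths and file names are illustrative):

    # unpack into a fixed location and drop whatever the archive's root folder is called
    mkdir -p /tmp/novnc
    tar xf /tmp/novnc-source.tar.gz -C /tmp/novnc --strip-components=1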
Change-Id: I869a6a19afaf0e93925572746c22b7589b6600c9
Closes-Bug: #1491415
This creates a common openstack-base container and moves the Ubuntu
dependencies into it. This commit yields dramatically smaller sizes
for all non-OpenStack containers. The OpenStack containers remain the
same size.
Change-Id: I2f46420d4b9edcfddda374caddcce906fc708f6c
Partially-Implements: blueprint openstack-common-container
We can, and should, figure out the filename dynamically rather than
hardcode that value in build.ini, since it is not actually a
configurable parameter.
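For instance, a sketch of discovering the tarball at build time instead of reading its name from build.ini (the helper name and directory below are hypothetical):

    import glob
    import os

    def find_source_tarball(source_dir):
        # pick up whatever tarball is present rather than a hardcoded name
        candidates = glob.glob(os.path.join(source_dir, '*.tar.gz'))
        if len(candidates) != 1:
            raise RuntimeError('expected exactly one tarball in %s' % source_dir)
        return os.path.basename(candidates[0])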
Change-Id: I496d6555e9fa356ab09e62063fd707f43ed08121
Closes-Bug: #1490386
Updated build.py to reflect this change.
Deprecate the --template option and make it a no-op.
Change-Id: I7cd98d1ee684a4c64984a49597159868152683b2
Partially-Implements: blueprint remove-docker-dir
This is purely a restructure: nothing changes in behaviour or naming
despite the file layout changing. The symlinks to build had to be
updated, generating lots of "deleted" and "new_file" entries.
The new structure is:
docker/${base_distro}/${type}/${container}
base_distro == centos, ubuntu, fedora, etc
type == source, binary, rdo
type rdo is a symlink to binary for backwards compatibility
Two new flags are added to the build-all script: one to select a
different base distro and one to select binary or source containers.
Several empty folders are added to hold the directory structure for
future containers of these types.
To use a prefix other than centos-rdo- you can set PREFIX in the
.buildconf file in the top-level directory.
Change-Id: Ifc7bac0d827470f506c8b5c004a833da9ce13b90
This makes build-docker-images --release build with the icehouse tag
and causes docker-compose to pull from the icehouse tag.
Partially-implements: blueprint port-kilo
Change-Id: I66b2c39abc55c0f47152dd90e696fc46b9c58f50
By changing the PREFIX variable in .buildconf one is now able to
build Docker images from different bases.
For example, add the following line to your .buildconf file to build
CentOS-based images:
PREFIX=centos-rdo-
The default base image is Fedora. For now only the RH family is
supported.
Additionally, changing the namespace, either with the NAMESPACE
variable in .buildconf or via the --namespace command-line option,
now changes the source namespace as well from the default kollaglue
one.
Implements: blueprint multi-baseos
Co-Authored-By: Steven Dake <stdake@cisco.com>
Change-Id: I3964cd2292789ea883a1f2d2738a5731a4fff49b
This patch replaces the collection of individual "build" scripts with a
single script (tools/build-docker-image), made available as "build"
inside each image directory.
The build-docker-image script will, by default, build images tagged with
the current commit id in order to prevent developers from accidentally
stepping on each other or on release images.
Documentation in docs/image-building.md describes the script in more
detail.
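For example, such a default tag could be derived roughly as follows (the variable names here are hypothetical, not taken from the script):

    # tag images with the current commit id unless a release tag is requested
    TAG=$(git rev-parse --short HEAD)
    docker build -t "${NAMESPACE}/${IMAGE}:${TAG}" .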
Change-Id: I444d5c2256a85223f8750a0904cb4b07f18ab67f