This commit adds the ability to install cinder without a repo server.
This pattern is lifted from the os_keystone role and allows us to
further develop functional testing for this role.
Change-Id: I85ba753a946b22ee3e9b9403977501a1804f9d86
Partial-Bug: #1553967
This fix configures the auth_url parameter to use keystone_service_adminurl
instead of the existing keystone_service_adminuri parameter, which leads
to an incomplete URL lacking the API version path (e.g. /v3/tokens).
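As a rough sketch of the difference between the two variables (the values
here are illustrative, not taken from the repo):

    # keystone_service_adminuri carries no API version; keystone_service_adminurl
    # appends it, so auth_url receives a usable, versioned endpoint.
    keystone_service_adminuri: "https://172.29.236.10:35357"
    keystone_service_adminurl: "{{ keystone_service_adminuri }}/v3"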
Change-Id: I46f2ab7cbdb579dda5d019c29950af7e8c974bea
Related-Bug: #1552394
This change makes it so that all services expect SSL termination
at the load balancer by default. This is more indicative of how a real
world deployment will be set up and is being added so that we can test
a more production-like deployment by default.
The AIO will now terminate SSL in HAProxy using a self-signed cert.
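A minimal sketch of the kind of deployer variables involved (the variable
names here are assumptions, not necessarily the exact ones used):

    # user_variables.yml sketch: advertise https endpoints and let HAProxy
    # terminate SSL with a self-signed certificate; backends stay on http.
    haproxy_ssl: true
    openstack_service_publicuri_proto: https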
Change-Id: I6273ffa453b4e5eb8a33767974d390a126296c47
Re-Implementation-Of: https://review.openstack.org/#/c/277199/9
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The sudoers file was being created in the pre-install tasks,
which caused an incorrect configuration variable to be dropped
when venv installation is not enabled. To correct this issue the
sudoers template is now dropped in the post-install task file
after the bin_path fact has been set.
This change also removes the directory creation task for heat, keystone,
glance, and swift because no sudoers files are needed for these services.
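A sketch of the reordered task (task wording, file names, and variables are
assumptions):

    # post_install.yml sketch: template the sudoers file only after the
    # bin_path fact is known, so the correct path is rendered into it.
    - name: Drop sudoers file
      template:
        src: sudoers.j2
        dest: /etc/sudoers.d/swift_sudoers
        owner: root
        group: root
        mode: "0440"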
Re-Implementation-Of: https://review.openstack.org/#/c/277674/1
Change-Id: I609c9c12579dc1897787d19a1f58fe3e919b5e35
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This PR sets the ownership of the swift log directories to the syslog
user instead of the swift user. Since swift logs via syslog, no logs
were being written to these directories before this change.
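Roughly, the change amounts to something like the following (task shape and
group are assumptions; the ownership change is the point):

    - name: Create swift log directory
      file:
        path: /var/log/swift
        state: directory
        owner: syslog   # rsyslog writes the files, so it must own the directory
        group: adm
        mode: "0755"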
Change-Id: I44768d4cd04108a7163169dfec2f0de774a2cf83
The memcache configuration was only set up in the proxy-server.conf
within Swift, and was not set for the object and container reconcilers,
which both use memcache.
This patch moves the memcache settings into a separate memcache.conf
file which is then configured on all swift hosts, removing the specific
conf from the proxy-server.conf file.
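A sketch of the shared template task (file names and ownership are
assumptions):

    # Drop a single memcache.conf on every swift host so the proxy and the
    # reconcilers all read the same memcache settings.
    - name: Drop memcache.conf
      template:
        src: memcache.conf.j2
        dest: /etc/swift/memcache.conf
        owner: swift
        group: swift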
Change-Id: I047b2d1178de43c694c30280f6ed9fe8511341fd
Closes-Bug: #1542121
Work around the upstream Ansible apt module bug
documented here:
https://github.com/ansible/ansible-modules-core/pull/1517
For the next version of Ansible we move to, we should
check whether the apt bug has been fixed. Once it is fixed, we can
abandon this change and use the standard apt module
with correct cache handling.
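The workaround is conceptually something like the following sketch (not the
exact task from this change):

    # Refresh the apt cache by calling apt-get directly instead of relying on
    # the apt module's broken update_cache/cache_valid_time handling.
    - name: Update apt package cache
      command: apt-get update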
Change-Id: I2aaf00da175f31d0157bbc4ae30a4e176b055078
We currently have two issues with venvs:
- if you update your venv on the repo server, it is not possible for
that updated venv to land on the service's container as the get_url
task always skips if the file exists (even if the file is different)
- if you have an updated venv on the repo server and forcefully delete
the cached venv tarball on the service's container, the new tarball
will get unarchived on top of the existing venv
This commit does the following:
- gets the checksum of the /var/cache tarball and downloads checksum
file from repo server
- updates "Attempt venv download" to only download the venv if the
cache doesn't exist or if the local and remote checksums differ
- adds a "force: true" to "Attempt venv download" task so that the venv
tarball will get re-downloaded when the when condition is true (this
is necessary otherwise the download will get skipped since the
destination already exists)
- adds a new task "Remove existing venv" so we can first remove the
venv before we unarchive the potentially new venv from the repo
server
- updates "Create swift venv dir" and "Unarchive pre-built venv"
tasks to only proceed if "swift_get_venv | changed", which
prevents these tasks from running when they the venv tarball hasn't
changed
- adds multiple service restarts to
os_swift/tasks/swift_install.yml so that swift will restart
correctly should the venv/packages update without any associated
config changes
NOTE: The reason why we compare local and remote checksums is to avoid
unnecessarily downloading the venv when the checksums are in fact
the same. On small deploys this is more or less a non-issue, but
if a deploy with thousands of compute nodes re-runs playbooks we
want to avoid venv downloads when they are unnecessary.
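A condensed sketch of the checksum-gated download described above (paths,
variable names, and checksum handling are illustrative assumptions):

    - name: Get local venv checksum
      stat:
        path: "/var/cache/{{ venv_tarball }}"
      register: local_venv_stat

    - name: Get remote venv checksum
      uri:
        url: "{{ venv_download_url }}.checksum"
        return_content: true
      register: remote_venv_checksum

    - name: Attempt venv download
      get_url:
        url: "{{ venv_download_url }}"
        dest: "/var/cache/{{ venv_tarball }}"
        force: true   # re-download even though the destination exists
      register: swift_get_venv
      when: not local_venv_stat.stat.exists or
            local_venv_stat.stat.checksum | default('') !=
            remote_venv_checksum.content | trim

    - name: Remove existing venv
      file:
        path: "{{ swift_venv_dir }}"
        state: absent
      when: swift_get_venv | changed

    - name: Create swift venv dir
      file:
        path: "{{ swift_venv_dir }}"
        state: directory
      when: swift_get_venv | changed

    - name: Unarchive pre-built venv
      unarchive:
        src: "/var/cache/{{ venv_tarball }}"
        dest: "{{ swift_venv_dir }}"
        copy: false
      when: swift_get_venv | changed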
Change-Id: I4b028f6e4ca59eceac010d2bbc10a8d79f6f3937
The rsync service is currently restarted using two handlers, one to stop
the service and a second to start it. There is not a sufficient delay
between the two tasks, so the rsync pid has not been removed before
the attempt is made to start the service.
This commit replaces the two handlers with a single one that will do the
restart in one go.
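A sketch of the combined handler (the handler name used by the role is an
assumption):

    - name: Ensure rsync service restarted
      service:
        name: rsync
        state: restarted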
Change-Id: I8ed4630da1add7205552b6ec731a143dbe45112b
Closes-bug: 1538649
This adds the optional configuration options:
- statsd_host
- statsd_port
- statsd_metric_prefix (defaults to inventory_hostname)
- statsd_default_sample_rate
- statsd_sample_rate_factor
These can be defined under swift globally or at the server level.
The configuration will only be added if statsd_host is defined.
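For example, a deployer could set something like the following (values are
illustrative only):

    swift:
      statsd_host: 172.29.236.20
      statsd_port: 8125
      statsd_metric_prefix: "{{ inventory_hostname }}"
      statsd_default_sample_rate: 1.0
      statsd_sample_rate_factor: 1.0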
Change-Id: I793b189e0a1f5ca4fc1fe17b1d89f2a83af8c796
It's possible to remove keystoneauth from the middleware pipeline,
but the play will then fail because the keystone swift user/role/permission
tasks cannot succeed.
This patch skips those tasks when not using keystoneauth in the
middleware pipeline.
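A sketch of the guard (the condition, variable names, and module arguments
shown here are assumptions):

    - name: Ensure swift service user exists
      keystone:
        command: ensure_user
        user_name: "{{ swift_service_user_name }}"
        tenant_name: "{{ swift_service_project_name }}"
        password: "{{ swift_service_password }}"
      # Skip when keystoneauth is not part of the proxy middleware pipeline.
      when: "'keystoneauth' in swift_middleware_list"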
Change-Id: I87143b5c220dc312e2cb5d7e3dd3e9e01609ff91
Closes-Bug: #1523581
When using an LDAP backend the playbooks fail on the "ensuring.*" tasks,
which are keystone client actions. The failure is related to how the
LDAP backend works, and is triggered when the service users live in
LDAP rather than SQL. To resolve the issue a boolean conditional was
added to the various OS_.* roles to skip those tasks when the service
users have already been added into LDAP.
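A sketch of the toggle (the variable name is an assumption and will differ
per role):

    # defaults sketch: the service users already exist in LDAP, so the
    # keystone "ensure" tasks should be skipped.
    swift_service_in_ldap: false
    # the affected tasks are then guarded with:
    #   when: not swift_service_in_ldap | bool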
Change-Id: I64a8d1e926c54b821f8bfb561a8b6f755bc1ed93
Closes-Bug: #1518351
Closes-Bug: #1519174
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The container-reconciler and object-expirer were missing from the os-swift
role. The reconciler makes sure incorrectly placed objects live in the
correct storage policy. The expirer is the service that deletes expired objects.
This change also adds the ability to optionally specify a reclaim_age in the swift
section of the configuration, which is now set in all the required locations,
still with the default of 604800 seconds (7 days).
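For example (the placement of the option shown here is an assumption):

    swift:
      reclaim_age: 604800   # seconds (7 days); this matches the default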
Change-Id: Ic56a714c3fb3c84b9bb5ed8e2ae3c86dad474161
Closes-Bug: #1516877
The change builds venvs in a single repo container and then
ships them to all targets. The built venvs will live on the
repo servers and will allow for faster deployments,
upgrades, and more consistent deployments for the life cycle
of the deployment.
This will create a versioned tarball that will allow for
greater visibility into the build process as well as giving
deployers/developers the ability to compare a release in
place.
Change-Id: Ieef0b89ebc009d1453c99e19e53a36eb2d70edae
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The pattern for nova and neutron on hosts is to have a symlink from
/var/log/{service} to /openstack/log/_hostname_-{service}/ and then
to have all the service logs configured to log to /var/log/{service}
as that is a logical place for an operator to look for them.
Swift currently does not follow that pattern.
Currently the swift {account,container,object} logs are placed in
/openstack/log/{hostname}/, whereas the proxy logs are placed in
/var/log/swift/. On hosts the /var/log/swift symlink to
/openstack/log/{hostname}-swift/ is created, but not used.
This creates confusion for operators trying to find the logs in the
logical (and upstream) pattern in the directory /var/log/{service}.
This patch puts the swift logs where they belong.
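In effect the layout becomes (a sketch; the symlink task shape is an
assumption):

    # /openstack/log/<hostname>-swift/  holds the logs on disk
    # /var/log/swift                    is a symlink pointing at it
    - name: Create swift log symlink
      file:
        src: "/openstack/log/{{ inventory_hostname }}-swift"
        dest: /var/log/swift
        state: link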
Upgrade Notes:
- This changes the location of the log storage on swift hosts from
/openstack/log/{hostname} to /openstack/log/{hostname}-swift
- Any log processing or monitoring tooling that consumes swift logs
will need to be adjusted to consume them from the new location,
or simply to consume them from /var/log/swift
DocImpact
UpgradeImpact
Closes-Bug: #1417536
Change-Id: I8d6ec98d310ce8d4e4a7a6cc5fb2d349d17757cf
This commit conditionally allows the os_swift role to
install, build, and deploy within a venv. This is the new
default behavior of the role, however the functionality
can be disabled.
In this PR, like all of the other venv-related PRs, the
`is_metal` flag was removed from the role; however, unlike
some of the other PRs, this removal required moving some
of the `is_metal` logic out of the role and into the
play. This was done for consistency as well as to make
the role more standalone. The only thing that the role
should care about, in terms of installation, is whether
or not to install in a venv.
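The toggle looks roughly like this (the variable name is an assumption):

    # defaults sketch: venv installation is the default but can be disabled.
    swift_venv_enabled: true
    # install tasks then branch on:
    #   when: swift_venv_enabled | bool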
Change-Id: I6f5b883a853611659567bd12e8bcf572189854b7
Implements: blueprint enable-venv-support-within-the-roles
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This patch adds the variable 'pip_install_options' which is passed to the pip
install module as extra arguments in order to allow the use of options like
'--force-reinstall' when executing playbooks.
eg: openstack-ansible -e pip_install_options="--force-reinstall" \
        setup-openstack.yml
This is required due to constant upstream changes in dependencies which
result in python wheel version upgrades and downgrades between tagged
versions of openstack-ansible.
The intention is that this can be used whenever a deployer switches between
tags for both upgrades and downgrades.
DocImpact
Closes-Bug: #1489251
Closes-Bug: #1499451
Related-Bug: #1501114
Change-Id: I996185e009a4c4af4f23798619bdbd0d490360c9
The Swift configuration item [filter:recon]/recon_lock_path is set to
'/var/lock/swift' by openstack-ansible in the appropriate configuration
files. The playbooks also create the directory if it does not exist. If
the host is rebooted the directory '/var/lock/swift' is missing and must
be recreated.
/var/lock (/run/lock) is a tmpfs and so the directory /var/lock/swift
will not persist between reboots.
Swift attempts to create a directory inside the directory specified by
recon_lock_path; however, it does not recursively create any missing
parent directories.
This commit changes the value of [filter:recon]/recon_lock_path to that
set by Swift, '/var/lock'. This allows it to create the directory
'/var/lock/swift-recon-object-cron'. The creation of '/var/lock/swift'
is also removed from the playbooks.
Change-Id: I714367b02c7cf961e9e0bdee4e41f9e4e105b088
Closes-bug: #1496117
The change modifies the swift template tasks so that they now use
the config_template action plugin. This change makes it so that
config files can be dynamically updated, by a deployer, at run time,
without the need to modify the in-tree templates or defaults.
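For example, a deployer can now inject or override arbitrary options without
touching the templates (the override variable name is an assumption):

    # user_variables.yml sketch
    swift_proxy_server_conf_overrides:
      DEFAULT:
        workers: 8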
Partially implements: blueprint tunable-openstack-configuration
Change-Id: Id992937f35afa0549f9f0d0fbcf0be5e6978df57
Presently all services use the single root virtual host within RabbitMQ.
While this is "OK" for small to mid-sized deployments, it would be
better to divide services into logical resource groups within
RabbitMQ, which brings with it additional security. This change set
provides OSAD better compartmentalization of the consumer services
that use RabbitMQ.
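Conceptually, each consumer service gets its own vhost and user instead of
sharing the root "/" vhost, along the lines of (names illustrative):

    nova_rabbitmq_vhost: /nova
    nova_rabbitmq_userid: nova
    neutron_rabbitmq_vhost: /neutron
    neutron_rabbitmq_userid: neutron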
UpgradeImpact
DocImpact
Change-Id: I6f9d07522faf133f3c1c84a5b9046a55d5789e52
Implements: blueprint compartmentalize-rabbitmq
When read_affinity is used and sorting_method is not, warnings
are generated in the swift proxy log indicating that the
read_affinity is not being respected. When read_affinity is specified
this change sets the sorting_method to affinity automatically, and
otherwise uses a configured value which defaults to shuffle.
Note that write_affinity does not respect sorting_method and follows
a different code path and does not issue warnings in logs when used
without sorting_method.
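The selection logic amounts to something like (variable names assumed):

    # sorting_method = affinity      when read_affinity is defined
    # sorting_method = <configured>  otherwise (default: shuffle)
    sorting_method: "{{ 'affinity' if read_affinity is defined
                        else swift_sorting_method | default('shuffle') }}"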
Closes-bug: 1480581
Co-Authored-By: Andy McCrae <andy.mccrae@gmail.com>
Change-Id: I3cab89c95f288b4a59f4dd3c7360daca7a4f47bf
Existing rsync stop/start handlers were relying on the pattern
parameter of the Ansible service module, which relies on the output
of ps to determine whether the service is running. This is unnecessary
because the rsync service script is well behaved and responds
appropriately to start, stop, and restart commands. Removing the
pattern parameter ensures that the response from the service command is
used instead.
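A sketch of the handler without the pattern parameter:

    # Trust the init script's own status reporting instead of matching
    # process names with "pattern".
    - name: Ensure rsync service running
      service:
        name: rsync
        state: started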
Root cause of the bug is that when Keystone was changed to share
fernet secrets via rsync over ssh tunnel, an rsync process was
introduced in AIOs, Swift stand-alones, and other deployment
configurations that contain Keystone containers on the storage hosts.
The resulting rsync processes within Keystone containers pollute the
results of ps commands on the host, fooling Ansible into thinking
that an rsync service is running on the standard port when it is not.
Secondly, the handler responsible for stopping rsync was not causing
the notice for "Ensure rsync service running" to trigger cleanly in
my testing, so the tasks were changed to trigger both notices in an
ordered list.
Change-Id: I5ed47f7c1974d6b22eeb2ff5816ee6fa30ee9309
Closes-Bug: 1481121
swift_service_admin_username and swift_service_admin_tenant_name
are not used anywhere. The correct (and used) variables for the
purpose are swift_service_user_name and swift_service_project_name
Change-Id: I26cbbb77ddbf46fa64d8d34e5625590f3f66c515
Closes-Bug: #1460497
Add the swift-remote host group and environment file.
Add an os_swift_sync role which will sync the swift ring and ssh keys
for swift hosts (remote and not-remote). The role does the following:
* Moves the key and ring tasks out of os_swift role to os_swift_sync.
* Adds the use of the "-r" flag that was added to
swift_rings.py and swift_rings_check.py.
* Adds a ring.builder vs contents file consistency check.
* Adjusts the rsync process to use the built-in synchronize module
* Ensure services have started post ring/ssh key sync.
Adds environment file and sample configuration file for swift-remote
hosts (conf.d).
Move appropriate default vars to the os_swift_sync role, and remove them
from the os_swift role.
Rename the "os-swift-install.yml" playbook to "os-swift-setup.yml" as
this handles only the setup, and add a playbook to for both
os-swift-sync.yml and an overarching playbook (os-swift-install.yml)
that will call both the os-swift-sync.yml and os-swift-setup.yml
playbooks. This means the funcitonality of "os-swift-install.yml"
remains unchanged.
Adjust the run-playbooks.sh so that it calls the new overarching swift
playbook.
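A sample of the new conf.d entry might look like the following (the group
and host names are illustrative assumptions):

    # /etc/openstack_deploy/conf.d/swift-remote.yml sketch
    swift-remote_hosts:
      swift-remote1:
        ip: 172.29.236.110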
Change-Id: Ie2d8041b4bc46f092a96882fe3ca430be92195ed
Partially-Implements: blueprint multi-region-swift