Change step to start nova placement and make compute wait for it

There is a deployment race where nova-placement fails to start if
the nova api db migrations have not finished before it is started.
We start nova-placement early to make sure it is up before the
nova-compute services get started. Since in the HA scenario there is
no synchronization between the nodes on the currently worked deployment
step, we might hit the situation that the placement service gets started
on controller 1/2 while the nova api db sync is not yet finished on
controller 0.

We have two possibilities:
1) start placement later and verify that the nova-computes recover
correctly
2) verify that the db migration on the nova_api db finished before
starting nova-placement on the controllers

Option 2) was addressed first, but this showed two issues:

a) the docker/podman container failed to start with a file-not-found
error, therefore the change was reverted.

b) when the script was running on different controllers at the same
time, the way nova's db_version() is implemented caused issues, which
is being worked on separately.

This patch addresses 1): it moves the placement service start to step_4
and adds an additional task on the computes to wait until the placement
service is up.

Closes-Bug: #1784155

Change-Id: Ifb5ffc4b25f5ca266560bc0ac96c73071ebd1c9f
Martin Schuppert 2018-11-22 15:08:11 +01:00
parent cb86cc0a33
commit cc61ff93ec
4 changed files with 127 additions and 2 deletions


@@ -40,3 +40,6 @@ outputs:
mode: "0700"
content: { get_file: ../../docker_config_scripts/ }
mode: "0700"
content: { get_file: ../../docker_config_scripts/ }


@@ -197,10 +197,22 @@ outputs:
detach: false
- /var/lib/nova:/var/lib/nova:shared,z
- /var/lib/docker-config-scripts/:/docker-config-scripts/
- /var/lib/docker-config-scripts/:/docker-config-scripts/:z
command: "/docker-config-scripts/ /docker-config-scripts/"
start_order: 2
image: *nova_compute_image
user: root
net: host
privileged: false
detach: false
- /var/lib/docker-config-scripts/:/docker-config-scripts/:z
- /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro
command: "/docker-config-scripts/ /docker-config-scripts/"
start_order: 3
image: *nova_compute_image
ulimit: {get_param: DockerNovaComputeUlimit}
ipc: host


@@ -118,7 +118,7 @@ outputs:
get_attr: [NovaPlacementLogging, docker_config, step_2]
# start this early so it is up before computes start reporting
start_order: 1
image: {get_param: DockerNovaPlacementImage}


@@ -0,0 +1,110 @@
#!/usr/bin/env python
# Copyright 2018 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Script to check if the nova API DB migrations finished after X attempts.
# Default max is 60 iterations with a 10s (default) timeout in between.

from __future__ import print_function

import logging
import os
import re
import sys
import time

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client
import requests
from six.moves.configparser import SafeConfigParser

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
LOG = logging.getLogger('nova_wait_for_placement_service')

iterations = 60
timeout = 10
nova_cfg = '/etc/nova/nova.conf'

if __name__ == '__main__':
    if os.path.isfile(nova_cfg):
        config = SafeConfigParser()
        config.read(nova_cfg)
    else:
        LOG.error('Nova configuration file %s does not exist', nova_cfg)
        sys.exit(1)

    # get keystone client with details from [placement] section
    auth = v3.Password(
        user_domain_name=config.get('placement', 'user_domain_name'),
        username=config.get('placement', 'username'),
        password=config.get('placement', 'password'),
        project_name=config.get('placement', 'project_name'),
        project_domain_name=config.get('placement', 'user_domain_name'),
        auth_url=config.get('placement', 'auth_url')+'/v3')

    sess = session.Session(auth=auth)
    keystone = client.Client(session=sess)

    iterations_endpoint = iterations
    placement_endpoint_url = None
    while iterations_endpoint > 1:
        iterations_endpoint -= 1
        try:
            # get placement service id
            placement_service_id = keystone.services.list(
                name='placement')[0].id
            # get placement endpoint (valid_interfaces)
            placement_endpoint_url = keystone.endpoints.list(
                service=placement_service_id,
                interface=config.get('placement', 'valid_interfaces'))[0].url
            if placement_endpoint_url:
                break
            LOG.error('Failed to get placement service endpoint!')
        except Exception:
            LOG.exception('Retry - Failed to get placement service endpoint:')
        time.sleep(timeout)

    if not placement_endpoint_url:
        LOG.error('Failed to get placement service endpoint!')
        sys.exit(1)

    # we should have CURRENT in the request response from placement:
    # {"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version":
    # "1.29", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
    response_reg = re.compile('.*CURRENT.*')

    while iterations > 1:
        iterations -= 1
        try:
            r = requests.get(placement_endpoint_url+'/', verify=False)
            if r.status_code == 200 and response_reg.match(r.text):
                LOG.info('Placement service up! - %s', r.text)
                sys.exit(0)
            else:
                LOG.info('response - %r', r)
                LOG.info('Placement service not up - %s, %s',
                         r.status_code, r.text)
        except Exception:
            LOG.exception('Error query the placement endpoint:')
        time.sleep(timeout)

    sys.exit(1)

# vim: set et ts=4 sw=4 :