StarlingX open source release updates

Signed-off-by: Dean Troyer <dtroyer@gmail.com>
Dean Troyer 2018-05-30 16:17:02 -07:00
parent ddded39cb9
commit 9b95aa0a35
1076 changed files with 209271 additions and 0 deletions

CONTRIBUTORS.wrs (new file)

@@ -0,0 +1,12 @@
The following contributors from Wind River have developed the seed code in this
repository. We look forward to community collaboration and contributions for
additional features, enhancements, and refactoring.
Contributors:
=============
Bart Wensley <Barton.Wensley@windriver.com>
John Kung <John.Kung@windriver.com>
Don Penney <Don.Penney@windriver.com>
Matt Peters <Matt.Peters@windriver.com>
Tao Liu <Tao.Liu@windriver.com>
David Sullivan <David.Sullivan@windriver.com>

LICENSE (new file)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.rst (new file)

@@ -0,0 +1,5 @@
==========
stx-config
==========
StarlingX Configuration Management

compute-huge/.gitignore (new file, vendored)

@@ -0,0 +1,6 @@
!.distro
.distro/centos7/rpmbuild/RPMS
.distro/centos7/rpmbuild/SRPMS
.distro/centos7/rpmbuild/BUILD
.distro/centos7/rpmbuild/BUILDROOT
.distro/centos7/rpmbuild/SOURCES/compute-huge*tar.gz

compute-huge/PKG-INFO (new file)

@@ -0,0 +1,13 @@
Metadata-Version: 1.1
Name: compute-huge
Version: 1.0
Summary: Initial compute node hugepages and reserved cpus configuration
Home-page:
Author: Wind River
Author-email: info@windriver.com
License: Apache-2.0
Description: Initial compute node hugepages and reserved cpus configuration
Platform: UNKNOWN

@@ -0,0 +1,8 @@
#!/bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
python /usr/bin/topology.pyc

@@ -0,0 +1,4 @@
SRC_DIR="compute-huge"
COPY_LIST_TO_TAR="bin"
COPY_LIST="$SRC_DIR/LICENSE"
TIS_PATCH_VER=10

@@ -0,0 +1,85 @@
Summary: Initial compute node hugepages and reserved cpus configuration
Name: compute-huge
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz
Source1: LICENSE
BuildRequires: systemd-devel
Requires: systemd
Requires: python
Requires: /bin/systemctl
%description
Initial compute node hugepages and reserved cpus configuration
%define local_bindir /usr/bin/
%define local_etc_initd /etc/init.d/
%define local_etc_nova /etc/nova/
%define local_etc_goenabledd /etc/goenabled.d/
%define debug_package %{nil}
%prep
%setup
%build
%{__python} -m compileall topology.py
%install
# compute init scripts
install -d -m 755 %{buildroot}%{local_etc_initd}
install -p -D -m 755 affine-platform.sh %{buildroot}%{local_etc_initd}/affine-platform.sh
install -p -D -m 755 compute-huge.sh %{buildroot}%{local_etc_initd}/compute-huge.sh
# utility scripts
install -p -D -m 755 cpumap_functions.sh %{buildroot}%{local_etc_initd}/cpumap_functions.sh
install -p -D -m 755 task_affinity_functions.sh %{buildroot}%{local_etc_initd}/task_affinity_functions.sh
install -p -D -m 755 log_functions.sh %{buildroot}%{local_etc_initd}/log_functions.sh
install -d -m 755 %{buildroot}%{local_bindir}
install -p -D -m 755 ps-sched.sh %{buildroot}%{local_bindir}/ps-sched.sh
# TODO: Only ship pyc ?
install -p -D -m 755 topology.py %{buildroot}%{local_bindir}/topology.py
install -p -D -m 755 topology.pyc %{buildroot}%{local_bindir}/topology.pyc
install -p -D -m 755 affine-interrupts.sh %{buildroot}%{local_bindir}/affine-interrupts.sh
install -p -D -m 755 set-cpu-wakeup-latency.sh %{buildroot}%{local_bindir}/set-cpu-wakeup-latency.sh
install -p -D -m 755 bin/topology %{buildroot}%{local_bindir}/topology
# compute config data
install -d -m 755 %{buildroot}%{local_etc_nova}
install -p -D -m 755 compute_reserved.conf %{buildroot}%{local_etc_nova}/compute_reserved.conf
install -p -D -m 755 compute_hugepages_total.conf %{buildroot}%{local_etc_nova}/compute_hugepages_total.conf
# goenabled check
install -d -m 755 %{buildroot}%{local_etc_goenabledd}
install -p -D -m 755 compute-huge-goenabled.sh %{buildroot}%{local_etc_goenabledd}/compute-huge-goenabled.sh
# systemd services
install -d -m 755 %{buildroot}%{_unitdir}
install -p -D -m 664 affine-platform.sh.service %{buildroot}%{_unitdir}/affine-platform.sh.service
install -p -D -m 664 compute-huge.sh.service %{buildroot}%{_unitdir}/compute-huge.sh.service
%post
/bin/systemctl enable affine-platform.sh.service >/dev/null 2>&1
/bin/systemctl enable compute-huge.sh.service >/dev/null 2>&1
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
%{local_bindir}/*
%{local_etc_initd}/*
%{local_etc_goenabledd}/*
%config(noreplace) %{local_etc_nova}/compute_reserved.conf
%config(noreplace) %{local_etc_nova}/compute_hugepages_total.conf
%{_unitdir}/compute-huge.sh.service
%{_unitdir}/affine-platform.sh.service

@@ -0,0 +1,202 @@
(Apache License 2.0 text, identical to the LICENSE file above)

@@ -0,0 +1,62 @@
#!/bin/bash
################################################################################
# Copyright (c) 2015-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
#
# Purpose:
# Affine the interface IRQ to specified cpulist.
#
# Usage: /usr/bin/affine-interrupts.sh interface cpulist
#
# Define minimal path
PATH=/bin:/usr/bin:/usr/local/bin
# logger setup
WHOAMI=`basename $0`
LOG_FACILITY=user
LOG_PRIORITY=info
TMPLOG=/tmp/${WHOAMI}.log
# LOG() - generates log and puts in temporary file
function LOG()
{
logger -t "${0##*/}[$$]" -p ${LOG_FACILITY}.${LOG_PRIORITY} "$@"
echo "${0##*/}[$$]" "$@" >> ${TMPLOG}
}
function INFO()
{
MSG="INFO"
LOG "${MSG} $@"
}
function ERROR()
{
MSG="ERROR"
LOG "${MSG} $@"
}
if [ "$#" -ne 2 ]; then
ERROR "Interface name and cpulist are required"
exit 1
fi
interface=$1
cpulist=$2
# Find PCI device matching interface, keep last matching device name
dev=$(find /sys/devices -name "${interface}" | \
perl -ne 'print $1 if /([[:xdigit:]]{4}:[[:xdigit:]]{2}:[[:xdigit:]]{2}\.[[:xdigit:]])\/[[:alpha:]]/;')
# Obtain all IRQs for this device
irq=$(cat /sys/bus/pci/devices/${dev}/irq 2>/dev/null)
msi_irqs=$(ls /sys/bus/pci/devices/${dev}/msi_irqs 2>/dev/null | xargs)
INFO $LINENO "affine ${interface} (dev:${dev} irq:${irq} msi_irqs:${msi_irqs}) with cpus (${cpulist})"
for i in $(echo "${irq} ${msi_irqs}"); do echo $i; done | \
xargs --no-run-if-empty -i{} \
/bin/bash -c "[[ -e /proc/irq/{} ]] && echo ${cpulist} > /proc/irq/{}/smp_affinity_list" 2>/dev/null
exit 0
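The perl one-liner above pulls the PCI address out of the interface's sysfs path. An illustrative Python sketch of the same extraction (the path below is hypothetical, not taken from a real system):

```python
import re

# Hypothetical sysfs path for an interface named "eth0"
path = "/sys/devices/pci0000:00/0000:00:03.0/eth0"

# Same pattern as the perl one-liner: a PCI address (dddd:bb:dd.f)
# immediately followed by "/<letter>", i.e. the netdev directory.
m = re.search(
    r"([0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-9a-fA-F])/[A-Za-z]",
    path,
)
print(m.group(1))  # -> 0000:00:03.0
```

The captured address is then used to look up `/sys/bus/pci/devices/<addr>/irq` and `msi_irqs`, exactly as the script does.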

@@ -0,0 +1,170 @@
#!/bin/bash
################################################################################
# Copyright (c) 2013 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
# Define minimal path
PATH=/bin:/usr/bin:/usr/local/bin
LOG_FUNCTIONS=${LOG_FUNCTIONS:-"/etc/init.d/log_functions.sh"}
CPUMAP_FUNCTIONS=${CPUMAP_FUNCTIONS:-"/etc/init.d/cpumap_functions.sh"}
TASK_AFFINITY_FUNCTIONS=${TASK_AFFINITY_FUNCTIONS:-"/etc/init.d/task_affinity_functions.sh"}
source /etc/init.d/functions
[[ -e ${LOG_FUNCTIONS} ]] && source ${LOG_FUNCTIONS}
[[ -e ${CPUMAP_FUNCTIONS} ]] && source ${CPUMAP_FUNCTIONS}
[[ -e ${TASK_AFFINITY_FUNCTIONS} ]] && source ${TASK_AFFINITY_FUNCTIONS}
linkname=$(readlink -n -f $0)
scriptname=$(basename $linkname)
# Enable debug logs
LOG_DEBUG=1
. /etc/platform/platform.conf
################################################################################
# Affine all running tasks to the CPULIST provided in the first parameter.
################################################################################
function affine_tasks
{
local CPULIST=$1
local PIDLIST
local RET=0
# Affine non-kernel-thread tasks (excluding [kthreadd] and its children) to all
# available cores. They will be reaffined to platform cores later as part of
# the nova-compute launch.
log_debug "Affining all tasks to all available CPUs..."
affine_tasks_to_all_cores
RET=$?
if [ $RET -ne 0 ]; then
log_error "Some tasks failed to be affined to all cores."
fi
# Get number of logical cpus
N_CPUS=$(cat /proc/cpuinfo 2>/dev/null | \
awk '/^[pP]rocessor/ { n +=1 } END { print (n>0) ? n : 1}')
# Calculate platform cores cpumap
PLATFORM_COREMASK=$(cpulist_to_cpumap ${CPULIST} ${N_CPUS})
# Set default IRQ affinity
echo ${PLATFORM_COREMASK} > /proc/irq/default_smp_affinity
# Affine all PCI/MSI interrupts to platform cores; this overrides
# irqaffinity boot arg, since that does not handle IRQs for PCI devices
# on numa nodes that do not intersect with platform cores.
PCIDEVS=/sys/bus/pci/devices
declare -a irqs=()
irqs+=($(cat ${PCIDEVS}/*/irq 2>/dev/null | xargs))
irqs+=($(ls ${PCIDEVS}/*/msi_irqs 2>/dev/null | grep -E '^[0-9]+$' | xargs))
# flatten list of irqs, removing duplicates
irqs=($(echo ${irqs[@]} | tr ' ' '\n' | sort -nu))
log_debug "Affining all PCI/MSI irqs(${irqs[@]}) with cpus (${CPULIST})"
for i in ${irqs[@]}; do
/bin/bash -c "[[ -e /proc/irq/${i} ]] && echo ${CPULIST} > /proc/irq/${i}/smp_affinity_list" 2>/dev/null
done
if [[ "$subfunction" == *"compute,lowlatency" ]]; then
# Affine work queues to platform cores
echo ${PLATFORM_COREMASK} > /sys/devices/virtual/workqueue/cpumask
echo ${PLATFORM_COREMASK} > /sys/bus/workqueue/devices/writeback/cpumask
# On low-latency compute nodes, reassign the per-CPU threads rcuc, ksoftirqd,
# and ktimersoftd to the FIFO scheduler at the specified priorities
PIDLIST=$( ps -e -p 2 |grep rcuc | awk '{ print $1; }')
for PID in ${PIDLIST[@]}
do
chrt -p -f 4 ${PID} 2>/dev/null
done
PIDLIST=$( ps -e -p 2 |grep ksoftirq | awk '{ print $1; }')
for PID in ${PIDLIST[@]}
do
chrt -p -f 2 ${PID} 2>/dev/null
done
PIDLIST=$( ps -e -p 2 |grep ktimersoftd | awk '{ print $1; }')
for PID in ${PIDLIST[@]}
do
chrt -p -f 3 ${PID} 2>/dev/null
done
fi
return 0
}
################################################################################
# Start Action
################################################################################
function start
{
local RET=0
echo -n "Starting ${scriptname}: "
## Check whether we are root (need root for taskset)
if [ $UID -ne 0 ]; then
log_error "require root or sudo"
RET=1
return ${RET}
fi
## Define platform cpulist to be thread siblings of core 0
PLATFORM_CPULIST=$(get_platform_cpu_list)
# Affine all tasks to platform cpulist
affine_tasks ${PLATFORM_CPULIST}
RET=$?
if [ ${RET} -ne 0 ]; then
log_error "Failed to affine tasks ${PLATFORM_CPULIST}, rc=${RET}"
return ${RET}
fi
print_status ${RET}
return ${RET}
}
################################################################################
# Stop Action - don't do anything
################################################################################
function stop
{
local RET=0
echo -n "Stopping ${scriptname}: "
print_status ${RET}
return ${RET}
}
################################################################################
# Restart Action
################################################################################
function restart() {
stop
start
}
################################################################################
# Main Entry
#
################################################################################
case "$1" in
start)
start
;;
stop)
stop
;;
restart|reload)
restart
;;
status)
echo -n "OK"
;;
*)
echo $"Usage: $0 {start|stop|restart|reload|status}"
exit 1
esac
exit $?
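The script relies on `cpulist_to_cpumap` from cpumap_functions.sh, which is sourced above but not shown in this diff. An illustrative Python reimplementation of that conversion, assuming the usual kernel cpulist syntax (comma-separated CPU numbers and ranges, e.g. "0-3,8"):

```python
def cpulist_to_cpumap(cpulist, n_cpus):
    """Convert a kernel cpulist string (e.g. "0-3,8") to a hex bitmask."""
    mask = 0
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
        else:
            lo = hi = int(part)
        for cpu in range(lo, hi + 1):
            if cpu < n_cpus:  # clamp to the number of logical CPUs
                mask |= 1 << cpu
    return format(mask, "x")

print(cpulist_to_cpumap("0-3,8", 16))  # -> 10f
```

The resulting mask is what gets written to `/proc/irq/default_smp_affinity` and the workqueue `cpumask` files, while the cpulist form is written to the per-IRQ `smp_affinity_list` files.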

@@ -0,0 +1,14 @@
[Unit]
Description=Titanium Cloud Affine Platform
After=syslog.service network.service dbus.service sw-patch.service
Before=compute-huge.sh.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/init.d/affine-platform.sh start
ExecStop=/etc/init.d/affine-platform.sh stop
ExecReload=/etc/init.d/affine-platform.sh restart
[Install]
WantedBy=multi-user.target

@ -0,0 +1,24 @@
#!/bin/bash
#
# Copyright (c) 2014,2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
#
# compute-huge.sh "goenabled" check.
#
# If a problem was detected during configuration of huge pages and compute
# resources then the board is not allowed to enable.
#
COMPUTE_HUGE_GOENABLED="/var/run/compute_huge_goenabled"
source "/etc/init.d/log_functions.sh"
source "/usr/bin/tsconfig"
if [ -e ${VOLATILE_COMPUTE_CONFIG_COMPLETE} -a ! -f ${COMPUTE_HUGE_GOENABLED} ]; then
log_error "compute-huge.sh CPU configuration check failed. Failing goenabled check."
exit 1
fi
exit 0

File diff suppressed because it is too large

@ -0,0 +1,14 @@
[Unit]
Description=Titanium Cloud Compute Huge
After=syslog.service network.service affine-platform.sh.service sw-patch.service
Before=sshd.service sw-patch-agent.service sysinv-agent.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/init.d/compute-huge.sh start
ExecStop=/etc/init.d/compute-huge.sh stop
ExecReload=/etc/init.d/compute-huge.sh restart
[Install]
WantedBy=multi-user.target

@ -0,0 +1,78 @@
################################################################################
# Copyright (c) 2013-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
# COMPUTE Node configuration parameters for reserved memory and physical cores
# used by Base software and VSWITCH. These are resources that libvirt cannot use.
#
################################################################################
#
# Enable compute-huge.sh console debug logs (uncomment)
#
################################################################################
LOG_DEBUG=1
################################################################################
#
# List of logical CPU instances available in the system. This value is used
# for auditing purposes so that the current configuration can be checked for
# validity against the actual number of logical CPU instances in the system.
#
################################################################################
COMPUTE_CPU_LIST="0-1"
################################################################################
#
# List of Base software resources reserved per numa node. Each array element
# consists of a 3-tuple formatted as: <node>:<memory>:<cores>.
#
# Example: To reserve 1500MB and 1 core on NUMA node0, and 1500MB and 1 core
# on NUMA node1, the variable must be specified as follows.
# COMPUTE_BASE_MEMORY=("node0:1500MB:1" "node1:1500MB:1")
#
################################################################################
COMPUTE_BASE_RESERVED=("node0:8000MB:1" "node1:2000MB:0" "node2:2000MB:0" "node3:2000MB:0")
################################################################################
#
# List of HugeTLB memory descriptors to configure. Each array element
# consists of a 3-tuple descriptor formatted as: <node>:<pgsize>:<pgcount>.
# The NUMA node specified must exist and the HugeTLB pagesize must be a valid
# value such as 2048kB or 1048576kB.
#
# For example, to request 256 x 2MB HugeTLB pages on NUMA node0 and node1 the
# variable must be specified as follows.
# COMPUTE_VSWITCH_MEMORY=("node0:2048kB:256" "node1:2048kB:256")
#
################################################################################
COMPUTE_VSWITCH_MEMORY=("node0:1048576kB:1" "node1:1048576kB:1" "node2:1048576kB:1" "node3:1048576kB:1")
################################################################################
#
# List of VSWITCH physical cores reserved for VSWITCH applications.
#
# Example: To reserve 2 cores on NUMA node0, and 2 cores on NUMA node1, the
# variable must be specified as follows.
# COMPUTE_VSWITCH_CORES=("node0:2" "node1:2")
#
################################################################################
COMPUTE_VSWITCH_CORES=("node0:2" "node1:0" "node2:0" "node3:0")
################################################################################
#
# List of HugeTLB memory descriptors to configure for Libvirt. Each array element
# consists of a 3-tuple descriptor formatted as: <node>:<pgsize>:<pgcount>.
# The NUMA node specified must exist and the HugeTLB pagesize must be a valid
# value such as 2048kB or 1048576kB.
#
# For example, to request 256 x 2MB HugeTLB pages on NUMA node0 and node1 the
# variable must be specified as follows.
# COMPUTE_VM_MEMORY_2M=("node0:2048kB:256" "node1:2048kB:256")
#
################################################################################
COMPUTE_VM_MEMORY_2M=()
COMPUTE_VM_MEMORY_1G=()

@ -0,0 +1,399 @@
#!/bin/bash
################################################################################
# Copyright (c) 2013-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
source /etc/platform/platform.conf
################################################################################
# Utility function to expand a sequence of numbers (e.g., 0-7,16-23)
################################################################################
function expand_sequence
{
SEQUENCE=(${1//,/ })
DELIMITER=${2:-","}
LIST=
for entry in ${SEQUENCE[@]}
do
range=(${entry/-/ })
a=${range[0]}
b=${range[1]:-${range[0]}}
for i in $(seq $a $b)
do
LIST="${LIST}${DELIMITER}${i}"
done
done
echo ${LIST:1}
}
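As a quick illustration, here is a standalone copy of the function above (duplicated so it can be tried in isolation) expanding a mixed list of singletons and ranges:

```shell
# Standalone copy of expand_sequence, for illustration only.
expand_sequence() {
    local SEQUENCE=(${1//,/ })
    local DELIMITER=${2:-","}
    local LIST= entry range a b i
    for entry in "${SEQUENCE[@]}"; do
        range=(${entry/-/ })
        a=${range[0]}
        b=${range[1]:-${range[0]}}
        for i in $(seq "$a" "$b"); do
            LIST="${LIST}${DELIMITER}${i}"
        done
    done
    # strip the leading delimiter
    echo "${LIST:1}"
}

expanded=$(expand_sequence "0-2,5,7-8")
echo "$expanded"   # -> 0,1,2,5,7,8
```

The optional second argument selects the delimiter, so `expand_sequence "0-3" " "` yields `0 1 2 3`, the space-separated form several callers below rely on.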
################################################################################
# Append a string to comma separated list string
################################################################################
function append_list() {
local PUSH=$1
local LIST=$2
if [ -z "${LIST}" ]
then
LIST=${PUSH}
else
LIST="${LIST},${PUSH}"
fi
echo ${LIST}
return 0
}
################################################################################
# Condense a sequence of numbers to a list of ranges (e.g., 7-12,15-16)
################################################################################
function condense_sequence() {
local arr=( $(printf '%s\n' "$@" | sort -n) )
local first
local last
local cpulist=""
for ((i=0; i < ${#arr[@]}; i++))
do
num=${arr[$i]}
if [[ -z $first ]]; then
first=$num
last=$num
continue
fi
if [[ num -ne $((last + 1)) ]]; then
if [[ first -eq last ]]; then
cpulist=$(append_list ${first} ${cpulist})
else
cpulist=$(append_list "${first}-${last}" ${cpulist})
fi
first=$num
last=$num
else
: $((last++))
fi
done
if [[ first -eq last ]]; then
cpulist=$(append_list ${first} ${cpulist})
else
cpulist=$(append_list "${first}-${last}" ${cpulist})
fi
echo "$cpulist"
}
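The run-detection logic can be sketched standalone; this compact variant (with a sentinel instead of a trailing flush, so it is not a line-for-line copy) shows the intended round trip:

```shell
# Standalone sketch of the condense step: a set of numbers in, ranges out.
condense_demo() {
    local nums=( $(printf '%s\n' "$@" | sort -n) )
    local out="" first="" last="" n
    # END is a sentinel that forces a final flush of the last run
    for n in "${nums[@]}" END; do
        if [[ -z $first ]]; then
            first=$n; last=$n; continue
        fi
        if [[ $n != END ]] && (( n == last + 1 )); then
            last=$n; continue
        fi
        # flush the current run as "a" or "a-b"
        if (( first == last )); then
            out="${out:+$out,}$first"
        else
            out="${out:+$out,}$first-$last"
        fi
        first=$n; last=$n
    done
    echo "$out"
}

ranges=$(condense_demo 7 8 9 10 11 12 15 16)
echo "$ranges"   # -> 7-12,15-16
```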
################################################################################
# Converts a CPULIST (e.g., 0-7,16-23) to a CPUMAP (e.g., 0x00FF00FF). The
# CPU map is returned as a string representation of a large hexadecimal
# number, without the leading "0x" characters.
#
################################################################################
function cpulist_to_cpumap
{
local CPULIST=$1
local NR_CPUS=$2
local CPUMAP=0
local CPUID=0
if [ -z "${NR_CPUS}" ] || [ ${NR_CPUS} -eq 0 ]
then
echo 0
return 0
fi
for CPUID in $(expand_sequence $CPULIST " ")
do
if [ "${CPUID}" -lt "${NR_CPUS}" ]; then
CPUMAP=$(echo "${CPUMAP} + (2^${CPUID})" | bc -l)
fi
done
echo "obase=16;ibase=10;${CPUMAP}" | bc -l
return 0
}
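For maps that fit in a 64-bit word, the same conversion can be sketched with plain bash arithmetic; the function above shells out to bc precisely so that wider maps also work. A hypothetical small-map variant:

```shell
# cpulist -> hex cpumap using shell arithmetic; limited to < 64 CPUs,
# which is why the real function uses bc instead.
cpulist_to_cpumap_small() {
    local entry a b i map=0
    for entry in ${1//,/ }; do
        a=${entry%-*}
        b=${entry#*-}
        for ((i = a; i <= b; i++)); do
            map=$(( map | (1 << i) ))
        done
    done
    printf '%X\n' "$map"
}

map=$(cpulist_to_cpumap_small "0-7,16-23")
echo "$map"   # -> FF00FF
```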
################################################################################
# Converts a CPUMAP (e.g., 0x00FF00FF) to a CPULIST (e.g., 0-7,16-23). The
# CPUMAP is expected in hexadecimal (base 16) form, without the leading "0x"
# characters.
#
################################################################################
function cpumap_to_cpulist
{
local CPUMAP=$(echo "obase=10;ibase=16;$1" | bc -l)
local NR_CPUS=$2
local list=()
local cpulist=""
for((i=0; i < NR_CPUS; i++))
do
## Since 'bc' does not support any bitwise operators this expression:
## if (CPUMAP & (1 << CPUID))
## has to be rewritten like this:
## if (CPUMAP % (2**(CPUID+1)) > ((2**(CPUID)) - 1))
##
ISSET=$(echo "scale=0; (${CPUMAP} % 2^(${i}+1)) > (2^${i})-1" | bc -l)
if [ "${ISSET}" -ne 0 ]
then
list+=($i)
fi
done
cpulist=$(condense_sequence ${list[@]} )
echo "$cpulist"
return 0
}
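The reverse direction can likewise be sketched with bash arithmetic for small maps. Note this sketch emits a flat comma list; the real function goes through condense_sequence so consecutive CPUs come back as ranges:

```shell
# hex cpumap -> cpulist with shell arithmetic (< 64 CPUs); the real function
# uses bc plus condense_sequence to handle wide maps and range output.
cpumap_to_cpulist_small() {
    local map=$(( 16#$1 )) nr=$2 i out=""
    for (( i = 0; i < nr; i++ )); do
        if (( map & (1 << i) )); then
            out="${out:+$out,}$i"
        fi
    done
    echo "$out"
}

cpus=$(cpumap_to_cpulist_small "11" 8)
echo "$cpus"   # -> 0,4
```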
################################################################################
# Bitwise NOT of a hexadecimal CPUMAP representation. The value is
# returned as a hexadecimal value, without the leading "0x" characters
#
################################################################################
function invert_cpumap
{
local CPUMAP=$(echo "obase=10;ibase=16;$1" | bc -l)
local NR_CPUS=$2
local INVERSE_CPUMAP=0
for CPUID in $(seq 0 $((NR_CPUS - 1)));
do
## See comment in previous function
ISSET=$(echo "scale=0; (${CPUMAP} % 2^(${CPUID}+1)) > (2^${CPUID})-1" | bc -l)
if [ "${ISSET}" -eq 1 ]; then
continue
fi
INVERSE_CPUMAP=$(echo "${INVERSE_CPUMAP} + (2^${CPUID})" | bc -l)
done
echo "obase=16;ibase=10;${INVERSE_CPUMAP}" | bc -l
return 0
}
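Both of the bc-based functions above rely on the same rewrite of the bit test, since bc has no bitwise operators. The identity can be spot-checked against bash's native operators:

```shell
# Verify the bc-friendly bit test used above:
# (map % 2^(i+1)) > (2^i - 1)  is true exactly when bit i of map is set.
map=$(( 16#F0F ))
ok=1
for i in 0 3 4 11; do
    by_mod=$(( (map % (1 << (i + 1))) > ((1 << i) - 1) ))
    by_and=$(( (map >> i) & 1 ))
    [ "$by_mod" -eq "$by_and" ] || ok=0
done
echo "identity holds: $ok"   # -> identity holds: 1
```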
################################################################################
# Builds the complement representation of a CPULIST
#
################################################################################
function invert_cpulist
{
local CPULIST=$1
local NR_CPUS=$2
local CPUMAP=$(cpulist_to_cpumap ${CPULIST} ${NR_CPUS})
cpumap_to_cpulist $(invert_cpumap ${CPUMAP} ${NR_CPUS}) ${NR_CPUS}
return 0
}
################################################################################
# in_list() - check whether item is contained in list
# param: item
# param: list (i.e. 0-3,8-11)
# returns: 0 - item is contained in list;
# 1 - item is not contained in list
#
################################################################################
function in_list() {
local item="$1"
local list="$2"
# expand list format 0-3,8-11 to a full sequence {0..3} {8..11}
local exp_list=$(echo ${list} | \
sed -e 's#,# #g' -e 's#\([0-9]*\)-\([0-9]*\)#{\1\.\.\2}#g')
local e
for e in $(eval echo ${exp_list})
do
[[ "$e" == "$item" ]] && return 0
done
return 1
}
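The membership test hinges on the sed rewrite of `0-3,8-11` into `{0..3} {8..11}`, which `eval echo` then brace-expands. A standalone copy of that trick:

```shell
# Standalone copy of the brace-expansion membership test used by in_list.
in_list_demo() {
    local item=$1 list=$2 e exp
    # 0-3,8-11 -> {0..3} {8..11}, then brace-expand via eval echo
    exp=$(echo "$list" | sed -e 's#,# #g' -e 's#\([0-9]*\)-\([0-9]*\)#{\1\.\.\2}#g')
    for e in $(eval echo $exp); do
        [[ $e == $item ]] && return 0
    done
    return 1
}

hit=$(in_list_demo 9 "0-3,8-11" && echo yes || echo no)
miss=$(in_list_demo 5 "0-3,8-11" && echo yes || echo no)
echo "$hit $miss"   # -> yes no
```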
################################################################################
# any_in_list() - check if any item of sublist is contained in list
# param: sublist
# param: list
# returns: 0 - an item of sublist is contained in list;
# 1 - no sublist items contained in list
#
################################################################################
function any_in_list() {
local sublist="$1"
local list="$2"
local e
local exp_list
# expand list format 0-3,8-11 to a full sequence {0..3} {8..11}
exp_list=$(echo ${list} | \
sed -e 's#,# #g' -e 's#\([0-9]*\)-\([0-9]*\)#{\1\.\.\2}#g')
declare -A a_list
for e in $(eval echo ${exp_list})
do
a_list[$e]=1
done
# expand list format 0-3,8-11 to a full sequence {0..3} {8..11}
exp_list=$(echo ${sublist} | \
sed -e 's#,# #g' -e 's#\([0-9]*\)-\([0-9]*\)#{\1\.\.\2}#g')
declare -A a_sublist
for e in $(eval echo ${exp_list})
do
a_sublist[$e]=1
done
# Check if any element of sublist is in list
for e in "${!a_sublist[@]}"
do
if [[ "${a_list[$e]}" == 1 ]]
then
return 0 # matches
fi
done
return 1 # no match
}
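The intersection check above hashes one expanded list into an associative array and probes it with the other; a condensed standalone sketch (probing directly instead of building a second array, so it is equivalent but not identical):

```shell
# Standalone sketch of the associative-array intersection behind any_in_list.
any_in_list_demo() {
    local sub=$1 list=$2 e
    local -A seen=()
    for e in $(eval echo $(echo "$list" | sed -e 's#,# #g' -e 's#\([0-9]*\)-\([0-9]*\)#{\1\.\.\2}#g')); do
        seen[$e]=1
    done
    for e in $(eval echo $(echo "$sub" | sed -e 's#,# #g' -e 's#\([0-9]*\)-\([0-9]*\)#{\1\.\.\2}#g')); do
        [[ ${seen[$e]:-0} -eq 1 ]] && return 0
    done
    return 1
}

overlap=$(any_in_list_demo "3,20" "0-3,8-11" && echo yes || echo no)
disjoint=$(any_in_list_demo "20-25" "0-3,8-11" && echo yes || echo no)
echo "$overlap $disjoint"   # -> yes no
```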
################################################################################
# Return list of CPUs reserved for platform
################################################################################
function get_platform_cpu_list() {
## Define the platform cpulist based on an engineered number of cores,
## on whether this is a combo node, and include the SMT siblings.
if [[ $subfunction = *compute* ]]; then
RESERVE_CONF="/etc/nova/compute_reserved.conf"
[[ -e ${RESERVE_CONF} ]] && source ${RESERVE_CONF}
if [ -n "$PLATFORM_CPU_LIST" ];then
echo "$PLATFORM_CPU_LIST"
return 0
fi
fi
local PLATFORM_SOCKET=0
local PLATFORM_START=0
local PLATFORM_CORES=1
if [ "$nodetype" = "controller" ]; then
((PLATFORM_CORES+=1))
fi
local PLATFORM_CPULIST=$(topology_to_cpulist ${PLATFORM_SOCKET} ${PLATFORM_START} ${PLATFORM_CORES})
echo ${PLATFORM_CPULIST}
}
################################################################################
# Return list of CPUs reserved for vswitch
################################################################################
function get_vswitch_cpu_list() {
## Define the default avp cpulist based on the engineered number of platform
## cores and avp cores, and include the SMT siblings.
if [[ $subfunction = *compute* ]]; then
VSWITCH_CONF="/etc/vswitch/vswitch.conf"
[[ -e ${VSWITCH_CONF} ]] && source ${VSWITCH_CONF}
if [ -n "$VSWITCH_CPU_LIST" ];then
echo "$VSWITCH_CPU_LIST"
return 0
fi
fi
local N_CORES_IN_PKG=$(cat /proc/cpuinfo 2>/dev/null | \
awk '/^cpu cores/ {n = $4} END { print (n>0) ? n : 1 }')
# engineer platform cores
local PLATFORM_CORES=1
if [ "$nodetype" = "controller" ]; then
((PLATFORM_CORES+=1))
fi
# engineer AVP cores
local AVP_SOCKET=0
local AVP_START=${PLATFORM_CORES}
local AVP_CORES=1
if [ ${N_CORES_IN_PKG} -gt 4 ]; then
((AVP_CORES+=1))
fi
local AVP_CPULIST=$(topology_to_cpulist ${AVP_SOCKET} ${AVP_START} ${AVP_CORES})
echo ${AVP_CPULIST}
}
################################################################################
# vswitch_expanded_cpu_list() - compute the vswitch cpu list, including its SMT siblings
################################################################################
function vswitch_expanded_cpu_list() {
list=$(get_vswitch_cpu_list)
# Expand vswitch cpulist
vswitch_cpulist=$(expand_sequence ${list} " ")
cpulist=""
for e in $vswitch_cpulist
do
# claim hyperthread siblings if SMT enabled
SIBLINGS_CPULIST=$(cat /sys/devices/system/cpu/cpu${e}/topology/thread_siblings_list 2>/dev/null)
siblings_cpulist=$(expand_sequence ${SIBLINGS_CPULIST} " ")
for s in $siblings_cpulist
do
in_list ${s} ${cpulist}
if [ $? -eq 1 ]
then
cpulist=$(append_list ${s} ${cpulist})
fi
done
done
echo "$cpulist"
return 0
}
################################################################################
# platform_expanded_cpu_list() - compute the platform cpu list, including its SMT siblings
################################################################################
function platform_expanded_cpu_list() {
list=$(get_platform_cpu_list)
# Expand platform cpulist
platform_cpulist=$(expand_sequence ${list} " ")
cpulist=""
for e in $platform_cpulist
do
# claim hyperthread siblings if SMT enabled
SIBLINGS_CPULIST=$(cat /sys/devices/system/cpu/cpu${e}/topology/thread_siblings_list 2>/dev/null)
siblings_cpulist=$(expand_sequence ${SIBLINGS_CPULIST} " ")
for s in $siblings_cpulist
do
in_list ${s} ${cpulist}
if [ $? -eq 1 ]
then
cpulist=$(append_list ${s} ${cpulist})
fi
done
done
echo "$cpulist"
return 0
}
################################################################################
# Return list of CPUs based on cpu topology. Select the socket, starting core
# within the socket, select number of cores, and SMT siblings.
################################################################################
function topology_to_cpulist() {
local SOCKET=$1
local CORE_START=$2
local NUM_CORES=$3
local CPULIST=$(cat /proc/cpuinfo 2>/dev/null | perl -sne \
'BEGIN { %T = (); %H = (); $L = $P = $C = $S = 0; }
{
if (/processor\s+:\s+(\d+)/) { $L = $1; }
if (/physical id\s+:\s+(\d+)/) { $P = $1; }
if (/core id\s+:\s+(\d+)/) {
$C = $1;
$T{$P}{$C}++;
$S = $T{$P}{$C};
$H{$P}{$C}{$S} = $L;
}
}
END {
@cores = sort { $a <=> $b } keys %{ $T{$socket} };
@sel_cores = splice @cores, $core_start, $num_cores;
@lcpus = ();
for $C (@sel_cores) {
for $S (sort {$a <=> $b } keys %{ $H{$socket}{$C} }) {
push @lcpus, $H{$socket}{$C}{$S};
}
}
printf "%s\n", join(",", @lcpus);
}' -- -socket=${SOCKET} -core_start=${CORE_START} -num_cores=${NUM_CORES})
echo ${CPULIST}
}

@ -0,0 +1,244 @@
#!/bin/bash
#
# Copyright (c) 2015-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
source /etc/init.d/cpumap_functions.sh
export NR_CPUS_LIST=("4" "8" "16" "32" "64" "128")
if [ ! -z ${1} ]; then
NR_CPUS_LIST=(${1//,/ })
fi
function test_cpumap_to_cpulist()
{
local NR_CPUS=$1
declare -A CPULISTS
if [ ${NR_CPUS} -ge 4 ]; then
CPULISTS["0"]=""
CPULISTS["1"]="0"
CPULISTS["2"]="1"
CPULISTS["3"]="0-1"
CPULISTS["5"]="0,2"
CPULISTS["7"]="0-2"
CPULISTS["F"]="0-3"
CPULISTS["9"]="0,3"
fi
if [ ${NR_CPUS} -ge 8 ]; then
CPULISTS["00"]=""
CPULISTS["11"]="0,4"
CPULISTS["FF"]="0-7"
CPULISTS["81"]="0,7"
fi
if [ ${NR_CPUS} -ge 16 ]; then
CPULISTS["0000"]=""
CPULISTS["1111"]="0,4,8,12"
CPULISTS["FFF"]="0-11"
CPULISTS["F0F"]="0-3,8-11"
CPULISTS["F0F0"]="4-7,12-15"
CPULISTS["FFFF"]="0-15"
CPULISTS["FFFE"]="1-15"
CPULISTS["8001"]="0,15"
fi
if [ ${NR_CPUS} -ge 32 ]; then
CPULISTS["00000000"]=""
CPULISTS["11111111"]="0,4,8,12,16,20,24,28"
CPULISTS["0F0F0F0F"]="0-3,8-11,16-19,24-27"
CPULISTS["F0F0F0F0"]="4-7,12-15,20-23,28-31"
CPULISTS["FFFFFFFF"]="0-31"
CPULISTS["FFFFFFFE"]="1-31"
CPULISTS["80000001"]="0,31"
fi
if [ ${NR_CPUS} -ge 64 ]; then
CPULISTS["0000000000000000"]=""
CPULISTS["1111111111111111"]="0,4,8,12,16,20,24,28,32,36,40,44,48,52,56,60"
CPULISTS["0F0F0F0F0F0F0F0F"]="0-3,8-11,16-19,24-27,32-35,40-43,48-51,56-59"
CPULISTS["F0F0F0F0F0F0F0F0"]="4-7,12-15,20-23,28-31,36-39,44-47,52-55,60-63"
CPULISTS["FFFFFFFFFFFFFFFF"]="0-63"
CPULISTS["FFFFFFFFFFFFFFFE"]="1-63"
CPULISTS["8000000000000001"]="0,63"
fi
if [ ${NR_CPUS} -ge 128 ]; then
CPULISTS["00000000000000000000000000000000"]=""
CPULISTS["11111111111111111111111111111111"]="0,4,8,12,16,20,24,28,32,36,40,44,48,52,56,60,64,68,72,76,80,84,88,92,96,100,104,108,112,116,120,124"
CPULISTS["0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F"]="0-3,8-11,16-19,24-27,32-35,40-43,48-51,56-59,64-67,72-75,80-83,88-91,96-99,104-107,112-115,120-123"
CPULISTS["F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0"]="4-7,12-15,20-23,28-31,36-39,44-47,52-55,60-63,68-71,76-79,84-87,92-95,100-103,108-111,116-119,124-127"
CPULISTS["FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"]="0-127"
CPULISTS["FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE"]="1-127"
CPULISTS["80000000000000000000000000000001"]="0,127"
fi
for CPUMAP in ${!CPULISTS[@]}; do
EXPECTED=${CPULISTS[${CPUMAP}]}
CPULIST=$(cpumap_to_cpulist ${CPUMAP} ${NR_CPUS})
if [ "${CPULIST}" != "${EXPECTED}" ]; then
printf "\n"
echo "error: (cpumap_to_list ${CPUMAP} ${NR_CPUS}) returned \"${CPULIST}\" instead of \"${EXPECTED}\""
fi
printf "."
done
printf "\n"
}
function test_cpulist_to_cpumap()
{
local NR_CPUS=$1
declare -A CPUMAPS
if [ ${NR_CPUS} -ge 4 ]; then
CPUMAPS[" "]="0"
CPUMAPS["0"]="1"
CPUMAPS["1"]="2"
CPUMAPS["0-1"]="3"
CPUMAPS["0,2"]="5"
CPUMAPS["0-2"]="7"
CPUMAPS["0-3"]="F"
CPUMAPS["0,3"]="9"
fi
if [ ${NR_CPUS} -ge 8 ]; then
CPUMAPS["0,4"]="11"
CPUMAPS["0-7"]="FF"
CPUMAPS["0,7"]="81"
fi
if [ ${NR_CPUS} -ge 16 ]; then
CPUMAPS["0,4,8,12"]="1111"
CPUMAPS["0-11"]="FFF"
CPUMAPS["0-3,8-11"]="F0F"
CPUMAPS["4-7,12-15"]="F0F0"
CPUMAPS["0-15"]="FFFF"
CPUMAPS["1-15"]="FFFE"
CPUMAPS["0,15"]="8001"
fi
if [ ${NR_CPUS} -ge 32 ]; then
CPUMAPS["0,4,8,12,16,20,24,28"]="11111111"
CPUMAPS["0-3,8-11,16-19,24-27"]="F0F0F0F"
CPUMAPS["4-7,12-15,20-23,28-31"]="F0F0F0F0"
CPUMAPS["0-31"]="FFFFFFFF"
CPUMAPS["1-31"]="FFFFFFFE"
CPUMAPS["0,31"]="80000001"
fi
if [ ${NR_CPUS} -ge 64 ]; then
CPUMAPS["0,4,8,12,16,20,24,28,32,36,40,44,48,52,56,60"]="1111111111111111"
CPUMAPS["0-3,8-11,16-19,24-27,32-35,40-43,48-51,56-59"]="F0F0F0F0F0F0F0F"
CPUMAPS["4-7,12-15,20-23,28-31,36-39,44-47,52-55,60-63"]="F0F0F0F0F0F0F0F0"
CPUMAPS["0-63"]="FFFFFFFFFFFFFFFF"
CPUMAPS["1-63"]="FFFFFFFFFFFFFFFE"
CPUMAPS["0,63"]="8000000000000001"
fi
if [ ${NR_CPUS} -ge 128 ]; then
CPUMAPS["0,4,8,12,16,20,24,28,32,36,40,44,48,52,56,60,64,68,72,76,80,84,88,92,96,100,104,108,112,116,120,124"]="11111111111111111111111111111111"
CPUMAPS["0-3,8-11,16-19,24-27,32-35,40-43,48-51,56-59,64-67,72-75,80-83,88-91,96-99,104-107,112-115,120-123"]="F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F"
CPUMAPS["4-7,12-15,20-23,28-31,36-39,44-47,52-55,60-63,68-71,76-79,84-87,92-95,100-103,108-111,116-119,124-127"]="F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0"
CPUMAPS["0-127"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
CPUMAPS["1-127"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE"
CPUMAPS["0,127"]="80000000000000000000000000000001"
fi
for CPULIST in ${!CPUMAPS[@]}; do
EXPECTED=${CPUMAPS[${CPULIST}]}
CPUMAP=$(cpulist_to_cpumap ${CPULIST} ${NR_CPUS})
if [ "${CPUMAP}" != "${EXPECTED}" ]; then
printf "\n"
echo "error: (cpulist_to_cpumap ${CPULIST} ${NR_CPUS}) returned \"${CPUMAP}\" instead of \"${EXPECTED}\""
fi
printf "."
done
printf "\n"
}
function test_invert_cpumap()
{
local NR_CPUS=$1
declare -A INVERSES
if [ $((${NR_CPUS} % 4)) -ne 0 ]; then
echo "test_invert_cpumap skipping NR_CPUS=${NR_CPUS}; not a multiple of 4"
return 0
fi
if [ ${NR_CPUS} -ge 4 ]; then
INVERSES["0"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
INVERSES["1"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE"
INVERSES["2"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD"
INVERSES["3"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC"
INVERSES["5"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFA"
INVERSES["7"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF8"
INVERSES["F"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0"
INVERSES["9"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF6"
fi
if [ ${NR_CPUS} -ge 8 ]; then
INVERSES["11"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEE"
INVERSES["FF"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00"
INVERSES["F0"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0F"
INVERSES["81"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF7E"
fi
if [ ${NR_CPUS} -ge 16 ]; then
INVERSES["1111"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFEEEE"
INVERSES["FFF"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFF000"
INVERSES["F0F"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFF0F0"
INVERSES["F0F0"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFF0F0F"
INVERSES["0F0F"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFFF0F0"
INVERSES["FFFF"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFF0000"
INVERSES["FFFE"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFF0001"
INVERSES["8001"]="FFFFFFFFFFFFFFFFFFFFFFFFFFFF7FFE"
fi
if [ ${NR_CPUS} -ge 32 ]; then
INVERSES["11111111"]="FFFFFFFFFFFFFFFFFFFFFFFFEEEEEEEE"
INVERSES["0F0F0F0F"]="FFFFFFFFFFFFFFFFFFFFFFFFF0F0F0F0"
INVERSES["F0F0F0F0"]="FFFFFFFFFFFFFFFFFFFFFFFF0F0F0F0F"
INVERSES["FFFFFFFF"]="FFFFFFFFFFFFFFFFFFFFFFFF00000000"
INVERSES["FFFFFFFE"]="FFFFFFFFFFFFFFFFFFFFFFFF00000001"
INVERSES["80000001"]="FFFFFFFFFFFFFFFFFFFFFFFF7FFFFFFE"
fi
if [ ${NR_CPUS} -ge 64 ]; then
INVERSES["1111111111111111"]="FFFFFFFFFFFFFFFFEEEEEEEEEEEEEEEE"
INVERSES["0F0F0F0F0F0F0F0F"]="FFFFFFFFFFFFFFFFF0F0F0F0F0F0F0F0"
INVERSES["F0F0F0F0F0F0F0F0"]="FFFFFFFFFFFFFFFF0F0F0F0F0F0F0F0F"
INVERSES["FFFFFFFFFFFFFFFF"]="FFFFFFFFFFFFFFFF0000000000000000"
INVERSES["FFFFFFFFFFFFFFFE"]="FFFFFFFFFFFFFFFF0000000000000001"
INVERSES["8000000000000001"]="FFFFFFFFFFFFFFFF7FFFFFFFFFFFFFFE"
fi
if [ ${NR_CPUS} -ge 128 ]; then
INVERSES["11111111111111111111111111111111"]="EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE"
INVERSES["0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F"]="F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0"
INVERSES["F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0"]="0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F"
INVERSES["FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"]="00000000000000000000000000000000"
INVERSES["FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE"]="00000000000000000000000000000001"
INVERSES["80000000000000000000000000000001"]="7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE"
fi
for CPUMAP in ${!INVERSES[@]}; do
EXPECTED=${INVERSES[${CPUMAP}]}
if [ ${NR_CPUS} -lt 128 ]; then
EXPECTED=$(echo ${EXPECTED} | cut --complement -c1-$((32-((${NR_CPUS}+3)/4))))
fi
EXPECTED=$(echo ${EXPECTED} | sed -e "s/^0*//")
if [ -z ${EXPECTED} ]; then
EXPECTED="0"
fi
INVERSE=$(invert_cpumap ${CPUMAP} ${NR_CPUS})
if [ "${INVERSE}" != "${EXPECTED}" ]; then
printf "\n"
echo "error: (invert_cpumap ${CPUMAP} ${NR_CPUS}) returned \"${INVERSE}\" instead of \"${EXPECTED}\""
fi
printf "."
done
printf "\n"
}
for NR_CPUS in ${NR_CPUS_LIST[@]}; do
echo "NR_CPUS=${NR_CPUS}"
test_cpumap_to_cpulist ${NR_CPUS}
test_cpulist_to_cpumap ${NR_CPUS}
test_invert_cpumap ${NR_CPUS}
echo ""
done
exit 0

@ -0,0 +1,49 @@
#!/bin/bash
################################################################################
# Copyright (c) 2013-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
################################################################################
# Log if debug is enabled via LOG_DEBUG
#
################################################################################
function log_debug
{
if [ ! -z "${LOG_DEBUG}" ]; then
logger -p debug -t "$0[${PPID}]" -s "$@" 2>&1
fi
}
################################################################################
# Log unconditionally to STDERR
#
################################################################################
function log_error
{
logger -p error -t "$0[${PPID}]" -s "$@"
}
################################################################################
# Log unconditionally to STDOUT
#
################################################################################
function log
{
logger -p info -t "$0[${PPID}]" -s "$@" 2>&1
}
################################################################################
# Utility function to print the status of a command result
#
################################################################################
function print_status()
{
if [ "$1" -eq "0" ]; then
echo "[ OK ]"
else
echo "[FAILED]"
fi
}

@ -0,0 +1,27 @@
#!/bin/bash
################################################################################
# Copyright (c) 2013 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
#
# ps-sched.sh -- gives detailed task listing with scheduling attributes
# -- this is cpu and scheduling intensive version (shell/taskset based)
# (note: does not print fields 'group' or 'timeslice')
printf "%6s %6s %6s %1c %2s %4s %6s %4s %-24s %2s %-16s %s\n" "PID" "TID" "PPID" "S" "PO" "NICE" "RTPRIO" "PR" "AFFINITY" "P" "COMM" "COMMAND"
ps -eL -o pid=,lwp=,ppid=,state=,class=,nice=,rtprio=,priority=,psr=,comm=,command= | \
while read pid tid ppid state policy nice rtprio priority psr comm command
do
bitmask=$(taskset -p $tid 2>/dev/null)
aff=${bitmask##*: }
if [ -z "${aff}" ]; then
aff="0x0"
else
aff="0x${aff}"
fi
printf "%6d %6d %6d %1c %2s %4s %6s %4d %-24s %2d %-16s %s\n" $pid $tid $ppid $state $policy $nice $rtprio $priority $aff $psr $comm "$command"
done
exit 0
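The AFFINITY column above is derived from `taskset -p`, whose output line ends in `current affinity mask: <hex>`; the script keeps only the mask with one greedy parameter expansion. A sketch against a hard-coded sample line (the pid and mask are made up):

```shell
# Strip everything up to the last ": " to isolate the hex affinity mask.
bitmask="pid 1234's current affinity mask: ff"
aff=${bitmask##*: }
echo "0x${aff}"   # -> 0xff
```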

@ -0,0 +1,90 @@
#!/bin/bash
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Purpose: set PM QoS resume latency constraints for CPUs.
# Usage: /usr/bin/set-cpu-wakeup-latency.sh policy cpulist
# policy may be either "low" or "high" to set appropriate latency.
# "low" means HALT (C1) is the deepest C-state we allow the CPU to enter.
# "high" means we allow the CPU to sleep as deeply as possible.
# cpulist is a numerical list of processors. It may contain multiple
# comma-separated items, including ranges.
# For example, 0,5,7,9-11.
# Define minimal path
PATH=/bin:/usr/bin:/usr/local/bin
LOG_FUNCTIONS=${LOG_FUNCTIONS:-"/etc/init.d/log_functions.sh"}
CPUMAP_FUNCTIONS=${CPUMAP_FUNCTIONS:-"/etc/init.d/cpumap_functions.sh"}
[[ -e ${LOG_FUNCTIONS} ]] && source ${LOG_FUNCTIONS}
[[ -e ${CPUMAP_FUNCTIONS} ]] && source ${CPUMAP_FUNCTIONS}
if [ $UID -ne 0 ]; then
log_error "$0 requires root or sudo privileges"
exit 1
fi
if [ "$#" -ne 2 ]; then
log_error "$0 requires policy and cpulist parameters"
exit 1
fi
POLICY=$1
CPU_LIST=$2
NUMBER_OF_CPUS=$(getconf _NPROCESSORS_CONF 2>/dev/null)
STATUS=1
for CPU_NUM in $(expand_sequence "$CPU_LIST" " ")
do
# Check that we are not setting PM QoS policy for non-existing CPU
if [ "$CPU_NUM" -lt "0" ] || [ "$CPU_NUM" -ge "$NUMBER_OF_CPUS" ]; then
log_error "CPU number ${CPU_NUM} is invalid, available CPUs are 0-$((NUMBER_OF_CPUS-1))"
exit 1
fi
# Obtain CPU wakeup latencies for all available C-states, sorted ascending
# from the operating state to the deepest sleep state
declare -a LIMITS=()
LIMITS+=($(cat /sys/devices/system/cpu/cpu${CPU_NUM}/cpuidle/state*/latency 2>/dev/null | sort -n | xargs))
if [ ${#LIMITS[@]} -eq 0 ]; then
log_debug "Failed to get PM QoS latency limits for CPU ${CPU_NUM}"
fi
# Select appropriate CPU wakeup latency based on "low" or "high" policy
case "${POLICY}" in
"low")
# Get first sleep state for "low" policy
if [ ${#LIMITS[@]} -eq 0 ]; then
LATENCY=1
else
LATENCY=${LIMITS[1]}
fi
;;
"high")
# Get deepest sleep state for "high" policy
if [ ${#LIMITS[@]} -eq 0 ]; then
LATENCY=1000
else
LATENCY=${LIMITS[${#LIMITS[@]}-1]}
fi
;;
*)
log_error "Policy is invalid, can be either low or high"
exit 1
esac
# Set the latency for the particular CPU
echo ${LATENCY} > /sys/devices/system/cpu/cpu${CPU_NUM}/power/pm_qos_resume_latency_us 2>/dev/null
RET_VAL=$?
if [ ${RET_VAL} -ne 0 ]; then
log_error "Failed to set PM QoS latency for CPU ${CPU_NUM}, rc=${RET_VAL}"
continue
else
log_debug "Successfully set PM QoS latency for CPU ${CPU_NUM}, rc=${RET_VAL}"
STATUS=0
fi
done
exit ${STATUS}

@ -0,0 +1,330 @@
#!/bin/bash
################################################################################
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
# Define minimal path
PATH=/bin:/usr/bin:/usr/local/bin
. /etc/platform/platform.conf
LOG_FUNCTIONS=${LOG_FUNCTIONS:-"/etc/init.d/log_functions.sh"}
CPUMAP_FUNCTIONS=${CPUMAP_FUNCTIONS:-"/etc/init.d/cpumap_functions.sh"}
[[ -e ${LOG_FUNCTIONS} ]] && source ${LOG_FUNCTIONS}
[[ -e ${CPUMAP_FUNCTIONS} ]] && source ${CPUMAP_FUNCTIONS}
# Enable debug logs and tag them
LOG_DEBUG=1
TAG="TASKAFFINITY:"
TASK_AFFINING_INCOMPLETE="/etc/platform/.task_affining_incomplete"
N_CPUS=$(cat /proc/cpuinfo 2>/dev/null | \
awk '/^[pP]rocessor/ { n +=1 } END { print (n>0) ? n : 1}')
FULLSET_CPUS="0-"$((N_CPUS-1))
FULLSET_MASK=$(cpulist_to_cpumap ${FULLSET_CPUS} ${N_CPUS})
PLATFORM_CPUS=$(get_platform_cpu_list)
PLATFORM_CPULIST=$(get_platform_cpu_list| \
perl -pe 's/(\d+)-(\d+)/join(",",$1..$2)/eg'| \
sed 's/,/ /g')
VSWITCH_CPULIST=$(get_vswitch_cpu_list| \
perl -pe 's/(\d+)-(\d+)/join(",",$1..$2)/eg'| \
sed 's/,/ /g')
IDLE_MARK=95.0
KERNEL=`uname -a`
################################################################################
# Check if a given core is a platform core; returns 1 if it is, 0 if not
################################################################################
function is_platform_core()
{
local core=$1
for CPU in ${PLATFORM_CPULIST}; do
if [ $core -eq $CPU ]; then
return 1
fi
done
return 0
}
################################################################################
# Check if a given core is a vswitch core; returns 1 if it is, 0 if not
################################################################################
function is_vswitch_core()
{
local core=$1
for CPU in ${VSWITCH_CPULIST}; do
if [ $core -eq $CPU ]; then
return 1
fi
done
return 0
}
################################################################################
# An audit and corrective action following a swact
################################################################################
function audit_and_reaffine()
{
local mask=$1
local cmd_str=""
local tasklist
cmd_str="ps-sched.sh|awk '(\$9==\"$mask\") {print \$2}'"
tasklist=($(eval $cmd_str))
# log_debug "cmd str = $cmd_str"
log_debug "${TAG} There are ${#tasklist[@]} tasks to reaffine."
for task in ${tasklist[@]}; do
taskset -acp ${PLATFORM_CPUS} $task &> /dev/null
rc=$?
[[ $rc -ne 0 ]] && log_error "Failed to set CPU affinity for pid $task, rc=$rc"
done
tasklist=($(eval $cmd_str))
[[ ${#tasklist[@]} -eq 0 ]] && return 0 || return 1
}
################################################################################
# The following function ensures that any sleeping management tasks that are
# on non-platform cores are migrated to platform cores as soon as they are
# scheduled. It can be invoked either manually or from the goenableCompute
# script as a scheduled job (with a few minutes' delay) if desired.
# The induced task migration should be done after all VMs have been restored
# following a host reboot in AIO, hence the delay.
################################################################################
function move_inactive_threads_to_platform_cores()
{
local tasklist
local cmd_str=""
# Compile a list of non-kernel & non-vswitch/VM related threads that are not
# on platform cores.
# e.g. if the platform cpulist value is "0 8", the resulting command to be
# evaluated should look like this:
# ps-sched.sh|grep -v vswitch|awk '($10!=0 && $10!=8 && $3!=2) {if(NR>1)print $2}'
cmd_str="ps-sched.sh|grep -v vswitch|awk '("
for cpu_num in ${PLATFORM_CPULIST}; do
cmd_str=$cmd_str"\$10!="${cpu_num}" && "
done
cmd_str=$cmd_str"\$3!=2) {if(NR>1)print \$2}'"
echo "selection string = $cmd_str"
tasklist=($(eval $cmd_str))
log_debug "${TAG} There are ${#tasklist[@]} tasks to be moved."
# These sleep tasks are stuck on the wrong core(s). They need to be woken up
# so they can be migrated to the right ones. Attaching and detaching strace
# momentarily to the task does the trick.
for task in ${tasklist[@]}; do
strace -p $task 2>/dev/null &
pid=$!
sleep 0.1
kill -SIGINT $pid
done
tasklist=($(eval $cmd_str))
[[ ${#tasklist[@]} -eq 0 ]] && return 0 || return 1
}
################################################################################
# The following function is called by affine-platform.sh to affine tasks to
# all available cores during initial startup and subsequent host reboots.
################################################################################
function affine_tasks_to_all_cores()
{
local pidlist
local rc=0
if [[ "${KERNEL}" == *" RT "* ]]; then
return 0
fi
log_debug "${TAG} Affining all tasks to CPU (${FULLSET_CPUS})"
pidlist=$(ps --ppid 2 -p 2 --deselect -o pid= | awk '{ print $1; }')
for pid in ${pidlist[@]}; do
ppid=$(ps -o ppid= -p $pid |tr -d '[:space:]')
if [ -z $ppid ] || [ $ppid -eq 2 ]; then
continue
fi
log_debug "Affining pid $pid, parent pid = $ppid"
taskset --all-tasks --pid --cpu-list ${FULLSET_CPUS} $pid &> /dev/null
rc=$?
[[ $rc -ne 0 ]] && log_error "Failed to set CPU affinity for pid $pid, rc=$rc"
done
# Write the cpu list to a temp file which will be read and removed when
# the tasks are reaffined back to platform cores later on.
echo ${FULLSET_CPUS} > ${TASK_AFFINING_INCOMPLETE}
return $rc
}
################################################################################
# The following function can be called by any platform service that needs to
# temporarily make use of idle VM cores to run a short-duration,
# service-critical and cpu-intensive operation in AIO. For instance, sm can
# leverage the idle cores to speed up swact activity.
#
# At the end of the operation, regardless of the result, the service must
# call affine_tasks_to_platform_cores to re-affine platform tasks back to
# their assigned core(s).
#
# Kernel, vswitch and VM related tasks are untouched.
################################################################################
function affine_tasks_to_idle_cores()
{
local cpulist
local cpuocc_list
local vswitch_pid
local pidlist
local idle_cpulist
local platform_cpus
local rc=0
local cpu=0
if [ -f ${TASK_AFFINING_INCOMPLETE} ]; then
read cpulist < ${TASK_AFFINING_INCOMPLETE}
log_debug "${TAG} Tasks have already been affined to CPU ($cpulist)."
return 0
fi
if [[ "${KERNEL}" == *" RT "* ]]; then
return 0
fi
# Compile a list of cpus with idle percentage greater than 95% in the last
# 5 seconds.
cpuocc_list=($(sar -P ALL 1 5|grep Average|awk '{if(NR>2)print $8}'))
for idle_value in ${cpuocc_list[@]}; do
is_vswitch_core $cpu
if [ $? -eq 1 ]; then
((cpu++))
continue
fi
is_platform_core $cpu
if [ $? -eq 1 ]; then
# Platform core is added to the idle list by default
idle_cpulist=$idle_cpulist$cpu","
else
# Non-platform core is added to the idle list only if it is more than 95% idle
[[ $(echo "$idle_value > ${IDLE_MARK}"|bc) -eq 1 ]] && idle_cpulist=$idle_cpulist$cpu","
fi
((cpu++))
done
idle_cpulist=$(echo $idle_cpulist|sed 's/.$//')
platform_affinity_mask=$(cpulist_to_cpumap ${PLATFORM_CPUS} ${N_CPUS} \
|awk '{print tolower($0)}')
log_debug "${TAG} Affining all tasks to idle CPU ($idle_cpulist)"
vswitch_pid=$(pgrep vswitch)
pidlist=$(ps --ppid 2 -p 2 --deselect -o pid= | awk '{ print $1; }')
for pid in ${pidlist[@]}; do
ppid=$(ps -o ppid= -p $pid |tr -d '[:space:]')
if [ -z $ppid ] || [ $ppid -eq 2 ] || [ "$pid" = "$vswitch_pid" ]; then
continue
fi
pid_affinity_mask=$(taskset -p $pid | awk '{print $6}')
if [ "${pid_affinity_mask}" == "${platform_affinity_mask}" ]; then
# log_debug "Affining pid $pid to idle cores..."
taskset --all-tasks --pid --cpu-list $idle_cpulist $pid &> /dev/null
rc=$?
[[ $rc -ne 0 ]] && log_error "Failed to set CPU affinity for pid $pid, rc=$rc"
fi
done
# Save the cpu list to the temp file which will be read and removed when
# tasks are reaffined to the platform cores later on.
echo $idle_cpulist > ${TASK_AFFINING_INCOMPLETE}
return $rc
}
################################################################################
# The following function is called by either:
# a) nova-compute wrapper script during AIO system initial bringup or reboot
# or
# b) sm at the end of swact sequence
# to re-affine management tasks back to the platform cores.
################################################################################
function affine_tasks_to_platform_cores()
{
local cpulist
local pidlist
local rc=0
local count=0
if [ ! -f ${TASK_AFFINING_INCOMPLETE} ]; then
dbg_str="${TAG} Either tasks have never been affined to all/idle cores or"
dbg_str=$dbg_str" they have already been reaffined to platform cores."
log_debug "$dbg_str"
return 0
fi
read cpulist < ${TASK_AFFINING_INCOMPLETE}
affinity_mask=$(cpulist_to_cpumap $cpulist ${N_CPUS}|awk '{print tolower($0)}')
log_debug "${TAG} Reaffining tasks to platform cores (${PLATFORM_CPUS})..."
pidlist=$(ps --ppid 2 -p 2 --deselect -o pid= | awk '{ print $1; }')
for pid in ${pidlist[@]}; do
# log_debug "Processing pid $pid..."
pid_affinity_mask=$(taskset -p $pid | awk '{print $6}')
# Only management tasks need to be reaffined. Kernel, vswitch and VM related
# tasks were not affined previously so they should have different affinity
# mask(s).
if [ "${pid_affinity_mask}" == "${affinity_mask}" ]; then
((count++))
# log_debug "Affining pid $pid to platform cores..."
taskset --all-tasks --pid --cpu-list ${PLATFORM_CPUS} $pid &> /dev/null
rc=$?
[[ $rc -ne 0 ]] && log_error "Failed to set CPU affinity for pid $pid, rc=$rc"
fi
done
# A workaround for lack of "end of swact" state
fullmask=$(echo ${FULLSET_MASK} | awk '{print tolower($0)}')
if [ "${affinity_mask}" != "${fullmask}" ]; then
log_debug "${TAG} Schedule an audit and cleanup"
(sleep 60; audit_and_reaffine "0x"$affinity_mask) &
fi
rm -rf ${TASK_AFFINING_INCOMPLETE}
log_debug "${TAG} $count tasks were reaffined to platform cores."
return $rc
}
################################################################################
# The following function can be leveraged by cron tasks
################################################################################
function get_most_idle_core()
{
local cpuocc_list
local cpu=0
local most_idle_value=${IDLE_MARK}
local most_idle_cpu=0
if [[ "${KERNEL}" == *" RT "* ]]; then
echo $cpu
return
fi
cpuocc_list=($(sar -P ALL 1 5|grep Average|awk '{if(NR>2)print $8}'))
for idle_value in ${cpuocc_list[@]}; do
is_vswitch_core $cpu
if [ $? -eq 1 ]; then
((cpu++))
continue
fi
if [ $(echo "$idle_value > $most_idle_value"|bc) -eq 1 ]; then
most_idle_value=$idle_value
most_idle_cpu=$cpu
fi
((cpu++))
done
echo $most_idle_cpu
}
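The mask comparisons in the functions above hinge on converting a cpu list into a hex affinity bitmap, done elsewhere by the `cpulist_to_cpumap` helper (which is not defined in this file). A minimal sketch of the equivalent math, using an assumed platform cpulist of cores 0 and 8:

```shell
# Each cpu N contributes bit (1 << N); OR the bits together and print
# lowercase hex, matching the "awk '{print tolower($0)}'" post-processing
# applied to the helper's output above.
cpulist_to_mask() {
    local mask=0 cpu
    for cpu in "$@"; do
        mask=$(( mask | (1 << cpu) ))
    done
    printf '%x\n' "$mask"
}
cpulist_to_mask 0 8
# prints: 101
```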


@ -0,0 +1,241 @@
#!/usr/bin/env python
################################################################################
# Copyright (c) 2013 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
################################################################################
#
# topology.py -- gives a summary of logical cpu enumeration,
# sockets, cores per package, threads per core,
# total memory, and numa nodes
import os
import sys
import re
class Topology(object):
""" Build up topology information.
(i.e. logical cpu topology, NUMA nodes, memory)
"""
def __init__(self):
self.num_cpus = 0
self.num_nodes = 0
self.num_sockets = 0
self.num_cores_per_pkg = 0
self.num_threads_per_core = 0
self.topology = {}
self.topology_idx = {}
self.total_memory_MiB = 0
self.total_memory_nodes_MiB = []
self._get_cpu_topology()
self._get_total_memory_MiB()
self._get_total_memory_nodes_MiB()
def _get_cpu_topology(self):
'''Enumerate logical cpu topology based on parsing /proc/cpuinfo
as function of socket_id, core_id, and thread_id. This updates
topology and reverse index topology_idx mapping.
:param self
:updates self.num_cpus - number of logical cpus
:updates self.num_nodes - number of sockets; maps to number of numa nodes
:updates self.topology[socket_id][core_id][thread_id] = cpu
:updates self.topology_idx[cpu] = {'s': socket_id, 'c': core_id, 't': thread_id}
:returns None
'''
self.num_cpus = 0
self.num_nodes = 0
self.num_sockets = 0
self.num_cores = 0
self.num_threads = 0
self.topology = {}
self.topology_idx = {}
Thread_cnt = {}
cpu = socket_id = core_id = thread_id = -1
re_processor = re.compile(r'^[Pp]rocessor\s+:\s+(\d+)')
re_socket = re.compile(r'^physical id\s+:\s+(\d+)')
re_core = re.compile(r'^core id\s+:\s+(\d+)')
with open('/proc/cpuinfo', 'r') as infile:
for line in infile:
match = re_processor.search(line)
if match:
cpu = int(match.group(1))
socket_id = -1; core_id = -1; thread_id = -1
self.num_cpus += 1
continue
match = re_socket.search(line)
if match:
socket_id = int(match.group(1))
continue
match = re_core.search(line)
if match:
core_id = int(match.group(1))
if not Thread_cnt.has_key(socket_id):
Thread_cnt[socket_id] = {}
if not Thread_cnt[socket_id].has_key(core_id):
Thread_cnt[socket_id][core_id] = 0
else:
Thread_cnt[socket_id][core_id] += 1
thread_id = Thread_cnt[socket_id][core_id]
if not self.topology.has_key(socket_id):
self.topology[socket_id] = {}
if not self.topology[socket_id].has_key(core_id):
self.topology[socket_id][core_id] = {}
self.topology[socket_id][core_id][thread_id] = cpu
self.topology_idx[cpu] = {'s': socket_id, 'c': core_id, 't': thread_id}
continue
self.num_nodes = len(self.topology.keys())
# If the topology was not detected, fall back to hard-coded structures
if self.num_nodes == 0:
n_sockets, n_cores, n_threads = (1, self.num_cpus, 1)
self.topology = {}
for socket_id in range(n_sockets):
self.topology[socket_id] = {}
for core_id in range(n_cores):
self.topology[socket_id][core_id] = {}
for thread_id in range(n_threads):
self.topology[socket_id][core_id][thread_id] = 0
# Define Thread-Socket-Core order for logical cpu enumeration
self.topology_idx = {}
cpu = 0
for thread_id in range(n_threads):
for socket_id in range(n_sockets):
for core_id in range(n_cores):
self.topology[socket_id][core_id][thread_id] = cpu
self.topology_idx[cpu] = {'s': socket_id, 'c': core_id, 't': thread_id}
cpu += 1
self.num_nodes = len(self.topology.keys())
self.num_sockets = len(self.topology.keys())
self.num_cores_per_pkg = len(self.topology[0].keys())
self.num_threads_per_core = len(self.topology[0][0].keys())
return None
def _get_total_memory_MiB(self):
"""Get the total memory for VMs (MiB).
:updates: total memory for VMs (MiB)
"""
self.total_memory_MiB = 0
# Total memory
try:
m = open('/proc/meminfo').read().split()
idx_Total = m.index('MemTotal:') + 1
self.total_memory_MiB = int(m[idx_Total]) / 1024
except IOError:
# silently ignore IO errors (eg. file missing)
pass
return None
def _get_total_memory_nodes_MiB(self):
"""Get the total memory per numa node for VMs (MiB).
:updates: total memory per numa node for VMs (MiB)
"""
self.total_memory_nodes_MiB = []
# Memory of each numa node (MiB)
for node in range(self.num_nodes):
Total_MiB = 0
meminfo = "/sys/devices/system/node/node%d/meminfo" % node
try:
m = open(meminfo).read().split()
idx_Total = m.index('MemTotal:') + 1
Total_MiB = int(m[idx_Total]) / 1024
except IOError:
# silently ignore IO errors (eg. file missing)
pass
self.total_memory_nodes_MiB.append(Total_MiB)
return None
def _print_cpu_topology(self):
'''Print logical cpu topology enumeration as function of:
socket_id, core_id, and thread_id.
:param self
:returns None
'''
cpu_list = self.topology_idx.keys()
cpu_list.sort()
total_memory_GiB = self.total_memory_MiB/1024.0
print 'TOPOLOGY:'
print '%16s : %5d' % ('logical cpus', self.num_cpus)
print '%16s : %5d' % ('sockets', self.num_sockets)
print '%16s : %5d' % ('cores_per_pkg', self.num_cores_per_pkg)
print '%16s : %5d' % ('threads_per_core', self.num_threads_per_core)
print '%16s : %5d' % ('numa_nodes', self.num_nodes)
print '%16s : %5.2f %s' % ('total_memory', total_memory_GiB, 'GiB')
print '%16s :' % ('memory_per_node'),
for node in range(self.num_nodes):
node_memory_GiB = self.total_memory_nodes_MiB[node]/1024.0
print '%5.2f' % (node_memory_GiB),
print '%s' % ('GiB')
print
print 'LOGICAL CPU TOPOLOGY:'
print "%9s :" % 'cpu_id',
for cpu in cpu_list:
print "%3d" % cpu,
print
print "%9s :" % 'socket_id',
for cpu in cpu_list:
socket_id = self.topology_idx[cpu]['s']
print "%3d" % socket_id,
print
print "%9s :" % 'core_id',
for cpu in cpu_list:
core_id = self.topology_idx[cpu]['c']
print "%3d" % core_id,
print
print "%9s :" % 'thread_id',
for cpu in cpu_list:
thread_id = self.topology_idx[cpu]['t']
print "%3d" % thread_id,
print
print
print 'CORE TOPOLOGY:'
print "%6s %9s %7s %9s %s" % ('cpu_id', 'socket_id', 'core_id', 'thread_id', 'affinity')
for cpu in cpu_list:
affinity = 1<<cpu
socket_id = self.topology_idx[cpu]['s']
core_id = self.topology_idx[cpu]['c']
thread_id = self.topology_idx[cpu]['t']
print "%6d %9d %7d %9d 0x%x" \
% (cpu, socket_id, core_id, thread_id, affinity)
return None
#-------------------------------------------------------------------------------
''' Main Program
'''
# Get logical cpu topology
topology = Topology()
topology._print_cpu_topology()
sys.exit(0)

computeconfig/.gitignore vendored Normal file

@ -0,0 +1,6 @@
!.distro
.distro/centos7/rpmbuild/RPMS
.distro/centos7/rpmbuild/SRPMS
.distro/centos7/rpmbuild/BUILD
.distro/centos7/rpmbuild/BUILDROOT
.distro/centos7/rpmbuild/SOURCES/computeconfig*tar.gz

computeconfig/PKG-INFO Normal file

@ -0,0 +1,13 @@
Metadata-Version: 1.1
Name: computeconfig
Version: 1.0
Summary: Initial compute node configuration
Home-page:
Author: Windriver
Author-email: info@windriver.com
License: Apache-2.0
Description: Initial compute node configuration
Platform: UNKNOWN


@ -0,0 +1,2 @@
SRC_DIR="computeconfig"
TIS_PATCH_VER=11


@ -0,0 +1,99 @@
Summary: computeconfig
Name: computeconfig
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz
%define debug_package %{nil}
Requires: systemd
%description
Initial compute node configuration
%package -n computeconfig-standalone
Summary: computeconfig
Group: base
%description -n computeconfig-standalone
Initial compute node configuration
%package -n computeconfig-subfunction
Summary: computeconfig
Group: base
%description -n computeconfig-subfunction
Initial compute node configuration
%define local_etc_initd /etc/init.d/
%define local_goenabledd /etc/goenabled.d/
%define local_etc_systemd /etc/systemd/system/
%prep
%setup
%build
%install
install -d -m 755 %{buildroot}%{local_etc_initd}
install -p -D -m 700 compute_config %{buildroot}%{local_etc_initd}/compute_config
install -p -D -m 700 compute_services %{buildroot}%{local_etc_initd}/compute_services
install -d -m 755 %{buildroot}%{local_goenabledd}
install -p -D -m 755 config_goenabled_check.sh %{buildroot}%{local_goenabledd}/config_goenabled_check.sh
install -d -m 755 %{buildroot}%{local_etc_systemd}
install -d -m 755 %{buildroot}%{local_etc_systemd}/config
install -p -D -m 664 computeconfig.service %{buildroot}%{local_etc_systemd}/config/computeconfig-standalone.service
install -p -D -m 664 computeconfig-combined.service %{buildroot}%{local_etc_systemd}/config/computeconfig-combined.service
#install -p -D -m 664 config.service %{buildroot}%{local_etc_systemd}/config.service
%post -n computeconfig-standalone
if [ ! -e $D%{local_etc_systemd}/computeconfig.service ]; then
cp $D%{local_etc_systemd}/config/computeconfig-standalone.service $D%{local_etc_systemd}/computeconfig.service
else
cmp -s $D%{local_etc_systemd}/config/computeconfig-standalone.service $D%{local_etc_systemd}/computeconfig.service
if [ $? -ne 0 ]; then
rm -f $D%{local_etc_systemd}/computeconfig.service
cp $D%{local_etc_systemd}/config/computeconfig-standalone.service $D%{local_etc_systemd}/computeconfig.service
fi
fi
systemctl enable computeconfig.service
%post -n computeconfig-subfunction
if [ ! -e $D%{local_etc_systemd}/computeconfig.service ]; then
cp $D%{local_etc_systemd}/config/computeconfig-combined.service $D%{local_etc_systemd}/computeconfig.service
else
cmp -s $D%{local_etc_systemd}/config/computeconfig-combined.service $D%{local_etc_systemd}/computeconfig.service
if [ $? -ne 0 ]; then
rm -f $D%{local_etc_systemd}/computeconfig.service
cp $D%{local_etc_systemd}/config/computeconfig-combined.service $D%{local_etc_systemd}/computeconfig.service
fi
fi
systemctl enable computeconfig.service
%clean
# rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
%doc LICENSE
%{local_etc_initd}/*
%files -n computeconfig-standalone
%defattr(-,root,root,-)
%dir %{local_etc_systemd}/config
%{local_etc_systemd}/config/computeconfig-standalone.service
#%{local_etc_systemd}/config.service
%{local_goenabledd}/*
%files -n computeconfig-subfunction
%defattr(-,root,root,-)
%dir %{local_etc_systemd}/config
%{local_etc_systemd}/config/computeconfig-combined.service


@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,383 @@
#!/bin/bash
#
# Copyright (c) 2013-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
#
# chkconfig: 2345 80 80
#
### BEGIN INIT INFO
# Provides: compute_config
# Short-Description: Compute node config agent
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
### END INIT INFO
. /usr/bin/tsconfig
. /etc/platform/platform.conf
PLATFORM_DIR=/opt/platform
CONFIG_DIR=$CONFIG_PATH
VOLATILE_CONFIG_PASS="/var/run/.config_pass"
VOLATILE_CONFIG_FAIL="/var/run/.config_fail"
LOGFILE="/var/log/compute_config.log"
IMA_POLICY=/etc/ima.policy
# Copy of /opt/platform required for compute_services
VOLATILE_PLATFORM_PATH=$VOLATILE_PATH/cpe_upgrade_opt_platform
DELAY_SEC=600
# If we're on a controller, increase DELAY_SEC to a large value
# to allow for active services to recover from a reboot or DOR
if [ "$nodetype" = "controller" ]
then
DELAY_SEC=900
fi
fatal_error()
{
cat <<EOF
*****************************************************
*****************************************************
$1
*****************************************************
*****************************************************
EOF
touch $VOLATILE_CONFIG_FAIL
logger "Error: $1"
echo "Pausing for 5 seconds..."
sleep 5
exit 1
}
get_ip()
{
local host=$1
# Check /etc/hosts for the hostname
local ipaddr=$(cat /etc/hosts | awk -v host=$host '$2 == host {print $1}')
if [ -n "$ipaddr" ]
then
echo $ipaddr
return
fi
START=$SECONDS
let -i UNTIL=${SECONDS}+${DELAY_SEC}
while [ ${UNTIL} -ge ${SECONDS} ]
do
# dnsmasq can resolve a hostname to both an IPv4 and an IPv6 address in
# certain situations. The last address returned is the IPv6 one, i.e. the
# management address, which is preferred over the IPv4 pxeboot address,
# so take the last address only.
ipaddr=$(dig +short ANY $host|tail -1)
if [[ "$ipaddr" =~ ^[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*$ ]]
then
let -i DURATION=$SECONDS-$START
logger -t $0 -p info "DNS query resolved to $ipaddr (took ${DURATION} secs)"
echo $ipaddr
return
fi
if [[ "$ipaddr" =~ ^[0-9a-z]*\:[0-9a-z\:]*$ ]]
then
let -i DURATION=$SECONDS-$START
logger -t $0 -p info "DNS query resolved to $ipaddr (took ${DURATION} secs)"
echo $ipaddr
return
fi
logger -t $0 -p warn "DNS query failed for $host"
sleep 5
done
let -i DURATION=$SECONDS-$START
logger -t $0 -p warn "DNS query failed after max retries for $host (${DURATION} secs)"
}
wait_for_controller_services()
{
SERVICE="platform-nfs-ip"
while [ "$SECONDS" -le "$DELAY_SEC" ]
do
# Check to make sure the service is enabled
sm-query service ${SERVICE} | grep -q "enabled-active"
if [ $? -eq 0 ]
then
return 0
fi
# Not running. Wait a couple of seconds and check again.
sleep 2
done
return 1
}
start()
{
if [ -f /etc/platform/installation_failed ] ; then
fatal_error "/etc/platform/installation_failed flag is set. Aborting."
fi
function=`echo "$subfunction" | cut -f 2 -d','`
if [ "$nodetype" != "compute" -a "$function" != "compute" ] ; then
logger -t $0 -p warn "exiting because this is not compute host"
exit 0
fi
# If we're on a controller, ensure we only run if the controller config is complete
if [ "$nodetype" = "controller" -a ! -f /etc/platform/.initial_controller_config_complete ]
then
logger -t $0 -p warn "exiting because this is controller that has not completed initial config"
exit 0
fi
# Exit in error if called while the fail flag file is present
if [ -e $VOLATILE_CONFIG_FAIL ] ; then
logger -t $0 -p warn "exiting due to presence of $VOLATILE_CONFIG_FAIL file"
exit 1
fi
# remove previous pass flag file so that if this fails we don't
# end up with both pass and fail flag files present
rm -f $VOLATILE_CONFIG_PASS
if [ "$(stat -c %d:%i /)" != "$(stat -c %d:%i /proc/1/root/.)" ]; then
# we are in chroot installer environment
exit 0
fi
echo "Configuring compute node..."
###### SECURITY PROFILE (EXTENDED) ######
# If we are in Extended Security Profile mode, then before anything
# else we need to load the IMA policy so that all configuration
# operations can be measured and appraised.
#
# N.B.: Only run for the compute nodetype; for AIO, controllerconfig
# will already have enabled the IMA policy.
##########################################
if [ "$nodetype" = "compute" -a "${security_profile}" = "extended" ]
then
IMA_LOAD_PATH=/sys/kernel/security/ima/policy
if [ -f ${IMA_LOAD_PATH} ]; then
echo "Loading IMA Policy"
# Best-effort operation only; if the policy is
# malformed, the audit logs will indicate this,
# and the customer will need to load the policy manually
cat $IMA_POLICY > ${IMA_LOAD_PATH}
[ $? -eq 0 ] || logger -t $0 -p warn "IMA Policy could not be loaded, see audit.log"
else
# The securityfs mount would have been created
# had the IMA module loaded properly, so this
# is a fatal error
fatal_error "${IMA_LOAD_PATH} not available. Aborting."
fi
fi
HOST=$(hostname)
if [ -z "$HOST" -o "$HOST" = "localhost" ]
then
fatal_error "Host undefined. Unable to perform config"
fi
date "+%FT%T.%3N" > $LOGFILE
IPADDR=$(get_ip $HOST)
if [ -z "$IPADDR" ]
then
fatal_error "Unable to get IP from host: $HOST"
fi
# Wait for controller services to be ready on an AIO system,
# since pinging the loopback interface always returns ok
if [ -e "${PLATFORM_SIMPLEX_FLAG}" ]
then
echo "Wait for the controller services"
wait_for_controller_services
if [ $? -ne 0 ]
then
fatal_error "Controller services are not ready"
fi
else
/usr/local/bin/connectivity_test -t ${DELAY_SEC} -i ${IPADDR} controller-platform-nfs
if [ $? -ne 0 ]
then
# 'controller-platform-nfs' is not available from management address
fatal_error "Unable to contact active controller (controller-platform-nfs) from management address"
fi
fi
# Write the hostname to file so it's persistent
echo $HOST > /etc/hostname
if ! [ -e "${PLATFORM_SIMPLEX_FLAG}" ]
then
# Mount the platform filesystem (if necessary - could be auto-mounted by now)
mkdir -p $PLATFORM_DIR
if [ ! -f $CONFIG_DIR/hosts ]
then
nfs-mount controller-platform-nfs:$PLATFORM_DIR $PLATFORM_DIR > /dev/null 2>&1
RC=$?
if [ $RC -ne 0 ]
then
fatal_error "Unable to mount $PLATFORM_DIR (RC:$RC)"
fi
fi
fi
if [ "$nodetype" = "compute" ]
then
# Check whether our installed load matches the active controller
CONTROLLER_UUID=$(curl -sf http://controller/feed/rel-${SW_VERSION}/install_uuid)
if [ $? -ne 0 ]
then
fatal_error "Unable to retrieve installation uuid from active controller"
fi
if [ "$INSTALL_UUID" != "$CONTROLLER_UUID" ]
then
fatal_error "This node is running a different load than the active controller and must be reinstalled"
fi
fi
# Banner customization always returns 0 (success):
/usr/sbin/install_banner_customization
cp $CONFIG_DIR/hosts /etc/hosts
if [ $? -ne 0 ]
then
fatal_error "Unable to copy $CONFIG_DIR/hosts"
fi
if [ "$nodetype" = "controller" -a "$HOST" = "controller-1" ]
then
# In a small system restore, there may be instance data that we want to
# restore. Copy it and delete it.
MATE_INSTANCES_DIR="$CONFIG_DIR/controller-1_nova_instances"
if [ -d "$MATE_INSTANCES_DIR" ]
then
echo "Restoring instance data from mate controller"
cp -Rp $MATE_INSTANCES_DIR/* /etc/nova/instances/
rm -rf $MATE_INSTANCES_DIR
fi
fi
# Upgrade related checks for controller-1 in combined controller/compute
if [ "$nodetype" = "controller" -a "$HOST" = "controller-1" ]
then
# Check controller activity.
# Prior to the final compile of R5, the service check below was made
# against platform-nfs-ip. However, a compute subfunction
# configuration failure occurred when an AIO-DX system controller
# booted while there was no pingable backup controller: the
# platform-nfs-ip service did not always reach the enabled-active
# state by the time this check ran under that failure. An
# earlier-launched service of like functionality, platform-export-fs,
# is reliably enabled at this point, thereby resolving the issue.
sm-query service platform-export-fs | grep enabled-active > /dev/null 2>&1
if [ $? -ne 0 ]
then
# This controller is not active so it is safe to check the version
# of the mate controller.
VOLATILE_ETC_PLATFORM_MOUNT=$VOLATILE_PATH/etc_platform
mkdir $VOLATILE_ETC_PLATFORM_MOUNT
nfs-mount controller-0:/etc/platform $VOLATILE_ETC_PLATFORM_MOUNT
if [ $? -eq 0 ]
then
# Check whether software versions match on the two controllers
MATE_SW_VERSION=$(source $VOLATILE_ETC_PLATFORM_MOUNT/platform.conf && echo $sw_version)
if [ "$SW_VERSION" != "$MATE_SW_VERSION" ]
then
echo "Controllers are running different software versions"
echo "SW_VERSION: $SW_VERSION MATE_SW_VERSION: $MATE_SW_VERSION"
# Since controller-1 is always upgraded first (and downgraded
# last), we know that controller-1 is running a higher release
# than controller-0.
# This controller is not active and is running a higher
# release than the mate controller, so do not launch
# any of the compute services (they will not work with
# a lower version of the controller services).
echo "Disabling compute services until controller activated"
touch $VOLATILE_DISABLE_COMPUTE_SERVICES
# Copy $PLATFORM_DIR into a temporary location for the compute_services script to
# access. This is only required for CPE upgrades
rm -rf $VOLATILE_PLATFORM_PATH
mkdir -p $VOLATILE_PLATFORM_PATH
cp -Rp $PLATFORM_DIR/* $VOLATILE_PLATFORM_PATH/
fi
umount $VOLATILE_ETC_PLATFORM_MOUNT
rmdir $VOLATILE_ETC_PLATFORM_MOUNT
else
rmdir $VOLATILE_ETC_PLATFORM_MOUNT
fatal_error "Unable to mount /etc/platform"
fi
else
# Controller-1 (CPE) is active and is rebooting. This is probably a DOR. Since this
# could happen during an upgrade, we will copy $PLATFORM_DIR into a temporary
# location for the compute_services script to access in case of a future swact.
rm -rf $VOLATILE_PLATFORM_PATH
mkdir -p $VOLATILE_PLATFORM_PATH
cp -Rp $PLATFORM_DIR/* $VOLATILE_PLATFORM_PATH/
fi
fi
# Apply the puppet manifest
HOST_HIERA=${PUPPET_PATH}/hieradata/${IPADDR}.yaml
if [ -f ${HOST_HIERA} ]; then
echo "$0: Running puppet manifest apply"
puppet-manifest-apply.sh ${PUPPET_PATH}/hieradata ${IPADDR} compute
RC=$?
if [ $RC -ne 0 ]; then
fatal_error "Failed to run the puppet manifest (RC:$RC)"
fi
else
fatal_error "Host configuration not yet available for this node ($(hostname)=${IPADDR}); aborting configuration."
fi
# Load Network Block Device
modprobe nbd
if [ $? -ne 0 ]
then
echo "WARNING: Unable to load kernel module: nbd."
logger "WARNING: Unable to load kernel module: nbd."
fi
# Mount any NFS filesystems that require network access
/bin/mount -a -t nfs
RC=$?
if [ $RC -ne 0 ]
then
fatal_error "Unable to mount NFS filesystems (RC:$RC)"
fi
touch $VOLATILE_CONFIG_PASS
}
stop ()
{
# Nothing to do
return
}
case "$1" in
start)
start
;;
stop)
stop
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
;;
esac
exit 0


@ -0,0 +1,220 @@
#!/bin/bash
#
# Copyright (c) 2016-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
#
# This script provides support for CPE upgrades. It will be called during swacts
# by the /usr/local/sbin/sm-notification python script, if we are in a small
# footprint system (CPE)
#
# During a swact to, the script will delete the $VOLATILE_DISABLE_COMPUTE_SERVICES
# flag and re-apply the compute manifests.
# During a swact away from (downgrades), the script re-creates the
# $VOLATILE_DISABLE_COMPUTE_SERVICES flag and re-apply the compute manifests.
#
# This script should only re-apply the compute manifests if:
# - It is running on a CPE (small footprint) system
# - It is controller-1
# - Controller-0 has not yet been upgraded
#
# This script logs to /var/log/platform.log
#
. /usr/bin/tsconfig
. /etc/platform/platform.conf
VOLATILE_CONFIG_PASS="/var/run/.config_pass"
VOLATILE_CONFIG_FAIL="/var/run/.config_fail"
IN_PROGRESS="/var/run/.compute_services_in_progress"
TEMP_MATE_ETC_DIR="$VOLATILE_PATH/etc_platform_compute"
TEMP_PUPPET_DIR="$VOLATILE_PATH/puppet_compute"
# Copy of /opt/platform populated by compute_config
VOLATILE_PLATFORM_PATH=$VOLATILE_PATH/cpe_upgrade_opt_platform
# Process id and full filename of this executable
NAME="[$$] $0($1)"
end_exec()
{
rm $IN_PROGRESS
exit 0
}
init()
{
local action_to_perform=$1
# This will log to /var/log/platform.log
logger -t $NAME -p local1.info "Begin ..."
# Check if this program is currently executing; if so, sleep for 5 seconds and check again.
# After 10 minutes of waiting, assume something is wrong and exit.
count=0
while [ -f $IN_PROGRESS ] ; do
if [ $count -gt 120 ] ; then
logger -t $NAME -p local1.error "Execution completion of previous call is taking more than 10 minutes. Exiting."
end_exec
fi
logger -t $NAME -p local1.info "Sleep for 5 seconds"
let count++
sleep 5
done
touch $IN_PROGRESS
HOST=$(hostname)
if [ -z "$HOST" -o "$HOST" = "localhost" ] ; then
logger -t $NAME -p local1.error "Host undefined"
end_exec
fi
# this script should only be performed on controller-1
if [ "$HOST" != "controller-1" ] ; then
logger -t $NAME -p local1.info "Exiting because this is not controller-1"
end_exec
fi
# This script should only be called if we are in a CPE system
sub_function=$(echo "$subfunction" | cut -f 2 -d',')
if [ "$sub_function" != "compute" ] ; then
logger -t $NAME -p local1.error "Exiting because this is not a CPE host"
end_exec
fi
# Exit if called while the config compute success flag file is not present
if [ ! -f $VOLATILE_CONFIG_PASS ] ; then
logger -t $NAME -p local1.info "Exiting due to non-presence of $VOLATILE_CONFIG_PASS file"
end_exec
fi
# Exit if called while the config compute failure flag file is present
if [ -f $VOLATILE_CONFIG_FAIL ] ; then
logger -t $NAME -p local1.info "Exiting due to presence of $VOLATILE_CONFIG_FAIL file"
end_exec
fi
# Ensure we only run if the controller config is complete
if [ ! -f /etc/platform/.initial_controller_config_complete ] ; then
logger -t $NAME -p local1.warn "exiting because CPE controller that has not completed initial config"
end_exec
fi
IPADDR=$(awk -v host="$HOST" '$2 == host {print $1}' /etc/hosts)
if [ -z "$IPADDR" ] ; then
logger -t $NAME -p local1.error "Unable to get IP from host: $HOST"
end_exec
fi
# The platform filesystem was mounted in compute_config and copied in a temp
# location
if [ ! -f $VOLATILE_PLATFORM_PATH/config/${SW_VERSION}/hosts ] ; then
logger -t $NAME -p local1.error "Error accessing $VOLATILE_PLATFORM_PATH"
end_exec
fi
# Check the release version of controller-0
mkdir $TEMP_MATE_ETC_DIR
nfs-mount controller-0:/etc/platform $TEMP_MATE_ETC_DIR
if [ $? -eq 0 ] ; then
# Should only be executed when the releases do not match
MATE_SW_VERSION=$(source $TEMP_MATE_ETC_DIR/platform.conf && echo $sw_version)
logger -t $NAME -p local1.info "SW_VERSION: $SW_VERSION MATE_SW_VERSION: $MATE_SW_VERSION"
# Check whether software versions match on the two controllers
# Since controller-1 is always upgraded first (and downgraded
# last), we know that controller-1 is running a higher release
# than controller-0.
if [ "$SW_VERSION" == "$MATE_SW_VERSION" ] ; then
logger -t $NAME -p local1.info "Releases match... do not continue"
umount $TEMP_MATE_ETC_DIR
rmdir $TEMP_MATE_ETC_DIR
end_exec
fi
else
logger -t $NAME -p local1.error "Unable to mount /etc/platform"
rmdir $TEMP_MATE_ETC_DIR
end_exec
fi
umount $TEMP_MATE_ETC_DIR
rmdir $TEMP_MATE_ETC_DIR
# Copy the puppet data into $TEMP_PUPPET_DIR
VOLATILE_PUPPET_PATH=${VOLATILE_PLATFORM_PATH}/puppet/${SW_VERSION}
logger -t $NAME -p local1.info "VOLATILE_PUPPET_PATH = $VOLATILE_PUPPET_PATH"
rm -rf $TEMP_PUPPET_DIR
cp -R $VOLATILE_PUPPET_PATH $TEMP_PUPPET_DIR
if [ $? -ne 0 ] ; then
logger -t $NAME -p local1.error "Failed to copy puppet directory $VOLATILE_PUPPET_PATH to $TEMP_PUPPET_DIR"
end_exec
fi
# Update the VOLATILE_DISABLE_COMPUTE_SERVICES flag and stop nova-compute if in "stop"
if [ "$action_to_perform" == "stop" ] ; then
logger -t $NAME -p local1.info "Disabling compute services"
# Set the compute services disable flag used by the manifest
touch $VOLATILE_DISABLE_COMPUTE_SERVICES
# Stop nova-compute
logger -t $NAME -p local1.info "Stopping nova-compute"
/etc/init.d/e_nova-init stop
else
logger -t $NAME -p local1.info "Enabling compute services"
# Clear the compute services disable flag used by the manifest
rm $VOLATILE_DISABLE_COMPUTE_SERVICES
fi
# Apply the puppet manifest
HOST_HIERA=${TEMP_PUPPET_DIR}/hieradata/${IPADDR}.yaml
if [ -f ${HOST_HIERA} ]; then
echo "$0: Running puppet manifest apply"
puppet-manifest-apply.sh ${TEMP_PUPPET_DIR}/hieradata ${IPADDR} compute
RC=$?
if [ $RC -ne 0 ]; then
logger -t $NAME -p local1.info "Failed to run the puppet manifest (RC:$RC)"
end_exec
fi
else
logger -t $NAME -p local1.info "Host configuration not yet available for this node ($(hostname)=${IPADDR}); aborting configuration."
end_exec
fi
# Start nova-compute if we are starting compute services
if [ "$action_to_perform" == "start" ] ; then
logger -t $NAME -p local1.info "Starting nova-compute"
/etc/init.d/e_nova-init start
fi
# Cleanup
rm -rf $TEMP_PUPPET_DIR
logger -t $NAME -p local1.info "... Done"
end_exec
}
case "$1" in
start)
init $1
;;
stop)
init $1
;;
*)
logger -t $NAME -p local1.info "Usage: $0 {start|stop}"
exit 1
;;
esac
end_exec


@ -0,0 +1,21 @@
[Unit]
Description=computeconfig service
After=syslog.target network.service remote-fs.target
After=sw-patch.service
After=affine-platform.sh.service compute-huge.sh.service
After=controllerconfig.service config.service
After=goenabled.service
After=sysinv-agent.service
After=network-online.target
[Service]
Type=simple
ExecStart=/etc/init.d/compute_config start
ExecStop=
ExecReload=
StandardOutput=syslog+console
StandardError=syslog+console
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@ -0,0 +1,22 @@
[Unit]
Description=computeconfig service
After=syslog.target network.service remote-fs.target
After=sw-patch.service
After=affine-platform.sh.service compute-huge.sh.service
After=opt-platform.service
After=sysinv-agent.service
After=network-online.target
Before=config.service compute-config-gate.service
Before=goenabled.service
[Service]
Type=simple
ExecStart=/etc/init.d/compute_config start
ExecStop=
ExecReload=
StandardOutput=syslog+console
StandardError=syslog+console
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@ -0,0 +1,22 @@
#!/bin/bash
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Configuration "goenabled" check.
# If configuration failed, prevent the node from going enabled.
NAME=$(basename $0)
VOLATILE_CONFIG_FAIL="/var/run/.config_fail"
logfile=/var/log/patching.log
if [ -f $VOLATILE_CONFIG_FAIL ]
then
logger "$NAME: Node configuration has failed. Failing goenabled check."
exit 1
fi
exit 0

config-gate/PKG-INFO Normal file

@ -0,0 +1,13 @@
Metadata-Version: 1.1
Name: config-gate
Version: 1.0
Summary: General config initialization gate
Home-page:
Author: Windriver
Author-email: info@windriver.com
License: Apache-2.0
Description: General config initialization gate
Platform: UNKNOWN


@ -0,0 +1,2 @@
SRC_DIR="files"
TIS_PATCH_VER=0


@ -0,0 +1,59 @@
Summary: config-gate
Name: config-gate
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz
%define debug_package %{nil}
Requires: systemd
%description
Startup configuration gate
%package -n %{name}-compute
Summary: config-gate-compute
Group: base
%description -n %{name}-compute
Startup compute configuration gate
%define local_etc_systemd /etc/systemd/system/
%prep
%setup
%build
%install
install -d -m 755 %{buildroot}%{_sbindir}
install -p -D -m 555 wait_for_config_init.sh %{buildroot}%{_sbindir}/
install -p -D -m 555 wait_for_compute_config_init.sh %{buildroot}%{_sbindir}/
install -d -m 755 %{buildroot}%{local_etc_systemd}
install -p -D -m 444 config.service %{buildroot}%{local_etc_systemd}/config.service
install -p -D -m 444 compute-config-gate.service %{buildroot}%{local_etc_systemd}/compute-config-gate.service
%post
systemctl enable config.service
%post -n %{name}-compute
systemctl enable compute-config-gate.service
%clean
# rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
%doc LICENSE
%{_sbindir}/wait_for_config_init.sh
%{local_etc_systemd}/config.service
%files -n %{name}-compute
%defattr(-,root,root,-)
%{_sbindir}/wait_for_compute_config_init.sh
%{local_etc_systemd}/compute-config-gate.service

config-gate/files/LICENSE Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,15 @@
[Unit]
Description=TIS compute config gate
After=sw-patch.service computeconfig.service
Before=serial-getty@ttyS0.service getty@tty1.service
[Service]
Type=oneshot
ExecStart=/usr/sbin/wait_for_compute_config_init.sh
ExecStop=
ExecReload=
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@ -0,0 +1,16 @@
[Unit]
Description=General TIS config gate
After=sw-patch.service
Before=serial-getty@ttyS0.service getty@tty1.service
# Each config service must have a Before statement against config.service, to ensure ordering
[Service]
Type=oneshot
ExecStart=/usr/sbin/wait_for_config_init.sh
ExecStop=
ExecReload=
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
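The ordering rule in the comment above is the same one the computeconfig unit earlier in this commit follows with its `Before=config.service` directive. A minimal sketch of a per-node config unit obeying it (hypothetical `example_config` name):

```ini
[Unit]
Description=example config service (hypothetical)
After=sw-patch.service
# Required so wait_for_config_init.sh does not release the gate
# before this service has finished.
Before=config.service

[Service]
Type=simple
ExecStart=/etc/init.d/example_config start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```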


@ -0,0 +1,20 @@
#!/bin/bash
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Wait for compute config service
SERVICE=computeconfig.service
while :
do
systemctl status $SERVICE | grep -q running
if [ $? -ne 0 ]; then
exit 0
fi
sleep 1
done


@ -0,0 +1,36 @@
#!/bin/bash
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Wait for base node config service
. /etc/platform/platform.conf
SERVICE=
case $nodetype in
controller)
SERVICE=controllerconfig.service
;;
compute)
SERVICE=computeconfig.service
;;
storage)
SERVICE=storageconfig.service
;;
*)
exit 1
;;
esac
while :
do
systemctl status $SERVICE | grep -q running
if [ $? -ne 0 ]; then
exit 0
fi
sleep 1
done
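The loop above polls `systemctl status` until the node's config service is no longer reported as running. A sketch of the same gate with the status command injected (hypothetical `wait_for_service_stop` helper), so the pattern can be exercised without systemd:

```shell
# wait_for_service_stop: poll the given status command once per
# second until its output no longer contains "running", mirroring
# the systemctl-based gate loop above.
wait_for_service_stop() {
    local status_cmd="$1"
    while $status_cmd | grep -q running; do
        sleep 1
    done
    return 0
}
```

Note that grepping for the literal string "running" is loose: it would also match output like "not running", which is a limitation the original loop shares.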

configutilities/.gitignore vendored Normal file

@ -0,0 +1,6 @@
!.distro
.distro/centos7/rpmbuild/RPMS
.distro/centos7/rpmbuild/SRPMS
.distro/centos7/rpmbuild/BUILD
.distro/centos7/rpmbuild/BUILDROOT
.distro/centos7/rpmbuild/SOURCES/configutilities*tar.gz

configutilities/PKG-INFO Executable file

@ -0,0 +1,13 @@
Metadata-Version: 1.1
Name: configutilities
Version: 1.2.0
Summary: Titanium Cloud configuration utilities
Home-page:
Author: Windriver
Author-email: info@windriver.com
License: Apache-2.0
Description: Titanium Cloud configuration utilities
Platform: UNKNOWN


@ -0,0 +1,3 @@
SRC_DIR="configutilities"
COPY_LIST="$SRC_DIR/LICENSE"
TIS_PATCH_VER=34


@ -0,0 +1,64 @@
Summary: configutilities
Name: configutilities
Version: 3.0.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz
Source1: LICENSE
%define debug_package %{nil}
BuildRequires: python-setuptools
Requires: python-netaddr
#Requires: wxPython
%description
Titanium Cloud Controller configuration utilities
%package -n %{name}-cgts-sdk
Summary: configutilities sdk files
Group: devel
%description -n %{name}-cgts-sdk
SDK files for configutilities
%define local_bindir /usr/bin
%define pythonroot /usr/lib64/python2.7/site-packages
%define cgcs_sdk_deploy_dir /opt/deploy/cgcs_sdk
%define cgcs_sdk_tarball_name wrs-%{name}-%{version}.tgz
%prep
%setup
%build
%{__python} setup.py build
%install
%{__python} setup.py install --root=$RPM_BUILD_ROOT \
--install-lib=%{pythonroot} \
--prefix=/usr \
--install-data=/usr/share \
--single-version-externally-managed
sed -i "s#xxxSW_VERSIONxxx#%{platform_release}#" %{name}/common/validator.py
tar czf %{cgcs_sdk_tarball_name} %{name}
mkdir -p $RPM_BUILD_ROOT%{cgcs_sdk_deploy_dir}
install -m 644 %{cgcs_sdk_tarball_name} $RPM_BUILD_ROOT%{cgcs_sdk_deploy_dir}
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
%doc LICENSE
%{local_bindir}/*
%dir %{pythonroot}/%{name}
%{pythonroot}/%{name}/*
%dir %{pythonroot}/%{name}-%{version}-py2.7.egg-info
%{pythonroot}/%{name}-%{version}-py2.7.egg-info/*
%files -n %{name}-cgts-sdk
%{cgcs_sdk_deploy_dir}/%{cgcs_sdk_tarball_name}


@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.



@ -0,0 +1,76 @@
Copyright © 2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
-----------------------------------------------------------------------
Titanium Cloud Configuration Utilities
---------------------------------------
To facilitate various aspects of Titanium Cloud installation and
configuration, utilities have been created to generate and validate
configuration and setup files used by the system.
Installing the Configuration Utilities
--------------------------------------
This tarball includes several utilities that aid in the configuration of
Titanium Cloud. Note that these are optional tools run prior to
installation; they are not run on the target system.
To install the utilities on a Linux machine follow these steps:
1. Ensure you have the tools necessary to install new Python packages (pip
and setuptools). If you do not, install them using the appropriate
commands for your version of Linux, such as:
sudo apt-get install python-pip # e.g. for Ubuntu or Debian
2. The config_gui tool makes use of external tools which must be
installed as follows:
if using Ubuntu/Debian:
sudo apt-get install python-wxgtk2.8 python-wxtools
if using Fedora:
sudo yum install wxPython python-setuptools
if using CentOS/RedHat, the appropriate rpm can be obtained from EPEL
sudo yum install epel-release
sudo yum install wxPython
Note: if epel-release is not available, it can be obtained as follows
(use the rpm specific to your version):
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo rpm -Uvh epel-release-6*.rpm
sudo yum install wxPython
3. Copy wrs-configutilities-3.0.0.tgz to the python install directory
(i.e. /usr/lib/python2.7/dist-packages or /usr/lib/python2.7/site-packages)
4. cd to this Python install directory
5. Untar the file:
sudo tar xfv wrs-configutilities-3.0.0.tgz
6. cd into the configutilities directory
7. Run setup:
sudo python setup.py install
Using the Configuration Utilities
---------------------------------
There are two tools installed: config_validator and config_gui.
config_validator is a command-line tool that takes a 'controller configuration
input' file of the INI type and performs preliminary analysis to ensure its validity.
It can be called as follows:
config_validator --system-config <filename>
config_gui is a GUI-based tool for creating a 'controller configuration
input' INI file and/or a 'bulk host' XML file. It can be launched by
running 'config_gui' from the command line and will walk you through the
process of generating the desired configuration files.
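As a rough illustration of what config_validator does, the sketch below is a hypothetical, standalone stand-in (not the real tool) that applies two of the checks configutilities performs on a logical-interface section of the INI file: the 576-9216 MTU range and the LAG port-count rule. The sample section and option names follow the conventions used elsewhere in this repo, but the helper itself is an assumption for illustration only:

```python
# Hypothetical sketch only: mimics two checks the real config_validator
# applies to a 'controller configuration input' INI file. Not the real code.
import configparser
import io

SAMPLE = """
[LOGICAL_INTERFACE_1]
LAG_INTERFACE=N
INTERFACE_MTU=1500
INTERFACE_PORTS=eth0
"""

def check_logical_interface(cfg, section):
    """Return a list of problems found in one logical-interface section."""
    problems = []
    mtu = cfg.getint(section, 'INTERFACE_MTU')
    if not 576 <= mtu <= 9216:  # same MTU range the real tool enforces
        problems.append("invalid MTU %d" % mtu)
    ports = [p.strip()
             for p in cfg.get(section, 'INTERFACE_PORTS').split(',')
             if p.strip()]
    lag = cfg.get(section, 'LAG_INTERFACE').lower() == 'y'
    if lag and len(ports) != 2:
        problems.append("LAG interface needs exactly 2 ports, got %d"
                        % len(ports))
    if not lag and len(ports) != 1:
        problems.append("non-LAG interface needs exactly 1 port, got %d"
                        % len(ports))
    return problems

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(SAMPLE))
print(check_logical_interface(cfg, 'LOGICAL_INTERFACE_1'))  # → []
```

A valid section yields an empty problem list; the real validator covers many more sections (networks, addresses, VLANs) than this fragment shows.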


@ -0,0 +1,20 @@
#
# Copyright (c) 2015-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# flake8: noqa
#
from common.validator import validate
from common.configobjects import (Network, DEFAULT_CONFIG, REGION_CONFIG,
DEFAULT_NAMES, HP_NAMES, SUBCLOUD_CONFIG,
MGMT_TYPE, INFRA_TYPE, OAM_TYPE,
NETWORK_PREFIX_NAMES, HOST_XML_ATTRIBUTES,
LINK_SPEED_1G, LINK_SPEED_10G,
DEFAULT_DOMAIN_NAME)
from common.exceptions import ConfigError, ConfigFail, ValidateFail
from common.utils import is_valid_vlan, is_mtu_valid, is_speed_valid, \
validate_network_str, validate_address_str, validate_address, \
ip_version_to_string, lag_mode_to_str, \
validate_openstack_password, extract_openstack_password_rules_from_file


@ -0,0 +1,5 @@
#
# Copyright (c) 2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


@ -0,0 +1,381 @@
"""
Copyright (c) 2015-2016 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
from netaddr import iter_iprange
from exceptions import ConfigFail, ValidateFail
from utils import is_mtu_valid, is_speed_valid, is_valid_vlan, \
validate_network_str, validate_address_str
DEFAULT_CONFIG = 0
REGION_CONFIG = 1
SUBCLOUD_CONFIG = 2
MGMT_TYPE = 0
INFRA_TYPE = 1
OAM_TYPE = 2
NETWORK_PREFIX_NAMES = [
('MGMT', 'INFRA', 'OAM'),
('CLM', 'BLS', 'CAN')
]
LINK_SPEED_1G = 1000
LINK_SPEED_10G = 10000
LINK_SPEED_25G = 25000
VALID_LINK_SPEED = [LINK_SPEED_1G, LINK_SPEED_10G, LINK_SPEED_25G]
# Additions to this list must be reflected in the hostfile
# generator tool (config->configutilities->hostfiletool.py)
HOST_XML_ATTRIBUTES = ['hostname', 'personality', 'subfunctions',
'mgmt_mac', 'mgmt_ip',
'bm_ip', 'bm_type', 'bm_username',
'bm_password', 'boot_device', 'rootfs_device',
'install_output', 'console', 'vsc_controllers',
'power_on', 'location', 'subtype']
# Network naming types
DEFAULT_NAMES = 0
HP_NAMES = 1
# well-known default domain name
DEFAULT_DOMAIN_NAME = 'Default'
class LogicalInterface(object):
""" Represents configuration for a logical interface.
"""
def __init__(self):
self.name = None
self.mtu = None
self.link_capacity = None
self.lag_interface = False
self.lag_mode = None
self.ports = None
def parse_config(self, system_config, logical_interface):
# Ensure logical interface config is present
if not system_config.has_section(logical_interface):
raise ConfigFail("Missing config for logical interface %s." %
logical_interface)
self.name = logical_interface
# Parse/validate the MTU
self.mtu = system_config.getint(logical_interface, 'INTERFACE_MTU')
if not is_mtu_valid(self.mtu):
raise ConfigFail("Invalid MTU value for %s. "
"Valid values: 576 - 9216" % logical_interface)
# Parse/validate the link_capacity
if system_config.has_option(logical_interface,
'INTERFACE_LINK_CAPACITY'):
self.link_capacity = \
system_config.getint(logical_interface,
'INTERFACE_LINK_CAPACITY')
# link_capacity is optional
if self.link_capacity:
if not is_speed_valid(self.link_capacity,
valid_speeds=VALID_LINK_SPEED):
raise ConfigFail(
"Invalid link-capacity value for %s." % logical_interface)
# Parse the ports
self.ports = filter(None, [x.strip() for x in
system_config.get(logical_interface,
'INTERFACE_PORTS').split(',')])
# Parse/validate the LAG config
lag_interface = system_config.get(logical_interface,
'LAG_INTERFACE')
if lag_interface.lower() == 'y':
self.lag_interface = True
if len(self.ports) != 2:
raise ConfigFail(
"Invalid number of ports (%d) supplied for LAG "
"interface %s" % (len(self.ports), logical_interface))
self.lag_mode = system_config.getint(logical_interface, 'LAG_MODE')
if self.lag_mode < 1 or self.lag_mode > 6:
raise ConfigFail(
"Invalid LAG_MODE value of %d for %s. Valid values: 1-6" %
(self.lag_mode, logical_interface))
elif lag_interface.lower() == 'n':
if len(self.ports) > 1:
raise ConfigFail(
"More than one interface supplied for non-LAG "
"interface %s" % logical_interface)
if len(self.ports) == 0:
raise ConfigFail(
"No interfaces supplied for non-LAG "
"interface %s" % logical_interface)
else:
raise ConfigFail(
"Invalid LAG_INTERFACE value of %s for %s. Valid values: "
"Y or N" % (lag_interface, logical_interface))
class Network(object):
""" Represents configuration for a network.
"""
def __init__(self):
self.vlan = None
self.cidr = None
self.multicast_cidr = None
self.start_address = None
self.end_address = None
self.floating_address = None
self.address_0 = None
self.address_1 = None
self.dynamic_allocation = False
self.gateway_address = None
self.logical_interface = None
def parse_config(self, system_config, config_type, network_type,
min_addresses=0, multicast_addresses=0, optional=False,
naming_type=DEFAULT_NAMES):
network_prefix = NETWORK_PREFIX_NAMES[naming_type][network_type]
network_name = network_prefix + '_NETWORK'
if naming_type == HP_NAMES:
attr_prefix = network_prefix + '_'
else:
attr_prefix = ''
# Ensure network config is present
if not system_config.has_section(network_name):
if not optional:
raise ConfigFail("Missing config for network %s." %
network_name)
else:
# Optional interface - just return
return
# Parse/validate the VLAN
if system_config.has_option(network_name, attr_prefix + 'VLAN'):
self.vlan = system_config.getint(network_name,
attr_prefix + 'VLAN')
if self.vlan:
if not is_valid_vlan(self.vlan):
raise ConfigFail(
"Invalid %s value of %d for %s. Valid values: 1-4094" %
(attr_prefix + 'VLAN', self.vlan, network_name))
# Parse/validate the cidr
cidr_str = system_config.get(network_name, attr_prefix + 'CIDR')
try:
self.cidr = validate_network_str(
cidr_str, min_addresses)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'CIDR', cidr_str, network_name, e))
# Parse/validate the multicast subnet
if 0 < multicast_addresses and \
system_config.has_option(network_name,
attr_prefix + 'MULTICAST_CIDR'):
multicast_cidr_str = system_config.get(network_name, attr_prefix +
'MULTICAST_CIDR')
try:
self.multicast_cidr = validate_network_str(
multicast_cidr_str, multicast_addresses, multicast=True)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'MULTICAST_CIDR', multicast_cidr_str,
network_name, e))
if self.cidr.version != self.multicast_cidr.version:
raise ConfigFail(
"Invalid %s value of %s for %s. Multicast "
"subnet and network IP families must be the same." %
(attr_prefix + 'MULTICAST_CIDR', multicast_cidr_str,
network_name))
# Parse/validate the hardwired controller addresses
floating_address_str = None
address_0_str = None
address_1_str = None
if min_addresses == 1:
if (system_config.has_option(
network_name, attr_prefix + 'IP_FLOATING_ADDRESS') or
system_config.has_option(
network_name, attr_prefix + 'IP_UNIT_0_ADDRESS') or
system_config.has_option(
network_name, attr_prefix + 'IP_UNIT_1_ADDRESS') or
system_config.has_option(
network_name, attr_prefix + 'IP_START_ADDRESS') or
system_config.has_option(
network_name, attr_prefix + 'IP_END_ADDRESS')):
raise ConfigFail(
"Only one IP address is required for OAM "
"network, use 'IP_ADDRESS' to specify the OAM IP "
"address")
floating_address_str = system_config.get(
network_name, attr_prefix + 'IP_ADDRESS')
try:
self.floating_address = validate_address_str(
floating_address_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'IP_ADDRESS',
floating_address_str, network_name, e))
self.address_0 = self.floating_address
self.address_1 = self.floating_address
else:
if system_config.has_option(
network_name, attr_prefix + 'IP_FLOATING_ADDRESS'):
floating_address_str = system_config.get(
network_name, attr_prefix + 'IP_FLOATING_ADDRESS')
try:
self.floating_address = validate_address_str(
floating_address_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'IP_FLOATING_ADDRESS',
floating_address_str, network_name, e))
if system_config.has_option(
network_name, attr_prefix + 'IP_UNIT_0_ADDRESS'):
address_0_str = system_config.get(
network_name, attr_prefix + 'IP_UNIT_0_ADDRESS')
try:
self.address_0 = validate_address_str(
address_0_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'IP_UNIT_0_ADDRESS',
address_0_str, network_name, e))
if system_config.has_option(
network_name, attr_prefix + 'IP_UNIT_1_ADDRESS'):
address_1_str = system_config.get(
network_name, attr_prefix + 'IP_UNIT_1_ADDRESS')
try:
self.address_1 = validate_address_str(
address_1_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'IP_UNIT_1_ADDRESS',
address_1_str, network_name, e))
# Parse/validate the start/end addresses
start_address_str = None
end_address_str = None
if system_config.has_option(
network_name, attr_prefix + 'IP_START_ADDRESS'):
start_address_str = system_config.get(
network_name, attr_prefix + 'IP_START_ADDRESS')
try:
self.start_address = validate_address_str(
start_address_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'IP_START_ADDRESS',
start_address_str, network_name, e))
if system_config.has_option(
network_name, attr_prefix + 'IP_END_ADDRESS'):
end_address_str = system_config.get(
network_name, attr_prefix + 'IP_END_ADDRESS')
try:
self.end_address = validate_address_str(
end_address_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s " %
(attr_prefix + 'IP_END_ADDRESS',
end_address_str, network_name, e))
if start_address_str or end_address_str:
if not end_address_str:
raise ConfigFail("Missing attribute %s for %s" %
(attr_prefix + 'IP_END_ADDRESS',
network_name))
if not start_address_str:
raise ConfigFail("Missing attribute %s for %s" %
(attr_prefix + 'IP_START_ADDRESS',
network_name))
if not self.start_address < self.end_address:
raise ConfigFail(
"Start address %s not less than end address %s for %s."
% (str(self.start_address), str(self.end_address),
network_name))
address_list = list(iter_iprange(start_address_str,
end_address_str))
if not len(address_list) >= min_addresses:
raise ConfigFail("Address range for %s must contain at "
"least %d addresses." %
(network_name, min_addresses))
if floating_address_str or address_0_str or address_1_str:
if not floating_address_str:
raise ConfigFail("Missing attribute %s for %s" %
(attr_prefix + 'IP_FLOATING_ADDRESS',
network_name))
if not address_0_str:
raise ConfigFail("Missing attribute %s for %s" %
(attr_prefix + 'IP_UNIT_0_ADDRESS',
network_name))
if not address_1_str:
raise ConfigFail("Missing attribute %s for %s_NETWORK" %
(attr_prefix + 'IP_UNIT_1_ADDRESS',
network_name))
if start_address_str and floating_address_str:
raise ConfigFail("Overspecified network: Can only set %s "
"and %s OR %s, %s, and %s for "
"%s_NETWORK" %
(attr_prefix + 'IP_START_ADDRESS',
attr_prefix + 'IP_END_ADDRESS',
attr_prefix + 'IP_FLOATING_ADDRESS',
attr_prefix + 'IP_UNIT_0_ADDRESS',
attr_prefix + 'IP_UNIT_1_ADDRESS',
network_name))
if config_type == DEFAULT_CONFIG:
if not self.start_address:
self.start_address = self.cidr[2]
if not self.end_address:
self.end_address = self.cidr[-2]
# Parse/validate the dynamic IP address allocation
if system_config.has_option(network_name,
'DYNAMIC_ALLOCATION'):
dynamic_allocation = system_config.get(network_name,
'DYNAMIC_ALLOCATION')
if dynamic_allocation.lower() == 'y':
self.dynamic_allocation = True
elif dynamic_allocation.lower() == 'n':
self.dynamic_allocation = False
else:
raise ConfigFail(
"Invalid DYNAMIC_ALLOCATION value of %s for %s. "
"Valid values: Y or N" %
(dynamic_allocation, network_name))
# Parse/validate the gateway (optional)
if system_config.has_option(network_name, attr_prefix + 'GATEWAY'):
gateway_address_str = system_config.get(
network_name, attr_prefix + 'GATEWAY')
try:
self.gateway_address = validate_address_str(
gateway_address_str, self.cidr)
except ValidateFail as e:
raise ConfigFail(
"Invalid %s value of %s for %s.\nReason: %s" %
(attr_prefix + 'GATEWAY',
gateway_address_str, network_name, e))
# Parse/validate the logical interface
logical_interface_name = system_config.get(
network_name, attr_prefix + 'LOGICAL_INTERFACE')
self.logical_interface = LogicalInterface()
self.logical_interface.parse_config(system_config,
logical_interface_name)
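The mutual-exclusion rules above (either an IP_START/IP_END range or the FLOATING/UNIT_0/UNIT_1 triple, never both) can be sketched stdlib-only with `ipaddress`. The helper below is hypothetical and mirrors only the checks, not the real ConfigParser plumbing:

```python
import ipaddress

def check_range(cidr, start=None, end=None, floating=None,
                unit_0=None, unit_1=None, min_addresses=0):
    """Mirror the range/floating mutual-exclusion checks (sketch only)."""
    net = ipaddress.ip_network(cidr)
    # A network may be range-based OR floating/unit-based, not both
    if (start or end) and (floating or unit_0 or unit_1):
        raise ValueError("Overspecified network")
    # Start and end must be given together
    if bool(start) != bool(end):
        raise ValueError("Both start and end addresses are required")
    if start:
        s, e = ipaddress.ip_address(start), ipaddress.ip_address(end)
        if not s < e:
            raise ValueError("Start address not less than end address")
        # Integer arithmetic counts the addresses inclusively
        if int(e) - int(s) + 1 < min_addresses:
            raise ValueError("Address range too small")
    return net
```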


@ -0,0 +1,98 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Routines for URL-safe encrypting/decrypting
Cloned from git/glance/common
"""
import base64
import os
import random
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import algorithms
from cryptography.hazmat.primitives.ciphers import Cipher
from cryptography.hazmat.primitives.ciphers import modes
from oslo_utils import encodeutils
import six
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
def urlsafe_encrypt(key, plaintext, blocksize=16):
"""
Encrypts plaintext. Resulting ciphertext will contain URL-safe characters.
If plaintext is Unicode, encode it to UTF-8 before encryption.
:param key: AES secret key
:param plaintext: Input text to be encrypted
:param blocksize: Non-zero integer multiple of AES blocksize in bytes (16)
:returns: Resulting ciphertext
"""
def pad(text):
"""
Pads text to be encrypted
"""
pad_length = (blocksize - len(text) % blocksize)
# NOTE(rosmaita): I know this looks stupid, but we can't just
# use os.urandom() to get the bytes because we use char(0) as
# a delimiter
pad = b''.join(six.int2byte(random.SystemRandom().randint(1, 0xFF))
for i in range(pad_length - 1))
# We use chr(0) as a delimiter between text and padding
return text + b'\0' + pad
plaintext = encodeutils.to_utf8(plaintext)
key = encodeutils.to_utf8(key)
# random initial 16 bytes for CBC
init_vector = os.urandom(16)
backend = default_backend()
cypher = Cipher(algorithms.AES(key), modes.CBC(init_vector),
backend=backend)
encryptor = cypher.encryptor()
padded = encryptor.update(
pad(six.binary_type(plaintext))) + encryptor.finalize()
encoded = base64.urlsafe_b64encode(init_vector + padded)
if six.PY3:
encoded = encoded.decode('ascii')
return encoded
def urlsafe_decrypt(key, ciphertext):
"""
Decrypts URL-safe base64 encoded ciphertext.
On Python 3, the result is decoded from UTF-8.
:param key: AES secret key
:param ciphertext: The encrypted text to decrypt
:returns: Resulting plaintext
"""
# Cast from unicode
ciphertext = encodeutils.to_utf8(ciphertext)
key = encodeutils.to_utf8(key)
ciphertext = base64.urlsafe_b64decode(ciphertext)
backend = default_backend()
cypher = Cipher(algorithms.AES(key), modes.CBC(ciphertext[:16]),
backend=backend)
decryptor = cypher.decryptor()
padded = decryptor.update(ciphertext[16:]) + decryptor.finalize()
text = padded[:padded.rfind(b'\0')]
if six.PY3:
text = text.decode('utf-8')
return text
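The chr(0)-delimiter padding used above can be illustrated without the cryptography library. This stdlib-only sketch (hypothetical helpers, not this module's API) shows why the delimiter lets the pad bytes be random non-zero values:

```python
import random

BLOCK = 16  # AES block size in bytes

def pad(text):
    """Pad to a BLOCK multiple: a zero byte, then random non-zero bytes."""
    pad_length = BLOCK - len(text) % BLOCK
    # Pad bytes must never be 0, because b'\0' is the delimiter
    tail = bytes(random.SystemRandom().randint(1, 0xFF)
                 for _ in range(pad_length - 1))
    return text + b'\0' + tail

def unpad(padded):
    """Strip everything from the last zero byte onward."""
    return padded[:padded.rfind(b'\0')]
```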


@ -0,0 +1,25 @@
#
# Copyright (c) 2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
class ConfigError(Exception):
"""Base class for configuration exceptions."""
def __init__(self, message=None):
self.message = message
def __str__(self):
return self.message or ""
class ConfigFail(ConfigError):
"""General configuration error."""
pass
class ValidateFail(ConfigError):
"""Validation of data failed."""
pass
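A quick sketch of how the hierarchy is meant to be caught: handling the base `ConfigError` covers both failure types. The classes are redefined here so the snippet stands alone:

```python
class ConfigError(Exception):
    """Base class for configuration exceptions."""
    def __init__(self, message=None):
        super(ConfigError, self).__init__(message)
        self.message = message

    def __str__(self):
        return self.message or ""

class ConfigFail(ConfigError):
    """General configuration error."""

class ValidateFail(ConfigError):
    """Validation of data failed."""

def classify(exc):
    # Catching the base class is enough for both subclasses
    try:
        raise exc
    except ConfigError as e:
        return type(e).__name__, str(e)
```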


@ -0,0 +1,295 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import wx
from exceptions import ValidateFail
import wrs_ico
TEXT_BOX_SIZE = (150, -1)
TEXT_WIDTH = 450
DEBUG = False
VGAP = 5
HGAP = 10
def debug(msg):
if DEBUG:
print msg
# Tracks what type of controls will implement a config question
class TYPES(object):
string = 1
int = 2
radio = 3
choice = 4
checkbox = 5
help = 6
separator = 7
class Field(object):
def __init__(self, text="", type=TYPES.string, transient=False,
initial="", choices=[], shows=[], reverse=False,
enabled=True):
"""Represent a configuration question
:param text: Question prompt text
:param type: The type of wxWidgets control(s) used to implement this
field
:param transient: Whether this field should be written automatically
to the INI file
:param enabled: Whether this field should be enabled or
disabled (greyed-out)
:param initial: Initial value used to populate the control
:param choices: A string list of choices to populate selection-based
fields
:param shows: A list of field key strings that this field should show
when checked. Only checkboxes implement this functionality atm
:param reverse: Switches the 'shows' logic -> checked
will hide fields instead of showing them
:return: the Field object
"""
self.text = text
self.type = type
self.transient = transient
self.initial = initial
self.choices = choices
self.shows = shows
self.reverse = reverse
self.enabled = enabled
# Controls used to implement this field
self.prompt = None
self.input = None
if type is TYPES.help:
self.transient = True
# Sanity to make sure fields are being utilized correctly
if self.shows and self.type is TYPES.help:
raise NotImplementedError()
if not self.shows and self.reverse:
raise NotImplementedError()
def get_value(self):
# Return value of the control (a string or int)
if not self.input:
value = None
elif not self.input.IsShown() or not self.input.IsEnabled():
value = None
elif self.type is TYPES.string:
value = self.input.GetLineText(0)
elif self.type is TYPES.int:
try:
value = self.input.GetLineText(0)
int(value)
except ValueError:
raise ValidateFail(
"Invalid entry for %s. Must enter a numeric value" %
self.text)
elif self.type is TYPES.radio:
value = self.input.GetString(self.input.GetSelection())
elif self.type is TYPES.choice:
value = self.input.GetString(self.input.GetSelection())
elif self.type is TYPES.checkbox:
value = "N"
if self.input.GetValue():
value = "Y"
else:
raise NotImplementedError()
return value
def set_value(self, value):
# Set value of the control (string or int)
if not self.input:
# Can't 'set' help text etc.
raise NotImplementedError()
elif self.type is TYPES.string or self.type is TYPES.int:
self.input.SetValue(value)
elif self.type is TYPES.radio or self.type is TYPES.choice:
index = self.input.FindString(value)
if index == wx.NOT_FOUND:
raise ValidateFail("Invalid value %s for field %s" %
(value, self.text))
self.input.SetSelection(index)
elif self.type is TYPES.checkbox:
self.input.SetValue(value == "Y")
else:
raise NotImplementedError()
def destroy(self):
if self.prompt:
self.prompt.Destroy()
if self.input:
self.input.Destroy()
def show(self, visible):
debug("Setting visibility to %s for field %s prompt=%s" %
(visible, self.text, self.prompt))
if visible:
if self.prompt:
self.prompt.Show()
if self.input:
self.input.Show()
else:
if self.prompt:
self.prompt.Hide()
if self.input:
self.input.Hide()
def prepare_fields(parent, fields, sizer, change_hdlr):
for row, (name, field) in enumerate(fields.items()):
initial = field.initial
# if config.has_option(parent.section, name):
# initial = config.get(parent.section, name)
add_attributes = wx.ALIGN_CENTER_VERTICAL
width = 1
field.prompt = wx.StaticText(parent, label=field.text, name=name)
# Generate different control based on field type
if field.type is TYPES.string or field.type is TYPES.int:
field.input = wx.TextCtrl(parent, value=initial, name=name,
size=TEXT_BOX_SIZE)
elif field.type is TYPES.radio:
field.input = wx.RadioBox(
parent, choices=field.choices, majorDimension=1,
style=wx.RA_SPECIFY_COLS, name=name, id=wx.ID_ANY)
elif field.type is TYPES.choice:
field.input = wx.Choice(
parent, choices=field.choices, name=name)
if initial:
field.input.SetSelection(field.input.FindString(initial))
elif field.type is TYPES.checkbox:
width = 2
field.input = wx.CheckBox(parent, name=name, label=field.text,
) # style=wx.ALIGN_RIGHT)
field.input.SetValue(initial == 'Y')
if field.prompt:
field.prompt.Hide()
field.prompt = None
elif field.type is TYPES.help:
width = 2
field.prompt.Wrap(TEXT_WIDTH)
field.input = None
elif field.type is TYPES.separator:
width = 2
field.prompt = wx.StaticLine(parent, -1)
add_attributes = wx.EXPAND | wx.ALL
field.input = None
else:
raise NotImplementedError()
col = 0
if field.prompt:
sizer.Add(field.prompt, (row, col), span=(1, width),
flag=add_attributes)
col += 1
if field.input:
field.input.Enable(field.enabled)
sizer.Add(field.input, (row, col),
flag=add_attributes)
# Go through again and set show/hide relationships
for name, field in fields.items():
if field.shows:
# Add display handlers
field.input.Bind(wx.EVT_CHECKBOX, change_hdlr)
# todo tsmith add other evts
# Start by hiding target prompt/input controls
for target_name in field.shows:
target = fields[target_name]
if target.prompt:
target.prompt.Hide()
if target.input:
target.input.Hide()
def on_change(parent, fields, event):
obj = event.GetEventObject()
# debug("Checked: " + str(event.Checked()) +
# ", Reverse: " + str(parent.fields[obj.GetName()].reverse) +
# ", Will show: " + str(event.Checked() is not
# parent.fields[obj.GetName()].reverse))
# Hide/Show the targets of the control
# Note: the "is not" implements switching the show logic around
handle_sub_show(
fields,
fields[obj.GetName()].shows,
event.Checked() is not fields[obj.GetName()].reverse)
parent.Layout()
event.Skip()
def handle_sub_show(fields, targets, show):
""" Recursive function to handle showing/hiding of a list of fields
:param targets: [String]
:param show: bool
"""
sub_handled = []
for tgt in targets:
if tgt in sub_handled:
# Handled by newly shown control
continue
tgt_field = fields[tgt]
# Show or hide this field as necessary
tgt_field.show(show)
# If it shows others (checkbox) and is now shown, apply its
# own value to decide on showing its children, rather than
# the original show flag
if tgt_field.shows and show:
sub_handled.extend(tgt_field.shows)
handle_sub_show(
fields,
tgt_field.shows,
(tgt_field.get_value() == 'Y') is not fields[tgt].reverse)
def set_icons(parent):
# Icon setting
# todo Make higher resolution icons, verify on different linux desktops
icons = wx.IconBundle()
for sz in [16, 32, 48]:
# try:
# icon = wx.Icon(wrs_ico.windriver_favicon.getIcon(),
# width=sz, height=sz)
icon = wrs_ico.favicon.getIcon()
icons.AddIcon(icon)
# except:
# pass
parent.SetIcons(icons)
# ico = wrs_ico.windriver_favicon.getIcon()
# self.SetIcon(ico)
# self.tbico = wx.TaskBarIcon()
# self.tbico.SetIcon(ico, '')
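The recursive show/hide propagation in `handle_sub_show` can be exercised without wx by modelling each field as a plain dict (a hypothetical stand-in for the Field objects above):

```python
def handle_sub_show(fields, targets, show):
    """Recursively show/hide targets; a shown checkbox drives its children."""
    sub_handled = []
    for tgt in targets:
        if tgt in sub_handled:
            continue  # already driven by a newly shown parent
        field = fields[tgt]
        field['visible'] = show
        # A shown checkbox decides its children from its own value
        if field.get('shows') and show:
            sub_handled.extend(field['shows'])
            handle_sub_show(
                fields, field['shows'],
                (field.get('value') == 'Y') != field.get('reverse', False))
```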


@ -0,0 +1,308 @@
"""
Copyright (c) 2015-2016 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import ConfigParser
import re
import six
from netaddr import (IPNetwork,
IPAddress,
AddrFormatError)
from exceptions import ValidateFail
EXPECTED_SERVICE_NAME_AND_TYPE = (
{"KEYSTONE_SERVICE_NAME": "keystone",
"KEYSTONE_SERVICE_TYPE": "identity",
"GLANCE_SERVICE_NAME": "glance",
"GLANCE_SERVICE_TYPE": "image",
"NOVA_SERVICE_NAME": "nova",
"NOVA_SERVICE_TYPE": "compute",
"PLACEMENT_SERVICE_NAME": "placement",
"PLACEMENT_SERVICE_TYPE": "placement",
"NEUTRON_SERVICE_NAME": "neutron",
"NEUTRON_SERVICE_TYPE": "network",
"SYSINV_SERVICE_NAME": "sysinv",
"SYSINV_SERVICE_TYPE": "platform",
"PATCHING_SERVICE_NAME": "patching",
"PATCHING_SERVICE_TYPE": "patching",
"HEAT_SERVICE_NAME": "heat",
"HEAT_SERVICE_TYPE": "orchestration",
"HEAT_CFN_SERVICE_NAME": "heat-cfn",
"HEAT_CFN_SERVICE_TYPE": "cloudformation",
"CEILOMETER_SERVICE_NAME": "ceilometer",
"CEILOMETER_SERVICE_TYPE": "metering",
"NFV_SERVICE_NAME": "vim",
"NFV_SERVICE_TYPE": "nfv",
"AODH_SERVICE_NAME": "aodh",
"AODH_SERVICE_TYPE": "alarming",
"PANKO_SERVICE_NAME": "panko",
"PANKO_SERVICE_TYPE": "event"})
def is_valid_vlan(vlan):
"""Determine whether vlan is valid."""
try:
if 0 < int(vlan) < 4095:
return True
else:
return False
except (ValueError, TypeError):
return False
def is_mtu_valid(mtu):
"""Determine whether a mtu is valid."""
try:
if int(mtu) < 576:
return False
elif int(mtu) > 9216:
return False
else:
return True
except (ValueError, TypeError):
return False
def is_speed_valid(speed, valid_speeds=None):
"""Determine whether speed is valid."""
try:
if valid_speeds is not None and int(speed) not in valid_speeds:
return False
else:
return True
except (ValueError, TypeError):
return False
def is_valid_hostname(hostname):
"""Determine whether a hostname is valid as per RFC 1123."""
# Maximum length of 255
if not hostname or len(hostname) > 255:
return False
# Allow a single dot on the right hand side
if hostname[-1] == ".":
hostname = hostname[:-1]
# Create a regex to ensure:
# - hostname does not begin or end with a dash
# - each segment is 1 to 63 characters long
# - valid characters are A-Z (any case) and 0-9
valid_re = re.compile(r"(?!-)[A-Z\d-]{1,63}(?<!-)$", re.IGNORECASE)
return all(valid_re.match(x) for x in hostname.split("."))
def is_valid_mac(mac):
"""Verify the format of a MAC addres."""
if not mac:
return False
m = "[0-9a-f]{2}([-:])[0-9a-f]{2}(\\1[0-9a-f]{2}){4}$"
return isinstance(mac, six.string_types) and re.match(m, mac.lower())
def validate_network_str(network_str, minimum_size,
existing_networks=None, multicast=False):
"""Determine whether a network is valid."""
try:
network = IPNetwork(network_str)
if network.ip != network.network:
raise ValidateFail("Invalid network address")
elif network.size < minimum_size:
raise ValidateFail("Subnet too small - must have at least %d "
"addresses" % minimum_size)
elif network.version == 6 and network.prefixlen < 64:
raise ValidateFail("IPv6 minimum prefix length is 64")
elif existing_networks:
if any(network.ip in subnet for subnet in existing_networks):
raise ValidateFail("Subnet overlaps with another "
"configured subnet")
elif multicast and not network.is_multicast():
raise ValidateFail("Invalid subnet - must be multicast")
return network
except AddrFormatError:
raise ValidateFail(
"Invalid subnet - not a valid IP subnet")
def is_valid_filename(filename):
return '\0' not in filename
def is_valid_by_path(filename):
return "/dev/disk/by-path" in filename and "-part" not in filename
def validate_address_str(ip_address_str, network):
"""Determine whether an address is valid."""
try:
ip_address = IPAddress(ip_address_str)
if ip_address.version != network.version:
msg = ("Invalid IP version - must match network version " +
ip_version_to_string(network.version))
raise ValidateFail(msg)
elif ip_address == network:
raise ValidateFail("Cannot use network address")
elif ip_address == network.broadcast:
raise ValidateFail("Cannot use broadcast address")
elif ip_address not in network:
raise ValidateFail(
"Address must be in subnet %s" % str(network))
return ip_address
except AddrFormatError:
raise ValidateFail(
"Invalid address - not a valid IP address")
def ip_version_to_string(ip_version):
"""Determine whether a nameserver address is valid."""
if ip_version == 4:
return "IPv4"
elif ip_version == 6:
return "IPv6"
else:
return "IP"
def validate_nameserver_address_str(ip_address_str, subnet_version=None):
"""Determine whether a nameserver address is valid."""
try:
ip_address = IPAddress(ip_address_str)
if subnet_version is not None and ip_address.version != subnet_version:
msg = ("Invalid IP version - must match OAM subnet version " +
ip_version_to_string(subnet_version))
raise ValidateFail(msg)
return ip_address
except AddrFormatError:
msg = "Invalid address - "
"not a valid %s address" % ip_version_to_string(subnet_version)
raise ValidateFail(msg)
def validate_address(ip_address, network):
"""Determine whether an address is valid."""
if ip_address.version != network.version:
msg = ("Invalid IP version - must match network version " +
ip_version_to_string(network.version))
raise ValidateFail(msg)
elif ip_address == network:
raise ValidateFail("Cannot use network address")
elif ip_address == network.broadcast:
raise ValidateFail("Cannot use broadcast address")
elif ip_address not in network:
raise ValidateFail("Address must be in subnet %s" % str(network))
def check_network_overlap(new_network, configured_networks):
""" Validate that new_network does not overlap any configured_networks.
"""
if any(new_network.ip in subnet for subnet in
configured_networks):
raise ValidateFail(
"Subnet %s overlaps with another configured subnet" % new_network)
def lag_mode_to_str(lag_mode):
if lag_mode == 0:
return "balance-rr"
if lag_mode == 1:
return "active-backup"
elif lag_mode == 2:
return "balance-xor"
elif lag_mode == 3:
return "broadcast"
elif lag_mode == 4:
return "802.3ad"
elif lag_mode == 5:
return "balance-tlb"
elif lag_mode == 6:
return "balance-alb"
else:
raise Exception(
"Invalid LAG_MODE value of %d. Valid values: 0-6" % lag_mode)
def validate_openstack_password(password, rules_file,
section="security_compliance"):
try:
config = ConfigParser.RawConfigParser()
parsed_config = config.read(rules_file)
if not parsed_config:
msg = ("Cannot parse rules file: %s" % rules_file)
raise Exception(msg)
if not config.has_section(section):
msg = ("Required section '%s' not found in rules file" % section)
raise Exception(msg)
password_regex = get_optional(config, section, 'password_regex')
password_regex_description = get_optional(config, section,
'password_regex_description')
if not password_regex:
msg = ("Required option 'password_regex' not found in "
"rule file: %s" % rules_file)
raise Exception(msg)
# Even if regex_description is not found, we will proceed
# and give a generic failure warning instead
if not password_regex_description:
password_regex_description = ("Password does not meet "
"complexity criteria")
if not isinstance(password, six.string_types):
msg = ("Password must be a string type")
raise Exception(msg)
try:
# config parser would read in the string as a literal
# representation which would fail regex matching
password_regex = password_regex.strip('"')
if not re.match(password_regex, password):
return False, password_regex_description
except re.error:
msg = ("Unable to validate password due to invalid "
"complexity criteria ('password_regex')")
raise Exception(msg)
except Exception:
raise Exception("Password validation failed")
return True, ""
def extract_openstack_password_rules_from_file(
rules_file, section="security_compliance"):
try:
config = ConfigParser.RawConfigParser()
parsed_config = config.read(rules_file)
if not parsed_config:
msg = ("Cannot parse rules file: %" % rules_file)
raise Exception(msg)
if not config.has_section(section):
msg = ("Required section '%s' not found in rules file" % section)
raise Exception(msg)
rules = config.items(section)
if not rules:
msg = ("section '%s' contains no configuration options" % section)
raise Exception(msg)
return dict(rules)
except Exception:
raise Exception("Failed to extract password rules from file")
def get_optional(conf, section, key):
if conf.has_option(section, key):
return conf.get(section, key)
return None
def get_service(conf, section, key):
if key in EXPECTED_SERVICE_NAME_AND_TYPE:
if conf.has_option(section, key):
value = conf.get(section, key)
if value != EXPECTED_SERVICE_NAME_AND_TYPE[key]:
raise ValidateFail("Unsupported %s: %s " % (key, value))
else:
value = EXPECTED_SERVICE_NAME_AND_TYPE[key]
return value
else:
return conf.get(section, key)
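The two regex validators above can be exercised standalone; the functions are copied here (with a plain-`str` check in place of `six.string_types`) so the snippet is self-contained:

```python
import re

def is_valid_hostname(hostname):
    """RFC 1123 hostname check, as in the validator above."""
    if not hostname or len(hostname) > 255:
        return False
    if hostname[-1] == ".":
        hostname = hostname[:-1]  # allow one trailing dot
    # No leading/trailing dash, 1-63 chars per label, alphanumerics only
    valid_re = re.compile(r"(?!-)[A-Z\d-]{1,63}(?<!-)$", re.IGNORECASE)
    return all(valid_re.match(x) for x in hostname.split("."))

def is_valid_mac(mac):
    """MAC format check: six hex pairs with a consistent ':' or '-'."""
    if not mac:
        return False
    m = r"[0-9a-f]{2}([-:])[0-9a-f]{2}(\1[0-9a-f]{2}){4}$"
    return bool(isinstance(mac, str) and re.match(m, mac.lower()))
```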

File diff suppressed because it is too large


@ -0,0 +1,37 @@
# ----------------------------------------------------------------------
# This file was generated by img2py.py
#
#
# Copyright (c) 2015-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
#
# Stylized red Wind River 'W' icon
#
from wx.lib.embeddedimage import PyEmbeddedImage
favicon = PyEmbeddedImage(
"iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABHNCSVQICAgIfAhkiAAAA99J"
"REFUWIXtls1PE2sUxn/z0Q7U0g/AXmNAUBGNxsSYGnWjQQ0YNAISY1y4kLh16T/g1qV/gwtJ"
"JFFATbBCmuBXrhiMitGk0hpBJaQtM4Vpp+3cxXAHSltub1zggpPM4p155pznfd5zznuEv8Fk"
"A03cyOCbBDYJ/BEE5EpqUPiP72YFmHI42bV/f0ng6uDG3BzZubmSTuW6OhzbtmEaBuloFDOd"
"Lo3buhVHIGDhpqcxMxnLfzoaXYlnmiAICJKEoCjWWpKI37tH7MYNyOUKvQoCDbdvU3ftGnlN"
"Y7qvj4VQqGiXgizTeOcOtVeukHr1isjly+QSCYvYdF9fEVvR5aLu6lX8ly4B4OnoQGlqIh2J"
"FOCcTU34urqQa2uhthZ/by/qs2cW8VWmtLTgPXcOyetlcWLCDg4gL4RCJSXLfPtGzcmTyIEA"
"SnMz7hMn0CMRe3cm4GlvR9m1y/7H096Os7mZzNevK6ICnrNncTY2kksmSQ4PF+SCKCwv1j76"
"p0+kXr9eRol4z59HdDptx5LLhf/iRRBXCknZuRPPmTMFOSR7PPi6ugBYfPuWpcnJgiMqW4Z5"
"XScxOGjL6T5+HGXPHszlXbkOH2bL0aOQz2PMzlo4UcTX04Pkctm7dx05wpZgEIDko0dkVbUg"
"TlkCAqCOjpKJxQBwbN+O59Qp+5uvuxvJ5yMTizF765Z9ru5jx6g+dMiSWRDwdXcjut1kf/1i"
"YWSkKM66jSgTiaCGw/ba29mJWFWFo6EBb2cnAOrYGPN376K9eAGA5Pfbkjt37MDb0QGA9vIl"
"+tRUUYWsSyCfy5EcHMQ0DABcwSBV+/ZR09aG0tpKXteJDwyQVVXi/f2Y2axN1BEIUHP6NMru"
"3WCaJIeGyJfoEesSEABtfBz982cA5Pp6/D09+Ht7ESSJpclJUs+fIwALIyPoU1MAVO3di+/C"
"BUsJUSQTi6GOjZXslvJ6BACMmRnUp0+pPnAAgPrr1xHdbgASDx9izM9b3XJmhsSDB1QfPIjg"
"cPDXzZs4AgEA1HCYzJoeUpECYGVyYmiIfCoFWMkoeTwYP36QHB4uxA0MYPz8aanQ2ork82Ea"
"hiX/2i5aKQEBWHzzhqV37wrea+Ew+sePtqwCsPT+vdUJV1n6yxe08fGyl1VF13E2Hif5+LG9"
"Ng2D+P375JeT81/LGwbx/n7yum6/WwiFML5/L+u74nkg+eSJfSMuffiAFg4XXzpY5704MWER"
"0jS79ZYzodKxXFAUatrakP1+MtGoVfdm6V9dwSBVLS3kUinU0VHymvb7BKB4TvhdHFRQhqut"
"kqnn/+DgD5gJNwlsEthwAv8AApOBr7T8BuQAAAAASUVORK5CYII=")


@ -0,0 +1,100 @@
"""
Copyright (c) 2015-2016 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import sys
import os
import ConfigParser
from common.validator import validate
from common.configobjects import DEFAULT_CONFIG, REGION_CONFIG
from common.exceptions import ConfigFail, ValidateFail
def parse_config(config_file):
"""Parse system config file"""
config = ConfigParser.RawConfigParser()
try:
config.read(config_file)
except Exception as e:
raise ConfigFail("Error parsing system config file: %s" % e.message)
return config
def show_help():
print ("Usage: %s\n"
"Perform validation of a given configuration file\n\n"
"--system-config <name> Validate a system configuration file\n"
"--region-config <name> Validate a region configuration file\n"
% sys.argv[0])
exit(1)
def main():
config_file = None
system_config = False
region_config = False
arg = 1
while arg < len(sys.argv):
if sys.argv[arg] == "--system-config":
arg += 1
if arg < len(sys.argv):
config_file = sys.argv[arg]
else:
print "--system-config requires the filename of the config " \
"file"
exit(1)
system_config = True
elif sys.argv[arg] == "--region-config":
arg += 1
if arg < len(sys.argv):
config_file = sys.argv[arg]
else:
print "--region-config requires the filename of the config " \
"file"
exit(1)
region_config = True
elif sys.argv[arg] in ["--help", "-h", "-?"]:
show_help()
else:
print "Invalid option."
show_help()
arg += 1
if [system_config, region_config].count(True) != 1:
print "Invalid combination of options selected"
show_help()
if system_config:
config_type = DEFAULT_CONFIG
else:
config_type = REGION_CONFIG
if not os.path.isfile(config_file):
print("Config file %s does not exist" % config_file)
exit(1)
# Parse the system config file
print "Parsing configuration file... ",
system_config = parse_config(config_file)
print "DONE"
# Validate the system config file
print "Validating configuration file... ",
try:
# we use the presence of tsconfig to determine if we are onboard or
# not since it will not be available in the offboard case
offboard = False
try:
from tsconfig.tsconfig import SW_VERSION # noqa: F401
except ImportError:
offboard = True
validate(system_config, config_type, None, offboard)
except ConfigParser.Error as e:
print("Error parsing configuration file %s: %s" % (config_file, e))
except (ConfigFail, ValidateFail) as e:
print("\nValidation failed: %s" % e)
print "DONE"

File diff suppressed because it is too large


@ -0,0 +1,114 @@
"""
Copyright (c) 2015-2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import wx
from common.guicomponents import set_icons
from common.validator import TiS_VERSION
import configfiletool
import hostfiletool
TEXT_WIDTH = 560
BTN_SIZE = (200, -1)
class WelcomeScreen(wx.Frame):
def __init__(self, *args, **kwargs):
super(WelcomeScreen, self).__init__(*args, **kwargs)
page = Content(self)
set_icons(self)
size = page.main_sizer.Fit(self)
self.SetMinSize(size)
self.Layout()
class Content(wx.Panel):
def __init__(self, *args, **kwargs):
super(Content, self).__init__(*args, **kwargs)
self.title = wx.StaticText(
self, -1,
'Titanium Cloud Configuration Utility')
self.title.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.BOLD))
# Set up controls for the main page
self.description = wx.StaticText(
self, -1,
' Welcome. The following tools are available for use:')
self.config_desc = wx.StaticText(
self, -1,
"The Titanium Cloud configuration file wizard allows users to "
"create the configuration INI file which is used during the "
"installation process")
self.config_desc.Wrap(TEXT_WIDTH / 2)
self.hosts_desc = wx.StaticText(
self, -1,
"The Titanium Cloud host file tool allows users to create an XML "
"file specifying hosts to be provisioned as part of the Titanium "
"Cloud cloud deployment.")
self.hosts_desc.Wrap(TEXT_WIDTH / 2)
self.config_wiz_btn = wx.Button(
self, -1, "Launch Config File Wizard", size=BTN_SIZE)
self.Bind(wx.EVT_BUTTON, self.launch_config_wiz, self.config_wiz_btn)
self.host_file_tool_btn = wx.Button(
self, -1, "Launch Host File Tool", size=BTN_SIZE)
self.Bind(wx.EVT_BUTTON, self.launch_host_wiz, self.host_file_tool_btn)
self.box1 = wx.StaticBox(self)
self.box2 = wx.StaticBox(self)
# Do layout of controls
self.main_sizer = wx.BoxSizer(wx.VERTICAL)
self.tool1Sizer = wx.StaticBoxSizer(self.box1, wx.HORIZONTAL)
self.tool2Sizer = wx.StaticBoxSizer(self.box2, wx.HORIZONTAL)
self.main_sizer.AddSpacer(10)
self.main_sizer.Add(self.title, flag=wx.ALIGN_CENTER)
self.main_sizer.AddSpacer(10)
self.main_sizer.Add(self.description)
self.main_sizer.AddSpacer(5)
self.main_sizer.Add(self.tool1Sizer, proportion=1, flag=wx.EXPAND)
self.main_sizer.Add(self.tool2Sizer, proportion=1, flag=wx.EXPAND)
self.main_sizer.AddSpacer(5)
self.tool1Sizer.Add(self.config_desc, flag=wx.ALIGN_CENTER)
self.tool1Sizer.AddSpacer(10)
self.tool1Sizer.Add(self.config_wiz_btn, flag=wx.ALIGN_CENTER)
self.tool2Sizer.Add(self.hosts_desc, flag=wx.ALIGN_CENTER)
self.tool2Sizer.AddSpacer(10)
self.tool2Sizer.Add(self.host_file_tool_btn, flag=wx.ALIGN_CENTER)
self.SetSizer(self.main_sizer)
self.Layout()
def launch_config_wiz(self, event):
conf_wizard = configfiletool.ConfigWizard()
conf_wizard.run()
conf_wizard.Destroy()
def launch_host_wiz(self, event):
hostfiletool.HostGUI()
def main():
app = wx.App(0) # Start the application
gui = WelcomeScreen(None, title="Titanium Cloud Configuration Utility v"
+ TiS_VERSION)
gui.Show()
app.MainLoop()
app.Destroy()
if __name__ == '__main__':
main()


@ -0,0 +1,510 @@
"""
Copyright (c) 2015-2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
from collections import OrderedDict
import netaddr
import xml.etree.ElementTree as ET
import wx
from common import utils, exceptions
from common.guicomponents import Field, TYPES, prepare_fields, on_change, \
set_icons, handle_sub_show
from common.configobjects import HOST_XML_ATTRIBUTES
from common.validator import TiS_VERSION
PAGE_SIZE = (200, 200)
WINDOW_SIZE = (570, 700)
CB_TRUE = True
CB_FALSE = False
PADDING = 10
IMPORT_ID = 100
EXPORT_ID = 101
INTERNAL_ID = 105
EXTERNAL_ID = 106
filedir = ""
filename = ""
# Globals
BULK_ADDING = False
class HostPage(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent=parent)
self.parent = parent
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(self.sizer)
self.fieldgroup = []
self.fieldgroup.append(OrderedDict())
self.fieldgroup.append(OrderedDict())
self.fieldgroup.append(OrderedDict())
self.fields_sizer1 = wx.GridBagSizer(vgap=10, hgap=10)
self.fields_sizer2 = wx.GridBagSizer(vgap=10, hgap=10)
self.fields_sizer3 = wx.GridBagSizer(vgap=10, hgap=10)
# Basic Fields
self.fieldgroup[0]['personality'] = Field(
text="Personality",
type=TYPES.choice,
choices=['compute', 'controller', 'storage'],
initial='compute'
)
self.fieldgroup[0]['hostname'] = Field(
text="Hostname",
type=TYPES.string,
initial=parent.get_next_hostname()
)
self.fieldgroup[0]['mgmt_mac'] = Field(
text="Management MAC Address",
type=TYPES.string,
initial=""
)
self.fieldgroup[0]['mgmt_ip'] = Field(
text="Management IP Address",
type=TYPES.string,
initial=""
)
self.fieldgroup[0]['location'] = Field(
text="Location",
type=TYPES.string,
initial=""
)
# Board Management
self.fieldgroup[1]['uses_bm'] = Field(
text="This host uses Board Management",
type=TYPES.checkbox,
initial="",
shows=['bm_ip', 'bm_username',
'bm_password', 'power_on'],
transient=True
)
self.fieldgroup[1]['bm_ip'] = Field(
text="Board Management IP Address",
type=TYPES.string,
initial=""
)
self.fieldgroup[1]['bm_username'] = Field(
text="Board Management username",
type=TYPES.string,
initial=""
)
self.fieldgroup[1]['bm_password'] = Field(
text="Board Management password",
type=TYPES.string,
initial=""
)
self.fieldgroup[1]['power_on'] = Field(
text="Power on host",
type=TYPES.checkbox,
initial="N",
transient=True
)
# Installation Parameters
self.fieldgroup[2]['boot_device'] = Field(
text="Boot Device",
type=TYPES.string,
initial=""
)
self.fieldgroup[2]['rootfs_device'] = Field(
text="Rootfs Device",
type=TYPES.string,
initial=""
)
self.fieldgroup[2]['install_output'] = Field(
text="Installation Output",
type=TYPES.choice,
choices=['text', 'graphical'],
initial="text"
)
self.fieldgroup[2]['console'] = Field(
text="Console",
type=TYPES.string,
initial=""
)
prepare_fields(self, self.fieldgroup[0], self.fields_sizer1,
self.on_change)
prepare_fields(self, self.fieldgroup[1], self.fields_sizer2,
self.on_change)
prepare_fields(self, self.fieldgroup[2], self.fields_sizer3,
self.on_change)
# Bind button handlers
self.Bind(wx.EVT_CHOICE, self.on_personality,
self.fieldgroup[0]['personality'].input)
self.Bind(wx.EVT_TEXT, self.on_hostname,
self.fieldgroup[0]['hostname'].input)
# Control Buttons
self.button_sizer = wx.BoxSizer(orient=wx.HORIZONTAL)
self.add = wx.Button(self, -1, "Add a New Host")
self.Bind(wx.EVT_BUTTON, self.on_add, self.add)
self.remove = wx.Button(self, -1, "Remove this Host")
self.Bind(wx.EVT_BUTTON, self.on_remove, self.remove)
self.button_sizer.Add(self.add)
self.button_sizer.Add(self.remove)
# Add fields and spacers
self.sizer.Add(self.fields_sizer1)
self.sizer.AddWindow(wx.StaticLine(self, -1), 0, wx.EXPAND | wx.ALL,
PADDING)
self.sizer.Add(self.fields_sizer2)
self.sizer.AddWindow(wx.StaticLine(self, -1), 0, wx.EXPAND | wx.ALL,
PADDING)
self.sizer.Add(self.fields_sizer3)
self.sizer.AddStretchSpacer()
self.sizer.AddWindow(wx.StaticLine(self, -1), 0, wx.EXPAND | wx.ALL,
PADDING)
self.sizer.Add(self.button_sizer, border=10, flag=wx.CENTER)
def on_hostname(self, event, string=None):
"""Update the List entry text to match the new hostname
"""
string = string or event.GetString()
index = self.parent.GetSelection()
self.parent.SetPageText(index, string)
self.parent.parent.Layout()
def on_personality(self, event, string=None):
"""Remove hostname field if it's a storage or controller
"""
string = string or event.GetString()
index = self.parent.GetSelection()
if string == 'compute':
self.fieldgroup[0]['hostname'].show(True)
self.parent.SetPageText(index,
self.fieldgroup[0]['hostname'].get_value())
return
elif string == 'controller':
self.fieldgroup[0]['hostname'].show(False)
elif string == 'storage':
self.fieldgroup[0]['hostname'].show(False)
self.parent.SetPageText(index, string)
self.parent.Layout()
def on_add(self, event):
try:
self.validate()
except Exception as ex:
wx.LogError("Error on page: " + ex.message)
return
self.parent.new_page()
def on_remove(self, event):
if self.parent.GetPageCount() == 1:
wx.LogError("Must leave at least one host")
return
index = self.parent.GetSelection()
self.parent.DeletePage(index)
def to_xml(self):
"""Create the XML for this host
"""
self.validate()
attrs = ""
# Generic handling
for fgroup in self.fieldgroup:
for name, field in fgroup.items():
if field.transient or not field.get_value():
continue
attrs += "\t\t<" + name + ">" + \
field.get_value() + "</" + name + ">\n"
# Special Fields
if self.fieldgroup[1]['power_on'].get_value() == 'Y':
attrs += "\t\t<power_on/>\n"
if self.fieldgroup[1]['uses_bm'].get_value() == 'Y':
attrs += "\t\t<bm_type>bmc</bm_type>\n"
return "\t<host>\n" + attrs + "\t</host>\n"
def validate(self):
if self.fieldgroup[0]['personality'].get_value() == "compute" and not \
utils.is_valid_hostname(
self.fieldgroup[0]['hostname'].get_value()):
raise exceptions.ValidateFail(
"Hostname %s is not valid" %
self.fieldgroup[0]['hostname'].get_value())
if not utils.is_valid_mac(self.fieldgroup[0]['mgmt_mac'].get_value()):
raise exceptions.ValidateFail(
"Management MAC address %s is not valid" %
self.fieldgroup[0]['mgmt_mac'].get_value())
ip = self.fieldgroup[0]['mgmt_ip'].get_value()
if ip:
try:
netaddr.IPAddress(ip)
except Exception:
raise exceptions.ValidateFail(
"Management IP address %s is not valid" % ip)
if self.fieldgroup[1]['uses_bm'].get_value() == 'Y':
ip = self.fieldgroup[1]['bm_ip'].get_value()
if ip:
try:
netaddr.IPAddress(ip)
except Exception:
raise exceptions.ValidateFail(
"Board Management IP address %s is not valid" % ip)
else:
raise exceptions.ValidateFail(
"Board Management IP is not specified. "
"External Board Management Network requires Board "
"Management IP address.")
def on_change(self, event):
on_change(self, self.fieldgroup[1], event)
def set_field(self, name, value):
for fgroup in self.fieldgroup:
for fname, field in fgroup.items():
if fname == name:
field.set_value(value)
class HostBook(wx.Listbook):
def __init__(self, parent):
wx.Listbook.__init__(self, parent, style=wx.BK_DEFAULT)
self.parent = parent
self.Layout()
# Add a starting host
self.new_page()
self.Bind(wx.EVT_LISTBOOK_PAGE_CHANGED, self.on_changed)
self.Bind(wx.EVT_LISTBOOK_PAGE_CHANGING, self.on_changing)
def on_changed(self, event):
event.Skip()
def on_changing(self, event):
# Trigger page validation before leaving
if BULK_ADDING:
event.Skip()
return
index = self.GetSelection()
try:
if index != -1:
self.GetPage(index).validate()
except Exception as ex:
wx.LogError("Error on page: " + ex.message)
event.Veto()
return
event.Skip()
def new_page(self, hostname=None):
new_page = HostPage(self)
self.AddPage(new_page, hostname or self.get_next_hostname())
self.SetSelection(self.GetPageCount() - 1)
return new_page
def get_next_hostname(self, suggest=None):
prefix = "compute-"
new_suggest = suggest or 0
for existing in range(self.GetPageCount()):
if prefix + str(new_suggest) in self.GetPageText(existing):
new_suggest = self.get_next_hostname(suggest=new_suggest + 1)
if suggest:
prefix = ""
return prefix + str(new_suggest)
def to_xml(self):
"""Create the complete XML and allow user to save
"""
xml = "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n" \
"<hosts version=\"" + TiS_VERSION + "\">\n"
for index in range(self.GetPageCount()):
try:
xml += self.GetPage(index).to_xml()
except Exception as ex:
wx.LogError("Error on page number %s: %s" %
(index + 1, ex.message))
return
xml += "</hosts>"
writer = wx.FileDialog(self,
message="Save Host XML File",
defaultDir=filedir or "",
defaultFile=filename or "TiS_hosts.xml",
wildcard="XML file (*.xml)|*.xml",
style=wx.FD_SAVE,
)
if writer.ShowModal() == wx.ID_CANCEL:
return
# Write the XML file to disk
try:
with open(writer.GetPath(), "wb") as f:
f.write(xml.encode('utf-8'))
except IOError:
wx.LogError("Error writing hosts xml file '%s'." %
writer.GetPath())
class HostGUI(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, wx.ID_ANY,
"Titanium Cloud Host File Creator v" + TiS_VERSION,
size=WINDOW_SIZE)
self.panel = wx.Panel(self)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.book = HostBook(self.panel)
self.sizer.Add(self.book, 1, wx.ALL | wx.EXPAND, 5)
self.panel.SetSizer(self.sizer)
set_icons(self)
menu_bar = wx.MenuBar()
# File
file_menu = wx.Menu()
import_item = wx.MenuItem(file_menu, IMPORT_ID, '&Import')
file_menu.AppendItem(import_item)
export_item = wx.MenuItem(file_menu, EXPORT_ID, '&Export')
file_menu.AppendItem(export_item)
menu_bar.Append(file_menu, '&File')
self.Bind(wx.EVT_MENU, self.on_import, id=IMPORT_ID)
self.Bind(wx.EVT_MENU, self.on_export, id=EXPORT_ID)
self.SetMenuBar(menu_bar)
self.Layout()
self.SetMinSize(WINDOW_SIZE)
self.Show()
def on_import(self, e):
global BULK_ADDING
try:
BULK_ADDING = True
msg = ""
reader = wx.FileDialog(self,
"Import Existing Titanium Cloud Host File",
"", "", "XML file (*.xml)|*.xml",
wx.FD_OPEN | wx.FD_FILE_MUST_EXIST)
if reader.ShowModal() == wx.ID_CANCEL:
return
# Read in the config file
try:
with open(reader.GetPath(), 'rb') as f:
contents = f.read()
root = ET.fromstring(contents)
except Exception as ex:
wx.LogError("Cannot parse host file, Error: %s." % ex)
return
# Check version of host file
if root.get('version', "") != TiS_VERSION:
msg += "Warning: This file was created using tools for a " \
"different version of Titanium Cloud than this tool " \
"was designed for (" + TiS_VERSION + ")"
for idx, xmlhost in enumerate(root.findall('host')):
hostname = None
name_elem = xmlhost.find('hostname')
if name_elem is not None:
hostname = name_elem.text
new_host = self.book.new_page()
try:
for attr in HOST_XML_ATTRIBUTES:
elem = xmlhost.find(attr)
if elem is not None and elem.text:
# Enable and display bm section if used
if attr == 'bm_type' and elem.text:
new_host.set_field("uses_bm", "Y")
handle_sub_show(
new_host.fieldgroup[1],
new_host.fieldgroup[1]['uses_bm'].shows,
True)
new_host.Layout()
# Basic field setting
new_host.set_field(attr, elem.text)
# Additional functionality for special fields
if attr == 'personality':
# Update hostname visibility and page title
new_host.on_personality(None, elem.text)
# Special handling for presence of power_on element
if attr == 'power_on' and elem is not None:
new_host.set_field(attr, "Y")
new_host.validate()
except Exception as ex:
if msg:
msg += "\n"
msg += "Warning: Added host %s has a validation error, " \
"reason: %s" % \
(hostname or ("with index " + str(idx)),
ex.message)
# No longer delete hosts with validation errors,
# The user can fix them up before exporting
# self.book.DeletePage(new_index)
if msg:
wx.LogWarning(msg)
finally:
BULK_ADDING = False
self.Layout()
def on_export(self, e):
# Do a validation of current page first
index = self.book.GetSelection()
try:
if index != -1:
self.book.GetPage(index).validate()
except Exception as ex:
wx.LogError("Error on page: " + ex.message)
return
# Check for hostname conflicts
hostnames = []
for existing in range(self.book.GetPageCount()):
hostname = self.book.GetPage(
existing).fieldgroup[0]['hostname'].get_value()
if hostname in hostnames:
wx.LogError("Cannot export, duplicate hostname '%s'" %
hostname)
return
# Ignore multiple None hostnames
elif hostname:
hostnames.append(hostname)
self.book.to_xml()
def main():
app = wx.App(0) # Start the application
HostGUI()
app.MainLoop()
if __name__ == '__main__':
main()
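For reference, the `to_xml` methods above build the host file by string concatenation; with hypothetical field values (the `version` attribute comes from `TiS_VERSION`), the generated document is shaped like this. Note that transient fields such as the `uses_bm` and `power_on` checkboxes are skipped by the generic loop and re-emitted as the `<bm_type>bmc</bm_type>` and empty `<power_on/>` elements:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<hosts version="18.03">
	<host>
		<personality>compute</personality>
		<hostname>compute-0</hostname>
		<mgmt_mac>08:00:27:01:02:03</mgmt_mac>
		<mgmt_ip>192.168.204.50</mgmt_ip>
		<bm_ip>10.10.10.50</bm_ip>
		<bm_username>admin</bm_username>
		<bm_type>bmc</bm_type>
		<power_on/>
	</host>
</hosts>
```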

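The `get_next_hostname` recursion above picks the first unused `compute-N` suffix by scanning the existing page titles. The same idea as a standalone, iterative sketch (a hypothetical helper operating on a plain list of names instead of wx pages):

```python
def next_hostname(existing, prefix="compute-"):
    """Return the first prefix-N name not already in use.

    Mirrors HostBook.get_next_hostname: membership is a substring
    check against each existing title, as in the wx version.
    """
    n = 0
    while any(prefix + str(n) in name for name in existing):
        n += 1
    return prefix + str(n)
```

For example, `next_hostname(["compute-0", "compute-2"])` yields `"compute-1"`, reusing the gap left by a removed host, just as the recursive version does.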
View File

@ -0,0 +1,29 @@
"""
Copyright (c) 2016-2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
from setuptools import setup, find_packages
setup(
name='wrs-configutility',
description='Titanium Cloud Configuration Utility',
version='3.0.0',
license='Apache-2.0',
platforms=['any'],
provides=['configutilities'],
packages=find_packages(),
install_requires=['netaddr>=0.7.14', 'six'],
package_data={},
include_package_data=False,
entry_points={
'gui_scripts': [
'config_gui = configutilities.configgui:main',
],
'console_scripts': [
'config_validator = configutilities.config_validator:main'
],
}
)

Binary file not shown.

Size: 32 KiB

View File

@ -0,0 +1,26 @@
"""
Copyright (c) 2016 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
from setuptools import setup, find_packages
setup(
name='configutilities',
description='Configuration File Validator',
version='3.0.0',
license='Apache-2.0',
platforms=['any'],
provides=['configutilities'],
packages=find_packages(),
install_requires=['netaddr>=0.7.14'],
package_data={},
include_package_data=False,
entry_points={
'console_scripts': [
'config_validator = configutilities.config_validator:main',
],
}
)

View File

@ -0,0 +1,22 @@
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = flake8
# Tox does not work if the path to the workdir is too long, so move it to /tmp
toxworkdir = /tmp/{env:USER}_ccutiltox
wrsdir = {toxinidir}/../../../../../../../../..
[testenv]
whitelist_externals = find
install_command = pip install --no-cache-dir {opts} {packages}
[testenv:flake8]
basepython = python2.7
deps = flake8
commands = flake8 {posargs}
[flake8]
ignore = W503

6
controllerconfig/.gitignore vendored Normal file
View File

@ -0,0 +1,6 @@
!.distro
.distro/centos7/rpmbuild/RPMS
.distro/centos7/rpmbuild/SRPMS
.distro/centos7/rpmbuild/BUILD
.distro/centos7/rpmbuild/BUILDROOT
.distro/centos7/rpmbuild/SOURCES/controllerconfig*tar.gz

13
controllerconfig/PKG-INFO Normal file
View File

@ -0,0 +1,13 @@
Metadata-Version: 1.1
Name: controllerconfig
Version: 1.0
Summary: Controller Node Configuration
Home-page:
Author: Wind River
Author-email: info@windriver.com
License: Apache-2.0
Description: Controller node configuration
Platform: UNKNOWN

View File

@ -0,0 +1,2 @@
SRC_DIR="controllerconfig"
TIS_PATCH_VER=140

View File

@ -0,0 +1,86 @@
Summary: Controller node configuration
Name: controllerconfig
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz
BuildRequires: python-setuptools
Requires: systemd
Requires: python-netaddr
Requires: python-keyring
Requires: python-six
Requires: python-iso8601
Requires: psmisc
Requires: lshell
Requires: python-pyudev
Requires: python-netifaces
%description
Controller node configuration
%define local_dir /usr/
%define local_bindir %{local_dir}/bin/
%define local_etc_initd /etc/init.d/
%define local_goenabledd /etc/goenabled.d/
%define local_etc_upgraded /etc/upgrade.d/
%define local_etc_systemd /etc/systemd/system/
%define pythonroot /usr/lib64/python2.7/site-packages
%define debug_package %{nil}
%prep
%setup
%build
%{__python} setup.py build
# TODO: NO_GLOBAL_PY_DELETE (see python-byte-compile.bbclass), put in macro/script
%install
%{__python} setup.py install --root=$RPM_BUILD_ROOT \
--install-lib=%{pythonroot} \
--prefix=/usr \
--install-data=/usr/share \
--single-version-externally-managed
install -d -m 755 %{buildroot}%{local_bindir}
install -p -D -m 700 scripts/keyringstaging %{buildroot}%{local_bindir}/keyringstaging
install -p -D -m 700 scripts/openstack_update_admin_password %{buildroot}%{local_bindir}/openstack_update_admin_password
install -p -D -m 700 scripts/install_clone.py %{buildroot}%{local_bindir}/install_clone
install -p -D -m 700 scripts/finish_install_clone.sh %{buildroot}%{local_bindir}/finish_install_clone.sh
install -d -m 755 %{buildroot}%{local_goenabledd}
install -p -D -m 700 scripts/config_goenabled_check.sh %{buildroot}%{local_goenabledd}/config_goenabled_check.sh
install -d -m 755 %{buildroot}%{local_etc_initd}
install -p -D -m 755 scripts/controller_config %{buildroot}%{local_etc_initd}/controller_config
# Install Upgrade scripts
install -d -m 755 %{buildroot}%{local_etc_upgraded}
install -p -D -m 755 upgrade-scripts/* %{buildroot}%{local_etc_upgraded}/
install -d -m 755 %{buildroot}%{local_etc_systemd}
install -p -D -m 664 scripts/controllerconfig.service %{buildroot}%{local_etc_systemd}/controllerconfig.service
#install -p -D -m 664 scripts/config.service %{buildroot}%{local_etc_systemd}/config.service
%post
systemctl enable controllerconfig.service
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
%doc LICENSE
%{local_bindir}/*
%dir %{pythonroot}/%{name}
%{pythonroot}/%{name}/*
%dir %{pythonroot}/%{name}-%{version}.0-py2.7.egg-info
%{pythonroot}/%{name}-%{version}.0-py2.7.egg-info/*
%{local_goenabledd}/*
%{local_etc_initd}/*
%dir %{local_etc_upgraded}
%{local_etc_upgraded}/*
%{local_etc_systemd}/*

View File

@ -0,0 +1,7 @@
[run]
branch = True
source = controllerconfig
omit = controllerconfig/tests/*
[report]
ignore_errors = True

View File

@ -0,0 +1,5 @@
*.pyc
.coverage
.testrepository
cover

View File

@ -0,0 +1,8 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 \
OS_STDERR_CAPTURE=1 \
OS_TEST_TIMEOUT=60 \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./controllerconfig/tests} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,5 @@
#
# Copyright (c) 2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

File diff suppressed because it is too large

View File

@ -0,0 +1,717 @@
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
Clone a Configured System and Install the image on another
identical hardware or the same hardware.
"""
import os
import re
import glob
import time
import shutil
import netaddr
import tempfile
import fileinput
import subprocess
from common import constants
from sysinv.common import constants as si_const
import sysinv_api
import tsconfig.tsconfig as tsconfig
from common import log
from common.exceptions import CloneFail, BackupFail
import utils
import backup_restore
DEBUG = False
LOG = log.get_logger(__name__)
DEVNULL = open(os.devnull, 'w')
CLONE_ARCHIVE_DIR = "clone-archive"
CLONE_ISO_INI = ".cloneiso.ini"
NAME = "name"
INSTALLED = "installed_at"
RESULT = "result"
IN_PROGRESS = "in-progress"
FAIL = "failed"
OK = "ok"
def clone_status():
""" Check status of last install-clone. """
INI_FILE1 = os.path.join("/", CLONE_ARCHIVE_DIR, CLONE_ISO_INI)
INI_FILE2 = os.path.join(tsconfig.PLATFORM_CONF_PATH, CLONE_ISO_INI)
name = "unknown"
result = "unknown"
installed_at = "unknown time"
for ini_file in [INI_FILE1, INI_FILE2]:
if os.path.exists(ini_file):
with open(ini_file) as f:
s = f.read()
for line in s.split("\n"):
if line.startswith(NAME):
name = line.split("=")[1].strip()
elif line.startswith(RESULT):
result = line.split("=")[1].strip()
elif line.startswith(INSTALLED):
installed_at = line.split("=")[1].strip()
break # one file was found, skip the other file
if result != "unknown":
if result == OK:
print("\nInstallation of cloned image [{}] was successful at {}\n"
.format(name, installed_at))
elif result == FAIL:
print("\nInstallation of cloned image [{}] failed at {}\n"
.format(name, installed_at))
else:
print("\ninstall-clone is in progress.\n")
else:
print("\nCloned image is not installed on this node.\n")
def check_size(archive_dir):
""" Check if there is enough space to create iso. """
overhead_bytes = 1024 ** 3 # extra GB for staging directory
# Size of the cloned iso is directly proportional to the
# installed package repository (note that patches are a part of
# the system archive size below).
# 1G overhead size added (above) will accommodate the temporary
# workspace (updating system archive etc) needed to create the iso.
feed_dir = os.path.join('/www', 'pages', 'feed',
'rel-' + tsconfig.SW_VERSION)
overhead_bytes += backup_restore.backup_std_dir_size(feed_dir)
cinder_config = False
backend_services = sysinv_api.get_storage_backend_services()
for services in backend_services.values():
if (services.find(si_const.SB_SVC_CINDER) != -1):
cinder_config = True
break
clone_size = (
overhead_bytes +
backup_restore.backup_etc_size() +
backup_restore.backup_config_size(tsconfig.CONFIG_PATH) +
backup_restore.backup_puppet_data_size(constants.HIERADATA_PERMDIR) +
backup_restore.backup_keyring_size(backup_restore.keyring_permdir) +
backup_restore.backup_ldap_size() +
backup_restore.backup_postgres_size(cinder_config) +
backup_restore.backup_ceilometer_size(
backup_restore.ceilometer_permdir) +
backup_restore.backup_std_dir_size(backup_restore.glance_permdir) +
backup_restore.backup_std_dir_size(backup_restore.home_permdir) +
backup_restore.backup_std_dir_size(backup_restore.patching_permdir) +
backup_restore.backup_std_dir_size(
backup_restore.patching_repo_permdir) +
backup_restore.backup_std_dir_size(backup_restore.extension_permdir) +
backup_restore.backup_std_dir_size(
backup_restore.patch_vault_permdir) +
backup_restore.backup_cinder_size(backup_restore.cinder_permdir))
archive_dir_free_space = \
utils.filesystem_get_free_space(archive_dir)
if clone_size > archive_dir_free_space:
print ("\nArchive directory (%s) does not have enough free "
"space (%s), estimated size to create image is %s." %
(archive_dir,
utils.print_bytes(archive_dir_free_space),
utils.print_bytes(clone_size)))
raise CloneFail("Not enough free space.\n")
def update_bootloader_default(bl_file, host):
""" Update bootloader files for cloned image """
if not os.path.exists(bl_file):
LOG.error("{} does not exist".format(bl_file))
raise CloneFail("{} does not exist".format(os.path.basename(bl_file)))
# Tags should be in sync with common-bsp/files/centos.syslinux.cfg
# and common-bsp/files/grub.cfg
STANDARD_STANDARD = '0'
STANDARD_EXTENDED = 'S0'
AIO_STANDARD = '2'
AIO_EXTENDED = 'S2'
AIO_LL_STANDARD = '4'
AIO_LL_EXTENDED = 'S4'
if "grub.cfg" in bl_file:
STANDARD_STANDARD = 'standard>serial>' + \
si_const.SYSTEM_SECURITY_PROFILE_STANDARD
STANDARD_EXTENDED = 'standard>serial>' + \
si_const.SYSTEM_SECURITY_PROFILE_EXTENDED
AIO_STANDARD = 'aio>serial>' + \
si_const.SYSTEM_SECURITY_PROFILE_STANDARD
AIO_EXTENDED = 'aio>serial>' + \
si_const.SYSTEM_SECURITY_PROFILE_EXTENDED
AIO_LL_STANDARD = 'aio-lowlat>serial>' + \
si_const.SYSTEM_SECURITY_PROFILE_STANDARD
AIO_LL_EXTENDED = 'aio-lowlat>serial>' + \
si_const.SYSTEM_SECURITY_PROFILE_EXTENDED
SUBMENUITEM_TBOOT = 'tboot'
SUBMENUITEM_SECUREBOOT = 'secureboot'
timeout_line = None
default_line = None
default_label_num = STANDARD_STANDARD
if utils.get_system_type() == si_const.TIS_AIO_BUILD:
if si_const.LOWLATENCY in tsconfig.subfunctions:
default_label_num = AIO_LL_STANDARD
else:
default_label_num = AIO_STANDARD
if (tsconfig.security_profile ==
si_const.SYSTEM_SECURITY_PROFILE_EXTENDED):
default_label_num = STANDARD_EXTENDED
if utils.get_system_type() == si_const.TIS_AIO_BUILD:
if si_const.LOWLATENCY in tsconfig.subfunctions:
default_label_num = AIO_LL_EXTENDED
else:
default_label_num = AIO_EXTENDED
if "grub.cfg" in bl_file:
if host.tboot is not None:
if host.tboot == "true":
default_label_num = default_label_num + '>' + \
SUBMENUITEM_TBOOT
else:
default_label_num = default_label_num + '>' + \
SUBMENUITEM_SECUREBOOT
try:
with open(bl_file) as f:
s = f.read()
for line in s.split("\n"):
if line.startswith("timeout"):
timeout_line = line
elif line.startswith("default"):
default_line = line
if "grub.cfg" in bl_file:
replace = "default='{}'\ntimeout=10".format(default_label_num)
else: # isolinux format
replace = "default {}\ntimeout 10".format(default_label_num)
if default_line and timeout_line:
s = s.replace(default_line, "")
s = s.replace(timeout_line, replace)
elif default_line:
s = s.replace(default_line, replace)
elif timeout_line:
s = s.replace(timeout_line, replace)
else:
s = replace + s
s = re.sub(r'boot_device=[^\s]*',
'boot_device=%s' % host.boot_device,
s)
s = re.sub(r'rootfs_device=[^\s]*',
'rootfs_device=%s' % host.rootfs_device,
s)
s = re.sub(r'console=[^\s]*',
'console=%s' % host.console,
s)
with open(bl_file, "w") as f:
LOG.info("rewriting {}: label={} find=[{}][{}] replace=[{}]"
.format(bl_file, default_label_num, timeout_line,
default_line, replace.replace('\n', '<newline>')))
f.write(s)
except Exception as e:
LOG.error("update_bootloader_default failed: {}".format(e))
raise CloneFail("Failed to update bootloader files")
def get_online_cpus():
""" Get max cpu id """
with open('/sys/devices/system/cpu/online') as f:
s = f.read()
max_cpu_id = s.split('-')[-1].strip()
LOG.info("Max cpu id:{} [{}]".format(max_cpu_id, s.strip()))
return max_cpu_id
return ""
def get_total_mem():
""" Get total memory size """
with open('/proc/meminfo') as f:
s = f.read()
for line in s.split("\n"):
if line.startswith("MemTotal:"):
mem_total = line.split()[1]
LOG.info("MemTotal:[{}]".format(mem_total))
return mem_total
return ""
def get_disk_size(disk):
""" Get the disk size """
disk_size = ""
try:
disk_size = subprocess.check_output(
['lsblk', '--nodeps', '--output', 'SIZE',
'--noheadings', '--bytes', disk])
except Exception as e:
LOG.exception(e)
LOG.error("Failed to get disk size [{}]".format(disk))
raise CloneFail("Failed to get disk size")
return disk_size.strip()
def create_ini_file(clone_archive_dir, iso_name):
"""Create clone ini file."""
interfaces = ""
my_hostname = utils.get_controller_hostname()
macs = sysinv_api.get_mac_addresses(my_hostname)
for intf in macs.keys():
interfaces += intf + " "
disk_paths = ""
for _, _, files in os.walk('/dev/disk/by-path'):
for f in files:
if f.startswith("pci-") and "part" not in f and "usb" not in f:
disk_size = get_disk_size('/dev/disk/by-path/' + f)
disk_paths += f + "#" + disk_size + " "
break # no need to go into sub-dirs.
LOG.info("create ini: {} {}".format(macs, files))
with open(os.path.join(clone_archive_dir, CLONE_ISO_INI), 'w') as f:
f.write('[clone_iso]\n')
f.write('name=' + iso_name + '\n')
f.write('host=' + my_hostname + '\n')
f.write('created_at=' + time.strftime("%Y-%m-%d %H:%M:%S %Z")
+ '\n')
f.write('interfaces=' + interfaces + '\n')
f.write('disks=' + disk_paths + '\n')
f.write('cpus=' + get_online_cpus() + '\n')
f.write('mem=' + get_total_mem() + '\n')
LOG.info("create ini: ({}) ({})".format(interfaces, disk_paths))
def create_iso(iso_name, archive_dir):
""" Create iso image. This is modelled after
the cgcs-root/build-tools/build-iso tool. """
try:
controller_0 = sysinv_api.get_host_data('controller-0')
except Exception as e:
e_log = "Failed to retrieve controller-0 inventory details."
LOG.exception(e_log)
raise CloneFail(e_log)
iso_dir = os.path.join(archive_dir, 'isolinux')
clone_archive_dir = os.path.join(iso_dir, CLONE_ARCHIVE_DIR)
output = None
tmpdir = None
total_steps = 6
step = 1
    print("\nCreating ISO:")
# Add the correct kick-start file to the image
ks_file = "controller_ks.cfg"
if utils.get_system_type() == si_const.TIS_AIO_BUILD:
if si_const.LOWLATENCY in tsconfig.subfunctions:
ks_file = "smallsystem_lowlatency_ks.cfg"
else:
ks_file = "smallsystem_ks.cfg"
try:
# prepare the iso files
images_dir = os.path.join(iso_dir, 'images')
os.mkdir(images_dir, 0644)
pxe_dir = os.path.join('/pxeboot',
'rel-' + tsconfig.SW_VERSION)
os.symlink(pxe_dir + '/installer-bzImage',
iso_dir + '/vmlinuz')
os.symlink(pxe_dir + '/installer-initrd',
iso_dir + '/initrd.img')
utils.progress(total_steps, step, 'preparing files', 'DONE')
step += 1
feed_dir = os.path.join('/www', 'pages', 'feed',
'rel-' + tsconfig.SW_VERSION)
os.symlink(feed_dir + '/Packages', iso_dir + '/Packages')
os.symlink(feed_dir + '/repodata', iso_dir + '/repodata')
os.symlink(feed_dir + '/LiveOS', iso_dir + '/LiveOS')
shutil.copy2(feed_dir + '/isolinux.cfg', iso_dir)
update_bootloader_default(iso_dir + '/isolinux.cfg', controller_0)
shutil.copyfile('/usr/share/syslinux/isolinux.bin',
iso_dir + '/isolinux.bin')
os.symlink('/usr/share/syslinux/vesamenu.c32',
iso_dir + '/vesamenu.c32')
for filename in glob.glob(os.path.join(feed_dir, '*ks.cfg')):
shutil.copy(os.path.join(feed_dir, filename), iso_dir)
utils.progress(total_steps, step, 'preparing files', 'DONE')
step += 1
efiboot_dir = os.path.join(iso_dir, 'EFI', 'BOOT')
os.makedirs(efiboot_dir, 0644)
l_efi_dir = os.path.join('/boot', 'efi', 'EFI')
shutil.copy2(l_efi_dir + '/BOOT/BOOTX64.EFI', efiboot_dir)
shutil.copy2(l_efi_dir + '/centos/MokManager.efi', efiboot_dir)
shutil.copy2(l_efi_dir + '/centos/grubx64.efi', efiboot_dir)
shutil.copy2('/pxeboot/EFI/grub.cfg', efiboot_dir)
update_bootloader_default(efiboot_dir + '/grub.cfg', controller_0)
shutil.copytree(l_efi_dir + '/centos/fonts',
efiboot_dir + '/fonts')
# copy EFI boot image and update the grub.cfg file
efi_img = images_dir + '/efiboot.img'
shutil.copy2(pxe_dir + '/efiboot.img', efi_img)
tmpdir = tempfile.mkdtemp(dir=archive_dir)
output = subprocess.check_output(
["mount", "-t", "vfat", "-o", "loop",
efi_img, tmpdir],
stderr=subprocess.STDOUT)
# replace the grub.cfg file with the updated file
efi_grub_f = os.path.join(tmpdir, 'EFI', 'BOOT', 'grub.cfg')
os.remove(efi_grub_f)
shutil.copy2(efiboot_dir + '/grub.cfg', efi_grub_f)
subprocess.call(['umount', tmpdir])
shutil.rmtree(tmpdir, ignore_errors=True)
tmpdir = None
epoch_time = "%.9f" % time.time()
disc_info = [epoch_time, tsconfig.SW_VERSION, "x86_64"]
with open(iso_dir + '/.discinfo', 'w') as f:
f.write('\n'.join(disc_info))
# copy the latest install_clone executable
shutil.copy2('/usr/bin/install_clone', iso_dir)
subprocess.check_output("cat /pxeboot/post_clone_iso_ks.cfg >> " +
iso_dir + "/" + ks_file, shell=True)
utils.progress(total_steps, step, 'preparing files', 'DONE')
step += 1
# copy patches
iso_patches_dir = os.path.join(iso_dir, 'patches')
iso_patch_repo_dir = os.path.join(iso_patches_dir, 'repodata')
iso_patch_pkgs_dir = os.path.join(iso_patches_dir, 'Packages')
iso_patch_metadata_dir = os.path.join(iso_patches_dir, 'metadata')
iso_patch_applied_dir = os.path.join(iso_patch_metadata_dir, 'applied')
iso_patch_committed_dir = os.path.join(iso_patch_metadata_dir,
'committed')
os.mkdir(iso_patches_dir, 0755)
os.mkdir(iso_patch_repo_dir, 0755)
os.mkdir(iso_patch_pkgs_dir, 0755)
os.mkdir(iso_patch_metadata_dir, 0755)
os.mkdir(iso_patch_applied_dir, 0755)
os.mkdir(iso_patch_committed_dir, 0755)
repodata = '/www/pages/updates/rel-%s/repodata/' % tsconfig.SW_VERSION
pkgsdir = '/www/pages/updates/rel-%s/Packages/' % tsconfig.SW_VERSION
patch_applied_dir = '/opt/patching/metadata/applied/'
patch_committed_dir = '/opt/patching/metadata/committed/'
subprocess.check_call(['rsync', '-a', repodata,
'%s/' % iso_patch_repo_dir])
if os.path.exists(pkgsdir):
subprocess.check_call(['rsync', '-a', pkgsdir,
'%s/' % iso_patch_pkgs_dir])
if os.path.exists(patch_applied_dir):
subprocess.check_call(['rsync', '-a', patch_applied_dir,
'%s/' % iso_patch_applied_dir])
if os.path.exists(patch_committed_dir):
subprocess.check_call(['rsync', '-a', patch_committed_dir,
'%s/' % iso_patch_committed_dir])
utils.progress(total_steps, step, 'preparing files', 'DONE')
step += 1
create_ini_file(clone_archive_dir, iso_name)
os.chmod(iso_dir + '/isolinux.bin', 0664)
iso_file = os.path.join(archive_dir, iso_name + ".iso")
output = subprocess.check_output(
["nice", "mkisofs",
"-o", iso_file, "-R", "-D",
"-A", "oe_iso_boot", "-V", "oe_iso_boot",
"-f", "-quiet",
"-b", "isolinux.bin", "-c", "boot.cat", "-no-emul-boot",
"-boot-load-size", "4", "-boot-info-table",
"-eltorito-alt-boot", "-e", "images/efiboot.img",
"-no-emul-boot",
iso_dir],
stderr=subprocess.STDOUT)
LOG.info("{} created: [{}]".format(iso_file, output))
utils.progress(total_steps, step, 'iso created', 'DONE')
step += 1
output = subprocess.check_output(
["nice", "isohybrid",
"--uefi",
iso_file],
stderr=subprocess.STDOUT)
LOG.debug("isohybrid: {}".format(output))
output = subprocess.check_output(
["nice", "implantisomd5",
iso_file],
stderr=subprocess.STDOUT)
LOG.debug("implantisomd5: {}".format(output))
utils.progress(total_steps, step, 'checksum implanted', 'DONE')
print("Cloned iso image created: {}".format(iso_file))
except Exception as e:
LOG.exception(e)
e_log = "ISO creation ({}) failed".format(iso_name)
if output:
e_log += ' [' + output + ']'
LOG.error(e_log)
raise CloneFail("ISO creation failed.")
finally:
if tmpdir:
subprocess.call(['umount', tmpdir], stderr=DEVNULL)
shutil.rmtree(tmpdir, ignore_errors=True)
def find_and_replace_in_file(target, find, replace):
""" Find and replace a string in a file. """
found = None
try:
for line in fileinput.FileInput(target, inplace=1):
if find in line:
# look for "find" string within word boundaries
fpat = r'\b' + find + r'\b'
line = re.sub(fpat, replace, line)
found = True
print line,
except Exception as e:
LOG.error("Failed to replace [{}] with [{}] in [{}]: {}"
.format(find, replace, target, str(e)))
found = None
finally:
fileinput.close()
return found
def find_and_replace(target_list, find, replace):
""" Find and replace a string in all files in a directory. """
found = False
file_list = []
for target in target_list:
if os.path.isfile(target):
if find_and_replace_in_file(target, find, replace):
found = True
file_list.append(target)
elif os.path.isdir(target):
try:
output = subprocess.check_output(
['grep', '-rl', find, target])
if output:
for line in output.split('\n'):
if line and find_and_replace_in_file(
line, find, replace):
found = True
file_list.append(line)
except Exception:
pass # nothing found in that directory
if not found:
LOG.error("[{}] not found in backup".format(find))
else:
LOG.info("Replaced [{}] with [{}] in {}".format(
find, replace, file_list))
def remove_from_archive(archive, unwanted):
""" Remove a file from the archive. """
try:
subprocess.check_call(["tar", "--delete",
"--file=" + archive,
unwanted])
    except subprocess.CalledProcessError as e:
LOG.error("Delete of {} failed: {}".format(unwanted, e.output))
raise CloneFail("Failed to modify backup archive")
def update_oamip_in_archive(tmpdir):
""" Update OAM IP in system archive file. """
oam_list = sysinv_api.get_oam_ip()
if not oam_list:
raise CloneFail("Failed to get OAM IP")
for oamfind in [oam_list.oam_start_ip, oam_list.oam_end_ip,
oam_list.oam_subnet, oam_list.oam_floating_ip,
oam_list.oam_c0_ip, oam_list.oam_c1_ip]:
if not oamfind:
continue
ip = netaddr.IPNetwork(oamfind)
find_str = ""
if ip.version == 4:
# if ipv4, use 192.0.x.x as the temporary oam ip
find_str = str(ip.ip)
ipstr_list = find_str.split('.')
ipstr_list[0] = '192'
ipstr_list[1] = '0'
repl_ipstr = ".".join(ipstr_list)
else:
# if ipv6, use 2001:db8:x as the temporary oam ip
find_str = str(ip.ip)
ipstr_list = find_str.split(':')
ipstr_list[0] = '2001'
ipstr_list[1] = 'db8'
repl_ipstr = ":".join(ipstr_list)
if repl_ipstr:
find_and_replace(
[os.path.join(tmpdir, 'etc/hosts'),
os.path.join(tmpdir, 'etc/sysconfig/network-scripts'),
os.path.join(tmpdir, 'etc/nfv/vim/config.ini'),
os.path.join(tmpdir, 'etc/haproxy/haproxy.cfg'),
os.path.join(tmpdir, 'etc/heat/heat.conf'),
os.path.join(tmpdir, 'etc/keepalived/keepalived.conf'),
os.path.join(tmpdir, 'etc/murano/murano.conf'),
os.path.join(tmpdir, 'etc/vswitch/vswitch.ini'),
os.path.join(tmpdir, 'etc/nova/nova.conf'),
os.path.join(tmpdir, 'config/hosts'),
os.path.join(tmpdir, 'hieradata'),
os.path.join(tmpdir, 'postgres/keystone.sql.data'),
os.path.join(tmpdir, 'postgres/sysinv.sql.data')],
find_str, repl_ipstr)
else:
LOG.error("Failed to modify OAM IP:[{}]"
.format(oamfind))
raise CloneFail("Failed to modify OAM IP")
def update_mac_in_archive(tmpdir):
""" Update MAC addresses in system archive file. """
hostname = utils.get_controller_hostname()
macs = sysinv_api.get_mac_addresses(hostname)
for intf, mac in macs.iteritems():
find_and_replace(
[os.path.join(tmpdir, 'postgres/sysinv.sql.data')],
mac, "CLONEISOMAC_{}{}".format(hostname, intf))
if (tsconfig.system_mode == si_const.SYSTEM_MODE_DUPLEX or
tsconfig.system_mode == si_const.SYSTEM_MODE_DUPLEX_DIRECT):
hostname = utils.get_mate_controller_hostname()
macs = sysinv_api.get_mac_addresses(hostname)
for intf, mac in macs.iteritems():
find_and_replace(
[os.path.join(tmpdir, 'postgres/sysinv.sql.data')],
mac, "CLONEISOMAC_{}{}".format(hostname, intf))
def update_disk_serial_id_in_archive(tmpdir):
""" Update disk serial id in system archive file. """
hostname = utils.get_controller_hostname()
disk_sids = sysinv_api.get_disk_serial_ids(hostname)
for d_dnode, d_sid in disk_sids.iteritems():
find_and_replace(
[os.path.join(tmpdir, 'postgres/sysinv.sql.data')],
d_sid, "CLONEISODISKSID_{}{}".format(hostname, d_dnode))
if (tsconfig.system_mode == si_const.SYSTEM_MODE_DUPLEX or
tsconfig.system_mode == si_const.SYSTEM_MODE_DUPLEX_DIRECT):
hostname = utils.get_mate_controller_hostname()
disk_sids = sysinv_api.get_disk_serial_ids(hostname)
for d_dnode, d_sid in disk_sids.iteritems():
find_and_replace(
[os.path.join(tmpdir, 'postgres/sysinv.sql.data')],
d_sid, "CLONEISODISKSID_{}{}".format(hostname, d_dnode))
def update_sysuuid_in_archive(tmpdir):
""" Update system uuid in system archive file. """
sysuuid = sysinv_api.get_system_uuid()
find_and_replace(
[os.path.join(tmpdir, 'postgres/sysinv.sql.data')],
sysuuid, "CLONEISO_SYSTEM_UUID")
def update_backup_archive(backup_name, archive_dir):
""" Update backup archive file to be included in clone-iso """
path_to_archive = os.path.join(archive_dir, backup_name)
tmpdir = tempfile.mkdtemp(dir=archive_dir)
try:
subprocess.check_call(
['gunzip', path_to_archive + '.tgz'],
stdout=DEVNULL, stderr=DEVNULL)
# 70-persistent-net.rules with the correct MACs will be
# generated on the linux boot on the cloned side. Remove
# the stale file from original side.
remove_from_archive(path_to_archive + '.tar',
'etc/udev/rules.d/70-persistent-net.rules')
# Extract only a subset of directories which have files to be
# updated for oam-ip and MAC addresses. After updating the files
# these directories are added back to the archive.
subprocess.check_call(
['tar', '-x',
'--directory=' + tmpdir,
'-f', path_to_archive + '.tar',
'etc', 'postgres', 'config',
'hieradata'],
stdout=DEVNULL, stderr=DEVNULL)
update_oamip_in_archive(tmpdir)
update_mac_in_archive(tmpdir)
update_disk_serial_id_in_archive(tmpdir)
update_sysuuid_in_archive(tmpdir)
subprocess.check_call(
['tar', '--update',
'--directory=' + tmpdir,
'-f', path_to_archive + '.tar',
'etc', 'postgres', 'config',
'hieradata'],
stdout=DEVNULL, stderr=DEVNULL)
subprocess.check_call(['gzip', path_to_archive + '.tar'])
shutil.move(path_to_archive + '.tar.gz', path_to_archive + '.tgz')
except Exception as e:
LOG.error("Update of backup archive {} failed {}".format(
path_to_archive, str(e)))
raise CloneFail("Failed to update backup archive")
finally:
if not DEBUG:
shutil.rmtree(tmpdir, ignore_errors=True)
def validate_controller_state():
""" Cloning allowed now? """
# Check if this Controller is enabled and provisioned
try:
if not sysinv_api.controller_enabled_provisioned(
utils.get_controller_hostname()):
raise CloneFail("Controller is not enabled/provisioned")
if (tsconfig.system_mode == si_const.SYSTEM_MODE_DUPLEX or
tsconfig.system_mode == si_const.SYSTEM_MODE_DUPLEX_DIRECT):
if not sysinv_api.controller_enabled_provisioned(
utils.get_mate_controller_hostname()):
raise CloneFail("Mate controller is not enabled/provisioned")
except CloneFail:
raise
except Exception:
raise CloneFail("Controller is not enabled/provisioned")
if utils.get_system_type() != si_const.TIS_AIO_BUILD:
raise CloneFail("Cloning supported only on All-in-one systems")
if len(sysinv_api.get_alarms()) > 0:
raise CloneFail("There are active alarms on this system!")
def clone(backup_name, archive_dir):
""" Do Cloning """
validate_controller_state()
LOG.info("Cloning [{}] at [{}]".format(backup_name, archive_dir))
check_size(archive_dir)
isolinux_dir = os.path.join(archive_dir, 'isolinux')
clone_archive_dir = os.path.join(isolinux_dir, CLONE_ARCHIVE_DIR)
if os.path.exists(isolinux_dir):
LOG.info("deleting old iso_dir %s" % isolinux_dir)
shutil.rmtree(isolinux_dir, ignore_errors=True)
os.makedirs(clone_archive_dir, 0644)
try:
backup_restore.backup(backup_name, clone_archive_dir, clone=True)
LOG.info("system backup done")
update_backup_archive(backup_name + '_system', clone_archive_dir)
create_iso(backup_name, archive_dir)
except BackupFail as e:
raise CloneFail(e.message)
except CloneFail as e:
raise
finally:
if not DEBUG:
shutil.rmtree(isolinux_dir, ignore_errors=True)


@@ -0,0 +1,5 @@
#
# Copyright (c) 2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


@@ -0,0 +1,93 @@
#
# Copyright (c) 2016-2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from sysinv.common import constants as sysinv_constants
from tsconfig import tsconfig
CONFIG_WORKDIR = '/tmp/config'
CGCS_CONFIG_FILE = CONFIG_WORKDIR + '/cgcs_config'
CONFIG_PERMDIR = tsconfig.CONFIG_PATH
HIERADATA_WORKDIR = '/tmp/hieradata'
HIERADATA_PERMDIR = tsconfig.PUPPET_PATH + 'hieradata'
KEYRING_WORKDIR = '/tmp/python_keyring'
KEYRING_PERMDIR = tsconfig.KEYRING_PATH
INITIAL_CONFIG_COMPLETE_FILE = '/etc/platform/.initial_config_complete'
CONFIG_FAIL_FILE = '/var/run/.config_fail'
COMMON_CERT_FILE = "/etc/ssl/private/server-cert.pem"
FIREWALL_RULES_FILE = '/etc/platform/iptables.rules'
OPENSTACK_PASSWORD_RULES_FILE = '/etc/keystone/password-rules.conf'
INSTALLATION_FAILED_FILE = '/etc/platform/installation_failed'
BACKUPS_PATH = '/opt/backups'
INTERFACES_LOG_FILE = "/tmp/configure_interfaces.log"
TC_SETUP_SCRIPT = '/usr/local/bin/cgcs_tc_setup.sh'
LINK_MTU_DEFAULT = "1500"
CINDER_LVM_THIN = "thin"
CINDER_LVM_THICK = "thick"
DEFAULT_IMAGE_STOR_SIZE = \
sysinv_constants.DEFAULT_IMAGE_STOR_SIZE
DEFAULT_DATABASE_STOR_SIZE = \
sysinv_constants.DEFAULT_DATABASE_STOR_SIZE
DEFAULT_IMG_CONVERSION_STOR_SIZE = \
sysinv_constants.DEFAULT_IMG_CONVERSION_STOR_SIZE
DEFAULT_SMALL_IMAGE_STOR_SIZE = \
sysinv_constants.DEFAULT_SMALL_IMAGE_STOR_SIZE
DEFAULT_SMALL_DATABASE_STOR_SIZE = \
sysinv_constants.DEFAULT_SMALL_DATABASE_STOR_SIZE
DEFAULT_SMALL_IMG_CONVERSION_STOR_SIZE = \
sysinv_constants.DEFAULT_SMALL_IMG_CONVERSION_STOR_SIZE
DEFAULT_SMALL_BACKUP_STOR_SIZE = \
sysinv_constants.DEFAULT_SMALL_BACKUP_STOR_SIZE
DEFAULT_VIRTUAL_IMAGE_STOR_SIZE = \
sysinv_constants.DEFAULT_VIRTUAL_IMAGE_STOR_SIZE
DEFAULT_VIRTUAL_DATABASE_STOR_SIZE = \
sysinv_constants.DEFAULT_VIRTUAL_DATABASE_STOR_SIZE
DEFAULT_VIRTUAL_IMG_CONVERSION_STOR_SIZE = \
sysinv_constants.DEFAULT_VIRTUAL_IMG_CONVERSION_STOR_SIZE
DEFAULT_VIRTUAL_BACKUP_STOR_SIZE = \
sysinv_constants.DEFAULT_VIRTUAL_BACKUP_STOR_SIZE
DEFAULT_EXTENSION_STOR_SIZE = \
sysinv_constants.DEFAULT_EXTENSION_STOR_SIZE
VALID_LINK_SPEED_MGMT = [sysinv_constants.LINK_SPEED_1G,
sysinv_constants.LINK_SPEED_10G,
sysinv_constants.LINK_SPEED_25G]
VALID_LINK_SPEED_INFRA = [sysinv_constants.LINK_SPEED_1G,
sysinv_constants.LINK_SPEED_10G,
sysinv_constants.LINK_SPEED_25G]
SYSTEM_CONFIG_TIMEOUT = 300
SERVICE_ENABLE_TIMEOUT = 180
MINIMUM_ROOT_DISK_SIZE = 500
MAXIMUM_CGCS_LV_SIZE = 500
LDAP_CONTROLLER_CONFIGURE_TIMEOUT = 30
WRSROOT_MAX_PASSWORD_AGE = 45 # 45 days
LAG_MODE_ACTIVE_BACKUP = "active-backup"
LAG_MODE_BALANCE_XOR = "balance-xor"
LAG_MODE_8023AD = "802.3ad"
LAG_TXHASH_LAYER2 = "layer2"
LAG_MIIMON_FREQUENCY = 100
LOOPBACK_IFNAME = 'lo'
DEFAULT_MULTICAST_SUBNET_IPV4 = '239.1.1.0/28'
DEFAULT_MULTICAST_SUBNET_IPV6 = 'ff08::1:1:0/124'
DEFAULT_MGMT_ON_LOOPBACK_SUBNET_IPV4 = '127.168.204.0/24'
DEFAULT_REGION_NAME = "RegionOne"
DEFAULT_SERVICE_PROJECT_NAME = "services"


@@ -0,0 +1,44 @@
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
DC Manager Interactions
"""
import log
from Crypto.Hash import MD5
from configutilities.common import crypt
import json
LOG = log.get_logger(__name__)
class UserList(object):
"""
User List
"""
def __init__(self, user_data, hash_string):
# Decrypt the data using input hash_string to generate
# the key
h = MD5.new()
h.update(hash_string)
encryption_key = h.hexdigest()
user_data_decrypted = crypt.urlsafe_decrypt(encryption_key,
user_data)
self._data = json.loads(user_data_decrypted)
def get_password(self, name):
"""
Search the users for the password
"""
for user in self._data:
if user['name'] == name:
return user['password']
return None


@@ -0,0 +1,51 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
Configuration Errors
"""
from configutilities import ConfigError
class BackupFail(ConfigError):
"""Backup error."""
pass
class UpgradeFail(ConfigError):
"""Upgrade error."""
pass
class BackupWarn(ConfigError):
"""Backup warning."""
pass
class RestoreFail(ConfigError):
"""Backup error."""
pass
class KeystoneFail(ConfigError):
"""Keystone error."""
pass
class SysInvFail(ConfigError):
"""System Inventory error."""
pass
class UserQuit(ConfigError):
"""User initiated quit operation."""
pass
class CloneFail(ConfigError):
"""Clone error."""
pass


@@ -0,0 +1,246 @@
#
# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
OpenStack Keystone Interactions
"""
import datetime
import iso8601
from exceptions import KeystoneFail
import log
LOG = log.get_logger(__name__)
class Token(object):
def __init__(self, token_data, token_id):
self._expired = False
self._data = token_data
self._token_id = token_id
def set_expired(self):
""" Indicate token is expired """
self._expired = True
def is_expired(self, within_seconds=300):
""" Check if token is expired """
if not self._expired:
end = iso8601.parse_date(self._data['token']['expires_at'])
now = iso8601.parse_date(datetime.datetime.utcnow().isoformat())
            delta = abs(end - now).total_seconds()
return delta <= within_seconds
return True
def get_id(self):
""" Get the identifier of the token """
return self._token_id
def get_service_admin_url(self, service_type, service_name, region_name):
""" Search the catalog of a service for the administrative url """
return self.get_service_url(region_name, service_name,
service_type, 'admin')
def get_service_url(self, region_name, service_name, service_type,
endpoint_type):
"""
Search the catalog of a service in a region for the url
"""
for catalog in self._data['token']['catalog']:
if catalog['type'] == service_type:
if catalog['name'] == service_name:
if 0 != len(catalog['endpoints']):
for endpoint in catalog['endpoints']:
if (endpoint['region'] == region_name and
endpoint['interface'] == endpoint_type):
return endpoint['url']
raise KeystoneFail((
"Keystone service type %s, name %s, region %s, endpoint type %s "
"not available" %
(service_type, service_name, region_name, endpoint_type)))
class Service(object):
"""
Keystone Service
"""
def __init__(self, service_data):
self._data = service_data
def get_id(self):
if 'id' in self._data['service']:
return self._data['service']['id']
return None
class ServiceList(object):
"""
Keystone Service List
"""
def __init__(self, service_data):
self._data = service_data
def get_service_id(self, name, type):
"""
Search the services for the id
"""
for service in self._data['services']:
if service['name'] == name:
if service['type'] == type:
return service['id']
raise KeystoneFail((
"Keystone service type %s, name %s not available" %
(type, name)))
class Project(object):
"""
Keystone Project
"""
def __init__(self, project_data):
self._data = project_data
def get_id(self):
if 'id' in self._data['project']:
return self._data['project']['id']
return None
class ProjectList(object):
"""
Keystone Project List
"""
def __init__(self, project_data):
self._data = project_data
def get_project_id(self, name):
"""
Search the projects for the id
"""
for project in self._data['projects']:
if project['name'] == name:
return project['id']
return None
class Endpoint(object):
"""
Keystone Endpoint
"""
def __init__(self, endpoint_data):
self._data = endpoint_data
def get_id(self):
if 'id' in self._data['endpoint']:
return self._data['endpoint']['id']
return None
class EndpointList(object):
"""
Keystone Endpoint List
"""
def __init__(self, endpoint_data):
self._data = endpoint_data
def get_service_url(self, region_name, service_id, endpoint_type):
"""
Search the endpoints for the url
"""
for endpoint in self._data['endpoints']:
if endpoint['service_id'] == service_id:
if (endpoint['region'] == region_name and
endpoint['interface'] == endpoint_type):
return endpoint['url']
raise KeystoneFail((
"Keystone service id %s, region %s, endpoint type %s not "
"available" % (service_id, region_name, endpoint_type)))
class User(object):
"""
Keystone User
"""
def __init__(self, user_data):
self._data = user_data
def get_user_id(self):
return self._data['user']['id']
class UserList(object):
"""
Keystone User List
"""
def __init__(self, user_data):
self._data = user_data
def get_user_id(self, name):
"""
Search the users for the id
"""
for user in self._data['users']:
if user['name'] == name:
return user['id']
return None
class Role(object):
"""
Keystone Role
"""
def __init__(self, role_data):
self._data = role_data
class RoleList(object):
"""
Keystone Role List
"""
def __init__(self, role_data):
self._data = role_data
def get_role_id(self, name):
"""
Search the roles for the id
"""
for role in self._data['roles']:
if role['name'] == name:
return role['id']
return None
class Domain(object):
"""
Keystone Domain
"""
def __init__(self, user_data):
self._data = user_data
def get_domain_id(self):
return self._data['domain']['id']
class DomainList(object):
"""
Keystone Domain List
"""
def __init__(self, user_data):
self._data = user_data
def get_domain_id(self, name):
"""
Search the domains for the id
"""
for domain in self._data['domains']:
if domain['name'] == name:
return domain['id']
return None


@@ -0,0 +1,49 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
Logging
"""
import logging
import logging.handlers
_loggers = {}
def get_logger(name):
""" Get a logger or create one """
if name not in _loggers:
_loggers[name] = logging.getLogger(name)
return _loggers[name]
def setup_logger(logger):
""" Setup a logger """
# Send logs to /var/log/platform.log
syslog_facility = logging.handlers.SysLogHandler.LOG_LOCAL1
formatter = logging.Formatter("configassistant[%(process)d] " +
"%(pathname)s:%(lineno)s " +
"%(levelname)8s [%(name)s] %(message)s")
handler = logging.handlers.SysLogHandler(address='/dev/log',
facility=syslog_facility)
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def configure():
""" Setup logging """
for logger in _loggers:
setup_logger(_loggers[logger])


@@ -0,0 +1,336 @@
"""
Copyright (c) 2015-2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import httplib
import json
import urllib2
from exceptions import KeystoneFail
import dcmanager
import keystone
import log
LOG = log.get_logger(__name__)
def rest_api_request(token, method, api_cmd, api_cmd_headers=None,
api_cmd_payload=None):
"""
Make a rest-api request
"""
try:
request_info = urllib2.Request(api_cmd)
request_info.get_method = lambda: method
request_info.add_header("X-Auth-Token", token.get_id())
request_info.add_header("Accept", "application/json")
if api_cmd_headers is not None:
for header_type, header_value in api_cmd_headers.items():
request_info.add_header(header_type, header_value)
if api_cmd_payload is not None:
request_info.add_header("Content-type", "application/json")
request_info.add_data(api_cmd_payload)
request = urllib2.urlopen(request_info)
response = request.read()
if response == "":
response = json.loads("{}")
else:
response = json.loads(response)
request.close()
return response
except urllib2.HTTPError as e:
if httplib.UNAUTHORIZED == e.code:
token.set_expired()
LOG.exception(e)
raise KeystoneFail(
"REST API HTTP Error for url: %s. Error: %s" %
(api_cmd, e))
except (urllib2.URLError, httplib.BadStatusLine) as e:
LOG.exception(e)
raise KeystoneFail(
"REST API URL Error for url: %s. Error: %s" %
(api_cmd, e))
def get_token(auth_url, auth_project, auth_user, auth_password,
user_domain, project_domain):
"""
Ask OpenStack Keystone for a token
"""
try:
url = auth_url + "/auth/tokens"
request_info = urllib2.Request(url)
request_info.add_header("Content-Type", "application/json")
request_info.add_header("Accept", "application/json")
payload = json.dumps(
{"auth": {
"identity": {
"methods": [
"password"
],
"password": {
"user": {
"name": auth_user,
"password": auth_password,
"domain": {"name": user_domain}
}
}
},
"scope": {
"project": {
"name": auth_project,
"domain": {"name": project_domain}
}}}})
request_info.add_data(payload)
request = urllib2.urlopen(request_info)
# Identity API v3 returns token id in X-Subject-Token
# response header.
token_id = request.info().getheader('X-Subject-Token')
response = json.loads(request.read())
request.close()
return keystone.Token(response, token_id)
except urllib2.HTTPError as e:
LOG.error("%s, %s" % (e.code, e.read()))
return None
except (urllib2.URLError, httplib.BadStatusLine) as e:
LOG.error(e)
return None
def get_services(token, api_url):
"""
Ask OpenStack Keystone for a list of services
"""
api_cmd = api_url + "/services"
response = rest_api_request(token, "GET", api_cmd)
return keystone.ServiceList(response)
def create_service(token, api_url, name, type, description):
"""
Ask OpenStack Keystone to create a service
"""
api_cmd = api_url + "/services"
req = json.dumps({"service": {
"name": name,
"type": type,
"description": description}})
response = rest_api_request(token, "POST", api_cmd, api_cmd_payload=req)
return keystone.Service(response)
def delete_service(token, api_url, id):
"""
Ask OpenStack Keystone to delete a service
"""
api_cmd = api_url + "/services/" + id
response = rest_api_request(token, "DELETE", api_cmd)
return keystone.Service(response)
def get_endpoints(token, api_url):
"""
Ask OpenStack Keystone for a list of endpoints
"""
api_cmd = api_url + "/endpoints"
response = rest_api_request(token, "GET", api_cmd)
return keystone.EndpointList(response)
def create_endpoint(token, api_url, service_id, region_name, type, url):
"""
Ask OpenStack Keystone to create an endpoint
"""
api_cmd = api_url + "/endpoints"
req = json.dumps({"endpoint": {
"region": region_name,
"service_id": service_id,
"interface": type,
"url": url}})
response = rest_api_request(token, "POST", api_cmd, api_cmd_payload=req)
return keystone.Endpoint(response)
def delete_endpoint(token, api_url, id):
"""
Ask OpenStack Keystone to delete an endpoint
"""
api_cmd = api_url + "/endpoints/" + id
response = rest_api_request(token, "DELETE", api_cmd)
return keystone.Endpoint(response)
def get_users(token, api_url):
"""
Ask OpenStack Keystone for a list of users
"""
api_cmd = api_url + "/users"
response = rest_api_request(token, "GET", api_cmd)
return keystone.UserList(response)
def create_user(token, api_url, name, password, email, project_id, domain_id):
"""
Ask OpenStack Keystone to create a user
"""
api_cmd = api_url + "/users"
req = json.dumps({"user": {
"password": password,
"default_project_id": project_id,
"domain_id": domain_id,
"name": name,
"email": email
}})
response = rest_api_request(token, "POST", api_cmd, api_cmd_payload=req)
return keystone.User(response)
def create_domain_user(token, api_url, name, password, email, domain_id):
"""
Ask OpenStack Keystone to create a domain user
"""
api_cmd = api_url + "/users"
req = json.dumps({"user": {
"password": password,
"domain_id": domain_id,
"name": name,
"email": email
}})
response = rest_api_request(token, "POST", api_cmd, api_cmd_payload=req)
return keystone.User(response)
def delete_user(token, api_url, id):
"""
    Ask OpenStack Keystone to delete a user
"""
api_cmd = api_url + "/users/" + id
response = rest_api_request(token, "DELETE", api_cmd)
return keystone.User(response)
def add_role(token, api_url, project_id, user_id, role_id):
"""
Ask OpenStack Keystone to add a role
"""
api_cmd = "%s/projects/%s/users/%s/roles/%s" % (
api_url, project_id, user_id, role_id)
response = rest_api_request(token, "PUT", api_cmd)
return keystone.Role(response)
def add_role_on_domain(token, api_url, domain_id, user_id, role_id):
"""
Ask OpenStack Keystone to assign role to user on domain
"""
api_cmd = "%s/domains/%s/users/%s/roles/%s" % (
api_url, domain_id, user_id, role_id)
response = rest_api_request(token, "PUT", api_cmd)
return keystone.Role(response)
def get_roles(token, api_url):
"""
Ask OpenStack Keystone for a list of roles
"""
api_cmd = api_url + "/roles"
response = rest_api_request(token, "GET", api_cmd)
return keystone.RoleList(response)
def get_domains(token, api_url):
"""
Ask OpenStack Keystone for a list of domains
"""
# Domains are only available from the keystone V3 API
api_cmd = api_url + "/domains"
response = rest_api_request(token, "GET", api_cmd)
return keystone.DomainList(response)
def create_domain(token, api_url, name, description):
api_cmd = api_url + "/domains"
req = json.dumps({"domain": {
"enabled": True,
"name": name,
"description": description}})
response = rest_api_request(token, "POST", api_cmd, api_cmd_payload=req)
return keystone.Domain(response)
def disable_domain(token, api_url, id):
api_cmd = api_url + "/domains/" + id
req = json.dumps({"domain": {
"enabled": False}})
response = rest_api_request(token, "PATCH", api_cmd, api_cmd_payload=req)
return keystone.Domain(response)
def delete_domain(token, api_url, id):
"""
Ask OpenStack Keystone to delete a domain
"""
api_cmd = api_url + "/domains/" + id
response = rest_api_request(token, "DELETE", api_cmd)
return keystone.Domain(response)
def get_projects(token, api_url):
"""
Ask OpenStack Keystone for a list of projects
"""
api_cmd = api_url + "/projects"
response = rest_api_request(token, "GET", api_cmd)
return keystone.ProjectList(response)
def create_project(token, api_url, name, description, domain_id):
"""
Ask OpenStack Keystone to create a project
"""
api_cmd = api_url + "/projects"
req = json.dumps({"project": {
"enabled": True,
"name": name,
"domain_id": domain_id,
"is_domain": False,
"description": description}})
response = rest_api_request(token, "POST", api_cmd, api_cmd_payload=req)
return keystone.Project(response)
def delete_project(token, api_url, id):
"""
Ask OpenStack Keystone to delete a project
"""
api_cmd = api_url + "/projects/" + id
response = rest_api_request(token, "DELETE", api_cmd)
return keystone.Project(response)
def get_subcloud_config(token, api_url, subcloud_name,
hash_string):
"""
Ask DC Manager for our subcloud configuration
"""
api_cmd = api_url + "/subclouds/" + subcloud_name + "/config"
response = rest_api_request(token, "GET", api_cmd)
config = dict()
config['users'] = dcmanager.UserList(response['users'], hash_string)
return config


@@ -0,0 +1,159 @@
"""
Copyright (c) 2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import json
import netaddr
import os
import subprocess
import sys
import time
import configutilities.common.exceptions as cexeptions
import configutilities.common.utils as cutils
def is_valid_management_address(ip_address, management_subnet):
"""Determine whether a management address is valid."""
if ip_address == management_subnet.network:
print "Cannot use network address"
return False
elif ip_address == management_subnet.broadcast:
print "Cannot use broadcast address"
return False
elif ip_address.is_multicast():
print "Invalid address - multicast address not allowed"
return False
elif ip_address.is_loopback():
print "Invalid address - loopback address not allowed"
return False
elif ip_address not in management_subnet:
print "Address must be in the management subnet"
return False
else:
return True
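The validation above uses `netaddr` and Python 2 print statements; the same checks can be sketched with the stdlib `ipaddress` module (Python 3; the function name and return-only style are illustrative, the printed messages are dropped):

```python
import ipaddress

def check_management_address(ip_str, subnet_str):
    """Rough stdlib equivalent of is_valid_management_address()."""
    subnet = ipaddress.ip_network(subnet_str)
    ip = ipaddress.ip_address(ip_str)
    if ip == subnet.network_address:
        return False          # cannot use the network address
    if ip == subnet.broadcast_address:
        return False          # cannot use the broadcast address
    if ip.is_multicast or ip.is_loopback:
        return False          # multicast/loopback not allowed
    return ip in subnet       # must fall inside the management subnet

print(check_management_address("192.168.1.1", "192.168.1.0/24"))
print(check_management_address("192.168.1.0", "192.168.1.0/24"))
```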
def configure_management():
interface_list = list()
lldp_interface_list = list()
print "Enabling interfaces... ",
ip_link_output = subprocess.check_output(['ip', '-o', 'link'])
for line in ip_link_output.splitlines():
interface = line.split()[1].rstrip(':')
if interface != 'lo':
interface_list.append(interface)
subprocess.call(['ip', 'link', 'set', interface, 'up'])
print 'DONE'
wait_seconds = 120
delay_seconds = 5
print "Waiting %d seconds for LLDP neighbor discovery" % wait_seconds,
while wait_seconds > 0:
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(delay_seconds)
wait_seconds -= delay_seconds
print ' DONE'
print "Retrieving neighbor details... ",
lldpcli_show_output = subprocess.check_output(
['sudo', 'lldpcli', 'show', 'neighbors', 'summary', '-f', 'json'])
lldp_interfaces = json.loads(lldpcli_show_output)['lldp'][0]['interface']
print "DONE"
print "\nAvailable interfaces:"
print "%-20s %s" % ("local interface", "remote port")
print "%-20s %s" % ("---------------", "-----------")
for interface in lldp_interfaces:
print "%-20s %s" % (interface['name'],
interface['port'][0]['id'][0]['value'])
lldp_interface_list.append(interface['name'])
for interface in interface_list:
if interface not in lldp_interface_list:
print "%-20s %s" % (interface, 'unknown')
print
while True:
user_input = raw_input("Enter management interface name: ")
if user_input in interface_list:
management_interface = user_input
break
else:
print "Invalid interface name"
continue
while True:
user_input = raw_input("Enter management address CIDR: ")
try:
management_cidr = netaddr.IPNetwork(user_input)
management_ip = management_cidr.ip
management_network = netaddr.IPNetwork(
"%s/%s" % (str(management_cidr.network),
str(management_cidr.prefixlen)))
if not is_valid_management_address(management_ip,
management_network):
continue
break
except (netaddr.AddrFormatError, ValueError):
print ("Invalid CIDR - "
"please enter a valid management address CIDR")
while True:
user_input = raw_input("Enter management gateway address [" +
str(management_network[1]) + "]: ")
if user_input == "":
user_input = management_network[1]
try:
ip_input = netaddr.IPAddress(user_input)
if not is_valid_management_address(ip_input,
management_network):
continue
management_gateway_address = ip_input
break
except (netaddr.AddrFormatError, ValueError):
print ("Invalid address - "
"please enter a valid management gateway address")
min_addresses = 8
while True:
user_input = raw_input("Enter System Controller subnet: ")
try:
system_controller_subnet = cutils.validate_network_str(
user_input, min_addresses)
break
except cexeptions.ValidateFail as e:
print "{}".format(e)
print "Disabling non-management interfaces... ",
for interface in interface_list:
if interface != management_interface:
subprocess.call(['ip', 'link', 'set', interface, 'down'])
print 'DONE'
print "Configuring management interface... ",
subprocess.call(['ip', 'addr', 'add', str(management_cidr), 'dev',
management_interface])
print "DONE"
print "Adding route to System Controller... ",
subprocess.call(['ip', 'route', 'add', str(system_controller_subnet),
'dev', management_interface, 'via',
str(management_gateway_address)])
print "DONE"
def main():
if not os.geteuid() == 0:
print "%s must be run with root privileges" % sys.argv[0]
exit(1)
try:
configure_management()
except KeyboardInterrupt:
print "\nAborted"
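The `lldpcli show neighbors summary -f json` output that `configure_management` walks is deeply nested. A standalone Python 3 sketch of extracting the interface/port pairs from a trimmed sample (field layout inferred from the parsing code above; interface and port names are made up):

```python
import json

# Trimmed example of the lldpcli JSON structure the script indexes as
# data['lldp'][0]['interface'][i]['port'][0]['id'][0]['value'].
sample = json.loads("""
{"lldp": [{"interface": [
    {"name": "ens3", "port": [{"id": [{"value": "Ethernet1/1"}]}]},
    {"name": "ens4", "port": [{"id": [{"value": "Ethernet1/2"}]}]}
]}]}
""")

pairs = [(i["name"], i["port"][0]["id"][0]["value"])
         for i in sample["lldp"][0]["interface"]]
print(pairs)
```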

File diff suppressed because it is too large


@@ -0,0 +1,284 @@
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
OpenStack
"""
import os
import time
import subprocess
from common import log
from common.exceptions import SysInvFail
from common.rest_api_utils import get_token
import sysinv_api as sysinv
LOG = log.get_logger(__name__)
KEYSTONE_AUTH_SERVER_RETRY_CNT = 60
KEYSTONE_AUTH_SERVER_WAIT = 1 # 1sec wait per retry
class OpenStack(object):
def __init__(self):
self.admin_token = None
self.conf = {}
self._sysinv = None
with open(os.devnull, "w") as fnull:
proc = subprocess.Popen(
['bash', '-c',
'source /etc/nova/openrc && env'],
stdout=subprocess.PIPE, stderr=fnull)
for line in proc.stdout:
key, _, value = line.partition("=")
if key == 'OS_USERNAME':
self.conf['admin_user'] = value.strip()
elif key == 'OS_PASSWORD':
self.conf['admin_pwd'] = value.strip()
elif key == 'OS_PROJECT_NAME':
self.conf['admin_tenant'] = value.strip()
elif key == 'OS_AUTH_URL':
self.conf['auth_url'] = value.strip()
elif key == 'OS_REGION_NAME':
self.conf['region_name'] = value.strip()
elif key == 'OS_USER_DOMAIN_NAME':
self.conf['user_domain'] = value.strip()
elif key == 'OS_PROJECT_DOMAIN_NAME':
self.conf['project_domain'] = value.strip()
proc.communicate()
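`__init__` scrapes credentials by sourcing `/etc/nova/openrc` in a subshell and parsing the resulting `env` output line by line. It uses `str.partition`, which never raises even when `=` is absent (the key simply won't match). A Python 3 sketch of the same parsing over canned lines:

```python
# Simulated `env` output; the values are placeholders.
lines = [
    "OS_USERNAME=admin",
    "OS_AUTH_URL=http://keystone:5000/v3",
    "NOT_AN_ASSIGNMENT",          # partition() tolerates missing '='
]
conf = {}
for line in lines:
    key, _, value = line.partition("=")
    if key == "OS_USERNAME":
        conf["admin_user"] = value.strip()
    elif key == "OS_AUTH_URL":
        conf["auth_url"] = value.strip()
print(conf)
```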
def __enter__(self):
if not self._connect():
raise Exception('Failed to connect')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self._disconnect()
def __del__(self):
self._disconnect()
def _connect(self):
""" Connect to an OpenStack instance """
if self.admin_token is not None:
self._disconnect()
# Try to obtain an admin token from keystone
for _ in xrange(KEYSTONE_AUTH_SERVER_RETRY_CNT):
self.admin_token = get_token(self.conf['auth_url'],
self.conf['admin_tenant'],
self.conf['admin_user'],
self.conf['admin_pwd'],
self.conf['user_domain'],
self.conf['project_domain'])
if self.admin_token:
break
time.sleep(KEYSTONE_AUTH_SERVER_WAIT)
return self.admin_token is not None
def _disconnect(self):
""" Disconnect from an OpenStack instance """
self.admin_token = None
def lock_hosts(self, exempt_hostnames=None, progress_callback=None,
timeout=60):
""" Lock hosts of an OpenStack instance except for host names
in the exempt list
"""
failed_hostnames = []
if exempt_hostnames is None:
exempt_hostnames = []
hosts = sysinv.get_hosts(self.admin_token, self.conf['region_name'])
if not hosts:
if progress_callback is not None:
progress_callback(0, 0, None, None)
return
wait = False
host_i = 0
for host in hosts:
if host.name in exempt_hostnames:
continue
if host.is_unlocked():
if not host.force_lock(self.admin_token,
self.conf['region_name']):
failed_hostnames.append(host.name)
LOG.warning("Could not lock %s" % host.name)
else:
wait = True
else:
host_i += 1
if progress_callback is not None:
progress_callback(len(hosts), host_i,
('locking %s' % host.name),
'DONE')
if wait and timeout > 5:
time.sleep(5)
timeout -= 5
for _ in range(0, timeout):
wait = False
for host in hosts:
if host.name in exempt_hostnames:
continue
if (host.name not in failed_hostnames) and host.is_unlocked():
host.refresh_data(self.admin_token,
self.conf['region_name'])
if host.is_locked():
LOG.info("Locked %s" % host.name)
host_i += 1
if progress_callback is not None:
progress_callback(len(hosts), host_i,
('locking %s' % host.name),
'DONE')
else:
LOG.info("Waiting for lock of %s" % host.name)
wait = True
if not wait:
break
time.sleep(1)
else:
failed_hostnames.append(host.name)
LOG.warning("Wait failed for lock of %s" % host.name)
return failed_hostnames
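The wait phase of `lock_hosts` relies on Python's `for`/`else`: the `else` branch runs only when the loop exhausts without a `break`, which is how the timeout case is detected. A minimal sketch of that polling pattern (sleeps omitted; names are illustrative):

```python
import itertools

def poll_until(check, attempts=5):
    # The else clause on a for loop runs only when no break occurred,
    # i.e. every attempt was used up without check() succeeding.
    for _ in range(attempts):
        if check():
            break
    else:
        return False  # timed out
    return True

counter = itertools.count()
print(poll_until(lambda: next(counter) >= 2))  # succeeds on the third attempt
print(poll_until(lambda: False, attempts=3))   # never succeeds
```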
def power_off_hosts(self, exempt_hostnames=None, progress_callback=None,
timeout=60):
""" Power-off hosts of an OpenStack instance except for host names
in the exempt list
"""
if exempt_hostnames is None:
exempt_hostnames = []
hosts = sysinv.get_hosts(self.admin_token, self.conf['region_name'])
hosts[:] = [host for host in hosts if host.support_power_off()]
if not hosts:
if progress_callback is not None:
progress_callback(0, 0, None, None)
return
wait = False
host_i = 0
for host in hosts:
if host.name in exempt_hostnames:
continue
if host.is_powered_on():
if not host.power_off(self.admin_token,
self.conf['region_name']):
raise SysInvFail("Could not power-off %s" % host.name)
wait = True
else:
host_i += 1
if progress_callback is not None:
progress_callback(len(hosts), host_i,
('powering off %s' % host.name),
'DONE')
if wait and timeout > 5:
time.sleep(5)
timeout -= 5
for _ in range(0, timeout):
wait = False
for host in hosts:
if host.name in exempt_hostnames:
continue
if host.is_powered_on():
host.refresh_data(self.admin_token,
self.conf['region_name'])
if host.is_powered_off():
LOG.info("Powered-Off %s" % host.name)
host_i += 1
if progress_callback is not None:
progress_callback(len(hosts), host_i,
('powering off %s' % host.name),
'DONE')
else:
LOG.info("Waiting for power-off of %s" % host.name)
wait = True
if not wait:
break
time.sleep(1)
else:
failed_hosts = [h.name for h in hosts if h.is_powered_on()]
msg = "Wait timeout for power-off of %s" % failed_hosts
LOG.info(msg)
raise SysInvFail(msg)
def wait_for_hosts_disabled(self, exempt_hostnames=None, timeout=300,
interval_step=10):
"""Wait for hosts to be identified as disabled.
Run check every interval_step seconds
"""
if exempt_hostnames is None:
exempt_hostnames = []
for _ in xrange(timeout / interval_step):
hosts = sysinv.get_hosts(self.admin_token,
self.conf['region_name'])
if not hosts:
time.sleep(interval_step)
continue
for host in hosts:
if host.name in exempt_hostnames:
continue
if host.is_enabled():
LOG.info("host %s is still enabled" % host.name)
break
else:
LOG.info("all hosts disabled.")
return True
time.sleep(interval_step)
return False
@property
def sysinv(self):
if self._sysinv is None:
            # tox cannot import cgts_client and all of its dependencies,
            # therefore the client is lazy-loaded since tox doesn't
            # actually require the cgtsclient module.
from cgtsclient import client as cgts_client
endpoint = self.admin_token.get_service_url(
self.conf['region_name'], "sysinv", "platform", 'admin')
self._sysinv = cgts_client.Client(
sysinv.API_VERSION,
endpoint=endpoint,
token=self.admin_token.get_id())
return self._sysinv


@@ -0,0 +1,31 @@
import sys
from common import log
LOG = log.get_logger(__name__)
class ProgressRunner(object):
steps = []
def add(self, action, message):
self.steps.append((action, message))
def run(self):
total = len(self.steps)
for i, step in enumerate(self.steps, start=1):
action, message = step
LOG.info("Start step: %s" % message)
sys.stdout.write(
"\n%.2u/%.2u: %s ... " % (i, total, message))
sys.stdout.flush()
try:
action()
sys.stdout.write('DONE')
sys.stdout.flush()
except Exception:
sys.stdout.flush()
raise
LOG.info("Finish step: %s" % message)
sys.stdout.write("\n")
sys.stdout.flush()
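Note that `ProgressRunner.steps` is declared at class level, so all instances share one list; a second runner would inherit the first runner's steps. A standalone sketch that keeps the same `add()`/`run()` interface but uses an instance attribute (and drops the `LOG` dependency):

```python
import sys

class MiniRunner:
    """Stripped-down ProgressRunner: same add()/run() shape, but steps
    is an instance attribute, so runners don't share state."""
    def __init__(self):
        self.steps = []

    def add(self, action, message):
        self.steps.append((action, message))

    def run(self):
        total = len(self.steps)
        for i, (action, message) in enumerate(self.steps, start=1):
            sys.stdout.write("\n%.2u/%.2u: %s ... " % (i, total, message))
            action()
            sys.stdout.write("DONE")
        sys.stdout.write("\n")

done = []
runner = MiniRunner()
runner.add(lambda: done.append("a"), "first step")
runner.add(lambda: done.append("b"), "second step")
runner.run()
print(done)
```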


@@ -0,0 +1,732 @@
"""
Copyright (c) 2015-2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import ConfigParser
import os
import sys
import textwrap
import time
import uuid
from common import constants
from common import log
from common import rest_api_utils as rutils
from common.exceptions import KeystoneFail
from configutilities.common import utils
from configutilities.common.configobjects import REGION_CONFIG, SUBCLOUD_CONFIG
from configutilities import ConfigFail
from configassistant import ConfigAssistant
from netaddr import IPAddress
from systemconfig import parse_system_config, configure_management_interface, \
create_cgcs_config_file
from configutilities import DEFAULT_DOMAIN_NAME
# Temporary file for building cgcs_config
TEMP_CGCS_CONFIG_FILE = "/tmp/cgcs_config"
# For region mode, this is the list of users that we expect to find configured
# in the region config file as <USER>_USER_KEY and <USER>_PASSWORD.
# For distributed cloud, this is the list of users that we expect to find
# configured in keystone. The password for each user will be retrieved from
# the DC Manager in the system controller and added to the region config file.
# The format is:
# REGION_NAME = key in region config file for this user's region
# USER_KEY = key in region config file for this user's name
# USER_NAME = user name in keystone
REGION_NAME = 0
USER_KEY = 1
USER_NAME = 2
EXPECTED_USERS = [
('REGION_2_SERVICES', 'NOVA', 'nova'),
('REGION_2_SERVICES', 'PLACEMENT', 'placement'),
('REGION_2_SERVICES', 'SYSINV', 'sysinv'),
('REGION_2_SERVICES', 'PATCHING', 'patching'),
('REGION_2_SERVICES', 'HEAT', 'heat'),
('REGION_2_SERVICES', 'CEILOMETER', 'ceilometer'),
('REGION_2_SERVICES', 'NFV', 'vim'),
('REGION_2_SERVICES', 'AODH', 'aodh'),
('REGION_2_SERVICES', 'MTCE', 'mtce'),
('REGION_2_SERVICES', 'PANKO', 'panko')]
EXPECTED_SHARED_SERVICES_NEUTRON_USER = ('SHARED_SERVICES', 'NEUTRON',
'neutron')
EXPECTED_REGION_2_NEUTRON_USER = ('REGION_2_SERVICES', 'NEUTRON', 'neutron')
EXPECTED_REGION_2_GLANCE_USER = ('REGION_2_SERVICES', 'GLANCE', 'glance')
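The per-user tuples are indexed with the `REGION_NAME`/`USER_KEY`/`USER_NAME` offsets defined above, for example to build the region config file key for a user's password:

```python
# Named offsets into the user tuples (mirrors the constants above).
REGION_NAME, USER_KEY, USER_NAME = 0, 1, 2

user = ('REGION_2_SERVICES', 'SYSINV', 'sysinv')

# The region config file key that holds this user's password:
password_key = user[USER_KEY] + '_PASSWORD'
print(user[REGION_NAME], password_key, user[USER_NAME])
```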
# This a description of the region 2 endpoints that we expect to configure or
# find configured in keystone. The format is as follows:
# SERVICE_NAME = key in region config file for this service's name
# SERVICE_TYPE = key in region config file for this service's type
# PUBLIC_URL = required publicurl - {} is replaced with CAM floating IP
# INTERNAL_URL = required internalurl - {} is replaced with CLM floating IP
# ADMIN_URL = required adminurl - {} is replaced with CLM floating IP
# DESCRIPTION = Description of the service (for automatic configuration)
SERVICE_NAME = 0
SERVICE_TYPE = 1
PUBLIC_URL = 2
INTERNAL_URL = 3
ADMIN_URL = 4
DESCRIPTION = 5
EXPECTED_REGION2_ENDPOINTS = [
('NOVA_SERVICE_NAME', 'NOVA_SERVICE_TYPE',
'http://{}:8774/v2.1/%(tenant_id)s',
'http://{}:8774/v2.1/%(tenant_id)s',
'http://{}:8774/v2.1/%(tenant_id)s',
'Openstack Compute Service'),
('PLACEMENT_SERVICE_NAME', 'PLACEMENT_SERVICE_TYPE',
'http://{}:8778',
'http://{}:8778',
'http://{}:8778',
'Openstack Placement Service'),
('SYSINV_SERVICE_NAME', 'SYSINV_SERVICE_TYPE',
'http://{}:6385/v1',
'http://{}:6385/v1',
'http://{}:6385/v1',
'SysInv Service'),
('PATCHING_SERVICE_NAME', 'PATCHING_SERVICE_TYPE',
'http://{}:15491',
'http://{}:5491',
'http://{}:5491',
'Patching Service'),
('HEAT_SERVICE_NAME', 'HEAT_SERVICE_TYPE',
'http://{}:8004/v1/%(tenant_id)s',
'http://{}:8004/v1/%(tenant_id)s',
'http://{}:8004/v1/%(tenant_id)s',
'Openstack Orchestration Service'),
('HEAT_CFN_SERVICE_NAME', 'HEAT_CFN_SERVICE_TYPE',
'http://{}:8000/v1/',
'http://{}:8000/v1/',
'http://{}:8000/v1/',
'Openstack Cloudformation Service'),
('CEILOMETER_SERVICE_NAME', 'CEILOMETER_SERVICE_TYPE',
'http://{}:8777',
'http://{}:8777',
'http://{}:8777',
'Openstack Metering Service'),
('NFV_SERVICE_NAME', 'NFV_SERVICE_TYPE',
'http://{}:4545',
'http://{}:4545',
'http://{}:4545',
'Virtual Infrastructure Manager'),
('AODH_SERVICE_NAME', 'AODH_SERVICE_TYPE',
'http://{}:8042',
'http://{}:8042',
'http://{}:8042',
'OpenStack Alarming Service'),
('PANKO_SERVICE_NAME', 'PANKO_SERVICE_TYPE',
'http://{}:8977',
'http://{}:8977',
'http://{}:8977',
'OpenStack Event Service'),
]
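Each endpoint template mixes two substitution styles on purpose: `{}` is filled in now with `str.format()`, while `%(tenant_id)s` is left intact for Keystone to expand later, since `format()` ignores percent-style placeholders:

```python
# One of the endpoint templates above; the address is a placeholder.
template = 'http://{}:8774/v2.1/%(tenant_id)s'
url = template.format('10.10.10.2')
# Only the {} was substituted; %(tenant_id)s survives for Keystone.
print(url)
```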
EXPECTED_NEUTRON_ENDPOINT = (
'NEUTRON_SERVICE_NAME', 'NEUTRON_SERVICE_TYPE',
'http://{}:9696',
'http://{}:9696',
'http://{}:9696',
'Neutron Networking Service')
EXPECTED_KEYSTONE_ENDPOINT = (
'KEYSTONE_SERVICE_NAME', 'KEYSTONE_SERVICE_TYPE',
'http://{}:8081/keystone/main/v2.0',
'http://{}:8081/keystone/main/v2.0',
'http://{}:8081/keystone/admin/v2.0',
'OpenStack Identity')
EXPECTED_GLANCE_ENDPOINT = (
'GLANCE_SERVICE_NAME', 'GLANCE_SERVICE_TYPE',
'http://{}:9292',
'http://{}:9292',
'http://{}:9292',
'OpenStack Image Service')
DEFAULT_HEAT_ADMIN_DOMAIN = 'heat'
DEFAULT_HEAT_ADMIN_USER_NAME = 'heat_admin'
LOG = log.get_logger(__name__)
def validate_region_one_keystone_config(region_config, token, api_url, users,
services, endpoints, create=False,
config_type=REGION_CONFIG,
user_config=None):
    """ Validate that the required region one configuration is in place.
        If create is True, any missing entries will be set up to be added
        to keystone later on by puppet.
    """
region_1_name = region_config.get('SHARED_SERVICES', 'REGION_NAME')
region_2_name = region_config.get('REGION_2_SERVICES', 'REGION_NAME')
# Determine what keystone entries are expected
expected_users = EXPECTED_USERS
expected_region_2_endpoints = EXPECTED_REGION2_ENDPOINTS
# Keystone is always in region 1
expected_region_1_endpoints = [EXPECTED_KEYSTONE_ENDPOINT]
# Region of neutron user and endpoint depends on vswitch type
if region_config.has_option('NETWORK', 'VSWITCH_TYPE'):
if region_config.get('NETWORK', 'VSWITCH_TYPE').upper() == 'NUAGE_VRS':
expected_users.append(EXPECTED_SHARED_SERVICES_NEUTRON_USER)
else:
expected_users.append(EXPECTED_REGION_2_NEUTRON_USER)
expected_region_2_endpoints.append(EXPECTED_NEUTRON_ENDPOINT)
# Determine region of glance user and endpoint
if not region_config.has_option('SHARED_SERVICES',
'GLANCE_SERVICE_NAME'):
expected_users.append(EXPECTED_REGION_2_GLANCE_USER)
expected_region_2_endpoints.append(EXPECTED_GLANCE_ENDPOINT)
elif region_config.has_option(
'SHARED_SERVICES', 'GLANCE_CACHED'):
if region_config.get('SHARED_SERVICES',
'GLANCE_CACHED').upper() == 'TRUE':
expected_users.append(EXPECTED_REGION_2_GLANCE_USER)
expected_region_2_endpoints.append(EXPECTED_GLANCE_ENDPOINT)
else:
expected_region_1_endpoints.append(EXPECTED_GLANCE_ENDPOINT)
domains = rutils.get_domains(token, api_url)
# Verify service project domain, creating if necessary
if region_config.has_option('REGION_2_SERVICES', 'PROJECT_DOMAIN_NAME'):
project_domain = region_config.get('REGION_2_SERVICES',
'PROJECT_DOMAIN_NAME')
else:
project_domain = DEFAULT_DOMAIN_NAME
project_domain_id = domains.get_domain_id(project_domain)
if not project_domain_id:
if create and config_type == REGION_CONFIG:
region_config.set('REGION_2_SERVICES', 'PROJECT_DOMAIN_NAME',
project_domain)
else:
raise ConfigFail(
"Keystone configuration error: service project domain '%s' is "
"not configured." % project_domain)
# Verify service project, creating if necessary
if region_config.has_option('SHARED_SERVICES',
'SERVICE_PROJECT_NAME'):
service_project = region_config.get('SHARED_SERVICES',
'SERVICE_PROJECT_NAME')
else:
service_project = region_config.get('SHARED_SERVICES',
'SERVICE_TENANT_NAME')
projects = rutils.get_projects(token, api_url)
project_id = projects.get_project_id(service_project)
if not project_id:
if create and config_type == REGION_CONFIG:
region_config.set('SHARED_SERVICES', 'SERVICE_TENANT_NAME',
service_project)
else:
raise ConfigFail(
"Keystone configuration error: service project '%s' is not "
"configured." % service_project)
# Verify and retrieve the id of the admin role (only needed when creating)
roles = rutils.get_roles(token, api_url)
role_id = roles.get_role_id('admin')
if not role_id and create:
raise ConfigFail("Keystone configuration error: No admin role present")
# verify that the heat admin domain is configured, creating if necessary
heat_admin_domain = region_config.get('REGION_2_SERVICES',
'HEAT_ADMIN_DOMAIN')
domains = rutils.get_domains(token, api_url)
heat_domain_id = domains.get_domain_id(heat_admin_domain)
if not heat_domain_id:
if create and config_type == REGION_CONFIG:
region_config.set('REGION_2_SERVICES', 'HEAT_ADMIN_DOMAIN',
heat_admin_domain)
else:
raise ConfigFail(
"Unable to obtain id for %s domain. Please ensure "
"keystone configuration is correct." % heat_admin_domain)
# Verify that the heat stack user is configured, creating if necessary
heat_stack_user = region_config.get('REGION_2_SERVICES',
'HEAT_ADMIN_USER_NAME')
if not users.get_user_id(heat_stack_user):
if create and config_type == REGION_CONFIG:
if not region_config.has_option('REGION_2_SERVICES',
'HEAT_ADMIN_PASSWORD'):
try:
region_config.set('REGION_2_SERVICES',
'HEAT_ADMIN_PASSWORD',
uuid.uuid4().hex[:10] + "TiC2*")
except Exception as e:
raise ConfigFail("Failed to generate random user "
"password: %s" % e)
else:
raise ConfigFail(
"Unable to obtain user (%s) from domain (%s). Please ensure "
"keystone configuration is correct." % (heat_stack_user,
heat_admin_domain))
elif config_type == SUBCLOUD_CONFIG:
# Add the password to the region config so it will be used when
# configuring services.
auth_password = user_config.get_password(heat_stack_user)
region_config.set('REGION_2_SERVICES', 'HEAT_ADMIN_PASSWORD',
auth_password)
# verify that the service user domain is configured, creating if necessary
if region_config.has_option('REGION_2_SERVICES', 'USER_DOMAIN_NAME'):
user_domain = region_config.get('REGION_2_SERVICES',
'USER_DOMAIN_NAME')
else:
user_domain = DEFAULT_DOMAIN_NAME
domains = rutils.get_domains(token, api_url)
user_domain_id = domains.get_domain_id(user_domain)
if not user_domain_id:
if create and config_type == REGION_CONFIG:
            region_config.set('REGION_2_SERVICES', 'USER_DOMAIN_NAME',
                              user_domain)
else:
raise ConfigFail(
                "Unable to obtain id for %s domain. Please ensure "
"keystone configuration is correct." % user_domain)
auth_url = region_config.get('SHARED_SERVICES', 'KEYSTONE_ADMINURL')
if config_type == REGION_CONFIG:
# Verify that all users are configured and can retrieve a token,
# Optionally set up to create missing users + their admin role
for user in expected_users:
auth_user = region_config.get(user[REGION_NAME],
user[USER_KEY] + '_USER_NAME')
user_id = users.get_user_id(auth_user)
auth_password = None
if not user_id and create:
if not region_config.has_option(
user[REGION_NAME], user[USER_KEY] + '_PASSWORD'):
# Generate random password for new user via
# /dev/urandom if necessary
try:
region_config.set(
user[REGION_NAME], user[USER_KEY] + '_PASSWORD',
uuid.uuid4().hex[:10] + "TiC2*")
except Exception as e:
raise ConfigFail("Failed to generate random user "
"password: %s" % e)
elif user_id and user_domain_id and\
project_id and project_domain_id:
# If there is a user_id existing then we cannot use
# a randomized password as it was either created by
# a previous run of regionconfig or was created as
# part of Titanium Cloud Primary region config
if not region_config.has_option(
user[REGION_NAME], user[USER_KEY] + '_PASSWORD'):
raise ConfigFail("Failed to find configured password "
"for pre-defined user %s" % auth_user)
auth_password = region_config.get(user[REGION_NAME],
user[USER_KEY] + '_PASSWORD')
# Verify that the existing user can seek an auth token
user_token = rutils.get_token(auth_url, service_project,
auth_user,
auth_password, user_domain,
project_domain)
if not user_token:
raise ConfigFail(
"Unable to obtain keystone token for %s user. "
"Please ensure keystone configuration is correct."
% auth_user)
else:
# For subcloud configs we re-use the users from the system controller
# (the primary region).
for user in expected_users:
auth_user = user[USER_NAME]
user_id = users.get_user_id(auth_user)
auth_password = None
if user_id:
# Add the password to the region config so it will be used when
# configuring services.
auth_password = user_config.get_password(user[USER_NAME])
region_config.set(user[REGION_NAME],
user[USER_KEY] + '_PASSWORD',
auth_password)
else:
raise ConfigFail(
"Unable to obtain user (%s). Please ensure "
"keystone configuration is correct." % user[USER_NAME])
# Verify that the existing user can seek an auth token
user_token = rutils.get_token(auth_url, service_project, auth_user,
auth_password, user_domain,
project_domain)
if not user_token:
raise ConfigFail(
"Unable to obtain keystone token for %s user. "
"Please ensure keystone configuration is correct." %
auth_user)
# Verify that region two endpoints & services for shared services
# match our requirements, optionally creating missing entries
for endpoint in expected_region_1_endpoints:
service_name = region_config.get('SHARED_SERVICES',
endpoint[SERVICE_NAME])
service_type = region_config.get('SHARED_SERVICES',
endpoint[SERVICE_TYPE])
try:
service_id = services.get_service_id(service_name, service_type)
except KeystoneFail as ex:
# No option to create services for region one, if those are not
# present, something is seriously wrong
raise ex
# Extract region one url information from the existing endpoint entry:
try:
endpoints.get_service_url(
region_1_name, service_id, "public")
endpoints.get_service_url(
region_1_name, service_id, "internal")
endpoints.get_service_url(
region_1_name, service_id, "admin")
except KeystoneFail as ex:
# Fail since shared services endpoints are not found
raise ConfigFail("Endpoint for shared service %s "
"is not configured" % service_name)
# Verify that region two endpoints & services match our requirements,
# optionally creating missing entries
public_address = utils.get_optional(region_config, 'CAN_NETWORK',
'CAN_IP_START_ADDRESS')
if not public_address:
public_address = utils.get_optional(region_config, 'CAN_NETWORK',
'CAN_IP_FLOATING_ADDRESS')
if not public_address:
public_address = utils.get_optional(region_config, 'OAM_NETWORK',
'IP_START_ADDRESS')
if not public_address:
# AIO-SX configuration
public_address = utils.get_optional(region_config, 'OAM_NETWORK',
'IP_ADDRESS')
if not public_address:
public_address = region_config.get('OAM_NETWORK',
'IP_FLOATING_ADDRESS')
if region_config.has_section('CLM_NETWORK'):
internal_address = region_config.get('CLM_NETWORK',
'CLM_IP_START_ADDRESS')
else:
internal_address = region_config.get('MGMT_NETWORK',
'IP_START_ADDRESS')
internal_infra_address = utils.get_optional(
region_config, 'BLS_NETWORK', 'BLS_IP_START_ADDRESS')
if not internal_infra_address:
internal_infra_address = utils.get_optional(
region_config, 'INFRA_NETWORK', 'IP_START_ADDRESS')
for endpoint in expected_region_2_endpoints:
service_name = utils.get_service(region_config, 'REGION_2_SERVICES',
endpoint[SERVICE_NAME])
service_type = utils.get_service(region_config, 'REGION_2_SERVICES',
endpoint[SERVICE_TYPE])
expected_public_url = endpoint[PUBLIC_URL].format(public_address)
if internal_infra_address and service_type == 'image':
nfs_address = IPAddress(internal_infra_address) + 3
expected_internal_url = endpoint[INTERNAL_URL].format(nfs_address)
expected_admin_url = endpoint[ADMIN_URL].format(nfs_address)
else:
expected_internal_url = endpoint[INTERNAL_URL].format(
internal_address)
expected_admin_url = endpoint[ADMIN_URL].format(internal_address)
        try:
            service_id = services.get_service_id(service_name, service_type)
            public_url = endpoints.get_service_url(region_2_name, service_id,
"public")
internal_url = endpoints.get_service_url(region_2_name, service_id,
"internal")
admin_url = endpoints.get_service_url(region_2_name, service_id,
"admin")
except KeystoneFail as ex:
# The endpoint will be created optionally
if not create:
raise ConfigFail("Keystone configuration error: Unable to "
"find endpoints for service %s"
% service_name)
continue
# Validate the existing endpoints
for endpointtype, found, expected in [
('public', public_url, expected_public_url),
('internal', internal_url, expected_internal_url),
('admin', admin_url, expected_admin_url)]:
if found != expected:
raise ConfigFail(
"Keystone configuration error for:\nregion ({}), "
"service name ({}), service type ({})\n"
"expected {}: {}\nconfigured {}: {}".format(
region_2_name, service_name, service_type,
endpointtype, expected, endpointtype, found))
def set_subcloud_config_defaults(region_config):
"""Set defaults in region_config for subclouds"""
# We always create endpoints for subclouds
region_config.set('REGION_2_SERVICES', 'CREATE', 'Y')
# We use the default service project
region_config.set('SHARED_SERVICES', 'SERVICE_PROJECT_NAME',
constants.DEFAULT_SERVICE_PROJECT_NAME)
# We use the default heat admin domain
region_config.set('REGION_2_SERVICES', 'HEAT_ADMIN_DOMAIN',
DEFAULT_HEAT_ADMIN_DOMAIN)
# We use the heat admin user already created in the system controller
region_config.set('REGION_2_SERVICES', 'HEAT_ADMIN_USER_NAME',
DEFAULT_HEAT_ADMIN_USER_NAME)
# Add the necessary users to the region config, which will allow the
# validation code to run and will later result in services being
# configured to use the users from the system controller.
expected_users = EXPECTED_USERS
expected_users.append(EXPECTED_REGION_2_NEUTRON_USER)
if not region_config.has_option('SHARED_SERVICES',
'GLANCE_SERVICE_NAME'):
expected_users.append(EXPECTED_REGION_2_GLANCE_USER)
elif region_config.has_option(
'SHARED_SERVICES', 'GLANCE_CACHED'):
if region_config.get('SHARED_SERVICES',
'GLANCE_CACHED').upper() == 'TRUE':
expected_users.append(EXPECTED_REGION_2_GLANCE_USER)
for user in expected_users:
# Add the user to the region config so to allow validation.
region_config.set(user[REGION_NAME], user[USER_KEY] + '_USER_NAME',
user[USER_NAME])
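Note that `expected_users = EXPECTED_USERS` here (and in `validate_region_one_keystone_config`) binds the module-level list itself, not a copy, so every `append` grows the shared "constant" and repeated calls in one process would accumulate duplicates. A small demonstration of the aliasing, with a copy as the safer alternative (names are illustrative):

```python
EXPECTED = [('REGION_2_SERVICES', 'NOVA', 'nova')]

def extend_users(extra):
    users = EXPECTED      # binds the same list object, not a copy
    users.append(extra)   # mutates the module-level "constant"
    return users

extend_users(('REGION_2_SERVICES', 'NEUTRON', 'neutron'))
print(len(EXPECTED))      # the shared list grew

safe = list(EXPECTED)     # a shallow copy leaves the original intact
safe.append(('REGION_2_SERVICES', 'GLANCE', 'glance'))
print(len(EXPECTED), len(safe))
```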
def configure_region(config_file, config_type=REGION_CONFIG):
"""Configure the region"""
# Parse the region/subcloud config file
print "Parsing configuration file... ",
region_config = parse_system_config(config_file)
print "DONE"
if config_type == SUBCLOUD_CONFIG:
# Set defaults in region_config for subclouds
set_subcloud_config_defaults(region_config)
# Validate the region/subcloud config file
print "Validating configuration file... ",
try:
create_cgcs_config_file(None, region_config, None, None, None,
config_type=config_type,
validate_only=True)
except ConfigParser.Error as e:
raise ConfigFail("Error parsing configuration file %s: %s" %
(config_file, e))
print "DONE"
# Bring up management interface to allow us to reach Region 1
print "Configuring management interface... ",
configure_management_interface(region_config)
print "DONE"
# Get token from keystone
print "Retrieving keystone token...",
sys.stdout.flush()
auth_url = region_config.get('SHARED_SERVICES', 'KEYSTONE_ADMINURL')
if region_config.has_option('SHARED_SERVICES', 'ADMIN_TENANT_NAME'):
auth_project = region_config.get('SHARED_SERVICES',
'ADMIN_TENANT_NAME')
else:
auth_project = region_config.get('SHARED_SERVICES',
'ADMIN_PROJECT_NAME')
auth_user = region_config.get('SHARED_SERVICES', 'ADMIN_USER_NAME')
auth_password = region_config.get('SHARED_SERVICES', 'ADMIN_PASSWORD')
if region_config.has_option('SHARED_SERVICES', 'ADMIN_USER_DOMAIN'):
admin_user_domain = region_config.get('SHARED_SERVICES',
'ADMIN_USER_DOMAIN')
else:
admin_user_domain = DEFAULT_DOMAIN_NAME
if region_config.has_option('SHARED_SERVICES',
'ADMIN_PROJECT_DOMAIN'):
admin_project_domain = region_config.get('SHARED_SERVICES',
'ADMIN_PROJECT_DOMAIN')
else:
admin_project_domain = DEFAULT_DOMAIN_NAME
attempts = 0
token = None
# Wait for connectivity to region one. It can take some time, especially if
# we have LAG on the management network.
while not token:
token = rutils.get_token(auth_url, auth_project, auth_user,
auth_password, admin_user_domain,
admin_project_domain)
if not token:
attempts += 1
if attempts < 10:
print "\rRetrieving keystone token...{}".format(
'.' * attempts),
sys.stdout.flush()
time.sleep(10)
else:
raise ConfigFail(
"Unable to obtain keystone token. Please ensure "
"networking and keystone configuration is correct.")
print "DONE"
# Get services, endpoints, users and domains from keystone
print "Retrieving services, endpoints and users from keystone... ",
region_name = region_config.get('SHARED_SERVICES', 'REGION_NAME')
service_name = region_config.get('SHARED_SERVICES',
'KEYSTONE_SERVICE_NAME')
service_type = region_config.get('SHARED_SERVICES',
'KEYSTONE_SERVICE_TYPE')
api_url = token.get_service_url(
region_name, service_name, service_type, "admin").replace(
'v2.0', 'v3')
services = rutils.get_services(token, api_url)
endpoints = rutils.get_endpoints(token, api_url)
users = rutils.get_users(token, api_url)
domains = rutils.get_domains(token, api_url)
if not services or not endpoints or not users:
raise ConfigFail(
"Unable to retrieve services, endpoints or users from keystone. "
"Please ensure networking and keystone configuration is correct.")
print "DONE"
user_config = None
if config_type == SUBCLOUD_CONFIG:
# Retrieve subcloud configuration from dcmanager
print "Retrieving configuration from dcmanager... ",
dcmanager_url = token.get_service_url(
'SystemController', 'dcmanager', 'dcmanager', "admin")
subcloud_name = region_config.get('REGION_2_SERVICES',
'REGION_NAME')
subcloud_management_subnet = region_config.get('MGMT_NETWORK',
'CIDR')
hash_string = subcloud_name + subcloud_management_subnet
subcloud_config = rutils.get_subcloud_config(token, dcmanager_url,
subcloud_name,
hash_string)
user_config = subcloud_config['users']
print "DONE"
try:
# Configure missing region one keystone entries
create = True
# Prepare region configuration for puppet to create keystone identities
if (region_config.has_option('REGION_2_SERVICES', 'CREATE') and
region_config.get('REGION_2_SERVICES', 'CREATE') == 'Y'):
print "Preparing keystone configuration... ",
# If the keystone configuration for this region is already in
# place, only validate it
else:
# Validate region one keystone config
create = False
print "Validating keystone configuration... ",
validate_region_one_keystone_config(region_config, token, api_url,
users, services, endpoints, create,
config_type=config_type,
user_config=user_config)
print "DONE"
# Create cgcs_config file
print "Creating config apply file... ",
try:
create_cgcs_config_file(TEMP_CGCS_CONFIG_FILE, region_config,
services, endpoints, domains,
config_type=config_type)
except ConfigParser.Error as e:
raise ConfigFail("Error parsing configuration file %s: %s" %
(config_file, e))
print "DONE"
# Configure controller
assistant = ConfigAssistant()
assistant.configure(TEMP_CGCS_CONFIG_FILE, display_config=False)
except ConfigFail as e:
print "A configuration failure has occurred.",
raise e
def show_help_region():
print ("Usage: %s [OPTIONS] <CONFIG_FILE>" % sys.argv[0])
print textwrap.fill(
"Perform region configuration using the region "
"configuration from CONFIG_FILE.", 80)
def show_help_subcloud():
print ("Usage: %s [OPTIONS] <CONFIG_FILE>" % sys.argv[0])
print textwrap.fill(
"Perform subcloud configuration using the subcloud "
"configuration from CONFIG_FILE.", 80)
def config_main(config_type=REGION_CONFIG):
if config_type == REGION_CONFIG:
config_file = "/home/wrsroot/region_config"
elif config_type == SUBCLOUD_CONFIG:
config_file = "/home/wrsroot/subcloud_config"
else:
raise ConfigFail("Invalid config_type: %s" % config_type)
arg = 1
while arg < len(sys.argv):
if sys.argv[arg] in ['--help', '-h', '-?']:
if config_type == REGION_CONFIG:
show_help_region()
else:
show_help_subcloud()
exit(1)
elif arg == len(sys.argv) - 1:
config_file = sys.argv[arg]
else:
print "Invalid option. Use --help for more information."
exit(1)
arg += 1
log.configure()
if not os.path.isfile(config_file):
print "Config file %s does not exist." % config_file
exit(1)
try:
configure_region(config_file, config_type=config_type)
except KeyboardInterrupt:
print "\nAborting configuration"
except ConfigFail as e:
LOG.exception(e)
print "\nConfiguration failed: {}".format(e)
except Exception as e:
LOG.exception(e)
print "\nConfiguration failed: {}".format(e)
else:
print("\nConfiguration finished successfully.")
finally:
if os.path.isfile(TEMP_CGCS_CONFIG_FILE):
os.remove(TEMP_CGCS_CONFIG_FILE)
def region_main():
config_main(REGION_CONFIG)
def subcloud_main():
config_main(SUBCLOUD_CONFIG)
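The keystone token retry loop in `configure_region` above can be sketched generically. This is a minimal Python 3 sketch, not part of the module; the `fetch` callable and attempt counts are illustrative stand-ins for `rutils.get_token` and the hard-coded limit of 10:

```python
import time

def get_token_with_retry(fetch, max_attempts=10, delay=10):
    """Call fetch() until it returns a truthy token, sleeping `delay`
    seconds between failed attempts, as configure_region does."""
    for attempt in range(max_attempts):
        token = fetch()
        if token:
            return token
        time.sleep(delay)
    raise RuntimeError("Unable to obtain keystone token. Please ensure "
                       "networking and keystone configuration is correct.")

# Illustrative fetch function that succeeds on the third call
calls = {'count': 0}
def fake_fetch():
    calls['count'] += 1
    return "token" if calls['count'] >= 3 else None

token = get_token_with_retry(fake_fetch, max_attempts=5, delay=0)
# → "token" after three attempts
```

The real loop also redraws a progress line between attempts; that is omitted here for brevity.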


@@ -0,0 +1,575 @@
#
# Copyright (c) 2014-2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
System Inventory Interactions
"""
import json
import openstack
import urllib2
from common import log
from common.exceptions import KeystoneFail
LOG = log.get_logger(__name__)
API_VERSION = 1
# Host Personality Constants
HOST_PERSONALITY_NOT_SET = ""
HOST_PERSONALITY_UNKNOWN = "unknown"
HOST_PERSONALITY_CONTROLLER = "controller"
HOST_PERSONALITY_COMPUTE = "compute"
HOST_PERSONALITY_STORAGE = "storage"
# Host Administrative State Constants
HOST_ADMIN_STATE_NOT_SET = ""
HOST_ADMIN_STATE_UNKNOWN = "unknown"
HOST_ADMIN_STATE_LOCKED = "locked"
HOST_ADMIN_STATE_UNLOCKED = "unlocked"
# Host Operational State Constants
HOST_OPERATIONAL_STATE_NOT_SET = ""
HOST_OPERATIONAL_STATE_UNKNOWN = "unknown"
HOST_OPERATIONAL_STATE_ENABLED = "enabled"
HOST_OPERATIONAL_STATE_DISABLED = "disabled"
# Host Availability State Constants
HOST_AVAIL_STATE_NOT_SET = ""
HOST_AVAIL_STATE_UNKNOWN = "unknown"
HOST_AVAIL_STATE_AVAILABLE = "available"
HOST_AVAIL_STATE_ONLINE = "online"
HOST_AVAIL_STATE_OFFLINE = "offline"
HOST_AVAIL_STATE_POWERED_OFF = "powered-off"
HOST_AVAIL_STATE_POWERED_ON = "powered-on"
# Host Board Management Constants
HOST_BM_TYPE_NOT_SET = ""
HOST_BM_TYPE_UNKNOWN = "unknown"
HOST_BM_TYPE_ILO3 = 'ilo3'
HOST_BM_TYPE_ILO4 = 'ilo4'
# Host invprovision state
HOST_PROVISIONING = "provisioning"
HOST_PROVISIONED = "provisioned"
class Host(object):
def __init__(self, hostname, host_data=None):
self.name = hostname
self.personality = HOST_PERSONALITY_NOT_SET
self.admin_state = HOST_ADMIN_STATE_NOT_SET
self.operational_state = HOST_OPERATIONAL_STATE_NOT_SET
self.avail_status = []
self.bm_type = HOST_BM_TYPE_NOT_SET
self.uuid = None
self.config_status = None
self.invprovision = None
self.boot_device = None
self.rootfs_device = None
self.console = None
self.tboot = None
if host_data is not None:
self.__host_set_state__(host_data)
def __host_set_state__(self, host_data):
if host_data is None:
self.admin_state = HOST_ADMIN_STATE_UNKNOWN
self.operational_state = HOST_OPERATIONAL_STATE_UNKNOWN
self.avail_status = []
self.bm_type = HOST_BM_TYPE_NOT_SET
return
# Set personality
if host_data['personality'] == "controller":
self.personality = HOST_PERSONALITY_CONTROLLER
elif host_data['personality'] == "compute":
self.personality = HOST_PERSONALITY_COMPUTE
elif host_data['personality'] == "storage":
self.personality = HOST_PERSONALITY_STORAGE
else:
self.personality = HOST_PERSONALITY_UNKNOWN
# Set administrative state
if host_data['administrative'] == "locked":
self.admin_state = HOST_ADMIN_STATE_LOCKED
elif host_data['administrative'] == "unlocked":
self.admin_state = HOST_ADMIN_STATE_UNLOCKED
else:
self.admin_state = HOST_ADMIN_STATE_UNKNOWN
# Set operational state
if host_data['operational'] == "enabled":
self.operational_state = HOST_OPERATIONAL_STATE_ENABLED
elif host_data['operational'] == "disabled":
self.operational_state = HOST_OPERATIONAL_STATE_DISABLED
else:
self.operational_state = HOST_OPERATIONAL_STATE_UNKNOWN
# Set availability status
self.avail_status[:] = []
if host_data['availability'] == "available":
self.avail_status.append(HOST_AVAIL_STATE_AVAILABLE)
elif host_data['availability'] == "online":
self.avail_status.append(HOST_AVAIL_STATE_ONLINE)
elif host_data['availability'] == "offline":
self.avail_status.append(HOST_AVAIL_STATE_OFFLINE)
elif host_data['availability'] == "power-on":
self.avail_status.append(HOST_AVAIL_STATE_POWERED_ON)
elif host_data['availability'] == "power-off":
self.avail_status.append(HOST_AVAIL_STATE_POWERED_OFF)
else:
self.avail_status.append(HOST_AVAIL_STATE_AVAILABLE)
# Set board management type
if host_data['bm_type'] is None:
self.bm_type = HOST_BM_TYPE_NOT_SET
elif host_data['bm_type'] == 'ilo3':
self.bm_type = HOST_BM_TYPE_ILO3
elif host_data['bm_type'] == 'ilo4':
self.bm_type = HOST_BM_TYPE_ILO4
else:
self.bm_type = HOST_BM_TYPE_UNKNOWN
if host_data['invprovision'] == 'provisioned':
self.invprovision = HOST_PROVISIONED
else:
self.invprovision = HOST_PROVISIONING
self.uuid = host_data['uuid']
self.config_status = host_data['config_status']
self.boot_device = host_data['boot_device']
self.rootfs_device = host_data['rootfs_device']
self.console = host_data['console']
self.tboot = host_data['tboot']
def __host_update__(self, admin_token, region_name):
try:
url = admin_token.get_service_admin_url("platform", "sysinv",
region_name)
url += "/ihosts/" + self.name
request_info = urllib2.Request(url)
request_info.add_header("X-Auth-Token", admin_token.get_id())
request_info.add_header("Accept", "application/json")
request = urllib2.urlopen(request_info)
response = json.loads(request.read())
request.close()
return response
except KeystoneFail as e:
LOG.error("Keystone authentication failed: {}".format(e))
return None
except urllib2.HTTPError as e:
LOG.error("%s, %s" % (e.code, e.read()))
if e.code == 401:
admin_token.set_expired()
return None
except urllib2.URLError as e:
LOG.error(e)
return None
def __host_action__(self, admin_token, action, region_name):
try:
url = admin_token.get_service_admin_url("platform", "sysinv",
region_name)
url += "/ihosts/" + self.name
request_info = urllib2.Request(url)
request_info.get_method = lambda: 'PATCH'
request_info.add_header("X-Auth-Token", admin_token.get_id())
request_info.add_header("Content-type", "application/json")
request_info.add_header("Accept", "application/json")
request_info.add_data(action)
request = urllib2.urlopen(request_info)
request.close()
return True
except KeystoneFail as e:
LOG.error("Keystone authentication failed: {}".format(e))
return False
except urllib2.HTTPError as e:
LOG.error("%s, %s" % (e.code, e.read()))
if e.code == 401:
admin_token.set_expired()
return False
except urllib2.URLError as e:
LOG.error(e)
return False
def is_unlocked(self):
return(self.admin_state == HOST_ADMIN_STATE_UNLOCKED)
def is_locked(self):
return(not self.is_unlocked())
def is_enabled(self):
return(self.admin_state == HOST_ADMIN_STATE_UNLOCKED and
self.operational_state == HOST_OPERATIONAL_STATE_ENABLED)
def is_controller_enabled_provisioned(self):
return(self.admin_state == HOST_ADMIN_STATE_UNLOCKED and
self.operational_state == HOST_OPERATIONAL_STATE_ENABLED and
self.personality == HOST_PERSONALITY_CONTROLLER and
self.invprovision == HOST_PROVISIONED)
def is_disabled(self):
return(not self.is_enabled())
def support_power_off(self):
return(HOST_BM_TYPE_NOT_SET != self.bm_type)
def is_powered_off(self):
for status in self.avail_status:
if status == HOST_AVAIL_STATE_POWERED_OFF:
return(self.admin_state == HOST_ADMIN_STATE_LOCKED and
self.operational_state ==
HOST_OPERATIONAL_STATE_DISABLED)
return False
def is_powered_on(self):
return not self.is_powered_off()
def refresh_data(self, admin_token, region_name):
""" Ask the System Inventory for an updated view of the host """
host_data = self.__host_update__(admin_token, region_name)
self.__host_set_state__(host_data)
def lock(self, admin_token, region_name):
""" Asks the Platform to perform a lock against a host """
if self.is_unlocked():
action = json.dumps([{"path": "/action",
"value": "lock", "op": "replace"}])
return self.__host_action__(admin_token, action, region_name)
return True
def force_lock(self, admin_token, region_name):
""" Asks the Platform to perform a force lock against a host """
if self.is_unlocked():
action = json.dumps([{"path": "/action",
"value": "force-lock", "op": "replace"}])
return self.__host_action__(admin_token, action, region_name)
return True
def unlock(self, admin_token, region_name):
""" Asks the Platform to perform an unlock against a host """
if self.is_locked():
action = json.dumps([{"path": "/action",
"value": "unlock", "op": "replace"}])
return self.__host_action__(admin_token, action, region_name)
return True
def power_off(self, admin_token, region_name):
""" Asks the Platform to perform a power-off against a host """
if self.is_powered_on():
action = json.dumps([{"path": "/action",
"value": "power-off", "op": "replace"}])
return self.__host_action__(admin_token, action, region_name)
return True
def power_on(self, admin_token, region_name):
""" Asks the Platform to perform a power-on against a host """
if self.is_powered_off():
action = json.dumps([{"path": "/action",
"value": "power-on", "op": "replace"}])
return self.__host_action__(admin_token, action, region_name)
return True
def get_hosts(admin_token, region_name, personality=None,
exclude_hostnames=None):
""" Asks System Inventory for a list of hosts """
if exclude_hostnames is None:
exclude_hostnames = []
try:
url = admin_token.get_service_admin_url("platform", "sysinv",
region_name)
url += "/ihosts/"
request_info = urllib2.Request(url)
request_info.add_header("X-Auth-Token", admin_token.get_id())
request_info.add_header("Accept", "application/json")
request = urllib2.urlopen(request_info)
response = json.loads(request.read())
request.close()
host_list = []
if personality is None:
for host in response['ihosts']:
if host['hostname'] not in exclude_hostnames:
host_list.append(Host(host['hostname'], host))
else:
for host in response['ihosts']:
if host['hostname'] not in exclude_hostnames:
if (host['personality'] == "controller" and
personality == HOST_PERSONALITY_CONTROLLER):
host_list.append(Host(host['hostname'], host))
elif (host['personality'] == "compute" and
personality == HOST_PERSONALITY_COMPUTE):
host_list.append(Host(host['hostname'], host))
elif (host['personality'] == "storage" and
personality == HOST_PERSONALITY_STORAGE):
host_list.append(Host(host['hostname'], host))
return host_list
except KeystoneFail as e:
LOG.error("Keystone authentication failed: {}".format(e))
return []
except urllib2.HTTPError as e:
LOG.error("%s, %s" % (e.code, e.read()))
if e.code == 401:
admin_token.set_expired()
return []
except urllib2.URLError as e:
LOG.error(e)
return []
def dict_to_patch(values, install_action=False):
# install default action
if install_action:
values.update({'action': 'install'})
patch = []
for key, value in values.iteritems():
path = '/' + key
patch.append({'op': 'replace', 'path': path, 'value': value})
return patch
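`dict_to_patch` turns a flat dict into the JSON-patch list the sysinv client expects (one `replace` op per key). A Python 3 rendering of the same logic, using `items()` in place of the Python 2 `iteritems()` above:

```python
def dict_to_patch(values, install_action=False):
    """Build a sysinv-style JSON patch: one 'replace' op per key."""
    if install_action:
        values.update({'action': 'install'})
    return [{'op': 'replace', 'path': '/' + key, 'value': value}
            for key, value in values.items()]

patch = dict_to_patch({'name': 'Cloned_system'})
# → [{'op': 'replace', 'path': '/name', 'value': 'Cloned_system'}]
```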
def get_shared_services():
try:
services = ""
with openstack.OpenStack() as client:
systems = client.sysinv.isystem.list()
if systems:
services = systems[0].capabilities.get("shared_services", "")
except Exception as e:
LOG.exception("failed to get shared services")
raise e
return services
def get_alarms():
""" get all alarms """
alarm_list = []
try:
with openstack.OpenStack() as client:
alarm_list = client.sysinv.ialarm.list()
except Exception as e:
LOG.exception("failed to get alarms")
raise e
return alarm_list
def controller_enabled_provisioned(hostname):
""" check if host is enabled and provisioned """
try:
with openstack.OpenStack() as client:
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if (hostname == host.name and
host.is_controller_enabled_provisioned()):
LOG.info("host %s is enabled/provisioned" % host.name)
return True
except Exception as e:
LOG.exception("failed to check if host is enabled/provisioned")
raise e
return False
def get_system_uuid():
""" get system uuid """
try:
sysuuid = ""
with openstack.OpenStack() as client:
systems = client.sysinv.isystem.list()
if systems:
sysuuid = systems[0].uuid
except Exception as e:
LOG.exception("failed to get system uuid")
raise e
return sysuuid
def get_oam_ip():
""" get OAM ip details """
try:
with openstack.OpenStack() as client:
oam_list = client.sysinv.iextoam.list()
if oam_list:
return oam_list[0]
except Exception as e:
LOG.exception("failed to get OAM IP")
raise e
return None
def get_mac_addresses(hostname):
""" get MAC addresses for the host """
macs = {}
try:
with openstack.OpenStack() as client:
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if hostname == host.name:
port_list = client.sysinv.ethernet_port.list(host.uuid)
macs = {port.name: port.mac for port in port_list}
except Exception as e:
LOG.exception("failed to get MAC addresses")
raise e
return macs
def get_disk_serial_ids(hostname):
""" get disk serial ids for the host """
disk_serial_ids = {}
try:
with openstack.OpenStack() as client:
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if hostname == host.name:
disk_list = client.sysinv.idisk.list(host.uuid)
disk_serial_ids = {
disk.device_node: disk.serial_id for disk in disk_list}
except Exception as e:
LOG.exception("failed to get disks")
raise e
return disk_serial_ids
def update_clone_system(descr, hostname):
""" update system parameters on clone installation """
try:
with openstack.OpenStack() as client:
systems = client.sysinv.isystem.list()
if not systems:
return False
values = {
'name': "Cloned_system",
'description': descr
}
patch = dict_to_patch(values)
LOG.info("Updating system: {} [{}]".format(systems[0].name, patch))
client.sysinv.isystem.update(systems[0].uuid, patch)
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if hostname == host.name:
values = {
'location': {},
'serialid': ""
}
patch = dict_to_patch(values)
client.sysinv.ihost.update(host.uuid, patch)
LOG.info("Updating host: {} [{}]".format(host, patch))
except Exception as e:
LOG.exception("failed to update system parameters")
raise e
return True
def get_config_status(hostname):
""" get config status of the host """
try:
with openstack.OpenStack() as client:
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if hostname == host.name:
return host.config_status
except Exception as e:
LOG.exception("failed to get config status")
raise e
return None
def get_host_data(hostname):
""" get data for the specified host """
try:
with openstack.OpenStack() as client:
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if hostname == host.name:
return host
except Exception as e:
LOG.exception("failed to get host data")
raise e
return None
def do_compute_config_complete(hostname):
""" enable compute functionality """
try:
with openstack.OpenStack() as client:
hosts = get_hosts(client.admin_token,
client.conf['region_name'])
for host in hosts:
if hostname == host.name:
# Create/apply compute manifests
values = {
'action': "subfunction_config"
}
patch = dict_to_patch(values)
LOG.info("Applying compute manifests: {} [{}]"
.format(host, patch))
client.sysinv.ihost.update(host.uuid, patch)
except Exception as e:
LOG.exception("compute_config_complete failed")
raise e
def get_storage_backend_services():
""" get all storage backends and their assigned services """
backend_service_dict = {}
try:
with openstack.OpenStack() as client:
backend_list = client.sysinv.storage_backend.list()
for backend in backend_list:
backend_service_dict.update(
{backend.backend: backend.services})
except Exception as e:
LOG.exception("failed to get storage backend services")
raise e
return backend_service_dict
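The host actions above (lock, unlock, power-on, power-off) all send the same JSON-patch body over a PATCH request. A Python 3 sketch of that pattern, with `urllib.request` standing in for the Python 2 `urllib2` used here; the endpoint URL and token are hypothetical placeholders:

```python
import json
import urllib.request

def build_host_action_request(url, auth_token, action_value):
    """Build (without sending) the PATCH request __host_action__ issues."""
    body = json.dumps([{"path": "/action",
                        "value": action_value,
                        "op": "replace"}]).encode()
    request_info = urllib.request.Request(url, data=body, method='PATCH')
    request_info.add_header("X-Auth-Token", auth_token)
    request_info.add_header("Content-type", "application/json")
    request_info.add_header("Accept", "application/json")
    return request_info

# Hypothetical sysinv endpoint and token, for illustration only
req = build_host_action_request(
    "http://127.0.0.1:6385/v1/ihosts/controller-0", "dummy-token", "lock")
```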


@@ -0,0 +1,500 @@
"""
Copyright (c) 2015-2017 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import ConfigParser
import os
import readline
import sys
import textwrap
from common import constants
from common import log
from common.exceptions import (BackupFail, RestoreFail, UserQuit, CloneFail)
from configutilities import lag_mode_to_str, Network, validate
from configutilities import ConfigFail
from configutilities import DEFAULT_CONFIG, REGION_CONFIG, SUBCLOUD_CONFIG
from configutilities import MGMT_TYPE, HP_NAMES, DEFAULT_NAMES
from configassistant import ConfigAssistant, check_for_ssh_parent
import backup_restore
import utils
import clone
# Temporary file for building cgcs_config
TEMP_CGCS_CONFIG_FILE = "/tmp/cgcs_config"
LOG = log.get_logger(__name__)
def parse_system_config(config_file):
"""Parse system config file"""
system_config = ConfigParser.RawConfigParser()
try:
system_config.read(config_file)
except Exception as e:
LOG.exception(e)
raise ConfigFail("Error parsing system config file")
# Dump configuration for debugging
# for section in config.sections():
# print "Section: %s" % section
# for (name, value) in config.items(section):
# print "name: %s, value: %s" % (name, value)
return system_config
def configure_management_interface(region_config, config_type=REGION_CONFIG):
"""Bring up management interface
"""
mgmt_network = Network()
if region_config.has_section('CLM_NETWORK'):
naming_type = HP_NAMES
else:
naming_type = DEFAULT_NAMES
try:
mgmt_network.parse_config(region_config, config_type, MGMT_TYPE,
min_addresses=8, naming_type=naming_type)
except ConfigFail:
raise
except Exception as e:
LOG.exception("Error parsing configuration file")
raise ConfigFail("Error parsing configuration file: %s" % e)
try:
# Remove interface config files currently installed
utils.remove_interface_config_files()
# Create the management interface configuration files.
# Code based on ConfigAssistant._write_interface_config_management
parameters = utils.get_interface_config_static(
mgmt_network.start_address,
mgmt_network.cidr,
mgmt_network.gateway_address)
if mgmt_network.logical_interface.lag_interface:
management_interface = 'bond0'
else:
management_interface = mgmt_network.logical_interface.ports[0]
if mgmt_network.vlan:
management_interface_name = "%s.%s" % (management_interface,
mgmt_network.vlan)
utils.write_interface_config_vlan(
management_interface_name,
mgmt_network.logical_interface.mtu,
parameters)
# underlying interface has no additional parameters
parameters = None
else:
management_interface_name = management_interface
if mgmt_network.logical_interface.lag_interface:
utils.write_interface_config_bond(
management_interface,
mgmt_network.logical_interface.mtu,
lag_mode_to_str(mgmt_network.logical_interface.lag_mode),
None,
constants.LAG_MIIMON_FREQUENCY,
mgmt_network.logical_interface.ports[0],
mgmt_network.logical_interface.ports[1],
parameters)
else:
utils.write_interface_config_ethernet(
management_interface,
mgmt_network.logical_interface.mtu,
parameters)
# Restart networking with the new management interface configuration
utils.restart_networking()
# Send a GARP for the floating address. This helps in cases
# where we are re-installing in a lab and another node
# previously held the floating address.
if mgmt_network.cidr.version == 4:
utils.send_interface_garp(management_interface_name,
mgmt_network.start_address)
except Exception:
LOG.exception("Failed to configure management interface")
raise ConfigFail("Failed to configure management interface")
def create_cgcs_config_file(output_file, system_config,
services, endpoints, domains,
config_type=REGION_CONFIG, validate_only=False):
"""
Create cgcs_config file or just perform validation of the system_config if
validate_only=True.
:param output_file: filename of output cgcs_config file
:param system_config: system configuration
:param services: keystone services (not used if validate_only)
:param endpoints: keystone endpoints (not used if validate_only)
:param domains: keystone domains (not used if validate_only)
:param config_type: specify region, subcloud or standard config
:param validate_only: used to validate the input system_config
:return:
"""
cgcs_config = None
if not validate_only:
cgcs_config = ConfigParser.RawConfigParser()
cgcs_config.optionxform = str
# General error checking; unless validate_only, validate() also populates cgcs_config
validate(system_config, config_type, cgcs_config)
# Region configuration: services, endpoints and domain
if config_type in [REGION_CONFIG, SUBCLOUD_CONFIG] and not validate_only:
# The services and endpoints are not available in the validation phase
region_1_name = system_config.get('SHARED_SERVICES', 'REGION_NAME')
keystone_service_name = system_config.get('SHARED_SERVICES',
'KEYSTONE_SERVICE_NAME')
keystone_service_type = system_config.get('SHARED_SERVICES',
'KEYSTONE_SERVICE_TYPE')
keystone_service_id = services.get_service_id(keystone_service_name,
keystone_service_type)
keystone_admin_url = endpoints.get_service_url(region_1_name,
keystone_service_id,
"admin")
keystone_internal_url = endpoints.get_service_url(region_1_name,
keystone_service_id,
"internal")
keystone_public_url = endpoints.get_service_url(region_1_name,
keystone_service_id,
"public")
cgcs_config.set('cREGION', 'KEYSTONE_AUTH_URI', keystone_internal_url)
cgcs_config.set('cREGION', 'KEYSTONE_IDENTITY_URI', keystone_admin_url)
cgcs_config.set('cREGION', 'KEYSTONE_ADMIN_URI', keystone_admin_url)
cgcs_config.set('cREGION', 'KEYSTONE_INTERNAL_URI',
keystone_internal_url)
cgcs_config.set('cREGION', 'KEYSTONE_PUBLIC_URI', keystone_public_url)
is_glance_cached = False
if system_config.has_option('SHARED_SERVICES', 'GLANCE_CACHED'):
if (system_config.get('SHARED_SERVICES',
'GLANCE_CACHED').upper() == 'TRUE'):
is_glance_cached = True
cgcs_config.set('cREGION', 'GLANCE_CACHED', is_glance_cached)
if (system_config.has_option('SHARED_SERVICES',
'GLANCE_SERVICE_NAME') and
not is_glance_cached):
glance_service_name = system_config.get('SHARED_SERVICES',
'GLANCE_SERVICE_NAME')
glance_service_type = system_config.get('SHARED_SERVICES',
'GLANCE_SERVICE_TYPE')
glance_region_name = region_1_name
glance_service_id = services.get_service_id(glance_service_name,
glance_service_type)
glance_internal_url = endpoints.get_service_url(glance_region_name,
glance_service_id,
"internal")
glance_public_url = endpoints.get_service_url(glance_region_name,
glance_service_id,
"public")
cgcs_config.set('cREGION', 'GLANCE_ADMIN_URI', glance_internal_url)
cgcs_config.set('cREGION', 'GLANCE_PUBLIC_URI', glance_public_url)
cgcs_config.set('cREGION', 'GLANCE_INTERNAL_URI',
glance_internal_url)
# The domains are not available in the validation phase
heat_admin_domain = system_config.get('REGION_2_SERVICES',
'HEAT_ADMIN_DOMAIN')
cgcs_config.set('cREGION', 'HEAT_ADMIN_DOMAIN_NAME', heat_admin_domain)
# If primary region is non-TiC and keystone entries already created,
# the flag will tell puppet not to create them.
if (system_config.has_option('REGION_2_SERVICES', 'CREATE') and
system_config.get('REGION_2_SERVICES', 'CREATE') == 'Y'):
cgcs_config.set('cREGION', 'REGION_SERVICES_CREATE', 'True')
# System Timezone configuration
if system_config.has_option('SYSTEM', 'TIMEZONE'):
timezone = system_config.get('SYSTEM', 'TIMEZONE')
if not os.path.isfile("/usr/share/zoneinfo/%s" % timezone):
raise ConfigFail(
"Timezone file %s does not exist" % timezone)
# Dump results for debugging
# for section in cgcs_config.sections():
# print "[%s]" % section
# for (name, value) in cgcs_config.items(section):
# print "%s=%s" % (name, value)
if not validate_only:
# Write config file
with open(output_file, 'w') as config_file:
cgcs_config.write(config_file)
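`create_cgcs_config_file` sets `optionxform = str` so option names like KEYSTONE_AUTH_URI keep their case when written (the parser lowercases them by default). A minimal Python 3 sketch of that behavior (`configparser` is the Python 3 name for `ConfigParser`; the URI value is illustrative):

```python
import configparser
import io

cgcs_config = configparser.RawConfigParser()
cgcs_config.optionxform = str   # preserve option-name case on write
cgcs_config.add_section('cREGION')
cgcs_config.set('cREGION', 'KEYSTONE_AUTH_URI',
                'http://192.168.204.2:5000/v3')  # placeholder value

buf = io.StringIO()
cgcs_config.write(buf)
output = buf.getvalue()
# output keeps "KEYSTONE_AUTH_URI" rather than "keystone_auth_uri"
```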
def configure_system(config_file):
"""Configure the system"""
# Parse the system config file
print "Parsing system configuration file... ",
system_config = parse_system_config(config_file)
print "DONE"
# Validate the system config file
print "Validating system configuration file... ",
try:
create_cgcs_config_file(None, system_config, None, None, None,
DEFAULT_CONFIG, validate_only=True)
except ConfigParser.Error as e:
raise ConfigFail("Error parsing configuration file %s: %s" %
(config_file, e))
print "DONE"
# Create cgcs_config file
print "Creating config apply file... ",
try:
create_cgcs_config_file(TEMP_CGCS_CONFIG_FILE, system_config,
None, None, None, DEFAULT_CONFIG)
except ConfigParser.Error as e:
raise ConfigFail("Error parsing configuration file %s: %s" %
(config_file, e))
print "DONE"
def show_help():
print ("Usage: %s\n"
"Perform system configuration\n"
"\nThe default action is to perform the initial configuration for "
"the system. The following options are also available:\n"
"--config-file <name> Perform configuration using INI file\n"
"--backup <name> Backup configuration using the given "
"name\n"
"--clone-iso <name> Clone and create an image with "
"the given file name\n"
"--clone-status Status of the last installation of "
"cloned image\n"
"--restore-system <name> Restore system configuration from backup "
"file with\n"
" the given name, full path required\n"
"--restore-images <name> Restore images from backup file with the "
"given name,\n"
" full path required\n"
"--restore-compute Restore controller-0 compute function "
"for All-In-One system,\n"
" controller-0 will reboot\n"
% sys.argv[0])
def show_help_lab_only():
print ("Usage: %s\n"
"Perform initial configuration\n"
"\nThe following options are for lab use only:\n"
"--answerfile <file> Apply the configuration from the specified "
"file without\n"
" any validation or user interaction\n"
"--default Apply default configuration with no NTP or "
"DNS server\n"
" configuration (suitable for testing in a "
"virtual\n"
" environment)\n"
"--archive-dir <dir> Directory to store the archive in\n"
"--provision Provision initial system data only\n"
% sys.argv[0])
def no_complete(text, state):
    return


def main():
    options = {}
    answerfile = None
    backup_name = None
    archive_dir = constants.BACKUPS_PATH
    do_default_config = False
    do_backup = False
    do_system_restore = False
    do_images_restore = False
    do_compute_restore = False
    do_clone = False
    do_non_interactive = False
    do_provision = False
    system_config_file = "/home/wrsroot/system_config"

    # Disable completion as the default completer shows python commands
    readline.set_completer(no_complete)

    # remove any previous config fail flag file
    if os.path.exists(constants.CONFIG_FAIL_FILE):
        os.remove(constants.CONFIG_FAIL_FILE)

    if os.environ.get('CGCS_LABMODE'):
        options['labmode'] = True

    arg = 1
    while arg < len(sys.argv):
        if sys.argv[arg] == "--answerfile":
            arg += 1
            if arg < len(sys.argv):
                answerfile = sys.argv[arg]
            else:
                print "--answerfile option requires a file to be specified"
                exit(1)
        elif sys.argv[arg] == "--backup":
            arg += 1
            if arg < len(sys.argv):
                backup_name = sys.argv[arg]
            else:
                print "--backup requires the name of the backup"
                exit(1)
            do_backup = True
        elif sys.argv[arg] == "--restore-system":
            arg += 1
            if arg < len(sys.argv):
                backup_name = sys.argv[arg]
            else:
                print "--restore-system requires the filename of the backup"
                exit(1)
            do_system_restore = True
        elif sys.argv[arg] == "--restore-images":
            arg += 1
            if arg < len(sys.argv):
                backup_name = sys.argv[arg]
            else:
                print "--restore-images requires the filename of the backup"
                exit(1)
            do_images_restore = True
        elif sys.argv[arg] == "--restore-compute":
            do_compute_restore = True
        elif sys.argv[arg] == "--archive-dir":
            arg += 1
            if arg < len(sys.argv):
                archive_dir = sys.argv[arg]
            else:
                print "--archive-dir requires a directory"
                exit(1)
        elif sys.argv[arg] == "--clone-iso":
            arg += 1
            if arg < len(sys.argv):
                backup_name = sys.argv[arg]
            else:
                print "--clone-iso requires the name of the image"
                exit(1)
            do_clone = True
        elif sys.argv[arg] == "--clone-status":
            clone.clone_status()
            exit(0)
        elif sys.argv[arg] == "--default":
            do_default_config = True
        elif sys.argv[arg] == "--config-file":
            arg += 1
            if arg < len(sys.argv):
                system_config_file = sys.argv[arg]
            else:
                print "--config-file requires the filename of the config file"
                exit(1)
            do_non_interactive = True
        elif sys.argv[arg] in ["--help", "-h", "-?"]:
            show_help()
            exit(1)
        elif sys.argv[arg] == "--labhelp":
            show_help_lab_only()
            exit(1)
        elif sys.argv[arg] == "--provision":
            do_provision = True
        else:
            print "Invalid option. Use --help for more information."
            exit(1)
        arg += 1

    if [do_backup,
            do_system_restore,
            do_images_restore,
            do_compute_restore,
            do_clone,
            do_default_config,
            do_non_interactive].count(True) > 1:
        print "Invalid combination of options selected"
        exit(1)

    if answerfile and [do_backup,
                       do_system_restore,
                       do_images_restore,
                       do_compute_restore,
                       do_clone,
                       do_default_config,
                       do_non_interactive].count(True) > 0:
        print "The --answerfile option cannot be used with the selected option"
        exit(1)

    log.configure()

    # Reduce the printk console log level to avoid noise during configuration
    printk_levels = ''
    with open('/proc/sys/kernel/printk', 'r') as f:
        printk_levels = f.readline()
    temp_printk_levels = '3' + printk_levels[1:]
    with open('/proc/sys/kernel/printk', 'w') as f:
        f.write(temp_printk_levels)

    if not do_backup and not do_clone:
        check_for_ssh_parent()

    try:
        if do_backup:
            backup_restore.backup(backup_name, archive_dir)
            print "\nBackup complete"
        elif do_system_restore:
            backup_restore.restore_system(backup_name)
            print "\nSystem restore complete"
        elif do_images_restore:
            backup_restore.restore_images(backup_name)
            print "\nImages restore complete"
        elif do_compute_restore:
            backup_restore.restore_compute()
        elif do_clone:
            clone.clone(backup_name, archive_dir)
            print "\nCloning complete"
        elif do_provision:
            assistant = ConfigAssistant(**options)
            assistant.provision(answerfile)
        else:
            if do_non_interactive:
                if not os.path.isfile(system_config_file):
                    raise ConfigFail("Config file %s does not exist." %
                                     system_config_file)
                if (os.path.exists(constants.CGCS_CONFIG_FILE) or
                        os.path.exists(constants.CONFIG_PERMDIR) or
                        os.path.exists(
                            constants.INITIAL_CONFIG_COMPLETE_FILE)):
                    raise ConfigFail("Configuration has already been done "
                                     "and cannot be repeated.")
                configure_system(system_config_file)
                answerfile = TEMP_CGCS_CONFIG_FILE
            assistant = ConfigAssistant(**options)
            assistant.configure(answerfile, do_default_config)
            print "\nConfiguration was applied\n"
            print textwrap.fill(
                "Please complete any out of service commissioning steps "
                "with system commands and unlock controller to proceed.", 80)
            assistant.check_required_interfaces_status()
    except KeyboardInterrupt:
        print "\nAborting configuration"
    except BackupFail as e:
        print "\nBackup failed: {}".format(e)
    except RestoreFail as e:
        print "\nRestore failed: {}".format(e)
    except ConfigFail as e:
        print "\nConfiguration failed: {}".format(e)
    except CloneFail as e:
        print "\nCloning failed: {}".format(e)
    except UserQuit:
        print "\nAborted configuration"
    finally:
        if os.path.isfile(TEMP_CGCS_CONFIG_FILE):
            os.remove(TEMP_CGCS_CONFIG_FILE)
        # Restore the printk console log level
        with open('/proc/sys/kernel/printk', 'w') as f:
            f.write(printk_levels)
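The option checks in main() reject conflicting modes by collecting the mode flags into a list and counting how many are True. A minimal standalone sketch of that pattern (Python 3, with a hypothetical `validate_modes` helper not present in the source):

```python
def validate_modes(**flags):
    """Return True when at most one mode flag is set (the pattern used
    by main() to reject e.g. --backup combined with --clone-iso)."""
    return list(flags.values()).count(True) <= 1

print(validate_modes(do_backup=True, do_clone=False))  # one mode: accepted
print(validate_modes(do_backup=True, do_clone=True))   # conflict: rejected
```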


@@ -0,0 +1,126 @@
[SYSTEM]
SYSTEM_MODE=duplex
[LOGICAL_INTERFACE_1]
LAG_INTERFACE=N
;LAG_MODE=
INTERFACE_MTU=1500
INTERFACE_PORTS=eth0
[LOGICAL_INTERFACE_2]
LAG_INTERFACE=N
;LAG_MODE=
INTERFACE_MTU=1500
INTERFACE_PORTS=eth1
[LOGICAL_INTERFACE_3]
LAG_INTERFACE=N
;LAG_MODE=
INTERFACE_MTU=1500
INTERFACE_PORTS=eth2
[MGMT_NETWORK]
VLAN=121
IP_START_ADDRESS=192.168.204.102
IP_END_ADDRESS=192.168.204.199
CIDR=192.168.204.0/24
MULTICAST_CIDR=239.1.1.0/28
;GATEWAY=192.168.204.12
LOGICAL_INTERFACE=LOGICAL_INTERFACE_1
DYNAMIC_ALLOCATION=N
[INFRA_NETWORK]
;VLAN=124
IP_START_ADDRESS=192.168.205.102
IP_END_ADDRESS=192.168.205.199
CIDR=192.168.205.0/24
LOGICAL_INTERFACE=LOGICAL_INTERFACE_3
[OAM_NETWORK]
;VLAN=
IP_START_ADDRESS=10.10.10.2
IP_END_ADDRESS=10.10.10.99
CIDR=10.10.10.0/24
GATEWAY=10.10.10.1
LOGICAL_INTERFACE=LOGICAL_INTERFACE_2
[REGION2_PXEBOOT_NETWORK]
PXEBOOT_CIDR=192.168.203.0/24
[SHARED_SERVICES]
REGION_NAME=RegionOne
ADMIN_PROJECT_NAME=admin
ADMIN_USER_NAME=admin
ADMIN_USER_DOMAIN=admin_domain
ADMIN_PROJECT_DOMAIN=admin_domain
ADMIN_PASSWORD=Li69nux*
KEYSTONE_ADMINURL=http://192.168.204.12:35357/v2.0
KEYSTONE_SERVICE_NAME=keystone
KEYSTONE_SERVICE_TYPE=identity
SERVICE_PROJECT_NAME=FULL_TEST
[REGION_2_SERVICES]
REGION_NAME=RegionTwo
USER_DOMAIN_NAME=service_domain
PROJECT_DOMAIN_NAME=service_domain
CINDER_SERVICE_NAME=cinder
CINDER_SERVICE_TYPE=volume
CINDER_V2_SERVICE_NAME=cinderv2
CINDER_V2_SERVICE_TYPE=volumev2
CINDER_V3_SERVICE_NAME=cinderv3
CINDER_V3_SERVICE_TYPE=volumev3
CINDER_USER_NAME=cinderTWO
CINDER_PASSWORD=password2WO*
GLANCE_SERVICE_NAME=glance
GLANCE_SERVICE_TYPE=image
GLANCE_USER_NAME=glanceTWO
GLANCE_PASSWORD=password2WO*
NOVA_USER_NAME=novaTWO
NOVA_PASSWORD=password2WO*
NOVA_SERVICE_NAME=nova
NOVA_SERVICE_TYPE=compute
PLACEMENT_USER_NAME=placement
PLACEMENT_PASSWORD=password2WO*
PLACEMENT_SERVICE_NAME=placement
PLACEMENT_SERVICE_TYPE=placement
NOVA_V3_SERVICE_NAME=novav3
NOVA_V3_SERVICE_TYPE=computev3
NEUTRON_USER_NAME=neutronTWO
NEUTRON_PASSWORD=password2WO*
NEUTRON_SERVICE_NAME=neutron
NEUTRON_SERVICE_TYPE=network
SYSINV_USER_NAME=sysinvTWO
SYSINV_PASSWORD=password2WO*
SYSINV_SERVICE_NAME=sysinv
SYSINV_SERVICE_TYPE=platform
PATCHING_USER_NAME=patchingTWO
PATCHING_PASSWORD=password2WO*
PATCHING_SERVICE_NAME=patching
PATCHING_SERVICE_TYPE=patching
HEAT_USER_NAME=heatTWO
HEAT_PASSWORD=password2WO*
HEAT_ADMIN_DOMAIN=heat
HEAT_ADMIN_USER_NAME=heat_stack_adminTWO
HEAT_ADMIN_PASSWORD=password2WO*
HEAT_SERVICE_NAME=heat
HEAT_SERVICE_TYPE=orchestration
HEAT_CFN_SERVICE_NAME=heat-cfn
HEAT_CFN_SERVICE_TYPE=cloudformation
CEILOMETER_USER_NAME=ceilometerTWO
CEILOMETER_PASSWORD=password2WO*
CEILOMETER_SERVICE_NAME=ceilometer
CEILOMETER_SERVICE_TYPE=metering
NFV_USER_NAME=vimTWO
NFV_PASSWORD=password2WO*
AODH_USER_NAME=aodhTWO
AODH_PASSWORD=password2WO*
MTCE_USER_NAME=mtceTWO
MTCE_PASSWORD=password2WO*
PANKO_USER_NAME=pankoTWO
PANKO_PASSWORD=password2WO*
[VERSION]
RELEASE = 18.03
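Files like the region config above are plain INI: `SECTION` headers, `KEY=value` pairs, and `;`-prefixed lines for commented-out options. A minimal sketch of reading such a fragment with Python 3's stdlib `configparser` (the section and key names are taken from the file above; `optionxform = str` preserves the upper-case keys):

```python
import configparser

# Inline sample mirroring part of the [MGMT_NETWORK] section above.
sample = """
[MGMT_NETWORK]
VLAN=121
CIDR=192.168.204.0/24
;GATEWAY=192.168.204.12
DYNAMIC_ALLOCATION=N
"""

cp = configparser.ConfigParser()
cp.optionxform = str          # keep keys case-sensitive, as in the files
cp.read_string(sample)

print(cp.get('MGMT_NETWORK', 'VLAN'))        # 121
# The ';GATEWAY=...' line is a comment, so the key is absent:
print('GATEWAY' in cp['MGMT_NETWORK'])       # False
```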


@@ -0,0 +1,122 @@
[cSYSTEM]
TIMEZONE = UTC
SYSTEM_MODE = duplex
[cPXEBOOT]
PXEBOOT_SUBNET = 192.168.203.0/24
CONTROLLER_PXEBOOT_FLOATING_ADDRESS = 192.168.203.2
CONTROLLER_PXEBOOT_ADDRESS_0 = 192.168.203.3
CONTROLLER_PXEBOOT_ADDRESS_1 = 192.168.203.4
PXECONTROLLER_FLOATING_HOSTNAME = pxecontroller
[cMGMT]
MANAGEMENT_MTU = 1500
MANAGEMENT_LINK_CAPACITY = None
MANAGEMENT_SUBNET = 192.168.204.0/24
LAG_MANAGEMENT_INTERFACE = no
MANAGEMENT_INTERFACE = eth0
MANAGEMENT_VLAN = 121
MANAGEMENT_INTERFACE_NAME = eth0.121
CONTROLLER_FLOATING_ADDRESS = 192.168.204.102
CONTROLLER_0_ADDRESS = 192.168.204.103
CONTROLLER_1_ADDRESS = 192.168.204.104
NFS_MANAGEMENT_ADDRESS_1 = 192.168.204.105
CONTROLLER_FLOATING_HOSTNAME = controller
CONTROLLER_HOSTNAME_PREFIX = controller-
OAMCONTROLLER_FLOATING_HOSTNAME = oamcontroller
DYNAMIC_ADDRESS_ALLOCATION = no
MANAGEMENT_START_ADDRESS = 192.168.204.102
MANAGEMENT_END_ADDRESS = 192.168.204.199
MANAGEMENT_MULTICAST_SUBNET = 239.1.1.0/28
[cINFRA]
INFRASTRUCTURE_MTU = 1500
INFRASTRUCTURE_LINK_CAPACITY = None
INFRASTRUCTURE_SUBNET = 192.168.205.0/24
LAG_INFRASTRUCTURE_INTERFACE = no
INFRASTRUCTURE_INTERFACE = eth2
INFRASTRUCTURE_INTERFACE_NAME = eth2
CONTROLLER_0_INFRASTRUCTURE_ADDRESS = 192.168.205.103
CONTROLLER_1_INFRASTRUCTURE_ADDRESS = 192.168.205.104
NFS_INFRASTRUCTURE_ADDRESS_1 = 192.168.205.105
INFRASTRUCTURE_START_ADDRESS = 192.168.205.102
INFRASTRUCTURE_END_ADDRESS = 192.168.205.199
[cEXT_OAM]
EXTERNAL_OAM_MTU = 1500
EXTERNAL_OAM_SUBNET = 10.10.10.0/24
LAG_EXTERNAL_OAM_INTERFACE = no
EXTERNAL_OAM_INTERFACE = eth1
EXTERNAL_OAM_INTERFACE_NAME = eth1
EXTERNAL_OAM_GATEWAY_ADDRESS = 10.10.10.1
EXTERNAL_OAM_FLOATING_ADDRESS = 10.10.10.2
EXTERNAL_OAM_0_ADDRESS = 10.10.10.3
EXTERNAL_OAM_1_ADDRESS = 10.10.10.4
[cNETWORK]
VSWITCH_TYPE = avs
[cREGION]
REGION_CONFIG = True
REGION_1_NAME = RegionOne
REGION_2_NAME = RegionTwo
ADMIN_USER_NAME = admin
ADMIN_USER_DOMAIN = admin_domain
ADMIN_PROJECT_NAME = admin
ADMIN_PROJECT_DOMAIN = admin_domain
SERVICE_PROJECT_NAME = FULL_TEST
KEYSTONE_SERVICE_NAME = keystone
KEYSTONE_SERVICE_TYPE = identity
GLANCE_USER_NAME = glanceTWO
GLANCE_PASSWORD = password2WO*
GLANCE_SERVICE_NAME = glance
GLANCE_SERVICE_TYPE = image
GLANCE_CACHED = False
GLANCE_REGION = RegionTwo
NOVA_USER_NAME = novaTWO
NOVA_PASSWORD = password2WO*
NOVA_SERVICE_NAME = nova
NOVA_SERVICE_TYPE = compute
PLACEMENT_USER_NAME = placement
PLACEMENT_PASSWORD = password2WO*
PLACEMENT_SERVICE_NAME = placement
PLACEMENT_SERVICE_TYPE = placement
NEUTRON_USER_NAME = neutronTWO
NEUTRON_PASSWORD = password2WO*
NEUTRON_REGION_NAME = RegionTwo
NEUTRON_SERVICE_NAME = neutron
NEUTRON_SERVICE_TYPE = network
CEILOMETER_USER_NAME = ceilometerTWO
CEILOMETER_PASSWORD = password2WO*
CEILOMETER_SERVICE_NAME = ceilometer
CEILOMETER_SERVICE_TYPE = metering
PATCHING_USER_NAME = patchingTWO
PATCHING_PASSWORD = password2WO*
SYSINV_USER_NAME = sysinvTWO
SYSINV_PASSWORD = password2WO*
SYSINV_SERVICE_NAME = sysinv
SYSINV_SERVICE_TYPE = platform
HEAT_USER_NAME = heatTWO
HEAT_PASSWORD = password2WO*
HEAT_ADMIN_USER_NAME = heat_stack_adminTWO
HEAT_ADMIN_PASSWORD = password2WO*
AODH_USER_NAME = aodhTWO
AODH_PASSWORD = password2WO*
NFV_USER_NAME = vimTWO
NFV_PASSWORD = password2WO*
MTCE_USER_NAME = mtceTWO
MTCE_PASSWORD = password2WO*
PANKO_USER_NAME = pankoTWO
PANKO_PASSWORD = password2WO*
USER_DOMAIN_NAME = service_domain
PROJECT_DOMAIN_NAME = service_domain
KEYSTONE_AUTH_URI = http://192.168.204.12:8081/keystone/main/v2.0
KEYSTONE_IDENTITY_URI = http://192.168.204.12:8081/keystone/admin/v2.0
KEYSTONE_ADMIN_URI = http://192.168.204.12:8081/keystone/admin/v2.0
KEYSTONE_INTERNAL_URI = http://192.168.204.12:8081/keystone/main/v2.0
KEYSTONE_PUBLIC_URI = http://10.10.10.2:8081/keystone/main/v2.0
HEAT_ADMIN_DOMAIN_NAME = heat
[cAUTHENTICATION]
ADMIN_PASSWORD = Li69nux*


@@ -0,0 +1,118 @@
[SYSTEM]
SYSTEM_MODE = duplex
[STORAGE]
[LOGICAL_INTERFACE_1]
LAG_INTERFACE=N
;LAG_MODE=
INTERFACE_MTU=1500
INTERFACE_PORTS=eth0
[LOGICAL_INTERFACE_2]
LAG_INTERFACE=N
;LAG_MODE=
INTERFACE_MTU=1500
INTERFACE_PORTS=eth1
[LOGICAL_INTERFACE_3]
LAG_INTERFACE=N
;LAG_MODE=
INTERFACE_MTU=1500
INTERFACE_PORTS=eth2
[MGMT_NETWORK]
VLAN=121
IP_START_ADDRESS=192.168.204.102
IP_END_ADDRESS=192.168.204.199
CIDR=192.168.204.0/24
MULTICAST_CIDR=239.1.1.0/28
;GATEWAY=192.168.204.12
LOGICAL_INTERFACE=LOGICAL_INTERFACE_1
DYNAMIC_ALLOCATION=N
[INFRA_NETWORK]
;VLAN=124
IP_START_ADDRESS=192.168.205.102
IP_END_ADDRESS=192.168.205.199
CIDR=192.168.205.0/24
LOGICAL_INTERFACE=LOGICAL_INTERFACE_3
[OAM_NETWORK]
;VLAN=
IP_START_ADDRESS=10.10.10.2
IP_END_ADDRESS=10.10.10.99
CIDR=10.10.10.0/24
GATEWAY=10.10.10.1
LOGICAL_INTERFACE=LOGICAL_INTERFACE_2
[REGION2_PXEBOOT_NETWORK]
PXEBOOT_CIDR=192.168.203.0/24
[SHARED_SERVICES]
REGION_NAME=RegionOne
ADMIN_PROJECT_NAME=admin
ADMIN_USER_NAME=admin
ADMIN_PASSWORD=Li69nux*
KEYSTONE_ADMINURL=http://192.168.204.12:35357/v2.0
KEYSTONE_SERVICE_NAME=keystone
KEYSTONE_SERVICE_TYPE=identity
SERVICE_PROJECT_NAME=FULL_TEST
GLANCE_SERVICE_NAME=glance
GLANCE_SERVICE_TYPE=image
CINDER_SERVICE_NAME=cinder
CINDER_SERVICE_TYPE=volume
CINDER_V2_SERVICE_NAME=cinderv2
CINDER_V2_SERVICE_TYPE=volumev2
CINDER_V3_SERVICE_NAME=cinderv3
CINDER_V3_SERVICE_TYPE=volumev3
[REGION_2_SERVICES]
REGION_NAME=RegionTwo
NOVA_USER_NAME=novaTWO
NOVA_PASSWORD=password2WO*
NOVA_SERVICE_NAME=nova
NOVA_SERVICE_TYPE=compute
PLACEMENT_USER_NAME=placement
PLACEMENT_PASSWORD=password2WO*
PLACEMENT_SERVICE_NAME=placement
PLACEMENT_SERVICE_TYPE=placement
NOVA_V3_SERVICE_NAME=novav3
NOVA_V3_SERVICE_TYPE=computev3
NEUTRON_USER_NAME=neutronTWO
NEUTRON_PASSWORD=password2WO*
NEUTRON_SERVICE_NAME=neutron
NEUTRON_SERVICE_TYPE=network
SYSINV_USER_NAME=sysinvTWO
SYSINV_PASSWORD=password2WO*
SYSINV_SERVICE_NAME=sysinv
SYSINV_SERVICE_TYPE=platform
PATCHING_USER_NAME=patchingTWO
PATCHING_PASSWORD=password2WO*
PATCHING_SERVICE_NAME=patching
PATCHING_SERVICE_TYPE=patching
HEAT_USER_NAME=heatTWO
HEAT_PASSWORD=password2WO*
HEAT_ADMIN_DOMAIN=heat
HEAT_ADMIN_USER_NAME=heat_stack_adminTWO
HEAT_ADMIN_PASSWORD=password2WO*
HEAT_SERVICE_NAME=heat
HEAT_SERVICE_TYPE=orchestration
HEAT_CFN_SERVICE_NAME=heat-cfn
HEAT_CFN_SERVICE_TYPE=cloudformation
CEILOMETER_USER_NAME=ceilometerTWO
CEILOMETER_PASSWORD=password2WO*
CEILOMETER_SERVICE_NAME=ceilometer
CEILOMETER_SERVICE_TYPE=metering
NFV_USER_NAME=vimTWO
NFV_PASSWORD=password2WO*
AODH_USER_NAME=aodhTWO
AODH_PASSWORD=password2WO*
MTCE_USER_NAME=mtceTWO
MTCE_PASSWORD=password2WO*
PANKO_USER_NAME=pankoTWO
PANKO_PASSWORD=password2WO*
[VERSION]
RELEASE = 18.03


@@ -0,0 +1,123 @@
[cSYSTEM]
TIMEZONE = UTC
SYSTEM_MODE = duplex
[cPXEBOOT]
PXEBOOT_SUBNET = 192.168.203.0/24
CONTROLLER_PXEBOOT_FLOATING_ADDRESS = 192.168.203.2
CONTROLLER_PXEBOOT_ADDRESS_0 = 192.168.203.3
CONTROLLER_PXEBOOT_ADDRESS_1 = 192.168.203.4
PXECONTROLLER_FLOATING_HOSTNAME = pxecontroller
[cMGMT]
MANAGEMENT_MTU = 1500
MANAGEMENT_LINK_CAPACITY = None
MANAGEMENT_SUBNET = 192.168.204.0/24
LAG_MANAGEMENT_INTERFACE = no
MANAGEMENT_INTERFACE = eth0
MANAGEMENT_VLAN = 121
MANAGEMENT_INTERFACE_NAME = eth0.121
CONTROLLER_FLOATING_ADDRESS = 192.168.204.102
CONTROLLER_0_ADDRESS = 192.168.204.103
CONTROLLER_1_ADDRESS = 192.168.204.104
NFS_MANAGEMENT_ADDRESS_1 = 192.168.204.105
CONTROLLER_FLOATING_HOSTNAME = controller
CONTROLLER_HOSTNAME_PREFIX = controller-
OAMCONTROLLER_FLOATING_HOSTNAME = oamcontroller
DYNAMIC_ADDRESS_ALLOCATION = no
MANAGEMENT_START_ADDRESS = 192.168.204.102
MANAGEMENT_END_ADDRESS = 192.168.204.199
MANAGEMENT_MULTICAST_SUBNET = 239.1.1.0/28
[cINFRA]
INFRASTRUCTURE_MTU = 1500
INFRASTRUCTURE_LINK_CAPACITY = None
INFRASTRUCTURE_SUBNET = 192.168.205.0/24
LAG_INFRASTRUCTURE_INTERFACE = no
INFRASTRUCTURE_INTERFACE = eth2
INFRASTRUCTURE_INTERFACE_NAME = eth2
CONTROLLER_0_INFRASTRUCTURE_ADDRESS = 192.168.205.103
CONTROLLER_1_INFRASTRUCTURE_ADDRESS = 192.168.205.104
NFS_INFRASTRUCTURE_ADDRESS_1 = 192.168.205.105
INFRASTRUCTURE_START_ADDRESS = 192.168.205.102
INFRASTRUCTURE_END_ADDRESS = 192.168.205.199
[cEXT_OAM]
EXTERNAL_OAM_MTU = 1500
EXTERNAL_OAM_SUBNET = 10.10.10.0/24
LAG_EXTERNAL_OAM_INTERFACE = no
EXTERNAL_OAM_INTERFACE = eth1
EXTERNAL_OAM_INTERFACE_NAME = eth1
EXTERNAL_OAM_GATEWAY_ADDRESS = 10.10.10.1
EXTERNAL_OAM_FLOATING_ADDRESS = 10.10.10.2
EXTERNAL_OAM_0_ADDRESS = 10.10.10.3
EXTERNAL_OAM_1_ADDRESS = 10.10.10.4
[cNETWORK]
VSWITCH_TYPE = avs
[cREGION]
REGION_CONFIG = True
REGION_1_NAME = RegionOne
REGION_2_NAME = RegionTwo
ADMIN_USER_NAME = admin
ADMIN_USER_DOMAIN = Default
ADMIN_PROJECT_NAME = admin
ADMIN_PROJECT_DOMAIN = Default
SERVICE_PROJECT_NAME = FULL_TEST
KEYSTONE_SERVICE_NAME = keystone
KEYSTONE_SERVICE_TYPE = identity
GLANCE_SERVICE_NAME = glance
GLANCE_SERVICE_TYPE = image
GLANCE_CACHED = False
GLANCE_REGION = RegionOne
NOVA_USER_NAME = novaTWO
NOVA_PASSWORD = password2WO*
NOVA_SERVICE_NAME = nova
NOVA_SERVICE_TYPE = compute
PLACEMENT_USER_NAME = placement
PLACEMENT_PASSWORD = password2WO*
PLACEMENT_SERVICE_NAME = placement
PLACEMENT_SERVICE_TYPE = placement
NEUTRON_USER_NAME = neutronTWO
NEUTRON_PASSWORD = password2WO*
NEUTRON_REGION_NAME = RegionTwo
NEUTRON_SERVICE_NAME = neutron
NEUTRON_SERVICE_TYPE = network
CEILOMETER_USER_NAME = ceilometerTWO
CEILOMETER_PASSWORD = password2WO*
CEILOMETER_SERVICE_NAME = ceilometer
CEILOMETER_SERVICE_TYPE = metering
PATCHING_USER_NAME = patchingTWO
PATCHING_PASSWORD = password2WO*
SYSINV_USER_NAME = sysinvTWO
SYSINV_PASSWORD = password2WO*
SYSINV_SERVICE_NAME = sysinv
SYSINV_SERVICE_TYPE = platform
HEAT_USER_NAME = heatTWO
HEAT_PASSWORD = password2WO*
HEAT_ADMIN_USER_NAME = heat_stack_adminTWO
HEAT_ADMIN_PASSWORD = password2WO*
AODH_USER_NAME = aodhTWO
AODH_PASSWORD = password2WO*
NFV_USER_NAME = vimTWO
NFV_PASSWORD = password2WO*
MTCE_USER_NAME = mtceTWO
MTCE_PASSWORD = password2WO*
PANKO_USER_NAME = pankoTWO
PANKO_PASSWORD = password2WO*
USER_DOMAIN_NAME = Default
PROJECT_DOMAIN_NAME = Default
KEYSTONE_AUTH_URI = http://192.168.204.12:8081/keystone/main/v2.0
KEYSTONE_IDENTITY_URI = http://192.168.204.12:8081/keystone/admin/v2.0
KEYSTONE_ADMIN_URI = http://192.168.204.12:8081/keystone/admin/v2.0
KEYSTONE_INTERNAL_URI = http://192.168.204.12:8081/keystone/main/v2.0
KEYSTONE_PUBLIC_URI = http://10.10.10.2:8081/keystone/main/v2.0
GLANCE_ADMIN_URI = http://192.168.204.12:9292/v2
GLANCE_PUBLIC_URI = http://10.10.10.2:9292/v2
GLANCE_INTERNAL_URI = http://192.168.204.12:9292/v2
HEAT_ADMIN_DOMAIN_NAME = heat
[cAUTHENTICATION]
ADMIN_PASSWORD = Li69nux*


@@ -0,0 +1 @@
# Dummy certificate file


@@ -0,0 +1,78 @@
[cSYSTEM]
# System Configuration
SYSTEM_MODE=duplex
TIMEZONE=UTC
[cPXEBOOT]
# PXEBoot Network Support Configuration
PXECONTROLLER_FLOATING_HOSTNAME=pxecontroller
[cMGMT]
# Management Network Configuration
MANAGEMENT_INTERFACE_NAME=eth1
MANAGEMENT_INTERFACE=eth1
MANAGEMENT_MTU=1500
MANAGEMENT_LINK_CAPACITY=1000
MANAGEMENT_SUBNET=192.168.204.0/24
LAG_MANAGEMENT_INTERFACE=no
CONTROLLER_FLOATING_ADDRESS=192.168.204.2
CONTROLLER_0_ADDRESS=192.168.204.3
CONTROLLER_1_ADDRESS=192.168.204.4
NFS_MANAGEMENT_ADDRESS_1=192.168.204.7
CONTROLLER_FLOATING_HOSTNAME=controller
CONTROLLER_HOSTNAME_PREFIX=controller-
OAMCONTROLLER_FLOATING_HOSTNAME=oamcontroller
DYNAMIC_ADDRESS_ALLOCATION=yes
MANAGEMENT_MULTICAST_SUBNET=239.1.1.0/28
[cINFRA]
# Infrastructure Network Configuration
INFRASTRUCTURE_INTERFACE_NAME=eth2
INFRASTRUCTURE_INTERFACE=eth2
INFRASTRUCTURE_VLAN=
INFRASTRUCTURE_MTU=1500
INFRASTRUCTURE_LINK_CAPACITY=1000
INFRASTRUCTURE_SUBNET=192.168.205.0/24
LAG_INFRASTRUCTURE_INTERFACE=no
CONTROLLER_0_INFRASTRUCTURE_ADDRESS=192.168.205.3
CONTROLLER_1_INFRASTRUCTURE_ADDRESS=192.168.205.4
NFS_INFRASTRUCTURE_ADDRESS_1=192.168.205.7
CONTROLLER_INFRASTRUCTURE_HOSTNAME_SUFFIX=-infra
INFRASTRUCTURE_START_ADDRESS=192.168.205.2
INFRASTRUCTURE_END_ADDRESS=192.168.205.254
[cEXT_OAM]
# External OAM Network Configuration
EXTERNAL_OAM_INTERFACE_NAME=eth0
EXTERNAL_OAM_INTERFACE=eth0
EXTERNAL_OAM_VLAN=NC
EXTERNAL_OAM_MTU=1500
LAG_EXTERNAL_OAM_INTERFACE=no
EXTERNAL_OAM_SUBNET=10.10.10.0/24
EXTERNAL_OAM_GATEWAY_ADDRESS=10.10.10.1
EXTERNAL_OAM_FLOATING_ADDRESS=10.10.10.2
EXTERNAL_OAM_0_ADDRESS=10.10.10.3
EXTERNAL_OAM_1_ADDRESS=10.10.10.4
[cNETWORK]
# Data Network Configuration
VSWITCH_TYPE=avs
NEUTRON_L2_PLUGIN=ml2
NEUTRON_L2_AGENT=vswitch
NEUTRON_L3_EXT_BRIDGE=provider
NEUTRON_ML2_MECHANISM_DRIVERS=vswitch,sriovnicswitch
NEUTRON_ML2_TYPE_DRIVERS=managed_flat,managed_vlan,managed_vxlan
NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan,vxlan
NEUTRON_ML2_SRIOV_AGENT_REQUIRED=False
NEUTRON_HOST_DRIVER=neutron.plugins.wrs.drivers.host.DefaultHostDriver
NEUTRON_FM_DRIVER=neutron.plugins.wrs.drivers.fm.DefaultFmDriver
NEUTRON_NETWORK_SCHEDULER=neutron.scheduler.dhcp_host_agent_scheduler.HostChanceScheduler
NEUTRON_ROUTER_SCHEDULER=neutron.scheduler.l3_host_agent_scheduler.HostChanceScheduler
[cSECURITY]
[cREGION]
# Region Configuration
REGION_CONFIG=False
[cAUTHENTICATION]
ADMIN_PASSWORD=Li69nux*


@@ -0,0 +1,84 @@
[cSYSTEM]
# System Configuration
SYSTEM_MODE=duplex
TIMEZONE=UTC
[cPXEBOOT]
# PXEBoot Network Support Configuration
PXECONTROLLER_FLOATING_HOSTNAME=pxecontroller
[cMGMT]
# Management Network Configuration
MANAGEMENT_INTERFACE_NAME=eth1
MANAGEMENT_INTERFACE=eth1
MANAGEMENT_MTU=1500
MANAGEMENT_LINK_CAPACITY=1000
MANAGEMENT_SUBNET=192.168.204.0/24
LAG_MANAGEMENT_INTERFACE=no
CONTROLLER_FLOATING_ADDRESS=192.168.204.2
CONTROLLER_0_ADDRESS=192.168.204.3
CONTROLLER_1_ADDRESS=192.168.204.4
NFS_MANAGEMENT_ADDRESS_1=192.168.204.5
NFS_MANAGEMENT_ADDRESS_2=192.168.204.6
CONTROLLER_FLOATING_HOSTNAME=controller
CONTROLLER_HOSTNAME_PREFIX=controller-
OAMCONTROLLER_FLOATING_HOSTNAME=oamcontroller
DYNAMIC_ADDRESS_ALLOCATION=yes
MANAGEMENT_MULTICAST_SUBNET=239.1.1.0/28
[cINFRA]
# Infrastructure Network Configuration
INFRASTRUCTURE_INTERFACE_NAME=NC
INFRASTRUCTURE_INTERFACE=NC
INFRASTRUCTURE_VLAN=NC
INFRASTRUCTURE_MTU=NC
INFRASTRUCTURE_LINK_CAPACITY=NC
INFRASTRUCTURE_SUBNET=NC
LAG_INFRASTRUCTURE_INTERFACE=no
INFRASTRUCTURE_BOND_MEMBER_0=NC
INFRASTRUCTURE_BOND_MEMBER_1=NC
INFRASTRUCTURE_BOND_POLICY=NC
CONTROLLER_0_INFRASTRUCTURE_ADDRESS=NC
CONTROLLER_1_INFRASTRUCTURE_ADDRESS=NC
NFS_INFRASTRUCTURE_ADDRESS_1=NC
STORAGE_0_INFRASTRUCTURE_ADDRESS=NC
STORAGE_1_INFRASTRUCTURE_ADDRESS=NC
CONTROLLER_INFRASTRUCTURE_HOSTNAME_SUFFIX=NC
INFRASTRUCTURE_START_ADDRESS=NC
INFRASTRUCTURE_END_ADDRESS=NC
[cEXT_OAM]
# External OAM Network Configuration
EXTERNAL_OAM_INTERFACE_NAME=eth0
EXTERNAL_OAM_INTERFACE=eth0
EXTERNAL_OAM_VLAN=NC
EXTERNAL_OAM_MTU=1500
LAG_EXTERNAL_OAM_INTERFACE=no
EXTERNAL_OAM_SUBNET=10.10.10.0/24
EXTERNAL_OAM_GATEWAY_ADDRESS=10.10.10.1
EXTERNAL_OAM_FLOATING_ADDRESS=10.10.10.2
EXTERNAL_OAM_0_ADDRESS=10.10.10.3
EXTERNAL_OAM_1_ADDRESS=10.10.10.4
[cNETWORK]
# Data Network Configuration
VSWITCH_TYPE=avs
NEUTRON_L2_PLUGIN=ml2
NEUTRON_L2_AGENT=vswitch
NEUTRON_L3_EXT_BRIDGE=provider
NEUTRON_ML2_MECHANISM_DRIVERS=vswitch,sriovnicswitch
NEUTRON_ML2_TYPE_DRIVERS=managed_flat,managed_vlan,managed_vxlan
NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan,vxlan
NEUTRON_ML2_SRIOV_AGENT_REQUIRED=False
NEUTRON_HOST_DRIVER=neutron.plugins.wrs.drivers.host.DefaultHostDriver
NEUTRON_FM_DRIVER=neutron.plugins.wrs.drivers.fm.DefaultFmDriver
NEUTRON_NETWORK_SCHEDULER=neutron.scheduler.dhcp_host_agent_scheduler.HostChanceScheduler
NEUTRON_ROUTER_SCHEDULER=neutron.scheduler.l3_host_agent_scheduler.HostChanceScheduler
[cSECURITY]
[cREGION]
# Region Configuration
REGION_CONFIG=False
[cAUTHENTICATION]
ADMIN_PASSWORD=Li69nux*


@@ -0,0 +1,84 @@
[cSYSTEM]
# System Configuration
SYSTEM_MODE=duplex
TIMEZONE=UTC
[cPXEBOOT]
# PXEBoot Network Support Configuration
PXECONTROLLER_FLOATING_HOSTNAME=pxecontroller
[cMGMT]
# Management Network Configuration
MANAGEMENT_INTERFACE_NAME=eth1
MANAGEMENT_INTERFACE=eth1
MANAGEMENT_MTU=1500
MANAGEMENT_LINK_CAPACITY=1000
MANAGEMENT_SUBNET=1234::/64
LAG_MANAGEMENT_INTERFACE=no
CONTROLLER_FLOATING_ADDRESS=1234::2
CONTROLLER_0_ADDRESS=1234::3
CONTROLLER_1_ADDRESS=1234::4
NFS_MANAGEMENT_ADDRESS_1=1234::5
NFS_MANAGEMENT_ADDRESS_2=1234::6
CONTROLLER_FLOATING_HOSTNAME=controller
CONTROLLER_HOSTNAME_PREFIX=controller-
OAMCONTROLLER_FLOATING_HOSTNAME=oamcontroller
DYNAMIC_ADDRESS_ALLOCATION=yes
MANAGEMENT_MULTICAST_SUBNET=ff08::1:1:0/124
[cINFRA]
# Infrastructure Network Configuration
INFRASTRUCTURE_INTERFACE_NAME=NC
INFRASTRUCTURE_INTERFACE=NC
INFRASTRUCTURE_VLAN=NC
INFRASTRUCTURE_MTU=NC
INFRASTRUCTURE_LINK_CAPACITY=NC
INFRASTRUCTURE_SUBNET=NC
LAG_INFRASTRUCTURE_INTERFACE=no
INFRASTRUCTURE_BOND_MEMBER_0=NC
INFRASTRUCTURE_BOND_MEMBER_1=NC
INFRASTRUCTURE_BOND_POLICY=NC
CONTROLLER_0_INFRASTRUCTURE_ADDRESS=NC
CONTROLLER_1_INFRASTRUCTURE_ADDRESS=NC
NFS_INFRASTRUCTURE_ADDRESS_1=NC
STORAGE_0_INFRASTRUCTURE_ADDRESS=NC
STORAGE_1_INFRASTRUCTURE_ADDRESS=NC
CONTROLLER_INFRASTRUCTURE_HOSTNAME_SUFFIX=NC
INFRASTRUCTURE_START_ADDRESS=NC
INFRASTRUCTURE_END_ADDRESS=NC
[cEXT_OAM]
# External OAM Network Configuration
EXTERNAL_OAM_INTERFACE_NAME=eth0
EXTERNAL_OAM_INTERFACE=eth0
EXTERNAL_OAM_VLAN=NC
EXTERNAL_OAM_MTU=1500
LAG_EXTERNAL_OAM_INTERFACE=no
EXTERNAL_OAM_SUBNET=abcd::/64
EXTERNAL_OAM_GATEWAY_ADDRESS=abcd::1
EXTERNAL_OAM_FLOATING_ADDRESS=abcd::2
EXTERNAL_OAM_0_ADDRESS=abcd::3
EXTERNAL_OAM_1_ADDRESS=abcd::4
[cNETWORK]
# Data Network Configuration
VSWITCH_TYPE=avs
NEUTRON_L2_PLUGIN=ml2
NEUTRON_L2_AGENT=vswitch
NEUTRON_L3_EXT_BRIDGE=provider
NEUTRON_ML2_MECHANISM_DRIVERS=vswitch,sriovnicswitch
NEUTRON_ML2_TYPE_DRIVERS=managed_flat,managed_vlan,managed_vxlan
NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan,vxlan
NEUTRON_ML2_SRIOV_AGENT_REQUIRED=False
NEUTRON_HOST_DRIVER=neutron.plugins.wrs.drivers.host.DefaultHostDriver
NEUTRON_FM_DRIVER=neutron.plugins.wrs.drivers.fm.DefaultFmDriver
NEUTRON_NETWORK_SCHEDULER=neutron.scheduler.dhcp_host_agent_scheduler.HostChanceScheduler
NEUTRON_ROUTER_SCHEDULER=neutron.scheduler.l3_host_agent_scheduler.HostChanceScheduler
[cSECURITY]
[cREGION]
# Region Configuration
REGION_CONFIG=False
[cAUTHENTICATION]
ADMIN_PASSWORD=Li69nux*
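This last variant is the IPv6 form of the same lab config: the management subnet becomes `1234::/64` and the OAM subnet `abcd::/64`, with addresses allocated inside them. A minimal sketch, using Python 3's stdlib `ipaddress` module, of the kind of subnet/address containment check a configuration tool would apply to values such as MANAGEMENT_SUBNET and CONTROLLER_FLOATING_ADDRESS above (the check itself is illustrative, not lifted from the source):

```python
import ipaddress

# Values taken from the [cMGMT] section of the IPv6 config above.
subnet = ipaddress.ip_network('1234::/64')
floating = ipaddress.ip_address('1234::2')

print(floating in subnet)   # the floating address must lie in the subnet
print(subnet.version)       # 6
```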

Some files were not shown because too many files have changed in this diff.