Retire repo and note new content in openstack/osops

Change-Id: Ibf7eab00a55cda9423663feb2dc3feea8ac3778a
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Author: Sean McGinnis <sean.mcginnis@gmail.com>
Date: 2020-09-10 20:00:52 -05:00
Parent: b74cbb8e33
Commit: 126bd4af7f
GPG Key ID: CE7EE4BFAF8D70C8 (no known key found for this signature in database)
179 changed files with 10 additions and 17392 deletions

.gitignore

@@ -1,11 +0,0 @@
/onvm/conf/nodes.conf.yml
/onvm/conf/ids.conf.yml
/onvm/conf/hosts
/onvm/lampstack/openrc
*.out
*/**/*.log
*/**/.DS_Store
*/**/._
*/**/*.tfstate*
.tox
site.retry

LICENSE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,35 +1,12 @@
==================================
osops-tools-contrib
==================================
This project is no longer maintained. Its content has now moved to the
https://opendev.org/openstack/osops repo, and further development will
continue there.
This is not being tested on any deployment.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
This repository is a location for Operators to upload useful scripts and tooling
for the general Operating Community to use with their OpenStack Clouds.
Its contents are untested and unverified.
For more details on how to contribute, please follow the Gerrit git-review
process described at http://docs.openstack.org/infra/manual/developers.html.
If you would like some curated, tested, and verified code please look to the
`osops-tools-generic <https://github.com/openstack/osops-tools-generic>`_ repository.
Please see the wiki page at https://wiki.openstack.org/wiki/Osops#Overview_moving_code
for more details about how code is promoted up to the generic repo.
Please remember USE AT YOUR OWN RISK.
The `nova/` directory has useful tools and scripts for nova.
The `glance/` directory has useful tools and scripts for glance.
The `neutron/` directory has useful tools and scripts for neutron.
The `multi/` directory has useful tools and scripts that span multiple projects.
Licensing
---------
All contributions will be licensed under the Apache 2.0 License unless you
state otherwise. Please see the LICENSE file for details about the Apache 2.0
License.
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,4 +0,0 @@
*.out
*/**/*.log
*/**/.DS_Store
*/**/._

@@ -1,131 +0,0 @@
# Docker Swarm Ansible deployments on OpenStack Cloud
## Status
This will install a 3 node Docker Swarm cluster. Once the script finishes, a
set of environment variables will be displayed; export these environment
variables and you can then run docker commands against the swarm.
## Requirements
- [Install Ansible](http://docs.ansible.com/ansible/intro_installation.html)
- [Install OpenStack Shade](http://docs.openstack.org/infra/shade/installation.html)
- Make sure there is a CoreOS image available on your cloud.
- Clone this project into a directory.
- To run docker commands, you will need to install the docker client. Follow
  the steps below if you are running the script on Ubuntu; if you are
  running the script in some other environment, the docker client setup
  steps may differ::
apt-get update
apt-get -y install docker.io
ln -sf /usr/bin/docker.io /usr/local/bin/docker
## Ansible
Ansible and OpenStack Shade are used to provision all of the OpenStack
resources.
### Prep
#### Deal with ssh keys for OpenStack Authentication
If you do not have an ssh key, create one; an example command to do that is
provided below. Once you have a key pair, ensure your local ssh-agent is
running and your ssh key has been added. This step is required; otherwise
you will be prompted for the passphrase when the script runs, and the script
can fail. If you really do not want to deal with a passphrase, you can
create a key pair without one::
ssh-keygen -t rsa
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
#### General OpenStack Settings
Ansible's OpenStack cloud module is used to provision compute resources
against an OpenStack cloud. Before you run the script, the cloud environment
will have to be specified. Sample files have been provided in the vars
directory.
You may create one such file per cloud for your tests. The following is an
example::
auth: {
auth_url: "http://x.x.x.x:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
image_name: "coreos",
private_net_name: "",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: True,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}
The values of these variables should be provided by your cloud provider. When
using the keystone v2.0 API, you will not need to set a domain name. If your
account has more than one region available, specify the region_name to be
used; if there is only one, you can leave it blank or use the correct name.
If your cloud does not expose tenant networks, leave private_net_name blank
as well. However, if your cloud supports tenant networks and you have more
than one tenant network in your account, you will need to specify which
tenant network to use; otherwise the script will error out. To create a
larger docker swarm, increase swarm_size to a larger value such as 20, and
the script will create a docker swarm with 20 coreos nodes. You can also
disable server certificate verification (validate_certs) if your server
uses a self-signed certificate.
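For example, a keystone v2.0 vars file can omit domain_name entirely. A
minimal sketch (the endpoint, username, and project name are placeholders,
not values from this repo)::

```yaml
# Sketch of a keystone v2.0 auth section -- no domain_name is needed.
# auth_url, username, and project_name below are placeholders.
auth: {
  auth_url: "http://x.x.x.x:5000/v2.0",
  username: "demo",
  password: "{{ password }}",
  project_name: "demo"
}
```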
## Run the script
With your cloud environment set, you should be able to run the script::
ansible-playbook -e "action=apply env=leap password=XXXXX" site.yml
The command will stand up the nodes using a cloud named leap (vars/leap.yml).
To run against another cloud, create a new file with the same structure,
specify that cloud's attributes such as auth_url, and replace the word leap
with that file name. Replace XXXXX with your own cloud account password; you
can also simply put your password in the configuration file (vars/leap.yml
in this case) and avoid specifying it on the command line.
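Since site.yml loads vars/{{ env }}.yml, the password can live in the vars
file itself. A sketch (keep such a file out of version control and readable
only by you)::

```yaml
# vars/leap.yml (sketch): defining password here lets you run
#   ansible-playbook -e "action=apply env=leap" site.yml
# without passing the password on the command line.
password: "XXXXX"
```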
If everything goes well, it will accomplish the following::
1. Provision 3 coreos nodes on your cloud
2. Create security group
3. Add security rules to allow ping, ssh, docker access
4. Setup ssl keys, certificates
5. Display a set of environment variables that you can use to run docker
commands
## Next Steps
### Check it's up
If there are no errors, you can export the environment variables shown by
the script at the end. Then you can start running docker commands; here are
a few examples::
docker info
docker images
docker pull ubuntu:vivid
## Cleanup
Once you're done with the swarm, don't forget to nuke the whole thing::
ansible-playbook -e "action=destroy env=leap password=XXXXX" site.yml
The above command will destroy all the resources created by the script.

@@ -1,3 +0,0 @@
[defaults]
inventory = ./hosts
host_key_checking=False

@@ -1 +0,0 @@
cloud ansible_host=127.0.0.1 ansible_python_interpreter=python

@@ -1,19 +0,0 @@
---
- debug:
msg: >-
export DOCKER_HOST=tcp://{{ hostvars.swarmnode1.swarmnode.openstack.public_v4 }}:2375;
export DOCKER_TLS_VERIFY=1;
export DOCKER_CERT_PATH=/tmp/{{ env }}/keys
when: hostvars.swarmnode1.swarmnode.openstack.public_v4 != ""
- debug:
msg: >-
export DOCKER_HOST=tcp://{{ hostvars.swarmnode1.swarmnode.openstack.private_v4 }}:2375;
export DOCKER_TLS_VERIFY=1;
export DOCKER_CERT_PATH=/tmp/{{ env }}/keys
when: hostvars.swarmnode1.swarmnode.openstack.public_v4 == ""
- debug:
msg: >-
The work load test started at {{ starttime.time }},
ended at {{ ansible_date_time.time }}

@@ -1,27 +0,0 @@
---
- name: Remove security group
os_security_group:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: dockerswarm_sg
description: security group for dockerswarm
- name: Delete discovery url directory
file: path="/tmp/{{ env }}" state=absent
- name: Delete a key-pair
os_keypair:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "dockerswarm"
- debug:
msg: >-
The work load test started at {{ starttime.time }},
ended at {{ ansible_date_time.time }}

@@ -1,96 +0,0 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"
- name: Create certificate directory
file: path="/tmp/{{ env }}/keys" state=directory
- stat: path="/tmp/{{ env }}/discovery_url"
register: discovery_url_flag
- name: Get docker discovery url
get_url:
url: "https://discovery.etcd.io/new?size={{ app_env.swarm_size }}"
dest: "/tmp/{{ env }}/discovery_url"
when: discovery_url_flag.stat.exists == false
- shell: openssl genrsa -out "/tmp/{{ env }}/keys/ca-key.pem" 2048
- shell: openssl genrsa -out "/tmp/{{ env }}/keys/key.pem" 2048
- shell: >-
openssl req -x509 -new -nodes -key /tmp/{{ env }}/keys/ca-key.pem
-days 10000 -out /tmp/{{ env }}/keys/ca.pem -subj '/CN=docker-CA'
- shell: >-
openssl req -new -key /tmp/{{ env }}/keys/key.pem
-out /tmp/{{ env }}/keys/cert.csr
-subj '/CN=docker-client' -config ./roles/prov_apply/templates/openssl.cnf
- shell: >-
openssl x509 -req -in /tmp/{{ env }}/keys/cert.csr
-CA /tmp/{{ env }}/keys/ca.pem -CAkey /tmp/{{ env }}/keys/ca-key.pem
-CAcreateserial -out /tmp/{{ env }}/keys/cert.pem -days 365
-extensions v3_req -extfile ./roles/prov_apply/templates/openssl.cnf
- name: Retrieve specified flavor
os_flavor_facts:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "{{ app_env.flavor_name }}"
- name: Create a key-pair
os_keypair:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "dockerswarm"
public_key_file: "{{ app_env.public_key_file }}"
- name: Create security group
os_security_group:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: dockerswarm_sg
description: security group for dockerswarm
- name: Add security rules
os_security_group_rule:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
security_group: dockerswarm_sg
protocol: "{{ item.protocol }}"
direction: "{{ item.dir }}"
port_range_min: "{{ item.p_min }}"
port_range_max: "{{ item.p_max }}"
remote_ip_prefix: 0.0.0.0/0
with_items:
- { p_min: 22, p_max: 22, dir: ingress, protocol: tcp }
- { p_min: 2375, p_max: 2376, dir: ingress, protocol: tcp }
- { p_min: 2379, p_max: 2380, dir: ingress, protocol: tcp }
- { p_min: 2379, p_max: 2380, dir: egress, protocol: tcp }
- { p_min: -1, p_max: -1, dir: ingress, protocol: icmp }
- { p_min: -1, p_max: -1, dir: egress, protocol: icmp }
- name: Create cloudinit file for all nodes
template:
src: templates/cloudinit.j2
dest: "/tmp/{{ env }}/cloudinit"
- name: Add nodes to host group
add_host:
name: "swarmnode{{ item }}"
hostname: "127.0.0.1"
groups: dockerswarm
host_no: "{{ item }}"
with_sequence: count={{ app_env.swarm_size }}
no_log: True

@@ -1,47 +0,0 @@
#cloud-config
coreos:
units:
- name: etcd.service
mask: true
- name: etcd2.service
command: start
- name: docker.service
command: start
- name: swarm-agent.service
content: |
[Unit]
Description=swarm agent
Requires=docker.service
After=docker.service
[Service]
EnvironmentFile=/etc/environment
TimeoutStartSec=20m
ExecStartPre=/usr/bin/docker pull swarm:latest
ExecStartPre=-/usr/bin/docker rm -f swarm-agent
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name swarm-agent swarm:latest join --addr=$COREOS_PRIVATE_IPV4:2376 etcd://$COREOS_PRIVATE_IPV4:2379/docker"
ExecStop=/usr/bin/docker stop swarm-agent
- name: swarm-manager.service
content: |
[Unit]
Description=swarm manager
Requires=docker.service
After=docker.service
[Service]
EnvironmentFile=/etc/environment
TimeoutStartSec=20m
ExecStartPre=/usr/bin/docker pull swarm:latest
ExecStartPre=-/usr/bin/docker rm -f swarm-manager
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name swarm-manager -v /etc/docker/ssl:/etc/docker/ssl --net=host swarm:latest manage --tlsverify --tlscacert=/etc/docker/ssl/ca.pem --tlscert=/etc/docker/ssl/cert.pem --tlskey=/etc/docker/ssl/key.pem etcd://$COREOS_PRIVATE_IPV4:2379/docker"
ExecStop=/usr/bin/docker stop swarm-manager
etcd2:
discovery: {{ lookup('file', '/tmp/'+env+'/discovery_url') }}
advertise-client-urls: http://$private_ipv4:2379
initial-advertise-peer-urls: http://$private_ipv4:2380
listen-client-urls: http://0.0.0.0:2379
listen-peer-urls: http://$private_ipv4:2380
data-dir: /var/lib/etcd2
initial-cluster-token: openstackinterop
update:
reboot-strategy: "off"

@@ -1,13 +0,0 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"
- name: Add web servers to webservers host group
add_host:
name: "swarmnode{{ item }}"
hostname: "127.0.0.1"
groups: dockerswarm
host_no: "{{ item }}"
with_sequence: count={{ app_env.swarm_size }}
no_log: True

@@ -1,39 +0,0 @@
---
- name: Get public IP
set_fact: node_ip="{{ swarmnode.openstack.public_v4 }}"
when: swarmnode.openstack.public_v4 != ""
- name: Get private IP
set_fact: node_ip="{{ swarmnode.openstack.private_v4 }}"
when: swarmnode.openstack.public_v4 == ""
- name: Make certificate configuration file
copy:
src: templates/openssl.cnf
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/"
- name: Make service file
template:
src: templates/dockerservice.j2
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/dockerservice.cnf"
- name: Create bootstrap file
template:
src: templates/bootstrap1.j2
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/bootstrap.sh"
when: swarmnode.openstack.private_v4 == ""
- name: Create bootstrap file
template:
src: templates/bootstrap2.j2
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/bootstrap.sh"
when: swarmnode.openstack.private_v4 != ""
- name: Transfer configuration
shell: scp -r "/tmp/{{ env }}/{{ node_ip }}/keys" "core@{{ node_ip }}:/home/core"
- name: Transfer certificate file over to the nodes
shell: scp -r "/tmp/{{ env }}/keys" "core@{{ node_ip }}:/home/core"
- name: Start services
shell: ssh "core@{{ node_ip }}" "sh keys/bootstrap.sh"

@@ -1,31 +0,0 @@
mkdir -p /home/core/.docker
cp /home/core/keys/ca.pem /home/core/.docker/
cp /home/core/keys/cert.pem /home/core/.docker/
cp /home/core/keys/key.pem /home/core/.docker/
echo 'subjectAltName = @alt_names' >> /home/core/keys/openssl.cnf
echo '[alt_names]' >> /home/core/keys/openssl.cnf
cd /home/core/keys
echo 'IP.1 = {{ swarmnode.openstack.public_v4 }}' >> openssl.cnf
echo 'DNS.1 = {{ app_env.fqdn }}' >> openssl.cnf
echo 'DNS.2 = {{ swarmnode.openstack.public_v4 }}.xip.io' >> openssl.cnf
openssl req -new -key key.pem -out cert.csr -subj '/CN=docker-client' -config openssl.cnf
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -days 365 -extensions v3_req -extfile openssl.cnf
sudo mkdir -p /etc/docker/ssl
sudo cp ca.pem /etc/docker/ssl/
sudo cp cert.pem /etc/docker/ssl/
sudo cp key.pem /etc/docker/ssl/
# Apply localized settings to services
sudo mkdir -p /etc/systemd/system/{docker,swarm-agent,swarm-manager}.service.d
sudo mv /home/core/keys/dockerservice.cnf /etc/systemd/system/docker.service.d/10-docker-service.conf
sudo systemctl daemon-reload
sudo systemctl restart docker.service
sudo systemctl start swarm-agent.service
sudo systemctl start swarm-manager.service

@@ -1,32 +0,0 @@
mkdir -p /home/core/.docker
cp /home/core/keys/ca.pem /home/core/.docker/
cp /home/core/keys/cert.pem /home/core/.docker/
cp /home/core/keys/key.pem /home/core/.docker/
echo 'subjectAltName = @alt_names' >> /home/core/keys/openssl.cnf
echo '[alt_names]' >> /home/core/keys/openssl.cnf
cd /home/core/keys
echo 'IP.1 = {{ swarmnode.openstack.private_v4 }}' >> openssl.cnf
echo 'IP.2 = {{ swarmnode.openstack.public_v4 }}' >> openssl.cnf
echo 'DNS.1 = {{ app_env.fqdn }}' >> openssl.cnf
echo 'DNS.2 = {{ swarmnode.openstack.public_v4 }}.xip.io' >> openssl.cnf
openssl req -new -key key.pem -out cert.csr -subj '/CN=docker-client' -config openssl.cnf
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -days 365 -extensions v3_req -extfile openssl.cnf
sudo mkdir -p /etc/docker/ssl
sudo cp ca.pem /etc/docker/ssl/
sudo cp cert.pem /etc/docker/ssl/
sudo cp key.pem /etc/docker/ssl/
# Apply localized settings to services
sudo mkdir -p /etc/systemd/system/{docker,swarm-agent,swarm-manager}.service.d
sudo mv /home/core/keys/dockerservice.cnf /etc/systemd/system/docker.service.d/10-docker-service.conf
sudo systemctl daemon-reload
sudo systemctl restart docker.service
sudo systemctl start swarm-agent.service
sudo systemctl start swarm-manager.service

@@ -1,2 +0,0 @@
[Service]
Environment="DOCKER_OPTS=-H=0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert=/etc/docker/ssl/ca.pem --tlscert=/etc/docker/ssl/cert.pem --tlskey=/etc/docker/ssl/key.pem --cluster-advertise {{app_env.net_device}}:2376 --cluster-store etcd://127.0.0.1:2379/docker"

@@ -1,8 +0,0 @@
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth

@@ -1,14 +0,0 @@
---
- name: Remove docker swarm nodes
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: docker-swarm-{{ host_no }}
key_name: "dockerswarm"
timeout: 200
security_groups: dockerswarm_sg
meta:
hostname: docker-swarm-{{ host_no }}

@@ -1,21 +0,0 @@
---
- name: Create docker swarm nodes
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: docker-swarm-{{ host_no }}
image: "{{ app_env.image_name }}"
key_name: "dockerswarm"
timeout: 200
flavor: "{{ hostvars.cloud.openstack_flavors[0].id }}"
network: "{{ app_env.private_net_name }}"
auto_ip: yes
userdata: "{{ lookup('file', '/tmp/' +env+ '/cloudinit') }}"
security_groups: dockerswarm_sg
meta:
hostname: docker-swarm-{{ host_no }}
register: swarmnode

@@ -1,33 +0,0 @@
---
- name: prepare for provision
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "prep_{{ action }}"
- name: provision swarm nodes
hosts: dockerswarm
serial: 1
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "vm_{{ action }}"
- name: setup swarm nodes
hosts: dockerswarm
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "prov_{{ action }}"
- name: post provisioning
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "post_{{ action }}"

@@ -1,23 +0,0 @@
---
horizon_url: "https://salesdemo-sjc.openstack.blueboxgrid.com"
auth: {
auth_url: "https://salesdemo-sjc.openstack.blueboxgrid.com:5000/v2.0",
username: "litong01",
password: "{{ password }}",
project_name: "Interop"
}
app_env: {
image_name: "coreos",
private_net_name: "interopnet",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
region_name: "",
availability_zone: "",
validate_certs: True,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

@@ -1,21 +0,0 @@
---
horizon_url: "https://iad2.dreamcompute.com"
auth: {
auth_url: "https://iad2.dream.io:5000/v2.0",
username: "stemaf4",
password: "{{ password }}",
project_name: "dhc2131831"
}
app_env: {
region_name: "RegionOne",
image_name: "CoreOS Sept16",
private_net_name: "",
flavor_name: "gp1.subsonic",
public_key_file: "/home/reed/.ssh/id_rsa.pub",
swarm_version: "latest",
swarm_size: 3,
fqdn: "swarm.example.com",
net_device: "eth0",
}

@@ -1,24 +0,0 @@
---
horizon_url: "http://9.30.217.9"
auth: {
auth_url: "http://9.30.217.9:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
image_name: "coreos",
private_net_name: "Bluebox",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: False,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

@@ -1,24 +0,0 @@
---
horizon_url: "https://cloud1.osic.org"
auth: {
auth_url: "https://cloud1.osic.org:5000/v3",
username: "litong01",
password: "{{ password }}",
domain_name: "default",
project_name: "interop_challenge"
}
app_env: {
image_name: "coreos",
private_net_name: "interopnet",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
region_name: "",
availability_zone: "",
validate_certs: True,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

@@ -1,23 +0,0 @@
---
horizon_url: "https://horizon.cloud.ovh.net"
auth: {
auth_url: "https://auth.cloud.ovh.net/v2.0",
username: "SXYbmFhC4aqQ",
password: "{{ password }}",
project_name: "2487610196015734"
}
app_env: {
image_name: "coreos",
private_net_name: "",
net_device: "eth0",
flavor_name: "eg-15-ssd",
swarm_version: "latest",
swarm_size: 3,
region_name: "BHS1",
availability_zone: "",
validate_certs: True,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

@@ -1,6 +0,0 @@
*.out
vars/*
*/**/*.log
*/**/.DS_Store
*/**/._
*/**/*.tfstate*

@@ -1,141 +0,0 @@
# LAMPstack Ansible deployments on OpenStack Cloud
## Status
This will install a 4-node LAMP stack. The first node is used as a load
balancer running HAProxy, the second node is a database node, and the
remaining two nodes are web servers. If you need more nodes, simply
increase the number of nodes in the configuration; all added nodes will
be used as web servers.
Once the script finishes, a URL will be displayed at the end for verification.
## Requirements
- [Install Ansible](http://docs.ansible.com/ansible/intro_installation.html)
- [Install openstack shade](http://docs.openstack.org/infra/shade/installation.html)
- Make sure there is an Ubuntu cloud image available on your cloud.
- Clone this project into a directory.
## Ansible
Ansible and OpenStack Shade will be used to provision all of the OpenStack
resources required by LAMP stack.
### Prep
#### Deal with ssh keys for OpenStack Authentication
If you do not have an ssh key, create one; an example command is shown
below. Once you have a key pair, ensure your local ssh-agent is running
and your ssh key has been added. This step is required: if you skip it,
you will have to enter the passphrase manually while the script runs,
and the script can fail. If you really do not want to deal with a
passphrase, create a key pair without one::
ssh-keygen -t rsa
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
#### General OpenStack Settings
Ansible's OpenStack cloud modules are used to provision compute resources
against an OpenStack cloud. Before you run the script, the cloud
environment must be specified. Sample files are provided in the vars
directory; you may create one such file per cloud for your tests.
auth: {
auth_url: "http://x.x.x.x:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
image_name: "ubuntu-15.04",
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: True,
private_net_name: "my_tenant_net",
flavor_name: "m1.small",
public_key_file: "/home/tong/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
config_drive: no,
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}
It is also possible to provide download URLs for wordpress and associated
utilities, supporting use of this module in environments with limited
outbound network access to the Internet (defaults shown below):
app_env: {
...
wp_latest: 'https://wordpress.org/latest.tar.gz',
wp_cli: 'https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar',
wp_importer: 'http://downloads.wordpress.org/plugin/wordpress-importer.0.6.3.zip'
}
The values of these variables should be provided by your cloud provider.
When using the keystone v2.0 API, you do not need to set up a domain
name. You can leave region_name empty if you have just one region, and
private_net_name empty if your cloud does not support tenant networks or
you have only one tenant network; private_net_name is only needed when
there are multiple tenant networks. validate_certs should normally be
set to True when your cloud uses TLS (SSL) and is not using a
self-signed certificate. If your cloud uses a self-signed certificate,
Ansible cannot easily validate it, so you can skip validation by setting
the parameter to False.
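For example, a hypothetical vars file for a keystone v2.0 cloud could
look like the sketch below: domain_name is omitted, region_name and
private_net_name are left empty, and validate_certs is set to False for
a cloud with a self-signed certificate. The auth_url, credentials, and
image name here are placeholders, not real values:

```yaml
auth: {
  auth_url: "http://x.x.x.x:5000/v2.0",
  username: "demo",
  password: "{{ password }}",
  project_name: "demo"
}
app_env: {
  image_name: "ubuntu-15.04",
  region_name: "",
  private_net_name: "",
  validate_certs: False,
  flavor_name: "m1.small",
  public_key_file: "/home/demo/.ssh/id_rsa.pub",
  stack_size: 4,
  volume_size: 2,
  block_device_name: "/dev/vdb",
  wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
  wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}
```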
## Provision the LAMP stack
With your cloud environment set, you should be able to run the script::
ansible-playbook -e "action=apply env=leap password=XXXXX" site.yml
The command will stand up the nodes using a cloud named leap
(vars/leap.yml). To run the test against another cloud, create a new
file with the same structure and that cloud's attributes (auth_url,
etc.), then replace the word leap with the new file name. Replace XXXXX
with your own password.
If everything goes well, it will accomplish the following::
1. Provision 4 nodes
2. Create a security group
3. Add security rules to allow ping, ssh, mysql and nfs access
4. Create a cinder volume
5. Attach the cinder volume to the database node for the wordpress
   database and content
6. Set up NFS on the database node so that the web servers can share the
   cinder volume space; all wordpress content is saved on the cinder
   volume, which ensures that the multiple web servers serve the same
   content
7. Set up mysql to use the space provided by the cinder volume
8. Configure and initialize wordpress
9. Install and activate a wordpress theme specified by the configuration
   file
10. Install the wordpress importer plugin
11. Import sample wordpress content
12. Remove floating IPs from servers which do not need them
## Next Steps
### Check it's up
If there are no errors, you can use the IP addresses of the web servers
to access wordpress. The very first time, you will be asked to answer a
few questions. Once that is done, you will have a fully functional
wordpress site running.
## Cleanup
Once you're done with it, don't forget to nuke the whole thing::
ansible-playbook -e "action=destroy env=leap password=XXXXX" site.yml
The above command will destroy all the resources created.

@@ -1,3 +0,0 @@
[defaults]
inventory = ./hosts
host_key_checking = False

@@ -1,7 +0,0 @@
---
db_user: "wpdbuser"
db_pass: "{{ lookup('password',
'/tmp/sqlpassword chars=ascii_letters,digits length=8') }}"
proxy_env: {
}

@@ -1 +0,0 @@
cloud ansible_host=127.0.0.1 ansible_python_interpreter=python

@@ -1,194 +0,0 @@
---
- name: Get start timestamp
set_fact:
starttime: "{{ ansible_date_time }}"
- name: Retrieve specified flavor
os_flavor_facts:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "{{ app_env.flavor_name }}"
- name: Create a key-pair
os_keypair:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "lampstack"
public_key_file: "{{ app_env.public_key_file }}"
- name: Create volume
os_volume:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
size: "{{ app_env.volume_size }}"
wait: yes
display_name: db_volume
- name: Create security group
os_security_group:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: lampstack_sg
description: security group for lampstack
- name: Add security rules
os_security_group_rule:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
security_group: lampstack_sg
protocol: "{{ item.protocol }}"
direction: "{{ item.dir }}"
port_range_min: "{{ item.p_min }}"
port_range_max: "{{ item.p_max }}"
remote_ip_prefix: 0.0.0.0/0
with_items:
- { p_min: 22, p_max: 22, dir: ingress, protocol: tcp }
- { p_min: 80, p_max: 80, dir: ingress, protocol: tcp }
- { p_min: 2049, p_max: 2049, dir: ingress, protocol: tcp }
- { p_min: 2049, p_max: 2049, dir: egress, protocol: tcp }
- { p_min: 3306, p_max: 3306, dir: ingress, protocol: tcp }
- { p_min: -1, p_max: -1, dir: ingress, protocol: icmp }
- { p_min: -1, p_max: -1, dir: egress, protocol: icmp }
- name: Create database node
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: database
image: "{{ app_env.image_name }}"
key_name: "lampstack"
timeout: 200
flavor: "{{ app_env.flavor_name }}"
network: "{{ app_env.private_net_name }}"
userdata: "{{ lookup('file', 'templates/userdata.j2') }}"
config_drive: "{{ app_env.config_drive | default('no') }}"
security_groups: lampstack_sg
floating_ip_pools: "{{ app_env.public_net_name | default(omit) }}"
meta:
hostname: database
register: database
- name: Add database node to the dbservers host group
add_host:
name: "{{ database.openstack.public_v4 }}"
groups: dbservers
when: database.openstack.public_v4 != ""
- name: Add database node to the dbservers host group
add_host:
name: "{{ database.openstack.private_v4 }}"
groups: dbservers
when: database.openstack.public_v4 == ""
- name: Create balancer node
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: balancer
image: "{{ app_env.image_name }}"
key_name: "lampstack"
timeout: 200
flavor: "{{ app_env.flavor_name }}"
network: "{{ app_env.private_net_name }}"
userdata: "{{ lookup('file', 'templates/userdata.j2') }}"
config_drive: "{{ app_env.config_drive | default('no') }}"
security_groups: lampstack_sg
floating_ip_pools: "{{ app_env.public_net_name | default(omit) }}"
meta:
hostname: balancer
register: balancer
- name: Add balancer node to the balancers host group
add_host:
name: "{{ balancer.openstack.public_v4 }}"
groups: balancers
when: balancer.openstack.public_v4 != ""
- name: Add balancer node to the balancers host group
add_host:
name: "{{ balancer.openstack.private_v4 }}"
groups: balancers
when: balancer.openstack.public_v4 == ""
- name: Create a volume for database to save data
os_server_volume:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
server: database
volume: db_volume
device: "{{ app_env.block_device_name }}"
- name: Create web server nodes to host application
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: apache-{{ item }}
image: "{{ app_env.image_name }}"
key_name: "lampstack"
timeout: 200
flavor: "{{ app_env.flavor_name }}"
network: "{{ app_env.private_net_name }}"
floating_ip_pools: "{{ app_env.public_net_name | default(omit) }}"
userdata: "{{ lookup('file', 'templates/userdata.j2') }}"
config_drive: "{{ app_env.config_drive | default('no') }}"
security_groups: lampstack_sg
meta:
hostname: apache-{{ item }}
with_sequence: count={{ app_env.stack_size - 2 }}
register: webserver
- name: Add web servers to webservers host group
add_host:
name: "{{ item.openstack.public_v4 }}"
groups: webservers
when: item.openstack.public_v4 != ""
with_items: "{{ webserver.results }}"
no_log: True
- name: Add web servers to webservers host group
add_host:
name: "{{ item.openstack.private_v4 }}"
groups: webservers
when: item.openstack.public_v4 == ""
with_items: "{{ webserver.results }}"
no_log: True
- name: Add one web servers to wps host group
add_host:
name: "{{ webserver.results[0].openstack.public_v4 }}"
groups: wps
when: webserver.results[0].openstack.public_v4 != ""
no_log: True
- name: Add one web servers to wps host group
add_host:
name: "{{ webserver.results[0].openstack.private_v4 }}"
groups: wps
when: webserver.results[0].openstack.public_v4 == ""
no_log: True

@@ -1,4 +0,0 @@
#cloud-config
runcmd:
- addr=$(ip -4 -o addr | grep -v '127.0.0.1' | awk 'NR==1{print $4}' | cut -d '/' -f 1)
- echo $addr `hostname` >> /etc/hosts

@@ -1,53 +0,0 @@
---
- name: Haproxy install
package:
name="{{ item }}"
state=latest
update_cache=yes
with_items:
- haproxy
when: ansible_distribution == 'Ubuntu'
- name: Haproxy install
package:
name="{{ item }}"
state=latest
with_items:
- haproxy
when: ansible_distribution == 'Fedora'
- name: Enable haproxy service
replace:
dest: /etc/default/haproxy
regexp: "ENABLED=0"
replace: "ENABLED=1"
backup: no
when: ansible_distribution == 'Ubuntu'
- name: Place the haproxy configuration file
copy:
src: templates/haproxy.cfg.j2
dest: /etc/haproxy/haproxy.cfg
owner: root
group: root
when: ansible_distribution == 'Ubuntu'
- name: Place the haproxy configuration file
copy:
src: templates/haproxy_fedora.cfg.j2
dest: /etc/haproxy/haproxy.cfg
owner: root
group: root
when: ansible_distribution == 'Fedora'
- name: Add web servers to the haproxy
lineinfile:
dest: /etc/haproxy/haproxy.cfg
line: " server ws{{ item[0].openstack[item[1]] }} {{ item[0].openstack[item[1]] }}:80 check"
with_nested:
- "{{ hostvars.cloud.webserver.results }}"
- ["private_v4", "public_v4"]
when: item[0].openstack[item[1]] != ''
no_log: True
- service: name=haproxy state=restarted enabled=yes

@@ -1,33 +0,0 @@
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
retries 3
contimeout 5000
clitimeout 50000
srvtimeout 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
listen webfarm 0.0.0.0:80
mode http
stats enable
stats uri /haproxy?stats
balance roundrobin
option httpclose
option forwardfor

@@ -1,34 +0,0 @@
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
retries 3
contimeout 5000
clitimeout 50000
srvtimeout 50000
errorfile 400 /usr/share/haproxy/400.http
errorfile 403 /usr/share/haproxy/403.http
errorfile 408 /usr/share/haproxy/408.http
errorfile 500 /usr/share/haproxy/500.http
errorfile 502 /usr/share/haproxy/502.http
errorfile 503 /usr/share/haproxy/503.http
errorfile 504 /usr/share/haproxy/504.http
listen webfarm
bind 0.0.0.0:80
mode http
stats enable
stats uri /haproxy?stats
balance roundrobin
option httpclose
option forwardfor

@@ -1,23 +0,0 @@
---
- os_floating_ip:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
state: absent
floating_ip_address: "{{ database.openstack.public_v4 }}"
server: "{{ database.openstack.name }}"
when: database.openstack.private_v4 != ""
no_log: True
- os_floating_ip:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
state: absent
floating_ip_address: "{{ item.openstack.public_v4 }}"
server: "{{ item.openstack.name }}"
with_items: "{{ webserver.results }}"
when: item.openstack.private_v4 != ""
no_log: True

@@ -1,19 +0,0 @@
---
- name: Wait until server is up and running
local_action: wait_for port=22 host="{{ ansible_ssh_host | default(inventory_hostname) }}" search_regex=OpenSSH delay=10
become: no
- name: Check if running on Fedora
raw: "[ -f /etc/fedora-release ]"
register: fedora_release
ignore_errors: yes
- name: Install python2 for Ansible
raw: dnf install -y python2 python2-dnf libselinux-python
register: result
until: result|success
when: fedora_release.rc == 0
- name: Set SELinux to permissive
selinux: policy=targeted state=permissive
when: fedora_release.rc == 0

@@ -1,164 +0,0 @@
---
- stat: path=/tmp/diskflag
register: diskflag
- name: update apt cache
apt: update_cache=yes
when: ansible_os_family == "Debian"
- name: install scsitools
package: name=scsitools state=latest
when: ansible_distribution == 'Ubuntu'
- name: install sg3_utils
package: name=sg3_utils state=latest
when: ansible_distribution == 'Fedora'
- shell: /sbin/rescan-scsi-bus
when: diskflag.stat.exists == false and ansible_distribution == 'Ubuntu'
- shell: /bin/rescan-scsi-bus.sh
when: diskflag.stat.exists == false and ansible_distribution == 'Fedora'
- shell: parted -s "{{ app_env.block_device_name }}" mklabel msdos
when: diskflag.stat.exists == false
- shell: parted -s "{{ app_env.block_device_name }}" mkpart primary ext4 1049kb 100%
when: diskflag.stat.exists == false
- lineinfile: dest=/tmp/diskflag line="disk is now partitioned!" create=yes
- filesystem: fstype=ext4 dev="{{ app_env.block_device_name }}1"
- mount: name=/storage src="{{ app_env.block_device_name }}1" fstype=ext4 state=mounted
- shell: ip -4 -o addr | grep -v '127.0.0.1' | awk 'NR==1{print $4}' | cut -d '/' -f 1
register: local_ip
- name: Creates share directory for database
file: path=/storage/sqldatabase state=directory
- name: Creates share directory for wpcontent
file: path=/storage/wpcontent state=directory
- name: Creates directory for database mounting point
file: path=/var/lib/mysql state=directory
- name: Install NFS server
package:
name=nfs-kernel-server
state=latest
update_cache=yes
when: ansible_distribution == 'Ubuntu'
- name: Install NFS server
package: name=nfs-utils state=latest
when: ansible_distribution == 'Fedora'
- name: Setup NFS database access
lineinfile:
dest: /etc/exports
line: "/storage/sqldatabase {{ local_ip.stdout }}/32(rw,sync,no_root_squash,no_subtree_check)"
- name: Setup NFS webserver access
lineinfile:
dest: /etc/exports
line: "/storage/wpcontent {{ item[0].openstack[item[1]] }}/32(rw,sync,no_root_squash,no_subtree_check)"
with_nested:
- "{{ hostvars.cloud.webserver.results }}"
- ["private_v4", "public_v4"]
when: item[0].openstack[item[1]] != ''
no_log: True
- name: nfs export
shell: exportfs -a
- service: name=nfs-kernel-server state=restarted enabled=yes
when: ansible_distribution == 'Ubuntu'
- service: name=nfs-server state=restarted enabled=yes
when: ansible_distribution == 'Fedora'
- name: Mount the database data directory
mount:
name: /var/lib/mysql
src: "{{ local_ip.stdout }}:/storage/sqldatabase"
state: mounted
fstype: nfs
- name: Install mysql and libraries
package:
name="{{ item }}"
state=latest
update_cache=yes
with_items:
- mysql-server
- python-mysqldb
when: ansible_distribution == 'Ubuntu'
- name: Install mysql and libraries
package:
name="{{ item }}"
state=latest
with_items:
- mariadb-server
- python2-mysql
when: ansible_distribution == 'Fedora'
- service: name=mysql state=stopped enabled=yes
when: ansible_distribution == 'Ubuntu'
- service: name=mariadb state=stopped enabled=yes
when: ansible_distribution == 'Fedora'
- stat: path=/etc/mysql/my.cnf
register: mysqlflag
- name: Configure mysql 5.5
replace:
dest: "/etc/mysql/my.cnf"
regexp: '^bind-address[ \t]*=[ ]*127\.0\.0\.1'
replace: "bind-address = {{ local_ip.stdout }}"
backup: no
when: mysqlflag.stat.exists == true
- stat: path=/etc/mysql/mysql.conf.d/mysqld.cnf
register: mysqlflag
- name: Configure mysql 5.6+
replace:
dest: "/etc/mysql/mysql.conf.d/mysqld.cnf"
replace: "bind-address = {{ local_ip.stdout }}"
backup: no
when: mysqlflag.stat.exists == true
- stat: path=/etc/my.cnf
register: mariadbflag
- name: Configure MariaDB 10.1
ini_file:
dest=/etc/my.cnf
section=mysqld
option=bind-address
value={{ local_ip.stdout }}
when: mariadbflag.stat.exists == true
- service: name=mysql state=started enabled=yes
when: ansible_distribution == 'Ubuntu'
- service: name=mariadb state=started enabled=yes
when: ansible_distribution == 'Fedora'
- name: create wordpress database
mysql_db:
name: "decision2016"
state: "{{ item }}"
with_items:
- ['present', 'absent', 'present']
- name: Add a user
mysql_user:
name: "{{ db_user }}"
password: "{{ db_pass }}"
host: "%"
priv: 'decision2016.*:ALL'
state: present

@@ -1,79 +0,0 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"
- name: Delete key pairs
os_keypair:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "lampstack"
public_key_file: "{{ app_env.public_key_file }}"
- name: Delete database node
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: database
image: "{{ app_env.image_name }}"
key_name: "lampstack"
timeout: 200
network: "{{ app_env.private_net_name }}"
meta:
hostname: database
- name: Delete balancer node
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: balancer
image: "{{ app_env.image_name }}"
key_name: "lampstack"
timeout: 200
network: "{{ app_env.private_net_name }}"
meta:
hostname: balancer
- name: Delete web server nodes
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: apache-{{ item }}
image: "{{ app_env.image_name }}"
key_name: "lampstack"
timeout: 200
network: "{{ app_env.private_net_name }}"
meta:
hostname: apache-{{ item }}
with_sequence: count={{ app_env.stack_size - 2 }}
- name: Delete security group
os_security_group:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: lampstack_sg
description: security group for lampstack
- name: Delete cinder volume
os_volume:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
wait: yes
display_name: db_volume

@@ -1,147 +0,0 @@
---
- name: Apache and php 5
package:
name="{{ item }}"
state=latest
update_cache=yes
with_items:
- apache2
- php5
- php5-mysql
- nfs-common
- unzip
- ssmtp
when: ansible_distribution == 'Ubuntu'
- name: Apache and php 5
package:
name="{{ item }}"
state=latest
with_items:
- httpd
- php
- php-mysqlnd
- nfs-utils
- unzip
- ssmtp
when: ansible_distribution == 'Fedora'
- shell: rm -rf /var/www/html/index.html
args:
warn: no
- name: Creates share directory for wpcontent
file:
path: /var/www/html/wp-content/uploads
state: directory
owner: www-data
group: www-data
when: ansible_distribution == 'Ubuntu'
- name: Creates share directory for wpcontent
file:
path: /var/www/html/wp-content/uploads
state: directory
owner: apache
group: apache
when: ansible_distribution == 'Fedora'
- name: Mount the directory using private IP
mount:
name: /var/www/html/wp-content/uploads
src: "{{ hostvars.cloud.database.openstack.private_v4 }}:/storage/wpcontent"
state: mounted
fstype: nfs
when: hostvars.cloud.database.openstack.private_v4 != ""
- name: Mount the directory using public IP
mount:
name: /var/www/html/wp-content/uploads
src: "{{ hostvars.cloud.database.openstack.public_v4 }}:/storage/wpcontent"
state: mounted
fstype: nfs
when: hostvars.cloud.database.openstack.private_v4 == ""
- lineinfile: dest=/etc/apache2/apache2.conf line="ServerName localhost"
when: ansible_distribution == 'Ubuntu'
- lineinfile: dest=/etc/httpd/conf/httpd.conf line="ServerName localhost"
when: ansible_distribution == 'Fedora'
- name: Download wordpress
get_url:
url: "{{ app_env.wp_latest | default('https://wordpress.org/latest.tar.gz') }}"
dest: /var/www/latest.tar.gz
- name: Unpack latest wordpress
shell: tar -xf /var/www/latest.tar.gz -C /var/www/html --strip-components=1
args:
warn: no
- name: Create wordpress configuration
shell: cp /var/www/html/wp-config-sample.php /var/www/html/wp-config.php
args:
warn: no
- name: Configure wordpress database, username and password
replace:
dest: /var/www/html/wp-config.php
regexp: "'{{ item.then }}'"
replace: "'{{ item.now }}'"
backup: no
with_items:
- { then: 'database_name_here', now: 'decision2016' }
- { then: 'username_here', now: "{{ db_user }}" }
- { then: 'password_here', now: "{{ db_pass }}" }
- name: Configure wordpress network access using private IP
replace:
dest: /var/www/html/wp-config.php
regexp: "'localhost'"
replace: "'{{ hostvars.cloud.database.openstack.private_v4 }}'"
backup: no
when: hostvars.cloud.database.openstack.private_v4 != ""
- name: Configure wordpress network access using public IP
replace:
dest: /var/www/html/wp-config.php
regexp: "'localhost'"
replace: "'{{ hostvars.cloud.database.openstack.public_v4 }}'"
backup: no
when: hostvars.cloud.database.openstack.private_v4 == ""
- name: Change ownership of wordpress
shell: chown -R www-data:www-data /var/www/html
args:
warn: no
when: ansible_distribution == 'Ubuntu'
- name: Change ownership of wordpress
shell: chown -R apache:apache /var/www/html
args:
warn: no
when: ansible_distribution == 'Fedora'
- service: name=apache2 state=restarted enabled=yes
when: ansible_distribution == 'Ubuntu'
- service: name=httpd state=restarted enabled=yes
when: ansible_distribution == 'Fedora'
- name: Install wordpress command line tool
get_url:
url: "{{ app_env.wp_cli | default('https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar') }}"
dest: /usr/local/bin/wp
mode: "a+x"
force: no
- name: Download a wordpress theme
get_url:
url: "{{ app_env.wp_theme }}"
dest: /tmp/wptheme.zip
force: yes
- name: Install the theme
shell: unzip -o -q /tmp/wptheme.zip -d /var/www/html/wp-content/themes
args:
warn: no

@@ -1,73 +0,0 @@
---
- name: Install wordpress
command: >
wp core install --path=/var/www/html
--url="http://{{ hostvars.cloud.balancer.openstack.public_v4 }}"
--title='OpenStack Interop Challenge'
--admin_user=wpuser
--admin_password="{{ db_pass }}"
--admin_email='interop@openstack.org'
when: hostvars.cloud.balancer.openstack.public_v4 != ""
- name: Install wordpress
command: >
wp core install --path=/var/www/html
--url="http://{{ hostvars.cloud.balancer.openstack.private_v4 }}"
--title='OpenStack Interop Challenge'
--admin_user=wpuser
--admin_password="{{ db_pass }}"
--admin_email='interop@openstack.org'
when: hostvars.cloud.balancer.openstack.public_v4 == ""
- name: Activate wordpress theme
command: >
wp --path=/var/www/html theme activate
"{{ app_env.wp_theme.split('/').pop().split('.')[0] }}"
- name: Download wordpress importer plugin
get_url:
url: "{{ app_env.wp_importer | default('http://downloads.wordpress.org/plugin/wordpress-importer.0.6.3.zip') }}"
dest: "/tmp/wordpress-importer.zip"
force: "yes"
- name: Install wordpress importer plugin
command: >
sudo -u www-data wp --path=/var/www/html plugin install /tmp/wordpress-importer.zip --activate
args:
warn: "no"
when: ansible_distribution == 'Ubuntu'
- name: Install wordpress importer plugin
command: >
sudo -u apache /usr/local/bin/wp --path=/var/www/html plugin install /tmp/wordpress-importer.zip
args:
warn: "no"
when: ansible_distribution == 'Fedora'
- name: Enable wordpress importer plugin
command: >
sudo -u apache /usr/local/bin/wp --path=/var/www/html plugin activate wordpress-importer
args:
warn: "no"
when: ansible_distribution == 'Fedora'
- name: Download wordpress sample posts
get_url:
url: "{{ app_env.wp_posts }}"
dest: "/tmp/wpposts.zip"
force: "yes"
- name: Unpack the posts
command: unzip -o -q /tmp/wpposts.zip -d /tmp/posts
args:
warn: "no"
- name: Import wordpress posts
command: >
sudo -u www-data wp --path=/var/www/html import /tmp/posts/*.xml --authors=create --quiet
when: ansible_distribution == 'Ubuntu'
- name: Import wordpress posts
shell: >
sudo -u apache /usr/local/bin/wp --path=/var/www/html import /tmp/posts/*.xml --authors=create --quiet
when: ansible_distribution == 'Fedora'

@@ -1,96 +0,0 @@
---
- name: provision servers
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "{{ action }}"
- name: Install python2 for ansible to work
hosts: dbservers, webservers, balancers, wps
gather_facts: false
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
roles:
- common
environment: "{{ proxy_env }}"
- name: setup database
hosts: dbservers
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
roles:
- database
environment: "{{proxy_env}}"
- name: setup web servers
hosts: webservers
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
roles:
- webserver
environment: "{{proxy_env}}"
- name: setup load balancer servers
hosts: balancers
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
roles:
- balancer
environment: "{{proxy_env}}"
- name: install wordpress
hosts: wps
user: "{{ app_env.ssh_user }}"
vars_files:
- "vars/{{ env }}.yml"
roles:
- wordpress
environment: "{{proxy_env}}"
- name: clean up resources
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
tasks:
- include: "roles/cleaner/tasks/{{action}}.yml"
roles:
- cleaner
environment: "{{proxy_env}}"
- name: Inform the installer
hosts: cloud
connection: local
tasks:
- debug:
msg: >-
Access wordpress at
http://{{ hostvars.cloud.balancer.openstack.public_v4 }}.
wordpress userid is wpuser, password is {{ db_pass }}
when: hostvars.cloud.balancer is defined and
hostvars.cloud.balancer.openstack.public_v4 != ""
- debug:
msg: >-
Access wordpress at
http://{{ hostvars.cloud.balancer.openstack.private_v4 }}.
wordpress userid is wpuser, password is {{ db_pass }}
when: hostvars.cloud.balancer is defined and
hostvars.cloud.balancer.openstack.public_v4 == ""
- debug:
msg: >-
The work load test started at {{ hostvars.cloud.starttime.time }},
ended at {{ ansible_date_time.time }}

@@ -1,25 +0,0 @@
---
horizon_url: "https://salesdemo-sjc.openstack.blueboxgrid.com"
auth: {
auth_url: "https://salesdemo-sjc.openstack.blueboxgrid.com:5000/v2.0",
username: "litong01",
password: "{{ password }}",
project_name: "Interop"
}
app_env: {
ssh_user: "ubuntu",
image_name: "ubuntu-15.04",
region_name: "",
availability_zone: "",
validate_certs: True,
private_net_name: "interopnet",
flavor_name: "m1.small",
public_key_file: "/home/tong/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 10,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}

@@ -1,25 +0,0 @@
---
horizon_url: "https://iad2.dreamcompute.com"
auth: {
auth_url: "https://iad2.dream.io:5000/v2.0",
username: "stemaf4",
password: "{{ password }}",
project_name: "dhc2131831"
}
app_env: {
ssh_user: "ubuntu",
region_name: "RegionOne",
image_name: "Ubuntu-14.04",
private_net_name: "",
validate_certs: False,
availability_zone: "iad-2",
flavor_name: "gp1.supersonic",
public_key_file: "/home/reed/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 10,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}

@@ -1,26 +0,0 @@
---
horizon_url: "https://10.241.20.5:443"
auth: {
auth_url: "http://10.241.144.2:5000/v3",
username: "interop_admin",
password: "{{ password }}",
project_name: "interop",
domain_name: "Default"
}
app_env: {
image_name: "ubuntu-trusty",
region_name: "region1",
private_net_name: "private-net",
flavor_name: "m1.small",
public_key_file: "/home/ghe.rivero/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip",
validate_certs: False,
availability_zone: "nova"
}

@@ -1,26 +0,0 @@
---
horizon_url: "http://9.30.217.9"
auth: {
auth_url: "http://9.30.217.9:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
image_name: "ubuntu-15.04",
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: False,
ssh_user: "ubuntu",
private_net_name: "Bluebox",
flavor_name: "m1.small",
public_key_file: "/home/tong/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}


@@ -1,25 +0,0 @@
---
horizon_url: "https://cloud1.osic.org"
auth: {
auth_url: "https://cloud1.osic.org:5000/v3",
username: "litong01",
password: "{{ password }}",
domain_name: "default",
project_name: "interop_challenge"
}
app_env: {
image_name: "ubuntu-server-14.04",
region_name: "",
availability_zone: "nova",
validate_certs: True,
private_net_name: "interopnet",
flavor_name: "m1.small",
public_key_file: "/home/tong/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}


@@ -1,24 +0,0 @@
---
auth: {
auth_url: "https://iam.eu-de.otc.t-systems.com/v3",
username: "14610052 OTC00000000001000000447",
password: "{{ password }}",
domain_name: "eu-de",
project_name: "eu-de"
}
app_env: {
image_name: "Community_Ubuntu_14.04_TSI_20161004_0",
region_name: "",
availability_zone: "eu-de-01",
validate_certs: False,
private_net_name: "a45173e7-3c00-485f-b297-3bd73bd6d80b",
flavor_name: "computev1-1",
public_key_file: "/home/ubuntu/.ssh/id_rsa.pub",
ssh_user: "ubuntu",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/xvdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}


@@ -1,25 +0,0 @@
---
horizon_url: "https://horizon.cloud.ovh.net/"
auth: {
auth_url: "https://auth.cloud.ovh.net/v2.0",
username: "5sAcQ8EqamKq",
password: "{{ password }}",
project_name: "6987064600428478"
}
app_env: {
ssh_user: "ubuntu",
region_name: "SBG1",
image_name: "Ubuntu 14.04",
private_net_name: "Ext-Net",
validate_certs: True,
availability_zone: "nova",
flavor_name: "eg-15-app",
public_key_file: "/home/ubuntu/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 4,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}


@@ -1,41 +0,0 @@
---
# Copyright Red Hat, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
horizon_url: "https://x86.trystack.org/dashboard/"
auth: {
auth_url: "http://8.43.86.11:5000/v3",
username: "{{ lookup('env', 'OS_USERNAME') }}",
password: "{{ lookup('env', 'OS_PASSWORD') }}",
project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}",
domain_name: "default",
}
app_env: {
ssh_user: "ubuntu",
image_name: "ubuntu1404",
region_name: "regionOne",
availability_zone: "nova",
validate_certs: False,
private_net_name: "private",
flavor_name: "m1.small",
public_key_file: "/root/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}


@@ -1,68 +0,0 @@
ansible==2.1.2.0
appdirs==1.4.0
Babel==2.3.4
cffi==1.8.3
cliff==2.2.0
cmd2==0.6.9
cryptography==1.5.2
debtcollector==1.8.0
decorator==4.0.10
dogpile.cache==0.6.2
enum34==1.1.6
funcsigs==1.0.2
functools32==3.2.3.post2
futures==3.0.5
idna==2.1
ipaddress==1.0.17
iso8601==0.1.11
Jinja2==2.8
jsonpatch==1.14
jsonpointer==1.10
jsonschema==2.5.1
keystoneauth1==2.12.1
MarkupSafe==0.23
monotonic==1.2
msgpack-python==0.4.8
munch==2.0.4
netaddr==0.7.18
netifaces==0.10.5
openstacksdk==0.9.8
os-client-config==1.21.1
osc-lib==1.1.0
oslo.config==3.17.0
oslo.i18n==3.9.0
oslo.serialization==2.13.0
oslo.utils==3.16.0
paramiko==2.0.2
pbr==1.10.0
positional==1.1.1
prettytable==0.7.2
pyasn1==0.1.9
pycparser==2.14
pycrypto==2.6.1
pyparsing==2.1.9
python-cinderclient==1.9.0
python-designateclient==2.3.0
python-glanceclient==2.5.0
python-heatclient==1.5.0
python-ironicclient==1.7.0
python-keystoneclient==3.5.0
python-magnumclient==2.3.0
python-mistralclient==2.1.1
python-neutronclient==6.0.0
python-novaclient==6.0.0
python-openstackclient==3.2.0
python-swiftclient==3.1.0
python-troveclient==2.5.0
pytz==2016.7
PyYAML==3.12
requests==2.11.1
requestsexceptions==1.1.3
rfc3986==0.4.1
shade>=1.9.0,<=1.12.1
simplejson==3.8.2
six==1.10.0
stevedore==1.17.1
unicodecsv==0.14.1
warlock==1.2.0
wrapt==1.10.8


@@ -1,67 +0,0 @@
# Copyright (c) 2019 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Clean up Heat stacks
This script grabs a list of all stacks in DELETE_FAILED state and tries to
delete them again. For usage, please run the script with `--help`.
"""
import argparse
import openstack
options = argparse.ArgumentParser(description='OpenStack Heat Clean-up')
cloud = openstack.connect(options=options)
def cleanup_stack(stack):
# Skip anything that isn't DELETE_FAILED
if stack.status != 'DELETE_FAILED':
return
# Get a list of all the resources of the stack
resources = list(cloud.orchestration.resources(stack))
# If we don't have any resources, we can consider this stack gone.
if len(resources) == 0:
print('[{}] no resources, deleting stack'.format(stack.id))
cloud.orchestration.delete_stack(stack)
return
# Find resources that are DELETE_FAILED
for resource in resources:
# Skip resources that are not DELETE_FAILED
if resource.status != 'DELETE_FAILED':
continue
# Clean up and nested stacks
if resource.resource_type in ('OS::Heat::ResourceGroup'):
stack_id = resource.physical_resource_id
nested_stack = cloud.orchestration.find_stack(stack_id)
cleanup_stack(nested_stack)
continue
# This is protection to make sure that we only delete once we're sure
# that all resources are gone.
print(stack, resource)
raise
# At this point, the stack should be ready to be deleted
print("[{}] deleting..".format(stack.id))
cloud.orchestration.delete_stack(stack)
for stack in cloud.orchestration.stacks():
cleanup_stack(stack)
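
The guard in the script (only delete again once no resource remains in DELETE_FAILED) can be exercised without a cloud. A minimal sketch with stand-in objects; `FakeResource` and `all_resources_gone` are our illustrative names, not part of the script or of openstacksdk:

```python
class FakeResource:
    """Stand-in for an openstacksdk orchestration resource (hypothetical)."""
    def __init__(self, status, resource_type, physical_resource_id=None):
        self.status = status
        self.resource_type = resource_type
        self.physical_resource_id = physical_resource_id


def all_resources_gone(resources):
    # Mirrors the script's guard: the stack is only safe to delete again
    # once no resource remains in DELETE_FAILED state.
    return all(r.status != 'DELETE_FAILED' for r in resources)


resources = [
    FakeResource('DELETE_COMPLETE', 'OS::Nova::Server'),
    FakeResource('DELETE_FAILED', 'OS::Heat::ResourceGroup', 'abc123'),
]
print(all_resources_gone(resources))      # False: one resource still failed
print(all_resources_gone(resources[:1]))  # True: everything gone
```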


@@ -1,214 +0,0 @@
heat_template_version: 2016-04-08
#The value of heat_template_version tells Heat not only the format of the template but also features that will be validated and supported
#2016-04-08 represents the Mitaka release
description: >
This is the main Heat template for the 3-tier LAMP Workload created by the Enterprise WG.
This version of the template does not include autoscaling, and does not require ceilometer.
This template calls multiple nested templates which actually do the
majority of the work. This file calls the following yaml files in a ./lib subdirectory
setup_net_sg.yaml sets up the security groups and networks for Web, App, and Database
heat_app_tier.yaml starts up application servers and does on-the-fly builds
heat_web_tier.yaml starts up web servers and does on-the-fly builds
heat_sql_tier.yaml starts up mysql server and does on-the-fly builds.
NOTE: This serves as a guide to new users and is not meant for production deployment.
REQUIRED YAML FILES:
setup_net_sg.yaml, heat_app_tier.yaml, heat_sql_tier.yaml, heat_web_tier.yaml
REQUIRED PARAMETERS:
ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, app_server_name, web_server_name, dns_nameserver
#Created by: Craig Sterrett 3/23/2016
######################################
#The parameters section allows for specifying input parameters that have to be provided when instantiating the template
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
hidden: false
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant. This could be modified to use different
images for each tier.
hidden: false
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
hidden: false
constraints:
- custom_constraint: neutron.network
description: Must be a valid network on your cloud
db_instance_flavor:
type: string
label: Database server instance flavor
description: The flavor type to use for db server.
default: m1.small
hidden: false
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
app_instance_flavor:
type: string
label: Application server instance flavor
description: The flavor type to use for app servers.
default: m1.small
hidden: false
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
web_instance_flavor:
type: string
label: Web server instance flavor
description: The flavor type to use for web servers.
default: m1.small
hidden: false
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
db_server_name:
type: string
label: Server Name
description: Name of the database servers
hidden: false
default: db_server
app_server_name:
type: string
label: Server Name
description: Name of the application servers
hidden: false
default: app_server
web_server_name:
type: string
label: Server Name
description: Name of the web servers
hidden: false
default: web_server
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver in list format
default: 8.8.8.8,8.8.4.4
######################################
#The resources section defines actual resources that make up a stack deployed from the HOT template (for instance compute instances, networks, storage volumes).
resources:
####################
#Setup Networking and Security Group
#Call the setup_net_sg.yaml file
network_setup:
type: lib/setup_net_sg.yaml
properties:
public_network_id: { get_param: public_network_id }
dns_nameserver: { get_param: dns_nameserver }
####################
##Kick off a Database server
launch_db_server:
type: lib/heat_sql_tier.yaml
properties:
ssh_key_name: { get_param: ssh_key_name }
server_name: { get_param: db_server_name }
instance_flavor: { get_param: db_instance_flavor }
image_id: { get_param: image_id }
private_network_id: {get_attr: [network_setup, db_private_network_id]}
security_group: {get_attr: [network_setup, db_security_group_id]}
####################
##Kick off two application servers
#Utilizing Heat resourcegroup to kick off multiple copies
app_server_resource_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: lib/heat_app_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: app_server_name
instance_flavor:
get_param: app_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, app_private_network_id]}
security_group: {get_attr: [network_setup, app_security_group_id]}
pool_name: {get_attr: [network_setup, app_lbaas_pool_name]}
db_server_ip: {get_attr: [launch_db_server, instance_ip]}
#Just passing something for metadata, it's not used in this script but is used in
#the autoscaling script
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
####################
##Kick off two web servers
#Utilizing Heat resourcegroup to kick off multiple copies
web_server_resource_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: lib/heat_web_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: web_server_name
instance_flavor:
get_param: web_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, web_private_network_id]}
app_lbaas_vip: {get_attr: [network_setup, app_lbaas_IP]}
security_group: {get_attr: [network_setup, web_security_group_id]}
pool_name: {get_attr: [network_setup, web_lbaas_pool_name]}
#Just passing something for metadata, it's not used in this script but is used in
#the autoscaling script
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
######################################
#The outputs section defines output parameters that should be available to the user after a stack has been created.
outputs:
web_lbaas_ip:
description: >
This is the floating IP assigned to the WEB LoadBalancer.
value: {get_attr: [network_setup, web_lbaas_IP]}
app_lbaas_ip:
description: >
This is the floating IP assigned to the Application LoadBalancer.
value: {get_attr: [network_setup, app_lbaas_IP]}


@@ -1,343 +0,0 @@
heat_template_version: 2016-04-08
#The value of heat_template_version tells Heat not only the format of the template but also features that will be validated and supported
#2016-04-08 represents the Mitaka release
description: >
This is the main Heat template for the 3-tier LAMP Workload created by the Enterprise WG.
This version of the template does not include autoscaling, and does not require ceilometer.
This template calls multiple nested templates which actually do the
majority of the work. This file calls the following yaml files in a ./lib subdirectory
setup_net_sg.yaml sets up the security groups and networks for Web, App, and Database
heat_app_tier.yaml starts up application servers and does on-the-fly builds
heat_web_tier.yaml starts up web servers and does on-the-fly builds
heat_sql_tier.yaml starts up mysql server and does on-the-fly builds.
NOTE: This serves as a guide to new users and is not meant for production deployment.
REQUIRED YAML FILES:
setup_net_sg.yaml, heat_app_tier.yaml, heat_sql_tier.yaml, heat_web_tier.yaml
REQUIRED PARAMETERS:
ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, app_server_name, web_server_name, dns_nameserver
#Created by: Craig Sterrett 3/23/2016
######################################
#The parameters section allows for specifying input parameters that have to be provided when instantiating the template
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
hidden: false
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant. This could be modified to use different
images for each tier.
hidden: false
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
hidden: false
constraints:
- custom_constraint: neutron.network
description: Must be a valid network on your cloud
db_instance_flavor:
type: string
label: Database server instance flavor
description: The flavor type to use for db server.
default: m1.small
hidden: false
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
app_instance_flavor:
type: string
label: Application server instance flavor
description: The flavor type to use for app servers.
default: m1.small
hidden: false
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
web_instance_flavor:
type: string
label: Web server instance flavor
description: The flavor type to use for web servers.
default: m1.small
hidden: false
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
db_server_name:
type: string
label: Server Name
description: Name of the database servers
hidden: false
default: db_server
app_server_name:
type: string
label: Server Name
description: Name of the application servers
hidden: false
default: app_server
web_server_name:
type: string
label: Server Name
description: Name of the web servers
hidden: false
default: web_server
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver in list format
default: 8.8.8.8,8.8.4.4
######################################
#The resources section defines actual resources that make up a stack deployed from the HOT template (for instance compute instances, networks, storage volumes).
resources:
####################
#Setup Networking and Security Group
#Call the setup_net_sg.yaml file
network_setup:
type: lib/setup_net_sg.yaml
properties:
public_network_id: { get_param: public_network_id }
dns_nameserver: { get_param: dns_nameserver }
####################
##Kick off a Database server
launch_db_server:
type: lib/heat_sql_tier.yaml
properties:
ssh_key_name: { get_param: ssh_key_name }
server_name: { get_param: db_server_name }
instance_flavor: { get_param: db_instance_flavor }
image_id: { get_param: image_id }
private_network_id: {get_attr: [network_setup, db_private_network_id]}
security_group: {get_attr: [network_setup, db_security_group_id]}
####################
#Autoscaling for the app servers
app_autoscale_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 2
min_size: 2
max_size: 5
resource:
type: lib/heat_app_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: app_server_name
instance_flavor:
get_param: app_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, app_private_network_id]}
security_group: {get_attr: [network_setup, app_security_group_id]}
pool_name: {get_attr: [network_setup, app_lbaas_pool_name]}
db_server_ip: {get_attr: [launch_db_server, instance_ip]}
#created unique tag to be used by ceilometer to identify meters specific to the app nodes
#without some unique metadata tag, ceilometer will group together all resources in the tenant
metadata: {"metering.autoscale_group_name": "app_autoscale_group"}
####################
app_scaleup_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: app_autoscale_group }
#cooldown prevents duplicate alarms while instances spin up. Set the value large
#enough to allow for instance to startup and begin taking requests.
cooldown: 900
scaling_adjustment: 1
app_cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
#period needs to be greater than the sampling rate in the pipeline config file in /etc/ceilometer
period: 600
evaluation_periods: 1
#Alarms if CPU utilization for ALL app nodes averaged together exceeds 50%
threshold: 50
repeat_actions: true
alarm_actions:
- {get_attr: [app_scaleup_policy, alarm_url]}
#Collect data only on servers with the autoscale_group_name metadata set to app_autoscale_group
#Otherwise ceilometer would look at all servers in the project
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "app_autoscale_group"}
comparison_operator: gt
app_scaledown_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: app_autoscale_group }
#cooldown prevents duplicate alarms while instances shut down. Set the value large
#enough to allow for instance to shutdown and things stabilize.
cooldown: 900
scaling_adjustment: -1
app_cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
#period needs to be greater than the sampling rate in the pipeline config file in /etc/ceilometer
period: 600
evaluation_periods: 1
#Alarms if CPU utilization for ALL app nodes averaged together drops below 20%
threshold: 20
repeat_actions: true
alarm_actions:
- {get_attr: [app_scaledown_policy, alarm_url]}
#Collect data only on servers with the autoscale_group_name metadata set to app_autoscale_group
#Otherwise ceilometer would look at all servers in the project
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "app_autoscale_group"}
comparison_operator: lt
####################
#Autoscaling for the web servers
web_autoscale_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 2
min_size: 2
max_size: 5
resource:
type: lib/heat_web_tier.yaml
properties:
ssh_key_name:
get_param: ssh_key_name
server_name:
get_param: web_server_name
instance_flavor:
get_param: web_instance_flavor
image_id:
get_param: image_id
private_network_id: {get_attr: [network_setup, web_private_network_id]}
app_lbaas_vip: {get_attr: [network_setup, app_lbaas_IP]}
security_group: {get_attr: [network_setup, web_security_group_id]}
pool_name: {get_attr: [network_setup, web_lbaas_pool_name]}
metadata: {"metering.autoscale_group_name": "web_autoscale_group"}
####################
web_scaleup_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: web_autoscale_group }
cooldown: 900
scaling_adjustment: 1
web_cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 600
evaluation_periods: 1
threshold: 50
repeat_actions: true
alarm_actions:
- {get_attr: [web_scaleup_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "web_autoscale_group"}
comparison_operator: gt
web_scaledown_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: web_autoscale_group }
cooldown: 900
scaling_adjustment: -1
web_cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 600
evaluation_periods: 1
threshold: 20
repeat_actions: true
alarm_actions:
- {get_attr: [web_scaledown_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.autoscale_group_name': "web_autoscale_group"}
comparison_operator: lt
######################################
#The outputs section defines output parameters that should be available to the user after a stack has been created.
outputs:
web_lbaas_ip:
description: >
This is the floating IP assigned to the WEB LoadBalancer.
value: {get_attr: [network_setup, web_lbaas_IP]}
app_lbaas_ip:
description: >
This is the floating IP assigned to the Application LoadBalancer.
value: {get_attr: [network_setup, app_lbaas_IP]}
web_scale_up_url:
description: >
This URL is the webhook to scale up the WEB autoscaling group. You
can invoke the scale-up operation by doing an HTTP POST to this
URL; no body nor extra headers are needed, but you do need to be
authenticated. Example: source openrc; curl -X POST "<url>"
value: {get_attr: [web_scaleup_policy, alarm_url]}
web_scale_down_url:
description: >
This URL is the webhook to scale down the WEB autoscaling group.
value: {get_attr: [web_scaledown_policy, alarm_url]}
app_scale_up_url:
description: >
This URL is the webhook to scale up the application autoscaling group. You
can invoke the scale-up operation by doing an HTTP POST to this
URL; no body nor extra headers are needed.
value: {get_attr: [app_scaleup_policy, alarm_url]}
app_scale_down_url:
description: >
This URL is the webhook to scale down the application autoscaling group.
value: {get_attr: [app_scaledown_policy, alarm_url]}
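
The webhook outputs above accept a bare, pre-signed HTTP POST, as the curl example in the descriptions shows. A stdlib sketch that builds such a request without sending it (the function name and URL are placeholders of ours):

```python
import urllib.request


def scaling_signal(url):
    # The alarm_url output is pre-signed, so an empty-body POST with no
    # extra headers is all Heat needs (the URL below is a placeholder).
    return urllib.request.Request(url, data=b'', method='POST')


req = scaling_signal('https://heat.example.com:8000/v1/signal')
print(req.get_method())   # POST
```

Sending it would be `urllib.request.urlopen(req)`, the stdlib equivalent of the `curl -X POST "<url>"` shown above.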


@@ -1,67 +0,0 @@
3-Tier LAMP Sample Heat Template
================================
These heat templates deploy WordPress on a 3-Tier LAMP architecture. There are two versions of the primary template, one which creates a static environment which does not require ceilometer, and one which provides autoscaling of the web and application tiers based on CPU load, which does require ceilometer.
**The WordPress 3-Tier LAMP Architecture Sample**
====== ====================== =====================================
Tier Function Details
====== ====================== =====================================
Web Reverse Proxy Server Apache + mod_proxy
App WordPress Server Apache, PHP, MySQL Client, WordPress
Data Database Server MySQL
====== ====================== =====================================
**NOTE:** The sample WordPress application was tested with CentOS7 and Ubuntu Trusty. The sample application installation does not currently work with Ubuntu Xenial.
-----------------
Heat File Details
-----------------
The template uses a nested structure, with two different primary yaml files, both of which utilize the same 4 nested files. The templates were tested using the Mitaka release of OpenStack, and Ubuntu server 14.04 and Centos7.
**AppWG_3Tier.yaml:** If you want a static environment, run this yaml file. This will create a static environment with two load balanced web servers, two load balanced application servers, and a single database server using cinder block storage for the database files.
REQUIRED PARAMETERS:
* ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
* db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, app_server_name, web_server_name, dns_nameserver
**AppWG_3Tier_AutoScale.yaml:** If you want a dynamic autoscaling environment, run this yaml file. This yaml file sets up heat autoscaling groups.
REQUIRED PARAMETERS:
* ssh_key_name, image_id, public_network_id
OPTIONAL PARAMETERS:
* db_instance_flavor, app_instance_flavor, web_instance_flavor, db_server_name, app_server_name, web_server_name, dns_nameserver
The following 4 yaml files are called by the primary files above, and are by default expected to be in a lib subdirectory:
**setup_net_sg.yaml:** This file creates 3 separate private networks, one for each tier. In addition it creates two load balancers (using neutron LBaaS V1), one which has a public IP that connects the web private network to the public network, and one with a private IP that connects the web network to the application network. The template also creates a router connecting the application network to the database network. In addition to the networks and routers, the template creates 3 security groups, one for each of the tiers.
**heat_web_tier.yaml:** This template file launches the web tier nodes. In addition to launching instances, it installs and configures Apache and Apache mod_proxy, which is used to redirect traffic to the application nodes.
**heat_app_tier.yaml:** This template file launches the application tier nodes. In addition to launching the instances, it installs Apache, PHP, MySQL client, and finally WordPress.
**heat_sql_tier.yaml:** This template file launches the database tier node and installs MySQL. In addition it creates a cinder block device to store the database files. The template also creates the required users and databases for the WordPress application.
-------------------------------
Running the heat template files
-------------------------------
First you need to source your credential file. You may download a copy of the credential file from Horizon under Project>Compute>Access & Security>API Access
**Example to setup the static environment**
openstack stack create --template AppWG_3Tier.yaml --parameter ssh_key_name=mykey --parameter image_id=ubuntu --parameter dns_nameserver="8.8.8.8,8.8.4.4" --parameter public_network_id=external_network ThreeTierLAMP
**Example to setup the autoscaling environment**
openstack stack create --template AppWG_3Tier_AutoScale.yaml --parameter ssh_key_name=mykey --parameter image_id=centos --parameter dns_nameserver="8.8.8.8,8.8.4.4" --parameter public_network_id=external_network ThreeTierLAMP
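
When launching many variants of these stacks, the CLI invocations above can be scripted. A minimal sketch that assembles the same `openstack stack create` command line from a parameter dict (the helper name is ours, not part of the repo):

```python
def stack_create_cmd(stack_name, template, parameters):
    # Builds the argv for `openstack stack create`, one --parameter
    # flag per key=value pair, with the stack name last.
    cmd = ['openstack', 'stack', 'create', '--template', template]
    for key, value in sorted(parameters.items()):
        cmd += ['--parameter', '{}={}'.format(key, value)]
    cmd.append(stack_name)
    return cmd


cmd = stack_create_cmd('ThreeTierLAMP', 'AppWG_3Tier.yaml',
                       {'ssh_key_name': 'mykey', 'image_id': 'ubuntu'})
print(' '.join(cmd))
```

The resulting list can be handed to `subprocess.run` after sourcing credentials, mirroring the static-environment example above.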


@@ -1,138 +0,0 @@
heat_template_version: 2013-05-23
description: >
This is a nested Heat used by the 3-Tier Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting Wordpress. This template file launches the application
tier nodes, and installs Apache, PHP, MySQL client, and finally WordPress.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
hidden: false
default: cloudkey
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
hidden: false
default: App_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
hidden: false
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
hidden: false
private_network_id:
type: string
default: App_Tier_private_network
description: The private Application network that will be utilized for all App servers
security_group:
type: string
default: Workload_App_SG
description: The Application security group that will be utilized for all App servers
pool_name:
type: string
description: LBaaS Pool to join
db_server_ip:
type: string
description: Database Server IP
metadata:
type: json
resources:
app_server:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
metadata: { get_param: metadata }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
params:
$db_server_ip: { get_param: db_server_ip }
template: |
#!/bin/bash -v
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Install PHP5, and mysql
apt-get -y install apache2 php5 libapache2-mod-php5 php5-mysql php5-gd mysql-client
elif which yum &> /dev/null
then
yum update -y
#Install PHP5, and mysql
setenforce 0
yum install -y php php-mysql
yum install -y wget
yum install -y php-gd
fi
# install wordpress
# download wordpress
wget http://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
# configure wordpress
cp wordpress/wp-config-sample.php wordpress/wp-config.php
sed -i 's/database_name_here/wordpress/' wordpress/wp-config.php
sed -i 's/username_here/wordpress_user/' wordpress/wp-config.php
sed -i 's/password_here/wordpress/' wordpress/wp-config.php
sed -i 's/localhost/$db_server_ip/' wordpress/wp-config.php
# install a copy of the configured wordpress into apache's www directory
rm /var/www/html/index.html
cp -R wordpress/* /var/www/html/
# give apache ownership of the application files
# (the web user is www-data on Debian/Ubuntu and apache on CentOS/Fedora;
# the non-matching chown simply fails and is ignored)
chown -R www-data:www-data /var/www/html/
chown -R apache:apache /var/www/html/
chmod -R g+w /var/www/html/
#Allow remote database connection
setsebool -P httpd_can_network_connect=1
systemctl restart httpd.service
Pool_Member:
type: OS::Neutron::PoolMember
properties:
pool_id: {get_param: pool_name}
address: {get_attr: [app_server, first_address]}
protocol_port: 80
outputs:
app_private_ip:
description: Private IP address of the Web node
value: { get_attr: [app_server, first_address] }
lb_member:
description: LoadBalancer member details.
value: { get_attr: [Pool_Member, show] }

@@ -1,210 +0,0 @@
heat_template_version: 2013-05-23
description: >
This is a nested Heat template used by the 3-Tier Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting WordPress. This template file launches the database
tier node, creates a cinder block device to store the database files and creates
the required users and databases for the WordPress application.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
hidden: false
default: cloudkey
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
hidden: false
default: DB_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
hidden: false
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
hidden: false
private_network_id:
type: string
default: DB_Tier_private_network
description: The private database network that will be utilized for all DB servers
security_group:
type: string
default: Workload_DB_SG
description: The database security group that will be utilized for all DB servers
db_name:
type: string
description: MYSQL database name
default: wordpress
constraints:
- length: { min: 1, max: 64 }
description: db_name must be between 1 and 64 characters
- allowed_pattern: '[a-zA-Z][a-zA-Z0-9]*'
description: >
db_name must begin with a letter and contain only alphanumeric
characters
db_username:
type: string
description: MYSQL database admin account username
default: wordpress_user
hidden: true
db_password:
type: string
description: MYSQL database admin account password
default: wordpress
hidden: true
constraints:
- length: { min: 1, max: 41 }
description: db_password must be between 1 and 41 characters
- allowed_pattern: '[a-zA-Z0-9]*'
description: db_password must contain only alphanumeric characters
db_root_password:
type: string
description: Root password for MySQL
default: admin
hidden: true
constraints:
- length: { min: 1, max: 41 }
description: db_root_password must be between 1 and 41 characters
- allowed_pattern: '[a-zA-Z0-9]*'
description: db_root_password must contain only alphanumeric characters
db_volume_size:
type: string
description: Database cinder volume size (in GB) for database files
default: 2
hidden: true
resources:
#Setup a cinder volume for storage of the database files
db_files_volume:
type: OS::Cinder::Volume
properties:
size: { get_param: db_volume_size }
name: DB_Files
db_volume_attachment:
type: OS::Cinder::VolumeAttachment
properties:
volume_id: { get_resource: db_files_volume }
instance_uuid: { get_resource: MYSQL_instance }
#Install MySQL and setup wordpress DB and set usernames and passwords
MYSQL_instance:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash -v
#make mount point for cinder volume and prepare volume
mkdir /mnt/db_files
chown mysql:mysql /mnt/db_files
volume_path="/dev/disk/by-id/virtio-$(echo volume_id | cut -c -20)"
echo ${volume_path}
mkfs.ext4 ${volume_path}
echo "${volume_path} /mnt/db_files ext4 defaults 1 2" >> /etc/fstab
mount /mnt/db_files
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Next line stops mysql install from popping up request for root password
export DEBIAN_FRONTEND=noninteractive
apt-get install -q -y --force-yes mariadb-server
touch /var/log/mariadb/mariadb.log
chown mysql:mysql /var/log/mariadb/mariadb.log
#Ubuntu mysql install blocks remote access by default
sed -i 's/bind-address/#bind-address/' /etc/mysql/my.cnf
service mysql stop
#Move the database to the cinder device
mv -f /var/lib/mysql /mnt/db_files/
#edit data file location in the mysql config file
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/mysql/my.cnf
service mysql start
elif which yum &> /dev/null
then
yum update -y
setenforce 0
yum -y install mariadb-server mariadb
systemctl start mariadb
systemctl stop mariadb
chown mysql:mysql /mnt/db_files
touch /var/log/mariadb/mariadb.log
chown mysql:mysql /var/log/mariadb/mariadb.log
#Move the database to the cinder device
mv -f /var/lib/mysql /mnt/db_files/
#edit data file location in the mysql config file
sed -i 's/\/var\/lib\/mysql/\/mnt\/db_files\/mysql/' /etc/my.cnf
#need to modify the socket info for the clients
echo "[client]" >> /etc/my.cnf
echo "socket=/mnt/db_files/mysql/mysql.sock" >> /etc/my.cnf
systemctl start mariadb
systemctl enable mariadb
fi
# Setup MySQL root password and create a user and add remote privs to app subnet
mysqladmin -u root password db_rootpassword
# create wordpress database
cat << EOF | mysql -u root --password=db_rootpassword
CREATE DATABASE db_name;
CREATE USER 'db_user'@'localhost';
SET PASSWORD FOR 'db_user'@'localhost'=PASSWORD("db_password");
GRANT ALL PRIVILEGES ON db_name.* TO 'db_user'@'localhost' IDENTIFIED BY 'db_password';
CREATE USER 'db_user'@'%';
SET PASSWORD FOR 'db_user'@'%'=PASSWORD("db_password");
GRANT ALL PRIVILEGES ON db_name.* TO 'db_user'@'%' IDENTIFIED BY 'db_password';
FLUSH PRIVILEGES;
EOF
params:
db_rootpassword: { get_param: db_root_password }
db_name: { get_param: db_name }
db_user: { get_param: db_username }
db_password: { get_param: db_password }
volume_id: {get_resource: db_files_volume }
outputs:
completion:
description: >
MySQL setup is complete; the database name and admin account are:
value:
str_replace:
template: >
Database Name=$dbName, Database Admin Acct=$dbAdmin
params:
$dbName: { get_param: db_name }
$dbAdmin: { get_param: db_username }
instance_ip:
description: IP address of the deployed compute instance
value: { get_attr: [MYSQL_instance, first_address] }

@@ -1,139 +0,0 @@
heat_template_version: 2013-05-23
description: >
This is a nested Heat template used by the 3-Tier Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting WordPress. This template installs and configures
Apache and Apache modproxy which is used to redirect traffic to the application nodes.
This serves as a guide to new users and is not meant for production deployment.
#Created by: Craig Sterrett 3/23/2016
parameters:
ssh_key_name:
type: string
label: SSH Key Name
description: REQUIRED PARAMETER - Name of an existing SSH KeyPair to enable SSH access to instances.
hidden: false
default: cloudkey
server_name:
type: string
label: Server Name
description: REQUIRED PARAMETER - Name of the instance to spin up.
hidden: false
default: Web_Server
instance_flavor:
type: string
label: Instance Flavor
description: The flavor type to use for each server.
default: m1.small
hidden: false
image_id:
type: string
label: Image ID
description: >
REQUIRED PARAMETER - The image id to be used for the compute instance. Please specify
your own Image ID in your project/tenant.
hidden: false
private_network_id:
type: string
default: Web_Tier_private_network
description: The private Web network that will be utilized for all web servers
security_group:
type: string
default: Workload_Web_SG
description: The Web security group that will be utilized for all web servers
pool_name:
type: string
description: LBaaS Pool to join
app_lbaas_vip:
type: string
description: Application LBaaS virtual IP
metadata:
type: json
resources:
web_server:
type: OS::Nova::Server
properties:
name: { get_param: server_name }
image: { get_param: image_id }
flavor: { get_param: instance_flavor }
key_name: { get_param: ssh_key_name }
metadata: { get_param: metadata }
networks:
- network: { get_param: private_network_id }
security_groups:
- { get_param: security_group }
user_data_format: RAW
user_data:
str_replace:
params:
$app_lbaas_vip: { get_param: app_lbaas_vip }
template: |
#!/bin/bash -v
#CentOS ships a "requiretty" default in sudoers that blocks sudo from scripts; comment it out
sed -i '/Defaults \+requiretty/s/^/#/' /etc/sudoers
#use apt-get for Debian/ubuntu, and yum for centos/fedora
if apt-get -v &> /dev/null
then
apt-get update -y
apt-get upgrade -y
#Install Apache
apt-get -y --force-yes install apache2
apt-get install -y libapache2-mod-proxy-html libxml2-dev
a2enmod proxy
a2enmod proxy_http
a2enmod deflate
a2enmod headers
a2enmod proxy_connect
a2enmod proxy_html
cat > /etc/apache2/sites-enabled/000-default.conf << EOL
<VirtualHost *:*>
ProxyPreserveHost On
ProxyPass / http://$app_lbaas_vip/
ProxyPassReverse / http://$app_lbaas_vip/
ServerName localhost
</VirtualHost>
EOL
/etc/init.d/apache2 restart
elif which yum &> /dev/null
then
#yum update -y
#Install Apache
yum install -y httpd
yum install -y wget
cat >> /etc/httpd/conf/httpd.conf << EOL
<VirtualHost *:*>
ProxyPreserveHost On
ProxyPass / http://$app_lbaas_vip/
ProxyPassReverse / http://$app_lbaas_vip/
ServerName localhost
</VirtualHost>
EOL
service httpd restart
fi
Pool_Member:
type: OS::Neutron::PoolMember
properties:
pool_id: {get_param: pool_name}
address: {get_attr: [web_server, first_address]}
protocol_port: 80
outputs:
web_private_ip:
description: Private IP address of the Web node
value: { get_attr: [web_server, first_address] }
lb_member:
description: LoadBalancer member details.
value: { get_attr: [Pool_Member, show] }

@@ -1,348 +0,0 @@
heat_template_version: 2016-04-08
description: >
This is a nested Heat template used by the 3-Tier Architecture Workload reference document
created by the Enterprise Working Group. These templates demonstrate a sample
LAMP architecture supporting WordPress. This template file creates three separate
private networks, two load balancers (LBaaS v1), and three security groups.
This serves as a guide to new users and is not meant for production deployment.
REQUIRED PARAMETERS:
public_network_id
#Created by: Craig Sterrett 3/23/2016
parameters:
public_network_id:
type: string
label: Public Network
description: >
REQUIRED PARAMETER - The public network name or id used to access the internet.
This will fail if this is not a true public network
dns_nameserver:
type: comma_delimited_list
label: DNS Name Server
description: The IP address of a DNS nameserver
default: 8.8.8.8,8.8.4.4
resources:
#Create 3 private Networks, one for each Tier
# create a private network/subnet for the web servers
web_private_network:
type: OS::Neutron::Net
properties:
name: Web_Tier_private_network
web_private_network_subnet:
type: OS::Neutron::Subnet
properties:
cidr: 192.168.100.0/24
#Need to define default gateway in order for LBaaS namespace to pick it up
#If you let neutron grant a default gateway IP, then the LBaaS namespace may
#not pick it up and you will have routing issues
gateway_ip: 192.168.100.4
allocation_pools: [{ "start": 192.168.100.10, "end": 192.168.100.200 }]
#This routing information will get passed to the instances as they startup
#Provide the routes to the App network otherwise everything will try to go out the
#default gateway
host_routes: [{"destination": 192.168.101.0/24, "nexthop": 192.168.100.5}]
network: { get_resource: web_private_network }
name: Web_Tier_private_subnet
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router between the public/external network and the web network
public_router:
type: OS::Neutron::Router
properties:
name: PublicWebRouter
external_gateway_info:
network: { get_param: public_network_id }
# attach the web private network to the public router
public_router_interface:
type: OS::Neutron::RouterInterface
properties:
router: { get_resource: public_router }
subnet: { get_resource: web_private_network_subnet }
#############################
# create a private network/subnet for the Application servers
App_private_network:
type: OS::Neutron::Net
properties:
name: App_Tier_private_network
App_private_network_subnet:
type: OS::Neutron::Subnet
properties:
cidr: 192.168.101.0/24
#Need to define default gateway in order for LBaaS namespace to pick it up
#If you let neutron grant a default gateway IP, then the LBaaS namespace may
#not pick it up and you will have routing issues
gateway_ip: 192.168.101.5
#setting aside lower IP's to leave room for routers
allocation_pools: [{ "start": 192.168.101.10, "end": 192.168.101.200 }]
#This routing information will get passed to the instances as they startup
#Provide both the routes to the DB network and to the web network
host_routes: [{"destination": 192.168.100.0/24, "nexthop": 192.168.101.5}, {"destination": 192.168.102.0/24, "nexthop": 192.168.101.6}, {"destination": 0.0.0.0/0, "nexthop": 192.168.100.4}]
network: { get_resource: App_private_network }
name: App_Tier_private_subnet
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router linking App and Web network
App_router:
type: OS::Neutron::Router
properties:
name: "AppWebRouter"
external_gateway_info: {"network": { get_param: public_network_id }, "enable_snat": True}
# Create a port connecting the App_router to the App network
web_router_app_port:
type: OS::Neutron::Port
properties:
name: "App_Net_Port"
network: { get_resource: App_private_network }
#Assign the default gateway address
#The default gateway will get set as the default route in the LBaaS namespace
fixed_ips: [{"ip_address": 192.168.101.5}]
# Create a port connecting the App_router to the Web network
web_router_web_port:
type: OS::Neutron::Port
properties:
name: "Web_Net_Port"
network: { get_resource: web_private_network }
fixed_ips: [{"ip_address": 192.168.100.5}]
App_router_interface1:
type: OS::Neutron::RouterInterface
properties:
router: { get_resource: App_router }
port: { get_resource: web_router_app_port }
App_router_interface2:
type: OS::Neutron::RouterInterface
properties:
router: { get_resource: App_router }
port: { get_resource: web_router_web_port }
##############################
#Create two Load Balancers one for the Web tier with a public IP and one for the App Tier
#with only private network access
#LBaaS V1 Load Balancer for Web Tier
Web_Tier_LoadBalancer:
type: OS::Neutron::LoadBalancer
properties:
protocol_port: 80
pool_id: {get_resource: Web_Server_Pool}
#LBaaS V1 Monitor for Web Tier
Web_Tier_Monitor:
type: OS::Neutron::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
#LBaaS V1 Pool for Web Tier
Web_Server_Pool:
type: OS::Neutron::Pool
properties:
protocol: HTTP
monitors: [{get_resource: Web_Tier_Monitor}]
subnet: {get_resource: web_private_network_subnet}
lb_method: ROUND_ROBIN
vip:
protocol_port: 80
# Create a VIP port
web_vip_port:
type: OS::Neutron::Port
properties:
network: { get_resource: web_private_network }
security_groups: [{ get_resource: web_security_group }]
fixed_ips:
- subnet_id: { get_resource: web_private_network_subnet }
# Floating_IP:
Web_Network_Floating_IP:
type: OS::Neutron::FloatingIP
properties:
floating_network: {get_param: public_network_id}
port_id: { get_resource: web_vip_port }
# Associate the Floating IP:
association:
type: OS::Neutron::FloatingIPAssociation
properties:
floatingip_id: { get_resource: Web_Network_Floating_IP }
port_id: { get_attr: [ Web_Server_Pool, vip, port_id ] }
#****************************************
#App Load Balancer
App_Tier_LoadBalancer:
type: OS::Neutron::LoadBalancer
properties:
protocol_port: 80
pool_id: {get_resource: App_Server_Pool}
#LBaaS V1 Monitor for App Tier
App_Tier_Monitor:
type: OS::Neutron::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
#LBaaS V1 Pool for App Tier
App_Server_Pool:
type: OS::Neutron::Pool
properties:
protocol: HTTP
monitors: [{get_resource: App_Tier_Monitor}]
subnet: {get_resource: App_private_network_subnet}
lb_method: ROUND_ROBIN
vip:
protocol_port: 80
#############################
# create a private network/subnet for the Database servers
DB_private_network:
type: OS::Neutron::Net
properties:
name: DB_Tier_private_network
DB_private_network_subnet:
type: OS::Neutron::Subnet
properties:
cidr: 192.168.102.0/24
gateway_ip: 192.168.102.6
allocation_pools: [{ "start": 192.168.102.10, "end": 192.168.102.200 }]
host_routes: [{"destination": 192.168.101.0/24, "nexthop": 192.168.102.6}]
network: { get_resource: DB_private_network }
dns_nameservers: { get_param: dns_nameserver }
enable_dhcp: true
# create a router linking Database and App network
DB_router:
type: OS::Neutron::Router
properties:
name: "AppDBRouter"
external_gateway_info: {"network": { get_param: public_network_id }, "enable_snat": True}
# Create a port connecting the db_router to the db network
db_router_db_port:
type: OS::Neutron::Port
properties:
network: { get_resource: DB_private_network }
name: "DB_Net_Port"
fixed_ips: [{"ip_address": 192.168.102.6}]
# Create a port connecting the db_router to the app network
db_router_app_port:
type: OS::Neutron::Port
properties:
network: { get_resource: App_private_network }
name: "DB_Router_App_Port"
fixed_ips: [{"ip_address": 192.168.101.6}]
# Now lets add our ports to our router
db_router_interface1:
type: OS::Neutron::RouterInterface
properties:
router: { get_resource: DB_router }
port: { get_resource: db_router_db_port }
db_router_interface2:
type: OS::Neutron::RouterInterface
properties:
router: { get_resource: DB_router }
port: { get_resource: db_router_app_port }
#################
#Create separate security groups for each Tier
# create a specific web security group that routes just web and ssh traffic
web_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: A application specific security group that passes ports 22 and 80
name: Workload_Web_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 80
port_range_max: 80
# create a specific application layer security group that routes database port 3306 traffic, web and ssh
app_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: A application specific security group that passes ports 22, 80 and 3306
name: Workload_App_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 80
port_range_max: 80
- protocol: tcp
port_range_min: 3306
port_range_max: 3306
# create a specific database security group that routes just database port 3306 traffic and ssh
db_security_group:
type: OS::Neutron::SecurityGroup
properties:
description: A database specific security group that just passes port 3306 and 22 for ssh
name: Workload_DB_SG
rules:
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 3306
port_range_max: 3306
outputs:
#Return a bunch of values so we can use them later in the Parent Heat template when we spin up servers
db_private_network_id:
description: Database private network ID
value: {get_resource: DB_private_network}
web_private_network_id:
description: Web private network ID
value: {get_resource: web_private_network}
app_private_network_id:
description: App private network ID
value: {get_resource: App_private_network}
db_security_group_id:
description: Database security group ID
value: {get_resource: db_security_group}
app_security_group_id:
description: App security group ID
value: {get_resource: app_security_group}
web_security_group_id:
description: Web security group ID
value: {get_resource: web_security_group}
web_lbaas_pool_name:
description: Name of Web LBaaS Pool
value: {get_resource: Web_Server_Pool}
app_lbaas_pool_name:
description: Name of App LBaaS Pool
value: {get_resource: App_Server_Pool}
web_lbaas_IP:
description: Public floating IP assigned to web LBaaS
value: { get_attr: [ Web_Network_Floating_IP, floating_ip_address ] }
app_lbaas_IP:
description: Internal floating IP assigned to app LBaaS
value: {get_attr: [ App_Server_Pool, vip, address]}

@@ -1,147 +0,0 @@
# Copyright (c) 2016 SWITCH http://www.switch.ch
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Author: Valery Tschopp <valery.tschopp@switch.ch>
# Date: 2016-07-05
import keystoneclient
from cinderclient.v2 import client as cinder_client
from glanceclient.v2 import client as glance_client
from keystoneauth1.identity import v3 as identity_v3
from keystoneauth1 import session
from keystoneclient.v3 import client as keystone_v3
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client
class OpenstackAPI():
"""Openstack API clients
Initialize all the necessary OpenStack clients for all available regions.
"""
def __init__(self, os_auth_url, os_username, os_password, os_project_name,
user_domain_name='default',
project_domain_name='default'):
# keystone_V3 client requires a /v3 auth url
if '/v2.0' in os_auth_url:
self.auth_url = os_auth_url.replace('/v2.0', '/v3')
else:
self.auth_url = os_auth_url
_auth = identity_v3.Password(auth_url=self.auth_url,
username=os_username,
password=os_password,
project_name=os_project_name,
user_domain_name=user_domain_name,
project_domain_name=project_domain_name)
self._auth_session = session.Session(auth=_auth)
self._keystone = keystone_v3.Client(session=self._auth_session)
# all regions available
self.all_region_names = []
for region in self.keystone.regions.list():
self.all_region_names.append(region.id)
self._nova = {}
self._cinder = {}
self._neutron = {}
self._glance = {}
@property
def keystone(self):
"""Get Keystone client"""
return self._keystone
def nova(self, region):
"""Get Nova client for the region."""
if region not in self._nova:
# Nova client lazy initialisation
_nova = nova_client.Client('2',
session=self._auth_session,
region_name=region)
self._nova[region] = _nova
return self._nova[region]
def cinder(self, region):
"""Get Cinder client for the region."""
if region not in self._cinder:
# Cinder client lazy initialisation
_cinder = cinder_client.Client(session=self._auth_session,
region_name=region)
self._cinder[region] = _cinder
return self._cinder[region]
def neutron(self, region):
"""Get Neutron client for the region."""
if region not in self._neutron:
# Neutron client lazy initialisation
_neutron = neutron_client.Client(session=self._auth_session,
region_name=region)
self._neutron[region] = _neutron
return self._neutron[region]
def glance(self, region):
"""Get Glance client for the region."""
if region not in self._glance:
# Glance client lazy initialisation
_glance = glance_client.Client(session=self._auth_session,
region_name=region)
self._glance[region] = _glance
return self._glance[region]
def get_all_regions(self):
"""Get list of all region names"""
return self.all_region_names
def get_user(self, user_name_or_id):
"""Get a user by name or id"""
user = None
try:
# try by name
user = self._keystone.users.find(name=user_name_or_id)
except keystoneclient.exceptions.NotFound:
# try by ID
user = self._keystone.users.get(user_name_or_id)
return user
def get_user_projects(self, user):
"""Get all user projects"""
projects = self._keystone.projects.list(user=user)
return projects
def get_project(self, project_name_or_id):
"""Get a project by name or id"""
project = None
try:
# try by name
project = self._keystone.projects.find(name=project_name_or_id)
except keystoneclient.exceptions.NotFound:
# try by ID
project = self._keystone.projects.get(project_name_or_id)
return project
def get_project_users(self, project):
"""Get all users in project"""
assignments = self._keystone.role_assignments.list(project=project)
user_ids = set()
for assignment in assignments:
if hasattr(assignment, 'user'):
user_ids.add(assignment.user['id'])
users = []
for user_id in user_ids:
users.append(self._keystone.users.get(user_id))
return users
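The per-region client methods above (nova, cinder, neutron, glance) all follow the same lazy-cache pattern: create the client on first request for a region, then reuse it. A self-contained sketch of that idea, with the real OpenStack clients replaced by a placeholder factory callable:

```python
class RegionClientCache:
    """Cache one client object per region, created lazily on first use."""

    def __init__(self, factory):
        self._factory = factory      # callable: region name -> client object
        self._clients = {}           # region name -> cached client

    def get(self, region):
        if region not in self._clients:
            # lazy initialisation, as in nova()/cinder() above
            self._clients[region] = self._factory(region)
        return self._clients[region]
```

In the class above, the same dictionary-per-service shape (`self._nova`, `self._cinder`, ...) plays the role of `self._clients`, keyed by region name.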

@@ -1,23 +0,0 @@
# Multi folder
This folder contains scripts that are not related to a specific OpenStack project.
## User info
Show the resources belonging to a user:
```
usage: user-info.py [-h] [-a] [-v] USERNAME
Show information (servers, volumes, networks, ...) for a user. Search in all
projects the user is member of, and optionally in all regions (-a).
positional arguments:
USERNAME username to search
optional arguments:
-h, --help show this help message and exit
-a, --all-regions query all regions
-v, --verbose verbose
```
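The usage text above maps onto a small argparse definition; a hypothetical reconstruction (the actual parser in user-info.py may differ in details):

```python
import argparse

# Hypothetical sketch of user-info.py's argument parser;
# option names and help strings are taken from the usage text above.
parser = argparse.ArgumentParser(
    prog="user-info.py",
    description="Show information (servers, volumes, networks, ...) for a user.")
parser.add_argument("username", metavar="USERNAME", help="username to search")
parser.add_argument("-a", "--all-regions", action="store_true",
                    help="query all regions")
parser.add_argument("-v", "--verbose", action="store_true", help="verbose")
```

For example, `python user-info.py -a alice` would set `args.all_regions` to `True` and leave `args.verbose` as `False`.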

@@ -1,72 +0,0 @@
#!/bin/bash -ex
source config.cfg
echo "########## CONFIGURING STATIC IP FOR NICs ##########"
ifaces=/etc/network/interfaces
test -f $ifaces.orig || cp $ifaces $ifaces.orig
rm $ifaces
cat << EOF > $ifaces
#Configuring IP for Controller node
# LOOPBACK NET
auto lo
iface lo inet loopback
# LOCAL NETWORK
auto eth0
iface eth0 inet static
address $LOCAL_IP
netmask $NETMASK_LOCAL
# EXT NETWORK
auto eth1
iface eth1 inet static
address $MASTER
netmask $NETMASK_MASTER
gateway $GATEWAY_IP
dns-nameservers 8.8.8.8
EOF
echo "Configuring hostname in CONTROLLER node"
sleep 3
echo "controller" > /etc/hostname
hostname -F /etc/hostname
echo "Configuring for file /etc/hosts"
sleep 3
iphost=/etc/hosts
test -f $iphost.orig || cp $iphost $iphost.orig
rm $iphost
touch $iphost
cat << EOF >> $iphost
127.0.0.1 localhost controller
$LOCAL_IP controller
EOF
# Enable IP forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
echo "##### Installing the Liberty repositories #####"
apt-get install software-properties-common -y
add-apt-repository cloud-archive:liberty -y
sleep 5
echo "UPDATE PACKAGE FOR LIBERTY"
apt-get -y update && apt-get -y upgrade && apt-get -y dist-upgrade
sleep 5
echo "Reboot Server"
#sleep 5
init 6
#

@@ -1,80 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Install python client"
apt-get -y install python-openstackclient
sleep 5
echo "Install and config NTP"
sleep 3
apt-get install ntp -y
cp /etc/ntp.conf /etc/ntp.conf.bka
rm /etc/ntp.conf
cat /etc/ntp.conf.bka | grep -v ^# | grep -v ^$ >> /etc/ntp.conf
## Config NTP in LIBERTY
sed -i 's/server ntp.ubuntu.com/ \
server 0.vn.pool.ntp.org iburst \
server 1.asia.pool.ntp.org iburst \
server 2.asia.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/restrict -4 default kod notrap nomodify nopeer noquery/ \
#restrict -4 default kod notrap nomodify nopeer noquery/g' /etc/ntp.conf
sed -i 's/restrict -6 default kod notrap nomodify nopeer noquery/ \
restrict -4 default kod notrap nomodify \
restrict -6 default kod notrap nomodify/g' /etc/ntp.conf
# sed -i 's/server/#server/' /etc/ntp.conf
# echo "server $LOCAL_IP" >> /etc/ntp.conf
##############################################
echo "Install and Config RabbitMQ"
sleep 3
apt-get install rabbitmq-server -y
rabbitmqctl add_user openstack $RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmqctl change_password guest $RABBIT_PASS
sleep 3
service rabbitmq-server restart
echo "Finished setting up prerequisite packages!"
echo "##### Install MYSQL #####"
sleep 3
echo mysql-server mysql-server/root_password password $MYSQL_PASS \
| debconf-set-selections
echo mysql-server mysql-server/root_password_again password $MYSQL_PASS \
| debconf-set-selections
apt-get -y install mariadb-server python-mysqldb curl
echo "##### Configuring MYSQL #####"
sleep 3
echo "########## CONFIGURING FOR MYSQL ##########"
sleep 5
touch /etc/mysql/conf.d/mysqld_openstack.cnf
cat << EOF > /etc/mysql/conf.d/mysqld_openstack.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF
sleep 5
echo "Restart MYSQL"
service mysql restart

@@ -1,222 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Create Database for Keystone"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "##### Install keystone #####"
echo "manual" > /etc/init/keystone.override
apt-get -y install keystone python-openstackclient apache2 \
libapache2-mod-wsgi memcached python-memcache
#/* Back-up file nova.conf
filekeystone=/etc/keystone/keystone.conf
test -f $filekeystone.orig || cp $filekeystone $filekeystone.orig
#Config file /etc/keystone/keystone.conf
cat << EOF > $filekeystone
[DEFAULT]
log_dir = /var/log/keystone
admin_token = $TOKEN_PASS
public_bind_host = $LOCAL_IP
admin_bind_host = $LOCAL_IP
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:$KEYSTONE_DBPASS@$LOCAL_IP/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = sql
[role]
[saml]
[signing]
[ssl]
[token]
provider = uuid
driver = memcache
[tokenless_auth]
[trust]
[extra_headers]
Distribution = Ubuntu
EOF
#
su -s /bin/sh -c "keystone-manage db_sync" keystone
echo "ServerName $LOCAL_IP" >> /etc/apache2/apache2.conf
cat << EOF > /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
EOF
ln -s /etc/apache2/sites-available/wsgi-keystone.conf \
/etc/apache2/sites-enabled
service apache2 restart
rm -f /var/lib/keystone/keystone.db
export OS_TOKEN="$TOKEN_PASS"
export OS_URL=http://$LOCAL_IP:35357/v2.0
# export OS_SERVICE_TOKEN="$TOKEN_PASS"
# export OS_SERVICE_ENDPOINT="http://$LOCAL_IP:35357/v2.0"
# export SERVICE_ENDPOINT="http://$LOCAL_IP:35357/v2.0"
### Identity service
openstack service create --name keystone \
--description "OpenStack Identity" identity
### Create the Identity service API endpoint
openstack endpoint create \
--publicurl http://$LOCAL_IP:5000/v2.0 \
--internalurl http://$LOCAL_IP:5000/v2.0 \
--adminurl http://$LOCAL_IP:35357/v2.0 \
--region RegionOne \
identity
#### To create tenants, users, and roles ADMIN
openstack project create --description "Admin Project" admin
openstack user create --password $ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
#### To create tenants, users, and roles SERVICE
openstack project create --description "Service Project" service
#### To create tenants, users, and roles DEMO
openstack project create --description "Demo Project" demo
openstack user create --password $ADMIN_PASS demo
### Create the user role
openstack role create user
openstack role add --project demo --user demo user
#################
unset OS_TOKEN OS_URL
# Create environment variable files
echo "export OS_PROJECT_DOMAIN_ID=default" > admin-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> admin-openrc.sh
echo "export OS_PROJECT_NAME=admin" >> admin-openrc.sh
echo "export OS_TENANT_NAME=admin" >> admin-openrc.sh
echo "export OS_USERNAME=admin" >> admin-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> admin-openrc.sh
echo "export OS_AUTH_URL=http://$LOCAL_IP:35357/v3" >> admin-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> admin-openrc.sh
sleep 5
echo "########## Execute environment script ##########"
chmod +x admin-openrc.sh
cat admin-openrc.sh >> /etc/profile
cp admin-openrc.sh /root/admin-openrc.sh
source admin-openrc.sh
echo "export OS_PROJECT_DOMAIN_ID=default" > demo-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> demo-openrc.sh
echo "export OS_PROJECT_NAME=demo" >> demo-openrc.sh
echo "export OS_TENANT_NAME=demo" >> demo-openrc.sh
echo "export OS_USERNAME=demo" >> demo-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> demo-openrc.sh
echo "export OS_AUTH_URL=http://$LOCAL_IP:35357/v3" >> demo-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> demo-openrc.sh
chmod +x demo-openrc.sh
cp demo-openrc.sh /root/demo-openrc.sh
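The admin and demo openrc files above differ only in the project/user name, so the same block can be generated in a loop. A minimal sketch of that pattern, using stand-in values for the `config.cfg` variables:

```shell
#!/bin/sh
# Stand-in values; the real script sources these from config.cfg.
LOCAL_IP=10.10.10.10
ADMIN_PASS=secret

# admin-openrc.sh and demo-openrc.sh differ only in the user/project name.
for u in admin demo; do
    cat << EOF > "${u}-openrc.sh"
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=$u
export OS_TENANT_NAME=$u
export OS_USERNAME=$u
export OS_PASSWORD=$ADMIN_PASS
export OS_AUTH_URL=http://$LOCAL_IP:35357/v3
export OS_VOLUME_API_VERSION=2
EOF
    chmod +x "${u}-openrc.sh"
done
grep OS_USERNAME admin-openrc.sh demo-openrc.sh
```

Because the heredoc delimiter is unquoted, `$u`, `$ADMIN_PASS`, and `$LOCAL_IP` expand at generation time, exactly as in the original echo chain.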
@ -1,167 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Create the database for GLANCE"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';
FLUSH PRIVILEGES;
EOF
sleep 5
echo "Create user, endpoint for GLANCE"
openstack user create --password $ADMIN_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description \
"OpenStack Image service" image
openstack endpoint create \
--publicurl http://$LOCAL_IP:9292 \
--internalurl http://$LOCAL_IP:9292 \
--adminurl http://$LOCAL_IP:9292 \
--region RegionOne \
image
echo "########## Install GLANCE ##########"
apt-get -y install glance python-glanceclient
sleep 10
echo "########## Configuring GLANCE API ##########"
sleep 5
# Back up glance-api.conf
fileglanceapicontrol=/etc/glance/glance-api.conf
test -f $fileglanceapicontrol.orig \
|| cp $fileglanceapicontrol $fileglanceapicontrol.orig
rm $fileglanceapicontrol
touch $fileglanceapicontrol
#Configuring glance config file /etc/glance/glance-api.conf
cat << EOF > $fileglanceapicontrol
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$LOCAL_IP/glance
backend = sqlalchemy
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[task]
[taskflow_executor]
EOF
#
sleep 10
echo "########## Configuring GLANCE REGISTRY ##########"
# Back up glance-registry.conf
fileglanceregcontrol=/etc/glance/glance-registry.conf
test -f $fileglanceregcontrol.orig \
|| cp $fileglanceregcontrol $fileglanceregcontrol.orig
rm $fileglanceregcontrol
touch $fileglanceregcontrol
#Config file /etc/glance/glance-registry.conf
cat << EOF > $fileglanceregcontrol
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$LOCAL_IP/glance
backend = sqlalchemy
[glance_store]
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
EOF
sleep 7
echo "########## Remove Glance default DB ##########"
rm /var/lib/glance/glance.sqlite
chown glance:glance $fileglanceapicontrol
chown glance:glance $fileglanceregcontrol
sleep 7
echo "########## Syncing DB for Glance ##########"
glance-manage db_sync
sleep 5
echo "########## Restarting GLANCE service ... ##########"
service glance-registry restart
service glance-api restart
sleep 3
service glance-registry restart
service glance-api restart
#
echo "Remove glance.sqlite "
rm -f /var/lib/glance/glance.sqlite
sleep 3
echo "########## Registering Cirros IMAGE for GLANCE ... ##########"
mkdir images
cd images/
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public --progress
cd /root/
# rm -r /tmp/images
sleep 5
echo "########## Testing Glance ##########"
glance image-list
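Before handing the download to `glance image-create`, a quick sanity check on the file avoids registering an empty or truncated image. A minimal sketch, using a tiny stand-in file (the magic-byte check and file name are illustrative, not part of the original script):

```shell
#!/bin/bash
# Stand-in for the wget-downloaded cirros-0.3.4-x86_64-disk.img.
IMG=cirros-test.img
printf 'QFI\xfb' > "$IMG"   # real qcow2 files begin with the magic bytes QFI\xfb

# Refuse to register an empty or truncated download.
if [ ! -s "$IMG" ]; then
    echo "image download failed" >&2
    exit 1
fi
# Check the qcow2 magic instead of trusting the .img extension.
if head -c 3 "$IMG" | grep -q '^QFI'; then
    echo "image looks like qcow2"
fi
```

On the real node this check would run between the `wget` and the `glance image-create` call.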
@ -1,167 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Create DB for NOVA "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NOVA"
openstack user create --password $ADMIN_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create \
--publicurl http://$LOCAL_IP:8774/v2/%\(tenant_id\)s \
--internalurl http://$LOCAL_IP:8774/v2/%\(tenant_id\)s \
--adminurl http://$LOCAL_IP:8774/v2/%\(tenant_id\)s \
--region RegionOne \
compute
echo "########## Install NOVA in $LOCAL_IP ##########"
sleep 5
apt-get -y install nova-compute nova-api nova-cert nova-conductor \
nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
echo "libguestfs-tools libguestfs/update-appliance boolean true" \
| debconf-set-selections
apt-get -y install libguestfs-tools sysfsutils guestfsd python-guestfs
# Fix password injection when the hypervisor is KVM
update-guestfs-appliance
chmod 0644 /boot/vmlinuz*
usermod -a -G kvm root
######## Backup configurations for NOVA ##########
sleep 7
#
controlnova=/etc/nova/nova.conf
test -f $controlnova.orig || cp $controlnova $controlnova.orig
rm $controlnova
touch $controlnova
cat << EOF >> $controlnova
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
my_ip = $LOCAL_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
verbose = True
enable_instance_password = True
[database]
connection = mysql+pymysql://nova:$NOVA_DBPASS@$LOCAL_IP/nova
[oslo_messaging_rabbit]
rabbit_host = $LOCAL_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = $NOVA_PASS
[vnc]
vncserver_listen = \$my_ip
vncserver_proxyclient_address = \$my_ip
novncproxy_base_url = http://$MASTER:6080/vnc_auto.html
[glance]
host = $LOCAL_IP
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://$LOCAL_IP:9696
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = $NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = $DEFAULT_PASS
[cinder]
os_region_name = RegionOne
[libvirt]
inject_key = True
inject_partition = -1
inject_password = True
EOF
echo "########## Remove Nova default db ##########"
sleep 7
rm /var/lib/nova/nova.sqlite
echo "########## Syncing Nova DB ##########"
sleep 7
su -s /bin/sh -c "nova-manage db sync" nova
# fix libvirtError:internal error: no supported architecture for os type 'hvm'
# echo 'kvm_intel' >> /etc/modules
echo "########## Restarting NOVA ... ##########"
sleep 7
service nova-api restart;
service nova-cert restart;
service nova-consoleauth restart;
service nova-scheduler restart;
service nova-conductor restart;
service nova-novncproxy restart;
service nova-compute restart;
service nova-console restart
sleep 7
echo "########## Restarting NOVA ... ##########"
service nova-api restart;
service nova-cert restart;
service nova-consoleauth restart;
service nova-scheduler restart;
service nova-conductor restart;
service nova-novncproxy restart;
service nova-compute restart;
service nova-console restart
echo "########## Testing NOVA service ##########"
nova-manage service list
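The two identical blocks of per-service restarts above can be collapsed into a loop over a service list. A sketch of the pattern (echoing instead of calling `service`, since the init scripts only exist on the deployed node):

```shell
#!/bin/sh
# The eight nova services the script restarts, gathered into one list.
NOVA_SERVICES="nova-api nova-cert nova-consoleauth nova-scheduler \
nova-conductor nova-novncproxy nova-compute nova-console"

for svc in $NOVA_SERVICES; do
    # On the controller this line would be: service "$svc" restart
    echo "restarting $svc"
done
```

Running the loop twice reproduces the original double restart without sixteen hand-written lines.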
@ -1,51 +0,0 @@
#!/bin/bash -ex
source config.cfg
apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade -y
echo "########## Install and Config OpenvSwitch ##########"
apt-get install -y openvswitch-switch
apt-get install -y neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent \
neutron-plugin-openvswitch neutron-common
echo "########## Configuring br-int and br-ex for OpenvSwitch ##########"
sleep 5
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
echo "########## Configuring IP for br-ex ##########"
ifaces=/etc/network/interfaces
test -f $ifaces.orig1 || cp $ifaces $ifaces.orig1
rm $ifaces
cat << EOF > $ifaces
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address $LOCAL_IP
netmask $NETMASK_LOCAL
# The primary network interface
auto br-ex
iface br-ex inet static
address $MASTER
netmask $NETMASK_MASTER
gateway $GATEWAY_IP
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet manual
up ifconfig \$IFACE 0.0.0.0 up
up ip link set \$IFACE promisc on
down ip link set \$IFACE promisc off
down ifconfig \$IFACE down
EOF
echo "########## Reboot machine after finishing configure IP ##########"
init 6
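The interfaces heredoc above mixes expanded variables (`$MASTER`) with escaped ones (`\$IFACE`) that must reach `/etc/network/interfaces` literally. That behaviour can be checked without touching the real file; a sketch with hypothetical stand-in addresses:

```shell
#!/bin/bash
# Stand-in values; the real script sources these from config.cfg.
MASTER=192.168.1.10
NETMASK_MASTER=255.255.255.0
GATEWAY_IP=192.168.1.1
OUT=interfaces-test

# Same heredoc pattern the script writes to /etc/network/interfaces:
# unescaped variables expand now, \$IFACE survives as a literal placeholder
# that ifupdown later substitutes with the interface name.
cat << EOF > "$OUT"
auto br-ex
iface br-ex inet static
address $MASTER
netmask $NETMASK_MASTER
gateway $GATEWAY_IP
auto eth1
iface eth1 inet manual
up ip link set \$IFACE promisc on
down ip link set \$IFACE promisc off
EOF

grep -c 'IFACE' "$OUT"        # the literal placeholder survived
grep "address $MASTER" "$OUT"
```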
@ -1,247 +0,0 @@
#!/bin/bash -ex
source config.cfg
echo "Create DB for NEUTRON "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NEUTRON"
openstack user create --password $ADMIN_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description \
"OpenStack Networking" network
openstack endpoint create \
--publicurl http://$LOCAL_IP:9696 \
--adminurl http://$LOCAL_IP:9696 \
--internalurl http://$LOCAL_IP:9696 \
--region RegionOne \
network
echo "########## Install NEUTRON on CONTROLLER ##########"
apt-get install -y openvswitch-switch
apt-get -y install neutron-server python-neutronclient neutron-plugin-ml2 \
neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent neutron-plugin-openvswitch neutron-common
######## Back up NEUTRON.CONF for the CONTROLLER ##################
echo "########## Editing neutron.conf ##########"
controlneutron=/etc/neutron/neutron.conf
test -f $controlneutron.orig || cp $controlneutron $controlneutron.orig
rm $controlneutron
cat << EOF > $controlneutron
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://$LOCAL_IP:8774/v2
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
[database]
connection = mysql+pymysql://neutron:$NEUTRON_DBPASS@$LOCAL_IP/neutron
[nova]
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = $NOVA_PASS
[oslo_concurrency]
lock_path = \$state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $LOCAL_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
EOF
######## Back up the ML2 config for the CONTROLLER ##################
echo "########## Config ml2_conf.ini ##########"
sleep 7
controlML2=/etc/neutron/plugins/ml2/ml2_conf.ini
test -f $controlML2.orig || cp $controlML2 $controlML2.orig
rm $controlML2
cat << EOF > $controlML2
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = $LOCAL_IP
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
EOF
echo "Fix MTU issue"
sleep 3
echo "dhcp-option-force=26,1454" > /etc/neutron/dnsmasq-neutron.conf
killall dnsmasq
######## Back up the metadata config for the CONTROLLER ##################
echo "########## Editing metadata_agent.ini ##########"
sleep 7
metadatafile=/etc/neutron/metadata_agent.ini
test -f $metadatafile.orig || cp $metadatafile $metadatafile.orig
rm $metadatafile
cat << EOF > $metadatafile
[DEFAULT]
verbose = True
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
nova_metadata_ip = $LOCAL_IP
metadata_proxy_shared_secret = $METADATA_SECRET
EOF
######## Edit the DHCP config file ##################
echo "########## Editing the DHCP config ##########"
sleep 7
dhcpfile=/etc/neutron/dhcp_agent.ini
test -f $dhcpfile.orig || cp $dhcpfile $dhcpfile.orig
rm $dhcpfile
cat << EOF > $dhcpfile
[DEFAULT]
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[AGENT]
EOF
###################### Back up the L3 config ###########################
echo "########## Editing l3_agent.ini ##########"
sleep 7
l3file=/etc/neutron/l3_agent.ini
test -f $l3file.orig || cp $l3file $l3file.orig
rm $l3file
touch $l3file
cat << EOF >> $l3file
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
EOF
chown root:neutron /etc/neutron/*
chown root:neutron $controlML2
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
echo "########## Restarting NEUTRON ##########"
sleep 5
#for i in $( ls /etc/init.d/neutron-* );do service `basename $i` restart;done
service neutron-server restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart
echo "########## Restarting NEUTRON ##########"
sleep 5
#for i in $( ls /etc/init.d/neutron-* );do service `basename $i` restart;done
service neutron-server restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart
# Add NEUTRON service restart commands to run on every reboot, as a workaround for startup issues.
sed -i "s/exit 0/# exit 0/g" /etc/rc.local
echo "service neutron-server restart" >> /etc/rc.local
echo "service neutron-l3-agent restart" >> /etc/rc.local
echo "service neutron-dhcp-agent restart" >> /etc/rc.local
echo "service neutron-metadata-agent restart" >> /etc/rc.local
echo "service openvswitch-switch restart" >> /etc/rc.local
echo "service neutron-plugin-openvswitch-agent restart" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
echo "########## Testing NEUTRON (wait 30s) ##########"
# Wait for neutron to finish starting before checking
sleep 30
neutron agent-list
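The `/etc/rc.local` edits above comment out the existing `exit 0`, append the restart commands, and re-add `exit 0` at the end. The ordering that pattern produces can be exercised against a throwaway file; a minimal sketch:

```shell
#!/bin/sh
# Throwaway copy demonstrating the sed + append pattern used on /etc/rc.local.
RC=rc.local.test
printf '#!/bin/sh -e\nexit 0\n' > "$RC"

# Comment out the existing exit 0, append a restart command, re-add exit 0.
sed -i "s/exit 0/# exit 0/g" "$RC"
echo "service neutron-server restart" >> "$RC"
echo "exit 0" >> "$RC"

# The restart command now sits before the final exit 0.
cat "$RC"
```

The `g` flag matters: it also comments out any earlier `exit 0` occurrences, so re-running the script keeps appending after the last one.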
@ -1,42 +0,0 @@
#!/bin/bash -ex
source config.cfg
###################
echo "########## START INSTALLING OPS DASHBOARD ##########"
###################
sleep 5
echo "########## Installing Dashboard package ##########"
apt-get -y install openstack-dashboard
apt-get -y remove --auto-remove openstack-dashboard-ubuntu-theme
echo "########## Creating redirect page ##########"
filehtml=/var/www/html/index.html
test -f $filehtml.orig || cp $filehtml $filehtml.orig
rm $filehtml
touch $filehtml
cat << EOF >> $filehtml
<html>
<head>
<META HTTP-EQUIV="Refresh" Content="0.5; URL=http://$BR_EX_IP/horizon">
</head>
<body>
<center> <h1>Redirecting to OpenStack Dashboard</h1> </center>
</body>
</html>
EOF
# Allow setting an instance password from the dashboard (only applies to the image)
sed -i "s/'can_set_password': False/'can_set_password': True/g" \
/etc/openstack-dashboard/local_settings.py
## /* Restarting apache2 and memcached
service apache2 restart
service memcached restart
echo "########## Finish setting up Horizon ##########"
echo "########## LOGIN INFORMATION IN HORIZON ##########"
echo "URL: http://$BR_EX_IP/horizon"
echo "User: admin or demo"
echo "Password:" $ADMIN_PASS
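The `can_set_password` substitution above can be tried against a stub settings file before touching the real one. A sketch (the stub file name and its single line are illustrative; the real target is `/etc/openstack-dashboard/local_settings.py`):

```shell
#!/bin/sh
# Stub standing in for /etc/openstack-dashboard/local_settings.py.
STUB=local_settings.test
echo "OPENSTACK_HYPERVISOR_FEATURES = {'can_set_password': False}" > "$STUB"

# Same substitution the script applies to the real settings file.
sed -i "s/'can_set_password': False/'can_set_password': True/g" "$STUB"
grep "can_set_password" "$STUB"
```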
@ -1,81 +0,0 @@
#!/bin/bash -ex
source config.cfg
echo "Configuring hostname in CONTROLLER node"
sleep 3
echo "controller" > /etc/hostname
hostname -F /etc/hostname
echo "Configuring for file /etc/hosts"
sleep 3
iphost=/etc/hosts
test -f $iphost.orig || cp $iphost $iphost.orig
rm $iphost
touch $iphost
cat << EOF >> $iphost
127.0.0.1 localhost controller
$LOCAL_IP controller
EOF
# Enable IP forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
echo "##### Installing repos for Liberty #####"
apt-get install software-properties-common -y
add-apt-repository cloud-archive:liberty -y
sleep 5
echo "UPDATE PACKAGE FOR LIBERTY"
apt-get -y update && apt-get -y upgrade && apt-get -y dist-upgrade
echo "########## Install and Config OpenvSwitch ##########"
apt-get install -y openvswitch-switch
echo "########## Configuring br-int and br-ex for OpenvSwitch ##########"
sleep 5
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
echo "########## Configuring IP address for br-ex ##########"
ifaces=/etc/network/interfaces
test -f $ifaces.orig1 || cp $ifaces $ifaces.orig1
rm $ifaces
cat << EOF > $ifaces
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address $LOCAL_IP
netmask $NETMASK_LOCAL
# The primary network interface
auto br-ex
iface br-ex inet static
address $MASTER
netmask $NETMASK_MASTER
gateway $GATEWAY_IP
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet manual
up ifconfig \$IFACE 0.0.0.0 up
up ip link set \$IFACE promisc on
down ip link set \$IFACE promisc off
down ifconfig \$IFACE down
EOF
sleep 5
echo "Reboot Server"
#sleep 5
init 6
@ -1,915 +0,0 @@
#!/bin/bash -ex
source config.cfg
#************************************************************************#
########## Python clientNTP, MARIADB, RabbitMQ ###########################
#************************************************************************#
echo "Install python client"
apt-get -y install python-openstackclient
sleep 5
echo "Install and config NTP"
sleep 3
apt-get install ntp -y
cp /etc/ntp.conf /etc/ntp.conf.bka
rm /etc/ntp.conf
cat /etc/ntp.conf.bka | grep -v ^# | grep -v ^$ >> /etc/ntp.conf
## Config NTP in LIBERTY
sed -i 's/server ntp.ubuntu.com/ \
server 0.vn.pool.ntp.org iburst \
server 1.asia.pool.ntp.org iburst \
server 2.asia.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/restrict -4 default kod notrap nomodify nopeer noquery/ \
#restrict -4 default kod notrap nomodify nopeer noquery/g' /etc/ntp.conf
sed -i 's/restrict -6 default kod notrap nomodify nopeer noquery/ \
restrict -4 default kod notrap nomodify \
restrict -6 default kod notrap nomodify/g' /etc/ntp.conf
# sed -i 's/server/#server/' /etc/ntp.conf
# echo "server $LOCAL_IP" >> /etc/ntp.conf
##############################################
echo "Install and Config RabbitMQ"
sleep 3
apt-get install rabbitmq-server -y
rabbitmqctl add_user openstack $RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmqctl change_password guest $RABBIT_PASS
sleep 3
service rabbitmq-server restart
echo "Finish setup pre-install package !!!"
echo "##### Install MYSQL #####"
sleep 3
echo mysql-server mysql-server/root_password password $MYSQL_PASS \
| debconf-set-selections
echo mysql-server mysql-server/root_password_again password $MYSQL_PASS \
| debconf-set-selections
apt-get -y install mariadb-server python-mysqldb curl
echo "##### Configuring MYSQL #####"
sleep 3
echo "########## CONFIGURING FOR MYSQL ##########"
sleep 5
touch /etc/mysql/conf.d/mysqld_openstack.cnf
cat << EOF > /etc/mysql/conf.d/mysqld_openstack.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF
sleep 5
echo "Restart MYSQL"
service mysql restart
#********************************************************#
#################### KEYSTONE ############################
#********************************************************#
echo "Create Database for Keystone"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "##### Install keystone #####"
sleep 3
echo "manual" > /etc/init/keystone.override
apt-get -y install keystone python-openstackclient apache2 \
libapache2-mod-wsgi memcached python-memcache
# Back up keystone.conf
filekeystone=/etc/keystone/keystone.conf
test -f $filekeystone.orig || cp $filekeystone $filekeystone.orig
#Config file /etc/keystone/keystone.conf
cat << EOF > $filekeystone
[DEFAULT]
log_dir = /var/log/keystone
admin_token = $TOKEN_PASS
public_bind_host = $LOCAL_IP
admin_bind_host = $LOCAL_IP
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:$KEYSTONE_DBPASS@$LOCAL_IP/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = sql
[role]
[saml]
[signing]
[ssl]
[token]
provider = uuid
driver = memcache
[tokenless_auth]
[trust]
[extra_headers]
Distribution = Ubuntu
EOF
#
su -s /bin/sh -c "keystone-manage db_sync" keystone
echo "ServerName $LOCAL_IP" >> /etc/apache2/apache2.conf
cat << EOF > /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
EOF
ln -s /etc/apache2/sites-available/wsgi-keystone.conf \
/etc/apache2/sites-enabled
service apache2 restart
rm -f /var/lib/keystone/keystone.db
export OS_TOKEN="$TOKEN_PASS"
export OS_URL=http://$LOCAL_IP:35357/v2.0
# export OS_SERVICE_TOKEN="$TOKEN_PASS"
# export OS_SERVICE_ENDPOINT="http://$LOCAL_IP:35357/v2.0"
# export SERVICE_ENDPOINT="http://$LOCAL_IP:35357/v2.0"
### Identity service
openstack service create --name keystone --description \
"OpenStack Identity" identity
### Create the Identity service API endpoint
openstack endpoint create \
--publicurl http://$LOCAL_IP:5000/v2.0 \
--internalurl http://$LOCAL_IP:5000/v2.0 \
--adminurl http://$LOCAL_IP:35357/v2.0 \
--region RegionOne \
identity
#### To create tenants, users, and roles ADMIN
openstack project create --description "Admin Project" admin
openstack user create --password $ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
#### To create tenants, users, and roles SERVICE
openstack project create --description "Service Project" service
#### To create tenants, users, and roles DEMO
openstack project create --description "Demo Project" demo
openstack user create --password $ADMIN_PASS demo
### Create the user role
openstack role create user
openstack role add --project demo --user demo user
#################
unset OS_TOKEN OS_URL
# Create environment variable files
echo "export OS_PROJECT_DOMAIN_ID=default" > admin-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> admin-openrc.sh
echo "export OS_PROJECT_NAME=admin" >> admin-openrc.sh
echo "export OS_TENANT_NAME=admin" >> admin-openrc.sh
echo "export OS_USERNAME=admin" >> admin-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> admin-openrc.sh
echo "export OS_AUTH_URL=http://$LOCAL_IP:35357/v3" >> admin-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> admin-openrc.sh
sleep 5
echo "########## Execute environment script ##########"
chmod +x admin-openrc.sh
cat admin-openrc.sh >> /etc/profile
cp admin-openrc.sh /root/admin-openrc.sh
source admin-openrc.sh
echo "export OS_PROJECT_DOMAIN_ID=default" > demo-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> demo-openrc.sh
echo "export OS_PROJECT_NAME=demo" >> demo-openrc.sh
echo "export OS_TENANT_NAME=demo" >> demo-openrc.sh
echo "export OS_USERNAME=demo" >> demo-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> demo-openrc.sh
echo "export OS_AUTH_URL=http://$LOCAL_IP:35357/v3" >> demo-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> demo-openrc.sh
chmod +x demo-openrc.sh
cp demo-openrc.sh /root/demo-openrc.sh
#*****************************************************#
#################### GLANCE ###########################
#*****************************************************#
echo "Create the database for GLANCE"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';
FLUSH PRIVILEGES;
EOF
sleep 5
echo "Create user, endpoint for GLANCE"
openstack user create --password $ADMIN_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description \
"OpenStack Image service" image
openstack endpoint create \
--publicurl http://$LOCAL_IP:9292 \
--internalurl http://$LOCAL_IP:9292 \
--adminurl http://$LOCAL_IP:9292 \
--region RegionOne \
image
echo "########## Install GLANCE ##########"
apt-get -y install glance python-glanceclient
sleep 10
echo "########## Configuring GLANCE API ##########"
sleep 5
# Back up glance-api.conf
fileglanceapicontrol=/etc/glance/glance-api.conf
test -f $fileglanceapicontrol.orig \
|| cp $fileglanceapicontrol $fileglanceapicontrol.orig
rm $fileglanceapicontrol
touch $fileglanceapicontrol
#Configuring glance config file /etc/glance/glance-api.conf
cat << EOF > $fileglanceapicontrol
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$LOCAL_IP/glance
backend = sqlalchemy
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[task]
[taskflow_executor]
EOF
#
sleep 10
echo "########## Configuring GLANCE REGISTRY ##########"
# Back up glance-registry.conf
fileglanceregcontrol=/etc/glance/glance-registry.conf
test -f $fileglanceregcontrol.orig \
|| cp $fileglanceregcontrol $fileglanceregcontrol.orig
rm $fileglanceregcontrol
touch $fileglanceregcontrol
#Config file /etc/glance/glance-registry.conf
cat << EOF > $fileglanceregcontrol
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$LOCAL_IP/glance
backend = sqlalchemy
[glance_store]
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
EOF
sleep 7
echo "########## Remove Glance default DB ##########"
rm /var/lib/glance/glance.sqlite
chown glance:glance $fileglanceapicontrol
chown glance:glance $fileglanceregcontrol
sleep 7
echo "########## Syncing DB for Glance ##########"
glance-manage db_sync
sleep 5
echo "########## Restarting GLANCE service ... ##########"
service glance-registry restart
service glance-api restart
sleep 3
service glance-registry restart
service glance-api restart
echo "Remove glance.sqlite "
rm -f /var/lib/glance/glance.sqlite
sleep 3
echo "########## Registering Cirros IMAGE for GLANCE ... ##########"
mkdir images
cd images/
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public --progress
cd /root/
# rm -r /tmp/images
sleep 5
echo "########## Testing Glance ##########"
glance image-list
#*****************************************************#
##################### NOVA ############################
#*****************************************************#
echo "Create DB for NOVA "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NOVA"
openstack user create --password $ADMIN_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create \
--publicurl http://$LOCAL_IP:8774/v2/%\(tenant_id\)s \
--internalurl http://$LOCAL_IP:8774/v2/%\(tenant_id\)s \
--adminurl http://$LOCAL_IP:8774/v2/%\(tenant_id\)s \
--region RegionOne \
compute
echo "########## Install NOVA in $LOCAL_IP ##########"
sleep 5
apt-get -y install nova-compute nova-api nova-cert nova-conductor \
nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
echo "libguestfs-tools libguestfs/update-appliance boolean true" \
| debconf-set-selections
apt-get -y install libguestfs-tools sysfsutils
######## Back up NOVA configuration ########
sleep 7
#
controlnova=/etc/nova/nova.conf
test -f $controlnova.orig || cp $controlnova $controlnova.orig
rm $controlnova
touch $controlnova
cat << EOF >> $controlnova
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
my_ip = $LOCAL_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
verbose = True
[database]
connection = mysql+pymysql://nova:$NOVA_DBPASS@$LOCAL_IP/nova
[oslo_messaging_rabbit]
rabbit_host = $LOCAL_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = $NOVA_PASS
[vnc]
vncserver_listen = \$my_ip
vncserver_proxyclient_address = \$my_ip
novncproxy_base_url = http://$BR_EX_IP:6080/vnc_auto.html
[glance]
host = $LOCAL_IP
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://$LOCAL_IP:9696
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = $NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = $DEFAULT_PASS
[cinder]
os_region_name = RegionOne
EOF
echo "########## Remove Nova default db ##########"
sleep 7
rm /var/lib/nova/nova.sqlite
echo "########## Syncing Nova DB ##########"
sleep 7
su -s /bin/sh -c "nova-manage db sync" nova
# fix bug libvirtError: internal error: no supported architecture for os type 'hvm'
# echo 'kvm_intel' >> /etc/modules
echo "########## Restarting NOVA ... ##########"
sleep 7
service nova-api restart;
service nova-cert restart;
service nova-consoleauth restart;
service nova-scheduler restart;
service nova-conductor restart;
service nova-novncproxy restart;
service nova-compute restart;
service nova-console restart
sleep 7
echo "########## Restarting NOVA ... ##########"
service nova-api restart;
service nova-cert restart;
service nova-consoleauth restart;
service nova-scheduler restart;
service nova-conductor restart;
service nova-novncproxy restart;
service nova-compute restart;
service nova-console restart
echo "########## Testing NOVA service ##########"
nova-manage service list
#**********************************************************#
####################### NEUTRON ############################
#**********************************************************#
echo "Create DB for NEUTRON "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NEUTRON"
openstack user create --password $ADMIN_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description \
"OpenStack Networking" network
openstack endpoint create \
--publicurl http://$LOCAL_IP:9696 \
--adminurl http://$LOCAL_IP:9696 \
--internalurl http://$LOCAL_IP:9696 \
--region RegionOne \
network
echo "########## INSTALLING NEUTRON ##########"
apt-get -y install neutron-server python-neutronclient \
neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent \
neutron-plugin-openvswitch neutron-common
######## Back up neutron.conf on the CONTROLLER ########
echo "########## Editing file neutron.conf ##########"
controlneutron=/etc/neutron/neutron.conf
test -f $controlneutron.orig || cp $controlneutron $controlneutron.orig
rm $controlneutron
cat << EOF > $controlneutron
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://$LOCAL_IP:8774/v2
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
[database]
connection = mysql+pymysql://neutron:$NEUTRON_DBPASS@$LOCAL_IP/neutron
[nova]
auth_url = http://$LOCAL_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = $NOVA_PASS
[oslo_concurrency]
lock_path = \$state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $LOCAL_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
EOF
######## Back up the ML2 configuration on the CONTROLLER ########
echo "########## Editing ml2_conf.ini ##########"
sleep 7
controlML2=/etc/neutron/plugins/ml2/ml2_conf.ini
test -f $controlML2.orig || cp $controlML2 $controlML2.orig
rm $controlML2
cat << EOF > $controlML2
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = $LOCAL_IP
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
EOF
# DHCP option 26 advertises an MTU of 1454 to instances (accounts for GRE overhead)
echo "Fix MTU issue"
sleep 3
echo "dhcp-option-force=26,1454" > /etc/neutron/dnsmasq-neutron.conf
killall dnsmasq
######## Back up the METADATA configuration on the CONTROLLER ########
echo "########## Editing metadata_agent.ini ##########"
sleep 7
metadatafile=/etc/neutron/metadata_agent.ini
test -f $metadatafile.orig || cp $metadatafile $metadatafile.orig
rm $metadatafile
cat << EOF > $metadatafile
[DEFAULT]
verbose = True
auth_uri = http://$LOCAL_IP:5000
auth_url = http://$LOCAL_IP:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
nova_metadata_ip = $LOCAL_IP
metadata_proxy_shared_secret = $METADATA_SECRET
EOF
######## Edit the DHCP configuration ########
echo "########## Editing dhcp_agent.ini ##########"
sleep 7
dhcpfile=/etc/neutron/dhcp_agent.ini
test -f $dhcpfile.orig || cp $dhcpfile $dhcpfile.orig
rm $dhcpfile
cat << EOF > $dhcpfile
[DEFAULT]
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[AGENT]
EOF
###################### Back up the L3 configuration ###########################
echo "########## Editing l3_agent.ini ##########"
sleep 7
l3file=/etc/neutron/l3_agent.ini
test -f $l3file.orig || cp $l3file $l3file.orig
rm $l3file
touch $l3file
cat << EOF >> $l3file
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
EOF
chown root:neutron /etc/neutron/*
chown root:neutron $controlML2
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
echo "########## RESTARTING NEUTRON ##########"
sleep 5
#for i in $( ls /etc/init.d/neutron-* ); do service `basename $i` restart;done
service neutron-server restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart
echo "########## RESTARTING NEUTRON (2nd time) ##########"
sleep 5
#for i in $( ls /etc/init.d/neutron-* ); do service `basename $i` restart;done
service neutron-server restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart
# Add NEUTRON service restarts to /etc/rc.local so they run on every reboot (workaround)
sed -i "s/exit 0/# exit 0/g" /etc/rc.local
echo "service neutron-server restart" >> /etc/rc.local
echo "service neutron-l3-agent restart" >> /etc/rc.local
echo "service neutron-dhcp-agent restart" >> /etc/rc.local
echo "service neutron-metadata-agent restart" >> /etc/rc.local
echo "service openvswitch-switch restart" >> /etc/rc.local
echo "service neutron-plugin-openvswitch-agent restart" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
echo "########## CHECKING NEUTRON (waiting 30s) ##########"
# Wait for neutron to finish starting before checking
sleep 30
neutron agent-list
#**********************************************************#
####################### HORIZON ############################
#**********************************************************#
echo "########## Installing Dashboard package ##########"
sleep 5
apt-get -y install openstack-dashboard
# echo "########## Fix bug in apache2 ##########"
# sleep 5
# Fix bug apache in ubuntu 14.04
# echo "ServerName localhost" > /etc/apache2/conf-available/servername.conf
# sudo a2enconf servername
echo "########## Creating redirect page ##########"
filehtml=/var/www/html/index.html
test -f $filehtml.orig || cp $filehtml $filehtml.orig
rm $filehtml
touch $filehtml
cat << EOF >> $filehtml
<html>
<head>
<META HTTP-EQUIV="Refresh" Content="0.5; URL=http://$BR_EX_IP/horizon">
</head>
<body>
<center> <h1>Redirecting to the OpenStack Dashboard</h1> </center>
</body>
</html>
EOF
# Allow setting instance passwords in the dashboard (only applies in the image)
sed -i "s/'can_set_password': False/'can_set_password': True/g" \
/etc/openstack-dashboard/local_settings.py
# Restart apache2 and memcached
service apache2 restart
service memcached restart
echo "########## Finish setting up Horizon ##########"
echo "########## LOGIN INFORMATION IN HORIZON ##########"
echo "URL: http://$BR_EX_IP/horizon"
echo "User: admin or demo"
echo "Password:" $ADMIN_PASS


@@ -1,263 +0,0 @@
# Installation and User Guide for OpenStack LIBERTY AIO
### Introduction
- The script is used to install OpenStack LIBERTY on ONLY one server
- Required components:
- MariaDB, NTP
- Keystone Version 3
- Glance
- Neutron (ML2, OpenvSwitch)
### Before you begin
- Install on VMware Workstation or a physical server meeting the following requirements:
```sh
- RAM: 4GB
- HDD
- HDD1: 60GB (used for installing OS and OpenStack components)
- HDD2: 40GB (used for installing CINDER, which provides VOLUMEs for OpenStack) - NOTE: OPTIONAL IF YOU DO NOT INSTALL THIS SERVICE.
- 02 NIC with the following order:
- NIC 1: - eth0 - Management Network
- NIC 2: - eth1 - External Network
- CPU supports virtualization
```
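A quick way to confirm the CPU virtualization requirement before installing (a generic Linux check; it counts `vmx`/`svm` flags in `/proc/cpuinfo`, and a count of 0 means VT is unavailable or not exposed to the VM):

```sh
# Count CPU virtualization flags: vmx (Intel VT-x) or svm (AMD-V).
# grep -c exits non-zero when the count is 0, so guard with || true.
vt_count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$vt_count" -gt 0 ]; then
    echo "virtualization supported ($vt_count logical CPUs expose the flag)"
else
    echo "no vmx/svm flag found - enable VT in VMware/BIOS"
fi
```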
### Installation steps
#### VMware Environment Preparation
Set up the configuration as follows. Note that:
- NIC1: using Vmnet 1 or hostonly
- NIC2: using bridge
- CPU: 2x2, remembering to select VT
![Topo-liberty](/images/VMware1.png)
#### Option 1: Single-script installation (if you choose this option, use only this one)
- After finishing the installation steps with this option, move straight to the DASHBOARD section. Do not also try the second option.
#### Download GIT and configure DHCP for all NICs.
- Use the following commands to configure the network and make sure your server has both NICs.
```sh
cat << EOF > /etc/network/interfaces
auto lo
iface lo inet loopback
# NIC MGNT
auto eth0
iface eth0 inet dhcp
# NIC EXT
auto eth1
iface eth1 inet dhcp
EOF
```
- Restart networking
```sh
ifdown -a && ifup -a
```
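Restarting the interfaces this way can briefly drop an SSH session; once reconnected, you can list each interface with its IPv4 address to confirm both NICs came up (assuming the `iproute2` tools are available):

```sh
# Show each interface the kernel knows about with its IPv4 address;
# an empty address means the link is down or unconfigured.
for nic in /sys/class/net/*; do
    name=$(basename "$nic")
    addr=$(ip -4 -o addr show "$name" 2>/dev/null | awk '{print $4}')
    echo "$name ${addr:-<no IPv4>}"
done
```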
- Use the `landscape-sysinfo` command to verify that the server has both NICs, then check the IP addresses on the OpenStack server.
```sh
root@controller:~# landscape-sysinfo
System load: 0.93 Users logged in: 1
Usage of /: 4.0% of 94.11GB IP address for eth0: 10.10.10.159
Memory usage: 53% IP address for eth1: 172.16.69.228
Swap usage: 0%
```
- Check the Internet connection with the `ping google.com` command.
```sh
root@controller:~# ping google.com
PING google.com (203.162.236.211) 56(84) bytes of data.
64 bytes from 203.162.236.211: icmp_seq=1 ttl=57 time=0.877 ms
64 bytes from 203.162.236.211: icmp_seq=2 ttl=57 time=0.786 ms
64 bytes from 203.162.236.211: icmp_seq=3 ttl=57 time=0.781 ms
```
- Install GIT with root permission
```sh
su -
apt-get update
apt-get -y install git
```
- Execute the script to set up a static IP address for the OpenStack server.
```sh
git clone https://github.com/vietstacker/openstack-liberty-multinode.git
mv /root/openstack-liberty-multinode/LIBERTY-U14.04-AIO /root
rm -rf openstack-liberty-multinode
cd LIBERTY-U14.04-AIO
chmod +x *.sh
bash AIO-LIBERTY-1.sh
```
- The server will be restarted. You need to login again, then execute the next script.
- Execute the script for installing all remaining components.
```sh
bash AIO-LIBERTY-2.sh
```
- Wait 30-60 minutes for the services to be downloaded and configured. Then move on to creating networks and VMs.
- The OpenStack installation finishes here!
#### Option 2: Execute each script
#### Download and execute the script
- Download script
- Log in with root permission. On Ubuntu 14.04 you must log in as a normal user first, then switch to root with the `su -` command.
```sh
git clone https://github.com/vietstacker/openstack-liberty-multinode.git
mv /root/openstack-liberty-multinode/LIBERTY-U14.04-AIO /root
rm -rf openstack-liberty-multinode
cd LIBERTY-U14.04-AIO
chmod +x *.sh
```
##### Execute the script to set up IP address for all NICs.
- This script automatically sets up static IP addresses for all NICs.
```sh
bash 0-liberty-aio-ipadd.sh
```
##### Install NTP, MARIADB, RABBITMQ packages
- Log in to the server again with the root account, then run the following scripts.
```sh
su -
cd LIBERTY-U14.04-AIO
bash 1-liberty-aio-prepare.sh
```
- After the script runs, the server restarts immediately.
##### Install Keystone
- Use the following script to install Keystone
```sh
bash 2-liberty-aio-keystone.sh
```
- Run the command below to set the OpenStack environment variables
```sh
source admin-openrc.sh
```
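For reference, `admin-openrc.sh` is a plain shell file that exports the standard Keystone v3 client variables; the sketch below shows what such a file typically contains. The path, IP, and values here are illustrative only — the real file is generated with the passwords from `config.cfg`:

```sh
# Illustrative only: write a sample credentials file and load it.
cat > /tmp/admin-openrc.example.sh << 'EOF'
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Welcome123
export OS_AUTH_URL=http://10.10.10.159:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
# The openstack CLI reads these OS_* variables from the environment.
. /tmp/admin-openrc.example.sh
echo "$OS_USERNAME"
```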
- Use the command below to check whether Keystone was installed correctly.
```sh
openstack token issue
```
- If the result looks like the following, your installation succeeded.
```sh
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2015-11-20T04:36:40.458714Z |
| id | afa93ac41b9f432d989cc6f5c235c44f |
| project_id | a863f6011c9f4d748a9af23983284a90 |
| user_id | 07817eb3060941598fe406312b8aa448 |
+------------+----------------------------------+
```
##### Install GLANCE
```sh
bash 3-liberty-aio-glance.sh
```
##### Install NOVA
```
bash 4-liberty-aio-nova.sh
```
##### Install NEUTRON
- Install OpenvSwitch and re-configure NIC
```sh
bash 5-liberty-aio-config-ip-neutron.sh
```
- After the script runs successfully, your server will be restarted. Log in with the root account and finish installing NEUTRON with the script below.
```sh
bash 6-liberty-aio-install-neutron.sh
```
##### Install Horizon
```
bash 7-liberty-aio-install-horizon.sh
```
## User guide: using the dashboard to create networks, VMs, and rules
### Create rule for admin project
- Login to the dashboard
![liberty-horizon1.png](/images/liberty-horizon1.png)
- Select `admin => Access & Security => Manage Rules` tab
![liberty-horizon2.png](/images/liberty-horizon2.png)
- Select `Add Rule` tab
![liberty-horizon3.png](/images/liberty-horizon3.png)
- Add a rule that allows SSH access to the VMs
![liberty-horizon4.png](/images/liberty-horizon4.png)
- Do the same for the ICMP rule (so that pinging virtual machines is allowed) and any other rules you need
### Create network
#### Create external network
- Select the `Admin => Networks => Create Network` tab
![liberty-net-ext1.png](/images/liberty-net-ext1.png)
- Enter the information and make the selections shown in the following image
![liberty-net-ext2.png](/images/liberty-net-ext2.png)
- Click `ext-net` to declare the subnet mask for the external network
![liberty-net-ext3.png](/images/liberty-net-ext3.png)
- Select the `Create Subnet` tab
![liberty-net-ext4.png](/images/liberty-net-ext4.png)
- Initialize IP range for subnet of the external network
![liberty-net-ext5.png](/images/liberty-net-ext5.png)
- Declare pools and DNS
![liberty-net-ext6.png](/images/liberty-net-ext6.png)
#### Create the internal network
- Select the tabs in order: `Project admin => Network => Networks => Create Network`
![liberty-net-int1.png](/images/liberty-net-int1.png)
- Initialize for the internal network
![liberty-net-int2.png](/images/liberty-net-int2.png)
- Declare subnet for the internal network
![liberty-net-int3.png](/images/liberty-net-int3.png)
- Declare IP range for the internal network
![liberty-net-int4.png](/images/liberty-net-int4.png)
#### Create a Router for admin project
- Select the tabs in order: `Project admin => Routers => Create Router`
![liberty-r1.png](/images/liberty-r1.png)
- Enter the router name and fill in the fields as in the image below
![liberty-r2.png](/images/liberty-r2.png)
- Assign interface for the router
![liberty-r3.png](/images/liberty-r3.png)
![liberty-r4.png](/images/liberty-r4.png)
![liberty-r5.png](/images/liberty-r5.png)
- This ends the steps for creating the external network, internal network, and router
## Create Instance
- Select the tabs in order: `Project admin => Instances => Launch Instance`
![liberty-instance1.png](/images/liberty-instance1.png)
![liberty-instance2.png](/images/liberty-instance2.png)
![liberty-instance3.png](/images/liberty-instance3.png)


@@ -1,57 +0,0 @@
#### Env variable configs
# Network declarations
eth0_address=`/sbin/ifconfig eth0|awk '/inet addr/ {print $2}'|cut -f2 -d ":" `
eth1_address=`/sbin/ifconfig eth1|awk '/inet addr/ {print $2}'|cut -f2 -d ":" `
eth0_netmask=`/sbin/ifconfig eth0|awk '/inet addr/ {print $4}'|cut -f2 -d ":" `
eth1_netmask=`/sbin/ifconfig eth1|awk '/inet addr/ {print $4}'|cut -f2 -d ":" `
LOCAL_IP=$eth0_address
MASTER=$eth1_address
NETMASK_LOCAL=$eth0_netmask
NETMASK_MASTER=$eth1_netmask
GATEWAY_IP=`route -n | grep 'UG[ \t]' | awk '{print $2}'`
br_ex_address=`/sbin/ifconfig br-ex|awk '/inet addr/ {print $2}'|cut -f2 -d ":" `
BR_EX_IP=$br_ex_address
# Set password
DEFAULT_PASS='Welcome123'
RABBIT_PASS="$DEFAULT_PASS"
MYSQL_PASS="$DEFAULT_PASS"
TOKEN_PASS="$DEFAULT_PASS"
ADMIN_PASS="$DEFAULT_PASS"
SERVICE_PASSWORD="$DEFAULT_PASS"
METADATA_SECRET="$DEFAULT_PASS"
SERVICE_TENANT_NAME="service"
ADMIN_TENANT_NAME="admin"
DEMO_TENANT_NAME="demo"
INVIS_TENANT_NAME="invisible_to_admin"
ADMIN_USER_NAME="admin"
DEMO_USER_NAME="demo"
# Environment variable for OPS service
KEYSTONE_PASS="$DEFAULT_PASS"
GLANCE_PASS="$DEFAULT_PASS"
NOVA_PASS="$DEFAULT_PASS"
NEUTRON_PASS="$DEFAULT_PASS"
CINDER_PASS="$DEFAULT_PASS"
SWIFT_PASS="$DEFAULT_PASS"
HEAT_PASS="$DEFAULT_PASS"
# Environment variable for DB
KEYSTONE_DBPASS="$DEFAULT_PASS"
GLANCE_DBPASS="$DEFAULT_PASS"
NOVA_DBPASS="$DEFAULT_PASS"
NEUTRON_DBPASS="$DEFAULT_PASS"
CINDER_DBPASS="$DEFAULT_PASS"
HEAT_DBPASS="$DEFAULT_PASS"
# User declaration in Keystone
ADMIN_ROLE_NAME="admin"
MEMBER_ROLE_NAME="Member"
KEYSTONEADMIN_ROLE_NAME="KeystoneAdmin"
KEYSTONESERVICE_ROLE_NAME="KeystoneServiceAdmin"
# OS PASS ROOT


@@ -1,201 +0,0 @@
# Installation Steps
### Prepare the LAB environment
- Using a VMware Workstation environment
#### Configure CONTROLLER NODE
```sh
RAM: 4GB
CPU: 2x2, VT supported
NIC1: eth0: 10.10.10.0/24 (internal range, using vmnet or hostonly in VMware Workstation)
NIC2: eth1: 172.16.69.0/24, gateway 172.16.69.1 ( external range - using NAT or Bridge VMware Workstation)
HDD: 60GB
```
#### Configure COMPUTE NODE
```sh
RAM: 4GB
CPU: 2x2, VT supported
NIC1: eth0: 10.10.10.0/24 (internal range, using vmnet or hostonly in VMware Workstation)
NIC2: eth1: 172.16.69.0/24, gateway 172.16.69.1 ( external range - using NAT or Bridge VMware Workstation )
HDD: 1000GB
```
### Execute script
- Install the git package and download the scripts
```sh
su -
apt-get update
apt-get -y install git
git clone https://github.com/vietstacker/openstack-liberty-multinode.git
mv /root/openstack-liberty-multinode/LIBERTY-U14.04-LB/ /root/
rm -rf openstack-liberty-multinode/
cd LIBERTY-U14.04-LB/
chmod +x *.sh
```
## Install on CONTROLLER NODE
### Run the IP configuration script and set up the Liberty repos
- Edit the config file in this directory with the IPs you want to use
```sh
bash ctl-1-ipadd.sh
```
### Install NTP, MariaDB packages
```sh
bash ctl-2-prepare.sh
```
### Install KEYSTONE
- Install Keystone
```sh
bash ctl-3.keystone.sh
```
- Set the environment variables
```sh
source admin-openrc.sh
```
### Install GLANCE
```sh
bash ctl-4-glance.sh
```
### Install NOVA
```sh
bash ctl-5-nova.sh
```
### Install NEUTRON
```sh
bash ctl-6-neutron.sh
```
- After the NEUTRON installation is done, the controller node will restart.
- Log in as `root` and execute the Horizon installation script.
### Install HORIZON
- Log in with `root` privilege and execute the script below
```sh
bash ctl-horizon.sh
```
## Install on COMPUTE NODE
### Download GIT and the scripts
- Install the git package and download the scripts
```sh
su -
apt-get update
apt-get -y install git
git clone https://github.com/vietstacker/openstack-liberty-multinode.git
mv /root/openstack-liberty-multinode/LIBERTY-U14.04-LB/ /root/
rm -rf openstack-liberty-multinode/
cd LIBERTY-U14.04-LB/
chmod +x *.sh
```
### Establish IP and hostname
- Edit the config file to match your IPs
- Execute the script to set the IP and hostname
```sh
bash com1-ipdd.sh
```
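If you prefer not to edit the config file by hand, the IP variables can be rewritten with `sed`. A sketch on a throwaway copy (the variable names match this repo's `config.cfg`; the target IP is an example):

```sh
# Scratch copy with the two variables we want to change (same names as config.cfg).
cat > /tmp/config.demo.cfg << 'EOF'
CON_MGNT_IP=10.10.10.140
COM1_MGNT_IP=10.10.10.141
EOF
# Rewrite the compute node's management IP in place.
sed -i 's/^COM1_MGNT_IP=.*/COM1_MGNT_IP=10.10.10.151/' /tmp/config.demo.cfg
grep '^COM1_MGNT_IP' /tmp/config.demo.cfg   # -> COM1_MGNT_IP=10.10.10.151
```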
- The server will restart after script `com1-ipdd.sh` is executed.
- Log in to the server with root privilege and execute the Nova component installation script
```sh
su -
cd LIBERTY-U14.04-LB/
bash com1-prepare.sh
```
After installing the COMPUTE NODE, move on to the dashboard usage guide below.
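The compute services can take a little while to register with the controller. Rather than sleeping a fixed time, a small generic retry helper can poll until a check passes (a sketch; the `nova-manage service list` check in the comment is what these scripts run on the controller):

```sh
# Retry CMD up to MAX times, one second apart; return 1 if it never succeeds.
wait_for() {
    cmd=$1
    max=${2:-30}
    i=0
    until sh -c "$cmd" > /dev/null 2>&1; do
        i=$((i + 1))
        if [ "$i" -ge "$max" ]; then
            return 1
        fi
        sleep 1
    done
    return 0
}

# Example with a check that succeeds immediately:
wait_for "true" 5 && echo "check passed"
# On the controller you might use something like:
#   wait_for "nova-manage service list | grep -q compute1" 60
```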
## Using the dashboard to initialize networks, VMs, and rules
### Initialize rule for project admin
- Log in to the dashboard
![liberty-horizon1.png](/images/liberty-horizon1.png)
- Select tab `admin => Access & Security => Manage Rules`
![liberty-horizon2.png](/images/liberty-horizon2.png)
- Select tab `Add Rule`
![liberty-horizon3.png](/images/liberty-horizon3.png)
- Add a rule to allow SSH from outside to the virtual machines
![liberty-horizon4.png](/images/liberty-horizon4.png)
- Do the same for the ICMP rule (to allow pinging the virtual machines) and the other rules.
### Initialize network
#### Initialize external network range
- Select tab `Admin => Networks => Create Network`
![liberty-net-ext1.png](/images/liberty-net-ext1.png)
- Enter the values and select the tabs as in the picture below
![liberty-net-ext2.png](/images/liberty-net-ext2.png)
- Click the newly created `ext-net` to declare the subnet for the external range.
![liberty-net-ext3.png](/images/liberty-net-ext3.png)
- Select the `Create Subnet` tab
![liberty-net-ext4.png](/images/liberty-net-ext4.png)
- Declare IP range of subnet for external range
![liberty-net-ext5.png](/images/liberty-net-ext5.png)
- Declare pools and DNS
![liberty-net-ext6.png](/images/liberty-net-ext6.png)
#### Initialize internal network range
- Select the tabs in order: `Project admin => Network => Networks => Create Network`
![liberty-net-int1.png](/images/liberty-net-int1.png)
- Declare name for internal network
![liberty-net-int2.png](/images/liberty-net-int2.png)
- Declare subnet for internal network
![liberty-net-int3.png](/images/liberty-net-int3.png)
- Declare IP range for Internal network
![liberty-net-int4.png](/images/liberty-net-int4.png)
#### Initialize Router for project admin
- Select the tabs in order: `Project admin => Routers => Create Router`
![liberty-r1.png](/images/liberty-r1.png)
- Enter the router name and make the selections as in the picture below
![liberty-r2.png](/images/liberty-r2.png)
- Attach an interface to the router
![liberty-r3.png](/images/liberty-r3.png)
![liberty-r4.png](/images/liberty-r4.png)
![liberty-r5.png](/images/liberty-r5.png)
- This ends the steps for initializing the external network, internal network, and router
## Initialize virtual machine (Instance)
- Select the tabs `Project admin => Instances => Launch Instance`
![liberty-instance1.png](/images/liberty-instance1.png)
![liberty-instance2.png](/images/liberty-instance2.png)
![liberty-instance3.png](/images/liberty-instance3.png)


@@ -1,68 +0,0 @@
#!/bin/bash -ex
source config.cfg
sleep 3
echo "#### Update for Ubuntu #####"
apt-get install software-properties-common -y
add-apt-repository cloud-archive:liberty -y
sleep 3
echo "##### update for Ubuntu #####"
apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade -y
echo "##### Configuring hostname for COMPUTE1 node... #####"
sleep 3
echo "compute1" > /etc/hostname
hostname -F /etc/hostname
iphost=/etc/hosts
test -f $iphost.orig || cp $iphost $iphost.orig
rm $iphost
touch $iphost
cat << EOF >> $iphost
127.0.0.1 localhost
127.0.0.1 compute1
$CON_MGNT_IP controller
$COM1_MGNT_IP compute1
EOF
sleep 3
echo "##### Config network for COMPUTE NODE ####"
ifaces=/etc/network/interfaces
test -f $ifaces.orig || cp $ifaces $ifaces.orig
rm $ifaces
touch $ifaces
cat << EOF >> $ifaces
# Set the IP addresses for the compute1 node
# LOOPBACK NET
auto lo
iface lo inet loopback
# MGNT NETWORK
auto eth0
iface eth0 inet static
address $COM1_MGNT_IP
netmask $NETMASK_ADD_MGNT
# EXT NETWORK
auto eth1
iface eth1 inet static
address $COM1_EXT_IP
netmask $NETMASK_ADD_EXT
gateway $GATEWAY_IP_EXT
dns-nameservers 8.8.8.8
EOF
sleep 5
echo "##### Rebooting machine ... #####"
init 6
#


@@ -1,217 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
#
echo "##### Install python openstack client ##### "
apt-get -y install python-openstackclient
echo "##### Install NTP ##### "
apt-get install ntp -y
apt-get install python-mysqldb -y
#
echo "##### Backup NTP configuration... ##### "
sleep 7
cp /etc/ntp.conf /etc/ntp.conf.bka
rm /etc/ntp.conf
cat /etc/ntp.conf.bka | grep -v ^# | grep -v ^$ >> /etc/ntp.conf
#
sed -i 's/server 0.ubuntu.pool.ntp.org/ \
#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/ \
#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/ \
#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/ \
#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i "s/server ntp.ubuntu.com/server $CON_MGNT_IP iburst/g" /etc/ntp.conf
sleep 5
echo "##### Install packages for NOVA #####"
apt-get -y install nova-compute
echo "libguestfs-tools libguestfs/update-appliance boolean true" \
| debconf-set-selections
apt-get -y install libguestfs-tools sysfsutils guestfsd python-guestfs
# Fix password injection on the KVM hypervisor
update-guestfs-appliance
chmod 0644 /boot/vmlinuz*
usermod -a -G kvm root
echo "############ Configuring in nova.conf ...############"
sleep 5
########
# Back up nova.conf before editing
filenova=/etc/nova/nova.conf
test -f $filenova.orig || cp $filenova $filenova.orig
# Write the following contents to /etc/nova/nova.conf
cat << EOF > $filenova
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = $COM1_MGNT_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
verbose = True
enable_instance_password = True
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = $KEYSTONE_PASS
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = \$my_ip
novncproxy_base_url = http://$CON_EXT_IP:6080/vnc_auto.html
[glance]
host = $CON_MGNT_IP
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://$CON_MGNT_IP:9696
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = $NEUTRON_PASS
[libvirt]
inject_key = True
inject_partition = -1
inject_password = True
EOF
echo "##### Restart nova-compute #####"
sleep 5
service nova-compute restart
# Remove default nova db
rm /var/lib/nova/nova.sqlite
echo "##### Install linuxbridge-agent (neutron) on COMPUTE NODE #####"
sleep 10
apt-get -y install neutron-plugin-linuxbridge-agent
echo "Config file neutron.conf"
controlneutron=/etc/neutron/neutron.conf
test -f $controlneutron.orig || cp $controlneutron $controlneutron.orig
rm $controlneutron
touch $controlneutron
cat << EOF >> $controlneutron
[DEFAULT]
core_plugin = ml2
rpc_backend = rabbit
auth_strategy = keystone
verbose = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $KEYSTONE_PASS
[database]
# connection = sqlite:////var/lib/neutron/neutron.sqlite
[nova]
[oslo_concurrency]
lock_path = \$state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[qos]
EOF
echo "############ Configuring Linux Bbridge AGENT ############"
sleep 7
linuxbridgefile=/etc/neutron/plugins/ml2/linuxbridge_agent.ini
test -f $linuxbridgefile.orig || cp $linuxbridgefile $linuxbridgefile.orig
cat << EOF >> $linuxbridgefile
[linux_bridge]
physical_interface_mappings = external:eth1
[vxlan]
enable_vxlan = True
local_ip = $COM1_MGNT_IP
l2_population = True
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF
echo "Reset service nova-compute,linuxbridge-agent"
sleep 5
service nova-compute restart
service neutron-plugin-linuxbridge-agent restart
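The here-docs above mix expanded and escaped variables: `$COM1_MGNT_IP` is substituted while the file is being written, whereas `\$my_ip` must land in the file literally so nova resolves it at runtime. A minimal sketch of the distinction (the file path and values are illustrative):

```sh
# Value expanded at generation time by the shell writing the file
COM1_MGNT_IP=10.10.10.141
conf=/tmp/nova-demo.conf

cat << EOF > $conf
my_ip = $COM1_MGNT_IP
vncserver_proxyclient_address = \$my_ip
EOF

# First line holds the concrete IP; second holds the literal text "$my_ip"
cat $conf
```

The unquoted `EOF` delimiter is what enables substitution; quoting it (`<< 'EOF'`) would suppress all expansion instead.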

## Network Info
# MASTER=$eth0_address
# LOCAL_IP=$eth1_address
##################### DECLARE VARIABLES FOR THE SCRIPT ########################
## Assigning IP for CONTROLLER NODE
CON_MGNT_IP=10.10.10.140
CON_EXT_IP=172.16.69.140
# Assigning IP for COMPUTE1 NODE
COM1_MGNT_IP=10.10.10.141
COM1_EXT_IP=172.16.69.141
#Gateway for EXT network
GATEWAY_IP_EXT=172.16.69.1
NETMASK_ADD_EXT=255.255.255.0
#Gateway for MGNT network
GATEWAY_IP_MGNT=10.10.10.1
NETMASK_ADD_MGNT=255.255.255.0
# Set password
DEFAULT_PASS='Welcome123'
RABBIT_PASS="$DEFAULT_PASS"
MYSQL_PASS="$DEFAULT_PASS"
TOKEN_PASS="$DEFAULT_PASS"
ADMIN_PASS="$DEFAULT_PASS"
SERVICE_PASSWORD="$DEFAULT_PASS"
METADATA_SECRET="$DEFAULT_PASS"
SERVICE_TENANT_NAME="service"
ADMIN_TENANT_NAME="admin"
DEMO_TENANT_NAME="demo"
INVIS_TENANT_NAME="invisible_to_admin"
ADMIN_USER_NAME="admin"
DEMO_USER_NAME="demo"
# Environment variable for OPS service
KEYSTONE_PASS="$DEFAULT_PASS"
GLANCE_PASS="$DEFAULT_PASS"
NOVA_PASS="$DEFAULT_PASS"
NEUTRON_PASS="$DEFAULT_PASS"
CINDER_PASS="$DEFAULT_PASS"
SWIFT_PASS="$DEFAULT_PASS"
HEAT_PASS="$DEFAULT_PASS"
# Environment variable for DB
KEYSTONE_DBPASS="$DEFAULT_PASS"
GLANCE_DBPASS="$DEFAULT_PASS"
NOVA_DBPASS="$DEFAULT_PASS"
NEUTRON_DBPASS="$DEFAULT_PASS"
CINDER_DBPASS="$DEFAULT_PASS"
HEAT_DBPASS="$DEFAULT_PASS"
# User declaration in Keystone
ADMIN_ROLE_NAME="admin"
MEMBER_ROLE_NAME="Member"
KEYSTONEADMIN_ROLE_NAME="KeystoneAdmin"
KEYSTONESERVICE_ROLE_NAME="KeystoneServiceAdmin"
# OS PASS ROOT
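Since every script sources this file, a quick pre-flight check can catch typos in the addresses before any node is touched. A minimal sketch (the validation helper is an illustrative addition, not part of the original scripts):

```sh
# Hypothetical pre-flight check: reject values that are not dotted-quad shaped
CON_MGNT_IP=10.10.10.140
COM1_MGNT_IP=10.10.10.141

check_ip() {
    case $1 in
        # any non-digit/non-dot char, doubled dots, or leading/trailing dot
        *[!0-9.]*|*..*|.*|*.) echo "invalid: $1"; return 1 ;;
        *) echo "ok: $1" ;;
    esac
}

check_ip "$CON_MGNT_IP"
check_ip "$COM1_MGNT_IP"
```

This only checks the shape of the value, not octet ranges; it is a guard against obvious editing mistakes, not a full validator.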

#!/bin/bash -ex
source config.cfg
ifaces=/etc/network/interfaces
test -f $ifaces.orig || cp $ifaces $ifaces.orig
rm $ifaces
touch $ifaces
cat << EOF >> $ifaces
#Assign IP for Controller node
# LOOPBACK NET
auto lo
iface lo inet loopback
# MGNT NETWORK
auto eth0
iface eth0 inet static
address $CON_MGNT_IP
netmask $NETMASK_ADD_MGNT
# EXT NETWORK
auto eth1
iface eth1 inet static
address $CON_EXT_IP
netmask $NETMASK_ADD_EXT
gateway $GATEWAY_IP_EXT
dns-nameservers 8.8.8.8
EOF
echo "Configuring hostname in CONTROLLER node"
sleep 3
echo "controller" > /etc/hostname
hostname -F /etc/hostname
echo "Configuring for file /etc/hosts"
sleep 3
iphost=/etc/hosts
test -f $iphost.orig || cp $iphost $iphost.orig
rm $iphost
touch $iphost
cat << EOF >> $iphost
127.0.0.1 localhost
127.0.1.1 controller
$CON_MGNT_IP controller
$COM1_MGNT_IP compute1
EOF
echo "##### Cai dat repos cho Liberty ##### "
apt-get install software-properties-common -y
add-apt-repository cloud-archive:liberty -y
sleep 5
echo "UPDATE PACKAGE FOR LIBERTY"
apt-get -y update && apt-get -y upgrade && apt-get -y dist-upgrade
sleep 5
echo "Reboot Server"
#sleep 5
init 6
#
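The `test -f $file.orig || cp $file $file.orig` guard used throughout these scripts keeps only the first backup: reruns never overwrite the pristine copy. A small sketch of that behavior (file names are illustrative):

```sh
f=/tmp/demo.conf
rm -f $f $f.orig

echo "original" > $f
# First run: no backup exists yet, so one is made
test -f $f.orig || cp $f $f.orig

echo "modified" > $f
# Second run: backup already exists, so the pristine copy survives
test -f $f.orig || cp $f $f.orig

cat $f.orig
```

This is why the scripts are safe to re-run after a failed attempt: `/etc/network/interfaces.orig` and friends always reflect the state before the very first run.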

#!/bin/bash -ex
#
source config.cfg
echo "Install python client"
apt-get -y install python-openstackclient
sleep 5
echo "Install and config NTP"
sleep 3
apt-get install ntp -y
cp /etc/ntp.conf /etc/ntp.conf.bka
rm /etc/ntp.conf
cat /etc/ntp.conf.bka | grep -v ^# | grep -v ^$ >> /etc/ntp.conf
## Config NTP in LIBERTY
sed -i 's/server ntp.ubuntu.com/ \
server 0.vn.pool.ntp.org iburst \
server 1.asia.pool.ntp.org iburst \
server 2.asia.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/restrict -4 default kod notrap nomodify nopeer noquery/ \
#restrict -4 default kod notrap nomodify nopeer noquery/g' /etc/ntp.conf
sed -i 's/restrict -6 default kod notrap nomodify nopeer noquery/ \
restrict -4 default kod notrap nomodify \
restrict -6 default kod notrap nomodify/g' /etc/ntp.conf
# sed -i 's/server/#server/' /etc/ntp.conf
# echo "server $LOCAL_IP" >> /etc/ntp.conf
##############################################
echo "Install and Config RabbitMQ"
sleep 3
apt-get install rabbitmq-server -y
rabbitmqctl add_user openstack $RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmqctl change_password guest $RABBIT_PASS
sleep 3
service rabbitmq-server restart
echo "Finish setup pre-install package !!!"
echo "##### Install MYSQL #####"
sleep 3
echo mysql-server mysql-server/root_password password $MYSQL_PASS \
| debconf-set-selections
echo mysql-server mysql-server/root_password_again password $MYSQL_PASS \
| debconf-set-selections
apt-get -y install mariadb-server python-mysqldb curl
echo "##### Configuring MYSQL #####"
sleep 3
echo "########## CONFIGURING FOR MYSQL ##########"
sleep 5
touch /etc/mysql/conf.d/mysqld_openstack.cnf
cat << EOF > /etc/mysql/conf.d/mysqld_openstack.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF
sleep 5
echo "Restart MYSQL"
service mysql restart
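The generated override file can be sanity-checked before restarting MySQL. A minimal sketch writing to a scratch path (the real script writes to /etc/mysql/conf.d/):

```sh
cnf=/tmp/mysqld_openstack.cnf

cat << EOF > $cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF

# All options should sit under a single [mysqld] group header
grep -c '^\[mysqld\]' $cnf
```

MySQL merges repeated group headers, so a duplicated `[mysqld]` still works, but a single group is easier to audit.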

#!/bin/bash -ex
#
source config.cfg
echo "Create Database for Keystone"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "##### Install keystone #####"
echo "manual" > /etc/init/keystone.override
apt-get -y install keystone python-openstackclient apache2 \
libapache2-mod-wsgi memcached python-memcache
#/* Back-up file keystone.conf
filekeystone=/etc/keystone/keystone.conf
test -f $filekeystone.orig || cp $filekeystone $filekeystone.orig
#Config file /etc/keystone/keystone.conf
cat << EOF > $filekeystone
[DEFAULT]
log_dir = /var/log/keystone
admin_token = $TOKEN_PASS
public_bind_host = $CON_MGNT_IP
admin_bind_host = $CON_MGNT_IP
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:$KEYSTONE_DBPASS@$CON_MGNT_IP/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = sql
[role]
[saml]
[signing]
[ssl]
[token]
provider = uuid
driver = memcache
[tokenless_auth]
[trust]
[extra_headers]
Distribution = Ubuntu
EOF
#
su -s /bin/sh -c "keystone-manage db_sync" keystone
echo "ServerName $CON_MGNT_IP" >> /etc/apache2/apache2.conf
cat << EOF > /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
EOF
ln -s /etc/apache2/sites-available/wsgi-keystone.conf \
/etc/apache2/sites-enabled
service apache2 restart
rm -f /var/lib/keystone/keystone.db
export OS_TOKEN="$TOKEN_PASS"
export OS_URL=http://$CON_MGNT_IP:35357/v2.0
# export OS_SERVICE_TOKEN="$TOKEN_PASS"
# export OS_SERVICE_ENDPOINT="http://$CON_MGNT_IP:35357/v2.0"
# export SERVICE_ENDPOINT="http://$CON_MGNT_IP:35357/v2.0"
### Identity service
openstack service create --name keystone --description \
"OpenStack Identity" identity
### Create the Identity service API endpoint
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:5000/v2.0 \
--internalurl http://$CON_MGNT_IP:5000/v2.0 \
--adminurl http://$CON_MGNT_IP:35357/v2.0 \
--region RegionOne \
identity
#### To create tenants, users, and roles ADMIN
openstack project create --description "Admin Project" admin
openstack user create --password $ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
#### To create tenants, users, and roles SERVICE
openstack project create --description "Service Project" service
#### To create tenants, users, and roles DEMO
openstack project create --description "Demo Project" demo
openstack user create --password $ADMIN_PASS demo
### Create the user role
openstack role create user
openstack role add --project demo --user demo user
#################
unset OS_TOKEN OS_URL
# Create environment variables
echo "export OS_PROJECT_DOMAIN_ID=default" > admin-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> admin-openrc.sh
echo "export OS_PROJECT_NAME=admin" >> admin-openrc.sh
echo "export OS_TENANT_NAME=admin" >> admin-openrc.sh
echo "export OS_USERNAME=admin" >> admin-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> admin-openrc.sh
echo "export OS_AUTH_URL=http://$CON_MGNT_IP:35357/v3" >> admin-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> admin-openrc.sh
sleep 5
echo "########## Execute environment script ##########"
chmod +x admin-openrc.sh
cat admin-openrc.sh >> /etc/profile
cp admin-openrc.sh /root/admin-openrc.sh
source admin-openrc.sh
echo "export OS_PROJECT_DOMAIN_ID=default" > demo-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> demo-openrc.sh
echo "export OS_PROJECT_NAME=demo" >> demo-openrc.sh
echo "export OS_TENANT_NAME=demo" >> demo-openrc.sh
echo "export OS_USERNAME=demo" >> demo-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> demo-openrc.sh
echo "export OS_AUTH_URL=http://$CON_MGNT_IP:35357/v3" >> demo-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> demo-openrc.sh
chmod +x demo-openrc.sh
cp demo-openrc.sh /root/demo-openrc.sh
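The repeated `echo ... >>` appends above can equally be written as one here-doc; a sketch that also verifies the result by sourcing it (the scratch path and values are illustrative):

```sh
ADMIN_PASS=Welcome123
CON_MGNT_IP=10.10.10.140

cat << EOF > /tmp/admin-openrc-demo.sh
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_AUTH_URL=http://$CON_MGNT_IP:35357/v3
EOF

# Sourcing pulls the exports into the current shell, as the scripts do
. /tmp/admin-openrc-demo.sh
echo "$OS_USERNAME@$OS_AUTH_URL"
```

Either form works; the here-doc keeps the credentials file readable as a single block.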

#!/bin/bash -ex
#
source config.cfg
echo "Create the database for GLANCE"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';
FLUSH PRIVILEGES;
EOF
sleep 5
echo " Create user, endpoint for GLANCE"
openstack user create --password $GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description \
"OpenStack Image service" image
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:9292 \
--internalurl http://$CON_MGNT_IP:9292 \
--adminurl http://$CON_MGNT_IP:9292 \
--region RegionOne \
image
echo "########## Install GLANCE ##########"
apt-get -y install glance python-glanceclient
sleep 10
echo "########## Configuring GLANCE API ##########"
sleep 5
#/* Back-up file glance-api.conf
fileglanceapicontrol=/etc/glance/glance-api.conf
test -f $fileglanceapicontrol.orig \
|| cp $fileglanceapicontrol $fileglanceapicontrol.orig
rm $fileglanceapicontrol
touch $fileglanceapicontrol
#Configuring glance config file /etc/glance/glance-api.conf
cat << EOF > $fileglanceapicontrol
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$CON_MGNT_IP/glance
backend = sqlalchemy
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[task]
[taskflow_executor]
EOF
#
sleep 10
echo "########## Configuring GLANCE REGISTER ##########"
#/* Backup file file glance-registry.conf
fileglanceregcontrol=/etc/glance/glance-registry.conf
test -f $fileglanceregcontrol.orig \
|| cp $fileglanceregcontrol $fileglanceregcontrol.orig
rm $fileglanceregcontrol
touch $fileglanceregcontrol
#Config file /etc/glance/glance-registry.conf
cat << EOF > $fileglanceregcontrol
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$CON_MGNT_IP/glance
backend = sqlalchemy
[glance_store]
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
EOF
sleep 7
echo "########## Remove Glance default DB ##########"
rm /var/lib/glance/glance.sqlite
chown glance:glance $fileglanceapicontrol
chown glance:glance $fileglanceregcontrol
sleep 7
echo "########## Syncing DB for Glance ##########"
glance-manage db_sync
sleep 5
echo "########## Restarting GLANCE service ... ##########"
service glance-registry restart
service glance-api restart
sleep 3
service glance-registry restart
service glance-api restart
#
echo "Remove glance.sqlite "
rm -f /var/lib/glance/glance.sqlite
sleep 3
echo "########## Registering Cirros IMAGE for GLANCE ... ##########"
mkdir images
cd images/
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public --progress
cd /root/
# rm -r /tmp/images
sleep 5
echo "########## Testing Glance ##########"
glance image-list

#!/bin/bash -ex
#
source config.cfg
echo "Create DB for NOVA "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NOVA"
openstack user create --password $NOVA_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:8774/v2/%\(tenant_id\)s \
--internalurl http://$CON_MGNT_IP:8774/v2/%\(tenant_id\)s \
--adminurl http://$CON_MGNT_IP:8774/v2/%\(tenant_id\)s \
--region RegionOne \
compute
echo "########## Install NOVA in $CON_MGNT_IP ##########"
sleep 5
apt-get -y install nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient
# Automatically install libguestfs-tools
apt-get -y install libguestfs-tools sysfsutils guestfsd python-guestfs
######## Backup configurations for NOVA ##########"
sleep 7
#
controlnova=/etc/nova/nova.conf
test -f $controlnova.orig || cp $controlnova $controlnova.orig
rm $controlnova
touch $controlnova
cat << EOF >> $controlnova
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
my_ip = $CON_MGNT_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
verbose = True
enable_instance_password = True
[database]
connection = mysql+pymysql://nova:$NOVA_DBPASS@$CON_MGNT_IP/nova
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = $NOVA_PASS
[vnc]
vncserver_listen = \$my_ip
vncserver_proxyclient_address = \$my_ip
[glance]
host = $CON_MGNT_IP
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://$CON_MGNT_IP:9696
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = $NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = $METADATA_SECRET
EOF
echo "########## Remove Nova default db ##########"
sleep 7
rm /var/lib/nova/nova.sqlite
echo "########## Syncing Nova DB ##########"
sleep 7
su -s /bin/sh -c "nova-manage db sync" nova
# echo 'kvm_intel' >> /etc/modules
echo "########## Restarting NOVA ... ##########"
sleep 7
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
sleep 7
echo "########## Restarting NOVA ... ##########"
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
echo "########## Testing NOVA service ##########"
nova-manage service list
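The two identical blocks of restarts above can be collapsed into a loop over the service names. A sketch (with `service` stubbed by a function so it runs anywhere; drop the stub on a real controller):

```sh
# Stub so the sketch is runnable outside the controller node
service() { echo "restart: $1"; }

for svc in nova-api nova-cert nova-consoleauth nova-scheduler \
           nova-conductor nova-novncproxy; do
    service $svc restart
done
```

Looping keeps the restart list in one place, so adding or removing a nova service is a one-line change.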

#!/bin/bash -ex
#
# RABBIT_PASS=a
# ADMIN_PASS=a
source config.cfg
echo "Create DB for NEUTRON "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NEUTRON"
openstack user create --password $NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description \
"OpenStack Networking" network
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:9696 \
--adminurl http://$CON_MGNT_IP:9696 \
--internalurl http://$CON_MGNT_IP:9696 \
--region RegionOne \
network
# SERVICE_TENANT_ID=`keystone tenant-get service | awk '$2~/^id/{print $4}'`
echo "########## Install NEUTRON in $CON_MGNT_IP or NETWORK node ############"
sleep 5
apt-get -y install neutron-server neutron-plugin-ml2 \
neutron-plugin-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent python-neutronclient
######## Backup configuration NEUTRON.CONF in $CON_MGNT_IP################"
echo "########## Config NEUTRON in $CON_MGNT_IP/NETWORK node ##########"
sleep 7
#
controlneutron=/etc/neutron/neutron.conf
test -f $controlneutron.orig || cp $controlneutron $controlneutron.orig
rm $controlneutron
touch $controlneutron
cat << EOF >> $controlneutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://$CON_MGNT_IP:8774/v2
verbose = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
[database]
connection = mysql+pymysql://neutron:$NEUTRON_DBPASS@$CON_MGNT_IP/neutron
[nova]
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = $NOVA_PASS
[oslo_concurrency]
lock_path = \$state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[qos]
EOF
######## Backup configuration of ML2 in $CON_MGNT_IP##################"
echo "########## Configuring ML2 in $CON_MGNT_IP/NETWORK node ##########"
sleep 7
controlML2=/etc/neutron/plugins/ml2/ml2_conf.ini
test -f $controlML2.orig || cp $controlML2 $controlML2.orig
rm $controlML2
touch $controlML2
cat << EOF >> $controlML2
[ml2]
tenant_network_types = vxlan
type_drivers = flat,vlan,vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_geneve]
[securitygroup]
enable_ipset = True
EOF
echo "############ Configuring Linux Bbridge AGENT ############"
sleep 7
linuxbridgefile=/etc/neutron/plugins/ml2/linuxbridge_agent.ini
test -f $linuxbridgefile.orig || cp $linuxbridgefile $linuxbridgefile.orig
cat << EOF >> $linuxbridgefile
[linux_bridge]
physical_interface_mappings = external:eth1
[vxlan]
enable_vxlan = True
local_ip = $CON_MGNT_IP
l2_population = True
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF
echo "############ Configuring L3 AGENT ############"
sleep 7
netl3agent=/etc/neutron/l3_agent.ini
test -f $netl3agent.orig || cp $netl3agent $netl3agent.orig
rm $netl3agent
touch $netl3agent
cat << EOF >> $netl3agent
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True
[AGENT]
EOF
echo "############ Configuring DHCP AGENT ############ "
sleep 7
#
netdhcp=/etc/neutron/dhcp_agent.ini
test -f $netdhcp.orig || cp $netdhcp $netdhcp.orig
rm $netdhcp
touch $netdhcp
cat << EOF >> $netdhcp
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[AGENT]
EOF
echo "Fix loi MTU"
sleep 3
echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf
killall dnsmasq
echo "############ Configuring METADATA AGENT ############"
sleep 7
netmetadata=/etc/neutron/metadata_agent.ini
test -f $netmetadata.orig || cp $netmetadata $netmetadata.orig
rm $netmetadata
touch $netmetadata
cat << EOF >> $netmetadata
[DEFAULT]
verbose = True
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
nova_metadata_ip = $CON_MGNT_IP
metadata_proxy_shared_secret = $METADATA_SECRET
EOF
#
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
echo "########## Restarting NOVA service ##########"
sleep 7
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
echo "########## Restarting NEUTRON service ##########"
sleep 7
service neutron-server restart
service neutron-plugin-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
rm -f /var/lib/neutron/neutron.sqlite
echo "Setup IP for PUBLIC interface"
sleep 5
cat << EOF > /etc/network/interfaces
#Assign IP for Controller node
# LOOPBACK NET
auto lo
iface lo inet loopback
# MGNT NETWORK
auto eth0
iface eth0 inet static
address $CON_MGNT_IP
netmask $NETMASK_ADD_MGNT
# EXT NETWORK
auto eth1:0
iface eth1:0 inet static
address $CON_EXT_IP
netmask $NETMASK_ADD_EXT
gateway $GATEWAY_IP_EXT
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet manual
up ip link set dev \$IFACE up
down ip link set dev \$IFACE down
EOF
ifdown -a && ifup -a
echo "#### Reboot ####":
reboot

#!/bin/bash -ex
source config.cfg
###################
echo "########## START INSTALLING OPS DASHBOARD ##########"
###################
sleep 5
echo "########## Installing Dashboard package ##########"
apt-get -y install openstack-dashboard
apt-get -y remove --auto-remove openstack-dashboard-ubuntu-theme
# echo "########## Fix bug in apache2 ##########"
# sleep 5
# Fix bug apache in ubuntu 14.04
# echo "ServerName localhost" > /etc/apache2/conf-available/servername.conf
# sudo a2enconf servername
echo "########## Creating redirect page ##########"
filehtml=/var/www/html/index.html
test -f $filehtml.orig || cp $filehtml $filehtml.orig
rm $filehtml
touch $filehtml
cat << EOF >> $filehtml
<html>
<head>
<META HTTP-EQUIV="Refresh" Content="0.5; URL=http://$CON_EXT_IP/horizon">
</head>
<body>
<center> <h1>Redirecting to the OpenStack Dashboard</h1> </center>
</body>
</html>
EOF
# Allowing insert password in dashboard ( only apply in image )
sed -i "s/'can_set_password': False/'can_set_password': True/g" \
/etc/openstack-dashboard/local_settings.py
## /* Restarting apache2 and memcached
service apache2 restart
service memcached restart
echo "########## Finish setting up Horizon ##########"
echo "########## LOGIN INFORMATION IN HORIZON ##########"
echo "URL: http://$CON_EXT_IP/horizon"
echo "User: admin or demo"
echo "Password:" $ADMIN_PASS

# Installation Steps
### Prepare LAB environment
- Built for a VMware Workstation environment
#### Configure CONTROLLER NODE
```sh
RAM: 4GB
CPU: 2x2, VT supported
NIC1: eth0: 10.10.10.0/24 (internal range, using vmnet or hostonly in VMware Workstation)
NIC2: eth1: 172.16.69.0/24, gateway 172.16.69.1 (external range - using NAT or Bridge VMware Workstation)
HDD: 60GB
```
#### Configure COMPUTE NODE
```sh
RAM: 4GB
CPU: 2x2, VT supported
NIC1: eth0: 10.10.10.0/24 (internal range, using vmnet or hostonly in VMware Workstation)
NIC2: eth1: 172.16.69.0/24, gateway 172.16.69.1 (external range - using NAT or Bridge VMware Workstation)
HDD: 1000GB
```
### Execute script
- Install the git package and download the script
```sh
su -
apt-get update
apt-get -y install git
git clone https://github.com/vietstacker/openstack-liberty-multinode.git
mv /root/openstack-liberty-multinode/LIBERTY-U14.04-OVS/ /root/
rm -rf openstack-liberty-multinode/
cd LIBERTY-U14.04-OVS/
chmod +x *.sh
```
## Install on CONTROLLER NODE
### Run the IP setup script and add the Liberty repos
- Edit the config file in the directory with the IPs you want to use.
```sh
bash ctl-1-ipadd.sh
```
### Install NTP, MariaDB packages
```sh
bash ctl-2-prepare.sh
```
### Install KEYSTONE
- Install Keystone
```sh
bash ctl-3.keystone.sh
```
- Declare environment parameters
```sh
source admin-openrc.sh
```
### Install GLANCE
```sh
bash ctl-4-glance.sh
```
### Install NOVA
```sh
bash ctl-5-nova.sh
```
### Install NEUTRON
```sh
bash ctl-6-neutron.sh
```
- After the NEUTRON installation is done, the controller node will restart.
- Log in as `root` and execute the Horizon installation script.
### Install HORIZON
- Log in with `root` privilege and execute the script below
```sh
bash ctl-horizon.sh
```
## Install on COMPUTE NODE
### Download git and the script
- Install the git package and download the script
```sh
su -
apt-get update
apt-get -y install git
git clone https://github.com/vietstacker/openstack-liberty-multinode.git
mv /root/openstack-liberty-multinode/LIBERTY-U14.04-OVS/ /root/
rm -rf openstack-liberty-multinode/
cd LIBERTY-U14.04-OVS/
chmod +x *.sh
```
### Establish IP and hostname
- Edit the config file to match your IPs.
- Execute the script to set the IP and hostname
```sh
bash com1-ipdd.sh
```
- The server will restart after the script `com1-ipdd.sh` is executed.
- Log in to the server with root privilege and execute the Nova component installation script
```sh
su -
cd LIBERTY-U14.04-OVS/
bash com1-prepare.sh
```
After installing the COMPUTE NODE, move on to the dashboard usage steps below.
## Using the dashboard to initialize networks, VMs, and rules
### Initialize rule for project admin
- Log in to the dashboard
![liberty-horizon1.png](/images/liberty-horizon1.png)
- Select tab `admin => Access & Security => Manage Rules`
![liberty-horizon2.png](/images/liberty-horizon2.png)
- Select tab `Add Rule`
![liberty-horizon3.png](/images/liberty-horizon3.png)
- Open rule to allow SSH from outside to virtual machine
![liberty-horizon4.png](/images/liberty-horizon4.png)
- Do the same with the ICMP rule to allow pinging the virtual machine, and with the other rules.
### Initialize network
#### Initialize external network range
- Select tab `Admin => Networks => Create Network`
![liberty-net-ext1.png](/images/liberty-net-ext1.png)
- Enter values and make selections as in the picture below.
![liberty-net-ext2.png](/images/liberty-net-ext2.png)
- Click the newly created `ext-net` to declare a subnet for the external range.
![liberty-net-ext3.png](/images/liberty-net-ext3.png)
- Select tab `Create Subnet`
![liberty-net-ext4.png](/images/liberty-net-ext4.png)
- Declare IP range of subnet for external range
![liberty-net-ext5.png](/images/liberty-net-ext5.png)
- Declare pools and DNS
![liberty-net-ext6.png](/images/liberty-net-ext6.png)
#### Initialize internal network range
- Select the tabs in order: `Project admin => Network => Networks => Create Network`
![liberty-net-int1.png](/images/liberty-net-int1.png)
- Declare name for internal network
![liberty-net-int2.png](/images/liberty-net-int2.png)
- Declare subnet for internal network
![liberty-net-int3.png](/images/liberty-net-int3.png)
- Declare IP range for Internal network
![liberty-net-int4.png](/images/liberty-net-int4.png)
#### Initialize Router for project admin
- Select by tabs "Project admin => Routers => Create Router
![liberty-r1.png](/images/liberty-r1.png)
- Set the router name and select options as in the picture below
![liberty-r2.png](/images/liberty-r2.png)
- Apply interface for router
![liberty-r3.png](/images/liberty-r3.png)
![liberty-r4.png](/images/liberty-r4.png)
![liberty-r5.png](/images/liberty-r5.png)
- This completes the initialization of the external network, internal network, and router
## Initialize virtual machine (Instance)
- Select the tabs under `Project admin => Instances => Launch Instance`
![liberty-instance1.png](/images/liberty-instance1.png)
![liberty-instance2.png](/images/liberty-instance2.png)
![liberty-instance3.png](/images/liberty-instance3.png)

#!/bin/bash -ex
source config.cfg
sleep 3
echo "#### Update for Ubuntu #####"
apt-get install software-properties-common -y
add-apt-repository cloud-archive:liberty -y
sleep 3
echo "##### update for Ubuntu #####"
apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade -y
echo "##### Configuring hostname for COMPUTE1 node... #####"
sleep 3
echo "compute1" > /etc/hostname
hostname -F /etc/hostname
iphost=/etc/hosts
test -f $iphost.orig || cp $iphost $iphost.orig
rm $iphost
touch $iphost
cat << EOF >> $iphost
127.0.0.1 localhost
127.0.0.1 compute1
$CON_MGNT_IP controller
$COM1_MGNT_IP compute1
EOF
sleep 3
echo "##### Config network for COMPUTE NODE ####"
ifaces=/etc/network/interfaces
test -f $ifaces.orig || cp $ifaces $ifaces.orig
rm $ifaces
touch $ifaces
cat << EOF >> $ifaces
# Assign IP addresses for the compute node
# LOOPBACK NET
auto lo
iface lo inet loopback
# MGNT NETWORK
auto eth0
iface eth0 inet static
address $COM1_MGNT_IP
netmask $NETMASK_ADD_MGNT
# EXT NETWORK
auto eth1
iface eth1 inet static
address $COM1_EXT_IP
netmask $NETMASK_ADD_EXT
gateway $GATEWAY_IP_EXT
dns-nameservers 8.8.8.8
EOF
sleep 5
echo "##### Rebooting machine ... #####"
init 6
#

@@ -1,237 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
#
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.conf
echo "##### Install python openstack client ##### "
apt-get -y install python-openstackclient
echo "##### Install NTP ##### "
apt-get install ntp -y
apt-get install python-mysqldb -y
#
echo "##### Backup NTP configuration... ##### "
sleep 7
cp /etc/ntp.conf /etc/ntp.conf.bka
rm /etc/ntp.conf
cat /etc/ntp.conf.bka | grep -v ^# | grep -v ^$ >> /etc/ntp.conf
#
sed -i 's/server 0.ubuntu.pool.ntp.org/ \
#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/ \
#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/ \
#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/ \
#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i "s/server ntp.ubuntu.com/server $CON_MGNT_IP iburst/g" /etc/ntp.conf
sleep 5
echo "##### Install packages for NOVA #####"
apt-get -y install nova-compute
echo "libguestfs-tools libguestfs/update-appliance boolean true" \
| debconf-set-selections
apt-get -y install libguestfs-tools sysfsutils guestfsd python-guestfs
# Fix password injection issue when the hypervisor is KVM
update-guestfs-appliance
chmod 0644 /boot/vmlinuz*
usermod -a -G kvm root
echo "############ Configuring in nova.conf ...############"
sleep 5
########
# Back up nova.conf before editing
filenova=/etc/nova/nova.conf
test -f $filenova.orig || cp $filenova $filenova.orig
# Write the following content into /etc/nova/nova.conf
cat << EOF > $filenova
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = $COM1_MGNT_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
verbose = True
enable_instance_password = True
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = $KEYSTONE_PASS
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = \$my_ip
novncproxy_base_url = http://$CON_EXT_IP:6080/vnc_auto.html
[glance]
host = $CON_MGNT_IP
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://$CON_MGNT_IP:9696
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = $NEUTRON_PASS
[libvirt]
inject_key = True
inject_partition = -1
inject_password = True
EOF
echo "##### Restart nova-compute #####"
sleep 5
service nova-compute restart
# Remove default nova db
rm /var/lib/nova/nova.sqlite
echo "##### Install openvswitch-agent (neutron) on COMPUTE NODE #####"
sleep 10
apt-get -y install neutron-plugin-ml2 neutron-plugin-openvswitch-agent
echo "Config file neutron.conf"
controlneutron=/etc/neutron/neutron.conf
test -f $controlneutron.orig || cp $controlneutron $controlneutron.orig
rm $controlneutron
touch $controlneutron
cat << EOF >> $controlneutron
[DEFAULT]
core_plugin = ml2
rpc_backend = rabbit
auth_strategy = keystone
verbose = True
allow_overlapping_ips = True
service_plugins = router
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $KEYSTONE_PASS
[database]
# connection = sqlite:////var/lib/neutron/neutron.sqlite
[nova]
[oslo_concurrency]
lock_path = \$state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[qos]
EOF
echo "############ Configuring ml2_conf.ini ############"
sleep 5
########
comfileml2=/etc/neutron/plugins/ml2/ml2_conf.ini
test -f $comfileml2.orig || cp $comfileml2 $comfileml2.orig
rm $comfileml2
touch $comfileml2
#Update ML2 config file /etc/neutron/plugins/ml2/ml2_conf.ini
cat << EOF > $comfileml2
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = $COM1_MGNT_IP
enable_tunneling = True
[agent]
tunnel_types = gre
EOF
echo "Reset service nova-compute,openvswitch-agent"
sleep 5
service nova-compute restart
service neutron-plugin-openvswitch-agent restart

@@ -1,66 +0,0 @@
## Network Info
# MASTER=$eth0_address
# LOCAL_IP=$eth1_address
################## DECLARE VARIABLES FOR THE SCRIPTS ###########################
## Assigning IP for CONTROLLER NODE
CON_MGNT_IP=10.10.10.140
CON_EXT_IP=172.16.69.140
# Assigning IP for COMPUTE1 NODE
COM1_MGNT_IP=10.10.10.141
COM1_EXT_IP=172.16.69.141
#Gateway for EXT network
GATEWAY_IP_EXT=172.16.69.1
NETMASK_ADD_EXT=255.255.255.0
#Gateway for MGNT network
GATEWAY_IP_MGNT=10.10.10.1
NETMASK_ADD_MGNT=255.255.255.0
# Set password
DEFAULT_PASS='Welcome123'
RABBIT_PASS="$DEFAULT_PASS"
MYSQL_PASS="$DEFAULT_PASS"
TOKEN_PASS="$DEFAULT_PASS"
ADMIN_PASS="$DEFAULT_PASS"
SERVICE_PASSWORD="$DEFAULT_PASS"
METADATA_SECRET="$DEFAULT_PASS"
SERVICE_TENANT_NAME="service"
ADMIN_TENANT_NAME="admin"
DEMO_TENANT_NAME="demo"
INVIS_TENANT_NAME="invisible_to_admin"
ADMIN_USER_NAME="admin"
DEMO_USER_NAME="demo"
# Environment variable for OPS service
KEYSTONE_PASS="$DEFAULT_PASS"
GLANCE_PASS="$DEFAULT_PASS"
NOVA_PASS="$DEFAULT_PASS"
NEUTRON_PASS="$DEFAULT_PASS"
CINDER_PASS="$DEFAULT_PASS"
SWIFT_PASS="$DEFAULT_PASS"
HEAT_PASS="$DEFAULT_PASS"
CEILOMETER_PASS="$DEFAULT_PASS"
# Environment variable for DB
KEYSTONE_DBPASS="$DEFAULT_PASS"
GLANCE_DBPASS="$DEFAULT_PASS"
NOVA_DBPASS="$DEFAULT_PASS"
NEUTRON_DBPASS="$DEFAULT_PASS"
CINDER_DBPASS="$DEFAULT_PASS"
HEAT_DBPASS="$DEFAULT_PASS"
CEILOMETER_DBPASS="$DEFAULT_PASS"
# User declaration in Keystone
ADMIN_ROLE_NAME="admin"
MEMBER_ROLE_NAME="Member"
KEYSTONEADMIN_ROLE_NAME="KeystoneAdmin"
KEYSTONESERVICE_ROLE_NAME="KeystoneServiceAdmin"
# OS PASS ROOT

@@ -1,71 +0,0 @@
#!/bin/bash -ex
source config.cfg
ifaces=/etc/network/interfaces
test -f $ifaces.orig || cp $ifaces $ifaces.orig
rm $ifaces
touch $ifaces
cat << EOF >> $ifaces
#Assign IP for Controller node
# LOOPBACK NET
auto lo
iface lo inet loopback
# MGNT NETWORK
auto eth0
iface eth0 inet static
address $CON_MGNT_IP
netmask $NETMASK_ADD_MGNT
# EXT NETWORK
auto eth1
iface eth1 inet static
address $CON_EXT_IP
netmask $NETMASK_ADD_EXT
gateway $GATEWAY_IP_EXT
dns-nameservers 8.8.8.8
EOF
echo "Configuring hostname in CONTROLLER node"
sleep 3
echo "controller" > /etc/hostname
hostname -F /etc/hostname
echo "Configuring /etc/hosts"
sleep 3
iphost=/etc/hosts
test -f $iphost.orig || cp $iphost $iphost.orig
rm $iphost
touch $iphost
cat << EOF >> $iphost
127.0.0.1 localhost
127.0.1.1 controller
$CON_MGNT_IP controller
$COM1_MGNT_IP compute1
EOF
echo "##### Install repository for Liberty ##### "
apt-get install software-properties-common -y
add-apt-repository cloud-archive:liberty -y
sleep 5
echo "UPDATE PACKAGE FOR LIBERTY"
apt-get -y update && apt-get -y upgrade && apt-get -y dist-upgrade
sleep 5
echo "Reboot Server"
#sleep 5
init 6
#

@@ -1,104 +0,0 @@
#!/bin/bash -ex
source config.cfg
apt-get install -y mongodb-server mongodb-clients python-pymongo
sed -i "s/bind_ip = 127.0.0.1/bind_ip = $CON_MGNT_IP/g" /etc/mongodb.conf
service mongodb restart
sleep 40
cat << EOF > mongo.js
db = db.getSiblingDB("ceilometer");
db.addUser({user: "ceilometer",
pwd: "$CEILOMETER_DBPASS",
roles: [ "readWrite", "dbAdmin" ]})
EOF
sleep 20
mongo --host $CON_MGNT_IP ./mongo.js
# Create user and endpoint, and assign role for CEILOMETER
openstack user create --password $CEILOMETER_PASS ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:8777 \
--internalurl http://$CON_MGNT_IP:8777 \
--adminurl http://$CON_MGNT_IP:8777 \
--region RegionOne \
metering
# Install CEILOMETER packages
apt-get -y install ceilometer-api ceilometer-collector \
ceilometer-agent-central ceilometer-agent-notification \
ceilometer-alarm-evaluator ceilometer-alarm-notifier \
python-ceilometerclient
mv /etc/ceilometer/ceilometer.conf /etc/ceilometer/ceilometer.conf.bka
cat << EOF > /etc/ceilometer/ceilometer.conf
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
[database]
connection = mongodb://ceilometer:$CEILOMETER_DBPASS@$CON_MGNT_IP:27017/ceilometer
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = $CEILOMETER_PASS
[service_credentials]
os_auth_url = http://$CON_MGNT_IP:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = $CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = RegionOne
# [publisher]
# telemetry_secret = $METERING_SECRET
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[oslo_policy]
EOF
echo "Restart service"
sleep 3
service ceilometer-agent-central restart
service ceilometer-agent-notification restart
service ceilometer-api restart
service ceilometer-collector restart
service ceilometer-alarm-evaluator restart
service ceilometer-alarm-notifier restart
echo "Restart service"
sleep 10
service ceilometer-agent-central restart
service ceilometer-agent-notification restart
service ceilometer-api restart
service ceilometer-collector restart
service ceilometer-alarm-evaluator restart
service ceilometer-alarm-notifier restart
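Once the services are back up, Telemetry can be sanity-checked with the client installed above (a hedged example; it assumes the admin-openrc.sh created by the Keystone script is present on the controller):

```shell
# Quick check that the Ceilometer API answers with admin credentials
source admin-openrc.sh
ceilometer meter-list    # should return (possibly empty) without an auth error
```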

@@ -1,80 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Install python client"
apt-get -y install python-openstackclient
sleep 5
echo "Install and config NTP"
sleep 3
apt-get install ntp -y
cp /etc/ntp.conf /etc/ntp.conf.bka
rm /etc/ntp.conf
cat /etc/ntp.conf.bka | grep -v ^# | grep -v ^$ >> /etc/ntp.conf
## Config NTP in LIBERTY
sed -i 's/server ntp.ubuntu.com/ \
server 0.vn.pool.ntp.org iburst \
server 1.asia.pool.ntp.org iburst \
server 2.asia.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/restrict -4 default kod notrap nomodify nopeer noquery/ \
#restrict -4 default kod notrap nomodify nopeer noquery/g' /etc/ntp.conf
sed -i 's/restrict -6 default kod notrap nomodify nopeer noquery/ \
restrict -4 default kod notrap nomodify \
restrict -6 default kod notrap nomodify/g' /etc/ntp.conf
# sed -i 's/server/#server/' /etc/ntp.conf
# echo "server $LOCAL_IP" >> /etc/ntp.conf
##############################################
echo "Install and Config RabbitMQ"
sleep 3
apt-get install rabbitmq-server -y
rabbitmqctl add_user openstack $RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmqctl change_password guest $RABBIT_PASS
sleep 3
service rabbitmq-server restart
echo "Finish setup pre-install package !!!"
echo "##### Install MYSQL #####"
sleep 3
echo mysql-server mysql-server/root_password password $MYSQL_PASS \
| debconf-set-selections
echo mysql-server mysql-server/root_password_again password $MYSQL_PASS \
| debconf-set-selections
apt-get -y install mariadb-server python-mysqldb curl
echo "##### Configuring MYSQL #####"
sleep 3
echo "########## CONFIGURING FOR MYSQL ##########"
sleep 5
touch /etc/mysql/conf.d/mysqld_openstack.cnf
cat << EOF > /etc/mysql/conf.d/mysqld_openstack.cnf
[mysqld]
bind-address = 0.0.0.0
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF
sleep 5
echo "Restart MYSQL"
service mysql restart

@@ -1,225 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Create Database for Keystone"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "##### Install keystone #####"
echo "manual" > /etc/init/keystone.override
apt-get -y install keystone python-openstackclient apache2 \
libapache2-mod-wsgi memcached python-memcache
# Back up keystone.conf before editing
filekeystone=/etc/keystone/keystone.conf
test -f $filekeystone.orig || cp $filekeystone $filekeystone.orig
#Config file /etc/keystone/keystone.conf
cat << EOF > $filekeystone
[DEFAULT]
log_dir = /var/log/keystone
admin_token = $TOKEN_PASS
public_bind_host = $CON_MGNT_IP
admin_bind_host = $CON_MGNT_IP
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:$KEYSTONE_DBPASS@$CON_MGNT_IP/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = sql
[role]
[saml]
[signing]
[ssl]
[token]
provider = uuid
driver = memcache
[tokenless_auth]
[trust]
[extra_headers]
Distribution = Ubuntu
EOF
#
su -s /bin/sh -c "keystone-manage db_sync" keystone
echo "ServerName $CON_MGNT_IP" >> /etc/apache2/apache2.conf
cat << EOF > /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
EOF
ln -s /etc/apache2/sites-available/wsgi-keystone.conf \
/etc/apache2/sites-enabled
service apache2 restart
rm -f /var/lib/keystone/keystone.db
export OS_TOKEN="$TOKEN_PASS"
export OS_URL=http://$CON_MGNT_IP:35357/v2.0
# export OS_SERVICE_TOKEN="$TOKEN_PASS"
# export OS_SERVICE_ENDPOINT="http://$CON_MGNT_IP:35357/v2.0"
# export SERVICE_ENDPOINT="http://$CON_MGNT_IP:35357/v2.0"
### Identity service
openstack service create --name keystone --description \
"OpenStack Identity" identity
### Create the Identity service API endpoint
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:5000/v2.0 \
--internalurl http://$CON_MGNT_IP:5000/v2.0 \
--adminurl http://$CON_MGNT_IP:35357/v2.0 \
--region RegionOne \
identity
#### To create tenants, users, and roles ADMIN
openstack project create --description "Admin Project" admin
openstack user create --password $ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
#### To create tenants, users, and roles SERVICE
openstack project create --description "Service Project" service
#### To create tenants, users, and roles DEMO
openstack project create --description "Demo Project" demo
openstack user create --password $ADMIN_PASS demo
### Create the user role
openstack role create user
openstack role add --project demo --user demo user
#################
unset OS_TOKEN OS_URL
# Create environment variable files
echo "export OS_PROJECT_DOMAIN_ID=default" > admin-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> admin-openrc.sh
echo "export OS_PROJECT_NAME=admin" >> admin-openrc.sh
echo "export OS_TENANT_NAME=admin" >> admin-openrc.sh
echo "export OS_USERNAME=admin" >> admin-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> admin-openrc.sh
echo "export OS_AUTH_URL=http://$CON_MGNT_IP:35357/v3" >> admin-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> admin-openrc.sh
sleep 5
echo "########## Execute environment script ##########"
chmod +x admin-openrc.sh
cat admin-openrc.sh >> /etc/profile
cp admin-openrc.sh /root/admin-openrc.sh
source admin-openrc.sh
echo "export OS_PROJECT_DOMAIN_ID=default" > demo-openrc.sh
echo "export OS_USER_DOMAIN_ID=default" >> demo-openrc.sh
echo "export OS_PROJECT_NAME=demo" >> demo-openrc.sh
echo "export OS_TENANT_NAME=demo" >> demo-openrc.sh
echo "export OS_USERNAME=demo" >> demo-openrc.sh
echo "export OS_PASSWORD=$ADMIN_PASS" >> demo-openrc.sh
echo "export OS_AUTH_URL=http://$CON_MGNT_IP:35357/v3" >> demo-openrc.sh
echo "export OS_VOLUME_API_VERSION=2" >> demo-openrc.sh
chmod +x demo-openrc.sh
cp demo-openrc.sh /root/demo-openrc.sh
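After this script completes, the Identity service can be sanity-checked with the credentials it just created (a hedged example using the python-openstackclient installed earlier; it requires the running Keystone endpoint):

```shell
# Verify Keystone answers with the admin credentials
source admin-openrc.sh
openstack token issue      # returns a scoped token on success
openstack project list     # expect admin, service, demo
openstack user list        # expect admin, demo
```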

@@ -1,183 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Create the database for GLANCE"
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';
FLUSH PRIVILEGES;
EOF
sleep 5
echo " Create user, endpoint for GLANCE"
openstack user create --password $GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description \
"OpenStack Image service" image
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:9292 \
--internalurl http://$CON_MGNT_IP:9292 \
--adminurl http://$CON_MGNT_IP:9292 \
--region RegionOne \
image
echo "########## Install GLANCE ##########"
apt-get -y install glance python-glanceclient
sleep 10
echo "########## Configuring GLANCE API ##########"
sleep 5
# Back up glance-api.conf before editing
fileglanceapicontrol=/etc/glance/glance-api.conf
test -f $fileglanceapicontrol.orig \
|| cp $fileglanceapicontrol $fileglanceapicontrol.orig
rm $fileglanceapicontrol
touch $fileglanceapicontrol
#Configuring glance config file /etc/glance/glance-api.conf
cat << EOF > $fileglanceapicontrol
[DEFAULT]
notification_driver = noop
verbose = True
notification_driver = messagingv2
rpc_backend = rabbit
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$CON_MGNT_IP/glance
backend = sqlalchemy
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[oslo_policy]
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[task]
[taskflow_executor]
EOF
#
sleep 10
echo "########## Configuring GLANCE REGISTER ##########"
# Back up glance-registry.conf before editing
fileglanceregcontrol=/etc/glance/glance-registry.conf
test -f $fileglanceregcontrol.orig \
|| cp $fileglanceregcontrol $fileglanceregcontrol.orig
rm $fileglanceregcontrol
touch $fileglanceregcontrol
#Config file /etc/glance/glance-registry.conf
cat << EOF > $fileglanceregcontrol
[DEFAULT]
notification_driver = noop
verbose = True
notification_driver = messagingv2
rpc_backend = rabbit
[database]
connection = mysql+pymysql://glance:$GLANCE_DBPASS@$CON_MGNT_IP/glance
backend = sqlalchemy
[glance_store]
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = $GLANCE_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[oslo_policy]
[paste_deploy]
flavor = keystone
EOF
sleep 7
echo "########## Remove Glance default DB ##########"
rm /var/lib/glance/glance.sqlite
chown glance:glance $fileglanceapicontrol
chown glance:glance $fileglanceregcontrol
sleep 7
echo "########## Syncing DB for Glance ##########"
glance-manage db_sync
sleep 5
echo "########## Restarting GLANCE service ... ##########"
service glance-registry restart
service glance-api restart
sleep 3
service glance-registry restart
service glance-api restart
#
echo "Remove glance.sqlite "
rm -f /var/lib/glance/glance.sqlite
sleep 3
echo "########## Registering Cirros IMAGE for GLANCE ... ##########"
mkdir images
cd images/
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public --progress
cd /root/
# rm -r /tmp/images
sleep 5
echo "########## Testing Glance ##########"
glance image-list

@@ -1,150 +0,0 @@
#!/bin/bash -ex
#
source config.cfg
echo "Create DB for NOVA "
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NOVA"
openstack user create --password $NOVA_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:8774/v2/%\(tenant_id\)s \
--internalurl http://$CON_MGNT_IP:8774/v2/%\(tenant_id\)s \
--adminurl http://$CON_MGNT_IP:8774/v2/%\(tenant_id\)s \
--region RegionOne \
compute
echo "########## Install NOVA in $CON_MGNT_IP ##########"
sleep 5
apt-get -y install nova-api nova-cert nova-conductor \
nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
# Install libguestfs-tools non-interactively
echo "libguestfs-tools libguestfs/update-appliance boolean true" \
| debconf-set-selections
apt-get -y install libguestfs-tools sysfsutils guestfsd python-guestfs
######## Backup configurations for NOVA ##########"
sleep 7
#
controlnova=/etc/nova/nova.conf
test -f $controlnova.orig || cp $controlnova $controlnova.orig
rm $controlnova
touch $controlnova
cat << EOF >> $controlnova
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
my_ip = $CON_MGNT_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
verbose = True
enable_instance_password = True
[database]
connection = mysql+pymysql://nova:$NOVA_DBPASS@$CON_MGNT_IP/nova
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = $NOVA_PASS
[vnc]
vncserver_listen = \$my_ip
vncserver_proxyclient_address = \$my_ip
[glance]
host = $CON_MGNT_IP
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://$CON_MGNT_IP:9696
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = $NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = $METADATA_SECRET
EOF
echo "########## Remove Nova default db ##########"
sleep 7
rm /var/lib/nova/nova.sqlite
echo "########## Syncing Nova DB ##########"
sleep 7
su -s /bin/sh -c "nova-manage db sync" nova
# echo 'kvm_intel' >> /etc/modules
echo "########## Restarting NOVA ... ##########"
sleep 7
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
sleep 7
echo "########## Restarting NOVA ... ##########"
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
echo "########## Testing NOVA service ##########"
nova-manage service list

@@ -1,292 +0,0 @@
#!/bin/bash -ex
#
# RABBIT_PASS=a
# ADMIN_PASS=a
source config.cfg
echo "############ Configuring net forward for all VMs ############"
sleep 5
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
echo "Create DB for NEUTRON "
sleep 5
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for NEUTRON"
sleep 5
openstack user create --password $NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description \
"OpenStack Networking" network
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:9696 \
--adminurl http://$CON_MGNT_IP:9696 \
--internalurl http://$CON_MGNT_IP:9696 \
--region RegionOne \
network
# SERVICE_TENANT_ID=`keystone tenant-get service | awk '$2~/^id/{print $4}'`
echo "########## Install NEUTRON on the CONTROLLER (NETWORK) node ##########"
sleep 5
apt-get -y install neutron-server python-neutronclient \
neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent \
neutron-plugin-openvswitch neutron-common
######## Backup configuration NEUTRON.CONF ##################"
echo "########## Config NEUTRON ##########"
sleep 5
#
controlneutron=/etc/neutron/neutron.conf
test -f $controlneutron.orig || cp $controlneutron $controlneutron.orig
rm $controlneutron
touch $controlneutron
cat << EOF >> $controlneutron
[DEFAULT]
core_plugin = ml2
rpc_backend = rabbit
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://$CON_MGNT_IP:8774/v2
verbose = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
[database]
connection = mysql+pymysql://neutron:$NEUTRON_DBPASS@$CON_MGNT_IP/neutron
[nova]
[oslo_concurrency]
lock_path = \$state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[nova]
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = $NOVA_PASS
[qos]
EOF
######## Backup configuration of ML2 ##################"
echo "########## Configuring ML2 ##########"
sleep 7
controlML2=/etc/neutron/plugins/ml2/ml2_conf.ini
test -f $controlML2.orig || cp $controlML2 $controlML2.orig
rm $controlML2
touch $controlML2
cat << EOF >> $controlML2
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = $CON_MGNT_IP
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
EOF
echo "############ Configuring L3 AGENT ############"
sleep 7
netl3agent=/etc/neutron/l3_agent.ini
test -f $netl3agent.orig || cp $netl3agent $netl3agent.orig
rm $netl3agent
touch $netl3agent
cat << EOF >> $netl3agent
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
verbose = True
[AGENT]
EOF
echo "############ Configuring DHCP AGENT ############ "
sleep 7
#
netdhcp=/etc/neutron/dhcp_agent.ini
test -f $netdhcp.orig || cp $netdhcp $netdhcp.orig
rm $netdhcp
touch $netdhcp
cat << EOF >> $netdhcp
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[AGENT]
EOF
echo "############ Fix MTU issue ############"
sleep 3
echo "dhcp-option-force=26,1454" > /etc/neutron/dnsmasq-neutron.conf
killall dnsmasq
echo "############ Configuring METADATA AGENT ############"
sleep 7
netmetadata=/etc/neutron/metadata_agent.ini
test -f $netmetadata.orig || cp $netmetadata $netmetadata.orig
rm $netmetadata
touch $netmetadata
cat << EOF >> $netmetadata
[DEFAULT]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = $NEUTRON_PASS
nova_metadata_ip = $CON_MGNT_IP
metadata_proxy_shared_secret = $METADATA_SECRET
verbose = True
EOF
#
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
echo "########## Restarting NOVA service ##########"
sleep 7
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
echo "########## Restarting NEUTRON service ##########"
sleep 7
service neutron-server restart
service neutron-plugin-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
rm -f /var/lib/neutron/neutron.sqlite
echo "########## check service Neutron ##########"
neutron agent-list
sleep 5
echo "########## Config IP address for br-ex ##########"
ifaces=/etc/network/interfaces
test -f $ifaces.orig1 || cp $ifaces $ifaces.orig1
rm $ifaces
cat << EOF > $ifaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br-ex
iface br-ex inet static
address $CON_EXT_IP
netmask $NETMASK_ADD_EXT
gateway $GATEWAY_IP_EXT
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet manual
up ifconfig \$IFACE 0.0.0.0 up
up ip link set \$IFACE promisc on
down ip link set \$IFACE promisc off
down ifconfig \$IFACE down
auto eth0
iface eth0 inet static
address $CON_MGNT_IP
netmask $NETMASK_ADD_MGNT
EOF
echo "########## Config br-int and br-ex for OpenvSwitch ##########"
sleep 5
# ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
sleep 5
echo "##### Reboot SERVER #####"
init 6

#!/bin/bash -ex
#
# RABBIT_PASS=a
# ADMIN_PASS=a
source config.cfg
echo "Create DB for CINDER"
sleep 5
cat << EOF | mysql -uroot -p$MYSQL_PASS
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$CINDER_DBPASS';
FLUSH PRIVILEGES;
EOF
echo "Create user, endpoint for CINDER"
sleep 5
openstack user create --password $CINDER_PASS cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description \
"OpenStack Block Storage" volume
openstack service create --name cinderv2 --description \
"OpenStack Block Storage" volumev2
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:8776/v1/%\(tenant_id\)s \
--internalurl http://$CON_MGNT_IP:8776/v1/%\(tenant_id\)s \
--adminurl http://$CON_MGNT_IP:8776/v1/%\(tenant_id\)s \
--region RegionOne \
volume
openstack endpoint create \
--publicurl http://$CON_MGNT_IP:8776/v2/%\(tenant_id\)s \
--internalurl http://$CON_MGNT_IP:8776/v2/%\(tenant_id\)s \
--adminurl http://$CON_MGNT_IP:8776/v2/%\(tenant_id\)s \
--region RegionOne \
volumev2
#
echo "########## Install CINDER ##########"
sleep 3
apt-get install -y cinder-api cinder-scheduler python-cinderclient \
lvm2 cinder-volume python-mysqldb qemu
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
# Restrict LVM device scanning to /dev/vdb so LVs inside instance volumes are not scanned
sed -r -i 's#(filter = )(\[ "a/\.\*/" \])#\1["a\/vdb\/", "r/\.\*\/"]#g' \
/etc/lvm/lvm.conf
filecinder=/etc/cinder/cinder.conf
test -f $filecinder.orig || cp $filecinder $filecinder.orig
rm $filecinder
cat << EOF > $filecinder
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
my_ip = $CON_MGNT_IP
enabled_backends = lvm
glance_host = $CON_MGNT_IP
notification_driver = messagingv2
[database]
connection = mysql+pymysql://cinder:$CINDER_DBPASS@$CON_MGNT_IP/cinder
[oslo_messaging_rabbit]
rabbit_host = $CON_MGNT_IP
rabbit_userid = openstack
rabbit_password = $RABBIT_PASS
[keystone_authtoken]
auth_uri = http://$CON_MGNT_IP:5000
auth_url = http://$CON_MGNT_IP:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = $CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[cinder]
os_region_name = RegionOne
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
EOF
echo "########## Syncing Cinder DB ##########"
sleep 3
su -s /bin/sh -c "cinder-manage db sync" cinder
echo "########## Restarting CINDER service ##########"
sleep 3
service tgt restart
service cinder-volume restart
service cinder-api restart
service cinder-scheduler restart
rm -f /var/lib/cinder/cinder.sqlite
echo "########## Finish setting up CINDER !!! ##########"

#!/bin/bash -ex
source config.cfg
###################
echo "########## START INSTALLING OPS DASHBOARD ##########"
###################
sleep 5
echo "########## Installing Dashboard package ##########"
apt-get -y install openstack-dashboard
apt-get -y remove --auto-remove openstack-dashboard-ubuntu-theme
# echo "########## Fix bug in apache2 ##########"
# sleep 5
# Fix Apache bug on Ubuntu 14.04
# echo "ServerName localhost" > /etc/apache2/conf-available/servername.conf
# sudo a2enconf servername
echo "########## Creating redirect page ##########"
filehtml=/var/www/html/index.html
test -f $filehtml.orig || cp $filehtml $filehtml.orig
rm $filehtml
touch $filehtml
cat << EOF >> $filehtml
<html>
<head>
<META HTTP-EQUIV="Refresh" Content="0.5; URL=http://$CON_EXT_IP/horizon">
</head>
<body>
<center> <h1>Redirecting to the OpenStack Dashboard</h1> </center>
</body>
</html>
EOF
# Allow setting the instance password in the dashboard (only applies in the image)
sed -i "s/'can_set_password': False/'can_set_password': True/g" \
/etc/openstack-dashboard/local_settings.py
# Restarting apache2 and memcached
service apache2 restart
service memcached restart
echo "########## Finish setting up Horizon ##########"
echo "########## LOGIN INFORMATION IN HORIZON ##########"
echo "URL: http://$CON_EXT_IP/horizon"
echo "User: admin or demo"
echo "Password:" $ADMIN_PASS

#!/bin/bash
#Document the bridge setup....
#ovs-vsctl set bridge shabr stp_enable=false
#FIXME not all of them work... hardcoding for now.
#mirror=$(curl -s http://nl.alpinelinux.org/alpine/MIRRORS.txt | shuf | head -n 1)
mirror="http://dl-6.alpinelinux.org/alpine/"
#FIXME write some logic to detect this.
version=2.6.5-r1
statedir=/var/lib/superhaproxy
wrapperurl='http://git.haproxy.org/?p=haproxy-1.6.git;a=blob_plain;f=src/haproxy-systemd-wrapper.c;hb=HEAD'
#FIXME make this configurable
bridge=shabr
function init_config {
name="$1"
ip=$(crudini --get "$statedir/containers/$name/container.ini" superhaproxy ip)
subnet=$(crudini --get "$statedir/containers/$name/container.ini" superhaproxy subnet)
gateway=$(crudini --get "$statedir/containers/$name/container.ini" superhaproxy gateway)
mtu=$(crudini --get "$statedir/containers/$name/container.ini" superhaproxy mtu)
}
function get_pid_file {
echo "$statedir/containers/$1/container.pid"
}
function get_pid {
echo "$(< "$statedir/containers/$1/container.pid")"
}
function get_dump_dir {
echo "$statedir/dumps/$1"
}
function get_container_dir {
echo "$statedir/containers/$1"
}
if [ "x$1" == "x" ]
then
echo "Usage:"
echo " init"
echo " list"
echo " create"
echo " show"
echo " start"
echo " stop"
echo " reload"
echo " pid"
echo " pstree"
echo " shell"
echo " hatop"
echo " dump local"
echo " restore local"
exit -1
fi
if [ "x$1" == "xinit" ]
then
mkdir -p $statedir
if [ ! -d $statedir/alpine-tools ]
then
mkdir -p $statedir/alpine-tools
pushd $statedir/alpine-tools
curl ${mirror}/latest-stable/main/x86_64/apk-tools-static-${version}.apk | tar -zxf -
popd
fi
if [ ! -d $statedir/rootimg ]
then
mkdir -p $statedir/rootimg
$statedir/alpine-tools/sbin/apk.static -X ${mirror}/latest-stable/main -U --allow-untrusted --root $statedir/rootimg --initdb add alpine-base haproxy
#FIXME this makes way too big a binary. Remove once alpine provides the wrapper
curl -s "$wrapperurl" -o $statedir/wrapper.c
gcc --static -o $statedir/rootimg/usr/sbin/haproxy-systemd-wrapper $statedir/wrapper.c
#FIXME criu doesn't support checkpointing the chroot yet.
sed -i '/chroot/d' $statedir/rootimg/etc/haproxy/haproxy.cfg
fi
mkdir -p $statedir/containers
mkdir -p $statedir/dumps
mkdir -p $statedir/action-scripts
exit 0
fi
if [ "x$1" == "xlist" ]
then
ls $statedir/containers/ | cat
exit 0
fi
if [ "x$1" == "xcreate" ]
then
shift
ip=""
name=""
subnet="255.255.255.0"
gateway=""
mtu=9000
while getopts ":i:m:n:s:g:" opt; do
case ${opt} in
i )
ip="$OPTARG"
;;
m )
mtu="$OPTARG"
;;
s )
subnet="$OPTARG"
;;
g )
gateway="$OPTARG"
;;
n )
name="$OPTARG"
;;
\? ) echo "Usage: superhaproxy create [-m mtu] [-s subnetmask] [-g gatewayip] -i ip_address -n name"
exit -1
;;
esac
done
if [ "x$name" == "x" ]
then
echo "You must specify a name with -n"
exit -1
fi
if [ "x$ip" == "x" ]
then
echo "You must specify an ip with -i"
exit -1
fi
cp -a $statedir/rootimg "$statedir/containers/$name"
touch "$statedir/containers/$name/container.ini"
crudini --set "$statedir/containers/$name/container.ini" superhaproxy ip "$ip"
crudini --set "$statedir/containers/$name/container.ini" superhaproxy mtu "$mtu"
crudini --set "$statedir/containers/$name/container.ini" superhaproxy subnet "$subnet"
crudini --set "$statedir/containers/$name/container.ini" superhaproxy gateway "$gateway"
exit 0
fi
if [ "x$1" == "xshow" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
init_config "$name"
echo "IP: $ip"
echo "Subnet Mask: $subnet"
if [ "x$gateway" != "x" ]
then
echo "Gateway: $gateway"
fi
echo "MTU: $mtu"
exit 0
fi
if [ "x$1" == "xstart" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
init_config "$name"
container="$(get_container_dir "$name")"
#FIXME ensure escaping is correct.
unshare --net --mount --pid --fork -- bash -c "/usr/bin/setsid -- /bin/bash -c 'mount --make-rprivate /; mount --bind $container /tmp; cd /tmp; mkdir -p old; pivot_root . old; mount --bind /old/dev /dev; mount /proc /proc -t proc; umount -l old; exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid </dev/null >/dev/null 2>&1'" &
sleep 1
awk '{print $1}' /proc/$!/task/$!/children > "$container/container.pid"
P="$(get_pid "$name")"
ovs-vsctl del-port $bridge "sha$(get_pid "$name")" > /dev/null 2>&1
ip link add sha$P type veth peer name shai$P
ip link set dev sha$P mtu "$mtu" up
ip link set shai$P netns $P name eth0
nsenter -t $P -n ip addr add "$ip/$subnet" dev eth0
nsenter -t $P -n ip link set dev eth0 mtu "$mtu" up
ovs-vsctl add-port $bridge sha$P
exit $?
fi
if [ "x$1" == "xpid" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
get_pid $name
exit 0
fi
if [ "x$1" == "xpstree" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
pstree -p $(get_pid "$name")
exit 0
fi
if [ "x$1" == "xstop" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
kill $(get_pid "$name")
ovs-vsctl del-port $bridge "sha$(get_pid "$name")"
exit 0
fi
if [ "x$1" == "xshell" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
nsenter -n -m -p -t $(get_pid "$name") /bin/busybox sh
exit 0
fi
if [ "x$1" == "xhatop" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
hatop -s "$(get_container_dir "$name")/var/lib/haproxy/stats"
exit 0
fi
if [ "x$1" == "xreload" ]
then
name="$2"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
kill -USR2 $(get_pid "$name")
exit 0
fi
if [ "x$1" == "xdump" ]
then
subcmd="$2"
if [ "x$subcmd" != "xlocal" ]
then
echo "only local is supported at the moment"
exit -1
fi
name="$3"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
if [ "x$subcmd" == "xlocal" ]
then
dumpdir=$(get_dump_dir "$name")
rm -rf "$dumpdir"
mkdir -p "$dumpdir"
criu dump -D "$dumpdir" -t "$(get_pid "$name")" --tcp-established --shell-job --ext-mount-map /dev:dev
exit $?
fi
exit 0
fi
if [ "x$1" == "xrestore" ]
then
subcmd="$2"
if [ "x$subcmd" != "xlocal" ]
then
echo "only local is supported at the moment"
exit -1
fi
name="$3"
if [ "x$name" == "x" ]
then
echo "You must specify a name"
exit -1
fi
if [ "x$subcmd" == "xlocal" ]
then
tmpid=$$
pidfile=$(get_pid_file "$name")
as="$statedir/action-scripts/$name.sh"
cat > "$as" <<EOF
#!/bin/bash
if [ "x\${CRTOOLS_SCRIPT_ACTION}" == "xpost-restore" ]
then
P=\$(cat "$pidfile")
ip link set dev sha$tmpid name "sha\$P"
ip link set dev "sha\$P" mtu 9000 up
ovs-vsctl add-port $bridge "sha\$P"
fi
EOF
chmod +x "$as"
dumpdir=$(get_dump_dir "$name")
container="$(get_container_dir "$name")"
if [ ! -d "$dumpdir" ]
then
echo "Dump does not exist"
exit -1
fi
rm -f "$(get_pid_file "$name")"
ovs-vsctl del-port $bridge "sha$(get_pid "$name")" > /dev/null 2>&1
mount --bind "$container" "$container"
criu restore -d -D "$dumpdir" --shell-job --tcp-established --ext-mount-map dev:/dev --root "$container" --veth-pair eth0="sha$tmpid" --action-script "$as" --pidfile "$(get_pid_file "$name")"
res=$?
umount "$container"
exit $res
fi
exit 0
fi
#migrate
#rsync -avz --delete -e ssh /var/lib/superhaproxy/containers/foo 192.168.0.20:/var/lib/superhaproxy/containers/
# procedure:
# * initial rsync of container
# * dump on local host
# * second rsync of container
# * rsync of images
# * restore on remote host
# * On success
# * rm container and dump on localhost
# * On failure
# * If autofailback
# * Restore container local
# * on restore failure
# * Try starting remote, if works, remove local container/images all done.
# * If failed to start remote, try and start local
# * If state all still local, remove remote data.
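# The procedure above could be sketched as a dry run roughly like the
# following. The container name, remote host, and the assumption that
# `superhaproxy` is this script on PATH on both hosts are illustrative
# placeholders, not a tested implementation; RUN=echo prints each step
# instead of executing it.

```shell
RUN=echo
statedir=/var/lib/superhaproxy
name=foo
remote=192.168.0.20

$RUN rsync -az --delete -e ssh "$statedir/containers/$name" "$remote:$statedir/containers/"  # initial sync, container still running
$RUN superhaproxy dump local "$name"                                                         # checkpoint locally with criu
$RUN rsync -az --delete -e ssh "$statedir/containers/$name" "$remote:$statedir/containers/"  # second sync picks up frozen state
$RUN rsync -az --delete -e ssh "$statedir/dumps/$name" "$remote:$statedir/dumps/"            # copy the dump images
if $RUN ssh "$remote" superhaproxy restore local "$name"; then
    $RUN rm -rf "$statedir/containers/$name" "$statedir/dumps/$name"  # success: drop local copies
else
    $RUN superhaproxy restore local "$name"  # failure: attempt local failback
fi
```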
echo "Unknown command: $1"
exit -1
