Merge "Docs/Gate: NFS Support"

This commit is contained in:
Zuul 2018-01-19 06:50:53 +00:00 committed by Gerrit Code Review
commit 5f26fcd91e
68 changed files with 969 additions and 478 deletions

View File

@ -17,7 +17,12 @@
check:
jobs:
- openstack-helm-linter
- openstack-helm-dev-deploy:
- openstack-helm-dev-deploy-ceph:
irrelevant-files:
- ^.*\.rst$
- ^doc/.*$
- ^releasenotes/.*$
- openstack-helm-dev-deploy-nfs:
irrelevant-files:
- ^.*\.rst$
- ^doc/.*$
@ -42,7 +47,8 @@
gate:
jobs:
- openstack-helm-linter
- openstack-helm-dev-deploy
- openstack-helm-dev-deploy-ceph
- openstack-helm-dev-deploy-nfs
- openstack-helm-multinode-ubuntu
# - openstack-helm-multinode-centos
- openstack-helm-multinode-fedora
@ -59,12 +65,21 @@
zuul_osh_infra_relative_path: ../openstack-helm-infra/
pre-run:
- ../openstack-helm-infra/tools/gate/playbooks/osh-infra-upgrade-host.yaml
run: tools/gate/playbooks/dev-deploy.yaml
post-run: ../openstack-helm-infra/tools/gate/playbooks/osh-infra-collect-logs.yaml
required-projects:
- openstack/openstack-helm-infra
nodeset: openstack-helm-single-node
- job:
name: openstack-helm-dev-deploy-ceph
parent: openstack-helm-dev-deploy
run: tools/gate/playbooks/dev-deploy-ceph.yaml
- job:
name: openstack-helm-dev-deploy-nfs
parent: openstack-helm-dev-deploy
run: tools/gate/playbooks/dev-deploy-nfs.yaml
- job:
timeout: 7200
vars:

View File

@ -1,418 +0,0 @@
==========
All-in-One
==========
Overview
========
Below are some instructions and suggestions to help you get started with a
Kubeadm All-in-One environment on Ubuntu 16.04.
Other supported versions of Linux can also be used, with the appropriate changes
to package installation.
Requirements
============
.. warning:: Until the Ubuntu kernel shipped with 16.04 supports CephFS
subvolume mounts by default, the `HWE Kernel
<../../troubleshooting/ubuntu-hwe-kernel.rst>`__ is required to use CephFS.
System Requirements
-------------------
The recommended minimum system requirements for a full deployment are:
- 16GB of RAM
- 8 Cores
- 48GB HDD
For a deployment without cinder and horizon the system requirements are:
- 8GB of RAM
- 4 Cores
- 48GB HDD
This guide covers the minimum requirements needed to get started.
All commands below should be run as a normal user, not as root.
Appropriate versions of Docker, Kubernetes, and Helm will be installed
by the playbooks used below, so there's no need to install them ahead of time.
.. warning:: By default the Calico CNI will use ``192.168.0.0/16`` and
Kubernetes services will use ``10.96.0.0/16`` as the CIDR for services. Check
that these CIDRs are not in use on the development node before proceeding, or
adjust as required.
Host Configuration
------------------
OpenStack-Helm uses the host's networking namespace for many pods, including
the Ceph, Neutron, and Nova components. For this to function as expected, pods
need to be able to resolve DNS requests correctly. Ubuntu Desktop and some
other distributions use ``mdns4_minimal``, which does not behave as Kubernetes
expects with its default TLD of ``.local``. To operate as expected, either
change the ``hosts`` line in ``/etc/nsswitch.conf``, or confirm that it
matches:
.. code-block:: ini
hosts: files dns
Packages
--------
Install the latest versions of Git, CA certificates, and Make, if necessary:
.. literalinclude:: ../../../../tools/deployment/developer/00-install-packages.sh
:language: shell
:lines: 1,17-
Clone the OpenStack-Helm Repos
------------------------------
Once the host has been configured, the repos containing the OpenStack-Helm
charts should be cloned:
.. code-block:: shell
#!/bin/bash
set -xe
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
Deploy Kubernetes & Helm
------------------------
You may now deploy Kubernetes and Helm onto your machine. First move into the
``openstack-helm`` directory, then run the following:
.. literalinclude:: ../../../../tools/deployment/developer/01-deploy-k8s.sh
:language: shell
:lines: 1,17-
This command will deploy a single-node, kubeadm-administered cluster. It will
use the parameters in ``${OSH_INFRA_PATH}/tools/gate/playbooks/vars.yaml`` to
control the deployment, which can be overridden by adding entries to
``${OSH_INFRA_PATH}/tools/gate/devel/local-vars.yaml``.
Helm Chart Installation
=======================
Using the Helm packages previously pushed to the local Helm repository, run the
following commands to instruct Tiller to create an instance of the given chart.
During installation, the Helm client will print useful information about the
resources created, the state of the Helm releases, and whether any additional
configuration steps are necessary.
Install OpenStack-Helm
----------------------
.. note:: The following commands all assume that they are run from the
``openstack-helm`` directory and the repos have been cloned as above.
Setup Clients on the host and assemble the charts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting up the OpenStack clients and Kubernetes RBAC rules, along with
assembling the charts, can be performed by running the following commands:
.. literalinclude:: ../../../../tools/deployment/developer/02-setup-client.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/02-setup-client.sh
Deploy the ingress controller
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/03-ingress.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/03-ingress.sh
Deploy Ceph
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/04-ceph.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/04-ceph.sh
Activate the openstack namespace to be able to use Ceph
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/05-ceph-ns-activate.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/05-ceph-ns-activate.sh
Deploy MariaDB
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/06-mariadb.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/06-mariadb.sh
Deploy RabbitMQ
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/07-rabbitmq.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/07-rabbitmq.sh
Deploy Memcached
^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/08-memcached.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/08-memcached.sh
Deploy Keystone
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/09-keystone.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/09-keystone.sh
Create Ceph endpoints and service account for use with keystone
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/10-ceph-radosgateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/10-ceph-radosgateway.sh
Deploy Horizon
^^^^^^^^^^^^^^
.. warning:: Horizon deployment is not tested in the OSH development environment
community gates.
.. literalinclude:: ../../../../tools/deployment/developer/11-horizon.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/11-horizon.sh
Deploy Glance
^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/12-glance.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/12-glance.sh
Deploy OpenvSwitch
^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/13-openvswitch.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/13-openvswitch.sh
Deploy Libvirt
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/14-libvirt.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/14-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/15-compute-kit.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/15-compute-kit.sh
Setup the gateway to the public network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/16-setup-gateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/16-setup-gateway.sh
Deploy Cinder
^^^^^^^^^^^^^
.. warning:: Cinder deployment is not tested in the OSH development environment
community gates.
.. literalinclude:: ../../../../tools/deployment/developer/17-cinder.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/17-cinder.sh
Deploy Heat
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/18-heat.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/18-heat.sh
Exercise the cloud
^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/19-use-it.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/19-use-it.sh
To run further commands from the CLI manually, execute the following to
set up authentication credentials::
export OS_CLOUD=openstack_helm
Note that this command only enables successful authentication with the
``python-openstackclient`` CLI. To use legacy clients such as
``python-novaclient`` from the CLI, reference the auth values in
``/etc/openstack/clouds.yaml`` and run::
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'
The example above uses the default values used by ``openstack-helm-infra``.
Removing Helm Charts
====================
To delete an installed helm chart, use the following command:
.. code-block:: shell
helm delete ${RELEASE_NAME} --purge
This will delete all Kubernetes resources generated when the chart was
instantiated. However, for OpenStack charts this will not, by default, delete
the database or the database users that were created when the chart was
installed. All OpenStack projects can be configured so that their database is
also removed on deletion. To delete the database when the chart is deleted, the
database drop job must be enabled before installing the chart. There are two
ways to enable the job: set the ``job_db_drop`` value to ``true`` in the
chart's ``values.yaml`` file, or override the value on the ``helm install``
command line as follows:
.. code-block:: shell
helm install ${RELEASE_NAME} --set manifests.job_db_drop=true
Environment tear-down
=====================
To tear down the development environment, charts should be removed first from
the ``openstack`` namespace and then from the ``ceph`` namespace, using the
commands from the `Removing Helm Charts`_ section. Once this has been done, the
namespaces themselves can be deleted by running:
.. code-block:: shell
kubectl delete namespace <namespace_name>
Final cleanup of the development environment is then performed by removing the
``/var/lib/openstack-helm`` directory from the host. This restores the
environment to a clean Kubernetes deployment, which can then either be manually
removed or overwritten by restarting the deployment process.

View File

@ -0,0 +1,44 @@
=======================
Cleaning the Deployment
=======================
Removing Helm Charts
====================
To delete an installed helm chart, use the following command:
.. code-block:: shell
helm delete ${RELEASE_NAME} --purge
This will delete all Kubernetes resources generated when the chart was
instantiated. However, for OpenStack charts this will not, by default, delete
the database or the database users that were created when the chart was
installed. All OpenStack projects can be configured so that their database is
also removed on deletion. To delete the database when the chart is deleted, the
database drop job must be enabled before installing the chart. There are two
ways to enable the job: set the ``job_db_drop`` value to ``true`` in the
chart's ``values.yaml`` file, or override the value on the ``helm install``
command line as follows:
.. code-block:: shell
helm install ${RELEASE_NAME} --set manifests.job_db_drop=true
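For the ``values.yaml`` route, the override corresponds to the same key used by
``--set`` above; a minimal sketch of the relevant fragment:

```yaml
# Fragment of a chart's values.yaml enabling the database drop job,
# mirroring the --set manifests.job_db_drop=true override shown above.
manifests:
  job_db_drop: true
```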
Environment tear-down
=====================
To tear down the development environment, charts should be removed first from
the ``openstack`` namespace and then from the ``ceph`` namespace, using the
commands from the `Removing Helm Charts`_ section. Once this has been done, the
namespaces themselves can be deleted by running:
.. code-block:: shell
kubectl delete namespace <namespace_name>
Final cleanup of the development environment is then performed by removing the
``/var/lib/openstack-helm`` directory from the host. This restores the
environment to a clean Kubernetes deployment, which can then either be manually
removed or overwritten by restarting the deployment process.
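Putting the tear-down steps together, a sketch of the full sequence (the
release names are illustrative, not the actual release names of a given
deployment; ``RUN=echo`` makes this a dry run that only prints the commands —
remove the ``echo`` to execute for real):

```shell
#!/bin/bash
set -e
# Dry run by default: prefix every destructive command with echo.
RUN=echo

# 1. Remove releases from the openstack namespace first, then ceph
#    (repeat per installed release; names here are hypothetical).
$RUN helm delete keystone --purge
$RUN helm delete ceph --purge

# 2. Delete the now-empty namespaces.
$RUN kubectl delete namespace openstack
$RUN kubectl delete namespace ceph

# 3. Remove on-host state to return to a clean Kubernetes deployment.
$RUN sudo rm -rf /var/lib/openstack-helm
```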

View File

@ -0,0 +1,204 @@
====================
Deployment With Ceph
====================
Deploy Ceph
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/040-ceph.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/040-ceph.sh
Activate the openstack namespace to be able to use Ceph
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/045-ceph-ns-activate.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
Deploy MariaDB
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/050-mariadb.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/050-mariadb.sh
Deploy RabbitMQ
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/060-rabbitmq.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/060-rabbitmq.sh
Deploy Memcached
^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/070-memcached.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/070-memcached.sh
Deploy Keystone
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/080-keystone.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/080-keystone.sh
Deploy Heat
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/090-heat.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/090-heat.sh
Deploy Horizon
^^^^^^^^^^^^^^
.. warning:: Horizon deployment is not tested in the OSH development environment
community gates.
.. literalinclude:: ../../../../tools/deployment/developer/ceph/100-horizon.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/100-horizon.sh
Create Ceph endpoints and service account for use with keystone
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/110-ceph-radosgateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/110-ceph-radosgateway.sh
Deploy Glance
^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/120-glance.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/120-glance.sh
Deploy Cinder
^^^^^^^^^^^^^
.. warning:: Cinder deployment is not tested in the OSH development environment
community gates.
.. literalinclude:: ../../../../tools/deployment/developer/ceph/130-cinder.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/130-cinder.sh
Deploy OpenvSwitch
^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/140-openvswitch.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/140-openvswitch.sh
Deploy Libvirt
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/150-libvirt.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/150-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/160-compute-kit.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/160-compute-kit.sh
Setup the gateway to the public network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/170-setup-gateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/170-setup-gateway.sh

View File

@ -0,0 +1,159 @@
===================
Deployment With NFS
===================
Deploy NFS Provisioner
^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/040-nfs-provisioner.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/040-nfs-provisioner.sh
Deploy MariaDB
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/050-mariadb.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/050-mariadb.sh
Deploy RabbitMQ
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/060-rabbitmq.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/060-rabbitmq.sh
Deploy Memcached
^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/070-memcached.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/070-memcached.sh
Deploy Keystone
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/080-keystone.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/080-keystone.sh
Deploy Heat
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/090-heat.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/090-heat.sh
Deploy Horizon
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/100-horizon.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/100-horizon.sh
Deploy Glance
^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/120-glance.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/120-glance.sh
Deploy OpenvSwitch
^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/140-openvswitch.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/140-openvswitch.sh
Deploy Libvirt
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/150-libvirt.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/150-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/160-compute-kit.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/160-compute-kit.sh
Setup the gateway to the public network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/170-setup-gateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/170-setup-gateway.sh

View File

@ -0,0 +1,36 @@
==================
Exercise the Cloud
==================
Once OpenStack-Helm has been deployed, the cloud can be exercised either with
the OpenStack client or with the same Heat templates that are used in the
validation gates.
.. literalinclude:: ../../../../tools/deployment/developer/common/900-use-it.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/common/900-use-it.sh
To run further commands from the CLI manually, execute the following to
set up authentication credentials::
export OS_CLOUD=openstack_helm
Note that this command only enables successful authentication with the
``python-openstackclient`` CLI. To use legacy clients such as
``python-novaclient`` from the CLI, reference the auth values in
``/etc/openstack/clouds.yaml`` and run::
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'
The example above uses the default values used by ``openstack-helm-infra``.
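For reference, the ``openstack_helm`` entry that ``OS_CLOUD`` selects lives in
``/etc/openstack/clouds.yaml``; a minimal sketch of that entry, assuming the
default values shown above, would look like:

```yaml
# Sketch of /etc/openstack/clouds.yaml (assumed defaults, not a copy of the
# generated file) — the auth values mirror the exports listed above.
clouds:
  openstack_helm:
    auth:
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      username: 'admin'
      password: 'password'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
```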

View File

@ -6,4 +6,9 @@ Contents:
.. toctree::
:maxdepth: 2
all-in-one
requirements-and-host-config
kubernetes-and-common-setup
deploy-with-nfs
deploy-with-ceph
exercise-the-cloud
cleaning-deployment

View File

@ -0,0 +1,87 @@
===========================
Kubernetes and Common Setup
===========================
Packages
--------
Install the latest versions of Git, CA certificates, and Make, if necessary:
.. literalinclude:: ../../../../tools/deployment/developer/common/000-install-packages.sh
:language: shell
:lines: 1,17-
Clone the OpenStack-Helm Repos
------------------------------
Once the host has been configured, the repos containing the OpenStack-Helm
charts should be cloned:
.. code-block:: shell
#!/bin/bash
set -xe
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
Deploy Kubernetes & Helm
------------------------
You may now deploy Kubernetes and Helm onto your machine. First move into the
``openstack-helm`` directory, then run the following:
.. literalinclude:: ../../../../tools/deployment/developer/common/010-deploy-k8s.sh
:language: shell
:lines: 1,17-
This command will deploy a single-node, kubeadm-administered cluster. It will
use the parameters in ``${OSH_INFRA_PATH}/tools/gate/playbooks/vars.yaml`` to
control the deployment, which can be overridden by adding entries to
``${OSH_INFRA_PATH}/tools/gate/devel/local-vars.yaml``.
Helm Chart Installation
=======================
Using the Helm packages previously pushed to the local Helm repository, run the
following commands to instruct Tiller to create an instance of the given chart.
During installation, the Helm client will print useful information about the
resources created, the state of the Helm releases, and whether any additional
configuration steps are necessary.
Install OpenStack-Helm
----------------------
.. note:: The following commands all assume that they are run from the
``openstack-helm`` directory and the repos have been cloned as above.
Setup Clients on the host and assemble the charts
=================================================
Setting up the OpenStack clients and Kubernetes RBAC rules, along with
assembling the charts, can be performed by running the following commands:
.. literalinclude:: ../../../../tools/deployment/developer/common/020-setup-client.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/common/020-setup-client.sh
Deploy the ingress controller
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/common/030-ingress.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/common/030-ingress.sh

View File

@ -0,0 +1,59 @@
===================================
Requirements and Host Configuration
===================================
Overview
========
Below are some instructions and suggestions to help you get started with a
Kubeadm All-in-One environment on Ubuntu 16.04.
Other supported versions of Linux can also be used, with the appropriate changes
to package installation.
Requirements
============
.. warning:: Until the Ubuntu kernel shipped with 16.04 supports CephFS
subvolume mounts by default, the `HWE Kernel
<../../troubleshooting/ubuntu-hwe-kernel.rst>`__ is required to use CephFS.
System Requirements
-------------------
The recommended minimum system requirements for a full deployment are:
- 16GB of RAM
- 8 Cores
- 48GB HDD
For a deployment without cinder and horizon the system requirements are:
- 8GB of RAM
- 4 Cores
- 48GB HDD
This guide covers the minimum requirements needed to get started.
All commands below should be run as a normal user, not as root.
Appropriate versions of Docker, Kubernetes, and Helm will be installed
by the playbooks used below, so there's no need to install them ahead of time.
.. warning:: By default the Calico CNI will use ``192.168.0.0/16`` and
Kubernetes services will use ``10.96.0.0/16`` as the CIDR for services. Check
that these CIDRs are not in use on the development node before proceeding, or
adjust as required.
Host Configuration
------------------
OpenStack-Helm uses the host's networking namespace for many pods, including
the Ceph, Neutron, and Nova components. For this to function as expected, pods
need to be able to resolve DNS requests correctly. Ubuntu Desktop and some
other distributions use ``mdns4_minimal``, which does not behave as Kubernetes
expects with its default TLD of ``.local``. To operate as expected, either
change the ``hosts`` line in ``/etc/nsswitch.conf``, or confirm that it
matches:
.. code-block:: ini
hosts: files dns
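If the ``hosts`` line does need changing, the edit can be scripted. The sketch
below demonstrates the substitution on a sample file with hypothetical
Ubuntu-Desktop-style content; on a real host, run the same ``sed`` against
``/etc/nsswitch.conf`` with ``sudo``:

```shell
#!/bin/bash
set -e
# Write a hypothetical mdns4_minimal-style hosts line to a sample file.
printf 'hosts:          files mdns4_minimal [NOTFOUND=return] dns\n' \
    > /tmp/nsswitch.conf.sample

# Replace whatever follows "hosts:" with the resolver order Kubernetes expects.
sed -i 's/^hosts:.*/hosts:          files dns/' /tmp/nsswitch.conf.sample

# Confirm the result.
grep '^hosts:' /tmp/nsswitch.conf.sample
```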

View File

@ -1 +0,0 @@
../common/setup-client.sh

View File

@ -1 +0,0 @@
../common/memcached.sh

View File

@ -0,0 +1 @@
../common/000-install-packages.sh

View File

@ -0,0 +1 @@
../common/010-deploy-k8s.sh

View File

@ -0,0 +1 @@
../common/020-setup-client.sh

View File

@ -0,0 +1 @@
../common/030-ingress.sh

View File

@ -0,0 +1 @@
../common/050-mariadb.sh

View File

@ -0,0 +1 @@
../common/060-rabbitmq.sh

View File

@ -0,0 +1 @@
../../common/memcached.sh

View File

@ -0,0 +1 @@
../common/080-keystone.sh

View File

@ -0,0 +1 @@
../common/090-heat.sh

View File

@ -0,0 +1 @@
../common/100-horizon.sh

View File

@ -42,6 +42,7 @@ helm install --namespace=openstack ${WORK_DIR}/ceph --name=radosgw-openstack \
#NOTE: Validate Deployment info
helm status radosgw-openstack
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack service list
openstack container create 'mygreatcontainer'
curl -L -o /tmp/important-file.jpg https://imgflip.com/s/meme/Cute-Cat.jpg

View File

@ -33,6 +33,6 @@ helm install ./glance \
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 15
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'

View File

@ -29,5 +29,5 @@ helm install ./cinder \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 15
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack volume type list

View File

@ -0,0 +1 @@
../common/140-openvswitch.sh

View File

@ -45,6 +45,6 @@ helm install ./neutron \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 15
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack hypervisor list
openstack network agent list

View File

@ -0,0 +1 @@
../common/170-setup-gateway.sh

View File

@ -0,0 +1 @@
../common/900-use-it.sh

View File

@ -16,7 +16,9 @@
set -xe
CURRENT_DIR="$(pwd)"
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
cd ${OSH_INFRA_PATH}
make dev-deploy setup-host
make dev-deploy k8s
cd ${CURRENT_DIR}

View File

@ -0,0 +1 @@
../../common/setup-client.sh

View File

@ -22,7 +22,8 @@ make pull-images rabbitmq
#NOTE: Deploy command
helm install ./rabbitmq \
--namespace=openstack \
--name=rabbitmq
--name=rabbitmq \
--set pod.replicas.server=1
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

View File

@ -30,4 +30,5 @@ helm install ./keystone \
#NOTE: Validate Deployment info
helm status keystone
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list

View File

@ -29,5 +29,5 @@ helm install ./heat \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 15
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack orchestration service list

View File

@ -0,0 +1 @@
../common/000-install-packages.sh

View File

@ -0,0 +1 @@
../common/010-deploy-k8s.sh

View File

@ -0,0 +1 @@
../common/020-setup-client.sh

View File

@ -0,0 +1 @@
../common/030-ingress.sh

View File

@ -0,0 +1,30 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm install ${OSH_INFRA_PATH}/nfs-provisioner \
--namespace=nfs \
--name=nfs-provisioner \
--set storageclass.name=general
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh nfs
#NOTE: Display info
helm status nfs-provisioner
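Once the nfs-provisioner chart is up, dynamic provisioning against the "general" storage class can be spot-checked with a throwaway claim. The manifest below is an illustrative sketch (the claim name and file path are assumptions, and the kubectl steps are left as comments since they need cluster access):

```shell
#!/bin/bash
# Write a minimal PVC manifest that requests storage from the
# "general" storage class served by nfs-provisioner.
cat > /tmp/nfs-test-claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  storageClassName: general
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# Apply and verify (requires a working cluster):
#   kubectl apply -f /tmp/nfs-test-claim.yaml
#   kubectl get pvc nfs-test-claim   # STATUS should reach Bound
#   kubectl delete -f /tmp/nfs-test-claim.yaml
```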

View File

@ -0,0 +1 @@
../common/050-mariadb.sh

View File

@ -0,0 +1 @@
../common/060-rabbitmq.sh

View File

@ -0,0 +1 @@
../../common/memcached.sh

View File

@ -0,0 +1 @@
../common/080-keystone.sh

View File

@ -0,0 +1 @@
../common/090-heat.sh

View File

@ -0,0 +1 @@
../common/100-horizon.sh

View File

@ -0,0 +1,37 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
#NOTE: Pull images and lint chart
make pull-images glance
#NOTE: Deploy command
helm install ./glance \
--namespace=openstack \
--name=glance \
--set storage=pvc
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'

View File

@ -0,0 +1 @@
../common/140-openvswitch.sh

View File

@ -0,0 +1,31 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
#NOTE: Pull images and lint chart
make pull-images libvirt
#NOTE: Deploy command
helm install ./libvirt \
--namespace=openstack \
--name=libvirt \
--set ceph.enabled=false
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status libvirt

View File

@ -0,0 +1,52 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
#NOTE: Pull images and lint chart
make pull-images nova
make pull-images neutron
#NOTE: Deploy nova
if [ "x$(systemd-detect-virt)" == "xnone" ]; then
echo 'OSH is not being deployed in a virtualized environment'
helm install ./nova \
--namespace=openstack \
--name=nova \
--set ceph.enabled=false
else
echo 'OSH is being deployed in a virtualized environment; using qemu for nova'
helm install ./nova \
--namespace=openstack \
--name=nova \
--set conf.nova.libvirt.virt_type=qemu \
--set ceph.enabled=false
fi
#NOTE: Deploy neutron
helm install ./neutron \
--namespace=openstack \
--name=neutron \
--values=./tools/overrides/mvp/neutron-ovs.yaml
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack hypervisor list
openstack network agent list
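The branch above selects nova's virt_type from systemd-detect-virt: on bare metal (output "none") the chart default is kept, while on a virtualised host it falls back to qemu software emulation, since nested hardware acceleration is not generally available in CI. That decision can be sketched as a small helper; the function name is an assumption for illustration:

```shell
#!/bin/bash
# choose_virt_type: map systemd-detect-virt output to a nova virt_type.
# Illustrative helper mirroring the branch in the deploy script above.
choose_virt_type() {
  if [ "$1" = "none" ]; then
    echo kvm   # bare metal: hardware acceleration available
  else
    echo qemu  # virtualised host: software emulation
  fi
}

# Usage sketch:
#   choose_virt_type "$(systemd-detect-virt)"
```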

View File

@ -0,0 +1 @@
../common/170-setup-gateway.sh

View File

@ -0,0 +1 @@
../common/900-use-it.sh

View File

@ -28,6 +28,6 @@ helm install ./keystone \
#NOTE: Validate Deployment info
helm status keystone
export OS_CLOUD=openstack_helm
sleep 30
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list
helm test keystone --timeout 900

View File

@ -32,7 +32,7 @@ helm install ./glance \
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 30
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'
helm test glance --timeout 900

View File

@ -31,6 +31,6 @@ helm install ./cinder \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack volume type list
helm test cinder --timeout 900

View File

@ -66,7 +66,7 @@ helm install ./neutron \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack hypervisor list
openstack network agent list
helm test nova --timeout 900

View File

@ -30,5 +30,5 @@ helm install ./heat \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 15
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack orchestration service list

View File

@ -27,5 +27,5 @@ helm install ./barbican \
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
helm test barbican

View File

@ -17,13 +17,13 @@
- name: Deploy Required packages
shell: |
set -xe;
./tools/deployment/developer/00-install-packages.sh
./tools/deployment/developer/ceph/000-install-packages.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Kubernetes
shell: |
set -xe;
./tools/deployment/developer/01-deploy-k8s.sh
./tools/deployment/developer/ceph/010-deploy-k8s.sh
vars:
OSH_INFRA_PATH: "{{ zuul_osh_infra_relative_path | default('') }}"
args:
@ -31,108 +31,108 @@
- name: Setup OS and K8s Clients
shell: |
set -xe;
./tools/deployment/developer/02-setup-client.sh
./tools/deployment/developer/ceph/020-setup-client.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Ingress
shell: |
set -xe;
./tools/deployment/developer/03-ingress.sh
./tools/deployment/developer/ceph/030-ingress.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Ceph
shell: |
set -xe;
./tools/deployment/developer/04-ceph.sh
./tools/deployment/developer/ceph/040-ceph.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Ceph NS Activate
shell: |
set -xe;
./tools/deployment/developer/05-ceph-ns-activate.sh
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Mariadb
shell: |
set -xe;
./tools/deployment/developer/06-mariadb.sh
./tools/deployment/developer/ceph/050-mariadb.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy RabbitMQ
shell: |
set -xe;
./tools/deployment/developer/07-rabbitmq.sh
./tools/deployment/developer/ceph/060-rabbitmq.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Memcached
shell: |
set -xe;
./tools/deployment/developer/08-memcached.sh
./tools/deployment/developer/ceph/070-memcached.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Keystone
shell: |
set -xe;
./tools/deployment/developer/09-keystone.sh
./tools/deployment/developer/ceph/080-keystone.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Ceph Keystone RadosGW
- name: Deploy Heat
shell: |
set -xe;
./tools/deployment/developer/10-ceph-radosgateway.sh
./tools/deployment/developer/ceph/090-heat.sh
args:
chdir: "{{ zuul.project.src_dir }}"
# - name: Deploy Horizon
# shell: |
# set -xe;
# ./tools/deployment/developer/11-horizon.sh
# ./tools/deployment/developer/ceph/100-horizon.sh
# args:
# chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Keystone Endpoints and User management for CephRGW
shell: |
set -xe;
./tools/deployment/developer/ceph/110-ceph-radosgateway.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Glance
shell: |
set -xe;
./tools/deployment/developer/12-glance.sh
./tools/deployment/developer/ceph/120-glance.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy openvswitch
shell: |
set -xe;
./tools/deployment/developer/13-openvswitch.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy libvirt
shell: |
set -xe;
./tools/deployment/developer/14-libvirt.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy compute kit
shell: |
set -xe;
./tools/deployment/developer/15-compute-kit.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy setup gateway
shell: |
set -xe;
./tools/deployment/developer/16-setup-gateway.sh
args:
chdir: "{{ zuul.project.src_dir }}"
# - name: Deploy cinder
# - name: Deploy Cinder
# shell: |
# set -xe;
# ./tools/deployment/developer/17-cinder.sh
# ./tools/deployment/developer/ceph/130-cinder.sh
# args:
# chdir: "{{ zuul.project.src_dir }}"
- name: Deploy heat
- name: Deploy OpenvSwitch
shell: |
set -xe;
./tools/deployment/developer/18-heat.sh
./tools/deployment/developer/ceph/140-openvswitch.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Libvirt
shell: |
set -xe;
./tools/deployment/developer/ceph/150-libvirt.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy VM Compute Kit
shell: |
set -xe;
./tools/deployment/developer/ceph/160-compute-kit.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Setup Gateway
shell: |
set -xe;
./tools/deployment/developer/ceph/170-setup-gateway.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Use the deployed cloud
shell: |
set -xe;
./tools/deployment/developer/19-use-it.sh
./tools/deployment/developer/ceph/900-use-it.sh
args:
chdir: "{{ zuul.project.src_dir }}"

View File

@ -0,0 +1,120 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: primary
tasks:
- name: Deploy Required packages
shell: |
set -xe;
./tools/deployment/developer/nfs/000-install-packages.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Kubernetes
shell: |
set -xe;
./tools/deployment/developer/nfs/010-deploy-k8s.sh
vars:
OSH_INFRA_PATH: "{{ zuul_osh_infra_relative_path | default('') }}"
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Setup OS and K8s Clients
shell: |
set -xe;
./tools/deployment/developer/nfs/020-setup-client.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Ingress
shell: |
set -xe;
./tools/deployment/developer/nfs/030-ingress.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy NFS Provisioner
shell: |
set -xe;
./tools/deployment/developer/nfs/040-nfs-provisioner.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Mariadb
shell: |
set -xe;
./tools/deployment/developer/nfs/050-mariadb.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy RabbitMQ
shell: |
set -xe;
./tools/deployment/developer/nfs/060-rabbitmq.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Memcached
shell: |
set -xe;
./tools/deployment/developer/nfs/070-memcached.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Keystone
shell: |
set -xe;
./tools/deployment/developer/nfs/080-keystone.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Heat
shell: |
set -xe;
./tools/deployment/developer/nfs/090-heat.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Horizon
shell: |
set -xe;
./tools/deployment/developer/nfs/100-horizon.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Glance
shell: |
set -xe;
./tools/deployment/developer/nfs/120-glance.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy OpenvSwitch
shell: |
set -xe;
./tools/deployment/developer/nfs/140-openvswitch.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy Libvirt
shell: |
set -xe;
./tools/deployment/developer/nfs/150-libvirt.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Deploy VM Compute Kit
shell: |
set -xe;
./tools/deployment/developer/nfs/160-compute-kit.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Setup Gateway
shell: |
set -xe;
./tools/deployment/developer/nfs/170-setup-gateway.sh
args:
chdir: "{{ zuul.project.src_dir }}"
- name: Use the deployed cloud
shell: |
set -xe;
./tools/deployment/developer/nfs/900-use-it.sh
args:
chdir: "{{ zuul.project.src_dir }}"