Add doc for additional CI information

CI and basic introduction was added

Change-Id: Iee06fc2c7f2a7a2e4cb4721f7b49934fcf9db834
jichenjc 2017-08-18 22:03:28 +08:00
parent be4a16a300
commit 4e283994bd
9 changed files with 99 additions and 32 deletions

doc/source/ci.rst Normal file

@@ -0,0 +1,51 @@
.. _ci:
========================
z/VM openstack driver CI
========================
This document is intended as an architecture reference only.
OpenStack 3rd party CI
----------------------
OpenStack requires third-party CI for vendor drivers; detailed information
can be found at https://docs.openstack.org/infra/openstackci/third_party_ci.html.
z/VM CI hardware
----------------
The CI cloud is an OpenStack Liberty based cloud (it may move forward as new
releases become available) that is used to deploy the test servers which run the
devstack-gate job(s) executing selected tempest tests. OpenStack Liberty is used
because the cloud infrastructure is installed from the packages in the Liberty apt
repository. An OpenStack controller, a neutron node and a compute node are installed
in virtual machines created with libvirt (virsh) hosted on rack server 1.
Additional compute nodes are installed on rack servers 2, 3 and 4.
.. image:: ./images/ci_arch.jpg
z/VM CI running sample
----------------------
Using an example of two test servers running tempest tests, each testing a different
OpenStack patch, the following diagram shows additional detail of the bottom layer of
the preceding diagram. Each test server assumes it has a dedicated z/VM system that
its OpenStack nova plugins are using.
Each test server is an OpenStack controller: a devstack installation running on the
reference platform (x86 Ubuntu Linux), set up prior to running the tempest tests.
Each test server's OpenStack nova plugin for z/VM is configured to talk to some
z/VM system; depending upon how z/VM scales in practice, each worker might really
have its own dedicated z/VM back end, or the workers might actually share a
z/VM instance as shown here. Each worker's plugin can be configured to use a
different prefix when creating virtual servers on z/VM, so the guests they create
will not directly collide (see the sketch following the diagram).
.. image:: ./images/ci_sample.jpg
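A minimal Python sketch of the prefix idea described above, illustrative only; the
prefixes and the naming scheme here are hypothetical and are not the driver's actual
code. It simply shows how distinct per-worker prefixes keep guest names apart while
staying within z/VM's 8-character user ID limit.

.. code-block:: python

    def guest_name(prefix, sequence):
        """Build a z/VM guest name from a per-worker prefix and a sequence number.

        z/VM user IDs are at most 8 characters, so the prefix plus the
        zero-padded hex sequence must fit within that limit.
        """
        name = "{}{:05X}".format(prefix, sequence)
        assert len(name) <= 8
        return name

    # Two workers deploying the same sequence number never collide:
    print(guest_name("CIA", 42))  # CIA0002A (worker A, hypothetical prefix)
    print(guest_name("CIB", 42))  # CIB0002A (worker B, hypothetical prefix)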
z/VM CI reference and logs
--------------------------
* Logs: `<http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/>`_
* Status:


@@ -0,0 +1,23 @@
.. _cloudlib4zvm:
====================
z/VM cloud connector
====================
Introduction
------------
z/VM cloud connector is a development SDK for managing z/VM.
It provides a set of APIs to operate z/VM, covering guests, images,
networks, volumes and so on.
Just as os-win serves the nova Hyper-V driver and oslo.vmware serves the
nova VMware driver, z/VM cloud connector (CloudLib4zvm) serves the
nova z/VM driver and other z/VM related OpenStack drivers such
as those for neutron and ceilometer.
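A minimal usage sketch in Python follows. The module, class and request names
(``zvmconnector``, ``ZVMConnector``, ``guest_list``) and the connection parameters
are assumptions here, not a definitive reference; check the documentation linked
below for the authoritative API.

.. code-block:: python

    # Assumed API names; verify against the CloudLib4zvm documentation.
    from zvmconnector import connector

    # Connect to the cloud connector service (host name and port are examples).
    client = connector.ZVMConnector(ip_addr='zvm.example.com', port=8080)

    # Ask the connector for the list of guests it manages.
    result = client.send_request('guest_list')
    print(result)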
Links
-----
* Pypi: `<https://pypi.python.org/pypi/CloudLib4zvm>`_
* Doc: `<http://cloudlib4zvm.readthedocs.io/en/latest>`_


@@ -59,27 +59,32 @@ Image Requirements
* The virtual server/Linux instance used as the source of the new image should meet the following criteria:
1. The root filesystem must not be on a logical volume.
2. The minidisk on which the root filesystem resides should be a minidisk of the same type as
desired for a subsequent deploy (for example, an ECKD disk image should be captured
for a subsequent deploy to an ECKD disk).
3. The minidisk must not be a full-pack minidisk, since cylinder 0 on full-pack minidisks is
reserved, and it must be defined with virtual address 0100.
4. The root disk should have a single partition.
5. The image being captured should support SSH access using keys instead of specifying a password. The
subsequent steps to capture the image will perform a key exchange to allow xCAT to access the server.
6. The image being captured should not have any network interface cards (NICs) defined below virtual
address 1100.
In addition to the specified criteria, the following recommendations allow for efficient use of the image:
* The minidisk on which the root filesystem resides should be defined as a multiple of full gigabytes in
size (for example, 1GB or 2GB). OpenStack specifies disk sizes in full gigabyte values, whereas z/VM
handles disk sizes in other ways (cylinders for ECKD disks, blocks for FBA disks, and so on). See the
appropriate online information if you need to convert cylinders or blocks to gigabytes (a worked
conversion sketch follows this list); for example:
http://www.mvsforums.com/helpboards/viewtopic.php?t=8316.
* During subsequent deploys of the image, the OpenStack code will ensure that a disk image is not
copied to a disk smaller than the source disk, as this would result in loss of data. The disk specified in
the flavor should therefore be equal to or slightly larger than the source virtual machine's root disk.
IBM recommends specifying the disk size as 0 in the flavor, which will cause the virtual machine to be
created with the same disk size as the source disk.
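A small Python sketch of the cylinder-to-gigabyte conversion referred to above,
assuming standard 3390 ECKD geometry (15 tracks per cylinder, 56,664 bytes per
track) and OpenStack's convention that a gigabyte means 2**30 bytes; the helper
name is ours and is not part of any z/VM or OpenStack API.

.. code-block:: python

    # Assumed 3390 ECKD geometry: 15 tracks per cylinder, 56,664 bytes per track.
    BYTES_PER_CYLINDER = 15 * 56664

    def cylinders_for_gigabytes(gigabytes):
        """Return the number of 3390 cylinders needed for ``gigabytes`` (2**30 bytes each)."""
        size_bytes = gigabytes * 1024 ** 3
        # Round up so the minidisk is at least as large as the requested size.
        return -(-size_bytes // BYTES_PER_CYLINDER)

    print(cylinders_for_gigabytes(1))  # 1264 cylinders for 1 GB
    print(cylinders_for_gigabytes(2))  # 2527 cylinders for 2 GB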

BIN
doc/source/images/arch.jpg Normal file

(Binary image files changed in this commit; contents not shown.)


@@ -35,11 +35,6 @@ zVM drivers consist of a set of drivers/plugins for different OpenStack components, which
enable OpenStack to communicate with the z/VM hypervisor to manage the z/VM system and
virtual machines running on the system.
xCAT is an open source scalable distributed computing management and provisioning
tool that provides a unified interface for hardware control, discovery,
and diskful/diskless OS deployment. For more info, please refer to
http://xcat.org/ and https://github.com/xcat2/xcat-core.
Overview
========
@@ -48,6 +43,7 @@ Overview
topology
support-matrix
cloudlib4zvm
Using the driver
================
@@ -67,6 +63,13 @@ Creating zVM Images
imageguide
activeengine
Continuous integration (CI)
===========================
.. toctree::
:maxdepth: 2
ci
Contributing to the project
===========================


@@ -7,7 +7,7 @@ Topology
Generic concepts and components
-------------------------------
The following picture shows a conceptual view of the relationship between any OpenStack solution and z/VM.
An OpenStack solution is free to run its components wherever it wishes; its options range from running
all components on z/VM, to running some on z/VM and others elsewhere, to running all components on
@@ -20,22 +20,7 @@ process the request. These servers are known collectively as SMAPI. The worker s
the z/VM hypervisor (CP) or with a directory manager. A directory manager is required for this
environment.
Beginning with z/VM version 6.3, additional functionality is provided by integrated xCAT services. xCAT
is an Open Source scalable distributed computing management and provisioning tool that provides a
unified interface for hardware control, discovery, and deployment, including remote access to the SMAPI
APIs. It can be used for the deployment and administration of Linux servers that OpenStack wants to
manipulate. The z/VM drivers in the OpenStack services communicate with xCAT services via REST
APIs to manage the virtual servers.
xCAT is composed of two main services: the xCAT management node (xCAT MN) and ZHCP. Both the
xCAT MN server and the ZHCP server run within the same virtual machine, called the OPNCLOUD
virtual machine. The xCAT MN coordinates creating, deleting and updating virtual servers. The
management node uses a z/VM hardware control point (ZHCP) to communicate with SMAPI to
implement changes on a z/VM host. Only one instance of the xCAT MN is necessary to support multiple
z/VM hosts. Each z/VM host runs one instance of ZHCP. xCAT MN supports both a GUI for human
interaction and REST APIs for use by programs (for example, OpenStack).
Overall architecture
--------------------
.. image:: ./images/arch.jpg