Add warning-is-error in setup.cfg

This patch adds the ``warning-is-error`` flag in setup.cfg so that
documentation builds treat warnings as errors, and also fixes the build
failures that surface once the flag is enabled.

Change-Id: I3bfedc31361584526d6f528b74b0be3993f1ecba
Partial-Bug: #1703442
Madhuri Kumari 2017-07-11 11:26:51 +05:30
parent 6e48d31a72
commit 4b489da4f7
9 changed files with 101 additions and 89 deletions

View File

@@ -22,6 +22,7 @@ sys.path.insert(0, os.path.abspath('../..'))
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
+   'sphinx.ext.graphviz',
    'openstackdocstheme',
]

View File

@@ -31,7 +31,7 @@ etc/apache2/zun.conf
The ``etc/apache2/zun.conf`` file contains example settings that
work with a copy of zun installed via devstack.

-.. literalinclude:: ../../../etc/apache2/zun.conf
+.. literalinclude:: ../../../etc/apache2/zun.conf.template

1. On deb-based systems copy or symlink the file to
   ``/etc/apache2/sites-available``. For rpm-based systems the file will go in

View File

@@ -1,4 +1,4 @@
-.. _dev-quickstart:
+.. _quickstart:

=====================
Developer Quick-Start

View File

@@ -1,7 +0,0 @@
-=====
-Usage
-=====
-
-To use zun in a project::
-
-    import zun

View File

@@ -26,6 +26,7 @@ packages =
source-dir = doc/source
build-dir = doc/build
all_files = 1
+warning-is-error = 1

[upload_sphinx]
upload-dir = doc/build/html
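
For context, ``warning-is-error = 1`` in the ``[build_sphinx]`` section makes the
``build_sphinx`` command treat any Sphinx warning as a build failure. A minimal
sketch of the equivalent behaviour when driving Sphinx directly from Python
(paths taken from the setup.cfg above; this snippet is an illustration, not part
of the patch):

.. code-block:: python

    # Illustrative only: build the docs with warnings treated as errors,
    # mirroring what ``warning-is-error = 1`` asks build_sphinx to do.
    from sphinx.application import Sphinx

    app = Sphinx(
        srcdir='doc/source',             # source-dir in setup.cfg
        confdir='doc/source',
        outdir='doc/build/html',         # under build-dir in setup.cfg
        doctreedir='doc/build/doctrees',
        buildername='html',
        warningiserror=True,             # programmatic warning-is-error switch
    )
    app.build()                          # the first warning aborts the build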

View File

@@ -26,52 +26,54 @@ Problem description
===================

Currently running or deploying one container to do the operation is not a
very effective way in micro services, while multiple different containers run
as an integration has widely used in different scenarios, such as pod in
Kubernetes. The pod has the independent network, storage, while the compose has
an easy way to defining and running multi-container Docker applications. They
are becoming the basic unit for container application scenarios.

Nowadays Zun doesn't support creating and running multiple containers as an
integration. So we will introduce the new Object ``capsule`` to realize this
function. ``capsule`` is the basic unit for zun to support service to external.
The ``capsule`` will be designed based on some similar concepts such as pod and
compose. For example, ``capsule`` can be specified in a yaml file that might be
similar to the format of k8s pod manifest. However, the specification of
``capsule`` will be exclusive to Zun. The details will be showed in the
following section.

Proposed change
===============
A ``capsule`` has the following properties:

* Structure: It can contains one or multiple containers, and has a sandbox
  container which will support the network namespace for the capsule.
* Scheduler: Containers inside a capsule are scheduled as a unit, thus all
  containers inside a capsule is co-located. All containers inside a capsule
  will be launched in one compute host.
* Network: Containers inside a capsule share the same network namespace,
  so they share IP address(es) and can find each other via localhost by using
  different remapping network port. Capsule IP address(es) will re-use the
  sandbox IP. Containers communication between different capsules will use
  capsules IP and port.
* LifeCycle: Capsule has different status:
  Starting: Capsule is created, but one or more container inside the capsule is
  being created.
  Running: Capsule is created, and all the containers are running.
  Finished: All containers inside the capsule have successfully executed and
  exited.
  Failed: Capsule creation is failed
* Restart Policy: Capsule will have a restart policy just like container. The
  restart policy relies on container restart policy to execute.
* Health checker:
  In the first step of realization, container inside the capsule will send its
  status to capsule when its status changed.
* Upgrade and rollback:
  Upgrade: Support capsule update(different from zun update). That means the
  container image will update, launch the new capsule from new image, then
  destroy the old capsule. The capsule IP address will change. For Volume, need
  to clarify it after Cinder integration.
  Rollback: When update failed, rollback to it origin status.
* CPU and memory resources: Given that host resource allocation, cpu and memory
  support will be implemented.

Implementation:

@@ -81,23 +83,23 @@ Implementation:
   and cgroups.
2. Support the CRUD operations against capsule object, capsule should be a
   basic unit for scheduling and spawning. To be more specific, all containers
   in a capsule should be scheduled to and spawned on the same host. Server
   side will keep the information in DB.
3. Add functions about yaml file parser in the CLI side. After parsing the
   yaml, send the REST to API server side, scheduler will decide which host
   to run the capsule.
4. Introduce new REST API for capsule. The capsule creation workflow is:
   CLI Parsing capsule information from yaml file -->
   API server do the CRUD operation, call scheduler to launch the capsule, from
   Cinder to get volume, from Kuryr to get network support-->
   Compute host launch the capsule, attach the volume-->
   Send the status to API server, update the DB.
5. Capsule creation will finally depend on the backend container driver. Now
   choose Docker driver first.
6. Define a yaml file structure for capsule. The yaml file will be compatible
   with Kubernetes pod yaml file, at the same time Zun will define the
   available properties, metadata and template of the yaml file. In the first
   step, only essential properties will be defined.

The diagram below offers an overview of the architecture of ``capsule``:

@@ -129,6 +131,7 @@ Yaml format for ``capsule``:
Sample capsule:

.. code-block:: yaml

  apiVersion: beta
  kind: capsule
  metadata:

@@ -163,7 +166,7 @@ Sample capsule:
      cpu: 1
      memory: 2GB
  volumes:
  - name: volume1
    drivers: cinder
    driverOptions: options
    size: 5GB

@@ -183,14 +186,16 @@ ObjectMeta fields:
* lables(dict, name: string): labels for capsule

CapsuleSpec fields:

* containers(Containers array): containers info array, one capsule have
  multiple containers
* volumes(Volumes array): volume information

Containers fields:

* name(string): name for container
* image(string): container image for container
* imagePullPolicy(string): [Always | Never | IfNotPresent]
* imageDriver(string): glance or dockerRegistory, by default is according to
  zun configuration
* command(string): container command when starting
* args(string): container args for the command
* workDir(string): workDir for the container
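
To make the intended workflow concrete (implementation step 3 above: the CLI
parses the yaml file and sends the result to the API server), here is a
minimal, hypothetical sketch of that client-side parsing. The field names come
from the spec excerpt above; the helper itself is illustrative and not part of
this change:

.. code-block:: python

    # Illustrative sketch: load a capsule manifest and pull out the pieces a
    # CLI would hand to the Zun API server. Key names follow the spec above.
    import yaml

    def load_capsule_manifest(path):
        with open(path) as f:
            manifest = yaml.safe_load(f)

        if manifest.get('kind') != 'capsule':
            raise ValueError('not a capsule manifest: kind=%r'
                             % manifest.get('kind'))

        metadata = manifest.get('metadata', {})
        spec = manifest.get('spec', {})
        return {
            'name': metadata.get('name'),
            'labels': metadata.get('labels', {}),
            'containers': spec.get('containers', []),  # Containers array
            'volumes': spec.get('volumes', []),        # Volumes array
        }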

@@ -223,20 +228,22 @@ Volumes fields:
* driver(string): volume drivers
* driverOptions(string): options for volume driver
* size(string): volume size
* volumeType(string): volume type that cinder need. by default is from
  cinder config
* image(string): cinder needed to boot from image

Alternatives
------------
1. Abstract all the information from yaml file and implement the capsule CRUD
   in client side.
2. Implement the CRUD in server side.

Data model impact
-----------------
* Add a field to container to store the id of the capsule which include the
  container
* Create a 'capsule' table. Each entry in this table is a record of a capsule.

.. code-block:: python

@@ -277,29 +284,32 @@ REST API impact
---------------
* Add a new API endpoint /capsule to the REST API interface.
* Capsule API: Capsule consider to support multiple operations as container
  composition.
* Container API: Many container API will be extended to capsule. Here in this
  section will define the API usage range.

Capsule API:

  list      <List all the capsule, add parameters about list capsules
            with the same labels>
  create    <-f yaml file><-f directory>
  describe  <display the details state of one or more resource>
  delete    <capsule name>
            <-l name=label-name>
            <all>
  run       <--capsule ... container-image>
            If "--capsule .." is set, the container will be created inside
            the capsule. Otherwise, it will be created as normal.

Container API:

  * show/list allow all containers
  * create/delete allow bare container only (disallow in-capsule containers)
  * attach/cp/logs/top allow all containers
  * start/stop/restart/kill/pause/unpause allow bare container only (disallow
    in-capsule containers)
  * update for container in the capsule, need <--capsule> params.
    Bare container doesn't need.
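
As a concrete illustration of the proposed endpoint, a hedged sketch of a
"create capsule from yaml" call. The /capsule path and the yaml-driven create
come from the spec above; the host, port, token handling and the payload key
name are assumptions, not part of the spec:

.. code-block:: python

    # Illustrative only: POST an already-parsed capsule manifest (a dict) to
    # the proposed /capsule endpoint. Endpoint URL, token and the 'template'
    # key are assumed for the sketch.
    import json
    import urllib.request

    def create_capsule(manifest, zun_endpoint='http://zun-api:9517/v1',
                       token='ADMIN_TOKEN'):
        req = urllib.request.Request(
            url=zun_endpoint + '/capsule',
            data=json.dumps({'template': manifest}).encode('utf-8'),
            headers={'Content-Type': 'application/json',
                     'X-Auth-Token': token},
            method='POST',
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode('utf-8'))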

Security impact
---------------

View File

@@ -26,10 +26,10 @@ Proposed change
     zun commit <container-name> <image-name>
     # zun help commit
     usage: zun commit <container-name> <image-name>
     Create a new image by taking a snapshot of a running container.
     Positional arguments:
       <container-name>  Name or ID of container.
       <image-name>      Name of snapshot.
2. Extend docker driver to enable “docker commit” command to create a
   new image.
3. The new image should be accessable from other hosts. There are two

@@ -59,27 +59,30 @@ Creates an image from a container.
Specify the image name in the request body.

After making this request, a user typically must keep polling the status of the
created image from glance to determine whether the request succeeded.

If the operation succeeds, the created image has a status of active. User can
also see the new image in the image back end that OpenStack Image service
manages.

Preconditions:

1. The container must exist.
2. User can only create a new image from the container when its status is
   Running, Stopped and Paused.
3. The connection to the Image service is valid.

POST /containers/<ID>/commit: commit a container

Example commit

    {
        "image-name" : "foo-image"
    }

Response:

If successful, this method does not return content in the response body.
Normal response codes: 202
Error response codes: badRequest(400), unauthorized(401), forbidden(403),
itemNotFound(404)
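
A minimal sketch of exercising the proposed call. The path, request body and
the 202 response come from the spec above; the host, port and token handling
are assumptions:

.. code-block:: python

    # Illustrative only: commit a container to a new image via the proposed
    # endpoint; the image status must then be polled from glance separately.
    import json
    import urllib.request

    def commit_container(container_id, image_name,
                         zun_endpoint='http://zun-api:9517/v1',
                         token='ADMIN_TOKEN'):
        req = urllib.request.Request(
            url='%s/containers/%s/commit' % (zun_endpoint, container_id),
            data=json.dumps({'image-name': image_name}).encode('utf-8'),
            headers={'Content-Type': 'application/json',
                     'X-Auth-Token': token},
            method='POST',
        )
        # A successful commit returns 202 with an empty body.
        with urllib.request.urlopen(req) as resp:
            return resp.status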

Security impact
===============

View File

@@ -71,20 +71,23 @@ host.
How it works internally?

Once the user specifies the number of cpus, we would try to select a numa node
that has the same or more number of cpusets unpinned that can satisfy the
request.

Once the cpusets are determined by the scheduler and it's corresponding numa
node, a driver method should be called for the actual provisoning of the
request on the compute node. Corresponding updates would be made to the
inventory table.

In case of the docker driver - this can be achieved by a docker run equivalent:

    docker run -d ubuntu --cpusets-cpu="1,3" --cpuset-mems="1,3"

The cpuset-mems would allow the memory access for the cpusets to stay
localized.

If the container is in paused/stopped state, the DB will still continue to
block the pinset information for the container instead of releasing it.
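
A hedged sketch of what that driver call could look like with the Docker SDK
for Python (the actual Zun driver plumbing is not shown in this hunk; the
cpuset values are just examples):

.. code-block:: python

    # Illustrative only: pin a container to cpusets 1,3 and keep its memory on
    # the same NUMA nodes, the docker-py equivalent of the docker run above.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        'ubuntu',
        detach=True,           # -d
        cpuset_cpus='1,3',     # CPUs the container may execute on
        cpuset_mems='1,3',     # memory nodes, keeps access localized
    )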

Design Principles

View File

@@ -99,7 +99,8 @@ not true, users need to manually create the resources.
container by using the IP address(es) of the neutron port. This is
equivalent to:

    $ docker run --net=foo kubernetes/pause --ip <ipv4_address> \
        --ip6 <ipv6_address>

NOTE: In this step, docker engine will make a call to Kuryr to setup the
networking of the container. After receiving the request from Docker, Kuryr