Updated Magnum documentation

Magnum documentation updated to reflect current usage. Cleanups for
syntax and readability applied as needed.

DocImpact
Closes-Bug: #1472029
Change-Id: If3d8660e6763083544529f469be02eec73e9fd0c
Author: Martin Falatic, 2015-07-07 11:12:05 -07:00
Parent: ae00d5b574
Commit: a62d6bac63
8 changed files with 340 additions and 268 deletions

Magnum
======

Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.

For more information, please refer to the following resources:

* **Free software:** under the `Apache license <http://www.apache.org/licenses/LICENSE-2.0>`_
* **Documentation:** http://docs.openstack.org/developer/magnum
* **Source:** http://git.openstack.org/cgit/openstack/magnum
* **Blueprints:** https://blueprints.launchpad.net/magnum
* **Bugs:** http://bugs.launchpad.net/magnum
* **ReST Client:** http://git.openstack.org/cgit/openstack/python-magnumclient

====================
DevStack Integration
====================

This directory contains the files necessary to integrate magnum with devstack.

Refer to the quickstart guide at
http://docs.openstack.org/developer/magnum/dev/dev-quickstart.html
for more information on using devstack and magnum.

Running devstack with magnum for the first time may take a long time as it
needs to download the Fedora Atomic micro-OS qcow2 image (e.g.,
``fedora-21-atomic-3.qcow2``). If you already have this image you can copy it
to /opt/stack/devstack/files first to save time.
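Sketched concretely, the copy is a one-liner; the paths below are illustrative
stand-ins so the steps can be tried anywhere, not the real devstack tree:

```shell
# Copy a previously downloaded image into devstack's files directory so
# stack.sh skips the download. Paths here are illustrative stand-ins.
IMAGE_CACHE=/tmp/image-cache
DEVSTACK_FILES=/tmp/devstack/files   # normally /opt/stack/devstack/files

mkdir -p ${IMAGE_CACHE} ${DEVSTACK_FILES}
touch ${IMAGE_CACHE}/fedora-21-atomic-3.qcow2   # stand-in for the real image

cp ${IMAGE_CACHE}/fedora-21-atomic-3.qcow2 ${DEVSTACK_FILES}/
ls ${DEVSTACK_FILES}
```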
To install magnum into devstack, add the following settings to enable the
magnum plugin::

    cat > /opt/stack/devstack/local.conf << END
    [[local|localrc]]
    enable_plugin magnum https://github.com/openstack/magnum master
    END

Then run devstack normally::

    cd /opt/stack/devstack
    ./stack.sh

.. _dev-manual-install:

==================================
Manually Adding Magnum to DevStack
==================================

If you are getting started with magnum it is recommended you follow the
:ref:`dev-quickstart` to get up and running with magnum. This guide covers
a more in-depth process to set up magnum with devstack.

Magnum depends on nova, glance, heat, and neutron to create and schedule
virtual machines to simulate bare-metal. Full bare-metal support
is still under active development.

This setup has only been tested on Ubuntu 14.04 (Trusty) and Fedora 20/21.
We recommend using one of these if possible.
Clone devstack::

    cd ~
    git clone https://git.openstack.org/openstack-dev/devstack

Configure devstack with the minimal settings required to enable heat
and neutron::

    cd devstack
    cat > local.conf << END
    [[local|localrc]]
    # Modify to your environment
    FLOATING_RANGE=192.168.1.224/27
    PUBLIC_NETWORK_GATEWAY=192.168.1.225
    ...
    enable_service rabbit

    # Ensure we are using neutron networking rather than nova networking
    # (Neutron is enabled by default since Kilo)
    disable_service n-net
    enable_service q-svc
    enable_service q-agt
    ...
    enable_service q-meta
    enable_service neutron

    # Enable heat services
    enable_service h-eng
    enable_service h-api
    enable_service h-api-cfn
    ...
    VOLUME_BACKING_FILE_SIZE=20G
    END

Note: Update PUBLIC_INTERFACE and other parameters as appropriate for your
system.

More devstack configuration information can be found at
http://docs.openstack.org/developer/devstack/configuration.html

More neutron configuration information can be found at
http://docs.openstack.org/developer/devstack/guides/neutron.html

Create a local.sh to automatically make necessary networking changes during
the devstack deployment process. This will allow bays spawned by magnum to
access the internet through PUBLIC_INTERFACE::

    cat > local.sh << END_LOCAL_SH
    #!/bin/sh
    sudo iptables -t nat -A POSTROUTING -o br-ex -j MASQUERADE
    END_LOCAL_SH
    chmod 755 local.sh

Run devstack::

    ./stack.sh
Note: If using the m-1 tag or tarball, please use the documentation shipped
with the milestone as the current master instructions are slightly
incompatible.

Prepare your session to be able to use the various openstack clients including
magnum, neutron, and glance. Create a new shell, and source the devstack openrc
script::

    source ~/devstack/openrc admin admin

Magnum has been tested with the Fedora Atomic micro-OS and CoreOS. Magnum will
likely work with other micro-OS platforms, but each requires individual
support in the heat template.

Store the Fedora Atomic micro-OS in glance. (The steps for updating Fedora
Atomic images are a bit detailed. Fortunately one of the core developers has
made Atomic images available at https://fedorapeople.org/groups/magnum)::

    cd ~
    wget https://fedorapeople.org/groups/magnum/fedora-21-atomic-3.qcow2
    glance image-create --name fedora-21-atomic-3 \
                        --disk-format qcow2 \
                        --property os_distro='fedora-atomic' \
                        --container-format bare < fedora-21-atomic-3.qcow2

Create a keypair for use with the baymodel::

    test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey

Create a database in MySQL for magnum::

    mysql -h 127.0.0.1 -u root -ppassword mysql <<EOF
    CREATE DATABASE IF NOT EXISTS magnum DEFAULT CHARACTER SET utf8;
    GRANT ALL PRIVILEGES ON magnum.* TO
    'root'@'%' IDENTIFIED BY 'password'
    EOF

Clone and install magnum::

    cd ~
    git clone https://git.openstack.org/openstack/magnum
    cd magnum
    sudo pip install -e .

Configure magnum::

    # create the magnum conf directory
    sudo mkdir -p /etc/magnum
    ...
    sudo sed -i "s/#verbose\s*=.*/verbose=true/" /etc/magnum/magnum.conf

    # set RabbitMQ userid
    sudo sed -i "s/#rabbit_userid\s*=.*/rabbit_userid=stackrabbit/" \
        /etc/magnum/magnum.conf

    # set RabbitMQ password
    sudo sed -i "s/#rabbit_password\s*=.*/rabbit_password=password/" \
        /etc/magnum/magnum.conf

    # set SQLAlchemy connection string to connect to MySQL
    sudo sed -i "s/#connection\s*=.*/connection=mysql:\/\/root:password@localhost\/magnum/" \
        /etc/magnum/magnum.conf

    # set Keystone account username
    sudo sed -i "s/#admin_user\s*=.*/admin_user=admin/" \
        /etc/magnum/magnum.conf

    # set Keystone account password
    sudo sed -i "s/#admin_password\s*=.*/admin_password=password/" \
        /etc/magnum/magnum.conf

    # set admin Identity API endpoint
    sudo sed -i "s/#identity_uri\s*=.*/identity_uri=http:\/\/127.0.0.1:35357/" \
        /etc/magnum/magnum.conf

    # set public Identity API endpoint
    sudo sed -i "s/#auth_uri\s*=.*/auth_uri=http:\/\/127.0.0.1:5000\/v2.0/" \
        /etc/magnum/magnum.conf
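Each sed line above uncomments a single option and sets its value in place. As
a quick sanity check, the same pattern can be exercised on a throwaway
stand-in for magnum.conf (the file path and contents below are illustrative;
the ``\s`` escape assumes GNU sed, as found on the distributions named above):

```shell
# Build a tiny stand-in for magnum.conf's commented defaults
cat > /tmp/magnum-sample.conf << 'EOF'
#rabbit_userid = guest
#rabbit_password = guest
EOF

# Apply the same substitution pattern used above
sed -i "s/#rabbit_userid\s*=.*/rabbit_userid=stackrabbit/" /tmp/magnum-sample.conf
sed -i "s/#rabbit_password\s*=.*/rabbit_password=password/" /tmp/magnum-sample.conf

cat /tmp/magnum-sample.conf
```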
Clone and install the magnum client::

    cd ~
    git clone https://git.openstack.org/openstack/python-magnumclient
    cd python-magnumclient
    sudo pip install -e .

Configure the database for use with magnum::

    magnum-db-manage upgrade

Configure the keystone endpoint::

    keystone service-create --name=magnum \
                            --type=container \
                            --description="magnum Container Service"
    keystone endpoint-create --service=magnum \
                             --publicurl=http://127.0.0.1:9511/v1 \
                             --internalurl=http://127.0.0.1:9511/v1 \
                             --adminurl=http://127.0.0.1:9511/v1 \
                             --region RegionOne

Start the API service in a new screen::

    magnum-api

Start the conductor service in a new screen::

    magnum-conductor

Magnum should now be up and running!

Further details on utilizing magnum and deploying containers can be found in
the guide :ref:`dev-quickstart`.

=====================
Developer Quick-Start
=====================

This is a quick walkthrough to get you started developing code for magnum.
This assumes you are already familiar with submitting code reviews to an
OpenStack project.

.. seealso::

   ...
Setup Dev Environment
=====================

Install OS-specific prerequisites::

    # Ubuntu/Debian:
    sudo apt-get update
    ...
    python-testrepository python-tox python-virtualenv \
    gettext-runtime

Install common prerequisites::

    sudo pip install virtualenv setuptools-git flake8 tox testrepository

Note: If using RHEL and yum reports "No package python-pip available" and "No
package git-review available", use the EPEL software repository. Instructions
can be found at the http://fedoraproject.org/wiki/EPEL/FAQ#howtouse page.

You may need to explicitly upgrade virtualenv if you've installed the one
from your OS distribution and it is too old (tox will complain). You can
upgrade it with pip.

Magnum source code should be pulled directly from git::

    git clone https://git.openstack.org/openstack/magnum
    cd magnum

Set up a local environment for development and testing with tox::

    # create a virtualenv for development
    tox -evenv -- python -V

All further commands in this section should be run with the venv active::

    source .tox/venv/bin/activate

All unit tests should be run using tox. To run magnum's entire test suite::

    # run all tests (unit and pep8)
    tox

When you're done, deactivate the virtualenv::

    deactivate

To discover and interact with templates, please refer to
`<http://git.openstack.org/cgit/openstack/magnum/tree/contrib/templates/example/README.rst>`_
Exercising the Services Using Devstack
======================================

Devstack can be configured to enable magnum support. It is easy to develop
magnum with the devstack environment. Magnum depends on nova, glance, heat and
neutron to create and schedule virtual machines to simulate bare-metal (full
bare-metal support is under active development).

Note: Running devstack within a virtual machine with magnum enabled is not
recommended at this time.

This setup has only been tested on Ubuntu 14.04 (Trusty) and Fedora 20/21.
We recommend using one of these if possible.
Clone devstack::

    # Create a root directory for devstack if needed
    sudo mkdir -p /opt/stack
    sudo chown $USER /opt/stack

    git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

We will run devstack with minimal local.conf settings required to enable
magnum, heat, and neutron (neutron is enabled by default in devstack since
Kilo, and heat is enabled by the magnum plugin)::

    cat > /opt/stack/devstack/local.conf << END
    [[local|localrc]]
    DATABASE_PASSWORD=password
    RABBIT_PASSWORD=password
    SERVICE_TOKEN=password
    SERVICE_PASSWORD=password
    ADMIN_PASSWORD=password
    # magnum requires the following to be set correctly
    PUBLIC_INTERFACE=eth1
    enable_plugin magnum https://github.com/openstack/magnum
    VOLUME_BACKING_FILE_SIZE=20G
    END

Note: Update PUBLIC_INTERFACE as appropriate for your system.

More devstack configuration information can be found at
http://docs.openstack.org/developer/devstack/configuration.html

More neutron configuration information can be found at
http://docs.openstack.org/developer/devstack/guides/neutron.html

Create a local.sh to automatically make necessary networking changes during
the devstack deployment process. This will allow bays spawned by magnum to
access the internet through PUBLIC_INTERFACE::

    cat > /opt/stack/devstack/local.sh << END_LOCAL_SH
    #!/bin/sh
    sudo iptables -t nat -A POSTROUTING -o br-ex -j MASQUERADE
    END_LOCAL_SH
    chmod 755 /opt/stack/devstack/local.sh

Run devstack::

    cd /opt/stack/devstack
    ./stack.sh
Note: This will take a little extra time when the Fedora Atomic micro-OS
image is downloaded for the first time.

At this point, two magnum processes (magnum-api and magnum-conductor) will be
running on devstack screens. If you make some code changes and want to
test their effects, just stop and restart magnum-api and/or magnum-conductor.

Prepare your session to be able to use the various openstack clients including
magnum, neutron, and glance. Create a new shell, and source the devstack openrc
script::

    source /opt/stack/devstack/openrc admin admin

Magnum has been tested with the Fedora Atomic micro-OS and CoreOS. Magnum will
likely work with other micro-OS platforms, but each requires individual
support in the heat template.

The Fedora Atomic micro-OS image will automatically be added to glance. You
can add additional images manually through glance. To verify the image created
when installing devstack use::

    glance image-list

    +--------------------------------------+--------------------+-------------+------------------+-----------+--------+
    | ID                                   | Name               | Disk Format | Container Format | Size      | Status |
    +--------------------------------------+--------------------+-------------+------------------+-----------+--------+
    | 02c312e3-2d30-43fd-ab2d-1d25622c0eaa | fedora-21-atomic-3 | qcow2       | bare             | 770179072 | active |
    +--------------------------------------+--------------------+-------------+------------------+-----------+--------+
To list the available commands and resources for magnum, use::

    magnum help

Create a keypair for use with the baymodel::

    test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey
Create a baymodel. This is similar in nature to a flavor and describes
to magnum how to construct the bay. The coe (Container Orchestration Engine)
and keypair need to be specified for the baymodel::

    NIC_ID=$(neutron net-show public | awk '/ id /{print $4}')
    echo ${NIC_ID}

    magnum baymodel-create --name k8sbaymodel \
                           --image-id fedora-21-atomic-3 \
                           --keypair-id testkey \
                           --external-network-id ${NIC_ID} \
                           --dns-nameserver 8.8.8.8 \
                           --flavor-id m1.small \
                           --docker-volume-size 5 \
                           --coe kubernetes
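The NIC_ID line scrapes the ``id`` row out of the table that neutron prints.
A minimal illustration of that awk pattern against a fabricated excerpt of the
table (the UUID below is invented):

```shell
# Fabricated excerpt of 'neutron net-show public' output
cat > /tmp/net-show-sample.txt << 'EOF'
+-----------+--------------------------------------+
| Field     | Value                                |
+-----------+--------------------------------------+
| id        | 11111111-2222-3333-4444-555555555555 |
| name      | public                               |
+-----------+--------------------------------------+
EOF

# ' id ' matches only the id row; $4 is the fourth whitespace-separated
# field: pipe, 'id', pipe, then the value
NIC_ID=$(awk '/ id /{print $4}' /tmp/net-show-sample.txt)
echo ${NIC_ID}
```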
Create a bay. Use the baymodel name as a template for bay creation.
This bay will result in one master kubernetes node and one minion node::
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1 magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1
The existing bays can be listed as follows::
magnum bay-list
Bays will have an initial status of CREATE_IN_PROGRESS. Magnum will update Bays will have an initial status of CREATE_IN_PROGRESS. Magnum will update
the status to CREATE_COMPLETE when it is done creating the bay. Do not create the status to CREATE_COMPLETE when it is done creating the bay. Do not create
containers, pods, services, or replication controllers before Magnum finishes containers, pods, services, or replication controllers before magnum finishes
creating the bay. They will likely not be created, causing Magnum to become creating the bay. They will likely not be created, and may cause magnum to
confused. become confused.
The existing bays can be listed as follows::
magnum bay-list magnum bay-list
@ -227,41 +233,69 @@ confused.
| 9dccb1e6-02dc-4e2b-b897-10656c5339ce | k8sbay | 1 | CREATE_COMPLETE | | 9dccb1e6-02dc-4e2b-b897-10656c5339ce | k8sbay | 1 | CREATE_COMPLETE |
+--------------------------------------+---------+------------+-----------------+ +--------------------------------------+---------+------------+-----------------+
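Waiting for CREATE_COMPLETE can be scripted as a simple poll loop. A sketch,
where ``bay_status`` is a hypothetical stub standing in for parsing the real
``magnum bay-list`` output:

```shell
# Hypothetical stub: with a real deployment this would extract the status
# column from 'magnum bay-list' for the bay in question.
bay_status() { echo "CREATE_COMPLETE"; }

# Poll until the bay leaves the IN_PROGRESS state
until bay_status | grep -Eq 'COMPLETE|FAILED'; do
    sleep 10
done
echo "bay finished with status: $(bay_status)"
```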
More detailed information for a given bay is obtained via::

    magnum bay-show k8sbay

After a bay is created, you can dynamically add/remove node(s) to/from the bay
by updating the node_count attribute. For example, to add one more node::

    magnum bay-update k8sbay replace node_count=2

Bays in the process of updating will have a status of UPDATE_IN_PROGRESS.
Magnum will update the status to UPDATE_COMPLETE when it is done updating
the bay.

Note: Reducing node_count will remove all the existing containers on the
nodes that are deleted.

Heat can be used to see detailed information on the status of a stack or
specific bay::

    heat stack-list

Monitoring bay status in detail (e.g., creating, updating)::

    BAY_HEAT_NAME=$(heat stack-list | awk "/\sk8sbay-/{print \$4}")
    echo ${BAY_HEAT_NAME}
    heat resource-list ${BAY_HEAT_NAME}

A bay can be deleted as follows::

    magnum bay-delete k8sbay

Note: If you choose to reduce the node_count, magnum will first try to remove
empty nodes with no containers running on them. If you reduce node_count by
more than the number of empty nodes, magnum must remove nodes that have running
containers on them. This action will delete those containers. We strongly
recommend using a replication controller before reducing the node_count so
any removed containers can be automatically recovered on your remaining nodes.
Using Kubernetes
================

Kubernetes provides a number of examples you can use to check that things are
working. You may need to clone kubernetes using::

    wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.15.0/kubernetes.tar.gz
    tar -xvzf kubernetes.tar.gz

Note: We do not need to install Kubernetes, we just need the example files
from the tarball.

Here's how to set up the replicated redis example. First, create
a pod for the redis-master::

    cd kubernetes/examples/redis/v1beta3
    magnum pod-create --manifest ./redis-master.yaml --bay k8sbay

Now create a service to provide a discoverable endpoint for the redis
sentinels in the cluster::

    magnum service-create --manifest ./redis-sentinel-service.yaml --bay k8sbay
To make it a replicated redis cluster create replication controllers for the
redis slaves and sentinels::
    sed -i 's/\(replicas: \)1/\1 2/' redis-controller.yaml
    magnum rc-create --manifest ./redis-controller.yaml --bay k8sbay

    sed -i 's/\(replicas: \)1/\1 2/' redis-sentinel-controller.yaml
    magnum rc-create --manifest ./redis-sentinel-controller.yaml --bay k8sbay
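To see exactly what that `sed` substitution rewrites, here is a self-contained sketch against a throwaway manifest (the file path is a placeholder, not one of the real example manifests):

```shell
# Throwaway manifest containing only the field the real command rewrites
printf 'replicas: 1\n' > /tmp/demo-redis-controller.yaml

# Same substitution as above: capture "replicas: " as \1 and append " 2"
sed -i 's/\(replicas: \)1/\1 2/' /tmp/demo-redis-controller.yaml

# The replacement text "\1 2" carries its own space, so the result is
# "replicas:  2" (two spaces after the colon) -- still valid YAML
cat /tmp/demo-redis-controller.yaml
```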
Full lifecycle and introspection operations for each object are supported.
For example, magnum bay-create, magnum baymodel-delete, magnum rc-show,
magnum service-list.
Now run the bay-show command to get the IP of the bay host on which the
redis-master is running::

    $ magnum bay-show k8sbay
    +----------------+--------------------------------------+
    | Property       | Value                                |
    +----------------+--------------------------------------+
    ...
    | name           | k8sbay                               |
    +----------------+--------------------------------------+
The output indicates the redis-master is running on the bay host with IP
address 192.168.19.86. To access the redis master::
    ssh minion@192.168.19.86
    REDIS_ID=$(sudo docker ps | grep redis:v1 | grep k8s_master | awk '{print $1}')
    ...
    exit
Log into one of the other container hosts and access a redis slave from it::
    ssh minion@$(nova list | grep 10.0.0.4 | awk '{print $13}')
    REDIS_ID=$(sudo docker ps | grep redis:v1 | grep k8s_redis | tail -n +2 | awk '{print $1}')
    ...
    exit
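The `docker ps | grep | tail | awk` pipeline above just picks a container ID out of tabular output; a self-contained sketch with fabricated `docker ps` lines (the IDs and names are made up) behaves the same way:

```shell
# Stand-in for `sudo docker ps` output; field 1 is the container ID
docker_ps_sample() {
    printf 'f00dfeed0001  kubernetes/redis:v1  "redis-server"  k8s_redis.a\n'
    printf 'f00dfeed0002  kubernetes/redis:v1  "redis-server"  k8s_redis.b\n'
}

# Same pipeline as in the text: tail -n +2 skips the first matching row,
# so a different slave container is selected than the first one listed
REDIS_ID=$(docker_ps_sample | grep redis:v1 | grep k8s_redis | tail -n +2 | awk '{print $1}')
echo "${REDIS_ID}"
```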
Now there are four redis instances (one master and three slaves) running
across the bay, replicating data between one another.
Building and Using a Swarm Bay
==============================
Create a baymodel. It is very similar to the Kubernetes baymodel, except for
the absence of some Kubernetes-specific arguments and the use of 'swarm'
as the coe::
    NIC_ID=$(neutron net-show public | awk '/ id /{print $4}')
    magnum baymodel-create --name swarmbaymodel \
                           --image-id fedora-21-atomic-3 \
                           --keypair-id testkey \
                           --external-network-id ${NIC_ID} \
                           --dns-nameserver 8.8.8.8 \
                           --flavor-id m1.small \
                           --coe swarm
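The `NIC_ID` line relies on `awk` plucking the network UUID out of the `neutron net-show` table; a sketch with a fabricated table (the UUID is made up) shows the extraction:

```shell
# Stand-in for the tabular output of `neutron net-show public`
net_show_sample() {
    printf '+-----------+--------------------------------------+\n'
    printf '| Field     | Value                                |\n'
    printf '+-----------+--------------------------------------+\n'
    printf '| id        | 28dc1f08-aaaa-bbbb-cccc-ddddeeee0000 |\n'
    printf '+-----------+--------------------------------------+\n'
}

# Same awk program as above: match the " id " row and print column 4,
# which is the value cell between the "|" separators
NIC_ID=$(net_show_sample | awk '/ id /{print $4}')
echo "${NIC_ID}"
```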
Finally, create the bay. Use the baymodel 'swarmbaymodel' as a template for
bay creation. This bay will result in one swarm manager node and two extra
agent nodes::
    magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 2
Now that we have a swarm bay we can start interacting with it::

    $ magnum bay-show swarmbay
    +---------------+------------------------------------------+
    | Property      | Value                                    |
    +---------------+------------------------------------------+
    ...
    +---------------+------------------------------------------+
Next we will create a container in this bay. This container will ping the
address 8.8.8.8 four times::
    magnum container-create --name testcontainer \
                            --image cirros \
                            --bay swarmbay \
                            --command "ping -c 4 8.8.8.8"
    +------------+----------------------------------------+
    | Property   | Value                                  |
    +------------+----------------------------------------+
    ...
    | name       | test-container                         |
    +------------+----------------------------------------+
At this point the container exists but it has not been started yet. To start
it and check its output run the following::

    magnum container-start test-container
    magnum container-logs test-container
    PING 8.8.8.8 (8.8.8.8): 56 data bytes
    64 bytes from 8.8.8.8: seq=0 ttl=40 time=25.513 ms
    64 bytes from 8.8.8.8: seq=1 ttl=40 time=25.348 ms
    ...
    4 packets transmitted, 4 packets received, 0% packet loss
    round-trip min/avg/max = 25.226/25.340/25.513 ms
Now that we're done with the container we can delete it::

    magnum container-delete test-container
Building Developer Documentation
================================
To build the documentation locally (e.g., to test documentation changes
before uploading them for review) chdir to the magnum root folder and
run tox::

    tox -edocs
Note: The first time you run this will take some extra time as it
creates a virtual environment to run in.

When complete, the documentation can be accessed from::

    doc/build/html/index.html
..
    Copyright 2014-2015 OpenStack Foundation
    All Rights Reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may
    not use this file except in compliance with the License. You may obtain
    a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    License for the specific language governing permissions and limitations
    under the License.
============================================
Welcome to Magnum's Developer Documentation!
============================================
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
* **Free software:** under the `Apache license <http://www.apache.org/licenses/LICENSE-2.0>`_
* **Source:** http://git.openstack.org/cgit/openstack/magnum
* **Blueprints:** https://blueprints.launchpad.net/magnum
* **Bugs:** http://bugs.launchpad.net/magnum
* **ReST Client:** http://git.openstack.org/cgit/openstack/python-magnumclient
Architecture
============
There are several different types of objects in the magnum system:
* **Bay:** A collection of node objects where work is scheduled
* **BayModel:** An object that stores template information about the bay
  which is used to create new bays consistently
* **Node:** A baremetal or virtual machine where work executes
* **Pod:** A collection of containers running on one physical or virtual
  machine
* **Service:** An abstraction which defines a logical set of pods and a policy
  by which to access them
* **ReplicationController:** An abstraction for managing a group of pods to
  ensure a specified number of resources are running
* **Container:** A Docker container
Two binaries work together to compose the magnum system. The first binary
(accessed by the python-magnumclient code) is the magnum-api ReST server. The
ReST server may run as one process or multiple processes. When a ReST request
arrives at the API, it is relayed via AMQP to the magnum-conductor process.
The ReST server is horizontally scalable. At this time, the conductor is
limited to one process, but we intend to add horizontal scalability to the
conductor as well.
The magnum-conductor process runs on a controller machine and connects to a
Kubernetes or Docker ReST API endpoint. The Kubernetes and Docker ReST API
endpoints are managed by the bay object.
When service or pod objects are created, Kubernetes may be directly contacted
via the Kubernetes ReST API. When container objects are acted upon, the
Docker ReST API may be directly contacted.
Features
========
* Abstractions for bays, containers, nodes, pods, replication controllers, and
  services
* Integration with Kubernetes and Docker for backend container technology
* Integration with Keystone for multi-tenant security
* Integration with Neutron for Kubernetes multi-tenancy network security
Developer Info
==============
.. toctree::
   :maxdepth: 1

   dev/dev-quickstart
   dev/dev-manual-devstack
   contributing