Remove Swarm documentation

Swarm is deprecated. Remove all documentation for swarm to reflect
that.

Also fix up the image used, from fedora-atomic to fedora-coreos

Change-Id: I67fa19bf5637e61464e682e7787b795b7604d569
Jake Yip 2023-04-28 18:11:23 +10:00 committed by Jake Yip
parent 71ede8257c
commit df5bb49bf2
4 changed files with 15 additions and 521 deletions


@@ -32,10 +32,6 @@ Kubernetes cluster-create fails
Check the `heat stacks`_, log into the master nodes and check the
`Kubernetes services`_ and `etcd service`_.
Swarm cluster-create fails
Check the `heat stacks`_, log into the master nodes and check the `Swarm
services`_ and `etcd service`_.
I get the error "Timed out waiting for a reply" when deploying a pod
Verify the `Kubernetes services`_ and `etcd service`_ are running on the
master nodes.
@@ -50,9 +46,6 @@ I deploy pods and services on a Kubernetes cluster but the app is not working
if the app is performing communication between pods through services,
verify `Kubernetes networking`_.
Swarm cluster is created successfully but I cannot deploy containers
Check the `Swarm services`_ and `etcd service`_ on the master nodes.
I get a "Protocol violation" error when deploying a container
For Kubernetes, check the `Kubernetes services`_ to verify that
kube-apiserver is running to accept the request.
@@ -97,8 +90,8 @@ in some cases it may only say "Unknown".
If the failed resource is OS::Heat::WaitConditionHandle, this indicates that
one of the services that are being started on the node is hung. Log into the
node where the failure occurred and check the respective `Kubernetes
services`_ or `Swarm services`_. If the failure is in
other scripts, look for them as `Heat software resource scripts`_.
services`_. If the failure is in other scripts, look for them as `Heat
software resource scripts`_.
Trustee for cluster
@@ -667,13 +660,6 @@ Additional `Kubernetes troubleshooting section
<https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/>`_
is available in the Monitoring, Logging, and Debugging section.
Swarm services
--------------
*To be filled in*
(How to check on a swarm cluster: see membership information, view master,
agent containers)
Barbican issues
---------------


@@ -587,126 +587,6 @@ deleted as follows::
openstack coe cluster delete k8s-cluster
Building and Using a Swarm Cluster
==================================
Create a ClusterTemplate. It is very similar to the Kubernetes ClusterTemplate,
except for the absence of some Kubernetes-specific arguments and the use of
'swarm-mode' as the COE::
openstack coe cluster template create swarm-cluster-template \
--image Fedora-Atomic-27-20180419.0.x86_64 \
--keypair testkey \
--external-network public \
--dns-nameserver 8.8.8.8 \
--flavor m1.small \
--docker-volume-size 5 \
--coe swarm-mode
**NOTE:** If you are using Magnum behind a firewall then refer
to :doc:`/admin/magnum-proxy`.
Finally, create the cluster. Use the ClusterTemplate 'swarm-cluster-template'
as a template for cluster creation. This cluster will result in one swarm
manager node and two extra agent nodes::
openstack coe cluster create swarm-cluster \
--cluster-template swarm-cluster-template \
--node-count 2
Now that we have a swarm cluster we can start interacting with it::
$ openstack coe cluster show swarm-cluster
+--------------------+------------------------------------------------------------+
| Property | Value |
+--------------------+------------------------------------------------------------+
| status | CREATE_COMPLETE |
| uuid | eda91c1e-6103-45d4-ab09-3f316310fa8e |
| stack_id | 7947844a-8e18-4c79-b591-ecf0f6067641 |
| status_reason | Stack CREATE completed successfully |
| created_at | 2015-04-20T19:05:27+00:00 |
| updated_at | 2015-04-20T19:06:08+00:00 |
| create_timeout | 60 |
| api_address | https://172.24.4.4:6443 |
| coe_version | 1.2.5 |
| cluster_template_id| e73298e7-e621-4d42-b35b-7a1952b97158 |
| master_addresses | ['172.24.4.6'] |
| node_count | 2 |
| node_addresses | ['172.24.4.5'] |
| master_count | 1 |
| container_version | 1.9.1 |
| discovery_url | https://discovery.etcd.io/4caaa65f297d4d49ef0a085a7aecf8e0 |
| name | swarm-cluster |
+--------------------+------------------------------------------------------------+
We now need to set up the docker CLI to use the swarm cluster we have created
with the appropriate credentials.
Create a directory to store the certs and cd into it. The `DOCKER_CERT_PATH`
env variable is consumed by docker, which expects ca.pem, key.pem and cert.pem
to be in that directory::
export DOCKER_CERT_PATH=~/.docker
mkdir -p ${DOCKER_CERT_PATH}
cd ${DOCKER_CERT_PATH}
Generate an RSA key::
openssl genrsa -out key.pem 4096
Create an openssl config to help generate a CSR::
$ cat > client.conf << END
[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[req_distinguished_name]
CN = Your Name
[req_ext]
extendedKeyUsage = clientAuth
END
Run the openssl 'req' command to generate the CSR::
openssl req -new -days 365 \
-config client.conf \
-key key.pem \
-out client.csr
Now that you have your client CSR, use the Magnum CLI to get it signed and to
download the signing cert::
magnum ca-sign --cluster swarm-cluster --csr client.csr > cert.pem
magnum ca-show --cluster swarm-cluster > ca.pem
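Before pointing docker at the cluster, it can be worth sanity-checking that the
signed cert actually chains to the downloaded CA. A minimal local sketch of
that check, using a throwaway self-signed CA in place of Magnum's (all names
here are illustrative):

```shell
# Sanity-check a signed client cert against its CA with `openssl verify`.
# A throwaway self-signed CA stands in for the Magnum cluster CA here.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in CA (Magnum normally holds this key for you)
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -key ca-key.pem -subj "/CN=Stand-in CA" -days 1 -out ca.pem

# Client key + CSR, signed by the stand-in CA
openssl genrsa -out key.pem 2048 2>/dev/null
openssl req -new -key key.pem -subj "/CN=Your Name" -out client.csr
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -days 1 -out cert.pem 2>/dev/null

# If everything lines up, this prints "cert.pem: OK"
openssl verify -CAfile ca.pem cert.pem
```

The same `openssl verify -CAfile ca.pem cert.pem` invocation works unchanged on
the real ca.pem and cert.pem produced by the Magnum commands above.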
Set the CLI to use TLS. This env var is consumed by docker::
export DOCKER_TLS_VERIFY="1"
Set the correct host to use, which is the public IP address of the swarm API
server endpoint. This env var is consumed by docker::
export DOCKER_HOST=$(openstack coe cluster show swarm-cluster | awk '/ api_address /{print substr($4,7)}')
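The awk expression above simply strips the ``tcp://`` scheme (the first six
characters) from the ``api_address`` row of the table output. A self-contained
sketch with a made-up table row shows the effect:

```shell
# Demonstrate the substr($4,7) trick on a sample `cluster show` table row.
# The address below is made up; $4 is the value column, and characters
# 7 onward drop the 6-character "tcp://" prefix (this assumes the
# api_address uses a tcp:// scheme).
line="| api_address         | tcp://172.24.4.10:2376 |"
host=$(echo "$line" | awk '/ api_address /{print substr($4,7)}')
echo "$host"
```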
Next we will create a container in this swarm cluster. This container will ping
the address 8.8.8.8 four times::
docker run --rm -it cirros:latest ping -c 4 8.8.8.8
You should see a similar output to::
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=40 time=25.513 ms
64 bytes from 8.8.8.8: seq=1 ttl=40 time=25.348 ms
64 bytes from 8.8.8.8: seq=2 ttl=40 time=25.226 ms
64 bytes from 8.8.8.8: seq=3 ttl=40 time=25.275 ms
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 25.226/25.340/25.513 ms
Building Developer Documentation
================================


@@ -108,191 +108,29 @@ in your project, create one.
Upload the images required for your clusters to the Image service
-----------------------------------------------------------------
The VM versions of the Kubernetes and Docker Swarm drivers require a Fedora
Atomic image. The following is a stock Fedora Atomic image, built by the Atomic
team and tested by the Magnum team.
The Kubernetes driver requires a Fedora CoreOS image. Please refer to 'Supported
versions' for each Magnum release.
#. Download the image:
.. code-block:: console
$ wget https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64.qcow2
$ export FCOS_VERSION="35.20220116.3.0"
$ wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/${FCOS_VERSION}/x86_64/fedora-coreos-${FCOS_VERSION}-openstack.x86_64.qcow2.xz
$ unxz fedora-coreos-${FCOS_VERSION}-openstack.x86_64.qcow2.xz
#. Register the image to the Image service setting the ``os_distro`` property
to ``fedora-atomic``:
to ``fedora-coreos``:
.. code-block:: console
$ openstack image create \
--disk-format=qcow2 \
--container-format=bare \
--file=Fedora-Atomic-27-20180419.0.x86_64.qcow2 \
--property os_distro='fedora-atomic' \
fedora-atomic-latest
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | a987b691e23dce54c03d7a57c104b195 |
| container_format | bare |
| created_at | 2016-09-14T12:58:01Z |
| disk_format | qcow2 |
| file | /v2/images/81b25935-3400-441a-9f2e-f984a46c89dd/file |
| id | 81b25935-3400-441a-9f2e-f984a46c89dd |
| min_disk | 0 |
| min_ram | 0 |
| name | fedora-atomic-latest |
| owner | c4b42942156741dfbc4775dbcb032841 |
| properties | os_distro='fedora-atomic' |
| protected | False |
| schema | /v2/schemas/image |
| size | 507928064 |
| status | active |
| tags | |
| updated_at | 2016-09-14T12:58:03Z |
| virtual_size | None |
| visibility | private |
+------------------+------------------------------------------------------+
--file=fedora-coreos-${FCOS_VERSION}-openstack.x86_64.qcow2 \
--property os_distro='fedora-coreos' \
fedora-coreos-latest
Provision a Docker Swarm cluster and create a container
-------------------------------------------------------
Following this example, you will provision a Docker Swarm cluster with one
master and one node. Then, using docker's native API you will create a
container.
#. Create a cluster template for a Docker Swarm cluster using the
``fedora-atomic-latest`` image, ``m1.small`` as the flavor for the master
and the node, ``public`` as the external network and ``8.8.8.8`` for the
DNS nameserver, using the following command:
.. code-block:: console
$ openstack coe cluster template create swarm-cluster-template \
--image fedora-atomic-latest \
--external-network public \
--dns-nameserver 8.8.8.8 \
--master-flavor m1.small \
--flavor m1.small \
--coe swarm
+-----------------------+--------------------------------------+
| Property | Value |
+-----------------------+--------------------------------------+
| insecure_registry | - |
| labels | {} |
| updated_at | - |
| floating_ip_enabled | True |
| fixed_subnet | - |
| master_flavor_id | m1.small |
| uuid | 47c6ce77-50ae-43bd-8e2a-06980392693d |
| no_proxy | - |
| https_proxy | - |
| tls_disabled | False |
| keypair_id | mykey |
| public | False |
| http_proxy | - |
| docker_volume_size | - |
| server_type | vm |
| external_network_id | public |
| cluster_distro | fedora-atomic |
| image_id | fedora-atomic-latest |
| volume_driver | - |
| registry_enabled | False |
| docker_storage_driver | devicemapper |
| apiserver_port | - |
| name | swarm-cluster-template |
| created_at | 2016-09-14T13:05:11+00:00 |
| network_driver | docker |
| fixed_network | - |
| coe | swarm |
| flavor_id | m1.small |
| master_lb_enabled | False |
| dns_nameserver | 8.8.8.8 |
+-----------------------+--------------------------------------+
#. Create a cluster with one node and one master using ``mykey`` as the
keypair, using the following command:
.. code-block:: console
$ openstack coe cluster create swarm-cluster \
--cluster-template swarm-cluster-template \
--master-count 1 \
--node-count 1 \
--keypair mykey
Request to create cluster 2582f192-480e-4329-ac05-32a8e5b1166b has been accepted.
Your cluster is now being created. Creation time depends on your
infrastructure's performance. You can check the status of your cluster
using the commands: ``openstack coe cluster list`` or
``openstack coe cluster show swarm-cluster``.
.. code-block:: console
$ openstack coe cluster list
+--------------------------------------+---------------+---------+------------+--------------+-----------------+
| uuid | name | keypair | node_count | master_count | status |
+--------------------------------------+---------------+---------+------------+--------------+-----------------+
| 2582f192-480e-4329-ac05-32a8e5b1166b | swarm-cluster | mykey | 1 | 1 | CREATE_COMPLETE |
+--------------------------------------+---------------+---------+------------+--------------+-----------------+
.. code-block:: console
$ openstack coe cluster show swarm-cluster
+---------------------+------------------------------------------------------------+
| Property | Value |
+---------------------+------------------------------------------------------------+
| status | CREATE_COMPLETE |
| cluster_template_id | 47c6ce77-50ae-43bd-8e2a-06980392693d |
| uuid | 2582f192-480e-4329-ac05-32a8e5b1166b |
| stack_id | 3d7bbf1c-49bd-4930-84e0-ab71ba200687 |
| status_reason | Stack CREATE completed successfully |
| created_at | 2016-09-14T13:36:54+00:00 |
| name | swarm-cluster |
| updated_at | 2016-09-14T13:38:08+00:00 |
| discovery_url | https://discovery.etcd.io/a5ece414689287eca62e35555512bfd5 |
| api_address | tcp://172.24.4.10:2376 |
| coe_version | 1.2.5 |
| master_addresses | ['172.24.4.10'] |
| create_timeout | 60 |
| node_addresses | ['172.24.4.8'] |
| master_count | 1 |
| container_version | 1.12.6 |
| node_count | 1 |
+---------------------+------------------------------------------------------------+
#. Add the credentials of the above cluster to your environment:
.. code-block:: console
$ mkdir myclusterconfig
$ $(openstack coe cluster config swarm-cluster --dir myclusterconfig)
The above command will save the authentication artifacts in the
`myclusterconfig` directory and it will export the environment
variables: DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY.
Sample output:
.. code-block:: console
export DOCKER_HOST=tcp://172.24.4.10:2376
export DOCKER_CERT_PATH=myclusterconfig
export DOCKER_TLS_VERIFY=True
#. Create a container:
.. code-block:: console
$ docker run busybox echo "Hello from Docker!"
Hello from Docker!
#. Delete the cluster:
.. code-block:: console
$ openstack coe cluster delete swarm-cluster
Request to delete cluster swarm-cluster has been accepted.
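The ``$( )`` wrapper around ``openstack coe cluster config`` in the credentials
step works because the command prints shell ``export`` statements to stdout,
and command substitution hands those back to the current shell to execute. A
minimal sketch, with a stand-in function in place of the real CLI:

```shell
# Why `$(openstack coe cluster config ...)` is wrapped in command
# substitution: the CLI writes `export` statements to stdout, and the
# wrapper makes the *current* shell execute them. A stand-in function
# mimics that output here.
fake_cluster_config() {
    echo 'export DOCKER_TLS_VERIFY=True'
}
$(fake_cluster_config)
echo "$DOCKER_TLS_VERIFY"
```

This simple form relies on the exports being unquoted, as in the sample output
above; ``eval "$(...)"`` is the more robust variant when values may contain
spaces or quotes.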
Provision a Kubernetes cluster and create a deployment
------------------------------------------------------
@@ -315,40 +153,6 @@ will create a deployment.
--master-flavor m1.small \
--flavor m1.small \
--coe kubernetes
+-----------------------+--------------------------------------+
| Property | Value |
+-----------------------+--------------------------------------+
| insecure_registry | - |
| labels | {} |
| updated_at | - |
| floating_ip_enabled | True |
| fixed_subnet | - |
| master_flavor_id | m1.small |
| uuid | 0a601cc4-8fef-41aa-8036-d113e719ed7a |
| no_proxy | - |
| https_proxy | - |
| tls_disabled | False |
| keypair_id | - |
| public | False |
| http_proxy | - |
| docker_volume_size | - |
| server_type | vm |
| external_network_id | public |
| cluster_distro | fedora-atomic |
| image_id | fedora-atomic-latest |
| volume_driver | - |
| registry_enabled | False |
| docker_storage_driver | devicemapper |
| apiserver_port | - |
| name | kubernetes-cluster-template |
| created_at | 2017-05-16T09:53:00+00:00 |
| network_driver | flannel |
| fixed_network | - |
| coe | kubernetes |
| flavor_id | m1.small |
| master_lb_enabled | False |
| dns_nameserver | 8.8.8.8 |
+-----------------------+--------------------------------------+
#. Create a cluster with one node and one master using ``mykey`` as the
keypair, using the following command:
@@ -376,42 +180,17 @@ will create a deployment.
| b1ef3528-ac03-4459-bbf7-22649bfbc84f | kubernetes-cluster | mykey | 1 | 1 | CREATE_COMPLETE |
+--------------------------------------+--------------------+---------+------------+--------------+-----------------+
.. code-block:: console
$ openstack coe cluster show kubernetes-cluster
+---------------------+------------------------------------------------------------+
| Property | Value |
+---------------------+------------------------------------------------------------+
| status | CREATE_COMPLETE |
| cluster_template_id | 0a601cc4-8fef-41aa-8036-d113e719ed7a |
| node_addresses | ['172.24.4.5'] |
| uuid | b1ef3528-ac03-4459-bbf7-22649bfbc84f |
| stack_id | 8296624c-3c0e-45e1-967e-b6ff05105a3b |
| status_reason | Stack CREATE completed successfully |
| created_at | 2017-05-16T09:58:02+00:00 |
| updated_at | 2017-05-16T10:00:02+00:00 |
| coe_version | v1.6.7 |
| keypair | default |
| api_address | https://172.24.4.13:6443 |
| master_addresses | ['172.24.4.13'] |
| create_timeout | 60 |
| node_count | 1 |
| discovery_url | https://discovery.etcd.io/69c7cd3b3b06c98b4771410bd166a7c6 |
| master_count | 1 |
| container_version | 1.12.6 |
| name | kubernetes-cluster |
+---------------------+------------------------------------------------------------+
#. Add the credentials of the above cluster to your environment:
.. code-block:: console
$ mkdir -p ~/clusters/kubernetes-cluster
$ $(openstack coe cluster config kubernetes-cluster --dir ~/clusters/kubernetes-cluster)
$ cd ~/clusters/kubernetes-cluster
$ openstack coe cluster config kubernetes-cluster
The above command will save the authentication artifacts in the directory
``~/clusters/kubernetes-cluster`` and it will export the ``KUBECONFIG``
``~/clusters/kubernetes-cluster``. It will output a command to set the ``KUBECONFIG``
environment variable:
.. code-block:: console


@@ -21,7 +21,6 @@ created and managed by Magnum to support the COEs.
#. `Choosing a COE`_
#. `Native Clients`_
#. `Kubernetes`_
#. `Swarm`_
#. `Transport Layer Security`_
#. `Networking`_
#. `High Availability`_
@@ -92,7 +91,7 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
--coe \<coe\>
Specify the Container Orchestration Engine to use. Supported
COEs include 'kubernetes' and 'swarm'. If your environment
COE is 'kubernetes'. If your environment
has additional cluster drivers installed, refer to the cluster driver
documentation for the new COE names. This is a mandatory parameter
and there is no default value.
@@ -107,7 +106,6 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
COE os_distro
========== =====================
Kubernetes fedora-coreos
Swarm fedora-atomic
========== =====================
This is a mandatory parameter and there is no default value. Note that the
@@ -282,9 +280,6 @@ the table are linked to more details elsewhere in the user guide.
| `flannel_network_subnetlen`_ | size of subnet to | 24 |
| | assign to node | |
+---------------------------------------+--------------------+---------------+
| `rexray_preempt`_ | - true | false |
| | - false | |
+---------------------------------------+--------------------+---------------+
| `heapster_enabled`_ | - true | false |
| | - false | |
+---------------------------------------+--------------------+---------------+
@@ -320,10 +315,6 @@ the table are linked to more details elsewhere in the user guide.
+---------------------------------------+--------------------+---------------+
| `prometheus_adapter_configmap` | (rules CM name) | "" |
+---------------------------------------+--------------------+---------------+
| `swarm_strategy`_ | - spread | spread |
| | - binpack | |
| | - random | |
+---------------------------------------+--------------------+---------------+
| `traefik_ingress_controller_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `admission_control_list`_ | see below | see below |
@@ -1795,132 +1786,6 @@ Please refer to the doc of `k8s-keystone-auth in cloud-provider-openstack
<https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-keystone-webhook-authenticator-and-authorizer.md>`_
for more information.
Swarm
=====
A Swarm cluster is a pool of servers running the Docker daemon that is
managed as a single Docker host. One or more Swarm managers accept
the standard Docker API and manage this pool of servers.
Magnum deploys a Swarm cluster using parameters defined in
the ClusterTemplate and specified on the 'cluster-create' command, for
example::
openstack coe cluster template create swarm-cluster-template \
--image fedora-atomic-latest \
--keypair testkey \
--external-network public \
--dns-nameserver 8.8.8.8 \
--flavor m1.small \
--docker-volume-size 5 \
--coe swarm
openstack coe cluster create swarm-cluster \
--cluster-template swarm-cluster-template \
--master-count 3 \
--node-count 8
Refer to the `ClusterTemplate`_ and `Cluster`_ sections for the full list of
parameters. Following are further details relevant to Swarm:
What runs on the servers
There are two types of servers in the Swarm cluster: managers and nodes.
The Docker daemon runs on all servers. On the manager servers,
the Swarm manager runs as a Docker container on port 2376, initiated
by the systemd service swarm-manager. Etcd also runs
on the manager servers for discovery of the node servers in the cluster.
On the node servers, the Swarm agent runs as a Docker
container on port 2375, initiated by the systemd service
swarm-agent. On start up, the agents register themselves in
etcd and the managers discover the new nodes to manage.
Number of managers (master-count)
Specified in the cluster-create command to indicate how many servers will
run as managers in the cluster. Having more than one will provide high
availability. The managers will be in a load balancer pool and the
load balancer virtual IP address (VIP) will serve as the Swarm API
endpoint. A floating IP associated with the load balancer VIP will
serve as the external Swarm API endpoint. The managers accept
the standard Docker API and perform the corresponding operation on the
servers in the pool. For instance, when a new container is created,
the managers will select one of the servers based on the configured
strategy and schedule the container there.
Number of nodes (node-count)
Specified in the cluster-create command to indicate how many servers will
run as nodes in the cluster to host your Docker containers. These servers
will register themselves in etcd for discovery by the managers, and
interact with the managers. The Docker daemon runs locally to host
users' containers.
Network driver (network-driver)
Specified in the ClusterTemplate to select the network driver. The supported
drivers are 'docker' and 'flannel', with 'docker' as the default.
With the 'docker' driver, containers are connected to the 'docker0'
bridge on each node and are assigned a local IP address. With the
'flannel' driver, containers are connected to a flat overlay network
and are assigned an IP address by Flannel. Refer to the `Networking`_
section for more details.
Volume driver (volume-driver)
Specified in the ClusterTemplate to select the volume driver to provide
persistent storage for containers. The supported volume driver is
'rexray'. The default is no volume driver. When 'rexray' or another
volume driver is deployed, you can use the Docker 'volume' command to
create, mount, unmount, and delete volumes in containers. Cinder block
storage is used as the backend to support this feature.
Refer to the `Storage`_ section for more details.
Storage driver (docker-storage-driver)
Specified in the ClusterTemplate to select the Docker storage driver. The
default is 'devicemapper'. Refer to the `Storage`_ section for more
details.
Image (image)
Specified in the ClusterTemplate to indicate the image to boot the servers
for the Swarm manager and node.
The image binary is loaded in Glance with the attribute
'os_distro = fedora-atomic'.
The currently supported image is Fedora Atomic (download from `Fedora
<https://dl.fedoraproject.org/pub/alt/atomic/stable/>`_ )
TLS (tls-disabled)
Transport Layer Security is enabled by default to secure the Swarm API for
access by both the users and Magnum. You will need a key and a
signed certificate to access the Swarm API and CLI. Magnum
handles its own key and certificate when interfacing with the
Swarm cluster. In development mode, TLS can be disabled. Refer to
the `Transport Layer Security`_ section for details on how to create your
key and have Magnum sign your certificate.
Log into the servers
You can log into the manager and node servers with the account 'fedora' and
the keypair specified in the ClusterTemplate.
In addition to the common attributes in the ClusterTemplate, you can specify
the following attributes that are specific to Swarm by using the
labels attribute.
_`swarm_strategy`
This label corresponds to the Swarm master parameter '--strategy'.
For more details, refer to the `Swarm Strategy
<https://docs.docker.com/swarm/scheduler/strategy/>`_.
Valid values for this label are:
- spread
- binpack
- random
_`rexray_preempt`
When the volume driver 'rexray' is used, you can mount a data volume
backed by Cinder to a host to be accessed by a container. In this
case, the label 'rexray_preempt' can optionally be set to True or
False to enable any host to take control of the volume regardless of
whether other hosts are using the volume. This will in effect
unmount the volume from the current host and remount it on the new
host. If this label is set to False, then rexray will ensure data
safety by locking the volume before remounting. The default value
is False.
.. _transport_layer_security:
Transport Layer Security
@@ -2596,14 +2461,6 @@ For Kubernetes, pods are scaled manually by setting the count in the
replication controller. Kubernetes version 1.3 and later also
supports `autoscaling
<http://blog.kubernetes.io/2016/07/autoscaling-in-kubernetes.html>`_.
For Docker, the tool 'Docker Compose' provides the command
`docker-compose scale
<https://docs.docker.com/compose/reference/scale/>`_ which lets you
manually set the number of instances of a container. For Swarm
version 1.12 and later, services can also be scaled manually through
the command `docker service scale
<https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/>`_.
Automatic scaling for Swarm is not yet available.
Scaling the cluster nodes involves managing the number of nodes in the
cluster by adding more nodes or removing nodes. There is no direct
@@ -2926,14 +2783,6 @@ Currently Ironic is not fully supported yet, therefore more details will be
provided when this driver has been fully tested.
Swarm on Fedora Atomic
----------------------
This image can be downloaded from the `public Atomic site
<https://dl.fedoraproject.org/pub/alt/atomic/stable/>`_
or can be built locally using diskimagebuilder.
The login for this image is *fedora*.
Notification
============