containerization: Add dockerfile

This adds an Alpine Linux based Docker image for running kuryr.

One can try it out by doing:

    sudo mkdir -p /usr/lib/docker/plugins/kuryr
    sudo curl -o /usr/lib/docker/plugins/kuryr/kuryr.spec \
      https://raw.githubusercontent.com/openstack/kuryr/master/etc/kuryr.spec
    sudo service docker restart

    docker run --name kuryr-libnetwork \
      --net=host \
      --cap-add=NET_ADMIN \
      -e SERVICE_USER=admin \
      -e SERVICE_TENANT_NAME=admin \
      -e SERVICE_PASSWORD=pass \
      -e IDENTITY_URL=http://127.0.0.1:35357/v2.0 \
      -e OS_URL=http://127.0.0.1:9696 \
      -v /var/log/kuryr:/var/log/kuryr \
      kuryr/libnetwork

Change-Id: I68d727194d6029da965fca90fdd464ed45b02044
Signed-off-by: Antoni Segura Puimedon <toni@midokura.com>

Dockerfile:

@@ -0,0 +1,39 @@
FROM alpine:3.3
MAINTAINER Antoni Segura Puimedon "toni@kuryr.org"

WORKDIR /

# Install the runtime dependencies, pull in the build toolchain only for the
# duration of the pip install, and remove it to keep the image small.
RUN apk add --no-cache \
        bash \
        iproute2 \
        openvswitch \
        py-pip \
        python \
        uwsgi-python && \
    apk add --no-cache --virtual build-deps \
        gcc \
        git \
        linux-headers \
        musl-dev \
        python-dev && \
    pip install -U pip setuptools && \
    git clone https://github.com/openstack/kuryr && \
    cd /kuryr && \
    pip install . && \
    cd / && \
    rm -fr /kuryr && \
    apk del build-deps

# Defaults; all of these can be overridden with `docker run -e`.
ENV SERVICE_USER="admin"
ENV SERVICE_TENANT_NAME="admin"
ENV SERVICE_PASSWORD="pass"
ENV IDENTITY_URL="http://127.0.0.1:35357/v2.0"
ENV OS_URL="http://127.0.0.1:9696"
ENV CAPABILITY_SCOPE="local"
ENV LOG_LEVEL="INFO"
ENV PROCESSES=2
ENV THREADS=2

VOLUME /var/log/kuryr
ADD run_kuryr.sh /usr/bin/run_kuryr.sh
CMD ["/usr/bin/run_kuryr.sh"]
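
After building, a quick way to double-check the environment defaults baked
into the image is to inspect its config (image tag assumed from the README
below):

    docker inspect -f '{{.Config.Env}}' kuryr/libnetwork:latest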

README:

@@ -0,0 +1,79 @@
=================================
Kuryr Docker libnetwork container
=================================

This is the container generation file for Kuryr's Docker libnetwork driver,
useful for single Docker engine usage as well as Docker Swarm usage.

How to build the container
--------------------------

If you want to build your own container, you can do so by running the
following command from this same directory:

::

    docker build -t your_docker_username/libnetwork:latest .

How to get the container
------------------------

To get the upstream Docker libnetwork container with OVS, you can simply do:

::

    docker pull kuryr/libnetwork:latest

It is expected that different vendors may have their own versions of the
Kuryr libnetwork container in their Docker Hub namespaces, for example:

::

    docker pull midonet/libnetwork:latest

The reason for this is that some vendors' binding scripts need different (and
potentially non-redistributable) userspace tools in the container.

How to run the container
------------------------

First we prepare Docker to find the driver:

::

    sudo mkdir -p /usr/lib/docker/plugins/kuryr
    sudo curl -o /usr/lib/docker/plugins/kuryr/kuryr.spec \
      https://raw.githubusercontent.com/openstack/kuryr/master/etc/kuryr.spec
    sudo service docker restart
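
A Docker plugin ``.spec`` file is simply a URL telling the daemon where the
plugin listens. Since run_kuryr.sh below serves Kuryr on port 2377 (an
assumption based on this image; adjust if your build differs), the downloaded
file should be equivalent to doing:

::

    echo "http://127.0.0.1:2377" | \
      sudo tee /usr/lib/docker/plugins/kuryr/kuryr.spec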
Then we start the container:

::

    docker run --name kuryr-libnetwork \
      --net=host \
      --cap-add=NET_ADMIN \
      -e SERVICE_USER=admin \
      -e SERVICE_TENANT_NAME=admin \
      -e SERVICE_PASSWORD=admin \
      -e IDENTITY_URL=http://127.0.0.1:35357/v2.0 \
      -e OS_URL=http://127.0.0.1:9696 \
      -v /var/log/kuryr:/var/log/kuryr \
      -v /var/run/openvswitch:/var/run/openvswitch \
      kuryr/libnetwork

Where:

* SERVICE_USER, SERVICE_TENANT_NAME and SERVICE_PASSWORD are OpenStack
  credentials
* IDENTITY_URL is the URL of the OpenStack Keystone endpoint
* OS_URL is the URL of the OpenStack Neutron endpoint
* A volume is created so that the logs are available on the host
* The NET_ADMIN capability is granted in order to perform network operations
  on the host namespace, like ovs-vsctl

Other options (see the example after this list):

* CAPABILITY_SCOPE can be "local" or "global", the latter being for when
  there is a cluster store plugged into the Docker engine.
* LOG_LEVEL for defining, for example, "DEBUG" logging messages.
* PROCESSES for defining how many kuryr processes to use to handle the
  libnetwork requests.
* THREADS for defining how many threads per process to use to handle the
  libnetwork requests.

Note that the 127.0.0.1 addresses will most likely have to be changed unless
you are running everything on a single machine with ``--net=host``.
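
For example, to run with debug logging and more workers, the defaults baked
into the image can be overridden per container (illustrative values):

::

    docker run --name kuryr-libnetwork \
      --net=host \
      --cap-add=NET_ADMIN \
      -e LOG_LEVEL=DEBUG \
      -e PROCESSES=4 \
      -e THREADS=4 \
      -v /var/log/kuryr:/var/log/kuryr \
      -v /var/run/openvswitch:/var/run/openvswitch \
      kuryr/libnetwork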

run_kuryr.sh:

@@ -0,0 +1,18 @@
#!/bin/bash

# Generate the kuryr configuration from the container environment
mkdir -p /etc/kuryr
cat > /etc/kuryr/kuryr.conf << EOF
[DEFAULT]
bindir = /usr/libexec/kuryr
log_level = $LOG_LEVEL
capability_scope = $CAPABILITY_SCOPE
EOF

# Serve the kuryr app with uwsgi on the port the Docker plugin spec points at
/usr/sbin/uwsgi \
  --plugin /usr/lib/uwsgi/python \
  --http-socket :2377 \
  -w kuryr.server:app \
  --master \
  --processes "$PROCESSES" \
  --threads "$THREADS"
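
Once the container is up, one can verify that the driver answers libnetwork's
handshake by posting to the plugin activation endpoint (assuming the default
port 2377 used above):

    curl -X POST http://127.0.0.1:2377/Plugin.Activate

A libnetwork remote driver is expected to answer with something like
{"Implements": ["NetworkDriver"]}.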

Kuryr server entry point (Python):

@@ -14,17 +14,24 @@ import sys
 from oslo_log import log

-from kuryr import app
-from kuryr.common import config
-from kuryr import controllers
-
-config.init(sys.argv[1:])
-controllers.check_for_neutron_ext_support()
-controllers.check_for_neutron_ext_tag()
-app.debug = config.CONF.debug
-log.setup(config.CONF, 'Kuryr')

 def start():
+    from kuryr.common import config
+    config.init(sys.argv[1:])
     port = int(config.CONF.kuryr_uri.split(':')[-1])
+    from kuryr import app
+    from kuryr import controllers
+    controllers.check_for_neutron_ext_support()
+    controllers.check_for_neutron_ext_tag()
+    app.debug = config.CONF.debug
+    log.setup(config.CONF, 'Kuryr')
     app.run("0.0.0.0", port)
+
+
+if __name__ == '__main__':
+    start()
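
With this refactoring, importing the module no longer has side effects: the
config parsing and the Neutron extension checks only happen inside start(),
i.e. when the script is executed directly. A sketch (module path and the
standard oslo.config flag assumed):

    # full startup: config init, Neutron extension checks, then the app
    python -m kuryr.server --config-file /etc/kuryr/kuryr.conf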

OVS hybrid binding script (shell):

@@ -45,8 +45,10 @@ ovs_hybrid_bind_port() {
     # create a linux bridge
     br_name="qbr"${PORT:0:11}
     ip link add name $br_name type bridge
-    echo 0 > /sys/devices/virtual/net/$br_name/bridge/forward_delay
-    echo 0 > /sys/devices/virtual/net/$br_name/bridge/stp_state
+    # Using brctl allows containerized usage not to need privileged mode
+    # as sysfs is mounted read-only when running with just CAP_NET_ADMIN
+    brctl setfd $br_name 0
+    brctl stp $br_name off
     # connect the veth outside to linux bridge
     ip link set $VETH up
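
If one wants to confirm from inside the container that the brctl calls took
effect, reading sysfs still works, since it is mounted read-only rather than
absent (a sketch; bridge name as in the script above):

    cat /sys/devices/virtual/net/$br_name/bridge/forward_delay  # expect 0
    cat /sys/devices/virtual/net/$br_name/bridge/stp_state      # expect 0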