Make a demo for Magnum
Add a Magnum demonstration that creates a baymodel and a bay. Just run demos/magnum/start to make it work. This depends on Neutron. The redis-kube manifests are borrowed from the v1beta3 redis example in the Kubernetes examples repository.

Change-Id: I448a5890bfe0c1675914ae7dbd02fad03f4b1eeb
commit 794a814275 (parent 5a06901664)
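
For reference, the intended flow across the files added below (a sketch; it assumes the scripts are run from their own directory and that the bay finishes building before the redis step):

```sh
# Hypothetical end-to-end run of the demo, using the three entry points below.
demos/magnum/start    # upload the glance image, create the baymodel and bay
demos/magnum/redis    # deploy the redis pod, service, and replication controllers
demos/magnum/stop     # delete the bay, then the baymodel
```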

@@ -17,7 +17,7 @@ resources:
   steak:
     type: OS::Heat::ResourceGroup
     properties:
-      count: 20
+      count: 2
       resource_def:
         type: steak.yaml
         properties:

5  demos/magnum/redis  Executable file
@@ -0,0 +1,5 @@
magnum pod-create --manifest redis-kube/redis-master.yaml --bay testbay
magnum service-create --manifest redis-kube/redis-sentinel-service.yaml --bay testbay
magnum rc-create --manifest redis-kube/redis-controller.yaml --bay testbay
magnum rc-create --manifest redis-kube/redis-sentinel-controller.yaml --bay testbay

115  demos/magnum/redis-kube/README.md  Normal file
@@ -0,0 +1,115 @@
## Reliable, Scalable Redis on Kubernetes

The following document describes the deployment of a reliable, multi-node Redis on Kubernetes. It deploys a master with replicated slaves, as well as replicated redis sentinels which are used for health checking and failover.

### Prerequisites

This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guide](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform.

### A note for the impatient

This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

### Turning up an initial master/sentinel pod

The first thing we will create is a [_Pod_](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
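
For illustration, this is roughly the check a sentinel container can run from inside the pod (a sketch; it assumes ```redis-cli``` is available in the container image):

```sh
# The master's port is reachable on the pod's own IP because all
# containers in the pod share one network namespace.
redis-cli -h "$(hostname -i)" -p 6379 ping
```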

Here is the config for the initial master and sentinel pod: [redis-master.yaml](redis-master.yaml)

Create this master as follows:

```sh
kubectl create -f examples/redis/v1beta3/redis-master.yaml
```

### Turning up a sentinel service

In Kubernetes a _Service_ describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.

In Redis, we will use a Kubernetes Service to provide a discoverable endpoint for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, the slaves, and other relevant info for the cluster. This enables new members to join the cluster when failures occur.
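
Once the service exists, one way to inspect the dynamically maintained member set is to list its endpoints (a sketch; ```redis-sentinel``` is the service name from the definition below):

```sh
# Show which sentinel pods the service currently selects.
kubectl get endpoints redis-sentinel
```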

Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml)

Create this service:

```sh
kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml
```

### Turning up replicated redis servers

So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying), our Redis service goes away with it.

In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server.
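
Adoption is purely label-based; a quick way to see which pods a controller's selector currently matches (a sketch, assuming the ```name=redis``` selector used in the controller below):

```sh
# List the pods the redis controller's selector matches.
kubectl get pods -l name=redis
```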

[redis-controller.yaml](redis-controller.yaml)

The bulk of this controller config is actually identical to the redis-master pod definition above. It forms the template or "cookie cutter" that defines what it means to be a member of this set.

Create this controller:

```sh
kubectl create -f examples/redis/v1beta3/redis-controller.yaml
```

We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)

We create it as follows:

```sh
kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml
```

### Resize our replicated pods

Initially, creating those replication controllers didn't actually change anything: we only asked for one sentinel and one redis server, and they already existed. Now we will add more replicas:

```sh
kubectl resize rc redis --replicas=3
```

```sh
kubectl resize rc redis-sentinel --replicas=3
```

This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.

Unlike our original redis-master pod, these pods exist independently, and they use the ```redis-sentinel-service``` that we defined above to discover and join the cluster.
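
Something like the following can confirm the new replica counts (a sketch; the labels come from the controller templates above):

```sh
# Both controllers should now report three pods each.
kubectl get pods -l name=redis
kubectl get pods -l name=redis-sentinel
```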

### Delete our manual pod

The final step in the cluster turn-up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.

Delete the master as follows:

```sh
kubectl delete pods redis-master
```

Now let's take a close look at what happens after this pod is deleted. There are three things that happen:

1. The redis replication controller notices that its desired state is 3 replicas, but there are currently only 2, and so it creates a new redis server to bring the replica count back up to 3.
2. The redis-sentinel replication controller likewise notices the missing sentinel, and also creates a new sentinel.
3. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master (one way to observe the outcome is sketched below).
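
To watch step 3 complete, you can ask any surviving sentinel for the current master address (a sketch; ```mymaster``` is the conventional default master name in Redis sentinel configs and is an assumption about the ```kubernetes/redis``` image):

```sh
# Ask a sentinel (port 26379) which redis server is now the master.
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```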

### Conclusion

At this point we now have a reliable, scalable Redis installation. By resizing the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.

### tl; dr

For those of you who are impatient, here is the summary of commands we ran in this tutorial:

```sh
# Create a bootstrap master
kubectl create -f examples/redis/v1beta3/redis-master.yaml

# Create a service to track the sentinels
kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml

# Create a replication controller for redis servers
kubectl create -f examples/redis/v1beta3/redis-controller.yaml

# Create a replication controller for redis sentinels
kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml

# Resize both replication controllers
kubectl resize rc redis --replicas=3
kubectl resize rc redis-sentinel --replicas=3

# Delete the original master pod
kubectl delete pods redis-master
```

28  demos/magnum/redis-kube/redis-controller.yaml  Normal file
@@ -0,0 +1,28 @@
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: redis
spec:
  replicas: 2
  selector:
    name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        resources:
          limits:
            cpu: "1"
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}

33  demos/magnum/redis-kube/redis-master.yaml  Normal file
@@ -0,0 +1,33 @@
apiVersion: v1beta3
kind: Pod
metadata:
  labels:
    name: redis
    redis-sentinel: "true"
    role: master
  name: redis-master
spec:
  containers:
  - name: master
    image: kubernetes/redis:v1
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
  - name: sentinel
    image: kubernetes/redis:v1
    env:
    - name: SENTINEL
      value: "true"
    ports:
    - containerPort: 26379
  volumes:
  - name: data
    emptyDir: {}

14  demos/magnum/redis-kube/redis-proxy.yaml  Normal file
@@ -0,0 +1,14 @@
apiVersion: v1beta3
kind: Pod
metadata:
  labels:
    name: redis-proxy
    role: proxy
  name: redis-proxy
spec:
  containers:
  - name: proxy
    image: kubernetes/redis-proxy:v1
    ports:
    - containerPort: 6379
      name: api

23  demos/magnum/redis-kube/redis-sentinel-controller.yaml  Normal file
@@ -0,0 +1,23 @@
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: redis-sentinel
spec:
  replicas: 2
  selector:
    redis-sentinel: "true"
  template:
    metadata:
      labels:
        name: redis-sentinel
        redis-sentinel: "true"
        role: sentinel
    spec:
      containers:
      - name: sentinel
        image: kubernetes/redis:v1
        env:
        - name: SENTINEL
          value: "true"
        ports:
        - containerPort: 26379

13  demos/magnum/redis-kube/redis-sentinel-service.yaml  Normal file
@@ -0,0 +1,13 @@
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: sentinel
    role: service
  name: redis-sentinel
spec:
  ports:
  - port: 26379
    targetPort: 26379
  selector:
    redis-sentinel: "true"

38  demos/magnum/start  Executable file
@@ -0,0 +1,38 @@
#!/bin/bash

# Magnum requires Neutron networking; bail out early otherwise.
NETWORK_MANAGER=$(grep -sri NETWORK_MANAGER ../../compose/openstack.env | cut -f2 -d "=")
if [ "$NETWORK_MANAGER" != "neutron" ]; then
    echo 'Magnum depends on the Neutron network manager to operate.'
    echo 'Exiting because the network manager is' "$NETWORK_MANAGER".
    exit 1
fi

echo "Downloading glance image."
IMAGE_URL=https://fedorapeople.org/groups/magnum
IMAGE=fedora-21-atomic-3.qcow2
if ! [ -f "$IMAGE" ]; then
    curl -L -o "./$IMAGE" "$IMAGE_URL/$IMAGE"
fi

NIC_ID=$(neutron net-show public1 | awk '/ id /{print $4}')

glance image-delete fedora-21-atomic-3 2> /dev/null

echo "Loading fedora-atomic image into glance."
glance image-create --name fedora-21-atomic-3 --progress --is-public true --disk-format qcow2 --container-format bare --file "./$IMAGE"
GLANCE_IMAGE_ID=$(glance image-show fedora-21-atomic-3 | awk '/ id /{print $4}')

echo "Registering os_distro property with the image."
glance image-update "$GLANCE_IMAGE_ID" --property os_distro=fedora-atomic

echo "Creating baymodel."
magnum baymodel-create --name testbaymodel --image-id "$GLANCE_IMAGE_ID" \
    --keypair-id mykey \
    --fixed-network 10.0.3.0/24 \
    --external-network-id "$NIC_ID" \
    --dns-nameserver 8.8.8.8 --flavor-id m1.small \
    --docker-volume-size 5 --coe kubernetes

echo "Creating bay."
magnum bay-create --name testbay --baymodel testbaymodel --node-count 2
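
Bay creation drives a Heat stack and can take several minutes; before running demos/magnum/redis, something like the following can wait for the bay to come up (a sketch; the exact status string is an assumption about the Magnum release in use):

```sh
# Poll until the bay reports its create as complete.
until magnum bay-show testbay | grep -q CREATE_COMPLETE; do
    sleep 5
done
```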

8  demos/magnum/stop  Executable file
@@ -0,0 +1,8 @@
#!/bin/bash

magnum bay-delete testbay
while magnum bay-list | grep -q testbay; do
    sleep 1
done
magnum baymodel-delete testbaymodel

@@ -44,9 +44,9 @@ if [[ "${NETWORK_MANAGER}" == "nova" ]] ; then
 else
     echo Configuring neutron.
     neutron net-create public1 --router:external True --provider:physical_network physnet1 --provider:network_type flat
-    neutron subnet-create --name 1-subnet --disable-dhcp --allocation-pool start=192.168.100.150,end=192.168.100.199 public1 192.168.100.0/24 --gateway 192.168.100.1 --dns_nameservers list=true 192.168.100.1
+    neutron subnet-create --name 1-subnet --disable-dhcp --allocation-pool start=10.0.2.150,end=10.0.2.199 public1 10.0.2.0/24 --gateway 10.0.2.1
     neutron net-create demo-net --provider:network_type vxlan --provider:segmentation_id 10
-    neutron subnet-create --name demo-subnet --gateway 10.10.10.1 demo-net 10.10.10.0/24
+    neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24 --dns_nameservers list=true 8.8.8.8
     neutron router-create demo-router
     neutron router-interface-add demo-router demo-subnet
     neutron router-gateway-set demo-router public1

@@ -56,6 +56,7 @@ else
     neutron security-group-rule-create default --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0
     # Open heat-cfn so it can run on a different host
     neutron security-group-rule-create default --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 8000 --port-range-max 8000 --remote-ip-prefix 0.0.0.0/0
+    neutron security-group-rule-create default --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 8080 --port-range-max 8080 --remote-ip-prefix 0.0.0.0/0
 fi

 if [ -r ~/.ssh/id_rsa.pub ]; then