Documentation cleanups for nova devref

This patch contains a number of cleanups of the nova devref,
mostly related to outdated content.

1) remove outdated todo items from network
=> these have been long covered in the manuals

2) remove outdated multinic docs and images
=> this is now better covered in:
http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html

3) remove outdated cloudpipe docs, confs and scripts
=> This is now better covered in:
http://docs.openstack.org/trunk/openstack-compute/admin/content/cloudpipe-per-project-vpns.html

4) remove outdated networking docs
=> These were marked as 'legacy' more than 2 years ago

Change-Id: I9321335031b4581c603a6f31c613e1b620d468a6
Tom Fifield 2013-02-19 23:42:21 +11:00
parent 1b9b66ba21
commit e79811b855
14 changed files with 0 additions and 351 deletions


@@ -1,166 +0,0 @@
..
      Copyright 2010-2011 United States Government as represented by the
      Administrator of the National Aeronautics and Space Administration.
      All Rights Reserved.

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.
.. _cloudpipe:

Cloudpipe -- Per Project VPNs
=============================

Cloudpipe is a method for connecting end users to their project instances in VLAN mode.

Overview
--------
The support code for cloudpipe implements admin commands (via nova-manage) to automatically create a VM for a project that allows users to VPN into the private network of their project. Access to this VPN is provided through a public port on the network host for the project. This allows users free access to the virtual machines in their project without exposing those machines to the public internet.
Cloudpipe Image
---------------
The cloudpipe image is basically just a Linux instance with OpenVPN installed. It needs a simple script to grab user data from the metadata server, base64-decode it into a zip file, and run the autorun.sh script from inside the zip. The autorun script will configure and run OpenVPN using the data from nova.

It is also useful to have a cron script that periodically re-downloads the metadata and copies the new CRL. This keeps revoked users from connecting and, since connections are renegotiated every hour, disconnects any connected users whose certificates have been revoked.
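As an illustration of those two scripts (the real rc.local shipped in the image is included below; the metadata address is the standard EC2-style one, and the payload file names are assumptions)::

  #!/bin/sh
  # boot time: fetch the base64-encoded zip that nova passed in as user
  # data, unpack it, and run the bundled autorun.sh
  curl -s http://169.254.169.254/latest/user-data | base64 -d > /tmp/payload.zip
  mkdir -p /tmp/vpn && unzip -o /tmp/payload.zip -d /tmp/vpn
  chmod +x /tmp/vpn/autorun.sh
  (cd /tmp/vpn && ./autorun.sh)

  # hourly cron: re-download the payload and install any newer crl so
  # revoked certificates are rejected at the next renegotiation
  curl -s http://169.254.169.254/latest/user-data | base64 -d > /tmp/payload.zip
  unzip -o /tmp/payload.zip crl.pem -d /etc/openvpn/
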
Creating a Cloudpipe Image
--------------------------
Making a cloudpipe image is relatively easy.

#. install OpenVPN on a base Ubuntu image
#. set up a server.conf.template in /etc/openvpn/

   .. literalinclude:: server.conf.template
      :language: bash
      :linenos:

#. put up.sh in /etc/openvpn/

   .. literalinclude:: up.sh
      :language: bash
      :linenos:

#. put down.sh in /etc/openvpn/

   .. literalinclude:: down.sh
      :language: bash
      :linenos:

#. download and run the payload on boot from /etc/rc.local

   .. literalinclude:: rc.local
      :language: bash
      :linenos:

#. set up /etc/network/interfaces

   .. literalinclude:: interfaces
      :language: bash
      :linenos:

#. register the image and set the image id in your flagfile::

     --vpn_image_id=ami-xxxxxxxx

#. you should set a few other flags to make VPNs work properly::

     --use_project_ca
     --cnt_vpn_clients=5

Cloudpipe Launch
----------------
When you use nova-manage to launch a cloudpipe for a user, it goes through the following process:
#. creates a keypair called <project_id>-vpn and saves it in the keys directory
#. creates a security group <project_id>-vpn and opens up port 1194 and ICMP
#. creates a cert and private key for the vpn instance and saves them in the CA/projects/<project_id>/ directory
#. zips up the info and puts it base64-encoded as user data
#. launches an m1.tiny instance with the above settings using the flag-specified vpn image
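For example, to bring up the vpn for a project and confirm that it launched (a hedged sketch; euca-describe-instances is just one way to check)::

  nova-manage vpn run <project_id>
  # the new instance should appear running with the <project_id>-vpn keypair
  euca-describe-instances
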
VPN Access
----------
In VLAN networking mode, the second IP in each private network is reserved for the cloudpipe instance. This gives a consistent IP to the instance so that nova-network can create forwarding rules for access from the outside world. The network for each project is given a specific high-numbered port on the public IP of the network host. This port is automatically forwarded to 1194 on the vpn instance.

If specific high-numbered ports do not work for your users, you can always allocate and associate a public IP to the instance, then change the vpn_public_ip and vpn_public_port in the database. This will be turned into a nova-manage command or a flag soon.
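As a rough sketch of what that forwarding amounts to (the rule form, table, and column names here are assumptions; nova-network manages the real rules itself)::

  # the project's high-numbered public port is DNATed to 1194 (UDP, per
  # the server.conf.template) on the cloudpipe instance, e.g. 10.0.0.2
  iptables -t nat -A PREROUTING -d <public_ip> -p udp --dport <vpn_public_port> \
      -j DNAT --to-destination 10.0.0.2:1194

  # manual override until the nova-manage command exists; the column
  # names are assumptions based on the flag names above
  mysql nova -e "update networks set vpn_public_address='<public_ip>', vpn_public_port=<port> where project_id='<project_id>'"
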
Certificates and Revocation
---------------------------
If the use_project_ca flag is set (required for cloudpipe to work securely), then each project has its own CA. This CA is used to sign the certificate for the vpn, and is also passed to the user for bundling images. When a certificate is revoked using nova-manage, a new Certificate Revocation List (CRL) is generated. As long as cloudpipe has an updated CRL, it will block revoked users from connecting to the vpn.

The user data for cloudpipe isn't currently updated when certs are revoked, so it is necessary to restart the cloudpipe instance if a user's credentials are revoked.
Restarting Cloudpipe VPN
------------------------
You can reboot a cloudpipe vpn through the api if something goes wrong (using euca-reboot-instances for example), but if you generate a new crl, you will have to terminate it and start it again using nova-manage vpn run. The cloudpipe instance always gets the first ip in the subnet and it can take up to 10 minutes for the ip to be recovered. If you try to start the new vpn instance too soon, the instance will fail to start because of a NoMoreAddresses error. If you can't wait 10 minutes, you can manually update the ip with something like the following (use the right ip for the project)::

  euca-terminate-instances <instance_id>
  mysql nova -e "update fixed_ips set allocated=0, leased=0, instance_id=NULL where fixed_ip='10.0.0.2'"

You also will need to terminate the dnsmasq running for the user (make sure you use the right pid file)::

  sudo kill `cat /var/lib/nova/br100.pid`

Now you should be able to re-run the vpn::

  nova-manage vpn run <project_id>
Logging into Cloudpipe VPN
--------------------------
The keypair that was used to launch the cloudpipe instance should be in the keys/<project_id> folder. You can use this key to log into the cloudpipe instance for debugging purposes.
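For example (the key file name and login account are assumptions; the keypair name comes from the launch steps above)::

  ssh -i keys/<project_id>/<project_id>-vpn.pem root@<vpn_public_ip>
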
The :mod:`nova.cloudpipe.pipelib` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automodule:: nova.cloudpipe.pipelib
   :noindex:
   :members:
   :undoc-members:
   :show-inheritance:

The :mod:`nova.api.cloudpipe` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automodule:: nova.api.cloudpipe
   :noindex:
   :members:
   :undoc-members:
   :show-inheritance:

The :mod:`nova.crypto` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automodule:: nova.crypto
   :noindex:
   :members:
   :undoc-members:
   :show-inheritance:


@@ -1,7 +0,0 @@
#!/bin/sh
# OpenVPN "down" hook: detach the tap device from the bridge and bring
# it down when the tunnel closes. $1 is the bridge name baked into
# server.conf; $2 is the tap device OpenVPN appends.
BR=$1
DEV=$2
/usr/sbin/brctl delif $BR $DEV
/sbin/ifconfig $DEV down


@@ -41,7 +41,6 @@ Background Concepts for Nova
   vmstates
   il8n
   filter_scheduler
   multinic
   rpc
   hooks
@@ -74,7 +73,6 @@ Module Reference
   scheduler
   fakes
   nova
   cloudpipe
   objectstore
   glance


@@ -1,17 +0,0 @@
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        down ifconfig $IFACE down

auto br0
iface br0 inet dhcp
        bridge_ports eth0


@@ -1,39 +0,0 @@
MultiNic
========
What is it
----------
Multinic allows an instance to have more than one VIF connected to it. Each VIF represents a separate network with its own IP block.
Managers
--------
Each of the network managers is designed to run independently of the compute manager. They expose a common API for the compute manager to call to determine and configure the network(s) for an instance. Direct calls to either the network API or especially the DB should be avoided by the virt layers.

On startup a manager looks in the networks table for networks it is assigned to and configures itself to support them. Using the periodic task, managers will claim new networks that have no host set. Only one network per network-host will be claimed at a time. This allows for pseudo-load-balancing if there are multiple network-hosts running.
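To inspect the result of that claiming, you can look directly at the networks table (a hedged example; the column names are assumptions based on the description above)::

  # hosts claim a network by writing themselves into its host column;
  # an unclaimed network shows NULL until a periodic task grabs it
  mysql nova -e "select id, label, host from networks"
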
Flat Manager
------------
.. image:: /images/multinic_flat.png

The Flat manager is most similar to a traditional switched network environment. It assumes that the IP routing, DNS, DHCP (possibly) and bridge creation are handled by something else; that is, it makes no attempt to configure any of this. It does, however, keep track of a range of IPs to be allocated to the instances connected to the network.

Each instance will get a fixed IP from each network's pool. The guest operating system may be configured to gather this information through an agent, have the hypervisor inject the files, or ignore it completely and come up with only a layer 2 connection.

The Flat manager requires at least one nova-network process running that will listen to the API queue and respond to queries. It does not need to sit on any of the networks, but it does keep track of the IPs it hands out to instances.
FlatDHCP Manager
----------------
.. image:: /images/multinic_dhcp.png
The FlatDHCP manager builds on the Flat manager, adding dnsmasq (DNS and DHCP) and radvd (Router Advertisement) servers on the bridge for that network. The services run on the host that is assigned to that network. The FlatDHCP manager creates its bridge (as specified when the network was created) on the network host when that host starts up or when a new network gets allocated to it. Compute nodes will also create the bridges as necessary and connect instance VIFs to them.
VLAN Manager
------------
.. image:: /images/multinic_vlan.png
The VLAN manager sets up forwarding to/from a cloudpipe instance, in addition to providing dnsmasq (DNS and DHCP) and radvd (Router Advertisement) services for each network. The manager creates its bridge (as specified when the network was created) on the network host when that host starts up or when a new network gets allocated to it. Compute nodes will also create the bridges as necessary and connect instance VIFs to them.
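The plumbing this amounts to on a network host looks roughly like the following (interface name and VLAN number are illustrative only)::

  # a VLAN interface on the trunked NIC, with a bridge on top that
  # instance VIFs get attached to
  vconfig add eth0 100
  brctl addbr br100
  brctl addif br100 eth0.100
  ifconfig br100 up
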


@@ -18,12 +18,6 @@
Networking
==========
.. todo::

   * document hardware specific commands (maybe in admin guide?) (todd)
   * document a map between flags and managers/backends (todd)
The :mod:`nova.network.manager` Module
--------------------------------------
@@ -53,76 +47,3 @@ The :mod:`network_unittest` Module
   :members:
   :undoc-members:
   :show-inheritance:
Legacy docs
-----------
The nova networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.
Components
----------
There are several key components:
* NetworkController (Manages address and vlan allocation)
* RoutingNode (NATs public IPs to private IPs, and enforces firewall rules)
* AddressingNode (runs DHCP services for private networks)
* BridgingNode (a subclass of the basic nova ComputeNode)
* TunnelingNode (provides VPN connectivity)
Component Diagram
-----------------
Overview::

                             (PUBLIC INTERNET)
                                  |      \
                                 / \    / \
                   [RoutingNode] ... [RN]   [TunnelingNode] ... [TN]
                        |              \   /        |            |
                        |             < AMQP >      |            |
      [AddressingNode] -- (VLAN) ... | (VLAN) ... (VLAN) --- [AddressingNode]
                      \              |           \        /
                     / \            / \         / \      / \
                   [BridgingNode]  ...        [BridgingNode]

              [NetworkController]   ...   [NetworkController]
                        \                      /
                             <    AMQP    >
                                   |
                                  / \
               [CloudController] ... [CloudController]
While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).
State Model
-----------
Network State consists of the following facts:
* VLAN assignment (to a project)
* Private Subnet assignment (to a security group) in a VLAN
* Private IP assignments (to running instances)
* Public IP allocations (to a project)
* Public IP associations (to a private IP / running instance)
While copies of this state exist in many places (expressed in IPTables rule chains, DHCP hosts files, etc.), the controllers rely only on the distributed "fact engine" for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc.); however, individual nodes also update state, e.g. when running instances crash.
The Public Traffic Path
-----------------------
Public Traffic::

             (PUBLIC INTERNET)
                    |
                  <NAT>   <-- [RoutingNode]
                    |
   [AddressingNode] -->  |
                      ( VLAN )
                    |   <-- [BridgingNode]
                    |
            <RUNNING INSTANCE>
The RoutingNode is currently implemented using IPTables rules, which implement both NATing of public IP addresses and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NATing within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.
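A hedged sketch of the kind of rules involved (the real chains nova builds are more elaborate)::

  # 1:1 NAT between a public address and an instance's private address,
  # in the style the RoutingNode implements with IPTables
  iptables -t nat -A PREROUTING  -d <public_ip>  -j DNAT --to-destination <private_ip>
  iptables -t nat -A POSTROUTING -s <private_ip> -j SNAT --to-source <public_ip>
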
Similarly, the AddressingNode currently manages running dnsmasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy a la Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details.)


@@ -1,34 +0,0 @@
port 1194
proto udp
dev tap0
# attach/detach the tap device to the project bridge via the hook scripts
up "/etc/openvpn/up.sh br0"
down "/etc/openvpn/down.sh br0"
persist-key
persist-tun
# per-project CA and server credentials generated by nova
ca ca.crt
cert server.crt
key server.key  # This file should be kept secret
dh dh1024.pem
ifconfig-pool-persist ipp.txt
# VPN_IP and the DHCP_* values are placeholders filled in per network
server-bridge VPN_IP DHCP_SUBNET DHCP_LOWER DHCP_UPPER
client-to-client
keepalive 10 120
comp-lzo
max-clients 1
user nobody
group nogroup
status openvpn-status.log
verb 3
mute 20


@@ -1,7 +0,0 @@
#!/bin/sh
# OpenVPN "up" hook: set the MTU on the tap device, bring it up in
# promiscuous mode, and attach it to the bridge. $1 is the bridge name
# baked into server.conf; $2 and $3 are the device and MTU OpenVPN appends.
BR=$1
DEV=$2
MTU=$3
/sbin/ifconfig $DEV mtu $MTU promisc up
/usr/sbin/brctl addif $BR $DEV

(Six binary files removed, not shown; the three removed images were 53 KiB, 40 KiB, and 57 KiB.)