diff --git a/doc/source/conf.py b/doc/source/conf.py
index 7fb1a7c83..40e564bd3 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -90,7 +90,6 @@ version = version_info.version_string()
 # List of documents that shouldn't be included in the build.
 unused_docs = [
     'api_ext/rst_extension_template',
-    'vmwareapi_readme',
     'installer',
 ]
diff --git a/doc/source/devref/down.sh b/doc/source/devref/down.sh
deleted file mode 100644
index 5c1888870..000000000
--- a/doc/source/devref/down.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/sh
-
-BR=$1
-DEV=$2
-
-/usr/sbin/brctl delif $BR $DEV
-/sbin/ifconfig $DEV down
diff --git a/doc/source/devref/filter_scheduler.rst b/doc/source/devref/filter_scheduler.rst
deleted file mode 100644
index ef98e3b3b..000000000
--- a/doc/source/devref/filter_scheduler.rst
+++ /dev/null
@@ -1,258 +0,0 @@
-Filter Scheduler
-================
-
-The **Filter Scheduler** uses `filtering` and `weighting` to make informed
-decisions about where a new instance should be created. This Scheduler works
-only with Compute Nodes.
-
-Filtering
----------
-
-.. image:: /images/filteringWorkflow1.png
-
-The Filter Scheduler first builds a dictionary of unfiltered hosts, then
-filters them using the filter properties, and finally chooses hosts for the
-requested number of instances (each time it picks the host with the lowest
-cost and appends it to the list of selected hosts).
-
-If no candidate can be found for the next instance, there are no more
-appropriate hosts available locally.
-
-Both `filtering` and `weighting` are quite flexible in the Filter Scheduler:
-it ships with many filtering strategies, and you can also implement `your
-own filtering algorithm`.
-
-There are some standard filter classes to use (:mod:`cinder.scheduler.filters`):
-
-* |AllHostsFilter| - frankly speaking, this filter does no operation.
-  It returns all the available hosts.
-* |AvailabilityZoneFilter| - filters hosts by availability zone. It returns
-  hosts whose availability zone matches the one requested in the instance
-  properties.
-* |ComputeFilter| - checks that the capabilities provided by the compute
-  service satisfy the extra specifications associated with the instance
-  type. It returns a list of hosts that can create the requested instance
-  type.
-* |CoreFilter| - filters based on CPU core utilization. It approves a host
-  if it has a sufficient number of CPU cores.
-* |IsolatedHostsFilter| - filters based on the "image_isolated" and
-  "host_isolated" flags.
-* |JsonFilter| - allows a simple JSON-based grammar for selecting hosts.
-* |RamFilter| - filters hosts by RAM, returning only hosts with enough
-  available RAM.
-* |SimpleCIDRAffinityFilter| - places a new instance on a host within the
-  same IP block.
-* |DifferentHostFilter| - places the instance on a host different from those
-  used by a given set of instances.
-* |SameHostFilter| - places the instance on the same host as another
-  instance from a given set of instances.
-
-Now let's look at these standard filter classes in more detail. The simplest
-ones, such as |AllHostsFilter|, |CoreFilter| and |RamFilter|, can be
-understood directly from the code. For example, |RamFilter| is implemented
-as follows:
-
-::
-
-    class RamFilter(filters.BaseHostFilter):
-        """Ram Filter with over subscription flag"""
-
-        def host_passes(self, host_state, filter_properties):
-            """Only return hosts with sufficient available RAM."""
-            instance_type = filter_properties.get('instance_type')
-            requested_ram = instance_type['memory_mb']
-            free_ram_mb = host_state.free_ram_mb
-            return free_ram_mb * FLAGS.ram_allocation_ratio >= requested_ram
-
-Here `ram_allocation_ratio` is the virtual-to-physical RAM allocation ratio
-(1.5 by default). Nice and simple.
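To make the surrounding machinery concrete, here is a minimal, self-contained sketch of how a `host_passes`-style filter is applied across candidate hosts. This is not the actual Cinder code: `FakeHostState`, `SimpleRamFilter` and the host list are made up for illustration, and the allocation ratio is inlined instead of coming from `FLAGS`.

```python
class FakeHostState:
    """Stand-in for a scheduler HostState object (hypothetical)."""
    def __init__(self, host, free_ram_mb):
        self.host = host
        self.free_ram_mb = free_ram_mb


class SimpleRamFilter:
    """Illustrative filter with the same shape as RamFilter."""
    ram_allocation_ratio = 1.5  # virtual-to-physical RAM ratio (default 1.5)

    def host_passes(self, host_state, filter_properties):
        requested_ram = filter_properties['instance_type']['memory_mb']
        # Over-subscription: physical free RAM scaled by the allocation ratio
        return host_state.free_ram_mb * self.ram_allocation_ratio >= requested_ram


hosts = [FakeHostState('host1', 512), FakeHostState('host2', 4096)]
props = {'instance_type': {'memory_mb': 2048}}

f = SimpleRamFilter()
passing = [h.host for h in hosts if f.host_passes(h, props)]
print(passing)  # ['host2']  (host1: 512 * 1.5 = 768 < 2048)
```

The scheduler effectively does the same thing: it runs each configured filter's `host_passes` over the current host list and keeps only the hosts that pass.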
-
-The next standard filter to describe, |AvailabilityZoneFilter|, is not
-difficult either. It compares the availability zone of the compute node with
-the availability zone from the properties of the request. Each compute
-service has its own availability zone, so deployment engineers can run the
-scheduler with availability-zone support and configure a zone on each
-compute host. The class's `host_passes` method returns `True` if the
-availability zone mentioned in the request matches that of the current
-compute host.
-
-|ComputeFilter| checks whether a host can create the given `instance_type`.
-Note that instance types describe the compute, memory and storage capacity
-of compute nodes: a list of characteristics such as the number of vCPUs, the
-amount of RAM, and so on. |ComputeFilter| therefore looks at the host's
-capabilities (a host lacking the requested specifications cannot be chosen
-to create the instance) and checks that the host's service is up, based on
-its last heartbeat. Finally, it verifies that the host satisfies any `extra
-specifications` associated with the instance type (if there are no such
-extra specifications, every host satisfies them).
-
-Next comes |IsolatedHostsFilter|. Some hosts can be reserved for specific
-images; such hosts are called **isolated**, and the images allowed to run on
-them are also called isolated. This filter checks that the `image_isolated`
-flag in the instance specification matches the host's flag.
-
-|DifferentHostFilter| - its `host_passes` method returns `True` if the host
-on which the instance is to be placed differs from all the hosts used by a
-given set of instances.
-
-|SameHostFilter| does the opposite of |DifferentHostFilter|: its
-`host_passes` returns `True` if the host we want to place the instance on is
-one of the hosts used by the given set of instances.
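The affinity checks above can be sketched in a few lines. This is a hypothetical simplification, not the real filter code: in Cinder the filters read instance UUIDs from `scheduler_hints` and then look up which hosts those instances run on, whereas here that lookup result is passed in directly.

```python
def different_host_passes(candidate_host, hosts_of_hinted_instances):
    """DifferentHostFilter-style check: the candidate must not be running
    any of the instances named in the scheduler hint."""
    return candidate_host not in hosts_of_hinted_instances


def same_host_passes(candidate_host, hosts_of_hinted_instances):
    """SameHostFilter-style check: the exact opposite condition."""
    return candidate_host in hosts_of_hinted_instances


# Suppose the instances from the hint currently run on these hosts:
used = {'host1', 'host3'}

print(different_host_passes('host2', used))  # True
print(same_host_passes('host2', used))       # False
print(same_host_passes('host1', used))       # True
```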
-
-|SimpleCIDRAffinityFilter| looks at the subnet mask and checks whether the
-network address of the current host falls within the same subnetwork as the
-one specified in the request.
-
-|JsonFilter| - this filter lets you write complex queries over host
-capabilities, based on a simple JSON-like syntax. The following comparison
-operators are available for host state properties: '=', '<', '>', 'in',
-'<=', '>=', and they can be combined with the logical operators 'not', 'or'
-and 'and'. For example, here is a query you can find in the tests:
-
-::
-
-    ['and',
-        ['>=', '$free_ram_mb', 1024],
-        ['>=', '$free_disk_mb', 200 * 1024]
-    ]
-
-This query selects all hosts with at least 1024 MB of free RAM and, at the
-same time, at least 200 GB of free disk space.
-
-Many filters use data from `scheduler_hints`, which the user supplies at the
-moment the new server is created. The only exception to this rule is
-|JsonFilter|, which takes its data in a rather unintuitive way.
-
-To use filters you specify two settings:
-
-* `scheduler_available_filters` - lists the available filters.
-* `scheduler_default_filters` - lists the filters used by default, chosen
-  from the available ones.
-
-The Host Manager sets these flags in `cinder.conf` to the following default
-values:
-
-::
-
-    --scheduler_available_filters=cinder.scheduler.filters.standard_filters
-    --scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter
-
-These two lines make all the filters in `cinder.scheduler.filters`
-available, with |RamFilter|, |ComputeFilter| and |AvailabilityZoneFilter|
-used by default.
-
-To create **your own filter** you just need to inherit from |BaseHostFilter|
-and implement one method, `host_passes`, which should return `True` if the
-host passes the filter.
-It takes `host_state` (which describes the host) and the
-`filter_properties` dictionary as parameters.
-
-In the end, the `cinder.conf` file should contain lines like these:
-
-::
-
-    --scheduler_driver=cinder.scheduler.distributed_scheduler.FilterScheduler
-    --scheduler_available_filters=cinder.scheduler.filters.standard_filters
-    --scheduler_available_filters=myfilter.MyFilter
-    --scheduler_default_filters=RamFilter,ComputeFilter,MyFilter
-
-As you can see, the `scheduler_driver` flag is set to the `FilterScheduler`,
-the available filters can be specified more than once, and the list of
-default filters should contain only class names, not full paths.
-
-Costs and weights
------------------
-
-The Filter Scheduler uses so-called **weights** and **costs** during its
-work.
-
-`Costs` are computed numbers expressing a host's measure of fitness to be
-chosen as a result of the request. Naturally, costs are computed from the
-host's characteristics compared with the characteristics in the request:
-placing an instance on an inappropriate host (for example, placing a really
-simple, plain instance on a high-performance host) would have a high cost,
-while placing it on an appropriate host would have a low one.
-
-So let's find out how all this computation happens.
-
-Before weighting, the Filter Scheduler creates a list of tuples pairing each
-cost function with its weight. These functions can come from a cache if this
-operation has been done before (the cache is keyed by the node's `topic`;
-since the Filter Scheduler works only with Compute Nodes, the topic here is
-"`compute`"). If no cost functions are cached for "compute", the Filter
-Scheduler tries to load them from `cinder.conf`. The weight in each tuple is
-the weight of the matching cost function, and it too can be read from
-`cinder.conf`. After that the Scheduler weighs the hosts using the selected
-cost functions.
-It does this with the `weighted_sum` method, whose parameters are:
-
-* `weighted_fns` - list of cost functions paired with their weights;
-* `host_states` - hosts to be weighed;
-* `weighing_properties` - dictionary of values that can influence weights.
-
-This method first creates a grid of function results, `scores` (it simply
-evaluates each function on `host_state` and `weighing_properties`), with one
-row per host and one column per function. The next step is to multiply the
-value in each cell of the grid by the weight of the corresponding cost
-function. The final step is to sum the values in each row, giving the weight
-of the host described by that row. The method returns the host with the
-lowest weight - the best one.
-
-Concerning cost functions, the `compute_fill_first_cost_fn` function is used
-by default; it simply returns the host's free RAM:
-
-::
-
-    def compute_fill_first_cost_fn(host_state, weighing_properties):
-        """More free ram = higher weight. So servers with less free ram
-        will be preferred."""
-        return host_state.free_ram_mb
-
-You can implement your own cost function for whichever host capabilities you
-care about. Using different cost functions (and there can be many in use at
-the same time) makes the choice of the next host for a new instance
-flexible.
-
-Cost functions are configured in `cinder.conf` with the
-`least_cost_functions` flag (there can be more than one, separated by
-commas). By default this line looks like this:
-
-::
-
-    --least_cost_functions=cinder.scheduler.least_cost.compute_fill_first_cost_fn
-
-The weights of the cost functions are also set in `cinder.conf`, in a flag
-named **function_name_weight**.
-
-For the default cost function this flag is
-`compute_fill_first_cost_fn_weight`, and by default it is 1.0.
-
-::
-
-    --compute_fill_first_cost_fn_weight=1.0
-
-The Filter Scheduler finds its local list of acceptable hosts by repeated
-filtering and weighing. Each time it chooses a host, it virtually consumes
-resources on it, so subsequent selections can adjust accordingly. This is
-useful when a customer asks for a large number of instances, because the
-weights are computed for each instance requested.
-
-.. image:: /images/filteringWorkflow2.png
-
-In the end the Filter Scheduler sorts the selected hosts by their weight and
-provisions instances on them.
-
-P.S.: you can find more examples of using the Filter Scheduler and the
-standard filters in :mod:`cinder.tests.scheduler`.
-
-.. |AllHostsFilter| replace:: :class:`AllHostsFilter `
-.. |AvailabilityZoneFilter| replace:: :class:`AvailabilityZoneFilter `
-.. |BaseHostFilter| replace:: :class:`BaseHostFilter `
-.. |ComputeFilter| replace:: :class:`ComputeFilter `
-.. |CoreFilter| replace:: :class:`CoreFilter `
-.. |IsolatedHostsFilter| replace:: :class:`IsolatedHostsFilter `
-.. |JsonFilter| replace:: :class:`JsonFilter `
-.. |RamFilter| replace:: :class:`RamFilter `
-.. |SimpleCIDRAffinityFilter| replace:: :class:`SimpleCIDRAffinityFilter `
-.. |DifferentHostFilter| replace:: :class:`DifferentHostFilter `
-.. |SameHostFilter| replace:: :class:`SameHostFilter `
diff --git a/doc/source/devref/rc.local b/doc/source/devref/rc.local
deleted file mode 100644
index d1ccf0cbc..000000000
--- a/doc/source/devref/rc.local
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/bin/sh -e
-#
-# rc.local
-#
-# This script is executed at the end of each multiuser runlevel.
-# Make sure that the script will "exit 0" on success or any other
-# value on error.
-#
-# In order to enable or disable this script just change the execution
-# bits.
-#
-# By default this script does nothing.
-####### These lines go at the end of /etc/rc.local #######
-. /lib/lsb/init-functions
-
-echo Downloading payload from userdata
-wget http://169.254.169.254/latest/user-data -O /tmp/payload.b64
-echo Decrypting base64 payload
-openssl enc -d -base64 -in /tmp/payload.b64 -out /tmp/payload.zip
-
-mkdir -p /tmp/payload
-echo Unzipping payload file
-unzip -o /tmp/payload.zip -d /tmp/payload/
-
-# if the autorun.sh script exists, run it
-if [ -e /tmp/payload/autorun.sh ]; then
-    echo Running autorun.sh
-    cd /tmp/payload
-    sh /tmp/payload/autorun.sh
-
-else
-    echo rc.local : No autorun script to run
-fi
-
-
-exit 0
diff --git a/doc/source/devref/server.conf.template b/doc/source/devref/server.conf.template
deleted file mode 100644
index feee3185b..000000000
--- a/doc/source/devref/server.conf.template
+++ /dev/null
@@ -1,34 +0,0 @@
-port 1194
-proto udp
-dev tap0
-up "/etc/openvpn/up.sh br0"
-down "/etc/openvpn/down.sh br0"
-
-persist-key
-persist-tun
-
-ca ca.crt
-cert server.crt
-key server.key  # This file should be kept secret
-
-dh dh1024.pem
-ifconfig-pool-persist ipp.txt
-
-server-bridge VPN_IP DHCP_SUBNET DHCP_LOWER DHCP_UPPER
-
-client-to-client
-keepalive 10 120
-comp-lzo
-
-max-clients 1
-
-user nobody
-group nogroup
-
-persist-key
-persist-tun
-
-status openvpn-status.log
-
-verb 3
-mute 20
\ No newline at end of file
diff --git a/doc/source/devref/up.sh b/doc/source/devref/up.sh
deleted file mode 100644
index 073a58e15..000000000
--- a/doc/source/devref/up.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/sh
-
-BR=$1
-DEV=$2
-MTU=$3
-/sbin/ifconfig $DEV mtu $MTU promisc up
-/usr/sbin/brctl addif $BR $DEV
diff --git a/doc/source/image_src/multinic_1.odg b/doc/source/image_src/multinic_1.odg
deleted file mode 100644
index bbd76b10e..000000000
Binary files a/doc/source/image_src/multinic_1.odg and /dev/null differ
diff --git a/doc/source/image_src/multinic_2.odg b/doc/source/image_src/multinic_2.odg
deleted file mode 100644
index 1f1e4251a..000000000
Binary files a/doc/source/image_src/multinic_2.odg and /dev/null differ
diff --git
a/doc/source/image_src/multinic_3.odg b/doc/source/image_src/multinic_3.odg
deleted file mode 100644
index d29e16353..000000000
Binary files a/doc/source/image_src/multinic_3.odg and /dev/null differ
diff --git a/doc/source/images/NOVA_ARCH.png b/doc/source/images/NOVA_ARCH.png
deleted file mode 100644
index 617ec4211..000000000
Binary files a/doc/source/images/NOVA_ARCH.png and /dev/null differ
diff --git a/doc/source/images/NOVA_ARCH.svg b/doc/source/images/NOVA_ARCH.svg
deleted file mode 100644
index 3175eeeb9..000000000
--- a/doc/source/images/NOVA_ARCH.svg
+++ /dev/null
@@ -1,5854 +0,0 @@
[deleted SVG source: architecture diagram with the labels "AMQP Messaging (RabbitMQ)", "cinder-api (Public API server)", "cinder-compute (uses libvirt or XenAPI to manage guests)", "cinder-network (manages cloud networks, vlans and bridges)", "cinder-volume (disk images for v. guests, filesystem or AoE)", "cinder-objectstore (implements S3-like api using Files or (later) Swift)", "cinder-scheduler (plans where to place new guests)", "User authorisation (SQL, LDAP or fake LDAP using ReDIS)", cloud users on the admin network and Internet end users on the public network]
diff --git a/doc/source/images/NOVA_ARCH_200dpi.png b/doc/source/images/NOVA_ARCH_200dpi.png
deleted file mode 100644
index 9dde9aa92..000000000
Binary files a/doc/source/images/NOVA_ARCH_200dpi.png and /dev/null differ
diff --git a/doc/source/images/NOVA_ARCH_66dpi.png b/doc/source/images/NOVA_ARCH_66dpi.png
deleted file mode 100644
index 1ca7f3d3b..000000000
Binary files a/doc/source/images/NOVA_ARCH_66dpi.png and /dev/null differ
diff --git a/doc/source/images/NOVA_clouds_A_B.png b/doc/source/images/NOVA_clouds_A_B.png
deleted file mode 100644
index 439967cb1..000000000
Binary files a/doc/source/images/NOVA_clouds_A_B.png and /dev/null differ
diff --git a/doc/source/images/NOVA_clouds_A_B.svg b/doc/source/images/NOVA_clouds_A_B.svg
deleted file mode 100644
index 6e1c50456..000000000
--- a/doc/source/images/NOVA_clouds_A_B.svg
+++ /dev/null
@@ -1,16342 +0,0 @@
[deleted SVG source by David Pravec <alekibango@danix.org>, released under terms of Apache License: deployment diagrams "A) Cinder running on 1 Hardware node" and "B) Cloud of 2-4 servers in one cluster. Self-contained storage solution. Typical smallest private cloud"]
diff --git a/doc/source/images/NOVA_clouds_C1_C2.svg b/doc/source/images/NOVA_clouds_C1_C2.svg
deleted file mode 100644
index 5b10faf7c..000000000
--- a/doc/source/images/NOVA_clouds_C1_C2.svg
+++ /dev/null
@@ -1,9763 +0,0 @@
[deleted SVG source by David Pravec <alekibango@danix.org>, released under terms of Apache License: deployment diagram "C) More computers, but still only 1 cluster, not distributed geographically", with variants "C1) Nodes with disks" and "C2) using diskless nodes"; labels include "HA Database", "HA storage (SAN, SheepDog ?)", "Diskless servers running virtual guests", "Servers running virtual guests", "FW/VPN giving access to cloud administrators", "PXE Boot server", "Monitoring", "TODO: image store ? multicluster..."]
diff --git a/doc/source/images/NOVA_clouds_C1_C2.svg.png b/doc/source/images/NOVA_clouds_C1_C2.svg.png
deleted file mode 100644
index f7526bd1f..000000000
Binary files a/doc/source/images/NOVA_clouds_C1_C2.svg.png and /dev/null differ
diff --git a/doc/source/images/Novadiagram.png b/doc/source/images/Novadiagram.png
deleted file mode 100644
index 731adab95..000000000
Binary files a/doc/source/images/Novadiagram.png and /dev/null differ
diff --git a/doc/source/images/base_scheduler.png b/doc/source/images/base_scheduler.png
deleted file mode 100644
index 75d029338..000000000
Binary files a/doc/source/images/base_scheduler.png and /dev/null differ
diff --git a/doc/source/images/cloudpipe.png b/doc/source/images/cloudpipe.png
deleted file mode 100644
index ffdd181f2..000000000
Binary files a/doc/source/images/cloudpipe.png and /dev/null differ
diff --git a/doc/source/images/fabric.png b/doc/source/images/fabric.png
deleted file mode 100644
index a5137e377..000000000
Binary files a/doc/source/images/fabric.png and /dev/null differ
diff --git a/doc/source/images/filteringWorkflow1.png b/doc/source/images/filteringWorkflow1.png
deleted file mode 100644
index 58da979d7..000000000
Binary files a/doc/source/images/filteringWorkflow1.png and /dev/null differ
diff --git a/doc/source/images/filteringWorkflow2.png b/doc/source/images/filteringWorkflow2.png
deleted file mode 100644
index e0fe66acf..000000000
Binary files a/doc/source/images/filteringWorkflow2.png and /dev/null differ
diff --git a/doc/source/images/multinic_dhcp.png b/doc/source/images/multinic_dhcp.png
deleted file mode 100644
index bce05b595..000000000
Binary files a/doc/source/images/multinic_dhcp.png and /dev/null differ
diff --git a/doc/source/images/multinic_flat.png b/doc/source/images/multinic_flat.png
deleted file mode 100644
index e055e60e8..000000000
Binary files a/doc/source/images/multinic_flat.png and /dev/null differ
diff --git a/doc/source/images/multinic_vlan.png b/doc/source/images/multinic_vlan.png
deleted file mode 100644
index 9b0e4fd63..000000000
Binary files a/doc/source/images/multinic_vlan.png and /dev/null differ
diff --git a/doc/source/images/nova.compute.api.create.png b/doc/source/images/nova.compute.api.create.png
deleted file mode 100755
index 999f39ed9..000000000
Binary files a/doc/source/images/nova.compute.api.create.png and /dev/null differ
diff --git a/doc/source/images/novascreens.png b/doc/source/images/novascreens.png
deleted file mode 100644
index 0fe3279cf..000000000
Binary files a/doc/source/images/novascreens.png and /dev/null differ
diff --git a/doc/source/images/novashvirtually.png b/doc/source/images/novashvirtually.png
deleted file mode 100644
index 02c7e767c..000000000
Binary files a/doc/source/images/novashvirtually.png and /dev/null differ
diff --git a/doc/source/images/vmwareapi_blockdiagram.jpg b/doc/source/images/vmwareapi_blockdiagram.jpg
deleted file mode 100644
index 1ae1fc8e0..000000000
Binary files a/doc/source/images/vmwareapi_blockdiagram.jpg and /dev/null differ
diff --git a/doc/source/images/zone_aware_overview.png b/doc/source/images/zone_aware_overview.png
deleted file mode 100755
index 470e78138..000000000
Binary files a/doc/source/images/zone_aware_overview.png and /dev/null differ
diff --git a/doc/source/images/zone_aware_scheduler.png b/doc/source/images/zone_aware_scheduler.png
deleted file mode 100644
index a144e1212..000000000
Binary files a/doc/source/images/zone_aware_scheduler.png and /dev/null differ
diff --git a/doc/source/images/zone_overview.png b/doc/source/images/zone_overview.png
deleted file mode 100755
index cc891df0a..000000000
Binary files a/doc/source/images/zone_overview.png and /dev/null differ