diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml index 4be0aea213..2ec4161a21 100644 --- a/doc/admin-guide-cloud/ch_compute.xml +++ b/doc/admin-guide-cloud/ch_compute.xml @@ -392,7 +392,7 @@ configuration are discussed in the OpenStack End User - Guide. + Guide. Volumes do not provide concurrent access from multiple instances. For that you need either a traditional network file system like NFS or CIFS @@ -413,7 +413,7 @@ >RESTful API that allows users to query VM image metadata and retrieve the actual image with HTTP requests. You can also use the glance command-line tool, or the Python API to accomplish the same @@ -489,13 +489,14 @@ Full details for nova and other CLI tools are provided in the OpenStack CLI Guide. What follows is + xlink:href="http://docs.openstack.org/user-guide/content/index.html" + >OpenStack End User Guide. What follows is the minimal introduction required to follow the CLI example in this chapter. In the case of a - conflict the OpenStack CLI Guide should be + conflict the + OpenStack End User Guide should be considered authoritative (and a bug filed against this section). To function, the @@ -684,7 +685,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT >php-opencloud is a PHP SDK that should work with most OpenStack-based cloud deployments and - the Rackspace public cloud. + the Rackspace public cloud. @@ -836,9 +837,10 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT >nova-network for networking between VMs or use the Networking service (neutron) for networking. To configure Compute networking options - with Neutron, see Network Administration Guide. + with Neutron, see the networking chapter of the + Cloud Administrator Guide. For each VM instance, Compute assigns to it a private IP address. (Currently, Compute with received by the API server and relayed to the originating user. Database entries are queried, added, or removed as necessary throughout the - process. + process. Compute worker @@ -2020,10 +2022,10 @@ local0.error @@172.20.1.43:1024
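As a quick illustration of the Image service queries described earlier in this chapter, the same metadata lookups can be performed from the command line. This is only a sketch: it assumes the python-glanceclient CLI is installed and the usual OS_* credentials are exported, and the image ID is a placeholder.

# List the images registered with the Image service
$ glance image-list
# Show the stored metadata for a single image before booting from it
$ glance image-show IMAGE_ID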
- Migration - + Before starting migrations, review the Configuring Migrations section in the + Configuration Reference Guide. Migration provides a scheme to migrate running instances from one OpenStack Compute server to another OpenStack Compute server. @@ -2424,7 +2426,7 @@ find / -gid 120 -exec chgrp nova {} \; command was invoked, so the files for the instances remain on the compute node. The plan is to perform the following tasks, in - that exact order. + that exact order. Any extra step would be dangerous at this stage : @@ -2442,12 +2444,12 @@ find / -gid 120 -exec chgrp nova {} \; Restart the instances. In other words, go from a shutdown to running - state. + state. After the restart, you can reattach the volumes to their respective - instances. + instances. That step, which is not a mandatory diff --git a/doc/common/ch_getstart.xml b/doc/common/ch_getstart.xml index fe63c6f961..dd1612620c 100644 --- a/doc/common/ch_getstart.xml +++ b/doc/common/ch_getstart.xml @@ -91,7 +91,7 @@ xlink:href="http://docs.openstack.org/developer/glance/" >Glance Provides a registry of virtual machine images. Compute - Service uses it to provision instances. + Service uses it to provision instances. nova-network installations. For details, see Metadata service. + xlink:href="http://docs.openstack.org/admin-guide-cloud/content/section_metadata-service.html" + >Metadata service in the Cloud Administrator Guide. @@ -345,7 +345,7 @@ either type can be run against a single nova-consoleauth service in a cluster configuration. For information, see About nova-consoleauth. diff --git a/doc/common/section_about-dashboard.xml b/doc/common/section_about-dashboard.xml index 02218a47f8..e7baa9b27e 100644 --- a/doc/common/section_about-dashboard.xml +++ b/doc/common/section_about-dashboard.xml @@ -29,13 +29,14 @@ After you install the dashboard, you can complete the following tasks: - To customize your dashboard, see How To Custom Brand The OpenStack Dashboard (Horizon). + To customize your dashboard, see Customize the dashboard section. To set up session storage for the dashboard, - see OpenStack Dashboard Session Storage. + see Set up session storage for the dashboard section. To deploy the diff --git a/doc/common/section_compute-configure-vnc.xml b/doc/common/section_compute-configure-vnc.xml index ed5dc235a5..4e8ae58485 100644 --- a/doc/common/section_compute-configure-vnc.xml +++ b/doc/common/section_compute-configure-vnc.xml @@ -115,7 +115,7 @@ To support live migration, you cannot specify a specific IP address for vncserver_listen, because that IP address does not exist on the destination diff --git a/doc/common/section_nova_cli_evacuate.xml b/doc/common/section_nova_cli_evacuate.xml index 9b690075ea..10499e6ecd 100644 --- a/doc/common/section_nova_cli_evacuate.xml +++ b/doc/common/section_nova_cli_evacuate.xml @@ -42,8 +42,8 @@ To preserve the user disk data on the evacuated server, deploy OpenStack Compute with shared filesystem. To configure your system, see Configure migrations guide. In this + xlink:href="http://docs.openstack.org/trunk/config-reference/content/configuring-openstack-compute-basics.html#section_configuring-compute-migrations" + >Configure migrations section. In this example, the password remains unchanged. 
$ nova evacuate evacuated_server_name host_b --on-shared-storage diff --git a/doc/common/section_xen-install.xml b/doc/common/section_xen-install.xml index e6b8789a96..7d6474acc4 100644 --- a/doc/common/section_xen-install.xml +++ b/doc/common/section_xen-install.xml @@ -76,10 +76,10 @@ For resize and migrate functionality, please perform the changes described in the Configuring Resize section of the - OpenStack Compute Administration - Manual. + OpenStack Configuration Reference. + Install the VIF isolation rules to help diff --git a/doc/high-availability-guide/aa-network.txt b/doc/high-availability-guide/aa-network.txt index cf30ae6c12..3d0c69c185 100644 --- a/doc/high-availability-guide/aa-network.txt +++ b/doc/high-availability-guide/aa-network.txt @@ -20,7 +20,7 @@ highly available. Since the Grizzly release, OpenStack Networking service has a scheduler which allows to run multiple agents accross nodes. Also, the DHCP agent can be natively -highly available. Please follow the http://docs.openstack.org/trunk/openstack-network/admin/content/app_demo_multi_dhcp_agents.html[OpenStack Networking guide] for +highly available. Please follow the http://docs.openstack.org/trunk/config-reference/content/app_demo_multi_dhcp_agents.html[OpenStack Configuration Reference] for further details. ==== Running Neutron L3 Agent diff --git a/doc/high-availability-guide/ap-cinder-api.txt b/doc/high-availability-guide/ap-cinder-api.txt index 82a654344a..e167d8ddf8 100644 --- a/doc/high-availability-guide/ap-cinder-api.txt +++ b/doc/high-availability-guide/ap-cinder-api.txt @@ -8,7 +8,7 @@ Making the Cinder API service highly available in active / passive mode involves * managing Cinder API daemon with the Pacemaker cluster manager, * configuring OpenStack services to use this IP address. -NOTE: Here is the http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html[documentation] for installing Cinder service. +NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/cinder-install.html[documentation] for installing Cinder service. ===== Adding Cinder API resource to Pacemaker diff --git a/doc/high-availability-guide/ap-glance-api.txt b/doc/high-availability-guide/ap-glance-api.txt index 4287eb856d..c1240bd71e 100644 --- a/doc/high-availability-guide/ap-glance-api.txt +++ b/doc/high-availability-guide/ap-glance-api.txt @@ -8,7 +8,7 @@ Making the OpenStack Image API service highly available in active / passive mode * managing OpenStack Image API daemon with the Pacemaker cluster manager, * configuring OpenStack services to use this IP address. -NOTE: Here is the http://docs.openstack.org/trunk/openstack-compute/install/apt/content/install-glance.html[documentation] for installing OpenStack Image API service. +NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-image.html[documentation] for installing OpenStack Image API service. ===== Adding OpenStack Image API resource to Pacemaker diff --git a/doc/high-availability-guide/ap-keystone.txt b/doc/high-availability-guide/ap-keystone.txt index 495b1cd014..ef4ca63f9f 100644 --- a/doc/high-availability-guide/ap-keystone.txt +++ b/doc/high-availability-guide/ap-keystone.txt @@ -8,7 +8,7 @@ Making the OpenStack Identity service highly available in active / passive mode * managing OpenStack Identity daemon with the Pacemaker cluster manager, * configuring OpenStack services to use this IP address. 
-NOTE: Here is the http://docs.openstack.org/trunk/openstack-compute/install/apt/content/ch_installing-openstack-identity-service.html[documentation] for installing OpenStack Identity service. +NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-identity-service.html[documentation] for installing OpenStack Identity service. ===== Adding OpenStack Identity resource to Pacemaker diff --git a/doc/high-availability-guide/ap-network-controller.txt b/doc/high-availability-guide/ap-network-controller.txt index 101941038c..9419e36297 100644 --- a/doc/high-availability-guide/ap-network-controller.txt +++ b/doc/high-availability-guide/ap-network-controller.txt @@ -13,7 +13,7 @@ The Neutron L3 agent provides L3/NAT forwarding to ensure external network acces for VMs on tenant networks. High Availability for the L3 agent is achieved by adopting Pacemaker. -NOTE: Here is the http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_l3_agent.html[documentation] for installing Neutron L3 Agent. +NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html[documentation] for installing Neutron L3 Agent. ===== Adding Neutron L3 Agent resource to Pacemaker @@ -55,7 +55,7 @@ Neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by default). High Availability for the DHCP agent is achieved by adopting Pacemaker. -NOTE: Here is the http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_dhcp_agent.html[documentation] for installing Neutron DHCP Agent. +NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_dhcp_agent.html[documentation] for installing Neutron DHCP Agent. ===== Adding Neutron DHCP Agent resource to Pacemaker @@ -95,7 +95,7 @@ Neutron Metadata agent allows Nova API Metadata to be reachable by VMs on tenant networks. High Availability for the Metadata agent is achieved by adopting Pacemaker. -NOTE: Here is the http://docs.openstack.org/trunk/openstack-network/admin/content/metadata_agent_options.html[documentation] for installing Neutron Metadata Agent. +NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/networking-options-metadata.html[documentation] for installing Neutron Metadata Agent. ===== Adding Neutron Metadata Agent resource to Pacemaker diff --git a/doc/high-availability-guide/ap-neutron-server.txt b/doc/high-availability-guide/ap-neutron-server.txt index 3a96c0a287..878e019e37 100644 --- a/doc/high-availability-guide/ap-neutron-server.txt +++ b/doc/high-availability-guide/ap-neutron-server.txt @@ -8,7 +8,7 @@ Making the OpenStack Networking Server service highly available in active / pass * managing OpenStack Networking API Server daemon with the Pacemaker cluster manager, * configuring OpenStack services to use this IP address. -NOTE: Here is the http://docs.openstack.org/trunk/openstack-network/admin/content/index.html[documentation] for installing OpenStack Networking service. +NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-networking.html[documentation] for installing OpenStack Networking service. 
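Each of the active/passive recipes above follows the same pattern: a virtual IP plus the API daemon, both managed as Pacemaker resources. The crm shell sketch below shows the shape of that configuration; it assumes the ocf:openstack:keystone resource agent from the separate openstack-resource-agents project is installed, and the address is a placeholder, so verify agent names and parameters against your installation.

# Virtual IP that clients and other OpenStack services will use
$ crm configure primitive p_ip_keystone ocf:heartbeat:IPaddr2 \
    params ip="192.168.42.103" cidr_netmask="24" op monitor interval="30s"
# The API daemon itself, assuming the ocf:openstack:keystone agent is present
$ crm configure primitive p_keystone ocf:openstack:keystone \
    op monitor interval="30s" timeout="30s"
# Keep the virtual IP and the daemon together on the active node
$ crm configure group g_keystone p_ip_keystone p_keystone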
===== Adding OpenStack Networking Server resource to Pacemaker diff --git a/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml b/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml index 634f89efbc..f1f1d4ccf2 100644 --- a/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml +++ b/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml @@ -122,7 +122,7 @@
References OpenStack End User Guide section command line clients overview - OpenStack End User Guide section OpenStack RC file + OpenStack End User Guide section Download and source the OpenStack RC file
diff --git a/doc/security-guide/ch024_authentication.xml b/doc/security-guide/ch024_authentication.xml index 06530392a7..dc7e0893fe 100644 --- a/doc/security-guide/ch024_authentication.xml +++ b/doc/security-guide/ch024_authentication.xml @@ -59,7 +59,7 @@
Service Authorization - As described in the OpenStack Compute Administration Guide, cloud administrators must define a user for each service, with a role of Admin. This service user account provides the service with the authorization to authenticate users. + As described in the OpenStack Cloud Administrator Guide, cloud administrators must define a user for each service, with a role of Admin. This service user account provides the service with the authorization to authenticate users. The Nova and Swift services can be configured to use either the "tempAuth" file or Keystone to store authentication information. The "tempAuth" solution MUST NOT be deployed in a production environment since it stores passwords in plain text. Keystone supports SSL client authentication, which may be enabled. SSL client authentication provides an additional authentication factor, in addition to the username and password, that makes user identification more reliable. It reduces the risk of unauthorized access when usernames and passwords may be compromised. However, issuing certificates to users adds administrative overhead and cost that may not be feasible in every deployment. NOTE: We recommend using SSL client authentication for authenticating services to Keystone.
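As a concrete sketch of the service-user pattern described above, the legacy keystone CLI of this era can create the per-service account and grant it the admin role; the user name, password, e-mail address, and tenant below are placeholders to adapt to your deployment.

# Create a dedicated account for the Compute service
$ keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
# Grant it the admin role within the service tenant
$ keystone user-role-add --user=nova --tenant=service --role=admin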
References - SPICE Console + SPICE Console Red Hat bug 913607 SPICE support in RDO Grizzly
diff --git a/doc/security-guide/ch034_tenant-secure-networking-best-practices.xml b/doc/security-guide/ch034_tenant-secure-networking-best-practices.xml index 83227f3771..ef20f88af0 100644 --- a/doc/security-guide/ch034_tenant-secure-networking-best-practices.xml +++ b/doc/security-guide/ch034_tenant-secure-networking-best-practices.xml @@ -8,7 +8,20 @@
Networking Resource Policy Engine - A policy engine and its configuration file, policy.json, within OpenStack Networking provides a method to provide finer grained authorization of users on tenant networking methods and objects. It is important that cloud architects and operators evaluate their design and use cases in providing users and tenants the ability to create, update, and destroy available network resources as it has a tangible effect on tenant network availability, network security, and overall OpenStack security. For a more detail explanation of OpenStack Networking policy definition, please refer to the Authentication and Authorization chapter in the OpenStack Networking Administration Guide. + A policy engine and its configuration file, + policy.json, within OpenStack Networking + provides a method for finer-grained authorization of + users on tenant networking methods and objects. It is important + that cloud architects and operators evaluate their design and + use cases in providing users and tenants the ability to create, + update, and destroy available network resources as it has a + tangible effect on tenant network availability, network + security, and overall OpenStack security. For a more detailed + explanation of OpenStack Networking policy definition, please + refer to the Authentication + and authorization section in the OpenStack + Cloud Administrator Guide.
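To make the review recommended in the next paragraph concrete: the rules live in the Networking policy file on the server node, and each rule maps an API action to the callers allowed to perform it. A brief sketch, assuming a packaged installation (the file path and the example rule are illustrative, so check the defaults shipped with your release):

# Review the default Networking authorization rules
$ less /etc/neutron/policy.json
# A rule of the form "create_network:shared": "rule:admin_only" restricts
# creation of shared networks to administrators; tighten or relax such
# rules to match your security posture.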
It is important to review the default networking resource policy and modify the policy appropriately for your security posture.
If your deployment of OpenStack provides multiple external access points into different security domains, it is important that you limit the tenant's ability to attach multiple vNICs to multiple external access points -- this would bridge these security domains and could lead to unforeseen security compromise. It is possible to mitigate this risk by utilizing the host aggregates functionality provided by OpenStack Compute or by splitting the tenant VMs into multiple tenant projects with different virtual network configurations.
diff --git a/doc/security-guide/ch055_security-services-for-instances.xml b/doc/security-guide/ch055_security-services-for-instances.xml index 6117f74244..f7cfb23851 100644 --- a/doc/security-guide/ch055_security-services-for-instances.xml +++ b/doc/security-guide/ch055_security-services-for-instances.xml @@ -27,7 +27,22 @@
Scheduling Instances to Nodes Before an instance is created, a host for the image instantiation must be selected. This selection is performed by the nova-scheduler, which determines how to dispatch compute and volume requests. - The default nova scheduler in Grizzly is the Filter Scheduler, although other schedulers exist (see the section Other Schedulers in the OpenStack Compute Administration Guide). The filter scheduler works in collaboration with 'filters' to decide where an instance should be started. This process of host selection allows administrators to fulfil many different security requirements. Depending on the cloud deployment type for example, one could choose to have tenant instances reside on the same hosts whenever possible if data isolation was a primary concern, conversely one could attempt to have instances for a tenant reside on as many different hosts as possible for availability or fault tolerance reasons. The following diagram demonstrates how the filter scheduler works: + The default nova scheduler in Grizzly is the Filter + Scheduler, although other schedulers exist (see the section + Scheduling + in the OpenStack Configuration + Reference). The filter scheduler works in + collaboration with 'filters' to decide where an instance should + be started. This process of host selection allows administrators + to fulfil many different security requirements. Depending on the + cloud deployment type, for example, one could choose to have + tenant instances reside on the same hosts whenever possible if + data isolation was a primary concern; conversely, one could + attempt to have instances for a tenant reside on as many + different hosts as possible for availability or fault tolerance + reasons. The following diagram demonstrates how the filter + scheduler works: @@ -36,12 +51,20 @@ Scheduler filters may be used to segregate customers or data, or even to discard machines of the cloud that cannot be attested as secure. This generally applies to all OpenStack projects offering a scheduler. When building a cloud, you may choose to implement scheduling filters for a variety of security-related purposes. - Below we highlight a few of the filters that may be useful in a security context, depending on your requirements, the full set of filter documentation is documented in the Scheduler Filters section of the OpenStack Compute Administration Guide + Below we highlight a few of the filters that may be useful in a security context, depending on your requirements; the full set of filters is documented in the Filter Scheduler section of the OpenStack Configuration Reference. Tenant Driven Whole Host Reservation There currently exists a blueprint for whole host reservation - This would allow a tenant to exclusively reserve hosts for only its instances, incurring extra costs.
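The filters that are actually consulted are chosen through the Compute scheduler configuration. The sketch below shows one way to pin an explicit filter list; the option and filter names reflect the Grizzly/Havana era and the openstack-config helper is assumed to be available, so confirm both against the Configuration Reference and your distribution.

# Restrict scheduling to an explicit, security-motivated filter list
$ openstack-config --set /etc/nova/nova.conf DEFAULT \
    scheduler_default_filters AggregateMultiTenancyIsolation,AvailabilityZoneFilter,RamFilter,ComputeFilter
# Restart the scheduler to pick up the change (service name varies by distribution)
$ service openstack-nova-scheduler restart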
Host Aggregates - While not a filter in themselves, host aggregates allow administrators to assign key-value pairs to groups of machines. This allows cloud administrators, not users, to partition up their compute host resources. Each node can have multiple aggregates (see the Host Aggregates section of the OpenStack Compute Administration Guide for more information on creating and managing aggregates). + While not filters themselves, host aggregates allow + administrators to assign key-value pairs to groups of + machines. This allows cloud administrators, not users, to + partition their compute host resources. Each node can have + multiple aggregates (see the Host + Aggregates section of the OpenStack + Configuration Reference for more information on + creating and managing aggregates).
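For readers who want to see that workflow end to end, here is a brief nova CLI sketch; the aggregate name, host name, metadata key, and IDs are placeholders, and filter_tenant_id is the key consumed by the AggregateMultiTenancyIsolation filter discussed next.

# Create an aggregate and note the ID that is returned
$ nova aggregate-create secure-hosts
# Add a compute host to it and tag it with metadata the scheduler can match
$ nova aggregate-add-host AGGREGATE_ID compute-01
$ nova aggregate-set-metadata AGGREGATE_ID filter_tenant_id=TENANT_ID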
AggregateMultiTenancyIsolation @@ -77,10 +100,10 @@ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS.gpg gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451 gpg --verify SHA256SUMS.gpg SHA256SUMSsha256sum -c SHA256SUMS 2>&1 | grep OK - The second option is to use the OpenStack guide for building images. In this case, you will want to follow your organizations OS hardening guidelines or those provided by a trusted third-party such as the RHEL6 STIG. + The second option is to use the OpenStack Virtual Machine Image Guide. In this case, you will want to follow your organization's OS hardening guidelines or those provided by a trusted third-party such as the RHEL6 STIG. The final option is to use an automated image builder. The following example uses the Oz image builder. The OpenStack community has recently created a newer tool worth investigating: disk-image-builder. We have not evaluated this tool from a security perspective. Example of RHEL 6 CCE-26976-1 which will help implement NIST 800-53 Section AC-19(d) in Oz. -   + <template> <name>centos64</name> <os> diff --git a/doc/training-guide/module001-ch011-block-storage.xml b/doc/training-guide/module001-ch011-block-storage.xml index 2c1aaa9eda..0bad6eea9f 100644 --- a/doc/training-guide/module001-ch011-block-storage.xml +++ b/doc/training-guide/module001-ch011-block-storage.xml @@ -54,10 +54,8 @@ will be on the persistent volume and thus state will be maintained even if the instance is shut down. Details of this configuration are discussed in the OpenStack Clients Guide. + xlink:href="http://docs.openstack.org/user-guide/content/" + >OpenStack End User Guide. Volumes do not provide concurrent access from multiple instances. For that you need either a traditional network filesystem like NFS or CIFS or a cluster filesystem such as @@ -248,4 +246,4 @@ volume. - \ No newline at end of file +
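The volume workflow that the End User Guide reference above covers amounts to creating a persistent volume and attaching it to a running instance. A minimal sketch with the era's CLI clients; the size, name, IDs, and device path are placeholders, and the device name seen inside the guest may differ.

# Create a 10 GB persistent volume
$ cinder create --display-name my-data 10
# Attach it to a running instance
$ nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdb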