diff --git a/RELEASENOTES.rst b/RELEASENOTES.rst index 7a8e3b4442..c9ceb2ae7a 100644 --- a/RELEASENOTES.rst +++ b/RELEASENOTES.rst @@ -35,6 +35,11 @@ Virtual Machine Image Guide * RST conversion finished. +Architecture Design Guide +------------------------- + +* Completed RST conversion. + Translations ------------ diff --git a/doc-tools-check-languages.conf b/doc-tools-check-languages.conf index a75cfebbf3..8d3c3ebca3 100644 --- a/doc-tools-check-languages.conf +++ b/doc-tools-check-languages.conf @@ -30,9 +30,9 @@ declare -A SPECIAL_BOOKS=( ["networking-guide"]="RST" ["user-guide"]="RST" ["user-guide-admin"]="RST" + ["arch-design"]="RST" # Skip in-progress guides ["contributor-guide"]="skip" - ["arch-design-rst"]="skip" ["config-ref-rst"]="skip" # This needs special handling, handle it with the RST tools. ["common-rst"]="RST" diff --git a/doc/arch-design/bk-openstack-arch-design.xml b/doc/arch-design/bk-openstack-arch-design.xml deleted file mode 100644 index 7ffc9514c0..0000000000 --- a/doc/arch-design/bk-openstack-arch-design.xml +++ /dev/null @@ -1,64 +0,0 @@ - - - OpenStack Architecture Design Guide - - Architecture Guide - - - - - - - - OpenStack Foundation - - - - 2014 - 2015 - OpenStack Foundation - - current - OpenStack - - - - Copyright details are filled in by the - template. - - - - - Remaining licensing details are filled in by - the template. - - - - To reap the benefits of OpenStack, you should - plan, design, and architect your cloud properly, - taking user's needs into account and understanding the - use cases. - - - - - - - - - - - - - - - - - - diff --git a/doc/arch-design/ch_compute_focus.xml b/doc/arch-design/ch_compute_focus.xml deleted file mode 100644 index 598f87481d..0000000000 --- a/doc/arch-design/ch_compute_focus.xml +++ /dev/null @@ -1,45 +0,0 @@ - - - Compute focused - Compute-focused clouds are a specialized subset of the general purpose - OpenStack cloud architecture. A compute-focused cloud specifically supports - compute intensive workloads. - - Compute intensive workloads may be CPU intensive, RAM intensive, - or both; they are not typically storage or network intensive. - - Compute-focused workloads may include the following use cases: - - - High performance computing (HPC) - - - Big data analytics using Hadoop or other distributed data - stores - - - Continuous integration/continuous deployment (CI/CD) - - - Platform-as-a-Service (PaaS) - - - Signal processing for network function virtualization (NFV) - - - - A compute-focused OpenStack cloud does not typically use raw block storage - services as it does not host applications that require - persistent block storage. - - - - - - - - diff --git a/doc/arch-design/ch_generalpurpose.xml b/doc/arch-design/ch_generalpurpose.xml deleted file mode 100644 index 0d1362c1e1..0000000000 --- a/doc/arch-design/ch_generalpurpose.xml +++ /dev/null @@ -1,95 +0,0 @@ - - - General purpose - An OpenStack general purpose cloud is often considered a - starting point for building a cloud deployment. They are designed - to balance the components and do not emphasize any particular aspect - of the overall computing environment. - Cloud design must give equal weight to the compute, network, and - storage components. General purpose clouds are - found in private, public, and hybrid environments, lending - themselves to many different use cases. - - - - General purpose clouds are homogeneous deployments. They are - not suited to specialized environments or edge case situations. 
- - - - Common uses of a general purpose cloud include: - - - - - Providing a simple database - - - - - A web application runtime environment - - - - - A shared application development platform - - - - - Lab test bed - - - - Use cases that benefit from scale-out rather than scale-up approaches - are good candidates for general purpose cloud architecture. - - A general purpose cloud is designed to have a range of potential - uses or functions; not specialized for specific use cases. General - purpose architecture is designed to address 80% of potential use - cases available. The infrastructure, in itself, is a specific use case, - enabling it to be used as a base model for the design process. - General purpose clouds are designed to be platforms that are suited - for general purpose applications. - General purpose clouds are limited to the most basic - components, but they can include additional resources such - as: - - - Virtual-machine disk image library - - - Raw block storage - - - File or object storage - - - Firewalls - - - Load balancers - - - IP addresses - - - Network overlays or virtual local area networks - (VLANs) - - - Software bundles - - - - - - - - - - diff --git a/doc/arch-design/ch_hybrid.xml b/doc/arch-design/ch_hybrid.xml deleted file mode 100644 index 2426dec267..0000000000 --- a/doc/arch-design/ch_hybrid.xml +++ /dev/null @@ -1,59 +0,0 @@ - - - Hybrid - A hybrid cloud design - is one that uses more than one cloud. For example, designs that use - both an OpenStack-based private cloud and an OpenStack-based public - cloud, or that use an OpenStack cloud and a non-OpenStack cloud, - are hybrid clouds. - Bursting describes the - practice of creating new instances in an external cloud to alleviate - capacity issues in a private cloud. - - Example scenarios suited to hybrid clouds - - Bursting from a private cloud to a public - cloud - - - Disaster recovery - - - Development and testing - - - Federated cloud, enabling users to choose resources - from multiple providers - - - Supporting legacy systems as they transition to the - cloud - - - Hybrid clouds interact with systems that are outside - the control of the private cloud administrator, and require careful - architecture to prevent conflicts with hardware, software, - and APIs under external control. - The degree to which the architecture is OpenStack-based - affects your ability to accomplish tasks with native - OpenStack tools. By definition, this is a situation in which - no single cloud can provide all of the necessary - functionality. In order to manage the entire system, we recommend - using a cloud management platform (CMP). - There are several commercial and open source CMPs available, - but there is no single CMP that can address all needs in all scenarios, - and sometimes a manually-built solution is the best option. - This chapter includes discussion of using CMPs for managing a hybrid - cloud. - - - - - - - - diff --git a/doc/arch-design/ch_introduction.xml b/doc/arch-design/ch_introduction.xml deleted file mode 100644 index 0f751e8623..0000000000 --- a/doc/arch-design/ch_introduction.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - Introduction - - OpenStack is a fully-featured, self-service - cloud. This book takes you through some of the considerations you have to make - when designing your cloud. 
- - - - - - - diff --git a/doc/arch-design/ch_legal-security-requirements.xml b/doc/arch-design/ch_legal-security-requirements.xml deleted file mode 100644 index 8882fd5fd1..0000000000 --- a/doc/arch-design/ch_legal-security-requirements.xml +++ /dev/null @@ -1,260 +0,0 @@ - - - - Security and legal requirements - This chapter discusses the legal and security requirements you - need to consider for the different OpenStack scenarios. -
- Legal requirements - Many jurisdictions have legislative and regulatory - requirements governing the storage and management of data in - cloud environments. Common areas of regulation include: - - - Data retention policies ensuring storage of - persistent data and records management to meet data - archival requirements. - - - Data ownership policies governing the possession and - responsibility for data. - - - Data sovereignty policies governing the storage of - data in foreign countries or otherwise separate - jurisdictions. - - - Data compliance policies governing certain types of - information that must reside in certain locations due to - regulatory issues, and more importantly, cannot reside in - other locations for the same reason. - - - Examples of such legal frameworks include the data - protection framework of the European Union and the - requirements of the - Financial Industry Regulatory Authority in the United - States. Consult a local regulatory body for more information. -
-
- Security - When deploying OpenStack in an enterprise as a private - cloud, despite activating a firewall and binding - employees with security agreements, cloud architecture - should not make assumptions about safety and protection. - In addition to considering the users, operators, or administrators - who will use the environment, consider also negative or hostile users who - would attack or compromise the security of your deployment regardless - of firewalls or security agreements. - Attack vectors increase further in a public-facing OpenStack - deployment. For example, the API endpoints and the - software behind them become vulnerable to hostile - entities attempting to gain unauthorized access or prevent access - to services. This can result in loss of reputation, and you must - protect against it through auditing and appropriate - filtering. - It is important to understand that user authentication - requests contain sensitive information such as user names, - passwords, and authentication tokens. For this reason, place - the API services behind hardware that performs SSL termination. - - Be mindful of consistency when utilizing third party - clouds to explore authentication options. - -
-
- Security domains - A security domain comprises users, applications, servers or - networks that share common trust requirements and expectations - within a system. Typically, security domains have the same - authentication and authorization requirements and users. - You can map security domains individually to the - installation, or combine them. For example, some - deployment topologies combine both guest and data domains onto - one physical network. In other cases these networks - are physically separate. Map out the security domains against - specific OpenStack topologies needs. The domains and their trust requirements - depend on whether the cloud instance is public, private, or - hybrid. - - Public security domains - The public security domain is an untrusted area of - the cloud infrastructure. It can refer to the internet as a - whole or simply to networks over which the user has no - authority. Always consider this domain untrusted. For example, - in a hybrid cloud deployment, any information traversing - between and beyond the clouds is in the public domain and - untrustworthy. - - - Guest security domains - Typically used for compute instance-to-instance traffic, the - guest security domain handles compute data generated by - instances on the cloud but not services that support the - operation of the cloud, such as API calls. Public cloud - providers and private cloud providers who do not have - stringent controls on instance use or who allow unrestricted - internet access to instances should consider this domain to be - untrusted. Private cloud providers may want to consider this - network as internal and therefore trusted only if they have - controls in place to assert that they trust instances and all - their tenants. - - - Management security domains - The management security domain is where services interact. - The networks in this domain transport confidential data such as configuration - parameters, user names, and passwords. Trust this domain when it is - behind an organization's firewall in deployments. - - - Data security domains - The data security domain is concerned primarily with - information pertaining to the storage services within - OpenStack. The data that crosses this network has integrity and - confidentiality requirements. Depending on the type of deployment there - may also be availability requirements. The trust level of this network - is heavily dependent on deployment decisions and does not have a default - level of trust. - -
-
- Hypervisor security - The hypervisor also requires a security assessment. In a - public cloud, organizations typically do not have control - over the choice of hypervisor. Properly securing your - hypervisor is important. An attack made upon an - unsecured hypervisor is called a - hypervisor breakout. - Hypervisor breakout describes the event of a - compromised or malicious instance breaking out of the resource - controls of the hypervisor and gaining access to the bare - metal operating system and hardware resources. - This is not an issue if the security of instances is not important. - However, enterprises that need to avoid this vulnerability can only - do so by not running their instances on shared hardware - in a public cloud. That does not mean that there is a - need to own all of the infrastructure on which an OpenStack - installation operates; it suggests avoiding situations in which - sharing hardware with others occurs. -
-
- Bare metal security - There are other services worth considering that provide a - bare metal instance instead of a cloud. In other cases, it is - possible to replicate a second private cloud by integrating - with a private Cloud-as-a-Service deployment. In this case, the - organization does not buy the hardware, but it also does not share it - with other tenants. It is also possible to use a provider that - hosts a bare-metal public cloud instance for which the - hardware is dedicated only to one customer, or a provider that - offers private Cloud-as-a-Service. - - Each cloud implements services differently. - What keeps data secure in one - cloud may not do the same in another. Be sure to know the - security requirements of every cloud that handles the - organization's data or workloads. - - More information on OpenStack security can be found in the - OpenStack - Security Guide. -
-
- Networking security - Consider security implications and requirements before designing the - physical and logical network topologies. Make sure that the networks are - properly segregated and that traffic flows reach the correct - destinations without crossing through undesirable locations. - Consider the following example factors: - - - Firewalls - - - Overlay interconnects for joining separated tenant networks - - - Routing through or avoiding specific networks - - - How networks attach to hypervisors can expose security - vulnerabilities. To mitigate the risk of hypervisor breakouts, - separate networks from other systems and schedule instances for the - network onto dedicated compute nodes. This prevents attackers - from having access to the networks from a compromised instance. -
-
- Multi-site security - Securing a multi-site OpenStack installation brings - extra challenges. Tenants may expect a tenant-created network - to be secure. In a multi-site installation the use of a - non-private connection between sites may be required. This may - mean that traffic would be visible to third parties and, in - cases where an application requires security, this issue - requires mitigation. In these instances, install a VPN or - encrypted connection between sites to conceal sensitive traffic. - Another security consideration with regard to multi-site - deployments is Identity. Centralize authentication within a - multi-site deployment. Centralization provides a - single authentication point for users across the deployment, - as well as a single point of administration for traditional - create, read, update, and delete operations. Centralized - authentication is also useful for auditing purposes because - all authentication tokens originate from the same - source. - Just as tenants in a single-site deployment need isolation - from each other, so do tenants in multi-site installations. - The extra challenges in multi-site designs revolve around - ensuring that tenant networks function across regions. - OpenStack Networking (neutron) does not presently support - a mechanism to provide this functionality, therefore an - external system may be necessary to manage these mappings. - Tenant networks may contain sensitive information requiring - that this mapping be accurate and consistent to ensure that a - tenant in one site does not connect to a different tenant in - another site. -
-
- OpenStack components - Most OpenStack installations require a bare minimum set of - pieces to function. These include OpenStack Identity - (keystone) for authentication, OpenStack Compute - (nova) for compute, OpenStack Image service (glance) for image - storage, OpenStack Networking (neutron) for networking, and - potentially an object store in the form of OpenStack Object - Storage (swift). Bringing multi-site into play also demands extra - components in order to coordinate between regions. Centralized - Identity service is necessary to provide the single authentication - point. Centralized dashboard is also recommended to provide a - single login point and a mapped experience to the API and CLI - options available. If needed, use a centralized Object Storage service, - installing the required swift proxy service alongside the Object - Storage service. - It may also be helpful to install a few extra options in - order to facilitate certain use cases. For instance, - installing DNS service may assist in automatically generating - DNS domains for each region with an automatically-populated - zone full of resource records for each instance. This - facilitates using DNS as a mechanism for determining which - region would be selected for certain applications. - Another useful tool for managing a multi-site installation - is Orchestration (heat). The Orchestration service - allows the use of templates to define a set of instances to - be launched together or for scaling existing sets. It can - set up matching or differentiated groupings based on - regions. For instance, if an application requires an equally - balanced number of nodes across sites, the same heat template - can be used to cover each site with small alterations to only - the region name. -
-
- diff --git a/doc/arch-design/ch_massively_scalable.xml b/doc/arch-design/ch_massively_scalable.xml deleted file mode 100644 index 05da4c210c..0000000000 --- a/doc/arch-design/ch_massively_scalable.xml +++ /dev/null @@ -1,79 +0,0 @@ - - - Massively scalable - - A massively scalable architecture is a cloud - implementation that is either a very large deployment, such as - a commercial service provider might build, or - one that has the capability to support user requests for large - amounts of cloud resources. - An example is an infrastructure in which requests to service - 500 or more instances at a time is common. A massively scalable - infrastructure fulfills such a request without exhausting the - available cloud infrastructure resources. While the high capital - cost of implementing such a cloud architecture means that it - is currently in limited use, many organizations are planning - for massive scalability in the future. - A massively scalable OpenStack cloud design presents a - unique set of challenges and considerations. For the most part - it is similar to a general purpose cloud architecture, as it - is built to address a non-specific range of potential use - cases or functions. Typically, it is rare that particular - workloads determine the design or configuration of massively - scalable clouds. The massively scalable cloud is most often - built as a platform for a variety of workloads. Because private - organizations rarely require or have the resources for them, - massively scalable OpenStack clouds are generally built as - commercial, public cloud offerings. - Services provided by a massively scalable OpenStack cloud - include: - - - Virtual-machine disk image library - - - Raw block storage - - - File or object storage - - - Firewall functionality - - - Load balancing functionality - - - Private (non-routable) and public (floating) IP - addresses - - - Virtualized network topologies - - - Software bundles - - - Virtual compute resources - - - Like a general purpose cloud, the instances deployed in a - massively scalable OpenStack cloud do not necessarily use - any specific aspect of the cloud offering (compute, network, - or storage). As the cloud grows in scale, the number of - workloads can cause stress on all the cloud - components. This adds further stresses to supporting - infrastructure such as databases and message brokers. The - architecture design for such a cloud must account for these - performance pressures without negatively impacting user - experience. - - - - - - diff --git a/doc/arch-design/ch_multi_site.xml b/doc/arch-design/ch_multi_site.xml deleted file mode 100644 index 3e30d0580e..0000000000 --- a/doc/arch-design/ch_multi_site.xml +++ /dev/null @@ -1,34 +0,0 @@ - - - Multi-site - - OpenStack is capable of running in a multi-region - configuration. This enables some parts of OpenStack to - effectively manage a group of sites as a single cloud. - Some use cases that might indicate a need for a multi-site - deployment of OpenStack include: - - - An organization with a diverse geographic - footprint. - - - Geo-location sensitive data. - - - Data locality, in which specific data or - functionality should be close to users. 
- - - - - - - - - - diff --git a/doc/arch-design/ch_network_focus.xml b/doc/arch-design/ch_network_focus.xml deleted file mode 100644 index ce4ec79b52..0000000000 --- a/doc/arch-design/ch_network_focus.xml +++ /dev/null @@ -1,152 +0,0 @@ - - - Network focused - All OpenStack deployments depend on network communication in order - to function properly due to its service-based nature. In some cases, - however, the network elevates beyond simple - infrastructure. This chapter discusses architectures that are more - reliant or focused on network services. These architectures depend - on the network infrastructure and require - network services that perform reliably in order to satisfy user and - application requirements. - Some possible use cases include: - - - Content delivery network - - This includes streaming video, viewing photographs, or - accessing any other cloud-based data repository distributed to - a large number of end users. Network configuration affects - latency, bandwidth, and the distribution of instances. Therefore, - it impacts video streaming. Not all video streaming is - consumer-focused. For example, multicast videos (used for media, - press conferences, corporate presentations, and web conferencing - services) can also use a content delivery network. - The location of the video repository and its relationship to end - users affects content delivery. Network throughput of the back-end - systems, as well as the WAN architecture and the cache methodology, - also affect performance. - - - - Network management functions - - Use this cloud to provide network service functions built to - support the delivery of back-end network services such as DNS, - NTP, or SNMP. - - - - Network service offerings - - Use this cloud to run customer-facing network tools to - support services. Examples include VPNs, MPLS private networks, - and GRE tunnels. - - - - Web portals or web services - - Web servers are a common application for cloud services, - and we recommend an understanding of their network requirements. - The network requires scaling out to meet user demand and deliver - web pages with a minimum latency. Depending on the details of - the portal architecture, consider the internal east-west and - north-south network bandwidth. - - - - High speed and high volume transactional systems - - - These types of applications are sensitive to network - configurations. Examples include financial systems, - credit card transaction applications, and trading and other - extremely high volume systems. These systems are sensitive - to network jitter and latency. They must balance a high volume - of East-West and North-South network traffic to - maximize efficiency of the data delivery. - Many of these systems must access large, high performance - database back ends. - - - - High availability - - These types of use cases are dependent on the proper sizing - of the network to maintain replication of data between sites for - high availability. If one site becomes unavailable, the extra - sites can serve the displaced load until the original site - returns to service. It is important to size network capacity - to handle the desired loads. - - - - Big data - - Clouds used for the management and collection of big data - (data ingest) have a significant demand on network resources. - Big data often uses partial replicas of the data to maintain - integrity over large distributed clouds. 
Other big data - applications that require a large amount of network resources - are Hadoop, Cassandra, NuoDB, Riak, and other NoSQL and - distributed databases. - - - - Virtual desktop infrastructure (VDI) - - This use case is sensitive to network congestion, latency, - jitter, and other network characteristics. Like video streaming, - the user experience is important. However, unlike video - streaming, caching is not an option to offset the network issues. - VDI requires both upstream and downstream traffic and cannot rely - on caching for the delivery of the application to the end user. - - - - Voice over IP (VoIP) - - This is sensitive to network congestion, latency, jitter, - and other network characteristics. VoIP has a symmetrical traffic - pattern and it requires network quality of service (QoS) for best - performance. In addition, you can implement active queue management - to deliver voice and multimedia content. Users are sensitive to - latency and jitter fluctuations and can detect them at very low - levels. - - - - Video Conference or web conference - - This is sensitive to network congestion, latency, jitter, - and other network characteristics. Video Conferencing has a - symmetrical traffic pattern, but unless the network is on an - MPLS private network, it cannot use network quality of service - (QoS) to improve performance. Similar to VoIP, users are - sensitive to network performance issues even at low levels. - - - - High performance computing (HPC) - - This is a complex use case that requires careful - consideration of the traffic flows and usage patterns to address - the needs of cloud clusters. It has high east-west traffic - patterns for distributed computing, but there can be substantial - north-south traffic depending on the specific application. - - - - - - - - - - - diff --git a/doc/arch-design/ch_references.xml b/doc/arch-design/ch_references.xml deleted file mode 100644 index e396b04758..0000000000 --- a/doc/arch-design/ch_references.xml +++ /dev/null @@ -1,128 +0,0 @@ - - - - References - - Data - Protection framework of the European Union: Guidance on - Data Protection laws governed by the EU. - - - Depletion - of IPv4 Addresses: describing how IPv4 addresses and the - migration to IPv6 is inevitable. - - - Ethernet - Switch Reliability: ​Research white paper on Ethernet Switch - reliability. - - - Financial - Industry Regulatory Authority: ​Requirements of the - Financial Industry Regulatory Authority in the USA. - - - Image - Service property keys: Glance API property keys allows the - administrator to attach custom characteristics to images. - - - LibGuestFS - Documentation: Official LibGuestFS documentation. - - - Logging - and Monitoring: Official OpenStack Operations - documentation. - - - ManageIQ Cloud Management - Platform: An Open Source Cloud Management Platform for - managing multiple clouds. - - - N-Tron - Network Availability: Research white paper on network - availability. - - - Nested - KVM: Post on how to nest KVM under KVM. - - - Open Compute - Project: The Open Compute Project Foundation's mission is - to design and enable the delivery of the most efficient server, - storage and data center hardware designs for scalable - computing. - - - OpenStack - Flavors: Official OpenStack documentation. - - - OpenStack - High Availability Guide: Information on how to provide - redundancy for the OpenStack components. - - - OpenStack - Hypervisor Support Matrix: ​Matrix of supported hypervisors - and capabilities when used with OpenStack. 
- - - OpenStack - Object Store (Swift) Replication Reference: Developer - documentation of Swift replication. - - - OpenStack - Operations Guide: The OpenStack Operations Guide provides - information on setting up and installing OpenStack. - - - OpenStack - Security Guide: The OpenStack Security Guide provides - information on securing OpenStack deployments. - - - OpenStack - Training Marketplace: The OpenStack Market for training and - Vendors providing training on OpenStack. - - - PCI - passthrough: The PCI API patches extend the - servers/os-hypervisor to show PCI information for instance and - compute node, and also provides a resource endpoint to show PCI - information. - - - TripleO: - TripleO is a program aimed at installing, upgrading and operating - OpenStack clouds using OpenStack's own cloud facilities as the - foundation. - - diff --git a/doc/arch-design/ch_specialized.xml b/doc/arch-design/ch_specialized.xml deleted file mode 100644 index 65077c480a..0000000000 --- a/doc/arch-design/ch_specialized.xml +++ /dev/null @@ -1,67 +0,0 @@ - - - Specialized cases - Although most OpenStack architecture designs fall into one - of the seven major scenarios outlined in other sections - (compute focused, network focused, storage focused, general - purpose, multi-site, hybrid cloud, and massively scalable), - there are a few use cases that do not fit into these categories. - This section discusses these specialized cases and provide - some additional details and design considerations - for each use case: - - - - Specialized - networking: describes running - networking-oriented software that may involve reading - packets directly from the wire or participating in - routing protocols. - - - - - Software-defined - networking (SDN): describes both - running an SDN controller from within OpenStack as well - as participating in a software-defined network. - - - - - Desktop-as-a-Service: - describes running a virtualized desktop environment - in a cloud (Desktop-as-a-Service). - This applies to private and public clouds. - - - - - OpenStack on - OpenStack: describes building a multi-tiered cloud by - running OpenStack on top of an OpenStack installation. - - - - - Specialized - hardware: describes the use of specialized - hardware devices from within the OpenStack environment. - - - - - - - - - - diff --git a/doc/arch-design/ch_storage_focus.xml b/doc/arch-design/ch_storage_focus.xml deleted file mode 100644 index 5eeef00b7c..0000000000 --- a/doc/arch-design/ch_storage_focus.xml +++ /dev/null @@ -1,78 +0,0 @@ - - - Storage focused - - Cloud storage is a model of data storage that stores digital - data in logical pools and physical storage that spans - across multiple servers and locations. Cloud storage commonly - refers to a hosted object storage service, however the term - also includes other types of data storage that are - available as a service, for example block storage. - Cloud storage runs on virtualized infrastructure and - resembles broader cloud computing in terms of accessible - interfaces, elasticity, scalability, multi-tenancy, and - metered resources. You can use cloud storage services from - an off-premises service or deploy on-premises. - Cloud storage consists of many distributed, synonymous - resources, which are often referred to as integrated - storage clouds. Cloud storage is highly fault tolerant through - redundancy and the distribution of data. It is highly durable - through the creation of versioned copies, and can be - consistent with regard to data replicas. 
- At large scale, management of data operations is - a resource intensive process for an organization. Hierarchical - storage management (HSM) systems and data grids help - annotate and report a baseline data valuation to make - intelligent decisions and automate data decisions. HSM enables - automated tiering and movement, as well as orchestration - of data operations. A data grid is an architecture, or set of - services evolving technology, that brings together sets of - services enabling users to manage large data sets. - Example applications deployed with cloud - storage characteristics: - - - Active archive, backups and hierarchical storage - management. - - - General content storage and synchronization. An - example of this is private dropbox. - - - Data analytics with parallel file systems. - - - Unstructured data store for services. For example, - social media back-end storage. - - - Persistent block storage. - - - Operating system and application image store. - - - Media streaming. - - - Databases. - - - Content distribution. - - - Cloud storage peering. - - - - - - - - - diff --git a/doc/arch-design/compute_focus/section_architecture_compute_focus.xml b/doc/arch-design/compute_focus/section_architecture_compute_focus.xml deleted file mode 100644 index fee597c55c..0000000000 --- a/doc/arch-design/compute_focus/section_architecture_compute_focus.xml +++ /dev/null @@ -1,268 +0,0 @@ - -
- - Architecture - The hardware selection covers three areas: - - - Compute - - - Network - - - Storage - - - Compute-focused OpenStack clouds have high demands on processor and - memory resources, and require hardware that can handle these demands. - Consider the following factors when selecting compute (server) hardware: - - - Server density - - - Resource capacity - - - Expandability - - - Cost - - - Weigh these considerations against each other to determine the - best design for the desired purpose. For example, increasing server density - means sacrificing resource capacity or expandability. - A compute-focused cloud should have an emphasis on server hardware - that can offer more CPU sockets, more CPU cores, and more RAM. Network - connectivity and storage capacity are less critical. - When designing a compute-focused OpenStack architecture, you must - consider whether you intend to scale up or scale out. - Selecting a smaller number of larger hosts, or a - larger number of smaller hosts, depends on a combination of factors: - cost, power, cooling, physical rack and floor space, support-warranty, - and manageability. - Considerations for selecting hardware: - - - Most blade servers can support dual-socket multi-core CPUs. To - avoid this CPU limit, select full width - or full height blades. - Be aware, however, that this also decreases server density. For example, - high density blade servers such as HP BladeSystem or Dell PowerEdge - M1000e support up to 16 servers in only ten rack units. Using - half-height blades is twice as dense as using full-height blades, - which results in only eight servers per ten rack units. - - - 1U rack-mounted servers that occupy only a single rack - unit may offer greater server density than a blade server - solution. It is possible to place forty 1U servers in a rack, providing - space for the top of rack (ToR) switches, compared to 32 full width - blade servers. - - - 2U rack-mounted servers provide quad-socket, multi-core CPU - support, but with a corresponding decrease in server density (half - the density that 1U rack-mounted servers offer). - - - Larger rack-mounted servers, such as 4U servers, often provide - even greater CPU capacity, commonly supporting four or even eight CPU - sockets. These servers have greater expandability, but such servers - have much lower server density and are often more expensive. - - - Sled servers are rack-mounted servers that - support multiple - independent servers in a single 2U or 3U enclosure. These deliver higher - density as compared to typical 1U or 2U rack-mounted servers. For - example, many sled servers offer four independent dual-socket - nodes in 2U for a total of eight CPU sockets in 2U. - - - Consider these when choosing server hardware for a compute-focused OpenStack design architecture: - - - Instance density - - - Host density - - - Power and cooling density - - - -
- Selecting networking hardware - Some of the key considerations for networking hardware selection - include: - - - Port count - - - Port density - - - Port speed - - - Redundancy - - - Power requirements - - - We recommend designing the network architecture using - a scalable network model that makes it easy to add capacity and - bandwidth. A good example of such a model is the leaf-spine model. In - this type of network design, it is possible to easily add additional - bandwidth as well as scale out to additional racks of gear. It is - important to select network hardware that supports the required - port count, port speed, and port density while also allowing for future - growth as workload demands increase. It is also important to evaluate - where in the network architecture it is valuable to provide redundancy. -
- -
- Operating system and hypervisor - The selection of operating system (OS) and hypervisor has a - significant impact on the end point design. - OS and hypervisor selection impact the following areas: - - - Cost - - - Supportability - - - Management tools - - - Scale and performance - - - Security - - - Supported features - - - Interoperability - - -
- -
- OpenStack components - The selection of OpenStack components is important. - There are certain components that are required, for example the compute - and image services, but others, such as the Orchestration service, may not - be present. - For a compute-focused OpenStack design architecture, the - following components may be present: - - - Identity (keystone) - - - Dashboard (horizon) - - - Compute (nova) - - - Object Storage (swift) - - - Image (glance) - - - Networking (neutron) - - - Orchestration (heat) - - - - A compute-focused design is less likely to include OpenStack Block - Storage. However, there may be some situations where the need for - performance requires a block storage component to improve data I/O. - - The exclusion of certain OpenStack components might also limit the - functionality of other components. If a design includes - the Orchestration service but excludes the Telemetry service, then - the design cannot take advantage of Orchestration's autoscaling functionality, as this relies on information from Telemetry. -
- -
- Networking software - OpenStack Networking provides a wide variety of networking services - for instances. There are many additional networking software packages - that might be useful to manage the OpenStack components themselves. - The OpenStack High Availability Guide - (http://docs.openstack.org/ha-guide/) - describes some of these software packages in more detail. - - For a compute-focused OpenStack cloud, the OpenStack infrastructure - components must be highly available. If the design does not - include hardware load balancing, you must add networking software packages, - for example, HAProxy. -
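A minimal sketch of the HAProxy option mentioned above, limited to the Compute API endpoint: the virtual IP, controller names, and addresses are hypothetical placeholders, and a production configuration would also cover the remaining service endpoints and tune its health checks and timeouts.

    # haproxy.cfg fragment (sketch, assuming three controllers behind a VIP)
    listen nova-api
        bind 192.168.1.100:8774
        balance roundrobin
        option tcpka
        option httpchk
        server controller1 192.168.1.11:8774 check inter 2000 rise 2 fall 5
        server controller2 192.168.1.12:8774 check inter 2000 rise 2 fall 5
        server controller3 192.168.1.13:8774 check inter 2000 rise 2 fall 5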
- -
- Management software - The selected supplemental software solution affects - the overall OpenStack cloud design. This includes software for - clustering, logging, monitoring, and alerting. - The availability design requirements are the main determinant - for the inclusion of clustering software, such as Corosync or Pacemaker. - Operational considerations determine the requirements for logging, - monitoring, and alerting. Each of these sub-categories includes - various options. - Some other potential design impacts include: - - - OS-hypervisor combination - - Ensure that the selected logging, - monitoring, or alerting tools support the proposed OS-hypervisor - combination. - - - - Network hardware - - The logging, monitoring, and alerting software - must support the network hardware selection. - - - -
- -
- Database software - A large majority of OpenStack components require access to - back-end database services to store state and configuration - information. Select an appropriate back-end database that - satisfies the availability and fault tolerance requirements of the - OpenStack services. OpenStack services support connecting - to any database that the SQLAlchemy Python drivers support, - however most common database deployments make use of MySQL or some - variation of it. We recommend that you make the database that provides - back-end services within a general-purpose cloud highly - available. Some of the more common software solutions include Galera, - MariaDB, and MySQL with multi-master replication. -
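Whichever back end is chosen, each OpenStack service reaches it through a SQLAlchemy connection URL in its configuration file. The following is a sketch only; the host name, database name, and password are hypothetical, and a highly available deployment would typically point the URL at a load-balanced Galera or MariaDB cluster rather than a single server.

    # nova.conf fragment; the same pattern applies to the other services
    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@db-vip.example.com/nova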
- -
diff --git a/doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml b/doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml deleted file mode 100644 index 6e5131d3a2..0000000000 --- a/doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml +++ /dev/null @@ -1,84 +0,0 @@ - -
- - Operational considerations - There are a number of operational considerations that affect the - design of compute-focused OpenStack clouds, including: - - - - Enforcing strict API availability requirements - - - - - Understanding and dealing with failure scenarios - - - - - Managing host maintenance schedules - - - - Service-level agreements (SLAs) are contractual obligations that - ensure the availability of a service. When designing an OpenStack cloud, - factoring in promises of availability implies a certain level of - redundancy and resiliency. - -
- Monitoring - OpenStack clouds require appropriate monitoring platforms - to catch and manage errors. - - We recommend leveraging existing monitoring systems - to see if they are able to effectively monitor an - OpenStack environment. - - Specific meters that are critically important to capture - include: - - - Image disk utilization - - - Response time to the Compute API - - -
- -
- Capacity planning - Adding extra capacity to an OpenStack cloud is a - horizontal scaling process. - We recommend similar (or the same) CPUs - when adding extra nodes to the environment. This reduces - the chance of breaking live-migration features if they are - present. Scaling out hypervisor hosts also has a direct effect - on network and other data center resources. We recommend you - factor in this increase when reaching rack capacity or when requiring - extra network switches. - Changing the internal components of a Compute host to account for - increases in demand is a process known as vertical scaling. - Swapping a CPU for one with more cores, or - increasing the memory in a server, can help add extra - capacity for running applications. - Another option is to assess the average workloads and - increase the number of instances that can run within the - compute environment by adjusting the overcommit ratio. - - It is important to remember that changing the CPU - overcommit ratio can have a detrimental effect and cause - a potential increase in noisy neighbor issues. - - The added risk of increasing the overcommit ratio is that - more instances fail when a compute host fails. We do not recommend - that you increase the CPU overcommit ratio in a compute-focused - OpenStack design architecture, as it can increase the potential - for noisy neighbor issues. -
-
diff --git a/doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml b/doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml deleted file mode 100644 index cf6ae045ba..0000000000 --- a/doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml +++ /dev/null @@ -1,162 +0,0 @@ - -
- - Prescriptive examples - The Conseil Européen pour la Recherche Nucléaire (CERN), - also known as the European Organization for Nuclear Research, - provides particle accelerators and other infrastructure for - high-energy physics research. - As of 2011 CERN operated these two compute centers in Europe - with plans to add a third.

    Data center            Approximate capacity
    Geneva, Switzerland    3.5 Mega Watts; 91000 cores; 120 PB HDD; 100 PB Tape; 310 TB Memory
    Budapest, Hungary      2.5 Mega Watts; 20000 cores; 6 PB HDD

- To support a growing number of compute-heavy users of - experiments related to the Large Hadron Collider (LHC), CERN - ultimately elected to deploy an OpenStack cloud using - Scientific Linux and RDO. This effort aimed to simplify the - management of the center's compute resources with a view to - doubling compute capacity through the addition of a - data center in 2013 while maintaining the same - levels of compute staff. - The CERN solution uses cells - for segregation of compute - resources and for transparently scaling between different data - centers. This decision meant trading off support for security - groups and live migration. In addition, they must manually replicate - some details, like flavors, across cells. In - spite of these drawbacks cells provide the - required scale while exposing a single public API endpoint to - users. - CERN created a compute cell for each of the two original data - centers and created a third when it added a new data center - in 2013. Each cell contains three availability zones to - further segregate compute resources and at least three - RabbitMQ message brokers configured for clustering with - mirrored queues for high availability. - The API cell, which resides behind a HAProxy load balancer, - is in the data center in Switzerland and directs API - calls to compute cells using a customized variation of the - cell scheduler. The customizations allow certain workloads to - route to a specific data center or all data centers, - with cell RAM availability determining cell selection in the - latter case. - - - - - - There is also some customization of the filter scheduler - that handles placement within the cells: - - ImagePropertiesFilter - - Provides special handling - depending on the guest operating system in use - (Linux-based or Windows-based). - - - ProjectsToAggregateFilter - Provides special - handling depending on which project the instance is - associated with. - - - default_schedule_zones - Allows the selection of - multiple default availability zones, rather than a - single default. - - - - A central database team manages the MySQL database server in each cell - in an active/passive configuration with a NetApp storage back end. - Backups run every 6 hours. - -
- Network architecture - To integrate with existing networking infrastructure, CERN - made customizations to legacy networking (nova-network). This was in the - form of a driver to integrate with CERN's existing database - for tracking MAC and IP address assignments. - The driver facilitates selection of a MAC address and IP for - new instances based on the compute node where the scheduler places - the instance: it selects a MAC address and IP - from the pre-registered list associated with that node in the - database. The database updates to reflect the address assignment to - that instance. -
- -
- Storage architecture - CERN deploys the OpenStack Image service in the API cell and - configures it to expose version 1 (V1) of the API. This also requires - the image registry. The storage back end in - use is a 3 PB Ceph cluster. - CERN maintains a small set of Scientific Linux 5 and 6 images onto - which orchestration tools can place applications. Puppet manages - instance configuration and customization. -
- -
- Monitoring - CERN does not require direct billing, but uses the Telemetry service - to perform metering for the purposes of adjusting - project quotas. CERN uses a sharded, replicated MongoDB back end. - To spread API load, CERN deploys instances of the nova-api service - within the child cells for Telemetry to query - against. This also requires the configuration of supporting services - such as keystone, glance-api, and glance-registry in the child cells. - - - - - - - Additional monitoring tools in use include Flume, - Elasticsearch, Kibana, - and the CERN-developed Lemon - project. -
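For reference, a Telemetry deployment backed by MongoDB points the service at the database through the connection option in ceilometer.conf. The host name and password below are hypothetical placeholders, not CERN's actual layout; a sharded, replicated cluster would list its members or a router address in the same URL.

    # ceilometer.conf fragment (sketch)
    [database]
    connection = mongodb://ceilometer:CEILOMETER_DBPASS@mongodb.example.com:27017/ceilometer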
-
diff --git a/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml b/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml deleted file mode 100644 index 80cf06b8cb..0000000000 --- a/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml +++ /dev/null @@ -1,275 +0,0 @@ - - -%openstack; -]> -
- - Technical considerations - In a compute-focused OpenStack cloud, the type of instance - workloads you provision heavily influences technical - decision making. - Public and private clouds require deterministic capacity - planning to support elastic growth in order to meet user SLA - expectations. Deterministic capacity planning is the path to - predicting the effort and expense of making a given process - perform consistently. This process is important because, - when a service becomes a critical part of a user's - infrastructure, the user's experience links directly to the SLAs of - the cloud itself. - There are two aspects of capacity planning to consider: - - - Planning the initial deployment footprint - - - Planning expansion of the environment to stay ahead of the - demands of cloud users - - - Begin planning an initial OpenStack deployment footprint with - estimations of expected uptake, and existing infrastructure workloads. - The starting point is the core count of the cloud. By - applying relevant ratios, the user can gather information - about: - - - The number of expected concurrent instances: - (overcommit fraction × cores) / virtual cores per instance - - - Required storage: flavor disk size × number of instances - - - These ratios determine the amount of - additional infrastructure needed to support the cloud. For - example, consider a situation in which you require 1600 - instances, each with 2 vCPU and 50 GB of storage. Assuming the - default overcommit rate of 16:1, working out the math provides - an equation of: - - - 1600 = (16 × (number of physical cores)) / 2 - - - Storage required = 50 GB × 1600 - - - On the surface, the equations reveal the need for 200 - physical cores and 80 TB of storage for - /var/lib/nova/instances/. However, - it is also important to - look at patterns of usage to estimate the load that the API - services, database servers, and queue servers are likely to - encounter. - Aside from the creation and termination of instances, consider the - impact of users accessing the service, - particularly on nova-api and its associated database. Listing - instances gathers a great deal of information and given the - frequency with which users run this operation, a cloud with a - large number of users can increase the load significantly. - This can even occur unintentionally. For example, the - OpenStack Dashboard instances tab refreshes the list of - instances every 30 seconds, so leaving it open in a browser - window can cause unexpected load. - Consideration of these factors can help determine how many - cloud controller cores you require. A server with 8 CPU cores - and 8 GB of RAM server would be sufficient for a rack of - compute nodes, given the above caveats. - Key hardware specifications are also crucial to the - performance of user instances. Be sure to consider budget and - performance needs, including storage performance - (spindles/core), memory availability (RAM/core), network - bandwidth (Gbps/core), and overall CPU performance - (CPU/core). - The cloud resource calculator is a useful tool in examining - the impacts of different hardware and instance load outs. See: - https://github.com/noslzzp/cloud-resource-calculator/blob/master/cloud-resource-calculator.ods - - -
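The worked example above can be checked with a few lines of Python. This is only a sketch of the ratios described in this section, using the example's instance count, flavor size, and default CPU overcommit ratio; it ignores RAM, failure headroom, and the API and database load discussed above.

    # Rough capacity-planning sketch for the 1600-instance example
    instances = 1600            # expected concurrent instances
    vcpus_per_instance = 2      # flavor vCPUs
    disk_per_instance_gb = 50   # flavor disk size
    cpu_overcommit = 16         # default CPU allocation ratio (16:1)

    physical_cores = instances * vcpus_per_instance / cpu_overcommit
    storage_tb = instances * disk_per_instance_gb / 1000.0

    print("Physical cores needed: %d" % physical_cores)     # 200
    print("Instance storage needed: %.0f TB" % storage_tb)  # 80 TB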
- Expansion planning - A key challenge for planning the expansion of cloud - compute services is the elastic nature of cloud infrastructure - demands. - Planning for expansion is a balancing act. - Planning too conservatively can lead to unexpected - oversubscription of the cloud and dissatisfied users. Planning - for cloud expansion too aggressively can lead to unexpected - underutilization of the cloud and funds spent unnecessarily on operating - infrastructure. - The key is to carefully monitor the trends in - cloud usage over time. The intent is to measure the - consistency with which you deliver services, not the - average speed or capacity of the cloud. Using this information - to model capacity performance enables users to more - accurately determine the current and future capacity of the - cloud. -
- -
- CPU and RAM - OpenStack enables users to overcommit CPU and RAM on - compute nodes. This allows an increase in the number of - instances running on the cloud at the cost of reducing the - performance of the instances. OpenStack Compute uses the - following ratios by default: - - - CPU allocation ratio: 16:1 - - - RAM allocation ratio: 1.5:1 - - - The default CPU allocation ratio of 16:1 means that the - scheduler allocates up to 16 virtual cores per physical core. - For example, if a physical node has 12 cores, the scheduler - sees 192 available virtual cores. With typical flavor - definitions of 4 virtual cores per instance, this ratio would - provide 48 instances on a physical node. - Similarly, the default RAM allocation ratio of 1.5:1 means - that the scheduler allocates instances to a physical node as - long as the total amount of RAM associated with the instances - is less than 1.5 times the amount of RAM available on the - physical node. - You must select the appropriate CPU and RAM allocation ratio - based on particular use cases. -
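These ratios correspond to nova.conf options read by the scheduler's CoreFilter and RamFilter. The values below simply restate the defaults described above; treat them as a sketch and tune them per use case.

    # nova.conf fragment (defaults shown)
    [DEFAULT]
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5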
- -
- Additional hardware - Certain use cases may benefit from exposure to additional - devices on the compute node. Examples might include: - - - High performance computing jobs that benefit from - the availability of graphics processing units (GPUs) - for general-purpose computing. - - - Cryptographic routines that benefit from the - availability of hardware random number generators to - avoid entropy starvation. - - - Database management systems that benefit from the - availability of SSDs for ephemeral storage to maximize - read/write time. - - - Host aggregates group hosts that share similar - characteristics, which can include hardware similarities. The - addition of specialized hardware to a cloud deployment is - likely to add to the cost of each node, so consider carefully - whether all compute nodes, or - just a subset targeted by flavors, need the - additional customization to support the desired - workloads. -
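One way to reserve such hardware for a targeted subset of flavors is a host aggregate whose metadata is matched by a flavor's extra specs. The aggregate, host, and flavor names below are hypothetical, and the example assumes the AggregateInstanceExtraSpecsFilter is enabled in the scheduler.

    # Sketch: tag SSD-equipped nodes and steer a flavor onto them
    $ nova aggregate-create fast-io
    $ nova aggregate-set-metadata fast-io ssd=true
    $ nova aggregate-add-host fast-io compute-ssd-01
    $ nova flavor-key m1.ssd set aggregate_instance_extra_specs:ssd=true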
- -
- Utilization - Infrastructure-as-a-Service offerings, including OpenStack, - use flavors to provide standardized views of virtual machine - resource requirements that simplify the problem of scheduling - instances while making the best use of the available physical - resources. - In order to facilitate packing of virtual machines onto - physical hosts, the default selection of flavors provides a - second largest flavor that is half the size - of the largest flavor in every dimension. It has half the - vCPUs, half the vRAM, and half the ephemeral disk space. The - next largest flavor is half that size again. The following figure - provides a visual representation of this concept for a general - purpose computing design: - - - - - - The following figure displays a CPU-optimized, packed server: - - - - - - These default flavors are well suited to typical configurations - of commodity server hardware. To maximize utilization, - however, it may be necessary to customize the flavors or - create new ones in order to better align instance sizes to the - available hardware. - Workload characteristics may also influence hardware choices - and flavor configuration, particularly where they present - different ratios of CPU versus RAM versus HDD - requirements. - For more information on Flavors see: - OpenStack Operations Guide: Flavors -
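As a concrete illustration of the halving pattern described above, the classic default flavors step down by roughly a factor of two in each dimension, and a custom flavor sized to the actual hardware can be added alongside them. The name and sizes in the final command are hypothetical.

    # Default flavors halve in each dimension, for example:
    #   m1.xlarge   8 vCPU   16384 MB RAM   160 GB disk
    #   m1.large    4 vCPU    8192 MB RAM    80 GB disk
    #   m1.medium   2 vCPU    4096 MB RAM    40 GB disk
    # A custom flavor aligned to the available hardware (sketch):
    $ nova flavor-create m1.custom auto 6144 60 3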
- -
- OpenStack components - Due to the nature of the workloads in this - scenario, a number of components are highly beneficial for - a compute-focused cloud. This includes the typical OpenStack - components: - - - OpenStack Compute (nova) - - - OpenStack Image service (glance) - - - OpenStack Identity (keystone) - - - Also consider several specialized components: - - - Orchestration (heat) - Given the nature of the - applications involved in this scenario, these are heavily - automated deployments. Making use of Orchestration is highly - beneficial in this case. You can script the deployment of a - batch of instances and the running of tests, but it - makes sense to use the Orchestration service - to handle all these actions. - - - Telemetry (ceilometer) - Telemetry and the alarms it generates support autoscaling - of instances using Orchestration, as sketched after this list. Users that are not using the - Orchestration service do not need to deploy the Telemetry - service and may choose to use external solutions to fulfill - their metering and monitoring requirements. - - - OpenStack Block Storage (cinder) - Due to the burstable nature of the workloads and the - applications and instances that perform batch - processing, this cloud mainly uses memory or CPU, so - the need for add-on storage to each instance is not a likely - requirement. This does not mean that you do not use - OpenStack Block Storage (cinder) in the infrastructure, but - typically it is not a central component. - - - Networking - When choosing a networking platform, ensure that it either - works with all desired hypervisor and container technologies - and their OpenStack drivers, or that it includes an implementation of - an ML2 mechanism driver. You can mix networking platforms - that provide ML2 mechanism drivers. - -
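The Orchestration and Telemetry combination described in the list above is typically expressed as a Heat template. The following is a minimal, hedged sketch of that pattern, modeled on the common autoscaling example templates; the image and flavor names are hypothetical, and a real template would also define networking, a scale-down policy, and thresholds tuned to the workload.

    heat_template_version: 2014-10-16
    resources:
      workers:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 2
          max_size: 10
          resource:
            type: OS::Nova::Server
            properties:
              image: batch-worker-image   # hypothetical image
              flavor: m1.large
      scale_up:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: workers }
          scaling_adjustment: 1
          cooldown: 60
      cpu_alarm_high:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: cpu_util
          statistic: avg
          period: 60
          evaluation_periods: 1
          comparison_operator: gt
          threshold: 80
          alarm_actions:
            - { get_attr: [scale_up, alarm_url] }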
-
diff --git a/doc/arch-design/figures/Compute_NSX.png b/doc/arch-design/figures/Compute_NSX.png deleted file mode 100644 index 0cd2fcf42d..0000000000 Binary files a/doc/arch-design/figures/Compute_NSX.png and /dev/null differ diff --git a/doc/arch-design/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png b/doc/arch-design/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png deleted file mode 100644 index b6f691f038..0000000000 Binary files a/doc/arch-design/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png and /dev/null differ diff --git a/doc/arch-design/figures/Compute_Tech_Bin_Packing_General1.png b/doc/arch-design/figures/Compute_Tech_Bin_Packing_General1.png deleted file mode 100644 index 1d66bace4a..0000000000 Binary files a/doc/arch-design/figures/Compute_Tech_Bin_Packing_General1.png and /dev/null differ diff --git a/doc/arch-design/figures/Example_Compute_Heavy_Multi-Hypervisor_-_Architecture_4.png b/doc/arch-design/figures/Example_Compute_Heavy_Multi-Hypervisor_-_Architecture_4.png deleted file mode 100644 index 4b608380a1..0000000000 Binary files a/doc/arch-design/figures/Example_Compute_Heavy_Multi-Hypervisor_-_Architecture_4.png and /dev/null differ diff --git a/doc/arch-design/figures/Example_General_Purpose_Architecture_w_Swift.png b/doc/arch-design/figures/Example_General_Purpose_Architecture_w_Swift.png deleted file mode 100644 index ad55b954e7..0000000000 Binary files a/doc/arch-design/figures/Example_General_Purpose_Architecture_w_Swift.png and /dev/null differ diff --git a/doc/arch-design/figures/General_Architecture1.png b/doc/arch-design/figures/General_Architecture1.png deleted file mode 100644 index 88fad2f672..0000000000 Binary files a/doc/arch-design/figures/General_Architecture1.png and /dev/null differ diff --git a/doc/arch-design/figures/General_Architecture2.png b/doc/arch-design/figures/General_Architecture2.png deleted file mode 100644 index 0666f0a337..0000000000 Binary files a/doc/arch-design/figures/General_Architecture2.png and /dev/null differ diff --git a/doc/arch-design/figures/General_Architecture3.png b/doc/arch-design/figures/General_Architecture3.png deleted file mode 100644 index aa1c6399ae..0000000000 Binary files a/doc/arch-design/figures/General_Architecture3.png and /dev/null differ diff --git a/doc/arch-design/figures/Generic_CERN_Architecture.png b/doc/arch-design/figures/Generic_CERN_Architecture.png deleted file mode 100644 index a0fa63726c..0000000000 Binary files a/doc/arch-design/figures/Generic_CERN_Architecture.png and /dev/null differ diff --git a/doc/arch-design/figures/Generic_CERN_Example.png b/doc/arch-design/figures/Generic_CERN_Example.png deleted file mode 100644 index 3b72de12fb..0000000000 Binary files a/doc/arch-design/figures/Generic_CERN_Example.png and /dev/null differ diff --git a/doc/arch-design/figures/Massively_Scalable_Cells_+_regions_+_azs.png b/doc/arch-design/figures/Massively_Scalable_Cells_+_regions_+_azs.png deleted file mode 100644 index 9f54142dfb..0000000000 Binary files a/doc/arch-design/figures/Massively_Scalable_Cells_+_regions_+_azs.png and /dev/null differ diff --git a/doc/arch-design/figures/Methodology.png b/doc/arch-design/figures/Methodology.png deleted file mode 100644 index b037e001ca..0000000000 Binary files a/doc/arch-design/figures/Methodology.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_DR2.png b/doc/arch-design/figures/Multi-Cloud_DR2.png deleted file mode 100644 index 6f3ab79429..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_DR2.png and 
/dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_Priv-AWS3.png b/doc/arch-design/figures/Multi-Cloud_Priv-AWS3.png deleted file mode 100644 index df84870861..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_Priv-AWS3.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_Priv-AWS4.png b/doc/arch-design/figures/Multi-Cloud_Priv-AWS4.png deleted file mode 100644 index e73dff23b0..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_Priv-AWS4.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_Priv-Pub2.png b/doc/arch-design/figures/Multi-Cloud_Priv-Pub2.png deleted file mode 100644 index 170fd6db66..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_Priv-Pub2.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_Priv-Pub3.png b/doc/arch-design/figures/Multi-Cloud_Priv-Pub3.png deleted file mode 100644 index 1082761ef0..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_Priv-Pub3.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_failover.png b/doc/arch-design/figures/Multi-Cloud_failover.png deleted file mode 100644 index 9e071bacad..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_failover.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Cloud_failover2.png b/doc/arch-design/figures/Multi-Cloud_failover2.png deleted file mode 100644 index 3ceb1e26c7..0000000000 Binary files a/doc/arch-design/figures/Multi-Cloud_failover2.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_Customer_Edge.png b/doc/arch-design/figures/Multi-Site_Customer_Edge.png deleted file mode 100644 index fd57baea93..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_Customer_Edge.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_Location_Local.png b/doc/arch-design/figures/Multi-Site_Location_Local.png deleted file mode 100644 index aea1feebbd..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_Location_Local.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_shared_keystone.png b/doc/arch-design/figures/Multi-Site_shared_keystone.png deleted file mode 100644 index ec311e74ca..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_shared_keystone.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_shared_keystone1.png b/doc/arch-design/figures/Multi-Site_shared_keystone1.png deleted file mode 100644 index 4ce0bf4c11..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_shared_keystone1.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_shared_keystone_horizon.png b/doc/arch-design/figures/Multi-Site_shared_keystone_horizon.png deleted file mode 100644 index e1dc11a4b2..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_shared_keystone_horizon.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_shared_keystone_horizon_swift.png b/doc/arch-design/figures/Multi-Site_shared_keystone_horizon_swift.png deleted file mode 100644 index 9e2970a86c..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_shared_keystone_horizon_swift.png and /dev/null differ diff --git a/doc/arch-design/figures/Multi-Site_shared_keystone_horizon_swift1.png b/doc/arch-design/figures/Multi-Site_shared_keystone_horizon_swift1.png deleted file mode 100644 index a051ba5c74..0000000000 Binary files a/doc/arch-design/figures/Multi-Site_shared_keystone_horizon_swift1.png and /dev/null differ diff --git 
a/doc/arch-design/figures/Multi-site_Geo_Redundant_LB.png b/doc/arch-design/figures/Multi-site_Geo_Redundant_LB.png deleted file mode 100644 index 855d65edcc..0000000000 Binary files a/doc/arch-design/figures/Multi-site_Geo_Redundant_LB.png and /dev/null differ diff --git a/doc/arch-design/figures/Network_Cloud_Storage1.png b/doc/arch-design/figures/Network_Cloud_Storage1.png deleted file mode 100644 index 615210d6f2..0000000000 Binary files a/doc/arch-design/figures/Network_Cloud_Storage1.png and /dev/null differ diff --git a/doc/arch-design/figures/Network_Cloud_Storage2.png b/doc/arch-design/figures/Network_Cloud_Storage2.png deleted file mode 100644 index 9e38860624..0000000000 Binary files a/doc/arch-design/figures/Network_Cloud_Storage2.png and /dev/null differ diff --git a/doc/arch-design/figures/Network_Web_Services1.png b/doc/arch-design/figures/Network_Web_Services1.png deleted file mode 100644 index b0004ccefe..0000000000 Binary files a/doc/arch-design/figures/Network_Web_Services1.png and /dev/null differ diff --git a/doc/arch-design/figures/OPST_0008_Compute_12015337_0314cd-compute_cells_high.png b/doc/arch-design/figures/OPST_0008_Compute_12015337_0314cd-compute_cells_high.png deleted file mode 100644 index 65835f68ec..0000000000 Binary files a/doc/arch-design/figures/OPST_0008_Compute_12015337_0314cd-compute_cells_high.png and /dev/null differ diff --git a/doc/arch-design/figures/Special_case_SDN_external.png b/doc/arch-design/figures/Special_case_SDN_external.png deleted file mode 100644 index 5fa3e9493a..0000000000 Binary files a/doc/arch-design/figures/Special_case_SDN_external.png and /dev/null differ diff --git a/doc/arch-design/figures/Special_case_SDN_hosted.png b/doc/arch-design/figures/Special_case_SDN_hosted.png deleted file mode 100644 index 42913e04f2..0000000000 Binary files a/doc/arch-design/figures/Special_case_SDN_hosted.png and /dev/null differ diff --git a/doc/arch-design/figures/Specialized_Hardware2.png b/doc/arch-design/figures/Specialized_Hardware2.png deleted file mode 100644 index 907a87e879..0000000000 Binary files a/doc/arch-design/figures/Specialized_Hardware2.png and /dev/null differ diff --git a/doc/arch-design/figures/Specialized_OOO.png b/doc/arch-design/figures/Specialized_OOO.png deleted file mode 100644 index 3086516add..0000000000 Binary files a/doc/arch-design/figures/Specialized_OOO.png and /dev/null differ diff --git a/doc/arch-design/figures/Specialized_VDI1.png b/doc/arch-design/figures/Specialized_VDI1.png deleted file mode 100644 index ff9b9a9afd..0000000000 Binary files a/doc/arch-design/figures/Specialized_VDI1.png and /dev/null differ diff --git a/doc/arch-design/figures/Storage_Database_+_Object2.png b/doc/arch-design/figures/Storage_Database_+_Object2.png deleted file mode 100644 index 9539ca6044..0000000000 Binary files a/doc/arch-design/figures/Storage_Database_+_Object2.png and /dev/null differ diff --git a/doc/arch-design/figures/Storage_Database_+_Object3.png b/doc/arch-design/figures/Storage_Database_+_Object3.png deleted file mode 100644 index 3cad8df747..0000000000 Binary files a/doc/arch-design/figures/Storage_Database_+_Object3.png and /dev/null differ diff --git a/doc/arch-design/figures/Storage_Database_+_Object5.png b/doc/arch-design/figures/Storage_Database_+_Object5.png deleted file mode 100644 index 307d17b83f..0000000000 Binary files a/doc/arch-design/figures/Storage_Database_+_Object5.png and /dev/null differ diff --git a/doc/arch-design/figures/Storage_Hadoop.png b/doc/arch-design/figures/Storage_Hadoop.png 
deleted file mode 100644 index e05bc72469..0000000000 Binary files a/doc/arch-design/figures/Storage_Hadoop.png and /dev/null differ diff --git a/doc/arch-design/figures/Storage_Hadoop3.png b/doc/arch-design/figures/Storage_Hadoop3.png deleted file mode 100644 index 6752a27e8a..0000000000 Binary files a/doc/arch-design/figures/Storage_Hadoop3.png and /dev/null differ diff --git a/doc/arch-design/figures/Storage_Object.png b/doc/arch-design/figures/Storage_Object.png deleted file mode 100644 index 2ca79e6604..0000000000 Binary files a/doc/arch-design/figures/Storage_Object.png and /dev/null differ diff --git a/doc/arch-design/figures/arch-design.graffle b/doc/arch-design/figures/arch-design.graffle deleted file mode 100644 index ecde530b91..0000000000 Binary files a/doc/arch-design/figures/arch-design.graffle and /dev/null differ diff --git a/doc/arch-design/figures/design-methodology.png b/doc/arch-design/figures/design-methodology.png deleted file mode 100644 index bfb91d7460..0000000000 Binary files a/doc/arch-design/figures/design-methodology.png and /dev/null differ diff --git a/doc/arch-design/figures/openstack_fullcover2014_1.jpg b/doc/arch-design/figures/openstack_fullcover2014_1.jpg deleted file mode 100644 index 6cf775b4f3..0000000000 Binary files a/doc/arch-design/figures/openstack_fullcover2014_1.jpg and /dev/null differ diff --git a/doc/arch-design/figures/packingexample-2.png b/doc/arch-design/figures/packingexample-2.png deleted file mode 100644 index 8737f8b201..0000000000 Binary files a/doc/arch-design/figures/packingexample-2.png and /dev/null differ diff --git a/doc/arch-design/figures/region-example.png b/doc/arch-design/figures/region-example.png deleted file mode 100644 index 158ed93c2f..0000000000 Binary files a/doc/arch-design/figures/region-example.png and /dev/null differ diff --git a/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml b/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml deleted file mode 100644 index 3cdc733a14..0000000000 --- a/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml +++ /dev/null @@ -1,720 +0,0 @@ - - -%openstack; -]> -
- - Architecture - Hardware selection involves three key areas: - - - Compute - - - Network - - - Storage - - - Hardware for a general purpose OpenStack cloud - should reflect a cloud with no pre-defined usage model, - designed to run a wide variety of applications with - varying resource usage requirements. - These applications include any of the following: - - - - RAM-intensive - - - - - CPU-intensive - - - - - Storage-intensive - - - - Certain hardware form factors may better suit a general - purpose OpenStack cloud due to the requirement for equal (or - nearly equal) balance of resources. Server hardware must provide - the following: - - - - Equal (or nearly equal) balance of compute capacity (RAM and CPU) - - - - - Network capacity (number and speed of links) - - - - - Storage capacity (gigabytes or terabytes as well as Input/Output - Operations Per Second (IOPS) - - - - Evaluate server hardware around four conflicting - dimensions: - - - Server density - - A measure of how many servers can - fit into a given measure of physical space, such as a - rack unit [U]. - - - - Resource capacity - - The number of CPU cores, amount of RAM, - or amount of deliverable storage. - - - - Expandability - - Limit of additional resources you can add to - a server. - - - - Cost - - The relative purchase price of the hardware - weighted against the level of design effort needed to - build the system. - - - - Increasing server density means sacrificing resource - capacity or expandability, however, increasing resource - capacity and expandability increases cost and decreases server - density. As a result, determining the best server hardware for - a general purpose OpenStack architecture means understanding - how choice of form factor will impact the rest of the - design. The following list outlines the form factors to - choose from: - - - Blade servers typically support dual-socket - multi-core CPUs. Blades also offer - outstanding density. - - - 1U rack-mounted servers occupy only a single rack - unit. Their benefits include high density, support for - dual-socket multi-core CPUs, and support for - reasonable RAM amounts. This form factor offers - limited storage capacity, limited network capacity, - and limited expandability. - - - 2U rack-mounted servers offer the expanded storage - and networking capacity that 1U servers tend to lack, - but with a corresponding decrease in server density - (half the density offered by 1U rack-mounted - servers). - - - Larger rack-mounted servers, such as 4U servers, - will tend to offer even greater CPU capacity, often - supporting four or even eight CPU sockets. These - servers often have much greater expandability so will - provide the best option for upgradability. This means, - however, that the servers have a much lower server - density and a much greater hardware cost. - - - Sled servers are rack-mounted servers that support - multiple independent servers in a single 2U or 3U - enclosure. This form factor offers increased density - over typical 1U-2U rack-mounted servers but tends to - suffer from limitations in the amount of storage or - network capacity each individual server - supports. - - - The best form factor for server hardware - supporting a general purpose OpenStack cloud is driven by - outside business and cost factors. No single reference - architecture applies to all implementations; the decision - must flow from user requirements, technical - considerations, and operational considerations. 
Here are some of the key factors that influence the selection of server hardware:
- Instance density: Sizing is an important consideration for a general purpose OpenStack cloud. The expected or anticipated number of instances that each hypervisor can host is a common meter used in sizing the deployment. The selected server hardware needs to support the expected or anticipated instance density.
- Host density: Physical data centers have limited physical space, power, and cooling. The number of hosts (or hypervisors) that can be fitted into a given metric (rack, rack unit, or floor tile) is another important method of sizing. Floor weight is an often overlooked consideration. The data center floor must be able to support the weight of the proposed number of hosts within a rack or set of racks. These factors need to be applied as part of the host density calculation and server hardware selection.
- Power density: Data centers have a specified amount of power fed to a given rack or set of racks. Older data centers may have power densities as low as 20 amps per rack, while more recent data centers can be architected to support power densities as high as 120 amps per rack. The selected server hardware must take power density into account.
- Network connectivity: The selected server hardware must have the appropriate number of network connections, as well as the right type of network connections, in order to support the proposed architecture. Ensure that, at a minimum, there are at least two diverse network connections coming into each rack.
The selection of form factors or architectures affects the selection of server hardware. Ensure that the selected server hardware is configured to support enough storage capacity (or storage expandability) to match the requirements of the selected scale-out storage solution. Similarly, the network architecture impacts the server hardware selection and vice versa.
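To make the host density and power density factors concrete, a back-of-the-envelope check such as the following can be run during rack planning. All of the input numbers (rack units, feed amperage, voltage, and per-server draw) are illustrative assumptions, not recommendations.

```python
# Estimate hosts per rack from the tighter of the space and power limits.

RACK_UNITS = 42          # usable rack units per rack
RACK_POWER_AMPS = 30     # power feed per rack (e.g. an older facility)
VOLTS = 208
SERVER_UNITS = 1         # 1U servers
SERVER_WATTS = 350       # typical draw per server under load

max_by_space = RACK_UNITS // SERVER_UNITS
max_by_power = RACK_POWER_AMPS * VOLTS // SERVER_WATTS

hosts_per_rack = min(max_by_space, max_by_power)
print(f"space limit: {max_by_space} hosts, power limit: {max_by_power} hosts")
print(f"hosts per rack: {hosts_per_rack}")
# With these numbers the rack is power-bound (17 hosts), not space-bound.
```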
- Selecting storage hardware
Determine the storage hardware architecture by selecting a specific storage architecture. Evaluate possible solutions against the critical factors: user requirements, technical considerations, and operational considerations.
Incorporate the following factors into your storage architecture:
- Cost: Storage can be a significant portion of the overall system cost. For an organization that is concerned with vendor support, a commercial storage solution is advisable, although it comes with a higher price tag. If minimizing initial capital expenditure is a priority, a design based on commodity hardware is appropriate. The trade-off is potentially higher support costs and a greater risk of incompatibility and interoperability issues.
- Scalability: Scalability, along with expandability, is a major consideration in a general purpose OpenStack cloud. It might be difficult to predict the final intended size of the implementation as there are no established usage patterns for a general purpose cloud. It might become necessary to expand the initial deployment in order to accommodate growth and user demand.
- Expandability: Expandability is a major architecture factor for storage solutions in a general purpose OpenStack cloud. A storage solution that expands to 50 PB is considered more expandable than a solution that only scales to 10 PB. This meter is related to scalability, which is the measure of a solution's performance as it expands.
Using a scale-out storage solution with direct-attached storage (DAS) in the servers is well suited for a general purpose OpenStack cloud. Cloud services requirements determine your choice of scale-out solution. You need to determine whether a single, highly expandable and highly vertically scalable, centralized storage array is suitable for your design. After determining an approach, select the storage hardware based on these criteria.
This list expands upon the potential impacts of including a particular storage architecture (and corresponding storage hardware) in the design for a general purpose OpenStack cloud:
- Connectivity: Ensure that, if storage protocols other than Ethernet are part of the storage solution, the appropriate hardware has been selected. If a centralized storage array is selected, ensure that the hypervisor will be able to connect to that storage array for image storage.
- Usage: How the particular storage architecture will be used is critical for determining the architecture. Some of the configurations that will influence the architecture include whether it will be used by the hypervisors for ephemeral instance storage or if OpenStack Object Storage will use it for object storage.
- Instance and image locations: Where instances and images will be stored will influence the architecture.
- Server hardware: If the solution is a scale-out storage architecture that includes DAS, it will affect the server hardware selection. This could ripple into the decisions that affect host density, instance density, power density, OS-hypervisor, management tools, and others.
A general purpose OpenStack cloud has multiple storage options.
- The key factors that will have an influence on the selection of storage hardware for a general purpose OpenStack cloud are as follows:
- Capacity: Hardware resources selected for the resource nodes should be capable of supporting enough storage for the cloud services. Defining the initial requirements and ensuring the design can support adding capacity is important. Hardware nodes selected for object storage should be capable of supporting a large number of inexpensive disks with no reliance on RAID controller cards. Hardware nodes selected for block storage should be capable of supporting high speed storage solutions and RAID controller cards to provide performance and redundancy to storage at a hardware level. Selecting hardware RAID controllers that automatically repair damaged arrays will assist with the replacement and repair of degraded or failed storage devices.
- Performance: Disks selected for object storage services do not need to be fast performing disks. We recommend that object storage nodes take advantage of the best cost per terabyte available for storage. In contrast, disks chosen for block storage services should take advantage of performance boosting features that may entail the use of SSDs or flash storage to provide high performance block storage pools. Storage performance of ephemeral disks used for instances should also be taken into consideration.
- Fault tolerance: Object storage resource nodes have no requirements for hardware fault tolerance or RAID controllers. It is not necessary to plan for fault tolerance within the object storage hardware because the object storage service provides replication between zones as a feature of the service. Block storage nodes, compute nodes, and cloud controllers should all have fault tolerance built in at the hardware level by making use of hardware RAID controllers and varying levels of RAID configuration. The level of RAID chosen should be consistent with the performance and availability requirements of the cloud.
- -
- Selecting networking hardware - Selecting network architecture determines which network - hardware will be used. Networking software is determined by - the selected networking hardware. - There are more subtle design impacts that need to be considered. - The selection of certain networking hardware (and the networking software) - affects the management tools that can be used. There are - exceptions to this; the rise of open networking software - that supports a range of networking hardware means that there - are instances where the relationship between networking - hardware and networking software are not as tightly defined. - Some of the key considerations that should be included in - the selection of networking hardware include: - - - Port count - - The design will require networking - hardware that has the requisite port count. - - - - Port density - - The network design will be affected by - the physical space that is required to provide the - requisite port count. A higher port density is preferred, - as it leaves more rack space for compute or storage components - that may be required by the design. This can also lead into - concerns about fault domains and power density that - should be considered. Higher density switches are more - expensive and should also be considered, as it is - important not to over design the network if it is not - required. - - - - Port speed - - - The networking hardware must support the proposed - network speed, for example: 1 GbE, 10 GbE, or - 40 GbE (or even 100 GbE). - - - - Redundancy - - The level of network hardware redundancy - required is influenced by the user requirements for - high availability and cost considerations. Network - redundancy can be achieved by adding redundant power - supplies or paired switches. If this is a requirement, - the hardware will need to support this configuration. - - - - Power requirements - - Ensure that the physical data - center provides the necessary power for the selected - network hardware. - - - This may be an issue for spine switches in a leaf and - spine fabric, or end of row (EoR) switches. - - - - - There is no single best practice architecture for the - networking hardware supporting a general purpose OpenStack - cloud that will apply to all implementations. Some of the key - factors that will have a strong influence on selection of - networking hardware include: - - - Connectivity - - All nodes within an OpenStack cloud - require network connectivity. In some - cases, nodes require access to more than one network - segment. The design must encompass sufficient network - capacity and bandwidth to ensure that all - communications within the cloud, both north-south and - east-west traffic have sufficient resources - available. - - - - Scalability - - The network design should - encompass a physical and logical network design that - can be easily expanded upon. Network hardware should - offer the appropriate types of interfaces and speeds - that are required by the hardware nodes. - - - - Availability - - To ensure that access to nodes within - the cloud is not interrupted, we recommend that - the network architecture identify any single points of - failure and provide some level of redundancy or fault - tolerance. With regard to the network infrastructure - itself, this often involves use of networking - protocols such as LACP, VRRP or others to achieve a - highly available network connection. In addition, it - is important to consider the networking implications - on API availability. 
In order to ensure that the APIs, - and potentially other services in the cloud are highly - available, we recommend you design a load balancing - solution within the network architecture to - accommodate for these requirements. - - - -
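Tying together the port count, port density, and redundancy considerations in this section, a quick per-rack port estimate can be sketched as follows. The host, NIC, and uplink counts are assumptions only and would come from the actual rack and fabric design.

```python
# Estimate ports needed on each top-of-rack switch in a redundant pair.

hosts_per_rack = 17
nics_per_host = 2        # two diverse connections per host, one to each switch
uplinks_per_switch = 4   # uplinks toward the spine or end-of-row layer

# With paired switches, each switch terminates one of the two host links.
ports_per_switch = hosts_per_rack * (nics_per_host // 2) + uplinks_per_switch
print(ports_per_switch)  # 21, so a 48-port switch leaves ample room to grow
```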
- -
- Software selection - Software selection for a general purpose OpenStack - architecture design needs to include these three areas: - - - Operating system (OS) and hypervisor - - - OpenStack components - - - Supplemental software - - -
- -
- Operating system and hypervisor
The operating system (OS) and hypervisor have a significant impact on the overall design. Selecting a particular operating system and hypervisor can directly affect server hardware selection. Make sure the storage hardware and topology support the selected operating system and hypervisor combination. Also ensure the networking hardware selection and topology will work with the chosen operating system and hypervisor combination.
Some areas that could be impacted by the selection of OS and hypervisor include:
- Cost: Selecting a commercially supported hypervisor, such as Microsoft Hyper-V, will result in a different cost model than community-supported open source hypervisors such as KVM or Xen. When comparing open source OS solutions, choosing Ubuntu over Red Hat (or vice versa) will have an impact on cost due to support contracts.
- Supportability: Depending on the selected hypervisor, staff should have the appropriate training and knowledge to support the selected OS and hypervisor combination. If they do not, training will need to be provided, which could have a cost impact on the design.
- Management tools: The management tools used for Ubuntu and KVM differ from the management tools for VMware vSphere. Although both OS and hypervisor combinations are supported by OpenStack, there will be very different impacts to the rest of the design as a result of the selection of one combination versus the other.
- Scale and performance: Ensure that selected OS and hypervisor combinations meet the appropriate scale and performance requirements. The chosen architecture will need to meet the targeted instance-host ratios with the selected OS-hypervisor combinations.
- Security: Ensure that the design can accommodate regular periodic installations of application security patches while maintaining required workloads. The frequency of security patches for the proposed OS-hypervisor combination will have an impact on performance, and the patch installation process could affect maintenance windows.
- Supported features: Determine which features of OpenStack are required. This will often determine the selection of the OS-hypervisor combination. Some features are only available with specific operating systems or hypervisors.
- Interoperability: You will need to consider how the OS and hypervisor combination interacts with other operating systems and hypervisors, including other software solutions. Operational troubleshooting tools for one OS-hypervisor combination may differ from the tools used for another OS-hypervisor combination and, as a result, the design will need to address whether the two sets of tools need to interoperate.
- -
- OpenStack components
Selecting which OpenStack components are included in the overall design is important. Some OpenStack components, like Compute and the Image service, are required in every architecture. Other components, like Orchestration, are not always required.
Excluding certain OpenStack components can limit or constrain the functionality of other components. For example, if the architecture includes Orchestration but excludes Telemetry, then the design will not be able to take advantage of Orchestration's auto-scaling functionality. It is important to research the component interdependencies in conjunction with the technical requirements before deciding on the final architecture.
- Networking software - OpenStack Networking (neutron) provides a wide variety of networking - services for instances. There are many additional networking - software packages that can be useful when managing OpenStack - components. Some examples include: - - - - Software to provide load balancing - - - - - Network redundancy protocols - - - - - Routing daemons - - - - Some of these software packages are described - in more detail in the OpenStack High Availability - Guide (refer to the Network - controller cluster stack chapter of the OpenStack High - Availability Guide). - For a general purpose OpenStack cloud, the OpenStack - infrastructure components need to be highly available. If - the design does not include hardware load balancing, - networking software packages like HAProxy will need to be - included. -
- -
- Management software - Selected supplemental software solution impacts and - affects the overall OpenStack cloud design. This includes - software for providing clustering, logging, monitoring and - alerting. - Inclusion of clustering software, such as Corosync or - Pacemaker, is determined primarily by the availability - requirements. The impact of including (or not - including) these software packages is primarily determined by - the availability of the cloud infrastructure and the - complexity of supporting the configuration after it is - deployed. The OpenStack High Availability Guide - provides more - details on the installation and configuration of Corosync and - Pacemaker, should these packages need to be included in the - design. - Requirements for logging, monitoring, and alerting are - determined by operational considerations. Each of these - sub-categories includes a number of various options. - If these software packages are required, the - design must account for the additional resource consumption - (CPU, RAM, storage, and network bandwidth). Some other potential - design impacts include: - - - OS-hypervisor combination: Ensure that the - selected logging, monitoring, or alerting tools - support the proposed OS-hypervisor combination. - - - Network hardware: The network hardware selection - needs to be supported by the logging, monitoring, and - alerting software. - - -
- -
- Database software
OpenStack components often require access to back-end database services to store state and configuration information. Selecting an appropriate back-end database that satisfies the availability and fault tolerance requirements of the OpenStack services is required. OpenStack services support connecting to any database supported by the SQLAlchemy Python drivers; however, most common database deployments make use of MySQL or variations of it. We recommend that the database providing back-end services within a general purpose cloud be made highly available using an appropriate technology.
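As a minimal sketch of what this recommendation looks like from a service's point of view, the snippet below assumes a MariaDB Galera cluster reachable through a single virtual IP; the address 192.0.2.10, the credentials, and the database name are placeholders. SQLAlchemy's pool options are used so connections recover cleanly after a failover.

```python
from sqlalchemy import create_engine, text

# Services point at the virtual IP fronting the Galera cluster, not at any
# individual database node, so a node failure is transparent to them.
engine = create_engine(
    "mysql+pymysql://nova:secret@192.0.2.10/nova",
    pool_pre_ping=True,   # detect and replace dead connections after a failover
    pool_recycle=3600,    # avoid idle connections being cut by the load balancer
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```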
-
-
diff --git a/doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml b/doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml deleted file mode 100644 index 0de8c6400d..0000000000 --- a/doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml +++ /dev/null @@ -1,156 +0,0 @@ - -
- - Operational considerations - In the planning and design phases of the build out, it is - important to include the operation's function. Operational - factors affect the design choices for a general purpose cloud, - and operations staff are often tasked with the maintenance of - cloud environments for larger installations. - Expectations set by the Service Level Agreements (SLAs) directly - affect knowing when and where you should implement redundancy and - high availability. SLAs are contractual - obligations that provide assurances for service availability. - They define the levels of availability that drive the technical - design, often with penalties for not meeting contractual obligations. - SLA terms that affect design include: - - - API availability guarantees implying multiple - infrastructure services and highly available - load balancers. - - - Network uptime guarantees affecting switch - design, which might require redundant switching and - power. - - - Factor in networking security policy requirements - in to your deployments. - - - -
- Support and maintainability - To be able to support and maintain an installation, OpenStack - cloud management requires operations staff to understand and - comprehend design architecture content. The operations and engineering - staff skill level, and level of separation, are dependent on size and - purpose of the installation. Large cloud service providers, or telecom - providers, are more likely to be managed by specially trained, dedicated - operations organizations. Smaller implementations are more likely to rely - on support staff that need to take on combined engineering, design and - operations functions. - Maintaining OpenStack installations requires a - variety of technical skills. You may want to consider using a third-party - management company with special expertise in managing - OpenStack deployment. -
- -
- Monitoring
OpenStack clouds require appropriate monitoring platforms to ensure errors are caught and managed appropriately. Specific meters that are critically important to monitor include:
- Image disk utilization
- Response time to the Compute API
Leveraging existing monitoring systems is an effective way to ensure OpenStack environments can be monitored.
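A check for the Compute API response-time meter mentioned above can be as simple as timing an HTTP request against the API endpoint. The URL and threshold below are placeholders, and a real deployment would feed the result into the existing monitoring platform rather than printing it.

```python
import time
import requests

COMPUTE_API = "https://compute.example.com:8774/v2.1/"   # placeholder endpoint
THRESHOLD_SECONDS = 1.0

start = time.monotonic()
resp = requests.get(COMPUTE_API, timeout=5)
elapsed = time.monotonic() - start

print(f"status={resp.status_code} elapsed={elapsed:.3f}s")
if elapsed > THRESHOLD_SECONDS:
    print("WARNING: Compute API responding slowly")
```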
- -
- Downtime
To effectively run cloud installations, initial downtime planning includes creating processes and architectures that support the following:
- Planned (maintenance)
- Unplanned (system faults)
The resiliency of the overall system, and of individual components, is dictated by the requirements of the SLA, meaning that designing for high availability (HA) can have cost ramifications.
- -
- Capacity planning
Capacity constraints for a general purpose cloud environment include:
- Compute limits
- Storage limits
A relationship exists between the size of the compute environment and the supporting OpenStack infrastructure controller nodes requiring support.
Increasing the size of the supporting compute environment increases the network traffic and messages, adding load to the controller or networking nodes. Effective monitoring of the environment will help with capacity decisions on scaling.
Compute nodes automatically attach to OpenStack clouds, resulting in a horizontally scaling process when adding extra compute capacity to an OpenStack cloud. Additional processes are required to place nodes into appropriate availability zones and host aggregates. When adding additional compute nodes to environments, ensure identical or functionally compatible CPUs are used, otherwise live migration features will break. It is necessary to add rack capacity or network switches as scaling out compute hosts directly affects network and data center resources.
Assessing the average workloads and increasing the number of instances that can run within the compute environment by adjusting the overcommit ratio is another option. It is important to remember that changing the CPU overcommit ratio can have a detrimental effect and cause a potential increase in noisy neighbor issues. The additional risk of increasing the overcommit ratio is that more instances fail when a compute host fails.
Compute host components can also be upgraded to account for increases in demand; this is known as vertical scaling. Upgrading CPUs with more cores, or increasing the overall server memory, can add extra needed capacity depending on whether the running applications are more CPU intensive or memory intensive.
Insufficient disk capacity could also have a negative effect on overall performance, including CPU and memory usage. Depending on the back-end architecture of the OpenStack Block Storage layer, capacity includes adding disk shelves to enterprise storage systems or installing additional block storage nodes. Upgrading directly attached storage installed in compute hosts, and adding capacity to the shared storage for additional ephemeral storage to instances, may be necessary.
For a deeper discussion on many of these topics, refer to the OpenStack Operations Guide.
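The overcommit trade-off described above can be quantified with a few lines of arithmetic; the host count, core count, and flavor size below are example values only.

```python
# Raising the CPU overcommit ratio adds apparent capacity, but it also
# raises the number of instances affected when a single compute host fails.

hosts = 20
cores_per_host = 24          # physical cores (or hardware threads) per host
vcpus_per_instance = 2

for cpu_overcommit in (4.0, 8.0, 16.0):
    vcpus_total = hosts * cores_per_host * cpu_overcommit
    instances_total = int(vcpus_total // vcpus_per_instance)
    per_host = instances_total // hosts    # instances lost if one host fails
    print(f"{cpu_overcommit:>4}:1 -> {instances_total} instances total, "
          f"~{per_host} affected per host failure")
```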
-
diff --git a/doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml b/doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml deleted file mode 100644 index 1b4fb7a182..0000000000 --- a/doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml +++ /dev/null @@ -1,101 +0,0 @@ - -
- - Prescriptive example - An online classified advertising company wants to run web applications - consisting of Tomcat, Nginx and MariaDB in a private cloud. To be able - to meet policy requirements, the cloud infrastructure will run in their - own data center. The company has predictable load requirements, but requires - scaling to cope with nightly increases in demand. Their current environment - does not have the flexibility to align with their goal of running an open - source API environment. The current environment consists of the following: - - - Between 120 and 140 installations of Nginx and - Tomcat, each with 2 vCPUs and 4 GB of RAM - - - A three-node MariaDB and Galera cluster, each with 4 - vCPUs and 8 GB RAM - - - The company runs hardware load balancers and multiple web - applications serving their websites, and orchestrates environments - using combinations of scripts and Puppet. The website generates large amounts of - log data daily that requires archiving. - The solution would consist of the following OpenStack - components: - - - A firewall, switches and load balancers on the - public facing network connections. - - - OpenStack Controller service running Image, - Identity, Networking, combined with support services such as - MariaDB and RabbitMQ, configured for high availability on at - least three controller nodes. - - - OpenStack Compute nodes running the KVM - hypervisor. - - - OpenStack Block Storage for use by compute instances, - requiring persistent storage (such as databases for - dynamic sites). - - - OpenStack Object Storage for serving static objects - (such as images). - - - - Running up to 140 - web instances and the small number of MariaDB instances - requires 292 vCPUs available, as well as 584 GB RAM. On a - typical 1U server using dual-socket hex-core Intel CPUs with - Hyperthreading, and assuming 2:1 CPU overcommit ratio, this - would require 8 OpenStack Compute nodes. - The web application instances run from local storage on each - of the OpenStack Compute nodes. The web application instances - are stateless, meaning that any of the instances can fail and - the application will continue to function. - MariaDB server instances store their data on shared - enterprise storage, such as NetApp or Solidfire devices. If a - MariaDB instance fails, storage would be expected to be - re-attached to another instance and rejoined to the Galera - cluster. - Logs from the web application servers are shipped to - OpenStack Object Storage for processing and - archiving. - Additional capabilities can be realized by - moving static web content to be served from OpenStack Object - Storage containers, and backing the OpenStack Image service - with OpenStack Object Storage. - - - Increasing OpenStack Object Storage means network bandwidth - needs to be taken into consideration. Running OpenStack Object - Storage with network connections offering 10 GbE or better connectivity - is advised. - - - Leveraging Orchestration and Telemetry services is also a potential issue when - providing auto-scaling, orchestrated web application environments. - Defining the web applications in Heat Orchestration Templates (HOT) - negates the reliance on the current scripted Puppet solution. - OpenStack Networking can be used to control hardware load - balancers through the use of plug-ins and the Networking API. 
- This allows users to control hardware load balancer pools and instances as members in these pools, but their use in production environments must be carefully weighed against current stability. A rough sizing sketch for the compute capacity in this example follows.
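The sizing arithmetic from this example can be reproduced as follows. The only assumption beyond the stated figures is that the final count of eight compute nodes includes roughly one node of headroom above the computed minimum.

```python
import math

# Workload from the example above.
web_instances, web_vcpu, web_ram_gb = 140, 2, 4
db_instances,  db_vcpu,  db_ram_gb  = 3, 4, 8

vcpus_needed = web_instances * web_vcpu + db_instances * db_vcpu      # 292
ram_needed   = web_instances * web_ram_gb + db_instances * db_ram_gb  # 584 GB

# Dual-socket hex-core server with hyperthreading at a 2:1 CPU overcommit.
vcpus_per_node = 2 * 6 * 2 * 2      # sockets * cores * threads * overcommit = 48

minimum_nodes = math.ceil(vcpus_needed / vcpus_per_node)   # 7
print(vcpus_needed, ram_needed, minimum_nodes)
# The example settles on 8 compute nodes, one above the computed minimum,
# which leaves headroom for growth or the loss of a single host.
```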
diff --git a/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml b/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml deleted file mode 100644 index bf291bda48..0000000000 --- a/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml +++ /dev/null @@ -1,738 +0,0 @@ - - -%openstack; -]> -
- - Technical considerations - General purpose clouds are expected to - include these base services: - - - - Compute - - - - - Network - - - - - Storage - - - - Each of these services have different resource requirements. - As a result, you must make design decisions relating directly - to the service, as well as provide a balanced infrastructure for - all services. - Take into consideration the unique aspects of each service, as - individual characteristics and service mass can impact the hardware - selection process. Hardware designs should be generated for each of the - services. - Hardware decisions are also made in relation to network architecture - and facilities planning. These factors play heavily into - the overall architecture of an OpenStack cloud. - -
- Compute resource design - When designing compute resource pools, a number of factors - can impact your design decisions. Factors such as number of processors, - amount of memory, and the quantity of storage required for each hypervisor - must be taken into account. - You will also need to decide whether to provide compute resources - in a single pool or in multiple pools. In most cases, multiple pools - of resources can be allocated and addressed on demand. A compute design - that allocates multiple pools of resources makes best use of application - resources, and is commonly referred to as - bin packing. - In a bin packing design, each independent resource pool provides service - for specific flavors. This helps to ensure that, as instances are scheduled - onto compute hypervisors, each independent node's resources will be allocated - in a way that makes the most efficient use of the available hardware. Bin - packing also requires a common hardware design, with all hardware nodes within - a compute resource pool sharing a common processor, memory, and storage layout. - This makes it easier to deploy, support, and maintain nodes throughout their - life cycle. - An overcommit ratio is the ratio of available - virtual resources to available physical resources. This ratio is - configurable for CPU and memory. The default CPU overcommit ratio is 16:1, and - the default memory overcommit ratio is 1.5:1. Determining the tuning of the - overcommit ratios during the design phase is important as it has a direct - impact on the hardware layout of your compute nodes. - When selecting a processor, compare features and performance - characteristics. Some processors include features specific to virtualized - compute hosts, such as hardware-assisted virtualization, and technology - related to memory paging (also known as EPT shadowing). These types of features - can have a significant impact on the performance of your virtual machine. - You will also need to consider the compute requirements of non-hypervisor - nodes (sometimes referred to as resource nodes). This includes controller, object - storage, and block storage nodes, and networking services. - The number of processor cores and threads impacts the number of worker - threads which can be run on a resource node. Design decisions must relate - directly to the service being run on it, as well as provide a balanced - infrastructure for all services. - Workload can be unpredictable in a general purpose cloud, so consider - including the ability to add additional compute resource pools on demand. - In some cases, however, the demand for certain instance types or flavors may not - justify individual hardware design. In either case, start by allocating - hardware designs that are capable of servicing the most common instance - requests. If you want to add additional hardware to the overall architecture, - this can be done later. -
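A toy first-fit-decreasing packer illustrates the bin packing idea named above, using identical hosts as bins and flavor-sized instances as items, in vCPUs only. A real scheduler weighs RAM, disk, and policy as well; the flavor mix and host capacity here are example values.

```python
# First-fit-decreasing packing of instances (by vCPU demand) onto hosts.

def pack(instances_vcpus, host_vcpus):
    hosts = []   # each entry is the free vCPU capacity left on that host
    for need in sorted(instances_vcpus, reverse=True):
        for i, free in enumerate(hosts):
            if free >= need:
                hosts[i] = free - need
                break
        else:
            hosts.append(host_vcpus - need)   # open a new host
    return len(hosts)

# 30 large, 60 medium, 120 small instances onto hosts exposing 48 vCPUs each.
demand = [8] * 30 + [4] * 60 + [2] * 120
print(pack(demand, host_vcpus=48))   # 15 hosts needed
```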
- -
- Designing network resources - OpenStack clouds generally have multiple network segments, with - each segment providing access to particular resources. The network services - themselves also require network communication paths which should - be separated from the other networks. When designing network services - for a general purpose cloud, plan for either a physical or logical - separation of network segments used by operators and tenants. You can also - create an additional network segment for access to internal services such as - the message bus and database used by various services. Segregating these - services onto separate networks helps to protect sensitive data and protects - against unauthorized access to services. - Choose a networking service based on the requirements of your instances. - The architecture and design of your cloud will impact whether you choose - OpenStack Networking(neutron), or legacy networking (nova-network). - - - Legacy networking (nova-network) - - The legacy networking (nova-network) service is primarily a - layer-2 networking service that functions in two modes, which - use VLANs in different ways. In a flat network mode, all - network hardware nodes and devices throughout the cloud are connected - to a single layer-2 network segment that provides access to - application data. - When the network devices in the cloud support segmentation - using VLANs, legacy networking can operate in the second mode. In - this design model, each tenant within the cloud is assigned a - network subnet which is mapped to a VLAN on the physical - network. It is especially important to remember the maximum - number of 4096 VLANs which can be used within a spanning tree - domain. This places a hard limit on the amount of - growth possible within the data center. When designing a - general purpose cloud intended to support multiple tenants, we - recommend the use of legacy networking with VLANs, and - not in flat network mode. - - - - Another consideration regarding network is the fact that - legacy networking is entirely managed by the cloud operator; - tenants do not have control over network resources. If tenants - require the ability to manage and create network resources - such as network segments and subnets, it will be necessary to - install the OpenStack Networking service to provide network - access to instances. - - - OpenStack Networking (neutron) - - OpenStack Networking (neutron) is a first class networking - service that gives full control over creation of virtual - network resources to tenants. This is often accomplished in - the form of tunneling protocols which will establish - encapsulated communication paths over existing network - infrastructure in order to segment tenant traffic. These - methods vary depending on the specific implementation, but - some of the more common methods include tunneling over GRE, - encapsulating with VXLAN, and VLAN tags. - - - - We recommend you design at least three network segments: - - - The first segment is a public network, used for access to REST APIs - by tenants and operators. The controller nodes and swift - proxies are the only devices connecting to this network segment. In some - cases, this network might also be serviced by hardware load balancers - and other network devices. - - - The second segment is used by administrators to manage hardware resources. - Configuration management tools also use this for deploying software and - services onto new hardware. 
In some cases, this network segment might also be - used for internal services, including the message bus and database services. - This network needs to communicate with every hardware node. - Due to the highly sensitive nature of this network segment, you also need to - secure this network from unauthorized access. - - - The third network segment is used by applications and consumers to access - the physical network, and for users to access applications. This network is - segregated from the one used to access the cloud APIs and is not - capable of communicating directly with the hardware resources in the cloud. - Compute resource nodes and network gateway services which allow application - data to access the physical network from outside of the cloud need to - communicate on this network segment. - - -
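The 4096 VLAN ceiling mentioned above becomes concrete when multiplied out against tenant counts. In the sketch below the tenant figures are purely illustrative, while the identifier sizes (a 12-bit VLAN ID versus a 24-bit VXLAN network identifier) are protocol facts.

```python
# Compare segment ID space against a hypothetical tenant network demand.

vlan_ids = 2 ** 12 - 2        # 12-bit VLAN ID, minus reserved IDs 0 and 4095
vxlan_vnis = 2 ** 24          # 24-bit VXLAN network identifier

tenants = 10_000
networks_per_tenant = 3

needed = tenants * networks_per_tenant
print(f"needed={needed}, vlan capacity={vlan_ids}, vxlan capacity={vxlan_vnis}")
print("fits in VLANs:", needed <= vlan_ids)         # False
print("fits in VXLAN VNIs:", needed <= vxlan_vnis)  # True
```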
- -
- Designing OpenStack Object Storage
When designing hardware resources for OpenStack Object Storage, the primary goal is to maximize the amount of storage in each resource node while also ensuring that the cost per terabyte is kept to a minimum. This often involves utilizing servers which can hold a large number of spinning disks. Whether choosing to use 2U server form factors with directly attached storage or an external chassis that holds a larger number of drives, the main goal is to maximize the storage available in each node.
We do not recommend investing in enterprise class drives for an OpenStack Object Storage cluster. The consistency and partition tolerance characteristics of OpenStack Object Storage ensure that data stays up to date and survives hardware faults without the use of any specialized data replication devices.
One of the benefits of OpenStack Object Storage is the ability to mix and match drives by making use of weighting within the swift ring. When designing your swift storage cluster, we recommend making use of the most cost effective storage solution available at the time.
To achieve durability and availability of data stored as objects, it is important to design object storage resource pools to ensure they can provide the suggested availability. Considering rack-level and zone-level designs to accommodate the number of replicas configured to be stored in the Object Storage service (the default number of replicas is three) is important when designing beyond the hardware node level. Each replica of data should exist in its own availability zone with its own power, cooling, and network resources available to service that specific zone.
Object storage nodes should be designed so that the number of requests does not hinder the performance of the cluster. The object storage service is a chatty protocol, therefore making use of multiple processors that have higher core counts will ensure the IO requests do not inundate the server.
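The replica count drives the raw-to-usable capacity ratio, so a quick sizing pass like the following is useful when planning object storage nodes and zones. The node count, disk sizes, and fill target are assumptions; the replica count of three matches the default noted above.

```python
import math

# Usable capacity under replica-based durability, plus a minimum zone layout.
replicas = 3
nodes = 12
disks_per_node = 24
disk_tb = 8
fill_target = 0.75          # keep headroom so rebalances have room to work

raw_tb = nodes * disks_per_node * disk_tb
usable_tb = raw_tb * fill_target / replicas
print(f"raw: {raw_tb} TB, usable at {fill_target:.0%} fill: {usable_tb:.0f} TB")
print("nodes per zone:", math.ceil(nodes / replicas))  # at least one zone per replica
```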
- -
- Designing OpenStack Block Storage
When designing OpenStack Block Storage resource nodes, it is helpful to understand the workloads and requirements that will drive the use of block storage in the cloud. We recommend designing block storage pools so that tenants can choose appropriate storage solutions for their applications. By creating multiple storage pools of different types, in conjunction with configuring an advanced storage scheduler for the block storage service, it is possible to provide tenants with a large catalog of storage services with a variety of performance levels and redundancy options.
Block storage also takes advantage of a number of enterprise storage solutions. These are addressed via a plug-in driver developed by the hardware vendor. A large number of enterprise storage plug-in drivers ship out-of-the-box with OpenStack Block Storage (and many more are available via third party channels). General purpose clouds are more likely to use directly attached storage in the majority of block storage nodes, making it necessary to provide additional levels of service to tenants which can only be provided by enterprise class storage solutions.
Redundancy and availability requirements impact the decision to use a RAID controller card in block storage nodes. The input-output per second (IOPS) demand of your application will influence whether or not you should use a RAID controller, and which level of RAID is required. Making use of higher performing RAID volumes is suggested when considering performance. However, where redundancy of block storage volumes is more important, we recommend making use of a redundant RAID configuration such as RAID 5 or RAID 6. Some specialized features, such as automated replication of block storage volumes, may require the use of third-party plug-ins and enterprise block storage solutions in order to provide the high demand on storage. Furthermore, where extreme performance is a requirement, it may also be necessary to make use of high speed SSD disk drives or high performing flash storage solutions.
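When weighing RAID levels for block storage nodes, the usable-capacity versus fault-tolerance trade-off can be tabulated quickly. The eight-disk node below is a hypothetical example; the capacity formulas are the standard ones for each RAID level.

```python
# Usable capacity and fault tolerance for a hypothetical 8 x 4 TB node.

disks, disk_tb = 8, 4

layouts = {
    "RAID 5":  {"usable": (disks - 1) * disk_tb, "survives": "1 disk failure"},
    "RAID 6":  {"usable": (disks - 2) * disk_tb, "survives": "2 disk failures"},
    "RAID 10": {"usable": (disks // 2) * disk_tb, "survives": "1 failure per mirror pair"},
}

for name, info in layouts.items():
    print(f"{name}: {info['usable']} TB usable of {disks * disk_tb} TB raw, "
          f"tolerates {info['survives']}")
```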
- -
- Software selection - The software selection process plays a large role in the - architecture of a general purpose cloud. The following have - a large impact on the design of the cloud: - - - - Choice of operating system - - - - - Selection of OpenStack software components - - - - - Choice of hypervisor - - - - - Selection of supplemental software - - - - Operating system (OS) selection plays a large role in the - design and architecture of a cloud. There are a number of OSes - which have native support for OpenStack including: - - - - Ubuntu - - - - - Red Hat Enterprise Linux (RHEL) - - - - - CentOS - - - - - SUSE Linux Enterprise Server (SLES) - - - - - Native support is not a constraint on the choice of OS; users are - free to choose just about any Linux distribution (or even - Microsoft Windows) and install OpenStack directly from source - (or compile their own packages). However, many organizations will - prefer to install OpenStack from distribution-supplied packages or - repositories (although using the distribution vendor's OpenStack - packages might be a requirement for support). - - - OS selection also directly influences hypervisor selection. - A cloud architect who selects Ubuntu, RHEL, or SLES has some - flexibility in hypervisor; KVM, Xen, and LXC are supported - virtualization methods available under OpenStack Compute - (nova) on these Linux distributions. However, a cloud architect - who selects Hyper-V is limited to Windows Servers. Similarly, a - cloud architect who selects XenServer is limited to the CentOS-based - dom0 operating system provided with XenServer. - The primary factors that play into OS-hypervisor selection - include: - - - User requirements - - The selection of OS-hypervisor - combination first and foremost needs to support the - user requirements. - - - - Support - - The selected OS-hypervisor combination - needs to be supported by OpenStack. - - - - Interoperability - - The OS-hypervisor needs to be - interoperable with other features and services in the - OpenStack design in order to meet the user - requirements. - - - -
- -
- Hypervisor
- OpenStack supports a wide variety of hypervisors, one or
- more of which can be used in a single cloud. These hypervisors
- include:
- KVM (and QEMU)
- XCP/XenServer
- vSphere (vCenter and ESXi)
- Hyper-V
- LXC
- Docker
- Bare-metal
- A complete list of supported hypervisors and their
- capabilities can be found at the
- OpenStack Hypervisor Support Matrix.
- We recommend that general purpose clouds use hypervisors that
- support the most general purpose use cases, such as KVM and
- Xen. Choose a more specialized hypervisor only when specific
- functionality or a supported feature requires it.
- In some cases, there may also be a mandated
- requirement to run software on a certified hypervisor,
- including solutions from VMware, Microsoft, and Citrix.
- The features offered through the OpenStack cloud platform
- determine the best choice of a hypervisor. Each hypervisor
- has its own hardware requirements, which may affect the decisions
- around designing a general purpose cloud.
- In a mixed hypervisor environment, specific aggregates of
- compute resources, each with defined capabilities, enable
- workloads to utilize software and hardware specific to their
- particular requirements. This functionality can be exposed
- explicitly to the end user, or accessed through defined
- metadata within a particular instance flavor.
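The following is a simplified, conceptual sketch of how flavor metadata can steer workloads to a host aggregate in a mixed-hypervisor cloud. It mimics the behavior of the Compute scheduler's aggregate extra-specs filtering, but it is not the actual scheduler code, and the aggregate names, hosts, and the "hypervisor" property are illustrative assumptions.

```python
# Conceptual sketch of aggregate/flavor metadata matching in a mixed
# hypervisor cloud. Names and metadata keys are illustrative; the real
# matching is done by the Compute scheduler, not this code.

# Host aggregates, each tagged with the capabilities of its hosts.
aggregates = {
    "kvm-general": {"hosts": ["compute01", "compute02"],
                    "metadata": {"hypervisor": "kvm"}},
    "hyperv-certified": {"hosts": ["compute10"],
                         "metadata": {"hypervisor": "hyperv"}},
}

# Flavors expose the same keys as scoped extra specs.
flavors = {
    "m1.general": {"aggregate_instance_extra_specs:hypervisor": "kvm"},
    "m1.windows": {"aggregate_instance_extra_specs:hypervisor": "hyperv"},
}

def candidate_hosts(flavor_extra_specs):
    """Return hosts whose aggregate metadata satisfies the flavor."""
    wanted = {k.split(":", 1)[1]: v
              for k, v in flavor_extra_specs.items()
              if k.startswith("aggregate_instance_extra_specs:")}
    hosts = []
    for agg in aggregates.values():
        if all(agg["metadata"].get(k) == v for k, v in wanted.items()):
            hosts.extend(agg["hosts"])
    return hosts

print(candidate_hosts(flavors["m1.windows"]))  # ['compute10']
```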
- -
- OpenStack components
- A general purpose OpenStack cloud design should incorporate
- the core OpenStack services to provide a wide range of
- functionality to end users. The OpenStack core services recommended
- in a general purpose cloud are:
- OpenStack Compute
- (nova)
- OpenStack Networking
- (neutron)
- OpenStack Image service
- (glance)
- OpenStack Identity
- (keystone)
- OpenStack dashboard
- (horizon)
- Telemetry
- (ceilometer)
- A general purpose cloud may also include OpenStack
- Object Storage (swift) and OpenStack Block Storage
- (cinder). These services may be
- selected to provide storage to applications and
- instances.
- -
- Supplemental software - A general purpose OpenStack deployment consists of more than - just OpenStack-specific components. A typical deployment - involves services that provide supporting functionality, - including databases and message queues, and may also involve - software to provide high availability of the OpenStack - environment. Design decisions around the underlying message - queue might affect the required number of controller services, - as well as the technology to provide highly resilient database - functionality, such as MariaDB with Galera. In such a - scenario, replication of services relies on quorum. - Where many general purpose deployments use hardware load - balancers to provide highly available API access and SSL - termination, software solutions, for example HAProxy, can also - be considered. It is vital to ensure that such software - implementations are also made highly available. High - availability can be achieved by using software such as - Keepalived or Pacemaker with Corosync. Pacemaker and Corosync - can provide active-active or active-passive highly available - configuration depending on the specific service in the - OpenStack environment. Using this software can affect the - design as it assumes at least a 2-node controller - infrastructure where one of those nodes may be running certain - services in standby mode. - Memcached is a distributed memory object caching system, and - Redis is a key-value store. Both are deployed on - general purpose clouds to assist in alleviating load to the - Identity service. The memcached service caches tokens, and due - to its distributed nature it can help alleviate some - bottlenecks to the underlying authentication system. Using - memcached or Redis does not affect the overall design of your - architecture as they tend to be deployed onto the - infrastructure nodes providing the OpenStack services. -
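As one concrete example of the caching layer described above, the following is a minimal sketch that generates a keystone caching fragment backed by Memcached. The option names follow the oslo.cache [cache] group but can vary between releases, so treat them as assumptions to verify against your release's configuration reference; the controller host names are placeholders.

```python
# Sketch: a hypothetical keystone.conf fragment enabling Memcached-backed
# caching. Option names follow the oslo.cache [cache] group and should be
# checked against the configuration reference for the release in use.
import configparser

conf = configparser.ConfigParser()
conf["cache"] = {
    "enabled": "true",
    "backend": "dogpile.cache.memcached",
    # Point at the memcached instances running on the controller nodes.
    "memcache_servers": "controller1:11211,controller2:11211",
}

with open("keystone-cache.conf.sample", "w") as f:
    conf.write(f)
```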
- -
- Controller infrastructure
- The Controller infrastructure nodes provide management
- services to the end user as well as services used internally to
- operate the cloud. The Controllers
- run message queuing services that carry system
- messages between each service. Performance issues related to
- the message bus delay the delivery of these messages, which in
- turn delays operations such as spinning up and deleting
- instances, provisioning new storage volumes, and managing
- network resources. Such delays could adversely affect an
- application's ability to react to certain conditions,
- especially when using auto-scaling features. It is important
- to properly design the hardware used to run the controller
- infrastructure, as outlined above in the Hardware Selection
- section.
- Performance of the controller services is not limited
- to processing power; restrictions may also emerge in serving
- concurrent users. Load test the API and dashboard services
- to confirm that you are able to serve your
- customers. Pay particular attention to the
- OpenStack Identity service (keystone), which provides
- authentication and authorization for all services, both
- internally to OpenStack itself and to end users. This service
- can degrade overall performance if it is
- not sized appropriately.
- -
- Network performance - In a general purpose OpenStack cloud, the requirements of - the network help determine performance capabilities. - It is possible to design OpenStack - environments that run a mix of networking capabilities. By - utilizing the different interface speeds, the users of the - OpenStack environment can choose networks that are fit for - their purpose. - Network performance can be boosted considerably by - implementing hardware load balancers to provide front-end - service to the cloud APIs. The hardware load balancers also - perform SSL termination if that is a requirement of your - environment. When implementing SSL offloading, it is important - to understand the SSL offloading capabilities of the devices - selected. -
- -
- Compute host - The choice of hardware specifications used in compute nodes - including CPU, memory and disk type directly affects the - performance of the instances. Other factors which can directly - affect performance include tunable parameters within the - OpenStack services, for example the overcommit ratio applied - to resources. The defaults in OpenStack Compute set a 16:1 - over-commit of the CPU and 1.5 over-commit of the memory. - Running at such high ratios leads to an increase in - "noisy-neighbor" activity. Care must be taken when sizing your - Compute environment to avoid this scenario. For running - general purpose OpenStack environments it is possible to keep - to the defaults, but make sure to monitor your environment as - usage increases. -
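The overcommit ratios mentioned above are tunable. As a minimal sketch, the snippet below shows the default ratios and what they imply for a hypothetical host; the host size is an assumption for illustration, and the cpu_allocation_ratio and ram_allocation_ratio option names should be verified against your release's Compute configuration reference.

```python
# Sketch: default Compute overcommit ratios and their effect on a
# hypothetical host with 24 physical cores and 128 GB of RAM.
cpu_allocation_ratio = 16.0   # default: 16 vCPUs per physical core
ram_allocation_ratio = 1.5    # default: 1.5x physical RAM

physical_cores = 24
physical_ram_gb = 128

schedulable_vcpus = physical_cores * cpu_allocation_ratio
schedulable_ram_gb = physical_ram_gb * ram_allocation_ratio

print(f"vCPUs the scheduler will place on this host: {schedulable_vcpus:.0f}")
print(f"RAM the scheduler will place on this host:   {schedulable_ram_gb:.0f} GB")
# Lowering the ratios reduces "noisy neighbor" contention at the cost of
# packing fewer instances per host.
```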
- -
- Storage performance - When considering performance of OpenStack Block Storage, - hardware and architecture choice is important. Block Storage - can use enterprise back-end systems such as NetApp or EMC, - scale out storage such as GlusterFS and Ceph, or simply use - the capabilities of directly attached storage in the nodes - themselves. Block Storage may be deployed so that traffic - traverses the host network, which could affect, and be - adversely affected by, the front-side API traffic performance. - As such, consider using a dedicated data storage network with - dedicated interfaces on the Controller and Compute - hosts. - When considering performance of OpenStack Object Storage, a - number of design choices will affect performance. A user’s - access to the Object Storage is through the proxy services, - which sit behind hardware load balancers. By the - very nature of a highly resilient storage system, replication - of the data would affect performance of the overall system. In - this case, 10 GbE (or better) networking is recommended - throughout the storage network architecture. -
- -
- Availability
- In OpenStack, the infrastructure is integral to providing
- services and should always be available, especially when
- operating with SLAs. Ensuring network availability is
- accomplished by designing the network architecture so that no
- single point of failure exists. Factor the number of switches,
- routers, and power redundancy into the core infrastructure
- design, as well as the associated
- bonding of networks to provide diverse routes to your highly
- available switch infrastructure.
- The OpenStack services themselves should be deployed across
- multiple servers that do not represent a single point of
- failure. Ensuring API availability can be achieved by placing
- these services behind highly available load balancers that
- have multiple OpenStack servers as members.
- OpenStack lends itself to deployment in a highly available
- manner, which typically requires at least two servers. These
- servers can run all the services involved, from the
- message queuing service, for example RabbitMQ or Qpid, to an
- appropriately deployed database service such as MySQL or
- MariaDB. As services in the cloud are scaled out, back-end
- services will need to scale too. Monitoring and reporting on
- server utilization and response times, as well as load testing
- your systems, will help inform scale-out decisions.
- Care must be taken when deciding network functionality.
- Currently, OpenStack supports both the legacy networking (nova-network)
- system and the newer, extensible OpenStack Networking (neutron). Both
- have their pros and cons when it comes to providing highly
- available access. Legacy networking, which provides networking
- access maintained in the OpenStack Compute code, provides a
- feature that removes a single point of failure when it comes
- to routing, and this feature is currently missing in OpenStack
- Networking. Legacy networking's multi-host
- functionality restricts the failure domain to the host running
- that instance.
- When using OpenStack Networking, the
- OpenStack controller servers or separate Networking
- hosts handle routing. For a deployment that requires features
- available only in Networking, it is possible to
- remove this restriction by using third party software that
- helps maintain highly available L3 routes. Doing so allows for
- common APIs to control network hardware, or to provide complex
- multi-tier web applications in a secure manner. It is also
- possible to completely remove routing from
- Networking, and instead rely on hardware routing capabilities.
- In this case, the switching infrastructure must support L3
- routing.
- OpenStack Networking and legacy networking
- both have their advantages and
- disadvantages. They are both valid and supported options that
- fit different network deployment models described in the
- OpenStack Operations Guide.
- Ensure your deployment has adequate back-up capabilities.
- Application design must also be factored into the
- capabilities of the underlying cloud infrastructure. If the
- compute hosts do not provide a seamless live migration
- capability, then it must be expected that when a compute host
- fails, the instances it was running and any data local to those
- instances are lost. However, if users are given an
- expectation of a high level of instance uptime, the
- infrastructure must be deployed in a way that eliminates any
- single point of failure when a compute host disappears.
This - may include utilizing shared file systems on enterprise - storage or OpenStack Block storage to provide a level of - guarantee to match service features. - For more information on high availability in OpenStack, see the OpenStack - High Availability Guide. - -
- -
- Security
- A security domain comprises users, applications, servers, or
- networks that share common trust requirements and expectations
- within a system. Typically, they have the same authentication
- and authorization requirements and users.
- These security domains are:
- Public
- Guest
- Management
- Data
- These security domains can be mapped to an OpenStack
- deployment individually, or combined. In each case, the cloud operator
- should be aware of the appropriate security concerns. Security
- domains should be mapped out against your specific OpenStack
- deployment topology. The domains and their trust requirements
- depend upon whether the cloud instance is public, private, or
- hybrid.
- The public security domain is an entirely untrusted area of
- the cloud infrastructure. It can refer to the internet as a
- whole or simply to networks over which you have no authority.
- This domain should always be considered untrusted.
- The guest security domain handles compute data generated by
- instances on the cloud but not services that support the
- operation of the cloud, such as API calls. Public cloud
- providers and private cloud providers who do not have
- stringent controls on instance use or who allow unrestricted
- internet access to instances should consider this domain to be
- untrusted. Private cloud providers may want to consider this
- network as internal and therefore trusted only if they have
- controls in place to assert that they trust instances and all
- their tenants.
- The management security domain is where services interact.
- Sometimes referred to as the control plane, the networks
- in this domain transport confidential data such as configuration
- parameters, user names, and passwords. In most deployments this
- domain is considered trusted.
- The data security domain is concerned primarily with
- information pertaining to the storage services within
- OpenStack. Much of the data that crosses this network has high
- integrity and confidentiality requirements and, depending on
- the type of deployment, may also have strong availability
- requirements. The trust level of this network is heavily
- dependent on other deployment decisions.
- When deploying OpenStack in an enterprise as a private cloud,
- it is usually deployed behind the firewall and within the trusted
- network alongside existing systems. Users of the cloud are
- employees who are bound by the security
- requirements set forth by the company. This tends to push most
- of the security domains towards a more trusted model. However,
- when deploying OpenStack in a public facing role, no
- assumptions can be made and the attack vectors significantly
- increase.
- Take care when managing the users of the
- system for both public and private clouds. The Identity
- service allows LDAP to be part of the authentication
- process, and including such systems in an OpenStack deployment may
- ease user management when integrating with existing
- systems.
- It is important to understand that user authentication
- requests include sensitive information such as user names,
- passwords, and authentication tokens. For this reason, we strongly
- recommend placing the API services behind hardware that performs
- SSL termination.
- For more information on OpenStack security, see the OpenStack
- Security Guide.
-
diff --git a/doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml b/doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml deleted file mode 100644 index b8eec88776..0000000000 --- a/doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml +++ /dev/null @@ -1,155 +0,0 @@ - -
- - User requirements - When building a general purpose cloud, you should follow the - Infrastructure-as-a-Service (IaaS) - model; a platform best suited for use cases with simple requirements. - General purpose cloud user requirements are not complex. - However, it is important to capture them even - if the project has minimum business and technical requirements, such as a - proof of concept (PoC), or a small lab platform. - - - The following user considerations are written from the perspective of - the cloud builder, not from the perspective of the end user. - - - - - Cost - - Financial factors are a primary concern for - any organization. Cost is an important criterion - as general purpose clouds are considered the baseline - from which all other cloud architecture environments - derive. General purpose clouds do not always provide - the most cost-effective environment for specialized - applications or situations. Unless razor-thin margins and costs have - been mandated as a critical factor, cost should not be - the sole consideration when choosing or designing a - general purpose architecture. - - - - Time to market - - The ability to deliver services or products within - a flexible time frame is a common business factor - when building a general purpose cloud. - Delivering a product in six months instead - of two years is a driving force behind the - decision to build general purpose clouds. General - purpose clouds allow users to self-provision and gain - access to compute, network, and storage resources - on-demand thus decreasing time to market. - - - - Revenue opportunity - - Revenue opportunities for a - cloud will vary greatly based on the intended - use case of that particular cloud. Some general - purpose clouds are built for commercial customer - facing products, but there are alternatives - that might make the general purpose cloud the right - choice. - - - -
- Technical requirements
- Technical cloud architecture requirements should be weighed
- against the business requirements.
- Performance
- As a baseline product, general purpose
- clouds do not provide optimized performance for any
- particular function. While a general purpose cloud
- should provide enough performance to satisfy average
- user considerations, performance is not a general
- purpose cloud customer driver.
- No predefined usage model
- The lack of a predefined
- usage model enables the user to run a wide variety of
- applications without having to know the application
- requirements in advance. This provides a degree of
- independence and flexibility that no other cloud
- scenario is able to provide.
- On-demand and self-service application
- By
- definition, a cloud provides end users with the
- ability to self-provision computing power, storage,
- networks, and software in a simple and flexible way.
- The user must be able to scale their resources up to a
- substantial level without disrupting the underlying
- host operations. One of the benefits of using a
- general purpose cloud architecture is the ability to
- start with limited resources and increase them over
- time as user demand grows.
- Public cloud
- For a company interested in building a
- commercial public cloud offering based on OpenStack,
- the general purpose architecture model might be the
- best choice. Designers are not always going to
- know the purposes or workloads for which the end users
- will use the cloud.
- Internal consumption (private) cloud
- Organizations need to determine if it is logical to
- create their own clouds internally. Using a private cloud,
- organizations are able to maintain complete control over
- architectural and cloud components.
- Users may want to combine access to the internal
- cloud with access to an external
- cloud. If that case is likely, it might be worth
- exploring the possibility of taking a multi-cloud
- approach with regard to at least some of the
- architectural elements.
- Designs that incorporate the
- use of multiple clouds, such as a private cloud and a
- public cloud offering, are described in the
- "Multi-Cloud" scenario in this guide.
- Security
- Security should be implemented according
- to asset, threat, and vulnerability risk assessment
- matrices. For cloud domains that require increased
- computer security, network security, or information
- security, a general purpose cloud is not considered an
- appropriate choice.
-
diff --git a/doc/arch-design/hybrid/section_architecture_hybrid.xml b/doc/arch-design/hybrid/section_architecture_hybrid.xml deleted file mode 100644 index ae8420e072..0000000000 --- a/doc/arch-design/hybrid/section_architecture_hybrid.xml +++ /dev/null @@ -1,190 +0,0 @@ - -
- - Architecture - Map out the dependencies of the expected workloads - and the cloud infrastructures required to support them to architect a - solution for the broadest compatibility between cloud platforms, - minimizing the need to create workarounds and processes to fill - identified gaps. - For your chosen cloud management platform, note the relative - levels of support for both monitoring and orchestration. - - - - - - -
- Image portability
- The majority of cloud workloads currently run on instances
- using hypervisor technologies. The challenge is that each of these
- hypervisors uses an image format that may not be compatible with the
- others. When possible, standardize on a single hypervisor and instance
- image format. This may not be possible when using externally-managed
- public clouds.
- Conversion tools exist to address image format compatibility.
- Examples include virt-p2v/virt-v2v
- and virt-edit. These tools handle the format conversion itself but
- cannot account for anything beyond basic cloud instance
- specifications.
- Alternatively, build a thin operating system image as
- the base for new instances. This facilitates rapid creation of cloud
- instances using cloud orchestration or configuration management tools
- for more specific templating. Remember that if you intend to use portable
- images for disaster recovery, application diversity, or high
- availability, your users could move the images and instances between
- cloud platforms regularly.
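As a minimal sketch of the conversion step, the snippet below wraps qemu-img convert, a commonly used tool for moving between disk image formats. The file names and the VMDK-to-QCOW2 direction are placeholder assumptions for illustration.

```python
# Sketch: convert a VMware VMDK image to QCOW2 with qemu-img so it can be
# uploaded to an OpenStack cloud. File names and formats are placeholders.
import subprocess

def convert_image(src="appserver.vmdk", dst="appserver.qcow2",
                  src_fmt="vmdk", dst_fmt="qcow2"):
    """Run qemu-img convert; raises CalledProcessError on failure."""
    subprocess.run(
        ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst],
        check=True,
    )

if __name__ == "__main__":
    convert_image()
    # The resulting QCOW2 file can then be registered with the Image
    # service (glance) on the destination cloud.
```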
- -
- Upper-layer services - Many clouds offer complementary services beyond the - basic compute, network, and storage components. These - additional services often simplify the deployment - and management of applications on a cloud platform. - When moving workloads from the source to the destination - cloud platforms, consider that the destination cloud platform - may not have comparable services. Implement workloads in a - different way or by using a different technology. - For example, moving an application that uses a NoSQL database - service such as MongoDB could cause difficulties in maintaining - the application between the platforms. - There are a number of options that are appropriate for - the hybrid cloud use case: - - - Implementing a baseline of upper-layer services - across all of the cloud platforms. For - platforms that do not support a given service, create - a service on top of that platform and apply it to the - workloads as they are launched on that cloud. - For example, through the Database service - for OpenStack (trove), - OpenStack supports MySQL-as-a-Service but not NoSQL - databases in production. To move from or run - alongside AWS, a NoSQL workload must use an automation - tool, such as the Orchestration service (heat), to - recreate the NoSQL database on top of OpenStack. - - - - Deploying a Platform-as-a-Service (PaaS) - technology that abstracts the - upper-layer services from the underlying cloud - platform. The unit of application deployment and - migration is the PaaS. It leverages the services of - the PaaS and only consumes the base infrastructure - services of the cloud platform. - - - Using automation tools to create the required upper-layer services - that are portable across all cloud platforms. - For example, instead of using database services that - are inherent in the cloud platforms, launch cloud - instances and deploy the databases on those - instances using scripts or configuration and - application deployment tools. - - -
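For example, the first option in the list above (recreating a database service with the Orchestration service) could start from a small HOT template. The sketch below generates a minimal template; the image, flavor, and network names are hypothetical, and a real deployment would add volumes, security groups, and the actual database installation and configuration in user_data.

```python
# Sketch: emit a minimal HOT (Heat Orchestration Template) that launches a
# server to host a NoSQL database. Image, flavor, and network names are
# placeholders for illustration only.
import yaml

template = {
    "heat_template_version": "2013-05-23",
    "description": "Minimal NoSQL database server for a hybrid cloud workload",
    "resources": {
        "db_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "ubuntu-14.04",          # placeholder image name
                "flavor": "m1.medium",            # placeholder flavor
                "networks": [{"network": "private-net"}],
                "user_data": "#!/bin/bash\napt-get install -y mongodb-server\n",
            },
        }
    },
}

print(yaml.safe_dump(template, default_flow_style=False))
```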
- -
- Network services
- Network services functionality is a critical component of
- multiple cloud architectures. It is an important factor
- to assess when choosing a CMP and cloud provider.
- Considerations include:
- Functionality
- Security
- Scalability
- High availability (HA)
- Verify and test critical cloud endpoint features.
- After selecting the network functionality framework,
- you must confirm that the functionality is compatible. This
- ensures testing and functionality persist
- during and after upgrades.
- Diverse cloud platforms may de-synchronize
- over time if you do not maintain their mutual compatibility.
- This is a particular issue with APIs.
- Scalability across multiple cloud providers determines
- your choice of underlying network framework. It is important to
- have the network API functions presented and to verify
- that the desired functionality persists across all
- chosen cloud endpoints.
- High availability implementations vary in
- functionality and design. Examples of some common
- methods are active-hot-standby, active-passive, and
- active-active. Develop your high availability
- implementation and a test framework to understand
- the functionality and limitations of the environment.
- It is imperative to address security considerations. For
- example, address how data is secured between the client and the
- endpoint, and how any traffic that traverses the multiple clouds
- is secured. Business and regulatory requirements dictate what
- security approach to take. For more information, see the
- Security Requirements chapter.
- -
- Data - Traditionally, replication has been the best method of protecting - object store implementations. A variety of replication methods exist - in storage architectures, for example synchronous and asynchronous - mirroring. Most object stores and back-end storage systems implement - methods for replication at the storage subsystem layer. - Object stores also tailor replication techniques - to fit a cloud's requirements. - Organizations must find the right balance between - data integrity and data availability. Replication strategy may - also influence disaster recovery methods. - Replication across different racks, data centers, and - geographical regions increases focus on - determining and ensuring data locality. The ability to - guarantee data is accessed from the nearest or fastest storage - can be necessary for applications to perform well. - - When running embedded object store methods, ensure that you do - not instigate extra data replication as this can cause performance - issues. - -
-
diff --git a/doc/arch-design/hybrid/section_operational_considerations_hybrid.xml b/doc/arch-design/hybrid/section_operational_considerations_hybrid.xml deleted file mode 100644 index 47ef9085c2..0000000000 --- a/doc/arch-design/hybrid/section_operational_considerations_hybrid.xml +++ /dev/null @@ -1,86 +0,0 @@ - -
- - Operational considerations - Hybrid cloud deployments present complex operational - challenges. Differences between provider clouds can cause - incompatibilities with workloads or Cloud Management - Platforms (CMP). Cloud providers may also offer different levels of - integration with competing cloud offerings. - Monitoring is critical to maintaining a hybrid cloud, and it is - important to determine if a CMP supports - monitoring of all the clouds involved, or if compatible APIs - are available to be queried for necessary information. - -
- Agility - Hybrid clouds provide application - availability across different cloud environments and - technologies. This availability enables the deployment to - survive disaster in any single cloud environment. - Each cloud should provide the means to create instances quickly - in response to capacity issues or failure elsewhere in the hybrid - cloud. -
- -
- Application readiness - Enterprise workloads that depend on the - underlying infrastructure for availability are not designed to - run on OpenStack. If the application cannot - tolerate infrastructure failures, it is likely to require - significant operator intervention to recover. Applications for - hybrid clouds must be fault tolerant, with an SLA that is not tied - to the underlying infrastructure. Ideally, cloud applications should be - able to recover when entire racks and data centers experience an - outage. -
- -
- Upgrades - If a deployment includes a public cloud, predicting - upgrades may not be possible. Carefully examine provider SLAs. - - At massive scale, even when - dealing with a cloud that offers an SLA with a high percentage - of uptime, workloads must be able to recover quickly. - - When upgrading private cloud deployments, minimize disruption by - making incremental changes and providing a facility to either rollback - or continue to roll forward when using a continuous delivery - model. - You may need to coordinate CMP upgrades with hybrid cloud upgrades if - there are API changes. -
- -
- Network Operations Center
- Consider infrastructure control
- when planning the Network Operations Center (NOC)
- for a hybrid cloud environment. If a significant
- portion of the cloud is on externally managed systems,
- prepare for situations where it may not be possible to
- make changes.
- Additionally, providers may differ on how
- infrastructure must be managed and exposed. This can lead to
- delays in root cause analysis where each provider insists the
- blame lies with the other.
- Ensure that the network structure connects all clouds to form an
- integrated system, keeping in mind the state of handoffs.
- These handoffs must be as reliable as possible and
- introduce as little latency as possible to ensure the best
- performance of the overall system.
- -
- Maintainability - Hybrid clouds rely on third party systems and processes. As a - result, it is not possible to guarantee - proper maintenance of the overall system. Instead, be prepared to - abandon workloads and recreate them in an improved state. -
-
diff --git a/doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml b/doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml deleted file mode 100644 index d87f20036b..0000000000 --- a/doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml +++ /dev/null @@ -1,173 +0,0 @@ - -
- - Prescriptive examples - Hybrid cloud environments are designed for - these use cases: - - - Bursting workloads from private to public OpenStack - clouds - - - Bursting workloads from private to public - non-OpenStack clouds - - - High availability across clouds (for technical - diversity) - - - This chapter provides examples of environments - that address each of these use cases. -
- Bursting to a public OpenStack cloud
- Company A's data center is running low on
- capacity. It is not possible to expand the data center in the
- foreseeable future. In order to accommodate
- the continuously growing need for development resources in the
- organization, Company A decides to use resources in the public
- cloud.
- Company A has an established data
- center with a substantial amount of hardware. Migrating the
- existing workloads to a public cloud is not feasible.
- The company has an internal cloud management platform that
- directs requests to the appropriate cloud, depending on
- the local capacity. This is a custom in-house application written for
- this specific purpose.
- This solution is depicted in the figure below:
- This example shows two clouds with a Cloud Management
- Platform (CMP) connecting them. This guide does not
- discuss a specific CMP, but describes how the Orchestration and
- Telemetry services handle, manage, and control workloads.
- The private OpenStack cloud has at least one
- controller and at least one compute node. It includes
- metering using the Telemetry service. The Telemetry service
- captures the load increase and the CMP processes the information.
- If there is available capacity, the CMP uses the
- OpenStack API to call the Orchestration service. This creates
- instances on the private cloud in response to user requests.
- When capacity is not available on the private cloud,
- the CMP issues a request to the Orchestration service API of
- the public cloud. This creates the instance on the public
- cloud.
- In this example, Company A does not direct the deployments to an
- external public cloud due to concerns regarding resource control,
- security, and increased operational expense.
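The following is a conceptual sketch, not a real CMP, of the bursting decision described above: check private cloud capacity as reported by Telemetry and direct the Orchestration request to the private or public endpoint accordingly. The endpoints, stub functions, and the 80% threshold are illustrative assumptions.

```python
# Conceptual sketch of a CMP's bursting decision. The capacity query and
# stack-launch calls are stubs standing in for Telemetry and Orchestration
# API calls; the 80% threshold is an arbitrary example.

PRIVATE_CLOUD = "https://private.example.com"
PUBLIC_CLOUD = "https://public.example.com"
CAPACITY_THRESHOLD = 0.80

def private_cloud_utilization():
    """Stub: would aggregate Telemetry (ceilometer) meters for the private cloud."""
    return 0.92  # pretend the private cloud is 92% utilized

def launch_stack(endpoint, template):
    """Stub: would call the Orchestration (heat) API at the given endpoint."""
    print(f"Launching stack on {endpoint}")

def handle_request(template):
    if private_cloud_utilization() < CAPACITY_THRESHOLD:
        launch_stack(PRIVATE_CLOUD, template)   # capacity available internally
    else:
        launch_stack(PUBLIC_CLOUD, template)    # burst to the public cloud

handle_request(template={"resources": {}})
```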
- -
- Bursting to a public non-OpenStack cloud - The second example examines bursting workloads from the - private cloud into a non-OpenStack public cloud using Amazon - Web Services (AWS) to take advantage of additional capacity - and to scale applications. - The following diagram demonstrates an OpenStack-to-AWS hybrid - cloud: - - - - - - Company B states that its developers are already using AWS and - do not want to change to a different provider. - If the CMP is capable of connecting to an external - cloud provider with an appropriate API, the workflow process - remains the same as the previous scenario. The actions the - CMP takes, such as monitoring loads and creating new instances, - stay the same. However, the CMP performs actions in the - public cloud using applicable API calls. - If the public cloud is AWS, the CMP would use the - EC2 API to create a new instance and assign an Elastic IP. - It can then add that IP to HAProxy in the private cloud. - The CMP can also reference AWS-specific - tools such as CloudWatch and CloudFormation. - Several open source tool kits for building CMPs are - available and can handle this kind of translation. Examples include - ManageIQ, jClouds, and JumpGate. -
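To illustrate the AWS-side calls mentioned above, here is a hedged sketch using boto3 to launch an EC2 instance and attach an Elastic IP. The AMI ID, instance type, and region are placeholders, and a production CMP would add error handling before registering the public IP with HAProxy in the private cloud.

```python
# Sketch: burst a workload to AWS with boto3. AMI ID, instance type, and
# region are placeholders; credentials come from the normal boto3 lookup.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from a placeholder AMI.
resp = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                         InstanceType="t2.medium",
                         MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]

# Wait until the instance is running, then attach an Elastic IP.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId=instance_id,
                      AllocationId=eip["AllocationId"])

print(f"Instance {instance_id} reachable at {eip['PublicIp']}")
# The CMP would now add eip['PublicIp'] as a backend in the private
# cloud's HAProxy configuration.
```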
- -
- High availability and disaster recovery
- Company C requires its local data center
- to be able to recover from failure. Some of the
- workloads currently in use are running on its private
- OpenStack cloud. Protecting the data involves Block Storage,
- Object Storage, and a database. The architecture
- supports the failure of large components of the system while
- ensuring that the system continues to deliver services.
- While the services remain available to users, the failed
- components are restored in the background based on standard
- best practice data replication policies. To achieve these objectives,
- Company C replicates data to a second cloud in a geographically distant
- location. The following diagram describes this system:
- This example includes two private OpenStack clouds connected
- with a CMP. The source cloud,
- OpenStack Cloud 1, includes a controller and at least one
- instance running MySQL. It also includes at least one Block
- Storage volume and one Object Storage volume. This means that data
- is available to the users at all times. The details of the
- method for protecting each of these sources of data
- differ.
- Object Storage relies on the replication capabilities of
- the Object Storage provider. Company C enables OpenStack Object Storage
- so that it creates geographically separated replicas
- that take advantage of this feature. The company configures storage
- so that at least one replica exists in each cloud. In order to make
- this work, the company configures a single array spanning both clouds
- with OpenStack Identity. Using Federated Identity, the array talks
- to both clouds, communicating with OpenStack Object Storage
- through the Swift proxy.
- For Block Storage, the replication is a little more
- difficult, and involves tools outside of OpenStack itself. The
- OpenStack Block Storage volume is not set as the drive itself
- but as a logical object that points to a physical back end. Disaster
- recovery is configured for Block Storage with
- synchronous backup for the highest level of data protection,
- but asynchronous backup could have been set as an alternative
- that is not as latency sensitive. For asynchronous backup, the
- Block Storage API makes it possible to export the data and also the
- metadata of a particular volume, so that it can be moved and
- replicated elsewhere. More information can be found here:
- https://blueprints.launchpad.net/cinder/+spec/cinder-backup-volume-metadata-support.
- The synchronous backup creates an identical volume in both
- clouds and chooses the appropriate flavor so that each cloud
- has an identical back end. This is done by creating volumes
- through the CMP. After this is configured, a solution
- involving DRBD synchronizes the physical drives.
- The database component is backed up using synchronous
- backups. MySQL does not support geographically diverse
- replication, so disaster recovery is provided by replicating
- the file itself. As it is not possible to use Object Storage
- as the back end of a database like MySQL, Swift replication
- is not an option. Company C decides not to store the data on
- another geo-tiered storage system, such as Ceph, as Block
- Storage, although this would have given another layer of protection.
- Another option would have been to store the database on an
- OpenStack Block Storage volume and back it up like any
- other Block Storage volume.
-
diff --git a/doc/arch-design/hybrid/section_tech_considerations_hybrid.xml b/doc/arch-design/hybrid/section_tech_considerations_hybrid.xml deleted file mode 100644 index 041842941f..0000000000 --- a/doc/arch-design/hybrid/section_tech_considerations_hybrid.xml +++ /dev/null @@ -1,196 +0,0 @@ - - -%openstack; -]> -
- - Technical considerations - A hybrid cloud environment requires inspection and - understanding of technical issues in external data centers that may - not be in your control. Ideally, select an architecture - and CMP that are adaptable to changing environments. - Using diverse cloud platforms increases the risk of compatibility - issues, but clouds using the same version and distribution - of OpenStack are less likely to experience problems. - Clouds that exclusively use the same versions of OpenStack should - have no issues, regardless of distribution. More recent distributions - are less likely to encounter incompatibility between versions. An - OpenStack community initiative defines core functions that need to - remain backward compatible between supported versions. For example, the - DefCore initiative defines basic functions that every distribution must - support in order to use the name OpenStack. - - Vendors can add proprietary customization to their distributions. If - an application or architecture makes use of these features, it can be - difficult to migrate to or use other types of environments. - If an environment includes non-OpenStack clouds, it may experience - compatibility problems. CMP tools must account for the differences in - the handling of operations and the implementation of services. - - Possible cloud incompatibilities - - Instance deployment - - - Network management - - - Application management - - - Services implementation - - - -
- Capacity planning - One of the primary reasons many organizations use a - hybrid cloud is to increase capacity without making large capital - investments. - Capacity and the placement of workloads are key design considerations - for hybrid clouds. The long-term capacity plan for these - designs must incorporate growth over time to prevent permanent - consumption of more expensive external clouds. To avoid this scenario, - account for future applications' capacity requirements and plan growth - appropriately. - It is difficult to predict the amount of load a particular - application might incur if the number of users fluctuates, or the - application experiences an unexpected increase in use. It is - possible to define application requirements in terms of vCPU, RAM, - bandwidth, or other resources and plan appropriately. However, other - clouds might not use the same meter or even the same oversubscription - rates. - Oversubscription is a method to emulate more capacity than - may physically be present. For example, a physical - hypervisor node with 32 GB RAM may host 24 - instances, each provisioned with 2 GB RAM. As long - as all 24 instances do not concurrently use 2 full - gigabytes, this arrangement works well. However, some - hosts take oversubscription to extremes and, as a result, - performance can be inconsistent. If at all - possible, determine what the oversubscription rates of each - host are and plan capacity accordingly. -
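The example above works out as follows; this is a small sketch to make the ratio explicit, using the same hypothetical 32 GB host from the text.

```python
# The worked example from the text: a 32 GB host running 24 instances of
# 2 GB each corresponds to a 1.5:1 RAM oversubscription ratio.
physical_ram_gb = 32
instances = 24
ram_per_instance_gb = 2

provisioned_gb = instances * ram_per_instance_gb          # 48 GB promised
oversubscription = provisioned_gb / physical_ram_gb       # 1.5

print(f"Provisioned RAM: {provisioned_gb} GB on {physical_ram_gb} GB physical "
      f"-> oversubscription ratio {oversubscription:.1f}:1")
# The arrangement only works while concurrent usage stays below 32 GB;
# other clouds may use very different ratios, so compare before bursting.
```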
-
- Utilization - A CMP must be aware of what workloads are running, where they are - running, and their preferred utilizations. For example, in - most cases it is desirable to run as many workloads internally - as possible, utilizing other resources only when necessary. On - the other hand, situations exist in which the opposite is - true, such as when an internal cloud is only for development and - stressing it is undesirable. A cost model of various scenarios and - consideration of internal priorities helps with this decision. To - improve efficiency, automate these decisions when possible. - The Telemetry service (ceilometer) provides information on the usage - of various OpenStack components. Note the following: - - - - If Telemetry must retain a large amount of data, for - example when monitoring a large or active cloud, we recommend - using a NoSQL back end such as MongoDB. - - - - You must monitor connections to non-OpenStack clouds - and report this information to the CMP. - - -
- -
- Performance - Performance is critical to hybrid cloud deployments, and they are - affected by many of the same issues as multi-site deployments, - such as network latency between sites. Also consider the time required - to run a workload in different clouds and methods for reducing this - time. This may require moving data closer to applications - or applications closer to the data they process, and - grouping functionality so that connections that - require low latency take place over a single cloud rather than - spanning clouds. This may also require a CMP that can determine which - cloud can most efficiently run which types of workloads. - As with utilization, native OpenStack tools help improve performance. - For example, you can use Telemetry to measure performance and the - Orchestration service (heat) to react to changes in demand. - - Orchestration requires special client configurations to integrate - with Amazon Web Services. For other types of clouds, use CMP - features. - - -
- -
- Components
- Using more than one cloud in any design requires consideration of
- four OpenStack tools:
- OpenStack Compute (nova)
- Regardless of deployment location, hypervisor choice has a
- direct effect on how difficult it is to integrate with
- additional clouds.
- Networking (neutron)
- Whether using OpenStack Networking (neutron) or legacy
- networking (nova-network), it is necessary to understand
- network integration capabilities in order to
- connect between clouds.
- Telemetry (ceilometer)
- Use of Telemetry depends, in large part, on the other
- parts of the cloud you are using.
- Orchestration (heat)
- Orchestration can be a valuable tool for carrying out the tasks
- a CMP decides are necessary in an OpenStack-based cloud.
- -
- Special considerations - Hybrid cloud deployments require consideration of two issues that - are not common in other situations: - - - Image portability - - As of the Kilo release, there is no common image format that is - usable by all clouds. Conversion or recreation of images is necessary - if migrating between clouds. To simplify deployment, use the smallest - and simplest images feasible, install only what is necessary, and - use a deployment manager such as Chef or Puppet. Do not use golden - images to speed up the process unless you repeatedly deploy the same - images on the same cloud. - - - - API differences - - Avoid using a hybrid cloud deployment with more than just - OpenStack (or with different versions of OpenStack) as API changes - can cause compatibility issues. - - - -
-
diff --git a/doc/arch-design/hybrid/section_user_requirements_hybrid.xml b/doc/arch-design/hybrid/section_user_requirements_hybrid.xml deleted file mode 100644 index 97b5085921..0000000000 --- a/doc/arch-design/hybrid/section_user_requirements_hybrid.xml +++ /dev/null @@ -1,258 +0,0 @@ - -
- - User requirements - Hybrid cloud architectures are complex, especially those - that use heterogeneous cloud platforms. Ensure that design choices - match requirements so that the benefits outweigh the inherent additional - complexity and risks. - - Business considerations when designing a hybrid - cloud deployment - - Cost - - A hybrid cloud architecture involves multiple - vendors and technical architectures. These - architectures may be more expensive to deploy and - maintain. Operational costs can be higher because of - the need for more sophisticated orchestration and - brokerage tools than in other architectures. In - contrast, overall operational costs might be lower by - virtue of using a cloud brokerage tool to deploy the - workloads to the most cost effective platform. - - - - Revenue opportunity - - Revenue opportunities vary based on the intent and use case - of the cloud. As a commercial, customer-facing product, you - must consider whether building over multiple platforms makes - the design more attractive to customers. - - - - Time-to-market - - One common reason to use cloud platforms is to improve the - time-to-market of a new product or application. For example, - using multiple cloud platforms is viable because there is an - existing investment in several applications. It is faster to - tie the investments together rather than migrate the - components and refactoring them to a single platform. - - - - Business or technical diversity - - Organizations leveraging cloud-based services can - embrace business diversity and utilize a hybrid cloud - design to spread their workloads across multiple cloud - providers. This ensures that no single cloud provider is - the sole host for an application. - - - - Application momentum - - Businesses with existing applications may find that it is - more cost effective to integrate applications on multiple - cloud platforms than migrating them to a single platform. - - - - -
- Workload considerations - A workload can be a single application or a suite of applications - that work together. It can also be a duplicate set of applications that - need to run on multiple cloud environments. In a hybrid cloud - deployment, the same workload often needs to function - equally well on radically different public and private cloud - environments. The architecture needs to address these - potential conflicts, complexity, and platform - incompatibilities. - - Use cases for a hybrid cloud architecture - - Dynamic resource expansion or bursting - - An application that requires additional resources may suit - a multiple cloud architecture. - For example, a retailer needs additional resources - during the holiday season, but does not want to add private - cloud resources to meet the peak demand. The user can - accommodate the increased load by bursting to - a public cloud for these peak load - periods. These bursts could be for long or short - cycles ranging from hourly to yearly. - - - - Disaster recovery and business continuity - - Cheaper storage makes the public - cloud suitable for maintaining backup applications. - - - - Federated hypervisor and instance management - - Adding self-service, charge back, and transparent delivery of - the resources from a federated pool can be cost - effective. In a hybrid cloud environment, this is a - particularly important consideration. Look for a cloud - that provides cross-platform hypervisor support and - robust instance management tools. - - - - Application portfolio integration - - An enterprise cloud delivers efficient application portfolio - management and deployments by leveraging - self-service features and rules according to use. Integrating - existing cloud environments is a common driver when building - hybrid cloud architectures. - - - - Migration scenarios - - Hybrid cloud architecture enables the migration of - applications between different clouds. - - - - High availability - - A combination of locations and platforms enables a - level of availability that is not - possible with a single platform. This approach increases - design complexity. - - - - As running a workload on multiple cloud platforms increases design - complexity, we recommend first exploring options such as transferring - workloads across clouds at the application, instance, cloud platform, - hypervisor, and network levels. -
- -
- Tools considerations - Hybrid cloud designs must incorporate tools to facilitate working - across multiple clouds. - - Tool functions - - Broker between clouds - - Brokering software evaluates relative costs between different - cloud platforms. Cloud Management Platforms (CMP) - allow the designer to determine the right location for the - workload based on predetermined criteria. - - - - Facilitate orchestration across the clouds - - CMPs simplify the migration of application workloads between - public, private, and hybrid cloud platforms. We recommend - using cloud orchestration tools for managing a diverse - portfolio of systems and applications across multiple cloud - platforms. - - - -
- -
- Network considerations
- It is important to consider the functionality, security, scalability,
- availability, and testability of the network when choosing a CMP and
- cloud provider.
- Decide on a network framework and
- design minimum functionality tests. This ensures
- testing and functionality persist during and after
- upgrades.
- Scalability across multiple cloud providers may
- dictate which underlying network framework you
- choose for each provider. It is important
- to present the network API functions and to
- verify that functionality persists across all cloud
- endpoints chosen.
- High availability implementations vary in
- functionality and design. Examples of some common
- methods are active-hot-standby, active-passive, and
- active-active. Developing high availability and test
- frameworks is necessary to ensure an understanding of
- functionality and limitations.
- Consider the security of data between the client and the
- endpoint, and of traffic that traverses the multiple
- clouds.
- -
- Risk mitigation and management considerations - Hybrid cloud architectures introduce additional risk because - they are more complex than a single cloud design and may involve - incompatible components or tools. However, they also reduce - risk by spreading workloads over multiple providers. - - Hybrid cloud risks - - Provider availability or implementation details - - - Business changes can affect provider availability. Likewise, - changes in a provider's service can disrupt a hybrid cloud - environment or increase costs. - - - - Differing SLAs - - Hybrid cloud designs must accommodate differences in SLAs - between providers, and consider their enforceability. - - - - Security levels - - Securing multiple cloud - environments is more complex than securing single - cloud environments. We recommend addressing concerns at - the application, network, and cloud platform levels. - Be aware that each cloud platform approaches security - differently, and a hybrid cloud design must address and - compensate for these differences. - - - - Provider API changes - - Consumers of external clouds rarely have control over - provider changes to APIs, and changes can break compatibility. - Using only the most common and basic APIs can minimize - potential conflicts. - - - -
-
diff --git a/doc/arch-design/introduction/section_how_this_book_is_organized.xml b/doc/arch-design/introduction/section_how_this_book_is_organized.xml deleted file mode 100644 index 2f71ceccc7..0000000000 --- a/doc/arch-design/introduction/section_how_this_book_is_organized.xml +++ /dev/null @@ -1,106 +0,0 @@ - -
- How this book is organized - This book examines some of the most common uses for OpenStack - clouds, and explains the considerations for each use case. - Cloud architects may use this book as a comprehensive guide by - reading all of the use cases, but it is also possible to review - only the chapters which pertain to a specific use case. - The use cases covered in this guide include: - - - - General purpose: Uses common components that address - 80% of common use cases. - - - - - Compute focused: For compute intensive workloads - such as high performance computing (HPC). - - - - - Storage focused: For storage intensive workloads such as - data analytics with parallel file systems. - - - - - Network focused: For high performance and reliable - networking, such as a content delivery network (CDN). - - - - - Multi-site: For applications that require multiple site - deployments for geographical, reliability or data - locality reasons. - - - - - Hybrid cloud: Uses multiple disparate clouds - connected either for failover, hybrid cloud bursting, or - availability. - - - - - Massively - scalable: For - cloud service providers or other large - installations - - - - - Specialized cases: Architectures that have not - previously been covered in the defined use cases. - - - - - -
diff --git a/doc/arch-design/introduction/section_how_this_book_was_written.xml b/doc/arch-design/introduction/section_how_this_book_was_written.xml deleted file mode 100644 index 28c9dccf82..0000000000 --- a/doc/arch-design/introduction/section_how_this_book_was_written.xml +++ /dev/null @@ -1,95 +0,0 @@ - -
- Why and how we wrote this book - We wrote this book to guide you through designing an OpenStack cloud - architecture. This guide identifies design considerations - for common cloud use cases and provides examples. - The Architecture Design Guide was written in a book sprint format, - which is a facilitated, rapid development production method for books. - The Book Sprint was facilitated by Faith Bosworth and Adam - Hyde of Book Sprints, for more information, see the Book Sprints website - (www.booksprints.net). - This book was written in five days during July 2014 while - exhausting the M&M, Mountain Dew and healthy options - supply, complete with juggling entertainment during lunches at - VMware's headquarters in Palo Alto. - We would like to thank VMware for their generous - hospitality, as well as our employers, Cisco, Cloudscaling, - Comcast, EMC, Mirantis, Rackspace, Red Hat, Verizon, and - VMware, for enabling us to contribute our time. We would - especially like to thank Anne Gentle and Kenneth Hui for all - of their shepherding and organization in making this - happen. - The author team includes: - - - Kenneth Hui (EMC) - @hui_kenneth - - - Alexandra Settle (Rackspace) - @dewsday - - - Anthony Veiga (Comcast) - @daaelar - - - Beth Cohen (Verizon) - @bfcohen - - - Kevin Jackson (Rackspace) - @itarchitectkev - - - Maish Saidel-Keesing (Cisco) - @maishsk - - - Nick Chase (Mirantis) - @NickChase - - - Scott Lowe (VMware) - @scott_lowe - - - Sean Collins (Comcast) - @sc68cal - - - Sean Winn (Cloudscaling) - @seanmwinn - - - Sebastian Gutierrez (Red Hat) - @gutseb - - - Stephen Gordon (Red Hat) - @xsgordon - - - Vinny Valdez (Red Hat) - @VinnyValdez - - -
diff --git a/doc/arch-design/introduction/section_intended_audience.xml b/doc/arch-design/introduction/section_intended_audience.xml deleted file mode 100644 index 4cf1263380..0000000000 --- a/doc/arch-design/introduction/section_intended_audience.xml +++ /dev/null @@ -1,18 +0,0 @@ - -
- Intended audience - This book has been written for architects and designers of - OpenStack clouds. For a guide on deploying and operating - OpenStack, please refer to the OpenStack Operations - Guide (http://docs.openstack.org/openstack-ops). - - Before reading this book, we recommend prior knowledge of cloud architecture - and principles, experience in enterprise system design, Linux - and virtualization experience, and a basic understanding of - networking principles and protocols. -
diff --git a/doc/arch-design/introduction/section_methodology.xml b/doc/arch-design/introduction/section_methodology.xml deleted file mode 100644 index f58e2a4ca4..0000000000 --- a/doc/arch-design/introduction/section_methodology.xml +++ /dev/null @@ -1,204 +0,0 @@ - - -%openstack; -]> -
- Methodology - The best way to design your cloud architecture is through creating and - testing use cases. Planning for applications that support thousands of - sessions per second, variable workloads, and complex, changing data, - requires you to identify the key meters. Identifying these key meters, - such as number of concurrent transactions per second, and size of - database, makes it possible to build a method for testing your assumptions. - Use a functional user scenario to develop test cases, and to measure - overall project trajectory. - - If you do not want to use an application to develop user - requirements automatically, you need to create requirements to build - test harnesses and develop usable meters. - - Establishing these meters allows you to respond to changes quickly without - having to set exact requirements in advance. - This creates ways to configure the system, rather than redesigning - it every time there is a requirements change. - - It is important to limit scope creep. Ensure you address tool limitations, - but do not recreate the entire suite of tools. Work - with technical product owners to establish critical features that are needed - for a successful cloud deployment. - - -
- Application cloud readiness - The cloud does more than host virtual machines and their applications. - Simply moving existing workloads onto cloud instances unchanged, the lift and shift - approach, works in certain situations, but there is a fundamental - difference between clouds and traditional bare-metal-based - environments, or even traditional virtualized environments. - In traditional environments, with traditional enterprise - applications, the applications and the servers that run them are - pets. - They are lovingly crafted and cared for; the servers have - names like Gandalf or Tardis, and if they get sick, someone nurses - them back to health. All of this is designed so that the application - does not experience an outage. - In cloud environments, servers are more like - cattle. There are thousands of them; they get names like NY-1138-Q, - and if they get sick, they are put down and a sysadmin installs - another one. Traditional applications that are unprepared for this - kind of environment may suffer outages, data loss, or - complete failure. - There are other reasons to design applications with the cloud in mind. - Some are defensive: because applications cannot be - certain of exactly where, or on what hardware, they will be launched, - they need to be flexible, or at least adaptable. Others are - proactive. For example, one of the advantages of using the cloud is - scalability, and applications need to be designed - so that they can take advantage of this and other opportunities. -
- -
- Determining whether an application is cloud-ready - There are several factors to take into consideration when looking - at whether an application is a good fit for the cloud. - - - Structure - - - A large, monolithic, single-tiered, legacy - application typically is not a good fit for the - cloud. Efficiencies are gained when load can be - spread over several instances, so that a failure - in one part of the system can be mitigated without - affecting other parts of the system, or so that - scaling can take place where the app needs - it. - - - - - Dependencies - - - Applications that depend on specific - hardware, such as a particular chip set or an - external device such as a fingerprint - reader, might not be a good fit for the - cloud, unless those dependencies are specifically - addressed. Similarly, if an application depends on - an operating system or set of libraries that - cannot be used in the cloud, or cannot be - virtualized, that is a problem. - - - - - Connectivity - - - Self-contained applications, or those that depend - on resources that are not reachable by the cloud - in question, will not run. In some situations, - you can work around these issues with custom network - setup, but how well this works depends on the - chosen cloud environment. - - - - - Durability and resilience - - - Despite the existence of SLAs, things break: - servers go down, network connections are - disrupted, or too many tenants on a server make a - server unusable. An application must be sturdy - enough to contend with these issues. - - - - -
- -
- Designing for the cloud - Here are some guidelines to keep in mind when designing an - application for the cloud: - - - Be a pessimist: Assume everything fails and design - backwards (a minimal retry sketch follows this list). - - - Put your eggs in multiple baskets: Leverage multiple - providers, geographic regions, and availability zones to - accommodate local availability issues. Design for - portability. - - - Think efficiency: Inefficient designs will not scale. - Efficient designs become cheaper as they scale. Kill off - unneeded components or capacity. - - - Be paranoid: Design for defense in depth and zero - tolerance by building in security at every level and between - every component. Trust no one. - - - But not too paranoid: Not every application needs the - platinum solution. Architect for different SLAs, service - tiers, and security levels. - - - Manage the data: Data is usually the most inflexible and - complex area of a cloud and cloud integration architecture. - Do not shortchange the effort in analyzing and addressing - data needs. - - - Hands off: Leverage automation to increase consistency and - quality and to reduce response times. - - - Divide and conquer: Pursue partitioning and - parallel layering wherever possible. Make components as small - and portable as possible. Use load balancing between layers. - - - - Think elasticity: Increasing resources should result in a - proportional increase in performance and scalability. - Decreasing resources should have the opposite effect. - - - - Be dynamic: Enable dynamic configuration changes such as - auto-scaling, failure recovery, and resource discovery to - adapt to changing environments, faults, and workload volumes. - - - - Stay close: Reduce latency by moving highly interactive - components and data near each other. - - - Keep it loose: Loose coupling, service interfaces, - separation of concerns, abstraction, and well-defined APIs - deliver flexibility. - - - Be cost aware: Auto-scaling, data transmission, virtual - software licenses, reserved instances, and similar costs can rapidly - increase monthly usage charges. Monitor usage closely. - - -
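To make the "assume everything fails" guideline concrete, here is a minimal retry-with-backoff sketch (illustrative only; flaky_operation is a hypothetical stand-in for any call that can fail transiently, such as a request to another service):

    import random
    import time

    def call_with_retries(operation, attempts=5, base_delay=0.5):
        """Retry a transiently failing operation with exponential backoff and jitter."""
        for attempt in range(1, attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == attempts:
                    raise  # give up after the final attempt
                # Exponential backoff plus jitter avoids synchronized retry storms.
                time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

    calls = {"count": 0}

    def flaky_operation():
        # Hypothetical operation that fails twice, then succeeds.
        calls["count"] += 1
        if calls["count"] < 3:
            raise RuntimeError("transient failure")
        return "ok"

    print(call_with_retries(flaky_operation))

The same thinking applies one level up: spread instances across hosts, regions, and availability zones so a retry has somewhere healthy to land.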
-
diff --git a/doc/arch-design/locale/arch-design.pot b/doc/arch-design/locale/arch-design.pot deleted file mode 100644 index 7cc73e97d6..0000000000 --- a/doc/arch-design/locale/arch-design.pot +++ /dev/null @@ -1,5868 +0,0 @@ -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2015-11-18 06:15+0000\n" -"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" -"Last-Translator: FULL NAME \n" -"Language-Team: LANGUAGE \n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" - -#: ./doc/arch-design/ch_introduction.xml:7(title) -msgid "Introduction" -msgstr "" - -#: ./doc/arch-design/ch_introduction.xml:9(para) -msgid "OpenStack is a fully-featured, self-service cloud. This book takes you through some of the considerations you have to make when designing your cloud." -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:7(title) -msgid "Specialized cases" -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:8(para) -msgid "Although most OpenStack architecture designs fall into one of the seven major scenarios outlined in other sections (compute focused, network focused, storage focused, general purpose, multi-site, hybrid cloud, and massively scalable), there are a few use cases that do not fit into these categories. This section discusses these specialized cases and provide some additional details and design considerations for each use case:" -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:18(para) -msgid "Specialized networking: describes running networking-oriented software that may involve reading packets directly from the wire or participating in routing protocols." -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:28(para) -msgid "Software-defined networking (SDN): describes both running an SDN controller from within OpenStack as well as participating in a software-defined network." -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:37(para) -msgid "Desktop-as-a-Service: describes running a virtualized desktop environment in a cloud (Desktop-as-a-Service). This applies to private and public clouds." -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:46(para) -msgid "OpenStack on OpenStack: describes building a multi-tiered cloud by running OpenStack on top of an OpenStack installation." -msgstr "" - -#: ./doc/arch-design/ch_specialized.xml:54(para) -msgid "Specialized hardware: describes the use of specialized hardware devices from within the OpenStack environment." -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:7(title) -msgid "Compute focused" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:8(para) -msgid "Compute-focused clouds are a specialized subset of the general purpose OpenStack cloud architecture. A compute-focused cloud specifically supports compute intensive workloads." -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:12(para) -msgid "Compute intensive workloads may be CPU intensive, RAM intensive, or both; they are not typically storage or network intensive." 
-msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:15(para) -msgid "Compute-focused workloads may include the following use cases:" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:18(para) ./doc/arch-design/ch_network_focus.xml:135(term) -msgid "High performance computing (HPC)" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:21(para) -msgid "Big data analytics using Hadoop or other distributed data stores" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:25(para) -msgid "Continuous integration/continuous deployment (CI/CD)" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:28(para) -msgid "Platform-as-a-Service (PaaS)" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:31(para) -msgid "Signal processing for network function virtualization (NFV)" -msgstr "" - -#: ./doc/arch-design/ch_compute_focus.xml:35(para) -msgid "A compute-focused OpenStack cloud does not typically use raw block storage services as it does not host applications that require persistent block storage." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:8(title) -msgid "Security and legal requirements" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:9(para) -msgid "This chapter discusses the legal and security requirements you need to consider for the different OpenStack scenarios." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:12(title) -msgid "Legal requirements" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:13(para) -msgid "Many jurisdictions have legislative and regulatory requirements governing the storage and management of data in cloud environments. Common areas of regulation include:" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:18(para) -msgid "Data retention policies ensuring storage of persistent data and records management to meet data archival requirements." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:23(para) -msgid "Data ownership policies governing the possession and responsibility for data." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:27(para) -msgid "Data sovereignty policies governing the storage of data in foreign countries or otherwise separate jurisdictions." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:32(para) -msgid "Data compliance policies governing certain types of information needing to reside in certain locations due to regulatory issues - and more importantly, cannot reside in other locations for the same reason." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:38(para) -msgid "Examples of such legal frameworks include the data protection framework of the European Union and the requirements of the Financial Industry Regulatory Authority in the United States. Consult a local regulatory body for more information." 
-msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:48(title) ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:47(term) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:422(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:154(para) ./doc/arch-design/hybrid/section_architecture_hybrid.xml:112(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:581(term) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:649(title) ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:143(term) -msgid "Security" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:49(para) -msgid "When deploying OpenStack in an enterprise as a private cloud, despite activating a firewall and binding employees with security agreements, cloud architecture should not make assumptions about safety and protection. In addition to considering the users, operators, or administrators who will use the environment, consider also negative or hostile users who would attack or compromise the security of your deployment regardless of firewalls or security agreements." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:57(para) -msgid "Attack vectors increase further in a public facing OpenStack deployment. For example, the API endpoints and the software behind it become vulnerable to hostile entities attempting to gain unauthorized access or prevent access to services. This can result in loss of reputation and you must protect against it through auditing and appropriate filtering." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:64(para) -msgid "It is important to understand that user authentication requests encase sensitive information such as user names, passwords, and authentication tokens. For this reason, place the API services behind hardware that performs SSL termination." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:69(para) -msgid "Be mindful of consistency when utilizing third party clouds to explore authentication options." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:74(title) -msgid "Security domains" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:75(para) -msgid "A security domain comprises users, applications, servers or networks that share common trust requirements and expectations within a system. Typically, security domains have the same authentication and authorization requirements and users." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:79(para) -msgid "You can map security domains individually to the installation, or combine them. For example, some deployment topologies combine both guest and data domains onto one physical network. In other cases these networks are physically separate. Map out the security domains against specific OpenStack topologies needs. The domains and their trust requirements depend on whether the cloud instance is public, private, or hybrid." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:88(title) -msgid "Public security domains" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:89(para) -msgid "The public security domain is an untrusted area of the cloud infrastructure. It can refer to the internet as a whole or simply to networks over which the user has no authority. Always consider this domain untrusted. 
For example, in a hybrid cloud deployment, any information traversing between and beyond the clouds is in the public domain and untrustworthy." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:98(title) -msgid "Guest security domains" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:99(para) -msgid "Typically used for compute instance-to-instance traffic, the guest security domain handles compute data generated by instances on the cloud but not services that support the operation of the cloud, such as API calls. Public cloud providers and private cloud providers who do not have stringent controls on instance use or who allow unrestricted internet access to instances should consider this domain to be untrusted. Private cloud providers may want to consider this network as internal and therefore trusted only if they have controls in place to assert that they trust instances and all their tenants." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:112(title) -msgid "Management security domains" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:113(para) -msgid "The management security domain is where services interact. The networks in this domain transport confidential data such as configuration parameters, user names, and passwords. Trust this domain when it is behind an organization's firewall in deployments." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:119(title) -msgid "Data security domains" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:120(para) -msgid "The data security domain is concerned primarily with information pertaining to the storage services within OpenStack. The data that crosses this network has integrity and confidentiality requirements. Depending on the type of deployment there may also be availability requirements. The trust level of this network is heavily dependent on deployment decisions and does not have a default level of trust." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:130(title) -msgid "Hypervisor-security" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:131(para) -msgid "The hypervisor also requires a security assessment. In a public cloud, organizations typically do not have control over the choice of hypervisor. Properly securing your hypervisor is important. Attacks made upon the unsecured hypervisor are called a hypervisor breakout. Hypervisor breakout describes the event of a compromised or malicious instance breaking out of the resource controls of the hypervisor and gaining access to the bare metal operating system and hardware resources." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:141(para) -msgid "There is not an issue if the security of instances is not important. However, enterprises need to avoid vulnerability. The only way to do this is to avoid the situation where the instances are running on a public cloud. That does not mean that there is a need to own all of the infrastructure on which an OpenStack installation operates; it suggests avoiding situations in which sharing hardware with others occurs." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:150(title) -msgid "Baremetal security" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:151(para) -msgid "There are other services worth considering that provide a bare metal instance instead of a cloud. 
In other cases, it is possible to replicate a second private cloud by integrating with a private Cloud-as-a-Service deployment. The organization does not buy the hardware, but also does not share with other tenants. It is also possible to use a provider that hosts a bare-metal public cloud instance for which the hardware is dedicated only to one customer, or a provider that offers private Cloud-as-a-Service." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:161(para) -msgid "Each cloud implements services differently. What keeps data secure in one cloud may not do the same in another. Be sure to know the security requirements of every cloud that handles the organization's data or workloads." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:167(para) -msgid "More information on OpenStack Security can be found in the OpenStack Security Guide." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:172(title) -msgid "Networking Security" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:173(para) -msgid "Consider security implications and requirements before designing the physical and logical network topologies. Make sure that the networks are properly segregated and traffic flows are going to the correct destinations without crossing through locations that are undesirable. Consider the following example factors:" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:180(para) ./doc/arch-design/ch_generalpurpose.xml:72(para) -msgid "Firewalls" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:183(para) -msgid "Overlay interconnects for joining separated tenant networks" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:186(para) -msgid "Routing through or avoiding specific networks" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:189(para) -msgid "How networks attach to hypervisors can expose security vulnerabilities. To mitigate against exploiting hypervisor breakouts, separate networks from other systems and schedule instances for the network onto dedicated compute nodes. This prevents attackers from having access to the networks from a compromised instance." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:196(title) -msgid "Multi-site security" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:197(para) -msgid "Securing a multi-site OpenStack installation brings extra challenges. Tenants may expect a tenant-created network to be secure. In a multi-site installation the use of a non-private connection between sites may be required. This may mean that traffic would be visible to third parties and, in cases where an application requires security, this issue requires mitigation. In these instances, install a VPN or encrypted connection between sites to conceal sensitive traffic." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:205(para) -msgid "Another security consideration with regard to multi-site deployments is Identity. Centralize authentication within a multi-site deployment. Centralization provides a single authentication point for users across the deployment, as well as a single point of administration for traditional create, read, update, and delete operations. Centralized authentication is also useful for auditing purposes because all authentication tokens originate from the same source." 
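As a client-side sketch of what a single authentication point across regions looks like (assumptions only: the openstacksdk package is installed, a clouds.yaml entry named mycloud points at the shared Identity endpoint, and the regions are named RegionOne and RegionTwo):

    import openstack

    # The same credentials and Identity endpoint serve every region;
    # only the region name changes per connection.
    for region in ("RegionOne", "RegionTwo"):
        conn = openstack.connect(cloud="mycloud", region_name=region)
        servers = [server.name for server in conn.compute.servers()]
        print("%s: %d servers" % (region, len(servers)))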
-msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:214(para) -msgid "Just as tenants in a single-site deployment need isolation from each other, so do tenants in multi-site installations. The extra challenges in multi-site designs revolve around ensuring that tenant networks function across regions. OpenStack Networking (neutron) does not presently support a mechanism to provide this functionality, therefore an external system may be necessary to manage these mappings. Tenant networks may contain sensitive information requiring that this mapping be accurate and consistent to ensure that a tenant in one site does not connect to a different tenant in another site." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:227(title) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:353(para) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:460(title) ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:219(title) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:166(title) ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:144(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:512(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:617(title) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:412(title) -msgid "OpenStack components" -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:228(para) -msgid "Most OpenStack installations require a bare minimum set of pieces to function. These include OpenStack Identity (keystone) for authentication, OpenStack Compute (nova) for compute, OpenStack Image service (glance) for image storage, OpenStack Networking (neutron) for networking, and potentially an object store in the form of OpenStack Object Storage (swift). Bringing multi-site into play also demands extra components in order to coordinate between regions. Centralized Identity service is necessary to provide the single authentication point. Centralized dashboard is also recommended to provide a single login point and a mapped experience to the API and CLI options available. If needed, use a centralized Object Storage service, installing the required swift proxy service alongside the Object Storage service." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:242(para) -msgid "It may also be helpful to install a few extra options in order to facilitate certain use cases. For instance, installing DNS service may assist in automatically generating DNS domains for each region with an automatically-populated zone full of resource records for each instance. This facilitates using DNS as a mechanism for determining which region would be selected for certain applications." -msgstr "" - -#: ./doc/arch-design/ch_legal-security-requirements.xml:249(para) -msgid "Another useful tool for managing a multi-site installation is Orchestration (heat). The Orchestration service allows the use of templates to define a set of instances to be launched together or for scaling existing sets. It can set up matching or differentiated groupings based on regions. For instance, if an application requires an equally balanced number of nodes across sites, the same heat template can be used to cover each site with small alterations to only the region name." 
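A tiny sketch of the same-template-different-region idea (illustrative only; the skeleton and region names below are hypothetical placeholders, not a complete Heat Orchestration Template):

    from string import Template

    # Skeleton of a per-site stack definition; a real deployment would carry full
    # resources, parameters, and outputs in the Heat template.
    STACK_SKELETON = Template(
        "description: Example stack for $region\n"
        "parameters:\n"
        "  region_name: {type: string, default: $region}\n"
    )

    for region in ("RegionOne", "RegionTwo", "RegionThree"):  # assumed region names
        print(STACK_SKELETON.substitute(region=region))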
-msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:7(title) -msgid "Network focused" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:8(para) -msgid "All OpenStack deployments depend on network communication in order to function properly due to its service-based nature. In some cases, however, the network elevates beyond simple infrastructure. This chapter discusses architectures that are more reliant or focused on network services. These architectures depend on the network infrastructure and require network services that perform reliably in order to satisfy user and application requirements." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:16(para) -msgid "Some possible use cases include:" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:19(term) -msgid "Content delivery network" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:21(para) -msgid "This includes streaming video, viewing photographs, or accessing any other cloud-based data repository distributed to a large number of end users. Network configuration affects latency, bandwidth, and the distribution of instances. Therefore, it impacts video streaming. Not all video streaming is consumer-focused. For example, multicast videos (used for media, press conferences, corporate presentations, and web conferencing services) can also use a content delivery network. The location of the video repository and its relationship to end users affects content delivery. Network throughput of the back-end systems, as well as the WAN architecture and the cache methodology, also affect performance." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:36(term) -msgid "Network management functions" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:38(para) -msgid "Use this cloud to provide network service functions built to support the delivery of back-end network services such as DNS, NTP, or SNMP." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:44(term) -msgid "Network service offerings" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:46(para) -msgid "Use this cloud to run customer-facing network tools to support services. Examples include VPNs, MPLS private networks, and GRE tunnels." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:52(term) -msgid "Web portals or web services" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:54(para) -msgid "Web servers are a common application for cloud services, and we recommend an understanding of their network requirements. The network requires scaling out to meet user demand and deliver web pages with a minimum latency. Depending on the details of the portal architecture, consider the internal east-west and north-south network bandwidth." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:63(term) -msgid "High speed and high volume transactional systems" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:65(para) -msgid "These types of applications are sensitive to network configurations. Examples include financial systems, credit card transaction applications, and trading and other extremely high volume systems. These systems are sensitive to network jitter and latency. They must balance a high volume of East-West and North-South network traffic to maximize efficiency of the data delivery. Many of these systems must access large, high performance database back ends." 
-msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:78(term) ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:48(title) ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:132(term) -msgid "High availability" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:80(para) -msgid "These types of use cases are dependent on the proper sizing of the network to maintain replication of data between sites for high availability. If one site becomes unavailable, the extra sites can serve the displaced load until the original site returns to service. It is important to size network capacity to handle the desired loads." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:89(term) -msgid "Big data" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:91(para) -msgid "Clouds used for the management and collection of big data (data ingest) have a significant demand on network resources. Big data often uses partial replicas of the data to maintain integrity over large distributed clouds. Other big data applications that require a large amount of network resources are Hadoop, Cassandra, NuoDB, Riak, and other NoSQL and distributed databases." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:101(term) -msgid "Virtual desktop infrastructure (VDI)" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:103(para) -msgid "This use case is sensitive to network congestion, latency, jitter, and other network characteristics. Like video streaming, the user experience is important. However, unlike video streaming, caching is not an option to offset the network issues. VDI requires both upstream and downstream traffic and cannot rely on caching for the delivery of the application to the end user." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:112(term) -msgid "Voice over IP (VoIP)" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:114(para) -msgid "This is sensitive to network congestion, latency, jitter, and other network characteristics. VoIP has a symmetrical traffic pattern and it requires network quality of service (QoS) for best performance. In addition, you can implement active queue management to deliver voice and multimedia content. Users are sensitive to latency and jitter fluctuations and can detect them at very low levels." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:124(term) -msgid "Video Conference or web conference" -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:126(para) -msgid "This is sensitive to network congestion, latency, jitter, and other network characteristics. Video Conferencing has a symmetrical traffic pattern, but unless the network is on an MPLS private network, it cannot use network quality of service (QoS) to improve performance. Similar to VoIP, users are sensitive to network performance issues even at low levels." -msgstr "" - -#: ./doc/arch-design/ch_network_focus.xml:137(para) -msgid "This is a complex use case that requires careful consideration of the traffic flows and usage patterns to address the needs of cloud clusters. It has high east-west traffic patterns for distributed computing, but there can be substantial north-south traffic depending on the specific application." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:7(title) -msgid "Storage focused" -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:9(para) -msgid "Cloud storage is a model of data storage that stores digital data in logical pools and physical storage that spans across multiple servers and locations. 
Cloud storage commonly refers to a hosted object storage service, however the term also includes other types of data storage that are available as a service, for example block storage." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:15(para) -msgid "Cloud storage runs on virtualized infrastructure and resembles broader cloud computing in terms of accessible interfaces, elasticity, scalability, multi-tenancy, and metered resources. You can use cloud storage services from an off-premises service or deploy on-premises." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:20(para) -msgid "Cloud storage consists of many distributed, synonymous resources, which are often referred to as integrated storage clouds. Cloud storage is highly fault tolerant through redundancy and the distribution of data. It is highly durable through the creation of versioned copies, and can be consistent with regard to data replicas." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:26(para) -msgid "At large scale, management of data operations is a resource intensive process for an organization. Hierarchical storage management (HSM) systems and data grids help annotate and report a baseline data valuation to make intelligent decisions and automate data decisions. HSM enables automated tiering and movement, as well as orchestration of data operations. A data grid is an architecture, or set of services evolving technology, that brings together sets of services enabling users to manage large data sets." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:35(para) -msgid "Example applications deployed with cloud storage characteristics:" -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:39(para) -msgid "Active archive, backups and hierarchical storage management." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:43(para) -msgid "General content storage and synchronization. An example of this is private dropbox." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:47(para) -msgid "Data analytics with parallel file systems." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:50(para) -msgid "Unstructured data store for services. For example, social media back-end storage." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:54(para) -msgid "Persistent block storage." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:57(para) -msgid "Operating system and application image store." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:60(para) -msgid "Media streaming." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:63(para) -msgid "Databases." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:66(para) -msgid "Content distribution." -msgstr "" - -#: ./doc/arch-design/ch_storage_focus.xml:69(para) -msgid "Cloud storage peering." -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:7(title) -msgid "General purpose" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:8(para) -msgid "An OpenStack general purpose cloud is often considered a starting point for building a cloud deployment. They are designed to balance the components and do not emphasize any particular aspect of the overall computing environment. Cloud design must give equal weight to the compute, network, and storage components. General purpose clouds are found in private, public, and hybrid environments, lending themselves to many different use cases." -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:18(para) -msgid "General purpose clouds are homogeneous deployments. 
They are not suited to specialized environments or edge case situations." -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:23(para) -msgid "Common uses of a general purpose cloud include:" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:28(para) -msgid "Providing a simple database" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:33(para) -msgid "A web application runtime environment" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:38(para) -msgid "A shared application development platform" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:43(para) -msgid "Lab test bed" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:48(para) -msgid "Use cases that benefit from scale-out rather than scale-up approaches are good candidates for general purpose cloud architecture." -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:51(para) -msgid "A general purpose cloud is designed to have a range of potential uses or functions; not specialized for specific use cases. General purpose architecture is designed to address 80% of potential use cases available. The infrastructure, in itself, is a specific use case, enabling it to be used as a base model for the design process. General purpose clouds are designed to be platforms that are suited for general purpose applications." -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:58(para) -msgid "General purpose clouds are limited to the most basic components, but they can include additional resources such as:" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:63(para) ./doc/arch-design/ch_massively_scalable.xml:36(para) -msgid "Virtual-machine disk image library" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:66(para) ./doc/arch-design/ch_massively_scalable.xml:39(para) -msgid "Raw block storage" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:69(para) ./doc/arch-design/ch_massively_scalable.xml:42(para) -msgid "File or object storage" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:75(para) -msgid "Load balancers" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:78(para) -msgid "IP addresses" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:81(para) -msgid "Network overlays or virtual local area networks (VLANs)" -msgstr "" - -#: ./doc/arch-design/ch_generalpurpose.xml:85(para) ./doc/arch-design/ch_massively_scalable.xml:58(para) -msgid "Software bundles" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:7(title) -msgid "OpenStack Architecture Design Guide" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:9(titleabbrev) -msgid "Architecture Guide" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:17(orgname) ./doc/arch-design/bk-openstack-arch-design.xml:23(holder) -msgid "OpenStack Foundation" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:21(year) -msgid "2014" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:22(year) -msgid "2015" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:25(releaseinfo) -msgid "current" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:26(productname) -msgid "OpenStack" -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:30(remark) -msgid "Copyright details are filled in by the template." -msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:36(remark) -msgid "Remaining licensing details are filled in by the template." 
-msgstr "" - -#: ./doc/arch-design/bk-openstack-arch-design.xml:41(para) -msgid "To reap the benefits of OpenStack, you should plan, design, and architect your cloud properly, taking user's needs into account and understanding the use cases." -msgstr "" - -#: ./doc/arch-design/ch_multi_site.xml:7(title) -msgid "Multi-site" -msgstr "" - -#: ./doc/arch-design/ch_multi_site.xml:9(para) -msgid "OpenStack is capable of running in a multi-region configuration. This enables some parts of OpenStack to effectively manage a group of sites as a single cloud." -msgstr "" - -#: ./doc/arch-design/ch_multi_site.xml:12(para) -msgid "Some use cases that might indicate a need for a multi-site deployment of OpenStack include:" -msgstr "" - -#: ./doc/arch-design/ch_multi_site.xml:16(para) -msgid "An organization with a diverse geographic footprint." -msgstr "" - -#: ./doc/arch-design/ch_multi_site.xml:20(para) -msgid "Geo-location sensitive data." -msgstr "" - -#: ./doc/arch-design/ch_multi_site.xml:23(para) -msgid "Data locality, in which specific data or functionality should be close to users." -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:7(title) -msgid "Hybrid" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:8(para) -msgid "A hybrid cloud design is one that uses more than one cloud. For example, designs that use both an OpenStack-based private cloud and an OpenStack-based public cloud, or that use an OpenStack cloud and a non-OpenStack cloud, are hybrid clouds." -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:13(para) -msgid "Bursting describes the practice of creating new instances in an external cloud to alleviate capacity issues in a private cloud." -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:17(title) -msgid "Example scenarios suited to hybrid clouds" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:19(para) -msgid "Bursting from a private cloud to a public cloud" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:23(para) -msgid "Disaster recovery" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:26(para) -msgid "Development and testing" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:29(para) -msgid "Federated cloud, enabling users to choose resources from multiple providers" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:33(para) -msgid "Supporting legacy systems as they transition to the cloud" -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:37(para) -msgid "Hybrid clouds interact with systems that are outside the control of the private cloud administrator, and require careful architecture to prevent conflicts with hardware, software, and APIs under external control." -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:41(para) -msgid "The degree to which the architecture is OpenStack-based affects your ability to accomplish tasks with native OpenStack tools. By definition, this is a situation in which no single cloud can provide all of the necessary functionality. In order to manage the entire system, we recommend using a cloud management platform (CMP)." -msgstr "" - -#: ./doc/arch-design/ch_hybrid.xml:47(para) -msgid "There are several commercial and open source CMPs available, but there is no single CMP that can address all needs in all scenarios, and sometimes a manually-built solution is the best option. This chapter includes discussion of using CMPs for managing a hybrid cloud." 
-msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:7(title) -msgid "Massively scalable" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:9(para) -msgid "A massively scalable architecture is a cloud implementation that is either a very large deployment, such as a commercial service provider might build, or one that has the capability to support user requests for large amounts of cloud resources." -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:14(para) -msgid "An example is an infrastructure in which requests to service 500 or more instances at a time is common. A massively scalable infrastructure fulfills such a request without exhausting the available cloud infrastructure resources. While the high capital cost of implementing such a cloud architecture means that it is currently in limited use, many organizations are planning for massive scalability in the future." -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:21(para) -msgid "A massively scalable OpenStack cloud design presents a unique set of challenges and considerations. For the most part it is similar to a general purpose cloud architecture, as it is built to address a non-specific range of potential use cases or functions. Typically, it is rare that particular workloads determine the design or configuration of massively scalable clouds. The massively scalable cloud is most often built as a platform for a variety of workloads. Because private organizations rarely require or have the resources for them, massively scalable OpenStack clouds are generally built as commercial, public cloud offerings." -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:32(para) -msgid "Services provided by a massively scalable OpenStack cloud include:" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:45(para) -msgid "Firewall functionality" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:48(para) -msgid "Load balancing functionality" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:51(para) -msgid "Private (non-routable) and public (floating) IP addresses" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:55(para) -msgid "Virtualized network topologies" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:61(para) -msgid "Virtual compute resources" -msgstr "" - -#: ./doc/arch-design/ch_massively_scalable.xml:64(para) -msgid "Like a general purpose cloud, the instances deployed in a massively scalable OpenStack cloud do not necessarily use any specific aspect of the cloud offering (compute, network, or storage). As the cloud grows in scale, the number of workloads can cause stress on all the cloud components. This adds further stresses to supporting infrastructure such as databases and message brokers. The architecture design for such a cloud must account for these performance pressures without negatively impacting user experience." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:8(title) -msgid "References" -msgstr "" - -#: ./doc/arch-design/ch_references.xml:9(para) -msgid "Data Protection framework of the European Union: Guidance on Data Protection laws governed by the EU." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:15(para) -msgid "Depletion of IPv4 Addresses: describing how IPv4 addresses and the migration to IPv6 is inevitable." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:21(para) -msgid "Ethernet Switch Reliability: ​Research white paper on Ethernet Switch reliability." 
-msgstr "" - -#: ./doc/arch-design/ch_references.xml:27(para) -msgid "Financial Industry Regulatory Authority: ​Requirements of the Financial Industry Regulatory Authority in the USA." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:33(para) -msgid "Image Service property keys: Glance API property keys allows the administrator to attach custom characteristics to images." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:39(para) -msgid "LibGuestFS Documentation: Official LibGuestFS documentation." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:43(para) -msgid "Logging and Monitoring: Official OpenStack Operations documentation." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:49(para) -msgid "ManageIQ Cloud Management Platform: An Open Source Cloud Management Platform for managing multiple clouds." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:54(para) -msgid "N-Tron Network Availability: Research white paper on network availability." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:60(para) -msgid "Nested KVM: Post on how to nest KVM under KVM." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:65(para) -msgid "Open Compute Project: The Open Compute Project Foundation's mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:72(para) -msgid "OpenStack Flavors: Official OpenStack documentation." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:77(para) -msgid "OpenStack High Availability Guide: Information on how to provide redundancy for the OpenStack components." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:83(para) -msgid "OpenStack Hypervisor Support Matrix: ​Matrix of supported hypervisors and capabilities when used with OpenStack." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:89(para) -msgid "OpenStack Object Store (Swift) Replication Reference: Developer documentation of Swift replication." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:95(para) -msgid "OpenStack Operations Guide: The OpenStack Operations Guide provides information on setting up and installing OpenStack." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:101(para) -msgid "OpenStack Security Guide: The OpenStack Security Guide provides information on securing OpenStack deployments." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:107(para) -msgid "OpenStack Training Marketplace: The OpenStack Market for training and Vendors providing training on OpenStack." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:113(para) -msgid "PCI passthrough: The PCI API patches extend the servers/os-hypervisor to show PCI information for instance and compute node, and also provides a resource endpoint to show PCI information." -msgstr "" - -#: ./doc/arch-design/ch_references.xml:121(para) -msgid "TripleO: TripleO is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundation." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:8(title) ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:8(title) ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:8(title) ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:8(title) ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:8(title) ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:8(title) ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:8(title) -msgid "Operational considerations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:9(para) -msgid "Network-focused OpenStack clouds have a number of operational considerations that influence the selected design, including:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:13(para) -msgid "Dynamic routing of static routes" -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:16(para) -msgid "Service level agreements (SLAs)" -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:19(para) -msgid "Ownership of user management" -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:22(para) -msgid "An initial network consideration is the selection of a telecom company or transit provider." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:24(para) -msgid "Make additional design decisions about monitoring and alarming. This can be an internal responsibility or the responsibility of the external provider. In the case of using an external provider, service level agreements (SLAs) likely apply. In addition, other operational considerations such as bandwidth, latency, and jitter can be part of an SLA." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:30(para) -msgid "Consider the ability to upgrade the infrastructure. As demand for network resources increase, operators add additional IP address blocks and add additional bandwidth capacity. In addition, consider managing hardware and software life cycle events, for example upgrades, decommissioning, and outages, while avoiding service interruptions for tenants." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:36(para) -msgid "Factor maintainability into the overall network design. This includes the ability to manage and maintain IP addresses as well as the use of overlay identifiers including VLAN tag IDs, GRE tunnel IDs, and MPLS tags. As an example, if you may need to change all of the IP addresses on a network, a process known as renumbering, then the design must support this function." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:42(para) -msgid "Address network-focused applications when considering certain operational realities. For example, consider the impending exhaustion of IPv4 addresses, the migration to IPv6, and the use of private networks to segregate different types of traffic that an application receives or generates. 
In the case of IPv4 to IPv6 migrations, applications should follow best practices for storing IP addresses. We recommend you avoid relying on IPv4 features that did not carry over to the IPv6 protocol or have differences in implementation." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:50(para) -msgid "To segregate traffic, allow applications to create a private tenant network for database and storage network traffic. Use a public network for services that require direct client access from the internet. Upon segregating the traffic, consider quality of service (QoS) and security to ensure each network has the required level of service." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:55(para) -msgid "Finally, consider the routing of network traffic. For some applications, develop a complex policy framework for routing. To create a routing policy that satisfies business requirements, consider the economic cost of transmitting traffic over expensive links versus cheaper links, in addition to bandwidth, latency, and jitter requirements." -msgstr "" - -#: ./doc/arch-design/network_focus/section_operational_considerations_network_focus.xml:61(para) -msgid "Additionally, consider how to respond to network events. As an example, how load transfers from one link to another during a failure scenario could be a factor in the design. If you do not plan network capacity correctly, failover traffic could overwhelm other ports or network links and create a cascading failure scenario. In this case, traffic that fails over to one link overwhelms that link and then moves to the subsequent links until all network traffic stops." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:42(None) -msgid "@@image: '../figures/Network_Web_Services1.png'; md5=7ad46189444753336edd957108a1a92b" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:195(None) -msgid "@@image: '../figures/Network_Cloud_Storage2.png'; md5=3cd3ce6b19b20ecd7d22af03731cc7cd" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:8(title) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:8(title) ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:8(title) ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:12(title) ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:8(title) -msgid "Prescriptive examples" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:9(para) -msgid "An organization designs a large-scale web application with cloud principles in mind. The application scales horizontally in a bursting fashion and generates a high instance count. The application requires an SSL connection to secure data and must not lose connection state to individual servers." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:15(para) -msgid "The figure below depicts an example design for this workload. 
In this example, a hardware load balancer provides SSL offload functionality and connects to tenant networks in order to reduce address consumption. This load balancer links to the routing architecture as it services the VIP for the application. The router and load balancer use the GRE tunnel ID of the application's tenant network and an IP address within the tenant subnet but outside of the address pool. This is to ensure that the load balancer can communicate with the application's HTTP servers without requiring the consumption of a public IP address." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:27(para) -msgid "Because sessions persist until closed, the routing and switching architecture provides high availability. Switches mesh to each hypervisor and each other, and also provide an MLAG implementation to ensure that layer-2 connectivity does not fail. Routers use VRRP and fully mesh with switches to ensure layer-3 connectivity. Since GRE is provides an overlay network, Networking is present and uses the Open vSwitch agent in GRE tunnel mode. This ensures all devices can reach all other devices and that you can create tenant networks for private addressing links to the load balancer. " -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:45(para) -msgid "A web service architecture has many options and optional components. Due to this, it can fit into a large number of other OpenStack designs. A few key components, however, need to be in place to handle the nature of most web-scale workloads. You require the following components:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:52(para) -msgid "OpenStack Controller services (Image, Identity, Networking and supporting services such as MariaDB and RabbitMQ)" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:57(para) -msgid "OpenStack Compute running KVM hypervisor" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:60(para) -msgid "OpenStack Object Storage" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:63(para) -msgid "Orchestration service" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:66(para) -msgid "Telemetry service" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:69(para) -msgid "Beyond the normal Identity, Compute, Image service, and Object Storage components, we recommend the Orchestration service component to handle the proper scaling of workloads to adjust to demand. Due to the requirement for auto-scaling, the design includes the Telemetry service. Web services tend to be bursty in load, have very defined peak and valley usage patterns and, as a result, benefit from automatic scaling of instances based upon traffic. At a network level, a split network configuration works well with databases residing on private tenant networks since these do not emit a large quantity of broadcast traffic and may need to interconnect to some databases for content." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:83(title) -msgid "Load balancing" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:84(para) -msgid "Load balancing spreads requests across multiple instances. 
This workload scales well horizontally across large numbers of instances. This enables instances to run without publicly routed IP addresses and instead to rely on the load balancer to provide a globally reachable service. Many of these services do not require direct server return. This aids in address planning and utilization at scale since only the virtual IP (VIP) must be public." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:95(title) -msgid "Overlay networks" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:96(para) -msgid "The overlay functionality design includes OpenStack Networking in Open vSwitch GRE tunnel mode. In this case, the layer-3 external routers pair with VRRP, and switches pair with an implementation of MLAG to ensure that you do not lose connectivity with the upstream routing infrastructure." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:106(title) -msgid "Performance tuning" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:107(para) -msgid "Network level tuning for this workload is minimal. Quality-of-Service (QoS) applies to these workloads for a middle ground Class Selector depending on existing policies. It is higher than a best effort queue but lower than an Expedited Forwarding or Assured Forwarding queue. Since this type of application generates larger packets with longer-lived connections, you can optimize bandwidth utilization for long duration TCP. Normal bandwidth planning applies here with regard to benchmarking a session's usage multiplied by the expected number of concurrent sessions with overhead." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:120(title) -msgid "Network functions" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:121(para) -msgid "Network functions is a broad category but encompasses workloads that support the rest of a system's network. These workloads tend to consist of large amounts of small packets that are very short lived, such as DNS queries or SNMP traps. These messages need to arrive quickly and do not deal with packet loss as there can be a very large volume of them. There are a few extra considerations to take into account for this type of workload and this can change a configuration all the way to the hypervisor level. For an application that generates 10 TCP sessions per user with an aggregate bandwidth of 512 kilobits per second per user and an expected user count of ten thousand concurrent users, the expected bandwidth plan is approximately 4.88 gigabits per second." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:134(para) -msgid "The supporting network for this type of configuration needs to have a low latency and evenly distributed availability. This workload benefits from having services local to the consumers of the service. Use a multi-site approach as well as deploying many copies of the application to handle load as close as possible to consumers. Since these applications function independently, they do not warrant running overlays to interconnect tenant networks. Overlays also have the drawback of performing poorly with rapid flow setup and may incur too much overhead with large quantities of small packets; therefore, we do not recommend them."
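A quick sanity check on the 4.88 gigabits per second figure above (an illustrative calculation, not text from the guide), assuming an aggregate rate of 512 kbit/s per user, ten thousand concurrent users, and binary (1024-based) prefixes::

    10\,000 \times 512\ \tfrac{\text{kbit}}{\text{s}}
      = 5\,120\,000\ \tfrac{\text{kbit}}{\text{s}}
      = \frac{5\,120\,000}{1024 \times 1024}\ \tfrac{\text{Gbit}}{\text{s}}
      \approx 4.88\ \tfrac{\text{Gbit}}{\text{s}}

The 10 TCP sessions per user do not change this aggregate figure; they mainly affect connection-rate and flow-setup planning.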
-msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:145(para) -msgid "QoS is desirable for some workloads to ensure delivery. DNS has a major impact on the load times of other services and needs to be reliable and provide rapid responses. Configure rules in upstream devices to apply a higher Class Selector to DNS to ensure faster delivery or a better spot in queuing algorithms." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:153(title) -msgid "Cloud storage" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:154(para) -msgid "Another common use case for OpenStack environments is providing a cloud-based file storage and sharing service. You might consider this a storage-focused use case, but its network-side requirements make it a network-focused use case." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:158(para) -msgid "For example, consider a cloud backup application. This workload has two specific behaviors that impact the network. Because this workload is an externally-facing service and an internally-replicating application, it has both north-south and east-west traffic considerations:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:167(term) -msgid "north-south traffic" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:169(para) -msgid "When a user uploads and stores content, that content moves into the OpenStack installation. When users download this content, the content moves out from the OpenStack installation. Because this service operates primarily as a backup, most of the traffic moves southbound into the environment. In this situation, it benefits you to configure a network to be asymmetrically downstream because the traffic that enters the OpenStack installation is greater than the traffic that leaves the installation." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:181(term) -msgid "east-west traffic" -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:183(para) -msgid "Likely to be fully symmetric. Because replication originates from any node and might target multiple other nodes algorithmically, it is less likely for this traffic to have a larger volume in any specific direction. However this traffic might interfere with north-south traffic." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:198(para) -msgid "This application prioritizes the north-south traffic over east-west traffic: the north-south traffic involves customer-facing data." -msgstr "" - -#: ./doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml:201(para) -msgid "The network design in this case is less dependent on availability and more dependent on being able to handle high bandwidth. As a direct result, it is beneficial to forgo redundant links in favor of bonding those connections. This increases available bandwidth. It is also beneficial to configure all devices in the path, including OpenStack, to generate and pass jumbo frames." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:7(title) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:11(title) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:8(title) ./doc/arch-design/multi_site/section_architecture_multi_site.xml:8(title) ./doc/arch-design/hybrid/section_architecture_hybrid.xml:8(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:12(title) -msgid "Architecture" -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:8(para) -msgid "Network-focused OpenStack architectures have many similarities to other OpenStack architecture use cases. There are several factors to consider when designing for a network-centric or network-heavy application environment." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:12(para) -msgid "Networks exist to serve as a medium of transporting data between systems. It is inevitable that an OpenStack design has inter-dependencies with non-network portions of OpenStack as well as on external systems. Depending on the specific workload, there may be major interactions with storage systems both within and external to the OpenStack environment. For example, in the case of content delivery network, there is twofold interaction with storage. Traffic flows to and from the storage array for ingesting and serving content in a north-south direction. In addition, there is replication traffic flowing in an east-west direction." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:21(para) -msgid "Compute-heavy workloads may also induce interactions with the network. Some high performance compute applications require network-based memory mapping and data sharing and, as a result, induce a higher network load when they transfer results and data sets. Others may be highly transactional and issue transaction locks, perform their functions, and revoke transaction locks at high rates. This also has an impact on the network performance." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:28(para) -msgid "Some network dependencies are external to OpenStack. While OpenStack Networking is capable of providing network ports, IP addresses, some level of routing, and overlay networks, there are some other functions that it cannot provide. For many of these, you may require external systems or equipment to fill in the functional gaps. Hardware load balancers are an example of equipment that may be necessary to distribute workloads or offload certain functions. OpenStack Networking provides a tunneling feature, however it is constrained to a Networking-managed region. If the need arises to extend a tunnel beyond the OpenStack region to either another region or an external system, implement the tunnel itself outside OpenStack or use a tunnel management system to map the tunnel or overlay to an external tunnel." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:41(para) -msgid "Depending on the selected design, Networking itself might not support the required layer-3 network functionality. If you choose to use the provider networking mode without running the layer-3 agent, you must install an external router to provide layer-3 connectivity to outside systems." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:49(para) -msgid "Interaction with orchestration services is inevitable in larger-scale deployments. The Orchestration service is capable of allocating network resource defined in templates to map to tenant networks and for port creation, as well as allocating floating IPs. If there is a requirement to define and manage network resources when using orchestration, we recommend that the design include the Orchestration service to meet the demands of users." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:57(title) -msgid "Design impacts" -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:58(para) -msgid "A wide variety of factors can affect a network-focused OpenStack architecture. While there are some considerations shared with a general use case, specific workloads related to network requirements influence network design decisions." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:62(para) -msgid "One decision includes whether or not to use Network Address Translation (NAT) and where to implement it. If there is a requirement for floating IPs instead of public fixed addresses then you must use NAT. An example of this is a DHCP relay that must know the IP of the DHCP server. In these cases it is easier to automate the infrastructure to apply the target IP to a new instance rather than to reconfigure legacy or external systems for each new instance." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:69(para) -msgid "NAT for floating IPs managed by Networking resides within the hypervisor but there are also versions of NAT that may be running elsewhere. If there is a shortage of IPv4 addresses there are two common methods to mitigate this externally to OpenStack. The first is to run a load balancer either within OpenStack as an instance, or use an external load balancing solution. In the internal scenario, Networking's Load-Balancer-as-a-Service (LBaaS) can manage load balancing software, for example HAproxy. This is specifically to manage the Virtual IP (VIP) while a dual-homed connection from the HAproxy instance connects the public network with the tenant private network that hosts all of the content servers. In the external scenario, a load balancer needs to serve the VIP and also connect to the tenant overlay network through external means or through private addresses." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:82(para) -msgid "Another kind of NAT that may be useful is protocol NAT. In some cases it may be desirable to use only IPv6 addresses on instances and operate either an instance or an external service to provide a NAT-based transition technology such as NAT64 and DNS64. This provides the ability to have a globally routable IPv6 address while only consuming IPv4 addresses as necessary or in a shared manner." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:88(para) -msgid "Application workloads affect the design of the underlying network architecture. If a workload requires network-level redundancy, the routing and switching architecture have to accommodate this. There are differing methods for providing this that are dependent on the selected network hardware, the performance of the hardware, and which networking model you deploy. 
Examples include Link aggregation (LAG) and Hot Standby Router Protocol (HSRP). Also consider whether to deploy OpenStack Networking or legacy networking (nova-network), and which plug-in to select for OpenStack Networking. If using an external system, configure Networking to run layer 2 with a provider network configuration. For example, implement HSRP to terminate layer-3 connectivity." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:101(para) -msgid "Depending on the workload, overlay networks may not be the best solution. Where application network connections are small, short lived, or bursty, running a dynamic overlay can generate as much bandwidth as the packets it carries. It also can induce enough latency to cause issues with certain applications. There is an impact to the device generating the overlay which, in most installations, is the hypervisor. This causes performance degradation on packet per second and connection per second rates." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:109(para) -msgid "Overlays also come with a secondary option that may not be appropriate to a specific workload. While all of them operate in full mesh by default, there might be good reasons to disable this function because it may cause excessive overhead for some workloads. Conversely, other workloads operate without issue. For example, most web services applications do not have major issues with a full mesh overlay network, while some network monitoring tools or storage replication workloads have performance issues with throughput or excessive broadcast traffic." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:118(para) -msgid "Many people overlook an important design decision: The choice of layer-3 protocols. While OpenStack was initially built with only IPv4 support, Networking now supports IPv6 and dual-stacked networks. Some workloads are possible through the use of IPv6 and IPv6 to IPv4 reverse transition mechanisms such as NAT64 and DNS64 or 6to4. This alters the requirements for any address plan as single-stacked and transitional IPv6 deployments can alleviate the need for IPv4 addresses." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:127(para) -msgid "OpenStack has limited support for dynamic routing, however there are a number of options available by incorporating third party solutions to implement routing within the cloud including network equipment, hardware nodes, and instances. Some workloads perform well with nothing more than static routes and default gateways configured at the layer-3 termination point. In most cases this is sufficient, however some cases require the addition of at least one type of dynamic routing protocol if not multiple protocols. Having a form of interior gateway protocol (IGP) available to the instances inside an OpenStack installation opens up the possibility of use cases for anycast route injection for services that need to use it as a geographic location or failover mechanism. Other applications may wish to directly participate in a routing protocol, either as a passive observer, as in the case of a looking glass, or as an active participant in the form of a route reflector. 
Since an instance might have a large amount of compute and memory resources, it is trivial to hold an entire unpartitioned routing table and use it to provide services such as network path visibility to other applications or as a monitoring tool." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:146(para) -msgid "Path maximum transmission unit (MTU) failures are lesser known but harder to diagnose. The MTU must be large enough to handle normal traffic, overhead from an overlay network, and the desired layer-3 protocol. Adding externally built tunnels reduces the MTU packet size. In this case, you must pay attention to the fully calculated MTU size because some systems ignore or drop path MTU discovery packets." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:155(title) -msgid "Tunable networking components" -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:156(para) -msgid "Consider configurable networking components related to an OpenStack architecture design when designing for network intensive workloads that include MTU and QoS. Some workloads require a larger MTU than normal due to the transfer of large blocks of data. When providing network service for applications such as video streaming or storage replication, we recommend that you configure both OpenStack hardware nodes and the supporting network equipment for jumbo frames where possible. This allows for better use of available bandwidth. Configure jumbo frames across the complete path the packets traverse. If one network component is not capable of handling jumbo frames then the entire path reverts to the default MTU." -msgstr "" - -#: ./doc/arch-design/network_focus/section_architecture_network_focus.xml:168(para) -msgid "Quality of Service (QoS) also has a great impact on network intensive workloads as it provides instant service to packets which have a higher priority due to the impact of poor network performance. In applications such as Voice over IP (VoIP), differentiated services code points are a near requirement for proper operation. You can also use QoS in the opposite direction for mixed workloads to prevent low priority but high bandwidth applications, for example backup services, video conferencing, or file sharing, from blocking bandwidth that is needed for the proper operation of other workloads. It is possible to tag file storage traffic as a lower class, such as best effort or scavenger, to allow the higher priority traffic through. In cases where regions within a cloud might be geographically distributed it may also be necessary to plan accordingly to implement WAN optimization to combat latency or packet loss." 
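As a rough illustration of the MTU budget behind the jumbo-frame advice above (an assumption-laden sketch, not text from the guide): GRE over IPv4 carrying a four-byte tunnel key adds a 20-byte outer IPv4 header and an 8-byte GRE header, so the MTU available to instances is approximately::

    \text{MTU}_{\text{instance}} \approx \text{MTU}_{\text{physical}} - (20 + 8),
    \qquad 1500 - 28 = 1472, \qquad 9000 - 28 = 8972

The exact overhead varies with the tunnel type and options in use, which is why the complete path, and not just the hypervisors, must carry the larger frames.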
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:8(title) ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:8(title) ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:12(title) ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:8(title) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:12(title) ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:12(title) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:12(title) -msgid "Technical considerations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:9(para) -msgid "When you design an OpenStack network architecture, you must consider layer-2 and layer-3 issues. Layer-2 decisions involve those made at the data-link layer, such as the decision to use Ethernet versus Token Ring. Layer-3 decisions involve those made about the protocol layer and the point when IP comes into the picture. As an example, a completely internal OpenStack network can exist at layer 2 and ignore layer 3. In order for any traffic to go outside of that cloud, to another network, or to the Internet, however, you must use a layer-3 router or switch." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:19(para) -msgid "The past few years have seen two competing trends in networking. One trend leans towards building data center network architectures based on layer-2 networking. Another trend treats the cloud environment essentially as a miniature version of the Internet. This approach is radically different from the network architecture approach in the staging environment: the Internet only uses layer-3 routing rather than layer-2 switching." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:27(para) -msgid "A network designed on layer-2 protocols has advantages over one designed on layer-3 protocols. In spite of the difficulties of using a bridge to perform the network role of a router, many vendors, customers, and service providers choose to use Ethernet in as many parts of their networks as possible. The benefits of selecting a layer-2 design are:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:35(para) -msgid "Ethernet frames contain all the essentials for networking. These include, but are not limited to, globally unique source addresses, globally unique destination addresses, and error control." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:41(para) -msgid "Ethernet frames can carry any kind of packet. Networking at layer 2 is independent of the layer-3 protocol." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:46(para) -msgid "Adding more layers to the Ethernet frame only slows the networking process down. This is known as 'nodal processing delay'." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:51(para) -msgid "You can add adjunct networking features, for example class of service (CoS) or multicasting, to Ethernet as readily as IP networks." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:56(para) -msgid "VLANs are an easy mechanism for isolating networks." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:60(para) -msgid "Most information starts and ends inside Ethernet frames. Today this applies to data, voice (for example, VoIP), and video (for example, web cameras). The concept is that, if you can perform more of the end-to-end transfer of information from a source to a destination in the form of Ethernet frames, the network benefits more from the advantages of Ethernet. Although it is not a substitute for IP networking, networking at layer 2 can be a powerful adjunct to IP networking." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:68(para) -msgid "Layer-2 Ethernet usage has these advantages over layer-3 IP network usage:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:74(para) -msgid "Speed" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:77(para) -msgid "Reduced overhead of the IP hierarchy." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:80(para) -msgid "No need to keep track of address configuration as systems move around. Whereas the simplicity of layer-2 protocols might work well in a data center with hundreds of physical machines, cloud data centers have the additional burden of needing to keep track of all virtual machine addresses and networks. In these data centers, it is not uncommon for one physical node to support 30-40 instances." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:91(para) -msgid "Networking at the frame level says nothing about the presence or absence of IP addresses at the packet level. Almost all ports, links, and devices on a network of LAN switches still have IP addresses, as do all the source and destination hosts. There are many reasons for the continued need for IP addressing. The largest one is the need to manage the network. A device or link without an IP address is usually invisible to most management applications. Utilities including remote access for diagnostics, file transfer of configurations and software, and similar applications cannot run without IP addresses as well as MAC addresses." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:104(title) -msgid "Layer-2 architecture limitations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:105(para) -msgid "Outside of the traditional data center the limitations of layer-2 network architectures become more obvious." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:109(para) -msgid "Number of VLANs is limited to 4096." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:112(para) -msgid "The number of MACs stored in switch tables is limited." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:116(para) -msgid "You must accommodate the need to maintain a set of layer-4 devices to handle traffic control." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:120(para) -msgid "MLAG, often used for switch redundancy, is a proprietary solution that does not scale beyond two devices and forces vendor lock-in." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:125(para) -msgid "It can be difficult to troubleshoot a network without IP addresses and ICMP." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:129(para) -msgid "Configuring ARP can be complicated on large layer-2 networks." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:134(para) -msgid "All network devices need to be aware of all MACs, even instance MACs, so there is constant churn in MAC tables and network state changes as instances start and stop." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:140(para) -msgid "Migrating MACs (instance migration) to different physical locations are a potential problem if you do not set ARP table timeouts properly." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:145(para) -msgid "It is important to know that layer 2 has a very limited set of network management tools. It is very difficult to control traffic, as it does not have mechanisms to manage the network or shape the traffic, and network troubleshooting is very difficult. One reason for this difficulty is network devices have no IP addresses. As a result, there is no reasonable way to check network delay in a layer-2 network." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:152(para) -msgid "On large layer-2 networks, configuring ARP learning can also be complicated. The setting for the MAC address timer on switches is critical and, if set incorrectly, can cause significant performance problems. As an example, the Cisco default MAC address timer is extremely long. Migrating MACs to different physical locations to support instance migration can be a significant problem. In this case, the network information maintained in the switches could be out of sync with the new location of the instance." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:161(para) -msgid "In a layer-2 network, all devices are aware of all MACs, even those that belong to instances. The network state information in the backbone changes whenever an instance starts or stops. As a result there is far too much churn in the MAC tables on the backbone switches." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:168(title) -msgid "Layer-3 architecture advantages" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:169(para) -msgid "In the layer 3 case, there is no churn in the routing tables due to instances starting and stopping. The only time there would be a routing state change is in the case of a Top of Rack (ToR) switch failure or a link failure in the backbone itself. Other advantages of using a layer-3 architecture include:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:177(para) -msgid "Layer-3 networks provide the same level of resiliency and scalability as the Internet." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:181(para) -msgid "Controlling traffic with routing metrics is straightforward." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:185(para) -msgid "You can configure layer 3 to use BGP confederation for scalability so core routers have state proportional to the number of racks, not to the number of servers or instances." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:192(para) -msgid "Routing takes instance MAC and IP addresses out of the network core, reducing state churn. Routing state changes only occur in the case of a ToR switch failure or backbone link failure." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:198(para) -msgid "There are a variety of well tested tools, for example ICMP, to monitor and manage traffic." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:202(para) -msgid "Layer-3 architectures enable the use of Quality of Service (QoS) to manage network performance." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:207(title) -msgid "Layer-3 architecture limitations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:208(para) -msgid "The main limitation of layer 3 is that there is no built-in isolation mechanism comparable to the VLANs in layer-2 networks. Furthermore, the hierarchical nature of IP addresses means that an instance is on the same subnet as its physical host. This means that you cannot migrate it outside of the subnet easily. For these reasons, network virtualization needs to use IP encapsulation and software at the end hosts for isolation and the separation of the addressing in the virtual layer from the addressing in the physical layer. Other potential disadvantages of layer 3 include the need to design an IP addressing scheme rather than relying on the switches to keep track of the MAC addresses automatically and to configure the interior gateway routing protocol in the switches." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:225(title) -msgid "Network recommendations overview" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:226(para) -msgid "OpenStack has complex networking requirements for several reasons. Many components interact at different levels of the system stack that adds complexity. Data flows are complex. Data in an OpenStack cloud moves both between instances across the network (also known as East-West), as well as in and out of the system (also known as North-South). Physical server nodes have network requirements that are independent of instance network requirements, which you must isolate from the core network to account for scalability. We recommend functionally separating the networks for security purposes and tuning performance through traffic shaping." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:237(para) -msgid "You must consider a number of important general technical and business factors when planning and designing an OpenStack network. They include:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:242(para) -msgid "A requirement for vendor independence. To avoid hardware or software vendor lock-in, the design should not rely on specific features of a vendor's router or switch." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:248(para) -msgid "A requirement to massively scale the ecosystem to support millions of end users." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:252(para) -msgid "A requirement to support indeterminate platforms and applications." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:256(para) -msgid "A requirement to design for cost efficient operations to take advantage of massive scale." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:260(para) -msgid "A requirement to ensure that there is no single point of failure in the cloud ecosystem." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:264(para) -msgid "A requirement for high availability architecture to meet customer SLA requirements." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:268(para) -msgid "A requirement to be tolerant of rack level failure." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:272(para) -msgid "A requirement to maximize flexibility to architect future production environments." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:276(para) -msgid "Bearing in mind these considerations, we recommend the following:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:279(para) -msgid "Layer-3 designs are preferable to layer-2 architectures." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:283(para) -msgid "Design a dense multi-path network core to support multi-directional scaling and flexibility." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:287(para) -msgid "Use hierarchical addressing because it is the only viable option to scale network ecosystem." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:291(para) -msgid "Use virtual networking to isolate instance service network traffic from the management and internal network traffic." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:296(para) -msgid "Isolate virtual networks using encapsulation technologies." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:300(para) -msgid "Use traffic shaping for performance tuning." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:303(para) -msgid "Use eBGP to connect to the Internet up-link." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:306(para) -msgid "Use iBGP to flatten the internal traffic on the layer-3 mesh." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:310(para) -msgid "Determine the most effective configuration for block storage network." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:315(title) -msgid "Additional considerations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:316(para) -msgid "There are several further considerations when designing a network-focused OpenStack cloud." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:319(title) -msgid "OpenStack Networking versus legacy networking (nova-network) considerations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:321(para) -msgid "Selecting the type of networking technology to implement depends on many factors. OpenStack Networking (neutron) and legacy networking (nova-network) both have their advantages and disadvantages. They are both valid and supported options that fit different use cases:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:330(th) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:108(term) -msgid "Legacy networking (nova-network)" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:331(th) -msgid "OpenStack Networking" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:335(td) -msgid "Simple, single agent" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:336(td) -msgid "Complex, multiple agents" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:339(td) -msgid "More mature, established" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:340(td) -msgid "Newer, maturing" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:343(td) -msgid "Flat or VLAN" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:344(td) -msgid "Flat, VLAN, Overlays, L2-L3, SDN" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:346(td) -msgid "No plug-in support" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:347(td) -msgid "Plug-in support for 3rd parties" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:350(td) -msgid "Scales well" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:351(td) -msgid "Scaling requires 3rd party plug-ins" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:354(td) -msgid "No multi-tier topologies" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:355(td) -msgid "Multi-tier topologies" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:361(title) -msgid "Redundant networking: ToR switch high availability risk analysis" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:363(para) -msgid "A technical consideration of networking is the idea that you should install switching gear in a data center with backup switches in case of hardware failure." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:366(para) -msgid "Research indicates the mean time between failures (MTBF) on switches is between 100,000 and 200,000 hours. This number is dependent on the ambient temperature of the switch in the data center. When properly cooled and maintained, this translates to between 11 and 22 years before failure. Even in the worst case of poor ventilation and high ambient temperatures in the data center, the MTBF is still 2-3 years. 
See http://www.garrettcom.com/techsupport/papers/ethernet_switch_reliability.pdf for further information." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:375(para) -msgid "In most cases, it is much more economical to use a single switch with a small pool of spare switches to replace failed units than it is to outfit an entire data center with redundant switches. Applications should tolerate rack level outages without affecting normal operations, since network and compute resources are easily provisioned and plentiful." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:384(title) -msgid "Preparing for the future: IPv6 support" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:385(para) -msgid "One of the most important networking topics today is the impending exhaustion of IPv4 addresses. In early 2014, ICANN announced that they started allocating the final IPv4 address blocks to the Regional Internet Registries (http://www.internetsociety.org/deploy360/blog/2014/05/goodbye-ipv4-iana-starts-allocating-final-address-blocks/). This means the IPv4 address space is close to being fully allocated. As a result, it will soon become difficult to allocate more IPv4 addresses to an application that has experienced growth, or that you expect to scale out, due to the lack of unallocated IPv4 address blocks." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:395(para) -msgid "For network focused applications the future is the IPv6 protocol. IPv6 increases the address space significantly, fixes long standing issues in the IPv4 protocol, and will become essential for network focused applications in the future." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:400(para) -msgid "OpenStack Networking supports IPv6 when configured to take advantage of it. To enable IPv6, create an IPv6 subnet in Networking and use IPv6 prefixes when creating security groups." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:405(title) -msgid "Asymmetric links" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:406(para) -msgid "When designing a network architecture, the traffic patterns of an application heavily influence the allocation of total bandwidth and the number of links that you use to send and receive traffic. Applications that provide file storage for customers allocate bandwidth and links to favor incoming traffic, whereas video streaming applications allocate bandwidth and links to favor outgoing traffic." 
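Tying the IPv6 note above to the API, the following is a minimal sketch using python-neutronclient (not from the original guide); the documentation prefix 2001:db8:1234::/64, the placeholder network and security group IDs, and the choice of SLAAC address mode are assumptions::

    # Add an IPv6 subnet to an existing tenant network and open HTTP over IPv6.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo',
                            password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    neutron.create_subnet({'subnet': {
        'network_id': 'NETWORK_ID',          # existing tenant network (placeholder)
        'ip_version': 6,
        'cidr': '2001:db8:1234::/64',
        'ipv6_ra_mode': 'slaac',             # router advertisements from Networking
        'ipv6_address_mode': 'slaac',        # instances self-assign their addresses
    }})

    # Security group rules are per ethertype, so IPv6 traffic needs its own rule.
    neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': 'SECURITY_GROUP_ID',   # placeholder
        'direction': 'ingress',
        'ethertype': 'IPv6',
        'protocol': 'tcp',
        'port_range_min': 80,
        'port_range_max': 80,
    }})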
-msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:415(title) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:18(para) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:42(term) ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:118(title) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:108(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:347(term) ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:72(term) -msgid "Performance" -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:416(para) -msgid "It is important to analyze the applications' tolerance for latency and jitter when designing an environment to support network focused applications. Certain applications, for example VoIP, are less tolerant of latency and jitter. Where latency and jitter are concerned, certain applications may require tuning of QoS parameters and network device queues to ensure that they queue for transmit immediately or guarantee minimum bandwidth. Since OpenStack currently does not support these functions, consider carefully your selected network plug-in." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:426(para) -msgid "The location of a service may also impact the application or consumer experience. If an application serves differing content to different users it must properly direct connections to those specific locations. Where appropriate, use a multi-site installation for these situations." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:431(para) -msgid "You can implement networking in two separate ways. Legacy networking (nova-network) provides a flat DHCP network with a single broadcast domain. This implementation does not support tenant isolation networks or advanced plug-ins, but it is currently the only way to implement a distributed layer-3 agent using the multi_host configuration. OpenStack Networking (neutron) is the official networking implementation and provides a pluggable architecture that supports a large variety of network methods. Some of these include a layer-2 only provider network model, external device plug-ins, or even OpenFlow controllers." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:442(para) -msgid "Networking at large scales becomes a set of boundary questions. The determination of how large a layer-2 domain must be is based on the amount of nodes within the domain and the amount of broadcast traffic that passes between instances. Breaking layer-2 boundaries may require the implementation of overlay networks and tunnels. This decision is a balancing act between the need for a smaller overhead or a need for a smaller domain." -msgstr "" - -#: ./doc/arch-design/network_focus/section_tech_considerations_network_focus.xml:450(para) -msgid "When selecting network devices, be aware that making this decision based on the greatest port density often comes with a drawback. Aggregation switches and routers have not all kept pace with Top of Rack switches and may induce bottlenecks on north-south traffic. As a result, it may be possible for massive amounts of downstream network utilization to impact upstream network devices, impacting service to the cloud. 
Since OpenStack does not currently provide a mechanism for traffic shaping or rate limiting, it is necessary to implement these features at the network hardware level." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:8(title) ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:8(title) ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:8(title) ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:8(title) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:334(term) ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:8(title) -msgid "User requirements" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:9(para) -msgid "Network-focused architectures vary from the general-purpose architecture designs. Certain network-intensive applications influence these architectures. Some of the business requirements that influence the design include:" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:15(para) -msgid "Network latency through slow page loads, degraded video streams, and low quality VoIP sessions impacts the user experience. Users are often not aware of how network design and architecture affects their experiences. Both enterprise customers and end-users rely on the network for delivery of an application. Network performance problems can result in a negative experience for the end-user, as well as productivity and economic loss." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:26(title) -msgid "High availability issues" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:27(para) -msgid "Depending on the application and use case, network-intensive OpenStack installations can have high availability requirements. Financial transaction systems have a much higher requirement for high availability than a development application. Use network availability technologies, for example quality of service (QoS), to improve the network performance of sensitive applications such as VoIP and video streaming." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:34(para) -msgid "High performance systems have SLA requirements for a minimum QoS with regard to guaranteed uptime, latency, and bandwidth. The level of the SLA can have a significant impact on the network architecture and requirements for redundancy in the systems." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:40(title) -msgid "Risks" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:43(term) -msgid "Network misconfigurations" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:45(para) -msgid "Configuring incorrect IP addresses, VLANs, and routers can cause outages to areas of the network or, in the worst-case scenario, the entire cloud infrastructure. Automate network configurations to minimize the opportunity for operator error as it can cause disruptive problems." 
-msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:53(term) ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:55(title) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:51(title) ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:99(title) -msgid "Capacity planning" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:55(para) -msgid "Cloud networks require management for capacity and growth over time. Capacity planning includes the purchase of network circuits and hardware that can potentially have lead times measured in months or years." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:62(term) -msgid "Network tuning" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:64(para) -msgid "Configure cloud networks to minimize link loss, packet loss, packet storms, broadcast storms, and loops." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:69(term) -msgid "Single Point Of Failure (SPOF)" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:71(para) -msgid "Consider high availability at the physical and environmental layers. If there is a single point of failure due to only one upstream link, or only one power supply, an outage can become unavoidable." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:78(term) -msgid "Complexity" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:80(para) -msgid "An overly complex network design can be difficult to maintain and troubleshoot. While device-level configuration can ease maintenance concerns and automated tools can handle overlay networks, avoid or document non-traditional interconnects between functions and specialized hardware to prevent outages." -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:89(term) -msgid "Non-standard features" -msgstr "" - -#: ./doc/arch-design/network_focus/section_user_requirements_network_focus.xml:91(para) -msgid "There are additional risks that arise from configuring the cloud network to take advantage of vendor specific features. One example is multi-link aggregation (MLAG) used to provide redundancy at the aggregator switch level of the network. MLAG is not a standard and, as a result, each vendor has their own proprietary implementation of the feature. MLAG architectures are not interoperable across switch vendors, which leads to vendor lock-in, and can cause delays or inability when upgrading components." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:9(para) -msgid "Several operational factors affect the design choices for a general purpose cloud. Operations staff receive tasks regarding the maintenance of cloud environments for larger installations, including:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:14(term) -msgid "Maintenance tasks" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:16(para) -msgid "The storage solution should take into account storage maintenance and the impact on underlying workloads." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:22(term) -msgid "Reliability and availability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:24(para) -msgid "Reliability and availability depend on wide area network availability and on the level of precautions taken by the service provider." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:31(term) -msgid "Flexibility" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:33(para) -msgid "Organizations need to have the flexibility to choose between off-premise and on-premise cloud storage options. This relies on relevant decision criteria with potential cost savings. For example, continuity of operations, disaster recovery, security, records retention laws, regulations, and policies." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:42(para) -msgid "Monitoring and alerting services are vital in cloud environments with high demands on storage resources. These services provide a real-time view into the health and performance of the storage systems. An integrated management console, or other dashboards capable of visualizing SNMP data, is helpful when discovering and resolving issues that arise within the storage cluster." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:49(para) -msgid "A storage-focused cloud design should include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:52(para) -msgid "Monitoring of physical hardware resources." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:55(para) -msgid "Monitoring of environmental resources such as temperature and humidity." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:59(para) -msgid "Monitoring of storage resources such as available storage, memory, and CPU." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:63(para) -msgid "Monitoring of advanced storage performance data to ensure that storage systems are performing as expected." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:68(para) -msgid "Monitoring of network resources for service disruptions which would affect access to storage." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:73(para) -msgid "Centralized log collection." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:76(para) -msgid "Log analytics capabilities." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:79(para) -msgid "Ticketing system (or integration with a ticketing system) to track issues." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:83(para) -msgid "Alerting and notification of responsible teams or automated systems which remediate problems with storage as they arise." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:88(para) -msgid "Network Operations Center (NOC) staffed and always available to resolve issues." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:94(title) -msgid "Application awareness" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:95(para) -msgid "Well-designed applications should be aware of underlying storage subsystems in order to use cloud storage solutions effectively." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:97(para) -msgid "If natively available replication is not available, operations personnel must be able to modify the application so that they can provide their own replication service. In the event that replication is unavailable, operations personnel can design applications to react such that they can provide their own replication services. An application designed to detect underlying storage systems can function in a wide variety of infrastructures, and still have the same basic behavior regardless of the differences in the underlying infrastructure." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:109(title) -msgid "Fault tolerance and availability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:110(para) -msgid "Designing for fault tolerance and availability of storage systems in an OpenStack cloud is vastly different when comparing the Block Storage and Object Storage services." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:115(title) -msgid "Block Storage fault tolerance and availability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:116(para) -msgid "Configure Block Storage resource nodes with advanced RAID controllers and high performance disks to provide fault tolerance at the hardware level." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:119(para) -msgid "Deploy high performing storage solutions such as SSD disk drives or flash storage systems for applications requiring extreme performance out of Block Storage devices." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:122(para) -msgid "In environments that place extreme demands on Block Storage, we recommend using multiple storage pools. In this case, each pool of devices should have a similar hardware design and disk configuration across all hardware nodes in that pool. This allows for a design that provides applications with access to a wide variety of Block Storage pools, each with their own redundancy, availability, and performance characteristics. When deploying multiple pools of storage it is also important to consider the impact on the Block Storage scheduler which is responsible for provisioning storage across resource nodes. Ensuring that applications can schedule volumes in multiple regions, each with their own network, power, and cooling infrastructure, can give tenants the ability to build fault tolerant applications that are distributed across multiple availability zones." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:137(para) -msgid "In addition to the Block Storage resource nodes, it is important to design for high availability and redundancy of the APIs, and related services that are responsible for provisioning and providing access to storage. 
We recommend designing a layer of hardware or software load balancers in order to achieve high availability of the appropriate REST API services to provide uninterrupted service. In some cases, it may also be necessary to deploy an additional layer of load balancing to provide access to back-end database services responsible for servicing and storing the state of Block Storage volumes. We also recommend designing a highly available database solution to store the Block Storage databases. Leverage highly available database solutions such as Galera and MariaDB to help keep database services online for uninterrupted access, so that tenants can manage Block Storage volumes." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:153(para) -msgid "In a cloud with extreme demands on Block Storage, the network architecture should take into account the amount of East-West bandwidth required for instances to make use of the available storage resources. The selected network devices should support jumbo frames for transferring large blocks of data. In some cases, it may be necessary to create an additional back-end storage network dedicated to providing connectivity between instances and Block Storage resources so that there is no contention of network resources." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:164(title) -msgid "Object Storage fault tolerance and availability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:165(para) -msgid "While consistency and partition tolerance are both inherent features of the Object Storage service, it is important to design the overall storage architecture to ensure that the implemented system meets those goals. The OpenStack Object Storage service places a specific number of data replicas as objects on resource nodes. These replicas are distributed throughout the cluster based on a consistent hash ring which exists on all nodes in the cluster." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:173(para) -msgid "Design the Object Storage system with a sufficient number of zones to provide quorum for the number of replicas defined. For example, with three replicas configured in the Swift cluster, the recommended number of zones to configure within the Object Storage cluster in order to achieve quorum is five. While it is possible to deploy a solution with fewer zones, the implied risk of doing so is that some data may not be available and API requests to certain objects stored in the cluster might fail. For this reason, ensure you properly account for the number of zones in the Object Storage cluster." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:183(para) -msgid "Each Object Storage zone should be self-contained within its own availability zone. Each availability zone should have independent access to network, power and cooling infrastructure to ensure uninterrupted access to data. In addition, a pool of Object Storage proxy servers providing access to data stored on the object nodes should service each availability zone. Object proxies in each region should leverage local read and write affinity so that local storage resources facilitate access to objects wherever possible. 
We recommend deploying upstream load balancing to ensure that proxy services are distributed across the multiple zones and, in some cases, it may be necessary to make use of third-party solutions to aid with geographical distribution of services." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:196(para) -msgid "A zone within an Object Storage cluster is a logical division. Any of the following may represent a zone:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:200(para) -msgid "A disk within a single node" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:205(para) -msgid "One zone per node" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:210(para) -msgid "Zone per collection of nodes" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:215(para) -msgid "Multiple racks" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:220(para) -msgid "Multiple DCs" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:225(para) -msgid "Selecting the proper zone design is crucial for allowing the Object Storage cluster to scale while providing an available and redundant storage system. It may be necessary to configure storage policies that have different requirements with regards to replicas, retention and other factors that could heavily affect the design of storage in a specific zone." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:236(title) -msgid "Scaling storage services" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:237(para) -msgid "Adding storage capacity and bandwidth is a very different process when comparing the Block and Object Storage services. While adding Block Storage capacity is a relatively simple process, adding capacity and bandwidth to the Object Storage systems is a complex task that requires careful planning and consideration during the design phase." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:245(title) -msgid "Scaling Block Storage" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:246(para) -msgid "You can upgrade Block Storage pools to add storage capacity without interrupting the overall Block Storage service. Add nodes to the pool by installing and configuring the appropriate hardware and software and then allowing that node to report in to the proper storage pool via the message bus. This is because Block Storage nodes report into the scheduler service advertising their availability. After the node is online and available, tenants can make use of those storage resources instantly." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:256(para) -msgid "In some cases, the demand on Block Storage from instances may exhaust the available network bandwidth. As a result, design network infrastructure that services Block Storage resources in such a way that you can add capacity and bandwidth easily. This often involves the use of dynamic routing protocols or advanced networking solutions to add capacity to downstream devices easily. 
Both the front-end and back-end storage network designs should encompass the ability to quickly and easily add capacity and bandwidth." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:268(title) -msgid "Scaling Object Storage" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:269(para) -msgid "Adding back-end storage capacity to an Object Storage cluster requires careful planning and consideration. In the design phase, it is important to determine the maximum partition power required by the Object Storage service, which determines the maximum number of partitions which can exist. Object Storage distributes data among all available storage, but a partition cannot span more than one disk, so the maximum number of partitions can only be as high as the number of disks." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:278(para) -msgid "For example, a system that starts with a single disk and a partition power of 3 can have 8 (2^3) partitions. Adding a second disk means that each has 4 partitions. The one-disk-per-partition limit means that this system can never have more than 8 disks, limiting its scalability. However, a system that starts with a single disk and a partition power of 10 can have up to 1024 (2^10) disks." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:285(para) -msgid "As you add back-end storage capacity to the system, the partition maps redistribute data amongst the storage nodes. In some cases, this replication consists of extremely large data sets. In these cases, we recommend using back-end replication links that do not contend with tenants' access to data." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:291(para) -msgid "As more tenants begin to access data within the cluster and their data sets grow, it is necessary to add front-end bandwidth to service data access requests. Adding front-end bandwidth to an Object Storage cluster requires careful planning and design of the Object Storage proxies that tenants use to gain access to the data, along with the high availability solutions that enable easy scaling of the proxy layer. We recommend designing a front-end load balancing layer that tenants and consumers use to gain access to data stored within the cluster. This load balancing layer may be distributed across zones, regions or even across geographic boundaries, which may also require that the design encompass geo-location solutions." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml:304(para) -msgid "In some cases, you must add bandwidth and capacity to the network resources servicing requests between proxy servers and storage nodes. For this reason, the network architecture used for access to storage nodes and proxy servers should make use of a design which is scalable." 
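The partition-power arithmetic above can be restated as a minimal sketch (Python, illustrative only; it simply reproduces the 2^partition-power figures from the text):

    # Swift ring sizing: the partition power fixes the partition count at
    # 2**partition_power, and a partition cannot span more than one disk,
    # so the partition count is also the ceiling on the number of disks.
    def max_partitions(partition_power: int) -> int:
        return 2 ** partition_power

    print(max_partitions(3))        # 8 partitions -> at most 8 disks
    print(max_partitions(3) // 2)   # with 2 disks, 4 partitions per disk
    print(max_partitions(10))       # 1024 partitions -> up to 1024 disks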
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:9(para) -msgid "Some of the key technical considerations that are critical to a storage-focused OpenStack design architecture include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:14(term) -msgid "Input-Output requirements" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:16(para) -msgid "Input-Output performance requirements require researching and modeling before deciding on a final storage framework. Running benchmarks for Input-Output performance provides a baseline for expected performance levels. If these tests include details, then the resulting data can help model behavior and results during different workloads. Running scripted smaller benchmarks during the life cycle of the architecture helps record the system health at different points in time. The data from these scripted benchmarks assist in future scoping and gaining a deeper understanding of an organization's needs." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:32(term) -msgid "Scale" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:34(para) -msgid "Scaling storage solutions in a storage-focused OpenStack architecture design is driven by initial requirements, including IOPS, capacity, bandwidth, and future needs. Planning capacity based on projected needs over the course of a budget cycle is important for a design. The architecture should balance cost and capacity, while also allowing flexibility to implement new technologies and methods as they become available." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:49(para) -msgid "Designing security around data has multiple points of focus that vary depending on SLAs, legal requirements, industry regulations, and certifications needed for systems or people. Consider compliance with HIPPA, ISO9000, and SOX based on the type of data. For certain organizations, multiple levels of access control are important." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:58(term) -msgid "OpenStack compatibility" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:60(para) -msgid "Interoperability and integration with OpenStack can be paramount in deciding on a storage hardware and storage management platform. Interoperability and integration includes factors such as OpenStack Block Storage interoperability, OpenStack Object Storage compatibility, and hypervisor compatibility (which affects the ability to use storage for ephemeral instance storage)." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:71(term) -msgid "Storage management" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:73(para) -msgid "You must address a range of storage management-related considerations in the design of a storage-focused OpenStack cloud. These considerations include, but are not limited to, backup strategy (and restore strategy, since a backup that cannot be restored is useless), data valuation-hierarchical storage management, retention strategy, data placement, and workflow automation." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:85(term) -msgid "Data grids" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:87(para) -msgid "Data grids are helpful when answering questions around data valuation. Data grids improve decision making through correlation of access patterns, ownership, and business-unit revenue with other metadata values to deliver actionable information about data." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml:95(para) -msgid "When building a storage-focused OpenStack architecture, strive to build a flexible design based on an industry standard core. One way of accomplishing this might be through the use of different back ends serving different use cases." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:36(None) -msgid "@@image: '../figures/Storage_Object.png'; md5=ad0b4ee39c96ab081a368ef7857479a5" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:97(None) -msgid "@@image: '../figures/Storage_Hadoop3.png'; md5=bdc6373caede70b37209de260616b255" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:131(None) -msgid "@@image: '../figures/Storage_Database_+_Object5.png'; md5=a0cb2374c3515b8f3203ebdc7bb7dbbf" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:9(para) -msgid "Storage-focused architecture depends on specific use cases. This section discusses three example use cases:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:13(para) -msgid "An object store with a RESTful interface" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:18(para) -msgid "Compute analytics with parallel file systems" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:23(para) -msgid "High performance database" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:28(para) -msgid "The example below shows a REST interface without a high performance requirement." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:30(para) -msgid "Swift is a highly scalable object store that is part of the OpenStack project. This diagram explains the example architecture: " -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:40(para) -msgid "The example REST interface, presented as a traditional Object store running on traditional spindles, does not require a high performance caching tier." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:43(para) -msgid "This example uses the following components:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:44(para) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:149(para) -msgid "Network:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:47(para) -msgid "10 GbE horizontally scalable spine leaf back-end storage and front end network." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:51(para) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:156(para) -msgid "Storage hardware:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:54(para) -msgid "10 storage servers each with 12x4 TB disks equaling 480 TB total space with approximately 160 TB of usable space after replicas." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:59(para) -msgid "Proxy:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:62(para) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:171(para) -msgid "3x proxies" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:65(para) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:174(para) -msgid "2x10 GbE bonded front end" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:68(para) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:177(para) -msgid "2x10 GbE back-end bonds" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:71(para) ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:180(para) -msgid "Approximately 60 Gb of total bandwidth to the back-end storage cluster" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:76(para) -msgid "It may be necessary to implement a 3rd-party caching layer for some applications to achieve suitable performance." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:81(title) -msgid "Compute analytics with Data processing service" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:82(para) -msgid "Analytics of large data sets are dependent on the performance of the storage system. Clouds using storage systems such as Hadoop Distributed File System (HDFS) have inefficiencies which can cause performance issues." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:87(para) -msgid "One potential solution to this problem is the implementation of storage systems designed for performance. Parallel file systems have previously filled this need in the HPC space and are suitable for large scale performance-orientated systems." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:91(para) -msgid "OpenStack has integration with Hadoop to manage the Hadoop cluster within the cloud. 
The following diagram shows an OpenStack store with a high performance requirement: " -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:101(para) -msgid "The hardware requirements and configuration are similar to those of the High Performance Database example below. In this case, the architecture uses Ceph's Swift-compatible REST interface, along with features that allow a caching pool to be connected to accelerate the presented pool." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:111(title) -msgid "High performance database with Database service" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:112(para) -msgid "Databases are a common workload that benefits from high performance storage back ends. Although enterprise storage is not a requirement, many environments have existing storage that an OpenStack cloud can use as back ends. You can create a storage pool to provide block devices with OpenStack Block Storage for instances as well as object interfaces. In this example, the database I/O requirements are high and demand storage presented from a fast SSD pool." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:119(para) -msgid "A storage system presents a LUN backed by a set of SSDs using a traditional storage array with OpenStack Block Storage integration or a storage platform such as Ceph or Gluster." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:123(para) -msgid "This system can provide additional performance. For example, in the database example below, a portion of the SSD pool can act as a block device to the Database server. In the high performance analytics example, the inline SSD cache layer accelerates the REST interface." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:134(para) -msgid "In this example, Ceph presents a Swift-compatible REST interface, as well as block-level storage from a distributed storage cluster. It is highly flexible and has features that reduce the cost of operations, such as self-healing and auto-balancing. Using erasure coded pools is a suitable way of maximizing the amount of usable space." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:141(para) -msgid "There are special considerations for erasure coded pools, such as higher computational requirements and limitations on the operations allowed on an object; for example, erasure coded pools do not support partial writes."
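A back-of-the-envelope comparison of usable space under triple replication versus an erasure coded pool, using the 480 TB raw capacity from this example (the k=8, m=4 erasure-coding profile is an illustrative assumption, not a recommendation):

    # 10 servers x 12 disks x 4 TB = 480 TB raw.
    RAW_TB = 10 * 12 * 4

    def usable_with_replicas(raw_tb, replicas=3):
        # Triple replication keeps one third of the raw space (~160 TB here).
        return raw_tb / replicas

    def usable_with_erasure_coding(raw_tb, k=8, m=4):
        # An erasure coded pool with k data and m parity chunks keeps k/(k+m).
        return raw_tb * k / (k + m)

    print(usable_with_replicas(RAW_TB))        # 160.0
    print(usable_with_erasure_coding(RAW_TB))  # 320.0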
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:147(para) -msgid "Using Ceph as an applicable example, a potential architecture would have the following requirements:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:152(para) -msgid "10 GbE horizontally scalable spine leaf back-end storage and front-end network" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:159(para) -msgid "5 storage servers for caching layer 24x1 TB SSD" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:163(para) -msgid "10 storage servers each with 12x4 TB disks which equals 480 TB total space with about approximately 160 TB of usable space after 3 replicas" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:168(para) -msgid "REST proxy:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml:184(para) -msgid "Using an SSD cache layer, you can present block devices directly to hypervisors or instances. The REST interface can also use the SSD cache systems as an inline cache." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:12(para) -msgid "Consider the following factors when selecting storage hardware:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:15(para) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:35(term) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:144(term) ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:374(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:35(para) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:142(para) ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:99(title) ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:17(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:95(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:229(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:534(term) ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:24(term) -msgid "Cost" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:21(para) -msgid "Reliability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:24(para) -msgid "Storage-focused OpenStack clouds must address I/O intensive workloads. These workloads are not CPU intensive, nor are they consistently network intensive. The network may be heavily utilized to transfer storage, but they are not otherwise network intensive." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:29(para) -msgid "The selection of storage hardware determines the overall performance and scalability of a storage-focused OpenStack design architecture. Several factors impact the design process, including:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:37(para) -msgid "The cost of components affects which storage architecture and hardware you choose." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:44(para) -msgid "The latency of storage I/O requests indicates performance. 
Performance requirements affect which solution you choose." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:49(term) ./doc/arch-design/hybrid/section_architecture_hybrid.xml:117(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:243(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:472(term) -msgid "Scalability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:51(para) -msgid "Scalability refers to how the storage solution performs as it expands to its maximum size. Storage solutions that perform well in small configurations but have degraded performance in large configurations are not scalable. A solution that performs well at maximum expansion is scalable. Large deployments require a storage solution that performs well as it expands." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:61(para) -msgid "Latency is a key consideration in a storage-focused OpenStack cloud. Using solid-state disks (SSDs) to minimize latency and reduce CPU delays caused by waiting for storage increases performance. Use RAID controller cards in compute hosts to improve the performance of the underlying disk subsystem." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:66(para) -msgid "Depending on the storage architecture, you can adopt a scale-out solution, or use a highly expandable and scalable centralized storage array. If a centralized storage array is the right fit for your requirements, then the array vendor determines the hardware selection. It is possible to build a storage array using commodity hardware with open source software, but it requires people with expertise to build such a system." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:73(para) -msgid "On the other hand, a scale-out storage solution that uses direct-attached storage (DAS) in the servers may be an appropriate choice. This requires configuration of the server hardware to support the storage solution." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:77(para) -msgid "Considerations affecting the storage architecture (and corresponding storage hardware) of a storage-focused OpenStack cloud include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:81(term) ./doc/arch-design/introduction/section_methodology.xml:102(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:281(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:459(term) -msgid "Connectivity" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:83(para) -msgid "Based on the selected storage solution, ensure the connectivity matches the storage solution requirements. We recommend confirming that the network characteristics minimize latency to boost the overall performance of the design." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:91(term) -msgid "Latency" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:93(para) -msgid "Determine if the use case has consistent or highly variable latency."
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:98(term) -msgid "Throughput" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:100(para) -msgid "Ensure that the storage solution throughput is optimized for your application requirements." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:106(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:311(term) -msgid "Server hardware" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:108(para) -msgid "Use of DAS impacts the server hardware choice and affects host density, instance density, power density, OS-hypervisor, and management tools." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:117(title) -msgid "Compute (server) hardware selection" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:118(para) -msgid "Four opposing factors determine the compute (server) hardware selection:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:122(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:26(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:73(term) -msgid "Server density" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:124(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:75(para) -msgid "A measure of how many servers can fit into a given measure of physical space, such as a rack unit [U]." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:130(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:29(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:81(term) -msgid "Resource capacity" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:132(para) -msgid "The number of CPU cores, how much RAM, or how much storage a given server delivers." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:137(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:32(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:88(term) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:255(term) -msgid "Expandability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:139(para) -msgid "The number of additional resources you can add to a server before it reaches capacity." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:146(para) -msgid "The relative cost of the hardware weighed against the level of design effort needed to build the system." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:152(para) -msgid "You must weigh the dimensions against each other to determine the best design for the desired purpose. For example, increasing server density can mean sacrificing resource capacity or expandability. Increasing resource capacity and expandability can increase cost but decrease server density. Decreasing cost often means decreasing supportability, server density, resource capacity, and expandability." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:160(para) -msgid "Compute capacity (CPU cores and RAM capacity) is a secondary consideration for selecting server hardware. As a result, the required server hardware must supply adequate CPU sockets, additional CPU cores, and more RAM; network connectivity and storage capacity are not as critical. The hardware needs to provide enough network connectivity and storage capacity to meet the user requirements, however they are not the primary consideration." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:168(para) -msgid "Some server hardware form factors are better suited to storage-focused designs than others. The following is a list of these form factors:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:173(para) -msgid "Most blade servers support dual-socket multi-core CPUs. Choose either full width or full height blades to avoid the limit. High density blade servers support up to 16 servers in only 10 rack units using half height or half width blades." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:179(para) -msgid "This decreases density by 50% (only 8 servers in 10 U) if a full width or full height option is used." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:185(para) -msgid "1U rack-mounted servers have the ability to offer greater server density than a blade server solution, but are often limited to dual-socket, multi-core CPU configurations." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:189(para) -msgid "Due to cooling requirements, it is rare to see 1U rack-mounted servers with more than 2 CPU sockets." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:192(para) -msgid "To obtain greater than dual-socket support in a 1U rack-mount form factor, customers need to buy their systems from Original Design Manufacturers (ODMs) or second-tier manufacturers." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:197(para) -msgid "This may cause issues for organizations that have preferred vendor policies or concerns with support and hardware warranties of non-tier 1 vendors." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:204(para) -msgid "2U rack-mounted servers provide quad-socket, multi-core CPU support but with a corresponding decrease in server density (half the density offered by 1U rack-mounted servers)." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:210(para) -msgid "Larger rack-mounted servers, such as 4U servers, often provide even greater CPU capacity. Commonly supporting four or even eight CPU sockets. These servers have greater expandability but such servers have much lower server density and usually greater hardware cost." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:218(para) -msgid "Rack-mounted servers that support multiple independent servers in a single 2U or 3U enclosure, \"sled servers\", deliver increased density as compared to a typical 1U-2U rack-mounted servers." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:224(para) -msgid "Other factors that influence server hardware selection for a storage-focused OpenStack design architecture include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:229(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:93(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:161(term) -msgid "Instance density" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:231(para) -msgid "In this architecture, instance density and CPU-RAM oversubscription are lower. You require more hosts to support the anticipated scale, especially if the design uses dual-socket hardware designs." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:239(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:96(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:173(term) -msgid "Host density" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:241(para) -msgid "Another option to address the higher host count is to use a quad-socket platform. Taking this approach decreases host density which also increases rack count. This configuration affects the number of power connections and also impacts network and cooling requirements." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:250(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:99(para) -msgid "Power and cooling density" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:252(para) -msgid "The power and cooling density requirements might be lower than with blade, sled, or 1U server designs due to lower host density (by using 2U, 3U or even 4U server designs). For data centers with older infrastructure, this might be a desirable feature." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:261(para) -msgid "Storage-focused OpenStack design architecture server hardware selection should focus on a \"scale-up\" versus \"scale-out\" solution. The determination of which is the best solution (a smaller number of larger hosts or a larger number of smaller hosts), depends on a combination of factors including cost, power, cooling, physical rack and floor space, support-warranty, and manageability." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:271(title) -msgid "Networking hardware selection" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:272(para) -msgid "Key considerations for the selection of networking hardware include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:275(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:109(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:397(term) -msgid "Port count" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:277(para) -msgid "The user requires networking hardware that has the requisite port count." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:282(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:112(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:404(term) -msgid "Port density" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:284(para) -msgid "The physical space required to provide the requisite port count affects the network design. A switch that provides 48 10GbE ports in 1U has a much higher port density than a switch that provides 24 10GbE ports in 2U. On a general scale, a higher port density leaves more rack space for compute or storage components which is preferred. It is also important to consider fault domains and power density. Finally, higher density switches are more expensive, therefore it is important not to over design the network." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:298(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:115(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:419(term) -msgid "Port speed" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:300(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:421(para) -msgid "The networking hardware must support the proposed network speed, for example: 1GbE, 10GbE, or 40GbE (or even 100GbE)." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:306(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:118(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:428(term) -msgid "Redundancy" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:308(para) -msgid "User requirements for high availability and cost considerations influence the required level of network hardware redundancy. Achieve network redundancy by adding redundant power supplies or paired switches." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:313(para) -msgid "If this is a requirement, the hardware must support this configuration. User requirements determine if a completely redundant network infrastructure is required." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:321(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:121(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:439(term) -msgid "Power requirements" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:323(para) -msgid "Ensure that the physical data center provides the necessary power for the selected network hardware. This is not an issue for top of rack (ToR) switches, but may be an issue for spine switches in a leaf and spine fabric, or end of row (EoR) switches." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:332(term) -msgid "Protocol support" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:334(para) -msgid "It is possible to gain more performance out of a single storage system by using specialized network technologies such as RDMA, SRP, iSER and SCST. The specifics for using these technologies is beyond the scope of this book." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:345(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:504(title) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:261(title) -msgid "Software selection" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:346(para) -msgid "Factors that influence the software selection for a storage-focused OpenStack architecture design include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:350(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:509(para) -msgid "Operating system (OS) and hypervisor" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:356(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:515(para) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:452(title) -msgid "Supplemental software" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:359(para) -msgid "Design decisions made in each of these areas impacts the rest of the OpenStack architecture design." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:364(title) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:136(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:521(title) -msgid "Operating system and hypervisor" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:365(para) -msgid "Operating system (OS) and hypervisor have a significant impact on the overall design and also affect server hardware selection. Ensure the selected operating system and hypervisor combination support the storage hardware and work with the networking hardware selection and topology." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:370(para) -msgid "Operating system and hypervisor selection affect the following areas:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:376(para) -msgid "Selecting a commercially supported hypervisor, such as Microsoft Hyper-V, results in a different cost model than a community-supported open source hypervisor like Kinstance or Xen. Similarly, choosing Ubuntu over Red Hat (or vice versa) impacts cost due to support contracts. However, business or application requirements might dictate a specific or commercially supported hypervisor." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:388(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:145(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:548(term) -msgid "Supportability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:390(para) -msgid "Staff must have training with the chosen hypervisor. Consider the cost of training when choosing a solution. The support of a commercial product such as Red Hat, SUSE, or Windows, is the responsibility of the OS vendor. If an open source platform is chosen, the support comes from in-house resources." 
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:400(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:148(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:559(term) -msgid "Management tools" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:402(para) -msgid "Ubuntu and Kinstance use different management tools than VMware vSphere. Although both OS and hypervisor combinations are supported by OpenStack, there are varying impacts to the rest of the design as a result of the selection of one combination versus the other." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:411(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:151(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:571(term) -msgid "Scale and performance" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:413(para) -msgid "Ensure the selected OS and hypervisor combination meet the appropriate scale and performance requirements needed for this storage focused OpenStack cloud. The chosen architecture must meet the targeted instance-host ratios with the selected OS-hypervisor combination." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:424(para) -msgid "Ensure the design can accommodate the regular periodic installation of application security patches while maintaining the required workloads. The frequency of security patches for the proposed OS-hypervisor combination impacts performance and the patch installation process could affect maintenance windows." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:434(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:157(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:593(term) -msgid "Supported features" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:436(para) -msgid "Selecting the OS-hypervisor combination often determines the required features of OpenStack. Certain features are only available with specific OSes or hypervisors. For example, if certain features are not available, you might need to modify the design to meet user requirements." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:444(term) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:160(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:602(term) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:349(term) -msgid "Interoperability" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:446(para) -msgid "The OS-hypervisor combination should be chosen based on the interoperability with one another, and other OS-hyervisor combinations. Operational and troubleshooting tools for one OS-hypervisor combination may differ from the tools used for another OS-hypervisor combination. As a result, the design must address if the two sets of tools need to interoperate." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:461(para) -msgid "The OpenStack components you choose can have a significant impact on the overall design. 
While there are certain components that are always present (Compute and Image service, for example), there are other services that may not be required. For example, a certain design may not require the Orchestration service. Omitting Orchestration typically does not have a significant impact on the overall design; however, if the architecture uses a replacement for OpenStack Object Storage for its storage component, this could potentially have significant impacts on the rest of the design." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:471(para) -msgid "A storage-focused design might require the ability to use Orchestration to launch instances with Block Storage volumes to perform storage-intensive processing." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:474(para) -msgid "A storage-focused OpenStack design architecture uses the following components:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:478(para) ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:232(para) -msgid "OpenStack Identity (keystone)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:481(para) -msgid "OpenStack dashboard (horizon)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:484(para) -msgid "OpenStack Compute (nova) (including the use of multiple hypervisor drivers)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:488(para) -msgid "OpenStack Object Storage (swift) (or another object storage solution)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:492(para) ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:256(para) -msgid "OpenStack Block Storage (cinder)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:495(para) ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:229(para) -msgid "OpenStack Image service (glance)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:498(para) -msgid "OpenStack Networking (neutron) or legacy networking (nova-network)" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:502(para) -msgid "Excluding certain OpenStack components may limit or constrain the functionality of other components. If a design opts to include Orchestration but exclude Telemetry, then the design cannot take advantage of Orchestration's auto scaling functionality (which relies on information from Telemetry). Because you can use Orchestration to launch a large number of instances to perform storage-intensive processing, we strongly recommend including Orchestration in a storage-focused architecture design." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:514(title) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:209(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:631(title) -msgid "Networking software" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:515(para) -msgid "OpenStack Networking (neutron) provides a wide variety of networking services for instances. There are many additional networking software packages that may be useful to manage the OpenStack components themselves.
Some examples include HAProxy, Keepalived, and various routing daemons (like Quagga). The OpenStack High Availability Guide describes some of these software packages, HAProxy in particular. See the Network controller cluster stack chapter of the OpenStack High Availability Guide." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:528(title) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:224(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:667(title) -msgid "Management software" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:529(para) -msgid "Management software includes software for providing:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:532(para) -msgid "Clustering" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:537(para) -msgid "Logging" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:542(para) ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:136(title) ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:34(title) ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:56(title) -msgid "Monitoring" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:547(para) -msgid "Alerting" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:553(para) -msgid "The factors for determining which software packages in this category to select is outside the scope of this design guide." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:557(para) -msgid "The availability design requirements determine the selection of Clustering Software, such as Corosync or Pacemaker. The availability of the cloud infrastructure and the complexity of supporting the configuration after deployment determines the impact of including these software packages. The OpenStack High Availability Guide provides more details on the installation and configuration of Corosync and Pacemaker." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:565(para) -msgid "Operational considerations determine the requirements for logging, monitoring, and alerting. Each of these sub-categories includes options. For example, in the logging sub-category you could select Logstash, Splunk, Log Insight, or another log aggregation-consolidation tool. Store logs in a centralized location to facilitate performing analytics against the data. Log data analytics engines can also provide automation and issue notification, by providing a mechanism to both alert and automatically attempt to remediate some of the more commonly known issues." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:576(para) -msgid "If you require any of these software packages, the design must account for the additional resource consumption. Some other potential design impacts include:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:581(para) -msgid "OS-Hypervisor combination: Ensure that the selected logging, monitoring, or alerting tools support the proposed OS-hypervisor combination." 
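The logging guidance above (Logstash, Splunk, or a similar aggregator, with logs stored centrally so analytics can run against them) is easy to picture with a minimal Python sketch. The collector host name and port below are placeholders, and a real OpenStack service would normally be configured through its logging options rather than ad hoc code:

```python
# Minimal sketch: forward service logs to a central collector so that a
# log aggregation tool (Logstash, Splunk, and so on) can index them.
# The host name "logs.example.com" and the UDP port are placeholders.
import logging
import logging.handlers

logger = logging.getLogger("nova.compute")
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

# Every record now travels to the central collector as well as any locally
# configured handlers, which keeps log analytics in one place.
logger.info("compute node joined the cluster")
```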
-msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:586(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:697(para) -msgid "Network hardware: The network hardware selection needs to be supported by the logging, monitoring, and alerting software." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:594(title) ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:254(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:705(title) -msgid "Database software" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:595(para) -msgid "Most OpenStack components require access to back-end database services to store state and configuration information. Choose an appropriate back-end database which satisfies the availability and fault tolerance requirements of the OpenStack services." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:600(para) -msgid "MySQL is the default database for OpenStack, but other compatible databases are available." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:603(para) -msgid "Telemetry uses MongoDB." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:607(para) -msgid "The chosen high availability database solution changes according to the selected database. MySQL, for example, provides several options. Use a replication technology such as Galera for active-active clustering. For active-passive use some form of shared storage. Each of these potential solutions has an impact on the design:" -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:615(para) -msgid "Solutions that employ Galera/MariaDB require at least three MySQL nodes." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:619(para) -msgid "MongoDB has its own design considerations for high availability." -msgstr "" - -#: ./doc/arch-design/storage_focus/section_architecture_storage_focus.xml:623(para) -msgid "OpenStack design, generally, does not include shared storage. However, for some high availability designs, certain components might require it depending on the specific implementation." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:194(None) -msgid "@@image: '../figures/Compute_Tech_Bin_Packing_General1.png'; md5=34f2f0b656a66124016d2484fb96068b" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:202(None) -msgid "@@image: '../figures/Compute_Tech_Bin_Packing_CPU_optimized1.png'; md5=45084140c29e59a459d6b0af9b47642a" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:13(para) -msgid "In a compute-focused OpenStack cloud, the type of instance workloads you provision heavily influences technical decision making." 
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:16(para) -msgid "Public and private clouds require deterministic capacity planning to support elastic growth in order to meet user SLA expectations. Deterministic capacity planning is the path to predicting the effort and expense of making a given process perform consistently. This process is important because, when a service becomes a critical part of a user's infrastructure, the user's experience links directly to the SLAs of the cloud itself." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:24(para) -msgid "There are two aspects of capacity planning to consider:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:27(para) -msgid "Planning the initial deployment footprint" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:30(para) -msgid "Planning expansion of the environment to stay ahead of the demands of cloud users" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:34(para) -msgid "Begin planning an initial OpenStack deployment footprint with estimations of expected uptake, and existing infrastructure workloads." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:36(para) -msgid "The starting point is the core count of the cloud. By applying relevant ratios, the user can gather information about:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:41(para) -msgid "The number of expected concurrent instances: (overcommit fraction × cores) / virtual cores per instance" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:45(para) -msgid "Required storage: flavor disk size × number of instances" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:48(para) -msgid "These ratios determine the amount of additional infrastructure needed to support the cloud. For example, consider a situation in which you require 1600 instances, each with 2 vCPU and 50 GB of storage. Assuming the default overcommit rate of 16:1, working out the math provides an equation of:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:56(para) -msgid "1600 = (16 (number of physical cores)) / 2" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:59(para) -msgid "Storage required = 50GB 1600" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:62(para) -msgid "On the surface, the equations reveal the need for 200 physical cores and 80TB of storage for /var/lib/nova/instances/. However, it is also important to look at patterns of usage to estimate the load that the API services, database servers, and queue servers are likely to encounter." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:69(para) -msgid "Aside from the creation and termination of instances, consider the impact of users accessing the service, particularly on nova-api and its associated database. Listing instances gathers a great deal of information and given the frequency with which users run this operation, a cloud with a large number of users can increase the load significantly. This can even occur unintentionally. 
For example, the OpenStack Dashboard instances tab refreshes the list of instances every 30 seconds, so leaving it open in a browser window can cause unexpected load."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:79(para)
-msgid "Consideration of these factors can help determine how many cloud controller cores you require. A server with 8 CPU cores and 8 GB of RAM would be sufficient for a rack of compute nodes, given the above caveats."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:83(para)
-msgid "Key hardware specifications are also crucial to the performance of user instances. Be sure to consider budget and performance needs, including storage performance (spindles/core), memory availability (RAM/core), network bandwidth (Gbps/core), and overall CPU performance (CPU/core)."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:89(para)
-msgid "The cloud resource calculator is a useful tool in examining the impacts of different hardware and instance load outs. See: https://github.com/noslzzp/cloud-resource-calculator/blob/master/cloud-resource-calculator.ods"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:95(title)
-msgid "Expansion planning"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:96(para)
-msgid "A key challenge for planning the expansion of cloud compute services is the elastic nature of cloud infrastructure demands."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:99(para)
-msgid "Planning for expansion is a balancing act. Planning too conservatively can lead to unexpected oversubscription of the cloud and dissatisfied users. Planning for cloud expansion too aggressively can lead to unexpected underutilization of the cloud and funds spent unnecessarily on operating infrastructure."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:105(para)
-msgid "The key is to carefully monitor the trends in cloud usage over time. The intent is to measure the consistency with which you deliver services, not the average speed or capacity of the cloud. Using this information to model capacity performance enables users to more accurately determine the current and future capacity of the cloud."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:115(title)
-msgid "CPU and RAM"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:116(para)
-msgid "OpenStack enables users to overcommit CPU and RAM on compute nodes. This allows an increase in the number of instances running on the cloud at the cost of reducing the performance of the instances. OpenStack Compute uses the following ratios by default:"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:123(para)
-msgid "CPU allocation ratio: 16:1"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:126(para)
-msgid "RAM allocation ratio: 1.5:1"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:129(para)
-msgid "The default CPU allocation ratio of 16:1 means that the scheduler allocates up to 16 virtual cores per physical core. 
For example, if a physical node has 12 cores, the scheduler sees 192 available virtual cores. With typical flavor definitions of 4 virtual cores per instance, this ratio would provide 48 instances on a physical node." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:135(para) -msgid "Similarly, the default RAM allocation ratio of 1.5:1 means that the scheduler allocates instances to a physical node as long as the total amount of RAM associated with the instances is less than 1.5 times the amount of RAM available on the physical node." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:140(para) -msgid "You must select the appropriate CPU and RAM allocation ratio based on particular use cases." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:145(title) -msgid "Additional hardware" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:146(para) -msgid "Certain use cases may benefit from exposure to additional devices on the compute node. Examples might include:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:150(para) -msgid "High performance computing jobs that benefit from the availability of graphics processing units (GPUs) for general-purpose computing." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:155(para) -msgid "Cryptographic routines that benefit from the availability of hardware random number generators to avoid entropy starvation." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:160(para) -msgid "Database management systems that benefit from the availability of SSDs for ephemeral storage to maximize read/write time." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:165(para) -msgid "Host aggregates group hosts that share similar characteristics, which can include hardware similarities. The addition of specialized hardware to a cloud deployment is likely to add to the cost of each node, so consider carefully whether all compute nodes, or just a subset targeted by flavors, need the additional customization to support the desired workloads." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:176(title) ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:86(title) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:80(title) -msgid "Utilization" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:177(para) -msgid "Infrastructure-as-a-Service offerings, including OpenStack, use flavors to provide standardized views of virtual machine resource requirements that simplify the problem of scheduling instances while making the best use of the available physical resources." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:182(para) -msgid "In order to facilitate packing of virtual machines onto physical hosts, the default selection of flavors provides a second largest flavor that is half the size of the largest flavor in every dimension. It has half the vCPUs, half the vRAM, and half the ephemeral disk space. The next largest flavor is half that size again. 
The following figure provides a visual representation of this concept for a general purpose computing design: " -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:197(para) -msgid "The following figure displays a CPU-optimized, packed server: " -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:205(para) -msgid "These default flavors are well suited to typical configurations of commodity server hardware. To maximize utilization, however, it may be necessary to customize the flavors or create new ones in order to better align instance sizes to the available hardware." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:210(para) -msgid "Workload characteristics may also influence hardware choices and flavor configuration, particularly where they present different ratios of CPU versus RAM versus HDD requirements." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:214(para) -msgid "For more information on Flavors see: OpenStack Operations Guide: Flavors" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:220(para) -msgid "Due to the nature of the workloads in this scenario, a number of components are highly beneficial for a Compute-focused cloud. This includes the typical OpenStack components:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:226(para) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:136(term) -msgid "OpenStack Compute (nova)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:235(para) -msgid "Also consider several specialized components:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:238(para) -msgid "Orchestration (heat)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:239(para) -msgid "Given the nature of the applications involved in this scenario, these are heavily automated deployments. Making use of Orchestration is highly beneficial in this case. You can script the deployment of a batch of instances and the running of tests, but it makes sense to use the Orchestration service to handle all these actions." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:248(para) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:153(term) -msgid "Telemetry (ceilometer)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:249(para) -msgid "Telemetry and the alarms it generates support autoscaling of instances using Orchestration. Users that are not using the Orchestration service do not need to deploy the Telemetry service and may choose to use external solutions to fulfill their metering and monitoring requirements." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:257(para) -msgid "Due to the burst-able nature of the workloads and the applications and instances that perform batch processing, this cloud mainly uses memory or CPU, so the need for add-on storage to each instance is not a likely requirement. This does not mean that you do not use OpenStack Block Storage (cinder) in the infrastructure, but typically it is not a central component." 
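The capacity arithmetic earlier in this section (the 1600-instance worked example and the default 16:1 CPU and 1.5:1 RAM allocation ratios) reduces to a few lines of arithmetic. The sketch below simply re-derives those numbers for illustration; it is plain Python, not an OpenStack API:

```python
# Re-derive the worked example from the text: initial deployment footprint
# and the effect of the default nova allocation ratios.

def physical_cores_needed(instances, vcpus_per_instance, cpu_overcommit):
    # instances = (cpu_overcommit * physical_cores) / vcpus_per_instance
    return instances * vcpus_per_instance / cpu_overcommit

def storage_needed_gb(instances, disk_gb_per_instance):
    return instances * disk_gb_per_instance

# 1600 instances, each with 2 vCPUs and 50 GB, under a 16:1 CPU overcommit.
print(physical_cores_needed(1600, 2, 16))   # 200.0 physical cores
print(storage_needed_gb(1600, 50))          # 80000 GB, that is 80 TB

# Default allocation ratios: a 12-core host exposes 192 virtual cores,
# or 48 instances with a typical 4-vCPU flavor.
physical_cores = 12
cpu_allocation_ratio = 16.0
virtual_cores = physical_cores * cpu_allocation_ratio
print(virtual_cores)        # 192.0
print(virtual_cores / 4)    # 48.0 four-vCPU instances per host
```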
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:266(para) ./doc/arch-design/multi_site/section_architecture_multi_site.xml:89(title) -msgid "Networking" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml:267(para) -msgid "When choosing a networking platform, ensure that it either works with all desired hypervisor and container technologies and their OpenStack drivers, or that it includes an implementation of an ML2 mechanism driver. You can mix networking platforms that provide ML2 mechanisms drivers." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:9(para) -msgid "The hardware selection covers three areas:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:12(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:16(para) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:17(para) -msgid "Compute" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:15(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:19(para) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:22(para) -msgid "Network" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:18(para) ./doc/arch-design/multi_site/section_architecture_multi_site.xml:57(title) ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:114(para) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:22(para) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:27(para) -msgid "Storage" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:21(para) -msgid "Compute-focused OpenStack clouds have high demands on processor and memory resources, and requires hardware that can handle these demands. Consider the following factors when selecting compute (server) hardware:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:38(para) -msgid "Weigh these considerations against each other to determine the best design for the desired purpose. For example, increasing server density means sacrificing resource capacity or expandability." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:41(para) -msgid "A compute-focused cloud should have an emphasis on server hardware that can offer more CPU sockets, more CPU cores, and more RAM. Network connectivity and storage capacity are less critical." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:44(para) -msgid "When designing a compute-focused OpenStack architecture, you must consider whether you intend to scale up or scale out. Selecting a smaller number of larger hosts, or a larger number of smaller hosts, depends on a combination of factors: cost, power, cooling, physical rack and floor space, support-warranty, and manageability." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:50(para) -msgid "Considerations for selecting hardware:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:53(para) -msgid "Most blade servers can support dual-socket multi-core CPUs. To avoid this CPU limit, select full width or full height blades. Be aware, however, that this also decreases server density. 
For example, high density blade servers such as HP BladeSystem or Dell PowerEdge M1000e support up to 16 servers in only ten rack units. Using half-height blades is twice as dense as using full-height blades, which results in only eight servers per ten rack units."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:63(para)
-msgid "1U rack-mounted servers that occupy only a single rack unit may offer greater server density than a blade server solution. It is possible to place forty 1U servers in a rack, providing space for the top of rack (ToR) switches, compared to 32 full width blade servers."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:70(para)
-msgid "2U rack-mounted servers provide quad-socket, multi-core CPU support, but with a corresponding decrease in server density (half the density that 1U rack-mounted servers offer)."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:75(para)
-msgid "Larger rack-mounted servers, such as 4U servers, often provide even greater CPU capacity, commonly supporting four or even eight CPU sockets. These servers have greater expandability, but such servers have much lower server density and are often more expensive."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:81(para)
-msgid "Sled servers are rack-mounted servers that support multiple independent servers in a single 2U or 3U enclosure. These deliver higher density as compared to typical 1U or 2U rack-mounted servers. For example, many sled servers offer four independent dual-socket nodes in 2U for a total of eight CPU sockets in 2U."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:89(para)
-msgid "Consider these when choosing server hardware for a compute-focused OpenStack design architecture:"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:104(title) ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:382(title)
-msgid "Selecting networking hardware"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:105(para)
-msgid "Some of the key considerations for networking hardware selection include:"
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:124(para)
-msgid "We recommend designing the network architecture using a scalable network model that makes it easy to add capacity and bandwidth. A good example of such a model is the leaf-spine model. In this type of network design, it is possible to easily add additional bandwidth as well as scale out to additional racks of gear. It is important to select network hardware that supports the required port count, port speed, and port density while also allowing for future growth as workload demands increase. It is also important to evaluate where in the network architecture it is valuable to provide redundancy."
-msgstr ""
-
-#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:137(para)
-msgid "The selection of operating system (OS) and hypervisor has a significant impact on the end point design."
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:139(para) -msgid "OS and hypervisor selection impact the following areas:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:167(para) -msgid "The selection of OpenStack components is important. There are certain components that are required, for example the compute and image services, but others, such as the Orchestration service, may not be present." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:171(para) -msgid "For a compute-focused OpenStack design architecture, the following components may be present:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:175(para) -msgid "Identity (keystone)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:178(para) -msgid "Dashboard (horizon)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:181(para) -msgid "Compute (nova)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:184(para) -msgid "Object Storage (swift)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:187(para) -msgid "Image (glance)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:190(para) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:144(term) -msgid "Networking (neutron)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:193(para) ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:160(term) -msgid "Orchestration (heat)" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:197(para) -msgid "A compute-focused design is less likely to include OpenStack Block Storage. However, there may be some situations where the need for performance requires a block storage component to improve data I-O." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:201(para) -msgid "The exclusion of certain OpenStack components might also limit the functionality of other components. If a design includes the Orchestration service but excludes the Telemetry service, then the design cannot take advantage of Orchestration's auto scaling functionality as this relies on information from Telemetry." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:210(para) -msgid "OpenStack Networking provides a wide variety of networking services for instances. There are many additional networking software packages that might be useful to manage the OpenStack components themselves. The OpenStack High Availability Guide (http://docs.openstack.org/ha-guide/) describes some of these software packages in more detail." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:217(para) -msgid "For a compute-focused OpenStack cloud, the OpenStack infrastructure components must be highly available. If the design does not include hardware load balancing, you must add networking software packages, for example, HAProxy." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:225(para) -msgid "The selected supplemental software solution impacts and affects the overall OpenStack cloud design. This includes software for providing clustering, logging, monitoring and alerting." 
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:228(para) -msgid "The availability of design requirements is the main determiner for the inclusion of clustering software, such as Corosync or Pacemaker." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:230(para) -msgid "Operational considerations determine the requirements for logging, monitoring, and alerting. Each of these sub-categories include various options." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:233(para) -msgid "Some other potential design impacts include:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:236(term) -msgid "OS-hypervisor combination" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:238(para) -msgid "Ensure that the selected logging, monitoring, or alerting tools support the proposed OS-hypervisor combination." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:244(term) -msgid "Network hardware" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:246(para) -msgid "The logging, monitoring, and alerting software must support the network hardware selection." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_architecture_compute_focus.xml:255(para) -msgid "A large majority of OpenStack components require access to back-end database services to store state and configuration information. Select an appropriate back-end database that satisfies the availability and fault tolerance requirements of the OpenStack services. OpenStack services support connecting to any database that the SQLAlchemy Python drivers support, however most common database deployments make use of MySQL or some variation of it. We recommend that you make the database that provides back-end services within a general-purpose cloud highly available. Some of the more common software solutions include Galera, MariaDB, and MySQL with multi-master replication." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:78(None) -msgid "@@image: '../figures/Generic_CERN_Example.png'; md5=268e2171493d49ff3cc791071a98b49e" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:148(None) -msgid "@@image: '../figures/Generic_CERN_Architecture.png'; md5=f5ec57432a0b3bd35efeaa25e84d9947" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:9(para) -msgid "The Conseil Européen pour la Recherche Nucléaire (CERN), also known as the European Organization for Nuclear Research, provides particle accelerators and other infrastructure for high-energy physics research." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:13(para) -msgid "As of 2011 CERN operated these two compute centers in Europe with plans to add a third." 
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:19(th) -msgid "Data center" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:19(th) -msgid "Approximate capacity" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:23(td) -msgid "Geneva, Switzerland" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:26(para) -msgid "3.5 Mega Watts" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:27(para) -msgid "91000 cores" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:28(para) -msgid "120 PB HDD" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:29(para) -msgid "100 PB Tape" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:30(para) -msgid "310 TB Memory" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:35(td) -msgid "Budapest, Hungary" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:38(para) -msgid "2.5 Mega Watts" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:39(para) -msgid "20000 cores" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:40(para) -msgid "6 PB HDD" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:46(para) -msgid "To support a growing number of compute-heavy users of experiments related to the Large Hadron Collider (LHC), CERN ultimately elected to deploy an OpenStack cloud using Scientific Linux and RDO. This effort aimed to simplify the management of the center's compute resources with a view to doubling compute capacity through the addition of a data center in 2013 while maintaining the same levels of compute staff." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:54(para) -msgid "The CERN solution uses cells for segregation of compute resources and for transparently scaling between different data centers. This decision meant trading off support for security groups and live migration. In addition, they must manually replicate some details, like flavors, across cells. In spite of these drawbacks cells provide the required scale while exposing a single public API endpoint to users." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:63(para) -msgid "CERN created a compute cell for each of the two original data centers and created a third when it added a new data center in 2013. Each cell contains three availability zones to further segregate compute resources and at least three RabbitMQ message brokers configured for clustering with mirrored queues for high availability." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:69(para) -msgid "The API cell, which resides behind a HAProxy load balancer, is in the data center in Switzerland and directs API calls to compute cells using a customized variation of the cell scheduler. The customizations allow certain workloads to route to a specific data center or all data centers, with cell RAM availability determining cell selection in the latter case." 
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:81(para) -msgid "There is also some customization of the filter scheduler that handles placement within the cells:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:84(term) -msgid "ImagePropertiesFilter" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:86(para) -msgid "Provides special handling depending on the guest operating system in use (Linux-based or Windows-based)." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:91(term) -msgid "ProjectsToAggregateFilter" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:92(para) -msgid "Provides special handling depending on which project the instance is associated with." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:97(term) -msgid "default_schedule_zones" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:98(para) -msgid "Allows the selection of multiple default availability zones, rather than a single default." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:104(para) -msgid "A central database team manages the MySQL database server in each cell in an active/passive configuration with a NetApp storage back end. Backups run every 6 hours." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:109(title) -msgid "Network architecture" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:110(para) -msgid "To integrate with existing networking infrastructure, CERN made customizations to legacy networking (nova-network). This was in the form of a driver to integrate with CERN's existing database for tracking MAC and IP address assignments." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:114(para) -msgid "The driver facilitates selection of a MAC address and IP for new instances based on the compute node where the scheduler places the instance." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:117(para) -msgid "The driver considers the compute node where the scheduler placed an instance and selects a MAC address and IP from the pre-registered list associated with that node in the database. The database updates to reflect the address assignment to that instance." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:125(title) -msgid "Storage architecture" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:126(para) -msgid "CERN deploys the OpenStack Image service in the API cell and configures it to expose version 1 (V1) of the API. This also requires the image registry. The storage back end in use is a 3 PB Ceph cluster." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:130(para) -msgid "CERN maintains a small set of Scientific Linux 5 and 6 images onto which orchestration tools can place applications. Puppet manages instance configuration and customization." 
-msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:137(para) -msgid "CERN does not require direct billing, but uses the Telemetry service to perform metering for the purposes of adjusting project quotas. CERN uses a sharded, replicated, MongoDB back-end. To spread API load, CERN deploys instances of the nova-api service within the child cells for Telemetry to query against. This also requires the configuration of supporting services such as keystone, glance-api, and glance-registry in the child cells." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_prescriptive_examples_compute_focus.xml:151(para) -msgid "Additional monitoring tools in use include Flume, Elastic Search, Kibana, and the CERN developed Lemon project." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:9(para) -msgid "There are a number of operational considerations that affect the design of compute-focused OpenStack clouds, including:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:13(para) -msgid "Enforcing strict API availability requirements" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:18(para) -msgid "Understanding and dealing with failure scenarios" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:23(para) -msgid "Managing host maintenance schedules" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:28(para) -msgid "Service-level agreements (SLAs) are contractual obligations that ensure the availability of a service. When designing an OpenStack cloud, factoring in promises of availability implies a certain level of redundancy and resiliency." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:35(para) -msgid "OpenStack clouds require appropriate monitoring platforms to catch and manage errors." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:38(para) -msgid "We recommend leveraging existing monitoring systems to see if they are able to effectively monitor an OpenStack environment." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:42(para) -msgid "Specific meters that are critically important to capture include:" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:46(para) ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:62(para) -msgid "Image disk utilization" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:49(para) ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:67(para) -msgid "Response time to the Compute API" -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:56(para) -msgid "Adding extra capacity to an OpenStack cloud is a horizontally scaling process." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:58(para) -msgid "We recommend similar (or the same) CPUs when adding extra nodes to the environment. This reduces the chance of breaking live-migration features if they are present. 
Scaling out hypervisor hosts also has a direct effect on network and other data center resources. We recommend you factor in this increase when reaching rack capacity or when requiring extra network switches." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:65(para) -msgid "Changing the internal components of a Compute host to account for increases in demand is a process known as vertical scaling. Swapping a CPU for one with more cores, or increasing the memory in a server, can help add extra capacity for running applications." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:70(para) -msgid "Another option is to assess the average workloads and increase the number of instances that can run within the compute environment by adjusting the overcommit ratio." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:74(para) -msgid "It is important to remember that changing the CPU overcommit ratio can have a detrimental effect and cause a potential increase in a noisy neighbor." -msgstr "" - -#: ./doc/arch-design/compute_focus/section_operational_considerations_compute_focus.xml:78(para) -msgid "The added risk of increasing the overcommit ratio is that more instances fail when a compute host fails. We do not recommend that you increase the CPU overcommit ratio in compute-focused OpenStack design architecture, as it can increase the potential for noisy neighbor issues." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:9(para) -msgid "Multi-site OpenStack cloud deployment using regions requires that the service catalog contains per-region entries for each service deployed other than the Identity service. Most off-the-shelf OpenStack deployment tools have limited support for defining multiple regions in this fashion." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:14(para) -msgid "Deployers should be aware of this and provide the appropriate customization of the service catalog for their site either manually, or by customizing deployment tools in use." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:17(para) -msgid "As of the Kilo release, documentation for implementing this feature is in progress. See this bug for more information: https://bugs.launchpad.net/openstack-manuals/+bug/1340509." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:24(title) -msgid "Licensing" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:25(para) -msgid "Multi-site OpenStack deployments present additional licensing considerations over and above regular OpenStack clouds, particularly where site licenses are in use to provide cost efficient access to software licenses. The licensing for host operating systems, guest operating systems, OpenStack distributions (if applicable), software-defined infrastructure including network controllers and storage systems, and even individual applications need to be evaluated." 
-msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:33(para) -msgid "Topics to consider include:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:36(para) -msgid "The definition of what constitutes a site in the relevant licenses, as the term does not necessarily denote a geographic or otherwise physically isolated location." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:42(para) -msgid "Differentiations between \"hot\" (active) and \"cold\" (inactive) sites, where significant savings may be made in situations where one site is a cold standby for disaster recovery purposes only." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:48(para) -msgid "Certain locations might require local vendors to provide support and services for each site which may vary with the licensing agreement in place." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:54(title) -msgid "Logging and monitoring" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:55(para) -msgid "Logging and monitoring does not significantly differ for a multi-site OpenStack cloud. The tools described in the Logging and monitoring chapter of the Operations Guide remain applicable. Logging and monitoring can be provided on a per-site basis, and in a common centralized location." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:62(para) -msgid "When attempting to deploy logging and monitoring facilities to a centralized location, care must be taken with the load placed on the inter-site networking links." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:66(title) ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:44(title) -msgid "Upgrades" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:67(para) -msgid "In multi-site OpenStack clouds deployed using regions, sites are independent OpenStack installations which are linked together using shared centralized services such as OpenStack Identity. At a high level the recommended order of operations to upgrade an individual OpenStack environment is (see the Upgrades chapter of the Operations Guide for details):" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:77(para) -msgid "Upgrade the OpenStack Identity service (keystone)." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:81(para) -msgid "Upgrade the OpenStack Image service (glance)." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:84(para) -msgid "Upgrade OpenStack Compute (nova), including networking components." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:88(para) -msgid "Upgrade OpenStack Block Storage (cinder)." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:91(para) -msgid "Upgrade the OpenStack dashboard (horizon)." 
-msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:94(para) -msgid "The process for upgrading a multi-site environment is not significantly different:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:98(para) -msgid "Upgrade the shared OpenStack Identity service (keystone) deployment." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:102(para) -msgid "Upgrade the OpenStack Image service (glance) at each site." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:106(para) -msgid "Upgrade OpenStack Compute (nova), including networking components, at each site." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:110(para) -msgid "Upgrade OpenStack Block Storage (cinder) at each site." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:114(para) -msgid "Upgrade the OpenStack dashboard (horizon), at each site or in the single central location if it is shared." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:119(para) -msgid "Compute upgrades within each site can also be performed in a rolling fashion. Compute controller services (API, Scheduler, and Conductor) can be upgraded prior to upgrading of individual compute nodes. This allows operations staff to keep a site operational for users of Compute services while performing an upgrade." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:126(title) -msgid "Quota management" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:127(para) -msgid "Quotas are used to set operational limits to prevent system capacities from being exhausted without notification. They are currently enforced at the tenant (or project) level rather than at the user level." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:131(para) -msgid "Quotas are defined on a per-region basis. Operators can define identical quotas for tenants in each region of the cloud to provide a consistent experience, or even create a process for synchronizing allocated quotas across regions. It is important to note that only the operational limits imposed by the quotas will be aligned consumption of quotas by users will not be reflected between regions." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:138(para) -msgid "For example, given a cloud with two regions, if the operator grants a user a quota of 25 instances in each region then that user may launch a total of 50 instances spread across both regions. They may not, however, launch more than 25 instances in any single region." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:143(para) -msgid "For more information on managing quotas refer to the Managing projects and users chapter of the OpenStack Operators Guide." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:150(title) -msgid "Policy management" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:151(para) -msgid "OpenStack provides a default set of Role Based Access Control (RBAC) policies, defined in a policy.json file, for each service. 
Operators edit these files to customize the policies for their OpenStack installation. If the application of consistent RBAC policies across sites is a requirement, then it is necessary to ensure proper synchronization of the policy.json files to all installations." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:158(para) -msgid "This must be done using system administration tools such as rsync as functionality for synchronizing policies across regions is not currently provided within OpenStack." -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:162(title) -msgid "Documentation" -msgstr "" - -#: ./doc/arch-design/multi_site/section_operational_considerations_multi_site.xml:163(para) -msgid "Users must be able to leverage cloud infrastructure and provision new resources in the environment. It is important that user documentation is accessible by users to ensure they are given sufficient information to help them leverage the cloud. As an example, by default OpenStack schedules instances on a compute node automatically. However, when multiple regions are available, the end user needs to decide in which region to schedule the new instance. The dashboard presents the user with the first region in your configuration. The API and CLI tools do not execute commands unless a valid region is specified. It is therefore important to provide documentation to your users describing the region layout as well as calling out that quotas are region-specific. If a user reaches his or her quota in one region, OpenStack does not automatically build new instances in another. Documenting specific examples helps users understand how to operate the cloud, thereby reducing calls and tickets filed with the help desk." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:24(None) -msgid "@@image: '../figures/Multi-Site_shared_keystone_horizon_swift1.png'; md5=fb80511b491731906fb54d5a1f029f91" -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:9(para) -msgid " illustrates a high level multi-site OpenStack architecture. Each site is an OpenStack cloud but it may be necessary to architect the sites on different versions. For example, if the second site is intended to be a replacement for the first site, they would be different. Another common design would be a private OpenStack cloud with a replicated site that would be used for high availability or disaster recovery. The most important design decision is configuring storage as a single shared pool or separate pools, depending on user and technical requirements." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:20(title) -msgid "Multi-site OpenStack architecture" -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:29(title) -msgid "OpenStack services architecture" -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:30(para) -msgid "The Identity service, which is used by all other OpenStack components for authorization and the catalog of service endpoints, supports the concept of regions. A region is a logical construct used to group OpenStack services in close proximity to one another. 
The concept of regions is flexible; it may contain OpenStack service endpoints located within a distinct geographic region or regions. It may be smaller in scope, where a region is a single rack within a data center, with multiple regions existing in adjacent racks in the same data center." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:40(para) -msgid "The majority of OpenStack components are designed to run within the context of a single region. The Compute service is designed to manage compute resources within a region, with support for subdivisions of compute resources by using availability zones and cells. The Networking service can be used to manage network resources in the same broadcast domain or collection of switches that are linked. The OpenStack Block Storage service controls storage resources within a region with all storage resources residing on the same storage network. Like the OpenStack Compute service, the OpenStack Block Storage service also supports the availability zone construct which can be used to subdivide storage resources." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:52(para) -msgid "The OpenStack dashboard, OpenStack Identity, and OpenStack Object Storage services are components that can each be deployed centrally in order to serve multiple regions." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:58(para) -msgid "With multiple OpenStack regions, it is recommended to configure a single OpenStack Object Storage service endpoint to deliver shared file storage for all regions. The Object Storage service internally replicates files to multiple nodes which can be used by applications or workloads in multiple regions. This simplifies high availability failover and disaster recovery rollback." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:64(para) -msgid "In order to scale the Object Storage service to meet the workload of multiple regions, multiple proxy workers are run and load-balanced, storage nodes are installed in each region, and the entire Object Storage Service can be fronted by an HTTP caching layer. This is done so client requests for objects can be served out of caches rather than directly from the storage modules themselves, reducing the actual load on the storage network. In addition to an HTTP caching layer, use a caching layer like Memcache to cache objects between the proxy and storage nodes." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:73(para) -msgid "If the cloud is designed with a separate Object Storage service endpoint made available in each region, applications are required to handle synchronization (if desired) and other management operations to ensure consistency across the nodes. For some applications, having multiple Object Storage Service endpoints located in the same region as the application may be desirable due to reduced latency, cross region bandwidth, and ease of deployment." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:82(para) -msgid "For the Block Storage service, the most important decisions are the selection of the storage technology, and whether a dedicated network is used to carry storage traffic from the storage service to the compute nodes." 
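As a hedged illustration of the region construct described above, the following Python sketch registers a second region and a region-scoped Image service endpoint in the Identity service catalog by shelling out to python-openstackclient; the region name and endpoint URL are placeholders, not values taken from this guide:

    # Sketch: add a second region and a per-region Image service endpoint.
    # "RegionTwo" and the glance URL are illustrative placeholders; the
    # openstack CLI reads OS_* environment variables for authentication.
    import subprocess

    def openstack(*args):
        subprocess.run(["openstack", *args], check=True)

    openstack("region", "create", "RegionTwo")
    openstack("endpoint", "create", "--region", "RegionTwo",
              "image", "public", "http://glance.region2.example.com:9292")

The same pattern extends to the per-region Compute, Networking, and Block Storage endpoints, while the Identity, dashboard, and Object Storage endpoints can remain shared.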
-msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:90(para) -msgid "When connecting multiple regions together, there are several design considerations. The overlay network technology choice determines how packets are transmitted between regions and how the logical network and addresses present to the application. If there are security or regulatory requirements, encryption should be implemented to secure the traffic between regions. For networking inside a region, the overlay network technology for tenant networks is equally important. The overlay technology and the network traffic that an application generates or receives can be either complementary or serve cross purposes. For example, using an overlay technology for an application that transmits a large amount of small packets could add excessive latency or overhead to each packet if not configured properly." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:105(title) ./doc/arch-design/introduction/section_methodology.xml:86(term) -msgid "Dependencies" -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:106(para) -msgid "The architecture for a multi-site OpenStack installation is dependent on a number of factors. One major dependency to consider is storage. When designing the storage system, the storage mechanism needs to be determined. Once the storage type is determined, how it is accessed is critical. For example, we recommend that storage should use a dedicated network. Another concern is how the storage is configured to protect the data. For example, the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO). How quickly recovery from a fault can be completed, determines how often the replication of data is required. Ensure that enough storage is allocated to support the data protection strategy." -msgstr "" - -#: ./doc/arch-design/multi_site/section_architecture_multi_site.xml:119(para) -msgid "Networking decisions include the encapsulation mechanism that can be used for the tenant networks, how large the broadcast domains should be, and the contracted SLAs for the interconnects." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:9(para) -msgid "There are many technical considerations to take into account with regard to designing a multi-site OpenStack implementation. An OpenStack cloud can be designed in a variety of ways to handle individual application needs. A multi-site deployment has additional challenges compared to single site installations and therefore is a more complex solution." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:16(para) -msgid "When determining capacity options be sure to take into account not just the technical issues, but also the economic or operational issues that might arise from specific decisions." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:20(para) -msgid "Inter-site link capacity describes the capabilities of the connectivity between the different OpenStack sites. This includes parameters such as bandwidth, latency, whether or not a link is dedicated, and any business policies applied to the connection. The capability and number of the links between sites determine what kind of options are available for deployment. 
For example, if two sites have a pair of high-bandwidth links available between them, it may be wise to configure a separate storage replication network between the two sites to support a single Swift endpoint and a shared Object Storage capability between them. An example of this technique, as well as a configuration walk-through, is available at http://docs.openstack.org/developer/swift/replication_network.html#dedicated-replication-network. Another option in this scenario is to build a dedicated set of tenant private networks across the secondary link, using overlay networks with a third party mapping the site overlays to each other." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:38(para) -msgid "The capacity requirements of the links between sites is driven by application behavior. If the link latency is too high, certain applications that use a large number of small packets, for example RPC calls, may encounter issues communicating with each other or operating properly. Additionally, OpenStack may encounter similar types of issues. To mitigate this, Identity service call timeouts can be tuned to prevent issues authenticating against a central Identity service." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:47(para) -msgid "Another network capacity consideration for a multi-site deployment is the amount and performance of overlay networks available for tenant networks. If using shared tenant networks across zones, it is imperative that an external overlay manager or controller be used to map these overlays together. It is necessary to ensure the amount of possible IDs between the zones are identical." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:55(para) -msgid "As of the Kilo release, OpenStack Networking was not capable of managing tunnel IDs across installations. So if one site runs out of IDs, but another does not, that tenant's network is unable to reach the other site." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:60(para) -msgid "Capacity can take other forms as well. The ability for a region to grow depends on scaling out the number of available compute nodes. This topic is covered in greater detail in the section for compute-focused deployments. However, it may be necessary to grow cells in an individual region, depending on the size of your cluster and the ratio of virtual machines per hypervisor." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:67(para) -msgid "A third form of capacity comes in the multi-region-capable components of OpenStack. Centralized Object Storage is capable of serving objects through a single namespace across multiple regions. Since this works by accessing the object store through swift proxy, it is possible to overload the proxies. There are two options available to mitigate this issue:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:75(para) -msgid "Deploy a large number of swift proxies. The drawback is that the proxies are not load-balanced and a large file request could continually hit the same proxy." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:80(para) -msgid "Add a caching HTTP proxy and load balancer in front of the swift proxies. 
Since swift objects are returned to the requester via HTTP, this load balancer would alleviate the load required on the swift proxies." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:87(para) -msgid "While constructing a multi-site OpenStack environment is the goal of this guide, the real test is whether an application can utilize it." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:90(para) -msgid "The Identity service is normally the first interface for OpenStack users and is required for almost all major operations within OpenStack. Therefore, it is important that you provide users with a single URL for Identity service authentication, and document the configuration of regions within the Identity service. Each of the sites defined in your installation is considered to be a region in Identity nomenclature. This is important for the users, as it is required to define the region name when providing actions to an API endpoint or in the dashboard." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:99(para) -msgid "Load balancing is another common issue with multi-site installations. While it is still possible to run HAproxy instances with Load-Balancer-as-a-Service, these are defined to a specific region. Some applications can manage this using internal mechanisms. Other applications may require the implementation of an external system, including global services load balancers or anycast-advertised DNS." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:106(para) -msgid "Depending on the storage model chosen during site design, storage replication and availability are also a concern for end-users. If an application can support regions, then it is possible to keep the object storage system separated by region. In this case, users who want to have an object available to more than one region need to perform cross-site replication. However, with a centralized swift proxy, the user may need to benchmark the replication timing of the Object Storage back end. Benchmarking allows the operational staff to provide users with an understanding of the amount of time required for a stored or modified object to become available to the entire environment." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:119(para) -msgid "Determining the performance of a multi-site installation involves considerations that do not come into play in a single-site deployment. Being a distributed deployment, performance in multi-site deployments may be affected in certain situations." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:124(para) -msgid "Since multi-site systems can be geographically separated, there may be greater latency or jitter when communicating across regions. This can especially impact systems like the OpenStack Identity service when making authentication attempts from regions that do not contain the centralized Identity implementation. It can also affect applications which rely on Remote Procedure Call (RPC) for normal operation. An example of this can be seen in high performance computing workloads." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:132(para) -msgid "Storage availability can also be impacted by the architecture of a multi-site deployment. 
A centralized Object Storage service requires more time for an object to be available to instances locally in regions where the object was not created. Some applications may need to be tuned to account for this effect. Block Storage does not currently have a method for replicating data across multiple regions, so applications that depend on available block storage need to manually cope with this limitation by creating duplicate block storage entries in each region." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:145(para) -msgid "Most OpenStack installations require a bare minimum set of pieces to function. These include the OpenStack Identity (keystone) for authentication, OpenStack Compute (nova) for compute, OpenStack Image service (glance) for image storage, OpenStack Networking (neutron) for networking, and potentially an object store in the form of OpenStack Object Storage (swift). Deploying a multi-site installation also demands extra components in order to coordinate between regions. A centralized Identity service is necessary to provide the single authentication point. A centralized dashboard is also recommended to provide a single login point and a mapping to the API and CLI options available. A centralized Object Storage service may also be used, but will require the installation of the swift proxy service." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:159(para) -msgid "It may also be helpful to install a few extra options in order to facilitate certain use cases. For example, installing Designate may assist in automatically generating DNS domains for each region with an automatically-populated zone full of resource records for each instance. This facilitates using DNS as a mechanism for determining which region will be selected for certain applications." -msgstr "" - -#: ./doc/arch-design/multi_site/section_tech_considerations_multi_site.xml:166(para) -msgid "Another useful tool for managing a multi-site installation is Orchestration (heat). The Orchestration service allows the use of templates to define a set of instances to be launched together or for scaling existing sets. It can also be used to set up matching or differentiated groupings based on regions. For instance, if an application requires an equally balanced number of nodes across sites, the same heat template can be used to cover each site with small alterations to only the region name." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:10(title) -msgid "Workload characteristics" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:11(para) -msgid "An understanding of the expected workloads for a desired multi-site environment and use case is an important factor in the decision-making process. In this context, workload refers to the way the systems are used. A workload could be a single application or a suite of applications that work together. It could also be a duplicate set of applications that need to run in multiple cloud environments. Often in a multi-site deployment, the same workload will need to work identically in more than one physical location." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:20(para) -msgid "This multi-site scenario likely includes one or more of the other scenarios in this book with the additional requirement of having the workloads in two or more locations. 
The following are some possible scenarios:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:24(para) -msgid "For many use cases the proximity of the user to their workloads has a direct influence on the performance of the application and therefore should be taken into consideration in the design. Certain applications require zero to minimal latency that can only be achieved by deploying the cloud in multiple locations. These locations could be in different data centers, cities, countries or geographical regions, depending on the user requirement and location of the users." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:33(title) -msgid "Consistency of images and templates across different sites" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:35(para) -msgid "It is essential that the deployment of instances is consistent across the different sites and built into the infrastructure. If the OpenStack Object Storage is used as a back end for the Image service, it is possible to create repositories of consistent images across multiple sites. Having central endpoints with multiple storage nodes allows consistent centralized storage for every site." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:42(para) -msgid "Not using a centralized object store increases the operational overhead of maintaining a consistent image library. This could include development of a replication mechanism to handle the transport of images and the changes to the images across multiple sites." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:49(para) -msgid "If high availability is a requirement to provide continuous infrastructure operations, a basic requirement of high availability should be defined." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:52(para) -msgid "The OpenStack management components need to have a basic and minimal level of redundancy. The simplest example is the loss of any single site should have minimal impact on the availability of the OpenStack services." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:56(para) -msgid "The OpenStack High Availability Guide contains more information on how to provide redundancy for the OpenStack components." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:61(para) -msgid "Multiple network links should be deployed between sites to provide redundancy for all components. This includes storage replication, which should be isolated to a dedicated network or VLAN with the ability to assign QoS to control the replication traffic or provide priority for this traffic. Note that if the data store is highly changeable, the network requirements could have a significant effect on the operational cost of maintaining the sites." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:69(para) -msgid "The ability to maintain object availability in both sites has significant implications on the object storage design and implementation. It also has a significant impact on the WAN network design between the sites." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:73(para) -msgid "Connecting more than two sites increases the challenges and adds more complexity to the design considerations. 
Multi-site implementations require planning to address the additional topology used for internal and external connectivity. Some options include full mesh topology, hub spoke, spine leaf, and 3D Torus." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:78(para) -msgid "If applications running in a cloud are not cloud-aware, there should be clear measures and expectations to define what the infrastructure can and cannot support. An example would be shared storage between sites. It is possible, however such a solution is not native to OpenStack and requires a third-party hardware vendor to fulfill such a requirement. Another example can be seen in applications that are able to consume resources in object storage directly. These applications need to be cloud aware to make good use of an OpenStack Object Store." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:89(title) ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:31(title) -msgid "Application readiness" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:90(para) -msgid "Some applications are tolerant of the lack of synchronized object storage, while others may need those objects to be replicated and available across regions. Understanding how the cloud implementation impacts new and existing applications is important for risk mitigation, and the overall success of a cloud project. Applications may have to be written or rewritten for an infrastructure with little to no redundancy, or with the cloud in mind." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:100(para) -msgid "A greater number of sites increase cost and complexity for a multi-site deployment. Costs can be broken down into the following categories:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:105(para) -msgid "Compute resources" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:108(para) -msgid "Networking resources" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:111(para) -msgid "Replication" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:117(para) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:663(para) -msgid "Management" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:120(para) -msgid "Operational costs" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:124(title) -msgid "Site loss and recovery" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:125(para) -msgid "Outages can cause partial or full loss of site functionality. Strategies should be implemented to understand and plan for recovery scenarios." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:130(para) -msgid "The deployed applications need to continue to function and, more importantly, you must consider the impact on the performance and reliability of the application when a site is unavailable." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:136(para) -msgid "It is important to understand what happens to the replication of objects and data between the sites when a site goes down. 
If this causes queues to start building up, consider how long these queues can safely exist until an error occurs." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:143(para) -msgid "After an outage, ensure the method for resuming proper operations of a site is implemented when it comes back online. We recommend you architect the recovery to avoid race conditions." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:149(title) -msgid "Compliance and geo-location" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:150(para) -msgid "An organization may have certain legal obligations and regulatory compliance measures which could require certain workloads or data to not be located in certain regions." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:154(title) -msgid "Auditing" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:155(para) -msgid "A well thought-out auditing strategy is important in order to be able to quickly track down issues. Keeping track of changes made to security groups and tenant changes can be useful in rolling back the changes if they affect production. For example, if all security group rules for a tenant disappeared, the ability to quickly track down the issue would be important for operational and legal reasons." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:163(title) -msgid "Separation of duties" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:164(para) -msgid "A common requirement is to define different roles for the different cloud administration functions. An example would be a requirement to segregate the duties and permissions by site." -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:169(title) -msgid "Authentication between sites" -msgstr "" - -#: ./doc/arch-design/multi_site/section_user_requirements_multi_site.xml:170(para) -msgid "It is recommended to have a single authentication domain rather than a separate implementation for each and every site. This requires an authentication mechanism that is highly available and distributed to ensure continuous operation. Authentication server locality might be required and should be planned for." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:87(None) -msgid "@@image: '../figures/Multi-Site_Customer_Edge.png'; md5=01850cf774e7075bd7202c6e7f087f36" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:194(None) -msgid "@@image: '../figures/Multi-site_Geo_Redundant_LB.png'; md5=c94a96f6084c2e50a0eb6846f6fde479" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:231(None) -msgid "@@image: '../figures/Multi-Site_shared_keystone1.png'; md5=eaef18e7f04eec7e3f8968ad69aed7d3" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:13(para) -msgid "There are multiple ways to build a multi-site OpenStack installation, based on the needs of the intended workloads. Below are example architectures based on different requirements. These examples are meant as a reference, and not a hard and fast rule for deployments. Use the previous sections of this chapter to assist in selecting specific components and implementations based on specific needs." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:20(para) -msgid "A large content provider needs to deliver content to customers that are geographically dispersed. The workload is very sensitive to latency and needs a rapid response to end-users. After reviewing the user, technical, and operational considerations, it is determined to be beneficial to build a number of regions local to the customer's edge. Rather than build a few large, centralized data centers, the intent of the architecture is to provide a pair of small data centers in locations that are closer to the customer. In this use case, spreading applications out allows for a different approach to horizontal scaling than a traditional compute workload requires. The intent is to scale by creating more copies of the application in closer proximity to the users who need it most, in order to ensure faster response time to user requests. This provider deploys two data centers in each of the four chosen regions. The implications of this design are based around the method of placing copies of resources in each of the remote regions. Swift objects, Glance images, and block storage need to be manually replicated into each region. This may be beneficial for some systems, such as a content service, where only some of the content needs to exist in some but not all regions. A centralized Keystone is recommended to handle authentication and to keep access to the API endpoints easily manageable." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:44(para) -msgid "It is recommended that you install an automated DNS system such as Designate. Application administrators need a way to manage the mapping of which application copy exists in each region and how to reach it, unless an external Dynamic DNS system is available. Designate assists by making the process automatic and by populating the records in each region's zone." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:50(para) -msgid "Telemetry for each region is also deployed, as each region may grow differently or be used at a different rate. Ceilometer collects each region's meters from each of the controllers and reports them back to a central location. This is useful both to the end user and the administrator of the OpenStack environment. The end user will find this method useful, as it makes it possible to determine if certain locations are experiencing higher load than others, and to take appropriate action. Administrators also benefit by possibly being able to forecast growth per region, rather than expanding the capacity of all regions simultaneously, thereby maximizing the cost-effectiveness of the multi-site design."
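The Designate-based record population mentioned above can be sketched as follows; this assumes the Designate plugin for python-openstackclient is installed, and the zone, record name, and address are illustrative placeholders:

    # Sketch: publish a per-region DNS record once an application copy is
    # deployed in that region. All names and addresses are placeholders.
    import subprocess

    def openstack(*args):
        subprocess.run(["openstack", *args], check=True)

    openstack("zone", "create", "--email", "dnsmaster@example.com",
              "app.example.com.")
    openstack("recordset", "create", "--type", "A",
              "--record", "203.0.113.10",
              "app.example.com.", "region-one.app.example.com.")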
-msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:63(para) -msgid "One of the key decisions of running this infrastructure is whether or not to provide a redundancy model. Two types of redundancy and high availability models in this configuration can be implemented. The first type is the availability of central OpenStack components. Keystone can be made highly available in three central data centers that host the centralized OpenStack components. This prevents a loss of any one of the regions causing an outage in service. It also has the added benefit of being able to run a central storage repository as a primary cache for distributing content to each of the regions." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:74(para) -msgid "The second redundancy type is the edge data center itself. A second data center in each of the edge regional locations house a second region near the first region. This ensures that the application does not suffer degraded performance in terms of latency and availability." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:79(para) -msgid " depicts the solution designed to have both a centralized set of core data centers for OpenStack services and paired edge data centers:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:83(title) -msgid "Multi-site architecture example" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:92(title) -msgid "Geo-redundant load balancing" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:93(para) -msgid "A large-scale web application has been designed with cloud principles in mind. The application is designed provide service to application store, on a 24/7 basis. The company has typical two tier architecture with a web front-end servicing the customer requests, and a NoSQL database back end storing the information." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:99(para) -msgid "As of late there has been several outages in number of major public cloud providers due to applications running out of a single geographical location. The design therefore should mitigate the chance of a single site causing an outage for their business." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:104(para) ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:30(para) -msgid "The solution would consist of the following OpenStack components:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:108(para) ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:34(para) -msgid "A firewall, switches and load balancers on the public facing network connections." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:112(para) -msgid "OpenStack Controller services running, Networking, dashboard, Block Storage and Compute running locally in each of the three regions. Identity service, Orchestration service, Telemetry service, Image service and Object Storage service can be installed centrally, with nodes in each of the region providing a redundant OpenStack Controller plane throughout the globe." 
-msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:121(para) ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:44(para) -msgid "OpenStack Compute nodes running the KVM hypervisor." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:125(para) -msgid "OpenStack Object Storage for serving static objects such as images can be used to ensure that all images are standardized across all the regions, and replicated on a regular basis." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:131(para) -msgid "A distributed DNS service available to all regions that allows for dynamic update of DNS records of deployed instances." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:136(para) -msgid "A geo-redundant load balancing service can be used to service the requests from the customers based on their origin." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:141(para) -msgid "An autoscaling heat template can be used to deploy the application in the three regions. This template includes:" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:145(para) -msgid "Web Servers, running Apache." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:148(para) -msgid "Appropriate user_data to populate the central DNS servers upon instance launch." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:152(para) -msgid "Appropriate Telemetry alarms that maintain state of the application and allow for handling of region or instance failure." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:157(para) -msgid "Another autoscaling Heat template can be used to deploy a distributed MongoDB shard over the three locations, with the option of storing required data on a globally available swift container. According to the usage and load on the database server, additional shards can be provisioned according to the thresholds defined in Telemetry." -msgstr "" - -#. The reason that three regions were selected here was because of -#. the fear of having abnormal load on a single region in the -#. event of a failure. Two data center would have been sufficient -#. had the requirements been met. -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:167(para) -msgid "Two data centers would have been sufficient had the requirements been met. But three regions are selected here to avoid abnormal load on a single region in the event of a failure." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:170(para) -msgid "Orchestration is used because of the built-in functionality of autoscaling and auto healing in the event of increased load. Additional configuration management tools, such as Puppet or Chef could also have been used in this scenario, but were not chosen since Orchestration had the appropriate built-in hooks into the OpenStack cloud, whereas the other tools were external and not native to OpenStack. In addition, external tools were not needed since this deployment scenario was straight forward." 
-msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:179(para) -msgid "OpenStack Object Storage is used here to serve as a back end for the Image service since it is the most suitable solution for a globally distributed storage solution with its own replication mechanism. Home grown solutions could also have been used including the handling of replication, but were not chosen, because Object Storage is already an intricate part of the infrastructure and a proven solution." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:186(para) -msgid "An external load balancing service was used and not the LBaaS in OpenStack because the solution in OpenStack is not redundant and does not have any awareness of geo location." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:190(title) -msgid "Multi-site geo-redundant architecture" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:200(title) -msgid "Location-local service" -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:201(para) -msgid "A common use for multi-site OpenStack deployment is creating a Content Delivery Network. An application that uses a location-local architecture requires low network latency and proximity to the user to provide an optimal user experience and reduce the cost of bandwidth and transit. The content resides on sites closer to the customer, instead of a centralized content store that requires utilizing higher cost cross-country links." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:209(para) -msgid "This architecture includes a geo-location component that places user requests to the closest possible node. In this scenario, 100% redundancy of content across every site is a goal rather than a requirement, with the intent to maximize the amount of content available within a minimum number of network hops for end users. Despite these differences, the storage replication configuration has significant overlap with that of a geo-redundant load balancing use case." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:218(para) -msgid "In , the application utilizing this multi-site OpenStack install that is location-aware would launch web server or content serving instances on the compute cluster in each site. Requests from clients are first sent to a global services load balancer that determines the location of the client, then routes the request to the closest OpenStack site where the application completes the request." -msgstr "" - -#: ./doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml:227(title) -msgid "Multi-site shared keystone architecture" -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:9(para) -msgid "Hybrid cloud deployments present complex operational challenges. Differences between provider clouds can cause incompatibilities with workloads or Cloud Management Platforms (CMP). Cloud providers may also offer different levels of integration with competing cloud offerings." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:14(para) -msgid "Monitoring is critical to maintaining a hybrid cloud, and it is important to determine if a CMP supports monitoring of all the clouds involved, or if compatible APIs are available to be queried for necessary information." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:20(title) -msgid "Agility" -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:21(para) -msgid "Hybrid clouds provide application availability across different cloud environments and technologies. This availability enables the deployment to survive disaster in any single cloud environment. Each cloud should provide the means to create instances quickly in response to capacity issues or failure elsewhere in the hybrid cloud." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:32(para) -msgid "Enterprise workloads that depend on the underlying infrastructure for availability are not designed to run on OpenStack. If the application cannot tolerate infrastructure failures, it is likely to require significant operator intervention to recover. Applications for hybrid clouds must be fault tolerant, with an SLA that is not tied to the underlying infrastructure. Ideally, cloud applications should be able to recover when entire racks and data centers experience an outage." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:45(para) -msgid "If a deployment includes a public cloud, predicting upgrades may not be possible. Carefully examine provider SLAs." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:48(para) -msgid "At massive scale, even when dealing with a cloud that offers an SLA with a high percentage of uptime, workloads must be able to recover quickly." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:52(para) -msgid "When upgrading private cloud deployments, minimize disruption by making incremental changes and providing a facility to either rollback or continue to roll forward when using a continuous delivery model." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:56(para) -msgid "You may need to coordinate CMP upgrades with hybrid cloud upgrades if there are API changes." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:61(title) -msgid "Network Operation Center" -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:62(para) -msgid "Consider infrastructure control when planning the Network Operation Center (NOC) for a hybrid cloud environment. If a significant portion of the cloud is on externally managed systems, prepare for situations where it may not be possible to make changes. Additionally, providers may differ on how infrastructure must be managed and exposed. This can lead to delays in root cause analysis where each insists the blame lies with the other provider." -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:72(para) -msgid "Ensure that the network structure connects all clouds to form integrated system, keeping in mind the state of handoffs. These handoffs must both be as reliable as possible and include as little latency as possible to ensure the best performance of the overall system." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:80(title) -msgid "Maintainability" -msgstr "" - -#: ./doc/arch-design/hybrid/section_operational_considerations_hybrid.xml:81(para) -msgid "Hybrid clouds rely on third party systems and processes. As a result, it is not possible to guarantee proper maintenance of the overall system. Instead, be prepared to abandon workloads and recreate them in an improved state." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:13(para) -msgid "A hybrid cloud environment requires inspection and understanding of technical issues in external data centers that may not be in your control. Ideally, select an architecture and CMP that are adaptable to changing environments." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:17(para) -msgid "Using diverse cloud platforms increases the risk of compatibility issues, but clouds using the same version and distribution of OpenStack are less likely to experience problems." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:20(para) -msgid "Clouds that exclusively use the same versions of OpenStack should have no issues, regardless of distribution. More recent distributions are less likely to encounter incompatibility between versions. An OpenStack community initiative defines core functions that need to remain backward compatible between supported versions. For example, the DefCore initiative defines basic functions that every distribution must support in order to use the name OpenStack." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:28(para) -msgid "Vendors can add proprietary customization to their distributions. If an application or architecture makes use of these features, it can be difficult to migrate to or use other types of environments." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:31(para) -msgid "If an environment includes non-OpenStack clouds, it may experience compatibility problems. CMP tools must account for the differences in the handling of operations and the implementation of services." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:35(title) -msgid "Possible cloud incompatibilities" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:37(para) -msgid "Instance deployment" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:40(para) -msgid "Network management" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:43(para) -msgid "Application management" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:46(para) -msgid "Services implementation" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:52(para) -msgid "One of the primary reasons many organizations use a hybrid cloud is to increase capacity without making large capital investments." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:55(para) -msgid "Capacity and the placement of workloads are key design considerations for hybrid clouds. The long-term capacity plan for these designs must incorporate growth over time to prevent permanent consumption of more expensive external clouds. To avoid this scenario, account for future applications' capacity requirements and plan growth appropriately." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:61(para) -msgid "It is difficult to predict the amount of load a particular application might incur if the number of users fluctuates, or the application experiences an unexpected increase in use. It is possible to define application requirements in terms of vCPU, RAM, bandwidth, or other resources and plan appropriately. However, other clouds might not use the same meter or even the same oversubscription rates." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:68(para) -msgid "Oversubscription is a method to emulate more capacity than may physically be present. For example, a physical hypervisor node with 32GB RAM may host 24 instances, each provisioned with 2GB RAM. As long as all 24 instances do not concurrently use 2 full gigabytes, this arrangement works well. However, some hosts take oversubscription to extremes and, as a result, performance can be inconsistent. If at all possible, determine what the oversubscription rates of each host are and plan capacity accordingly." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:81(para) -msgid "A CMP must be aware of what workloads are running, where they are running, and their preferred utilizations. For example, in most cases it is desirable to run as many workloads internally as possible, utilizing other resources only when necessary. On the other hand, situations exist in which the opposite is true, such as when an internal cloud is only for development and stressing it is undesirable. A cost model of various scenarios and consideration of internal priorities helps with this decision. To improve efficiency, automate these decisions when possible." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:90(para) -msgid "The Telemetry service (ceilometer) provides information on the usage of various OpenStack components. Note the following:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:94(para) -msgid "If Telemetry must retain a large amount of data, for example when monitoring a large or active cloud, we recommend using a NoSQL back end such as MongoDB." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:100(para) -msgid "You must monitor connections to non-OpenStack clouds and report this information to the CMP." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:109(para) -msgid "Performance is critical to hybrid cloud deployments, and they are affected by many of the same issues as multi-site deployments, such as network latency between sites. Also consider the time required to run a workload in different clouds and methods for reducing this time. This may require moving data closer to applications or applications closer to the data they process, and grouping functionality so that connections that require low latency take place over a single cloud rather than spanning clouds. This may also require a CMP that can determine which cloud can most efficiently run which types of workloads." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:119(para) -msgid "As with utilization, native OpenStack tools help improve performance. For example, you can use Telemetry to measure performance and the Orchestration service (heat) to react to changes in demand." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:123(para) -msgid "Orchestration requires special client configurations to integrate with Amazon Web Services. For other types of clouds, use CMP features." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:131(title) -msgid "Components" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:132(para) -msgid "Using more than one cloud in any design requires consideration of four OpenStack tools:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:138(para) -msgid "Regardless of deployment location, hypervisor choice has a direct effect on how difficult it is to integrate with additional clouds." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:146(para) -msgid "Whether using OpenStack Networking (neutron) or legacy networking (nova-network), it is necessary to understand network integration capabilities in order to connect between clouds." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:155(para) -msgid "Use of Telemetry depends, in large part, on what the other parts of the cloud you are using." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:162(para) -msgid "Orchestration can be a valuable tool in orchestrating tasks a CMP decides are necessary in an OpenStack-based cloud." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:170(title) -msgid "Special considerations" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:171(para) -msgid "Hybrid cloud deployments require consideration of two issues that are not common in other situations:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:175(term) ./doc/arch-design/hybrid/section_architecture_hybrid.xml:24(title) -msgid "Image portability" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:177(para) -msgid "As of the Kilo release, there is no common image format that is usable by all clouds. Conversion or recreation of images is necessary if migrating between clouds. To simplify deployment, use the smallest and simplest images feasible, install only what is necessary, and use a deployment manager such as Chef or Puppet. Do not use golden images to speed up the process unless you repeatedly deploy the same images on the same cloud." -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:187(term) -msgid "API differences" -msgstr "" - -#: ./doc/arch-design/hybrid/section_tech_considerations_hybrid.xml:189(para) -msgid "Avoid using a hybrid cloud deployment with more than just OpenStack (or with different versions of OpenStack) as API changes can cause compatibility issues." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:9(para) -msgid "Hybrid cloud architectures are complex, especially those that use heterogeneous cloud platforms. Ensure that design choices match requirements so that the benefits outweigh the inherent additional complexity and risks." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:14(title) -msgid "Business considerations when designing a hybrid cloud deployment" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:19(para) -msgid "A hybrid cloud architecture involves multiple vendors and technical architectures. 
These architectures may be more expensive to deploy and maintain. Operational costs can be higher because of the need for more sophisticated orchestration and brokerage tools than in other architectures. In contrast, overall operational costs might be lower by virtue of using a cloud brokerage tool to deploy the workloads to the most cost effective platform." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:31(term) ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:53(term) -msgid "Revenue opportunity" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:33(para) -msgid "Revenue opportunities vary based on the intent and use case of the cloud. As a commercial, customer-facing product, you must consider whether building over multiple platforms makes the design more attractive to customers." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:40(term) -msgid "Time-to-market" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:42(para) -msgid "One common reason to use cloud platforms is to improve the time-to-market of a new product or application. For example, using multiple cloud platforms is viable because there is an existing investment in several applications. It is faster to tie the investments together rather than migrate the components and refactoring them to a single platform." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:51(term) -msgid "Business or technical diversity" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:53(para) -msgid "Organizations leveraging cloud-based services can embrace business diversity and utilize a hybrid cloud design to spread their workloads across multiple cloud providers. This ensures that no single cloud provider is the sole host for an application." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:61(term) -msgid "Application momentum" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:63(para) -msgid "Businesses with existing applications may find that it is more cost effective to integrate applications on multiple cloud platforms than migrating them to a single platform." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:71(title) -msgid "Workload considerations" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:72(para) -msgid "A workload can be a single application or a suite of applications that work together. It can also be a duplicate set of applications that need to run on multiple cloud environments. In a hybrid cloud deployment, the same workload often needs to function equally well on radically different public and private cloud environments. The architecture needs to address these potential conflicts, complexity, and platform incompatibilities." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:81(title) -msgid "Use cases for a hybrid cloud architecture" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:83(term) -msgid "Dynamic resource expansion or bursting" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:85(para) -msgid "An application that requires additional resources may suit a multiple cloud architecture. 
For example, a retailer needs additional resources during the holiday season, but does not want to add private cloud resources to meet the peak demand. The user can accommodate the increased load by bursting to a public cloud for these peak load periods. These bursts could be for long or short cycles ranging from hourly to yearly." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:97(term) -msgid "Disaster recovery and business continuity" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:99(para) -msgid "Cheaper storage makes the public cloud suitable for maintaining backup applications." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:104(term) -msgid "Federated hypervisor and instance management" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:106(para) -msgid "Adding self-service, charge back, and transparent delivery of the resources from a federated pool can be cost effective. In a hybrid cloud environment, this is a particularly important consideration. Look for a cloud that provides cross-platform hypervisor support and robust instance management tools." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:115(term) -msgid "Application portfolio integration" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:117(para) -msgid "An enterprise cloud delivers efficient application portfolio management and deployments by leveraging self-service features and rules according to use. Integrating existing cloud environments is a common driver when building hybrid cloud architectures." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:125(term) -msgid "Migration scenarios" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:127(para) -msgid "Hybrid cloud architecture enables the migration of applications between different clouds." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:134(para) -msgid "A combination of locations and platforms enables a level of availability that is not possible with a single platform. This approach increases design complexity." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:141(para) -msgid "As running a workload on multiple cloud platforms increases design complexity, we recommend first exploring options such as transferring workloads across clouds at the application, instance, cloud platform, hypervisor, and network levels." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:148(title) -msgid "Tools considerations" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:149(para) -msgid "Hybrid cloud designs must incorporate tools to facilitate working across multiple clouds." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:152(title) -msgid "Tool functions" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:154(term) -msgid "Broker between clouds" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:156(para) -msgid "Brokering software evaluates relative costs between different cloud platforms. Cloud Management Platforms (CMP) allow the designer to determine the right location for the workload based on predetermined criteria." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:163(term) -msgid "Facilitate orchestration across the clouds" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:165(para) -msgid "CMPs simplify the migration of application workloads between public, private, and hybrid cloud platforms. We recommend using cloud orchestration tools for managing a diverse portfolio of systems and applications across multiple cloud platforms." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:176(title) -msgid "Network considerations" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:177(para) -msgid "It is important to consider the functionality, security, scalability, availability, and testability of network when choosing a CMP and cloud provider." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:182(para) -msgid "Decide on a network framework and design minimum functionality tests. This ensures testing and functionality persists during and after upgrades." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:188(para) -msgid "Scalability across multiple cloud providers may dictate which underlying network framework you choose in different cloud providers. It is important to present the network API functions and to verify that functionality persists across all cloud endpoints chosen." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:196(para) -msgid "High availability implementations vary in functionality and design. Examples of some common methods are active-hot-standby, active-passive, and active-active. Development of high availability and test frameworks is necessary to insure understanding of functionality and limitations." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:204(para) -msgid "Consider the security of data between the client and the endpoint, and of traffic that traverses the multiple clouds." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:212(title) -msgid "Risk mitigation and management considerations" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:213(para) -msgid "Hybrid cloud architectures introduce additional risk because they are more complex than a single cloud design and may involve incompatible components or tools. However, they also reduce risk by spreading workloads over multiple providers." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:218(title) -msgid "Hybrid cloud risks" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:220(term) -msgid "Provider availability or implementation details" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:222(para) -msgid "Business changes can affect provider availability. Likewise, changes in a provider's service can disrupt a hybrid cloud environment or increase costs." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:229(term) -msgid "Differing SLAs" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:231(para) -msgid "Hybrid cloud designs must accommodate differences in SLAs between providers, and consider their enforceability." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:236(term) -msgid "Security levels" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:238(para) -msgid "Securing multiple cloud environments is more complex than securing single cloud environments. We recommend addressing concerns at the application, network, and cloud platform levels. Be aware that each cloud platform approaches security differently, and a hybrid cloud design must address and compensate for these differences." -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:248(term) -msgid "Provider API changes" -msgstr "" - -#: ./doc/arch-design/hybrid/section_user_requirements_hybrid.xml:250(para) -msgid "Consumers of external clouds rarely have control over provider changes to APIs, and changes can break compatibility. Using only the most common and basic APIs can minimize potential conflicts." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:47(None) -msgid "@@image: '../figures/Multi-Cloud_Priv-Pub3.png'; md5=8fdb44f876665e2aa1bd793607c4537e" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:82(None) ./doc/arch-design/hybrid/section_architecture_hybrid.xml:19(None) -msgid "@@image: '../figures/Multi-Cloud_Priv-AWS4.png'; md5=3bba96b0b6ac0341a05581b00160ff17" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:121(None) -msgid "@@image: '../figures/Multi-Cloud_failover2.png'; md5=5a7be4a15d381288659c7268dff6724b" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:9(para) -msgid "Hybrid cloud environments are designed for these use cases:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:13(para) -msgid "Bursting workloads from private to public OpenStack clouds" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:17(para) -msgid "Bursting workloads from private to public non-OpenStack clouds" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:21(para) -msgid "High availability across clouds (for technical diversity)" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:25(para) -msgid "This chapter provides examples of environments that address each of these use cases." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:28(title) -msgid "Bursting to a public OpenStack cloud" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:29(para) -msgid "Company A's data center is running low on capacity. It is not possible to expand the data center in the foreseeable future. In order to accommodate the continuously growing need for development resources in the organization, Company A decides to use resources in the public cloud." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:35(para) -msgid "Company A has an established data center with a substantial amount of hardware. Migrating the workloads to a public cloud is not feasible." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:38(para) -msgid "The company has an internal cloud management platform that directs requests to the appropriate cloud, depending on the local capacity. This is a custom in-house application written for this specific purpose." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:42(para) -msgid "This solution is depicted in the figure below:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:50(para) -msgid "This example shows two clouds with a Cloud Management Platform (CMP) connecting them. This guide does not discuss a specific CMP, but describes how the Orchestration and Telemetry services handle, manage, and control workloads." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:54(para) -msgid "The private OpenStack cloud has at least one controller and at least one compute node. It includes metering using the Telemetry service. The Telemetry service captures the load increase and the CMP processes the information. If there is available capacity, the CMP uses the OpenStack API to call the Orchestration service. This creates instances on the private cloud in response to user requests. When capacity is not available on the private cloud, the CMP issues a request to the Orchestration service API of the public cloud. This creates the instance on the public cloud." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:65(para) -msgid "In this example, Company A does not direct the deployments to an external public cloud due to concerns regarding resource control, security, and increased operational expense" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:71(title) -msgid "Bursting to a public non-OpenStack cloud" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:72(para) -msgid "The second example examines bursting workloads from the private cloud into a non-OpenStack public cloud using Amazon Web Services (AWS) to take advantage of additional capacity and to scale applications." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:76(para) -msgid "The following diagram demonstrates an OpenStack-to-AWS hybrid cloud:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:85(para) -msgid "Company B states that its developers are already using AWS and do not want to change to a different provider." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:87(para) -msgid "If the CMP is capable of connecting to an external cloud provider with an appropriate API, the workflow process remains the same as the previous scenario. The actions the CMP takes, such as monitoring loads and creating new instances, stay the same. However, the CMP performs actions in the public cloud using applicable API calls." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:93(para) -msgid "If the public cloud is AWS, the CMP would use the EC2 API to create a new instance and assign an Elastic IP. It can then add that IP to HAProxy in the private cloud. 
The CMP can also reference AWS-specific tools such as CloudWatch and CloudFormation." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:98(para) -msgid "Several open source tool kits for building CMPs are available and can handle this kind of translation. Examples include ManageIQ, jClouds, and JumpGate." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:104(title) -msgid "High availability and disaster recovery" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:105(para) -msgid "Company C requires their local data center to be able to recover from failure. Some of the workloads currently in use are running on their private OpenStack cloud. Protecting the data involves Block Storage, Object Storage, and a database. The architecture supports the failure of large components of the system while ensuring that the system continues to deliver services. While the services remain available to users, the failed components are restored in the background based on standard best practice data replication policies. To achieve these objectives, Company C replicates data to a second cloud in a geographically distant location. The following diagram describes this system:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:124(para) -msgid "This example includes two private OpenStack clouds connected with a CMP. The source cloud, OpenStack Cloud 1, includes a controller and at least one instance running MySQL. It also includes at least one Block Storage volume and one Object Storage volume. This means that data is available to the users at all times. The details of the method for protecting each of these sources of data differs." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:132(para) -msgid "Object Storage relies on the replication capabilities of the Object Storage provider. Company C enables OpenStack Object Storage so that it creates geographically separated replicas that take advantage of this feature. The company configures storage so that at least one replica exists in each cloud. In order to make this work, the company configures a single array spanning both clouds with OpenStack Identity. Using Federated Identity, the array talks to both clouds, communicating with OpenStack Object Storage through the Swift proxy." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:141(para) -msgid "For Block Storage, the replication is a little more difficult, and involves tools outside of OpenStack itself. The OpenStack Block Storage volume is not set as the drive itself but as a logical object that points to a physical back end. Disaster recovery is configured for Block Storage for synchronous backup for the highest level of data protection, but asynchronous backup could have been set as an alternative that is not as latency sensitive. For asynchronous backup, the Block Storage API makes it possible to export the data and also the metadata of a particular volume, so that it can be moved and replicated elsewhere. More information can be found here: https://blueprints.launchpad.net/cinder/+spec/cinder-backup-volume-metadata-support." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:156(para) -msgid "The synchronous backups create an identical volume in both clouds and chooses the appropriate flavor so that each cloud has an identical back end. 
This is done by creating volumes through the CMP. After this is configured, a solution involving DRBD synchronizes the physical drives." -msgstr "" - -#: ./doc/arch-design/hybrid/section_prescriptive_examples_hybrid.xml:161(para) -msgid "The database component is backed up using synchronous backups. MySQL does not support geographically diverse replication, so disaster recovery is provided by replicating the file itself. As it is not possible to use Object Storage as the back end of a database like MySQL, Swift replication is not an option. Company C decides not to store the data on another geo-tiered storage system, such as Ceph, as Block Storage. This would have given another layer of protection. Another option would have been to store the database on an OpenStack Block Storage volume and back it up like any other Block Storage volume." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:9(para) -msgid "Map out the dependencies of the expected workloads and the cloud infrastructures required to support them to architect a solution for the broadest compatibility between cloud platforms, minimizing the need to create workarounds and processes to fill identified gaps." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:14(para) -msgid "For your chosen cloud management platform, note the relative levels of support for both monitoring and orchestration." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:25(para) -msgid "The majority of cloud workloads currently run on instances using hypervisor technologies. The challenge is that each of these hypervisors uses an image format that may not be compatible with the others. When possible, standardize on a single hypervisor and instance image format. This may not be possible when using externally-managed public clouds." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:31(para) -msgid "Conversion tools exist to address image format compatibility. Examples include virt-p2v/virt-v2v and virt-edit. These tools cannot serve beyond basic cloud instance specifications." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:38(para) -msgid "Alternatively, build a thin operating system image as the base for new instances. This facilitates rapid creation of cloud instances using cloud orchestration or configuration management tools for more specific templating. Remember that if you intend to use portable images for disaster recovery, application diversity, or high availability, your users could move the images and instances between cloud platforms regularly." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:48(title) -msgid "Upper-layer services" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:49(para) -msgid "Many clouds offer complementary services beyond the basic compute, network, and storage components. These additional services often simplify the deployment and management of applications on a cloud platform." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:53(para) -msgid "When moving workloads from the source to the destination cloud platforms, consider that the destination cloud platform may not have comparable services. Implement workloads in a different way or use a different technology." 
-msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:57(para) -msgid "For example, moving an application that uses a NoSQL database service such as MongoDB could cause difficulties in maintaining the application between the platforms." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:60(para) -msgid "There are a number of options that are appropriate for the hybrid cloud use case:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:64(para) -msgid "Implementing a baseline of upper-layer services across all of the cloud platforms. For platforms that do not support a given service, create a service on top of that platform and apply it to the workloads as they are launched on that cloud." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:69(para) -msgid "For example, through the Database service for OpenStack (trove), OpenStack supports MySQL-as-a-Service but not NoSQL databases in production. To move from or run alongside AWS, a NoSQL workload must use an automation tool, such as the Orchestration service (heat), to recreate the NoSQL database on top of OpenStack." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:79(para) -msgid "Deploying a Platform-as-a-Service (PaaS) technology that abstracts the upper-layer services from the underlying cloud platform. The unit of application deployment and migration is the PaaS. It leverages the services of the PaaS and only consumes the base infrastructure services of the cloud platform." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:88(para) -msgid "Using automation tools to create the required upper-layer services that are portable across all cloud platforms." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:90(para) -msgid "For example, instead of using database services that are inherent in the cloud platforms, launch cloud instances and deploy the databases on those instances using scripts or configuration and application deployment tools." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:100(title) -msgid "Network services" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:101(para) -msgid "Network services functionality is a critical component of multiple cloud architectures. It is an important factor to assess when choosing a CMP and cloud provider. Considerations include:" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:107(para) -msgid "Functionality" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:122(para) -msgid "High availability (HA)" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:127(para) -msgid "Verify and test critical cloud endpoint features." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:130(para) -msgid "After selecting the network functionality framework, you must confirm the functionality is compatible. This ensures testing and functionality persists during and after upgrades." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:135(para) -msgid "Diverse cloud platforms may de-synchronize over time if you do not maintain their mutual compatibility. This is a particular issue with APIs." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:141(para) -msgid "Scalability across multiple cloud providers determines your choice of underlying network framework. 
It is important to have the network API functions presented and to verify that the desired functionality persists across all chosen cloud endpoints." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:148(para) -msgid "High availability implementations vary in functionality and design. Examples of some common methods are active-hot-standby, active-passive, and active-active. Develop your high availability implementation and a test framework to understand the functionality and limitations of the environment." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:156(para) -msgid "It is imperative to address security considerations. For example, consider how data is secured between the client and the endpoint, and how to protect any traffic that traverses the multiple clouds. Business and regulatory requirements dictate what security approach to take. For more information, see the Security requirements chapter." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:168(title) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:666(para) -msgid "Data" -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:169(para) -msgid "Traditionally, replication has been the best method of protecting object store implementations. A variety of replication methods exist in storage architectures, for example synchronous and asynchronous mirroring. Most object stores and back-end storage systems implement methods for replication at the storage subsystem layer. Object stores also tailor replication techniques to fit a cloud's requirements." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:176(para) -msgid "Organizations must find the right balance between data integrity and data availability. Replication strategy may also influence disaster recovery methods." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:179(para) -msgid "Replication across different racks, data centers, and geographical regions increases focus on determining and ensuring data locality. The ability to guarantee data is accessed from the nearest or fastest storage can be necessary for applications to perform well." -msgstr "" - -#: ./doc/arch-design/hybrid/section_architecture_hybrid.xml:185(para) -msgid "When running embedded object store methods, ensure that you do not instigate extra data replication as this can cause performance issues." -msgstr "" - -#: ./doc/arch-design/introduction/section_intended_audience.xml:7(title) -msgid "Intended audience" -msgstr "" - -#: ./doc/arch-design/introduction/section_intended_audience.xml:8(para) -msgid "This book has been written for architects and designers of OpenStack clouds. For a guide on deploying and operating OpenStack, please refer to the OpenStack Operations Guide (http://docs.openstack.org/openstack-ops)." -msgstr "" - -#: ./doc/arch-design/introduction/section_intended_audience.xml:14(para) -msgid "Before reading this book, we recommend prior knowledge of cloud architecture and principles, experience in enterprise system design, Linux and virtualization experience, and a basic understanding of networking principles and protocols." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:11(title) -msgid "Methodology" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:12(para) -msgid "The best way to design your cloud architecture is through creating and testing use cases. 
Planning for applications that support thousands of sessions per second, variable workloads, and complex, changing data, requires you to identify the key meters. Identifying these key meters, such as number of concurrent transactions per second, and size of database, makes it possible to build a method for testing your assumptions." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:18(para) -msgid "Use a functional user scenario to develop test cases, and to measure overall project trajectory." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:21(para) -msgid "If you do not want to use an application to develop user requirements automatically, you need to create requirements to build test harnesses and develop usable meters." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:25(para) -msgid "Establishing these meters allows you to respond to changes quickly without having to set exact requirements in advance. This creates ways to configure the system, rather than redesigning it every time there is a requirements change." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:30(para) -msgid "It is important to limit scope creep. Ensure you address tool limitations, but do not recreate the entire suite of tools. Work with technical product owners to establish critical features that are needed for a successful cloud deployment." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:37(title) -msgid "Application cloud readiness" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:38(para) -msgid "The cloud does more than host virtual machines and their applications. This lift and shift approach works in certain situations, but there is a fundamental difference between clouds and traditional bare-metal-based environments, or even traditional virtualized environments." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:43(para) -msgid "In traditional environments, with traditional enterprise applications, the applications and the servers that run on them are pets. They are lovingly crafted and cared for, the servers have names like Gandalf or Tardis, and if they get sick someone nurses them back to health. All of this is designed so that the application does not experience an outage." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:50(para) -msgid "In cloud environments, servers are more like cattle. There are thousands of them, they get names like NY-1138-Q, and if they get sick, they get put down and a sysadmin installs another one. Traditional applications that are unprepared for this kind of environment may suffer outages, loss of data, or complete failure." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:56(para) -msgid "There are other reasons to design applications with the cloud in mind. Some are defensive, such as the fact that because applications cannot be certain of exactly where or on what hardware they will be launched, they need to be flexible, or at least adaptable. Others are proactive. For example, one of the advantages of using the cloud is scalability. Applications need to be designed in such a way that they can take advantage of these and other opportunities." 
-msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:66(title) -msgid "Determining whether an application is cloud-ready" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:67(para) -msgid "There are several factors to take into consideration when looking at whether an application is a good fit for the cloud." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:71(term) -msgid "Structure" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:73(para) -msgid "A large, monolithic, single-tiered, legacy application typically is not a good fit for the cloud. Efficiencies are gained when load can be spread over several instances, so that a failure in one part of the system can be mitigated without affecting other parts of the system, or so that scaling can take place where the app needs it." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:88(para) -msgid "Applications that depend on specific hardware, such as a particular chip set or an external device such as a fingerprint reader, might not be a good fit for the cloud, unless those dependencies are specifically addressed. Similarly, if an application depends on an operating system or set of libraries that cannot be used in the cloud, or cannot be virtualized, that is a problem." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:104(para) -msgid "Self-contained applications, or those that depend on resources that are not reachable by the cloud in question, will not run. In some situations, you can work around these issues with custom network setup, but how well this works depends on the chosen cloud environment." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:115(term) -msgid "Durability and resilience" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:117(para) -msgid "Despite the existence of SLAs, things break: servers go down, network connections are disrupted, or too many tenants on a server make a server unusable. An application must be sturdy enough to contend with these issues." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:130(title) -msgid "Designing for the cloud" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:131(para) -msgid "Here are some guidelines to keep in mind when designing an application for the cloud:" -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:135(para) -msgid "Be a pessimist: Assume everything fails and design backwards." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:139(para) -msgid "Put your eggs in multiple baskets: Leverage multiple providers, geographic regions and availability zones to accommodate for local availability issues. Design for portability." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:145(para) -msgid "Think efficiency: Inefficient designs will not scale. Efficient designs become cheaper as they scale. Kill off unneeded components or capacity." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:150(para) -msgid "Be paranoid: Design for defense in depth and zero tolerance by building in security at every level and between every component. Trust no one." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:155(para) -msgid "But not too paranoid: Not every application needs the platinum solution. Architect for different SLA's, service tiers, and security levels." 
-msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:160(para) -msgid "Manage the data: Data is usually the most inflexible and complex area of a cloud and cloud integration architecture. Do not short change the effort in analyzing and addressing data needs." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:166(para) -msgid "Hands off: Leverage automation to increase consistency and quality and reduce response times." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:170(para) -msgid "Divide and conquer: Pursue partitioning and parallel layering wherever possible. Make components as small and portable as possible. Use load balancing between layers." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:176(para) -msgid "Think elasticity: Increasing resources should result in a proportional increase in performance and scalability. Decreasing resources should have the opposite effect." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:182(para) -msgid "Be dynamic: Enable dynamic configuration changes such as auto scaling, failure recovery and resource discovery to adapt to changing environments, faults, and workload volumes." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:188(para) -msgid "Stay close: Reduce latency by moving highly interactive components and data near each other." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:192(para) -msgid "Keep it loose: Loose coupling, service interfaces, separation of concerns, abstraction, and well defined API's deliver flexibility." -msgstr "" - -#: ./doc/arch-design/introduction/section_methodology.xml:197(para) -msgid "Be cost aware: Autoscaling, data transmission, virtual software licenses, reserved instances, and similar costs can rapidly increase monthly usage charges. Monitor usage closely." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:7(title) -msgid "How this book is organized" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:8(para) -msgid "This book examines some of the most common uses for OpenStack clouds, and explains the considerations for each use case. Cloud architects may use this book as a comprehensive guide by reading all of the use cases, but it is also possible to review only the chapters which pertain to a specific use case. The use cases covered in this guide include:" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:16(para) -msgid "General purpose: Uses common components that address 80% of common use cases." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:22(para) -msgid "Compute focused: For compute intensive workloads such as high performance computing (HPC)." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:28(para) -msgid "Storage focused: For storage intensive workloads such as data analytics with parallel file systems." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:34(para) -msgid "Network focused: For high performance and reliable networking, such as a content delivery network (CDN)." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:41(para) -msgid "Multi-site: For applications that require multiple site deployments for geographical, reliability or data locality reasons." 
-msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:48(para) -msgid "Hybrid cloud: Uses multiple disparate clouds connected either for failover, hybrid cloud bursting, or availability." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:55(para) -msgid "Massively scalable: For cloud service providers or other large installations" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_is_organized.xml:63(para) -msgid "Specialized cases: Architectures that have not previously been covered in the defined use cases." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:7(title) -msgid "Why and how we wrote this book" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:8(para) -msgid "We wrote this book to guide you through designing an OpenStack cloud architecture. This guide identifies design considerations for common cloud use cases and provides examples." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:11(para) -msgid "The Architecture Design Guide was written in a book sprint format, which is a facilitated, rapid development production method for books. The Book Sprint was facilitated by Faith Bosworth and Adam Hyde of Book Sprints, for more information, see the Book Sprints website (www.booksprints.net)." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:16(para) -msgid "This book was written in five days during July 2014 while exhausting the M&M, Mountain Dew and healthy options supply, complete with juggling entertainment during lunches at VMware's headquarters in Palo Alto." -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:20(para) -msgid "We would like to thank VMware for their generous hospitality, as well as our employers, Cisco, Cloudscaling, Comcast, EMC, Mirantis, Rackspace, Red Hat, Verizon, and VMware, for enabling us to contribute our time. We would especially like to thank Anne Gentle and Kenneth Hui for all of their shepherding and organization in making this happen." 
-msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:27(para) -msgid "The author team includes:" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:30(para) -msgid "Kenneth Hui (EMC) @hui_kenneth" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:35(para) -msgid "Alexandra Settle (Rackspace) @dewsday" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:40(para) -msgid "Anthony Veiga (Comcast) @daaelar" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:45(para) -msgid "Beth Cohen (Verizon) @bfcohen" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:50(para) -msgid "Kevin Jackson (Rackspace) @itarchitectkev" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:55(para) -msgid "Maish Saidel-Keesing (Cisco) @maishsk" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:60(para) -msgid "Nick Chase (Mirantis) @NickChase" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:65(para) -msgid "Scott Lowe (VMware) @scott_lowe" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:70(para) -msgid "Sean Collins (Comcast) @sc68cal" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:75(para) -msgid "Sean Winn (Cloudscaling) @seanmwinn" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:80(para) -msgid "Sebastian Gutierrez (Red Hat) @gutseb" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:85(para) -msgid "Stephen Gordon (Red Hat) @xsgordon" -msgstr "" - -#: ./doc/arch-design/introduction/section_how_this_book_was_written.xml:90(para) -msgid "Vinny Valdez (Red Hat) @VinnyValdez" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:62(None) -msgid "@@image: '../figures/Specialized_VDI1.png'; md5=77729426d59881476de9a03e1ee8a22c" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:8(title) -msgid "Desktop-as-a-Service" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:9(para) -msgid "Virtual Desktop Infrastructure (VDI) is a service that hosts user desktop environments on remote servers. This application is very sensitive to network latency and requires a high performance compute environment. Traditionally these types of services do not use cloud environments because few clouds support such a demanding workload for user-facing applications. As cloud environments become more robust, vendors are starting to provide services that provide virtual desktops in the cloud. OpenStack may soon provide the infrastructure for these types of deployments." 
-msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:21(title) ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:19(title) ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:25(title) ./doc/arch-design/specialized/section_networking_specialized.xml:15(title) ./doc/arch-design/specialized/section_hardware_specialized.xml:16(title) -msgid "Challenges" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:22(para) -msgid "Designing an infrastructure that is suitable to host virtual desktops is a very different task to that of most virtual workloads. For example, the design must consider:" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:27(para) -msgid "Boot storms, when a high volume of logins occur in a short period of time" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:31(para) -msgid "The performance of the applications running on virtual desktops" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:35(para) -msgid "Operating systems and their compatibility with the OpenStack hypervisor" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:41(title) -msgid "Broker" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:42(para) -msgid "The connection broker determines which remote desktop host users can access. Medium and large scale environments require a broker since its service represents a central component of the architecture. The broker is a complete management product, and enables automated deployment and provisioning of remote desktop hosts." -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:49(title) ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:28(title) ./doc/arch-design/specialized/section_networking_specialized.xml:23(title) -msgid "Possible solutions" -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:50(para) -msgid "There are a number of commercial products currently available that provide a broker solution. However, no native OpenStack projects provide broker services. Not providing a broker is also an option, but managing this manually would not suffice for a large scale, enterprise solution." -msgstr "" - -#: ./doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml:59(title) ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:40(title) ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:71(title) -msgid "Diagram" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:44(None) -msgid "@@image: '../figures/Special_case_SDN_hosted.png'; md5=93f5e5b90b5aea50d24a098ba80c805d" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:51(None) -msgid "@@image: '../figures/Special_case_SDN_external.png'; md5=12d9e840a0a10a5abcf1a2c1f6f80965" -msgstr "" - -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:8(title) -msgid "Software-defined networking" -msgstr "" - -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:9(para) -msgid "Software-defined networking (SDN) is the separation of the data plane and control plane. SDN is a popular method of managing and controlling packet flows within networks. SDN uses overlays or directly controlled layer-2 devices to determine flow paths, and as such presents challenges to a cloud environment. Some designers may wish to run their controllers within an OpenStack installation. Others may wish to have their installations participate in an SDN-controlled network." -msgstr "" - -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:20(para) -msgid "SDN is a relatively new concept that is not yet standardized, so SDN systems come in a variety of different implementations. Because of this, a truly prescriptive architecture is not feasible. Instead, examine the differences between an existing and a planned OpenStack design and determine where potential conflicts and gaps exist." -msgstr "" - -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:29(para) -msgid "If an SDN implementation requires layer-2 access because it directly manipulates switches, we do not recommend running an overlay network or a layer-3 agent. If the controller resides within an OpenStack installation, it may be necessary to build an ML2 plug-in and schedule the controller instances to connect to tenant VLANs that then talk directly to the switch hardware. Alternatively, depending on the external device support, use a tunnel that terminates at the switch hardware itself." -msgstr "" - -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:41(para) -msgid "OpenStack hosted SDN controller: " -msgstr "" - -#: ./doc/arch-design/specialized/section_software_defined_networking_specialized.xml:48(para) -msgid "OpenStack participating in an SDN controller network: " -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:75(None) -msgid "@@image: '../figures/Specialized_OOO.png'; md5=65a8e3666ebf09a0145c61bc1d472144" -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:12(title) -msgid "OpenStack on OpenStack" -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:13(para) -msgid "In some cases, users may run OpenStack nested on top of another OpenStack cloud. This scenario describes how to manage and provision complete OpenStack environments on instances supported by hypervisors and servers, which an underlying OpenStack environment controls." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:18(para) -msgid "Public cloud providers can use this technique to manage the upgrade and maintenance process on complete OpenStack environments. 
Developers and those testing OpenStack can also use this technique to provision their own OpenStack environments on available OpenStack Compute resources, whether public or private." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:26(para) -msgid "The network aspect of deploying a nested cloud is the most complicated aspect of this architecture. You must expose VLANs to the physical ports on which the underlying cloud runs because the bare metal cloud owns all the hardware. You must also expose them to the nested levels as well. Alternatively, you can use the network overlay technologies on the OpenStack environment running on the host OpenStack environment to provide the required software defined networking for the deployment." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:37(title) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:361(title) -msgid "Hypervisor" -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:38(para) -msgid "In this example architecture, consider which approach you should take to provide a nested hypervisor in OpenStack. This decision influences which operating systems you use for the deployment of the nested OpenStack deployments." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:45(title) -msgid "Possible solutions: deployment" -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:46(para) -msgid "Deployment of a full stack can be challenging but you can mitigate this difficulty by creating a Heat template to deploy the entire stack, or a configuration management system. After creating the Heat template, you can automate the deployment of additional stacks." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:50(para) -msgid "The OpenStack-on-OpenStack project (TripleO) addresses this issue. Currently, however, the project does not completely cover nested stacks. For more information, see https://wiki.openstack.org/wiki/TripleO." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:57(title) -msgid "Possible solutions: hypervisor" -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:58(para) -msgid "In the case of running TripleO, the underlying OpenStack cloud deploys the Compute nodes as bare-metal. You then deploy OpenStack on these Compute bare-metal servers with the appropriate hypervisor, such as KVM." -msgstr "" - -#: ./doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml:62(para) -msgid "In the case of running smaller OpenStack clouds for testing purposes, where performance is not a critical factor, you can use QEMU instead. It is also possible to run a KVM hypervisor in an instance (see http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/), though this is not a supported configuration, and could be a complex solution for such a use case." -msgstr "" - -#: ./doc/arch-design/specialized/section_networking_specialized.xml:8(title) -msgid "Specialized networking example" -msgstr "" - -#: ./doc/arch-design/specialized/section_networking_specialized.xml:9(para) -msgid "Some applications that interact with a network require specialized connectivity. 
Applications such as a looking glass require the ability to connect to a BGP peer, or route participant applications may need to join a network at a layer 2 level." -msgstr "" - -#: ./doc/arch-design/specialized/section_networking_specialized.xml:16(para) -msgid "Connecting specialized network applications to their required resources alters the design of an OpenStack installation. Installations that rely on overlay networks are unable to support a routing participant, and may also block layer-2 listeners." -msgstr "" - -#: ./doc/arch-design/specialized/section_networking_specialized.xml:24(para) -msgid "Deploying an OpenStack installation using OpenStack Networking with a provider network allows direct layer-2 connectivity to an upstream networking device. This design provides the layer-2 connectivity required to communicate via Intermediate System-to-Intermediate System (ISIS) protocol or to pass packets controlled by an OpenFlow controller. Using the multiple layer-2 plug-in with an agent such as Open vSwitch allows a private connection through a VLAN directly to a specific port in a layer-3 device. This allows a BGP point-to-point link to join the autonomous system. Avoid using layer-3 plug-ins as they divide the broadcast domain and prevent router adjacencies from forming." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:48(None) -msgid "@@image: '../figures/Specialized_Hardware2.png'; md5=f8477d5d015f4c6d4fcd56d511f14ef9" -msgstr "" - -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:8(title) -msgid "Specialized hardware" -msgstr "" - -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:9(para) -msgid "Certain workloads require specialized hardware devices that have significant virtualization or sharing challenges. Applications such as load balancers, highly parallel brute force computing, and direct to wire networking may need capabilities that basic OpenStack components do not provide." -msgstr "" - -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:17(para) -msgid "Some applications need access to hardware devices to either improve performance or provide capabilities that are not virtual CPU, RAM, network, or storage. These can be a shared resource, such as a cryptography processor, or a dedicated resource, such as a Graphics Processing Unit (GPU). OpenStack can provide some of these, while others may need extra work." -msgstr "" - -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:26(title) -msgid "Solutions" -msgstr "" - -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:27(para) -msgid "To provide cryptography offloading to a set of instances, you can use Image service configuration options. For example, assign the cryptography chip to a device node in the guest. The OpenStack Command Line Reference contains further information on configuring this solution in the chapter Image service property keys. A challenge, however, is this option allows all guests using the configured images to access the hypervisor cryptography device." -msgstr "" - -#: ./doc/arch-design/specialized/section_hardware_specialized.xml:37(para) -msgid "If you require direct access to a specific device, PCI pass-through enables you to dedicate the device to a single instance per hypervisor. 
You must define a flavor that has the PCI device specifically in order to properly schedule instances. More information regarding PCI pass-through, including instructions for implementing and using it, is available at https://wiki.openstack.org/wiki/Pci_passthrough." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:23(None) -msgid "@@image: '../figures/Compute_NSX.png'; md5=1745487faf16b74b13f80ffd837f43a0" -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:8(title) -msgid "Multi-hypervisor example" -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:9(para) -msgid "A financial company requires its applications migrated from a traditional, virtualized environment to an API driven, orchestrated environment. The new environment needs multiple hypervisors since many of the company's applications have strict hypervisor requirements." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:14(para) -msgid "Currently, the company's vSphere environment runs 20 VMware ESXi hypervisors. These hypervisors support 300 instances of various sizes. Approximately 50 of these instances must run on ESXi. The remaining 250 or so have more flexible requirements." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:19(para) -msgid "The financial company decides to manage the overall system with a common OpenStack platform." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:26(para) -msgid "Architecture planning teams decided to run a host aggregate containing KVM hypervisors for the general purpose instances. A separate host aggregate targets instances requiring ESXi." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:29(para) -msgid "Images in the OpenStack Image service have particular hypervisor metadata attached. When a user requests a certain image, the instance spawns on the relevant aggregate." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:33(para) -msgid "Images for ESXi use the VMDK format. You can convert QEMU disk images to VMDK, VMFS Flat Disks. These disk images can also be thin, thick, zeroed-thick, and eager-zeroed-thick. After exporting a VMFS thin disk from VMFS to the OpenStack Image service (a non-VMFS location), it becomes a preallocated flat disk. This impacts the transfer time from the OpenStack Image service to the data store since transfers require moving the full preallocated flat disk rather than the thin disk." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:42(para) -msgid "The VMware host aggregate compute nodes communicate with vCenter rather than spawning directly on a hypervisor. The vCenter then requests scheduling for the instance to run on an ESXi hypervisor." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:46(para) -msgid "This functionality requires that VMware Distributed Resource Scheduler (DRS) is enabled on a cluster and set to Fully Automated. The vSphere requires shared storage because the DRS uses vMotion, which is a service that relies on shared storage." 
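A minimal sketch of the aggregate and image-metadata targeting described above. All aggregate, host, and image names and property values are illustrative, and it assumes the AggregateImagePropertiesIsolation scheduler filter is enabled; it is not the exact configuration used in this example:

    # Illustrative only: one host aggregate per hypervisor type
    $ nova aggregate-create kvm-general
    $ nova aggregate-add-host kvm-general kvm-compute-01
    $ nova aggregate-set-metadata kvm-general hypervisor_type=qemu
    $ nova aggregate-create esxi-cluster
    $ nova aggregate-add-host esxi-cluster vcenter-compute-01
    $ nova aggregate-set-metadata esxi-cluster hypervisor_type=vmware

    # Tag images with the same property so that, with the
    # AggregateImagePropertiesIsolation filter enabled, instances booted
    # from each image are scheduled onto the matching aggregate
    $ glance image-update --property hypervisor_type=qemu ubuntu-trusty
    $ glance image-update --property hypervisor_type=vmware win2012-vmdk

In the vCenter-backed aggregate, the host added is the nova-compute service that proxies to vCenter rather than an individual ESXi server.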
-msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:50(para) -msgid "This solution to the company's migration uses shared storage to provide Block Storage capabilities to the KVM instances while also providing vSphere storage. The new environment provides this storage functionality using a dedicated data network. The compute hosts should have dedicated NICs to support the dedicated data network. vSphere supports OpenStack Block Storage. This support gives storage from a VMFS datastore to an instance. For the financial company, Block Storage in their new architecture supports both hypervisors." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:59(para) -msgid "OpenStack Networking provides network connectivity in this new architecture, with the VMware NSX plug-in driver configured. legacy networking (nova-network) supports both hypervisors in this new architecture example, but has limitations. Specifically, vSphere with legacy networking does not support security groups. The new architecture uses VMware NSX as a part of the design. When users launch an instance within either of the host aggregates, VMware NSX ensures the instance attaches to the appropriate network overlay-based logical networks." -msgstr "" - -#: ./doc/arch-design/specialized/section_multi_hypervisor_specialized.xml:68(para) -msgid "The architecture planning teams also consider OpenStack Compute integration. When running vSphere in an OpenStack environment, nova-compute communications with vCenter appear as a single large hypervisor. This hypervisor represents the entire ESXi cluster. Multiple nova-compute instances can represent multiple ESXi clusters. They can connect to multiple vCenter servers. If the process running nova-compute crashes it cuts the connection to the vCenter server. Any ESXi clusters will stop running, and you will not be able to provision further instances on the vCenter, even if you enable high availability. You must monitor the nova-compute service connected to vSphere carefully for any distruptions as a result of this failure point." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:9(para) -msgid "Defining user requirements for a massively scalable OpenStack design architecture dictates approaching the design from two different, yet sometimes opposing, perspectives: the cloud user, and the cloud operator. The expectations and perceptions of the consumption and management of resources of a massively scalable OpenStack cloud from these two perspectives are distinctly different." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:16(para) -msgid "Massively scalable OpenStack clouds have the following user requirements:" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:20(para) -msgid "The cloud user expects repeatable, dependable, and deterministic processes for launching and deploying cloud resources. You could deliver this through a web-based interface or publicly available API endpoints. All appropriate options for requesting cloud resources must be available through some type of user interface, a command-line interface (CLI), or API endpoints." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:30(para) -msgid "Cloud users expect a fully self-service and on-demand consumption model. 
When an OpenStack cloud reaches the \"massively scalable\" size, expect consumption \"as a service\" in each and every way." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:37(para) -msgid "For a user of a massively scalable OpenStack public cloud, there are no expectations for control over security, performance, or availability. Users expect only SLAs related to uptime of API services, and very basic SLAs for services offered. It is the user's responsibility to address these issues on their own. The exception to this expectation is the rare case of a massively scalable cloud infrastructure built for a private or government organization that has specific requirements." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:49(para) -msgid "The cloud user's requirements and expectations that determine the cloud design focus on the consumption model. The user expects to consume cloud resources in an automated and deterministic way, without any need for knowledge of the capacity, scalability, or other attributes of the cloud's underlying infrastructure." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:56(title) -msgid "Operator requirements" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:57(para) -msgid "While the cloud user can be completely unaware of the underlying infrastructure of the cloud and its attributes, the operator must build and support the infrastructure for operating at scale. This presents a very demanding set of requirements for building such a cloud from the operator's perspective:" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:64(para) -msgid "Everything must be capable of automation, from compute hardware, storage hardware, and networking hardware to the installation and configuration of the supporting software. Manual processes are impractical in a massively scalable OpenStack design architecture." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:72(para) -msgid "The cloud operator requires that capital expenditure (CapEx) be minimized at all layers of the stack. Operators of massively scalable OpenStack clouds require the use of dependable commodity hardware and freely available open source software components to reduce deployment costs and operational expenses. Initiatives like OpenCompute (more information available at http://www.opencompute.org) provide additional information and pointers. To cut costs, many operators sacrifice redundancy; for example, they forgo redundant power supplies, network connections, and rack switches." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:87(para) -msgid "Companies operating a massively scalable OpenStack cloud also require that operational expenditures (OpEx) be minimized as much as possible. We recommend using cloud-optimized hardware when managing operational overhead. Some of the factors to consider include power, cooling, and the physical design of the chassis. Through customization, it is possible to optimize the hardware and systems for this type of workload because of the scale of these implementations."
-msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:99(para) -msgid "Massively scalable OpenStack clouds require extensive metering and monitoring functionality to maximize the operational efficiency by keeping the operator informed about the status and state of the infrastructure. This includes full scale metering of the hardware and software status. A corresponding framework of logging and alerting is also required to store and enable operations to act on the meters provided by the metering and monitoring solutions. The cloud operator also needs a solution that uses the data provided by the metering and monitoring solution to provide capacity planning and capacity trending analysis." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:114(para) -msgid "Invariably, massively scalable OpenStack clouds extend over several sites. Therefore, the user-operator requirements for a multi-site OpenStack architecture design are also applicable here. This includes various legal requirements; other jurisdictional legal or compliance requirements; image consistency-availability; storage replication and availability (both block and file/object storage); and authentication, authorization, and auditing (AAA). See for more details on requirements and considerations for multi-site OpenStack clouds." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml:128(para) -msgid "The design architecture of a massively scalable OpenStack cloud must address considerations around physical facilities such as space, floor weight, rack height and type, environmental considerations, power usage and power usage efficiency (PUE), and physical security." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:9(para) -msgid "In order to run efficiently at massive scale, automate as many of the operational processes as possible. Automation includes the configuration of provisioning, monitoring and alerting systems. Part of the automation process includes the capability to determine when human intervention is required and who should act. The objective is to increase the ratio of operational staff to running systems as much as possible in order to reduce maintenance costs. In a massively scaled environment, it is very difficult for staff to give each system individual care." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:19(para) -msgid "Configuration management tools such as Puppet and Chef enable operations staff to categorize systems into groups based on their roles and thus create configurations and system states that the provisioning system enforces. Systems that fall out of the defined state due to errors or failures are quickly removed from the pool of active nodes and replaced." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:26(para) -msgid "At large scale the resource cost of diagnosing failed individual systems is far greater than the cost of replacement. It is more economical to replace the failed system with a new system, provisioning and configuring it automatically and adding it to the pool of active nodes. 
By automating tasks that are labor-intensive, repetitive, and critical to operations, cloud operations teams can work more efficiently because fewer resources are required for these common tasks. Administrators are then free to tackle tasks that are not easy to automate and that have longer-term impacts on the business, for example, capacity planning." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:39(title) -msgid "The bleeding edge" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:40(para) -msgid "Running OpenStack at massive scale requires striking a balance between stability and features. For example, it might be tempting to run an older stable release branch of OpenStack to make deployments easier. However, when running at massive scale, known issues that may be of some concern or only have minimal impact in smaller deployments could become pain points. Recent releases may address well known issues. The OpenStack community can help resolve reported issues by applying the collective expertise of the OpenStack developers." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:49(para) -msgid "The number of organizations running at massive scales is a small proportion of the OpenStack community, therefore it is important to share related issues with the community and be a vocal advocate for resolving them. Some issues only manifest when operating at large scale, and the number of organizations able to duplicate and validate an issue is small, so it is important to document and dedicate resources to their resolution." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:57(para) -msgid "In some cases, the resolution to the problem is ultimately to deploy a more recent version of OpenStack. Alternatively, when you must resolve an issue in a production environment where rebuilding the entire environment is not an option, it is sometimes possible to deploy updates to specific underlying components in order to resolve issues or gain significant performance improvements. Although this may appear to expose the deployment to increased risk and instability, in many cases it could be an undiscovered issue." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:67(para) -msgid "We recommend building a development and operations organization that is responsible for creating desired features, diagnosing and resolving issues, and building the infrastructure for large scale continuous integration tests and continuous deployment. This helps catch bugs early and makes deployments faster and easier. In addition to development resources, we also recommend the recruitment of experts in the fields of message queues, databases, distributed systems, networking, cloud, and storage." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:77(title) -msgid "Growth and capacity planning" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:78(para) -msgid "An important consideration in running at massive scale is projecting growth and utilization trends in order to plan capital expenditures for the short and long term. 
Gather utilization meters for compute, network, and storage, along with historical records of these meters. While securing major anchor tenants can lead to rapid jumps in the utilization rates of all resources, the steady adoption of the cloud inside an organization or by consumers in a public offering also creates a steady trend of increased utilization." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:89(title) -msgid "Skills and training" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml:90(para) -msgid "Projecting growth for storage, networking, and compute is only one aspect of a growth plan for running OpenStack at massive scale. Growing and nurturing development and operational staff is an additional consideration. Sending team members to OpenStack conferences, meetup events, and encouraging active participation in the mailing lists and committees is a very important way to maintain skills and forge relationships in the community. For a list of OpenStack training providers in the marketplace, see: http://www.openstack.org/marketplace/training/." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:128(None) -msgid "@@image: '../figures/Massively_Scalable_Cells_+_regions_+_azs.png'; md5=87d08365fefde431d6d055daf17d7d0e" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:13(para) -msgid "Repurposing an existing OpenStack environment to be massively scalable is a formidable task. When building a massively scalable environment from the ground up, ensure you build the initial deployment with the same principles and choices that apply as the environment grows. For example, a good approach is to deploy the first site as a multi-site environment. This enables you to use the same deployment and segregation methods as the environment grows to separate locations across dedicated links or wide area networks. In a hyperscale cloud, scale trumps redundancy. Modify applications with this in mind, relying on the scale and homogeneity of the environment to provide reliability rather than redundant infrastructure provided by non-commodity hardware solutions." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:28(title) -msgid "Infrastructure segregation" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:29(para) -msgid "OpenStack services support massive horizontal scale. Be aware that this is not the case for the entire supporting infrastructure. This is particularly a problem for the database management systems and message queues that OpenStack services use for data storage and remote procedure call communications." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:34(para) -msgid "Traditional clustering techniques typically provide high availability and some additional scale for these environments. In the quest for massive scale, however, you must take additional steps to relieve the performance pressure on these components in order to prevent them from negatively impacting the overall performance of the environment. 
Ensure that all the components are in balance so that if the massively scalable environment fails, all the components are near maximum capacity and a single component is not causing the failure." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:43(para) -msgid "Regions segregate completely independent installations linked only by an Identity and Dashboard (optional) installation. Services have separate API endpoints for each region, and include separate database and queue installations. This exposes some awareness of the environment's fault domains to users and gives them the ability to ensure some degree of application resiliency while also imposing the requirement to specify which region to apply their actions to." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:52(para) -msgid "Environments operating at massive scale typically need their regions or sites subdivided further without exposing the requirement to specify the failure domain to the user. This provides the ability to further divide the installation into failure domains while also providing a logical unit for maintenance and the addition of new hardware. At hyperscale, instead of adding single compute nodes, administrators can add entire racks or even groups of racks at a time with each new addition of nodes exposed via one of the segregation concepts mentioned herein." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:62(para) -msgid "Cells provide the ability to subdivide the compute portion of an OpenStack installation, including regions, while still exposing a single endpoint. Each region has an API cell along with a number of compute cells where the workloads actually run. Each cell has its own database and message queue setup (ideally clustered), providing the ability to subdivide the load on these subsystems, improving overall performance." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:71(para) -msgid "Each compute cell provides a complete compute installation, complete with full database and queue installations, scheduler, conductor, and multiple compute hosts. The cells scheduler handles placement of user requests from the single API endpoint to a specific cell from those available. The normal filter scheduler then handles placement within the cell." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:78(para) -msgid "Unfortunately, Compute is the only OpenStack service that provides good support for cells. In addition, cells do not adequately support some standard OpenStack functionality such as security groups and host aggregates. Due to their relative newness and specialized use, cells receive relatively little testing in the OpenStack gate. Despite these issues, cells play an important role in well known OpenStack installations operating at massive scale, such as those at CERN and Rackspace." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:88(title) -msgid "Host aggregates" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:89(para) -msgid "Host aggregates enable partitioning of OpenStack Compute deployments into logical groups for load balancing and instance distribution. 
You can also use host aggregates to further partition an availability zone. Consider a cloud which might use host aggregates to partition an availability zone into groups of hosts that either share common resources, such as storage and network, or have a special property, such as trusted computing hardware. You cannot target host aggregates explicitly. Instead, select instance flavors that map to host aggregate metadata. These flavors target host aggregates implicitly." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:101(title) -msgid "Availability zones" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:102(para) -msgid "Availability zones provide another mechanism for subdividing an installation or region. They are, in effect, host aggregates exposed for (optional) explicit targeting by users." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:106(para) -msgid "Unlike cells, availability zones do not have their own database server or queue broker but represent an arbitrary grouping of compute nodes. Typically, nodes are grouped into availability zones using a shared failure domain based on a physical characteristic such as a shared power source or physical network connections. Users can target exposed availability zones; however, this is not a requirement. An alternative approach is to set a default availability zone to schedule instances to a non-default availability zone of nova." -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:116(title) -msgid "Segregation example" -msgstr "" - -#: ./doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml:117(para) -msgid "In this example the cloud is divided into two regions, one for each site, with two availability zones in each based on the power layout of the data centers. A number of host aggregates enable targeting of virtual machine instances using flavors, that require special capabilities shared by the target hosts such as SSDs, 10GbE networks, or GPU cards." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:13(para) -msgid "Hardware selection involves three key areas:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:25(para) -msgid "Hardware for a general purpose OpenStack cloud should reflect a cloud with no pre-defined usage model, designed to run a wide variety of applications with varying resource usage requirements. These applications include any of the following:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:32(para) -msgid "RAM-intensive" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:37(para) -msgid "CPU-intensive" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:42(para) -msgid "Storage-intensive" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:47(para) -msgid "Certain hardware form factors may better suit a general purpose OpenStack cloud due to the requirement for equal (or nearly equal) balance of resources. 
Server hardware must provide the following:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:53(para) -msgid "Equal (or nearly equal) balance of compute capacity (RAM and CPU)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:58(para) -msgid "Network capacity (number and speed of links)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:63(para) -msgid "Storage capacity (gigabytes or terabytes, as well as Input/Output Operations Per Second (IOPS))" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:69(para) -msgid "Evaluate server hardware around four conflicting dimensions:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:83(para) -msgid "The number of CPU cores, amount of RAM, or amount of deliverable storage." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:90(para) -msgid "Limit of additional resources you can add to a server." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:97(para) -msgid "The relative purchase price of the hardware weighted against the level of design effort needed to build the system." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:103(para) -msgid "Increasing server density means sacrificing resource capacity or expandability; however, increasing resource capacity and expandability increases cost and decreases server density. As a result, determining the best server hardware for a general purpose OpenStack architecture means understanding how the choice of form factor will impact the rest of the design. The following list outlines the form factors to choose from:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:113(para) -msgid "Blade servers typically support dual-socket multi-core CPUs. Blades also offer outstanding density." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:118(para) -msgid "1U rack-mounted servers occupy only a single rack unit. Their benefits include high density, support for dual-socket multi-core CPUs, and support for reasonable RAM amounts. This form factor offers limited storage capacity, limited network capacity, and limited expandability." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:126(para) -msgid "2U rack-mounted servers offer the expanded storage and networking capacity that 1U servers tend to lack, but with a corresponding decrease in server density (half the density offered by 1U rack-mounted servers)." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:133(para) -msgid "Larger rack-mounted servers, such as 4U servers, will tend to offer even greater CPU capacity, often supporting four or even eight CPU sockets. These servers often have much greater expandability, so they provide the best option for upgradability. This means, however, that the servers have a much lower server density and a much greater hardware cost." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:142(para) -msgid "Sled servers are rack-mounted servers that support multiple independent servers in a single 2U or 3U enclosure. 
This form factor offers increased density over typical 1U-2U rack-mounted servers but tends to suffer from limitations in the amount of storage or network capacity each individual server supports." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:151(para) -msgid "The best form factor for server hardware supporting a general purpose OpenStack cloud is driven by outside business and cost factors. No single reference architecture applies to all implementations; the decision must flow from user requirements, technical considerations, and operational considerations. Here are some of the key factors that influence the selection of server hardware:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:163(para) -msgid "Sizing is an important consideration for a general purpose OpenStack cloud. The expected or anticipated number of instances that each hypervisor can host is a common meter used in sizing the deployment. The selected server hardware needs to support the expected or anticipated instance density." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:175(para) -msgid "Physical data centers have limited physical space, power, and cooling. The number of hosts (or hypervisors) that can be fitted into a given metric (rack, rack unit, or floor tile) is another important method of sizing. Floor weight is an often overlooked consideration. The data center floor must be able to support the weight of the proposed number of hosts within a rack or set of racks. These factors need to be applied as part of the host density calculation and server hardware selection." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:188(term) -msgid "Power density" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:190(para) -msgid "Data centers have a specified amount of power fed to a given rack or set of racks. Older data centers may have a power density as low as 20 amps per rack, while more recent data centers can be architected to support power densities as high as 120 amps per rack. The selected server hardware must take power density into account." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:200(term) -msgid "Network connectivity" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:202(para) -msgid "The selected server hardware must have the appropriate number of network connections, as well as the right type of network connections, in order to support the proposed architecture. Ensure that, at a minimum, there are at least two diverse network connections coming into each rack." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:212(para) -msgid "The selection of form factors or architectures affects the selection of server hardware. Ensure that the selected server hardware is configured to support enough storage capacity (or storage expandability) to match the requirements of the selected scale-out storage solution. Similarly, the network architecture impacts the server hardware selection and vice versa."
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:220(title) -msgid "Selecting storage hardware" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:221(para) -msgid "Determine storage hardware architecture by selecting specific storage architecture. Determine the selection of storage architecture by evaluating possible solutions against the critical factors, the user requirements, technical considerations, and operational considerations. Incorporate the following facts into your storage architecture:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:231(para) -msgid "Storage can be a significant portion of the overall system cost. For an organization that is concerned with vendor support, a commercial storage solution is advisable, although it comes with a higher price tag. If initial capital expenditure requires minimization, designing a system based on commodity hardware would apply. The trade-off is potentially higher support costs and a greater risk of incompatibility and interoperability issues." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:245(para) -msgid "Scalability, along with expandability, is a major consideration in a general purpose OpenStack cloud. It might be difficult to predict the final intended size of the implementation as there are no established usage patterns for a general purpose cloud. It might become necessary to expand the initial deployment in order to accommodate growth and user demand." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:257(para) -msgid "Expandability is a major architecture factor for storage solutions with general purpose OpenStack cloud. A storage solution that expands to 50PB is considered more expandable than a solution that only scales to 10PB. This meter is related to scalability, which is the measure of a solution's performance as it expands." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:267(para) -msgid "Using a scale-out storage solution with direct-attached storage (DAS) in the servers is well suited for a general purpose OpenStack cloud. Cloud services requirements determine your choice of scale-out solution. You need to determine if a single, highly expandable and highly vertical, scalable, centralized storage array is suitable for your design. After determining an approach, select the storage hardware based on this criteria." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:275(para) -msgid "This list expands upon the potential impacts for including a particular storage architecture (and corresponding storage hardware) into the design for a general purpose OpenStack cloud:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:283(para) -msgid "Ensure that, if storage protocols other than Ethernet are part of the storage solution, the appropriate hardware has been selected. If a centralized storage array is selected, ensure that the hypervisor will be able to connect to that storage array for image storage." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:292(term) -msgid "Usage" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:294(para) -msgid "How the particular storage architecture will be used is critical for determining the architecture. Some of the configurations that will influence the architecture include whether it will be used by the hypervisors for ephemeral instance storage or if OpenStack Object Storage will use it for object storage." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:303(term) -msgid "Instance and image locations" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:305(para) -msgid "Where instances and images will be stored will influence the architecture." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:313(para) -msgid "If the solution is a scale-out storage architecture that includes DAS, it will affect the server hardware selection. This could ripple into the decisions that affect host density, instance density, power density, OS-hypervisor, management tools and others." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:322(para) -msgid "General purpose OpenStack cloud has multiple options. The key factors that will have an influence on selection of storage hardware for a general purpose OpenStack cloud are as follows:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:328(term) -msgid "Capacity" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:330(para) -msgid "Hardware resources selected for the resource nodes should be capable of supporting enough storage for the cloud services. Defining the initial requirements and ensuring the design can support adding capacity is important. Hardware nodes selected for object storage should be capable of support a large number of inexpensive disks with no reliance on RAID controller cards. Hardware nodes selected for block storage should be capable of supporting high speed storage solutions and RAID controller cards to provide performance and redundancy to storage at a hardware level. Selecting hardware RAID controllers that automatically repair damaged arrays will assist with the replacement and repair of degraded or deleted storage devices." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:349(para) -msgid "Disks selected for object storage services do not need to be fast performing disks. We recommend that object storage nodes take advantage of the best cost per terabyte available for storage. Contrastingly, disks chosen for block storage services should take advantage of performance boosting features that may entail the use of SSDs or flash storage to provide high performance block storage pools. Storage performance of ephemeral disks used for instances should also be taken into consideration." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:361(term) -msgid "Fault tolerance" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:363(para) -msgid "Object storage resource nodes have no requirements for hardware fault tolerance or RAID controllers. 
It is not necessary to plan for fault tolerance within the object storage hardware because the object storage service provides replication between zones as a feature of the service. Block storage nodes, compute nodes, and cloud controllers should all have fault tolerance built in at the hardware level by making use of hardware RAID controllers and varying levels of RAID configuration. The level of RAID chosen should be consistent with the performance and availability requirements of the cloud." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:383(para) -msgid "Selecting network architecture determines which network hardware will be used. Networking software is determined by the selected networking hardware." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:386(para) -msgid "There are more subtle design impacts that need to be considered. The selection of certain networking hardware (and the networking software) affects the management tools that can be used. There are exceptions to this; the rise of open networking software that supports a range of networking hardware means that there are instances where the relationship between networking hardware and networking software are not as tightly defined." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:393(para) -msgid "Some of the key considerations that should be included in the selection of networking hardware include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:399(para) -msgid "The design will require networking hardware that has the requisite port count." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:406(para) -msgid "The network design will be affected by the physical space that is required to provide the requisite port count. A higher port density is preferred, as it leaves more rack space for compute or storage components that may be required by the design. This can also lead into concerns about fault domains and power density that should be considered. Higher density switches are more expensive and should also be considered, as it is important not to over design the network if it is not required." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:430(para) -msgid "The level of network hardware redundancy required is influenced by the user requirements for high availability and cost considerations. Network redundancy can be achieved by adding redundant power supplies or paired switches. If this is a requirement, the hardware will need to support this configuration." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:441(para) -msgid "Ensure that the physical data center provides the necessary power for the selected network hardware." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:445(para) -msgid "This may be an issue for spine switches in a leaf and spine fabric, or end of row (EoR) switches." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:452(para) -msgid "There is no single best practice architecture for the networking hardware supporting a general purpose OpenStack cloud that will apply to all implementations. 
Some of the key factors that will have a strong influence on selection of networking hardware include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:461(para) -msgid "All nodes within an OpenStack cloud require network connectivity. In some cases, nodes require access to more than one network segment. The design must encompass sufficient network capacity and bandwidth to ensure that all communications within the cloud, both north-south and east-west traffic have sufficient resources available." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:474(para) -msgid "The network design should encompass a physical and logical network design that can be easily expanded upon. Network hardware should offer the appropriate types of interfaces and speeds that are required by the hardware nodes." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:482(term) ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:575(title) -msgid "Availability" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:484(para) -msgid "To ensure that access to nodes within the cloud is not interrupted, we recommend that the network architecture identify any single points of failure and provide some level of redundancy or fault tolerance. With regard to the network infrastructure itself, this often involves use of networking protocols such as LACP, VRRP or others to achieve a highly available network connection. In addition, it is important to consider the networking implications on API availability. In order to ensure that the APIs, and potentially other services in the cloud are highly available, we recommend you design a load balancing solution within the network architecture to accommodate for these requirements." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:505(para) -msgid "Software selection for a general purpose OpenStack architecture design needs to include these three areas:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:522(para) -msgid "The operating system (OS) and hypervisor have a significant impact on the overall design. Selecting a particular operating system and hypervisor can directly affect server hardware selection. Make sure the storage hardware and topology support the selected operating system and hypervisor combination. Also ensure the networking hardware selection and topology will work with the chosen operating system and hypervisor combination." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:530(para) -msgid "Some areas that could be impacted by the selection of OS and hypervisor include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:536(para) -msgid "Selecting a commercially supported hypervisor, such as Microsoft Hyper-V, will result in a different cost model rather than community-supported open source hypervisors including KVM, Kinstance or Xen. When comparing open source OS solutions, choosing Ubuntu over Red Hat (or vice versa) will have an impact on cost due to support contracts." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:550(para) -msgid "Depending on the selected hypervisor, staff should have the appropriate training and knowledge to support the selected OS and hypervisor combination. If they do not, training will need to be provided which could have a cost impact on the design." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:561(para) -msgid "The management tools used for Ubuntu and Kinstance differ from the management tools for VMware vSphere. Although both OS and hypervisor combinations are supported by OpenStack, there will be very different impacts to the rest of the design as a result of the selection of one combination versus the other." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:573(para) -msgid "Ensure that selected OS and hypervisor combinations meet the appropriate scale and performance requirements. The chosen architecture will need to meet the targeted instance-host ratios with the selected OS-hypervisor combinations." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:583(para) -msgid "Ensure that the design can accommodate regular periodic installations of application security patches while maintaining required workloads. The frequency of security patches for the proposed OS-hypervisor combination will have an impact on performance and the patch installation process could affect maintenance windows." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:595(para) -msgid "Determine which features of OpenStack are required. This will often determine the selection of the OS-hypervisor combination. Some features are only available with specific operating systems or hypervisors." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:604(para) -msgid "You will need to consider how the OS and hypervisor combination interactions with other operating systems and hypervisors, including other software solutions. Operational troubleshooting tools for one OS-hypervisor combination may differ from the tools used for another OS-hypervisor combination and, as a result, the design will need to address if the two sets of tools need to interoperate." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:618(para) -msgid "Selecting which OpenStack components are included in the overall design is important. Some OpenStack components, like compute and Image service, are required in every architecture. Other components, like Orchestration, are not always required." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:622(para) -msgid "Excluding certain OpenStack components can limit or constrain the functionality of other components. For example, if the architecture includes Orchestration but excludes Telemetry, then the design will not be able to take advantage of Orchestrations' auto scaling functionality. It is important to research the component interdependencies in conjunction with the technical requirements before deciding on the final architecture." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:632(para) -msgid "OpenStack Networking (neutron) provides a wide variety of networking services for instances. 
There are many additional networking software packages that can be useful when managing OpenStack components. Some examples include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:638(para) -msgid "Software to provide load balancing" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:643(para) -msgid "Network redundancy protocols" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:648(para) -msgid "Routing daemons" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:653(para) -msgid "Some of these software packages are described in more detail in the OpenStack High Availability Guide (refer to the Network controller cluster stack chapter of the OpenStack High Availability Guide)." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:659(para) -msgid "For a general purpose OpenStack cloud, the OpenStack infrastructure components need to be highly available. If the design does not include hardware load balancing, networking software packages like HAProxy will need to be included." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:668(para) -msgid "Selected supplemental software solution impacts and affects the overall OpenStack cloud design. This includes software for providing clustering, logging, monitoring and alerting." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:672(para) -msgid "Inclusion of clustering software, such as Corosync or Pacemaker, is determined primarily by the availability requirements. The impact of including (or not including) these software packages is primarily determined by the availability of the cloud infrastructure and the complexity of supporting the configuration after it is deployed. The OpenStack High Availability Guide provides more details on the installation and configuration of Corosync and Pacemaker, should these packages need to be included in the design." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:683(para) -msgid "Requirements for logging, monitoring, and alerting are determined by operational considerations. Each of these sub-categories includes a number of various options." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:686(para) -msgid "If these software packages are required, the design must account for the additional resource consumption (CPU, RAM, storage, and network bandwidth). Some other potential design impacts include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:692(para) -msgid "OS-hypervisor combination: Ensure that the selected logging, monitoring, or alerting tools support the proposed OS-hypervisor combination." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_architecture_general_purpose.xml:706(para) -msgid "OpenStack components often require access to back-end database services to store state and configuration information. Selecting an appropriate back-end database that satisfies the availability and fault tolerance requirements of the OpenStack services is required. OpenStack services supports connecting to a database that is supported by the SQLAlchemy python drivers, however, most common database deployments make use of MySQL or variations of it. 
We recommend that the database, which provides back-end services within a general purpose cloud, be made highly available using an available technology that can accomplish that goal." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:9(para) -msgid "In the planning and design phases of the build out, it is important to include the operations function. Operational factors affect the design choices for a general purpose cloud, and operations staff are often tasked with the maintenance of cloud environments for larger installations." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:14(para) -msgid "Expectations set by the Service Level Agreements (SLAs) directly affect knowing when and where you should implement redundancy and high availability. SLAs are contractual obligations that provide assurances for service availability. They define the levels of availability that drive the technical design, often with penalties for not meeting contractual obligations." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:20(para) -msgid "SLA terms that affect design include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:23(para) -msgid "API availability guarantees implying multiple infrastructure services and highly available load balancers." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:28(para) -msgid "Network uptime guarantees affecting switch design, which might require redundant switching and power." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:33(para) -msgid "Factor networking security policy requirements into your deployments." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:39(title) -msgid "Support and maintainability" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:40(para) -msgid "To be able to support and maintain an installation, OpenStack cloud management requires operations staff to understand the design architecture content. The operations and engineering staff skill level, and level of separation, are dependent on the size and purpose of the installation. Large cloud service providers, or telecom providers, are more likely to be managed by specially trained, dedicated operations organizations. Smaller implementations are more likely to rely on support staff that need to take on combined engineering, design and operations functions." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:49(para) -msgid "Maintaining OpenStack installations requires a variety of technical skills. You may want to consider using a third-party management company with special expertise in managing OpenStack deployments." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:57(para) -msgid "OpenStack clouds require appropriate monitoring platforms to ensure errors are caught and managed appropriately. 
Specific meters that are critically important to monitor include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:72(para) -msgid "Leveraging existing monitoring systems is an effective check to ensure OpenStack environments can be monitored." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:77(title) -msgid "Downtime" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:78(para) -msgid "To effectively run cloud installations, initial downtime planning includes creating processes and architectures that support the following:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:83(para) -msgid "Planned (maintenance)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:88(para) -msgid "Unplanned (system faults)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:93(para) -msgid "Resiliency of overall system and individual components are going to be dictated by the requirements of the SLA, meaning designing for high availability (HA) can have cost ramifications." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:100(para) -msgid "Capacity constraints for a general purpose cloud environment include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:104(para) -msgid "Compute limits" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:109(para) -msgid "Storage limits" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:114(para) -msgid "A relationship exists between the size of the compute environment and the supporting OpenStack infrastructure controller nodes requiring support." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:117(para) -msgid "Increasing the size of the supporting compute environment increases the network traffic and messages, adding load to the controller or networking nodes. Effective monitoring of the environment will help with capacity decisions on scaling." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:121(para) -msgid "Compute nodes automatically attach to OpenStack clouds, resulting in a horizontally scaling process when adding extra compute capacity to an OpenStack cloud. Additional processes are required to place nodes into appropriate availability zones and host aggregates. When adding additional compute nodes to environments, ensure identical or functional compatible CPUs are used, otherwise live migration features will break. It is necessary to add rack capacity or network switches as scaling out compute hosts directly affects network and datacenter resources." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:129(para) -msgid "Assessing the average workloads and increasing the number of instances that can run within the compute environment by adjusting the overcommit ratio is another option. It is important to remember that changing the CPU overcommit ratio can have a detrimental effect and cause a potential increase in a noisy neighbor. 
The additional risk of increasing the overcommit ratio is more instances failing when a compute host fails." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:135(para) -msgid "Compute host components can also be upgraded to account for increases in demand; this is known as vertical scaling. Upgrading CPUs with more cores, or increasing the overall server memory, can add extra needed capacity depending on whether the running applications are more CPU intensive or memory intensive." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:141(para) -msgid "Insufficient disk capacity could also have a negative effect on overall performance including CPU and memory usage. Depending on the back-end architecture of the OpenStack Block Storage layer, capacity includes adding disk shelves to enterprise storage systems or installing additional block storage nodes. Upgrading directly attached storage installed in compute hosts, and adding capacity to the shared storage for additional ephemeral storage to instances, may be necessary." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_operational_considerations_general_purpose.xml:149(para) -msgid "For a deeper discussion on many of these topics, refer to the OpenStack Operations Guide." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:13(para) -msgid "General purpose clouds are expected to include these base services:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:32(para) -msgid "Each of these services have different resource requirements. As a result, you must make design decisions relating directly to the service, as well as provide a balanced infrastructure for all services." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:36(para) -msgid "Take into consideration the unique aspects of each service, as individual characteristics and service mass can impact the hardware selection process. Hardware designs should be generated for each of the services." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:40(para) -msgid "Hardware decisions are also made in relation to network architecture and facilities planning. These factors play heavily into the overall architecture of an OpenStack cloud." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:45(title) -msgid "Compute resource design" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:46(para) -msgid "When designing compute resource pools, a number of factors can impact your design decisions. Factors such as number of processors, amount of memory, and the quantity of storage required for each hypervisor must be taken into account." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:50(para) -msgid "You will also need to decide whether to provide compute resources in a single pool or in multiple pools. In most cases, multiple pools of resources can be allocated and addressed on demand. A compute design that allocates multiple pools of resources makes best use of application resources, and is commonly referred to as bin packing." 
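To make the bin-packing idea above concrete, the following first-fit-decreasing sketch in Python packs per-instance vCPU demands onto hosts that share a common hardware design. It is only an illustration of the packing concept, not the algorithm used by the Compute scheduler, and the flavor sizes and host capacity are invented:

    # First-fit-decreasing placement sketch. All sizes are illustrative.
    HOST_VCPUS = 48                                 # e.g. 24 threads at 2:1 overcommit
    requests = [8, 4, 4, 2, 2, 2, 1, 16, 8, 4]      # vCPUs per requested instance

    def first_fit_decreasing(demands, capacity):
        free = []          # remaining capacity per host
        placement = []     # (demand, host index) pairs
        for d in sorted(demands, reverse=True):
            for i, remaining in enumerate(free):
                if remaining >= d:
                    free[i] -= d
                    placement.append((d, i))
                    break
            else:
                free.append(capacity - d)            # open a new host
                placement.append((d, len(free) - 1))
        return free, placement

    free, plan = first_fit_decreasing(requests, HOST_VCPUS)
    print(f"hosts used: {len(free)}; placement: {plan}")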
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:56(para) -msgid "In a bin packing design, each independent resource pool provides service for specific flavors. This helps to ensure that, as instances are scheduled onto compute hypervisors, each independent node's resources will be allocated in a way that makes the most efficient use of the available hardware. Bin packing also requires a common hardware design, with all hardware nodes within a compute resource pool sharing a common processor, memory, and storage layout. This makes it easier to deploy, support, and maintain nodes throughout their life cycle." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:64(para) -msgid "An overcommit ratio is the ratio of available virtual resources to available physical resources. This ratio is configurable for CPU and memory. The default CPU overcommit ratio is 16:1, and the default memory overcommit ratio is 1.5:1. Determining the tuning of the overcommit ratios during the design phase is important as it has a direct impact on the hardware layout of your compute nodes." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:70(para) -msgid "When selecting a processor, compare features and performance characteristics. Some processors include features specific to virtualized compute hosts, such as hardware-assisted virtualization, and technology related to memory paging (also known as EPT shadowing). These types of features can have a significant impact on the performance of your virtual machine." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:75(para) -msgid "You will also need to consider the compute requirements of non-hypervisor nodes (sometimes referred to as resource nodes). This includes controller, object storage, and block storage nodes, and networking services." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:78(para) -msgid "The number of processor cores and threads impacts the number of worker threads which can be run on a resource node. Design decisions must relate directly to the service being run on it, as well as provide a balanced infrastructure for all services." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:82(para) -msgid "Workload can be unpredictable in a general purpose cloud, so consider including the ability to add additional compute resource pools on demand. In some cases, however, the demand for certain instance types or flavors may not justify individual hardware design. In either case, start by allocating hardware designs that are capable of servicing the most common instance requests. If you want to add additional hardware to the overall architecture, this can be done later." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:92(title) -msgid "Designing network resources" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:93(para) -msgid "OpenStack clouds generally have multiple network segments, with each segment providing access to particular resources. The network services themselves also require network communication paths which should be separated from the other networks. 
When designing network services for a general purpose cloud, plan for either a physical or logical separation of network segments used by operators and tenants. You can also create an additional network segment for access to internal services such as the message bus and database used by various services. Segregating these services onto separate networks helps to protect sensitive data and protects against unauthorized access to services." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:103(para) -msgid "Choose a networking service based on the requirements of your instances. The architecture and design of your cloud will impact whether you choose OpenStack Networking(neutron), or legacy networking (nova-network)." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:110(para) -msgid "The legacy networking (nova-network) service is primarily a layer-2 networking service that functions in two modes, which use VLANs in different ways. In a flat network mode, all network hardware nodes and devices throughout the cloud are connected to a single layer-2 network segment that provides access to application data." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:116(para) -msgid "When the network devices in the cloud support segmentation using VLANs, legacy networking can operate in the second mode. In this design model, each tenant within the cloud is assigned a network subnet which is mapped to a VLAN on the physical network. It is especially important to remember the maximum number of 4096 VLANs which can be used within a spanning tree domain. This places a hard limit on the amount of growth possible within the data center. When designing a general purpose cloud intended to support multiple tenants, we recommend the use of legacy networking with VLANs, and not in flat network mode." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:130(para) -msgid "Another consideration regarding network is the fact that legacy networking is entirely managed by the cloud operator; tenants do not have control over network resources. If tenants require the ability to manage and create network resources such as network segments and subnets, it will be necessary to install the OpenStack Networking service to provide network access to instances." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:139(term) -msgid "OpenStack Networking (neutron)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:141(para) -msgid "OpenStack Networking (neutron) is a first class networking service that gives full control over creation of virtual network resources to tenants. This is often accomplished in the form of tunneling protocols which will establish encapsulated communication paths over existing network infrastructure in order to segment tenant traffic. These methods vary depending on the specific implementation, but some of the more common methods include tunneling over GRE, encapsulating with VXLAN, and VLAN tags." 
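A quick comparison of identifier spaces shows why the 4096-VLAN ceiling mentioned above is a hard growth limit for VLAN-based segmentation, and why tunnelling techniques such as VXLAN relieve it; the per-tenant network count in this Python sketch is an arbitrary assumption:

    # Compare the segmentation-ID spaces behind the VLAN and VXLAN choices above.
    VLAN_ID_BITS = 12      # 802.1Q VLAN ID field
    VXLAN_VNI_BITS = 24    # VXLAN Network Identifier field

    usable_vlans = 2 ** VLAN_ID_BITS - 2     # IDs 0 and 4095 are reserved
    usable_vnis = 2 ** VXLAN_VNI_BITS

    networks_per_tenant = 3                  # arbitrary illustration
    print("tenants supportable with one VLAN per network:", usable_vlans // networks_per_tenant)
    print("tenants supportable with one VNI per network: ", usable_vnis // networks_per_tenant)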
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:153(para) -msgid "We recommend you design at least three network segments:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:156(para) -msgid "The first segment is a public network, used for access to REST APIs by tenants and operators. The controller nodes and swift proxies are the only devices connecting to this network segment. In some cases, this network might also be serviced by hardware load balancers and other network devices." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:163(para) -msgid "The second segment is used by administrators to manage hardware resources. Configuration management tools also use this for deploying software and services onto new hardware. In some cases, this network segment might also be used for internal services, including the message bus and database services. This network needs to communicate with every hardware node. Due to the highly sensitive nature of this network segment, you also need to secure this network from unauthorized access." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:172(para) -msgid "The third network segment is used by applications and consumers to access the physical network, and for users to access applications. This network is segregated from the one used to access the cloud APIs and is not capable of communicating directly with the hardware resources in the cloud. Compute resource nodes and network gateway services which allow application data to access the physical network from outside of the cloud need to communicate on this network segment." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:184(title) -msgid "Designing OpenStack Object Storage" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:185(para) -msgid "When designing hardware resources for OpenStack Object Storage, the primary goal is to maximize the amount of storage in each resource node while also ensuring that the cost per terabyte is kept to a minimum. This often involves utilizing servers which can hold a large number of spinning disks. Whether choosing to use 2U server form factors with directly attached storage or an external chassis that holds a larger number of drives, the main goal is to maximize the storage available in each node." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:195(para) -msgid "We do not recommended investing in enterprise class drives for an OpenStack Object Storage cluster. The consistency and partition tolerance characteristics of OpenStack Object Storage ensures that data stays up to date and survives hardware faults without the use of any specialized data replication devices." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:202(para) -msgid "One of the benefits of OpenStack Object Storage is the ability to mix and match drives by making use of weighting within the swift ring. When designing your swift storage cluster, we recommend making use of the most cost effective storage solution available at the time." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:207(para) -msgid "To achieve durability and availability of data stored as objects it is important to design object storage resource pools to ensure they can provide the suggested availability. Considering rack-level and zone-level designs to accommodate the number of replicas configured to be stored in the Object Storage service (the default number of replicas is three) is important when designing beyond the hardware node level. Each replica of data should exist in its own availability zone with its own power, cooling, and network resources available to service that specific zone." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:216(para) -msgid "Object storage nodes should be designed so that the number of requests does not hinder the performance of the cluster. The object storage service is a chatty protocol, therefore making use of multiple processors that have higher core counts will ensure the IO requests do not inundate the server." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:224(title) -msgid "Designing OpenStack Block Storage" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:225(para) -msgid "When designing OpenStack Block Storage resource nodes, it is helpful to understand the workloads and requirements that will drive the use of block storage in the cloud. We recommend designing block storage pools so that tenants can choose appropriate storage solutions for their applications. By creating multiple storage pools of different types, in conjunction with configuring an advanced storage scheduler for the block storage service, it is possible to provide tenants with a large catalog of storage services with a variety of performance levels and redundancy options." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:234(para) -msgid "Block storage also takes advantage of a number of enterprise storage solutions. These are addressed via a plug-in driver developed by the hardware vendor. A large number of enterprise storage plug-in drivers ship out-of-the-box with OpenStack Block Storage (and many more available via third party channels). General purpose clouds are more likely to use directly attached storage in the majority of block storage nodes, deeming it necessary to provide additional levels of service to tenants which can only be provided by enterprise class storage solutions." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:243(para) -msgid "Redundancy and availability requirements impact the decision to use a RAID controller card in block storage nodes. The input-output per second (IOPS) demand of your application will influence whether or not you should use a RAID controller, and which level of RAID is required. Making use of higher performing RAID volumes is suggested when considering performance. However, where redundancy of block storage volumes is more important we recommend making use of a redundant RAID configuration such as RAID 5 or RAID 6. Some specialized features, such as automated replication of block storage volumes, may require the use of third-party plug-ins and enterprise block storage solutions in order to provide the high demand on storage. 
Furthermore, where extreme performance is a requirement it may also be necessary to make use of high speed SSD disk drives' high performing flash storage solutions." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:262(para) -msgid "The software selection process plays a large role in the architecture of a general purpose cloud. The following have a large impact on the design of the cloud:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:267(para) -msgid "Choice of operating system" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:272(para) -msgid "Selection of OpenStack software components" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:277(para) -msgid "Choice of hypervisor" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:282(para) -msgid "Selection of supplemental software" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:287(para) -msgid "Operating system (OS) selection plays a large role in the design and architecture of a cloud. There are a number of OSes which have native support for OpenStack including:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:292(para) -msgid "Ubuntu" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:297(para) -msgid "Red Hat Enterprise Linux (RHEL)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:302(para) -msgid "CentOS" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:307(para) -msgid "SUSE Linux Enterprise Server (SLES)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:313(para) -msgid "Native support is not a constraint on the choice of OS; users are free to choose just about any Linux distribution (or even Microsoft Windows) and install OpenStack directly from source (or compile their own packages). However, many organizations will prefer to install OpenStack from distribution-supplied packages or repositories (although using the distribution vendor's OpenStack packages might be a requirement for support)." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:322(para) -msgid "OS selection also directly influences hypervisor selection. A cloud architect who selects Ubuntu, RHEL, or SLES has some flexibility in hypervisor; KVM, Xen, and LXC are supported virtualization methods available under OpenStack Compute (nova) on these Linux distributions. However, a cloud architect who selects Hyper-V is limited to Windows Servers. Similarly, a cloud architect who selects XenServer is limited to the CentOS-based dom0 operating system provided with XenServer." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:330(para) -msgid "The primary factors that play into OS-hypervisor selection include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:336(para) -msgid "The selection of OS-hypervisor combination first and foremost needs to support the user requirements." 
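The operating-system and hypervisor coupling described above can be restated as a small lookup table; the Python sketch below only captures the combinations named in this section, so it is a simplification and not a substitute for the OpenStack Hypervisor Support Matrix:

    # Simplified restatement of the OS/hypervisor pairings discussed above.
    OS_HYPERVISORS = {
        "Ubuntu": {"KVM", "Xen", "LXC"},
        "RHEL": {"KVM", "Xen", "LXC"},
        "SLES": {"KVM", "Xen", "LXC"},
        "Windows Server": {"Hyper-V"},
        "XenServer dom0 (CentOS-based)": {"XenServer"},
    }

    def hosts_for(hypervisor: str):
        """Operating systems from the table above that can run the hypervisor."""
        return sorted(os_name for os_name, hvs in OS_HYPERVISORS.items()
                      if hypervisor in hvs)

    print("KVM can be hosted on:", hosts_for("KVM"))
    print("Hyper-V can be hosted on:", hosts_for("Hyper-V"))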
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:342(term) -msgid "Support" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:344(para) -msgid "The selected OS-hypervisor combination needs to be supported by OpenStack." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:351(para) -msgid "The OS-hypervisor needs to be interoperable with other features and services in the OpenStack design in order to meet the user requirements." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:362(para) -msgid "OpenStack supports a wide variety of hypervisors, one or more of which can be used in a single cloud. These hypervisors include:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:367(para) -msgid "KVM (and QEMU)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:370(para) -msgid "XCP/XenServer" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:373(para) -msgid "vSphere (vCenter and ESXi)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:376(para) -msgid "Hyper-V" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:379(para) -msgid "LXC" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:382(para) -msgid "Docker" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:385(para) -msgid "Bare-metal" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:388(para) -msgid "A complete list of supported hypervisors and their capabilities can be found at OpenStack Hypervisor Support Matrix." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:392(para) -msgid "We recommend general purpose clouds use hypervisors that support the most general purpose use cases, such as KVM and Xen. More specific hypervisors should be chosen to account for specific functionality or a supported feature requirement. In some cases, there may also be a mandated requirement to run software on a certified hypervisor including solutions from VMware, Microsoft, and Citrix." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:399(para) -msgid "The features offered through the OpenStack cloud platform determine the best choice of a hypervisor. Each hypervisor has their own hardware requirements which may affect the decisions around designing a general purpose cloud." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:403(para) -msgid "In a mixed hypervisor environment, specific aggregates of compute resources, each with defined capabilities, enable workloads to utilize software and hardware specific to their particular requirements. This functionality can be exposed explicitly to the end user, or accessed through defined metadata within a particular flavor of an instance." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:413(para) -msgid "A general purpose OpenStack cloud design should incorporate the core OpenStack services to provide a wide range of services to end-users. 
The OpenStack core services recommended in a general purpose cloud are:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:419(para) -msgid "OpenStack Compute (nova)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:423(para) -msgid "OpenStack Networking (neutron)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:427(para) -msgid "OpenStack Image service (glance)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:431(para) -msgid "OpenStack Identity (keystone)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:435(para) -msgid "OpenStack dashboard (horizon)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:439(para) -msgid "Telemetry (ceilometer)" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:443(para) -msgid "A general purpose cloud may also include OpenStack Object Storage (swift). OpenStack Block Storage (cinder). These may be selected to provide storage to applications and instances." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:453(para) -msgid "A general purpose OpenStack deployment consists of more than just OpenStack-specific components. A typical deployment involves services that provide supporting functionality, including databases and message queues, and may also involve software to provide high availability of the OpenStack environment. Design decisions around the underlying message queue might affect the required number of controller services, as well as the technology to provide highly resilient database functionality, such as MariaDB with Galera. In such a scenario, replication of services relies on quorum." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:463(para) -msgid "Where many general purpose deployments use hardware load balancers to provide highly available API access and SSL termination, software solutions, for example HAProxy, can also be considered. It is vital to ensure that such software implementations are also made highly available. High availability can be achieved by using software such as Keepalived or Pacemaker with Corosync. Pacemaker and Corosync can provide active-active or active-passive highly available configuration depending on the specific service in the OpenStack environment. Using this software can affect the design as it assumes at least a 2-node controller infrastructure where one of those nodes may be running certain services in standby mode." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:476(para) -msgid "Memcached is a distributed memory object caching system, and Redis is a key-value store. Both are deployed on general purpose clouds to assist in alleviating load to the Identity service. The memcached service caches tokens, and due to its distributed nature it can help alleviate some bottlenecks to the underlying authentication system. Using memcached or Redis does not affect the overall design of your architecture as they tend to be deployed onto the infrastructure nodes providing the OpenStack services." 
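Since the paragraph above points out that replication of clustered services such as MariaDB with Galera relies on quorum, a short Python calculation shows why clusters of at least three (ideally an odd number of) nodes are the usual choice; the cluster sizes are examples only:

    # Quorum arithmetic for a replicated cluster (for example Galera or Corosync).
    # A partition retains quorum only if it holds a strict majority of members.
    def quorum(nodes: int) -> int:
        return nodes // 2 + 1

    def tolerated_failures(nodes: int) -> int:
        return nodes - quorum(nodes)

    for n in (2, 3, 5):
        print(f"{n} nodes: quorum = {quorum(n)}, tolerated failures = {tolerated_failures(n)}")
    # A two-node cluster tolerates no failures, which is why three or more is preferred.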
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:488(title) -msgid "Controller infrastructure" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:489(para) -msgid "The Controller infrastructure nodes provide management services to the end-user as well as providing services internally for the operating of the cloud. The Controllers run message queuing services that carry system messages between each service. Performance issues related to the message bus would lead to delays in sending that message to where it needs to go. The result of this condition would be delays in operation functions such as spinning up and deleting instances, provisioning new storage volumes and managing network resources. Such delays could adversely affect an application’s ability to react to certain conditions, especially when using auto-scaling features. It is important to properly design the hardware used to run the controller infrastructure as outlined above in the Hardware Selection section." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:504(para) -msgid "Performance of the controller services is not limited to processing power, but restrictions may emerge in serving concurrent users. Ensure that the APIs and Horizon services are load tested to ensure that you are able to serve your customers. Particular attention should be made to the OpenStack Identity Service (Keystone), which provides the authentication and authorization for all services, both internally to OpenStack itself and to end-users. This service can lead to a degradation of overall performance if this is not sized appropriately." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:517(title) -msgid "Network performance" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:518(para) -msgid "In a general purpose OpenStack cloud, the requirements of the network help determine performance capabilities. It is possible to design OpenStack environments that run a mix of networking capabilities. By utilizing the different interface speeds, the users of the OpenStack environment can choose networks that are fit for their purpose." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:525(para) -msgid "Network performance can be boosted considerably by implementing hardware load balancers to provide front-end service to the cloud APIs. The hardware load balancers also perform SSL termination if that is a requirement of your environment. When implementing SSL offloading, it is important to understand the SSL offloading capabilities of the devices selected." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:535(title) -msgid "Compute host" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:536(para) -msgid "The choice of hardware specifications used in compute nodes including CPU, memory and disk type directly affects the performance of the instances. Other factors which can directly affect performance include tunable parameters within the OpenStack services, for example the overcommit ratio applied to resources. The defaults in OpenStack Compute set a 16:1 over-commit of the CPU and 1.5 over-commit of the memory. 
Running at such high ratios leads to an increase in \"noisy-neighbor\" activity. Care must be taken when sizing your Compute environment to avoid this scenario. For running general purpose OpenStack environments it is possible to keep to the defaults, but make sure to monitor your environment as usage increases." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:552(title) -msgid "Storage performance" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:553(para) -msgid "When considering performance of OpenStack Block Storage, hardware and architecture choice is important. Block Storage can use enterprise back-end systems such as NetApp or EMC, scale out storage such as GlusterFS and Ceph, or simply use the capabilities of directly attached storage in the nodes themselves. Block Storage may be deployed so that traffic traverses the host network, which could affect, and be adversely affected by, the front-side API traffic performance. As such, consider using a dedicated data storage network with dedicated interfaces on the Controller and Compute hosts." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:564(para) -msgid "When considering performance of OpenStack Object Storage, a number of design choices will affect performance. A user’s access to the Object Storage is through the proxy services, which sit behind hardware load balancers. By the very nature of a highly resilient storage system, replication of the data would affect performance of the overall system. In this case, 10 GbE (or better) networking is recommended throughout the storage network architecture." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:576(para) -msgid "In OpenStack, the infrastructure is integral to providing services and should always be available, especially when operating with SLAs. Ensuring network availability is accomplished by designing the network architecture so that no single point of failure exists. A consideration of the number of switches, routes and redundancies of power should be factored into core infrastructure, as well as the associated bonding of networks to provide diverse routes to your highly available switch infrastructure." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:585(para) -msgid "The OpenStack services themselves should be deployed across multiple servers that do not represent a single point of failure. Ensuring API availability can be achieved by placing these services behind highly available load balancers that have multiple OpenStack servers as members." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:590(para) -msgid "OpenStack lends itself to deployment in a highly available manner where it is expected that at least 2 servers be utilized. These can run all the services involved from the message queuing service, for example RabbitMQ or QPID, and an appropriately deployed database service such as MySQL or MariaDB. As services in the cloud are scaled out, back-end services will need to scale too. Monitoring and reporting on server utilization and response times, as well as load testing your systems, will help determine scale out decisions." 
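Carrying the default 16:1 CPU and 1.5 memory overcommit figures from the compute host discussion above into numbers, the sketch below estimates the virtual capacity one host can advertise; the hardware profile and the memory reserved for the host itself are assumptions chosen purely for illustration:

    # Estimate the virtual capacity of a single compute host under given
    # overcommit ratios. The hardware profile and reservation are assumptions;
    # 16.0 and 1.5 are the defaults mentioned above.
    physical_cores = 24
    physical_ram_gb = 256
    reserved_ram_gb = 8            # assumed hold-back for the host OS and services

    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5

    schedulable_vcpus = physical_cores * cpu_allocation_ratio
    schedulable_ram_gb = (physical_ram_gb - reserved_ram_gb) * ram_allocation_ratio

    print(f"schedulable vCPUs per host: {schedulable_vcpus:.0f}")
    print(f"schedulable RAM per host:   {schedulable_ram_gb:.0f} GB")

Lowering either ratio reduces these figures and therefore increases the number of hosts needed for the same workload, which is the sizing trade-off the text describes.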
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:599(para) -msgid "Care must be taken when deciding network functionality. Currently, OpenStack supports both the legacy networking (nova-network) system and the newer, extensible OpenStack Networking (neutron). Both have their pros and cons when it comes to providing highly available access. Legacy networking, which provides networking access maintained in the OpenStack Compute code, provides a feature that removes a single point of failure when it comes to routing, and this feature is currently missing in OpenStack Networking. The effect of legacy networking’s multi-host functionality restricts failure domains to the host running that instance." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:610(para) -msgid "When using OpenStack Networking, the OpenStack controller servers or separate Networking hosts handle routing. For a deployment that requires features available in only Networking, it is possible to remove this restriction by using third party software that helps maintain highly available L3 routes. Doing so allows for common APIs to control network hardware, or to provide complex multi-tier web applications in a secure manner. It is also possible to completely remove routing from Networking, and instead rely on hardware routing capabilities. In this case, the switching infrastructure must support L3 routing." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:622(para) -msgid "OpenStack Networking and legacy networking both have their advantages and disadvantages. They are both valid and supported options that fit different network deployment models described in the OpenStack Operations Guide." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:629(para) -msgid "Ensure your deployment has adequate back-up capabilities." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:630(para) -msgid "Application design must also be factored into the capabilities of the underlying cloud infrastructure. If the compute hosts do not provide a seamless live migration capability, then it must be expected that when a compute host fails, that instance and any data local to that instance will be deleted. However, when providing an expectation to users that instances have a high-level of uptime guarantees, the infrastructure must be deployed in a way that eliminates any single point of failure when a compute host disappears. This may include utilizing shared file systems on enterprise storage or OpenStack Block storage to provide a level of guarantee to match service features." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:642(para) -msgid "For more information on high availability in OpenStack, see the OpenStack High Availability Guide." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:650(para) -msgid "A security domain comprises users, applications, servers or networks that share common trust requirements and expectations within a system. Typically they have the same authentication and authorization requirements and users." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:654(para) -msgid "These security domains are:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:657(para) -msgid "Public" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:660(para) -msgid "Guest" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:669(para) -msgid "These security domains can be mapped to an OpenStack deployment individually, or combined. In each case, the cloud operator should be aware of the appropriate security concerns. Security domains should be mapped out against your specific OpenStack deployment topology. The domains and their trust requirements depend upon whether the cloud instance is public, private, or hybrid." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:678(para) -msgid "The public security domain is an entirely untrusted area of the cloud infrastructure. It can refer to the internet as a whole or simply to networks over which you have no authority. This domain should always be considered untrusted." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:684(para) -msgid "The guest security domain handles compute data generated by instances on the cloud but not services that support the operation of the cloud, such as API calls. Public cloud providers and private cloud providers who do not have stringent controls on instance use or who allow unrestricted internet access to instances should consider this domain to be untrusted. Private cloud providers may want to consider this network as internal and therefore trusted only if they have controls in place to assert that they trust instances and all their tenants." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:696(para) -msgid "The management security domain is where services interact. Sometimes referred to as the control plane, the networks in this domain transport confidential data such as configuration parameters, user names, and passwords. In most deployments this domain is considered trusted." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:703(para) -msgid "The data security domain is concerned primarily with information pertaining to the storage services within OpenStack. Much of the data that crosses this network has high integrity and confidentiality requirements and, depending on the type of deployment, may also have strong availability requirements. The trust level of this network is heavily dependent on other deployment decisions." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:712(para) -msgid "When deploying OpenStack in an enterprise as a private cloud it is usually behind the firewall and within the trusted network alongside existing systems. Users of the cloud are employees that are bound by the security requirements set forth by the company. This tends to push most of the security domains towards a more trusted model. However, when deploying OpenStack in a public facing role, no assumptions can be made and the attack vectors significantly increase." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:721(para) -msgid "Consideration must be taken when managing the users of the system for both public and private clouds. The identity service allows for LDAP to be part of the authentication process. Including such systems in an OpenStack deployment may ease user management if integrating into existing systems." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:727(para) -msgid "It is important to understand that user authentication requests include sensitive information including user names, passwords, and authentication tokens. For this reason, placing the API services behind hardware that performs SSL termination is strongly recommended." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml:732(para) -msgid "For more information OpenStack Security, see the OpenStack Security Guide" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:59(None) -msgid "@@image: '../figures/General_Architecture3.png'; md5=278d469e1d026634b3682209c454bff1" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:8(title) -msgid "Prescriptive example" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:9(para) -msgid "An online classified advertising company wants to run web applications consisting of Tomcat, Nginx and MariaDB in a private cloud. To be able to meet policy requirements, the cloud infrastructure will run in their own data center. The company has predictable load requirements, but requires scaling to cope with nightly increases in demand. Their current environment does not have the flexibility to align with their goal of running an open source API environment. The current environment consists of the following:" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:18(para) -msgid "Between 120 and 140 installations of Nginx and Tomcat, each with 2 vCPUs and 4 GB of RAM" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:22(para) -msgid "A three-node MariaDB and Galera cluster, each with 4 vCPUs and 8 GB RAM" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:26(para) -msgid "The company runs hardware load balancers and multiple web applications serving their websites, and orchestrates environments using combinations of scripts and Puppet. The website generates large amounts of log data daily that requires archiving." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:38(para) -msgid "OpenStack Controller service running Image, Identity, Networking, combined with support services such as MariaDB and RabbitMQ, configured for high availability on at least three controller nodes." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:48(para) -msgid "OpenStack Block Storage for use by compute instances, requiring persistent storage (such as databases for dynamic sites)." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:53(para) -msgid "OpenStack Object Storage for serving static objects (such as images)." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:60(para) -msgid "Running up to 140 web instances and the small number of MariaDB instances requires 292 vCPUs available, as well as 584 GB RAM. On a typical 1U server using dual-socket hex-core Intel CPUs with Hyperthreading, and assuming 2:1 CPU overcommit ratio, this would require 8 OpenStack Compute nodes." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:66(para) -msgid "The web application instances run from local storage on each of the OpenStack Compute nodes. The web application instances are stateless, meaning that any of the instances can fail and the application will continue to function." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:70(para) -msgid "MariaDB server instances store their data on shared enterprise storage, such as NetApp or Solidfire devices. If a MariaDB instance fails, storage would be expected to be re-attached to another instance and rejoined to the Galera cluster." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:75(para) -msgid "Logs from the web application servers are shipped to OpenStack Object Storage for processing and archiving." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:78(para) -msgid "Additional capabilities can be realized by moving static web content to be served from OpenStack Object Storage containers, and backing the OpenStack Image service with OpenStack Object Storage." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:83(para) -msgid "Increasing OpenStack Object Storage means network bandwidth needs to be taken into consideration. Running OpenStack Object Storage with network connections offering 10 GbE or better connectivity is advised." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:90(para) -msgid "Leveraging Orchestration and Telemetry services is also a potential issue when providing auto-scaling, orchestrated web application environments. Defining the web applications in Heat Orchestration Templates (HOT) negates the reliance on the current scripted Puppet solution." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_prescriptive_example_general_purpose.xml:95(para) -msgid "OpenStack Networking can be used to control hardware load balancers through the use of plug-ins and the Networking API. This allows users to control hardware load balance pools and instances as members in these pools, but their use in production environments must be carefully weighed against current stability." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:9(para) -msgid "When building a general purpose cloud, you should follow the Infrastructure-as-a-Service (IaaS) model; a platform best suited for use cases with simple requirements. General purpose cloud user requirements are not complex. However, it is important to capture them even if the project has minimum business and technical requirements, such as a proof of concept (PoC), or a small lab platform." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:17(para) -msgid "The following user considerations are written from the perspective of the cloud builder, not from the perspective of the end user." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:26(para) -msgid "Financial factors are a primary concern for any organization. Cost is an important criterion as general purpose clouds are considered the baseline from which all other cloud architecture environments derive. General purpose clouds do not always provide the most cost-effective environment for specialized applications or situations. Unless razor-thin margins and costs have been mandated as a critical factor, cost should not be the sole consideration when choosing or designing a general purpose architecture." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:39(term) -msgid "Time to market" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:41(para) -msgid "The ability to deliver services or products within a flexible time frame is a common business factor when building a general purpose cloud. Delivering a product in six months instead of two years is a driving force behind the decision to build general purpose clouds. General purpose clouds allow users to self-provision and gain access to compute, network, and storage resources on-demand thus decreasing time to market." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:55(para) -msgid "Revenue opportunities for a cloud will vary greatly based on the intended use case of that particular cloud. Some general purpose clouds are built for commercial customer facing products, but there are alternatives that might make the general purpose cloud the right choice." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:66(title) -msgid "Technical requirements" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:67(para) -msgid "Technical cloud architecture requirements should be weighted against the business requirements." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:74(para) -msgid "As a baseline product, general purpose clouds do not provide optimized performance for any particular function. While a general purpose cloud should provide enough performance to satisfy average user considerations, performance is not a general purpose cloud customer driver." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:83(term) -msgid "No predefined usage model" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:85(para) -msgid "The lack of a pre-defined usage model enables the user to run a wide variety of applications without having to know the application requirements in advance. This provides a degree of independence and flexibility that no other cloud scenarios are able to provide." 
-msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:94(term) -msgid "On-demand and self-service application" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:96(para) -msgid "By definition, a cloud provides end users with the ability to self-provision computing power, storage, networks, and software in a simple and flexible way. The user must be able to scale their resources up to a substantial level without disrupting the underlying host operations. One of the benefits of using a general purpose cloud architecture is the ability to start with limited resources and increase them over time as the user demand grows." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:109(term) -msgid "Public cloud" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:111(para) -msgid "For a company interested in building a commercial public cloud offering based on OpenStack, the general purpose architecture model might be the best choice. Designers are not always going to know the purposes or workloads for which the end users will use the cloud." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:120(term) -msgid "Internal consumption (private) cloud" -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:122(para) -msgid "Organizations need to determine if it is logical to create their own clouds internally. Using a private cloud, organizations are able to maintain complete control over architectural and cloud components." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:127(para) -msgid "Users will want to combine using the internal cloud with access to an external cloud. If that case is likely, it might be worth exploring the possibility of taking a multi-cloud approach with regard to at least some of the architectural elements." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:135(para) -msgid "Designs that incorporate the use of multiple clouds, such as a private cloud and a public cloud offering, are described in the \"Multi-Cloud\" scenario, see ." -msgstr "" - -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:145(para) -msgid "Security should be implemented according to asset, threat, and vulnerability risk assessment matrices. For cloud domains that require increased computer security, network security, or information security, a general purpose cloud is not considered an appropriate choice." -msgstr "" - -#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 -#: ./doc/arch-design/generalpurpose/section_user_requirements_general_purpose.xml:0(None) -msgid "translator-credits" -msgstr "" - diff --git a/doc/arch-design/locale/zh_CN.po b/doc/arch-design/locale/zh_CN.po deleted file mode 100644 index 0cd150647f..0000000000 --- a/doc/arch-design/locale/zh_CN.po +++ /dev/null @@ -1,4880 +0,0 @@ -# Translators: -# apporc watson , 2015 -# Chen Peng , 2015 -# Hunt Xu , 2015 -# johnwoo_lee , 2015 -# Hunt Xu , 2015 -# 颜海峰 , 2015 -# -# -# Jimmy Li , 2015. #zanata -# OpenStack Infra , 2015. 
#zanata -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2015-11-11 05:24+0000\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"PO-Revision-Date: 2015-10-30 03:26+0000\n" -"Last-Translator: Jimmy Li \n" -"Language: zh-CN\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Zanata 3.7.1\n" -"Language-Team: Chinese (China)\n" - -msgid "" -"10 GbE horizontally scalable spine leaf back-end storage and front end " -"network." -msgstr "10 GbE 可水平扩展的分布式核心后端存储及前端网络。" - -msgid "" -"10 GbE horizontally scalable spine leaf back-end storage and front-end " -"network" -msgstr "10 GbE 可水平扩展的分布式核心后端存储及前端网络。" - -msgid "100 PB Tape" -msgstr "100 PB 磁带" - -msgid "120 PB HDD" -msgstr "120 PB 硬盘" - -msgid "1600 = (16 (number of physical cores)) / 2" -msgstr "1600 = (16 x (物理核数)) / 2" - -msgid "" -"1U rack-mounted servers have the ability to offer greater server density " -"than a blade server solution, but are often limited to dual-socket, multi-" -"core CPU configurations." -msgstr "" -"1U机架式服务器要比刀片服务解决方案提供更大的服务器密度。但是会有双CPU插槽、多" -"核CPU配置的限制。" - -msgid "" -"1U rack-mounted servers occupy only a single rack unit. Their benefits " -"include high density, support for dual-socket multi-core CPUs, and support " -"for reasonable RAM amounts. This form factor offers limited storage " -"capacity, limited network capacity, and limited expandability." -msgstr "" -"1U机架式服务器仅占用1U的机柜空间,他们的有点包括高密度,支持双插槽多核的CPU," -"支持内存扩展。局限性就是,有限的存储容量、有限的网络容量,以及有限的可扩展" -"性。" - -msgid "2.5 Mega Watts" -msgstr "2.5 晚瓦特" - -msgid "20000 cores" -msgstr "20000 个核" - -msgid "2014" -msgstr "2014" - -msgid "2015" -msgstr "2015" - -msgid "" -"2U rack-mounted servers offer the expanded storage and networking capacity " -"that 1U servers tend to lack, but with a corresponding decrease in server " -"density (half the density offered by 1U rack-mounted servers)." -msgstr "" -"2U的机架式服务器相比1U服务器,提供可扩展的存储和网络容量,但是相应的降低了服" -"务器密度(1U机架式服务器密度的一半)" - -msgid "" -"2U rack-mounted servers provide quad-socket, multi-core CPU support but with " -"a corresponding decrease in server density (half the density offered by 1U " -"rack-mounted servers)." -msgstr "" -"2U机架式服务器提供四插槽、多核CPU的支持,但是它相应的降低了服务器密度(相当于" -"1U机架式服务器的一半)。" - -msgid "2x10 GbE back-end bonds" -msgstr "2x10 GbE 后端绑定" - -msgid "2x10 GbE bonded front end" -msgstr "2x10 GbE 绑定的前端" - -msgid "3.5 Mega Watts" -msgstr "3.5 万瓦特" - -msgid "310 TB Memory" -msgstr "310 TB 内存" - -msgid "3x proxies" -msgstr "3x 代理" - -msgid "5 storage servers for caching layer 24x1 TB SSD" -msgstr "5 台作为缓存池的存储服务器,每台 24x1 TB SSD" - -msgid "6 PB HDD" -msgstr "6 PB 硬盘" - -msgid "91000 cores" -msgstr "91000 个核" - -msgid "" -"Cells provide the ability to " -"subdivide the compute portion of an OpenStack installation, including " -"regions, while still exposing a single endpoint. Each region has an API cell " -"along with a number of compute cells where the workloads actually run. Each " -"cell has its own database and message queue setup (ideally clustered), " -"providing the ability to subdivide the load on these subsystems, improving " -"overall performance." -msgstr "" -"单元提供了对一个 OpenStack 环境,也" -"包括区域中的计算部分进行细分的功能,同时保持对外的展现仍然为单个入口点。在每" -"个区域中将会为一系列实际承担负载的计算单元创建一个 API 单元。每个单元拥有其自" -"己的数据库和消息队列(理想情况下是集群化的),并提供将负载细分到这些子系统中的" -"功能,以提高整体性能。" - -msgid "" -"OpenStack on OpenStack: describes building a multi-tiered cloud by running OpenStack on top " -"of an OpenStack installation." 
-msgstr "" -"OpenStack 上的 " -"OpenStack:一些机构认为通过在一个 OpenStack 部署之上运行 OpenStack 的" -"方式来构建多层次的云有其技术上的意义。" - -msgid "" -"Desktop-as-a-Service: " -"describes running a virtualized desktop environment in a cloud " -"(Desktop-as-a-Service). This applies to private and " -"public clouds." -msgstr "" -"桌面即服务:这是为希望在云环境" -"中运行(桌面即服务)所准备的,在私有云以及公有云的情景下" -"都可用。" - -msgid "" -"Software-defined " -"networking (SDN): describes both running an SDN controller from " -"within OpenStack as well as participating in a software-defined network." -msgstr "" -"软件定义网络(SDN):" -"这种场景详细介绍了在 OpenStack 之中运行 SDN 控制器以及 OpenStack 加入到一个软" -"件定义的网络中的情况。" - -msgid "" -"Specialized hardware: " -"describes the use of specialized hardware devices from within the OpenStack " -"environment." -msgstr "" -"专门的硬件:有一些非常特别的情" -"况可能需要在 OpenStack 环境中使用专门的硬件设备。" - -msgid "" -"Specialized networking: describes running networking-oriented software that may involve " -"reading packets directly from the wire or participating in routing protocols." -msgstr "" -"特殊的网络应用:此节" -"介绍运行可能涉及直接从网线上读取数据包或者参与路由协议的面向联网的软件。" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Compute_NSX.png'; md5=1745487faf16b74b13f80ffd837f43a0" -msgstr "" -"@@image: '../figures/Compute_NSX.png'; md5=1745487faf16b74b13f80ffd837f43a0" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Compute_Tech_Bin_Packing_CPU_optimized1.png'; " -"md5=45084140c29e59a459d6b0af9b47642a" -msgstr "" -"@@image: '../figures/Compute_Tech_Bin_Packing_CPU_optimized1.png'; " -"md5=45084140c29e59a459d6b0af9b47642a" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Compute_Tech_Bin_Packing_General1.png'; " -"md5=34f2f0b656a66124016d2484fb96068b" -msgstr "" -"@@image: '../figures/Compute_Tech_Bin_Packing_General1.png'; " -"md5=34f2f0b656a66124016d2484fb96068b" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/General_Architecture3.png'; " -"md5=278d469e1d026634b3682209c454bff1" -msgstr "" -"@@image: '../figures/General_Architecture3.png'; " -"md5=278d469e1d026634b3682209c454bff1" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Generic_CERN_Architecture.png'; " -"md5=f5ec57432a0b3bd35efeaa25e84d9947" -msgstr "" -"@@image: '../figures/Generic_CERN_Architecture.png'; " -"md5=f5ec57432a0b3bd35efeaa25e84d9947" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Generic_CERN_Example.png'; " -"md5=268e2171493d49ff3cc791071a98b49e" -msgstr "" -"@@image: '../figures/Generic_CERN_Example.png'; " -"md5=268e2171493d49ff3cc791071a98b49e" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-msgid "" -"@@image: '../figures/Massively_Scalable_Cells_+_regions_+_azs.png'; " -"md5=87d08365fefde431d6d055daf17d7d0e" -msgstr "" -"@@image: '../figures/Massively_Scalable_Cells_+_regions_+_azs.png'; " -"md5=87d08365fefde431d6d055daf17d7d0e" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-Cloud_Priv-AWS4.png'; " -"md5=3bba96b0b6ac0341a05581b00160ff17" -msgstr "" -"@@image: '../figures/Multi-Cloud_Priv-AWS4.png'; " -"md5=3bba96b0b6ac0341a05581b00160ff17" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-Cloud_Priv-Pub3.png'; " -"md5=8fdb44f876665e2aa1bd793607c4537e" -msgstr "" -"@@image: '../figures/Multi-Cloud_Priv-Pub3.png'; " -"md5=8fdb44f876665e2aa1bd793607c4537e" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-Cloud_failover2.png'; " -"md5=5a7be4a15d381288659c7268dff6724b" -msgstr "" -"@@image: '../figures/Multi-Cloud_failover2.png'; " -"md5=5a7be4a15d381288659c7268dff6724b" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-Site_Customer_Edge.png'; " -"md5=01850cf774e7075bd7202c6e7f087f36" -msgstr "" -"@@image: '../figures/Multi-Site_Customer_Edge.png'; " -"md5=01850cf774e7075bd7202c6e7f087f36" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-Site_shared_keystone1.png'; " -"md5=eaef18e7f04eec7e3f8968ad69aed7d3" -msgstr "" -"@@image: '../figures/Multi-Site_shared_keystone1.png'; " -"md5=eaef18e7f04eec7e3f8968ad69aed7d3" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-Site_shared_keystone_horizon_swift1.png'; " -"md5=fb80511b491731906fb54d5a1f029f91" -msgstr "" -"@@image: '../figures/Multi-Site_shared_keystone_horizon_swift1.png'; " -"md5=fb80511b491731906fb54d5a1f029f91" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Multi-site_Geo_Redundant_LB.png'; " -"md5=c94a96f6084c2e50a0eb6846f6fde479" -msgstr "" -"@@image: '../figures/Multi-site_Geo_Redundant_LB.png'; " -"md5=c94a96f6084c2e50a0eb6846f6fde479" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Network_Cloud_Storage2.png'; " -"md5=3cd3ce6b19b20ecd7d22af03731cc7cd" -msgstr "" -"@@image: '../figures/Network_Cloud_Storage2.png'; " -"md5=3cd3ce6b19b20ecd7d22af03731cc7cd" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-msgid "" -"@@image: '../figures/Network_Web_Services1.png'; " -"md5=7ad46189444753336edd957108a1a92b" -msgstr "" -"@@image: '../figures/Network_Web_Services1.png'; " -"md5=7ad46189444753336edd957108a1a92b" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Special_case_SDN_external.png'; " -"md5=12d9e840a0a10a5abcf1a2c1f6f80965" -msgstr "" -"@@image: '../figures/Special_case_SDN_external.png'; " -"md5=12d9e840a0a10a5abcf1a2c1f6f80965" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Special_case_SDN_hosted.png'; " -"md5=93f5e5b90b5aea50d24a098ba80c805d" -msgstr "" -"@@image: '../figures/Special_case_SDN_hosted.png'; " -"md5=93f5e5b90b5aea50d24a098ba80c805d" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Specialized_Hardware2.png'; " -"md5=f8477d5d015f4c6d4fcd56d511f14ef9" -msgstr "" -"@@image: '../figures/Specialized_Hardware2.png'; " -"md5=f8477d5d015f4c6d4fcd56d511f14ef9" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Specialized_OOO.png'; " -"md5=65a8e3666ebf09a0145c61bc1d472144" -msgstr "" -"@@image: '../figures/Specialized_OOO.png'; " -"md5=65a8e3666ebf09a0145c61bc1d472144" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Specialized_VDI1.png'; " -"md5=77729426d59881476de9a03e1ee8a22c" -msgstr "" -"@@image: '../figures/Specialized_VDI1.png'; " -"md5=77729426d59881476de9a03e1ee8a22c" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Storage_Database_+_Object5.png'; " -"md5=a0cb2374c3515b8f3203ebdc7bb7dbbf" -msgstr "" -"@@image: '../figures/Storage_Database_+_Object5.png'; " -"md5=a0cb2374c3515b8f3203ebdc7bb7dbbf" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Storage_Hadoop3.png'; " -"md5=bdc6373caede70b37209de260616b255" -msgstr "" -"@@image: '../figures/Storage_Hadoop3.png'; " -"md5=bdc6373caede70b37209de260616b255" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: '../figures/Storage_Object.png'; " -"md5=ad0b4ee39c96ab081a368ef7857479a5" -msgstr "" -"@@image: '../figures/Storage_Object.png'; " -"md5=ad0b4ee39c96ab081a368ef7857479a5" - -msgid "" -"A common requirement is to define different roles for the different cloud " -"administration functions. An example would be a requirement to segregate the " -"duties and permissions by site." -msgstr "" -"一个常见的需求是为不同的云管理功能定义不同的角色。此例即是站点需要合并职责和" -"权限的需求。" - -msgid "" -"A complete list of supported hypervisors and their capabilities can be found " -"at OpenStack Hypervisor Support Matrix." -msgstr "" -"支持hypervisor和它们的兼容性的完整列表可以从https://wiki.openstack." 
-"org/wiki/HypervisorSupportMatrix页面找到。" - -msgid "A disk within a single node" -msgstr "单个节点上的一块磁盘" - -msgid "" -"A firewall, switches and load balancers on the public facing network " -"connections." -msgstr "防火墙、交换机、以及负载均衡设备在公网中直接面向全网的连接。" - -msgid "" -"A general purpose OpenStack cloud design should incorporate the core " -"OpenStack services to provide a wide range of services to end-users. The " -"OpenStack core services recommended in a general purpose cloud are:" -msgstr "" -"通用型OpenStack云的设计,使核心的OpenStack服务的互相紧密配合为最终用户提供广" -"泛的服务。在通用型云中建议的OpenStack核心服务有:" - -msgid "" -"A general purpose cloud is designed to have a range of potential uses or " -"functions; not specialized for specific use cases. General purpose " -"architecture is designed to address 80% of potential use cases available. " -"The infrastructure, in itself, is a specific use case, enabling it to be " -"used as a base model for the design process. General purpose clouds are " -"designed to be platforms that are suited for general purpose applications." -msgstr "" -"通用型云被设计为有一系列潜在用途或功能,不是为特殊的用例特殊设计。通用型架构" -"是为满足80%用例而设计的。基础设施其本身就是一非常特别的用例,在设计过程将之用" -"于基本模型未尝不可。通用型云被设计为适合通用应用的平台。" - -msgid "" -"A general purpose cloud may also include OpenStack Object " -"Storage (swift). OpenStack " -"Block Storage (cinder). These " -"may be selected to provide storage to applications and instances." -msgstr "" -"通用型云还包括OpenStack 对象存储 (swift).。选择OpenStack 块存储 " -"(cinder) 是为应用和实例提供持久性的存储," - -msgid "" -"A hybrid cloud architecture involves multiple vendors and technical " -"architectures. These architectures may be more expensive to deploy and " -"maintain. Operational costs can be higher because of the need for more " -"sophisticated orchestration and brokerage tools than in other architectures. " -"In contrast, overall operational costs might be lower by virtue of using a " -"cloud brokerage tool to deploy the workloads to the most cost effective " -"platform." -msgstr "" -"混合云架构涉及到多个提供商和技术架构。这些架构也许为部署和维护付出高额的费" -"用。运维花费更高的原因在于相比与其它架构需要更多复杂的编排以及额外的工具。相" -"比之下,整体运营成本可能在最具成本效益的平台中使用云运营工具部署负载会降低。" - -msgid "" -"A measure of how many servers can fit into a given measure of physical " -"space, such as a rack unit [U]." -msgstr "" -"关于多少台服务器能够放下到一个给定尺寸的物理空间的量度,比如一个机柜单位[U]。" - -msgid "" -"A network designed on layer-2 protocols has advantages over one designed on " -"layer-3 protocols. In spite of the difficulties of using a bridge to perform " -"the network role of a router, many vendors, customers, and service providers " -"choose to use Ethernet in as many parts of their networks as possible. The " -"benefits of selecting a layer-2 design are:" -msgstr "" -"基于二层协议设计的网络相比基于三层协议设计的网络有一定优势。尽管使用桥接来扮" -"演路由的网络角色有困难,很多厂商、客户以及服务提供商都选择尽可能多地在他们的" -"网络中使用以太网。选择二层网络设计的好处在于:" - -msgid "" -"A relationship exists between the size of the compute environment and the " -"supporting OpenStack infrastructure controller nodes requiring support." -msgstr "" -"计算环境的规模和支撑它的OpenStack基础设施控制器节点之间的关系是确定的。" - -msgid "" -"A requirement for high availability architecture to meet customer SLA " -"requirements." -msgstr "对于实现高可用架构以满足客户服务等级协议(SLA)需求的要求。" - -msgid "" -"A requirement for vendor independence. To avoid hardware or software vendor " -"lock-in, the design should not rely on specific features of a vendor's " -"router or switch." -msgstr "" -"对厂商独立性的需要。要避免硬件或者软件的厂商选择受到限制,设计不应该依赖于某" -"个厂商的路由或者交换机的独特特性。" - -msgid "A requirement to be tolerant of rack level failure." 
-msgstr "对于容忍机柜级别的故障的要求。" - -msgid "" -"A requirement to design for cost efficient operations to take advantage of " -"massive scale." -msgstr "对于实现低成本运营以便获益于大规模扩展的设计的需要。" - -msgid "" -"A requirement to ensure that there is no single point of failure in the " -"cloud ecosystem." -msgstr "对于保证整个云生态系统中没有单点故障的需要。" - -msgid "" -"A requirement to massively scale the ecosystem to support millions of end " -"users." -msgstr "对于大规模地扩展生态系统以满足百万级别终端用户的需要。" - -msgid "" -"A requirement to maximize flexibility to architect future production " -"environments." -msgstr "对于最大化灵活性以便构架出未来的生产环境的要求。" - -msgid "A requirement to support indeterminate platforms and applications." -msgstr "对于支持不确定的平台和应用的要求。" - -msgid "" -"A security domain comprises users, applications, servers or networks that " -"share common trust requirements and expectations within a system. Typically " -"they have the same authentication and authorization requirements and users." -msgstr "" -"一个安全域在一个系统下包括让用户、应用、服务器和网络共享通用的信任需求和预" -"期。典型情况是他们有相同的认证和授权的需求和用户。" - -msgid "" -"A security domain comprises users, applications, servers or networks that " -"share common trust requirements and expectations within a system. Typically, " -"security domains have the same authentication and authorization requirements " -"and users." -msgstr "" -"一个安全域在一个系统下包括让用户、应用、服务器和网络共享通用的信任需求和预" -"期。典型情况下安全域有相同的认证和授权的需求和用户。" - -msgid "A shared application development platform" -msgstr "共享的应用开发平台" - -msgid "" -"A storage system presents a LUN backed by a set of SSDs using a traditional " -"storage array with OpenStack Block Storage integration or a storage platform " -"such as Ceph or Gluster." -msgstr "" -"一个存储系统被用以抛出 LUN,其后端由通过使用传统的存储阵列与 OpenStack 块存储" -"集成的一系列的 SSD 所支撑,或者是一个类似于 Ceph 或者 Gluster 之类的存储平" -"台。" - -msgid "A storage-focused cloud design should include:" -msgstr "存储型的云设计需要包括:" - -msgid "" -"A storage-focused design might require the ability to use Orchestration to " -"launch instances with Block Storage volumes to perform storage-intensive " -"processing." -msgstr "" -"一个存储型设计也许需要使用Orchestration,能够启动带块设备卷的实例,以满足存储" -"密集型任务处理。" - -msgid "A three-node MariaDB and Galera cluster, each with 4 vCPUs and 8 GB RAM" -msgstr "MariaDB安装在3个节点并组成Galera集群,每节点拥有4 vCPU和8GB内存" - -msgid "A web application runtime environment" -msgstr "一个web应用程序运行时环境" - -msgid "" -"A well thought-out auditing strategy is important in order to be able to " -"quickly track down issues. Keeping track of changes made to security groups " -"and tenant changes can be useful in rolling back the changes if they affect " -"production. For example, if all security group rules for a tenant " -"disappeared, the ability to quickly track down the issue would be important " -"for operational and legal reasons." -msgstr "" -"为了能够快速的追查问题,一个经过好的经过深思熟虑的审计策略是非常重要的。对于" -"安全组和租户的变动保持跟踪,在生产环境中可用于回滚,例如,如果一个租户的安全" -"组规则消失了,能够快速追查问题的能力对于运维来说很重要。" - -msgid "" -"A zone within an Object Storage cluster is a logical division. Any of the " -"following may represent a zone:" -msgstr "" -"对象存储集群内的区域是一个逻辑上的划分。一个区域可以是下列情形中的任意之一:" - -msgid "Active archive, backups and hierarchical storage management." -msgstr "活跃归档、备份和分级存储管理" - -msgid "" -"Adding storage capacity and bandwidth is a very different process when " -"comparing the Block and Object Storage services. While adding Block Storage " -"capacity is a relatively simple process, adding capacity and bandwidth to " -"the Object Storage systems is a complex task that requires careful planning " -"and consideration during the design phase." 
-msgstr "" -"将块存储和对象存储服务相比较,增加存储容量和带宽是完全不同的流程。增加块存储" -"的容量是一个相对简单的过程,增加对象存储系统的容量和带宽是一个复杂的任务,需" -"要经过精心的规划和周全的考虑。" - -msgid "" -"Additional capabilities can be realized by moving static web content to be " -"served from OpenStack Object Storage containers, and backing the OpenStack " -"Image service with OpenStack Object Storage." -msgstr "" -"附加功能可实现将静态的web内容迁移到OpenStack对象存储中,且使用OpenStack对象存" -"储作为OpenStack镜像服务的后端。" - -msgid "Additional considerations" -msgstr "额外的考虑因素" - -msgid "Additional hardware" -msgstr "额外的硬件" - -msgid "" -"Additional monitoring tools in use include Flume, Elastic Search, Kibana, and the CERN developed Lemon project." -msgstr "" -"另外使用的监测工具有Flume, Elastic Search, Kibana, 以及CERN开发的项目Lemon。" - -msgid "Agility" -msgstr "敏捷性" - -msgid "Alerting" -msgstr "警告" - -msgid "" -"Alerting and notification of responsible teams or automated systems which " -"remediate problems with storage as they arise." -msgstr "" -"对负责团队的警报和通知,或者能够在存储出现问题时能够修复问题的自动化系统。" - -msgid "" -"Alexandra Settle (Rackspace) @dewsday" -msgstr "" -"Alexandra Settle (Rackspace) @dewsday" - -msgid "" -"All nodes within an OpenStack cloud require network connectivity. In some " -"cases, nodes require access to more than one network segment. The design " -"must encompass sufficient network capacity and bandwidth to ensure that all " -"communications within the cloud, both north-south and east-west traffic have " -"sufficient resources available." -msgstr "" -"一个OpenStack云中所有的节点都需要网络连接。在一些情况下,节点需要访问多个网" -"段。云的设计必须围绕充足的网络容量和带宽去确保所有的通信,无论南北流量还是东" -"西流量都需要有充足的资源可用。" - -msgid "Also consider several specialized components:" -msgstr "也会考虑一些特别的组件:" - -msgid "" -"Although most OpenStack architecture designs fall into one of the seven " -"major scenarios outlined in other sections (compute focused, network " -"focused, storage focused, general purpose, multi-site, hybrid cloud, and " -"massively scalable), there are a few use cases that do not fit into these " -"categories. This section discusses these specialized cases and provide some " -"additional details and design considerations for each use case:" -msgstr "" -"尽管大多数的 OpenStack 架构设计都能归类于其它章节中描述的七种主要场景(计算密" -"集型、网络密集型、存储密集型、通用设计、多站点、混合云以及可大规模扩展),仍然" -"存在其他一些场景独特到无法归类入主要场景中的任何一个之中。本章将讨论一些这种" -"特殊的应用场景,以及每种场景的细节和设计的考虑因素。" - -msgid "" -"An OpenStack general purpose cloud is often considered a starting point for " -"building a cloud deployment. They are designed to balance the components and " -"do not emphasize any particular aspect of the overall computing environment. " -"Cloud design must give equal weight to the compute, network, and storage " -"components. General purpose clouds are found in private, public, and hybrid " -"environments, lending themselves to many different use cases." -msgstr "" -"OpenStack通用型云通常被认为是构建一个云的起点。它们被设计为平衡的组件以及在整" -"体的计算环境中不强调某个特定的领域。云的设计必须对计算、网络、存储等组件作必" -"要的公平权衡。通用型云可以是私有云,也可以是公有云,当然也可以混合的环境,自" -"身可以适用于各种用例。" - -msgid "" -"An external load balancing service was used and not the LBaaS in OpenStack " -"because the solution in OpenStack is not redundant and does not have any " -"awareness of geo location." -msgstr "" -"一个额外的负载均衡服务将被使用,而不是OpenStack的LBaaS,因为OpenStack的LBaaS在" -"OpenStack中不是冗余的解决方案而且不具有地理位置的任何特征。" - -msgid "" -"An important consideration in running at massive scale is projecting growth " -"and utilization trends in order to plan capital expenditures for the short " -"and long term. Gather utilization meters for compute, network, and storage, " -"along with historical records of these meters. 
While securing major anchor " -"tenants can lead to rapid jumps in the utilization rates of all resources, " -"the steady adoption of the cloud inside an organization or by consumers in a " -"public offering also creates a steady trend of increased utilization." -msgstr "" -"在大规模场景下运行 OpenStack 还有一个重要的考虑因素,是要对增长和利用率趋势进" -"行规划,从而为短期和长期计划资本性支出。这需要计算、网络以及存储等资源的利用" -"率的测量数据,以及这些数据的历史记录。固定的大客户租户可能造成所有资源的利用" -"率有个迅速的增长,在一个组织内部的对其内部云,或者在公有云上的用户对公开提供" -"的服务等的稳定增长的部署及使用,则会使得利用率出现一个稳定的增长趋势。" - -msgid "An object store with a RESTful interface" -msgstr "一个带有 RESTful 接口的对象存储" - -msgid "" -"An online classified advertising company wants to run web applications " -"consisting of Tomcat, Nginx and MariaDB in a private cloud. To be able to " -"meet policy requirements, the cloud infrastructure will run in their own " -"data center. The company has predictable load requirements, but requires " -"scaling to cope with nightly increases in demand. Their current environment " -"does not have the flexibility to align with their goal of running an open " -"source API environment. The current environment consists of the following:" -msgstr "" -"一家在线的广告公司,名称暂时保密,打算基于私有云方式运行他们的web应用,属于网" -"站典型的架构:Tomcat + Nginx + MariaDB。 为了迎合他们的合规性需求,云基础设施" -"运行在他们自己的数据中心。公司对负载需求有过预测,但是仍然提出了预防突发性的" -"需求而能够灵活扩展。他们目前的环境不具有灵活的调整目标到运行开源的应用程序接" -"口环境。目前的环境是如下面这样:" - -msgid "An organization with a diverse geographic footprint." -msgstr "一个有不同地域认证的组织" - -msgid "" -"Another consideration regarding network is the fact that legacy networking " -"is entirely managed by the cloud operator; tenants do not have control over " -"network resources. If tenants require the ability to manage and create " -"network resources such as network segments and subnets, it will be necessary " -"to install the OpenStack Networking service to provide network access to " -"instances." -msgstr "" -"另外的考虑是关于基于遗留网络的网络的管理是由云运维人员负责的。租户对网络资源" -"没有控制权。如果租户希望有管理和创建网络资源的能力,如创建、管理一个网段或子" -"网,那么就有必要安装OpenStack网络服务,以提供租户访问网络。" - -msgid "" -"Another kind of NAT that may be useful is protocol NAT. In some cases it may " -"be desirable to use only IPv6 addresses on instances and operate either an " -"instance or an external service to provide a NAT-based transition technology " -"such as NAT64 and DNS64. This provides the ability to have a globally " -"routable IPv6 address while only consuming IPv4 addresses as necessary or in " -"a shared manner." -msgstr "" -"另外一种可能有用的 NAT 是协议 NAT。某些情况下,可能需要在实例中只使用 IPv6 地" -"址,然后让一个实例或者外部的服务来提供基于 NAT 的转换技术,比如说 NAT64 和 " -"DNS64。这使得实例都有全局可达的 IPv6 地址,同时只在必要的情况下,或者以共享的" -"方式使用 IPv4 地址。" - -msgid "" -"Anthony Veiga (Comcast) @daaelar" -msgstr "" -"Anthony Veiga (Comcast) @daaelar" - -msgid "Application awareness" -msgstr "应用的可知性" - -msgid "Application cloud readiness" -msgstr "为云准备好应用程序" - -msgid "Application momentum" -msgstr "应用增长" - -msgid "Application readiness" -msgstr "应用准备" - -msgid "" -"Appropriate user_data to populate the central DNS servers " -"upon instance launch." -msgstr "当实例启动时,中心DNS服务器会填写合适的user_data。" - -msgid "" -"Appropriate Telemetry alarms that maintain state of the application and " -"allow for handling of region or instance failure." 
-msgstr "正确的Telemetry警告,维护着应用的状态且允许掌控region或实例失效。" - -msgid "Approximate capacity" -msgstr "大体的容量" - -msgid "Approximately 60 Gb of total bandwidth to the back-end storage cluster" -msgstr "到后端存储集群大约 60 Gb 的总带宽" - -msgid "Architecture" -msgstr "架构" - -msgid "Architecture Guide" -msgstr "架构指南" - -msgid "" -"As a baseline product, general purpose clouds do not provide optimized " -"performance for any particular function. While a general purpose cloud " -"should provide enough performance to satisfy average user considerations, " -"performance is not a general purpose cloud customer driver." -msgstr "" -"作为基础产品,通用型云并不对任何特定的功能提供优化性能。虽然通用型云希望能够" -"提供足够的性能以满足所以用户的考虑,但是性能本身并不是通用型云所关注的。" - -msgid "" -"As of 2011 CERN operated these two compute centers in Europe with plans to " -"add a third." -msgstr "CERN在2011年准备在欧洲建立第三个数据中心。" - -msgid "" -"As you add back-end storage capacity to the system, the partition maps " -"redistribute data amongst the storage nodes. In some cases, this replication " -"consists of extremely large data sets. In these cases, we recommend using " -"back-end replication links that do not contend with tenants' access to data." -msgstr "" -"为系统的后端存储增加了容量后,分区映射会带来数据重新分发到存储节点,在一些情" -"况下,这些复制会带来超大的数据集合,在此种情况下,建议使用后端复制链接,它也" -"不会阻断租户访问数据。" - -msgid "" -"Assessing the average workloads and increasing the number of instances that " -"can run within the compute environment by adjusting the overcommit ratio is " -"another option. It is important to remember that changing the CPU overcommit " -"ratio can have a detrimental effect and cause a potential increase in a " -"noisy neighbor. The additional risk of increasing the overcommit ratio is " -"more instances failing when a compute host fails." -msgstr "" -"通过评估平均负载,在计算环境中调整超分配比例来增加运行实例的数量是另外一个办" -"法。重要的是记住,改变CPU超分配比例有负面影响以及引起其它实例故障。加大超分配" -"的比例另外的风险是当计算节点失效后会引发更多的实例失效。" - -msgid "Asymmetric links" -msgstr "非对称连接" - -msgid "" -"At large scale, management of data operations is a resource intensive " -"process for an organization. Hierarchical storage management (HSM) systems " -"and data grids help annotate and report a baseline data valuation to make " -"intelligent decisions and automate data decisions. HSM enables automated " -"tiering and movement, as well as orchestration of data operations. A data " -"grid is an architecture, or set of services evolving technology, that brings " -"together sets of services enabling users to manage large data sets." -msgstr "" -"在规模到达一定的程度时,数据操作的管理对于整个组织来说就会是一个资源密集型的" -"过程。分级存储管理(HSM)系统以及数据网格能够帮助对数据评估的基准值作出注解以及" -"报告,从而做出正确的决定以及自动化该数据决策。HSM 支持自动化的排列和移动,以" -"及数据操作的协调编排。数据网格是一个架构,或者是一项不断发展的服务集合技术," -"此项技术能够将多个服务协调到一起,让用户能够管理大规模的数据集合。" - -msgid "Auditing" -msgstr "审计" - -msgid "Authentication between sites" -msgstr "站点之间的认证" - -msgid "Availability" -msgstr "可用性" - -msgid "Availability zones" -msgstr "可用域" - -msgid "" -"Availability zones provide another mechanism for subdividing an installation " -"or region. They are, in effect, host aggregates exposed for (optional) " -"explicit targeting by users." -msgstr "" -"可用域为细分一个 OpenStack 部署或者区域提供了另外一种机制。实际上,这是明确展" -"现给用户(可选项)的主机聚合。" - -msgid "Bare-metal" -msgstr "裸金属" - -msgid "" -"Be paranoid: Design for defense in depth and zero tolerance by building in " -"security at every level and between every component. Trust no one." 
-msgstr "" -"保持偏执:深度设计防御,通过构建在每一层和每个组件之间的安全确保零差错。不信" -"任任何人。" - -msgid "" -"Beth Cohen (Verizon) @bfcohen" -msgstr "" -"Beth Cohen (Verizon) @bfcohen" - -msgid "" -"Between 120 and 140 installations of Nginx and Tomcat, each with 2 vCPUs and " -"4 GB of RAM" -msgstr "" -"Nginx和Tomcat的安装量在120和140之间,每个应用的实例是2 虚拟CPU和4 GB内存" - -msgid "Big data" -msgstr "大数据" - -msgid "Big data analytics using Hadoop or other distributed data stores" -msgstr "使用Hadoop或其他分布式数据处理程序来分析大数据" - -msgid "Block Storage fault tolerance and availability" -msgstr "块存储的容错和可用性" - -msgid "" -"Block storage also takes advantage of a number of enterprise storage " -"solutions. These are addressed via a plug-in driver developed by the " -"hardware vendor. A large number of enterprise storage plug-in drivers ship " -"out-of-the-box with OpenStack Block Storage (and many more available via " -"third party channels). General purpose clouds are more likely to use " -"directly attached storage in the majority of block storage nodes, deeming it " -"necessary to provide additional levels of service to tenants which can only " -"be provided by enterprise class storage solutions." -msgstr "" -"块存储还可以利用一些企业级存储解决方案的优势。由硬件厂商开发的插件驱动得以实" -"现。基于OpenStack块存储有大量的企业存储写了它们带外的插件驱动(也有很大一部分" -"是通过第三方渠道来实现的)。作为通用型云使用的是直接挂载存储到块存储节点,如果" -"未来需要为租户提供额外级别的块存储,只须增加企业级的存储解决方案即可。" - -msgid "" -"Boot storms, when a high volume of logins occur in a short period of time" -msgstr "启动风暴,再很短的时间内发生了大量的虚拟机同时启动的事" - -msgid "Broker" -msgstr "代理" - -msgid "Budapest, Hungary" -msgstr "匈牙利布达佩斯" - -msgid "Bursting from a private cloud to a public cloud" -msgstr "从私有云突破到公有云" - -msgid "Bursting to a public non-OpenStack cloud" -msgstr "突破到一个不是OpenStack的公有云" - -msgid "Bursting workloads from private to public OpenStack clouds" -msgstr "从私有云突破负载到公有的OpenStack云" - -msgid "Bursting workloads from private to public non-OpenStack clouds" -msgstr "从私有云突破负载到公有的非OpenStack云" - -msgid "Business or technical diversity" -msgstr "业务或技术的多样性" - -msgid "" -"By definition, a cloud provides end users with the ability to self-provision " -"computing power, storage, networks, and software in a simple and flexible " -"way. The user must be able to scale their resources up to a substantial " -"level without disrupting the underlying host operations. One of the benefits " -"of using a general purpose cloud architecture is the ability to start with " -"limited resources and increase them over time as the user demand grows." -msgstr "" -"根据定义,云提供给最终用户通过简单灵活的方式自适应的计算能力、存储、网络和软" -"件的能力。用户必须能够在不破坏底层主机操作的情况下扩展资源到满足自身。使用通" -"用型云架构的一个优点就是,在有限的资源下启动,随着时间的增加和用户的需求增长" -"而轻松扩张的能力。" - -msgid "CPU allocation ratio: 16:1" -msgstr "CPU 超分配比例: 16:1" - -msgid "CPU and RAM" -msgstr "CPU 和内存" - -msgid "CPU-intensive" -msgstr "CPU密集型" - -msgid "Capacity" -msgstr "容量" - -msgid "Capacity constraints for a general purpose cloud environment include:" -msgstr "在通用型云环境中的容量限制包括:" - -msgid "Capacity planning" -msgstr "容量计划" - -msgid "" -"Care must be taken when deciding network functionality. Currently, OpenStack " -"supports both the legacy networking (nova-network) system and the newer, " -"extensible OpenStack Networking (neutron). Both have their pros and cons " -"when it comes to providing highly available access. Legacy networking, which " -"provides networking access maintained in the OpenStack Compute code, " -"provides a feature that removes a single point of failure when it comes to " -"routing, and this feature is currently missing in OpenStack Networking. 
The " -"effect of legacy networking’s multi-host functionality restricts failure " -"domains to the host running that instance." -msgstr "" -"当决定网络功能的时候务必小心谨慎。当前的OpenStack不仅支持遗留网络(nova-" -"network)系统还支持新的,可扩展的OpenStack网络(neutron)。二者在提供高可用访问" -"时有各自的优缺点。遗留网络提供的网络服务的代码是由OpenStack计算来维护的,它可" -"提供移除来自路由的单点故障特性,但是此特性在当前的Openstack网络中不被支持。遗" -"留网络的多主机功能受限于仅在运行实例的主机,一旦失效,此实例将无法被访问。" - -msgid "CentOS" -msgstr "CentOS" - -msgid "Centralized log collection." -msgstr "日志集中收集" - -msgid "" -"Certain hardware form factors may better suit a general purpose OpenStack " -"cloud due to the requirement for equal (or nearly equal) balance of " -"resources. Server hardware must provide the following:" -msgstr "" -"确定硬件的形式,也许更适合通用型OpenStack云,因为通用型有对资源的对等(接近对" -"等)平衡的需求。服务器硬件须提供如下资源平衡细节描述:" - -msgid "" -"Certain use cases may benefit from exposure to additional devices on the " -"compute node. Examples might include:" -msgstr "" -"在计算节点中使用一些额外的设备对于某些用例有着显著的益处。举几个常见的例子:" - -msgid "Challenges" -msgstr "挑战" - -msgid "Choice of hypervisor" -msgstr "选择Hypervisor" - -msgid "Choice of operating system" -msgstr "选择操作系统" - -msgid "Cloud storage" -msgstr "云存储" - -msgid "" -"Cloud storage consists of many distributed, synonymous resources, which are " -"often referred to as integrated storage clouds. Cloud storage is highly " -"fault tolerant through redundancy and the distribution of data. It is highly " -"durable through the creation of versioned copies, and can be consistent with " -"regard to data replicas." -msgstr "" -"云存储由很多分散的但是同质化的资源所组成,并且通常被称为集成存储云。云存储具" -"有非常高的容错能力,这是通过冗余以及数据的分布存储实现的。通过创建版本化的副" -"本,云存储是非常耐用的,而且对于数据的副本来说,其一致性也是非常高的。" - -msgid "" -"Cloud storage is a model of data storage that stores digital data in logical " -"pools and physical storage that spans across multiple servers and locations. " -"Cloud storage commonly refers to a hosted object storage service, however " -"the term also includes other types of data storage that are available as a " -"service, for example block storage." -msgstr "" -"云存储是一种数据存储的模型。这种模型下电子数据存放在逻辑的存储池之中,物理的" -"存储设备则分布在多个服务器或者地点之中。云存储一般来说指的是所支持的对象存储" -"服务,然而,随着发展,这个概念包括了其它能够作为服务提供的数据存储,比如块存" -"储。" - -msgid "Cloud storage peering." -msgstr "云存储配对。" - -msgid "" -"Cloud storage runs on virtualized infrastructure and resembles broader cloud " -"computing in terms of accessible interfaces, elasticity, scalability, multi-" -"tenancy, and metered resources. You can use cloud storage services from an " -"off-premises service or deploy on-premises." -msgstr "" -"云存储是运行在虚拟化基础设施之上的,并且在可访问接口、弹性、可扩展性、多租户" -"以及可测量资源方面都类似于更广泛意义上的云计算。云存储服务可以是场外的服务," -"也可以在内部进行部署。" - -msgid "" -"Cloud users expect a fully self-service and on-demand consumption model. " -"When an OpenStack cloud reaches the \"massively scalable\" size, expect " -"consumption \"as a service\" in each and every way." -msgstr "" -"云用户同样希望拥有一个完全自服务的和按需消费的模型。当一个 OpenStack 云达" -"到“可大规模扩展”的大小时,意味着它也是被希望以每一方面都“作为服务”来进行消费" -"的。" - -msgid "Clustering" -msgstr "集群" - -msgid "Common uses of a general purpose cloud include:" -msgstr "通常使用通用性的云包括:" - -msgid "" -"Companies operating a massively scalable OpenStack cloud also require that " -"operational expenditures (OpEx) be minimized as much as possible. We " -"recommend using cloud-optimized hardware when managing operational overhead. " -"Some of the factors to consider include power, cooling, and the physical " -"design of the chassis. 
Through customization, it is possible to optimize the " -"hardware and systems for this type of workload because of the scale of these " -"implementations." -msgstr "" -"运营可大规模扩展云的公司,同样也要求业务费用(OpEx)尽可能最小化。需要对运营开" -"销进行管理的时候,我们则建议使用为云场景进行过优化的硬件。还有一些需要考虑的" -"因素包括电源、冷却系统,以及甚至是底架的物理设计。由于这类实现的规模之大,定" -"制硬件和系统以确保它们是优化过的适合完成相关类型的工作,是非常可能的一件事" -"情。" - -msgid "" -"Company A has an established data center with a substantial amount of " -"hardware. Migrating the workloads to a public cloud is not feasible." -msgstr "" -"A公司已经拥有显著数量的硬件的数据中心,将负载迁移到公有云中不是可行的方式。" - -msgid "Complex, multiple agents" -msgstr "复杂,多个代理程序" - -msgid "Complexity" -msgstr "复杂性" - -msgid "Compliance and geo-location" -msgstr "合规性和地理位置" - -msgid "Components" -msgstr "组件" - -msgid "Compute" -msgstr "计算" - -msgid "Compute (nova)" -msgstr "计算 (nova)" - -msgid "Compute (server) hardware selection" -msgstr "计算(服务器)硬件选择" - -msgid "Compute analytics with Data processing service" -msgstr "带数据处理服务的计算分析" - -msgid "Compute analytics with parallel file systems" -msgstr "使用并行文件系统的计算分析" - -msgid "" -"Compute capacity (CPU cores and RAM capacity) is a secondary consideration " -"for selecting server hardware. As a result, the required server hardware " -"must supply adequate CPU sockets, additional CPU cores, and more RAM; " -"network connectivity and storage capacity are not as critical. The hardware " -"needs to provide enough network connectivity and storage capacity to meet " -"the user requirements, however they are not the primary consideration." -msgstr "" -"在选择服务器硬件时计算能力(CPU核和内存容量)是次要考虑的,服务器硬件必须能够提" -"供更多的CPU插槽,更多的CPU核的数量,以及更多的内存,至于网络连接和存储容量就" -"显得次要一些。硬件需要的配置以提供足够的网络连接和存储容量满足用户的最低需求" -"即可,但是这不是主要需要考虑的。" - -msgid "Compute focused" -msgstr "计算型" - -msgid "Compute host" -msgstr "计算主机" - -msgid "" -"Compute host components can also be upgraded to account for increases in " -"demand; this is known as vertical scaling. Upgrading CPUs with more cores, " -"or increasing the overall server memory, can add extra needed capacity " -"depending on whether the running applications are more CPU intensive or " -"memory intensive." -msgstr "" -"计算主机可以按需求来进行相应的组件升级,这就是传说中的纵向扩展。升级更多核的" -"CPU,增加整台服务器的内存,要视运行的应用是需要CPU更紧张,还是需要内存更急切," -"以及需要多少。" - -msgid "" -"Compute intensive workloads may be CPU intensive, RAM intensive, or both; " -"they are not typically storage or network intensive." -msgstr "" -"计算密集型负载可能是CPU 密集型,RAM 密集型或者两者同时。他们不是典型的存储或" -"网络密集型。" - -msgid "Compute limits" -msgstr "计算限制" - -msgid "" -"Compute nodes automatically attach to OpenStack clouds, resulting in a " -"horizontally scaling process when adding extra compute capacity to an " -"OpenStack cloud. Additional processes are required to place nodes into " -"appropriate availability zones and host aggregates. When adding additional " -"compute nodes to environments, ensure identical or functional compatible " -"CPUs are used, otherwise live migration features will break. It is necessary " -"to add rack capacity or network switches as scaling out compute hosts " -"directly affects network and datacenter resources." 
-msgstr "" -"计算节点自动挂接到OpenStack云,结果就是为OpenStack云添加更多的计算容量,亦即" -"是横向扩展。此流程需要节点是安置在合适的可用区域并且是支持主机聚合。当添加额" -"外的计算节点到环境中时,要确保CPU类型的兼容性,否则可能会使活迁移的功能失效。" -"扩展计算节点直接的结果会影响到网络及数据中心的其他资源,因为需要增加机柜容量" -"以及交换机。" - -msgid "Compute resources" -msgstr "计算资源" - -msgid "Compute-focused workloads may include the following use cases:" -msgstr "计算型负载可能包括如下使用情况:" - -msgid "" -"Configuration management tools such as Puppet and Chef enable operations " -"staff to categorize systems into groups based on their roles and thus create " -"configurations and system states that the provisioning system enforces. " -"Systems that fall out of the defined state due to errors or failures are " -"quickly removed from the pool of active nodes and replaced." -msgstr "" -"类似 Puppet 和 Chef 等配置管理工具允许运维人员将系统按照它们的角色进行分组," -"然后通过配置准备系统,对它们分别创建配置文件以及保证系统的状态。由于错误或者" -"故障而离开预定义的状态的系统很快就会被从活动节点池中移除,并被替换。" - -msgid "" -"Connecting specialized network applications to their required resources " -"alters the design of an OpenStack installation. Installations that rely on " -"overlay networks are unable to support a routing participant, and may also " -"block layer-2 listeners." -msgstr "" -"将特殊的网络应用连接至它们所需要的资源能够改变 OpenStack 部署的设计。基于覆盖" -"网络的部署是无法支持路由参与者应用的,而且也可能阻挡二层网络监听者应用。" - -msgid "Connectivity" -msgstr "连通性" - -msgid "" -"Consideration must be taken when managing the users of the system for both " -"public and private clouds. The identity service allows for LDAP to be part " -"of the authentication process. Including such systems in an OpenStack " -"deployment may ease user management if integrating into existing systems." -msgstr "" -"无论是公有云还是私有云,管理用户的系统必须认真的考虑。身份认证服务允许LDAP作" -"为认证流程的一部分。Including such systems in an OpenStack deployment may " -"ease user management if integrating into existing systems." - -msgid "Consistency of images and templates across different sites" -msgstr "镜像和模板在跨不同站点时要保持一致性。" - -msgid "Content delivery network" -msgstr "内容分发网络" - -msgid "Content distribution." -msgstr "内容分发。" - -msgid "Continuous integration/continuous deployment (CI/CD)" -msgstr "持续集成/持续部署(CI/CD)" - -msgid "Controller infrastructure" -msgstr "控制器基础设施" - -msgid "Controlling traffic with routing metrics is straightforward." -msgstr "使用路由度量进行流量控制非常直观。" - -msgid "Copyright details are filled in by the template." -msgstr "版权信息来自于模板" - -msgid "Cost" -msgstr "成本" - -msgid "" -"Cryptographic routines that benefit from the availability of hardware random " -"number generators to avoid entropy starvation." -msgstr "" -"Cryptographic routines 受益于硬件随机数生成器,以避免entropy starvation。" - -msgid "Dashboard (horizon)" -msgstr "仪表盘 (horizon)" - -msgid "Data" -msgstr "数据" - -msgid "Data analytics with parallel file systems." -msgstr "基于并行文件系统的数据分析。" - -msgid "Data center" -msgstr "数据中心" - -msgid "" -"Data centers have a specified amount of power fed to a given rack or set of " -"racks. Older data centers may have a power density as power as low as 20 " -"AMPs per rack, while more recent data centers can be architected to support " -"power densities as high as 120 AMP per rack. The selected server hardware " -"must take power density into account." -msgstr "" -"数据中心拥有一定的电源以满足指定的机架或几组机架。老的数据中心拥有的电源密度" -"一个低于每机架20AMP。近年来数据中心通常支持电源密度为高于每机架120 AMP。选择" -"过的服务器硬件必须将电源密度纳入考虑范围。" - -msgid "" -"Data compliance policies governing certain types of information needing to " -"reside in certain locations due to regulatory issues - and more importantly, " -"cannot reside in other locations for the same reason." 
-msgstr "" -"数据合规性-基于常规某些类型的数据需要放在某些位置,同样的原因,不能放在其他位" -"置更加的重要。" - -msgid "Data grids" -msgstr "数据网格" - -msgid "" -"Data grids are helpful when answering questions around data valuation. Data " -"grids improve decision making through correlation of access patterns, " -"ownership, and business-unit revenue with other metadata values to deliver " -"actionable information about data." -msgstr "" -"数据网格在准确解答关于数据评估的问题方面非常有帮助。当前的信息科学方面的一个" -"根本的挑战就是确定哪些数据值得保存,数据应该在哪个级别的访问和性能上存在,以" -"及数据保留在存储系统当中的时间应该多长。数据网格,通过研究访问模式、所有权、" -"商业单位\n" -"收益以及其它的元数据的值等的相关性,帮助做出相关的决定,提供关于数据的可行信" -"息。" - -msgid "" -"Data locality, in which specific data or functionality should be close to " -"users." -msgstr "数据所在地应该接近用户,特别是一些特殊数据或者功能" - -msgid "" -"Data ownership policies governing the possession and responsibility for data." -msgstr "管理数据的所有权和责任的数据所有权政策。" - -msgid "" -"Data retention policies ensuring storage of persistent data and records " -"management to meet data archival requirements." -msgstr "确保持久化数据的保管和记录管理以符合数据档案化需求的数据保留政策。" - -msgid "Data security domains" -msgstr "数据安全域" - -msgid "" -"Data sovereignty policies governing the storage of data in foreign countries " -"or otherwise separate jurisdictions." -msgstr "管理位于外国或者其它辖区的数据存储问题的数据独立性政策。" - -msgid "Database software" -msgstr "数据库软件" - -msgid "" -"Databases are a common workload that benefit from high performance storage " -"back ends. Although enterprise storage is not a requirement, many " -"environments have existing storage that OpenStack cloud can use as back " -"ends. You can create a storage pool to provide block devices with OpenStack " -"Block Storage for instances as well as object interfaces. In this example, " -"the database I-O requirements are high and demand storage presented from a " -"fast SSD pool." -msgstr "" -"数据库是一种常见的能够从高性能数据后端中获益的负载。尽管企业级的存储并不在需" -"求中,很多环境都已经拥有能够被用作 OpenStack 云后端的存储。如下图中所示,可以" -"划分出一个存储池出来,使用 OpenStack 块存储向实例提供块设备,同样也可以提供对" -"象存储\n" -"接口。在这个例子中,数据库的 I-O 需求非常高,所需的存储是从一个高速的 SSD 池" -"中抛出来的。" - -msgid "Databases." -msgstr "数据库。" - -msgid "Dependencies" -msgstr "依赖" - -msgid "" -"Depending on the selected hypervisor, staff should have the appropriate " -"training and knowledge to support the selected OS and hypervisor " -"combination. If they do not, training will need to be provided which could " -"have a cost impact on the design." -msgstr "" -"依赖于所选定的hypervisor,相关工作人员需要受过对应的培训以及接受相关的知识," -"才可支持所选定的操作系统和hpervisor组合。如果没有的话,那么在设计中就得考虑培" -"训的提供是需要另外的开销的。" - -msgid "" -"Deploying an OpenStack installation using OpenStack Networking with a " -"provider network allows direct layer-2 connectivity to an upstream " -"networking device. This design provides the layer-2 connectivity required to " -"communicate via Intermediate System-to-Intermediate System (ISIS) protocol " -"or to pass packets controlled by an OpenFlow controller. Using the multiple " -"layer-2 plug-in with an agent such as Open vSwitch " -"allows a private connection through a VLAN directly to a specific port in a " -"layer-3 device. This allows a BGP point-to-point link to join the autonomous " -"system. Avoid using layer-3 plug-ins as they divide the broadcast domain and " -"prevent router adjacencies from forming." 
-msgstr "" -"使用带有提供商网络的 OpenStack 联网方式进行 OpenStack 的部署可以允许直接到上" -"游网络设备的二层网络连接。这种设计提供了通过中间系统到中间系统(ISIS)协议进行" -"通信,或者传输由 OpenFlow 控制器所控制的网络包等功能所需要的二层联网要求。使" -"用比如 Open vSwitch 这类的带有代理程序的多种二层网络插" -"件能够允许通过 VLAN 直接到三层设备上的特定端口的私有连接。这使得之后会加入自" -"治系统的 BGP 点对点连接能够存在。应该尽量避免使用三层网络的插件,因为它们会分" -"隔广播域并且阻止邻接路由器的形成。" - -msgid "" -"Design a dense multi-path network core to support multi-directional scaling " -"and flexibility." -msgstr "设计一个密集的多路径网络核心以支持多个方向的扩展和确保灵活性。" - -msgid "" -"Design decisions made in each of these areas impacts the rest of the " -"OpenStack architecture design." -msgstr "上述选择项的任何一个设计决定都会影响到其余两个的OpenStack架构设计。" - -msgid "Design impacts" -msgstr "对设计的影响" - -msgid "Designing OpenStack Block Storage" -msgstr "规划OpenStack块存储" - -msgid "Designing OpenStack Object Storage" -msgstr "规划OpenStack对象存储" - -msgid "" -"Designing an infrastructure that is suitable to host virtual desktops is a " -"very different task to that of most virtual workloads. For example, the " -"design must consider:" -msgstr "" -"设计一个适用于运行虚拟桌面的基础设施是一个与为其它大部分虚拟化任务进行设计大" -"不相同的工作。该基础设施设计时必须考虑到各种因素,比如以下例子:" - -msgid "Designing for the cloud" -msgstr "设计云" - -msgid "Designing network resources" -msgstr "规划网络资源" - -msgid "" -"Designs that incorporate the use of multiple clouds, such as a private cloud " -"and a public cloud offering, are described in the \"Multi-Cloud\" scenario, " -"see ." -msgstr "" -"设计使用多个云,例如提供一个私有云和公有云的混合,有关“多云”的描述场景,请参" -"考。" - -msgid "Desktop-as-a-Service" -msgstr "桌面即服务" - -msgid "" -"Despite the existence of SLAs, things break: servers go down, network " -"connections are disrupted, or too many tenants on a server make a server " -"unusable. An application must be sturdy enough to contend with these issues." -msgstr "" -"尽管有服务水平协议(SLA)的存在,但是还是会有一些坏的事情发生:服务器宕机,网络" -"连接发生紊乱,多个租户无法访问服务。应用程序必须足够的稳固,以应对上述事情的" -"发生。" - -msgid "Determine if the use case has consistent or highly variable latency." -msgstr "决定的是如果用例中拥有并行或高度变化的延迟。" - -msgid "Determine the most effective configuration for block storage network." -msgstr "确定块存储网络的最高效配置。" - -msgid "Determining whether an application is cloud-ready" -msgstr "决定那些应用程序是可在云中运行" - -msgid "Development and testing" -msgstr "开发和测试" - -msgid "Diagram" -msgstr "图示" - -msgid "Differing SLAs" -msgstr "服务水平协议比较" - -msgid "Disaster recovery" -msgstr "灾难恢复" - -msgid "" -"Divide and conquer: Pursue partitioning and parallel layering wherever " -"possible. Make components as small and portable as possible. Use load " -"balancing between layers." -msgstr "" -"分离和征服:尽可能分区和并行的分层。尽可能的确保组件足够小且可移植。在层之间" -"使用负载均衡。" - -msgid "Docker" -msgstr "Docker" - -msgid "Documentation" -msgstr "文档" - -msgid "Downtime" -msgstr "宕机时间" - -msgid "Durability and resilience" -msgstr "耐久性和弹性" - -msgid "" -"Each compute cell provides a complete compute installation, complete with " -"full database and queue installations, scheduler, conductor, and multiple " -"compute hosts. The cells scheduler handles placement of user requests from " -"the single API endpoint to a specific cell from those available. The normal " -"filter scheduler then handles placement within the cell." -msgstr "" -"每个计算单元提供一整个完整的计算环境,包括完整的数据库及消息队列部署、调度" -"器,管理器以及多个计算主机。单元调度器将会把从单个 API 入口点上收到的用户请" -"求,安排到从可用的单元中选出的一个特定单元上。单元内部常规的过滤调度器将负责" -"其内部的这种安排。" - -msgid "" -"Ensure that selected OS and hypervisor combinations meet the appropriate " -"scale and performance requirements. The chosen architecture will need to " -"meet the targeted instance-host ratios with the selected OS-hypervisor " -"combinations." 
-msgstr "" -"确保所选择的操作系统和Hypervisor组合能满足相应的扩展和性能需求。所选择的架构" -"需要满足依据所选择的操作系统-hypervisor组合目标实例-主机比例。" - -msgid "" -"Ensure that the design can accommodate regular periodic installations of " -"application security patches while maintaining required workloads. The " -"frequency of security patches for the proposed OS-hypervisor combination " -"will have an impact on performance and the patch installation process could " -"affect maintenance windows." -msgstr "" -"确保设计能够在维护负载需求时能够容纳正常的所安装的应用的安全补丁。为操作系统-" -"hypervisor组合打安全补丁的频率会影响到性能,而且补丁的安装流程也会影响到维护" -"工作。" - -msgid "" -"Ensure that the physical data center provides the necessary power for the " -"selected network hardware." -msgstr "确保物理数据中心为选择的网络硬件提供了必要的电力。" - -msgid "" -"Ensure that, if storage protocols other than Ethernet are part of the " -"storage solution, the appropriate hardware has been selected. If a " -"centralized storage array is selected, ensure that the hypervisor will be " -"able to connect to that storage array for image storage." -msgstr "" -"确保连通性,如果存储协议作为存储解决方案的一部分使用的是非以太网,那么选择相" -"应的硬件。如果选择了中心化的存储阵列,hypervisor若访问镜像存储就得能够连接到" -"阵列。" - -msgid "" -"Environments operating at massive scale typically need their regions or " -"sites subdivided further without exposing the requirement to specify the " -"failure domain to the user. This provides the ability to further divide the " -"installation into failure domains while also providing a logical unit for " -"maintenance and the addition of new hardware. At hyperscale, instead of " -"adding single compute nodes, administrators can add entire racks or even " -"groups of racks at a time with each new addition of nodes exposed via one of " -"the segregation concepts mentioned herein." -msgstr "" -"在大规模场景下运行的环境需要更加细分其区域和站点,同时又不能要求用户指定故障" -"域。者提供了将整个部署更加细分到故障域的能力,同时也为维护和新硬件的添加提供" -"了逻辑上的单元。在超大规模的环境中,管理员可能一次性添加整个机柜或者甚至是一" -"组机柜上的机器,而不是添加单台计算节点。这样子的节点添加过程,将会用到这里所" -"提到的隔离概念。" - -msgid "Equal (or nearly equal) balance of compute capacity (RAM and CPU)" -msgstr "计算容量(内存和CPU)对等(或接近对等)" - -msgid "" -"Ethernet frames can carry any kind of packet. Networking at layer 2 is " -"independent of the layer-3 protocol." -msgstr "以太网帧能够承载任何类型的包。二层上的联网独立于三层协议之外。" - -msgid "" -"Ethernet frames contain all the essentials for networking. These include, " -"but are not limited to, globally unique source addresses, globally unique " -"destination addresses, and error control." -msgstr "" -"以太网帧包含了所有联网所需的要素。这些要素包括但不限于,全局唯一的源地址,全" -"局唯一的目标地址,以及错误控制。" - -msgid "Example applications deployed with cloud storage characteristics:" -msgstr "以下是云存储类型的应用部署的例子:" - -msgid "" -"Excluding certain OpenStack components can limit or constrain the " -"functionality of other components. For example, if the architecture includes " -"Orchestration but excludes Telemetry, then the design will not be able to " -"take advantage of Orchestrations' auto scaling functionality. It is " -"important to research the component interdependencies in conjunction with " -"the technical requirements before deciding on the final architecture." -msgstr "" -"去除某些OpenStack组件会导致其他组件的功能受限。举例,如果架构中包含了编排但是" -"去除了Telmetry,那么这个设计就无法使用编排的自动扩展功能。在决定最终架构之" -"前,研究组件间的内部依赖是很重要的技术需求。" - -msgid "" -"Excluding certain OpenStack components may limit or constrain the " -"functionality of other components. If a design opts to include Orchestration " -"but exclude Telemetry, then the design cannot take advantage of " -"Orchestration's auto scaling functionality (which relies on information from " -"Telemetry). 
Due to the fact that you can use Orchestration to spin up a " -"large number of instances to perform the compute-intensive processing, we " -"strongly recommend including Orchestration in a compute-focused architecture " -"design." -msgstr "" -"排除一些特定的OpenStack组件会让其他组件的功能受到限制。如果在一个设计中有" -"Orchestration模块但是没有包括Telemetry模块,那么此设计就无法使用Orchestration" -"带来自动伸缩功能的优点(Orchestration需要Telemetery提供监测数据)。用户使用" -"Orchestration在计算密集型处理任务时可自动启动大量的实例,因此强烈建议在计算型" -"架构设计中使用Orchestration。" - -msgid "Expandability" -msgstr "延伸性" - -msgid "Expansion planning" -msgstr "扩展计划" - -msgid "Fault tolerance" -msgstr "容错" - -msgid "Fault tolerance and availability" -msgstr "容错和可用性" - -msgid "" -"Federated cloud, enabling users to choose resources from multiple providers" -msgstr "联合云,允许用户从不同的提供商那选择资源" - -msgid "File or object storage" -msgstr "文件或对象存储" - -msgid "" -"Financial factors are a primary concern for any organization. Cost is an " -"important criterion as general purpose clouds are considered the baseline " -"from which all other cloud architecture environments derive. General purpose " -"clouds do not always provide the most cost-effective environment for " -"specialized applications or situations. Unless razor-thin margins and costs " -"have been mandated as a critical factor, cost should not be the sole " -"consideration when choosing or designing a general purpose architecture." -msgstr "" -"财务问题对任何组织来说都是头等大事。开销是一个重要的标准,通用型云以此作为基" -"本,其它云架构环境以此作为参考。通用型云一般不会为特殊的应用或情况提供较划算" -"的环境,除非是不赚钱或者成本已经定了,当选择或设计一个通用架构时开销才不是唯" -"一考虑的。" - -msgid "Firewall functionality" -msgstr "防火墙功能" - -msgid "Firewalls" -msgstr "防火墙" - -msgid "Flat or VLAN" -msgstr "Flat 或者 VLAN" - -msgid "Flat, VLAN, Overlays, L2-L3, SDN" -msgstr "Flat、VLAN、覆盖网络、二层到三层、软件定义网络(SDN)" - -msgid "Flexibility" -msgstr "灵活性:" - -msgid "" -"For a company interested in building a commercial public cloud offering " -"based on OpenStack, the general purpose architecture model might be the best " -"choice. Designers are not always going to know the purposes or workloads for " -"which the end users will use the cloud." -msgstr "" -"对于一家公司对基于OpenStack构建一个商业公有云有兴趣的话,通用型架构模式也许是" -"最好的选择。设计者毋须知道最终用户使用云的目的和具体负载。" - -msgid "" -"For a deeper discussion on many of these topics, refer to the OpenStack Operations " -"Guide." -msgstr "" -"有关这些题目的更加深入的讨论,请参考 OpenStack 运维实战" - -msgid "" -"For a general purpose OpenStack cloud, the OpenStack infrastructure " -"components need to be highly available. If the design does not include " -"hardware load balancing, networking software packages like HAProxy will need " -"to be included." -msgstr "" -"对于通用型OpenStack云来说,基础设施组件需要高可用。如果设计时没有包括硬件的负" -"载均衡器,那么就得包含网络软件包如HAProxy。" - -msgid "" -"For a user of a massively scalable OpenStack public cloud, there are no " -"expectations for control over security, performance, or availability. Users " -"expect only SLAs related to uptime of API services, and very basic SLAs for " -"services offered. It is the user's responsibility to address these issues on " -"their own. The exception to this expectation is the rare case of a massively " -"scalable cloud infrastructure built for a private or government organization " -"that has specific requirements." -msgstr "" -"对于一个可大规模扩展的 OpenStack 公有云的用户来说,对于安全性、性能以及可用性" -"的控制需求并没有那么强烈。只有与 API 服务的正常运行时间相关的 SLA,以及所提供" -"的服务非常基本的 SLA,是所需要的。用户明白解决这些问题是他们自己的责任。这种" -"期望的例外是一个非常罕见的场景:该可大规模扩展的云是为了一个有特别需求的私有" -"或者政府组织而构建的。" - -msgid "" -"For example, a system that starts with a single disk and a partition power " -"of 3 can have 8 (2^3) partitions. 
Adding a second disk means that each has 4 "
-"partitions. The one-disk-per-partition limit means that this system can "
-"never have more than 8 disks, limiting its scalability. However, a system "
-"that starts with a single disk and a partition power of 10 can have up to "
-"1024 (2^10) disks."
-msgstr ""
-"例如,一个系统开始使用单个磁盘,partition power是3,那么可以有8(2^3) 个分区。"
-"增加第二块磁盘意味着每块将拥有4个分区。一盘一分区的限制意味着此系统不可能拥有"
-"超过8块磁盘,它的扩展性受限。然而,一个系统开始时使用单个磁盘,且partition "
-"power是10的话,最多可以使用1024(2^10)块磁盘。"
-
-msgid ""
-"For example, given a cloud with two regions, if the operator grants a user a "
-"quota of 25 instances in each region then that user may launch a total of 50 "
-"instances spread across both regions. They may not, however, launch more "
-"than 25 instances in any single region."
-msgstr ""
-"举例来说,一个云有两个region,如果运维人员给某个用户的配额是在每个region中可"
-"以启动25个实例,那么此用户在两个region加起来就可以启动50个实例。但是,该用户"
-"在任何单个region中启动的实例都不能超过25个。"
-
-msgid ""
-"For many use cases the proximity of the user to their workloads has a direct "
-"influence on the performance of the application and therefore should be "
-"taken into consideration in the design. Certain applications require zero to "
-"minimal latency that can only be achieved by deploying the cloud in multiple "
-"locations. These locations could be in different data centers, cities, "
-"countries or geographical regions, depending on the user requirement and "
-"location of the users."
-msgstr ""
-"对于很多用例来说,用户与其负载之间的距离会直接影响到应用的性能,因此在设计中"
-"需要将此纳入考虑。某些应用要求零延迟或尽可能低的延迟,这只能通过将云部署在多"
-"个地点来实现。这些地点可以是不同的数据中心,不同的城市,不同的国家或地理区"
-"域,取决于用户的需求和用户所在的位置。"
-
-msgid ""
-"For more information OpenStack Security, see the OpenStack Security Guide"
-msgstr ""
-"更多关于OpenStack安全的信息,请访问OpenStack 安全指南。"
-
-msgid ""
-"For network focused applications the future is the IPv6 protocol. IPv6 "
-"increases the address space significantly, fixes long standing issues in the "
-"IPv4 protocol, and will become essential for network focused applications in "
-"the future."
-msgstr ""
-"对于关注网络方面的应用来说,未来将是 IPv6 协议的天下。IPv6 显著地扩大了地址空"
-"间,解决了 IPv4 协议长久以来存在的问题,并且将会成为未来面向网络应用的重要乃"
-"至本质部分。"
-
-msgid "Functionality"
-msgstr "功能"
-
-msgid ""
-"General content storage and synchronization. An example of this is private "
-"dropbox."
-msgstr "通用内容存储和同步。比如私有的 dropbox。"
-
-msgid "General purpose"
-msgstr "通用型"
-
-msgid ""
-"General purpose OpenStack cloud has multiple options. The key factors that "
-"will have an influence on selection of storage hardware for a general "
-"purpose OpenStack cloud are as follows:"
-msgstr "通用型OpenStack云有多种选择,影响存储硬件的选择的关键因素有以下:"
-
-msgid ""
-"General purpose clouds are limited to the most basic components, but they "
-"can include additional resources such as:"
-msgstr "通用型云被限制在了大部分的基本组件,但是可以包含下面增加的资源,例如:"
-
-msgid "Geneva, Switzerland"
-msgstr "瑞士日内瓦"
-
-msgid "Geo-location sensitive data."
-msgstr "地理位置敏感数据。"
-
-msgid "Geo-redundant load balancing"
-msgstr "地理冗余负载均衡"
-
-msgid "Growth and capacity planning"
-msgstr "增长和容量计划"
-
-msgid "Guest"
-msgstr "客户机"
-
-msgid ""
-"Hands off: Leverage automation to increase consistency and quality and "
-"reduce response times."
-msgstr "放手:自动化可以增加一致性、质量以及减少响应的时间。"
-
-msgid ""
-"Hardware decisions are also made in relation to network architecture and "
-"facilities planning. These factors play heavily into the overall "
-"architecture of an OpenStack cloud."
-msgstr "" -"许多额外的硬件决策会影响到网络的架构和设施的规划。这些问题在OpenStack云的整个" -"架构中扮演非常重要的角色。" - -msgid "Hardware selection involves three key areas:" -msgstr "硬件选择分为三大块:" - -msgid "" -"Here are some guidelines to keep in mind when designing an application for " -"the cloud:" -msgstr "这里有一些原则忠告,在为一个应用设计云的时候请时刻铭记:" - -msgid "High availability" -msgstr "高可用" - -msgid "High availability (HA)" -msgstr "高可用(HA)" - -msgid "High availability across clouds (for technical diversity)" -msgstr "跨云的高可用性(技术多样)" - -msgid "High availability issues" -msgstr "高可用问题" - -msgid "High performance computing (HPC)" -msgstr "高性能计算(HPC)" - -msgid "" -"High performance computing jobs that benefit from the availability of " -"graphics processing units (GPUs) for general-purpose computing." -msgstr "使用图形处理单元(GPU),可大大有益于高性能计算任务。" - -msgid "High performance database" -msgstr "高性能数据库" - -msgid "High performance database with Database service" -msgstr "带数据库服务的高性能数据库" - -msgid "High speed and high volume transactional systems" -msgstr "高速及大量数据的事务性系统" - -msgid "Host aggregates" -msgstr "主机聚合" - -msgid "" -"Host aggregates enable partitioning of OpenStack Compute deployments into " -"logical groups for load balancing and instance distribution. You can also " -"use host aggregates to further partition an availability zone. Consider a " -"cloud which might use host aggregates to partition an availability zone into " -"groups of hosts that either share common resources, such as storage and " -"network, or have a special property, such as trusted computing hardware. You " -"cannot target host aggregates explicitly. Instead, select instance flavors " -"that map to host aggregate metadata. These flavors target host aggregates " -"implicitly." -msgstr "" -"主机聚合使得能够将 OpenStack 计算部署划分成逻辑上的分组以实现负载均衡和实例的" -"分布。主机聚合也可以被用于对一个可用域进行更进一步的划分,想象一下一个云环境" -"可能使用主机聚合将可用域中的主机进行分组,分组依据的规则可能是主机共享资源," -"比如存储和网络,或者主机有特别的属性,比如可信计算硬件。主机聚合并不是明显面" -"向用户的,相反,是通过选择实例类型来实现的,这些实例的类型都带有额外的规格信" -"息,映射到主机聚合的元数据上。" - -msgid "Host density" -msgstr "主机密度" - -msgid "" -"How the particular storage architecture will be used is critical for " -"determining the architecture. Some of the configurations that will influence " -"the architecture include whether it will be used by the hypervisors for " -"ephemeral instance storage or if OpenStack Object Storage will use it for " -"object storage." -msgstr "" -"特定的存储架构如何使用是决定架构的关键。一些配置将直接影响到架构,包括用于" -"hypervisor的临时实例存储,OpenStack对象存储使用它来作为对象存储服务。" - -msgid "How this book is organized" -msgstr "本书是如何组织的" - -msgid "Hybrid" -msgstr "混合云" - -msgid "Hyper-V" -msgstr "Hyper-V" - -msgid "Hypervisor" -msgstr "虚拟机管理程序" - -msgid "IP addresses" -msgstr "IP 地址" - -msgid "Identity (keystone)" -msgstr "认证 (keystone)" - -msgid "" -"If an SDN implementation requires layer-2 access because it directly " -"manipulates switches, we do not recommend running an overlay network or a " -"layer-3 agent. If the controller resides within an OpenStack installation, " -"it may be necessary to build an ML2 plug-in and schedule the controller " -"instances to connect to tenant VLANs that then talk directly to the switch " -"hardware. Alternatively, depending on the external device support, use a " -"tunnel that terminates at the switch hardware itself." 
-msgstr "" -"如果一个 SDN 的实现由于它需要直接管理和操作交换机因而要求-2层网络的连接,那么" -"就不建议运行覆盖网络或者-3层网络的代理程序。假如 SDN 控制器运行在 OpenStack " -"环境之中,则可能需要创建一个 ML2 插件并且将该控制器实例调度到能够连接至能够直" -"接与交换机硬件进行通信的租户 VLAN。另一个可能的方式是,基于外部硬件设备的支持" -"情况,使用一端终结在交换机硬件之上的网络隧道。" - -msgid "" -"If high availability is a requirement to provide continuous infrastructure " -"operations, a basic requirement of high availability should be defined." -msgstr "" -"如果高可用是提供持续基础设施运营的需求,那么高可用的基本需求就应该被定义。" - -msgid "" -"If natively available replication is not available, operations personnel " -"must be able to modify the application so that they can provide their own " -"replication service. In the event that replication is unavailable, " -"operations personnel can design applications to react such that they can " -"provide their own replication services. An application designed to detect " -"underlying storage systems can function in a wide variety of " -"infrastructures, and still have the same basic behavior regardless of the " -"differences in the underlying infrastructure." -msgstr "" -"当创建一个需要副本数据的应用时,我们建议将应用设计成为能够检测副本复制是否是" -"底层存储子系统的原生特性。在复制功能不是底层存储系统的特性的情况下,才将应用" -"设计成为自身能够提供副本复制服务。一个被设计为能够觉察到底层存储系统的应用," -"同样可以在很大范围的基础设施中部署,并且依然拥有一些最基本的行为,而不管底层" -"的基础设施有什么不同。" - -msgid "" -"If the solution is a scale-out storage architecture that includes DAS, it " -"will affect the server hardware selection. This could ripple into the " -"decisions that affect host density, instance density, power density, OS-" -"hypervisor, management tools and others." -msgstr "" -"如果是一个囊括了DAS的横向存储架构的解决方案,它将影响到服务器硬件的选择。这会" -"波及到决策,因为影响了主机密度,实例密度,电源密度,操作系统-Hypervisor,管理" -"工具及更多。" - -msgid "" -"If these software packages are required, the design must account for the " -"additional resource consumption (CPU, RAM, storage, and network bandwidth). " -"Some other potential design impacts include:" -msgstr "" -"如果这些软件包都需要的话,设计必须计算额外的资源使用率(CPU,内存,存储以及网" -"络带宽)。其他一些潜在影响设计的有:" - -msgid "" -"If this is a requirement, the hardware must support this configuration. User " -"requirements determine if a completely redundant network infrastructure is " -"required." -msgstr "" -"如果这是必须的,那么对应的服务器硬件就需要配置以支持冗余的情况。用户的需求也" -"是决定是否采用全冗余网络基础设施的关键。" - -msgid "" -"If you require direct access to a specific device, PCI pass-through enables " -"you to dedicate the device to a single instance per hypervisor. You must " -"define a flavor that has the PCI device specifically in order to properly " -"schedule instances. More information regarding PCI pass-through, including " -"instructions for implementing and using it, is available at https://wiki." -"openstack.org/wiki/Pci_passthrough." -msgstr "" -"如果需要直接使用某个特定的设备,可以通过使用 PCI 穿透技术将设备指定至每个宿主" -"机上的单个实例。OpenStack 管理员需要定义一个明确具有 PCI 设备以便适当调度实例" -"的实例类别。关于 PCI 穿透的更多信息,包括实施与使用的说明,请参考 https://wiki." -"openstack.org/wiki/Pci_passthrough。" - -msgid "Image (glance)" -msgstr "镜像 (glance)" - -msgid "Image disk utilization" -msgstr "镜像磁盘使用" - -msgid "Image portability" -msgstr "镜像移植" - -msgid "" -"In OpenStack, the infrastructure is integral to providing services and " -"should always be available, especially when operating with SLAs. Ensuring " -"network availability is accomplished by designing the network architecture " -"so that no single point of failure exists. A consideration of the number of " -"switches, routes and redundancies of power should be factored into core " -"infrastructure, as well as the associated bonding of networks to provide " -"diverse routes to your highly available switch infrastructure." 
-msgstr "" -"在OpenStack架构下,基础设施是作为一个整体提供服务的,必须保证可用性,尤其是建" -"立了服务水平协议。确保网络的可用性,在完成设计网络架构后不可以有单点故障存" -"在。考虑使用多个交换机、路由器以及冗余的电源是必须考虑的,这是核心基础设施," -"使用bonding网卡、提供多个路由、交换机高可用等。" - -msgid "" -"In a cloud with extreme demands on Block Storage, the network architecture " -"should take into account the amount of East-West bandwidth required for " -"instances to make use of the available storage resources. The selected " -"network devices should support jumbo frames for transferring large blocks of " -"data. In some cases, it may be necessary to create an additional back-end " -"storage network dedicated to providing connectivity between instances and " -"Block Storage resources so that there is no contention of network resources." -msgstr "" -"在对块存储具有极端需求的云环境中,网络的架构需要考虑东西向的带宽流量,这些带" -"宽是实例使用可用的存储资源所必需的。所选择的网络设备必须支持巨型帧以传输大块" -"的数据。某些情况下,可能还需要创建额外的后端存储网络,专用于为实例和块存储资" -"源之间提供网络连接,来保证没有对于网络资源的争抢。" - -msgid "" -"In a mixed hypervisor environment, specific aggregates of compute resources, " -"each with defined capabilities, enable workloads to utilize software and " -"hardware specific to their particular requirements. This functionality can " -"be exposed explicitly to the end user, or accessed through defined metadata " -"within a particular flavor of an instance." -msgstr "" -"在混合的hypervisor环境中,等于聚合了计算资源,但是各个hypervisor定义了各自的" -"能力,以及它们分别对软、硬件特殊的需求等。这些功能需要明确暴露给最终用户,或" -"者通过为实例类型定义的元数据来访问。" - -msgid "" -"In environments that place extreme demands on Block Storage, we recommend " -"using multiple storage pools. In this case, each pool of devices should have " -"a similar hardware design and disk configuration across all hardware nodes " -"in that pool. This allows for a design that provides applications with " -"access to a wide variety of Block Storage pools, each with their own " -"redundancy, availability, and performance characteristics. When deploying " -"multiple pools of storage it is also important to consider the impact on the " -"Block Storage scheduler which is responsible for provisioning storage across " -"resource nodes. Ensuring that applications can schedule volumes in multiple " -"regions, each with their own network, power, and cooling infrastructure, can " -"give tenants the ability to build fault tolerant applications that are " -"distributed across multiple availability zones." -msgstr "" -"在对块存储具有极端需求的环境下,我们建议利用多个存储池所带来的好处。在这种场" -"景中,每个设备池,在池中的所有硬件节点上,必须都具有相似的硬件设计以及磁盘配" -"置。这使得设计能够为应用提供到多个块存储池的访问,每个存储池都有其自身的冗" -"余、可用性\n" -"和性能特点。部署多个存储池时,对负责在资源节点上准备存储的块存储调度器的影" -"响,也是一个重要的考虑因素。确保应用能够将其存储卷分散到不同的区域中,每个区" -"域具有其自身的网络、电源以及冷却基础设施,能够给予租户建立具有容错性应用的能" -"力。该应用会被分布到多个可用区域中。" - -msgid "" -"In order to scale the Object Storage service to meet the workload of " -"multiple regions, multiple proxy workers are run and load-balanced, storage " -"nodes are installed in each region, and the entire Object Storage Service " -"can be fronted by an HTTP caching layer. This is done so client requests for " -"objects can be served out of caches rather than directly from the storage " -"modules themselves, reducing the actual load on the storage network. In " -"addition to an HTTP caching layer, use a caching layer like Memcache to " -"cache objects between the proxy and storage nodes." -msgstr "" -"为满足多region的负载理应扩展对象存储服务,那么多个代理运行,且拥有负载均衡," -"在每个region都安装存储节点,对象存储服务的入口前端配置HTTP缓存层都是必要的。" -"这样的话,客户端请求对象时会访问缓存而不是直接到存储模块,可有效较少存储网络" -"的负载。另外在HTTP缓存层,在代理和存储节点之间使用诸如Memcache可缓存对象。" - -msgid "" -"In some cases, the demand on Block Storage from instances may exhaust the " -"available network bandwidth. 
As a result, design network infrastructure that " -"services Block Storage resources in such a way that you can add capacity and " -"bandwidth easily. This often involves the use of dynamic routing protocols " -"or advanced networking solutions to add capacity to downstream devices " -"easily. Both the front-end and back-end storage network designs should " -"encompass the ability to quickly and easily add capacity and bandwidth." -msgstr "" -"在一些情形下,来自实例对块存储的需求会耗尽可用的网络带宽。因此,设计网络基础" -"设施时,考虑到如此的块存储资源使用一定得很容易的增加容量和带宽。这通常涉及到" -"使用动态路由协议或者是高级网络解决方案允许轻松添加下游设备以增加容量。无论是" -"前端存储网络设计还是后端存储网络设计都得围绕着快速和容易添加容量和带宽的能力" -"来展开。" - -msgid "" -"In some cases, the resolution to the problem is ultimately to deploy a more " -"recent version of OpenStack. Alternatively, when you must resolve an issue " -"in a production environment where rebuilding the entire environment is not " -"an option, it is sometimes possible to deploy updates to specific underlying " -"components in order to resolve issues or gain significant performance " -"improvements. Although this may appear to expose the deployment to increased " -"risk and instability, in many cases it could be an undiscovered issue." -msgstr "" -"某种情况下,问题的最终解决办法会是部署一套更新版本的 OpenStack。所幸的是,当" -"在一个不可能整个推倒重建的生产环境要解决这样的问题的时候,还可以只重新部署能" -"够解决问题或者能够使得性能明显提高的底层组件的新版本。虽然这乍看起来像是可能" -"给部署带来更高的风险和不稳定性,但是大多数情况下这并不是什么问题。" - -msgid "" -"In some cases, you must add bandwidth and capacity to the network resources " -"servicing requests between proxy servers and storage nodes. For this reason, " -"the network architecture used for access to storage nodes and proxy servers " -"should make use of a design which is scalable." -msgstr "" -"一些情况下,要求给代理服务和存储节点之间的网络资源服务增加带宽和容量。基于这" -"个原因,用于访问存储节点和代理服务的网络架构要设计的具有扩展性。" - -msgid "" -"In the case of running TripleO, the underlying OpenStack cloud deploys the " -"Compute nodes as bare-metal. You then deploy OpenStack on these Compute bare-" -"metal servers with the appropriate hypervisor, such as KVM." -msgstr "" -"在运行 TripleO 的场景下,底层的 OpenStack 环境以裸金属的方式部署计算节点。然" -"后 OpenStack 将会被部署在这些计算的裸金属服务器上,并使用诸如 KVM 之类的管理" -"程序。" - -msgid "" -"In the case of running smaller OpenStack clouds for testing purposes, where " -"performance is not a critical factor, you can use QEMU instead. It is also " -"possible to run a KVM hypervisor in an instance (see http://" -"davejingtian.org/2014/03/30/nested-kvm-just-for-fun/), though this is " -"not a supported configuration, and could be a complex solution for such a " -"use case." -msgstr "" -"如果是要为测试目的运行小规模的 OpenStack 云环境,并且性能不是关键的考虑因素的" -"情况下,则 QEMU 可以被作为替代使用。在实例中运行一个 KVM 的宿主机是可能的(参" -"考 http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/)," -"但这不是被支持的配置方式,并且对这样一个使用场景来说,也会是一个复杂的解决方" -"案。" - -msgid "" -"In the planning and design phases of the build out, it is important to " -"include the operation's function. Operational factors affect the design " -"choices for a general purpose cloud, and operations staff are often tasked " -"with the maintenance of cloud environments for larger installations." -msgstr "" -"在构建过程中的规划和设计阶段,能够考虑到运维的内容显得异常的重要。对通用型云" -"来说,运维因素影响着设计的选择,而且运维人员担任着维护云环境的任务,以及最初" -"大规模的安装、部署。" - -msgid "" -"In this architecture, instance density and CPU-RAM oversubscription are " -"lower. You require more hosts to support the anticipated scale, especially " -"if the design uses dual-socket hardware designs." 
-msgstr "" -"在此架构中实例密度要被考虑为低,因此CPU和内存的超额认购比例也要低。为了支持实" -"例低密度的预期扩展需要更多的主机,尤其是设计中使用了双插槽的硬件。" - -msgid "" -"In this example the cloud is divided into two regions, one for each site, " -"with two availability zones in each based on the power layout of the data " -"centers. A number of host aggregates enable targeting of virtual machine " -"instances using flavors, that require special capabilities shared by the " -"target hosts such as SSDs, 10GbE networks, or GPU cards." -msgstr "" -"在这个例子中,整个云被划分为两个区域,每个地点各一个,每个区域之中又有两个基" -"于数据中心中的电源安排而划分的可用域。其中一系列的主机聚合也被定义以允许通过" -"使用实例规格来指向虚拟机实例,这要求目标主机需要共享诸如 SSD、10GbE 网络或者" -"显卡等特别的硬件和功能。" - -msgid "" -"In this example, Ceph presents a Swift-compatible REST interface, as well as " -"a block level storage from a distributed storage cluster. It is highly " -"flexible and has features that enable reduced cost of operations such as " -"self healing and auto balancing. Using erasure coded pools are a suitable " -"way of maximizing the amount of usable space." -msgstr "" -"选用 Ceph 以抛出 Swift 兼容的 REST 接口,同样的也从一个分布式的存储集群中提供" -"了块级别的存储。Ceph 非常的灵活,并且具有能够降低运营成本的许多特性,比如说自" -"动修复以及自动平衡。Ceph 中的 erasure coded 池被用以最大化可用空间的量。" - -msgid "" -"Increasing OpenStack Object Storage means network bandwidth needs to be " -"taken into consideration. Running OpenStack Object Storage with network " -"connections offering 10 GbE or better connectivity is advised." -msgstr "" -"增加OpenStack对象存储同时也意味着需要考虑更大的带宽。运行OpenStack对象存储," -"请使用10 GbE或更高的网络连接。" - -msgid "" -"Increasing server density means sacrificing resource capacity or " -"expandability, however, increasing resource capacity and expandability " -"increases cost and decreases server density. As a result, determining the " -"best server hardware for a general purpose OpenStack architecture means " -"understanding how choice of form factor will impact the rest of the design. " -"The following list outlines the form factors to choose from:" -msgstr "" -"增加服务器密度意味这损失资源容量和扩展性,同理,增加资源容量和扩展性会增加开" -"销和减少服务器密度。结果就是,为通用型OpenStack架构决定最合适的服务器硬件意味" -"着怎么选择都会影响到其他设计。从以下列表元素中作出选择:" - -msgid "" -"Increasing the size of the supporting compute environment increases the " -"network traffic and messages, adding load to the controller or networking " -"nodes. Effective monitoring of the environment will help with capacity " -"decisions on scaling." -msgstr "" -"增加支持计算环境的规模,会增加网络流量、消息,也会增加控制器和网络节点的负" -"载。有效的监测整个环境,对决定扩展容量很有帮助。" - -msgid "Infrastructure segregation" -msgstr "基础设施隔离" - -msgid "" -"Infrastructure-as-a-Service offerings, including OpenStack, use flavors to " -"provide standardized views of virtual machine resource requirements that " -"simplify the problem of scheduling instances while making the best use of " -"the available physical resources." -msgstr "" -"基础设施即服务所提供的,包括OpenStack,使用类型来提供标准的虚拟机资源需求视" -"图,为分配实例时能够充分利用可用的物理资源提供了简单的解决办法。" - -msgid "" -"Input-Output performance requirements require researching and modeling " -"before deciding on a final storage framework. Running benchmarks for Input-" -"Output performance provides a baseline for expected performance levels. If " -"these tests include details, then the resulting data can help model behavior " -"and results during different workloads. Running scripted smaller benchmarks " -"during the life cycle of the architecture helps record the system health at " -"different points in time. The data from these scripted benchmarks assist in " -"future scoping and gaining a deeper understanding of an organization's needs." 
-msgstr "" -"在对最终的存储框架做出决定之前,需要对输入-输出的性能需求进行研究并将其模型" -"化。为输入-输出性能运行一些检测能够提供一个大致的预期中性能水平的基准。假如测" -"试能够包含详细信息,比如说,在对象存储系统中的对象大小,或者对象存储以及块存" -"储的不同\n" -"容量级别,那么测试得出来的数据就能够帮助对不同的负载之下的行为和结果进行模型" -"化。在架构的生命周期中运行小规模脚本化的测试能够帮助在不同时间点及时记录系统" -"的健康状态。这些脚本化测试得出的数据,能够在以后对整个组织的需要进行研究和深" -"入了解的时候提供帮助。" - -msgid "Instance and image locations" -msgstr "实例和镜像存放地" - -msgid "Instance density" -msgstr "实例密度" - -msgid "" -"Insufficient disk capacity could also have a negative effect on overall " -"performance including CPU and memory usage. Depending on the back-end " -"architecture of the OpenStack Block Storage layer, capacity includes adding " -"disk shelves to enterprise storage systems or installing additional block " -"storage nodes. Upgrading directly attached storage installed in compute " -"hosts, and adding capacity to the shared storage for additional ephemeral " -"storage to instances, may be necessary." -msgstr "" -"磁盘容量不足会给整个性能带来负面影响,会波及到CPU和内存的使用。这取决于后端架" -"构的OpenStack块存储层,可以是为企业级存储系统增加磁盘,也可以是安装新的块存储" -"节点,也可以是为计算主机直接挂接存储,也可以为实例从共享存储中添加临时空间。" -"都有可能。" - -msgid "Intended audience" -msgstr "目标读者" - -msgid "Internal consumption (private) cloud" -msgstr "内部消费(私有)云" - -msgid "Interoperability" -msgstr "互操作性" - -msgid "" -"Interoperability and integration with OpenStack can be paramount in deciding " -"on a storage hardware and storage management platform. Interoperability and " -"integration includes factors such as OpenStack Block Storage " -"interoperability, OpenStack Object Storage compatibility, and hypervisor " -"compatibility (which affects the ability to use storage for ephemeral " -"instance storage)." -msgstr "" -"与 OpenStack 系统的互动性和集成情况,在决定存储硬件和存储管理平台的选择上,可" -"能是最为重要的。这里所说的互动性和集成度,包括比如与 OpenStack 块存储的互操作" -"性、与 OpenStack 对象存储的兼容性,以及与虚拟机管理程序的兼容性(影响为临时的" -"实例使用存储的能力)等因素。" - -msgid "Introduction" -msgstr "介绍" - -msgid "Isolate virtual networks using encapsulation technologies." -msgstr "使用封装技术隔离虚拟网络。" - -msgid "" -"It can be difficult to troubleshoot a network without IP addresses and ICMP." -msgstr "解决一个没有 IP 地址和 ICMP 的网络上的问题可能会很困难。" - -msgid "" -"It is important to know that layer 2 has a very limited set of network " -"management tools. It is very difficult to control traffic, as it does not " -"have mechanisms to manage the network or shape the traffic, and network " -"troubleshooting is very difficult. One reason for this difficulty is network " -"devices have no IP addresses. As a result, there is no reasonable way to " -"check network delay in a layer-2 network." -msgstr "" -"很重要的一点是,必须意识到二层网络上用于网络管理的工具非常有限。因此控制流量" -"非常困难,由于二层网络没有管理网络或者对流量进行整形的机制,解决网络问题也非" -"常困难。其中一个原因是网络设备没有 IP 地址。因此,在二层网络中没有合理的方式" -"来检查网络延迟。" - -msgid "" -"It is possible to gain more performance out of a single storage system by " -"using specialized network technologies such as RDMA, SRP, iSER and SCST. The " -"specifics for using these technologies is beyond the scope of this book." -msgstr "" -"使用特定的网络技术如RDMA,SRP,iSER或SCST来提高单个存储系统的性能是可能,但是讨" -"论这些技术已经超出了本书的范围。" - -msgid "" -"It may be necessary to implement a 3rd-party caching layer for some " -"applications to achieve suitable performance." 
-msgstr "对于一些应用来说,可能需要引入第3方的缓存层以实现合意的性能。" - -msgid "KVM (and QEMU)" -msgstr "KVM (and QEMU)" - -msgid "" -"Kenneth Hui (EMC) @hui_kenneth" -msgstr "" -"Kenneth Hui (EMC) @hui_kenneth" - -msgid "" -"Kevin Jackson (Rackspace) @itarchitectkev" -msgstr "" -"Kevin Jackson (Rackspace) @itarchitectkev" - -msgid "Key considerations for the selection of networking hardware include:" -msgstr "选择网络硬件主要考虑应包括:" - -msgid "" -"Key hardware specifications are also crucial to the performance of user " -"instances. Be sure to consider budget and performance needs, including " -"storage performance (spindles/core), memory availability (RAM/core), network " -"bandwidth (Gbps/core), and overall CPU performance (CPU/core)." -msgstr "" -"关键的硬件规格也是确保用户实例的性能的指标。请确保考虑好预算和性能需求,包括" -"存储性能 (spindles/core), 内存可用性 (RAM/core), 网络带宽 (Gbps/core), 以及整" -"个的CPU性能 (CPU/core)." - -msgid "LXC" -msgstr "LXC" - -msgid "Lab test bed" -msgstr "测试实验平台" - -msgid "" -"Larger rack-mounted servers, such as 4U servers, often provide even greater " -"CPU capacity, commonly supporting four or even eight CPU sockets. These " -"servers have greater expandability, but such servers have much lower server " -"density and are often more expensive." -msgstr "" -"大型机架式服务器,比如4U服务器,可提供更为强大的CPU容量。通常支持4个甚至8个" -"CPU插槽。拥有很强的扩展性,但是这些服务器会带来低密度,以及更加昂贵的开销。" - -msgid "" -"Larger rack-mounted servers, such as 4U servers, often provide even greater " -"CPU capacity. Commonly supporting four or even eight CPU sockets. These " -"servers have greater expandability but such servers have much lower server " -"density and usually greater hardware cost." -msgstr "" -"大型机架式服务器,比如4U服务器,可提供更为强大的CPU容量。通常支持4个甚至8个" -"CPU插槽。拥有很强的扩展性,但是这些服务器会带来低密度,以及更加昂贵的开销。" - -msgid "" -"Larger rack-mounted servers, such as 4U servers, will tend to offer even " -"greater CPU capacity, often supporting four or even eight CPU sockets. These " -"servers often have much greater expandability so will provide the best " -"option for upgradability. This means, however, that the servers have a much " -"lower server density and a much greater hardware cost." -msgstr "" -"高U的机架式服务器,比如4U的服务器,可提供更多的CPU容量,通常可提供四个甚至8个" -"CPU插槽。这些服务器拥有非常强的可扩展性,还拥有升级的最好条件。尽管如此,它们" -"也意味着低密度和更高的开销。" - -msgid "Latency" -msgstr "延迟" - -msgid "" -"Layer-2 Ethernet usage has these advantages over layer-3 IP network usage:" -msgstr "使用二层以太网相对于使用三层 IP 网络有如下优势:" - -msgid "Layer-2 architecture limitations" -msgstr "二层架构的局限性" - -msgid "Layer-3 architecture advantages" -msgstr "三层架构的优势" - -msgid "Layer-3 architecture limitations" -msgstr "三层架构的局限性" - -msgid "" -"Layer-3 networks provide the same level of resiliency and scalability as the " -"Internet." -msgstr "三层网络提供了与因特网相同的弹性及可扩展性。" - -msgid "Legacy networking (nova-network)" -msgstr "传统联网方式(nova-network)" - -msgid "Legal requirements" -msgstr "法律需求" - -msgid "" -"Leveraging existing monitoring systems is an effective check to ensure " -"OpenStack environments can be monitored." -msgstr "借助已有的监测系统是一种有效的检查,确保OpenStack环境可以被监测。" - -msgid "Licensing" -msgstr "许可" - -msgid "" -"Likely to be fully symmetric. Because replication originates from any node " -"and might target multiple other nodes algorithmically, it is less likely for " -"this traffic to have a larger volume in any specific direction. However this " -"traffic might interfere with north-south traffic." 
-msgstr "" -"很可能是完全对称的网络。因为根据算法来说,复制行为可能从任何节点发起并指向多" -"个其他节点,某个特定方向的流量更大的情况不大可能发生。然而,这些流量也可能干" -"扰到南北向的流量。" - -msgid "Load balancers" -msgstr "负载均衡器" - -msgid "Load balancing" -msgstr "负载均衡" - -msgid "Load balancing functionality" -msgstr "负载均衡功能" - -msgid "Location-local service" -msgstr "本地服务" - -msgid "Log analytics capabilities." -msgstr "日志分析能力。" - -msgid "Logging" -msgstr "日志记录" - -msgid "Logging and monitoring" -msgstr "记录日志和监测" - -msgid "" -"Logs from the web application servers are shipped to OpenStack Object " -"Storage for processing and archiving." -msgstr "将web应用的日志放在OpenStack对象存储中,用来集中处理和归档。" - -msgid "" -"MLAG, often used for switch redundancy, is a proprietary solution that does " -"not scale beyond two devices and forces vendor lock-in." -msgstr "" -"通常被用于实现交换机冗余的 MLAG 是私有的解决方案,不能扩展至两个设备以上,并" -"且迫使得厂商的选择受到限制。" - -msgid "Maintainability" -msgstr "可维护性" - -msgid "Maintenance tasks" -msgstr "维护任务:" - -msgid "" -"Maish Saidel-Keesing (Cisco) @maishsk" -msgstr "" -"Maish Saidel-Keesing (Cisco) @maishsk" - -msgid "Management" -msgstr "管理" - -msgid "Management security domains" -msgstr "管理安全域" - -msgid "Management software" -msgstr "管理软件" - -msgid "Management software includes software for providing:" -msgstr "管理软件所包含的能够提供的软件:" - -msgid "Management tools" -msgstr "管理工具" - -msgid "" -"Many jurisdictions have legislative and regulatory requirements governing " -"the storage and management of data in cloud environments. Common areas of " -"regulation include:" -msgstr "" -"很多辖区对于云环境中的数据的保管及管理都有相关的法律上或者监管上的要求。这些" -"规章的常见领域包括:" - -msgid "" -"MariaDB server instances store their data on shared enterprise storage, such " -"as NetApp or Solidfire devices. If a MariaDB instance fails, storage would " -"be expected to be re-attached to another instance and rejoined to the Galera " -"cluster." -msgstr "" -"MariaDB服务器实例将数据存储在共享存储中,使用企业级存储,有NetApp和Solidfire" -"的设备。如果一个MariaDB的实例宕掉,另外一个实例会重新挂接原来的存储,且重新加" -"入到Galera集群中。" - -msgid "Massively scalable" -msgstr "可大规模扩展的类型" - -msgid "" -"Massively scalable OpenStack clouds have the following user requirements:" -msgstr "可大规模扩展的 OpenStack 云有如下的一些用户需求:" - -msgid "" -"Massively scalable OpenStack clouds require extensive metering and " -"monitoring functionality to maximize the operational efficiency by keeping " -"the operator informed about the status and state of the infrastructure. This " -"includes full scale metering of the hardware and software status. A " -"corresponding framework of logging and alerting is also required to store " -"and enable operations to act on the meters provided by the metering and " -"monitoring solutions. The cloud operator also needs a solution that uses the " -"data provided by the metering and monitoring solution to provide capacity " -"planning and capacity trending analysis." -msgstr "" -"可大规模扩展的 OpenStack 云需要全面的测量及监控功能,通过保持运营人员能够清楚" -"知悉基础设施的状态,以达到最大化业务效率的目的。这包括对硬件和软件状态的全面" -"测度。同样的也需要一个相应的日志和警报框架,用以保存由测量和监控解决方案所提" -"供的数据,并允许针对测量得出的数据采取相应的动作。云运营商还需要一个能够使用" -"测量和监控方案所提供的数据进行容量计划以及容量趋势分析的解决方案。" - -msgid "Media streaming." -msgstr "媒体流。" - -msgid "" -"Memcached is a distributed memory object caching system, and Redis is a key-" -"value store. Both are deployed on general purpose clouds to assist in " -"alleviating load to the Identity service. The memcached service caches " -"tokens, and due to its distributed nature it can help alleviate some " -"bottlenecks to the underlying authentication system. 
Using memcached or " -"Redis does not affect the overall design of your architecture as they tend " -"to be deployed onto the infrastructure nodes providing the OpenStack " -"services." -msgstr "" -"Memcached是一个分布式的内存对象缓存系统,Redia是一个key-value存储系统。在通用" -"型云中使用这两个系统来减轻认证服务的负载。memcached服务缓存令牌,基于它天生的" -"分布式特性可以缓减授权系统的瓶颈。使用memcached或Redis不会影响到用户的架构设" -"计,虽然它们会部署到基础设施节点中为OpenStack提供服务。" - -msgid "Methodology" -msgstr "方法论" - -msgid "MongoDB has its own design considerations for high availability." -msgstr "MongoDB尤其自身的设计考虑,回馈就是可让数据库高可用。" - -msgid "Monitoring" -msgstr "监控" - -msgid "" -"Monitoring of advanced storage performance data to ensure that storage " -"systems are performing as expected." -msgstr "监测高级的存储性能数据,以确保存储系统正常运转。" - -msgid "Monitoring of environmental resources such as temperature and humidity." -msgstr "监测诸如温度和湿度等的环境信息。" - -msgid "" -"Monitoring of network resources for service disruptions which would affect " -"access to storage." -msgstr "监测网络资源情况,关注可能影响存储访问的网络服务中断问题。" - -msgid "Monitoring of physical hardware resources." -msgstr "监测物理硬件资源。" - -msgid "More mature, established" -msgstr "更加成熟,稳定的" - -msgid "" -"Most OpenStack components require access to back-end database services to " -"store state and configuration information. Choose an appropriate back-end " -"database which satisfies the availability and fault tolerance requirements " -"of the OpenStack services." -msgstr "" -"OpenStack组件通常需要访问后端的数据库服务以存放状态和配置信息。选择合适的后端" -"数据库以满足可用性和容错的需求,这是OpenStack服务所要求的。" - -msgid "Multi-hypervisor example" -msgstr "多种类型宿主机的例子" - -msgid "Multi-site" -msgstr "多区域" - -msgid "Multi-tier topologies" -msgstr "具有多层拓扑" - -msgid "Multiple DCs" -msgstr "多数据中心" - -msgid "" -"Multiple network links should be deployed between sites to provide " -"redundancy for all components. This includes storage replication, which " -"should be isolated to a dedicated network or VLAN with the ability to assign " -"QoS to control the replication traffic or provide priority for this traffic. " -"Note that if the data store is highly changeable, the network requirements " -"could have a significant effect on the operational cost of maintaining the " -"sites." -msgstr "" -"多网络链路应该在站点之间部署,以提供所有组件的冗余。这其中包括存储的复制,存" -"储应该使用专用网络隔离,或者使用VLAN的QoS功能来控制复制流量,或者为该流量提供" -"高的优先级。记住,如果数据存储经常改动,网络的需求就会显著影响到维护站点的运" -"维成本。" - -msgid "Multiple racks" -msgstr "多个机柜" - -msgid "" -"MySQL is the default database for OpenStack, but other compatible databases " -"are available." -msgstr "" -"MySQL是OpenStack通常考虑的后端数据库,其它和MySQL兼容的数据也同样可以很好的工" -"作。" - -msgid "" -"Native support is not a constraint on the choice of OS; users are free to " -"choose just about any Linux distribution (or even Microsoft Windows) and " -"install OpenStack directly from source (or compile their own packages). " -"However, many organizations will prefer to install OpenStack from " -"distribution-supplied packages or repositories (although using the " -"distribution vendor's OpenStack packages might be a requirement for support)." -msgstr "" -"原生并非是限制到某些操作系统,用户仍然可以自己选择Linux的任何发行版(甚至是微" -"软的Windows),然后从源码安装OpenStack(或者编译为某个发行版的包)。尽管如此,事" -"实上多数组织还是会从发行版支持的包或仓库去安装OpenStack(使用分发商的OpenStack" -"软件包也许需要他们的支持)。" - -msgid "Network" -msgstr "网络" - -msgid "Network Operation Center" -msgstr "网络操作中心" - -msgid "" -"Network Operations Center (NOC) staffed and always available to resolve " -"issues." 
-msgstr "配备有网络运营中心(NOC)员工并保持工作状态以解决问题。" - -msgid "Network architecture" -msgstr "网络架构" - -msgid "Network capacity (number and speed of links)" -msgstr "网络容量(连接数量和速度)" - -msgid "Network connectivity" -msgstr "网络连通性" - -msgid "Network considerations" -msgstr "网络考虑" - -msgid "Network focused" -msgstr "网络型" - -msgid "Network functions" -msgstr "网络功能" - -msgid "" -"Network functions is a broad category but encompasses workloads that support " -"the rest of a system's network. These workloads tend to consist of large " -"amounts of small packets that are very short lived, such as DNS queries or " -"SNMP traps. These messages need to arrive quickly and do not deal with " -"packet loss as there can be a very large volume of them. There are a few " -"extra considerations to take into account for this type of workload and this " -"can change a configuration all the way to the hypervisor level. For an " -"application that generates 10 TCP sessions per user with an average " -"bandwidth of 512 kilobytes per second per flow and expected user count of " -"ten thousand concurrent users, the expected bandwidth plan is approximately " -"4.88 gigabits per second." -msgstr "" -"网络功能的话题比较宽泛,但通常是指围绕在为系统的其他部分的网络提供支持的目的" -"的工作。这类网络负载通常都是由大量的存活期比较短的小网络包组成的,例如 DNS 请" -"求或者 SNMP 陷阱等。这类消息需要很快地到达并且由于可能非常大量而不关注丢包的" -"问题。这类型的负载还有一些额外的考虑因素需要顾及,并且可能改变直到宿主机级别" -"的网络配置。假设一个应用为每个用户生成 10 个 TCP 会话,每个数据流的速度平均" -"是 512 Kbps,而预期用户的数量是 1 万个用户同时使用,预期总计的带宽计划将达到" -"大约 4.88 Gbps。" - -msgid "" -"Network hardware: The network hardware selection needs to be supported by " -"the logging, monitoring, and alerting software." -msgstr "" -"网络硬件:网络硬件的选择,要看其支持日志系统、监测系统以及预警系统的情况。" - -msgid "Network management functions" -msgstr "网络管理功能" - -msgid "Network misconfigurations" -msgstr "错误的网络配置" - -msgid "Network overlays or virtual local area networks (VLANs)" -msgstr "网络覆盖或者(VLANs)" - -msgid "Network performance" -msgstr "网络性能" - -msgid "" -"Network performance can be boosted considerably by implementing hardware " -"load balancers to provide front-end service to the cloud APIs. The hardware " -"load balancers also perform SSL termination if that is a requirement of your " -"environment. When implementing SSL offloading, it is important to understand " -"the SSL offloading capabilities of the devices selected." -msgstr "" -"网络的性能可以考虑有硬件的负载均衡来帮助实现,为云抛出的API提供前端的服务。硬" -"件负载均衡通常也提供SSL终端,如果用户的环境有需求的话。当实现SSL减负时,理解" -"SSL减负的能力来选择硬件,这点很重要。" - -msgid "Network recommendations overview" -msgstr "网络建议的总结" - -msgid "Network redundancy protocols" -msgstr "网络冗余协议" - -msgid "Network service offerings" -msgstr "网络服务提供" - -msgid "Network services" -msgstr "网络服务" - -msgid "Network tuning" -msgstr "网络调优" - -msgid "" -"Network uptime guarantees affecting switch design, which might require " -"redundant switching and power." -msgstr "" -"网络运行时间保证影响这交换机的设计,需要交换机的冗余,以及其电源的冗余。" - -msgid "Network:" -msgstr "网络:" - -msgid "Networking" -msgstr "网络" - -msgid "Networking (neutron)" -msgstr "网络 (neutron)" - -msgid "Networking Security" -msgstr "网络安全" - -msgid "" -"Networking at the frame level says nothing about the presence or absence of " -"IP addresses at the packet level. Almost all ports, links, and devices on a " -"network of LAN switches still have IP addresses, as do all the source and " -"destination hosts. There are many reasons for the continued need for IP " -"addressing. The largest one is the need to manage the network. A device or " -"link without an IP address is usually invisible to most management " -"applications. 
Utilities including remote access for diagnostics, file " -"transfer of configurations and software, and similar applications cannot run " -"without IP addresses as well as MAC addresses." -msgstr "" -"数据帧级别上的联网与数据包级别上的 IP 地址存在与否并没有关系。几乎所有端口、" -"链路以及在 LAN 交换机构成的网络上的设备仍然有其 IP 地址,同样地,所有的源和目" -"标主机也都有。有很多原因需要继续保留 IP 地址。其中最大的一个理由是网络管理上" -"的需要。一个没有 IP 地址的设备或者连接通常对于大多数的管理程序来说是不可见" -"的。包括远程接入的诊断工具、用于传输配置和软件的文件传输工具以及其它类似的应" -"用程序都不能够在没有 IP 地址或者 MAC 地址的情况下运行。" - -msgid "Networking hardware selection" -msgstr "网络硬件选择" - -msgid "Networking resources" -msgstr "网络资源" - -msgid "Networking software" -msgstr "联网软件" - -msgid "Newer, maturing" -msgstr "更新,正在发展不断成熟中" - -msgid "" -"Nick Chase (Mirantis) @NickChase" -msgstr "" -"Nick Chase (Mirantis) @NickChase" - -msgid "No multi-tier topologies" -msgstr "没有多层拓扑" - -msgid "No plug-in support" -msgstr "没有插件支持" - -msgid "No predefined usage model" -msgstr "非预先定义的使用模型" - -msgid "Non-standard features" -msgstr "非标准特性" - -msgid "Number of VLANs is limited to 4096." -msgstr "VLAN 的数目被限制在 4096。" - -msgid "" -"OS selection also directly influences hypervisor selection. A cloud " -"architect who selects Ubuntu, RHEL, or SLES has some flexibility in " -"hypervisor; KVM, Xen, and LXC are supported virtualization methods available " -"under OpenStack Compute (nova) on these Linux distributions. However, a " -"cloud architect who selects Hyper-V is limited to Windows Servers. " -"Similarly, a cloud architect who selects XenServer is limited to the CentOS-" -"based dom0 operating system provided with XenServer." -msgstr "" -"操作系统的选择会直接影响到hypervisor的选择。一个云架构会选择Ubuntu,RHEL,或" -"SLES等,它们都拥有灵活的hypervisor可供选择,同时也被OpenStack 计算(nova)所支" -"持,如KVM,Xen,LXC等虚拟化。一个云架构若选择了Hyper-V,那么只能使用Windows服务" -"器版本。同样的,一个云架构若选择了XenServer,也限制到基于CenOS dom0操作系统。" - -msgid "" -"OS-Hypervisor combination: Ensure that the selected logging, monitoring, or " -"alerting tools support the proposed OS-hypervisor combination." -msgstr "" -"操作系统-Hypervisor组合:确保所选择的日志系统,监测系统,或预警工具都是被此组" -"合所支持的。" - -msgid "" -"OS-hypervisor combination: Ensure that the selected logging, monitoring, or " -"alerting tools support the proposed OS-hypervisor combination." -msgstr "" -"操作系统-hypervisor组合:确保选择的日志、监测、告警等工具支持打算组合的操作系" -"统-Hypervisor。" - -msgid "Object Storage (swift)" -msgstr "对象存储(Swift)" - -msgid "Object Storage fault tolerance and availability" -msgstr "对象存储容错和可用性" - -msgid "" -"Object storage nodes should be designed so that the number of requests does " -"not hinder the performance of the cluster. The object storage service is a " -"chatty protocol, therefore making use of multiple processors that have " -"higher core counts will ensure the IO requests do not inundate the server." -msgstr "" -"对象存储节点应该被设计为在集群中不至于区区几个请求就拖性能后腿的样子。对象存" -"储服务是一种频繁交互的协议,因此确定使用多个处理器,而且要多核的,这样才可确" -"保不至于因为IO请求将服务器搞垮。" - -msgid "" -"On large layer-2 networks, configuring ARP learning can also be complicated. " -"The setting for the MAC address timer on switches is critical and, if set " -"incorrectly, can cause significant performance problems. As an example, the " -"Cisco default MAC address timer is extremely long. Migrating MACs to " -"different physical locations to support instance migration can be a " -"significant problem. In this case, the network information maintained in the " -"switches could be out of sync with the new location of the instance." 
-msgstr "" -"在大规模的二层网络上,配置 ARP 学习也可能很复杂。交换机上的 MAC 地址超时设置" -"非常关键,并且,错误的设置可能引起严重的性能问题。例如 Cisco 交换机上的 MAC " -"地址的超时时间非常长。迁移 MAC 地址到另外的物理位置以实现实例的迁移就可能是一" -"个显著的问题。这种情形下,交换机中维护的网络信息与实例的新位置信息就会不同" -"步。" - -msgid "" -"On the other hand, a scale-out storage solution that uses direct-attached " -"storage (DAS) in the servers may be an appropriate choice. This requires " -"configuration of the server hardware to support the storage solution." -msgstr "" -"反过来说,一个横向扩展的存储解决方案使用直接挂接存储(DAS)给服务器也许是个恰当" -"的选择。如果是这样的话,服务器硬件就需要配置以支持此种存储解决方案。" - -msgid "" -"On the surface, the equations reveal the need for 200 physical cores and " -"80TB of storage for /var/lib/nova/instances/. However, it is also important " -"to look at patterns of usage to estimate the load that the API services, " -"database servers, and queue servers are likely to encounter." -msgstr "" -"表面上,公式的计算结果是需要200个物理核和80TB的存储,这些可在路径/var/lib/" -"nova/instances/中找到。尽管如此,另外还得重点看着其他负载的使用量,诸如API服" -"务,数据库服务,队列服务等,它们同样需要被计入总体。" - -msgid "On-demand and self-service application" -msgstr "按需或自服务应用" - -msgid "" -"One potential solution to this problem is the implementation of storage " -"systems designed for performance. Parallel file systems have previously " -"filled this need in the HPC space and are suitable for large scale " -"performance-orientated systems." -msgstr "" -"这个问题的一个解决方案是部署一个设计时便将性能考虑在内的存储系统。传统上来" -"说,并行文件系统填补了 HPC 空间里的这个需要,在适用的时候,可以作为大规模面向" -"性能的系统的一个备份的方案考虑。" - -msgid "One zone per node" -msgstr "每节点就是一个zone" - -msgid "OpenStack" -msgstr "OpenStack" - -msgid "OpenStack Compute (nova)" -msgstr "OpenStack 计算 (nova)" - -msgid "" -"OpenStack Identity (keystone)" -msgstr "" -"OpenStack 认证 (keystone)" - -msgid "" -"OpenStack Image service (glance)" -msgstr "" -"OpenStack 镜像服务 (glance)" - -msgid "" -"OpenStack Networking (neutron)" -msgstr "OpenStack 网络 (neutron)" - -msgid "" -"OpenStack dashboard (horizon)" -msgstr "" -"OpenStack 仪表盘 (horizon)" - -msgid "OpenStack Architecture Design Guide" -msgstr "OpenStack 架构设计指南" - -msgid "OpenStack Block Storage (cinder)" -msgstr "OpenStack 块存储(cinder)" - -msgid "" -"OpenStack Block Storage for use by compute instances, requiring persistent " -"storage (such as databases for dynamic sites)." -msgstr "" -"OpenStack块存储为计算实例所使用,需要持久性存储(正如数据库对于动态站点)。" - -msgid "OpenStack Compute (nova)" -msgstr "OpenStack 计算(nova)" - -msgid "" -"OpenStack Compute (nova) (including the use of multiple hypervisor drivers)" -msgstr "OpenStack 计算 (nova) (包括使用多hypervisor驱动)" - -msgid "OpenStack Compute nodes running the KVM hypervisor." -msgstr "OpenStack计算节点运行着KVM的hypervisor。" - -msgid "OpenStack Compute running KVM hypervisor" -msgstr "运行着 KVM 宿主机的 OpenStack 计算服务" - -msgid "" -"OpenStack Controller service running Image, Identity, Networking, combined " -"with support services such as MariaDB and RabbitMQ, configured for high " -"availability on at least three controller nodes." 
-msgstr "" -"OpenStack控制器运行着诸如镜像服务、认证服务、以及网络服务等,支撑它们的服务有" -"诸如:MariaDB、RabbitMQ,至少有三台控制器节点配置为高可用。" - -msgid "" -"OpenStack Controller services (Image, Identity, Networking and supporting " -"services such as MariaDB and RabbitMQ)" -msgstr "" -"OpenStack 控制服务(镜像服务、认证服务、网络服务以及例如 MariaDB 和 RabbitMQ " -"之类的支撑服务)" - -msgid "OpenStack Foundation" -msgstr "OpenStack基金会" - -msgid "OpenStack Identity (keystone)" -msgstr "OpenStack 认证(keystone)" - -msgid "OpenStack Networking" -msgstr "OpenStack 联网方式(neutron)" - -msgid "OpenStack Networking (neutron)" -msgstr "OpenStack 网络 (neutron)" - -msgid "" -"OpenStack Networking (neutron) is a first class networking service that " -"gives full control over creation of virtual network resources to tenants. " -"This is often accomplished in the form of tunneling protocols which will " -"establish encapsulated communication paths over existing network " -"infrastructure in order to segment tenant traffic. These methods vary " -"depending on the specific implementation, but some of the more common " -"methods include tunneling over GRE, encapsulating with VXLAN, and VLAN tags." -msgstr "" -"OpenStack网络 (neutron) 是第一次实现为租户提供全部的控制权来建立虚拟网络资源" -"的网络服务。为了实现给租户流量分段,通常是基于已有的网络基础设施来封装通信路" -"径,即以隧道协议的方式。这些方法严重依赖与特殊的实现方式,大多数通用的方式包" -"括GRE隧道,VXLAN封\n" -"装以及VLAN标签。" - -msgid "OpenStack Networking (neutron) or legacy networking (nova-network)" -msgstr "OpenStack 网络 (neutron) 或遗留网路服务 (nova-network)" - -msgid "" -"OpenStack Networking and legacy networking both have their advantages and " -"disadvantages. They are both valid and supported options that fit different " -"network deployment models described in the OpenStack Operations Guide." -msgstr "" -"OpenStack网络和遗留网络各有各的优点和缺点。它们有效的和支持的属性适合不同的网" -"络部署模式,详细描述见OpenStack 运维指南。" - -msgid "" -"OpenStack Networking can be used to control hardware load balancers through " -"the use of plug-ins and the Networking API. This allows users to control " -"hardware load balance pools and instances as members in these pools, but " -"their use in production environments must be carefully weighed against " -"current stability." -msgstr "" -"OpenStack网络可以通过使用插件和网络的应用程序接口控制硬件负载均衡器。允许用户" -"控制硬件负载均衡池以及作为池的成员的实例,但是使用它们在生产环境的话要小心权" -"衡它的稳定程度。" - -msgid "" -"OpenStack Networking versus legacy networking (nova-network) considerations" -msgstr "OpenStack 联网方式对传统联网(nova-network) 的考虑" - -msgid "OpenStack Object Storage" -msgstr "OpenStack 对象存储" - -msgid "OpenStack Object Storage (swift) (or another object storage solution)" -msgstr "OpenStack 对象存储 (swift) (或者是另外的对象存储解决方案)" - -msgid "OpenStack Object Storage for serving static objects (such as images)." -msgstr "OpenStack对象存储为静态对象(例如镜像)服务。" - -msgid "" -"OpenStack clouds require appropriate monitoring platforms to ensure errors " -"are caught and managed appropriately. Specific meters that are critically " -"important to monitor include:" -msgstr "" -"OpenStack云需要合适的检测平台来确保错误可以及时捕获,能够更好的管理。一些特别" -"的计量值需要重点监测的有:" - -msgid "OpenStack compatibility" -msgstr "OpenStack 兼容性" - -msgid "OpenStack components" -msgstr "OpenStack 组件" - -msgid "" -"OpenStack components often require access to back-end database services to " -"store state and configuration information. Selecting an appropriate back-end " -"database that satisfies the availability and fault tolerance requirements of " -"the OpenStack services is required. OpenStack services supports connecting " -"to a database that is supported by the SQLAlchemy python drivers, however, " -"most common database deployments make use of MySQL or variations of it. 
We " -"recommend that the database, which provides back-end service within a " -"general purpose cloud, be made highly available when using an available " -"technology which can accomplish that goal." -msgstr "" -"OpenStack组件通常需要访问后端的数据库服务以存放状态和配置信息。选择合适的后端" -"数据库以满足可用性和容错的需求,这是OpenStack服务所要求的。OpenStack服务支持" -"的连接数据库的方式是由SQLAlchemy python所驱动,尽管如此,绝大多数部署还是使用" -"MySQL或其变种。我们建议为通用型云提供后端服务的数据库使用高可用技术确保其高可" -"用,方可达到架构设计的目标。" - -msgid "OpenStack dashboard (horizon)" -msgstr "OpenStack GUI界面 (horizon)" - -msgid "" -"OpenStack design, generally, does not include shared storage. However, for " -"some high availability designs, certain components might require it " -"depending on the specific implementation." -msgstr "" -"通常在OpenStack的设计中是不包括共享存储的,但是在高可用的设计中,为了特定的实" -"现一些组件会用得到共享存储。" - -msgid "OpenStack hosted SDN controller: " -msgstr "OpenStack 上运行 SDN 控制器:" - -msgid "" -"OpenStack lends itself to deployment in a highly available manner where it " -"is expected that at least 2 servers be utilized. These can run all the " -"services involved from the message queuing service, for example RabbitMQ or " -"QPID, and an appropriately deployed database service such as MySQL or " -"MariaDB. As services in the cloud are scaled out, back-end services will " -"need to scale too. Monitoring and reporting on server utilization and " -"response times, as well as load testing your systems, will help determine " -"scale out decisions." -msgstr "" -"OpenStack其本身的部署是期望高可用的,实现此需要至少两台服务器来完成。他们可以" -"运行所有的服务,由消息队列服务连接起来,消息队列服务如RabbitMQ或QPID,还需要" -"部署一个合适的数据库服务如MySQL或MariaDB。在云中所有的服务都是可横向扩展的," -"其实后端的服务同样需要扩展。检测和报告在服务器上的使用和采集,以及负载测试," -"都可以帮助作出横向扩展的决定。" - -msgid "OpenStack on OpenStack" -msgstr "OpenStack 上的 OpenStack" - -msgid "OpenStack participating in an SDN controller network: " -msgstr "OpenStack 加入到 SDN 控制器所控制的网络中:" - -msgid "OpenStack services architecture" -msgstr "OpenStack服务架构" - -msgid "" -"OpenStack services support massive horizontal scale. Be aware that this is " -"not the case for the entire supporting infrastructure. This is particularly " -"a problem for the database management systems and message queues that " -"OpenStack services use for data storage and remote procedure call " -"communications." -msgstr "" -"幸运的是,OpenStack 的服务是被设计为能够支持水平上大规模的环境的。需要清楚的" -"是这并不是整个支撑基础设施的问题。准确的说,这只是好些 OpenStack 的服务进行数" -"据存储以及远程过程调用通信所用到的数据库管理系统和消息队列的问题。" - -msgid "" -"OpenStack supports a wide variety of hypervisors, one or more of which can " -"be used in a single cloud. These hypervisors include:" -msgstr "" -"OpenStack支持多种Hypervisor,在单一的云中可以有一种或多种。这些Hypervisor包" -"括:" - -msgid "Operating system (OS) and hypervisor" -msgstr "操作系统(OS)和虚拟机管理软件" - -msgid "" -"Operating system (OS) selection plays a large role in the design and " -"architecture of a cloud. There are a number of OSes which have native " -"support for OpenStack including:" -msgstr "" -"操作系统(OS)的选择在云的设计和架构中扮演着重要的角色。有许多操作系统是原生就" -"支持OpenStack的,它们是:" - -msgid "Operating system and application image store." -msgstr "操作系统和应用镜像存储。" - -msgid "Operating system and hypervisor" -msgstr "操作系统和虚拟机管理软件" - -msgid "Operating systems and their compatibility with the OpenStack hypervisor" -msgstr "操作系统以及和 OpenStack hypervisor的兼容性" - -msgid "Operational considerations" -msgstr "运营因素" - -msgid "" -"Operational considerations determine the requirements for logging, " -"monitoring, and alerting. Each of these sub-categories includes options. 
For " -"example, in the logging sub-category you could select Logstash, Splunk, Log " -"Insight, or another log aggregation-consolidation tool. Store logs in a " -"centralized location to facilitate performing analytics against the data. " -"Log data analytics engines can also provide automation and issue " -"notification, by providing a mechanism to both alert and automatically " -"attempt to remediate some of the more commonly known issues." -msgstr "" -"对日志、监测以及报警的需求是由运维考虑决定的。它们的每个子类别都包含了大量的" -"属性。举例,在日志子类别中,某些情况考虑使用Logstash,Splunk,instanceware " -"Log Insight等,或者其他的日志聚合-合并工具。日志需要存放在中心地带,使分析数" -"据变得更为简单。日志数据分析引擎亦得提供自动化和问题通知,通过提供一致的预警" -"和自动修复某些常见的已知问题。" - -msgid "Operational costs" -msgstr "运营成本" - -msgid "Operator requirements" -msgstr "运营者的需求" - -msgid "Orchestration (heat)" -msgstr "编排 (heat)" - -msgid "" -"Organizations need to determine if it is logical to create their own clouds " -"internally. Using a private cloud, organizations are able to maintain " -"complete control over architectural and cloud components." -msgstr "" -"组织需要决定在内部建立自己的云的逻辑。私有云的使用,组织需要维护整个架构中的" -"控制点以及组件。" - -msgid "" -"Outside of the traditional data center the limitations of layer-2 network " -"architectures become more obvious." -msgstr "在传统数据中心之外,二层网络架构的局限性更加明显。" - -msgid "Overlay interconnects for joining separated tenant networks" -msgstr "覆盖网络之间的相互连接以连接分离的租户网络" - -msgid "Overlay networks" -msgstr "覆盖网络" - -msgid "Performance" -msgstr "性能" - -msgid "" -"Performance of the controller services is not limited to processing power, " -"but restrictions may emerge in serving concurrent users. Ensure that the " -"APIs and Horizon services are load tested to ensure that you are able to " -"serve your customers. Particular attention should be made to the OpenStack " -"Identity Service (Keystone), which provides the authentication and " -"authorization for all services, both internally to OpenStack itself and to " -"end-users. This service can lead to a degradation of overall performance if " -"this is not sized appropriately." -msgstr "" -"控制器服务的性能不仅仅限于处理器的强大能力,也受限于所服务的并发用户。确认应" -"用程序接口和Horizon服务的负载测试可以>承受来自用户客户的压力。要特别关注" -"OpenStack认证服务(Keystone)的负载,Keystone为所有服务提供认证和授权,无论是最" -"终用户还是OpenStack的内部服务。如果此控制器服务没有正确的被设计,会导致整个环" -"境的性能低下。" - -msgid "Performance tuning" -msgstr "性能调优" - -msgid "Persistent block storage." -msgstr "持久化的块存储。" - -msgid "" -"Physical data centers have limited physical space, power, and cooling. The " -"number of hosts (or hypervisors) that can be fitted into a given metric " -"(rack, rack unit, or floor tile) is another important method of sizing. " -"Floor weight is an often overlooked consideration. The data center floor " -"must be able to support the weight of the proposed number of hosts within a " -"rack or set of racks. These factors need to be applied as part of the host " -"density calculation and server hardware selection." 
-msgstr "" -"物理数据中心【相对应的有虚拟数据中心,译者注】受到物理空间、电源以及制冷的限" -"制。主机(hypervisor)的数量需要适应所给定的条件(机架,机架单元,地板),这是另" -"外一个重要衡量规模大小的办法。地板受重通常是一个被忽视的因素。数据中心的地板" -"必须能够支撑一定数量的主机,当然是放在一个机架或一组机架中。这些因素都需要作" -"为主机密度计算的一部分和服务器硬件的选择来考虑,并需要通过。" - -msgid "Planned (maintenance)" -msgstr "计划内(维护)" - -msgid "Platform-as-a-Service (PaaS)" -msgstr "平台即服务(PaaS)" - -msgid "Plug-in support for 3rd parties" -msgstr "有第三方的插件支持" - -msgid "Policy management" -msgstr "规则管理" - -msgid "Port count" -msgstr "端口数目" - -msgid "Port density" -msgstr "端口密度" - -msgid "Port speed" -msgstr "端口速度" - -msgid "Possible solutions" -msgstr "可能的解决方案" - -msgid "Possible solutions: deployment" -msgstr "可能的解决方案:部署" - -msgid "Possible solutions: hypervisor" -msgstr "可能的解决方案:宿主机" - -msgid "Power and cooling density" -msgstr "电源和制冷密度" - -msgid "Power density" -msgstr "电源密度" - -msgid "Power requirements" -msgstr "电力要求" - -msgid "Preparing for the future: IPv6 support" -msgstr "为将来做准备:IPv6 支持" - -msgid "Prescriptive example" -msgstr "示例" - -msgid "Prescriptive examples" -msgstr "示例" - -msgid "Private (non-routable) and public (floating) IP addresses" -msgstr "私有(不可路由到达)及公有(浮动) IP 地址" - -msgid "" -"Projecting growth for storage, networking, and compute is only one aspect of " -"a growth plan for running OpenStack at massive scale. Growing and nurturing " -"development and operational staff is an additional consideration. Sending " -"team members to OpenStack conferences, meetup events, and encouraging active " -"participation in the mailing lists and committees is a very important way to " -"maintain skills and forge relationships in the community. For a list of " -"OpenStack training providers in the marketplace, see: http://www.openstack.org/" -"marketplace/training/." -msgstr "" -"对存储、网络以及计算等资源的增长进行规划只是为大规模运行 OpenStack 进行的扩展" -"规划的一个方面。对于开发及运维人员的增长以及能力的培养,也是一个重要的考虑因" -"素。让团队的成员参加 OpenStack 大型会议和聚会,鼓励团队成员积极参与邮件列表以" -"及委员会的讨论等,都是让他们保持技能领先并与社区建立良好关系的非常重要的方" -"式。另外,这里还有一个市场上提供 OpenStack 相关技能培训的机构的列表:http://www." -"openstack.org/marketplace/training/." - -msgid "Protocol support" -msgstr "协议支持" - -msgid "Provider API changes" -msgstr "提供者 API 变化" - -msgid "Provider availability or implementation details" -msgstr "供应商可用性或实现细节" - -msgid "Providing a simple database" -msgstr "提供一简单的数据库" - -msgid "Proxy:" -msgstr "代理:" - -msgid "Public" -msgstr "公有" - -msgid "Public cloud" -msgstr "公有云" - -msgid "Public security domains" -msgstr "公共安全域" - -msgid "" -"Put your eggs in multiple baskets: Leverage multiple providers, geographic " -"regions and availability zones to accommodate for local availability issues. " -"Design for portability." -msgstr "" -"将鸡蛋放在多个篮子里:考虑多个供应商,基于地理分区不同的数据中心,多可用的" -"zones以容纳本地存在的隐患。可移植性的设计。" - -msgid "Quota management" -msgstr "配额管理" - -msgid "RAM allocation ratio: 1.5:1" -msgstr "RAM 超分配比例: 1.5:1" - -msgid "RAM-intensive" -msgstr "内存密集型" - -msgid "REST proxy:" -msgstr "REST 代理:" - -msgid "" -"Rack-mounted servers that support multiple independent servers in a single " -"2U or 3U enclosure, \"sled servers\", deliver increased density as compared " -"to a typical 1U-2U rack-mounted servers." -msgstr "" -"“雪撬服务器”,支持在单个2U或3U的空间放置多个独立的服务器,增加的密度超过典型的" -"1U-2U机架服务器," - -msgid "Raw block storage" -msgstr "Raw 块存储" - -msgid "Red Hat Enterprise Linux (RHEL)" -msgstr "红帽企业Linux(RHEL)" - -msgid "Reduced overhead of the IP hierarchy." -msgstr "减少了 IP 层的开销" - -msgid "Redundancy" -msgstr "冗余" - -msgid "" -"Redundancy and availability requirements impact the decision to use a RAID " -"controller card in block storage nodes. 
The input-output per second (IOPS) " -"demand of your application will influence whether or not you should use a " -"RAID controller, and which level of RAID is required. Making use of higher " -"performing RAID volumes is suggested when considering performance. However, " -"where redundancy of block storage volumes is more important we recommend " -"making use of a redundant RAID configuration such as RAID 5 or RAID 6. Some " -"specialized features, such as automated replication of block storage " -"volumes, may require the use of third-party plug-ins and enterprise block " -"storage solutions in order to provide the high demand on storage. " -"Furthermore, where extreme performance is a requirement it may also be " -"necessary to make use of high speed SSD disk drives' high performing flash " -"storage solutions." -msgstr "" -"决定在块存储节点中使用RAID控制卡主要取决于应用程序对冗余和可用性的需求。应用" -"如果对每秒输入输出(IOPS)有很高的要求,不仅得使用RAID控制器,还得配置RAID的级" -"别值,当性能是重要因素时,建议使用高级别的RAID值,相对比的情况是,如果冗余的" -"因素考虑更多谢,那么就使用冗余RAID配置,比如RAID5或RAID6。一些特殊的特性,例" -"如自动复制块存储卷,需要使用第三方插件和企业级的块存储解决方案,以满足此高级" -"需求。进一步讲,如果有对性能有极致的要求,可以考虑使用告诉的SSD磁盘,即高性能" -"的flash存储解决方案。" - -msgid "Redundant networking: ToR switch high availability risk analysis" -msgstr "冗余网络:柜顶交换机高可用风险分析" - -msgid "References" -msgstr "参考" - -msgid "Reliability" -msgstr "可靠性" - -msgid "Reliability and availability" -msgstr "可靠性和可用性:" - -msgid "" -"Reliability and availability depend on wide area network availability and on " -"the level of precautions taken by the service provider." -msgstr "可靠性和可用性依赖于广域网的可用性以及服务提供商采取的预防措施的级别。" - -msgid "Remaining licensing details are filled in by the template." -msgstr "其余的授权细节来自于模版" - -msgid "Replication" -msgstr "重复" - -msgid "" -"Repurposing an existing OpenStack environment to be massively scalable is a " -"formidable task. When building a massively scalable environment from the " -"ground up, ensure you build the initial deployment with the same principles " -"and choices that apply as the environment grows. For example, a good " -"approach is to deploy the first site as a multi-site environment. This " -"enables you to use the same deployment and segregation methods as the " -"environment grows to separate locations across dedicated links or wide area " -"networks. In a hyperscale cloud, scale trumps redundancy. Modify " -"applications with this in mind, relying on the scale and homogeneity of the " -"environment to provide reliability rather than redundant infrastructure " -"provided by non-commodity hardware solutions." -msgstr "" -"将一个现存的为其他目的而设计的 OpenStack 环境改造成为可大规模扩展的类型,是一" -"项艰巨的任务。当从头建造一个可大规模扩展的环境时,需要确保初始的部署也是依据" -"不管环境如何增长依然能够适用的原则和选择而建造的。举例来说,在只在第一个地点" -"进行部署的时候,就将整个环境当作是一个多点环境进行部署,就是一个比较好的方" -"式,因为这使得相同的部署及隔离方法,随着环境的扩展,也能够在其它不同的地点被" -"使用,这些地点之间通过专门的线路或者广域网进行连接。在超大规模的环境中,扩展" -"胜过冗余。这种场景下的应用必须依据这个原则进行修改,依赖于整个环境的规模和扩" -"展性和同质性以提供可靠性,而不是使用非商品的硬件解决方案提供的冗余基础设施。" - -msgid "" -"Resiliency of overall system and individual components are going to be " -"dictated by the requirements of the SLA, meaning designing for high " -"availability (HA) can have cost ramifications." 
-msgstr "" -"保持弹性的系统,松耦合的组件,都是SLA的需求,这也意味着设计高可用(HA)需要花费" -"更多。" - -msgid "Resource capacity" -msgstr "资源容量" - -msgid "Response time to the Compute API" -msgstr "计算API的响应时间" - -msgid "Revenue opportunity" -msgstr "赢利空间" - -msgid "Risk mitigation and management considerations" -msgstr "风险规避和管理考虑" - -msgid "Risks" -msgstr "风险" - -msgid "Routing daemons" -msgstr "路由守护进程" - -msgid "Routing through or avoiding specific networks" -msgstr "使路由通过或者避免通过某个特定网络" - -msgid "" -"Running OpenStack at massive scale requires striking a balance between " -"stability and features. For example, it might be tempting to run an older " -"stable release branch of OpenStack to make deployments easier. However, when " -"running at massive scale, known issues that may be of some concern or only " -"have minimal impact in smaller deployments could become pain points. Recent " -"releases may address well known issues. The OpenStack community can help " -"resolve reported issues by applying the collective expertise of the " -"OpenStack developers." -msgstr "" -"在大规模的场景下运行 OpenStack 需要在稳定性与功能之间做好平衡。比如说,选择比" -"较旧的稳定版本的 OpenStack 以便使得部署更容易看起来比较令人动心。然而在大规模" -"部署的场景之下,对小规模部署造成的困扰不大或者甚至没有什么影响的已知问题,对" -"于大规模的部署来说都可能是缺陷。假如该问题是广为人知的,在通常情况下它可能在" -"较新的发布版本中被解决了。OpenStack 社区能够运用 OpenStack 社区开发者的集体智" -"慧,帮助解决报告到社区中的问题。" - -msgid "" -"Running up to 140 web instances and the small number of MariaDB instances " -"requires 292 vCPUs available, as well as 584 GB RAM. On a typical 1U server " -"using dual-socket hex-core Intel CPUs with Hyperthreading, and assuming 2:1 " -"CPU overcommit ratio, this would require 8 OpenStack Compute nodes." -msgstr "" -"运行140个web实例以及少量的MariaDB实例需要292颗vCPU,以及584GB内存。在典型的1U" -"的服务器,使用双socket,16核,开启超线程的IntelCPU,计算为2:1的CPU超分配比" -"例,可以得出需要8台这样的OpenStack计算节点。" - -msgid "" -"SDN is a relatively new concept that is not yet standardized, so SDN systems " -"come in a variety of different implementations. Because of this, a truly " -"prescriptive architecture is not feasible. Instead, examine the differences " -"between an existing and a planned OpenStack design and determine where " -"potential conflicts and gaps exist." -msgstr "" -"相对来说,SDN 是一个比较新的,仍未被标准化的概念,所以 SDN 系统可能来自很多不" -"同的具体实现。因此,一个真正意义上示范性架构是目前无法给出的。相反的,我们只" -"能够分析当前或者目标 OpenStack 设计中的各种不同,并确定哪些地方将会出现潜在的" -"冲突或者还存在差距。" - -msgid "SUSE Linux Enterprise Server (SLES)" -msgstr "SUSE Linux Enterprise Server (SLES)" - -msgid "Scalability" -msgstr "可扩展性" - -msgid "" -"Scalability refers to how the storage solution performs as it expands to its " -"maximum size. Storage solutions that perform well in small configurations " -"but have degraded performance in large configurations are not scalable. A " -"solution that performs well at maximum expansion is scalable. Large " -"deployments require a storage solution that performs well as it expands." -msgstr "" -"此节参考术语\"扩展性“,来解释存储解决方案的表现可扩展到最大规模是怎么个好法。" -"一个存储解决方案在小型配置时表现良好,但是在规模扩展的过程中性能降低,这就不" -"是好的扩展性,也不会被考虑。换句话说,一个解决方案只有在规模扩展最大性能没有" -"任何的降低才是好的扩展性。" - -msgid "" -"Scalability, along with expandability, is a major consideration in a general " -"purpose OpenStack cloud. It might be difficult to predict the final intended " -"size of the implementation as there are no established usage patterns for a " -"general purpose cloud. It might become necessary to expand the initial " -"deployment in order to accommodate growth and user demand." 
-msgstr "" -"可扩展性是通用型OpenStack云主要考虑的因素。正因为通用型云没有固定的使用模式," -"也许导致预测最终的使用大小是很困难的。也许就有必要增加初始部署规模以应对数据" -"增长和用户需求。" - -msgid "Scale" -msgstr "规模" - -msgid "Scale and performance" -msgstr "规模和性能" - -msgid "Scales well" -msgstr "能够很好地扩展" - -msgid "Scaling Block Storage" -msgstr "扩展块存储" - -msgid "Scaling Object Storage" -msgstr "扩展对象存储" - -msgid "Scaling requires 3rd party plug-ins" -msgstr "需要第三方插件进行扩展" - -msgid "Scaling storage services" -msgstr "扩展存储服务" - -msgid "" -"Scaling storage solutions in a storage-focused OpenStack architecture design " -"is driven by initial requirements, including IOPS, " -"capacity, bandwidth, and future needs. Planning capacity based on projected " -"needs over the course of a budget cycle is important for a design. The " -"architecture should balance cost and capacity, while also allowing " -"flexibility to implement new technologies and methods as they become " -"available." -msgstr "" -"在一个存储型的 OpenStack 架构设计当中,存储解决方案的规模,是由初始的需求,包" -"括 IOPS、容量以及带宽等,以及未来的需要所决定的。对于" -"一个设计来说,在整个预算周期之中,基于项目需要而计划容量,是很重要的。理想情" -"况下,所选择的架构必须在成本以及容量之间做出平衡,同时又必须提供足够的灵活" -"性,以便在新技术和方法可用时实现它们。" - -msgid "" -"Scott Lowe (VMware) @scott_lowe" -msgstr "" -"Scott Lowe (VMware) @scott_lowe" - -msgid "" -"Sean Collins (Comcast) @sc68cal" -msgstr "" -"Sean Collins (Comcast) @sc68cal" - -msgid "" -"Sean Winn (Cloudscaling) @seanmwinn" -msgstr "" -"Sean Winn (Cloudscaling) @seanmwinn" - -msgid "" -"Sebastian Gutierrez (Red Hat) @gutseb" -msgstr "" -"Sebastian Gutierrez (Red Hat) @gutseb" - -msgid "Security" -msgstr "安全性" - -msgid "Security and legal requirements" -msgstr "安全和法律要求" - -msgid "Security domains" -msgstr "安全域" - -msgid "Security levels" -msgstr "安全级别" - -msgid "" -"Security should be implemented according to asset, threat, and vulnerability " -"risk assessment matrices. For cloud domains that require increased computer " -"security, network security, or information security, a general purpose cloud " -"is not considered an appropriate choice." -msgstr "" -"安全应根据资产,威胁和脆弱性风险评估矩阵来实现。对于云领域来说,更加增加了计" -"算安全、网络安全和信息安全等的需求。通用型云不被认为是恰当的选择。" - -msgid "Segregation example" -msgstr "隔离的例子" - -msgid "" -"Selected supplemental software solution impacts and affects the overall " -"OpenStack cloud design. This includes software for providing clustering, " -"logging, monitoring and alerting." -msgstr "" -"选择支撑软件解决方案影响着整个OpenStack云设计过程。这些软件可提供诸如集群、日" -"志、监控以及告警。" - -msgid "" -"Selecting a commercially supported hypervisor, such as Microsoft Hyper-V, " -"will result in a different cost model rather than community-supported open " -"source hypervisors including KVM, Kinstance or Xen. When " -"comparing open source OS solutions, choosing Ubuntu over Red Hat (or vice " -"versa) will have an impact on cost due to support contracts." -msgstr "" -"选择商业支持的hypervisor,诸如微软 Hyper-V,和使用社区支持的开源hypervisor相" -"比,在开销方面有很大的差别。开源的hypervisor有KVM, Kinstance or Xen。当" -"比较开源操作系统解决方案时,选择了Ubuntu而不是红帽(反之亦然),由于支持的合同" -"不同,开销也会不一样。" - -msgid "Selecting networking hardware" -msgstr "选择网络硬件" - -msgid "Selecting storage hardware" -msgstr "选择存储硬件" - -msgid "" -"Selecting the proper zone design is crucial for allowing the Object Storage " -"cluster to scale while providing an available and redundant storage system. " -"It may be necessary to configure storage policies that have different " -"requirements with regards to replicas, retention and other factors that " -"could heavily affect the design of storage in a specific zone." 
-msgstr "" -"决定合适的区域设计的关键是对象存储集群的扩展,还能同时提供可靠和冗余的存储系" -"统。进一步讲,也许还需要根据不同的需求配置存储的策略,这些策略包括副本,保留" -"以及其它在特定区域会严重影响到存储设计的因素。" - -msgid "Selection of OpenStack software components" -msgstr "选择OpenStack软件组件" - -msgid "Selection of supplemental software" -msgstr "选择支撑软件" - -msgid "Separation of duties" -msgstr "职责分工" - -msgid "Server density" -msgstr "服务器密度" - -msgid "Server hardware" -msgstr "服务器硬件" - -msgid "Signal processing for network function virtualization (NFV)" -msgstr "专门处理网络功能虚拟化(NFV)" - -msgid "" -"Similarly, the default RAM allocation ratio of 1.5:1 means that the " -"scheduler allocates instances to a physical node as long as the total amount " -"of RAM associated with the instances is less than 1.5 times the amount of " -"RAM available on the physical node." -msgstr "" -"同样的,默认的内存超分配比例是1.5:1,这意味着调度器为实例分配的内存总量要少于" -"物理节点内存的1.5倍。" - -msgid "Simple, single agent" -msgstr "简单,单独一个代理程序" - -msgid "Single Point Of Failure (SPOF)" -msgstr "单点故障(SPOF)" - -msgid "Site loss and recovery" -msgstr "站点失效和恢复" - -msgid "" -"Sizing is an important consideration for a general purpose OpenStack cloud. " -"The expected or anticipated number of instances that each hypervisor can " -"host is a common meter used in sizing the deployment. The selected server " -"hardware needs to support the expected or anticipated instance density." -msgstr "" -"对于通用型OpenStack云来说规模大小是一个很重要的考虑因素。预料或预期在每个" -"hypervisor上可以运行多少实例,是部署中衡量大小的一个普遍元素。选择服务器硬件" -"需要支持预期的实例密度。" - -msgid "Skills and training" -msgstr "技能和培训" - -msgid "Software bundles" -msgstr "软件集合" - -msgid "Software selection" -msgstr "软件选择" - -msgid "" -"Software selection for a general purpose OpenStack architecture design needs " -"to include these three areas:" -msgstr "软件选择对于通用型 OpenStack 架构设计来说需要包括以下三个方面:" - -msgid "Software to provide load balancing" -msgstr "提供负载均衡的软件" - -msgid "Software-defined networking" -msgstr "软件定义网络" - -msgid "" -"Software-defined networking (SDN) is the separation of the data plane and " -"control plane. SDN is a popular method of managing and controlling packet " -"flows within networks. SDN uses overlays or directly controlled layer-2 " -"devices to determine flow paths, and as such presents challenges to a cloud " -"environment. Some designers may wish to run their controllers within an " -"OpenStack installation. Others may wish to have their installations " -"participate in an SDN-controlled network." -msgstr "" -"软件定义网络(SDN)指的是网络数据转发平面以及控制平面的隔离。SDN 已经成为在网络" -"中管理及控制网络包流的流行方案。SDN 使用覆盖网络或者直接控制的-2层网络设备来" -"监测网络流的路径,这对云环境提出了一些挑战。有些设计者可能希望在 OpenStack 环" -"境中运行他们的控制器。另外一些则可能希望将他们的 OpenStack 环境加入到一个 由 " -"SDN 方式进行控制的网络之中。" - -msgid "Solutions" -msgstr "解决方案" - -msgid "" -"Solutions that employ Galera/MariaDB require at least three MySQL nodes." -msgstr "解决方案采用Galera/MariaDB,需要至少3个MySQL节点。" - -msgid "" -"Some applications that interact with a network require specialized " -"connectivity. Applications such as a looking glass require the ability to " -"connect to a BGP peer, or route participant applications may need to join a " -"network at a layer 2 level." 
-msgstr "" -"有些与网络进行互动的应用需要更加专门的连接。类似于 looking glass 之类的应用需" -"要连接到 BGP 节点,或者路由参与者应用可能需要在2层上加入一个网络。" - -msgid "" -"Some areas that could be impacted by the selection of OS and hypervisor " -"include:" -msgstr "可能受到 OS 和虚拟管理程序的选择所影响的一些领域包括:" - -msgid "" -"Some of the key considerations that should be included in the selection of " -"networking hardware include:" -msgstr "选择联网硬件时需要考虑的一些关键因素包括:" - -msgid "" -"Some of the key technical considerations that are critical to a storage-" -"focused OpenStack design architecture include:" -msgstr "对存储型的 OpenStack 设计架构来说比较关键的一些技术上的考虑因素包括:" - -msgid "Some possible use cases include:" -msgstr "可能的用例方案包括:" - -msgid "" -"Some server hardware form factors are better suited to storage-focused " -"designs than others. The following is a list of these form factors:" -msgstr "" -"一些服务器硬件的因素更加适合其他类型的,但是CPU和内存的能力拥有最高的优先级。" - -msgid "" -"Some use cases that might indicate a need for a multi-site deployment of " -"OpenStack include:" -msgstr "一些应该使用多区域布署的案例可能会有以下特征:" - -msgid "Special considerations" -msgstr "特殊因素" - -msgid "Specialized cases" -msgstr "特殊场景" - -msgid "Specialized hardware" -msgstr "专门的硬件" - -msgid "Specialized networking example" -msgstr "特殊网络应用的例子" - -msgid "Speed" -msgstr "速度" - -msgid "" -"Staff must have training with the chosen hypervisor. Consider the cost of " -"training when choosing a solution. The support of a commercial product such " -"as Red Hat, SUSE, or Windows, is the responsibility of the OS vendor. If an " -"open source platform is chosen, the support comes from in-house resources." -msgstr "" -"无论选择那个hypervisor,相关的技术人员都要经过适当的培训和知识积累,才可以支持" -"所选择的操作系统和hypervisor组合。如果这些维护人员没有培训过,那么就得提供," -"当然它会影响到设计中的之处。另外一个考虑的方面就是关于操作系统-hypervisor的支" -"持问题,商业产品如Red Hat,SUSE或Windows等的支持是由操作系统供应商来支持的," -"如果选用了开源的平台,支持大部分得来自于内部资源。无论何种决定,都会影响到设" -"计时的支出。" - -msgid "" -"Stay close: Reduce latency by moving highly interactive components and data " -"near each other." -msgstr "贴近原则:通过移动高度密切的组件和相似数据靠近以减少延迟。" - -msgid "" -"Stephen Gordon (Red Hat) @xsgordon" -msgstr "" -"Stephen Gordon (Red Hat) @xsgordon" - -msgid "Storage" -msgstr "存储" - -msgid "Storage architecture" -msgstr "存储架构" - -msgid "" -"Storage can be a significant portion of the overall system cost. For an " -"organization that is concerned with vendor support, a commercial storage " -"solution is advisable, although it comes with a higher price tag. If initial " -"capital expenditure requires minimization, designing a system based on " -"commodity hardware would apply. The trade-off is potentially higher support " -"costs and a greater risk of incompatibility and interoperability issues." 
-msgstr "" -"存储在整个系统的开销中占有很大一部分。对于一个组织来说关心的是提供商的支持," -"以及更加倾向于商业的存储解决方案,尽管它们的价格是很高。假如最初的投入希望是" -"最少的,基于普通的硬件来设计系统也是可接受的,这就是权衡问题,一个是潜在的高" -"支持成本,还有兼容性和互操作性的高风险问题。" - -msgid "" -"Storage capacity (gigabytes or terabytes as well as Input/Output Operations " -"Per Second (IOPS)" -msgstr "存储能力(每秒输入/输入操作(IOPS),GB或TB)" - -msgid "Storage focused" -msgstr "存储型" - -msgid "Storage hardware:" -msgstr "存储硬件:" - -msgid "Storage limits" -msgstr "存储限制" - -msgid "Storage management" -msgstr "存储管理" - -msgid "Storage performance" -msgstr "存储性能" - -msgid "Storage-intensive" -msgstr "存储密集型" - -msgid "Structure" -msgstr "结构" - -msgid "Supplemental software" -msgstr "增强软件" - -msgid "Support" -msgstr "支持" - -msgid "Support and maintainability" -msgstr "支持和维护" - -msgid "Supportability" -msgstr "受支持程度" - -msgid "Supported features" -msgstr "支持的特性" - -msgid "" -"Swift is a highly scalable object store that is part of the OpenStack " -"project. This diagram explains the example architecture: " -msgstr "" -"Swift 是一个高度可扩展的对象存储,同时它也是 OpenStack 项目中的一部分。以下是" -"说明示例架构的一幅图示:" - -msgid "" -"Technical cloud architecture requirements should be weighted against the " -"business requirements." -msgstr "技术云架构需求相比业务需求,比重占得更多一些。" - -msgid "Technical considerations" -msgstr "技术因素" - -msgid "Technical requirements" -msgstr "技术需求" - -msgid "Telemetry uses MongoDB." -msgstr "Telemetry 使用MongoDB。" - -msgid "" -"The Controller infrastructure nodes provide management services to the end-" -"user as well as providing services internally for the operating of the " -"cloud. The Controllers run message queuing services that carry system " -"messages between each service. Performance issues related to the message bus " -"would lead to delays in sending that message to where it needs to go. The " -"result of this condition would be delays in operation functions such as " -"spinning up and deleting instances, provisioning new storage volumes and " -"managing network resources. Such delays could adversely affect an " -"application’s ability to react to certain conditions, especially when using " -"auto-scaling features. It is important to properly design the hardware used " -"to run the controller infrastructure as outlined above in the Hardware " -"Selection section." -msgstr "" -"控制器基础设施节点为最终用户提供的管理服务,以及在云内部为运维提供服务。控制" -"器较典型的现象,就是运行消息队列服务,在每个服务之间携带传递系统消息。性能问" -"题常和消息总线有关,它会延迟发送的消息到应该去的地方。这种情况的结果就是延迟" -"了实际操作的功能,如启动或删除实例、分配一个新的存储卷、管理网络资源。类似的" -"延迟会严重影响到应用的反应能力,尤其是使用了自动扩展这样的特性。所以运行控制" -"器基础设施的硬件设计是头等重要的,具体参考上述硬件选择一节。" - -msgid "" -"The OS-hypervisor needs to be interoperable with other features and services " -"in the OpenStack design in order to meet the user requirements." -msgstr "" -"在OpenStack的设计中,为满足用户需求,需要操作系统的hypervisor在彼此的特性和服" -"务中要有互操作性。" - -msgid "" -"The OpenStack dashboard, OpenStack Identity, and OpenStack Object Storage " -"services are components that can each be deployed centrally in order to " -"serve multiple regions." -msgstr "" -"OpenStack GUI,OpenStack认证,以及OpenStack对象存储等服务为了服务于多区域,这" -"些组件需要部署在中心化位置。" - -msgid "" -"The OpenStack services themselves should be deployed across multiple servers " -"that do not represent a single point of failure. Ensuring API availability " -"can be achieved by placing these services behind highly available load " -"balancers that have multiple OpenStack servers as members." -msgstr "" -"OpenStack自身的那些服务需要在跨多个服务器上部署,不能出现单点故障。确保应用程" -"序接口的可用性,作为多个OpenStack服务的成员放在高可用负载均衡的后面。" - -msgid "" -"The OpenStack-on-OpenStack project (TripleO) " -"addresses this issue. 
Currently, however, the project does not completely " -"cover nested stacks. For more information, see https://wiki.openstack.org/wiki/TripleO." -msgstr "" -"目前,OpenStack-on-OpenStack 项目(TripleO https://wiki.openstack.org/" -"wiki/TripleO 了解该项目的更多信息。" - -msgid "The author team includes:" -msgstr "作者团队成员有:" - -msgid "" -"The availability design requirements determine the selection of Clustering " -"Software, such as Corosync or Pacemaker. The availability of the cloud " -"infrastructure and the complexity of supporting the configuration after " -"deployment determines the impact of including these software packages. The " -"OpenStack High Availability Guide provides more " -"details on the installation and configuration of Corosync and Pacemaker." -msgstr "" -"在集群软件中如 Corosync和Pacemaker在可用性需求中占主流。包含这些软件包是主要" -"决定的,要使云基础设施具有高可用的话,当然在部署之后会带来复杂的配置。" -"OpenStack 高可用指南 提供了更加详细的安装和配置" -"Corosync和Pacemaker,所以在设计中这些软件包需要被包含。 " - -msgid "The bleeding edge" -msgstr "最前沿" - -msgid "" -"The choice of hardware specifications used in compute nodes including CPU, " -"memory and disk type directly affects the performance of the instances. " -"Other factors which can directly affect performance include tunable " -"parameters within the OpenStack services, for example the overcommit ratio " -"applied to resources. The defaults in OpenStack Compute set a 16:1 over-" -"commit of the CPU and 1.5 over-commit of the memory. Running at such high " -"ratios leads to an increase in \"noisy-neighbor\" activity. Care must be " -"taken when sizing your Compute environment to avoid this scenario. For " -"running general purpose OpenStack environments it is possible to keep to the " -"defaults, but make sure to monitor your environment as usage increases." -msgstr "" -"为计算节点选择硬件规格包括CPU,内存和磁盘类型,会直接影响到实例的性能。另外直" -"接影响性能的情形是为OpenStack服务优化参数,例如资源的超分配比例。默认情况下" -"OpenStack计算设置16:1为CPU的超分配比例,内存为1.5:1。运行跟高比例的超分配即会" -"导致有些服务无法启动。调整您的计算环境时,为了避免这种情况下必须小心。运行一" -"个通用型的OpenStack环境保持默认配置就可以,但是也要检测用户的环境中使用量的增" -"加。" - -msgid "" -"The chosen high availability database solution changes according to the " -"selected database. MySQL, for example, provides several options. Use a " -"replication technology such as Galera for active-active clustering. For " -"active-passive use some form of shared storage. Each of these potential " -"solutions has an impact on the design:" -msgstr "" -"为数据库提供高可用的解决方案选择将改变基于何种数据库。如果是选择了MySQL,有几" -"种方案可供选择,如果是主-主模式集群,则使用 Galera复制技术;如果是主-备模式则" -"必须使用共享存储。每个潜在的方案都会影响到架构的设计:" - -msgid "" -"The cloud user expects repeatable, dependable, and deterministic processes " -"for launching and deploying cloud resources. You could deliver this through " -"a web-based interface or publicly available API endpoints. All appropriate " -"options for requesting cloud resources must be available through some type " -"of user interface, a command-line interface (CLI), or API endpoints." -msgstr "" -"云用户希望对云资源进行启动和部署有可重复的、可靠的以及可确定的操作过程。这些" -"功能可以通过基于 web 的接口或者公开可用的 API 入口抛出。对云资源进行请求的所" -"有相应选项应该通过某种类型的用户接口展现给用户,比如命令行接口(CLI)或者API 入" -"口。" - -msgid "" -"The cloud user's requirements and expectations that determine the cloud " -"design focus on the consumption model. The user expects to consume cloud " -"resources in an automated and deterministic way, without any need for " -"knowledge of the capacity, scalability, or other attributes of the cloud's " -"underlying infrastructure." 
-msgstr "" -"可能与想像中一致,用以确定设计方案的云用户的需求以及期望,都是关注于消费模型" -"之上的。用户希望能够以一种自动化的和确定的方式来使用云中的资源,而不需要以任" -"何对容量、可扩展性或者其它关于该云的底层基础设施的属性的了解作为前提。" - -msgid "" -"The company runs hardware load balancers and multiple web applications " -"serving their websites, and orchestrates environments using combinations of " -"scripts and Puppet. The website generates large amounts of log data daily " -"that requires archiving." -msgstr "" -"公司的网站运行着基于硬件的负载均衡服务器和多个web应用服务,且他们的编排环境是" -"混合使用Puppet和脚本。网站每天都会产生大量的日志文件需要归档。" - -msgid "" -"The cost of components affects which storage architecture and hardware you " -"choose." -msgstr "使用什么存储架构和选择什么硬件将影响着开销。" - -msgid "" -"The data security domain is concerned primarily with information pertaining " -"to the storage services within OpenStack. Much of the data that crosses this " -"network has high integrity and confidentiality requirements and, depending " -"on the type of deployment, may also have strong availability requirements. " -"The trust level of this network is heavily dependent on other deployment " -"decisions." -msgstr "" -"数据安全域主要关心的是OpenStack存储服务相关的信息。多数通过此网络的数据具有高" -"度机密的要求,甚至在某些类型的部署中,还有高可用的需求。此网络的信任级别高度" -"依赖于其他部署的决定。" - -msgid "" -"The default CPU allocation ratio of 16:1 means that the scheduler allocates " -"up to 16 virtual cores per physical core. For example, if a physical node " -"has 12 cores, the scheduler sees 192 available virtual cores. With typical " -"flavor definitions of 4 virtual cores per instance, this ratio would provide " -"48 instances on a physical node." -msgstr "" -"默认的CPU超分配比例是16:1,这意味着调度器可以为每个物理核分配16个虚拟核。举例" -"来说,如果物理节点有12个核,调度器就拥有192个虚拟核。在典型的flavor定义中,每" -"实例4个虚拟核,那么此超分配比例可以在此物理节点上提供48个实例。" - -msgid "" -"The design architecture of a massively scalable OpenStack cloud must address " -"considerations around physical facilities such as space, floor weight, rack " -"height and type, environmental considerations, power usage and power usage " -"efficiency (PUE), and physical security." -msgstr "" -"关于诸如空间、底部负重、机柜高度及类型、环境性因素、电源使用及其使用效率" -"(PUE),以及物理上的安全性等等相关的物理设施的考虑因素,也应该在可大规模扩展" -"的 OpenStack 云的设计架构中一并进行考虑并解决。" - -msgid "" -"The design will require networking hardware that has the requisite port " -"count." -msgstr "设计要求网络硬件有充足的端口数目。" - -msgid "" -"The example REST interface, presented as a traditional Object store running " -"on traditional spindles, does not require a high performance caching tier." -msgstr "" -"所展现的 REST 接口不需要一个高性能的缓存层,并且被作为一个运行在传统设备\n" -"上的传统的对象存储抛出。" - -msgid "" -"The example below shows a REST interface without a high performance " -"requirement." -msgstr "本例描绘了没有高性能需求的 REST 接口。" - -msgid "" -"The factors for determining which software packages in this category to " -"select is outside the scope of this design guide." -msgstr "" -"这包括能够提供集群、日志、监测及预警等的软件。在此目录什么因素决定选择什么软" -"件包已经超出了本书的范围。" - -msgid "" -"The following user considerations are written from the perspective of the " -"cloud builder, not from the perspective of the end user." -msgstr "以下用户考量的内容来自于云构建者的记录,并非最终用户。" - -msgid "" -"The hardware requirements and configuration are similar to those of the High " -"Performance Database example below. In this case, the architecture uses " -"Ceph's Swift-compatible REST interface, features that allow for connecting a " -"caching pool to allow for acceleration of the presented pool." 
-msgstr "" -"实际的硬件需求以及配置与下面的高性能数据库例子相似。在这个场景中,采用的架构" -"使用了 Ceph 的 Swift 兼容 REST 接口,以及允许连接至一个缓存池以加速展现出来的" -"池的特性。" - -msgid "The hardware selection covers three areas:" -msgstr "硬件选择涵盖三个方面:" - -msgid "" -"The lack of a pre-defined usage model enables the user to run a wide variety " -"of applications without having to know the application requirements in " -"advance. This provides a degree of independence and flexibility that no " -"other cloud scenarios are able to provide." -msgstr "" -"由于缺少预先定义的使用模型,导致用户在根本不知道应用需求的情况运行各式各样的" -"应用。这里(OpenStack 通用型云,译者注)提供的独立性和灵活性的深度,也就只能" -"是这么多了,不能提供更多的云场景。" - -msgid "" -"The latency of storage I/O requests indicates performance. Performance " -"requirements affect which solution you choose." -msgstr "存储I/O请求的延迟影响着性能。解决方案的选择会影响到性能的需求。" - -msgid "" -"The level of network hardware redundancy required is influenced by the user " -"requirements for high availability and cost considerations. Network " -"redundancy can be achieved by adding redundant power supplies or paired " -"switches. If this is a requirement, the hardware will need to support this " -"configuration." -msgstr "" -"网络硬件的冗余级别需求是受用户对于高可用和开销考虑的影响的。网络冗余可以由增" -"加冗余的电力供应和结对的交换机来实现,加入遇到这样的需求,硬件需要支持冗余的" -"配置。" - -msgid "" -"The management tools used for Ubuntu and Kinstance differ from the " -"management tools for VMware vSphere. Although both OS and hypervisor " -"combinations are supported by OpenStack, there will be very different " -"impacts to the rest of the design as a result of the selection of one " -"combination versus the other." -msgstr "" -"Ubuntu和Kinstance的管理工具和VMware vSphere的管理工具是不一样的。尽管" -"OpenStack支持它们所有的操作系统和hypervisor组合。这也会对其他的设计有着非常不" -"同的影响,结果就是选择了一种组合再作出选择。" - -msgid "" -"The network design should encompass a physical and logical network design " -"that can be easily expanded upon. Network hardware should offer the " -"appropriate types of interfaces and speeds that are required by the hardware " -"nodes." -msgstr "" -"网络设计须围绕物理网路和逻辑网络能够轻松扩展的设计来开展。网络硬件须提供给服" -"务器节点所需要的合适的接口类型和速度。" - -msgid "" -"The network design will be affected by the physical space that is required " -"to provide the requisite port count. A higher port density is preferred, as " -"it leaves more rack space for compute or storage components that may be " -"required by the design. This can also lead into concerns about fault domains " -"and power density that should be considered. Higher density switches are " -"more expensive and should also be considered, as it is important not to over " -"design the network if it is not required." -msgstr "" -"由于端口数量的需求,就需要更大的物理空间,以至于会影响到网络的设计。一旦首选" -"敲定了高端口密度,在设计中就得考虑为计算和存储留下机架空间。进一步还得考虑容" -"错设备和电力密度。高密度的交换机会非常的昂贵,亦需考虑在内,当然如若不是刚性" -"需求,这个是没有设计网络本身重要。" - -msgid "" -"The networking hardware must support the proposed network speed, for " -"example: 1GbE, 10GbE, or 40GbE (or even 100GbE)." -msgstr "" -"网络硬件必须支持常见的网络速度,例如:1GbE、10GbE 或者 40GbE(甚至是 100GbE)。" - -msgid "" -"The number of CPU cores, how much RAM, or how much storage a given server " -"delivers." -msgstr "CPU核数,多少内存,或者多少存储可以交付。" - -msgid "The number of MACs stored in switch tables is limited." -msgstr "存储在交换表中的 MAC 地址数目是有限的。" - -msgid "" -"The number of additional resources you can add to a server before it reaches " -"capacity." -msgstr "在达到容量之前用户可以增加服务器来增加资源的数量。" - -msgid "" -"The number of organizations running at massive scales is a small proportion " -"of the OpenStack community, therefore it is important to share related " -"issues with the community and be a vocal advocate for resolving them. 
Some " -"issues only manifest when operating at large scale, and the number of " -"organizations able to duplicate and validate an issue is small, so it is " -"important to document and dedicate resources to their resolution." -msgstr "" -"当出现问题的时候,在差不多规模场景下运行 OpenStack 的组织的数量,相对于整个 " -"OpenStack 社区来说,是极小的一个比例,因此很重要的一件事情是要与社区分享所遇" -"到的问题,并且在社区中积极倡导将这些问题解决。有些问题可能只在大规模部署的场" -"景下才会出现,所以能够重现和验证该问题的组织是为数不多的,因此将问题良好地文" -"档化并且为问题的解决贡献一些必要的资源,是尤为重要的。" - -msgid "The performance of the applications running on virtual desktops" -msgstr "在提供的虚拟桌面中运行的应用的性能" - -msgid "" -"The physical space required to provide the requisite port count affects the " -"network design. A switch that provides 48 10GbE ports in 1U has a much " -"higher port density than a switch that provides 24 10GbE ports in 2U. On a " -"general scale, a higher port density leaves more rack space for compute or " -"storage components which is preferred. It is also important to consider " -"fault domains and power density. Finally, higher density switches are more " -"expensive, therefore it is important not to over design the network." -msgstr "" -"网络的设计会受到物理空间的影响,需要提供足够的端口数。一个占用1U机柜空间的可" -"提供48个 10GbE端口的交换机,显而易见的要比占用2U机柜空间的仅提供24个 10GbE端" -"口的交换机有着更高的端口密度。高端口密度是首先选择的,因为其可以为计算和存储" -"省下机柜空间。这也会引起人们的思考,容错的情况呢?电力密度?高密度的交换机更" -"加的昂贵,也应该被考虑使用,但是没有必要覆盖设计中所有的网络,要视实际情况而" -"定。" - -msgid "" -"The power and cooling density requirements might be lower than with blade, " -"sled, or 1U server designs due to lower host density (by using 2U, 3U or " -"even 4U server designs). For data centers with older infrastructure, this " -"might be a desirable feature." -msgstr "" -"电力和制冷的密度需求要低于刀片、雪撬或1U服务器,因为(使用2U,3U甚至4U服务器)拥" -"有更低的主机密度。对于数据中心内有旧的基础设施,这是非常有用的特性。" - -msgid "The primary factors that play into OS-hypervisor selection include:" -msgstr "影响操作系统-hypervisor选择的主要因素包括:" - -msgid "" -"The process for upgrading a multi-site environment is not significantly " -"different:" -msgstr "升级多区域环境的流程没有什么特别的不一样:" - -msgid "" -"The relative purchase price of the hardware weighted against the level of " -"design effort needed to build the system." -msgstr "相对硬件的购买价格,与构建系统所需要的设计功力的级别成反比。" - -msgid "" -"The selected OS-hypervisor combination needs to be supported by OpenStack." -msgstr "选择操作系统-虚拟机管理程序组合需要OpenStack支持。" - -msgid "" -"The selected supplemental software solution impacts and affects the overall " -"OpenStack cloud design. This includes software for providing clustering, " -"logging, monitoring and alerting." -msgstr "" -"所选择的支撑软件解决方案会影响到整个OpenStack云的设计,它们包括能够提供集群、" -"日志、监测以及预警的软件。" - -msgid "" -"The selection of OS-hypervisor combination first and foremost needs to " -"support the user requirements." -msgstr "选择操作系统-虚拟机管理程序组合,首先且最重要的是支持用户的需求。" - -msgid "" -"The software selection process plays a large role in the architecture of a " -"general purpose cloud. The following have a large impact on the design of " -"the cloud:" -msgstr "" -"软件筛选的过程在通用型云架构中扮演了重要角色。下列的选择都会在设计云时产生重" -"大的影响。" - -msgid "The solution would consist of the following OpenStack components:" -msgstr "解决方案将由下列OpenStack组件组成:" - -msgid "" -"The starting point is the core count of the cloud. By applying relevant " -"ratios, the user can gather information about:" -msgstr "其出发点是云计算的核心数量。通过相关的比例,用户可以收集有关信息:" - -msgid "" -"The storage solution should take into account storage maintenance and the " -"impact on underlying workloads." -msgstr "存储的解决方案需要考虑存储的维护以及其对底层负载的影响。" - -msgid "" -"The user requires networking hardware that has the requisite port count." 
-msgstr "用户将会对网络设备有充足的端口数有需求。" - -msgid "" -"The web application instances run from local storage on each of the " -"OpenStack Compute nodes. The web application instances are stateless, " -"meaning that any of the instances can fail and the application will continue " -"to function." -msgstr "" -"web应用实例均运行在每个OpenStack计算节点的本地存储之上。所有的web应用实例都是" -"无状态的,也就是意味着任何的实例宕掉,都不会影响到整体功能的继续服务。" - -msgid "" -"There are a variety of well tested tools, for example ICMP, to monitor and " -"manage traffic." -msgstr "有很多经过完善测试的工具,例如 ICMP,来监控和管理流量。" - -msgid "" -"There are several factors to take into consideration when looking at whether " -"an application is a good fit for the cloud." -msgstr "寻找适合于在云中运行的应用,还是有几种方法可以考虑的。" - -msgid "" -"There are special considerations around erasure coded pools. For example, " -"higher computational requirements and limitations on the operations allowed " -"on an object; erasure coded pools do not support partial writes." -msgstr "" -"请注意关于 erasure coded 池的使用有特殊的考虑因素,比如说,更高的计算要求以及" -"对象上所允许的操作限制。另外,部分写入在 erasure coded 的池中也不被支持。" - -msgid "" -"There is also some customization of the filter scheduler that handles " -"placement within the cells:" -msgstr "在cell里的可以定制的过滤器:" - -msgid "" -"There is no single best practice architecture for the networking hardware " -"supporting a general purpose OpenStack cloud that will apply to all " -"implementations. Some of the key factors that will have a strong influence " -"on selection of networking hardware include:" -msgstr "" -"网络硬件没有单一的最佳实践架构以支持一个通用型OpenStack云,让它满足所有的实" -"现。一些在选择网络硬件时有重大影响的关键元素包括:" - -msgid "These security domains are:" -msgstr "这些安全域有:" - -msgid "" -"Think efficiency: Inefficient designs will not scale. Efficient designs " -"become cheaper as they scale. Kill off unneeded components or capacity." -msgstr "" -"考虑效率:低效的设计将不可扩展。高效的设计可以无须花费多少钱即可轻松扩展。去" -"掉那些不需要的组件或容量。" - -msgid "" -"Think elasticity: Increasing resources should result in a proportional " -"increase in performance and scalability. Decreasing resources should have " -"the opposite effect." -msgstr "" -"保持弹性:随着增加的资源,确保结果是增加的性能和扩展。减少资源要没有负面影" -"响。" - -msgid "" -"This application prioritizes the north-south traffic over east-west traffic: " -"the north-south traffic involves customer-facing data." -msgstr "" -"此应用将南北向的流量的优先级设置得比东西向的流量的优先级更高:南北向的流量与" -"面向客户的数据相关。" - -msgid "" -"This chapter discusses the legal and security requirements you need to " -"consider for the different OpenStack scenarios." -msgstr "这个章节讨论你需要为不同的 OpenStack 场景下的安全和法律要求。" - -msgid "" -"This decreases density by 50% (only 8 servers in 10 U) if a full width or " -"full height option is used." -msgstr "" -"它相比于全高的刀片有效的减低了50%的密度,因为全高的刀片在(每10个机柜单元仅可" -"以放置8台服务器)。" - -msgid "This example uses the following components:" -msgstr "此例使用了如下组件:" - -msgid "" -"This list expands upon the potential impacts for including a particular " -"storage architecture (and corresponding storage hardware) into the design " -"for a general purpose OpenStack cloud:" -msgstr "" -"这个列表扩展了在设计通用型OpenStack云可能产生的影响,包括对特定存储架构(以及" -"相应的存储硬件)" - -msgid "" -"This may be an issue for spine switches in a leaf and spine fabric, or end " -"of row (EoR) switches." -msgstr "这可能会给脊柱交换机的叶和面带来问题,同样排尾(EoR)交换机也会有问题。" - -msgid "" -"This may cause issues for organizations that have preferred vendor policies " -"or concerns with support and hardware warranties of non-tier 1 vendors." 
-msgstr "" -"这会给企业带来额外的问题:重新评估供应商的政策,支持的力度是否够,非1线供应商" -"的硬件质量保证等。" - -msgid "" -"This multi-site scenario likely includes one or more of the other scenarios " -"in this book with the additional requirement of having the workloads in two " -"or more locations. The following are some possible scenarios:" -msgstr "" -"此多区域场景在本书中有一个或多个其他类似的场景,另外在两个或多个地点的额外负" -"载需求。下面是可能一样的场景:" - -msgid "" -"This system can provide additional performance. For example, in the database " -"example below, a portion of the SSD pool can act as a block device to the " -"Database server. In the high performance analytics example, the inline SSD " -"cache layer accelerates the REST interface." -msgstr "" -"这种类型的系统也能够在其他场景下提供额外的性能。比如说,在下面的数据库例子" -"中,SSD 池中的一部分可以作为数据库服务器的块设备。在高性能分析的例子中,REST " -"接口将会被内联的 SSD 缓存层所加速。" - -msgid "Throughput" -msgstr "吞吐量" - -msgid "" -"Ticketing system (or integration with a ticketing system) to track issues." -msgstr "票务系统(或者与其它票务系统的集成)以跟踪问题。" - -msgid "Time to market" -msgstr "上线时间" - -msgid "Time-to-market" -msgstr "上线时间" - -msgid "" -"To effectively run cloud installations, initial downtime planning includes " -"creating processes and architectures that support the following:" -msgstr "为有效的运行云,开始计划宕机包括建立流程和支持的架构有下列内容:" - -msgid "" -"To ensure that access to nodes within the cloud is not interrupted, we " -"recommend that the network architecture identify any single points of " -"failure and provide some level of redundancy or fault tolerance. With regard " -"to the network infrastructure itself, this often involves use of networking " -"protocols such as LACP, VRRP or others to achieve a highly available network " -"connection. In addition, it is important to consider the networking " -"implications on API availability. In order to ensure that the APIs, and " -"potentially other services in the cloud are highly available, we recommend " -"you design a load balancing solution within the network architecture to " -"accommodate for these requirements." -msgstr "" -"为确保云内部访问节点不会被中断,我们建议网络架构不要存在单点故障,应该提供一" -"定级别的冗余和容错。网络基础设施本身就能提供一部分,比如使用网络协议诸如LACP," -"VRRP或其他的保证网络连接的高可用。另外,考虑网络实现的API可用亦非常重要。为了" -"确保云中的API以及其它服务高可用,我们建议用户设计网络架构的负载均衡解决方案," -"以满足这些需求。" - -msgid "" -"To obtain greater than dual-socket support in a 1U rack-mount form factor, " -"customers need to buy their systems from Original Design Manufacturers " -"(ODMs) or second-tier manufacturers." -msgstr "" -"要想使1U的服务器支持超过2个插槽,用户需要通过原始设计制造商(ODM)或二线制造商" -"来购买。" - -msgid "" -"To reap the benefits of OpenStack, you should plan, design, and architect " -"your cloud properly, taking user's needs into account and understanding the " -"use cases." -msgstr "" -"欲充分利用OpenStack的优点,用户需要精心准备规划、设计及架构。认真纳入用户的需" -"求、揣摩透一些用例。" - -msgid "Tools considerations" -msgstr "工具考量" - -msgid "Topics to consider include:" -msgstr "考虑的主题有:" - -msgid "Tunable networking components" -msgstr "可调联网组件" - -msgid "Ubuntu" -msgstr "Ubuntu" - -msgid "" -"Ubuntu and Kinstance use different management tools than VMware vSphere. " -"Although both OS and hypervisor combinations are supported by OpenStack, " -"there are varying impacts to the rest of the design as a result of the " -"selection of one combination versus the other." -msgstr "" -"Ubuntu和Kinstance的管理工具和VMware vSphere的管理工具是不一样的。尽管" -"OpenStack对它们都支持。这也会对其他的设计有着非常不同的影响,结果就是选择了一" -"种组合,然后再据此做出后面的选择。" - -msgid "" -"Unfortunately, Compute is the only OpenStack service that provides good " -"support for cells. 
In addition, cells do not adequately support some " -"standard OpenStack functionality such as security groups and host " -"aggregates. Due to their relative newness and specialized use, cells receive " -"relatively little testing in the OpenStack gate. Despite these issues, cells " -"play an important role in well known OpenStack installations operating at " -"massive scale, such as those at CERN and Rackspace." -msgstr "" -"使用单元的缺点是这种解决方案在 OpenStack 的服务中,只有计算服务支持得比较好。" -"并且,这种方案也不能够支持一些相对基础的 OpenStack 功能,例如安全组和主机聚" -"合。由于这种方案比较新,并且用途相对特别,在 OpenStack 之中对这种方案的测试也" -"相对比较有限。即使存在种种的这些问题,单元还是在一些著名的大规模 OpenStack 环" -"境中被使用了,包括 CERN 和 Rackspace 中的那些。" - -msgid "Unplanned (system faults)" -msgstr "计划外(系统出错)" - -msgid "" -"Unstructured data store for services. For example, social media back-end " -"storage." -msgstr "某些服务的非结构化数据存储。比如社交媒体的后端存储。" - -msgid "Upgrade OpenStack Block Storage (cinder) at each site." -msgstr "在每个区域升级OpenStack块存储 (cinder)。" - -msgid "Upgrade OpenStack Block Storage (cinder)." -msgstr "升级 OpenStack 块存储 (cinder)。" - -msgid "" -"Upgrade OpenStack Compute (nova), including networking components, at each " -"site." -msgstr "在每个区域升级OpenStack计算(nova),包括网络组件。" - -msgid "Upgrade OpenStack Compute (nova), including networking components." -msgstr "升级 OpenStack 计算 (nova), 包括网络组件。" - -msgid "Upgrade the OpenStack Identity service (keystone)." -msgstr "升级OpenStack认证服务 (keystone)." - -msgid "" -"Upgrade the OpenStack dashboard (horizon), at each site or in the single " -"central location if it is shared." -msgstr "" -"在每个区域升级OpenStack GUI程序(horizon),或者假如它是共享的话,仅升级单个的数" -"据中心即可。" - -msgid "Upgrade the OpenStack dashboard (horizon)." -msgstr "升级 OpenStack GUI程序 (horizon)。" - -msgid "Upgrade the shared OpenStack Identity service (keystone) deployment." -msgstr "升级 OpenStack 认证共享服务 (keystone) 。" - -msgid "Upgrades" -msgstr "升级" - -msgid "Upper-layer services" -msgstr "上层服务" - -msgid "Usage" -msgstr "用量" - -msgid "" -"Use cases that benefit from scale-out rather than scale-up approaches are " -"good candidates for general purpose cloud architecture." -msgstr "在通用型云架构中,用例能够明确感受到横向扩展带来好处远远大于纵向扩展。" - -msgid "Use eBGP to connect to the Internet up-link." -msgstr "使用 eBGP 连接至因特网上行链路。" - -msgid "" -"Use hierarchical addressing because it is the only viable option to scale " -"network ecosystem." -msgstr "" -"使用具有层次结构的地址分配机制,因为这是扩展网络生态环境的唯一可行选项。" - -msgid "Use iBGP to flatten the internal traffic on the layer-3 mesh." -msgstr "在三层网络上使用 iBGP 将内部网络流量扁平化。" - -msgid "" -"Use of DAS impacts the server hardware choice and affects host density, " -"instance density, power density, OS-hypervisor, and management tools." -msgstr "" -"如果解决方案中使用了DAS,这会影响到但不限于,服务器硬件的选择会波及到主机密" -"度、实例密度、电力密度、操作系统-hypervisor、以及管理工具。" - -msgid "Use traffic shaping for performance tuning." -msgstr "使用流量整形工具调整网络性能。" - -msgid "" -"Use virtual networking to isolate instance service network traffic from the " -"management and internal network traffic." -msgstr "使用虚拟联网将实例服务网络流量从管理及内部网络流量中隔离出来。" - -msgid "User requirements" -msgstr "用户需求" - -msgid "" -"User requirements for high availability and cost considerations influence " -"the required level of network hardware redundancy. Achieve network " -"redundancy by adding redundant power supplies or paired switches." -msgstr "" -"网络硬件冗余级别需求会被用户对高可用和开销的考虑所影响。网络冗余可以是增加双" -"电力供应也可以是结对的交换机。" - -msgid "" -"Users will want to combine using the internal cloud with access to an " -"external cloud. 
If that case is likely, it might be worth exploring the " -"possibility of taking a multi-cloud approach with regard to at least some of " -"the architectural elements." -msgstr "" -"用户打算既使用内部云又可访问外部云的组合。假如遇到类似的用例,也许值得一试的" -"就是基于多个云的途径,考虑至少多个架构要点。" - -msgid "" -"Using Ceph as an applicable example, a potential architecture would have the " -"following requirements:" -msgstr "跟上面的例子相关的 Ceph 的一个可能架构需要如下组件:" - -msgid "Utilization" -msgstr "量力而行" - -msgid "VLANs are an easy mechanism for isolating networks." -msgstr "VLAN 是一个简单的用于网络隔离的机制。" - -msgid "Video Conference or web conference" -msgstr "视频会议和 web 会议" - -msgid "" -"Vinny Valdez (Red Hat) @VinnyValdez" -msgstr "" -"Vinny Valdez (Red Hat) @VinnyValdez" - -msgid "" -"Virtual Desktop Infrastructure (VDI) is a service that hosts user desktop " -"environments on remote servers. This application is very sensitive to " -"network latency and requires a high performance compute environment. " -"Traditionally these types of services do not use cloud environments because " -"few clouds support such a demanding workload for user-facing applications. " -"As cloud environments become more robust, vendors are starting to provide " -"services that provide virtual desktops in the cloud. OpenStack may soon " -"provide the infrastructure for these types of deployments." -msgstr "" -"虚拟桌面基础设施(VDI)是在远程服务器上提供用户桌面环境的服务。此类应用对于网络" -"延迟非常敏感并且需要一个高性能的计算环境。传统上这类环境并未成放到云环境之" -"中,因为极少的云环境会支持如此程度暴露给终端用户的高要求负载。近来,随着云环" -"境的稳定性越来越高,云厂商们开始提供能够在云中运行虚拟桌面的服务。在不远的将" -"来,OpenStack 便能够作为运行虚拟桌面基础设施的底层设施,不管是内部的,还是在" -"云端。" - -msgid "Virtual compute resources" -msgstr "虚拟计算资源" - -msgid "Virtual desktop infrastructure (VDI)" -msgstr "虚拟桌面基础设施(VDI)" - -msgid "Virtual-machine disk image library" -msgstr "虚拟机磁盘镜像库" - -msgid "Virtualized network topologies" -msgstr "虚拟化网络拓扑" - -msgid "Voice over IP (VoIP)" -msgstr "IP 语音(VoIP)" - -msgid "" -"We recommend building a development and operations organization that is " -"responsible for creating desired features, diagnosing and resolving issues, " -"and building the infrastructure for large scale continuous integration tests " -"and continuous deployment. This helps catch bugs early and makes deployments " -"faster and easier. In addition to development resources, we also recommend " -"the recruitment of experts in the fields of message queues, databases, " -"distributed systems, networking, cloud, and storage." -msgstr "" -"我们的建议是组织一个开发和运营的团队,由他们来负责开发所需要的特性,调试以及" -"解决问题,并建造用以进行大规模持续集成测试以及持续部署的基础设施。这能够及早" -"地发现缺陷以及使得部署更快和更加简单。除了开发的资源之外,我们也建议招聘消息" -"队列、数据库、分布式系统、网络、云以及存储方面的专家人员。" - -msgid "" -"We recommend general purpose clouds use hypervisors that support the most " -"general purpose use cases, such as KVM and Xen. More specific hypervisors " -"should be chosen to account for specific functionality or a supported " -"feature requirement. In some cases, there may also be a mandated requirement " -"to run software on a certified hypervisor including solutions from VMware, " -"Microsoft, and Citrix." -msgstr "" -"通用型云须确保使用的hypervisor可以支持多数通用目的的用例,例如KVM或Xen。更多" -"特定的hypervisor需要根据特定的功能和支持特性需求来做出选择。在一些情况下,也" -"许是授权所需,需要运行的软件必须是在认证的hpervisor中,比如来自VMware,微软和" -"思杰的产品。" - -msgid "" -"We would like to thank VMware for their generous hospitality, as well as our " -"employers, Cisco, Cloudscaling, Comcast, EMC, Mirantis, Rackspace, Red Hat, " -"Verizon, and VMware, for enabling us to contribute our time. We would " -"especially like to thank Anne Gentle and Kenneth Hui for all of their " -"shepherding and organization in making this happen." 
-msgstr "" -"我们非常感谢VMware的盛情款待,以及我们的雇主,Cisco, Cloudscaling,Comcast," -"EMC,Mirantis,Rackspace,Red Hat, Verizon和VMware,能够让我们花时间做点有意义的" -"事情。尤其感谢Anne Gentle 和 Kenneth Hui,由于二位的领导和组织,才有此书诞生" -"的机会。" - -msgid "Web Servers, running Apache." -msgstr "Web服务,运行 Apache。" - -msgid "Web portals or web services" -msgstr "web 门户或 web 服务" - -msgid "" -"When building a general purpose cloud, you should follow the Infrastructure-as-a-Service (IaaS) model; a " -"platform best suited for use cases with simple requirements. General purpose " -"cloud user requirements are not complex. However, it is important to capture " -"them even if the project has minimum business and technical requirements, " -"such as a proof of concept (PoC), or a small lab platform." -msgstr "" -"当构建通用型云时,用户需要遵循 Infrastructure-" -"as-a-Service (IaaS)模式,基于简单的需求为用户寻求最合适的平台。通" -"用型云的用户需求并不复杂。尽管如此,也要谨慎对待,即使项目是较小的业务,或者" -"诸如概念验证、小型实验平台等技术需求。" - -msgid "" -"When building a storage-focused OpenStack architecture, strive to build a " -"flexible design based on an industry standard core. One way of accomplishing " -"this might be through the use of different back ends serving different use " -"cases." -msgstr "" -"当尝试构建一个基于行业标准核心的灵活设计时,实现这个的一个办法,可能是通过使" -"用不同的后端服务于不同的使用场景。" - -msgid "" -"When considering performance of OpenStack Block Storage, hardware and " -"architecture choice is important. Block Storage can use enterprise back-end " -"systems such as NetApp or EMC, scale out storage such as GlusterFS and Ceph, " -"or simply use the capabilities of directly attached storage in the nodes " -"themselves. Block Storage may be deployed so that traffic traverses the host " -"network, which could affect, and be adversely affected by, the front-side " -"API traffic performance. As such, consider using a dedicated data storage " -"network with dedicated interfaces on the Controller and Compute hosts." -msgstr "" -"当考虑OpenStack块设备的性能时,硬件和架构的选择就显得非常重要。块存储可以使用" -"企业级后端系统如NetApp或EMC的产品,也可以使用横向扩展存储如GlusterFS和Ceph," -"更可以是简单的在节点上直接附加的存储。块存储或许是云所关注的贯穿主机网络的部" -"署,又或许是前端应用程序接口所关注的流量性能。无论怎么,都得在控制器和计算主" -"机上考虑使用专用的数据存储网络和专用的接口。" - -msgid "" -"When considering performance of OpenStack Object Storage, a number of design " -"choices will affect performance. A user’s access to the Object Storage is " -"through the proxy services, which sit behind hardware load balancers. By the " -"very nature of a highly resilient storage system, replication of the data " -"would affect performance of the overall system. In this case, 10 GbE (or " -"better) networking is recommended throughout the storage network " -"architecture." -msgstr "" -"当考虑OpenStack对象存储的性能时,有几样设计选择会影响到性能。用户访问对象存储" -"是通过代理服务,它通常是在硬件负载均衡之后。由于高弹性是此存储系统的天生特" -"性,所以复制数据将会影响到整个系统的性能。在此例中,存储网络使用10 GbE(或更" -"高)网络是我们所建议的。" - -msgid "" -"When designing OpenStack Block Storage resource nodes, it is helpful to " -"understand the workloads and requirements that will drive the use of block " -"storage in the cloud. We recommend designing block storage pools so that " -"tenants can choose appropriate storage solutions for their applications. By " -"creating multiple storage pools of different types, in conjunction with " -"configuring an advanced storage scheduler for the block storage service, it " -"is possible to provide tenants with a large catalog of storage services with " -"a variety of performance levels and redundancy options." 
-msgstr "" -"当设计OpenStack块存储资源节点时,有助于理解负载和需求,即在云中使用块存储。由" -"于通用型云的使用模式经常是未知的。在>此建议设计块存储池做到租户可以根据他们的" -"应用来选择不同的存储。创建多个不同类型的存储池,得与块存储服务配置高级的存储" -"调度相结合,才可能为租户提供基于多种不同性能级别和冗余属性的大型目录存储服" -"务。" - -msgid "" -"When designing hardware resources for OpenStack Object Storage, the primary " -"goal is to maximize the amount of storage in each resource node while also " -"ensuring that the cost per terabyte is kept to a minimum. This often " -"involves utilizing servers which can hold a large number of spinning disks. " -"Whether choosing to use 2U server form factors with directly attached " -"storage or an external chassis that holds a larger number of drives, the " -"main goal is to maximize the storage available in each node." -msgstr "" -"当为OpenStack对象存储设计硬件资源时,首要的目标就尽可能的为每个资源节点加上最" -"多的存储,当然也要在每TB的花费上保持最低。这往往涉及到了利用服务器的容纳大量" -"的磁盘。无论是选择使用2U服务器直接挂载磁盘,还是选择外挂大量的磁盘驱动,主要" -"的目标还是为每个节点得到最多的存储。" - -msgid "" -"When determining capacity options be sure to take into account not just the " -"technical issues, but also the economic or operational issues that might " -"arise from specific decisions." -msgstr "" -"当决定容量要算计的不仅仅是技术问题,还要考虑经济和运营问题,这些都可能会带来" -"更多麻烦。" - -msgid "" -"When selecting network devices, be aware that making this decision based on " -"the greatest port density often comes with a drawback. Aggregation switches " -"and routers have not all kept pace with Top of Rack switches and may induce " -"bottlenecks on north-south traffic. As a result, it may be possible for " -"massive amounts of downstream network utilization to impact upstream network " -"devices, impacting service to the cloud. Since OpenStack does not currently " -"provide a mechanism for traffic shaping or rate limiting, it is necessary to " -"implement these features at the network hardware level." -msgstr "" -"选择网络设备时,必须意识到基于最大的端口密度所做出的决定通常也有一个弊端。聚" -"合交换以及路由并不能完全满足柜顶交换机的需要,这可能引起南北向流量上的瓶颈。" -"其结果是,大量的下行网络使用可能影响上行网络设备,从而影响云中的服务。由于 " -"OpenStack 目前并未提供流量整形或者速度限制的机制,有必要在网络硬件的级别上实" -"现这些特性。" - -msgid "" -"When using OpenStack Networking, the OpenStack controller servers or " -"separate Networking hosts handle routing. For a deployment that requires " -"features available in only Networking, it is possible to remove this " -"restriction by using third party software that helps maintain highly " -"available L3 routes. Doing so allows for common APIs to control network " -"hardware, or to provide complex multi-tier web applications in a secure " -"manner. It is also possible to completely remove routing from Networking, " -"and instead rely on hardware routing capabilities. In this case, the " -"switching infrastructure must support L3 routing." -msgstr "" -"另一方面,当使用OpenStack网络时,OpenStack控制器服务器或者是分离的网络主机掌" -"控路由。对于部署来说,需要在网络中满足此特性,有可能使用第三方软件来协助维护" -"高可用的3层路由。这么做允许通常的应用程序接口来控制网络硬件,或者基于安全的行" -"为提供复杂多层的web应用。从OpenStack网络中完全移除路由是可以的,取而代之的是" -"硬件的路由能力。在此情况下,交换基础设施必须支持3层路由。" - -msgid "" -"Where instances and images will be stored will influence the architecture." -msgstr "实例和镜像存放在哪里会影响到架构。" - -msgid "" -"Where many general purpose deployments use hardware load balancers to " -"provide highly available API access and SSL termination, software solutions, " -"for example HAProxy, can also be considered. It is vital to ensure that such " -"software implementations are also made highly available. High availability " -"can be achieved by using software such as Keepalived or Pacemaker with " -"Corosync. 
Pacemaker and Corosync can provide active-active or active-passive " -"highly available configuration depending on the specific service in the " -"OpenStack environment. Using this software can affect the design as it " -"assumes at least a 2-node controller infrastructure where one of those nodes " -"may be running certain services in standby mode." -msgstr "" -"多数的通用型部署使用硬件的负载均衡来提供API访问高可用和SSL终端,但是软件的解" -"决方案也要考虑到,比如HAProxy。至关重要的是软件实现的高可用也很靠谱。这些高可" -"用的软件Keepalived或基于Corosync的Pacemaker。Pacemaker和Corosync配合起来可以" -"提供双活或者单活的高可用配置,至于是否双活取决于OpenStack环境中特别的服务。使" -"用Pacemaker会影响到设计,假定有至少2台控制器基础设施,其中一个节点可在待机模" -"式下运行的某些服务。" - -msgid "" -"While consistency and partition tolerance are both inherent features of the " -"Object Storage service, it is important to design the overall storage " -"architecture to ensure that the implemented system meets those goals. The " -"OpenStack Object Storage service places a specific number of data replicas " -"as objects on resource nodes. These replicas are distributed throughout the " -"cluster based on a consistent hash ring which exists on all nodes in the " -"cluster." -msgstr "" -"虽然一致性和区块容错性都是对象存储服务的内生特性,对整个存储架构进行设计,以" -"确保要实施的系统能够满足这些目标依然还是很重要的。OpenStack 对象存储服务将特" -"定数量的数据副本作为对象存放与资源节点上。这些副本分布在整个集群之中,基于存" -"在于集群\n" -"中所有节点上的一致性哈希环。" - -msgid "" -"While constructing a multi-site OpenStack environment is the goal of this " -"guide, the real test is whether an application can utilize it." -msgstr "" -"构建一个多区域OpenStack环境是本书的目的,但真正的考验来自于能否有一个应用能够" -"利用好。" - -msgid "" -"While the cloud user can be completely unaware of the underlying " -"infrastructure of the cloud and its attributes, the operator must build and " -"support the infrastructure for operating at scale. This presents a very " -"demanding set of requirements for building such a cloud from the operator's " -"perspective:" -msgstr "" -"用户对于云的底层基础设施以及属性应该是完全不清楚的,然而,运营者却必须能够构" -"建并且支持该基础设施,也应该了解在大规模的情况下如何操作它。这从运营者的角度" -"提出了关于构建这样一个云的一系列相当强烈的需求:" - -msgid "Why and how we wrote this book" -msgstr "我们为什么及如何写作此书" - -msgid "Workload characteristics" -msgstr "负载特性" - -msgid "" -"Workload characteristics may also influence hardware choices and flavor " -"configuration, particularly where they present different ratios of CPU " -"versus RAM versus HDD requirements." -msgstr "" -"负载的特点常会影响到硬件的选择和实例类型的配置,尤其是他们有不同的CPU、内存、" -"硬盘的比例需求。" - -msgid "Workload considerations" -msgstr "负载考虑" - -msgid "XCP/XenServer" -msgstr "XCP/XenServer" - -msgid "" -"You must weigh the dimensions against each other to determine the best " -"design for the desired purpose. For example, increasing server density can " -"mean sacrificing resource capacity or expandability. Increasing resource " -"capacity and expandability can increase cost but decrease server density. " -"Decreasing cost often means decreasing supportability, server density, " -"resource capacity, and expandability." -msgstr "" -"为达到期望的目的而决定最佳设计需要对一些因素作出取舍和平衡。举例来说,增加服" -"务器密度意味着牺牲资源的容量或扩展性。增加资源容量或扩展性又增加了开销但是降" -"低了服务器密度。减少开销又意味着减低支持力度,服务器密度,资源容量和扩展性。" - -msgid "" -"You will need to consider how the OS and hypervisor combination interactions " -"with other operating systems and hypervisors, including other software " -"solutions. Operational troubleshooting tools for one OS-hypervisor " -"combination may differ from the tools used for another OS-hypervisor " -"combination and, as a result, the design will need to address if the two " -"sets of tools need to interoperate." 
-msgstr "" -"用户需要考虑此操作系统和Hypervisor组合和另外的操作系统和hypervisor怎么互动," -"甚至包括和其它的软件。操作某一操作系统-hypervisor组合的故障排除工具,和操作其" -"他的操作系统-hypervisor组合也许根本就不一样,那结果就是,设计时就需要交付能够" -"使这两者工具集都能工作的工具。" - -msgid "Zone per collection of nodes" -msgstr "Zone是多个节点的集合" - -msgid "current" -msgstr "当前最新" - -msgid "east-west traffic" -msgstr "东西向流量" - -msgid "north-south traffic" -msgstr "南北向流量" - -#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 -msgid "translator-credits" -msgstr "" -"apporc watson , 2015\n" -"Hunt Xu , 2015\n" -"johnwoo_lee , 2015\n" -"颜海峰 , 2015" - -msgid "vSphere (vCenter and ESXi)" -msgstr "vSphere (vCenter and ESXi)" diff --git a/doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml b/doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml deleted file mode 100644 index c979975a13..0000000000 --- a/doc/arch-design/massively_scalable/section_operational_considerations_massively_scalable.xml +++ /dev/null @@ -1,102 +0,0 @@ - -
- - Operational considerations - In order to run efficiently at massive scale, automate - as many of the operational processes as - possible. Automation includes the configuration of - provisioning, monitoring and alerting systems. Part of the - automation process includes the capability to determine when - human intervention is required and who should act. The - objective is to increase the ratio of operational staff to - running systems as much as possible in order to reduce maintenance - costs. In a massively scaled environment, it is very difficult - for staff to give each system individual care. - Configuration management tools such as Puppet and Chef enable - operations staff to categorize systems into groups based on - their roles and thus create configurations and system states - that the provisioning system enforces. Systems - that fall out of the defined state due to errors or failures - are quickly removed from the pool of active nodes and - replaced. - At large scale the resource cost of diagnosing failed individual - systems is far greater than the cost of - replacement. It is more economical to replace the failed - system with a new system, provisioning and configuring it - automatically and adding it to the pool of active nodes. - By automating tasks that are labor-intensive, - repetitive, and critical to operations, cloud operations - teams can work more - efficiently because fewer resources are required for these - common tasks. Administrators are then free to tackle - tasks that are not easy to automate and that have longer-term - impacts on the business, for example, capacity planning. -
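The removed section above describes a replace-rather-than-repair operational model: configuration management defines a desired state per role, and nodes that drift out of that state are dropped from the active pool and reprovisioned automatically. As a rough illustration of that reconciliation idea only, here is a minimal Python sketch; the Node class, the state check, and the provisioning callback are invented for the example and do not correspond to any real OpenStack, Puppet, or Chef API.

# Illustrative only: a bare-bones reconciliation pass for the
# "replace, don't diagnose" model described in the removed section.
# None of these names map to a real OpenStack or configuration-management API.

class Node:
    def __init__(self, name, role):
        self.name = name
        self.role = role

def reconcile(active_nodes, is_in_defined_state, provision_replacement):
    """Drop nodes that have drifted from their defined state and
    provision like-for-like replacements instead of repairing them."""
    healthy = []
    for node in active_nodes:
        if is_in_defined_state(node):
            healthy.append(node)
        else:
            # Removal and replacement are automated; no manual diagnosis.
            healthy.append(provision_replacement(node.role))
    return healthy

# Example run with stub checks and stub provisioning.
pool = [Node("compute-01", "compute"), Node("compute-02", "compute")]
pool = reconcile(
    pool,
    is_in_defined_state=lambda n: n.name != "compute-02",  # pretend one node drifted
    provision_replacement=lambda role: Node("compute-02-replacement", role),
)
print([n.name for n in pool])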
- The bleeding edge - Running OpenStack at massive scale requires striking a - balance between stability and features. For example, it might - be tempting to run an older stable release branch of OpenStack - to make deployments easier. However, when running at massive - scale, known issues that may be of some concern or only have - minimal impact in smaller deployments could become pain points. - Recent releases may address well known issues. The OpenStack - community can help resolve reported issues by applying - the collective expertise of the OpenStack developers. - The number of organizations running at - massive scales is a small proportion of the - OpenStack community, therefore it is important to share - related issues with the community and be a vocal advocate for - resolving them. Some issues only manifest when operating at - large scale, and the number of organizations able to duplicate - and validate an issue is small, so it is important to - document and dedicate resources to their resolution. - In some cases, the resolution to the problem is ultimately - to deploy a more recent version of OpenStack. Alternatively, - when you must resolve an issue in a production - environment where rebuilding the entire environment is not an - option, it is sometimes possible to deploy updates to specific - underlying components in order to resolve issues or gain - significant performance improvements. Although this may appear - to expose the deployment to - increased risk and instability, in many cases it - could be an undiscovered issue. - We recommend building a development and operations - organization that is responsible for creating desired - features, diagnosing and resolving issues, and building the - infrastructure for large scale continuous integration tests - and continuous deployment. This helps catch bugs early and - makes deployments faster and easier. In addition to - development resources, we also recommend the recruitment - of experts in the fields of message queues, databases, distributed - systems, networking, cloud, and storage.
-
- Growth and capacity planning - An important consideration in running at massive scale is - projecting growth and utilization trends in order to plan capital - expenditures for the short and long term. Gather utilization - meters for compute, network, and storage, along with historical - records of these meters. While securing major - anchor tenants can lead to rapid jumps in the utilization - rates of all resources, the steady adoption of the cloud - inside an organization or by consumers in a public - offering also creates a steady trend of increased - utilization.
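A minimal sketch of the trend projection this implies, assuming nothing more than a list of historical monthly utilization samples and a simple least-squares fit (real capacity planning should also model step changes such as the anchor tenants mentioned above):

    # Sketch: project future utilization from historical monthly samples
    # using a simple least-squares linear trend (illustrative only).
    def linear_trend(samples):
        n = len(samples)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
                sum((x - mean_x) ** 2 for x in xs)
        intercept = mean_y - slope * mean_x
        return slope, intercept

    def project(samples, months_ahead):
        slope, intercept = linear_trend(samples)
        return intercept + slope * (len(samples) - 1 + months_ahead)

    # Example: monthly vCPU utilization (%) for the last six months.
    history = [42, 45, 49, 52, 57, 61]
    print(round(project(history, 6), 1))  # projected utilization in six months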
-
- Skills and training
Projecting growth for storage, networking, and compute is only one aspect of a growth plan for running OpenStack at massive scale. Growing and nurturing development and operational staff is an additional consideration. Sending team members to OpenStack conferences and meetup events, and encouraging active participation in the mailing lists and committees, are important ways to maintain skills and forge relationships in the community. For a list of OpenStack training providers in the marketplace, see: http://www.openstack.org/marketplace/training/.
-
diff --git a/doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml b/doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml deleted file mode 100644 index a71685517c..0000000000 --- a/doc/arch-design/massively_scalable/section_tech_considerations_massively_scalable.xml +++ /dev/null @@ -1,131 +0,0 @@ - - -%openstack; -]> -
- - Technical considerations - Repurposing an existing OpenStack environment to be - massively scalable is a formidable task. When building - a massively scalable environment from the ground up, ensure - you build the initial deployment with the same principles - and choices that apply as the environment grows. For example, - a good approach is to deploy the first site as a multi-site - environment. This enables you to use the same deployment - and segregation methods as the environment grows to separate - locations across dedicated links or wide area networks. In - a hyperscale cloud, scale trumps redundancy. Modify applications - with this in mind, relying on the scale and homogeneity of the - environment to provide reliability rather than redundant - infrastructure provided by non-commodity hardware - solutions. -
- Infrastructure segregation
OpenStack services support massive horizontal scale. Be aware that this is not the case for the entire supporting infrastructure. This is particularly a problem for the database management systems and message queues that OpenStack services use for data storage and remote procedure call communications.
Traditional clustering techniques typically provide high availability and some additional scale for these environments. In the quest for massive scale, however, you must take additional steps to relieve the performance pressure on these components in order to prevent them from negatively impacting the overall performance of the environment. Keep all the components in balance so that, if the massively scalable environment does eventually reach its limits, every component is near maximum capacity rather than a single overloaded component causing the failure.
Regions segregate the environment into completely independent installations linked only by a shared Identity and, optionally, Dashboard installation. Services have separate API endpoints for each region, and include separate database and queue installations. This exposes some awareness of the environment's fault domains to users and gives them the ability to ensure some degree of application resiliency, while also imposing the requirement to specify which region to apply their actions to.
Environments operating at massive scale typically need their regions or sites subdivided further without exposing the requirement to specify the failure domain to the user. This provides the ability to further divide the installation into failure domains while also providing a logical unit for maintenance and the addition of new hardware. At hyperscale, instead of adding single compute nodes, administrators can add entire racks or even groups of racks at a time with each new addition of nodes exposed via one of the segregation concepts mentioned herein.
Cells provide the ability to subdivide the compute portion of an OpenStack installation, including regions, while still exposing a single endpoint. Each region has an API cell along with a number of compute cells where the workloads actually run. Each cell has its own database and message queue setup (ideally clustered), providing the ability to subdivide the load on these subsystems, improving overall performance.
Each compute cell provides a complete compute installation, complete with full database and queue installations, scheduler, conductor, and multiple compute hosts. The cells scheduler handles placement of user requests from the single API endpoint to a specific cell from those available. The normal filter scheduler then handles placement within the cell.
Unfortunately, Compute is the only OpenStack service that provides good support for cells. In addition, cells do not adequately support some standard OpenStack functionality such as security groups and host aggregates. Due to their relative newness and specialized use, cells receive relatively little testing in the OpenStack gate. Despite these issues, cells play an important role in well-known OpenStack installations operating at massive scale, such as those at CERN and Rackspace.
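As a purely illustrative sketch of the region, API cell, and compute cell hierarchy described above (this is not the nova cells scheduler, only a toy placement step over assumed capacity figures):

    # Toy illustration of the region -> API cell -> compute cell hierarchy.
    # This is not the nova cells scheduler, only a sketch of how load can
    # be subdivided per cell.
    deployment = {
        "region-east": {
            "api_cell": "east-api",
            "compute_cells": {
                "east-cell-1": {"capacity": 2000, "running": 1800},
                "east-cell-2": {"capacity": 2000, "running": 900},
            },
        },
    }

    def pick_cell(region):
        """Choose the compute cell with the most free capacity."""
        cells = deployment[region]["compute_cells"]
        return max(cells, key=lambda c: cells[c]["capacity"] - cells[c]["running"])

    print(pick_cell("region-east"))  # east-cell-2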
-
- Host aggregates - Host aggregates enable partitioning of OpenStack Compute - deployments into logical groups for load balancing and - instance distribution. You can also use host aggregates to - further partition an availability zone. Consider a cloud which - might use host aggregates to partition an availability zone - into groups of hosts that either share common resources, such - as storage and network, or have a special property, such as - trusted computing hardware. You cannot target host aggregates - explicitly. Instead, select instance flavors that map to host - aggregate metadata. These flavors target host aggregates - implicitly.
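The implicit targeting works by matching flavor extra specs against aggregate metadata. A simplified sketch of that matching logic, with illustrative metadata keys and host names (the real filtering is performed by the Compute scheduler's aggregate filters):

    # Simplified sketch of how flavor extra specs are matched against host
    # aggregate metadata. Keys, values, and host names are illustrative.
    aggregates = {
        "ssd-hosts": {"metadata": {"fast_storage": "true"}, "hosts": ["c01", "c02"]},
        "general":   {"metadata": {}, "hosts": ["c03", "c04"]},
    }

    def hosts_for_flavor(extra_specs):
        """Return hosts in aggregates whose metadata satisfies the flavor."""
        matched = []
        for agg in aggregates.values():
            if all(agg["metadata"].get(k) == v for k, v in extra_specs.items()):
                matched.extend(agg["hosts"])
        return matched

    print(hosts_for_flavor({"fast_storage": "true"}))  # ['c01', 'c02']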
-
- Availability zones
Availability zones provide another mechanism for subdividing an installation or region. They are, in effect, host aggregates exposed for (optional) explicit targeting by users.
Unlike cells, availability zones do not have their own database server or queue broker but represent an arbitrary grouping of compute nodes. Typically, nodes are grouped into availability zones using a shared failure domain based on a physical characteristic such as a shared power source or physical network connections. Users can target exposed availability zones; however, this is not a requirement. An alternative approach is to configure a default availability zone so that, when users do not specify one, instances are scheduled to a zone other than the default nova availability zone.
-
- Segregation example
In this example the cloud is divided into two regions, one for each site, with two availability zones in each based on the power layout of the data centers. A number of host aggregates enable targeting of virtual machine instances using flavors that require special capabilities shared by the target hosts, such as SSDs, 10 GbE networks, or GPU cards.
- - - -
-
diff --git a/doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml b/doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml deleted file mode 100644 index 022bbc0a1f..0000000000 --- a/doc/arch-design/massively_scalable/section_user_requirements_massively_scalable.xml +++ /dev/null @@ -1,135 +0,0 @@ - -
- - User requirements - Defining user requirements for a massively scalable OpenStack - design architecture dictates approaching the design from two - different, yet sometimes opposing, perspectives: the cloud - user, and the cloud operator. The expectations and perceptions - of the consumption and management of resources of a massively - scalable OpenStack cloud from these two perspectives are - distinctly different. - Massively scalable OpenStack clouds have the following user - requirements: - - - The cloud user expects repeatable, dependable, and - deterministic processes for launching and deploying - cloud resources. You could deliver this through a - web-based interface or publicly available API - endpoints. All appropriate options for requesting - cloud resources must be available through some type - of user interface, a command-line interface (CLI), or - API endpoints. - - - Cloud users expect a fully self-service and - on-demand consumption model. When an OpenStack cloud - reaches the "massively scalable" size, expect - consumption "as a service" in each and - every way. - - - For a user of a massively scalable OpenStack public - cloud, there are no expectations for control over - security, performance, or availability. Users expect - only SLAs related to uptime of API services, and - very basic SLAs for services offered. It is the user's - responsibility to address these issues on their own. - The exception to this expectation is the rare case of - a massively scalable cloud infrastructure built for - a private or government organization that has - specific requirements. - - - The cloud user's requirements and expectations that determine - the cloud design focus on the consumption model. The user - expects to consume cloud resources in an automated and - deterministic way, without any need for knowledge of the - capacity, scalability, or other attributes of the cloud's - underlying infrastructure. -
- Operator requirements
While the cloud user can be completely unaware of the underlying infrastructure of the cloud and its attributes, the operator must build and support the infrastructure for operating at scale. This presents a very demanding set of requirements for building such a cloud from the operator's perspective:
- Everything must be capable of automation: everything from the compute hardware, storage hardware, and networking hardware to the installation and configuration of the supporting software. Manual processes are impractical in a massively scalable OpenStack design architecture.
- The cloud operator requires that capital expenditure (CapEx) is minimized at all layers of the stack. Operators of massively scalable OpenStack clouds require the use of dependable commodity hardware and freely available open source software components to reduce deployment costs and operational expenses. Initiatives like OpenCompute (http://www.opencompute.org) provide additional information and pointers. To cut costs, many operators sacrifice redundancy by, for example, omitting redundant power supplies, network connections, and rack switches.
- Companies operating a massively scalable OpenStack cloud also require that operational expenditures (OpEx) be minimized as much as possible. We recommend using cloud-optimized hardware when managing operational overhead. Some of the factors to consider include power, cooling, and the physical design of the chassis. Through customization, it is possible to optimize the hardware and systems for this type of workload because of the scale of these implementations.
- Massively scalable OpenStack clouds require extensive metering and monitoring functionality to maximize the operational efficiency by keeping the operator informed about the status and state of the infrastructure. This includes full-scale metering of the hardware and software status. A corresponding framework of logging and alerting is also required to store and enable operations to act on the meters provided by the metering and monitoring solutions. The cloud operator also needs a solution that uses the data provided by the metering and monitoring solution to provide capacity planning and capacity trending analysis.
- Invariably, massively scalable OpenStack clouds extend over several sites. Therefore, the user-operator requirements for a multi-site OpenStack architecture design are also applicable here. This includes various legal requirements; other jurisdictional legal or compliance requirements; image consistency and availability; storage replication and availability (both block and file/object storage); and authentication, authorization, and auditing (AAA). See for more details on requirements and considerations for multi-site OpenStack clouds.
- The design architecture of a massively scalable OpenStack cloud must address considerations around physical facilities such as space, floor weight, rack height and type, environmental considerations, power usage and power usage efficiency (PUE), and physical security.
-
diff --git a/doc/arch-design/multi_site/section_architecture_multi_site.xml b/doc/arch-design/multi_site/section_architecture_multi_site.xml deleted file mode 100644 index 1c5860c771..0000000000 --- a/doc/arch-design/multi_site/section_architecture_multi_site.xml +++ /dev/null @@ -1,123 +0,0 @@ - -
- Architecture
 illustrates a high-level multi-site OpenStack architecture. Each site is an OpenStack cloud, but it may be necessary to run the sites on different versions of OpenStack. For example, if the second site is intended to be a replacement for the first site, the two sites would run different versions. Another common design would be a private OpenStack cloud with a replicated site that would be used for high availability or disaster recovery. The most important design decision is whether to configure storage as a single shared pool or as separate pools, depending on user and technical requirements.
- Multi-site OpenStack architecture - - - - - -
-
- OpenStack services architecture - The Identity service, which is used by all other - OpenStack components for authorization and the catalog of - service endpoints, supports the concept of regions. A region - is a logical construct used to group OpenStack services in - close proximity to one another. The concept of - regions is flexible; it may contain OpenStack service - endpoints located within a distinct geographic region or regions. - It may be smaller in scope, where a region is a single rack - within a data center, with multiple regions existing in adjacent - racks in the same data center. - The majority of OpenStack components are designed to run - within the context of a single region. The Compute - service is designed to manage compute resources within a region, - with support for subdivisions of compute resources by using - availability zones and cells. The Networking service - can be used to manage network resources in the same broadcast - domain or collection of switches that are linked. The OpenStack - Block Storage service controls storage resources within a region - with all storage resources residing on the same storage network. - Like the OpenStack Compute service, the OpenStack Block Storage - service also supports the availability zone construct which can - be used to subdivide storage resources. - The OpenStack dashboard, OpenStack Identity, and OpenStack - Object Storage services are components that can each be deployed - centrally in order to serve multiple regions. -
-
- Storage - With multiple OpenStack regions, it is recommended to configure - a single OpenStack Object Storage service endpoint to deliver - shared file storage for all regions. The Object Storage service - internally replicates files to multiple nodes which can be used - by applications or workloads in multiple regions. This simplifies - high availability failover and disaster recovery rollback. - In order to scale the Object Storage service to meet the workload - of multiple regions, multiple proxy workers are run and - load-balanced, storage nodes are installed in each region, and the - entire Object Storage Service can be fronted by an HTTP caching - layer. This is done so client requests for objects can be served out - of caches rather than directly from the storage modules themselves, - reducing the actual load on the storage network. In addition to an - HTTP caching layer, use a caching layer like Memcache to cache - objects between the proxy and storage nodes. - If the cloud is designed with a separate Object Storage - service endpoint made available in each region, applications are - required to handle synchronization (if desired) and other management - operations to ensure consistency across the nodes. For some - applications, having multiple Object Storage Service endpoints - located in the same region as the application may be desirable due - to reduced latency, cross region bandwidth, and ease of - deployment. - - For the Block Storage service, the most important decisions - are the selection of the storage technology, and whether - a dedicated network is used to carry storage traffic - from the storage service to the compute nodes. - -
-
- Networking
When connecting multiple regions together, there are several design considerations. The overlay network technology choice determines how packets are transmitted between regions and how the logical network and addresses present to the application. If there are security or regulatory requirements, encryption should be implemented to secure the traffic between regions.
For networking inside a region, the overlay network technology for tenant networks is equally important. The overlay technology and the network traffic that an application generates or receives can either complement each other or work at cross purposes. For example, using an overlay technology for an application that transmits a large number of small packets could add excessive latency or overhead to each packet if not configured properly.
-
- Dependencies
The architecture for a multi-site OpenStack installation is dependent on a number of factors. One major dependency to consider is storage. When designing the storage system, the storage mechanism needs to be determined. Once the storage type is determined, how it is accessed is critical. For example, we recommend that storage should use a dedicated network. Another concern is how the storage is configured to protect the data, for example, the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO). How quickly recovery from a fault can be completed determines how often the replication of data is required. Ensure that enough storage is allocated to support the data protection strategy.
Networking decisions include the encapsulation mechanism that can be used for the tenant networks, how large the broadcast domains should be, and the contracted SLAs for the interconnects.
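A back-of-the-envelope sketch of how the RPO ties replication frequency to the sizing of the inter-site link; all of the numbers below are assumptions to be replaced with measured values:

    # Back-of-the-envelope sketch relating RPO to replication frequency and
    # to the bandwidth needed on the inter-site link (numbers are assumptions).
    rpo_minutes = 15                 # business tolerates losing at most 15 minutes of data
    change_rate_gb_per_hour = 120    # observed data change rate

    # Replicate at least as often as the RPO allows.
    replication_interval_minutes = rpo_minutes

    # Data produced between replication runs, and the bandwidth required to
    # ship it before the next run starts (with 30% headroom).
    delta_gb = change_rate_gb_per_hour * replication_interval_minutes / 60
    required_gbps = delta_gb * 8 / (replication_interval_minutes * 60) * 1.3

    print(f"replicate every {replication_interval_minutes} min, "
          f"~{delta_gb:.0f} GB per run, link >= {required_gbps:.2f} Gbit/s")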
-
diff --git a/doc/arch-design/multi_site/section_operational_considerations_multi_site.xml b/doc/arch-design/multi_site/section_operational_considerations_multi_site.xml deleted file mode 100644 index 97240be6cd..0000000000 --- a/doc/arch-design/multi_site/section_operational_considerations_multi_site.xml +++ /dev/null @@ -1,180 +0,0 @@ - -
- - Operational considerations - Multi-site OpenStack cloud deployment using regions - requires that the service catalog contains per-region entries - for each service deployed other than the Identity service. Most - off-the-shelf OpenStack deployment tools have limited support - for defining multiple regions in this fashion. - Deployers should be aware of this and provide the appropriate - customization of the service catalog for their site either - manually, or by customizing deployment tools in use. - As of the Kilo release, documentation for - implementing this feature is in progress. See this bug for - more information: - https://bugs.launchpad.net/openstack-manuals/+bug/1340509. - -
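A sketch of the per-region catalog customization described above: generating one endpoint entry per service per region for the deployment tooling to create. The region names, service ports, and URLs are placeholders, not a real deployment tool interface:

    # Sketch: generate the per-region endpoint entries the service catalog
    # needs (one set per region for every service except Identity).
    regions = ["region-one", "region-two"]
    services = {"nova": 8774, "glance": 9292, "neutron": 9696, "cinder": 8776}

    catalog_entries = [
        {"service": svc, "region": region,
         "url": f"https://{svc}.{region}.example.com:{port}"}
        for region in regions
        for svc, port in services.items()
    ]

    for entry in catalog_entries:
        print(entry["region"], entry["service"], entry["url"])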
- Licensing
Multi-site OpenStack deployments present additional licensing considerations over and above regular OpenStack clouds, particularly where site licenses are in use to provide cost-efficient access to software licenses. The licensing for host operating systems, guest operating systems, OpenStack distributions (if applicable), software-defined infrastructure including network controllers and storage systems, and even individual applications needs to be evaluated.
Topics to consider include:
- The definition of what constitutes a site in the relevant licenses, as the term does not necessarily denote a geographic or otherwise physically isolated location.
- Differentiations between "hot" (active) and "cold" (inactive) sites, where significant savings may be made in situations where one site is a cold standby for disaster recovery purposes only.
- Certain locations might require local vendors to provide support and services for each site, which may vary with the licensing agreement in place.
-
- Logging and monitoring - Logging and monitoring does not significantly differ for a - multi-site OpenStack cloud. The tools described in the Logging - and monitoring chapter of the Operations - Guide remain applicable. Logging and monitoring - can be provided on a per-site basis, and in a common - centralized location. - When attempting to deploy logging and monitoring facilities - to a centralized location, care must be taken with the load - placed on the inter-site networking links.
-
- Upgrades - In multi-site OpenStack clouds deployed using regions, sites - are independent OpenStack installations which are linked - together using shared centralized services such as OpenStack - Identity. At a high level the recommended order of operations - to upgrade an individual OpenStack environment is (see the Upgrades - chapter of the Operations Guide - for details): - - - Upgrade the OpenStack Identity service - (keystone). - - - Upgrade the OpenStack Image service (glance). - - - Upgrade OpenStack Compute (nova), including - networking components. - - - Upgrade OpenStack Block Storage (cinder). - - - Upgrade the OpenStack dashboard (horizon). - - - The process for upgrading a multi-site environment is not - significantly different: - - - Upgrade the shared OpenStack Identity service - (keystone) deployment. - - - Upgrade the OpenStack Image service (glance) at each - site. - - - Upgrade OpenStack Compute (nova), including - networking components, at each site. - - - Upgrade OpenStack Block Storage (cinder) at each - site. - - - Upgrade the OpenStack dashboard (horizon), at each - site or in the single central location if it is - shared. - - - Compute upgrades within each site can also be performed in a rolling - fashion. Compute controller services (API, Scheduler, and - Conductor) can be upgraded prior to upgrading of individual - compute nodes. This allows operations staff to keep a site - operational for users of Compute services while performing an - upgrade.
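The recommended sequence can be captured in a small driver loop. In this sketch, upgrade() is a placeholder for whatever tooling actually performs each upgrade; the loop simply encodes the order described above, with the shared Identity service first and each remaining service rolled out site by site:

    # Sketch of the multi-site upgrade order described above. upgrade() is a
    # placeholder for whatever tooling actually performs each upgrade.
    def upgrade(service, site=None):
        target = site or "shared"
        print(f"upgrading {service} at {target}")

    sites = ["site-a", "site-b", "site-c"]
    service_order = ["glance", "nova", "cinder", "horizon"]

    upgrade("keystone")                 # shared Identity service first
    for service in service_order:
        for site in sites:              # then each remaining service, site by site
            upgrade(service, site)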
-
- Quota management
Quotas are used to set operational limits to prevent system capacities from being exhausted without notification. They are currently enforced at the tenant (or project) level rather than at the user level.
Quotas are defined on a per-region basis. Operators can define identical quotas for tenants in each region of the cloud to provide a consistent experience, or even create a process for synchronizing allocated quotas across regions. It is important to note that only the operational limits imposed by the quotas will be aligned; consumption of quotas by users will not be reflected between regions.
For example, given a cloud with two regions, if the operator grants a user a quota of 25 instances in each region then that user may launch a total of 50 instances spread across both regions. They may not, however, launch more than 25 instances in any single region.
For more information on managing quotas, refer to the Managing projects and users chapter of the OpenStack Operators Guide.
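The arithmetic in the example above, as a tiny sketch using the same figures:

    # Sketch of the per-region quota arithmetic: quotas cap each region
    # independently, so the global ceiling is the sum across regions.
    per_region_quota = {"region-one": 25, "region-two": 25}
    usage = {"region-one": 25, "region-two": 10}

    global_ceiling = sum(per_region_quota.values())           # 50 instances total
    headroom = {r: per_region_quota[r] - usage[r] for r in per_region_quota}

    print(global_ceiling)   # 50
    print(headroom)         # {'region-one': 0, 'region-two': 15}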
-
- Policy management - OpenStack provides a default set of Role Based Access - Control (RBAC) policies, defined in a policy.json file, for - each service. Operators edit these files to customize the - policies for their OpenStack installation. If the application - of consistent RBAC policies across sites is a requirement, then - it is necessary to ensure proper synchronization of the - policy.json files to all installations. - This must be done using system administration tools - such as rsync as functionality for synchronizing policies - across regions is not currently provided within OpenStack.
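A minimal sketch of that synchronization, assuming SSH access from a management host; the host names and paths are placeholders:

    # Minimal sketch: push a service's policy.json to every site with rsync
    # over SSH. Host names and paths are placeholders.
    import subprocess

    sites = ["ctrl.site-a.example.com", "ctrl.site-b.example.com"]
    policy_file = "/etc/nova/policy.json"

    for host in sites:
        subprocess.run(
            ["rsync", "-az", "--checksum", policy_file, f"{host}:{policy_file}"],
            check=True,
        )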
-
- Documentation - Users must be able to leverage cloud infrastructure and - provision new resources in the environment. It is important - that user documentation is accessible by users to ensure they - are given sufficient information to help them leverage the cloud. - As an example, by default OpenStack schedules instances on a compute node - automatically. However, when multiple regions are available, - the end user needs to decide in which region to schedule the - new instance. The dashboard presents the user with - the first region in your configuration. The API and CLI tools - do not execute commands unless a valid region is specified. - It is therefore important to provide documentation to your - users describing the region layout as well as calling out that - quotas are region-specific. If a user reaches his or her quota - in one region, OpenStack does not automatically build new - instances in another. Documenting specific examples helps - users understand how to operate the cloud, thereby reducing - calls and tickets filed with the help desk.
-
diff --git a/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml b/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml deleted file mode 100644 index 702ce99f55..0000000000 --- a/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml +++ /dev/null @@ -1,236 +0,0 @@ - - -%openstack; -]> -
- Prescriptive examples
There are multiple ways to build a multi-site OpenStack installation, based on the needs of the intended workloads. Below are example architectures based on different requirements. These examples are meant as a reference, and not a hard and fast rule for deployments. Use the previous sections of this chapter to assist in selecting specific components and implementations based on specific needs.
A large content provider needs to deliver content to customers that are geographically dispersed. The workload is very sensitive to latency and needs a rapid response to end users. After reviewing the user, technical, and operational considerations, it is determined that it is beneficial to build a number of regions local to the customer's edge. Rather than build a few large, centralized data centers, the intent of the architecture is to provide a pair of small data centers in locations that are closer to the customer. In this use case, spreading applications out allows for a different kind of horizontal scaling than a traditional compute workload scale-out. The intent is to scale by creating more copies of the application in closer proximity to the users that need it most, in order to ensure faster response time to user requests. This provider deploys two data centers at each of the four chosen regions. The implications of this design are based around the method of placing copies of resources in each of the remote regions. Swift objects, Glance images, and block storage need to be manually replicated into each region. This may be beneficial for some systems, such as the case of a content service, where only some of the content needs to exist in some but not all regions. A centralized Keystone is recommended to ensure that authentication and access to the API endpoints are easily manageable.
It is recommended that you install an automated DNS system such as Designate. Application administrators need a way to manage the mapping of which application copy exists in each region and how to reach it, unless an external Dynamic DNS system is available. Designate assists by making the process automatic and by populating the records in each region's zone.
Telemetry for each region is also deployed, as each region may grow differently or be used at a different rate. Ceilometer collects each region's meters from each of the controllers and reports them back to a central location. This is useful both to the end user and the administrator of the OpenStack environment. The end user will find this method useful, as it makes it possible to determine whether certain locations are experiencing higher load than others, and to take appropriate action. Administrators also benefit by possibly being able to forecast growth per region, rather than expanding the capacity of all regions simultaneously, thereby maximizing the cost-effectiveness of the multi-site design.
One of the key decisions of running this infrastructure is whether or not to provide a redundancy model. Two types of redundancy and high availability model can be implemented in this configuration. The first type is the availability of the central OpenStack components. Keystone can be made highly available in three central data centers that host the centralized OpenStack components. This prevents the loss of any one region from causing an outage in service.
It also has the added benefit of being able to run a central storage repository as a primary cache for distributing content to each of the regions.
The second redundancy type is the edge data center itself. A second data center in each of the edge regional locations houses a second region near the first region. This ensures that the application does not suffer degraded performance in terms of latency and availability.
 depicts the solution designed to have both a centralized set of core data centers for OpenStack services and paired edge data centers:
- Multi-site architecture example - - - - - -
-
- Geo-redundant load balancing
A large-scale web application has been designed with cloud principles in mind. The application is designed to provide service to an application store on a 24/7 basis. The company has a typical two-tier architecture with a web front end servicing the customer requests and a NoSQL database back end storing the information.
Of late there have been several outages at a number of major public cloud providers due to applications running out of a single geographical location. The design therefore should mitigate the chance of a single site causing an outage for the business.
The solution would consist of the following OpenStack components:
- A firewall, switches, and load balancers on the public-facing network connections.
- OpenStack Controller services running Networking, dashboard, Block Storage, and Compute locally in each of the three regions. The Identity service, Orchestration service, Telemetry service, Image service, and Object Storage service can be installed centrally, with nodes in each of the regions providing a redundant OpenStack Controller plane throughout the globe.
- OpenStack Compute nodes running the KVM hypervisor.
- OpenStack Object Storage for serving static objects such as images can be used to ensure that all images are standardized across all the regions, and replicated on a regular basis.
- A distributed DNS service available to all regions that allows for dynamic update of DNS records of deployed instances.
- A geo-redundant load balancing service can be used to service the requests from the customers based on their origin.
An autoscaling heat template can be used to deploy the application in the three regions. This template includes:
- Web servers running Apache.
- Appropriate user_data to populate the central DNS servers upon instance launch.
- Appropriate Telemetry alarms that maintain the state of the application and allow for handling of region or instance failure.
Another autoscaling Heat template can be used to deploy a distributed MongoDB shard over the three locations, with the option of storing required data on a globally available swift container. According to the usage and load on the database server, additional shards can be provisioned according to the thresholds defined in Telemetry.
Two data centers would have been sufficient to meet the requirements, but three regions are selected here to avoid abnormal load on a single region in the event of a failure.
Orchestration is used because of the built-in functionality of autoscaling and auto healing in the event of increased load. Additional configuration management tools, such as Puppet or Chef, could also have been used in this scenario, but were not chosen since Orchestration had the appropriate built-in hooks into the OpenStack cloud, whereas the other tools were external and not native to OpenStack. In addition, external tools were not needed since this deployment scenario was straightforward.
OpenStack Object Storage is used here to serve as a back end for the Image service since it is the most suitable choice for a globally distributed storage solution with its own replication mechanism. Home-grown solutions, including the handling of replication, could also have been used, but were not chosen because Object Storage is already an integral part of the infrastructure and a proven solution.
- An external load balancing service was used and not the - LBaaS in OpenStack because the solution in OpenStack is not - redundant and does not have any awareness of geo location. -
- Multi-site geo-redundant architecture - - - - - -
-
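A toy sketch of the origin-based region selection such a geo-redundant load balancing or DNS service performs; the country-to-region mapping is an assumption for illustration only:

    # Toy sketch of origin-based region selection, similar in spirit to what
    # a geo-aware DNS or load-balancing service does.
    region_by_country = {
        "US": "region-us", "CA": "region-us",
        "DE": "region-eu", "FR": "region-eu",
        "JP": "region-apac", "SG": "region-apac",
    }

    def choose_region(client_country, default="region-us"):
        return region_by_country.get(client_country, default)

    print(choose_region("DE"))   # region-eu
    print(choose_region("BR"))   # falls back to region-us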
-
- Location-local service - A common use for multi-site OpenStack deployment is - creating a Content Delivery Network. An application that - uses a location-local architecture requires low network - latency and proximity to the user to provide an - optimal user experience and reduce the cost of bandwidth and - transit. The content resides on sites closer to the customer, - instead of a centralized content store that requires utilizing - higher cost cross-country links. - This architecture includes a geo-location component - that places user requests to the closest possible node. In - this scenario, 100% redundancy of content across every site is - a goal rather than a requirement, with the intent to - maximize the amount of content available within a - minimum number of network hops for end users. Despite - these differences, the storage replication configuration has - significant overlap with that of a geo-redundant load - balancing use case. - In , - the application utilizing this multi-site OpenStack install - that is location-aware would launch web server or content - serving instances on the compute cluster in each site. Requests - from clients are first sent to a global services load balancer - that determines the location of the client, then routes the - request to the closest OpenStack site where the application - completes the request. -
- Multi-site shared keystone architecture - - - - - -
-
-
diff --git a/doc/arch-design/multi_site/section_tech_considerations_multi_site.xml b/doc/arch-design/multi_site/section_tech_considerations_multi_site.xml deleted file mode 100644 index bc523f2db2..0000000000 --- a/doc/arch-design/multi_site/section_tech_considerations_multi_site.xml +++ /dev/null @@ -1,176 +0,0 @@ - -
- Technical considerations
There are many technical considerations to take into account with regard to designing a multi-site OpenStack implementation. An OpenStack cloud can be designed in a variety of ways to handle individual application needs. A multi-site deployment has additional challenges compared to single-site installations and therefore is a more complex solution.
When determining capacity options, be sure to take into account not just the technical issues, but also the economic or operational issues that might arise from specific decisions.
Inter-site link capacity describes the capabilities of the connectivity between the different OpenStack sites. This includes parameters such as bandwidth, latency, whether or not a link is dedicated, and any business policies applied to the connection. The capability and number of the links between sites determine what kind of options are available for deployment. For example, if two sites have a pair of high-bandwidth links available between them, it may be wise to configure a separate storage replication network between the two sites to support a single Swift endpoint and a shared Object Storage capability between them. An example of this technique, as well as a configuration walk-through, is available at http://docs.openstack.org/developer/swift/replication_network.html#dedicated-replication-network. Another option in this scenario is to build a dedicated set of tenant private networks across the secondary link, using overlay networks with a third party mapping the site overlays to each other.
The capacity requirements of the links between sites are driven by application behavior. If the link latency is too high, certain applications that use a large number of small packets, for example RPC calls, may encounter issues communicating with each other or operating properly. Additionally, OpenStack may encounter similar types of issues. To mitigate this, Identity service call timeouts can be tuned to prevent issues authenticating against a central Identity service.
Another network capacity consideration for a multi-site deployment is the amount and performance of overlay networks available for tenant networks. If using shared tenant networks across zones, it is imperative that an external overlay manager or controller be used to map these overlays together. It is necessary to ensure that the number of possible IDs is identical between the zones.
As of the Kilo release, OpenStack Networking was not capable of managing tunnel IDs across installations. So if one site runs out of IDs, but another does not, that tenant's network is unable to reach the other site.
Capacity can take other forms as well. The ability for a region to grow depends on scaling out the number of available compute nodes. This topic is covered in greater detail in the section for compute-focused deployments. However, it may be necessary to grow cells in an individual region, depending on the size of your cluster and the ratio of virtual machines per hypervisor.
A third form of capacity comes in the multi-region-capable components of OpenStack. Centralized Object Storage is capable of serving objects through a single namespace across multiple regions. Since this works by accessing the object store through the swift proxy, it is possible to overload the proxies. There are two options available to mitigate this issue:
- Deploy a large number of swift proxies.
The drawback is - that the proxies are not load-balanced and a large file - request could continually hit the same proxy. - - - Add a caching HTTP proxy and load balancer in front of - the swift proxies. Since swift objects are returned to the - requester via HTTP, this load balancer would alleviate the - load required on the swift proxies. - - -
Utilization - While constructing a multi-site OpenStack environment is the - goal of this guide, the real test is whether an application - can utilize it. - The Identity service is normally the first interface for - OpenStack users and is required for almost all major operations - within OpenStack. Therefore, it is important that you provide users - with a single URL for Identity service authentication, and - document the configuration of regions within the Identity service. - Each of the sites defined in your installation is considered - to be a region in Identity nomenclature. This is important for - the users, as it is required to define the region name when - providing actions to an API endpoint or in the dashboard. - Load balancing is another common issue with multi-site - installations. While it is still possible to run HAproxy - instances with Load-Balancer-as-a-Service, these are defined - to a specific region. Some applications can manage this using - internal mechanisms. Other applications may require the - implementation of an external system, including global services - load balancers or anycast-advertised DNS. - Depending on the storage model chosen during site design, - storage replication and availability are also a concern - for end-users. If an application can support regions, then it - is possible to keep the object storage system separated by region. - In this case, users who want to have an object available to - more than one region need to perform cross-site replication. - However, with a centralized swift proxy, the user may need to - benchmark the replication timing of the Object Storage back end. - Benchmarking allows the operational staff to provide users with - an understanding of the amount of time required for a stored or - modified object to become available to the entire environment. -
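A sketch of the replication benchmarking suggested above. The write_object() and object_visible() callables are placeholders for whatever client the deployment actually uses (for example, a Swift client writing in one region and reading from another):

    # Sketch: measure how long an object written in one region takes to
    # become readable in another. write_object() and object_visible() are
    # placeholders for the deployment's own client calls.
    import time

    def measure_replication_lag(write_object, object_visible,
                                name="repl-probe", timeout=600, poll=5):
        write_object(name)                       # write in the local region
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            if object_visible(name):             # readable from the remote region?
                return time.monotonic() - start
            time.sleep(poll)
        raise TimeoutError(f"{name} not visible after {timeout}s")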
-
Performance - Determining the performance of a multi-site installation - involves considerations that do not come into play in a - single-site deployment. Being a distributed deployment, - performance in multi-site deployments may be affected in certain - situations. - Since multi-site systems can be geographically separated, - there may be greater latency or jitter when communicating across - regions. This can especially impact systems like the OpenStack - Identity service when making authentication attempts from regions - that do not contain the centralized Identity implementation. It - can also affect applications which rely on Remote Procedure Call (RPC) - for normal operation. An example of this can be seen in high - performance computing workloads. - Storage availability can also be impacted by the - architecture of a multi-site deployment. A centralized Object - Storage service requires more time for an object to be - available to instances locally in regions where the object was - not created. Some applications may need to be tuned to account - for this effect. Block Storage does not currently have a - method for replicating data across multiple regions, so - applications that depend on available block storage need - to manually cope with this limitation by creating duplicate - block storage entries in each region. -
-
- OpenStack components - Most OpenStack installations require a bare minimum set of - pieces to function. These include the OpenStack Identity - (keystone) for authentication, OpenStack Compute - (nova) for compute, OpenStack Image service (glance) for image - storage, OpenStack Networking (neutron) for networking, and - potentially an object store in the form of OpenStack Object - Storage (swift). Deploying a multi-site installation also demands extra - components in order to coordinate between regions. A centralized - Identity service is necessary to provide the single authentication - point. A centralized dashboard is also recommended to provide a - single login point and a mapping to the API and CLI - options available. A centralized Object Storage service may also - be used, but will require the installation of the swift proxy - service. - It may also be helpful to install a few extra options in - order to facilitate certain use cases. For example, - installing Designate may assist in automatically generating - DNS domains for each region with an automatically-populated - zone full of resource records for each instance. This - facilitates using DNS as a mechanism for determining which - region will be selected for certain applications. - Another useful tool for managing a multi-site installation - is Orchestration (heat). The Orchestration service allows the - use of templates to define a set of instances to be launched - together or for scaling existing sets. It can also be used to - set up matching or differentiated groupings based on - regions. For instance, if an application requires an equally - balanced number of nodes across sites, the same heat template - can be used to cover each site with small alterations to only - the region name. -
-
diff --git a/doc/arch-design/multi_site/section_user_requirements_multi_site.xml b/doc/arch-design/multi_site/section_user_requirements_multi_site.xml deleted file mode 100644 index 7937d92e41..0000000000 --- a/doc/arch-design/multi_site/section_user_requirements_multi_site.xml +++ /dev/null @@ -1,176 +0,0 @@ - -
- - User requirements -
- Workload characteristics
An understanding of the expected workloads for a desired multi-site environment and use case is an important factor in the decision-making process. In this context, workload refers to the way the systems are used. A workload could be a single application or a suite of applications that work together. It could also be a duplicate set of applications that need to run in multiple cloud environments. Often in a multi-site deployment, the same workload will need to work identically in more than one physical location. This multi-site scenario likely includes one or more of the other scenarios in this book, with the additional requirement of having the workloads in two or more locations.
For many use cases, the proximity of the user to their workloads has a direct influence on the performance of the application and therefore should be taken into consideration in the design. Certain applications require zero to minimal latency, which can only be achieved by deploying the cloud in multiple locations. These locations could be in different data centers, cities, countries, or geographical regions, depending on the user requirement and location of the users.
-
- Consistency of images and templates across different - sites - It is essential that the deployment of instances is - consistent across the different sites and built - into the infrastructure. If the OpenStack Object Storage is used as - a back end for the Image service, it is possible to create repositories - of consistent images across multiple sites. Having central - endpoints with multiple storage nodes allows consistent centralized - storage for every site. - Not using a centralized object store increases the operational - overhead of maintaining a consistent image library. This - could include development of a replication mechanism to handle - the transport of images and the changes to the images across - multiple sites.
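A sketch of a consistency check over such an image library. The list_images(region) callable is a placeholder expected to return an image-name-to-checksum mapping from each region's Image service:

    # Sketch: flag images whose checksums differ (or are missing) between
    # regions. list_images(region) is a placeholder supplied by the operator.
    def image_drift(regions, list_images):
        checksums = {region: list_images(region) for region in regions}
        names = set().union(*checksums.values())
        drift = {}
        for name in names:
            seen = {checksums[r].get(name) for r in regions}
            if len(seen) > 1:                    # missing or mismatched somewhere
                drift[name] = {r: checksums[r].get(name) for r in regions}
        return drift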
-
- High availability
If high availability is a requirement for continuous infrastructure operations, define the basic level of high availability that is required. The OpenStack management components need to have a basic and minimal level of redundancy. The simplest example is that the loss of any single site should have minimal impact on the availability of the OpenStack services. The OpenStack High Availability Guide contains more information on how to provide redundancy for the OpenStack components.
Multiple network links should be deployed between sites to provide redundancy for all components. This includes storage replication, which should be isolated to a dedicated network or VLAN with the ability to assign QoS to control the replication traffic or provide priority for this traffic. Note that if the data store is highly changeable, the network requirements could have a significant effect on the operational cost of maintaining the sites.
The ability to maintain object availability in both sites has significant implications on the object storage design and implementation. It also has a significant impact on the WAN network design between the sites.
Connecting more than two sites increases the challenges and adds more complexity to the design considerations. Multi-site implementations require planning to address the additional topology used for internal and external connectivity. Some options include full mesh, hub-and-spoke, spine-leaf, and 3D torus topologies.
If applications running in a cloud are not cloud-aware, there should be clear measures and expectations to define what the infrastructure can and cannot support. An example would be shared storage between sites. It is possible; however, such a solution is not native to OpenStack and requires a third-party hardware vendor to fulfill such a requirement. Another example can be seen in applications that are able to consume resources in object storage directly. These applications need to be cloud-aware to make good use of an OpenStack Object Store.
-
- Application readiness - Some applications are tolerant of the lack of synchronized - object storage, while others may need those objects to be - replicated and available across regions. Understanding how - the cloud implementation impacts new and existing applications - is important for risk mitigation, and the overall success of a - cloud project. Applications may have to be written or rewritten - for an infrastructure with little to no redundancy, or with the - cloud in mind.
-
- Cost - A greater number of sites increase cost and complexity for a - multi-site deployment. Costs can be broken down into the following - categories: - - - Compute resources - - - Networking resources - - - Replication - - - Storage - - - Management - - - Operational costs - -
-
- Site loss and recovery - Outages can cause partial or full loss of site functionality. - Strategies should be implemented to understand and plan for recovery - scenarios. - - - The deployed applications need to continue to - function and, more importantly, you must consider the - impact on the performance and reliability of the application - when a site is unavailable. - - - It is important to understand what happens to the - replication of objects and data between the sites when - a site goes down. If this causes queues to start - building up, consider how long these queues can - safely exist until an error occurs. - - - After an outage, ensure the method for resuming proper - operations of a site is implemented when it comes back online. - We recommend you architect the recovery to avoid race conditions. - -
-
- Compliance and geo-location - An organization may have certain legal obligations and - regulatory compliance measures which could require certain - workloads or data to not be located in certain regions.
-
- Auditing - A well thought-out auditing strategy is important in order - to be able to quickly track down issues. Keeping track of - changes made to security groups and tenant changes can be - useful in rolling back the changes if they affect production. - For example, if all security group rules for a tenant - disappeared, the ability to quickly track down the issue would - be important for operational and legal reasons.
-
- Separation of duties - A common requirement is to define different roles for the - different cloud administration functions. An example would be - a requirement to segregate the duties and permissions by - site.
-
- Authentication between sites - It is recommended to have a single authentication domain - rather than a separate implementation for each and every - site. This requires an authentication mechanism that is highly - available and distributed to ensure continuous operation. - Authentication server locality might be required and should be - planned for.
-
diff --git a/doc/arch-design/network_focus/section_architecture_network_focus.xml b/doc/arch-design/network_focus/section_architecture_network_focus.xml deleted file mode 100644 index de7e0aa3b8..0000000000 --- a/doc/arch-design/network_focus/section_architecture_network_focus.xml +++ /dev/null @@ -1,184 +0,0 @@ - -
- Architecture
Network-focused OpenStack architectures have many similarities to other OpenStack architecture use cases. There are several factors to consider when designing for a network-centric or network-heavy application environment.
Networks exist to serve as a medium of transporting data between systems. It is inevitable that an OpenStack design has inter-dependencies with non-network portions of OpenStack as well as with external systems. Depending on the specific workload, there may be major interactions with storage systems both within and external to the OpenStack environment. For example, in the case of a content delivery network, there is a twofold interaction with storage. Traffic flows to and from the storage array for ingesting and serving content in a north-south direction. In addition, there is replication traffic flowing in an east-west direction.
Compute-heavy workloads may also induce interactions with the network. Some high-performance compute applications require network-based memory mapping and data sharing and, as a result, induce a higher network load when they transfer results and data sets. Others may be highly transactional and issue transaction locks, perform their functions, and revoke transaction locks at high rates. This also has an impact on the network performance.
Some network dependencies are external to OpenStack. While OpenStack Networking is capable of providing network ports, IP addresses, some level of routing, and overlay networks, there are some other functions that it cannot provide. For many of these, you may require external systems or equipment to fill in the functional gaps. Hardware load balancers are an example of equipment that may be necessary to distribute workloads or offload certain functions. OpenStack Networking provides a tunneling feature; however, it is constrained to a Networking-managed region. If the need arises to extend a tunnel beyond the OpenStack region to either another region or an external system, implement the tunnel itself outside OpenStack or use a tunnel management system to map the tunnel or overlay to an external tunnel.
Depending on the selected design, Networking itself might not support the required layer-3 network functionality. If you choose to use the provider networking mode without running the layer-3 agent, you must install an external router to provide layer-3 connectivity to outside systems.
Interaction with orchestration services is inevitable in larger-scale deployments. The Orchestration service is capable of allocating network resources defined in templates, mapping them to tenant networks, creating ports, and allocating floating IPs. If there is a requirement to define and manage network resources when using orchestration, we recommend that the design include the Orchestration service to meet the demands of users.
- Design impacts
A wide variety of factors can affect a network-focused OpenStack architecture. While there are some considerations shared with a general use case, specific workloads related to network requirements influence network design decisions.
One decision includes whether or not to use Network Address Translation (NAT) and where to implement it. If there is a requirement for floating IPs instead of public fixed addresses, then you must use NAT. An example of this is a DHCP relay that must know the IP of the DHCP server. In these cases it is easier to automate the infrastructure to apply the target IP to a new instance rather than to reconfigure legacy or external systems for each new instance.
NAT for floating IPs managed by Networking resides within the hypervisor, but there are also versions of NAT that may be running elsewhere. If there is a shortage of IPv4 addresses, there are two common methods to mitigate this externally to OpenStack. The first is to run a load balancer either within OpenStack as an instance, or use an external load balancing solution. In the internal scenario, Networking's Load-Balancer-as-a-Service (LBaaS) can manage load balancing software, for example HAproxy. This is specifically to manage the Virtual IP (VIP) while a dual-homed connection from the HAproxy instance connects the public network with the tenant private network that hosts all of the content servers. In the external scenario, a load balancer needs to serve the VIP and also connect to the tenant overlay network through external means or through private addresses.
Another kind of NAT that may be useful is protocol NAT. In some cases it may be desirable to use only IPv6 addresses on instances and operate either an instance or an external service to provide a NAT-based transition technology such as NAT64 and DNS64. This provides the ability to have a globally routable IPv6 address while only consuming IPv4 addresses as necessary or in a shared manner.
Application workloads affect the design of the underlying network architecture. If a workload requires network-level redundancy, the routing and switching architecture has to accommodate this. There are differing methods for providing this that are dependent on the selected network hardware, the performance of the hardware, and which networking model you deploy. Examples include link aggregation (LAG) and Hot Standby Router Protocol (HSRP). Also consider whether to deploy OpenStack Networking or legacy networking (nova-network), and which plug-in to select for OpenStack Networking. If using an external system, configure Networking to run layer 2 with a provider network configuration. For example, implement HSRP to terminate layer-3 connectivity.
Depending on the workload, overlay networks may not be the best solution. Where application network connections are small, short-lived, or bursty, running a dynamic overlay can generate as much bandwidth as the packets it carries. It also can induce enough latency to cause issues with certain applications. There is an impact to the device generating the overlay which, in most installations, is the hypervisor. This causes performance degradation in packet-per-second and connection-per-second rates.
Overlays also come with a secondary option that may not be appropriate to a specific workload.
While overlays operate in full mesh by default, there might be good reasons to disable this function because it may cause excessive overhead for some workloads. Conversely, other workloads operate without issue. For example, most web service applications do not have major issues with a full mesh overlay network, while some network monitoring tools or storage replication workloads have performance issues with throughput or excessive broadcast traffic.

Many people overlook an important design decision: the choice of layer-3 protocols. While OpenStack was initially built with only IPv4 support, Networking now supports IPv6 and dual-stacked networks. Some workloads become possible through the use of IPv6 and IPv6-to-IPv4 reverse transition mechanisms such as NAT64 and DNS64 or 6to4. This alters the requirements for any address plan, as single-stacked and transitional IPv6 deployments can alleviate the need for IPv4 addresses.

OpenStack has limited support for dynamic routing; however, there are a number of options for implementing routing within the cloud by incorporating third-party solutions, including network equipment, hardware nodes, and instances. Some workloads perform well with nothing more than static routes and default gateways configured at the layer-3 termination point. In most cases this is sufficient; however, some cases require the addition of at least one type of dynamic routing protocol, if not multiple protocols. Having a form of interior gateway protocol (IGP) available to the instances inside an OpenStack installation opens up use cases such as anycast route injection for services that need to use it as a geographic location or failover mechanism. Other applications may wish to participate directly in a routing protocol, either as a passive observer, as in the case of a looking glass, or as an active participant in the form of a route reflector. Since an instance might have a large amount of compute and memory resources, it is trivial for it to hold an entire unpartitioned routing table and use it to provide services such as network path visibility to other applications or as a monitoring tool.

Path maximum transmission unit (MTU) failures are less well known and harder to diagnose. The MTU must be large enough to handle normal traffic, overhead from an overlay network, and the desired layer-3 protocol. Adding externally built tunnels further reduces the usable MTU. In this case, you must pay attention to the fully calculated MTU size because some systems ignore or drop path MTU discovery packets.
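The NAT64 approach mentioned earlier in this section maps IPv4 destinations into an IPv6 prefix; RFC 6052 defines the well-known 64:ff9b::/96 prefix for this purpose. As a rough illustration of the address arithmetic only, not of a full NAT64 deployment, the following Python sketch embeds an IPv4 address into that prefix. The addresses used are documentation examples, and a real deployment may use a site-specific /96 instead.

    import ipaddress

    # Well-known NAT64 prefix from RFC 6052 (a deployment may use its own /96).
    NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

    def synthesize_nat64(ipv4: str) -> ipaddress.IPv6Address:
        """Embed an IPv4 address in the NAT64 /96 prefix, as a DNS64 resolver would."""
        v4 = ipaddress.IPv4Address(ipv4)
        return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) + int(v4))

    # 203.0.113.10 is a documentation address; an IPv6-only instance would be
    # handed 64:ff9b::cb00:710a for it and reach it through the NAT64 gateway.
    print(synthesize_nat64("203.0.113.10"))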
-
Tunable networking components

When designing for network-intensive workloads, consider the configurable networking components of an OpenStack architecture, such as MTU and QoS. Some workloads require a larger MTU than normal due to the transfer of large blocks of data. When providing network service for applications such as video streaming or storage replication, we recommend that you configure both OpenStack hardware nodes and the supporting network equipment for jumbo frames where possible. This allows for better use of available bandwidth. Configure jumbo frames across the complete path the packets traverse. If one network component is not capable of handling jumbo frames, then the entire path reverts to the default MTU.

Quality of Service (QoS) also has a great impact on network-intensive workloads because it gives priority treatment to packets that are sensitive to poor network performance. In applications such as Voice over IP (VoIP), differentiated services code points are a near requirement for proper operation. You can also use QoS in the opposite direction for mixed workloads: to prevent low-priority but high-bandwidth applications, for example backup services, video conferencing, or file sharing, from consuming bandwidth that is needed for the proper operation of other workloads. It is possible to tag file storage traffic as a lower class, such as best effort or scavenger, to allow the higher-priority traffic through. In cases where regions within a cloud might be geographically distributed, it may also be necessary to implement WAN optimization to combat latency or packet loss.
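The "complete path" requirement above is easiest to reason about by subtracting encapsulation overhead from the physical MTU. The sketch below uses commonly cited per-encapsulation byte counts for Open vSwitch tunnels (roughly 42 bytes for GRE with a key plus the inner Ethernet header, roughly 50 bytes for VXLAN); the exact figures depend on header options and IPv4 versus IPv6 outer headers, so treat them as assumptions to adjust for your own deployment.

    # Rough instance-visible MTU after tunnel overhead; byte counts are typical
    # values, not exact for every option combination.
    OVERHEAD = {
        "none": 0,
        "gre": 42,     # outer IPv4 (20) + GRE with key (8) + inner Ethernet (14)
        "vxlan": 50,   # outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
    }

    def instance_mtu(physical_mtu: int, encap: str) -> int:
        return physical_mtu - OVERHEAD[encap]

    for phys in (1500, 9000):            # standard frames vs. jumbo frames
        for encap in ("gre", "vxlan"):
            print(phys, encap, instance_mtu(phys, encap))

With a 1500-byte physical MTU this reproduces the familiar 1458/1450 figures, and it shows why enabling jumbo frames end to end leaves comfortable headroom for overlay traffic.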
-
diff --git a/doc/arch-design/network_focus/section_operational_considerations_network_focus.xml b/doc/arch-design/network_focus/section_operational_considerations_network_focus.xml deleted file mode 100644 index b27d42fb4f..0000000000 --- a/doc/arch-design/network_focus/section_operational_considerations_network_focus.xml +++ /dev/null @@ -1,68 +0,0 @@ - -
Operational considerations

Network-focused OpenStack clouds have a number of operational considerations that influence the selected design, including:

* Dynamic routing of static routes
* Service level agreements (SLAs)
* Ownership of user management

An initial network consideration is the selection of a telecom company or transit provider.

Make additional design decisions about monitoring and alarming. Monitoring can be an internal responsibility or the responsibility of the external provider. In the case of using an external provider, service level agreements (SLAs) likely apply. In addition, other operational considerations such as bandwidth, latency, and jitter can be part of an SLA.

Consider the ability to upgrade the infrastructure. As demand for network resources increases, operators add additional IP address blocks and additional bandwidth capacity. In addition, consider managing hardware and software life cycle events, for example upgrades, decommissioning, and outages, while avoiding service interruptions for tenants.

Factor maintainability into the overall network design. This includes the ability to manage and maintain IP addresses as well as the use of overlay identifiers including VLAN tag IDs, GRE tunnel IDs, and MPLS tags. As an example, if you need to change all of the IP addresses on a network, a process known as renumbering, then the design must support this function (a small sketch follows this section).

Address network-focused applications when considering certain operational realities. For example, consider the impending exhaustion of IPv4 addresses, the migration to IPv6, and the use of private networks to segregate different types of traffic that an application receives or generates. In the case of IPv4 to IPv6 migrations, applications should follow best practices for storing IP addresses. We recommend that you avoid relying on IPv4 features that did not carry over to the IPv6 protocol or that differ in implementation.

To segregate traffic, allow applications to create a private tenant network for database and storage network traffic. Use a public network for services that require direct client access from the internet. Upon segregating the traffic, consider quality of service (QoS) and security to ensure each network has the required level of service.

Finally, consider the routing of network traffic. For some applications, develop a complex policy framework for routing. To create a routing policy that satisfies business requirements, consider the economic cost of transmitting traffic over expensive links versus cheaper links, in addition to bandwidth, latency, and jitter requirements.

Additionally, consider how to respond to network events. As an example, how load transfers from one link to another during a failure scenario could be a factor in the design. If you do not plan network capacity correctly, failover traffic could overwhelm other ports or network links and create a cascading failure scenario. In this case, traffic that fails over to one link overwhelms that link and then moves to the subsequent links until all network traffic stops.
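As a small illustration of the renumbering process mentioned above, the sketch below maps hosts from an old subnet to the same host offsets in a new subnet; the prefixes are IPv4 documentation ranges standing in for real allocations, and a production renumbering plan involves far more than address arithmetic (DNS, firewall rules, and external dependencies).

    import ipaddress

    old = ipaddress.ip_network("192.0.2.0/24")        # current subnet (documentation range)
    new = ipaddress.ip_network("198.51.100.0/24")     # replacement subnet (documentation range)

    def renumber(addr: str) -> ipaddress.IPv4Address:
        """Map a host to the same offset in the new subnet."""
        offset = int(ipaddress.ip_address(addr)) - int(old.network_address)
        return ipaddress.ip_address(int(new.network_address) + offset)

    print(renumber("192.0.2.17"))   # 198.51.100.17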
diff --git a/doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml b/doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml deleted file mode 100644 index 4d22f4f9ce..0000000000 --- a/doc/arch-design/network_focus/section_prescriptive_examples_network_focus.xml +++ /dev/null @@ -1,209 +0,0 @@ - -
Prescriptive examples

An organization designs a large-scale web application with cloud principles in mind. The application scales horizontally in a bursting fashion and generates a high instance count. The application requires an SSL connection to secure data and must not lose connection state to individual servers.

The figure below depicts an example design for this workload. In this example, a hardware load balancer provides SSL offload functionality and connects to tenant networks in order to reduce address consumption. This load balancer links to the routing architecture as it services the VIP for the application. The router and load balancer use the GRE tunnel ID of the application's tenant network and an IP address within the tenant subnet but outside of the address pool. This ensures that the load balancer can communicate with the application's HTTP servers without requiring the consumption of a public IP address.

Because sessions must persist until closed, the routing and switching architecture provides high availability. Switches mesh to each hypervisor and to each other, and also provide an MLAG implementation to ensure that layer-2 connectivity does not fail. Routers use VRRP and fully mesh with switches to ensure layer-3 connectivity. Since GRE provides an overlay network, Networking is present and uses the Open vSwitch agent in GRE tunnel mode. This ensures all devices can reach all other devices and that you can create tenant networks for private addressing links to the load balancer.

A web service architecture has many options and optional components. Due to this, it can fit into a large number of other OpenStack designs. A few key components, however, need to be in place to handle the nature of most web-scale workloads. You require the following components:

* OpenStack Controller services (Image, Identity, Networking, and supporting services such as MariaDB and RabbitMQ)
* OpenStack Compute running the KVM hypervisor
* OpenStack Object Storage
* Orchestration service
* Telemetry service

Beyond the normal Identity, Compute, Image service, and Object Storage components, we recommend the Orchestration service component to handle the proper scaling of workloads to adjust to demand. Due to the requirement for auto-scaling, the design includes the Telemetry service. Web services tend to be bursty in load, have well-defined peak and valley usage patterns and, as a result, benefit from automatic scaling of instances based upon traffic. At a network level, a split network configuration works well with databases residing on private tenant networks, since these do not emit a large quantity of broadcast traffic and may need to interconnect to some databases for content.
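The choice above of giving the load balancer an address inside the tenant subnet but outside of the Networking allocation pool can be checked with simple address arithmetic. The subnet, pool boundaries, and VIP in the sketch below are illustrative assumptions, not values taken from this design.

    import ipaddress

    subnet = ipaddress.ip_network("10.0.0.0/24")        # tenant subnet (example)
    pool_start = ipaddress.ip_address("10.0.0.10")      # Networking allocation pool start
    pool_end = ipaddress.ip_address("10.0.0.200")       # Networking allocation pool end
    vip = ipaddress.ip_address("10.0.0.5")              # address handed to the load balancer

    in_subnet = vip in subnet
    in_pool = pool_start <= vip <= pool_end
    # The VIP must be routable on the tenant subnet but never handed out by DHCP,
    # so we want in_subnet to be True and in_pool to be False.
    print(in_subnet, in_pool)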
- Load balancing - Load balancing spreads requests across multiple instances. - This workload scales well horizontally across large numbers of - instances. This enables instances to run without publicly - routed IP addresses and instead to rely on the load - balancer to provide a globally reachable service. - Many of these services do not require - direct server return. This aids in address planning and - utilization at scale since only the virtual IP (VIP) must be - public. -
-
Overlay networks

The overlay functionality design includes OpenStack Networking in Open vSwitch GRE tunnel mode. In this case, the layer-3 external routers pair using VRRP, and switches pair using an implementation of MLAG, ensuring that you do not lose connectivity with the upstream routing infrastructure.
-
Performance tuning

Network-level tuning for this workload is minimal. Quality of Service (QoS) applies to these workloads with a middle-ground class selector, depending on existing policies: higher than a best-effort queue but lower than an Expedited Forwarding or Assured Forwarding queue. Since this type of application generates larger packets with longer-lived connections, you can optimize bandwidth utilization for long-duration TCP. Normal bandwidth planning applies here: benchmark a session's usage, multiply it by the expected number of concurrent sessions, and add overhead.
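A minimal sketch of that bandwidth-planning arithmetic follows; the per-session rate, session count, and overhead factor are placeholder assumptions to replace with benchmarked values for your own application.

    def bandwidth_plan_mbps(per_session_mbps: float, concurrent_sessions: int,
                            overhead_factor: float = 1.2) -> float:
        """Benchmarked per-session usage x concurrent sessions, plus headroom."""
        return per_session_mbps * concurrent_sessions * overhead_factor

    # Example: 0.5 Mbit/s per session, 20,000 concurrent sessions, 20% overhead
    # for protocol framing, retransmits, and traffic bursts.
    print(bandwidth_plan_mbps(0.5, 20_000))   # 12000.0 Mbit/s, i.e. roughly 12 Gbit/s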
-
Network functions

Network functions is a broad category that encompasses workloads supporting the rest of a system's network. These workloads tend to consist of large amounts of small packets that are very short lived, such as DNS queries or SNMP traps. These messages need to arrive quickly and do not tolerate packet loss well, as there can be a very large volume of them. There are a few extra considerations to take into account for this type of workload, and they can change a configuration all the way down to the hypervisor level. For an application that generates 10 TCP sessions per user with an average bandwidth of 512 kilobytes per second per flow and an expected user count of ten thousand concurrent users, the expected bandwidth plan is approximately 4.88 gigabits per second.

The supporting network for this type of configuration needs to have low latency and evenly distributed availability. This workload benefits from having services local to the consumers of the service. Use a multi-site approach, as well as deploying many copies of the application, to handle load as close as possible to consumers. Since these applications function independently, they do not warrant running overlays to interconnect tenant networks. Overlays also have the drawback of performing poorly with rapid flow setup and may incur too much overhead with large quantities of small packets; therefore we do not recommend them here.

QoS is desirable for some workloads to ensure delivery. DNS has a major impact on the load times of other services and needs to be reliable and provide rapid responses. Configure rules in upstream devices to apply a higher class selector to DNS to ensure faster delivery or a better position in queuing algorithms.
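The 4.88 Gbit/s figure above is most easily reproduced if the 512 value is read as roughly 512 kilobits per second of aggregate bandwidth per user and binary (1024-based) prefixes are used; a literal reading of 512 kilobytes per second for each of the 10 flows gives a far larger number. The sketch below shows both readings so you can substitute your own measured per-flow rates rather than relying on either interpretation.

    USERS = 10_000
    FLOWS_PER_USER = 10

    # Reading that reproduces the ~4.88 Gbit/s figure (an assumption, not a given):
    # 512 kbit/s of aggregate bandwidth per user, binary prefixes.
    per_user_kbps = 512
    gbps_binary = USERS * per_user_kbps / 1024 / 1024
    print(round(gbps_binary, 2))            # 4.88

    # Literal reading: 512 kB/s per flow, 10 flows per user, decimal prefixes.
    per_flow_bps = 512 * 1000 * 8
    gbps_literal = USERS * FLOWS_PER_USER * per_flow_bps / 1e9
    print(round(gbps_literal, 2))           # 409.6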
-
Cloud storage

Another common use case for OpenStack environments is providing a cloud-based file storage and sharing service. You might consider this a storage-focused use case, but its network-side requirements make it a network-focused use case.

For example, consider a cloud backup application. This workload has two specific behaviors that impact the network. Because this workload is an externally-facing service and an internally-replicating application, it has both north-south and east-west traffic considerations:

* North-south traffic: when a user uploads and stores content, that content moves into the OpenStack installation. When users download this content, the content moves out from the OpenStack installation. Because this service operates primarily as a backup, most of the traffic moves southbound into the environment. In this situation, it benefits you to configure a network to be asymmetrically downstream because the traffic that enters the OpenStack installation is greater than the traffic that leaves the installation.

* East-west traffic: likely to be fully symmetric. Because replication originates from any node and might target multiple other nodes algorithmically, it is less likely for this traffic to have a larger volume in any specific direction. However, this traffic might interfere with north-south traffic.

This application prioritizes north-south traffic over east-west traffic: the north-south traffic involves customer-facing data.

The network design in this case is less dependent on availability and more dependent on being able to handle high bandwidth. As a direct result, it is beneficial to forgo redundant links in favor of bonding those connections. This increases available bandwidth. It is also beneficial to configure all devices in the path, including OpenStack, to generate and pass jumbo frames.
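As a rough way to reason about "asymmetrically downstream" provisioning and bonded links, the sketch below sizes ingress versus egress capacity from assumed ingest, restore, and replication rates; all of the numbers are illustrative placeholders, not figures from this example.

    import math

    # Illustrative sizing for a backup-style workload: ingest (southbound) dominates.
    ingest_gbps = 16      # sustained backup uploads entering the cloud (assumed)
    restore_gbps = 4      # occasional restores leaving the cloud (assumed)
    replication_gbps = 8  # east-west replication between storage nodes (assumed)
    link_gbps = 10        # capacity of a single physical link

    def links_needed(required_gbps: float, link_gbps: float, headroom: float = 1.25) -> int:
        """Number of links to bond, with headroom instead of idle redundant links."""
        return math.ceil(required_gbps * headroom / link_gbps)

    print("downstream links:", links_needed(ingest_gbps, link_gbps))       # 2
    print("upstream links:  ", links_needed(restore_gbps, link_gbps))      # 1
    print("east-west links: ", links_needed(replication_gbps, link_gbps))  # 1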
-
diff --git a/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml b/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml deleted file mode 100644 index 66c4be329a..0000000000 --- a/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml +++ /dev/null @@ -1,462 +0,0 @@ - -
- - Technical considerations - When you design an OpenStack network architecture, you must - consider layer-2 and layer-3 issues. Layer-2 - decisions involve those made at the data-link layer, such as - the decision to use Ethernet versus Token Ring. Layer-3 decisions - involve those made about the protocol layer and the point when - IP comes into the picture. As an example, a completely - internal OpenStack network can exist at layer 2 and ignore - layer 3. In order for any traffic to go outside of - that cloud, to another network, or to the Internet, however, you must - use a layer-3 router or switch. - The past few years have seen two competing trends in - networking. One trend leans towards building data center network - architectures based on layer-2 networking. Another trend treats - the cloud environment essentially as a miniature version of the - Internet. This approach is radically different from the network - architecture approach in the staging environment: - the Internet only uses layer-3 routing rather than - layer-2 switching. - A network designed on layer-2 protocols has advantages over one - designed on layer-3 protocols. In spite of the difficulties of - using a bridge to perform the network role of a router, many - vendors, customers, and service providers choose to use Ethernet - in as many parts of their networks as possible. The benefits of - selecting a layer-2 design are: - - - Ethernet frames contain all the essentials for - networking. These include, but are not limited to, - globally unique source addresses, globally unique - destination addresses, and error control. - - - Ethernet frames can carry any kind of packet. - Networking at layer 2 is independent of the layer-3 - protocol. - - - Adding more layers to the Ethernet frame only slows - the networking process down. This is known as 'nodal - processing delay'. - - - You can add adjunct networking features, for - example class of service (CoS) or multicasting, to - Ethernet as readily as IP networks. - - - VLANs are an easy mechanism for isolating - networks. - - - Most information starts and ends inside Ethernet frames. - Today this applies to data, voice (for example, VoIP), and - video (for example, web cameras). The concept is that, if you can - perform more of the end-to-end transfer of information from - a source to a destination in the form of Ethernet frames, the network - benefits more from the advantages of Ethernet. - Although it is not a substitute for IP networking, networking at - layer 2 can be a powerful adjunct to IP networking. - - Layer-2 Ethernet usage has these advantages over layer-3 IP - network usage: - - - - Speed - - - Reduced overhead of the IP hierarchy. - - - No need to keep track of address configuration as systems - move around. Whereas the simplicity of layer-2 - protocols might work well in a data center with hundreds - of physical machines, cloud data centers have the - additional burden of needing to keep track of all virtual - machine addresses and networks. In these data centers, it - is not uncommon for one physical node to support 30-40 - instances. - - - - Networking at the frame level says nothing - about the presence or absence of IP addresses at the packet - level. Almost all ports, links, and devices on a network of - LAN switches still have IP addresses, as do all the source and - destination hosts. There are many reasons for the continued - need for IP addressing. The largest one is the need to manage - the network. 
A device or link without an IP address is usually - invisible to most management applications. Utilities including - remote access for diagnostics, file transfer of configurations - and software, and similar applications cannot run without IP - addresses as well as MAC addresses. - -
Layer-2 architecture limitations

Outside of the traditional data center, the limitations of layer-2 network architectures become more obvious:

* The number of VLANs is limited to 4096.
* The number of MACs stored in switch tables is limited.
* You must accommodate the need to maintain a set of layer-4 devices to handle traffic control.
* MLAG, often used for switch redundancy, is a proprietary solution that does not scale beyond two devices and forces vendor lock-in.
* It can be difficult to troubleshoot a network without IP addresses and ICMP.
* Configuring ARP can be complicated on large layer-2 networks.
* All network devices need to be aware of all MACs, even instance MACs, so there is constant churn in MAC tables and network state changes as instances start and stop.
* Migrating MACs (instance migration) to different physical locations is a potential problem if you do not set ARP table timeouts properly.

It is important to know that layer 2 has a very limited set of network management tools. It is very difficult to control traffic, as layer 2 has no mechanisms to manage the network or shape the traffic, and network troubleshooting is very difficult. One reason for this difficulty is that network devices have no IP addresses. As a result, there is no reasonable way to check network delay in a layer-2 network.

On large layer-2 networks, configuring ARP learning can also be complicated. The setting for the MAC address timer on switches is critical and, if set incorrectly, can cause significant performance problems. As an example, the Cisco default MAC address timer is extremely long. Migrating MACs to different physical locations to support instance migration can be a significant problem. In this case, the network information maintained in the switches could be out of sync with the new location of the instance.

In a layer-2 network, all devices are aware of all MACs, even those that belong to instances. The network state information in the backbone changes whenever an instance starts or stops. As a result, there is far too much churn in the MAC tables on the backbone switches.
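One way to make the MAC-table pressure concrete is to estimate how many addresses a layer-2 backbone must learn for a given fleet. The node count and per-host interface figures below are illustrative assumptions; the instance density follows the 30-40 instances per physical node noted earlier in this chapter.

    # Estimate of MAC entries a layer-2 backbone must hold (illustrative numbers).
    physical_nodes = 500
    instances_per_node = 40      # 30-40 instances per node is described as common
    macs_per_instance = 1        # more with multiple ports or VLAN sub-interfaces

    instance_macs = physical_nodes * instances_per_node * macs_per_instance
    host_macs = physical_nodes * 2   # e.g. one management and one data interface per host

    print(instance_macs + host_macs)  # 21000 entries, before any headroom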
-
- Layer-3 architecture advantages - In the layer 3 case, there is no churn in the routing tables - due to instances starting and stopping. The only time there - would be a routing state change is in the case of a Top - of Rack (ToR) switch failure or a link failure in the backbone - itself. Other advantages of using a layer-3 architecture - include: - - - Layer-3 networks provide the same level of - resiliency and scalability as the Internet. - - - Controlling traffic with routing metrics is - straightforward. - - - You can configure layer 3 to use BGP - confederation for scalability so core routers have state - proportional to the number of racks, not to the number of - servers or instances. - - - Routing takes instance MAC and IP addresses - out of the network core, reducing state churn. Routing - state changes only occur in the case of a ToR switch - failure or backbone link failure. - - - There are a variety of well tested tools, for - example ICMP, to monitor and manage traffic. - - - Layer-3 architectures enable the use of Quality - of Service (QoS) to manage network performance. - - -
- Layer-3 architecture limitations - The main limitation of layer 3 is that there is no built-in - isolation mechanism comparable to the VLANs in layer-2 - networks. Furthermore, the hierarchical nature of IP addresses - means that an instance is on the same subnet as its - physical host. This means that you cannot migrate it outside - of the subnet easily. For these reasons, network - virtualization needs to use IP encapsulation - and software at the end hosts for isolation and the separation of - the addressing in the virtual layer from the addressing in the - physical layer. Other potential disadvantages of layer 3 - include the need to design an IP addressing scheme rather than - relying on the switches to keep track of the MAC - addresses automatically and to configure the interior gateway routing - protocol in the switches. -
-
-
Network recommendations overview

OpenStack has complex networking requirements for several reasons. Many components interact at different levels of the system stack, which adds complexity. Data flows are complex. Data in an OpenStack cloud moves both between instances across the network (known as east-west traffic) and in and out of the system (known as north-south traffic). Physical server nodes have network requirements that are independent of instance network requirements, and you must isolate these from the core network to account for scalability. We recommend functionally separating the networks for security purposes and tuning performance through traffic shaping.

You must consider a number of important general technical and business factors when planning and designing an OpenStack network. They include:

* A requirement for vendor independence. To avoid hardware or software vendor lock-in, the design should not rely on specific features of a vendor's router or switch.
* A requirement to massively scale the ecosystem to support millions of end users.
* A requirement to support indeterminate platforms and applications.
* A requirement to design for cost-efficient operations to take advantage of massive scale.
* A requirement to ensure that there is no single point of failure in the cloud ecosystem.
* A requirement for a high availability architecture to meet customer SLA requirements.
* A requirement to be tolerant of rack-level failure.
* A requirement to maximize flexibility to architect future production environments.

Bearing in mind these considerations, we recommend the following:

* Layer-3 designs are preferable to layer-2 architectures.
* Design a dense multi-path network core to support multi-directional scaling and flexibility.
* Use hierarchical addressing because it is the only viable option to scale the network ecosystem.
* Use virtual networking to isolate instance service network traffic from the management and internal network traffic.
* Isolate virtual networks using encapsulation technologies.
* Use traffic shaping for performance tuning.
* Use eBGP to connect to the Internet up-link.
* Use iBGP to flatten the internal traffic on the layer-3 mesh.
* Determine the most effective configuration for the block storage network.
-
- Additional considerations - There are several further considerations when designing a - network-focused OpenStack cloud. -
OpenStack Networking versus legacy networking (nova-network) considerations

Selecting the type of networking technology to implement depends on many factors. OpenStack Networking (neutron) and legacy networking (nova-network) both have their advantages and disadvantages. They are both valid and supported options that fit different use cases:

    Legacy networking (nova-network)    OpenStack Networking
    --------------------------------    -------------------------------------
    Simple, single agent                Complex, multiple agents
    More mature, established            Newer, maturing
    Flat or VLAN                        Flat, VLAN, overlays, L2-L3, SDN
    No plug-in support                  Plug-in support for 3rd parties
    Scales well                         Scaling requires 3rd party plug-ins
    No multi-tier topologies            Multi-tier topologies
-
- Redundant networking: ToR switch high availability - risk analysis - A technical consideration of networking is the idea that - you should install switching gear in a data center - with backup switches in case of hardware failure. - Research indicates the mean time between failures (MTBF) on switches - is between 100,000 and 200,000 hours. This number is dependent - on the ambient temperature of the switch in the data - center. When properly cooled and maintained, this translates to - between 11 and 22 years before failure. Even in the worst case - of poor ventilation and high ambient temperatures in the data - center, the MTBF is still 2-3 years. See http://www.garrettcom.com/techsupport/papers/ethernet_switch_reliability.pdf - for further information. - In most cases, it is much more economical to use a - single switch with a small pool of spare switches to replace - failed units than it is to outfit an entire data center with - redundant switches. Applications should tolerate rack level - outages without affecting normal - operations, since network and compute resources are easily - provisioned and plentiful. -
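A quick check of the MTBF-to-years conversion cited above, assuming continuous (24x7) operation; the spares-pool estimate at the end is an added illustration with assumed fleet size and MTBF, not a figure from this guide.

    HOURS_PER_YEAR = 24 * 365

    for mtbf_hours in (100_000, 200_000):
        print(mtbf_hours, "h ->", round(mtbf_hours / HOURS_PER_YEAR, 1), "years")
    # 100000 h -> 11.4 years, 200000 h -> 22.8 years, matching the 11-22 year range.

    # Illustrative spares estimate: expected switch failures per year across a fleet.
    switches = 400
    mtbf_hours = 150_000
    expected_failures_per_year = switches * HOURS_PER_YEAR / mtbf_hours
    print(round(expected_failures_per_year, 1))   # 23.4, roughly two replacements a month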
-
- Preparing for the future: IPv6 support - One of the most important networking topics today is the - impending exhaustion of IPv4 addresses. In early 2014, ICANN - announced that they started allocating the final IPv4 address - blocks to the Regional Internet Registries (http://www.internetsociety.org/deploy360/blog/2014/05/goodbye-ipv4-iana-starts-allocating-final-address-blocks/). - This means the IPv4 address space is close to being fully - allocated. As a result, it will soon become difficult to - allocate more IPv4 addresses to an application that has - experienced growth, or that you expect to scale out, due to the lack - of unallocated IPv4 address blocks. - For network focused applications the future is the IPv6 - protocol. IPv6 increases the address space significantly, - fixes long standing issues in the IPv4 protocol, and will - become essential for network focused applications in the - future. - OpenStack Networking supports IPv6 when configured to take - advantage of it. To enable IPv6, create an IPv6 subnet in - Networking and use IPv6 prefixes when creating security - groups.
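As a small illustration of IPv6 address planning for tenant networks, the sketch below carves per-tenant /64 subnets out of a provider-assigned /48. The 2001:db8::/48 prefix is part of the reserved documentation range and stands in for whatever allocation you actually receive; how the subnets are then registered with Networking is deployment-specific.

    import ipaddress
    from itertools import islice

    # Documentation prefix standing in for a real provider allocation.
    site = ipaddress.ip_network("2001:db8::/48")

    # A /48 yields 65,536 /64 networks; one /64 per tenant network is a common plan.
    tenant_subnets = site.subnets(new_prefix=64)
    for subnet in islice(tenant_subnets, 3):
        print(subnet)
    # 2001:db8::/64, 2001:db8:0:1::/64, 2001:db8:0:2::/64, ...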
-
- Asymmetric links - When designing a network architecture, the traffic patterns - of an application heavily influence the allocation of - total bandwidth and the number of links that you use to send - and receive traffic. Applications that provide file storage - for customers allocate bandwidth and links to favor - incoming traffic, whereas video streaming applications - allocate bandwidth and links to favor outgoing traffic. -
-
Performance

It is important to analyze the applications' tolerance for latency and jitter when designing an environment to support network-focused applications. Certain applications, for example VoIP, are less tolerant of latency and jitter. Where latency and jitter are concerned, certain applications may require tuning of QoS parameters and network device queues to ensure that they queue for transmit immediately or are guaranteed minimum bandwidth. Since OpenStack currently does not support these functions, consider carefully your selected network plug-in.

The location of a service may also impact the application or consumer experience. If an application serves differing content to different users, it must properly direct connections to those specific locations. Where appropriate, use a multi-site installation for these situations.

You can implement networking in two separate ways. Legacy networking (nova-network) provides a flat DHCP network with a single broadcast domain. This implementation does not support tenant isolation networks or advanced plug-ins, but it is currently the only way to implement a distributed layer-3 agent using the multi_host configuration. OpenStack Networking (neutron) is the official networking implementation and provides a pluggable architecture that supports a large variety of network methods. Some of these include a layer-2-only provider network model, external device plug-ins, or even OpenFlow controllers.

Networking at large scale becomes a set of boundary questions. The determination of how large a layer-2 domain must be is based on the number of nodes within the domain and the amount of broadcast traffic that passes between instances. Breaking layer-2 boundaries may require the implementation of overlay networks and tunnels. This decision is a balancing act between the need for smaller overhead and the need for a smaller domain.

When selecting network devices, be aware that making this decision based on the greatest port density often comes with a drawback. Aggregation switches and routers have not all kept pace with top-of-rack switches and may induce bottlenecks on north-south traffic. As a result, it may be possible for massive amounts of downstream network utilization to impact upstream network devices, impacting service to the cloud. Since OpenStack does not currently provide a mechanism for traffic shaping or rate limiting, it is necessary to implement these features at the network hardware level.
-
-
diff --git a/doc/arch-design/network_focus/section_user_requirements_network_focus.xml b/doc/arch-design/network_focus/section_user_requirements_network_focus.xml deleted file mode 100644 index 4078fd8c49..0000000000 --- a/doc/arch-design/network_focus/section_user_requirements_network_focus.xml +++ /dev/null @@ -1,104 +0,0 @@ - -
User requirements

Network-focused architectures vary from the general-purpose architecture designs. Certain network-intensive applications influence these architectures. Some of the business requirements that influence the design include:

* Network latency, experienced through slow page loads, degraded video streams, and low-quality VoIP sessions, impacts the user experience. Users are often not aware of how network design and architecture affect their experiences. Both enterprise customers and end users rely on the network for delivery of an application. Network performance problems can result in a negative experience for the end user, as well as in productivity and economic loss.
- High availability issues - Depending on the application and use case, network-intensive - OpenStack installations can have high availability requirements. - Financial transaction systems have a much higher requirement for high - availability than a development application. Use network availability - technologies, for example quality of service (QoS), to improve the - network performance of sensitive applications such as VoIP and video - streaming. - High performance systems have SLA requirements for a minimum - QoS with regard to guaranteed uptime, latency, and bandwidth. The level - of the SLA can have a significant impact on the network architecture and - requirements for redundancy in the systems. -
-
Risks

* Network misconfigurations: configuring incorrect IP addresses, VLANs, and routers can cause outages in areas of the network or, in the worst-case scenario, the entire cloud infrastructure. Automate network configurations to minimize the opportunity for operator error, which can cause disruptive problems.

* Capacity planning: cloud networks require management for capacity and growth over time. Capacity planning includes the purchase of network circuits and hardware that can have lead times measured in months or years.

* Network tuning: configure cloud networks to minimize link loss, packet loss, packet storms, broadcast storms, and loops.

* Single point of failure (SPOF): consider high availability at the physical and environmental layers. If there is a single point of failure due to only one upstream link or only one power supply, an outage can become unavoidable.

* Complexity: an overly complex network design can be difficult to maintain and troubleshoot. While device-level configuration can ease maintenance concerns and automated tools can handle overlay networks, avoid or document non-traditional interconnects between functions and specialized hardware to prevent outages.

* Non-standard features: additional risks arise from configuring the cloud network to take advantage of vendor-specific features. One example is multi-link aggregation (MLAG), used to provide redundancy at the aggregator switch level of the network. MLAG is not a standard and, as a result, each vendor has its own proprietary implementation of the feature. MLAG architectures are not interoperable across switch vendors, which leads to vendor lock-in and can cause delays or an inability to upgrade components.
-
diff --git a/doc/arch-design/pom.xml b/doc/arch-design/pom.xml deleted file mode 100644 index 468a078421..0000000000 --- a/doc/arch-design/pom.xml +++ /dev/null @@ -1,83 +0,0 @@ - - - - org.openstack.docs - parent-pom - 1.0.0-SNAPSHOT - ../pom.xml - - 4.0.0 - openstack-arch-design - jar - OpenStack Architecture Design Guide - - - - 0 - - - - - - - - com.rackspace.cloud.api - clouddocs-maven-plugin - - - - generate-webhelp - - generate-webhelp - - generate-sources - - - 0 - openstack-arch-design - 1 - UA-17511903-1 - - appendix toc,title - article/appendix nop - article toc,title - book toc,title,figure,table,example,equation - chapter toc,title - section toc - part toc,title - qandadiv toc - qandaset toc - reference toc,title - set toc,title - - - 0 - 1 - 0 - arch-design - arch-design - 7.44in - 9.68in - 1 - 1 - - - - - - true - . - - bk-openstack-arch-design.xml - - http://docs.openstack.org/openstack-arch-design/content - ${basedir}/../glossary/glossary-terms.xml - openstack - 0 - - - - - diff --git a/doc/arch-design-rst/setup.cfg b/doc/arch-design/setup.cfg similarity index 100% rename from doc/arch-design-rst/setup.cfg rename to doc/arch-design/setup.cfg diff --git a/doc/arch-design-rst/setup.py b/doc/arch-design/setup.py similarity index 100% rename from doc/arch-design-rst/setup.py rename to doc/arch-design/setup.py diff --git a/doc/arch-design-rst/source/common b/doc/arch-design/source/common similarity index 100% rename from doc/arch-design-rst/source/common rename to doc/arch-design/source/common diff --git a/doc/arch-design-rst/source/compute-focus-architecture.rst b/doc/arch-design/source/compute-focus-architecture.rst similarity index 100% rename from doc/arch-design-rst/source/compute-focus-architecture.rst rename to doc/arch-design/source/compute-focus-architecture.rst diff --git a/doc/arch-design-rst/source/compute-focus-operational-considerations.rst b/doc/arch-design/source/compute-focus-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/compute-focus-operational-considerations.rst rename to doc/arch-design/source/compute-focus-operational-considerations.rst diff --git a/doc/arch-design-rst/source/compute-focus-prescriptive-examples.rst b/doc/arch-design/source/compute-focus-prescriptive-examples.rst similarity index 100% rename from doc/arch-design-rst/source/compute-focus-prescriptive-examples.rst rename to doc/arch-design/source/compute-focus-prescriptive-examples.rst diff --git a/doc/arch-design-rst/source/compute-focus-technical-considerations.rst b/doc/arch-design/source/compute-focus-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/compute-focus-technical-considerations.rst rename to doc/arch-design/source/compute-focus-technical-considerations.rst diff --git a/doc/arch-design-rst/source/compute-focus.rst b/doc/arch-design/source/compute-focus.rst similarity index 100% rename from doc/arch-design-rst/source/compute-focus.rst rename to doc/arch-design/source/compute-focus.rst diff --git a/doc/arch-design-rst/source/conf.py b/doc/arch-design/source/conf.py similarity index 100% rename from doc/arch-design-rst/source/conf.py rename to doc/arch-design/source/conf.py diff --git a/doc/arch-design-rst/source/figures/Compute_NSX.png b/doc/arch-design/source/figures/Compute_NSX.png similarity index 100% rename from doc/arch-design-rst/source/figures/Compute_NSX.png rename to doc/arch-design/source/figures/Compute_NSX.png diff --git 
a/doc/arch-design-rst/source/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png b/doc/arch-design/source/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png similarity index 100% rename from doc/arch-design-rst/source/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png rename to doc/arch-design/source/figures/Compute_Tech_Bin_Packing_CPU_optimized1.png diff --git a/doc/arch-design-rst/source/figures/Compute_Tech_Bin_Packing_General1.png b/doc/arch-design/source/figures/Compute_Tech_Bin_Packing_General1.png similarity index 100% rename from doc/arch-design-rst/source/figures/Compute_Tech_Bin_Packing_General1.png rename to doc/arch-design/source/figures/Compute_Tech_Bin_Packing_General1.png diff --git a/doc/arch-design-rst/source/figures/General_Architecture3.png b/doc/arch-design/source/figures/General_Architecture3.png similarity index 100% rename from doc/arch-design-rst/source/figures/General_Architecture3.png rename to doc/arch-design/source/figures/General_Architecture3.png diff --git a/doc/arch-design-rst/source/figures/Generic_CERN_Architecture.png b/doc/arch-design/source/figures/Generic_CERN_Architecture.png similarity index 100% rename from doc/arch-design-rst/source/figures/Generic_CERN_Architecture.png rename to doc/arch-design/source/figures/Generic_CERN_Architecture.png diff --git a/doc/arch-design-rst/source/figures/Generic_CERN_Example.png b/doc/arch-design/source/figures/Generic_CERN_Example.png similarity index 100% rename from doc/arch-design-rst/source/figures/Generic_CERN_Example.png rename to doc/arch-design/source/figures/Generic_CERN_Example.png diff --git a/doc/arch-design-rst/source/figures/Massively_Scalable_Cells_regions_azs.png b/doc/arch-design/source/figures/Massively_Scalable_Cells_regions_azs.png similarity index 100% rename from doc/arch-design-rst/source/figures/Massively_Scalable_Cells_regions_azs.png rename to doc/arch-design/source/figures/Massively_Scalable_Cells_regions_azs.png diff --git a/doc/arch-design-rst/source/figures/Multi-Cloud_Priv-AWS4.png b/doc/arch-design/source/figures/Multi-Cloud_Priv-AWS4.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-Cloud_Priv-AWS4.png rename to doc/arch-design/source/figures/Multi-Cloud_Priv-AWS4.png diff --git a/doc/arch-design-rst/source/figures/Multi-Cloud_Priv-Pub3.png b/doc/arch-design/source/figures/Multi-Cloud_Priv-Pub3.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-Cloud_Priv-Pub3.png rename to doc/arch-design/source/figures/Multi-Cloud_Priv-Pub3.png diff --git a/doc/arch-design-rst/source/figures/Multi-Cloud_failover2.png b/doc/arch-design/source/figures/Multi-Cloud_failover2.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-Cloud_failover2.png rename to doc/arch-design/source/figures/Multi-Cloud_failover2.png diff --git a/doc/arch-design-rst/source/figures/Multi-Site_Customer_Edge.png b/doc/arch-design/source/figures/Multi-Site_Customer_Edge.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-Site_Customer_Edge.png rename to doc/arch-design/source/figures/Multi-Site_Customer_Edge.png diff --git a/doc/arch-design-rst/source/figures/Multi-Site_shared_keystone1.png b/doc/arch-design/source/figures/Multi-Site_shared_keystone1.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-Site_shared_keystone1.png rename to doc/arch-design/source/figures/Multi-Site_shared_keystone1.png diff --git 
a/doc/arch-design-rst/source/figures/Multi-Site_shared_keystone_horizon_swift1.png b/doc/arch-design/source/figures/Multi-Site_shared_keystone_horizon_swift1.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-Site_shared_keystone_horizon_swift1.png rename to doc/arch-design/source/figures/Multi-Site_shared_keystone_horizon_swift1.png diff --git a/doc/arch-design-rst/source/figures/Multi-site_Geo_Redundant_LB.png b/doc/arch-design/source/figures/Multi-site_Geo_Redundant_LB.png similarity index 100% rename from doc/arch-design-rst/source/figures/Multi-site_Geo_Redundant_LB.png rename to doc/arch-design/source/figures/Multi-site_Geo_Redundant_LB.png diff --git a/doc/arch-design-rst/source/figures/Network_Cloud_Storage2.png b/doc/arch-design/source/figures/Network_Cloud_Storage2.png similarity index 100% rename from doc/arch-design-rst/source/figures/Network_Cloud_Storage2.png rename to doc/arch-design/source/figures/Network_Cloud_Storage2.png diff --git a/doc/arch-design-rst/source/figures/Network_Web_Services1.png b/doc/arch-design/source/figures/Network_Web_Services1.png similarity index 100% rename from doc/arch-design-rst/source/figures/Network_Web_Services1.png rename to doc/arch-design/source/figures/Network_Web_Services1.png diff --git a/doc/arch-design-rst/source/figures/Specialized_Hardware2.png b/doc/arch-design/source/figures/Specialized_Hardware2.png similarity index 100% rename from doc/arch-design-rst/source/figures/Specialized_Hardware2.png rename to doc/arch-design/source/figures/Specialized_Hardware2.png diff --git a/doc/arch-design-rst/source/figures/Specialized_OOO.png b/doc/arch-design/source/figures/Specialized_OOO.png similarity index 100% rename from doc/arch-design-rst/source/figures/Specialized_OOO.png rename to doc/arch-design/source/figures/Specialized_OOO.png diff --git a/doc/arch-design-rst/source/figures/Specialized_SDN_external.png b/doc/arch-design/source/figures/Specialized_SDN_external.png similarity index 100% rename from doc/arch-design-rst/source/figures/Specialized_SDN_external.png rename to doc/arch-design/source/figures/Specialized_SDN_external.png diff --git a/doc/arch-design-rst/source/figures/Specialized_SDN_hosted.png b/doc/arch-design/source/figures/Specialized_SDN_hosted.png similarity index 100% rename from doc/arch-design-rst/source/figures/Specialized_SDN_hosted.png rename to doc/arch-design/source/figures/Specialized_SDN_hosted.png diff --git a/doc/arch-design-rst/source/figures/Specialized_VDI1.png b/doc/arch-design/source/figures/Specialized_VDI1.png similarity index 100% rename from doc/arch-design-rst/source/figures/Specialized_VDI1.png rename to doc/arch-design/source/figures/Specialized_VDI1.png diff --git a/doc/arch-design-rst/source/figures/Storage_Database_+_Object5.png b/doc/arch-design/source/figures/Storage_Database_+_Object5.png similarity index 100% rename from doc/arch-design-rst/source/figures/Storage_Database_+_Object5.png rename to doc/arch-design/source/figures/Storage_Database_+_Object5.png diff --git a/doc/arch-design-rst/source/figures/Storage_Hadoop3.png b/doc/arch-design/source/figures/Storage_Hadoop3.png similarity index 100% rename from doc/arch-design-rst/source/figures/Storage_Hadoop3.png rename to doc/arch-design/source/figures/Storage_Hadoop3.png diff --git a/doc/arch-design-rst/source/figures/Storage_Object.png b/doc/arch-design/source/figures/Storage_Object.png similarity index 100% rename from doc/arch-design-rst/source/figures/Storage_Object.png rename to 
doc/arch-design/source/figures/Storage_Object.png diff --git a/doc/arch-design-rst/source/generalpurpose-architecture.rst b/doc/arch-design/source/generalpurpose-architecture.rst similarity index 100% rename from doc/arch-design-rst/source/generalpurpose-architecture.rst rename to doc/arch-design/source/generalpurpose-architecture.rst diff --git a/doc/arch-design-rst/source/generalpurpose-operational-considerations.rst b/doc/arch-design/source/generalpurpose-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/generalpurpose-operational-considerations.rst rename to doc/arch-design/source/generalpurpose-operational-considerations.rst diff --git a/doc/arch-design-rst/source/generalpurpose-prescriptive-example.rst b/doc/arch-design/source/generalpurpose-prescriptive-example.rst similarity index 100% rename from doc/arch-design-rst/source/generalpurpose-prescriptive-example.rst rename to doc/arch-design/source/generalpurpose-prescriptive-example.rst diff --git a/doc/arch-design-rst/source/generalpurpose-technical-considerations.rst b/doc/arch-design/source/generalpurpose-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/generalpurpose-technical-considerations.rst rename to doc/arch-design/source/generalpurpose-technical-considerations.rst diff --git a/doc/arch-design-rst/source/generalpurpose-user-requirements.rst b/doc/arch-design/source/generalpurpose-user-requirements.rst similarity index 100% rename from doc/arch-design-rst/source/generalpurpose-user-requirements.rst rename to doc/arch-design/source/generalpurpose-user-requirements.rst diff --git a/doc/arch-design-rst/source/generalpurpose.rst b/doc/arch-design/source/generalpurpose.rst similarity index 100% rename from doc/arch-design-rst/source/generalpurpose.rst rename to doc/arch-design/source/generalpurpose.rst diff --git a/doc/arch-design-rst/source/hybrid-architecture.rst b/doc/arch-design/source/hybrid-architecture.rst similarity index 100% rename from doc/arch-design-rst/source/hybrid-architecture.rst rename to doc/arch-design/source/hybrid-architecture.rst diff --git a/doc/arch-design-rst/source/hybrid-operational-considerations.rst b/doc/arch-design/source/hybrid-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/hybrid-operational-considerations.rst rename to doc/arch-design/source/hybrid-operational-considerations.rst diff --git a/doc/arch-design-rst/source/hybrid-prescriptive-examples.rst b/doc/arch-design/source/hybrid-prescriptive-examples.rst similarity index 100% rename from doc/arch-design-rst/source/hybrid-prescriptive-examples.rst rename to doc/arch-design/source/hybrid-prescriptive-examples.rst diff --git a/doc/arch-design-rst/source/hybrid-technical-considerations.rst b/doc/arch-design/source/hybrid-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/hybrid-technical-considerations.rst rename to doc/arch-design/source/hybrid-technical-considerations.rst diff --git a/doc/arch-design-rst/source/hybrid-user-requirements.rst b/doc/arch-design/source/hybrid-user-requirements.rst similarity index 100% rename from doc/arch-design-rst/source/hybrid-user-requirements.rst rename to doc/arch-design/source/hybrid-user-requirements.rst diff --git a/doc/arch-design-rst/source/hybrid.rst b/doc/arch-design/source/hybrid.rst similarity index 100% rename from doc/arch-design-rst/source/hybrid.rst rename to doc/arch-design/source/hybrid.rst diff --git 
a/doc/arch-design-rst/source/index.rst b/doc/arch-design/source/index.rst similarity index 100% rename from doc/arch-design-rst/source/index.rst rename to doc/arch-design/source/index.rst diff --git a/doc/arch-design-rst/source/introduction-how-this-book-is-organized.rst b/doc/arch-design/source/introduction-how-this-book-is-organized.rst similarity index 100% rename from doc/arch-design-rst/source/introduction-how-this-book-is-organized.rst rename to doc/arch-design/source/introduction-how-this-book-is-organized.rst diff --git a/doc/arch-design-rst/source/introduction-how-this-book-was-written.rst b/doc/arch-design/source/introduction-how-this-book-was-written.rst similarity index 100% rename from doc/arch-design-rst/source/introduction-how-this-book-was-written.rst rename to doc/arch-design/source/introduction-how-this-book-was-written.rst diff --git a/doc/arch-design-rst/source/introduction-intended-audience.rst b/doc/arch-design/source/introduction-intended-audience.rst similarity index 100% rename from doc/arch-design-rst/source/introduction-intended-audience.rst rename to doc/arch-design/source/introduction-intended-audience.rst diff --git a/doc/arch-design-rst/source/introduction-methodology.rst b/doc/arch-design/source/introduction-methodology.rst similarity index 100% rename from doc/arch-design-rst/source/introduction-methodology.rst rename to doc/arch-design/source/introduction-methodology.rst diff --git a/doc/arch-design-rst/source/introduction.rst b/doc/arch-design/source/introduction.rst similarity index 100% rename from doc/arch-design-rst/source/introduction.rst rename to doc/arch-design/source/introduction.rst diff --git a/doc/arch-design-rst/source/legal-security-requirements.rst b/doc/arch-design/source/legal-security-requirements.rst similarity index 100% rename from doc/arch-design-rst/source/legal-security-requirements.rst rename to doc/arch-design/source/legal-security-requirements.rst diff --git a/doc/arch-design-rst/source/massively-scalable-operational-considerations.rst b/doc/arch-design/source/massively-scalable-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/massively-scalable-operational-considerations.rst rename to doc/arch-design/source/massively-scalable-operational-considerations.rst diff --git a/doc/arch-design-rst/source/massively-scalable-technical-considerations.rst b/doc/arch-design/source/massively-scalable-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/massively-scalable-technical-considerations.rst rename to doc/arch-design/source/massively-scalable-technical-considerations.rst diff --git a/doc/arch-design-rst/source/massively-scalable-user-requirements.rst b/doc/arch-design/source/massively-scalable-user-requirements.rst similarity index 100% rename from doc/arch-design-rst/source/massively-scalable-user-requirements.rst rename to doc/arch-design/source/massively-scalable-user-requirements.rst diff --git a/doc/arch-design-rst/source/massively-scalable.rst b/doc/arch-design/source/massively-scalable.rst similarity index 100% rename from doc/arch-design-rst/source/massively-scalable.rst rename to doc/arch-design/source/massively-scalable.rst diff --git a/doc/arch-design-rst/source/multi-site-architecture.rst b/doc/arch-design/source/multi-site-architecture.rst similarity index 100% rename from doc/arch-design-rst/source/multi-site-architecture.rst rename to doc/arch-design/source/multi-site-architecture.rst diff --git 
a/doc/arch-design-rst/source/multi-site-operational-considerations.rst b/doc/arch-design/source/multi-site-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/multi-site-operational-considerations.rst rename to doc/arch-design/source/multi-site-operational-considerations.rst diff --git a/doc/arch-design-rst/source/multi-site-prescriptive-examples.rst b/doc/arch-design/source/multi-site-prescriptive-examples.rst similarity index 100% rename from doc/arch-design-rst/source/multi-site-prescriptive-examples.rst rename to doc/arch-design/source/multi-site-prescriptive-examples.rst diff --git a/doc/arch-design-rst/source/multi-site-technical-considerations.rst b/doc/arch-design/source/multi-site-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/multi-site-technical-considerations.rst rename to doc/arch-design/source/multi-site-technical-considerations.rst diff --git a/doc/arch-design-rst/source/multi-site-user-requirements.rst b/doc/arch-design/source/multi-site-user-requirements.rst similarity index 100% rename from doc/arch-design-rst/source/multi-site-user-requirements.rst rename to doc/arch-design/source/multi-site-user-requirements.rst diff --git a/doc/arch-design-rst/source/multi-site.rst b/doc/arch-design/source/multi-site.rst similarity index 100% rename from doc/arch-design-rst/source/multi-site.rst rename to doc/arch-design/source/multi-site.rst diff --git a/doc/arch-design-rst/source/network-focus-architecture.rst b/doc/arch-design/source/network-focus-architecture.rst similarity index 100% rename from doc/arch-design-rst/source/network-focus-architecture.rst rename to doc/arch-design/source/network-focus-architecture.rst diff --git a/doc/arch-design-rst/source/network-focus-operational-considerations.rst b/doc/arch-design/source/network-focus-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/network-focus-operational-considerations.rst rename to doc/arch-design/source/network-focus-operational-considerations.rst diff --git a/doc/arch-design-rst/source/network-focus-prescriptive-examples.rst b/doc/arch-design/source/network-focus-prescriptive-examples.rst similarity index 100% rename from doc/arch-design-rst/source/network-focus-prescriptive-examples.rst rename to doc/arch-design/source/network-focus-prescriptive-examples.rst diff --git a/doc/arch-design-rst/source/network-focus-technical-considerations.rst b/doc/arch-design/source/network-focus-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/network-focus-technical-considerations.rst rename to doc/arch-design/source/network-focus-technical-considerations.rst diff --git a/doc/arch-design-rst/source/network-focus-user-requirements.rst b/doc/arch-design/source/network-focus-user-requirements.rst similarity index 100% rename from doc/arch-design-rst/source/network-focus-user-requirements.rst rename to doc/arch-design/source/network-focus-user-requirements.rst diff --git a/doc/arch-design-rst/source/network-focus.rst b/doc/arch-design/source/network-focus.rst similarity index 100% rename from doc/arch-design-rst/source/network-focus.rst rename to doc/arch-design/source/network-focus.rst diff --git a/doc/arch-design-rst/source/references.rst b/doc/arch-design/source/references.rst similarity index 100% rename from doc/arch-design-rst/source/references.rst rename to doc/arch-design/source/references.rst diff --git a/doc/arch-design-rst/source/specialized-desktop-as-a-service.rst 
b/doc/arch-design/source/specialized-desktop-as-a-service.rst similarity index 100% rename from doc/arch-design-rst/source/specialized-desktop-as-a-service.rst rename to doc/arch-design/source/specialized-desktop-as-a-service.rst diff --git a/doc/arch-design-rst/source/specialized-hardware.rst b/doc/arch-design/source/specialized-hardware.rst similarity index 100% rename from doc/arch-design-rst/source/specialized-hardware.rst rename to doc/arch-design/source/specialized-hardware.rst diff --git a/doc/arch-design-rst/source/specialized-multi-hypervisor.rst b/doc/arch-design/source/specialized-multi-hypervisor.rst similarity index 100% rename from doc/arch-design-rst/source/specialized-multi-hypervisor.rst rename to doc/arch-design/source/specialized-multi-hypervisor.rst diff --git a/doc/arch-design-rst/source/specialized-networking.rst b/doc/arch-design/source/specialized-networking.rst similarity index 100% rename from doc/arch-design-rst/source/specialized-networking.rst rename to doc/arch-design/source/specialized-networking.rst diff --git a/doc/arch-design-rst/source/specialized-openstack-on-openstack.rst b/doc/arch-design/source/specialized-openstack-on-openstack.rst similarity index 100% rename from doc/arch-design-rst/source/specialized-openstack-on-openstack.rst rename to doc/arch-design/source/specialized-openstack-on-openstack.rst diff --git a/doc/arch-design-rst/source/specialized-software-defined-networking.rst b/doc/arch-design/source/specialized-software-defined-networking.rst similarity index 100% rename from doc/arch-design-rst/source/specialized-software-defined-networking.rst rename to doc/arch-design/source/specialized-software-defined-networking.rst diff --git a/doc/arch-design-rst/source/specialized.rst b/doc/arch-design/source/specialized.rst similarity index 100% rename from doc/arch-design-rst/source/specialized.rst rename to doc/arch-design/source/specialized.rst diff --git a/doc/arch-design-rst/source/storage-focus-architecture.rst b/doc/arch-design/source/storage-focus-architecture.rst similarity index 100% rename from doc/arch-design-rst/source/storage-focus-architecture.rst rename to doc/arch-design/source/storage-focus-architecture.rst diff --git a/doc/arch-design-rst/source/storage-focus-operational-considerations.rst b/doc/arch-design/source/storage-focus-operational-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/storage-focus-operational-considerations.rst rename to doc/arch-design/source/storage-focus-operational-considerations.rst diff --git a/doc/arch-design-rst/source/storage-focus-prescriptive-examples.rst b/doc/arch-design/source/storage-focus-prescriptive-examples.rst similarity index 100% rename from doc/arch-design-rst/source/storage-focus-prescriptive-examples.rst rename to doc/arch-design/source/storage-focus-prescriptive-examples.rst diff --git a/doc/arch-design-rst/source/storage-focus-technical-considerations.rst b/doc/arch-design/source/storage-focus-technical-considerations.rst similarity index 100% rename from doc/arch-design-rst/source/storage-focus-technical-considerations.rst rename to doc/arch-design/source/storage-focus-technical-considerations.rst diff --git a/doc/arch-design-rst/source/storage-focus.rst b/doc/arch-design/source/storage-focus.rst similarity index 100% rename from doc/arch-design-rst/source/storage-focus.rst rename to doc/arch-design/source/storage-focus.rst diff --git a/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml 
b/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml deleted file mode 100644 index 6cf4a573f1..0000000000 --- a/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml +++ /dev/null @@ -1,66 +0,0 @@ - -
- - Desktop-as-a-Service - Virtual Desktop Infrastructure (VDI) is a service that hosts - user desktop environments on remote servers. This application - is very sensitive to network latency and requires a high - performance compute environment. Traditionally these types of - services do not use cloud environments because - few clouds support such a demanding workload for user-facing - applications. As cloud environments become more robust, - vendors are starting to provide services - that provide virtual desktops in the cloud. OpenStack - may soon provide the infrastructure for these types of - deployments. -
- Challenges - Designing an infrastructure that is suitable to host virtual - desktops is a very different task to that of most virtual - workloads. For example, the design must consider: - - - Boot storms, when a high volume of logins occur - in a short period of time - - - The performance of the applications running on - virtual desktops - - - Operating systems and their compatibility with the - OpenStack hypervisor - - -
-
- Broker - The connection broker determines which remote desktop host - users can access. Medium and large scale environments require a broker - since its service represents a central component of the architecture. - The broker is a complete management product, and enables automated - deployment and provisioning of remote desktop hosts. -
-
- Possible solutions - - There are a number of commercial products currently available that - provide a broker solution. However, no native OpenStack projects - provide broker services. Not providing a broker is also - an option, but managing this manually would not suffice for a - large scale, enterprise solution. - -
-
- Diagram - - - - - -
-
diff --git a/doc/arch-design/specialized/section_hardware_specialized.xml b/doc/arch-design/specialized/section_hardware_specialized.xml deleted file mode 100644 index a82f456a2f..0000000000 --- a/doc/arch-design/specialized/section_hardware_specialized.xml +++ /dev/null @@ -1,52 +0,0 @@ - -
- - Specialized hardware - Certain workloads require specialized hardware devices that - have significant virtualization or sharing challenges. - Applications such as load balancers, highly parallel brute - force computing, and direct to wire networking may need - capabilities that basic OpenStack components do not - provide. -
- Challenges - Some applications need access to hardware devices to either - improve performance or provide capabilities that are not - virtual CPU, RAM, network, or storage. These can be a shared - resource, such as a cryptography processor, or a dedicated - resource, such as a Graphics Processing Unit (GPU). OpenStack can - provide some of these, while others may need extra - work. -
-
- Solutions
To provide cryptography offloading to a set of instances, you can use Image service configuration options. For example, assign the cryptography chip to a device node in the guest. The OpenStack Command Line Reference contains further information on configuring this solution in the chapter Image service property keys. A challenge, however, is that this option allows all guests using the configured images to access the hypervisor cryptography device.
If you require direct access to a specific device, PCI pass-through enables you to dedicate the device to a single instance per hypervisor. You must define a flavor that specifically requests the PCI device in order to schedule instances properly. More information regarding PCI pass-through, including instructions for implementing and using it, is available at https://wiki.openstack.org/wiki/Pci_passthrough.
-
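A minimal sketch of the PCI pass-through path described above, assuming a hypothetical device (vendor ID 10de, product ID 11fa), a hypothetical alias and flavor name, and that the scheduler has the PciPassthroughFilter enabled:

   # /etc/nova/nova.conf on the compute node: whitelist the device
   pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "11fa"}

   # /etc/nova/nova.conf on the controller: alias the device for use in flavors
   pci_alias = {"vendor_id": "10de", "product_id": "11fa", "name": "gpu"}

   # Flavor that requests one aliased device per instance
   $ nova flavor-create g1.large auto 16384 80 8
   $ nova flavor-key g1.large set "pci_passthrough:alias"="gpu:1"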
diff --git a/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml b/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml deleted file mode 100644 index 0714e90f51..0000000000 --- a/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml +++ /dev/null @@ -1,80 +0,0 @@ - -
- - Multi-hypervisor example - A financial company requires its applications migrated - from a traditional, virtualized environment to an API driven, - orchestrated environment. The new environment needs - multiple hypervisors since many of the company's applications - have strict hypervisor requirements. - Currently, the company's vSphere environment runs 20 VMware - ESXi hypervisors. These hypervisors support 300 instances of - various sizes. Approximately 50 of these instances must run - on ESXi. The remaining 250 or so have more flexible - requirements. - The financial company decides to manage the - overall system with a common OpenStack platform. - - - - - - Architecture planning teams decided to run a host aggregate - containing KVM hypervisors for the general purpose instances. A - separate host aggregate targets instances requiring ESXi. - Images in the OpenStack Image service have particular - hypervisor metadata attached. When a user requests a - certain image, the instance spawns on the relevant - aggregate. - Images for ESXi use the VMDK format. You can convert - QEMU disk images to VMDK, VMFS Flat Disks. These disk images - can also be thin, thick, zeroed-thick, and eager-zeroed-thick. - After exporting a VMFS thin disk from VMFS to the - OpenStack Image service (a non-VMFS location), it - becomes a preallocated flat disk. This impacts the transfer - time from the OpenStack Image service to the data store since transfers - require moving the full preallocated flat disk rather than the - thin disk. - The VMware host aggregate compute nodes communicate with - vCenter rather than spawning directly on a hypervisor. - The vCenter then requests scheduling for the instance to run on - an ESXi hypervisor. - This functionality requires that VMware Distributed Resource - Scheduler (DRS) is enabled on a cluster and set to Fully - Automated. The vSphere requires shared storage because the DRS - uses vMotion, which is a service that relies on shared storage. - This solution to the company's migration uses shared storage - to provide Block Storage capabilities to the KVM instances while - also providing vSphere storage. The new environment provides this - storage functionality using a dedicated data network. The - compute hosts should have dedicated NICs to support the - dedicated data network. vSphere supports OpenStack Block Storage. This - support gives storage from a VMFS datastore to an instance. For the - financial company, Block Storage in their new architecture supports - both hypervisors. - OpenStack Networking provides network connectivity in this new - architecture, with the VMware NSX plug-in driver configured. legacy - networking (nova-network) supports both hypervisors in this new - architecture example, but has limitations. Specifically, vSphere - with legacy networking does not support security groups. The new - architecture uses VMware NSX as a part of the design. When users launch an - instance within either of the host aggregates, VMware NSX ensures the - instance attaches to the appropriate network overlay-based logical - networks. - The architecture planning teams also consider OpenStack Compute - integration. When running vSphere in an OpenStack environment, nova-compute - communications with vCenter appear as a single large hypervisor. This - hypervisor represents the entire ESXi cluster. Multiple nova-compute - instances can represent multiple ESXi clusters. They can connect to - multiple vCenter servers. 
If the process running nova-compute crashes, the connection to the vCenter server is severed and you cannot provision further instances on that vCenter, even if you enable high availability. You must monitor the nova-compute service connected to vSphere carefully for any disruptions resulting from this failure point.
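As a rough sketch of the image-metadata scheduling and image conversion described above (image names are hypothetical, and the scheduler is assumed to have the ImagePropertiesFilter enabled):

   # Tag images so instances spawn on the matching hypervisor type
   $ glance image-update finance-app-esxi --property hypervisor_type=vmware
   $ glance image-update finance-app-kvm --property hypervisor_type=qemu

   # Convert a QCOW2 image to VMDK for the ESXi host aggregate
   $ qemu-img convert -f qcow2 -O vmdk finance-app.qcow2 finance-app.vmdk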
diff --git a/doc/arch-design/specialized/section_networking_specialized.xml b/doc/arch-design/specialized/section_networking_specialized.xml deleted file mode 100644 index 824fa7d5a3..0000000000 --- a/doc/arch-design/specialized/section_networking_specialized.xml +++ /dev/null @@ -1,39 +0,0 @@ - -
- - Specialized networking example - Some applications that interact with a network require - specialized connectivity. Applications such as a looking glass - require the ability to connect to a BGP peer, or route - participant applications may need to join a network at a layer - 2 level. -
- Challenges - Connecting specialized network applications to their - required resources alters the design of an OpenStack - installation. Installations that rely on overlay networks are - unable to support a routing participant, and may also block - layer-2 listeners. -
-
- Possible solutions - Deploying an OpenStack installation using OpenStack Networking with a - provider network allows direct layer-2 connectivity to an - upstream networking device. This design provides the layer-2 - connectivity required to communicate via Intermediate - System-to-Intermediate System (ISIS) protocol or to pass - packets controlled by an OpenFlow controller. Using the - multiple layer-2 plug-in with an agent such as - Open vSwitch - allows a private connection through a VLAN directly to a - specific port in a layer-3 device. This allows a BGP - point-to-point link to join the autonomous - system. Avoid using layer-3 plug-ins as they divide the - broadcast domain and prevent router adjacencies from - forming. -
-
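A sketch of the provider network approach using the neutron CLI of this guide's era; the physical network label, VLAN ID, and addressing are illustrative only:

   $ neutron net-create peering-net --shared \
       --provider:network_type vlan \
       --provider:physical_network physnet1 \
       --provider:segmentation_id 2001
   $ neutron subnet-create peering-net 192.0.2.0/24 \
       --name peering-subnet --disable-dhcp

Instances attached to such a network sit directly on the tagged VLAN, which gives a BGP speaker running in an instance layer-2 adjacency with the upstream device.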
diff --git a/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml b/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml deleted file mode 100644 index db1684f789..0000000000 --- a/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml +++ /dev/null @@ -1,80 +0,0 @@ - - -%openstack; -]> -
- - OpenStack on OpenStack - In some cases, users may run OpenStack nested on top - of another OpenStack cloud. This scenario describes how to - manage and provision complete OpenStack environments on instances - supported by hypervisors and servers, which an underlying OpenStack - environment controls. - Public cloud providers can use this technique to manage the - upgrade and maintenance process on complete OpenStack environments. - Developers and those testing OpenStack can also use this - technique to provision their own OpenStack environments on - available OpenStack Compute resources, whether public or - private. -
- Challenges - The network aspect of deploying a nested cloud is the most - complicated aspect of this architecture. You must expose VLANs - to the physical ports on which the underlying cloud runs because - the bare metal cloud owns all the - hardware. You must also expose them to the nested - levels as well. Alternatively, you can use the network overlay - technologies on the OpenStack environment running on the host - OpenStack environment to provide the required software defined - networking for the deployment. -
-
- Hypervisor - In this example architecture, consider which - approach you should take to provide a nested - hypervisor in OpenStack. This decision influences which - operating systems you use for the deployment of the nested - OpenStack deployments. -
-
- Possible solutions: deployment - Deployment of a full stack can be challenging but you can mitigate - this difficulty by creating a Heat template to deploy the - entire stack, or a configuration management system. After creating - the Heat template, you can automate the deployment of additional stacks. - The OpenStack-on-OpenStack project (TripleO) - addresses this issue. Currently, however, the project does - not completely cover nested stacks. For more information, see - - https://wiki.openstack.org/wiki/TripleO. -
-
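As a sketch of the Heat-based approach, a deliberately tiny template that launches one node of a nested test cloud might look like this; the image, flavor, key, and network names are placeholders:

   $ cat > nested-node.yaml <<'EOF'
   heat_template_version: 2014-10-16
   description: One server of a nested OpenStack test environment (sketch)
   resources:
     nested_node:
       type: OS::Nova::Server
       properties:
         image: ubuntu-14.04-server
         flavor: m1.xlarge
         key_name: deploy-key
         networks:
           - network: undercloud-mgmt
   EOF
   $ heat stack-create -f nested-node.yaml nested-cloud

A real deployment would add more servers, volumes, and software configuration resources, or hand the same job to a configuration management system as noted above.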
- Possible solutions: hypervisor - In the case of running TripleO, the underlying OpenStack - cloud deploys the Compute nodes as bare-metal. You then deploy - OpenStack on these Compute bare-metal servers with the - appropriate hypervisor, such as KVM. - In the case of running smaller OpenStack clouds for testing - purposes, where performance is not a critical factor, you can use - QEMU instead. It is also possible to run a KVM hypervisor in an instance - (see - http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/), - though this is not a supported configuration, and could be a - complex solution for such a use case. -
-
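A quick sketch of the hypervisor checks implied above: whether the underlying compute hosts allow nested KVM, and the nova.conf fallback to plain QEMU for a small, performance-insensitive test cloud (paths and values are the commonly documented ones, not specific to this guide):

   # On the underlying compute host: 'Y' means nested KVM is available
   $ cat /sys/module/kvm_intel/parameters/nested

   # In the nested cloud's /etc/nova/nova.conf, fall back to unaccelerated QEMU
   [libvirt]
   virt_type = qemu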
- Diagram - - - - - - - -
-
diff --git a/doc/arch-design/specialized/section_software_defined_networking_specialized.xml b/doc/arch-design/specialized/section_software_defined_networking_specialized.xml deleted file mode 100644 index 17c00990c9..0000000000 --- a/doc/arch-design/specialized/section_software_defined_networking_specialized.xml +++ /dev/null @@ -1,55 +0,0 @@ - -
- - Software-defined networking - Software-defined networking (SDN) is the separation of the data - plane and control plane. SDN is a popular method of - managing and controlling packet flows within networks. SDN - uses overlays or directly controlled layer-2 devices to - determine flow paths, and as such presents challenges to a - cloud environment. Some designers may wish to run their - controllers within an OpenStack installation. Others may wish - to have their installations participate in an SDN-controlled - network. -
- Challenges - SDN is a relatively new concept that is not yet - standardized, so SDN systems come in a variety of different - implementations. Because of this, a truly prescriptive - architecture is not feasible. Instead, examine the differences - between an existing and a planned OpenStack design and determine - where potential conflicts and gaps exist. -
-
- Possible solutions - If an SDN implementation requires layer-2 access because it - directly manipulates switches, we do not recommend running an - overlay network or a layer-3 agent. If the controller - resides within an OpenStack installation, it may be necessary - to build an ML2 plug-in and schedule the controller instances - to connect to tenant VLANs that then talk directly to the - switch hardware. Alternatively, depending on the external - device support, use a tunnel that terminates at the switch - hardware itself. -
-
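A minimal sketch of the ML2 VLAN configuration this implies; the physical network label, VLAN range, and bridge name are hypothetical, and the mechanism driver would normally be whatever the SDN vendor ships (Open vSwitch is shown only as a stand-in):

   # /etc/neutron/plugins/ml2/ml2_conf.ini
   [ml2]
   type_drivers = vlan
   tenant_network_types = vlan
   mechanism_drivers = openvswitch

   [ml2_type_vlan]
   network_vlan_ranges = physnet1:2000:2099

   [ovs]
   bridge_mappings = physnet1:br-provider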
- Diagram - OpenStack hosted SDN controller: - - - - - - - OpenStack participating in an SDN controller network: - - - - - -
-
diff --git a/doc/arch-design/storage_focus/section_architecture_storage_focus.xml b/doc/arch-design/storage_focus/section_architecture_storage_focus.xml deleted file mode 100644 index 624ee8dcc3..0000000000 --- a/doc/arch-design/storage_focus/section_architecture_storage_focus.xml +++ /dev/null @@ -1,630 +0,0 @@ - - -%openstack; -]> -
- Architecture - Consider the following factors when selecting storage hardware: - - - Cost - - - Performance - - - Reliability - - - Storage-focused OpenStack clouds must address I/O - intensive workloads. These workloads are not CPU intensive, - nor are they consistently network intensive. The network may be - heavily utilized to transfer storage, but they are not - otherwise network intensive. - The selection of storage hardware determines the overall - performance and scalability of a storage-focused OpenStack design - architecture. - Several factors impact the design process, including: - - - Cost - - The cost of components affects which storage - architecture and hardware you choose. - - - - Performance - - The latency of storage I/O requests indicates performance. - Performance requirements affect which solution you choose. - - - - Scalability - - Scalability refers to how the storage solution performs - as it expands to its maximum size. Storage solutions - that perform well in small configurations but have - degraded performance in large configurations are not scalable. - A solution that performs well at maximum expansion is - scalable. Large deployments require a storage solution - that performs well as it expands. - - - - Latency is a key consideration in a storage-focused OpenStack cloud. - Using solid-state disks (SSDs) to minimize latency and, to reduce CPU - delays caused by waiting for the storage, increases performance. Use RAID - controller cards in compute hosts to improve the performance of the - underlying disk subsystem. - Depending on the storage architecture, you can adopt a scale-out - solution, or use a highly expandable and scalable centralized storage - array. If a centralized storage array is the right fit for - your requirements, then the array vendor determines the hardware - selection. It is possible to build a storage array using commodity - hardware with Open Source software, but requires people with expertise - to build such a system. - On the other hand, a scale-out storage solution that - uses direct-attached storage (DAS) in the servers may be an - appropriate choice. This requires configuration of the server - hardware to support the storage solution. - Considerations affecting storage architecture (and corresponding - storage hardware) of a Storage-focused OpenStack cloud include: - - - Connectivity - - Based on the selected storage solution, ensure the - connectivity matches the storage solution requirements. - We recommended confirming that the network - characteristics minimize latency to boost the - overall performance of the design. - - - - Latency - - Determine if the use case has - consistent or highly variable latency. - - - - Throughput - - Ensure that the storage solution - throughput is optimized for your application - requirements. - - - - Server hardware - - Use of DAS impacts the server - hardware choice and affects host density, instance - density, power density, OS-hypervisor, and management - tools. - - - - -
- Compute (server) hardware selection - Four opposing factors determine the compute (server) - hardware selection: - - - Server density - - A measure of how many servers can - fit into a given measure of physical space, such as a - rack unit [U]. - - - - Resource capacity - - The number of CPU cores, how much - RAM, or how much storage a given server delivers. - - - - Expandability - - The number of additional resources you can add to a server - before it reaches capacity. - - - - Cost - - The relative cost of the hardware weighed against - the level of design effort needed to build the - system. - - - - You must weigh the dimensions against each other to - determine the best design for the desired purpose. For - example, increasing server density can mean sacrificing - resource capacity or expandability. Increasing resource - capacity and expandability can increase cost but decrease - server density. Decreasing cost often means decreasing - supportability, server density, resource capacity, and - expandability. - Compute capacity (CPU cores and RAM capacity) is a secondary - consideration for selecting server hardware. As - a result, the required server hardware must supply adequate - CPU sockets, additional CPU cores, and more RAM; network - connectivity and storage capacity are not as critical. The - hardware needs to provide enough network connectivity and - storage capacity to meet the user requirements, however they - are not the primary consideration. - Some server hardware form factors are better - suited to storage-focused designs than others. The following is - a list of these form factors: - - - Most blade servers support dual-socket - multi-core CPUs. Choose either full width or full height - blades to avoid the limit. High density blade servers - support up to 16 servers in only 10 - rack units using half height or half width blades. - - This decreases density by 50% (only 8 servers - in 10 U) if a full width or full height option is used. - - - - - 1U rack-mounted servers have the ability to offer greater - server density than a blade server solution, but are often - limited to dual-socket, multi-core CPU configurations. - - Due to cooling requirements, it is rare to see 1U - rack-mounted servers with more than 2 CPU sockets. - - To obtain greater than dual-socket support in - a 1U rack-mount form factor, customers need to buy - their systems from Original Design Manufacturers - (ODMs) or second-tier manufacturers. - - This may cause issues for organizations that have - preferred vendor policies or concerns with support and - hardware warranties of non-tier 1 vendors. - - - - - 2U rack-mounted servers provide quad-socket, - multi-core CPU support but with a corresponding - decrease in server density (half the density offered - by 1U rack-mounted servers). - - - Larger rack-mounted servers, such as 4U servers, - often provide even greater CPU capacity. Commonly - supporting four or even eight CPU sockets. These - servers have greater expandability but such - servers have much lower server density and usually - greater hardware cost. - - - Rack-mounted servers - that support multiple independent servers in a single - 2U or 3U enclosure, "sled servers", deliver increased - density as compared to a typical 1U-2U rack-mounted servers. - - - Other factors that influence server hardware - selection for a storage-focused OpenStack design - architecture include: - - - Instance density - - In this architecture, instance - density and CPU-RAM oversubscription are lower. 
You - require more hosts to support the anticipated - scale, especially if the design uses dual-socket - hardware designs. - - - - Host density - - Another option to address the higher - host count is to use a quad-socket platform. Taking - this approach decreases host density which also - increases rack count. This configuration affects the - number of power connections and also impacts network - and cooling requirements. - - - - Power and cooling density - - The power and cooling - density requirements might be lower than with blade, - sled, or 1U server designs due to lower host density - (by using 2U, 3U or even 4U server designs). For data - centers with older infrastructure, this might be a - desirable feature. - - - - Storage-focused OpenStack design architecture server - hardware selection should focus on a "scale-up" versus - "scale-out" solution. The determination of which is the best - solution (a smaller number of larger hosts or a larger number of - smaller hosts), depends on a combination of factors - including cost, power, cooling, physical rack and floor space, - support-warranty, and manageability. -
- -
- Networking hardware selection - Key considerations for the selection of networking hardware include: - - - Port count - - The user requires networking - hardware that has the requisite port count. - - - - Port density - - The physical space required to provide the - requisite port count affects the network design. - A switch that provides 48 10 GbE - ports in 1U has a much higher port density than a - switch that provides 24 10 GbE ports in 2U. On a - general scale, a higher port density leaves more rack - space for compute or storage components which is - preferred. It is also important to consider fault - domains and power density. Finally, higher density - switches are more expensive, therefore it is important - not to over design the network. - - - - Port speed - - The networking hardware must support the - proposed network speed, for example: 1 GbE, 10 GbE, or - 40 GbE (or even 100 GbE). - - - - Redundancy - - User requirements for high availability and cost - considerations influence the required level of network - hardware redundancy. Achieve network redundancy by adding - redundant power supplies or paired switches. - - If this is a requirement, - the hardware must support this configuration. - User requirements determine if a completely - redundant network infrastructure is required. - - - - - Power requirements - - Ensure that the physical data - center provides the necessary power for the selected - network hardware. This is not an issue for - top of rack (ToR) switches, but may be an issue for - spine switches in a leaf and spine fabric, or end of - row (EoR) switches. - - - - Protocol support - - It is possible to gain more - performance out of a single storage system by using - specialized network technologies such as RDMA, SRP, - iSER and SCST. The specifics for using these - technologies is beyond the scope of this book. - - - -
- -
- Software selection - Factors that influence the software selection for a storage-focused - OpenStack architecture design include: - - - Operating system (OS) and hypervisor - - - OpenStack components - - - Supplemental software - - - Design decisions made in each of these areas impacts the - rest of the OpenStack architecture design. -
- -
- Operating system and hypervisor
The operating system (OS) and hypervisor have a significant impact on the overall design and also affect server hardware selection. Ensure the selected operating system and hypervisor combination supports the storage hardware and works with the networking hardware selection and topology. Operating system and hypervisor selection affect the following areas:
Cost: Selecting a commercially supported hypervisor, such as Microsoft Hyper-V, results in a different cost model than a community-supported open source hypervisor like KVM or Xen. Similarly, choosing Ubuntu over Red Hat (or vice versa) impacts cost due to support contracts. However, business or application requirements might dictate a specific or commercially supported hypervisor.
Supportability: Staff must have training with the chosen hypervisor. Consider the cost of training when choosing a solution. The support of a commercial product such as Red Hat, SUSE, or Windows is the responsibility of the OS vendor. If an open source platform is chosen, the support comes from in-house resources.
Management tools: Ubuntu and KVM use different management tools than VMware vSphere. Although both OS and hypervisor combinations are supported by OpenStack, there are varying impacts to the rest of the design as a result of selecting one combination versus the other.
Scale and performance: Ensure the selected OS and hypervisor combination meet the appropriate scale and performance requirements needed for this storage-focused OpenStack cloud. The chosen architecture must meet the targeted instance-host ratios with the selected OS-hypervisor combination.
Security: Ensure the design can accommodate the regular periodic installation of application security patches while maintaining the required workloads. The frequency of security patches for the proposed OS-hypervisor combination impacts performance, and the patch installation process could affect maintenance windows.
Supported features: Selecting the OS-hypervisor combination often determines the required features of OpenStack. Certain features are only available with specific OSes or hypervisors. If required features are not available, you might need to modify the design to meet user requirements.
Interoperability: Choose the OS-hypervisor combination based on how well it interoperates with the other OS-hypervisor combinations in the deployment. Operational and troubleshooting tools for one OS-hypervisor combination may differ from the tools used for another, so the design must address whether the two sets of tools need to interoperate.
- -
- OpenStack components
The OpenStack components you choose can have a significant impact on the overall design. While there are certain components that are always present (Compute and Image service, for example), there are other services that may not be required. As an example, a certain design may not require the Orchestration service. Omitting Orchestration would not typically have a significant impact on the overall design; however, if the architecture uses a replacement for OpenStack Object Storage for its storage component, this could potentially have significant impacts on the rest of the design. A storage-focused design might require the ability to use Orchestration to launch instances with Block Storage volumes to perform storage-intensive processing.
A storage-focused OpenStack design architecture uses the following components:
- OpenStack Identity (keystone)
- OpenStack dashboard (horizon)
- OpenStack Compute (nova) (including the use of multiple hypervisor drivers)
- OpenStack Object Storage (swift) (or another object storage solution)
- OpenStack Block Storage (cinder)
- OpenStack Image service (glance)
- OpenStack Networking (neutron) or legacy networking (nova-network)
Excluding certain OpenStack components may limit or constrain the functionality of other components. If a design opts to include Orchestration but exclude Telemetry, then the design cannot take advantage of Orchestration's auto-scaling functionality, which relies on information from Telemetry. Because you can use Orchestration to spin up a large number of instances to perform storage-intensive processing, we strongly recommend including Orchestration in a storage-focused architecture design.
- -
- Networking software - OpenStack Networking (neutron) provides a wide variety of networking - services for instances. There are many additional networking - software packages that may be useful to manage the OpenStack - components themselves. Some examples include HAProxy, - Keepalived, and various routing daemons (like Quagga). The - OpenStack High Availability Guide describes - some of these software packages, HAProxy in particular. See the Network - controller cluster stack chapter of the OpenStack High - Availability Guide. -
- -
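For flavor, a minimal HAProxy fragment for a single API endpoint might look like the following; the virtual IP and controller addresses are hypothetical, and 8776 is simply the Block Storage API port used as an example:

   # /etc/haproxy/haproxy.cfg (fragment)
   listen cinder_api
       bind 10.0.0.100:8776
       balance roundrobin
       option httpchk
       server controller1 10.0.0.11:8776 check inter 2000 rise 2 fall 5
       server controller2 10.0.0.12:8776 check inter 2000 rise 2 fall 5

The OpenStack High Availability Guide remains the reference for a complete configuration.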
- Management software - Management software includes software for providing: - - - - Clustering - - - - - Logging - - - - - Monitoring - - - - - Alerting - - - - - The factors for determining which - software packages in this category to select is - outside the scope of this design guide. - - The availability design requirements determine the selection of - Clustering Software, such as Corosync or Pacemaker. - The availability of the cloud infrastructure and the complexity - of supporting the configuration after deployment determines - the impact of including these software packages. The - OpenStack High Availability Guide provides - more details on the installation and configuration of Corosync - and Pacemaker. - Operational considerations determine the requirements for - logging, monitoring, and alerting. Each of these - sub-categories includes options. For - example, in the logging sub-category you could select - Logstash, Splunk, Log Insight, or another log - aggregation-consolidation tool. Store logs in a - centralized location to facilitate performing analytics - against the data. Log data analytics engines can also provide - automation and issue notification, by providing a mechanism to - both alert and automatically attempt to remediate some of the - more commonly known issues. - If you require any of these software packages, the - design must account for the additional resource consumption. - Some other potential design impacts include: - - - OS-Hypervisor combination: Ensure that the - selected logging, monitoring, or alerting tools - support the proposed OS-hypervisor combination. - - - Network hardware: The network hardware selection - needs to be supported by the logging, monitoring, and - alerting software. - - -
- -
- Database software - Most OpenStack components require access to - back-end database services to store state and configuration - information. Choose an appropriate back-end database which - satisfies the availability and fault tolerance requirements - of the OpenStack services. - MySQL is the default database for OpenStack, but other - compatible databases are available. - - - Telemetry uses MongoDB. - - - The chosen high availability database solution changes - according to the selected database. MySQL, for example, provides - several options. Use a replication technology such as Galera - for active-active clustering. For active-passive use some form of - shared storage. Each of these potential solutions has an - impact on the design: - - - Solutions that employ Galera/MariaDB require at - least three MySQL nodes. - - - MongoDB has its own design considerations for high - availability. - - - OpenStack design, generally, does not include shared - storage. However, for some high availability designs, - certain components might require it depending on the specific - implementation. - - -
-
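As an illustration of the active-active MySQL option, a minimal Galera fragment could look like this on each of three hypothetical nodes (db1, db2, db3); a production cluster needs more settings than shown here:

   # /etc/mysql/conf.d/galera.cnf (fragment)
   [mysqld]
   wsrep_provider = /usr/lib/galera/libgalera_smm.so
   wsrep_cluster_name = openstack_db
   wsrep_cluster_address = gcomm://db1,db2,db3
   binlog_format = ROW
   default_storage_engine = InnoDB
   innodb_autoinc_lock_mode = 2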
diff --git a/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml b/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml deleted file mode 100644 index ec0cfa86a0..0000000000 --- a/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml +++ /dev/null @@ -1,311 +0,0 @@ - -
- - Operational considerations - Several operational factors affect the design choices for a general - purpose cloud. Operations staff receive tasks regarding the - maintenance of cloud environments for larger installations, including: - - - Maintenance tasks - - The storage solution should take - into account storage maintenance and the impact on - underlying workloads. - - - - Reliability and availability - - Reliability and - availability depend on wide area network availability - and on the level of precautions taken by the service - provider. - - - - Flexibility - - Organizations need to have the - flexibility to choose between off-premise and - on-premise cloud storage options. This relies - on relevant decision criteria with potential cost savings. - For example, continuity of operations, disaster recovery, - security, records retention laws, regulations, and policies. - - - - Monitoring and alerting services - are vital in cloud environments with high demands - on storage resources. These services provide a real-time view - into the health and performance of the storage systems. An - integrated management console, or other dashboards capable of - visualizing SNMP data, is helpful when discovering and resolving - issues that arise within the storage cluster. - A storage-focused cloud design should include: - - - Monitoring of physical hardware resources. - - - Monitoring of environmental resources such as - temperature and humidity. - - - Monitoring of storage resources such as available - storage, memory, and CPU. - - - Monitoring of advanced storage performance data to - ensure that storage systems are performing as - expected. - - - Monitoring of network resources for service - disruptions which would affect access to - storage. - - - Centralized log collection. - - - Log analytics capabilities. - - - Ticketing system (or integration with a ticketing - system) to track issues. - - - Alerting and notification of responsible teams or - automated systems which remediate problems with - storage as they arise. - - - Network Operations Center (NOC) staffed and always - available to resolve issues. - - - -
- Application awareness
Well-designed applications should be aware of underlying storage subsystems in order to use cloud storage solutions effectively. If native replication is not available, operations personnel must be able to modify the application, or design it to react, so that it can provide its own replication service. An application designed to detect underlying storage systems can function in a wide variety of infrastructures, and still have the same basic behavior regardless of the differences in the underlying infrastructure.
- -
- Fault tolerance and availability - Designing for fault tolerance and availability of storage - systems in an OpenStack cloud is vastly different when - comparing the Block Storage and Object Storage services. - -
- Block Storage fault tolerance and availability - Configure Block Storage resource nodes with advanced RAID - controllers and high performance disks to - provide fault tolerance at the hardware level. - Deploy high performing storage solutions - such as SSD disk drives or flash storage systems for applications - requiring extreme performance out of Block Storage devices. - In environments that place extreme demands on Block Storage, - we recommend using multiple storage pools. - In this case, each pool of devices should have a similar - hardware design and disk configuration across all hardware - nodes in that pool. This allows for a design that provides - applications with access to a wide variety of Block Storage - pools, each with their own redundancy, availability, and - performance characteristics. When deploying multiple pools of - storage it is also important to consider the impact on the - Block Storage scheduler which is responsible for provisioning - storage across resource nodes. Ensuring that applications can - schedule volumes in multiple regions, each with their own - network, power, and cooling infrastructure, can give tenants - the ability to build fault tolerant applications that are - distributed across multiple availability zones. - In addition to the Block Storage resource nodes, it is - important to design for high availability and redundancy of - the APIs, and related services that are responsible for - provisioning and providing access to storage. We - recommend designing a layer of hardware or software load - balancers in order to achieve high availability of the - appropriate REST API services to provide uninterrupted - service. In some cases, it may also be necessary to deploy an - additional layer of load balancing to provide access to - back-end database services responsible for servicing and - storing the state of Block Storage volumes. We also recommend - designing a highly available database solution to store the Block - Storage databases. Leverage highly available database - solutions such as Galera and MariaDB to help - keep database services online for uninterrupted access, - so that tenants can manage Block Storage volumes. - In a cloud with extreme demands on Block Storage, the network - architecture should take into account the amount of East-West - bandwidth required for instances to make use of - the available storage resources. The selected network devices - should support jumbo frames for transferring large blocks of - data. In some cases, it may be necessary to create an - additional back-end storage network dedicated to providing - connectivity between instances and Block Storage resources so - that there is no contention of network resources. -
-
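A sketch of the multiple-pool idea, using the LVM reference driver purely for illustration; backend and volume group names are hypothetical, and a real deployment would substitute the drivers for its actual arrays:

   # /etc/cinder/cinder.conf (fragment)
   enabled_backends = ssd_pool,hdd_pool

   [ssd_pool]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = SSD_POOL
   volume_group = cinder-ssd

   [hdd_pool]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = HDD_POOL
   volume_group = cinder-hdd

   # Expose each pool to tenants as a volume type
   $ cinder type-create ssd
   $ cinder type-key ssd set volume_backend_name=SSD_POOL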
- Object Storage fault tolerance and availability - While consistency and partition tolerance are both inherent - features of the Object Storage service, it is important to - design the overall storage architecture to ensure that the - implemented system meets those goals. The - OpenStack Object Storage service places a specific number of - data replicas as objects on resource nodes. These replicas are - distributed throughout the cluster based on a consistent hash - ring which exists on all nodes in the cluster. - Design the Object Storage system with a sufficient - number of zones to provide quorum for the number of replicas - defined. For example, with three replicas configured in the - Swift cluster, the recommended number of zones to configure - within the Object Storage cluster in order to achieve quorum - is five. While it is possible to deploy a solution with fewer - zones, the implied risk of doing so is that some data may not - be available and API requests to certain objects stored in the - cluster might fail. For this reason, ensure you properly account - for the number of zones in the Object Storage cluster. - Each Object Storage zone should be self-contained within its - own availability zone. Each availability zone should have - independent access to network, power and cooling - infrastructure to ensure uninterrupted access to data. In - addition, a pool of Object Storage proxy servers providing access - to data stored on the object nodes should service - each availability zone. Object proxies in each region - should leverage local read and write affinity so that local storage - resources facilitate access to objects wherever - possible. We recommend deploying upstream load balancing to ensure - that proxy services are distributed across the multiple zones and, - in some cases, it may be necessary to make use of third-party - solutions to aid with geographical distribution of services. - A zone within an Object Storage cluster is a logical - division. Any of the following may represent a zone: - - - - A disk within a single node - - - - - One zone per node - - - - - Zone per collection of nodes - - - - - Multiple racks - - - - - Multiple DCs - - - - Selecting the proper zone design is crucial for allowing the Object - Storage cluster to scale while providing an available and - redundant storage system. It may be necessary to - configure storage policies that have different requirements - with regards to replicas, retention and other factors that - could heavily affect the design of storage in a specific - zone. -
-
- -
- Scaling storage services - Adding storage capacity and bandwidth is a very different - process when comparing the Block and Object Storage services. - While adding Block Storage capacity is a relatively simple - process, adding capacity and bandwidth to the Object Storage - systems is a complex task that requires careful planning and - consideration during the design phase. - -
- Scaling Block Storage - You can upgrade Block Storage pools to add storage capacity - without interrupting the overall Block - Storage service. Add nodes to the pool by - installing and configuring the appropriate hardware and - software and then allowing that node to report in to the - proper storage pool via the message bus. This is because Block - Storage nodes report into the scheduler service advertising - their availability. After the node is online and available, - tenants can make use of those storage resources - instantly. - In some cases, the demand on Block Storage from instances - may exhaust the available network bandwidth. As a result, - design network infrastructure that services Block Storage - resources in such a way that you can add capacity and - bandwidth easily. This often involves the use of - dynamic routing protocols or advanced networking solutions to - add capacity to downstream devices easily. Both - the front-end and back-end storage network designs should - encompass the ability to quickly and easily add capacity and - bandwidth. -
-
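After bringing a new node online as described above, one way to confirm that it has registered with the scheduler is to list the volume services and, with newer clients, the reported pools:

   $ cinder service-list
   $ cinder get-pools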
- Scaling Object Storage - Adding back-end storage capacity to an Object Storage - cluster requires careful planning and consideration. In the - design phase, it is important to determine the maximum - partition power required by the Object Storage service, which - determines the maximum number of partitions which can exist. - Object Storage distributes data among all available storage, - but a partition cannot span more than one disk, so the maximum - number of partitions can only be as high as the number of - disks. - For example, a system that starts with a single disk and a - partition power of 3 can have 8 (2^3) partitions. Adding a - second disk means that each has 4 partitions. - The one-disk-per-partition limit means that this system can - never have more than 8 disks, limiting its scalability. - However, a system that starts with a single disk and a - partition power of 10 can have up to 1024 (2^10) disks. - As you add back-end storage capacity to the system, the - partition maps redistribute data amongst the storage - nodes. In some cases, this replication consists of - extremely large data sets. In these cases, we recommend - using back-end replication links that do not - contend with tenants' access to data. - As more tenants begin to access data within the cluster and - their data sets grow, it is necessary to add front-end - bandwidth to service data access requests. Adding front-end - bandwidth to an Object Storage cluster requires careful - planning and design of the Object Storage proxies that tenants - use to gain access to the data, along with the - high availability solutions that enable easy scaling of the - proxy layer. We recommend designing a front-end load - balancing layer that tenants and consumers use to gain access - to data stored within the cluster. This load balancing layer - may be distributed across zones, regions or even across - geographic boundaries, which may also require that the design - encompass geo-location solutions. - In some cases, you must add bandwidth and capacity to the network - resources servicing requests between proxy servers and storage - nodes. For this reason, the network - architecture used for access to storage nodes and proxy - servers should make use of a design which is scalable. -
-
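To make the partition-power arithmetic concrete: the value is fixed when the rings are first built, so it has to be sized for the eventual disk count. A sketch with a partition power of 10, three replicas, and a one-hour minimum between partition moves (all values illustrative):

   $ swift-ring-builder object.builder create 10 3 1
   $ swift-ring-builder object.builder add r1z1-10.0.1.11:6000/sdb 100
   $ swift-ring-builder object.builder rebalance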
-
diff --git a/doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml b/doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml deleted file mode 100644 index e6cee78a27..0000000000 --- a/doc/arch-design/storage_focus/section_prescriptive_examples_storage_focus.xml +++ /dev/null @@ -1,189 +0,0 @@ - -
- - Prescriptive examples - Storage-focused architecture depends on specific use cases. - This section discusses three example use cases: - - - - An object store with a RESTful interface - - - - - Compute analytics with parallel file systems - - - - - High performance database - - - - The example below shows a REST interface without a high - performance requirement. - Swift is a highly scalable object store that is part of the - OpenStack project. This diagram explains the example - architecture: - - - - - - - The example REST interface, presented as a traditional Object - store running on traditional spindles, does not require a high - performance caching tier. - This example uses the following components: - Network: - - - 10 GbE horizontally scalable spine leaf back-end - storage and front end network. - - - Storage hardware: - - - 10 storage servers each with 12x4 TB disks equaling - 480 TB total space with approximately 160 TB of - usable space after replicas. - - - Proxy: - - - 3x proxies - - - 2x10 GbE bonded front end - - - 2x10 GbE back-end bonds - - - Approximately 60 Gb of total bandwidth to the - back-end storage cluster - - - - It may be necessary to implement a 3rd-party caching layer - for some applications to achieve suitable performance. - - -
- Compute analytics with Data processing service
Analytics of large data sets depend on the performance of the storage system. Clouds using storage systems such as Hadoop Distributed File System (HDFS) have inefficiencies which can cause performance issues.
One potential solution to this problem is the implementation of storage systems designed for performance. Parallel file systems have previously filled this need in the HPC space and are suitable for large-scale, performance-oriented systems.
OpenStack has integration with Hadoop to manage the Hadoop cluster within the cloud. The following diagram shows an OpenStack store with a high performance requirement.
The hardware requirements and configuration are similar to those of the High Performance Database example below. In this case, the architecture uses Ceph's Swift-compatible REST interface, with features that allow a caching pool to be connected to accelerate the presented pool.
- -
- High performance database with Database service
Databases are a common workload that benefits from high performance storage back ends. Although enterprise storage is not a requirement, many environments have existing storage that an OpenStack cloud can use as back ends. You can create a storage pool to provide block devices with OpenStack Block Storage for instances as well as object interfaces. In this example, the database I/O requirements are high and demand storage presented from a fast SSD pool.
A storage system presents a LUN backed by a set of SSDs using a traditional storage array with OpenStack Block Storage integration, or a storage platform such as Ceph or Gluster. This system can provide additional performance. For example, in the database example below, a portion of the SSD pool can act as a block device to the database server. In the high performance analytics example, the inline SSD cache layer accelerates the REST interface.
In this example, Ceph presents a Swift-compatible REST interface, as well as block-level storage from a distributed storage cluster. It is highly flexible and has features that enable reduced cost of operations, such as self-healing and auto-balancing. Using erasure coded pools is a suitable way of maximizing the amount of usable space.
There are special considerations around erasure coded pools, for example, higher computational requirements and limitations on the operations allowed on an object; erasure coded pools do not support partial writes.
Using Ceph as an applicable example, a potential architecture would have the following requirements:
Network:
- 10 GbE horizontally scalable spine-leaf back-end storage and front-end network
Storage hardware:
- 5 storage servers for the caching layer with 24x1 TB SSDs
- 10 storage servers each with 12x4 TB disks, which equals 480 TB total space with approximately 160 TB of usable space after 3 replicas
REST proxy:
- 3x proxies
- 2x10 GbE bonded front end
- 2x10 GbE back-end bonds
- Approximately 60 Gb of total bandwidth to the back-end storage cluster
Using an SSD cache layer, you can present block devices directly to hypervisors or instances. The REST interface can also use the SSD cache systems as an inline cache.
-
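A sketch of the erasure coded pool mentioned above, with a cache tier layered on top since such pools do not support partial writes; profile parameters, pool names, and placement group counts are illustrative only:

   $ ceph osd erasure-code-profile set ec42 k=4 m=2
   $ ceph osd pool create ec-backing 1024 1024 erasure ec42
   $ ceph osd pool create ssd-cache 128 128
   $ ceph osd tier add ec-backing ssd-cache
   $ ceph osd tier cache-mode ssd-cache writeback
   $ ceph osd tier set-overlay ec-backing ssd-cache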
diff --git a/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml b/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml deleted file mode 100644 index da90667da1..0000000000 --- a/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml +++ /dev/null @@ -1,100 +0,0 @@ - -
- Technical considerations
Some of the key technical considerations that are critical to a storage-focused OpenStack design architecture include:
Input-Output requirements: Input-Output performance requirements require researching and modeling before deciding on a final storage framework. Running benchmarks for Input-Output performance provides a baseline for expected performance levels. If these tests include details, then the resulting data can help model behavior and results during different workloads. Running scripted smaller benchmarks during the life cycle of the architecture helps record the system health at different points in time. The data from these scripted benchmarks assists in future scoping and gaining a deeper understanding of an organization's needs.
Scale: Scaling storage solutions in a storage-focused OpenStack architecture design is driven by initial requirements, including IOPS, capacity, bandwidth, and future needs. Planning capacity based on projected needs over the course of a budget cycle is important for a design. The architecture should balance cost and capacity, while also allowing flexibility to implement new technologies and methods as they become available.
Security: Designing security around data has multiple points of focus that vary depending on SLAs, legal requirements, industry regulations, and certifications needed for systems or people. Consider compliance with HIPAA, ISO 9000, and SOX based on the type of data. For certain organizations, multiple levels of access control are important.
OpenStack compatibility: Interoperability and integration with OpenStack can be paramount in deciding on a storage hardware and storage management platform. Interoperability and integration include factors such as OpenStack Block Storage interoperability, OpenStack Object Storage compatibility, and hypervisor compatibility (which affects the ability to use storage for ephemeral instance storage).
Storage management: You must address a range of storage management-related considerations in the design of a storage-focused OpenStack cloud. These considerations include, but are not limited to, backup strategy (and restore strategy, since a backup that cannot be restored is useless), data valuation and hierarchical storage management, retention strategy, data placement, and workflow automation.
Data grids: Data grids are helpful when answering questions around data valuation. Data grids improve decision making through correlation of access patterns, ownership, and business-unit revenue with other metadata values to deliver actionable information about data.
When building a storage-focused OpenStack architecture, strive to build a flexible design based on an industry standard core. One way of accomplishing this might be through the use of different back ends serving different use cases.
diff --git a/doc/pom.xml b/doc/pom.xml index 1aed472933..4b79dd87d9 100644 --- a/doc/pom.xml +++ b/doc/pom.xml @@ -10,7 +10,6 @@ pom - arch-design cli-reference config-reference glossary diff --git a/tools/build-all-rst.sh b/tools/build-all-rst.sh index dfef875329..39306fe335 100755 --- a/tools/build-all-rst.sh +++ b/tools/build-all-rst.sh @@ -12,7 +12,7 @@ if [[ $# > 0 ]] ; then fi for guide in user-guide user-guide-admin networking-guide admin-guide-cloud \ - contributor-guide image-guide; do + contributor-guide image-guide arch-design; do tools/build-rst.sh doc/$guide $GLOSSARY --build build \ --target $guide $LINKCHECK # Build it only the first time @@ -20,7 +20,7 @@ for guide in user-guide user-guide-admin networking-guide admin-guide-cloud \ done # Draft guides -for guide in arch-design-rst config-ref-rst; do +for guide in config-ref-rst; do tools/build-rst.sh doc/$guide --build build \ --target "draft/$guide" $LINKCHECK done diff --git a/www/static/.htaccess b/www/static/.htaccess index a7512c5b16..88368e46ab 100644 --- a/www/static/.htaccess +++ b/www/static/.htaccess @@ -160,6 +160,7 @@ redirectmatch 301 "^/user-guide/content/.*$" /user-guide/index.html redirectmatch 301 "^/user-guide-admin/content/.*" /user-guide-admin/index.html redirectmatch 301 "^/admin-guide-cloud/content/.*$" /admin-guide-cloud/index.html redirectmatch 301 "^/image-guide/content/.*$" /image-guide/index.html +redirectmatch 301 "^/arch-design/content/.*$" /arch-design/index.html # Hot-guide has moved to heat repo redirect 301 /user-guide/hot-guide/hot.html /developer/heat/template_guide/hot_guide.html