diff --git a/doc-tools-check-languages.conf b/doc-tools-check-languages.conf
index 59fc1861fc..852ecb2983 100644
--- a/doc-tools-check-languages.conf
+++ b/doc-tools-check-languages.conf
@@ -8,7 +8,7 @@ declare -A BOOKS=(
["de"]="install-guide"
["fr"]="install-guide"
["id"]="image-guide install-guide"
- ["ja"]="ha-guide image-guide install-guide ops-guide"
+ ["ja"]="ha-guide image-guide install-guide"
["ko_KR"]="install-guide"
["ru"]="install-guide"
["tr_TR"]="image-guide install-guide arch-design"
@@ -47,7 +47,6 @@ declare -A SPECIAL_BOOKS=(
["image-guide"]="RST"
["install-guide"]="RST"
["networking-guide"]="RST"
- ["ops-guide"]="RST"
# Do not translate for now, we need to fix our scripts first to
# generate the content properly.
["install-guide-debconf"]="skip"
diff --git a/doc/common/app-support.rst b/doc/common/app-support.rst
index dc58f8f3af..61492745b2 100644
--- a/doc/common/app-support.rst
+++ b/doc/common/app-support.rst
@@ -50,8 +50,6 @@ The following books explain how to configure and run an OpenStack cloud:
* `Configuration Reference `_
-* `Operations Guide `_
-
* `Networking Guide `_
* `High Availability Guide `_
diff --git a/doc/ops-guide/setup.cfg b/doc/ops-guide/setup.cfg
deleted file mode 100644
index 6747b30b6c..0000000000
--- a/doc/ops-guide/setup.cfg
+++ /dev/null
@@ -1,27 +0,0 @@
-[metadata]
-name = openstackopsguide
-summary = OpenStack Operations Guide
-author = OpenStack
-author-email = openstack-docs@lists.openstack.org
-home-page = https://docs.openstack.org/
-classifier =
-    Environment :: OpenStack
-    Intended Audience :: Information Technology
-    Intended Audience :: System Administrators
-    License :: OSI Approved :: Apache Software License
-    Operating System :: POSIX :: Linux
-    Topic :: Documentation
-
-[global]
-setup-hooks =
- pbr.hooks.setup_hook
-
-[files]
-
-[build_sphinx]
-warning-is-error = 1
-build-dir = build
-source-dir = source
-
-[wheel]
-universal = 1
diff --git a/doc/ops-guide/setup.py b/doc/ops-guide/setup.py
deleted file mode 100644
index 736375744d..0000000000
--- a/doc/ops-guide/setup.py
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
-import setuptools
-
-# In python < 2.7.4, a lazy loading of package `pbr` will break
-# setuptools if some other modules registered functions in `atexit`.
-# solution from: http://bugs.python.org/issue15881#msg170215
-try:
- import multiprocessing # noqa
-except ImportError:
- pass
-
-setuptools.setup(
- setup_requires=['pbr'],
- pbr=True)
diff --git a/doc/ops-guide/source/acknowledgements.rst b/doc/ops-guide/source/acknowledgements.rst
deleted file mode 100644
index ad027b7809..0000000000
--- a/doc/ops-guide/source/acknowledgements.rst
+++ /dev/null
@@ -1,51 +0,0 @@
-================
-Acknowledgements
-================
-
-The OpenStack Foundation supported the creation of this book with plane
-tickets to Austin, lodging (including one adventurous evening without
-power after a windstorm), and delicious food. For about USD $10,000, we
-could collaborate intensively for a week in the same room at the
-Rackspace Austin office. The authors are all members of the OpenStack
-Foundation, which you can join. Go to the `Foundation web
-site `_.
-
-We want to acknowledge our excellent host Rackers at Rackspace in
-Austin:
-
-- Emma Richards of Rackspace Guest Relations took excellent care of our
- lunch orders and even set aside a pile of sticky notes that had
- fallen off the walls.
-
-- Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room
- reshuffle and helped us settle in for the week.
-
-- The Real Estate team at Rackspace in Austin, also known as "The
- Victors," were super responsive.
-
-- Adam Powell in Racker IT supplied us with bandwidth each day and
- second monitors for those of us needing more screens.
-
-- On Wednesday night we had a fun happy hour with the Austin OpenStack
- Meetup group and Racker Katie Schmidt took great care of our group.
-
-We also had some excellent input from outside of the room:
-
-- Tim Bell from CERN gave us feedback on the outline before we started
- and reviewed it mid-week.
-
-- Sébastien Han has written excellent blogs and generously gave his
- permission for re-use.
-
-- Oisin Feeley read it, made some edits, and provided emailed feedback
- right when we asked.
-
-Inside the book sprint room with us each day was our book sprint
-facilitator Adam Hyde. Without his tireless support and encouragement,
-we would have thought a book of this scope was impossible in five days.
-Adam has proven the book sprint method effective again and again. He
-creates both tools and faith in collaborative authoring at
-`www.booksprints.net `_.
-
-We couldn't have pulled it off without so much supportive help and
-encouragement.
diff --git a/doc/ops-guide/source/app-crypt.rst b/doc/ops-guide/source/app-crypt.rst
deleted file mode 100644
index 35480419d1..0000000000
--- a/doc/ops-guide/source/app-crypt.rst
+++ /dev/null
@@ -1,536 +0,0 @@
-=================================
-Tales From the Cryp^H^H^H^H Cloud
-=================================
-
-Herein lies a selection of tales from OpenStack cloud operators. Read,
-and learn from their wisdom.
-
-Double VLAN
-~~~~~~~~~~~
-
-I was on-site in Kelowna, British Columbia, Canada setting up a new
-OpenStack cloud. The deployment was fully automated: Cobbler deployed
-the OS on the bare metal, bootstrapped it, and Puppet took over from
-there. I had run the deployment scenario so many times in practice and
-took for granted that everything was working.
-
-On my last day in Kelowna, I was in a conference call from my hotel. In
-the background, I was fooling around on the new cloud. I launched an
-instance and logged in. Everything looked fine. Out of boredom, I ran
-:command:`ps aux` and all of a sudden the instance locked up.
-
-Thinking it was just a one-off issue, I terminated the instance and
-launched a new one. By then, the conference call ended and I was off to
-the data center.
-
-At the data center, I was finishing up some tasks and remembered the
-lock-up. I logged into the new instance and ran :command:`ps aux` again.
-It worked. Phew. I decided to run it one more time. It locked up.
-
-After reproducing the problem several times, I came to the unfortunate
-conclusion that this cloud did indeed have a problem. Even worse, my
-time was up in Kelowna and I had to return to Calgary.
-
-Where do you even begin troubleshooting something like this? An instance
-that just randomly locks up when a command is issued. Is it the image?
-Nope—it happens on all images. Is it the compute node? Nope—all nodes.
-Is the instance locked up? No! New SSH connections work just fine!
-
-We reached out for help. A networking engineer suggested it was an MTU
-issue. Great! MTU! Something to go on! What's MTU and why would it cause
-a problem?
-
-MTU is maximum transmission unit. It specifies the maximum number of
-bytes that the interface accepts for each packet. If two interfaces have
-two different MTUs, bytes might get chopped off and weird things
-happen—such as random session lockups.
-
-.. note::
-
-   Not all packets have a size of 1500. Running the :command:`ls` command over
-   SSH might only create a single packet of less than 1500 bytes.
-   However, running a command with heavy output, such as :command:`ps aux`,
-   requires several packets of 1500 bytes.
-
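-A quick way to probe for an MTU mismatch between two hosts (a sketch;
-the host name is a placeholder) is to ping with the "don't fragment"
-flag set and a payload just under the suspected limit. A 1472-byte
-payload plus 28 bytes of ICMP and IP headers makes exactly 1500 bytes:
-
-.. code-block:: console
-
-   $ ping -M do -s 1472 compute-node
-   $ ping -M do -s 1476 compute-node
-
-If the first command succeeds but the second reports that the message
-is too long, an interface along the path is capped at an MTU of 1500.
-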
-OK, so where is the MTU issue coming from? Why haven't we seen this in
-any other deployment? What's new in this situation? Well, new data
-center, new uplink, new switches, new model of switches, new servers,
-first time using this model of servers… so, basically everything was
-new. Wonderful. We toyed around with raising the MTU in various places:
-the switches, the NICs on the compute nodes, the virtual NICs in the
-instances, we even had the data center raise the MTU for our uplink
-interface. Some changes worked, some didn't. This line of
-troubleshooting didn't feel right, though. We shouldn't have to be
-changing the MTU in these areas.
-
-As a last resort, our network admin (Alvaro) and myself sat down with
-four terminal windows, a pencil, and a piece of paper. In one window, we
-ran ping. In the second window, we ran ``tcpdump`` on the cloud
-controller. In the third, ``tcpdump`` on the compute node. And the fourth
-had ``tcpdump`` on the instance. For background, this cloud was a
-multi-node, non-multi-host setup.
-
-One cloud controller acted as a gateway to all compute nodes.
-VlanManager was used for the network config. This means that the cloud
-controller and all compute nodes had a different VLAN for each OpenStack
-project. We used the ``-s`` option of ``ping`` to change the packet
-size. We watched as sometimes packets would fully return, sometimes they'd
-only make it out and never back in, and sometimes the packets would stop at a
-random point. We changed ``tcpdump`` to start displaying the hex dump of
-the packet. We pinged between every combination of outside, controller,
-compute, and instance.
-
-Finally, Alvaro noticed something. When a packet from the outside hits
-the cloud controller, it should not be configured with a VLAN. We
-verified this as true. When the packet went from the cloud controller to
-the compute node, it should only have a VLAN if it was destined for an
-instance. This was still true. When the ping reply was sent from the
-instance, it should be in a VLAN. True. When it came back to the cloud
-controller and on its way out to the Internet, it should no longer have
-a VLAN. False. Uh oh. It looked as though the VLAN part of the packet
-was not being removed.
-
-That made no sense.
-
-While bouncing this idea around in our heads, I was randomly typing
-commands on the compute node:
-
-.. code-block:: console
-
- $ ip a
- …
- 10: vlan100@vlan20: mtu 1500 qdisc noqueue master br100 state UP
- …
-
-"Hey Alvaro, can you run a VLAN on top of a VLAN?"
-
-"If you did, you'd add an extra 4 bytes to the packet…"
-
-Then it all made sense…
-
-.. code-block:: console
-
- $ grep vlan_interface /etc/nova/nova.conf
- vlan_interface=vlan20
-
-In ``nova.conf``, ``vlan_interface`` specifies what interface OpenStack
-should attach all VLANs to. The correct setting should have been:
-
-.. code-block:: ini
-
- vlan_interface=bond0
-
-as ``bond0`` is the server's bonded NIC.
-
-vlan20 is the VLAN that the data center gave us for outgoing Internet
-access. It's a correct VLAN and is also attached to bond0.
-
-By mistake, I configured OpenStack to attach all tenant VLANs to vlan20
-instead of bond0 thereby stacking one VLAN on top of another. This added
-an extra 4 bytes to each packet and caused a packet of 1504 bytes to be
-sent out, which would cause problems when it arrived at an interface that
-only accepted 1500.
-
-As soon as this setting was fixed, everything worked.
-
-"The Issue"
-~~~~~~~~~~~
-
-At the end of August 2012, a post-secondary school in Alberta, Canada
-migrated its infrastructure to an OpenStack cloud. As luck would have
-it, within the first day or two of it running, one of their servers just
-disappeared from the network. Blip. Gone.
-
-After restarting the instance, everything was back up and running. We
-reviewed the logs and saw that at some point, network communication
-stopped and then everything went idle. We chalked this up to a random
-occurrence.
-
-A few nights later, it happened again.
-
-We reviewed both sets of logs. The one thing that stood out the most was
-DHCP. At the time, OpenStack, by default, set DHCP leases for one minute
-(it's now two minutes). This means that every instance contacts the
-cloud controller (DHCP server) to renew its fixed IP. For some reason,
-this instance could not renew its IP. We correlated the instance's logs
-with the logs on the cloud controller and put together a conversation:
-
-#. Instance tries to renew IP.
-
-#. Cloud controller receives the renewal request and sends a response.
-
-#. Instance "ignores" the response and re-sends the renewal request.
-
-#. Cloud controller receives the second request and sends a new
- response.
-
-#. Instance begins sending a renewal request to ``255.255.255.255``
- since it hasn't heard back from the cloud controller.
-
-#. The cloud controller receives the ``255.255.255.255`` request and
- sends a third response.
-
-#. The instance finally gives up.
-
-With this information in hand, we were sure that the problem had to do
-with DHCP. We thought that for some reason, the instance wasn't getting
-a new IP address and with no IP, it shut itself off from the network.
-
-A quick Google search turned up this: `DHCP lease errors in VLAN
-mode `_
-which further supported our DHCP theory.
-
-An initial idea was to just increase the lease time. If the instance
-only renewed once every week, the chances of this problem happening
-would be tremendously smaller than every minute. This didn't solve the
-problem, though. It was just covering the problem up.
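-
-For reference, the lease time we experimented with is a nova-network
-configuration option (a sketch; verify the option name against your
-release's configuration reference):
-
-.. code-block:: ini
-
-   # /etc/nova/nova.conf
-   # DHCP lease lifetime, in seconds (one week shown here)
-   dhcp_lease_time = 604800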
-
-We decided to have ``tcpdump`` run on this instance and see if we could
-catch it in action again. Sure enough, we did.
-
-The ``tcpdump`` looked very, very weird. In short, it looked as though
-network communication stopped before the instance tried to renew its IP.
-Since there is so much DHCP chatter from a one-minute lease, it is very
-hard to confirm, but even with only milliseconds of difference between
-packets, if one packet arrives first, it arrived first, and if that
-packet reported network issues, then those issues had to have happened
-before DHCP.
-
-Additionally, this instance in question was responsible for a very, very
-large backup job each night. While "The Issue" (as we were now calling
-it) didn't happen exactly when the backup happened, it was close enough
-(a few hours) that we couldn't ignore it.
-
-Further days go by and we catch The Issue in action more and more. We
-find that dhclient is not running after The Issue happens. Now we're
-back to thinking it's a DHCP issue. Running ``/etc/init.d/networking restart``
-brings everything back up and running.
-
-Ever have one of those days where all of a sudden you get the Google
-results you were looking for? Well, that's what happened here. I was
-looking for information on dhclient and why it dies when it can't renew
-its lease and all of a sudden I found a bunch of OpenStack and dnsmasq
-discussions that were identical to the problem we were seeing!
-
-`Problem with Heavy Network IO and
-Dnsmasq `_.
-
-`instances losing IP address while running, due to No
-DHCPOFFER `_.
-
-Seriously, Google.
-
-This bug report was the key to everything: `KVM images lose connectivity
-with bridged
-network `_.
-
-It was funny to read the report. It was full of people who had some
-strange network problem but didn't quite explain it in the same way.
-
-So it was a qemu/kvm bug.
-
-At the same time of finding the bug report, a co-worker was able to
-successfully reproduce The Issue! How? He used ``iperf`` to spew a ton
-of bandwidth at an instance. Within 30 minutes, the instance just
-disappeared from the network.
-
-Armed with a patched qemu and a way to reproduce, we set out to see if
-we had finally solved The Issue. After 48 hours straight of hammering the
-instance with bandwidth, we were confident. The rest is history. You can
-search the bug report for "joe" to find my comments and actual tests.
-
-Disappearing Images
-~~~~~~~~~~~~~~~~~~~
-
-At the end of 2012, Cybera (a nonprofit with a mandate to oversee the
-development of cyberinfrastructure in Alberta, Canada) deployed an
-updated OpenStack cloud for their `DAIR
-project `_. A few days into
-production, a compute node locks up. Upon rebooting the node, I checked
-to see what instances were hosted on that node so I could boot them on
-behalf of the customer. Luckily, only one instance.
-
-The :command:`nova reboot` command wasn't working, so I used :command:`virsh`,
-but it immediately came back with an error saying it was unable to find the
-backing disk. In this case, the backing disk is the Glance image that is
-copied to ``/var/lib/nova/instances/_base`` when the image is used for
-the first time. Why couldn't it find it? I checked the directory and
-sure enough it was gone.
-
-I reviewed the ``nova`` database and saw the instance's entry in the
-``nova.instances`` table. The image that the instance was using matched
-what virsh was reporting, so no inconsistency there.
-
-I checked Glance and noticed that this image was a snapshot that the
-user created. At least that was good news—this user would have been the
-only user affected.
-
-Finally, I checked StackTach and reviewed the user's events. They had
-created and deleted several snapshots—most likely experimenting.
-Although the timestamps didn't match up, my conclusion was that they
-launched their instance and then deleted the snapshot and it was somehow
-removed from ``/var/lib/nova/instances/_base``. None of that made sense,
-but it was the best I could come up with.
-
-It turns out the reason that this compute node locked up was a hardware
-issue. We removed it from the DAIR cloud and called Dell to have it
-serviced. Dell arrived and began working. Somehow or another (or a fat
-finger), a different compute node was bumped and rebooted. Great.
-
-When this node fully booted, I ran through the same scenario of seeing
-what instances were running so I could turn them back on. There were a
-total of four. Three booted and one gave an error. It was the same error
-as before: unable to find the backing disk. Seriously, what?
-
-Again, it turns out that the image was a snapshot. The three other
-instances that successfully started were standard cloud images. Was it a
-problem with snapshots? That didn't make sense.
-
-A note about DAIR's architecture: ``/var/lib/nova/instances`` is a
-shared NFS mount. This means that all compute nodes have access to it,
-which includes the ``_base`` directory. Another centralized area is
-``/var/log/rsyslog`` on the cloud controller. This directory collects
-all OpenStack logs from all compute nodes. I wondered if there were any
-entries for the file that :command:`virsh` was reporting:
-
-.. code-block:: console
-
- dair-ua-c03/nova.log:Dec 19 12:10:59 dair-ua-c03
- 2012-12-19 12:10:59 INFO nova.virt.libvirt.imagecache
- [-] Removing base file:
- /var/lib/nova/instances/_base/7b4783508212f5d242cbf9ff56fb8d33b4ce6166_10
-
-Ah-hah! So OpenStack was deleting it. But why?
-
-A feature was introduced in Essex to periodically check and see if there
-were any ``_base`` files not in use. If there were, OpenStack Compute
-would delete them. This idea sounds innocent enough and has some good
-qualities to it. But how did this feature end up turned on? It was
-disabled by default in Essex. As it should be. It was `decided to be
-turned on in Folsom `_.
-I cannot emphasize enough that:
-
-*Actions which delete things should not be enabled by default.*
-
-Disk space is cheap these days. Data recovery is not.
-
-Secondly, DAIR's shared ``/var/lib/nova/instances`` directory
-contributed to the problem. Since all compute nodes have access to this
-directory, all compute nodes periodically review the ``_base`` directory.
-If there is only one instance using an image, and the node that the
-instance is on is down for a few minutes, it won't be able to mark the
-image as still in use. Therefore, the image seems like it's not in use
-and is deleted. When the compute node comes back online, the instance
-hosted on that node is unable to start.
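-
-For operators who would rather keep unused base files around, the
-periodic cleanup can be disabled in ``nova.conf`` (a sketch; check the
-option name against your release's configuration reference):
-
-.. code-block:: ini
-
-   # /etc/nova/nova.conf
-   # Never delete unused images from /var/lib/nova/instances/_base
-   remove_unused_base_images = false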
-
-The Valentine's Day Compute Node Massacre
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Although the title of this story is much more dramatic than the actual
-event, I don't think, or hope, that I'll have the opportunity to use
-"Valentine's Day Massacre" again in a title.
-
-This past Valentine's Day, I received an alert that a compute node was
-no longer available in the cloud—meaning,
-
-.. code-block:: console
-
- $ openstack compute service list
-
-showed this particular node in a down state.
-
-I logged into the cloud controller and was able to both ``ping`` and SSH
-into the problematic compute node which seemed very odd. Usually if I
-receive this type of alert, the compute node has totally locked up and
-would be inaccessible.
-
-After a few minutes of troubleshooting, I saw the following details:
-
-- A user recently tried launching a CentOS instance on that node
-
-- This user was the only user on the node (new node)
-
-- The load shot up to 8 right before I received the alert
-
-- The bonded 10gb network device (bond0) was in a DOWN state
-
-- The 1gb NIC was still alive and active
-
-I looked at the status of both NICs in the bonded pair and saw that
-neither was able to communicate with the switch port. Seeing as how each
-NIC in the bond is connected to a separate switch, I thought that the
-chance of a switch port dying on each switch at the same time was quite
-improbable. I concluded that the 10gb dual port NIC had died and needed
-to be replaced. I created a ticket for the hardware support department at the
-data center where the node was hosted. I felt lucky that this was a new
-node and no one else was hosted on it yet.
-
-An hour later I received the same alert, but for another compute node.
-Crap. OK, now there's definitely a problem going on. Just like the
-original node, I was able to log in by SSH. The bond0 NIC was DOWN but
-the 1gb NIC was active.
-
-And the best part: the same user had just tried creating a CentOS
-instance. What?
-
-I was totally confused at this point, so I texted our network admin to
-see if he was available to help. He logged in to both switches and
-immediately saw the problem: the switches detected spanning tree packets
-coming from the two compute nodes and immediately shut the ports down to
-prevent spanning tree loops:
-
-.. code-block:: console
-
- Feb 15 01:40:18 SW-1 Stp: %SPANTREE-4-BLOCK_BPDUGUARD: Received BPDU packet on Port-Channel35 with BPDU guard enabled. Disabling interface. (source mac fa:16:3e:24:e7:22)
- Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35.
- Feb 15 01:40:18 SW-1 Mlag: %MLAG-4-INTF_INACTIVE_LOCAL: Local interface Port-Channel35 is link down. MLAG 35 is inactive.
- Feb 15 01:40:18 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-Channel35 (Server35), changed state to down
- Feb 15 01:40:19 SW-1 Stp: %SPANTREE-6-INTERFACE_DEL: Interface Port-Channel35 has been removed from instance MST0
- Feb 15 01:40:19 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet35 (Server35), changed state to down
-
-He re-enabled the switch ports and the two compute nodes immediately
-came back to life.
-
-Unfortunately, this story has an open ending... we're still looking into
-why the CentOS image was sending out spanning tree packets. Further,
-we're researching a proper way to keep this from happening.
-It's a bigger issue than one might think. While it's extremely important
-for switches to prevent spanning tree loops, it's very problematic to
-have an entire compute node be cut from the network when this happens.
-If a compute node is hosting 100 instances and one of them sends a
-spanning tree packet, that instance has effectively DDOS'd the other 99
-instances.
-
-This is an ongoing and hot topic in networking circles, especially with
-the rise of virtualization and virtual switches.
-
-Down the Rabbit Hole
-~~~~~~~~~~~~~~~~~~~~
-
-Users being able to retrieve console logs from running instances is a
-boon for support—many times they can figure out what's going on inside
-their instance and fix it without bothering you.
-Unfortunately, sometimes overzealous logging of failures can cause
-problems of its own.
-
-A report came in: VMs were launching slowly, or not at all. Cue the
-standard checks—nothing on the Nagios, but there was a spike in network
-towards the current master of our RabbitMQ cluster. Investigation
-started, but soon the other parts of the queue cluster were leaking
-memory like a sieve. Then the alert came in—the master Rabbit server
-went down and connections failed over to the slave.
-
-At that time, our control services were hosted by another team and we
-didn't have much debugging information to determine what was going on
-with the master, and we could not reboot it. That team noted that it
-failed without alert, but managed to reboot it. After an hour, the
-cluster had returned to its normal state and we went home for the day.
-
-Continuing the diagnosis the next morning was kick-started by another
-identical failure. We quickly got the message queue running again, and
-tried to work out why Rabbit was suffering from so much network traffic.
-Enabling debug logging on nova-api quickly brought understanding. A
-``tail -f /var/log/nova/nova-api.log`` was scrolling by faster
-than we'd ever seen before. CTRL+C on that and we could plainly see the
-contents of a system log spewing failures over and over again - a system
-log from one of our users' instances.
-
-After finding the instance ID we headed over to
-``/var/lib/nova/instances`` to find the ``console.log``:
-
-.. code-block:: console
-
- adm@cc12:/var/lib/nova/instances/instance-00000e05# wc -l console.log
- 92890453 console.log
- adm@cc12:/var/lib/nova/instances/instance-00000e05# ls -sh console.log
- 5.5G console.log
-
-Sure enough, the user had been periodically refreshing the console log
-page on the dashboard and the 5.5 GB file was traversing the Rabbit cluster
-to get to the dashboard.
-
-We called them and asked them to stop for a while, and they were happy
-to abandon the horribly broken VM. After that, we started monitoring the
-size of console logs.
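-
-A rough check for runaway console logs (a sketch, assuming the default
-instance path) is to scan for files above a size threshold:
-
-.. code-block:: console
-
-   # find /var/lib/nova/instances -name console.log -size +100M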
-
-To this day, `the issue `__
-doesn't have a permanent resolution, but we look forward to the discussion
-at the next summit.
-
-Havana Haunted by the Dead
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed
-this story.
-
-I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO
-repository and everything was running pretty well—except the EC2 API.
-
-I noticed that the API would suffer from a heavy load and respond slowly
-to particular EC2 requests such as ``RunInstances``.
-
-Output from ``/var/log/nova/nova-api.log`` on :term:`Havana`:
-
-.. code-block:: console
-
- 2014-01-10 09:11:45.072 129745 INFO nova.ec2.wsgi.server
- [req-84d16d16-3808-426b-b7af-3b90a11b83b0
- 0c6e7dba03c24c6a9bce299747499e8a 7052bd6714e7460caeb16242e68124f9]
- 117.103.103.29 "GET
- /services/Cloud?AWSAccessKeyId=[something]&Action=RunInstances&ClientToken=[something]&ImageId=ami-00000001&InstanceInitiatedShutdownBehavior=terminate...
- HTTP/1.1" status: 200 len: 1109 time: 138.5970151
-
-This request took over two minutes to process, but executed quickly on
-another co-existing Grizzly deployment using the same hardware and
-system configuration.
-
-Output from ``/var/log/nova/nova-api.log`` on :term:`Grizzly`:
-
-.. code-block:: console
-
- 2014-01-08 11:15:15.704 INFO nova.ec2.wsgi.server
- [req-ccac9790-3357-4aa8-84bd-cdaab1aa394e
- ebbd729575cb404081a45c9ada0849b7 8175953c209044358ab5e0ec19d52c37]
- 117.103.103.29 "GET
- /services/Cloud?AWSAccessKeyId=[something]&Action=RunInstances&ClientToken=[something]&ImageId=ami-00000007&InstanceInitiatedShutdownBehavior=terminate...
- HTTP/1.1" status: 200 len: 931 time: 3.9426181
-
-While monitoring system resources, I noticed a significant increase in
-memory consumption while the EC2 API processed this request. I thought
-it wasn't handling memory properly—possibly not releasing memory. If the
-API received several of these requests, memory consumption quickly grew
-until the system ran out of RAM and began using swap. Each node has 48
-GB of RAM and the ``nova-api`` process would consume all of it within
-minutes. Once this happened, the entire system would become unusably
-slow until I restarted the nova-api service.
-
-So, I found myself wondering what changed in the EC2 API on Havana that
-might cause this to happen. Was it a bug or a normal behavior that I now
-need to work around?
-
-After digging into the nova (OpenStack Compute) code, I noticed two
-areas in ``api/ec2/cloud.py`` potentially impacting my system:
-
-.. code-block:: python
-
- instances = self.compute_api.get_all(context,
- search_opts=search_opts,
- sort_dir='asc')
-
- sys_metas = self.compute_api.get_all_system_metadata(
- context, search_filts=[{'key': ['EC2_client_token']},
- {'value': [client_token]}])
-
-Since my database contained many records—over 1 million metadata records
-and over 300,000 instance records in "deleted" or "errored" states—each
-search took a long time. I decided to clean up the database by first
-archiving a copy for backup and then performing some deletions using the
-MySQL client. For example, I ran the following SQL command to remove
-rows of instances deleted for over a year:
-
-.. code-block:: console
-
- mysql> delete from nova.instances where deleted=1 and terminated_at < (NOW() - INTERVAL 1 YEAR);
-
-Performance increased greatly after deleting the old records and my new
-deployment continues to behave well.
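-
-Later releases also ship a supported alternative to hand-written SQL:
-``nova-manage db archive_deleted_rows`` moves soft-deleted rows into
-shadow tables in batches (a sketch; check the nova-manage reference for
-your release's exact flags):
-
-.. code-block:: console
-
-   # nova-manage db archive_deleted_rows --max_rows 1000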
diff --git a/doc/ops-guide/source/app-resources.rst b/doc/ops-guide/source/app-resources.rst
deleted file mode 100644
index 1c998987a7..0000000000
--- a/doc/ops-guide/source/app-resources.rst
+++ /dev/null
@@ -1,62 +0,0 @@
-=========
-Resources
-=========
-
-OpenStack
-~~~~~~~~~
-
-- `OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise
- Server `_
-
-- `OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS
- `_
-
-- `OpenStack Installation Tutorial for Ubuntu
- Server `_
-
-- `OpenStack Administrator Guide `_
-
-- `OpenStack Cloud Computing Cookbook (Packt
- Publishing) `_
-
-Cloud (General)
-~~~~~~~~~~~~~~~
-
-- `The NIST Definition of Cloud
- Computing `_
-
-Python
-~~~~~~
-
-- `Dive Into Python (Apress) `_
-
-Networking
-~~~~~~~~~~
-
-- `TCP/IP Illustrated, Volume 1: The Protocols, 2/E
- (Pearson) `_
-
-- `The TCP/IP Guide (No Starch
- Press) `_
-
-- `A tcpdump Tutorial and
- Primer `_
-
-Systems Administration
-~~~~~~~~~~~~~~~~~~~~~~
-
-- `UNIX and Linux Systems Administration Handbook (Prentice
- Hall) `_
-
-Virtualization
-~~~~~~~~~~~~~~
-
-- `The Book of Xen (No Starch
- Press) `_
-
-Configuration Management
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-- `Puppet Labs Documentation `_
-
-- `Pro Puppet (Apress) `_
diff --git a/doc/ops-guide/source/app-roadmaps.rst b/doc/ops-guide/source/app-roadmaps.rst
deleted file mode 100644
index 48d28e574f..0000000000
--- a/doc/ops-guide/source/app-roadmaps.rst
+++ /dev/null
@@ -1,435 +0,0 @@
-=====================
-Working with Roadmaps
-=====================
-
-The good news: OpenStack has unprecedented transparency when it comes to
-providing information about what's coming up. The bad news: each release
-moves very quickly. The purpose of this appendix is to highlight some of
-the useful pages to track, and take an educated guess at what is coming
-up in the next release and perhaps further afield.
-
-OpenStack follows a six-month release cycle, typically releasing in
-April/May and October/November each year. At the start of each cycle,
-the community gathers in a single location for a design summit. At the
-summit, the features for the coming releases are discussed, prioritized,
-and planned. The figure below shows an example release cycle, with dates
-showing milestone releases, code freeze, and string freeze dates, along
-with an example of when the summit occurs. Milestones are interim releases
-within the cycle that are available as packages for download and
-testing. Code freeze is putting a stop to adding new features to the
-release. String freeze is putting a stop to changing any strings within
-the source code.
-
-.. image:: figures/osog_ac01.png
- :width: 100%
-
-
-Information Available to You
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-There are several good sources of information available that you can use
-to track your OpenStack development desires.
-
-Release notes are maintained on the OpenStack wiki, and also shown here:
-
-.. list-table::
- :widths: 25 25 25 25
- :header-rows: 1
-
- * - Series
- - Status
- - Releases
- - Date
- * - Liberty
- - `Under Development
- `_
- - 2015.2
- - Oct, 2015
- * - Kilo
- - `Current stable release, security-supported
- `_
- - `2015.1 `_
- - Apr 30, 2015
- * - Juno
- - `Security-supported
- `_
- - `2014.2 `_
- - Oct 16, 2014
- * - Icehouse
- - `End-of-life
- `_
- - `2014.1 `_
- - Apr 17, 2014
- * -
- -
- - `2014.1.1 `_
- - Jun 9, 2014
- * -
- -
- - `2014.1.2 `_
- - Aug 8, 2014
- * -
- -
- - `2014.1.3 `_
- - Oct 2, 2014
- * - Havana
- - End-of-life
- - `2013.2 `_
- - Oct 17, 2013
- * -
- -
- - `2013.2.1 `_
- - Dec 16, 2013
- * -
- -
- - `2013.2.2 `_
- - Feb 13, 2014
- * -
- -
- - `2013.2.3 `_
- - Apr 3, 2014
- * -
- -
- - `2013.2.4 `_
- - Sep 22, 2014
- * - Grizzly
- - End-of-life
- - `2013.1 `_
- - Apr 4, 2013
- * -
- -
- - `2013.1.1 `_
- - May 9, 2013
- * -
- -
- - `2013.1.2 `_
- - Jun 6, 2013
- * -
- -
- - `2013.1.3 `_
- - Aug 8, 2013
- * -
- -
- - `2013.1.4 `_
- - Oct 17, 2013
- * -
- -
- - `2013.1.5 `_
- - Mar 20, 2015
- * - Folsom
- - End-of-life
- - `2012.2 `_
- - Sep 27, 2012
- * -
- -
- - `2012.2.1 `_
- - Nov 29, 2012
- * -
- -
- - `2012.2.2 `_
- - Dec 13, 2012
- * -
- -
- - `2012.2.3 `_
- - Jan 31, 2013
- * -
- -
- - `2012.2.4 `_
- - Apr 11, 2013
- * - Essex
- - End-of-life
- - `2012.1 `_
- - Apr 5, 2012
- * -
- -
- - `2012.1.1 `_
- - Jun 22, 2012
- * -
- -
- - `2012.1.2 `_
- - Aug 10, 2012
- * -
- -
- - `2012.1.3 `_
- - Oct 12, 2012
- * - Diablo
- - Deprecated
- - `2011.3 `_
- - Sep 22, 2011
- * -
- -
- - `2011.3.1 `_
- - Jan 19, 2012
- * - Cactus
- - Deprecated
- - `2011.2 `_
- - Apr 15, 2011
- * - Bexar
- - Deprecated
- - `2011.1 `_
- - Feb 3, 2011
- * - Austin
- - Deprecated
- - `2010.1 `_
- - Oct 21, 2010
-
-Here are some other resources:
-
-- `A breakdown of current features under development, with their target
- milestone `_
-
-- `A list of all features, including those not yet under
- development `_
-
-- `Rough-draft design discussions ("etherpads") from the last design
- summit `_
-
-- `List of individual code changes under
- review `_
-
-Influencing the Roadmap
-~~~~~~~~~~~~~~~~~~~~~~~
-
-OpenStack truly welcomes your ideas (and contributions) and highly
-values feedback from real-world users of the software. By learning a
-little about the process that drives feature development, you can
-participate and perhaps get the additions you desire.
-
-Feature requests typically start their life in Etherpad, a collaborative
-editing tool, which is used to take coordinating notes at a design
-summit session specific to the feature. This then leads to the creation
-of a blueprint on the Launchpad site for the particular project, which
-is used to describe the feature more formally. Blueprints are then
-approved by project team members, and development can begin.
-
-Therefore, the fastest way to get your feature request up for
-consideration is to create an Etherpad with your ideas and propose a
-session to the design summit. If the design summit has already passed,
-you may also create a blueprint directly. Read this `blog post about how
-to work with blueprints
-`_
-from the perspective of Victoria Martínez, a developer intern.
-
-The roadmap for the next release as it is developed can be seen at
-`Releases `_.
-
-To determine the potential features going into future releases, or to
-look at features implemented previously, take a look at the existing
-blueprints such as `OpenStack Compute (nova)
-Blueprints `_, `OpenStack
-Identity (keystone)
-Blueprints `_, and release
-notes.
-
-Aside from the direct-to-blueprint pathway, there is another very
-well-regarded mechanism to influence the development roadmap:
-the user survey. Found at `OpenStack User Survey
-`_,
-it allows you to provide details of your deployments and needs, anonymously by
-default. Each cycle, the user committee analyzes the results and produces a
-report, including providing specific information to the technical
-committee and project team leads.
-
-Aspects to Watch
-~~~~~~~~~~~~~~~~
-
-You will want to keep an eye on the areas improving within OpenStack. The
-best way to "watch" roadmaps for each project is to look at the
-blueprints that are being approved for work on milestone releases. You
-can also learn from PTL webinars that follow the OpenStack summits twice
-a year.
-
-Driver Quality Improvements
----------------------------
-
-A major quality push has occurred across drivers and plug-ins in Block
-Storage, Compute, and Networking. Particularly, developers of Compute
-and Networking drivers that require proprietary or hardware products are
-now required to provide an automated external testing system for use
-during the development process.
-
-Easier Upgrades
----------------
-
-One of the most requested features since OpenStack began (for components
-other than Object Storage, which tends to "just work"): easier upgrades.
-In all recent releases internal messaging communication is versioned,
-meaning services can theoretically drop back to backward-compatible
-behavior. This allows you to run later versions of some components,
-while keeping older versions of others.
-
-In addition, database migrations are now tested with the Turbo Hipster
-tool. This tool tests database migration performance on copies of
-real-world user databases.
-
-These changes have facilitated the first proper OpenStack upgrade guide,
-found in :doc:`ops-upgrades`, and will continue to improve in the next
-release.
-
-Deprecation of Nova Network
----------------------------
-
-With the introduction of the full software-defined networking stack
-provided by OpenStack Networking (neutron) in the Folsom release,
-development effort on the initial networking code that remains part of
-the Compute component has gradually lessened. While many still use
-``nova-network`` in production, there has been a long-term plan to
-remove the code in favor of the more flexible and full-featured
-OpenStack Networking.
-
-An attempt was made to deprecate ``nova-network`` during the Havana
-release, which was aborted due to the lack of equivalent functionality
-(such as the FlatDHCP multi-host high-availability mode mentioned in
-this guide), lack of a migration path between versions, insufficient
-testing, and the simplicity ``nova-network`` offers for the more
-straightforward use cases it traditionally supported. Though significant
-effort has been made to address these concerns, ``nova-network`` was not
-deprecated in the Juno release. In addition, to a limited degree,
-patches to ``nova-network`` have again begun to be accepted, such as
-adding a per-network settings feature and SR-IOV support in Juno.
-
-This leaves you with an important point of decision when designing your
-cloud. OpenStack Networking is robust enough to use with a small number
-of limitations (performance issues in some scenarios, only basic high
-availability of layer 3 systems) and provides many more features than
-``nova-network``. However, if you do not have the more complex use cases
-that can benefit from fuller software-defined networking capabilities,
-or are uncomfortable with the new concepts introduced, ``nova-network``
-may continue to be a viable option for the next 12 months.
-
-Similarly, if you have an existing cloud and are looking to upgrade from
-``nova-network`` to OpenStack Networking, you should have the option to
-delay the upgrade for this period of time. However, each release of
-OpenStack brings significant new innovation, and regardless of your use
-of networking methodology, it is likely best to begin planning for an
-upgrade within a reasonable timeframe of each release.
-
-As mentioned, there's currently no way to cleanly migrate from
-``nova-network`` to neutron. We recommend that you keep a future
-migration in mind, and consider what that process might involve, for
-when a proper migration path is released.
-
-Distributed Virtual Router
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-One of the long-time complaints surrounding OpenStack Networking was the
-lack of high availability for the layer 3 components. The Juno release
-introduced Distributed Virtual Router (DVR), which aims to solve this
-problem.
-
-Early indications are that it does do this well for a base set of
-scenarios, such as using the ML2 plug-in with Open vSwitch, one flat
-external network and VXLAN tenant networks. However, it does appear that
-there are problems with the use of VLANs, IPv6, Floating IPs, high
-north-south traffic scenarios and large numbers of compute nodes. It is
-expected these will improve significantly with the next release, but bug
-reports on specific issues are highly desirable.
-
-Replacement of Open vSwitch Plug-in with Modular Layer 2
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The Modular Layer 2 plug-in is a framework allowing OpenStack Networking
-to simultaneously utilize the variety of layer-2 networking technologies
-found in complex real-world data centers. It currently works with the
-existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is
-intended to replace and deprecate the monolithic plug-ins associated
-with those L2 agents.
-
-New API Versions
-~~~~~~~~~~~~~~~~
-
-The third version of the Compute API was broadly discussed and worked on
-during the Havana and Icehouse release cycles. Current discussions
-indicate that the V2 API will remain for many releases, and the next
-iteration of the API will be denoted v2.1 and have similar properties to
-the existing v2.0, rather than an entirely new v3 API. This is a great
-time to evaluate all APIs and provide comments while the next generation
-APIs are being defined. A new working group was formed specifically to
-`improve OpenStack APIs `_
-and create design guidelines, which you are welcome to join.
-
-OpenStack on OpenStack (TripleO)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This project continues to improve and you may consider using it for
-greenfield deployments, though according to the latest user survey
-results it has yet to see widespread uptake.
-
-Data processing service for OpenStack (sahara)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As a much-requested answer to big data problems, a dedicated team has
-been making solid progress on this Hadoop-as-a-Service project.
-
-Bare metal Deployment (ironic)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The bare-metal deployment service has been widely lauded, and
-development continues. The Juno release brought the OpenStack Bare metal
-driver into the Compute project, with the aim of deprecating the
-existing bare-metal driver in Kilo. If you are a current user of the
-bare metal driver, a particular blueprint to follow is `Deprecate the
-bare metal driver
-`_.
-
-Database as a Service (trove)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OpenStack community has had a database-as-a-service tool in
-development for some time, and we saw the first integrated release of it
-in Icehouse. From its release it was able to deploy database servers out
-of the box in a highly available way, initially supporting only MySQL.
-Juno introduced support for Mongo (including clustering), PostgreSQL and
-Couchbase, in addition to replication functionality for MySQL. In Kilo,
-more advanced clustering capability was delivered, in addition to better
-integration with other OpenStack components such as Networking.
-
-Message Service (zaqar)
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A service to provide queues of messages and notifications was released.
-
-DNS service (designate)
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A long-requested service to provide the ability to manipulate DNS
-entries associated with OpenStack resources has gathered a following.
-The designate project was also released.
-
-Scheduler Improvements
-~~~~~~~~~~~~~~~~~~~~~~
-
-Both Compute and Block Storage rely on schedulers to determine where to
-place virtual machines or volumes. In Havana, the Compute scheduler
-underwent significant improvement, while in Icehouse it was the
-scheduler in Block Storage that received a boost. Further down the
-track, an effort that started this cycle to create a holistic scheduler
-covering both may come to fruition. Some of the work that was
-done in Kilo can be found under the `Gantt
-project `_.
-
-Block Storage Improvements
---------------------------
-
-Block Storage is considered a stable project, with wide uptake and a
-long track record of quality drivers. The team has discussed many areas
-of work at the summits, including better error reporting, automated
-discovery, and thin provisioning features.
-
-Toward a Python SDK
--------------------
-
-Though many successfully use the various python-\*client code as an
-effective SDK for interacting with OpenStack, consistency between the
-projects and documentation availability wax and wane. To combat this,
-an `effort to improve the
-experience `_ has
-started. Cross-project development efforts in OpenStack have a checkered
-history, such as the `unified client
-project `_ having
-several false starts. However, the early signs for the SDK project are
-promising, and we expect to see results during the Juno cycle.
diff --git a/doc/ops-guide/source/app-usecases.rst b/doc/ops-guide/source/app-usecases.rst
deleted file mode 100644
index 595a8ea917..0000000000
--- a/doc/ops-guide/source/app-usecases.rst
+++ /dev/null
@@ -1,192 +0,0 @@
-=========
-Use Cases
-=========
-
-This appendix contains a small selection of use cases from the
-community, with more technical detail than usual. Further examples can
-be found on the `OpenStack website `_.
-
-NeCTAR
-~~~~~~
-
-Who uses it: researchers from the Australian publicly funded research
-sector. Use is across a wide variety of disciplines, with the purpose of
-instances ranging from running simple web servers to using hundreds of
-cores for high-throughput computing.
-
-Deployment
-----------
-
-Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight
-sites with approximately 4,000 cores per site.
-
-Each site runs a different configuration, as a resource cell in an
-OpenStack Compute cells setup. Some sites span multiple data centers,
-some use off-compute-node storage with a shared file system, and some
-use on-compute-node storage with a non-shared file system. Each site
-deploys the Image service with an Object Storage back end. A central
-Identity, dashboard, and Compute API service are used. A login to the
-dashboard triggers a SAML login with Shibboleth, which creates an
-account in the Identity service with an SQL back end. An Object Storage
-Global Cluster is used across several sites.
-
-Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core
-and approximately 40 GB of ephemeral storage per core.
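As a back-of-the-envelope illustration of what this per-core sizing implies for a whole node (a hypothetical sketch; only the 24 to 48 core range, 4 GB RAM per core, and 40 GB ephemeral storage per core come from the text above):

```python
# Hypothetical sketch: per-node totals implied by the per-core sizing
# described above. Nothing here is measured from the NeCTAR cloud; it
# simply multiplies out the stated per-core figures.

def node_totals(cores, ram_gb_per_core=4, eph_gb_per_core=40):
    """Return (RAM in GB, ephemeral storage in GB) for one compute node."""
    return cores * ram_gb_per_core, cores * eph_gb_per_core

for cores in (24, 48):  # the stated range of core counts per node
    ram, eph = node_totals(cores)
    print(f"{cores} cores -> at least {ram} GB RAM, ~{eph} GB ephemeral")
```

So a node at the top of the range carries at least 192 GB of RAM and close to 2 TB of ephemeral disk.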
-
-All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The
-OpenStack version in use is typically the current stable version, with 5
-to 10 percent back-ported code from trunk and modifications.
-
-Resources
----------
-
-- `OpenStack.org case
- study `_
-
-- `NeCTAR-RC GitHub `_
-
-- `NeCTAR website `_
-
-MIT CSAIL
-~~~~~~~~~
-
-Who uses it: researchers from the MIT Computer Science and Artificial
-Intelligence Lab.
-
-Deployment
-----------
-
-The CSAIL cloud is currently 64 physical nodes with a total of 768
-physical cores and 3,456 GB of RAM. Persistent data storage is largely
-outside the cloud on NFS, with cloud resources focused on compute
-resources. There are more than 130 users in more than 40 projects,
-typically running 2,000–2,500 vCPUs in 300 to 400 instances.
-
-We initially deployed on Ubuntu 12.04 with the Essex release of
-OpenStack using FlatDHCP multi-host networking.
-
-The software stack is still Ubuntu 12.04 LTS, but now with OpenStack
-Havana from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed
-using `FAI `_ and Puppet for configuration
-management. The FAI and Puppet combination is used lab-wide, not only
-for OpenStack. There is a single cloud controller node, which also acts
-as network controller, with the remainder of the server hardware
-dedicated to compute nodes.
-
-Host aggregates and instance-type extra specs are used to provide two
-different resource allocation ratios. The default resource allocation
-ratios we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use
-instance types that require non-oversubscribed hosts where ``cpu_ratio``
-and ``ram_ratio`` are both set to 1.0. Since we have hyper-threading
-enabled on our compute nodes, this provides one vCPU per CPU thread, or
-two vCPUs per physical core.
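The arithmetic behind those ratios can be sketched as follows (an illustrative sketch only: the host numbers are invented, and ``effective_capacity`` is a hypothetical helper, not a Nova API; only the 4:1/1.5:1 and 1.0 ratios come from the text above):

```python
# Hypothetical sketch of how overcommit ratios translate into
# schedulable capacity on one compute node. With hyper-threading
# enabled, the base unit is one vCPU per CPU thread; the overcommit
# ratios then multiply that base.

def effective_capacity(physical_cores, threads_per_core, ram_gb,
                       cpu_ratio, ram_ratio):
    """Return (schedulable vCPUs, schedulable RAM in GB)."""
    base_vcpus = physical_cores * threads_per_core
    return base_vcpus * cpu_ratio, ram_gb * ram_ratio

# Default aggregate (4:1 CPU, 1.5:1 RAM) on a 12-core, 2-thread, 96 GB host:
print(effective_capacity(12, 2, 96, cpu_ratio=4, ram_ratio=1.5))  # (96, 144.0)
# Non-oversubscribed aggregate: both ratios 1.0, one vCPU per thread:
print(effective_capacity(12, 2, 96, cpu_ratio=1, ram_ratio=1))    # (24, 96)
```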
-
-With our upgrade to Grizzly in August 2013, we moved to OpenStack
-Networking, neutron (quantum at the time). Compute nodes have two
-gigabit network interfaces and a separate management card for IPMI
-management. One network interface is used for node-to-node
-communications. The other is used as a trunk port for OpenStack managed
-VLANs. The controller node uses two bonded 10GbE network interfaces for
-its public IP communications. Big pipes are used here because images are
-served over this port, and it is also used to connect to iSCSI storage,
-back-ending the image storage and database. The controller node also has
-a gigabit interface that is used in trunk mode for OpenStack managed
-VLAN traffic. This port handles traffic to the dhcp-agent and
-metadata-proxy.
-
-We approximate the older ``nova-network`` multi-host HA setup by using
-"provider VLAN networks" that connect instances directly to existing
-publicly addressable networks and use existing physical routers as their
-default gateway. This means that if our network controller goes down,
-running instances still have their network available, and no single
-Linux host becomes a traffic bottleneck. We are able to do this because
-we have a sufficient supply of IPv4 addresses to cover all of our
-instances and thus don't need NAT and don't use floating IP addresses.
-We provide a single generic public network to all projects and
-additional existing VLANs on a project-by-project basis as needed.
-Individual projects are also allowed to create their own private GRE
-based networks.
-
-Resources
----------
-
-- `CSAIL homepage `_
-
-DAIR
-~~~~
-
-Who uses it: DAIR is an integrated virtual environment that leverages
-the CANARIE network to develop and test new information communication
-technology (ICT) and other digital technologies. It combines such
-digital infrastructure as advanced networking and cloud computing and
-storage to create an environment for developing and testing innovative
-ICT applications, protocols, and services; performing at-scale
-experimentation for deployment; and facilitating a faster time to
-market.
-
-Deployment
-----------
-
-DAIR is hosted at two different data centers across Canada: one in
-Alberta and the other in Quebec. It consists of a cloud controller at
-each location, although one is designated the "master" controller that
-is in charge of central authentication and quotas. This is done through
-custom scripts and light modifications to OpenStack. DAIR is currently
-running Havana.
-
-For Object Storage, each region has a swift environment.
-
-A NetApp appliance is used in each region for both block storage and
-instance storage. There are future plans to move the instances off the
-NetApp appliance and onto a distributed file system such as :term:`Ceph` or
-GlusterFS.
-
-VlanManager is used extensively for network management. All servers have
-two bonded 10GbE NICs that are connected to two redundant switches. DAIR
-is set up to use single-node networking where the cloud controller is
-the gateway for all instances on all compute nodes. Internal OpenStack
-traffic (for example, storage traffic) does not go through the cloud
-controller.
-
-Resources
----------
-
-- `DAIR homepage `__
-
-CERN
-~~~~
-
-Who uses it: researchers at CERN (European Organization for Nuclear
-Research) conducting high-energy physics research.
-
-Deployment
-----------
-
-The environment is largely based on Scientific Linux 6, which is Red Hat
-compatible. We use KVM as our primary hypervisor, although tests are
-ongoing with Hyper-V on Windows Server 2008.
-
-We use the Puppet Labs OpenStack modules to configure Compute, Image
-service, Identity, and dashboard. Puppet is used widely for instance
-configuration, and Foreman is used as a GUI for reporting and instance
-provisioning.
-
-Users and groups are managed through Active Directory and imported into
-the Identity service using LDAP. CLIs are available for nova and
-Euca2ools to do this.
-
-There are three clouds currently running at CERN, totaling about 4,700
-compute nodes, with approximately 120,000 cores. The CERN IT cloud aims
-to expand to 300,000 cores by 2015.
-
-Resources
----------
-
-- `OpenStack in Production: A tale of 3 OpenStack
- Clouds `_
-
-- `Review of CERN Data Centre
- Infrastructure `_
-
-- `CERN Cloud Infrastructure User
- Guide `_
diff --git a/doc/ops-guide/source/appendix.rst b/doc/ops-guide/source/appendix.rst
deleted file mode 100644
index dc27aa0f51..0000000000
--- a/doc/ops-guide/source/appendix.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-Appendix
-~~~~~~~~
-
-.. toctree::
- :maxdepth: 1
-
- app-usecases.rst
- app-crypt.rst
- app-roadmaps.rst
- app-resources.rst
- common/app-support.rst
- common/glossary.rst
diff --git a/doc/ops-guide/source/common b/doc/ops-guide/source/common
deleted file mode 120000
index dc879abe93..0000000000
--- a/doc/ops-guide/source/common
+++ /dev/null
@@ -1 +0,0 @@
-../../common
\ No newline at end of file
diff --git a/doc/ops-guide/source/conf.py b/doc/ops-guide/source/conf.py
deleted file mode 100644
index afc3df57f1..0000000000
--- a/doc/ops-guide/source/conf.py
+++ /dev/null
@@ -1,297 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This file is execfile()d with the current directory set to its
-# containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import os
-# import sys
-
-import openstackdocstheme
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-# sys.path.insert(0, os.path.abspath('.'))
-
-# -- General configuration ------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-# needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = ['openstackdocstheme']
-
-# Add any paths that contain templates here, relative to this directory.
-# templates_path = ['_templates']
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-# source_encoding = 'utf-8-sig'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-repository_name = "openstack/openstack-manuals"
-bug_project = 'openstack-manuals'
-project = u'Operations Guide'
-bug_tag = u'ops-guide'
-copyright = u'2016-2017, OpenStack contributors'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = '15.0'
-# The full version, including alpha/beta/rc tags.
-release = '15.0.0'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-# language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-# today = ''
-# Else, today_fmt is used as the format for a strftime call.
-# today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = ['common/cli*', 'common/nova*',
- 'common/appendix.rst',
- 'common/get-started*', 'common/dashboard*']
-
-# The reST default role (used for this markup: `text`) to use for all
-# documents.
-# default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-# add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-# add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-# show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-# modindex_common_prefix = []
-
-# If true, keep warnings as "system message" paragraphs in the built documents.
-# keep_warnings = False
-
-
-# -- Options for HTML output ----------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-html_theme = 'openstackdocs'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-# html_theme_options = {}
-
-# Add any paths that contain custom themes here, relative to this directory.
-# html_theme_path = [openstackdocstheme.get_html_theme_path()]
-
-# The name for this set of Sphinx documents. If None, it defaults to
-# " v documentation".
-# html_title = None
-
-# A shorter title for the navigation bar. Default is the same as html_title.
-# html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-# html_logo = None
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-# html_favicon = None
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = []
-
-# Add any extra paths that contain custom files (such as robots.txt or
-# .htaccess) here, relative to this directory. These files are copied
-# directly to the root of the documentation.
-# html_extra_path = []
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-# So that we can enable "log-a-bug" links from each output HTML page, this
-# variable must be set to a format that includes year, month, day, hours and
-# minutes.
-html_last_updated_fmt = '%Y-%m-%d %H:%M'
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-# html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-# html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-# html_additional_pages = {}
-
-# If false, no module index is generated.
-# html_domain_indices = True
-
-# If false, no index is generated.
-html_use_index = False
-
-# If true, the index is split into individual pages for each letter.
-# html_split_index = False
-
-# If true, links to the reST sources are added to the pages.
-html_show_sourcelink = False
-
-# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
-# html_show_sphinx = True
-
-# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
-# html_show_copyright = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a tag referring to it. The value of this option must be the
-# base URL from which the finished HTML is served.
-# html_use_opensearch = ''
-
-# This is the file name suffix for HTML files (e.g. ".xhtml").
-# html_file_suffix = None
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'ops-guide'
-
-# If true, publish source files
-html_copy_source = False
-
-# -- Options for LaTeX output ---------------------------------------------
-pdf_theme_path = openstackdocstheme.get_pdf_theme_path()
-openstack_logo = openstackdocstheme.get_openstack_logo_path()
-
-latex_custom_template = r"""
-\newcommand{\openstacklogo}{%s}
-\usepackage{%s}
-""" % (openstack_logo, pdf_theme_path)
-
-latex_engine = 'xelatex'
-
-latex_elements = {
- # The paper size ('letterpaper' or 'a4paper').
- 'papersize': 'a4paper',
-
- # The font size ('10pt', '11pt' or '12pt').
- 'pointsize': '11pt',
-
- #Default figure align
- 'figure_align': 'H',
-
- # Not to generate blank page after chapter
- 'classoptions': ',openany',
-
- # Additional stuff for the LaTeX preamble.
- 'preamble': latex_custom_template,
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-# author, documentclass [howto, manual, or own class]).
-latex_documents = [
- ('index', 'OpsGuide.tex', u'Operations Guide',
- u'OpenStack contributors', 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-# latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-# latex_use_parts = False
-
-# If true, show page references after internal links.
-# latex_show_pagerefs = False
-
-# If true, show URL addresses after external links.
-# latex_show_urls = False
-
-# Documents to append as an appendix to all manuals.
-# latex_appendices = []
-
-# If false, no module index is generated.
-# latex_domain_indices = True
-
-
-# -- Options for manual page output ---------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [
- ('index', 'opsguide', u'Operations Guide',
- [u'OpenStack contributors'], 1)
-]
-
-# If true, show URL addresses after external links.
-# man_show_urls = False
-
-
-# -- Options for Texinfo output -------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-# dir menu entry, description, category)
-texinfo_documents = [
- ('index', 'OpsGuide', u'Operations Guide',
- u'OpenStack contributors', 'OpsGuide',
- 'This book provides information about designing and operating '
- 'OpenStack clouds.', 'Miscellaneous'),
-]
-
-# Documents to append as an appendix to all manuals.
-# texinfo_appendices = []
-
-# If false, no module index is generated.
-# texinfo_domain_indices = True
-
-# How to display URL addresses: 'footnote', 'no', or 'inline'.
-# texinfo_show_urls = 'footnote'
-
-# If true, do not generate a @detailmenu in the "Top" node's menu.
-# texinfo_no_detailmenu = False
-
-# -- Options for Internationalization output ------------------------------
-locale_dirs = ['locale/']
diff --git a/doc/ops-guide/source/figures/Check_mark_23x20_02.png b/doc/ops-guide/source/figures/Check_mark_23x20_02.png
deleted file mode 100644
index e6e5d5a72b..0000000000
Binary files a/doc/ops-guide/source/figures/Check_mark_23x20_02.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/Check_mark_23x20_02.svg b/doc/ops-guide/source/figures/Check_mark_23x20_02.svg
deleted file mode 100644
index 3051a2f937..0000000000
--- a/doc/ops-guide/source/figures/Check_mark_23x20_02.svg
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-
diff --git a/doc/ops-guide/source/figures/create_project.png b/doc/ops-guide/source/figures/create_project.png
deleted file mode 100644
index 8906bcac35..0000000000
Binary files a/doc/ops-guide/source/figures/create_project.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/edit_project_member.png b/doc/ops-guide/source/figures/edit_project_member.png
deleted file mode 100644
index 84d7408bac..0000000000
Binary files a/doc/ops-guide/source/figures/edit_project_member.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/network_packet_ping.svg b/doc/ops-guide/source/figures/network_packet_ping.svg
deleted file mode 100644
index f5dda8e250..0000000000
--- a/doc/ops-guide/source/figures/network_packet_ping.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-
-
-
diff --git a/doc/ops-guide/source/figures/neutron_packet_ping.svg b/doc/ops-guide/source/figures/neutron_packet_ping.svg
deleted file mode 100644
index 898794fffb..0000000000
--- a/doc/ops-guide/source/figures/neutron_packet_ping.svg
+++ /dev/null
@@ -1,1734 +0,0 @@
-
-
diff --git a/doc/ops-guide/source/figures/os-ref-arch.svg b/doc/ops-guide/source/figures/os-ref-arch.svg
deleted file mode 100644
index 7fea7f198c..0000000000
--- a/doc/ops-guide/source/figures/os-ref-arch.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-
-
-
diff --git a/doc/ops-guide/source/figures/os_physical_network.svg b/doc/ops-guide/source/figures/os_physical_network.svg
deleted file mode 100644
index d4d83fcb60..0000000000
--- a/doc/ops-guide/source/figures/os_physical_network.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-
-
-
diff --git a/doc/ops-guide/source/figures/osog_00in01.png b/doc/ops-guide/source/figures/osog_00in01.png
deleted file mode 100644
index 1a7c150ccf..0000000000
Binary files a/doc/ops-guide/source/figures/osog_00in01.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/osog_0201.png b/doc/ops-guide/source/figures/osog_0201.png
deleted file mode 100644
index 794c327e40..0000000000
Binary files a/doc/ops-guide/source/figures/osog_0201.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/osog_1201.png b/doc/ops-guide/source/figures/osog_1201.png
deleted file mode 100644
index d0e3a3fd4e..0000000000
Binary files a/doc/ops-guide/source/figures/osog_1201.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/osog_1202.png b/doc/ops-guide/source/figures/osog_1202.png
deleted file mode 100644
index ce1e475e52..0000000000
Binary files a/doc/ops-guide/source/figures/osog_1202.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/osog_ac01.png b/doc/ops-guide/source/figures/osog_ac01.png
deleted file mode 100644
index 6caddef4a2..0000000000
Binary files a/doc/ops-guide/source/figures/osog_ac01.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/provision-an-instance.graffle b/doc/ops-guide/source/figures/provision-an-instance.graffle
deleted file mode 100644
index 62ea26a84d..0000000000
Binary files a/doc/ops-guide/source/figures/provision-an-instance.graffle and /dev/null differ
diff --git a/doc/ops-guide/source/figures/provision-an-instance.png b/doc/ops-guide/source/figures/provision-an-instance.png
deleted file mode 100644
index b5370526ab..0000000000
Binary files a/doc/ops-guide/source/figures/provision-an-instance.png and /dev/null differ
diff --git a/doc/ops-guide/source/figures/provision-an-instance.svg b/doc/ops-guide/source/figures/provision-an-instance.svg
deleted file mode 100644
index 47db5aa8a8..0000000000
--- a/doc/ops-guide/source/figures/provision-an-instance.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-
-
-
diff --git a/doc/ops-guide/source/figures/releasecyclegrizzlydiagram.png b/doc/ops-guide/source/figures/releasecyclegrizzlydiagram.png
deleted file mode 100644
index 26ae2250cf..0000000000
Binary files a/doc/ops-guide/source/figures/releasecyclegrizzlydiagram.png and /dev/null differ
diff --git a/doc/ops-guide/source/index.rst b/doc/ops-guide/source/index.rst
deleted file mode 100644
index dd5b3dba83..0000000000
--- a/doc/ops-guide/source/index.rst
+++ /dev/null
@@ -1,55 +0,0 @@
-==========================
-OpenStack Operations Guide
-==========================
-
-Abstract
-~~~~~~~~
-
-This guide provides information about operating OpenStack clouds.
-
-We recommend that you turn to the `Installation Tutorials and Guides
-`_,
-which contains a step-by-step guide on how to manually install the
-OpenStack packages and dependencies on your cloud.
-
-While it is important for an operator to be familiar with the steps
-involved in deploying OpenStack, we also strongly encourage you to
-evaluate `OpenStack deployment tools
-`_
-and configuration-management tools, such as :term:`Puppet` or
-:term:`Chef`, which can help automate this deployment process.
-
-In this guide, we assume that you have successfully deployed an
-OpenStack cloud and are able to perform basic operations
-such as adding images, booting instances, and attaching volumes.
-
-As your focus turns to stable operations, we recommend that you do skim
-this guide to get a sense of the content. Some of this content is useful
-to read in advance so that you can put best practices into effect to
-simplify your life in the long run. Other content is more useful as a
-reference that you might turn to when an unexpected event occurs (such
-as a power failure), or to troubleshoot a particular problem.
-
-Contents
-~~~~~~~~
-
-.. toctree::
- :maxdepth: 2
-
- acknowledgements.rst
- preface.rst
- common/conventions.rst
- ops-deployment-factors.rst
- ops-planning.rst
- ops-capacity-planning-scaling.rst
- ops-lay-of-the-land.rst
- ops-projects-users.rst
- ops-user-facing-operations.rst
- ops-maintenance.rst
- ops-network-troubleshooting.rst
- ops-logging-monitoring.rst
- ops-backup-recovery.rst
- ops-customize.rst
- ops-advanced-configuration.rst
- ops-upgrades.rst
- appendix.rst
diff --git a/doc/ops-guide/source/locale/ja/LC_MESSAGES/ops-guide.po b/doc/ops-guide/source/locale/ja/LC_MESSAGES/ops-guide.po
deleted file mode 100644
index 7bde069712..0000000000
--- a/doc/ops-guide/source/locale/ja/LC_MESSAGES/ops-guide.po
+++ /dev/null
@@ -1,13128 +0,0 @@
-# Translators:
-# Akihiro Motoki , 2013
-# Akira Yoshiyama , 2013
-# Andreas Jaeger , 2014-2015
-# Ying Chun Guo , 2013
-# doki701 , 2013
-# yfukuda , 2014
-# Masanori Itoh , 2013
-# Masanori Itoh , 2013
-# Masayuki Igawa , 2013
-# Masayuki Igawa , 2013
-# myamamot , 2014
-# *はたらくpokotan* <>, 2013
-# Tomoaki Nakajima <>, 2013
-# Yuki Shira , 2013
-# Shogo Sato , 2014
-# tsutomu.takekawa , 2013
-# Masanori Itoh , 2013
-# Toru Makabe , 2013
-# doki701 , 2013
-# Tom Fifield , 2014
-# Tomoyuki KATO , 2012-2015
-# Toru Makabe , 2013
-# tsutomu.takekawa , 2013
-# Ying Chun Guo , 2013
-# ykatabam , 2014
-# Yuki Shira , 2013
-#
-#
-# Akihiro Motoki , 2016. #zanata
-# KATO Tomoyuki , 2016. #zanata
-# Shu Muto , 2016. #zanata
-# KATO Tomoyuki , 2017. #zanata
-msgid ""
-msgstr ""
-"Project-Id-Version: Operations Guide 15.0\n"
-"Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2017-06-12 16:25+0000\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=UTF-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"PO-Revision-Date: 2017-03-22 07:06+0000\n"
-"Last-Translator: KATO Tomoyuki \n"
-"Language: ja\n"
-"Plural-Forms: nplurals=1; plural=0;\n"
-"X-Generator: Zanata 3.9.6\n"
-"Language-Team: Japanese\n"
-
-msgid "\"Hey Alvaro, can you run a VLAN on top of a VLAN?\""
-msgstr "「Alvaro、VLAN 上に VLAN って作れるのかい?」"
-
-msgid "\"If you did, you'd add an extra 4 bytes to the packet…\""
-msgstr "「もしやったら、パケットに余計に4バイト追加になるよ・・」"
-
-msgid "\"The Issue\""
-msgstr "「あの問題」"
-
-msgid "**Back matter:**"
-msgstr "**後付:**"
-
-msgid "**Block Storage service (cinder)**"
-msgstr "**Block Storage サービス (cinder)**"
-
-msgid "**Column**"
-msgstr "**カラム**"
-
-msgid "**Compute nodes**"
-msgstr "**コンピュートノード**"
-
-msgid "**Compute service (nova)**"
-msgstr "**Compute サービス (nova)**"
-
-msgid "**Controller node**"
-msgstr "**コントローラーノード**"
-
-msgid "**Create a port that can be reused**"
-msgstr "**再利用できるポートの作成**"
-
-msgid "**Description**"
-msgstr "**説明**"
-
-msgid "**Detach a port from an instance**"
-msgstr "**インスタンスからポートの切断**"
-
-msgid "**Ensuring Snapshots of Linux Guests Are Consistent**"
-msgstr "**Linux ゲストのスナップショットの整合性の保証**"
-
-msgid "**Ensuring Snapshots of Windows Guests Are Consistent**"
-msgstr "**Windows ゲストのスナップショットの整合性の保証**"
-
-msgid "**Example of Complexity**"
-msgstr "**複雑さの例**"
-
-msgid "**Example**"
-msgstr "**例**"
-
-msgid "**Identity service (keystone)**"
-msgstr "**Identity サービス (keystone)**"
-
-msgid "**Image service (glance)**"
-msgstr "**Image サービス (glance)**"
-
-msgid "**Networking service (neutron)**"
-msgstr "**Networking サービス (neutron)**"
-
-msgid "**Overhead**"
-msgstr "**オーバーヘッド**"
-
-msgid "**Provision an instance**"
-msgstr "**インスタンスの配備**"
-
-msgid "**Setting with openstack command**"
-msgstr "**openstack コマンドを用いたセットアップ方法**"
-
-msgid "**Shared services**"
-msgstr "**共有サービス**"
-
-msgid "**Storage nodes**"
-msgstr "**ストレージノード**"
-
-msgid ""
-"**To capture packets from the patch-tun internal interface on integration "
-"bridge, br-int:**"
-msgstr ""
-"統合ブリッジ ``br-int`` の内部インターフェース ``patch-tun`` からのパケットを"
-"キャプチャーする方法。"
-
-msgid ""
-"**To create the middleware and plug it in through Paste configuration:**"
-msgstr "**ミドルウェアを作成して Paste の環境設定を通して組み込むためには:**"
-
-msgid "**To create the scheduler and plug it in through configuration**"
-msgstr "**スケジューラーを作成して、設定を通して組み込む方法**"
-
-msgid ""
-"**To discover which internal VLAN tag is in use for a GRE tunnel by using "
-"the ovs-ofctl command**"
-msgstr ""
-"**ovs-ofctlコマンドを使用することにより、GRE トンネル向けに使用されている内"
-"部 VLAN タグを検索します。**"
-
-msgid ""
-"**To discover which internal VLAN tag is in use for a given external VLAN by "
-"using the ovs-ofctl command**"
-msgstr ""
-"**ovs-ofctl コマンドを使用することにより、外部 VLAN 向けに使用されている内部 "
-"VLAN タグを検索します。**"
-
-msgid "**To perform a rollback**"
-msgstr "**ロールバック方法**"
-
-msgid "**To update Block Storage quotas for a tenant (project)**"
-msgstr "**プロジェクトの Block Storage クォータの更新方法**"
-
-msgid "**To update quota values for a tenant (project)**"
-msgstr "**テナント (プロジェクト) のクォータ値の更新**"
-
-msgid "**To view Block Storage quotas for a tenant (project)**"
-msgstr "**プロジェクトの Block Storage クォータの表示方法**"
-
-msgid "**To view and update default Block Storage quota values**"
-msgstr "**Block Storage のデフォルトのクォータ値の表示と更新**"
-
-msgid "**To view and update default quota values**"
-msgstr "**デフォルトのクォータ値の表示と更新**"
-
-msgid "**To view quota values for a tenant (project)**"
-msgstr "**テナント (プロジェクト) のクォータ値の表示**"
-
-msgid "*Actions which delete things should not be enabled by default.*"
-msgstr "*何かを削除する操作はデフォルトで有効化されるべきではない。*"
-
-msgid "/var/lib/nova/instances"
-msgstr "/var/lib/nova/instances"
-
-msgid "0 GB"
-msgstr "0 GB"
-
-msgid "1"
-msgstr "1"
-
-msgid "1 GB"
-msgstr "1 GB"
-
-msgid "10"
-msgstr "10"
-
-msgid "10 GB"
-msgstr "10 GB"
-
-msgid "100"
-msgstr "100"
-
-msgid "15"
-msgstr "15"
-
-msgid "16 GB"
-msgstr "16 GB"
-
-msgid "160 GB"
-msgstr "160 GB"
-
-msgid "2"
-msgstr "2"
-
-msgid "2 GB"
-msgstr "2 GB"
-
-msgid "20"
-msgstr "20"
-
-msgid "20 GB"
-msgstr "20 GB"
-
-msgid "200 physical cores."
-msgstr "物理コア 200 個"
-
-msgid "2015.2"
-msgstr "2015.2"
-
-msgid "21"
-msgstr "21"
-
-msgid "22"
-msgstr "22"
-
-msgid "3"
-msgstr "3"
-
-msgid "4"
-msgstr "4"
-
-msgid "4 GB"
-msgstr "4 GB"
-
-msgid "40 GB"
-msgstr "40 GB"
-
-msgid "5"
-msgstr "5"
-
-msgid "512 MB"
-msgstr "512 MB"
-
-msgid "8"
-msgstr "8"
-
-msgid "8 GB"
-msgstr "8 GB"
-
-msgid "80 GB"
-msgstr "80 GB"
-
-msgid "98"
-msgstr "98"
-
-msgid "99"
-msgstr "99"
-
-msgid ":command:`cinder-manage`"
-msgstr ":command:`cinder-manage`"
-
-msgid ":command:`euca-describe-availability-zones verbose`"
-msgstr ":command:`euca-describe-availability-zones verbose`"
-
-msgid ":command:`glance-manage`"
-msgstr ":command:`glance-manage`"
-
-msgid ":command:`keystone-manage`"
-msgstr ":command:`keystone-manage`"
-
-msgid ":command:`nova-manage`"
-msgstr ":command:`nova-manage`"
-
-msgid ":command:`openstack compute service list`"
-msgstr ":command:`openstack compute service list`"
-
-msgid ":command:`openstack host list` (os-hosts)"
-msgstr ":command:`openstack host list` (os-hosts)"
-
-msgid ":doc:`app-crypt`"
-msgstr ":doc:`app-crypt`"
-
-msgid ":doc:`app-resources`"
-msgstr ":doc:`app-resources`"
-
-msgid ":doc:`app-roadmaps`"
-msgstr ":doc:`app-roadmaps`"
-
-msgid ":doc:`app-usecases`"
-msgstr ":doc:`app-usecases`"
-
-msgid ":doc:`common/glossary`"
-msgstr ":doc:`common/glossary`"
-
-msgid ":doc:`ops-advanced-configuration`"
-msgstr ":doc:`ops-advanced-configuration`"
-
-msgid ":doc:`ops-backup-recovery`"
-msgstr ":doc:`ops-backup-recovery`"
-
-msgid ":doc:`ops-customize`"
-msgstr ":doc:`ops-customize`"
-
-msgid ":doc:`ops-lay-of-the-land`"
-msgstr ":doc:`ops-lay-of-the-land`"
-
-msgid ":doc:`ops-logging-monitoring`"
-msgstr ":doc:`ops-logging-monitoring`"
-
-msgid ":doc:`ops-maintenance`"
-msgstr ":doc:`ops-maintenance`"
-
-msgid ":doc:`ops-network-troubleshooting`"
-msgstr ":doc:`ops-network-troubleshooting`"
-
-msgid ":doc:`ops-projects-users`"
-msgstr ":doc:`ops-projects-users`"
-
-msgid ":doc:`ops-upgrades`"
-msgstr ":doc:`ops-upgrades`"
-
-msgid ":doc:`ops-user-facing-operations`"
-msgstr ":doc:`ops-user-facing-operations`"
-
-msgid ""
-":ref:`table_segregation_methods` provides a comparison view of each "
-"segregation method currently provided by OpenStack Compute."
-msgstr ""
-":ref:`table_segregation_methods` では、OpenStack Compute が現在提供している各"
-"分割メソッドの比較ビューを提供しています。"
-
-msgid ""
-":term:`Availability zones ` and host aggregates, which "
-"merely divide a single Compute deployment."
-msgstr ""
-":term:`アベイラビリティゾーン ` およびホストアグリゲート。"
-"コンピュートのデプロイメントの分割のみを行います。"
-
-msgid ""
-msgstr "<スナップショットされたインスタンスの UUID>"
-
-msgid ""
-msgstr "<スナップショットされたインスタンスの元イメージの UUID>"
-
-msgid ""
-"A DHCP problem might be caused by a misbehaving dnsmasq process. First, "
-"debug by checking logs and then restart the dnsmasq processes only for that "
-"project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. "
-"Once you have restarted targeted dnsmasq processes, the simplest way to rule "
-"out dnsmasq causes is to kill all of the dnsmasq processes on the machine "
-"and restart ``nova-network``. As a last resort, do this as root:"
-msgstr ""
-"DHCP の問題は dnsmasq の不具合が原因となりがちです。まず、ログを確認し、その"
-"後該当するプロジェクト(テナント)の dnsmasq プロセスを再起動してください。 "
-"VLAN モードにおいては、 dnsmasq プロセスはテナントごとに存在します。すでに該"
-"当の dnsmasq プロセスを再起動しているのであれば、もっともシンプルな解決法は、"
-"マシン上の全ての dnsmasq プロセスをkillし、 ``nova-network`` を再起動すること"
-"です。最終手段として、root で以下を実行してください。"
-
-msgid ""
-"A NetApp appliance is used in each region for both block storage and "
-"instance storage. There are future plans to move the instances off the "
-"NetApp appliance and onto a distributed file system such as :term:`Ceph` or "
-"GlusterFS."
-msgstr ""
-"各リージョンでは、ブロックストレージとインスタンスストレージの両方でNetApp ア"
-"プライアンスが使用されています。これらのインスタンスを NetApp アプライアンス"
-"から :term:`Ceph` または GlusterFS といった分散ファイルシステム上に移動する計"
-"画があります。"
-
-msgid ""
-"A basic type of alert monitoring is to simply check and see whether a "
-"required process is running. For example, ensure that the ``nova-api`` "
-"service is running on the cloud controller:"
-msgstr ""
-"基本的なアラーム監視は、単に要求されたプロセスが稼働しているかどうかを確認す"
-"ることです。 例えば、 ``nova-api`` サービスがクラウドコントローラーで稼働して"
-"いるかどうかを確認します。"
-
-msgid ""
-"A boolean to indicate whether the volume should be deleted when the instance "
-"is terminated. True can be specified as ``True`` or ``1``. False can be "
-"specified as ``False`` or ``0``."
-msgstr ""
-"インスタンスが終了したときに、ボリュームが削除されるかどうかを指示する論理値"
-"です。真は ``True`` または ``1`` として指定できます。偽は ``False`` または "
-"``0`` として指定できます。"
-
-msgid ""
-"A brief overview of how to send REST API requests to endpoints for OpenStack "
-"services"
-msgstr ""
-"OpenStack サービスのエンドポイントに REST API リクエストをどのように送信する"
-"かについての概要が説明されています"
-
-msgid ""
-"A cloud with multiple sites where you can schedule VMs \"anywhere\" or on a "
-"particular site."
-msgstr ""
-"複数サイトで構成されるクラウドで、仮想マシンを「任意のサイト」または特定のサ"
-"イトにスケジューリングしたい場合"
-
-msgid ""
-"A cloud with multiple sites, where you schedule VMs to a particular site and "
-"you want a shared infrastructure."
-msgstr ""
-"複数サイトで構成されるクラウドで、仮想マシンを特定のサイトに対してスケジュー"
-"リングでき、かつ共有インフラを利用したい場合"
-
-msgid ""
-"A collection of foreign keys are available to find relations to the "
-"instance. The most useful of these — ``user_id`` and ``project_id`` are the "
-"UUIDs of the user who launched the instance and the project it was launched "
-"in."
-msgstr ""
-"外部キーはインスタンスの関連を見つけるために利用可能です。これらの中で最も有"
-"用なものは、 ``user_id`` および ``project_id`` です。これらは、インスタンスを"
-"起動したユーザー、およびそれが起動されたプロジェクトの UUID です。"
-
-msgid ""
-"A common new-user issue with OpenStack is failing to set an appropriate "
-"security group when launching an instance. As a result, the user is unable "
-"to contact the instance on the network."
-msgstr ""
-"OpenStack の新しいユーザーがよく経験する問題が、インスタンスを起動するときに"
-"適切なセキュリティグループを設定できず、その結果、ネットワーク経由でインスタ"
-"ンスにアクセスできないというものです。"
-
-msgid ""
-"A common scenario is to take down production management services in "
-"preparation for an upgrade, completed part of the upgrade process, and "
-"discovered one or more problems not encountered during testing. As a "
-"consequence, you must roll back your environment to the original \"known good"
-"\" state. You also made sure that you did not make any state changes after "
-"attempting the upgrade process; no new instances, networks, storage volumes, "
-"and so on. Any of these new resources will be in a frozen state after the "
-"databases are restored from backup."
-msgstr ""
-"一般的なシナリオは、アップグレードの準備で本番の管理サービスを分解して、アッ"
-"プグレード手順の一部分を完了して、テスト中には遭遇しなかった 1 つ以上の問題に"
-"遭遇することです。環境を元の「万全な」状態にロールバックする必要があります。"
-"続けて、アップグレードプロセスを試行した後、新しいインスタンス、ネットワー"
-"ク、ストレージボリュームなど、何も状態を変更していないことを確実にしてくださ"
-"い。これらの新しいリソースはすべて、データベースがバックアップからリストアさ"
-"れた後、フリーズ状態になります。"
-
-msgid ""
-"A common use of host aggregates is to provide information for use with the "
-"``nova-scheduler``. For example, you might use a host aggregate to group a "
-"set of hosts that share specific flavors or images."
-msgstr ""
-"ホストアグリゲートの一般的な用途は ``nova-scheduler`` で使用する情報を提供す"
-"ることです。例えば、ホストアグリゲートを使って、特定のフレーバーやイメージを"
-"共有するホストの集合を作成することができます。"
-
-msgid ""
-"A common way of dealing with the recovery from a full system failure, such "
-"as a power outage of a data center, is to assign each service a priority, "
-"and restore in order. :ref:`table_example_priority` shows an example."
-msgstr ""
-"データセンターの電源障害など、完全なシステム障害からリカバリーする一般的な方"
-"法は、各サービスに優先度を付け、順番に復旧していくことです。 :ref:"
-"`table_example_priority` に例を示します。"
-
-msgid "A compute node"
-msgstr "コンピュートノード"
-
-msgid ""
-"A critical part of a cloud's scalability is the amount of effort that it "
-"takes to run your cloud. To minimize the operational cost of running your "
-"cloud, set up and use an automated deployment and configuration "
-"infrastructure with a configuration management system, such as :term:"
-"`Puppet` or :term:`Chef`. Combined, these systems greatly reduce manual "
-"effort and the chance for operator error."
-msgstr ""
-"クラウドのスケーラビリティにおける重要な部分の一つは、クラウドを運用するのに"
-"必要な労力にあります。クラウドの運用コストを最小化するために、 :term:"
-"`Puppet` や :term:`Chef` などの設定管理システムを使用して、自動化されたデプロ"
-"イメントおよび設定インフラストラクチャーを設定、使用してください。これらのシ"
-"ステムを統合すると、工数やオペレーターのミスを大幅に減らすことができます。"
-
-msgid ""
-"A descriptive name, such as xx.size\\_name, is conventional but not "
-"required, though some third-party tools may rely on it."
-msgstr ""
-"慣習として xx.size\\_name などの内容を表す名前を使用しますが、必須ではありま"
-"せん。いくつかのサードパーティツールはその名称に依存しているかもしれません。"
-
-msgid ""
-"A device name where the volume is attached in the system at ``/dev/dev_name``"
-msgstr "そのボリュームはシステムで ``/dev/dev_name`` に接続されます。"
-
-msgid ""
-"A different API endpoint for every region. Each region has a full nova "
-"installation."
-msgstr "各リージョンは完全な nova インストール環境を持ちます。"
-
-msgid ""
-"A feature was introduced in Essex to periodically check and see if there "
-"were any ``_base`` files not in use. If there were, OpenStack Compute would "
-"delete them. This idea sounds innocent enough and has some good qualities to "
-"it. But how did this feature end up turned on? It was disabled by default in "
-"Essex. As it should be. It was `decided to be turned on in Folsom `_. I cannot emphasize enough that:"
-msgstr ""
-"Essex で、 ``_base`` 下の任意のファイルが使用されていないかどうか定期的に"
-"チェックして確認する機能が導入された。もしあれば、OpenStack Compute はその"
-"ファイルを削除する。このアイデアは問題がないように見え、品質的にも良いよう"
-"だった。しかし、この機能を有効にすると最終的にどうなるのか?Essex ではこの機"
-"能がデフォルトで無効化されていた。そうあるべきであったからだ。これは、 "
-"`Folsom で有効になることが決定された `_ 。私はそうあるべきとは思わない。何故なら"
-
-msgid "A few nights later, it happened again."
-msgstr "数日後、それは再び起こった。"
-
-msgid ""
-"A final example is if a user is hammering cloud resources repeatedly. "
-"Contact the user and learn what he is trying to do. Maybe he doesn't "
-"understand that what he's doing is inappropriate, or maybe there is an issue "
-"with the resource he is trying to access that is causing his requests to "
-"queue or lag."
-msgstr ""
-"最後の例は、ユーザーがクラウドのリソースに繰り返し悪影響を与える場合です。"
-"ユーザーと連絡をとり、何をしようとしているのか理解します。ユーザー自身が実行"
-"しようとしていることを正しく理解していない可能性があります。または、アクセス"
-"しようとしているリソースに問題があり、リクエストがキューに入ったり遅れが発生"
-"している場合もあります。"
-
-msgid "A full set of options can be found using:"
-msgstr "すべてのオプションは、次のように確認できます。"
-
-msgid ""
-"A list of terms used in this book is included, which is a subset of the "
-"larger OpenStack glossary available online."
-msgstr ""
-"この本で使われている用語の一覧。オンライン上にある OpenStack 用語集のサブセッ"
-"トです。"
-
-msgid ""
-"A long requested service, to provide the ability to manipulate DNS entries "
-"associated with OpenStack resources has gathered a following. The designate "
-"project was also released."
-msgstr ""
-"長く要望されていたサービスです。配下を収集した OpenStack リソースを関連付けら"
-"れた DNS エントリーを操作する機能を提供します。designate プロジェクトもリリー"
-"スされました。"
-
-msgid ""
-"A major quality push has occurred across drivers and plug-ins in Block "
-"Storage, Compute, and Networking. Particularly, developers of Compute and "
-"Networking drivers that require proprietary or hardware products are now "
-"required to provide an automated external testing system for use during the "
-"development process."
-msgstr ""
-"主要な品質は、Block Storage、Compute、Networking におけるドライバーやプラグイ"
-"ンをまたがり発生しています。とくに、プロプライエタリーやハードウェア製品を必"
-"要とする Compute と Networking のドライバー開発者は、開発プロセス中に使用する"
-"ために、自動化された外部テストシステムを提供する必要があります。"
-
-msgid ""
-"A much-requested answer to big data problems, a dedicated team has been "
-"making solid progress on a Hadoop-as-a-Service project."
-msgstr ""
-"ビッグデータの問題に対する最も要望された回答です。専門チームが Hadoop-as-a-"
-"Service プロジェクトに安定した進捗を実現しました。"
-
-msgid ""
-"A note about DAIR's architecture: ``/var/lib/nova/instances`` is a shared "
-"NFS mount. This means that all compute nodes have access to it, which "
-"includes the ``_base`` directory. Another centralized area is ``/var/log/"
-"rsyslog`` on the cloud controller. This directory collects all OpenStack "
-"logs from all compute nodes. I wondered if there were any entries for the "
-"file that :command:`virsh` is reporting:"
-msgstr ""
-"DAIR のアーキテクチャーは ``/var/lib/nova/instances`` が共有 NFS マウントであ"
-"ることに注意したい。これは、全てのコンピュートノードがそのディレクトリにアク"
-"セスし、その中に ``_base`` ディレクトリが含まれることを意味していた。その他の"
-"集約化エリアはクラウドコントローラーの ``/var/log/rsyslog`` だ。このディレク"
-"トリは全コンピュートノードの全ての OpenStack ログが収集されていた。私は、 :"
-"command:`virsh` が報告したファイルに関するエントリがあるのだろうかと思った。"
-
-msgid ""
-"A number of operating systems use rsyslog as the default logging service. "
-"Since it is natively able to send logs to a remote location, you do not have "
-"to install anything extra to enable this feature, just modify the "
-"configuration file. In doing this, consider running your logging over a "
-"management network or using an encrypted VPN to avoid interception."
-msgstr ""
-"多くのオペレーティングシステムは、rsyslog をデフォルトのロギングサービスとし"
-"て利用します。rsyslog は、リモートにログを送信する機能を持っているので、何か"
-"を追加でインストールする必要がなく、設定ファイルを変更するだけです。リモート"
-"転送を実施する際は、盗聴を防ぐためにログが自身の管理ネットワーク上を通る、も"
-"しくは暗号化VPNを利用することを考慮する必要があります。"
-
-msgid ""
-"A number of time-related fields are useful for tracking when state changes "
-"happened on an instance:"
-msgstr ""
-"多くの時刻関連のフィールドは、いつ状態の変化がインスタンスに起こったかを追跡"
-"する際に役に立ちます:"
-
-msgid ""
-"A quick Google search turned up this: `DHCP lease errors in VLAN mode "
-"`_ which further "
-"supported our DHCP theory."
-msgstr ""
-"ちょっと Google 検索した結果、`VLAN モードでの DHCPリースエラー `_ を見つけた。この情報はその後の"
-"我々の DHCP 方針の支えになった。"
-
-msgid ""
-"A quick way to check whether DNS is working is to resolve a hostname inside "
-"your instance by using the :command:`host` command. If DNS is working, you "
-"should see:"
-msgstr ""
-"DNS が正しくホスト名をインスタンス内から解決できているか確認する簡単な方法"
-"は、 :command:`host` コマンドです。もし DNS が正しく動いていれば、以下メッ"
-"セージが確認できます。"
-
-msgid ""
-"A report came in: VMs were launching slowly, or not at all. Cue the standard "
-"checks—nothing on the Nagios, but there was a spike in network towards the "
-"current master of our RabbitMQ cluster. Investigation started, but soon the "
-"other parts of the queue cluster were leaking memory like a sieve. Then the "
-"alert came in—the master Rabbit server went down and connections failed over "
-"to the slave."
-msgstr ""
-"報告が入った。VM の起動が遅いか、全く起動しない。標準のチェック項目は?"
-"nagios 上は問題なかったが、RabbitMQ クラスタの現用系に向かうネットワークのみ"
-"高負荷を示していた。捜査を開始したが、すぐに RabbitMQ クラスタの別の部分がざ"
-"るのようにメモリリークを起こしていることを発見した。また警報か?RabbitMQ サー"
-"バーの現用系はダウンしようとしていた。接続は待機系にフェイルオーバーした。"
-
-msgid "A service to provide queues of messages and notifications was released."
-msgstr "メッセージと通知のキューを提供するサービスが提供されました。"
-
-msgid "A shell where you can get some work done"
-msgstr "作業を行うためのシェル"
-
-msgid ""
-"A similar pattern can be followed in other projects that use the driver "
-"architecture. Simply create a module and class that conform to the driver "
-"interface and plug it in through configuration. Your code runs when that "
-"feature is used and can call out to other services as necessary. No project "
-"core code is touched. Look for a \"driver\" value in the project's ``.conf`` "
-"configuration files in ``/etc/`` to identify projects that use a "
-"driver architecture."
-msgstr ""
-"ドライバ・アーキテクチャーを使う他のプロジェクトで、類似のパターンに従うこと"
-"ができます。単純に、そのドライバーインタフェースに従うモジュールとクラスを作"
-"成し、環境定義によって組み込んでください。あなたのコードはその機能が使われた"
-"時に実行され、必要に応じて他のサービスを呼び出します。プロジェクトのコアコー"
-"ドは一切修正しません。ドライバーアーキテクチャーを使っているプロジェクトを確"
-"認するには、``/etc/`` に格納されている、プロジェクトの ``.conf`` 設"
-"定ファイルの中で driver 変数を探してください。"
-
-msgid ""
-"A single :term:`API endpoint` for compute, or you require a second level of "
-"scheduling."
-msgstr ""
-"コンピュート資源に対する単一の :term:`API エンドポイント ` 、も"
-"しくは2段階スケジューリングが必要な場合"
-
-msgid "A single-site cloud with equipment fed by separate power supplies."
-msgstr "分離された電源供給ラインを持つ設備で構成される、単一サイトのクラウド。"
-
-msgid ""
-"A snapshot captures the state of the file system, but not the state of the "
-"memory. Therefore, to ensure your snapshot contains the data that you want, "
-"before your snapshot you need to ensure that:"
-msgstr ""
-"スナップショットは、ファイルシステムの状態をキャプチャーしますが、メモリーの"
-"状態をキャプチャーしません。そのため、スナップショットに期待するデータが含ま"
-"れることを確実にするために、次のことを確実にする必要があります。"
-
-msgid ""
-"A tangible example of this is the ``nova-compute`` process. In order to "
-"manage the image cache with libvirt, ``nova-compute`` has a periodic process "
-"that scans the contents of the image cache. Part of this scan is calculating "
-"a checksum for each of the images and making sure that checksum matches what "
-"``nova-compute`` expects it to be. However, images can be very large, and "
-"these checksums can take a long time to generate. At one point, before it "
-"was reported as a bug and fixed, ``nova-compute`` would block on this task "
-"and stop responding to RPC requests. This was visible to users as failure of "
-"operations such as spawning or deleting instances."
-msgstr ""
-"これの具体的な例が ``nova-compute`` プロセスです。libvirt でイメージキャッ"
-"シュを管理するために、``nova-compute`` はイメージキャッシュの内容をスキャンす"
-"る周期的なプロセスを用意します。このスキャンの中で、各イメージのチェックサム"
-"を計算し、チェックサムが ``nova-compute`` が期待する値と一致することを確認し"
-"ます。しかしながら、イメージは非常に大きく、チェックサムを生成するのに長い時"
-"間がかかる場合があります。このことがバグとして報告され修正される前の時点で"
-"は、``nova-compute`` はこのタスクで停止し RPC リクエストに対する応答を停止し"
-"てしまっていました。この振る舞いは、インスタンスの起動や削除などの操作の失敗"
-"としてユーザーに見えていました。"
-
-msgid ""
-"A tool such as **collectd** can be used to store this information. While "
-"collectd is out of the scope of this book, a good starting point would be to "
-"use collectd to store the result as a COUNTER data type. More information "
-"can be found in `collectd's documentation `_."
-msgstr ""
-"collectd のようなツールはこのような情報を保管することに利用できます。 "
-"collectd はこの本のスコープから外れますが、 collectd で COUNTER データ形とし"
-"て結果を保存するのがよい出発点になります。より詳しい情報は `collectd のドキュ"
-"メント `_ を参照してくださ"
-"い。"
-
-msgid "A typical user"
-msgstr "一般的なユーザー"
-
-msgid ""
-"A user might need a custom flavor that is uniquely tuned for a project she "
-"is working on. For example, the user might require 128 GB of memory. If you "
-"create a new flavor as described above, the user would have access to the "
-"custom flavor, but so would all other tenants in your cloud. Sometimes this "
-"sharing isn't desirable. In this scenario, allowing all users to have access "
-"to a flavor with 128 GB of memory might cause your cloud to reach full "
-"capacity very quickly. To prevent this, you can restrict access to the "
-"custom flavor using the :command:`nova flavor-access-add` command:"
-msgstr ""
-"ユーザーが、取り組んでいるプロジェクト向けに独自にチューニングした、カスタム"
-"フレーバーを必要とするかもしれません。例えば、ユーザーが 128 GB メモリーを必"
-"要とするかもしれません。前に記載したとおり、新しいフレーバーを作成する場合、"
-"ユーザーがカスタムフレーバーにアクセスできるでしょう。しかし、クラウドにある"
-"他のすべてのクラウドもアクセスできます。ときどき、この共有は好ましくありませ"
-"ん。この場合、すべてのユーザーが 128 GB メモリーのフレーバーにアクセスでき、"
-"クラウドのリソースが非常に高速に容量不足になる可能性があります。これを防ぐた"
-"めに、:command:`nova flavor-access-add` コマンドを使用して、カスタムフレー"
-"バーへのアクセスを制限できます。"
-
-msgid "A user recently tried launching a CentOS instance on that node"
-msgstr ""
-"最近、あるユーザがそのノード上で CentOS のインスタンスを起動しようとした。"
-
-msgid "AMQP broker"
-msgstr "AMQP ブローカー"
-
-msgid "Absolute limits"
-msgstr "絶対制限"
-
-msgid "Abstract"
-msgstr "概要"
-
-msgid "Account quotas"
-msgstr "アカウントのクォータ"
-
-msgid "Acknowledgements"
-msgstr "謝辞"
-
-msgid "Adam Hyde"
-msgstr "Adam Hyde"
-
-msgid ""
-"Adam Powell in Racker IT supplied us with bandwidth each day and second "
-"monitors for those of us needing more screens."
-msgstr ""
-"Rackspace IT部門 の Adam Powell は、私たちに毎日のネットワーク帯域を提供して"
-"くれました。また、より多くのスクリーンが必要となったため、セカンドモニタを提"
-"供してくれました。"
-
-msgid ""
-"Adam facilitated this book sprint. He also founded the book sprint "
-"methodology and is the most experienced book-sprint facilitator around. See "
-"`BookSprints `_ for more information. Adam "
-"founded FLOSS Manuals—a community of some 3,000 individuals developing Free "
-"Manuals about Free Software. He is also the founder and project manager for "
-"Booktype, an open source project for writing, editing, and publishing books "
-"online and in print."
-msgstr ""
-"Adam は今回の Book Sprint をリードしました。 Book Sprint メソッドを創設者でも"
-"あり、一番経験豊富な Book Sprint のファシリテーターです。詳しい情報は "
-"`BookSprints `_ を見てください。 3000人もの参加者"
-"がいるフリーソフトウェアのフリーなマニュアルを作成するコミュニティである "
-"FLOSS Manuals の創設者です。また、Booktype の創設者でプロジェクトマネージャー"
-"です。 Booktype はオンラインで本の執筆、編集、出版を行うオープンソースプロ"
-"ジェクトです。"
-
-msgid ""
-"Add all raw disks to one large RAID array, either hardware or software "
-"based. You can partition this large array with the boot, root, swap, and LVM "
-"areas. This option is simple to implement and uses all partitions. However, "
-"disk I/O might suffer."
-msgstr ""
-"すべてのローディスクを 1 つの大きな RAID 配列に追加します。ここでは、ソフト"
-"ウェアベースでもハードウェアベースでも構いません。この大きなRAID 配列を "
-"boot、root、swap、LVM 領域に分割します。この選択肢はシンプルですべてのパー"
-"ティションを利用することができますが、I/O性能に悪影響がでる可能性があります。"
-
-msgid "Add device ``snooper0`` to bridge ``br-int``:"
-msgstr "``snooper0`` デバイスを ``br-int`` ブリッジに追加します。"
-
-msgid "Add metadata to the container to allow the IP:"
-msgstr "メタデータをコンテナーに追加して、IP を許可します。"
-
-msgid "Add the repository for the new release packages."
-msgstr "新リリースのパッケージのリポジトリーを追加します。"
-
-msgid "Adding Custom Logging Statements"
-msgstr "カスタムログの追加"
-
-msgid "Adding Images"
-msgstr "イメージの追加"
-
-msgid "Adding Projects"
-msgstr "プロジェクトの追加"
-
-msgid "Adding Signed Images"
-msgstr "署名済みイメージの追加"
-
-msgid "Adding a Compute Node"
-msgstr "コンピュートノードの追加"
-
-msgid ""
-"Adding a new object storage node is different from adding compute or block "
-"storage nodes. You still want to initially configure the server by using "
-"your automated deployment and configuration-management systems. After that "
-"is done, you need to add the local disks of the object storage node into the "
-"object storage ring. The exact command to do this is the same command that "
-"was used to add the initial disks to the ring. Simply rerun this command on "
-"the object storage proxy server for all disks on the new object storage "
-"node. Once this has been done, rebalance the ring and copy the resulting "
-"ring files to the other storage nodes."
-msgstr ""
-"新しいオブジェクトストレージノードの追加は、コンピュートノードやブロックスト"
-"レージノードの追加とは異なります。サーバーの設定は、これまで通り自動配備シス"
-"テムと構成管理システムを使って行えます。完了した後、オブジェクトストレージ"
-"ノードのローカルディスクをオブジェクトストレージリングに追加する必要がありま"
-"す。これを実行するコマンドは、最初にディスクをリングに追加するのに使用したコ"
-"マンドと全く同じです。オブジェクトストレージプロキシサーバーにおいて、このコ"
-"マンドを、新しいオブジェクトストレージノードにあるすべてのディスクに対して、"
-"再実行するだけです。これが終わったら、リングの再バランスを行い、更新されたリ"
-"ングファイルを他のストレージノードにコピーします。"
-
-msgid "Adding an Object Storage Node"
-msgstr "オブジェクトストレージノードの追加"
-
-msgid ""
-"Adding security groups is typically done on instance boot. When launching "
-"from the dashboard, you do this on the :guilabel:`Access & Security` tab of "
-"the :guilabel:`Launch Instance` dialog. When launching from the command "
-"line, append ``--security-groups`` with a comma-separated list of security "
-"groups."
-msgstr ""
-"セキュリティグループの追加は、一般的にインスタンスの起動時に実行されます。"
-"ダッシュボードから起動するとき、これは :guilabel:`インスタンスの起動` ダイア"
-"ログの :guilabel:`アクセスとセキュリティー` タブにあります。コマンドラインか"
-"ら起動する場合には、 ``--security-groups`` にセキュリティグループのコンマ区切"
-"り一覧を指定します。"
-
-msgid ""
-"Adding to a RAID array (RAID stands for redundant array of independent "
-"disks), based on the number of disks you have available, so that you can add "
-"capacity as your cloud grows. Some options are described in more detail "
-"below."
-msgstr ""
-"使用可能なディスクの数をもとに、RAID 配列 (RAID は Redundant Array of "
-"Independent Disks の略) に追加します。 こうすることで、クラウドが大きくなった"
-"場合も容量を追加できます。オプションは、以下で詳しく説明しています。"
-
-msgid ""
-"Additional optional restrictions on which compute nodes the flavor can run "
-"on. This is implemented as key-value pairs that must match against the "
-"corresponding key-value pairs on compute nodes. Can be used to implement "
-"things like special resources (such as flavors that can run only on compute "
-"nodes with GPU hardware)."
-msgstr ""
-"フレーバーを実行できるコンピュートノードに関する追加の制限。これはオプション"
-"です。これは、コンピュートノードにおいて対応するキーバリューペアとして実装さ"
-"れ、コンピュートノードでの対応するキーバリューペアと一致するものでなければい"
-"けません。(GPU ハードウェアを持つコンピュートノードのみにおいて実行するフレー"
-"バーのように) 特別なリソースのようなものを実装するために使用できます。"
-
-msgid ""
-"Additionally, for Identity-related issues, try the tips in :ref:"
-"`sql_backend`."
-msgstr ""
-"さらに、Identity 関連の問題に対して、:ref:`sql_backend` にあるヒントを試して"
-"みてください。"
-
-msgid ""
-"Additionally, this instance in question was responsible for a very, very "
-"large backup job each night. While \"The Issue\" (as we were now calling it) "
-"didn't happen exactly when the backup happened, it was close enough (a few "
-"hours) that we couldn't ignore it."
-msgstr ""
-"加えて、問題のインスタンスは毎晩非常に長いバックアップジョブを担っていた。"
-"「あの問題」(今では我々はこの障害をこう呼んでいる)はバックアップが行われて"
-"いる最中には起こらなかったが、(数時間たっていて)「あの問題」が起こるまであ"
-"と少しのところだった。"
-
-msgid "Administrative Command-Line Tools"
-msgstr "管理系コマンドラインツール"
-
-msgid "Advanced Configuration"
-msgstr "高度な設定"
-
-msgid "After a Compute Node Reboots"
-msgstr "コンピュートノードの再起動後"
-
-msgid ""
-"After a cloud controller reboots, ensure that all required services were "
-"successfully started. The following commands use :command:`ps` and :command:"
-"`grep` to determine if nova, glance, and keystone are currently running:"
-msgstr ""
-"クラウドコントローラーの再起動後、すべての必要なサービスが正常に起動されたこ"
-"とを確認します。以下のコマンドは、 :command:`ps` と :command:`grep` を使用し"
-"て、nova、glance、keystone が現在動作していることを確認しています。"
-
-msgid "After a few minutes of troubleshooting, I saw the following details:"
-msgstr "数分間のトラブル調査の後、以下の詳細が判明した。"
-
-msgid ""
-"After digging into the nova (OpenStack Compute) code, I noticed two areas in "
-"``api/ec2/cloud.py`` potentially impacting my system:"
-msgstr ""
-"nova (OpenStack Compute) のコードを深堀りすると、私のシステムに影響を与える可"
-"能性がある 2 つの領域を ``api/ec2/cloud.py`` で見つけました。"
-
-msgid ""
-"After finding the instance ID we headed over to ``/var/lib/nova/instances`` "
-"to find the ``console.log``:"
-msgstr ""
-"インスタンスIDの発見後、``console.log`` を探すため ``/var/lib/nova/"
-"instances`` にアクセスした。"
-
-msgid ""
-"After learning about scalability in computing from particle physics "
-"experiments, such as ATLAS at the Large Hadron Collider (LHC) at CERN, Tom "
-"worked on OpenStack clouds in production to support the Australian public "
-"research sector. Tom currently serves as an OpenStack community manager and "
-"works on OpenStack documentation in his spare time."
-msgstr ""
-"CERN の Large Hadron Collider (LHC) で ATLAS のような素粒子物理学実験でコン"
-"ピューティングのスケーラビリティの経験を積んだ後、現在はオーストラリアの公的"
-"な研究部門を支援するプロダクションの OpenStack クラウドに携わっていました。現"
-"在は OpenStack のコミュニティマネージャーを務めており、空いた時間で "
-"OpenStack ドキュメントプロジェクトに参加しています。"
-
-msgid ""
-"After migration, users see different results from :command:`openstack image "
-"list` and :command:`glance image-list`. To ensure users see the same images "
-"in the list commands, edit the :file:`/etc/glance/policy.json` file and :"
-"file:`/etc/nova/policy.json` file to contain ``\"context_is_admin\": \"role:"
-"admin\"``, which limits access to private images for projects."
-msgstr ""
-"移行後、ユーザーは :command:`openstack image list` と :command:`glance image-"
-"list` から異なる結果を見ることになります。ユーザーが一覧コマンドにおいて同じ"
-"イメージをきちんと見るために、 ``/etc/glance/policy.json`` と :file:`/etc/"
-"nova/policy.json` ファイルを編集して、 ``\"context_is_admin\": \"role:admin"
-"\"`` を含めます。これは、プロジェクトのプライベートイメージへのアクセスを制限"
-"します。"
-
-msgid ""
-"After reproducing the problem several times, I came to the unfortunate "
-"conclusion that this cloud did indeed have a problem. Even worse, my time "
-"was up in Kelowna and I had to return back to Calgary."
-msgstr ""
-"何度か問題が再現した後、私はこのクラウドが実は問題を抱えているという不幸な結"
-"論に至った。更に悪いことに、私がケロウナから出発する時間になっており、カルガ"
-"リーに戻らなければならなかった。"
-
-msgid ""
-"After restarting the instance, everything was back up and running. We "
-"reviewed the logs and saw that at some point, network communication stopped "
-"and then everything went idle. We chalked this up to a random occurrence."
-msgstr ""
-"インスタンスの再起動後、全ては元通りに動くようになった。我々はログを見直し、"
-"問題の箇所(ネットワーク通信が止まり、全ては待機状態になった)を見た。我々は"
-"ランダムな事象の原因はこのインスタンスだと判断した。"
-
-msgid "After running"
-msgstr "実行後"
-
-msgid ""
-"After that, use the :command:`openstack` command to reboot all instances "
-"that were on c01.example.com while regenerating their XML files at the same "
-"time:"
-msgstr ""
-"その後、:command:`openstack` コマンドを使って、c01.example.com にあったすべて"
-"のインスタンスを再起動します。起動する際にインスタンスの XML ファイルを再生成"
-"します:"
-
-msgid ""
-"After the compute node is successfully running, you must deal with the "
-"instances that are hosted on that compute node because none of them are "
-"running. Depending on your SLA with your users or customers, you might have "
-"to start each instance and ensure that they start correctly."
-msgstr ""
-"コンピュートノードが正常に実行された後、そのコンピュートノードでホストされて"
-"いるインスタンスはどれも動作していないので、そのコンピュートノードにおいてホ"
-"ストされているインスタンスを処理する必要があります。ユーザーや顧客に対する "
-"SLA によっては、各インスタンスを開始し、正常に起動していることを確認する必要"
-"がある場合もあるでしょう。"
-
-msgid "After the dnsmasq processes start again, check whether DNS is working."
-msgstr "dnsmasq再起動後に、DNSが動いているか確認します。"
-
-msgid ""
-"After the packet is on this NIC, it transfers to the compute node's default "
-"gateway. The packet is now most likely out of your control at this point. "
-"The diagram depicts an external gateway. However, in the default "
-"configuration with multi-host, the compute host is the gateway."
-msgstr ""
-"パケットはこのNICに送られた後、コンピュートノードのデフォルトゲートウェイに転"
-"送されます。パケットはこの時点で、おそらくあなたの管理範囲外でしょう。図には"
-"外部ゲートウェイを描いていますが、マルチホストのデフォルト構成では、コン"
-"ピュートホストがゲートウェイです。"
-
-msgid ""
-"After this command it is common practice to call :command:`openstack image "
-"create` from your workstation, and once done press enter in your instance "
-"shell to unfreeze it. Obviously you could automate this, but at least it "
-"will let you properly synchronize."
-msgstr ""
-"このコマンドの後、お使いの端末から :command:`openstack image create` を呼び出"
-"すことが一般的な慣習です。実行した後、インスタンスの中で Enter キーを押して、"
-"フリーズ解除します。もちろん、これを自動化できますが、少なくとも適切に同期で"
-"きるようになるでしょう。"
-
-msgid ""
-"After you consider these factors, you can determine how many cloud "
-"controller cores you require. A typical eight core, 8 GB of RAM server is "
-"sufficient for up to a rack of compute nodes — given the above caveats."
-msgstr ""
-"これらの要素を検討した後、クラウドコントローラにどのくらいのコア数が必要なの"
-"か決定することができます。上記で説明した留意事項の下、典型的には、ラック 1 本"
-"分のコンピュートノードに対して8 コア、メモリ 8GB のサーバで充分です。"
-
-msgid ""
-"After you establish that the instance booted properly, the task is to figure "
-"out where the failure is."
-msgstr ""
-"インスタンスが正しく起動した後、この手順でどこが問題かを切り分けることができ"
-"ます。"
-
-msgid ""
-"After you have the list, you can use the :command:`openstack` command to "
-"start each instance:"
-msgstr ""
-"一覧を取得した後、各インスタンスを起動するために :command:`openstack` コマン"
-"ドを使用できます。"
-
-msgid ""
-"Again, it turns out that the image was a snapshot. The three other instances "
-"that successfully started were standard cloud images. Was it a problem with "
-"snapshots? That didn't make sense."
-msgstr ""
-"再度、イメージがスナップショットであることが判明した。無事に起動した他の3イ"
-"ンスタンスは標準のクラウドイメージであった。これはスナップショットの問題か?"
-"それは意味が無かった。"
-
-msgid ""
-"Again, the right answer depends on your environment. You have to make your "
-"decision based on the trade-offs between space utilization, simplicity, and "
-"I/O performance."
-msgstr ""
-"ここでも、環境によって適したソリューションが変わります。スペース使用状況、シ"
-"ンプルさ、I/O パフォーマンスの長所、短所をベースに意思決定していく必要があり"
-"ます。"
-
-msgid "Ah-hah! So OpenStack was deleting it. But why?"
-msgstr "あっはっは!じゃぁ、OpenStack が削除したのか。でも何故?"
-
-msgid ""
-"All files and directories in ``/var/lib/nova/instances`` are uniquely named. "
-"The files in \\_base are uniquely titled for the glance image that they are "
-"based on, and the directory names ``instance-xxxxxxxx`` are uniquely titled "
-"for that particular instance. For example, if you copy all data from ``/var/"
-"lib/nova/instances`` on one compute node to another, you do not overwrite "
-"any files or cause any damage to images that have the same unique name, "
-"because they are essentially the same file."
-msgstr ""
-"``/var/lib/nova/instances`` にあるすべてのファイルとディレクトリは一意に名前"
-"が付けられています。 \\_base にあるファイルは元となった glance イメージに対応"
-"する一意に名前が付けられています。また、``instance-xxxxxxxx`` という名前が付"
-"けられたディレクトリは特定のインスタンスに対して一意にタイトルが付けられてい"
-"ます。たとえば、あるコンピュートノードにある ``/var/lib/nova/instances`` のす"
-"べてのデータを他のノードにコピーしたとしても、ファイルを上書きすることはあり"
-"ませんし、また同じ一意な名前を持つイメージにダメージを与えることもありませ"
-"ん。同じ一意な名前を持つものは本質的に同じファイルだからです。"
-
-msgid ""
-"All in all, just issue the :command:`reboot` command. The operating system "
-"cleanly shuts down services and then automatically reboots. If you want to "
-"be very thorough, run your backup jobs just before you reboot."
-msgstr ""
-"大体の場合、単に :command:`reboot` コマンドを発行します。オペレーティングシス"
-"テムがサービスを正常にシャットダウンして、その後、自動的に再起動します。万全"
-"を期したい場合、再起動する前にバックアップジョブを実行します。"
-
-msgid ""
-"All interfaces on the ``br-tun`` are internal to Open vSwitch. To monitor "
-"traffic on them, you need to set up a mirror port as described above for "
-"``patch-tun`` in the ``br-int`` bridge."
-msgstr ""
-"``br-tun`` にあるすべてのインターフェースは、Open vSwitch 内部のものです。そ"
-"れらの通信を監視する場合、 ``br-int`` にある ``patch-tun`` 向けに上で説明した"
-"ようなミラーポートをセットアップする必要があります。"
-
-msgid "All nodes"
-msgstr "全ノード"
-
-msgid ""
-"All of the alert types mentioned earlier can also be used for trend "
-"reporting. Some other trend examples include:"
-msgstr ""
-"これまでに示した全てのアラートタイプは、トレンドレポートに利用可能です。その"
-"他のトレンドの例は以下の通りです。"
-
-msgid ""
-"All of the code for OpenStack lives in ``/opt/stack``. Go to the swift "
-"directory in the ``shell`` screen and edit your middleware module."
-msgstr ""
-"すべての OpenStack のコードは ``/opt/stack`` にあります。 ``shell`` セッショ"
-"ンの screen の中で swift ディレクトリに移動し、あなたのミドルウェアモジュール"
-"を編集してください。"
-
-msgid ""
-"All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The "
-"OpenStack version in use is typically the current stable version, with 5 to "
-"10 percent back-ported code from trunk and modifications."
-msgstr ""
-"全サイトは Ubuntu 14.04 をベースにしており、ハイパーバイザとして KVM を使用し"
-"ています。使用している OpenStack のバージョンは基本的に安定バージョンであり、"
-"5~10%のコードが開発コードからバックポートされたか、修正されています。"
-
-msgid ""
-"All translation of GRE tunnels to and from internal VLANs happens on this "
-"bridge."
-msgstr "このブリッジで GRE トンネルと内部 VLAN の相互変換が行われます。"
-
-msgid "Allow DHCP client traffic."
-msgstr "DHCP クライアント通信の許可。"
-
-msgid "Allow IPv6 ICMP traffic to allow RA packets."
-msgstr "RA パケットを許可するための IPv6 ICMP 通信の許可。"
-
-msgid ""
-"Allow access to the share with IP access type and 10.254.0.4 IP address:"
-msgstr ""
-"IP アクセス形式と 10.254.0.4 IP アドレスを持つ共有へのアクセスを許可します。"
-
-msgid "Allow traffic from defined IP/MAC pairs."
-msgstr "定義済み IP/MAC ペアからの通信許可。"
-
-msgid ""
-"Almost all OpenStack components have an underlying database to store "
-"persistent information. Usually this database is MySQL. Normal MySQL "
-"administration is applicable to these databases. OpenStack does not "
-"configure the databases out of the ordinary. Basic administration includes "
-"performance tweaking, high availability, backup, recovery, and repairing. "
-"For more information, see a standard MySQL administration guide."
-msgstr ""
-"ほとんどすべての OpenStack コンポーネントは、永続的な情報を保存するために内部"
-"でデータベースを使用しています。このデータベースは通常 MySQL です。通常の "
-"MySQL の管理方法がこれらのデータベースに適用できます。OpenStack は特別な方法"
-"でデータベースを設定しているわけではありません。基本的な管理として、パフォー"
-"マンス調整、高可用性、バックアップ、リカバリーおよび修理などがあります。さら"
-"なる情報は標準の MySQL 管理ガイドを参照してください。"
-
-msgid ""
-"Also check that all services are functioning. The following set of commands "
-"sources the ``openrc`` file, then runs some basic glance, nova, and "
-"openstack commands. If the commands work as expected, you can be confident "
-"that those services are in working condition:"
-msgstr ""
-"また、すべてのサービスが正しく機能していることを確認します。以下のコマンド群"
-"は、 ``openrc`` ファイルを読み込みます。そして、いくつかの基本的な glance、"
-"nova、openstack コマンドを実行します。コマンドが期待したとおりに動作すれば、"
-"それらのサービスが動作状態にあると確認できます。"
-
-msgid "Also check that it is functioning:"
-msgstr "また、正しく機能していることを確認します。"
-
-msgid "Also ensure that it has successfully connected to the AMQP server:"
-msgstr "AMQP サーバーに正常に接続できることも確認します。"
-
-msgid ""
-"Also, in practice, the ``nova-compute`` services on the compute nodes do not "
-"always reconnect cleanly to rabbitmq hosted on the controller when it comes "
-"back up after a long reboot; a restart on the nova services on the compute "
-"nodes is required."
-msgstr ""
-"実際には、コンピュートノードの ``nova-compute`` サービスが、コントローラー上"
-"で動作している rabbitmq に正しく再接続されない場合があります。時間のかかるリ"
-"ブートから戻ってきた場合や、コンピュートノードの nova サービスを再起動する必"
-"要がある場合です。"
-
-msgid "Alter the configuration until it works."
-msgstr "正常に動作するまで設定を変更する。"
-
-msgid ""
-"Alternatively, if you want someone to help guide you through the decisions "
-"about the underlying hardware or your applications, perhaps adding in a few "
-"features or integrating components along the way, consider contacting one of "
-"the system integrators with OpenStack experience, such as Mirantis or "
-"Metacloud."
-msgstr ""
-"代わりに、ベースとするハードウェアやアプリケーション、いくつかの新機能の追"
-"加、コンポーネントをくみ上げる方法を判断するために、誰かに支援してほしい場"
-"合、Mirantis や Metacloud などの OpenStack の経験豊富なシステムインテグレー"
-"ターに連絡することを検討してください。"
-
-msgid ""
-"Alternatively, it is possible to configure VLAN-based networks to use "
-"external routers rather than the l3-agent shown here, so long as the "
-"external router is on the same VLAN:"
-msgstr ""
-"これとは別に、外部ルーターが同じ VLAN にあれば、ここの示されている L3 エー"
-"ジェントの代わりに外部ルーターを使用するよう、VLAN ベースのネットワークを設定"
-"できます。"
-
-msgid ""
-"Although the title of this story is much more dramatic than the actual "
-"event, I don't think, or hope, that I'll have the opportunity to use "
-"\"Valentine's Day Massacre\" again in a title."
-msgstr ""
-"この物語のタイトルは実際の事件よりかなりドラマティックだが、私はタイトル中に"
-"「バレンタインデーの大虐殺」を使用する機会が再びあるとは思わない(し望まな"
-"い)。"
-
-msgid ""
-"Although this method is not documented or supported, you can use it when "
-"your compute node is permanently offline but you have instances locally "
-"stored on it."
-msgstr ""
-"この方法はドキュメントに書かれておらず、サポートされていない方法ですが、コン"
-"ピュートノードが完全にオフラインになってしまったが、インスタンスがローカルに"
-"保存されているときに、この方法を使用できます。"
-
-msgid "Among the log statements you'll see the lines:"
-msgstr "ログの中に以下の行があるでしょう。"
-
-msgid ""
-"An OpenStack cloud does not have much value without users. This chapter "
-"covers topics that relate to managing users, projects, and quotas. This "
-"chapter describes users and projects as described by version 2 of the "
-"OpenStack Identity API."
-msgstr ""
-"OpenStack クラウドは、ユーザーなしでは特に価値はありません。本章では、ユー"
-"ザー、プロジェクト、クォータの管理に関するトピックを記載します。また、"
-"OpenStack Identity API のバージョン 2 で説明されているように、ユーザーとプロ"
-"ジェクトについても説明します。"
-
-msgid ""
-"An academic turned software-developer-slash-operator, Lorin worked as the "
-"lead architect for Cloud Services at Nimbis Services, where he deploys "
-"OpenStack for technical computing applications. He has been working with "
-"OpenStack since the Cactus release. Previously, he worked on high-"
-"performance computing extensions for OpenStack at University of Southern "
-"California's Information Sciences Institute (USC-ISI)."
-msgstr ""
-"アカデミック出身のソフトウェア開発者・運用者である彼は、Nimbis Services でク"
-"ラウドサービスの Lead Architect として働いていました。Nimbis Service では彼は"
-"技術計算アプリケーション用の OpenStack を運用しています。 Cactus リリース以"
-"来 OpenStack に携わっています。以前は、University of Southern California's "
-"Information Sciences Institute (USC-ISI) で OpenStack の high-performance "
-"computing 向けの拡張を行いました。"
-
-msgid ""
-"An administrative super user, which has full permissions across all projects "
-"and should be used with great care"
-msgstr ""
-"すべてのプロジェクトにわたり全権限を持つ管理ユーザー。非常に注意して使用する"
-"必要があります。"
-
-msgid ""
-"An advanced use of this general concept allows different flavor types to run "
-"with different CPU and RAM allocation ratios so that high-intensity "
-"computing loads and low-intensity development and testing systems can share "
-"the same cloud without either starving the high-use systems or wasting "
-"resources on low-utilization systems. This works by setting ``metadata`` in "
-"your host aggregates and matching ``extra_specs`` in your flavor types."
-msgstr ""
-"この一般的なコンセプトを高度なレベルで使用すると、集中度の高いコンピュート"
-"ロードや負荷の低い開発やテストシステムが使用量の多いシステムのリソースが不足"
-"したり、使用量の低いシステムでリソースを無駄にしたりしないで、同じクラウドを"
-"共有できるように、異なるフレーバーの種別が、異なる CPU および RAM 割当の比率"
-"で実行できるようになります。 これは、ホストアグリゲートに ``metadata`` を設"
-"定して、フレーバー種別の ``extra_specs`` と一致させると機能します。"
-
-msgid ""
-"An alternative to enabling the RabbitMQ web management interface is to use "
-"the ``rabbitmqctl`` commands. For example, :command:`rabbitmqctl "
-"list_queues| grep cinder` displays any messages left in the queue. If there "
-"are messages, it's a possible sign that cinder services didn't connect "
-"properly to rabbitmq and might have to be restarted."
-msgstr ""
-"RabbitMQ Web 管理インターフェイスを有効にするもう一つの方法としては、 "
-"``rabbitmqctl`` コマンドを利用します。例えば :command:`rabbitmqctl "
-"list_queues| grep cinder` は、キューに残っているメッセージを表示します。メッ"
-"セージが存在する場合、Cinder サービスが RabbitMQ に正しく接続できてない可能性"
-"があり、再起動が必要かもしれません。"
-
-msgid ""
-"An attempt was made to deprecate ``nova-network`` during the Havana release, "
-"which was aborted due to the lack of equivalent functionality (such as the "
-"FlatDHCP multi-host high-availability mode mentioned in this guide), lack of "
-"a migration path between versions, insufficient testing, and simplicity when "
-"used for the more straightforward use cases ``nova-network`` traditionally "
-"supported. Though significant effort has been made to address these "
-"concerns, ``nova-network`` was not be deprecated in the Juno release. In "
-"addition, to a limited degree, patches to ``nova-network`` have again begin "
-"to be accepted, such as adding a per-network settings feature and SR-IOV "
-"support in Juno."
-msgstr ""
-"Havana リリース中に ``nova-network`` を廃止しようという試みがありました。これ"
-"は、このガイドで言及した FlatDHCP マルチホスト高可用性モードなどの同等機能の"
-"不足、バージョン間の移行パスの不足、不十分なテスト、伝統的にサポートされる "
-"``nova-network`` のより簡単なユースケースに使用する場合のシンプルさ、などの理"
-"由により中断されました。甚大な努力によりこれらの心配事を解決してきましたが、 "
-"``nova-network`` は Juno リリースにおいて廃止されませんでした。さらに、Juno "
-"においてネットワークごとの設定機能や SR-IOV の追加などの限定された範囲で、 "
-"``nova-network`` へのパッチが再び受け入れられてきました。"
-
-msgid ""
-"An authorization policy can be composed by one or more rules. If more rules "
-"are specified, evaluation policy is successful if any of the rules evaluates "
-"successfully; if an API operation matches multiple policies, then all the "
-"policies must evaluate successfully. Also, authorization rules are "
-"recursive. Once a rule is matched, the rule(s) can be resolved to another "
-"rule, until a terminal rule is reached. These are the rules defined:"
-msgstr ""
-"認可ポリシーは、一つまたは複数のルールにより構成できます。複数のルールを指定"
-"すると、いずれかのルールが成功と評価されれば、評価エンジンが成功になります。"
-"API 操作が複数のポリシーに一致すると、すべてのポリシーが成功と評価される必要"
-"があります。認可ルールは再帰的にもできます。あるルールにマッチした場合、これ"
-"以上展開できないルールに達するまで、そのルールは別のルールに展開されます。以"
-"下のルールが定義できます。"
-
-msgid ""
-"An automated deployment system installs and configures operating systems on "
-"new servers, without intervention, after the absolute minimum amount of "
-"manual work, including physical racking, MAC-to-IP assignment, and power "
-"configuration. Typically, solutions rely on wrappers around PXE boot and "
-"TFTP servers for the basic operating system install and then hand off to an "
-"automated configuration management system."
-msgstr ""
-"自動のデプロイメントシステムは、物理ラッキング、MAC から IP アドレスの割当、"
-"電源設定など、必要最小限の手作業のみで、介入なしに新規サーバー上にオペレー"
-"ティングシステムのインストールと設定を行います。ソリューションは通常、PXE "
-"ブートや TFTP サーバー関連のラッパーに依存して基本のオペレーティングシステム"
-"をインストールして、次に自動設定管理システムに委譲されます。"
-
-msgid "An external server outside of the cloud"
-msgstr "クラウド外部のサーバー"
-
-msgid ""
-"An hour later I received the same alert, but for another compute node. Crap. "
-"OK, now there's definitely a problem going on. Just like the original node, "
-"I was able to log in by SSH. The bond0 NIC was DOWN but the 1gb NIC was "
-"active."
-msgstr ""
-"1時間後、私は同じ警告を受信したが、別のコンピュートノードだった。拍手。OK、"
-"問題は間違いなく現在進行中だ。元のノードと全く同様に、私は SSH でログインする"
-"ことが出来た。bond0 NIC は DOWN だったが、1Gb NIC は有効だった。"
-
-msgid ""
-"An initial idea was to just increase the lease time. If the instance only "
-"renewed once every week, the chances of this problem happening would be "
-"tremendously smaller than every minute. This didn't solve the problem, "
-"though. It was just covering the problem up."
-msgstr ""
-"最初のアイデアは、単にリース時間を増やすことだった。もしインスタンスが毎週1"
-"回だけIPアドレスを更新するのであれば、毎分更新する場合よりこの問題が起こる可"
-"能性は極端に低くなるだろう。これはこの問題を解決しないが、問題を単に取り繕う"
-"ことはできる。"
-
-msgid "An instance running on that compute node"
-msgstr "コンピュートノード内のインスタンス"
-
-msgid ""
-"An integral part of a configuration-management system is the item that it "
-"controls. You should carefully consider all of the items that you want, or "
-"do not want, to be automatically managed. For example, you may not want to "
-"automatically format hard drives with user data."
-msgstr ""
-"設定管理システムの不可欠な部分は、このシステムが制御する項目です。自動管理を"
-"する項目、しない項目をすべて慎重に検討していく必要があります。例えば、ユー"
-"ザーデータが含まれるハードドライブは自動フォーマットは必要ありません。"
-
-msgid ""
-"An upgrade pre-testing system is excellent for getting the configuration to "
-"work. However, it is important to note that the historical use of the system "
-"and differences in user interaction can affect the success of upgrades."
-msgstr ""
-"アップグレード前テストシステムは、設定を動作させるために優れています。しかし"
-"ながら、システムの歴史的な使用法やユーザー操作における違いにより、アップグ"
-"レードの成否に影響することに注意することが重要です。"
-
-msgid "And finally, you can disassociate the floating IP:"
-msgstr "最後に、floating IPを開放します。"
-
-msgid ""
-"And the best part: the same user had just tried creating a CentOS instance. "
-"What?"
-msgstr ""
-"そして、最も重要なこと。同じユーザが CentOS インスタンスを作成しようとしたば"
-"かりだった。何だと?"
-
-msgid "Anne Gentle"
-msgstr "Anne Gentle"
-
-msgid ""
-"Anne is the documentation coordinator for OpenStack and also served as an "
-"individual contributor to the Google Documentation Summit in 2011, working "
-"with the Open Street Maps team. She has worked on book sprints in the past, "
-"with FLOSS Manuals’ Adam Hyde facilitating. Anne lives in Austin, Texas."
-msgstr ""
-"Anne は OpenStack のドキュメントコーディネーターで、2011年の Google Doc "
-"Summit では individual contributor (個人コントリビュータ) を努め Open "
-"Street Maps チームとともに活動しました。Adam Hyde が進めていた FLOSS Manuals "
-"の以前の doc sprint にも参加しています。テキサス州オースティンに住んでいま"
-"す。"
-
-msgid ""
-"Another common concept across various OpenStack projects is that of periodic "
-"tasks. Periodic tasks are much like cron jobs on traditional Unix systems, "
-"but they are run inside an OpenStack process. For example, when OpenStack "
-"Compute (nova) needs to work out what images it can remove from its local "
-"cache, it runs a periodic task to do this."
-msgstr ""
-"様々な OpenStack プロジェクトに共通する別の考え方として、周期的タスク "
-"(periodic task) があります。周期的タスクは伝統的な Unix システムの cron ジョ"
-"ブに似ていますが、OpenStack プロセスの内部で実行されます。例えば、OpenStack "
-"Compute (nova) はローカルキャッシュからどのイメージを削除できるかを決める必要"
-"がある際に、これを行うために周期的タスクを実行します。"
-
-msgid ""
-"Another example is a user consuming a very large amount of bandwidth. Again, "
-"the key is to understand what the user is doing. If she naturally needs a "
-"high amount of bandwidth, you might have to limit her transmission rate as "
-"to not affect other users or move her to an area with more bandwidth "
-"available. On the other hand, maybe her instance has been hacked and is part "
-"of a botnet launching DDOS attacks. Resolution of this issue is the same as "
-"though any other server on your network has been hacked. Contact the user "
-"and give her time to respond. If she doesn't respond, shut down the instance."
-msgstr ""
-"別の例は、あるユーザーが非常に多くの帯域を消費することです。繰り返しですが、"
-"ユーザーが実行していることを理解することが重要です。必ず多くの帯域を使用する"
-"必要があれば、他のユーザーに影響を与えないように通信帯域を制限する、または、"
-"より多くの帯域を利用可能な別の場所に移動させる必要があるかもしれません。一"
-"方、ユーザーのインスタンスが侵入され、DDOS 攻撃を行っているボットネットの一部"
-"になっているかもしれません。この問題の解決法は、ネットワークにある他のサー"
-"バーが侵入された場合と同じです。ユーザーに連絡し、対応する時間を与えます。も"
-"し対応しなければ、そのインスタンスを停止します。"
-
-msgid "Another example is displaying all properties for a certain image:"
-msgstr ""
-"もう一つの例は、特定のイメージに関するすべてのプロパティを表示することです。"
-
-msgid ""
-"Any time an instance shuts down unexpectedly, it might have problems on "
-"boot. For example, the instance might require an ``fsck`` on the root "
-"partition. If this happens, the user can use the dashboard VNC console to "
-"fix this."
-msgstr ""
-"予期せずシャットダウンしたときは、ブートに問題があるかもしれません。たとえ"
-"ば、インスタンスがルートパーティションにおいて ``fsck`` を実行する必要がある"
-"かもしれません。もしこうなっても、これを修復するためにダッシュボード VNC コン"
-"ソールを使用できます。"
-
-msgid "Appendix"
-msgstr "付録"
-
-msgid "Apr 11, 2013"
-msgstr "2013年4月11日"
-
-msgid "Apr 15, 2011"
-msgstr "2011年4月15日"
-
-msgid "Apr 17, 2014"
-msgstr "2014年4月17日"
-
-msgid "Apr 3, 2014"
-msgstr "2014年4月3日"
-
-msgid "Apr 30, 2015"
-msgstr "2015年4月30日"
-
-msgid "Apr 4, 2013"
-msgstr "2013年4月4日"
-
-msgid "Apr 5, 2012"
-msgstr "2012年4月5日"
-
-msgid ""
-"Arbitrary local files can also be placed into the instance file system at "
-"creation time by using the ``--file `` option. You may "
-"store up to five files."
-msgstr ""
-"``--file `` オプションを使用することにより、任意のローカル"
-"ファイルを生成時にインスタンスのファイルシステムの中に置けます。5 ファイルま"
-"で保存できます。"
-
-msgid ""
-"Armed with a patched qemu and a way to reproduce, we set out to see if we've "
-"finally solved The Issue. After 48 hours straight of hammering the instance "
-"with bandwidth, we were confident. The rest is history. You can search the "
-"bug report for \"joe\" to find my comments and actual tests."
-msgstr ""
-"パッチを当てた qemu と再現方法を携えて、我々は「あの問題」を最終的に解決した"
-"かを確認する作業に着手した。インスタンスにネットワーク負荷をかけてから丸48時"
-"間後、我々は確信していた。その後のことは知っての通りだ。あなたは、joe へのバ"
-"グ報告を検索し、私のコメントと実際のテストを見つけることができる。"
-
-msgid ""
-"Artificial scale testing can go only so far. After your cloud is upgraded, "
-"you must pay careful attention to the performance aspects of your cloud."
-msgstr ""
-"人工的なスケールテストは、あくまである程度のものです。クラウドをアップグレー"
-"ドした後、クラウドのパフォーマンス観点で十分に注意する必要があります。"
-
-msgid ""
-"As a cloud administrative user, you can use the OpenStack dashboard to "
-"create and manage projects, users, images, and flavors. Users are allowed to "
-"create and manage images within specified projects and to share images, "
-"depending on the Image service configuration. Typically, the policy "
-"configuration allows admin users only to set quotas and create and manage "
-"services. The dashboard provides an :guilabel:`Admin` tab with a :guilabel:"
-"`System Panel` and an :guilabel:`Identity` tab. These interfaces give you "
-"access to system information and usage as well as to settings for "
-"configuring what end users can do. Refer to the `OpenStack Administrator "
-"Guide `__ for "
-"detailed how-to information about using the dashboard as an admin user."
-msgstr ""
-"クラウドの管理ユーザーとして OpenStack Dashboard を使用して、プロジェクト、"
-"ユーザー、イメージ、フレーバーの作成および管理を行うことができます。ユーザー"
-"は Image service の設定に応じて、指定されたプロジェクト内でイメージを作成/管"
-"理したり、共有したりすることができます。通常、ポリシーの設定では、管理ユー"
-"ザーのみがクォータの設定とサービスの作成/管理を行うことができます。ダッシュ"
-"ボードには :guilabel:`管理` タブがあり、 :guilabel:`システムパネル` と :"
-"guilabel:`ユーザー管理タブ` に分かれています。これらのインターフェースによ"
-"り、システム情報と使用状況のデータにアクセスすることができるのに加えて、エン"
-"ドユーザーが実行可能な操作を設定することもできます。管理ユーザーとしてダッ"
-"シュボードを使用する方法についての詳しい説明は `OpenStack Administrator "
-"Guide `__ を参照してく"
-"ださい。"
-
-msgid ""
-"As a last resort, our network admin (Alvaro) and myself sat down with four "
-"terminal windows, a pencil, and a piece of paper. In one window, we ran "
-"ping. In the second window, we ran ``tcpdump`` on the cloud controller. In "
-"the third, ``tcpdump`` on the compute node. And the forth had ``tcpdump`` on "
-"the instance. For background, this cloud was a multi-node, non-multi-host "
-"setup."
-msgstr ""
-"結局、我々のネットワーク管理者(Alvao)と私自身は4つのターミナルウィンドウ、"
-"1本の鉛筆と紙切れを持って座った。1つのウインドウで我々は ping を実行した。"
-"2つ目のウインドウではクラウドコントローラー上の ``tcpdump`` 、3つ目ではコン"
-"ピュートノード上の ``tcpdump`` 、4つ目ではインスタンス上の ``tcpdump`` を実"
-"行した。前提として、このクラウドはマルチノード、非マルチホスト構成である。"
-
-msgid ""
-"As a specific example, compare a cloud that supports a managed web-hosting "
-"platform with one running integration tests for a development project that "
-"creates one VM per code commit. In the former, the heavy work of creating a "
-"VM happens only every few months, whereas the latter puts constant heavy "
-"load on the cloud controller. You must consider your average VM lifetime, as "
-"a larger number generally means less load on the cloud controller."
-msgstr ""
-"特定の例としては、マネージド Web ホスティングプラットフォームをサポートするク"
-"ラウドと、コードコミットごとに仮想マシンを1つ作成するような開発プロジェクト"
-"の統合テストを実行するクラウドを比較してみましょう。前者では、VMを作成する負"
-"荷の大きい処理は数か月に 一度しか発生しないのに対して、後者ではクラウドコント"
-"ローラに常に負荷の大きい処理が発生します。一般論として、VMの平均寿命が長いと"
-"いうことは、クラウドコントローラの負荷が軽いことを意味するため、平均的なVMの"
-"寿命を検討する必要があります。"
-
-msgid ""
-"As an OpenStack cloud is composed of so many different services, there are a "
-"large number of log files. This chapter aims to assist you in locating and "
-"working with them and describes other ways to track the status of your "
-"deployment."
-msgstr ""
-"OpenStackクラウドは、様々なサービスから構成されるため、多くのログファイルが存"
-"在します。この章では、それぞれのログの場所と取り扱い、そしてシステムのさらな"
-"る監視方法について説明します。"
-
-msgid ""
-"As an administrative user, you can update the Block Storage service quotas "
-"for a tenant, as well as update the quota defaults for a new tenant. See :"
-"ref:`table_block_storage_quota`."
-msgstr ""
-"管理ユーザーは、既存のテナントの Block Storage のクォータを更新できます。ま"
-"た、新規テナントのクォータのデフォルト値を更新することもできます。:ref:"
-"`table_block_storage_quota` を参照してください。"
-
-msgid ""
-"As an administrative user, you can update the Compute service quotas for an "
-"existing tenant, as well as update the quota defaults for a new tenant. See :"
-"ref:`table_compute_quota`."
-msgstr ""
-"管理ユーザーは、既存のテナントの Compute のクォータを更新できます。また、新規"
-"テナントのクォータのデフォルト値を更新することもできます。 :ref:"
-"`table_compute_quota` を参照してください。"
-
-msgid ""
-"As an administrative user, you can use the :command:`cinder quota-*` "
-"commands, which are provided by the ``python-cinderclient`` package, to view "
-"and update tenant quotas."
-msgstr ""
-"管理ユーザーは :command:`cinder quota-*` コマンドを使って、テナントのクォー"
-"タを表示したり更新したりできます。コマンドは ``python-cinderclient`` パッケー"
-"ジに含まれます。"
-
-msgid ""
-"As an administrative user, you can use the :command:`nova quota-*` commands, "
-"which are provided by the ``python-novaclient`` package, to view and update "
-"tenant quotas."
-msgstr ""
-"管理ユーザーは :command:`nova quota-*` コマンドを使って、テナントのクォータ"
-"を表示したり更新したりできます。コマンドは ``python-novaclient`` パッケージに"
-"含まれます。"
-
-msgid ""
-"As an administrator, you have a few ways to discover what your OpenStack "
-"cloud looks like simply by using the OpenStack tools available. This section "
-"gives you an idea of how to get an overview of your cloud, its shape, size, "
-"and current state."
-msgstr ""
-"管理者は、利用可能な OpenStack ツールを使用して、OpenStack クラウドの全体像を"
-"確認する方法がいくつかあります。本項では、クラウドの概要、形態、サイズ、現在"
-"の状態についての情報を取得する方法について説明します。"
-
-msgid ""
-"As an example, recording ``nova-api`` usage can allow you to track the need "
-"to scale your cloud controller. By keeping an eye on ``nova-api`` requests, "
-"you can determine whether you need to spawn more ``nova-api`` processes or "
-"go as far as introducing an entirely new server to run ``nova-api``. To get "
-"an approximate count of the requests, look for standard INFO messages in ``/"
-"var/log/nova/nova-api.log``:"
-msgstr ""
-"例として、 ``nova-api`` の使用を記録することでクラウドコントローラーをスケー"
-"ルする必要があるかを追跡できます。 ``nova-api`` のリクエスト数に注目すること"
-"により、 ``nova-api`` プロセスを追加するか、もしくは、 ``nova-api`` を実行す"
-"るための新しいサーバーを導入することまで行なうかを決定することができます。リ"
-"クエストの概数を取得するには ``/var/log/nova/nova-api.log`` の INFO メッセー"
-"ジを検索します。"
-
-msgid ""
-"As an open source project, one of the unique aspects of OpenStack is that it "
-"has many different levels at which you can begin to engage with it—you don't "
-"have to do everything yourself."
-msgstr ""
-"OpenStack は、オープンソースプロジェクトとして、ユニークな点があります。その "
-"1 つは、さまざまなレベルで OpenStack に携わりはじめることができる点です。すべ"
-"てを自分自身で行う必要はありません。"
-
-msgid ""
-"As for your initial deployment, you should ensure that all hardware is "
-"appropriately burned in before adding it to production. Run software that "
-"uses the hardware to its limits—maxing out RAM, CPU, disk, and network. Many "
-"options are available, and normally double as benchmark software, so you "
-"also get a good idea of the performance of your system."
-msgstr ""
-"初期導入時と同じように、本番環境に追加する前に、すべてのハードウェアについて"
-"適切な通電テストを行うべきでしょう。ハードウェアを限界まで使用するソフトウェ"
-"アを実行します。RAM、CPU、ディスク、ネットワークを限界まで使用します。多くの"
-"オプションが利用可能であり、通常はベンチマークソフトウェアとの役割も果たしま"
-"す。そのため、システムのパフォーマンスに関する良いアイディアを得ることもでき"
-"ます。"
-
-msgid ""
-"As mentioned, there's currently no way to cleanly migrate from ``nova-"
-"network`` to neutron. We recommend that you keep a migration in mind and "
-"what that process might involve for when a proper migration path is released."
-msgstr ""
-"前述のとおり、 ``nova-network`` から neutron にきれいに移行する方法は現在"
-"ありません。適切な移行パスがリリースされたときに備えて、移行と、その際に必要"
-"になる作業を念頭に置いておくことを推奨します。"
-
-msgid ""
-"As noted in the previous chapter, the number of rules per security group is "
-"controlled by the ``quota_security_group_rules``, and the number of allowed "
-"security groups per project is controlled by the ``quota_security_groups`` "
-"quota."
-msgstr ""
-"前の章で述べたとおり、セキュリティグループごとのルール数は "
-"``quota_security_group_rules`` により制御されます。また、プロジェクトごとに許"
-"可されるセキュリティグループ数は ``quota_security_groups`` クォータにより制御"
-"されます。"
-
-msgid "As soon as this setting was fixed, everything worked."
-msgstr "この設定が修正されるとすぐに、全てが正常に動作するようになった。"
-
-msgid "As this would be the server's bonded NIC."
-msgstr "これはサーバーの冗長化された(bonded)NIC であるべきだからだ。"
-
-msgid ""
-"As with most architecture choices, the right answer depends on your "
-"environment. If you are using existing hardware, you know the disk density "
-"of your servers and can determine some decisions based on the options above. "
-"If you are going through a procurement process, your user's requirements "
-"also help you determine hardware purchases. Here are some examples from a "
-"private cloud providing web developers custom environments at AT&T. This "
-"example is from a specific deployment, so your existing hardware or "
-"procurement opportunity may vary from this. AT&T uses three types of "
-"hardware in its deployment:"
-msgstr ""
-"多くのアーキテクチャーの選択肢と同様に、環境により適切なソリューションは変"
-"わって来ます。既存のハードウェアを使用する場合、サーバーのディスク密度を把握"
-"し、上記のオプションをもとに意思決定していきます。調達プロセスを行っている場"
-"合、ユーザー要件などもハードウェア購入決定の一助となります。ここでは AT&T の "
-"Web 開発者にカスタムの環境を提供するプライベートクラウドの例をあげています。"
-"この例は、特定のデプロイメントであるため、既存のハードウェアや調達機会はこれ"
-"と異なる可能性があります。AT&T は、デプロイメントに 3 種類のハードウェアを使"
-"用しています。"
-
-msgid ""
-"As with other removable disk technology, it is important that the operating "
-"system is not trying to make use of the disk before removing it. On Linux "
-"instances, this typically involves unmounting any file systems mounted from "
-"the volume. The OpenStack volume service cannot tell whether it is safe to "
-"remove volumes from an instance, so it does what it is told. If a user tells "
-"the volume service to detach a volume from an instance while it is being "
-"written to, you can expect some level of file system corruption as well as "
-"faults from whatever process within the instance was using the device."
-msgstr ""
-"他のリムーバブルディスク技術と同じように、ディスクを取り外す前に、オペレー"
-"ティングシステムがそのディスクを使用しないようにすることが重要です。Linux イ"
-"ンスタンスにおいて、一般的にボリュームからマウントされているすべてのファイル"
-"システムをアンマウントする必要があります。OpenStack Volume Service は、インス"
-"タンスから安全にボリュームを取り外すことができるかはわかりません。そのため、"
-"指示されたことを実行します。ボリュームに書き込み中にインスタンスからボリュー"
-"ムの切断を、ユーザーが Volume Service に指示すると、何らかのレベルのファイル"
-"システム破損が起きる可能性があります。それだけでなく、デバイスを使用していた"
-"インスタンスの中のプロセスがエラーを起こす可能性もあります。"
-
-msgid ""
-"As your cloud grows, MySQL is utilized more and more. If you suspect that "
-"MySQL might be becoming a bottleneck, you should start researching MySQL "
-"optimization. The MySQL manual has an entire section dedicated to this "
-"topic: `Optimization Overview `_."
-msgstr ""
-"クラウドが大きくなるにつれて、MySQL がさらに使用されてきます。MySQL がボトル"
-"ネックになってきたことが疑われる場合、MySQL 最適化の調査から始めるとよいで"
-"しょう。MySQL のマニュアルには、この話題を専門に扱う `Optimization Overview "
-"`_ というセクションがあります。"
-
-msgid ""
-"Aside from connection failures, RabbitMQ log files are generally not useful "
-"for debugging OpenStack related issues. Instead, we recommend you use the "
-"RabbitMQ web management interface. Enable it on your cloud controller:"
-msgstr ""
-"接続エラーは別として、RabbitMQ のログファイルは一般的に OpenStack 関連の問題"
-"をデバッグするために役立ちません。代わりに、RabbitMQ の Web 管理インター"
-"フェースを使用することを推奨します。クラウドコントローラーで Web 管理インター"
-"フェースを有効にするには以下のようにします。"
-
-msgid ""
-"Aside from the direct-to-blueprint pathway, there is another very well-"
-"regarded mechanism to influence the development roadmap: the user survey. "
-"Found at `OpenStack User Survey `_, "
-"it allows you to provide details of your deployments and needs, anonymously "
-"by default. Each cycle, the user committee analyzes the results and produces "
-"a report, including providing specific information to the technical "
-"committee and project team leads."
-msgstr ""
-"開発ロードマップに影響を与えるために、直接ブループリントに関わる道以外に、非"
-"常に高く評価された別の方法があります。ユーザー調査です。 `OpenStack User "
-"Survey `_ にあります。基本的に匿名"
-"で、お使いの環境の詳細、要望を送ることができます。各サイクルで、ユーザーコ"
-"ミッティーが結果を分析して、報告書を作成します。具体的な情報を TC や PTL に提"
-"供することを含みます。"
-
-msgid "Aspects to Watch"
-msgstr "ウォッチの観点"
-
-msgid "Associating Security Groups"
-msgstr "セキュリティグループの割り当て"
-
-msgid "Associating Users with Projects"
-msgstr "プロジェクトへのユーザーの割り当て"
-
-msgid ""
-"Associating existing users with an additional project or removing them from "
-"an older project is done from the :guilabel:`Projects` page of the dashboard "
-"by selecting :guilabel:`Manage Members` from the :guilabel:`Actions` column, "
-"as shown in the screenshot below."
-msgstr ""
-"既存のユーザーを追加のプロジェクトに割り当てる、または古いプロジェクトから削"
-"除することは、以下のスクリーンショットにあるとおり、ダッシュボードの :"
-"guilabel:`プロジェクト` ページから、:guilabel:`アクション` 列の :guilabel:`メン"
-"バーの管理` を選択することにより実行できます。"
-
-msgid ""
-"At that time, our control services were hosted by another team and we didn't "
-"have much debugging information to determine what was going on with the "
-"master, and we could not reboot it. That team noted that it failed without "
-"alert, but managed to reboot it. After an hour, the cluster had returned to "
-"its normal state and we went home for the day."
-msgstr ""
-"この時、我々のコントロールサービスは別のチームによりホスティングされており、"
-"我々には現用系サーバー上で何が起こっているのかを調査するための大したデバッグ"
-"情報がなく、再起動もできなかった。このチームは警報なしで障害が起こったと連絡"
-"してきたが、どうにか再起動することができた。1時間後、クラスタは通常状態"
-"に復帰し、我々はその日は帰宅した。"
-
-msgid ""
-"At the data center, I was finishing up some tasks and remembered the lock-"
-"up. I logged into the new instance and ran :command:`ps aux` again. It "
-"worked. Phew. I decided to run it one more time. It locked up."
-msgstr ""
-"データセンターで、私はいくつかの仕事を済ませると、ロックアップのことを思い出"
-"した。私は新しいインスタンスにログインし、再度 :command:`ps aux` を実行した。"
-"コマンドは機能した。ふぅ。私はもう一度試してみることにした。今度はロックアッ"
-"プした。"
-
-msgid ""
-"At the end of 2012, Cybera (a nonprofit with a mandate to oversee the "
-"development of cyberinfrastructure in Alberta, Canada) deployed an updated "
-"OpenStack cloud for their `DAIR project `_. A "
-"few days into production, a compute node locks up. Upon rebooting the node, "
-"I checked to see what instances were hosted on that node so I could boot "
-"them on behalf of the customer. Luckily, only one instance."
-msgstr ""
-"2012年の終わり、Cybera (カナダ アルバータ州にある、サイバーインフラのデプロ"
-"イを監督する権限を持つ非営利団体)が、彼らの `DAIR project `_ 用に新しい OpenStack クラウドをデプロイした。サービスイ"
-"ンから数日後、あるコンピュートノードがロックアップした。問題のノードの再起動"
-"にあたり、私は顧客の権限でインスタンスを起動するため、そのノード上で何のイン"
-"スタンスがホスティングされていたかを確認した。幸運にも、インスタンスは1つだ"
-"けだった。"
-
-msgid ""
-"At the end of August 2012, a post-secondary school in Alberta, Canada "
-"migrated its infrastructure to an OpenStack cloud. As luck would have it, "
-"within the first day or two of it running, one of their servers just "
-"disappeared from the network. Blip. Gone."
-msgstr ""
-"2012年8月の終わり、カナダ アルバータ州のある大学はそのインフラを OpenStack ク"
-"ラウドに移行した。幸か不幸か、サービスインから1~2日間に、彼らのサーバーの1台"
-"がネットワークから消失した。ビッ。いなくなった。"
-
-msgid ""
-"At the same time of finding the bug report, a co-worker was able to "
-"successfully reproduce The Issue! How? He used ``iperf`` to spew a ton of "
-"bandwidth at an instance. Within 30 minutes, the instance just disappeared "
-"from the network."
-msgstr ""
-"バグ報告を発見すると同時に、同僚が「あの問題」を再現することに成功した!どう"
-"やって?彼は ``iperf`` を使用して、インスタンス上で膨大なネットワーク負荷をか"
-"けた。30 分後、インスタンスはネットワークから姿を消した。"
-
-msgid ""
-"At the time of writing, OpenStack has more than 3,000 configuration options. "
-"You can see them documented at the `OpenStack Configuration Reference "
-"`_. "
-"This chapter cannot hope to document all of these, but we do try to "
-"introduce the important concepts so that you know where to go digging for "
-"more information."
-msgstr ""
-"執筆時点では、OpenStack は 3,000 以上の設定オプションがあります。 `OpenStack "
-"Configuration Reference `_ にドキュメント化されています。本章は、これらのすべて"
-"をドキュメント化できませんが、どの情報を掘り下げて調べるかを理解できるよう、"
-"重要な概念を紹介したいと考えています。"
-
-msgid ""
-"At the very base of any operating system are the hard drives on which the "
-"operating system (OS) is installed."
-msgstr ""
-"オペレーティングシステムの基盤は、オペレーティングシステムがインストールされ"
-"るハードドライブです。"
-
-msgid "Attaching Block Storage"
-msgstr "ブロックストレージの接続"
-
-msgid "Attempt to boot a nova instance in the affected environment."
-msgstr "影響のある環境において、nova インスタンスを起動できるか試します。"
-
-msgid "Attempt to list the objects in the ``middleware-test`` container:"
-msgstr ""
-"``middleware-test`` コンテナーにあるオブジェクトを一覧表示しようとします。"
-
-msgid "Aug 10, 2012"
-msgstr "2012年8月10日"
-
-msgid "Aug 8, 2013"
-msgstr "2013年8月8日"
-
-msgid "Aug 8, 2014"
-msgstr "2014年8月8日"
-
-msgid "Austin"
-msgstr "Austin"
-
-msgid "Availability zone"
-msgstr "アベイラビリティゾーン"
-
-msgid "Availability zones"
-msgstr "アベイラビリティゾーン"
-
-msgid "Available vCPUs"
-msgstr "利用可能な vCPU 数"
-
-msgid ""
-"Back in your DevStack instance on the shell screen, add some metadata to "
-"your container to allow the request from the remote machine:"
-msgstr ""
-"シェル画面において DevStack 用インスタンスに戻り、リモートマシンからのリクエ"
-"ストを許可するようなコンテナのメタデータを追加します。"
-
-msgid ""
-"Back up HOT template ``yaml`` files, and the ``/etc/heat/`` directory "
-"containing Orchestration configuration files."
-msgstr ""
-"HOT テンプレートの ``yaml`` ファイル、Orchestration の設定ファイルを含む ``/"
-"etc/heat/`` ディレクトリーをバックアップします。"
-
-msgid ""
-"Back up the ``/etc/ceilometer`` directory containing Telemetry configuration "
-"files."
-msgstr ""
-"Telemetry の設定ファイルを含む ``/etc/ceilometer`` ディレクトリーをバックアッ"
-"プします。"
-
-msgid "Backing storage services"
-msgstr "バックエンドのストレージサービス"
-
-msgid "Backup and Recovery"
-msgstr "バックアップとリカバリー"
-
-msgid ""
-"Backup and subsequent recovery is one of the first tasks system "
-"administrators learn. However, each system has different items that need "
-"attention. By taking care of your database, image service, and appropriate "
-"file system locations, you can be assured that you can handle any event "
-"requiring recovery."
-msgstr ""
-"バックアップ、その後のリカバリーは、最初に学習するシステム管理の 1 つです。し"
-"かしながら、各システムは、それぞれ注意を必要とする項目が異なります。データ"
-"ベース、Image service、適切なファイルシステムの場所に注意することにより、リカ"
-"バリーを必要とするすべてのイベントを処理できることが保証されます。"
-
-msgid "Bare metal Deployment (ironic)"
-msgstr "Bare metal Deployment (ironic)"
-
-msgid ""
-"Be sure that the instance has successfully booted and is at a login screen "
-"before doing the above."
-msgstr ""
-"上記を実行する前に、インスタンスが正常に起動し、ログイン画面になっていること"
-"を確認します。"
-
-msgid ""
-"Because it is recommended to not use partitions on a swift disk, simply "
-"format the disk as a whole:"
-msgstr ""
-"Swift ディスクではパーティションを使用しないことが推奨されるので、単にディス"
-"ク全体をフォーマットします。"
-
-msgid ""
-"Because network troubleshooting is especially difficult with virtual "
-"resources, this chapter is chock-full of helpful tips and tricks for tracing "
-"network traffic, finding the root cause of networking failures, and "
-"debugging related services, such as DHCP and DNS."
-msgstr ""
-"ネットワークのトラブルシューティングは、仮想リソースでとくに難しくなります。"
-"この章は、ネットワーク通信の追跡、ネットワーク障害の根本原因の調査、DHCP や "
-"DNS などの関連サービスのデバッグに関するヒントとコツがたくさん詰まっていま"
-"す。"
-
-msgid ""
-"Because of the high redundancy of Object Storage, dealing with object "
-"storage node issues is a lot easier than dealing with compute node issues."
-msgstr ""
-"オブジェクトストレージの高い冗長性のため、オブジェクトストレージのノードに関"
-"する問題を処理することは、コンピュートノードに関する問題を処理するよりも簡単"
-"です。"
-
-msgid ""
-"Because without sensible quotas a single tenant could use up all the "
-"available resources, default quotas are shipped with OpenStack. You should "
-"pay attention to which quota settings make sense for your hardware "
-"capabilities."
-msgstr ""
-"妥当なクォータがないと、単一のテナントが利用可能なリソースをすべて使用してし"
-"まう可能性があるため、デフォルトのクォータが OpenStack には含まれています。お"
-"使いのハードウェア機能には、どのクォータ設定が適切か注意してください。"
-
-msgid ""
-"Because your cloud is most likely composed of many servers, you must check "
-"logs on each of those servers to properly piece an event together. A better "
-"solution is to send the logs of all servers to a central location so that "
-"they can all be accessed from the same area."
-msgstr ""
-"クラウドは多くのサーバーから構成されるため、各サーバー上にあるイベントログを"
-"繋ぎあわせて、ログをチェックしなければなりません。よい方法は全てのサーバーの"
-"ログを一ヶ所にまとめ、同じ場所で確認できるようにすることです。"
-
-msgid ""
-"Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room "
-"reshuffle and helped us settle in for the week."
-msgstr ""
-"熱狂的なエグゼクティブアシスタントの Betsy Hagemeier は、部屋の配置換えの面倒を見"
-"てくれて、その週、私たちが落ち着いて作業できるように手助けしてくれました。"
-
-msgid "Bexar"
-msgstr "Bexar"
-
-msgid "Block Storage"
-msgstr "ブロックストレージ"
-
-msgid "Block Storage Creation Failures"
-msgstr "ブロックストレージの作成エラー"
-
-msgid "Block Storage Improvements"
-msgstr "Block Storage の改善"
-
-msgid ""
-"Block Storage is considered a stable project, with wide uptake and a long "
-"track record of quality drivers. The team has discussed many areas of work "
-"at the summits, including better error reporting, automated discovery, and "
-"thin provisioning features."
-msgstr ""
-"Block Storage は、幅広く採用され、高品質なドライバーの長い実績を持つ、安"
-"定したプロジェクトと考えられています。このチームは、よりよいエラー報告、自動"
-"探索、シンプロビジョニング機能など、さまざまな領域の作業をサミットで議論しま"
-"した。"
-
-msgid "Block Storage nodes"
-msgstr "Block Storage ノード"
-
-msgid "Block Storage service"
-msgstr "Block Storage サービス"
-
-msgid ""
-"Block Storage service - Updating the Block Storage service only requires "
-"restarting the service."
-msgstr ""
-"Block Storage サービス - Block Storage サービスの更新は、サービスの再起動のみ"
-"を必要とします。"
-
-msgid ""
-"Boolean value that indicates whether the flavor is available to all users or "
-"private. Private flavors do not get the current tenant assigned to them. "
-"Defaults to ``True``."
-msgstr ""
-"フレーバーがすべてのユーザーに利用可能であるか、プライベートであるかを示す論"
-"理値。プライベートなフレーバーは、現在のテナントをそれらに割り当てません。デ"
-"フォルトは ``True`` です。"
-
-msgid "Boot a test server:"
-msgstr "テストサーバーを起動します。"
-
-msgid ""
-"Both Compute and Block Storage rely on schedulers to determine where to "
-"place virtual machines or volumes. In Havana, the Compute scheduler "
-"underwent significant improvement, while in Icehouse it was the scheduler in "
-"Block Storage that received a boost. Further down the track, an effort "
-"started this cycle that aims to create a holistic scheduler covering both "
-"will come to fruition. Some of the work that was done in Kilo can be found "
-"under the `Gantt project `_."
-msgstr ""
-"Compute と Block Storage はどちらも、仮想マシンやボリュームを配置する場所を決"
-"めるためにスケジューラーに頼っています。Havana では、Compute のスケジューラー"
-"が大幅に改善されました。一方、Icehouse では Block Storage のスケジューラー"
-"が改善されました。今後は、このサイクルで始まった、両方をカバーする全体的な"
-"スケジューラーの作成を目指す取り組みが実を結ぶでしょう。Kilo において実行さ"
-"れたいくつかの作業は、`Gantt project "
-"`_ にあります。"
-
-msgid ""
-"Both Ubuntu and Red Hat Enterprise Linux include mechanisms for configuring "
-"the operating system, including preseed and kickstart, that you can use "
-"after a network boot. Typically, these are used to bootstrap an automated "
-"configuration system. Alternatively, you can use an image-based approach for "
-"deploying the operating system, such as systemimager. You can use both "
-"approaches with a virtualized infrastructure, such as when you run VMs to "
-"separate your control services and physical infrastructure."
-msgstr ""
-"Ubuntu と Red Hat Enterprise Linux にはいずれも、ネットワークブート後に使用可"
-"能なpreseed や kickstart といった、オペレーティングシステムを設定するための仕"
-"組みがあります。これらは、典型的には自動環境設定システムのブートストラップに"
-"使用されます。他の方法としては、systemimager のようなイメージベースのオペレー"
-"ティングシステムのデプロイメント手法を使うこともできます。これらの手法はどち"
-"らも、物理インフラストラクチャーと制御サービスを分離するために仮想マシンを実"
-"行する場合など、仮想化基盤と合わせて使用できます。"
-
-msgid "Burn-in Testing"
-msgstr "エージング試験"
-
-msgid ""
-"But how can you tell whether images are being successfully uploaded to the "
-"Image service? Maybe the disk that Image service is storing the images on is "
-"full or the S3 back end is down. You could naturally check this by doing a "
-"quick image upload:"
-msgstr ""
-"しかし、Image service にイメージが正しくアップロードされたことをどのように知"
-"ればいいのでしょうか? もしかしたら、Image service が保管しているイメージの"
-"ディスクが満杯、もしくは S3 のバックエンドがダウンしているかもしれません。簡"
-"易的なイメージアップロードを行なうことでこれをチェックすることができます。"
-
-msgid ""
-"By comparing a tenant's hard limit with their current resource usage, you "
-"can see their usage percentage. For example, if this tenant is using 1 "
-"floating IP out of 10, then they are using 10 percent of their floating IP "
-"quota. Rather than doing the calculation manually, you can use SQL or the "
-"scripting language of your choice and create a formatted report:"
-msgstr ""
-"テナントのハード制限と現在の使用量を比較することにより、それらの使用割合を確"
-"認できます。例えば、このテナントが Floating IP を 10 個中 1 個使用している場"
-"合、Floating IP クォータの 10% を使用していることになります。手動で計算するよ"
-"り、SQL やお好きなスクリプト言語を使用して、定型化されたレポートを作成できま"
-"す。"
-
-msgid "By default, Object Storage logs to syslog."
-msgstr "デフォルトで Object Storage は syslog にログを出力します。"
-
-msgid ""
-"By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 "
-"instead of bond0 thereby stacking one VLAN on top of another. This added an "
-"extra 4 bytes to each packet and caused a packet of 1504 bytes to be sent "
-"out which would cause problems when it arrived at an interface that only "
-"accepted 1500."
-msgstr ""
-"ミスにより、私は全てのテナント VLAN を bond0 の代わりに vlan20 にアタッチする"
-"よう OpenStack を設定した。これにより1つの VLAN が別の VLAN の上に積み重な"
-"り、各パケットに余分に4バイトが追加され、送信されるパケットサイズが 1504 バ"
-"イトになる原因となった。これがパケットサイズ 1500 のみ許容するインターフェー"
-"スに到達した際、問題の原因となったのだった!"
-
-msgid ""
-"By modifying your configuration setup, you can set up IPv6 when using ``nova-"
-"network`` for networking, and a tested setup is documented for FlatDHCP and "
-"a multi-host configuration. The key is to make ``nova-network`` think a "
-"``radvd`` command ran successfully. The entire configuration is detailed in "
-"a Cybera blog post, `“An IPv6 enabled cloud” `_."
-msgstr ""
-"セットアップした設定を変更することにより、ネットワークに ``nova-network`` を"
-"使用している場合に、IPv6 をセットアップできます。テストされたセットアップ環境"
-"が FlatDHCP とマルチホストの設定向けにドキュメント化されています。重要な点"
-"は、``radvd`` を正常に実行されたと、``nova-network`` が考えるようにすることで"
-"す。設定全体の詳細は、Cybera のブログ記事 `“An IPv6 enabled cloud” `_ にありま"
-"す。"
-
-msgid ""
-"By running this command periodically and keeping a record of the result, you "
-"can create a trending report over time that shows whether your ``nova-api`` "
-"usage is increasing, decreasing, or keeping steady."
-msgstr ""
-"このコマンドを定期的に実行し結果を記録することで、トレンドレポートを作ること"
-"ができます。これにより ``nova-api`` の使用量が増えているの"
-"か、減っているのか、安定しているのか、を知ることができます。"
-
-msgid ""
-"By taking this script and rolling it into an alert for your monitoring "
-"system (such as Nagios), you now have an automated way of ensuring that "
-"image uploads to the Image Catalog are working."
-msgstr ""
-"このスクリプトを (Nagios のような) 監視システムのアラートに組み込むことで、イメージカタログ"
-"のアップロードが動作していることを自動的に確認することができます。"
-
-msgid "CERN"
-msgstr "CERN"
-
-msgid ""
-"CONF.node_availability_zone has been renamed to CONF."
-"default_availability_zone and is used only by the ``nova-api`` and ``nova-"
-"scheduler`` services."
-msgstr ""
-"CONF.node_availability_zone は、CONF.default_availability_zone に名前が変更さ"
-"れ、``nova-api`` および ``nova-scheduler`` サービスのみで使用されます。"
-
-msgid "CONF.node_availability_zone still works but is deprecated."
-msgstr "CONF.node_availability_zone は今も機能しますが、非推奨扱いです。"
-
-msgid "Cactus"
-msgstr "Cactus"
-
-msgid "Can instances launch and be destroyed?"
-msgstr "インスタンスの起動と削除が可能か?"
-
-msgid "Can objects be stored and deleted?"
-msgstr "オブジェクトの保存と削除は可能か?"
-
-msgid "Can users be created?"
-msgstr "ユーザの作成は可能か?"
-
-msgid "Can volumes be created and destroyed?"
-msgstr "ボリュームの作成と削除は可能か?"
-
-msgid "Capacity Planning"
-msgstr "キャパシティプランニング"
-
-msgid "Cells"
-msgstr "セル"
-
-msgid ""
-"Cells and regions, which segregate an entire cloud and result in running "
-"separate Compute deployments."
-msgstr ""
-"セルおよびリージョン。クラウド全体を分離し、個別にコンピュートデプロイメント"
-"を稼働します。"
-
-msgid "Centrally Managing Logs"
-msgstr "ログの集中管理"
-
-msgid "Change access rules for shares, reset share state"
-msgstr "共有のアクセスルールの変更、共有状態のリセット"
-
-msgid "Change to the directory where Object Storage is installed:"
-msgstr "Object Storage がインストールされているディレクトリーに移動します。"
-
-msgid "Check cloud usage:"
-msgstr "クラウドの使用量を確認します。"
-
-msgid "Check for instances in a failed or weird state and investigate why."
-msgstr "故障または異常になっているインスタンスを確認し、理由を調査します。"
-
-msgid "Check for operator accounts that should be removed."
-msgstr "削除すべきオペレーターアカウントを確認します。"
-
-msgid "Check for security patches and apply them as needed."
-msgstr "セキュリティパッチを確認し、必要に応じて適用します。"
-
-msgid "Check for user accounts that should be removed."
-msgstr "削除すべきユーザーアカウントを確認します。"
-
-msgid "Check memory consumption:"
-msgstr "メモリー消費を確認します。"
-
-msgid "Check the attributes of the updated Share1:"
-msgstr "更新された Share1 の属性を確認します。"
-
-msgid "Check the port connection using the netcat utility:"
-msgstr "netcat ユーティリティーを使用してポート接続を確認します。"
-
-msgid "Check the ports for the lost IP address and update the name:"
-msgstr "失われた IP アドレス向けのポートを確認して、その名前を更新します。"
-
-msgid "Check usage and trends over the past month."
-msgstr "この 1 か月における使用量および傾向を確認します。"
-
-msgid "Check your monitoring system for alerts and act on them."
-msgstr "監視システムのアラートを確認し、それらに対処します。"
-
-msgid "Check your ticket queue for new tickets."
-msgstr "チケットキューの新しいチケットを確認します。"
-
-msgid ""
-"Clean up after an OpenStack upgrade (any unused or new services to be aware "
-"of?)."
-msgstr ""
-"OpenStack のアップグレード後に後始末を行います (未使用または新しいサービスを"
-"把握していますか?)。"
-
-msgid ""
-"Clean up by clearing all mirrors on ``br-int`` and deleting the dummy "
-"interface:"
-msgstr ""
-"``br-int`` にあるすべてのミラーを解除して、ダミーインターフェースを削除するこ"
-"とにより、クリーンアップします。"
-
-msgid "Click the :guilabel:`Create Project` button."
-msgstr ":guilabel:`プロジェクトの作成` ボタンをクリックします。"
-
-msgid "Cloud (General)"
-msgstr "Cloud (General)"
-
-msgid "Cloud Controller and Storage Proxy Failures and Maintenance"
-msgstr "クラウドコントローラーとストレージプロキシの故障とメンテナンス"
-
-msgid ""
-"Cloud computing is quite an advanced topic, and this book requires a lot of "
-"background knowledge. However, if you are fairly new to cloud computing, we "
-"recommend that you make use of the :doc:`common/glossary` at the back of the "
-"book, as well as the online documentation for OpenStack and additional "
-"resources mentioned in this book in :doc:`app-resources`."
-msgstr ""
-"クラウドコンピューティングは非常に高度な話題です。また、本書は多くの基礎知識"
-"を必要とします。しかしながら、クラウドコンピューティングに慣れていない場合、"
-"本書の最後にある :doc:`common/glossary` 、OpenStack のオンラインドキュメン"
-"ト、:doc:`app-resources` にある本書で参照されている参考資料を使うことを推奨し"
-"ます。"
-
-msgid "Cloud controller"
-msgstr "クラウドコントローラー"
-
-msgid "Cloud controller receives the renewal request and sends a response."
-msgstr "クラウドコントローラーは更新リクエストを受信し、レスポンスを返す。"
-
-msgid "Cloud controller receives the second request and sends a new response."
-msgstr ""
-"クラウドコントローラーは2度めのリクエストを受信し、新しいレスポンスを返す。"
-
-msgid "Command-Line Tools"
-msgstr "コマンドラインツール"
-
-msgid ""
-"Compare an attribute in the resource with an attribute extracted from the "
-"user's security credentials and evaluates successfully if the comparison is "
-"successful. For instance, ``\"tenant_id:%(tenant_id)s\"`` is successful if "
-"the tenant identifier in the resource is equal to the tenant identifier of "
-"the user submitting the request."
-msgstr ""
-"リソースの属性をユーザーのセキュリティクレデンシャルから抽出した属性と比較"
-"し、一致した場合に成功と評価されます。たとえば、リソースのテナント識別子がリ"
-"クエストを出したユーザーのテナント識別子と一致すれば、 ``\"tenant_id:"
-"%(tenant_id)s\"`` が成功します。"
-
-msgid "Compute"
-msgstr "コンピュート"
-
-msgid "Compute Node Failures and Maintenance"
-msgstr "コンピュートノードの故障とメンテナンス"
-
-msgid "Compute nodes"
-msgstr "コンピュートノード"
-
-msgid ""
-"Compute nodes can fail the same way a cloud controller can fail. A "
-"motherboard failure or some other type of hardware failure can cause an "
-"entire compute node to go offline. When this happens, all instances running "
-"on that compute node will not be available. Just like with a cloud "
-"controller failure, if your infrastructure monitoring does not detect a "
-"failed compute node, your users will notify you because of their lost "
-"instances."
-msgstr ""
-"コンピュートノードは、クラウドコントローラーの障害と同じように故障します。マ"
-"ザーボードや他の種類のハードウェア障害により、コンピュートノード全体がオフラ"
-"インになる可能性があります。これが発生した場合、そのコンピュートノードで動作"
-"中のインスタンスがすべて利用できなくなります。ちょうどクラウドコントローラー"
-"が発生した場合のように、インフラ監視機能がコンピュートノードの障害を検知しな"
-"くても、インスタンスが失われるので、ユーザーが気づくでしょう。"
-
-msgid ""
-"Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and "
-"approximately 40 GB of ephemeral storage per core."
-msgstr ""
-"コンピュートノードには 24~48 コアがあり、1 コアあたり 4GB 以上の RAM と、1"
-"コアあたり約 40GB の一時ストレージがあります。"
-
-msgid "Compute quota descriptions"
-msgstr "Compute のクォータの説明"
-
-msgid "Compute service - Edit the configuration file and restart the service."
-msgstr "Compute サービス - 設定ファイルを編集して、サービスを再起動します。"
-
-msgid "Compute service, including networking components."
-msgstr "Compute サービス。ネットワークコンポーネントも含む。"
-
-msgid "Conclusion"
-msgstr "まとめ"
-
-msgid "Configuration Management"
-msgstr "構成管理"
-
-msgid "Configuration changes to ``nova.conf``."
-msgstr "``nova.conf`` の設定を変更"
-
-msgid "Connect the qemu-nbd device to the disk."
-msgstr "qemu-nbd デバイスをディスクに接続します。"
-
-msgid "Connect the qemu-nbd device to the disk:"
-msgstr "qemu-nbd デバイスをディスクに接続します。"
-
-msgid ""
-"Consider adopting structure and options from the service configuration files "
-"and merging them with existing configuration files. The `OpenStack "
-"Configuration Reference `_ contains new, updated, and deprecated options for most services."
-msgstr ""
-"このサービス設定ファイルから構造とオプションを適用して、既存の設定ファイルに"
-"マージすることを検討してください。ほとんどのサービスは、`OpenStack "
-"Configuration Reference `_ に新しいオプション、更新されたオプション、非推奨になったオプションがあり"
-"ます。"
-
-msgid ""
-"Consider the approach to upgrading your environment. You can perform an "
-"upgrade with operational instances, but this is a dangerous approach. You "
-"might consider using live migration to temporarily relocate instances to "
-"other compute nodes while performing upgrades. However, you must ensure "
-"database consistency throughout the process; otherwise your environment "
-"might become unstable. Also, don't forget to provide sufficient notice to "
-"your users, including giving them plenty of time to perform their own "
-"backups."
-msgstr ""
-"お使いの環境をアップグレードする方法を検討します。運用中のインスタンスがある"
-"状態でアップグレードを実行できます。しかし、これは非常に危険なアプローチで"
-"す。アップグレードの実行中は、ライブマイグレーションを使用して、インスタンス"
-"を別のコンピュートノードに一時的に再配置することを考慮すべきでしょう。しかし"
-"ながら、プロセス全体を通して、データベースの整合性を担保する必要があります。"
-"そうでなければ、お使いの環境が不安定になるでしょう。また、ユーザーに十分に注"
-"意を促すことを忘れてはいけません。バックアップを実行するために時間の猶予を与"
-"えることも必要です。"
-
-msgid ""
-"Consider the example where you want to take a snapshot of a persistent block "
-"storage volume, detected by the guest operating system as ``/dev/vdb`` and "
-"mounted on ``/mnt``. The fsfreeze command accepts two arguments:"
-msgstr ""
-"永続ブロックストレージのスナップショットを取得したい例を検討します。ゲストオ"
-"ペレーティングシステムにより ``/dev/vdb`` として認識され、 ``/mnt`` にマウン"
-"トされているとします。fsfreeze コマンドが 2 つの引数を受け取ります:"
-
-msgid "Consider the following example:"
-msgstr "次のような例を考えてみましょう。"
-
-msgid ""
-"Consider the impact of an upgrade to users. The upgrade process interrupts "
-"management of your environment including the dashboard. If you properly "
-"prepare for the upgrade, existing instances, networking, and storage should "
-"continue to operate. However, instances might experience intermittent "
-"network interruptions."
-msgstr ""
-"アップグレードによるユーザーへの影響を考慮してください。アップグレードプロセ"
-"スは、ダッシュボードを含む、環境の管理機能を中断します。このアップグレードを"
-"正しく準備する場合、既存のインスタンス、ネットワーク、ストレージは通常通り動"
-"作し続けるべきです。しかしながら、インスタンスがネットワークの中断を経験する"
-"かもしれません。"
-
-msgid ""
-"Consider updating your SQL server configuration as described in the "
-"`Installation Tutorials and Guides `_."
-msgstr ""
-"`Installation Tutorials and Guides `_ に記載されているように、SQL サーバーの設定の更新を考"
-"慮してください。"
-
-msgid ""
-"Consider using a public cloud to test the scalability limits of your cloud "
-"controller configuration. Most public clouds bill by the hour, which means "
-"it can be inexpensive to perform even a test with many nodes."
-msgstr ""
-"お使いのクラウドコントローラーの設定に関するスケーラビリティーの限界をテスト"
-"するために、パブリッククラウドを使用することを考慮します。多くのパブリックク"
-"ラウドは時間単位で課金されます。つまり、多くのノードを用いてテストしても、そ"
-"れほど費用がかかりません。"
-
-msgid ""
-"Considered experimental. A new service, nova-cells. Each cell has a full "
-"nova installation except nova-api."
-msgstr ""
-"試験的とみなされます。新しいサービス nova-cells。各セルには nova-api 以外の"
-"nova 一式がインストールされています。"
-
-msgid "Console (boot up messages) for VM instances:"
-msgstr "仮想マシンインスタンスのコンソール (起動メッセージ):"
-
-msgid "Container quotas"
-msgstr "コンテナーのクォータ"
-
-msgid ""
-"Contains a reference listing of all configuration options for core and "
-"integrated OpenStack services by release version"
-msgstr ""
-"リリースバージョン毎に、OpenStack のコアサービス、統合されたサービスのすべて"
-"の設定オプションの一覧が載っています"
-
-msgid ""
-"Contains each floating IP address that was added to Compute. This table is "
-"related to the ``fixed_ips`` table by way of the ``floating_ips."
-"fixed_ip_id`` column."
-msgstr ""
-"Compute に登録された各 Floating IP アドレス。このテーブルは ``floating_ips."
-"fixed_ip_id`` 列で ``fixed_ips`` テーブルと関連付けられます。"
-
-msgid ""
-"Contains each possible IP address for the subnet(s) added to Compute. This "
-"table is related to the ``instances`` table by way of the ``fixed_ips."
-"instance_uuid`` column."
-msgstr ""
-"Compute に登録されたサブネットの利用可能な各 IP アドレス。このテーブルは "
-"``fixed_ips.instance_uuid`` 列で ``instances`` テーブルと関連付けられます。"
-
-msgid "Contains guidelines for designing an OpenStack cloud"
-msgstr "OpenStack クラウドの設計に関するガイドライン"
-
-msgid ""
-"Contains how-to information for managing an OpenStack cloud as needed for "
-"your use cases, such as storage, computing, or software-defined-networking"
-msgstr ""
-"あなたのユースケースに合わせて、ストレージ、コンピューティング、Software-"
-"defined-networking など OpenStack クラウドを管理する方法が書かれています"
-
-msgid "Contents"
-msgstr "内容"
-
-msgid ""
-"Continuing the diagnosis the next morning was kick started by another "
-"identical failure. We quickly got the message queue running again, and tried "
-"to work out why Rabbit was suffering from so much network traffic. Enabling "
-"debug logging on nova-api quickly brought understanding. A ``tail -f /var/"
-"log/nova/nova-api.log`` was scrolling by faster than we'd ever seen before. "
-"CTRL+C on that and we could plainly see the contents of a system log spewing "
-"failures over and over again - a system log from one of our users' instances."
-msgstr ""
-"翌朝の継続調査は別の同様の障害でいきなり始まった。我々は急いで RabbitMQ サー"
-"バーを再起動し、何故 RabbitMQ がそのような過剰なネットワーク負荷に直面してい"
-"るのかを調べようとした。nova-api のデバッグログを出力することにより、理由はす"
-"ぐに判明した。``tail -f /var/log/nova/nova-api.log`` は我々が見たこともない速"
-"さでスクロールしていた。CTRL+C でコマンドを止め、障害を吐き出していたシステム"
-"ログの内容をはっきり目にすることが出来た。-我々のユーザの1人のインスタンス"
-"からのシステムログだった。"
-
-msgid ""
-"Copy contents of configuration backup directories that you created during "
-"the upgrade process back to ``/etc/`` directory."
-msgstr ""
-"アップグレード作業中に作成した、設定ディレクトリーのバックアップの中身を ``/"
-"etc/`` にコピーします。"
-
-msgid ""
-"Copy the code as shown below into ``ip_whitelist.py``. The following code is "
-"a middleware example that restricts access to a container based on IP "
-"address as explained at the beginning of the section. Middleware passes the "
-"request on to another application. This example uses the swift \"swob\" "
-"library to wrap Web Server Gateway Interface (WSGI) requests and responses "
-"into objects for swift to interact with. When you're done, save and close "
-"the file."
-msgstr ""
-"以下の示すコードを ``ip_whitelist.py`` にコピーします。以下のコードは、このセ"
-"クションの初めに説明されたように、IP アドレスに基づいてコンテナーへのアクセス"
-"を制限するミドルウェアの例です。ミドルウェアは、他のアプリケーションへのリク"
-"エストを通過させます。この例は、swift \"swob\" ライブラリーを使用して、swift "
-"が通信するオブジェクトに関する Web Server Gateway Interface (WSGI) のリクエス"
-"トとレスポンスをラップします。完了したら、ファイルを保存して閉じます。"
-
-msgid "Create Share"
-msgstr "共有の作成"
-
-msgid "Create Snapshots"
-msgstr "スナップショットの作成"
-
-msgid "Create a Share Network"
-msgstr "共有ネットワークの作成"
-
-msgid ""
-"Create a clone of your automated configuration infrastructure with changed "
-"package repository URLs."
-msgstr ""
-"変更したパッケージリポジトリー URL を用いて、自動化された設定インフラストラク"
-"チャーのクローンを作成する。"
-
-msgid "Create a container called ``middleware-test``:"
-msgstr "``middleware-test`` という名前のコンテナーを作成します。"
-
-msgid "Create a port on the ``Public_AGILE`` network:"
-msgstr "``Public_AGILE`` ネットワークにポートを作成します。"
-
-msgid "Create a public share using :command:`manila create`."
-msgstr ":command:`manila create` を使用して、パブリック共有を作成します。"
-
-msgid "Create a share network"
-msgstr "共有ネットワークの作成"
-
-msgid "Create an OpenStack Development Environment"
-msgstr "OpenStack 開発環境の作成"
-
-msgid "Create and bring up a dummy interface, ``snooper0``:"
-msgstr "ダミーインターフェース ``snooper0`` を作成して起動します。"
-
-msgid "Create context"
-msgstr "コンテキストの作成"
-
-msgid ""
-"Create mirror of ``patch-tun`` to ``snooper0`` (returns UUID of mirror port):"
-msgstr ""
-"``patch-tun`` のミラーを ``snooper0`` に作成します (ミラーポートの UUID を返"
-"します)。"
-
-msgid "Create share"
-msgstr "共有の作成"
-
-msgid "Create share networks"
-msgstr "共有ネットワークの作成"
-
-msgid "Create snapshots"
-msgstr "スナップショットの作成"
-
-msgid "Create the ``ip_scheduler.py`` Python source code file:"
-msgstr "``ip_scheduler.py`` Python ソースコードファイルを作成します。"
-
-msgid "Create the ``ip_whitelist.py`` Python source code file:"
-msgstr "``ip_whitelist.py`` Python ソースコードファイルを作成します。"
-
-msgid "Create ways to automatically test these actions."
-msgstr "それらのアクションに対して自動テストを作成する。"
-
-msgid "Create, update, delete, and force-delete shares"
-msgstr "共有の作成、更新、削除、強制削除"
-
-msgid "Creating New Users"
-msgstr "新規ユーザーの作成"
-
-msgid "Customization"
-msgstr "カスタマイズ"
-
-msgid "Customizing Authorization"
-msgstr "権限のカスタマイズ"
-
-msgid "Customizing Object Storage (Swift) Middleware"
-msgstr "Object Storage (Swift) ミドルウェアのカスタマイズ"
-
-msgid "Customizing the Dashboard (Horizon)"
-msgstr "Dashboard (Horizon) のカスタマイズ"
-
-msgid "Customizing the OpenStack Compute (nova) Scheduler"
-msgstr "OpenStack Compute (nova) スケジューラーのカスタマイズ"
-
-msgid "DAIR"
-msgstr "DAIR"
-
-msgid ""
-"DAIR is hosted at two different data centers across Canada: one in Alberta "
-"and the other in Quebec. It consists of a cloud controller at each location, "
-"although, one is designated the \"master\" controller that is in charge of "
-"central authentication and quotas. This is done through custom scripts and "
-"light modifications to OpenStack. DAIR is currently running Havana."
-msgstr ""
-"DAIR はカナダの 2 つの異なるデータセンター(1 つはアルバータ州、もう 1 つは"
-"ケベック州)でホスティングされています。各拠点にはそれぞれクラウドコントロー"
-"ラーがありますが、その 1 つが「マスター」コントローラーに指定され、認証と"
-"クォータ管理を集中して行っています。これは、特製スクリプトと OpenStack の軽微"
-"な改造により実現されています。DAIR は現在、Havana で運用されています。"
-
-msgid ""
-"DHCP agents running on OpenStack networks run in namespaces similar to the "
-"l3-agents. DHCP namespaces are named ``qdhcp-`` and have a TAP device "
-"on the integration bridge. Debugging of DHCP issues usually involves working "
-"inside this network namespace."
-msgstr ""
-"OpenStack ネットワークで動作している DHCP エージェントは、l3-agent と同じよう"
-"な名前空間で動作します。DHCP 名前空間は、 ``qdhcp-`` という名前を持ち、"
-"統合ブリッジに TAP デバイスを持ちます。DHCP の問題のデバッグは、通常この名前"
-"空間の中での動作に関連します。"
-
-msgid ""
-"DHCP traffic uses UDP. The client sends from port 68 to port 67 on the "
-"server. Try to boot a new instance and then systematically listen on the "
-"NICs until you identify the one that isn't seeing the traffic. To use "
-"``tcpdump`` to listen to ports 67 and 68 on br100, you would do:"
-msgstr ""
-"DHCP トラフィックは UDP を使います。そして、クライアントは 68 番ポートから"
-"サーバーの 67 番ポートへパケットを送信します。新しいインスタンスを起動し、ト"
-"ラフィックが見えない NIC を特定できるまで、各 NIC を順番にリッスンしてくださ"
-"い。 ``tcpdump`` で br100 上のポート 67、68 をリッスンするには、こうしま"
-"す。"
-
-msgid "DNS service (designate)"
-msgstr "DNS サービス (designate)"
-
-msgid "Daily"
-msgstr "日次"
-
-msgid ""
-"Dashboard - In typical environments, updating Dashboard only requires "
-"restarting the Apache HTTP service."
-msgstr ""
-"Dashboard - 一般的な環境では、 Dashboard を更新するのに必要な作業は Apache "
-"HTTP サービスの再起動のみです。"
-
-msgid "Dashboard node"
-msgstr "ダッシュボードノード"
-
-msgid "Data processing service for OpenStack (sahara)"
-msgstr "OpenStack の Data Processing サービス (sahara)"
-
-msgid "Database Backups"
-msgstr "データベースのバックアップ"
-
-msgid "Database Connectivity"
-msgstr "データベース接続性"
-
-msgid "Database as a Service (trove)"
-msgstr "Database as a Service (trove)"
-
-msgid "Databases"
-msgstr "データベース"
-
-msgid "Date"
-msgstr "リリース日"
-
-msgid "Dealing with Network Namespaces"
-msgstr "ネットワーク名前空間への対応"
-
-msgid "Debugging DHCP Issues with nova-network"
-msgstr "nova-network の DHCP 問題のデバッグ"
-
-msgid "Debugging DNS Issues"
-msgstr "DNS の問題をデバッグする"
-
-msgid "Dec 13, 2012"
-msgstr "2012年12月13日"
-
-msgid "Dec 16, 2013"
-msgstr "2013年12月16日"
-
-msgid ""
-"Decrease DHCP timeouts by modifying the :file:`/etc/nova/nova.conf` file on "
-"the compute nodes back to the original value for your environment."
-msgstr ""
-"コンピュートノードにおいて :file:`/etc/nova/nova.conf` ファイルを変更すること"
-"により、DHCP タイムアウトをお使いの環境の元の値に戻します。"
-
-msgid ""
-"Dedicate entire disks to certain partitions. For example, you could allocate "
-"disk one and two entirely to the boot, root, and swap partitions under a "
-"RAID 1 mirror. Then, allocate disk three and four entirely to the LVM "
-"partition, also under a RAID 1 mirror. Disk I/O should be better because I/O "
-"is focused on dedicated tasks. However, the LVM partition is much smaller."
-msgstr ""
-"全ディスク領域を特定のパーティションに割り当てます。例えば、ディスク 1 と 2 "
-"すべてを RAID 1 ミラーとして boot、root、swapパーティションに割り当てます。そ"
-"して、ディスク 3 と 4 すべてを、同様に RAID 1 ミラーとしてLVMパーティションに"
-"割り当てます。I/O は専用タスクにフォーカスするため、ディスクの I/O は良くなる"
-"はずです。しかし、LVM パーティションははるかに小さくなります。"
-
-msgid "Default drop rule for unmatched traffic."
-msgstr "一致しない通信のデフォルト破棄ルール。"
-
-msgid "Define new share types"
-msgstr "新しい共有種別の作成"
-
-msgid "Delete Share"
-msgstr "共有の削除"
-
-msgid ""
-"Delete the instance and create a new instance using the ``--nic port-id`` "
-"option."
-msgstr ""
-"インスタンスを削除し、``--nic port-id`` オプションを使用して新しいインスタン"
-"スを作成します。"
-
-msgid "Delete the ports that are not needed:"
-msgstr "必要ないポートを削除します。"
-
-msgid "Deleting Images"
-msgstr "イメージの削除"
-
-msgid ""
-"Depending on the type of server, the contents and order of your package list "
-"might vary from this example."
-msgstr ""
-"サーバーの種類に応じて、パッケージ一覧の内容や順番がこの例と異なるかもしれませ"
-"ん。"
-
-msgid ""
-"Depending on your specific configuration, upgrading all packages might "
-"restart or break services supplemental to your OpenStack environment. For "
-"example, if you use the TGT iSCSI framework for Block Storage volumes and "
-"the upgrade includes new packages for it, the package manager might restart "
-"the TGT iSCSI services and impact connectivity to volumes."
-msgstr ""
-"お使いの設定によっては、すべてのパッケージを更新することにより、OpenStack 環"
-"境の補助サービスを再起動または破壊するかもしれません。例えば、Block Storage "
-"ボリューム向けに TGT iSCSI フレームワークを使用していて、それの新しいパッケー"
-"ジがアップグレードに含まれる場合、パッケージマネージャーが TGT iSCSI サービス"
-"を再起動して、ボリュームへの接続性に影響を与えるかもしれません。"
-
-msgid "Deployment"
-msgstr "デプロイ"
-
-msgid "Deprecated"
-msgstr "非推奨"
-
-msgid "Deprecation of Nova Network"
-msgstr "nova-network の非推奨"
-
-msgid ""
-"Describes a manual installation process, as in, by hand, without automation, "
-"for multiple distributions based on a packaging system:"
-msgstr ""
-"自動化せずに、手動で行う場合のインストール手順について説明しています。パッ"
-"ケージングシステムがある複数のディストリビューション向けのインストールガイド"
-"があります。"
-
-msgid ""
-"Describes potential strategies for making your OpenStack services and "
-"related controllers and data stores highly available"
-msgstr ""
-"OpenStack サービス、関連するコントローラーやデータストアを高可用にするために"
-"取りうる方策に説明しています"
-
-msgid "Description"
-msgstr "説明"
-
-msgid ""
-"Design and create an architecture for your first nontrivial OpenStack cloud. "
-"After you read this guide, you'll know which questions to ask and how to "
-"organize your compute, networking, and storage resources and the associated "
-"software packages."
-msgstr ""
-"初めての本格的な OpenStack クラウドのアーキテクチャーの設計と構築。この本を読"
-"み終えると、コンピュート、ネットワーク、ストレージのリソースを選ぶにはどんな"
-"質問を自分にすればよいのか、どのように組み上げればよいのかや、どんなソフト"
-"ウェアパッケージが必要かが分かることでしょう。"
-
-msgid ""
-"Designate a server as the central logging server. The best practice is to "
-"choose a server that is solely dedicated to this purpose. Create a file "
-"called ``/etc/rsyslog.d/server.conf`` with the following contents:"
-msgstr ""
-"集中ログサーバーとして使用するサーバーを決めます。ログ専用のサーバーを利用す"
-"るのが最も良いです。 ``/etc/rsyslog.d/server.conf`` を次のように作成します。"
-
-msgid ""
-"Despite only outputting the newly added rule, this operation is additive:"
-msgstr "新しく追加されたルールのみが出力されますが、この操作は追加操作です:"
-
-msgid ""
-"Determine which OpenStack packages are installed on your system. Use the :"
-"command:`dpkg --get-selections` command. Filter for OpenStack packages, "
-"filter again to omit packages explicitly marked in the ``deinstall`` state, "
-"and save the final output to a file. For example, the following command "
-"covers a controller node with keystone, glance, nova, neutron, and cinder:"
-msgstr ""
-"お使いの環境にインストールされている OpenStack パッケージを判断します。 :"
-"command:`dpkg --get-selections` コマンドを使用します。OpenStack パッケージを"
-"フィルターします。再びフィルターして、明示的に ``deinstall`` 状態になっている"
-"パッケージを省略します。最終出力をファイルに保存します。例えば、以下のコマン"
-"ドは、keystone、glance、nova、neutron、cinder を持つコントローラーノードを取"
-"り扱います。"
-
-msgid "Determine which servers the RabbitMQ alarms are coming from."
-msgstr "RabbitMQ のアラームが発生しているサーバーを特定します。"
-
-msgid "Determining Which Component Is Broken"
-msgstr "故障しているコンポーネントの特定"
-
-msgid ""
-"Develop an upgrade procedure and assess it thoroughly by using a test "
-"environment similar to your production environment."
-msgstr ""
-"アップグレード手順を作成し、本番環境と同じようなテスト環境を使用して、全体を"
-"評価します。"
-
-msgid "Diablo"
-msgstr "Diablo"
-
-msgid "Diagnose Your Compute Nodes"
-msgstr "コンピュートノードの診断"
-
-msgid "Diane Fleming"
-msgstr "Diane Fleming"
-
-msgid ""
-"Diane works on the OpenStack API documentation tirelessly. She helped out "
-"wherever she could on this project."
-msgstr ""
-"Diane は OpenStack API ドキュメントプロジェクトで非常に熱心に活動しています。"
-"このプロジェクトでは自分ができるところであれば、どこでも取り組んでくれまし"
-"た。"
-
-msgid "Differences Between Various Drivers"
-msgstr "ドライバーによる違い"
-
-msgid "Direct incoming traffic from VM to the security group chain."
-msgstr "仮想マシンからセキュリティグループチェインへの直接受信。"
-
-msgid "Direct packets associated with a known session to the RETURN chain."
-msgstr "既知のセッションに関連付けられたパケットの RETURN チェインへの転送。"
-
-msgid "Direct traffic from the VM interface to the security group chain."
-msgstr "仮想マシンインスタンスからセキュリティグループチェインへの直接通信。"
-
-msgid ""
-"Disable scheduling of new VMs to the node, optionally providing a reason "
-"comment:"
-msgstr ""
-"新規 VM のノードへのスケジューリングを無効化し、必要に応じて理由をコメントと"
-"して指定します。"
-
-msgid "Disappearing Images"
-msgstr "イメージの消失"
-
-msgid "Disconnect the qemu-nbd device."
-msgstr "qemu-nbd デバイスを切断します。"
-
-msgid ""
-"Discrete regions with separate API endpoints and no coordination between "
-"regions."
-msgstr ""
-"別々の API エンドポイントを持ち、リージョン間の協調がない、独立したリージョ"
-"ン。"
-
-msgid "Disk"
-msgstr "ディスク"
-
-msgid "Disk partitioning and disk array setup for scalability"
-msgstr ""
-"スケーラビリティー確保に向けたディスクのパーティショニングおよびディスクアレ"
-"イの設定"
-
-msgid "Disk space"
-msgstr "ディスク領域"
-
-msgid "Disk space is cheap these days. Data recovery is not."
-msgstr "今日、ディスクスペースは安価である。データの復元はそうではない。"
-
-msgid "Disk usage"
-msgstr "ディスク使用量"
-
-msgid "Distributed Virtual Router"
-msgstr "分散仮想ルーター"
-
-msgid ""
-"Do a full manual install by using the `Installation Tutorials and Guides "
-"`_ for your "
-"platform. Review the final configuration files and installed packages."
-msgstr ""
-"お使いのプラットフォーム用の `Installation Tutorials and Guides `_ を使用して、完全な手動イン"
-"ストールを実行する。最終的な設定ファイルとインストールされたパッケージをレ"
-"ビューします。"
-
-msgid ""
-"Do not mount a share without an access rule! This can lead to an exception."
-msgstr ""
-"アクセスルールなしで共有をマウントしてはいけません。これは、例外を引き起こす"
-"可能性があります。"
-
-msgid "Double VLAN"
-msgstr "二重 VLAN"
-
-msgid "Down the Rabbit Hole"
-msgstr "ウサギの穴に落ちて"
-
-msgid "Downgrade OpenStack packages."
-msgstr "OpenStack パッケージをダウングレードします。"
-
-msgid ""
-"Downgrading packages is by far the most complicated step; it is highly "
-"dependent on the distribution and the overall administration of the system."
-msgstr ""
-"パッケージのダウングレードは群を抜いて複雑な手順です。ディストリビューショ"
-"ン、システム管理全体に非常に依存します。"
-
-msgid ""
-"Downtime, whether planned or unscheduled, is a certainty when running a "
-"cloud. This chapter aims to provide useful information for dealing "
-"proactively, or reactively, with these occurrences."
-msgstr ""
-"停止時間(計画的なものと予定外のものの両方)はクラウドを運用するときに確実に"
-"発生します。本章は、プロアクティブまたはリアクティブに、これらの出来事に対処"
-"するために有用な情報を提供することを目的としています。"
-
-msgid "Driver Quality Improvements"
-msgstr "ドライバー品質の改善"
-
-msgid "Drop packets that are not associated with a state."
-msgstr "どの状態にも関連付けられていないパケットの破棄。"
-
-msgid "Drop traffic without an IP/MAC allow rule."
-msgstr "IP/MAC 許可ルールにない通信の破棄。"
-
-msgid ""
-"During an upgrade, operators can add configuration options to ``nova.conf`` "
-"which lock the version of RPC messages and allow live upgrading of the "
-"services without interruption caused by version mismatch. The configuration "
-"options allow the specification of RPC version numbers if desired, but "
-"release name alias are also supported. For example:"
-msgstr ""
-"運用者は、アップグレード中、RPC バージョンをロックして、バージョン不一致によ"
-"り引き起こされる中断なしでサービスのライブアップグレードできるよう、``nova."
-"conf`` に設定オプションを追加できます。この設定オプションでは、必要であれば "
-"RPC バージョン番号を指定できますが、リリース名のエイリアスもサポートされます。"
-"例:"
-
-msgid ""
-"EC2 compatibility credentials can be downloaded by selecting :guilabel:"
-"`Project`, then :guilabel:`Compute`, then :guilabel:`Access & Security`, "
-"then :guilabel:`API Access` to display the :guilabel:`Download EC2 "
-"Credentials` button. Click the button to generate a ZIP file with server "
-"x509 certificates and a shell script fragment. Create a new directory in a "
-"secure location because these are live credentials containing all the "
-"authentication information required to access your cloud identity, unlike "
-"the default ``user-openrc``. Extract the ZIP file here. You should have "
-"``cacert.pem``, ``cert.pem``, ``ec2rc.sh``, and ``pk.pem``. The ``ec2rc.sh`` "
-"is similar to this:"
-msgstr ""
-"EC2 互換のクレデンシャルをダウンロードするには、 :guilabel:`プロジェクト"
-"` 、 :guilabel:`コンピュート` 、 :guilabel:`アクセスとセキュリティ` 、 :"
-"guilabel:`API アクセス` の順に選択し、 :guilabel:`EC2 認証情報のダウンロード"
-"` ボタンを表示します。このボタンをクリックすると、 サーバーの x509 証明書と"
-"シェルスクリプトフラグメントが含まれた ZIP が生成されます。これらのファイル"
-"は、デフォルトの ``user-openrc`` とは異なり、クラウドのアイデンティティへのア"
-"クセスに必要なすべての認証情報を含む有効なクレデンシャルなので、セキュリティ"
-"保護された場所に新規ディレクトリを作成して、そこで ZIP ファイルを展開します。"
-"``cacert.pem``、``cert.pem``、``ec2rc.sh``、および ``pk.pem`` が含まれている"
-"はずです。``ec2rc.sh`` には、以下と似たような内容が記述されています。"
-
-msgid ""
-"Each OpenStack cloud is different even if you have a near-identical "
-"architecture as described in this guide. As a result, you must still test "
-"upgrades between versions in your environment using an approximate clone of "
-"your environment."
-msgstr ""
-"このガイドに記載されているものとほぼ同一のアーキテクチャーであっても、各 "
-"OpenStack クラウドはそれぞれ異なります。そのため、お使いの環境とほぼ同等のク"
-"ローンを使用して、お使いの環境のバージョン間でアップグレードをテストする必要"
-"があります。"
-
-msgid ""
-"Each method provides different functionality and can be best divided into "
-"two groups:"
-msgstr ""
-"メソッド毎に異なる機能を提供しますが、このメソッドは 2 つのグループに分類する"
-"と良いでしょう。"
-
-msgid ""
-"Each site runs a different configuration, as a resource cells in an "
-"OpenStack Compute cells setup. Some sites span multiple data centers, some "
-"use off compute node storage with a shared file system, and some use on "
-"compute node storage with a non-shared file system. Each site deploys the "
-"Image service with an Object Storage back end. A central Identity, "
-"dashboard, and Compute API service are used. A login to the dashboard "
-"triggers a SAML login with Shibboleth, which creates an account in the "
-"Identity service with an SQL back end. An Object Storage Global Cluster is "
-"used across several sites."
-msgstr ""
-"各サイトは(OpenStack Compute のセル設定におけるリソースセルとして)異なる設"
-"定で実行されています。数サイトは複数データセンターに渡り、コンピュートノード"
-"外のストレージを共有ストレージで使用しているサイトもあれば、コンピュートノー"
-"ド上のストレージを非共有型ファイルシステムで使用しているサイトもあります。各"
-"サイトは Object Storage バックエンドを持つ Image service をデプロイしていま"
-"す。中央の Identity、dashboard、Compute API サービスが使用されています。"
-"dashboard へのログインが Shibboleth の SAML ログインのトリガーになり、SQL "
-"バックエンドの Identity サービスのアカウントを作成します。Object Storage "
-"Global Cluster は、いくつかの拠点をまたがり使用されます。"
-
-msgid ""
-"Early indications are that it does do this well for a base set of scenarios, "
-"such as using the ML2 plug-in with Open vSwitch, one flat external network "
-"and VXLAN tenant networks. However, it does appear that there are problems "
-"with the use of VLANs, IPv6, Floating IPs, high north-south traffic "
-"scenarios and large numbers of compute nodes. It is expected these will "
-"improve significantly with the next release, but bug reports on specific "
-"issues are highly desirable."
-msgstr ""
-"初期の兆候では、ML2 プラグインと Open vSwitch、1 つのフラットな外部ネット"
-"ワークと VXLAN のテナントネットワークといった基本的なシナリオについては、う"
-"まく動作するようです。しかしながら、VLAN、IPv6、Floating IP、大量のノース・サウス通信"
-"のシナリオ、大量のコンピュートノードなどで問題が発生しはじめます。これらは次"
-"のリリースで大幅に改善されることが期待されていますが、特定の問題におけるバグ"
-"報告が強く望まれています。"
-
-msgid "Easier Upgrades"
-msgstr "より簡単なアップグレード"
-
-msgid ""
-"Either ``snap``, which means that the volume was created from a snapshot, or "
-"anything other than ``snap`` (a blank string is valid). In the preceding "
-"example, the volume was not created from a snapshot, so we leave this field "
-"blank in our following example."
-msgstr ""
-"ボリュームがスナップショットから作成されたことを意味する ``snap`` 、または "
-"``snap`` 以外の何か (空文字列も有効) です。上の例では、ボリュームがスナップ"
-"ショットから作成されていません。そのため、この項目を以下の例において空白にし"
-"てあります。"
-
-msgid ""
-"Either approach is valid. Use the approach that matches your experience."
-msgstr ""
-"どちらのアプローチも有効です。あなたの経験に合うアプローチを使用してください。"
-
-msgid "ElasticSearch"
-msgstr "ElasticSearch"
-
-msgid "Email address"
-msgstr "電子メールアドレス"
-
-msgid ""
-"Emma Richards of Rackspace Guest Relations took excellent care of our lunch "
-"orders and even set aside a pile of sticky notes that had fallen off the "
-"walls."
-msgstr ""
-"Rackspace ゲストリレーションズの Emma Richards は、私たちのランチの注文を素晴"
-"らしく面倒を見てくれて、更に壁から剥がれ落ちた付箋紙の山を脇においてくれまし"
-"た。"
-
-msgid "Enable scheduling of VMs to the node:"
-msgstr "ノードへの仮想マシンのスケジュールを有効化します。"
-
-msgid "Enabled"
-msgstr "有効"
-
-msgid "Enabling IPv6 Support"
-msgstr "IPv6 サポートの有効化"
-
-msgid "Encode certificate in DER format"
-msgstr "証明書を DER 形式でエンコードします"
-
-msgid "End-User Configuration of Security Groups"
-msgstr "セキュリティグループのエンドユーザー設定"
-
-msgid "End-of-life"
-msgstr "エンドオブライフ"
-
-msgid ""
-"Ensure that cryptsetup is installed, and ensure that ``python-"
-"barbicanclient`` Python package is installed"
-msgstr ""
-"cryptsetup がインストールされ、``python-barbicanclient`` Python パッケージが"
-"インストールされていることを確認してください。"
-
-msgid "Ensure that the operating system has recognized the new disk:"
-msgstr ""
-"オペレーティングシステムが新しいディスクを認識していることを確認します。"
-
-msgid "Ephemeral"
-msgstr "エフェメラル"
-
-msgid "Essex"
-msgstr "Essex"
-
-msgid ""
-"Evaluate successfully if a field of the resource specified in the current "
-"request matches a specific value. For instance, ``\"field:networks:"
-"shared=True\"`` is successful if the attribute shared of the network "
-"resource is set to ``true``."
-msgstr ""
-"現在のリクエストに指定されたリソースの項目が指定された値と一致すれば、成功と"
-"評価されます。たとえば、ネットワークリソースの shared 属性が ``true`` に設定"
-"されている場合、 ``\"field:networks:shared=True\"`` が成功します。"
-
-msgid ""
-"Evaluate successfully if the user submitting the request has the specified "
-"role. For instance, ``\"role:admin\"`` is successful if the user submitting "
-"the request is an administrator."
-msgstr ""
-"リクエストを出したユーザーが指定された役割を持っていれば、成功と評価されま"
-"す。たとえば、リクエストを出しているユーザーが管理者ならば、 ``\"role:admin"
-"\"`` が成功します。"
-
-msgid ""
-"Even at smaller-scale testing, look for excess network packets to determine "
-"whether something is going horribly wrong in inter-component communication."
-msgstr ""
-"より小規模なテストにおいてさえも、過剰なネットワークパケットを探して、コン"
-"ポーネント間の通信で何かとてつもなくおかしくなっていないかどうかを判断しま"
-"す。"
-
-msgid ""
-"Ever have one of those days where all of the sudden you get the Google "
-"results you were looking for? Well, that's what happened here. I was looking "
-"for information on dhclient and why it dies when it can't renew its lease "
-"and all of the sudden I found a bunch of OpenStack and dnsmasq discussions "
-"that were identical to the problem we were seeing!"
-msgstr ""
-"探し続けてきた Google の検索結果が突然得られたという事態をお分かりだろうか?"
-"えっと、それがここで起こったことだ。私は dhclient の情報と、何故 dhclient が"
-"そのリースを更新できない場合に死ぬのかを探していて、我々が遭遇したのと同じ問"
-"題についての OpenStack と dnsmasq の議論の束を突然発見した。"
-
-msgid "Everett Toews"
-msgstr "Everett Toews"
-
-msgid ""
-"Everett is a developer advocate at Rackspace making OpenStack and the "
-"Rackspace Cloud easy to use. Sometimes developer, sometimes advocate, and "
-"sometimes operator, he's built web applications, taught workshops, given "
-"presentations around the world, and deployed OpenStack for production use by "
-"academia and business."
-msgstr ""
-"Everett は Rackspace の Developer Advocate で、OpenStack や Rackspace Cloud "
-"を使いやすくする仕事をしています。ある時は開発者、ある時は advocate、またある"
-"時は運用者です。彼は、ウェブアプリケーションを作成し、ワークショップを行い、"
-"世界中で講演を行い、教育界やビジネスでプロダクションユースとして使われる "
-"OpenStack を構築しています。"
-
-msgid "Example Image service Database Queries"
-msgstr "Image service のデータベースクエリーの例"
-
-msgid ""
-"Failures of hardware are common in large-scale deployments such as an "
-"infrastructure cloud. Consider your processes and balance time saving "
-"against availability. For example, an Object Storage cluster can easily live "
-"with dead disks in it for some period of time if it has sufficient capacity. "
-"Or, if your compute installation is not full, you could consider live "
-"migrating instances off a host with a RAM failure until you have time to "
-"deal with the problem."
-msgstr ""
-"クラウドインフラなどの大規模環境では、ハードウェアの故障はよくあることです。"
-"作業内容を考慮し、可用性と時間の節約のバランスを取ります。たとえば、オブジェ"
-"クトストレージクラスターは、十分な容量がある場合には、ある程度の期間は死んだ"
-"ディスクがあっても問題なく動作します。また、(クラウド内の) コンピュートノード"
-"に空きがある場合には、問題に対処する時間が取れるまで、ライブマイグレーション"
-"で RAM が故障したホストから他のホストへインスタンスを移動させることも考慮する"
-"とよいでしょう。"
-
-msgid ""
-"Feature requests typically start their life in Etherpad, a collaborative "
-"editing tool, which is used to take coordinating notes at a design summit "
-"session specific to the feature. This then leads to the creation of a "
-"blueprint on the Launchpad site for the particular project, which is used to "
-"describe the feature more formally. Blueprints are then approved by project "
-"team members, and development can begin."
-msgstr ""
-"機能追加リクエストは、通常 Etherpad で始まります。Etherpad は共同編集ツール"
-"で、デザインサミットのその機能に関するセッションで議論を整理するのに使われま"
-"す。続けて、プロジェクトの Launchpad サイトに blueprint が作成され、"
-"blueprint を使ってよりきちんとした形で機能が規定されていきます。 この後、"
-"blueprint はプロジェクトメンバーによって承認され、開発が始まります。"
-
-msgid "Feb 13, 2014"
-msgstr "2014年2月13日"
-
-msgid "Feb 3, 2011"
-msgstr "2011年2月3日"
-
-msgid ""
-"Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed "
-"this story."
-msgstr ""
-"台湾の Academia Sinica Grid Computing Centre の Felix Lee さんがこの話を提供"
-"してくれました。"
-
-msgid "Field-based rules"
-msgstr "項目に基づいたルール"
-
-msgid "Figure. Neutron network paths"
-msgstr "図: Neutron ネットワーク経路"
-
-msgid "Figure. Traffic route for ping packet"
-msgstr "図: ping パケットの通信ルート"
-
-msgid "File System Backups"
-msgstr "ファイルシステムバックアップ"
-
-msgid "File injection"
-msgstr "ファイルインジェクション"
-
-msgid ""
-"File system to store files and directories, where all the data lives, "
-"including the root partition that starts and runs the system."
-msgstr ""
-"ファイルやディレクトリを格納するファイルシステム。システムを起動、実行する "
-"root パーティションなど、全データが設置される場所。"
-
-msgid "Final steps"
-msgstr "最終手順"
-
-msgid ""
-"Finally, Alvaro noticed something. When a packet from the outside hits the "
-"cloud controller, it should not be configured with a VLAN. We verified this "
-"as true. When the packet went from the cloud controller to the compute node, "
-"it should only have a VLAN if it was destined for an instance. This was "
-"still true. When the ping reply was sent from the instance, it should be in "
-"a VLAN. True. When it came back to the cloud controller and on its way out "
-"to the Internet, it should no longer have a VLAN. False. Uh oh. It looked as "
-"though the VLAN part of the packet was not being removed."
-msgstr ""
-"遂に、Alvaro が何かを掴んだ。外部からのパケットがクラウドコントローラーを叩い"
-"た際、パケットは VLAN で設定されるべきではない。我々はこれが正しいことを検証"
-"した。パケットがクラウドコントローラーからコンピュートノードに行く際、パケッ"
-"トはインスタンス宛の場合にのみ VLAN を持つべきである。これもまた正しかった。"
-"ping のレスポンスがインスタンスから送られる際、パケットは VLAN 中にいるべきで"
-"ある。OK。パケットがクラウドコントローラーに戻り、インターネットへ出て行く際、"
-"パケットは VLAN を持つべきではない。NG。うぉっ。まるでパケットの VLAN 部分"
-"が削除されていないように見える。"
-
-msgid ""
-"Finally, I checked StackTach and reviewed the user's events. They had "
-"created and deleted several snapshots—most likely experimenting. Although "
-"the timestamps didn't match up, my conclusion was that they launched their "
-"instance and then deleted the snapshot and it was somehow removed from ``/"
-"var/lib/nova/instances/_base``. None of that made sense, but it was the best "
-"I could come up with."
-msgstr ""
-"最後に、私は StackTach をチェックし、ユーザのイベントを見直した。彼らはいくつ"
-"かのスナップショットを作ったり消したりしていた-ありそうな操作ではあるが。タ"
-"イムスタンプが一致しないとはいえ、彼らがインスタンスを起動して、その後スナッ"
-"プショットを削除し、それが何故か ``/var/lib/nova/instances/_base`` から削除"
-"されたというのが私の結論だった。大した意味は無かったが、それがその時私が得た"
-"全てだった。"
-
-msgid "Finally, mount the disk:"
-msgstr "最後に、ディスクをマウントします。"
-
-msgid ""
-"Finally, reattach volumes using the same method described in the section :"
-"ref:`volumes`."
-msgstr ""
-"最後に、 :ref:`volumes` のセクションで説明されているのと同じ方法を用いて、ボ"
-"リュームを再接続します。"
-
-msgid ""
-"Finally, to create a share that uses this share network, get to Create Share "
-"use case described earlier in this chapter."
-msgstr ""
-"最後に、これまでの本章に記載された「共有の作成」ユースケースを参照して、この"
-"共有ネットワークを使用する共有を作成します。"
-
-msgid ""
-"Find the ``[filter:ratelimit]`` section in ``/etc/swift/proxy-server.conf``, "
-"and copy in the following configuration section after it:"
-msgstr ""
-"``/etc/swift/proxy-server.conf`` の ``[filter:ratelimit]`` セクションを探し、"
-"その後ろに以下の環境定義セクションを貼り付けてください。"
-
-msgid ""
-"Find the ``[pipeline:main]`` section in ``/etc/swift/proxy-server.conf``, "
-"and add ``ip_whitelist`` after ratelimit to the list like so. When you're "
-"done, save and close the file:"
-msgstr ""
-"``/etc/swift/proxy-server.conf`` の ``[pipeline:main]`` セクションを探し、この"
-"ように ``ip_whitelist`` を ratelimit の後ろに追加してください。完了した"
-"ら、ファイルを保存して閉じてください。"
-
-msgid ""
-"Find the ``provider:segmentation_id`` of the network you're interested in. "
-"This is the same field used for the VLAN ID in VLAN-based networks:"
-msgstr ""
-"興味あるネットワークの ``provider:segmentation_id`` を探します。これは、VLAN "
-"ベースのネットワークにおける VLAN ID に使用されるものと同じ項目です。"
-
-msgid "Find the ``scheduler_driver`` config and change it like so:"
-msgstr "``scheduler_driver`` 設定を見つけ、このように変更してください。"
-
-msgid ""
-"Find the external VLAN tag of the network you're interested in. This is the "
-"``provider:segmentation_id`` as returned by the networking service:"
-msgstr ""
-"興味のあるネットワークの外部 VLAN タグを見つけます。これは、ネットワークサー"
-"ビスにより返される ``provider:segmentation_id`` です。"
-
-msgid "Find the port corresponding to the instance. For example:"
-msgstr "インスタンスに対応するポートを見つけます。例:"
-
-msgid "Finding a Failure in the Path"
-msgstr "経路上の障害を見つける"
-
-msgid "First, find the UUID of the instance in question:"
-msgstr "まず、インスタンスのUUIDを確認します。"
-
-msgid "First, unmount the disk:"
-msgstr "まず、ディスクをアンマウントします。"
-
-msgid ""
-"First, you can discover what servers belong to your OpenStack cloud by "
-"running:"
-msgstr ""
-"まず、次のコマンドを実行して、あなたの OpenStack クラウドにどのサーバーが属し"
-"ているかを把握できます。"
-
-msgid "Fixed IPs"
-msgstr "固定 IP"
-
-msgid "Flavors"
-msgstr "フレーバー"
-
-msgid ""
-"Flavors define a number of parameters, resulting in the user having a choice "
-"of what type of virtual machine to run—just like they would have if they "
-"were purchasing a physical server. :ref:`table_flavor_params` lists the "
-"elements that can be set. Note in particular ``extra_specs``, which can be "
-"used to define free-form characteristics, giving a lot of flexibility beyond "
-"just the size of RAM, CPU, and Disk."
-msgstr ""
-"フレーバーは、数多くのパラメーターを定義します。これにより、ユーザーが実行す"
-"る仮想マシンの種類を選択できるようになります。ちょうど、物理サーバーを購入す"
-"る場合と同じようなことです。:ref:`table_flavor_params` は、設定できる要素の一"
-"覧です。とくに ``extra_specs`` に注意してください。これは、メモリー、CPU、"
-"ディスクの容量以外にもかなり柔軟に、自由形式で特徴を定義するために使用できま"
-"す。"
-
-msgid "Floating IPs"
-msgstr "Floating IP"
-
-msgid "Folsom"
-msgstr "Folsom"
-
-msgid ""
-"For Compute, instance metadata is a collection of key-value pairs associated "
-"with an instance. Compute reads and writes to these key-value pairs any time "
-"during the instance lifetime, from inside and outside the instance, when the "
-"end user uses the Compute API to do so. However, you cannot query the "
-"instance-associated key-value pairs with the metadata service that is "
-"compatible with the Amazon EC2 metadata service."
-msgstr ""
-"Compute では、インスタンスのメタデータはインスタンスと関連付けられたキーバ"
-"リューペアの集まりです。エンドユーザーがこれらのキーバリューペアを読み書きす"
-"るために Compute API を使用するとき、Compute がインスタンスの生存期間中にイン"
-"スタンスの内外からこれらを読み書きします。しかしながら、Amazon EC2 メタデータ"
-"サービスと互換性のあるメタデータサービスを用いて、インスタンスに関連付けられ"
-"たキーバリューペアをクエリーできません。"
-
-msgid "For Object Storage, each region has a swift environment."
-msgstr "オブジェクトストレージについては、各リージョンに swift 環境があります。"
-
-msgid ""
-"For an example of instance metadata, users can generate and register SSH "
-"keys using the :command:`openstack keypair create` command:"
-msgstr ""
-"インスタンスのメタデータの場合、ユーザーが :command:`openstack keypair "
-"create` コマンドを使用して SSH 鍵を生成および登録できます。"
-
-msgid ""
-"For details, see subsection `Security Services `__ of “Shared File "
-"Systems” section of OpenStack Administrator Guide document."
-msgstr ""
-"詳細は OpenStack Administrator Guide の Shared File Systems セクションにある "
-"`Security Services `__ を参照してください。"
-
-msgid ""
-"For environments using the OpenStack Networking service (neutron), verify "
-"the release version of the database. For example:"
-msgstr ""
-"OpenStack Networking サービス (neutron) を使用している環境では、リリースバー"
-"ジョンのデータベースを検証します。例:"
-
-msgid "For example"
-msgstr "例えば"
-
-msgid ""
-"For example, a group of users have instances that are utilizing a large "
-"amount of compute resources for very compute-intensive tasks. This is "
-"driving the load up on compute nodes and affecting other users. In this "
-"situation, review your user use cases. You may find that high compute "
-"scenarios are common, and should then plan for proper segregation in your "
-"cloud, such as host aggregation or regions."
-msgstr ""
-"例えば、あるユーザーのグループが、非常に計算負荷の高い作業用に大量のコン"
-"ピュートリソースを使うインスタンスを持っているとします。これにより、Compute "
-"ノードの負荷が高くなり、他のユーザーに影響を与えます。この状況では、ユーザー"
-"のユースケースを精査する必要があります。計算負荷が高いシナリオがよくあるケー"
-"スだと判明し、ホスト集約やリージョンなど、クラウドを適切に分割することを計画"
-"すべき場合もあるでしょう。"
-
-msgid ""
-"For example, let's say you have a special ``authorized_keys`` file named "
-"special_authorized_keysfile that for some reason you want to put on the "
-"instance instead of using the regular SSH key injection. In this case, you "
-"can use the following command:"
-msgstr ""
-"例えば、何らかの理由で通常の SSH 鍵の注入ではなく、 "
-"special_authorized_keysfile という名前の特別な ``authorized_keys`` ファイルを"
-"インスタンスに置きたいと言うとします。この場合、以下のコマンドを使用できます:"
-
-msgid "For example, run the following command:"
-msgstr "例えば、以下のコマンドを実行します。"
-
-msgid "For example, to place a 5 GB quota on an account:"
-msgstr "例として、アカウントに 5 GB のクォータを設定します。"
-
-msgid "For example, to restrict a project's image storage to 5 GB, do this:"
-msgstr ""
-"たとえば、プロジェクトのイメージストレージを 5GB に制限するには、以下を実行し"
-"ます。"
-
-msgid ""
-"For example, you usually cannot configure NICs for VLANs when PXE booting. "
-"Additionally, you usually cannot PXE boot with bonded NICs. If you run into "
-"this scenario, consider using a simple 1 GB switch in a private network on "
-"which only your cloud communicates."
-msgstr ""
-"例えば、PXE ブートの際には、通常は VLAN の設定は行えません。さらに、通常は "
-"bonding された NIC から PXE ブートを行うこともできません。このような状況の場"
-"合、クラウド内でのみ通信できるネットワークで、シンプルな 1Gbps のスイッチを使"
-"うことを検討してください。"
-
-msgid "For example:"
-msgstr "例えば"
-
-msgid ""
-"For instructions on installing, upgrading, or removing command-line clients, "
-"see the `Install the OpenStack command-line clients `_ "
-"section in OpenStack End User Guide."
-msgstr ""
-"コマンドラインクライアントのインストール、アップグレード、削除に関する詳細"
-"は、OpenStack エンドユーザーガイドの `OpenStack コマンドラインクライアントの"
-"インストール `_ セクションを参照ください。"
-
-msgid ""
-"For more details and additional information on snapshots, see `Share "
-"Snapshots `__ of “Shared File Systems” section of “OpenStack "
-"Administrator Guide” document."
-msgstr ""
-"スナップショットに関する詳細は、OpenStack Administrator Guide の Shared File "
-"Systems セクションにある `Share Snapshots `__ を参照してください。"
-
-msgid ""
-"For more information about updating Block Storage volumes (for example, "
-"resizing or transferring), see the `OpenStack End User Guide `__."
-msgstr ""
-"Block Storage ボリュームの更新 (例えばリサイズや譲渡など) に関する詳細は、 "
-"`OpenStack エンドユーザーガイド `__ を参照してください。"
-
-msgid ""
-"For more information on installing DevStack, see the `DevStack `_ website."
-msgstr ""
-"DevStack のインストールの詳細は `DevStack `_ の Web サイトにあります。"
-
-msgid ""
-"For more information, see `RabbitMQ documentation `_."
-msgstr ""
-"詳細は `RabbitMQ のドキュメント `_ "
-"を参照してください。"
-
-msgid ""
-"For readers who need to get a specialized feature into OpenStack, this "
-"chapter describes how to use DevStack to write custom middleware or a custom "
-"scheduler to rebalance your resources."
-msgstr ""
-"OpenStack に特別な機能を追加したい読者向けに、この章は、カスタムミドルウェア"
-"やカスタムスケジューラーを書いて、リソースを再配置するために、DevStack を使用"
-"する方法について説明します。"
-
-msgid ""
-"For resource alerting, for example, monitor disk capacity on a compute node "
-"with Nagios, add the following to your Nagios configuration:"
-msgstr ""
-"たとえば、リソースのアラートとして、コンピュートノード上のディスク容量を "
-"Nagios を使って監視する場合、次のような Nagios 設定を追加します。"
-
-msgid ""
-"For stable operations, you want to detect failure promptly and determine "
-"causes efficiently. With a distributed system, it's even more important to "
-"track the right items to meet a service-level target. Learning where these "
-"logs are located in the file system or API gives you an advantage. This "
-"chapter also showed how to read, interpret, and manipulate information from "
-"OpenStack services so that you can monitor effectively."
-msgstr ""
-"安定運用のために、障害を即座に検知して、原因を効率的に見つけたいと思います。"
-"分散システムを用いると、目標サービスレベルを満たすために、適切な項目を追跡す"
-"ることがより重要になります。ログが保存されるファイルシステムの場所、API が与"
-"える利点を学びます。本章は、OpenStack のサービスを効率的に監視できるよう、そ"
-"れらからの情報を読み、解釈し、操作する方法も説明しました。"
-
-msgid ""
-"For the cloud controller, the good news is if your cloud is using the "
-"FlatDHCP multi-host HA network mode, existing instances and volumes continue "
-"to operate while the cloud controller is offline. For the storage proxy, "
-"however, no storage traffic is possible until it is back up and running."
-msgstr ""
-"クラウドコントローラーの場合、良いニュースとしては、クラウドが FlatDHCP マル"
-"チホスト HA ネットワークモードを使用していれば、既存のインスタンスとボリュー"
-"ムはクラウドコントローラーがオフラインの間も動作を継続するという点がありま"
-"す。しかしながら、ストレージプロキシの場合には、サーバーが元に戻され動作状態"
-"になるまで、ストレージとの通信ができません。"
-
-msgid ""
-"For the second path, you can write new features and plug them in using "
-"changes to a configuration file. If the project where your feature would "
-"need to reside uses the Python Paste framework, you can create middleware "
-"for it and plug it in through configuration. There may also be specific ways "
-"of customizing a project, such as creating a new scheduler driver for "
-"Compute or a custom tab for the dashboard."
-msgstr ""
-"2 番目の方法として、新機能を書き、設定ファイルを変更して、それらをプラグイン"
-"することもできます。もし、あなたの機能が必要とされるプロジェクトが Python "
-"Paste フレームワークを使っているのであれば、そのための ミドルウェアを作成し、"
-"環境設定を通じて組み込めばよいのです。他にもプロジェクトをカスタマイズする方"
-"法があるかもしれません。例えば、Compute の新しいスケジューラーやダッシュボー"
-"ドのカスタムタブなど。"
-
-msgid ""
-"For the storage proxy, ensure that the :term:`Object Storage service