Imported Translations from Zanata

For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I5d8a2fb815c470d6f1a37f8ad9fd4787d903f105
OpenStack Proposal Bot 2020-02-25 10:28:44 +00:00
parent 6e9cd82dc4
commit a1f8ff7531
1 changed file with 245 additions and 2 deletions


@@ -4,11 +4,11 @@ msgid ""
msgstr ""
"Project-Id-Version: openstack-helm\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-02-04 17:43+0000\n"
"POT-Creation-Date: 2020-02-24 17:44+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-02-07 09:39+0000\n"
"PO-Revision-Date: 2020-02-24 05:03+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
@@ -57,6 +57,9 @@ msgstr ""
msgid "2) Remove down OSD from Ceph cluster:"
msgstr "2) Remove down OSD from Ceph cluster:"
msgid "3 Node (VM based) env."
msgstr "3 Node (VM based) env."
msgid ""
"3) Fix Ceph Cluster: After node expansion, perform maintenance on Ceph "
"cluster to ensure quorum is reached and Ceph is HEALTH_OK."
@@ -92,6 +95,15 @@ msgstr ""
msgid "6 Nodes (VM based) env"
msgstr "6 Nodes (VM based) env"
msgid ""
"A Ceph Monitor running on voyager3 (whose Monitor database is destroyed) "
"becomes out of quorum, and the mon-pod's status stays in ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` while keeps restarting."
msgstr ""
"A Ceph Monitor running on voyager3 (whose Monitor database is destroyed) "
"becomes out of quorum, and the mon-pod's status stays in ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` while keeps restarting."
msgid ""
"Above output shows Ceph cluster in HEALTH_OK with all OSDs and MONs up and "
"running."
@@ -148,6 +160,13 @@ msgstr ""
"included as part of the deployment script. An example of this can be seen "
"in this script_."
msgid ""
"Also, the pod status of ceph-mon and ceph-osd changes from ``NodeLost`` back "
"to ``Running``."
msgstr ""
"Also, the pod status of ceph-mon and ceph-osd changes from ``NodeLost`` back "
"to ``Running``."
msgid "Any Helm tests associated with a chart can be run by executing:"
msgstr "Any Helm tests associated with a chart can be run by executing:"
@@ -192,12 +211,30 @@ msgstr "Capture final Ceph pod statuses:"
msgid "Capture final Openstack pod statuses:"
msgstr "Capture final Openstack pod statuses:"
msgid "Case: 1 out of 3 Monitor Processes is Down"
msgstr "Case: 1 out of 3 Monitor Processes is Down"
msgid "Case: 2 out of 3 Monitor Processes are Down"
msgstr "Case: 2 out of 3 Monitor Processes are Down"
msgid "Case: 3 out of 3 Monitor Processes are Down"
msgstr "Case: 3 out of 3 Monitor Processes are Down"
msgid "Case: A OSD pod is deleted"
msgstr "Case: A OSD pod is deleted"
msgid "Case: A disk fails"
msgstr "Case: A disk fails"
msgid "Case: A host machine where ceph-mon is running is down"
msgstr "Case: A host machine where ceph-mon is running is down"
msgid "Case: Monitor database is destroyed"
msgstr "Case: Monitor database is destroyed"
msgid "Case: OSD processes are killed"
msgstr "Case: OSD processes are killed"
msgid "Case: One host machine where ceph-mon is running is rebooted"
msgstr "Case: One host machine where ceph-mon is running is rebooted"
@@ -219,6 +256,12 @@ msgstr "Ceph MON and OSD PODs got scheduled on mnode4 node."
msgid "Ceph RBD provisioner docker images."
msgstr "Ceph RBD provisioner docker images."
msgid "Ceph Resiliency"
msgstr "Ceph Resiliency"
msgid "Ceph Upgrade"
msgstr "Ceph Upgrade"
msgid ""
"Ceph can be upgraded without downtime for Openstack components in a "
"multinode env."
@@ -328,6 +371,9 @@ msgid ""
msgstr ""
"Followed OSH multinode guide steps to setup nodes and install K8s cluster"
msgid "Followed OSH multinode guide steps upto Ceph install"
msgstr "Followed OSH multinode guide steps up to Ceph install"
msgid "Following is a partial part from script to show changes."
msgstr "Following is a partial part from script to show changes."
@@ -347,6 +393,28 @@ msgstr "Helm Tests"
msgid "Host Failure"
msgstr "Host Failure"
msgid ""
"In the mean time, we monitor the status of Ceph and noted that it takes "
"about 30 seconds for the 6 OSDs to recover from ``down`` to ``up``. The "
"reason is that Kubernetes automatically restarts OSD pods whenever they are "
"killed."
msgstr ""
"In the mean time, we monitor the status of Ceph and noted that it takes "
"about 30 seconds for the 6 OSDs to recover from ``down`` to ``up``. The "
"reason is that Kubernetes automatically restarts OSD pods whenever they are "
"killed."
msgid ""
"In the mean time, we monitored the status of Ceph and noted that it takes "
"about 24 seconds for the killed Monitor process to recover from ``down`` to "
"``up``. The reason is that Kubernetes automatically restarts pods whenever "
"they are killed."
msgstr ""
"In the mean time, we monitored the status of Ceph and noted that it takes "
"about 24 seconds for the killed Monitor process to recover from ``down`` to "
"``up``. The reason is that Kubernetes automatically restarts pods whenever "
"they are killed."
msgid ""
"In this test env, MariaDB chart is deployed with only 1 replica. In order to "
"test properly, the node with MariaDB server POD (mnode2) should not be "
@@ -387,6 +455,9 @@ msgstr ""
"In this test env, since out of quorum MON is no longer available due to node "
"failure, we can processed with removing it from Ceph cluster."
msgid "Install Ceph charts (12.2.4) by updating Docker images in overrides."
msgstr "Install Ceph charts (12.2.4) by updating Docker images in overrides."
msgid "Install Ceph charts (version 12.2.4)"
msgstr "Install Ceph charts (version 12.2.4)"
@@ -396,9 +467,19 @@ msgstr "Install OSH components as per OSH multinode guide."
msgid "Install Openstack charts"
msgstr "Install Openstack charts"
msgid ""
"It takes longer (about 1 minute) for the killed Monitor processes to recover "
"from ``down`` to ``up``."
msgstr ""
"It takes longer (about 1 minute) for the killed Monitor processes to recover "
"from ``down`` to ``up``."
msgid "Kubernetes version: 1.10.5"
msgstr "Kubernetes version: 1.10.5"
msgid "Kubernetes version: 1.9.3"
msgstr "Kubernetes version: 1.9.3"
msgid "Let's add more resources for K8s to schedule PODs on."
msgstr "Let's add more resources for K8s to schedule PODs on."
@@ -412,6 +493,9 @@ msgstr ""
msgid "Mission"
msgstr "Mission"
msgid "Monitor Failure"
msgstr "Monitor Failure"
msgid ""
"Note: To find the daemonset associated with a failed OSD, check out the "
"followings:"
@@ -429,6 +513,9 @@ msgstr ""
msgid "Number of disks: 24 (= 6 disks per host * 4 hosts)"
msgstr "Number of disks: 24 (= 6 disks per host * 4 hosts)"
msgid "OSD Failure"
msgstr "OSD Failure"
msgid "OSD count is set to 3 based on env setup."
msgstr "OSD count is set to 3 based on env setup."
@@ -445,6 +532,9 @@ msgstr "OpenStack PODs that were scheduled mnode3 also shows NodeLost/Unknown."
msgid "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgstr "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgid "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb"
msgstr "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb"
msgid ""
"Our focus lies on resiliency for various failure scenarios but not on "
"performance or stress testing."
@@ -456,9 +546,19 @@ msgid "PODs that were scheduled on mnode3 node has status of NodeLost/Unknown."
msgstr ""
"PODs that were scheduled on mnode3 node has status of NodeLost/Unknown."
msgid "Plan:"
msgstr "Plan:"
msgid "Recovery:"
msgstr "Recovery:"
msgid ""
"Remove the entire ceph-mon directory on voyager3, and then Ceph will "
"automatically recreate the database by using the other ceph-mons' database."
msgstr ""
"Remove the entire ceph-mon directory on voyager3, and then Ceph will "
"automatically recreate the database by using the other ceph-mons' database."
msgid ""
"Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:"
msgstr ""
@@ -525,6 +625,9 @@ msgstr "Symptom:"
msgid "Test Environment"
msgstr "Test Environment"
msgid "Test Scenario:"
msgstr "Test Scenario:"
msgid "Test Scenarios:"
msgstr "Test Scenarios:"
@@ -543,6 +646,13 @@ msgstr ""
"com/openstack/openstack-helm/tree/master/ceph>`_ is to show symptoms of "
"software/hardware failure and provide the solutions."
msgid ""
"The logs of the failed mon-pod shows the ceph-mon process cannot run as ``/"
"var/lib/ceph/mon/ceph-voyager3/store.db`` does not exist."
msgstr ""
"The logs of the failed mon-pod shows the ceph-mon process cannot run as ``/"
"var/lib/ceph/mon/ceph-voyager3/store.db`` does not exist."
msgid ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. Ceph status shows that "
@@ -573,6 +683,24 @@ msgstr ""
msgid "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``."
msgstr "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``."
msgid ""
"The status of the pods (where the three Monitor processes are killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> "
"``Running`` and this recovery process takes about 1 minute."
msgstr ""
"The status of the pods (where the three Monitor processes are killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> "
"``Running`` and this recovery process takes about 1 minute."
msgid ""
"The status of the pods (where the two Monitor processes are killed) changed "
"as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` "
"and this recovery process takes about 1 minute."
msgstr ""
"The status of the pods (where the two Monitor processes are killed) changed "
"as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` "
"and this recovery process takes about 1 minute."
msgid ""
"This document captures steps and result from node reduction and expansion as "
"well as ceph recovery."
@@ -580,11 +708,41 @@ msgstr ""
"This document captures steps and result from node reduction and expansion as "
"well as Ceph recovery."
msgid ""
"This guide documents steps showing Ceph version upgrade. The main goal of "
"this document is to demostrate Ceph chart update without downtime for OSH "
"components."
msgstr ""
"This guide documents steps showing Ceph version upgrade. The main goal of "
"this document is to demonstrate Ceph chart update without downtime for OSH "
"components."
msgid ""
"This is for the case when a host machine (where ceph-mon is running) is down."
msgstr ""
"This is for the case when a host machine (where ceph-mon is running) is down."
msgid "This is to test a scenario when 1 out of 3 Monitor processes is down."
msgstr "This is to test a scenario when 1 out of 3 Monitor processes is down."
msgid ""
"This is to test a scenario when 2 out of 3 Monitor processes are down. To "
"bring down 2 Monitor processes (out of 3), we identify two Monitor processes "
"and kill them from the 2 monitor hosts (not a pod)."
msgstr ""
"This is to test a scenario when 2 out of 3 Monitor processes are down. To "
"bring down 2 Monitor processes (out of 3), we identify two Monitor processes "
"and kill them from the 2 monitor hosts (not a pod)."
msgid ""
"This is to test a scenario when 3 out of 3 Monitor processes are down. To "
"bring down 3 Monitor processes (out of 3), we identify all 3 Monitor "
"processes and kill them from the 3 monitor hosts (not pods)."
msgstr ""
"This is to test a scenario when 3 out of 3 Monitor processes are down. To "
"bring down 3 Monitor processes (out of 3), we identify all 3 Monitor "
"processes and kill them from the 3 monitor hosts (not pods)."
msgid ""
"This is to test a scenario when a disk failure happens. We monitor the ceph "
"status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a "
@@ -594,6 +752,34 @@ msgstr ""
"status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a "
"backend is down."
msgid ""
"This is to test a scenario when an OSD pod is deleted by ``kubectl delete "
"$OSD_POD_NAME``. Meanwhile, we monitor the status of Ceph and note that it "
"takes about 90 seconds for the OSD running in deleted pod to recover from "
"``down`` to ``up``."
msgstr ""
"This is to test a scenario when an OSD pod is deleted by ``kubectl delete "
"$OSD_POD_NAME``. Meanwhile, we monitor the status of Ceph and note that it "
"takes about 90 seconds for the OSD running in deleted pod to recover from "
"``down`` to ``up``."
msgid "This is to test a scenario when some of the OSDs are down."
msgstr "This is to test a scenario when some of the OSDs are down."
msgid ""
"To bring down 1 Monitor process (out of 3), we identify a Monitor process "
"and kill it from the monitor host (not a pod)."
msgstr ""
"To bring down 1 Monitor process (out of 3), we identify a Monitor process "
"and kill it from the monitor host (not a pod)."
msgid ""
"To bring down 6 OSDs (out of 24), we identify the OSD processes and kill "
"them from a storage host (not a pod)."
msgstr ""
"To bring down 6 OSDs (out of 24), we identify the OSD processes and kill "
"them from a storage host (not a pod)."
msgid "To replace the failed OSD, execute the following procedure:"
msgstr "To replace the failed OSD, execute the following procedure:"
@@ -629,6 +815,13 @@ msgid ""
msgstr ""
"Upgrade Ceph charts to version 12.2.5 by updating Docker images in overrides."
msgid ""
"Upgrade Ceph component version from ``12.2.4`` to ``12.2.5`` without "
"downtime to OSH components."
msgstr ""
"Upgrade Ceph component version from ``12.2.4`` to ``12.2.5`` without "
"downtime to OSH components."
msgid ""
"Use Ceph override file ``ceph.yaml`` that was generated previously and "
"update images section as below"
@@ -650,6 +843,56 @@ msgstr ""
"Validate the Ceph status (i.e., one OSD is added, so the total number of "
"OSDs becomes 24):"
msgid ""
"We also monitored the pod status through ``kubectl get pods -n ceph`` during "
"this process. The deleted OSD pod status changed as follows: ``Terminating`` "
"-> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> ``Running``, and this "
"process takes about 90 seconds. The reason is that Kubernetes automatically "
"restarts OSD pods whenever they are deleted."
msgstr ""
"We also monitored the pod status through ``kubectl get pods -n ceph`` during "
"this process. The deleted OSD pod status changed as follows: ``Terminating`` "
"-> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> ``Running``, and this "
"process takes about 90 seconds. The reason is that Kubernetes automatically "
"restarts OSD pods whenever they are deleted."
msgid ""
"We also monitored the status of the Monitor pod through ``kubectl get pods -"
"n ceph``, and the status of the pod (where a Monitor process is killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``Running`` and this "
"recovery process takes about 24 seconds."
msgstr ""
"We also monitored the status of the Monitor pod through ``kubectl get pods -"
"n ceph``, and the status of the pod (where a Monitor process is killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``Running`` and this "
"recovery process takes about 24 seconds."
msgid ""
"We have 3 Monitors in this Ceph cluster, one on each of the 3 Monitor hosts."
msgstr ""
"We have 3 Monitors in this Ceph cluster, one on each of the 3 Monitor hosts."
msgid ""
"We intentionlly destroy a Monitor database by removing ``/var/lib/openstack-"
"helm/ceph/mon/mon/ceph-voyager3/store.db``."
msgstr ""
"We intentionally destroy a Monitor database by removing ``/var/lib/openstack-"
"helm/ceph/mon/mon/ceph-voyager3/store.db``."
msgid ""
"We monitored the status of Ceph Monitor pods and noted that the symptoms are "
"similar to when 1 or 2 Monitor processes are killed:"
msgstr ""
"We monitored the status of Ceph Monitor pods and noted that the symptoms are "
"similar to when 1 or 2 Monitor processes are killed:"
msgid ""
"We monitored the status of Ceph when the Monitor processes are killed and "
"noted that the symptoms are similar to when 1 Monitor process is killed:"
msgstr ""
"We monitored the status of Ceph when the Monitor processes are killed and "
"noted that the symptoms are similar to when 1 Monitor process is killed:"
msgid "`Disk failure <./disk-failure.html>`_"
msgstr "`Disk failure <./disk-failure.html>`_"