commit d0aa21b3c1 (OpenStack Proposal Bot)
Imported Translations from Zanata

For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I67b69acc26089eff593804fbebcf66ae0ecb04b0
Date: 2018-10-25 06:31:14 +00:00

# suhartono <cloudsuhartono@gmail.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: openstack-helm\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-10-24 23:52+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-09-25 05:17+0000\n"
"Last-Translator: suhartono <cloudsuhartono@gmail.com>\n"
"Language-Team: Indonesian\n"
"Language: id\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=1; plural=0\n"
msgid "3 Node (VM based) env."
msgstr "3 Node (VM based) env."
msgid ""
"A Ceph Monitor running on voyager3 (whose Monitor database is destroyed) "
"becomes out of quorum, and the mon-pod's status stays in ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` while keeps restarting."
msgstr ""
"Monitor Ceph yang berjalan di voyager3 (yang database Monitor-nya dimatikan) "
"menjadi tidak di quorum, dan status mon-pod tetap dalam ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` sementara terus restart."
msgid "Adding Tests"
msgstr "Menambahkan Tes"
msgid ""
"Additional information on Helm tests for OpenStack-Helm and how to execute "
"these tests locally via the scripts used in the gate can be found in the "
"gates_ directory."
msgstr ""
"Informasi tambahan tentang tes Helm untuk OpenStack-Helm dan cara melakukan "
"tes ini secara lokal melalui skrip yang digunakan di gate dapat ditemukan di "
"direktori gates_."
msgid ""
"After 10+ miniutes, Ceph starts rebalancing with one node lost (i.e., 6 osds "
"down) and the status stablizes with 18 osds."
msgstr ""
"Setelah 10+ menit, Ceph mulai menyeimbangkan kembali dengan satu node yang "
"hilang (yaitu, 6 osds turun) dan statusnya stabil dengan 18 osds."
msgid "After reboot (node voyager3), the node status changes to ``NotReady``."
msgstr ""
"Setelah reboot (node voyager3), status node berubah menjadi ``NotReady``."
msgid ""
"After the host is down (node voyager3), the node status changes to "
"``NotReady``."
msgstr ""
"Setelah host mati (node voyager3), status node berubah menjadi ``NotReady``."
msgid ""
"All tests should be added to the gates during development, and are required "
"for any new service charts prior to merging. All Helm tests should be "
"included as part of the deployment script. An example of this can be seen "
"in this script_."
msgstr ""
"Semua tes harus ditambahkan ke gate selama pengembangan, dan diperlukan "
"untuk chart layanan baru sebelum penggabungan. Semua tes Helm harus "
"dimasukkan sebagai bagian dari skrip pemasangan. Contoh ini dapat dilihat "
"dalam skrip ini_."
msgid ""
"Also, the pod status of ceph-mon and ceph-osd changes from ``NodeLost`` back "
"to ``Running``."
msgstr ""
"Juga, status pod ceph-mon dan ceph-osd berubah dari ``NodeLost`` kembali ke "
"``Running``."
msgid "Any Helm tests associated with a chart can be run by executing:"
msgstr ""
"Tes Helm apa pun yang terkait dengan chart dapat dijalankan dengan "
"mengeksekusi:"
msgid ""
"Any templates for Helm tests submitted should follow the philosophies "
"applied in the other templates. These include: use of overrides where "
"appropriate, use of endpoint lookups and other common functionality in helm-"
"toolkit, and mounting any required scripting templates via the configmap-bin "
"template for the service chart. If Rally tests are not appropriate or "
"adequate for a service chart, any additional tests should be documented "
"appropriately and adhere to the same expectations."
msgstr ""
"Setiap template untuk tes Helm yang diajukan harus mengikuti filosofi yang "
"diterapkan dalam template lain. Ini termasuk: penggunaan menimpa di mana "
"yang sesuai, penggunaan pencarian endpoint dan fungsi umum lainnya dalam "
"helm-toolkit, dan pemasangan semua scripting template yang diperlukan "
"melalui template configmap-bin untuk chart layanan. Jika pengujian Rally "
"tidak sesuai atau memadai untuk chart layanan, pengujian tambahan apa pun "
"harus didokumentasikan dengan tepat dan mematuhi harapan yang sama."
msgid "Capture Ceph pods statuses."
msgstr "Capture Ceph pods statuses."
msgid "Capture Openstack pods statuses."
msgstr "Capture Openstack pods statuses."
msgid "Capture final Ceph pod statuses:"
msgstr "Capture final Ceph pod statuses:"
msgid "Capture final Openstack pod statuses:"
msgstr "Capture final Openstack pod statuses:"
msgid "Case: 1 out of 3 Monitor Processes is Down"
msgstr "Kasus: 1 dari 3 Proses Monitor Sedang Turun"
msgid "Case: 2 out of 3 Monitor Processes are Down"
msgstr "Kasus: 2 dari 3 Proses Monitor Sedang Turun"
msgid "Case: 3 out of 3 Monitor Processes are Down"
msgstr "Kasus: 3 dari 3 Proses Monitor Sedang Turun"
msgid "Case: A OSD pod is deleted"
msgstr "Kasus: Pod OSD dihapus"
msgid "Case: A disk fails"
msgstr "Kasus: Disk gagal"
msgid "Case: A host machine where ceph-mon is running is down"
msgstr "Kasus: Mesin host di mana ceph-mon sedang bekerja sedang mati"
msgid "Case: Monitor database is destroyed"
msgstr "Kasus: Database monitor dimusnahkan"
msgid "Case: OSD processes are killed"
msgstr "Kasus: Proses OSD dimatikan"
msgid "Case: One host machine where ceph-mon is running is rebooted"
msgstr "Kasus: Satu mesin host di mana ceph-mon sedang dijalankan di-reboot"
msgid "Caveats:"
msgstr "Caveats:"
msgid "Ceph Cephfs provisioner docker images."
msgstr "Ceph Cephfs provisioner docker images."
msgid "Ceph Luminous point release images for Ceph components"
msgstr "Ceph Luminous point melepaskan image untuk komponen Ceph"
msgid "Ceph RBD provisioner docker images."
msgstr "Ceph RBD provisioner docker images."
msgid "Ceph Resiliency"
msgstr "Ceph Resiliency"
msgid "Ceph Upgrade"
msgstr "Ceph Upgrade"
msgid ""
"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
"quorum. Also, 6 osds running on ``voyager3`` are down (i.e., 18 out of 24 "
"osds are up). Some placement groups become degraded and undersized."
msgstr ""
"Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` "
"menjadi tidak dapat digunakan. Juga, 6 osds yang berjalan pada ``voyager3`` "
"sedang down (yaitu, 18 dari 24 osds naik). Beberapa kelompok penempatan "
"menjadi terdegradasi dan berukuran kecil."
msgid ""
"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
"quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are "
"up out of 24 osds."
msgstr ""
"Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` "
"menjadi tidak dapat digunakan. Juga, enam osds yang berjalan di ``voyager3`` "
"turun; yaitu, 18 osds naik dari 24 osds."
msgid "Ceph version: 12.2.3"
msgstr "Ceph versi: 12.2.3"
msgid "Check Ceph Pods"
msgstr "Periksa Ceph Pods"
msgid "Check version of each Ceph components."
msgstr "Periksa versi setiap komponen Ceph."
msgid "Check which images Provisionors and Mon-Check PODs are using"
msgstr "Periksa image mana yang digunakan Provisionors dan Mon-Check PODs"
msgid "Cluster size: 4 host machines"
msgstr "Ukuran cluster: 4 mesin host"
msgid "Conclusion:"
msgstr "Kesimpulan:"
msgid "Confirm Ceph component's version."
msgstr "Konfirmasi versi komponen Ceph."
msgid "Continue with OSH multinode guide to install other Openstack charts."
msgstr ""
"Lanjutkan dengan panduan multinode OSH untuk menginstal chart Openstack "
"lainnya."
msgid "Deploy and Validate Ceph"
msgstr "Menyebarkan dan Memvalidasi Ceph"
msgid "Disk Failure"
msgstr "Kegagalan Disk"
msgid "Docker Images:"
msgstr "Docker Images:"
msgid ""
"Every OpenStack-Helm chart should include any required Helm tests necessary "
"to provide a sanity check for the OpenStack service. Information on using "
"the Helm testing framework can be found in the Helm repository_. Currently, "
"the Rally testing framework is used to provide these checks for the core "
"services. The Keystone Helm test template can be used as a reference, and "
"can be found here_."
msgstr ""
"Setiap OpenStack-Helm chart harus menyertakan tes Helm yang diperlukan untuk "
"memberikan pemeriksaan (sanity check) kewarasan untuk layanan OpenStack. "
"Informasi tentang menggunakan kerangka pengujian Helm dapat ditemukan di "
"repositori Helm. Saat ini, kerangka pengujian Rally digunakan untuk "
"menyediakan pemeriksaan ini untuk layanan inti. Kerangka uji Keystone Helm "
"dapat digunakan sebagai referensi, dan dapat ditemukan di sini_."
msgid "Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):"
msgstr "Temukan bahwa Ceph sehat dengan OSD yang hilang (yaitu, total 23 OSD):"
msgid "Follow all steps from OSH multinode guide with below changes."
msgstr ""
"Ikuti semua langkah dari panduan multinode OSH dengan perubahan di bawah ini."
msgid "Followed OSH multinode guide steps upto Ceph install"
msgstr "Mengikuti panduan multinode OSH langkah-langkah upto Ceph menginstal"
msgid "Following is a partial part from script to show changes."
msgstr ""
"Berikut ini adalah bagian parsial dari skrip untuk menunjukkan perubahan."
msgid ""
"From the Kubernetes cluster, remove the failed OSD pod, which is running on "
"``voyager4``:"
msgstr ""
"Dari kluster Kubernetes, hapus pod OSD yang gagal, yang berjalan di "
"``voyager4``:"
msgid "Hardware Failure"
msgstr "Kegagalan perangkat keras"
msgid "Helm Tests"
msgstr "Tes Helm"
msgid "Host Failure"
msgstr "Host Failure"
msgid ""
"In the mean time, we monitor the status of Ceph and noted that it takes "
"about 30 seconds for the 6 OSDs to recover from ``down`` to ``up``. The "
"reason is that Kubernetes automatically restarts OSD pods whenever they are "
"killed."
msgstr ""
"Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan "
"sekitar 30 detik untuk 6 OSD untuk memulihkan dari ``down`` ke ``up``. "
"Alasannya adalah Kubernetes secara otomatis merestart pod OSD setiap kali "
"mereka dimatikan."
msgid ""
"In the mean time, we monitored the status of Ceph and noted that it takes "
"about 24 seconds for the killed Monitor process to recover from ``down`` to "
"``up``. The reason is that Kubernetes automatically restarts pods whenever "
"they are killed."
msgstr ""
"Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan "
"sekitar 24 detik untuk proses Monitor yang mati untuk memulihkan dari "
"``down`` ke ``up``. Alasannya adalah Kubernetes secara otomatis me-restart "
"pod setiap kali mereka dimatikan."
msgid "Install Ceph charts (12.2.4) by updating Docker images in overrides."
msgstr ""
"Instal Ceph charts (12.2.4) dengan memperbarui Docker images di overrides."
msgid "Install Ceph charts (version 12.2.4)"
msgstr "Pasang chart Ceph (versi 12.2.4)"
msgid "Install OSH components as per OSH multinode guide."
msgstr "Instal komponen OSH sesuai panduan multinode OSH."
msgid "Install Openstack charts"
msgstr "Pasang chart Openstack"
msgid ""
"It takes longer (about 1 minute) for the killed Monitor processes to recover "
"from ``down`` to ``up``."
msgstr ""
"Diperlukan waktu lebih lama (sekitar 1 menit) untuk proses Monitor yang mati "
"untuk memulihkan dari ``down`` ke ``up``."
msgid "Kubernetes version: 1.10.5"
msgstr "Kubernetes versi: 1.10.5"
msgid "Kubernetes version: 1.9.3"
msgstr "Kubernetes version: 1.9.3"
msgid "Mission"
msgstr "Misi"
msgid "Monitor Failure"
msgstr "Memantau Kegagalan"
msgid ""
"Note: To find the daemonset associated with a failed OSD, check out the "
"followings:"
msgstr ""
"Catatan: Untuk menemukan daemon yang terkait dengan OSD yang gagal, periksa "
"yang berikut:"
msgid "Number of disks: 24 (= 6 disks per host * 4 hosts)"
msgstr "Jumlah disk: 24 (= 6 disk per host * 4 host)"
msgid "OSD Failure"
msgstr "Kegagalan OSD"
msgid "OSD count is set to 3 based on env setup."
msgstr "Penghitungan OSD diatur ke 3 berdasarkan pada env setup."
msgid "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgstr "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgid "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb"
msgstr "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb"
msgid ""
"Our focus lies on resiliency for various failure scenarios but not on "
"performance or stress testing."
msgstr ""
"Fokus kami terletak pada ketahanan untuk berbagai skenario kegagalan tetapi "
"tidak pada kinerja atau stress testing."
msgid "Plan:"
msgstr "Rencana:"
msgid "Recovery:"
msgstr "Pemulihan:"
msgid ""
"Remove the entire ceph-mon directory on voyager3, and then Ceph will "
"automatically recreate the database by using the other ceph-mons' database."
msgstr ""
"Hapus seluruh direktori ceph-mon di voyager3, dan kemudian Ceph akan secara "
"otomatis membuat ulang database dengan menggunakan database ceph-mons "
"lainnya."
msgid ""
"Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:"
msgstr "Hapus OSD yang gagal (OSD ID = 2 dalam contoh ini) dari kluster Ceph:"
msgid "Resiliency Tests for OpenStack-Helm/Ceph"
msgstr "Tes Ketahanan untuk OpenStack-Helm/Ceph"
msgid "Running Tests"
msgstr "Menjalankan Tes"
msgid "Setup:"
msgstr "Mempersiapkan:"
msgid ""
"Showing partial output from kubectl describe command to show which image is "
"Docker container is using"
msgstr ""
"Menampilkan sebagian output dari kubectl menggambarkan perintah untuk "
"menunjukkan image mana yang digunakan oleh container Docker"
msgid "Software Failure"
msgstr "Kegagalan Perangkat Lunak"
msgid "Solution:"
msgstr "Solusi:"
msgid "Start a new OSD pod on ``voyager4``:"
msgstr "Mulai pod LED baru pada ``voyager 4``:"
msgid "Steps:"
msgstr "Langkah:"
msgid "Symptom:"
msgstr "Gejala:"
msgid "Test Environment"
msgstr "Uji Lingkungan"
msgid "Test Scenario:"
msgstr "Test Scenario:"
msgid "Testing"
msgstr "Pengujian"
msgid "Testing Expectations"
msgstr "Menguji Ekspektasi"
msgid ""
"The goal of our resiliency tests for `OpenStack-Helm/Ceph <https://github."
"com/openstack/openstack-helm/tree/master/ceph>`_ is to show symptoms of "
"software/hardware failure and provide the solutions."
msgstr ""
"Tujuan dari uji ketahanan kami untuk `OpenStack-Helm/Ceph <https://github."
"com/openstack/openstack-helm/tree/master/ceph>`_ adalah untuk menunjukkan "
"gejala kegagalan perangkat lunak/perangkat keras dan memberikan solusi."
msgid ""
"The logs of the failed mon-pod shows the ceph-mon process cannot run as ``/"
"var/lib/ceph/mon/ceph-voyager3/store.db`` does not exist."
msgstr ""
"Log dari mon-pod gagal menunjukkan proses ceph-mon tidak dapat berjalan "
"karena ``/var/lib/ceph/mon/ceph-voyager3/store.db`` tidak ada."
msgid ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. Ceph status shows that "
"the monitor running on ``voyager3`` is now in quorum."
msgstr ""
"Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. "
"Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa "
"monitor yang dijalankan pada ``voyager3`` sekarang dalam kuorum."
msgid ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. The Ceph status shows "
"that the monitor running on ``voyager3`` is now in quorum and 6 osds gets "
"back up (i.e., a total of 24 osds are up)."
msgstr ""
"Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. "
"Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa "
"monitor yang berjalan pada ``voyager3`` sekarang berada di kuorum dan 6 osds "
"akan kembali (yaitu, total 24 osds naik)."
msgid ""
"The output of the Helm tests can be seen by looking at the logs of the pod "
"created by the Helm tests. These logs can be viewed with:"
msgstr ""
"Output dari tes Helm dapat dilihat dengan melihat log dari pod yang dibuat "
"oleh tes Helm. Log ini dapat dilihat dengan:"
msgid "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``."
msgstr "Status pod ceph-mon dan ceph-osd ditampilkan sebagai ``NodeLost``."
msgid ""
"The status of the pods (where the three Monitor processes are killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> "
"``Running`` and this recovery process takes about 1 minute."
msgstr ""
"Status pod (di mana ketiga proses Monitor dimatikan) diubah sebagai berikut: "
"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses "
"pemulihan ini memakan waktu sekitar 1 menit."
msgid ""
"The status of the pods (where the two Monitor processes are killed) changed "
"as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` "
"and this recovery process takes about 1 minute."
msgstr ""
"Status pod (di mana kedua proses Monitor mati) diubah sebagai berikut: "
"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses "
"pemulihan ini memakan waktu sekitar 1 menit."
msgid ""
"This guide documents steps showing Ceph version upgrade. The main goal of "
"this document is to demostrate Ceph chart update without downtime for OSH "
"components."
msgstr ""
"Panduan ini mendokumentasikan langkah-langkah yang menunjukkan upgrade versi "
"Ceph. Tujuan utama dari dokumen ini adalah untuk mendemonstrasikan pembaruan "
"Ceph chart tanpa downtime untuk komponen OSH."
msgid ""
"This is for the case when a host machine (where ceph-mon is running) is down."
msgstr ""
"Ini untuk kasus ketika mesin host (di mana ceph-mon sedang berjalan) sedang "
"mati."
msgid "This is to test a scenario when 1 out of 3 Monitor processes is down."
msgstr "Ini untuk menguji skenario ketika 1 dari 3 proses Monitor mati."
msgid ""
"This is to test a scenario when 2 out of 3 Monitor processes are down. To "
"bring down 2 Monitor processes (out of 3), we identify two Monitor processes "
"and kill them from the 2 monitor hosts (not a pod)."
msgstr ""
"Ini untuk menguji skenario ketika 2 dari 3 proses Monitor sedang down. Untuk "
"menurunkan 2 proses Monitor (dari 3), kami mengidentifikasi dua proses "
"Monitor dan mematikannya dari 2 monitor host (bukan pod)."
msgid ""
"This is to test a scenario when 3 out of 3 Monitor processes are down. To "
"bring down 3 Monitor processes (out of 3), we identify all 3 Monitor "
"processes and kill them from the 3 monitor hosts (not pods)."
msgstr ""
"Ini untuk menguji skenario ketika 3 dari 3 proses Monitor sedang down. Untuk "
"menurunkan 3 proses Monitor (dari 3), kami mengidentifikasi semua 3 proses "
"Monitor dan mematikannya dari 3 monitor host (bukan pod)."
msgid ""
"This is to test a scenario when a disk failure happens. We monitor the ceph "
"status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a "
"backend is down."
msgstr ""
"Ini untuk menguji skenario ketika terjadi kegagalan disk. Kami memonitor "
"status ceph dan melihat satu OSD (osd.2) di voyager4 yang memiliki ``/dev/"
"sdh`` sebagai backend sedang down (mati)."
msgid "This is to test a scenario when some of the OSDs are down."
msgstr "Ini untuk menguji skenario ketika beberapa OSD turun."
msgid ""
"To bring down 1 Monitor process (out of 3), we identify a Monitor process "
"and kill it from the monitor host (not a pod)."
msgstr ""
"Untuk menurunkan 1 proses Monitor (dari 3), kami mengidentifikasi proses "
"Monitor dan mematikannya dari host monitor (bukan pod)."
msgid ""
"To bring down 6 OSDs (out of 24), we identify the OSD processes and kill "
"them from a storage host (not a pod)."
msgstr ""
"Untuk menurunkan 6 OSD (dari 24), kami mengidentifikasi proses OSD dan "
"mematikannya dari host penyimpanan (bukan pod)."
msgid "To replace the failed OSD, excecute the following procedure:"
msgstr "Untuk mengganti OSD yang gagal, jalankan prosedur berikut:"
msgid "Update Ceph Client chart with new overrides:"
msgstr "Perbarui Ceph Client chart dengan override baru:"
msgid "Update Ceph Mon chart with new overrides"
msgstr "Perbarui Ceph Mon chart dengan override baru"
msgid "Update Ceph OSD chart with new overrides:"
msgstr "Perbarui Ceph OSD chart dengan override baru:"
msgid "Update Ceph Provisioners chart with new overrides:"
msgstr "Perbarui Ceph Provisioners chart dengan override baru:"
msgid ""
"Update ceph install script ``./tools/deployment/multinode/030-ceph.sh`` to "
"add ``images:`` section in overrides as shown below."
msgstr ""
"Perbarui ceph install script ``./tools/deployment/multinode/030-ceph.sh`` "
"untuk menambahkan bagian ``images:`` di override seperti yang ditunjukkan di "
"bawah ini."
msgid ""
"Update, image section in new overrides ``ceph-update.yaml`` as shown below"
msgstr ""
"Pembaruan, bagian image di overrides baru ``ceph-update.yaml`` seperti yang "
"ditunjukkan di bawah ini"
msgid "Upgrade Ceph charts to update version"
msgstr "Tingkatkan Ceph charts untuk memperbarui versi"
msgid ""
"Upgrade Ceph charts to version 12.2.5 by updating docker images in overrides."
msgstr ""
"Tingkatkan Ceph chart ke versi 12.2.5 dengan memperbarui image docker di "
"overrides."
msgid ""
"Upgrade Ceph component version from ``12.2.4`` to ``12.2.5`` without "
"downtime to OSH components."
msgstr ""
"Upgrade versi komponen Ceph dari ``12.2.4`` ke ``12.2.5`` tanpa waktu henti "
"ke komponen OSH."
msgid ""
"Use Ceph override file ``ceph.yaml`` that was generated previously and "
"update images section as below"
msgstr ""
"Gunakan Ceph override file ``ceph.yaml`` yang telah dibuat sebelumnya dan "
"perbarui bagian image seperti di bawah ini"
msgid ""
"Validate the Ceph status (i.e., one OSD is added, so the total number of "
"OSDs becomes 24):"
msgstr ""
"Validasi status Ceph (yaitu satu OSD ditambahkan, sehingga jumlah total OSD "
"menjadi 24):"
msgid ""
"We also monitored the status of the Monitor pod through ``kubectl get pods -"
"n ceph``, and the status of the pod (where a Monitor process is killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``Running`` and this "
"recovery process takes about 24 seconds."
msgstr ""
"Kami juga memantau status pod Monitor melalui ``kubectl get pods -n ceph``, "
"dan status pod (di mana proses Monitor mati) berubah sebagai berikut: "
"``Running`` -> ``Error`` -> ``Running`` dan proses pemulihan ini membutuhkan "
"waktu sekitar 24 detik."
msgid ""
"We have 3 Monitors in this Ceph cluster, one on each of the 3 Monitor hosts."
msgstr ""
"Kami memiliki 3 Monitor di cluster Ceph ini, satu di masing-masing dari 3 "
"host Monitor."
msgid ""
"We intentionlly destroy a Monitor database by removing ``/var/lib/openstack-"
"helm/ceph/mon/mon/ceph-voyager3/store.db``."
msgstr ""
"Kami bermaksud menghancurkan database Monitor dengan menghapus ``/var/lib/"
"openstack-helm/ceph/mon/mon/ceph-voyager3/store.db``."
msgid ""
"We monitored the status of Ceph Monitor pods and noted that the symptoms are "
"similar to when 1 or 2 Monitor processes are killed:"
msgstr ""
"Kami memantau status pod Ceph Monitor dan mencatat bahwa gejalanya mirip "
"dengan ketika 1 atau 2 proses Monitor dimatikan:"
msgid ""
"We monitored the status of Ceph when the Monitor processes are killed and "
"noted that the symptoms are similar to when 1 Monitor process is killed:"
msgstr ""
"Kami memantau status Ceph ketika proses Monitor dimatikan dan mencatat bahwa "
"gejala mirip dengan ketika 1 Proses monitor dimatikan:"
msgid "`Disk failure <./disk-failure.html>`_"
msgstr "`Disk failure <./disk-failure.html>`_"
msgid "`Host failure <./host-failure.html>`_"
msgstr "`Host failure <./host-failure.html>`_"
msgid "`Monitor failure <./monitor-failure.html>`_"
msgstr "`Monitor failure <./monitor-failure.html>`_"
msgid "`OSD failure <./osd-failure.html>`_"
msgstr "`OSD failure <./osd-failure.html>`_"
msgid ""
"``Results:`` All provisioner pods got terminated at once (same time). Other "
"ceph pods are running. No interruption to OSH pods."
msgstr ""
"``Results:`` Semua pod penyedia dihentikan sekaligus (saat yang sama). Ceph "
"pod lainnya sedang berjalan. Tidak ada gangguan pada pod OSH."
msgid ""
"``Results:`` Mon pods got updated one by one (rolling updates). Each Mon pod "
"got respawn and was in 1/1 running state before next Mon pod got updated. "
"Each Mon pod got restarted. Other ceph pods were not affected with this "
"update. No interruption to OSH pods."
msgstr ""
"``Results:`` Mon pod mendapat pembaruan satu per satu (pembaruan bergulir). "
"Setiap Mon pod mendapat respawn dan berada dalam 1/1 keadaan sebelum Mon pod "
"berikutnya diperbarui. Setiap Mon pod mulai dihidupkan ulang. Ceph pod "
"lainnya tidak terpengaruh dengan pembaruan ini. Tidak ada gangguan pada pod "
"OSH."
msgid ""
"``Results:`` Rolling updates (one pod at a time). Other ceph pods are "
"running. No interruption to OSH pods."
msgstr ""
"``Results:`` Bergulir pembaruan (satu pod dalam satu waktu). Ceph pod "
"lainnya sedang berjalan. Tidak ada gangguan pada pod OSH."
msgid ""
"``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` images are "
"used for jobs. ``ceph_mon_check`` has one script that is stable so no need "
"to upgrade."
msgstr ""
"Image ``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` "
"digunakan untuk pekerjaan. ``ceph_mon_check`` memiliki satu skrip yang "
"stabil sehingga tidak perlu melakukan upgrade."
msgid "``cp /tmp/ceph.yaml ceph-update.yaml``"
msgstr "``cp /tmp/ceph.yaml ceph-update.yaml``"
msgid "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``"
msgid "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``"
msgid "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``"
msgid ""
"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update."
"yaml``"
msgstr ""
"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update."
"yaml``"
msgid "``series of console outputs:``"
msgstr "``series of console outputs:``"