# suhartono , 2018. #zanata
# suhartono , 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: openstack-helm\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-05-27 21:13+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-05-28 12:08+0000\n"
"Last-Translator: suhartono \n"
"Language-Team: Indonesian\n"
"Language: id\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=1; plural=0\n"

msgid ""
"1) Initial Ceph and OpenStack deployment: Install Ceph and OpenStack charts "
"on 3 nodes (mnode1, mnode2 and mnode3). Capture Ceph cluster status as well "
"as K8s PODs status."
msgstr ""
"1) Penyebaran Ceph dan OpenStack awal: Pasang chart Ceph dan OpenStack pada "
"3 node (mnode1, mnode2 dan mnode3). Tangkap status Ceph cluster serta "
"status PODs K8s."

msgid ""
"1) Node reduction: Shutdown 1 of 3 nodes to simulate node failure. Capture "
"effect of node failure on Ceph as well as other OpenStack services that are "
"using Ceph."
msgstr ""
"1) Node reduction: Shutdown 1 dari 3 node untuk mensimulasikan kegagalan "
"node. Menangkap efek kegagalan node pada Ceph serta layanan OpenStack lain "
"yang menggunakan Ceph."

msgid "1) Remove out of quorum MON:"
msgstr "1) Hapus MON yang keluar dari quorum:"

msgid ""
"2) Node expansion: Apply Ceph and OpenStack related labels to another unused "
"k8 node. Node expansion should provide more resources for k8 to schedule "
"PODs for Ceph and OpenStack services."
msgstr ""
"2) Ekspansi Node: Menerapkan label terkait Ceph dan OpenStack ke node k8 "
"lain yang tidak digunakan. Perluasan node harus menyediakan lebih banyak "
"sumber daya untuk k8 untuk menjadwalkan POD untuk layanan Ceph dan OpenStack."

msgid ""
"2) Node reduction (failure): Shutdown 1 of 3 nodes (mnode3) to test node "
"failure. This should cause Ceph cluster to go in HEALTH_WARN state as it has "
"lost 1 MON and 1 OSD. Capture Ceph cluster status as well as K8s PODs status."
msgstr ""
"2) Pengurangan Node (kegagalan): Shutdown 1 dari 3 node (mnode3) untuk "
"menguji kegagalan node. Ini harus menyebabkan Ceph cluster masuk dalam "
"kondisi HEALTH_WARN karena telah kehilangan 1 MON dan 1 OSD. Tangkap status "
"Ceph cluster serta status PODs K8s."

msgid "2) Remove down OSD from Ceph cluster:"
msgstr "2) Hapus OSD yang down dari Ceph cluster:"

msgid "3 Node (VM based) env."
msgstr "3 Node (VM based) env."

msgid ""
"3) Fix Ceph Cluster: After node expansion, perform maintenance on Ceph "
"cluster to ensure quorum is reached and Ceph is HEALTH_OK."
msgstr ""
"3) Perbaiki Ceph Cluster: Setelah perluasan node, lakukan perawatan pada "
"cluster Ceph untuk memastikan kuorum tercapai dan Ceph adalah HEALTH_OK."

msgid ""
"3) Node expansion: Add Ceph and OpenStack related labels to 4th node "
"(mnode4) for expansion. Ceph cluster would show new MON and OSD being added "
"to cluster. However Ceph cluster would continue to show HEALTH_WARN because "
"1 MON and 1 OSD are still missing."
msgstr ""
"3) Ekspansi node: Tambahkan label terkait Ceph dan OpenStack ke node ke-4 "
"(mnode4) untuk ekspansi. Ceph cluster akan menunjukkan MON dan OSD baru yang "
"ditambahkan ke cluster. Namun cluster Ceph akan terus menunjukkan "
"HEALTH_WARN karena 1 MON dan 1 OSD masih hilang."

msgid ""
"4) Ceph cluster recovery: Perform Ceph maintenance to make Ceph cluster "
"HEALTH_OK. Remove lost MON and OSD from Ceph cluster."
msgstr ""
"4) Ceph cluster recovery: Lakukan Ceph maintenance untuk membuat Ceph "
"cluster HEALTH_OK. "
"Hapus MON dan OSD yang hilang dari cluster Ceph."

msgid ""
"4. Replace the failed disk with a new one. If you repair (not replace) the "
"failed disk, you may need to run the following:"
msgstr ""
"4. Ganti disk yang gagal dengan yang baru. Jika Anda memperbaiki (bukan "
"mengganti) disk yang gagal, Anda mungkin perlu menjalankan yang berikut:"

msgid "6 Nodes (VM based) env"
msgstr "6 Nodes (VM based) env"

msgid ""
"A Ceph Monitor running on voyager3 (whose Monitor database is destroyed) "
"becomes out of quorum, and the mon-pod's status stays in ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` while keeps restarting."
msgstr ""
"Monitor Ceph yang berjalan di voyager3 (yang database Monitor-nya "
"dihancurkan) menjadi keluar dari quorum, dan status mon-pod tetap dalam "
"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` sementara terus restart."

msgid ""
"Above output shows Ceph cluster in HEALTH_OK with all OSDs and MONs up and "
"running."
msgstr ""
"Output di atas menunjukkan Ceph cluster di HEALTH_OK dengan semua OSD dan "
"MON aktif dan berjalan."

msgid "Above output shows that ``osd.1`` is down."
msgstr "Output di atas menunjukkan bahwa ``osd.1`` sedang down."

msgid "Adding Tests"
msgstr "Menambahkan Tes"

msgid ""
"Additional information on Helm tests for OpenStack-Helm and how to execute "
"these tests locally via the scripts used in the gate can be found in the "
"gates_ directory."
msgstr ""
"Informasi tambahan tentang tes Helm untuk OpenStack-Helm dan cara melakukan "
"tes ini secara lokal melalui skrip yang digunakan di gate dapat ditemukan di "
"direktori gates_."

msgid ""
"After 10+ minutes, Ceph starts rebalancing with one node lost (i.e., 6 osds "
"down) and the status stabilizes with 18 osds."
msgstr ""
"Setelah 10+ menit, Ceph mulai menyeimbangkan kembali dengan satu node yang "
"hilang (yaitu, 6 osds turun) dan statusnya stabil dengan 18 osds."

msgid "After applying labels, let's check status"
msgstr "Setelah menerapkan label, mari periksa status"

msgid "After reboot (node voyager3), the node status changes to ``NotReady``."
msgstr ""
"Setelah reboot (node voyager3), status node berubah menjadi ``NotReady``."

msgid ""
"After the host is down (node voyager3), the node status changes to "
"``NotReady``."
msgstr ""
"Setelah host mati (node voyager3), status node berubah menjadi ``NotReady``."

msgid "All PODs are in running state."
msgstr "Semua POD dalam keadaan berjalan."

msgid ""
"All tests should be added to the gates during development, and are required "
"for any new service charts prior to merging. All Helm tests should be "
"included as part of the deployment script. An example of this can be seen "
"in this script_."
msgstr ""
"Semua tes harus ditambahkan ke gate selama pengembangan, dan diperlukan "
"untuk chart layanan baru sebelum penggabungan. Semua tes Helm harus "
"dimasukkan sebagai bagian dari skrip pemasangan. Contoh ini dapat dilihat "
"dalam script_ ini."

msgid ""
"Also, the pod status of ceph-mon and ceph-osd changes from ``NodeLost`` back "
"to ``Running``."
msgstr ""
"Juga, status pod ceph-mon dan ceph-osd berubah dari ``NodeLost`` kembali ke "
"``Running``."

msgid "Any Helm tests associated with a chart can be run by executing:"
msgstr ""
"Tes Helm apa pun yang terkait dengan chart dapat dijalankan dengan "
"mengeksekusi:"

msgid ""
"Any templates for Helm tests submitted should follow the philosophies "
"applied in the other templates. "
"These include: use of overrides where "
"appropriate, use of endpoint lookups and other common functionality in helm-"
"toolkit, and mounting any required scripting templates via the configmap-bin "
"template for the service chart. If Rally tests are not appropriate or "
"adequate for a service chart, any additional tests should be documented "
"appropriately and adhere to the same expectations."
msgstr ""
"Setiap template untuk tes Helm yang diajukan harus mengikuti filosofi yang "
"diterapkan dalam template lain. Ini termasuk: penggunaan override bila "
"sesuai, penggunaan pencarian endpoint dan fungsi umum lainnya dalam "
"helm-toolkit, dan pemasangan semua scripting template yang diperlukan "
"melalui template configmap-bin untuk chart layanan. Jika pengujian Rally "
"tidak sesuai atau memadai untuk chart layanan, pengujian tambahan apa pun "
"harus didokumentasikan dengan tepat dan mematuhi harapan yang sama."

msgid ""
"As shown above, Ceph status is now HEALTH_OK and shows 3 MONs available."
msgstr ""
"Seperti yang ditunjukkan di atas, status Ceph sekarang HEALTH_OK dan "
"menunjukkan 3 MON tersedia."

msgid ""
"As shown in Ceph status above, ``osd: 4 osds: 3 up, 3 in`` 1 of 4 OSDs is "
"still down. Let's remove that OSD."
msgstr ""
"Seperti yang ditunjukkan dalam status Ceph di atas, ``osd: 4 osds: 3 up, 3 "
"in`` 1 dari 4 OSDs masih turun. Mari hapus OSD itu."

msgid "Capture Ceph pods statuses."
msgstr "Capture Ceph pods statuses."

msgid "Capture Openstack pods statuses."
msgstr "Capture Openstack pods statuses."

msgid "Capture final Ceph pod statuses:"
msgstr "Capture final Ceph pod statuses:"

msgid "Capture final Openstack pod statuses:"
msgstr "Capture final Openstack pod statuses:"

msgid "Case: 1 out of 3 Monitor Processes is Down"
msgstr "Kasus: 1 dari 3 Proses Monitor Sedang Turun"

msgid "Case: 2 out of 3 Monitor Processes are Down"
msgstr "Kasus: 2 dari 3 Proses Monitor Sedang Turun"

msgid "Case: 3 out of 3 Monitor Processes are Down"
msgstr "Kasus: 3 dari 3 Proses Monitor Sedang Turun"

msgid "Case: An OSD pod is deleted"
msgstr "Kasus: Pod OSD dihapus"

msgid "Case: A disk fails"
msgstr "Kasus: Disk gagal"

msgid "Case: A host machine where ceph-mon is running is down"
msgstr "Kasus: Mesin host di mana ceph-mon sedang bekerja sedang mati"

msgid "Case: Monitor database is destroyed"
msgstr "Kasus: Database monitor dimusnahkan"

msgid "Case: OSD processes are killed"
msgstr "Kasus: Proses OSD dimatikan"

msgid "Case: One host machine where ceph-mon is running is rebooted"
msgstr "Kasus: Satu mesin host di mana ceph-mon sedang dijalankan di-reboot"

msgid "Caveats:"
msgstr "Caveats:"

msgid "Ceph - Node Reduction, Expansion and Ceph Recovery"
msgstr "Ceph - Node Reduction, Expansion, dan Ceph Recovery"

msgid "Ceph Cephfs provisioner docker images."
msgstr "Ceph Cephfs provisioner docker images."

msgid "Ceph Luminous point release images for Ceph components"
msgstr "Image point release Ceph Luminous untuk komponen Ceph"

msgid "Ceph MON and OSD PODs got scheduled on mnode4 node."
msgstr "Ceph MON dan OSD POD dijadwalkan pada node mnode4."

msgid "Ceph RBD provisioner docker images."
msgstr "Ceph RBD provisioner docker images."

msgid "Ceph Resiliency"
msgstr "Ceph Resiliency"

msgid "Ceph Upgrade"
msgstr "Ceph Upgrade"

msgid ""
"Ceph can be upgraded without downtime for Openstack components in a "
"multinode env."
msgstr ""
"Ceph dapat ditingkatkan tanpa downtime untuk komponen OpenStack dalam "
"multinode env."

msgid "Ceph cluster is in HEALTH_OK state with 3 MONs and 3 OSDs."
msgstr "Ceph cluster dalam keadaan HEALTH_OK dengan 3 MON dan 3 OSD." msgid "Ceph status shows 1 Ceph MON and 1 Ceph OSD missing." msgstr "Status Ceph menunjukkan 1 Ceph MON dan 1 Ceph OSD hilang." msgid "Ceph status shows HEALTH_WARN as expected" msgstr "Status Ceph menunjukkan HEALTH_WARN seperti yang diharapkan" msgid "Ceph status shows that MON and OSD count has been increased." msgstr "Status Ceph menunjukkan bahwa jumlah MON dan OSD telah ditingkatkan." msgid "" "Ceph status shows that ceph-mon running on ``voyager3`` becomes out of " "quorum. Also, 6 osds running on ``voyager3`` are down (i.e., 18 out of 24 " "osds are up). Some placement groups become degraded and undersized." msgstr "" "Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` " "menjadi tidak dapat digunakan. Juga, 6 osds yang berjalan pada ``voyager3`` " "sedang down (yaitu, 18 dari 24 osds naik). Beberapa kelompok penempatan " "menjadi terdegradasi dan berukuran kecil." msgid "" "Ceph status shows that ceph-mon running on ``voyager3`` becomes out of " "quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are " "up out of 24 osds." msgstr "" "Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` " "menjadi tidak dapat digunakan. Juga, enam osds yang berjalan di ``voyager3`` " "turun; yaitu, 18 osds naik dari 24 osds." msgid "Ceph status still shows HEALTH_WARN as one MON and OSD are still down." msgstr "" "Status Ceph masih menunjukkan HEALTH_WARN sebagai satu MON dan OSD masih " "dalam keadaan down." msgid "Ceph version: 12.2.3" msgstr "Ceph versi: 12.2.3" msgid "Check Ceph Pods" msgstr "Periksa Ceph Pods" msgid "Check version of each Ceph components." msgstr "Periksa versi setiap komponen Ceph." msgid "Check which images Provisionors and Mon-Check PODs are using" msgstr "Periksa image mana yang digunakan Provisionors dan Mon-Check PODs" msgid "Cluster size: 4 host machines" msgstr "Ukuran cluster: 4 mesin host" msgid "Conclusion:" msgstr "Kesimpulan:" msgid "Confirm Ceph component's version." msgstr "Konfirmasi versi komponen Ceph." msgid "Continue with OSH multinode guide to install other Openstack charts." msgstr "" "Lanjutkan dengan panduan multinode OSH untuk menginstal chart Openstack " "lainnya." msgid "Deploy and Validate Ceph" msgstr "Menyebarkan dan Memvalidasi Ceph" msgid "Disk Failure" msgstr "Kegagalan Disk" msgid "Docker Images:" msgstr "Docker Images:" msgid "" "Every OpenStack-Helm chart should include any required Helm tests necessary " "to provide a sanity check for the OpenStack service. Information on using " "the Helm testing framework can be found in the Helm repository_. Currently, " "the Rally testing framework is used to provide these checks for the core " "services. The Keystone Helm test template can be used as a reference, and " "can be found here_." msgstr "" "Setiap OpenStack-Helm chart harus menyertakan tes Helm yang diperlukan untuk " "memberikan pemeriksaan (sanity check) kewarasan untuk layanan OpenStack. " "Informasi tentang menggunakan kerangka pengujian Helm dapat ditemukan di " "repositori Helm. Saat ini, kerangka pengujian Rally digunakan untuk " "menyediakan pemeriksaan ini untuk layanan inti. Kerangka uji Keystone Helm " "dapat digunakan sebagai referensi, dan dapat ditemukan di sini_." msgid "Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):" msgstr "Temukan bahwa Ceph sehat dengan OSD yang hilang (yaitu, total 23 OSD):" msgid "First, run ``ceph osd tree`` command to get list of OSDs." 
msgstr "" "Pertama, jalankan perintah ``ceph osd tree`` untuk mendapatkan daftar OSD." msgid "Follow all steps from OSH multinode guide with below changes." msgstr "" "Ikuti semua langkah dari panduan multinode OSH dengan perubahan di bawah ini." msgid "" "Followed OSH multinode guide steps to install Ceph and OpenStack charts up " "to Cinder." msgstr "" "Mengikuti panduan langkah multinode OSH untuk menginstal grafik Ceph dan " "OpenStack hingga Cinder." msgid "" "Followed OSH multinode guide steps to setup nodes and install K8s cluster" msgstr "" "Mengikuti langkah-langkah panduan multinode OSH untuk mengatur node dan " "menginstal K8s cluster" msgid "Followed OSH multinode guide steps upto Ceph install" msgstr "Mengikuti panduan multinode OSH langkah-langkah upto Ceph menginstal" msgid "Following is a partial part from script to show changes." msgstr "" "Berikut ini adalah bagian parsial dari skrip untuk menunjukkan perubahan." msgid "" "From the Kubernetes cluster, remove the failed OSD pod, which is running on " "``voyager4``:" msgstr "" "Dari kluster Kubernetes, hapus pod OSD yang gagal, yang berjalan di " "``voyager4``:" msgid "Hardware Failure" msgstr "Kegagalan perangkat keras" msgid "Helm Tests" msgstr "Tes Helm" msgid "Host Failure" msgstr "Host Failure" msgid "" "In the mean time, we monitor the status of Ceph and noted that it takes " "about 30 seconds for the 6 OSDs to recover from ``down`` to ``up``. The " "reason is that Kubernetes automatically restarts OSD pods whenever they are " "killed." msgstr "" "Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan " "sekitar 30 detik untuk 6 OSD untuk memulihkan dari ``down`` ke ``up``. " "Alasannya adalah Kubernetes secara otomatis merestart pod OSD setiap kali " "mereka dimatikan." msgid "" "In the mean time, we monitored the status of Ceph and noted that it takes " "about 24 seconds for the killed Monitor process to recover from ``down`` to " "``up``. The reason is that Kubernetes automatically restarts pods whenever " "they are killed." msgstr "" "Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan " "sekitar 24 detik untuk proses Monitor yang mati untuk memulihkan dari " "``down`` ke ``up``. Alasannya adalah Kubernetes secara otomatis me-restart " "pod setiap kali mereka dimatikan." msgid "" "In this test env, MariaDB chart is deployed with only 1 replica. In order to " "test properly, the node with MariaDB server POD (mnode2) should not be " "shutdown." msgstr "" "Dalam test env, tabel MariaDB dikerahkan hanya dengan 1 replika. Untuk " "menguji dengan benar, node dengan server POD MariaDB (mnode2) tidak boleh " "dimatikan." msgid "In this test env, ``mnode3`` is out of quorum." msgstr "Dalam tes env ini, `` mnode3`` keluar dari quorum." msgid "" "In this test env, each node has Ceph and OpenStack related PODs. Due to " "this, shutting down a Node will cause issue with Ceph as well as OpenStack " "services. These PODs level failures are captured following subsequent " "screenshots." msgstr "" "Dalam tes env ini, setiap node memiliki POD yang terkait dengan Ceph dan " "OpenStack. Karena ini, mematikan Node akan menyebabkan masalah dengan Ceph " "serta layanan OpenStack. Tingkat kegagalan PODs ini ditangkap setelah " "screenshot berikutnya." msgid "In this test env, let's shutdown ``mnode3`` node." msgstr "Dalam tes ini env, mari kita mematikan node `` mnode3``." msgid "" "In this test env, let's use ``mnode4`` and apply Ceph and OpenStack related " "labels." 
msgstr "" "Dalam tes env ini, mari kita gunakan ``mnode4`` dan menerapkan label terkait " "Ceph dan OpenStack." msgid "" "In this test env, since out of quorum MON is no longer available due to node " "failure, we can processed with removing it from Ceph cluster." msgstr "" "Dalam test env ini, sejak quorum MON tidak lagi tersedia karena kegagalan " "node, kami dapat memprosesnya dengan menghapusnya dari cluster Ceph." msgid "Install Ceph charts (12.2.4) by updating Docker images in overrides." msgstr "" "Instal Ceph charts (12.2.4) dengan memperbarui Docker images di overrides." msgid "Install Ceph charts (version 12.2.4)" msgstr "Pasang chart Ceph (versi 12.2.4)" msgid "Install OSH components as per OSH multinode guide." msgstr "Instal komponen OSH sesuai panduan multinode OSH." msgid "Install Openstack charts" msgstr "Pasang chart Openstack" msgid "" "It takes longer (about 1 minute) for the killed Monitor processes to recover " "from ``down`` to ``up``." msgstr "" "Diperlukan waktu lebih lama (sekitar 1 menit) untuk proses Monitor yang mati " "untuk memulihkan dari ``down`` ke ``up``." msgid "Kubernetes version: 1.10.5" msgstr "Kubernetes versi: 1.10.5" msgid "Kubernetes version: 1.9.3" msgstr "Kubernetes version: 1.9.3" msgid "Let's add more resources for K8s to schedule PODs on." msgstr "" "Mari tambahkan lebih banyak sumber daya untuk K8 untuk menjadwalkan POD." msgid "" "Make sure only 3 nodes (mnode1, mnode2, mnode3) have Ceph and OpenStack " "related labels. K8s would only schedule PODs on these 3 nodes." msgstr "" "Pastikan hanya 3 node (mnode1, mnode2, mnode3) yang memiliki label terkait " "Ceph dan OpenStack. K8 hanya akan menjadwalkan POD pada 3 node ini." msgid "Mission" msgstr "Misi" msgid "Monitor Failure" msgstr "Memantau Kegagalan" msgid "" "Note: To find the daemonset associated with a failed OSD, check out the " "followings:" msgstr "" "Catatan: Untuk menemukan daemon yang terkait dengan OSD yang gagal, periksa " "yang berikut:" msgid "" "Now that we have added new node for Ceph and OpenStack PODs, let's perform " "maintenance on Ceph cluster." msgstr "" "Sekarang kita telah menambahkan node baru untuk Ceph dan OpenStack PODs, " "mari kita melakukan pemeliharaan pada cluster Ceph." msgid "Number of disks: 24 (= 6 disks per host * 4 hosts)" msgstr "Jumlah disk: 24 (= 6 disk per host * 4 host)" msgid "OSD Failure" msgstr "Kegagalan OSD" msgid "OSD count is set to 3 based on env setup." msgstr "Penghitungan OSD diatur ke 3 berdasarkan pada env setup." msgid "" "Only 3 nodes will have Ceph and OpenStack related labels. Each of these 3 " "nodes will have one MON and one OSD running on them." msgstr "" "Hanya 3 node yang memiliki label terkait Ceph dan OpenStack. Masing-masing " "dari 3 node ini akan memiliki satu MON dan satu OSD yang berjalan pada " "mereka." msgid "OpenStack PODs that were scheduled mnode3 also shows NodeLost/Unknown." msgstr "" "OpenStack PODs yang dijadwalkan mnode3 juga menunjukkan NodeLost / Unknown." msgid "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459" msgstr "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459" msgid "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb" msgstr "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb" msgid "" "Our focus lies on resiliency for various failure scenarios but not on " "performance or stress testing." msgstr "" "Fokus kami terletak pada ketahanan untuk berbagai skenario kegagalan tetapi " "tidak pada kinerja atau stress testing." 
msgid "PODs that were scheduled on mnode3 node has status of NodeLost/Unknown." msgstr "" "POD yang dijadwalkan pada node mnode3 memiliki status NodeLost / Unknown." msgid "Plan:" msgstr "Rencana:" msgid "Recovery:" msgstr "Pemulihan:" msgid "" "Remove the entire ceph-mon directory on voyager3, and then Ceph will " "automatically recreate the database by using the other ceph-mons' database." msgstr "" "Hapus seluruh direktori ceph-mon di voyager3, dan kemudian Ceph akan secara " "otomatis membuat ulang database dengan menggunakan database ceph-mons " "lainnya." msgid "" "Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:" msgstr "Hapus OSD yang gagal (OSD ID = 2 dalam contoh ini) dari kluster Ceph:" msgid "Resiliency Tests for OpenStack-Helm/Ceph" msgstr "Tes Ketahanan untuk OpenStack-Helm/Ceph" msgid "Run ``ceph osd purge`` command to remove OSD from ceph cluster." msgstr "" "Jalankan perintah ``ceph osd purge`` untuk menghapus OSD dari ceph cluster." msgid "Running Tests" msgstr "Menjalankan Tes" msgid "Setup:" msgstr "Mempersiapkan:" msgid "" "Showing partial output from kubectl describe command to show which image is " "Docker container is using" msgstr "" "Menampilkan sebagian output dari kubectl menggambarkan perintah untuk " "menunjukkan image mana yang digunakan oleh container Docker" msgid "" "Shutdown 1 of 3 nodes (mnode1, mnode2, mnode3) to simulate node failure/lost." msgstr "" "Shutdown 1 dari 3 node (mnode1, mnode2, mnode3) untuk mensimulasikan " "kegagalan node / lost." msgid "" "Since the node that was shutdown earlier had both Ceph and OpenStack PODs, " "mnode4 should get Ceph and OpenStack related labels as well." msgstr "" "Karena node yang shutdown sebelumnya memiliki Ceph dan OpenStack PODs, " "mnode4 harus mendapatkan label terkait Ceph dan OpenStack juga." msgid "Software Failure" msgstr "Kegagalan Perangkat Lunak" msgid "Solution:" msgstr "Solusi:" msgid "Start a new OSD pod on ``voyager4``:" msgstr "Mulai pod LED baru pada ``voyager 4``:" msgid "Step 1: Initial Ceph and OpenStack deployment" msgstr "Langkah 1: Penyebaran Ceph dan OpenStack awal" msgid "Step 2: Node reduction (failure):" msgstr "Langkah 2: Pengurangan nodus (kegagalan):" msgid "Step 3: Node Expansion" msgstr "Langkah 3: Ekspansi Node" msgid "Step 4: Ceph cluster recovery" msgstr "Langkah 4: Ceph cluster recovery" msgid "Steps:" msgstr "Langkah:" msgid "Symptom:" msgstr "Gejala:" msgid "Test Environment" msgstr "Uji Lingkungan" msgid "Test Scenario:" msgstr "Test Scenario:" msgid "Test Scenarios:" msgstr "Skenario Uji:" msgid "Testing" msgstr "Pengujian" msgid "Testing Expectations" msgstr "Menguji Ekspektasi" msgid "" "The goal of our resiliency tests for `OpenStack-Helm/Ceph `_ is to show symptoms of " "software/hardware failure and provide the solutions." msgstr "" "Tujuan dari uji ketahanan kami untuk `OpenStack-Helm/Ceph `_ adalah untuk menunjukkan " "gejala kegagalan perangkat lunak/perangkat keras dan memberikan solusi." msgid "" "The logs of the failed mon-pod shows the ceph-mon process cannot run as ``/" "var/lib/ceph/mon/ceph-voyager3/store.db`` does not exist." msgstr "" "Log dari mon-pod gagal menunjukkan proses ceph-mon tidak dapat berjalan " "karena ``/var/lib/ceph/mon/ceph-voyager3/store.db`` tidak ada." msgid "" "The node status of ``voyager3`` changes to ``Ready`` after the node is up " "again. Also, Ceph pods are restarted automatically. Ceph status shows that " "the monitor running on ``voyager3`` is now in quorum." 
msgstr "" "Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. " "Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa " "monitor yang dijalankan pada ``voyager3`` sekarang dalam kuorum." msgid "" "The node status of ``voyager3`` changes to ``Ready`` after the node is up " "again. Also, Ceph pods are restarted automatically. The Ceph status shows " "that the monitor running on ``voyager3`` is now in quorum and 6 osds gets " "back up (i.e., a total of 24 osds are up)." msgstr "" "Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. " "Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa " "monitor yang berjalan pada ``voyager3`` sekarang berada di kuorum dan 6 osds " "akan kembali (yaitu, total 24 osds naik)." msgid "" "The output of the Helm tests can be seen by looking at the logs of the pod " "created by the Helm tests. These logs can be viewed with:" msgstr "" "Output dari tes Helm dapat dilihat dengan melihat log dari pod yang dibuat " "oleh tes Helm. Log ini dapat dilihat dengan:" msgid "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``." msgstr "Status pod ceph-mon dan ceph-osd ditampilkan sebagai ``NodeLost``." msgid "" "The status of the pods (where the three Monitor processes are killed) " "changed as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> " "``Running`` and this recovery process takes about 1 minute." msgstr "" "Status pod (di mana ketiga proses Monitor dimatikan) diubah sebagai berikut: " "``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses " "pemulihan ini memakan waktu sekitar 1 menit." msgid "" "The status of the pods (where the two Monitor processes are killed) changed " "as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` " "and this recovery process takes about 1 minute." msgstr "" "Status pod (di mana kedua proses Monitor mati) diubah sebagai berikut: " "``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses " "pemulihan ini memakan waktu sekitar 1 menit." msgid "" "This document captures steps and result from node reduction and expansion as " "well as ceph recovery." msgstr "" "Dokumen ini menangkap (capture) langkah dan hasil dari pengurangan dan " "perluasan node serta pemulihan ceph." msgid "" "This guide documents steps showing Ceph version upgrade. The main goal of " "this document is to demostrate Ceph chart update without downtime for OSH " "components." msgstr "" "Panduan ini mendokumentasikan langkah-langkah yang menunjukkan upgrade versi " "Ceph. Tujuan utama dari dokumen ini adalah untuk mendemonstrasikan pembaruan " "Ceph chart tanpa downtime untuk komponen OSH." msgid "" "This is for the case when a host machine (where ceph-mon is running) is down." msgstr "" "Ini untuk kasus ketika mesin host (di mana ceph-mon sedang berjalan) sedang " "mati." msgid "This is to test a scenario when 1 out of 3 Monitor processes is down." msgstr "Ini untuk menguji skenario ketika 1 dari 3 proses Monitor mati." msgid "" "This is to test a scenario when 2 out of 3 Monitor processes are down. To " "bring down 2 Monitor processes (out of 3), we identify two Monitor processes " "and kill them from the 2 monitor hosts (not a pod)." msgstr "" "Ini untuk menguji skenario ketika 2 dari 3 proses Monitor sedang down. Untuk " "menurunkan 2 proses Monitor (dari 3), kami mengidentifikasi dua proses " "Monitor dan mematikannya dari 2 monitor host (bukan pod)." 
msgid "" "This is to test a scenario when 3 out of 3 Monitor processes are down. To " "bring down 3 Monitor processes (out of 3), we identify all 3 Monitor " "processes and kill them from the 3 monitor hosts (not pods)." msgstr "" "Ini untuk menguji skenario ketika 3 dari 3 proses Monitor sedang down. Untuk " "menurunkan 3 proses Monitor (dari 3), kami mengidentifikasi semua 3 proses " "Monitor dan mematikannya dari 3 monitor host (bukan pod)." msgid "" "This is to test a scenario when a disk failure happens. We monitor the ceph " "status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a " "backend is down." msgstr "" "Ini untuk menguji skenario ketika terjadi kegagalan disk. Kami memonitor " "status ceph dan melihat satu OSD (osd.2) di voyager4 yang memiliki ``/dev/" "sdh`` sebagai backend sedang down (mati)." msgid "" "This is to test a scenario when an OSD pod is deleted by ``kubectl delete " "$OSD_POD_NAME``. Meanwhile, we monitor the status of Ceph and note that it " "takes about 90 seconds for the OSD running in deleted pod to recover from " "``down`` to ``up``." msgstr "" "Ini untuk menguji skenario ketika pod OSD dihapus oleh ``kubectl delete $ " "OSD_POD_NAME``. Sementara itu, kami memantau status Ceph dan perhatikan " "bahwa dibutuhkan sekitar 90 detik untuk OSD yang berjalan di pod yang " "dihapus untuk memulihkan dari ``down`` ke ``up``." msgid "This is to test a scenario when some of the OSDs are down." msgstr "Ini untuk menguji skenario ketika beberapa OSD turun." msgid "" "To bring down 1 Monitor process (out of 3), we identify a Monitor process " "and kill it from the monitor host (not a pod)." msgstr "" "Untuk menurunkan 1 proses Monitor (dari 3), kami mengidentifikasi proses " "Monitor dan mematikannya dari host monitor (bukan pod)." msgid "" "To bring down 6 OSDs (out of 24), we identify the OSD processes and kill " "them from a storage host (not a pod)." msgstr "" "Untuk menurunkan 6 OSD (dari 24), kami mengidentifikasi proses OSD dan " "mematikannya dari host penyimpanan (bukan pod)." msgid "To replace the failed OSD, execute the following procedure:" msgstr "Untuk mengganti OSD yang gagal, jalankan prosedur berikut:" msgid "Update Ceph Client chart with new overrides:" msgstr "Perbarui Ceph Client chart dengan override baru:" msgid "Update Ceph Mon chart with new overrides" msgstr "Perbarui Ceph Mon chart dengan override baru" msgid "Update Ceph OSD chart with new overrides:" msgstr "Perbarui Ceph OSD chart dengan override baru:" msgid "Update Ceph Provisioners chart with new overrides:" msgstr "Perbarui Ceph Provisioners chart dengan override baru:" msgid "" "Update ceph install script ``./tools/deployment/multinode/030-ceph.sh`` to " "add ``images:`` section in overrides as shown below." msgstr "" "Perbarui ceph install script ``./tools/deployment/multinode/030-ceph.sh`` " "untuk menambahkan bagian ``images:`` di override seperti yang ditunjukkan di " "bawah ini." msgid "" "Update, image section in new overrides ``ceph-update.yaml`` as shown below" msgstr "" "Pembaruan, bagian image di overrides baru ``ceph-update.yaml`` seperti yang " "ditunjukkan di bawah ini" msgid "Upgrade Ceph charts to update version" msgstr "Tingkatkan Ceph charts untuk memperbarui versi" msgid "" "Upgrade Ceph charts to version 12.2.5 by updating docker images in overrides." msgstr "" "Tingkatkan Ceph chart ke versi 12.2.5 dengan memperbarui image docker di " "overrides." 
msgid "" "Upgrade Ceph component version from ``12.2.4`` to ``12.2.5`` without " "downtime to OSH components." msgstr "" "Upgrade versi komponen Ceph dari ``12.2.4`` ke ``12.2.5`` tanpa waktu henti " "ke komponen OSH." msgid "" "Use Ceph override file ``ceph.yaml`` that was generated previously and " "update images section as below" msgstr "" "Gunakan Ceph override file ``ceph.yaml`` yang telah dibuat sebelumnya dan " "perbarui bagian image seperti di bawah ini" msgid "" "Using ``ceph mon_status`` and ``ceph -s`` commands, confirm ID of MON that " "is out of quorum." msgstr "" "Dengan menggunakan perintah ``ceph mon_status`` dan ``ceph -s``, " "konfirmasikan ID MON yang keluar dari quorum." msgid "" "Validate the Ceph status (i.e., one OSD is added, so the total number of " "OSDs becomes 24):" msgstr "" "Validasi status Ceph (yaitu satu OSD ditambahkan, sehingga jumlah total OSD " "menjadi 24):" msgid "" "We also monitored the pod status through ``kubectl get pods -n ceph`` during " "this process. The deleted OSD pod status changed as follows: ``Terminating`` " "-> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> ``Running``, and this " "process takes about 90 seconds. The reason is that Kubernetes automatically " "restarts OSD pods whenever they are deleted." msgstr "" "Kami juga memantau status pod melalui ``kubectl get pods -n ceph`` selama " "proses ini. Status pod OSD yang dihapus diubah sebagai berikut: " "``Terminating`` -> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> " "``Running``, dan proses ini membutuhkan waktu sekitar 90 detik. Alasannya " "adalah Kubernetes secara otomatis merestart pod OSD setiap kali dihapus." msgid "" "We also monitored the status of the Monitor pod through ``kubectl get pods -" "n ceph``, and the status of the pod (where a Monitor process is killed) " "changed as follows: ``Running`` -> ``Error`` -> ``Running`` and this " "recovery process takes about 24 seconds." msgstr "" "Kami juga memantau status pod Monitor melalui ``kubectl get pods -n ceph``, " "dan status pod (di mana proses Monitor mati) berubah sebagai berikut: " "``Running`` -> ``Error`` -> ``Running`` dan proses pemulihan ini membutuhkan " "waktu sekitar 24 detik." msgid "" "We have 3 Monitors in this Ceph cluster, one on each of the 3 Monitor hosts." msgstr "" "Kami memiliki 3 Monitor di cluster Ceph ini, satu di masing-masing dari 3 " "host Monitor." msgid "" "We intentionlly destroy a Monitor database by removing ``/var/lib/openstack-" "helm/ceph/mon/mon/ceph-voyager3/store.db``." msgstr "" "Kami bermaksud menghancurkan database Monitor dengan menghapus ``/var/lib/" "openstack-helm/ceph/mon/mon/ceph-voyager3/store.db``." 
msgid "" "We monitored the status of Ceph Monitor pods and noted that the symptoms are " "similar to when 1 or 2 Monitor processes are killed:" msgstr "" "Kami memantau status pod Ceph Monitor dan mencatat bahwa gejalanya mirip " "dengan ketika 1 atau 2 proses Monitor dimatikan:" msgid "" "We monitored the status of Ceph when the Monitor processes are killed and " "noted that the symptoms are similar to when 1 Monitor process is killed:" msgstr "" "Kami memantau status Ceph ketika proses Monitor dimatikan dan mencatat bahwa " "gejala mirip dengan ketika 1 Proses monitor dimatikan:" msgid "`Disk failure <./disk-failure.html>`_" msgstr "`Disk failure <./disk-failure.html>`_" msgid "`Host failure <./host-failure.html>`_" msgstr "`Host failure <./host-failure.html>`_" msgid "`Monitor failure <./monitor-failure.html>`_" msgstr "`Monitor failure <./monitor-failure.html>`_" msgid "`OSD failure <./osd-failure.html>`_" msgstr "`OSD failure <./osd-failure.html>`_" msgid "``Ceph MON Status:``" msgstr "``Ceph MON Status:``" msgid "``Ceph MON Status``" msgstr "``Ceph MON Status``" msgid "``Ceph PODs:``" msgstr "``Ceph PODs:``" msgid "``Ceph PODs``" msgstr "``Ceph PODs``" msgid "``Ceph Status:``" msgstr "``Ceph Status:``" msgid "``Ceph quorum status:``" msgstr "``Ceph quorum status:``" msgid "``Ceph quorum status``" msgstr "``Ceph quorum status``" msgid "``Ceph status:``" msgstr "``Ceph status:``" msgid "``Ceph status``" msgstr "``Ceph status``" msgid "``Check node status:``" msgstr "``Check node status:``" msgid "``Following are PODs scheduled on mnode3 before shutdown:``" msgstr "``Following are PODs scheduled on mnode3 before shutdown:``" msgid "``OpenStack PODs:``" msgstr "``OpenStack PODs:``" msgid "``OpenStack PODs``" msgstr "``OpenStack PODs``" msgid "``Remove MON from Ceph cluster``" msgstr "``Remove MON from Ceph cluster``" msgid "``Result/Observation:``" msgstr "``Result/Observation:``" msgid "" "``Results:`` All provisioner pods got terminated at once (same time). Other " "ceph pods are running. No interruption to OSH pods." msgstr "" "``Results:`` Semua pod penyedia dihentikan sekaligus (saat yang sama). Ceph " "pod lainnya sedang berjalan. Tidak ada gangguan pada pod OSH." msgid "" "``Results:`` Mon pods got updated one by one (rolling updates). Each Mon pod " "got respawn and was in 1/1 running state before next Mon pod got updated. " "Each Mon pod got restarted. Other ceph pods were not affected with this " "update. No interruption to OSH pods." msgstr "" "``Results:`` Mon pod mendapat pembaruan satu per satu (pembaruan bergulir). " "Setiap Mon pod mendapat respawn dan berada dalam 1/1 keadaan sebelum Mon pod " "berikutnya diperbarui. Setiap Mon pod mulai dihidupkan ulang. Ceph pod " "lainnya tidak terpengaruh dengan pembaruan ini. Tidak ada gangguan pada pod " "OSH." msgid "" "``Results:`` Rolling updates (one pod at a time). Other ceph pods are " "running. No interruption to OSH pods." msgstr "" "``Results:`` Bergulir pembaruan (satu pod dalam satu waktu). Ceph pod " "lainnya sedang berjalan. Tidak ada gangguan pada pod OSH." msgid "" "``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` images are " "used for jobs. ``ceph_mon_check`` has one script that is stable so no need " "to upgrade." msgstr "" "Image ``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` " "digunakan untuk pekerjaan. ``ceph_mon_check`` memiliki satu skrip yang " "stabil sehingga tidak perlu melakukan upgrade." 
msgid "``cp /tmp/ceph.yaml ceph-update.yaml``" msgstr "``cp /tmp/ceph.yaml ceph-update.yaml``" msgid "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``" msgstr "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``" msgid "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``" msgstr "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``" msgid "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``" msgstr "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``" msgid "" "``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update." "yaml``" msgstr "" "``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update." "yaml``" msgid "``series of console outputs:``" msgstr "``series of console outputs:``"