From 875d979d0d5536b825afa1025011dd0f05238dfc Mon Sep 17 00:00:00 2001
From: SaiKiran
Date: Wed, 16 Dec 2015 12:52:11 +0530
Subject: [PATCH] [ops-guide] Deprecate the nova-manage sub-command

Remove all instances of the nova-manage sub-command in the ops-guide
and replace it with nova command-line client commands.

Change-Id: Ibb3f0be68ccd165ce7a8ad7746fbd20b45f0ff6a
Partial-Bug: #1517322
Co-Authored-By: daz
---
 doc/openstack-ops/app_crypt.xml              |  5 +-
 doc/openstack-ops/ch_arch_scaling.xml        |  4 --
 doc/openstack-ops/ch_ops_lay_of_land.xml     | 51 ++++++++++----------
 doc/ops-guide/source/app_crypt.rst           |  4 +-
 doc/ops-guide/source/arch_scaling.rst        | 16 +++---
 doc/ops-guide/source/ops_lay_of_the_land.rst | 44 +++++++++--------
 6 files changed, 60 insertions(+), 64 deletions(-)

diff --git a/doc/openstack-ops/app_crypt.xml b/doc/openstack-ops/app_crypt.xml
index 9ebab50f..9f33a945 100644
--- a/doc/openstack-ops/app_crypt.xml
+++ b/doc/openstack-ops/app_crypt.xml
@@ -368,9 +368,8 @@ again in a title.
     This past Valentine's Day, I received an alert that a
     compute node was no longer available in the cloud—meaning,
-    $nova-manage service list
-    showed this particular node with a status of
-    XXX.
+    $nova service-list
+    showed this particular node in a down state.
     I logged into the cloud controller and was able to both ping
     and SSH into the problematic compute node which seemed very odd.
     Usually if I receive this type of alert,
diff --git a/doc/openstack-ops/ch_arch_scaling.xml b/doc/openstack-ops/ch_arch_scaling.xml
index 47295ac2..344cc04f 100644
--- a/doc/openstack-ops/ch_arch_scaling.xml
+++ b/doc/openstack-ops/ch_arch_scaling.xml
@@ -584,10 +584,6 @@
     euca-describe-availability-zones verbose
-
-
-    nova-manage service list
-
     The internal availability zone is hidden in
     euca-describe-availability_zones (nonverbose).
diff --git a/doc/openstack-ops/ch_ops_lay_of_land.xml b/doc/openstack-ops/ch_ops_lay_of_land.xml
index e044c75b..a41ea4f2 100644
--- a/doc/openstack-ops/ch_ops_lay_of_land.xml
+++ b/doc/openstack-ops/ch_ops_lay_of_land.xml
@@ -163,10 +163,6 @@
     separately:
-
-
-    nova-manage
-
     glance-manage
@@ -464,28 +460,31 @@ http://203.0.113.10:8774/v2/98333aba48e756fa8f629c83a818ad57/servers | jq .
     First, you can discover what servers belong to your OpenStack
     cloud by running:
-    # nova-manage service list | sort
+    # nova service-list
     The output looks like the following:
-    Binary Host Zone Status State Updated_At
-    nova-cert cloud.example.com nova enabled :-) 2013-02-25 19:32:38
-    nova-compute c01.example.com nova enabled :-) 2013-02-25 19:32:35
-    nova-compute c02.example.com nova enabled :-) 2013-02-25 19:32:32
-    nova-compute c03.example.com nova enabled :-) 2013-02-25 19:32:36
-    nova-compute c04.example.com nova enabled :-) 2013-02-25 19:32:32
-    nova-compute c05.example.com nova enabled :-) 2013-02-25 19:32:41
-    nova-conductor cloud.example.com nova enabled :-) 2013-02-25 19:32:40
-    nova-consoleauth cloud.example.com nova enabled :-) 2013-02-25 19:32:36
-    nova-network cloud.example.com nova enabled :-) 2013-02-25 19:32:32
-    nova-scheduler cloud.example.com nova enabled :-) 2013-02-25 19:32:33
+    +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
+    | Id | Binary           | Host              | Zone | Status  | State | Updated_at                 | Disabled Reason |
+    +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
+    | 1  | nova-cert        | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 2  | nova-compute     | c01.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 3  | nova-compute     | c02.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 4  | nova-compute     | c03.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 5  | nova-compute     | c04.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 6  | nova-compute     | c05.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 7  | nova-conductor   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 8  | nova-network     | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:42.000000 | -               |
+    | 9  | nova-scheduler   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+    | 10 | nova-consoleauth | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:35.000000 | -               |
+    +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
     The output shows that there are five compute nodes and one cloud
-    controller. You see a smiley face, such as :-), which
+    controller. You can see that all services are in the up state, which
     indicates that the services are up and running. If a service is no
-    longer available, the :-) symbol changes to
-    XXX. This is an indication that you should troubleshoot why
-    the service is down.
+    longer available, its state changes to down. This is an indication
+    that you should troubleshoot why the service is down.
     If you are using cinder, run the following command to see a
     similar listing:
@@ -542,8 +541,8 @@ cloud.example.com nova
     network traffic segregation.
     You can find the version of the Compute installation by using the
-    nova-manage command:
-    # nova-manage version
+    nova client command:
+    # nova version-list
@@ -634,10 +633,10 @@ cloud.example.com nova
     | 8283efb2-e53d-46e1-a6bd-bb2bdef9cb9a | test02 | 10.1.1.0/24 |
     +--------------------------------------+--------+--------------+
-    The nova-manage tool can provide some additional
+    The nova command-line client can provide some additional
     details:
-    # nova-manage network list
+    # nova network-list
     id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid
     1 10.1.0.0/24 None 10.1.0.3 None None 300 2725bbd beacb3f2
     2 10.1.1.0/24 None 10.1.1.3 None None 301 none d0b1a796
@@ -651,7 +650,7 @@ cloud.example.com nova
     To find out whether any floating IPs are available in your
     cloud, run:
-    # nova-manage floating list
+    # nova floating-ip-list
     2725bb...59f43f 1.2.3.4 None nova vlan20
     None 1.2.3.5 48a415...b010ff nova vlan20
diff --git a/doc/ops-guide/source/app_crypt.rst b/doc/ops-guide/source/app_crypt.rst
index 158548e0..cb1d42fe 100644
--- a/doc/ops-guide/source/app_crypt.rst
+++ b/doc/ops-guide/source/app_crypt.rst
@@ -341,9 +341,9 @@ no longer available in the cloud—meaning,
 
 .. code-block:: console
 
-   $ nova-manage service list
+   $ nova service-list
 
-showed this particular node with a status of ``XXX``.
+showed this particular node in a down state.
 
 I logged into the cloud controller and was able to both ``ping`` and SSH
 into the problematic compute node which seemed very odd. Usually if I
diff --git a/doc/ops-guide/source/arch_scaling.rst b/doc/ops-guide/source/arch_scaling.rst
index 26d12546..4b17457f 100644
--- a/doc/ops-guide/source/arch_scaling.rst
+++ b/doc/ops-guide/source/arch_scaling.rst
@@ -339,22 +339,22 @@ change all flavor types relating to them.
   When you run any of the following operations, the services appear in
   their own internal availability zone
-  (CONF.internal\_service\_availability\_zone):
+  (CONF.internal_service_availability_zone):
 
-  -  nova host-list (os-hosts)
+  -  :command:`nova host-list` (os-hosts)
 
-  -  euca-describe-availability-zones verbose
+  -  :command:`euca-describe-availability-zones verbose`
 
-  -  ``nova-manage`` service list
+  -  :command:`nova service-list`
 
   The internal availability zone is hidden in
-  euca-describe-availability\_zones (nonverbose).
+  euca-describe-availability_zones (nonverbose).
 
-  CONF.node\_availability\_zone has been renamed to
-  CONF.default\_availability\_zone and is used only by the
+  CONF.node_availability_zone has been renamed to
+  CONF.default_availability_zone and is used only by the
   ``nova-api`` and ``nova-scheduler`` services.
 
-  CONF.node\_availability\_zone still works but is deprecated.
+  CONF.node_availability_zone still works but is deprecated.
 
 Scalable Hardware
 ~~~~~~~~~~~~~~~~~
diff --git a/doc/ops-guide/source/ops_lay_of_the_land.rst b/doc/ops-guide/source/ops_lay_of_the_land.rst
index 4e2df91c..b21a7d11 100644
--- a/doc/ops-guide/source/ops_lay_of_the_land.rst
+++ b/doc/ops-guide/source/ops_lay_of_the_land.rst
@@ -88,7 +88,6 @@ installed with the project's services on the cloud controller and
 do not need to be installed separately:
 
-* :command:`nova-manage`
 * :command:`glance-manage`
 * :command:`keystone-manage`
 * :command:`cinder-manage`
@@ -300,28 +299,31 @@ running:
 
 .. code-block:: console
 
-   # nova-manage service list | sort
+   # nova service-list
 
 The output looks like the following:
 
 .. code-block:: console
 
-   Binary Host Zone Status State Updated_At
-   nova-cert cloud.example.com nova enabled :-) 2013-02-25 19:32:38
-   nova-compute c01.example.com nova enabled :-) 2013-02-25 19:32:35
-   nova-compute c02.example.com nova enabled :-) 2013-02-25 19:32:32
-   nova-compute c03.example.com nova enabled :-) 2013-02-25 19:32:36
-   nova-compute c04.example.com nova enabled :-) 2013-02-25 19:32:32
-   nova-compute c05.example.com nova enabled :-) 2013-02-25 19:32:41
-   nova-conductor cloud.example.com nova enabled :-) 2013-02-25 19:32:40
-   nova-consoleauth cloud.example.com nova enabled :-) 2013-02-25 19:32:36
-   nova-network cloud.example.com nova enabled :-) 2013-02-25 19:32:32
-   nova-scheduler cloud.example.com nova enabled :-) 2013-02-25 19:32:33
+   +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
+   | Id | Binary           | Host              | Zone | Status  | State | Updated_at                 | Disabled Reason |
+   +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
+   | 1  | nova-cert        | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 2  | nova-compute     | c01.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 3  | nova-compute     | c02.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 4  | nova-compute     | c03.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 5  | nova-compute     | c04.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 6  | nova-compute     | c05.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 7  | nova-conductor   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 8  | nova-network     | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:42.000000 | -               |
+   | 9  | nova-scheduler   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
+   | 10 | nova-consoleauth | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:35.000000 | -               |
+   +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
 
 The output shows that there are five compute nodes and one cloud
-controller. You see a smiley face, such as ``:-)``, which indicates that
-the services are up and running. If a service is no longer available,
-the ``:-)`` symbol changes to ``XXX``. This is an indication that you
+controller. All services are in the up state, which indicates that
+they are up and running. If a service is in a down state, it is no
+longer available. This is an indication that you
 should troubleshoot why the service is down.
 
 If you are using cinder, run the following command to see a similar
@@ -373,11 +375,11 @@ be done for different reasons, such as endpoint privacy or network
 traffic segregation.
 
 You can find the version of the Compute installation by using the
-:command:`nova-manage` command:
+:command:`nova` client command:
 
 .. code-block:: console
 
-   # nova-manage version
+   # nova version-list
 
 Diagnose Your Compute Nodes
 ---------------------------
@@ -453,11 +455,11 @@ the :command:`nova` command-line client to get the IP ranges:
 | 8283efb2-e53d-46e1-a6bd-bb2bdef9cb9a | test02 | 10.1.1.0/24 |
 +--------------------------------------+--------+--------------+
 
-The :command:`nova-manage` tool can provide some additional details:
+The :command:`nova` command-line client can provide some additional details:
 
 .. code-block:: console
 
-   # nova-manage network list
+   # nova network-list
    id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid
    1 10.1.0.0/24 None 10.1.0.3 None None 300 2725bbd beacb3f2
    2 10.1.1.0/24 None 10.1.1.3 None None 301 none d0b1a796
@@ -472,7 +474,7 @@ To find out whether any floating IPs are available in your cloud, run:
 
 .. code-block:: console
 
-   # nova-manage floating list
+   # nova floating-ip-list
    2725bb...59f43f 1.2.3.4 None nova vlan20
    None 1.2.3.5 48a415...b010ff nova vlan20
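The docs changed by this patch tell operators to troubleshoot any service whose State column reads "down" in the `nova service-list` table. As a rough, hypothetical illustration of acting on that output from a script, the sketch below extracts the binaries reported as down; the embedded sample table and the field positions are assumptions for the example, not part of the patch:

```shell
#!/bin/sh
# Sketch: list services reported as "down" in nova service-list style
# table output. A small sample table stands in for a real
# `nova service-list` call so the script is self-contained.
sample_output='+----+--------------+-------------------+------+---------+-------+
| Id | Binary       | Host              | Zone | Status  | State |
+----+--------------+-------------------+------+---------+-------+
| 1  | nova-cert    | cloud.example.com | nova | enabled | up    |
| 2  | nova-compute | c01.example.com   | nova | enabled | down  |
+----+--------------+-------------------+------+---------+-------+'

# Splitting on "|", field 3 is Binary and field 7 is State; border
# rows produce no seventh field, so they never match /down/.
down_services=$(printf '%s\n' "$sample_output" |
    awk -F'|' '$7 ~ /down/ { gsub(/ /, "", $3); print $3 }')
echo "$down_services"
```

Against a live deployment you would replace the sample with the real command's output (the nova client needs admin credentials in the environment), and alert or investigate on any non-empty result.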