Fuel 3.2 corrections
parent 6bacbf8599
commit 8001384ada

Binary file not shown. (Changed image: 11 KiB before, 9.3 KiB after.)
@@ -6,7 +6,7 @@ RabbitMQ Cluster Restart Issues Following A Systemwide Power Failure
 simultaneously. RabbitMQ requires that after a full shutdown of the cluster,
 the first node brought up should be the last one to shut down, but it's not
 always possible to know which node that is in the event of a power outage or
-similar event. Fuel solve this problem by managing the restart of available
+similar event. Fuel solves this problem by managing the restart of available
 nodes, so you should not experience difficulty with this issue.
 
 If you are still using previous versions of Fuel, the following describes
@@ -1,14 +1,14 @@
 HowTo: Enable/Disable Galera Cluster autorebuild mechanism
 ----------------------------------------------------------
 
-By defaults Fuel reassembles Galera cluster automatically without need for any
-user interaction.
+By default, Fuel reassembles the Galera MySQL cluster automatically without
+the need for any manual user intervention.
 
-To prevent `autorebuild feature` you shall do::
+To disable the Galera `autorebuild feature`, run the following command::
 
     crm_attribute -t crm_config --name mysqlprimaryinit --delete
 
-To re-enable `autorebuild feature` you should do::
+To re-enable the Galera `autorebuild feature`, run the following command::
 
     crm_attribute -t crm_config --name mysqlprimaryinit --update done
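To check which state the attribute is currently in, the same Pacemaker tool can
query it; a minimal sketch, assuming the standard ``crm_attribute --query``
flag (output format varies by Pacemaker version)::

    # prints the attribute if set; errors out if it has been deleted
    crm_attribute -t crm_config --name mysqlprimaryinit --query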
@@ -5,10 +5,10 @@
 Add or Remove Controller and Compute Nodes Without Downtime
 ===========================================================
 
-This document will assist you in understanding certain concepts and
-processes around the management of controller and compute nodes within an
-OpenStack cluster. There are some specific details to note, making this
-document required reading.
+This document will help you become familiar with the process around lifecycle
+management of controller and compute nodes within an OpenStack cluster deployed
+by Fuel. There are some specific details to note, so reading this document
+is highly recommended.
 
 1. The addition of compute nodes works seamlessly - just specify its
    IPs in `site.pp` file (if needed) and run puppet agent
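In practice, the "run puppet agent" step on a newly added compute node is a
single command; a hedged sketch — the master hostname ``fuel-pm.localdomain``
is the one used in examples elsewhere in this guide, yours may differ::

    # on the new compute node, after its IPs are listed in site.pp
    puppet agent --test --server fuel-pm.localdomain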
@@ -5,16 +5,19 @@
 HowTo Notes
 ===========
 
-.. index:: HowTo: Create the XFS partition
+.. index:: HowTo: Create an XFS disk partition
 
 .. _create-the-XFS-partition:
 
-HowTo: Create the XFS partition
--------------------------------
+HowTo: Create an XFS disk partition
+-----------------------------------
 
 In most cases, Fuel creates the XFS partition for you. If for some reason you
 need to create it yourself, use this procedure:
 
+.. note:: Replace ``/dev/sdb`` with the appropriate block device you wish to
+   configure.
+
 1. Create the partition itself::
 
     fdisk /dev/sdb
@@ -40,6 +43,7 @@ need to create it yourself, use this procedure:
        noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
     mount -a
 
+
 .. index:: HowTo: Redeploy a node from scratch
 
 .. _Redeploy_node_from_scratch:
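Pulled together, the whole procedure from the two hunks above amounts to a
short shell session; a hedged sketch — the ``mkfs.xfs`` step and the
``/var/lib/glance`` mount point are assumptions filled in around the fstab
fragment shown, not a verbatim copy of the elided lines::

    fdisk /dev/sdb          # create one primary partition spanning the disk
    mkfs.xfs /dev/sdb1      # format the new partition as XFS
    echo "/dev/sdb1 /var/lib/glance xfs \
        noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
    mount -a                # mount everything newly listed in fstab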
@@ -49,12 +53,12 @@ HowTo: Redeploy a node from scratch
 
 Compute and Cinder nodes in an HA configuration and controller in any
 configuration cannot be redeployed without completely redeploying the cluster.
-However, in a non-HA situation you can redeploy a Compute or Cinder node.
-To do so, follow these steps:
+However, for a non-HA OpenStack cluster, you can redeploy a Compute or
+Cinder node. To do so, follow these steps:
 
 1. Remove the certificate for the node by executing the command
    ``puppet cert clean <hostname>`` on Fuel Master node.
-2. Reboot the node over the network so it can be picked up by cobbler.
+2. Reboot the node over the network so it can be picked up by Cobbler.
 3. Run the puppet agent on the target node using ``puppet agent --test``.
 
 .. _Enable_Disable_Galera_autorebuild:
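As commands, the three redeploy steps look roughly like this; a hedged sketch —
the hostname is a placeholder and the network reboot depends on your PXE/IPMI
setup::

    # on the Fuel Master node
    puppet cert clean node-01.example.com   # placeholder hostname

    # reboot the target node over the network so Cobbler reprovisions it,
    # then, once it comes back up, on the target node:
    puppet agent --test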
@@ -125,16 +129,16 @@ Here you can enter resource-specific commands::
 
 **crm(live)resource# start|restart|stop|cleanup <resource_name>**
 
-These commands let you correspondingly start, stop, restart resources.
+These commands allow you to respectively start, stop, and restart resources.
 
 **cleanup**
 
-Cleanup command cleans resources state on the nodes in case of their failure or
-unexpected operation, e.g. some residuals of SysVInit operation on resource, in
-which case pacemaker will manage it by itself, thus deciding in which node to
-run the resource.
+The pacemaker cleanup command resets a resource's state on the node if it is
+currently in a failed state or after some unexpected operation, such as some
+side effects of a SysVInit operation on the resource. In such an event,
+pacemaker will manage it by itself, deciding which node will run the resource.
 
-E.g.::
+Example::
 
     3 Nodes configured, 3 expected votes
     3 Resources configured.
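For instance, clearing the failed state of the MySQL resource from the crm
shell is a one-liner; a hedged example reusing the ``p_mysql`` resource name
that appears later in this guide::

    crm(live)resource# cleanup p_mysql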
@@ -192,13 +196,13 @@ You can troubleshoot this by checking corosync connectivity between nodes.
 There are several points:
 
 1) Multicast should be enabled in the network, IP address configured as
-multicast should not be filtered, mcastport and mcasport - 1 udp ports should
-be accepted on management network between controllers
+multicast should not be filtered. The mcast port, a single UDP port, should
+be accepted on the management network among all controllers.
 
-2) corosync should start after network interfaces are configured
+2) Corosync should start after network interfaces are activated.
 
-3) `bindnetaddr` should be in the management network or at least in the same
-multicast reachable segment
+3) `bindnetaddr` should be located in the management network or at least in
+the same multicast reachable segment
 
 You can check this in output of ``ip maddr show``:
 
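Besides ``ip maddr show``, Corosync can report its own view of the rings; a
hedged example — ``corosync-cfgtool -s`` is a standard Corosync utility, and
the ring address shown is illustrative::

    # corosync-cfgtool -s
    Printing ring status.
    Local node ID 1
    RING ID 0
            id      = 192.168.0.2
            status  = ring 0 active with no faults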
@@ -242,8 +246,9 @@ when members list is incomplete.
 How To Smoke Test HA
 --------------------
 
-To test if Quantum HA is working, simply shut down the node hosting, e.g. Quantum
-agents (either gracefully or hardly). You should see agents start on the other node::
+To test if Quantum HA is working, simply shut down the node hosting, for
+example, the Quantum agents (either gracefully or by a hard shutdown). You
+should see the agents start on the other node::
 
 
   # crm status
@@ -323,4 +328,4 @@ tunnels/bridges/interfaces are created and connected properly::
         Interface br-tun
             type: internal
     ovs_version: "1.4.0+build0"
 
@@ -1,8 +1,8 @@
 Corosync crashes without network connectivity
 ---------------------------------------------
 
-Depending on a wide range of systems and configurations in network it is
-possible for Corosync's networking protocol, TOTEM, to time out. If this
+Depending on a wide range of systems and configurations in the network, it is
+possible for Corosync's networking protocol, Totem, to time out. If this
 happens for an extended period of time, Corosync may crash. In addition,
 MySQL may have stopped. This guide illustrates the process of working
 through Corosync with MySQL issues.
@@ -11,17 +11,19 @@ through Corosync with MySQL issues.
 
 1. Verify that corosync is really broken with ``service corosync status``.
 
-   * You should see the next error: ``corosync dead but pid file exists``
+   * You should see the following error::
+
+       corosync dead but pid file exists
 
 2. Start corosync manually with ``service corosync start``.
 
 3. Run ``ps -ef | grep mysql`` and kill ALL(!) **mysqld** and
    **mysqld_safe** processes.
 
-4. Wait while pacemaker starts mysql processes again.
+4. Wait for pacemaker to completely start mysql processes.
 
   * You can check it with ``ps -ef | grep mysql`` command.
   * If it doesn't start, run ``crm resource start p_mysql``.
 
-5. Check with ``crm status`` command that this host is part of the cluster
-   and p_mysql is not within "Failed actions".
+5. Run the ``crm status`` command to verify that this host is a member
+   of the cluster and that p_mysql does not contain any "Failed actions".
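Chained together, the recovery steps above form the following session; a
hedged sketch — the PID values are placeholders, and a SysVInit-style system
is assumed, as elsewhere in this guide::

    service corosync status      # expect: corosync dead but pid file exists
    service corosync start
    ps -ef | grep mysql          # note every mysqld and mysqld_safe PID
    kill <mysqld_pid> <mysqld_safe_pid>   # placeholders for the PIDs found
    ps -ef | grep mysql          # wait until pacemaker respawns mysqld
    crm resource start p_mysql   # only if it does not come back by itself
    crm status                   # host in cluster, p_mysql with no failed actions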
@@ -22,9 +22,9 @@ This is a Puppet bug. See: http://projects.puppetlabs.com/issues/3234
     service puppetmaster restart
 
 **Issue:**
-Puppet client will never resend the certificate to Puppet Master. The
+Puppet client does not resend the certificate to Puppet Master. The client
 certificate cannot be signed and verified.
 
 This is a Puppet bug. See: http://projects.puppetlabs.com/issues/4680
 
 **Workaround:**
@@ -51,7 +51,7 @@ This is a Puppet bug. See: http://projects.puppetlabs.com/issues/4680
 
 **Issue:**
 Timeout error for fuel-controller-XX when running ``puppet-agent --test`` to
-install OpenStack when using HDD instead of SSD ::
+install OpenStack in a virtual deployment when using HDD instead of SSD ::
 
   | Sep 26 17:56:15 fuel-controller-02 puppet-agent[1493]: Could not retrieve
   | catalog from remote server: execution expired
@@ -73,4 +73,4 @@ add: ``configtimeout = 1200``
   | information from environment production source(s) puppet://fuel-pm.localdomain/plugins
 
 **Workaround:**
-http://projects.reductivelabs.com/issues/2244
+Refer to http://projects.reductivelabs.com/issues/2244 for information.
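The ``configtimeout = 1200`` value named in the hunk header above belongs in
the agent's ``puppet.conf``; a minimal sketch, assuming the stock file layout
(your file may keep settings under a different section)::

    # /etc/puppet/puppet.conf on the affected node
    [agent]
        configtimeout = 1200    # seconds to wait on the master before failing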
|
@ -12,16 +12,16 @@ How HA with Pacemaker and Corosync Works
|
||||
Corosync Settings
|
||||
-----------------
|
||||
|
||||
Corosync is using Totem protocol which is an implementation of Virtual Synchrony
|
||||
Corosync uses Totem protocol, which is an implementation of Virtual Synchrony
|
||||
protocol. It uses it in order to provide connectivity between cluster nodes,
|
||||
decide if cluster is quorate to provide services, to provide data layer for
|
||||
services that want to use features of Virtual Synchrony.
|
||||
|
||||
Corosync is used in Fuel as communication and quorum service for Pacemaker
|
||||
Corosync fuctions in Fuel as the communication and quorum service via Pacemaker
|
||||
cluster resource manager (`crm`). It's main configuration file is located in
|
||||
``/etc/corosync/corosync.conf``.
|
||||
|
||||
The main Corosync section is ``totem`` section which describes how cluster nodes
|
||||
The main Corosync section is the ``totem`` section which describes how cluster nodes
|
||||
should communicate::
|
||||
|
||||
totem {
|
||||
@@ -44,11 +44,11 @@ should communicate::
       }
     }
 
-Corosync usually uses multicast UDP transport and sets "redundant ring" for
-communication. Currently Fuel deploys controllers with one redundant ring. Each
-ring has it’s own multicast address and bind net address that specifies on which
-interface Corosync should join corresponding multicast group. Fuel uses default
-Corosync configuration, which can also be altered in Fuel manifests.
+Corosync usually uses multicast UDP transport and sets up a "redundant ring"
+for communication. Currently Fuel deploys controllers with one redundant ring.
+Each ring has its own multicast address and bind net address that specifies on
+which interface Corosync should join the corresponding multicast group. Fuel
+uses the default Corosync configuration, which can also be altered in Fuel manifests.
 
 .. seealso:: ``man corosync.conf`` or Corosync documentation at
    http://clusterlabs.org/doc/ if you want to know how to tune installation
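For illustration, one such ring is declared with an ``interface`` block inside
``totem``; a hedged sketch — the addresses below are placeholders, not Fuel's
actual defaults::

    totem {
      version: 2
      interface {
        ringnumber:  0
        bindnetaddr: 192.168.0.0   # net address of the management network
        mcastaddr:   239.1.1.2     # multicast group for this ring
        mcastport:   5405
      }
    }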
|
@ -19,7 +19,7 @@ of configuration options. Consequently, getting the most out of your
|
||||
OpenStack cloud over time – in terms of flexibility, scalability, and
|
||||
manageability – requires a thoughtful combination of complex configuration
|
||||
choices. This can be very time consuming and requires that you become
|
||||
familiar with a lot of documentation from a number of different projects.
|
||||
familiar with much of the documentation from the number of different projects.
|
||||
|
||||
Mirantis Fuel™ for OpenStack was created to eliminate exactly these problems.
|
||||
This step-by-step guide takes you through this process of:
|
||||
@@ -28,14 +28,14 @@ This step-by-step guide takes you through this process of:
   architecture
 * Deploying that architecture through an effective, well-integrated automation
   package that sets up and maintains the components and their configurations
-* Providing access to a well-integrated, up-to-date set of components known to
-  work together
+* Providing access to a tested, integrated, and up-to-date set of components
+  proven to work together
 
-Fuel™ for OpenStack can be used to create virtually any OpenStack
-configuration. To make things easier, the installation includes several
-pre-defined architectures. For the sake of simplicity, this guide emphasises
+Fuel™ for OpenStack can be used to create and support many popular OpenStack
+configurations. To make the process easier, the installation includes several
+pre-defined architectures. For the sake of simplicity, this guide emphasizes
 a single, common reference architecture; the multi-node, high-availability
 configuration. We begin with an explanation of this architecture, then move
 on to the details of creating the configuration in a test environment using
-VirtualBox. Finally, we give you the information you need to know to create
+VirtualBox. Finally, we provide you with the information you need to create
 this and other OpenStack architectures in a production environment.
@@ -19,11 +19,11 @@ Fuel is designed to help you easily install a standard OpenStack cluster, but wh
 Fuel usage scenarios and how they affect installation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Two basic Fuel usage scenarios exist::
+Two basic Fuel usage scenarios exist:
 
 * In the first scenario, a deployment engineer uses the Fuel ISO image to create a master node, make necessary changes to configuration files, and deploy OpenStack. In this scenario, each node gets a clean OpenStack installation.
 
 * In the second scenario, the master node and other nodes in the cluster have already been installed, and the deployment engineer has to deploy OpenStack to an existing configuration.
 
 For this discussion, the first scenario requires that any needed customizations be applied during the deployment, while the second scenario already has customizations applied.
@@ -37,7 +37,7 @@ Benefits
 --------
 
 * Using post-deployment checks helps you identify potential issues which
   may impact the health of a deployed system.
 
 * All post-deployment checks provide detailed descriptions about failed
   operations and tell you which component or components are not working
@@ -45,7 +45,7 @@ Benefits
 
 * Previously, performing these checks manually would have consumed a
   great deal of time. Now, with these checks the process will take only a
   few minutes.
 
 * Aside from verifying that everything is working correctly, the process
   will also determine how quickly your system works.
@@ -63,7 +63,7 @@ to understand if something is wrong with your OpenStack cluster.
 .. image:: /_images/healthcheck_tab.jpg
    :align: center
 
-As you can see on the image above, the Fuel UI now contains a ``Healthcheck``
+As you can see in the image above, the Fuel UI now contains a ``Health Check``
 tab, indicated by the Heart icon.
 
 All of the post-deployment checks are displayed on this tab. If your
@@ -75,17 +75,17 @@ failure was detected. All tests can be run on different environments, which
 you select on main page of Fuel UI. You can run checks in parallel on
 different environments.
 
-Each test contains information on its estimated and actual duration. We have
-included information about test processing time from our own tests and
-indicate this in each test. Note that we show average times from the slowest
-to the fastest systems we have tested, so your results will vary.
+Each test contains information on its estimated and actual duration. There is
+information included about test processing time from in-house testing, and this
+is indicated for each test. Note that average times are listed from the slowest
+to the fastest systems tested, so your results may vary.
 
-Once a test is complete the results will appear in the Status column. If
-there was an error during the test the UI will display the error message
-below the test name. To assist in the troubleshooting process, the test
+Once a test is complete, the results will appear in the Status column. If
+there was an error during the test, you will see the error message
+below the test name. To assist in troubleshooting, the test
 scenario is displayed under the failure message and the failed step is
 highlighted. You will find more detailed information on these tests later in
 this section.
 
 An actual test run looks like this:
@@ -96,25 +96,25 @@ What To Do When a Test Fails
 ----------------------------
 
 If a test fails, there are several ways to investigate the problem. You may
-prefer to start in Fuel UI since it's feedback is directly related to the
+prefer to start in Fuel UI, since its feedback is directly related to the
 health of the deployment. To do so, start by checking the following:
 
-* Under the `Healthcheck` tab
+* Under the `Health Check` tab
 * In the OpenStack Dashboard
 * In the test execution logs (/var/log/ostf-stdout.log)
-* In the individual OpenStack components logs
+* In the individual OpenStack components' logs
 
-Of course, there are many different conditions that can lead to system
-breakdowns, but there are some simple things that can be examined before you
-dig deep. The most common issues are:
+Certainly there are many different conditions that can lead to system
+breakdowns, but there are some simple items that can be examined before you
+start digging deeply. The most common issues include:
 
 * Not all OpenStack services are running
-* Any defined quota has been exceeded
-* Something has been broken in the network configuration
-* There is a general lack of resources (memory/disk space)
+* Some defined quota has been exceeded
+* Something has broken in the network configuration
+* A general lack of resources (memory/disk space)
 
 The first thing to be done is to ensure all OpenStack services are up and
-running. To do this you can run sanity test set, or execute the following
+running. To do this, you can run the sanity test set or execute the following
 command on your Controller node::
 
     nova-manage service list
@@ -124,11 +124,11 @@ If any service is off (has “XXX” status), you can restart it using this comm
     service openstack-<service name> restart
 
 If all services are on, but you're still experiencing some issues, you can
-gather information on OpenStack Dashboard (exceeded number of instances,
-fixed IPs etc). You may also read the logs generated by tests which is
-stored at ``/var/log/ostf-stdout.log``, or go to ``/var/log/<component>`` and view
-if any operation has ERROR status. If it looks like the last item, you may
-have underprovisioned your environment and should check your math and your
+gather information from OpenStack Dashboard (exceeded number of instances,
+fixed IPs, etc). You may also read the logs generated by tests, which are
+stored in ``/var/log/ostf-stdout.log``, or go to ``/var/log/<component>`` and
+check if any operation is in ERROR status. If it looks like the last item, you
+may have underprovisioned your environment and should check your math and your
 project requirements.
 
 Sanity Tests Description
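As a quick triage sequence, those checks chain together like this; a hedged
sketch — the service name and log path are illustrative examples::

    nova-manage service list                  # ":-)" is healthy, "XXX" is down
    service openstack-nova-scheduler restart  # example of restarting one service
    grep ERROR /var/log/nova/*.log            # assumed log location for Nova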
@@ -136,12 +136,12 @@ Sanity Tests Description
 
 Sanity checks work by sending a query to all OpenStack components to get a
 response back from them. Many of these tests are simple in that they ask
-each service for a list of it's associated objects and waits for a response.
-The response can be something, nothing, and error, or a timeout, so there
-are several ways to determine if a service is up. The following list shows
-what test is used for each service:
+each service for a list of its associated objects and then waits for a
+response. The response can be something, nothing, an error, or a timeout,
+so there are several ways to determine if a service is up. The following list
+shows what test is used for each service:
 
-.. topic:: Instances list availability
+.. topic:: Instance list availability
 
     Test checks that Nova component can return list of instances.
 
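In CLI terms, this first check is roughly what ``nova list`` does; a hedged
analogy rather than the actual OSTF test code::

    nova list    # passes if Nova returns the instance list without an error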
@@ -159,7 +159,7 @@ what test is used for each service:
     1. Request list of images.
     2. Check returned list is not empty.
 
-.. topic:: Volumes list availability
+.. topic:: Volume list availability
 
     Test checks that Cinder component can return list of volumes.
 
@@ -177,7 +177,7 @@ what test is used for each service:
     1. Request list of snapshots.
     2. Check returned list is not empty.
 
-.. topic:: Flavors list availability
+.. topic:: Flavor list availability
 
     Test checks that Nova component can return list of flavors.
 
@@ -213,7 +213,7 @@ what test is used for each service:
     1. Request list of services.
     2. Check returned list is not empty.
 
-.. topic:: Services execution monitoring
+.. topic:: Check all the services execute normally
 
     Test checks that all of the expected services are on, meaning the test will
     fail if any of the listed services is in “XXX” status.
@@ -224,13 +224,24 @@ what test is used for each service:
     2. Execute nova-manage service list command.
     3. Check there are no failed services.
 
-.. topic:: DNS availability
+.. topic:: Check Internet connectivity from a compute
 
-    Test checks that DNS is available.
+    Test checks that public Internet is available for compute hosts.
 
     Test scenario:
 
-    1. Connect to a Controller node via SSH.
+    1. Connect to a Compute node via SSH.
     2. Execute ping command to IP 8.8.8.8.
     3. Check ping can be successfully completed.
 
+
+.. topic:: Check DNS resolution on a compute
+
+    Test checks that DNS is available for compute hosts.
+
+    Test scenario:
+
+    1. Connect to a Compute node via SSH.
+    2. Execute host command for the controller IP.
+    3. Check DNS name can be successfully resolved.
 
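Run by hand, the two compute-side checks added here reduce to two commands; a
hedged sketch — the controller IP is a placeholder::

    # from a compute node
    ping -c 3 8.8.8.8     # Internet connectivity check
    host 192.168.0.2      # DNS resolution check against the controller IP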
@@ -261,12 +272,12 @@ Smoke tests verify how your system handles basic OpenStack operations under
 normal circumstances. The Smoke test series uses timeout tests for
 operations that have a known completion time to determine if there is any
 smoke, and thusly fire. An additional benefit to the Smoke Test series is
-that you get to see how fast your environment is the first time you run them.
+that you can observe how fast your environment is the first time you run it.
 
-All tests use basic OpenStack services (Nova, Glance, Keystone, Cinder etc),
-therefore if any of them is off, the test using it will fail. It is
-recommended to run all sanity checks prior to your smoke checks to determine
-all services are alive. This helps ensure that you don't get any false
+All tests use the basic OpenStack services (Nova, Glance, Keystone, Cinder,
+etc), therefore if any of these are inactive, the test using it will fail. It
+is recommended to run all sanity checks prior to your smoke checks to determine
+that all services are alive. This helps ensure that you don't get any false
 negatives. The following is a description of each smoke test available:
 
 .. topic:: Flavor creation
|
||||
2. Check created flavor has expected name.
|
||||
3. Check flavor disk has expected size.
|
||||
|
||||
For more information refer to nova cli reference.
|
||||
For more information refer to nova CLI documentation.
|
||||
|
||||
.. topic:: Volume creation
|
||||
|
||||
|
@@ -33,13 +33,13 @@ Minimal Requirements
 
 * Red Hat account (https://access.redhat.com)
 * Red Hat OpenStack entitlement (one per node)
-* Internet access for Fuel Master name
+* Internet access for Fuel Master node
 
 Optional requirements
 +++++++++++++++++++++
 
 * Red Hat Satellite Server
 * Configured Satellite activation key
 
 .. _RHSM:
 
@@ -50,9 +50,8 @@ Benefits
 ++++++++
 
 * No need to handle large ISOs or physical media.
-* Register all your clients with just a single username and password.
-* Automatically register the necessary products required for installation and
-  downloads a full cache.
-* Download only the latest packages.
+* Register all your hosts with just a single username and password.
+* Automatically register the necessary products required for installation.
+* Download only necessary packages.
 
@@ -87,7 +86,7 @@ Considerations
 * Red Hat RHN Satellite is a separate offering from Red Hat and requires
   dedicated hardware
 * Still requires Red Hat Subscription Manager and Internet access to download
-  registration packages (just for Fuel Master host)
+  registration packages (for Master node only)
 
 What you need
 +++++++++++++
 
@@ -7,11 +7,11 @@
 Network Issues
 ==============
 
-Fuel has a built-in capability to run network check before or after OpenStack
-deployment. Currently it can check connectivity between nodes within
-configured VLANs on configured server interfaces. Image below shows sample
-result of such check. By using this simple table it is easy to say which
-interfaces do not receive certain VLAN IDs. Usually it means that switch or
+Fuel has the built-in capability to run a network check before or after
+OpenStack deployment. Currently, it can check connectivity between nodes within
+configured VLANs on configured server interfaces. The image below shows a sample
+result of such a check. By using this simple table it is easy to determine which
+interfaces do not receive certain VLAN IDs. Usually, it means that a switch or
 multiple switches are not configured correctly and do not allow certain
 tagged traffic to pass through.
 
@@ -21,17 +21,18 @@ tagged traffic to pass through.
 On VirtualBox
 -------------
 
-Scripts which are provided for quick Fuel setup, create 3 host-interface
-adapters. Basically networking works as this being a 3 bridges, in each of
-them the only one VMs interfaces is connected. It means there is only L2
-connectivity between VMs on interfaces with the same name. If you try to
-move, for example, management network to `eth1` on Controller node, and the
+The scripts provided for quick Fuel setup create 3 host-interface adapters.
+Basically, networking works as if you have 3 switches, each with one network
+interface from every VM connected to it. It means that there is only L2
+connectivity between VMs on interfaces with the same name. If you try to move,
+for example, the management network to `eth1` on Controller node, and the
 same network to `eth2` on the Compute, then there will be no connectivity
-between OpenStack services in spite of being configured to live on the same
-VLAN. It is very easy to validate network settings before deployment by
+between OpenStack services, despite being configured to exist on the same
+VLAN. It is very easy to validate network settings prior to deployment by
 clicking the "Verify Networks" button.
-If you need to access OpenStack REST API over Public network, VNC console of VMs,
-Horizon in HA mode or VMs, refer to this section: :ref:`access_to_public_net`.
+If you need to access the OpenStack REST API over Public network, VNC console
+of VMs, Horizon in HA mode or VMs, refer to this section:
+:ref:`access_to_public_net`.
 
 Timeout In Connection to OpenStack API From Client Applications
 ---------------------------------------------------------------
@@ -94,10 +95,10 @@ option enabled::
 
     INFO (connectionpool:191) Starting new HTTP connection (1): 172.16.1.2
 
-Even though initial connection was in 192.168.0.2, then client tries to
-access Public network for Nova API. The reason is because Keystone returns
+Even though the initial connection was to 192.168.0.2, the client tries to
+access the Public network for Nova API. The reason is that Keystone returns
 the list of OpenStack services URLs, and for production-grade deployments it
 is required to access services over public network.
 
-.. seealso:: :ref:`access_to_public_net` if you want to configure the installation
-   on VirtualBox to make all these issues fixed.
+.. seealso:: :ref:`access_to_public_net` if you want to configure the
+   installation on VirtualBox and fix issues like the one above.