Fix some misspellings in notes

Change-Id: I65a8e85ed7d6c642311a0327e58bc9c79535e551
Author: liangcui, 2017-08-28 16:49:27 +08:00
commit 62484b36c1 (parent 8c02482736)
8 changed files with 20 additions and 20 deletions


@@ -38,7 +38,7 @@ Alternatives
 We already have two bonding methods in use, the Linux bonding kernel module,
 and Open vSwitch. However, adapter teaming is becoming a best practice, and
-this change will open up that possiblity.
+this change will open up that possibility.

 Security Impact
 ---------------
@@ -181,8 +181,8 @@ configuration.
 Documentation Impact
 ====================

-The deployment documentaiton will need to be updated to cover the use of
-teaming. The os-net-config sample configurations will demostrate the use
+The deployment documentation will need to be updated to cover the use of
+teaming. The os-net-config sample configurations will demonstrate the use
 in os-net-config. TripleO Heat template examples should also help with
 deployments using teaming.
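The sample configurations mentioned above could plausibly look like the following os-net-config YAML sketch. This is a hypothetical fragment modeled on the existing ``linux_bond`` schema: the ``team`` object type, NIC names, and runner option are illustrative assumptions, not a confirmed os-net-config interface.

.. code-block:: yaml

    network_config:
      -
        type: team                            # assumed type, by analogy with linux_bond
        name: team0
        use_dhcp: false
        bonding_options: "mode=activebackup"  # illustrative teamd runner choice
        members:
          -
            type: interface
            name: nic2                        # example primary team port
            primary: true
          -
            type: interface
            name: nic3                        # example backup team port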


@@ -21,7 +21,7 @@ The pacemaker architecture deployed currently via
 `puppet/manifests/overcloud_controller_pacemaker.pp` manages most
 service on the controllers via pacemaker. This approach, while having the
 advantage of having a single entity managing and monitoring all services, does
-bring a certain complexity to it and assumes that the operaters are quite
+bring a certain complexity to it and assumes that the operators are quite
 familiar with pacemaker and its management of resources. The aim is to
 propose a new architecture, replacing the existing one, where pacemaker
 controls the following resources:
@@ -130,7 +130,7 @@ The operators working with a cloud are impacted in the following ways:
 https://github.com/ClusterLabs/pacemaker/blob/master/lib/services/systemd.c#L547

 With the new architecture, restarting a native openstack service across
-all controllers will require restaring it via `systemctl` on each node (as opposed
+all controllers will require restarting it via `systemctl` on each node (as opposed
 to a single `pcs` command as it is done today)
 * All services will be configured to retry indefinitely to connect to
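To make the operational difference concrete, restarting one such service on three controllers changes roughly as sketched below; the resource and unit names are illustrative.

.. code-block:: bash

    # today: a single pcs command restarts the clone on every controller
    pcs resource restart openstack-nova-api-clone

    # proposed architecture: restart the systemd unit on each controller node
    for node in controller-0 controller-1 controller-2; do
        ssh "$node" sudo systemctl restart openstack-nova-api
    done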
@@ -177,7 +177,7 @@ Work Items
 * Prepare the roles that deploy the next generation architecture. Initially,
   keep it as close as possible to the existing HA template and make it simpler
-  in a second iteration (remove unnecesary steps, etc.) Template currently
+  in a second iteration (remove unnecessary steps, etc.) Template currently
   lives here and deploys successfully:
   https://review.openstack.org/#/c/314208/


@@ -41,7 +41,7 @@ Overview
 The scope of this document for Pike is limited to recommendations for
 developers of containerized services, bearing in mind use cases for hybrid
-environments. It addresses only intermediate implementaion steps for Pike and
+environments. It addresses only intermediate implementation steps for Pike and
 smooth UX with upgrades from Ocata to Pike, and with future upgrades from Pike
 as well.
@@ -61,7 +61,7 @@ The scope for future releases, starting from Queens, shall include best
 practices for collecting (shipping), storing (persisting), processing (parsing)
 and accessing (filtering) logs of hybrid TripleO deployments with advanced
 techniques like EFK (Elasticsearch, Fluentd, Kibana) or the like. Hereafter
-those are refered as "future steps".
+those are referred as "future steps".

 Note, this is limited to OpenStack and Linux HA stack (Pacemaker and Corosync).
 We can do nothing to the rest of the supporting and legacy apps like
@@ -275,7 +275,7 @@ Queens parts:
 * Verify if the namespaced `/var/log/` for containers works and fits the case
   (no assignee).
 * Address the current state of OpenStack infrastructure apps as they are, and
-  gently move them towards these guidelines refered as "future steps" (no
+  gently move them towards these guidelines referred as "future steps" (no
   assignee).

 Dependencies
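One minimal way to check the namespaced ``/var/log/`` item above is to bind-mount a per-service host directory over the container's ``/var/log``, so the service keeps writing to its usual paths while the host collects files per service. A sketch, with the image name and host path as assumptions:

.. code-block:: bash

    # give the nova-api container a private /var/log backed by the host
    mkdir -p /var/log/containers/nova-api
    docker run --detach \
        --volume /var/log/containers/nova-api:/var/log:rw \
        tripleoupstream/centos-binary-nova-api   # image name illustrative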


@@ -37,7 +37,7 @@ If the size exceeds a predetermined size (e.g. 10MB), Mistral will rename it to
 ``tripleo-ui-log-<timestamp>``, and create a new file in its place. The file
 will then receive the messages from Zaqar, one per line. Once we reach, let's
 say, a hundred archives (about 1GB) we can start removing dropping data in order
-to prevent unnecessary data accoumulation.
+to prevent unnecessary data accumulation.

 To view the logging data, we can ask Swift for 10 latest messages with a prefix
 of ``tripleo-ui-log``. These files can be presented in the GUI for download.
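The rotation scheme above is simple enough to sketch. The following minimal Python illustration uses the names and thresholds from the text, but writes to a local directory for brevity; the spec itself keeps these files in Swift and drives the rotation from Mistral.

.. code-block:: python

    import os
    import time

    LOG_NAME = 'tripleo-ui-log'
    MAX_SIZE = 10 * 1024 * 1024   # rotate once the current file exceeds ~10MB
    MAX_ARCHIVES = 100            # roughly 1GB of archives before dropping data

    def append_message(message, log_dir='.'):
        """Append one Zaqar message per line, rotating and pruning as needed."""
        current = os.path.join(log_dir, LOG_NAME)
        if os.path.exists(current) and os.path.getsize(current) >= MAX_SIZE:
            archived = '%s-%d' % (LOG_NAME, int(time.time()))
            os.rename(current, os.path.join(log_dir, archived))
        with open(current, 'a') as f:
            f.write(message + '\n')
        # drop the oldest archives once the cap is exceeded
        archives = sorted(name for name in os.listdir(log_dir)
                          if name.startswith(LOG_NAME + '-'))
        for stale in archives[:-MAX_ARCHIVES]:
            os.remove(os.path.join(log_dir, stale))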


@@ -109,7 +109,7 @@ Documentation Impact
 ====================

 We should document the new settings introduced by the wizard. The documentation
-should be transferrable between the heat template project, and TripleO UI.
+should be transferable between the heat template project, and TripleO UI.

 References
 ==========


@@ -23,7 +23,7 @@ are not being verified whether is failing or passing.
 Even if there is someone responsible to verify these runs, still is a manual
 job go to logs web site, check what's the latest job, go to the logs, verify
 if tempest ran, list the number of failures, check against a list if these
-failures are known failures or new ones, and only afther all these steps,
+failures are known failures or new ones, and only after all these steps,
 start to work to identify the root cause of the problem.

 Proposed Change
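The manual comparison described above reduces to a set difference between a run's failures and the maintained list of known failures, which is exactly the piece worth automating; a sketch with illustrative test names:

.. code-block:: python

    def triage(run_failures, known_failures):
        """Split a tempest run's failures into new regressions and known issues."""
        run, known = set(run_failures), set(known_failures)
        return sorted(run - known), sorted(run & known)

    new, old = triage(
        ['tempest.api.compute.test_servers_create',    # illustrative names
         'tempest.scenario.test_network_basic_ops'],
        ['tempest.scenario.test_network_basic_ops'])
    # only the entries in `new` need fresh root-cause analysis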


@@ -72,7 +72,7 @@ with node introspection for this workflow to be successful.
 During the first iterations, all the roles in a deployment will be
 analyzed to find a service associated with the role, which requires
 parameter derivation. Various options of using this and the final
-choice for the current iteration is discsused in below section
+choice for the current iteration is discussed in below section
 `Workflow Association with Services`_.

 This workflow assumes that all the nodes in a role have a homegenous
@@ -80,7 +80,7 @@ hardware specification and introspection data of the first node will
 be used for processing the parameters for the entire role. This will
 be reexamined in later iterations, based on the need for node specific
 derivations. The workflow will consider the flavor-profile association
-and nova placement scheduler to indentify the nodes associated with a
+and nova placement scheduler to identify the nodes associated with a
 role.

 Role-specific parameters are an important requirement for this workflow.
@@ -121,7 +121,7 @@ take advantage of this optional feature by enabling it via ``plan-
 environment.yaml``. A new section ``workflow_parameters`` will be added to
 the ``plan-environments.yaml`` file to accomodate the additional parameters
 required for executing workflows. With this additional section, we can ensure
-that the workflow sepcific parameters are provide only to the workflow,
+that the workflow specific parameters are provide only to the workflow,
 without polluting the heat environments. It will also be possible to provide
 multiple plan environment files which will be merged in the CLI before plan
 creation.
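A plan environment carrying such a section might look like the sketch below; the workflow name and parameter keys are illustrative assumptions, since the text only fixes the section name ``workflow_parameters``.

.. code-block:: yaml

    # plan-environment.yaml (fragment)
    workflow_parameters:
      tripleo.derive_params.v1.derive_parameters:   # workflow name illustrative
        num_phy_cores_per_numa_node_for_pmd: 2      # example DPDK input
        hci_profile: default                        # example HCI input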
@@ -154,7 +154,7 @@ Usecase 2: Derivation Profiles for HCI
 This usecase uses HCI, running Ceph OSD and Nova Compute on the same node. HCI
 derive parameters workflow works with a default set of configs to categorize
 the type of the workload that the role will host. An option will be provide to
-override the default configs with deployment specfic configs via ``plan-
+override the default configs with deployment specific configs via ``plan-
 environment.yaml``.

 In case of HCI deployment, the additional plan environment used for the
@@ -230,7 +230,7 @@ service.
 Workflow Association with Services
 ----------------------------------

-The optimal way to assosciate the derived parameter workflows with
+The optimal way to associate the derived parameter workflows with
 services, is to get the list of the enabled services on a given role,
 by previewing Heat stack. With the current limitations in Heat, it is
 not possible fetch the enabled services list on a role. Thus, a new
@@ -308,7 +308,7 @@ Other End User Impact
 ---------------------

 Operators need not manually derive the deployment parameters based on the
-introspection or hardware specficiation data, as it is automatically derived
+introspection or hardware specification data, as it is automatically derived
 with pre-defined formulas.

 Performance Impact
@@ -369,7 +369,7 @@ Work Items
 * Derive Params start workflow to find list of roles
 * Workflow run for each role to fetch the introspection data and trigger
   individual features workflow
-* Workflow to indentify if a service associated with a features workflow is
+* Workflow to identify if a service associated with a features workflow is
   enabled in a role
 * DPDK Workflow: Analysis and concluding the format of the input data (jpalanis)
 * DPDK Workflow: Parameter deriving workflow (jpalanis)
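A skeleton of the fan-out these work items describe could take roughly the following shape in Mistral v2 syntax; the workflow, task, action, and input names are hypothetical.

.. code-block:: yaml

    version: '2.0'

    derive_parameters:                      # hypothetical top-level workflow
      input:
        - plan
      tasks:
        get_roles:
          action: tripleo.parameters.get    # illustrative action
          publish:
            role_list: <% task(get_roles).result %>
          on-success: derive_each_role
        derive_each_role:
          # run the per-role derivation for every role in the plan
          with-items: role in <% $.role_list %>
          workflow: derive_parameters_per_role   # hypothetical sub-workflow
          input:
            role_name: <% $.role %>
            plan: <% $.plan %>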


@@ -59,7 +59,7 @@ compute nodes using TripleO.
 * real-time KVM
 * real-time tuned profiles
-* a new real-time compute role that is a variant of the exising compute role
+* a new real-time compute role that is a variant of the existing compute role
 * huge pages shall be enabled on the real-time compute nodes.
 * huge pages shall be reserved for the real-time guests.
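For the huge pages items, the usual mechanism is kernel boot arguments on the role plus guests requesting the pages explicitly; a hypothetical environment fragment (role name, page size, and counts are illustrative, not the spec's settled interface):

.. code-block:: yaml

    parameter_defaults:
      ComputeRealTimeParameters:            # assumes a ComputeRealTime role
        # pre-allocate 1G huge pages at boot on the real-time compute nodes
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32"

Guests would then be placed on the reserved pages via a flavor extra spec such as ``hw:mem_page_size=1GB``.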