
modify some misspellings in notes

Change-Id: I65a8e85ed7d6c642311a0327e58bc9c79535e551
changes/72/498372/1
liangcui, 4 years ago
parent commit 62484b36c1
  1. specs/newton/os-net-config-teaming.rst (6 changed lines)
  2. specs/newton/pacemaker-next-generation-architecture.rst (6 changed lines)
  3. specs/pike/containerized-services-logs.rst (6 changed lines)
  4. specs/pike/gui-logging.rst (2 changed lines)
  5. specs/pike/network-configuration.rst (2 changed lines)
  6. specs/pike/send-mail-tool.rst (2 changed lines)
  7. specs/pike/tripleo-derive-parameters.rst (14 changed lines)
  8. specs/pike/tripleo-realtime.rst (2 changed lines)

specs/newton/os-net-config-teaming.rst (6 changed lines)

@@ -38,7 +38,7 @@ Alternatives
 We already have two bonding methods in use, the Linux bonding kernel module,
 and Open vSwitch. However, adapter teaming is becoming a best practice, and
-this change will open up that possiblity.
+this change will open up that possibility.
 Security Impact
 ---------------
@@ -181,8 +181,8 @@ configuration.
 Documentation Impact
 ====================
-The deployment documentaiton will need to be updated to cover the use of
-teaming. The os-net-config sample configurations will demostrate the use
+The deployment documentation will need to be updated to cover the use of
+teaming. The os-net-config sample configurations will demonstrate the use
 in os-net-config. TripleO Heat template examples should also help with
 deployments using teaming.

specs/newton/pacemaker-next-generation-architecture.rst (6 changed lines)

@@ -21,7 +21,7 @@ The pacemaker architecture deployed currently via
 `puppet/manifests/overcloud_controller_pacemaker.pp` manages most
 service on the controllers via pacemaker. This approach, while having the
 advantage of having a single entity managing and monitoring all services, does
-bring a certain complexity to it and assumes that the operaters are quite
+bring a certain complexity to it and assumes that the operators are quite
 familiar with pacemaker and its management of resources. The aim is to
 propose a new architecture, replacing the existing one, where pacemaker
 controls the following resources:
@@ -130,7 +130,7 @@ The operators working with a cloud are impacted in the following ways:
 https://github.com/ClusterLabs/pacemaker/blob/master/lib/services/systemd.c#L547
 With the new architecture, restarting a native openstack service across
-all controllers will require restaring it via `systemctl` on each node (as opposed
+all controllers will require restarting it via `systemctl` on each node (as opposed
 to a single `pcs` command as it is done today)
 * All services will be configured to retry indefinitely to connect to
@@ -177,7 +177,7 @@ Work Items
 * Prepare the roles that deploy the next generation architecture. Initially,
   keep it as close as possible to the existing HA template and make it simpler
-  in a second iteration (remove unnecesary steps, etc.) Template currently
+  in a second iteration (remove unnecessary steps, etc.) Template currently
   lives here and deploys successfully:
   https://review.openstack.org/#/c/314208/

specs/pike/containerized-services-logs.rst (6 changed lines)

@@ -41,7 +41,7 @@ Overview
 The scope of this document for Pike is limited to recommendations for
 developers of containerized services, bearing in mind use cases for hybrid
-environments. It addresses only intermediate implementaion steps for Pike and
+environments. It addresses only intermediate implementation steps for Pike and
 smooth UX with upgrades from Ocata to Pike, and with future upgrades from Pike
 as well.
@@ -61,7 +61,7 @@ The scope for future releases, starting from Queens, shall include best
 practices for collecting (shipping), storing (persisting), processing (parsing)
 and accessing (filtering) logs of hybrid TripleO deployments with advanced
 techniques like EFK (Elasticsearch, Fluentd, Kibana) or the like. Hereafter
-those are refered as "future steps".
+those are referred as "future steps".
 Note, this is limited to OpenStack and Linux HA stack (Pacemaker and Corosync).
 We can do nothing to the rest of the supporting and legacy apps like
@@ -275,7 +275,7 @@ Queens parts:
 * Verify if the namespaced `/var/log/` for containers works and fits the case
   (no assignee).
 * Address the current state of OpenStack infrastructure apps as they are, and
-  gently move them towards these guidelines refered as "future steps" (no
+  gently move them towards these guidelines referred as "future steps" (no
   assignee).
 Dependencies

specs/pike/gui-logging.rst (2 changed lines)

@@ -37,7 +37,7 @@ If the size exceeds a predetermined size (e.g. 10MB), Mistral will rename it to
 ``tripleo-ui-log-<timestamp>``, and create a new file in its place. The file
 will then receive the messages from Zaqar, one per line. Once we reach, let's
 say, a hundred archives (about 1GB) we can start removing dropping data in order
-to prevent unnecessary data accoumulation.
+to prevent unnecessary data accumulation.
 To view the logging data, we can ask Swift for 10 latest messages with a prefix
 of ``tripleo-ui-log``. These files can be presented in the GUI for download.

specs/pike/network-configuration.rst (2 changed lines)

@@ -109,7 +109,7 @@ Documentation Impact
 ====================
 We should document the new settings introduced by the wizard. The documentation
-should be transferrable between the heat template project, and TripleO UI.
+should be transferable between the heat template project, and TripleO UI.
 References
 ==========

specs/pike/send-mail-tool.rst (2 changed lines)

@@ -23,7 +23,7 @@ are not being verified whether is failing or passing.
 Even if there is someone responsible to verify these runs, still is a manual
 job go to logs web site, check what's the latest job, go to the logs, verify
 if tempest ran, list the number of failures, check against a list if these
-failures are known failures or new ones, and only afther all these steps,
+failures are known failures or new ones, and only after all these steps,
 start to work to identify the root cause of the problem.
 Proposed Change

specs/pike/tripleo-derive-parameters.rst (14 changed lines)

@@ -72,7 +72,7 @@ with node introspection for this workflow to be successful.
 During the first iterations, all the roles in a deployment will be
 analyzed to find a service associated with the role, which requires
 parameter derivation. Various options of using this and the final
-choice for the current iteration is discsused in below section
+choice for the current iteration is discussed in below section
 `Workflow Association with Services`_.
 This workflow assumes that all the nodes in a role have a homegenous
@@ -80,7 +80,7 @@ hardware specification and introspection data of the first node will
 be used for processing the parameters for the entire role. This will
 be reexamined in later iterations, based on the need for node specific
 derivations. The workflow will consider the flavor-profile association
-and nova placement scheduler to indentify the nodes associated with a
+and nova placement scheduler to identify the nodes associated with a
 role.
 Role-specific parameters are an important requirement for this workflow.
@@ -121,7 +121,7 @@ take advantage of this optional feature by enabling it via ``plan-
 environment.yaml``. A new section ``workflow_parameters`` will be added to
 the ``plan-environments.yaml`` file to accomodate the additional parameters
 required for executing workflows. With this additional section, we can ensure
-that the workflow sepcific parameters are provide only to the workflow,
+that the workflow specific parameters are provide only to the workflow,
 without polluting the heat environments. It will also be possible to provide
 multiple plan environment files which will be merged in the CLI before plan
 creation.
@@ -154,7 +154,7 @@ Usecase 2: Derivation Profiles for HCI
 This usecase uses HCI, running Ceph OSD and Nova Compute on the same node. HCI
 derive parameters workflow works with a default set of configs to categorize
 the type of the workload that the role will host. An option will be provide to
-override the default configs with deployment specfic configs via ``plan-
+override the default configs with deployment specific configs via ``plan-
 environment.yaml``.
 In case of HCI deployment, the additional plan environment used for the
@@ -230,7 +230,7 @@ service.
 Workflow Association with Services
 ----------------------------------
-The optimal way to assosciate the derived parameter workflows with
+The optimal way to associate the derived parameter workflows with
 services, is to get the list of the enabled services on a given role,
 by previewing Heat stack. With the current limitations in Heat, it is
 not possible fetch the enabled services list on a role. Thus, a new
@@ -308,7 +308,7 @@ Other End User Impact
 ---------------------
 Operators need not manually derive the deployment parameters based on the
-introspection or hardware specficiation data, as it is automatically derived
+introspection or hardware specification data, as it is automatically derived
 with pre-defined formulas.
 Performance Impact
@@ -369,7 +369,7 @@ Work Items
 * Derive Params start workflow to find list of roles
 * Workflow run for each role to fetch the introspection data and trigger
   individual features workflow
-* Workflow to indentify if a service associated with a features workflow is
+* Workflow to identify if a service associated with a features workflow is
   enabled in a role
 * DPDK Workflow: Analysis and concluding the format of the input data (jpalanis)
 * DPDK Workflow: Parameter deriving workflow (jpalanis)

specs/pike/tripleo-realtime.rst (2 changed lines)

@@ -59,7 +59,7 @@ compute nodes using TripleO.
 * real-time KVM
 * real-time tuned profiles
-* a new real-time compute role that is a variant of the exising compute role
+* a new real-time compute role that is a variant of the existing compute role
 * huge pages shall be enabled on the real-time compute nodes.
 * huge pages shall be reserved for the real-time guests.
