s/ElasticSearch/Elasticsearch/ where appropriate

Unlike OpenStack, there is no capital 'S' in Elasticsearch.

Change-Id: I6bd00983d2677a57c0ea080b2fd8226cef56f88f
Simon Pasquier 2015-04-23 15:50:06 +02:00
parent fb953f8af3
commit 51f593692f
14 changed files with 42 additions and 42 deletions

@@ -7,7 +7,7 @@ Overview
 The Logging, Monitoring & Alerting (LMA) collector is a service running on each
 OpenStack node that collects logs and notifications. Those data are sent to an
-ElasticSearch server for diagnostic, troubleshooting and alerting purposes.
+Elasticsearch server for diagnostic, troubleshooting and alerting purposes.
 Requirements
@@ -17,7 +17,7 @@ Requirements
 | Requirement                    | Version/Comment                                               |
 | ------------------------------ | ------------------------------------------------------------- |
 | Mirantis OpenStack compatility | 6.1 or higher                                                 |
-| A running ElasticSearch server | 1.4 or higher, the RESTful API must be enabled over port 9200 |
+| A running Elasticsearch server | 1.4 or higher, the RESTful API must be enabled over port 9200 |
 Limitations
@@ -31,12 +31,12 @@ Installation Guide
 Prior to installing the LMA Collector Plugin, you may want to install
-ElasticSearch and Kibana.
-To install ElasticSearch and Kibana automatically using Fuel, you can refer to the
-[ElasticSearch-Kibana Fuel Plugin
+Elasticsearch and Kibana.
+To install Elasticsearch and Kibana automatically using Fuel, you can refer to the
+[Elasticsearch-Kibana Fuel Plugin
 ](https://github.com/stackforge/fuel-plugin-elasticsearch-kibana).
-You can install ElasticSearch and Kibana outside of Fuel as long as
+You can install Elasticsearch and Kibana outside of Fuel as long as
 your installation meets the LMA Collector plugin's requirements defined above.
 **LMA collector plugin** installation
@@ -84,14 +84,14 @@ the required fields.
 Exploring the data
 ------------------
-Refer to the [ElasticSearch/Kibana
+Refer to the [Elasticsearch/Kibana
 plugin](https://github.com/stackforge/fuel-plugin-elasticsearch-kibana) for
 exploring and visualizing the collected data.
 Troubleshooting
 ---------------
-If you see no data in the ElasticSearch server, check the following:
+If you see no data in the Elasticsearch server, check the following:
 1. The LMA collector service is running
@@ -104,7 +104,7 @@ If you see no data in the ElasticSearch server, check the following:
 2. Look for errors in the LMA collector log file (located at
 `/var/log/lma_collector.log`) on the different nodes.
-3. Nodes are able to connect to the ElasticSearch server on port 9200.
+3. Nodes are able to connect to the Elasticsearch server on port 9200.
 Known issues

@@ -5,12 +5,12 @@ are running along with the Logging, Monitoring and Alerting collector.
 The scripts require the [Docker](https://www.docker.com/) runtime.
-# ElasticSearch
+# Elasticsearch
-[ElasticSearch](http://www.elasticsearch.org/overview/elasticsearch) is used to
+[Elasticsearch](http://www.elasticsearch.org/overview/elasticsearch) is used to
 store and index the logs and notifications gathered by the LMA collector.
-To install the ElasticSearch stack, see (elasticsearch/README.md).
+To install the Elasticsearch stack, see (elasticsearch/README.md).
 # InfluxDB
@@ -24,7 +24,7 @@ To install the InfluxDB stack, see (influxdb/README.md).
 The LMA dashboards are based on:
 * [Kibana](http://www.elasticsearch.org/overview/kibana) for displaying and
-querying data in ElasticSearch.
+querying data in Elasticsearch.
 * [Grafana](http://grafana.org/) for displaying and querying data in InfluxDB.

@@ -1,11 +1,11 @@
 # Description
-Scripts and tools for running an ElasticSearch server to be used with the LMA
+Scripts and tools for running an Elasticsearch server to be used with the LMA
 collector.
 # Requirements
-To run ElasticSearch, the host should have at least 1GB of free RAM. You also
+To run Elasticsearch, the host should have at least 1GB of free RAM. You also
 need sufficient free disk space for storing the data. The exact amount of disk
 depends highly on your environment and retention policy but 20GB is probably a
 sane minimum.
@@ -37,7 +37,7 @@ Supported environment variables for configuration:
 # Testing
-You can check that ElasticSearch is working using `curl`:
+You can check that Elasticsearch is working using `curl`:
 ```
 curl http://$HOST:9200/
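Beyond eyeballing the `curl` output, the JSON banner that `curl http://$HOST:9200/` returns can be checked in plain shell. A minimal sketch; the sample response body below is illustrative of a typical 1.x install, not captured from a live server:

```shell
# Illustrative banner as returned by `curl http://$HOST:9200/` on a 1.x
# server (sample body, not captured from a live install).
RESPONSE='{"status":200,"version":{"number":"1.4.4"},"tagline":"You Know, for Search"}'

# Extract the version number from the JSON with sed (no jq dependency).
VERSION=$(printf '%s' "$RESPONSE" | sed -n 's/.*"number":"\([0-9.]*\)".*/\1/p')
echo "Elasticsearch ${VERSION} is responding"
```

Against a live server, the same extraction works on the output of `curl -s http://$HOST:9200/`.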

@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Start a container running ElasticSearch
+# Start a container running Elasticsearch
 #
 # You can modify the following environment variables to tweak the configuration
 # - ES_DATA: directory where to store the ES data and logs (default=~/es_volume)
@@ -31,7 +31,7 @@ RUN_TIMEOUT=60
 ES_DATA=${ES_DATA:-~/es_volume}
 ES_MEMORY=${ES_MEMORY:-16}
 # We don't want to accidently expose the ES ports on the Internet so we only
-# publish ports on the host's loopback address. To access ElasticSearch from
+# publish ports on the host's loopback address. To access Elasticsearch from
 # another host, you can SSH to the Docker host with port forwarding (eg
 # '-L 9200:127.0.0.1:9200').
 # Or you can override the ES_LISTEN_ADDRESS variable...
@@ -67,7 +67,7 @@ docker_pull_image ${DOCKER_IMAGE}
 DOCKER_ID=$(timeout $RUN_TIMEOUT docker run -d -e ES_HEAP_SIZE=${ES_MEMORY}g -p ${ES_LISTEN_ADDRESS}:${ES_HTTP_PORT}:9200 -p ${ES_LISTEN_ADDRESS}:${ES_TRANSPORT_PORT}:9300 --name ${DOCKER_NAME} -v $ES_DATA:/data ${DOCKER_IMAGE} /elasticsearch/bin/elasticsearch -Des.config=/data/elasticsearch.yml)
 SHORT_ID=$(docker_shorten_id $DOCKER_ID)
-echo -n "Waiting for ElasticSearch to start"
+echo -n "Waiting for Elasticsearch to start"
 while ! curl http://${ES_LISTEN_ADDRESS}:${ES_HTTP_PORT} 1>/dev/null 2>&1; do
     echo -n '.'
     IS_RUNNING=$(docker inspect --format="{{ .State.Running }}" ${DOCKER_ID})
@@ -88,4 +88,4 @@ curl -s -XDELETE ${ES_URL}/_template/log 1>/dev/null
 curl -s -XPUT -d @log_index_template.json ${ES_URL}/_template/log 1>/dev/null
 curl -s -XPUT -d @notification_index_template.json ${ES_URL}/_template/notification 1>/dev/null
-echo "ElasticSearch API avaiable at ${ES_URL}"
+echo "Elasticsearch API avaiable at ${ES_URL}"

@@ -19,7 +19,7 @@
 # It performs the following operations:
 # * Build the LMA collector plugin.
 # * Install the LMA collector plugin.
-# * Deploy an ElasticSearch container
+# * Deploy an Elasticsearch container
 # * Deploy an InfluxDB container
 # * Deploy a container for running the LMA dashboards.
 #
@@ -125,10 +125,10 @@ if ! (cd ${CURRENT_DIR}/../doc && make html); then
     info "Couldn't build the documentation."
 fi
-info "Starting the ElasticSearch container..."
+info "Starting the Elasticsearch container..."
 mkdir -p $ES_DIR
 if ! (cd ${CURRENT_DIR}/elasticsearch && ES_MEMORY=$ES_MEMORY ES_DATA=$ES_DIR ES_LISTEN_ADDRESS=$PRIMARY_IP_ADDRESS ./run_container.sh); then
-    fail "Failed to start the ElasticSearch container."
+    fail "Failed to start the Elasticsearch container."
 fi
 info "Starting the InfluxDB container..."

@@ -21,7 +21,7 @@ the default parameters:
 * ``GRAFANA_ENABLED``, whether or not to enable the Grafana dashboard (default:
 "yes").
-* ``ES_HOST``, the address of the ElasticSearch server (default: same host).
+* ``ES_HOST``, the address of the Elasticsearch server (default: same host).
 * ``INFLUXDB_HOST``, the address of the InfluxDB server (default: same host).
@@ -48,5 +48,5 @@ The dashboards are available at the following URLs:
 ## Troubleshooting
 If the dashboards fail to display the data or are unresponsive, run the
-``docker logs lma_ui`` command and check that the ElasticSearch and InfluxDB
+``docker logs lma_ui`` command and check that the Elasticsearch and InfluxDB
 servers are reachable from the machine running the web browser.

@@ -30,7 +30,7 @@ if [[ ${KIBANA_ENABLED} == "yes" ]]; then
     ES_HOST=${ES_HOST:-\"+window.location.hostname+\"}
     ES_PORT=9200
     ES_URL="http://${ES_HOST}:${ES_PORT}"
-    echo "ElasticSearch URL is ${ES_URL}"
+    echo "Elasticsearch URL is ${ES_URL}"
     cat <<EOF > /usr/share/nginx/html/kibana/config.js
 define(['settings'],
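The `ES_HOST=${ES_HOST:-...}` line in this hunk relies on shell default-value expansion: when the variable is unset or empty, the fallback text (here a JavaScript snippet that Kibana evaluates in the browser) is substituted instead. A minimal sketch of the same mechanism, using `localhost` and `es.example.com` as purely illustrative values:

```shell
# ${VAR:-default} substitutes the default when VAR is unset or empty.
unset ES_HOST
ES_PORT=9200
ES_URL="http://${ES_HOST:-localhost}:${ES_PORT}"
echo "$ES_URL"   # http://localhost:9200

# A value set in the environment wins over the fallback.
ES_HOST=es.example.com
ES_URL="http://${ES_HOST:-localhost}:${ES_PORT}"
echo "$ES_URL"   # http://es.example.com:9200
```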

@@ -127,7 +127,7 @@ case $elasticsearch_mode {
     $es_server = $es_nodes[0]['internal_address']
   }
   default: {
-    fail("'${elasticsearch_mode}' mode not supported for ElasticSearch")
+    fail("'${elasticsearch_mode}' mode not supported for Elasticsearch")
   }
 }

@@ -10,7 +10,7 @@ Usage
 -----
 To deploy the LMA collector service on a host and forward collected data to
-ElasticSearch and/or InfluxDB servers.
+Elasticsearch and/or InfluxDB servers.
 ```puppet
 # Configure the common components of the collector service
@@ -24,7 +24,7 @@ class {'lma_collector':
 class { 'lma_collector::system_logs':
 }
-# Send data to ElasticSearch
+# Send data to Elasticsearch
 class { 'lma_collector::elasticsearch':
   server => '10.20.0.2'
 }

@@ -38,11 +38,11 @@ new metric messages.
 The satellite clusters delivered as part of the LMA Toolchain starting with Mirantis OpenStack 6.1 include:
-* `ElasticSearch <http://www.elasticsearch.org/>`_, a powerful open source search server based
+* `Elasticsearch <http://www.elasticsearch.org/>`_, a powerful open source search server based
 on Lucene and analytics engine that makes data like log messages and notifications easy to explore and analyse.
 * `InfluxDB <http://influxdb.com/>`_, an open-source and distributed time-series database to store and search metrics.
-By combining ElasticSearch with `Kibana <http://www.elasticsearch.org/overview/kibana/>`_,
+By combining Elasticsearch with `Kibana <http://www.elasticsearch.org/overview/kibana/>`_,
 the LMA Toolchain provides an effective way to search and correlate all service-affecting events
 that occurred in the system for root cause analysis.

@@ -12,11 +12,11 @@ The supported backends are described hereunder.
 .. _elasticsearch_output:
-ElasticSearch
+Elasticsearch
 =============
 The LMA collector is able to send :ref:`logs` and :ref:`notifications` to
-`ElasticSearch <http://elasticsearch.org/>`_.
+`Elasticsearch <http://elasticsearch.org/>`_.
 There is one index per day and per type of message:
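For context on the daily indexing scheme the hunk above describes: with one index per day and per message type, index names typically embed the UTC date. The `<type>-YYYY.MM.DD` pattern below is an assumption for illustration only; the collector's actual naming may differ:

```shell
# Hypothetical daily index names, one per message type and per day.
# The "<type>-YYYY.MM.DD" pattern is an assumption, not taken from the plugin.
DAY=$(date -u +%Y.%m.%d)
for TYPE in log notification; do
    echo "${TYPE}-${DAY}"
done
```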

@@ -24,8 +24,8 @@ attributes:
   elasticsearch_node_name:
     value: 'elasticsearch'
-    label: "ElasticSearch node name"
-    description: 'Label of the node running the ElasticSearch/Kibana plugin that is deployed in the environment.'
+    label: "Elasticsearch node name"
+    description: 'Label of the node running the Elasticsearch/Kibana plugin that is deployed in the environment.'
     weight: 30
     type: "text"
     restrictions:
@@ -34,8 +34,8 @@ attributes:
   elasticsearch_address:
     value: ''
-    label: 'ElasticSearch address'
-    description: 'IP address or fully qualified domain name of the ElasticSearch server.'
+    label: 'Elasticsearch address'
+    description: 'IP address or fully qualified domain name of the Elasticsearch server.'
     weight: 40
     type: "text"
     restrictions:

@@ -5,7 +5,7 @@ title: The Logging, Monitoring and Alerting (LMA) Collector Plugin
 # Plugin version
 version: 0.7.0
 # Description
-description: Collect logs, metrics and notifications from system and OpenStack services and forward that information to external backends such as ElasticSearch and InfluxDB.
+description: Collect logs, metrics and notifications from system and OpenStack services and forward that information to external backends such as Elasticsearch and InfluxDB.
 # Required fuel version
 fuel_version: ['6.1']

@@ -12,7 +12,7 @@ https://blueprints.launchpad.net/fuel/+spec/lma-collector-plugin
 The LMA (Logging, Monitoring & Alerting) collector is a service running on each
 OpenStack node that collects metrics, logs and notifications. This data can be
-sent to ElasticSearch [#]_ and/or InfluxDB [#]_ backends for diagnostic,
+sent to Elasticsearch [#]_ and/or InfluxDB [#]_ backends for diagnostic,
 troubleshooting and alerting purposes.
 Problem description
@@ -23,7 +23,7 @@ monitoring, diagnosing and troubleshooting the deployed OpenStack environments.
 The LMA collector aims at addressing the following use cases:
-* Send logs and notifications to ElasticSearch so operators can more easily
+* Send logs and notifications to Elasticsearch so operators can more easily
 troubleshoot issues.
 * Send metrics to InfluxDB so operators can monitor and diagnose the usage
@@ -113,7 +113,7 @@ Heka is written in Go).
 Other deployer impact
 ---------------------
-The deployer will have to run an ElasticSearch cluster and/or an InfluxDB
+The deployer will have to run an Elasticsearch cluster and/or an InfluxDB
 cluster to store the collected data. Eventually, these requirements will be
 addressed by additional Fuel plugins once the custom role feature [#]_ gets
 available.
@@ -161,7 +161,7 @@ Testing
 * Test the plugin by deploying environments with all Fuel deployment modes.
-* Create integration tests with ElasticSearch and InfluxDB backends.
+* Create integration tests with Elasticsearch and InfluxDB backends.
 Documentation Impact
 ====================