Merge "Update documentation to the latest state"

Jenkins 2016-11-03 18:31:38 +00:00 committed by Gerrit Code Review
commit 0d66326337
4 changed files with 85 additions and 29 deletions


@@ -1,10 +1,10 @@
======
API
======
===
API
===
There are a few things that you should know about API before using it.
There are few things that you should know about API before using it.
Four ways to add a new trace point.
Five ways to add a new trace point.
-----------------------------------
.. code-block:: python
@@ -39,6 +39,19 @@ Four ways to add a new trace point.
    def _traced_only_if_trace_private_true(self):
        pass


@six.add_metaclass(profiler.TracedMeta)
class RpcManagerClass(object):
    # __trace_args__ supplies the arguments used to trace every public
    # method of the class; 'name' is required, the others are optional.
    __trace_args__ = {'name': 'rpc',
                      'info': None,
                      'hide_args': False,
                      'trace_private': False}

    def my_method(self, some_args):
        pass

    def my_method2(self, some_arg1, some_arg2, kw=None, kw2=None):
        pass
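For comparison, the metaclass form above is roughly equivalent to decorating
each public method explicitly; the following sketch is illustrative only and
is not part of this patch:

.. code-block:: python

    # Rough equivalent of the TracedMeta example above (illustrative only):
    # every public method gets the profiler.trace() decorator with the
    # arguments taken from __trace_args__.
    from osprofiler import profiler


    class RpcManagerClass(object):

        @profiler.trace("rpc", info=None, hide_args=False)
        def my_method(self, some_args):
            pass

        @profiler.trace("rpc", info=None, hide_args=False)
        def my_method2(self, some_arg1, some_arg2, kw=None, kw2=None):
            pass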
How profiler works?
-------------------
@@ -106,6 +119,14 @@ The fields are defined as the following:
Setting up the collector.
-------------------------
Using OSProfiler notifier.
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note:: The following way of configuring OSProfiler is deprecated. The new
   approach is described below in `Using OSProfiler initializer.`_.
   Don't use the OSProfiler notifier directly! Its support will soon be
   removed from OSProfiler.
The profiler doesn't include a trace point collector. The user/developer
should instead provide a method that sends messages to a collector. Let's
take a look at a trivial sample, where the collector is just a file:
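A minimal sketch of such a file-based collector, assuming OSProfiler's
``notifier.set()`` API, might look like this:

.. code-block:: python

    # A minimal sketch (not taken from the patch): a "collector" that simply
    # appends every notification to a local file named "traces".
    import json

    from osprofiler import notifier


    def send_info_to_file_collector(info, context=None):
        with open("traces", "a") as f:
            f.write(json.dumps(info))

    notifier.set(send_info_to_file_collector)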
@@ -125,6 +146,36 @@ take a look at a trivial sample, where the collector is just a file:
So now on every **profiler.start()** and **profiler.stop()** call we will
write info about the trace point to the end of the **traces** file.
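For illustration, a hypothetical trace point emitted after such a notifier is
registered (the trace name, info, and HMAC key below are placeholders):

.. code-block:: python

    # Hypothetical usage: initialize the profiler, then emit one trace point.
    # Each start/stop pair is appended to the "traces" file by the notifier
    # registered above.
    from osprofiler import profiler

    profiler.init("SECRET_HMAC_KEY")

    with profiler.Trace("db_call", info={"db.statement": "SELECT 1"}):
        pass  # the traced work goes here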
Using OSProfiler initializer.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSProfiler now contains various storage drivers to collect tracing data.
Information about which driver to use and which options to pass to OSProfiler
is now stored in the configuration files of the OpenStack services. An example
of such a configuration can be found below:
.. code-block:: bash
[profiler]
enabled = True
trace_sqlalchemy = True
hmac_keys = SECRET_KEY
connection_string = messaging://
If such a configuration is provided, OSProfiler can be set up in the
following way:
.. code-block:: python
if CONF.profiler.enabled:
    osprofiler_initializer.init_from_conf(
        conf=CONF,
        context=context.get_admin_context().to_dict(),
        project="cinder",
        service=binary,
        host=host
    )
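Once ``init_from_conf()`` has run, ordinary trace points in the service code
report to the driver named by ``connection_string`` (``messaging://`` in the
example above). A hypothetical trace point, with placeholder names, could look
like this:

.. code-block:: python

    # Illustrative only: a traced service function whose spans end up in the
    # collector configured via [profiler]/connection_string.
    from osprofiler import profiler


    @profiler.trace("volume.create", hide_args=True)
    def create_volume(request):
        pass  # traced call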
Initialization of profiler.
---------------------------
@@ -148,7 +199,7 @@ profiler, e.g. ``stack_trace = [base_id, trace_id]``.
OSProfiler CLI.
---------------
To make it easier for end users to work with profiler from CLI, osprofiler
To make it easier for end users to work with profiler from CLI, OSProfiler
has an entry point that allows them to retrieve information about traces and
present it in human-readable form.
@@ -180,9 +231,9 @@ Available commands:
$ osprofiler trace show <trace_id> --json/--html --out /path/to/file
* Using other storage drivers (e.g. MongoDB (URI: ``mongodb://``),
Messaging (URI: ``messaging://``), and
Ceilometer (URI: ``ceilometer://``)):
* In the latest versions of OSProfiler with storage drivers (e.g. MongoDB (URI:
  ``mongodb://``), Messaging (URI: ``messaging://``), and Ceilometer
  (URI: ``ceilometer://``)), the ``--connection-string`` parameter should be set:
.. parsed-literal::
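An illustrative invocation of this kind, with a placeholder connection string,
might look like::

   $ osprofiler trace show <trace_id> --connection-string mongodb://127.0.0.1:27017 --html --out /path/to/file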


@@ -1,16 +1,16 @@
============
Background
============
==========
Background
==========
OpenStack consists of multiple projects. Each project, in turn, is composed of
multiple services. To process some request, e.g. to boot a virtual machine,
OpenStack uses multiple services from different projects. In the case something
works too slowly, it's extremely complicated to understand what exactly goes
works too slow, it's extremely complicated to understand what exactly goes
wrong and to locate the bottleneck.
To resolve this issue, we introduce a tiny but powerful library,
**osprofiler**, that is going to be used by all OpenStack projects and their
python clients. To be able to generate 1 trace per request, that goes through
python clients. It generates 1 trace per request, that goes through
all involved services, and builds a tree of calls.
Why not cProfile and etc?
@@ -18,15 +18,15 @@ Why not cProfile and etc?
**The scope of this library is quite different:**
* We are interested in getting one trace of points from different service,
not tracing all python calls inside one process.
* We are interested in getting one trace of points from different services,
not tracing all Python calls inside one process.
* This library should be easy integratable in OpenStack. This means that:
* This library should be easy integrable into OpenStack. This means that:
* It shouldn't require too many changes in code bases of integrating
projects.
* It shouldn't require too many changes in code bases of projects it's
integrated with.
* We should be able to turn it off fully.
* We should be able to fully turn it off.
* We should be able to keep it turned on in lazy mode in production
(e.g. admin should be able to "trace" on request).


@@ -1,10 +1,10 @@
==============================================
OSProfiler -- Cross-project profiling library
==============================================
=============================================
OSProfiler -- Cross-project profiling library
=============================================
OSProfiler provides a tiny but powerful library that is used by
most (soon to be all) OpenStack projects and their python clients. It
provides functionality to be able to generate 1 trace per request, that goes
provides functionality to generate 1 trace per request, that goes
through all involved services. This trace can then be extracted and used
to build a tree of calls which can be quite handy for a variety of
reasons (for example in isolating cross-project performance issues).


@@ -1,13 +1,13 @@
=============
Integration
=============
===========
Integration
===========
There are 4 topics related to integrating OSProfiler with `OpenStack`_:
What we should use as a centralized collector?
----------------------------------------------
We decided to use `Ceilometer`_, because:
We primarily decided to use `Ceilometer`_, because:
* It's already integrated in OpenStack, so it's quite simple to send
notifications to it from all projects.
@@ -16,11 +16,14 @@ What we should use as a centralized collector?
messages related to one trace. Take a look at
*osprofiler.drivers.ceilometer.Ceilometer:get_report*
Starting with version 1.4.0, OSProfiler also offers other options (the MongoDB
driver in the 1.4.0 release, the Elasticsearch driver added later, etc.).
How to setup profiler notifier?
-------------------------------
We decided to use oslo.messaging Notifier API, because:
We primarily decided to use oslo.messaging Notifier API, because:
* `oslo.messaging`_ is integrated in all projects
@@ -29,6 +32,8 @@ How to setup profiler notifier?
* We don't need to add any new `CONF`_ options in projects
Starting with version 1.4.0, OSProfiler also offers other options (the MongoDB
driver in the 1.4.0 release, the Elasticsearch driver added later, etc.).
How to initialize profiler, to get one trace across all services?
-----------------------------------------------------------------
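A hedged sketch of one common way to do this, assuming OSProfiler's
``osprofiler.web`` helpers (illustrative only, not the section's actual
content): the caller attaches the current trace context as HTTP headers, and
the receiving service restores it with a WSGI middleware.

.. code-block:: python

    # Hedged sketch, not part of this patch: propagating one trace between
    # services over HTTP.
    from osprofiler import profiler
    from osprofiler import web

    # Client side: initialize the profiler and attach the trace headers
    # (X-Trace-Info / X-Trace-HMAC) to an outgoing API request.
    profiler.init("SECRET_KEY")
    trace_headers = web.get_trace_id_headers()  # send these with the request

    # Server side: wrap the service's WSGI application so that incoming
    # trace headers re-initialize the profiler for this request.
    def app(environ, start_response):  # placeholder WSGI application
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]

    app = web.WsgiMiddleware(app, hmac_keys="SECRET_KEY", enabled=True)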