Scalpels

Scalpels is a distributed tracing and debugging system for OpenStack.

Background

OpenStack is made of multiple Python-based projects, each with a similar but distinct architecture. Scalpels integrates useful scripts and third-party tools to help operators track system status in their cloud environments.

Contribute

This project is currently a prototype and is under active development. If you are interested in this work, please contact @kun_huang in the #openstack-chinese channel.

Mission

Scalpels is a kind of "debugfs" for OpenStack. It gathers data from a number of tracers, such as proc or SystemTap, which can be used to quantify performance under workloads.

Single Node Architecture

This type of deployment is used as a proof of concept in the OpenStack community CI.

All-in-One deployment

In the All-In-One scenario, the Scalpels Client works as an RPC client and the Scalpels Agent works as an RPC server. When the Scalpels Agent receives an RPC request to start tracer 3, it starts a Unix process for tracer 3, which writes its data to the database. The Scalpels Client can then read the data from the database directly.
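
As a rough illustration only, the flow could look like the sketch below. It assumes oslo.messaging (already listed in requirements.txt); the topic name, the start_tracer endpoint and the scalpels-tracer command are hypothetical, not the project's actual RPC API:

    import subprocess

    from oslo_config import cfg
    import oslo_messaging


    class AgentEndpoint(object):
        """Agent side: start a tracer as a separate Unix process."""

        def start_tracer(self, ctxt, tracer_id):
            # The tracer process is expected to persist its own output
            # to the database; here we only return its pid.
            return subprocess.Popen(['scalpels-tracer', str(tracer_id)]).pid


    transport = oslo_messaging.get_transport(cfg.CONF)


    def run_agent():
        """Agent process: serve RPC requests from clients."""
        target = oslo_messaging.Target(topic='scalpels', server='allinone')
        server = oslo_messaging.get_rpc_server(transport, target,
                                               [AgentEndpoint()])
        server.start()
        server.wait()


    def run_client():
        """Client process: ask the agent to start tracer 3."""
        client = oslo_messaging.RPCClient(
            transport, oslo_messaging.Target(topic='scalpels'))
        return client.call({}, 'start_tracer', tracer_id=3)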

Multiple Node Architecture

This is still being designed:

Multi-node deployment

In the multi-node scenario, Scalpels Servers are stateless and distributed across multiple nodes. Each Scalpels Server knows the location of all agents and can forward requests to start tracers.

The Scalpels Agent is introduced to manage tracer processes; it may be combined with the Scalpels Server during implementation.

Tracers can write data to a Redis bus instead of the database to keep the data consistent.
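
A minimal sketch of a tracer publishing results onto such a Redis bus is shown below; the key layout ("scalpels:tracer:<id>") and the use of redis-py are assumptions, not the project's actual schema:

    import json
    import time

    import redis

    bus = redis.StrictRedis(host='localhost', port=6379)


    def publish(tracer_id, payload):
        # Append one result record to the tracer's list so that any
        # Scalpels Server can consume the records later in order.
        record = {'tracer': tracer_id, 'time': time.time(), 'data': payload}
        bus.rpush('scalpels:tracer:%d' % tracer_id, json.dumps(record))


    publish(3, {'load1': 0.42})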

Agent-Tracer-Worker

The relationship is:

agent-tracer-worker.png

  • Scalpels Agent: manages tracers through start/stop signals.
  • Tracer: starts a worker process and writes its stdout to the database.
  • Worker: fetches data from the operating system.
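
A minimal sketch of the Tracer role is shown below: it starts a worker process and forwards each line of the worker's stdout to storage. The worker command (vmstat is just an example) and save_result() are placeholders for the real tracers and database writes:

    import subprocess


    def save_result(line):
        # Placeholder for the real database (or Redis bus) write.
        print('saving: %s' % line)


    def run_tracer(worker_cmd):
        # Start the worker and stream its stdout line by line.
        worker = subprocess.Popen(worker_cmd, stdout=subprocess.PIPE)
        for raw in iter(worker.stdout.readline, b''):
            save_result(raw.decode().strip())
        worker.wait()


    # Example worker that fetches data from the operating system.
    run_tracer(['vmstat', '1', '5'])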

Ideas

Each project will have scripts working:

  • on Python calls
  • on SQL queries
  • on filesystem I/O
  • on RPC calls if needed
  • on necessary system calls
  • on common system statistics (see the sketch below)
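
As an example for the last item, a worker for common system statistics could be as simple as reading load averages from /proc; the output format below is only illustrative:

    def read_loadavg():
        # /proc/loadavg starts with the 1, 5 and 15 minute load averages.
        with open('/proc/loadavg') as f:
            load1, load5, load15 = f.read().split()[:3]
        return {'load1': float(load1),
                'load5': float(load5),
                'load15': float(load15)}


    if __name__ == '__main__':
        print(read_loadavg())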