Digging and Tracing system for OpenStack
Scalpels

Scalpels is a distributed tracing and debugging system for OpenStack.

Background

OpenStack is made of multiple Python-based projects, each with a similar but distinct architecture. Scalpels integrates useful scripts and third-party tools to help operators track system status in their cloud environments.

Contribute

This project is currently a prototype and under active development. If you are interested in this work, please contact @kun_huang in the #openstack-chinese channel.

Mission

Scalpels is a kind of "debugfs" for OpenStack. It gathers data from a number of tracers, such as proc or SystemTap, which can be used to quantify performance under workloads.
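
As an illustration of what a proc-based tracer can look like, the sketch below reads a process's cumulative I/O counters from /proc/<pid>/io; the function name and return format are assumptions for illustration, not Scalpels' actual API:

    import os

    def read_io_counters(pid):
        """Read cumulative I/O counters from /proc/<pid>/io (Linux only)."""
        counters = {}
        with open('/proc/%d/io' % pid) as f:
            for line in f:
                key, value = line.split(':')
                counters[key.strip()] = int(value.strip())
        return counters

    # Example: sample this process's own read/write byte counts.
    print(read_io_counters(os.getpid()))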

Single Node Architecture

This type of deployment is used as a proof of concept (PoC) in the OpenStack community CI.

All-in-One deployment

In the all-in-one scenario, the Scalpels client works as an RPC client and the Scalpels agent works as an RPC server. When the Scalpels agent receives an RPC request to start tracer 3, it starts a Unix process for tracer 3, which writes its data to the database. The Scalpels client can then read the data from the database directly.
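
Since the repository lists oslo.messaging as a requirement, the client/agent exchange could look roughly like the sketch below; the topic, method name, endpoint class, and the scalpels-tracer command are assumptions for illustration, not the project's actual interface:

    import subprocess

    from oslo_config import cfg
    import oslo_messaging

    # Assumed topic name -- illustrative only.
    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='scalpels')

    class AgentEndpoint(object):
        """Agent side: registered with oslo_messaging.get_rpc_server()."""
        def start_tracer(self, ctxt, name):
            # The spawned tracer writes its results to the database itself;
            # 'scalpels-tracer' is a hypothetical command name.
            return subprocess.Popen(['scalpels-tracer', name]).pid

    # Client side: ask the agent to start a tracer, then read the results
    # from the database directly.
    client = oslo_messaging.RPCClient(transport, target)
    client.call({}, 'start_tracer', name='io_count')

In a real deployment the client and agent run in separate processes, with the transport URL pointing at the message broker.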

Multiple Node Architecture

This is still being designed:

Multi-node deployment

In the multi-node scenario, Scalpels servers are stateless and distributed across multiple nodes. Each Scalpels server knows the location of every agent and can forward requests to start tracers.

The Scalpels agent is introduced to manage tracer processes; it may be combined with the Scalpels server during implementation.

Tracers can write data into a Redis bus instead of the database to keep the data consistent.
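
A minimal sketch of that idea, assuming the redis-py client and a key name chosen here only for illustration: a tracer pushes JSON samples onto a Redis list and a server drains them into the database.

    import json
    import time

    import redis

    bus = redis.Redis(host='localhost', port=6379)

    # Tracer side: push one sample onto an assumed 'scalpels.samples' list.
    sample = {'tracer': 'io_count', 'host': 'node-1',
              'timestamp': time.time(), 'value': 4096}
    bus.rpush('scalpels.samples', json.dumps(sample))

    # Server side: drain pending samples and persist them.
    while True:
        item = bus.lpop('scalpels.samples')
        if item is None:
            break
        record = json.loads(item)
        # ... write the record to the database here ...
        print(record)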

Agent-Tracer-Worker

The relationship is:

agent-tracer-worker.png

  • Scalpels Agent: manages tracers via start/stop signals.
  • Tracer: starts a worker process and writes its stdout to the database.
  • Worker: fetches data from the operating system.
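
A minimal sketch of that split, assuming the worker is an external command (iostat is used here only as a stand-in) and that 'store' is a placeholder for the database write:

    import subprocess

    def run_tracer(worker_cmd, store):
        """Start a worker process and stream its stdout line by line."""
        proc = subprocess.Popen(worker_cmd, stdout=subprocess.PIPE,
                                universal_newlines=True)
        for line in proc.stdout:
            # In Scalpels the tracer would write this to the database;
            # 'store' is an assumed callback standing in for that step.
            store(line.rstrip())
        proc.wait()

    # Example: run 'iostat' as a stand-in worker that samples twice.
    run_tracer(['iostat', '-d', '1', '2'], store=print)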

Ideas

Each project will have scripts that work:

  • on python calls
  • on sql queries
  • on filesystem I/O
  • on RPC calls if needed
  • on necessary system calls
  • on common system statistics