Doc improvements:

- Explain prerequisites needed for benchmark examples
- Minor spelling and other fixes
Brian Cantoni
2015-01-07 18:32:29 -08:00
parent a68b1de613
commit 4a911ef9ac
6 changed files with 25 additions and 14 deletions


@@ -68,7 +68,7 @@ which sets the default keyspace for all queries made through that :class:`~.Sess
    session = cluster.connect('mykeyspace')

You can always change a Session's keyspace using :meth:`~.Session.set_keyspace` or
by executing a ``USE <keyspace>`` query:

.. code-block:: python
@@ -385,7 +385,7 @@ prepared statement:
    user2 = session.execute(user_lookup_stmt, [user_id2])[0]

The second option is to create a :class:`~.BoundStatement` from the
:class:`~.PreparedStatement` and binding parameters and set a consistency
level on that:

.. code-block:: python


@@ -8,7 +8,7 @@ The driver supports Python 2.6, 2.7, 3.3, and 3.4.
This driver is open source under the
`Apache v2 License <http://www.apache.org/licenses/LICENSE-2.0.html>`_.
The source code for this driver can be found on `GitHub <http://github.com/datastax/python-driver>`_.

Contents
--------


@@ -12,7 +12,7 @@ Linux, OSX, and Windows are supported.
Installation through pip
------------------------

`pip <https://pypi.python.org/pypi/pip>`_ is the suggested tool for installing
packages. It will handle installing all Python dependencies for the driver at
the same time as the driver itself. To install the driver*::

    pip install cassandra-driver
@@ -102,7 +102,7 @@ When installing manually through setup.py, you can disable both with
the ``--no-extensions`` option, or selectively disable one or the other
with ``--no-murmur3`` and ``--no-libev``.

To compile the extensions, ensure that GCC and the Python headers are available.
On Ubuntu and Debian, this can be accomplished by running::


@@ -1,6 +1,6 @@
Performance Notes
=================

The Python driver for Cassandra offers several methods for executing queries.
You can synchronously block for queries to complete using
:meth:`.Session.execute()`, you can use a future-like interface through
:meth:`.Session.execute_async()`, or you can attach a callback to the future
@@ -13,7 +13,7 @@ Benchmark Notes
All benchmarks were executed using the
`benchmark scripts <https://github.com/datastax/python-driver/tree/master/benchmarks>`_
in the driver repository. They were executed on a laptop with 16 GiB of RAM, an SSD,
and a 2 GHz, four-core CPU with hyper-threading. The Cassandra cluster was a three
node `ccm <https://github.com/pcmanus/ccm>`_ cluster running on the same laptop
with version 1.2.13 of Cassandra. I suggest testing these benchmarks against your
own cluster when tuning the driver for optimal throughput or latency.
@@ -26,6 +26,17 @@ by using the ``--asyncore-only`` command line option.
Each benchmark completes 100,000 small inserts. The replication factor for the
keyspace was three, so all nodes were replicas for the inserted rows.
The benchmarks require the Python driver C extensions as well as a few additional
Python packages. Follow these steps to install the prerequisites:
1. Install packages to support Python driver C extensions:
* Debian/Ubuntu: ``sudo apt-get install gcc python-dev libev4 libev-dev``
* RHEL/CentOS/Fedora: ``sudo yum install gcc python-devel libev libev-devel``
2. Install Python packages: ``pip install scales twisted blist``
3. Re-install the Cassandra driver: ``pip install --upgrade cassandra-driver``
Synchronous Execution (`sync.py <https://github.com/datastax/python-driver/blob/master/benchmarks/sync.py>`_)
-------------------------------------------------------------------------------------------------------------

Although this is the simplest way to make queries, it has low throughput
@@ -173,7 +184,7 @@ Callback Chaining (`callback_full_pipeline.py <https://github.com/datastax/pytho
-----------------------------------------------------------------------------------------------------------------------------------------------

This pattern is very different from the previous patterns. Here we're taking
advantage of the :meth:`.ResponseFuture.add_callback()` function to start
another request as soon as one finishes. Furthermore, we're starting 120
of these callback chains, so we've always got about 120 operations in
flight at any time:
@@ -243,7 +254,7 @@ dramatically:
    Average throughput: 679.61/sec

When :attr:`.Cluster.protocol_version` is set to 1 or 2, you should limit the
number of callback chains you run to roughly 100 per node in the cluster.
When :attr:`~.Cluster.protocol_version` is 3 or higher, you can safely experiment
with higher numbers of callback chains.
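The chaining pattern above can be sketched without a live cluster. In this driver-free sketch, ``StubFuture`` and ``StubSession`` are hypothetical stand-ins that complete every request immediately; the real :meth:`.ResponseFuture.add_callback` likewise invokes the callback at once if the result has already arrived:

```python
class StubFuture(object):
    """Hypothetical stand-in for the driver's ResponseFuture."""

    def __init__(self):
        self._result = None
        self._done = False
        self._callbacks = []

    def add_callback(self, fn, *args):
        # Like the real driver, invoke immediately if the result already arrived.
        if self._done:
            fn(self._result, *args)
        else:
            self._callbacks.append((fn, args))

    def set_result(self, result):
        self._result, self._done = result, True
        for fn, args in self._callbacks:
            fn(result, *args)


class StubSession(object):
    """Completes every query instantly; a real session completes on I/O threads."""

    def __init__(self):
        self.executed = 0

    def execute_async(self, query):
        self.executed += 1
        future = StubFuture()
        future.set_result(None)
        return future


QUERIES_PER_CHAIN = 100
NUM_CHAINS = 12   # the benchmark runs ~120 chains against a real cluster

def insert_next(previous_result, session, remaining):
    # Start the next request as soon as the previous one finishes.
    if remaining > 0:
        future = session.execute_async("INSERT INTO mytable (k, v) VALUES (0, 0)")
        future.add_callback(insert_next, session, remaining - 1)

session = StubSession()
for _ in range(NUM_CHAINS):
    insert_next(None, session, QUERIES_PER_CHAIN)  # kick off each chain

print(session.executed)  # 12 chains x 100 queries = 1200
```

With the real driver the callbacks run on the driver's event-loop thread, so they should avoid blocking work; the stub here only illustrates the control flow.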


@@ -6,7 +6,7 @@ Upgrading
Upgrading to 2.1 from 2.0
-------------------------

Version 2.1 of the DataStax Python driver for Apache Cassandra
adds support for Cassandra 2.1 and version 3 of the native protocol.

Cassandra 1.2, 2.0, and 2.1 are all supported. However, 1.2 only
@@ -68,7 +68,7 @@ See :ref:`udts` for more details.
Customizing Encoders for Non-prepared Statements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Starting with version 2.1 of the driver, it is possible to customize
how Python types are converted to CQL literals when working with
non-prepared statements. This is done on a per-:class:`~.Session`
basis through :attr:`.Session.encoder`:
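As a rough illustration of the mechanism (a toy mapping, not the driver's actual ``Encoder`` class or its method names), a CQL encoder of this kind is essentially a table from Python types to functions that render CQL literals, and customizing it means adding or replacing entries:

```python
def encode_str(value):
    # CQL string literals are single-quoted, with embedded quotes doubled
    return "'%s'" % value.replace("'", "''")

def encode_tuple(value):
    return "(%s)" % ", ".join(encode(v) for v in value)

# Toy type-to-literal table; the real driver keeps a comparable
# mapping on Session.encoder for non-prepared statements.
ENCODERS = {
    str: encode_str,
    int: str,
    float: str,
    tuple: encode_tuple,
}

def encode(value):
    return ENCODERS[type(value)](value)

# "Customizing": also render Python lists as CQL tuples (a hypothetical choice)
ENCODERS[list] = lambda value: encode_tuple(tuple(value))

print(encode(("O'Brien", 42)))  # ('O''Brien', 42)
```

Because the table is consulted per value, one registration changes how every subsequent non-prepared statement renders that type.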
@@ -94,7 +94,7 @@ timestamp generated by the driver.
Upgrading to 2.0 from 1.x
-------------------------

Version 2.0 of the DataStax Python driver for Apache Cassandra
includes some notable improvements over version 1.x. This version
of the driver supports Cassandra 1.2, 2.0, and 2.1. However, not
all features may be used with Cassandra 1.2, and some new features


@@ -7,11 +7,11 @@ new type through ``CREATE TYPE`` statements in CQL::
    CREATE TYPE address (street text, zip int);

Version 2.1 of the Python driver adds support for user-defined types.

Registering a Class to Map to a UDT
-----------------------------------

You can tell the Python driver to return columns of a specific UDT as
instances of a class by registering them with your :class:`~.Cluster`
instance through :meth:`.Cluster.register_user_type`:
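As a hedged sketch of what registration implies, the class you register only needs a constructor that accepts the UDT's field names as keyword arguments, since the driver instantiates it per value. ``Address`` is an illustrative name matching the ``CREATE TYPE address`` statement above:

```python
class Address(object):
    """Maps to: CREATE TYPE address (street text, zip int)."""

    def __init__(self, street, zip):
        self.street = street
        self.zip = zip

    def __repr__(self):
        return "Address(street=%r, zip=%r)" % (self.street, self.zip)

# Against a live cluster you would then register it, roughly:
#   cluster.register_user_type('mykeyspace', 'address', Address)
# after which SELECTs return Address instances for that column.

# Simulating what the driver does with the fields of a fetched UDT value:
udt_fields = {'street': '123 Main St.', 'zip': 95030}
addr = Address(**udt_fields)
print(addr)  # Address(street='123 Main St.', zip=95030)
```

The field names in the constructor must match the UDT's field names, which is why the somewhat unidiomatic parameter name ``zip`` (shadowing the builtin) is used here.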