Merge "Project Migration to PyCQA"

This commit is contained in:
Zuul 2019-07-24 17:54:39 +00:00 committed by Gerrit Code Review
commit bc95cb4ab5
26 changed files with 8 additions and 1788 deletions

.gitignore

@@ -1,54 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Rope
.ropeproject
# Django stuff:
*.log
*.pot
# Sphinx documentation
doc/build/


@@ -1,15 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
You can report bugs on Launchpad:
https://bugs.launchpad.net/doc8


@@ -1,4 +0,0 @@
doc8 Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,6 +0,0 @@
include README.rst
exclude .gitignore
exclude .gitreview
global-exclude *.pyc


@@ -1,135 +1,14 @@
====
doc8
====

This project is no longer maintained in OpenStack.

Please visit PyCQA to raise issues or make contributions:

https://github.com/PyCQA/doc8

====
Doc8
====

Doc8 is an *opinionated* style checker for `rst`_ (with basic support for
plain text) styles of documentation.

QuickStart
==========

::

    pip install doc8

To run doc8 just invoke it against any doc directory::

    $ doc8 coolproject/docs
=====
Command line usage
******************
::
$ doc8 -h
usage: doc8 [-h] [--config path] [--allow-long-titles] [--ignore code]
[--no-sphinx] [--ignore-path path] [--ignore-path-errors path]
[--default-extension extension] [--file-encoding encoding]
[--max-line-length int] [-e extension] [-v] [--version]
[path [path ...]]
Check documentation for simple style requirements.
What is checked:
- invalid rst format - D000
- lines should not be longer than 79 characters - D001
- RST exception: line with no whitespace except in the beginning
- RST exception: lines with http or https urls
- RST exception: literal blocks
- RST exception: rst target directives
- no trailing whitespace - D002
- no tabulation for indentation - D003
- no carriage returns (use unix newlines) - D004
- no newline at end of file - D005
positional arguments:
path Path to scan for doc files (default: current
directory).
optional arguments:
-h, --help show this help message and exit
--config path user config file location (default: doc8.ini, tox.ini,
pep8.ini, setup.cfg).
--allow-long-titles allow long section titles (default: false).
--ignore code ignore the given error code(s).
--no-sphinx do not ignore sphinx specific false positives.
--ignore-path path ignore the given directory or file (globs are
supported).
--ignore-path-errors path
ignore the given specific errors in the provided file.
--default-extension extension
default file extension to use when a file is found
without a file extension.
--file-encoding encoding
override encoding to use when attempting to determine
an input file's text encoding (providing this avoids
using `chardet` to automatically detect the encoding)
--max-line-length int
maximum allowed line length (default: 79).
-e extension, --extension extension
check file extensions of the given type (default:
.rst, .txt).
-q, --quiet only print violations
-v, --verbose run in verbose mode.
--version show the version and exit.
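The per-line rules in the check list above (D001 through D004) are simple enough to sketch directly. The following is an illustrative re-implementation for clarity, not doc8's actual code:

```python
import re

def check_line(line, max_line_length=79):
    """Illustrative per-line checks mirroring doc8's D001-D004 codes."""
    issues = []
    if len(line) > max_line_length:
        issues.append(("D001", "Line too long"))
    if re.search(r"\s$", line):
        issues.append(("D002", "Trailing whitespace"))
    match = re.match(r"^(\s+)", line)
    if match and "\t" in match.group(1):
        issues.append(("D003", "Tabulation used for indentation"))
    if "\r" in line:
        issues.append(("D004", "Found literal carriage return"))
    return issues
```

Note that the real tool also applies the RST exceptions listed above (URLs, literal blocks, target directives) before reporting D001, which this sketch omits.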
Ini file usage
**************
Instead of passing options on the command line, the following files will also
be examined for ``[doc8]`` sections that can provide the same set of options.
If the ``--config path`` option is used, these files in the current working
directory will **not** be scanned; the given configuration path will be used
instead.
* ``$CWD/doc8.ini``
* ``$CWD/tox.ini``
* ``$CWD/pep8.ini``
* ``$CWD/setup.cfg``
An example section that can be placed into one of these files::
[doc8]
ignore-path=/tmp/stuff,/tmp/other_stuff
max-line-length=99
verbose=1
ignore-path-errors=/tmp/other_thing.rst;D001;D002
**Note:** The option names are the same as the command line ones (the only
exception being ``no-sphinx``, which in a configuration file is written as
``sphinx`` instead).
Option conflict resolution
**************************
When the same option is passed on the command line and also via configuration
files, the following strategies are applied to resolve the conflict.
====================== =========== ========
Option Overrides Merges
====================== =========== ========
``allow-long-titles`` Yes No
``ignore-path-errors`` No Yes
``default-extension`` Yes No
``extension`` No Yes
``ignore-path`` No Yes
``ignore`` No Yes
``max-line-length`` Yes No
``file-encoding`` Yes No
``sphinx`` Yes No
====================== =========== ========
**Note:** In the above table, a configuration file option marked as
*overrides* will replace the same option given via the command line. When
marked as *merges*, the option is combined with the command line option (for
example, becoming a larger list or set that contains both the values passed
on the command line *and* the values passed via configuration).
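As a rough sketch of the two strategies (the helper names here are hypothetical, not part of doc8's API):

```python
# Illustrative only: how "overrides" vs "merges" conflict resolution behaves.
OVERRIDE_OPTIONS = {"allow-long-titles", "default-extension",
                    "max-line-length", "file-encoding", "sphinx"}
MERGE_OPTIONS = {"ignore-path-errors", "extension", "ignore-path", "ignore"}

def resolve(option, cli_value, config_value):
    """Combine a CLI value with a config-file value for one option."""
    if config_value is None:
        # Nothing set in configuration files; the CLI value stands.
        return cli_value
    if option in OVERRIDE_OPTIONS:
        # Config value replaces the CLI value entirely.
        return config_value
    if option in MERGE_OPTIONS:
        # Values are combined into one larger set.
        return set(cli_value or ()) | set(config_value)
    return cli_value
```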
.. _rst: http://docutils.sourceforge.net/docs/ref/rst/introduction.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".


@@ -1,70 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'doc8'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]


@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst


@@ -1,20 +0,0 @@
Welcome to doc8's documentation!
================================
Contents:
.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@@ -1,12 +0,0 @@
============
Installation
============
At the command line::

    $ pip install doc8

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv doc8
    $ pip install doc8


@@ -1 +0,0 @@
.. include:: ../../README.rst


@@ -1,7 +0,0 @@
=====
Usage
=====
To use doc8 in a project::

    import doc8


@@ -1,302 +0,0 @@
# Copyright (C) 2014 Ivan Melnikov <iv at altlinux dot org>
#
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import re
from docutils import nodes as docutils_nodes
import six
from doc8 import utils
@six.add_metaclass(abc.ABCMeta)
class ContentCheck(object):
def __init__(self, cfg):
self._cfg = cfg
@abc.abstractmethod
def report_iter(self, parsed_file):
pass
@six.add_metaclass(abc.ABCMeta)
class LineCheck(object):
def __init__(self, cfg):
self._cfg = cfg
@abc.abstractmethod
def report_iter(self, line):
pass
class CheckTrailingWhitespace(LineCheck):
_TRAILING_WHITESPACE_REGEX = re.compile(r'\s$')
REPORTS = frozenset(["D002"])
def report_iter(self, line):
if self._TRAILING_WHITESPACE_REGEX.search(line):
yield ('D002', 'Trailing whitespace')
class CheckIndentationNoTab(LineCheck):
_STARTING_WHITESPACE_REGEX = re.compile(r'^(\s+)')
REPORTS = frozenset(["D003"])
def report_iter(self, line):
match = self._STARTING_WHITESPACE_REGEX.search(line)
if match:
spaces = match.group(1)
if '\t' in spaces:
yield ('D003', 'Tabulation used for indentation')
class CheckCarriageReturn(LineCheck):
REPORTS = frozenset(["D004"])
def report_iter(self, line):
if "\r" in line:
yield ('D004', 'Found literal carriage return')
class CheckNewlineEndOfFile(ContentCheck):
REPORTS = frozenset(["D005"])
def __init__(self, cfg):
super(CheckNewlineEndOfFile, self).__init__(cfg)
def report_iter(self, parsed_file):
if parsed_file.lines and not parsed_file.lines[-1].endswith(b'\n'):
yield (len(parsed_file.lines), 'D005', 'No newline at end of file')
class CheckValidity(ContentCheck):
REPORTS = frozenset(["D000"])
EXT_MATCHER = re.compile(r"(.*)[.]rst", re.I)
# From docutils docs:
#
# Report system messages at or higher than <level>: "info" or "1",
# "warning"/"2" (default), "error"/"3", "severe"/"4", "none"/"5"
#
# See: http://docutils.sourceforge.net/docs/user/config.html#report-level
WARN_LEVELS = frozenset([2, 3, 4])
# Only used when running in sphinx mode.
SPHINX_IGNORES_REGEX = [
re.compile(r'^Unknown interpreted text'),
re.compile(r'^Unknown directive type'),
re.compile(r'^Undefined substitution'),
re.compile(r'^Substitution definition contains illegal element'),
]
def __init__(self, cfg):
super(CheckValidity, self).__init__(cfg)
self._sphinx_mode = cfg.get('sphinx')
def report_iter(self, parsed_file):
for error in parsed_file.errors:
if error.level not in self.WARN_LEVELS:
continue
ignore = False
if self._sphinx_mode:
for m in self.SPHINX_IGNORES_REGEX:
if m.match(error.message):
ignore = True
break
if not ignore:
yield (error.line, 'D000', error.message)
class CheckMaxLineLength(ContentCheck):
REPORTS = frozenset(["D001"])
def __init__(self, cfg):
super(CheckMaxLineLength, self).__init__(cfg)
self._max_line_length = self._cfg['max_line_length']
self._allow_long_titles = self._cfg['allow_long_titles']
def _extract_node_lines(self, doc):
def extract_lines(node, start_line):
lines = [start_line]
if isinstance(node, (docutils_nodes.title)):
start = start_line - len(node.rawsource.splitlines())
if start >= 0:
lines.append(start)
if isinstance(node, (docutils_nodes.literal_block)):
end = start_line + len(node.rawsource.splitlines()) - 1
lines.append(end)
return lines
def gather_lines(node):
lines = []
for n in node.traverse(include_self=True):
lines.extend(extract_lines(n, find_line(n)))
return lines
def find_line(node):
n = node
while n is not None:
if n.line is not None:
return n.line
n = n.parent
return None
def filter_systems(node):
if utils.has_any_node_type(node, (docutils_nodes.system_message,)):
return False
return True
nodes_lines = []
first_line = -1
for n in utils.filtered_traverse(doc, filter_systems):
line = find_line(n)
if line is None:
continue
if first_line == -1:
first_line = line
contained_lines = set(gather_lines(n))
nodes_lines.append((n, (min(contained_lines),
max(contained_lines))))
return (nodes_lines, first_line)
def _extract_directives(self, lines):
def starting_whitespace(line):
m = re.match(r"^(\s+)(.*)$", line)
if not m:
return 0
return len(m.group(1))
def all_whitespace(line):
return bool(re.match(r"^(\s*)$", line))
def find_directive_end(start, lines):
after_lines = collections.deque(lines[start + 1:])
k = 0
while after_lines:
line = after_lines.popleft()
if all_whitespace(line) or starting_whitespace(line) >= 1:
k += 1
else:
break
return start + k
# Find where directives start & end so that we can exclude content in
# these directive regions (the rst parser may not handle this correctly
# for unknown directives, so we have to do it manually).
directives = []
for i, line in enumerate(lines):
if re.match(r"^\s*..\s(.*?)::\s*", line):
directives.append((i, find_directive_end(i, lines)))
elif re.match(r"^::\s*$", line):
directives.append((i, find_directive_end(i, lines)))
# Find definition terms in definition lists
# This check may match the code, which is already appended
lwhitespaces = r"^\s*"
listspattern = r"^\s*(\* |- |#\. |\d+\. )"
for i in range(0, len(lines) - 1):
line = lines[i]
next_line = lines[i + 1]
# if line is a blank, line is not a definition term
if all_whitespace(line):
continue
# if line is a list, line is checked as normal line
if re.match(listspattern, line):
continue
if (len(re.search(lwhitespaces, line).group()) <
len(re.search(lwhitespaces, next_line).group())):
directives.append((i, i))
return directives
def _txt_checker(self, parsed_file):
for i, line in enumerate(parsed_file.lines_iter()):
if len(line) > self._max_line_length:
if not utils.contains_url(line):
yield (i + 1, 'D001', 'Line too long')
def _rst_checker(self, parsed_file):
lines = list(parsed_file.lines_iter())
doc = parsed_file.document
nodes_lines, first_line = self._extract_node_lines(doc)
directives = self._extract_directives(lines)
def find_containing_nodes(num):
if num < first_line and len(nodes_lines):
return [nodes_lines[0][0]]
contained_in = []
for (n, (line_min, line_max)) in nodes_lines:
if num >= line_min and num <= line_max:
contained_in.append((n, (line_min, line_max)))
smallest_span = None
best_nodes = []
for (n, (line_min, line_max)) in contained_in:
span = line_max - line_min
if smallest_span is None:
smallest_span = span
best_nodes = [n]
elif span < smallest_span:
smallest_span = span
best_nodes = [n]
elif span == smallest_span:
best_nodes.append(n)
return best_nodes
def any_types(nodes, types):
return any([isinstance(n, types) for n in nodes])
skip_types = (
docutils_nodes.target,
docutils_nodes.literal_block,
)
title_types = (
docutils_nodes.title,
docutils_nodes.subtitle,
docutils_nodes.section,
)
for i, line in enumerate(lines):
if len(line) > self._max_line_length:
in_directive = False
for (start, end) in directives:
if i >= start and i <= end:
in_directive = True
break
if in_directive:
continue
stripped = line.lstrip()
if ' ' not in stripped:
# No room to split even if we could.
continue
if utils.contains_url(stripped):
continue
nodes = find_containing_nodes(i + 1)
if any_types(nodes, skip_types):
continue
if self._allow_long_titles and any_types(nodes, title_types):
continue
yield (i + 1, 'D001', 'Line too long')
def report_iter(self, parsed_file):
if parsed_file.extension.lower() != '.rst':
checker_func = self._txt_checker
else:
checker_func = self._rst_checker
for issue in checker_func(parsed_file):
yield issue


@@ -1,382 +0,0 @@
# Copyright (C) 2014 Ivan Melnikov <iv at altlinux dot org>
#
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Check documentation for simple style requirements.
What is checked:
- invalid rst format - D000
- lines should not be longer than 79 characters - D001
- RST exception: line with no whitespace except in the beginning
- RST exception: lines with http or https urls
- RST exception: literal blocks
- RST exception: rst target directives
- no trailing whitespace - D002
- no tabulation for indentation - D003
- no carriage returns (use unix newlines) - D004
- no newline at end of file - D005
"""
import argparse
import collections
import logging
import os
import sys
if __name__ == '__main__':
# Only useful for when running directly (for dev/debugging).
sys.path.insert(0, os.path.abspath(os.getcwd()))
sys.path.insert(0, os.path.abspath(os.path.join(os.pardir, os.getcwd())))
import six
from six.moves import configparser
from stevedore import extension
from doc8 import checks
from doc8 import parser as file_parser
from doc8 import utils
from doc8 import version
FILE_PATTERNS = ['.rst', '.txt']
MAX_LINE_LENGTH = 79
CONFIG_FILENAMES = [
"doc8.ini",
"tox.ini",
"pep8.ini",
"setup.cfg",
]
def split_set_type(text, delimiter=","):
return set([i.strip() for i in text.split(delimiter) if i.strip()])
def merge_sets(sets):
m = set()
for s in sets:
m.update(s)
return m
def parse_ignore_path_errors(entries):
ignore_path_errors = collections.defaultdict(set)
for path in entries:
path, ignored_errors = path.split(";", 1)
path = path.strip()
ignored_errors = split_set_type(ignored_errors, delimiter=";")
ignore_path_errors[path].update(ignored_errors)
return dict(ignore_path_errors)
def extract_config(args):
parser = configparser.RawConfigParser()
read_files = []
if args['config']:
for fn in args['config']:
with open(fn, 'r') as fh:
parser.readfp(fh, filename=fn)
read_files.append(fn)
else:
read_files.extend(parser.read(CONFIG_FILENAMES))
if not read_files:
return {}
cfg = {}
try:
cfg['max_line_length'] = parser.getint("doc8", "max-line-length")
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['ignore'] = split_set_type(parser.get("doc8", "ignore"))
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['ignore_path'] = split_set_type(parser.get("doc8",
"ignore-path"))
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
ignore_path_errors = parser.get("doc8", "ignore-path-errors")
ignore_path_errors = split_set_type(ignore_path_errors)
ignore_path_errors = parse_ignore_path_errors(ignore_path_errors)
cfg['ignore_path_errors'] = ignore_path_errors
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['allow_long_titles'] = parser.getboolean("doc8",
"allow-long-titles")
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['sphinx'] = parser.getboolean("doc8", "sphinx")
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['verbose'] = parser.getboolean("doc8", "verbose")
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['file_encoding'] = parser.get("doc8", "file-encoding")
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
cfg['default_extension'] = parser.get("doc8", "default-extension")
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
extensions = parser.get("doc8", "extensions")
extensions = extensions.split(",")
extensions = [s.strip() for s in extensions if s.strip()]
if extensions:
cfg['extension'] = extensions
except (configparser.NoSectionError, configparser.NoOptionError):
pass
return cfg
def fetch_checks(cfg):
base = [
checks.CheckValidity(cfg),
checks.CheckTrailingWhitespace(cfg),
checks.CheckIndentationNoTab(cfg),
checks.CheckCarriageReturn(cfg),
checks.CheckMaxLineLength(cfg),
checks.CheckNewlineEndOfFile(cfg),
]
mgr = extension.ExtensionManager(
namespace='doc8.extension.check',
invoke_on_load=True,
invoke_args=(cfg.copy(),),
)
addons = []
for e in mgr:
addons.append(e.obj)
return base + addons
def setup_logging(verbose):
if verbose:
level = logging.DEBUG
else:
level = logging.ERROR
logging.basicConfig(level=level,
format='%(levelname)s: %(message)s', stream=sys.stdout)
def scan(cfg):
if not cfg.get('quiet'):
print("Scanning...")
files = collections.deque()
ignored_paths = cfg.get('ignore_path', [])
files_ignored = 0
file_iter = utils.find_files(cfg.get('paths', []),
cfg.get('extension', []), ignored_paths)
default_extension = cfg.get('default_extension')
file_encoding = cfg.get('file_encoding')
for filename, ignoreable in file_iter:
if ignoreable:
files_ignored += 1
if cfg.get('verbose'):
print(" Ignoring '%s'" % (filename))
else:
f = file_parser.parse(filename,
default_extension=default_extension,
encoding=file_encoding)
files.append(f)
if cfg.get('verbose'):
print(" Selecting '%s'" % (filename))
return (files, files_ignored)
def validate(cfg, files):
if not cfg.get('quiet'):
print("Validating...")
error_counts = {}
ignoreables = frozenset(cfg.get('ignore', []))
ignore_targeted = cfg.get('ignore_path_errors', {})
while files:
f = files.popleft()
if cfg.get('verbose'):
print("Validating %s" % f)
targeted_ignoreables = set(ignore_targeted.get(f.filename, set()))
targeted_ignoreables.update(ignoreables)
for c in fetch_checks(cfg):
try:
# http://legacy.python.org/dev/peps/pep-3155/
check_name = c.__class__.__qualname__
except AttributeError:
check_name = ".".join([c.__class__.__module__,
c.__class__.__name__])
error_counts.setdefault(check_name, 0)
try:
extension_matcher = c.EXT_MATCHER
except AttributeError:
pass
else:
if not extension_matcher.match(f.extension):
if cfg.get('verbose'):
print(" Skipping check '%s' since it does not"
" understand parsing a file with extension '%s'"
% (check_name, f.extension))
continue
try:
reports = set(c.REPORTS)
except AttributeError:
pass
else:
reports = reports - targeted_ignoreables
if not reports:
if cfg.get('verbose'):
print(" Skipping check '%s', determined to only"
" check ignoreable codes" % check_name)
continue
if cfg.get('verbose'):
print(" Running check '%s'" % check_name)
if isinstance(c, checks.ContentCheck):
for line_num, code, message in c.report_iter(f):
if code in targeted_ignoreables:
continue
if not isinstance(line_num, (float, int)):
line_num = "?"
if cfg.get('verbose'):
print(' - %s:%s: %s %s'
% (f.filename, line_num, code, message))
else:
print('%s:%s: %s %s'
% (f.filename, line_num, code, message))
error_counts[check_name] += 1
elif isinstance(c, checks.LineCheck):
for line_num, line in enumerate(f.lines_iter(), 1):
for code, message in c.report_iter(line):
if code in targeted_ignoreables:
continue
if cfg.get('verbose'):
print(' - %s:%s: %s %s'
% (f.filename, line_num, code, message))
else:
print('%s:%s: %s %s'
% (f.filename, line_num, code, message))
error_counts[check_name] += 1
else:
raise TypeError("Unknown check type: %s, %s"
% (type(c), c))
return error_counts
def main():
parser = argparse.ArgumentParser(
prog='doc8',
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter)
default_configs = ", ".join(CONFIG_FILENAMES)
parser.add_argument("paths", metavar='path', type=str, nargs='*',
help=("path to scan for doc files"
" (default: current directory)."),
default=[os.getcwd()])
parser.add_argument("--config", metavar='path', action="append",
help="user config file location"
" (default: %s)." % default_configs,
default=[])
parser.add_argument("--allow-long-titles", action="store_true",
help="allow long section titles (default: false).",
default=False)
parser.add_argument("--ignore", action="append", metavar="code",
help="ignore the given error code(s).",
type=split_set_type,
default=[])
parser.add_argument("--no-sphinx", action="store_false",
help="do not ignore sphinx specific false positives.",
default=True, dest='sphinx')
parser.add_argument("--ignore-path", action="append", default=[],
help="ignore the given directory or file (globs"
" are supported).", metavar='path')
parser.add_argument("--ignore-path-errors", action="append", default=[],
help="ignore the given specific errors in the"
" provided file.", metavar='path')
parser.add_argument("--default-extension", action="store",
help="default file extension to use when a file is"
" found without a file extension.",
default='', dest='default_extension',
metavar='extension')
parser.add_argument("--file-encoding", action="store",
help="override encoding to use when attempting"
" to determine an input file's text encoding"
" (providing this avoids using `chardet` to"
" automatically detect the encoding)",
default='', dest='file_encoding',
metavar='encoding')
parser.add_argument("--max-line-length", action="store", metavar="int",
type=int,
help="maximum allowed line"
" length (default: %s)." % MAX_LINE_LENGTH,
default=MAX_LINE_LENGTH)
parser.add_argument("-e", "--extension", action="append",
metavar="extension",
help="check file extensions of the given type"
" (default: %s)." % ", ".join(FILE_PATTERNS),
default=list(FILE_PATTERNS))
parser.add_argument("-q", "--quiet", action='store_true',
help="only print violations", default=False)
parser.add_argument("-v", "--verbose", dest="verbose", action='store_true',
help="run in verbose mode.", default=False)
parser.add_argument("--version", dest="version", action='store_true',
help="show the version and exit.", default=False)
args = vars(parser.parse_args())
if args.get('version'):
print(version.version_string())
return 0
args['ignore'] = merge_sets(args['ignore'])
cfg = extract_config(args)
args['ignore'].update(cfg.pop("ignore", set()))
if 'sphinx' in cfg:
args['sphinx'] = cfg.pop("sphinx")
args['extension'].extend(cfg.pop('extension', []))
args['ignore_path'].extend(cfg.pop('ignore_path', []))
cfg.setdefault('ignore_path_errors', {})
tmp_ignores = parse_ignore_path_errors(args.pop('ignore_path_errors', []))
for path, ignores in six.iteritems(tmp_ignores):
if path in cfg['ignore_path_errors']:
cfg['ignore_path_errors'][path].update(ignores)
else:
cfg['ignore_path_errors'][path] = set(ignores)
args.update(cfg)
setup_logging(args.get('verbose'))
files, files_ignored = scan(args)
files_selected = len(files)
error_counts = validate(args, files)
total_errors = sum(six.itervalues(error_counts))
if not args.get('quiet'):
print("=" * 8)
print("Total files scanned = %s" % (files_selected))
print("Total files ignored = %s" % (files_ignored))
print("Total accumulated errors = %s" % (total_errors))
if error_counts:
print("Detailed error counts:")
for check_name in sorted(six.iterkeys(error_counts)):
check_errors = error_counts[check_name]
print(" - %s = %s" % (check_name, check_errors))
if total_errors:
return 1
else:
return 0
if __name__ == "__main__":
sys.exit(main())


@ -1,144 +0,0 @@
# Copyright (C) 2014 Ivan Melnikov <iv at altlinux dot org>
#
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import errno
import os
import threading
import chardet
from docutils import frontend
from docutils import parsers as docutils_parser
from docutils import utils
import restructuredtext_lint as rl
import six
class ParsedFile(object):
FALLBACK_ENCODING = 'utf-8'
def __init__(self, filename, encoding=None, default_extension=''):
self._filename = filename
self._content = None
self._raw_content = None
self._encoding = encoding
self._doc = None
self._errors = None
self._lines = None
self._has_read = False
self._extension = os.path.splitext(filename)[1]
self._read_lock = threading.Lock()
if not self._extension:
self._extension = default_extension
@property
def errors(self):
if self._errors is not None:
return self._errors
self._errors = rl.lint(self.contents, filepath=self.filename)
return self._errors
@property
def document(self):
if self._doc is None:
# Use the rst parser's document output to do as much of the
# validation as we can without resorting to custom logic (this
# parser is what sphinx and others use anyway so it's hopefully
# mature).
parser_cls = docutils_parser.get_parser_class("rst")
parser = parser_cls()
defaults = {
'halt_level': 5,
'report_level': 5,
'quiet': True,
'file_insertion_enabled': False,
'traceback': True,
# Development use only.
'dump_settings': False,
'dump_internals': False,
'dump_transforms': False,
}
opt = frontend.OptionParser(components=[parser], defaults=defaults)
doc = utils.new_document(source_path=self.filename,
settings=opt.get_default_values())
parser.parse(self.contents, doc)
self._doc = doc
return self._doc
def _read(self):
if self._has_read:
return
with self._read_lock:
if not self._has_read:
with open(self.filename, 'rb') as fh:
self._lines = list(fh)
fh.seek(0)
self._raw_content = fh.read()
self._has_read = True
def lines_iter(self, remove_trailing_newline=True):
self._read()
for line in self._lines:
line = six.text_type(line, encoding=self.encoding)
if remove_trailing_newline and line.endswith("\n"):
line = line[0:-1]
yield line
@property
def lines(self):
self._read()
return self._lines
@property
def extension(self):
return self._extension
@property
def filename(self):
return self._filename
@property
def encoding(self):
if not self._encoding:
encoding = chardet.detect(self.raw_contents)['encoding']
if not encoding:
encoding = self.FALLBACK_ENCODING
self._encoding = encoding
return self._encoding
@property
def raw_contents(self):
self._read()
return self._raw_content
@property
def contents(self):
if self._content is None:
self._content = six.text_type(self.raw_contents,
encoding=self.encoding)
return self._content
def __str__(self):
return "%s (%s, %s chars, %s lines)" % (
self.filename, self.encoding, len(self.contents),
len(list(self.lines_iter())))
def parse(filename, encoding=None, default_extension=''):
if not os.path.isfile(filename):
raise IOError(errno.ENOENT, 'File not found', filename)
return ParsedFile(filename,
encoding=encoding,
default_extension=default_extension)
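`ParsedFile._read` above is a double-checked lock: the flag is tested once without the lock for the common already-read case, then re-tested under the lock so the file's bytes are loaded at most once even with concurrent callers. A stripped-down, stdlib-only sketch of the same pattern (the `LazyFile` class is illustrative, not part of doc8):

```python
import os
import tempfile
import threading


class LazyFile(object):
    """Load a file's raw bytes lazily, at most once, thread-safely."""

    def __init__(self, filename):
        self._filename = filename
        self._raw = None
        self._has_read = False
        self._lock = threading.Lock()

    def _read(self):
        if self._has_read:          # fast path: no lock once loaded
            return
        with self._lock:
            if not self._has_read:  # re-check under the lock
                with open(self._filename, 'rb') as fh:
                    self._raw = fh.read()
                self._has_read = True

    @property
    def raw_contents(self):
        self._read()
        return self._raw


with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh.write(b"hello\n")
    name = fh.name

lf = LazyFile(name)
first = lf.raw_contents   # triggers the single read
os.unlink(name)
second = lf.raw_contents  # served from cache; the file is already gone
print(first == second)    # True
```

The doc8 version additionally keeps the line list and seeks back for the raw bytes in the same critical section, so `lines`, `raw_contents`, and everything derived from them (encoding detection, decoded contents) all hang off that one read.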


@ -1,191 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tempfile
import testtools
from doc8 import checks
from doc8 import parser
class TestTrailingWhitespace(testtools.TestCase):
def test_trailing(self):
lines = ["a b ", "ab"]
check = checks.CheckTrailingWhitespace({})
errors = []
for line in lines:
errors.extend(check.report_iter(line))
self.assertEqual(1, len(errors))
(code, msg) = errors[0]
self.assertIn(code, check.REPORTS)
class TestTabIndentation(testtools.TestCase):
def test_tabs(self):
lines = [" b", "\tabc", "efg", "\t\tc"]
check = checks.CheckIndentationNoTab({})
errors = []
for line in lines:
errors.extend(check.report_iter(line))
self.assertEqual(2, len(errors))
(code, msg) = errors[0]
self.assertIn(code, check.REPORTS)
class TestCarriageReturn(testtools.TestCase):
def test_cr(self):
lines = ["\tabc", "efg", "\r\n"]
check = checks.CheckCarriageReturn({})
errors = []
for line in lines:
errors.extend(check.report_iter(line))
self.assertEqual(1, len(errors))
(code, msg) = errors[0]
self.assertIn(code, check.REPORTS)
class TestLineLength(testtools.TestCase):
def test_over_length(self):
content = b"""
===
aaa
===
----
test
----
"""
content += b"\n\n"
content += (b"a" * 60) + b" " + (b"b" * 60)
content += b"\n"
conf = {
'max_line_length': 79,
'allow_long_titles': True,
}
for ext in ['.rst', '.txt']:
with tempfile.NamedTemporaryFile(suffix=ext) as fh:
fh.write(content)
fh.flush()
parsed_file = parser.ParsedFile(fh.name)
check = checks.CheckMaxLineLength(conf)
errors = list(check.report_iter(parsed_file))
self.assertEqual(1, len(errors))
(line, code, msg) = errors[0]
self.assertIn(code, check.REPORTS)
def test_correct_length(self):
conf = {
'max_line_length': 79,
'allow_long_titles': True,
}
with tempfile.NamedTemporaryFile(suffix='.rst') as fh:
fh.write(b'known exploit in the wild, for example'
b' \xe2\x80\x93 the time'
b' between advance notification')
fh.flush()
parsed_file = parser.ParsedFile(fh.name, encoding='utf-8')
check = checks.CheckMaxLineLength(conf)
errors = list(check.report_iter(parsed_file))
self.assertEqual(0, len(errors))
def test_ignore_code_block(self):
conf = {
'max_line_length': 79,
'allow_long_titles': True,
}
with tempfile.NamedTemporaryFile(suffix='.rst') as fh:
fh.write(b'List which contains items with code-block\n'
b'- this is a list item\n\n'
b' .. code-block:: ini\n\n'
b' this line exceeds 80 chars but should be ignored'
b'this line exceeds 80 chars but should be ignored'
b'this line exceeds 80 chars but should be ignored')
fh.flush()
parsed_file = parser.ParsedFile(fh.name, encoding='utf-8')
check = checks.CheckMaxLineLength(conf)
errors = list(check.report_iter(parsed_file))
self.assertEqual(0, len(errors))
def test_unsplittable_length(self):
content = b"""
===
aaa
===
----
test
----
"""
content += b"\n\n"
content += b"a" * 100
content += b"\n"
conf = {
'max_line_length': 79,
'allow_long_titles': True,
}
# This number is different since rst parsing is aware that titles
# are allowed to be over-length, while txt parsing is not aware of
# this fact (since it has no concept of title sections).
extensions = [(0, '.rst'), (1, '.txt')]
for expected_errors, ext in extensions:
with tempfile.NamedTemporaryFile(suffix=ext) as fh:
fh.write(content)
fh.flush()
parsed_file = parser.ParsedFile(fh.name)
check = checks.CheckMaxLineLength(conf)
errors = list(check.report_iter(parsed_file))
self.assertEqual(expected_errors, len(errors))
def test_definition_term_length(self):
conf = {
'max_line_length': 79,
'allow_long_titles': True,
}
with tempfile.NamedTemporaryFile(suffix='.rst') as fh:
fh.write(b'Definition List which contains long term.\n\n'
b'looooooooooooooooooooooooooooooong definition term'
b'this line exceeds 80 chars but should be ignored\n'
b' this is a definition\n')
fh.flush()
parsed_file = parser.ParsedFile(fh.name, encoding='utf-8')
check = checks.CheckMaxLineLength(conf)
errors = list(check.report_iter(parsed_file))
self.assertEqual(0, len(errors))
class TestNewlineEndOfFile(testtools.TestCase):
def test_newline(self):
tests = [(1, b"testing"),
(1, b"testing\ntesting"),
(0, b"testing\n"),
(0, b"testing\ntesting\n")]
for expected_errors, line in tests:
with tempfile.NamedTemporaryFile() as fh:
fh.write(line)
fh.flush()
parsed_file = parser.ParsedFile(fh.name)
check = checks.CheckNewlineEndOfFile({})
errors = list(check.report_iter(parsed_file))
self.assertEqual(expected_errors, len(errors))


@ -1,77 +0,0 @@
# Copyright (C) 2014 Ivan Melnikov <iv at altlinux dot org>
#
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os
def find_files(paths, extensions, ignored_paths):
extensions = set(extensions)
ignored_absolute_paths = set()
for path in ignored_paths:
for expanded_path in glob.iglob(path):
expanded_path = os.path.abspath(expanded_path)
ignored_absolute_paths.add(expanded_path)
def extension_matches(path):
_base, ext = os.path.splitext(path)
return ext in extensions
def path_ignorable(path):
path = os.path.abspath(path)
if path in ignored_absolute_paths:
return True
last_path = None
while path != last_path:
# If we hit the root, this loop will stop since the resolution
# of "/../" is still "/" when run through the abspath function...
last_path = path
path = os.path.abspath(os.path.join(path, os.path.pardir))
if path in ignored_absolute_paths:
return True
return False
for path in paths:
if os.path.isfile(path):
if extension_matches(path):
yield (path, path_ignorable(path))
elif os.path.isdir(path):
for root, dirnames, filenames in os.walk(path):
for filename in filenames:
path = os.path.join(root, filename)
if extension_matches(path):
yield (path, path_ignorable(path))
else:
raise IOError('Invalid path: %s' % path)
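The ancestor walk in `path_ignorable` above terminates because, at the filesystem root, `abspath(join(path, os.pardir))` returns the root itself, so `path == last_path` ends the loop. A self-contained sketch of just that walk (the example paths are made up for illustration):

```python
import os


def path_ignorable(path, ignored_absolute_paths):
    """Return True if path, or any ancestor directory of it, is ignored."""
    path = os.path.abspath(path)
    if path in ignored_absolute_paths:
        return True
    last_path = None
    while path != last_path:
        # At the root, abspath(root + "/..") == root, ending the loop.
        last_path = path
        path = os.path.abspath(os.path.join(path, os.path.pardir))
        if path in ignored_absolute_paths:
            return True
    return False


ignored = {os.path.abspath("docs/build")}
print(path_ignorable("docs/build/html/index.html", ignored))  # True
print(path_ignorable("docs/source/index.rst", ignored))       # False
```

Because every candidate is normalized with `abspath` before comparison, ignoring a directory ignores everything beneath it, regardless of how the file's path was originally spelled.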
def filtered_traverse(document, filter_func):
for n in document.traverse(include_self=True):
if filter_func(n):
yield n
def contains_url(line):
return "http://" in line or "https://" in line
def has_any_node_type(node, node_types):
n = node
while n is not None:
if isinstance(n, node_types):
return True
n = n.parent
return False


@ -1,24 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
from pbr import version as pbr_version
_version_info = pbr_version.VersionInfo('doc8')
version_string = _version_info.version_string
except ImportError:
import pkg_resources
_version_info = pkg_resources.get_distribution('doc8')
version_string = lambda: _version_info.version


@ -1,31 +0,0 @@
# The format of this file isn't really documented; just use --generate-rcfile
[Messages Control]
# C0111: Don't require docstrings on every method
# W0511: TODOs in code comments are fine.
# W0142: *args and **kwargs are fine.
# W0622: Redefining id is fine.
disable=C0111,W0511,W0142,W0622
[Basic]
# Variable names can be 1 to 31 characters long, with lowercase and underscores
variable-rgx=[a-z_][a-z0-9_]{0,30}$
# Argument names can be 2 to 31 characters long, with lowercase and underscores
argument-rgx=[a-z_][a-z0-9_]{1,30}$
# Method names should be at least 3 characters long
# and be lowercased with underscores
method-rgx=([a-z_][a-z0-9_]{2,50}|setUp|tearDown)$
[Design]
max-public-methods=100
min-public-methods=0
max-args=6
[Variables]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
# _ is used by our localization
additional-builtins=_


@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
chardet
docutils
restructuredtext-lint>=0.7
six
stevedore


@ -1,33 +0,0 @@
[metadata]
name = doc8
summary = Style checker for Sphinx (or other) RST documentation
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = https://launchpad.net/doc8
classifier =
Intended Audience :: Information Technology
Intended Audience :: System Administrators
Intended Audience :: Developers
Development Status :: 4 - Beta
Topic :: Utilities
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
[entry_points]
console_scripts =
doc8 = doc8.main:main
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[wheel]
universal = 1


@ -1,30 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True)


@ -1,10 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
doc8
hacking>=0.9.2,<0.10
nose
oslosphinx
sphinx>=1.1.2,!=1.2.0,<1.3
testtools

tox.ini

@ -1,32 +0,0 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = py35,py27,pep8
[testenv]
setenv = VIRTUAL_ENV={envdir}
usedevelop = True
install_command = pip install {opts} {packages}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = nosetests {posargs}
[testenv:pep8]
commands = flake8 {posargs}
[testenv:pylint]
deps = pylint==0.25.2
commands = pylint doc8
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands =
doc8 -e .rst doc CONTRIBUTING.rst HACKING.rst README.rst
python setup.py build_sphinx
[flake8]
builtins = _
show-source = True
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build