
Comparing changes

Comparing two branches of matplotlib/mpl-probscale to see what has changed between them.

base repository: matplotlib/mpl-probscale
base: v0.2.0

head repository: matplotlib/mpl-probscale
compare: master

Commits on Jul 13, 2016

  1. 2ece40c

Commits on Jul 14, 2016

  1. 1955fe6
  2. e2f32b2
  3. 3a3b401
  4. d18d085
  5. 8ba49f1
  6. 060b18e
  7. 3d619e7

Commits on Jul 17, 2016

  1. afd44f1
  2. bump version to 0.2.1 (phobson committed Jul 17, 2016; e3cf0bc)
  3. Merge pull request #32 from phobson/add-CIs-to-fitter: Add CIs options to linear fits and bump to version 0.2.1 (phobson authored Jul 17, 2016; 63aff59)
  4. adaptive tolerances in tests (phobson committed Jul 17, 2016; a0c6a66)
  5. Merge pull request #33 from phobson/py27-img-tolerance: adaptive tolerances in tests (phobson authored Jul 17, 2016; 5d6d618)
  6. re-disable a test for PY2k (phobson committed Jul 17, 2016; d098a36)
  7. Merge pull request #34 from phobson/py27-img-tolerance: re-disable a test for PY2k (phobson authored Jul 17, 2016; cf59141)

Commits on Jul 18, 2016

  1. bfa377e
  2. Merge pull request #35 from phobson/py27-img-tolerance: loosen probscale test tolerance (phobson authored Jul 18, 2016; d707ed8)
  3. adf039c
  4. Merge pull request #36 from phobson/fix-low-N-limits: handle low N values better when setting limts (phobson authored Jul 18, 2016; 65c554d)
  5. bump to v0.2.2 (phobson committed Jul 18, 2016; 8d7f187)
  6. Merge pull request #37 from phobson/bump-to-v0.2.2: bump to v0.2.2 (phobson authored Jul 18, 2016; f6c0df0)
  7. switch to codecov.io (phobson committed Jul 18, 2016; 6028281)
  8. Merge pull request #38 from phobson/switch-to-codecov: switch to codecov.io for test coverage analysis (phobson authored Jul 18, 2016; e3aeaa5)
  9. codecov badge in readme (phobson authored Jul 18, 2016; 7985ed0)
  10. 4ad178e

Commits on Jul 26, 2016

  1. Create codeclimate.yml (phobson authored Jul 26, 2016; f200710)
  2. add code climate badge (phobson authored Jul 26, 2016; 63466b4)
  3. Merge pull request #39 from phobson/add-code-climate: Create codeclimate.yml (phobson authored Jul 26, 2016; a2fc33b)

Commits on Jan 5, 2017

  1. d7b7d58
  2. misc docs and repo configs (phobson committed Jan 5, 2017; 1397b01)
  3. 54d2c6a
  4. 50231d9
  5. Merge pull request #42 from phobson/repo-cleanup: Repo cleanup (phobson authored Jan 5, 2017; e08e80b)

Commits on Feb 6, 2017

  1. add python 3.6 to travis (phobson committed Feb 6, 2017; 7d742a4)
  2. Merge pull request #43 from phobson/py36-in-travis: add python 3.6 to travis (phobson authored Feb 6, 2017; 7c68aba)

Commits on Feb 7, 2017

  1. 19a4564
  2. 1157929
  3. 2b5008c
  4. update notebook (phobson committed Feb 7, 2017; 219ba0e)
  5. PEP8 cleanup in the tests (phobson committed Feb 7, 2017; cfebdb6)
  6. Merge pull request #45 from phobson/refactor-algo: Refactor into algo (phobson authored Feb 7, 2017; 801b29f)
  7. fix minor doc typos (phobson committed Feb 7, 2017; 487133c)
  8. fecc737
  9. bump to version 0.2.3 (phobson committed Feb 7, 2017; ac4f593)
  10. e06fe7c
  11. Merge pull request #46 from phobson/minor-doc-typos: fix minor doc typos (phobson authored Feb 7, 2017; 845e28a)

Commits on Feb 8, 2017

  1. 8258aff
  2. minor tweaks to figures (phobson committed Feb 8, 2017; 5b06573)
  3. 2048c6c
  4. 9f9fd56
Showing with 3,273 additions and 1,587 deletions.
  1. +5 −0 .codeclimate.yml
  2. +0 −1 .travis_coveragerc → .coveragerc
  3. +21 −0 .editorconfig
  4. +17 −0 .github/ISSUE_TEMPLATE.md
  5. +10 −0 .github/workflows/black.yml
  6. +32 −0 .github/workflows/check-test-coverage.yml
  7. +31 −0 .github/workflows/python-publish.yml
  8. +33 −0 .github/workflows/python-runlinter.yml
  9. +34 −0 .github/workflows/python-runtests-all.yml
  10. +5 −0 .gitignore
  11. +0 −57 .travis.yml
  12. +13 −0 AUTHORS.rst
  13. +0 −9 CONTRIBUTING.md
  14. +120 −0 CONTRIBUTING.rst
  15. +51 −28 README.md
  16. +0 −7 check_probscale.py
  17. +3 −3 conda.recipe/meta.yaml
  18. +1 −0 docs/authors.rst
  19. +101 −95 docs/conf.py
  20. +1 −0 docs/contributing.rst
  21. +38 −0 docs/examples/example.py
  22. BIN docs/img/example.png
  23. +13 −9 docs/index.rst
  24. +51 −0 docs/installation.rst
  25. +1 −0 docs/readme.rst
  26. +32 −28 docs/sphinxext/ipython_console_highlighting.py
  27. +276 −217 docs/sphinxext/ipython_directive.py
  28. +193 −143 docs/sphinxext/plot_directive.py
  29. +75 −73 docs/sphinxext/plot_generator.py
  30. +0 −5 docs/tutorial/Makefile
  31. +20 −7 docs/tutorial/closer_look_at_plot_pos.ipynb
  32. +132 −122 docs/tutorial/closer_look_at_viz.ipynb
  33. +17 −12 docs/tutorial/getting_started.ipynb
  34. +34 −0 docs/tutorial/make.py
  35. +0 −27 docs/tutorial/tools/nb_to_doc.py
  36. +0 −31 docs/tutorial/tools/nbstripout
  37. +6 −1 probscale/__init__.py
  38. +141 −0 probscale/algo.py
  39. +14 −15 probscale/formatters.py
  40. +25 −20 probscale/probscale.py
  41. +0 −10 probscale/tests/__init__.py
  42. BIN probscale/tests/baseline_images/test_probscale/test_the_scale_beta.png
  43. BIN probscale/tests/baseline_images/test_probscale/test_the_scale_default.png
  44. BIN probscale/tests/baseline_images/test_viz/test_probplot_beta_dist_best_fit_x.png
  45. BIN probscale/tests/baseline_images/test_viz/test_probplot_beta_dist_best_fit_y.png
  46. BIN probscale/tests/baseline_images/test_viz/test_probplot_color_and_label.png
  47. BIN probscale/tests/baseline_images/test_viz/test_probplot_pp.png
  48. BIN probscale/tests/baseline_images/test_viz/test_probplot_pp_bestfit.png
  49. BIN probscale/tests/baseline_images/test_viz/test_probplot_pp_bestfit_probax_y.png
  50. BIN probscale/tests/baseline_images/test_viz/test_probplot_prob.png
  51. BIN probscale/tests/baseline_images/test_viz/test_probplot_prob_bestfit.png
  52. BIN probscale/tests/baseline_images/test_viz/test_probplot_prob_bestfit_exceedance.png
  53. BIN probscale/tests/baseline_images/test_viz/test_probplot_prob_bestfit_probax_y.png
  54. BIN probscale/tests/baseline_images/test_viz/test_probplot_prob_probax_y.png
  55. BIN probscale/tests/baseline_images/test_viz/test_probplot_qq.png
  56. BIN probscale/tests/baseline_images/test_viz/test_probplot_qq_bestfit.png
  57. BIN probscale/tests/baseline_images/test_viz/test_probplot_qq_bestfit_probax_y.png
  58. BIN probscale/tests/baseline_images/test_viz/test_probplot_qq_probax_y.png
  59. +14 −0 probscale/tests/helpers.py
  60. +252 −0 probscale/tests/test_algo.py
  61. +51 −32 probscale/tests/test_formatters.py
  62. +90 −35 probscale/tests/test_probscale.py
  63. +66 −35 probscale/tests/test_transforms.py
  64. +54 −32 probscale/tests/test_validate.py
  65. +876 −342 probscale/tests/test_viz.py
  66. +22 −16 probscale/transforms.py
  67. +36 −19 probscale/validate.py
  68. +233 −138 probscale/viz.py
  69. +5 −0 setup.cfg
  70. +28 −18 setup.py
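The largest new module in this comparison is probscale/algo.py (+141 lines, with tests in probscale/tests/test_algo.py). Computing plotting positions is the kind of routine such a module typically contains; the sketch below is a hypothetical illustration of the standard plotting-position formula, not the module's actual API (the function name, signature, and defaults here are assumptions):

```python
def plotting_positions(n, alpha=0.4, beta=0.4):
    """Return plotting positions for ranks 1..n as fractions in (0, 1).

    Uses the general formula p_i = (i - alpha) / (n + 1 - alpha - beta).
    alpha = beta = 0.4 gives the Cunnane positions; alpha = beta = 0
    gives the Weibull positions, p_i = i / (n + 1).
    """
    return [(i - alpha) / (n + 1 - alpha - beta) for i in range(1, n + 1)]


# Weibull positions for n=3 are simply i / (n + 1): 0.25, 0.5, 0.75
weibull = plotting_positions(3, alpha=0.0, beta=0.0)
```

These positions are what get plotted on the probability axis of a probability plot; different alpha/beta choices only shift how far into the tails the extreme data points are placed.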
5 changes: 5 additions & 0 deletions .codeclimate.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
languages:
Python: true
exclude_paths:
- "tests/*.py"
- "docs/*"
1 change: 0 additions & 1 deletion .travis_coveragerc → .coveragerc
@@ -2,7 +2,6 @@
[run]
source = probscale
branch = True
include = probscale/*.py
omit =
probscale/tests/*

21 changes: 21 additions & 0 deletions .editorconfig
@@ -0,0 +1,21 @@
# http://editorconfig.org

root = true

[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
end_of_line = lf

[*.bat]
indent_style = tab
end_of_line = crlf

[LICENSE]
insert_final_newline = false

[Makefile]
indent_style = tab
17 changes: 17 additions & 0 deletions .github/ISSUE_TEMPLATE.md
@@ -0,0 +1,17 @@
* Python version:
* numpy version:
* matplotlib version:
* mpl-probscale version:
* Operating System:

### Description

Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.

### What I Did

```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
10 changes: 10 additions & 0 deletions .github/workflows/black.yml
@@ -0,0 +1,10 @@
name: Lint with Black

on: [push, pull_request]

jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: psf/black@stable
32 changes: 32 additions & 0 deletions .github/workflows/check-test-coverage.yml
@@ -0,0 +1,32 @@
name: Coverage (with doctests)
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]

jobs:
run:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: "3.11"
- name: Generate coverage report
run: |
python -m pip install --upgrade pip
pip install pytest pytest-cov pytest-mpl coverage docopt
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
pip install scipy seaborn
export MPL_IMGCOMP_TOLERANCE=20
coverage run -m pytest --mpl --doctest-glob="probscale/*.py" --cov-report=xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
# directory: ./coverage/reports/
flags: unittests
name: codecov-umbrella
fail_ci_if_error: true
path_to_write_report: ./codecov_report.gz
31 changes: 31 additions & 0 deletions .github/workflows/python-publish.yml
@@ -0,0 +1,31 @@
# This workflows will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

name: Publish Python Package

on:
release:
types: [created]

jobs:
deploy:

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PMH_PYPI_USER }}
TWINE_PASSWORD: ${{ secrets.PMH_PYPI_PASS }}
run: |
python setup.py sdist bdist_wheel
twine upload dist/*
33 changes: 33 additions & 0 deletions .github/workflows/python-runlinter.yml
@@ -0,0 +1,33 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Lint with flake8

on:
push:
branches: [ master ]
pull_request:
branches: [ master ]

jobs:
build:

runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: "3.11"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics --exclude .git,docs/*
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
34 changes: 34 additions & 0 deletions .github/workflows/python-runtests-all.yml
@@ -0,0 +1,34 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Run units test (w/ img comps)

on:
push:
branches: [ master ]
pull_request:
branches: [ master ]

jobs:
build:

runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest pytest-mpl docopt
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Test with pytest
run: |
export MPL_IMGCOMP_TOLERANCE=20
python -m pytest --mpl
5 changes: 5 additions & 0 deletions .gitignore
@@ -44,6 +44,7 @@ coverage.xml
*,cover
.noseids
result_images
.pytest_cache/

# Translations
*.mo
@@ -63,5 +64,9 @@ target/

# VS
.vs
.vscode
*.pyproj
*.sln

# other text editors
.atom-build.yml
57 changes: 0 additions & 57 deletions .travis.yml

This file was deleted.

13 changes: 13 additions & 0 deletions AUTHORS.rst
@@ -0,0 +1,13 @@
=======
Credits
=======

Development Lead
----------------

* Paul M. Hobson <pmhobson@gmail.com>

Contributors
------------

* Pierre Haessig
9 changes: 0 additions & 9 deletions CONTRIBUTING.md

This file was deleted.

120 changes: 120 additions & 0 deletions CONTRIBUTING.rst
@@ -0,0 +1,120 @@
.. highlight:: shell

============
Contributing
============

Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.

You can contribute in many ways:

Types of Contributions
----------------------

Report Bugs
~~~~~~~~~~~

Report bugs at https://github.com/matplotlib/mpl-probscale/issues.

If you are reporting a bug, please include:

* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.

Fix Bugs
~~~~~~~~

Look through the GitHub issues for bugs. Anything tagged with "bug"
and "help wanted" is open to whoever wants to implement it.

Implement Features
~~~~~~~~~~~~~~~~~~

Look through the GitHub issues for features. Anything tagged with "enhancement"
and "help wanted" is open to whoever wants to implement it.

Write Documentation
~~~~~~~~~~~~~~~~~~~

mpl-probscale could always use more documentation, whether as part of the
official mpl-probscale docs, in docstrings, or even on the web in blog posts,
articles, and such.

Submit Feedback
~~~~~~~~~~~~~~~

The best way to send feedback is to file an issue at https://github.com/matplotlib/mpl-probscale/issues.

If you are proposing a feature:

* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)

Get Started!
------------

Ready to contribute? Here's how to set up `probscale` for local development.

1. Fork the `probscale` repo on GitHub.
2. Clone your fork locally::

$ git clone git@github.com:your_name_here/probscale.git

3. Install your local copy into a conda environment. Assuming you have conda installed, this is how you set up your fork for local development::

$ conda config --add channels conda-forge
$ conda create --name=probscale python=3.5 numpy matplotlib pytest pytest-cov pytest-pep8 pytest-mpl
$ cd probscale/
$ pip install -e .

4. Create a branch for local development::

$ git checkout -b name-of-your-bugfix-or-feature

Now you can make your changes locally.

5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox::

$ python -m pytest --mpl --pep8 --cov

6. Commit your changes and push your branch to GitHub::

$ git add <files you want to stage>
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature

7. Submit a pull request through the GitHub website.

Matplotlib has good info on working with `source code`_ using `git and GitHub`_.

.. _source code: http://matplotlib.org/devel/coding_guide.html
.. _git and GitHub: http://matplotlib.org/devel/gitwash/development_workflow.html

Pull Request Guidelines
-----------------------

Before you submit a pull request, check that it meets these guidelines:

1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 3.4 and higher. Check
https://travis-ci.org/matplotlib/mpl-probscale/pull_requests
and make sure that the tests pass for all supported Python versions.

Tips
----

To run a subset of tests::

$ python -m pytest probscale/tests/test_probscale.py


79 changes: 51 additions & 28 deletions README.md
@@ -1,58 +1,81 @@
# mpl-probscale

Real probability scales for matplotlib

[![Build Status](https://travis-ci.org/phobson/mpl-probscale.svg)](https://travis-ci.org/phobson/mpl-probscale)
[![Coverage Status](https://coveralls.io/repos/phobson/mpl-probscale/badge.svg?branch=master&service=github)](https://coveralls.io/github/phobson/mpl-probscale?branch=master)
![Coverage](https://github.com/matplotlib/mpl-probscale/workflows/Coverage%20via%20codecov/badge.svg)
![Linter](https://github.com/matplotlib/mpl-probscale/workflows/Lint%20with%20flake8/badge.svg)
![Tests](https://github.com/matplotlib/mpl-probscale/workflows/Image%20comparison%20tests/badge.svg)

[Sphinx Docs](http://phobson.github.io/mpl-probscale/)
[Sphinx Docs](http://matplotlib.org/mpl-probscale/)

## Installation

### Official releases

Official releases are available through the conda-forge channel or pip"
Official releases are available through the conda-forge channel or pip

`conda install mpl-probscale --channel=conda-forge`

`pip install probscale`

### Development builds

Development builds are available through my conda channel:

`conda install mpl-probscale --channel=phobson`

This is a pure-python package, so building from source is easy on all platforms:

```shell
git clone git@github.com:matplotlib/mpl-probscale.git
cd mpl-probscale
pip install -e .
```

## Quick start

Simply importing `probscale` lets you use probability scales in your matplotlib figures:

```python
import matplotlib.pyplot as plt
import probscale
import seaborn
clear_bkgd = {'axes.facecolor':'none', 'figure.facecolor':'none'}
seaborn.set(style='ticks', context='notebook', rc=clear_bkgd)

fig, ax = plt.subplots(figsize=(8, 4))
ax.set_ylim(1e-2, 1e2)
ax.set_yscale('log')

ax.set_xlim(0.5, 99.5)
ax.set_xscale('prob')
seaborn.despine(fig=fig)
from pathlib import Path  # needed for the savefig path at the end

from matplotlib import pyplot
from scipy import stats
import probscale # nothing else needed

beta = stats.beta(a=3, b=4)
weibull = stats.weibull_min(c=5)
scales = [
{"scale": {"value": "linear"}, "label": "Linear (built-in)"},
{"scale": {"value": "log", "base": 10}, "label": "Log. Base 10 (built-in)"},
{"scale": {"value": "log", "base": 2}, "label": "Log. Base 2 (built-in)"},
{"scale": {"value": "logit"}, "label": "Logit (built-in)"},
{"scale": {"value": "prob"}, "label": "Standard Normal Probability (this package)"},
{
"scale": {"value": "prob", "dist": weibull},
"label": "Weibull probability scale, c=5 (this package)",
},
{
"scale": {"value": "prob", "dist": beta},
"label": "Beta probability scale, α=3 & β=4 (this package)",
},
]

N = len(scales)
fig, axes = pyplot.subplots(nrows=N, figsize=(9, N - 1), constrained_layout=True)
for scale, ax in zip(scales, axes.flat):
ax.set_xscale(**scale["scale"])
ax.text(0.0, 0.1, scale["label"] + "", transform=ax.transAxes)
ax.set_xlim(left=0.5, right=99.5)
ax.set_yticks([])
ax.spines.left.set_visible(False)
ax.spines.right.set_visible(False)
ax.spines.top.set_visible(False)

outpath = Path(__file__).parent.joinpath("../img/example.png").resolve()
fig.savefig(outpath, dpi=300)
```

![Alt text](docs/img/example.png "Example axes")

## Testing

Testing is generally done via the ``nose`` and ``numpy.testing`` modules.
The best way to run the tests is in an interactive python session:
Testing is generally done via ``pytest``.

```python
import matplotlib
matplotlib.use('agg')
import probscale
probscale.test()
```

```shell
python -m pytest --mpl --doctest-glob="probscale/*.py"
```
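The `'prob'` scale demonstrated in the quick start works by placing percentages at the corresponding quantile (inverse CDF) of a probability distribution, which stretches out the tails relative to a linear axis. A minimal standalone sketch of that idea for the standard-normal case, using only the standard library (this illustrates the underlying math, not probscale's actual implementation):

```python
from statistics import NormalDist

# Percentages map to z-scores via the standard normal's inverse CDF,
# so 50% sits at 0 and the extreme percentiles get pushed outward.
_std_normal = NormalDist()


def prob_to_axis_coord(pct):
    """Map a percentage (0 < pct < 100) to its standard-normal z-score."""
    return _std_normal.inv_cdf(pct / 100.0)
```

On a probability scale, tick positions like 2.5% and 97.5% land symmetrically about the 50% mark, which is why normally distributed data plot as a straight line.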
7 changes: 0 additions & 7 deletions check_probscale.py

This file was deleted.

6 changes: 3 additions & 3 deletions conda.recipe/meta.yaml
@@ -1,13 +1,13 @@
package:
name: mpl-probscale
version: 0.2.0dev
version: 0.2.5

source:
path: ../

build:
script: python setup.py install
number: 1
number: 0

requirements:
build:
@@ -36,6 +36,6 @@ test:
- scipy

about:
home: http://phobson.github.io/mpl-probscale/
home: http://matplotlib.org/mpl-probscale/
license: BSD License
summary: 'Probability scales for matplotlib.'
1 change: 1 addition & 0 deletions docs/authors.rst
@@ -0,0 +1 @@
.. include:: ../AUTHORS.rst
196 changes: 101 additions & 95 deletions docs/conf.py
@@ -18,50 +18,51 @@
import shlex

import seaborn
clear_bkgd = {'axes.facecolor':'none', 'figure.facecolor':'none'}
seaborn.set(style='ticks', context='talk', color_codes=True, rc=clear_bkgd)

clear_bkgd = {"axes.facecolor": "none", "figure.facecolor": "none"}
seaborn.set(style="ticks", context="talk", color_codes=True, rc=clear_bkgd)

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# needs_sphinx = '1.0'

source_suffix = ['.rst']
source_suffix = [".rst"]

numpydoc_show_class_members = False
autodoc_member_order = 'bysource'
html_theme = 'sphinx_rtd_theme'
autodoc_member_order = "bysource"
html_theme = "sphinx_rtd_theme"

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
sys.path.insert(0, os.path.abspath('sphinxext'))
sys.path.insert(0, os.path.abspath("sphinxext"))
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'plot_generator',
'plot_directive',
'numpydoc',
'ipython_directive',
'ipython_console_highlighting',
"sphinx.ext.autodoc",
"sphinx.ext.doctest",
"sphinx.ext.intersphinx",
"sphinx.ext.todo",
"sphinx.ext.mathjax",
"sphinx.ext.viewcode",
"plot_generator",
"plot_directive",
"numpydoc",
"ipython_directive",
"ipython_console_highlighting",
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
source_suffix = ".rst"

# Include the example source for plots in API docs
plot_include_source = True
@@ -70,24 +71,24 @@
plot_html_show_source_link = False

# The encoding of source files.
#source_encoding = 'utf-8-sig'
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'
master_doc = "index"

# General information about the project.
project = 'probscale'
copyright = '2015, Paul Hobson (Geosyntec Consultants)'
author = 'Paul Hobson (Geosyntec Consultants)'
project = "probscale"
copyright = "2015, Paul Hobson (Geosyntec Consultants)"
author = "Paul Hobson (Geosyntec Consultants)"

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.2.0'
version = "0.2.5"
# The full version, including alpha/beta/rc tags.
release = '0.2.0'
release = "0.2.5"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -98,37 +99,37 @@

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
exclude_patterns = ["_build"]

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
pygments_style = "sphinx"

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# keep_warnings = False

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
@@ -138,156 +139,155 @@

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
html_theme = "sphinx_rtd_theme"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_static_path = ["_static"]

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True
# html_domain_indices = True

# If false, no index is generated.
#html_use_index = True
# html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False
# html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# html_file_suffix = None

# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# html_search_language = 'en'

# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# html_search_options = {'type': 'default'}

# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# html_search_scorer = 'scorer.js'

# Output file base name for HTML help builder.
htmlhelp_basename = 'probscaledoc'
htmlhelp_basename = "probscaledoc"

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',

# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',

# Additional stuff for the LaTeX preamble.
#'preamble': '',

# Latex figure (float) alignment
#'figure_align': 'htbp',
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'probscale.tex', 'probscale Documentation',
'Paul Hobson (Geosyntec Consultants)', 'manual'),
(
master_doc,
"probscale.tex",
"probscale Documentation",
"Paul Hobson (Geosyntec Consultants)",
"manual",
),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []
# latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'probscale', 'probscale Documentation',
[author], 1)
]
man_pages = [(master_doc, "probscale", "probscale Documentation", [author], 1)]

# If true, show URL addresses after external links.
#man_show_urls = False
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------
@@ -296,23 +296,29 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'probscale', 'probscale Documentation',
author, 'probscale', 'One line description of project.',
'Miscellaneous'),
(
master_doc,
"probscale",
"probscale Documentation",
author,
"probscale",
"One line description of project.",
"Miscellaneous",
),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# texinfo_no_detailmenu = False


# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}
intersphinx_mapping = {"https://docs.python.org/": None}
1 change: 1 addition & 0 deletions docs/contributing.rst
@@ -0,0 +1 @@
.. include:: ../CONTRIBUTING.rst
38 changes: 38 additions & 0 deletions docs/examples/example.py
@@ -0,0 +1,38 @@
# %%
from pathlib import Path

from matplotlib import pyplot
from scipy import stats
import probscale # nothing else needed

beta = stats.beta(a=3, b=4)
weibull = stats.weibull_min(c=5)
scales = [
{"scale": {"value": "linear"}, "label": "Linear (built-in)"},
{"scale": {"value": "log", "base": 10}, "label": "Log. Base 10 (built-in)"},
{"scale": {"value": "log", "base": 2}, "label": "Log. Base 2 (built-in)"},
{"scale": {"value": "logit"}, "label": "Logit (built-in)"},
{"scale": {"value": "prob"}, "label": "Standard Normal Probability (this package)"},
{
"scale": {"value": "prob", "dist": weibull},
"label": "Weibull probability scale, c=5 (this package)",
},
{
"scale": {"value": "prob", "dist": beta},
"label": "Beta probability scale, α=3 & β=4 (this package)",
},
]

N = len(scales)
fig, axes = pyplot.subplots(nrows=N, figsize=(9, N - 1), constrained_layout=True)
for scale, ax in zip(scales, axes.flat):
ax.set_xscale(**scale["scale"])
ax.text(0.0, 0.1, scale["label"] + " →", transform=ax.transAxes)
ax.set_xlim(left=0.5, right=99.5)
ax.set_yticks([])
ax.spines.left.set_visible(False)
ax.spines.right.set_visible(False)
ax.spines.top.set_visible(False)

outpath = Path(__file__).parent.joinpath("../img/example.png").resolve()
fig.savefig(outpath, dpi=300)
Binary file modified docs/img/example.png
22 changes: 13 additions & 9 deletions docs/index.rst
@@ -7,13 +7,13 @@
mpl-probscale: Real probability scales for matplotlib
=====================================================

.. image:: https://travis-ci.org/phobson/watershed.svg?branch=master
:target: https://travis-ci.org/phobson/watershed
.. image:: https://travis-ci.org/matplotlib/mpl-probscale.svg?branch=master
:target: https://travis-ci.org/matplotlib/mpl-probscale

.. image:: https://coveralls.io/repos/phobson/mpl-probscale/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/phobson/mpl-probscale?branch=master
.. image:: https://coveralls.io/repos/matplotlib/mpl-probscale/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/matplotlib/mpl-probscale?branch=master

https://github.com/phobson/mpl-probscale
https://github.com/matplotlib/mpl-probscale

Installation
------------
@@ -32,9 +32,13 @@ or
Development builds
~~~~~~~~~~~~~~~~~~

Development builds are available through my conda channel:
This is a pure-python package, so building from source is easy on all platforms:

``conda install mpl-probscale --channel=phobson``
::

    git clone git@github.com:matplotlib/mpl-probscale.git
    cd mpl-probscale
    pip install -e .


Quickstart
@@ -81,8 +85,8 @@ It's easiest to run the tests from an interactive python session:
import matplotlib
matplotlib.use('agg')
import probscale
probscale.test()
from probscale import tests
tests.test()
API References
==============
51 changes: 51 additions & 0 deletions docs/installation.rst
@@ -0,0 +1,51 @@
.. highlight:: shell

============
Installation
============


Stable release
--------------

To install mpl-probscale, run this command in your terminal:

.. code-block:: console

    $ pip install probscale

This is the preferred method to install mpl-probscale, as it will always install the most recent stable release.

If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.

.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/


From sources
------------

The sources for mpl-probscale can be downloaded from the `Github repo`_.

You can either clone the public repository:

.. code-block:: console

    $ git clone git://github.com/matplotlib/mpl-probscale

Or download the `tarball`_:

.. code-block:: console

    $ curl -OL https://github.com/matplotlib/mpl-probscale/tarball/master

Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ pip install .

.. _Github repo: https://github.com/matplotlib/mpl-probscale
.. _tarball: https://github.com/matplotlib/mpl-probscale/tarball/master
1 change: 1 addition & 0 deletions docs/readme.rst
@@ -0,0 +1 @@
.. include:: ../README.md
60 changes: 32 additions & 28 deletions docs/sphinxext/ipython_console_highlighting.py
@@ -5,25 +5,24 @@
highlighted tracebacks.
"""

#-----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# Needed modules

# Standard library
import re

# Third party
from pygments.lexer import Lexer, do_insertions
from pygments.lexers.agile import (PythonConsoleLexer, PythonLexer,
PythonTracebackLexer)
from pygments.lexers.agile import PythonConsoleLexer, PythonLexer, PythonTracebackLexer
from pygments.token import Comment, Generic

from sphinx import highlighting

#-----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# Global constants
line_re = re.compile('.*?\n')
line_re = re.compile(".*?\n")

#-----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# Code begins - classes and functions


@@ -51,9 +50,9 @@ class IPythonConsoleLexer(Lexer):
- It assumes the default IPython prompts, not customized ones.
"""

name = 'IPython console session'
aliases = ['ipython']
mimetypes = ['text/x-ipython-console']
name = "IPython console session"
aliases = ["ipython"]
mimetypes = ["text/x-ipython-console"]
input_prompt = re.compile("(In \[[0-9]+\]: )|( \.\.\.+:)")
output_prompt = re.compile("(Out\[[0-9]+\]: )|( \.\.\.+:)")
continue_prompt = re.compile(" \.\.\.+:")
@@ -63,42 +62,46 @@ def get_tokens_unprocessed(self, text):
pylexer = PythonLexer(**self.options)
tblexer = PythonTracebackLexer(**self.options)

curcode = ''
curcode = ""
insertions = []
for match in line_re.finditer(text):
line = match.group()
input_prompt = self.input_prompt.match(line)
continue_prompt = self.continue_prompt.match(line.rstrip())
output_prompt = self.output_prompt.match(line)
if line.startswith("#"):
insertions.append((len(curcode),
[(0, Comment, line)]))
insertions.append((len(curcode), [(0, Comment, line)]))
elif input_prompt is not None:
insertions.append((len(curcode),
[(0, Generic.Prompt, input_prompt.group())]))
curcode += line[input_prompt.end():]
insertions.append(
(len(curcode), [(0, Generic.Prompt, input_prompt.group())])
)
curcode += line[input_prompt.end() :]
elif continue_prompt is not None:
insertions.append((len(curcode),
[(0, Generic.Prompt, continue_prompt.group())]))
curcode += line[continue_prompt.end():]
insertions.append(
(len(curcode), [(0, Generic.Prompt, continue_prompt.group())])
)
curcode += line[continue_prompt.end() :]
elif output_prompt is not None:
# Use the 'error' token for output. We should probably make
# our own token, but error is typicaly in a bright color like
# red, so it works fine for our output prompts.
insertions.append((len(curcode),
[(0, Generic.Error, output_prompt.group())]))
curcode += line[output_prompt.end():]
insertions.append(
(len(curcode), [(0, Generic.Error, output_prompt.group())])
)
curcode += line[output_prompt.end() :]
else:
if curcode:
for item in do_insertions(insertions,
pylexer.get_tokens_unprocessed(curcode)):
for item in do_insertions(
insertions, pylexer.get_tokens_unprocessed(curcode)
):
yield item
curcode = ''
curcode = ""
insertions = []
yield match.start(), Generic.Output, line
if curcode:
for item in do_insertions(insertions,
pylexer.get_tokens_unprocessed(curcode)):
for item in do_insertions(
insertions, pylexer.get_tokens_unprocessed(curcode)
):
yield item


@@ -111,6 +114,7 @@ def setup(app):
# suppresses the sphinx warning we'd get without it.
pass

#-----------------------------------------------------------------------------

# -----------------------------------------------------------------------------
# Register the extension as a valid pygments lexer
highlighting.lexers['ipython'] = IPythonConsoleLexer()
highlighting.lexers["ipython"] = IPythonConsoleLexer()
493 changes: 276 additions & 217 deletions docs/sphinxext/ipython_directive.py

Large diffs are not rendered by default.

336 changes: 193 additions & 143 deletions docs/sphinxext/plot_directive.py

Large diffs are not rendered by default.

148 changes: 75 additions & 73 deletions docs/sphinxext/plot_generator.py
@@ -15,7 +15,8 @@
import json

import matplotlib
matplotlib.use('Agg')

matplotlib.use("Agg")
import matplotlib.pyplot as plt

from matplotlib import image
@@ -116,15 +117,15 @@
"""


def create_thumbnail(infile, thumbfile,
width=300, height=300,
cx=0.5, cy=0.5, border=4):
def create_thumbnail(
infile, thumbfile, width=300, height=300, cx=0.5, cy=0.5, border=4
):
baseout, extout = op.splitext(thumbfile)

im = image.imread(infile)
rows, cols = im.shape[:2]
x0 = int(cx * cols - .5 * width)
y0 = int(cy * rows - .5 * height)
x0 = int(cx * cols - 0.5 * width)
y0 = int(cy * rows - 0.5 * height)
xslice = slice(x0, x0 + width)
yslice = slice(y0, y0 + height)
thumb = im[yslice, xslice]
@@ -134,25 +135,24 @@ def create_thumbnail(infile, thumbfile,
dpi = 100
fig = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi)

ax = fig.add_axes([0, 0, 1, 1], aspect='auto',
frameon=False, xticks=[], yticks=[])
ax.imshow(thumb, aspect='auto', resample=True,
interpolation='bilinear')
ax = fig.add_axes([0, 0, 1, 1], aspect="auto", frameon=False, xticks=[], yticks=[])
ax.imshow(thumb, aspect="auto", resample=True, interpolation="bilinear")
fig.savefig(thumbfile, dpi=dpi)
return fig


def indent(s, N=4):
"""indent a string"""
return s.replace('\n', '\n' + N * ' ')
return s.replace("\n", "\n" + N * " ")


class ExampleGenerator(object):
"""Tools for generating an example page from a file"""

def __init__(self, filename, target_dir):
self.filename = filename
self.target_dir = target_dir
self.thumbloc = .5, .5
self.thumbloc = 0.5, 0.5
self.extract_docstring()
with open(filename, "r") as fid:
self.filetext = fid.read()
@@ -161,12 +161,11 @@ def __init__(self, filename, target_dir):

# Only actually run it if the output RST file doesn't
# exist or it was modified less recently than the example
if (not op.exists(outfilename)
or (op.getmtime(outfilename) < op.getmtime(filename))):

if not op.exists(outfilename) or (
op.getmtime(outfilename) < op.getmtime(filename)
):
self.exec_file()
else:

print("skipping {0}".format(self.filename))

@property
@@ -183,24 +182,24 @@ def modulename(self):

@property
def pyfilename(self):
return self.modulename + '.py'
return self.modulename + ".py"

@property
def rstfilename(self):
return self.modulename + ".rst"

@property
def htmlfilename(self):
return self.modulename + '.html'
return self.modulename + ".html"

@property
def pngfilename(self):
pngfile = self.modulename + '.png'
pngfile = self.modulename + ".png"
return "_images/" + pngfile

@property
def thumbfilename(self):
pngfile = self.modulename + '_thumb.png'
pngfile = self.modulename + "_thumb.png"
return pngfile

@property
@@ -209,7 +208,7 @@ def sphinxtag(self):

@property
def pagetitle(self):
return self.docstring.strip().split('\n')[0].strip()
return self.docstring.strip().split("\n")[0].strip()

@property
def plotfunc(self):
@@ -225,28 +224,27 @@ def plotfunc(self):
return ""

def extract_docstring(self):
""" Extract a module-level docstring
"""
"""Extract a module-level docstring"""
lines = open(self.filename).readlines()
start_row = 0
if lines[0].startswith('#!'):
if lines[0].startswith("#!"):
lines.pop(0)
start_row = 1

docstring = ''
first_par = ''
docstring = ""
first_par = ""
tokens = tokenize.generate_tokens(lines.__iter__().next)
for tok_type, tok_content, _, (erow, _), _ in tokens:
tok_type = token.tok_name[tok_type]
if tok_type in ('NEWLINE', 'COMMENT', 'NL', 'INDENT', 'DEDENT'):
if tok_type in ("NEWLINE", "COMMENT", "NL", "INDENT", "DEDENT"):
continue
elif tok_type == 'STRING':
elif tok_type == "STRING":
docstring = eval(tok_content)
# If the docstring is formatted with several paragraphs,
# extract the first one:
paragraphs = '\n'.join(line.rstrip()
for line in docstring.split('\n')
).split('\n\n')
paragraphs = "\n".join(
line.rstrip() for line in docstring.split("\n")
).split("\n\n")
if len(paragraphs) > 0:
first_par = paragraphs[0]
break
@@ -259,8 +257,9 @@ def extract_docstring(self):
break
if thumbloc is not None:
self.thumbloc = thumbloc
docstring = "\n".join([l for l in docstring.split("\n")
if not l.startswith("_thumb")])
docstring = "\n".join(
[l for l in docstring.split("\n") if not l.startswith("_thumb")]
)

self.docstring = docstring
self.short_desc = first_par
@@ -269,9 +268,8 @@ def extract_docstring(self):
def exec_file(self):
print("running {0}".format(self.filename))

plt.close('all')
my_globals = {'pl': plt,
'plt': plt}
plt.close("all")
my_globals = {"pl": plt, "plt": plt}
execfile(self.filename, my_globals)

fig = plt.gcf()
@@ -288,28 +286,27 @@ def toctree_entry(self):
return " ./%s\n\n" % op.splitext(self.htmlfilename)[0]

def contents_entry(self):
return (".. raw:: html\n\n"
" <div class='figure align-center'>\n"
" <a href=./{0}>\n"
" <img src=../_static/{1}>\n"
" <span class='figure-label'>\n"
" <p>{2}</p>\n"
" </span>\n"
" </a>\n"
" </div>\n\n"
"\n\n"
"".format(self.htmlfilename,
self.thumbfilename,
self.plotfunc))
return (
".. raw:: html\n\n"
" <div class='figure align-center'>\n"
" <a href=./{0}>\n"
" <img src=../_static/{1}>\n"
" <span class='figure-label'>\n"
" <p>{2}</p>\n"
" </span>\n"
" </a>\n"
" </div>\n\n"
"\n\n"
"".format(self.htmlfilename, self.thumbfilename, self.plotfunc)
)


def main(app):
static_dir = op.join(app.builder.srcdir, '_static')
target_dir = op.join(app.builder.srcdir, 'examples')
image_dir = op.join(app.builder.srcdir, 'examples/_images')
static_dir = op.join(app.builder.srcdir, "_static")
target_dir = op.join(app.builder.srcdir, "examples")
image_dir = op.join(app.builder.srcdir, "examples/_images")
thumb_dir = op.join(app.builder.srcdir, "example_thumbs")
source_dir = op.abspath(op.join(app.builder.srcdir,
'..', 'examples'))
source_dir = op.abspath(op.join(app.builder.srcdir, "..", "examples"))
if not op.exists(static_dir):
os.makedirs(static_dir)

@@ -327,26 +324,29 @@ def main(app):

banner_data = []

toctree = ("\n\n"
".. toctree::\n"
" :hidden:\n\n")
toctree = "\n\n" ".. toctree::\n" " :hidden:\n\n"
contents = "\n\n"

# Write individual example files
for filename in glob.glob(op.join(source_dir, "*.py")):

ex = ExampleGenerator(filename, target_dir)

banner_data.append({"title": ex.pagetitle,
"url": op.join('examples', ex.htmlfilename),
"thumb": op.join(ex.thumbfilename)})
banner_data.append(
{
"title": ex.pagetitle,
"url": op.join("examples", ex.htmlfilename),
"thumb": op.join(ex.thumbfilename),
}
)
shutil.copyfile(filename, op.join(target_dir, ex.pyfilename))
output = RST_TEMPLATE.format(sphinx_tag=ex.sphinxtag,
docstring=ex.docstring,
end_line=ex.end_line,
fname=ex.pyfilename,
img_file=ex.pngfilename)
with open(op.join(target_dir, ex.rstfilename), 'w') as f:
output = RST_TEMPLATE.format(
sphinx_tag=ex.sphinxtag,
docstring=ex.docstring,
end_line=ex.end_line,
fname=ex.pyfilename,
img_file=ex.pngfilename,
)
with open(op.join(target_dir, ex.rstfilename), "w") as f:
f.write(output)

toctree += ex.toctree_entry()
@@ -356,12 +356,14 @@ def main(app):
banner_data = (4 * banner_data)[:10]

# write index file
index_file = op.join(target_dir, 'index.rst')
with open(index_file, 'w') as index:
index.write(INDEX_TEMPLATE.format(sphinx_tag="example_gallery",
toctree=toctree,
contents=contents))
index_file = op.join(target_dir, "index.rst")
with open(index_file, "w") as index:
index.write(
INDEX_TEMPLATE.format(
sphinx_tag="example_gallery", toctree=toctree, contents=contents
)
)


def setup(app):
app.connect('builder-inited', main)
app.connect("builder-inited", main)
5 changes: 0 additions & 5 deletions docs/tutorial/Makefile

This file was deleted.

27 changes: 20 additions & 7 deletions docs/tutorial/closer_look_at_plot_pos.ipynb
@@ -5,7 +5,8 @@
"metadata": {},
"source": [
"# Using different formulations of plotting positions\n",
"### Looking at normal vs Weibull scales + Cunnane vs Weibull plotting positions\n",
"\n",
"## Computing plotting positions\n",
"\n",
"When drawing a percentile, quantile, or probability plot, the potting positions of ordered data must be computed.\n",
"\n",
@@ -102,6 +103,13 @@
" ax2.set_ylabel('Weibull Probability Scale')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Normal vs Weibull scales and Cunnane vs Weibull plotting positions"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -129,10 +137,10 @@
"metadata": {},
"source": [
"Now let's create probability plots on both Weibull and normal probability scales.\n",
"Additionally, we'll compute the plotting positions two different but commone ways for each plot.\n",
"Additionally, we'll compute the plotting positions two different but common ways for each plot.\n",
"\n",
"First, in blue circles, we'll show the data with Weibull (α=0, β=0) plotting positions.\n",
"Weibull plotting positions are commonly use in my field, water resources engineering.\n",
"Weibull plotting positions are commonly used in fields such as hydrology and water resources engineering.\n",
"\n",
"In green squares, we'll use Cunnane (α=0.4, β=0.4) plotting positions.\n",
"Cunnane plotting positions are good for normally distributed data and are the default values."
@@ -171,10 +179,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This demostrates that the different formulations of the plotting positions vary most at the extreme values of the dataset. \n",
"This demonstrates that the different formulations of the plotting positions vary most at the extreme values of the dataset. \n",
"\n",
"### Hazen plotting positions\n",
"\n",
"Next, let's compare the Hazen/Type 5 (α=0.5, β=0.5) formulation to Cunnane.\n",
"Hazen plotting positions (shown as red triangles) represet a piece-wise linear interpolation of the emperical cumulative distribution function of the dataset.\n",
"Hazen plotting positions (shown as red triangles) represent a piece-wise linear interpolation of the empirical cumulative distribution function of the dataset.\n",
"\n",
"Given that the values of α and β (both 0.5) vary only slightly from the Cunnane values, the plotting positions are predictably similar."
]
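The Weibull, Cunnane, and Hazen formulations compared in these notebook cells are all special cases of the general plotting-position formula pᵢ = (i − α) / (n + 1 − α − β). A minimal numpy sketch of that formula (the function name here is illustrative, not part of the probscale API):

```python
import numpy


def plotting_positions(n, alpha=0.4, beta=0.4):
    # General plotting-position formula: p_i = (i - alpha) / (n + 1 - alpha - beta)
    # The default alpha = beta = 0.4 matches the Cunnane formulation.
    ranks = numpy.arange(1, n + 1)
    return (ranks - alpha) / (n + 1 - alpha - beta)


# Weibull (alpha = beta = 0) reduces to p_i = i / (n + 1)
weibull = plotting_positions(4, alpha=0, beta=0)

# Hazen (alpha = beta = 0.5) reduces to p_i = (i - 0.5) / n
hazen = plotting_positions(4, alpha=0.5, beta=0.5)
```

With n = 4 this gives Weibull positions of 0.2, 0.4, 0.6, 0.8 and Hazen positions of 0.125, 0.375, 0.625, 0.875, which shows why the formulations diverge most near the extremes.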
@@ -205,6 +215,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Summary\n",
"\n",
"At the risk of showing a very cluttered and hard to read figure, let's throw all three on the same normal probability scale:"
]
},
@@ -266,8 +278,9 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -281,7 +294,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
"version": "3.5.2"
}
},
"nbformat": 4,
254 changes: 132 additions & 122 deletions docs/tutorial/closer_look_at_viz.ipynb

Large diffs are not rendered by default.

29 changes: 17 additions & 12 deletions docs/tutorial/getting_started.ipynb
@@ -8,16 +8,16 @@
"\n",
"## Installation\n",
"\n",
"`mpl-probscale` is developed on Python 3.5. It is also tested on Python 3.4 and even 2.7 (for the time being).\n",
"`mpl-probscale` is developed on Python 3.6. It is also tested on Python 3.4, 3.5, and even 2.7 (for the time being).\n",
"\n",
"### From conda\n",
"Official releases of `mpl-probscale` can be found on conda-forge:\n",
"\n",
"`conda install --channel=conda-forge mpl-probscale`\n",
"\n",
"Fairly recent builds of the development verions are available on my channel:\n",
"Fairly recent builds of the development version are available on conda-forge:\n",
"\n",
"`conda install --channel=phobson mpl-probscale`\n",
"`conda install --channel=conda-forge mpl-probscale`\n",
"\n",
"\n",
"### From PyPI\n",
@@ -27,7 +27,7 @@
"\n",
"### From source\n",
"\n",
"`mpl-probscale` is a pure python package. It should be fairly trivial to install from source on any platform. To do that, download or clone from [github](https://github.com/phobson/mpl-probscale), unzip the archive if necessary then do:\n",
"`mpl-probscale` is a pure python package. It should be fairly trivial to install from source on any platform. To do that, download or clone from [github](https://github.com/matplotlib/mpl-probscale), unzip the archive if necessary then do:\n",
"\n",
"```\n",
"cd mpl-probscale # or wherever the setup.py got placed\n",
@@ -74,6 +74,8 @@
"source": [
"## Background\n",
"\n",
"### Built-in matplotlib scales\n",
"\n",
"To the casual user, you can set matplotlib scales to either \"linear\" or \"log\" (logarithmic). There are others (e.g., logit, symlog), but I haven't seen them too much in the wild.\n",
"\n",
"Linear scales are the default:"
@@ -124,7 +126,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Probabilty Scales "
"### Probability Scales "
]
},
{
@@ -294,6 +296,7 @@
" probax='y', # flip the plot\n",
" datascale='log', # scale of the non-probability axis\n",
" bestfit=True, # draw a best-fit line\n",
" estimate_ci=True,\n",
" datalabel='Lognormal Values', # labels and markers...\n",
" problabel='Non-exceedance probability',\n",
" scatter_kws=dict(marker='d', zorder=2, mew=1.25, mec='w', markersize=10),\n",
@@ -308,8 +311,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Percentile and Quanitile plots\n",
"For convenience, you can do percetile and quantile plots with the same function."
"### Percentile and Quantile plots\n",
"For convenience, you can do percentile and quantile plots with the same function."
]
},
{
@@ -363,18 +366,20 @@
"plot = (\n",
" seaborn.load_dataset(\"tips\")\n",
" .assign(pct=lambda df: 100 * df['tip'] / df['total_bill'])\n",
" .pipe(seaborn.FacetGrid, hue='sex', col='time', row='smoker', margin_titles=True, aspect=1.75)\n",
" .pipe(seaborn.FacetGrid, hue='sex', col='time', row='smoker', margin_titles=True, aspect=1., size=4)\n",
" .map(probscale.probplot, 'pct', bestfit=True, scatter_kws=dict(alpha=0.75), probax='y')\n",
" .add_legend()\n",
" .set_ylabels('Non-Exceedance Probabilty')\n",
" .set_ylabels('Non-Exceedance Probability')\n",
" .set_xlabels('Tips as percent of total bill')\n",
").set(ylim=(0.5, 99.5))"
" .set(ylim=(0.5, 99.5), xlim=(0, 100))\n",
")"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -388,7 +393,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
"version": "3.5.2"
}
},
"nbformat": 4,
34 changes: 34 additions & 0 deletions docs/tutorial/make.py
@@ -0,0 +1,34 @@
import os
import glob

import nbformat
from nbconvert import RSTExporter
from nbconvert.preprocessors import ExecutePreprocessor


def convert(nbfile):
basename, _ = os.path.splitext(nbfile)

meta = {"metadata": {"path": "."}}
with open(nbfile, "r", encoding="utf-8") as nbf:
nbdata = nbformat.read(nbf, as_version=4, encoding="utf-8")

runner = ExecutePreprocessor(timeout=600, kernel_name="probscale")
runner.preprocess(nbdata, meta)

img_folder = basename + "_files"
body_raw, images = RSTExporter().from_notebook_node(nbdata)
body_final = body_raw.replace(".. image:: ", ".. image:: {}/".format(img_folder))

with open(basename + ".rst", "w", encoding="utf-8") as rst_out:
rst_out.write(body_final)

for img_name, img_data in images["outputs"].items():
img_path = os.path.join(img_folder, img_name)
with open(img_path, "wb") as img:
img.write(img_data)


if __name__ == "__main__":
for nbfile in glob.glob("*.ipynb"):
convert(nbfile)
27 changes: 0 additions & 27 deletions docs/tutorial/tools/nb_to_doc.py

This file was deleted.

31 changes: 0 additions & 31 deletions docs/tutorial/tools/nbstripout

This file was deleted.

7 changes: 6 additions & 1 deletion probscale/__init__.py
@@ -2,6 +2,11 @@

from .viz import *
from .probscale import ProbScale
from .tests import test


scale.register_scale(ProbScale)


__version__ = "0.2.6dev"
__author__ = "Paul Hobson (Herrera Environmental Consultants)"
__author_email__ = "phobson@herrerainc.com"
141 changes: 141 additions & 0 deletions probscale/algo.py
@@ -0,0 +1,141 @@
import numpy


def _make_boot_index(elements, niter):
"""Generate an array of bootstrap sample sets
Parameters
----------
elements : int
The number of rows in the original dataset.
niter : int
Number of iterations for the bootstrapping.
Returns
-------
index : numpy array
A collection of random *indices* that can be used to randomly
sample a dataset ``niter`` times.
"""
return numpy.random.randint(low=0, high=elements, size=(niter, elements))
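As a quick sanity check, `_make_boot_index` (reproduced verbatim so this snippet runs standalone) returns a `(niter, elements)` array of indices suitable for fancy-indexing the original data, sampling rows with replacement:

```python
import numpy


def _make_boot_index(elements, niter):
    # One row per bootstrap iteration; each row holds `elements`
    # random indices into the original dataset (with replacement).
    return numpy.random.randint(low=0, high=elements, size=(niter, elements))


index = _make_boot_index(elements=8, niter=3)
```

Indexing an 8-row dataset as `x[index[0]]` then yields one bootstrap resample per row of `index`.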


def _fit_simple(x, y, xhat, fitlogs=None):
"""
Simple linear fit of x and y data using ``numpy.polyfit``.
Parameters
----------
x, y : array-like
fitlogs : str, optional.
Defines which data should be log-transformed. Valid values are
'x', 'y', or 'both'.
Returns
-------
xhat, yhat : array-like
Estimates of x and y based on the linear fit
results : dict
Dictionary of the fit coefficients
See also
--------
numpy.polyfit
"""

# do the best-fit
coeffs = numpy.polyfit(x, y, 1)

results = {"slope": coeffs[0], "intercept": coeffs[1]}

# estimate y values
yhat = _estimate_from_fit(
xhat,
coeffs[0],
coeffs[1],
xlog=fitlogs in ["x", "both"],
ylog=fitlogs in ["y", "both"],
)

return yhat, results


def _bs_fit(x, y, xhat, fitlogs=None, niter=10000, alpha=0.05):
"""
Percentile method bootstrapping of linear fit of x and y data using
``numpy.polyfit``.
Parameters
----------
x, y : array-like
fitlogs : str, optional.
Defines which data should be log-transformed. Valid values are
'x', 'y', or 'both'.
niter : int, optional (default is 10000)
Number of bootstrap iterations to use
alpha : float, optional
Confidence level of the estimate.
Returns
-------
xhat, yhat : array-like
Estimates of x and y based on the linear fit
results : dict
Dictionary of the fit coefficients
See also
--------
numpy.polyfit
"""

index = _make_boot_index(len(x), niter)
yhat_array = numpy.array(
[_fit_simple(x[ii], y[ii], xhat, fitlogs=fitlogs)[0] for ii in index]
)

percentiles = 100 * numpy.array([alpha * 0.5, 1 - alpha * 0.5])
yhat_lo, yhat_hi = numpy.percentile(yhat_array, percentiles, axis=0)
return yhat_lo, yhat_hi
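The percentile-method bootstrap implemented by `_bs_fit` can be illustrated with a standalone sketch using plain `numpy.polyfit`/`numpy.polyval` on synthetic data (the dataset, seed, and iteration count below are illustrative):

```python
import numpy

rng = numpy.random.default_rng(0)
x = numpy.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
xhat = numpy.linspace(0, 10, 101)  # common grid for evaluating each fit

niter, alpha = 2000, 0.05
index = rng.integers(low=0, high=x.size, size=(niter, x.size))

# Refit the line on each resample and evaluate it on the common x-grid.
yhat_array = numpy.array(
    [numpy.polyval(numpy.polyfit(x[ii], y[ii], 1), xhat) for ii in index]
)

# Percentile method: the CI bounds are the alpha/2 and 1 - alpha/2
# percentiles of the bootstrapped estimates at each x position.
percentiles = 100 * numpy.array([alpha * 0.5, 1 - alpha * 0.5])
yhat_lo, yhat_hi = numpy.percentile(yhat_array, percentiles, axis=0)
```

`yhat_lo` and `yhat_hi` bracket the best-fit line pointwise, which is exactly what `probplot(..., estimate_ci=True)` shades in the fit plots.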


def _estimate_from_fit(xhat, slope, intercept, xlog=False, ylog=False):
"""Estimate the dependent variables of a linear fit given x-data
and linear parameters.
Parameters
----------
xhat : numpy array or pandas Series/DataFrame
The input independent variable of the fit
slope : float
Slope of the best-fit line
intercept : float
y-intercept of the best-fit line
xlog, ylog : bool (default = False)
Toggles whether the log of the x- or y-data was taken when the fit
was performed, so that the estimate is back-transformed accordingly.

Returns
-------
yhat : numpy array
Estimate of the dependent variable.
"""

xhat = numpy.asarray(xhat)
if ylog:
if xlog:
yhat = numpy.exp(intercept) * xhat**slope
else:
yhat = numpy.exp(intercept) * numpy.exp(slope) ** xhat

else:
if xlog:
yhat = slope * numpy.log(xhat) + intercept

else:
yhat = slope * xhat + intercept

return yhat
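For the log-transformed branches, the back-transformations follow from exponentiating the linear model: a fit of log(y) = slope * log(x) + intercept implies y = exp(intercept) * x**slope. A quick check of that identity (the slope, intercept, and x-values here are arbitrary):

```python
import numpy

slope, intercept = 2.0, 3.5
xhat = numpy.array([1.0, 2.0, 4.0, 8.0])

# closed-form back-transformation for a fit performed on log(x) and log(y)
yhat_direct = numpy.exp(intercept) * xhat**slope

# ...is the same as exponentiating the linear prediction made in log space
yhat_via_logs = numpy.exp(slope * numpy.log(xhat) + intercept)
```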
29 changes: 14 additions & 15 deletions probscale/formatters.py
Original file line number Diff line number Diff line change
@@ -3,7 +3,7 @@


class _FormatterMixin(Formatter):
""" A mpl-axes formatter mixin class """
"""A mpl-axes formatter mixin class"""

@classmethod
def _sig_figs(cls, x, n, expthresh=5, forceint=False):
@@ -15,7 +15,7 @@ def _sig_figs(cls, x, n, expthresh=5, forceint=False):
x : int or float
The number you want to format.
n : int
The number of significan figures it should have.
The number of significant figures it should have.
expthresh : int, optional (default = 5)
The absolute value of the order of magnitude at which numbers
are formatted in exponential notation.
@@ -41,58 +41,57 @@ def _sig_figs(cls, x, n, expthresh=5, forceint=False):
out = cls._sig_figs(float(x), n, expthresh=expthresh, forceint=forceint)

elif x == 0.0:
out = '0'
out = "0"

# check on the number provided
elif x is not None and numpy.isfinite(x):

# check on the _sig_figs
if n < 1:
raise ValueError("number of sig figs (n) must be greater than zero")
raise ValueError("number of sig figs (n) must be greater " "than zero")

elif forceint:
out = '{:,.0f}'.format(x)
out = "{:,.0f}".format(x)

# logic to do all of the rounding
else:
order = numpy.floor(numpy.log10(numpy.abs(x)))

if (-1.0 * expthresh <= order <= expthresh):
if -1.0 * expthresh <= order <= expthresh:
decimal_places = int(n - 1 - order)

if decimal_places <= 0:
out = '{0:,.0f}'.format(round(x, decimal_places))
out = "{0:,.0f}".format(round(x, decimal_places))

else:
fmt = '{0:,.%df}' % decimal_places
fmt = "{0:,.%df}" % decimal_places
out = fmt.format(x)

else:
decimal_places = n - 1
fmt = '{0:.%de}' % decimal_places
fmt = "{0:.%de}" % decimal_places
out = fmt.format(x)

# with NAs and INFs, just return 'NA'
else:
out = 'NA'
out = "NA"

return out

def __call__(self, x, pos=None):
if x < (10 / self.factor):
out = self._sig_figs(x, 1)
elif x <= (99 / self.factor):
out = self._sig_figs(x, 2)
out = self._sig_figs(x, 2)
else:
order = numpy.ceil(numpy.round(numpy.abs(numpy.log10(self.top - x)), 6))
out = self._sig_figs(x, order + self.offset)

return '{}'.format(out)
return "{}".format(out)


class PctFormatter(_FormatterMixin):
"""
Formatter class for MPL axes to display probalities as percentages.
Formatter class for MPL axes to display probabilities as percentages.
Examples
--------
@@ -114,7 +113,7 @@ class PctFormatter(_FormatterMixin):

class ProbFormatter(_FormatterMixin):
"""
Formatter class for MPL axes to display probalities as decimals.
Formatter class for MPL axes to display probabilities as decimals.
Examples
--------
45 changes: 25 additions & 20 deletions probscale/probscale.py
@@ -1,4 +1,5 @@
import numpy
import warnings
from matplotlib.scale import ScaleBase
from matplotlib.ticker import (
FixedLocator,
@@ -13,65 +14,69 @@

class _minimal_norm(object):
"""
A basic implmentation of a normal distribution, minimally
API-complient with scipt.stats.norm
A basic implementation of a normal distribution, minimally
API-compliant with scipy.stats.norm
"""

_A = -(8 * (numpy.pi - 3.0) / (3.0 * numpy.pi * (numpy.pi - 4.0)))

@classmethod
def _approx_erf(cls, x):
""" Approximate solution to the error function
"""Approximate solution to the error function
http://en.wikipedia.org/wiki/Error_function
"""

guts = -x**2 * (4.0 / numpy.pi + cls._A * x**2) / (1.0 + cls._A * x**2)
return numpy.sign(x) * numpy.sqrt(1.0 - numpy.exp(guts))
guts = -(x**2) * (4.0 / numpy.pi + cls._A * x**2) / (1.0 + cls._A * x**2)
with warnings.catch_warnings():
warnings.filterwarnings("ignore", "invalid value encountered in sign")
return numpy.sign(x) * numpy.sqrt(1.0 - numpy.exp(guts))

@classmethod
def _approx_inv_erf(cls, z):
""" Approximate solution to the inverse error function
"""Approximate solution to the inverse error function
http://en.wikipedia.org/wiki/Error_function
"""

_b = (2 / numpy.pi / cls._A) + (0.5 * numpy.log(1 - z**2))
_c = numpy.log(1 - z**2) / cls._A
return numpy.sign(z) * numpy.sqrt(numpy.sqrt(_b**2 - _c) - _b)
with warnings.catch_warnings():
warnings.filterwarnings("ignore", "invalid value encountered in sign")
return numpy.sign(z) * numpy.sqrt(numpy.sqrt(_b**2 - _c) - _b)

@classmethod
def ppf(cls, q):
""" Percent point function (inverse of cdf)
"""Percent point function (inverse of cdf)
Wikipedia: https://goo.gl/Rtxjme
"""
return numpy.sqrt(2) * cls._approx_inv_erf(2*q - 1)
return numpy.sqrt(2) * cls._approx_inv_erf(2 * q - 1)

@classmethod
def cdf(cls, x):
""" Cumulative density function
"""Cumulative density function
Wikipedia: https://goo.gl/ciUNLx
"""
return 0.5 * (1 + cls._approx_erf(x/numpy.sqrt(2)))
return 0.5 * (1 + cls._approx_erf(x / numpy.sqrt(2)))
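The ``_approx_erf`` formula above is Winitzki's approximation to the error function, accurate to a few parts in 1e4 — which is why scipy is not required for tick placement. A self-contained sketch, compared against the stdlib's exact ``math.erf``:

```python
import math

# Winitzki's approximation to erf, mirroring the constant used by _minimal_norm
A = -(8 * (math.pi - 3.0) / (3.0 * math.pi * (math.pi - 4.0)))

def approx_erf(x):
    guts = -x**2 * (4.0 / math.pi + A * x**2) / (1.0 + A * x**2)
    # copysign reproduces the numpy.sign(x) * sqrt(...) structure for scalars
    return math.copysign(math.sqrt(1.0 - math.exp(guts)), x)

# worst-case error over a few sample points vs. the exact math.erf
err = max(abs(approx_erf(v) - math.erf(v)) for v in (0.1, 0.5, 1.0, 2.0, 3.0))
```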


class ProbScale(ScaleBase):
""" A probability scale for matplotlib Axes.
"""A probability scale for matplotlib Axes.
Parameters
----------
axis : a matplotlib axis artist
The axis whose scale will be set.
dist : scipy.stats probability distribution, optional
The distribution whose ppf/cdf methods should be used to compute
the tick positions. By default, a minimal implimentation of the
the tick positions. By default, a minimal implementation of the
``scipy.stats.norm`` class is used so that scipy is not a
requirement.
@@ -90,17 +95,17 @@ class ProbScale(ScaleBase):
"""

name = 'prob'
name = "prob"

def __init__(self, axis, **kwargs):
self.dist = kwargs.pop('dist', _minimal_norm)
self.as_pct = kwargs.pop('as_pct', True)
self.nonpos = kwargs.pop('nonpos', 'mask')
self.dist = kwargs.pop("dist", _minimal_norm)
self.as_pct = kwargs.pop("as_pct", True)
self.nonpos = kwargs.pop("nonpos", "mask")
self._transform = ProbTransform(self.dist, as_pct=self.as_pct)

@classmethod
def _get_probs(cls, nobs, as_pct):
""" Returns the x-axis labels for a probability plot based on
"""Returns the x-axis labels for a probability plot based on
the number of observations (`nobs`).
"""
if as_pct:
@@ -120,8 +125,8 @@ def _get_probs(cls, nobs, as_pct):
lower_fringe = numpy.array([1])
upper_fringe = numpy.array([9])

new_lower = lower_fringe / 10**(n)
new_upper = upper_fringe / 10**(n) + axis_probs.max()
new_lower = lower_fringe / 10 ** (n)
new_upper = upper_fringe / 10 ** (n) + axis_probs.max()
axis_probs = numpy.hstack([new_lower, axis_probs, new_upper])

locs = axis_probs / factor
10 changes: 0 additions & 10 deletions probscale/tests/__init__.py
@@ -1,10 +0,0 @@
from pkg_resources import resource_filename

import pytest

import probscale

def test(*args):
options = [resource_filename('probscale', 'tests')]
options.extend(list(args))
return pytest.main(options)
Binary file modified probscale/tests/baseline_images/test_viz/test_probplot_pp.png
Binary file modified probscale/tests/baseline_images/test_viz/test_probplot_prob.png
Binary file modified probscale/tests/baseline_images/test_viz/test_probplot_qq.png
14 changes: 14 additions & 0 deletions probscale/tests/helpers.py
@@ -0,0 +1,14 @@
from functools import wraps

import numpy


def seed(func):
"""Decorator to seed the RNG before any function."""

@wraps(func)
def wrapper(*args, **kwargs):
numpy.random.seed(0)
return func(*args, **kwargs)

return wrapper
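Usage of the ``seed`` helper: because the decorator reseeds NumPy's global RNG before every call, a decorated function produces identical random draws on every invocation.

```python
from functools import wraps

import numpy

def seed(func):
    """Decorator to seed the RNG before any function (mirrors helpers.py)."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        numpy.random.seed(0)
        return func(*args, **kwargs)
    return wrapper

@seed
def draw():
    # any random draw inside a decorated function is reproducible
    return numpy.random.randint(0, 100, size=3)

first = draw()
second = draw()  # identical to `first` because the seed resets each call
```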
252 changes: 252 additions & 0 deletions probscale/tests/test_algo.py
@@ -0,0 +1,252 @@
import numpy

import pytest
import numpy.testing as nptest

from probscale import algo


def test__make_boot_index():
result = algo._make_boot_index(5, 5000)
assert result.shape == (5000, 5)
assert result.min() == 0
assert result.max() == 4


@pytest.fixture
def plot_data():
data = numpy.array(
[
3.113,
3.606,
4.046,
4.046,
4.710,
6.140,
6.978,
2.000,
4.200,
4.620,
5.570,
5.660,
5.860,
6.650,
6.780,
6.790,
7.500,
7.500,
7.500,
8.630,
8.710,
8.990,
9.850,
10.820,
11.250,
11.250,
12.200,
14.920,
16.770,
17.810,
19.160,
19.190,
19.640,
20.180,
22.970,
]
)
return data


@pytest.mark.parametrize(
("fitlogs", "known_yhat"),
[
(None, numpy.array([0.7887, 3.8946, 7.0005, 10.1065, 13.2124, 16.3183])),
("x", numpy.array([0.2711, 1.2784, 1.5988, 1.7953, 1.9373, 2.0487])),
(
"y",
numpy.array(
[2.2006e00, 4.9139e01, 1.0972e03, 2.4501e04, 5.4711e05, 1.2217e07]
),
),
("both", numpy.array([1.3114, 3.5908, 4.9472, 6.0211, 6.9402, 7.7577])),
],
)
def test__fit_simple(plot_data, fitlogs, known_yhat):
x = numpy.arange(1, len(plot_data) + 1)
known_results = {"slope": 0.5177, "intercept": 0.2711}
xhat = x[::6]
yhat, results = algo._fit_simple(x, plot_data, xhat, fitlogs=fitlogs)
assert abs(results["intercept"] - known_results["intercept"]) < 0.0001
assert abs(results["slope"] - known_results["slope"]) < 0.0001
nptest.assert_allclose(yhat, known_yhat, rtol=0.0001)


@pytest.mark.parametrize(
("fitlogs", "known_lo", "known_hi"),
[
(
None,
numpy.array([-0.7944, 2.7051, 6.1974, 9.2612, 11.9382, 14.4290]),
numpy.array([2.1447, 4.8360, 7.7140, 10.8646, 14.1014, 17.4432]),
),
(
"x",
numpy.array([-1.4098, -0.2210, 0.1387, 0.3585, 0.5147, 0.6417]),
numpy.array([1.7067, 2.5661, 2.8468, 3.0169, 3.1400, 3.2341]),
),
(
"y",
numpy.array(
[4.5187e-01, 1.4956e01, 4.9145e02, 1.0522e04, 1.5299e05, 1.8468e06]
),
numpy.array(
[8.5396e00, 1.2596e02, 2.2396e03, 5.2290e04, 1.3310e06, 3.7627e07]
),
),
(
"both",
numpy.array([0.2442, 0.8017, 1.1488, 1.4312, 1.6731, 1.8997]),
numpy.array([5.5107, 13.0148, 17.232, 20.4285, 23.1035, 25.3843]),
),
],
)
def test__bs_fit(plot_data, fitlogs, known_lo, known_hi):
numpy.random.seed(0)
x = numpy.arange(1, len(plot_data) + 1)
xhat = x[::6]
yhat_lo, yhat_hi = algo._bs_fit(x, plot_data, xhat, fitlogs=fitlogs, niter=1000)

nptest.assert_allclose(yhat_lo, known_lo, rtol=0.001)
nptest.assert_allclose(yhat_hi, known_hi, rtol=0.001)


class Test__estimate_from_fit(object):
def setup_method(self):
self.x = numpy.arange(1, 11, 0.5)
self.slope = 2
self.intercept = 3.5

self.known_ylinlin = numpy.array(
[
5.5,
6.5,
7.5,
8.5,
9.5,
10.5,
11.5,
12.5,
13.5,
14.5,
15.5,
16.5,
17.5,
18.5,
19.5,
20.5,
21.5,
22.5,
23.5,
24.5,
]
)

self.known_yloglin = numpy.array(
[
3.50000000,
4.31093022,
4.88629436,
5.33258146,
5.69722458,
6.00552594,
6.27258872,
6.50815479,
6.71887582,
6.90949618,
7.08351894,
7.24360435,
7.39182030,
7.52980604,
7.65888308,
7.78013233,
7.89444915,
8.00258360,
8.10517019,
8.20275051,
]
)

self.known_yloglog = numpy.array(
[
33.11545196,
74.50976691,
132.46180783,
206.97157474,
298.03906763,
405.66428649,
529.84723134,
670.58790216,
827.88629897,
1001.74242175,
1192.15627051,
1399.12784525,
1622.65714598,
1862.74417268,
2119.38892536,
2392.59140402,
2682.35160865,
2988.66953927,
3311.54519587,
3650.97857845,
]
)

self.known_ylinlog = numpy.array(
[
2.44691932e02,
6.65141633e02,
1.80804241e03,
4.91476884e03,
1.33597268e04,
3.63155027e04,
9.87157710e04,
2.68337287e05,
7.29416370e05,
1.98275926e06,
5.38969848e06,
1.46507194e07,
3.98247844e07,
1.08254988e08,
2.94267566e08,
7.99902177e08,
2.17435955e09,
5.91052206e09,
1.60664647e10,
4.36731791e10,
]
)

def test_linlin(self):
ylinlin = algo._estimate_from_fit(
self.x, self.slope, self.intercept, xlog=False, ylog=False
)
nptest.assert_array_almost_equal(ylinlin, self.known_ylinlin)

def test_loglin(self):
yloglin = algo._estimate_from_fit(
self.x, self.slope, self.intercept, xlog=True, ylog=False
)
nptest.assert_array_almost_equal(yloglin, self.known_yloglin)

def test_loglog(self):
yloglog = algo._estimate_from_fit(
self.x, self.slope, self.intercept, xlog=True, ylog=True
)
nptest.assert_array_almost_equal(yloglog, self.known_yloglog)

def test_linlog(self):
ylinlog = algo._estimate_from_fit(
self.x, self.slope, self.intercept, xlog=False, ylog=True
)
diff = numpy.abs(ylinlog - self.known_ylinlog) / self.known_ylinlog
nptest.assert_array_almost_equal(diff, numpy.zeros(self.x.shape[0]), decimal=5)
83 changes: 51 additions & 32 deletions probscale/tests/test_formatters.py
@@ -1,7 +1,6 @@
import numpy

import pytest
import numpy.testing as nptest

from probscale import formatters

@@ -11,52 +10,72 @@ def test_base_class_of_formatter(fmtr):
assert issubclass(fmtr, formatters._FormatterMixin)


@pytest.mark.parametrize(('pct', 'expected'), [
(0.0301, '0.03'), (0.20, '0.2'), (0.100, '0.1'),
(10.000, '10'), (5.00, '5'), (50.00, '50'),
(99.000, '99'), (99.1, '99.1'), (99.99, '99.99'),
])
@pytest.mark.parametrize(
("pct", "expected"),
[
(0.0301, "0.03"),
(0.20, "0.2"),
(0.100, "0.1"),
(10.000, "10"),
(5.00, "5"),
(50.00, "50"),
(99.000, "99"),
(99.1, "99.1"),
(99.99, "99.99"),
],
)
def test__call___PctFormatter(pct, expected):
fmt = formatters.PctFormatter()
assert fmt(pct) == expected


@pytest.mark.parametrize(('prob', 'expected'), [
(0.000301, '0.0003'), (0.001000, '0.001'), (0.100000, '0.10'),
(0.050000, '0.05'), (0.500000, '0.50'), (0.990000, '0.99'),
(0.991000, '0.991'), (0.999900, '0.9999'),
])
@pytest.mark.parametrize(
("prob", "expected"),
[
(0.000301, "0.0003"),
(0.001000, "0.001"),
(0.100000, "0.10"),
(0.050000, "0.05"),
(0.500000, "0.50"),
(0.990000, "0.99"),
(0.991000, "0.991"),
(0.999900, "0.9999"),
],
)
def test__call___ProbFormmater(prob, expected):
fmt = formatters.ProbFormatter()
assert fmt(prob) == expected


@pytest.mark.parametrize(('value', 'N', 'expected', 'forceint'), [
(1234.56, 3, '1,230', False),
(1234.56, 4, '1,235', False),
('1.23', 3, '1.23', False),
(numpy.nan, 3, 'NA', False),
(numpy.inf, 3, 'NA', False),
(0, 3, '0', False),
(1234.56, 8, '1,234.5600', False),
(1.23456e8, 3, '1.23e+08', False),
(1234.56, 3, '1,235', True),
(0.123456, 3, '0.123', False),
(0.123456, 4, '0.1235', False),
('0.123456', 3, '0.123', False),
(numpy.nan, 3, 'NA', False),
(numpy.inf, 3, 'NA', False),
(0, 3, '0', False),
(0.123456, 8, '0.12345600', False),
(1.23456e-7, 3, '1.23e-07', False),
(0.123456, 3, '0', True),
])
@pytest.mark.parametrize(
("value", "N", "expected", "forceint"),
[
(1234.56, 3, "1,230", False),
(1234.56, 4, "1,235", False),
("1.23", 3, "1.23", False),
(numpy.nan, 3, "NA", False),
(numpy.inf, 3, "NA", False),
(0, 3, "0", False),
(1234.56, 8, "1,234.5600", False),
(1.23456e8, 3, "1.23e+08", False),
(1234.56, 3, "1,235", True),
(0.123456, 3, "0.123", False),
(0.123456, 4, "0.1235", False),
("0.123456", 3, "0.123", False),
(numpy.nan, 3, "NA", False),
(numpy.inf, 3, "NA", False),
(0, 3, "0", False),
(0.123456, 8, "0.12345600", False),
(1.23456e-7, 3, "1.23e-07", False),
(0.123456, 3, "0", True),
],
)
def test__sig_figs(value, N, expected, forceint):
fmt = formatters._FormatterMixin()
assert fmt._sig_figs(value, N, forceint=forceint) == expected


@pytest.mark.parametrize('N', [-1, 0, 0.5])
@pytest.mark.parametrize("N", [-1, 0, 0.5])
def test__sig_figs_errors(N):
fmt = formatters._FormatterMixin()
with pytest.raises(ValueError):
125 changes: 90 additions & 35 deletions probscale/tests/test_probscale.py
@@ -1,11 +1,12 @@
import sys
import os
import warnings

import numpy
import matplotlib.pyplot as plt

try:
from scipy import stats
except: # pragma: no cover
except ImportError: # pragma: no cover
stats = None

import pytest
@@ -14,8 +15,9 @@
from probscale.probscale import _minimal_norm


PYTHON27 = sys.version_info.major == 2
TOLERANCE = 22
# special tolerance for GitHub Actions CI
TOLERANCE = int(os.environ.get("MPL_IMGCOMP_TOLERANCE", 15))
BASELINE_DIR = "baseline_images/test_probscale"


@pytest.fixture
@@ -25,10 +27,22 @@ def mn():

@pytest.fixture
def mn_input():
x = numpy.array([
0.331, 0.742, 0.067, 0.826, 0.357, 0.089,
0.754, 0.342, 0.762, 0.658, 0.239, 0.910,
])
x = numpy.array(
[
0.331,
0.742,
0.067,
0.826,
0.357,
0.089,
0.754,
0.342,
0.762,
0.658,
0.239,
0.910,
]
)
return x


@@ -38,11 +52,22 @@ def test_minimal_norm_A(mn):


def test_minimal_norm__approx_erf(mn, mn_input):
known_erf = numpy.array([
0.36029027, 0.70598131, 0.07548843, 0.75724986,
0.38635283, 0.10016122, 0.71371964, 0.37137355,
0.71880142, 0.64791492, 0.26463458, 0.80188283,
])
known_erf = numpy.array(
[
0.36029027,
0.70598131,
0.07548843,
0.75724986,
0.38635283,
0.10016122,
0.71371964,
0.37137355,
0.71880142,
0.64791492,
0.26463458,
0.80188283,
]
)

diff = mn._approx_erf(mn_input) - known_erf
assert numpy.all(numpy.abs(diff) < 0.001)
@@ -54,57 +79,87 @@ def test_minimal_norm__approx_inv_erf(mn, mn_input):


def test_minimal_norm_ppf(mn, mn_input):
known_ppf = numpy.array([
-0.43715354, 0.6495236 , -1.49851307, 0.93847570,
-0.36648929, -1.34693863, 0.68713129, -0.40701088,
0.71275076, 0.40701088, -0.70952297, 1.34075503,
])
known_ppf = numpy.array(
[
-0.43715354,
0.64952360,
-1.49851307,
0.93847570,
-0.36648929,
-1.34693863,
0.68713129,
-0.40701088,
0.71275076,
0.40701088,
-0.70952297,
1.34075503,
]
)
diff = mn.ppf(mn_input) - known_ppf
assert numpy.all(numpy.abs(diff) < 0.001)


def test_minimal_norm_cdf(mn, mn_input):
known_cdf = numpy.array([
0.62967776, 0.77095633, 0.52670915, 0.79559795,
0.63945410, 0.53545904, 0.77457539, 0.63382455,
0.77697000, 0.74473093, 0.59444721, 0.81858875
])
known_cdf = numpy.array(
[
0.62967776,
0.77095633,
0.52670915,
0.79559795,
0.63945410,
0.53545904,
0.77457539,
0.63382455,
0.77697000,
0.74473093,
0.59444721,
0.81858875,
]
)
diff = mn.cdf(mn_input) - known_cdf
assert numpy.all(numpy.abs(diff) < 0.001)


def test_sign_with_nan_no_warning(mn):
with warnings.catch_warnings():
warnings.simplefilter("error")
res = mn._approx_erf(numpy.nan)
assert numpy.isnan(res)


def test_sign_with_nan_no_warning_inv(mn):
with warnings.catch_warnings():
warnings.simplefilter("error")
res = mn._approx_inv_erf(numpy.nan)
assert numpy.isnan(res)


@pytest.mark.mpl_image_compare(
baseline_dir='baseline_images/test_probscale',
tolerance=TOLERANCE
baseline_dir=BASELINE_DIR, tolerance=TOLERANCE, remove_text=True
)
@pytest.mark.skipif(PYTHON27, reason="legacy python")
def test_the_scale_default():
fig, ax = plt.subplots(figsize=(4, 8))
ax.set_yscale('prob')
ax.set_yscale("prob")
ax.set_ylim(0.01, 99.99)
fig.tight_layout()
return fig


@pytest.mark.mpl_image_compare(
baseline_dir='baseline_images/test_probscale',
tolerance=TOLERANCE
)
@pytest.mark.mpl_image_compare(baseline_dir=BASELINE_DIR, tolerance=TOLERANCE)
def test_the_scale_not_as_pct():
fig, ax = plt.subplots(figsize=(4, 8))
ax.set_yscale('prob', as_pct=False)
ax.set_yscale("prob", as_pct=False)
ax.set_ylim(0.02, 0.98)
return fig


@pytest.mark.mpl_image_compare(
baseline_dir='baseline_images/test_probscale',
tolerance=13
baseline_dir=BASELINE_DIR, tolerance=TOLERANCE, remove_text=True
)
@pytest.mark.skipif(stats is None, reason="scipy not installed")
def test_the_scale_beta():
fig, ax = plt.subplots(figsize=(4, 8))
ax.set_yscale('prob', as_pct=True, dist=stats.beta(3, 2))
ax.set_yscale("prob", as_pct=True, dist=stats.beta(3, 2))
ax.set_ylim(1, 99)
fig.tight_layout()
return fig
101 changes: 66 additions & 35 deletions probscale/tests/test_transforms.py
@@ -22,56 +22,80 @@ def test__clip_out_of_bounds():
assert numpy.all(diff < 0.0001)



@pytest.fixture
def prob_trans():
cls = transforms.ProbTransform
return cls(_minimal_norm)


@pytest.fixture
def quant_trans():
cls = transforms.QuantileTransform
return cls(_minimal_norm)


@pytest.mark.parametrize('trans', [prob_trans(), quant_trans()])
@pytest.mark.parametrize(
"trans",
[
transforms.ProbTransform(_minimal_norm),
transforms.QuantileTransform(_minimal_norm),
],
)
def test_transform_input_dims(trans):
assert trans.input_dims == 1


@pytest.mark.parametrize('trans', [prob_trans(), quant_trans()])
@pytest.mark.parametrize(
"trans",
[
transforms.ProbTransform(_minimal_norm),
transforms.QuantileTransform(_minimal_norm),
],
)
def test_transform_output_dims(trans):
assert trans.output_dims == 1


@pytest.mark.parametrize('trans', [prob_trans(), quant_trans()])
@pytest.mark.parametrize(
"trans",
[
transforms.ProbTransform(_minimal_norm),
transforms.QuantileTransform(_minimal_norm),
],
)
def test_transform_is_separable(trans):
assert trans.is_separable


@pytest.mark.parametrize('trans', [prob_trans(), quant_trans()])
@pytest.mark.parametrize(
"trans",
[
transforms.ProbTransform(_minimal_norm),
transforms.QuantileTransform(_minimal_norm),
],
)
def test_transform_has_inverse(trans):
assert trans.has_inverse


@pytest.mark.parametrize('trans', [prob_trans(), quant_trans()])
@pytest.mark.parametrize(
"trans",
[
transforms.ProbTransform(_minimal_norm),
transforms.QuantileTransform(_minimal_norm),
],
)
def test_transform_dist(trans):
assert trans.dist == _minimal_norm


@pytest.mark.parametrize(('trans', 'known_trans_na'), [
(prob_trans(), -2.569150498), (quant_trans(), 69.1464492)
])
@pytest.mark.parametrize(
("trans", "known_trans_na"),
[
(transforms.ProbTransform(_minimal_norm), -2.569150498),
(transforms.QuantileTransform(_minimal_norm), 69.1464492),
],
)
def test_transform_non_affine(trans, known_trans_na):
diff = numpy.abs(trans.transform_non_affine([0.5]) - known_trans_na)
assert numpy.all(diff < 0.0001)


@pytest.mark.parametrize(('trans', 'inver_cls'), [
(prob_trans(), transforms.QuantileTransform),
(quant_trans(), transforms.ProbTransform),
])
@pytest.mark.parametrize(
("trans", "inver_cls"),
[
(transforms.ProbTransform(_minimal_norm), transforms.QuantileTransform),
(transforms.QuantileTransform(_minimal_norm), transforms.ProbTransform),
],
)
def test_transform_inverted(trans, inver_cls):
t_inv = trans.inverted()
assert isinstance(t_inv, inver_cls)
@@ -80,18 +104,25 @@ def test_transform_inverted(trans, inver_cls):
assert trans.out_of_bounds == t_inv.out_of_bounds


@pytest.mark.parametrize('cls', [transforms.ProbTransform, transforms.QuantileTransform])
@pytest.mark.parametrize(
"cls", [transforms.ProbTransform, transforms.QuantileTransform]
)
def test_bad_out_of_bounds(cls):
with pytest.raises(ValueError):
cls(_minimal_norm, out_of_bounds='junk')


@pytest.mark.parametrize('cls', [transforms.ProbTransform, transforms.QuantileTransform])
@pytest.mark.parametrize(('method', 'func'), [
('clip', transforms._clip_out_of_bounds),
('mask', transforms._mask_out_of_bounds),
('junk', None),
])
cls(_minimal_norm, out_of_bounds="junk")


@pytest.mark.parametrize(
"cls", [transforms.ProbTransform, transforms.QuantileTransform]
)
@pytest.mark.parametrize(
("method", "func"),
[
("clip", transforms._clip_out_of_bounds),
("mask", transforms._mask_out_of_bounds),
("junk", None),
],
)
def test_out_of_bounds(cls, method, func):
if func is None:
with pytest.raises(ValueError):
86 changes: 54 additions & 32 deletions probscale/tests/test_validate.py
@@ -3,10 +3,12 @@
import pytest

from probscale import validate
from probscale import algo


def test_axes_object_invalid():
with pytest.raises(ValueError):
validate.axes_object('junk')
validate.axes_object("junk")


def test_axes_object_with_ax():
@@ -24,47 +26,50 @@ def test_axes_object_with_None():
assert isinstance(fig1, pyplot.Figure)


@pytest.mark.parametrize(('which', 'kwarg'), [
('x', 'fitprobs'),
('y', 'fitprobs'),
('y', 'fitlogs'),
('both', 'fitprobs'),
('both', 'fitlogs'),
(None, 'fitprobs'),
(None, 'fitlogs'),
])
@pytest.mark.parametrize(
("which", "kwarg"),
[
("x", "fitprobs"),
("y", "fitprobs"),
("y", "fitlogs"),
("both", "fitprobs"),
("both", "fitlogs"),
(None, "fitprobs"),
(None, "fitlogs"),
],
)
def test_fit_arguments_valid(which, kwarg):
result = validate.fit_argument(which, kwarg)
assert result == which


@pytest.mark.parametrize(('kwarg',), [
('fitprobs',),
('fitlogs',),
])
@pytest.mark.parametrize("kwarg", ["fitprobs", "fitlogs"])
def test_fit_arguments_invalid(kwarg):
with pytest.raises(ValueError):
validate.fit_argument('junk', kwarg)
validate.fit_argument("junk", kwarg)


@pytest.mark.parametrize(('value', 'error'), [
('x', None), ('y', None), ('junk', ValueError)
])
@pytest.mark.parametrize(
("value", "error"), [("x", None), ("y", None), ("junk", ValueError)]
)
def test_axis_name(value, error):
if error is not None:
with pytest.raises(error):
validate.axis_name(value, 'axname')
validate.axis_name(value, "axname")

else:
assert value == validate.axis_name(value, 'axname')


@pytest.mark.parametrize(('value', 'expected', 'error'), [
('PP', 'pp', None),
('Qq', 'qq', None),
('ProB', 'prob', None),
('junk', None, ValueError)
])
assert value == validate.axis_name(value, "axname")


@pytest.mark.parametrize(
("value", "expected", "error"),
[
("PP", "pp", None),
("Qq", "qq", None),
("ProB", "prob", None),
("junk", None, ValueError),
],
)
def test_axis_type(value, expected, error):
if error is not None:
with pytest.raises(error):
@@ -74,14 +79,31 @@ def test_axis_type(value, expected, error):
assert expected == validate.axis_type(value)


@pytest.mark.parametrize(('value', 'expected'), [
(None, dict()), (dict(a=1, b='test'), dict(a=1, b='test'))
])
@pytest.mark.parametrize(
("value", "expected"), [(None, dict()), (dict(a=1, b="test"), dict(a=1, b="test"))]
)
def test_other_options(value, expected):
assert validate.other_options(value) == expected


@pytest.mark.parametrize(('value', 'expected'), [(None, ''), ('test', 'test')])
@pytest.mark.parametrize(("value", "expected"), [(None, ""), ("test", "test")])
def test_axis_label(value, expected):
result = validate.axis_label(value)
assert result == expected


@pytest.mark.parametrize(
("value", "expected", "error"),
[
("fit", algo._bs_fit, None),
("resids", None, NotImplementedError),
("junk", None, ValueError),
],
)
def test_estimator(value, expected, error):
if error is not None:
with pytest.raises(error):
validate.estimator(value)
else:
est = validate.estimator(value)
assert est is expected