
Commit 1c0e031

DEPS: Unpin docutils (pandas-dev#58413)
* DEPS: Unpin docutils
* Address errors
* Add backtick
1 parent 0962dcc commit 1c0e031


8 files changed, +136 -157 lines changed


doc/source/user_guide/basics.rst (+3 -4)

@@ -160,11 +160,10 @@ Here is a sample (using 100 column x 100,000 row ``DataFrames``):
 .. csv-table::
     :header: "Operation", "0.11.0 (ms)", "Prior Version (ms)", "Ratio to Prior"
     :widths: 25, 25, 25, 25
-    :delim: ;

-    ``df1 > df2``; 13.32; 125.35; 0.1063
-    ``df1 * df2``; 21.71; 36.63; 0.5928
-    ``df1 + df2``; 22.04; 36.50; 0.6039
+    ``df1 > df2``, 13.32, 125.35, 0.1063
+    ``df1 * df2``, 21.71, 36.63, 0.5928
+    ``df1 + df2``, 22.04, 36.50, 0.6039

 You are highly encouraged to install both libraries. See the section
 :ref:`Recommended Dependencies <install.recommended_dependencies>` for more installation info.
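For orientation, this is roughly the setup the timings above describe: two 100,000-row by 100-column frames combined element-wise. A minimal sketch (the random data is an assumption):

```python
import numpy as np
import pandas as pd

# Shapes taken from the surrounding text: 100 columns x 100,000 rows.
df1 = pd.DataFrame(np.random.randn(100_000, 100))
df2 = pd.DataFrame(np.random.randn(100_000, 100))

# The element-wise operations benchmarked in the table; with numexpr
# installed, pandas accelerates these automatically.
gt = df1 > df2
prod = df1 * df2
total = df1 + df2
```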

doc/source/user_guide/gotchas.rst (+2 -13)

@@ -315,19 +315,8 @@ Why not make NumPy like R?

 Many people have suggested that NumPy should simply emulate the ``NA`` support
 present in the more domain-specific statistical programming language `R
-<https://www.r-project.org/>`__. Part of the reason is the NumPy type hierarchy:
-
-.. csv-table::
-    :header: "Typeclass","Dtypes"
-    :widths: 30,70
-    :delim: |
-
-    ``numpy.floating`` | ``float16, float32, float64, float128``
-    ``numpy.integer`` | ``int8, int16, int32, int64``
-    ``numpy.unsignedinteger`` | ``uint8, uint16, uint32, uint64``
-    ``numpy.object_`` | ``object_``
-    ``numpy.bool_`` | ``bool_``
-    ``numpy.character`` | ``bytes_, str_``
+<https://www.r-project.org/>`__. Part of the reason is the
+`NumPy type hierarchy <https://numpy.org/doc/stable/user/basics.types.html>`__.

 The R language, by contrast, only has a handful of built-in data types:
 ``integer``, ``numeric`` (floating-point), ``character``, and
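For background, the type hierarchy referenced above is what makes R-style ``NA`` hard to bolt onto NumPy: ``NaN`` is a floating-point value, so marking an entry as missing in an integer array forces an upcast. A minimal sketch:

```python
import numpy as np

# Concrete dtypes sit under abstract scalar types in the hierarchy.
print(np.issubdtype(np.float64, np.floating))  # True
print(np.issubdtype(np.int64, np.integer))     # True

# Introducing NaN into integer data upcasts the whole array to float64.
arr = np.array([1, 2, 3], dtype=np.int64)
with_missing = np.where([False, True, False], np.nan, arr)
print(with_missing.dtype)  # float64
```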

doc/source/user_guide/groupby.rst (+37 -40)

@@ -506,29 +506,28 @@ listed below, those with a ``*`` do *not* have an efficient, GroupBy-specific, i
 .. csv-table::
     :header: "Method", "Description"
     :widths: 20, 80
-    :delim: ;
-
-    :meth:`~.DataFrameGroupBy.any`;Compute whether any of the values in the groups are truthy
-    :meth:`~.DataFrameGroupBy.all`;Compute whether all of the values in the groups are truthy
-    :meth:`~.DataFrameGroupBy.count`;Compute the number of non-NA values in the groups
-    :meth:`~.DataFrameGroupBy.cov` * ;Compute the covariance of the groups
-    :meth:`~.DataFrameGroupBy.first`;Compute the first occurring value in each group
-    :meth:`~.DataFrameGroupBy.idxmax`;Compute the index of the maximum value in each group
-    :meth:`~.DataFrameGroupBy.idxmin`;Compute the index of the minimum value in each group
-    :meth:`~.DataFrameGroupBy.last`;Compute the last occurring value in each group
-    :meth:`~.DataFrameGroupBy.max`;Compute the maximum value in each group
-    :meth:`~.DataFrameGroupBy.mean`;Compute the mean of each group
-    :meth:`~.DataFrameGroupBy.median`;Compute the median of each group
-    :meth:`~.DataFrameGroupBy.min`;Compute the minimum value in each group
-    :meth:`~.DataFrameGroupBy.nunique`;Compute the number of unique values in each group
-    :meth:`~.DataFrameGroupBy.prod`;Compute the product of the values in each group
-    :meth:`~.DataFrameGroupBy.quantile`;Compute a given quantile of the values in each group
-    :meth:`~.DataFrameGroupBy.sem`;Compute the standard error of the mean of the values in each group
-    :meth:`~.DataFrameGroupBy.size`;Compute the number of values in each group
-    :meth:`~.DataFrameGroupBy.skew` *;Compute the skew of the values in each group
-    :meth:`~.DataFrameGroupBy.std`;Compute the standard deviation of the values in each group
-    :meth:`~.DataFrameGroupBy.sum`;Compute the sum of the values in each group
-    :meth:`~.DataFrameGroupBy.var`;Compute the variance of the values in each group
+
+    :meth:`~.DataFrameGroupBy.any`,Compute whether any of the values in the groups are truthy
+    :meth:`~.DataFrameGroupBy.all`,Compute whether all of the values in the groups are truthy
+    :meth:`~.DataFrameGroupBy.count`,Compute the number of non-NA values in the groups
+    :meth:`~.DataFrameGroupBy.cov` * ,Compute the covariance of the groups
+    :meth:`~.DataFrameGroupBy.first`,Compute the first occurring value in each group
+    :meth:`~.DataFrameGroupBy.idxmax`,Compute the index of the maximum value in each group
+    :meth:`~.DataFrameGroupBy.idxmin`,Compute the index of the minimum value in each group
+    :meth:`~.DataFrameGroupBy.last`,Compute the last occurring value in each group
+    :meth:`~.DataFrameGroupBy.max`,Compute the maximum value in each group
+    :meth:`~.DataFrameGroupBy.mean`,Compute the mean of each group
+    :meth:`~.DataFrameGroupBy.median`,Compute the median of each group
+    :meth:`~.DataFrameGroupBy.min`,Compute the minimum value in each group
+    :meth:`~.DataFrameGroupBy.nunique`,Compute the number of unique values in each group
+    :meth:`~.DataFrameGroupBy.prod`,Compute the product of the values in each group
+    :meth:`~.DataFrameGroupBy.quantile`,Compute a given quantile of the values in each group
+    :meth:`~.DataFrameGroupBy.sem`,Compute the standard error of the mean of the values in each group
+    :meth:`~.DataFrameGroupBy.size`,Compute the number of values in each group
+    :meth:`~.DataFrameGroupBy.skew` * ,Compute the skew of the values in each group
+    :meth:`~.DataFrameGroupBy.std`,Compute the standard deviation of the values in each group
+    :meth:`~.DataFrameGroupBy.sum`,Compute the sum of the values in each group
+    :meth:`~.DataFrameGroupBy.var`,Compute the variance of the values in each group

 Some examples:
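To make the table concrete, a minimal sketch using a few of the reductions listed above (the data and column names are made up):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "store": ["A", "A", "B", "B", "B"],
        "sales": [10, 15, 7, 9, 12],
    }
)

grouped = df.groupby("store")["sales"]

print(grouped.sum())     # total sales per store
print(grouped.mean())    # average sale per store
print(grouped.idxmax())  # index label of each store's largest sale
```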

@@ -832,19 +831,18 @@ The following methods on GroupBy act as transformations.
 .. csv-table::
     :header: "Method", "Description"
     :widths: 20, 80
-    :delim: ;
-
-    :meth:`~.DataFrameGroupBy.bfill`;Back fill NA values within each group
-    :meth:`~.DataFrameGroupBy.cumcount`;Compute the cumulative count within each group
-    :meth:`~.DataFrameGroupBy.cummax`;Compute the cumulative max within each group
-    :meth:`~.DataFrameGroupBy.cummin`;Compute the cumulative min within each group
-    :meth:`~.DataFrameGroupBy.cumprod`;Compute the cumulative product within each group
-    :meth:`~.DataFrameGroupBy.cumsum`;Compute the cumulative sum within each group
-    :meth:`~.DataFrameGroupBy.diff`;Compute the difference between adjacent values within each group
-    :meth:`~.DataFrameGroupBy.ffill`;Forward fill NA values within each group
-    :meth:`~.DataFrameGroupBy.pct_change`;Compute the percent change between adjacent values within each group
-    :meth:`~.DataFrameGroupBy.rank`;Compute the rank of each value within each group
-    :meth:`~.DataFrameGroupBy.shift`;Shift values up or down within each group
+
+    :meth:`~.DataFrameGroupBy.bfill`,Back fill NA values within each group
+    :meth:`~.DataFrameGroupBy.cumcount`,Compute the cumulative count within each group
+    :meth:`~.DataFrameGroupBy.cummax`,Compute the cumulative max within each group
+    :meth:`~.DataFrameGroupBy.cummin`,Compute the cumulative min within each group
+    :meth:`~.DataFrameGroupBy.cumprod`,Compute the cumulative product within each group
+    :meth:`~.DataFrameGroupBy.cumsum`,Compute the cumulative sum within each group
+    :meth:`~.DataFrameGroupBy.diff`,Compute the difference between adjacent values within each group
+    :meth:`~.DataFrameGroupBy.ffill`,Forward fill NA values within each group
+    :meth:`~.DataFrameGroupBy.pct_change`,Compute the percent change between adjacent values within each group
+    :meth:`~.DataFrameGroupBy.rank`,Compute the rank of each value within each group
+    :meth:`~.DataFrameGroupBy.shift`,Shift values up or down within each group

 In addition, passing any built-in aggregation method as a string to
 :meth:`~.DataFrameGroupBy.transform` (see the next section) will broadcast the result
@@ -1092,11 +1090,10 @@ efficient, GroupBy-specific, implementation.
 .. csv-table::
     :header: "Method", "Description"
     :widths: 20, 80
-    :delim: ;

-    :meth:`~.DataFrameGroupBy.head`;Select the top row(s) of each group
-    :meth:`~.DataFrameGroupBy.nth`;Select the nth row(s) of each group
-    :meth:`~.DataFrameGroupBy.tail`;Select the bottom row(s) of each group
+    :meth:`~.DataFrameGroupBy.head`,Select the top row(s) of each group
+    :meth:`~.DataFrameGroupBy.nth`,Select the nth row(s) of each group
+    :meth:`~.DataFrameGroupBy.tail`,Select the bottom row(s) of each group

 Users can also use transformations along with Boolean indexing to construct complex
 filtrations within groups. For example, suppose we are given groups of products and

doc/source/user_guide/indexing.rst (+9 -9)

@@ -94,13 +94,14 @@ well). Any of the axes accessors may be the null slice ``:``. Axes left out of
 the specification are assumed to be ``:``, e.g. ``p.loc['a']`` is equivalent to
 ``p.loc['a', :]``.

-.. csv-table::
-    :header: "Object Type", "Indexers"
-    :widths: 30, 50
-    :delim: ;

-    Series; ``s.loc[indexer]``
-    DataFrame; ``df.loc[row_indexer,column_indexer]``
+.. ipython:: python
+
+    ser = pd.Series(range(5), index=list("abcde"))
+    ser.loc[["a", "c", "e"]]
+
+    df = pd.DataFrame(np.arange(25).reshape(5, 5), index=list("abcde"), columns=list("abcde"))
+    df.loc[["a", "c", "e"], ["b", "d"]]

 .. _indexing.basics:

@@ -116,10 +117,9 @@ indexing pandas objects with ``[]``:
 .. csv-table::
     :header: "Object Type", "Selection", "Return Value Type"
     :widths: 30, 30, 60
-    :delim: ;

-    Series; ``series[label]``; scalar value
-    DataFrame; ``frame[colname]``; ``Series`` corresponding to colname
+    Series, ``series[label]``, scalar value
+    DataFrame, ``frame[colname]``, ``Series`` corresponding to colname

 Here we construct a simple time series data set to use for illustrating the
 indexing functionality:
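A brief sketch of the two ``[]`` selection forms from the table, together with the label-based ``.loc`` style shown in the new example above (labels and values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.arange(9).reshape(3, 3), index=list("abc"), columns=["x", "y", "z"]
)

col = df["x"]    # DataFrame[colname] -> Series corresponding to colname
val = col["b"]   # Series[label]      -> scalar value

sub = df.loc[["a", "c"], ["y", "z"]]  # label-based selection with .loc
print(col, val, sub, sep="\n")
```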

doc/source/user_guide/io.rst (+32 -35)

@@ -16,26 +16,25 @@ The pandas I/O API is a set of top level ``reader`` functions accessed like
 .. csv-table::
     :header: "Format Type", "Data Description", "Reader", "Writer"
     :widths: 30, 100, 60, 60
-    :delim: ;
-
-    text;`CSV <https://en.wikipedia.org/wiki/Comma-separated_values>`__;:ref:`read_csv<io.read_csv_table>`;:ref:`to_csv<io.store_in_csv>`
-    text;Fixed-Width Text File;:ref:`read_fwf<io.fwf_reader>`
-    text;`JSON <https://www.json.org/>`__;:ref:`read_json<io.json_reader>`;:ref:`to_json<io.json_writer>`
-    text;`HTML <https://en.wikipedia.org/wiki/HTML>`__;:ref:`read_html<io.read_html>`;:ref:`to_html<io.html>`
-    text;`LaTeX <https://en.wikipedia.org/wiki/LaTeX>`__;;:ref:`Styler.to_latex<io.latex>`
-    text;`XML <https://www.w3.org/standards/xml/core>`__;:ref:`read_xml<io.read_xml>`;:ref:`to_xml<io.xml>`
-    text; Local clipboard;:ref:`read_clipboard<io.clipboard>`;:ref:`to_clipboard<io.clipboard>`
-    binary;`MS Excel <https://en.wikipedia.org/wiki/Microsoft_Excel>`__;:ref:`read_excel<io.excel_reader>`;:ref:`to_excel<io.excel_writer>`
-    binary;`OpenDocument <http://opendocumentformat.org>`__;:ref:`read_excel<io.ods>`;
-    binary;`HDF5 Format <https://support.hdfgroup.org/HDF5/whatishdf5.html>`__;:ref:`read_hdf<io.hdf5>`;:ref:`to_hdf<io.hdf5>`
-    binary;`Feather Format <https://github.com/wesm/feather>`__;:ref:`read_feather<io.feather>`;:ref:`to_feather<io.feather>`
-    binary;`Parquet Format <https://parquet.apache.org/>`__;:ref:`read_parquet<io.parquet>`;:ref:`to_parquet<io.parquet>`
-    binary;`ORC Format <https://orc.apache.org/>`__;:ref:`read_orc<io.orc>`;:ref:`to_orc<io.orc>`
-    binary;`Stata <https://en.wikipedia.org/wiki/Stata>`__;:ref:`read_stata<io.stata_reader>`;:ref:`to_stata<io.stata_writer>`
-    binary;`SAS <https://en.wikipedia.org/wiki/SAS_(software)>`__;:ref:`read_sas<io.sas_reader>`;
-    binary;`SPSS <https://en.wikipedia.org/wiki/SPSS>`__;:ref:`read_spss<io.spss_reader>`;
-    binary;`Python Pickle Format <https://docs.python.org/3/library/pickle.html>`__;:ref:`read_pickle<io.pickle>`;:ref:`to_pickle<io.pickle>`
-    SQL;`SQL <https://en.wikipedia.org/wiki/SQL>`__;:ref:`read_sql<io.sql>`;:ref:`to_sql<io.sql>`
+
+    text,`CSV <https://en.wikipedia.org/wiki/Comma-separated_values>`__, :ref:`read_csv<io.read_csv_table>`, :ref:`to_csv<io.store_in_csv>`
+    text,Fixed-Width Text File, :ref:`read_fwf<io.fwf_reader>` , NA
+    text,`JSON <https://www.json.org/>`__, :ref:`read_json<io.json_reader>`, :ref:`to_json<io.json_writer>`
+    text,`HTML <https://en.wikipedia.org/wiki/HTML>`__, :ref:`read_html<io.read_html>`, :ref:`to_html<io.html>`
+    text,`LaTeX <https://en.wikipedia.org/wiki/LaTeX>`__, :ref:`Styler.to_latex<io.latex>` , NA
+    text,`XML <https://www.w3.org/standards/xml/core>`__, :ref:`read_xml<io.read_xml>`, :ref:`to_xml<io.xml>`
+    text, Local clipboard, :ref:`read_clipboard<io.clipboard>`, :ref:`to_clipboard<io.clipboard>`
+    binary,`MS Excel <https://en.wikipedia.org/wiki/Microsoft_Excel>`__ , :ref:`read_excel<io.excel_reader>`, :ref:`to_excel<io.excel_writer>`
+    binary,`OpenDocument <http://opendocumentformat.org>`__, :ref:`read_excel<io.ods>`, NA
+    binary,`HDF5 Format <https://support.hdfgroup.org/HDF5/whatishdf5.html>`__, :ref:`read_hdf<io.hdf5>`, :ref:`to_hdf<io.hdf5>`
+    binary,`Feather Format <https://github.com/wesm/feather>`__, :ref:`read_feather<io.feather>`, :ref:`to_feather<io.feather>`
+    binary,`Parquet Format <https://parquet.apache.org/>`__, :ref:`read_parquet<io.parquet>`, :ref:`to_parquet<io.parquet>`
+    binary,`ORC Format <https://orc.apache.org/>`__, :ref:`read_orc<io.orc>`, :ref:`to_orc<io.orc>`
+    binary,`Stata <https://en.wikipedia.org/wiki/Stata>`__, :ref:`read_stata<io.stata_reader>`, :ref:`to_stata<io.stata_writer>`
+    binary,`SAS <https://en.wikipedia.org/wiki/SAS_(software)>`__, :ref:`read_sas<io.sas_reader>` , NA
+    binary,`SPSS <https://en.wikipedia.org/wiki/SPSS>`__, :ref:`read_spss<io.spss_reader>` , NA
+    binary,`Python Pickle Format <https://docs.python.org/3/library/pickle.html>`__, :ref:`read_pickle<io.pickle>`, :ref:`to_pickle<io.pickle>`
+    SQL,`SQL <https://en.wikipedia.org/wiki/SQL>`__, :ref:`read_sql<io.sql>`,:ref:`to_sql<io.sql>`

 :ref:`Here <io.perf>` is an informal performance comparison for some of these IO methods.
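For orientation, a minimal reader/writer round trip using two of the formats in the table (file names are placeholders, and the Parquet step assumes an optional engine such as pyarrow is installed):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

df.to_csv("example.csv", index=False)
back_from_csv = pd.read_csv("example.csv")

df.to_parquet("example.parquet")  # needs pyarrow or fastparquet
back_from_parquet = pd.read_parquet("example.parquet")

print(back_from_csv.equals(df), back_from_parquet.equals(df))
```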

@@ -1837,14 +1836,13 @@ with optional parameters:

 .. csv-table::
     :widths: 20, 150
-    :delim: ;

-    ``split``; dict like {index -> [index], columns -> [columns], data -> [values]}
-    ``records``; list like [{column -> value}, ... , {column -> value}]
-    ``index``; dict like {index -> {column -> value}}
-    ``columns``; dict like {column -> {index -> value}}
-    ``values``; just the values array
-    ``table``; adhering to the JSON `Table Schema`_
+    ``split``, dict like {index -> [index]; columns -> [columns]; data -> [values]}
+    ``records``, list like [{column -> value}; ... ]
+    ``index``, dict like {index -> {column -> value}}
+    ``columns``, dict like {column -> {index -> value}}
+    ``values``, just the values array
+    ``table``, adhering to the JSON `Table Schema`_

 * ``date_format`` : string, type of date conversion, 'epoch' for timestamp, 'iso' for ISO8601.
 * ``double_precision`` : The number of decimal places to use when encoding floating point values, default 10.
@@ -2025,14 +2023,13 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``

 .. csv-table::
     :widths: 20, 150
-    :delim: ;
-
-    ``split``; dict like {index -> [index], columns -> [columns], data -> [values]}
-    ``records``; list like [{column -> value}, ... , {column -> value}]
-    ``index``; dict like {index -> {column -> value}}
-    ``columns``; dict like {column -> {index -> value}}
-    ``values``; just the values array
-    ``table``; adhering to the JSON `Table Schema`_
+
+    ``split``, dict like {index -> [index]; columns -> [columns]; data -> [values]}
+    ``records``, list like [{column -> value} ...]
+    ``index``, dict like {index -> {column -> value}}
+    ``columns``, dict like {column -> {index -> value}}
+    ``values``, just the values array
+    ``table``, adhering to the JSON `Table Schema`_


 * ``dtype`` : if True, infer dtypes, if a dict of column to dtype, then use those, if ``False``, then don't infer dtypes at all, default is True, apply only to the data.
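A short sketch of how the ``orient`` layouts above look in practice for ``to_json`` and ``read_json`` (the frame and labels are made up):

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({"col": [1, 2]}, index=["r1", "r2"])

print(df.to_json(orient="split"))    # index, columns and data as separate lists
print(df.to_json(orient="records"))  # one dict per row: [{"col":1},{"col":2}]

# read_json accepts the same orient when parsing the string back.
roundtrip = pd.read_json(StringIO(df.to_json(orient="split")), orient="split")
print(roundtrip.equals(df))
```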
