Merge remote-tracking branch 'upstream/master' into remove-panel
WillAyd committed Jul 1, 2019
2 parents 53abf20 + 46adc5b commit 9ec5917
Showing 107 changed files with 744 additions and 987 deletions.
2 changes: 1 addition & 1 deletion ci/deps/azure-35-compat.yaml
@@ -11,7 +11,7 @@ dependencies:
- openpyxl=2.4.8
- pytables=3.4.2
- python-dateutil=2.6.1
- python=3.5.*
- python=3.5.3
- pytz=2017.2
- scipy=0.19.0
- xlrd=1.1.0
2 changes: 1 addition & 1 deletion ci/deps/azure-37-locale.yaml
@@ -10,6 +10,7 @@ dependencies:
- jinja2
- lxml
- matplotlib
- moto
- nomkl
- numexpr
- numpy
@@ -32,4 +33,3 @@ dependencies:
- pip
- pip:
- hypothesis>=3.58.0
- moto # latest moto in conda-forge fails with 3.7, move to conda dependencies when this is fixed
2 changes: 1 addition & 1 deletion ci/deps/azure-windows-37.yaml
@@ -10,6 +10,7 @@ dependencies:
- jinja2
- lxml
- matplotlib=2.2.*
- moto
- numexpr
- numpy=1.14.*
- openpyxl
@@ -29,6 +30,5 @@ dependencies:
- pytest-xdist
- pytest-mock
- pytest-azurepipelines
- moto
- hypothesis>=3.58.0
- pyreadstat
2 changes: 1 addition & 1 deletion ci/deps/travis-36-cov.yaml
@@ -12,6 +12,7 @@ dependencies:
- geopandas
- html5lib
- matplotlib
- moto
- nomkl
- numexpr
- numpy=1.15.*
@@ -46,6 +47,5 @@ dependencies:
- pip:
- brotlipy
- coverage
- moto
- pandas-datareader
- python-dateutil
2 changes: 1 addition & 1 deletion ci/deps/travis-36-locale.yaml
@@ -14,6 +14,7 @@ dependencies:
- jinja2
- lxml=3.8.0
- matplotlib=3.0.*
- moto
- nomkl
- numexpr
- numpy
@@ -36,7 +37,6 @@ dependencies:
- pytest>=4.0.2
- pytest-xdist
- pytest-mock
- moto
- pip
- pip:
- hypothesis>=3.58.0
1 change: 0 additions & 1 deletion doc/source/development/contributing.rst
@@ -178,7 +178,6 @@ We'll now kick off a three-step process:
# Create and activate the build environment
conda env create -f environment.yml
conda activate pandas-dev
conda uninstall --force pandas
# or with older versions of Anaconda:
source activate pandas-dev
1 change: 0 additions & 1 deletion doc/source/getting_started/10min.rst
@@ -712,7 +712,6 @@ See the :ref:`Plotting <visualization>` docs.
plt.close('all')
.. ipython:: python
:okwarning:
ts = pd.Series(np.random.randn(1000),
index=pd.date_range('1/1/2000', periods=1000))
2 changes: 0 additions & 2 deletions doc/source/reference/frame.rst
@@ -198,7 +198,6 @@ Reindexing / selection / label manipulation
DataFrame.idxmin
DataFrame.last
DataFrame.reindex
DataFrame.reindex_axis
DataFrame.reindex_like
DataFrame.rename
DataFrame.rename_axis
@@ -337,7 +336,6 @@ Serialization / IO / conversion
.. autosummary::
:toctree: api/

DataFrame.from_csv
DataFrame.from_dict
DataFrame.from_items
DataFrame.from_records
15 changes: 11 additions & 4 deletions doc/source/user_guide/advanced.rst
@@ -965,21 +965,26 @@ If you select a label *contained* within an interval, this will also select the
df.loc[2.5]
df.loc[[2.5, 3.5]]
``Interval`` and ``IntervalIndex`` are used by ``cut`` and ``qcut``:
:func:`cut` and :func:`qcut` both return a ``Categorical`` object, and the bins they
create are stored as an ``IntervalIndex`` in its ``.categories`` attribute.

.. ipython:: python
c = pd.cut(range(4), bins=2)
c
c.categories
Furthermore, ``IntervalIndex`` allows one to bin *other* data with these same
bins, with ``NaN`` representing a missing value similar to other dtypes.
:func:`cut` also accepts an ``IntervalIndex`` for its ``bins`` argument, which enables
a useful pandas idiom. First, we call :func:`cut` with some data and ``bins`` set to a
fixed number to generate the bins. Then, we pass the values of ``.categories`` as the
``bins`` argument in subsequent calls to :func:`cut`, supplying new data which will be
binned into the same bins.

.. ipython:: python
pd.cut([0, 3, 5, 1], bins=c.categories)
Any value which falls outside all bins will be assigned a ``NaN`` value.
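The idiom described above can be run end to end as a standalone sketch (variable names are illustrative):

```python
import pandas as pd

# First call: let cut() compute 2 equal-width bins from the data.
c = pd.cut(range(4), bins=2)

# Second call: reuse those exact bin edges for new data by passing
# c.categories (an IntervalIndex) as the bins argument.
binned = pd.cut([0, 3, 5, 1], bins=c.categories)

# 5 lies outside every bin, so it is assigned NaN.
print(binned)
```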

Generating ranges of intervals
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1108,6 +1113,8 @@ the :meth:`~Index.is_unique` attribute.
weakly_monotonic.is_monotonic_increasing
weakly_monotonic.is_monotonic_increasing & weakly_monotonic.is_unique
.. _advanced.endpoints_are_inclusive:

Endpoints are inclusive
~~~~~~~~~~~~~~~~~~~~~~~

@@ -1137,7 +1144,7 @@ index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e' + 1]

A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design to make label-based
specific dates. To enable this, we made the design choice to make label-based
slicing include both endpoints:

.. ipython:: python
10 changes: 6 additions & 4 deletions doc/source/user_guide/indexing.rst
@@ -61,8 +61,8 @@ of multi-axis indexing.
* A list or array of labels ``['a', 'b', 'c']``.
* A slice object with labels ``'a':'f'`` (Note that contrary to usual python
slices, **both** the start and the stop are included, when present in the
index! See :ref:`Slicing with labels
<indexing.slicing_with_labels>`.).
index! See :ref:`Slicing with labels <indexing.slicing_with_labels>`
and :ref:`Endpoints are inclusive <advanced.endpoints_are_inclusive>`.)
* A boolean array
* A ``callable`` function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
@@ -335,8 +335,7 @@ The ``.loc`` attribute is the primary access method. The following are valid inp
* A list or array of labels ``['a', 'b', 'c']``.
* A slice object with labels ``'a':'f'`` (Note that contrary to usual python
slices, **both** the start and the stop are included, when present in the
index! See :ref:`Slicing with labels
<indexing.slicing_with_labels>`.).
index! See :ref:`Slicing with labels <indexing.slicing_with_labels>`.
* A boolean array.
* A ``callable``, see :ref:`Selection By Callable <indexing.callable>`.

@@ -418,6 +417,9 @@ error will be raised (since doing otherwise would be computationally expensive,
as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, ``s.loc[1:6]`` would raise ``KeyError``.

For the rationale behind this behavior, see
:ref:`Endpoints are inclusive <advanced.endpoints_are_inclusive>`.
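Both behaviors can be sketched with a small, made-up Series:

```python
import pandas as pd

s = pd.Series(range(6), index=list('abcdef'))

# Label-based slicing includes BOTH endpoints...
print(s.loc['b':'d'])    # rows b, c and d -> 3 rows

# ...unlike positional slicing, which excludes the stop.
print(s.iloc[1:3])       # rows b and c -> 2 rows

# With a non-monotonic index, a slice label missing from the
# index raises KeyError rather than guessing a position.
s2 = pd.Series(list('abcdef'), index=[49, 48, 47, 0, 1, 2])
try:
    s2.loc[1:6]
except KeyError:
    print('KeyError, as documented')
```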

.. _indexing.integer:

Selection by position
23 changes: 10 additions & 13 deletions doc/source/user_guide/io.rst
@@ -340,13 +340,6 @@ dialect : str or :class:`python:csv.Dialect` instance, default ``None``
`skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
override values, a ParserWarning will be issued. See :class:`python:csv.Dialect`
documentation for more details.
tupleize_cols : boolean, default ``False``
.. deprecated:: 0.21.0

This argument will be removed and will always convert to MultiIndex

Leave a list of tuples on columns as is (default is to convert to a MultiIndex
on the columns).

Error handling
++++++++++++++
@@ -1718,8 +1711,6 @@ function takes a number of arguments. Only the first is required.
* ``escapechar``: Character used to escape ``sep`` and ``quotechar`` when
appropriate (default None)
* ``chunksize``: Number of rows to write at a time
* ``tupleize_cols``: If False (default), write as a list of tuples, otherwise
write in an expanded line format suitable for ``read_csv``
* ``date_format``: Format string for datetime objects

Writing a formatted string
@@ -3393,15 +3384,15 @@ both on the writing (serialization), and reading (deserialization).

.. warning::

This is a very new feature of pandas. We intend to provide certain
optimizations in the io of the ``msgpack`` data. Since this is marked
as an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release.
The msgpack format is deprecated as of 0.25 and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.

.. warning::

:func:`read_msgpack` is only guaranteed backwards compatible back to pandas version 0.20.3

.. ipython:: python
:okwarning:
df = pd.DataFrame(np.random.rand(5, 2), columns=list('AB'))
df.to_msgpack('foo.msg')
@@ -3411,20 +3402,23 @@ both on the writing (serialization), and reading (deserialization).
You can pass a list of objects and you will receive them back on deserialization.

.. ipython:: python
:okwarning:
pd.to_msgpack('foo.msg', df, 'foo', np.array([1, 2, 3]), s)
pd.read_msgpack('foo.msg')
You can pass ``iterator=True`` to iterate over the unpacked results:

.. ipython:: python
:okwarning:
for o in pd.read_msgpack('foo.msg', iterator=True):
print(o)
You can pass ``append=True`` to the writer to append to an existing pack:

.. ipython:: python
:okwarning:
df.to_msgpack('foo.msg', append=True)
pd.read_msgpack('foo.msg')
@@ -3435,6 +3429,7 @@ can pack arbitrary collections of Python lists, dicts, scalars, while intermixin
pandas objects.

.. ipython:: python
:okwarning:
pd.to_msgpack('foo2.msg', {'dict': [{'df': df}, {'string': 'foo'},
{'scalar': 1.}, {'s': s}]})
@@ -3453,14 +3448,16 @@ Read/write API
Msgpacks can also be read from and written to strings.

.. ipython:: python
:okwarning:
df.to_msgpack()
Furthermore, you can concatenate the strings to produce a list of the original objects.

.. ipython:: python
:okwarning:
pd.read_msgpack(df.to_msgpack() + s.to_msgpack())
.. _io.hdf5:

1 change: 0 additions & 1 deletion doc/source/user_guide/missing_data.rst
@@ -458,7 +458,6 @@ You can mix pandas' ``reindex`` and ``interpolate`` methods to interpolate
at the new values.

.. ipython:: python
:okexcept:
ser = pd.Series(np.sort(np.random.uniform(size=100)))
10 changes: 0 additions & 10 deletions doc/source/user_guide/timeseries.rst
@@ -474,16 +474,6 @@ resulting ``DatetimeIndex``:
Custom frequency ranges
~~~~~~~~~~~~~~~~~~~~~~~

.. warning::

This functionality was originally exclusive to ``cdate_range``, which is
deprecated as of version 0.21.0 in favor of ``bdate_range``. Note that
``cdate_range`` only utilizes the ``weekmask`` and ``holidays`` parameters
when custom business day, 'C', is passed as the frequency string. Support has
been expanded with ``bdate_range`` to work with any custom frequency string.

.. versionadded:: 0.21.0

``bdate_range`` can also generate a range of custom frequency dates by using
the ``weekmask`` and ``holidays`` parameters. These parameters will only be
used if a custom frequency string is passed.
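For instance (a minimal sketch; the dates, weekmask, and holiday are made up):

```python
import pandas as pd

# Custom business days: Mondays, Wednesdays and Fridays only,
# skipping 2019-07-03 as a holiday. 'C' is the custom business
# day frequency string; weekmask/holidays are ignored for other
# frequencies.
rng = pd.bdate_range('2019-07-01', periods=4, freq='C',
                     weekmask='Mon Wed Fri',
                     holidays=['2019-07-03'])
print(rng)
```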
2 changes: 2 additions & 0 deletions doc/source/whatsnew/v0.13.0.rst
@@ -829,6 +829,7 @@ Experimental
Since this is an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release.

.. ipython:: python
:okwarning:
df = pd.DataFrame(np.random.rand(5, 2), columns=list('AB'))
df.to_msgpack('foo.msg')
@@ -841,6 +842,7 @@ Experimental
You can pass ``iterator=True`` to iterate over the unpacked results

.. ipython:: python
:okwarning:
for o in pd.read_msgpack('foo.msg', iterator=True):
print(o)
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.24.0.rst
@@ -1298,7 +1298,7 @@ Deprecations
- :meth:`Series.compress` is deprecated. Use ``Series[condition]`` instead (:issue:`18262`)
- The signature of :meth:`Series.to_csv` has been uniformed to that of :meth:`DataFrame.to_csv`: the name of the first argument is now ``path_or_buf``, the order of subsequent arguments has changed, the ``header`` argument now defaults to ``True``. (:issue:`19715`)
- :meth:`Categorical.from_codes` has deprecated providing float values for the ``codes`` argument. (:issue:`21767`)
- :func:`pandas.read_table` is deprecated. Instead, use :func:`read_csv` passing ``sep='\t'`` if necessary (:issue:`21948`)
- :func:`pandas.read_table` is deprecated. Instead, use :func:`read_csv` passing ``sep='\t'`` if necessary. This deprecation was reverted in 0.25.0. (:issue:`21948`)
- :meth:`Series.str.cat` has deprecated using arbitrary list-likes *within* list-likes. A list-like container may still contain
many ``Series``, ``Index`` or 1-dimensional ``np.ndarray``, or alternatively, only scalar values. (:issue:`21950`)
- :meth:`FrozenNDArray.searchsorted` has deprecated the ``v`` parameter in favor of ``value`` (:issue:`14645`)
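The suggested replacement for ``read_table``, as a self-contained sketch (the TSV data here is made up for illustration):

```python
import io

import pandas as pd

tsv = "name\tscore\nalice\t1\nbob\t2\n"

# read_csv with sep='\t' covers what read_table did.
df = pd.read_csv(io.StringIO(tsv), sep='\t')
print(df)
```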