
Commit

Permalink
COMPAT: rename isnull -> isna, notnull -> notna
existing isnull, notnull remain user facing

closes #15001
jreback committed Jul 21, 2017
1 parent 4efe656 commit 05aed53
Showing 134 changed files with 977 additions and 904 deletions.
2 changes: 1 addition & 1 deletion doc/source/10min.rst
Original file line number Diff line number Diff line change
@@ -373,7 +373,7 @@ To get the boolean mask where values are ``nan``

.. ipython:: python
pd.isnull(df1)
pd.isna(df1)
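A minimal sketch of the renamed function in action; the ``df1`` here is a hypothetical stand-in for the frame built earlier in the 10min guide:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for df1 from the surrounding docs.
df1 = pd.DataFrame({"A": [1.0, np.nan], "B": [3.0, 4.0]})

# pd.isna is the new spelling of pd.isnull; both return a boolean mask
# that is True exactly where the value is missing.
mask = pd.isna(df1)
```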
Operations
20 changes: 10 additions & 10 deletions doc/source/api.rst
@@ -187,8 +187,8 @@ Top-level missing data
.. autosummary::
:toctree: generated/

isnull
notnull
isna
notna

Top-level conversions
~~~~~~~~~~~~~~~~~~~~~
@@ -272,8 +272,8 @@ Conversion
Series.astype
Series.infer_objects
Series.copy
Series.isnull
Series.notnull
Series.isna
Series.notna

Indexing, iteration
~~~~~~~~~~~~~~~~~~~
@@ -780,8 +780,8 @@ Conversion
DataFrame.convert_objects
DataFrame.infer_objects
DataFrame.copy
DataFrame.isnull
DataFrame.notnull
DataFrame.isna
DataFrame.notna

Indexing, iteration
~~~~~~~~~~~~~~~~~~~
@@ -1098,8 +1098,8 @@ Conversion

Panel.astype
Panel.copy
Panel.isnull
Panel.notnull
Panel.isna
Panel.notna

Getting and setting
~~~~~~~~~~~~~~~~~~~
@@ -1342,8 +1342,8 @@ Missing Values

Index.fillna
Index.dropna
Index.isnull
Index.notnull
Index.isna
Index.notna

Conversion
~~~~~~~~~~
6 changes: 3 additions & 3 deletions doc/source/basics.rst
@@ -444,7 +444,7 @@ So, for instance, to reproduce :meth:`~DataFrame.combine_first` as above:

.. ipython:: python
combiner = lambda x, y: np.where(pd.isnull(x), y, x)
combiner = lambda x, y: np.where(pd.isna(x), y, x)
df1.combine(df2, combiner)
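A runnable sketch of the ``combine_first`` reproduction above, with hypothetical ``df1``/``df2`` standing in for the frames from the docs:

```python
import numpy as np
import pandas as pd

# Hypothetical inputs standing in for df1/df2 from the surrounding docs.
df1 = pd.DataFrame({"A": [1.0, np.nan]})
df2 = pd.DataFrame({"A": [3.0, 4.0]})

# Take values from df2 wherever df1 is missing, mirroring combine_first.
combiner = lambda x, y: np.where(pd.isna(x), y, x)
result = df1.combine(df2, combiner)
```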
.. _basics.stats:
@@ -511,7 +511,7 @@ optional ``level`` parameter which applies only if the object has a
:header: "Function", "Description"
:widths: 20, 80

``count``, Number of non-null observations
``count``, Number of non-NA observations
``sum``, Sum of values
``mean``, Mean of values
``mad``, Mean absolute deviation
@@ -541,7 +541,7 @@ will exclude NAs on Series input by default:
np.mean(df['one'].values)
``Series`` also has a method :meth:`~Series.nunique` which will return the
number of unique non-null values:
number of unique non-NA values:

.. ipython:: python
4 changes: 2 additions & 2 deletions doc/source/categorical.rst
@@ -863,14 +863,14 @@ a code of ``-1``.
s.cat.codes
Methods for working with missing data, e.g. :meth:`~Series.isnull`, :meth:`~Series.fillna`,
Methods for working with missing data, e.g. :meth:`~Series.isna`, :meth:`~Series.fillna`,
:meth:`~Series.dropna`, all work normally:

.. ipython:: python
s = pd.Series(["a", "b", np.nan], dtype="category")
s
pd.isnull(s)
pd.isna(s)
s.fillna("a")
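The categorical example above, runnable end to end with the renamed function:

```python
import numpy as np
import pandas as pd

s = pd.Series(["a", "b", np.nan], dtype="category")

mask = pd.isna(s)        # True only for the missing entry
filled = s.fillna("a")   # "a" is an existing category, so filling is allowed
```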
Differences to R's `factor`
11 changes: 5 additions & 6 deletions doc/source/comparison_with_sas.rst
@@ -444,13 +444,13 @@ For example, in SAS you could do this to filter missing values.
if value_x ^= .;
run;
Which doesn't work in pandas. Instead, the ``pd.isnull`` or ``pd.notnull`` functions
Which doesn't work in pandas. Instead, the ``pd.isna`` or ``pd.notna`` functions
should be used for comparisons.

.. ipython:: python
outer_join[pd.isnull(outer_join['value_x'])]
outer_join[pd.notnull(outer_join['value_x'])]
outer_join[pd.isna(outer_join['value_x'])]
outer_join[pd.notna(outer_join['value_x'])]
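A small sketch of the filtering above; the frame here is a hypothetical stand-in for the ``outer_join`` result built earlier in the SAS comparison:

```python
import numpy as np
import pandas as pd

# Hypothetical outer-join result standing in for outer_join from the docs.
outer_join = pd.DataFrame({"key": ["x", "y", "z"],
                           "value_x": [0.5, np.nan, 1.5]})

missing = outer_join[pd.isna(outer_join["value_x"])]   # rows where value_x is NA
present = outer_join[pd.notna(outer_join["value_x"])]  # rows where it is not
```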
pandas also provides a variety of methods to work with missing data - some of
which would be challenging to express in SAS. For example, there are methods to
@@ -570,15 +570,15 @@ machine's memory, but also that the operations on that data may be faster.

If out of core processing is needed, one possibility is the
`dask.dataframe <http://dask.pydata.org/en/latest/dataframe.html>`_
library (currently in development) which
provides a subset of pandas functionality for an on-disk ``DataFrame``

Data Interop
~~~~~~~~~~~~

pandas provides a :func:`read_sas` method that can read SAS data saved in
the XPORT or SAS7BDAT binary format.

.. code-block:: none
libname xportout xport 'transport-file.xpt';
@@ -613,4 +613,3 @@ to interop data between SAS and pandas is to serialize to csv.
In [9]: %time df = pd.read_csv('big.csv')
Wall time: 4.86 s
8 changes: 4 additions & 4 deletions doc/source/comparison_with_sql.rst
@@ -101,7 +101,7 @@ Just like SQL's OR and AND, multiple conditions can be passed to a DataFrame usi
# tips by parties of at least 5 diners OR bill total was more than $45
tips[(tips['size'] >= 5) | (tips['total_bill'] > 45)]
NULL checking is done using the :meth:`~pandas.Series.notnull` and :meth:`~pandas.Series.isnull`
NULL checking is done using the :meth:`~pandas.Series.notna` and :meth:`~pandas.Series.isna`
methods.

.. ipython:: python
@@ -121,9 +121,9 @@ where ``col2`` IS NULL with the following query:
.. ipython:: python
frame[frame['col2'].isnull()]
frame[frame['col2'].isna()]
Getting items where ``col1`` IS NOT NULL can be done with :meth:`~pandas.Series.notnull`.
Getting items where ``col1`` IS NOT NULL can be done with :meth:`~pandas.Series.notna`.

.. code-block:: sql
@@ -133,7 +133,7 @@ Getting items where ``col1`` IS NOT NULL can be done with :meth:`~pandas.Series.
.. ipython:: python
frame[frame['col1'].notnull()]
frame[frame['col1'].notna()]
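The two NULL checks above, runnable with a hypothetical ``frame`` like the one used in the SQL comparison:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for `frame` from the surrounding docs.
frame = pd.DataFrame({"col1": ["A", "B", np.nan, "C", "D"],
                      "col2": ["F", np.nan, "G", "H", "I"]})

is_null = frame[frame["col2"].isna()]     # SQL: WHERE col2 IS NULL
not_null = frame[frame["col1"].notna()]   # SQL: WHERE col1 IS NOT NULL
```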
GROUP BY
4 changes: 2 additions & 2 deletions doc/source/conf.py
@@ -238,8 +238,8 @@
# https://github.com/pandas-dev/pandas/issues/16186

moved_api_pages = [
('pandas.core.common.isnull', 'pandas.isnull'),
('pandas.core.common.notnull', 'pandas.notnull'),
('pandas.core.common.isnull', 'pandas.isna'),
('pandas.core.common.notnull', 'pandas.notna'),
('pandas.core.reshape.get_dummies', 'pandas.get_dummies'),
('pandas.tools.merge.concat', 'pandas.concat'),
('pandas.tools.merge.merge', 'pandas.merge'),
2 changes: 1 addition & 1 deletion doc/source/gotchas.rst
@@ -202,7 +202,7 @@ For many reasons we chose the latter. After years of production use it has
proven, at least in my opinion, to be the best decision given the state of
affairs in NumPy and Python in general. The special value ``NaN``
(Not-A-Number) is used everywhere as the ``NA`` value, and there are API
functions ``isnull`` and ``notnull`` which can be used across the dtypes to
functions ``isna`` and ``notna`` which can be used across the dtypes to
detect NA values.

However, it comes with it a couple of trade-offs which I most certainly have
26 changes: 13 additions & 13 deletions doc/source/missing_data.rst
@@ -36,7 +36,7 @@ When / why does data become missing?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some might quibble over our usage of *missing*. By "missing" we simply mean
**null** or "not present for whatever reason". Many data sets simply arrive with
**NA** or "not present for whatever reason". Many data sets simply arrive with
missing data, either because it exists and was not collected or it never
existed. For example, in a collection of financial time series, some of the time
series might start on different dates. Thus, values prior to the start date
@@ -63,27 +63,27 @@ to handling missing data. While ``NaN`` is the default missing value marker for
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python ``None`` will
arise and we wish to also consider that "missing" or "null".
arise and we wish to also consider that "missing" or "NA".

.. note::

Prior to version v0.10.0 ``inf`` and ``-inf`` were also
considered to be "null" in computations. This is no longer the case by
default; use the ``mode.use_inf_as_null`` option to recover it.
considered to be "NA" in computations. This is no longer the case by
default; use the ``mode.use_inf_as_na`` option to recover it.

.. _missing.isnull:
.. _missing.isna:

To make detecting missing values easier (and across different array dtypes),
pandas provides the :func:`~pandas.core.common.isnull` and
:func:`~pandas.core.common.notnull` functions, which are also methods on
pandas provides the :func:`isna` and
:func:`notna` functions, which are also methods on
``Series`` and ``DataFrame`` objects:

.. ipython:: python
df2['one']
pd.isnull(df2['one'])
df2['four'].notnull()
df2.isnull()
pd.isna(df2['one'])
df2['four'].notna()
df2.isna()
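The function and method spellings above agree; a sketch with a hypothetical ``df2`` standing in for the one from the docs:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for df2 from the surrounding docs.
df2 = pd.DataFrame({"one": [1.0, np.nan], "four": ["bar", np.nan]})

func_mask = pd.isna(df2["one"])   # top-level function...
meth_mask = df2["one"].isna()     # ...and the equivalent method
kept = df2["four"].notna()        # notna is the complement of isna
```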
.. warning::

@@ -206,7 +206,7 @@ with missing data.
Filling missing values: fillna
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The **fillna** function can "fill in" NA values with non-null data in a couple
The **fillna** function can "fill in" NA values with non-NA data in a couple
of ways, which we illustrate:

**Replace NA with a scalar value**
@@ -220,7 +220,7 @@ of ways, which we illustrate:
**Fill gaps forward or backward**

Using the same filling arguments as :ref:`reindexing <basics.reindexing>`, we
can propagate non-null values forward or backward:
can propagate non-NA values forward or backward:

.. ipython:: python
@@ -288,7 +288,7 @@ a Series in this case.

.. ipython:: python
dff.where(pd.notnull(dff), dff.mean(), axis='columns')
dff.where(pd.notna(dff), dff.mean(), axis='columns')
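A runnable sketch of the mean-fill idiom above, using a small hypothetical ``dff``:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for dff from the surrounding docs.
dff = pd.DataFrame({"A": [1.0, np.nan, 3.0], "B": [4.0, 5.0, np.nan]})

# Keep existing values; replace NAs with each column's mean
# (dff.mean() is a Series indexed by column, aligned via axis='columns').
filled = dff.where(pd.notna(dff), dff.mean(), axis="columns")
```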
.. _missing_data.dropna:
6 changes: 3 additions & 3 deletions doc/source/options.rst
@@ -419,10 +419,10 @@ mode.chained_assignment warn Raise an exception, warn, or no
assignment, The default is warn
mode.sim_interactive False Whether to simulate interactive mode
for purposes of testing.
mode.use_inf_as_null False True means treat None, NaN, -INF,
INF as null (old way), False means
mode.use_inf_as_na False True means treat None, NaN, -INF,
INF as NA (old way), False means
None and NaN are null, but INF, -INF
are not null (new way).
are not NA (new way).
compute.use_bottleneck True Use the bottleneck library to accelerate
computation if it is installed.
compute.use_numexpr True Use the numexpr library to accelerate
20 changes: 17 additions & 3 deletions doc/source/whatsnew/v0.21.0.txt
@@ -127,8 +127,6 @@ the target. Now, a ``ValueError`` will be raised when such an input is passed in
...
ValueError: Cannot operate inplace if there is no assignment

.. _whatsnew_0210.dtype_conversions:

Dtype Conversions
^^^^^^^^^^^^^^^^^

@@ -186,6 +184,22 @@ Dtype Conversions
- Inconsistent behavior in ``.where()`` with datetimelikes which would raise rather than coerce to ``object`` (:issue:`16402`)
- Bug in assignment against ``int64`` data with ``np.ndarray`` with ``float64`` dtype may keep ``int64`` dtype (:issue:`14001`)

.. _whatsnew_0210.api.na_changes:

NA naming Changes
^^^^^^^^^^^^^^^^^

In order to promote more consistency within the pandas API, we have added additional top-level
functions :func:`isna` and :func:`notna`, which are identical to :func:`isnull` and :func:`notnull`.
The naming scheme is now more consistent with the ``.dropna()`` and ``.fillna()`` methods. Furthermore,
wherever the ``.isnull()`` and ``.notnull()`` methods are defined, additional methods
named ``.isna()`` and ``.notna()`` are also available; this includes the classes `Categorical`,
`Index`, `Series`, and `DataFrame` (:issue:`15001`).

Using :func:`isnull` and :func:`notnull` will now issue a ``DeprecationWarning`` and recommend using :func:`isna` and :func:`notna` respectively.

The configuration option ``mode.use_inf_as_null`` is deprecated, and ``mode.use_inf_as_na`` is added as a replacement.
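Since the new names are aliases of the old ones, results from the two spellings match exactly; a minimal check (assuming a pandas version where both spellings exist):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan])

# isna/notna are aliases of isnull/notnull, so the masks are identical.
same_isna = pd.isna(s).equals(pd.isnull(s))
same_notna = pd.notna(s).equals(pd.notnull(s))
```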

.. _whatsnew_0210.api:

Other API Changes
@@ -259,7 +273,7 @@ Indexing
- Fixes bug where indexing with ``np.inf`` caused an ``OverflowError`` to be raised (:issue:`16957`)
- Bug in reindexing on an empty ``CategoricalIndex`` (:issue:`16770`)
- Fixes ``DataFrame.loc`` for setting with alignment and tz-aware ``DatetimeIndex`` (:issue:`16889`)

I/O
^^^

4 changes: 2 additions & 2 deletions pandas/_libs/algos_rank_helper.pxi.in
@@ -83,7 +83,7 @@ def rank_1d_{{dtype}}(object in_arr, ties_method='average', ascending=True,
nan_value = {{neg_nan_value}}

{{if dtype == 'object'}}
mask = lib.isnullobj(values)
mask = lib.isnaobj(values)
{{elif dtype == 'float64'}}
mask = np.isnan(values)
{{elif dtype == 'int64'}}
@@ -259,7 +259,7 @@ def rank_2d_{{dtype}}(object in_arr, axis=0, ties_method='average',
nan_value = {{neg_nan_value}}

{{if dtype == 'object'}}
mask = lib.isnullobj2d(values)
mask = lib.isnaobj2d(values)
{{elif dtype == 'float64'}}
mask = np.isnan(values)
{{elif dtype == 'int64'}}
8 changes: 4 additions & 4 deletions pandas/_libs/lib.pyx
@@ -286,7 +286,7 @@ def item_from_zerodim(object val):

@cython.wraparound(False)
@cython.boundscheck(False)
def isnullobj(ndarray arr):
def isnaobj(ndarray arr):
cdef Py_ssize_t i, n
cdef object val
cdef ndarray[uint8_t] result
@@ -303,7 +303,7 @@ def isnullobj(ndarray arr):

@cython.wraparound(False)
@cython.boundscheck(False)
def isnullobj_old(ndarray arr):
def isnaobj_old(ndarray arr):
cdef Py_ssize_t i, n
cdef object val
cdef ndarray[uint8_t] result
@@ -320,7 +320,7 @@ def isnullobj_old(ndarray arr):

@cython.wraparound(False)
@cython.boundscheck(False)
def isnullobj2d(ndarray arr):
def isnaobj2d(ndarray arr):
cdef Py_ssize_t i, j, n, m
cdef object val
cdef ndarray[uint8_t, ndim=2] result
@@ -339,7 +339,7 @@ def isnullobj2d(ndarray arr):

@cython.wraparound(False)
@cython.boundscheck(False)
def isnullobj2d_old(ndarray arr):
def isnaobj2d_old(ndarray arr):
cdef Py_ssize_t i, j, n, m
cdef object val
cdef ndarray[uint8_t, ndim=2] result
4 changes: 2 additions & 2 deletions pandas/_libs/testing.pyx
@@ -1,7 +1,7 @@
import numpy as np

from pandas import compat
from pandas.core.dtypes.missing import isnull, array_equivalent
from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.core.dtypes.common import is_dtype_equal

cdef NUMERIC_TYPES = (
@@ -182,7 +182,7 @@ cpdef assert_almost_equal(a, b,
if a == b:
# object comparison
return True
if isnull(a) and isnull(b):
if isna(a) and isna(b):
# nan / None comparison
return True
if is_comparable_as_number(a) and is_comparable_as_number(b):
