
Spellcheck (pandas-dev#19017)
tommyod authored and jreback committed Jan 3, 2018
1 parent c883128 commit 6552718
Showing 23 changed files with 203 additions and 160 deletions.
doc/source/10min.rst (3 additions & 3 deletions)
@@ -48,7 +48,7 @@ a default integer index:
s = pd.Series([1,3,5,np.nan,6,8])
s
-Creating a :class:`DataFrame` by passing a numpy array, with a datetime index
+Creating a :class:`DataFrame` by passing a NumPy array, with a datetime index
and labeled columns:

.. ipython:: python
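For illustration, a minimal sketch of the kind of call this describes (names here are illustrative):

    import numpy as np
    import pandas as pd

    dates = pd.date_range('20130101', periods=6)          # the datetime index
    df = pd.DataFrame(np.random.randn(6, 4),              # data from a NumPy array
                      index=dates, columns=list('ABCD'))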
@@ -114,7 +114,7 @@ Here is how to view the top and bottom rows of the frame:
df.head()
df.tail(3)
-Display the index, columns, and the underlying numpy data:
+Display the index, columns, and the underlying NumPy data:

.. ipython:: python
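Continuing the df sketch above, the three attributes in question:

    df.index      # the DatetimeIndex
    df.columns    # Index(['A', 'B', 'C', 'D'], dtype='object')
    df.values     # the underlying NumPy ndarray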
@@ -311,7 +311,7 @@ Setting values by position:
df.iat[0,1] = 0
-Setting by assigning with a numpy array:
+Setting by assigning with a NumPy array:

.. ipython:: python
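A sketch of this kind of assignment, reusing the df built above:

    df.loc[:, 'D'] = np.array([5] * len(df))   # replace column D with a NumPy array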
doc/source/advanced.rst (5 additions & 4 deletions)
@@ -316,7 +316,9 @@ Basic multi-index slicing using slices, lists, and labels.
dfmi.loc[(slice('A1','A3'), slice(None), ['C1', 'C3']), :]
-You can use :class:`pandas.IndexSlice` to facilitate a more natural syntax using ``:``, rather than using ``slice(None)``.
+You can use :class:`pandas.IndexSlice` to facilitate a more natural syntax
+using ``:``, rather than using ``slice(None)``.

.. ipython:: python
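A small sketch of the equivalence (the frame here is illustrative, not the docs' own dfmi):

    import numpy as np
    import pandas as pd

    mi = pd.MultiIndex.from_product([['A1', 'A2'], ['B1', 'B2'], ['C1', 'C2', 'C3']])
    dfmi = pd.DataFrame(np.arange(len(mi)), index=mi)

    idx = pd.IndexSlice
    dfmi.loc[idx[:, :, ['C1', 'C3']], :]   # same rows as the slice(None) spelling above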
@@ -557,7 +559,7 @@ Take Methods

.. _advanced.take:

-Similar to numpy ndarrays, pandas Index, Series, and DataFrame also provides
+Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provides
the ``take`` method that retrieves elements along a given axis at the given
indices. The given indices must be either a list or an ndarray of integer
index positions. ``take`` will also accept negative integers as relative positions to the end of the object.
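For example (a minimal sketch):

    import numpy as np
    import pandas as pd

    ser = pd.Series(np.random.randn(5))
    ser.take([1, 4])    # elements at positions 1 and 4
    ser.take([-2])      # negative positions count back from the end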
@@ -729,7 +731,7 @@ This is an Immutable array implementing an ordered, sliceable set.
Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.
``RangeIndex`` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
-``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
+``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to Python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
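A short illustration of the default:

    import pandas as pd

    pd.Series([10, 20, 30]).index    # RangeIndex(start=0, stop=3, step=1)
    pd.RangeIndex(0, 10, 2)          # stored as start/stop/step, not materialized values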
.. _indexing.float64index:
@@ -763,7 +765,6 @@ The only positional indexing is via ``iloc``.
sf.iloc[3]
A scalar index that is not found will raise a ``KeyError``.
Slicing is primarily on the values of the index when using ``[],ix,loc``, and
**always** positional when using ``iloc``. The exception is when the slice is
boolean, in which case it will always be positional.
doc/source/api.rst (1 addition & 1 deletion)
@@ -730,7 +730,7 @@ The dtype information is available on the ``Categorical``
Categorical.codes

``np.asarray(categorical)`` works by implementing the array interface. Be aware, that this converts
-the Categorical back to a numpy array, so categories and order information is not preserved!
+the Categorical back to a NumPy array, so categories and order information is not preserved!
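For example:

    import numpy as np
    import pandas as pd

    cat = pd.Categorical(['b', 'a', 'b'], categories=['b', 'a'], ordered=True)
    np.asarray(cat)    # array(['b', 'a', 'b'], dtype=object) -- ordering is gone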

.. autosummary::
:toctree: generated/
doc/source/basics.rst (6 additions & 6 deletions)
@@ -395,7 +395,7 @@ raise a ValueError:
In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])
ValueError: Series lengths must match to compare
-Note that this is different from the numpy behavior where a comparison can
+Note that this is different from the NumPy behavior where a comparison can
be broadcast:

.. ipython:: python
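The NumPy side of the contrast, as a sketch:

    import numpy as np

    np.array([1, 2, 3]) == np.array([2])       # broadcasts: array([False, True, False])
    np.array([1, 2, 3]) == np.array([1, 2])    # lengths differ: older NumPy evaluates this to False (with a warning)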
@@ -1000,7 +1000,7 @@ We create a frame similar to the one used in the above sections.
tsdf.iloc[3:7] = np.nan
tsdf
-Transform the entire frame. ``.transform()`` allows input functions as: a numpy function, a string
+Transform the entire frame. ``.transform()`` allows input functions as: a NumPy function, a string
function name or a user defined function.

.. ipython:: python
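A sketch of the three accepted forms:

    import numpy as np
    import pandas as pd

    tsdf = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
    tsdf.transform(np.abs)               # a NumPy function
    tsdf.transform('abs')                # a string function name
    tsdf.transform(lambda x: x.abs())    # a user defined function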
@@ -1510,7 +1510,7 @@ To iterate over the rows of a DataFrame, you can use the following methods:
one of the following approaches:

* Look for a *vectorized* solution: many operations can be performed using
-  built-in methods or numpy functions, (boolean) indexing, ...
+  built-in methods or NumPy functions, (boolean) indexing, ...

* When you have a function that cannot work on the full DataFrame/Series
at once, it is better to use :meth:`~DataFrame.apply` instead of iterating
@@ -1971,7 +1971,7 @@ from the current type (e.g. ``int`` to ``float``).
df3.dtypes
The ``values`` attribute on a DataFrame return the *lower-common-denominator* of the dtypes, meaning
-the dtype that can accommodate **ALL** of the types in the resulting homogeneous dtyped numpy array. This can
+the dtype that can accommodate **ALL** of the types in the resulting homogeneous dtyped NumPy array. This can
force some *upcasting*.

.. ipython:: python
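For instance (a minimal sketch):

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2], 'b': [1.5, 2.5]})
    df.values.dtype    # dtype('float64'): the int column was upcast to fit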
@@ -2253,7 +2253,7 @@ can define a function that returns a tree of child dtypes:
return dtype
return [dtype, [subdtypes(dt) for dt in subs]]
-All numpy dtypes are subclasses of ``numpy.generic``:
+All NumPy dtypes are subclasses of ``numpy.generic``:

.. ipython:: python
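A self-contained version of the helper, matching the fragment shown above (a sketch for reference):

    import numpy as np

    def subdtypes(dtype):
        subs = dtype.__subclasses__()
        if not subs:
            return dtype
        return [dtype, [subdtypes(dt) for dt in subs]]

    subdtypes(np.generic)    # nested tree of every NumPy scalar type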
@@ -2262,4 +2262,4 @@ All numpy dtypes are subclasses of ``numpy.generic``:
.. note::

Pandas also defines the types ``category``, and ``datetime64[ns, tz]``, which are not integrated into the normal
-  numpy hierarchy and wont show up with the above function.
+  NumPy hierarchy and wont show up with the above function.
doc/source/categorical.rst (2 additions & 2 deletions)
@@ -40,7 +40,7 @@ The categorical data type is useful in the following cases:
* The lexical order of a variable is not the same as the logical order ("one", "two", "three").
By converting to a categorical and specifying an order on the categories, sorting and
min/max will use the logical order instead of the lexical order, see :ref:`here <categorical.sort>`.
-* As a signal to other python libraries that this column should be treated as a categorical
+* As a signal to other Python libraries that this column should be treated as a categorical
variable (e.g. to use suitable statistical methods or plot types).

See also the :ref:`API docs on categoricals<api.categorical>`.
@@ -366,7 +366,7 @@ or simply set the categories to a predefined scale, use :func:`Categorical.set_c
.. note::
Be aware that :func:`Categorical.set_categories` cannot know whether some category is omitted
intentionally or because it is misspelled or (under Python3) due to a type difference (e.g.,
-  numpys S1 dtype and python strings). This can result in surprising behaviour!
+  numpys S1 dtype and Python strings). This can result in surprising behaviour!
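A sketch of the kind of surprise meant here:

    import pandas as pd

    s = pd.Series(pd.Categorical(['a', 'b', 'a']))
    s.cat.set_categories(['a', 'c'])    # 'b' is silently dropped and becomes NaN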

Sorting and Order
-----------------
doc/source/comparison_with_sas.rst (2 additions & 2 deletions)
@@ -10,7 +10,7 @@ performed in pandas.
If you're new to pandas, you might want to first read through :ref:`10 Minutes to pandas<10min>`
to familiarize yourself with the library.

-As is customary, we import pandas and numpy as follows:
+As is customary, we import pandas and NumPy as follows:

.. ipython:: python
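The conventional aliases referred to:

    import pandas as pd
    import numpy as np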
@@ -100,7 +100,7 @@ specifying the column names.
A pandas ``DataFrame`` can be constructed in many different ways,
but for a small number of values, it is often convenient to specify it as
-a python dictionary, where the keys are the column names
+a Python dictionary, where the keys are the column names
and the values are the data.

.. ipython:: python
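For example:

    import pandas as pd

    df = pd.DataFrame({'x': [1, 2, 3], 'y': ['a', 'b', 'c']})   # keys become column names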
doc/source/comparison_with_sql.rst (1 addition & 1 deletion)
@@ -10,7 +10,7 @@ various SQL operations would be performed using pandas.
If you're new to pandas, you might want to first read through :ref:`10 Minutes to pandas<10min>`
to familiarize yourself with the library.

-As is customary, we import pandas and numpy as follows:
+As is customary, we import pandas and NumPy as follows:

.. ipython:: python
doc/source/computation.rst (2 additions & 3 deletions)
@@ -57,9 +57,8 @@ Covariance
s2 = pd.Series(np.random.randn(1000))
s1.cov(s2)
-Analogously, :meth:`DataFrame.cov` to compute
-pairwise covariances among the series in the DataFrame, also excluding
-NA/null values.
+Analogously, :meth:`DataFrame.cov` to compute pairwise covariances among the
+series in the DataFrame, also excluding NA/null values.
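A sketch of the DataFrame case:

    import numpy as np
    import pandas as pd

    frame = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])
    frame.cov()    # 3x3 matrix of pairwise covariances, NA/null values excluded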

.. _computation.covariance.caveats:

doc/source/contributing.rst (2 additions & 2 deletions)
@@ -118,7 +118,7 @@ Creating a development environment
----------------------------------

To test out code changes, you'll need to build pandas from source, which
-requires a C compiler and python environment. If you're making documentation
+requires a C compiler and Python environment. If you're making documentation
changes, you can skip to :ref:`contributing.documentation` but you won't be able
to build the documentation locally before pushing your changes.

@@ -187,7 +187,7 @@ At this point you should be able to import pandas from your locally built versio
0.22.0.dev0+29.g4ad6d4d74

This will create the new environment, and not touch any of your existing environments,
-nor any existing python installation.
+nor any existing Python installation.

To view your environments::

doc/source/cookbook.rst (4 additions & 4 deletions)
@@ -41,7 +41,7 @@ above what the in-line examples offer.
Pandas (pd) and Numpy (np) are the only two abbreviated imported modules. The rest are kept
explicitly imported for newer users.

-These examples are written for python 3.4. Minor tweaks might be necessary for earlier python
+These examples are written for Python 3. Minor tweaks might be necessary for earlier python
versions.

Idioms
@@ -750,7 +750,7 @@ Timeseries
<http://nipunbatra.github.io/2015/06/timeseries/>`__

Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series.
-`How to rearrange a python pandas DataFrame?
+`How to rearrange a Python pandas DataFrame?
<http://stackoverflow.com/questions/15432659/how-to-rearrange-a-python-pandas-dataframe>`__

`Dealing with duplicates when reindexing a timeseries to a specified frequency
@@ -1152,7 +1152,7 @@ Storing Attributes to a group node
store = pd.HDFStore('test.h5')
store.put('df',df)
-# you can store an arbitrary python object via pickle
+# you can store an arbitrary Python object via pickle
store.get_storer('df').attrs.my_attribute = dict(A = 10)
store.get_storer('df').attrs.my_attribute
@@ -1167,7 +1167,7 @@ Storing Attributes to a group node
Binary Files
************

-pandas readily accepts numpy record arrays, if you need to read in a binary
+pandas readily accepts NumPy record arrays, if you need to read in a binary
file consisting of an array of C structs. For example, given this C program
in a file called ``main.c`` compiled with ``gcc main.c -std=gnu99`` on a
64-bit machine,
doc/source/dsintro.rst (2 additions & 2 deletions)
@@ -23,7 +23,7 @@ Intro to Data Structures
We'll start with a quick, non-comprehensive overview of the fundamental data
structures in pandas to get you started. The fundamental behavior about data
types, indexing, and axis labeling / alignment apply across all of the
-objects. To get started, import numpy and load pandas into your namespace:
+objects. To get started, import NumPy and load pandas into your namespace:

.. ipython:: python
@@ -877,7 +877,7 @@ of DataFrames:
wp['Item3'] = wp['Item1'] / wp['Item2']
The API for insertion and deletion is the same as for DataFrame. And as with
-DataFrame, if the item is a valid python identifier, you can access it as an
+DataFrame, if the item is a valid Python identifier, you can access it as an
attribute and tab-complete it in IPython.
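For example (sketched with a DataFrame; the same applies to Panel items):

    import pandas as pd

    df = pd.DataFrame({'item_1': [1, 2, 3]})
    df.item_1    # works because 'item_1' is a valid Python identifier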

Transposing
doc/source/ecosystem.rst (3 additions & 3 deletions)
@@ -27,7 +27,7 @@ Statistics and Machine Learning
`Statsmodels <http://www.statsmodels.org/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Statsmodels is the prominent python "statistics and econometrics library" and it has
+Statsmodels is the prominent Python "statistics and econometrics library" and it has
a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
econometrics, analysis and modeling functionality that is out of pandas' scope.
Statsmodels leverages pandas objects as the underlying data container for computation.
@@ -72,7 +72,7 @@ Hadley Wickham's `ggplot2 <http://ggplot2.org/>`__ is a foundational exploratory
Based on `"The Grammar of Graphics" <http://www.cs.uic.edu/~wilkinson/TheGrammarOfGraphics/GOG.html>`__ it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
It's really quite incredible. Various implementations to other languages are available,
-but a faithful implementation for python users has long been missing. Although still young
+but a faithful implementation for Python users has long been missing. Although still young
(as of Jan-2014), the `yhat/ggplot <https://github.com/yhat/ggplot>`__ project has been
progressing quickly in that direction.

@@ -192,7 +192,7 @@ or multi-indexed DataFrames.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fredapi is a Python interface to the `Federal Reserve Economic Data (FRED) <http://research.stlouisfed.org/fred2/>`__
provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and ALFRED database that
-contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in python to the FRED
+contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
you can obtain for free on the FRED website.
doc/source/enhancingperf.rst (9 additions & 9 deletions)
@@ -24,13 +24,13 @@ Enhancing Performance
Cython (Writing C extensions for pandas)
----------------------------------------

-For many use cases writing pandas in pure python and numpy is sufficient. In some
+For many use cases writing pandas in pure Python and NumPy is sufficient. In some
computationally heavy applications however, it can be possible to achieve sizeable
speed-ups by offloading work to `cython <http://cython.org/>`__.

This tutorial assumes you have refactored as much as possible in python, for example
-trying to remove for loops and making use of numpy vectorization, it's always worth
-optimising in python first.
+trying to remove for loops and making use of NumPy vectorization, it's always worth
+optimising in Python first.

This tutorial walks through a "typical" process of cythonizing a slow computation.
We use an `example from the cython documentation <http://docs.cython.org/src/quickstart/cythonize.html>`__
@@ -86,8 +86,8 @@ hence we'll concentrate our efforts cythonizing these two functions.

.. note::

-  In python 2 replacing the ``range`` with its generator counterpart (``xrange``)
-  would mean the ``range`` line would vanish. In python 3 ``range`` is already a generator.
+  In Python 2 replacing the ``range`` with its generator counterpart (``xrange``)
+  would mean the ``range`` line would vanish. In Python 3 ``range`` is already a generator.

.. _enhancingperf.plain:

@@ -232,7 +232,7 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros arra
.. note::

Loops like this would be *extremely* slow in python, but in Cython looping
-  over numpy arrays is *fast*.
+  over NumPy arrays is *fast*.

.. code-block:: ipython
@@ -315,7 +315,7 @@ Numba works by generating optimized machine code using the LLVM compiler infrast
Jit
~~~

-Using ``numba`` to just-in-time compile your code. We simply take the plain python code from above and annotate with the ``@jit`` decorator.
+Using ``numba`` to just-in-time compile your code. We simply take the plain Python code from above and annotate with the ``@jit`` decorator.

.. code-block:: python
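A sketch of what that annotation looks like (function bodies follow the tutorial's earlier integrate example; treat the names as illustrative):

    import numba

    @numba.jit
    def f_plain(x):
        return x * (x - 1)

    @numba.jit
    def integrate_f_plain(a, b, N):
        s = 0
        dx = (b - a) / N
        for i in range(N):
            s += f_plain(a + i * dx)
        return s * dx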
@@ -391,7 +391,7 @@ Caveats

``numba`` will execute on any function, but can only accelerate certain classes of functions.

-``numba`` is best at accelerating functions that apply numerical functions to numpy arrays. When passed a function that only uses operations it knows how to accelerate, it will execute in ``nopython`` mode.
+``numba`` is best at accelerating functions that apply numerical functions to NumPy arrays. When passed a function that only uses operations it knows how to accelerate, it will execute in ``nopython`` mode.

If ``numba`` is passed a function that includes something it doesn't know how to work with -- a category that currently includes sets, lists, dictionaries, or string functions -- it will revert to ``object mode``. In ``object mode``, numba will execute but your code will not speed up significantly. If you would prefer that ``numba`` throw an error if it cannot compile a function in a way that speeds up your code, pass numba the argument ``nopython=True`` (e.g. ``@numba.jit(nopython=True)``). For more on troubleshooting ``numba`` modes, see the `numba troubleshooting page <http://numba.pydata.org/numba-doc/0.20.0/user/troubleshoot.html#the-compiled-code-is-too-slow>`__.
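For example, a minimal sketch of opting into that strictness:

    import numba

    @numba.jit(nopython=True)    # raise instead of silently falling back to object mode
    def add_arrays(a, b):
        return a + b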

@@ -779,7 +779,7 @@ Technical Minutia Regarding Expression Evaluation

Expressions that would result in an object dtype or involve datetime operations
(because of ``NaT``) must be evaluated in Python space. The main reason for
-this behavior is to maintain backwards compatibility with versions of numpy <
+this behavior is to maintain backwards compatibility with versions of NumPy <
1.7. In those versions of ``numpy`` a call to ``ndarray.astype(str)`` will
truncate any strings that are more than 60 characters in length. Second, we
can't pass ``object`` arrays to ``numexpr`` thus string comparisons must be
doc/source/gotchas.rst (1 addition & 1 deletion)
@@ -91,7 +91,7 @@ See also :ref:`Categorical Memory Usage <categorical.memory>`.
Using If/Truth Statements with pandas
-------------------------------------

-pandas follows the numpy convention of raising an error when you try to convert something to a ``bool``.
+pandas follows the NumPy convention of raising an error when you try to convert something to a ``bool``.
This happens in a ``if`` or when using the boolean operations, ``and``, ``or``, or ``not``. It is not clear
what the result of

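The canonical trigger, as a sketch (the error text is what pandas raises):

    In [1]: if pd.Series([False, True, False]):
       ...:     print("I was true")
    ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().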