BUG: nlargest/nsmallest gave wrong result (#22752)
When asking for the n largest/smallest rows of a DataFrame,
nlargest/nsmallest sometimes failed to break ties using the
latter columns, so the wrong rows could be returned.
troels committed Sep 19, 2018
1 parent 1a12c41 commit 03aa68d
Showing 3 changed files with 48 additions and 15 deletions.
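To see the bug concretely, take the DataFrame used in the new regression test below. With column 'a' tied across several rows, the second column 'b' has to decide which rows make the top 4; a pandas build containing this fix returns the rows the test expects. A rough check, using the test's own data and sort:

    import pandas as pd

    df = pd.DataFrame({
        'a': [2, 2, 2, 1, 1, 1],
        'b': [1, 2, 3, 3, 2, 1],
    })

    # All three a == 2 rows are in the top 4; the remaining slot among the
    # a == 1 rows must be decided by 'b', so row 3 (a=1, b=3) is the fourth pick.
    result = df.nlargest(4, columns=['a', 'b']).sort_values(['a', 'b'],
                                                            ascending=False)
    print(result)
    #    a  b
    # 2  2  3
    # 1  2  2
    # 0  2  1
    # 3  1  3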
1 change: 1 addition & 0 deletions doc/source/whatsnew/v0.24.0.txt
@@ -804,6 +804,7 @@ Other
- :meth:`~pandas.io.formats.style.Styler.background_gradient` now takes a ``text_color_threshold`` parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (:issue:`21258`)
- Require at least 0.28.2 version of ``cython`` to support read-only memoryviews (:issue:`21688`)
- :meth:`~pandas.io.formats.style.Styler.background_gradient` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` (:issue:`15204`)
+- :meth:`DataFrame.nlargest` and :meth:`DataFrame.nsmallest` now return the correct n values when ``keep != 'all'``, even when rows are tied on the first columns (:issue:`22752`)
- :meth:`~pandas.io.formats.style.Styler.bar` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` and setting clipping range with ``vmin`` and ``vmax`` (:issue:`21548` and :issue:`21526`). ``NaN`` values are also handled properly.
- Logical operations ``&, |, ^`` between :class:`Series` and :class:`Index` will no longer raise ``ValueError`` (:issue:`22092`)
-
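For readers unfamiliar with the ``keep`` parameter mentioned in the entry above: ``keep='all'`` retains every row tied with the last selected value, so the result can be longer than n, while the other ``keep`` options truncate. A small illustration:

    import pandas as pd

    s = pd.Series([3, 2, 2, 1])
    # The 2nd-largest value is tied, so keep='all' returns three values
    # instead of two; keep='first' would return only [3, 2].
    print(s.nlargest(2, keep='all').tolist())  # [3, 2, 2]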
39 changes: 24 additions & 15 deletions pandas/core/algorithms.py
@@ -1221,27 +1221,36 @@ def get_indexer(current_indexer, other_indexer):
            # and break
            # Otherwise we must save the index of the non duplicated values
            # and set the next cur_frame to cur_frame filtered on all
-            # duplcicated values (#GH15297)
+            # duplicated values (#GH15297)
            series = cur_frame[column]
-            values = getattr(series, method)(cur_n, keep=self.keep)
            is_last_column = len(columns) - 1 == i
-            if is_last_column or values.nunique() == series.isin(values).sum():
+            values = getattr(series, method)(
+                cur_n,
+                keep=self.keep if is_last_column else 'all')

-                # Last column in columns or values are unique in
-                # series => values
-                # is all that matters
+            # Are we at the last column, or does values already hold
+            # no more items than we need?
+            if is_last_column or len(values) <= cur_n:
                indexer = get_indexer(indexer, values.index)
                break

-            duplicated_filter = series.duplicated(keep=False)
-            duplicated = values[duplicated_filter]
-            non_duplicated = values[~duplicated_filter]
-            indexer = get_indexer(indexer, non_duplicated.index)
-
-            # Must set cur frame to include all duplicated values
-            # to consider for the next column, we also can reduce
-            # cur_n by the current length of the indexer
-            cur_frame = cur_frame[series.isin(duplicated)]
+            # Find the border value. (Everything above/below it is
+            # good to go.)
+            if method == 'nlargest':
+                border_value = values.min()
+            else:
+                border_value = values.max()
+
+            border_values = values[values == border_value]
+            # All values other than the border value are strictly
+            # larger/smaller than it, so those rows definitely
+            # belong in the result.
+            safe_values = values[values != border_value]
+            indexer = get_indexer(indexer, safe_values.index)
+
+            # Now we have to break the tie among the
+            # border-value rows using the remaining columns.
+            cur_frame = cur_frame.loc[border_values.index]
            cur_n = n - len(indexer)

        frame = frame.take(indexer)
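Conceptually, the new loop asks each column for its top rows with keep='all', peels off the rows that beat the border value outright, and lets the next column break the tie among the border-value rows. A simplified standalone sketch of that idea for the nlargest case (the helper name nlargest_rows is illustrative; this is not the pandas internals above):

    import pandas as pd


    def nlargest_rows(df, n, columns):
        # Sketch of the border-value strategy (nlargest only).
        indexer = pd.Index([], dtype=df.index.dtype)
        cur_frame, cur_n = df, n
        for i, column in enumerate(columns):
            series = cur_frame[column]
            is_last_column = i == len(columns) - 1
            # Keep every tie on intermediate columns so a later column can break it.
            values = series.nlargest(cur_n, keep='first' if is_last_column else 'all')
            if is_last_column or len(values) <= cur_n:
                indexer = indexer.append(values.index)
                break
            # Rows strictly above the border value are certainly part of the result.
            border_value = values.min()
            indexer = indexer.append(values[values != border_value].index)
            # The tie among the border-value rows is decided by the next column.
            cur_frame = cur_frame.loc[values[values == border_value].index]
            cur_n = n - len(indexer)
        return df.loc[indexer]


    df = pd.DataFrame({'a': [2, 2, 2, 1, 1, 1], 'b': [1, 2, 3, 3, 2, 1]})
    print(nlargest_rows(df, 4, ['a', 'b']).index.tolist())  # [0, 1, 2, 3]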
23 changes: 23 additions & 0 deletions pandas/tests/series/test_analytics.py
@@ -1996,6 +1996,29 @@ def test_duplicate_keep_all_ties(self):
        expected = Series([6, 7, 7, 7, 7], index=[7, 3, 4, 5, 6])
        assert_series_equal(result, expected)

+    @pytest.mark.parametrize('method,expected', [
+        ('nlargest',
+         pd.DataFrame({'a': [2, 2, 2, 1], 'b': [3, 2, 1, 3]},
+                      index=[2, 1, 0, 3])),
+        ('nsmallest',
+         pd.DataFrame({'a': [2, 1, 1, 1], 'b': [1, 3, 2, 1]},
+                      index=[0, 3, 4, 5]))])
+    def test_duplicates_on_starter_columns(self, method, expected):
+        # regression test for #22752
+
+        df = pd.DataFrame({
+            'a': [2, 2, 2, 1, 1, 1],
+            'b': [1, 2, 3, 3, 2, 1]
+        })
+
+        result = getattr(df, method)(
+            4, columns=['a', 'b']
+        ).sort_values(
+            ['a', 'b'], ascending=False
+        )
+
+        assert_frame_equal(result, expected)
+

class TestCategoricalSeriesAnalytics(object):

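The nsmallest parametrization can be checked by hand the same way: the three a == 1 rows are always kept, and column 'b' picks row 0 as the single a == 2 row with the smallest b, matching the expected index [0, 3, 4, 5]. A quick sanity check:

    import pandas as pd

    df = pd.DataFrame({'a': [2, 2, 2, 1, 1, 1], 'b': [1, 2, 3, 3, 2, 1]})
    result = df.nsmallest(4, columns=['a', 'b']).sort_values(['a', 'b'],
                                                             ascending=False)
    print(result.index.tolist())  # [0, 3, 4, 5]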
