
CLN: ASV eval benchmark #18500 (Merged)

Merged 1 commit into pandas-dev:master from mroeschke:asv_clean_eval on Nov 26, 2017
Conversation

mroeschke (Member)

asv run -b ^eval
[  0.00%] ·· Benchmarking conda-py3.6-Cython-matplotlib-numexpr-numpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 14.29%] ··· Running eval.Eval.time_add                                                                                  41.8±0.7ms;...
[ 28.57%] ··· Running eval.Eval.time_and                                                                                  55.5±0.4ms;...
[ 42.86%] ··· Running eval.Eval.time_chained_cmp                                                                            48.7±1ms;...
[ 57.14%] ··· Running eval.Eval.time_mult                                                                                   38.5±1ms;...
[ 71.43%] ··· Running eval.Query.time_query_datetime_column                                                                       19.1ms
[ 85.71%] ··· Running eval.Query.time_query_datetime_index                                                                        51.9ms
[100.00%] ··· Running eval.Query.time_query_with_boolean_selection                                                                66.9ms
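
For reference, these benchmarks follow ASV's class-based pattern: a setup method builds the data and each time_* method is timed, optionally once per parameter combination (which is why several values appear per Eval benchmark above). The sketch below only illustrates that pattern; the class layout, data shapes, and expression are assumptions, not the actual contents of this PR.

```python
# Illustrative ASV benchmark class in the style of the eval benchmarks above.
# Data shapes and the expression are assumed for illustration, not taken from the PR.
import numpy as np
import pandas as pd


class Eval(object):

    # ASV reports one timing per parameter value.
    params = ['numexpr', 'python']
    param_names = ['engine']

    def setup(self, engine):
        np.random.seed(1234)  # seed inside setup so each benchmark sees identical data
        self.df = pd.DataFrame(np.random.randn(20000, 100))
        self.df2 = pd.DataFrame(np.random.randn(20000, 100))

    def time_add(self, engine):
        df, df2 = self.df, self.df2
        pd.eval('df + df2', engine=engine)
```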

codecov bot commented Nov 26, 2017

Codecov Report

Merging #18500 into master will increase coverage by 0.02%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master   #18500      +/-   ##
==========================================
+ Coverage    91.3%   91.32%   +0.02%     
==========================================
  Files         163      163              
  Lines       49781    49781              
==========================================
+ Hits        45451    45463      +12     
+ Misses       4330     4318      -12
Flag                            Coverage Δ
#multiple                       89.12% <ø> (+0.02%) ⬆️
#single                         40.72% <ø> (ø) ⬆️

Impacted Files                  Coverage Δ
pandas/plotting/_converter.py   65.25% <0%> (+1.81%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 38f41e6...f6381ea.

jreback (Contributor) commented Nov 26, 2017

FYI the random seed is already defined in pandas_vb_common; I would actually remove it entirely from the benchmark suite (except for there), and put a note in each file that it's already defined.

separate PR for this.

@jreback added the Benchmark (Performance (ASV) benchmarks) label Nov 26, 2017
@jreback added this to the 0.22.0 milestone Nov 26, 2017
@jreback merged commit c44a063 into pandas-dev:master Nov 26, 2017
np.random.seed(1234)
self.N = 10**6
self.index = date_range('20010101', periods=self.N, freq='T')
self.s = Series(self.index)
Review comment on the setup snippet above (Member):

When you are cleaning up, you can also remove such unneeded `self.`s for things that are only used in setup (they are a leftover from the autogeneration of this code when the benchmarks were translated to ASV).
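
As a purely illustrative sketch of that suggestion (the class and benchmark names below are hypothetical, not from this diff): values consumed only inside setup can stay local variables, and only objects that the time_* methods actually read need to live on self.

```python
import numpy as np
from pandas import Series, date_range


class BeforeCleanup(object):
    def setup(self):
        # Everything is stored on self, even values used only during setup.
        self.N = 10**6
        np.random.seed(1234)
        self.index = date_range('20010101', periods=self.N, freq='T')
        self.s = Series(self.index)

    def time_example(self):
        self.s.dt.hour


class AfterCleanup(object):
    def setup(self):
        # Values used only here are plain locals; only self.s survives setup.
        N = 10**6
        np.random.seed(1234)
        index = date_range('20010101', periods=N, freq='T')
        self.s = Series(index)

    def time_example(self):
        self.s.dt.hour
```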

mroeschke (Member, Author)
@jreback I think the random seed should be included in the setup function instead so that the same data is generated for each benchmark. If the seed is global then different random data is generated for each benchmark.

You can also include a module-level setup function, which will be run for every benchmark within the module, prior to any setup assigned specifically to each function.
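
A minimal sketch of the two options being weighed, assuming standard ASV behavior (the class and benchmark names are made up): a module-level setup function runs before every benchmark in the module, ahead of any per-class setup, so putting the seed there also gives every benchmark the same data.

```python
# Illustrative only: where a seed can live in an ASV benchmark module.
import numpy as np
import pandas as pd


def setup(*args):
    # Module-level setup: ASV runs this before every benchmark in the module,
    # prior to any per-class setup, so all benchmarks see the same seed.
    np.random.seed(1234)


class Example(object):

    def setup(self):
        # Per-class setup: the data built here is reproducible because the
        # module-level setup above has already reset the seed.
        self.df = pd.DataFrame(np.random.randn(1000, 10))

    def time_sum(self):
        self.df.sum()
```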

jreback (Contributor) commented Nov 27, 2017

ok then, let's remove it from the global namespace (or maybe put a comment there) and add it to setup as you are already doing.

jorisvandenbossche (Member)
I think the module-level setup is a good idea.

@mroeschke deleted the asv_clean_eval branch November 27, 2017 17:31