
Benchmarking framework using ASV and Playwright #62

Merged: 5 commits merged into holoviz-topics:main from benchmark_framework, Jul 26, 2023

Conversation

ianthomas23 (Collaborator)

This PR adds a benchmarking framework using ASV and Playwright. To try this out, follow the instructions in benchmarks/README.md.

To use ASV, this GitHub repo needs to be locally installable into a virtual environment, hence the top-level pyproject.toml containing the build-system and dependencies. It is based on the one in the src/hvneuro directory. Note that the project only needs to be installable locally from source; it never needs to be formally released on PyPI or conda.
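For reference, a minimal top-level pyproject.toml of this kind might look like the following. This is an illustrative sketch assuming a setuptools backend and a guessed dependency list; the actual file in the PR may differ.

# Illustrative only: just enough metadata for `pip install -e .` to succeed.
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "hvneuro"
version = "0.1.0"
dependencies = ["bokeh", "playwright"]  # assumed, not taken from the PR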

At this stage there is just one simple benchmark that uses Bokeh without any of the HoloViz packages. It starts with a browser already open displaying a Bokeh plot, then times the transfer of pre-generated test data to the browser and the rendering of the first frame. This is repeated for datasets of different sizes and for both the canvas and webgl output backends. Note that webgl is generally slower here, as there is extra overhead in using webgl that is included in the timings of the first rendered frame.
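For orientation, here is a schematic of what such a parametrized ASV benchmark looks like. The class and method names match the asv output quoted later in this thread (timeseries.Timeseries.time_values); the bodies are an illustrative sketch, not the PR's actual code.

from playwright.sync_api import sync_playwright

class Timeseries:
    # ASV runs time_values once per (n, output_backend) combination.
    params = ([1_000, 10_000, 100_000, 1_000_000, 10_000_000],
              ["canvas", "webgl"])
    param_names = ["n", "output_backend"]

    def setup(self, n, output_backend):
        self._playwright = sync_playwright().start()
        self._browser = self._playwright.chromium.launch(headless=True)
        self._page = self._browser.new_page()

    def time_values(self, n, output_backend):
        # Stand-in body: the real benchmark serves a Bokeh plot with the
        # given output_backend, transfers n pre-generated points, and
        # waits for the first rendered frame to be reported.
        self._page.goto("about:blank")

    def teardown(self, n, output_backend):
        self._browser.close()
        self._playwright.stop()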

Subsequent benchmarks will be more complicated, based on the existing workflows. There could be multiple benchmarks per workflow to measure the time taken for the first render (the latency), the speed of interaction using pan and/or zoom, and so on.
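As a taste of what an interaction benchmark could look like, here is a hypothetical pan-timing method in the style of the class sketched above (the canvas selector and drag distance are invented for illustration):

def time_pan(self, n, output_backend):
    # Drag horizontally across the plot canvas to trigger a pan.
    canvas = self._page.locator("canvas").first
    box = canvas.bounding_box()
    x = box["x"] + box["width"] / 2
    y = box["y"] + box["height"] / 2
    self._page.mouse.move(x, y)
    self._page.mouse.down()
    self._page.mouse.move(x + 100, y)
    self._page.mouse.up()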

Before merging, the branches section of asv.conf.json will need to be changed to

 "branches": ["benchmark_framework"],

Timings I obtain on my M1 Mac are:

========== ============ =========
--             output_backend    
---------- ----------------------
    n         canvas      webgl  
========== ============ =========
   1000      96.2±1ms    124±3ms 
  10000      118±3ms     151±2ms 
  100000     106±3ms     126±2ms 
 1000000     409±2ms     431±5ms 
 10000000   3.40±0.01s   3.42±0s 
========== ============ =========

and on my Linux box with a decent graphics card:

========== ========== ==========
--            output_backend
---------- ---------------------
    n        canvas     webgl
========== ========== ==========
   1000     227±20ms   228±20ms
  10000     226±10ms   236±20ms
 100000     301±30ms   312±20ms
 1000000    844±40ms   879±20ms
 10000000   6.68±0s    6.68±0s
========== ========== ==========

@droumis self-requested a review, July 18, 2023 12:23
@droumis (Collaborator) commented Jul 18, 2023

This is great!

  1. I think we need to add python to the env in order to have pip available.

  2. With asv run I'm running into an error about building bokeh and a missing node:

 Building wheels for collected packages: bokeh
     Building wheel for bokeh (pyproject.toml): started
     Building wheel for bokeh (pyproject.toml): finished with status 'error'
   Failed to build bokeh
   STDERR -------->
     Running command git clone --filter=blob:none --quiet https://github.com/bokeh/bokeh.git /private/var/folders/pp/zp63v9q50m79py9t866mvg3h0000gp/T/pip-install-v4espest/bokeh_75656b3e0a9e49678d8770d415dfb785
     Running command git checkout -b ianthomas23/log_render_count --track origin/ianthomas23/log_render_count
     Switched to a new branch 'ianthomas23/log_render_count'
     branch 'ianthomas23/log_render_count' set up to track 'origin/ianthomas23/log_render_count'.
     error: subprocess-exited-with-error

     × Building wheel for bokeh (pyproject.toml) did not run successfully.
     │ exit code: 1
     ╰─> [11 lines of output]
         /private/var/folders/pp/zp63v9q50m79py9t866mvg3h0000gp/T/pip-build-env-v8l3g1b3/overlay/lib/python3.9/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
           config = read_configuration(filepath, True, ignore_option_errors, dist)
         running bdist_wheel
         running build

         Building BokehJS... Failed.

         ERROR: 'node make build' failed to execute:

             [Errno 2] No such file or directory: 'node'

         [end of output]

     note: This error originates from a subprocess, and is likely not a problem with pip.
     ERROR: Failed building wheel for bokeh
   ERROR: Could not build wheels for bokeh, which is required to install pyproject.toml-based projects

·· Failed to build the project and import the benchmark suite.

@droumis (Collaborator) commented Jul 18, 2023

Before merging, the branches section of asv.conf.json will need to be changed to

 "branches": ["benchmark_framework"],

don't we want to change it to "main" before merging?

@ianthomas23 (Collaborator, Author)

Ah yes, I am set up to compile bokeh on all my dev machines. Try this:

conda create -n hvneuro_asv python=3.11
conda activate hvneuro_asv
conda install -c conda-forge asv virtualenv "nodejs>=18"

It should fix the two issues you've seen, but there may be more.

@ianthomas23 (Collaborator, Author)

don't we want to change it to "main" before merging?

Yes!

@ianthomas23 (Collaborator, Author)

If this goes well, we could avoid compiling bokeh from source by either building a dev release containing the extra code that we want, or perhaps by including it in the default branch but only enabling it via an environment variable.

@droumis (Collaborator) commented Jul 18, 2023

hmm, the benchmarks are failing for me:

❯ asv run
· Creating environments
· Discovering benchmarks
· Running 1 total benchmarks (1 commits * 1 environments * 1 benchmarks)
[  0.00%] · For hvneuro commit aa509ee0 <benchmark_framework>:
[  0.00%] ·· Benchmarking virtualenv-py3.11-playwright
[ 50.00%] ··· Running (timeseries.Timeseries.time_values--).
[100.00%] ··· timeseries.Timeseries.time_values                                                                     failed
[100.00%] ··· ========== ======== ========
              --           output_backend
              ---------- -----------------
                  n       canvas   webgl
              ========== ======== ========
                 1000     failed   failed
                10000     failed   failed
                100000    failed   failed
               1000000    failed   failed
               10000000   failed   failed
              ========== ======== ========

Update: apparently I can use asv run -e to show the traceback. For this it was "Address already in use". I'm manually changing the port and trying again.

@droumis (Collaborator) commented Jul 18, 2023

Recap of debugging session:

[screenshot from debugging session]

It looks like there are multiple renders for some unknown reason, which likely leads to the render count mismatch error.

@ianthomas23 (Collaborator, Author)

It seems that debugging without using headless mode isn't much help as there are extra renders that you don't get in headless mode.

I have modified the Bokeh ianthomas23/log_render_count branch to output the unique figure id for each canvas render. I have modified this PR to only record the console messages for the specific figure of interest, and if there are multiple Bokeh sessions or documents open an error will be raised. To use this it will be necessary to rebuild the benchmarking virtual environment, which is most easily accomplished by deleting the benchmarks/.asv directory.
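For readers unfamiliar with Playwright's console hook, here is a minimal self-contained sketch of the mechanism. This is not the PR's actual callback (which additionally filters by figure id); it just shows how console messages reach Python.

from playwright.sync_api import sync_playwright

def _console_callback(msg):
    # console.log arguments arrive as JSHandles and must be resolved
    # with json_value(); their positions depend on what the page logs,
    # which is exactly what the debugging below turns up.
    values = [arg.json_value() for arg in msg.args]
    print("console:", values)

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.on("console", _console_callback)
    page.goto("data:text/html,<script>console.log('render', 1)</script>")
    page.wait_for_timeout(100)  # give the message time to arrive
    browser.close()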

This all works fine for me locally. If it doesn't work for @droumis then hopefully we will see an error message about multiple sessions and/or documents, which we should (hopefully) be able to fix by using a unique port for each benchmark.
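If the unique-port route is needed, one minimal way to get one per benchmark process is to let the OS pick it (a hypothetical helper, not part of this PR):

import socket

def free_port() -> int:
    # Bind to port 0 so the OS assigns an unused port, then report it;
    # the Bokeh server for this benchmark would be started on that port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]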

@droumis (Collaborator) commented Jul 25, 2023

I ran into the following error:

                 File "/Users/droumis/src/neuro/benchmarks/benchmarks/base.py", line 34, in _console_callback
                   count = int(args[1].json_value())
                               ^^^^^^^^^^^^^^^^^^
               AttributeError: 'str' object has no attribute 'json_value'
               WARNING:bokeh.server.views.ws:Failed sending message as connection was closed

So I made headless = False and added a pause to record the console again:
[screenshot of the recorded console output]

(Now I turn headless mode back on)

And now I see that the count is actually the third arg, so I changed

count = int(args[1].json_value())

to

count = int(args[2].json_value())

But then I see:

count = int(args[2].json_value())
            ^^^^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'json_value'

sooooo I made the line this instead:

count = int(args[2])

but then I hit:

File "/Users/droumis/src/neuro/benchmarks/.asv/env/4d2756fde06817c5134509ea73f79b8d/lib/python3.11/site-packages/playwright/_impl/_connection.py", line 97, in inner_send
                   result = next(iter(done)).result()
                            ^^^^^^^^^^^^^^^^^^^^^^^^^
               playwright._impl._api_types.Error: Target page, context or browser has been closed

So it seems that either the teardown is getting called early, or the context/browser is not yet active when Playwright thinks it is?

@droumis (Collaborator) commented Jul 25, 2023

Progress! Pretty similar to your M1. It probably didn't help that I had a bunch of other things running on my computer during this. When I use asv run -e I see issues, but with just asv run it reports the completed runs like below:

[screenshot of completed benchmark runs]

@ianthomas23 (Collaborator, Author)

This now removes the console callback at the end of each benchmark run, so no errors should be reported. There is a 10 millisecond wait for any part-processed console messages to be completely handled; 5 ms works on my dev system, but I think 10 ms is a safer default.

The catch_console boolean flag is now a member variable of the Base class and defaults to True as I can't imagine a benchmark here not using the console callback.
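Roughly, the teardown behaviour described above would take this shape (an assumed sketch; the real code lives in benchmarks/benchmarks/base.py):

def teardown(self, *params):
    if self.catch_console:
        # Stop listening first, then give any in-flight console
        # messages ~10 ms to finish being processed.
        self._page.remove_listener("console", self._console_callback)
        self._page.wait_for_timeout(10)
    self._browser.close()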

I have also changed the README to recommend using asv run -e to show stderr, as it is probably best for us to identify and fix errors as soon as they occur, even if they happen in the teardown and hence do not affect the benchmark results. Currently I don't see any errors, but for large n I see this warning:

WARNING:bokeh.server.views.ws:Failed sending message as connection was closed

This isn't really a concern, as it is difficult to stop either the server or the browser without the other complaining that its friend isn't listening any more. It can be filtered out using the BOKEH_PY_LOG_LEVEL env var, e.g.

BOKEH_PY_LOG_LEVEL="error" asv run -e

We could do this programmatically within the benchmarks at some future date if desired. At this stage I would rather see the warnings.
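Doing it programmatically would amount to something like the following, using the standard logging module (roughly equivalent in effect to the env var; untested against this suite):

import logging

# Silence bokeh's Python-side logger below ERROR before the benchmarks start.
logging.getLogger("bokeh").setLevel(logging.ERROR)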

Subject to approval from @droumis, this is now a minimum viable demo of benchmarking and I would be happy for this to be merged as is and we can add other benchmarks in new PRs.

@droumis (Collaborator) commented Jul 26, 2023

Great work! I'll merge

@droumis merged commit fdce0a1 into holoviz-topics:main, Jul 26, 2023
@ianthomas23 deleted the benchmark_framework branch, July 26, 2023 14:18