Execute all notebooks / clear notebook output #1246
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff            @@
##           develop    #1246    +/-  ##
==========================================
- Coverage    84.31%   84.22%   -0.09%
==========================================
  Files          151      151
  Lines        12331    12331
==========================================
- Hits         10397    10386     -11
- Misses        1934     1945     +11
==========================================
```
I reduced the notebook output and run time by tweaking small things such as the number of starts, removing PEtab model compilation output, and adding seeds for reproducibility. Changes per notebook:

- `amici.ipynb` -- TODO, not done yet
- `censored.ipynb` -- reduce n_starts 10 -> 3
- `getting_started.ipynb` -- n_starts 100 -> 10 (do we need more for a nicer waterfall plot?), added seed for reproducibility, Fides verbose=logging.ERROR, create_problem verbose=False (TODO amici.swig_wrappers logging)
- `hierarchical.ipynb` -- only change is create_problem with verbose=False, set a seed for reproducibility
- TODO rest...
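The pattern applied across the notebooks can be sketched as follows. This is illustrative only (the names `N_STARTS` and the `"fides"` logger name are assumptions, not the actual notebook code): seed the random number generator once at the top of the notebook so reruns produce byte-identical output, cut the number of starts, and silence chatty loggers down to errors.

```python
import logging
import random

# Seed once, at the top of the notebook, so re-executing it for the docs
# build always produces the same figures and text output.
random.seed(1912)

# Reduced from 100; a handful of starts is enough for an illustrative
# waterfall plot while keeping the build fast.
N_STARTS = 10

# Silence a verbose optimizer logger (e.g. Fides) down to errors only,
# so the executed notebook does not accumulate pages of log output.
logging.getLogger("fides").setLevel(logging.ERROR)

# Deterministic start points thanks to the seed above.
start_points = [random.uniform(-5, 5) for _ in range(N_STARTS)]
```

The same idea carries over to NumPy-based code (`np.random.seed(...)`) used by the actual notebooks.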
I made the following changes in the example notebooks:

- `amici.py` -- verbose=False on model compilation, seed for reproducibility, clear outputs
- `juliy.py` -- number of starts 100 -> 20, seed for reproducibility, samples 10000 -> 2000, clear outputs
- `nonlinear_monotone` -- verbose=False on model compilation
- `ordinal` -- verbose=False on model compilation
- `petab_import` -- verbose=False on model compilation
- `sampler_study` -- verbose=False on model compilation
- `store` -- verbose=False on model compilation, Fides verbose ERROR
- `synthetic_data` -- n_starts 100 -> 20, verbose=False on model compilation
- `workflow_comparison` -- verbose=False on model compilation, n_starts 25 -> 10
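Clearing stored outputs is usually done with `jupyter nbconvert --ClearOutputPreprocessor.enabled=True --to notebook --inplace <notebook>`. Since a `.ipynb` file is just JSON, the operation can also be sketched dependency-free; the function below is a minimal stand-in, not the nbconvert implementation:

```python
import json

def clear_outputs(nb: dict) -> dict:
    """Strip outputs and execution counts from a parsed .ipynb dict.

    Minimal stand-in for nbconvert's ClearOutputPreprocessor: code cells
    keep their source but lose stored results, shrinking the file that
    gets committed to GitHub.
    """
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

# Tiny example notebook: one executed code cell, one markdown cell.
nb = {
    "cells": [
        {
            "cell_type": "code",
            "execution_count": 3,
            "source": "1 + 1",
            "outputs": [
                {"output_type": "execute_result", "data": {"text/plain": "2"}}
            ],
        },
        {"cell_type": "markdown", "source": "# Title"},
    ],
    "nbformat": 4,
}
cleaned = clear_outputs(json.loads(json.dumps(nb)))
```

Markdown cells are untouched; only generated output is dropped.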
The docs test is very slow and ultimately times out at this point:
I'm not sure what the issue is with these.
Thanks for the cleanup so far. Don't worry about this message, not relevant here. I silenced it. Further opportunity for optimization: getting_started.ipynb:
... we probably can reduce n_starts there. Same notebook: computing profiles for 3 parameters should be sufficient, we don't need all of them.
If this is the point I think it is, I left it at 100 start points so that the global optimum found is good enough to yield nice profiles. It doesn't take that long to run (relative to the profiling). But running only 3 profiles is definitely better.
See also #1159.
To speed things up:
To avoid getting held up for too long, I'd suggest pulling a couple of changes out of this PR (started: #1268) and merging what's already good/fast enough.
There still seems to be too little time for the readthedocs build. It stops at
Will take a look at the rest of the notebooks, specifically the sampling ones; I have a feeling they are taking too long to build.
New changes:

- `sampling_diagnostics` -- n_samples 10000 -> 1000 on both sampling runs
- `sampler_study` -- n_samples 1e4 -> 1e3 where an extremely good sampling result is not necessary, add seed for reproducibility (to get a consistently low burn-in, so that 1e3 samples are enough)
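Why seeding matters for cutting n_samples can be shown with a toy sampler. The sketch below is not the notebooks' actual sampler (they use pyPESTO's sampling machinery); it is a minimal random-walk Metropolis on a standard normal target, illustrating that a fixed seed makes the chain, and hence its burn-in, fully reproducible:

```python
import math
import random

def metropolis(n_samples: int, seed: int) -> list:
    """Toy random-walk Metropolis chain targeting a standard normal.

    A seeded private RNG makes every run identical, so a short chain
    (e.g. 1e3 instead of 1e4 samples) reliably shows the same, short
    burn-in in the rendered docs.
    """
    rng = random.Random(seed)
    x = 0.0
    chain = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, 1.0)
        # Accept with probability min(1, pi(proposal)/pi(x)),
        # where pi is the N(0, 1) density; compare in log space.
        if math.log(rng.random()) < 0.5 * (x * x - proposal * proposal):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis(1000, seed=42)
```

With an unseeded sampler, an occasional run with a long burn-in would make the 1e3-sample plots look bad; the seed pins down a run where the short chain suffices.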
👍
Thanks for cleaning this up.
…tor (#1245)

* Hier. par. plotting for fides
* hierarchical 'hess' bug
* Fix storing of inner parameters in result: inner parameters were not being stored when the result was stored. This was due to a problem with putting a dictionary into an HDF5 format. I've transformed it into 2 lists now: an INNER_PARAMETERS_VALUES and an INNER_PARAMETER_NAMES list.
* Adjusted documentation of hierarchical opt.
* Fix some type issues
* Small fix
* Save only values, not names
* Extend to all optimizers
* Remove redundant
* Small update
* Update parameters.py
* Small typo
* Fix some bugs, use lists for saving instead
* Fix tests
* Update hierarchical parameter plot test
* Large-scale renaming and restructuring:
  - Restructuring of the hierarchical scaling+offset method. The base classes used by all non-quantitative and semi-quantitative data types are put in the `base_...` files. All classes related to the scaling+offset method (for relative data) are moved to the `relative` folder, analogous to the other data types.
  - Renaming: the naming scheme now follows the data type, not the method. It's more direct and removes the ambiguity of `OptimalScaling` for ordinal data and `scaling_offset` for relative data.
    - Optimal scaling -> ordinal
    - Spline approximation -> semiquantitative
    - Scaling + offset -> relative
* Update base_problem.py
* Fix docstrings
* Daniel & Dilan review changes
* Small change
* More Dilan review changes
* Fix testing
* Fix base test
* Rename notebooks (again)
* Move inner pars to decorator
* Collect decorators for minimize
* Maren review changes
* Rename & fix notebooks
* Include notebook changes
* Fix tests
* Add output
* Include relative into Collector: included the relative calculator into the Inner calculator collector. Now relative data can be used together with any other non-quantitative data type. A lot of TODOs to still clean up. But it works nicely on all examples, except one which I think is a bad one (Boehm_mixed_test with nonlinearities: 1 spline, 1 relative, 1 known: very bad convergence... fits look okayish...)
* Small documentation update
* Documentation update
* Update the notebooks
* Update spline_approximation.py
* Change data_types from list to set
* Use add_sim_grad_to_opt_grad
* Rename to semiquantitative in example
* mode RES with only RELATIVE data
* Add petab data type validation
* Do TODOs
* Small TODO
* Fix api.rst module name changes
* Update CODEOWNERS module name changes
* Fix underline too short & api.rst
* Fix plotting routines
* Inline literal error
* Testing error
* Testing error 2
* Test literal error
* Fix missing indentation
* Fix inline
* Daniel, Fabian & Stephan review changes
* Notebook changes
* Dilan & Daniel review changes
* Change title of relative_data.ipynb
* Remove observableParameter from solvers
* Notebook changes from #1246
* Fabian review change

Co-authored-by: Lea@Mac <leaseep@gmail.com>
👍
All of the example notebooks are stored on GitHub with their generated output, which is a lot of figures and text, taking up storage unnecessarily. I've removed the output, but I still need to check that the notebook output is being generated for the online docs.
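If the docs are built with nbsphinx (an assumption; this is a common setup for Read the Docs projects with example notebooks), stripped notebooks can still render with figures because the build re-executes them. A sketch of the relevant `docs/conf.py` lines:

```python
# docs/conf.py (sketch, assuming the docs use nbsphinx)
extensions = [
    "nbsphinx",
]

# Re-execute notebooks at build time; with "always", any stored output is
# ignored and regenerated, so output-stripped notebooks still render their
# figures in the online docs.
nbsphinx_execute = "always"

# Fail the build loudly if a notebook errors, instead of silently
# publishing half-rendered pages.
nbsphinx_allow_errors = False
```

This is also where the build timeouts discussed above bite: with `"always"`, every notebook's run time counts against the Read the Docs build limit, which is exactly why reducing n_starts and n_samples helps.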