Add notebook markers #95

Open
wants to merge 6 commits into main

Conversation

mcflugen

Apologies for the unsolicited pull request. I've added a small bit of functionality to nbmake that we have found to be useful and thought others might as well.

This pull request allows users to mark notebooks in a way similar to what one might do with @pytest.mark. Markers are included within a notebook's metadata much like how the allow_errors and timeout options are specified. As an example,

{
  "cells": [ ... ],
  "metadata": {
    "kernelspec": { ... },
    "execution": {
      "markers": ["slow"]
    }
  }
}

would mark this notebook as "slow".

Markers can be either an array of strings (e.g. ["slow", "memory_intensive"]) or a string of comma-separated markers (e.g. "slow,memory_intensive"). Notebooks can then be included or excluded from the tests through the -m option, just as with other pytest tests (e.g. pytest **/*.ipynb -m "slow and not memory_intensive" would only run notebooks that are slow but not memory intensive).
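
For illustration, here is a rough sketch of how list-valued or comma-separated markers could be normalized and attached to a collected notebook item. The helper names are hypothetical and not nbmake's actual code:

# Hypothetical sketch, not nbmake's implementation: normalize the "markers"
# metadata entry (list or comma-separated string) and attach the resulting
# markers to a collected item so pytest's -m expressions apply to it.
from typing import List, Union

import pytest


def parse_markers(markers: Union[str, List[str], None]) -> List[str]:
    """Accept a list of marker names or a comma-separated string."""
    if markers is None:
        return []
    if isinstance(markers, str):
        markers = markers.split(",")
    return [name.strip() for name in markers if name.strip()]


def apply_notebook_markers(item: pytest.Item, metadata: dict) -> None:
    """Add each declared marker (e.g. "slow") to the collected notebook item."""
    declared = metadata.get("execution", {}).get("markers")
    for name in parse_markers(declared):
        item.add_marker(getattr(pytest.mark, name))

Calling add_marker on the collected item is what lets pytest's -m expression machinery include or exclude the notebook like any other test.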

If this is something you think could be a part of nbmake (great!), I can add some tests and write a bit of documentation to go with this. If not, no worries.

@codecov

codecov bot commented Feb 10, 2023

Codecov Report

Merging #95 (3e86d46) into main (966edac) will decrease coverage by 0.37%.
The diff coverage is 89.58%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main      #95      +/-   ##
==========================================
- Coverage   92.46%   92.10%   -0.37%     
==========================================
  Files           4        4              
  Lines         146      190      +44     
==========================================
+ Hits          135      175      +40     
- Misses         11       15       +4     
Impacted Files Coverage Δ
src/nbmake/pytest_plugin.py 91.66% <83.33%> (-3.08%) ⬇️
src/nbmake/pytest_items.py 87.30% <84.61%> (-1.16%) ⬇️
src/nbmake/nb_run.py 94.62% <93.10%> (+0.77%) ⬆️

@alex-treebeard
Member

alex-treebeard commented Feb 10, 2023

Thanks @mcflugen for this contribution, it's a great idea for a feature.

Some notes:

  • py 3.7 tests are failing, please ignore
  • We should probably namespace metadata fields specific to nbmake like we do for mocks
  • Please add a test to ensure that this does not change the failure mode for notebooks that cannot be read (e.g. because they are not valid JSON); a rough sketch of such a test is below
    • I can help with this if you are not sure how
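
A rough sketch of such a test using pytest's built-in pytester fixture; the setup here is an assumption, not nbmake's existing test-suite layout:

# Hypothetical sketch using pytest's pytester fixture; enable it with
# `pytest_plugins = ["pytester"]` (e.g. in conftest.py). This is not
# nbmake's actual test layout.
import pytest

pytest_plugins = ["pytester"]


def test_invalid_notebook_does_not_break_collection(pytester: pytest.Pytester) -> None:
    # An .ipynb file that is not valid JSON.
    pytester.path.joinpath("broken.ipynb").write_text("this is not json")

    result = pytester.runpytest("--nbmake", "-m", "slow", "--collect-only")

    # Collection should not blow up with an internal error just because
    # marker lookup could not read the notebook.
    assert result.ret != pytest.ExitCode.INTERNAL_ERROR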

@mcflugen
Author

@alex-treebeard Ugh. So sorry for the delay on this!

We should probably namespace metadata fields specific to nbmake like we do for mocks

I've done this so that markers are now specified in the nbmake namespace,

{
  "cells": [ ... ],
  "metadata": {
    "kernelspec": { ... },
    "execution": {
      "nbmake": {
        "markers": ["slow"]
      }
    }
  }
}

Please add a test to ensure that this does not change the failure mode for notebooks that cannot be read (e.g. because they are not valid JSON)
I can help with this if you are not sure how

Maybe you could give me some more guidance on this?

@alex-treebeard
Member

alex-treebeard commented Oct 4, 2024

hey @mcflugen - thanks so much for continuing this contribution.

Many people use this library but very few help improve it.

The issue, specifically: I can see that during test collection (the phase before tests are executed), your new code will throw an unhandled exception if there is an invalid notebook.

My ask is that we err on the side of not breaking existing usage and ignore invalid notebooks at collection time (ideally printing a warning to the user).

In practice, this means you need to run this test, see that it's broken on your end, then push a fix.

Thanks again, and I'm looking forward to merging and releasing this once the tests pass.

EDIT: If you have a better idea for handling invalid json at collection time then happy to hear your approach.
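
For reference, a minimal sketch of the "ignore and warn" approach, reading the marker metadata defensively so an unreadable notebook degrades to "no markers" instead of raising; the function name and layout are placeholders, not nbmake's code:

# Hypothetical sketch: read marker metadata defensively at collection time.
# An unreadable or non-JSON notebook yields no markers plus a warning,
# rather than an unhandled exception.
import json
import warnings
from pathlib import Path
from typing import List


def read_notebook_markers(path: Path) -> List[str]:
    """Return the markers declared under metadata.execution.nbmake, or []."""
    try:
        metadata = json.loads(path.read_text())["metadata"]
    except (OSError, ValueError, KeyError) as exc:  # JSONDecodeError is a ValueError
        warnings.warn(f"nbmake: ignoring markers for unreadable notebook {path}: {exc}")
        return []
    markers = metadata.get("execution", {}).get("nbmake", {}).get("markers", [])
    if isinstance(markers, str):
        markers = markers.split(",")
    return [name.strip() for name in markers if name.strip()]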
