
Add a subsuite of unit tests for ML package and remove them from 'coverage' GHA workflow. #31255

Merged: 9 commits, May 13, 2024

Conversation

@tvalentyn (Contributor) commented on May 10, 2024:

Add a subsuite of unit tests for ML package and run it with all known optional ML dependencies.

The suite should provide an early detection signal when an ML dependency publishes a new release and we are no longer compatible with its API, though it could also create noise for us. Given that the ML dependencies are optional for Beam, we don't set version bounds on these test dependencies. We install a lot of (direct and transitive) dependencies in the ml_test extra and leave dependency resolution up to pip. If dependency resolution becomes messy, we have the option to tweak the dependencies or introduce more specialized extras.
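
For illustration, the extra could be declared along these lines in sdks/python/setup.py (a hedged sketch with placeholder package names; the actual list in this PR may differ):

    # Hypothetical sketch, not this PR's actual diff: an 'ml_test' extra
    # with deliberately unpinned ML dependencies, leaving resolution to pip.
    from setuptools import setup, find_packages

    setup(
        name='apache-beam',
        packages=find_packages(),
        extras_require={
            'ml_test': [
                # Placeholder examples of optional ML dependencies;
                # no version bounds, as described above.
                'tensorflow',
                'torch',
                'transformers',
            ],
        },
    )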

Compatibility testing of Beam against specific, fine-grained versions of particular dependencies is done in a separate postcommit suite, .github/workflows/beam_PostCommit_Python_Dependency.yml.

Moving these tests out of the 'coverage' suite reduces the runtime of the code coverage GHA workflow. Tests that need extra ML dependencies were not included in code coverage tests previously; we keep it that way for now to reduce coverage runtime, but we can reconsider if we find that signal valuable.

I exclude the apache_beam/ml tests from the main precommit suite, since the new suite includes the same tests plus some unit tests that were skipped when the optional dependencies are not installed. If this test suite becomes too unstable, we might have to add apache_beam/ml (without extra deps) back to the precommit suite.
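
For context, unit tests in the ML package commonly guard optional imports with a skip pattern along these lines (an illustrative sketch, not code from this PR); installing the ml_test extra makes such tests run instead of being skipped:

    # Illustrative sketch of the optional-dependency guard pattern;
    # the names below are placeholders, not code from this PR.
    import unittest

    try:
        import torch  # optional ML dependency
    except ImportError:
        torch = None


    @unittest.skipIf(torch is None, 'torch is not installed')
    class TorchSmokeTest(unittest.TestCase):
        def test_tensor_roundtrip(self):
            # Runs only when torch is available, e.g. via the ml_test extra.
            self.assertEqual(torch.tensor([1, 2]).tolist(), [1, 2])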

Adding this suite exposed several tests that were not running or not working properly (#31254, #31252, #31287, #31294) and reduced the 'coverage' suite run to under 30 minutes: https://github.com/apache/beam/actions/runs/9054795242/job/24875055178?pr=31255


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@tvalentyn changed the title from "Add a subsuite of unit tests for ML package" to "Add a subsuite of unit tests for ML package and remove them from 'coverage' test suite." on May 11, 2024
@tvalentyn changed the title from "Add a subsuite of unit tests for ML package and remove them from 'coverage' test suite." to "Add a subsuite of unit tests for ML package and remove them from 'coverage' GHA workflow." on May 11, 2024
@tvalentyn marked this pull request as ready for review on May 13, 2024 15:47
@tvalentyn (Author) commented:

cc: @yathu @damccorm @jrmccluskey

PR bot (Contributor) commented:

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment "assign set of reviewers".

@damccorm (Contributor) left a comment:

LGTM, minus a few minor comments

Quoted workflow snippet (trigger configuration):

    on:
      pull_request_target:
        branches: [ "master", "release-*" ]
        paths: [ "model/**","sdks/python/**","release/**", 'release/trigger_all_tests.json', '.github/trigger_files/beam_PreCommit_Python_ML.json']
Contributor (reviewer) commented:

No-op right now, but we may eventually consider making this only trigger on ml changes

@tvalentyn (Author) replied:

I've also seen anecdotal evidence of pytest-testmon being used to run only the tests that depend on a change: https://stackoverflow.com/questions/67398655/run-only-tests-which-depend-on-the-change. We could give it a try and keep full postcommit coverage as a backup.
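
For reference, pytest-testmon tracks which tests are affected by code changes and reruns only those; a minimal invocation could look like this (a sketch assuming the plugin is installed; the path is illustrative):

    # Minimal sketch: run only change-affected tests with pytest-testmon.
    # Assumes 'pip install pytest-testmon' has been done; not part of this PR.
    import pytest

    # --testmon selects tests impacted by changes since the plugin's last
    # recorded run (state is kept in a local .testmondata file).
    pytest.main(['--testmon', 'apache_beam/ml/'])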

Quoted tox.ini snippet:

    setenv =
    extras = test,gcp,dataframe,ml_test
    commands =
        bash {toxinidir}/scripts/run_pytest.sh {envname} "{posargs}"
Contributor (reviewer) commented:

Should this even accept posargs? Won't it always be apache_beam/ml?

@tvalentyn (Author) replied:

Yeah, we can configure the path here instead of in the GHA yaml. It would be less consistent with the other targeted GHA suites, but still straightforward.

@tvalentyn (Author) followed up:

I took another look; I'd rather keep it as is, to keep the configuration uniform across test suites.

Quoted workflow snippet:

    gradle-command: :sdks:python:test-suites:tox:py${{steps.set_py_ver_clean.outputs.py_ver_clean}}:testPy${{steps.set_py_ver_clean.outputs.py_ver_clean}}ML
    arguments: |
      -Pposargs=apache_beam/ml/ \
      -PpythonVersion=${{ matrix.python_version }}
Contributor (reviewer) commented:

Noted elsewhere, but it is odd that we have to specify this for testPy311ML - I'd assume that's implied by the task name

@tvalentyn (Author) replied:

yes, that shouldn't be necessary.

@tvalentyn (Author) commented:

done

@tvalentyn (Author) commented:

Looks like this was necessary to make sure the virtual environment that builds the sdist is created with a Python version that is actually installed on the GHA workers.

@tvalentyn commented on sdks/python/tox.ini, May 14, 2024 (thread outdated, resolved).
PR bot (Contributor) commented:

Assigning reviewers. If you would like to opt out of this review, comment "assign to next reviewer":

R: @damccorm for label python.
R: @damccorm for label build.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).
