
[enhancement] add sklearnex version of validate_data, _check_sample_weight #2177

Merged — 147 commits merged into uxlfoundation:main on Dec 10, 2024

Conversation

@icfaust (Contributor) commented Nov 20, 2024

Description

This is another interim PR toward introducing the new onedal finiteness checker into the sklearnex estimator workflows. It is not yet used by any of the estimators, so performance benchmarks are not necessary. This PR focuses on making sure that the inputs and outputs of validate_data and _check_sample_weight are respected for sycl_usm_ndarray types, and that the new finiteness checker is properly called and yields correct results across a range of scenarios. Keeping the estimators untouched also minimizes the review burden, since changing all the estimators at once would be a large change.

The new process for all estimators will be as follows:

  • All estimators will call validate_data and _check_sample_weight exactly once in sklearnex, in the _onedal_* methods called by device_offload's dispatch.
  • All estimators will call assert_all_finite nowhere else than in validate_data or _check_sample_weight, unless an operation before the oneDAL backend can yield an inf/NaN (this is a strict condition and is expected to be extremely uncommon and hard to justify).
  • Calls to check_array anywhere in the onedal or sklearnex folders must have assert_all_finite checks turned off (see the sketch below).
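
A minimal sketch of this call pattern, assuming the import path sklearnex.utils.validation implied by this PR's title; the estimator class, method bodies, and helper name are hypothetical, not taken from the diff:

```python
import numpy as np
from sklearn.utils import check_array

# Assumed import path based on this PR's title; not verbatim from the diff.
from sklearnex.utils.validation import validate_data, _check_sample_weight


class SomeEstimator:  # hypothetical estimator, for illustration only
    def _onedal_fit(self, X, y, sample_weight=None, queue=None):
        # The single place where finiteness is asserted: validate_data
        # dispatches to the new oneDAL finiteness checker internally,
        # so assert_all_finite appears nowhere else.
        X, y = validate_data(self, X, y, dtype=[np.float64, np.float32])
        if sample_weight is not None:
            sample_weight = _check_sample_weight(sample_weight, X)
        # ... hand X/y/sample_weight to the oneDAL backend ...
        return self


def _pre_backend_check(X):
    # Any other check_array call in the onedal/sklearnex folders must turn
    # the finiteness check off (the keyword is force_all_finite before
    # scikit-learn 1.6 and ensure_all_finite from 1.6 on).
    return check_array(X, force_all_finite=False)
```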

A follow-up PR will create a design test for this and will introduce the new validate_data in one estimator. The remaining estimators will be migrated in individual PRs due to the depth of the changes.


The PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are closed. This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way. For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should have them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving the PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended the benchmarking suite and provided a corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

@icfaust (Contributor Author) commented Dec 5, 2024

> Any perf results to share? (doesn't even have to be full benchmarks but even a large BasicStatistics single GPU run would indicate progress)

Good question. I am purposefully not including this in any estimators at this point to speed up the review/merging of the PR: there will be performance benchmarks for #2209 #2207 #2206 #2201 and #2189. GPU performance improvements will occur when array_api support in the dispatch function is included (so unfortunately not yet; some aspects of those PRs must come from #2096).

@icfaust (Contributor Author) commented Dec 5, 2024:

/intelci: run

@icfaust (Contributor Author) commented Dec 5, 2024:

/intelci: run

A reviewer (Contributor) commented on this from_table call:

```python
    X_table, sua_iface=sua_iface, sycl_queue=X.sycl_queue, xp=xp
)
self.y_attr_ = from_table(
    y_table, sua_iface=sua_iface, sycl_queue=X.sycl_queue, xp=xp
)
```

Is it OK that y_attr_ goes with X's queue?

Also, what happens if X and y are from different namespaces and have different sua_iface? For example, X from dpnp and y from numpy. What is expected to happen in this case?

Suggested change:

```diff
-    y_table, sua_iface=sua_iface, sycl_queue=X.sycl_queue, xp=xp
+    y_table, sua_iface=sua_iface, sycl_queue=y.sycl_queue, xp=xp
```

@icfaust (Contributor Author) replied:

This was a failure in the original implementation, though the sklearnex backend is very fuzzy about this (not standardized). For questions about from_table I would ask @samir-nasibli.
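
For illustration, a hypothetical guard for the mixed-input case raised above; the helper name and the duck-typing on .sycl_queue are assumptions, not part of this PR:

```python
def _check_matching_queues(X, y):
    # SYCL USM arrays (dpctl/dpnp) expose .sycl_queue; NumPy arrays do not,
    # so both attributes are None for pure host inputs and the check passes.
    qx = getattr(X, "sycl_queue", None)
    qy = getattr(y, "sycl_queue", None)
    if qx != qy:
        raise ValueError("X and y must come from the same namespace/queue")
```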

Comment on lines 228 to 232:

```python
if dispatch:
    assert type(X) == type(
        X_array
    ), f"validate_data converted {type(X)} to {type(X_array)}"
assert type(X) == type(X_out), f"from_array converted {type(X)} to {type(X_out)}"
```

A reviewer (Contributor) commented:

I guess y needs to be checked here and in 'else' branch as well.

@icfaust (Contributor Author) replied:

done
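
A hedged sketch of the requested y checks, mirroring the X assertions above; the variable names y_array and y_out are assumptions, and the 'else' branch would mirror its X counterpart the same way:

```python
if dispatch:
    assert type(y) == type(
        y_array
    ), f"validate_data converted {type(y)} to {type(y_array)}"
assert type(y) == type(y_out), f"from_array converted {type(y)} to {type(y_out)}"
```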

@icfaust (Contributor Author) commented Dec 5, 2024:

/intelci: run

@icfaust requested review from ethanglaser and Vika-F, December 5, 2024 10:18
@ethanglaser (Contributor) commented Dec 5, 2024

> Any perf results to share? (doesn't even have to be full benchmarks but even a large BasicStatistics single GPU run would indicate progress)

> I am purposefully not including this in any estimators at this point to speed the review/merging of the PR: there will be performance benchmarks for #2209 #2207 #2206 #2201 and #2189. [...]

Are we expecting perf to be on par with what we were seeing from #2153 for the respective algorithms once the other PRs are finalized?

@icfaust (Contributor Author) commented Dec 5, 2024:

/intelci: run

@icfaust (Contributor Author) commented Dec 5, 2024:

/intelci: run

@icfaust (Contributor Author) commented Dec 6, 2024:

/intelci: run

@Alexsandruss dismissed ahuber21's stale review on December 6, 2024 11:22: "Comments were addressed."

@icfaust (Contributor Author) commented Dec 9, 2024:

/intelci: run

@icfaust merged commit 95bd1ea into uxlfoundation:main on Dec 10, 2024, with 24 of 27 checks passed.