[Fix] add dtype attribute to python oneDAL table objects #2172
Conversation
I'm not understanding the issue. What would be fp64 when the data is fp32?
Our tables can ingest complex float32 types (https://github.com/intel/scikit-learn-intelex/blob/main/onedal/datatypes/utils/numpy_helpers.hpp#L71) and will return float32 types (https://github.com/intel/scikit-learn-intelex/blob/main/onedal/datatypes/utils/numpy_helpers.cpp#L34); our checks for dtype are weak. We assume that dtype can only be float32 or float64, which must be enforced at the very beginning of the code on the sklearnex side using validate_data (see Linear Regression for an example: https://github.com/intel/scikit-learn-intelex/blob/main/onedal/linear_model/linear_model.py#L46). We should use the datatype of the oneDAL table to describe how oneDAL is to operate.
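As an illustration of the ingestion behaviour described above (a sketch in NumPy only; the actual conversion happens in the linked C++ helpers):

```python
import numpy as np

# A complex64 array carries 32-bit real and imaginary components.
x = np.arange(4, dtype=np.complex64)

# Taking the real component yields a float32 array, which is what the
# table ends up holding. A naive "is the input float32 or float64?"
# check on the original array would not anticipate this.
real_part = x.real
print(real_part.dtype)  # float32
```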
Got it. But if complex numbers are the only issue, wouldn't it be better to create a helper function along the lines of "is_fp32_dtype" or so, and convert the data in Python before it reaches oneDAL? It'd avoid double copies or conversions when the inputs are not in array formats (e.g. arrow[complex64] -> numpy[complex64] -> array[float32]). Perhaps it could just use
Another question in that regard: could the dtype potentially be used as part of the fit condition checks for oneDAL algorithm support?
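A minimal sketch of the helper idea floated above. The name `is_fp32_dtype` comes from the comment; the conversion logic is an assumption about what "convert in Python before it reaches oneDAL" could look like, not the library's actual implementation:

```python
import numpy as np

def is_fp32_dtype(dtype) -> bool:
    """Hypothetical helper: report whether a dtype should map to a
    float32-backed oneDAL table (complex64 has 32-bit components)."""
    return np.dtype(dtype) in (np.dtype(np.float32), np.dtype(np.complex64))

def to_supported_float(arr):
    """Sketch: convert data on the Python side so the C++ layer never
    sees complex or non-floating dtypes, avoiding a second copy there."""
    arr = np.asarray(arr)
    if np.issubdtype(arr.dtype, np.complexfloating):
        # complex64 -> float32, complex128 -> float64
        arr = np.ascontiguousarray(arr.real)
    if arr.dtype not in (np.float32, np.float64):
        arr = arr.astype(np.float64)
    return arr
```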
onedal/datatypes/table.cpp
Outdated
@@ -72,6 +72,10 @@ ONEDAL_PY_INIT_MODULE(table) {
    const auto column_count = t.get_column_count();
    return py::make_tuple(row_count, column_count);
});
table_obj.def_property_readonly("dtype", [](const table& t){
- table_obj.def_property_readonly("dtype", [](const table& t){
+ table_obj.def_property_readonly("get_numpy_dtype", [](const table& t){
I suggest using this naming, or just numpy_dtype, to avoid confusion.
And again, from my point of view, for the tables API this seems unnecessary and looks like a workaround or a hack.
Description
Taken from #2126: convert_to_supported will set the data to float, but the fptype param may still be double. This is most simply solved by querying the data for its dtype at the parameter-setting stage. Ideally, fptype should instead be managed entirely in the backend; unfortunately, that change is very core to backend offloading and should be done more carefully.
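The fix described above can be sketched as follows. This is a hypothetical helper for illustration (the function name and the fptype string values "float"/"double" follow oneDAL convention, but the helper itself is not part of the PR):

```python
import numpy as np

def get_fptype(data) -> str:
    """Sketch: derive the fptype parameter from the data actually handed
    to the backend, rather than from the original user input, so a
    convert_to_supported step that downcast to float32 cannot leave
    fptype stuck at "double"."""
    if np.asarray(data).dtype == np.float32:
        return "float"
    return "double"
```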
PR should start as a draft, then move to ready for review state after CI is passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).
Checklist to comply with before moving PR from draft:
- PR completeness and readability
- Testing
- Performance