[REVIEW] Refactor DBSCAN to use ml-prims. #44
Conversation
I tagged it as in progress while we look into the scalability issue. I'll also try running your branch on Monday :)
Sure, I've incorporated the latest changes to the ml-prims code, including the new distance API. I also removed unused code and merged the separate DBSCAN run paths for the Python and Googletest APIs.
Force-pushed from 4bf0b15 to c5c3d99
…ad of compiling when cythonizing
…meantime. This way the other cleanup can be merged while we work on the bug in algo6
…using results that differed slightly from sklearn
Conflicts: .gitignore CHANGELOG.md Dockerfile README.md cuML/CMakeLists.txt docs/source/conf.py
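On the "results that differed slightly from sklearn" point above: cluster label IDs are arbitrary, so two DBSCAN runs can produce identical clusterings while assigning different integer labels. A minimal sketch of a label-permutation-invariant comparison (using scikit-learn's DBSCAN on synthetic data as a stand-in; the dataset, `eps`, and `min_samples` values here are illustrative assumptions, not taken from this PR):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for the rows clustered in the thread.
X, _ = make_blobs(n_samples=500, centers=3, cluster_std=0.5, random_state=0)

# Reference clustering.
ref = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

# Clustering a shuffled copy of the same points can assign different
# label IDs to the same clusters, so raw labels may not match.
perm = np.random.RandomState(1).permutation(len(X))
shuffled = DBSCAN(eps=0.5, min_samples=5).fit_predict(X[perm])

# Undo the shuffle, then compare with a score that ignores label renaming.
unshuffled = np.empty_like(shuffled)
unshuffled[perm] = shuffled
score = adjusted_rand_score(ref, unshuffled)
print(score)  # close to 1.0 when the clusterings agree up to label renaming
```

A comparison like this helps separate genuine output differences (e.g. border-point assignment, which is order-dependent in DBSCAN) from harmless label-ID differences.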
This is ready to be reviewed & merged.
This PR means that PR #47 is no longer needed and we can just close it, right?
Correct.
PR is good, pending test runs on CUDA 10.
This merge request encompasses a very large body of work, and I have squashed the commits to make it easier for the reviewers.
The tests all pass; however, the DBSCAN Jupyter notebook seems to indicate a bug that could be causing a scalability issue.
The following error prints when I attempt to run the cuml DBSCAN training on 1000 rows of the mortgage dataset: