
Update to GCC 11 #5258

Merged
merged 6 commits into from
Mar 7, 2023

Conversation

@bdice (Contributor) commented Feb 28, 2023


This PR updates builds to use GCC 11.

@github-actions github-actions bot added the conda conda issue label Feb 28, 2023
@bdice (Contributor, Author) commented Mar 1, 2023


https://github.com/rapidsai/cuml/actions/runs/4297821777/jobs/7491310331#step:6:1107

Error log:

In function 'void UMAPAlgo::Optimize::optimize_params(T*, int, const T*, T*, ML::UMAPParams*, cudaStream_t, float, int) [with T = float; int TPB_X = 256]',
    inlined from 'void UMAPAlgo::Optimize::find_params_ab(ML::UMAPParams*, cudaStream_t)' at $SRC_DIR/cpp/src/umap/optimize.cuh:208:30:
$SRC_DIR/cpp/src/umap/optimize.cuh:167:1: error: 'void operator delete(void*, std::size_t)' called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
  167 |     delete grads_h;
      | ^   ~~~~~~~~~~
$SRC_DIR/cpp/src/umap/optimize.cuh: In function 'void UMAPAlgo::Optimize::find_params_ab(ML::UMAPParams*, cudaStream_t)':
$SRC_DIR/cpp/src/umap/optimize.cuh:156:25: note: returned from 'void* malloc(size_t)'
  156 |     T* grads_h = (T*)malloc(2 * sizeof(T));
      |                   ~~~~~~^~~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors

@bdice bdice marked this pull request as ready for review March 2, 2023 02:51
@bdice bdice requested review from a team as code owners March 2, 2023 02:51
@bdice (Contributor, Author) commented Mar 2, 2023


Waiting on #5259; once that is in, we can update this PR and CI should pass.

This is ready for review, but please do not merge until all RAPIDS repos are ready to migrate. (I do not have privileges to add the DO NOT MERGE label to prohibit merging.)

@jjacobelli jjacobelli added the 5 - DO NOT MERGE Hold off on merging; see PR for details label Mar 2, 2023
@raydouglass raydouglass added improvement Improvement / enhancement to an existing function non-breaking Non-breaking change and removed 5 - DO NOT MERGE Hold off on merging; see PR for details labels Mar 6, 2023
@raydouglass (Member) commented:

/merge

@wence- (Contributor) commented Mar 7, 2023


This didn't merge due to a single failing test, test_concat_memory_leak[large_clf0-regression]:
assert (5631447040 - 5630435328) < 1000000.0, which is false because 5631447040 - 5630435328 == 1011712. Is this a tolerance that could be relaxed?
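To make the failure concrete, here is the arithmetic from the log: the test tolerates up to 1e6 bytes of memory growth, and this run overshot that threshold by about 1.2%. The variable names below are illustrative, not the test's actual code.

```python
# Numbers taken from the reported assertion failure.
mem_before = 5630435328
mem_after = 5631447040
tolerance = 1000000.0

delta = mem_after - mem_before
assert delta == 1011712          # observed growth from the log
assert not (delta < tolerance)   # hence the leak check fails, by ~1.2%
```

A marginal overshoot like this is consistent with either a small genuine leak or run-to-run allocator noise, which is why a slightly larger tolerance (or a flaky-test retry) is a plausible remedy.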

@rapids-bot rapids-bot bot merged commit af20bff into rapidsai:branch-23.04 Mar 7, 2023
@bdice (Contributor, Author) commented Mar 7, 2023


@wence- Apparently my last commit passed that threshold, which triggered the merge. That suggests it may be a flaky test.
