
[refactor] Enable adaptive block_dim selection for CPU backend #5190

Merged (3 commits) Jun 17, 2022

Conversation

qiao-bo
Contributor

@qiao-bo qiao-bo commented Jun 16, 2022

Related issues: #3750, #4541

For kernels with arch=ti.cpu, users tend to rely on the default block_dim setting. The default value is always 32 for the CPU backend, which leads to slow execution for simple kernels. This PR adds the option to adaptively select block_dim during codegen, with ti.loop_config(block_dim_adaptive=False) as an opt-out. This yields a 1.4x speedup for the simple reduction kernel mentioned in #4541:

@ti.kernel
def reduce_para() -> ti.f32:
    # v1 and v2 are 1D ti.field vectors defined elsewhere
    n = v1.shape[0]
    sum = 0.0
    # Opt out of adaptive selection; keep the fixed default block_dim
    ti.loop_config(block_dim_adaptive=False)
    for i in range(n):  # parallelized outermost range-for
        sum += v1[i] * v2[i]
    return sum
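The PR description does not spell out how the adaptive value is chosen. A minimal sketch of the idea in plain Python, under the assumption that the backend sizes blocks from the loop's trip count and the available worker threads (the function name, thresholds, and bounds below are all illustrative, not the actual codegen logic):

```python
import os

def choose_block_dim(trip_count: int,
                     default_block_dim: int = 32,
                     adaptive: bool = True) -> int:
    """Pick a block_dim for a parallel CPU range-for.

    Hypothetical heuristic: give each worker thread one large chunk
    instead of many 32-iteration blocks, so per-block scheduling
    overhead shrinks for simple kernels. Thresholds are illustrative.
    """
    if not adaptive:  # e.g. ti.loop_config(block_dim_adaptive=False)
        return default_block_dim
    num_threads = os.cpu_count() or 1
    per_thread = -(-trip_count // num_threads)  # ceiling division
    # Never go below the old default, and cap the block size.
    return max(default_block_dim, min(per_thread, 1024))
```

With a large loop this returns a block size well above 32, while `adaptive=False` preserves the old fixed behavior.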

@netlify

netlify bot commented Jun 16, 2022

Deploy Preview for docsite-preview ready!

Latest commit: 6b92a52
Latest deploy log: https://app.netlify.com/sites/docsite-preview/deploys/62ab3761c7bc480009b62361
Deploy Preview: https://deploy-preview-5190--docsite-preview.netlify.app

Collaborator

@bobcao3 bobcao3 left a comment


LGTM!

@qiao-bo qiao-bo merged commit 1ef6a31 into taichi-dev:master Jun 17, 2022
@qiao-bo qiao-bo deleted the block_dim branch June 17, 2022 02:06