
Question about inference speed on H100 GPU #53

Closed
yunhzou opened this issue Nov 15, 2024 · 1 comment
Labels
question Further information is requested

Comments

@yunhzou

yunhzou commented Nov 15, 2024

Hi,
I am curious to learn the inference speed of AF3 when deployed on H100 GPUs. Could this be released? Thank you!

@Augustin-Zidek Augustin-Zidek added the question Further information is requested label Nov 18, 2024
@Augustin-Zidek (Collaborator)

Hi,

The inference speed on H100 should be at least as fast as on A100 (see https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md).

I recommend running a benchmark on your machine with the following numbers of tokens: 1024, 2048, 3072, 4096, 5120, and comparing the results to the A100 numbers we report.
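The benchmark suggested above can be sketched as a simple timing loop. This is a minimal illustration only: `run_inference` is a hypothetical placeholder (not part of the AlphaFold 3 API), and you would substitute your actual model invocation for each token count.

```python
import time

def run_inference(num_tokens: int) -> None:
    # Hypothetical placeholder: replace with your actual AlphaFold 3
    # inference call for an input of `num_tokens` tokens.
    time.sleep(0.001)

def benchmark(token_counts):
    """Time one inference per token count; returns {tokens: seconds}."""
    timings = {}
    for n in token_counts:
        start = time.perf_counter()
        run_inference(n)
        timings[n] = time.perf_counter() - start
    return timings

if __name__ == "__main__":
    # Token counts matching the A100 numbers reported in performance.md.
    for n, secs in benchmark([1024, 2048, 3072, 4096, 5120]).items():
        print(f"{n} tokens: {secs:.3f} s")
```

In practice you would also want to run several repeats per token count and discard the first (warm-up/compilation) run before comparing against the published A100 timings.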
