
[doc] Update install.rst and Misc. #1009

Merged
merged 3 commits into taichi-dev:master on May 19, 2020

Conversation

yuanming-hu
Member

@archibate archibate (Collaborator) left a comment


Thanks for fixing these!

@archibate archibate self-assigned this May 18, 2020
@zhai-xiao zhai-xiao (Contributor) left a comment


Great work! I really appreciate these fixes. Thanks 👍

docs/install.rst (outdated):

* Try adding ``export TI_USE_UNIFIED_MEMORY=0`` to your ``~/.bashrc``. This disables unified memory usage in CUDA backend.
This may be because your NVIDIA card is pre-Pascal and does not support `Unified Memory <https://www.nextplatform.com/2019/01/24/unified-memory-the-final-piece-of-the-gpu-programming-puzzle/>`_.
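The workaround quoted above is just an environment variable read by Taichi at startup. A minimal sketch of applying it from Python instead of ``~/.bashrc`` (assumption: the variable must be set before ``import taichi``; the Taichi lines are commented out since they require a local CUDA install):

```python
import os

# Disable unified memory in Taichi's CUDA backend. This must happen
# before `import taichi`, which reads the variable at initialization.
os.environ["TI_USE_UNIFIED_MEMORY"] = "0"

# import taichi as ti   # requires a Taichi install with a CUDA-capable GPU
# ti.init(arch=ti.cuda)

print(os.environ["TI_USE_UNIFIED_MEMORY"])  # → 0
```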
Contributor


I'd say "This might be due to the fact that ..."

Contributor


BTW, unified memory is not supported on the GTX 770? Maybe I remembered a lot of things wrong about CUDA...
So if I disable unified memory, what compute capability do I need to use Taichi?

@yuanming-hu yuanming-hu (Member, Author) May 19, 2020


> BTW, unified memory is not supported on the GTX 770? Maybe I remembered a lot of things wrong about CUDA...

NVIDIA has a fallback: https://devblogs.nvidia.com/unified-memory-cuda-beginners/ (see "What Happens on Kepler When I call cudaMallocManaged()?"). These GPUs don't support memadvise, though...

> So if I disable unified memory, what compute capability do I need to use Taichi?

Taichi doesn't have a hard requirement for that, but the GTX 9 series should work (without unified memory/adaptive memory pool allocation)... I haven't tested the GTX 7 series (which was released 7 years ago).

Contributor


Oh, thanks for that. I had a GTX 770 for a long time and tried unified memory as soon as it was available (I think with CUDA 6.0, but I'm not sure). I found it did a lot of unnecessary memory synchronization and I couldn't turn it off, so I decided never to use it again 😢

@yuanming-hu (Member, Author)

Thank you for the helpful suggestions! I fixed all of them. Let me merge this so that these changes will be reflected in v0.6.5.

@yuanming-hu yuanming-hu merged commit d5da3c0 into taichi-dev:master May 19, 2020