[doc] Update install.rst and Misc. #1009

Merged 3 commits on May 19, 2020
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -19,7 +19,7 @@
# -- Project information -----------------------------------------------------

project = 'taichi'
copyright = '2016, Taichi Developers'
copyright = '2020, Taichi Developers'
author = 'Taichi Developers'

version_fn = os.path.join(os.path.dirname(os.path.abspath(__file__)),
5 changes: 5 additions & 0 deletions docs/differentiable_programming.rst
@@ -58,4 +58,9 @@ A few examples with neural network controllers optimized using differentiable si

.. image:: https://github.com/yuanming-hu/public_files/raw/master/learning/difftaichi/diffmpm3d.gif

.. note::

Apart from differentiating the simulation time steps, you can also automatically differentiate (negative) potential energies to get forces.
Here is an `example <https://github.com/taichi-dev/taichi/blob/master/examples/mpm_lagrangian_forces.py>`_.

Documentation WIP.
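
To make the new note concrete, here is a minimal autodiff sketch. It is an illustration only: the quadratic potential, tensor names, and sizes are made up (this is not the linked ``mpm_lagrangian_forces.py``), and the 0.6-era ``ti.var``/``ti.Tape`` API is assumed.

.. code-block:: python

    import taichi as ti

    ti.init()

    N = 8
    x = ti.var(dt=ti.f32, shape=N, needs_grad=True)   # particle positions
    U = ti.var(dt=ti.f32, shape=(), needs_grad=True)  # total potential energy

    @ti.kernel
    def compute_U():
        for i in range(N):
            U[None] += 0.5 * x[i] ** 2  # a toy quadratic potential

    with ti.Tape(loss=U):
        compute_U()

    # force = -dU/dx; after the tape exits, x.grad holds dU/dx
    forces = [-x.grad[i] for i in range(N)]
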
13 changes: 3 additions & 10 deletions docs/faq.rst
@@ -1,14 +1,7 @@
Frequently asked questions
==========================

**Can a user iterate over irregular topology instead of grids, such as tetrahedra meshes, line segment vertices?**
These structures have to be represented using 1D arrays in Taichi. You can still iterate over it using `for i in x` or `for i in range(n)`.
However, at compile time, there's little the Taichi compiler can do for you to optimize it. You can still tweak the data layout to get different run time cache behaviors and performance numbers.
**Q:** Can a user iterate over irregular topology instead of grids, such as tetrahedral meshes, line segment vertices?

**Can potential energies be differentiated automatically to get forces?**
Yes. Taichi supports automatic differentiation.
We do have an `example <https://github.com/yuanming-hu/taichi/blob/master/examples/mpm_lagrangian_forces.py>`_ for this.

**Does the compiler backend support the same quality of optimizations for the GPU and CPU? For instance, if I switch to using the CUDA backend, do I lose the cool hash-table optimizations?**
Mostly. The CPU/GPU compilation workflow are basically the same, except for vectorization on SIMD CPUs.
You still have the hash table optimization on GPUs.
**A:** These structures have to be represented using 1D arrays in Taichi. You can still iterate over them using ``for i in x`` or ``for i in range(n)``.
However, at compile time, there's little the Taichi compiler can do for you to optimize it. You can still tweak the data layout to get different runtime cache behaviors and performance numbers.
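
As a sketch of the answer (the mesh size and array name are made up; the 0.6-era ``ti.var`` API is assumed):

.. code-block:: python

    import taichi as ti

    ti.init()

    n = 4                                      # number of tetrahedra
    indices = ti.var(dt=ti.i32, shape=n * 4)   # 4 vertex indices per tet, flattened to 1D

    @ti.kernel
    def visit_all():
        for i in indices:        # struct-for over the whole 1D array
            indices[i] += 1

    @ti.kernel
    def visit_range():
        for i in range(n * 4):   # an explicit range-for works too
            indices[i] += 1
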
2 changes: 1 addition & 1 deletion docs/global_settings.rst
@@ -7,7 +7,7 @@ Global settings
- To not use unified memory for CUDA: ``export TI_USE_UNIFIED_MEMORY=0``
- To specify pre-allocated memory size for CUDA: ``export TI_DEVICE_MEMORY_GB=0.5``
- Show more detailed log (TI_TRACE): ``export TI_LOG_LEVEL=trace``
- To specify which GPU to use for CUDA: ``export CUDA_VISIBLE_DEVICES=0``
- To specify which GPU to use for CUDA: ``export CUDA_VISIBLE_DEVICES=[gpuid]``
- To specify which Arch to use: ``export TI_ARCH=cuda``
- To print intermediate IR generated: ``export TI_PRINT_IR=1``
- To print verbose details: ``export TI_VERBOSE=1``
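
Since these variables are read when Taichi starts up, they can also be set from Python before the import; a small sketch (the particular values are arbitrary):

.. code-block:: python

    import os

    # must run before `import taichi`
    os.environ['TI_USE_UNIFIED_MEMORY'] = '0'
    os.environ['TI_DEVICE_MEMORY_GB'] = '0.5'
    os.environ['TI_LOG_LEVEL'] = 'trace'

    import taichi as ti

    ti.init(arch=ti.cuda)
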
44 changes: 23 additions & 21 deletions docs/gui.rst
@@ -3,7 +3,7 @@
GUI system
==========

Taichi has a built-in GUI system to help users display graphic results easier.
Taichi has a built-in GUI system to help users visualize results.


Create a window
@@ -20,7 +20,7 @@
Create a window.
If ``res`` is scalar, then width will be equal to height.

This creates a window whose width is 1024, height is 768:
This creates a window whose width is 1024 and height is 768:

::
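
The example itself is collapsed in this diff; a sketch of what it plausibly looks like, assuming the standard ``ti.GUI`` constructor:

.. code-block:: python

    import taichi as ti

    gui = ti.GUI('Window Title', res=(1024, 768))  # width 1024, height 768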

@@ -35,7 +35,8 @@ Create a window
Show the window on the screen.

.. note::
If `filename` is specified, screenshot will be saved to the file specified by the name. For example, this screenshots each frame of the window, and save it in ``.png``'s:
If ``filename`` is specified, a screenshot will be saved to the file specified by the name.
For example, the following saves frames of the window to ``.png``'s:

::

@@ -45,25 +46,25 @@
gui.show(f'{frame:06d}.png')
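
The surrounding loop is collapsed in this diff; a sketch of the full pattern (the frame count and the render step are placeholders):

.. code-block:: python

    import taichi as ti

    gui = ti.GUI('Frames', res=512)
    for frame in range(120):
        # ... update or render the image here ...
        gui.show(f'{frame:06d}.png')  # writes 000000.png, 000001.png, ...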


Paint a window
--------------
Paint on a window
-----------------


.. function:: gui.set_image(img)

:parameter gui: (GUI) the window object
:parameter img: (np.array or Tensor) tensor containing the image, see notes below

Set a image to display on the window.
Set an image to display on the window.

The pixel, ``i`` from bottom to up, ``j`` from left to right, is set to the value of ``img[i, j]``.
The window pixels, ``i`` from left to right, ``j`` from bottom to top, are set to the values of ``img[i, j]``.


If the window size is ``(x, y)``, then the ``img`` must be one of:
If the window size is ``(x, y)``, then ``img`` must be one of:

* ``ti.var(shape=(x, y))``, a grey-scale image

* ``ti.var(shape=(x, y, 3))``, where `3` is for `(r, g, b)` channels
* ``ti.var(shape=(x, y, 3))``, where `3` is for ``(r, g, b)`` channels

* ``ti.Vector(3, shape=(x, y))`` (see :ref:`vector`)

@@ -74,39 +75,40 @@

The data type of ``img`` must be one of:

* float32, clamped into [0, 1]
* ``float32``, range ``[0, 1]``

* float64, clamped into [0, 1]
* ``float64``, range ``[0, 1]``

* uint8, range [0, 255]
* ``uint8``, range ``[0, 255]``

* uint16, range [0, 65535]
* ``uint16``, range ``[0, 65535]``

* uint32, range [0, UINT_MAX]
* ``uint32``, range ``[0, 4294967295]``
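
A sketch of ``gui.set_image`` with a NumPy array (the window size and gradient are arbitrary):

.. code-block:: python

    import numpy as np
    import taichi as ti

    res = (512, 512)
    gui = ti.GUI('Image', res=res)

    # float32 in [0, 1]; img[i, j] colors the pixel at x = i, y = j
    img = np.zeros((res[0], res[1], 3), dtype=np.float32)
    img[:, :, 0] = np.linspace(0, 1, res[0])[:, None]  # red ramp along x

    for _ in range(100):
        gui.set_image(img)
        gui.show()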


.. function:: gui.circle(pos, color = 0xFFFFFF, radius = 1)

:parameter gui: (GUI) the window object
:parameter pos: (tuple of 2) the position of circle
:parameter color: (optional, RGB hex) color to fill the circle
:parameter radius: (optional, scalar) the radius of circle
:parameter pos: (tuple of 2) the position of the circle
:parameter color: (optional, RGB hex) the color to fill the circle
:parameter radius: (optional, scalar) the radius of the circle

Draw a solid circle.


.. function:: gui.circles(pos, color = 0xFFFFFF, radius = 1)

:parameter gui: (GUI) the window object
:parameter pos: (np.array) the position of circles
:parameter color: (optional, RGB hex or np.array of uint32) color(s) to fill circles
:parameter radius: (optional, scalar) the radius of circle
:parameter pos: (np.array) the positions of the circles
:parameter color: (optional, RGB hex or np.array of uint32) the color(s) to fill the circles
:parameter radius: (optional, scalar) the radius (radii) of the circles

Draw solid circles.

.. note::

If ``color`` is a numpy array, circle at ``pos[i]`` will be colored with ``color[i]``, therefore it must have the same size with ``pos``.
If ``color`` is a numpy array, circle at ``pos[i]`` will be colored with ``color[i]``,
so in this case ``color`` must have the same number of elements as ``pos``.
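
A sketch of per-circle colors (positions and palette are made up; ``pos`` is an n-by-2 array in ``[0, 1]`` window coordinates):

.. code-block:: python

    import numpy as np
    import taichi as ti

    gui = ti.GUI('Circles', res=512)

    pos = np.random.rand(50, 2).astype(np.float32)                 # 50 positions
    colors = np.random.randint(0, 0xFFFFFF, 50, dtype=np.uint32)   # one RGB hex per circle

    for _ in range(100):
        gui.circles(pos, color=colors, radius=3)
        gui.show()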


.. function:: gui.line(begin, end, color = 0xFFFFFF, radius = 1)
3 changes: 1 addition & 2 deletions docs/index.rst
@@ -61,9 +61,8 @@ The Taichi Programming Language

gui
global_settings
performance
acknowledgments
faq
acknowledgments


.. toctree::
58 changes: 32 additions & 26 deletions docs/install.rst
@@ -20,49 +20,55 @@ Taichi can be easily installed via ``pip``:
Troubleshooting
---------------

Taichi crashes with the following messages:
CUDA issues
***********

.. code-block::
- If Taichi crashes with the following messages:

[Taichi] mode=release
[Taichi] version 0.6.0, supported archs: [cpu, cuda, opengl], commit 14094f25, python 3.8.2
[W 05/14/20 10:46:49.549] [cuda_driver.h:call_with_warning@60] CUDA Error CUDA_ERROR_INVALID_DEVICE: invalid device ordinal while calling mem_advise (cuMemAdvise)
[E 05/14/20 10:46:49.911] Received signal 7 (Bus error)
.. code-block::

[Taichi] mode=release
[Taichi] version 0.6.0, supported archs: [cpu, cuda, opengl], commit 14094f25, python 3.8.2
[W 05/14/20 10:46:49.549] [cuda_driver.h:call_with_warning@60] CUDA Error CUDA_ERROR_INVALID_DEVICE: invalid device ordinal while calling mem_advise (cuMemAdvise)
[E 05/14/20 10:46:49.911] Received signal 7 (Bus error)

This may because your NVIDIA card is pre-Pascal and therefore does not support `Unified Memory <https://www.nextplatform.com/2019/01/24/unified-memory-the-final-piece-of-the-gpu-programming-puzzle/>`_.

* Try adding ``export TI_USE_UNIFIED_MEMORY=0`` to your ``~/.bashrc``. This disables unified memory usage in CUDA backend.
This may be because your NVIDIA card is pre-Pascal and does not support `Unified Memory <https://www.nextplatform.com/2019/01/24/unified-memory-the-final-piece-of-the-gpu-programming-puzzle/>`_.
Contributor: I'd say "This might be due to the fact that ..."

Contributor: BTW, unified memory is not supported since the GTX 770? Maybe I remembered a lot of things wrong about CUDA...
So if I disable unified memory, what compute capability do I need to use Taichi?

Member Author (@yuanming-hu, May 19, 2020):

> BTW, unified memory is not supported since the GTX 770? Maybe I remembered a lot of things wrong about CUDA...

NVIDIA has a fallback: https://devblogs.nvidia.com/unified-memory-cuda-beginners/ (see "What Happens on Kepler When I Call cudaMallocManaged()?"). These GPUs don't support memadvise, though...

> So if I disable unified memory, what compute capability do I need to use Taichi?

Taichi doesn't have a hard requirement for that, but the GTX 9 series should work (without unified memory/adaptive memory pool allocation)... I haven't tested the GTX 7 series (which was released 7 years ago).

Contributor: Oh, thanks for that. I had a GTX 770 for a long time and tried unified memory ever since it became available (I think in CUDA 6.0, but I'm not sure). I found it did a lot of unnecessary memory synchronization and I couldn't turn it off, so I decided never to use it again 😢


* **Possible solution**: add ``export TI_USE_UNIFIED_MEMORY=0`` to your ``~/.bashrc``. This disables unified memory usage in CUDA backend.

If you find other CUDA problems:

* Try adding ``export TI_ENABLE_CUDA=0`` to your ``~/.bashrc``. This disables the CUDA backend completely and Taichi will fall back on other GPU backends such as OpenGL.
- If you find other CUDA problems:

* **Possible solution**: add ``export TI_ENABLE_CUDA=0`` to your ``~/.bashrc``. This disables the CUDA backend completely and Taichi will fall back on other GPU backends such as OpenGL.

If Taichi crashes with a stack backtrace containing a line of ``glfwCreateWindow`` (see `#958 <https://github.com/taichi-dev/taichi/issues/958>`_):
OpenGL issues
*************

.. code-block::
- If Taichi crashes with a stack backtrace containing a line of ``glfwCreateWindow`` (see `#958 <https://github.com/taichi-dev/taichi/issues/958>`_):

[Taichi] mode=release
[E 05/12/20 18.25:00.129] Received signal 11 (Segmentation Fault)
***********************************
* Taichi Compiler Stack Traceback *
***********************************
.. code-block::

... (many lines, omitted)
[Taichi] mode=release
[E 05/12/20 18.25:00.129] Received signal 11 (Segmentation Fault)
***********************************
* Taichi Compiler Stack Traceback *
***********************************

/lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: _glfwPlatformCreateWindow
/lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: glfwCreateWindow
/lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: taichi::lang::opengl::initialize_opengl(bool)
... (many lines, omitted)

... (many lines, omitted)
/lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: _glfwPlatformCreateWindow
/lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: glfwCreateWindow
/lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: taichi::lang::opengl::initialize_opengl(bool)

This is likely because you are running Taichi on a virtual machine with an old OpenGL. Taichi requires OpenGL 4.3+ to work).
... (many lines, omitted)

* Try adding ``export TI_ENABLE_OPENGL=0`` to your ``~/.bashrc``, even if you don't initialize Taichi with OpenGL (``ti.init(arch=ti.opengl)``). This disables the OpenGL backend detection to avoid incompatibilities.
This is likely because you are running Taichi on a (virtual) machine with an old OpenGL API. Taichi requires OpenGL 4.3+ to work.

* **Possible solution**: add ``export TI_ENABLE_OPENGL=0`` to your ``~/.bashrc``, even if you don't initialize Taichi with OpenGL (``ti.init(arch=ti.opengl)``). This disables the OpenGL backend detection to avoid incompatibilities.

If Taichi crashes and reports ``libtinfo.so.5 not found``:

* Please install ``libtinfo5`` on Ubuntu or ``ncurses5-compat-libs`` (AUR) on Arch Linux.
Linux issues
************

- If Taichi crashes and reports ``libtinfo.so.5 not found``: Please install ``libtinfo5`` on Ubuntu or ``ncurses5-compat-libs`` (AUR) on Arch Linux.
10 changes: 5 additions & 5 deletions docs/meta.rst
@@ -1,7 +1,7 @@
.. _meta:

Metaprogramming
=================================================
===============

Taichi provides metaprogramming infrastructures. Metaprogramming can

@@ -26,7 +26,7 @@ Template metaprogramming


Dimensionality-independent programming using grouped indices
-------------------------------------------------------------
------------------------------------------------------------

.. code-block:: python

@@ -45,7 +45,7 @@ Dimensionality-independent programming using grouped indices
y[i, j + 1] = i + j
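
The body of this example is mostly collapsed in the diff; a generic sketch of grouped indices (tensor name and shape are made up):

.. code-block:: python

    import taichi as ti

    ti.init()

    x = ti.var(dt=ti.f32, shape=(8, 8))

    @ti.kernel
    def fill():
        for I in ti.grouped(x):
            # I is a vector index, so this kernel works for any dimensionality of x
            x[I] = I[0] + I[1]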

Tensor size reflection
------------------------------------------
----------------------

Sometimes it will be useful to get the dimensionality (``tensor.dim()``) and shape (``tensor.shape()``) of tensors.
These functions can be used in both Taichi kernels and python scripts.
@@ -61,7 +61,7 @@ These functions can be used in both Taichi kernels and python scripts.
For sparse tensors, the full domain shape will be returned.
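
A sketch in Python scope, assuming the method forms named above (the shape is arbitrary):

.. code-block:: python

    import taichi as ti

    ti.init()

    x = ti.var(dt=ti.f32, shape=(4, 8, 16))

    print(x.dim())    # 3
    print(x.shape())  # (4, 8, 16)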

Compile-time evaluations
------------------------------------------
------------------------
Using compile-time evaluation will allow certain computation to happen when kernels are instantiated.
Such computation has no overhead at runtime.

@@ -106,7 +106,7 @@ Such computation has no overhead at runtime.


When to use for loops with ``ti.static``
-----------------------------------------
----------------------------------------

There are several reasons why ``ti.static`` for loops should be used.
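
One common case, sketched below: indices into vector components must be compile-time constants, so the inner loop is unrolled with ``ti.static``:

.. code-block:: python

    import taichi as ti

    ti.init()

    a = ti.Vector(2, dt=ti.f32, shape=16)

    @ti.kernel
    def fill():
        for i in a:
            for k in ti.static(range(2)):  # unrolled at compile time
                a[i][k] = k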

6 changes: 0 additions & 6 deletions docs/performance.rst

This file was deleted.

2 changes: 1 addition & 1 deletion docs/snode.rst
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,7 @@ Structural nodes (SNodes)
=========================

After writing the computation code, the user needs to specify the internal data structure hierarchy. Specifying a data structure includes choices at both the macro level, dictating how the data structure components nest with each other and the way they represent sparsity, and the micro level, dictating how data are grouped together (e.g. structure of arrays vs. array of structures).
Our language provides *structural nodes (SNodes)* to compose the hierarchy and particular properties. These constructs and their semantics are listed below:
Taichi provides *Structural Nodes (SNodes)* to compose the hierarchy and particular properties. These constructs and their semantics are listed below:

* dense: A fixed-length contiguous array.
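
As a sketch of the ``dense`` node (the rest of the list is collapsed in this diff; the shape and names are arbitrary):

.. code-block:: python

    import taichi as ti

    ti.init()

    x = ti.var(dt=ti.f32)
    ti.root.dense(ti.ij, (4, 8)).place(x)  # a fixed 4x8 contiguous 2D array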

10 changes: 5 additions & 5 deletions docs/vector.rst
@@ -71,7 +71,7 @@ As global tensors of vectors

.. note::

**Always** use two pair of square brackets to access scalar elements from tensors of vectors.
**Always** use two pairs of square brackets to access scalar elements from tensors of vectors.

- The indices in the first pair of brackets locate the vector inside the tensor of vectors;
- The indices in the second pair of brackets locate the scalar element inside the vector.
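
A sketch (tensor name and shape are arbitrary):

.. code-block:: python

    import taichi as ti

    ti.init()

    a = ti.Vector(3, dt=ti.f32, shape=(5, 4))

    a[3, 2][1] = 7.0   # first brackets pick the vector, second pick the component
    print(a[3, 2][1])  # 7.0
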
@@ -142,7 +142,7 @@ Methods
:parameter b: (Vector, 3 component)
:return: (Vector, 3D) the cross product of ``a`` and ``b``

We use right-handed coordinate system, E.g.,
We use a right-handed coordinate system. E.g.,
::

a = ti.Vector([1, 2, 3])
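b = ti.Vector([4, 5, 6])
# the rest of this example is collapsed in the diff; this continuation is a
# sketch, assuming the ti.cross(a, b) call form:
c = ti.cross(a, b)
# c = [2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4] = [-3, 6, -3]
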
@@ -159,13 +159,13 @@
E.g.,
::

a = ti.Vector([1, 2, 3])
a = ti.Vector([1, 2])
b = ti.Vector([4, 5, 6])
c = ti.outer_product(a, b) # NOTE: c[i, j] = a[i] * b[j]
# c = [[1*4, 1*5, 1*6], [2*4, 2*5, 2*6], [3*4, 3*5, 3*6]]
# c = [[1*4, 1*5, 1*6], [2*4, 2*5, 2*6]]

.. note::
This is not the same as `ti.cross`. ``a`` and ``b`` do not have to be 3 component vectors.
This is not the same as ``ti.cross``. ``a`` and ``b`` do not have to be 3-component vectors.


.. function:: a.cast(dt)