diff --git a/docs/conf.py b/docs/conf.py index 3b78b363c0029..04ede75fd622b 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -19,7 +19,7 @@ # -- Project information ----------------------------------------------------- project = 'taichi' -copyright = '2016, Taichi Developers' +copyright = '2020, Taichi Developers' author = 'Taichi Developers' version_fn = os.path.join(os.path.dirname(os.path.abspath(__file__)), diff --git a/docs/differentiable_programming.rst b/docs/differentiable_programming.rst index cf39c52b6e85a..cc1e5d1cbd5a5 100644 --- a/docs/differentiable_programming.rst +++ b/docs/differentiable_programming.rst @@ -58,4 +58,9 @@ A few examples with neural network controllers optimized using differentiable si .. image:: https://github.com/yuanming-hu/public_files/raw/master/learning/difftaichi/diffmpm3d.gif +.. note:: + + Apart from differentiating the simulation time steps, you can also automatically differentiate (negative) potential energies to get forces. + Here is an `example `_. + Documentation WIP. diff --git a/docs/faq.rst b/docs/faq.rst index a71023a98bf0d..0536b2725aaf0 100644 --- a/docs/faq.rst +++ b/docs/faq.rst @@ -1,14 +1,7 @@ Frequently asked questions ========================== -**Can a user iterate over irregular topology instead of grids, such as tetrahedra meshes, line segment vertices?** -These structures have to be represented using 1D arrays in Taichi. You can still iterate over it using `for i in x` or `for i in range(n)`. -However, at compile time, there's little the Taichi compiler can do for you to optimize it. You can still tweak the data layout to get different run time cache behaviors and performance numbers. +**Q:** Can a user iterate over irregular topologies (e.g., graphs or tetrahedral meshes) instead of regular grids? -**Can potential energies be differentiated automatically to get forces?** -Yes. Taichi supports automatic differentiation. -We do have an `example `_ for this. - -**Does the compiler backend support the same quality of optimizations for the GPU and CPU? For instance, if I switch to using the CUDA backend, do I lose the cool hash-table optimizations?** -Mostly. The CPU/GPU compilation workflow are basically the same, except for vectorization on SIMD CPUs. -You still have the hash table optimization on GPUs. +**A:** These structures have to be represented using 1D arrays in Taichi. You can still iterate over them using ``for i in x`` or ``for i in range(n)``. +However, at compile time, there's little the Taichi compiler can do for you to optimize it. You can still tweak the data layout to get different runtime cache behaviors and performance numbers. 
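The rewritten FAQ answer above is a how-to: irregular topologies are flattened into 1D tensors and traversed with a struct-for. Below is a minimal sketch of that pattern for a tiny tetrahedral mesh; the tensor names (``pos``, ``tets``, ``centroids``) and sizes are illustrative assumptions, not part of the Taichi docs.

.. code-block:: python

    import taichi as ti

    ti.init(arch=ti.cpu)

    # Hypothetical mesh sizes, chosen only for illustration.
    n_vertices = 8
    n_tets = 5

    # Irregular topology stored as flat 1D tensors:
    # one 3D position per vertex, four vertex indices per tetrahedron.
    pos = ti.Vector(3, dt=ti.f32, shape=n_vertices)
    tets = ti.Vector(4, dt=ti.i32, shape=n_tets)
    centroids = ti.Vector(3, dt=ti.f32, shape=n_tets)

    @ti.kernel
    def compute_centroids():
        for t in tets:  # struct-for over a 1D tensor; Taichi parallelizes this loop
            c = ti.Vector([0.0, 0.0, 0.0])
            for k in ti.static(range(4)):
                c += pos[tets[t][k]]
            centroids[t] = c / 4.0

    compute_centroids()

As the FAQ notes, the compiler cannot exploit the irregular connectivity itself, but the layout of ``pos`` and ``tets`` can still be tuned for better cache behavior.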
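Similarly, the new note in ``differentiable_programming.rst`` (differentiating a negative potential energy to obtain forces) can be sketched with ``ti.Tape``. This is only a toy quadratic energy under assumed names (``x``, ``U``), not the linked example:

.. code-block:: python

    import taichi as ti

    ti.init(arch=ti.cpu)

    n = 16
    x = ti.Vector(2, dt=ti.f32, shape=n, needs_grad=True)  # particle positions
    U = ti.var(dt=ti.f32, shape=(), needs_grad=True)       # total potential energy

    @ti.kernel
    def compute_U():
        for i in x:
            # Toy quadratic potential; a real simulation would accumulate
            # spring or elastic energies here instead.
            U[None] += 0.5 * x[i].dot(x[i])

    with ti.Tape(loss=U):
        compute_U()

    # After the tape, the force on particle i is the negative gradient:
    #   f_i = -dU/dx_i, i.e. -x.grad[i]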
diff --git a/docs/global_settings.rst b/docs/global_settings.rst index 4f0fb07ee9142..0310e9f406a87 100644 --- a/docs/global_settings.rst +++ b/docs/global_settings.rst @@ -7,7 +7,7 @@ Global settings - To not use unified memory for CUDA: ``export TI_USE_UNIFIED_MEMORY=0`` - To specify pre-allocated memory size for CUDA: ``export TI_DEVICE_MEMORY_GB=0.5`` - Show more detailed log (TI_TRACE): ``export TI_LOG_LEVEL=trace`` -- To specify which GPU to use for CUDA: ``export CUDA_VISIBLE_DEVICES=0`` +- To specify which GPU to use for CUDA: ``export CUDA_VISIBLE_DEVICES=[gpuid]`` - To specify which Arch to use: ``export TI_ARCH=cuda`` - To print intermediate IR generated: ``export TI_PRINT_IR=1`` - To print verbose details: ``export TI_VERBOSE=1`` diff --git a/docs/gui.rst b/docs/gui.rst index 0e671d1d80ae2..82ffba172518e 100644 --- a/docs/gui.rst +++ b/docs/gui.rst @@ -3,7 +3,7 @@ GUI system ========== -Taichi has a built-in GUI system to help users display graphic results easier. +Taichi has a built-in GUI system to help users visualize results. Create a window @@ -20,11 +20,11 @@ Create a window Create a window. If ``res`` is scalar, then width will be equal to height. - This creates a window whose width is 1024, height is 768: + The following code creates a window of resolution ``640x360``: :: - gui = ti.GUI('Window Title', (1024, 768)) + gui = ti.GUI('Window Title', (640, 360)) .. function:: gui.show(filename = None) @@ -35,7 +35,8 @@ Create a window Show the window on the screen. .. note:: - If `filename` is specified, screenshot will be saved to the file specified by the name. For example, this screenshots each frame of the window, and save it in ``.png``'s: + If ``filename`` is specified, a screenshot will be saved to the file specified by the name. + For example, the following saves frames of the window to ``.png``'s: :: @@ -45,8 +46,8 @@ Create a window gui.show(f'{frame:06d}.png') -Paint a window --------------- +Paint on a window +----------------- .. function:: gui.set_image(img) @@ -54,16 +55,17 @@ Paint a window :parameter gui: (GUI) the window object :parameter img: (np.array or Tensor) tensor containing the image, see notes below - Set a image to display on the window. + Set an image to display on the window. - The pixel, ``i`` from bottom to up, ``j`` from left to right, is set to the value of ``img[i, j]``. + The image pixels are set from the values of ``img[i, j]``, where ``i`` indicates the horizontal + coordinates (from left to right) and ``j`` the vertical coordinates (from bottom to top). - If the window size is ``(x, y)``, then the ``img`` must be one of: + If the window size is ``(x, y)``, then ``img`` must be one of: * ``ti.var(shape=(x, y))``, a grey-scale image - * ``ti.var(shape=(x, y, 3))``, where `3` is for `(r, g, b)` channels + * ``ti.var(shape=(x, y, 3))``, where `3` is for ``(r, g, b)`` channels * ``ti.Vector(3, shape=(x, y))`` (see :ref:`vector`) @@ -74,23 +76,28 @@ Paint a window The data type of ``img`` must be one of: - * float32, clamped into [0, 1] + * ``uint8``, range ``[0, 255]`` - * float64, clamped into [0, 1] + * ``uint16``, range ``[0, 65535]`` - * uint8, range [0, 255] + * ``uint32``, range ``[0, 4294967295]`` - * uint16, range [0, 65535] + * ``float32``, range ``[0, 1]`` - * uint32, range [0, UINT_MAX] + * ``float64``, range ``[0, 1]`` + + .. note :: + + When using ``float32`` or ``float64`` as the data type, + ``img`` entries will be clipped into range ``[0, 1]``. .. 
function:: gui.circle(pos, color = 0xFFFFFF, radius = 1) :parameter gui: (GUI) the window object - :parameter pos: (tuple of 2) the position of circle - :parameter color: (optional, RGB hex) color to fill the circle - :parameter radius: (optional, scalar) the radius of circle + :parameter pos: (tuple of 2) the position of the circle + :parameter color: (optional, RGB hex) the color to fill the circle + :parameter radius: (optional, scalar) the radius of the circle Draw a solid circle. @@ -98,15 +105,16 @@ Paint a window .. function:: gui.circles(pos, color = 0xFFFFFF, radius = 1) :parameter gui: (GUI) the window object - :parameter pos: (np.array) the position of circles - :parameter color: (optional, RGB hex or np.array of uint32) color(s) to fill circles - :parameter radius: (optional, scalar) the radius of circle + :parameter pos: (np.array) the positions of the circles + :parameter color: (optional, RGB hex or np.array of uint32) the color(s) to fill the circles + :parameter radius: (optional, scalar or np.array of float32) the radius (radii) of the circles Draw solid circles. .. note:: - If ``color`` is a numpy array, circle at ``pos[i]`` will be colored with ``color[i]``, therefore it must have the same size with ``pos``. + If ``color`` is a numpy array, the circle at ``pos[i]`` will be colored with ``color[i]``. + In this case, ``color`` must have the same size as ``pos``. .. function:: gui.line(begin, end, color = 0xFFFFFF, radius = 1) diff --git a/docs/index.rst b/docs/index.rst index 3ab7bdb74d40b..3ea836bc21bab 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -61,9 +61,8 @@ The Taichi Programming Language gui global_settings - performance - acknowledgments faq + acknowledgments .. toctree:: diff --git a/docs/install.rst b/docs/install.rst index 952ae0485d8f8..cdd7f75a3f218 100644 --- a/docs/install.rst +++ b/docs/install.rst @@ -20,49 +20,55 @@ Taichi can be easily installed via ``pip``: Troubleshooting --------------- -Taichi crashes with the following messages: +CUDA issues +*********** -.. code-block:: +- If Taichi crashes with the following messages: - [Taichi] mode=release - [Taichi] version 0.6.0, supported archs: [cpu, cuda, opengl], commit 14094f25, python 3.8.2 - [W 05/14/20 10:46:49.549] [cuda_driver.h:call_with_warning@60] CUDA Error CUDA_ERROR_INVALID_DEVICE: invalid device ordinal while calling mem_advise (cuMemAdvise) - [E 05/14/20 10:46:49.911] Received signal 7 (Bus error) + .. code-block:: + [Taichi] mode=release + [Taichi] version 0.6.0, supported archs: [cpu, cuda, opengl], commit 14094f25, python 3.8.2 + [W 05/14/20 10:46:49.549] [cuda_driver.h:call_with_warning@60] CUDA Error CUDA_ERROR_INVALID_DEVICE: invalid device ordinal while calling mem_advise (cuMemAdvise) + [E 05/14/20 10:46:49.911] Received signal 7 (Bus error) -This may because your NVIDIA card is pre-Pascal and therefore does not support `Unified Memory `_. -* Try adding ``export TI_USE_UNIFIED_MEMORY=0`` to your ``~/.bashrc``. This disables unified memory usage in CUDA backend. + This might be due to the fact that your NVIDIA GPU is pre-Pascal and has limited support for `Unified Memory `_. + * **Possible solution**: add ``export TI_USE_UNIFIED_MEMORY=0`` to your ``~/.bashrc``. This disables unified memory usage in CUDA backend. -If you find other CUDA problems: -* Try adding ``export TI_ENABLE_CUDA=0`` to your ``~/.bashrc``. This disables the CUDA backend completely and Taichi will fall back on other GPU backends such as OpenGL. 
+- If you find other CUDA problems: + * **Possible solution**: add ``export TI_ENABLE_CUDA=0`` to your ``~/.bashrc``. This disables the CUDA backend completely and Taichi will fall back on other GPU backends such as OpenGL. -If Taichi crashes with a stack backtrace containing a line of ``glfwCreateWindow`` (see `#958 `_): +OpenGL issues +************* -.. code-block:: +- If Taichi crashes with a stack backtrace containing a line of ``glfwCreateWindow`` (see `#958 `_): - [Taichi] mode=release - [E 05/12/20 18.25:00.129] Received signal 11 (Segmentation Fault) - *********************************** - * Taichi Compiler Stack Traceback * - *********************************** + .. code-block:: - ... (many lines, omitted) + [Taichi] mode=release + [E 05/12/20 18.25:00.129] Received signal 11 (Segmentation Fault) + *********************************** + * Taichi Compiler Stack Traceback * + *********************************** - /lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: _glfwPlatformCreateWindow - /lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: glfwCreateWindow - /lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: taichi::lang::opengl::initialize_opengl(bool) + ... (many lines, omitted) - ... (many lines, omitted) + /lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: _glfwPlatformCreateWindow + /lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: glfwCreateWindow + /lib/python3.8/site-packages/taichi/core/../lib/taichi_core.so: taichi::lang::opengl::initialize_opengl(bool) -This is likely because you are running Taichi on a virtual machine with an old OpenGL. Taichi requires OpenGL 4.3+ to work). + ... (many lines, omitted) -* Try adding ``export TI_ENABLE_OPENGL=0`` to your ``~/.bashrc``, even if you don't initialize Taichi with OpenGL (``ti.init(arch=ti.opengl)``). This disables the OpenGL backend detection to avoid incompatibilities. + This is likely because you are running Taichi on a (virtual) machine with an old OpenGL API. Taichi requires OpenGL 4.3+ to work. + * **Possible solution**: add ``export TI_ENABLE_OPENGL=0`` to your ``~/.bashrc`` even if you initialize Taichi with other backends than OpenGL. This disables the OpenGL backend detection to avoid incompatibilities. -If Taichi crashes and reports ``libtinfo.so.5 not found``: -* Please install ``libtinfo5`` on Ubuntu or ``ncurses5-compat-libs`` (AUR) on Arch Linux. +Linux issues +************ + +- If Taichi crashes and reports ``libtinfo.so.5 not found``: Please install ``libtinfo5`` on Ubuntu or ``ncurses5-compat-libs`` (AUR) on Arch Linux. diff --git a/docs/meta.rst b/docs/meta.rst index a9bfecb63dc3f..3ced4c60ed97e 100644 --- a/docs/meta.rst +++ b/docs/meta.rst @@ -1,7 +1,7 @@ .. _meta: Metaprogramming -================================================= +=============== Taichi provides metaprogramming infrastructures. Metaprogramming can @@ -26,7 +26,7 @@ Template metaprogramming Dimensionality-independent programming using grouped indices -------------------------------------------------------------- +------------------------------------------------------------ .. code-block:: python @@ -45,7 +45,7 @@ Dimensionality-independent programming using grouped indices y[i, j + 1] = i + j Tensor size reflection ------------------------------------------- +---------------------- Sometimes it will be useful to get the dimensionality (``tensor.dim()``) and shape (``tensor.shape()``) of tensors. These functions can be used in both Taichi kernels and python scripts. 
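To complement the ``Tensor size reflection`` section above, here is a small sketch of using ``tensor.dim()`` and ``tensor.shape()`` both at kernel compile time (with ``ti.static``) and from the Python scope; the tensor name and shape below are made up for illustration:

.. code-block:: python

    import taichi as ti

    ti.init()

    x = ti.var(dt=ti.f32, shape=(4, 8, 16))
    num_cells = ti.var(dt=ti.i32, shape=())

    @ti.kernel
    def count_cells():
        prod = 1
        # dim() and shape() are evaluated when the kernel is compiled,
        # so this loop is unrolled by ti.static.
        for k in ti.static(range(x.dim())):
            prod *= x.shape()[k]
        num_cells[None] = prod

    count_cells()
    print(x.dim())          # 3 (also available in Python scope)
    print(num_cells[None])  # 4 * 8 * 16 = 512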
@@ -61,9 +61,9 @@ These functions can be used in both Taichi kernels and python scripts. For sparse tensors, the full domain shape will be returned. Compile-time evaluations ------------------------------------------- -Using compile-time evaluation will allow certain computation to happen when kernels are instantiated. -Such computation has no overhead at runtime. +------------------------ +Using compile-time evaluation will allow certain computations to happen when kernels are being instantiated. +This saves the overhead of those computations at runtime. * Use ``ti.static`` for compile-time branching (for those who come from C++17, this is `if constexpr `_.) @@ -106,7 +106,7 @@ Such computation has no overhead at runtime. When to use for loops with ``ti.static`` ------------------------------------------ +---------------------------------------- There are several reasons why ``ti.static`` for loops should be used. diff --git a/docs/performance.rst b/docs/performance.rst deleted file mode 100644 index ef4b32cfda6d8..0000000000000 --- a/docs/performance.rst +++ /dev/null @@ -1,6 +0,0 @@ -Performance tips -------------------------------------------- - -Avoid synchronization: when using GPU, an asynchronous task queue will be maintained. Whenever reading/writing global tensors, a synchronization will be invoked, which leads to idle cycles on CPU/GPU. - -Make Use of GPU Shared Memory and L1-d$ ``ti.cache_l1(x)`` will enforce data loads related to ``x`` cached in L1-cache. ``ti.cache_shared(x)`` will allocate shared memory. TODO: add examples diff --git a/docs/snode.rst b/docs/snode.rst index aa9df51264d2b..83c7d39d5c048 100644 --- a/docs/snode.rst +++ b/docs/snode.rst @@ -4,7 +4,7 @@ Structural nodes (SNodes) ========================= After writing the computation code, the user needs to specify the internal data structure hierarchy. Specifying a data structure includes choices at both the macro level, dictating how the data structure components nest with each other and the way they represent sparsity, and the micro level, dictating how data are grouped together (e.g. structure of arrays vs. array of structures). -Our language provides *structural nodes (SNodes)* to compose the hierarchy and particular properties. These constructs and their semantics are listed below: +Taichi provides *Structural Nodes (SNodes)* to compose the hierarchy and particular properties. These constructs and their semantics are listed below: * dense: A fixed-length contiguous array. diff --git a/docs/vector.rst b/docs/vector.rst index 23cb5fad1f16f..034c22a40735c 100644 --- a/docs/vector.rst +++ b/docs/vector.rst @@ -71,7 +71,7 @@ As global tensors of vectors .. note:: - **Always** use two pair of square brackets to access scalar elements from tensors of vectors. + **Always** use two pairs of square brackets to access scalar elements from tensors of vectors. - The indices in the first pair of brackets locate the vector inside the tensor of vectors; - The indices in the second pair of brackets locate the scalar element inside the vector. @@ -142,7 +142,7 @@ Methods :parameter b: (Vector, 3 component) :return: (Vector, 3D) the cross product of ``a`` and ``b`` - We use right-handed coordinate system, E.g., + We use a right-handed coordinate system. 
E.g., :: a = ti.Vector([1, 2, 3]) @@ -159,13 +159,13 @@ Methods E.g., :: - a = ti.Vector([1, 2, 3]) + a = ti.Vector([1, 2]) b = ti.Vector([4, 5, 6]) c = ti.outer_product(a, b) # NOTE: c[i, j] = a[i] * b[j] - # c = [[1*4, 1*5, 1*6], [2*4, 2*5, 2*6], [3*4, 3*5, 3*6]] + # c = [[1*4, 1*5, 1*6], [2*4, 2*5, 2*6]] .. note:: - This is not the same as `ti.cross`. ``a`` and ``b`` do not have to be 3 component vectors. + This is not the same as ``ti.cross``. ``a`` and ``b`` do not have to be 3-component vectors. .. function:: a.cast(dt)
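A minimal sketch of how ``a.cast(dt)`` might be used inside a kernel, assuming it casts each component of ``a`` to type ``dt``; the 0-dimensional tensor ``v`` is introduced only for this illustration:

.. code-block:: python

    import taichi as ti

    ti.init()

    v = ti.Vector(3, dt=ti.f32, shape=())

    @ti.kernel
    def demo():
        u = ti.Vector([1, 2, 3])   # integer components
        v[None] = u.cast(ti.f32)   # element-wise cast to f32 before storing

    demo()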