[doc] Update syntax.rst and related sections #967

Merged
merged 4 commits on May 13, 2020
4 changes: 3 additions & 1 deletion docs/differentiable_programming.rst
@@ -1,7 +1,9 @@
.. _differentiable:

Differentiable programming
==========================

Please check out `the DiffTaichi paper <https://arxiv.org/pdf/1910.00935.pdf>`_ and `video <https://www.youtube.com/watch?v=Z1xvAZve9aE>`_ to learn more about Taichi differentiable programming.
This page is a work in progress. Please check out `the DiffTaichi paper <https://arxiv.org/pdf/1910.00935.pdf>`_ and `video <https://www.youtube.com/watch?v=Z1xvAZve9aE>`_ to learn more about Taichi differentiable programming.

The `DiffTaichi repo <https://github.com/yuanming-hu/difftaichi>`_ contains 10 differentiable physical simulators built with Taichi differentiable programming.

8 changes: 5 additions & 3 deletions docs/external.rst
@@ -1,10 +1,12 @@
.. _external:

Interacting with external arrays
====================================
================================

Here ``external arrays`` refer to ``numpy.ndarray`` or ``torch.Tensor``.

Conversion between Taichi tensors and external arrays
--------------------------------------------------------
-----------------------------------------------------

Use ``to_numpy``/``from_numpy``/``to_torch``/``from_torch``:
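For orientation, a minimal round-trip sketch (hedged: it assumes the 0.6-era ``ti.var`` declaration; the full examples live in the lines elided from this hunk):

.. code-block:: python

  import numpy as np
  import taichi as ti

  ti.init()

  n = 16
  x = ti.var(ti.f32, shape=n)  # a 1D Taichi tensor

  x.from_numpy(np.arange(n, dtype=np.float32))  # NumPy -> Taichi
  arr = x.to_numpy()                            # Taichi -> NumPy, shape (16,)
  print(arr[3])  # 3.0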

@@ -49,7 +51,7 @@ Use ``to_numpy``/``from_numpy``/``to_torch``/``from_torch``:


Use external arrays as Taichi kernel parameters
-------------------------------------------------
-----------------------------------------------

The type hint for external array parameters is ``ti.ext_arr()``. Please see the example below.
Note that struct-for loops over external arrays are not supported.
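Since the example itself is elided from this hunk, here is a hedged sketch of such a kernel (the kernel name and the explicit length argument ``n`` are illustrative):

.. code-block:: python

  import numpy as np
  import taichi as ti

  ti.init()

  @ti.kernel
  def double_all(arr: ti.ext_arr(), n: ti.i32):
    for i in range(n):  # range-for; a struct-for over ``arr`` would not compile
      arr[i] = arr[i] * 2

  a = np.ones(8, dtype=np.float32)
  double_all(a, 8)
  print(a)  # all elements are now 2.0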
2 changes: 1 addition & 1 deletion docs/faq.rst
@@ -1,5 +1,5 @@
Frequently Asked Questions
====================================================
==========================

**Can a user iterate over irregular topology instead of grids, such as tetrahedral meshes or line segment vertices?**
These structures have to be represented using 1D arrays in Taichi. You can still iterate over them using ``for i in x`` or ``for i in range(n)``, as in the sketch below.
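For example, a hedged sketch of edge data stored in 1D arrays (declarations follow the 0.6-era ``ti.var``/``ti.Vector`` API; the mesh data are illustrative):

.. code-block:: python

  import taichi as ti

  ti.init()

  n_edges, n_vertices = 4, 5
  edges = ti.Vector(2, dt=ti.i32, shape=n_edges)   # the two endpoints of each edge
  pos = ti.Vector(2, dt=ti.f32, shape=n_vertices)  # vertex positions
  total_length = ti.var(ti.f32, shape=())

  @ti.kernel
  def sum_edge_lengths():
    for e in edges:  # iterate over the 1D edge array
      a = edges[e][0]
      b = edges[e][1]
      total_length[None] += (pos[a] - pos[b]).norm()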
4 changes: 2 additions & 2 deletions docs/global_settings.rst
@@ -1,5 +1,5 @@
Global Settings
------------------
---------------

- Restart the Taichi runtime system (clear memory, destroy all variables and kernels): ``ti.reset()``
- Eliminate verbose outputs: ``ti.get_runtime().set_verbose(False)``
@@ -10,4 +10,4 @@ Global Settings
- To specify which GPU to use for CUDA: ``export CUDA_VISIBLE_DEVICES=0``
- To specify which Arch to use: ``export TI_ARCH=cuda``
- To print intermediate IR generated: ``export TI_PRINT_IR=1``
- To print verbosed details: ``export TI_VERBOSE=1``
- To print verbose details: ``export TI_VERBOSE=1``
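For instance, a hedged sketch of driving these switches from a script (this assumes the environment variables are read when the Taichi runtime starts, so they must be set beforehand):

.. code-block:: python

  import os

  os.environ["TI_ARCH"] = "cuda"  # backend selection, as above
  os.environ["TI_VERBOSE"] = "1"  # print verbose details

  import taichi as ti

  ti.reset()  # clear memory, destroy all variables and kernels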
24 changes: 24 additions & 0 deletions docs/internal.rst
@@ -37,3 +37,27 @@ To print out all statistics in Python:
.. code-block:: Python

  ti.core.print_stat()


Why Python frontend
-------------------

Embedding Taichi in Python has the following advantages:

* Easy to learn. Taichi has a very similar syntax to Python.
* Easy to run. No ahead-of-time compilation is needed.
* This design allows people to reuse existing Python infrastructure:

  * IDEs. A Python IDE mostly works for Taichi, with syntax highlighting, syntax checking, and autocomplete.
  * Package manager (pip). A developed Taichi application can be easily submitted to ``PyPI`` and others can easily set it up with ``pip``.
  * Existing packages. Interacting with other Python components (e.g. ``matplotlib`` and ``numpy``) is trivial.

* The built-in AST manipulation tools in Python allow us to do magical things, as long as the kernel body can be parsed by the Python parser.

However, this design has drawbacks as well:

* Taichi kernels must be parse-able by Python parsers. This means Taichi syntax cannot go beyond Python syntax.

  * For example, indexing is always needed when accessing elements in Taichi tensors, even if the tensor is 0D. Use ``x[None] = 123`` to set the value in ``x`` if ``x`` is 0D. This is because, in Python syntax, ``x = 123`` would set ``x`` itself (instead of the value it contains) to the constant ``123``, and we unfortunately cannot change that behavior. See the sketch after this list.

* Python has relatively low performance. This can cause a performance issue when initializing large Taichi tensors with pure Python scripts. A Taichi kernel should be used to initialize a huge tensor.
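A minimal sketch of both points (0D indexing and kernel-side initialization; the tensor names and shapes are illustrative):

.. code-block:: python

  import taichi as ti

  ti.init()

  x = ti.var(ti.f32, shape=())            # a 0D tensor
  y = ti.var(ti.f32, shape=(1024, 1024))  # a large tensor

  x[None] = 123  # ``x = 123`` would just rebind the Python name ``x``

  @ti.kernel
  def init_y():
    for i, j in y:  # compiled and parallelized by Taichi
      y[i, j] = i + j

  init_y()  # much faster than a pure Python double loop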
42 changes: 28 additions & 14 deletions docs/meta.rst
@@ -11,24 +11,38 @@ Taichi provides metaprogramming infrastructures. Metaprogramming can

Taichi kernels are *lazily instantiated* and a lot of computation can happen at *compile-time*. Every kernel in Taichi is a template kernel, even if it has no template arguments.


.. _template_metaprogramming:

Template metaprogramming
------------------------

.. code-block:: python

  @ti.kernel
  def copy(x: ti.template(), y: ti.template()):
    for i in x:
      y[i] = x[i]
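For context, a hedged usage sketch: the same ``copy`` kernel is instantiated once per distinct argument signature (the ``ti.var`` declarations are illustrative):

.. code-block:: python

  a = ti.var(ti.f32, shape=128)
  b = ti.var(ti.f32, shape=128)
  c = ti.var(ti.i32, shape=16)
  d = ti.var(ti.i32, shape=16)

  copy(a, b)  # one instantiation: 1D f32 tensors of length 128
  copy(c, d)  # a second instantiation: 1D i32 tensors of length 16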


Dimensionality-independent programming using grouped indices
--------------------------------------------------------------
-------------------------------------------------------------

.. code-block:: python

  @ti.kernel
  def copy(x: ti.template(), y: ti.template()):
    for I in ti.grouped(y):
      x[I] = y[I]

  @ti.kernel
  def array_op(x: ti.template(), y: ti.template()):
    # If tensor x is 2D
    for I in ti.grouped(x):  # I is a vector of size x.dim() and data type i32
      y[I + ti.Vector([0, 1])] = I[0] + I[1]
    # is equivalent to
    for i, j in x:
      y[i, j + 1] = i + j

Tensor size reflection
----------------------
9 changes: 8 additions & 1 deletion docs/snode.rst
@@ -14,7 +14,8 @@ Our language provides *structural nodes (SNodes)* to compose the hierarchy and p

* dynamic: Variable-length array, with a predefined maximum length. It serves the role of ``std::vector`` in C++ or ``list`` in Python, and can be used to maintain objects (e.g. particles) contained in a block.

See :ref:`layout` for more details about data layout. ``ti.root`` is the root node of the data structure.

See :ref:`layout` for more details. ``ti.root`` is the root node of the data structure.

.. function:: snode.place(x, ...)

@@ -172,6 +173,12 @@ Working with ``dynamic`` SNodes
Inserts ``val`` into the ``dynamic`` node with indices ``indices``.
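A hedged sketch of a ``dynamic`` node in use (it assumes the ``ti.append``/``ti.length`` helpers documented around this hunk; the names and sizes are illustrative):

.. code-block:: python

  import taichi as ti

  ti.init()

  x = ti.var(ti.i32)
  ti.root.dynamic(ti.i, 16).place(x)  # a variable-length list of at most 16 ints
  n = ti.var(ti.i32, shape=())

  @ti.kernel
  def fill():
    for i in range(5):
      ti.append(x.parent(), [], i)  # insert i at the end of the list
    n[None] = ti.length(x.parent(), [])

  fill()
  print(n[None])  # 5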


Taichi tensors like powers of two
---------------------------------

Non-power-of-two tensor dimensions are promoted into powers of two and thus these tensors will occupy more virtual address space.
For example, a (dense) tensor of size ``(18, 65)`` will be materialized as ``(32, 128)``.
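The promotion is just a per-dimension round-up to the next power of two; a quick plain-Python illustration:

.. code-block:: python

  def next_pow2(n):
    p = 1
    while p < n:
      p *= 2
    return p  # the smallest power of two >= n

  print(tuple(next_pow2(d) for d in (18, 65)))  # (32, 128)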


Indices
-------