[Lang] Update stable branch to the latest release v1.0.3 #5158
Merged
* switch to skbuild
* Switch the build system to scikit-build
* include bc and libmolten
* find llvm runtime bc
* fix bc files installation
* install bc after compile
* Add more messages
* Auto Format
* fix findpython
* Kickstart CI
* add empty line
* add missing dependency
* fix python args
* start CI
* Fix clang tidy run
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: Taichi Gardener <taichigardener@gmail.com>
Co-authored-by: Ailing <ailzhang@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Move LLVM CMake to its own dir
* Suppress warning from submodules
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Use current source dir
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Separate Vulkan runtime files from codegen
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Doc] Add limitation about TLS optimization
* Add link to reduction sum benchmark
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: Haidong Lan <turbo0628g@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
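The TLS note above concerns reduction-style loops. A minimal sketch of such a reduction in Taichi follows; field names and sizes are illustrative and not taken from the linked benchmark:

```python
import taichi as ti

ti.init(arch=ti.cpu)  # any backend; chosen arbitrarily for this sketch

n = 1024 * 1024
x = ti.field(dtype=ti.f32, shape=n)
total = ti.field(dtype=ti.f32, shape=())

@ti.kernel
def reduce_sum():
    # Atomic accumulation into a 0-D field; this is the pattern that
    # Taichi's thread-local-storage (TLS) optimization targets, subject to
    # the limitation documented in this PR.
    for i in x:
        total[None] += x[i]

x.fill(1.0)
reduce_sum()
print(total[None])  # 1048576.0
```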
…#4863)

* Add ASTSerializer, using it to generate offline-cache-key
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Change the library output dir for export core * limit the change to the target
* Device API explicit semaphores
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Destroy the semaphore before the context
* Fix type warnings
* fix nits
* return nullptr for devices that don't need semaphores
* test out no semaphores between same queue
* Use native command list instead of emulated for dx11
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* remove the in-queue semaphore
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Use flush instead of sync in places
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Fix possible null semaphore

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [metal] Complete Device API * fix * fix
* Updated logo
* Updated links that may break when the doc site has versions
* Added information that NumPy arrays and PyTorch tensors can be passed as kernel arguments
* Fixed a broken link.
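As a quick illustration of passing external arrays to kernels, here is a sketch assuming NumPy; PyTorch tensors work the same way through the ndarray annotation:

```python
import numpy as np
import taichi as ti

ti.init(arch=ti.cpu)

@ti.kernel
def scale(arr: ti.types.ndarray(), k: ti.f32):
    # NumPy arrays (and torch tensors, when PyTorch is installed) can be
    # passed directly as kernel arguments via ti.types.ndarray().
    for i in range(arr.shape[0]):
        arr[i] = arr[i] * k

a = np.arange(4, dtype=np.float32)
scale(a, 2.0)
print(a)  # [0. 2. 4. 6.]
```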
…g to ti.field (#4873)

* [bug] Improved error messages for illegal slicing or indexing of ti.field
* Fixed test failures
* Addressed code-review comments
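A sketch of the kind of field access the improved messages are about; the exact error wording is not reproduced here:

```python
import taichi as ti

ti.init(arch=ti.cpu)

f = ti.field(dtype=ti.f32, shape=(4, 4))

f[1, 2] = 3.0   # OK: one integer index per dimension
print(f[1, 2])

# The following accesses are illegal and now report clearer errors
# (kept commented out so this sketch runs):
# f[1]      # partial indexing of a 2-D field
# f[0:2]    # slicing a ti.field is not supported
```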
…4865) * wip * migrate all buffers
…s CMake (#4864)

* Move LLVM CMake to its own dir
* Suppress warning from submodules
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Use current source dir
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Separate Vulkan runtime files from codegen
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Use keywords instead of plain target_link_libraries
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [bug] Fixed type promotion rule for shift operations
* removed debug info
* Addressed review comments
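A small sketch exercising shift operations whose result type the fixed promotion rule governs; the variable names are illustrative and the sketch does not assert the exact promoted type:

```python
import taichi as ti

ti.init(arch=ti.cpu)

@ti.kernel
def shift_demo() -> ti.i32:
    a = ti.u8(3)   # narrow unsigned operand
    b = 2          # default-width integer literal
    c = a << b     # the fixed promotion rule decides c's type
    return ti.i32(c)

print(shift_demo())  # 12
```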
* [aot] [vulkan] Expose symbols for AOT * weird windows * hide to make win happy * fix
* Move LLVM CMake to its own dir
* Suppress warning from submodules
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Use current source dir
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Separate Vulkan runtime files from codegen
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Use keywords instead of plain target_link_libraries
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Separate opengl runtime files from backend
* Remove some warnings
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Minor
* Add glfw include
* Add link to taichi core
* Update taichi/program/extension.cpp

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: yekuang <k-ye@users.noreply.github.com>
* Add new VMA vulkan functions. * fix
…4886)

* Implement has_paddle(), to_paddle_type() and update to_taichi_type in python/taichi/lang/util.py
* Implement get_paddle_callbacks() and update get_function_body(), match_ext_arr() in python/taichi/lang/kernel_impl.py
* Add test test_io_devices() in tests/python/test_torch_io.py
* Implement callback for CPU-GPU/GPU-CPU copy between Taichi and Paddle
* Partially implement to_paddle()/from_paddle(), modeled on PyTorch's to_torch()/from_torch() in Taichi
* Fix paddle.Tensor's backend check
* Update tests for from_paddle()/to_paddle()
* [doc] Update Global settings with TI_ENABLE_PADDLE
* Fix to avoid failure when only paddle is imported
* [test] Fix the expected list alphabetically
* [doc] Add info about paddle.Tensor
* [ci] Try to test paddle's GPU version
* Fix the usage of paddle.ones
* Fix f16 tests for paddle
* Fixed supported archs for tests of paddle
* Use 1 thread to run tests for torch and paddle
* Fix linux test
* Fix windows test
* Unify the name to Paddle
* Add tests for paddle
* Replace usage of device with place for paddle
* Import error of Paddle's GPU develop package on Linux
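A sketch of the Paddle interop added here, assuming the paddlepaddle package is installed; it mirrors the existing to_torch()/from_torch() helpers, and the optional keyword arguments (e.g. for placement) are not shown:

```python
import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(dtype=ti.f32, shape=(2, 3))
x.fill(1.0)

t = x.to_paddle()    # copy the field into a paddle.Tensor
t = t * 2            # regular Paddle ops on the tensor
x.from_paddle(t)     # copy the tensor back into the field
print(x.to_numpy())
```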
* [Doc] Updated broken links * Updated links that may break. * Added .md
* [test] Exit on error during Paddle windows test
* Check if paddle test leaks memory
* Increase device memory and reduce thread number
* Revert "Check if paddle test leaks memory"; this reverts commit e0522b0.
* Disable paddle for non-paddle test
* Add warp_barrier warp intrinsic, add a warp_barrier unit test, and fix an error by adding the mask argument in warp.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This constructor is mainly used to construct an Ndarray out of an existing device allocation. This PR updates the behavior of this constructor to separate element_shape out of shape.
* Remove element shape from extra args.
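From the Python side, the distinction the constructor change preserves looks roughly like this sketch; the element_shape attribute name is an assumption based on the description above:

```python
import taichi as ti

ti.init(arch=ti.cpu)

# For a vector ndarray, `n` describes the element shape (each element is a
# 3-vector) while `shape` is the array shape proper; the constructor change
# keeps element_shape separate instead of folding it into shape.
v = ti.Vector.ndarray(n=3, dtype=ti.f32, shape=(4, 4))
print(v.shape)          # (4, 4)
print(v.element_shape)  # (3,) -- attribute name assumed for illustration
```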
* [llvm] [refactor] Move load_bit_pointer() to CodeGenLLVM * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This is a simplified version of https://github.com/ailzhang/taichi-aot-demo/tree/mpm88_cgraph_demo which strips out the GGUI rendering part. Let's add this as a test (as well as a demo ;) ) in the codebase. We used to test only the saving part of mpm88; that test has been replaced by this end-to-end test. Huge thanks to @k-ye for help debugging the GGUI rendering issue!
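For readers unfamiliar with compute graphs (cgraph), a heavily abridged sketch of the pattern such demos use follows; the kernel body, argument names, and spec parameters (e.g. field_dim) are placeholders and may differ across Taichi versions:

```python
import numpy as np
import taichi as ti

ti.init(arch=ti.vulkan)

@ti.kernel
def scale(x: ti.types.ndarray(field_dim=1), k: ti.f32):
    for i in x:
        x[i] = x[i] * k

# Declare symbolic arguments, record a dispatch, then compile the graph.
sym_x = ti.graph.Arg(ti.graph.ArgKind.NDARRAY, 'x', ti.f32, field_dim=1)
sym_k = ti.graph.Arg(ti.graph.ArgKind.SCALAR, 'k', ti.f32)

builder = ti.graph.GraphBuilder()
builder.dispatch(scale, sym_x, sym_k)
graph = builder.compile()

arr = ti.ndarray(ti.f32, shape=8)
arr.from_numpy(np.ones(8, dtype=np.float32))
graph.run({'x': arr, 'k': 3.0})
print(arr.to_numpy())
```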
* update scene for mass_spring simulation * update scene for mass_spring simulation * update scene for mass_spring simulation
…5110) * [llvm] [refactor] Use LLVM native integer cast * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
…)

* [type] [llvm] [refactor] Fix function names in codegen_llvm_quant
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [bug] Fix build without llvm backend crash * Update taichi/python/export_lang.cpp Co-authored-by: yekuang <k-ye@users.noreply.github.com> Co-authored-by: yekuang <k-ye@users.noreply.github.com>
* Precommit fix
* Add spirv source
* Move device code back to backends
* Expose glfw include in vulkan rhi
* Fix llvm include
* Fix include for test
* Add forward-mode pipeline for the autodiff pass
* Replace the grad parameter with AutodiffMode to distinguish three kinds of kernels: primal, forward AD, and reverse AD
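The design change is essentially replacing a boolean grad flag with a three-way mode. A hypothetical, self-contained sketch of that idea; the names and mapping are illustrative, not Taichi's actual internals:

```python
from enum import Enum

class AutodiffMode(Enum):
    # Hypothetical mirror of the mode described in the commit message.
    NONE = 0      # primal kernel
    FORWARD = 1   # forward-mode AD kernel
    REVERSE = 2   # reverse-mode AD kernel

def kernel_variant_name(name: str, mode: AutodiffMode) -> str:
    # Illustrative only: dispatch on the mode instead of a grad flag.
    suffix = {
        AutodiffMode.NONE: "",
        AutodiffMode.FORWARD: "_fwd_grad",
        AutodiffMode.REVERSE: "_rev_grad",
    }[mode]
    return name + suffix

print(kernel_variant_name("mpm_step", AutodiffMode.FORWARD))  # mpm_step_fwd_grad
```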
…ed initialize_llvm_runtime_snodes() (#5108)

* [aot] [llvm] Implemented FieldCacheData and refactored initialize_llvm_runtime_snodes()
* Addressed compilation errors
* Added initialization for struct members
* Minor fix
…#5122) multiple times

This bug was triggered when we tried to port the stable_fluid demo, so this PR also adds a cgraph-based stable fluid demo:

```
ti example stable_fluid_graph
```

Note that it is not ideal to keep both `FunctionType compiled_` and `aot::Kernel compiled_aot_kernel_` inside the C++ `Kernel` class, but we plan to clean that up (likely by getting rid of `FunctionType compiled_`) in #5114.
* fix block dim warning in ggui * fix block dim warning in ggui * fix block dim warning in ggui
Note that we explicitly exclude running pylint on them, as that requires a number of manual fixes first.
* Replace is_custom_type() with is_quant()
* Rename two functions
* Use get_constant() if possible
* Rename two metal functions
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
…uleBuilder to support Fields (#5120)

* [aot] [llvm] Implemented FieldCacheData and refactored initialize_llvm_runtime_snodes()
* Addressed compilation errors
* [aot] [llvm] LLVM AOT Field #1: Adjust serialization/deserialization logic for FieldCacheData
* [llvm] [aot] Added Field support for LLVM AOT
* [aot] [llvm] LLVM AOT Field #2: Updated LLVM AOTModuleLoader & AOTModuleBuilder to support Fields
* Fixed merge issues
* Stopped abusing Program*
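From the Python user's perspective, registering a field in an AOT module roughly follows this sketch; the arch, module directory, and save() signature are assumptions, and LLVM AOT availability depends on the build:

```python
import taichi as ti

ti.init(arch=ti.x64)  # LLVM CPU backend; placeholder choice

x = ti.field(dtype=ti.f32, shape=16)

@ti.kernel
def init_x():
    for i in x:
        x[i] = i * 0.5

mod = ti.aot.Module(ti.x64)
mod.add_field("x", x)     # fields are now serializable alongside kernels
mod.add_kernel(init_x)
mod.save("module_dir")    # directory name is a placeholder; older versions
                          # also took a filename argument
```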
neozhaoliang changed the title from "[Release] Update stable branch to the latest release v1.0.3" to "[Lang] Update stable branch to the latest release v1.0.3" on Jun 14, 2022.