Renames package to nvtripy to avoid PyPI naming collisions
pranavm-nvidia committed Dec 16, 2024
1 parent ab9fa85 commit 75dff65
Showing 311 changed files with 1,862 additions and 1,857 deletions.
24 changes: 12 additions & 12 deletions tripy/CONTRIBUTING.md
@@ -25,24 +25,24 @@ Thank you for considering contributing to Tripy!
docker login ghcr.io/nvidia/tensorrt-incubator
```
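If plain `docker login` prompts for credentials, GitHub's container registry also accepts a personal access token on standard input. This is only an illustrative aside (not part of this diff); `USERNAME` and `CR_PAT` are placeholders:
```bash
# Authenticate to ghcr.io with a personal access token that has the read:packages scope
echo "$CR_PAT" | docker login ghcr.io -u USERNAME --password-stdin
```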
Next, pull and launch the container. From the [`tripy` root directory](.), run:
Next, pull and launch the container. From the [`nvtripy` root directory](.), run:
```bash
docker run --pull always --gpus all -it -p 8080:8080 -v $(pwd):/tripy/ --rm ghcr.io/nvidia/tensorrt-incubator/tripy
docker run --pull always --gpus all -it -p 8080:8080 -v $(pwd):/nvtripy/ --rm ghcr.io/nvidia/tensorrt-incubator/nvtripy
```
- Otherwise, you can build the container locally and launch it.
From the [`tripy` root directory](.), run:
From the [`nvtripy` root directory](.), run:
```bash
docker build -t tripy .
docker run --gpus all -it -p 8080:8080 -v $(pwd):/tripy/ --rm tripy:latest
docker build -t nvtripy .
docker run --gpus all -it -p 8080:8080 -v $(pwd):/nvtripy/ --rm nvtripy:latest
```
3. You should now be able to use `tripy` in the container. To test it out, you can run a quick sanity check:
3. You should now be able to use `nvtripy` in the container. To test it out, you can run a quick sanity check:
```bash
python3 -c "import tripy as tp; print(tp.ones((2, 3)))"
python3 -c "import nvtripy as tp; print(tp.ones((2, 3)))"
```
This should give you some output like:
@@ -64,7 +64,7 @@ You only need to do this once.
We suggest you do this *outside* the container and also use `git` from
outside the container.
From the [`tripy` root directory](.), run:
From the [`nvtripy` root directory](.), run:
```bash
python3 -m pip install pre-commit
pre-commit install
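# (Illustrative addition, not from the original file.) Optionally run every hook
# once against the whole tree to confirm the setup works:
pre-commit run --all-files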
@@ -87,7 +87,7 @@ If you're interested in adding a new operator to Tripy, refer to [this guide](./d
- Create a PR to merge changes from fork to the main repo
2. Managing PRs
- Label your PR correctly (e.g., use `tripy` for changes to `tripy`).
- Label your PR correctly (e.g., use `nvtripy` for changes to `nvtripy`).
- Add a brief description explaining the purpose of the change.
- Each functional change should include an update to an existing test or a new test.
- Ensure any commits you make are signed. See [this page](https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification#ssh-commit-signature-verification)
@@ -153,9 +153,9 @@ The Tripy container includes a build of MLIR-TensorRT, but in some cases, you ma

1. Build MLIR-TensorRT as per the instructions in the [README](../mlir-tensorrt/README.md).

2. Launch the container with the mlir-tensorrt repository mapped so its wheel files are accessible; from the [`tripy` root directory](.), run:
2. Launch the container with the mlir-tensorrt repository mapped so its wheel files are accessible; from the [`nvtripy` root directory](.), run:
```bash
docker run --gpus all -it -p 8080:8080 -v $(pwd):/tripy/ -v $(pwd)/../mlir-tensorrt:/mlir-tensorrt --rm tripy:latest
docker run --gpus all -it -p 8080:8080 -v $(pwd):/nvtripy/ -v $(pwd)/../mlir-tensorrt:/mlir-tensorrt --rm nvtripy:latest
```

3. Install MLIR-TensorRT wheels
@@ -180,5 +180,5 @@ The Tripy container includes a build of MLIR-TensorRT, but in some cases, you ma

4. Verify everything works:
```bash
python3 -c "import tripy as tp; print(tp.ones((2, 3)))"
python3 -c "import nvtripy as tp; print(tp.ones((2, 3)))"
```
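As an optional extra check (illustrative only; not part of this commit), you can list which MLIR-TensorRT packages ended up installed in the container:
```bash
python3 -m pip list | grep -i mlir-tensorrt
```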
8 changes: 4 additions & 4 deletions tripy/Dockerfile
@@ -2,7 +2,7 @@ FROM ubuntu:22.04

LABEL org.opencontainers.image.description="Tripy development container"

WORKDIR /tripy
WORKDIR /nvtripy

SHELL ["/bin/bash", "-c"]

@@ -23,7 +23,7 @@ RUN apt-get update && \
python3 -m pip install --upgrade pip

COPY .lldbinit /root/
COPY pyproject.toml /tripy/pyproject.toml
COPY pyproject.toml /nvtripy/pyproject.toml

RUN pip install build .[docs,dev,test,build] \
-f https://nvidia.github.io/TensorRT-Incubator/packages.html \
@@ -46,5 +46,5 @@ RUN echo "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-$LLVM_VERSION main
rm -rf /var/lib/apt/lists/* && \
ln -s /usr/bin/lldb-17 /usr/bin/lldb

# Export tripy into the PYTHONPATH so it doesn't need to be installed after making changes
ENV PYTHONPATH=/tripy
# Export nvtripy into the PYTHONPATH so it doesn't need to be installed after making changes
ENV PYTHONPATH=/nvtripy
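Because the repository is mounted at `/nvtripy` and that path is on `PYTHONPATH`, edits to the source tree take effect without reinstalling. A quick way to confirm which copy Python resolves (an illustrative sketch, not part of this diff):
```bash
docker run --gpus all -it -v $(pwd):/nvtripy/ --rm nvtripy:latest \
    python3 -c "import nvtripy; print(nvtripy.__file__)"
```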
16 changes: 8 additions & 8 deletions tripy/README.md
@@ -29,11 +29,11 @@ an excellent user experience without compromising performance. Some of the goals
<!-- Tripy: DOC: OMIT End -->

```bash
python3 -m pip install --no-index -f https://nvidia.github.io/TensorRT-Incubator/packages.html tripy --no-deps
python3 -m pip install -f https://nvidia.github.io/TensorRT-Incubator/packages.html tripy
python3 -m pip install --no-index -f https://nvidia.github.io/TensorRT-Incubator/packages.html nvtripy --no-deps
python3 -m pip install -f https://nvidia.github.io/TensorRT-Incubator/packages.html nvtripy
```

***Important:** There is another package named `tripy` on PyPI.*
***Important:** There is another package named `nvtripy` on PyPI.*
*Note that it is **not** the package from this repository.*
*Please use the instructions above to ensure you install the correct package.*
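One way to double-check that the intended package was installed (an illustrative aside, not part of this diff) is to inspect its metadata:
```bash
# Shows the version, summary, and install location of whatever `nvtripy` resolves to
python3 -m pip show nvtripy
```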

@@ -48,23 +48,23 @@ To get the latest changes in the repository, you can build Tripy wheels from sou
python3 -m pip install build
```

2. From the [`tripy` root directory](.), run:
2. From the [`nvtripy` root directory](.), run:

```bash
python3 -m build . -w
```

3. Install the wheel, which should have been created in the `dist/` directory.
From the [`tripy` root directory](.), run:
From the [`nvtripy` root directory](.), run:

```bash
python3 -m pip install -f https://nvidia.github.io/TensorRT-Incubator/packages.html dist/tripy-*.whl
python3 -m pip install -f https://nvidia.github.io/TensorRT-Incubator/packages.html dist/nvtripy-*.whl
```

4. **[Optional]** To ensure that Tripy was installed correctly, you can run a sanity check:

```bash
python3 -c "import tripy as tp; x = tp.ones((5,), dtype=tp.int32); assert x.tolist() == [1] * 5"
python3 -c "import nvtripy as tp; x = tp.ones((5,), dtype=tp.int32); assert x.tolist() == [1] * 5"
```

<!-- Tripy: DOC: OMIT End -->
@@ -73,7 +73,7 @@

We've included several guides in Tripy to make it easy to get started.
We recommend starting with the
[Introduction To Tripy](https://nvidia.github.io/TensorRT-Incubator/pre0_user_guides/00-introduction-to-tripy.html)
[Introduction To Tripy](https://nvidia.github.io/TensorRT-Incubator/pre0_user_guides/00-introduction-to-nvtripy.html)
guide.
Other features covered in our guides include:
4 changes: 2 additions & 2 deletions tripy/RELEASE.md
@@ -3,7 +3,7 @@
This document explains how to release a new version of Tripy.

1. Update version numbers in [`pyproject.toml`](./pyproject.toml) and
[`__init__.py`](./tripy/__init__.py) (make sure they match!).
[`__init__.py`](./nvtripy/__init__.py) (make sure they match!).

Often, updates to Tripy will also require updates to dependencies,
like MLIR-TRT, so make sure to update those version numbers as well.
@@ -21,7 +21,7 @@ This document explains how to release a new version of Tripy.

Once the post-merge pipelines have succeeded, create a new tag with:
```bash
git tag tripy-vX.Y.Z
git tag nvtripy-vX.Y.Z
```
replacing `X.Y.Z` with the version number and push it to the repository.
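Pushing the tag then looks something like the following (illustrative only; not part of the original document):
```bash
git push origin nvtripy-vX.Y.Z
```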

10 changes: 5 additions & 5 deletions tripy/docs/README.md
@@ -21,7 +21,7 @@ generation, such as where in the documentation hierarchy the API should be docum
The `generate_rsts.py` script uses this information to automatically generate a directory
structure and populate it with `.rst` files.

For more information, see the docstring for [`export.public_api()`](../tripy/export.py).
For more information, see the docstring for [`export.public_api()`](../nvtripy/export.py).

### Docstrings

@@ -70,7 +70,7 @@ This means we need to make some special considerations:
Myst can replace them with URLs to our remote repository. Otherwise, the links will
cause the relevant file to be downloaded. For example:
```
[Fill operation](source:/tripy/frontend/trace/ops/fill.py)
[Fill operation](source:/nvtripy/frontend/trace/ops/fill.py)
```
Links to markdown files are an exception; if a markdown file is part of the *rendered*
@@ -85,7 +85,7 @@ This means we need to make some special considerations:
For example:
```md
{class}`tripy.Tensor`
{class}`nvtripy.Tensor`
```
`<api_kind>` can take on any value that is a valid role provided by
@@ -101,11 +101,11 @@ Code examples in public facing docstrings and guides are preprocessed before
documentation is generated. Specifically:
- Any code examples are executed so that their output can be
displayed after the code block. Several modules, including `tripy` (as `tp`),
displayed after the code block. Several modules, including `nvtripy` (as `tp`),
`numpy` (as `np`), `cupy` (as `cp`), and `torch` are automatically imported
and can be used in code examples.
- The values of any `tripy` type local variables are appended to the output.
- The values of any `nvtripy` type local variables are appended to the output.
You can customize this behavior:
- To only display certain variables, add `# doc: print-locals` followed by a space
22 changes: 11 additions & 11 deletions tripy/docs/conf.py
@@ -26,9 +26,9 @@

from tests import helper

import tripy as tp
from tripy.common.datatype import DATA_TYPES
from tripy.wrappers import TYPE_VERIFICATION
import nvtripy as tp
from nvtripy.common.datatype import DATA_TYPES
from nvtripy.wrappers import TYPE_VERIFICATION

PARAM_PAT = re.compile(":param .*?:")

@@ -52,8 +52,8 @@
python_use_unqualified_type_names = True

nitpick_ignore = {
("py:class", "tripy.types.ShapeLike"),
("py:class", "tripy.types.TensorLike"),
("py:class", "nvtripy.types.ShapeLike"),
("py:class", "nvtripy.types.TensorLike"),
("py:class", "Tensor"),
}
nitpick_ignore_regex = {
@@ -140,7 +140,7 @@
myst_url_schemes = {
"http": None,
"https": None,
"source": "https://github.com/NVIDIA/TensorRT-Incubator/tree/main/tripy/{{path}}",
"source": "https://github.com/NVIDIA/TensorRT-Incubator/tree/main/nvtripy/{{path}}",
}
myst_number_code_blocks = ["py", "rst"]

@@ -155,7 +155,7 @@
def process_docstring_impl(app, what, name, obj, options, lines):
doc = "\n".join(lines).strip()
blocks = helper.consolidate_code_blocks(doc)
name = name.lstrip("tripy.")
name = name.lstrip("nvtripy.")

# Check signature for functions/methods and class constructors.
if what in {"function", "method"} or (what == "class" and name in seen_classes):
@@ -265,9 +265,9 @@ def allow_no_example():
# `tp.Module`s include examples in their constructors, so their __call__ methods don't require examples.
is_tripy_module_call_method = False
if what == "method" and obj.__name__ == "__call__":
class_name = "tripy." + name.rpartition(".")[0]
# Class names are prefixed with tripy.<...>, so we need to import it here to make eval() work.
import tripy
class_name = "nvtripy." + name.rpartition(".")[0]
# Class names are prefixed with nvtripy.<...>, so we need to import it here to make eval() work.
import nvtripy

is_tripy_module_call_method = issubclass(eval(class_name), tp.Module)

@@ -335,7 +335,7 @@ def setup(app):
# A note on aliases: if you rename a class via an import statement, e.g. `import X as Y`,
# the documentation generated for `Y` will just be: "Alias of X"
# To get the real documentation, you can make Sphinx think that `Y` is not an alias but instead a real
# class/function. To do so, you just need to define the __name__ attribute in this function (*not* in tripy code!):
# class/function. To do so, you just need to define the __name__ attribute in this function (*not* in nvtripy code!):
# Y.__name__ = "Y"

app.connect("autodoc-process-docstring", process_docstring)
4 changes: 2 additions & 2 deletions tripy/docs/generate_rsts.py
@@ -27,9 +27,9 @@
from textwrap import dedent, indent
from typing import Dict, List, Set

import tripy as tp
import nvtripy as tp
from tests import helper
from tripy.export import PUBLIC_APIS
from nvtripy.export import PUBLIC_APIS


@dataclass
12 changes: 6 additions & 6 deletions tripy/docs/packages.html
@@ -10,22 +10,22 @@
<body>
<h1>Package Index</h1>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.6/tripy-0.0.6-py3-none-any.whl">tripy-0.0.6-py3-none-any.whl</a><br>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/nvtripy-v0.0.6/nvtripy-0.0.6-py3-none-any.whl">nvtripy-0.0.6-py3-none-any.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.5/tripy-0.0.5-py3-none-any.whl">tripy-0.0.5-py3-none-any.whl</a><br>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/nvtripy-v0.0.5/nvtripy-0.0.5-py3-none-any.whl">nvtripy-0.0.5-py3-none-any.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.4/tripy-0.0.4-py3-none-any.whl">tripy-0.0.4-py3-none-any.whl</a><br>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/nvtripy-v0.0.4/nvtripy-0.0.4-py3-none-any.whl">nvtripy-0.0.4-py3-none-any.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.3/tripy-0.0.3-py3-none-any.whl">tripy-0.0.3-py3-none-any.whl</a><br>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/nvtripy-v0.0.3/nvtripy-0.0.3-py3-none-any.whl">nvtripy-0.0.3-py3-none-any.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.2/tripy-0.0.2-py3-none-any.whl">tripy-0.0.2-py3-none-any.whl</a><br>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/nvtripy-v0.0.2/nvtripy-0.0.2-py3-none-any.whl">nvtripy-0.0.2-py3-none-any.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.1/tripy-0.0.1-py3-none-any.whl">tripy-0.0.1-py3-none-any.whl</a><br>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/nvtripy-v0.0.1/nvtripy-0.0.1-py3-none-any.whl">nvtripy-0.0.1-py3-none-any.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.29/mlir_tensorrt_compiler-0.1.29+cuda12.trt102-cp310-cp310-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.29+cuda12.trt102-cp310-cp310-linux_x86_64.whl</a><br>
18 changes: 9 additions & 9 deletions tripy/docs/post0_developer_guides/architecture.md
@@ -80,12 +80,12 @@ inp = tp.full((2, 3), value=0.5)
The `tp.full()` and `tp.tanh()` APIs are part of the frontend and like other frontend functions, map to one or more
(just one in this case) `Trace` operations. For frontend functions that map to exactly one `Trace` operation,
we define the function directly alongside the corresponding `Trace` operation.
In this case, the [`Fill` operation](source:/tripy/frontend/trace/ops/fill.py) provides `tp.full()` and
the [`UnaryElementwise` operation](source:/tripy/frontend/trace/ops/unary_elementwise.py) provides `tp.tanh()`.
In this case, the [`Fill` operation](source:/nvtripy/frontend/trace/ops/fill.py) provides `tp.full()` and
the [`UnaryElementwise` operation](source:/nvtripy/frontend/trace/ops/unary_elementwise.py) provides `tp.tanh()`.

*We organize it this way to reduce the number of files that need to be touched when adding new ops.*
*If an operation is composed of multiple `Trace` operations, the frontend function can be*
*defined under the [`frontend/ops`](source:/tripy/frontend/ops) submodule instead.*
*defined under the [`frontend/ops`](source:/nvtripy/frontend/ops) submodule instead.*

#### What Does It Do?

@@ -145,7 +145,7 @@ Here's the textual representation for the `Trace` from our example:
<!-- Tripy: DOC: OMIT Start -->
```py
# doc: no-print-locals
from tripy.frontend.trace import Trace
from nvtripy.frontend.trace import Trace
# Output has been eval'd already, so we'll construct a new one
new_out = tp.tanh(inp)
trace = Trace([new_out])
@@ -169,7 +169,7 @@ but this is good enough for a conceptual understanding):

```py
def to_flat_ir(self, inputs, outputs):
from tripy.flat_ir.ops import TanhOp
from nvtripy.flat_ir.ops import TanhOp

TanhOp.build(inputs, outputs)
```
@@ -178,9 +178,9 @@ Wait a second - what's happening here? The function doesn't return anything; in
anything at all!

The way this works is as follows: when we call `to_flat_ir()` we provide input and output
[`FlatIRTensor`](source:/tripy/flat_ir/tensor.py)s. `to_flat_ir()` is responsible for generating a
[`FlatIRTensor`](source:/nvtripy/flat_ir/tensor.py)s. `to_flat_ir()` is responsible for generating a
subgraph of `FlatIR` operations that bind to these inputs and outputs. The
[`BaseFlatIROp` build function](source:/tripy/flat_ir/ops/base.py) updates the producer of the output tensors,
[`BaseFlatIROp` build function](source:/nvtripy/flat_ir/ops/base.py) updates the producer of the output tensors,
meaning that *just building a `FlatIR` operation is enough to add it to the subgraph*. Once this binding
is done, we take the resulting subgraph and inline it into the `FlatIR`, remapping the I/O tensors to those
that already exist in the `FlatIR`.
@@ -203,7 +203,7 @@ Our final translation step is to go from `FlatIR` into MLIR.
Similar to `Trace` operations, `FlatIR` operations implement `to_mlir()` which generates MLIR operations.
Unlike `Trace` operations, this is always a 1:1 mapping.

Here's a snippet for how [`tanh()` is implemented](source:/tripy/flat_ir/ops/tanh.py):
Here's a snippet for how [`tanh()` is implemented](source:/nvtripy/flat_ir/ops/tanh.py):
```py
def to_mlir(self, operands):
return [stablehlo.TanhOp(*operands)]
@@ -227,4 +227,4 @@ Once we have the complete MLIR representation, we then compile it to an executab

Finally, we use the MLIR-TRT executor to launch the executable and retrieve the output tensors.
The executable returns [`memref`s](https://mlir.llvm.org/docs/Dialects/MemRef/) which we then
wrap in Tripy frontend {class}`tripy.Tensor`s.
wrap in Tripy frontend {class}`nvtripy.Tensor`s.
2 changes: 1 addition & 1 deletion tripy/docs/post0_developer_guides/debugging.md
@@ -29,7 +29,7 @@ Alternatively, you can use [LLDB](https://lldb.llvm.org/) if you launch the cont
```bash
docker run --gpus all --cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined --security-opt apparmor=unconfined \
-p 8080:8080 -v $(pwd):/tripy/ -it --rm tripy:latest
-p 8080:8080 -v $(pwd):/nvtripy/ -it --rm nvtripy:latest
```
<!-- Tripy: DOC: NO_EVAL End -->
