[sync] Issue template + Pyre #4

Merged
merged 6 commits on Oct 14, 2021
53 changes: 53 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,53 @@
---
name: "\U0001F41B Bug Report"
about: Submit a bug report to help us improve xFormers

---

# 🐛 Bug

<!-- A clear and concise description of what the bug is. -->

## Command

## To Reproduce

Steps to reproduce the behavior:

<!-- If you were running a command, post the exact command that you were running -->

1.
2.
3.

<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->

## Expected behavior

<!-- A clear and concise description of what you expected to happen. -->

## Environment

Please copy and paste the output from the
environment collection script from PyTorch
(or fill out the checklist below manually).

You can run the script with:

```bash
# For security purposes, please check the contents of collect_env.py before running it.
python -m torch.utils.collect_env
```

- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:

## Additional context

<!-- Add any other context about the problem here. -->
25 changes: 25 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.md
@@ -0,0 +1,25 @@
---
name: "\U0001F680Feature Request"
about: Submit a proposal/request for a new xFormers feature

---

# 🚀 Feature

<!-- A clear and concise description of the feature proposal -->

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->

## Pitch

<!-- A clear and concise description of what you want to happen. -->

## Alternatives

<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->

## Additional context

<!-- Add any other context or screenshots about the feature request here. -->
7 changes: 7 additions & 0 deletions .github/ISSUE_TEMPLATE/questions-help-support.md
@@ -0,0 +1,7 @@
---
name: "❓Questions/Help/Support"
about: Do you need support?

---

# ❓ Questions and Help
21 changes: 21 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,21 @@
## What does this PR do?
Fixes # (issue).

## Before submitting

- [ ] Did you have fun?
- Make sure you had fun coding 🙃
- [ ] Did you read the [contributor guideline](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md)?
- [ ] Was this discussed/approved via a GitHub issue? (not needed for typos or doc improvements)
- [ ] N/A
- [ ] Did you make sure to update the docs?
- [ ] N/A
- [ ] Did you write any new necessary tests?
- [ ] N/A
- [ ] Did you update the [changelog](https://github.com/facebookresearch/fairscale/blob/master/CHANGELOG.md)? (if needed)
- [ ] N/A


## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
2 changes: 1 addition & 1 deletion .github/workflows/gh-pages.yml
@@ -49,5 +49,5 @@ jobs:
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./public
publish_dir: docs/build/html
if: github.event_name != 'pull_request'
2 changes: 2 additions & 0 deletions .gitignore
@@ -38,3 +38,5 @@ my_runs.md
# JetBrains PyCharm IDE
.idea/

# Pyre cache.
.pyre/
7 changes: 7 additions & 0 deletions .pyre_configuration
@@ -0,0 +1,7 @@
{
"python_version": "3.7",
"source_directories": [
"stubs",
"."
]
}
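The `.pyre_configuration` above is plain JSON read from the project root; `source_directories` lists where Pyre looks for sources and stubs. A quick sanity check of its shape (a sketch for illustration, not part of this PR):

```python
import json

# The .pyre_configuration contents added in this PR, verbatim.
config_text = """
{
    "python_version": "3.7",
    "source_directories": [
        "stubs",
        "."
    ]
}
"""

config = json.loads(config_text)

# Pyre targets Python 3.7 semantics and scans both the local
# stubs/ directory and the repository root for sources.
assert config["python_version"] == "3.7"
assert config["source_directories"] == ["stubs", "."]
print("valid .pyre_configuration")
```

With this file in place, running `pyre check` from the repository root (with `pyre-check` installed, as pinned in `requirements-test.txt`) picks up these settings automatically.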
1 change: 1 addition & 0 deletions .watchmanconfig
@@ -0,0 +1 @@
{}
80 changes: 80 additions & 0 deletions CODE_OF_CONDUCT.md
@@ -0,0 +1,80 @@
# Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment
include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies within all project spaces, and it also applies when
an individual is representing the project or its community in public spaces.
Examples of representing a project or community include using an official
project e-mail address, posting via an official social media account, or acting
as an appointed representative at an online or offline event. Representation of
a project may be further defined and clarified by project maintainers.

This Code of Conduct also applies outside the project spaces when there is a
reasonable belief that an individual's behavior may have a negative impact on
the project or its community.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at <opensource-conduct@fb.com>. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
1 change: 1 addition & 0 deletions docs/requirements.txt
@@ -4,3 +4,4 @@ sphinx_rtd_theme==0.4.3
sphinxcontrib-programoutput==0.16
git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
torch>=1.6.0
numpy>=1.19.5
45 changes: 1 addition & 44 deletions docs/source/tutorials/pytorch_encoder.rst
@@ -74,50 +74,7 @@ You can think of it as a declaration of the sequence of blocks that you would li
"attention": {
"name": "linformer", # whatever attention mechanism
"dropout": 0,
"causal": True,
"seq_len": 512,
},
},
"feedforward_config": {
"name": "MLP",
"dim_model": 384,
"dropout": 0,
"activation": "relu",
"hidden_layer_multiplier": 4,
},
}
}
]

config = xFormerConfig(**my_config) # This part of xFormers is entirely type checked and needs a config object; this could change in the future
model = xFormer.from_config(config).to(device)

from xformers.factory.model_factory import xFormer, xFormerConfig

my_config = [
# A list of the encoder or decoder blocks which constitute the Transformer.
# Note that a sequence of different encoder blocks can be used, same for decoders
{
"reversible": False, # Optionally make these layers reversible, to save memory
"block_config": {
"block_type": "encoder",
"num_layers": 3, # Optional, this means that this config will repeat N times
"dim_model": 384,
"layer_norm_style": "pre", # Optional, pre/post
"position_encoding_config": {
"name": "vocab", # whatever position encoding makes sense
"dim_model": 384,
"seq_len": 1024,
"vocab_size": 64,
},
"multi_head_config": {
"num_heads": 4,
"dim_model": 384,
"residual_dropout": 0,
"attention": {
"name": "linformer", # whatever attention mechanism
"dropout": 0,
"causal": True,
"causal": False,
"seq_len": 512,
},
},
1 change: 1 addition & 0 deletions requirements-test.txt
@@ -7,6 +7,7 @@ black == 20.8b1
flake8 == 3.8.4
isort == 5.7.0
mypy == 0.812
pyre-check == 0.9.6

# Tools for unit tests & coverage.
pytest == 5.4.1
3 changes: 3 additions & 0 deletions stubs/fvcore/nn.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
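Each of these stubs uses PEP 562's module-level `__getattr__`: the type checker resolves any attribute of the stubbed library to `Any`, silencing missing-stub errors without typing the whole dependency. The runtime analogue of the mechanism can be sketched like this (the `fake_lib` module below is hypothetical, purely for illustration):

```python
import sys
import types
from typing import Any

# Build a module whose attribute lookups always succeed, mirroring
# the stubs' "def __getattr__(name) -> Any: ..." declaration.
stub = types.ModuleType("fake_lib")

def _module_getattr(name: str) -> Any:
    # Called for any attribute not found in the module's namespace
    # (PEP 562), so every lookup returns a placeholder.
    return f"<stub attribute {name!r}>"

stub.__getattr__ = _module_getattr  # type: ignore[attr-defined]
sys.modules["fake_lib"] = stub

import fake_lib  # resolves to the stub registered above

result = fake_lib.DataFrame  # any attribute name works
print(result)
```

For Pyre the `.pyi` form is purely declarative, but the effect is the same: every attribute of the stubbed module type-checks as `Any`.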
3 changes: 3 additions & 0 deletions stubs/matplotlib/pyplot.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/pandas.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/recommonmark/transform.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/seaborn.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/sklearn/model_selection.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/submitit.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/tensorflow.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/tqdm.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/triton/__init__.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/triton/language.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
3 changes: 3 additions & 0 deletions stubs/triton/ops/blocksparse.pyi
@@ -0,0 +1,3 @@
from typing import Any

def __getattr__(name) -> Any: ...
1 change: 1 addition & 0 deletions tests/test_attentions.py
@@ -154,6 +154,7 @@ def test_different_kq_dimensions(
"linformer",
"blocksparse",
}:
# pyre-fixme[29]: The library function `pytest.skip` is not supported by Pyre.
pytest.skip(f"{attention_name} does not support different k, q dimensions yet.")
multi_head = _get_multihead(attention_name, 0.0, 0.0, False, heads, device)

1 change: 1 addition & 0 deletions tests/test_feedforward.py
@@ -40,6 +40,7 @@ def test_feedforward(
ffw = build_feedforward(test_config)

if ffw.requires_cuda and not device.type == "cuda":
# pyre-fixme[29]: The library function `pytest.skip` is not supported by Pyre.
pytest.skip("This MLP requires CUDA and current device does not match")

inputs = torch.rand(BATCH, SEQ, LATENT, device=device)
2 changes: 2 additions & 0 deletions tests/test_triton_fused_linear.py
@@ -98,6 +98,8 @@ def test_fused_linear_parity(shape, activation: Activation, bias: bool, amp: boo
X_ = torch.normal(0, 1, size=shape, device="cuda")
X_.requires_grad_()

# pyre-ignore[16]: TODO(T101400990): Pyre did not recognize the
# `FusedLinear` import.
triton_fused_linear = FusedLinear(
shape[-1], shape[-1] // 2, bias=bias, activation=activation
).to("cuda")
2 changes: 2 additions & 0 deletions xformers/benchmarks/benchmark_triton_blocksparse.py
@@ -64,6 +64,8 @@ def bench_matmul(dtype: torch.dtype, shapes):
if mode == "sdd":
b_cs = b_cs.transpose(-2, -1)

# pyre-fixme[16]: TODO(T101400990): Pyre did not recognize the
# `SparseCS` import.
sparse_cs_mask = SparseCS(
mask.flatten(start_dim=0, end_dim=1).contiguous(),
device=torch.device("cuda"),
2 changes: 2 additions & 0 deletions xformers/benchmarks/benchmark_triton_layernorm.py
@@ -36,6 +36,8 @@ def bench_layernorm(backward: bool):
# PyTorch layer norm
torch_layernorm = torch.nn.LayerNorm([K]).to(dtype=dtype, device=device)

# pyre-ignore[16]: TODO(T101400990): Pyre did not recognize the
# `FusedLayerNorm` import.
# Fused layernorm equivalent
fused_layernorm = FusedLayerNorm([K]).to(dtype=dtype, device=device)
