[minor] Enabling faster layernorm in the factory (#92)
blefaudeux authored Nov 11, 2021
Parent: c92931b · Commit: 1272b12
Showing 2 changed files with 2 additions and 2 deletions.
examples/microViT.py (1 addition, 1 deletion)

@@ -148,7 +148,7 @@ def forward(self, x):
         x = self.patch_emb(x)
 
         # flatten patches into sequence
-        x = x.flatten(2, 3).transpose(1, 2)  # B HW C
+        x = x.flatten(2, 3).transpose(1, 2).contiguous()  # B HW C
 
         if self.hparams.classifier == Classifier.TOKEN:
             # prepend classification token
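
The added `.contiguous()` matters because `transpose` only permutes strides and returns a view; the fused layernorm path enabled by this commit generally expects a dense memory layout. A minimal sketch of the effect (shapes are illustrative, not taken from the commit):

import torch

B, C, H, W = 2, 16, 8, 8
x = torch.randn(B, C, H, W)

# flatten patches into a sequence, as in microViT's forward()
seq = x.flatten(2, 3).transpose(1, 2)   # B HW C, but only a strided view
print(seq.is_contiguous())              # False: memory is still laid out as B C HW

seq = seq.contiguous()                  # materializes a dense B HW C buffer
print(seq.is_contiguous())              # True: safe input for fused kernels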
xformers/components/residual.py (1 addition, 1 deletion)

@@ -11,7 +11,7 @@
 import torch.nn as nn
 
 # NOTE: The Triton layernorm can be activated/deactivated from here
-_is_triton_available = False  # torch.cuda.is_available()
+_is_triton_available = torch.cuda.is_available()
 
 if _is_triton_available:
     try:
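
Flipping `_is_triton_available` from a hard-coded `False` to `torch.cuda.is_available()` re-enables the Triton layernorm whenever a GPU is present, while the `try` block below it keeps a fallback when Triton cannot be imported. A sketch of that guard pattern, assuming the usual optional-import fallback (the import path and helper name are assumptions, not read from this diff):

import torch
import torch.nn as nn

# Triton kernels only run on CUDA devices
_is_triton_available = torch.cuda.is_available()

if _is_triton_available:
    try:
        # hypothetical import path for the Triton-backed layernorm
        from xformers.triton.layer_norm import FusedLayerNorm
    except ImportError:
        # Triton missing or broken: fall back to the PyTorch layernorm
        _is_triton_available = False

def build_layernorm(dim: int) -> nn.Module:
    # helper name is illustrative, not part of the xformers API
    return FusedLayerNorm(dim) if _is_triton_available else nn.LayerNorm(dim)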
