Remove ring_flash_attention warning #9119

Merged · 2 commits · Sep 13, 2024
22 changes: 11 additions & 11 deletions paddlenlp/transformers/ring_flash_attention.py
@@ -20,17 +20,6 @@
 from paddle import _C_ops
 from paddle.autograd.py_layer import PyLayer
 
-try:
-    from paddlenlp_ops import flash_attn_bwd
-except (ImportError, ModuleNotFoundError):
-    from paddlenlp.utils.log import logger
-
-    logger.warning(
-        "if you run ring_flash_attention.py, please ensure you install "
-        "the paddlenlp_ops by following the instructions "
-        "provided at https://github.com/PaddlePaddle/PaddleNLP/blob/develop/csrc/README.md"
-    )
-
 
 class RingCommunicator:
     def __init__(self, group, local_key, local_value):
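
The hunk above removes the module-level try/except, so merely importing ring_flash_attention.py no longer emits a warning when paddlenlp_ops is not installed. The second hunk below re-adds the same guarded import inside the backward computation, where flash_attn_bwd is actually needed; a minimal sketch of this deferred-import pattern follows the diff.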
@@ -233,6 +222,17 @@
     if attn_mask is not None:
         attn_masks_list = paddle.split(attn_mask, num_or_sections=cp_size * 2, axis=3)
 
+    try:
+        from paddlenlp_ops import flash_attn_bwd
+    except (ImportError, ModuleNotFoundError):
+        from paddlenlp.utils.log import logger
+
+        logger.warning(
"if you run ring_flash_attention.py, please ensure you install "
"the paddlenlp_ops by following the instructions "
"provided at https://github.com/PaddlePaddle/PaddleNLP/blob/develop/csrc/README.md"
)

for step in range(cp_size):
block_k, block_v = kv_comm_buffer.get_buffers()

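In effect the PR turns an eager optional import into a lazy one: the try/except now runs inside the backward function, so the warning appears only for users who actually execute ring flash attention without paddlenlp_ops. The sketch below illustrates that pattern in isolation. It is not PaddleNLP code; the helper name _load_flash_attn_bwd, the caching, and the explicit RuntimeError fallback are assumptions added for the example (the PR itself simply performs the guarded import inside the function).

```python
# Minimal sketch of the deferred-import pattern (illustrative only; the helper
# below is hypothetical and not part of PaddleNLP).
import logging
from functools import lru_cache


@lru_cache(maxsize=1)
def _load_flash_attn_bwd():
    """Import the optional fused kernel on first use and warn once if it is missing."""
    try:
        from paddlenlp_ops import flash_attn_bwd  # optional custom op
    except (ImportError, ModuleNotFoundError):
        logging.getLogger(__name__).warning(
            "paddlenlp_ops is not installed; please follow "
            "https://github.com/PaddlePaddle/PaddleNLP/blob/develop/csrc/README.md"
        )
        return None
    return flash_attn_bwd


def ring_attention_backward_step(*args, **kwargs):
    # The import (and any warning) happens here, at call time, rather than when
    # the module containing this function is imported.
    flash_attn_bwd = _load_flash_attn_bwd()
    if flash_attn_bwd is None:
        raise RuntimeError("paddlenlp_ops is required for the ring flash attention backward pass")
    return flash_attn_bwd(*args, **kwargs)
```

Because Python caches imported modules in sys.modules, repeating the import inside the hot path is cheap after the first call; the visible effect of the change is simply that importing the module no longer logs the warning.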