Support fused_attention_qkv for auto_parallel llama #8432

Merged
merged 5 commits into develop on May 16, 2024

add

5e2edde
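For context on the feature named in the title: "fused_attention_qkv" refers to computing the query, key and value projections with a single fused linear layer instead of three separate ones, which reduces kernel launches and can map better onto tensor-parallel sharding. The sketch below is a minimal, illustrative Paddle example of that pattern; it is not the PaddleNLP modeling_auto.py implementation, and all class and variable names here are assumptions made for illustration.

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class FusedQKVAttention(nn.Layer):
    """Illustrative fused-QKV attention block (not the PaddleNLP code)."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # One projection producing Q, K and V together, instead of three Linears.
        self.qkv_proj = nn.Linear(hidden_size, 3 * hidden_size, bias_attr=False)
        self.out_proj = nn.Linear(hidden_size, hidden_size, bias_attr=False)

    def forward(self, x):
        bsz, seq_len, hidden_size = x.shape
        # Single matmul, then split the fused output back into Q, K, V.
        qkv = self.qkv_proj(x)
        q, k, v = paddle.split(qkv, 3, axis=-1)

        def to_heads(t):
            # [bsz, seq, hidden] -> [bsz, heads, seq, head_dim]
            return t.reshape([bsz, seq_len, self.num_heads, self.head_dim]).transpose([0, 2, 1, 3])

        q, k, v = to_heads(q), to_heads(k), to_heads(v)
        scores = paddle.matmul(q, k, transpose_y=True) / (self.head_dim ** 0.5)
        probs = F.softmax(scores, axis=-1)
        out = paddle.matmul(probs, v)
        out = out.transpose([0, 2, 1, 3]).reshape([bsz, seq_len, hidden_size])
        return self.out_proj(out)


if __name__ == "__main__":
    layer = FusedQKVAttention(hidden_size=64, num_heads=4)
    x = paddle.randn([2, 8, 64])
    print(layer(x).shape)  # [2, 8, 64]
```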
Codecov / codecov/project failed May 16, 2024 in 0s

55.42% (target 58.00%)


Codecov Report

Attention: Patch coverage is 0%, with 6 lines in your changes missing coverage. Please review.

Project coverage is 55.42%. Comparing base (53ad2da) to head (5e2edde).
Report is 6 commits behind head on develop.

Files                                           Patch %   Lines
paddlenlp/transformers/llama/modeling_auto.py   0.00%     6 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8432      +/-   ##
===========================================
- Coverage    55.43%   55.42%   -0.01%     
===========================================
  Files          616      617       +1     
  Lines        96243    96283      +40     
===========================================
+ Hits         53348    53366      +18     
- Misses       42895    42917      +22     

☔ View full report in Codecov by Sentry.