
[llm]support qlora pp #7801

Merged
merged 2 commits into from
Jan 10, 2024
Conversation

lugimzzz
Contributor

@lugimzzz lugimzzz commented Jan 8, 2024

PR types

Others

PR changes

APIs

Description

Support QLoRA with pipeline parallelism (PP).


paddle-bot bot commented Jan 8, 2024

Thanks for your contribution!


codecov bot commented Jan 8, 2024

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (5c7efcc) 57.30% compared to head (9922a0b) 57.11%.
Report is 14 commits behind head on develop.

Files Patch % Lines
paddlenlp/transformers/model_utils.py 50.00% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #7801      +/-   ##
===========================================
- Coverage    57.30%   57.11%   -0.19%     
===========================================
  Files          584      587       +3     
  Lines        87688    88194     +506     
===========================================
+ Hits         50252    50376     +124     
- Misses       37436    37818     +382     


@@ -2193,6 +2193,10 @@ def from_pretrained(cls, pretrained_model_name_or_path, *args, **kwargs):
quantization_config=config.quantization_config,
llm_int8_threshold=config.quantization_config.llm_int8_threshold,
)
quantization_linear_list = []
for key in model.state_dict().keys():
if "quant_weight" in key:
lugimzzz
Contributor Author
Pipeline parallelism changes the state_dict key names, so quantization_linear_list is now obtained this way instead.
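To illustrate the idea behind this change, here is a minimal standalone sketch (the key names below are hypothetical examples, and `collect_quantization_linear_list` is an illustrative helper, not PaddleNLP's actual function): under pipeline parallelism the module hierarchy is rewrapped, so state_dict keys gain stage-dependent prefixes. Rather than hard-coding layer names, the keys are scanned for the `quant_weight` parameter that quantized linear layers emit, and each layer's name is recovered from its matching key.

```python
def collect_quantization_linear_list(state_dict_keys):
    """Return the layer-name prefixes of all quantized linear layers,
    regardless of how parallelism has renamed the modules."""
    quantization_linear_list = []
    for key in state_dict_keys:
        if "quant_weight" in key:
            # Strip the trailing ".quant_weight" to get the layer name.
            quantization_linear_list.append(key.rsplit(".quant_weight", 1)[0])
    return quantization_linear_list


# The same layer under two parallelism modes yields different raw keys
# (example names only):
single_card_keys = ["llama.layers.0.mlp.gate_proj.quant_weight"]
pipeline_keys = ["_layers.1.mlp.gate_proj.quant_weight"]  # PP-style prefix

print(collect_quantization_linear_list(single_card_keys))
print(collect_quantization_linear_list(pipeline_keys))
```

Because the scan keys off the parameter suffix rather than a fixed module path, the same code works whether or not pipeline parallelism has rewrapped the model.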

Collaborator

@ZHUI ZHUI left a comment

LGTM

Collaborator

@wawltor wawltor left a comment

LGTM

@wawltor wawltor merged commit 5c2bf81 into PaddlePaddle:develop Jan 10, 2024
8 of 9 checks passed
@lugimzzz lugimzzz deleted the quant1 branch January 10, 2024 05:15
lugimzzz added a commit to lugimzzz/PaddleNLP that referenced this pull request Jan 11, 2024
* supprt qlora pp

* fix scale dtype
lugimzzz added a commit that referenced this pull request Jan 11, 2024
* fix lora (#7824)

* [llm]support qlora pp (#7801)

* supprt qlora pp

* fix scale dtype
3 participants