
[utc] fix loading local model in taskflow #4505

Merged
5 commits merged on Jan 17, 2023

Conversation

LemonNoel
Contributor

PR types

Bug fixes

PR changes

APIs

Description

  • Fix a bug when loading local checkpoints from task_path in Taskflow.
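To make the bug concrete, here is a minimal, self-contained sketch (a hypothetical helper, not actual PaddleNLP code) of which checkpoint the task ends up loading. Before this PR, the non-hub branch of `_construct_model` passed the built-in model name to `from_pretrained`, so a user-supplied `task_path` pointing at a local checkpoint was silently ignored:

```python
def resolve_checkpoint(model, task_path, from_hf_hub, fixed=True):
    """Hypothetical sketch of which checkpoint gets loaded.

    Before the fix, the local (non-hub) branch loaded the built-in
    `model` name instead of the user's `task_path`.
    """
    if from_hf_hub:
        return task_path
    return task_path if fixed else model


# Buggy behavior: a local checkpoint directory is ignored.
assert resolve_checkpoint("utc-large", "./checkpoint", False, fixed=False) == "utc-large"
# Fixed behavior: the local path is honored.
assert resolve_checkpoint("utc-large", "./checkpoint", False, fixed=True) == "./checkpoint"
```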

@paddle-bot

paddle-bot bot commented Jan 17, 2023

Thanks for your contribution!

@LemonNoel LemonNoel requested a review from linjieccc January 17, 2023 06:26
@codecov

codecov bot commented Jan 17, 2023

Codecov Report

Merging #4505 (7f51676) into develop (689428a) will increase coverage by 0.01%.
The diff coverage is 79.41%.

@@             Coverage Diff             @@
##           develop    #4505      +/-   ##
===========================================
+ Coverage    41.27%   41.29%   +0.01%     
===========================================
  Files          432      432              
  Lines        61705    61733      +28     
===========================================
+ Hits         25468    25491      +23     
- Misses       36237    36242       +5     
Impacted Files Coverage Δ
...addlenlp/taskflow/zero_shot_text_classification.py 18.25% <33.33%> (+0.79%) ⬆️
paddlenlp/utils/serialization.py 88.05% <75.00%> (-0.23%) ⬇️
paddlenlp/transformers/t5/modeling.py 84.82% <86.95%> (-0.26%) ⬇️
paddlenlp/transformers/nezha/modeling.py 20.60% <0.00%> (-0.34%) ⬇️
paddlenlp/transformers/activations.py 78.68% <0.00%> (+1.63%) ⬆️


```diff
@@ -105,7 +105,7 @@ def _construct_model(self, model):
         if self.from_hf_hub:
             model_instance = UTC.from_pretrained(self._task_path, from_hf_hub=self.from_hf_hub)
         else:
-            model_instance = UTC.from_pretrained(model)
+            model_instance = UTC.from_pretrained(self._task_path)
```
Contributor

@LemonNoel After switching to the from_pretrained("{local_path}") form, you need to define resource_files_names and resource_files_urls, and add self._check_task_files() in __init__. See https://github.com/PaddlePaddle/PaddleNLP/blob/develop/paddlenlp/taskflow/information_extraction.py#L115 for reference.

As for the issue that a model can no longer be loaded via from_pretrained("{local_path}") after a call like from_pretrained("utc_large"), could @wj-Mcat please also take a look? We can see later whether this gap can be closed.

Collaborator

If it is changed this way, the if is no longer needed; a single line model_instance = UTC.from_pretrained(self._task_path, from_hf_hub=self.from_hf_hub) is enough.
@linjieccc Does the UTC taskflow download any files beyond what from_pretrained fetches? For models and taskflows that have already adopted the pretrained config like this one, I suggest letting from_pretrained carry all of the download logic rather than keeping a separate download path.
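The single-call suggestion above can be sketched as follows (FakeUTC is a hypothetical stand-in for the real UTC model class, used only to illustrate the dispatch): since both branches of the diff load from the task path and differ only in the from_hf_hub flag, the if collapses into one call.

```python
class FakeUTC:
    """Hypothetical stand-in for the UTC model class, for illustration only."""
    loaded = None

    @classmethod
    def from_pretrained(cls, path, from_hf_hub=False):
        # Record what would be loaded instead of touching the network/disk.
        cls.loaded = (path, from_hf_hub)
        return cls


def construct_model(task_path, from_hf_hub):
    # The two-branch form in the diff collapses to this single call,
    # because both arms load from the task path.
    return FakeUTC.from_pretrained(task_path, from_hf_hub=from_hf_hub)


construct_model("./checkpoint", False)
assert FakeUTC.loaded == ("./checkpoint", False)
```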

Contributor

@sijunhe Agreed, it is indeed better to route all downloading through from_pretrained here; it also avoids downloading model files twice. We will unify this handling in a follow-up.

As for non-transformers models inside Taskflow, such as GRU-CRF, they are currently stored under $PPNLP_HOME/.taskflow/{task_name}/{model_name}. Should these models also be managed under $PPNLP_HOME/.paddlenlp/models going forward, with the loading code implemented in Taskflow?

@sijunhe
Collaborator

sijunhe commented Jan 17, 2023

Anything involving LEGACY_CONFIG_NAME can be deleted, because UTC has no backward-compatibility concerns now that ERNIE has been upgraded to the pretrained config. @LemonNoel

@LemonNoel
Contributor Author

Anything involving LEGACY_CONFIG_NAME can be deleted, because UTC has no backward-compatibility concerns now that ERNIE has been upgraded to the pretrained config. @LemonNoel

Deleted.

Contributor

@linjieccc linjieccc left a comment

LGTM

3 participants