[LLM] fix llama precision on custom devices (PaddlePaddle#7895)
SylarTiaNII authored and xysheng-baidu committed Feb 22, 2024
1 parent ed42f16 commit 17ba504
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion paddlenlp/transformers/llama/modeling.py
@@ -1524,7 +1524,7 @@ def forward(self, prediction_scores, masked_lm_labels):
     _hcg = fleet.get_hybrid_communicate_group()
     masked_lm_loss = ConcatSePMaskedLoss.apply(masked_lm_loss, axis=1, group=_hcg.get_sep_parallel_group())
     # skip ignore_index which loss == 0
-    masked_lm_loss = masked_lm_loss[masked_lm_loss > 0]
+    masked_lm_loss = masked_lm_loss[masked_lm_loss > 0].astype("float32")
     loss = paddle.mean(masked_lm_loss)

     return loss
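Why the one-line change matters: on custom devices the per-token loss tensor can stay in bfloat16 or float16, and reducing it with paddle.mean in that precision can shift the reported loss; upcasting to float32 first makes the reduction numerically stable. Below is a minimal, hypothetical sketch of the effect, not taken from the PR — the values and shape are invented, and it assumes a Paddle build/device where these ops support bfloat16; whether the drift is actually visible depends on how the device's mean kernel accumulates.

import paddle

# Hypothetical per-token losses as they might come off a low-precision
# device; bfloat16 keeps only ~8 mantissa bits. (Illustrative values only.)
masked_lm_loss = paddle.rand([8192], dtype="float32").astype("bfloat16")

# Before the fix: mask out ignored tokens and reduce entirely in bfloat16.
loss_low_precision = paddle.mean(masked_lm_loss[masked_lm_loss > 0])

# After the fix: upcast to float32 before the mean, as in the diff above.
loss_fixed = paddle.mean(masked_lm_loss[masked_lm_loss > 0].astype("float32"))

print(loss_low_precision.astype("float32").item(), loss_fixed.item())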
