Commit

change variable name

greycooker committed Dec 17, 2024
1 parent 44f633e commit 275c623
Showing 2 changed files with 4 additions and 4 deletions.
6 changes: 3 additions & 3 deletions paddlenlp/peft/lora/lora_model.py
@@ -334,18 +334,18 @@ def process_split_and_assign(name, concat_tensor, axis, init_dict, state_dict):
             final_lora, init_lora = paddle.split(concat_tensor, 2, axis=axis)
             init_dict[name] = init_lora
             state_dict[name] = final_lora
-            return final_lora, init_lora
+            return init_lora
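Stripped of the diff markup, the helper's behavior can be sketched as follows — a minimal runnable sketch that substitutes `numpy.split` for `paddle.split` (same "split into 2 equal parts along `axis`" semantics); the key and dict names are illustrative stand-ins for the ones in the diff:

```python
import numpy as np

def process_split_and_assign(name, concat_tensor, axis, init_dict, state_dict):
    # The checkpoint stores the final and initial LoRA weights
    # concatenated along `axis`; split them back into two halves.
    final_lora, init_lora = np.split(concat_tensor, 2, axis=axis)
    init_dict[name] = init_lora
    state_dict[name] = final_lora
    # After this commit only the initial half is returned,
    # since callers never used the final half directly.
    return init_lora

state_dict = {"layer.lora_A": np.arange(8).reshape(2, 4)}
init_dict = {}
init_loraA = process_split_and_assign(
    "layer.lora_A", state_dict["layer.lora_A"], axis=1,
    init_dict=init_dict, state_dict=state_dict,
)
print(init_loraA.shape)  # second half of the axis-1 concat: (2, 2)
```

Note that `state_dict[name]` is read before it is overwritten, so passing `state_dict[name]` as `concat_tensor` (as the callers below do) is safe.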

         for name in state_dict.keys():
             if "lora_A" in name:
                 concat_lora_A = state_dict[name]
-                final_loraA, init_loraA = process_split_and_assign(
+                init_loraA = process_split_and_assign(
                     name, concat_lora_A, axis=1, init_dict=self.loraga_init_dict, state_dict=state_dict
                 )

                 loraB_name = name.replace("lora_A", "lora_B")
                 concat_lora_B = state_dict[loraB_name]
-                final_loraB, init_loraB = process_split_and_assign(
+                init_loraB = process_split_and_assign(
                     loraB_name, concat_lora_B, axis=0, init_dict=self.loraga_init_dict, state_dict=state_dict
                 )
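The two call sites split along different axes because the concatenated halves sit on different dimensions of the two LoRA factors. A small shape demonstration, with hypothetical sizes and NumPy in place of paddle:

```python
import numpy as np

# Hypothetical shapes: in_features=4, out_features=6, rank r=2.
# The checkpoint stores final and init weights concatenated, so the
# stored lora_A is (in_features, 2r) and lora_B is (2r, out_features).
concat_lora_A = np.zeros((4, 4))   # (in_features, 2 * r)
concat_lora_B = np.zeros((4, 6))   # (2 * r, out_features)

# lora_A splits along axis=1 (its rank dimension is the last axis) ...
final_A, init_A = np.split(concat_lora_A, 2, axis=1)
# ... while lora_B splits along axis=0 (its rank dimension comes first).
final_B, init_B = np.split(concat_lora_B, 2, axis=0)

print(init_A.shape, init_B.shape)  # (4, 2) (2, 6)
```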

2 changes: 1 addition & 1 deletion paddlenlp/peft/lora/loraga_utils.py
@@ -76,7 +76,7 @@ def estimate_gradient(self, model: PretrainedModel):
             iters += 1
             # Pipeline parallel not supported currently
             with paddle.amp.auto_cast(enable=True, custom_black_list=self.args.amp_custom_black_list):
-                loss, logits = model(**batch)
+                loss, _ = model(**batch)
             loss.backward()

             if iters == self.loraga_init_iters:
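The `estimate_gradient` change only drops the unused `logits` binding; the structure of the loop is unchanged. A self-contained sketch of that loop shape, where a stub `tiny_model` and plain Python stand in for the paddle model, AMP context, and `loss.backward()`:

```python
def tiny_model(batch):
    # Stand-in forward pass: returns (loss, logits), like the model in the diff.
    logits = [x * 2.0 for x in batch]
    loss = sum(logits) / len(logits)
    return loss, logits

loraga_init_iters = 2                          # illustrative stopping point
batches = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

iters = 0
losses = []
for batch in batches:
    iters += 1
    # As in the commit: only the loss feeds the backward pass, so the
    # logits are discarded with `_` instead of binding an unused name.
    loss, _ = tiny_model(batch)
    losses.append(loss)                        # real code: loss.backward()
    if iters == loraga_init_iters:
        break

print(iters, losses)  # 2 [3.0, 7.0]
```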
