The log appears like this:
{'loss': 1.8709, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.02}
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.03}
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.04}
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.05}
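The learning_rate staying at 0.0 is suspicious on its own: with --warmup_ratio 0.1 and a cosine scheduler, the rate should climb away from zero during warmup. As a point of reference, here is a minimal sketch (assuming the stock transformers cosine-with-warmup schedule, which is what --lr_scheduler_type "cosine" selects in the HF Trainer; the step counts are illustrative, not taken from this run) of what the per-step rate should look like:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

total_steps = 1000                      # assumed total optimizer steps
warmup_steps = int(0.1 * total_steps)   # mirrors --warmup_ratio 0.1

# Dummy one-parameter optimizer, just to drive the scheduler.
opt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)
sched = get_cosine_schedule_with_warmup(opt, warmup_steps, total_steps)

for step in range(5):
    opt.step()
    sched.step()
    # Rises linearly toward 1e-4 during warmup; it should never sit at 0.0.
    print(step, sched.get_last_lr()[0])
```

If the logged rate never leaves 0.0 after the first step, the optimizer steps are probably being skipped, which under --fp16 usually means the loss scaler is seeing inf/NaN gradients every step.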
Script:
deepspeed --include="localhost:0,1,2,3" --master_port=20001 fastchat/train/train_mem.py \
    --deepspeed playground/deepspeed_config_s6.json \
    --model_name_or_path NousResearch/Redmond-Puffin-13B \
    --data_path data/dummy_conversation.json \
    --output_dir PUFFIN_ON_ZOOTIEZ \
    --num_train_epochs 2 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy epoch \
    --save_strategy "steps" \
    --save_steps 1200 \
    --save_total_limit 10 \
    --learning_rate 1e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.1 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fp16 \
    --cache_dir "/tmp" \
    --model_max_length 4096 \
    --gradient_checkpointing True \
    --lazy_preprocess True
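One way to confirm the overflow theory is a small Trainer callback that flags the first log entry where the loss collapses. This is a hypothetical diagnostic helper, not part of FastChat:

```python
from transformers import TrainerCallback

class ZeroLossAlert(TrainerCallback):
    """Hypothetical diagnostic: warn the first time the reported loss
    hits exactly 0.0, which here is a symptom of fp16 overflow rather
    than a genuinely perfect model."""

    def __init__(self):
        self.fired = False

    def on_log(self, args, state, control, logs=None, **kwargs):
        if not self.fired and logs and logs.get("loss") == 0.0:
            self.fired = True
            print(f"[ZeroLossAlert] loss hit 0.0 at step {state.global_step}; "
                  "check for fp16 overflow (try --bf16 or full fp32).")
```

Register it with `trainer.add_callback(ZeroLossAlert())` before calling `trainer.train()`. On Ampere or newer GPUs, swapping --fp16 for --bf16 is the usual first thing to try, since bf16 has the same exponent range as fp32 and avoids loss-scaler overflow entirely.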
I also have a similar issue.
Hi @jerryjalapeno @CCCarloooo, I also ran into the same problem when fine-tuning a 7B model. Have you fixed it yet?
No. I ended up switching to the axolotl repo, which works fine for me.
This PR addresses this issue #2423