Change default model and quantization value for LLM example #3100

Merged
6 commits merged on Mar 11, 2024
Changes from 2 commits
6 changes: 3 additions & 3 deletions examples/llm-flowertune/README.md
@@ -50,11 +50,11 @@ With an activated Python environment, run the example with default config values
python main.py
```

This command will run FL simulations with an 8-bit [OpenLLaMA 3Bv2](https://huggingface.co/openlm-research/open_llama_3b_v2) model involving 2 clients per round for 100 FL rounds. You can override configuration parameters directly from the command line. Below are a few settings you might want to test:
This command will run FL simulations with a 4-bit [OpenLLaMA 7Bv2](https://huggingface.co/openlm-research/open_llama_7b_v2) model involving 2 clients per round for 100 FL rounds. You can override configuration parameters directly from the command line. Below are a few settings you might want to test:
danieljanes marked this conversation as resolved.

```bash
# Use OpenLLaMA-7B instead of 3B and 4-bit quantization
python main.py model.name="openlm-research/open_llama_7b_v2" model.quantization=4
# Use OpenLLaMA-3B instead of 7B and 8-bit quantization
python main.py model.name="openlm-research/open_llama_3b_v2" model.quantization=8

# Run for 50 rounds but increase the fraction of clients that participate per round to 25%
python main.py num_rounds=50 fraction_fit.fraction_fit=0.25
```
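The `fraction_fit` override above controls what share of connected clients is sampled for training each round. As a minimal sketch (not Flower's actual implementation; the function name and `min_fit_clients` default are assumptions for illustration), a fraction-based strategy typically works like this:

```python
# Hedged sketch of fraction-based client sampling, in the style of a
# FedAvg-like strategy. Not Flower's real code; names are illustrative.
def num_fit_clients(num_available: int,
                    fraction_fit: float,
                    min_fit_clients: int = 2) -> int:
    """Return how many clients should train this round."""
    # Sample a fraction of the currently connected clients,
    # but never go below the configured minimum.
    return max(int(num_available * fraction_fit), min_fit_clients)
```

With 100 connected clients and `fraction_fit=0.25`, 25 clients would be sampled per round; with only 4 connected clients, the minimum of 2 still applies.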
4 changes: 2 additions & 2 deletions examples/llm-flowertune/conf/config.yaml
@@ -8,8 +8,8 @@ dataset:
name: "vicgalle/alpaca-gpt4"

model:
name: "openlm-research/open_llama_3b_v2"
quantization: 8 # 8 or 4 if you want to do quantization with BitsAndBytes
name: "openlm-research/open_llama_7b_v2"
quantization: 4 # 8 or 4 if you want to do quantization with BitsAndBytes
gradient_checkpointing: True
lora:
peft_lora_r: 32
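The `quantization` value in the config selects between 4-bit and 8-bit loading with BitsAndBytes. As a hedged sketch (the helper name is hypothetical, not part of the example's code), mapping that value to loader keyword arguments could look like this:

```python
# Hypothetical helper: translate the `model.quantization` config value
# (4 or 8) into BitsAndBytes-style loader keyword arguments.
def quantization_kwargs(bits: int) -> dict:
    if bits == 4:
        return {"load_in_4bit": True}
    if bits == 8:
        return {"load_in_8bit": True}
    raise ValueError(f"quantization must be 4 or 8, got {bits}")
```

This mirrors the constraint stated in the config comment: only 4-bit and 8-bit quantization are supported, and any other value should fail fast.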