Change default model and quantization value for LLM example (#3100)
yan-gao-GY authored Mar 11, 2024
1 parent b2f968a commit 3d41776
Showing 2 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions examples/llm-flowertune/README.md
````diff
@@ -50,11 +50,11 @@ With an activated Python environment, run the example with default config values
 python main.py
 ```
 
-This command will run FL simulations with an 8-bit [OpenLLaMA 3Bv2](https://huggingface.co/openlm-research/open_llama_3b_v2) model involving 2 clients per rounds for 100 FL rounds. You can override configuration parameters directly from the command line. Below are a few settings you might want to test:
+This command will run FL simulations with a 4-bit [OpenLLaMA 7Bv2](https://huggingface.co/openlm-research/open_llama_7b_v2) model involving 2 clients per rounds for 100 FL rounds. You can override configuration parameters directly from the command line. Below are a few settings you might want to test:
 
 ```bash
-# Use OpenLLaMA-7B instead of 3B and 4-bits quantization
-python main.py model.name="openlm-research/open_llama_7b_v2" model.quantization=4
+# Use OpenLLaMA-3B instead of 7B and 8-bits quantization
+python main.py model.name="openlm-research/open_llama_3b_v2" model.quantization=8
 
 # Run for 50 rounds but increasing the fraction of clients that participate per round to 25%
 python main.py num_rounds=50 fraction_fit.fraction_fit=0.25
````
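The dotted-key overrides above (e.g. `model.quantization=8`) suggest a Hydra-style entry point that loads `conf/config.yaml`. Below is a minimal sketch of what that wiring could look like, assuming Hydra is used; the function body is purely illustrative and is not the commit's `main.py`.

```python
# Minimal sketch, assuming the example uses Hydra to load conf/config.yaml.
# The body below is illustrative only; it is not the commit's main.py.
import hydra
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # `python main.py model.quantization=8` overrides cfg.model.quantization
    # before this function runs; defaults come from conf/config.yaml.
    print(cfg.model.name, cfg.model.quantization, cfg.num_rounds)


if __name__ == "__main__":
    main()
```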
4 changes: 2 additions & 2 deletions examples/llm-flowertune/conf/config.yaml
```diff
@@ -8,8 +8,8 @@ dataset:
   name: "vicgalle/alpaca-gpt4"
 
 model:
-  name: "openlm-research/open_llama_3b_v2"
-  quantization: 8 # 8 or 4 if you want to do quantization with BitsAndBytes
+  name: "openlm-research/open_llama_7b_v2"
+  quantization: 4 # 8 or 4 if you want to do quantization with BitsAndBytes
   gradient_checkpointing: True
   lora:
     peft_lora_r: 32
```
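For context on what `quantization: 4` versus `8` means at model-loading time, here is a hedged sketch of how such a value is commonly mapped to a BitsAndBytes configuration with Hugging Face `transformers`; the helper name `load_quantized_model` and the exact wiring are assumptions, not the example's actual code.

```python
# Illustrative sketch only (not the commit's code): map the config.yaml
# "quantization" value (4 or 8) to a BitsAndBytes quantization config.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig


def load_quantized_model(model_name: str, quantization: int):
    # Pick 4-bit or 8-bit loading based on the config value.
    if quantization == 4:
        bnb_config = BitsAndBytesConfig(load_in_4bit=True)
    elif quantization == 8:
        bnb_config = BitsAndBytesConfig(load_in_8bit=True)
    else:
        raise ValueError("quantization must be 4 or 8")
    return AutoModelForCausalLM.from_pretrained(
        model_name, quantization_config=bnb_config
    )


# With the new defaults from conf/config.yaml:
# model = load_quantized_model("openlm-research/open_llama_7b_v2", 4)
```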
