
Set default eval batch size to 2 for LLM fine-tuning #3599

Merged · 1 commit merged into master from llm_batch_size_tuning on Sep 12, 2023

Conversation

arnavgarg1 (Contributor):

Our current batch size tuning logic for eval_batch_size is geared more toward ECD models than LLMs.

For LLMs, we want to generate synthetic data and push the largest possible, most computationally expensive batches through the model during batch size tuning; right now we just use whatever batches exist in the dataset. More on this in the future; a rough sketch of the idea follows.
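To make the future direction concrete, here is a minimal sketch (not Ludwig code) of what a synthetic worst-case batch could look like; the function name, shapes, and use of raw token IDs are illustrative assumptions:

```python
import torch

def build_worst_case_batch(batch_size: int, max_sequence_length: int, vocab_size: int) -> dict:
    """Build a synthetic batch where every sequence is at the maximum
    length, so tuning probes the most expensive case rather than
    whatever batches happen to exist in the dataset."""
    # Random token IDs at full length; an all-ones attention mask means
    # no padding, i.e. every position contributes to compute and memory.
    input_ids = torch.randint(0, vocab_size, (batch_size, max_sequence_length))
    attention_mask = torch.ones_like(input_ids)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

# Example: probe a candidate eval batch size with a full-length batch.
probe_batch = build_worst_case_batch(batch_size=2, max_sequence_length=2048, vocab_size=32000)
```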

For now, let's set the default eval_batch_size to 2.
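For context, a user config like the following exercises the trainer's eval_batch_size; this is a minimal sketch assuming Ludwig's LLM fine-tuning config schema around this release, with a placeholder base_model and feature names. With this change, omitting eval_batch_size gives the same default of 2:

```python
from ludwig.api import LudwigModel

# Minimal sketch of an LLM fine-tuning config (base_model and feature
# names are placeholders, not values from this PR).
config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",  # placeholder base model
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "trainer": {
        "type": "finetune",
        "batch_size": 1,
        "eval_batch_size": 2,  # explicit here; now also the default
    },
}

model = LudwigModel(config)
```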

@github-actions

Unit Test Results

6 files ±0 · 6 suites ±0 · 1h 17m 40s ⏱️ (-2m 6s)
34 tests ±0: 29 ✔️ passed ±0, 5 💤 skipped ±0, 0 failed ±0
88 runs ±0: 72 ✔️ passed ±0, 16 💤 skipped ±0, 0 failed ±0

Results for commit 221fa0e, compared against base commit 6178b48.

@arnavgarg1 merged commit 6931fe4 into master on Sep 12, 2023; 15 of 16 checks passed.
@arnavgarg1 deleted the llm_batch_size_tuning branch on September 12, 2023 at 22:17.