[BIT-601] Scaling law on EMA loss #1022
Conversation
The neural language model scaling law is typically meant to be computed on a loss averaged over the entire training sample. Currently it is computed within-batch only, which frequently sees losses below 1.69 (the natural entropy of text). Here we now compute the scaling law and the resultant effective number of model parameters on the exponentially moving average loss for a server, which should greatly improve the definition of the result.
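For context, inverting the Kaplan et al. loss-versus-parameters fit L(N) ≈ (N_c / N)^(α_N) turns an observed loss into an effective parameter count. Below is a minimal sketch of that inversion; the constants N_c ≈ 8.8e13 and α_N ≈ 0.076 are the published fits from Kaplan et al., and the clamp at 1.69 nats is one way to enforce the natural-text entropy floor, not necessarily the exact implementation of scaling_law_loss_to_params in this repository.

```python
import torch

def loss_to_effective_params(loss: torch.Tensor) -> torch.Tensor:
    # Kaplan et al.: L(N) ~ (N_c / N)**alpha_N, with N_c ~ 8.8e13 and alpha_N ~ 0.076.
    # Inverting gives N = N_c * L**(-1 / alpha_N), i.e. log N = log N_c - log L / alpha_N.
    # Clamping at 1.69 nats (the natural entropy of text) keeps the inversion from blowing
    # up when a single batch loss happens to fall below that floor.
    n_c = torch.tensor(8.8e13)
    alpha_n = 0.076
    clamped = torch.clamp(loss, min=1.69)
    return torch.exp(torch.log(n_c) - torch.log(clamped) / alpha_n)
```

Applied per batch, every loss under 1.69 clamps to the same ceiling, which is what motivates moving the inversion onto the EMA loss instead.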
LGTM
Comparative analysis of BIT-601

We compare the neuron stats of two types of nakamoto validators, one on the current master branch and the other on the BIT-601 branch. The change from master is that BIT-601 now applies the scaling law to the average loss across multiple batches, instead of to each batch_loss separately as on master.

master:

```python
# estimate the effective number of model parameters from the batch_loss
_num_params = scaling_law_loss_to_params(_loss)
```

BIT-601:

```python
# estimate the effective number of model parameters from EMA loss
_num_params = scaling_law_loss_to_params(torch.tensor(stats['loss_nxt']))
```

We expect a change in
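To illustrate the kind of shift such a comparison looks for, here is a sketch reusing the loss_to_effective_params helper above as a stand-in for scaling_law_loss_to_params; the batch losses and the smoothing factor are made up for illustration and are not measurements from either branch.

```python
import torch

# Hypothetical per-batch losses for one server; several fall below the 1.69-nat floor.
batch_losses = torch.tensor([1.62, 1.85, 1.66, 2.05, 1.64])

# master-style: invert the scaling law on each batch loss separately.
# Every value below 1.69 clamps to the same ceiling, so those estimates saturate.
per_batch_params = loss_to_effective_params(batch_losses)

# BIT-601-style: maintain an exponentially moving average of the loss and invert once.
# (alpha is an illustrative smoothing factor, not the validator's actual setting.)
alpha = 0.2
ema_loss = batch_losses[0]
for loss in batch_losses[1:]:
    ema_loss = (1 - alpha) * ema_loss + alpha * loss

ema_params = loss_to_effective_params(ema_loss)

print(per_batch_params)  # per-batch estimates, clamped for the sub-1.69 batches
print(ema_params)        # single estimate from the smoothed loss, which stays above 1.69 here
```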
BIT-601 Scaling law on EMA loss
The neural language model scaling law [1] is typically meant to be computed on a loss averaged over the entire training data. Currently it is computed within-batch only, which frequently sees losses below 1.69 (the natural entropy of text).
Here we now compute the scaling law and the resultant effective number of model parameters on the exponentially moving average loss for a server, which should greatly improve the definition of the result.
[1] (OpenAI scaling laws) Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv:2001.08361 (2020)
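For reference, the exponentially moving average loss referred to above follows the standard update ema ← (1 − α)·ema + α·x. A minimal sketch, assuming a per-server stats dict keyed by 'loss_nxt' as in the comparison comment; the smoothing factor is illustrative rather than the validator's configured value.

```python
def update_ema_loss(stats: dict, batch_loss: float, alpha: float = 0.1) -> None:
    # Standard exponential moving average: new = (1 - alpha) * old + alpha * observation.
    # 'loss_nxt' mirrors the key used in the comparison above; alpha is an illustrative
    # smoothing factor, not necessarily the value used by the validator.
    if 'loss_nxt' not in stats:
        stats['loss_nxt'] = batch_loss
    else:
        stats['loss_nxt'] = (1 - alpha) * stats['loss_nxt'] + alpha * batch_loss
```

The effective parameter count is then taken from this smoothed value rather than from each batch loss.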