from gensim.models.callbacks import CallbackAny2Vec

class MonitorCallback(CallbackAny2Vec):
    def __init__(self, test_cui, test_sec):
        self.test_cui = test_cui
        self.test_sec = test_sec

    def on_epoch_end(self, model):
        print('Model loss:', model.get_latest_training_loss())
        for word in self.test_cui:  # show how wv neighbors change
            print(word, model.wv.most_similar(word))
        for word in self.test_sec:  # show how dv neighbors change
            print(word, model.dv.most_similar(word))
Each time the callback runs, the loss prints as 0. The second issue is that after the first epoch the model seems pretty good, judging by calls to most_similar, yet after the second epoch the results appear random. I have a fairly large dataset, so I don't think dramatic overfitting is happening. Is there a bug after the first epoch, or is the learning rate getting messed up? It's tough to know what's going on, because there's no within-epoch logging and the training loss always evaluates to 0.
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
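Since the reported loss is unusable here, one way to put a number on the "appears random after the second epoch" impression is to measure how much the most_similar neighbor lists churn between epochs. Below is a minimal sketch: neighbor_overlap is a hypothetical helper (not part of gensim's API) that computes the Jaccard overlap between two most_similar result lists; an overlap near 0 between consecutive epochs would confirm that the neighborhoods are being scrambled.

```python
def neighbor_overlap(prev, curr):
    """Jaccard overlap between two most_similar result lists.

    Each argument is a list of (key, score) tuples, the shape
    gensim's most_similar returns. Returns a float in [0, 1]:
    1.0 means identical neighbor sets, 0.0 means disjoint.
    """
    prev_keys = {k for k, _ in prev}
    curr_keys = {k for k, _ in curr}
    if not prev_keys and not curr_keys:
        return 1.0  # two empty lists count as fully overlapping
    return len(prev_keys & curr_keys) / len(prev_keys | curr_keys)
```

Inside on_epoch_end, you could cache the previous epoch's model.wv.most_similar(word) result per probe word and print neighbor_overlap(cached, current) alongside the (broken) loss, giving a per-epoch stability signal that doesn't depend on gensim's loss tally.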