I guess the seq2seq example has not been updated for the new TorchAgent-based agent. I got the following errors. If I want to use the latest seq2seq, what argument should I pass to "--model"?
python seq2seq_baseline.py --gpu 0 -m seq2seq --task fromfile:parlaiformat
[ building dictionary first... ]
[ dictionary already built .]
Traceback (most recent call last):
File "seq2seq_baseline.py", line 104, in
TrainLoop(opt).train()
File "/usr0/home/dongyeok/work/ParlAI/parlai/scripts/train_model.py", line 170, in init
self.agent = create_agent(opt)
File "/usr0/home/dongyeok/work/ParlAI/parlai/core/agents.py", line 436, in create_agent
model = load_agent_module(opt)
File "/usr0/home/dongyeok/work/ParlAI/parlai/core/agents.py", line 330, in load_agent_module
''.format(v=curr_version))
RuntimeError: It looks like you are trying to load an older version of the selected model. Change your model argument to use the old version from parlai/legacy_agents: for example, --model legacy:model_name:0 or -m parlai.legacy_agents.model.model_v0:ModelAgent
Hi @dykang, this error is because you are loading a model that was trained with the old seq2seq code. The loaded parameters would not match the current modules. You can follow the error message's instructions to change the model arg to -m legacy:seq2seq:0 to load using the old code instead.
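For reference, a minimal example of that change applied to the command above (all other flags left exactly as they were):

python seq2seq_baseline.py --gpu 0 -m legacy:seq2seq:0 --task fromfile:parlaiformat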
Also note: any class that subclasses the old seq2seq model should either change its parent class to parlai/agents/legacy_agents/seq2seq/seq2seq_v0 (to work right away), or be rewritten to use the new Seq2seqAgent (for more support and better performance in the long term).
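A minimal sketch of that parent-class change; the class name exported by seq2seq_v0 is an assumption here, so check parlai/agents/legacy_agents/seq2seq/seq2seq_v0.py for the exact name:

# Before: subclassing the current agent, which cannot load old checkpoints
# from parlai.agents.seq2seq.seq2seq import Seq2seqAgent

# After: subclass the frozen legacy code so checkpoints trained with the
# old seq2seq code still load (exported class name assumed to be Seq2seqAgent)
from parlai.agents.legacy_agents.seq2seq.seq2seq_v0 import Seq2seqAgent

class MyLegacySeq2seqAgent(Seq2seqAgent):
    # Keep any custom overrides as before; only the parent import changes.
    pass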