Most of the information about how to set up inference and training runs is present in the readme, but there are a few gaps.
Here are the additional steps I had to take in order to make inference with a pretrained model work:
- install the Python dependencies:
  - `pip install -r requirements.txt`
  - `pip install pytorch_pretrained_bert`
  - `pip install pycorenlp`
- adjust the classpath argument in `run_standford_corenlp_server.sh` (note also the typo in this script's filename: "standford" instead of "stanford")
- `sh download_artifacts.sh`
  - this is listed as a step for training a model, but it's also necessary if you just want to run inference
  - before running it for the first time, uncomment the commented lines (though I don't think GloVe is needed)
  - after running it, rename the config file to just `config.json` and the vocab file to just `vocab.txt` in `bert-base-cased/`
- make sure `bert-base-cased/` is one level above the AMR-gs top-level directory; i.e., from within AMR-gs, the relative path should be `../bert-base-cased/` (unfortunately, the pretrained model currently expects this relative path)
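The expected layout can be sanity-checked with a few lines of shell. This is only a sketch: the demo builds a throwaway copy of the directory structure in a temp dir so it can run anywhere; with a real checkout you would just run the loop from inside AMR-gs.

```shell
#!/bin/sh
# Sketch: check the layout the pretrained model expects, i.e.
# bert-base-cased/ one level above AMR-gs, containing the renamed
# config.json and vocab.txt.
# For the demo we create a throwaway copy of that layout in a temp dir;
# with a real checkout, skip the mkdir/touch lines and cd into AMR-gs.
set -e
root=$(mktemp -d)
mkdir -p "$root/AMR-gs" "$root/bert-base-cased"
touch "$root/bert-base-cased/config.json" "$root/bert-base-cased/vocab.txt"

cd "$root/AMR-gs"
for f in ../bert-base-cased/config.json ../bert-base-cased/vocab.txt; do
  [ -f "$f" ] || { echo "missing: $f"; exit 1; }
done
echo "layout ok"
```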
- make sure that all (train/dev/test) data is reachable from AMR-gs via `data/AMR/amr_2.0` (or `data/AMR/amr_1.0`)
  - this will be done automatically if you run `prepare_data.sh` on one of the LDC AMR corpora, but if you use custom datasets you may have to do it manually, e.g., with symbolic links: `ln -s actual/path/to/data path-to-AMR-gs/data/AMR/amr_2.0`
- make sure the pretrained vocabulary files at, e.g., `amr2.0.bert.gr/vocabs/` are reachable from AMR-gs at `data/AMR/amr_2.0_reca/`, e.g., with symbolic links: `ln -s amr2.0.bert.gr/vocabs/ path-to-AMR-gs/data/AMR/amr_2.0_reca/`
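The two symlink steps above can be sketched together. All paths here are placeholders (created in a temp dir so the commands can be tried anywhere); substitute your real corpus and vocab locations.

```shell
#!/bin/sh
# Sketch: wire up custom data and pretrained vocabs via symlinks, as in
# the steps above. Every path below is a placeholder created in a temp
# dir for the demo; point the link targets at your real directories.
set -e
root=$(mktemp -d)
mkdir -p "$root/actual/path/to/data" \
         "$root/amr2.0.bert.gr/vocabs" \
         "$root/AMR-gs/data/AMR"

# train/dev/test data reachable as data/AMR/amr_2.0
ln -s "$root/actual/path/to/data" "$root/AMR-gs/data/AMR/amr_2.0"
# pretrained vocabulary files reachable as data/AMR/amr_2.0_reca
ln -s "$root/amr2.0.bert.gr/vocabs" "$root/AMR-gs/data/AMR/amr_2.0_reca"

ls -l "$root/AMR-gs/data/AMR"
```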
- the machine learning code in `parser/` expects a GPU/CUDA setup; if you want to run on CPU (but also in general), it is preferable to use the version of the files fixed in c795b88
NOW you can proceed to follow the instructions in the readme for AMR Parsing with Pretrained Models
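For reference, a Stanford CoreNLP server launch typically looks like the following. This is the standard invocation from the CoreNLP documentation, not necessarily the exact contents of `run_standford_corenlp_server.sh`, and the install path is an assumption; the `-cp` classpath is the argument you will need to point at your own CoreNLP directory.

```shell
#!/bin/sh
# Sketch: the standard Stanford CoreNLP server invocation. The classpath
# (-cp) is what needs adjusting in run_standford_corenlp_server.sh.
# CORENLP_HOME is a placeholder; set it to your unpacked CoreNLP directory.
CORENLP_HOME="path/to/stanford-corenlp"
CMD="java -mx4g -cp \"$CORENLP_HOME/*\" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000"
# Only echo the command here; run it for real once CORENLP_HOME exists.
echo "$CMD"
```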