Paymemoney/Video-Caption

This is a modified edition of video-caption.pytorch, developed for a project of the SEU Student Research Training Program.

A PyTorch implementation of video captioning.

It is recommended to install the dependencies with the command below:

pip3 install -r requirements.txt

It is also recommended to install PyTorch and the Python packages with Anaconda (see the example commands after the requirements list).

Requirements

  • CUDA
  • PyTorch 0.4.0
  • Python 3
  • ffmpeg (can be installed with Anaconda)
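
For example, PyTorch 0.4.0 and ffmpeg can usually be installed from Anaconda channels as below (the channels and the exact CUDA build are assumptions; adjust them to your environment):

conda install pytorch=0.4.0 -c pytorch
conda install ffmpeg -c conda-forge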

Python packages

  • tqdm
  • pillow
  • pretrainedmodels
  • nltk
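
If requirements.txt is not used, the Python packages listed above can also be installed directly:

pip3 install tqdm pillow pretrainedmodels nltk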

Data

MSR-VTT. The test videos don't have captions, so I split the train videos into train/val/test sets. Extract the data and put it in the ./data/ directory.
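
A minimal sketch of such a split; the ID range, ratios, and seed below are illustrative assumptions, not the values this repository actually uses:

import random

random.seed(42)
video_ids = ["video%d" % i for i in range(7010)]  # assumed ID range of the captioned videos
random.shuffle(video_ids)

n = len(video_ids)
splits = {
    "train": video_ids[: int(0.8 * n)],
    "val":   video_ids[int(0.8 * n): int(0.9 * n)],
    "test":  video_ids[int(0.9 * n):],
}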

Options

All default options are defined in opt.py or in the corresponding code file; change them as you like, or override them with command-line flags as in the commands below.

Usage

(Optional) c3d features

You can use video-classification-3d-cnn-pytorch to extract c3d features from the videos.
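
The training command below passes both --feats_dir and --c3d_feats_dir with --dim_vid 4096, which suggests the two feature sets are concatenated; the snippet below is only a hypothetical illustration of that idea (shapes and file names are assumptions):

import numpy as np

frame_feats = np.load("data/feats/resnet152/video1.npy")  # e.g. (40, 2048) per-frame CNN features
c3d_feat = np.load("data/feats/c3d_feats/video1.npy")     # e.g. (2048,) clip-level c3d feature

# tile the clip-level feature over the sampled frames and concatenate
c3d_tiled = np.tile(c3d_feat.reshape(1, -1), (frame_feats.shape[0], 1))
fused = np.concatenate([frame_feats, c3d_tiled], axis=1)  # e.g. (40, 4096)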

Steps

  1. Preprocess videos and labels (a rough sketch of this feature-extraction step is given after the steps)
python prepro_feats.py --output_dir data/feats/resnet152 --model resnet152 --n_frame_steps 40  --gpu 4,5

python prepro_vocab.py
  2. Train a model
python train.py --gpu 0 --epochs 3001 --batch_size 300 --checkpoint_path data/save --feats_dir data/feats/resnet152 --model S2VTAttModel  --with_c3d 1 --c3d_feats_dir data/feats/c3d_feats --dim_vid 4096
  3. Test

    opt_info.json will be in the same directory as the saved model.

python eval.py --recover_opt data/save/opt_info.json --saved_model data/save/model_1000.pth --batch_size 100 --gpu 1
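
Conceptually, the preprocessing step samples a fixed number of frames per video and stores their pooled ResNet-152 activations. The sketch below illustrates that idea under assumptions (frame extraction, sampling, and file layout are guesses, not the repository's actual prepro_feats.py):

import numpy as np
import torch
import pretrainedmodels
from pretrainedmodels import utils

model = pretrainedmodels.resnet152(pretrained="imagenet")
load_img = utils.LoadTransformImage(model)  # resize/normalize the way the model expects

class Identity(torch.nn.Module):  # drop the classifier head, keep the 2048-d pooled features
    def forward(self, x):
        return x

model.last_linear = Identity()
model.eval()

def extract_video_feats(frame_paths, n_frame_steps=40):
    # frame_paths: hypothetical list of image files already extracted from one video
    idx = np.linspace(0, len(frame_paths) - 1, num=n_frame_steps).astype(int)
    imgs = torch.stack([load_img(frame_paths[i]) for i in idx])
    with torch.no_grad():
        feats = model(imgs)  # (n_frame_steps, 2048)
    return feats.numpy()

# np.save("data/feats/resnet152/video1.npy", extract_video_feats(frame_paths))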

TODO

Acknowledgements

Some of the code is adapted from ImageCaptioning.pytorch.
