kingardor/Activity-Recognition-TensorRT

Activity Recognition TensorRT

Perform video classification using 3D ResNets trained on the Kinetics-700 and Moments in Time datasets, accelerated with TensorRT 8.0.

[Activity recognition demo GIF]

P.S. Click on the GIF to watch the full-length video!

TensorRT 8 Installation

Assuming you already have CUDA installed, download TensorRT 8 from the NVIDIA TensorRT download page.

Follow NVIDIA's instructions for installing the system binaries and the Python package for TensorRT.

Python dependencies

Install the necessary Python dependencies by running the following command:

pip3 install -r requirements.txt

Clone the repository

This is a straightforward step; however, if you are new to Git, we recommend glancing through the steps below.

First, install Git:

sudo apt install git

Next, clone the repository:

# Using HTTPS
git clone https://github.com/kn1ghtf1re/Activity-Recognition-TensorRT.git
# Using SSH
git clone git@github.com:kn1ghtf1re/Activity-Recognition-TensorRT.git

Download Pretrained Models

Download the pretrained models from Google Drive and place them in the repository's root directory.

Running the code

The code supports a number of command-line arguments. Use --help to see all supported arguments:

➜ python3 action_recognition_tensorrt.py --help
usage: action_recognition_tensorrt.py [-h] [--stream STREAM] [--model MODEL] [--fp16] [--frameskip FRAMESKIP] [--save_output SAVE_OUTPUT]

Action Recognition using TensorRT 8

optional arguments:
  -h, --help            show this help message and exit
  --stream STREAM       Path to use video stream
  --model MODEL         Path to model to use
  --fp16                To enable fp16 precision
  --frameskip FRAMESKIP
                        Number of frames to skip
  --save_output SAVE_OUTPUT
                        Save output as video
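For reference, the interface described by the help text above can be reproduced with a small argparse setup. This is a sketch, not the repository's actual parser: the default values and the "webcam" convention for --stream are assumptions based on the usage examples below.

```python
import argparse

def build_parser():
    # Sketch of a parser matching the help output above; defaults are assumptions.
    parser = argparse.ArgumentParser(description="Action Recognition using TensorRT 8")
    parser.add_argument("--stream", help="Path to video stream, or 'webcam'")
    parser.add_argument("--model", help="Path to model to use")
    parser.add_argument("--fp16", action="store_true", help="Enable fp16 precision")
    parser.add_argument("--frameskip", type=int, default=0, help="Number of frames to skip")
    parser.add_argument("--save_output", help="Save output as video")
    return parser

args = build_parser().parse_args(
    ["--stream", "webcam", "--model", "resnet-18-kinetics-moments.onnx",
     "--fp16", "--frameskip", "2"]
)
```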

Run the script this way:

# Video
python3 action_recognition_tensorrt.py --stream /path/to/video --model resnet-18-kinetics-moments.onnx --fp16 --frameskip 2

# Webcam
python3 action_recognition_tensorrt.py --stream webcam --model resnet-18-kinetics-moments.onnx --fp16 --frameskip 2
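A note on --frameskip: a common way such a flag works is to run inference only on every (frameskip + 1)-th frame and reuse the last prediction for the skipped frames, trading a little latency in label updates for higher throughput. The snippet below is a hedged sketch of that pattern, not the repository's actual logic; the function name is hypothetical.

```python
def should_run_inference(frame_idx, frameskip):
    # With frameskip=2, inference runs on frames 0, 3, 6, ...;
    # skipped frames can simply display the most recent prediction.
    return frame_idx % (frameskip + 1) == 0

# Which of the first 10 frames get inference with --frameskip 2:
processed = [i for i in range(10) if should_run_inference(i, 2)]
```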

Citations

@article{hara3dcnns,
  author={Kensho Hara and Hirokatsu Kataoka and Yutaka Satoh},
  title={Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?},
  journal={arXiv preprint},
  volume={arXiv:1711.09577},
  year={2017},
}
