DL4PuDe:
A hybrid framework of deep learning and visualization for pushing behavior detection in pedestrian dynamics
This repository hosts the DL4PuDe framework, which accompanies the following published paper:

Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.
- Framework aim
- Framework motivation
- Pushing behavior definition
- Framework architecture
- How to install and use the framework
- Demo
- Experiment videos
- CNN-based classifiers
- List of papers that cited this work
DL4PuDe aims to automatically detect and annotate pushing behavior at the patch level in video recordings of human crowds. It is intended to help researchers in the field of crowd dynamics gain a better understanding of pushing dynamics, which is crucial for managing crowds comfortably and safely.
In this work, pushing is defined as a behavior that pedestrians use to reach a target faster.
DL4PuDe relies mainly on an EfficientNet-B0-based classifier, the RAFT optical flow method, and the wheel visualization technique. A sketch of how these components fit together is shown below.
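The following minimal Python sketch illustrates one way these three components could be wired together at the patch level. The helper names `estimate_raft_flow` and `wheel_visualize`, the 3×3 patch grid, and the 0.5 decision threshold are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def detect_pushing_patches(frame_t, frame_t12, classifier, rows=3, cols=3):
    """Hedged sketch: flag pushing patches between two frames 12 frames apart."""
    # Assumed RAFT wrapper: dense (H, W, 2) motion field between the two frames.
    flow = estimate_raft_flow(frame_t, frame_t12)
    # Assumed wheel visualization: encode flow direction/magnitude as an RGB image.
    wheel_img = wheel_visualize(flow)
    ph, pw = wheel_img.shape[0] // rows, wheel_img.shape[1] // cols
    labels = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            patch = wheel_img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            # `classifier` is assumed to return a pushing probability in [0, 1].
            labels[r, c] = classifier(patch) > 0.5
    return labels  # True entries are the patches to annotate with red boxes
```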
Example

| Input video | Output video * |
| --- | --- |

\* The framework detects pushing patches every 12 frames (12/25 s); the red boxes mark the detected pushing patches.
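Since detections happen every 12 frames of 25 fps footage (i.e., 0.48 s apart), frame pairs could be sampled as in this short OpenCV sketch; the stride constant and the downstream hook are the only assumptions here.

```python
import cv2

STRIDE = 12  # detection interval: 12 frames of a 25 fps video = 12/25 s

cap = cv2.VideoCapture("./videos/150.mp4")
frames = []
ok, frame = cap.read()
while ok:
    frames.append(frame)
    ok, frame = cap.read()
cap.release()

# Form pairs of frames 12 frames apart for the optical-flow step.
for i in range(0, len(frames) - STRIDE, STRIDE):
    frame_t, frame_t12 = frames[i], frames[i + STRIDE]
    # ... pass the pair to the flow/visualization/classification pipeline ...
```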
- Clone the repository into your directory.

```bash
git clone https://github.com/PedestrianDynamics/DL4PuDe.git
```

- Install the required libraries.

```bash
pip install -r libraries.txt
```
- Run the framework.

```bash
python3 run.py --video [input video path]
               --roi   [x of top-left ROI corner] [y of top-left ROI corner]
                       [x of bottom-right ROI corner] [y of bottom-right ROI corner]
               --patch [rows] [cols]
               --ratio [scale of the video]
               --angle [rotation angle in degrees so the crowd flows from left to right --->]
```
For example, run the following command:

```bash
python3 run.py --video ./videos/150.mp4 --roi 380 128 1356 1294 --patch 3 3 --ratio 0.5 --angle 0
```

The framework then displays its progress details. When it finishes, it generates the annotated video in the framework directory. Note that the annotated version of video 150 is available in the repository root as 150-demo.mp4.
The original experiment videos used in this work are available through the Pedestrian Dynamics Data Archive hosted by Forschungszentrum Juelich. The undistorted videos are also available via this link.
We built and evaluated four CNN-based classifiers: EfficientNet-B0, MobileNet, InceptionV3, and ResNet50. The source code for building, training, and evaluating these classifiers, as well as the trained models, is available via the links below; a hedged training sketch follows the list.
- Source code for building and training the CNN-based classifiers.
- Trained CNN-based classifiers.
- Evaluation of the CNN-based classifiers.
- Patch-based MIM test sets.
- MIM training and validation sets are available from the corresponding authors upon request.
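For orientation, the following hedged Keras sketch shows the kind of transfer-learning setup used to train a binary pushing/non-pushing classifier on EfficientNet-B0. The directory layout (`data/train`, `data/val`), image size, and hyperparameters are assumptions for illustration, not the repository's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# Assumed layout: data/{train,val}/{pushing,nonpushing}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# ImageNet-pretrained EfficientNet-B0 backbone with a binary head on top.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # pushing probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```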
The list of papers citing this work is available via this link.
If you utilize this framework or the generated dataset in your work, please cite the following paper:

Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.
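A BibTeX entry assembled from that citation (the entry key is chosen here for convenience):

```bibtex
@article{alia2022hybrid,
  author  = {Alia, Ahmed and Maree, Mohammed and Chraibi, Mohcine},
  title   = {A Hybrid Deep Learning and Visualization Framework for Pushing
             Behavior Detection in Pedestrian Dynamics},
  journal = {Sensors},
  year    = {2022},
  volume  = {22},
  number  = {11},
  pages   = {4040}
}
```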
- This work was funded by the German Federal Ministry of Education and Research (BMBF; funding number 01DH16027) within the Palestinian-German Science Bridge project framework, and partially by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant number 491111487.
- Thanks to Forschungszentrum Juelich, Institute for Advanced Simulation-7, for making the Pedestrian Dynamics Data Archive publicly accessible under the CC Attribution 4.0 International license.
- Thanks to Anna Sieben, Helena Lügering, and Ezel Üsten for developing the rating system and annotating the pushing behavior in the video experiments.
- Thanks to the authors of the paper "RAFT: Recurrent All-Pairs Field Transforms for Optical Flow" for making the RAFT source code available.