The framework automatically detects pushing behavior from videos of crowded event entrances.


DL4PuDe: A hybrid framework of deep learning and visualization for pushing behavior detection in pedestrian dynamics

Requirements: Python 3.7 | 3.8 and a GPU with 16 GB RAM.

This repository contains the DL4PuDe framework, which was introduced in the following paper:

Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.

Content

  1. Framework aim.
  2. Framework motivation.
  3. Pushing behavior definition.
  4. Framework architecture.
  5. How to install and use the framework.
  6. Demo.
  7. Experiment videos.
  8. CNN-based classifiers.
  9. List of papers that cited this work.

Aim of the DL4PuDe Framework

DL4PuDe aims to automatically detect and annotate pushing behavior at the patch level in video recordings of human crowds.

This assists researchers in the field of crowd dynamics in gaining a better understanding of pushing dynamics, which is crucial for managing crowds comfortably and safely.

In this work, pushing is defined as a behavior that pedestrians use to reach a target faster, for example, entering an event more quickly.

The Architecture of DL4PuDe

DL4PuDe mainly relies on an EfficientNet-B0-based classifier, the RAFT optical flow method, and the wheel visualization technique.

Kindly note that we use the RAFT repository for optical flow estimation in our project.
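
The sketch below illustrates this flow-to-visualization-to-classification idea in Python. It is only an approximation of the pipeline: OpenCV's Farneback optical flow stands in for RAFT, a simple HSV colour-wheel encoding stands in for the wheel visualization, and the trained EfficientNet-B0 classifier is replaced by a dummy function.

import cv2
import numpy as np

def flow_to_wheel_image(flow):
    # Encode a dense flow field as a colour image (hue = direction, brightness = magnitude).
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8)
    hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def classify_patch(patch):
    # Placeholder for the trained EfficientNet-B0 pushing/non-pushing classifier.
    return "pushing" if patch.mean() > 127 else "non-pushing"

cap = cv2.VideoCapture("./videos/150.mp4")
ok, prev = cap.read()
frame_idx = 0
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 12:          # process a 12-frame window (12/25 s at 25 fps)
        continue
    gray_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    wheel = flow_to_wheel_image(flow)
    h, w = wheel.shape[:2]
    for r in range(3):          # 3x3 patch grid, as in the example below
        for c in range(3):
            patch = wheel[r * h // 3:(r + 1) * h // 3, c * w // 3:(c + 1) * w // 3]
            label = classify_patch(patch)   # annotate the patch if label == "pushing"
    prev = frame
cap.release()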

Example

Input video | Output video *

* The framework detects pushing patches every 12 frames (12/25 s); the red boxes mark the detected pushing patches.

Installation

  1. Clone the repository into your directory.
git clone https://github.com/PedestrianDynamics/DL4PuDe.git
  2. Install the required libraries.
pip install -r libraries.txt
  3. Run the framework.
python3 run.py --video [input video path]
               --roi [x and y coordinates of the top-left ROI corner,
                      x and y coordinates of the bottom-right ROI corner]
               --patch [rows cols]
               --ratio [scale of the video]
               --angle [angle in degrees for rotating the input video so that the crowd
                        flows from left to right --->]

For example, run the following command:

python3 run.py --video ./videos/150.mp4  --roi 380 128 1356 1294 --patch 3 3 --ratio 0.5  --angle 0

Then, you will see the framework's progress details.

When the framework finishes processing, it generates the annotated video in the framework directory. Please note that the annotated version of video 150 is available in the repository root as "150-demo.mp4".
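
For reference, the following sketch shows how the --roi, --angle, --ratio, and --patch arguments could map onto frame preprocessing. The preprocess helper is hypothetical and is not taken from run.py.

import cv2

def preprocess(frame, roi, ratio, angle, patch_grid):
    # Crop to the region of interest (--roi), rotate so the crowd flows left to right
    # (--angle), rescale (--ratio), and split into a rows x cols patch grid (--patch).
    x1, y1, x2, y2 = roi
    crop = frame[y1:y2, x1:x2]
    if angle:
        h, w = crop.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        crop = cv2.warpAffine(crop, m, (w, h))
    crop = cv2.resize(crop, None, fx=ratio, fy=ratio)
    rows, cols = patch_grid
    h, w = crop.shape[:2]
    return [crop[r * h // rows:(r + 1) * h // rows, c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

cap = cv2.VideoCapture("./videos/150.mp4")
ok, frame = cap.read()
if ok:
    # Same arguments as the example command above: --roi 380 128 1356 1294 --patch 3 3 --ratio 0.5 --angle 0
    patches = preprocess(frame, (380, 128, 1356, 1294), 0.5, 0, (3, 3))
cap.release()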

The original experiment videos used in this work are available through the Pedestrian Dynamics Data Archive hosted by Forschungszentrum Juelich. The undistorted videos are also available via this link.

CNN-based Classifiers

We built and evaluated four CNN-based classifiers: EfficientNet-B0, MobileNet, InceptionV3, and ResNet50. The source code for building, training, and evaluating these classifiers, as well as the trained models, is available via the links below (a minimal classifier sketch follows the list).

  1. Source code for building and training the CNN-based classifiers.
  2. Trained CNN-based classifiers.
  3. Evaluation of the CNN-based classifiers.
  4. Patch-based MIM test sets.
  5. MIM training and validation sets are available from the corresponding authors upon request.
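
The sketch below shows how an EfficientNet-B0-based pushing/non-pushing patch classifier could be assembled with Keras transfer learning. The layer choices, hyperparameters, and dataset paths are illustrative and are not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

# Frozen ImageNet backbone with a small binary classification head on top.
base = EfficientNetB0(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # pushing vs. non-pushing
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use the patch-based MIM datasets referenced above, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("MIM/train", image_size=(224, 224))
# model.fit(train_ds, epochs=10)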

List of papers that cited this work

To access the list of papers citing this work, kindly click on this link.

Citation

If you utilize this framework or the generated dataset in your work, please cite the following reference:

Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.

Acknowledgments

  • This work was funded by the German Federal Ministry of Education and Research (BMBF: funding number 01DH16027) within the Palestinian-German Science Bridge project framework, and partially by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—491111487.

  • Thanks to the Forschungszentrum Juelich, Institute for Advanced Simulation-7, for making the Pedestrian Dynamics Data Archive publicly accessible under the CC Attribution 4.0 International license.

  • Thanks to Anna Sieben, Helena Lügering, and Ezel Üsten for developing the rating system and annotating the pushing behavior in the video experiments.

  • Thanks to the authors of the paper "RAFT: Recurrent All-Pairs Field Transforms for Optical Flow" for making the RAFT source code available.