LCrowdV

Procedural framework for generating crowd videos

A novel framework that can generate as many labeled crowd videos as needed. The videos produced help train models for crowd understanding, including pedestrian detection, crowd classification, and more.


arXiv report:
Ernest Cheung, Tsan Kwong Wong, Aniket Bera, Xiaogang Wang, and Dinesh Manocha (2016). LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior.

Paper accepted at ECCVW:
Ernest Cheung, Tsan Kwong Wong, Aniket Bera, Xiaogang Wang, and Dinesh Manocha (2016). LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior. ECCVW 2016.

Features

Annotations

One of the biggest challenges in acquiring data for training crowd understanding models is that ground truth annotations have to be created manually. With LCrowdV, the labor-intensive annotation effort and the risk of human error are eliminated. Trajectories of every agent, bounding boxes of objects of interest, and any other feature one would like to study can be generated automatically by our framework.
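To make this concrete, here is a minimal sketch of how such per-frame labels could be consumed downstream, assuming a hypothetical CSV layout (frame index, agent ID, bounding box); the file name and column names are placeholders, not LCrowdV's actual export format.

# Hypothetical sketch: group per-frame bounding-box labels into per-agent
# trajectories. The CSV file name and columns are assumptions, not the
# framework's actual annotation format.
import csv
from collections import defaultdict

def load_trajectories(path):
    """Return {agent_id: [(frame, x, y, w, h), ...]} sorted by frame."""
    trajectories = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            trajectories[int(row["agent_id"])].append((
                int(row["frame"]),
                float(row["x"]), float(row["y"]),
                float(row["w"]), float(row["h"]),
            ))
    for agent_id in trajectories:
        trajectories[agent_id].sort(key=lambda r: r[0])
    return trajectories

# Example usage (file name is a placeholder):
# trajectories = load_trajectories("video_0001_annotations.csv")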

Variety

Each video generated by LCrowdV comes with 7 labels that can be varied as parameters: crowd density, population, lighting conditions, background scene, camera angle, agent personality, and noise level. In other words, the videos produced cover a wide range of variation in population density, background environment, individual agent behavior, and so on.
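As an illustration of how these 7 parameters span the space of videos, the sketch below enumerates a parameter grid; the parameter names, value sets, and the generate_video placeholder are assumptions, not the framework's actual interface.

# Hypothetical sketch: enumerate combinations of the 7 variation parameters.
# Parameter names, value sets, and generate_video() are placeholders, not the
# real LCrowdV interface.
from itertools import product

param_space = {
    "crowd_density": ["low", "medium", "high"],
    "population":    [50, 200, 800],
    "lighting":      ["day", "dusk", "night"],
    "background":    ["street", "plaza", "park"],
    "camera_angle":  [15, 45, 90],      # degrees above the ground plane
    "personality":   ["calm", "aggressive"],
    "noise_level":   [0.0, 0.1, 0.3],
}

def generate_video(**params):
    # Placeholder for the actual generation step.
    print("would generate a video with", params)

for values in product(*param_space.values()):
    generate_video(**dict(zip(param_space.keys(), values)))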

Results

We have improved the performance of HOG+SVM pedestrian detection by 3% by augmenting the training data with LCrowdV videos. We have also combined LCrowdV videos with the training dataset used for Faster R-CNN pedestrian detection and improved the average precision by 7.3%. We plan to extend the use of LCrowdV data to other crowd understanding tasks, including flow estimation, crowd counting, and behavior classification.
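As a rough sketch of the augmentation idea only (not the exact pipeline used in the paper), the snippet below mixes real pedestrian crops with synthetic LCrowdV crops when training a HOG+SVM classifier; the directory layout and the 64x128 crop size are assumptions.

# Hypothetical sketch: train a HOG+SVM pedestrian classifier on real crops
# augmented with synthetic LCrowdV crops. Directory names are placeholders.
import glob

import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()  # default 64x128 pedestrian detection window

def load_crops(pattern, label):
    feats, labels = [], []
    for path in glob.glob(pattern):
        img = cv2.imread(path)
        if img is None:
            continue
        img = cv2.resize(img, (64, 128))  # match the HOG window size
        feats.append(hog.compute(img).ravel())
        labels.append(label)
    return feats, labels

# Real positives/negatives plus synthetic LCrowdV positives (paths assumed).
X, y = [], []
for pattern, label in [("real/pos/*.png", 1),
                       ("real/neg/*.png", 0),
                       ("lcrowdv/pos/*.png", 1)]:
    feats, labels = load_crops(pattern, label)
    X.extend(feats)
    y.extend(labels)

clf = LinearSVC().fit(np.array(X), np.array(y))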

Download Dataset

A set of data can be downloaded using the link below.

Download here

Contact

If you are interested in this work or have any questions, please feel free to contact us.

Ernest Cheung, via