Computer Vision

Event Detection for Video Surveillance


Representative Publications:

  • Anan Liu, Zan Gao, Tong Hao, Yuting Su, Zhaoxuan Yang, “Partwise Bag of Words-Based Multi-task Learning for Human Action Recognition”, Electronics Letters, Vol. 49, No. 13, pp. 803-804, 2013.

  • Anan Liu, “Bidirectional Integrated Random Fields for Human Behavior Understanding”, Electronics Letters, 2012.

  • Anan Liu, “Human Action Recognition with Structured Discriminative Random Fields”, Electronics Letters, Vol. 47, No. 11, pp. 651-653, 2011.

  • Anan Liu, Dong Han, “Spatiotemporal Sparsity Induced Similarity Measure for Human Action Recognition”, JDCTA: International Journal of Digital Content Technology and its Applications, Vol. 4, No. 8, pp. 143-149, 2010.

  • Zan Gao, Anan Liu, et al., “TJUT-TJU@TRECVID 2011: Surveillance Event Detection”, In Proc. TRECVID Workshop, 2011.

  • Anan Liu, Zan Gao, et al., “TRECVID 2010 Surveillance Event Detection by MMM-TJU”, In Proc. TRECVID Workshop, 2010.



Object Detection & Tracking


Representative Publications:

  • Weizhi Nie, Anan Liu, Yuting Su, “An Effective Tracking System for Multiple Object Tracking in Occlusion Scenes”, International Conference on Multimedia Modeling, 2013.

  • Weizhi Nie, Anan Liu, Yuting Su, “Multiple Person Tracking by Spatiotemporal Tracklet Association”, 9th International Conference on Advanced Video and Signal-based Surveillance, pp. 481-486, Sep. 18-21, 2012.



Kinect-based Action Recognition


We developed an unsupervised, sequential template-based human action recognition system. A Microsoft Kinect sensor first captures RGB, depth, and skeleton data. We then construct sequential templates for each individual action category. Finally, the designed sequence matching algorithm measures the similarity between a query sequence and each template. The system recognizes multiple human actions in an unsupervised manner in real time and can be widely applied to natural human-machine interaction.
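The sequence matching step can be sketched with dynamic time warping (DTW), a common choice for comparing variable-length pose sequences. The system's actual matching algorithm and feature layout are not specified above, so the per-frame feature shape and the nearest-template classification rule below are illustrative assumptions.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two pose sequences.

    Each sequence is an (n_frames, n_features) array, e.g. flattened
    Kinect skeleton joint coordinates per frame (shape is an assumption).
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Best of stretching either sequence or advancing both.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates):
    """Assign the query sequence to the nearest action template.

    `templates` maps action labels to template sequences.
    """
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW aligns sequences nonlinearly in time, the same action performed at different speeds still matches its template, which is what makes template-based recognition feasible without per-category training.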



Video Surveillance System


The Video Surveillance System is an integrated system for video-based detection, tracking, event detection, and result presentation. It comprises five modules:

  • Detection module: performs target detection and human detection. Target detection involves three main steps: feature extraction, sliding-window scanning, and feature matching. Available features include SIFT, color, contour, texture, and HOG descriptors. During video detection, users can plug in their own application modules to detect the corresponding targets in images or videos and view the detection results in real time.

  • Tracking module: offers two algorithms, particle filtering and MeanShift, and provides region initialization. After loading a test video, the user specifies the tracking area and selects a tracking program; the tracking results are then written to the appropriate file and displayed in the information window.

  • Event detection module: an application of target detection and tracking. Users import a video and select the event type to detect; the results are displayed in the main window.

  • Simple processing module: provides commonly used image and video processing tools such as image smoothing, Kalman filtering, and background modeling.

  • Comprehensive presentation module: integrates all the modules. Users can select the output files of detection, tracking, or event detection for presentation and inspect the visual results of each algorithm, which helps them analyze problems and propose solutions.
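As an illustration of the background modeling tool in the simple processing module, here is a minimal running-average background subtractor in Python/NumPy. The system's actual implementation is not described above, so the exponential update rule and the `alpha`/`threshold` parameters below are assumptions for the sketch.

```python
import numpy as np

class RunningAverageBackground:
    """Simple background model: exponential running average of frames.

    A pixel is flagged as foreground when it deviates from the background
    estimate by more than `threshold`; `alpha` controls adaptation speed.
    (Parameter values are illustrative assumptions.)
    """

    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha
        self.threshold = threshold
        self.background = None

    def apply(self, frame):
        """Return a boolean foreground mask for one grayscale frame."""
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # Update the model only at background pixels, so that moving
        # objects are not absorbed into the background too quickly.
        self.background = np.where(
            mask, self.background,
            (1 - self.alpha) * self.background + self.alpha * frame)
        return mask
```

The resulting mask can feed the detection and tracking modules as a cheap region-of-interest proposal, which is a common way to narrow the sliding-window search in surveillance pipelines.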