2012, Issue (03): 754-758.


New space-time interest point detection scheme based on cumulative entropy difference

YIN Jian-qin1, WANG Jing-jing2, LI Jin-ping1   

  1. Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China;
    2. College of Physics and Electronics, Shandong Normal University, Jinan 250014, China
  • Received: 2011-01-11  Online: 2012-05-01

Abstract: A new scheme for detecting space-time interest points based on cumulative entropy difference is proposed for human action recognition and video analysis. First, periodic space-time features are detected. Second, the cumulative entropy difference is introduced as a criterion for evaluating candidate interest points. Third, the points with high cumulative entropy difference are selected as the final key points. Finally, c-means clustering is applied to the features of the key points to obtain a prototype set for the videos; each video is then represented as a feature vector over the prototypes and classified. Experimental results show that the cumulative entropy difference criterion effectively removes noise cuboids, and that the resulting method supports both human action analysis and facial expression recognition.
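The abstract does not give the exact formula for cumulative entropy difference, so the following is a minimal sketch under one plausible interpretation: score each detected space-time cuboid by the sum of absolute frame-to-frame changes in intensity-histogram entropy, then keep the highest-scoring cuboids as key points. All function names and parameters here are illustrative, not from the paper.

```python
# Illustrative sketch of cuboid scoring by cumulative entropy difference.
# Assumption: "cumulative entropy difference" = sum of |H(t+1) - H(t)| over
# the frames of a cuboid, where H is Shannon entropy of the intensity histogram.
import numpy as np

def frame_entropy(frame, bins=16):
    """Shannon entropy (bits) of a frame's intensity histogram in [0, 1]."""
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

def cumulative_entropy_difference(cuboid):
    """Sum of absolute entropy changes between consecutive frames."""
    ent = np.array([frame_entropy(f) for f in cuboid])
    return np.sum(np.abs(np.diff(ent)))

def select_key_points(cuboids, keep=10):
    """Keep the cuboids with the highest cumulative entropy difference."""
    scores = np.array([cumulative_entropy_difference(c) for c in cuboids])
    order = np.argsort(scores)[::-1]
    return [cuboids[i] for i in order[:keep]]

rng = np.random.default_rng(0)
# A dynamic cuboid (content changes) versus a near-static "noise" cuboid.
dynamic = [rng.random((8, 8)) for _ in range(10)]
static = [np.full((8, 8), 0.5) + 1e-3 * rng.random((8, 8)) for _ in range(10)]
print(cumulative_entropy_difference(dynamic) > cumulative_entropy_difference(static))
```

A near-static cuboid keeps an almost constant histogram, so its cumulative entropy difference stays close to zero and it is filtered out, which matches the abstract's claim that the criterion removes noise cuboids. The retained cuboids' descriptors would then be clustered (c-means in the paper) to build the prototype vocabulary.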

Key words: information processing, space-time interest points, entropy, action recognition, video analysis

CLC Number: TN911.73
[1] Harris C, Stephens M. A combined corner and edge detector[C]//Proceedings of The Fourth Alvey Vision Conference, 1988: 147-151.
[2] Mikolajczyk K, Schmid C. Indexing based on scale invariant interest points[C]//Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 2001: 525-531.
[3] Lowe D G. Distinctive image features from scale-invariant keypoints[J].International Journal of Computer Vision, 2004,60(2): 91-110.
[4] Liu Ping-ping, Zhao Hong-wei, Geng Qing-tian, et al. Image classification method based on local feature and visual cortex recognition mechanism[J]. Journal of Jilin University (Engineering and Technology Edition), 2011, 41(5): 1401-1406.
[5] Laptev I, Lindeberg T. Space-time interest points[C]//Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 2003: 432-439.
[6] Laptev I, Lindeberg T. Interest point detection and scale selection in space-time[C]//Proceedings of Scale Space Methods in Computer Vision, Isle of Skye, UK, 2003: 372-387.
[7] Laptev I. On space-time interest points[J]. International Journal of Computer Vision,2005,64(2/3):107-123.
[8] Laptev I, Caputo B,Schüldt C, et al. Local velocity-adapted motion events for spatio-temporal recognition[J]. Computer Vision and Image Understanding,2007,108(3):207-229.
[9] Junejo I N, Dexter E, Laptev I, et al. View-independent action recognition from temporal self-similarities[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(1): 172-185.
[10] Dollar P, Rabaud V, Cottrell G, et al. Behavior recognition via sparse spatio-temporal features[C]//Proceedings of the 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS), Beijing, China, 2005: 65-72.
[11] Belongie S, Branson K, Dollar P. Monitoring animal behavior in the smart vivarium[C]//Measuring Behavior,2005:72-75.
[12] Ke Y, Sukthankar R, Hebert M,et al. Efficient visual event detection using volumetric features[C]//Tenth IEEE International Conference on Computer Vision, Beijing, China,2005:166-173.
[13] Oikonomopoulos A, Patras I, Pantic M. Human action recognition with spatiotemporal salient points[J]. IEEE Transactions on Systems, Man, and Cybernetics,2006,36(3):710-719.
[14] Bregonzio M, Gong S G,Xiang T. Recognising action as clouds of space-time interest points[C]//Computer Vision and Pattern Recognition, Miami FL,2009:1948-1955.
[15] Willems G, Tuytelaars T,Gool L,et al. An efficient dense and scale-invariant spatio-temporal interest point detector[C]//Proceedings of the 10th European Conference on Computer Vision: Part II, Berlin, Heidelberg,2008:650-663.
[16] Wong S F, Cipolla R. Extracting spatiotemporal interest points using global information[C]//Proceedings of the 11th IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 2007: 1-8.