Journal of Jilin University (Engineering and Technology Edition) ›› 2013, Vol. 43 ›› Issue (Supplement 1): 270-274.

The salient region extraction method based on relative entropy difference

GUO Li-hua

  1. College of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China
  • Received: 2012-05-10  Published: 2013-06-01
  • About the author: GUO Li-hua (1978-), male, associate professor. Research interests: image understanding and processing. E-mail: guolihua@scut.edu.cn
  • Supported by:

    National Natural Science Foundation of China (60902087); Pearl River New Star Program of the Guangzhou Science and Technology Commission.

Abstract:

Following the center-surround principle of the Itti model, a visual salient region extraction algorithm based on the relative entropy difference between local histograms is proposed. First, a difference value is computed for each pixel from pixel-to-pixel contrast; then, the histogram distributions of these difference values in the center and surround of each local region are compared; finally, the relative entropy between the two histograms is used to locate visually salient regions. Because the method avoids the computationally expensive Gabor filtering of the Itti model, it is simple and efficient, and the extracted salient regions agree with subjective human visual perception.

Key words: visual salient region, histogram contrast, relative entropy difference
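The three steps described in the abstract (per-pixel contrast differences, center/surround local histograms, relative-entropy comparison) can be sketched as follows. This is only an illustrative reconstruction: the window size, bin count, surround ratio, and the global-mean contrast measure are assumptions for the sketch, not the paper's exact choices.

```python
import numpy as np

def saliency_map(gray, win=16, bins=32, eps=1e-8):
    """Sketch of a center-surround saliency map using the relative
    entropy (KL divergence) between local difference histograms."""
    h, w = gray.shape
    sal = np.zeros((h, w))
    # Step 1: a per-pixel difference value from pixel contrast
    # (here: absolute deviation from the global mean intensity).
    diff = np.abs(gray - gray.mean())
    for y in range(0, h, win):
        for x in range(0, w, win):
            # Center region: one win x win block.
            c = diff[y:y + win, x:x + win]
            # Surround region: a 3x larger window around the center.
            y0, y1 = max(0, y - win), min(h, y + 2 * win)
            x0, x1 = max(0, x - win), min(w, x + 2 * win)
            s = diff[y0:y1, x0:x1]
            # Step 2: histograms of the difference values.
            hc, _ = np.histogram(c, bins=bins, range=(0, 255))
            hs, _ = np.histogram(s, bins=bins, range=(0, 255))
            p = hc / max(hc.sum(), 1) + eps
            q = hs / max(hs.sum(), 1) + eps
            # Step 3: relative entropy D(p || q) as the block's saliency.
            sal[y:y + win, x:x + win] = np.sum(p * np.log(p / q))
    return sal
```

A block whose difference histogram deviates strongly from its surround (e.g. a bright object on a uniform background) receives a large divergence and is marked salient, while homogeneous regions score near zero; no Gabor filter bank is needed.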

CLC number:

  • TP394

[1] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.

[2] Itti L, Koch C. A saliency-based search mechanism for overt and covert shifts of visual attention[J]. Vision Research, 2000, 40(10): 1489-1506.

[3] Itti L, Koch C, Niebur E. Computational modelling of visual attention[J]. Nature Reviews Neuroscience, 2001, 2(11): 194-203.

[4] Walther D, Itti L, Riesenhuber M, et al. Attentional selection for object recognition: a gentle way[J]. Lecture Notes in Computer Science, 2002, 2525: 472-479.

[5] Walther D, Itti L, Koch C. SaliencyToolbox[OL]. (2008-11) [2012-05-04]. http://www.saliencytoolbox.net.

[6] Bruce N, Tsotsos J. Saliency based on information maximization[C]//Advances in Neural Information Processing Systems. Vancouver, Canada, 2006: 155-162.

[7] Harel J, Koch C, Perona P. Graph-based visual saliency[C]//Advances in Neural Information Processing Systems. Vancouver, Canada, 2007: 545-552.

[8] Gao D, Mahadevan V, Vasconcelos N. The discriminant center-surround hypothesis for bottom-up saliency[C]//Advances in Neural Information Processing Systems. Vancouver, Canada, 2007: 497-504.

[9] Achanta R, Estrada F, Wils P, et al. Salient region detection and segmentation[C]//Proceedings of International Conference on Computer Vision Systems. Santorini, Greece, 2008: 66-75.

[10] Hou X, Zhang L. Saliency detection: a spectral residual approach[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Minnesota, USA, 2007: 1-8.

[11] Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Florida, USA, 2009: 1597-1604.

[12] Zhai Y, Shah M. Visual attention detection in video sequences using spatiotemporal cues[C]//Proceedings of ACM Multimedia. Santa Barbara, USA, 2006: 815-824.

[13] Cheng Ming-ming, Zhang Guo-xin, Mitra N J, et al. Global contrast based salient region detection[C]//Proceedings of IEEE CVPR. Colorado Springs, USA, 2011: 409-416.
