Journal of Jilin University (Engineering and Technology Edition), 2014, Vol. 44, Issue (4): 1203-1208. doi: 10.13229/j.cnki.jdxbgxb201404046

Image fusion method based on visual saliency maps

WANG Xiao-wen1, 2, ZHAO Zong-gui3, PANG Xiu-mei4, LIU Min5   

  1. College of Command Information Systems, PLA University of Science and Technology, Nanjing 210007, China;
    2. Unit 93175 of PLA, Changchun 130051, China;
    3. The 28th Research Institute, China Electronics Technology Group Corporation, Nanjing 210007, China;
    4. College of Sciences, PLA University of Science and Technology, Nanjing 210014, China;
    5. Center of EW, Unit 73677 of PLA, Nanjing 210016, China
  • Received: 2012-12-27; Online: 2014-07-01; Published: 2014-07-01
  • About the first author: WANG Xiao-wen (1978-), female, Ph.D. candidate; research interests: image fusion and target tracking. E-mail: hncandy@163.com
  • Supported by: General Armament Department "12th Five-Year Plan" national defense pre-research project (513060402); military "973" project (613101)

Abstract: Visual saliency maps of the source images are constructed from different visual features. Based on these maps, a novel fusion rule for the low-frequency subbands of multi-scale image fusion algorithms is proposed, and a new multi-scale image fusion method is built around it. Combining the à trous wavelet with the nonsubsampled contourlet transform (NSCT), fusion experiments on multi-sensor and multi-focus images show that the proposed method yields fused images that are superior, in both visual quality and objective evaluation scores, to fusion methods that select low-frequency coefficients by averaging or by a neural network.
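
As a rough illustration of the approach described in the abstract (a sketch, not the paper's actual algorithm), the Python snippet below fuses the low-frequency parts of two pre-registered grayscale images using per-pixel weights derived from simple visual saliency maps. A plain Gaussian low-pass stands in for the coarsest à trous/NSCT approximation subband, a crude center-surround contrast measure stands in for the paper's saliency maps, and the function names and sigma parameters are illustrative assumptions.

```python
# Minimal sketch of saliency-weighted low-frequency fusion.
# NOT the paper's exact method: the Gaussian low-pass approximates the
# coarsest a trous / NSCT subband, and the center-surround contrast
# approximates a visual saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter


def saliency_map(img, sigma=5.0):
    """Crude center-surround saliency: deviation from a blurred copy."""
    img = img.astype(np.float64)
    surround = gaussian_filter(img, sigma)
    sal = np.abs(img - surround)
    return sal / (sal.max() + 1e-12)          # normalize to [0, 1]


def fuse_low_frequency(img_a, img_b, sigma_lp=8.0, sigma_sal=5.0):
    """Fuse the low-frequency parts of two registered images,
    weighting each pixel by the relative saliency of the two sources."""
    low_a = gaussian_filter(img_a.astype(np.float64), sigma_lp)
    low_b = gaussian_filter(img_b.astype(np.float64), sigma_lp)
    sal_a = saliency_map(img_a, sigma_sal)
    sal_b = saliency_map(img_b, sigma_sal)
    w_a = sal_a / (sal_a + sal_b + 1e-12)     # per-pixel weight for image A
    return w_a * low_a + (1.0 - w_a) * low_b


if __name__ == "__main__":
    # Toy demo with random arrays; real use would load registered
    # multi-sensor or multi-focus images instead.
    rng = np.random.default_rng(0)
    a, b = rng.random((128, 128)), rng.random((128, 128))
    fused = fuse_low_frequency(a, b)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

Turning the two saliency maps into competing, normalized weights is one plausible reading of a saliency-driven low-frequency fusion rule; the rule proposed in the paper may differ in its details.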

Key words: information processing technology, image fusion, visual saliency map, fusion rule, low-frequency subbands

CLC number: TN911.73