Journal of Jilin University (Engineering and Technology Edition) ›› 2014, Vol. 44 ›› Issue (4): 1203-1208. doi: 10.13229/j.cnki.jdxbgxb201404046


Image fusion method based on visual saliency maps

WANG Xiao-wen1, 2, ZHAO Zong-gui3, PANG Xiu-mei4, LIU Min5   

  1. College of Command Information Systems, PLA University of Science and Technology, Nanjing 210007, China;
    2. Unit 93175 of PLA, Changchun 130051, China;
    3. The 28th Research Institute, China Electronics Technology Group Corporation, Nanjing 210007, China;
    4. College of Sciences, PLA University of Science and Technology, Nanjing 210014, China;
    5. Center of EW, Unit 73677 of PLA, Nanjing 210016, China
  • Received: 2012-12-27; Online: 2014-07-01; Published: 2014-07-01

Abstract: Visual saliency maps of the source images are extracted from multiple image features. Based on these maps, a novel fusion rule is proposed for the low-frequency subbands of multi-scale image fusion algorithms, and a new multi-scale image fusion method is developed. The à trous wavelet is integrated with the nonsubsampled contourlet transform (NSCT) and applied to multi-sensor and multi-focus image fusion experiments. The results demonstrate that, in both visual inspection and objective evaluation, the proposed approach outperforms methods that select low-frequency coefficients by averaging or with a neural network.
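As a rough illustration of a saliency-weighted low-frequency fusion rule, the Python sketch below combines the low-frequency subbands of two registered source images using per-pixel weights derived from their saliency maps. It is a minimal sketch under assumptions, not the paper's implementation: the center-surround saliency surrogate stands in for the Itti-style maps used in the paper, the helper names saliency_map and fuse_low_frequency are hypothetical, and the à trous/NSCT decomposition that produces the subbands is assumed to be computed elsewhere.

import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(img, sigma_center=2.0, sigma_surround=15.0):
    # Crude center-surround saliency: absolute difference between a
    # fine-scale and a coarse-scale Gaussian-smoothed version of the image.
    img = img.astype(np.float64)
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_surround)
    s = np.abs(center - surround)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]

def fuse_low_frequency(low_a, low_b, img_a, img_b):
    # Weight the two low-frequency subbands pixel-wise by normalized saliency.
    # For undecimated transforms (a trous wavelet, NSCT) the subbands keep the
    # source image size, so the weights align without resampling.
    sal_a = saliency_map(img_a)
    sal_b = saliency_map(img_b)
    w = sal_a / (sal_a + sal_b + 1e-12)
    return w * low_a + (1.0 - w) * low_b

In this sketch the pixel with the higher saliency contributes more to the fused low-frequency subband, which is the general idea behind replacing the conventional averaging rule; the high-frequency subbands would still be fused by a separate rule such as choose-max.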

Key words: information processing technology, image fusion, visual saliency map, fusion rule, low-frequency subbands

CLC Number: TN911.73
[1] 刘刚.基于多尺度的多传感器图像融合研究[D]. 上海:上海交通大学微电子学院, 2005. Liu Gang. Research on multiresolution-based multisensor image fusion[D]. Shanghai: School of Microelectronics, Shanghai Jiaotong University, 2005.
[2] 汤磊. 多分辨率图像融合方法与技术研究[D]. 南京: 解放军理工大学指挥自动化学院, 2008. Tang Lei. Research on multiresolution image fusion method and technology[D]. Nanjing: Institute of Command and Automation, PLA University of Science and Technology, 2008.
[3] Garcia J A, Sanchez R R, Valdivia J F. Axiomatic approach to computational attention[J]. Pattern Recognition, 2010, 43(4): 1618-1630.
[4] Lai Jie-ling, Yi Yang. Key frame extraction based on visual attention model[J]. Journal of Visual Communication and Image Representation, 2012, 23(1): 114-125.
[5] Hu Yi-qun, Xie Xing, Ma Wei-ying, et al. Salient object extraction combining visual attention and edge information[R]. Technical Report, 2004.
[6] Fang Yu-ming, Chen Zhen-zhong, Lin Wei-si, et al. Saliency detection in the compressed domain for adaptive image retargeting[J]. IEEE Transactions on Image Processing, 2012, 21(9): 3888-3901.
[7] Engelke U, Nguyen V X, Zepernick H J. Regional attention to structural degradations for perceptual image quality metric design[C]∥Proc IEEE Int Conf Acoust, Speech, and Signal Processing, 2008.
[8] Gopalakrishnan V, Hu Y Q, Rajan D. Random walks on graphs for salient object detection in images[J]. IEEE Transactions on Image Processing, 2010, 19(12):3232-3242.
[9] 叶传奇, 王宝树, 苗启广. 一种基于区域特性的红外与可见光图像融合算法[J]. 光子学报, 2009, 38(6): 1498-1503. Ye Chuan-qi, Wang Bao-shu, Miao Qi-guang. Fusion algorithm of infrared and visible images based on region feature[J]. Acta Photonica Sinica, 2009, 38(6): 1498-1503.
[10] Chai Yi, Li Hua-feng, Li Zhao-fei. Multifocus image fusion scheme using focused region detection and multi-resolution[J]. Optics Communications, 2011, 284 (19): 4376-4389.
[11] 王晓文, 赵宗贵, 汤磊. 一种新的红外与可见光图像融合评价方法[J]. 系统工程与电子技术, 2012, 34(5): 27-31. Wang Xiao-wen, Zhao Zong-gui, Tang Lei. A novel quality metric for infrared and visible image fusion[J]. System Engineering and Electronics, 2012, 34(5): 27-31.
[12] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Trans Pattern Analysis and Machine Intelligence, 1998, 20: 1254-1259.
[13] da Cunha A L, Zhou J P, Do M N. The nonsubsampled contourlet transform: theory, design, and applications[J]. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101.
[14] Wong A K C, Sahoo P K. A gray-level threshold selection method based on maximum entropy principle[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1989, 19(4): 866-871.
[15] Qu Xiao-bo, Yan Jing-wen, Zhu Zi-qian, et al. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain[J]. Acta Automatica Sinica, 2008, 34(12): 1508-1514.
[16] Qu Gui-hong, Zhang Da-li, Yan Ping-fan. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315.
[17] Xydeas C S, Petrovic V. Objective image fusion performance measure[J]. Electronics Letters, 2000, 36(4): 308-309.
[18] Zheng Y F, Essock E A, Hansen B C, et al. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms[J]. Information Fusion, 2007, 8(2): 177-192.
[19] Piella G. New quality measures for image fusion[C]∥Proc IEEE 7th International Conference on Information Fusion, 2004.
[20] Yang Cui, Zhang Jian-qi, Wang Xiao-rui, et al. A novel similarity based quality metric for image fusion[J]. Information Fusion, 2008, 9(2):156-160.
[21] Sugihara K. Robust gift wrapping for the three-dimensional convex hull[J]. Journal of Computer and System Sciences, 1994, 49:391-407.