Journal of Jilin University (Engineering and Technology Edition) ›› 2020, Vol. 50 ›› Issue (1): 227-236. doi: 10.13229/j.cnki.jdxbgxb20190116

• Computer Science and Technology •

  • About the first author: LI Xiong-fei (1963-), male, professor, doctoral supervisor. Research interests: machine learning, information fusion, image processing. E-mail: lxf@jlu.edu.cn
  • Supported by: National Key Technology Research and Development Program of China (2012BAH48F02); National Natural Science Foundation of China (61801190); Natural Science Foundation of Jilin Province (20180101055JC); Outstanding Young Talent Foundation of Jilin Province (20180520029JH); China Postdoctoral Science Foundation (2017M611323)

Multi-focus image fusion based on support vector machines and window gradient

Xiong-fei LI 1,2, Jing WANG 1,2, Xiao-li ZHANG 1,2, Tie-hu FAN 3

  1. College of Computer Science and Technology, Jilin University, Changchun 130012, China
    2. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
    3. College of Instrumentation and Electrical Engineering, Jilin University, Changchun 130033, China
  • Received: 2019-01-23  Online: 2020-01-01  Published: 2020-02-06
  • Contact: Tie-hu FAN  E-mail: lxf@jlu.edu.cn; fth@jlu.edu.cn


Abstract:

In order to improve the quality of multi-focus image fusion, a multi-focus image fusion method based on support vector machines (SVM) and window gradient is proposed in this paper. First, the multi-focus images are decomposed by window empirical mode decomposition (WEMD) into a set of intrinsic mode function components (high-frequency part) and residual components (low-frequency part); WEMD effectively solves the signal aliasing problem in image decomposition. Then, the fusion of the low-frequency components is guided by the output of the SVM, so that the more sharply focused regions are selected, while the window gradient contrast algorithm proposed in this paper guides the fusion of the high-frequency components, preserving the consistency of the image while maintaining the contrast of the fused image. Finally, the inverse WEMD transform is performed to obtain the fused image. Experiments were carried out on nine sets of multi-focus images. The results show that, in terms of subjective evaluation and five objective evaluation indicators, the proposed method achieves better fusion quality than the other five methods.

Key words: computer application, multi-focus image fusion, empirical mode decomposition, support vector machine, image gradient
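
The abstract above outlines a four-step pipeline: WEMD decomposition, SVM-guided fusion of the low-frequency components, window-gradient-guided fusion of the high-frequency components, and inverse WEMD. The sketch below illustrates that flow only in broad strokes and is not the authors' implementation: the single box-filter band split standing in for WEMD, the variance/gradient clarity features, the 16 x 16 block size and the 3 x 3 gradient window are all assumptions made here for illustration.

# Illustrative sketch of the fusion pipeline summarized in the abstract.
# NOTE: a rough approximation, not the paper's method; WEMD is replaced by a
# single box-filter band split, and all parameters below are assumed values.
import numpy as np
from scipy.ndimage import uniform_filter, sobel, gaussian_filter
from sklearn.svm import SVC

def split_bands(img, size=9):
    """Stand-in for a one-level WEMD: low = local mean, high = residual."""
    img = np.asarray(img, dtype=float)
    low = uniform_filter(img, size=size)
    return low, img - low

def clarity_features(block):
    """Simple focus measures for one block (assumed features, not the paper's)."""
    gx, gy = sobel(block, axis=0), sobel(block, axis=1)
    return [float(block.var()), float(np.mean(np.hypot(gx, gy)))]

def train_focus_svm(sharp_blocks):
    """Label sharp blocks 1 and their synthetically blurred copies 0, then fit an SVM."""
    X, y = [], []
    for b in sharp_blocks:
        X.append(clarity_features(b)); y.append(1)
        X.append(clarity_features(gaussian_filter(b, sigma=2.0))); y.append(0)
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

def fuse_low(lowA, lowB, svm, block=16):
    """Block-wise low-frequency fusion: keep the block the SVM judges to be sharper."""
    out = lowA.copy()
    for i in range(0, lowA.shape[0], block):
        for j in range(0, lowA.shape[1], block):
            a = lowA[i:i + block, j:j + block]
            b = lowB[i:i + block, j:j + block]
            # larger decision value = "more in focus" under the training setup above
            if svm.decision_function([clarity_features(b)])[0] > \
               svm.decision_function([clarity_features(a)])[0]:
                out[i:i + block, j:j + block] = b
    return out

def fuse_high(highA, highB, win=3):
    """Window-gradient contrast rule: keep the coefficient whose gradient
    magnitude, averaged over a win x win window, is larger."""
    def window_gradient(h):
        g = np.hypot(sobel(h, axis=0), sobel(h, axis=1))
        return uniform_filter(g, size=win)
    return np.where(window_gradient(highA) >= window_gradient(highB), highA, highB)

def fuse(imgA, imgB, svm):
    """Decompose, fuse each band, and recombine (the stand-in split inverts by addition)."""
    lowA, highA = split_bands(imgA)
    lowB, highB = split_bands(imgB)
    return fuse_low(lowA, lowB, svm) + fuse_high(highA, highB)

In this sketch the SVM would be trained once, e.g. on patches taken from the registered source images together with synthetically blurred copies, mirroring the supervised clarity decision described in the abstract; the real method's features, training data and decomposition differ.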

CLC number: TP391

Fig. 1  Framework of the proposed algorithm
Fig. 2  Example of WEMD
Fig. 3  Fusion example of the gradient choose-max rule
Fig. 4  Test image sets
Fig. 5  The first group of test images and fusion results
Fig. 6  The second group of test images and fusion results
Fig. 7  The third group of test images and fusion results
Fig. 8  Local magnification of the second group of test images
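
Fig. 2 above illustrates WEMD. For a rough idea of how a single window-based sifting step can split an image into a high-frequency, IMF-like component and a low-frequency residue, the following sketch estimates the upper and lower envelopes with max/min filters followed by an averaging filter, in the spirit of window-based bidimensional EMD variants; the window size and the exact envelope estimation used by WEMD in the paper may differ.

# One illustrative window-based sifting step (an assumption about WEMD's details).
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def wemd_one_level(img, win=7):
    """Return (imf, residue): high-frequency detail and low-frequency residue."""
    img = np.asarray(img, dtype=float)
    upper = uniform_filter(maximum_filter(img, size=win), size=win)  # smoothed upper envelope
    lower = uniform_filter(minimum_filter(img, size=win), size=win)  # smoothed lower envelope
    mean_envelope = 0.5 * (upper + lower)
    return img - mean_envelope, mean_envelope

Repeating the step on the residue with a growing window would yield further IMF components, matching the multi-component decomposition mentioned in the abstract.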

Table 1  Objective evaluation results of the fused images in the first group

Fusion algorithm    MI        QAB/F     Q         QW        QE
NSCT                17.3624   0.8256    0.9489    0.9213    0.8556
MDFB                17.1634   0.8252    0.9512    0.9242    0.8541
EWT                 15.7896   0.8123    0.9326    0.9145    0.8526
BEMD                18.6324   0.8312    0.9482    0.9248    0.8591
CEMD                15.2496   0.8176    0.9156    0.9123    0.8524
Proposed            19.3216   0.8423    0.9553    0.9266    0.8619

Table 2  Objective evaluation results of the fused images in the second group

Fusion algorithm    MI        QAB/F     Q         QW        QE
NSCT                28.0561   0.6742    0.7832    0.8286    0.8314
MDFB                28.6442   0.6738    0.8032    0.8347    0.8337
EWT                 27.6855   0.6254    0.7765    0.8214    0.8287
BEMD                28.7265   0.6871    0.8099    0.8356    0.8349
CEMD                27.3182   0.6213    0.7813    0.8254    0.8289
Proposed            29.7621   0.7053    0.8216    0.8379    0.8385

Table 3  Objective evaluation results of the fused images in the third group

Fusion algorithm    MI        QAB/F     Q         QW        QE
NSCT                24.2584   0.6975    0.8757    0.8645    0.8789
MDFB                24.0368   0.6856    0.8681    0.8612    0.8765
EWT                 23.4869   0.5286    0.7416    0.7523    0.8454
BEMD                24.9875   0.6942    0.8769    0.8673    0.8812
CEMD                23.4231   0.5278    0.7347    0.7685    0.8546
Proposed            25.2537   0.7258    0.8935    0.8769    0.8894

Table 4  Average objective evaluation results over the nine groups of fused images

Fusion algorithm    MI        QAB/F     Q         QW        QE
NSCT                24.7754   0.7534    0.8577    0.8549    0.8672
MDFB                24.7809   0.7408    0.8518    0.8572    0.8705
EWT                 23.3908   0.6743    0.7816    0.7923    0.7846
BEMD                25.8973   0.7766    0.8634    0.8639    0.8762
CEMD                22.5789   0.6578    0.7647    0.7785    0.7746
Proposed            27.5436   0.8097    0.8872    0.8795    0.8814
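
For reference, MI in Tables 1-4 is the mutual-information fusion metric: the sum of the mutual information between the fused image and each source image, estimated from grey-level histograms (QAB/F, Q, QW and QE are edge- and structure-based fusion metrics). Below is a minimal sketch of the MI computation, assuming 8-bit grayscale inputs and 256-bin histograms; normalisation conventions vary between papers, so absolute values may not match the tables exactly.

# Sketch of the mutual-information fusion metric MI = I(F; A) + I(F; B),
# estimated from 256-bin joint grey-level histograms (assumed convention).
import numpy as np

def mutual_information(x, y, bins=256):
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint grey-level distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, srcA, srcB):
    """Higher values mean the fused image retains more information from both sources."""
    return mutual_information(fused, srcA) + mutual_information(fused, srcB)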

Table 5  Running time comparison of the algorithms (s)

Image set              NSCT       MDFB     EWT      BEMD       CEMD      Proposed
Average of 9 groups    114.7636   6.5184   8.7326   129.5674   44.1697   9.5832
Group 1                82.4873    5.8826   7.3532   92.3815    40.8672   10.8826
Group 2                120.4206   6.8528   9.1312   137.1513   45.8817   12.9374
Group 3                77.3016    5.4356   6.8093   83.1597    38.4738   8.6954