Journal of Jilin University (Information Science Edition) ›› 2024, Vol. 42 ›› Issue (4): 600-609.



  • About the authors: WAN Xiaoling (1997—), female, born in Chongqing, master's student at Changchun University of Science and Technology, whose research focuses on digital image processing, (Tel) 86-13756467695, (E-mail) 1954761473@qq.com. Corresponding author: DUAN Jin (1971—), male, born in Changchun, professor and doctoral supervisor at Changchun University of Science and Technology, whose research focuses on pattern recognition and polarization imaging, (Tel) 86-13514494485, (E-mail) duanjin@vip.sina.com.
  • Funding:

     Supported by the Jilin Province Industrial Technology Research and Development Fund (2023C031-3) and the Chongqing Natural Science Foundation (cstc2021jcyj-msxmX0145)

Biconditional Generative Adversarial Networks for Joint Learning of Transmission Map and Dehazing Map

WAN Xiaoling a, DUAN Jin a,b, ZHU Yong b,c, LIU Ju a, YAO Anni a

  1. a. College of Electronic Information Engineering; b. Institute of Space Optoelectronic Technology; c. College of Computer Science, Changchun University of Science and Technology, Changchun 130022, China
  • Received: 2023-05-26  Online: 2024-07-22  Published: 2024-07-22


Abstract: To address the severe degradation of image quality in hazy weather, a new multi-task learning method is proposed based on the classical atmospheric scattering model; it jointly learns the transmission map and the dehazed image in an end-to-end manner. The network framework is built upon a new biconditional generative adversarial network, which stacks two improved CGANs (Conditional Generative Adversarial Networks). The hazy image is input into the first-stage CGAN to estimate the transmission map; the predicted transmission map and the hazy image are then passed into the second-stage CGAN, whose generator restores the corresponding haze-free image. To mitigate color distortion and edge blurring in the output, a joint loss function is designed to improve the quality of the image translation. Qualitative and quantitative experiments on synthetic and real datasets, compared against various dehazing methods, demonstrate that the dehazed images produced by this method have better visual quality, reaching a structural similarity of 0.985 and a peak signal-to-noise ratio of 32.880 dB.
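The classical atmospheric scattering model underlying the method is I(x) = J(x)t(x) + A(1 − t(x)), where I is the hazy image, J the scene radiance, t the transmission map, and A the global atmospheric light; once t is known, J can be recovered in closed form, which is why the first stage estimates the transmission map. A minimal NumPy sketch of this relationship (illustrative only — the function and variable names here are our assumptions, and the paper's networks learn these mappings rather than inverting the model directly):

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Model inverse: J = (I - A) / max(t, t_min) + A.

    Clamping t away from zero avoids amplifying noise where
    the transmission is very small (dense haze).
    """
    t = np.maximum(t, t_min)
    return (I - A) / t + A

# Toy round trip on a 4x4 RGB "scene".
J = np.random.rand(4, 4, 3)   # clear scene radiance
t = np.full((4, 4, 1), 0.6)   # transmission map (broadcast over channels)
A = 0.9                        # global atmospheric light
I = synthesize_haze(J, t, A)
J_rec = dehaze(I, t, A)
assert np.allclose(J_rec, J)
```

The closed-form inverse is exact when t is known; in practice t must be estimated, and errors in t propagate directly into the restored image, which motivates learning the transmission map and the dehazed image jointly.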

Key words: image dehazing, atmospheric scattering model, conditional generative adversarial network, multi-task learning, joint loss
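The two-stage design in the abstract — a first-stage CGAN mapping the hazy image to a transmission map, and a second-stage CGAN mapping (hazy image, predicted transmission map) to the dehazed image — can be wired together as follows. This is a structural sketch only: the placeholder functions stand in for the two trained generators, and all names and the constant transmission value are our illustrative assumptions:

```python
import numpy as np

def stage1_generator(hazy):
    # Placeholder for the first-stage CGAN generator:
    # hazy image -> transmission map. A real generator would be a
    # trained network; here we return a constant map for illustration.
    h, w, _ = hazy.shape
    return np.full((h, w, 1), 0.6)

def stage2_generator(hazy, t, A=0.9, t_min=0.1):
    # Placeholder for the second-stage CGAN generator:
    # (hazy image, predicted transmission map) -> dehazed image.
    # Illustrated via the scattering-model inverse rather than a
    # learned mapping.
    t = np.maximum(t, t_min)
    return (hazy - A) / t + A

def dehaze_pipeline(hazy):
    t_pred = stage1_generator(hazy)        # stage 1: estimate transmission
    return stage2_generator(hazy, t_pred)  # stage 2: restore haze-free image

hazy = np.random.rand(8, 8, 3)
out = dehaze_pipeline(hazy)
assert out.shape == hazy.shape
```

Stacking the stages this way lets the second generator condition on both the hazy input and the physical quantity estimated in stage one, which is the coupling the joint loss is designed to exploit.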

CLC number: 

  • TP391.4