Journal of Jilin University(Engineering and Technology Edition) ›› 2024, Vol. 54 ›› Issue (3): 785-796.doi: 10.13229/j.cnki.jdxbgxb.20220483


Underwater image enhancement based on color correction and TransFormer detail sharpening

De-xing WANG, Kai GAO, Hong-chun YUAN, Yu-rui YANG, Yue WANG, Ling-dong KONG

  1. School of Information, Shanghai Ocean University, Shanghai 201306, China
  • Received: 2022-04-27  Online: 2024-03-01  Published: 2024-04-18
  • Contact: Hong-chun YUAN  E-mail: dxwang@shou.edu.cn; hcyuan@shou.edu.cn

Abstract:

A multi-input underwater image recovery method based on TransFormer and a convolutional neural network (CNN) was proposed to address the low contrast, poor detail representation and color distortion of underwater images. A depth feature extraction module was constructed from TransFormer and relative total variation (RTV), fusing the texture map extracted by RTV with the image information extracted by TransFormer, which effectively enhances the detail features of the image. A color correction module was built using automatic color equalization (ACE) and the Lab color space to enhance contrast and correct color. A multi-term loss function was used to constrain network convergence and obtain enhanced, clear underwater images. Finally, the proposed method was compared quantitatively and qualitatively with other methods on the test sets; the experimental results show that images processed by the proposed method outperform the comparison methods in terms of sharpness, color performance and texture information.
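The abstract does not give code for the color correction module. As a purely illustrative stand-in, the kind of per-channel balancing and contrast stretching involved can be sketched as follows; this is a simple gray-world balance plus percentile stretch, an assumption for illustration, not the paper's actual ACE/Lab pipeline:

```python
import numpy as np

def simple_color_correct(img):
    """Illustrative stand-in for a color-correction step: gray-world
    white balance followed by a percentile contrast stretch.
    (The paper's module uses automatic color equalization and the Lab
    color space; this is only a rough approximation of the idea.)

    img: float array in [0, 1], shape (H, W, 3).
    """
    # Gray-world balance: scale each channel so its mean matches the
    # global mean intensity, reducing the typical blue/green color cast.
    means = img.reshape(-1, 3).mean(axis=0)
    balanced = img * (means.mean() / (means + 1e-8))

    # Percentile stretch: map the 1st/99th percentiles to [0, 1]
    # to raise contrast while ignoring outlier pixels.
    lo, hi = np.percentile(balanced, (1, 99))
    stretched = (balanced - lo) / (hi - lo + 1e-8)
    return np.clip(stretched, 0.0, 1.0)
```

After this step the channel means are approximately equalized, which is the basic effect a cast-removal module aims for before finer Lab-space processing.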

Key words: image processing, underwater image enhancement, TransFormer, color correction, detail sharpening

CLC Number: 

  • TP751

Fig.1

Architecture diagram of the model

Fig.2

Structure of the CNN color correction module

Fig.3

Residual convolution structure and the structure of D-conv

Fig.4

Depth feature extraction module and architecture diagram of Swin-Transformer

Table 1

Metrics data for TransFormer detail module ablation experiments on test set Test-1

Model                        SSIM     PSNR     UIQM    UCIQE
Full model                   0.8979   23.2175  3.4177  0.5856
Entirely without module      0.8922   23.5990  3.4368  0.5913
With partial module          0.8733   22.9111  3.4777  0.5810

Fig.5

Qualitative comparison of TransFormer detail module ablation experiments

Table 2

Color correction module ablation experiment metrics data on Test-1

Model                             SSIM     PSNR     UIQM    UCIQE
Full model                        0.8979   23.2175  3.4177  0.5856
Without color correction module   0.8827   22.2843  3.4774  0.5777

Table 3

Skip connections module ablation experiment metrics data on Test-1

Model                          SSIM     PSNR     UIQM    UCIQE
Full model                     0.8979   23.2175  3.4177  0.5856
Without skip connections       0.7511   20.8323  3.3721  0.5824
Without convolutional layers   0.8858   23.1180  3.4167  0.5831
RGB only                       0.8762   23.1561  3.4187  0.5854
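Table 3 shows that removing the skip connections costs the most, with SSIM falling from 0.8979 to 0.7511. The exact fusion layout is given in Fig. 1; as a minimal sketch under the assumption of a U-Net-style design, a skip connection simply concatenates encoder features onto the decoder features along the channel axis so that low-level detail reaches the decoder directly:

```python
import numpy as np

def fuse_with_skip(decoder_feat, encoder_feat):
    """U-Net-style skip connection (illustrative only, not the paper's
    exact layout): concatenate encoder features with decoder features
    along the channel axis.

    Both inputs: arrays of shape (H, W, C_dec) and (H, W, C_enc).
    Returns an array of shape (H, W, C_dec + C_enc).
    """
    assert decoder_feat.shape[:2] == encoder_feat.shape[:2]
    return np.concatenate([decoder_feat, encoder_feat], axis=-1)
```

Because the concatenated features bypass the bottleneck, fine texture that the deep layers would otherwise discard is still available when reconstructing the output, which is consistent with the large SSIM drop observed when the connections are removed.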

Fig.6

Qualitative comparison of skip connections module ablation experiments

Fig.7

Qualitative comparison of different methods on Test-1

Table 4

Metric values for different comparison methods on Test-1

Method       SSIM     PSNR     UIQM    UCIQE
CLAHE        0.8488   20.7689  3.0672  0.5683
UCM          0.7989   21.0540  2.5318  0.6267
UDCP         0.5494   12.8555  1.7952  0.5911
FUnIE-GAN    0.7203   20.0165  3.2636  0.5499
Water-Net    0.8416   21.1038  3.1939  0.5810
MLFc-GAN     0.6536   18.2000  2.6639  0.5466
Ucolor       0.8774   23.4974  3.3197  0.5747
Ours         0.8979   23.2175  3.4177  0.5856
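SSIM and PSNR in Tables 1–4 are full-reference metrics computed against ground-truth images. As a reminder of what the PSNR column measures, the standard definition (not code from the paper) is:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image, both floats in [0, max_val]. Higher is better; identical
    images give infinity."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

So the roughly 23 dB reported for the proposed method corresponds to a mean squared error of about 10^(-2.3) ≈ 0.005 on images scaled to [0, 1].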

Fig.8

Qualitative comparison of different methods on Test-2

Table 5

Metric values for different comparison methods on Test-2

Method       NIQE     UIQM    UCIQE
CLAHE        5.4158   2.4683  0.5560
UCM          6.0548   1.9677  0.6110
UDCP         6.8053   1.1825  0.5526
FUnIE-GAN    6.0278   2.8841  0.5442
Water-Net    6.2534   2.6920  0.5748
MLFc-GAN     8.0689   2.1500  0.5389
Ucolor       6.3299   2.7098  0.5501
Ours         5.5039   2.9668  0.5819
1 Skarlatos D, Agrafiotis P, Menna F, et al. Ground control networks for underwater photogrammetry in archaeological excavations[C]∥Proceedings of the 3rd IMEKO International Conference on Metrology for Archaeology and Cultural Heritage, Lecce, Italy, 2017: 23-25.
2 Chuang M C, Hwang J N, Kresimir W. A feature learning and object recognition framework for underwater fish images[J]. IEEE Transactions on Image Processing, 2016, 25(4): 1862-1872.
3 Trahanias P E, Venetsanopoulos A N. Color image enhancement through 3-D histogram equalization[C]∥11th IAPR International Conference on Image, Speech and Signal Analysis, The Hague, Netherlands, 1992: 545-548.
4 Zuiderveld K. Contrast limited adaptive histogram equalization[J/OL]. [2022-04-18].
5 Zou W, Wang X, Li K, et al. Self-tuning underwater image fusion method based on dark channel prior[C]∥IEEE International Conference on Robotics and Biomimetics, Qingdao, China, 2016: 788-793.
6 AbuNaser A, Doush I A, Mansour N, et al. Underwater image enhancement using particle swarm optimization[J]. Journal of Intelligent Systems, 2015, 24(1): 99-115.
7 Hitam M S, Awalludin E A, Yussof W N J H W Y, et al. Mixture contrast limited adaptive histogram equalization for underwater image enhancement[C]∥International Conference on Computer Applications Technology, Sousse, Tunisia, 2013: 1-5.
8 Ancuti C, Ancuti C O, Bekaert P. Enhancing underwater images and videos by fusion[C]∥2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012: 81-88.
9 Zhang S, Wang T, Dong J, et al. Underwater image enhancement via extended multi-scale Retinex[J]. Neurocomputing, 2017, 245: 1-9.
10 Akkaynak D, Treibitz T. Sea-thru: a method for removing water from underwater images[C]∥2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019: 1682-1691.
11 Trucco E, Olmos-Antillon T. Self-tuning underwater image restoration[J].IEEE Journal of Oceanic Engineering, 2006, 31(2): 511-519.
12 McGlamery B L. A computer model for underwater camera systems[J/OL]. [2022-04-18].
13 Jaffe J S. Computer modeling and the design of optimal underwater imaging systems[J]. IEEE Journal of Oceanic Engineering, 1990, 15(2): 101-111.
14 Chiang J, Chen Y. Underwater image enhancement by wavelength compensation and dehazing[J]. IEEE Transactions on Image Processing, 2012, 21(4): 1756-1769.
15 Peng Y T, Cao K M, Cosman P C. Generalization of the dark channel prior for single image restoration[J]. IEEE Transactions on Image Processing, 2018, 27(6): 2856-2868.
16 Akkaynak D, Treibitz T. A revised underwater image formation model[C]∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018: 6723-6732.
17 Ding X, Wang Y, Zheng L, et al. Towards Underwater Image Enhancement Using Super-Resolution Convolutional Neural Networks[M]. Singapore: Springer, 2018.
18 Li J, Skinner K A, Eustice R M, et al. WaterGAN: unsupervised generative network to enable real-time color correction of monocular underwater images[J]. IEEE Robotics and Automation Letters,2018,3(1): 387-394.
19 Fabbri C, Jahidul I M, Sattar J. Enhancing underwater imagery using generative adversarial networks[EB/OL]. [2022-04-28].
20 Li C, Guo C, Ren W, et al. An underwater image enhancement benchmark dataset and beyond[J]. IEEE Transactions on Image Processing, 2019, 29: 4376-4389.
21 Wang Y D, Guo J C, Gao H, et al. UIEC^2-Net: CNN-based underwater image enhancement using two color space[J]. Signal Processing: Image Communication, 2021, 96: No. 116250.
22 Liu Z, Lin Y T, Cao Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[J/OL]. [2022-04-28].
23 Xu L, Yan Q, Xia Y, et al. Structure extraction from texture via relative total variation[J]. ACM Transactions on Graphics, 2012, 31(6): 1-10.
24 Getreuer P. Automatic color enhancement (ACE) and its fast implementation[J]. Image Processing on Line, 2012, 2: 266-277.
25 Li C, Anwar S, Hou J, et al. Underwater image enhancement via medium transmission-guided multi-color space embedding[J]. IEEE Transactions on Image Processing, 2021, 30: 4985-5000.
26 Isola P, Zhu J Y, Zhou T H, et al. Image-to-image translation with conditional adversarial networks[C]∥IEEE Conference on Computer Vision & Pattern Recognition, Honolulu, HI, USA, 2017: 5967-5976.
27 Zhao H, Gallo O, Frosio I, et al. Loss functions for neural networks for image processing[EB/OL]. [2022-04-28].
28 Islam M J, Xia Y, Sattar J. Fast underwater image enhancement for improved visual perception[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 3227-3234.
29 Li C, Anwar S. Underwater scene prior inspired deep underwater image and video enhancement[J]. Pattern Recognition, 2019, 98(1): No. 107038.