Journal of Jilin University(Engineering and Technology Edition) ›› 2024, Vol. 54 ›› Issue (10): 3018-3026.doi: 10.13229/j.cnki.jdxbgxb.20221564


Compressive sensing image reconstruction based on deep unfolding self-attention network

Jin-peng TIAN, Bao-jun HOU

School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
Received: 2022-12-07    Online: 2024-10-01    Published: 2024-11-22

Abstract:

When convolutional neural networks are applied to image compressive sensing reconstruction at low sampling rates, the measurements contain little information, and it is difficult for the reconstruction network to attend to the contextual information of the image. To overcome this problem, a compressive sensing image reconstruction method based on a deep unfolding self-attention network is proposed. The network combines the sampling matrix with a self-attention mechanism for deep image reconstruction, and fully exploits the information in the measurements through a multi-stage reconstruction module to enhance the quality of image reconstruction. Experimental results show that the proposed network makes full use of the sampled image information, outperforms existing state-of-the-art methods on different datasets, and produces reconstructed images with better visual quality.
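The sampling step that the abstract builds on can be sketched as follows. This is a minimal numpy illustration of block-based compressive sampling with a random Gaussian matrix and an adjoint-based initial estimate, not the paper's actual learned network; the block size, sampling rate, and all variable names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

B = 8          # block size (assumption; CS networks often use 32)
mr = 0.10      # measurement (sampling) rate
m = int(round(mr * B * B))

# Random Gaussian sampling matrix Phi: each B x B image block is
# flattened to a length-B*B vector and projected to m measurements.
Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)

x = rng.random((B, B))      # one image block in [0, 1]
y = Phi @ x.reshape(-1)     # compressive measurements, length m

# A common initial reconstruction in unfolding networks: the adjoint
# (transpose) of Phi maps measurements back to image space; the
# multi-stage modules described above would then refine this estimate.
x0 = (Phi.T @ y).reshape(B, B)

print(y.shape)   # (6,)
print(x0.shape)  # (8, 8)
```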

Key words: computer application, compressive sensing, image reconstruction, self-attention, residual

CLC Number: 

TP393

Fig.1

Self-attention layer structure
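As a hedged illustration of the mechanism behind the self-attention layer in Fig.1, the following is a minimal numpy sketch of generic scaled dot-product self-attention; it omits the layer details of the paper's specific structure (e.g. any convolutional projections or residual connections), and all names are illustrative:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax along the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over N feature vectors.

    X: (N, d) input features; Wq/Wk/Wv: (d, d) projection matrices.
    Returns the (N, d) attended features.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (N, N) attention map
    return A @ V

rng = np.random.default_rng(0)
N, d = 16, 8
X = rng.standard_normal((N, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (16, 8)
```

Each output row is a convex combination of all input positions, which is why such a layer can aggregate image context globally rather than within a fixed convolutional receptive field.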

Fig.2

Deep unfolding self-attention network

Fig.3

Loss and reconstruction PSNR for different numbers of DUSANet reconstruction stages at a 10% sampling rate

Fig.4

Reconstruction performance for different DUSANet channel numbers at a 10% sampling rate

Table 1

Reconstruction performance of different reconstruction structures

Performance parameter   Structure (a)   Structure (b)   Structure (c)   Structure (d)
PSNR/dB                 26.58           27.92           29.44           29.67

Table 2

PSNR (dB) comparison of five algorithms on the Set11 dataset

Dataset   Algorithm   MR=1%   MR=4%   MR=10%   MR=25%   Avg
Set11     TVAL3       15.37   22.36   25.47    29.31    23.13
          MH          17.65   21.64   27.62    31.43    24.59
          GSR         17.91   22.33   28.76    32.21    25.30
          D-AMP       5.60    20.23   29.21    34.52    22.39
          DUSANet     21.11   25.43   29.67    34.89    27.78

Table 3

Comparison of PSNR (dB) and SSIM of six deep learning algorithms on different datasets

(Each cell: PSNR/dB / SSIM)

Dataset      MR     ReconNet       DR2-Net        ISTA-Net       TIP-CSNet      AMP-Net        DUSANet
Set5         1%     18.07/0.4138   18.50/0.4528   18.55/0.4408   24.07/0.6272   22.42/0.6183   24.10/0.6399
             4%     21.61/0.5453   22.74/0.6180   23.45/0.6619   28.74/0.8292   27.81/0.8172   29.11/0.8371
             10%    24.58/0.6762   26.56/0.7574   28.61/0.8315   32.28/0.9097   32.10/0.9024   33.54/0.9231
             25%    27.22/0.7712   31.01/0.8677   34.17/0.9272   36.24/0.9532   36.79/0.9532   38.13/0.9631
             Avg    22.87/0.6016   24.70/0.6740   26.20/0.7154   30.33/0.8298   29.78/0.8228   31.22/0.8408
Set11        1%     17.28/0.3824   17.42/0.4294   17.45/0.4128   20.95/0.5433   20.20/0.5581   21.11/0.5596
             4%     20.00/0.5257   20.80/0.5806   21.55/0.6236   24.83/0.7607   25.26/0.7722   25.43/0.7842
             10%    21.69/0.5991   24.25/0.7175   26.46/0.8031   28.06/0.8629   29.40/0.8779   29.67/0.8905
             25%    25.57/0.7769   28.66/0.8432   32.38/0.9232   32.30/0.9295   34.63/0.9480   34.89/0.9538
             Avg    21.14/0.5710   22.78/0.6427   24.46/0.6907   26.54/0.7741   27.37/0.7891   27.78/0.7970
Set14        1%     18.09/0.3907   18.31/0.4150   18.22/0.4014   22.74/0.5495   21.65/0.5434   23.04/0.5634
             4%     20.62/0.4889   21.33/0.5366   22.08/0.5708   26.16/0.7179   25.49/0.7004   26.71/0.7331
             10%    22.91/0.5973   24.43/0.6637   26.00/0.7289   28.91/0.8281   28.77/0.8182   30.11/0.8482
             25%    25.30/0.7111   28.11/0.7984   30.62/0.8701   32.34/0.9021   32.51/0.9144   34.45/0.9302
             Avg    21.73/0.5470   23.05/0.6034   24.23/0.6428   27.54/0.7494   27.11/0.7441   28.58/0.7687
Urban100     1%     16.09/0.3032   16.14/0.3236   16.22/0.3183   18.74/0.4169   18.89/0.4364   20.20/0.4534
             4%     18.09/0.4050   18.55/0.4563   18.94/0.4928   21.03/0.6011   21.97/0.6426   23.36/0.6780
             10%    19.95/0.5231   21.20/0.6120   22.64/0.5836   24.08/0.7627   25.32/0.8055   26.90/0.8391
             25%    22.19/0.6583   24.84/0.7778   28.29/0.8873   30.05/0.9146   30.42/0.9259   32.04/0.9398
             Avg    19.08/0.4724   20.19/0.5425   21.52/0.5705   23.48/0.6738   24.12/0.7026   25.63/0.7276
Dataset Avg         21.21/0.5480   22.68/0.6157   24.10/0.6549   26.97/0.7568   27.09/0.76465  28.30/0.7835
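The PSNR figures reported in the tables follow the standard definition for 8-bit images. As a hedged sketch (the exact evaluation code of the paper is not given; the function and test values here are illustrative):

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a reconstruction: PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 128.0)
rec = ref + 2.0   # uniform error of 2 gray levels -> MSE = 4
print(round(psnr(ref, rec), 2))  # 42.11
```

Higher PSNR means lower mean-squared reconstruction error; SSIM, the second metric in the tables, instead measures perceptual structural similarity on a 0-1 scale.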

Fig.5

Reconstruction performance comparison under different sampling rates

1 Donoho D L. Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306.
2 Baraniuk R G, Candes E, Nowak R, et al. Compressive sampling [from the guest editors][J]. IEEE Signal Processing Magazine, 2008, 25(2):12-13.
3 He K, Wang Z H, Huang X, et al. Computational multifocal microscopy[J]. Biomedical Optics Express, 2018, 9(12):6477-6496.
4 Mairal J, Sapiro G, Elad M. Learning multiscale sparse representations for image and video restoration[J]. Multiscale Modeling & Simulation, 2008, 7(1): 214-241.
5 Duarte M F, Davenport M A, Takhar D, et al. Single-pixel imaging via compressive sampling[J]. IEEE Signal Processing Magazine, 2008, 25(2): 83-91.
6 Liu Y P, Wu S, Huang X L, et al. Hybrid CS-DMRI: periodic time-variant subsampling and omnidirectional total variation based reconstruction[J]. IEEE Transactions on Medical Imaging, 2017, 36(10): 2148-2159.
7 Sharma S K, Lagunas E, Chatzinotas S, et al. Application of compressive sensing in cognitive radio communications: a survey[J]. IEEE Communications Surveys & Tutorials, 2017, 18(3):1838-1860.
8 Ma J W, Liu X Y, Shou Z, et al. Deep tensor ADMM-Net for snapshot compressive imaging[C]∥2019 IEEE/CVF International Conference on Computer Vision(ICCV), Seoul, South Korea,2019: 10222-10231.
9 Kulkarni K, Lohit S, Turaga P, et al. ReconNet: non-iterative reconstruction of images from compressively sensed measurements[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 449-458.
10 Yao H T, Dai F, Zhang S L, et al. DR2-Net: deep residual reconstruction network for image compressive sensing[J]. Neurocomputing, 2019,359(24): 483-493.
11 Zhang J, Ghanem B. ISTA-Net: interpretable optimization-inspired deep network for image compressive sensing[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,Salt Lake City, USA,2018: 1828-1837.
12 Shi W Z, Jiang F, Liu S H, et al. Image compressed sensing using convolutional neural network[J]. IEEE Transactions on Image Processing, 2020, 29: 375-388.
13 Zhang Z H, Liu Y P, Liu J N, et al. AMP-Net: denoising-based deep unfolding for compressive image sensing[J]. IEEE Transactions on Image Processing, 2021, 30: 1487-1500.
14 Hillar C J, Lim L H. Most tensor problems are NP-hard[J]. Journal of the ACM(JACM), 2013, 60(6): 1-39.
15 Li C B, Yin W T, Zhang Y. User's guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms[J]. CAAM Report, 2009, 20: 46-47.
16 Chen C, Tramel E W, Fowler J E. Compressed-sensing recovery of images and video using multihypothesis predictions[C]∥2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers(ASILOMAR), Pacific Grove, USA, 2011: 1193-1198.
17 Mousavi A, Patel A B, Baraniuk R G. A deep learning approach to structured signal recovery[C]∥The 53rd Annual Allerton Conference on Communication, Control, and Computing(Allerton), Monticello, USA,2015: 1336-1343.
18 Sun Y B, Chen J W, Liu Q S. Dual-path attention network for compressed sensing image reconstruction[J]. IEEE Transactions on Image Processing, 2020, 29: 9482-9495.
19 Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems[J]. SIAM Journal on Imaging Sciences, 2009, 2(1): 183-202.
20 Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[DB/OL]. [2022-11-26].
21 Zhang H, Goodfellow I, Metaxas D, et al. Self-attention generative adversarial networks[C]∥International Conference on Machine Learning, Long Beach, USA, 2019: 7354-7363.
22 Khan S, Naseer M, Hayat M, et al. Transformers in vision: a survey[DB/OL]. [2022-11-26].
23 Timofte R, Agustsson E, Van Gool L, et al. NTIRE 2017 challenge on single image super-resolution: methods and results[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Hawaii, USA, 2017: 114-125.
24 Bevilacqua M, Roumy A, Guillemot C, et al. Low-complexity single-image super-resolution based on nonnegative neighbor embedding[C]∥Proceedings of the 23rd British Machine Vision Conference,Guildford,UK, 2012:No.135.
25 Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations[C]∥International Conference on Curves and Surfaces, Beijing,China,2012: 711-730.
26 Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 5197-5206.
27 Zhang J, Zhao D B, Gao W. Group-based sparse representation for image restoration[J]. IEEE Transactions on Image Processing, 2014, 23(8): 3336-3351.
28 Metzler C A, Maleki A, Baraniuk R G. From denoising to compressed sensing[J]. IEEE Transactions on Information Theory, 2016, 62(9): 5117-5144.