Journal of Jilin University (Engineering and Technology Edition) ›› 2022, Vol. 52 ›› Issue (10): 2438-2446. DOI: 10.13229/j.cnki.jdxbgxb20210298


Hyperspectral image classification based on hierarchical spatial-spectral fusion network

Ning OUYANG, Zu-feng LI, Le-ping LIN

  1. School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
  • Received: 2021-04-06  Online: 2022-10-01  Published: 2022-11-11
  • Contact: Le-ping LIN  E-mail: ynou@guet.edu.cn; linleping@guet.edu.cn

Abstract:

To acquire fine spectral and spatial features, together with their interaction information, for hyperspectral image classification, a hierarchical spatial-spectral fusion network is proposed. First, a hierarchical feature extraction module is used to extract the spectral and spatial features of the hyperspectral image separately. Second, a spatial-spectral feature interactive fusion module is designed to fuse these features and produce joint spatial-spectral features. The proposed network not only extracts and integrates fine spatial and spectral features at different levels, but also captures the interaction between spectral and spatial features through joint learning. Experimental results show that the proposed network outperforms state-of-the-art neural-network-based classification methods, demonstrating its ability to extract fine features and capture joint spatial-spectral features for classification.
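As a concrete illustration of this pipeline (not the authors' released code), the sketch below classifies every pixel of a hyperspectral cube from a patch centred on it, passing the patch through a spatial branch and a spectral branch and classifying the fused joint feature; the module names hfem_spa, hfem_spe and fusion are placeholders, with plausible instantiations sketched after Table 1 and Fig. 3.

```python
# Hedged sketch of the overall workflow (not the authors' code): classify each
# pixel of a hyperspectral cube from a small patch centred on it.
import torch
import torch.nn.functional as F

def classify_cube(cube, hfem_spa, hfem_spe, fusion, patch=9):
    """cube: (H, W, Bands) tensor; returns an (H, W) map of predicted labels."""
    h, w, bands = cube.shape
    pad = patch // 2
    # Reflect-pad the two spatial dimensions so border pixels also get full patches.
    padded = F.pad(cube.permute(2, 0, 1), (pad, pad, pad, pad), mode='reflect')
    padded = padded.permute(1, 2, 0)
    labels = torch.zeros(h, w, dtype=torch.long)
    with torch.no_grad():
        for i in range(h):
            for j in range(w):
                p = padded[i:i + patch, j:j + patch, :]     # (patch, patch, Bands)
                x = p.unsqueeze(0).unsqueeze(0)             # (1, 1, patch, patch, Bands)
                # Two extraction branches, then interactive fusion + classification.
                logits = fusion(hfem_spa(x).flatten(2), hfem_spe(x).flatten(2))
                labels[i, j] = logits.argmax(dim=1).item()
    return labels
```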

Key words: hyperspectral image classification, hierarchical feature extraction module, spatial-spectral feature interactive fusion module, feature fusion

CLC Number: TP753

Fig.1

Hyperspectral image classification framework based on hierarchical spatial-spectral fusion network

Fig.2

Structure of spatial-spectral hierarchical feature extraction module

Table 1

Parameter settings of spatial-spectral hierarchical feature extraction module

Operation type | HFEM_spa (size / strides) | HFEM_spe (size / strides)
Strided convolution | 2×2×1, 12 / 2×2×1 | 1×1×2, 12 / 1×1×2
Regular convolution | 2×2×1, 12 / 1×1×1 | 1×1×2, 12 / 1×1×1
Convolution | 2×2×2, 512 / 1×1×1 | 1×1×2, 512 / 1×1×1
Reshape | 512×S1 | 512×S2
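The rows of Table 1 can be read as a stack of 3D convolutions. The PyTorch sketch below is one plausible instantiation, assuming kernels are written as height×width×band; the input patch size, padding, and BatchNorm/ReLU placement are illustrative assumptions, and the hierarchical (multi-level) connections of the full HFEM are omitted.

```python
# One plausible PyTorch reading of Table 1 (kernel written as height×width×band,
# channel count after the comma). Padding and BatchNorm/ReLU placement are assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, kernel, stride):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=kernel, stride=stride),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
    )

hfem_spa = nn.Sequential(                                     # spatial branch
    conv_block(1, 12, kernel=(2, 2, 1), stride=(2, 2, 1)),    # strided convolution
    conv_block(12, 12, kernel=(2, 2, 1), stride=(1, 1, 1)),   # regular convolution
    conv_block(12, 512, kernel=(2, 2, 2), stride=(1, 1, 1)),  # final convolution
)
hfem_spe = nn.Sequential(                                     # spectral branch
    conv_block(1, 12, kernel=(1, 1, 2), stride=(1, 1, 2)),    # strided convolution
    conv_block(12, 12, kernel=(1, 1, 2), stride=(1, 1, 1)),   # regular convolution
    conv_block(12, 512, kernel=(1, 1, 2), stride=(1, 1, 1)),  # final convolution
)

x = torch.randn(4, 1, 9, 9, 30)    # toy batch of 9×9 patches with 30 bands (assumed sizes)
f_spa = hfem_spa(x).flatten(2)     # (4, 512, S1) -- the "Reshape 512×S1" row
f_spe = hfem_spe(x).flatten(2)     # (4, 512, S2) -- the "Reshape 512×S2" row
```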

Fig.3

Structure of spatial-spectral feature interactive fusion module
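The exact structure of the interactive fusion module is given by Fig. 3 in the paper; purely as an illustration of the idea, the sketch below realizes interactive fusion as a cross-attention between the two 512×S feature sets from the HFEM branches, with a learned affinity matrix and a linear classifier on the concatenated joint spatial-spectral feature.

```python
# Illustrative cross-attention fusion (an assumption about Fig. 3, not the exact design):
# the spatial and spectral feature sets attend to each other through a learned affinity
# matrix, and the pooled joint feature is classified.
import torch
import torch.nn as nn

class InteractiveFusion(nn.Module):
    def __init__(self, dim=512, n_classes=16):
        super().__init__()
        self.affinity = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.affinity)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, f_spa, f_spe):
        # f_spa: (B, dim, S1), f_spe: (B, dim, S2)
        c = torch.einsum('bds,de,bet->bst', f_spa, self.affinity, f_spe)  # affinity map
        att_spa = torch.softmax(c.mean(dim=2), dim=1)        # weight of each spatial position
        att_spe = torch.softmax(c.mean(dim=1), dim=1)        # weight of each spectral position
        g_spa = torch.einsum('bds,bs->bd', f_spa, att_spa)   # attended spatial feature
        g_spe = torch.einsum('bdt,bt->bd', f_spe, att_spe)   # attended spectral feature
        joint = torch.cat([g_spa, g_spe], dim=1)             # joint spatial-spectral feature
        return self.classifier(joint)
```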

Table 2

Classification results of different methods for the Indian Pines dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 97.23 | 97.31 | 99.10 | 97.29 | 100
2 | 98.95 | 96.51 | 96.73 | 98.32 | 99.59
3 | 96.87 | 93.88 | 99.85 | 99.25 | 99.49
4 | 95.77 | 94.95 | 94.88 | 97.84 | 98.90
5 | 90.68 | 98.97 | 94.13 | 99.64 | 99.55
6 | 92.72 | 97.72 | 98.95 | 99.55 | 99.96
7 | 94.68 | 92.19 | 98.20 | 99.52 | 99.47
8 | 99.82 | 98.21 | 95.81 | 99.69 | 100
9 | 84.63 | 93.01 | 99.70 | 98.33 | 100
10 | 98.01 | 96.20 | 93.80 | 99.13 | 99.08
11 | 98.81 | 97.41 | 96.05 | 99.60 | 99.70
12 | 95.41 | 95.03 | 96.94 | 92.67 | 99.59
13 | 99.35 | 99.86 | 95.74 | 99.79 | 100
14 | 99.19 | 99.19 | 99.39 | 99.16 | 99.83
15 | 93.09 | 95.52 | 95.50 | 97.43 | 98.90
16 | 96.08 | 93.43 | 95.88 | 89.19 | 96.12
OA/% | 96.60±2.58 | 96.92±0.40 | 97.06±0.60 | 98.70±0.41 | 99.57±0.11
AA/% | 95.71±1.61 | 96.21±1.24 | 96.87±1.13 | 97.90±0.35 | 99.38±0.17
Kappa×100 | 96.13±2.92 | 96.49±0.46 | 96.64±0.69 | 98.52±0.47 | 99.50±0.12
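Tables 2–5 report per-class accuracies together with overall accuracy (OA), average accuracy (AA) and the kappa coefficient (×100). These are standard metrics; a minimal NumPy sketch computing them from a confusion matrix (not code from the paper) is:

```python
# Standard OA / AA / kappa from a confusion matrix (conf[i, j] = samples of true
# class i predicted as class j). Not taken from the paper's code.
import numpy as np

def oa_aa_kappa(conf):
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                                   # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))                # mean per-class accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```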

Table 3

Classification results of different methods for the Kennedy Space Center dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 98.63 | 99.65 | 99.87 | 99.96 | 99.81
2 | 98.49 | 91.26 | 99.48 | 99.33 | 99.65
3 | 92.05 | 91.37 | 91.98 | 93.55 | 98.90
4 | 89.65 | 92.53 | 90.96 | 95.51 | 95.95
5 | 62.34 | 91.06 | 87.78 | 94.14 | 93.36
6 | 96.89 | 93.42 | 95.94 | 98.64 | 99.32
7 | 97.56 | 95.05 | 93.04 | 97.56 | 99.49
8 | 97.31 | 96.74 | 95.86 | 98.81 | 99.77
9 | 99.81 | 95.24 | 94.66 | 99.97 | 100
10 | 99.44 | 96.74 | 99.40 | 100 | 100
11 | 100 | 99.47 | 99.96 | 100 | 99.86
12 | 99.62 | 98.15 | 99.08 | 99.31 | 100
13 | 100 | 99.91 | 100 | 100 | 100
OA/% | 96.19±1.54 | 96.23±1.99 | 97.36±0.43 | 98.92±0.28 | 99.41±0.17
AA/% | 94.75±1.57 | 95.43±1.73 | 96.00±0.58 | 98.21±0.50 | 98.93±0.39
Kappa×100 | 95.76±1.71 | 95.80±2.22 | 97.07±0.47 | 98.80±0.31 | 99.35±0.19

Table 4

Classification results of different methods for the University of Pavia dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 93.33 | 98.42 | 98.33 | 99.66 | 99.87
2 | 99.22 | 99.88 | 99.94 | 99.94 | 99.96
3 | 96.15 | 96.79 | 97.59 | 99.28 | 99.63
4 | 99.84 | 99.78 | 99.88 | 99.90 | 99.91
5 | 100 | 99.38 | 99.93 | 99.98 | 100
6 | 94.15 | 99.76 | 99.91 | 99.56 | 99.98
7 | 99.93 | 97.59 | 99.12 | 99.87 | 99.98
8 | 87.85 | 97.17 | 96.76 | 98.45 | 99.35
9 | 99.93 | 99.62 | 99.96 | 100 | 99.95
OA/% | 96.33±1.99 | 99.15±0.15 | 99.26±0.07 | 99.69±0.06 | 99.87±0.04
AA/% | 96.67±2.03 | 98.71±0.25 | 99.04±0.14 | 99.62±0.09 | 99.84±0.04
Kappa×100 | 95.12±2.66 | 98.87±0.20 | 99.02±0.10 | 99.58±0.09 | 99.83±0.06

Table 5

Classification results of different methods for the Salinas dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 99.97 | 98.83 | 99.81 | 99.91 | 99.99
2 | 99.24 | 100 | 99.80 | 100 | 100
3 | 99.86 | 99.91 | 99.79 | 99.77 | 100
4 | 99.00 | 99.71 | 99.70 | 99.84 | 99.88
5 | 99.71 | 99.95 | 99.83 | 99.64 | 99.99
6 | 99.98 | 99.97 | 99.76 | 99.89 | 99.99
7 | 71.35 | 99.99 | 100 | 100 | 100
8 | 99.94 | 99.45 | 99.66 | 99.89 | 99.92
9 | 98.40 | 99.99 | 99.78 | 99.43 | 100
10 | 99.73 | 99.63 | 99.51 | 99.55 | 99.94
11 | 99.71 | 99.30 | 99.66 | 99.72 | 99.98
12 | 99.97 | 99.98 | 100 | 99.67 | 99.96
13 | 99.54 | 99.62 | 100 | 99.79 | 99.93
14 | 96.37 | 99.75 | 98.03 | 99.51 | 100
15 | 91.09 | 98.42 | 99.82 | 99.80 | 99.82
16 | 97.75 | 100 | 99.48 | 100 | 100
OA/% | 89.94±2.14 | 99.56±0.10 | 99.68±0.11 | 99.72±0.06 | 99.95±0.03
AA/% | 96.98±0.69 | 99.66±0.13 | 99.66±0.12 | 99.70±0.10 | 99.96±0.02
Kappa×100 | 88.67±2.44 | 99.51±0.11 | 99.71±0.09 | 99.77±0.07 | 99.94±0.03

Fig.4

Classification maps for Indian Pines dataset

Fig.5

Classification maps for Kennedy Space Center dataset

Fig.6

Classification maps for University of Pavia dataset

Table 6

Training and testing time of different methods for three datasets

Dataset | t/s | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
IP | Training | 1309.3 | 6136.7 | 9106.2 | 2122.7 | 3898.3
IP | Testing | 5.5 | 26.8 | 40.7 | 11.8 | 19.8
UP | Training | 1694.1 | 7394.6 | 6796.1 | 3879.1 | 5526.9
UP | Testing | 14.7 | 60.9 | 61.9 | 39.6 | 56.9
KSC | Training | 619.2 | 3205.1 | 2008.5 | 1028.3 | 1809.6
KSC | Testing | 2.5 | 12.8 | 9.1 | 5.7 | 7.1
Epochs |  | 300 | 200 | 200 | 300 | 300
1 Li W, Prasad S, Fowler J E, et al. Locality-preserving dimensionality reduction and classification for hyperspectral image analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(4): 1185-1198.
2 Li W, Chen C, Su H J, et al. Local binary patterns and extreme learning machine for hyperspectral imagery classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(7): 3681-3693.
3 Huang K, Li S, Kang X, et al. Spectral-spatial hyperspectral image classification based on KNN[J]. Sensing and Imaging, 2016, 17(1):1-13.
4 Yan Jing-wen, Chen Hong-da, Liu Lei. Overview of hyperspectral image classification[J]. Optics and Precision Engineering, 2019, 27(3): 680-693.
5 Zhang H, Li Y, Zhang Y, et al. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network[J]. Remote Sensing Letters, 2017, 8(5):438-447.
6 Ouyang N, Zhu T, Lin L P. Convolutional neural network trained by joint loss for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 16(3): 457-461.
7 Qing Y, Liu W. Hyperspectral image classification based on multi-scale residual network with attention mechanism[J]. Remote Sensing,2021,13(3):No.335.
8 Zhu M H, Jiao L C, Liu F, et al. Residual spectral–spatial attention network for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 59(1): 449-462.
9 Ouyang Ning, Zhu Ting, Lin Le-ping. Hyperspectral image classification method based on spatial-spectral fusion network[J]. Journal of Computer Applications, 2018, 38(7): 1888-1892.
10 Roy S K, Krishna G, Dubey S R, et al. HybridSN: exploring 3D-2D CNN feature hierarchy for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(2): 277-281.
11 Li Y, Zhang H K, Shen Q. Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network[J]. Remote Sensing, 2017, 9(1): 67-87.
12 Zhang C, Li G, Du S, et al. Three-dimensional densely connected convolutional network for hyperspectral remote sensing image classification[J]. Journal of Applied Remote Sensing, 2019, 13(1):No.16519.
13 Zhang C J, Li G D, Du S H, et al. Multi-scale dense networks for hyperspectral remote sensing image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 9201-9222.
14 Wang W, Dou S, Jiang Z, et al. A fast dense spectral-spatial convolution network framework for hyperspectral images classification[J]. Remote Sensing, 2018, 10(7): No.1068.
15 Zhu M H, Jiao L C, Liu F, et al. Residual spectral–spatial attention network for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 59(1): 449-462.
16 Huang G, Chen D, Li T, et al. Multi-scale dense networks for resource efficient image classification[C]//International Conference on Learning Representations, Vancouver, Canada, 2018: 1-14.
17 Chang Y H, Xu J T, Gao Z Y. Multi-scale dense attention network for stereo matching[J]. Electronics, 2020, 9(11):1881-1892.
18 Lu J, Yang J W, Batra D, et al. Hierarchical question-image co-attention for visual question answering[C]//Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 2016: 289-297.
19 Ma H Y, Li Y J, Ji X P, et al. MsCoa: multi-step co-attention model for multi-label classification[J]. IEEE Access, 2019, 7: 109635-109645.
20 Li M N, Tei K J, Fukazawa Y. An efficient adaptive attention neural network for social recommendation[J]. IEEE Access, 2020, 8: 63595-63606.