吉林大学学报(工学版) ›› 2022, Vol. 52 ›› Issue (10): 2438-2446. doi: 10.13229/j.cnki.jdxbgxb20210298

• Computer Science and Technology •

基于多层次空谱融合网络的高光谱图像分类

欧阳宁,李祖锋,林乐平

  1. 桂林电子科技大学 信息与通信学院,广西 桂林 541004
  • 收稿日期:2021-04-06 出版日期:2022-10-01 发布日期:2022-11-11
  • 通讯作者: 林乐平 E-mail:ynou@guet.edu.cn;linleping@guet.edu.cn
  • About the author: OUYANG Ning (1972-), male, professor. Research interests: digital image processing, intelligent information processing. E-mail: ynou@guet.edu.cn
  • Supported by:
    National Natural Science Foundation of China (62001133); Science and Technology Major Project of Guangxi (GuiKe AA20302001); Science and Technology Base and Talent Special Project of Guangxi (GuiKe AD19110060); Foundation of Guangxi Key Laboratory of Wireless Wideband Communication and Signal Processing (GXKL06200114); Program for Cultivating One Thousand Young and Middle-aged Backbone Teachers in Guangxi Higher Education Institutions

Hyperspectral image classification based on hierarchical spatial-spectral fusion network

Ning OUYANG, Zu-feng LI, Le-ping LIN

  1. School of Information and Communication,Guilin University of Electronic Technology,Guilin 541004,China
  • Received:2021-04-06 Online:2022-10-01 Published:2022-11-11
  • Contact: Le-ping LIN E-mail:ynou@guet.edu.cn;linleping@guet.edu.cn

摘要:

为了在高光谱图像分类中更好提取和表达光谱与空间的精细特征以及特征间的交互信息,提出一种基于多层次空-谱融合网络的高光谱图像分类方法。首先,利用多层次特征提取模块,分别提取高光谱图像的多层次空间和光谱特征;其次,设计空-谱特征交互融合模块将获得的多层次空间与光谱特征进行特征融合,以产生空-谱融合特征。本文方法可以结合网络中不同层次的空间与光谱特征,有效地捕获高光谱图像精细特征;同时,通过联合学习融合空间与光谱特征,捕获光谱与空间特征之间交互作用。实验结果表明,与现有基于神经网络的分类方法相比,所提出的高光谱图像分类算法能够获得更高的分类精度,表明该网络能有效地提取精细特征和增强空-谱融合特征的表达能力。

关键词: 高光谱图像分类, 多层次特征提取模块, 空-谱特征交互融合模块, 特征融合

Abstract:

To better extract and represent fine spectral and spatial features and their interaction information in hyperspectral image classification, a hierarchical spatial-spectral fusion network is proposed. First, a hierarchical feature extraction module is used to extract multi-level spatial and spectral features of the hyperspectral image separately. Second, a spatial-spectral feature interactive fusion module is designed to fuse these features and produce joint spatial-spectral features. The proposed network can not only extract and integrate fine spatial and spectral features at different levels, but also capture the interaction between spectral and spatial features through joint learning. Experimental results show that the proposed network outperforms state-of-the-art neural-network-based classification methods, demonstrating its ability to extract fine features and to capture joint spatial-spectral features for classification.

Key words: hyperspectral image classification, hierarchical feature extraction module, spatial-spectral feature interactive fusion module, feature fusion
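
The abstract describes the two modules only at a high level. Purely as an illustration of how multi-level spatial and spectral features might be interactively fused before classification, the sketch below reweights each level's spatial and spectral vectors with a gate computed from both branches; the gating scheme, the dimensions and all names (SpatialSpectralFusionHead, dim, num_levels) are our assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class SpatialSpectralFusionHead(nn.Module):
    """Toy fusion/classification head: for each feature level, a gate computed
    from both branches reweights the spatial and spectral vectors before the
    fused features of all levels are concatenated and classified."""
    def __init__(self, dim: int, num_levels: int, num_classes: int):
        super().__init__()
        # one small gating layer per feature level (hypothetical design choice)
        self.gates = nn.ModuleList([nn.Linear(2 * dim, 2) for _ in range(num_levels)])
        self.classifier = nn.Linear(num_levels * dim, num_classes)

    def forward(self, spa_feats, spe_feats):
        # spa_feats / spe_feats: lists of (N, dim) tensors, one per level
        fused = []
        for gate, f_spa, f_spe in zip(self.gates, spa_feats, spe_feats):
            w = torch.softmax(gate(torch.cat([f_spa, f_spe], dim=1)), dim=1)  # (N, 2)
            fused.append(w[:, :1] * f_spa + w[:, 1:] * f_spe)  # interaction-weighted mix
        return self.classifier(torch.cat(fused, dim=1))        # (N, num_classes) logits


# Example: three feature levels of dimension 512, 16 land-cover classes
head = SpatialSpectralFusionHead(dim=512, num_levels=3, num_classes=16)
spa = [torch.randn(8, 512) for _ in range(3)]
spe = [torch.randn(8, 512) for _ in range(3)]
logits = head(spa, spe)  # shape (8, 16)
```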

CLC number: TP753

Fig.1  Structure of the hyperspectral image classification network based on the hierarchical spatial-spectral fusion network

Fig.2  Schematic of the spatial-spectral hierarchical feature extraction module

Table 1  Parameter settings of the spatial-spectral hierarchical feature extraction module

Operation (kernel size, filters / strides) | HFEM_spa | HFEM_spe
Strided convolution | 2×2×1, 12 / strides 2×2×1 | 1×1×2, 12 / strides 1×1×2
Regular convolution | 2×2×1, 12 / strides 1×1×1 | 1×1×2, 12 / strides 1×1×1
Convolution | 2×2×2, 512 / strides 1×1×1 | 1×1×2, 512 / strides 1×1×1
Reshape | 512×S1 | 512×S2
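
Read as 3-D convolutions, the settings in Table 1 translate almost line by line into code. The sketch below follows the kernel sizes, filter counts and strides listed for the spatial branch (HFEM_spa); the axis ordering, the lack of padding, the ReLU activations and the class name HFEMSpaSketch are assumptions, and the spectral branch (HFEM_spe) would simply swap the 2×2 spatial kernels for 1×1×2 spectral ones.

```python
import torch
import torch.nn as nn

class HFEMSpaSketch(nn.Module):
    """Spatial branch of the hierarchical feature extraction module, following the
    kernel/filter/stride settings of Table 1 (layout assumed: N, C, Band, H, W)."""
    def __init__(self):
        super().__init__()
        # strided convolution: 2x2x1 kernel, 12 filters, stride 2x2x1 (downsamples H and W only)
        self.strided_conv = nn.Conv3d(1, 12, kernel_size=(1, 2, 2), stride=(1, 2, 2))
        # regular convolution: 2x2x1 kernel, 12 filters, stride 1x1x1
        self.regular_conv = nn.Conv3d(12, 12, kernel_size=(1, 2, 2), stride=1)
        # final convolution: 2x2x2 kernel, 512 filters, stride 1x1x1
        self.conv = nn.Conv3d(12, 512, kernel_size=(2, 2, 2), stride=1)

    def forward(self, x):                      # x: (N, 1, Band, H, W) patch cube
        x = torch.relu(self.strided_conv(x))
        x = torch.relu(self.regular_conv(x))
        x = torch.relu(self.conv(x))
        return x.flatten(start_dim=2)          # reshape to (N, 512, S1) as in Table 1


# Example with tiny toy patch sizes (not the paper's): a 2-band, 9x9 patch
feats = HFEMSpaSketch()(torch.randn(4, 1, 2, 9, 9))
print(feats.shape)                             # torch.Size([4, 512, S1])
```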

Fig.3  Structure of the spatial-spectral feature interactive fusion module

Table 2  Classification results of different methods on the Indian Pines dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 97.23 | 97.31 | 99.10 | 97.29 | 100
2 | 98.95 | 96.51 | 96.73 | 98.32 | 99.59
3 | 96.87 | 93.88 | 99.85 | 99.25 | 99.49
4 | 95.77 | 94.95 | 94.88 | 97.84 | 98.90
5 | 90.68 | 98.97 | 94.13 | 99.64 | 99.55
6 | 92.72 | 97.72 | 98.95 | 99.55 | 99.96
7 | 94.68 | 92.19 | 98.20 | 99.52 | 99.47
8 | 99.82 | 98.21 | 95.81 | 99.69 | 100
9 | 84.63 | 93.01 | 99.70 | 98.33 | 100
10 | 98.01 | 96.20 | 93.80 | 99.13 | 99.08
11 | 98.81 | 97.41 | 96.05 | 99.60 | 99.70
12 | 95.41 | 95.03 | 96.94 | 92.67 | 99.59
13 | 99.35 | 99.86 | 95.74 | 99.79 | 100
14 | 99.19 | 99.19 | 99.39 | 99.16 | 99.83
15 | 93.09 | 95.52 | 95.50 | 97.43 | 98.90
16 | 96.08 | 93.43 | 95.88 | 89.19 | 96.12
OA/% | 96.60±2.58 | 96.92±0.40 | 97.06±0.60 | 98.70±0.41 | 99.57±0.11
AA/% | 95.71±1.61 | 96.21±1.24 | 96.87±1.13 | 97.90±0.35 | 99.38±0.17
Kappa×100 | 96.13±2.92 | 96.49±0.46 | 96.64±0.69 | 98.52±0.47 | 99.50±0.12
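
Tables 2-5 summarise each method with overall accuracy (OA), average accuracy (AA) and the kappa coefficient (reported as Kappa×100). As a reference for how these three summary metrics are conventionally computed from per-pixel predictions (the function below is our own illustration, not code from the paper):

```python
import numpy as np

def classification_scores(y_true, y_pred, num_classes):
    """Standard OA, AA and Cohen's kappa from integer class labels.
    Assumes every class in range(num_classes) occurs at least once in y_true."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                   # confusion matrix: rows = ground truth
    total = cm.sum()
    oa = np.trace(cm) / total                           # overall accuracy
    aa = (np.diag(cm) / cm.sum(axis=1)).mean()          # mean of per-class accuracies
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)                        # multiply by 100 to match the tables
    return oa, aa, kappa
```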

Table 3  Classification results of different methods on the Kennedy Space Center dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 98.63 | 99.65 | 99.87 | 99.96 | 99.81
2 | 98.49 | 91.26 | 99.48 | 99.33 | 99.65
3 | 92.05 | 91.37 | 91.98 | 93.55 | 98.90
4 | 89.65 | 92.53 | 90.96 | 95.51 | 95.95
5 | 62.34 | 91.06 | 87.78 | 94.14 | 93.36
6 | 96.89 | 93.42 | 95.94 | 98.64 | 99.32
7 | 97.56 | 95.05 | 93.04 | 97.56 | 99.49
8 | 97.31 | 96.74 | 95.86 | 98.81 | 99.77
9 | 99.81 | 95.24 | 94.66 | 99.97 | 100
10 | 99.44 | 96.74 | 99.40 | 100 | 100
11 | 100 | 99.47 | 99.96 | 100 | 99.86
12 | 99.62 | 98.15 | 99.08 | 99.31 | 100
13 | 100 | 99.91 | 100 | 100 | 100
OA/% | 96.19±1.54 | 96.23±1.99 | 97.36±0.43 | 98.92±0.28 | 99.41±0.17
AA/% | 94.75±1.57 | 95.43±1.73 | 96.00±0.58 | 98.21±0.50 | 98.93±0.39
Kappa×100 | 95.76±1.71 | 95.80±2.22 | 97.07±0.47 | 98.80±0.31 | 99.35±0.19

Table 4  Classification results of different methods on the University of Pavia dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 93.33 | 98.42 | 98.33 | 99.66 | 99.87
2 | 99.22 | 99.88 | 99.94 | 99.94 | 99.96
3 | 96.15 | 96.79 | 97.59 | 99.28 | 99.63
4 | 99.84 | 99.78 | 99.88 | 99.90 | 99.91
5 | 100 | 99.38 | 99.93 | 99.98 | 100
6 | 94.15 | 99.76 | 99.91 | 99.56 | 99.98
7 | 99.93 | 97.59 | 99.12 | 99.87 | 99.98
8 | 87.85 | 97.17 | 96.76 | 98.45 | 99.35
9 | 99.93 | 99.62 | 99.96 | 100 | 99.95
OA/% | 96.33±1.99 | 99.15±0.15 | 99.26±0.07 | 99.69±0.06 | 99.87±0.04
AA/% | 96.67±2.03 | 98.71±0.25 | 99.04±0.14 | 99.62±0.09 | 99.84±0.04
Kappa×100 | 95.12±2.66 | 98.87±0.20 | 99.02±0.10 | 99.58±0.09 | 99.83±0.06

Table 5  Classification results of different methods on the Salinas dataset

Class | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
1 | 99.97 | 98.83 | 99.81 | 99.91 | 99.99
2 | 99.24 | 100 | 99.80 | 100 | 100
3 | 99.86 | 99.91 | 99.79 | 99.77 | 100
4 | 99.00 | 99.71 | 99.70 | 99.84 | 99.88
5 | 99.71 | 99.95 | 99.83 | 99.64 | 99.99
6 | 99.98 | 99.97 | 99.76 | 99.89 | 99.99
7 | 71.35 | 99.99 | 100 | 100 | 100
8 | 99.94 | 99.45 | 99.66 | 99.89 | 99.92
9 | 98.40 | 99.99 | 99.78 | 99.43 | 100
10 | 99.73 | 99.63 | 99.51 | 99.55 | 99.94
11 | 99.71 | 99.30 | 99.66 | 99.72 | 99.98
12 | 99.97 | 99.98 | 100 | 99.67 | 99.96
13 | 99.54 | 99.62 | 100 | 99.79 | 99.93
14 | 96.37 | 99.75 | 98.03 | 99.51 | 100
15 | 91.09 | 98.42 | 99.82 | 99.80 | 99.82
16 | 97.75 | 100 | 99.48 | 100 | 100
OA/% | 89.94±2.14 | 99.56±0.10 | 99.68±0.11 | 99.72±0.06 | 99.95±0.03
AA/% | 96.98±0.69 | 99.66±0.13 | 99.66±0.12 | 99.70±0.10 | 99.96±0.02
Kappa×100 | 88.67±2.44 | 99.51±0.11 | 99.71±0.09 | 99.77±0.07 | 99.94±0.03

Fig.4  Classification maps of the Indian Pines dataset

Fig.5  Classification maps of the Kennedy Space Center dataset

Fig.6  Classification maps of the University of Pavia dataset

Table 6  Training and test times (t/s) of different methods on the three datasets

Dataset | Phase | 3D-CNN | 3D-DenseNet | MSDN(n=4) | HSFF_C | HSFF
IP | Training | 1309.3 | 6136.7 | 9106.2 | 2122.7 | 3898.3
IP | Test | 5.5 | 26.8 | 40.7 | 11.8 | 19.8
UP | Training | 1694.1 | 7394.6 | 6796.1 | 3879.1 | 5526.9
UP | Test | 14.7 | 60.9 | 61.9 | 39.6 | 56.9
KSC | Training | 619.2 | 3205.1 | 2008.5 | 1028.3 | 1809.6
KSC | Test | 2.5 | 12.8 | 9.1 | 5.7 | 7.1
Epochs | | 300 | 200 | 200 | 300 | 300
1 Li W, Prasad S, Fowler J E, et al. Locality-preserving dimensionality reduction and classification for hyperspectral image analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(4): 1185-1198.
2 Li W, Chen C, Su H J, et al. Local binary patterns and extreme learning machine for hyperspectral imagery classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(7): 3681-3693.
3 Huang K, Li S, Kang X, et al. Spectral-spatial hyperspectral image classification based on KNN[J]. Sensing and Imaging, 2016, 17(1):1-13.
4 闫敬文, 陈宏达, 刘蕾. 高光谱图像分类的研究进展[J]. 光学精密工程, 2019, 27(3): 680-693.
Yan Jing-wen, Chen Hong-da, Liu Lei. Overview of hyperspectral image classification[J]. Optics and Precision Engineering, 2019, 27(3): 680-693.
5 Zhang H, Li Y, Zhang Y, et al. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network[J]. Remote Sensing Letters, 2017, 8(5):438-447.
6 Ouyang N, Zhu T, Lin L P. Convolutional neural network trained by joint loss for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 16(3): 457-461.
7 Qing Y, Liu W. Hyperspectral image classification based on multi-scale residual network with attention mechanism[J]. Remote Sensing,2021,13(3):No.335.
8 Zhu M H, Jiao L C, Liu F, et al. Residual spectral–spatial attention network for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 59(1): 449-462.
9 欧阳宁, 朱婷, 林乐平. 基于空-谱融合网络的高光谱图像分类方法[J].计算机应用, 2018, 38(7): 1888-1892.
Ouyang Ning, Zhu Ting, Lin Le-ping. Hyperspectral image classification method based on spatial-spectral fusion network[J]. Journal of Computer Applications, 2018, 38(7): 1888-1892.
10 Roy S K, Krishna G, Dubey S R, et al. HybridSN: exploring 3D-2D CNN feature hierarchy for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(2): 277-281.
11 Li Y, Zhang H K, Shen Q. Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network[J]. Remote Sensing, 2017, 9(1): 67-87.
12 Zhang C, Li G, Du S, et al. Three-dimensional densely connected convolutional network for hyperspectral remote sensing image classification[J]. Journal of Applied Remote Sensing, 2019, 13(1):No.16519.
13 Zhang C J, Li G D, Du S H, et al. Multi-scale dense networks for hyperspectral remote sensing image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 9201-9222.
14 Wang W, Dou S, Jiang Z, et al. A fast dense spectral-spatial convolution network framework for hyperspectral images classification[J]. Remote Sensing, 2018, 10(7): No.1068.
15 Zhu M H, Jiao L C, Liu F, et al. Residual spectral–spatial attention network for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 59(1): 449-462.
16 Huang G, Chen D, Li T, et al. Multi-scale dense networks for resource efficient image classification[C]//International Conference on Learning Representations, Vancouver, Canada, 2018: 1-14.
17 Chang Y H, Xu J T, Gao Z Y. Multi-scale dense attention network for stereo matching[J]. Electronics, 2020, 9(11):1881-1892.
18 Lu J, Yang J W, Batra D, et al. Hierarchical question-image co-attention for visual question answering[C]//Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 2016: 289-297.
19 Ma H Y, Li Y J, Ji X P, et al. MsCoa: multi-step co-attention model for multi-label classification[J]. IEEE Access, 2019, 7: 109635-109645.
20 Li M N, Tei K J, Fukazawa Y. An efficient adaptive attention neural network for social recommendation[J]. IEEE Access, 2020, 8: 63595-63606.