Journal of Jilin University (Engineering and Technology Edition), 2025, Vol. 55, Issue 12: 3840-3851. doi: 10.13229/j.cnki.jdxbgxb.20240397


Method of lane detection based on adaptive fusion of double branch features

Tian-min DENG, Peng-fei XIE, Yang YU, Yue-tian CHEN

School of Traffic and Transportation, Chongqing Jiaotong University, Chongqing 400074, China
Received: 2024-04-15  Online: 2025-12-01  Published: 2026-02-03

Abstract:

To solve the problem of feature erosion and submergence caused by the direct fusion of deep and shallow features, and to achieve accurate lane detection in complex environments, a lane detection method based on adaptive fusion of dual-branch features was proposed. First, a dual-branch feature extraction network was designed to enhance the extraction of lane-line features in complex environments and to reduce the loss of spatial detail information. Second, a feature adaptive fusion module was constructed, in which channel attention and self-attention guide feature selection and fusion; the fusion process is adaptively adjusted to optimize the channel and spatial semantic information of the feature maps. In addition, an improved parallel hybrid pyramid pooling module was designed to better match the long, narrow shape of lanes and to capture long-range context in multiple directions. Finally, the proposed method was evaluated on the TuSimple, CULane and CurveLanes datasets, reaching F1 scores of 96.93%, 76.48% and 83.21%, respectively. The experimental results show that the proposed method effectively handles lane detection in complex scenes such as occlusion and shadow, and that its performance is significantly better than that of mainstream segmentation-based lane detection methods.
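The channel-attention-guided fusion of the deep and shallow branches described above can be sketched as a squeeze-and-excitation style gate. This is an illustrative NumPy sketch, not the paper's implementation; the function name, weight shapes, and reduction ratio are all assumptions for the example.

```python
import numpy as np

def channel_attention_fuse(deep, shallow, w1, w2):
    """Fuse two branch feature maps of shape (C, H, W) with a channel gate.

    Global-average-pool the summed features, pass the result through a
    small two-layer MLP, and use the sigmoid output to weight deep vs.
    shallow channels before summing (a squeeze-and-excitation style gate).
    """
    summed = deep + shallow                       # (C, H, W)
    squeeze = summed.mean(axis=(1, 2))            # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate, (C,)
    # Channels whose gate is near 1 favour the deep branch, near 0 the shallow one.
    return gate[:, None, None] * deep + (1.0 - gate)[:, None, None] * shallow

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                           # toy sizes, reduction ratio r
deep = rng.standard_normal((C, H, W))
shallow = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1       # squeeze MLP weights
w2 = rng.standard_normal((C, C // r)) * 0.1       # excitation MLP weights
fused = channel_attention_fuse(deep, shallow, w1, w2)
print(fused.shape)  # (8, 4, 4)
```

Because the gate is a per-channel convex combination, every fused value lies between the corresponding deep and shallow values, which is what prevents one branch from submerging the other.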

Key words: computer application, lane detection, feature adaptive fusion, channel attention, self-attention, pyramid pooling

CLC Number: TP391.4

Fig.1  Overall network framework

Fig.2  Dual-branch feature extraction backbone network

Fig.3  Channel perception fusion module (CPFM)

Fig.4  Cross self-attention fusion module (CSAFM)

Fig.5  Parallel hybrid pyramid pooling module (PHPPM)

Fig.6  Dual-granularity upsampling decoder

Fig.7  Detection head for lane-line existence

Table 1  Lane line datasets

| Dataset    | Samples | Train   | Test   | Val    | Resolution  | Road type      |
|------------|---------|---------|--------|--------|-------------|----------------|
| TuSimple   | 6 408   | 3 268   | 2 782  | 358    | 1 280×720   | Highway        |
| CULane     | 133 235 | 88 880  | 34 680 | 9 675  | 1 640×590   | Urban, highway |
| CurveLanes | 150 000 | 100 000 | 30 000 | 20 000 | 2 560×1 440 | Urban, highway |

Table 2  Comparison of experimental results on the TuSimple dataset

| Method           | F1/%  | Acc/% | FP/%  | FN/% |
|------------------|-------|-------|-------|------|
| SCNN[12]         | 95.97 | 96.53 | 6.17  | 1.80 |
| EL-GAN[24]       | 96.26 | 94.90 | 4.12  | 3.36 |
| ENet-SAD[15]     | 95.07 | 96.64 | 6.02  | 2.05 |
| ERF-E2E[25]      | 96.25 | 96.02 | 3.21  | 4.28 |
| UFLD-ResNet34[8] | 88.02 | 95.86 | 18.91 | 3.75 |
| PolyLaneNet[5]   | 90.62 | 93.36 | 9.42  | 9.33 |
| LaneATT[26]      | 96.77 | 95.63 | 3.53  | 2.92 |
| DBDNet-ResNet34  | 96.93 | 96.72 | 3.12  | 3.01 |
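On TuSimple, precision and recall follow directly from the reported false-positive and false-negative rates (precision = 100 − FP, recall = 100 − FN), so the F1 column can be cross-checked against the other columns. A minimal check against the DBDNet-ResNet34 row:

```python
def f1_from_rates(fp, fn):
    """F1 score from false-positive and false-negative rates, all in percent."""
    precision, recall = 100.0 - fp, 100.0 - fn
    return 2 * precision * recall / (precision + recall)

# DBDNet-ResNet34 row of Table 2: FP = 3.12 %, FN = 3.01 %
print(round(f1_from_rates(3.12, 3.01), 2))  # 96.93, matching the F1 column
```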

Table 3  Comparison of experimental results on the CULane dataset

| Method                | Total/% | Normal/% | Crowded/% | Night/% | No line/% | Shadow/% | Arrow/% | Dazzle/% | Curve/% | Cross/count | FPS  |
|-----------------------|---------|----------|-----------|---------|-----------|----------|---------|----------|---------|-------------|------|
| SCNN[12]              | 71.60   | 90.60    | 69.70     | 66.10   | 43.40     | 66.90    | 84.10   | 58.50    | 64.40   | 1990        | 7.5  |
| ENet-SAD[15]          | 70.80   | 90.10    | 68.80     | 66.00   | 41.60     | 65.90    | 84.00   | 60.20    | 65.70   | 1998        | 75   |
| ERFNet-E2E[25]        | 74.00   | 91.00    | 73.10     | 67.90   | 46.60     | 74.10    | 85.80   | 64.50    | 71.90   | 2022        | —    |
| PINet[27]             | 74.40   | 90.30    | 72.30     | 67.70   | 49.80     | 68.40    | 83.70   | 66.30    | 65.20   | 1427        | 25   |
| UFLD-ResNet34[8]      | 72.30   | 90.70    | 70.20     | 66.70   | 44.40     | 69.30    | 85.70   | 59.50    | 69.50   | 2037        | 170  |
| CurveLanes-NAS-M[16]  | 73.50   | 90.20    | 70.50     | 68.20   | 48.80     | 69.30    | 85.70   | 65.90    | 67.50   | 2359        | —    |
| RESA-ResNet34[14]     | 74.50   | 91.90    | 72.40     | 69.80   | 46.30     | 72.00    | 88.10   | 66.50    | 68.60   | 1896        | 45.5 |
| LaneATT[26]           | 75.13   | 91.17    | 76.33     | 69.51   | 50.16     | 73.25    | 87.82   | 63.75    | 68.58   | 1020        | 129  |
| DBDNet-ResNet34       | 76.48   | 92.55    | 74.97     | 70.79   | 49.67     | 77.79    | 88.35   | 66.63    | 72.35   | 1358        | 76.5 |

Table 4  Generalization experiment results

| Method                 | F1/%  | Precision/% | Recall/% |
|------------------------|-------|-------------|----------|
| SCNN[12]               | 65.02 | 96.53       | 6.17     |
| ENet-SAD[15]           | 50.31 | 63.60       | 41.60    |
| PointLaneNet[28]       | 78.47 | 86.33       | 71.59    |
| UFLDv2-ResNet34[29]    | 81.34 | 81.49       | 79.44    |
| CurveLanes-NAS-M[16]   | 81.80 | 93.49       | 72.71    |
| DBDNet-ResNet34        | 83.21 | 86.65       | 80.03    |
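Here the F1 column is the standard harmonic mean of the precision and recall columns, which can be verified row by row. A minimal check against the DBDNet-ResNet34 row:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall, both in percent."""
    return 2 * precision * recall / (precision + recall)

# DBDNet-ResNet34 row of Table 4: Precision = 86.65 %, Recall = 80.03 %
print(round(f1(86.65, 80.03), 2))  # 83.21, matching the F1 column
```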

Table 5  Results of ablation experiments
(Ablation over the DBDNet backbone and the CPFM, CSAFM, PPM, PHPPM and Aux_Head modules; variants (a)-(g) enable different module combinations.)

| Model        | F1/%  |
|--------------|-------|
| ResNet34-Seg | 71.37 |
| (a)          | 73.39 |
| (b)          | 73.86 |
| (c)          | 74.76 |
| (d)          | 75.22 |
| (e)          | 75.51 |
| (f)          | 75.74 |
| (g)          | 76.48 |

Fig.8  Visual comparative analysis

[1] Sun T Y, Tsai S J, Chan V, et al. HSI color model based lane-marking detection[C]∥IEEE Intelligent Transportation Systems Conference,Toronto, Canada, 2006: 1168-1172.
[2] Zhao Ying, Wang Shu-mao, Chen Bing-qi. Fast lane detection algorithm based on improved Hough transform[J]. Journal of China Agricultural University, 2006(3): 104-108.
[3] Cai Chuang-xin, Zou Yu, Pan Zhi-geng, et al. Novel lane detection algorithm based on multi-feature fusion and windows searching[J]. Journal of Jiangsu University (Natural Science Edition), 2023, 44(4): 386-391.
[4] Yang Jin-xin, Fan Ying, Xie Chun-lu. Lane detection algorithm based on row distance and particle filter[J]. Journal of Jiangsu University (Natural Science Edition), 2020, 41(2): 138-142, 198.
[5] Tabelini L, Berriel R, Paixao T M, et al. Polylanenet: lane estimation via deep polynomial regression[C]∥25th International Conference on Pattern Recognition (ICPR),Milan, Italy, 2021: 6150-6156.
[6] Liu R, Yuan Z, Liu T, et al. End-to-end lane shape prediction with transformers[C]∥Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision,Waikoloa, USA, 2021: 3693-3701.
[7] Feng Z, Guo S, Tan X, et al. Rethinking efficient lane detection via curve modeling[C]∥Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 17041-17049.
[8] Qin Z, Wang H, Li X. Ultra fast structure-aware deep lane detection[C]∥Computer Vision-ECCV : 16th European Conference, Glasgow, UK, 2020: 276-291.
[9] Zhang Yun-zuo, Zheng Yu-xin, Wu Cun-yu, et al. Accurate detection of lane lines in complex environment based on dual feature extraction network[J]. Journal of Jilin University (Engineering and Technology Edition), 2024, 54(7): 1894-1902.
[10] Shi Xiao-hu, Wu Jia-qi, Wu Chun-guo, et al. Detection method of enhanced lane lines in curves based on residual network[J]. Journal of Jilin University (Engineering and Technology Edition), 2023, 53(2): 584-592.
[11] Liu L, Chen X, Zhu S, et al. CondLaneNet: a top-to-down lane detection framework based on conditional convolution[C]∥Proceedings of the IEEE/CVF International Conference on Computer Vision,Montreal, Canada, 2021:3753-3762.
[12] Pan X, Shi J, Luo P, et al. Spatial as deep: spatial CNN for traffic scene understanding[C]∥Proceedings of the AAAI Conference on Artificial Intelligence,New Orleans, USA,2018:7276-7283.
[13] Su J, Chen C, Zhang K, et al. Structure guided lane detection[J/OL].[2024-04-05].
[14] Zheng T, Fang H, Zhang Y, et al. RESA: recurrent feature-shift aggregator for lane detection[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3547-3554.
[15] Hou Y, Ma Z, Liu C, et al. Learning lightweight lane detection CNNs by self attention distillation[C]∥Proceedings of the IEEE/CVF International Conference on Computer Vision,Seoul, Korea (South),2019: 1013-1021.
[16] Xu H, Wang S, Cai X, et al. Curvelane-nas: unifying lane-sensitive architecture search and adaptive point blending[C]∥Computer Vision-ECCV : 16th European Conference, Glasgow, UK, 2020: 689-704.
[17] Pan H, Hong Y, Sun W, et al. Deep dual-resolution networks for realtime and accurate semantic segmentation of traffic scenes[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 24(3): 3448-3460.
[18] Bae W, Yoo J, Chul Ye J. Beyond deep residual learning for image restoration: persistent homology-guided manifold simplification[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, USA, 2017:1141-1149.
[19] Cheng M, Su J, Li L, et al. A-DFPN: adversarial learning and deformation feature pyramid networks for object detection[C]∥IEEE 5th International Conference on Image, Vision and Computing (ICIVC),Beijing, China, 2020: 11-18.
[20] Liu S, Qi L, Qin H, et al. Path aggregation network for instance segmentation[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,Salt Lake City, USA, 2018: 8759-8768.
[21] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]∥NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems,Long Beach, USA,2017: 6000-6010.
[22] Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,Honolulu, USA, 2017: 6230-6239.
[23] TuSimple. TuSimple lane detection benchmark[DB/OL]. [2024-04-06].
[24] Ghafoorian M, Nugteren C, Baka N, et al. EL-GAN: embedding loss driven generative adversarial networks for lane detection[C]∥Proceedings of the European Conference on Computer Vision (ECCV) Workshops,Munich, Germany, 2018: 256-272.
[25] Yoo S, Lee H S, Myeong H, et al. End-to-end lane marker detection via row-wise classification[C]∥Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, USA, 2020:4335-4343.
[26] Tabelini L, Berriel R, Paixao T M, et al. Keep your eyes on the lane: real-time attention-guided lane detection[C]∥Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 294-302.
[27] Ko Y, Lee Y, Azam S, et al. Key points estimation and point instance segmentation approach for lane detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 23(7): 8949-8958.
[28] Chen Z, Liu Q, Lian C.PointLaneNet: efficient end-to-end CNNs for accurate real-time lane detection[C]∥IEEE Intelligent Vehicles Symposium (IV), Paris, France, 2019: 2563-2568.
[29] Qin Z Q, Zhang P Y, Li X, et al. Ultra fast deep lane detection with hybrid anchor driven ordinal classification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 46(5): 2555-2568.