Journal of Jilin University (Engineering and Technology Edition), 2025, Vol. 55, Issue (2): 529-536. doi: 10.13229/j.cnki.jdxbgxb.20230395


Standardized construction method of a roadside multi-source sensing dataset

Li LI1, Yu-jian BAO1, Wen-chen YANG2, Qing-ling CHU1, Gui-ping WANG1

  1. School of Electronic and Control Engineering, Chang'an University, Xi'an 710064, China
  2. National Engineering Laboratory for Surface Transportation Weather Impacts Prevention, Broadvision Engineering Consultants Co., Ltd., Kunming 650200, China
  • Received: 2023-04-22; Online: 2025-02-01; Published: 2025-04-16
  • Contact: Gui-ping WANG  E-mail: lili@chd.edu.cn

Abstract:

To meet the need for standard open datasets in research on roadside multi-source fusion sensing algorithms, this paper proposed a method for constructing a standardized roadside multi-source sensing dataset. LiDAR and image data were collected at an urban T-junction and matched in both the spatial and temporal dimensions. A vehicle three-dimensional configuration extraction method was proposed, comprising road space division, road pavement segmentation, and laser point cloud clustering. A vehicle labeling method was designed, comprising target filtering and classification, recognition difficulty division, 3D bounding box calibration, and tag information supplementation. On this basis, a standardized roadside multi-source sensing dataset was constructed, containing labels for 9 794 cars and heavy vehicles in daytime and nighttime scenes. The YOLOv5 and PointRCNN algorithms were used to test 2D and 3D target recognition performance on the constructed dataset. The results showed that, owing to differences in scene complexity, data collection devices, and vehicle types, the constructed dataset differed from open vehicle-mounted datasets in the average number of scene and vehicle laser points and in the sizes of vehicle 3D bounding boxes. The YOLOv5 and PointRCNN algorithms achieved similar vehicle recognition accuracy on the open vehicle-mounted datasets and on the constructed roadside multi-source sensing dataset.
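The extraction pipeline summarized above (road space division, road pavement segmentation, laser point cloud clustering) is not detailed in this abstract. Because the reference list below includes the DBSCAN algorithm (Ester et al., reference 10), the minimal Python sketch that follows illustrates one plausible form of the clustering step, with a simple flat-ground height cut standing in for pavement segmentation; the function name, thresholds, and axis-aligned box extraction are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): extract rough vehicle 3D boxes
# from a roadside LiDAR frame via ground removal + DBSCAN clustering.
# Thresholds (0.3 m ground tolerance, eps=0.8 m, min_samples=10) are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_vehicle_boxes(points: np.ndarray,
                          ground_z: float = 0.0,
                          ground_tol: float = 0.3,
                          eps: float = 0.8,
                          min_samples: int = 10):
    """points: (N, 3) array of x, y, z coordinates in the roadside sensor frame."""
    # 1) Road pavement segmentation, simplified to a flat-ground height cut.
    above_ground = points[points[:, 2] > ground_z + ground_tol]

    # 2) Laser point cloud clustering with DBSCAN (density-based, cf. reference 10).
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above_ground)

    boxes = []
    for lab in set(labels) - {-1}:          # label -1 marks noise points
        cluster = above_ground[labels == lab]
        # 3) Axis-aligned 3D bounding box of the cluster (center and size).
        mins, maxs = cluster.min(axis=0), cluster.max(axis=0)
        boxes.append({"center": (mins + maxs) / 2.0,
                      "size": maxs - mins,
                      "num_points": len(cluster)})
    return boxes
```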

Key words: intelligent transportation, roadside sensing, dataset, multi-source sensing, LiDAR, target recognition

CLC Number: U495

Fig. 1  Standardized construction flow of a multi-source sensing dataset

Fig. 2  Data collection equipment and its deployment location

Table 1  Average number of laser points reflected by vehicles and scene factors in different datasets

Factor type       KITTI     Argoverse   nuScenes   Lyft      Waymo     This paper's dataset
Vehicle           838       557         86         614       1 356     1 347
Scene factors     19 057    6 390       2 679      11 082    19 483    24 334

Table 2  Size distribution of vehicle 3D bounding boxes in different datasets

Dataset                 Height distribution / %
                        ≤1.25 m    1.25~1.5 m    1.5~1.75 m    1.75~2.0 m    2.0~2.25 m
KITTI                   0.78       49.07         39.57         8.56          1.94
Waymo                   0.14       18.23         31.85         34.56         12.48
Argoverse               0.23       18.64         31.35         45.94         6.47
This paper's dataset    0.41       15.63         35.58         32.76         14.35

Dataset                 Width distribution / %
                        ≤1.25 m    1.25~1.5 m    1.5~1.75 m    1.75~2.0 m    2.0~2.25 m
KITTI                   0.34       9.07          74.57         16.37         5.94
Waymo                   0.15       2.04          8.61          49.58         32.67
Argoverse               0.31       1.58          19.76         53.82         6.04
This paper's dataset    0.35       1.82          5.61          72.26         19.43

Dataset                 Length distribution / %
                        ≤3.0 m     3.0~4.0 m     4.0~5.0 m     5.0~6.0 m     6.0~7.0 m
KITTI                   2.83       71.72         20.63         3.54          1.21
Waymo                   0.15       1.64          76.94         16.43         4.78
Argoverse               0.12       1.13          79.29         17.97         1.38
This paper's dataset    0.09       1.33          80.11         17.31         1.04
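The distributions in Table 2 can be tabulated directly from the labeled 3D bounding boxes. The short sketch below shows the binning step for one dimension; the label format, variable names, and toy values are assumptions for illustration only.

```python
# Illustrative sketch (assumed label format): tabulate the height/width/length
# distribution of labeled vehicle 3D bounding boxes, as in Table 2.
import numpy as np

def size_distribution(sizes_m: np.ndarray, bin_edges: list) -> np.ndarray:
    """sizes_m: 1-D array of one box dimension (e.g. heights) in metres.
    Returns the percentage of boxes falling in each [edge_i, edge_{i+1}) bin."""
    counts, _ = np.histogram(sizes_m, bins=bin_edges)
    return 100.0 * counts / len(sizes_m)

# Example: height bins used in Table 2 (0.25 m steps above 1.25 m).
height_edges = [0.0, 1.25, 1.5, 1.75, 2.0, 2.25]
heights = np.array([1.45, 1.52, 1.60, 1.78, 1.49, 2.10])   # toy values
print(size_distribution(heights, height_edges).round(2))
```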

Table 3  Results of two-dimensional target recognition

Dataset                 Precision   Recall   mAP (IoU=0.5)   mAP (IoU=0.5~0.95)
KITTI                   0.862       0.789    0.900           0.557
This paper's dataset    0.852       0.752    0.867           0.537
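The 2D results in Table 3 are scored at IoU thresholds of 0.5 and 0.5~0.95. As background, the sketch below shows how detections are matched to ground truth at IoU ≥ 0.5 to count true and false positives; the box format and the greedy matcher are illustrative assumptions, not the exact evaluation code used in the paper.

```python
# Illustrative sketch: 2D IoU and greedy matching of detections to ground
# truth at IoU >= 0.5, the criterion behind the Table 3 metrics.
def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(detections, ground_truth, iou_thr=0.5):
    """detections: dicts with 'box' and 'score'; ground_truth: list of boxes."""
    matched, tp = set(), 0
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        best, best_iou = None, iou_thr
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou_2d(det["box"], gt) >= best_iou:
                best, best_iou = i, iou_2d(det["box"], gt)
        if best is not None:
            matched.add(best)       # one-to-one match: each GT used once
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)
```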

Table 4  Target confidence before and after fusion

Target confidence               Vehicle 1   Vehicle 2   Vehicle 3   Vehicle 4   Vehicle 5   Vehicle 6
2D confidence before fusion     0.92        0.88        0.61        0.78        0.86        0.58
3D confidence before fusion     0.94        0.91        0           0.84        0.89        0.60
Confidence after fusion         0.97        0.96        0.61        0.89        0.90        0.61

Table 5  Comparison of vehicle recognition accuracy

Dataset                 Vehicle (IoU=0.7)
                        Easy samples   Moderate samples   Hard samples
KITTI                   86.96          75.64              70.70
Waymo                   85.32          69.64              67.73
This paper's dataset    86.89          75.91              71.23
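Table 5 reports accuracy separately for easy, moderate, and hard samples, reflecting the recognition-difficulty division mentioned in the abstract. The exact criteria used for the constructed dataset are not given here; the sketch below shows the standard KITTI-style assignment (2D box height, occlusion level, truncation) purely as an illustration of how such a split is typically made.

```python
# Illustration only: KITTI-style difficulty assignment for a labeled object.
# The constructed dataset's own thresholds may differ; these are the standard
# KITTI values (2D box height in pixels, occlusion level 0-2, truncation ratio).
def kitti_difficulty(box_height_px: float, occlusion: int, truncation: float) -> str:
    if box_height_px >= 40 and occlusion <= 0 and truncation <= 0.15:
        return "easy"
    if box_height_px >= 25 and occlusion <= 1 and truncation <= 0.30:
        return "moderate"
    if box_height_px >= 25 and occlusion <= 2 and truncation <= 0.50:
        return "hard"
    return "ignored"   # too small or too occluded to be evaluated
```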
1 Zhang Yi, Yao Dan-ya, Li Li, et al. Technologies and applications for intelligent vehicle-infrastructure cooperation systems[J]. Journal of Transportation Systems Engineering and Information Technology, 2021, 21(5): 40-51.
2 Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]∥IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 3354-3361.
3 Chang M F, Lambert J, Sangkloy P, et al. Argoverse: 3D tracking and forecasting with rich maps[C]∥ IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 8740-8749.
4 Wang J, Fu T. TJRD TS[DB/OL]. 2021 [2023-04-22].
5 Ye X, Shu M, Li H Y, et al. Rope3D: the roadside perception dataset for autonomous driving and monocular 3D object detection task[C]∥Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 21341-21350.
6 Lin C, Tian D, Duan X, et al. CL3D: camera-lidar 3D object detection with point feature enhancement and point-guided fusion[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(10): 18040-18050.
7 Wang Xin-zhu, Li Jun, Li Hong-jian, et al. Obstacle detection based on 3D laser scanner and range image for intelligent vehicle[J]. Journal of Jilin University (Engineering and Technology Edition), 2016, 46(2): 360-365.
8 Wang Y, Chen X, You Y, et al. Train in Germany, test in the USA: making 3D object detectors generalize[C]∥IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 11713-11723.
9 China Society of Automotive Engineers. Intelligent and connected vehicle fusion perception system, Part 2: data format specification (standard project initiation)[EB/OL]. [2023-04-22].
10 Ester M, Kriegel H P, Sander J, et al. A density-based algorithm for discovering clusters in large spatial databases with noise[C]∥ Proceedings of 2nd International Conference on Knowledge Discovery and Data Mining, Portland, USA, 1996: 226-231.
11 Sun P, Kretzschmar H, Dotiwalla X, et al. Scalability in perception for autonomous driving: waymo open dataset[C]∥IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 2443-2451.
12 Nepal U, Eslamiat H. Comparing YOLOv3, YOLOv4 and YOLOv5 for autonomous landing spot detection in faulty UAVs[J]. Sensors, 2022, 22(2): 464.
13 Zhao J, Xu H, Liu H, et al. Detection and tracking of pedestrians and vehicles using roadside lidar sensors[J]. Transportation Research Part C: Emerging Technologies, 2019, 100: 68-87.
14 Shi S S, Wang X G, Li H S. PointRCNN: 3D object proposal generation and detection from point cloud[C]∥IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 770-779.