Journal of Jilin University(Engineering and Technology Edition) ›› 2025, Vol. 55 ›› Issue (1): 93-104.doi: 10.13229/j.cnki.jdxbgxb.20230313


Driver behavior recognition method based on dual-branch and deformable convolutional neural networks

Hong-yu HU, Zheng-guang ZHANG, You QU, Mu-yu CAI, Fei GAO, Zhen-hai GAO

  1. State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China
  • Received: 2023-04-05  Online: 2025-01-01  Published: 2025-03-28
  • Contact: Fei GAO  E-mail: huhongyu@jlu.edu.cn; gaofei123284123@jlu.edu.cn

Abstract:

This research proposes a dual-branch neural network for recognizing driver behavior in the vehicle cockpit. The main branch employs ResNet-50 as the backbone for feature extraction and uses deformable convolution to adapt the model to changes in the driver's shape and position in the image. The auxiliary branch aids in updating the backbone's parameters during gradient backpropagation, so that the backbone extracts features more useful for driver behavior recognition, thereby improving recognition performance. Ablation and comparative experiments on the State Farm public dataset show that the proposed model achieves a recognition accuracy of 96.23% and performs better on easily confused behavior categories. The findings are important for understanding driver behavior in the vehicle cockpit and for ensuring driving safety.
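A minimal sketch of how the two branches' training losses might be combined, assuming (as the loss weight coefficient α studied in Fig. 3 and Table 1 suggests) a convex combination of the main-branch and auxiliary-branch cross-entropy losses; the function names and the exact weighting form are illustrative assumptions, not the authors' code:

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy of one softmax output against an integer class label."""
    return -math.log(probs[label])

def dual_branch_loss(main_probs, aux_probs, label, alpha=0.6):
    """Weighted sum of main-branch and auxiliary-branch losses.

    Both branches see the same label; the auxiliary term only shapes the
    shared backbone's gradients during training and is discarded at inference.
    """
    return alpha * cross_entropy(main_probs, label) + \
           (1 - alpha) * cross_entropy(aux_probs, label)

# Example: main branch is confident, auxiliary branch less so.
main = [0.90, 0.05, 0.05]
aux = [0.60, 0.30, 0.10]
loss = dual_branch_loss(main, aux, label=0, alpha=0.6)
```

Under this reading, α trades off how strongly the auxiliary branch steers the shared backbone, which is what the sweep over α in Table 1 tunes.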

Key words: vehicle engineering, intelligent driving, driver behavior recognition, convolutional neural network, auxiliary branch, deformable convolution

CLC Number: U471.3

Fig.1  Driver behavior recognition network framework

Fig.2  Ten driver behaviors in the State Farm dataset

Fig.3  Test accuracy of the model with different loss weight coefficients

Table 1  Recognition precision of each behavior and overall accuracy on the test set with different loss weight coefficients

| Weight coefficient α | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | Overall accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5 | 82.21 | 99.55 | 99.79 | 99.36 | 99.15 | 99.58 | 98.95 | 100.0 | 91.51 | 85.20 | 95.74 |
| 0.6 | 84.66 | 99.55 | 99.79 | 99.15 | 98.52 | 100.0 | 99.37 | 99.42 | 92.00 | 88.16 | 96.23 |
| 0.7 | 82.38 | 99.55 | 99.79 | 99.15 | 98.73 | 99.58 | 99.79 | 99.71 | 81.71 | 82.86 | 94.81 |
| 0.8 | 80.00 | 99.33 | 99.58 | 99.57 | 99.79 | 100.0 | 98.48 | 99.71 | 83.85 | 87.13 | 94.98 |
| 0.9 | 84.17 | 99.78 | 99.58 | 99.36 | 98.94 | 99.37 | 99.58 | 100.0 | 86.31 | 75.47 | 94.60 |

Fig.4  Ablation experiment results

Table 2  Overall accuracy and per-behavior recognition precision on the test set in the ablation experiment

| Network | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline (ResNet50) | 76.69 | 98.90 | 99.79 | 99.57 | 96.10 | 100.0 | 99.15 | 99.67 | 81.94 | 82.67 | 93.68 |
| ResNet50 + auxiliary branch | 79.89 | 99.11 | 99.58 | 99.57 | 99.50 | 99.79 | 98.95 | 100.0 | 94.24 | 89.88 | 95.76 |
| ResNet50 + deformable convolution | 76.16 | 99.33 | 99.58 | 99.36 | 99.57 | 98.95 | 98.33 | 100.0 | 89.08 | 60.51 | 92.06 |
| ResNet50 + deformable convolution + auxiliary branch | 84.66 | 99.55 | 99.79 | 99.15 | 98.52 | 100.0 | 99.37 | 99.42 | 92.00 | 88.16 | 96.23 |
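The per-component contributions in the ablation study can be read as deltas over the baseline; a quick tabulation of Table 2's overall-accuracy column:

```python
# Overall accuracy (%) of each ablation variant (Table 2).
acc = {
    "baseline (ResNet50)": 93.68,
    "+ auxiliary branch": 95.76,
    "+ deformable conv": 92.06,
    "+ both": 96.23,
}
base = acc["baseline (ResNet50)"]
deltas = {k: round(v - base, 2) for k, v in acc.items() if k != "baseline (ResNet50)"}
# The auxiliary branch alone helps (+2.08), deformable convolution alone
# hurts (-1.62), but the two together give the largest gain (+2.55).
```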

Fig.5  Visualization of different networks' attention areas in the ablation experiment

Fig.6  Recognition accuracy of different models during training and testing

Table 3  Comparison of per-behavior recognition precision and overall test-set accuracy across network models

| Network | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | 85.93 | 98.31 | 97.25 | 98.39 | 86.78 | 87.27 | 84.12 | 99.66 | 60.35 | 64.34 | 85.93 |
| DenseNet | 80.78 | 98.67 | 100.0 | 99.57 | 97.50 | 98.73 | 98.94 | 99.70 | 85.40 | 84.53 | 94.67 |
| InceptionV3 | 78.89 | 99.10 | 100.0 | 99.35 | 99.57 | 99.58 | 97.93 | 99.13 | 89.52 | 83.53 | 94.86 |
| VGG16 | 57.29 | 94.15 | 94.57 | 96.46 | 90.06 | 69.87 | 94.92 | 100.0 | 53.17 | 71.43 | 80.02 |
| Proposed | 84.66 | 99.55 | 99.79 | 99.15 | 98.52 | 100.0 | 99.37 | 99.42 | 92.00 | 88.16 | 96.23 |

Fig.7  Confusion matrices of different network models

Table 4  Real-time performance comparison of different network models (FPS)

| Network model | test 1 | test 2 | test 3 | test 4 | test 5 | test 6 | test 7 | test 8 | test 9 | test 10 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | 201.1 | 182.9 | 182.6 | 197.6 | 183.9 | 196.3 | 192.2 | 193.6 | 193.9 | 191.6 | 191.6 |
| DenseNet | 43.23 | 40.29 | 41.62 | 42.18 | 43.92 | 43.20 | 42.26 | 44.43 | 42.48 | 43.74 | 42.74 |
| Inception V3 | 54.64 | 53.95 | 55.62 | 54.31 | 54.20 | 55.22 | 54.90 | 54.16 | 55.86 | 55.97 | 54.88 |
| VGG16 | 53.56 | 52.69 | 52.71 | 51.72 | 52.98 | 54.29 | 53.20 | 52.72 | 53.65 | 52.66 | 53.02 |
| Proposed | 59.11 | 59.04 | 58.57 | 58.90 | 57.98 | 57.61 | 57.61 | 58.53 | 59.04 | 58.60 | 58.50 |
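The tabulated averages can be reproduced from the ten per-run FPS figures; for instance, for the proposed model in Table 4:

```python
# Per-run FPS of the proposed model across ten test runs (Table 4).
fps = [59.11, 59.04, 58.57, 58.9, 57.98, 57.61, 57.61, 58.53, 59.04, 58.6]

mean_fps = round(sum(fps) / len(fps), 1)
# Matches the tabulated average of 58.5 FPS.
```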