Journal of Jilin University (Science Edition) ›› 2024, Vol. 62 ›› Issue (5): 1179-1187.


Supervised Contrastive Learning Text Classification Model Based on Double-Layer Data Augmentation

WU Liang, ZHANG Fangfang, CHENG Chao, SONG Shinan

  1. College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
  • Received: 2023-08-04 Online: 2024-09-26 Published: 2024-09-26
  • Corresponding author: SONG Shinan E-mail: songshinan@163.com

Supervised Contrastive Learning Text Classification Model Based on Double-Layer Data Augmentation

WU Liang, ZHANG Fangfang, CHENG Chao, SONG Shinan   

  1. College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
  • Received:2023-08-04 Online:2024-09-26 Published:2024-09-26

Abstract: To address the non-selective expansion and the training deficiencies of the DoubleMix algorithm during data augmentation, a supervised contrastive learning text classification model based on double-layer data augmentation is proposed, which effectively improves text classification accuracy when training data are scarce. First, keyword-based data augmentation is applied to the original data at the input layer, selectively augmenting the data without regard to sentence structure. Second, the original and augmented data are interpolated in the BERT hidden layers and then fed into TextCNN for further feature extraction. Finally, the model is trained using the Wasserstein distance and a double contrastive loss, further improving classification accuracy. Comparative experiments show that the proposed method achieves classification accuracies of 93.41%, 93.55%, 97.61%, and 95.27% on the SST-2, CR, TREC, and PC datasets respectively, outperforming classical algorithms.

Key words: data augmentation, text classification, contrastive learning, supervised learning

Abstract: Aiming at the non-selective expansion and training deficiencies of the DoubleMix algorithm during data augmentation, we proposed a supervised contrastive learning text classification model based on double-layer data augmentation, which effectively improved the accuracy of text classification when training data was scarce. Firstly, keyword-based data augmentation was applied to the original data at the input layer, selectively enhancing the data without considering sentence structure. Secondly, we interpolated the original and augmented data in the BERT hidden layers, and then sent them to the TextCNN for further feature extraction. Finally, the model was trained by using Wasserstein distance and a double contrastive loss to enhance text classification accuracy. The comparative experimental results on the SST-2, CR, TREC, and PC datasets show that the classification accuracy of the proposed method reaches 93.41%, 93.55%, 97.61%, and 95.27% respectively, which is superior to classical algorithms.
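Two of the training ingredients the abstract names, mixup-style interpolation of original and augmented hidden representations and a supervised contrastive loss, can be illustrated with a minimal NumPy sketch. The function names, the Beta(0.75, 0.75) mixing prior, and the temperature value are illustrative assumptions rather than the paper's exact formulation, and the Wasserstein-distance term the paper also uses is omitted here:

```python
import numpy as np

def interpolate_hidden(h_orig, h_aug, alpha=0.75, rng=None):
    """Mixup-style interpolation of hidden representations.

    Blends each original hidden state with its augmented counterpart
    using a coefficient drawn from a Beta(alpha, alpha) distribution,
    keeping the original sample dominant (an assumed convention).
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # ensure lam >= 0.5
    return lam * h_orig + (1.0 - lam) * h_aug

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: pulls together samples that share
    a label and pushes apart samples with different labels."""
    # L2-normalize so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a positive are skipped
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / count
```

In this sketch the loss is small when same-label features cluster together and large when they do not, which is the training signal the model uses on the interpolated BERT/TextCNN representations.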

Key words: data augmentation, text classification, comparative learning, supervised learning
