Journal of Jilin University Science Edition ›› 2023, Vol. 61 ›› Issue (2): 325-330.


Causality Extraction Based on BERT-GCN

LI Yueze1, ZUO Xianglin1, ZUO Wanli1,2, LIANG Shining1, ZHANG Yijia3, ZHU Yuan3   

1. College of Computer Science and Technology, Jilin University, Changchun 130012, China;
    2. Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China;
    3. College of Software, Jilin University, Changchun 130012, China
Received: 2022-01-07  Online: 2023-03-26  Published: 2023-03-26

Abstract: Traditional causality extraction in natural language processing relies mainly on pattern-matching methods or machine learning algorithms, which yield low accuracy and can only extract explicit causality marked by causal cue words. To address these problems, we proposed BERT-GCN, an algorithm combining a large-scale pretrained model with a graph convolutional neural network. First, we used BERT (bidirectional encoder representations from transformers) to encode the corpus and generate word vectors. Second, we fed the generated word vectors into the graph convolutional neural network for training. Finally, we passed the output through a Softmax layer to complete the extraction of causality. The experimental results show that the model achieves good results on the SEDR-CE dataset and also performs well on implicit causality.
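A minimal sketch of the pipeline described above (not the authors' code), assuming PyTorch and the Hugging Face transformers library; the adjacency matrix adj, the BIO label set, and all hyperparameters are illustrative assumptions, since the abstract does not specify them:

    import torch
    import torch.nn as nn
    from transformers import BertModel

    class BertGCN(nn.Module):
        def __init__(self, num_labels=5, gcn_dim=256):
            super().__init__()
            # Step 1 of the abstract: BERT encodes the corpus into word vectors.
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert.config.hidden_size          # 768 for bert-base
            # Step 2: a single GCN layer, H' = ReLU(A_hat H W).
            self.gcn = nn.Linear(hidden, gcn_dim)
            # Step 3: per-token classifier feeding the Softmax layer.
            self.classifier = nn.Linear(gcn_dim, num_labels)

        def forward(self, input_ids, attention_mask, adj):
            # Contextual word vectors from BERT, shape (batch, seq_len, hidden).
            h = self.bert(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state
            # Graph convolution: adj is assumed to be a normalized adjacency
            # matrix D^{-1/2}(A + I)D^{-1/2} of shape (batch, seq_len, seq_len).
            h = torch.relu(torch.bmm(adj, self.gcn(h)))
            # Softmax over token labels,
            # e.g. {B-Cause, I-Cause, B-Effect, I-Effect, O}.
            return torch.softmax(self.classifier(h), dim=-1)

How adj is constructed (e.g., from a dependency parse of each sentence) is not stated in the abstract and is therefore left as an input to the model here.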

Key words: natural language processing, causality extraction, graph convolutional neural network (GCN), bidirectional encoder representations from transformers (BERT) model

CLC Number: TP391