Journal of Jilin University (Science Edition) ›› 2026, Vol. 64 ›› Issue (2): 291-300.


Knowledge Graph Completion Algorithm Based on Large Language Models and Adapter Driver

JIANG Yunqi1, HAN Xiaotong2, TIAN Yuan2

  1. College of Computer Science and Technology, Jilin University, Changchun 130012, China;
  2. College of Artificial Intelligence, Jilin University, Changchun 130012, China
  • Received: 2024-12-19; Online: 2026-03-26; Published: 2026-03-26
  • Corresponding author: TIAN Yuan, E-mail: yuantian@jlu.edu.cn


Abstract: Knowledge graph completion methods that use Transformer as the backbone network suffer from parameter redundancy in their feed-forward networks, difficulty in identifying tail entities in commonsense scenarios, and biased embeddings in contrastive learning. To address these problems, we propose an adapter-enhanced knowledge graph completion algorithm that integrates large language models with multi-positive sample contrastive learning. The algorithm introduces multi-head adapters into the feed-forward networks to reduce redundant features, uses large language models to strengthen commonsense reasoning, and corrects embedding bias through multi-positive sample contrastive learning. Experimental results show that, compared with the current state-of-the-art models, the algorithm improves MRR by 5.4% on WN18RR and 9.2% on FB15k-237, and by 3.6% and 6.7% in the transductive and inductive settings of the more complex Wikidata5M dataset, respectively, while demonstrating better generalization in low-resource and complex scenarios.
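The multi-head adapter idea mentioned in the abstract (small bottleneck modules that shrink the feed-forward network's redundant parameters) can be sketched as follows. This is only a minimal illustration of the general bottleneck-adapter technique, not the paper's actual design: the dimensions, the per-head slicing, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiHeadAdapter:
    """Bottleneck adapter split across heads, inserted after an FFN layer.

    Each head down-projects its slice of the hidden vector, applies a
    non-linearity, projects back up, and adds a residual connection, so
    only a small number of parameters are trained per head.
    """

    def __init__(self, d_model=16, n_heads=4, bottleneck=2):
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # one small (down, up) projection pair per head
        self.down = [rng.normal(0, 0.02, (self.d_head, bottleneck))
                     for _ in range(n_heads)]
        self.up = [rng.normal(0, 0.02, (bottleneck, self.d_head))
                   for _ in range(n_heads)]

    def __call__(self, x):
        # x: (seq_len, d_model); each head transforms its own slice
        outs = []
        for h in range(self.n_heads):
            xs = x[:, h * self.d_head:(h + 1) * self.d_head]
            outs.append(xs + relu(xs @ self.down[h]) @ self.up[h])  # residual
        return np.concatenate(outs, axis=1)

adapter = MultiHeadAdapter()
h = rng.normal(size=(5, 16))   # toy FFN output for 5 tokens
out = adapter(h)
print(out.shape)               # (5, 16): the insertion is shape-preserving
```

Because each head owns an independent bottleneck, the adapter adds far fewer parameters than widening the feed-forward layer itself, which is the usual motivation for adapter-style tuning.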

Key words: knowledge graph completion, knowledge graph, large language model, contrastive learning, adapter learning
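The multi-positive sample contrastive objective described in the abstract generalizes the standard InfoNCE loss so that one anchor can have several correct candidates. A minimal NumPy sketch of that general technique follows; the function name, temperature value, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multi_positive_info_nce(query, keys, positive_mask, tau=0.05):
    """InfoNCE generalized to several positives per query.

    query: (d,) anchor embedding; keys: (n, d) candidate embeddings;
    positive_mask: (n,) bool, True where the candidate is a positive.
    The negative log-likelihood is averaged over all positives instead
    of being computed against a single one, which reduces the bias of
    treating valid candidates as negatives.
    """
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = k @ q / tau                      # cosine similarities / temperature
    log_denom = np.log(np.exp(logits).sum())  # log of the softmax denominator
    # average -log p(positive) over every positive candidate
    return float(np.mean(log_denom - logits[positive_mask]))

rng = np.random.default_rng(1)
keys = rng.normal(size=(8, 4))
q = keys[0] + 0.01 * rng.normal(size=4)   # anchor close to candidate 0
mask = np.zeros(8, dtype=bool)
mask[[0, 1]] = True                       # two positives for this anchor
loss = multi_positive_info_nce(q, keys, mask)
print(round(loss, 3))
```

In knowledge graph completion, such multiple positives arise naturally: a query (head, relation, ?) may have several correct tail entities, so scoring all of them as positives is what corrects the embedding bias the abstract refers to.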


CLC number: TP391.1