Review of Large Language Models Integrating Knowledge Graphs

Keywords: large language model; knowledge graph; explainability; hallucination problem; knowledge-language synergy
CLC number: TP391.1   Document code: A   Article ID: 1001-3695(2025)08-002-2255-12
doi:10.19734/j.issn.1001-3695.2024.12.0532
Review of large language models integrating knowledge graphs
Cao Rongrong¹, Liu Lin¹†, Yu Yandong², Wang Hailong¹ (1. College of Computer Science & Technology, Inner Mongolia Normal University, Hohhot, China; 2. Ulanqab Key Laboratory of Intelligent Information Processing & Security, Jining Normal University, Ulanqab, Nei Mongol 012000, China)
Abstract: LLMs have demonstrated exceptional performance across multiple vertical domains, yet their practical deployment remains constrained by limited explainability and hallucination issues in generated content. KGs, which store factual knowledge in structured semantic networks, provide a novel pathway to enhance the controllability and knowledge constraints of LLMs. To address these challenges, this paper systematically reviewed technical approaches for integrating KGs with LLMs. It analyzed representative methods across three key stages (pre-training adaptation, architectural modification, and fine-tuning optimization) and summarized their mechanisms for improving model explainability and suppressing hallucinations. Furthermore, it identified core challenges such as multimodal knowledge representation alignment and latency in dynamic knowledge integration. The analysis reveals that deep integration of KGs significantly enhances the factual consistency of LLM-generated content. However, future research must overcome critical technical bottlenecks in multimodal knowledge alignment, lightweight incremental fusion, and complex reasoning verification to shift LLMs from language-centric to knowledge-language-augmented paradigms, thereby establishing theoretical and technical foundations for building trustworthy and interpretable AI systems.
Key words: large language model (LLM); knowledge graph (KG); explainability; hallucination problem; knowledge-language synergy
0 Introduction
As a frontier technology in artificial intelligence, the large language model (LLM) [1] refers specifically to deep neural network models trained on ultra-large-scale corpora, with parameter counts exceeding the tens-of-billions level.