Few-Shot Continual Relation Extraction Based on Adversarial Prompt Sample Augmentation and Prototype Learning

CLC number: TP391    Document code: A    Article ID: 2096-4706(2025)18-0045-08
Abstract: Recent advances in Large Language Models have demonstrated impressive performance in relation extraction. However, mitigating the issues of catastrophic forgetting and similar-relation class confusion remains challenging in dynamic environments with limited samples. To address these issues, this paper proposes a memory-based Continual Adversarial Prototype Learning (CAPL) method. CAPL employs a memory-based architecture that identifies representative samples through extracted relation prototypes for memory replay, so as to alleviate the issue of catastrophic forgetting. In addition, CAPL designs an adversarial sample generation strategy tailored for few-shot scenarios to augment training data and effectively reduce the probability of class confusion. Experimental results on two public few-shot datasets show that CAPL achieves significant improvements over mainstream baseline methods, outperforming the previous best-performing methods by 4.82% and 5.91% in average accuracy, respectively.
Keywords: Continual Learning; Few-shot Learning; Catastrophic Forgetting; Prototype Learning; Adversarial Sample Augmentation
0 Introduction
Continual Relation Extraction (CRE) is a subfield of machine learning. Its core goal is to retain memory of previously learned relations in a dynamic environment, transfer knowledge across tasks, and handle new tasks by learning new relations.
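The abstract's memory-replay idea (extract a prototype per relation class, then keep the samples closest to each prototype for replay) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the mean-embedding prototype and the function names `class_prototypes` / `select_memory_samples` are assumptions made for the example.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Compute one prototype per relation class as the mean of its embeddings."""
    protos = {}
    for c in set(labels):
        idx = [i for i, y in enumerate(labels) if y == c]
        protos[c] = np.mean([embeddings[i] for i in idx], axis=0)
    return protos

def select_memory_samples(embeddings, labels, protos, k=1):
    """Pick, for each class, the k samples nearest its prototype (memory replay set)."""
    memory = []
    for c, p in protos.items():
        idx = [i for i, y in enumerate(labels) if y == c]
        idx.sort(key=lambda i: np.linalg.norm(embeddings[i] - p))
        memory.extend(idx[:k])
    return memory
```

When a new task arrives, the selected memory samples would be mixed into its training batches so the model keeps seeing old relations; the actual CAPL method additionally augments these batches with adversarially generated samples.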