Research on Adversarial Defense Methods for Enhancing the Robustness of Modulation Recognition Models


CLC number: TN97    Document code: A

Adversarial defense methods for enhancing the robustness of modulation recognition

LIU Fenghui, LU Jiazhan, ZHOU Jun, SONG Yiwen, LUO Hao (Nanjing Electronic Equipment Research Institute, Nanjing 210000, Jiangsu, China)

Abstract: Research has shown that well-performing deep learning models for signal classification are vulnerable to adversarial examples. To improve the robustness and security of classification models, the defense problem is divided into two parts: model defense and data-sample defense. On the model side, a new loss function named Inter-Class Distance Loss is proposed; once the feature centers are determined, it compacts feature points of the same class while dispersing feature points of different classes, so that the classification model learns a more separated feature space during training and its robustness is improved. On the data-sample side, a denoising autoencoder is trained to learn the mapping between original samples and adversarial examples and is then used to reconstruct adversarial inputs, so that the adversarial perturbation is filtered out during reconstruction. Experiments on a modulation classification dataset show that the classification accuracy remains around 82% at a perturbation level of 0.008, and that the proposed defense retains satisfactory robustness against different attack methods.

Key words: deep learning; adversarial example; adversarial defense; modulation classification
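
The exact formulation of the Inter-Class Distance Loss is not included in this excerpt. The following PyTorch sketch illustrates one center-based loss in the same spirit as the abstract's description: it pulls each feature toward a learnable center for its class while pushing different class centers apart. The margin value and the use of learnable centers are illustrative assumptions, not the paper's confirmed design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterClassDistanceLoss(nn.Module):
    """Compact same-class features around their center; separate different centers."""

    def __init__(self, num_classes: int, feat_dim: int, margin: float = 10.0):
        super().__init__()
        # One learnable feature center per modulation class (an assumption;
        # the paper may fix centers in another way).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Intra-class term: squared distance of each feature to its class center.
        centers_batch = self.centers[labels]                       # (B, D)
        intra = (features - centers_batch).pow(2).sum(dim=1).mean()

        # Inter-class term: hinge penalty when two class centers lie closer
        # than the margin, which spreads the learned feature space apart.
        dist = torch.cdist(self.centers, self.centers)             # (C, C)
        mask = ~torch.eye(self.centers.size(0), dtype=torch.bool, device=dist.device)
        inter = F.relu(self.margin - dist[mask]).mean()

        return intra + inter
```

In practice such a term would be added to the usual cross-entropy objective, e.g. `loss = ce_loss + lam * icd_loss(features, labels)`, with `lam` balancing classification accuracy against feature-space separation.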
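The data-sample defense can likewise be sketched as a denoising autoencoder trained on (adversarial, clean) pairs, so that reconstruction filters the perturbation before classification. The 1-D convolutional layout and the I/Q input shape of (2, 128) below are assumptions borrowed from common modulation datasets, not details confirmed by this excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoencoder(nn.Module):
    """Maps adversarial I/Q samples back toward their clean counterparts."""

    def __init__(self):
        super().__init__()
        # Length-preserving 1-D convolutions over the (I, Q) channels.
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 2, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train_step(dae, optimizer, x_adv, x_clean):
    """One training step: reconstruct the clean sample from its adversarial pair."""
    optimizer.zero_grad()
    loss = F.mse_loss(dae(x_adv), x_clean)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time the classifier would then receive `dae(x)` instead of the raw input, so perturbations outside the learned clean-signal manifold are suppressed.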

0 Introduction

At present, the number of frequency-using devices is surging and the number of user terminals is growing exponentially, so traditional modulation recognition techniques can hardly meet the demand of processing such massive volumes of data.
