Journal of Jilin University (Information Science Edition) ›› 2022, Vol. 40 ›› Issue (5): 846-855.


Adversarial Examples Defense Method Based on Parallel Attention Mechanism

ZHAO Jie, GUO Dong   

  1. College of Computer Science and Technology, Jilin University, Changchun 130012, China
  • Received: 2021-12-22  Online: 2022-10-10  Published: 2022-10-10

Abstract: To reduce the effect of adversarial examples and improve the accuracy of classification models under attack, and inspired by the mammalian visual system, we propose a purification defense method based on a novel parallel attention mechanism, called PSCAM-GAN (Parallel Spatial and Channel Attention Mechanism Generative Adversarial Network). The defense model first generates a feature map through an encoder, and a parallel attention module extracts object and spatial information from it. With these features preserved, the weights of the feature map are readjusted and the decoder generates the purified result. This method keeps the purified output consistent with the input while removing malicious perturbations, effectively reducing the influence of adversarial examples on model accuracy. The robustness of the model is evaluated against various types of attacks on the CIFAR-10 and MNIST datasets. The experiments show that PSCAM-GAN outperforms other preprocessing-based defense methods, indicating that the proposed defense can effectively improve the robustness of the original models.
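To illustrate the idea of applying channel ("what"/object) and spatial ("where"/position) attention in parallel and fusing the two branches, the sketch below shows a CBAM-style parallel attention block in PyTorch. It is not the authors' implementation: the abstract does not specify layer sizes, pooling choices, or the fusion rule, so the module names, the reduction ratio, the 7x7 kernel, and the additive fusion are all illustrative assumptions.

```python
# Hypothetical sketch of a parallel spatial/channel attention block (PyTorch).
# Shapes, reduction ratio, kernel size, and additive fusion are assumptions,
# not details taken from the PSCAM-GAN paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights channels (object information) via global pooling + a shared MLP."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        avg = x.mean(dim=(2, 3))                # (B, C) global average pooling
        mx, _ = x.flatten(2).max(dim=2)         # (B, C) global max pooling
        w = torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        return x * w.unsqueeze(-1).unsqueeze(-1)


class SpatialAttention(nn.Module):
    """Re-weights spatial positions (spatial information) with a small convolution."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)       # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)      # (B, 1, H, W)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class ParallelAttention(nn.Module):
    """Applies channel and spatial attention to the same encoder feature map in
    parallel and fuses the branches, so the re-weighted map keeps both cues."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.ca(x) + self.sa(x)          # parallel branches, additive fusion
```

In a purification pipeline of the kind the abstract describes, such a block would sit between the encoder and the decoder: the encoder's feature map is re-weighted by the parallel attention module, and the decoder reconstructs the purified image from the adjusted features.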

Key words: deep learning, adversarial examples, generative adversarial networks, image classification

CLC Number: TP391