Journal of Jilin University (Information Science Edition) ›› 2024, Vol. 42 ›› Issue (1): 59-66.

Alternative Data Generation Method of Privacy-Preserving Image 

LI Wanying a,b, LIU Xueyan a,b, YANG Bo a,b

  1. a. College of Computer Science and Technology; b. Key Laboratory of Symbolic Computing and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
  • Received: 2022-12-25 Online: 2024-01-29 Published: 2024-02-04

Abstract: To meet the privacy-protection requirements of existing image datasets, a privacy-preserving scenario for image datasets and a corresponding alternative-data generation method are proposed. In this scenario, the original image dataset is replaced by an alternative dataset produced by a privacy-preserving method, with each substitute image in one-to-one correspondence with an original image. Humans cannot identify the category of a substitute image, yet the substitute images can still be used to train existing deep-learning image classification algorithms with good classification performance. For this scenario, a data privacy-protection method based on the PGD (Projected Gradient Descent) attack is developed: the attack target of the original PGD attack is changed from a label to an image, i.e., an image-to-image attack, and a model robust to image-to-image attacks serves as the generator of the alternative data. On the standard test set, the replaced CIFAR-10 (Canadian Institute For Advanced Research) dataset and CINIC dataset achieve 87.15% and 74.04% test accuracy, respectively, on the image classification task. Experimental results show that the method generates an alternative dataset for the original dataset that preserves privacy against human inspection while maintaining the classification performance of existing methods on the dataset.
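The image-to-image PGD variant described in the abstract can be illustrated with a minimal sketch: instead of maximizing a classification loss for a label, the perturbation drives one image's features toward another image's features under an L-infinity budget. The toy linear "feature extractor" `W`, the function name `pgd_image_to_image`, and all hyperparameter values below are illustrative assumptions, not the paper's actual model or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a differentiable feature extractor (assumption:
# a linear map so the gradient can be written in closed form).
W = rng.standard_normal((8, 16))

def features(x):
    return W @ x

def pgd_image_to_image(x_src, x_tgt, eps=0.1, alpha=0.01, steps=40):
    """PGD where the attack target is an image, not a label.

    Minimizes ||f(x_adv) - f(x_tgt)||^2 while keeping x_adv within
    an L-infinity ball of radius eps around x_src and inside [0, 1].
    """
    x_adv = x_src.copy()
    for _ in range(steps):
        # Closed-form gradient of the squared feature distance
        # for the linear extractor above.
        grad = 2.0 * W.T @ (features(x_adv) - features(x_tgt))
        # Signed gradient descent step toward the target image.
        x_adv = x_adv - alpha * np.sign(grad)
        # Project back into the eps-ball and the valid pixel range.
        x_adv = np.clip(x_adv, x_src - eps, x_src + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

In the paper's scenario, a model robust to this kind of attack is then used as the generator of the substitute images; the sketch only shows the attack direction (image as target) that distinguishes it from standard label-targeted PGD.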

Key words:

deep learning, privacy protection, computer vision, adversarial attack, adversarial example

CLC Number: 

  • TP391