AI Security: adversarial example defense through parametric noise injection

This repository provides a PyTorch implementation of Parametric Noise Injection (PNI), a technique for improving neural network robustness against adversarial inputs.

Related Publication

Zhezhi He*, Adnan Siraj Rakin*, and Deliang Fan, "Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack," Conference on Computer Vision and Pattern Recognition (CVPR), June 16-20, 2019, Long Beach, CA, USA. [pdf]
* The first two authors contributed equally.

Method summary

Recent developments in deep learning have exposed the vulnerability of deep neural networks (DNNs) to adversarial examples. In image classification, an adversarial example is a carefully perturbed image that is visually indistinguishable from the original but causes the DNN model to misclassify it. Training a network with Gaussian noise is a classical and effective regularization technique that improves model robustness against input variation. Inspired by this, we exploit the regularization effect of noise injection to improve DNN robustness against adversarial attacks. In this work, we propose Parametric Noise Injection (PNI), which injects trainable Gaussian noise at each layer, on either the activations or the weights, by solving a min-max optimization problem embedded in adversarial training. The noise parameters are trained explicitly to achieve improved robustness. Extensive experiments show that the proposed PNI technique effectively improves robustness against a variety of powerful white-box and black-box attacks, including PGD, C&W, FGSM, transferable attacks, and the ZOO attack. Finally, PNI improves both clean-data and perturbed-data accuracy compared to state-of-the-art defenses: with the ResNet-20 architecture, it outperforms the current unbroken PGD-based defense by 1.1% on clean and 6.8% on perturbed test data.
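
To make the idea concrete, below is a minimal PyTorch sketch of a weight-wise PNI convolution layer. The class name `PNIConv2d`, the single per-layer coefficient `alpha`, and its 0.25 initialization are illustrative assumptions for this sketch rather than the exact API of this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNIConv2d(nn.Conv2d):
    """Convolution with weight-wise Parametric Noise Injection (illustrative sketch).

    On every forward pass, Gaussian noise scaled by the empirical standard
    deviation of the weight tensor is added to the weights. The scaling
    coefficient `alpha` is a trainable parameter learned jointly with the
    weights, rather than a fixed hyperparameter.
    """

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        # One trainable noise-scaling coefficient per layer; the 0.25
        # initial value is an assumption made for this sketch.
        self.alpha = nn.Parameter(torch.tensor(0.25))

    def forward(self, x):
        # Noise magnitude follows the std of the (detached) weight tensor,
        # so the injected noise tracks the layer's weight distribution.
        std = self.weight.detach().std()
        noise = torch.randn_like(self.weight) * std
        noisy_weight = self.weight + self.alpha * noise
        return F.conv2d(x, noisy_weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

The noise coefficients are learned inside the min-max objective: the inner maximization generates adversarial examples (e.g. with PGD), and the outer minimization updates the weights and `alpha` on a weighted sum of the clean and adversarial losses. A hedged sketch of one such training step follows; the attack hyperparameters (eps=8/255, 7 steps) and the 0.5/0.5 loss weighting are illustrative choices, not values specified in this summary.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=7):
    """Minimal L_inf PGD attack for the inner maximization (assumes inputs in [0, 1])."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def pni_training_step(model, optimizer, x, y, w_clean=0.5):
    """One outer-minimization step on a weighted clean + adversarial loss.

    Gradients flow into both the weights and the noise coefficients `alpha`.
    """
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = (w_clean * F.cross_entropy(model(x), y)
            + (1 - w_clean) * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```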