
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders



Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success so far. Two distinct categories of samples to which deep neural networks are vulnerable, "adversarial samples" and "fooling samples", have been tackled separately to date because of the difficulty of defending against both simultaneously. In this work, we show how one can defend against both under a unified framework. Our model has the form of a variational autoencoder with a Gaussian mixture prior on the latent variable, such that each mixture component corresponds to a single class. We show how selective classification can be performed using this model, thereby causing the adversarial objective to entail a conflict. The proposed method leads to the rejection of adversarial samples instead of their misclassification, while maintaining high precision and recall on test data. It also inherently provides a way of learning a selective classifier in a semi-supervised scenario, which can similarly resist adversarial attacks. We further show how one can reclassify the detected adversarial samples by iterative optimization.
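
To make the mechanism concrete, the following is a minimal sketch (in PyTorch) of the selective classification described in the abstract; it is not the authors' implementation. The architecture, the reconstruction-error rejection rule, and every name below (GMVAE, selective_classify, the threshold tau) are illustrative assumptions.

import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, INPUT_DIM = 10, 8, 784

class GMVAE(nn.Module):
    """VAE whose latent prior is a Gaussian mixture with one component
    (here simplified to a learned mean with unit covariance) per class."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 2 * LATENT_DIM))           # outputs (mu, logvar)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, INPUT_DIM), nn.Sigmoid())
        self.class_means = nn.Parameter(torch.randn(NUM_CLASSES, LATENT_DIM))

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        return mu, logvar

@torch.no_grad()
def selective_classify(model, x, tau):
    """Predict the class whose mixture component is nearest in latent
    space, but abstain (return -1) when the reconstruction error of x
    exceeds the threshold tau: the rejection branch that turns
    adversarial inputs into rejections rather than misclassifications."""
    mu, _ = model.encode(x)
    dist = torch.cdist(mu, model.class_means)          # (batch, NUM_CLASSES)
    pred = dist.argmin(dim=-1)
    recon_err = ((model.decoder(mu) - x) ** 2).mean(dim=-1)
    pred[recon_err > tau] = -1                         # reject poor fits
    return pred

model = GMVAE()                                        # untrained; shape-checking only
print(selective_classify(model, torch.rand(4, INPUT_DIM), tau=0.25))

Coupling the class decision to the reconstruction error is what produces the conflict in the adversarial objective: to be accepted as class c, a perturbed input must both land near the c-th mixture component in latent space and reconstruct well, which pushes it toward genuinely resembling class c rather than remaining an imperceptible perturbation of the original.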

Author(s): Partha Ghosh and Arpan Losalka and Michael J. Black
Book Title: Proc. AAAI
Year: 2019

Department(s): Perceiving Systems
Research Project(s): Learning Deep Representations of 3D
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

URL: https://arxiv.org/abs/1806.00081

BibTeX

@inproceedings{ghosh2019resisting,
  title = {Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders},
  author = {Ghosh, Partha and Losalka, Arpan and Black, Michael J.},
  booktitle = {Proc. AAAI},
  year = {2019},
  url = {https://arxiv.org/abs/1806.00081}
}