To Compress Or Not To Compress: Understanding The Interactions Between Adversarial Attacks And Neural Network Compression

Part of Proceedings of Machine Learning and Systems 1 (MLSys 2019)


Authors

Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Abstract

As deep neural networks (DNNs) become widely used, pruned and quantised models are becoming ubiquitous on edge devices; such compressed DNNs lower computational requirements. Meanwhile, multiple recent studies show ways of constructing adversarial samples that make DNNs misclassify. We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs. We find that such samples remain transferable for both pruned and quantised models. For pruning, adversarial samples generated at high sparsities are marginally less transferable. For quantisation, we find that the transferability of adversarial samples is highly sensitive to integer precision.
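
The transferability experiment the abstract describes can be illustrated with a minimal sketch, assuming PyTorch; the architecture, the FGSM attack, the 90% sparsity level, and the dynamic int8 quantisation below are illustrative stand-ins, not the authors' exact setup. The idea is to craft adversarial samples against an uncompressed source model, then check how often pruned and quantised copies misclassify the same samples.

    # Minimal transferability sketch (illustrative; trained weights assumed).
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    def fgsm(model, x, y, eps=0.1):
        """Fast Gradient Sign Method: step x in the direction that increases the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    # Hypothetical uncompressed source model.
    source = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                           nn.Linear(128, 10))

    # Compressed copy 1: magnitude-prune 90% of each linear layer's weights.
    pruned = copy.deepcopy(source)
    for m in pruned.modules():
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=0.9)

    # Compressed copy 2: dynamic int8 quantisation of the linear layers.
    quantised = torch.ao.quantization.quantize_dynamic(
        copy.deepcopy(source), {nn.Linear}, dtype=torch.qint8)

    # Craft adversarial samples on the uncompressed model, then measure
    # how well each compressed model resists the same samples.
    x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
    x_adv = fgsm(source, x, y)
    with torch.no_grad():
        for name, m in [("source", source), ("pruned", pruned),
                        ("quantised", quantised)]:
            acc = (m(x_adv).argmax(1) == y).float().mean().item()
            print(f"{name}: accuracy on adversarial samples = {acc:.2f}")

Sweeping the pruning amount and the quantisation dtype in this sketch would mirror the paper's two axes of study: sparsity and integer precision.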