Explaining and Harnessing
Adversarial Examples
Abstract
Several machine learning models, including neural networks, consistently
misclassify adversarial examples—inputs formed
by applying small but intentionally worst-case perturbations to examples from the dataset,
such that the perturbed input results in the model outputting an incorrect
answer with high confidence.
Early attempts at explaining this phenomenon focused on nonlinearity and overfitting.
We argue instead that the primary cause of neural networks’ vulnerability to adversarial
perturbation is their linear nature.
This explanation is supported by new quantitative results while
giving the first explanation of the most intriguing fact about
them: their generalization across architectures and training sets.
Moreover, this view yields a simple and fast method of generating adversarial examples.
Using this approach to provide examples for adversarial training, we reduce the test
set error of a maxout network on the MNIST dataset.
1 Introduction
Szegedy et al. (2014b) made an intriguing discovery: several machine learning models,
including state-of-the-art neural networks, are vulnerable to adversarial examples.
That is, these machine learning models misclassify examples that are only slightly different
from correctly classified examples drawn from the data distribution.
In many cases, a wide variety of models with different architectures trained
on different subsets of the training data misclassify the same adversarial example.
This suggests that adversarial examples expose
fundamental blind spots in our training algorithms.
The cause of these adversarial examples was a mystery, and speculative explanations have
suggested it is due to extreme nonlinearity of deep neural networks, perhaps combined
with insufficient model averaging and insufficient regularization of the purely supervised
learning problem. We show that these speculative hypotheses are unnecessary.
Linear behavior in high-dimensional spaces is sufficient to cause adversarial examples.
This view enables us to design a fast method of generating adversarial examples that
makes adversarial training practical. We show that adversarial training can provide
an additional regularization benefit beyond that provided by using dropout (Srivastava et al., 2014) alone.
Generic regularization strategies such as dropout, pretraining,
and model averaging do not confer a significant reduction in a model’s vulnerability to
adversarial examples, but changing to nonlinear model families such as RBF networks can do so.
Our explanation suggests a fundamental tension between designing models that are easy to train due
to their linearity and designing models that use nonlinear effects to resist adversarial
perturbation. In the long run, it may be possible to escape this trade-off by designing more
powerful optimization methods that can successfully train more nonlinear models.
2 Related work
Szegedy et al. (2014b) demonstrated a variety of intriguing properties of neural networks
and related models. Those most relevant to this paper include:

• Box-constrained L-BFGS can reliably find adversarial examples.
• On some datasets, such as ImageNet (Deng et al., 2009), the adversarial examples were so close to the original examples that the differences were indistinguishable to the human eye.
• The same adversarial example is often misclassified by a variety of classifiers with different architectures or trained on different subsets of the training data.
• Shallow softmax regression models are also vulnerable to adversarial examples.
• Training on adversarial examples can regularize the model; however, this was not practical at the time due to the need for expensive constrained optimization in the inner loop.
These results suggest that classifiers based on modern machine learning techniques, even
those that obtain excellent performance on the test set, are not learning the true underlying
concepts that determine the correct output label. Instead, these algorithms
have built a Potemkin village that works well on naturally occurring data, but is exposed as
a fake when one visits points in space that do not have high probability in the data distribution.
This is particularly disappointing because a popular approach in computer vision is to use
convolutional network features as a space where Euclidean distance approximates perceptual distance.
This resemblance is clearly flawed if images that have an immeasurably small perceptual distance
correspond to completely different classes in the network’s representation.
These results have often been interpreted as being a flaw in deep networks in particular, even
though linear classifiers have the same problem. We regard the knowledge of this flaw as an opportunity
to fix it. Indeed,
Gu & Rigazio (2014) and Chalupka et al. (2014)
have already begun the first steps toward designing models that resist adversarial perturbation,
though no model has yet successfully done so while maintaining state-of-the-art accuracy on
clean inputs.
3 The linear explanation of adversarial examples
We start by explaining the existence of adversarial examples for linear models.
In many problems, the precision of an individual input feature is limited. For example, digital images often use only 8 bits per pixel so they discard all information below $1/255$ of the dynamic range. Because the precision of the features is limited, it is not rational for the classifier to respond differently to an input ${\bm{x}}$ than to an adversarial input $\tilde{{\bm{x}}}={\bm{x}}+{\bm{\eta}}$ if every element of the perturbation ${\bm{\eta}}$ is smaller than the precision of the features. Formally, for problems with well-separated classes, we expect the classifier to assign the same class to ${\bm{x}}$ and $\tilde{{\bm{x}}}$ so long as $\|{\bm{\eta}}\|_{\infty}<{\epsilon}$, where ${\epsilon}$ is small enough to be discarded by the sensor or data storage apparatus associated with our problem.
Consider the dot product between a weight vector ${\bm{w}}$ and an adversarial example $\tilde{{\bm{x}}}$:
${\bm{w}}^{\top}\tilde{{\bm{x}}}={\bm{w}}^{\top}{\bm{x}}+{\bm{w}}^{\top}{\bm{\eta}}.$ 
The adversarial perturbation causes the activation to grow by ${\bm{w}}^{\top}{\bm{\eta}}$. We can maximize this increase subject to the max norm constraint on ${\bm{\eta}}$ by assigning ${\bm{\eta}}={\epsilon}\,{\text{sign}}({\bm{w}})$. If ${\bm{w}}$ has $n$ dimensions and the average magnitude of an element of the weight vector is $m$, then the activation will grow by ${\epsilon}mn$. Since $\|{\bm{\eta}}\|_{\infty}$ does not grow with the dimensionality of the problem but the change in activation caused by perturbation by ${\bm{\eta}}$ can grow linearly with $n$, for high dimensional problems we can make many infinitesimal changes to the input that add up to one large change to the output. We can think of this as a sort of “accidental steganography,” where a linear model is forced to attend exclusively to the signal that aligns most closely with its weights, even if multiple signals are present and other signals have much greater amplitude.
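This growth is easy to verify numerically. The following sketch (the dimension, the value of $\epsilon$, and the random weight vector are illustrative assumptions, not values from the paper) shows the activation change equaling $\epsilon\|{\bm{w}}\|_1$, which scales linearly with $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000            # input dimensionality (illustrative)
eps = 0.007           # per-feature perturbation, below typical sensor precision

w = rng.normal(size=n)            # weight vector of a hypothetical linear model
x = rng.normal(size=n)            # a clean input
eta = eps * np.sign(w)            # worst-case perturbation under the max norm

change = w @ (x + eta) - w @ x    # growth in the activation
# change equals eps * ||w||_1, roughly eps * m * n for average magnitude m
print(change, eps * np.abs(w).sum())
```

Doubling $n$ doubles the achievable activation change while leaving the max norm of the perturbation fixed.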
This explanation shows that a simple linear model can have adversarial examples if its input has sufficient dimensionality. Previous explanations for adversarial examples invoked hypothesized properties of neural networks, such as their supposed highly nonlinear nature. Our hypothesis based on linearity is simpler, and can also explain why softmax regression is vulnerable to adversarial examples.
4 Linear perturbation of nonlinear models
The linear view of adversarial examples suggests a fast way of generating them. We hypothesize that neural networks are too linear to resist linear adversarial perturbation. LSTMs (Hochreiter & Schmidhuber, 1997), ReLUs (Jarrett et al., 2009; Glorot et al., 2011), and maxout networks (Goodfellow et al., 2013c) are all intentionally designed to behave in very linear ways, so that they are easier to optimize. More nonlinear models such as sigmoid networks are carefully tuned to spend most of their time in the nonsaturating, more linear regime for the same reason. This linear behavior suggests that cheap, analytical perturbations of a linear model should also damage neural networks.
Let ${\bm{\theta}}$ be the parameters of a model, ${\bm{x}}$ the input to the model, $y$ the targets associated with ${\bm{x}}$ (for machine learning tasks that have targets) and $J({\bm{\theta}},{\bm{x}},y)$ be the cost used to train the neural network. We can linearize the cost function around the current value of ${\bm{\theta}}$, obtaining an optimal max-norm constrained perturbation of
${\bm{\eta}}={\epsilon}{\text{sign}}\left(\nabla_{\bm{x}}J({\bm{\theta}},{\bm{x}},y)\right).$ 
We refer to this as the “fast gradient sign method” of generating adversarial examples. Note that the required gradient can be computed efficiently using backpropagation.
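As a minimal sketch of the method for a shallow softmax classifier (the toy weights and input below are assumptions; the paper's experiments use maxout and convolutional networks), the perturbation is just the sign of the input-gradient of the cross-entropy cost, scaled by $\epsilon$:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """Fast gradient sign perturbation for softmax regression.

    J = -log p(y|x); the gradient of J with respect to x is
    W^T (p - onehot(y)), computable with one backward pass.
    """
    p = softmax(W @ x + b)
    p[y] -= 1.0                   # p - onehot(y)
    grad_x = W.T @ p
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 8)), np.zeros(3)
x, y = rng.normal(size=8), 0
x_adv = fgsm(x, y, W, b, eps=0.25)
```

Each element of the perturbation has magnitude at most $\epsilon$, so the max-norm constraint holds by construction; because this cost is convex in the input, the perturbation cannot decrease the loss.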
We find that this method reliably causes a wide variety of models to misclassify their input. See Fig. 1 for a demonstration on ImageNet. We find that using ${\epsilon}=.25$, we cause a shallow softmax classifier to have an error rate of 99.9% with an average confidence of 79.3% on the MNIST (LeCun et al., 1998) test set.¹ In the same setting, a maxout network misclassifies 89.4% of our adversarial examples with an average confidence of 97.6%. Similarly, using ${\epsilon}=.1$, we obtain an error rate of 87.15% and an average probability of 96.6% assigned to the incorrect labels when using a convolutional maxout network on a preprocessed version of the CIFAR-10 (Krizhevsky & Hinton, 2009) test set.² Other simple methods of generating adversarial examples are possible. For example, we also found that rotating ${\bm{x}}$ by a small angle in the direction of the gradient reliably produces adversarial examples.

¹ This is using MNIST pixel values in the interval [0, 1]. MNIST data does contain values other than 0 or 1, but the images are essentially binary. Each pixel roughly encodes “ink” or “no ink”. This justifies expecting the classifier to be able to handle perturbations within a range of width 0.5, and indeed human observers can read such images without difficulty.

² See https://github.com/lisalab/pylearn2/tree/master/pylearn2/scripts/papers/maxout for the preprocessing code, which yields a standard deviation of roughly 0.5.
The fact that these simple, cheap algorithms are able to generate misclassified examples serves as evidence in favor of our interpretation of adversarial examples as a result of linearity. The algorithms are also useful as a way of speeding up adversarial training or even just analysis of trained networks.
Figure 1: A demonstration of the fast gradient sign method. An input ${\bm{x}}$ classified as “panda” with 57.7% confidence, plus $.007\times{\text{sign}}(\nabla_{\bm{x}}J({\bm{\theta}},{\bm{x}},y))$ (the perturbation itself is classified as “nematode” with 8.2% confidence), yields ${\bm{x}}+\epsilon\,{\text{sign}}(\nabla_{\bm{x}}J({\bm{\theta}},{\bm{x}},y))$, classified as “gibbon” with 99.3% confidence.
5 Adversarial training of linear models versus weight decay
Perhaps the simplest possible model we can consider is logistic regression. In this case, the fast gradient sign method is exact. We can use this case to gain some intuition for how adversarial examples are generated in a simple setting. See Fig. 2 for instructive images.
If we train a single model to recognize labels $y\in\{-1,1\}$ with $P(y=1)=\sigma\left({\bm{w}}^{\top}{\bm{x}}+b\right)$ where $\sigma(z)$ is the logistic sigmoid function, then training consists of gradient descent on
$\mathbb{E}_{{\bm{x}},y\sim p_{\text{data}}}\zeta(-y({\bm{w}}^{\top}{\bm{x}}+b))$ 
where $\zeta(z)=\log\left(1+\exp(z)\right)$ is the softplus function. We can derive a simple analytical form for training on the worst-case adversarial perturbation of ${\bm{x}}$ rather than ${\bm{x}}$ itself, based on gradient sign perturbation. Note that the sign of the gradient is just $-{\text{sign}}({\bm{w}})$, and that ${\bm{w}}^{\top}{\text{sign}}({\bm{w}})=\|{\bm{w}}\|_{1}$. The adversarial version of logistic regression is therefore to minimize
$\mathbb{E}_{{\bm{x}},y\sim p_{\text{data}}}\zeta(y({\epsilon}\|{\bm{w}}\|_{1}-{\bm{w}}^{\top}{\bm{x}}-b)).$ 
This is somewhat similar to $L^{1}$ regularization. However, there are some important differences. Most significantly, the $L^{1}$ penalty is subtracted off the model’s activation during training, rather than added to the training cost. This means that the penalty can eventually start to disappear if the model learns to make confident enough predictions that $\zeta$ saturates. This is not guaranteed to happen—in the underfitting regime, adversarial training will simply worsen underfitting. We can thus view $L^{1}$ weight decay as being more “worst case” than adversarial training, because it fails to deactivate in the case of good margin.
If we move beyond logistic regression to multiclass softmax regression, $L^{1}$ weight decay becomes even more pessimistic, because it treats each of the softmax’s outputs as independently perturbable, when in fact it is usually not possible to find a single ${\bm{\eta}}$ that aligns with all of the class’s weight vectors. Weight decay overestimates the damage achievable with perturbation even more in the case of a deep network with multiple hidden units. Because $L^{1}$ weight decay overestimates the amount of damage an adversary can do, it is necessary to use a smaller $L^{1}$ weight decay coefficient than the ${\epsilon}$ associated with the precision of our features. When training maxout networks on MNIST, we obtained good results using adversarial training with ${\epsilon}=.25$. When applying $L^{1}$ weight decay to the first layer, we found that even a coefficient of .0025 was too large, and caused the model to get stuck with over 5% error on the training set. Smaller weight decay coefficients permitted successful training but conferred no regularization benefit.
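The adversarial version of logistic regression derived above can be sketched in a few lines. It is written here in the margin form $\zeta({\epsilon}\|{\bm{w}}\|_1 - y({\bm{w}}^\top{\bm{x}}+b))$, which agrees with the displayed equation for $y=1$; the data below are illustrative assumptions:

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)   # numerically stable log(1 + exp(z))

def adv_logistic_loss(w, b, X, y, eps):
    """Adversarial logistic loss: each margin is shrunk by eps * ||w||_1,
    the worst-case effect of a gradient-sign perturbation of the input."""
    margins = y * (X @ w + b)     # labels y in {-1, +1}
    return softplus(eps * np.abs(w).sum() - margins).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
y = np.where(X[:, 0] > 0, 1.0, -1.0)              # toy labels
w, b = rng.normal(size=8), 0.0

clean = adv_logistic_loss(w, b, X, y, eps=0.0)    # ordinary logistic loss
robust = adv_logistic_loss(w, b, X, y, eps=0.25)  # worst-case version
```

Unlike an $L^1$ penalty added to the cost, the $\epsilon\|{\bm{w}}\|_1$ term sits inside the softplus, so its effect fades once the margins are large enough for $\zeta$ to saturate.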
6 Adversarial training of deep networks
The criticism of deep networks as vulnerable to adversarial examples is somewhat misguided, because unlike shallow linear models, deep networks are at least able to represent functions that resist adversarial perturbation. The universal approximator theorem (Hornik et al., 1989) guarantees that a neural network with at least one hidden layer can represent any function to an arbitrary degree of accuracy so long as its hidden layer is permitted to have enough units. Shallow linear models are not able to become constant near training points while also assigning different outputs to different training points.
Of course, the universal approximator theorem does not say anything about whether a training algorithm will be able to discover a function with all of the desired properties. Obviously, standard supervised training does not specify that the chosen function be resistant to adversarial examples. This must be encoded in the training procedure somehow.
Szegedy et al. (2014b) showed that by training on a mixture of adversarial and clean examples, a neural network could be regularized somewhat. Training on adversarial examples is somewhat different from other data augmentation schemes; usually, one augments the data with transformations such as translations that are expected to actually occur in the test set. This form of data augmentation instead uses inputs that are unlikely to occur naturally but that expose flaws in the ways that the model conceptualizes its decision function. At the time, this procedure was never demonstrated to improve beyond dropout on a state-of-the-art benchmark. However, this was partially because it was difficult to experiment extensively with expensive adversarial examples based on L-BFGS.
We found that training with an adversarial objective function based on the fast gradient sign method was an effective regularizer:
$\tilde{J}({\bm{\theta}},{\bm{x}},y)=\alpha J({\bm{\theta}},{\bm{x}},y)+(1-\alpha)J({\bm{\theta}},{\bm{x}}+{\epsilon}{\text{sign}}\left(\nabla_{\bm{x}}J({\bm{\theta}},{\bm{x}},y)\right),y).$ 
In all of our experiments, we used $\alpha=0.5$. Other values may work better; our initial guess of this hyperparameter worked well enough that we did not feel the need to explore more. This approach means that we continually update our supply of adversarial examples, to make them resist the current version of the model. Using this approach to train a maxout network that was also regularized with dropout, we were able to reduce the error rate from 0.94% without adversarial training to 0.84% with adversarial training.
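A hedged sketch of this objective, using logistic regression as a stand-in for the maxout network so that the example stays self-contained (for logistic regression the sign of the input-gradient is known in closed form, so the adversarial examples are regenerated from the current parameters on every evaluation):

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)

def adversarial_objective(w, b, X, y, eps, alpha=0.5):
    """alpha * J(x) + (1 - alpha) * J(x + eps * sign(grad_x J)).

    For logistic regression with labels y in {-1, +1}, the sign of the
    input-gradient of J is -y * sign(w), so the adversarial inputs are
    recomputed against the current parameters each time this is called.
    """
    clean = softplus(-y * (X @ w + b))
    X_adv = X + eps * (-y[:, None] * np.sign(w)[None, :])
    adv = softplus(-y * (X_adv @ w + b))
    return (alpha * clean + (1 - alpha) * adv).mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))
y = np.where(X @ np.ones(8) > 0, 1.0, -1.0)
w, b = np.zeros(8), 0.0
J = adversarial_objective(w, b, X, y, eps=0.25)
```

Since the adversarial term only shrinks margins, the mixed objective is never smaller than the clean one, and it reduces to the clean loss at $\epsilon=0$.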
We observed that we were not reaching zero error rate on adversarial examples on the training set. We fixed this problem by making two changes. First, we made the model larger, using 1600 units per layer rather than the 240 used by the original maxout network for this problem. Without adversarial training, this causes the model to overfit slightly, and get an error rate of 1.14% on the test set. With adversarial training, we found that the validation set error leveled off over time, and made very slow progress. The original maxout result uses early stopping, and terminates learning after the validation set error rate has not decreased for 100 epochs. We found that while the validation set error was very flat, the adversarial validation set error was not. We therefore used early stopping on the adversarial validation set error. Using this criterion to choose the number of epochs to train for, we then retrained on all 60,000 examples. Five different training runs, using different seeds for the random number generators that select minibatches of training examples, initialize model weights, and generate dropout masks, resulted in four trials with an error rate of 0.77% on the test set and one trial with an error rate of 0.83%. The average of 0.782% is the best result reported on the permutation-invariant version of MNIST, though statistically indistinguishable from the 0.79% obtained by fine-tuning DBMs with dropout (Srivastava et al., 2014).
The model also became somewhat resistant to adversarial examples. Recall that without adversarial training, this same kind of model had an error rate of 89.4% on adversarial examples based on the fast gradient sign method. With adversarial training, the error rate fell to 17.9%. Adversarial examples are transferable between the two models but with the adversarially trained model showing greater robustness. Adversarial examples generated via the original model yield an error rate of 19.6% on the adversarially trained model, while adversarial examples generated via the new model yield an error rate of 40.9% on the original model. When the adversarially trained model does misclassify an adversarial example, its predictions are unfortunately still highly confident. The average confidence on a misclassified example was 81.4%. We also found that the weights of the learned model changed significantly, with the weights of the adversarially trained model being significantly more localized and interpretable (see Fig. 3).
The adversarial training procedure can be seen as minimizing the worst case error when the data is perturbed by an adversary. That can be interpreted as learning to play an adversarial game, or as minimizing an upper bound on the expected cost over noisy samples with noise from $U(-{\epsilon},{\epsilon})$ added to the inputs. Adversarial training can also be seen as a form of active learning, where the model is able to request labels on new points. In this case the human labeler is replaced with a heuristic labeler that copies labels from nearby points.
We could also regularize the model to be insensitive to changes in its features that are smaller than the ${\epsilon}$ precision simply by training on all points within the ${\epsilon}$ max norm box, or sampling many points within this box. This corresponds to adding noise with max norm ${\epsilon}$ during training. However, noise with zero mean and zero covariance is very inefficient at preventing adversarial examples. The expected dot product between any reference vector and such a noise vector is zero. This means that in many cases the noise will have essentially no effect rather than yielding a more difficult input. In fact, in many cases the noise will actually result in a lower objective function value. We can think of adversarial training as doing hard example mining among the set of noisy inputs, in order to train more efficiently by considering only those noisy points that strongly resist classification. As control experiments, we trained a maxout network with noise based on randomly adding $\pm{\epsilon}$ to each pixel, or adding noise in $U(-{\epsilon},{\epsilon})$ to each pixel. These obtained an error rate of 86.2% with confidence 97.3% and an error rate of 90.4% with a confidence of 97.8% respectively on fast gradient sign adversarial examples.
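The orthogonality argument behind this inefficiency is easy to check numerically: a zero-mean noise vector has near-zero dot product with any fixed weight vector, while the adversarial sign perturbation with the same max norm aligns with it almost perfectly (the sizes and the random weight vector are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
eps = 0.25
w = rng.normal(size=n)                    # reference (weight) vector

noise = rng.uniform(-eps, eps, size=n)    # zero-mean noise, max norm <= eps
worst = eps * np.sign(w)                  # adversarial choice, max norm = eps

# The noise is nearly orthogonal to w (dot product on the sqrt(n) scale);
# the adversarial perturbation achieves the maximum eps * ||w||_1 (n scale).
print(abs(w @ noise), w @ worst)
```

The gap between the two dot products widens as $n$ grows, which is why random noise at the same max norm is a far weaker regularizer than the adversarial direction.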
Because the derivative of the sign function is zero or undefined everywhere, gradient descent on the adversarial objective function based on the fast gradient sign method does not allow the model to anticipate how the adversary will react to changes in the parameters. If we instead use adversarial examples based on small rotations or on addition of the scaled gradient, then the perturbation process is itself differentiable and the learning can take the reaction of the adversary into account. However, we did not find nearly as powerful a regularizing result from this process, perhaps because these kinds of adversarial examples are not as difficult to solve.
One natural question is whether it is better to perturb the input or the hidden layers or both. Here the results are inconsistent. Szegedy et al. (2014b) reported that adversarial perturbations yield the best regularization when applied to the hidden layers. That result was obtained on a sigmoidal network. In our experiments with the fast gradient sign method, we find that networks with hidden units whose activations are unbounded simply respond by making their hidden unit activations very large, so it is usually better to just perturb the original input. On saturating models such as the Rust model we found that perturbation of the input performed comparably to perturbation of the hidden layers. Perturbations based on rotating the hidden layers solve the problem of unbounded activations growing to make additive perturbations smaller by comparison. We were able to successfully train maxout networks with rotational perturbations of the hidden layers. However, this did not yield nearly as strong of a regularizing effect as additive perturbation of the input layer. Our view of adversarial training is that it is only clearly useful when the model has the capacity to learn to resist adversarial examples. This is only clearly the case when a universal approximator theorem applies. Because the last layer of a neural network, the linear-sigmoid or linear-softmax layer, is not a universal approximator of functions of the final hidden layer, this suggests that one is likely to encounter problems with underfitting when applying adversarial perturbations to the final hidden layer. We indeed found this effect. Our best results with training using perturbations of hidden layers never involved perturbations of the final hidden layer.
7 Different kinds of model capacity
One reason that the existence of adversarial examples can seem counterintuitive is that most of us have poor intuitions for high dimensional spaces. We live in three dimensions, so we are not used to small effects in hundreds of dimensions adding up to create a large effect. There is another way that our intuitions serve us poorly. Many people think of models with low capacity as being unable to make many different confident predictions. This is not correct. Some models with low capacity do exhibit this behavior. For example, shallow RBF networks with
$p(y=1\mid{\bm{x}})=\exp\left(-({\bm{x}}-\mu)^{\top}{\bm{\beta}}({\bm{x}}-\mu)\right)$ 
are only able to confidently predict that the positive class is present in the vicinity of $\mu$. Elsewhere, they default to predicting the class is absent, or have low-confidence predictions.
RBF networks are naturally immune to adversarial examples, in the sense that they have low confidence when they are fooled. A shallow RBF network with no hidden layers gets an error rate of 55.4% on MNIST using adversarial examples generated with the fast gradient sign method and ${\epsilon}=.25$. However, its confidence on mistaken examples is only $1.2\%$. Its average confidence on clean test examples is $60.6$%. We can’t expect a model with such low capacity to get the right answer at all points of space, but it does correctly respond by reducing its confidence considerably on points it does not “understand.”
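This fall-off in confidence is a direct consequence of the RBF form; a small numerical illustration (the choices of $\mu$, ${\bm{\beta}}$, and the probe points are arbitrary assumptions):

```python
import numpy as np

def rbf_confidence(x, mu, beta):
    """p(y=1|x) = exp(-(x - mu)^T beta (x - mu)) for one shallow RBF unit."""
    d = x - mu
    return np.exp(-d @ beta @ d)

n = 100
mu = np.zeros(n)
beta = 0.5 * np.eye(n)                 # positive definite precision matrix

near = mu + 0.01 * np.ones(n)          # a point close to the template
far = mu + 0.25 * np.ones(n)           # an epsilon = .25 perturbation away

p_near = rbf_confidence(near, mu, beta)   # stays close to 1
p_far = rbf_confidence(far, mu, beta)     # collapses toward 0
```

Because the exponent sums squared deviations over all $n$ dimensions, even a small per-feature shift drives the confidence toward zero rather than toward a confident wrong answer.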
RBF units are unfortunately not invariant to any significant transformations so they cannot generalize very well. We can view linear units and RBF units as different points on a precision-recall trade-off curve. Linear units achieve high recall by responding to every input in a certain direction, but may have low precision due to responding too strongly in unfamiliar situations. RBF units achieve high precision by responding only to a specific point in space, but in doing so sacrifice recall. Motivated by this idea, we decided to explore a variety of models involving quadratic units, including deep RBF networks. We found this to be a difficult task: every model with sufficient quadratic inhibition to resist adversarial perturbation obtained high training set error when trained with SGD.
8 Why do adversarial examples generalize?
An intriguing aspect of adversarial examples is that an example generated for one model is often misclassified by other models, even when they have different architectures or were trained on disjoint training sets. Moreover, when these different models misclassify an adversarial example, they often agree with each other on its class. Explanations based on extreme nonlinearity and overfitting cannot readily account for this behavior: why should multiple extremely nonlinear models with excess capacity consistently label out-of-distribution points in the same way? This behavior is especially surprising from the view of the hypothesis that adversarial examples finely tile space like the rational numbers among the reals, because in this view adversarial examples are common but occur only at very precise locations.
Under the linear view, adversarial examples occur in broad subspaces. The direction ${\bm{\eta}}$ need only have positive dot product with the gradient of the cost function, and ${\epsilon}$ need only be large enough. Fig. 4 demonstrates this phenomenon. By tracing out different values of ${\epsilon}$ we see that adversarial examples occur in contiguous regions of the 1D subspace defined by the fast gradient sign method, not in fine pockets. This explains why adversarial examples are abundant and why an example misclassified by one classifier has a fairly high prior probability of being misclassified by another classifier.
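This can be illustrated for a linear model, where the margin along the fast-gradient-sign direction decreases linearly in $\epsilon$: once it crosses zero, every larger $\epsilon$ also yields a misclassification, so the adversarial region is a contiguous half-line rather than a fine pocket. The weights and input below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 784
w = rng.normal(size=n) / np.sqrt(n)    # synthetic linear classifier
x = rng.normal(size=n)
y = 1.0 if w @ x > 0 else -1.0         # label x by the model's own decision

direction = -y * np.sign(w)            # fast gradient sign direction for (x, y)
eps_grid = np.linspace(0.0, 0.5, 51)
margins = np.array([y * (w @ (x + e * direction)) for e in eps_grid])

# margin(eps) = y * w @ x - eps * ||w||_1: linear in eps, so the set of
# eps values that flip the classification is an interval, not fine pockets.
misclassified = margins < 0
```

Sweeping $\epsilon$ traces out exactly the kind of 1-D subspace described above: the example is correct below a threshold and misclassified everywhere beyond it.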
To explain why multiple classifiers assign the same class to adversarial examples, we hypothesize that neural networks trained with current methodologies all resemble the linear classifier learned on the same training set. This reference classifier is able to learn approximately the same classification weights when trained on different subsets of the training set, simply because machine learning algorithms are able to generalize. The stability of the underlying classification weights in turn results in the stability of adversarial examples.
To test this hypothesis, we generated adversarial examples on a deep maxout network and classified these examples using a shallow softmax network and a shallow RBF network. On examples that were misclassified by the maxout network, the RBF network predicted the maxout network’s class assignment only 16.0% of the time, while the softmax classifier predicted the maxout network’s class correctly 54.6% of the time. These numbers are largely driven by the differing error rates of the different models, though. If we restrict our attention to cases where both models being compared make a mistake, then softmax regression predicts maxout’s class 84.6% of the time, while the RBF network is able to predict maxout’s class only 54.3% of the time. For comparison, the RBF network can predict softmax regression’s class 53.6% of the time, so it does have a strong linear component to its own behavior. Our hypothesis does not explain all of the maxout network’s mistakes or all of the mistakes that generalize across models, but clearly a significant proportion of them are consistent with linear behavior being a major cause of cross-model generalization.
9 Alternative hypotheses
We now consider and refute some alternative hypotheses for the existence of adversarial examples. First, one hypothesis is that generative training could provide more constraint on the training process, or cause the model to learn how to distinguish “real” from “fake” data and be confident only on “real” data. The MP-DBM (Goodfellow et al., 2013a) provides a good model to test this hypothesis. Its inference procedure gets good classification accuracy (a 0.88% error rate) on MNIST. This inference procedure is differentiable. Other generative models either have non-differentiable inference procedures, making it harder to compute adversarial examples, or require an additional non-generative discriminator model to get good classification accuracy on MNIST. In the case of the MP-DBM, we can be sure that the generative model itself is responding to adversarial examples, rather than the non-generative classifier model on top. We find that the model is vulnerable to adversarial examples. With an $\epsilon$ of 0.25, we find an error rate of 97.5% on adversarial examples generated from the MNIST test set. It remains possible that some other form of generative training could confer resistance, but clearly the mere fact of being generative is not alone sufficient.
Another hypothesis about why adversarial examples exist is that individual models have strange quirks but averaging over many models can cause adversarial examples to wash out. To test this hypothesis, we trained an ensemble of twelve maxout networks on MNIST. Each network was trained using a different seed for the random number generator used to initialize the weights, generate dropout masks, and select minibatches of data for stochastic gradient descent. The ensemble gets an error rate of 91.1% on adversarial examples designed to perturb the entire ensemble with $\epsilon=.25$. If we instead use adversarial examples designed to perturb only one member of the ensemble, the error rate falls to 87.9%. Ensembling provides only limited resistance to adversarial perturbation.
10 Summary and discussion
As a summary, this paper has made the following observations:

• Adversarial examples can be explained as a property of high-dimensional dot products. They are a result of models being too linear, rather than too nonlinear.
• The generalization of adversarial examples across different models can be explained as a result of adversarial perturbations being highly aligned with the weight vectors of a model, and of different models learning similar functions when trained to perform the same task.
• The direction of perturbation, rather than the specific point in space, matters most. Space is not full of pockets of adversarial examples that finely tile the reals like the rational numbers.
• Because it is the direction that matters most, adversarial perturbations generalize across different clean examples.
• We have introduced a family of fast methods for generating adversarial examples.
• We have demonstrated that adversarial training can result in regularization, indeed more regularization than dropout provides.
• We have run control experiments that failed to reproduce this effect with simpler but less efficient regularizers, including $L^{1}$ weight decay and adding noise.
• Models that are easy to optimize are easy to perturb.
• Linear models lack the capacity to resist adversarial perturbation; only structures with a hidden layer (where the universal approximator theorem applies) should be trained to resist adversarial perturbation.
• RBF networks are resistant to adversarial examples.
• Models trained to model the input distribution are not resistant to adversarial examples.
• Ensembles are not resistant to adversarial examples.

Some further observations concerning rubbish class examples are presented in the appendix:

• Rubbish class examples are ubiquitous and easily generated.
• Shallow linear models are not resistant to rubbish class examples.
• RBF networks are resistant to rubbish class examples.
Gradient-based optimization is the workhorse of modern AI. Using a network that has been designed to be sufficiently linear (whether it is a ReLU or maxout network, an LSTM, or a sigmoid network that has been carefully configured not to saturate too much), we are able to fit most problems we care about, at least on the training set. The existence of adversarial examples suggests that being able to explain the training data, or even being able to correctly label the test data, does not imply that our models truly understand the tasks we have asked them to perform. Instead, their linear responses are overly confident at points that do not occur in the data distribution, and these confident predictions are often highly incorrect. This work has shown that we can partially correct for this problem by explicitly identifying problematic points and correcting the model at each of them. However, one may also conclude that the model families we use are intrinsically flawed: ease of optimization has come at the cost of models that are easily misled. This motivates the development of optimization procedures that are able to train models whose behavior is more locally stable.
Acknowledgments
We would like to thank Geoffrey Hinton and Ilya Sutskever for helpful discussions. We would also like to thank Jeff Dean, Greg Corrado, and Oriol Vinyals for their feedback on drafts of this article. We would like to thank the developers of Theano (Bergstra et al., 2010; Bastien et al., 2012), Pylearn2 (Goodfellow et al., 2013b), and DistBelief (Dean et al., 2012).
References
 Bastien et al. (2012) Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
 Bergstra et al. (2010) Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
 Chalupka et al. (2014) Chalupka, K., Perona, P., and Eberhardt, F. Visual Causal Feature Learning. arXiv e-prints, December 2014.
 Dean et al. (2012) Dean, Jeffrey, Corrado, Greg S., Monga, Rajat, Chen, Kai, Devin, Matthieu, Le, Quoc V., Mao, Mark Z., Ranzato, Marc’Aurelio, Senior, Andrew, Tucker, Paul, Yang, Ke, and Ng, Andrew Y. Large scale distributed deep networks. In NIPS, 2012.
 Deng et al. (2009) Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
 Glorot et al. (2011) Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In JMLR W&CP: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011), April 2011.
 Goodfellow et al. (2013a) Goodfellow, Ian J., Mirza, Mehdi, Courville, Aaron, and Bengio, Yoshua. Multiprediction deep Boltzmann machines. In Neural Information Processing Systems, December 2013a.
 Goodfellow et al. (2013b) Goodfellow, Ian J., Warde-Farley, David, Lamblin, Pascal, Dumoulin, Vincent, Mirza, Mehdi, Pascanu, Razvan, Bergstra, James, Bastien, Frédéric, and Bengio, Yoshua. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013b.
 Goodfellow et al. (2013c) Goodfellow, Ian J., Warde-Farley, David, Mirza, Mehdi, Courville, Aaron, and Bengio, Yoshua. Maxout networks. In Dasgupta, Sanjoy and McAllester, David (eds.), International Conference on Machine Learning, pp. 1319–1327, 2013c.
 Gu & Rigazio (2014) Gu, Shixiang and Rigazio, Luca. Towards deep neural network architectures robust to adversarial examples. In NIPS Workshop on Deep Learning and Representation Learning, 2014.
 Hochreiter & Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 Hornik et al. (1989) Hornik, Kurt, Stinchcombe, Maxwell, and White, Halbert. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359–366, 1989.
 Jarrett et al. (2009) Jarrett, Kevin, Kavukcuoglu, Koray, Ranzato, Marc’Aurelio, and LeCun, Yann. What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision (ICCV’09), pp. 2146–2153. IEEE, 2009.
 Krizhevsky & Hinton (2009) Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
 Nguyen et al. (2014) Nguyen, A., Yosinski, J., and Clune, J. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv e-prints, December 2014.
 Rust et al. (2005) Rust, Nicole, Schwartz, Odelia, Movshon, J. Anthony, and Simoncelli, Eero. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945–956, 2005.
 Srivastava et al. (2014) Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
 Szegedy et al. (2014a) Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. Technical report, arXiv preprint arXiv:1409.4842, 2014a.
 Szegedy et al. (2014b) Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian J., and Fergus, Rob. Intriguing properties of neural networks. ICLR, abs/1312.6199, 2014b. URL http://arxiv.org/abs/1312.6199.
Appendix A Rubbish class examples
A concept related to adversarial examples is that of examples drawn from a “rubbish class.” These are degenerate inputs that a human would classify as not belonging to any of the categories in the training set. If we call the classes in the training set “the positive classes,” then we want to be careful to avoid false positives on rubbish inputs; that is, we do not want to classify a degenerate input as being something real. In the case of separate binary classifiers for each class, we want every classifier to output a near-zero probability of its class being present, and in the case of a multinoulli distribution over only the positive classes, we would prefer that the classifier output a high-entropy (nearly uniform) distribution over the classes. The traditional approach to reducing vulnerability to rubbish inputs is to introduce an extra, constant output to the model representing the rubbish class (LeCun et al., 1998). Nguyen et al. (2014) recently repopularized the concept of the rubbish class in the context of computer vision under the name “fooling images.” As with adversarial examples, there has been a misconception that rubbish class false positives are hard to find, and that they are primarily a problem faced by deep networks.
Our explanation of adversarial examples as the result of linearity and high dimensional spaces also applies to analyzing the behavior of the model on rubbish class examples. Linear models produce more extreme predictions at points that are far from the training data than at points that are near the training data. In order to find high confidence rubbish false positives for such a model, we need only generate a point that is far from the data, with larger norms yielding more confidence. RBF networks, which are not able to confidently predict the presence of any class far from the training data, are not fooled by this phenomenon.
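The norm–confidence relationship follows directly from linearity: scaling an input scales the logits, which sharpens the softmax. A sketch with a hypothetical random linear classifier (the dimensions, weights, and probe point below are invented for illustration, not taken from any experiment in this paper):

```python
import math
import random

random.seed(0)
D, K = 20, 3  # toy input dimension and class count (not MNIST-sized)

# Hypothetical random linear classifier weights, for illustration only.
W = [[random.gauss(0.0, 1.0) for _ in range(D)] for _ in range(K)]

def softmax_conf(x):
    """Maximum softmax probability of the linear model with logits W x."""
    logits = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return max(e / s for e in exps)

# A "rubbish" point drawn far from any training data; scaling it up
# scales every logit, so the winning class's probability rises.
x = [random.gauss(0.0, 1.0) for _ in range(D)]
confs = [softmax_conf([xi * scale for xi in x]) for scale in (1, 5, 25)]
print(confs)  # confidence increases monotonically with the input norm
```

An RBF network would instead drive all class responses toward zero as the probe point moves away from the training data, which is why it is not fooled by large-norm rubbish inputs.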
We generated 10,000 samples from $\mathcal{N}(0,{\bm{I}}_{784})$ and fed them into various classifiers trained on the MNIST dataset. In this context, we consider assigning a probability greater than 0.5 to any class to be an error. A naively trained maxout network with a softmax layer on top had an error rate of 98.35% on Gaussian rubbish examples, with an average confidence of 92.8% on its mistakes. Changing the top layer to independent sigmoids dropped the error rate to 68%, with an average confidence on mistakes of 87.9%. On CIFAR-10, using 1,000 samples from $\mathcal{N}(0,{\bm{I}}_{3072})$, a convolutional maxout net obtains an error rate of 93.4%, with an average confidence of 84.4%.
These experiments suggest that the optimization algorithms employed by Nguyen et al. (2014) are overkill (or perhaps only needed on ImageNet), and that the rich geometric structure in their fooling images is due to the priors encoded in their search procedures, rather than to those structures being uniquely able to cause false positives.
Though Nguyen et al. (2014) focused their attention on deep networks, shallow linear models have the same problem. A softmax regression model has an error rate of 59.8% on the rubbish examples, with an average confidence on mistakes of 70.8%. If we instead use an RBF network, which does not behave like a linear function, we find an error rate of 0%. (Note that when the error rate is zero, the average confidence on a mistake is undefined.)
Nguyen et al. (2014) focused on the problem of generating fooling images for a specific class, which is a harder problem than simply finding points that the network confidently classifies as belonging to any one class despite being defective. The above methods on MNIST and CIFAR-10 tend to have a very skewed distribution over classes. On MNIST, 45.3% of a naively trained maxout network’s false positives were classified as 5s, and none were classified as 8s. Likewise, on CIFAR-10, 49.7% of the convolutional network’s false positives were classified as frogs, and none were classified as airplanes, automobiles, horses, ships, or trucks.
To solve the problem introduced by Nguyen et al. (2014) of generating a fooling image for a particular class, we propose adding ${\epsilon}\nabla_{\bm{x}}p(y=i\mid{\bm{x}})$ to a Gaussian sample ${\bm{x}}$ as a fast method of generating a fooling image classified as class $i$. If we repeat this sampling process until it succeeds, we obtain a randomized algorithm with variable runtime. On CIFAR-10, we found that one sampling step had a 100% success rate for frogs and trucks, and that the hardest class was airplanes, with a success rate of 24.7% per sampling step. Averaged over all ten classes, the method has a per-step success rate of 75.3%. We can thus generate any desired class with a handful of samples and no special priors, rather than tens of thousands of generations of evolution. To confirm that the resulting examples are indeed fooling images, and not images of real classes rendered by the gradient sign method, see Fig. 5. The success rate of this method, in terms of generating members of class $i$, may degrade for datasets with more classes, since the risk of inadvertently increasing the activation of a different class $j$ grows in that case.
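This procedure can be sketched on a toy linear softmax classifier. For simplicity the sketch below uses orthogonal unit class weight vectors, an illustrative assumption that makes the toy converge reliably; the experiments above used trained CIFAR-10 networks instead. It repeatedly adds $\epsilon\nabla_{\bm{x}}p(y=i\mid{\bm{x}})$ to a Gaussian sample until the target class's probability exceeds 0.5:

```python
import math
import random

random.seed(1)
K = 4  # number of classes in the toy problem (input dimension = K here)

# Toy linear softmax classifier whose class weight vectors are the
# standard basis vectors (hypothetical; logit k is then just x[k]).
W = [[1.0 if d == k else 0.0 for d in range(K)] for k in range(K)]

def softmax(x):
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fool(target, eps=1.0, max_steps=5000):
    """Repeatedly add eps * grad_x p(y=target|x) to a Gaussian sample
    until the target class is predicted with probability > 0.5."""
    x = [random.gauss(0.0, 1.0) for _ in range(K)]
    for _ in range(max_steps):
        p = softmax(x)
        if p[target] > 0.5:
            break
        # For a linear softmax model, grad_x p_t = p_t * (w_t - sum_j p_j w_j).
        avg = [sum(p[j] * W[j][d] for j in range(K)) for d in range(K)]
        x = [xi + eps * p[target] * (W[target][d] - avg[d])
             for d, xi in enumerate(x)]
    return x, softmax(x)[target]

x_fool, conf = fool(target=2)
print(conf)  # the degenerate input is now confidently classified as class 2
```

With orthogonal class weights, each step strictly raises the target logit and lowers the rest, so the target probability increases monotonically; with trained, non-orthogonal weights the same ascent works but competing classes can slow it, matching the skewed per-class success rates reported above.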
We found that we were able to train a maxout network to have a zero percent error rate on Gaussian rubbish examples (it was still vulnerable to rubbish examples generated by applying a fast gradient sign step to a Gaussian sample) with no negative impact on its ability to classify clean examples. Unfortunately, unlike training on adversarial examples, this did not result in any significant reduction of the model’s test set error rate.
In conclusion, it appears that a randomly selected input to deep or shallow models built from linear parts is overwhelmingly likely to be processed incorrectly, and that these models behave reasonably only on a very thin manifold encompassing the training data.