The perturbations in the study were not random; they had to be crafted.
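To give a concrete (if simplified) picture of what "crafted" means, here is a rough sketch of a gradient-based attack in the FGSM style. The cited paper used a different optimization, and the model, image, and epsilon here are stand-ins, but the idea is the same: each pixel is nudged in whichever direction most increases the classifier's loss, rather than in a random direction.

    # Sketch of a gradient-crafted perturbation (FGSM-style); the cited
    # paper used a different optimization, but the spirit is the same.
    import torch
    import torch.nn.functional as F

    def craft_perturbation(model, image, true_label, epsilon=0.007):
        """Nudge each pixel by at most epsilon in the direction that
        increases the model's loss -- crafted, not random noise."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Per-pixel step of +/- epsilon, chosen by the gradient's sign.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

Note that the largest change to any single pixel is epsilon, a tiny fraction of the intensity range, which is why the perturbed image looks unchanged to a human while the model's answer flips.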
To the GP: noise is often added to training datasets for exactly the reason you're suggesting. One of the novel things the cited paper discusses, however, is that even if you feed the adversarial perturbations back in as additional training data, there are still new ways to subtly perturb the inputs and get incorrect results.
Misclassification is a pretty fundamental consequence of dimensionality reduction, of course, but the surprise is how close those misclassifications are in input-space. This isn't mistaking a box for a square because it's viewed head-on; it's mistaking a bus for an ostrich because some of the pixels in the image changed to a slightly different shade of yellow.