Using Local Convolutional Units to Defend against Adversarial Examples

Publication at the Faculty of Mathematics and Physics |
2019

Abstract

Deep neural networks are known to be sensitive to adversarial examples: inputs crafted so that they appear similar to clean inputs when viewed by humans, yet the network classifies them with high confidence as belonging to another class. In this paper, we study a new type of neural network unit that is similar to convolutional units but has more local behavior. The unit is based on the Gaussian radial basis function.
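
The abstract does not include an implementation, but a unit of this kind can be sketched as a convolution whose dot product is replaced by a Gaussian RBF of the distance between each local patch and a learned center. The following minimal PyTorch sketch assumes one learned center and one learned width (log_beta) per output channel; these parameter choices and the initialization are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RBFolutional(nn.Module):
    # Convolution-like unit: each output is a Gaussian RBF of the distance
    # between a local patch and a learned center, not a dot product with a
    # filter. Responses are near 1 only for patches close to the center,
    # which gives the unit its local behavior.
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        patch_dim = in_channels * kernel_size * kernel_size
        # One learned center per output channel, shaped like a flattened conv filter.
        self.centers = nn.Parameter(0.1 * torch.randn(out_channels, patch_dim))
        # Log of the Gaussian width (inverse squared radius), one per output channel.
        self.log_beta = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        n, _, h, w = x.shape
        # Extract all kernel_size x kernel_size patches: (N, patch_dim, L).
        patches = F.unfold(x, self.kernel_size, stride=self.stride, padding=self.padding)
        # Squared Euclidean distance from every patch to every center: (N, O, L).
        diff = patches.unsqueeze(1) - self.centers.unsqueeze(0).unsqueeze(-1)
        dist2 = diff.pow(2).sum(dim=2)
        beta = self.log_beta.exp().view(1, -1, 1)
        out = torch.exp(-beta * dist2)  # Gaussian activation in (0, 1]
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.view(n, -1, h_out, w_out)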

We show that if we replace the first convolutional layer of a convolutional network with the new layer (called RBFolutional), we obtain better robustness to adversarial examples on the MNIST and CIFAR-10 datasets without sacrificing performance on clean examples.
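
As an illustration only, continuing from the sketch above, swapping the first convolutional layer of a small MNIST-style classifier could look as follows; the architecture and layer sizes here are hypothetical and are not taken from the paper.

# Hypothetical MNIST classifier; the first nn.Conv2d is replaced by the
# RBFolutional unit. Its output already lies in (0, 1], so no ReLU follows it.
model = nn.Sequential(
    RBFolutional(1, 32, kernel_size=3, padding=1),  # was nn.Conv2d(1, 32, 3, padding=1)
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 10),  # 28x28 input -> 7x7 after two 2x2 pools
)

logits = model(torch.randn(8, 1, 28, 28))  # shape (8, 10)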