Recent research has found that neural networks for computer vision are vulnerable to several types of external attacks that modify the input of the model with the malicious intent of producing a misclassification. With the increase in the number of feasible attacks, many defence approaches have been proposed to mitigate their effects and protect the models. Research on both attacks and defences has mainly focused on RGB images, while other domains, such as the infrared domain, remain underexplored. In this paper, we propose two attacks, and we evaluate them on multiple datasets and neural network models, showing that they outperform established attacks in both the RGB and the infrared domains.
In addition, we show that our proposal can be used in an adversarial training protocol to produce models that are more robust to both adversarial attacks and natural perturbations of the input images. Lastly, we study whether a successful attack in one domain can be transferred to an aligned image in another domain, without any further tuning. The code, containing all the files and the configurations used to run the experiments, is available online at https://github.com/jaryP/IR-RGB-domain-attack.