Towards Defending against Adversarial Examples
via Attack-Invariant Features
Dawei Zhou 1 2  Tongliang Liu 2  Bo Han 3  Nannan Wang 1  Chunlei Peng 4  Xinbo Gao 5
Abstract
Deep neural networks (DNNs) are vulnerable to
adversarial noise. Their adversarial robustness
can be improved by exploiting adversarial examples ...
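The abstract notes that robustness can be improved by exploiting adversarial examples. A standard way to craft such an example is the fast gradient sign method (FGSM): perturb the input in the sign direction of the loss gradient. Below is a minimal sketch against a toy linear classifier with logistic loss; the model, `fgsm_perturb`, and all parameter names are illustrative assumptions, not the method proposed in the paper.

```python
import numpy as np

def logistic_loss(x, w, b, y):
    # Logistic loss log(1 + exp(-y * (w.x + b))) for a label y in {-1, +1}.
    return np.log1p(np.exp(-y * (np.dot(w, x) + b)))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step: move x by eps in the sign direction of the
    input gradient of the loss (toy linear model, for illustration)."""
    margin = y * (np.dot(w, x) + b)
    # Gradient of the logistic loss with respect to the input x.
    grad_x = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.1
x = rng.normal(size=5)
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
# The perturbation strictly lowers the margin, so the loss increases.
print(logistic_loss(x_adv, w, b, y) > logistic_loss(x, w, b, y))  # prints True
```

For this linear model the sign step provably decreases the margin (each coordinate moves against `y * w`), so the loss increase is deterministic, which is why FGSM-style perturbations are a convenient source of training-time adversarial examples.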