Robust Learning for Data Poisoning Attacks
Yunjuan Wang, Poorya Mianjy, Raman Arora
Abstract

We investigate the robustness of stochastic approximation approaches against data poisoning ...

... in settings where an adversary can affect any part of the training data. Therefore, in this paper, we are interested in quantifying the maximal adversarial noise that is tolerable by SGD when training wide ...
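To make the threat model concrete, here is a minimal toy sketch (not the paper's construction): an adversary corrupts a small fraction of training labels, and we compare plain SGD on clean versus poisoned data. The linear-regression setup, the corruption fraction `eps`, and the perturbation magnitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: linear regression y = <w*, x> with n clean samples.
d, n, eps = 5, 1000, 0.1
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star

# Hypothetical poisoning attack: the adversary perturbs an eps-fraction
# of labels by a bounded amount before training begins.
idx = rng.choice(n, size=int(eps * n), replace=False)
y_poisoned = y.copy()
y_poisoned[idx] += 10.0 * rng.choice([-1.0, 1.0], size=idx.size)

def sgd(X, y, lr=0.01, epochs=20, seed=1):
    """Plain SGD on the squared loss over the (possibly poisoned) data."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return w

w_clean = sgd(X, y)
w_dirty = sgd(X, y_poisoned)
# On clean data SGD recovers w* closely; poisoning increases the error.
print(np.linalg.norm(w_clean - w_star))
print(np.linalg.norm(w_dirty - w_star))
```

The gap between the two recovery errors is exactly the quantity a tolerance analysis would bound: how much adversarial noise SGD can absorb before its solution degrades appreciably.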

