Thread starter: Mujahida

[Other] ICML - To be Robust or to be Fair: Towards Fairness in Adversarial Training


Mujahida (employment verified) posted on 2025-7-27 11:52:01

To be Robust or to be Fair: Towards Fairness in Adversarial Training

Han Xu*¹, Xiaorui Liu*¹, Yaxin Li¹, Anil K. Jain¹, Jiliang Tang¹

Abstract (fragment)

Adversarial training algorithms have been proved to be reliable to improve machine learning models' ...

From the excerpt, the standard adversarial training objective, in which perturbed inputs are crafted to be wrongly classified:

$$\min_f \; \mathbb{E}_{x}\Big[\max_{\|\delta\|\le\epsilon} L\big(f(x+\delta),\, y\big)\Big] \qquad (1)$$
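Equation (1) is the usual min-max adversarial training objective: the inner maximization finds the worst-case perturbation δ within an ε-ball, and the outer minimization trains the model on those perturbed inputs. Below is a minimal numpy sketch of the inner maximization only, using projected gradient ascent (PGD-style sign steps) on a logistic loss with a linear model. The linear model, loss, and step sizes are illustrative assumptions, not the paper's fairness-aware method.

```python
import numpy as np

def logistic_loss(w, x, y):
    # y in {-1, +1}; L(f(x), y) = log(1 + exp(-y * <w, x>)) for a linear model f
    return np.log1p(np.exp(-y * np.dot(w, x)))

def pgd_linf(w, x, y, eps, alpha=0.01, steps=10):
    """Approximate the inner max of Eq. (1) over an l_inf ball of radius eps.

    Each step ascends the loss via the gradient sign, then clips delta
    back into [-eps, eps] (the projection step). Assumed hyperparameters:
    alpha (step size) and steps (iteration count).
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        margin = -y * np.dot(w, x + delta)
        # dL/d(delta): sigmoid(margin) * d(margin)/d(delta) = sigmoid(margin) * (-y * w)
        grad = -y * (1.0 / (1.0 + np.exp(-margin))) * w
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
y = 1.0
delta = pgd_linf(w, x, y, eps=0.1)
assert np.all(np.abs(delta) <= 0.1 + 1e-12)        # stays inside the eps-ball
assert logistic_loss(w, x + delta, y) >= logistic_loss(w, x, y)  # loss did not decrease
```

For a linear model the attack converges to the closed-form worst case, δ = -y·ε·sign(w); in adversarial training, the outer loop would then update w on (x + δ, y) pairs.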

Keywords: Fairness, Training, Towards, robust, Toward
