Does Deep Learning Have Deep Flaws?


oliyiyi posted on 2016-8-12 09:26:57


Here are three pairs of images. Can you spot the differences between the left and right columns?

I suppose not. They look truly identical to human eyes, yet not to deep neural networks. In fact, the deep neural networks recognize the left images correctly but fail on the right ones. Interesting, isn't it?

A recent study by researchers from Google, New York University, and the University of Montreal found this flaw in almost every deep neural network.

The study presents two counterintuitive properties of deep neural networks.

1. It is the space, rather than the individual units, that contains the semantic information in the higher layers of a neural network. This means that images corresponding to random distortions of the original activations can also be correctly classified.

The figures below compare the natural basis to a random basis on a convolutional neural network trained on MNIST, using the ImageNet dataset as the validation set.

For the natural basis (upper images), the method treats the activation of a single hidden unit as a feature and looks for input images that maximize that one feature's activation value. These features can be interpreted as meaningful variations in the input domain. However, the experiments show that equally interpretable semantics emerge for arbitrary random directions (lower images).

Take white-flower recognition as an example. Traditionally, images with a black circle and a fan-shaped white region are most likely to be classified as white flowers. However, it turns out that if we pick a random set of basis directions, the responding images can be semantically interpreted in much the same way.
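The contrast between a "natural basis" direction (a single hidden unit) and a random direction can be sketched numerically. This is a toy illustration only: the activations below are hypothetical stand-ins for a real network's hidden layer, not outputs of any trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations for 5 input images (8 units).
acts = rng.normal(size=(5, 8))

# "Natural basis": treat one hidden unit (unit 3) as the feature and
# score images by that unit's activation.
e3 = np.zeros(8)
e3[3] = 1.0
natural_scores = acts @ e3            # identical to acts[:, 3]

# "Random basis": score images along a random unit-norm direction
# in the same activation space.
v = rng.normal(size=8)
v /= np.linalg.norm(v)
random_scores = acts @ v

# In the paper's experiments, the images that maximize either kind of
# projection look equally "semantically interpretable" to a human.
best_natural = int(np.argmax(natural_scores))
best_random = int(np.argmax(random_scores))
```

The point of the comparison: both scoring rules are just linear projections of the activation vector, so nothing mathematically distinguishes a single unit from a random direction.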

2. The network may misclassify an image after a certain imperceptible perturbation is applied. The perturbations are found by adjusting the pixel values to maximize the prediction error.

For all the networks we studied (MNIST, QuocNet, AlexNet), for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.

The examples below show (left) correctly predicted samples, (right) adversarial examples, and (center) the difference between them magnified 10×. The two columns look identical to a human, yet entirely different to a neural network.

This is a remarkable finding. Shouldn't networks be immune to small perturbations? The continuity and stability of deep neural networks are called into question: the smoothness assumption no longer holds for them.
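The paper's search for these perturbations used box-constrained optimization over the input pixels. As a minimal sketch of the underlying idea, "adjust the pixel values to maximize the prediction error", here is a single sign-gradient step on a toy logistic classifier; the weights and input are hypothetical, not from any real network:

```python
import numpy as np

def logistic_loss(w, x, y):
    """Log loss of a linear classifier on a single example."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical trained weights and a correctly classified input.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.4, 0.2])
y = 1

# Gradient of the loss w.r.t. the INPUT pixels (not the weights):
# for logistic regression, d(loss)/dx = (p - y) * w.
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad_x = (p - y) * w

# Take an imperceptibly small step that increases the prediction error.
eps = 0.05
x_adv = x + eps * np.sign(grad_x)

assert logistic_loss(w, x_adv, y) > logistic_loss(w, x, y)
```

Each pixel moves by at most `eps`, so the perturbed input is visually indistinguishable from the original even though the model's error has grown.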

What's more surprising is that the same perturbation can cause a different network, trained on a different training dataset, to misclassify the same image. This means that adversarial examples are somewhat universal.

This result may change the way we manage training/validation datasets and edge cases. Experiments support that keeping a pool of adversarial examples and mixing it into the original training set improves generalization. Adversarial examples generated for the higher layers seem to be more useful than those for the lower layers.
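A minimal sketch of that mixing step, reusing a sign-gradient perturbation against a toy linear model: the weights and data are hypothetical, and the point is only the bookkeeping of generating an adversarial pool with unchanged labels and concatenating it onto the clean set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clean training set for a toy linear classifier.
true_w = np.array([1.0, -2.0, 0.5])
clean_x = rng.normal(size=(100, 3))
clean_y = (clean_x @ true_w > 0).astype(float)

def make_adversarial(x, y, w, eps=0.1):
    """Nudge each input toward higher loss via a sign-gradient step."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    grad = (p - y)[:, None] * w[None, :]   # d(loss)/dx per example
    return x + eps * np.sign(grad)

# Generate the adversarial pool against the current (imperfect) model,
# keep the original labels, then mix the pool into the training set.
current_w = np.array([0.9, -1.8, 0.4])
adv_x = make_adversarial(clean_x, clean_y, current_w)

train_x = np.concatenate([clean_x, adv_x])
train_y = np.concatenate([clean_y, clean_y])
```

Retraining on `train_x`/`train_y` instead of the clean set alone is the regularization the article describes: the model is forced to be correct in a small neighborhood around each example, not just at the example itself.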

The implications are not limited to this.

The authors claimed:

Although the set of adversarial negatives is dense, the probability is extremely low, so it is rarely observed in the test set.

But "rare" does not mean never. Imagine AI systems with such blind spots deployed for criminal detection, in security devices, or in banking systems. And if we keep digging deeper: do such adversarial images exist for our own brains? Scary, isn't it?


Ran Bi is a master's student in the Data Science program at New York University. She has done several projects in machine learning, deep learning, and big data analytics during her study at NYU. With an undergraduate background in Financial Engineering, she is also interested in business analytics.


h2h2 posted on 2016-8-12 11:22:38:
Thanks for sharing
