
3 Thoughts on Why Deep Learning Works So Well


oliyiyi posted on 2016-8-12 07:57:56


Last week, deep learning research leader Yann LeCun took part in a Quora Session, during which he answered questions from community members on a wide variety of (mostly machine/deep learning) topics.


But... what does Yann LeCun think he does?

During the session, this question was posed:

When will we see a theoretical background and mathematical foundation for deep learning?

The answer turned into a very eloquent discussion of three particular thoughts on why deep learning works so well. Here is a quick overview.

LeCun's first point, which goes a long way toward explaining why deep learning works so well, is as follows:

One theoretical puzzle is why the type of non-convex optimization that needs to be done when training deep neural nets seems to work reliably.

The main idea here is that local minima rarely arise in very high-dimensional spaces, so greedy gradient-based optimization does not get trapped in a "box." As LeCun states:

It’s hard to build a box in 100 million dimensions.
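To make that intuition concrete, here is a minimal sketch of why genuine "boxes" become rare as dimension grows. It is my own illustration, not anything from LeCun's answer, and it rests on a stated assumption: that a random symmetric matrix is a reasonable stand-in for the Hessian at a random critical point. The helper name fraction_of_local_minima is hypothetical.

```python
# Toy illustration (assumption: a random symmetric matrix stands in for the
# Hessian at a critical point of a high-dimensional loss).
# A critical point only traps gradient descent if ALL eigenvalues of its Hessian
# are positive (a true local minimum, LeCun's "box"). As the dimension grows,
# the chance of that happening collapses toward zero, so almost every critical
# point is a saddle that optimization can slide past.
import numpy as np

rng = np.random.default_rng(0)

def fraction_of_local_minima(dim, trials=2000):
    """Estimate P(all eigenvalues > 0) for random symmetric 'Hessians'."""
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        hessian = (a + a.T) / 2.0                      # symmetrize
        if np.all(np.linalg.eigvalsh(hessian) > 0.0):  # positive definite?
            count += 1
    return count / trials

for dim in (1, 2, 5, 10, 20):
    print(f"dim={dim:3d}  estimated P(local minimum): {fraction_of_local_minima(dim):.4f}")
```

Even at 20 dimensions the estimate is essentially zero in this toy model; at the 100 million dimensions LeCun mentions, a randomly encountered critical point has effectively no chance of walling the optimizer in on every side.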

Moving on, LeCun introduces his next point as:

Another interesting theoretical question is why multiple layers help.

The point here, beyond LeCun stating that there is not a complete understanding as to why, is that multiple layers help to implement complex functions more concisely. While he points out that computer scientists are accustomed to the idea of sequential steps and multiple layers of computation, this doesn't quite cover the reasons why multiple layers in deep neural networks work as they do.
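One concrete, if toy, way to see the concision argument follows. This is my own sketch rather than anything LeCun said, and the helper names are hypothetical: composing a simple "hat" function with itself k times yields a sawtooth with 2**k linear pieces while spending only two ReLU units per layer, whereas a single-hidden-layer ReLU network needs on the order of 2**k units to produce that many pieces, since each unit can add at most one breakpoint.

```python
# Toy illustration (not from LeCun's answer): depth buys concision.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """The tent map on [0, 1], expressed with two ReLU units (one 'layer')."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, depth):
    """Compose the hat function `depth` times: a `depth`-layer ReLU network."""
    for _ in range(depth):
        x = hat(x)
    return x

def count_peaks(y):
    """Count interior local maxima of the sampled curve."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] >= y[2:])))

x = np.linspace(0.0, 1.0, 100001)
for depth in (1, 2, 4, 8):
    peaks = count_peaks(deep_sawtooth(x, depth))
    # A sawtooth with 2**(depth - 1) peaks has 2**depth linear pieces.
    print(f"depth={depth}: {peaks} peaks, about {2 * peaks} linear pieces "
          f"(2**depth = {2 ** depth})")
```

The deep version's parameter count grows linearly with depth while the number of linear pieces it can draw grows exponentially, which is exactly the "implement complex functions more concisely" benefit LeCun alludes to.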

For his last point, he turns to a specific neural network architecture.

A third interesting question is why ConvNets work so well.

Interesting question indeed. An article is cited as suggested reading on why ConvNet architectures are the right fit for analyzing certain types of signals, touching on the fact that ConvNets work very well on signals with spatial structure, such as images. Sorting out why this is so is one thing, but simply noting that it is true puts into perspective just how well ConvNets work when they do.
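As a rough illustration of what makes the architecture a good match for spatial signals (again my own sketch, not part of the Quora answer): a convolutional filter reuses a handful of weights at every location and looks only at local neighborhoods, so the same detector fires wherever a pattern appears. The conv2d helper below is hypothetical and deliberately naive.

```python
# Toy illustration (not from LeCun's answer): weight sharing and local
# connectivity. A single 3x3 filter has 9 weights reused at every position,
# so the same edge detector applies wherever the edge happens to be
# (translation equivariance). A fully connected layer from a 28x28 image to a
# 28x28 feature map would instead need 784 * 784 = 614,656 independent weights.
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds wherever a dark-to-bright edge occurs.
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # a vertical edge between columns 3 and 4
print(conv2d(image, edge_kernel))        # strong response along the edge, 0 elsewhere

shifted = np.zeros((8, 8))
shifted[:, 2:] = 1.0                     # the same edge, shifted left
print(conv2d(shifted, edge_kernel))      # the response shifts by the same amount
```

The same nine weights cover the whole image, and shifting the input simply shifts the output, which is the structural prior that makes ConvNets such a good fit for spatial signals.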

While answering a question about when we can expect a mathematical foundation for deep learning to emerge, Yann LeCun also provided valuable insight into why deep learning works as well as it does.




h2h2 posted on 2016-8-12 11:18:10
Thanks for sharing.


william9225 posted on 2016-8-12 13:41:47 (from mobile)
Thanks for sharing.

