Original poster: xujingjun

"Dangerous" artificial intelligence

Genuine concerns about artificial intelligence (575 words)

December 3, 2014 7:08 pm

The idea that computers will one day turn on man is not far-fetched

------------------------------------------

Since the dawn of civilisation, mankind has been obsessed by the possibility that it will one day be extinguished. The impact of an asteroid on earth and the spectre of nuclear holocaust are the most prevalent millenarian fears of our age. But some scientists are increasingly of the view that a new nightmare must be added to the list. Their concern is that intelligent computers will eventually develop minds of their own and destroy the human race.

The latest warning comes from Professor Stephen Hawking, the renowned astrophysicist who has motor neurone disease. He told an interviewer this week that artificial intelligence could “outsmart us all” and that there is a “near certainty” of technological catastrophe. Most non-experts will dismiss his claims as a fantasy rooted in science fiction. But the pace of progress in artificial intelligence, or AI, means policy makers should already be considering the social consequences.

The idea that machines might one day be capable of thinking like people has been loosely discussed since the dawn of computing in the 1950s. The huge amount of cash being poured into AI research by US technology companies, together with the exponential growth in computer power, means startling predictions are now being made.

According to a recent survey, half the world’s AI experts believe human-level machine intelligence will be achieved by 2040 and 90 per cent say it will arrive by 2075. Several AI experts talk about the possibility that the human brain will eventually be “reverse engineered”. Some prominent tech leaders, meanwhile, warn that the consequences are unpredictable. Elon Musk, the pioneer of electric cars and private space flight at Tesla Motors and SpaceX, has argued that advanced computer technology is “potentially more dangerous than nukes”.

Western governments should be taking the ethical implications of the development of AI seriously. One concern is that nearly all the research being conducted in this field is privately undertaken by US-based technology companies. Google has made some of the most ambitious investments, ranging from its work on quantum computing through to its purchase this year of British AI start-up DeepMind. But although Google set up an ethics panel following the DeepMind acquisition, outsiders have no idea what the company is doing — nor how much resource goes into controlling the technology rather than developing it as fast as possible. As these technologies develop, lack of public oversight may become a concern.

That said, the risk that computers might one day pose a challenge to humanity should be put in perspective. Scientists may not be able to say with certainty when, or if, machines will match or outperform mankind.

But before the world gets to that point, the drawing together of both human and computer intelligence will almost certainly help to tackle pressing problems that cannot otherwise be solved. The growing ability of computers to crunch enormous quantities of data, for example, will play a huge role in helping humanity tackle climate change and disease over the next few decades. It would be folly to arrest the development of computer technology now — and forgo those benefits — because of risks that lie much further in the future.

There is every reason to be optimistic about AI research. There is no evidence that scientists will struggle to control computers, even at their most advanced stage. But this is a sector in which pioneers must tread carefully — and with their eyes open to the enduring ability of science to surprise us.



Reply #1 — olympic, posted 2015-2-15 22:08:40

(A Chinese translation of the article above.)
