Original poster: fin-qq

[英语] The Frightening Side Of Intelligent Robots╱人工智能失控后果可怕



The Frightening Side Of Intelligent Robots╱人工智能失控后果可怕

By NICK BILTON╱李京伦译

Ebola gives me nightmares. Bird flu and SARS scare me. But what terrifies me is artificial intelligence. The first three, with enough resources, humans could stop. The last, which humans are creating, could become unstoppable.

伊波拉病毒让我做恶梦,禽流感和严重急性呼吸道症候群(SARS)让我怕怕,而让我吓得要命的却是人工智能。前三种,人类只要有足够资源就能阻止;最后一种,人类正在研发,却可能变得无法遏制。

Consider what artificial intelligence is. Grab an iPhone and ask Siri about the weather or stocks. Her answers are artificially intelligent. These artificially intelligent machines are cute now, but as they are given more power, they may not take long to spiral out of control.

看看什么是人工智能。拿起iPhone,问个人助理软件Siri天气或股市如何,它的答案就是人工智能。目前这些人工智能机器都算可爱,不过,当机器被赋予更多力量时,也许不多久就会失控。

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market. Or a driverless car freezes on the highway because a software update goes awry.

刚开始,人工智能故障的规模小但影响大,也许一台不安分的计算机让股市暂时脱序,或者一辆自动驾驶车因软件更新出错而停在公路上。

But the upheavals can escalate quickly. Imagine how a medical robot, programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

不过乱局可能迅速升级。想象一下,一个以消除癌症为设计宗旨的医疗机器人,可能得出结论:消灭癌症的最佳办法,就是消灭在基因上容易罹患这种疾病的人类。

Nick Bostrom, author of the book “Superintelligence,” lays out some doomsday settings. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. These bots could fight diseases in the human body or eat radioactive material. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”

「超级人工智能」一书作者薄思纯描绘了一些末日情景,其中一种是能够自我复制的奈米机器人,这是非常微小的机器人,设计宗旨就是自我复制。这些机器人可在人体内跟疾病作战,或吃掉辐射物质,不过薄思纯说,「掌握这种技术却心怀不轨的人,可能让地球上的智慧生命灭亡。」

Artificial-intelligence proponents argue that programmers are going to build safeguards. But didn’t it take nearly a half-century for programmers to stop computers from crashing every time you used them?

虽然人工智能支持者主张,程序设计师会预做防护,不过,程序设计师不是花了将近半世纪,才让计算机不会在你每次使用时都当机?

Stephen Hawking, one of the smartest people on earth, wrote that successful A.I. “would be the biggest event in human history. Unfortunately, it might also be the last.”

世上最聪明的人之一霍金曾写道,人工智能成功「将是人类史上最重大的事件,不幸的是,可能也是最后一个事件。」

One fear is that we are starting to create machines that can make decisions, but these machines don’t have morality and likely never will. A more-distant fear is that once we build systems that are as intelligent as humans, they will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control. We can’t build safeguards into something that we haven’t built ourselves.

令人担心的一件事是,我们已开始创造能做决定的机器,这些机器却没有道德感,而且大概永远不会有。往远处想,令人害怕的是,一旦我们设计的系统跟人类一样聪明,系统就能自行设计更聪明的机器,通常称为「超级人工智能」。专家说,那就是局面真正失控的时候。我们没法为不是我们造出来的东西预做防护。

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.” “So when there is something smarter than us on the planet, it will rule over us on the planet.”

「我们最后的发明:人工智能与人类时代的终结」一书作者巴拉特说:「我们人类能主导未来走向,不是因为我们是地球上最强壮或动作最快的物种,而是因为我们最聪明,所以,如果地球上有某种东西比我们聪明,它就会统治我们。」

What makes it harder to comprehend is that we don’t know what superintelligent machines will look or act like. “Artificial intelligence won’t be like us,” Mr. Barrat said, “but it will be the ultimate intellectual version of us.”

让这种恐惧更难理解的是,我们并不知道超级人工智能机器长得怎样、如何行动。巴拉特说:「人工智能不会像我们,但会是我们最聪明的版本。」

Perhaps the scariest setting is how these technologies will be used by the military. Bonnie Docherty, a lecturer at Harvard University and a senior researcher at Human Rights Watch, said that the race to build autonomous weapons with artificial intelligence — already underway — is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place before machines are killing people on the battlefield. Machines that have no morality or mortality, she said, “should not be given power to kill.”

也许最可怕的情景是,这种技术会被军方使用。哈佛大学讲师、「人权观察」组织高级研究员邦妮.杜彻提说,研发人工智能自动武器的竞赛已经展开,让人联想到核武研发竞赛早期,相关的限制条约应该在机器上战场杀人前生效。她说,机器既没有道德感也不会死,「不该握有杀人的力量。」

So how do we ensure that all these doomsday situations don’t come to fruition? In some instances, we likely won’t be able to stop them. But we can hinder some of the potential chaos by following the lead of Google. This year, when the search-engine giant acquired DeepMind, an artificial intelligence company based in London, the two put together an artificial intelligence safety and ethics board that aims to ensure these technologies are developed safely.

那我们该如何确保这些末日情景不会发生?在某些情况下,我们可能没法阻止,不过我们可以效法搜索引擎巨头谷歌,防止某些潜在乱局上演。谷歌今年收购总部在伦敦的人工智能公司DeepMind,二者共组「人工智能安全与道德委员会」,确保这种技术朝安全的方向研发。

Demis Hassabis, founder of DeepMind, said that anyone building artificial intelligence should do the same thing. “They should definitely be thinking about the ethical consequences of what they do,” Dr. Hassabis said. “Way ahead of time.”

DeepMind创办人哈萨毕斯博士说,凡是研发人工智能的人都该这么做,「他们理当想到自己所作所为的道德后果,而且要早早就想到。」




