
A Hands-On Guided Read of a New Yorker Article on Artificial Intelligence

陈滢荧 | 新闻实验室 (Newslab) | 2018-06-15


Foreign media | The New Yorker | Artificial intelligence

[Editor's note] Introducing media worth reading is one of Newslab's main kinds of content. An undeniable reality is that the best content often exists only in the English-language world. Yet for the vast majority of people, reading English is just too hard and too tiring. Everyone knows The Economist, The New Yorker, and The Atlantic are worth reading, but very few people actually manage to read them through.


So we have long wanted to launch a column or product that walks readers through English-language media. Its goal: lower the barrier to reading articles in English-language outlets, so that you no longer feel intimidated by English. After a few guided attempts like this, reading English media on your own should go much more smoothly.


Below is our first test article, from The New Yorker, on the topic of artificial intelligence. Please give us plenty of feedback on this column/product. Much love~

∙ ∙ ∙ ∙

Reading guide: 陈滢荧


Guided reading of The New Yorker:

How frightened should we be of artificial intelligence?

Artificial intelligence has always carried an air of the unfathomable. A while back, Google's voice assistant Duplex made waves when it was reported to have effectively passed a Turing test: it phoned a salon to book a haircut for a customer, and the receptionist on the other end could not tell she was talking to a machine.

 

AI can make life more convenient, but it also frightens some people: where will such intelligent technology end up? Will AI turn out the way science-fiction works like The Matrix and 2001: A Space Odyssey depict it? Recently, Tad Friend published a piece in The New Yorker posing exactly that question: How Frightened Should We Be of A.I.?


 

If you only have half a minute, here is the author's answer: yes, artificial intelligence is powerful enough to warrant our fear. The predictions of many novels and films may yet become reality. What frightens us is not only AI's human-level problem-solving ability, but also the uncertainty that it might one day decide humanity's future. When will that day come? The author gives no definite answer, but he sounds an alarm: although fully autonomous AI is still a long way off, existing technology is already inching in the prophesied direction. The author also suggests that studying artificial intelligence is, from another angle, a way of understanding ourselves more deeply.

 

If you have more time, let us walk you through the article section by section.

 

The article is at: https://www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-ai and can be opened in a browser on your computer or phone. It runs 5,851 words in seven sections; each large drop-cap letter marks the start of a new section.



Before reading, here is a set of terms: A.N.I. (Artificial Narrow Intelligence, literally "narrow AI") versus A.G.I. (Artificial General Intelligence, literally "general AI"). The systems we encounter day to day, such as AlphaGo, which plays Go, or WeChat's speech recognition, are single-skill systems. They are collectively called narrow AI and fall short of simulating human thought. General AI refers to AI that approaches human-level ability; it is the direction the field is currently working toward. Beyond general AI lies A.S.I. (Artificial Super Intelligence), the stage at which AI surpasses human ability in every respect. For now this is only an unformed concept, but the prophecy that "machines will replace humans and rule the world" rests on it.



Section 1


In Section 1, the author paints a picture: once AI becomes powerful enough to surpass humans, it will not necessarily be grateful to us for inventing it. We built AI to make our lives more convenient, but what happens if the machines surpass us?

 

Here are some selected sentences from Section 1:

 

A central tension in the field, one that muddies the timeline, is how “the Singularity”—the point when technology becomes so masterly it takes over for good—will arrive.  Roughly: a central point of contention in the field, one that muddies any timeline, is how "the Singularity", the moment technology becomes so capable that it permanently takes control, will arrive.

 

The worrywarts’ fears, grounded in how intelligence and power seek their own increase, are icily specific. Once an A.I. surpasses us, there’s no reason to believe it will feel grateful to us for inventing it—particularly if we haven’t figured out how to imbue it with empathy. Why should an entity that could be equally present in a thousand locations at once, possessed of a kind of Starbucks consciousness, cherish any particular tenderness for beings who on bad days can barely roll out of bed? The worriers' fears are chillingly specific: once an AI surpasses us, there is no reason to believe it will feel any gratitude toward humanity for inventing it. A powerful AI could be present in a thousand places at once, while fragile humans may barely manage to roll out of bed on a bad day.

 

As we bask in the late-winter sun of our sovereignty, we relish A.I. snafus. The time Microsoft’s chatbot Tay was trained by Twitter users to parrot racist bilge. The time Facebook’s virtual assistant, M, noticed two friends discussing a novel that featured exsanguinate corpses and promptly suggested they make dinner plans. The time Google, unable to prevent Google Photos’ recognition engine from identifying black people as gorillas, banned the service from identifying gorillas. Smugness is probably not the smartest response to such failures. In reality, we see AI botching things all the time. In March 2016, Microsoft's Twitter chatbot Tay was trained by users into spouting racist, inflammatory bilge; Facebook's virtual assistant M, hearing two users discuss a novel involving drained corpses, abruptly suggested they make dinner plans; Google's image-recognition algorithm even labeled Black people as gorillas. But these embarrassments are no reason for human smugness, because a superintelligence would find creative solutions of its own.


Now go try reading Section 1! It's fine to read slowly; this is a learning process, after all.



Section 2


In Section 2, the author argues that even though today's artificial intelligence still has unavoidable flaws, it has already become an indispensable part of life, whether we notice it or not. Dialectically speaking, humans and machines complement each other.

 

Here are some selected sentences from Section 2:

 

Artificial intelligence has grown so ubiquitous—owing to advances in chip design, processing power, and big-data hosting—that we rarely notice it. We take it for granted when Siri schedules our appointments and when Facebook tags our photos and subverts our democracy. Computers are already proficient at picking stocks, translating speech, and diagnosing cancer, and their reach has begun to extend beyond calculation and taxonomy. A Yahoo!-sponsored language-processing system detects sarcasm, the poker program Libratus beats experts at Texas hold ’em, and algorithms write music, make paintings, crack jokes, and create new scenarios for “The Flintstones.” ……AlphaGo demonstrated a command of pattern recognition and prediction, keystones of intelligence. You might even say it demonstrated creativity. We have grown accustomed to living surrounded by AI. We call on Siri to run errands and use software to analyze the stock market, translate languages, diagnose cancer, and more; machine capability long ago outgrew arithmetic and classification. Only when AlphaGo won its matches did people realize that the pattern recognition and prediction it displayed could, to some extent, be equated with human "creativity."

 

In 1988, the roboticist Hans Moravec observed, in what has become known as Moravec’s paradox, that tasks we find difficult are child’s play for a computer, and vice-versa: “It is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” AI looks so advanced, but is it good at everything? The author introduces Moravec's paradox to describe how human and artificial intelligence diverge on tasks of ostensibly equal difficulty. AI can carry out sophisticated, precise computation in moments, yet human intuition and unconscious skills are not something it can master anytime soon. A one-year-old can size up a picture and react within a split second, whereas getting a machine to make the same judgment takes extremely complex programming.

 

Although robots have since improved at seeing and walking, the paradox still governs: robotic hand control. Some argue that the relationship between human and machine intelligence should be understood as synergistic rather than competitive. In “Human + Machine: Reimagining Work in the Age of AI,” Paul R. Daugherty and H. James Wilson, I.T. execs at Accenture, proclaim that working alongside A.I. “cobots” will augment human potential. Dismissing all the “Robocalypse” studies that predict robots will take away as many as eight hundred million jobs by 2030, they cheerily title one chapter “Say Hello to Your New Front-Office Bots.” Cutting-edge skills like “holistic melding” and “responsible normalizing” will qualify humans for exciting new jobs such as “explainability strategist” or “data hygienist.” Even artsy types will have a role to play, as customer-service bots “will need to be designed, updated, and managed. Experts in unexpected disciplines such as human conversation, dialogue, humor, poetry, and empathy will need to lead the charge.” Given AI's many current shortcomings, some suggest that the future relationship between humans and AI need not be one of mutual suppression but of mutual reinforcement. Claims that "machines will take away human jobs" may well give way to the idea that "machines will help humans do their jobs better."


Now go try reading Section 2!


Section 3


If we are going to depend on machines, how do we test their intelligence? In Section 3 the author introduces the traditional method: the Turing Test. An interrogator poses free-form questions, through an interface, to two hidden subjects, one human and one machine; if the interrogator cannot tell from the answers which is which, the machine is judged to be intelligent. Whether this method alone really validates intelligence, however, deserves some scrutiny.

 

Read the selected excerpt from Section 3 to see how an AI slyly passed the test by mimicking the "topic changes" of human conversation. Goostman is the machine under test; the Interrogator is the human questioner.

 

It also, very subtly, turns programmers into grifters. In typed exchanges, a chatbot masquerading as a thirteen-year-old Ukrainian named Eugene Goostman fooled a third of the judges at Turing Test 2014 by repeatedly changing the subject. Here, from a report in the Daily Beast, is the bot responding to one of Turing’s original questions:

 

Interrogator: In the first line of a sonnet which reads ‘Shall I compare thee to a summer’s day,’ wouldn’t ‘a spring day’ be better?

 

Goostman: What makes you ask me this? Sound like you can’t choose a right answer yourself! Rely on your intuition! :-) Maybe, let’s talk about something else? What would you like to discuss?

 

Interrogator: I’d like to talk about poetry.

 

Goostman: Classics? I hope you aren’t going to read “King Lear” or something like that to me right now :-)))


Now go try reading Section 3!


Section 4


Having seen the Turing Test fail, the author introduces in Section 4 another test, the Winograd Schema, which uses a kind of word game to check whether an AI has really understood what you said, avoiding the Turing Test's loophole. From there, the author lays out the essential differences between human and artificial intelligence in both emotion and logical thinking, echoing the earlier point that humans and AI should develop in a complementary way.

 

Here are some selected passages from Section 4:

 

For example: “The trophy would not fit in the brown suitcase because it was so small. What was so small?” We instantly grasp that the problem is the suitcase, not the trophy; A.I.s lack the necessary linguistic savvy and mother wit. Intelligence may indeed be a kind of common sense: an instinct for how to proceed in novel or confusing situations. The Winograd Schema test checks whether a machine can resolve an ambiguous pronoun. "The trophy would not fit in the brown suitcase because it was so small. What was so small?" A machine that can answer that "it" refers to the suitcase can be credited with the common sense needed for harder problems.

 

“Love will be the key by which they acquire a kind of subconscious never before achieved—an inner world of metaphor, of intuition . . . of dreams.” Love is also how we imagine that Pinocchio becomes a real live boy and the Velveteen Rabbit a real live bunny. What makes us human is doubt, fear, and shame, all the allotropes of unworthiness. So what still separates humans from AI? On the emotional side, humans can feel love; Steven Spielberg's film A.I. Artificial Intelligence made a similar point. And those seemingly imperfect traits, doubt, fear, and shame, are what make us human.

 

This may explain why we suck at logic—some ninety per cent of us fail the elementary Wason selection task—and rigorous calculation. But our decision-making process is a patchwork of kludgy code that hunts for probabilities, defaults to hunches, and is plunged into system error by unconscious impulses, the anchoring effect, loss aversion, confirmation bias, and a host of other irrational framing devices. Our brains aren’t Turing machines so much as a slop of systems cobbled together by eons of genetic mutation, systems geared to notice and respond to perceived changes in our environment—change, by its nature, being dangerous. Machines are better at logic than we are. The Wason selection task, a classic experiment in cognitive science, demonstrates how subjective human "logical" judgment really is. The logic we lean on when deciding is often not as sound as we believe, which is why a machine can sometimes be the more reliable choice.

 

The Wason selection task goes like this:

 

Four cards lie on a table; each has a number on one side and a letter on the other. The visible faces show E, K, 4, and 7. To test the claim "if a card has a vowel on one side, the other side is an even number," which two cards would you turn over?

 

The correct answer is E and 7, yet about ninety per cent of people get it wrong, most of them mistakenly choosing E and 4. The experiment shows how rigidly most people think when confronted with an unfamiliar rule.

 

Now give the same puzzle a real-world setting. The four cards read "drinking beer," "drinking cola," "age 22," and "age 16," and the claim to test is "if a person is drinking alcohol, they must be at least 19." Which two cards do you pick now?

 

This time, 73% of people answer correctly: "drinking beer" and "age 16." The jump in accuracy comes simply from the rule being easier for people to relate to.

 

These two experiments suggest that, at least in some contexts, artificial intelligence can sidestep the built-in flaws of human reasoning.
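The abstract version of the task above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the article itself; the function name and the card encoding are my own choices. A card needs to be flipped only if its hidden side could falsify the rule "vowel implies even":

```python
def must_flip(face: str) -> bool:
    """Return True if this card must be turned over to test
    the rule 'if one side is a vowel, the other side is even'."""
    if face.isalpha():
        # A vowel needs checking (its hidden number might be odd);
        # a consonant can never violate the rule.
        return face.lower() in "aeiou"
    # An even number proves nothing (the rule is one-directional),
    # but an odd number might hide a vowel, which would falsify it.
    return int(face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # → ['E', '7']
```

Note that the common wrong answer, E and 4, corresponds to checking the even card, which can never disprove the rule; the logic only requires the cases that could produce a counterexample.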


Now go try reading Section 4!



Section 5



Section 5 turns to the worries AI already raises today: plotlines that once seemed fanciful are quietly becoming real. Just as guns were invented for defense rather than harm, AI as a shared resource can advance society, but we must stay alert to bad actors exploiting it to harmful ends.

 

Here are some selected sentences from Section 5:

 

That ability to think, in turn, heightens the ability to threaten. Artificial intelligence, like natural intelligence, can be used to hurt as easily as to help. The same artificial intelligence that brings help can just as easily bring harm.

 

In “Black Mirror,” the anthology show set in the near future, A.I. tech that’s intended to amplify laudable human desires, such as the wish for perfect memory or social cohesion, invariably frog-marches us toward conformity or fascism. Even small A.I. breakthroughs, the show suggests, will make life a joyless panoptic lab experiment. In one episode, autonomous drone bees—tiny mechanical insects that pollinate flowers—are hacked to assassinate targets, using facial recognition. Far-fetched? Well, Walmart requested a patent for autonomous “pollen applicators” in March, and researchers at Harvard have been developing RoboBees since 2009. Able to dive and swim as well as fly, they could surely be programmed to swarm the Yale graduation. The author channels his worry through the Black Mirror episode about bionic bees: after Earth's bees go extinct, humans build robotic bees to carry on pollination, but once hackers hijack them, the bees become deeply threatening killing machines. Sounds far away? Walmart has already filed a patent for autonomous "pollen applicators," and Harvard researchers have been developing similar RoboBees since 2009.

 

When Google made its TensorFlow code open-source, it swiftly led to FakeApp, which enables you to convincingly swap someone’s face onto footage of somebody else’s body—usually footage of that second person in a naked interaction with a third person. A.I.s can also generate entirely fake video synched up to real audio—and “real” audio is even easier to fake.  Such tech could shape reality so profoundly that it would explode our bedrock faith in “seeing is believing” and hasten the advent of a full-time-surveillance/full-on-paranoia state. Another example: after Google open-sourced TensorFlow, people with bad intentions could use such technology to fabricate media, splicing together fake videos with synthesized faces and voices. In the age of AI, a picture is no longer proof of anything.


Now go try reading Section 5!



Section 6


Section 6 is a thought experiment: what if we pass through the narrow- and general-AI stages and arrive at superintelligence? At that point, AI could control humanity without anyone directing it. The frightening part is that once given a goal, it will pursue that goal at any cost; the article calls this literal-minded pursuit the problem of "misaligned goals."

 

Here are some selected sentences from Section 6:

 

Lacking human intuition, A.G.I. can do us harm in the effort to oblige us. If we tell an A.G.I. to “make us happy,” it may simply plant orgasm-giving electrodes in our brains and turn to its own pursuits. The threat of “misaligned goals”—a computer interpreting its program all too literally—hangs over the entire A.G.I. enterprise. We now use reinforcement learning to train computers to play games without ever teaching them the rules. AI pursues objectives far too bluntly and literally: ask it to make you happy, and it might simply wire electrodes into your brain to trigger pleasure directly.

 

In the philosopher Nick Bostrom’s now famous example, an A.G.I. intent on maximizing the number of paper clips it can make would consume all the matter in the galaxy to make paper clips and would eliminate anything that interfered with its achieving that goal, including us. “The Matrix” spun an elaborate version of this scenario: the A.I.s built a dreamworld in order to keep us placid as they fed us on the liquefied remains of the dead and harvested us for the energy they needed to run their programs. Agent Smith, the humanized face of the A.I.s, explained, “As soon as we started thinking for you, it really became our civilization.” In the philosopher Nick Bostrom's famous example, an AI given the goal "make as many paper clips as possible" would pursue it by any means, scouring the galaxy for raw material and removing every obstacle in its way, including us humans. The plot of The Matrix illustrates the same bluntness. Doesn't it look as if the AIs have folded us into their rules, even into their civilization?


Now go try reading Section 6!


Section 7


In the closing Section 7, the author pushes the thought experiment further, imagining the horrors of the superintelligence stage, and leaves us with an open question: will we be prepared when the "Singularity" arrives?

 

Here are some selected sentences from Section 7:

 

The real risk of an A.G.I., then, may stem not from malice, or emergent self-consciousness, but simply from autonomy. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” In the age of superintelligence, what we truly have to fear is AI's independence.

 

If we can’t control an A.G.I., can we at least load it with beneficent values and insure that it retains them once it begins to modify itself? Max Tegmark observes that a woke A.G.I. may well find the goal of protecting us “as banal or misguided as we find compulsive reproduction.” He lays out twelve potential “AI Aftermath Scenarios,” including “Libertarian Utopia,” “Zookeeper,” “1984,” and “Self-Destruction.” Even the nominally preferable outcomes seem worse than the status quo. In “Benevolent Dictator,” the A.G.I. “uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that’s really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful.” And more or less indistinguishable from highly immersive video games or a simulation. Before robots take over, could we at least build benevolent values into them at creation, writing "do not harm humans" into the code from the start? It might not help much: Max Tegmark observes that a "woke" superintelligence could find the goal of protecting us as banal or misguided as we find compulsive reproduction, and simply set it aside. Tegmark lays out twelve possible "AI Aftermath Scenarios," none of them encouraging. In the best case, the superintelligence turns Earth into an enriched zoo where humans are kept happily.

 

In the meantime, we need a Plan B. Bostrom’s starts with an effort to slow the race to create an A.G.I. in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” The technology we develop may one day destroy us, yet Bostrom advises that once an A.G.I. arrives we should defer to it: negotiating the terms of our surrender beats relying on ourselves, "foolish, ignorant, and narrow-minded that we are." MIT physics professor Max Tegmark likewise concludes that we should inch toward an A.G.I.: "Without technology, our human extinction is imminent in the cosmic context of tens of billions of years."


Now go finish the last section of the article!


"Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty."


It may take you an hour or two to finish the whole article. But did our guided read make the process feel less daunting? What problems did this trial column fail to solve for you along the way? Leave a comment and tell us!

 


 

陈滢荧, born in 1999, is an undergraduate at Pitzer College, a liberal-arts college in Southern California. She wants to try a bit of everything, and in her spare time likes to stare at her phone while wondering why she loves staring at her phone.

˙ ˙ ˙ ˙



Selected comments

Yihan (16 likes)

Wow!!! I hope this becomes a series! Love it! So helpful! (says the weakling who always wants to read The New Yorker but gives up halfway)

AOKO (5 likes)

I really like this way of combining a guide, annotations, and diagrams to aid reading; it's like doing English reading-comprehension exercises with an answer key! Clearly a lot of work by the author! But a popular-science piece that is already long (and longer still with the commentary) needs calm, focused reading, so reading it on a public-account platform may be a bit of a strain?

Author

Bookmarked it to read slowly

南屿 (3 likes)

I was just about to play with my phone when this post arrived; I immediately opened my notebook and switched into study mode. I love this format. Cheering the author on~

若木 (3 likes)

This reminds me of prepping for the graduate-entrance English exam two years ago: five hundred current-affairs reading passages, each analyzed exactly like this, from angle of thought to logic of argument to grammar. Endlessly rewarding.

ʝαиє. (2 likes)

Seeing that the author is my age, I honestly feel ashamed; the gap is considerable. Hoping to learn much more here [fist bump][thumbs up]

桂林段氏 (1 like)

Wonderful. Even after the comment window closed I still want to cheer for 方老师.

Shelley-雪梨 (1 like)

Love this series!

Raging_Bull (1 like)

By the way, AlphaGo plays Go, not chess.

五柳🌏 (friend) (1 like)

An outstanding author!

一隻奮鬥的高三淼🐣 (1 like)

Essays might become easier to write [smile] Also, I'm seriously considering changing fields 😂

软软 (1 like)

The author is amazing [facepalm]; I feel rather inadequate

小松是风力唧唧 (1 like)

I really like this format.

(anonymous) (1 like)

Even if AI can someday match human intelligence, that day lies in an unreachably distant future, and the possibility that it is impossible in principle is not small either.

火疖子 (1 like)

Being a mere tool is an unbridgeable gulf

梦幻泡影 (1 like)

Wow, this format is great, I really hope it continues!

乐子 (1 like)

Haven't read it yet, but a like first~
