The risks of AI are real, but manageable | GatesNotes

Bill Gates | 2023-07-20

The risks created by artificial intelligence can seem overwhelming. What happens to people who lose their jobs to an intelligent machine? Could AI affect the results of an election? What if a future AI decides it doesn’t need humans anymore and wants to get rid of us?

These are all fair questions, and the concerns they raise need to be taken seriously. But there’s a good reason to think that we can deal with them: This is not the first time a major innovation has introduced new threats that had to be controlled. We’ve done it before.

Whether it was the introduction of cars or the rise of personal computers and the Internet, people have managed through other transformative moments and, despite a lot of turbulence, come out better off in the end. Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road.

We’re now in the earliest stage of another profound change, the Age of AI. It’s analogous to those uncertain times before speed limits and seat belts. AI is changing so quickly that it isn’t clear exactly what will happen next. We’re facing big questions raised by the way the current technology works, the ways people will use it for ill intent, and the ways AI will change us as a society and as individuals.

In a moment like this, it’s natural to feel unsettled. But history shows that it’s possible to solve the challenges created by new technologies.

I have written before about how AI is going to revolutionize our lives. It will help solve problems—in health, education, climate change, and more—that used to seem intractable. The Gates Foundation is making it a priority, and our CEO, Mark Suzman, recently shared how he’s thinking about its role in reducing inequity.

I’ll have more to say in the future about the benefits of AI, but in this post, I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them.

One thing that’s clear from everything that has been written so far about the risks of AI—and a lot has been written—is that no one has all the answers. Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed. As I go through each concern, I’ll return to a few themes:

● Many of the problems caused by AI have a historical precedent. For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom. We can learn from what’s worked in the past.

● Many of the problems caused by AI can also be managed with the help of AI.

● We’ll need to adapt old laws and adopt new ones—just as existing laws against fraud had to be tailored to the online world.

In this post, I’m going to focus on the risks that are already present, or soon will be. I’m not dealing with what happens when we develop an AI that can learn any subject or task, as opposed to today’s purpose-built AIs. Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all?

But thinking about these longer-term risks should not come at the expense of the more immediate ones. I’ll turn to them now.

Deepfakes and misinformation generated by AI could undermine elections and democracy.

The idea that technology can be used to spread lies and untruths is not new. People have been doing it with books and leaflets for centuries. It became much easier with the advent of word processors, laser printers, email, and social networks.

AI takes this problem of fake text and extends it, allowing virtually anyone to create fake audio and video, known as deepfakes. If you get a voice message that sounds like your child saying “I’ve been kidnapped, please send $1,000 to this bank account within the next 10 minutes, and don’t call the police,” it’s going to have a horrific emotional impact far beyond the effect of an email that says the same thing.

On a bigger scale, AI-generated deepfakes could be used to try to tilt an election. Of course, it doesn’t take sophisticated technology to sow doubt about the legitimate winner of an election, but AI will make it easier.

There are already phony videos that feature fabricated footage of well-known politicians. Imagine that on the morning of a major election, a video showing one of the candidates robbing a bank goes viral. It’s fake, but it takes news outlets and the campaign several hours to prove it. How many people will see it and change their votes at the last minute? It could tip the scales, especially in a close election.

When OpenAI co-founder Sam Altman testified before a U.S. Senate committee recently, Senators from both parties zeroed in on AI’s impact on elections and democracy. I hope this subject continues to move up everyone’s agenda.

We certainly have not solved the problem of misinformation and deepfakes. But two things make me guardedly optimistic. One is that people are capable of learning not to take everything at face value. For years, email users fell for scams where someone posing as a Nigerian prince promised a big payoff in return for sharing your credit card number. But eventually, most people learned to look twice at those emails. As the scams got more sophisticated, so did many of their targets. We’ll need to build the same muscle for deepfakes.

The other thing that makes me hopeful is that AI can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated.
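
To sketch what the detection side can look like, here is a toy pipeline in Python: score every frame of a video with a classifier and flag the video when the average score is high. The “video” and the “classifier” below are invented stand-ins for illustration, not Intel’s or DARPA’s actual methods.

```python
import random

random.seed(1)

# Invented stand-ins: a "video" is a list of frames, each a list of pixel values.
def toy_video(fake: bool, frames: int = 30) -> list:
    # Pretend fakes carry slightly noisier pixels. A real detector learns far
    # subtler cues, such as blending artifacts around a swapped face.
    noise = 0.8 if fake else 0.2
    return [[random.gauss(0, noise) for _ in range(64)] for _ in range(frames)]

def frame_score(frame: list) -> float:
    """Toy per-frame 'classifier': higher pixel variance reads as more likely fake."""
    mean = sum(frame) / len(frame)
    return sum((x - mean) ** 2 for x in frame) / len(frame)

def looks_fake(video: list, threshold: float = 0.3) -> bool:
    scores = [frame_score(f) for f in video]        # score each frame
    return sum(scores) / len(scores) > threshold    # aggregate over the video

print(looks_fake(toy_video(fake=True)), looks_fake(toy_video(fake=False)))  # True False
```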

This will be a cyclical process: Someone finds a way to detect fakery, someone else figures out how to counter it, someone else develops counter-countermeasures, and so on. It won’t be a perfect success, but we won’t be helpless either.

AI makes it easier to launch attacks on people and governments.

Today, when hackers want to find exploitable flaws in software, they do it by brute force—writing code that bangs away at potential weaknesses until they discover a way in. It involves going down a lot of blind alleys, which means it takes time and patience.
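
This brute-force search is what security researchers call fuzzing. Here is a minimal sketch in Python; parse_record is an invented stand-in for the software under attack, with a planted bug for the fuzzer to find.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Invented stand-in for the software under attack, with a planted bug."""
    if len(data) < 2:
        raise ValueError("too short")       # input rejected cleanly
    length = data[0]
    if length > len(data) - 1:
        # Simulated crash: the parser trusted a length field bigger than the
        # buffer, the kind of over-read that corrupts memory in a C program.
        raise IndexError("buffer over-read")
    return data[1:1 + length]

# "Banging away": throw random inputs at the parser until one breaks it.
random.seed(0)
for attempt in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        parse_record(blob)
    except ValueError:
        continue                            # handled rejection; keep searching
    except IndexError:
        print(f"attempt {attempt}: input {blob!r} triggers the over-read")
        break
```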

Security experts who want to counter hackers have to do the same thing. Every software patch you install on your phone or laptop represents many hours of searching, by people with good and bad intentions alike.

AI models will accelerate this process by helping hackers write more effective code. They’ll also be able to use public information about individuals, like where they work and who their friends are, to develop phishing attacks that are more advanced than the ones we see today.

The good news is that AI can be used for good purposes as well as bad ones. Government and private-sector security teams need to have the latest tools for finding and fixing security flaws before criminals can take advantage of them. I hope the software security industry will expand the work they’re already doing on this front—it ought to be a top concern for them.

This is also why we should not try to temporarily keep people from implementing new developments in AI, as some have proposed. Cyber-criminals won’t stop making new tools. Nor will people who want to use AI to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace.

There’s a related risk at the global level: an arms race for AI that can be used to design and launch cyberattacks against other countries. Every government wants to have the most powerful technology so it can deter attacks from its adversaries. This incentive to not let anyone get ahead could spark a race to create increasingly dangerous cyber weapons. Everyone would be worse off.

That’s a scary thought, but we have history to guide us. Although the world’s nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency.

AI will take away people’s jobs.

In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently. That will be true whether they work in a factory or in an office handling sales calls and accounts payable. Eventually, AI will be good enough at expressing ideas that it will be able to write your emails and manage your inbox for you. You’ll be able to write a request in plain English, or any other language, and generate a rich presentation on your work.

As I argued in my February post, it’s good for society when productivity goes up. It gives people more time to do other things, at work and at home. And the demand for people who help others—teaching, caring for patients, and supporting the elderly, for example—will never go away. But it is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That’s a role for governments and businesses, and they’ll need to manage it well so that workers aren’t left behind—to avoid the kind of disruption in people’s lives that has happened during the decline of manufacturing jobs in the United States.

Also, keep in mind that this is not the first time a new technology has caused a big shift in the labor market. I don’t think AI’s impact will be as dramatic as the Industrial Revolution, but it certainly will be as big as the introduction of the PC. Word processing applications didn’t do away with office work, but they changed it forever. Employers and employees had to adapt, and they did. The shift caused by AI will be a bumpy transition, but there is every reason to think we can reduce the disruption to people’s lives and livelihoods.

AI inherits our biases and makes things up.

Hallucinations—the term for when an AI confidently makes some claim that simply is not true—usually happen because the machine doesn’t understand the context for your request. Ask an AI to write a short story about taking a vacation to the moon and it might give you a very imaginative answer. But ask it to help you plan a trip to Tanzania, and it might try to send you to a hotel that doesn’t exist.

Another risk with artificial intelligence is that it reflects or even worsens existing biases against people of certain gender identities, races, ethnicities, and so on.

To understand why hallucinations and biases happen, it’s important to know how the most common AI models work today. They are essentially very sophisticated versions of the code that allows your email app to predict the next word you’re going to type: They scan enormous amounts of text—just about everything available online, in some cases—and analyze it to find patterns in human language.

When you pose a question to an AI, it looks at the words you used and then searches for chunks of text that are often associated with those words. If you write “list the ingredients for pancakes,” it might notice that the words “flour, sugar, salt, baking powder, milk, and eggs” often appear with that phrase. Then, based on what it knows about the order in which those words usually appear, it generates an answer. (AI models that work this way are using what’s called a transformer. GPT-4 is one such model.)
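
To make that concrete, here is a toy version of the idea in Python. It is a crude trigram counter rather than a transformer, and its three-sentence training “corpus” is invented, but it shows how a list like “flour, sugar, salt...” can fall out of nothing more than word-order statistics.

```python
from collections import Counter, defaultdict

# Invented three-sentence "corpus" standing in for enormous amounts of text.
corpus = (
    "list the ingredients for pancakes : flour sugar salt baking powder milk eggs . "
    "list the ingredients for pancakes : flour milk eggs sugar . "
    "list the ingredients for omelets : eggs salt butter . "
).split()

# Count which word follows each pair of words (a trigram model).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict_next(a: str, b: str):
    """The word seen most often after the pair (a, b), or None."""
    options = follows.get((a, b))
    return options.most_common(1)[0][0] if options else None

def generate(a: str, b: str, n: int = 8) -> str:
    """Repeatedly append the most likely next word."""
    out = [a, b]
    for _ in range(n):
        nxt = predict_next(out[-2], out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("ingredients", "for"))
# -> ingredients for pancakes : flour sugar salt baking powder milk
```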

This process explains why an AI might experience hallucinations or appear to be biased. It has no context for the questions you ask or the things you tell it. If you tell one that it made a mistake, it might say, “Sorry, I mistyped that.” But that’s a hallucination—it didn’t type anything. It only says that because it has scanned enough text to know that “Sorry, I mistyped that” is a sentence people often write after someone corrects them.

Similarly, AI models inherit whatever prejudices are baked into the text they’re trained on. If one reads a lot about, say, physicians, and the text mostly mentions male doctors, then its answers will assume that most doctors are men.
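
The same counting logic shows how this happens mechanically. Given an invented, skewed training text, the most likely completion simply reproduces the skew:

```python
from collections import Counter

# Invented, skewed training text: doctors are usually referred to as "he".
sentences = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the doctor said she would call",
]

# What follows "the doctor said"? The model's "knowledge" is just these counts.
completions = Counter(s.split()[3] for s in sentences)
print(completions.most_common(1)[0][0])  # -> "he": the bias in the text becomes the default
```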

Although some researchers think hallucinations are an inherent problem, I don’t agree. I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction. OpenAI, for example, is doing promising work on this front.

Other organizations, including the Alan Turing Institute and the National Institute of Standards and Technology, are working on the bias problem. One approach is to build human values and higher-level reasoning into AI. It’s analogous to the way a self-aware human works: Maybe you assume that most doctors are men, but you’re conscious enough of this assumption to know that you have to intentionally fight it. AI can operate in a similar way, especially if the models are designed by people from diverse backgrounds.

Finally, everyone who uses AI needs to be aware of the bias problem and become an informed user. The essay you ask an AI to draft could be as riddled with prejudices as it is with factual errors. You’ll need to check your AI’s biases as well as your own.

Students won’t learn to write because AI will do the work for them.

Many teachers are worried about the ways in which AI will undermine their work with students. In a time when anyone with Internet access can use AI to write a respectable first draft of an essay, what’s to keep students from turning it in as their own work?

There are already AI tools that are learning to tell whether something was written by a person or by a computer, so teachers can tell when their students aren’t doing their own work. But some teachers aren’t trying to stop their students from using AI in their writing—they’re actually encouraging it.
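
One common ingredient in these detectors is perplexity: text that a language model finds highly predictable is more likely to have been machine-generated. Here is a self-contained toy version, with a tiny bigram model standing in for a real one; the reference corpus, test sentences, and smoothing are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Toy bigram "language model" trained on a small invented reference text.
reference = "the cat sat on the mat and the dog sat on the rug".split()
bigrams = defaultdict(Counter)
for a, b in zip(reference, reference[1:]):
    bigrams[a][b] += 1

def toy_probability(a: str, b: str) -> float:
    """P(b | a) under the toy model, smoothed so unseen pairs aren't zero."""
    seen = bigrams[a]
    return (seen[b] + 1) / (sum(seen.values()) + 1000)

def perplexity(text: str) -> float:
    words = text.split()
    log_p = sum(math.log(toy_probability(a, b)) for a, b in zip(words, words[1:]))
    return math.exp(-log_p / max(len(words) - 1, 1))

# Heuristic used by some detectors: text the model finds highly predictable
# (low perplexity) is more likely to be machine-generated.
predictable = "the cat sat on the mat"
surprising = "purple algebra dreams furiously"
print(perplexity(predictable) < perplexity(surprising))  # -> True
```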

In January, a veteran English teacher named Cherie Shields wrote an article in Education Week about how she uses ChatGPT in her classroom. It has helped her students with everything from getting started on an essay to writing outlines and even giving them feedback on their work.

“Teachers will have to embrace AI technology as another tool students have access to,” she wrote. “Just like we once taught students how to do a proper Google search, teachers should design clear lessons around how the ChatGPT bot can assist with essay writing. Acknowledging AI’s existence and helping students work with it could revolutionize how we teach.” Not every teacher has the time to learn and use a new tool, but educators like Cherie Shields make a good argument that those who do will benefit a lot.

It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s. Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.

There’s another way that AI can help with writing and critical thinking. Especially in these early days, when hallucinations and biases are still a problem, educators can have AI generate articles and then work with their students to check the facts. Education nonprofits like Khan Academy and OER Project, which I fund, offer teachers and students free online tools that put a big emphasis on testing assertions. Few skills are more important than knowing how to distinguish what’s true from what’s false.

We do need to make sure that education software helps close the achievement gap, rather than making it worse. Today’s software is mostly geared toward empowering students who are already motivated. It can develop a study plan for you, point you toward good resources, and test your knowledge. But it doesn’t yet know how to draw you into a subject you’re not already interested in. That’s a problem that developers will need to solve so that students of all types can benefit from AI.

What’s next?

I believe there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing its benefits. But we need to move fast.

Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology. They’ll need to grapple with misinformation and deepfakes, security threats, changes to the job market, and the impact on education. To cite just one example: The law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labeled so everyone understands when something they’re seeing or hearing is not genuine.

Political leaders will need to be equipped to have informed, thoughtful dialogue with their constituents. They’ll also need to decide how much to collaborate with other countries on these issues versus going it alone.

In the private sector, AI companies need to pursue their work safely and responsibly. That includes protecting people’s privacy, making sure their AI models reflect basic human values, minimizing bias, spreading the benefits to as many people as possible, and preventing the technology from being used by criminals or terrorists. Companies in many sectors of the economy will need to help their employees make the transition to an AI-centric workplace so that no one gets left behind. And customers should always know when they’re interacting with an AI and not a human.

Finally, I encourage everyone to follow developments in AI as much as possible. It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before.
