Friday, November 15, 2024

The risks of AI are real but manageable

By Bill Gates | GatesNotes

The risks created by artificial intelligence can seem overwhelming. What happens to people who lose their jobs to an intelligent machine? Could AI affect the results of an election? What if a future AI decides it doesn’t need humans anymore and wants to get rid of us?

These are all fair questions, and the concerns they raise need to be taken seriously. But there’s a good reason to think that we can deal with them: This is not the first time a major innovation has introduced new threats that had to be controlled. We’ve done it before.

Whether it was the introduction of cars or the rise of personal computers and the Internet, people have managed through other transformative moments and, despite a lot of turbulence, come out better off in the end. Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road.

We’re now in the earliest stage of another profound change, the Age of AI. It’s analogous to those uncertain times before speed limits and seat belts. AI is changing so quickly that it isn’t clear exactly what will happen next. We’re facing big questions raised by the way the current technology works, the ways people will use it for ill intent, and the ways AI will change us as a society and as individuals.

In a moment like this, it’s natural to feel unsettled. But history shows that it’s possible to solve the challenges created by new technologies.

I have written before about how AI is going to revolutionize our lives. It will help solve problems—in health, education, climate change, and more—that used to seem intractable. The Gates Foundation is making it a priority, and our CEO, Mark Suzman, recently shared how he’s thinking about its role in reducing inequity.

I’ll have more to say in the future about the benefits of AI, but in this post, I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them.

One thing that’s clear from everything that has been written so far about the risks of AI—and a lot has been written—is that no one has all the answers. Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed. As I go through each concern, I’ll return to a few themes:

  • Many of the problems caused by AI have a historical precedent. For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom. We can learn from what’s worked in the past.
  • Many of the problems caused by AI can also be managed with the help of AI.
  • We’ll need to adapt old laws and adopt new ones—just as existing laws against fraud had to be tailored to the online world.

In this post, I’m going to focus on the risks that are already present, or soon will be. I’m not dealing with what happens when we develop an AI that can learn any subject or task, as opposed to today’s purpose-built AIs. Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all?

But thinking about these longer-term risks should not come at the expense of the more immediate ones. I’ll turn to them now.

Deepfakes and misinformation generated by AI could undermine elections and democracy.

The idea that technology can be used to spread lies and untruths is not new. People have been doing it with books and leaflets for centuries. It became much easier with the advent of word processors, laser printers, email, and social networks.

AI takes this problem of fake text and extends it, allowing virtually anyone to create fake audio and video, known as deepfakes. If you get a voice message that sounds like your child saying “I’ve been kidnapped, please send $1,000 to this bank account within the next 10 minutes, and don’t call the police,” it’s going to have a horrific emotional impact far beyond the effect of an email that says the same thing.

On a bigger scale, AI-generated deepfakes could be used to try to tilt an election. Of course, it doesn’t take sophisticated technology to sow doubt about the legitimate winner of an election, but AI will make it easier.

There are already phony videos that feature fabricated footage of well-known politicians. Imagine that on the morning of a major election, a video showing one of the candidates robbing a bank goes viral. It’s fake, but it takes news outlets and the campaign several hours to prove it. How many people will see it and change their votes at the last minute? It could tip the scales, especially in a close election.

When OpenAI co-founder Sam Altman testified before a U.S. Senate committee recently, senators from both parties zeroed in on AI’s impact on elections and democracy. I hope this subject continues to move up everyone’s agenda.

We certainly have not solved the problem of misinformation and deepfakes. But two things make me guardedly optimistic. One is that people are capable of learning not to take everything at face value. For years, email users fell for scams where someone posing as a Nigerian prince promised a big payoff in return for their credit card number. But eventually, most people learned to look twice at those emails. As the scams got more sophisticated, so did many of their targets. We’ll need to build the same muscle for deepfakes.

The other thing that makes me hopeful is that AI can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated.

This will be a cyclical process: Someone finds a way to detect fakery, someone else figures out how to counter it, someone else develops counter-countermeasures, and so on. It won’t be a perfect success, but we won’t be helpless either.

AI makes it easier to launch attacks on people and governments.

Today, when hackers want to find exploitable flaws in software, they do it by brute force—writing code that bangs away at potential weaknesses until they discover a way in. It involves going down a lot of blind alleys, which means it takes time and patience.
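
The brute-force search described above is essentially what security researchers call fuzzing. Here is a toy sketch of the idea, with a hypothetical `parse_record` standing in for the software under test and a hidden flaw planted so the search has something to find (the names and the flaw are illustrative, not from any real system):

```python
import random
from typing import Optional

def parse_record(data: bytes) -> bool:
    """Stand-in for real software under test: a toy parser with a
    hidden flaw (it crashes on any input starting with byte 0xFF)."""
    if data and data[0] == 0xFF:
        raise ValueError("unhandled input")  # the exploitable flaw
    return True

def fuzz(target, trials: int = 10_000, seed: int = 0) -> Optional[bytes]:
    """Bang away at the target with random inputs until one crashes it.

    Returns the crashing input, or None if no flaw was found. Most
    trials are blind alleys, which is why this takes time and patience.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(8))
        try:
            target(data)
        except Exception:
            return data  # a blind alley finally paid off
    return None

crash = fuzz(parse_record)
```

Real fuzzers are far smarter about which inputs to try, and that guided search is exactly the step AI is poised to accelerate.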

Security experts who want to counter hackers have to do the same thing. Every software patch you install on your phone or laptop represents many hours of searching, by people with good and bad intentions alike.

AI models will accelerate this process by helping hackers write more effective code. They’ll also be able to use public information about individuals, like where they work and who their friends are, to develop phishing attacks that are more advanced than the ones we see today.

The good news is that AI can be used for good purposes as well as bad ones. Government and private-sector security teams need to have the latest tools for finding and fixing security flaws before criminals can take advantage of them. I hope the software security industry will expand the work they’re already doing on this front—it ought to be a top concern for them.

This is also why we should not try to pause new developments in AI, even temporarily, as some have proposed. Cyber-criminals won’t stop making new tools. Nor will people who want to use AI to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace.

There’s a related risk at the global level: an arms race for AI that can be used to design and launch cyberattacks against other countries. Every government wants to have the most powerful technology so it can deter attacks from its adversaries. This incentive to not let anyone get ahead could spark a race to create increasingly dangerous cyber weapons. Everyone would be worse off.

That’s a scary thought, but we have history to guide us. Although the world’s nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency.

AI will take away people’s jobs.

In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently. That will be true whether they work in a factory or in an office handling sales calls and accounts payable. Eventually, AI will be good enough at expressing ideas that it will be able to write your emails and manage your inbox for you. You’ll be able to write a request in plain English, or any other language, and generate a rich presentation on your work.

As I argued in my February post, it’s good for society when productivity goes up. It gives people more time to do other things, at work and at home. And the demand for people who help others—teaching, caring for patients, and supporting the elderly, for example—will never go away. But it is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That’s a role for governments and businesses, and they’ll need to manage it well so that workers aren’t left behind—to avoid the kind of disruption in people’s lives that has happened during the decline of manufacturing jobs in the United States.

Also, keep in mind that this is not the first time a new technology has caused a big shift in the labor market. I don’t think AI’s impact will be as dramatic as the Industrial Revolution, but it certainly will be as big as the introduction of the PC. Word processing applications didn’t do away with office work, but they changed it forever. Employers and employees had to adapt, and they did. The shift caused by AI will be a bumpy transition, but there is every reason to think we can reduce the disruption to people’s lives and livelihoods.

AI inherits our biases and makes things up.

Hallucinations—the term for when an AI confidently makes some claim that simply is not true—usually happen because the machine doesn’t understand the context for your request. Ask an AI to write a short story about taking a vacation to the moon and it might give you a very imaginative answer. But ask it to help you plan a trip to Tanzania, and it might try to send you to a hotel that doesn’t exist.

Another risk with artificial intelligence is that it reflects or even worsens existing biases against people of certain gender identities, races, ethnicities, and so on.

To understand why hallucinations and biases happen, it’s important to know how the most common AI models work today. They are essentially very sophisticated versions of the code that allows your email app to predict the next word you’re going to type: They scan enormous amounts of text—just about everything available online, in some cases—and analyze it to find patterns in human language.

When you pose a question to an AI, it looks at the words you used and then searches for chunks of text that are often associated with those words. If you write “list the ingredients for pancakes,” it might notice that the words “flour, sugar, salt, baking powder, milk, and eggs” often appear with that phrase. Then, based on what it knows about the order in which those words usually appear, it generates an answer. (AI models that work this way are using what’s called a transformer. GPT-4 is one such model.)
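That pattern-finding idea can be sketched with simple next-word counts on a toy corpus. This is a drastic simplification (a transformer learns vastly richer statistics over billions of documents, not raw bigram tallies), but the flavor is the same: predict the word that most often follows the current one.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "just about everything available online".
corpus = ("mix the flour sugar salt baking powder milk and eggs . "
          "whisk the milk and eggs . "
          "sift the flour sugar and salt .").split()

# For every word, count which words tend to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]
```

On this corpus, `predict_next("flour")` yields "sugar", because that pairing appears most often; the model has no idea what flour or sugar actually are, which is why such systems can confidently generate text with no grounding in fact.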

This process explains why an AI might experience hallucinations or appear to be biased. It has no context for the questions you ask or the things you tell it. If you tell one that it made a mistake, it might say, “Sorry, I mistyped that.” But that’s a hallucination—it didn’t type anything. It only says that because it has scanned enough text to know that “Sorry, I mistyped that” is a sentence people often write after someone corrects them.

Similarly, AI models inherit whatever prejudices are baked into the text they’re trained on. If one reads a lot about, say, physicians, and the text mostly mentions male doctors, then its answers will assume that most doctors are men.

Although some researchers think hallucinations are an inherent problem, I don’t agree. I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction. OpenAI, for example, is doing promising work on this front.

Other organizations, including the Alan Turing Institute and the National Institute of Standards and Technology, are working on the bias problem. One approach is to build human values and higher-level reasoning into AI. It’s analogous to the way a self-aware human works: Maybe you assume that most doctors are men, but you’re conscious enough of this assumption to know that you have to intentionally fight it. AI can operate in a similar way, especially if the models are designed by people from diverse backgrounds.

Finally, everyone who uses AI needs to be aware of the bias problem and become an informed user. The essay you ask an AI to draft could be as riddled with prejudices as it is with factual errors. You’ll need to check your AI’s biases as well as your own.

Students won’t learn to write because AI will do the work for them.

Many teachers are worried about the ways in which AI will undermine their work with students. In a time when anyone with Internet access can use AI to write a respectable first draft of an essay, what’s to keep students from turning it in as their own work?

There are already AI tools that are learning to tell whether something was written by a person or by a computer, so teachers can tell when their students aren’t doing their own work. But some teachers aren’t trying to stop their students from using AI in their writing—they’re actually encouraging it.

In January, a veteran English teacher named Cherie Shields wrote an article in Education Week about how she uses ChatGPT in her classroom. It has helped her students with everything from getting started on an essay to writing outlines and even giving them feedback on their work.

“Teachers will have to embrace AI technology as another tool students have access to,” she wrote. “Just like we once taught students how to do a proper Google search, teachers should design clear lessons around how the ChatGPT bot can assist with essay writing. Acknowledging AI’s existence and helping students work with it could revolutionize how we teach.” Not every teacher has the time to learn and use a new tool, but educators like Cherie Shields make a good argument that those who do will benefit a lot.

It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s. Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.

There’s another way that AI can help with writing and critical thinking. Especially in these early days, when hallucinations and biases are still a problem, educators can have AI generate articles and then work with their students to check the facts. Education nonprofits like Khan Academy and OER Project, which I fund, offer teachers and students free online tools that put a big emphasis on testing assertions. Few skills are more important than knowing how to distinguish what’s true from what’s false.

We do need to make sure that education software helps close the achievement gap, rather than making it worse. Today’s software is mostly geared toward empowering students who are already motivated. It can develop a study plan for you, point you toward good resources, and test your knowledge. But it doesn’t yet know how to draw you into a subject you’re not already interested in. That’s a problem that developers will need to solve so that students of all types can benefit from AI.

What’s next?

I believe there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing its benefits. But we need to move fast.

Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology. They’ll need to grapple with misinformation and deepfakes, security threats, changes to the job market, and the impact on education. To cite just one example: The law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labeled so everyone understands when something they’re seeing or hearing is not genuine.

Political leaders will need to be equipped to have informed, thoughtful dialogue with their constituents. They’ll also need to decide how much to collaborate with other countries on these issues versus going it alone.

In the private sector, AI companies need to pursue their work safely and responsibly. That includes protecting people’s privacy, making sure their AI models reflect basic human values, minimizing bias, spreading the benefits to as many people as possible, and preventing the technology from being used by criminals or terrorists. Companies in many sectors of the economy will need to help their employees make the transition to an AI-centric workplace so that no one gets left behind. And customers should always know when they’re interacting with an AI and not a human.

Finally, I encourage everyone to follow developments in AI as much as possible. It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before.
