Mitigating the risk of AI should be a global priority, open letter says

MARY LOUISE KELLY, HOST:

More than 300 executives, researchers and engineers working on AI issued a dire statement today. It is just one sentence long, and it reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

NPR tech reporter Bobby Allyn joins me to discuss it for this week's All Tech Considered. Bobby, that is pretty dire indeed. Tell me more about who's issuing this warning.

BOBBY ALLYN, BYLINE: Yeah, a San Francisco nonprofit called the Center for AI Safety issued it. And as you noted, it's pretty alarmist, right? It doesn't get more doomsday than extinction by AI.

KELLY: Indeed.

ALLYN: I feel like I'm in an episode of "Black Mirror" or something, you know? But many notable, serious people in the AI world signed it. So Sam Altman is one of them. He runs the Microsoft-backed startup OpenAI, which developed popular AI tools like the image creator DALL-E and the chatbot everyone's playing with these days, ChatGPT. The CEO of DeepMind, an industry-leading AI research lab owned by Google, also signed it, as did Geoffrey Hinton, who's considered the godfather of AI for his work on what's known as neural networks, which are the basis of so many AI applications. I actually talked to Hinton recently, and he echoed the sentiment that he now shares with more than 300 others in the statement.

GEOFFREY HINTON: There's a serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control. Politicians and industry leaders need to take that very seriously. This isn't just a science fiction problem.

KELLY: Isn't just a science fiction problem - but, again, AI leading to humanity's extinction sure sounds like the sci-fi of my '80s youth. How seriously are people taking this warning?

ALLYN: Hinton says we should be worried, but other experts say we definitely should not be freaking out about humanity's extinction just yet, right? I think the extreme language here, Mary Louise, is really intended to attract attention and be sort of a wake-up call to policymakers and regulators. Now, within AI safety circles, there's this phrase that's getting pretty popular. You hear it all the time. It's called P(doom) - P for probability - the probability that AI leads to doom. What does that doom look like? For some experts, their P(doom) is AI taking over a power grid. For others, their P(doom) is AI leading to mass job loss. Others say their P(doom) is already happening now - right? - and it's AI supercharging the harms of social media, including the spread of misinformation. So yeah, the language in the statement is pretty over-the-top, but I think it points to a debate the AI community is having right now.

KELLY: And if you're right that this is intended as a wake-up call for regulators, how are regulators, how are lawmakers responding?

ALLYN: Well, in Washington, there's growing awareness and real interest, but legislation has been slow to emerge. Meanwhile, across the pond in the EU, they're already debating some pretty specific proposals. Lawmakers there have floated something called the AI Act, which calls for sweeping regulations, including requiring tech companies to open up the black boxes of their AI models and show the world how much copyrighted information is in there. We don't know that right now. It would also hold companies responsible for how their AI is used in the world.

But here's what's really interesting. You know, leading voices like Sam Altman recently appeared before Congress and urged regulation. But when the EU started debating the specifics of this bill I just mentioned, he said, no, not so fast - it might not be possible to comply...

KELLY: Right.

ALLYN: ...And that, if the law does pass, OpenAI might have to pull out of the EU. So the cynical take here is these tech executives are asking for regulations, but, you know, they're kind of just trying to look like the good guys...

KELLY: Right.

ALLYN: ...When they know nothing might actually be done.

KELLY: Thank you, Bobby.

ALLYN: Thanks, Mary Louise.

KELLY: NPR's Bobby Allyn. Transcript provided by NPR, Copyright NPR.

Bobby Allyn is a business reporter at NPR based in San Francisco. He covers technology and how Silicon Valley's largest companies are transforming how we live and reshaping society.