Yoshua Bengio (L) and Max Tegmark (R) discuss the development of artificial general intelligence during a live podcast recording of CNBC’s “Beyond The Valley” in Davos, Switzerland, in January 2025.
Artificial general intelligence built like “agents” could prove dangerous as its creators might lose control of the system, two of the world’s most prominent AI scientists told CNBC.
In the latest episode of CNBC’s “Beyond The Valley” podcast, released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, and Yoshua Bengio, dubbed one of the “godfathers of AI” and a professor at the Université de Montréal, voiced their concerns about artificial general intelligence, or AGI. The term broadly refers to AI systems that are smarter than humans.
Their fears stem from the world’s biggest firms now talking about “AI agents” or “agentic AI,” which companies claim will let AI chatbots act as assistants or agents, helping with work and everyday life. Industry estimates vary on when AGI will come into existence.
With that concept comes the idea that AI systems could have some “agency” and thoughts of their own, according to Bengio.
“Researchers in AI have been inspired by human intelligence to build machine intelligence, and, in humans, there’s a mix of both the ability to understand the world like pure intelligence and the agentic behavior, meaning … to use your knowledge to achieve goals,” Bengio told CNBC’s “Beyond The Valley.”
“Right now, this is how we’re building AGI: we are trying to make them agents that understand a lot about the world, and then can act accordingly. But this is actually a very dangerous proposition.”
Bengio added that pursuing this approach would be like “creating a new species or a new intelligent entity on this planet” and “not knowing if they’re going to behave in ways that agree with our needs.”
“So instead, we can consider, what are the scenarios in which things go badly and they all rely on agency? In other words, it is because the AI has its own goals that we could be in trouble.”
Self-preservation could also kick in as AI gets even smarter, Bengio said.
“Do we want to be in competition with entities that are smarter than us? It’s not a very reassuring gamble, right? So we have to understand how self-preservation can emerge as a goal in AI.”
AI tools the key
For MIT’s Tegmark, the key lies in so-called “tool AI”: systems that are created for a specific, narrowly defined purpose, but that don’t have to be agents.
Tegmark said a tool AI could be a system that tells you how to cure cancer, or something that possesses “some agency” like a self-driving car “where you can prove or get some really high, really reliable guarantees that you’re still going to be able to control it.”
“I think, on an optimistic note here, we can have almost everything that we’re excited about with AI … if we simply insist on having some basic safety standards before people can sell powerful AI systems,” Tegmark said.
“They have to demonstrate that we can keep them under control. Then the industry will innovate rapidly to figure out how to do that better.”
Tegmark’s Future of Life Institute called in 2023 for a pause on the development of AI systems that can compete with human-level intelligence. While that has not happened, Tegmark said people are now at least talking about the topic, and it is time to act on putting guardrails in place to control AGI.
“So at least now a lot of people are talking the talk. We have to see if we can get them to walk the walk,” Tegmark told CNBC’s “Beyond The Valley.”
“It’s clearly insane for us humans to build something way smarter than us before we figured out how to control it.”
There are several views on when AGI will arrive, partly driven by varying definitions.
OpenAI CEO Sam Altman has said his company knows how to build AGI and that it will arrive sooner than people think, though he downplayed the technology’s impact.
“My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” Altman said in December.