Elon Musk, chief executive officer of Tesla Inc., during the US-Saudi Investment Forum at the Kennedy Center in Washington, DC, US, on Wednesday, Nov. 19, 2025.
Bloomberg | Getty Images
Elon Musk has again sounded the alarm on the dangers of AI and listed what he considers the three most important ingredients for ensuring a positive future with the technology.
The billionaire CEO of Tesla, SpaceX, xAI, X and The Boring Company appeared on a podcast with Indian billionaire Nikhil Kamath on Sunday.
“It’s not that we’re guaranteed to have a positive future with AI,” Musk said on the podcast. “There’s some danger when you create a powerful technology, that a powerful technology can be potentially destructive.”
Musk was a co-founder of OpenAI alongside Sam Altman, but left its board in 2018 and has publicly criticized the company for abandoning its founding non-profit mission to develop AI safely after it launched ChatGPT in 2022. Musk’s xAI released its own chatbot, Grok, in 2023.
Musk has previously warned that “one of the biggest risks to the future of civilization is AI,” and stressed that rapid advancements are leading AI to become a bigger risk to society than cars or planes or medicines.
On the podcast, the tech billionaire emphasized the importance of ensuring AI technologies pursue truth instead of repeating inaccuracies. “That can be very dangerous,” Musk told Kamath, who is also a co-founder of retail stockbroker Zerodha.
“Truth and beauty and curiosity. I think those are the three most important things for AI,” he said.
He said that, unless it strictly adheres to the truth, an AI learning from online sources “will absorb a lot of lies and then have trouble reasoning because these lies are incompatible with reality.”
He added: “You can make an AI go insane if you force it to believe things that aren’t true because it will lead to conclusions that are also bad.”
“Hallucination” — responses that are incorrect or misleading — is a major challenge facing AI. Earlier this year, an AI feature launched by Apple on its iPhones generated fake news alerts.
These included a falsely summarized BBC News app notification about the PDC World Darts Championship semi-final, which wrongly claimed that British darts player Luke Littler had won the championship. Littler did not win the tournament’s final until the following day.
Apple told the BBC at the time that it was working on an update to resolve the problem, which would clarify when Apple Intelligence is responsible for the text shown in notifications.
Musk added that “some appreciation of beauty is important” and that “you know it when you see it.”
Musk said AI should want to know more about the nature of reality, and that a curious AI would find humanity more interesting than machines alone.
“It’s more interesting to see the continuation if not the prosperity of humanity than to exterminate humanity,” he said.
Geoffrey Hinton, a computer scientist and former Google vice president known as a “Godfather of AI,” said on an episode of the Diary of a CEO podcast earlier this year that there’s a “10% to 20% chance” that AI will “wipe us out.” Among the shorter-term risks he cited were hallucinations and the automation of entry-level jobs.
“The hope is that if enough smart people do enough research with enough resources, we’ll figure out a way to build them so they’ll never want to harm us,” Hinton added.