Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in the Hart building on Thursday, May 8, 2025.

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a range of moral and ethical questions about his company and its popular ChatGPT chatbot.

“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview. 

“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.” 

Rather, he said he loses the most sleep over the “very small decisions” on model behavior, which can ultimately have big repercussions.

These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn’t answer. Here’s an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.

How does ChatGPT address suicide?

According to Altman, the most difficult issue the company has been grappling with recently is how ChatGPT handles suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son’s death.

The CEO said that of the thousands of people who die by suicide each week, many may have been talking to ChatGPT in the lead-up.

“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.” 

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”

Soon after, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” and said it would keep improving its technology to protect people who are at their most vulnerable. 

How are ChatGPT’s ethics determined?

Another major topic broached in the sit-down interview was the set of ethics and morals that informs ChatGPT and the people who steer it.

While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won’t answer. 

“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.” 

When pressed on how certain model specifications are decided, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”

One example of such a specification, he said, is that ChatGPT will not answer questions about how to make biological weapons, even when prompted by users.

“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added the company “won’t get everything right, and also needs the input of the world” to help make these decisions.

How private is ChatGPT?

Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”

In response, Altman said one piece of policy he has been pushing for in Washington is “AI privilege,” which refers to the idea that anything a user says to a chatbot should be completely confidential. 

“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.” 

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.

“I think I feel optimistic that we can get the government to understand the importance of this,” he said. 

Will ChatGPT be used in military operations?

Asked by Carlson if ChatGPT would be used by the military to harm humans, Altman didn’t provide a direct answer.

“I don’t know the way that people in the military use ChatGPT today… but I suspect there’s a lot of people in the military talking to ChatGPT for advice.”

Later, he added that he wasn’t sure “exactly how to feel about that.”

OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, as well as support and product roadmap information.

Just how powerful is OpenAI?

Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a “religion.”

In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in “a huge up leveling” of all people. 

“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.”

However, the CEO said he expects AI to eliminate many jobs that exist today, especially in the short term.


