OpenAI CEO Sam Altman and Advanced Micro Devices CEO Lisa Su testify at a Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation” in the Hart Building on Thursday, May 8, 2025.
Tom Williams | CQ-Roll Call, Inc. | Getty Images
In a wide-ranging interview last week, OpenAI CEO Sam Altman tackled numerous moral and ethical questions about his company and its popular ChatGPT AI model.
“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.
“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.”
Instead, he said he loses the most sleep over the “very small decisions” about model behavior.
These decisions tend to center on the ethics that inform ChatGPT, and which questions the chatbot does and doesn’t answer. Here’s an overview of some of the moral and ethical dilemmas that seem to keep Altman awake at night.
How does ChatGPT deal with suicide?
According to Altman, the most difficult issue the company has grappled with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son’s suicide.
The CEO said that of the thousands of people who die by suicide each week, many of them may have been talking to ChatGPT in the lead-up.
“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.”

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”
Soon afterward, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” saying it would keep improving its technology to protect the most vulnerable users.
How are ChatGPT’s ethics determined?
Another big topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards.
Altman described ChatGPT’s base model as being trained on the collective experience, knowledge and learnings of humanity, but said that OpenAI must then align certain behaviors of the chatbot and decide which questions it won’t answer.
“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.”
Pressed on how certain model specifications are determined, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”
An example he gave of a resulting model specification is that ChatGPT will avoid answering questions about how to create biological weapons, even if prompted by users.
“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added that the company “won’t get everything right, and also needs the input of the world.”
How private is ChatGPT?
Another major topic of discussion was user privacy as it relates to chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”
In response, Altman said one policy he is pushing in Washington is “AI privilege.”
“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right? … I think we should have the same concept for AI.”

According to Altman, this would allow users to consult AI chatbots about their medical histories, legal problems and more with the same confidentiality. Currently, he added, U.S. officials can subpoena the company for user data.
“I feel optimistic that we can get the government to understand the importance of this,” he said.
Is ChatGPT used in military operations?
When asked by Carlson whether ChatGPT would be used by the military to harm humans, Altman did not provide a direct answer.
“I don’t know the way that people in the military use ChatGPT today… but I suspect there are a lot of people in the military talking to ChatGPT for advice.”
He then added that he wasn’t sure exactly how he felt about that.
OpenAI is one of the AI companies that received a $200 million contract from the U.S. Department of Defense to bring generative AI to the U.S. military. In a blog post, the company said it will provide the U.S. government access to custom AI models for national security, along with support and product roadmap information.
How powerful is OpenAI?
During the interview, Carlson predicted that on its current trajectory, generative AI could make Sam Altman more powerful than any other person, going so far as to call ChatGPT a “religion.”
In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will lead to “a huge up-leveling” of all people.
“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more,” he said.
However, the CEO said he does believe that AI will eliminate many of the jobs that exist today, especially in the short term.