Sam Altman, chief executive officer of OpenAI, during a fireside chat organized by Softbank Ventures Asia in Seoul, South Korea, on Friday, June 9, 2023.
A group of ex-OpenAI employees, Nobel laureates, law professors and civil society organizations sent a letter last week to the attorneys general of California and Delaware, citing safety concerns and requesting that they halt the startup’s restructuring efforts.
In the letter, which was delivered to OpenAI’s board on Tuesday, the group wrote that restructuring to a for-profit entity would “subvert OpenAI’s charitable purpose,” and “remove nonprofit control and eliminate critical governance safeguards.”
“No sale price can compensate for loss of control,” the group wrote.
OpenAI, which was created as a nonprofit artificial intelligence research lab in 2015, has been commercializing products in recent years, most notably its viral ChatGPT chatbot. The company is still overseen by a nonprofit parent but announced last year that it would convert into a for-profit company, wresting control from the nonprofit but keeping it as a separate arm.
The change, which requires the approval of principal backer Microsoft and the California attorney general, would remove some potential restraints as the company takes on competitors including Microsoft, Google, Amazon and Elon Musk’s xAI.
The effort sparked controversy among OpenAI employees and AI leaders, as it seemed to contradict the company’s mission and founding principles. Multiple company executives have since left and started their own AI companies.
“OpenAI may one day build technology that could get us all killed,” Nisan Stiennon, who worked at OpenAI from 2018 to 2020, said in a statement. “It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control.”
Jacob Hilton, who worked at OpenAI from 2018 to 2023, said that throughout his time at the company, “leadership repeatedly emphasized that OpenAI’s primary fiduciary duty is to humanity.”
“They claimed this was a legally binding promise, enshrined in the company’s Charter and enforced through its corporate structure,” Hilton said in a statement. “Now they are proposing to abandon that foundational pledge.”
An OpenAI spokesperson said that in becoming a for-profit entity, the company will have a similar structure to competitors like Anthropic and xAI.
“Our Board has been very clear: our nonprofit will be strengthened and any changes to our existing structure would be in service of ensuring the broader public can benefit from AI,” the spokesperson said in an email.

OpenAI’s articles of incorporation include a charitable purpose “to ensure that artificial general intelligence benefits all of humanity” rather than advancing “the private gain of any person.” AGI refers to AI that equals or surpasses human intellect across a wide range of tasks, a goal that OpenAI and its rivals are racing to achieve.
The pressure on OpenAI is tied in part to its recent $300 billion valuation, achieved last month as part of its $40 billion funding round led by SoftBank. The funding could be slashed by as much as $10 billion if the company doesn’t restructure into a for-profit entity by Dec. 31.
OpenAI has also faced significant hurdles due largely to Musk, who co-founded OpenAI and has since become one of Altman’s chief adversaries in a heated legal battle over the company’s decision to convert to a for-profit.
A group of 12 ex-OpenAI staffers, in support of Musk’s lawsuit, asked a court’s permission earlier this month to share its concerns about the company’s restructuring.
Geoffrey Hinton, a computer scientist and Nobel laureate, clarified that the group that sent the letter this week is not connected to Musk’s lawsuit.
“I like OpenAI’s mission to ‘ensure that artificial general intelligence benefits all of humanity,’ and I would like them to execute that mission instead of enriching their investors,” Hinton said in a statement. “I’m happy there is an effort to hold OpenAI to its mission that does not involve Elon Musk.”
The group wrote in the letter that nonprofit control over how AGI is developed and governed is vital to OpenAI’s mission, and that removing such control “would violate the special fiduciary duty owed to the nonprofit’s beneficiaries” and threaten its charitable purpose.
OpenAI’s Superalignment team, announced in 2023 and disbanded last year, had focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years. But the effort ended after the team’s leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures.
Jason Green-Lowe, executive director of the Center for AI Policy, said in a statement that even under OpenAI’s current structure, it was able to back away from its promise to set aside 20% of its compute for safety research.
“If this is how OpenAI behaves when it’s still notionally subject to nonprofit oversight, it’s terrifying to imagine how they’ll behave after they’re freed to focus entirely on maximizing profits,” Green-Lowe said in his statement. “This is not a company that you want to see start behaving with even less social responsibility — the stakes are too high.”
