Former OpenAI board members are calling for greater government oversight of the company amid criticism of CEO Sam Altman's leadership.

According to Helen Toner and Tasha McCauley, two of the former board members who voted to remove Altman last November, the decision to oust him and to strengthen the company's governance safeguards was prompted by "persistent patterns of conduct displayed by Mr. Altman" that "eroded the board's supervision of critical decisions and internal safety protocols."

In an opinion piece published in The Economist on May 26, Toner and McCauley claim that Altman’s behavior, coupled with an overreliance on self-regulation, poses a significant risk of AGI-related calamity.

Despite joining the board with optimism about OpenAI's prospects, buoyed by the seemingly noble intentions of the then-nonprofit entity, the two have since raised concerns about Altman's conduct and the organization as a whole. "Several key leaders had confidentially expressed serious worries to the board," they note, adding that in their view Altman fostered a "dysfunctional culture of deceit" and engaged in "actions [that] could be classified as emotional manipulation."

“Events following his reinstatement to the company — such as his reappointment to the board and the departure of senior safety-focused personnel — foretell a grim future for OpenAI’s experiment in self-governance,” they remark. “Even with the best motives, without external scrutiny, this form of self-regulation will inevitably prove ineffective, especially given the immense financial incentives at play. Governments need to take a more active role.”

In retrospect, Toner and McCauley assert, “If any organization could have successfully managed its affairs while responsibly and ethically developing advanced AI systems, it would have been OpenAI.”

With federal regulation stalled, AI companies have largely been left to self-regulate under minimal external oversight, a trend the ex-board members argue against. Internationally, AI task forces have already identified shortcomings in entrusting tech giants to spearhead safety initiatives. The EU recently issued a substantial warning to Microsoft for failing to disclose potential hazards associated with its AI-driven Copilot and Image Creator, and a recent report from the UK AI Safety Institute found that several major publicly available large language models (LLMs) had weak safeguards against malicious manipulation.

In recent weeks, OpenAI has been a focal point of discussions on AI regulation following a series of high-profile resignations by senior employees who disagreed with the company's direction. After the departures of co-founder Ilya Sutskever and Jan Leike, who together led the superalignment team, OpenAI disbanded that internal safety unit.

Expressing concerns about OpenAI’s future, Leike pointed out that “safety culture and processes have taken a backseat to flashy products.”

Altman also faced backlash over a leaked off-boarding policy requiring departing staff to sign non-disparagement agreements barring criticism of OpenAI, under threat of losing their vested equity in the company.

Following the controversy, Altman and president and co-founder Greg Brockman addressed the issue, noting, "The road ahead will be more challenging than before. We must continue to elevate our safety efforts to match the significance of each new model… We are actively collaborating with governments and various stakeholders on safety. There's no established playbook for navigating the path to AGI."

According to many former OpenAI employees, the traditionally hands-off approach to internet regulation may no longer suffice.
