The safety and regulation of A.I. research and development is a global debate that keeps on giving. Photo by Rock'n Roll Monkey on Unsplash
Artificial Intelligence
By Adebiyi Adedotun

OpenAI and a Tale of Corporate Schism

When the growing discontent at OpenAI came to a head.


Sometime last month, the corporate infighting that ensued at OpenAI, and its lurid unraveling, was prompted by the sudden ouster of its well-liked C.E.O. and A.I. poster boy Sam Altman. What started on a Friday slithered through the weekend into the following week. Then it all went away, leaving, in its wake, an ideological schism that was an entrée into a fractious debate about the safety of A.I. systems. In hindsight, however, the timeline of events offers a more merciful reckoning than it did in real time.

On November 17th, Altman was abruptly fired in an impromptu board meeting spearheaded by OpenAI’s co-founder, chief scientist, and board member Ilya Sutskever. Hours after the news broke, Greg Brockman, the company's co-founder and president, who had simultaneously been stripped of his role as chairman of the board but was to remain at the company, quit in solidarity. Altman's dismissal was as brusque as the board's messaging was vague and rather evasive. In a blog post following Altman's sacking, the board said it had lost confidence in his ability to continue at the helm because “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” What responsibilities, you might ask.

The board's tragic miscalculation provoked public reaction and backlash. Outcry, opinion, condolences, and support for the fallen were well underway. Brian Chesky, the C.E.O. of Airbnb, tweeted that “Sam Altman is one of the best founders of his generation and has made an immense contribution to our industry.” Ron Conway, the American venture capitalist, made it known that “What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.”

Yet, there was more.

Microsoft, OpenAI’s biggest investor, with no board seat and no control, had been blindsided, a shocking revelation to many who didn’t understand OpenAI's atypical corporate structure, in which a for-profit subsidiary is entirely beholden to a nonprofit board. In 2019, Microsoft invested one billion dollars in OpenAI, followed by a subsequent investment of thirteen billion dollars (for a 49% stake) in 2023. However, as reported in The Inside Story of Microsoft’s Partnership with OpenAI, by the time Microsoft was informed, the board's decision to let Altman go had already been made.

The following Monday, November 20th, after Altman's negotiations to be reinstated had failed, Satya Nadella, the Chairman and C.E.O. of Microsoft, announced that Altman and Brockman would "be joining Microsoft to lead a new advanced AI research team.” Consequently, more than ninety percent of OpenAI's employees (702 out of 770) signed a letter to the board of directors threatening to follow Altman to the promised land unless he was reinstated and the board resigned. Notable among the signatories was Sutskever himself.

Then, in what can be recounted as the public crescendo of the comical debacle, Sutskever had a come-to-Jesus moment and backtracked, tweeting: “I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.” But if Sutskever was ever to be involved in the cleanup, he would have to do it as a former board member. As it stands, Altman has been reinstated as C.E.O., flanked by new board members. In part of his reinstatement announcement, he wrote: “I love and respect Ilya, I think he's a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.”

How "Open" is OpenAI?

The name "OpenAI" is a deliberate compound that represents the fundamental ideals of its founders. Having set out as a nonprofit dedicated to developing A.I. systems that are generally smarter than humans—Artificial General Intelligence (A.G.I.)—in 2015, OpenAI also assigned itself the custodian of its safety at large. In its charter, OpenAI pledges its primary fiduciary duty to humanity and “will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

For all of OpenAI's lofty goals, the self-assigned mandate of ensuring the safe development of artificial intelligence was easier said than done. In reality, donations and grants fell short of the substantial money and resources needed for research and development. So in 2019, when push came to shove, OpenAI acquiesced and created OpenAI Limited Partnership (OpenAI LP) as, in its own words, “a hybrid of a for-profit and nonprofit.” Through OpenAI LP, it could attract investors, raise capital, and acquire the much-needed resources to succeed in its mission, with a capped-profit model for investors (returns would be limited to 100 times the initial investment, with any excess returns going to the OpenAI Nonprofit) and a mandate that the Nonprofit maintain its legal authority over OpenAI LP in alignment with its mission. Months after the birth of OpenAI LP, OpenAI announced that Microsoft was investing one billion dollars and partnering to support its vision of building A.G.I.
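To make the arithmetic of that capped-profit model concrete, here is a minimal sketch, using hypothetical numbers rather than OpenAI's actual terms, of how proceeds might be split between an investor and the nonprofit once a 100x cap is reached:

```python
def split_proceeds(investment: float, total_return: float, cap_multiple: float = 100.0):
    """Split hypothetical proceeds under a capped-profit model.

    The investor keeps returns up to `cap_multiple` times the original
    investment; anything beyond that flows to the nonprofit.
    The figures here are illustrative only.
    """
    cap = investment * cap_multiple
    investor_share = min(total_return, cap)
    nonprofit_share = max(total_return - cap, 0.0)
    return investor_share, nonprofit_share

# Example: a $1B investment whose stake eventually returns $150B in total.
investor, nonprofit = split_proceeds(1e9, 150e9)
print(f"Investor receives ${investor / 1e9:.0f}B, nonprofit receives ${nonprofit / 1e9:.0f}B")
# -> Investor receives $100B, nonprofit receives $50B
```

Under this toy example, the investor's upside stops at $100 billion, and every dollar beyond that accrues to the nonprofit that controls the partnership.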

In time, discontent grew between the OpenAI Nonprofit and its for-profit subsidiary over the rapid pace of development, the frequency of public releases, the focus on commercialization, and questions about the safety of the outcomes. But every peaceful coexistence requires a delicate, often unachievable, balancing act. And even though the company's unity, or its tolerance, had so far withstood extreme ideological differences, the first crack in the wall came in 2021, when eleven ex-OpenAI employees, led by OpenAI's former VP of Research Dario Amodei and his sister, Daniela, who had previously served as OpenAI's VP of Safety and Policy, broke off to establish Anthropic as an “AI safety and research company.” In hindsight, this divorce perhaps presaged the inevitable recent rift at OpenAI, which, according to The Atlantic, began “with the release of ChatGPT.”

OpenAI was born an A.I. research company and has become one of the tech juggernauts of recent history. Its life-altering chatbot, ChatGPT, is reportedly well on its way to generating more than one billion dollars annually. However, according to The Atlantic, “the past year at OpenAI was chaotic and defined largely by a stark divide in the company’s direction.” The epic success of ChatGPT opened the floodgates to aggressive commercialization, considerable enough for people like Sutskever to develop a growing skepticism that OpenAI's founding values, and his own, were still being honored: that safety matters and must be prioritized first and foremost, before anything or anyone else.

The Crusade for A.I. Safety and Regulation

The safety and regulation of A.I. research and development is a global debate that keeps on giving, just as with nuclear weapons. It is, as we now know, the fundamental reason for the chasm at OpenAI. If anything, it may be the signal that ushers in the future on a golden platter: a future in which we would not need legal or external apparatuses to act, but internal minds taking responsibility for a cause to which they are beholden.

As far as proponents go, Sutskever is one of many. Calls for a moratorium on the development of new A.I. systems, and letters warning of what may lie ahead, have been released and signed out of concern for society and humanity by scientists, technologists, researchers, and institutions like the Association for the Advancement of Artificial Intelligence, a forty-year-old academic society. Needless to say, however, the genie is already out of the bottle. And the OpenAI drama illustrates the human penchant for risk-taking.

Another leading proponent is the savant computer scientist Geoffrey Hinton, a man who has come to be considered the Godfather of A.I. Hinton is a thought leader in deep learning (mimicking biological intelligence in computers) and artificial intelligence, a field he helped pioneer and nurture with his studies on artificial neural networks (computational models for machine learning inspired by the interconnectedness of neurons in the brain) at a time when it was an unexciting venture whose viability few researchers believed in. In 2018, Hinton received the Turing Award alongside Yoshua Bengio and Yann LeCun for their foundational work on deep learning.
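For readers unfamiliar with the term, a rough sketch may help: an artificial neural network is built from many simple units like the hypothetical one below, each computing a weighted sum of its inputs and passing it through a nonlinearity. The names and numbers here are illustrative, not any particular model.

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A single artificial 'neuron': weigh each input, sum the results,
    add a bias, and squash the total through a sigmoid nonlinearity,
    loosely echoing how a biological neuron fires based on signals
    from the neurons connected to it. Illustrative only."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))  # sigmoid activation

# Example: three input signals with hand-picked weights.
print(neuron(np.array([0.5, 0.1, 0.9]), np.array([0.8, -0.3, 0.4]), bias=0.1))
```

Stack enough of these units in layers, and tune the weights against data, and you arrive at the deep learning systems Hinton spent his career championing.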

Five years prior, in March of 2013, Hinton, along with his co-founders and graduate students Alex Krizhevsky and Ilya Sutskever, sold their A.I. company, DNNresearch, to Google for forty-four million dollars. He continued working at Google until May 2023, when he quit so that he could speak freely about the latent risks of A.I. without the pressure of self-censorship.

Proponents like Hinton and Sutskever, among several others, often share a kindred spirit on A.I. research and safety. They readily acknowledge its potential socioeconomic benefits and are optimistic about its future. But they also go further, and are pessimistic about the severe ramifications of catastrophic misuse and A.I. takeover: an eventual apocalypse or, to put it starkly, the end of humanity. Having made it their life's work, they collectively marvel at the importance of their creations and at how A.I. could eradicate entrenched problems in crime, education, and health, among other sectors; but they are also preoccupied with the new problems that will ensue, such as A.I.-driven unemployment, unchecked development of autonomous weapons, widespread misinformation, and, to quote Sutskever, "infinitely stable dictatorships.”

As far as regulation goes, the United States Congress has so far tried, in vain, to regulate A.I., and the European Union Artificial Intelligence Act, first proposed by the European Commission in April 2021, has been passed but is pending enactment, expected in 2025.

Several experts are in opposition. They believe that the headstrong conviction that the singularity is already here, or is inbound, is a futile preoccupation. But perhaps we first have to empty our cups and understand that there is no such thing as "artificial intelligence" as it is touted in popular discourse. And, second, we need to confront the brutal fact that the imagined future of A.I. is less important than its current realities. Whatever the case may be, scientists have pioneered an unexpected and uncontrollable new frontier with A.I., where an arms race between companies and nations is inevitable, where there is much profit and superiority to be gained, and where regulation can only play catch-up or be feared as a hindrance to competing innovation.