Opinion | I Was on the Board of OpenAI. Here’s How to Stop the Tech Apocalypse.


At one point in my two years on the board of OpenAI, I experienced something I had experienced only once in more than two decades of working in national security: I was freaked out by a briefing. I saw for the first time what would eventually become known as GPT-4, and the power of the tool was clear: This was just the first step on the path to artificial general intelligence.

Today's artificial intelligence is designed for specific tasks and operates within a limited scope based on its programming. Artificial general intelligence will be an evolution of AI that can understand, learn and apply intelligence to a wide range of problems, not just those for which it was specifically trained. Indistinguishable from human cognition, AGI will enable solutions to complex global issues, from climate change to medical breakthroughs. If unchecked, AGI could also lead to consequences as impactful and irreversible as those of nuclear war.

This is something that, in my experience, the OpenAI board took very seriously. At every meeting, OpenAI engineers were able to demonstrate with mathematical certainty the potential future advances of artificial intelligence. Which is why, during my time on the board, the supermajority of our conversations focused on safety and alignment (making sure AI follows human intention).

This is also why the late November 2023 governance debacle at OpenAI was troubling. (I stepped down from the board on June 1, 2023, in order to pursue the Republican nomination for president.) In the span of just five days, four members of the six-person board of directors removed their board chair and fired their CEO and fellow board member, Sam Altman. After more than 90 percent of the company's employees threatened to quit, the board ultimately reinstated Altman. Today's OpenAI board has three members: one holdover and two new directors.

We still don’t really know why the board did what they did. OpenAI certainly has some basic governance questions to answer: Should four people be able to run a $90 billion company into the ground? Is the structure at OpenAI, arguably the most advanced AGI company in the world, too complicated?

However, there are some much bigger philosophical questions generated by this controversy when it comes to the development of AGI. Who can be trusted to develop such a powerful tool and weapon? Who should be entrusted with the tool once it’s created? How do we ensure the discovery of AGI is a net positive for humanity, not an extinction-level event?

As this technology becomes more science fact than science fiction, its governance can't be left to the whims of a few people. As in the nuclear arms race, there are bad actors, including our adversaries, moving forward without ethical or human considerations. This moment is not just about one company's internal politics; it's a call to action to put guardrails in place so that AGI becomes a force for good, rather than the harbinger of catastrophic consequences.

Legal Accountability

Let’s start by mandating legal accountability. We need to ensure that all AI tools abide by existing laws and that there are no special exemptions shielding developers from liability if their models fail to follow the law. We can’t make the same mistakes with AI that we made with software and social media.

The current landscape consists of a fragmented array of city and state regulations, each targeting specific applications of AI. AI technologies, including those used in sectors like finance and health care, generally operate under interpretations of existing legal frameworks applicable to their industries, without specific AI-targeted guidance.


This patchwork approach, combined with intense pressure on AI developers to be first to market, could push the brightest minds in the field to seek a repeat of the regulatory and legal leniency granted to other tech sectors. That would leave gaps in accountability and oversight and could compromise the responsible development and use of AI.

Cybercrime is projected to cost $10.5 trillion globally in 2025. Why? One reason is that Congress and the courts don’t treat software as a product, so it is not subject to strict liability.

Social media is causing an increase in self-harm among teenage girls and providing opportunities for white nationalists to spread hate, for antisemitic groups to promote bigotry, and for foreign intelligence services to try to manipulate our elections. Why? One reason is that Congress carved social media out of the regulatory rules that radio, TV and newspapers must follow.

If AI is used in banking, then both the people who built the tool and the people deploying it must follow, and be held accountable under, all existing banking laws. No exemptions should be granted in any industry just because AI is “new.”

Protecting IP in the AI Era

Second, let's protect intellectual property. Creators, who produce the data that trains these models, should be appropriately compensated when their creations are utilized in AI-generated content.

If someone wrote a book, earned profits from it and, in the process, used material from my blogs beyond the legal doctrine of fair use, I would be entitled to royalties. The same framework should apply to AI.

Companies like Adobe and Canva are already enabling creators to earn royalties when their content is used. Adapting existing copyright and trademark law so that AI companies must compensate creators for their content can also ensure a steady stream of data to train algorithms, which in turn will incentivize a robust industry of content creators to keep producing high-quality work.
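To make the economics concrete, here is a minimal sketch of how a royalty pool might be split pro rata among creators whose licensed work trains a model. The scheme, field names and numbers are illustrative assumptions on my part, not a description of how Adobe, Canva or any AI company actually pays out.

```python
from dataclasses import dataclass


@dataclass
class LicensedWork:
    creator: str
    tokens_used: int  # amount of the creator's material included in the training corpus


def split_royalty_pool(works: list[LicensedWork], pool_dollars: float) -> dict[str, float]:
    """Split a fixed royalty pool pro rata by each creator's share of licensed training data.

    Hypothetical scheme for illustration only; a real program might weight by media type,
    exclusivity or downstream usage instead of raw token counts.
    """
    total = sum(w.tokens_used for w in works)
    if total == 0:
        return {w.creator: 0.0 for w in works}
    return {w.creator: pool_dollars * w.tokens_used / total for w in works}


if __name__ == "__main__":
    corpus = [
        LicensedWork("novelist", 800_000),
        LicensedWork("blogger", 150_000),
        LicensedWork("photojournalist", 50_000),
    ]
    # With a $10,000 pool: novelist $8,000, blogger $1,500, photojournalist $500
    print(split_royalty_pool(corpus, pool_dollars=10_000.0))
```

However the accounting is done, the point is the same: the obligation to compensate creators should travel with the data.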

Implementing Safety Permitting

Third, we should implement safety permitting. Just as a company needs a permit to build a nuclear power plant or a parking lot, developers of powerful AI models should need to obtain a permit too. This will ensure that powerful AI systems operate under safe, reliable and agreed-upon standards.

The Biden administration has made valiant efforts to continue the trend set by American presidents since Barack Obama to address the issue of AI with executive orders. However, President Joe Biden’s recent executive order to address safety permitting missed the mark. It was the equivalent of saying, “Hey y’all, if you are doing something interesting in AI let Uncle Sam know.”

The White House should use its convening power to come up with a definition of really powerful AI. I would recommend the White House prioritize defining powerful AI by its level of autonomy and decision-making capabilities, especially in contexts where AI decisions have serious implications for individuals' rights, safety and privacy. Additionally, attention should be paid to AI systems that process extensive amounts of personal and sensitive data, as well as those that could be easily repurposed for harmful or unethical purposes.

To ensure comprehensive safeguards against the risks of really powerful AI, any entity producing an AI model that meets this new standard should be required to apply to the National Institute of Standards and Technology for a permit before releasing its product to the public.
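As a thought experiment, the threshold could be expressed as a simple checklist. The criteria below mirror the ones I listed above, autonomy, impact on rights and safety, sensitive data, and ease of repurposing, but the scores and cutoff are placeholders of my own; the real definition would have to come out of the White House and NIST process.

```python
from dataclasses import dataclass


@dataclass
class ModelProfile:
    autonomy: int          # 0-3: how independently the system acts on its own decisions
    decision_impact: int   # 0-3: effect of those decisions on rights, safety and privacy
    sensitive_data: int    # 0-3: volume of personal or sensitive data it processes
    repurposing_risk: int  # 0-3: ease of repurposing for harmful or unethical uses


PERMIT_THRESHOLD = 8  # placeholder cutoff, not a real regulatory number


def needs_safety_permit(model: ModelProfile) -> bool:
    """Return True if this hypothetical pre-release permit test is triggered."""
    score = (model.autonomy + model.decision_impact
             + model.sensitive_data + model.repurposing_risk)
    return score >= PERMIT_THRESHOLD


# Example: a highly autonomous system trained on large amounts of personal data
frontier_model = ModelProfile(autonomy=3, decision_impact=3, sensitive_data=2, repurposing_risk=2)
print(needs_safety_permit(frontier_model))  # True under these placeholder weights
```

Whatever the final criteria look like, the key design choice is that the test is applied before release, not after harm has already occurred.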

A Vision for the Future of AI

At the center of all these regulations are transparency and accountability. Transparency means that the workings of an AI system are understandable, allowing experts to assess how decisions are made, which is important to prevent hidden biases and errors. Accountability ensures that if an AI system causes harm or makes a mistake, it's clear who is responsible for fixing it, which is vital for maintaining public trust and ensuring responsible usage of AI technologies.

These values are particularly important as AI tools become more integrated into critical areas like health care, finance and criminal justice, where decisions have significant impact on people's lives.

The events at OpenAI serve as a pivotal lesson and a beacon for action. The governance of artificial general intelligence is not merely a corporate issue but a global concern, impacting every facet of our lives.

The path ahead demands robust legal frameworks, respect for intellectual property and stringent safety standards, akin to the meticulous oversight of nuclear energy. But beyond regulations, it requires a shared vision. A vision where technology serves humanity and innovation is balanced with ethical responsibility. We must embrace this opportunity with wisdom, courage and a collective commitment to a future that uplifts all of humanity.
