How can Washington rein in AI’s biggest threats? This tech exec has a plan.

Washington’s best chance of combating the destructive effects of artificial intelligence begins with restricting the sale of technologies needed to develop it and appointing a cabinet-level regulator to oversee its use, according to one leading AI executive.

Those are just two of the steps that Mustafa Suleyman, the chief executive of Inflection AI and co-founder of Google DeepMind, argues are necessary to prevent the most powerful AI systems from being used to topple governments, engender corporate giants and otherwise scramble the power centers that exist today.

Suleyman may be an unexpected messenger to deliver such a warning, having built his name and fortune as an AI pioneer. But he’s now one of the few industry leaders pushing the government to exert more control over the fast-moving technology — including regulations that go beyond the voluntary safety rules he signed at the White House earlier this summer.

“We should encourage regulatory experimentation and risk taking, because this is a moment when I think no one really knows for sure how this is going to roll out,” Suleyman said in an interview on the POLITICO Tech podcast. “And so we've got to sort of try to avoid being hypercritical of efforts that are genuinely trying to figure out a sensible path here.”

In his new book, “The Coming Wave,” Suleyman writes that regulation alone cannot thwart the most pressing threats as AI-powered technology becomes cheaper and more ubiquitous. But governments stand a better chance of keeping pace with AI’s evolution if they install tech-savvy overseers with the knowledge and resources to put checks on industry.

“It's the missing piece in a lot of policymaking and government decision making,” Suleyman said. “I mean, it is a travesty that we don't have senior technical contributors in cabinet and in every government department given how critical digitization is to every aspect of our world.”

Suleyman predicts fully autonomous AI is less than a decade away, and to “buy time,” the U.S. government should leverage “choke points” by restricting the sale of critical technologies to China and other adversaries. That includes high-tech microchips made by Nvidia and cloud computing services from the likes of Google, IBM and Amazon.

“Export controls are then not just a geostrategic play but a live experiment, a possible map for how technology can be contained but not strangled altogether,” Suleyman writes in “The Coming Wave.” “Eventually, all these technologies will be widely diffused. Before then, the next five or so years are absolutely critical, a tight window when certain pressure points can still slow technology down. While the option is still there, let’s take it and buy time.”

A number of Suleyman’s ideas put him at odds with peers in the industry — including top officials at his old employer, Google — who have advocated against a single government agency to regulate AI and have pushed for regulatory measures that impose few restrictions or legal liabilities.

Suleyman contends that tech executives should not resist such government intervention, and should abandon the “pessimism aversion” that leads them to ignore the ill effects of their inventions. The industry must also revamp business models that prioritize profit over the negative consequences that AI can have on society, he argues.

That’s the approach Suleyman has taken with Inflection AI, he said, establishing the startup as a “public benefit corporation” in which executives have obligations that go beyond simply generating value for shareholders.

“This is a new kind of legal structure. It’s experimental. It's not clear that you will always get it right, but I do think it's a first step in the right direction to give us a proper legal framework to try to make decisions that really are in the best interests of people long term and don't just optimize for short-term profit maximization,” he said in an interview.

Pandemic-induced downtime forced Suleyman to confront the societal threats posed by the very technology he has a hand in developing, he writes. Before launching Inflection AI, he co-founded London-based AI company DeepMind, which Google bought in 2014, and later became a key figure in crafting Google’s AI policies.

Suleyman was among the executives to join President Joe Biden at the White House earlier this summer and signed a voluntary agreement to implement certain AI safety practices. He was joined by executives from Meta, Microsoft, Google, Amazon, OpenAI and Anthropic, many of whom are also participating in next week’s Senate AI forum.

The White House’s voluntary commitments lack enforcement and will not be sufficient on their own, Suleyman acknowledges. But they set the tech industry and government on a path forward, one that will depend on both sides abandoning the entrenched positions that Suleyman argues are “dangerous” and “unproductive.”

“There's just extremist positions on both sides these days,” he said. “There's just a real apathy and cynicism for politics at a time when we actually need to lean into it and really try to progress it.”

Annie Rees contributed to this report. 

To hear the full interview with Suleyman and other tech leaders, subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts. 