Opinion | It’s Time for the Government to Regulate AI. Here’s How.

Amazon. Google. Facebook. Microsoft. These juggernauts have all found themselves on the receiving end of state and federal antitrust lawsuits alleging that they’ve used anticompetitive tactics to amass power and stifle budding competition to their core businesses. As a result, these companies wield incredible power over industry, media, politics and our everyday lives.

Now, they’re poised to control an emerging technology that will be fundamental to the future of the American economy: artificial intelligence.

Leaving the development of such a revolutionary technology to a few unregulated mega-corporations is short-sighted at best and dangerous at worst. While AI might be new, the problems that arise from concentration in core technologies are not. To keep Big Tech from becoming an unregulated AI oligopoly, we should turn to the playbook regulators have used to address other industries that provide fundamental services, like electricity, telecommunications and banking.

As AI becomes an integral part of a wide range of products and services, such regulations would prevent these dominant players from abusing their economic power. They would also facilitate innovation and competition, even bringing new, unexpected players into the market — like a public option for the cloud computing infrastructure that is critical to any AI development. Such a public option could democratize the technology, making AI development more accessible to a range of competitors and researchers.

To understand why antimonopoly regulation, and a public option, are so important, we first have to understand the threats posed by an unregulated AI oligopoly. One of the greatest concerns is vertical integration up and down something called the “AI stack.” The stack is like the supply chain for AI. It starts with physical hardware and ends with apps, like ChatGPT. Altogether, there are four main layers: chips, cloud infrastructure, models and apps. Vertical integration means that a single company amasses power and control at multiple layers of the stack.

It may seem like everyone is developing a new AI-driven application — but a peek at the lower, hidden layers of the stack reveals a staggering amount of concentration. At the bottom are microprocessor chips, the semiconductors that make computing possible. One company, Nvidia, dominates the design of the most advanced and powerful chips; another, Taiwan Semiconductor Manufacturing Company, dominates their production.

Chips are then sold to companies that provide cloud infrastructure — the huge server farms that provide the computing power needed to train and operate AI at scale. Amazon Web Services is by far the biggest player, but Google Cloud Platform and Microsoft Azure are also significant.

All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, and Meta has LLaMA. Amazon recently invested $4 billion in one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.

Finally, returning to the app layer, the models are fine-tuned to power specific products, such as ChatGPT, or integrated into existing ones like Microsoft’s Bing search engine. Here there is far greater competition, as we might expect. But the Big Tech companies are major players too.
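
To see the whole stack at a glance, here is a minimal illustrative sketch in Python. The layer names and example companies are the ones this article has just walked through; the grouping is a simplification for illustration, not a market map.

```python
# Illustrative map of the four-layer "AI stack" described above.
# Layer names and example players come from this article; the
# groupings are simplified and not exhaustive.
AI_STACK = {
    "chips": ["Nvidia (design)", "TSMC (manufacturing)"],
    "cloud infrastructure": ["Amazon Web Services", "Google Cloud Platform", "Microsoft Azure"],
    "models": ["Google (Bard)", "Meta (LLaMA)", "OpenAI (Microsoft-backed)", "Anthropic (Amazon-backed)"],
    "apps": ["ChatGPT", "Bing search"],
}

# Vertical integration: the same parent company showing up at more than one layer.
for layer, players in AI_STACK.items():
    print(f"{layer}: {', '.join(players)}")
```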

All this vertical integration poses a particular concern. The existing Big Tech giants are already entrenched up and down the stack, and companies with power at one layer in the stack could give themselves an advantage, and shut out the competition, at another layer. Imagine trying to run an AI-based legal services company that helps people draft court documents. You might rely on Amazon Web Services for your cloud computing. But what if Amazon decides to get into the legal-services game too? It could charge you higher prices or degrade your service. It would have visibility into your business, which it could use to copy your ideas. Self-preferencing hurts innovation because it means a less vibrant ecosystem of platform users. Why bother investing in a new AI-based idea if you know that another big company might copy you and take all the profit?

Concentration in the cloud and model layers may be the most alarming, because high costs and huge scale make it difficult for new businesses to break into the market. According to some estimates, Microsoft would need 20,000 servers with eight Nvidia chips each to operate ChatGPT for all Bing users. At a cost of $200,000 per eight-chip server, that is an extraordinary expense of $4 billion — and it is a tiny fraction of what it would cost for Google, which processes around 30 times more search volume than Bing does. These barriers to entry mean that smaller companies don’t stand a chance of competing in this market, leaving it to the big, entrenched players. Foundation models are also built on vast troves of data, which take extraordinary cost and effort to collect and process. Big Tech companies have spent years collecting and buying data, giving them a huge head start.
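
For readers who want to check the math, here is the same back-of-the-envelope calculation as a short Python sketch. The server count, per-server price and 30-times search-volume ratio are the figures cited above; scaling Google’s cost linearly from Bing’s is this sketch’s assumption, not a published estimate.

```python
# Back-of-the-envelope check of the server-cost estimate cited above.
servers = 20_000            # estimated servers to run ChatGPT for all Bing users
cost_per_server = 200_000   # dollars per eight-chip Nvidia server
bing_cost = servers * cost_per_server
print(f"Bing-scale buildout: ${bing_cost:,}")      # $4,000,000,000

# Naive linear extrapolation to Google, which processes roughly 30 times
# Bing's search volume (linear scaling is this sketch's assumption).
google_cost = 30 * bing_cost
print(f"Google-scale buildout: ${google_cost:,}")  # $120,000,000,000
```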

So what should we do about this? On the one hand, the risks of concentration are real. The fewer companies there are in a given market, the less pressure they face to innovate in ways that benefit consumers and the more power they have to harm consumers and competition. On the other hand, in the AI economy, cloud computing and foundation models are what electricity was to the early 20th century: essential inputs to dozens of uses, many of which we have yet to imagine. As a result, we might actually want monopolistic scale in some layers of the stack. It takes a lot of computing power (and carbon) to run the most powerful models, and models trained on a lot of good data are better than ones without it. These costs and this scale make it hard for small businesses to replicate what the tech giants can do.

There is a longstanding American tradition of regulating businesses with these features using tools from the law of networks, platforms and utilities. We accept that it doesn’t make sense to have a dozen competing electricity or telephone providers in the same neighborhood — but we also expect government to closely regulate these local monopolies. One form of regulation we could apply to AI is structural separation: not allowing a business to operate at multiple layers of the stack. For example, banks have long been prohibited from running commercial businesses, out of concern that they would use their power over money to favor their own enterprises over the competition. In the AI realm, structural separation might mean blocking cloud providers from also running the businesses that rely on the cloud to reach customers.

A related concept is nondiscrimination: rules that require businesses to offer all users equal service and prices. Just as net neutrality rules prevent Comcast from favoring Peacock while throttling your access to Netflix, these rules would prevent Amazon from favoring its affiliated entities while stifling competitors. Regulation can also include licensing requirements that ensure safety. These and similar rules have governed a range of businesses for generations, including railroads, airlines, telecommunications services, electric utilities and banks. They could be adapted to tech companies too, as we argue in a new paper.

We could also think even bigger. The federal government could offer its own competing service by creating a public option for cloud computing. Imagine a publicly funded, publicly run supercomputer that could serve government agencies and researchers who want to solve public problems, rather than using AI to make tech platforms even more addictive. That public cloud could offer businesses a more affordable alternative to the Big Tech companies. And because it would be free from the profit motive, we could trust that it would not put its own commercial interests ahead of society’s or its users’.

Lawmakers have a range of tools at their disposal. The question is whether they will use them. We have spent two decades learning the hard way what happens when tech companies have unchecked, unregulated power to swallow up markets and eliminate competition. The rise of AI offers what might be our last, best chance to get this power under control.
