In the nearly two-week flood of violence and disinformation triggered by the Israel-Hamas war, powerful U.S. figures — including senators and the New York attorney general — have called on online platforms to stem the tide.
But there’s little they can do about it.
American politicians have been reduced to issuing sternly worded letters while their European counterparts deliver threats with potentially expensive consequences. In Brussels, officials have already launched an investigation of Elon Musk’s X, and have the power to fine companies for hosting violent content and disinformation.
In Washington, powerful members of Congress largely sputtered, issuing public demands for social media accountability with no clear way to enforce them.
On Tuesday, Sen. Michael Bennet (D-Colo.), a vocal critic of tech platforms, sent a letter to leaders of X, Meta, TikTok and Alphabet demanding answers about the number of posts removed and the number of employees dedicated to content moderation. The office of Rep. Cathy McMorris Rodgers (R-Wash.) told POLITICO the four companies agreed to provide briefings this week to the staff of the House Energy and Commerce Committee she chairs, following her demand for information on their content moderation policies. Rep. Frank Pallone (D-N.J.), the ranking member on the committee, called on Meta, X and YouTube to “vigorously enforce” their terms of service.
In New York, Attorney General Letitia James wrote to Google, X, Meta, TikTok, Reddit and Rumble on Friday demanding answers about how the sites were addressing calls for violence on their platforms.
Yet none of their appeals have the teeth to force any change, thanks to years of stalled efforts in Congress to regulate online content, the First Amendment's free speech protections and a unique liability shield that tech firms have enjoyed since the 1990s.
Washington’s flimsy response the past two weeks stands in sharp contrast to the European Union, where officials quickly deployed their new Digital Services Act to launch an investigation into X last week over its handling of violent content around the Hamas attacks. Brussels has also sent warnings to Meta, TikTok and Google’s YouTube. Regulators have the authority to fine a company up to 6 percent of its global revenue for failing to remove violent content and disinformation.
In the absence of federal law, the White House has also reached out directly to social media companies to raise concerns about their platforms, according to Nathaniel Fick, the inaugural U.S. ambassador at large for cyberspace and digital policy.
“We are in regular dialogue with the tech platforms on these issues of responsible behavior at a volatile time, obviously starting at a position of respect for the First Amendment,” Fick said on the POLITICO Tech podcast.
The White House’s outreach was striking in light of the political blowback the Biden administration has faced over its dealings with social media firms, including a GOP-led lawsuit claiming the administration violated the First Amendment by allegedly censoring content during the coronavirus pandemic. House Judiciary Chair Jim Jordan presides over an investigation into whether the executive branch “coerced or colluded” with tech firms to censor speech.
One reason U.S. officials are hamstrung is Section 230 of the 1996 Communications Decency Act, which shields platforms from liability over most of the content they disseminate. President Joe Biden has called on Congress to “fundamentally reform” the statute.
In Congress, Bennet has proposed creating a new federal entity to regulate social media, but the bill lacks any cosponsors and has never been taken up by a committee.
“We need a federal regulator empowered to write rules to prevent foreign disinformation on digital platforms, increase transparency around content moderation and levy fines to hold these companies accountable,” Bennet told POLITICO.
In New York, James recently introduced legislation with Democratic Gov. Kathy Hochul and state lawmakers to set guardrails around harmful content on social media, but it won’t be considered until the legislature returns next year.
Platforms contacted by POLITICO say they are taking steps to address the toxic content on their sites. YouTube spokesperson Ivy Choi said the site had removed “tens of thousands of harmful videos and terminated hundreds of channels” since the Oct. 7 Hamas attack against Israel and subsequent Israeli bombing campaign in the Gaza Strip. TikTok said it removed over 500,000 videos and closed 8,000 live streams since Oct. 7, and added an unspecified number of additional content moderators who speak Hebrew and Arabic. Meta said that during the first three days of the conflict, it removed more than 700,000 videos that violated its policies against violent content and hate speech, or labeled them as disturbing. X did not respond to a request for comment.
In Europe, meanwhile, a European Commission official told POLITICO that the White House had praised the EU’s approach.
“We’ve had some high U.S. representatives thanking us for what we’ve done with our regulation on Big Tech in the context of dealing with disinformation, misinformation and illegal content after the Hamas attack,” the official said, speaking anonymously to discuss the matter openly.
The White House declined to comment.
Clothilde Goujard contributed to this report.