Google to require disclosure of AI use in political ads


Starting in November, Google will require that all political advertisements disclose the use of artificial intelligence tools and synthetic content in their videos, images and audio.

As campaigns and digital strategists explore using generative AI tools heading into the 2024 election cycle, Google is the first tech company to announce an AI-related disclosure requirement for political advertisers.

Increased AI scrutiny in politics: Already, a PAC supporting Florida Gov. Ron DeSantis’ run for president used an AI-generated version of Donald Trump’s voice this summer on YouTube.

While the Federal Election Commission hasn’t set rules on using AI in political campaign ads, in August it voted to seek public comments on whether to update its misinformation policy to include deceptive AI ads.

The Google policy change also comes as Congress works on comprehensive legislation to set guardrails on AI and prepares to meet next week with leaders in the generative AI space, including Sundar Pichai, CEO of Google, which owns the AI subsidiary DeepMind.

The specifics: Google’s latest rule update — which also applies to YouTube videos — requires all verified advertisers to prominently disclose whether their ads contain “synthetic content that inauthentically depicts real or realistic-looking people or events.” The company mandates the disclosure be “clear and conspicuous” on the video, image or audio content. Such disclosure language could be “this video content was synthetically generated,” or “this audio was computer generated,” the company said.

A disclosure wouldn’t be required if AI tools were used in editing techniques, like resizing or cropping, or in background edits that don’t create realistic interpretations of actual events.

Political ads that lack the required disclosures will be blocked from running, or removed later if they evade initial detection, a Google spokesperson said; advertisers can appeal or resubmit their ads with disclosures.

Elections worldwide: Google’s policy also updates its existing election ads rules in regions outside the U.S., including Europe, India and Brazil, all of which hold elections in 2024. It will also apply to advertisements using “deepfakes,” synthetically created videos or images designed to mislead, which are banned under the company’s existing misrepresentation policy.

Facebook currently doesn’t require disclosure of synthetic or AI-generated content in its ads policies. It does have a policy banning manipulated media in videos outside of advertisements, and it bans the use of deepfakes.
