The Federal Election Commission voted unanimously to issue emergency regulations banning AI-generated deepfake images, audio, and video in federal political advertising. The move followed a wave of synthetic media depicting presidential and congressional candidates making statements they never made.
The incidents that prompted the action were alarming in scale. A deepfake video of a presidential candidate apparently conceding the election reached 14 million Americans before it was identified as fabricated, and an AI-generated audio clip of a senator endorsing a primary opponent was shared 4 million times within 48 hours.
Under the emergency rules, political campaigns and PACs must include prominent disclosures on any AI-generated content used in advertising. Using AI to create realistic depictions of real candidates saying or doing things they never said or did is now a federal offense, carrying fines of up to $1 million per violation.
Meta, YouTube, and X have agreed to implement real-time AI content detection and labeling for political content on their platforms during the 90 days before any federal election.