The U.S. is poised to enact a federal law banning the nonconsensual distribution of intimate images. The Take It Down Act introduces criminal liability for publishing explicit content, including AI-generated deepfakes.
The law applies to both real and fabricated images, penalizing anyone who shares such content without the subject's consent. Penalties include fines, imprisonment, and restitution to victims.
Online platforms are also affected. Social media and digital services must remove such content within 48 hours of notification and delete duplicates.
It is the first federal law to directly target this form of online exploitation. Previously, enforcement varied from state to state, leaving legal gaps, particularly for adult victims.
The act empowers victims and clarifies law enforcement procedures.
The law targets a specific form of tech-enabled abuse: explicit deepfakes cause real harm and serve no public interest, and they are most often used to harass women and teenagers.
It passed Congress with near-unanimous support and backing from Meta, TikTok, and Google. It’s one of the first federal laws addressing AI-related risks.
Still, rights groups warn the law’s broad language could lead to over-censorship, potentially affecting legal content or political speech.
Supporters argue the law sets a clear standard: digital abuse will no longer go unpunished.