The federal TAKE IT DOWN Act (formally the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act) became law on May 19, 2025, when President Trump signed it. One year later, on May 19, 2026, the Federal Trade Commission begins enforcing its core operative requirement: covered platforms must remove nonconsensual intimate images, including AI-generated deepfakes, within 48 hours of receiving a valid victim notice.

If your organization operates a public website or mobile application where users can post or share content, and you do not yet have a compliant notice-and-removal system live, you have days, not weeks, to close the gap.


What the Law Does

The TAKE IT DOWN Act creates two distinct liability frameworks, one criminal and one civil. The criminal prohibition, which makes it illegal to knowingly publish or threaten to publish nonconsensual intimate images or AI-generated deepfakes of an identifiable person, became effective immediately upon signing. The civil compliance regime, enforced by the FTC, is what hits its first major milestone this month.

The Act covers two categories of content:

  1. Real intimate visual depictions: photographs, video, or other recordings of an identifiable person engaged in sexual conduct or showing intimate body parts, published without consent.
  2. AI-generated synthetic intimate depictions (deepfakes): computer-generated or digitally altered content depicting an identifiable person in intimate or sexual contexts, without consent.

Both categories trigger the same 48-hour removal obligation once a covered platform receives a qualifying victim notice.


Who Is a “Covered Platform”

The Act defines a covered platform as a publicly accessible website or mobile application that:

  • Allows users to upload or share content, and
  • Has more than a de minimis number of users

The statute does not set a specific user threshold. The FTC’s implementing guidance makes clear that this is not limited to large social media companies. Discussion forums, image hosting services, adult content platforms, dating applications, fan sites, and any UGC-enabled property with public accessibility are within scope.

Platforms that operate exclusively in business-to-business contexts, behind login walls with closed memberships, or as internal enterprise tools may be outside the definition โ€” but counsel should evaluate specific facts rather than assume an exemption applies.


The Core Compliance Obligations

1. Establish a Notice-and-Removal System

Covered platforms must create a process through which users (meaning victims, or individuals authorized to act on their behalf) can submit reports of nonconsensual intimate images. The Act does not mandate a specific technical format, but the process must be accessible and functional. Best practice based on FTC guidance is a dedicated web form or email channel that (see the sketch after this list):

  • Is clearly labeled for reporting intimate image abuse
  • Confirms receipt automatically
  • Routes to staff or a vendor capable of evaluating and acting within 48 hours
  • Creates a timestamped record of the report and the platformโ€™s response
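For illustration only, a minimal sketch of such an intake record in Python follows. The field names, dataclass shape, and confirmation wording are assumptions for this sketch, not anything the Act or FTC guidance prescribes.

```python
# Illustrative intake record for an intimate-image abuse report.
# Field names are assumptions for this sketch, not statutory requirements.
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    content_url: str       # URL or identifier of the reported content
    reporter_contact: str  # channel for the automatic receipt confirmation
    description: str       # reporter's description of the content and the violation
    report_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def acknowledge(report: AbuseReport) -> str:
    """Build the automatic receipt confirmation sent back to the reporter."""
    return (
        f"Report {report.report_id} received {report.received_at.isoformat()}. "
        "It has been routed for review under our intimate-image abuse policy."
    )
```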

2. Remove Content Within 48 Hours

Upon receiving a valid notice, the platform must remove or block access to the reported content within 48 hours. This clock runs from receipt, not from when the platform verifies the report. The Act does not provide a verification grace period inside the 48-hour window.
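The arithmetic is deliberately simple, which is part of why a missed deadline is hard to defend. A minimal sketch, assuming the window is measured in ordinary clock hours from the intake timestamp, with no tolling for weekends or holidays:

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(received_at: datetime) -> datetime:
    # The clock starts at receipt of the notice, not at verification.
    return received_at + REMOVAL_WINDOW

def hours_remaining(received_at: datetime) -> float:
    # Negative values mean the platform is already past the deadline.
    remaining = removal_deadline(received_at) - datetime.now(timezone.utc)
    return remaining.total_seconds() / 3600
```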

For platforms with large volumes of reported content or multi-jurisdiction operations, 48 hours is a genuinely tight operational requirement. Staffing, escalation paths, and after-hours coverage are not optional design considerations at this point; they are compliance requirements.

3. Make Reasonable Efforts to Remove Duplicates

The Act requires covered platforms to make reasonable efforts to remove or block access to other copies of the same content on the platform, not just the specific URL reported. This is operationally significant. A victim should not have to report each individual re-upload separately. Platforms with hash-matching or perceptual hashing infrastructure can satisfy this with existing tools. Those without it need a documented process for manual review and takedown of known duplicates.
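As one illustration of the hash-matching approach, the open-source imagehash package (used here with Pillow) computes perceptual hashes whose Hamming distance tolerates resizing and recompression. The matching threshold below is an assumption to tune against your own content, not a legal standard:

```python
# Sketch of perceptual-hash duplicate matching using the third-party
# `imagehash` and `Pillow` packages (pip install ImageHash Pillow).
import imagehash
from PIL import Image

MATCH_THRESHOLD = 8  # max Hamming distance treated as a match; an assumption to tune

def phash_file(path: str) -> imagehash.ImageHash:
    return imagehash.phash(Image.open(path))

def is_duplicate(reported: imagehash.ImageHash, candidate: imagehash.ImageHash) -> bool:
    # Subtracting two ImageHash values yields their Hamming distance; small
    # distances indicate visually near-identical images, such as re-uploads
    # that were resized, recompressed, or lightly cropped.
    return (reported - candidate) <= MATCH_THRESHOLD
```

Perceptual hashing catches near-duplicates that exact cryptographic hashes miss, which matters because re-uploads are rarely byte-identical copies of the reported file.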

4. Provide Clear User Notice

Covered platforms must publish a clear and conspicuous notice to users explaining:

  • What the platform’s obligations are under the Act
  • How users can submit reports
  • What happens after a report is received

This does not require verbose legal language. It does require that the notice be prominent enough that an affected person could reasonably find it; a footer link labeled “Legal” pointing to a dense terms page almost certainly does not satisfy “clear and conspicuous.”


The Safe Harbor

The Act provides a limited safe harbor for platforms that act in good faith. If a covered platform removes content that should not have been removed (a false positive, a misidentification, a disputed claim), the platform is not liable for the removal, provided it acted in good faith.

This safe harbor has a documentation requirement embedded in it. “Good faith” is not self-proving. Platforms that want to rely on the safe harbor should maintain records of:

  • Each report received (timestamp, reporter identity if provided, content description)
  • Each removal action taken (timestamp, content identifier, basis for action)
  • Each case where removal was declined and why

Without records, a platform facing an FTC investigation has no evidence that its process was functioning or that individual decisions were made in good faith.
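The record system need not be elaborate. A minimal sketch, assuming a JSON Lines file stands in for whatever ticketing system the platform already runs, covering the three record types listed above:

```python
import json
from datetime import datetime, timezone

LOG_PATH = "takedown_audit.jsonl"  # illustrative path; any append-only store works

def log_event(event: str, report_id: str, detail: str) -> None:
    """Append a timestamped compliance record. Illustrative calls:
    log_event("report_received", "ab12", "web form; content at /img/123")
    log_event("content_removed", "ab12", "removed /img/123 plus 2 hash-matched copies")
    log_event("removal_declined", "cd34", "reported content is not intimate imagery")
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "report_id": report_id,
        "detail": detail,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```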


FTC Enforcement Authority and Penalties

The Federal Trade Commission enforces the Act’s civil provisions as an unfair or deceptive practice under Section 5 of the FTC Act. The FTC has indicated it will treat non-compliant platforms, particularly those without any functioning notice-and-removal system, as priority enforcement targets.

The Act creates no private right of action, so private parties cannot sue platforms directly under it; enforcement of the civil provisions is FTC-led. However, noncompliant platforms may also face exposure under state laws that predate or run parallel to the Act, including California’s existing deepfake laws and similar statutes in Illinois, Texas, Virginia, and Georgia.

First-time violations can result in civil penalties of up to $51,744 per violation (the inflation-adjusted FTC civil penalty ceiling; the figure is revised annually, so confirm the current amount). Repeat violations or patterns of non-compliance attract substantially higher penalties and potential injunctive relief, including mandatory compliance programs.


What to Audit Before May 19

With the enforcement date three days away as of this writing, the immediate priority is verifying that basic operational requirements are in place:

Live and accessible reporting channel. Does your platform have a working submission mechanism for intimate image abuse reports? Test it from a fresh browser session with no authentication. If a victim cannot find it in under two minutes, it is not accessible enough.
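A quick automated reachability check can back up the manual test. The sketch below assumes a hypothetical form URL and the third-party requests package, and verifies only that the page is reachable without authentication, not that the form itself works:

```python
# Smoke test: is the abuse-reporting page reachable without logging in?
# The URL is a hypothetical placeholder; substitute your platform's real form.
import requests

REPORT_FORM_URL = "https://example.com/report-intimate-image-abuse"

def reporting_channel_is_live() -> bool:
    resp = requests.get(REPORT_FORM_URL, allow_redirects=True, timeout=10)
    # A non-2xx status or a redirect into a login flow means an
    # unauthenticated victim cannot actually reach the form.
    return resp.ok and "login" not in resp.url.lower()

if __name__ == "__main__":
    print("reporting channel reachable:", reporting_channel_is_live())
```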

48-hour response capacity. Is someone, whether internal staff or a third-party moderation vendor, monitoring the intake channel around the clock, including weekends? If your current setup routes reports to a business-hours email queue, you are not meeting the 48-hour obligation.

Duplicate removal process. Do you have any tooling or documented manual process for identifying and removing re-uploads of reported content? If not, document a reasonable interim process now and implement tooling as a near-term priority.

User-facing notice. Does your terms of service, community guidelines, or a dedicated help page explain the TAKE IT DOWN Act obligations and how to report? If not, add language this week.

Documentation system. Are reports and removal actions being timestamped and logged? This does not require custom software; a structured spreadsheet or ticketing system entry is sufficient to establish a record. The important thing is that it is happening.


The Regulatory Context

The TAKE IT DOWN Act is the first federal law to directly regulate AI-generated content that harms individuals, specifically targeting synthetic intimate imagery. It arrived alongside a rapidly expanding state-level landscape: California AB 602 and AB 1978 created civil causes of action for deepfake intimate images, and at least a dozen states have passed or are advancing similar legislation in 2025 and 2026.

For compliance teams, this means federal compliance with the TAKE IT DOWN Act is the floor, not the ceiling. Multi-jurisdiction platforms need to assess whether state laws impose additional notice periods, user rights, or reporting obligations that go beyond the federal framework.

The FTC has also signaled that it views deepfake-related harm as part of a broader AI accountability agenda. Platforms that demonstrate proactive, well-documented compliance with the Act’s requirements are substantially better positioned in any future AI-related investigation than those that treat the law as a checkbox.


Conclusion

The TAKE IT DOWN Act’s enforcement deadline arrives this Tuesday. The obligations are not complex (a functioning notice intake channel, 48-hour removal capacity, duplicate removal effort, user-facing notice, and documentation), but they require operational infrastructure that cannot be assembled overnight.

If your platform does not have these elements live by May 19, 2026, you are exposed to FTC enforcement action. The first enforcement matters in a new regulatory regime typically establish what the agency considers the minimum acceptable standard. It is worth being on the right side of that line.

This article is provided for informational purposes only and does not constitute legal advice. Organizations should consult qualified legal counsel regarding their specific compliance obligations under the TAKE IT DOWN Act and applicable state laws.