Congress has long been accused of moving too slowly on artificial intelligence policy while states rushed to fill the void. That calculus may be shifting. On March 18, 2026, U.S. Senator Marsha Blackburn (R-TN) released a sweeping discussion draft that signals Washington is finally ready to engage seriously on a federal AI rulebook — and the cybersecurity and privacy implications are significant.

Formally titled the “Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act” — yes, that acronym spells out “TRUMP AMERICA AI Act” — the nearly 300-page draft represents one of the most ambitious attempts yet to establish a unified national standard for artificial intelligence governance.


What’s Driving This Now?

The impetus is President Trump’s December 2025 executive order, which directed Congress to develop a federal AI framework that would preempt the patchwork of state-level AI laws that have proliferated over the past two years. White House Special Advisor for AI and Crypto David Sacks called it an effort to create “that federal framework” and push back on “the most onerous and excessive state regulations.”

Blackburn’s draft is explicitly framed as a negotiating starting point — a “discussion draft” designed to give Congress a position at the table as the White House prepares its own separate legislative recommendation. The two proposals are expected to be merged into a final unified bill. Don’t expect quick passage; this is the opening move in what will likely be a lengthy legislative chess match.


The “4 Cs” Framework

Blackburn structures the bill around four priorities she calls the “4 Cs”: Children, Creators, Conservatives, and Communities. While the political branding is unmistakably Trumpian, the substantive policy provisions draw from bipartisan legislation and address real gaps in current AI governance.


Key Provisions You Need to Know

1. Children’s Online Safety — Duty of Care and Age Verification

The children’s safety provisions draw heavily from the Kids Online Safety Act (KOSA), which passed the Senate 91-3 in July 2024 but stalled in the House over First Amendment concerns. Blackburn is using the AI framework to give KOSA another shot at becoming law.

Under the draft, AI developers would face a duty of care — a legal obligation to exercise reasonable care in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users under 17. Platforms would be required to implement tools and safeguards protecting minors and restrict the use of their data.

The chatbot-specific provisions (housed under what the bill calls the “GUARD Act”) go further:

  • Mandatory age verification using government-issued IDs or other reasonable verification methods for all accounts
  • Periodic re-verification of previously verified accounts
  • Required disclosures when users are interacting with an AI system rather than a human
  • Separate reminders distinguishing AI interactions from professional consultations (medical, legal, mental health, etc.)

A private right of action is included for child harms caused by AI systems, covering claims for defective design, failure to warn, breach of express warranty, and unreasonably dangerous products.

2. Section 230 Sunset

Perhaps the most industry-shaking provision: the bill proposes to sunset Section 230 of the Communications Act — the 1996 legal shield that has protected online platforms from liability for third-party content for three decades. Sunsetting 230 would make platforms potentially liable for algorithmic outputs and AI-generated content in ways that could fundamentally reshape how services operate.

The tech industry has fiercely lobbied against any erosion of Section 230. Expect this to be the most contested battleground as the bill moves forward.

3. Copyright, Likeness, and the NO FAKES Act

The copyright provisions integrate the NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), which would give individuals — particularly artists and entertainers — the right to control the use of their digital likeness. Unauthorized AI-generated replicas of a person’s voice or appearance would be prohibited without consent.

Notably, the draft also includes provisions allowing copyright holders to subpoena AI companies to determine whether their work was used to train a model. This is a significant escalation of creator rights in the AI training data debate.

The framework further:

  • Sets federal transparency requirements for marking, authenticating, and detecting AI-generated content
  • Tasks NIST with developing cybersecurity standards around content provenance, watermarking, and synthetic content detection
  • Explicitly states that unauthorized use of copyrighted works to train AI models does not constitute fair use under the Copyright Act — a position that directly contradicts the Trump administration’s stated view and sets up a significant legal debate

4. Bias Audits and Political Affiliation

In a provision that reflects conservative grievances about perceived anti-conservative bias in AI systems, the bill would require third-party audits of high-risk AI systems to detect discrimination based on political affiliation or viewpoint. It also codifies Trump’s executive order directing agency heads to procure only large language models that are “truthful and do not exhibit bias.”

5. AI Infrastructure and Innovation

Beyond the protective provisions, the bill would:

  • Codify the Center for AI Standards and Innovation at NIST
  • Codify the National AI Research Resource within the National Science Foundation
  • Require the Energy Department to enter agreements with data centers to protect ratepayers from electricity cost increases driven by AI compute demand

The Preemption Question

One of the central goals of this legislation is to preempt state-level AI laws and replace them with a single federal standard. But the picture is more nuanced than it first appears.

ZwillGen’s Brenda Leong notes that Section 1701 of the bill broadly preserves “generally applicable” state and local AI laws. That means state bias audit requirements, automated decision-making obligations, transparency requirements, and algorithmic accountability frameworks — like those in Colorado, Illinois, and New York — would likely survive even if this legislation passes.

The KOSA preemption provision adds another wrinkle: it explicitly allows states to go beyond the federal standard to protect children, acknowledging the political difficulty of placing a federal ceiling on child safety protections given how active states have been in this space.


What Industry and Experts Are Saying

Reaction from the policy and legal community has been mixed-to-skeptical.

ZwillGen’s Leong flagged what she called a potentially extraordinary government overreach: under the bill, enforcers could demand access to an AI company’s code, training data, and model weights. “No U.S. regulatory regime has ever conditioned the right to operate on surrendering your entire intellectual property to a government agency on demand,” she said, citing profound constitutional questions about regulatory takings and due process.

EPIC’s Calli Schroder argues the framework “suffers from trying to appeal to both the president and those concerned with AI’s demonstrable harms,” ultimately failing to meaningfully address AI’s problems while enshrining industry interests.

The Computer & Communications Industry Association was more measured, supporting shared goals around youth safety and transparency while warning that “unworkable provisions that unnecessarily hinder innovation or raise serious constitutional questions are fundamentally at odds” with promoting AI development.


What This Means for CISOs and Compliance Teams

For cybersecurity and privacy professionals, several provisions demand immediate attention.

NIST Watermarking Standards — The bill would direct NIST to develop cybersecurity standards for content provenance and watermarking on AI-generated content. Organizations deploying AI in content creation should begin tracking these standards as they develop, as compliance timelines could be tight once enacted.
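The NIST standards do not yet exist, so any implementation today is speculative. Still, the general shape of a provenance check is well understood: bind a cryptographic hash of the content and a generator label into a signed manifest, then verify both the signature and the hash on receipt. The following is a minimal sketch of that pattern using only the Python standard library; the key, field names, and manifest layout are all hypothetical, not drawn from any published standard.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-rotate-in-production"  # hypothetical key for illustration


def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash and generator label into a signed provenance claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, then confirm the hash still matches the content."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest.get("signature", ""))
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())
```

A production system would use asymmetric signatures and a standardized manifest format (the C2PA specification is the likely reference point), but the verification flow — signature check first, content-hash check second — stays the same.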

Third-Party Audit Requirements — High-risk AI system operators should start preparing for mandatory third-party bias and discrimination audits. Building audit-ready documentation now is prudent risk management.

Age Verification Infrastructure — Organizations operating AI chatbots, companions, or consumer-facing AI tools need to assess their current age verification capabilities. The bill’s requirements are technically demanding and operationally complex.

IP Exposure — The provision enabling copyright holders to subpoena training data records creates new legal exposure for AI developers and potentially for enterprise users of AI tools trained on unlicensed data. Review your AI vendor agreements now.

Section 230 Sunset Risk — If Section 230 protections are eliminated or significantly curtailed, organizations operating AI-powered platforms that host or generate third-party content face dramatically increased liability exposure.


The Road Ahead

This is a discussion draft, not a law. The White House is expected to release its own legislative recommendations shortly, and the two frameworks will be negotiated into a unified proposal. The bipartisan elements — KOSA and NO FAKES — give the bill its best chance at broad Senate support, but the more aggressive provisions (Section 230 sunset, the IP demands, political bias audit mandates) will face stiff opposition from industry and potentially from within Congress itself.

The Colorado AI Act, set to take effect June 30, 2026, is meanwhile pressing forward at the state level — a reminder that regardless of Washington’s timeline, AI governance obligations are arriving now.

For organizations operating in regulated industries or deploying AI at scale, the message is clear: the era of AI policy ambiguity is ending. Federal standards are coming. Build your compliance infrastructure accordingly.


Sources: IAPP, Blackburn Senate Office, Roll Call, CyberAdviser, Inside Global Tech, Deadline. This article is provided for informational purposes only and does not constitute legal advice. Consult qualified counsel for compliance guidance specific to your organization.