The UK’s Information Commissioner’s Office (ICO) has sent a clear message to social media platforms: protecting children’s data isn’t optional. Reddit has been fined £19.5 million ($24.6 million USD) for systematic failures to adequately protect children’s personal information, exposing minors under 13 to inappropriate and harmful content they could not understand, consent to, or control.

This enforcement action represents one of the most significant children’s privacy penalties issued by the ICO and demonstrates the serious consequences platforms face when they fail to implement proper safeguards for minor users. For companies operating in the UK or serving UK users, this case provides essential lessons on what regulators expect when it comes to protecting children online.

The Violations: What Reddit Got Wrong

The ICO’s investigation uncovered multiple systemic failures in how Reddit handled children’s personal data:

1. No Age Assurance Mechanism (Until 2025)

Reddit did not implement any meaningful age assurance or verification mechanism before July 2025. This meant:

  • Children under 13 could easily create accounts by simply lying about their age
  • No technical barriers prevented underage users from accessing the platform
  • No identity verification or age-estimation technology was employed
  • Self-declaration was the only check, and it could be bypassed with a few clicks

As UK Information Commissioner John Edwards explained: “Children under 13 had their personal information collected and used in ways they could not understand, consent to, or control. That left them potentially exposed to content they should not have seen.”

2. No Lawful Basis for Processing Children’s Data

Under UK data protection law (the UK GDPR, which mirrors the EU GDPR), processing personal data requires a lawful basis. For children under 13, the standard bases—consent, legitimate interest, contract—generally don’t apply because:

  • Children cannot provide valid consent without parental involvement
  • Platforms can’t claim legitimate interest when the data subject is a vulnerable child
  • Children can’t enter into enforceable contracts

Reddit was processing the personal information of children under 13 without a lawful basis, meaning every piece of data collected, every interaction tracked, and every profile built was unlawful under UK data protection law.

3. No Data Protection Impact Assessment (DPIA)

Before processing data that creates high risks to individuals—especially vulnerable populations like children—organizations must conduct a Data Protection Impact Assessment (DPIA) to:

  • Identify risks to data subjects
  • Assess the severity of potential harms
  • Implement mitigations to reduce those risks
  • Document the decision-making process

Reddit failed to conduct a DPIA to assess and mitigate risks to children before July 2025. This meant the company was operating blind, with no systematic understanding of how its data processing practices affected minors.

4. Exposure to Inappropriate Content

The combination of no age verification and no risk assessment meant children were exposed to:

  • Adult content, including NSFW (not safe for work) subreddits
  • Violent or disturbing material
  • Targeted advertising based on behavioral tracking
  • Potential predatory contact from adult users
  • Algorithmic amplification of engagement-maximizing content, regardless of appropriateness

Children couldn’t understand what data was being collected about them, couldn’t meaningfully consent to that collection, and had no effective control over how their information was used.

The £19.5 Million Penalty: How the ICO Calculated It

The ICO’s methodology for determining the fine provides insight into how regulators assess penalties:

Factors Considered:

  1. Number of children affected - The scale of the violation
  2. Risk of harm - The potential for psychological, developmental, or safety impacts
  3. Duration of failings - How long Reddit operated without proper protections
  4. Reddit’s global turnover - The company’s ability to pay and whether the fine will have a deterrent effect
  5. Nature of the violation - Systematic failures vs. isolated incidents

The £19.5 million figure represents a significant enforcement action, though notably smaller than some GDPR fines issued for other violations. This likely reflects:

  • Reddit’s relatively smaller revenue compared to tech giants like Meta or Google
  • The principle of proportionality requiring fines to be appropriate to the violation
  • Reddit’s eventual implementation of age assurance measures (in July 2025)

July 2025: Reddit’s (Insufficient) Response

In July 2025—presumably after learning of the ICO’s investigation—Reddit finally implemented age assurance measures:

What Reddit Added:

  1. Age verification for mature content - Users must verify age to access NSFW subreddits
  2. Account creation age declaration - Users must declare their age when creating accounts
  3. Presumably, some content restrictions for accounts declaring ages under 13

However, the ICO made clear that Reddit’s response remained insufficient:

The Self-Declaration Problem

UK Information Commissioner John Edwards directly addressed the limitations of Reddit’s approach:

“Relying on users to declare their age themselves is not enough when children may be at risk and we are focusing now on companies that are primarily using this method. I therefore strongly encourage industry to take note, reflect on their practices and urgently make any necessary improvements to their platforms.”

Why Self-Declaration Isn’t Enough:

Self-declaration is the weakest form of age assurance because:

  • Trivial to bypass - Children can simply lie about their birthdate
  • No consequences for lying - There’s no verification or penalty
  • No technical barrier - It’s just a dropdown menu or text field
  • Creates a false sense of compliance - Companies check a box without actually protecting children

The ICO’s statement makes clear that platforms relying primarily on self-declaration should expect similar scrutiny and potential enforcement actions.

What Age Assurance Actually Looks Like

The ICO’s criticism of Reddit’s self-declaration approach raises the question: what would proper age assurance look like?

The Age Assurance Spectrum

Tier 1: Self-Declaration (Weakest)

  • User types or selects their birthdate
  • No verification
  • Easily defeated by anyone who can do basic math
  • Status: Insufficient for high-risk platforms or content

Tier 2: Soft Age Gates

  • Email verification with age-appropriate email providers
  • Behavioral analysis (typing speed, browsing patterns)
  • Device fingerprinting (signals that differ between children’s and adults’ devices)
  • Status: Better than nothing, but still relatively easy to bypass

Tier 3: Age Estimation

  • AI-based facial age estimation from a selfie
  • Voice analysis for age indicators
  • Behavioral biometrics over time
  • Status: More robust, but raises privacy concerns and accuracy issues

Tier 4: Age Verification (Strongest)

  • Government ID verification
  • Credit card verification (adults only)
  • Third-party age verification services
  • Parent/guardian approval with verification
  • Status: Most reliable, but creates privacy concerns and barriers to access

The Balancing Act

Effective age assurance requires balancing multiple considerations:

Privacy vs. Protection:

  • Strong verification (ID checks) protects children but creates privacy risks
  • Weak verification (self-declaration) preserves privacy but fails to protect children
  • The solution: risk-based approach matching assurance level to potential harm

Accuracy vs. Accessibility:

  • Strict verification reduces errors but excludes legitimate users without ID
  • Loose verification is more accessible but allows many children through
  • The solution: multiple pathways with appropriate fallbacks

Cost vs. Compliance:

  • Robust age verification systems are expensive to implement
  • Self-declaration is cheap but insufficient for high-risk platforms
  • The solution: invest in protection or don’t serve minors

Best Practices for Platforms: Learning from Reddit’s Mistakes

Reddit’s £19.5M fine provides a roadmap for what not to do. Here’s what platforms should be doing instead:

1. Conduct a Comprehensive DPIA Before Launch (or Now)

Before processing any data from minors, conduct a thorough DPIA (a minimal record sketch in code follows this checklist):

  • Identify all data processing activities involving children
  • Map data flows from collection through deletion
  • Assess risks to children’s safety, privacy, and wellbeing
  • Evaluate potential harms, including psychological impacts
  • Document mitigation measures and justify any residual risks
  • Review and update annually or when processing changes significantly
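To make "document everything" concrete, here is a minimal sketch of how a DPIA record could be kept in code, so it can be versioned and reviewed alongside the systems it describes. The class and field names (`Risk`, `DPIA`, `residual_risk`) and the annual review threshold are illustrative assumptions, not a prescribed ICO template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str                   # e.g. "under-13s exposed to NSFW content"
    severity: str                      # "low" | "medium" | "high"
    likelihood: str                    # "low" | "medium" | "high"
    mitigations: list[str] = field(default_factory=list)
    residual_risk: str = "unassessed"  # justify anything above "low"

@dataclass
class DPIA:
    processing_activity: str           # e.g. "behavioural ad targeting"
    data_categories: list[str]         # what is collected
    affects_children: bool             # triggers heightened scrutiny
    lawful_basis: str                  # must hold for under-13s specifically
    risks: list[Risk]
    last_reviewed: date

    def needs_review(self, today: date, max_age_days: int = 365) -> bool:
        """Flag DPIAs that have drifted past the annual review cycle."""
        return (today - self.last_reviewed).days > max_age_days
```

Keeping the record in version control makes the last checklist item auditable: every change to `mitigations` and `last_reviewed` leaves a trail a regulator can inspect.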

2. Implement Age-Appropriate Assurance

Match your age assurance mechanism to the risk level (a policy sketch in code follows these three tiers):

Low-Risk Services (educational, age-appropriate content):

  • Self-declaration with parental consent mechanisms
  • Monitoring for suspicious behavior patterns
  • Regular reminders about age requirements

Medium-Risk Services (social media, user-generated content):

  • Age estimation technology
  • Behavioral analysis
  • Progressive verification (more proof required for more features)

High-Risk Services (adult content, financial services, sensitive processing):

  • Government ID verification
  • Third-party age verification services
  • Mandatory parental controls for minor accounts
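One way to operationalize this mapping is to encode the assurance spectrum as an ordered enum and check the implemented mechanism against a per-risk-level minimum. The sketch below does exactly that; the tier names and the policy table are assumptions for illustration, not regulatory thresholds.

```python
from enum import IntEnum

class AssuranceTier(IntEnum):
    """The age assurance spectrum, ordered weakest to strongest."""
    SELF_DECLARATION = 1
    SOFT_AGE_GATE = 2
    AGE_ESTIMATION = 3
    AGE_VERIFICATION = 4

# Illustrative policy: the minimum acceptable tier per service risk level.
MINIMUM_ASSURANCE = {
    "low": AssuranceTier.SELF_DECLARATION,   # educational, age-appropriate content
    "medium": AssuranceTier.AGE_ESTIMATION,  # social media, user-generated content
    "high": AssuranceTier.AGE_VERIFICATION,  # adult content, financial services
}

def is_compliant(service_risk: str, implemented: AssuranceTier) -> bool:
    """A stronger mechanism always satisfies a weaker requirement."""
    return implemented >= MINIMUM_ASSURANCE[service_risk]

# Reddit pre-July 2025: a high-risk service relying on self-declaration alone.
assert not is_compliant("high", AssuranceTier.SELF_DECLARATION)
```

Because the enum is ordered, upgrading a service's risk classification automatically invalidates any weaker mechanism it still relies on.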

3. Establish Lawful Basis for Children’s Data Processing

For children under 13 (or under 16 in some jurisdictions):

  • Parental consent is typically required
  • Consent must be verifiable - prove you obtained it from an actual parent/guardian
  • Minimize data collection - collect only what’s absolutely necessary
  • Provide age-appropriate privacy policies - explain data practices in ways children can understand

4. Implement Technical and Policy Controls

Technical Controls:

  • Default privacy settings for accounts identified as minors (a configuration sketch follows this list)
  • Restricted content filtering for users under age limits
  • Disabled or limited targeted advertising for children
  • Parental dashboard providing visibility and control
  • Enhanced moderation for areas accessible to minors
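As a hedged sketch of the first control, the snippet below applies protective defaults that override any user-chosen settings once an account is identified as a minor. The setting names are hypothetical, not any platform's real configuration API.

```python
# Hypothetical setting names: protective defaults for accounts identified as minors.
MINOR_ACCOUNT_DEFAULTS = {
    "profile_visibility": "private",     # default privacy settings
    "mature_content": "blocked",         # restricted content filtering
    "targeted_ads": False,               # disabled behavioural advertising
    "direct_messages": "contacts_only",  # limits unsolicited adult contact
    "parental_dashboard": True,          # guardian visibility and control
}

def apply_minor_defaults(account_settings: dict) -> dict:
    """Merge so the protective defaults win over user-chosen values.

    A minor should not be able to opt out of these protections, so the
    defaults are applied last and overwrite any conflicting setting.
    """
    return {**account_settings, **MINOR_ACCOUNT_DEFAULTS}
```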

Policy Controls:

  • Clear age requirements in terms of service
  • Age-appropriate content policies enforced algorithmically and by moderators
  • Mandatory content warnings and age gates for mature content
  • Reporting mechanisms specifically for child safety concerns
  • Regular audits of compliance with children’s privacy requirements

5. Document Everything

Regulators will want to see:

  • DPIA documentation showing risk assessment and mitigation
  • Age assurance methodology and effectiveness metrics
  • Decision-making process for data processing activities
  • Incident response records for child safety issues
  • Regular compliance reviews and updates

6. Monitor and Improve

Continuous improvement cycle:

  • Track age assurance effectiveness - How many children bypass controls? (a measurement sketch follows this list)
  • Monitor for child safety incidents - What’s getting through your filters?
  • Gather feedback from parents, child safety organizations, and regulators
  • Update controls as children develop new bypass techniques
  • Stay current with regulatory expectations and industry best practices
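One way to put a number on "how many children bypass controls" is to compare declared ages against a consented audit sample, for example facial age estimation run on a small opt-in cohort. The record shape below is a hypothetical illustration of that approach.

```python
def estimated_bypass_rate(audit_sample: list[dict]) -> float:
    """Estimate the share of accounts declared 13+ that an audit flags as under 13.

    `audit_sample` is a hypothetical list of records such as
    {"declared_13_plus": True, "audit_estimated_age": 11}, drawn from a
    consented audit (e.g. facial age estimation on an opt-in sample).
    """
    declared = [r for r in audit_sample if r["declared_13_plus"]]
    if not declared:
        return 0.0  # nothing to measure against
    flagged = sum(1 for r in declared if r["audit_estimated_age"] < 13)
    return flagged / len(declared)
```

Tracked over time, this rate shows whether changes to your controls are actually working rather than just shipping.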

The Regulatory Landscape: What This Signals

Reddit’s fine is part of a broader enforcement trend focused on children’s online safety:

UK’s Aggressive Stance

The ICO has made children’s privacy a top enforcement priority:

  • Age Appropriate Design Code requiring specific protections for minors
  • Online Safety Act imposing additional duties on platforms
  • Independent enforcement post-Brexit, not bound by GDPR consistency mechanisms
  • Willingness to fine major platforms for systemic failures

Information Commissioner Edwards’ statement signals future focus: “We are focusing now on companies that are primarily using [self-declaration]. I therefore strongly encourage industry to take note, reflect on their practices and urgently make any necessary improvements to their platforms.”

Translation: If your platform relies primarily on self-declared age, expect ICO scrutiny.

Global Children’s Privacy Enforcement

The UK isn’t alone in prioritizing children’s protection:

United States:

  • COPPA (Children’s Online Privacy Protection Act) requires parental consent for under-13s
  • State-level age verification laws emerging in multiple states
  • FTC enforcement actions against platforms violating COPPA

European Union:

  • GDPR requirements for parental consent for children under 16 (or lower, based on member state)
  • Digital Services Act imposing additional obligations
  • Coordinated enforcement through the European Data Protection Board

Australia:

  • eSafety Commissioner with broad child protection powers
  • Online Safety Act requiring age verification for certain content

Trend: Global convergence toward stronger children’s privacy protections and stricter platform accountability.

What This Means for Different Types of Platforms

Social Media Platforms

High Risk, High Scrutiny:

  • Expect age assurance beyond self-declaration to be mandatory
  • Default privacy settings for minors required
  • Targeted advertising to children likely to face restrictions
  • Algorithm transparency regarding content served to minors

Examples of Compliance:

  • Instagram’s separate experience for under-16s with restricted features
  • TikTok’s screen time limits and privacy defaults for minors
  • YouTube Kids as a separate platform with curated content

Gaming Platforms

Particular Challenges:

  • In-game purchases and monetization
  • Voice/text chat with potential predator risk
  • User-generated content and moderation burden
  • Cross-platform play complicating age assurance

Key Controls:

  • Parental approval for purchases
  • Chat restrictions for minor accounts
  • Robust reporting and moderation for user-generated content
  • Age-gated access to certain game modes or features

Educational Technology

Delicate Balance:

  • COPPA compliance in the US, typically through school/district consent
  • GDPR/UK law may require individual parental consent
  • Minimizing data collection while providing educational value
  • Transparency with both schools and parents

Best Practices:

  • Purpose limitation - only collect data necessary for educational objectives
  • Explicit service agreements with schools defining data use
  • Parent dashboard providing visibility into child’s data
  • Regular data deletion when no longer educationally necessary

E-Commerce and Service Platforms

Age Verification Imperative:

  • Many products/services have age restrictions
  • Payment processing often requires an adult account holder
  • Shipping alcohol, tobacco, or adult items requires verification

Solutions:

  • Age verification at purchase, not just account creation (a minimal gate sketch follows this list)
  • Integration with identity verification services
  • Clear policies on age-restricted products
  • Fallback to parental approval mechanisms
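A minimal sketch of the first point: gate checkout on an age verified at purchase time, rather than trusting whatever age the account declared at sign-up. The category thresholds here are illustrative; real thresholds vary by product and jurisdiction.

```python
# Illustrative thresholds; actual rules vary by product and jurisdiction.
AGE_RESTRICTED = {"alcohol": 18, "tobacco": 18, "adult_items": 18}

def checkout_allowed(cart: list[str], verified_age: int | None) -> bool:
    """Permit checkout only if a purchase-time age verification covers
    every restricted item in the cart (None = no verification on file)."""
    thresholds = [AGE_RESTRICTED[item] for item in cart if item in AGE_RESTRICTED]
    if not thresholds:
        return True  # nothing in the cart is age-restricted
    return verified_age is not None and verified_age >= max(thresholds)
```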

The Cost of Non-Compliance

Reddit’s £19.5M fine is substantial, but the total cost of non-compliance extends beyond the penalty:

Direct Financial Costs:

  • £19.5M fine to ICO
  • Legal fees for defense and compliance remediation
  • Implementation costs for new age assurance systems
  • Ongoing monitoring and compliance program expenses

Indirect Costs:

  • Reputational damage - “Reddit fined for failing to protect children”
  • User trust erosion - Particularly among parents and safety-conscious users
  • Regulatory scrutiny - Ongoing monitoring by the ICO and increased inspection likelihood
  • Competitive disadvantage - Competitors can position themselves as safer alternatives
  • Insurance impacts - Higher premiums or difficulty obtaining coverage

Opportunity Costs:

  • Executive time spent on remediation and regulatory response
  • Engineering resources diverted from product development to compliance
  • Delayed feature launches while age assurance systems are implemented

For a company of Reddit’s size, the all-in cost of this violation likely exceeds £50M when accounting for remediation, legal fees, and opportunity costs.

Practical Steps: What to Do Now

If you operate a platform that could potentially have minor users:

Immediate Actions (This Week)

  1. Conduct age distribution analysis - How many of your users might be minors? (a rough estimation sketch follows this list)
  2. Review current age assurance mechanisms - Are you relying solely on self-declaration?
  3. Assess data processing activities - What data are you collecting from potential minors?
  4. Review your DPIA - When was it last updated? Does it address child safety?
  5. Consult legal counsel - Do you understand your obligations in each jurisdiction?
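For step 1, a rough first pass could look like the sketch below, run over an export of declared birthdates. Because children tend to overstate their age, treat the result as a floor on the true share of minors, not an estimate of it.

```python
from datetime import date

def declared_minor_share(users: list[dict], today: date, cutoff: int = 13) -> float:
    """Share of users whose *declared* birthdate makes them under `cutoff`.

    `users` is a hypothetical export of {"birthdate": date, ...} records.
    Declared ages undercount minors, so this is a lower bound only.
    """
    if not users:
        return 0.0
    def age(b: date) -> int:
        # Subtract one if the birthday hasn't occurred yet this year.
        return today.year - b.year - ((today.month, today.day) < (b.month, b.day))
    minors = sum(1 for u in users if age(u["birthdate"]) < cutoff)
    return minors / len(users)
```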

Short-Term Actions (This Month)

  1. Implement enhanced age assurance - Move beyond self-declaration if feasible
  2. Create child safety incident response plan - How will you handle reports of minors on your platform?
  3. Update privacy policies - Ensure age-appropriate language and clear explanations
  4. Train content moderation teams - Recognize and respond to child safety issues
  5. Establish parental consent mechanisms - If you intentionally serve minors

Long-Term Actions (This Quarter)

  1. Build comprehensive age assurance system - Risk-appropriate verification
  2. Develop child-specific user experience - If you’ll serve minors, do it safely
  3. Implement technical controls - Content filtering, privacy defaults, advertising restrictions
  4. Create compliance monitoring program - Ongoing audits and effectiveness measurement
  5. Establish relationships with child safety organizations - Get external perspective on your controls

The Bigger Picture: Online Child Safety as Competitive Advantage

While compliance is mandatory, forward-thinking platforms are recognizing that exceptional child safety practices can be a competitive differentiator:

Parents are increasingly conscious of online safety risks and gravitating toward platforms that demonstrate genuine commitment to protecting children.

Advertisers and investors are considering child safety practices in their due diligence and decision-making.

Regulators are offering “soft landing” approaches for platforms that proactively implement strong protections before enforcement becomes necessary.

Industry leadership on child safety can position a platform as responsible and trustworthy, building long-term brand value.

Conclusion: Protecting Children Isn’t Optional

Reddit’s £19.5M fine delivers an unambiguous message: protecting children’s data and safety online is not optional, and self-declaration is not sufficient for high-risk platforms.

The ICO has made clear that it’s actively investigating other platforms that rely primarily on self-declared age, and similar enforcement actions are likely coming for companies that haven’t yet implemented robust age assurance mechanisms.

For platforms serving or potentially serving minor users, the path forward is clear:

  1. Assess your risk - Are children on your platform? What harms might they face?
  2. Implement appropriate age assurance - Move beyond self-declaration for anything higher than low-risk
  3. Conduct comprehensive DPIAs - Document your risk assessment and mitigations
  4. Establish lawful basis - Ensure you have proper legal grounds for processing children’s data
  5. Monitor and improve - Age assurance and child safety require continuous attention

The cost of compliance may seem high, but it’s far less than the combined cost of fines, remediation, and reputational damage that comes from failing to protect children.

Information Commissioner John Edwards said it best: “Relying on users to declare their age themselves is not enough when children may be at risk… I therefore strongly encourage industry to take note, reflect on their practices and urgently make any necessary improvements to their platforms.”

The question isn’t whether regulators will enforce children’s privacy requirements—Reddit’s fine proves they will. The only question is whether your platform will be compliant before or after you receive a notice of investigation.


Sources:

  • UK Information Commissioner’s Office: “Reddit issued with £14.47m fine for children’s privacy failures” (February 25, 2026)
  • Help Net Security: “Reddit fined $19.5 million for failing to protect children’s personal data” (February 25, 2026)
  • ICO Age Appropriate Design Code
  • COPPA Compliance Guide