Introducing the Child Safety Blueprint

Child sexual exploitation is one of the most urgent challenges of the digital age. AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale.

At OpenAI, we have built and continue to strengthen safeguards to prevent misuse of our systems, and we work closely with partners like the National Center for Missing and Exploited Children (NCMEC) and law enforcement to improve detection and reporting. This work has helped surface where stronger, shared standards are needed across the industry.

Today, we’re introducing a policy blueprint that outlines a practical path forward for strengthening U.S. child protection frameworks in the age of AI. The blueprint incorporates feedback from leading organizations and experts across the child safety ecosystem, including NCMEC, Thorn, and the Attorney General Alliance and its AI Task Force co-chairs, North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown, to ensure it reflects their priorities and supports more effective collaboration to prevent harm to children.

The blueprint focuses on three key priorities: modernizing laws to address AI-generated and altered CSAM, improving provider reporting and coordination to support more effective investigations, and building safety-by-design measures directly into AI systems to prevent and detect misuse.

No single intervention can address this challenge alone. This framework brings together legal, operational, and technical approaches to better identify risks, accelerate responses, and support accountability, while ensuring that enforcement authorities remain strong as technology evolves.

Together, these steps enable the industry to address child safety earlier and more effectively. By interrupting exploitation attempts sooner, improving the quality of signals sent to law enforcement, and strengthening accountability across the ecosystem, this framework aims to prevent harm before it happens and help ensure faster protection for children when risks emerge.

_“As Co-Chairs of the Attorney General Alliance's AI Task Force, we welcome this blueprint as a meaningful step toward aligning the technology sector's child safety practices with the enforcement realities our offices confront every day. We are particularly encouraged by the framework's recognition that effective GenAI safeguards require layered defenses — not a single technical control, but a combination of detection, refusal mechanisms, human oversight, and continuous adaptation to emerging misuse patterns. This mirrors what we see in practice: the threat evolves constantly, and static solutions are insufficient. Getting the prevention architecture right upstream is the single highest-leverage investment the industry can make in child safety._

_Ultimately, the strength of any voluntary framework depends on the specificity of its commitments and the willingness of industry to be held accountable against them. We look forward to continued partnership with OpenAI, NCMEC, and our fellow Attorneys General to ensure these recommendations translate into durable protections for children.”_

—State Attorneys General Jeff Jackson (North Carolina) and Derek Brown (Utah), Co-Chairs of the AI Task Force of the Attorney General Alliance

_“The Attorney General Alliance is leading the way in protecting young people online by bringing together attorneys general, industry leaders, nonprofits, and global partners to advance practical, forward-looking solutions on AI and digital safety. Through collaboration and innovation, AGA is setting a strong standard for how we safeguard youth while responsibly embracing emerging technologies. We applaud OpenAI’s continuing commitment to safety and engagement with AGA and attorneys general in developing a highly valuable blueprint for child safety.”_

—Karen White, Executive Director of Attorney General Alliance

_“Generative AI is accelerating the crime of online child sexual exploitation in deeply troubling ways - lowering barriers, increasing scale, and enabling new forms of harm. But at the same time, the National Center for Missing & Exploited Children (NCMEC) is encouraged to see companies like OpenAI reflect on how these tools can be designed more responsibly, with safeguards built in from the start. No single organization, business or sector can address this alone. We remain committed to working with partners across industry, government, and the child protection community to advance solutions that reduce harm and better support children’s safety.”_

—Michelle DeLaune, President & CEO, National Center for Missing & Exploited Children
