
AI Safety and IP – Can Sora Keep it “PG” with Disney Characters?
“They’re really going to give creepy dudes in their moms’ basements the power to generate video with their beloved Disney characters?”
- my immediate reaction to the Disney x Sora deal
In December 2025, Disney announced a $1 billion investment and three-year licensing agreement with OpenAI, the maker of Sora. The deal granted the generative AI (genAI) platform (and its users) access to over 200 Disney, Marvel, Pixar, and Star Wars characters.
My concern arises from real experience. For the past three years, I have advised multiple AI startups whose tech allows users to generate images at will. Understandably, figuring out how to navigate explicit content ranks high on the list of business and legal priorities for these companies. We recently discussed a real crisis for a genAI startup whose user was generating illegal, explicit content on its platform.
Right after the deal announcement, I saw several parody adult cartoons using the Disney characters. We will not share those images here, but you can imagine the kinds of inappropriate (and adult) situations in which some people might find it entertaining to place Mickey, Elsa, Spider-Man, and other animated favorites. Your imagination would probably not be far off.
Even if you don’t have similar professional experience, when you hear that Disney is giving an AI platform access to its characters, the following questions are obvious:
How are they handling AI safety (including the creation of explicit content)?
What about quality control (preventing content that uses Disney characters in a way that damages the Disney brand)? This is something we and our colleague Anu Kinhal deal with and discuss regularly in the context of licensing transactions.
Content Monitoring/Moderation
Not surprisingly, Sora has implemented comprehensive safety measures to protect against both pornographic content and misuse of Disney's intellectual property.
Sora 2 employs a sophisticated three-tier safety architecture that operates before, during, and after video generation. The system includes:
Input Blocking
Output Blocking
Frame-by-Frame Scanning
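To make the three-tier flow concrete, here is a minimal sketch of a before/during/after moderation pipeline. Everything below is hypothetical: the function names, blocklists, and stub classifiers are illustrative assumptions, not OpenAI's actual implementation.

```python
# Hypothetical sketch of a three-tier moderation pipeline (input check,
# frame-by-frame scan during generation, final output check).
# The classifiers are simple keyword stubs standing in for real ML models.

BLOCKED_TERMS = {"nudity", "explicit"}  # stand-in for a real input classifier

def input_check(prompt: str) -> bool:
    """Tier 1: block prohibited prompts before generation starts."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def frame_check(frame: str) -> bool:
    """Tier 2: scan each frame as it is generated (stubbed as a keyword test)."""
    return "nsfw" not in frame.lower()

def output_check(frames: list[str]) -> bool:
    """Tier 3: a final pass over the assembled video before release."""
    return all(frame_check(f) for f in frames)

def moderate(prompt: str, frames: list[str]) -> str:
    if not input_check(prompt):
        return "blocked_at_input"
    if not all(frame_check(f) for f in frames):
        return "blocked_mid_generation"
    if not output_check(frames):
        return "blocked_at_output"
    return "released"

print(moderate("a dog surfing", ["frame1", "frame2"]))  # released
print(moderate("explicit scene", ["frame1"]))           # blocked_at_input
```

The point of the layering is defense in depth: a prompt that slips past the input filter can still be caught mid-generation or at the final output gate.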

While acknowledging that effectiveness remains imperfect, OpenAI has implemented strict measures to prevent sexual content generation. Recent safety evaluations using thousands of adversarial prompts indicate that Sora successfully blocks 96.04% of adult nudity and sexual content without use of likeness, and 98.4% when a person’s likeness is involved. However, this means there remains a 1.6% chance that sexual deepfakes could be created using someone’s likeness despite safeguards.[1]
The platform prohibits generation of graphic sexual content, nudity involving minors, non-consensual intimate imagery, and content that sexualizes minors. These protections will face a real test to the extent OpenAI’s proposed “Adult Mode” also extends to Sora.
Multiple specialized classifiers work in concert to detect NSFW content:
Multi-modal moderation classifiers scan both text prompts and image/video uploads
NSFW-specific output classifiers examine generated content
Integration with Thorn’s Safer tool to detect known CSAM content, with confirmed matches rejected and reported to the National Center for Missing & Exploited Children (NCMEC)
Human review serves as an additional layer for potential child safety violations
When classifiers detect a potential minor in uploaded images or videos—including through Sora’s “Cameo” feature—subsequent generations face even tighter safety thresholds to prevent additional harmful content.
Quality Control
The Disney partnership includes stringent content restrictions designed to protect the entertainment giant’s family-friendly brand image.
Joint Steering Committee: OpenAI and Disney established a joint steering committee to monitor user-generated content and ensure compliance with a comprehensive “brand appendix” that outlines prohibited scenarios Disney wants to avoid associating with its characters.[2]
Prohibited Activities: The agreement explicitly forbids Disney characters from appearing in content involving:
Sexual content or nudity
Drugs or illicit substances
Alcohol consumption
Violence beyond what’s appropriate for family audiences
Interactions with characters from other media franchises (preventing unauthorized crossovers)
Excluded Content: The license specifically excludes any talent likenesses or voices of Disney actors, addressing concerns from SAG-AFTRA and other Hollywood unions. This separation between character IP and performer attributes represents an important legal boundary. Voices can be deepfaked too, and we have seen several voice-actor contracts that include AI-training provisions.
Oversight and Enforcement: Disney retains significant oversight and control over its intellectual property. The company’s ownership rights extend to content created using its characters on Sora. Disney also has a license to curate and stream the best user-generated videos on Disney+. OpenAI also committed to “age-appropriate policies” and “robust controls to prevent the generation of illegal or harmful content” as part of the agreement.[3]
Technical Enforcement Mechanisms
Sora employs multiple technical measures to enforce these restrictions:
Copyright Detection: The platform actively blocks prompts containing references to copyrighted characters. Any attempt to generate "anthropomorphic" content is scrutinized for similarity to third-party content, particularly Disney properties like Zootopia characters.[4] In the crisis we handled for our own genAI client, we recommended fighting tech misbehavior with tech, including new detection technology like this.
Prompt Filtering: Automated systems block requests that mention prohibited combinations, such as Disney character names paired (within a certain number of words) with terms related to drugs, alcohol, or sexual content.
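A word-proximity rule like the one described above can be sketched in a few lines. This is a hypothetical illustration only: the character names, prohibited terms, and five-word window are assumptions, not the actual filter Sora uses.

```python
# Hypothetical proximity filter: flag a prompt when a licensed character
# name appears within WINDOW words of a prohibited term.
# The name lists and window size are illustrative assumptions.

CHARACTERS = {"mickey", "elsa"}
PROHIBITED = {"alcohol", "drugs"}
WINDOW = 5  # assumed word-distance threshold

def violates(prompt: str, window: int = WINDOW) -> bool:
    words = prompt.lower().split()
    char_positions = [i for i, w in enumerate(words) if w in CHARACTERS]
    bad_positions = [i for i, w in enumerate(words) if w in PROHIBITED]
    # Flag if any character name and prohibited term fall close together.
    return any(abs(c - b) <= window
               for c in char_positions for b in bad_positions)

print(violates("elsa drinking alcohol at a party"))  # True
print(violates("elsa building an ice castle"))       # False
```

A distance-based rule like this catches "Elsa ... alcohol" without blocking every prompt that happens to mention a character name, though real systems would pair it with ML classifiers to handle paraphrases and misspellings.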
Watermarking and Provenance: All videos downloaded from Sora include visible moving watermarks and C2PA metadata providing verifiable origin information, helping to identify AI-generated content and deter misuse. (Elsewhere, we’ve discussed the potential stacking of blockchain technology with AI for greater origin verification).
User Reporting and Human Review: Beyond automated systems, Sora provides in-app reporting mechanisms where users can flag inappropriate content. Human reviewers complement automated moderation for high-impact harms.
Why This Tale Matters for Small Business Operators/Startup Founders
Small business owners, you don’t have to be a big tech company like OpenAI to care about the safety of AI tools on your website or app or used in your operations. Our recent client story about explicit content on an AI platform shows this. Creators, you don’t have to be a big media company like Disney to prioritize the quality control of the intellectual property that you license out to users or business partners. We help our creative, consumer-facing and other clients negotiate quality control (not just ownership protection) for their brand deals and licenses.
Disclaimer. The contents of this article should not be construed as legal advice or a legal opinion on any specific facts or circumstances or investment or fundraising advice. Your viewing and/or use of the contents of this article do not create an attorney-client relationship with Cadet Legal. The contents are intended for general informational purposes only, and you are urged to consult with counsel concerning your situation and specific legal questions you may have.
Additional Sources
https://www.axios.com/2025/12/11/openai-disney-sora
https://www.axios.com/2025/12/11/disney-openai-sora-deal-what-to-know
https://www.nytimes.com/2025/12/11/business/media/disney-openai-sora-deal.html
https://openai.com/index/sora-system-card/
https://sfg.media/en/a/disney-first-major-media-company-to-license-content-to-openai/
https://www.wired.com/story/disney-and-openais-deal-is-a-major-turning-point/
[1] https://cdn.openai.com/pdf/50d5973c-c4ff-4c2d-986f-c72b5d0ff069/sora_2_system_card.pdf?
[2] https://www.spellbook.legal/newsletter/inside-disneys-1b-openai-deal
[3] https://thewaltdisneycompany.com/news/disney-openai-sora-agreement/
[4] https://help.apiyi.com/sora-2-third-party-content-similarity-violation-guide-2.html


