Sora and Sora 2 pose overlapping risks across ethics, law, and creative rights. One core issue is copyright and IP infringement: because users can prompt Sora to generate videos that reference or mimic copyrighted characters, films, or styles, the output may infringe creators' rights. OpenAI currently operates an opt-out model, under which copyrighted content is permitted unless rights holders explicitly opt out, an approach that has drawn criticism from creative industries.
Another major risk is deepfakes: impersonation and misuse of a person's likeness. Users may generate videos portraying real individuals, living or deceased, in contexts that mislead or defame. Even with cameo safeguards, the risk of misuse remains substantial if the system is abused or permissions are mismanaged; cases have already emerged of disturbing videos placing real people in harmful or violent scenes.
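To make the permissions risk concrete, here is a minimal sketch of the kind of likeness-consent gate a cameo feature implies. Everything in it is hypothetical, the `ConsentRegistry`, its fields, and the policy choices alike; this is not OpenAI's implementation, only an illustration of why default-allow behavior or stale permissions are dangerous.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """One person's standing permission for likeness use (hypothetical)."""
    person_id: str
    allowed_contexts: set[str]          # e.g. {"comedy", "music video"}
    expires_at: datetime | None = None  # consent should be time-bound and revocable


class ConsentRegistry:
    """Hypothetical registry mapping person IDs to consent records."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, record: ConsentRecord) -> None:
        self._records[record.person_id] = record

    def revoke(self, person_id: str) -> None:
        # Revocation must take effect immediately, not at the next release.
        self._records.pop(person_id, None)

    def is_allowed(self, person_id: str, context: str) -> bool:
        # Fail closed: missing, expired, or out-of-scope consent all deny.
        record = self._records.get(person_id)
        if record is None:
            return False
        if record.expires_at is not None and record.expires_at <= datetime.now(timezone.utc):
            return False
        return context in record.allowed_contexts
```

The important design choice is the fail-closed default: any gap in the record denies the request. A system that defaults to allow, never expires consent, or applies revocations lazily exhibits exactly the mismanaged-permissions failure mode described above.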
Beyond copyright and likeness, content moderation, misinformation, and broader harm are serious concerns. AI-generated video can amplify disinformation or fabricate events wholesale, and moderation rules are harder to enforce in video than in text or images: meaning depends on motion, audio, and context across thousands of frames rather than a single string or still. There is also the risk of algorithmic bias producing harmful or stereotyped depictions of underrepresented groups. Legally, liability remains unsettled: when a generated video causes harm, it is unclear whether responsibility lies with the user, the platform, or the model provider, and creative industries, regulators, and courts will need to catch up. Any deployment of Sora therefore needs strong transparency, moderation, provenance, and user-control policies.
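As an illustration of what provenance can mean in practice, here is a minimal sketch of signing and verifying a provenance manifest for a generated video. It is a deliberately simplified stand-in: real systems use standards such as C2PA content credentials with public-key signatures and certificate chains, and the shared key, function names, and manifest fields below are all assumptions made for the example.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for the demo; real provenance standards
# (e.g. C2PA) use public-key signatures, not a shared key.
SIGNING_KEY = b"provenance-demo-key"


def sign_manifest(video_bytes: bytes, generator: str) -> dict:
    """Build a signed provenance manifest for generated video bytes (sketch)."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    manifest = {"generator": generator, "sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest matches the bytes and carries a valid signature."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was tampered with or forged
    return claimed["sha256"] == hashlib.sha256(video_bytes).hexdigest()
```

One caveat worth noting: metadata of this kind can be stripped when a video is re-encoded or screen-recorded, so provenance labeling complements moderation and user controls rather than replacing them.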
