OpenAI announced in a joint statement Monday that it will work with Bryan Cranston, SAG-AFTRA and other actors’ unions to prevent deepfakes in its artificial intelligence video creation app, Sora.
The “Breaking Bad” and “Malcolm in the Middle” actor raised concerns after unauthorized AI-generated clips featuring his voice and likeness appeared on the app following the release of Sora 2 in late September, SAG-AFTRA — the Screen Actors Guild-American Federation of Television and Radio Artists — said in a post on X.
“We appreciate OpenAI’s improved policies and guardrails and hope that OpenAI and all companies involved in this work will respect our personal and professional rights to control the reproduction of our voices and likenesses,” Cranston said in a statement.
OpenAI, along with SAG-AFTRA, announced that it will work with United Talent Agency, which represents Cranston, as well as the Association of Talent Agents and Creative Artists Agency, to strengthen guardrails against unauthorized AI generations.
CAA and UTA previously accused Sora of using copyrighted material and posing a risk to customers and intellectual property.
OpenAI blocked Sora videos depicting Martin Luther King Jr. last week at the request of King’s estate after a user created a “disrespectful depiction” of the civil rights leader.
Zelda Williams, the daughter of the late comedian Robin Williams, asked people to stop sending her AI-generated videos of her father shortly after the release of Sora 2.
OpenAI’s approach to copyright restrictions and other issues related to likeness has evolved since the launch of Sora 2 on September 30.
On October 3, CEO Sam Altman updated Sora’s opt-out policy, which previously allowed use of IP unless a studio specifically requested that the material not be used, to give rights holders “more control over character generation.”
Sora has required individuals to opt in to the use of their voices and likenesses since launch, and OpenAI said it is also now working to “quickly respond to complaints we receive.”
The company reiterated its support for the NO FAKES Act, a federal bill that aims to protect against unauthorized replicas of human voices and physical appearance generated by AI.
“OpenAI is committed to protecting our performers from the misuse of their voices and likenesses,” Altman said in a statement. “We were early supporters of the NO FAKES Act, introduced last year, and will always support the rights of performers.”

