A bipartisan group of senators has introduced a new bill aimed at combating the rise of digital deepfakes, a technology that has the potential to manipulate and misrepresent individuals’ likenesses. The No Fakes Act, led by Democratic Senator Chris Coons of Delaware, has garnered support from a wide range of stakeholders, including actors, studios, and tech companies.
SAG-AFTRA President Fran Drescher hailed the bill as a “huge win” for performers whose livelihoods depend on their likeness and brand. The Motion Picture Association, which represents major studios such as Netflix, Disney, and Warner Bros., also expressed support for the legislation, emphasizing the need to protect performers from the misuse of generative AI.
The bill, which would establish federal protections against unauthorized digital replicas, has been revised to address concerns raised by various stakeholders. Penalties for producing, hosting, or sharing a digital replica without consent include fines, damages, and removal of the replica. Individuals can also bring civil action against perpetrators, and online services face a $5,000 penalty per work containing an unauthorized replica.
AI expert Marva Bailer noted that tech companies like OpenAI and IBM are backing the bill to establish guardrails around the use of deepfake technology. The legislation aims to bring transparency to the use of AI and protect individuals from being misrepresented or exploited.
The No Fakes Act comes at a critical time as the entertainment industry grapples with the impact of AI on various forms of media. SAG-AFTRA is currently on strike on behalf of video game performers, seeking contract protections governing the use of AI. The union hopes that the bill, if passed, will add to the protections already secured for writers and actors.
Reflecting on past strikes and the challenges posed by AI, SAG-AFTRA National Executive Director Duncan Crabtree-Ireland emphasized the importance of staying ahead of technological advancements to protect members’ careers. The No Fakes Act represents a crucial step in safeguarding individuals from the harmful effects of deepfake technology and ensuring a fair and transparent digital landscape for all.