Navigating the Deepfake Dilemma: Legal Challenges and Global Responses

Published on 13 Jun 2025 | 4 minute read

Introduction

Deepfake technology, which uses artificial intelligence to create hyper-realistic but fabricated images, videos, and audio recordings, has emerged as a pressing global issue. Its misuse poses significant risks, from political manipulation to personal harm.

Following our earlier article, The Rise of Deepfakes: Can IP Solutions Help?, this piece examines recent deepfake incidents and the legal challenges they present, alongside legislative responses, while offering recommendations for addressing this growing concern.

 

Major recent deepfake cases and events  

Political Manipulation

In the realm of political manipulation, notable incidents have raised alarms. One significant event occurred during the 2024 New Hampshire primary when thousands of voters received robocalls featuring a deepfake voice mimicking President Biden. These calls urged recipients not to vote, highlighting the ease with which deepfakes can sway public opinion.

Financial Fraud

Financial fraud has also seen a rise due to deepfake technology. In one instance, a deepfake of the CFO of the British engineering firm Arup led to the unauthorized transfer of $25 million to fraudulent accounts in Hong Kong during a video conference. Similarly, thieves used a deepfaked voice of a UK energy CEO to facilitate a €220,000 transfer, demonstrating the potential for deepfakes to enable complex financial crimes. In Indonesia, deepfake videos impersonating President Prabowo Subianto circulated widely, with scammers promising financial aid in exchange for fees, resulting in significant losses for victims.

Personal Harm

Personal harm caused by deepfakes is equally concerning. Notably, Lei Jun, the CEO of Xiaomi, became a victim when AI-generated videos portraying him in a damaging light went viral during China’s National Day in 2024, generating over 200 million views in just eight days. At schools in New Jersey and Pennsylvania, students created sexually explicit deepfakes of their classmates, showcasing the invasive potential of this technology. In Maryland, an athletic director produced a deepfake audio recording to falsely portray a principal as racist, further illustrating the reputational dangers associated with deepfakes.

 

Legal Challenges Posed by Deepfakes

The rise of deepfake technology has raised numerous legal challenges. Intellectual property rights are often infringed upon, as deepfakes typically rely on existing media for training, leading to potential disputes over unauthorized use.

Additionally, the unauthorized use of an individual's likeness in a deepfake can violate public image rights, prompting recent legislative proposals aimed at establishing protections against such violations at various levels.

Privacy concerns are paramount, as creating deepfakes involves processing personal data, including biometric information. This raises significant issues regarding the lawful use of personal information. Moreover, deepfakes can distort reality, potentially violating consumer protection laws, especially in industries where accurate representations are critical.

While legislation such as the UK’s Online Safety Act and the EU’s Digital Services Act seeks to address illegal content, it often falls short of comprehensively regulating deepfakes. The focus tends to be on advisory committees and media literacy rather than direct regulation.

 

Legislative Responses

In response to these challenges, various legislative measures have emerged.

In the U.S., the TAKE IT DOWN Act, passed by the House in April 2025, addresses non-consensual intimate imagery, including AI-generated deepfakes. This bipartisan legislation provides a mechanism for victims to swiftly remove harmful content and holds perpetrators accountable. Another significant piece of U.S. legislation, the NO FAKES Act, was reintroduced in April 2025 to protect individuals' rights against unauthorized use of their likeness or voice in deepfakes. This act has garnered support from key stakeholders in the technology and entertainment sectors.

Internationally, the EU AI Act, effective from August 2024, aims to regulate AI-driven misinformation and impose fines on platforms that fail to manage disinformation adequately.

In the Asia-Pacific region, countries like China are proactively regulating deepfake technology, requiring the labeling of synthetic media and enforcing rules to prevent the spread of misleading information.

 

Challenges and Recommendations

Despite these legislative efforts, significant challenges remain.

Detecting deepfakes and attributing them to specific creators is increasingly difficult due to the sophistication of AI technologies. Additionally, the cross-border nature of deepfake creation and distribution complicates legal enforcement, necessitating strong international cooperation. The rapid pace of technological advancement requires that regulatory frameworks be flexible and adaptive.

To address these challenges, several recommendations can be made. Investing in enhanced detection tools and fostering collaboration among governments, academia, and the private sector can accelerate the development of effective solutions. Strengthening international agreements can harmonize regulations and facilitate enforcement actions.

Public awareness and education about the risks associated with deepfakes are essential. Media literacy programs can empower individuals to critically evaluate digital content and recognize misinformation. Furthermore, legislative frameworks must remain adaptable to emerging threats, ensuring they protect rights while fostering innovation.

Corporate responsibility is also crucial; companies should implement robust security measures and ethical guidelines for AI technology use. Self-regulation and industry standards can complement legislative efforts.

 

Conclusion

The rise of deepfake technology presents unprecedented challenges across various sectors, including politics, finance, and personal safety. Recent cases underscore the urgent need for robust legal frameworks and international cooperation to combat these threats effectively. While legislative responses represent significant progress, the rapid evolution of technology demands ongoing vigilance and adaptive strategies.

A collaborative approach involving governments, the private sector, and civil society is essential to safeguard democratic processes, protect personal privacy, and maintain public trust in digital media. By fostering innovation and education, we can mitigate the risks posed by deepfakes and ensure that AI technologies serve the broader good. As we move forward, it is imperative to navigate these complex challenges with concerted effort and shared responsibility, fostering a safer and more transparent digital future.

 

The information in this article is for general informational purposes only and should not be considered as professional or legal advice. Please get in touch with us should you like to discuss further. 
