The problem of completely fabricated, AI-generated images and videos is causing growing concern around the world.
Some deepfakes, such as deepfake pornographic videos, are inarguably illegal in certain countries. Most cases, however, are far less clear-cut. The problem is that public authorities either will not know how to deal with these issues or will be too slow to take any useful action, because most deepfakes are circulated on social media and spread incredibly fast.
So, what can be done about this growing problem, and can IP laws help? There are in fact many analogies to discovering an IP violation, such as counterfeit goods or copyright piracy.
First, you need to assess the potentially illegal acts.
A. Copyright
Deepfakes often incorporate copyrighted material, potentially infringing copyright. However, complexities arise.
B. Personality Rights
Personality rights, in contrast to copyright, can offer a more direct legal path to tackle deepfakes, but these rights do not exist in every country. Where they do exist, personality rights may protect against the unauthorized appropriation of someone's likeness, voice, or persona for commercial gain or to cause harm. Sometimes a level of reputation is required. Harm is easy to establish in some cases, such as pornography, but harder in others, such as satire or political misuse. One would expect courts to be generous here, weighing the severity of harm against the context and purpose of the deepfake. Satire, parody, and artistic expression may fall under "fair use" exceptions in copyright laws, blurring the lines of acceptable use.
Overall, personality rights can offer a potentially stronger legal basis for action against harmful deepfakes than copyright, but their exact application and effectiveness depend on the country, the specific context, and the evolving legal landscape.
C. Personal data rights
Personal data may have been used to create deepfakes, creating potential liability for data misuse. Possible violations of data privacy laws, such as the PIPL in China, the GDPR in Europe, the CCPA in California, or a multitude of others, should be explored. Biometric data (voice, face) used for manipulation may raise concerns about its collection, storage, and use without explicit consent. This is particularly sensitive in areas like healthcare or finance.
D. Trade mark rights
There may be misuse of brands, logos, or slogans, or even impersonation of a brand representative. Some celebrities’ likenesses, usually in stylised forms, are registered trade marks. Trade marks offer advantages in enforcement, since infringement is often clearer to see at first glance than copyright infringement.
E. Passing off or unfair competition
For those whose business depends on their reputation, a deepfake used in a commercial context could constitute passing off or unfair competition.
F. Other laws
Beyond IP, other illegal elements may apply, whether inappropriate content in the case of pornography, or interference with elections in the case of political deepfakes. In China, for example, wider public security, technology misuse, and social trust rules may be breached.
All of the above need to be assessed in the countries where the infringement occurred. That poses a challenge and may require a global approach. We should not assume the source is the same as the location of the infringement (e.g. content on a US social media platform). Taking action may be necessary in multiple countries nearly simultaneously. Once you know which laws to enforce, the next questions are against whom, and how.
The first issue is ‘who’. In practice this is usually simple. The creator of the deepfake content may not be easily or quickly tracked down, but you can, and must, act quickly against the platform hosting the content. X, TikTok, Facebook, and most other platforms have legal terms which prohibit illegal content. So, having identified a legal breach, you can request removal.
Later, once the spread is curtailed, you can start to think about who created the content and whether it is worth pursuing them. Just as with other IP violations, a single instance is unlikely to warrant an investigation targeting the creator. Exceptions occur when there is greater significance or harm. However, repeat deepfake postings of any kind may warrant further inquiry.
The ‘how’ is usually to request removal of the image or video from the relevant platforms. All the alleged illegal acts must be checked against the platform’s illegal-content rules and asserted in the takedown request. Where copyright content misused in the deepfake is owned by third parties, e.g. film or TV studios, the subject may need the support and cooperation of the copyright owner. Celebrities’ commercial relationships with these content owners can be important: a Batman deepfake may concern the actor as well as Warner Bros., which owns the DC rights. Quick cooperation would be vital.
Speed is critical to stop spread. The good news is most of the platforms are aware of the risks of deepfakes and should be helpful. However, platforms’ IP systems and content removal mechanisms vary. Check what liability the platform has for their act or inaction in each country. This may add pressure on them to act faster and more effectively.
Unorthodox actions may also help. Taylor Swift’s fans acted quickly to fill X with positive images of her, so that the deepfake images were harder to locate. Celebrities may be able to mobilise a support base, but less well-known people may not. This is where technology can help: software can be used to mass-request removal of illegal content, or to flood a platform with positive content, although the platforms may not like the latter.
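To illustrate what mass-requesting removal might look like in practice, here is a minimal sketch of how a rights holder's team could batch structured takedown requests, one per infringing URL, each recording the legal basis and jurisdiction asserted. The field names, report categories, and example URLs are purely illustrative assumptions; every platform's actual reporting API and required notice elements differ.

```python
# Hypothetical sketch of batching takedown requests for deepfake content.
# All field names, categories, and URLs are illustrative assumptions,
# not any real platform's API.
from dataclasses import dataclass, asdict

@dataclass
class TakedownRequest:
    content_url: str    # location of the infringing content
    legal_basis: str    # e.g. "copyright", "personality_rights", "trade_mark"
    jurisdiction: str   # country whose law is being asserted
    description: str    # short statement of the alleged illegal act

def build_requests(urls, legal_basis, jurisdiction, description):
    """Create one structured takedown request per infringing URL."""
    return [TakedownRequest(u, legal_basis, jurisdiction, description)
            for u in urls]

# Example: two copies of the same deepfake found on a (hypothetical) platform.
urls = ["https://example.com/video/123", "https://example.com/video/456"]
batch = build_requests(urls, "personality_rights", "DE",
                       "Unauthorised deepfake of the client")
# Serialise to dicts, ready to submit via whatever reporting
# mechanism the platform actually provides.
payloads = [asdict(r) for r in batch]
```

The point of the structure is auditability: when action may be needed in multiple countries nearly simultaneously, keeping a record of exactly which legal basis was asserted, where, and when, makes later court proceedings or blocking applications easier to support.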
Sometimes doing little or nothing is best. We need to look at what actual harm is being caused. Potential harm, such as a celebrity’s lost licensing revenue from a deepfake image used to promote products, will require specific proof of damage. Some deepfakes are made for parody and may be protected by fair use or free speech. Sometimes it is better to let trivial matters die quietly.
Apart from the platforms’ removal systems, site- or content-blocking rules exist in many countries. Some countries allow a relatively simple application, citing the illegal acts, which leads to content or site blocks, usually ordered by the Ministry of Communications or another relevant body. Common law countries tend to require an application to court for a site-blocking injunction.
Court proceedings are always available, but a careful balance is needed to justify the large costs of litigation. Cases against platforms may not result in costs orders that allow you to recover legal fees.
The US is exploring new laws to prohibit certain categories of deepfakes. China also recently established provisions for deepfake providers, in effect as of 10 January 2023, through the Cyberspace Administration of China (CAC). The EU has proposed laws requiring social media companies to remove deepfakes, and the EU’s Code of Practice on Disinformation sets fines on platforms. South Korea also has laws banning harmful deepfakes. We predict new case law will appear rapidly.
The laws to address this problem are still emerging. Few countries have tried and tested laws, and any solution will be an amalgam of existing and new rules. Speed and effectiveness are key: can you do something, and do it quickly? A rapid reaction system is required, considering all the possible angles.
The information in this article is for general informational purposes only and should not be considered professional or legal advice. Please get in touch with us should you wish to discuss further.