How Can Artists Protect Themselves from AI-Generated Deepfakes?
- Michele
- 16 April 2024, Tuesday
As AI tools grow increasingly sophisticated, artists need to stay aware of how these technologies could put them at risk. This article discusses what artists can do today to protect themselves from deepfakes and which steps to take after discovering fake AI-generated content of themselves.
The case for AI and lawmaking
Artists and music industry members alike are increasingly critical of artificial intelligence and the real and potential risks it presents. As a result, governments and institutions are considering taking steps to offer more protection to individuals in the digital realm.
For instance, the US state of Tennessee recently passed the ELVIS Act to protect artists from impersonation and the financial exploitation of their voice and likeness through AI voice-cloning tools. Around the same time, the European Parliament passed the AI Act to safeguard the privacy and security of EU citizens. The law aims to ensure more transparency, stating that “artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labeled as such.”
While both decisions are positive steps toward greater digital safety, they only protect those who reside where the laws apply. Moreover, the EU act will take some time to come into effect and does not explicitly address the rights of musicians, many of whom depend on their online presence. For this reason, artists should take preemptive measures to protect themselves from AI deepfakes and learn how to react when they discover one.
Awareness and learning
As developers continue building increasingly sophisticated AI tools, artists need to stay aware of how such innovations could put them at risk. Currently, the main concern within the music industry is AI’s ability to generate fake yet highly realistic content using the voices, images, or videos of others. Such tools are widely used for memes or to create song covers and new, AI-generated music, usually without the musician’s consent. While some of the resulting content is undeniably funny and entertaining, much of it can damage an artist’s reputation, career, and income. Understanding how such tools work will make you more aware of how your content could be used to create deepfakes.
Research the laws of the country you reside in
Next, you should research whether the country or state you reside in has introduced any AI-related laws or whether it is planning to do so. Knowing of such laws will allow you to make informed decisions in the case of an emergency. Alternatively, you may be able to rely on regulations that are not explicitly about AI but could still protect you. For instance, the State of California prohibits the usage of other people’s “name, voice, signature, photograph, or likeness, in any manner, on or in products, merchandise, or goods, or for purposes of advertising or selling, or soliciting purchases of, products, merchandise, goods or services, without such person’s prior consent.”
Monitor your online presence
Monitoring your online presence comes with various benefits, as knowing what people say about you on the web gives you more control over your image and safety. This way, you can manage your reputation and quickly respond to anything that concerns you. You can set up reputation management software and tools like BuzzSumo or Google Alerts to get notified about new mentions of your name. Such apps can help you monitor the web for content about you, including deepfakes, and ideally detect them early on. Keep in mind that tools are usually not capable of monitoring the entire web in all languages and that not all content will be found. For this reason, it can be helpful to regularly look into the most relevant social media platforms manually.
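As a rough illustration of how automated monitoring works under the hood, services like Google Alerts can deliver new mentions as a feed that a small script scans for your name. The sketch below uses a simplified, made-up feed sample (the real format Google uses differs); `find_mentions` and `ArtistName` are hypothetical names chosen for this example.

```python
# Toy sketch: scanning a saved alerts feed for mentions of your name.
# The XML below is a simplified, made-up sample, not the exact format
# a real alerts service emits.
import xml.etree.ElementTree as ET

sample_feed = """<feed>
  <entry><title>New remix of ArtistName track surfaces</title>
         <link>https://example.com/post1</link></entry>
  <entry><title>Interview with another musician</title>
         <link>https://example.com/post2</link></entry>
</feed>"""

def find_mentions(feed_xml: str, name: str) -> list[str]:
    """Return the links of feed entries whose title mentions the given name."""
    root = ET.fromstring(feed_xml)
    hits = []
    for entry in root.findall("entry"):
        title = entry.findtext("title", default="")
        if name.lower() in title.lower():
            hits.append(entry.findtext("link", default=""))
    return hits

print(find_mentions(sample_feed, "ArtistName"))
```

A real setup would fetch the feed on a schedule and notify you of new hits; dedicated monitoring tools do essentially this at a much larger scale.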
Consider personal cyber insurance
Many countries, such as Germany, have well-established insurance systems that allow citizens to receive (financial) protection in challenging situations. Some companies offer personal cyber insurance to assist their customers across a range of scenarios, including identity theft or fraud. Depending on the contract, artists may benefit from cyber insurance in defamation cases or when another person impersonates them to scam their fans. Some insurers even offer help removing reputation-damaging content from the web if the situation is reasonably unjust. While insurance companies usually keep their offers somewhat abstract and open to interpretation, it is better to have some protection than none.
Watermark your visual content
Watermarking is a common practice among photographers, who add logos to their pictures to deter theft and unauthorized reproduction. Removing a well-placed watermark takes time and requires experience in graphic design. While including watermarks in your videos and images may not stop everyone from misusing them, it can discourage some people from trying. However, watermarks can also degrade the quality of your images and make them less appealing, so you will need to weigh protection against presentation.
Cloak your visual content
Artists can further protect their images with “anti-AI” tools that add a near-invisible perturbation to an image, confusing the AI models that try to learn from or reproduce it. Glaze and Nightshade are two such tools. Note that this is cloaking rather than encryption: the image stays fully viewable to humans. When it comes to music, we are not yet aware of any comparable tools for protecting tracks.
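To make the idea of a “near-invisible layer” concrete, here is a toy illustration only: real tools like Glaze compute carefully optimized adversarial perturbations, whereas the sketch below merely adds tiny random shifts to pixel values and offers no actual protection. The `perturb` helper is hypothetical.

```python
# Toy illustration: nudge each pixel value slightly so the numbers change
# while the image still looks the same to a human eye. Real anti-AI tools
# use optimized adversarial perturbations, not random noise like this.
import random

def perturb(pixels: list[int], strength: int = 2, seed: int = 0) -> list[int]:
    """Shift each 0-255 pixel value by at most `strength`, clamped to range."""
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-strength, strength)))
            for p in pixels]

image_row = [120, 121, 119, 200, 30]
print(perturb(image_row))  # each value differs from the original by at most 2
```

The point of the demonstration is that a change of one or two brightness levels per pixel is imperceptible to people, yet it alters the exact numbers an AI model ingests; the cloaking tools exploit that gap deliberately.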
Further ways to protect yourself online
You can do a few more things to protect yourself online as an artist. For example, enable two-factor authentication and use strong, unique passwords to secure your accounts. You should also be cautious about clicking links in emails and messages you receive to avoid falling victim to phishing attempts and having your personal data stolen.
Lastly, you can try to be more mindful about what you share. This can be challenging for performing artists, who are often required to make themselves highly available to the public online. One way around this would be not to share too much high-quality footage, as AI models require a substantial amount of detailed imagery or audio to produce a realistic deepfake.
What to do if you discover a deepfake?
Before offering further advice on what to do if you discover a deepfake, we recommend developing a concrete action plan. This includes creating a document that lists all relevant laws, the names of organizations and institutions that support individuals in data protection and copyright-related cases, and the phone numbers of lawyers who could help you if a situation escalates. Being prepared will help you stay on top of an overwhelming situation and allow you to act quickly.
If you come across a deepfake of yourself, you should first assess its potential impact and how you feel about its existence. If you find it uncomfortable yet believe it is harmless, you should report it as content that aims to impersonate you and see if it gets taken down. You can also ask your friends, family, or community to report it for you. This will usually remove the video or image from the platform.
If a deepfake has the potential to cause significant harm or if it aims to damage your reputation, you should take further steps. Bear in mind that each situation is different, and how you react will depend on the context and scope of the issue. The first step is to gather all relevant evidence, including a screenshot or screen recording of the content, before reporting it to the platform. Next, inform your followers about the situation to ensure they know the content is fake. You should then contact your insurance and legal professionals, who will help you assess the issue and find tangible solutions.
In especially dire cases, you may want to consider speaking to experts in cybersecurity and digital forensics or contacting PR professionals who can assist you with reputation management. Unfortunately, it can be challenging to remove a deepfake from the internet once it has gained many views. This is why it is important to act fast, especially when the content was created with the intention to cause harm.
To sum it up...
Although more and more countries are beginning to take action against the dangers of AI, many of these tools are clearly here to stay. As an artist with an online presence, you should remain mindful of the potential harm AI can cause through copyright infringement, impersonation, and deepfakes. Taking steps to secure your presence online and being prepared for various situations can help you prevent a worst-case scenario.