Denmark to Copyright Faces: Key Findings
Denmark, a country not usually in the headlines for tech regulation, is planning to let people copyright their own faces.
Not just celebrities or influencers, but even regular citizens.
If passed, the law would give people the right to take down AI-generated versions of themselves and, in some cases, demand compensation.
The move is aimed at curbing deepfakes, but the signal it sends reaches far beyond Denmark.
It tells us what’s coming.
Denmark just drew the line on deepfakes—should other countries follow? https://t.co/GUYiskZ5o4 pic.twitter.com/OysjBb7c40
— Pulse Media (@PulseInDc) July 5, 2025
The proposal, backed by lawmakers across party lines, is set to enter public consultation before summer and move toward formal legislation in the fall.
“In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.
Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that,” Danish Culture Minister Jakob Engel-Schmidt told The Guardian.
Denmark’s move shows that governments are starting to draw clearer lines around how AI can use human identity, and more countries are likely to follow soon.
Why This Isn’t Just a European Problem
I know what some brand leaders might be thinking. This is happening in Denmark, not Delaware.
U.S. law is different. In fact, we’ve barely seen national movement on AI regulation.
But here’s where things may start to tighten. The Danish bill could gain broad support from other European nations.
Engel-Schmidt said he’ll push the EU to adopt similar protections.
And if this happens, U.S. companies running global campaigns will have to play by these rules.
“Denmark to tackle deepfakes by giving people copyright to their own features”
This is absolutely vital. You must have full ownership of your likeness, voice, DNA and flora. https://t.co/aACIYGTrkX
— Brian Roemmele (@BrianRoemmele) June 27, 2025
At the same time, we’ve seen a flood of deepfake abuse in recent years.
One recent example involved a deepfake video showing U.S. President Donald Trump and Ukrainian President Volodymyr Zelensky fighting in the White House.
It spread across kids' content feeds and raised alarms about how easily synthetic media can bypass content safeguards.
Incidents like this show how deepfakes aren’t just targeting adults or politics. They’re slipping into entertainment and reaching younger audiences, too.
🚨 #BREAKING: The meeting between Zelensky and Trump concludes. pic.twitter.com/GHlmUmBRlF
— The Fauxy (@the_fauxy) February 28, 2025
In the first half of 2025 alone, nearly 580 deepfake-related incidents were reported, according to a study by Surfshark.
This is almost four times the number tracked in all of 2024.
Surfshark also estimates that deepfake scams have already caused nearly $900 million in financial losses globally, with $410 million of that occurring in the first half of this year.
If you’ve seen AI-generated celebrity crypto promos, fake audio clips of CEOs, or synthetic influencer ads, you know the risks aren’t theoretical.
They’re landing in inboxes and feeds now.
And every brand that uses AI-generated likenesses, whether to cut production costs or scale faster, could get caught in the fallout if the rules shift.
What once seemed like clever marketing automation is quickly becoming a legal and ethical minefield, especially as regulators take notice.
Without clear consent, even well-meaning campaigns risk triggering takedown demands, lawsuits, or public backlash.
How to Stay Ahead of AI Regulations
As countries like Denmark and regulators in the EU pass stricter rules on AI use, U.S. lawmakers are likely to follow with their own.
California, Illinois, and Texas already have early laws in place. Congressional debate is picking up. And the public is tired of being fooled.
Even social media platforms are stepping in.
YouTube now limits ad revenue for AI-generated content that is mass-produced or repetitive, a signal that enforcement won’t come only from lawmakers.
And with 69.1% of marketers using AI to some degree, we’re way past the question of whether brands will be affected.
The real question is: Are you building in safeguards now, or will you be rebuilding everything later?
If I were running a creative team, here are four steps I’d prioritize right away:
- Start with a policy audit. Do you currently use synthetic voice or image tools? Have you ensured consent for any AI-generated content featuring real people?
- Update your contracts. Add specific clauses that address the use of AI to create likeness-based content. Include opt-ins for digital reproduction and outline who owns the result.
- Avoid scraping or using likenesses you didn’t license. AI tools often pull from public data. That doesn’t mean you’re protected. Avoid generic “celebrity” models or cloned influencers unless rights are secured.
- Prepare your team for compliance. Train creative teams and agencies on what content is allowed under stricter rules. Create clear internal guidelines for ethical AI use.
These aren’t just reputational risk management tasks. They’re trust builders.
I’ve seen how quickly consumer confidence erodes when big brands use AI carelessly.
The company I work for, Silverside AI, just created an ad for @CocaCola (yes THE Coca-Cola)!! We were asked to bring the classic Coca Cola Holidays are Coming ad back through the use of AI. Check it out: pic.twitter.com/A1esNvqozW
— Chris Barber (@code_rgb) November 15, 2024
Last year, Skechers faced backlash after readers spotted that its Vogue ad featured AI-generated models, with critics calling the campaign lazy and inauthentic.
Even Coca-Cola’s AI remake of its classic holiday ad stirred mixed reactions, with some longtime fans saying the updated version felt hollow compared to the original.
When AI-generated media is everywhere, showing care for how real people are portrayed can help your brand stand apart for the right reasons.
The takeaway for brand and agency leaders isn’t panic. It’s preparation.
There’s still a window to get ahead of stricter rules on likeness and consent before they become formal legal requirements.
But this window is closing fast.
And when it does, no one wants to be the brand explaining why a campaign had to be pulled for using someone’s likeness without asking.