
What Happens When AI Turns Creators Into Ads They Never Approved?

“I bought this stuff thinking it was you!” his nephew told him.
The nephew showed the doctor the bottles he purchased from an ad he saw online. “Liver junk,” the doctor called it.
“It was a total nonsensical product. Another one of these liver detox things, none of which are proven to have any efficacy whatsoever,” said Dr. Terry Simpson, trusted physician, content creator, and now, AI deepfake victim.
Simpson has created content for 20+ years to combat medical misinformation. It’s a mission for him. His 1.5M followers turn to him for evidence-based health and wellness information. So to him, this kind of fraud wasn’t just dangerous — it was personal.
He said he told his nephew, “‘No, it’s absolutely not me,’ and that I didn’t endorse it, and that I’m sorry you spent your money based on my image.”
AI Scams Can Fool Anyone (Even Your Family)
The man in the video looked like him. The background was his studio, the very one Chronically Online staff saw in a Zoom interview with the good doctor, complete with the light-up marquee sign bearing his name: @ DR TERRY SIMPSON.

But a few things were off.
The speaker called himself Tom and claimed he was a liver specialist.
“Well, obviously, my name isn’t Tom. I’m not a liver physician,” Simpson said (though he did note he’s operated on livers in the past). “I notified TikTok immediately. I think [the video] was taken down.”
Simpson started his TikTok channel in 2020. Social media wasn’t his first foray into medical content. In fact, he often appeared on daytime TV. But the COVID-19 pandemic was what made his TikTok take off.
“There was a lot of bad information out there about COVID,” he said. “My channel just sort of blew up, because people wanted good, decent information.”
He built his reputation on sharing science over hype. (That’s even in his profile.)
So hawking shady detox pills? Not on brand. Others recognized that, too: Simpson told COM reporters he received messages from followers warning him that someone was using his likeness in ways he wouldn’t sanction.
But not everyone spotted the fraud; even his own family was tricked. That’s the problem with AI-cloned endorsements — they take a healthy amount of skepticism and a very watchful eye to spot.
“They aren’t terribly good, but they’re good enough,” he said.
An Epidemic of Fake Ads
Current technology allows scammers to clone voices or likenesses from only a few seconds of audio or video using freely available tools. They then use these deepfake endorsements to sell bogus products or trick people into submitting personal information like credit card numbers to fake websites.

In one notable case, TV personality Oprah Winfrey was caught up in this dangerous new web in late 2025 after a series of videos featuring her likeness promoted a miracle “pink salt” weight-loss supplement.
The supplement was presented as a cheap and easy alternative to popular drugs like Mounjaro, which can be expensive and require a prescription to access. The scammers used Winfrey’s celebrity and history of endorsement to legitimize something that seemed too good to be true — and was.
She was eventually forced to speak out against the fake videos on her own social media after being repeatedly approached by those who had fallen for the scam.
The trend is causing reputational damage, with some criminals even producing fake, sexually explicit material, yet another way AI scams can threaten creators’ bottom lines.
Earlier this year, creator attorney Brittany Ratelle spoke to Chronically Online Magazine about the trend, noting that the violation of having one’s likeness stolen and used so horrifically without consent wasn’t the full extent of the potential damage. A creator whose image got stolen could actually be forced to pay back money they earned if the offending material breached a contract clause.
“There is almost always a morals clause in creator deals that says, ‘Hey, if the creator does anything that harms the brand, if they’re involved in controversy, if they break the law, if they’re caught saying anything incendiary online, we can cancel this contract,’” Ratelle said.
She noted that sometimes brands would attempt to claw back money paid to creators before the discovery of the offending material. The burden of proof then shifts to the creator to show that the material was AI-generated or nonconsensual, but that may not be enough.
And as technology improves, deepfakes will only become harder and harder to distinguish. Consumers should be wary.
And so should those who may find themselves consumed.
With Content Available Online, How Can Creators Avoid Being Deepfaked?
In short: they can’t. They can only respond after a deepfake has appeared, and the resolution might require difficult steps.
Using existing laws is an imperfect solution.
“As an intellectual property attorney, I’m being guided by laws to protect your digital world that were created before the words ‘digital world’ even existed,” Intellectual Property (IP) attorney and retired police officer Pablo Segarra told COM.

Segarra says he uses three main tools to protect creators today: trademarks, copyrights, and patents. While all three are meant to help protect individual IP, they differ in what they protect.
Trademarks reflect brands, such as company names, he explained. (Segarra gave the example of MrBeast, whose real name is Jimmy Donaldson. MrBeast could be, and is, trademarked.)
Copyright protects the works created under a trademarked brand, such as individual videos or other content produced under the MrBeast name.
Patents are the most specific of the three, Segarra said, covering individual novel discoveries or inventions. The iPhone you’re reading this article on is patented, protecting the manufacturer from others making or selling exact replicas.
But none protect your face or voice — your individual likeness.
And while copyright laws have been updated, such as through the Digital Millennium Copyright Act in 1998, lawmakers and experts like Segarra agree that further development is needed to address issues like these.

In a first-of-its-kind legislative initiative, Denmark is attempting to tackle deepfakes by amending its copyright law to grant individual citizens ownership of their likenesses, enabling them to request the removal of material and even recover monetary damages.
No such protections exist in the United States, though legislators are scrambling to keep up with the rise of harmful deepfakes through federal laws like the TAKE IT DOWN Act, passed in May 2025.
While TAKE IT DOWN is a good first step toward addressing problems like the sexually explicit deepfakes mentioned earlier, gaps in U.S. law leave many instances of fraudulent usage unaddressed.
“Technology is moving at such a rapid pace that the government is never going to catch up,” Segarra said.
But that doesn’t mean creators are totally stuck.
Platform Policies May Offer Some Protection Against Deepfakes
Another option for creators to protect themselves is through platform-specific policies, such as Community Standards or Terms of Use, which may offer more robust protections by prohibiting the fraudulent use of user likenesses by other users.
Platforms enforce these policies by removing content or suspending users' posting privileges, sometimes going so far as to ban accounts. Platforms have in-app reporting tools that allow users to flag stolen content.
(Ratelle noted in her earlier interview that holding a copyright or a trademark gives creators a stronger chance of getting offending posts taken down; Simpson noted in his that when he reported fraudulent content impersonating another creator he recognized, the material wasn’t removed.)
But while other users can’t take your content and use it for their own purposes, anything you upload may be used by the platforms themselves to train their AI models. Segarra calls this a Catch-22: creators depend on platforms to share their content.
“Be careful,” he said. “It’s not feasible for me to tell you, ‘Don’t ever use Instagram. Don’t ever use YouTube,’ because that’s not going to work, either.”
He recommends reading every platform’s Terms and Conditions (or having an attorney or even ChatGPT read and summarize them) so creators understand what data and access they are granting when they upload content.
Private Contracts, Public Content
A third option is private contracts. Segarra suggests that creators scrutinize brand deal contracts to ensure the terms are fair and to prevent unauthorized use of their image.

He emphasized that creators should make sure a brand deal contract specifies a collaboration, to avoid confusion about who owns work beyond the product of the contract. Collaboration agreements differ from work-for-hire agreements, he explained: in collaborations, brands can’t claim ownership of creators’ Name, Image, and Likeness (NIL), while work-for-hire agreements carry ownership implications that can muddy the waters.
“The first step is to be able to look through and make sure it’s a collaboration agreement. Second is to be able to go through and say, ‘Are there any clauses that incorporate artificial intelligence? Are you going to be able to use my creations to train your artificial intelligence datasets? Are you going to be able to just take what we did today and recreate it with something else?’” he asked.
Segarra told COM that creators should steer clear of such clauses.
For UGC agreements, in which brands provide scripts and which are usually, unavoidably, work-for-hire, creators shouldn’t accept terms “in perpetuity”; instead, they should favor time-bound contracts that specify an end date. (He gave five years as an example of an acceptable term of use.)
“Perpetuity, no. Let’s limit timeframes to a couple of years. Let’s try to limit that as much as possible.”
Ultimately, he said, creators should treat their accounts like businesses: incorporating, trademarking their names, logos, and slogans, and registering copyrights on much of the work they create. This builds a strong foundation for protection should unauthorized usage occur.
And while Segarra said brands can’t legally use creators’ NIL for commercial reasons without approval, NIL protections are state-dependent. Some states, like New York or California, have strong laws in place, while others lag, creating issues depending on a creator’s location.
This leads us back to the necessity of stronger federal laws to protect against image theft. He suggested that a law like Denmark’s could be a good example that would allow individual citizens to control how their images are used beyond just commercial applications.
But in the meantime, as creators and consumers are caught in the sweeping tides of technological change, a healthy dose of skepticism remains the best medicine.
“I do endorse products from time to time, and sometimes I get paid for it,” Simpson, the deepfake victim, told COM. “But mostly, my purpose is not to endorse products. My purpose is to say that this is good and this is bad, and the wide range of liver nonsense that is going on on the internet is absolutely unproven, untenable, sometimes harmful, and something I have railed against […] but again, you’re only as good as your last video.”