TikTok deepfake scam: Fake Molly‑Mae perfume ads duped fans
Kieran Lockhart

People bought a bottle of “Nyla” because a video of Molly‑Mae Hague told them it was her favorite scent. The catch? She never said it. The clip was a deepfake—an AI‑stitched ad that hijacked her face and voice to sell a perfume she hadn’t even smelled.

The stunt is part of a wider wave of synthetic celebrity endorsements sweeping TikTok and other short‑video apps. Creators behind the scheme recycled old footage of Molly‑Mae and layered in cloned audio that sounded like her, along with slick captions and product shots. The result looked and felt real enough to convert casual scrollers into paying customers.

How the deepfake perfume scam worked

The mechanics are depressingly simple. Scammers scraped existing videos of Molly‑Mae—interviews, vlogs, TikToks—then used consumer‑grade tools to clone her voice from short audio snippets. Lip‑sync models mapped that voice track to her mouth movements. An off‑the‑shelf ad template provided upbeat music, product b‑roll, and captions that screamed authenticity. The ad pushed “Nyla perfume,” a name with no verified link to her, but packaged as if it were her all‑time favorite.

Distribution was the real engine. The clips appeared in feeds as regular posts, as paid placements, and via accounts that looked like fan pages. A few takes were cut to look like “storytime” videos; others mimicked haul videos. Some ran with subtitles to capture viewers who scroll with the sound off. Once a few iterations gained traction, copycats multiplied, and the content churn made the campaign feel ubiquitous.

It didn’t stop with Molly‑Mae. Similar edits popped up featuring other reality TV faces, including alumni from Love Island US. The playbook barely changed: lift recognizable footage, fake the voiceover, promise a limited‑time discount, and send viewers to a checkout page that collected card details before anyone smelled a sample.

Why this works: trust and speed. Fans believe they “know” influencers they’ve watched for years. Deepfakes tap that familiarity and move faster than platform moderation. If a video lasts even a few hours before it’s reported, it can rack up hundreds of thousands of views and enough sales to make the fraud worth it.

Who profits? Often, a chain of anonymous sellers. Some use dropshipping, routing orders to low‑cost manufacturers. Others are affiliate arbitrageurs taking a cut per conversion. Domains are disposable, payment processors are rotated, and the operators vanish once chargebacks pile up.

Molly‑Mae told followers she had nothing to do with the campaign and urged them to double‑check endorsements through her official channels. That single post exposed the scale of the problem: many buyers only realized they’d been duped after she spoke out.

What platforms and regulators can actually do

Platforms say synthetic media that misleads is banned. TikTok’s policies require labeling AI‑generated content and prohibit impersonation. Ad policies also demand advertiser verification. In practice, enforcement is uneven. Scammers exploit gaps by using new accounts, borderline edits, and rapid reposts. Automated detection struggles when the source footage is real, the AI voice is clean, and the message resembles typical influencer content.

In the UK, the Advertising Standards Authority can order misleading ads down and name offending advertisers, but that presumes an identifiable advertiser within reach of UK rules. The Online Safety Act gives big platforms legal duties to tackle fraudulent ads, with Ofcom setting the compliance bar. Those rules are still bedding in, and determined scammers are good at staying one step outside jurisdiction.

Elsewhere, regulators are circling. The EU’s Digital Services Act forces Very Large Online Platforms to address systemic ad risks, including deceptive or manipulated media. In the US, the Federal Trade Commission has been weighing tougher rules on AI‑powered impersonation and has warned that fake endorsements—AI or not—can trigger enforcement. The direction of travel is clear: more liability for platforms and sellers, more pressure to verify ads, and fewer excuses.

Civil law gives celebrities some tools too. In the UK, using a person’s image to imply they endorsed a product can fall under “passing off,” a doctrine tested in high‑profile cases where stars challenged unauthorized merchandise and ads. Data protection law may also come into play when someone processes biometric data—like a face or voice—for commercial gain without consent. None of that is fast, though, and scammers count on delay.

Tech fixes are improving but not foolproof. Watermarking and provenance systems can tag content at creation so platforms can check if a clip is genuine or edited. Voice‑print detectors flag synthetic audio with tell‑tale patterns. But watermarking only helps if tools adopt it and platforms enforce it, and detectors are in an arms race with new generation models.

Consumers still need practical checks. Before buying off a viral video, ask: is the account verified? Does the caption link to the influencer’s known shop or brand partners? Does it carry a clear “ad” or “paid partnership” label? Are there comments from the influencer’s usual community—or are the replies generic and oddly timed? If something feels off, it probably is.

  • Verify endorsements on the celebrity’s official pages or known storefronts before purchasing.
  • Be wary of “secret favorite” claims, aggressive countdowns, and prices that seem far below market.
  • Check the seller’s domain registration, returns policy, and company address. No address, no trust.
  • Use payment methods with strong buyer protection. Avoid bank transfers to unknown merchants.
  • Report suspicious ads using in‑app tools. The faster they’re flagged, the fewer people they reach.

Already bought? Keep records of the ad, the order confirmation, and all messages. If the product never arrives or isn’t as described, request a refund from the seller, then dispute the charge with your card issuer. In the UK, Section 75 can apply for credit card purchases over £100; for smaller transactions, chargeback rules still help. Report the case to Action Fraud and submit an ASA complaint with screenshots—those reports make it easier to trace patterns across campaigns.

Brands and retailers have skin in the game too. If a product is benefiting from fake endorsements, reputable marketplaces should pull listings and freeze payouts while they investigate. Legit brands can help by publishing an always‑up‑to‑date list of official ambassadors and current campaigns. That transparency gives consumers something to cross‑check when the next “favorite” suddenly appears in their feed.

Under the hood, the tools that powered the Molly‑Mae clips are widely available. Voice cloning models can mimic a speaker from a few minutes of audio. Face‑sync software can make lips match any script. Add a product shot and an affiliate link, and you have a convincing ad in under an hour. The barrier to entry isn’t money—it’s intent.

That’s why this case is a warning for the entire creator economy. Influencers rely on trust to sell real partnerships. When deepfakes flood the zone, genuine ads risk getting ignored along with the scams. Expect to see more creators watermarking their videos, pre‑announcing deals, and using distinctive sign‑offs that are harder to fake.

The phrase everyone will be searching for after this story is “TikTok deepfake scam.” It captures the core problem: the collision of a frictionless ad machine with cheap synthetic media that looks real. Until platforms make it far harder to run anonymous performance ads and much easier to verify endorsements, the next “favorite perfume” is only a template away.
