Scammers Increasingly Targeting Influencers With Deepfake A.I. Advertising Campaigns That Use Their Images

Malicious actors are swiping videos from social media to create ads, despite those featured in these campaigns having no say in their likenesses being used.

Because fake videos based on real content are cheap and easy to produce, and legal repercussions are currently lacking, experts predict a surge in ads featuring stolen identities, Knewz.com has learned.


While deepfakes — media manipulated or created with artificial intelligence — have increasingly targeted well-known celebrities and politicians, those behind the predatory practice have now set their sights on social media influencers for advertising.

Tools made available by companies like HeyGen and ElevenLabs allow anyone to manipulate video and audio snippets to create realistic altered content. These deepfake videos are difficult to detect and can spread quickly, leaving victims either unaware of their exploitation or desperately searching for solutions once the content has already become widespread.

A 20-year-old student at the University of Pennsylvania, Olga Loiek, found her likeness being used in nearly 5,000 videos across Chinese social media platforms. Some of these videos utilized an AI cloning tool from HeyGen, according to direct messages Loiek shared with The Washington Post.

Loiek, born and raised in Ukraine, found that in one of the videos, her doppelgänger stood in front of the Kremlin and touted Russia as "the best country in the world," speaking in Mandarin — a language she does not speak.


“I felt extremely violated,” she said. “These are the things that I would obviously never do in my life.”

In another case, Michel Janse, a 27-year-old content creator known for her Christian lifestyle posts, was recently surprised to find she was featured in a YouTube ad for erectile dysfunction supplements, as The Post also reported.

In the commercial, Janse appeared in her real bedroom, wearing her own clothes, while discussing a fictional partner with intimacy issues.

"Michael spent years having a lot of difficulty maintaining an erection and having a very small member," Janse's deepfake alter ego said in the ad.

Scammers seem to have doctored her most viewed video, in which she emotionally recounted a past divorce.

Michel Janse, a 27-year-old content creator known for her Christian lifestyle posts, was recently surprised to find she was featured in a YouTube ad for erectile dysfunction supplements. Scammers appeared to have doctored her most viewed video, in which she recounted a past divorce. By: YouTube/Michel Janse

The ad was reportedly removed from YouTube at the request of Janse's management team. Yet those with fewer resources face challenges when it comes to spotting deepfakes or identifying perpetrators.

In November, YouTube introduced a policy allowing users to request the removal of AI-generated or otherwise altered content that features identifiable individuals.

Law enforcement efforts to combat this new form of identity theft have been sluggish, however. Police departments, limited by budgetary constraints, lack the resources for cybercrime investigations or dedicated training for officers, as The Post reported.

No federal legislation specifically addresses deepfakes, and while some states are advancing AI bills, regulations primarily pertain to political ads and nonconsensual pornography.

Scammers only require a small sample to create falsified content, according to Ben Colman, the CEO of Reality Defender, a deepfake detection platform.

“If audio, video, or images exist publicly — even if just for a handful of seconds — it can be easily cloned, altered, or outright fabricated to make it appear as if something entirely unique happened,” Colman told The Post.

Earlier this month, Knewz.com reported that a series of photos depicting presidential candidate Donald Trump apparently mingling with Black voters was called out as deepfakes.

Those behind the AI creations were accused of trying to sway this demographic in Trump's favor ahead of the 2024 presidential election, BBC Panorama reported on Sunday, March 3.

© EMG, INC