
AI-powered deepfake scams shoot up during festive season sales

  • Staff Writer
  • Nov 18, 2024
  • 3 min read

Updated: Dec 14, 2024



Recent advancements in generative AI have led to a proliferation of deepfake celebrity scams on social media, especially during online festive season sales. 

A new survey by cybersecurity firm McAfee shows that 45% of online shoppers in India have been duped by deepfake shopping scams or know someone who has. Out of those affected, 56% lost money, with 46% losing over Rs 41,500 to such scams. 


Celebrity endorsements wield a strong influence on the purchasing decisions of many Indian customers, which is why almost all brands use them. Now, scammers are using celebrity deepfakes to lure online shoppers to malicious websites or links, where they are offered limited-time deals or special discounts. Information shared by users on such websites can be used to carry out identity theft and financial fraud.


“With AI-powered tools, scammers can now more quickly and easily create incredibly realistic fake celebrity endorsements and near-perfect imitations of trusted brand messages and websites. We’re urging people to stay cautious, think twice about deals that seem too good to be true, and use the best online tools to protect their information,” warned Pratim Mukherjee, senior director of engineering at McAfee. 


Most of these deepfake scams are carried out through email, according to the McAfee survey. Around 39% of shoppers in India said they had encountered fake messages while checking their email, while 31% were targeted through SMS and 30% via social media.


The McAfee survey was based on input from 7,128 adults in seven countries: the US, Australia, India, the UK, France, Germany, and Japan. Though the sample size is small, it is enough to remind buyers that the threat of deepfake scams looms large, especially during festive sales.


As the name suggests, a deepfake is artificially generated content built from photos, videos and audio recordings of a real person. To make the synthetic content look as realistic as the original, deepfakes use machine learning (ML) models such as generative adversarial networks (GANs), in which a discriminator algorithm learns to detect inconsistencies while a generator algorithm tries to evade detection by producing ever more convincing versions of the fake. This adversarial back-and-forth is why deepfakes look so realistic and can be hard to detect.
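
To make the adversarial idea concrete, here is a minimal, illustrative sketch of GAN training on toy random vectors. It is not any real deepfake tool; the network sizes, learning rates and data are arbitrary assumptions, and PyTorch is used purely for illustration.

```python
# Minimal sketch of the adversarial (GAN) training loop described above.
# Toy example on random vectors; all sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generator: turns random noise into a synthetic sample.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)        # stand-in for real media samples
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # 1) Discriminator learns to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Generator updates so its fakes are scored as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

Each side improves by playing against the other, which is the loop that makes the resulting fakes increasingly hard to tell apart from genuine material.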


Social media companies are also growing aware of the rise of celeb-bait deepfakes and are taking measures to address it. For instance, last month Meta announced that it plans to revive the use of face-recognition technology (FRT) to protect users from scammers running malicious celeb-bait ads.


Though Meta uses ML classifiers to review the millions of image and video ads on its platforms every day for scams and ad-policy violations, the company acknowledged that celeb-bait ads are not easy to detect.


Meta plans to use the FRT system to compare the face in a celebrity-endorsed ad with the celebrity's Facebook and Instagram profile pictures. If the FRT finds a match and further investigation reveals that the ad is a scam, it will be blocked. Meta said it will inform celebrities who are widely impersonated in such scams and automatically enroll them in a new protection program.
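
Meta has not published how this check is implemented; the sketch below only illustrates the general face-matching idea described above, with hypothetical helper functions, embeddings and a made-up similarity threshold.

```python
# Hypothetical illustration of the celeb-bait check described above.
# The data model, threshold and scam check are assumptions, not Meta's code.
from dataclasses import dataclass
from typing import Callable

MATCH_THRESHOLD = 0.8  # assumed similarity cutoff


@dataclass
class Ad:
    ad_id: str
    face_embedding: list[float]   # face embedding extracted from the ad image
    claimed_celebrity: str


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / ((norm_a * norm_b) or 1.0)


def review_ad(ad: Ad,
              profile_embeddings: dict[str, list[float]],
              looks_like_scam: Callable[[Ad], bool]) -> str:
    """Compare the face in the ad with the named celebrity's profile pictures."""
    reference = profile_embeddings.get(ad.claimed_celebrity)
    if reference is None:
        return "no-reference"
    if cosine_similarity(ad.face_embedding, reference) < MATCH_THRESHOLD:
        return "no-match"
    # Face matches the celebrity; a further (hypothetical) scam review decides.
    return "blocked" if looks_like_scam(ad) else "allowed"
```

The point is simply that a face match alone is not enough: the ad is blocked only when the match is combined with a separate finding that the ad is a scam.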


Deepfake videos of Elon Musk promoting cryptocurrency scams have also been spotted on social media platforms such as YouTube. In many of these videos, Musk appears to endorse crypto investment schemes that promise huge returns and a chance to get rich quickly.


In September, cybersecurity firm CloudSek said in a report that it had found a network of 15 YouTube channels promoting crypto scams using deepfakes of Musk. Most of these accounts were verified by YouTube, which made them look legitimate. CloudSek also found that these channels were soliciting donations in Bitcoin, Solana, Dogecoin, and Ethereum from victims, purportedly in support of the Trump campaign.



Image credit: Dall-E
