GenAI is not the end of the world
Guess what! Generative AI is still a topic in 2025.
We have already discussed in our blog how generative AI has shifted certain aspects of phishing, such as crafting more convincing text messages and scaling personalized spear phishing campaigns. However, its impact extends beyond improved text quality. Generative AI also introduces media files, including images and videos, into the mix, further enhancing social engineering tactics.
There have been a handful of business email compromise (BEC) cases highlighting this evolution of deepfakes. One widely discussed incident involved a British engineering company, where an employee fell victim to a video call featuring multiple deepfake participants. The employee initially suspected that the message was part of a phishing attack, but thought there was no harm in joining the video call to double-check. The scammers followed up with direct messages impersonating the CFO, leading to several fraudulent transfers totaling $25 million.
Another recent deepfake case occurred in France, where a woman lost €830,000 to a romance scammer impersonating Brad Pitt. The fraudsters fabricated a plausible backstory, claiming the actor needed money for kidney cancer treatment but was unable to access his funds due to frozen accounts following his recent divorce. The victim later reported that the scammer "knew exactly how to talk to women," suggesting either AI-generated messages or a well-versed social engineer. The deepfake images of Pitt in a hospital certainly reinforced the deception. While the operation was likely a blend of human interaction and automation scripts, it highlights that AI-enhanced fraud is currently more common than fully AI-orchestrated scams. However, this balance could shift with the future development of AI agents.
The scam did not end there. Following the incident, the woman was approached by another entity claiming they could help recover her lost funds. Whether this was the original scammer or another opportunistic fraudster is unclear, but follow-up scams designed to exploit victims further are a well-documented tactic among cybercriminals.
Raising awareness of these scams is crucial, as their increasing sophistication makes them harder to detect. The fundamental rule remains: be cautious whenever money or sensitive information is involved. Unfortunately, the victim in this case faced significant public ridicule after sharing her story. Such cyberbullying discourages other victims from coming forward, making it more difficult to learn from these incidents and track the perpetrators.
If you want to learn about and discuss deepfakes and their impact, then we highly recommend attending the Applied Machine Learning Days (AMLD) conference in mid-February at EPFL in Switzerland: four days full of deep dives and actionable insights into AI/ML.
Candid Wüest from xorlab will be keynoting the AMLD track "Unmasking the Digital Deception: Defending Against DeepFakes and Disinformation Attacks" and, later the same day, discussing upcoming AI threats in the panel "GenAI Security Threats".
Join us at AMLD to learn more about deepfakes.
TALK by Candid Wüest
AMLD | February 12, 14:00