Table of contents

    The AI-powered malware era: hype or reality?

    Malicious emails are often the first step in a cyber attack—like the opening move in a high-stakes chess game. And if that email slips past built-in security, the game can quickly spiral out of control. Beyond phishing for sensitive information, attackers still love to drop malware, particularly infostealers, straight into unsuspecting inboxes. Over the years, malware payloads have constantly evolved - starting with simple .EXE attachments, moving to Office documents with macros, password protected archives, and SVG files smuggling HTML and JavaScript. Now, generative AI has been added into the mix, stirring things up once again.

    Security researchers have already found email campaigns in the wild with script payloads stuffed with excessive comments - likely the handiwork of an AI, left by an inexperienced attacker who forgot to clean it up. But the real question is: How much does generative AI actually change the malware landscape for organizations? What’s achievable for cybercriminals today, what’s already happening, and what’s just media-fueled mythos?

    AI-generated malware: what’s fact and what’s fiction?

    At this year’s Insomni’Hack conference in March (Lausanne), we’ll be diving into this topic. Candid Wüest will take the stage with our session, “The Rise of AI-Driven Malware: Threats, Myths, and Defenses”. Our goal? To separate fact from fiction and clarify the differences between AI-generated, AI-supported, and AI-powered malware.

    Sure, generating basic malware with code-focused large language models (LLMs) is easy - at least once you bypass the guardrails with prompt injections or use an unfiltered model. But here’s the thing: malware generator kits on underground forums have offered similar services for decades. So, while AI makes the process more accessible, it’s not exactly a groundbreaking innovation. If a payload behaves the same way - whether it’s stealing a Bitcoin wallet or encrypting files - modern security solutions with behavior-based detection and anomaly analysis will still spot and stop it. What AI really changes is the scale of attacks, allowing more cybercriminals to generate and distribute malware at a faster pace.

    AImalware_types

    The old becomes new again: polymorphic and metamorphic malware

    Remember the Tequila virus from the early ’90s? It used a polymorphic encryption engine to evade detection. Fast forward to today, and we’re seeing a similar strategy with a modern twist. Take ChattyCaty, for instance - this malware uses LLMs to rewrite its code on the fly during infection, making every instance unique. By simply asking an LLM in plain English to generate functionally equivalent code, attackers can bypass static signature detection.

    However, there’s a catch. While this tactic may dodge traditional static analysis, the overall behavior of the malware still raises red flags. If a threat suddenly starts using every persistence technique in the MITRE ATT&CK framework, any decent Endpoint Detection and Response (EDR) system will light up like a Christmas tree.

    AI-powered cybercrime: the reality check

    Yes, AI has already been used to create malware for real-world attacks. For example, last October a man in Japan was arrested and sentenced to three years in prison for deploying ransomware generated with AI. His alleged statement to the police? “I wanted to make money through ransomware. I thought I could do anything if I asked AI.” It took him about six hours to create the ransomware - but the fact that he got caught suggests that being an AI-powered cybercriminal isn’t as easy as he thought.

    Still, as AI tools advance, so will the level of sophistication in AI-assisted attacks. And once again, it’s less about AI reinventing malware itself and more about how it accelerates and scales these attacks.

    The rise of AI agents in cybercrime

    With AI agents popping up everywhere this year, attackers have a whole new playground to explore. We’ve already seen proof-of-concept (PoC) threats leveraging Microsoft’s CoPilot to steal data, and it won’t stop there. Indirect prompt injection is a hot topic, also within the malware community. This raises a critical question: How do you know if your AI agent has gone rogue?

    Prevention is still the best defense

    At the end of the day, the best strategy remains stopping malicious emails before they even reach inboxes. That’s why a sophisticated email security solution is a must-have as an early defense layer.

    Meanwhile, our research team at xorlab is keeping a close eye on AI-powered malware trends, ensuring we stay ahead of emerging threats. If you’re wondering whether AI agents are about to invade your IT systems Matrix-style, join us at Insomni’Hack - or just reach out to us. We’d love to chat.

    T A L K

    The Rise of AI-Driven Malware: Threats, Myths, and Defenses

    by Candid Wüest

    Insomni’Hack  I  March 14, 10:30

    insomniahack_image