What’s inside the report:
Summary:
A threat intelligence researcher from Cato CTRL, the Cato Networks threat intelligence team, successfully bypassed the security controls of ChatGPT, Copilot, and DeepSeek, the GenAI tools that enterprises are using to improve workflow efficiency.
Using a new LLM jailbreak technique, the researcher tricked all three tools into creating malware that steals login credentials from Google Chrome. The researcher had no prior malware coding expertise, just a cleverly crafted narrative that fooled every security guardrail.
Cybercrime is no longer limited to skilled threat actors; with basic tools, anyone can launch an attack. For CIOs, CISOs, and IT leaders, this means more threats, greater risk, and the need for stronger AI security strategies.