RSAC: AI Is Helping Hackers; We Need More AI!

SAN FRANCISCO—AI, anybody? AI was, of course, the topic du jour here at RSAC, where hundreds of companies were talking about threats to AI as well as the solutions.
In this cavalcade of AI jargon and hype, the irony is that vendors are pitching AI as the solution to the AI security problem. Key AI security risks include prompt injection, LLM data poisoning, and supply-chain attacks (both software and hardware), among many others. In a roundtable I did with Fortanix, as well as a dinner with friends from Aryaka, NetFoundry, HackerOne, and CloudBrink, we reached consensus that AI may be emerging as the ultimate shadow IT app, and the compliance headache of the century.
Here are just some of the AI security topics and threats I heard here in four days of discussions, panels, and roundtables:
AI Compliance and Governance. How do you know what's going on with AI in your organization? You must know what's happening before you secure it. "Observability is the first step in governance," said Scott Fanning, VP of Product Management at Aryaka.
Data Privacy. How do you protect your proprietary data and control how it's used in external large language models (LLMs) or private small language models (SLMs)? How do you protect the most sensitive data?
Safety and Guardrails. How do you build in AI guardrails and make sure your people are using AI safely? I heard of one instance in which a Fortune 500 company banned all external AI apps and is building its own. (A minimal guardrail sketch follows this list.)
Agents and MCP. Agentic AI can be scary. Imagine AI agents talking to other agents, or AI talking to Model Context Protocol (MCP) servers. What's going on there, exactly? You can read a whole Cloud Tracker Pro piece about MCP by Craig Matsumoto.
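To make the data-privacy and guardrail questions above concrete, here is a minimal sketch of one common pattern: scanning outbound prompts for sensitive data and redacting it before the prompt ever reaches an external LLM. This is not any vendor's actual implementation; the patterns and the redact_prompt helper are illustrative assumptions, and production guardrails rely on far richer DLP engines and classifiers.

```python
import re

# Hypothetical patterns for illustration only; real deployments use far richer
# detectors (DLP engines, ML classifiers, per-destination allow/deny lists).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive spans with placeholders before a prompt leaves the org."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

# A guardrail layer would log the findings (observability and governance) and
# only then forward the cleaned prompt to the external LLM.
clean, findings = redact_prompt(
    "Summarize the contract for jane.doe@example.com, SSN 123-45-6789."
)
print(findings)  # ['email', 'ssn']
print(clean)     # sensitive values replaced with placeholders
```

The point of the sketch is the placement, not the regexes: the check sits between the user and the external model, which is also where observability and compliance logging naturally live.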
Of course, those are just the basics; there are plenty more issues. The main question is: How do you control it all?
Interestingly, at the same time that AI is the problem, it can also be the solution. I saw compelling applications of AI-driven technology to help close the looming AI security loopholes, whether that's Stellar Cyber's "multilayer AI" to aid in detection, correlation, and response; an entire private AI platform, Armet AI, based on confidential computing from Fortanix; or Aryaka's unique integration of AI-powered observability for advanced threat detection, prevention, and analytics within a managed network service.
False Voices and Fake Faces
The message that AI is both a threat and a solution emerged from an AI Security Report released by Check Point Software Technologies Ltd. here at the RSA Conference. For every AI-based application, LLM, and AI platform deployed by enterprises, there's a corresponding malicious entity on the so-called dark web. And protecting against these threats means fighting AI with AI.
Check Point, for instance, cites ChatGPT as the top tool favored for business use, with a presence in 37% of corporate environments. At the same time, ChatGPT and OpenAI’s API are the top two favored AI tools deployed by hackers to create audio and video mimicking actual humans to coerce users into releasing sensitive data.
In one example of an LLM turned to bad use, threat actors created a clone of the voice of Italy's defense minister, Guido Crosetto, and used it to make phone calls to prominent Italian citizens asking for funds to free Italian journalists supposedly incarcerated in the Middle East. The scam fooled at least one victim, who sent €1 million to a Hong Kong account before the truth came out.
In another instance cited by Check Point, a supposed job candidate used a fake video feed in place of his real face to mask his intent to harvest sensitive data from a would-be employer. The interviewer wasn't fooled, as you can see in this disturbing LinkedIn post.
Jailbroken models, which cybercriminals create by bypassing fundamental LLM guardrails, generate diabolical versions of the digital twin concept, according to Lotem Finkelstein, Director of Check Point Research. “These aren’t just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behavior. It’s not a distant future – it’s just around the corner,” he stated in a press release.
Check Point’s report cites many more examples of online deception, including the theft of AI accounts for OpenAI, Perplexity, Claude, and other AI sources. Through phishing, credential stuffing, and other means, hackers steal account authentication information and sell it online.
So how widespread is the problem? According to Check Point’s research, AI services are in use on 51% of enterprise networks worldwide each month. About 1.25% of prompts sent to GenAI services from user devices carry the potential for serious data leakage, and one in three prompts contains potentially sensitive data.
Cybercriminals are creating their own AI models to automate DDoS attacks and distribute malware. AI is also used to extract, correlate, and clean up user data for sale on the dark web.
Risks Run Deep in the Supply Chain
AI risks run everywhere, not just within LLMs or data. Attacks on critical infrastructure are also rising, as we recently pointed out in covering nation-state and Typhoon attacks.
As Eclypsium Founder and CEO Yuriy Bulygin pointed out to me, the explosion of infrastructure being built to deliver AI is creating new attack surfaces.
"What people don't realize is all this AI runs on NVIDIA, Huawei, or Supermicro," he said. "These AI chips can are amazingly complex and then you add SmartNICs—there are so many risks. It's an extremely complicated supply chain. The modern GPU server will have 6,000 components in it."
Solving with AI
So, why not fight AI with AI? Check Point uses its report to tout its AI protection software, including an Infinity AI Copilot that deploys AI to detect and thwart threats like the ones mentioned above. That service leverages over 50 AI engines and big data gleaned from hundreds of millions of sensors, Check Point says, to halt phishing, malware, ransomware, and DNS attacks.
Fortinet, which launched its own Fortinet Threat Report at the RSA show, offers FortiGuard AI-powered Security Services, designed to work with the vendor’s FortiGate Next-Generation Firewalls and other products.
Both Check Point’s and Fortinet’s reports offer a dire look at the state of cybersecurity in enterprise networks. Both indicate the efforts that longstanding cybersecurity companies are making toward mitigating the threats emerging from AI use. But both companies’ efforts add AI to existing security products, something that may draw ire from customers unwilling to add yet another layer of complexity onto their security point products.
In the Stellar Cyber booth, I was impressed with demonstrations from MSSPs of the company's SecOps platform, which uses AI analytics to correlate threats from thousands of sources, including XDR and SIEM solutions from popular cyber platforms such as CrowdStrike, Palo Alto Networks, and others.
Stellar's marketing can include all the cyber alphabet soup under the sun, including "AI-driven SIEM," NDR, Open XDR, and Multi-Layer AI™, but I think there is a theme in what the company is delivering: customers need to work with many vendors and sources of data, then use AI-driven analytics to build a comprehensive view of what's going on in an organization. That's how they arrive at a true AI-driven security platform.
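To illustrate that correlation theme in the simplest possible terms, here is a hypothetical Python sketch (not Stellar Cyber's code): alerts from different tools are normalized and grouped by the entity they concern, so a host flagged by multiple sources surfaces as a single, higher-priority incident. The field names and alert records are invented for illustration.

```python
from collections import defaultdict

# Hypothetical, simplified alert records; a real platform normalizes many more
# fields (MITRE ATT&CK tactic, severity, asset context, timestamps, etc.).
alerts = [
    {"source": "EDR",  "entity": "10.0.4.17", "signal": "credential dumping"},
    {"source": "NDR",  "entity": "10.0.4.17", "signal": "beaconing to rare domain"},
    {"source": "SIEM", "entity": "10.0.9.2",  "signal": "failed-login spike"},
]

def correlate_by_entity(alerts):
    """Group alerts from different tools by the entity they concern, so activity
    seen by multiple sources on one host rises to the top as a single incident."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["entity"]].append(alert)
    # Entities flagged by more than one tool are the most interesting.
    return {entity: items for entity, items in incidents.items() if len(items) > 1}

print(correlate_by_entity(alerts))
# {'10.0.4.17': [EDR alert, NDR alert]} -- cross-tool correlation on one host
```

The design choice matters more than the code: the value comes from ingesting many sources, not from any single detector, which is the argument for an open, multi-vendor approach.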
What's the ultimate message? While the AI boom may indeed be introducing a new era of productivity, it's also ushering in a new era of risks. Customers should be glad that the discussion has pivoted toward the rising tide of AI risks and how to solve them, but they should be putting more pressure on vendors to work together on integrated solutions.
Futuriom Take: Efforts to mitigate enterprise AI risks add a layer to existing security wares, though many customers will likely welcome the additional software as essential to operating a safe AI environment.