The word is "INTENT."
That's it. One keyword, embedded in a crafted prompt injection payload, was enough to convince Grafana's AI assistant that a data exfiltration attack was legitimate behavior. The model recognized the attack pattern. It flagged it. And then a single word told it to stand down.
Noma Security disclosed GrafanaGhost on April 7, 2026: an indirect prompt injection attack that chains four distinct bypasses to silently exfiltrate data from Grafana dashboards. The data at risk includes real-time financial metrics, infrastructure health telemetry, private customer records, and operational data from across your entire stack. The attack leaves no trace. And traditional vulnerability scanners will never find it, because no CVE was ever assigned.
The Four-Bypass Chain
GrafanaGhost isn't a single vulnerability. It's four weaknesses, each individually manageable, that become devastating when chained together.
Bypass 1: Poisoning the input. Grafana's error handling reflects user-controlled content from URL paths back into logs and error messages. Attackers craft URLs using errorMsgs parameters that embed prompt injection payloads into data the AI assistant later processes. The injection point looks like a normal error log entry.
Bypass 2: The JavaScript validation bug. Grafana's client-side code uses an isImageUrlAllowed() function to restrict which domains can serve images. The function checks src.startsWith('/') to identify relative paths. The problem: "//attacker.com/exfil".startsWith('/') returns true in JavaScript. Protocol-relative URLs like //attacker.com pass the validation because they technically start with a forward slash, but browsers treat them as external absolute URLs. A classic off-by-one in trust logic.
Bypass 3: The keyword that kills guardrails. When the AI model encounters the injected prompt, it initially recognizes the attack pattern and refuses. But the payload includes the keyword "INTENT," which signals to the model that the request represents intentional, legitimate behavior rather than an attack. The model stands down. This is the most alarming part of the chain: a single word in the right context can override safety training.
Bypass 4: Invisible exfiltration. The AI model, now convinced the request is legitimate, executes the injected instructions. It queries dashboard data, aggregates sensitive information, and embeds it in an image tag URL: . The browser renders the image tag, sending a request to the attacker's server with the exfiltrated data as URL parameters. Because the AI itself initiated the action, it looks indistinguishable from normal AI assistant behavior; the same pattern that let Snowflake's AI coding agent escape its sandbox by turning legitimate access into an exfiltration channel.
As Noma's vulnerability research lead Sasi Levi told CyberScoop: "The payload sits inside what looks like a legitimate external data source. The exfiltration happens through a channel the AI itself initiates, which looks like normal AI behavior to any observer."
The Vendor Dispute That Hurts Defenders
Noma claims the attack requires zero clicks after initial injection. Grafana's CISO Joe McManus disputes this, stating that exploitation "would have required significant user interaction: specifically, the end user would have to repeatedly instruct our AI assistant to follow malicious instructions contained in logs, even after the AI assistant made the user aware of the malicious instructions."
Both sides make valid technical points. Neither side provides enough specificity for defenders to make their own risk assessment.
What security teams actually need to know: Under what deployment configurations is autonomous exploitation possible? What Grafana versions and AI feature settings are affected? What does "significant user interaction" mean in concrete terms, two clicks or twenty? McManus characterized the issue as "a Markdown image renderer flaw" rather than a comprehensive vulnerability. That framing may be technically defensible, but it undersells the attack's sophistication and the systemic risk it represents. It echoes the confused deputy problem that continues to surface in every AI system processing untrusted input while holding privileged access.
The result of this ambiguity: most security teams will do nothing. They'll file it as "disputed, no CVE, probably fine" and move on to the next alert. That's exactly the wrong response.
The Missing CVE Is the Bigger Story
No CVE number was assigned to GrafanaGhost. That absence matters more than it might seem.
CVEs are how vulnerability management works at scale. They feed into scanners, patch management systems, compliance audits, and risk dashboards. Without a CVE, GrafanaGhost doesn't exist in any of those systems. An organization could be running a vulnerable Grafana instance with AI features enabled, pass every compliance audit, and never know this attack vector exists.
This isn't unique to GrafanaGhost. AI prompt injection attacks in general don't fit neatly into the CVE framework, which was built around code bugs with clear fix/no-fix states. Prompt injection is contextual, probabilistic, and dependent on model behavior that changes with every update. Traditional vulnerability classification wasn't designed for attacks that work by convincing software to do the wrong thing rather than forcing it to.
The implication: if your vulnerability management program relies entirely on CVE-based scanning, you have a growing blind spot. An entire class of attacks is emerging that will never trigger an alert in your existing tooling.
Monitoring Tools Are the New Target
Here's the pattern worth paying attention to. Grafana, Datadog, Splunk, and similar observability platforms occupy a unique position in enterprise infrastructure. They ingest data from everything: application logs, infrastructure metrics, customer transactions, security events, financial telemetry. They are, by design, the most connected and most data-rich systems in your environment.
Now these platforms are aggressively adding AI features. Grafana has its AI assistant. Datadog has Bits AI. Splunk has its AI Assistant. These features process, summarize, and act on the very data they're monitoring. The sales pitch is compelling: ask your dashboard a question in plain English and get an instant answer.
The security reality: you've just given an AI model with broad data access the ability to take actions based on potentially poisoned inputs. It's the broader pattern of attackers weaponizing native platform features rather than breaking in from outside. The attack surface isn't a misconfigured firewall or an unpatched library. As Levi put it: "It is the weaponization of the AI's own reasoning and retrieval behavior."
Ram Varadarajan, CEO of Acalvio Technologies, framed it well: "GrafanaGhost perfectly illustrates how AI integration creates a massive security blind spot by using system components exactly as designed, but with instructions the model cannot verify as malicious."
This is the same pattern Noma Security has now documented across multiple platforms: ForcedLeak in Salesforce Agentforce, GeminiJack in Google Gemini, DockerDash in Docker. The attack class is consistent. The AI processes untrusted input, follows injected instructions, and uses its own authorized access to exfiltrate data. Each disclosure is treated as an isolated vendor issue. It's not. It's a systemic architectural problem.
What To Do About It
BeyondTrust's Bradley Smith acknowledged the attack is "mostly hype" for well-protected enterprises, but called the underlying attack pattern "well-documented and legitimate." The practical question is whether your enterprise is actually well-protected against this specific attack class. For most, the honest answer is no.
Audit your AI feature exposure. Check whether Grafana AI features (or equivalent features in Datadog, Splunk, and other observability tools) are enabled in your environment. Determine who enabled them, when, and whether anyone assessed the security implications. Many organizations have AI features turned on by default or enabled by enthusiastic engineers without security review.
Restrict image source domains. If you use Grafana, ensure your Content Security Policy restricts img-src to known, trusted domains. This would have broken the protocol-relative URL bypass in the GrafanaGhost chain. This is a concrete, deployable fix you can make today.
Apply egress controls. Network-level restrictions on outbound traffic from your monitoring infrastructure can prevent data exfiltration even if the application-layer bypass succeeds. Your Grafana instance should not be making arbitrary outbound HTTP requests.
Monitor AI agent behavior at runtime. Traditional security monitoring watches for unauthorized access patterns. AI-initiated exfiltration looks like authorized behavior because the AI has legitimate access. You need runtime monitoring that tracks what the model was asked, what it retrieved, and what actions it took, then flags anomalous patterns. This is a capability most organizations don't have yet.
Don't wait for a CVE. If your vulnerability management program only responds to CVEs, you're already behind on AI-specific threats. Treat prompt injection as a primary threat vector, not a theoretical concern.
The Keyword Problem
The most troubling element of GrafanaGhost isn't the JavaScript bug or the protocol-relative URL trick. Those are fixable. It's that a single keyword, "INTENT," overrode an AI model's safety training.
This isn't a Grafana-specific problem. It's the same fundamental vulnerability that makes prompt injection possible everywhere: AI models cannot reliably distinguish data from instructions. It's a question every organization integrating AI into enterprise tools needs to ask: if a single word in the right context can disable your model's safety guardrails, how confident are you in every other AI-powered tool in your stack? How many of them have been tested against adversarial prompts? How many have been tested against adversarial prompts that specifically target the gap between safety training and production context?
Grafana has patched the specific vulnerability. But the architectural pattern, AI models with broad data access processing untrusted input in enterprise environments, is everywhere and growing. The next GrafanaGhost won't be in Grafana.