On April 19, 2026, Vercel disclosed that a breach of its internal systems started not with a phished admin or a compromised CI pipeline, but with Context.ai, an AI agent platform that one of its employees had been using to build slide decks. The attacker pivoted from Context.ai's OAuth token into the Vercel employee's Google Workspace, then into customer environment variables that weren't marked "sensitive." By the time the disclosure went out, 580 Vercel employee records were being shopped for $2M on Telegram, and Vercel's own CEO was confirming that "a limited subset of customers" had data compromised.
The chain worked because nothing in Vercel's vendor review process, or its customers' vendor review processes, had a row for "AI productivity tools that an employee self-provisioned with Allow All Google Workspace permissions." This is fourth-party risk: the vendor of a vendor that nobody downstream approved, showing up inside the OAuth consent screen.
February 2026: The Roblox Script
The infection started when a Context.ai employee downloaded a Roblox "auto-farm" script. These tools are part of a MaaS-distributed game exploit economy circulating through YouTube, Discord, and GitHub that drops Lumma, RedLine, or Vidar infostealers alongside whatever game cheat the user thinks they're installing. Hudson Rock obtained the resulting credential logs and identified the compromise a full month before Context.ai itself disclosed it.
The haul included credentials for Context.ai's Google Workspace, a support@context.ai account, Supabase, Datadog, and AuthKit. Browser logs showed the attacker actively navigating Vercel's own environment variable management URLs: vercel.com/context-inc/valinor/settings/environment-variables, /settings, and /logs. The Roblox vector is exactly the pattern I documented in the 149 million stolen credential epidemic: a $200-per-month malware-as-a-service economy with industrial distribution and a 66% EDR bypass rate. The infrastructure did what it was designed to do.
March 2026: The Silent Detection
In March 2026, Context.ai identified unauthorized AWS access tied to the stolen employee credentials and blocked it. Its public disclosure says the company then "fully deprecated the consumer product" (the AI Office Suite), closed the AWS environment, and worked with CrowdStrike on hardening. What the disclosure does not say is that anyone downstream was notified. Vercel did not learn its Google Workspace was compromised until April. Vercel's customers, in turn, did not learn until April 19.
Between March and April, the attacker held a live OAuth token with "Allow All" scopes into a Vercel employee's corporate Google Workspace. Those scopes, granted by the employee rather than by Vercel's admin, include Gmail read and send, Drive access, Calendar, and whatever else Google Workspace's consent screen bundles into its "allow everything" default. The permissions live on the user, not on Context.ai's enterprise product (which Context.ai is careful to note runs in customer-controlled environments and "was not affected"). They live on the user, and they persist until the user revokes them.
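Those lingering grants can at least be inventoried. Here is a minimal sketch in Python: it assumes token records shaped like the Google Admin SDK Directory API tokens.list response (the clientId, displayText, and scopes field names come from that API; the sample entries, including the Context.ai one, are hypothetical).

```python
# Sketch: flag user-granted OAuth tokens that hold broad Workspace scopes.
# Record shape mirrors the Admin SDK Directory API tokens.list response;
# the sample data below is invented for illustration.

BROAD_SCOPES = {
    "https://mail.google.com/",                   # full Gmail read/send
    "https://www.googleapis.com/auth/drive",      # full Drive
    "https://www.googleapis.com/auth/calendar",   # full Calendar
}

def tokens_to_revoke(tokens):
    """Return (clientId, displayText) for every grant holding a broad scope."""
    flagged = []
    for t in tokens:
        if BROAD_SCOPES & set(t.get("scopes", [])):
            flagged.append((t["clientId"], t.get("displayText", "")))
    return flagged

grants = [
    {"clientId": "123.apps.googleusercontent.com",
     "displayText": "AI Office Suite",            # hypothetical entry
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"clientId": "456.apps.googleusercontent.com",
     "displayText": "Read-only profile app",
     "scopes": ["openid", "email"]},
]

print(tokens_to_revoke(grants))
# Each flagged clientId can then be passed to tokens.delete() to revoke it.
```

Run against real tokens.list output per user, this is the audit that would have surfaced a live "Allow All" grant sitting untouched between March and April.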
April 19-20: The Disclosure
On April 19 at 11:04 AM Pacific, Vercel published the security incident bulletin. CEO Guillermo Rauch confirmed in statements to BleepingComputer that the attacker had reached customer environment variables, but only "non-sensitive" ones. That distinction is load-bearing. Vercel's "sensitive" flag is opt-in. Environment variables not explicitly marked sensitive are stored unencrypted at rest. GitGuardian's Guillaume Valadon wrote the critical analysis the day the bulletin dropped: "non-sensitive" env vars routinely contain API keys, database connection strings, and service account credentials, because the flag is opt-in and engineers miss it.
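The opt-in failure mode is auditable. A minimal sketch, assuming env var records with key and type fields and a "sensitive" type value (modeled loosely on Vercel's project environment variable API; verify field names and values against the actual docs before relying on this):

```python
import re

# Sketch: find secret-looking env var keys that were never marked sensitive.
# The record shape and the "sensitive" type string are assumptions.

SECRET_HINT = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|DSN|DATABASE_URL)", re.I)

def unmarked_secrets(env_vars):
    """Return keys that look like credentials but lack the sensitive flag."""
    return [v["key"] for v in env_vars
            if SECRET_HINT.search(v["key"]) and v.get("type") != "sensitive"]

project_env = [
    {"key": "STRIPE_SECRET_KEY", "type": "encrypted"},   # missed the flag
    {"key": "DATABASE_URL",      "type": "plain"},       # missed the flag
    {"key": "GITHUB_TOKEN",      "type": "sensitive"},   # correctly flagged
    {"key": "NEXT_PUBLIC_THEME", "type": "plain"},       # genuinely public
]

print(unmarked_secrets(project_env))
```

A name-based heuristic like this is exactly what Valadon's point predicts it will catch: credentials that engineers stored as ordinary variables because nothing forced the flag.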
The same afternoon, a ShinyHunters-branded actor claimed the breach on Telegram and demanded $2M for 580 Vercel employee records, source code, deployment configurations, API keys including NPM and GitHub tokens, and "access to several internal deployments." Actual ShinyHunters members denied involvement. Attribution here is contested, and the SSO-as-master-key playbook those actors made famous is visibly being copied by unrelated groups. TechCrunch confirmed the breach "may affect hundreds of users across many organizations." The opt-in default-setting failure is a pattern I have covered before: Anthropic's data leak wasn't a cyberattack, it was a default setting. Vercel's opt-in sensitive flag is the same architecture.
The Governance Row That Doesn't Exist
Mercor's breach, which I wrote about yesterday, exposed how AI labs were procuring privileged-access vendors like staffing agencies. The Vercel breach extends that gap downstream: enterprises are provisioning AI agent platforms like productivity tools, not like privileged identities, even though the OAuth consent model makes them exactly that.
Context.ai's AI Office Suite is designed to perform actions across Gmail, Drive, Calendar, and every other app the user has connected. That is the product's whole point. When a Vercel engineer clicked "Allow All" during signup, they gave Context.ai everything their corporate account could touch. Every AI agent platform your employees are using today (Glean, Copilot Studio, ChatGPT with connectors, the enterprise tier of Context.ai itself) uses the same architectural pattern. User-scoped OAuth tokens, inheriting whatever the signing human can reach, persisting until explicitly revoked. The agent is not a bot service account with a narrowly scoped API key. It is a human user's full Google Workspace identity, delegated.
The governance problem this creates is not solved by asking engineers to be careful. Third-party data sharing was already the largest enterprise risk surface in 2026; Verizon's DBIR reports third-party-involved breaches doubled year over year. AI agent platforms are a new class of third party: one that actively uses the OAuth scopes it was granted, at machine speed, across all connected apps, often during hours when no human would reasonably be logged in. The blast radius of a compromised AI agent platform token is every application the granting user can access, not just the ones the vendor is supposed to touch. When an AI agent becomes an insider threat, it is because nothing in the identity model distinguishes between "Claude writing a memo on Drive" and "an attacker using Claude's OAuth token to exfiltrate Drive."
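That last property, machine-speed activity at hours no human would be working, is detectable from access logs. A minimal sketch with invented event shapes and thresholds (no vendor's actual log format; the workday window and rate cap are illustrative assumptions):

```python
from datetime import datetime

# Sketch: flag OAuth-app activity that looks like machine-speed or
# off-hours use of a human user's delegated token.

def suspicious_apps(events, max_per_minute=30, workday=(7, 19)):
    """Return app names seen off-hours or bursting past the rate cap."""
    per_minute = {}
    flagged = set()
    for e in events:
        ts = datetime.fromisoformat(e["ts"])
        if not (workday[0] <= ts.hour < workday[1]):
            flagged.add(e["app"])            # off-hours access
        bucket = (e["app"], ts.replace(second=0, microsecond=0))
        per_minute[bucket] = per_minute.get(bucket, 0) + 1
        if per_minute[bucket] > max_per_minute:
            flagged.add(e["app"])            # machine-speed burst
    return sorted(flagged)

events = [{"ts": "2026-04-12T03:14:00", "app": "agent-platform"},
          {"ts": "2026-04-12T10:05:00", "app": "crm-sync"}]

print(suspicious_apps(events))
```

The catch is the one the paragraph names: a legitimate agent and an attacker replaying its token produce the same log lines, so this flags the identity, not the intent.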
Three changes would actually matter.
First, treat AI agent platforms as privileged identities inside your vendor risk program. That means the same TPRM, SOC 2 review, data processing agreements, and breach notification clauses you require of your payroll vendor. Not self-service signup with a user's corporate identity.
Second, block user consent to unverified OAuth apps at the Google Workspace or Microsoft Entra admin level. Admin-approved consent for any app requesting high-scope Workspace access is a fifteen-minute configuration change and prevents the entire Context.ai consent vector from happening inside your org.
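The admin-approved-consent rule reduces to a simple predicate. Google Workspace's app access controls implement this in the admin console; the scope list, the verified field, and the app record below are illustrative assumptions, not that console's actual data model:

```python
# Sketch: the consent-gating policy as a predicate. An unverified app,
# or any app requesting a high-risk Workspace scope, requires admin
# approval instead of self-service user consent.

HIGH_RISK_PREFIXES = (
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar",
    "https://www.googleapis.com/auth/admin.",
)

def requires_admin_consent(app):
    """Block self-service consent for unverified or high-scope apps."""
    if not app.get("verified", False):
        return True
    return any(s.startswith(HIGH_RISK_PREFIXES) for s in app["scopes"])

# A hypothetical self-provisioned agent platform requesting full Drive:
print(requires_admin_consent(
    {"name": "ai-office-suite", "verified": False,
     "scopes": ["https://www.googleapis.com/auth/drive"]}))
```

Under this policy the signup flow that started the Vercel chain dead-ends at an admin request instead of an "Allow All" click.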
Third, require AI agent platforms you do approve to run on scoped service accounts with narrowly defined API permissions, not on the signing user's delegated OAuth token. The enterprise tier of Context.ai runs this way in customer-controlled environments. The consumer tier that an employee self-provisions does not. Vercel's breach ran through the second kind.
Vendor governance has always been about asking "what happens if this vendor gets compromised?" The Vercel breach is asking a different question: what happens when a vendor you didn't approve, a vendor of your vendor, gets compromised, and carries permissions your compliance team never reviewed? The answer Vercel customers found out that weekend morning is that somebody else's Roblox script reaches your environment variables.