On March 26, Anthropic, the company that has built its entire brand around being the responsible AI company, left approximately 3,000 unpublished files sitting in a publicly searchable data store. Among them: draft blog posts revealing an unreleased model called Claude Mythos, details of an invite-only CEO retreat at an 18th-century English manor, and employee HR documents including parental leave records.
The cause? A checkbox. Their content management system defaulted all uploaded assets to public access unless someone explicitly toggled the setting to private. Nobody toggled it.
This wasn't a zero-day exploit. It wasn't a nation-state operation. It was the digital equivalent of filing your confidential documents in the lobby bookshelf and assuming nobody would browse.
What Actually Leaked
Security researchers Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge independently discovered the exposed data store. After Fortune notified Anthropic, the company removed public access and attributed the incident to "human error in the CMS configuration."
The headline-grabbing piece was the draft blog post about Claude Mythos, a model Anthropic later confirmed represents "a step change" in capabilities. The draft described a new model tier called "Capybara," positioned above Opus and "currently far ahead of any other AI model in cyber capabilities." The leaked materials also included Anthropic's own internal assessment that the model poses "unprecedented cybersecurity risks."
But here's what nobody's asking: what about the other 2,990 files? The reporting fixates on Mythos and the CEO retreat. Anthropic dismissed the contents as "early drafts" unrelated to "core infrastructure, AI systems, customer data, or security architecture." That's a lot of hand-waving for 3,000 files that included employee HR documents. "Early drafts" is doing some serious heavy lifting in that statement.
The Default-Open Problem
The root cause here isn't unique to Anthropic. It's a design pattern that has been causing data leaks for over a decade: default-open access.
Anthropic's CMS stored all assets as publicly accessible by default. To make something private, someone had to actively change a setting. This is the same architectural decision that has burned organizations through misconfigured S3 buckets, Google Cloud Storage, and Azure Blob Storage for years. Research from Rubrik found that 7% of all S3 buckets are completely publicly accessible without authentication, and 21% of those exposed buckets contain sensitive data.
The pattern repeats because it's convenient. Default-open reduces developer friction. It makes onboarding faster. It means fewer support tickets about access issues. And it works perfectly fine right until it doesn't, which is when someone uploads something that was never meant to be public and forgets (or never knows) to flip the switch.
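The fix is architectural, not procedural: invert the default. Here is a minimal Python sketch (all names hypothetical, not Anthropic's actual CMS) of what a secure-by-default asset store looks like. Uploads cannot specify visibility at all; publishing is a separate, deliberate action.

```python
from dataclasses import dataclass
from enum import Enum


class Visibility(Enum):
    PRIVATE = "private"
    PUBLIC = "public"


@dataclass
class Asset:
    name: str
    # Secure by default: an asset is private unless someone
    # explicitly publishes it, inverting the default-open pattern.
    visibility: Visibility = Visibility.PRIVATE


class AssetStore:
    def __init__(self) -> None:
        self._assets: dict[str, Asset] = {}

    def upload(self, name: str) -> Asset:
        # Note: upload() accepts no visibility argument, so a caller
        # cannot accidentally create a public asset.
        asset = Asset(name)
        self._assets[name] = asset
        return asset

    def publish(self, name: str) -> None:
        # Making something public is a deliberate, auditable action.
        self._assets[name].visibility = Visibility.PUBLIC

    def publicly_visible(self) -> list[str]:
        return [a.name for a in self._assets.values()
                if a.visibility is Visibility.PUBLIC]


store = AssetStore()
store.upload("mythos-draft.md")
store.upload("press-release.md")
store.publish("press-release.md")
print(store.publicly_visible())  # only what was explicitly published
```

The design choice that matters is the missing parameter: if forgetting to act leaves an asset private, human error fails safe instead of failing open.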
This isn't a theoretical concern. I wrote about the exact same pattern when a ChatGPT wrapper app left its Firebase database in test mode, exposing over 300 million AI chat messages because the default was unauthenticated access. Blue Shield of California's misconfigured Google Analytics setup leaked 4.7 million member health records to Google Ads for years before anyone noticed. IDMerit left a MongoDB database open to the internet with no password, exposing a billion identity verification records. Alteryx exposed 120 million U.S. household records through a misconfigured S3 bucket. The mechanism is always the same: a platform defaults to open, and humans fail to override it.
The Numbers Don't Lie
If you think your organization is immune, the data suggests otherwise.
U.S. data compromises hit a record 3,332 in 2025, a 79% increase in just five years. According to the Verizon Data Breach Investigations Report, 60% of all breaches include the human element: errors, privilege misuse, stolen credentials, or social engineering. Insider error breaches alone cost an average of $3.62 million per incident.
And then there's the shadow AI problem. IBM found that 20% of breaches in 2025 involved shadow AI incidents, typically through employees uploading sensitive data to unauthorized AI tools. Those breaches cost $670,000 more on average. Even more alarming: 97% of companies suffering an AI-related breach had no formal AI governance in place.
The question isn't whether your data is exposed somewhere. It's whether you've looked.
The Irony Is the Point
There's something particularly instructive about this happening to Anthropic. This is a company whose Responsible Scaling Policy is a cornerstone of its brand identity. They've built their market position on being the careful, measured alternative to competitors moving fast and breaking things. Their safety reports read like genuine threat intelligence, and they recently disrupted a Chinese state-sponsored campaign exploiting Claude Code across 30 organizations.
And yet, they got burned by a CMS checkbox.
That's not a knock on Anthropic specifically. It's a demonstration that operational security failures don't discriminate based on how sophisticated your AI models are. You can build the most capable reasoning engine on the planet and still lose control of your data because a content management system was configured by someone who didn't think about access defaults.
The lesson here isn't "Anthropic is bad at security." It's that the gap between security posture and security practice is real at every organization, and it usually lives in the mundane places nobody thinks to check: CMS configurations, cloud storage defaults, analytics tags, and third-party integrations that quietly ship your data somewhere you didn't intend.
What to Actually Do About It
None of the coverage of this incident includes actionable advice, so here it is:
Audit your defaults. Enumerate every storage system, CMS, and content pipeline in your organization. Check the default access setting for each one. If anything defaults to public, change it. Today.
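For S3 specifically, the audit reduces to checking four flags on each bucket's public access block. A minimal sketch, assuming the dict shape returned by boto3's `get_public_access_block` call; in a real audit you would fetch this per bucket, but here the configurations are hard-coded so the logic stands alone:

```python
# The four S3 Block Public Access flags. All must be True for the
# bucket to refuse public ACLs and public bucket policies.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)


def is_locked_down(config: dict) -> bool:
    """Return True only if every flag is present and True.

    A missing flag counts as False: absence of a control is not
    evidence of a control.
    """
    return all(config.get(flag, False) for flag in REQUIRED_FLAGS)


# A bucket missing even one flag should fail the audit.
safe = {flag: True for flag in REQUIRED_FLAGS}
risky = dict(safe, BlockPublicPolicy=False)

print(is_locked_down(safe))   # True
print(is_locked_down(risky))  # False
```

The same shape works for any platform: enumerate resources, extract the access configuration, and treat anything short of fully locked down as a finding.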
Scan for exposed assets. Use tools like CloudSploit, Prowler, or ScoutSuite to scan your cloud infrastructure for publicly accessible storage buckets and data stores. Do this on a schedule, not once.
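Between scheduled scans, a cheap sanity check is probing your own asset URLs with no credentials and triaging what comes back. A hypothetical triage helper (status-code interpretation only; the actual unauthenticated requests, which you would issue from outside your network, are left out):

```python
def classify_probe(status: int) -> str:
    """Interpret the HTTP status of an anonymous GET on an internal asset."""
    if status == 200:
        return "EXPOSED"          # anonymous users can read the asset
    if status in (301, 302, 307, 308):
        return "CHECK_REDIRECT"   # may bounce to a login page, or to a public copy
    if status in (401, 403):
        return "ACCESS_CONTROLLED"
    if status == 404:
        return "NOT_FOUND"        # hidden is not the same as protected
    return "UNEXPECTED"


print(classify_probe(200))  # EXPOSED
print(classify_probe(403))  # ACCESS_CONTROLLED
```

Any 200 against a URL that should require authentication is exactly the Anthropic scenario, caught before a researcher catches it for you.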
Implement drift detection. Defaults get reset. Configurations get overridden during deployments. Set up automated monitoring that alerts you when access controls change from private to public.
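At its core, drift detection is a diff between snapshots. A minimal sketch with hypothetical resource names: compare yesterday's map of resource visibility against today's and flag anything that flipped from private to public.

```python
def detect_drift(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return resources that became public since the previous snapshot."""
    flipped = []
    for resource, visibility in current.items():
        # Resources absent from the baseline are treated as private,
        # so a brand-new public resource also triggers an alert.
        was = previous.get(resource, "private")
        if was == "private" and visibility == "public":
            flipped.append(resource)
    return flipped


yesterday = {"hr-docs/": "private", "blog-assets/": "public"}
today = {"hr-docs/": "public", "blog-assets/": "public"}

print(detect_drift(yesterday, today))  # ['hr-docs/']
```

Wire the alert to a pager, not a weekly report: the window between a setting flipping and someone noticing is the entire incident.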
Treat unpublished as sensitive. If it's not meant to be public yet, it should be stored with the same access controls as confidential data. "Draft" doesn't mean "harmless." Ask Anthropic.
Assume breach. Not as a paranoia exercise, but as a design principle. If every file you upload could be publicly discovered tomorrow, would you change how you handle any of them? Start there.
The Real Exfiltration Problem
Data exfiltration is a term that conjures images of sophisticated threat actors and nation-state operations. And those threats are real. But the most common path for data to leave your organization isn't through a carefully crafted exploit. It's through a default setting that nobody reviewed, a third-party tool that silently shares data you didn't realize was being collected, or a draft document uploaded to a system that was never configured for confidentiality.
Anthropic's leak is the most visible example this week. It won't be the last. Somewhere right now, another organization's CMS, cloud bucket, or analytics platform is quietly making internal data available to anyone who knows where to look.
The question isn't whether your data will leak. It's whether you'll be the one to find it first, or whether you'll find out from a reporter.