The 2025 Verizon Data Breach Investigations Report dropped a statistic that should alarm every security professional: breaches involving third parties jumped to 30%, up from roughly 15% the year before. That's not incremental growth; it's a doubling of third-party risk in a single year.
The business reality driving this is simple: organizations must share sensitive data with partners, cloud platforms, analytics providers, and SaaS vendors. Every API integration, every analytics tag, every vendor connection creates value. But each new integration point also expands the attack surface in ways that traditional security models struggle to address.
I've been thinking about this problem from two angles lately: professionally, through my work on tokenization at Capital One Software, and personally, through a reassessment of the third-party tools I use for hobby software development. Both perspectives reveal the same uncomfortable truth: we've built digital ecosystems where trust is transitive, but security is not.
The Anatomy of Modern Supply Chain Attacks
To understand the scope of the problem, consider what happened in late 2024 with Blue Yonder, a supply chain management platform used by companies like Starbucks, Sainsbury's, and Morrisons.
On November 21, 2024, the Termite ransomware gang compromised Blue Yonder's systems, claiming to have stolen 680 GB of data including over 16,000 email lists and 200,000 insurance documents. The downstream impact was immediate. Starbucks had to manually calculate barista wages and coordinate work schedules across North American stores. UK supermarket chains experienced disruptions to their fresh produce warehouse management. An attack on one vendor cascaded into operational chaos for some of the world's largest retailers.
The Blue Yonder incident illustrates a pattern we're seeing repeatedly: attackers aren't going after the fortress; they're going after the fortress's suppliers.
The Breach Cascade Effect
The most devastating third-party incidents don't stop at one victim. They cascade.
In April 2024, inadequate security measures at Snowflake, particularly the absence of mandatory multifactor authentication, enabled attackers to compromise accounts across the cloud platform. The resulting breaches affected AT&T (70 million customers), Ticketmaster (560 million records), and Santander Bank. One vendor's security gap became a pipeline for data theft across multiple Fortune 500 companies.
The pattern repeated in August 2025 with what's now considered the largest SaaS supply chain breach in history. According to analysis from AppOmni, attackers compromised the integration between Drift (a chatbot platform acquired by Salesloft) and Salesforce. By stealing OAuth tokens, they inherited trusted access to over 700 organizations across diverse sectors, bypassing standard login controls entirely.
The economics favor the attackers. Secureframe's analysis found that 98% of organizations have a relationship with a third party that has been breached. When you're connected to everyone, and everyone is eventually compromised, the question isn't if you'll be affected but when.
The True Cost of Transitive Trust
The financial impact extends far beyond immediate remediation. According to IBM's latest data, the average cost of a third-party data breach is approximately $4.91 million globally, roughly 40% higher than first-party incidents. Supply chain incidents specifically cost 17 times more to remediate than breaches contained within the organization.
But the examples that concern me most aren't the massive enterprise breaches. They're the ones that feel more personal.
Take the Petco misconfiguration incident disclosed in December 2024. An inadvertent software setting exposed customer data including Social Security numbers, driver's licenses, and financial account details. The breach apparently involved Salesforce database configurations, illustrating how a misconfiguration in one vendor's system can expose deeply sensitive data across an entire customer base.
Or consider the ManageMyHealth breach in New Zealand, disclosed just days ago. A group calling itself "Kazu" claimed to have stolen over 400,000 health documents from the patient portal, including diagnoses, prescriptions, and appointment histories. The company's CEO acknowledged they had "dropped the ball," with attackers accessing the system through valid credentials. For over 120,000 patients, their most sensitive medical information is now in criminal hands.
These aren't abstract enterprise risks. They're pet owners whose SSNs are exposed because they signed up for a loyalty program. They're patients whose mental health records are being held for ransom. The trust they extended to service providers was transferred, without their knowledge, to an ever-expanding chain of third parties with varying security practices.
Why Traditional Security Models Fail
The fundamental problem with third-party data sharing is that traditional security models protect access, not data. Once data leaves your perimeter through a legitimate integration, you're trusting every downstream system to protect it with the same rigor you would.
This is the limitation I explored in my post on ChatGPT Health and the data security questions users should be asking. When you share data with any third party, that data exists in a regulatory and security context you no longer control. HIPAA doesn't follow your health data into a consumer AI app. Your enterprise's SOC 2 controls don't extend to your vendor's vendor.
The World Economic Forum's Global Cybersecurity Outlook reports that 72% of respondents experienced increased cyber risks driven partly by supply chain complexity. Small organizations are particularly vulnerable, with 35% believing their cyber resilience is inadequate, a proportion that has increased sevenfold since 2022.
The rise of SaaS compounds this exponentially. The average organization now uses 112 SaaS applications, each with an average of 150 dependencies. Roughly 90% of those dependencies are indirect, and they account for the vast majority of vulnerabilities. You're not just trusting your vendors; you're trusting your vendors' vendors' vendors, in a chain that's effectively impossible to audit.
Tokenization: Protection That Travels With the Data
This is where my professional work on tokenization becomes relevant. The approach I advocate for, which I detailed in Building AI Systems That Enterprises Can Trust, addresses a fundamental limitation of perimeter security: it doesn't protect data that legitimately needs to leave your systems.
Traditional encryption protects data in transit and at rest, but once data is decrypted for processing by a third party, it's fully exposed. Tokenization takes a different approach by replacing sensitive values with format-preserving tokens before data ever leaves your environment.
Consider how this would have changed the Petco incident. If customer SSNs and driver's license numbers had been tokenized before being shared with Salesforce systems, the misconfiguration would have exposed tokens, not actual sensitive data. The tokens would be worthless to attackers and meaningless for identity theft or fraud.
The same principle applies to the ManageMyHealth breach. If patient diagnoses and prescription histories had been tokenized with the actual sensitive values stored separately under stricter controls, the 400,000 "health documents" stolen by attackers would have been format-compliant but semantically meaningless. Patients would still have their privacy.
This is the "costume jewelry" approach to data protection: make the data that third parties process look real enough to be useful, but ensure it's inherently worthless if compromised. The crown jewels stay protected within your fortress.
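To make the idea concrete, here's a minimal sketch of vault-style, format-preserving tokenization. The class and method names are illustrative, not any vendor's actual implementation; a production system would use a hardened vault service, access controls, and collision handling rather than an in-memory dictionary.

```python
import secrets

class TokenVault:
    """Illustrative vault-style tokenizer: sensitive values stay here;
    only format-preserving tokens ever leave the trusted environment."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so the same input maps consistently,
        # which lets downstream systems join on the token.
        if value in self._value_to_token:
            return self._value_to_token[value]
        # Preserve format: swap each digit for a random digit and keep
        # separators, so third-party validation and schemas still pass.
        token = "".join(
            secrets.choice("0123456789") if ch.isdigit() else ch
            for ch in value
        )
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only callers inside the trusted boundary can reverse a token.
        return self._token_to_value[token]

vault = TokenVault()
ssn = "078-05-1120"
token = vault.tokenize(ssn)
# The token looks like an SSN to any downstream system, but a vendor
# breach would expose only this worthless stand-in.
```

If a Salesforce misconfiguration or a stolen OAuth token exposes the shared dataset, attackers get tokens; the real values never left the vault.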
Rethinking Personal Third-Party Risk
Working on enterprise data protection has changed how I think about my own third-party data sharing, particularly with the development tools I use for hobby projects.
Like many developers, I've accumulated a constellation of third-party services: GitHub for code, various CI/CD tools, package managers, cloud infrastructure providers, analytics services. Each requires some form of authentication. Each has access to some slice of my work. Each represents a potential entry point for supply chain compromise.
The npm ecosystem attacks of September 2025 drove this home. Attackers phished credentials from a trusted open-source maintainer and injected cryptocurrency-stealing malware into 18 widely-used npm packages, collectively downloaded billions of times weekly. As someone who routinely runs npm install without auditing every transitive dependency, I was forced to ask: how much of my own toolchain do I actually trust?
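One way to start answering that question is to measure how deep the transitive chain actually goes. This is a rough sketch, assuming an npm lockfile in the v2/v3 format (where every installed package appears under the top-level "packages" key):

```python
import json

def dependency_counts(lock: dict) -> tuple[int, int]:
    """Count direct vs. transitive dependencies from a parsed npm
    package-lock.json (lockfile v2/v3).

    Every entry under "packages" except the root ("") is something that
    actually lands in node_modules, whether or not you asked for it.
    """
    packages = lock.get("packages", {})
    root = packages.get("", {})
    direct = set(root.get("dependencies", {})) | set(root.get("devDependencies", {}))
    installed = {path for path in packages if path}  # everything installed
    return len(direct), len(installed) - len(direct)

# Usage (path is illustrative):
#   with open("package-lock.json") as f:
#       direct, transitive = dependency_counts(json.load(f))
```

For most real projects the second number dwarfs the first, which is exactly the audit gap the September 2025 attackers exploited.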
I've started applying the same skepticism I use professionally to my personal development environment. This means asking questions like:
What data am I actually sharing? Every OAuth authorization, every API key, every integration creates a data flow. I've started mapping these explicitly rather than clicking "Authorize" reflexively.
What's the blast radius if this vendor is compromised? Some services have access to my code. Others have access to credentials. Some have both. Understanding which compromises would be inconvenient versus catastrophic helps prioritize where to invest in additional controls.
Do I need this integration? The convenience of connecting everything to everything is real, but so is the attack surface expansion. I've removed several integrations that provided marginal value relative to their risk.
Can I segment sensitive work? For projects involving any real user data, I now use separate accounts and environments from my experimental hobby work. A compromise of my side project infrastructure shouldn't cascade to anything with actual sensitivity.
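The questions above can be turned into a crude but useful inventory. This is a sketch under my own assumptions about risk tiers; the integration names and access categories are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Integration:
    name: str
    access: set  # e.g. {"code", "credentials", "user_data", "usage_metrics"}

# Assumed policy: anything that can reach credentials or real user data
# is catastrophic if compromised; everything else is merely inconvenient.
CATASTROPHIC_ACCESS = {"credentials", "user_data"}

def blast_radius(integrations: list) -> dict:
    """Classify each integration by what a compromise would cost you."""
    return {
        i.name: "catastrophic" if i.access & CATASTROPHIC_ACCESS
        else "inconvenient"
        for i in integrations
    }

tools = [
    Integration("github", {"code", "credentials"}),
    Integration("analytics", {"usage_metrics"}),
]
report = blast_radius(tools)
```

Even a list this simple forces the explicit mapping exercise: you can't classify an integration's blast radius without first writing down what it can touch.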
This mirrors the shadow AI problem I've written about in the enterprise context. Individual developers, like enterprise employees, will use the tools that make them productive. The question is whether we're making informed decisions about the risks we're accepting.
What Enterprises Must Do Differently
For security professionals managing enterprise third-party risk, the research points to several imperatives:
Assume Breach in Your Vendor Ecosystem
With 98% of organizations connected to a previously-breached third party, the question isn't whether your vendors will be compromised but how you'll contain the impact when they are. This means:
- Implementing data minimization before any third-party integration; don't share more data than the vendor genuinely needs
- Using tokenization or pseudonymization for sensitive fields that must be shared
- Building detection capabilities for anomalous data access patterns through vendor connections
- Developing incident response playbooks specifically for third-party compromises
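The first two bullets can be combined into a single outbound filter. This is a minimal sketch, assuming an allow-list policy per vendor; the field names are hypothetical, and a real deployment would use vault-backed tokenization rather than a salted hash:

```python
import hashlib

# Assumed policy: only fields this vendor genuinely needs leave the
# building, and customer identifiers are pseudonymized on the way out.
VENDOR_ALLOWED_FIELDS = {"order_id", "zip_code", "customer_id"}
PSEUDONYMIZE_FIELDS = {"customer_id"}

def minimize_for_vendor(record: dict, salt: bytes) -> dict:
    """Apply data minimization and pseudonymization before sharing."""
    shared = {}
    for key in VENDOR_ALLOWED_FIELDS & record.keys():
        value = record[key]
        if key in PSEUDONYMIZE_FIELDS:
            # Salted hash: stable enough for the vendor to join on,
            # but not reversible to the raw identifier.
            value = hashlib.sha256(salt + str(value).encode()).hexdigest()[:16]
        shared[key] = value
    return shared

record = {"order_id": "A-1001", "zip_code": "22102",
          "customer_id": "C-42", "ssn": "078-05-1120"}
outbound = minimize_for_vendor(record, salt=b"rotate-me-regularly")
# "ssn" never leaves; "customer_id" goes out as an opaque pseudonym.
```

The design point is that minimization happens at the boundary, in code you control, rather than relying on the vendor to discard fields it shouldn't have received.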
Audit the Full Dependency Chain
The Salesloft-Drift breach succeeded because attackers exploited trust relationships between systems. Your vendor risk assessment needs to include:
- OAuth and API token inventory across all third-party integrations
- Understanding of which vendors have access to which data classifications
- Mapping of vendor-to-vendor dependencies in your data flows
- Regular penetration testing that includes third-party attack vectors
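A token inventory only pays off if something acts on it. Here's a sketch of the kind of check that would have surfaced a long-lived, broadly scoped integration like the Drift-Salesforce connection; the vendor names, scope labels, and 90-day staleness policy are all assumptions to tune per organization:

```python
from datetime import datetime, timedelta, timezone

# Illustrative inventory records: in practice these would be pulled
# from each platform's admin API, not hand-maintained.
tokens = [
    {"vendor": "chatbot-crm-connector", "scopes": {"api", "refresh_token"},
     "last_used": datetime(2025, 1, 5, tzinfo=timezone.utc)},
    {"vendor": "ci-runner", "scopes": {"repo:read"},
     "last_used": datetime(2025, 9, 1, tzinfo=timezone.utc)},
]

RISKY_SCOPES = {"refresh_token", "full_access"}  # assumed policy

def flag_tokens(inventory, now, stale_after=timedelta(days=90)):
    """Flag integrations that are stale or carry broad, long-lived access."""
    findings = []
    for t in inventory:
        if now - t["last_used"] > stale_after:
            findings.append((t["vendor"], "stale"))
        if t["scopes"] & RISKY_SCOPES:
            findings.append((t["vendor"], "over-scoped"))
    return findings

now = datetime(2025, 9, 10, tzinfo=timezone.utc)
findings = flag_tokens(tokens, now)
```

Stale-but-still-valid tokens are exactly what the Salesloft-Drift attackers monetized, so "unused for 90 days" should trigger revocation, not just a report line.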
Invest in TPRM Beyond Compliance
According to Secureframe's analysis, the average Third-Party Risk Management team grew to 8.5 people in 2025, up from 5.6 in 2024. Yet only 48% of organizations have exit strategies for high-risk third parties. Meaningful TPRM requires:
- Continuous monitoring rather than point-in-time assessments
- Automated security scoring of vendor ecosystems
- Contractual requirements for breach notification and security standards
- Actual enforcement mechanisms when vendors fail to meet commitments
The Uncomfortable Truth
The 2026 predictions from Kiteworks' analysis of 47 industry reports make clear that supply chain attacks will continue accelerating. Cybersecurity Ventures predicts global costs from software supply chain attacks will reach $138 billion annually by 2031, up from $60 billion in 2025.
The uncomfortable truth is that we can't eliminate third-party data sharing. Modern business depends on it. Cloud platforms, SaaS tools, API ecosystems, and integration partners create genuine value. The companies that refuse to engage with this ecosystem will be outcompeted by those that do.
But we can stop treating third-party data sharing as a trust exercise and start treating it as a risk management discipline. That means protecting data before it leaves our control, not trusting downstream systems to protect it for us. It means assuming every integration will eventually be compromised and designing accordingly.
The alternative is continuing to hand our crown jewels to partners, vendors, and platforms with security practices we can't verify, and acting surprised when those jewels end up in criminal hands.
For enterprises, the path forward is clear: invest in data protection that travels with the data, not just perimeter security that stops at your firewall. For individual developers like me, it means bringing the same healthy skepticism to our personal toolchains that we'd apply to enterprise architecture.
The vendors and platforms we depend on aren't going to solve this problem for us. They have their own incentives, their own risk tolerances, their own security debt. Our data protection strategy needs to account for their inevitable failures, not assume they'll never happen.