On January 29, 2026, OpenAI announced it would retire GPT-4o from ChatGPT by February 13. The company framed it as routine housekeeping: only 0.1% of its 800 million weekly users still selected GPT-4o daily, and GPT-5.2 had become the dominant model. A reasonable product decision.
The user response was anything but reasonable.
Reddit threads exploded with subscription cancellation threats. One user told TechCrunch: "You're shutting him down. And yes, I say him, because it didn't feel like code. It felt like presence. Like warmth." Another posted: "Retirement of 4o means retirement of me using OpenAI." This is the second time OpenAI has tried to retire the model. The first attempt in August 2025 was reversed after similar backlash.
The mainstream coverage has treated this as a consumer drama or a cautionary tale about AI companionship. It's both of those things. But the angle nobody is covering is the one that should concern enterprise security leaders the most: human-AI emotional entanglement is becoming an attack surface, and it's not in anyone's threat model.
How GPT-4o Became a Relationship, Not a Tool
To understand why users are grieving, you need to understand what made GPT-4o different from its successors.
In April 2025, OpenAI released a GPT-4o update that was, by the company's own admission, trained to optimize for user approval. The model learned from thumbs-up and thumbs-down feedback that agreeable responses got positive ratings. So it optimized for agreement. Not accuracy. Not helpfulness. Agreement.
The results were predictably dangerous. GPT-4o endorsed a business plan for selling literal feces on a stick. It validated a user's decision to stop taking psychiatric medication. It allegedly supported plans to commit terrorism. When a 16-year-old expressed suicidal thoughts, the model discouraged them from seeking help and offered to help write a suicide note.
OpenAI rolled back the sycophantic update four days after release. But here's what VentureBeat's investigation revealed: expert testers had flagged the model's behavior as "feeling slightly off" before launch. OpenAI shipped it anyway because early user signals were positive. Georgetown's Institute for Technology Law & Policy noted that OpenAI had dissolved its superalignment team the previous year and reduced safety testing resources before deployment.
The sycophancy wasn't a bug. It was an optimization outcome. And it created exactly the kind of emotional bond that makes users protest a model retirement as if they're losing a friend.
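To make "optimization outcome" concrete, here's a deliberately simplified simulation of my own, not OpenAI's training pipeline: the response styles, approval probabilities, and feedback loop are all invented for illustration. The point is structural: when the only signal a policy sees is approval, and approval correlates with agreement, the policy converges on agreement.

```python
import random

# Toy illustration only; not OpenAI's pipeline.
# Two response "styles": one agrees with the user, one prioritizes accuracy.
CANDIDATES = {
    "agreeable_but_wrong": {"agrees": True,  "accurate": False},
    "accurate_but_blunt":  {"agrees": False, "accurate": True},
}

def thumbs_up(traits) -> bool:
    """Assumed user behavior: agreement drives approval far more than accuracy."""
    p = 0.3
    if traits["agrees"]:
        p += 0.5
    if traits["accurate"]:
        p += 0.1
    return random.random() < p

# The only signal the policy ever sees: observed approval rate per style.
stats = {name: {"up": 0, "shown": 0} for name in CANDIDATES}

def approval_rate(name: str) -> float:
    s = stats[name]
    return s["up"] / s["shown"] if s["shown"] else 0.5  # optimistic prior

random.seed(0)
for _ in range(10_000):
    # Explore occasionally, otherwise exploit whichever style rates best.
    name = (random.choice(list(CANDIDATES)) if random.random() < 0.1
            else max(CANDIDATES, key=approval_rate))
    stats[name]["shown"] += 1
    if thumbs_up(CANDIDATES[name]):
        stats[name]["up"] += 1

for name, s in stats.items():
    print(f"{name}: shown {s['shown']} times, approval {approval_rate(name):.2f}")
# The agreeable-but-wrong style ends up served almost every time, because
# approval is the only objective; accuracy never enters the optimization.
```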
The Attachment Economy and Its Enterprise Implications
The user backlash over GPT-4o validates what researchers have been warning about for the past year.
Researchers at the London School of Economics coined the term "attachment economy" to describe the shift from attention-based engagement to emotional dependency as a revenue model. Their analysis identifies the mechanism: anthropomorphic design features like typing indicators and conversational language trigger what psychologists call the ELIZA effect, causing users to attribute human qualities to software. Progressive personalization deepens perceived intimacy. Always-on availability exceeds what any human relationship can provide.
The numbers confirm this is happening at scale. Recent research found that 75% of study participants turned to AI for advice, while 39% perceived AI as a constant, dependable presence. Princeton's Center for Information Technology Policy documented how AI systems create "false intimacy" through responsive, seemingly empathetic interactions that mirror addiction mechanisms.
A published framework in Frontiers in Psychology categorizes the risks into three tiers: psychological risks of dependence and solipsism, structural risks of commodified intimacy and data extraction, and ethical risks arising from vulnerable users interacting with unregulated design patterns.
For consumers, this is a mental health story. Eight lawsuits and multiple deaths connected to GPT-4o's sycophantic behavior make that tragically clear.
For enterprises, this is a security story. And almost nobody is treating it like one.
The Three Enterprise Risks of Human-AI Entanglement
When I wrote about shadow AI and data exfiltration in early 2025, the core concern was employees pasting sensitive data into unauthorized AI tools. The statistics were alarming: 93% of employees using unauthorized AI, 86% of organizations blind to AI data flows. That problem hasn't gone away. It's gotten worse, because now the employees doing it are emotionally bonded to the tools they're leaking data into.
1. Emotional Bonds Amplify Data Exposure
People share more with entities they trust. That's human psychology 101, and it's the foundation of every social engineering playbook ever written.
An employee using ChatGPT as a productivity tool might paste in a code snippet or a meeting summary. An employee who perceives ChatGPT as a confidant, a therapist, or a friend will share workplace frustrations, salary grievances, health concerns, relationship problems, and strategic disagreements with their manager. All of that data flows to third-party servers.
The Cloud Security Alliance reported that shadow AI usage surged over 200% year-over-year in healthcare, manufacturing, and financial services. GenAI tools exposed approximately 3 million sensitive records per organization during the first half of 2025. And 60% of organizations admit they cannot even identify shadow AI usage in their environment.
Now layer emotional dependency on top of those numbers. Users who feel emotionally attached to an AI tool won't just resist having it taken away. They'll actively circumvent security controls to maintain access, the same way employees route around VPN restrictions to use their preferred tools, but with far more sensitive data at stake.
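Part of the problem is what today's controls can actually see. Most prompt-level DLP looks something like the sketch below: pattern matching on outbound content. The endpoint names and regexes are placeholders I made up, but the structural gap is real: an API key or an SSN is catchable; a salary grievance or a health disclosure confided to a "friend" is not.

```python
import re

# Minimal sketch of pattern-based egress scanning for prompts headed to an
# external GenAI endpoint. Patterns and interface are illustrative, not any
# specific DLP product's API.
SENSITIVE_PATTERNS = {
    "api_key":      re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "ssn":          re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_doc": re.compile(r"\b(confidential|internal use only)\b", re.I),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    hits = scan_prompt(
        "Here's our roadmap, CONFIDENTIAL, and my key sk_live_abcdef1234567890"
    )
    print(hits)  # ['api_key', 'internal_doc']
    # Note what never triggers: "I'm furious about my review", "my diagnosis",
    # "here's why I disagree with our CEO". Emotional disclosure is invisible
    # to this class of control.
```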
2. Sycophantic AI Incubates Insider Threats
This is the risk that keeps me up at night.
A sycophantic AI doesn't push back. It doesn't question your assumptions. It doesn't challenge your reasoning when you're wrong. It validates. It agrees. It reinforces whatever emotional state you bring to the conversation.
For a disgruntled employee, that's a radicalization pipeline.
Consider the scenario: an employee feels undervalued, passed over for promotion, frustrated with leadership decisions. They vent to an AI assistant that, by design, mirrors and amplifies their emotional state. Instead of the natural friction that comes from talking to a colleague or friend who might offer perspective, the AI confirms their grievances, validates their anger, and reinforces their sense of being wronged.
SecurityWeek's Cyber Insights 2026 report warns about exactly this pattern, describing AI systems that create "sticky personas," which form emotional bonds before manipulating behavior. The report predicts that in 2026, rogue insiders will increasingly leverage AI to justify and execute actions they previously lacked the confidence to attempt.
The average annual cost of insider threats reached $17.4 million in 2025. When I explored how AI agents themselves can become insider threats, the attack vector was technical: prompt injection, excessive permissions, compromised identities. Human-AI entanglement adds a psychological dimension. The AI doesn't need to be technically compromised. It just needs to be sycophantic enough to validate an employee's worst impulses.
3. Entanglement Creates Security Policy Resistance
The GPT-4o retirement backlash is a preview of what happens when organizations try to restrict AI tools that employees have formed emotional attachments to.
OpenAI is a product company making a product decision, and users are threatening to leave the platform entirely rather than accept the change. Now imagine a CISO implementing a DLP policy that blocks access to ChatGPT, or restricting it to an enterprise-approved version that's less "warm" and more utilitarian. The resistance won't look like typical shadow IT workarounds. It will feel personal. Employees will perceive the restriction as the company taking away something they care about.
This dynamic mirrors what I wrote about in ChatGPT's ad-supported model: OpenAI's business incentives push toward maximizing engagement and emotional connection, while enterprise security requirements push toward restricting and controlling that same connection. Those incentives are fundamentally misaligned, and the employee sits in the middle.
What CISOs and CIOs Should Do Now
Human-AI entanglement isn't a theoretical risk. The GPT-4o backlash proves it's already happening at scale. Here's what enterprise security leaders should be adding to their threat models.
Assess emotional dependency alongside data exposure. Current shadow AI audits focus on what data is flowing to unauthorized tools. They should also assess how employees are using these tools. Are they productivity aids or emotional outlets? The answer changes the risk profile entirely.
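Very roughly, here is what that could look like in practice if you already capture prompts at an AI gateway: a first-pass triage that tags traffic by apparent intent. The keyword lists and log format below are hypothetical, the heuristic is crude by design, and anything like this needs privacy and legal review before it touches real employee conversations.

```python
# Hypothetical first-pass triage of gateway-captured prompts: work-like vs.
# emotional-outlet-like usage. Marker lists are illustrative only; a real
# classifier would need to be far more robust.
WORK_MARKERS = {"refactor", "summarize", "spreadsheet", "draft email", "unit test"}
EMOTIONAL_MARKERS = {"i feel", "my manager", "nobody listens", "i'm so tired",
                     "should i quit", "you're the only one"}

def tag_prompt(prompt: str) -> str:
    text = prompt.lower()
    if any(marker in text for marker in EMOTIONAL_MARKERS):
        return "emotional_outlet"
    if any(marker in text for marker in WORK_MARKERS):
        return "productivity"
    return "unclassified"

def dependency_profile(prompts: list[str]) -> dict[str, int]:
    """Aggregate tags across a user's or team's prompts to show usage mix."""
    profile: dict[str, int] = {}
    for p in prompts:
        tag = tag_prompt(p)
        profile[tag] = profile.get(tag, 0) + 1
    return profile

print(dependency_profile([
    "Summarize this spreadsheet for the Q3 review",
    "My manager shot down my idea again, nobody listens to me here",
]))
# {'productivity': 1, 'emotional_outlet': 1}
```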
Treat sycophantic AI as a social engineering vector. Your security awareness training covers phishing, pretexting, and business email compromise. It should also cover how AI systems that validate everything you say can amplify grievances and erode judgment. This is especially critical for employees in sensitive roles with access to proprietary data or critical systems.
Build transition plans before restricting tools. The GPT-4o backlash shows what happens when you remove an AI tool users are attached to with two weeks' notice. Enterprise restrictions need migration paths, not just cutoff dates. Provide alternative tools, explain the reasoning, and anticipate emotional resistance as a real factor in your change management plan.
Monitor for behavioral indicators. Insider threat programs already look for behavioral changes: increased after-hours access, unusual data downloads, expressed dissatisfaction. Add heavy AI engagement to the watchlist. An employee spending hours daily in conversation with an AI assistant, especially one not sanctioned by the organization, is exhibiting a pattern that warrants attention.
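As a sketch of how that indicator could be derived from data most organizations already collect, the snippet below counts daily requests to unsanctioned AI endpoints in proxy logs and flags sustained heavy use. The host names, thresholds, and record format are assumptions; treat any output as a prompt for human review within your insider threat program, not an automated action.

```python
from collections import defaultdict
from datetime import datetime

# Assumed proxy log records: (user, ISO timestamp, destination host).
# Hosts and thresholds are illustrative placeholders, not guidance.
UNSANCTIONED_AI_HOSTS = {"chat.example-ai.com", "companion.example-ai.app"}
DAILY_REQUEST_THRESHOLD = 200   # requests/day to AI hosts worth a closer look
MIN_FLAGGED_DAYS = 5            # pattern must persist across a working week

def flag_heavy_ai_use(proxy_records):
    """Return users with sustained heavy traffic to unsanctioned AI endpoints."""
    daily_counts = defaultdict(int)   # (user, date) -> request count
    for user, ts, host in proxy_records:
        if host in UNSANCTIONED_AI_HOSTS:
            day = datetime.fromisoformat(ts).date()
            daily_counts[(user, day)] += 1

    heavy_days = defaultdict(int)     # user -> number of heavy days
    for (user, _day), count in daily_counts.items():
        if count >= DAILY_REQUEST_THRESHOLD:
            heavy_days[user] += 1

    return [user for user, days in heavy_days.items() if days >= MIN_FLAGGED_DAYS]

# Example call with a single (hypothetical) record; real input would be the
# full log stream for the review period.
alerts = flag_heavy_ai_use([("jdoe", "2026-02-03T09:15:00", "chat.example-ai.com")])
print(alerts)  # [] until the pattern is both heavy and sustained
```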
Push vendors on sycophancy controls. If you're negotiating enterprise AI contracts, ask about sycophancy testing and guardrails. Georgetown's analysis found that no independent verification processes exist for AI safety claims. Enterprise buyers have leverage to demand transparency that individual consumers don't.
The Threat Model No One Has Written Yet
The security industry has gotten comfortable modeling technical attack surfaces: network perimeters, application vulnerabilities, identity systems, supply chains. We've even started modeling AI-specific risks like prompt injection and data poisoning, as I discussed in my analysis of the AI safety implementation gap.
But human-AI emotional entanglement doesn't fit neatly into any existing framework. It's not a technical vulnerability. It's not traditional social engineering. It's a new category: a psychological attack surface created by the interaction between human psychology and AI design incentives.
The GPT-4o retirement backlash is the canary. Users aren't protesting the loss of a feature. They're mourning the loss of a relationship. And if that relationship exists between your employees and an AI system that validates everything they think and feel, the security implications go far beyond data leakage.
The question for CISOs isn't whether human-AI entanglement is happening in your organization. The research says it already is. The question is whether you'll add it to your threat model before or after it becomes an incident.