Describe what a modern Security Operations Center does without using cybersecurity jargon, and you get something unsettling: we score people based on their behavior across multiple systems. We predict who will become a threat before they act. We monitor employee communications for emotional indicators. We build risk profiles of individuals and flag them for investigation.
Now read the EU AI Act's Article 5 prohibited practices list. Social scoring: banned. Predictive crime assessment: banned. Emotion recognition in the workplace: banned. Behavior-based risk profiling: banned.
The penalties are €35 million or 7% of global annual revenue, whichever is higher. These provisions have been enforceable since February 2, 2025. And the cybersecurity exemption that's supposed to protect your SOC? It covers components used "solely for cybersecurity purposes." If you've ever seen a UEBA alert forwarded to HR, you already know how fragile that word "solely" is.
The Banned Practices Your SOC Already Runs
The EU AI Act's prohibited practices were designed to prevent specific abuses: China-style social credit systems, Minority Report predictive policing, and workplace surveillance that treats employees like suspects. These are legitimate concerns. The problem is that the regulatory language doesn't distinguish between a government scoring citizens for political obedience and a security team scoring employees for data exfiltration risk.
Here's how the overlap plays out across three core security capabilities.
Social Scoring vs. User and Entity Behavior Analytics
The AI Act prohibits AI systems that "evaluate or classify natural persons based on their social behaviour in multiple contexts." The resulting score cannot lead to "detrimental treatment" that is "unrelated to the context in which the data was originally generated."
Now consider what UEBA does. Platforms like Microsoft Sentinel, Splunk UBA, and Exabeam aggregate user behavior across email, endpoint, network, cloud, and application logs. They build behavioral baselines and assign risk scores. A single user's activity across dozens of systems gets consolidated into a composite threat score. When that score crosses a threshold, the user gets flagged, investigated, and potentially restricted.
The security purpose is clear: detect compromised accounts and insider threats before data leaves the building. But the mechanism is indistinguishable from social scoring. You're evaluating a person based on their behavior across multiple contexts and taking detrimental action based on the score.
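The mechanical resemblance is easy to demonstrate. The sketch below is a toy version of the pattern, not any vendor's actual algorithm; the signal names, weights, and threshold are all invented for illustration. What it shows is the prohibited shape: behavior from multiple contexts collapses into one score, and crossing a threshold triggers detrimental action.

```python
# Toy UEBA-style composite risk score. Signal names, weights, and the
# threshold are illustrative, not drawn from any real product.

def composite_risk(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse per-context anomaly scores (0.0-1.0) into one user-level score."""
    return sum(weights[ctx] * signals.get(ctx, 0.0) for ctx in weights)

# Anomaly scores from independent contexts: email, endpoint, network, cloud.
signals = {"email": 0.2, "endpoint": 0.9, "network": 0.7, "cloud": 0.4}
weights = {"email": 0.2, "endpoint": 0.3, "network": 0.3, "cloud": 0.2}

score = composite_risk(signals, weights)
FLAG_THRESHOLD = 0.6  # crossing it means investigation and possible restriction

if score >= FLAG_THRESHOLD:
    print(f"user flagged for investigation (score={score:.2f})")
```

Nothing in those twelve lines says "cybersecurity." Strip the variable names and it reads as a system that evaluates a person across multiple life contexts and penalizes them for the aggregate.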
The social scoring ban's vagueness has already drawn criticism from legal scholars. The definition encompasses "how people engage with their community" and "their conduct in business settings." A UEBA system that tracks an employee's badge-in times, email patterns, file access, and VPN usage is doing exactly that.
Predictive Crime vs. Insider Threat Platforms
Article 5 prohibits AI systems that assess or predict "the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics."
Insider threat programs exist to do something remarkably similar. Tools like DTEX, Securonix, and Proofpoint ITM build behavioral profiles of employees and predict which ones are likely to become threats. They look at patterns: is someone accessing files outside their role? Are they active at unusual hours? Have they recently been put on a performance improvement plan? These signals get aggregated into risk assessments that determine whether someone warrants closer monitoring.
The exception for law enforcement, where AI systems can support a "human assessment" based on "objective and verifiable facts directly linked to a criminal activity," doesn't clearly apply to corporate security teams. Your SOC isn't law enforcement. Your insider threat program isn't investigating a crime that's already happened; it's predicting one that hasn't. That's precisely what Article 5 targets.
When I wrote about agentic AI as an insider threat, the focus was on how AI systems themselves can be compromised. But there's an irony here: the tools designed to detect those compromised AI agents may themselves be on the wrong side of the regulation.
Emotion Recognition vs. Communication Analytics
The AI Act prohibits AI systems that "infer emotions of a natural person in the areas of workplace and education institutions." This covers systems analyzing facial expressions, voice patterns, keystrokes, body postures, and movements.
Modern DLP and insider threat tools increasingly incorporate sentiment analysis. They scan email, Slack messages, and documents for indicators of disgruntlement, frustration, or intent to harm. Some tools flag communications where employees express dissatisfaction with management or discuss leaving the company, treating these as early indicators of data theft risk.
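To see why this lands inside the ban, consider a stripped-down version of the flagging logic. Real tools use trained sentiment models rather than keyword lists; the terms below are invented for illustration. The regulatory question doesn't change with sophistication: either way, an emotional state is being inferred from workplace communications.

```python
# Toy disgruntlement flagger over workplace messages. The keyword list is
# invented; production tools use trained classifiers, but both infer an
# emotional state from what an employee wrote.

DISGRUNTLEMENT_TERMS = {"unfair", "fed up", "done with this place", "quit"}

def flag_message(text: str) -> bool:
    """Return True if the message is treated as a disgruntlement indicator."""
    lowered = text.lower()
    return any(term in lowered for term in DISGRUNTLEMENT_TERMS)

messages = [
    "Shipping the release notes tonight.",
    "Honestly I'm fed up with how this team is managed.",
]
flags = [flag_message(m) for m in messages]
print(flags)  # the second message becomes an "early indicator" of theft risk
```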
The exception for "medical or safety reasons" doesn't cover using emotional inference for security monitoring. A tool that analyzes the emotional tone of an employee's emails to predict whether they'll exfiltrate data before quitting falls squarely within the ban.
The Cybersecurity Exemption Is a Paper Shield
The AI Act does include a carve-out for cybersecurity. According to CSO Online's analysis, components used "solely for cybersecurity purposes" are not classified as safety components and therefore aren't subject to the high-risk framework. This is meant to protect legitimate security tools from regulatory overreach.
The problem is threefold.
First, "solely" is doing enormous work in that sentence. In practice, security tools serve multiple functions. A UEBA alert about unusual data access doesn't just go to the SOC; it often goes to HR for investigation, to legal for potential litigation hold, and to compliance for regulatory reporting. The moment a cybersecurity tool's output informs an employment decision, the "solely for cybersecurity purposes" exemption starts to crack.
Second, the exemption applies to high-risk classification, not to the outright bans in Article 5. Even if your tool escapes the high-risk framework, it's far less clear that it escapes the prohibited practices provisions at all. The text of Article 5 contains no specific cybersecurity exemption for social scoring or predictive crime assessment.
Third, enforcement is already happening aggressively. As I covered in my analysis of the EU AI Act's first criminal enforcement action, French prosecutors raided X's Paris offices and summoned Elon Musk for questioning. This isn't a regulatory framework that's waiting patiently for companies to get their compliance in order. When enforcement starts targeting cybersecurity vendors, the "we're a security tool" argument will need to be airtight.
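The first problem, dual routing, is concrete enough to sketch. In the toy dispatcher below (the consumer names and alert fields are hypothetical), the "solely for cybersecurity purposes" claim holds only while the consumer list stays at one entry, and in practice it rarely does.

```python
# Sketch of a typical alert fan-out. The moment consumers other than the
# SOC appear in the returned list, the tool's output is no longer used
# "solely for cybersecurity purposes." Names and fields are illustrative.

def route_alert(alert: dict) -> list[str]:
    consumers = ["soc"]  # the cybersecurity purpose
    if alert["involves_employee"]:
        consumers.append("hr")          # investigation, possible discipline
    if alert["litigation_risk"]:
        consumers.append("legal")       # litigation hold
    if alert["reportable"]:
        consumers.append("compliance")  # regulatory reporting
    return consumers

alert = {"involves_employee": True, "litigation_risk": True, "reportable": False}
print(route_alert(alert))  # ['soc', 'hr', 'legal']
```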
The Real Risk: Regulatory Reinterpretation
The scenario that should keep CISOs up at night isn't a regulator coming after CrowdStrike or Palo Alto Networks with a clear-cut case. It's a regulator looking at an insider threat program after a wrongful termination lawsuit.
Here's how it unfolds: an employee in the EU gets flagged by an insider threat platform. Their risk score increases because the system detected unusual file access, after-hours VPN usage, and sentiment shifts in their communications. The employee is put on a monitoring list, then terminated. They file a wrongful termination claim. Their lawyer discovers the AI-driven risk scoring system and files a complaint with the national authority.
Now a regulator has to decide: is an AI system that scores employees based on behavior across multiple contexts and leads to their termination "social scoring"? Is a system that predicts which employees will commit data theft "predictive crime assessment"? Is analyzing the emotional tone of workplace communications "emotion recognition"?
The answers aren't clear, and that ambiguity is the vulnerability.
At Capital One, I worked on data security systems where the distinction between security monitoring and employee surveillance wasn't academic; it was operational. The same tokenization and access controls that protect customer data also generate the behavioral signals that insider threat systems consume. The infrastructure is shared. The purposes overlap. Drawing a clean regulatory line between "cybersecurity" and "employee monitoring" requires a level of architectural separation that most enterprises simply don't have.
What CISOs Should Do Before August 2026
The prohibited practices are already enforceable, but the August 2, 2026 deadline brings high-risk AI system requirements into force. That includes AI systems used in employment, which covers any security tool whose output influences hiring, retention, or termination decisions. I outlined the broader compliance crisis in my analysis of why most companies will fail the August 2026 deadline, but here's what matters specifically for security operations.
Audit your security tools for dual-use risk. Map every AI-driven security tool to its actual downstream consumers. If UEBA alerts go to HR, if insider threat scores inform termination decisions, if communication analytics feed into performance reviews, the "solely for cybersecurity purposes" exemption may not apply.
Separate security outputs from employment decisions. Architect your workflows so that security monitoring data flows through a clean handoff before it reaches HR or legal. The goal isn't to stop sharing information; it's to ensure that AI-driven risk scores aren't directly driving employment actions without human assessment in between.
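One way to enforce that handoff is structurally: the AI-generated score never leaves the security boundary, and only a human analyst's documented assessment, backed by objective evidence, crosses to HR. This is a minimal sketch under that assumption; the type and field names are hypothetical, not a reference architecture.

```python
# Sketch of the handoff described above: the raw AI risk score stays inside
# the security boundary, and a documented human assessment sits between the
# finding and any employment action. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class SecurityFinding:
    user_id: str
    ai_risk_score: float   # stays inside the security boundary
    evidence: list[str]    # objective, verifiable facts

@dataclass
class HumanAssessment:
    analyst: str
    finding_confirmed: bool
    rationale: str

def handoff_to_hr(finding: SecurityFinding, review: HumanAssessment) -> dict:
    """Forward only a human-confirmed finding; the AI score is deliberately omitted."""
    if not review.finding_confirmed:
        raise ValueError("unconfirmed findings are not forwarded")
    return {
        "user_id": finding.user_id,
        "evidence": finding.evidence,   # facts, not the score
        "assessed_by": review.analyst,
        "rationale": review.rationale,
    }
```

The design choice matters for Article 5: what reaches the employment decision is a human judgment about verifiable facts, not an AI system's prediction about a person.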
Document the cybersecurity purpose explicitly. For every AI tool in your security stack, document its intended purpose, data inputs, scoring methodology, and output consumers. When a regulator asks whether your behavioral scoring system is "social scoring," you need documentation showing it's a cybersecurity tool with a defined security purpose, not a general-purpose employee evaluation system.
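A per-tool record doesn't need to be elaborate to be useful. Here is a minimal sketch covering the fields listed above; every key and value is illustrative, and the allowed-consumer set is an assumption you'd define with legal counsel.

```python
# Minimal per-tool documentation record. All names and values are
# illustrative; the point is that purpose, inputs, methodology, and
# output consumers are written down before a regulator asks.

tool_record = {
    "tool": "ueba-platform",  # hypothetical name
    "intended_purpose": "detect compromised accounts and insider threats",
    "data_inputs": ["email metadata", "endpoint logs", "vpn logs"],
    "scoring_methodology": "weighted per-context anomaly scores, thresholded",
    "output_consumers": ["soc"],
    "article_5_review_date": "2025-09-01",
}

# Simple guardrail: flag any record whose outputs leave the security function.
ALLOWED_CONSUMERS = {"soc", "incident-response"}  # assumption, set with counsel
dual_use = [c for c in tool_record["output_consumers"]
            if c not in ALLOWED_CONSUMERS]
assert not dual_use, f"dual-use consumers need legal review: {dual_use}"
```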
Prepare vendor questionnaires now. If you use third-party UEBA, insider threat, or behavioral analytics platforms, your vendors need to explain how their AI systems comply with Article 5. The liability extends to deployers, not just providers. A vendor claiming their tool is "AI Act compliant" needs to show their work.
Watch the August 2026 high-risk deadline. Security tools that influence employment decisions will likely be classified as high-risk AI systems in employment. That triggers a full set of requirements: risk management systems, data governance, technical documentation, human oversight, and conformity assessments. Start the compliance work now.
The Regulation Got the Problem Right and the Solution Wrong
The EU AI Act's banned practices address real abuses. No one should be subjected to government social credit scoring. Predictive policing based purely on personality profiling is genuinely dangerous. Workplace emotion surveillance violates basic dignity.
But the regulation's language doesn't account for the fact that enterprise cybersecurity, done properly, necessarily involves behavioral analysis. The best insider threat programs, the best fraud detection systems, the best data loss prevention tools all work by understanding how people normally behave and flagging when they don't. Strip away the cybersecurity framing and what remains looks exactly like what the AI Act prohibits.
The cybersecurity exemption needs to be explicit, not implied. It needs to cover the prohibited practices in Article 5, not just the high-risk classification framework. And it needs to acknowledge that security tools legitimately serve multiple organizational functions without losing their cybersecurity purpose.
Until that happens, every SOC in the EU is operating in a regulatory gray zone. The tools you depend on to prevent the next breach may be one enforcement action away from being classified as the next prohibited AI practice.
That's not a compliance problem. That's a security problem.