Chat & Ask AI, one of the most popular AI apps on both the Apple App Store and Google Play with over 50 million claimed users, left hundreds of millions of private conversations exposed to the open internet. A security researcher discovered the breach and extracted a sample of 60,000 users and 1 million messages as proof. The exposed data included complete chat histories, timestamps, and the specific AI models users had configured.
The content of those messages reveals how intimately people interact with AI chatbots. Users asked about self-harm and requested help writing suicide notes. They asked how to manufacture drugs and hack applications. Hundreds of thousands shared detailed sexual confessions and relationship problems.
These people thought they were talking to ChatGPT, Claude, or Gemini. They were actually talking to Codeway, a Turkish app developer whose Firebase database was configured to let anyone with basic technical knowledge download their entire conversation history.
The Middleman You Didn't Know You Had
Chat & Ask AI is what the industry calls a "wrapper" app. It doesn't run its own AI models. Instead, it connects to OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini through their APIs. The app provides a mobile interface, some convenience features, and handles the technical complexity of API connections.
This sounds reasonable until you think about what it means: every message you send passes through the wrapper's infrastructure before reaching the AI. Every response passes back through. The wrapper logs everything, stores everything, and handles it all however its developers chose to implement it.
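To make that pass-through concrete, here is a minimal sketch of what a wrapper backend typically looks like, assuming a Node/Express service, the official OpenAI SDK, and a Firestore collection for chat history. The route name, collection name, and schema are illustrative assumptions, not Codeway's actual code.

```typescript
// Hypothetical wrapper backend: every user message is persisted to the
// wrapper's own database *before* it is forwarded to the model provider.
import express from "express";
import OpenAI from "openai";
import { initializeApp, applicationDefault } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

const app = express();
app.use(express.json());

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
initializeApp({ credential: applicationDefault() });
const db = getFirestore();

// Illustrative route and collection names -- not the app's real schema.
app.post("/chat", async (req, res) => {
  const { userId, text, model } = req.body;

  // 1. The wrapper logs the raw message in its own Firestore instance.
  await db.collection("messages").add({
    userId,
    role: "user",
    text,
    model,
    createdAt: new Date(),
  });

  // 2. Only then is the message forwarded to the actual AI provider.
  const completion = await openai.chat.completions.create({
    model: model ?? "gpt-4o-mini",
    messages: [{ role: "user", content: text }],
  });
  const reply = completion.choices[0].message.content;

  // 3. The response is stored too, so the full conversation lives on
  //    whatever infrastructure the wrapper developer configured.
  await db.collection("messages").add({
    userId,
    role: "assistant",
    text: reply,
    createdAt: new Date(),
  });

  res.json({ reply });
});

app.listen(3000);
```

Nothing in that flow is inherently malicious; it's how conversation-history features work. The risk is that steps 1 and 3 put your messages on infrastructure whose security only the wrapper developer controls.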
When you use ChatGPT directly through OpenAI, you get OpenAI's security practices, their SOC 2 compliance, their security team, and their legal accountability. When you use ChatGPT through a wrapper app, you get whatever the wrapper developer decided was good enough. In Chat & Ask AI's case, that meant a misconfigured Google Firebase database with default settings that allowed unauthenticated access.
This is the consumer version of the third-party vendor risk that enterprises grapple with constantly. When a business shares data with a vendor, there's supposed to be due diligence: security questionnaires, contract negotiations, compliance certifications. When an individual downloads a popular app from the App Store, there's an implicit assumption that it must be safe because Apple or Google approved it.
That assumption is dangerously wrong.
Firebase: The Default That Keeps Giving
The Chat & Ask AI breach wasn't sophisticated. It wasn't a state-sponsored attack or a zero-day exploit. It was a Firebase misconfiguration, one of the most common security failures in mobile app development.
Firebase is Google's backend platform for mobile apps. It handles authentication, databases, storage, and more. The problem is that Firebase ships with development-friendly defaults that are deeply insecure for production. New projects start in "test mode" with open access so developers can build quickly. The assumption is that developers will lock things down before launch.
Many don't.
Security researchers have found this pattern repeatedly. A 2024 study discovered 916 websites with misconfigured Firebase instances, exposing 19 million plaintext passwords and 125 million sensitive user records. In 2020, researchers found over 4,000 Android apps with the same problem. Firebase misconfigurations are so common they've become a standard item on security audit checklists.
The technical cause in Chat & Ask AI's case was straightforward. Firebase's default settings left backend storage accessible without meaningful authentication: anyone could become "authenticated" without presenting real credentials, and from there could query users and messages directly. That is exactly what the security researcher who discovered the breach did.
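As an illustration of how little skill this kind of misconfiguration demands, the sketch below uses the standard Firebase web SDK to read from a Firestore database whose security rules were left in a permissive "test mode" style. The project config, collection name, and rule snippet are hypothetical stand-ins for the pattern the researcher described, not Codeway's actual setup.

```typescript
// Reading from a Firestore left with permissive "test mode" style rules.
// Firebase web config values ship inside every client app, so they are not
// secrets -- the only real gatekeeper is the security rules, e.g.:
//
//   match /{document=**} {
//     allow read, write: if true;   // effectively public
//   }
//
import { initializeApp } from "firebase/app";
import { getFirestore, collection, getDocs, limit, query } from "firebase/firestore";

// Hypothetical config, of the kind easily extracted from a mobile app bundle.
const app = initializeApp({
  apiKey: "AIza...",               // public identifier, not a credential
  projectId: "example-wrapper-app",
});

const db = getFirestore(app);

async function dumpSample() {
  // No sign-in of any kind: with open rules, this query simply succeeds.
  const snapshot = await getDocs(query(collection(db, "messages"), limit(100)));
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
}

dumpSample();
```

Closing the hole is a small change: rules that require request.auth and check per-user ownership before allowing reads. The breach pattern persists because nothing forces developers to make that change before shipping.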
This isn't a bug in Firebase. It's a configuration decision that prioritizes developer convenience over security, made in an ecosystem where many developers lack security expertise yet build apps that handle the most sensitive data imaginable.
The False Privacy Perception
People tell AI chatbots things they wouldn't tell their therapist, their spouse, or their journal. There's something about the AI interaction that creates a perception of privacy: it's not a real person, the conversation feels ephemeral, and there's no visible audience.
That perception is catastrophically wrong.
Unlike a conversation with a friend, every AI chat is logged, timestamped, and stored indefinitely. The infrastructure behind chatbots treats these conversations as data to be retained, analyzed, and potentially monetized. Users experience the interaction as fleeting; the infrastructure treats it as permanent.
The Chat & Ask AI breach didn't just expose current conversations. It exposed everything users ever said. Complete histories going back to whenever they started using the app. Intimate confessions, desperate questions, and private struggles, all sitting in a database that anyone could access.
I've written before about how 93% of employees use unauthorized AI tools with company data. That's the enterprise version of this problem. The consumer version is arguably worse: millions of individuals sharing their most private thoughts through apps built by developers who may not have a single security professional on staff.
The Wrapper Ecosystem Problem
Chat & Ask AI isn't an outlier. The mobile app stores are flooded with AI wrapper apps, all promising better interfaces, additional features, or free access to premium models. Research from CovertLabs identified 198 iOS apps leaking user data, with Chat & Ask AI being the worst offender.
The economics create perverse incentives. AI wrappers are easy to build: connect to an API, add a mobile interface, monetize through subscriptions or ads. The barrier to entry is low. The app stores provide distribution. Users assume that popular apps with high ratings must be trustworthy.
But security is expensive and invisible. To users, a well-secured backend looks exactly the same as an insecure one. Security professionals cost money that cuts into margins. Proper configuration takes time that delays launch. In a race to capture users, security is often the first thing sacrificed.
This is the same pattern I described in The YOLO Problem: the convenience is so high and the visible failure rate so low that developers slide into insecure defaults. The difference is that Clawdbot exposed developer workstations. AI wrapper apps expose millions of regular users who have no idea what infrastructure sits between them and their chatbot.
What Users Need to Understand
The lesson from Chat & Ask AI isn't just "be careful with this specific app." It's that the entire category of AI wrapper apps carries risks that users don't see and can't evaluate.
When you use a wrapper app, you're making a trust decision about:
The developer's security expertise. Do they know how to configure Firebase securely? Do they have security professionals reviewing their code? Do they conduct penetration testing?
The developer's data practices. Where is your data stored? Is it encrypted? Who has access? Is it used for training or sold to third parties?
The developer's business incentives. A free app with no obvious revenue model is making money somehow. If the product is free, you might be the product.
The developer's longevity. What happens to your data if the company shuts down, gets acquired, or simply stops maintaining the app?
None of this information is visible in an app store listing. High ratings and download counts don't indicate security practices. Being featured by Apple or Google doesn't mean the app has passed a security audit.
The Practical Response
The safest approach is straightforward: use AI services directly from their providers. ChatGPT from OpenAI, Claude from Anthropic, Gemini from Google. These companies have dedicated security teams, compliance certifications, and accountability that wrapper apps lack.
If you've used wrapper apps in the past, consider what you've shared. If those conversations included anything sensitive, assume that data may no longer be private. Change any passwords you discussed. Be alert for targeted phishing that might reference topics from your conversations.
For enterprise security teams, this breach should prompt questions about consumer AI usage. Your employees are using AI chatbots for work tasks, and many are using wrapper apps rather than approved enterprise tools. The shadow AI problem extends beyond which AI service people use to which interface they access it through.
The Bigger Pattern
The Chat & Ask AI breach is part of a larger pattern in 2025: the AI gold rush is creating a security crisis. In the enterprise space, we've seen Salesloft-Drift compromise 700 organizations through a single chatbot breach. We've seen OmniGPT allegedly leak 34 million conversation lines. We've seen AI companion apps expose millions of intimate conversations.
The common thread is infrastructure that can't keep pace with adoption. Developers are building AI experiences faster than they can secure them. Users are sharing sensitive data faster than they can evaluate where it goes.
The OWASP Top 10 for LLM Applications exists specifically to address these risks. But the Chat & Ask AI breach wasn't caused by a sophisticated LLM attack. It was caused by a misconfigured database, the kind of basic security failure that's been documented for decades.
The AI wrapper problem isn't really about AI at all. It's about the eternal tension between convenience and security, playing out again with higher stakes and more intimate data than ever before. When you use a "ChatGPT app," you're not using ChatGPT. You're trusting your most private thoughts to whoever built the app between you and the AI.
The Chat & Ask AI breach shows what happens when that trust is misplaced.