Real Data Breaches Caused by AI-Generated Code — And How to Prevent Them
AI coding tools are everywhere. Cursor, Bolt, Lovable, ChatGPT — they can build an entire app from a text prompt in minutes. But they're also causing real security breaches that expose real user data.
This isn't theoretical. These are documented incidents that happened to real companies in 2025 and 2026.
The Numbers Are Alarming
Before we get to the stories, here's what the research says:
- 45% of AI-generated code contains security flaws — according to Veracode's GenAI Code Security Report
- AI-generated code is now the cause of 1 in 5 breaches — per Aikido Security's 2026 report
- Fewer than half of developers review AI-generated code before committing it — Stack Overflow's 2025 survey of 49,000 developers
- 35 new CVEs in March 2026 alone were the direct result of AI-generated code, up from 6 in January
And those are professional developers. For vibe coders with no development background, the risks are even higher.
Case 1: Moltbook — 1.5 Million API Keys Exposed
What happened: Moltbook, a social platform for AI agents, was completely vibe-coded by its founder. Security researchers at Wiz discovered that the entire production database was accessible to anyone on the internet.
What was exposed:
- 1.5 million API authentication tokens
- 35,000 user email addresses
- Private messages between agents
- Full read AND write access to every database table
How it happened: The Supabase API key was embedded in client-side JavaScript — visible to anyone who opened the browser's developer tools. But the real problem was that Row Level Security (RLS) was completely disabled on the database. That means the exposed key didn't just let attackers read data — it let them read, write, and delete anything in the entire database.
Why AI missed it: AI coding tools generate code that works. The app functioned perfectly. Data loaded, users could sign up, everything looked normal. But the AI never thought to ask: "Should this API key be visible to the browser?" or "Should I restrict which rows each user can access?"
Case 2: Base44 — Private Enterprise Apps Exposed to Anyone
What happened: Base44 is a vibe coding platform (later acquired by Wix for $80 million). In July 2025, Wiz Research discovered that any unauthenticated attacker could access private applications — including enterprise tools handling HR data, internal chatbots, and knowledge bases.
What was exposed:
- Private enterprise applications with sensitive internal data
- Apps handling PII and HR operations
- Authentication completely bypassable
How it happened: The platform had undocumented API endpoints for registration and email verification that were never properly secured. An attacker only needed a publicly visible app_id value to bypass authentication — including Single Sign-On (SSO). No special tools required. No advanced hacking knowledge. Just basic API calls.
Why AI missed it: The authentication looked like it was working. Users logged in, private apps were hidden from public view. But the underlying API endpoints were wide open. AI tools build the happy path — the flow that works when everyone behaves normally. They don't think about what happens when someone deliberately tries to bypass the front door.
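A server-side check for the unhappy path might look like the sketch below. This is a minimal TypeScript illustration, not Base44's actual code: the names `Session`, `appAccess`, and `authorizeAppRequest` are hypothetical. The point is that a client-supplied identifier like `app_id` must never be treated as proof of access.

```typescript
// Illustrative sketch only -- names are hypothetical, not Base44's API.
// A public app_id alone must never grant access.

type Session = { userId: string } | null;

// Server-side record of which users may access which apps
// (assumption: some persistent store; a Map stands in for it here).
const appAccess = new Map<string, Set<string>>([
  ["app_123", new Set(["alice"])],
]);

function authorizeAppRequest(session: Session, appId: string): boolean {
  // 1. The request must carry a valid authenticated session --
  //    knowing a public app_id proves nothing.
  if (!session) return false;
  // 2. The session's user must be explicitly granted access to this app.
  return appAccess.get(appId)?.has(session.userId) ?? false;
}
```

Both conditions have to be re-checked on every endpoint, including undocumented ones — which is exactly where Base44's checks were missing.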
Case 3: Enrichlead — "100% AI-Written" Platform Falls Apart on Launch
What happened: Enrichlead was a startup whose founder publicly boasted that 100% of the platform's code was written by Cursor AI, with "zero hand-written code." Days after launch, security researchers found the platform was riddled with basic security flaws.
What was exposed:
- Anyone could access paid features without paying
- Users could alter other users' data
- Basic authorization was missing entirely
How it happened: The AI generated code that handled the features — data enrichment, user accounts, payment integration — but didn't implement proper authorization checks. There was no verification that the person making a request was actually allowed to make that request. The code worked perfectly for honest users and fell apart completely against anyone who poked at it.
Why AI missed it: This is the most common pattern. AI builds features, not guardrails. It creates the walls of the house but doesn't install the locks. Authorization — checking that User A can't access User B's data — requires understanding the intent behind the code. AI doesn't have that. It just writes what works.
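The missing lock is usually a single ownership check before any write. Here is a hedged TypeScript sketch of that check — the names `UserRecord`, `records`, and `updateRecord` are illustrative, not Enrichlead's code:

```typescript
// Hypothetical sketch of the authorization check AI-generated code
// typically omits: verify the requester OWNS the record before writing.

type UserRecord = { id: string; ownerId: string; data: string };

// Stand-in for a database table.
const records = new Map<string, UserRecord>([
  ["r1", { id: "r1", ownerId: "alice", data: "original" }],
]);

function updateRecord(requesterId: string, recordId: string, data: string): boolean {
  const record = records.get(recordId);
  if (!record) return false;
  // The check that was missing: being logged in is not enough --
  // the requester must be the owner of this specific record.
  if (record.ownerId !== requesterId) return false;
  record.data = data;
  return true;
}
```

Without the `ownerId` comparison, any authenticated user (or, with no auth at all, anyone) can alter anyone else's data — precisely the flaw researchers found.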
The Pattern: Why AI Code Is Vulnerable
Every one of these breaches follows the same pattern:
1. The app works. Everything functions correctly from the user's perspective.
2. The security isn't there. API keys are exposed, authentication is bypassable, data isn't separated between users.
3. Nobody checks. The code ships without an expert reviewing it.
4. Someone finds the holes. If you're lucky, it's a security researcher. If you're not, it's someone who wants your users' data.
Research from Checkmarx found over 2,000 vulnerabilities, 400+ exposed secrets, and 175 instances of exposed personal data across 5,600 vibe-coded applications. These aren't edge cases. This is the norm.
The most common issues:
- No input validation — users can submit anything, including malicious code
- API keys in the browser — secrets that should be server-side are visible to everyone
- Missing authorization — no checks that User A can't access User B's data
- Insecure dependencies — AI suggests packages that are deprecated, vulnerable, or don't even exist
- No rate limiting — nothing stopping someone from hammering your API
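Two of these guardrails — input validation and rate limiting — fit in a few lines. The TypeScript below is a minimal, in-memory sketch to show the shape of each check; a production app would use a vetted library and shared storage, and the function names here are illustrative:

```typescript
// Input validation: allow-list what's valid instead of trying to
// block-list what's malicious. Here: 3-20 letters, digits, underscores.
function isValidUsername(input: string): boolean {
  return /^[A-Za-z0-9_]{3,20}$/.test(input);
}

// Rate limiting: a sliding window per client, in memory.
// (Illustrative only -- real deployments need shared storage like Redis.)
const hits = new Map<string, number[]>(); // clientId -> request timestamps (ms)

function allowRequest(clientId: string, now: number, limit = 10, windowMs = 60_000): boolean {
  const recent = (hits.get(clientId) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) return false; // too many requests in the window
  recent.push(now);
  hits.set(clientId, recent);
  return true;
}
```

Neither check is hard to write — the problem is that AI tools rarely add them unless you explicitly ask.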
How to Prevent This From Happening to Your App
If you built your app with AI, it almost certainly has some of these issues. The question isn't if — it's how many and how serious.
Here's what you can do:
1. Get a professional code review
This is the fastest and most reliable way to find what your AI missed. A human expert who understands security will catch issues that no automated tool can — like missing authorization logic, data leaking between users, or architectural decisions that won't scale.
2. Never trust AI-generated security code
If your AI set up authentication, database access rules, or anything involving user data — assume it's wrong until proven otherwise. These are the highest-risk areas, and they're where AI most consistently gets things wrong.
3. Check your database access rules
If you're using Supabase, Firebase, or any database with client-side access — make sure Row Level Security (or equivalent) is enabled and properly configured. This single check could prevent a Moltbook-scale breach.
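For Supabase (which sits on Postgres), enabling RLS and scoping rows to their owner takes a few lines of SQL. This fragment is illustrative only: the `profiles` table and `user_id` column are placeholders for your own schema, but `auth.uid()` is Supabase's real helper for the current user's ID.

```sql
-- With RLS enabled, the anon key visible in the browser can only
-- touch rows these policies allow. (Table/column names are placeholders.)
alter table profiles enable row level security;

create policy "Users can read their own profile"
  on profiles for select
  using (auth.uid() = user_id);

create policy "Users can update their own profile"
  on profiles for update
  using (auth.uid() = user_id);
```

With RLS enabled and no policies, the anon key can read nothing at all — the safe default. Moltbook had the opposite: RLS disabled, so the exposed key could read and write everything.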
4. Search your frontend code for secrets
Open your browser's developer tools on your live app. Search for "key", "token", "secret", "password" in the JavaScript bundle. If you find anything, it's exposed to the world.
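The manual devtools search can also be rough-automated. The TypeScript sketch below scans a bundle's text for strings that look like embedded secrets — the patterns are illustrative and incomplete (they will miss things), so treat any hit as a starting point, not a full audit:

```typescript
// Heuristic secret scan over a built JS bundle's contents.
// Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS: [string, RegExp][] = [
  // JWT-style tokens (e.g. Supabase keys are JWTs starting with "eyJ")
  ["JWT-style token", /eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}/],
  // AWS access key IDs
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  // Generic "apiKey = '...'" style assignments
  ["Generic api key assignment", /(?:api[_-]?key|secret|token|password)\s*[:=]\s*["'][^"']{8,}["']/i],
];

function findSecrets(bundle: string): string[] {
  return SECRET_PATTERNS
    .filter(([, re]) => re.test(bundle))
    .map(([name]) => name);
}
```

Run it over the files your build outputs (the `dist/` or `.next/` bundles). Remember the asymmetry: a hit in frontend code is always a problem, but an empty result doesn't prove you're clean.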
5. Don't ship without a second pair of eyes
The founders of Moltbook, Base44, and Enrichlead all shipped without expert review. All three had their data exposed. The pattern is clear.
Your App Could Be Next
Every day your AI-built app runs without a review is a day your users' data is at risk. The breaches above weren't caused by sophisticated attacks — they were caused by basic security mistakes that an expert would have caught in minutes.
Get your app reviewed before your users find out the hard way.