Security · 8 min read

Why Your Vibe-Coded SaaS Is Hackable (and How to Fix It in 5 Minutes)

AI coding tools are incredible at building features. They're terrible at security.

Not because the models are dumb — they know what RLS is, they know how webhook signatures work. But they optimize for "make it work," not "make it secure." Unless you specifically ask for security, you don't get it.

We've scanned hundreds of AI-generated SaaS applications built with Cursor, Claude Code, Manus, Bolt, and Lovable. The same 9 mistakes show up in almost every one.

The 9 Mistakes

1. Supabase tables without Row Level Security

Your AI creates a profiles table. It doesn't enable RLS. Every row is now readable by anyone with your Supabase URL and anon key — which is in your frontend code by design.

2. service_role key in frontend

Supabase has two keys: anon (safe for browsers) and service_role (bypasses all security). AI tools don't distinguish between them. If service_role ends up in your React app, an attacker has god-mode access to your database.

3. Stripe webhooks without signature verification

Your webhook handler accepts any POST to /api/webhooks/stripe. An attacker sends a fake customer.subscription.updated event granting themselves a premium subscription. Your handler processes it because it never checked the signature.
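In a real app you'd call stripe.webhooks.constructEvent on the raw request body, but the check it performs can be sketched with nothing more than Node's crypto module. The header format and HMAC scheme below follow Stripe's documented signing scheme (HMAC-SHA256 over "timestamp.payload"); the secret value in any usage is a placeholder:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Minimal sketch of what Stripe's constructEvent checks under the hood.
// Prefer the official stripe library in production.
function verifyStripeSignature(
  payload: string,     // raw request body, exactly as received
  sigHeader: string,   // the Stripe-Signature header
  secret: string,      // the whsec_... webhook signing secret
  toleranceSeconds = 300,
  now = Math.floor(Date.now() / 1000),
): boolean {
  // Header looks like: t=1700000000,v1=<hex>,v1=<hex>
  const parts = new Map<string, string[]>();
  for (const kv of sigHeader.split(",")) {
    const [k, v] = kv.split("=");
    if (!k || !v) continue;
    parts.set(k, [...(parts.get(k) ?? []), v]);
  }
  const timestamp = Number(parts.get("t")?.[0]);
  const signatures = parts.get("v1") ?? [];
  if (!Number.isFinite(timestamp) || signatures.length === 0) return false;

  // Reject stale events to blunt replay attacks.
  if (Math.abs(now - timestamp) > toleranceSeconds) return false;

  // Expected signature: HMAC-SHA256 over "<timestamp>.<payload>".
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${payload}`)
    .digest("hex");

  // Constant-time comparison against each candidate signature.
  return signatures.some((sig) => {
    const a = Buffer.from(sig, "hex");
    const b = Buffer.from(expected, "hex");
    return a.length === b.length && timingSafeEqual(a, b);
  });
}
```

A handler that returns 400 whenever this check fails makes the forged-event attack above a no-op.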

4. Secret keys in frontend environment variables

Vite exposes any variable prefixed with VITE_. Next.js exposes NEXT_PUBLIC_. Your AI sees the pattern and helpfully prefixes your Stripe secret key. It's now in your JavaScript bundle, visible in browser dev tools.

5. CORS set to wildcard

cors({ origin: '*' }) tells browsers that any website may call your API and read the responses. Your AI sets this because it "fixes" the CORS error during development. It ships to production unchanged.
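The fix is an explicit allowlist. A minimal sketch of the check (the domain names are placeholders for your own origins):

```typescript
// Origins that are allowed to call the API from a browser.
// Placeholders: substitute your real production domains.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://www.example.com",
]);

function isAllowedOrigin(origin: string | undefined): boolean {
  // No Origin header means a same-origin or non-browser request;
  // let it through and rely on authentication instead.
  if (!origin) return true;
  return ALLOWED_ORIGINS.has(origin);
}
```

With Express's cors middleware, this plugs into the origin option: cors({ origin: (origin, cb) => cb(null, isAllowedOrigin(origin)) }).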

6. Optional reCAPTCHA

Your AI guards verification behind the token's presence, e.g. if (recaptcha_token) verify(token). Bots simply don't send a token and skip verification entirely.
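The fix is to fail closed: a missing token is a failed check, not a skipped one. A sketch, where verifyWithGoogle is a placeholder for the real call to Google's siteverify endpoint:

```typescript
// Fail-closed CAPTCHA gate: no token, no request.
// verifyWithGoogle is injected so the logic stays testable;
// in production it would POST the token to Google's siteverify API.
async function requireCaptcha(
  token: string | undefined,
  verifyWithGoogle: (token: string) => Promise<boolean>,
): Promise<void> {
  if (!token) {
    // Missing token is treated exactly like a failed verification.
    throw new Error("captcha_required");
  }
  const ok = await verifyWithGoogle(token);
  if (!ok) throw new Error("captcha_failed");
}
```

The route handler calls this before doing any work, so a bot that omits the token gets a 4xx instead of a free pass.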

7. SMTP on cloud platforms

Railway, Render, and Heroku block port 587. Your AI uses smtplib because that's what the Mailgun docs show first. Emails fail silently. Users never get password resets.
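The fix is the provider's HTTP API, which travels over port 443 and is never blocked. A sketch that builds a Mailgun-style request for fetch — the endpoint shape follows Mailgun's v3 messages API, and the domain and key in any usage are placeholders:

```typescript
// Build a fetch-ready request for Mailgun's HTTP API instead of SMTP.
// Pure function so the request shape can be tested without network access.
function buildMailgunRequest(
  domain: string,   // e.g. "mg.example.com" (placeholder)
  apiKey: string,   // your Mailgun API key (placeholder)
  msg: { from: string; to: string; subject: string; text: string },
) {
  return {
    url: `https://api.mailgun.net/v3/${domain}/messages`,
    options: {
      method: "POST",
      headers: {
        // Mailgun uses HTTP Basic auth with the literal username "api".
        Authorization: "Basic " + Buffer.from(`api:${apiKey}`).toString("base64"),
        "Content-Type": "application/x-www-form-urlencoded",
      },
      body: new URLSearchParams(msg).toString(),
    },
  };
}
```

Sending is then just: const { url, options } = buildMailgunRequest(...); await fetch(url, options); — and because it's plain HTTPS, it works on Railway, Render, and Heroku alike.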

8. Math.random() for security tokens

Your AI generates a password reset token with Math.random().toString(36). Math.random() is not cryptographically secure — it's a seeded PRNG, so an attacker who observes enough outputs can predict future tokens.
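The fix is a couple of lines with Node's built-in crypto module:

```typescript
import { randomBytes } from "node:crypto";

// 32 bytes from the OS CSPRNG, hex-encoded: a 64-character
// token with 256 bits of entropy, infeasible to guess or predict.
function generateResetToken(): string {
  return randomBytes(32).toString("hex");
}
```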

9. No rate limiting on auth endpoints

Five failed login attempts? A thousand? Ten thousand? Without rate limiting, brute force attacks work. Your AI doesn't add rate limiting unless you ask.
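Even a minimal in-memory limiter beats nothing. A sketch of a fixed-window counter keyed by IP — single-process only; a deployment with multiple instances would need a shared store such as Redis:

```typescript
// Fixed-window rate limiter: at most `max` hits per `windowMs` per key.
// In-memory, so it only protects a single process.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private max: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    // First hit, or the previous window has expired: start a fresh window.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}
```

In a login route this is one guard clause before checking credentials, e.g. if (!limiter.allow(req.ip)) return res.status(429).end(); (req/res names assume an Express-style handler).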

The Fix: One File, 5 Minutes

We extracted the security patterns from our own production stack into an open-source rules file. Drop it in your project root and your AI coding tool follows all 40+ security requirements automatically.

curl -sL https://vettiq.ai/api/blueprint/claude.md -o claude.md

For Cursor users: mv claude.md .cursorrules

For Windsurf users: mv claude.md .windsurfrules

For Manus users: paste the contents into your Project Instructions

The file contains ALWAYS/NEVER directives organized by domain:

  • Supabase: ALWAYS enable RLS, NEVER use service_role in frontend
  • Stripe: ALWAYS verify webhook signatures, NEVER expose secret keys
  • Auth: ALWAYS use crypto.randomBytes, NEVER use Math.random
  • API: ALWAYS restrict CORS, ALWAYS rate limit auth endpoints
  • Email: ALWAYS use HTTP API, NEVER use SMTP on cloud platforms

Beyond the Rules File

The full Blueprint includes 27 acceptance tests you can run to verify your implementation, and 16 scan rules for automated checking. Everything is open-source under MIT.