I've audited dozens of AI-generated codebases over the past year. ChatGPT, Cursor, Copilot, Lovable, Bolt. The pattern is always the same: the app works, it looks good, and it has security holes you could drive a truck through.
AI tools optimize for "does it run?" not "is it safe?" That's fine for prototyping, but the moment real users are involved, these vulnerabilities become serious liabilities.
Here are the five I find most often, and how to fix each one.
1. Exposed API Keys and Secrets
This is the most common one by far. AI tools regularly hardcode API keys, database connection strings, and secret tokens directly in client-side code.
What it looks like:
```javascript
// This is sitting in your React component
const supabaseKey = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
const stripeSecret = 'sk_live_51abc123...'
```
Anyone who opens the browser console can see these. If it's a Stripe secret key, they can issue refunds, create charges, or access your customer data.
How to fix it:
- Move all secrets to environment variables (`.env` files)
- Never use secret keys on the client side. Use server-side API routes or edge functions instead.
- If you've already pushed secrets to Git, rotate them immediately. Bots scrape GitHub for exposed keys within minutes.
- Use `.gitignore` to exclude `.env` files from your repository
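On the server, that boils down to reading secrets from the environment and refusing to start without them. A minimal sketch (the helper name and the `STRIPE_SECRET_KEY` variable are illustrative, not from any specific framework):

```javascript
// Server-side only — this module must never be bundled into client code.
// Secrets come from environment variables, and the app fails fast if one
// is missing rather than falling back to a hardcoded value.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Loaded once at startup; the browser never sees this value because the
// client talks to your API route, and your API route talks to Stripe.
// const stripeSecret = requireEnv('STRIPE_SECRET_KEY');
```

Failing fast at startup also catches the deployment where someone forgot to set the variable, instead of silently running with an undefined key.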
2. No Server-Side Validation
AI-generated apps love to validate inputs only on the client side. Form checks in React, required fields in HTML, frontend regex patterns. All of which can be bypassed in about 10 seconds.
The reality: Any data sent to your server can be modified. Someone can open the browser dev tools, change the request body, and send whatever they want.
How to fix it:
- Validate everything on the server, regardless of what the frontend does
- Use a validation library like `zod` or `joi` to define schemas for your API inputs
- Never trust client-side data for anything security-sensitive (prices, user roles, permissions)
```javascript
// Bad: trusting the price from the frontend
const { price } = req.body;
await createCharge(price);

// Good: looking up the price server-side
const product = await db.products.findById(req.body.productId);
await createCharge(product.price);
```
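Here's what the server-side check itself can look like, as a hand-rolled sketch (a library like `zod` or `joi` gives you the same thing with less code, and the field names here are illustrative):

```javascript
// Re-validate every field on the server, regardless of what the
// frontend claims to have checked. Returns a list of problems so the
// route can reject the request with a 400 before touching the database.
function validateOrderInput(body) {
  const errors = [];
  if (typeof body.productId !== 'string' || body.productId.length === 0) {
    errors.push('productId must be a non-empty string');
  }
  if (!Number.isInteger(body.quantity) || body.quantity < 1 || body.quantity > 100) {
    errors.push('quantity must be an integer between 1 and 100');
  }
  return errors;
}

// In the API route: reject before doing any work.
// const errors = validateOrderInput(req.body);
// if (errors.length > 0) return res.status(400).json({ errors });
```

Note the type checks: a `quantity` of `"2"` (a string) or an object where a string belongs fails validation instead of flowing into your database layer.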
3. Broken Authentication and Session Handling
AI code often implements authentication in a way that technically works but is wildly insecure. Common patterns I see:
- Storing JWTs in localStorage (vulnerable to XSS attacks)
- Not verifying tokens on the server for each request
- No token expiration, or tokens that last for months
- Using the same secret across environments (dev and prod)
- Password reset flows that don't expire or can be reused
How to fix it:
- Store tokens in `httpOnly`, `secure`, `sameSite` cookies instead of `localStorage`
- Verify the token server-side on every protected API route
- Set reasonable expiration times (15 minutes for access tokens, 7 days for refresh tokens)
- Use a battle-tested auth library like NextAuth, Supabase Auth, or Clerk instead of rolling your own
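To make the cookie attributes concrete, here's a sketch of building a hardened `Set-Cookie` header by hand (the cookie name and lifetime are illustrative; in Express you'd normally use `res.cookie` with the equivalent options, and an auth library handles all of this for you):

```javascript
// httpOnly keeps the token away from JavaScript (and therefore XSS),
// Secure restricts it to HTTPS, and SameSite limits cross-site sends.
function sessionCookie(token, maxAgeSeconds) {
  return [
    `session=${encodeURIComponent(token)}`,
    `Max-Age=${maxAgeSeconds}`,
    'HttpOnly',
    'Secure',
    'SameSite=Strict',
    'Path=/',
  ].join('; ');
}

// In a login handler: a short-lived access token, per the guidance above.
// res.setHeader('Set-Cookie', sessionCookie(accessToken, 15 * 60));
```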
4. Missing Row-Level Security
This one is subtle and dangerous. The app has authentication, so users can log in. But once they're logged in, they can access any other user's data by changing an ID in the URL or API request.
What it looks like:
```javascript
// API route that fetches a user profile
app.get('/api/users/:id', async (req, res) => {
  const user = await db.users.findById(req.params.id);
  res.json(user);
});
```
There's no check to make sure the logged-in user is requesting their own data. Anyone can fetch anyone else's profile, orders, payment info, whatever.
How to fix it:
- Always check that the authenticated user has permission to access the requested resource
- If you're using Supabase, enable Row Level Security (RLS) policies on every table
- Test by logging in as one user and trying to access another user's data. If it works, you have a problem.
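The fix in the route above is a one-line ownership check. A sketch (how you get the authenticated user's ID depends on your auth setup — `req.userId` set by auth middleware is an assumption here):

```javascript
// The ID on the verified token must match the ID being requested.
// Keeping this as its own function makes the rule easy to test and reuse.
function canAccessUser(authenticatedUserId, requestedUserId) {
  return authenticatedUserId != null && authenticatedUserId === requestedUserId;
}

// In the route, after your auth middleware has verified the token:
// app.get('/api/users/:id', requireAuth, async (req, res) => {
//   if (!canAccessUser(req.userId, req.params.id)) {
//     return res.status(403).json({ error: 'Forbidden' });
//   }
//   const user = await db.users.findById(req.params.id);
//   res.json(user);
// });
```

The same pattern applies to orders, payments, and anything else keyed by ID: compare against the identity on the token, never against anything the client sends.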
5. SQL Injection and NoSQL Injection
AI-generated code sometimes builds database queries by concatenating strings with user input. This is one of the oldest and most well-known vulnerabilities in web development, and AI tools still do it.
What it looks like:
```javascript
// Never do this
const query = "SELECT * FROM users WHERE email = '" + req.body.email + "'";
```
An attacker can input something like `' OR '1'='1` and get access to your entire database.
How to fix it:
- Always use parameterized queries or an ORM (Prisma, Drizzle, Sequelize)
- Never concatenate user input into SQL strings
- If you're using MongoDB, watch out for NoSQL injection too. Validate that inputs are the expected type (string, not object).
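Both fixes are small. The parameterized version is sketched here in node-postgres style (table and column names are illustrative), and the NoSQL guard is a simple type check you can run before any Mongo query:

```javascript
// SQL: the driver sends the query text and the values separately, so
// user input can never change the structure of the query.
// const { rows } = await pool.query(
//   'SELECT * FROM users WHERE email = $1',
//   [req.body.email]
// );

// MongoDB: reject non-string input before it reaches the query. An
// object like { $gt: '' } in place of an email string would otherwise
// match every document in the collection.
function assertPlainString(value, field) {
  if (typeof value !== 'string') {
    throw new Error(`${field} must be a string`);
  }
  return value;
}

// const email = assertPlainString(req.body.email, 'email');
// const user = await users.findOne({ email });
```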
The Bottom Line
If real people are using your app, especially if they're entering passwords, payment info, or personal data, you need to check for these issues. AI tools won't flag them for you.
A quick security audit before launch can save you from a data breach, legal liability, and the kind of reputation damage that's hard to come back from.
Not sure if your app is secure?
I offer security audits starting at $1,000. I'll go through your codebase, identify every vulnerability, and give you a clear plan to fix them.
Request an Audit