AI-generated code audit specialists — a written code audit of your Lovable, Bolt, v0, or Cursor app before you hire anyone
An AI-generated code audit before you spend a dollar on rescue. We read the repo, run semgrep and Supabase RLS audit passes, and ship a written report covering security, data model, auth, Stripe, deploy, tests, and architecture for Lovable, Bolt, v0, and Cursor apps. Every finding is severity-ranked, with a rescue-vs-rewrite verdict, in 48 hours.
Why AI-built apps fail code audits
Founders arrive at us with the same question: 'is this salvageable?' The honest answer requires reading the code. Lovable, Bolt, v0, Cursor and Replit Agent all ship apps that look similar on the surface and differ enormously underneath. One Lovable app we audited had clean RLS and 40 indexes; the next had RLS disabled on every table and every query unbounded. The tool doesn't decide the audit outcome — the specific prompts and data model do.
The industry-wide picture explains why the audit is worth it before you hire: our 2026 vibe-coding research summarizes the AI-code vulnerability benchmark, the widely-reported Lovable/Supabase RLS disclosure, and the NIST CVE-2025-53773 (CVSS 9.6) GitHub Copilot issue. An audit is the cheapest step in the rescue funnel — it stops you paying to fix the wrong problem first, and it surfaces the severities that actually block your launch.
Which AI builder shipped your broken code?
How AI-generated code fails an audit usually depends on the tool that shipped it. Find your builder below, then read the matching problem page.
| AI builder | What the audit typically finds | Go to |
|---|---|---|
| Lovable | RLS disabled, secrets committed, Supabase exposed publicly | Lovable audit → |
| Bolt.new | Stripe misconfigured, webhooks unverified, env vars in client | Bolt audit → |
| v0 | No backend at all, no auth, no rate limits on API routes added later | v0 audit → |
| Cursor | Multi-file drift, tests missing or fake, architecture debt | Cursor audit → |
| Replit Agent | SQLite in prod, infrastructure lock-in, secrets in repl env | Replit audit → |
| Claude Code | Generally cleaner; audit focuses on architecture and tests | Claude Code audit → |
| Windsurf | Enterprise-scale compliance and audit-log gaps | Windsurf audit → |
| Base44 | Proprietary runtime — audit focuses on escape plan | Base44 audit → |
Anatomy of an audit finding in an AI-built app
A founder sent us a Lovable fintech MVP last quarter with 1,400 pilot users on the waitlist and a demo scheduled with a strategic partner the following Monday. The app 'worked.' The sign-up flow looked immaculate. The dashboard charted the right numbers. They wanted to know whether they could launch and ship payments on time. We ran the standard eight-area audit over 48 hours.
The first finding was the one that shut the demo down: RLS disabled on every single table. Anyone with the Supabase anon key — which ships in the public JavaScript bundle — could read and write every row. The second finding: hardcoded Stripe test key in the client, and the live key in the same file guarded by a NODE_ENV check that Vercel's build process overrode anyway. The third: the webhook endpoint returned 200 without verifying Stripe's signature, so anyone could forge a 'payment succeeded' event and flip an order to paid. The fourth: password reset emails were never sent — SMTP was unconfigured, so users who forgot their password had no recovery path. The fifth: no pagination on the main dashboard query, which loaded 47,000 rows into the browser on every page view.
Individually, each was a half-day fix. Together, they would have buried the demo and made a public breach disclosure a matter of when, not if — exactly the pattern captured by the February 2026 Lovable/Supabase RLS disclosure (summarized in our 2026 research). The audit report delivered that Friday gave the founder the ranked list: patch the critical five before the demo, patch seven highs within two weeks, accept the ten mediums. The rescue shipped in 9 days. The demo happened.
What the audit was worth, concretely: the founder got a defensible scope to hire against, a ranked list to triage internally, and a document to hand the strategic partner's security team. The partner's security review came back clean because we had already found and fixed everything their checklist covered. The $499 audit fee was credited against the rescue engagement. The alternative — hire a developer at hourly rate with no scope — would have consumed weeks of discovery before the first fix. Every founder we've audited has reported the same math: the audit is the cheapest step in the rescue funnel, and it protects against the specific failure modes Veracode measured across its 2025 benchmark. Nothing in the audit is speculative. Every finding maps to a CWE, a line, and an estimate.
What a Code audit rescue engagement ships
From first diagnostic to production handoff — the explicit steps on every Code audit engagement.
- 01
Free 30-minute diagnostic
We talk for 30 minutes, you share the repo, we look at five things: auth, data model, secrets, deploy, tests. You get a written one-pager in 48 hours.
- 02
Paid audit ($499)
Full written audit: security, data model, auth, payments, deploy pipeline, tests, architecture, performance. Each finding has severity (critical/high/medium/low), file references, and estimated fix effort.
- 03
Rewrite-or-rescue recommendation
We tell you honestly whether to rescue the app or throw it out. We have no incentive to inflate — if you should rewrite, we say so.
- 04
Quoted fix plan
Every critical and high finding comes with a fix estimate. You decide what to do in-house and what to hand us.
- 05
Optional: rescue kickoff
If you hire us to fix the findings, the audit fee is credited against the rescue engagement.
What every Code audit rescue checks
The diagnostic pass on every engagement. Each item takes under 10 minutes; together they cover the patterns behind roughly 90% of the AI-built-app failures we see.
- 01 RLS status on every Supabase table
We enumerate tables, check whether RLS is enabled, and check whether each policy is non-trivial. Policies like `USING (true)` fail this check. Industry AI-vulnerability benchmarks (see our 2026 research) put the share of AI-generated code carrying at least one vulnerability near half; RLS misconfiguration is the single most common class.
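The triviality test itself is a few lines. A sketch, with an illustrative name — in practice the expression comes from the `qual` column of Postgres's `pg_policies` view:

```typescript
// Sketch: flag an RLS policy whose USING expression is a tautology.
// `qual` is the policy expression as Postgres reports it (pg_policies.qual);
// a null expression or a bare `true` grants every row to every caller.
function isTrivialPolicy(qual: string | null): boolean {
  if (qual === null) return true;
  const normalized = qual.replace(/\s+/g, "").toLowerCase();
  return normalized === "true" || normalized === "(true)";
}
```

A real audit also checks that the non-trivial expression actually references the caller, e.g. `auth.uid() = user_id`, rather than some other always-true condition.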
- 02 Secrets in the client bundle
Grep the built JavaScript for 'sk_live', 'sk_test', 'SUPABASE_SERVICE_ROLE', and common API key prefixes. Any match is a critical finding.
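A minimal sketch of that scan, run over the built JavaScript as one string; the pattern list is a starting set, not exhaustive:

```typescript
// Sketch: token patterns we grep the client bundle for. Any hit is critical —
// anything in the bundle ships to every visitor's browser.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[0-9a-zA-Z]+/, // Stripe live secret key
  /sk_test_[0-9a-zA-Z]+/, // Stripe test secret key
  /SUPABASE_SERVICE_ROLE/, // service-role key name leaking into the client
  /AKIA[0-9A-Z]{16}/, // AWS access key ID
];

function findSecrets(bundleSource: string): string[] {
  const hits: string[] = [];
  for (const pattern of SECRET_PATTERNS) {
    const match = bundleSource.match(pattern);
    if (match) hits.push(match[0]);
  }
  return hits;
}
```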
- 03 Webhook signature verification
Every Stripe, GitHub, or provider webhook handler must verify the signature before acting. We read the route and confirm the check.
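For Stripe, the documented scheme is an HMAC-SHA256 over `<timestamp>.<rawBody>` keyed by the endpoint secret, carried in the `Stripe-Signature` header as `t=...,v1=...`. A self-contained sketch — in production, prefer `stripe.webhooks.constructEvent`, which also enforces a timestamp tolerance to block replay:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of Stripe's signature check. `rawBody` must be the unparsed request
// body — verifying a re-serialized JSON object will fail.
function verifyStripeSignature(rawBody: string, header: string, secret: string): boolean {
  const parts = new Map(header.split(",").map((p) => p.split("=") as [string, string]));
  const timestamp = parts.get("t");
  const signature = parts.get("v1");
  if (!timestamp || !signature) return false;
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");
  if (expected.length !== signature.length) return false;
  // Constant-time compare to avoid leaking the signature byte by byte.
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

The failure mode we flag is a handler that parses the event and returns 200 without ever calling a check like this.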
- 04 Env var parity between preview and production
We diff the Vercel or host env vars for each environment. Any preview-only value that the app references in production is flagged.
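The diff itself is trivial once the values are exported (for example via `vercel env ls`); a sketch with illustrative names:

```typescript
// Sketch: keys referenced in preview that are absent in production.
// Each missing key is a candidate for a runtime failure that only
// appears after deploy.
function missingInProd(
  preview: Record<string, string>,
  production: Record<string, string>,
): string[] {
  return Object.keys(preview).filter((key) => !(key in production));
}
```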
- 05 OAuth redirect URLs registered with providers
Google, GitHub, and Supabase allowlists must include the production URL exactly. Mismatched protocols, trailing slashes, or missing subdomains are common.
- 06 Database migrations in git
If the Supabase dashboard state can't be reproduced from the repo, the team has no rollback. We check for a migrations folder and that it matches the live schema.
- 07 Test coverage on the critical paths
We count tests on sign-up, sign-in, checkout, and the top 3 mutations. Zero tests on a $100k ARR flow is a high finding.
- 08 Error boundaries and 404/500 pages
Classic React default: one component crashes, the whole app goes white. We check for route-level error boundaries and custom error pages.
- 09 Rate limits on auth and mutating endpoints
Sign-up, sign-in, password reset, and any unthrottled mutating POST endpoint are brute-forceable and trivially DoS-able.
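As a sketch of the minimum we expect to find, here is a fixed-window limiter. It is in-memory and single-instance, so on serverless hosts a shared store (Redis, Upstash, or similar) is required for the limit to mean anything across instances:

```typescript
// Sketch: fixed-window rate limiter keyed by e.g. IP or account.
// `now` is injectable for testing; defaults to wall-clock time.
class RateLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      // New window: first request always passes.
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= this.limit;
  }
}
```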
- 10 Indexes on foreign keys and filterable columns
We dump the schema and check pg_indexes. Any table with over 10k rows and a foreign key without an index is a high finding.
- 11 Input validation at API boundaries
Zod, Valibot, or an equivalent schema at every route handler. An untyped `req.body` reaching the database is flagged.
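The pattern, hand-rolled here for illustration (in a real codebase we expect Zod or Valibot rather than this): reject anything that is not exactly the payload the handler expects before it can reach a query. `parseCreateOrder` and its fields are hypothetical:

```typescript
// Sketch of boundary validation: an unknown body either becomes a fully
// typed payload or null — nothing untyped passes through.
type CreateOrderInput = { productId: string; quantity: number };

function parseCreateOrder(body: unknown): CreateOrderInput | null {
  if (typeof body !== "object" || body === null) return null;
  const b = body as Record<string, unknown>;
  if (typeof b.productId !== "string" || b.productId.length === 0) return null;
  if (typeof b.quantity !== "number" || !Number.isInteger(b.quantity) || b.quantity < 1) {
    return null;
  }
  return { productId: b.productId, quantity: b.quantity };
}
```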
- 12 CORS, CSP, and security headers
CORS set to '*' or missing Content-Security-Policy is noted. We recommend next-safe or Helmet equivalents per stack.
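On Next.js, a typical starting point is the `headers()` hook in `next.config.js`; the values below are a sketch to tighten per app, not a drop-in policy:

```typescript
// next.config.js sketch — a baseline header set applied to every route.
// The CSP here is deliberately strict; most apps will need to allow
// their specific script and connect sources.
const securityHeaders = [
  { key: "Content-Security-Policy", value: "default-src 'self'; frame-ancestors 'none'" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "X-Frame-Options", value: "DENY" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
];

const nextConfig = {
  async headers() {
    return [{ source: "/(.*)", headers: securityHeaders }];
  },
};

export default nextConfig;
```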
Common Code audit patterns we fix
These are the shapes AI-generated code arrives in — and the shape we leave behind.
- Before: README.md contains a one-line description and an `npm run dev` command. Setup takes three days because nobody knows which env vars are needed or how to seed data.
  After: README.md documents every env var with example values and its source of truth, a one-command bootstrap script seeds a working dev DB, and a first-day checklist greets new engineers.
- Before: an API route runs `supabase.from('orders').select('*')` and returns the result to the client — 47,000 rows, a 12-second load, a Vercel function timeout, and client-side filtering.
  After: a paginated query with an explicit LIMIT and OFFSET or cursor, moved to a server component that streams, with an index on the filter column.
- Before: the order is created client-side before the webhook fires, and the success page marks it paid. Failed payments silently become paid orders.
  After: the order is created pending, the webhook is the source of truth for paid status, the signature is verified, and an idempotency key is attached.
- Before: tsconfig strict is false. Functions are typed `(data: any) => any`. The compiler cannot catch renames or missing fields.
  After: strict mode on, Zod schemas at IO boundaries, types derived from schemas, renames caught at compile time.
- Before: every piece of state — URL, form, server — lives in one Zustand store. Every render re-renders everything.
  After: URL state in the URL (searchParams), server state in TanStack Query, form state in React Hook Form, client UI state local to components.
- Before: fetch calls inlined in 40 files, each with its own error handling (or none). No retry, no timeout, no loading state.
  After: one typed API client, shared TanStack Query hooks, and a consistent error-boundary and toast layer.
- Before: the API returns a vague 500 with no log, no Sentry, and no error surface to the user. The engineer finds out from a user complaint.
  After: Sentry wired, structured logs, user-facing error states with retry, and a 500 page distinct from the 404.
- Before: the README says `npm install && npm run dev`. It isn't that simple: three secrets are missing, and one integration is optional but undocumented.
  After: a fully working .env.example, a scripted bootstrap, and a first-run README verified on a fresh machine before handoff.
Code audit red flags in AI-built code
If any of these are true in your repo, the rescue is probably worth more than the rewrite.
Fixed-price Code audit engagements
No hourly meter. Scope agreed up front, written fix plan, delivered on date.
- Security audit
  Scope: written report, 48-hour turnaround, fee credited against a rescue engagement.
- Emergency triage
  Scope: if you're mid-outage, audit plus fix plan the same day.
What Code audit rescues actually cost
Anonymized, representative scopes from recent Code audit rescues. Every price is the one we actually quoted.
A solo founder who built a Lovable marketplace with 240 beta users and is worried about launching publicly. The audit flags RLS off, no rate limits, and no Stripe webhook signing.
- Scope: 8-area audit, written report with severities, 30-minute walkthrough call.
- Duration: 48 hours
A seed-stage team who raised $1.5M on a Bolt.new prototype and has investor diligence in three weeks. The audit surfaces 23 findings, 7 critical; they hire us to patch the criticals.
- Scope: audit plus remediation of all critical and high findings. RLS policies written, webhooks signed, secrets rotated, env var guards added.
- Duration: 2 weeks
A growth-stage company with an AI-built admin tool that now processes 40k orders/month and is being pitched to an enterprise buyer who wants a security review.
- Scope: full audit, remediation, SOC 2 prep checklist, audit-log implementation, pen-test prep, and partner handoff.
- Duration: 4-6 weeks
Audit-only, audit + rescue, or audit + rewrite?
Most founders who book an audit fall into one of three buckets after they read the report. The first bucket — roughly half — needs targeted remediation. The audit surfaces five to fifteen findings, most are scoped fixes (RLS policies, signature verification, env vars), and the right next step is to hire a developer or us to patch the ranked list. We credit the audit fee against any rescue engagement we run for the same codebase.
The second bucket — roughly a third — needs a deeper architectural pass. The findings cluster around a structural issue: state management is wrong, the data model has fundamental shape problems, the auth provider is incompatible with the team's needs. These are fixable but the fixes overlap, and we recommend a 2 to 6 week refactor engagement instead of a long list of one-off patches. The audit becomes the scope document for that engagement.
The third bucket — the smallest, around 15% — gets a 'rewrite' recommendation. The verdict applies when the data model is unrecoverable, when the codebase is locked into a no-code platform with no real export path, or when the cost of patching exceeds the cost of starting over with the audit's findings as the spec for the new build. We prefer to be honest about this verdict early; nothing is more frustrating than spending three months patching a codebase that should have been rewritten in week one. The audit gives you the data to make the call with confidence.
Code audit runbook and reference material
The documentation, CLIs, and specs we rely on for every Code audit engagement. We cite; we don't improvise.
- OWASP Top 10
The baseline we audit against for web app security findings.
- Veracode 2025 State of Software Security — AI code
Industry AI-vulnerability benchmark. We cite the dataset per finding.
- NIST NVD — CVE database
We cross-reference every dependency finding against NVD and CVSS scores.
- Supabase — Row Level Security
RLS patterns we verify and, when missing, write for you.
- Stripe — Webhook signature verification
Every Stripe webhook audit starts here.
- Next.js — Security headers
The CSP and header config we recommend for Vercel-hosted AI-built apps.
- semgrep — AI-generated code rules
Part of the static-analysis pass on every audit.
Code audit rescues we've shipped
Related Code audit specialists
Related Code audit problems we rescue
Code audit questions founders ask
Sources cited in this dossier
Your AI builder shipped broken code. We ship the fix.
Send the repo. We'll tell you exactly what's wrong — and the fixed price to fix it — in 48 hours.
Book free diagnostic →