afterbuild/ops
§ S-03/security-audit

AI app security audit — RLS, secrets, OWASP, in 3 days.

Supabase RLS, secrets scanning, webhook signatures, and the OWASP top-10 for your Lovable, Bolt, Cursor, or Replit codebase. Delivered as a severity-rated report with patch diffs ready to merge.

price · $499
turnaround · 3 days
guarantee · patch diffs included
Quick verdict

AI app security audit is a $499 fixed-fee, 3-day engagement covering Supabase RLS on every table, secrets in repo history and frontend bundles, auth logic, OWASP top-10, CORS, rate limits, and webhook signature verification. Every Critical and High finding ships with a patch diff. Industry benchmarks put AI-code vulnerability rates near half — and ~70% of Lovable apps reach production with RLS disabled. Audit first, launch second.

§ 01/diagnosis

Symptoms AI app security audit fixes

Eight failure classes we see on nearly every Lovable, Bolt, v0, and Cursor codebase. Each row maps a visible symptom to its AI-generated root cause and the patch we ship.

diagnostic matrix · ai-generated code vulnerabilities
| Symptom | Root cause (AI-generated pattern) | Our fix |
| --- | --- | --- |
| Lovable preview ships, production leaks data | Supabase RLS disabled by default; ~70% of Lovable apps reach production with RLS still off | Table-by-table policy audit, enable RLS, add tenant-scoped policies, two-user test |
| Supabase service-role key in the browser | AI generator pasted the service-role key into a client component instead of the anon key | Rotate the key, swap to anon + RLS, scan bundle and repo history for other leaks |
| Stripe webhook accepts unsigned POSTs | Handler skipped signature verification; anyone on the internet can POST fake events | Verify the Stripe-Signature header, add idempotency on event.id, reject unknown events |
| CORS allows every origin | AI generator shipped access-control-allow-origin: * with no rate limiting | Lock origins to the production domain + preview domains, add per-route rate limits |
| Password reset email never verifies identity | Happy-path reset flow with no token expiry, no email confirmation, no rate limit | Short-TTL signed tokens, require email verification, throttle requests per account |
| Session persists after logout on another device | AI builder implemented sign-out on the client only; tokens still valid server-side | Server-side session revocation, refresh-token rotation, invalidation on password change |
| Secrets committed to git history | Gitleaks trips on API keys, tokens, or database URLs from earlier commits | Rotate every exposed secret, rewrite history or move repo, add a pre-commit secrets hook |
| Investor due diligence incoming | No written baseline of the security posture, no severity-rated findings log | Audit report formatted for diligence disclosure with CWE + OWASP mapping on every finding |
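The reset-flow fix in the table, short-TTL signed tokens, fits in a few lines. A minimal TypeScript sketch: the secret value, the 15-minute TTL, and the `userId.expiry.hmac` token layout are illustrative choices, not the exact patch we ship.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

const RESET_SECRET = 'dev-only-secret'; // hypothetical; load from the environment in production
const TTL_MS = 15 * 60 * 1000;          // 15-minute expiry

function sign(data: string): string {
  return createHmac('sha256', RESET_SECRET).update(data).digest('hex');
}

// Token = userId + expiry + HMAC over both, so neither field can be tampered with.
// Assumes userId contains no '.', e.g. a uuid.
function issueResetToken(userId: string, now = Date.now()): string {
  const body = `${userId}.${now + TTL_MS}`;
  return `${body}.${sign(body)}`;
}

// Returns the userId when the token is authentic and unexpired, else null.
function verifyResetToken(token: string, now = Date.now()): string | null {
  const [userId, exp, mac] = token.split('.');
  if (!userId || !exp || !mac) return null;
  const a = Buffer.from(mac, 'hex');
  const b = Buffer.from(sign(`${userId}.${exp}`), 'hex');
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null; // forged
  if (now > Number(exp)) return null;                               // expired
  return userId;
}
```

Pair this with per-account throttling on the request endpoint; the token alone does not stop enumeration.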
§ 02/schedule

3-day Lovable security audit schedule

Day-by-day breakdown from access grant to merged patches. Clock runs continuously, weekends included.

  1. D1

    Access grant + automated scan

    You grant read-only access to GitHub, Supabase, and the production host. Inside 2 hours we kick off Semgrep, Gitleaks, and npm audit. Known-CVE findings land in the report first.

  2. D2

    Supabase RLS audit + secrets pass

    Table-by-table RLS review with a two-user incognito test. Bundle analysis confirms no service-role keys, Stripe secrets, or third-party tokens leaked into the client payload.

  3. D3

    OWASP top-10 + auth logic review

    Manual review of the auth flows (sign-up, sign-in, password reset, OAuth callback), CORS allow-lists, rate limits, input validation, and webhook signature verification on every public endpoint.

  4. D3

    Patch drafting for Critical & High

    Every Critical and High finding ships with a minimal git diff. Medium findings get a written recommendation; Low findings get a note. No adjacent refactors, no scope creep.

  5. D3

    Vulnerability report + walkthrough

    PDF report with executive summary, severity ratings, reproduction steps, and a 30-minute Loom. You merge the patches yourself or we open the PR on request.

§ 03/rls-vignette

Supabase RLS audit — before and after

The exact policy shape we find on Lovable-built apps versus the tenant-scoped policy we ship. This single swap is the difference between a widely-reported public disclosure and a clean diligence review.

✕ before · ai-shipped
invoices · ai-shipped
```sql
-- supabase/migrations/0001_invoices.sql
-- AI-shipped policy: permissive, tenant leak
create table invoices (
  id uuid primary key default gen_random_uuid(),
  org_id uuid not null,
  amount_cents int not null,
  created_at timestamptz default now()
);

-- RLS off by default in Lovable preview
-- When finally enabled, the AI generator wrote:
alter table invoices enable row level security;

create policy "invoices are readable"
  on invoices for select
  using ( true );   -- any authenticated user sees every org's invoices
```
RLS disabled, then re-enabled with `using (true)` — every org sees every other org's rows.
✓ after · afterbuild
invoices · afterbuild
```sql
-- supabase/migrations/0002_invoices_rls.sql
-- Afterbuild Labs tenant-scoped policies
alter table invoices enable row level security;

-- read: only rows in the caller's org
create policy "invoices_select_own_org"
  on invoices for select
  using ( org_id = (auth.jwt() ->> 'org_id')::uuid );

-- write: only the owner's org, with a check on insert
create policy "invoices_insert_own_org"
  on invoices for insert
  with check ( org_id = (auth.jwt() ->> 'org_id')::uuid );

-- two-user incognito test passes: user A cannot see user B's rows
```
Tenant-scoped on `auth.jwt() ->> 'org_id'`; the two-user incognito test passes.
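The policy above only works if the token actually carries the claim. A decode-only sketch of what `auth.jwt() ->> 'org_id'` reads: no signature check here, since Supabase verifies the token before any policy runs, and the `org_id` claim itself is an assumption, something you stamp onto the JWT (e.g. via a custom access token hook). The token below is fabricated for illustration.

```typescript
// Decode the payload segment of a JWT and return its claims object.
function jwtClaims(token: string): Record<string, unknown> {
  return JSON.parse(Buffer.from(token.split('.')[1], 'base64url').toString('utf8'));
}

// Hypothetical, unsigned token for illustration only.
const demoToken = [
  Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' })).toString('base64url'),
  Buffer.from(JSON.stringify({
    sub: 'user-a',
    org_id: '11111111-1111-1111-1111-111111111111',
  })).toString('base64url'),
  'unsigned',
].join('.');

// The value the policy compares against invoices.org_id.
const orgId = jwtClaims(demoToken).org_id;
```

If the claim is missing, `->> 'org_id'` yields null, the cast fails or matches nothing, and the policy denies by default, which is the safe direction.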
§ 04/ledger

What the security audit engagement ships

Eight fixed-scope deliverables, written into the statement of work before day one starts.

  • 01 · Supabase RLS audit — every table, every policy, two-user incognito verification on reads and mutations
  • 02 · Secrets audit — repo history, env files, frontend bundle, client-exposed service-role keys
  • 03 · Auth logic review — session handling, password reset, email verification, role-based access control
  • 04 · OWASP top-10 pass — injection, XSS, CSRF, broken access control, SSRF, insecure deserialization
  • 05 · CORS, rate limiting, and input validation on every API route and edge function
  • 06 · Webhook signature verification and idempotency on every Stripe, Slack, and GitHub endpoint
  • 07 · Written vulnerability report with Critical / High / Medium / Low severity ratings and CWE mapping
  • 08 · Patch diffs ready to merge for every Critical and High finding plus a 30-minute Loom walkthrough
§ 05/price

Fixed-fee AI-generated code vulnerability audit

most common
price · $499
turnaround · 3 days
scope · Single AI-built app under 10k lines · patch diffs on every Critical and High
guarantee · PDF report + 30-min Loom walkthrough
Book audit · $499
§ 06/comparison

vs hourly pen-test · vs a Supabase RLS audit from Upwork

Why the $499 fixed fee beats the alternatives on the specific failure shapes AI builders ship.

| Dimension | Hourly pen-test | Afterbuild Labs audit |
| --- | --- | --- |
| Price | $3,500 – $15,000, scope-dependent | $499 fixed, 3 days, inclusive |
| Scope | Broad network + app pen-test; generic OWASP | AI-generated code failure modes (RLS, secrets, webhooks, OAuth) |
| Deliverable | PDF of findings, no patches | PDF report + patch diffs on Critical/High + Loom walkthrough |
| Turnaround | 2 – 4 weeks after scoping | 3 days from access grant |
| Lovable RLS coverage | Not always included; often misses service-role key leaks | Table-by-table audit + two-user incognito test every time |
§ 07/fit

Who this security audit is for

Pick the security audit if…

  • You are about to launch a Lovable, Bolt, v0, or Cursor app on a custom domain
  • You have users already and just found your first exposed API key or RLS gap
  • Investor diligence or acquirer technical review is scheduled in the next 30 days
  • You shipped Stripe, Slack, or GitHub webhooks without signature verification
  • You want a written severity-rated baseline before bringing on a full-time engineer

Don't pick it if…

  • The app is in a regulated industry requiring an accredited assessor (HIPAA, PCI, SOC 2) — the audit is the technical baseline, not a substitute
  • You need broad network, physical, or social-engineering pen-testing — book a traditional pen-test firm
  • Your codebase is 100k+ lines across multiple services — we scope that as a custom engagement on the diagnostic call
  • You are mid-breach with an active exploit — email us for breach-response triage first, audit after
§ 08/audit-anatomy

What we look at — the AI-generated code vulnerability pass

The 3-day audit is structured around the seven failure classes that account for almost every AI-built app incident we have triaged. The order matters because a finding higher in the list usually invalidates findings lower in the list — there is no point reviewing rate limits on an endpoint that already returns every other tenant’s data.

Row-level security on every Supabase table. We log into a fresh incognito window as a synthetic user, capture a session, and walk through every table the application reads or writes. For each table we ask three questions: can an unauthenticated visitor read this, can a different authenticated user read this, and can the user mutate it through the anon key. Anything that should be protected and is not goes into the report as Critical. Lovable apps almost always have at least one Critical finding here; the public incident pattern documented by The Register in February 2026 is exactly this failure expressed at scale.
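The second question, can a different authenticated user read this, reduces to a check over any row-fetching client. In the live audit the client is supabase-js authenticated as user B (and async); the synchronous interface below keeps the sketch self-contained, and every name in it is illustrative.

```typescript
// Any client that can list rows from a table, abstracted for testability.
interface RowClient {
  selectAll(table: string): Array<{ org_id: string }>; // async with supabase-js; sync here for brevity
}

// Run as user B: any row outside B's own org means the policy is
// permissive, which we file as a Critical finding.
function leaksForeignRows(clientB: RowClient, table: string, orgB: string): boolean {
  return clientB.selectAll(table).some((row) => row.org_id !== orgB);
}
```

The same shape, with an unauthenticated client, answers the first question; a mutation attempt through the anon key answers the third.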

Secrets in repository history, environment files, and frontend bundles. Gitleaks scans the entire commit history for keys, tokens, and credentials. Webpack/Vite bundle analysis confirms that no service-role keys, Stripe secret keys, or third-party API tokens have leaked into the client bundle. The frontend-bundle check catches the most common Lovable mistake, which is pasting a Supabase service-role key into a client component instead of using the anon key with RLS.
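A miniature version of the bundle pass, nothing like Gitleaks' full rule set, just three high-signal shapes. The JWT case is the interesting one: a service-role key only reveals its role inside the base64 payload, so a plain text grep misses it and the scanner has to decode.

```typescript
// Scan a built client bundle (as a string) for obvious leaked credentials.
// Patterns are illustrative, not exhaustive.
function findLeakedSecrets(bundle: string): string[] {
  const hits: string[] = [];
  if (/sk_(live|test)_[0-9a-zA-Z]{10,}/.test(bundle)) hits.push('stripe secret key');
  if (/postgres(ql)?:\/\/[^\s"']+:[^\s"']+@/.test(bundle)) hits.push('database url with credentials');
  // Decode every JWT-shaped token and inspect its role claim.
  for (const match of bundle.match(/eyJ[\w-]+\.[\w-]+\.[\w-]+/g) ?? []) {
    try {
      const claims = JSON.parse(Buffer.from(match.split('.')[1], 'base64url').toString('utf8'));
      if (claims.role === 'service_role') hits.push('supabase service-role key');
    } catch {
      // not a decodable JWT, ignore
    }
  }
  return hits;
}
```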

Authentication logic and session handling. We exercise sign-up, sign-in, sign-out, password reset, email verification, and social login. We test session refresh on a stale token. We test the post-OAuth callback against a redirect URL that is not in the allow list. We test what happens when a user changes their email. Each of these is a path that AI generators ship with a happy-path implementation and a missing edge case.
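The sign-out edge case above has a standard shape: a server-side session store, so logout actually revokes rather than just deleting a cookie, plus refresh-token rotation where reuse of an old token is treated as theft. A sketch with illustrative names; in production the map lives in Redis or Postgres, not memory.

```typescript
class SessionStore {
  private live = new Map<string, { userId: string; refresh: string }>();

  issue(sessionId: string, userId: string, refresh: string): void {
    this.live.set(sessionId, { userId, refresh });
  }

  // Sign-out calls this for the current session.
  revoke(sessionId: string): void {
    this.live.delete(sessionId);
  }

  // Password change calls this: every device's session dies at once.
  revokeAllForUser(userId: string): void {
    for (const [id, s] of this.live) if (s.userId === userId) this.live.delete(id);
  }

  // Refresh-token rotation: a presented token is valid exactly once.
  // Presenting a stale one is a reuse signal, so the whole session is killed.
  rotate(sessionId: string, presented: string, next: string): boolean {
    const s = this.live.get(sessionId);
    if (!s || s.refresh !== presented) {
      this.revoke(sessionId);
      return false;
    }
    s.refresh = next;
    return true;
  }

  isValid(sessionId: string): boolean {
    return this.live.has(sessionId);
  }
}
```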

OWASP top-ten for web applications. Injection, cross-site scripting, cross-site request forgery, broken access control, server-side request forgery, insecure deserialization, security misconfiguration, vulnerable components, identification failures, and insufficient logging. Some of this is automated (Semgrep rule packs, npm audit on the lockfile, Snyk on the container if there is one); the rest is manual. Industry benchmarks — see our 2026 vibe-coding research — show roughly half of AI-generated code shipping with known vulnerabilities; that’s the floor we expect to clear, not the ceiling.

CORS, rate limiting, and input validation on every public API route. We enumerate every endpoint that does not require authentication and confirm each has a rate limit, an origin allow-list, and a schema that validates the payload. AI generators routinely ship endpoints with permissive CORS (access-control-allow-origin: *) and no rate limiting at all, which becomes an abuse vector the day a bot finds it.
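Both checks are small enough to sketch. The origin list and window sizes below are placeholders; the point is the shape: echo an allow-listed origin back instead of `*`, and gate each caller with a per-key counter before the handler runs.

```typescript
// Origin allow-list: echo the origin only if it is known; never `*` on authed routes.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',      // hypothetical production domain
  'https://preview.example.com',  // hypothetical preview domain
]);

function corsOriginFor(requestOrigin: string): string | null {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}

// Fixed-window rate limiter keyed by caller (IP or user id).
class RateLimiter {
  private counts = new Map<string, { windowStart: number; n: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const slot = this.counts.get(key);
    if (!slot || now - slot.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, n: 1 }); // fresh window
      return true;
    }
    slot.n += 1;
    return slot.n <= this.limit;
  }
}
```

A fixed window is the simplest variant; token-bucket or sliding-window limiters smooth out the burst at window boundaries if that matters for the route.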

Webhook signature verification and idempotency. Every Stripe, Slack, GitHub, or other webhook endpoint must verify the provider’s signature header and must be idempotent against a replay of the same event ID. AI generators routinely ship the happy-path handler and forget the signature check, which means anyone on the internet can POST a fake event and trigger a billing state change. We verify both properties on every webhook the application accepts.
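Both properties in miniature, using Stripe's scheme as the example: the header carries `t=<timestamp>,v1=<hmac>`, where the HMAC is SHA-256 over `<timestamp>.<rawBody>` keyed by the webhook secret. This is a simplified sketch; in production use the provider SDK (for Stripe, `stripe.webhooks.constructEvent`), which also enforces a timestamp tolerance against replay.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Verify a Stripe-style signature header against the raw request body.
function verifySignature(rawBody: string, header: string, secret: string): boolean {
  const parts = Object.fromEntries(
    header.split(',').map((p) => p.split('=') as [string, string]),
  );
  const ts = parts['t'];
  const given = parts['v1'];
  if (!ts || !given) return false;
  const expected = createHmac('sha256', secret).update(`${ts}.${rawBody}`).digest('hex');
  const a = Buffer.from(given, 'hex');
  const b = Buffer.from(expected, 'hex');
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}

// Idempotency on event.id: a replayed event must not re-run side effects.
// In-memory for the sketch; persist this set in production.
const seen = new Set<string>();
function firstDelivery(eventId: string): boolean {
  if (seen.has(eventId)) return false;
  seen.add(eventId);
  return true;
}
```

Order matters: verify the signature first, then check idempotency, then mutate state.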

Deploy and infrastructure configuration. Environment variables in the production host versus the repo, security headers (Content-Security-Policy, Strict-Transport-Security, X-Frame-Options), TLS configuration on the custom domain, and the database backup posture. We confirm that a restore from backup has been tested at least once.
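The header baseline we check against can be written down as data. Values below are a sensible starting point, not a tuned policy for any particular app; the CSP in particular almost always needs widening per deployment.

```typescript
// Baseline response headers expected on the production host.
const SECURITY_HEADERS: Record<string, string> = {
  'Strict-Transport-Security': 'max-age=63072000; includeSubDomains',
  'Content-Security-Policy': "default-src 'self'",
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'Referrer-Policy': 'strict-origin-when-cross-origin',
};

// Given the headers a live response actually returned, report what is absent.
function missingHeaders(actual: Record<string, string>): string[] {
  return Object.keys(SECURITY_HEADERS).filter((h) => !(h in actual));
}
```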

What you receive at the end

A PDF report with an executive summary, a finding-by-finding breakdown with severity ratings using the standard Critical / High / Medium / Low taxonomy, reproduction steps for each Critical and High, and a patch diff ready to merge for each. Every finding maps to a specific OWASP, CWE, or vendor guideline so the report is suitable for sharing with investors, acquirers, or compliance reviewers without translation. We include a 30-minute Loom walkthrough so the founder hears the reasoning behind the severities, not just the labels.

The patch diffs are intentionally minimal. We do not refactor adjacent code while we are in the file, and we do not change anything outside the scope of the finding. That keeps the audit useful even if you want to merge the patches over several weeks rather than all at once. Each diff is independently reviewable and independently revertible, which matters when the audit lands on a Friday and the team wants to ship Critical fixes before Monday and defer Mediums.

When this audit alone is not enough

If we find more than six Critical and High findings combined, we will recommend upgrading to Break-the-Fix-Loop ($3,999) which bundles the audit plus full remediation including Mediums and Lows. If the application is in a regulated industry (HIPAA, PCI, SOC 2) the $499 audit gives you the technical baseline but it is not a substitute for an accredited assessor — we say so explicitly in the report and point you at the right next step. The audit is calibrated for AI-built SaaS apps under 10,000 lines pre-launch or in early traction.

FAQ
Does the AI app security audit actually ship patches, or just list vulnerabilities?
Both. Every Critical and High finding ships with a patch diff ready to merge. Medium findings come with a written recommendation; Low findings get a note. If you want Mediums fixed too, upgrade to Break-the-Fix-Loop ($3,999) which bundles the audit plus full remediation inside the same engagement.
Why does a Lovable security audit keep finding the same RLS failure?
Lovable's preview ships with Supabase RLS disabled by default. Research suggests ~70% of Lovable-built apps reach production with RLS still off — the widely-reported Feb 2026 Lovable/Supabase disclosure captured the failure at scale. The table-by-table RLS review is the single most valuable part of the audit for any Lovable launch.
Does the audit cover Bolt, Cursor, v0, Replit, and Claude Code apps too?
Yes — the audit works for any AI-built or AI-assisted codebase. OWASP top-10, secrets scanning, webhook verification, and auth logic review are tool-agnostic. We adjust the RLS/auth portion to your stack (Supabase, Clerk, Auth.js, custom) on the diagnostic call before day one starts.
What happens if the 3-day security audit deadline slips?
The 3-day clock starts when you grant access. For apps under 5,000 lines on standard stacks we hit the deadline 95% of the time. For larger codebases we scope upward on the free diagnostic call first — no surprise bills. If we miss on an in-scope audit we extend the turnaround at no additional cost.
Do you run the security audit on weekends or for active breaches?
The 3-day clock runs continuously — book Friday, delivered Monday. For active breach response (live exploit, users reporting exposed data) email us — we usually start same-day and skip the queue. Breach-response triage billing runs at Emergency Triage rates ($299) on top of the audit.
Is $499 enough for a real AI-generated code vulnerability audit?
Yes, for a single AI-built app under ~10k lines. We are not selling a 200-page enterprise pen-test — we are selling the specific OWASP + RLS + secrets pass that catches the failure modes AI builders ship with. Industry benchmarks put AI-code vulnerability rates close to half (see our 2026 research); this audit is the floor, not the ceiling. Larger or regulated apps get a custom quote.

Security Audit is a fixed-fee 3-day pre-launch scope. For the full list of engagement tiers — Emergency Triage, Integration Fix, Rescue, Retainer — see pricing.

Next step

Audit before launch.

3 days. $499. Every RLS policy, every secret, every auth path — reviewed, patched, reported.

Book free diagnostic →