afterbuild/ops
§ FIX/DB/database fixes

Database fixes for AI-built apps

The database layer is where AI-built apps leak data in production. Lovable, Bolt, v0, Cursor, and Claude Code scaffold Supabase and Postgres schemas with Row-Level Security disabled for speed, then either leave it off (a multi-tenant data leak) or turn it on at the last minute without writing the USING and WITH CHECK policies that make reads and inserts work (every write silently fails). Per Supabase's 2025 best-practices guidance, any table in the exposed schema without RLS enabled is readable by every anon-key holder. This hub groups every database-category fix on the site — RLS, silent inserts, N+1, connection-pool exhaustion, storage uploads — into one navigable index.

By Hyder Shah, Founder · Afterbuild Labs · Last updated 2026-04-18

5 — indexed database fixes
1.4s → 60ms — typical N+1 fix TTFB
30 — users that tip over a default pool (Prisma on Vercel)
100% — root-cause fix + regression test
§ 01/scope

What this hub covers

This hub covers failures at the data layer: Supabase Row-Level Security, silent insert failures, N+1 query patterns, connection-pool exhaustion under load, and storage-bucket 403s. Primary surfaces are Supabase (Postgres plus storage plus auth), Prisma on Postgres/Neon, and raw Postgres connections. The failure modes are largely provider-agnostic — RLS and connection-pooling patterns transfer between Supabase, Neon, and self-hosted Postgres.

What this hub does not cover: authentication (covered in the auth hub), Stripe customer linkage (payments hub), or general 500 errors where the database happens to be the final failure mode (deploy hub). If the root cause is a missing RLS policy, a slow query, a connection-pool cap, or a storage permission, it belongs here. If the root cause is that the user is not signed in or a webhook is unverified, it belongs in another hub.

§ 02/common failures

The most common failures

Five database-category failure modes appear in every AI-built app rescue intake. Each is a predictable consequence of the generator optimizing for the single-user demo path and never stressing the multi-user production path.

§ 03/indexed fixes

Indexed database fixes

Each link is a root-cause walkthrough: exact error string, the commit shape that produced it, the fix, and the regression test.

§ 04/shared root causes

Shared root causes

Database symptoms cluster into four root causes. Rule each out before debugging individual queries.

§ 05/prevention checklist

Prevention checklist

Merge these before the next database-touching deploy. Each one closes a class of silent failure.

  1. Enable RLS on every public table, then write a USING policy for SELECT/UPDATE/DELETE and a WITH CHECK policy for INSERT/UPDATE, each matching auth.uid() to the ownership column.
  2. Write an integration test that inserts a row as User A and asserts User B cannot read it. Run it on every migration.
  3. Route all Prisma traffic through the pooled DATABASE_URL (port 6543 on Supabase, pgbouncer equivalent on Neon). Set connection_limit=1 in serverless.
  4. Keep the service-role key in server-only env vars. Never prefix with NEXT_PUBLIC_. Never log it, never send it to a logging service unsanitized.
  5. Add an index on every foreign-key column and every column used in a WHERE filter the app actually ships.
  6. Replace list/detail useEffect patterns with Prisma include, Supabase nested select (select("*, detail(*)")), or a DataLoader-style batcher.
  7. Add a Supabase storage bucket policy for every bucket before the first upload hits production.
  8. Run the app under a load test of at least 50 concurrent requests against a fresh deploy. Connection-pool exhaustion only appears under load.
  9. Audit the SQL log (Supabase Dashboard → Database → Logs) after every deploy for unexpected error codes — especially 42501, PGRST116, and FATAL entries.
  10. Back up the database on a schedule independent of the platform's default snapshots, and test the restore path before the first incident.
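Item 3 can be sketched as a small helper. This is a minimal sketch under stated assumptions: it assumes the Supabase convention of direct connections on port 5432 and the transaction pooler on 6543, plus Prisma's `connection_limit` and `pgbouncer` connection-string parameters; the `toPooledUrl` name is hypothetical, not any library's API.

```typescript
// Rewrites a direct Postgres URL into the pooled form Prisma should use in
// serverless. Hypothetical helper — the port and query parameters mirror
// Supabase's pooled endpoint (6543) and Prisma's pooling flags.
function toPooledUrl(directUrl: string): string {
  const url = new URL(directUrl);
  url.port = "6543";                             // Supabase transaction pooler
  url.searchParams.set("pgbouncer", "true");     // Prisma: skip prepared statements
  url.searchParams.set("connection_limit", "1"); // one connection per serverless instance
  return url.toString();
}

// Usage: the value you would put in the pooled DATABASE_URL env var.
const direct = "postgresql://user:pass@db.abc123.supabase.co:5432/postgres";
console.log(toPooledUrl(direct));
```

Keeping the rewrite in one place makes it harder for a new Prisma datasource to silently point back at the direct port.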
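The batching idea behind item 6 can be illustrated without a database. This is a sketch only, with assumed names (`TinyLoader` is not the `dataloader` npm package's API): per-row `load()` calls issued in the same microtask are coalesced into one batched call, turning N queries into 1.

```typescript
// Minimal DataLoader-style batcher: load() calls made in the same microtask
// are queued, then resolved together by a single batchFn call. The real
// `dataloader` package adds caching, error handling, and scheduling control.
class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        queueMicrotask(() => this.flush()); // flush once per tick
      }
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}

// Usage: three load() calls, one batched "query".
let batchCalls = 0;
const loader = new TinyLoader<number, string>(async (ids) => {
  batchCalls++; // in a real app: one `WHERE id IN (...)` query
  return ids.map((id) => `row-${id}`);
});

Promise.all([loader.load(1), loader.load(2), loader.load(3)]).then((rows) => {
  console.log(rows, "batch calls:", batchCalls); // one batch call for three loads
});
```

The same shape is what Prisma `include` and Supabase nested selects give you for free; reach for an explicit loader only when the query can't be expressed as a join.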
§ 06/escalation signals

When to bring in a developer

Most database fixes resolve in 30–90 minutes once the cause is identified. Bring in a developer immediately if: users report seeing each other's data, an audit log shows cross-tenant reads, the service-role key has ever shipped in a client bundle, a migration has been reverted more than once, or the app is repeatedly OOM-killed on Vercel function invocations against the database.

Escalate without delay for any incident that could constitute a data leak or PII exposure. Book the Security Audit for a full RLS and data-isolation review, or the Emergency Triage for a single blocking database incident.

§ 07/related clusters

Related clusters

For the stack-wide walkthrough of Supabase, read the Supabase fix stack hub. For Prisma and Neon specifically, read the Prisma/Neon fix stack hub. If the app is on Firebase, the Firebase rescue hub covers equivalent security-rule failures. For platform-specific database failures, see Lovable RLS disabled, Bolt RLS disabled, v0 add backend, and Replit slow database. When the database symptom chains into another category, continue at the payment fix hub, the auth fix hub, or the deploy fix hub.

Next step

Data leaking or queries timing out?

Book the 48-hour emergency triage for one database-blocking fix, fixed price, refund if we miss. Or the free diagnostic for a written rescue-vs-rewrite recommendation.