Database fixes for AI-built apps
The database layer is where AI-built apps leak data in production. Lovable, Bolt, v0, Cursor, and Claude Code scaffold Supabase and Postgres schemas with Row-Level Security disabled for speed, then either leave it off (a multi-tenant data leak) or turn it on at the last minute without writing the USING and WITH CHECK policies that make reads and inserts work (every write silently fails). Per Supabase's 2025 best-practices guidance, any table in the public schema without RLS enabled is readable by every holder of the anon key. This hub groups every database-category fix on the site into one navigable index: RLS, silent inserts, N+1, connection-pool exhaustion, and storage uploads.
By Hyder Shah, Founder · Afterbuild Labs. Last updated 2026-04-18.
- 5 · indexed database fixes
- 1.4s → 60ms · typical N+1 fix TTFB
- 30 · users that tip over a default pool
- 100% · root-cause fix
What this hub covers
This hub covers failures at the data layer: Supabase Row-Level Security, silent insert failures, N+1 query patterns, connection-pool exhaustion under load, and storage-bucket 403s. Primary surfaces are Supabase (Postgres plus storage plus auth), Prisma on Postgres/Neon, and raw Postgres connections. The failure modes are largely provider-agnostic — RLS and connection-pooling patterns transfer between Supabase, Neon, and self-hosted Postgres.
What this hub does not cover: authentication (covered in the auth hub), Stripe customer linkage (payments hub), or general 500 errors where the database happens to be the final failure mode (deploy hub). If the root cause is a missing RLS policy, a slow query, a connection-pool cap, or a storage permission, it belongs here. If the root cause is that the user is not signed in or a webhook is unverified, it belongs in another hub.
The most common failures
Five database-category failure modes appear in every AI-built app rescue intake. Each is a predictable consequence of the generator optimizing for the single-user demo path and never stressing the multi-user production path.
- RLS enabled without matching policies. The most common symptom is 42501 permission denied on select, or a silent PGRST116 on insert. The table has RLS turned on but no WITH CHECK policy covering INSERT, so every write fails. See Supabase RLS blocking insert.
- Insert returns 200 but the row never appears. RLS with a SELECT policy but no INSERT policy, a .select() filter that excludes the new row, or the anon key writing to a table only the service role can see. The client reads success; the database never wrote. See Database saves with no error and no row.
- N+1 queries on every list page. AI tools generate list/detail patterns as separate useEffect hooks: 20 rows, 21 queries, 1.4 seconds of synchronous latency. Fix with Prisma include, a Supabase nested select, or a DataLoader. See N+1 query slow page load.
- Connection pool exhausted under load. Prisma on serverless instantiates a new client per concurrent request. The Supabase free tier caps at 60 connections; 30 concurrent users with default settings exhaust the cap and throw P1001 or FATAL: too many clients. Switch to the pooled DATABASE_URL (pgbouncer, port 6543) with connection_limit=1. See App crashes under load.
- Supabase storage 403 on upload. Storage buckets are RLS-governed separately from tables. AI-scaffolded uploads miss the matching storage.objects policy on the bucket, or push to a path prefix that does not match auth.uid(). See Supabase storage upload 403.
- Service-role key in a client bundle. Not its own leaf because it is a pattern, not a symptom. A NEXT_PUBLIC_SUPABASE_SERVICE_ROLE in .env exposes the key to every visitor's devtools and grants full database access. Rotate the key immediately and move server-only secrets out of any NEXT_PUBLIC_ variable.
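The first two failures above share one fix shape: a policy per verb. A minimal sketch, assuming a hypothetical public.projects table whose user_id column holds the owning user:

```sql
-- Hypothetical table; substitute your own table and ownership column.
alter table public.projects enable row level security;

-- Reads: without this USING policy, selects fail with 42501.
create policy "owners read own rows"
  on public.projects for select
  using (auth.uid() = user_id);

-- Writes: without this WITH CHECK policy, every insert is rejected,
-- which the client may surface as an error or as a silent no-op.
create policy "owners insert own rows"
  on public.projects for insert
  with check (auth.uid() = user_id);
```

UPDATE needs both clauses (USING to find the row, WITH CHECK to validate the new values); DELETE needs only USING.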
Indexed database fixes
Each link is a root-cause walkthrough: exact error string, the commit shape that produced it, the fix, and the regression test.
- § FX-17 · Supabase RLS blocking insert. 42501 permission denied or silent PGRST116. Write a WITH CHECK policy matching auth.uid() to the ownership column.
- § FX-18 · Database saves with no error and no row. Supabase insert returns 200, row never appears. RLS with no INSERT policy, a chained .select() filter, or the wrong client key.
- § FX-19 · N+1 query slow page load. 101 queries instead of 2. Fix with Prisma include, Supabase nested select, or DataLoader. TTFB 1.4s to 60ms.
- § FX-20 · App crashes under load. 30–50 concurrent users tip it over. Prisma P1001 or FATAL: too many clients. Switch to the pooled DATABASE_URL and bound connection_limit.
- § FX-23 · Supabase storage upload 403. Upload returns 403 Unauthorized. Missing storage bucket policy, wrong client key, or a path prefix that does not match auth.uid().
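For the FX-23 class, the missing piece is usually an insert policy on storage.objects. A hedged sketch, assuming a hypothetical avatars bucket where each user uploads under a folder named after their uid:

```sql
-- Hypothetical bucket name and path convention.
create policy "users upload to their own folder"
  on storage.objects for insert
  with check (
    bucket_id = 'avatars'
    and (storage.foldername(name))[1] = auth.uid()::text
  );
```

A matching SELECT policy with the same predicate is needed if users also read their uploads through the client.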
Shared root causes
Database symptoms cluster into four root causes. Rule each out before chasing individual fixes.
- Security model written in a hurry, or not at all. RLS disabled, left at the default permissive setting, or enabled without the corresponding INSERT/UPDATE policies. Storage buckets without object-level policies. Service-role key used where the anon key would be correct, or worse, the other way around.
- Query patterns that assume one user. N+1 list/detail loops, no pagination, select * returning joined multi-megabyte rows, and missing indexes on the filter columns the app actually uses.
- Connection topology wrong for serverless. Direct Postgres connections from Vercel functions, no pooler, and the default Prisma connection_limit of 10. The architecture works for one test user and breaks at the first ten concurrent requests.
- Client/server key confusion. The anon key used where the service-role key is required (silent permission failures), or the service-role key used in client code (a catastrophic leak). Both stem from AI tools not clearly distinguishing the two surfaces in the scaffold.
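The connection-topology fix is configuration, not code. A sketch of the env-var shape Supabase's transaction pooler expects; the project ref, region, and password below are placeholders:

```ini
# Serverless runtime traffic goes through the transaction pooler (port 6543),
# with a hard per-function connection budget.
DATABASE_URL="postgresql://postgres.PROJECT_REF:PASSWORD@aws-0-us-east-1.pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1"

# Migrations need a session-mode connection on port 5432.
DIRECT_URL="postgresql://postgres.PROJECT_REF:PASSWORD@aws-0-us-east-1.pooler.supabase.com:5432/postgres"
```

In Prisma, point the datasource url at DATABASE_URL and directUrl at DIRECT_URL so migrate and introspection bypass pgbouncer.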
Prevention checklist
Merge these before the next database-touching deploy. Each one closes a class of silent failure.
- Enable RLS on every public table. Write one USING policy for SELECT and DELETE, and one WITH CHECK policy for INSERT and UPDATE, matching auth.uid() to the ownership column.
- Write an integration test that inserts a row as User A and asserts User B cannot read it. Run it on every migration.
- Route all Prisma traffic through the pooled DATABASE_URL (port 6543 on Supabase, the pgbouncer equivalent on Neon). Set connection_limit=1 in serverless.
- Keep the service-role key in server-only env vars. Never prefix it with NEXT_PUBLIC_, never log it, and never send it to a logging service unsanitized.
- Add an index on every foreign-key column and every column used in a where filter the app actually ships.
- Replace list/detail useEffect patterns with Prisma include, a Supabase nested select (select("*, detail(*)")), or a DataLoader.
- Add a Supabase storage bucket policy for every bucket before the first upload hits production.
- Run a load test of at least 50 concurrent requests against a fresh deploy. Connection-pool exhaustion only appears under load.
- Audit the SQL log (Supabase Dashboard → Database → Logs) after every deploy for unexpected error codes, especially 42501, PGRST116, and FATAL entries.
- Back up the database on a schedule independent of the platform's default snapshots, and test the restore path before the first incident.
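The User A / User B isolation check can be approximated directly in the Supabase SQL editor by impersonating a JWT before querying. A sketch, assuming the same hypothetical public.projects table with a user_id ownership column; the UUID is a placeholder:

```sql
begin;
-- Drop to the unprivileged role that RLS actually applies to.
set local role authenticated;
-- Impersonate User B: auth.uid() reads the "sub" claim from this GUC.
set local request.jwt.claims to '{"sub": "00000000-0000-0000-0000-0000000000b2"}';
-- With correct owner-only policies, other users' rows are invisible,
-- so this count should be 0.
select count(*) from public.projects where user_id <> auth.uid();
rollback;
```

Wrapping the impersonation in a transaction and rolling back keeps the session's normal privileges intact afterward.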
When to bring in a developer
Most database fixes resolve in 30–90 minutes once the cause is identified. Bring in a developer immediately if: users report seeing each other's data, an audit log shows cross-tenant reads, the service-role key has ever shipped in a client bundle, a migration has been reverted more than once, or the app is repeatedly OOM-killed on Vercel function invocations against the database.
Escalate without delay for any incident that could constitute a data leak or PII exposure. Book the Security Audit for a full RLS and data-isolation review, or the Emergency Triage for a single blocking database incident.
Related clusters
For the stack-wide walkthrough of Supabase, read the Supabase fix stack hub. For Prisma and Neon specifically, read the Prisma/Neon fix stack hub. If the app is on Firebase, the Firebase rescue hub covers equivalent security-rule failures. For platform-specific database failures, see Lovable RLS disabled, Bolt RLS disabled, v0 add backend, and Replit slow database. When the database symptom chains into another category, continue at the payment fix hub, the auth fix hub, or the deploy fix hub.
Data leaking or queries timing out?
Book the 48-hour emergency triage for one database-blocking fix, fixed price, refund if we miss. Or the free diagnostic for a written rescue-vs-rewrite recommendation.