
Supabase Developer & Fix Expert

AI coding tools generate Supabase integrations that look right but fail in production: RLS disabled, anon key shipped to the browser, auth callbacks broken, and realtime subscriptions leaking data. Afterbuild Labs audits and fixes your Supabase stack in two to five days from $299.

By Hyder Shah, Founder · Afterbuild Labs · Last updated 2026-04-17

Why Supabase breaks in AI-built apps

Supabase is the default Postgres backend for apps scaffolded by Lovable, Bolt, v0, Cursor, and Claude Code. The tools generate a schema, wire the client, and produce something that runs in a preview. None of that work is wrong per se, but almost all of it is incomplete. A Supabase developer for fixes spends most of their time on the five surfaces the code generator skips: row level security, key management, auth configuration, realtime permissions, and migration discipline.

The pattern is consistent across the roughly 40 Supabase rescues we have run in the past year. The scaffolded code speaks SQL, but it does not speak production. Tables are created with RLS off. Service role keys make their way into React components. Auth callbacks point at localhost. Realtime channels subscribe without cleanup and leak rows across users. Edge functions get called on every render. The app looks finished and the founder ships it to paying customers. Then an ordinary bug tips over into a data exposure, and the rescue call lands in our inbox.

AI coding tools are optimizing for a demo that works, not a backend that survives. A Supabase developer for fixes tightens the parts the generator never saw — policies that encode who can see what, keys that stay on the right side of the network, and a migrations directory that lets you replay the schema into a new environment without guesswork.

The 8 RLS mistakes we see in AI-generated Supabase code

Row level security is the Supabase feature that most commonly goes wrong. Here are the eight failure modes a Supabase RLS expert looks for first, ranked by how often they appear in our audits.

  1. RLS disabled on tables that hold user data

    Lovable and Bolt create tables and forget to enable row level security. With RLS off, the anon key can read every row from every user. This is the single most common finding in our Supabase audits and the proximate cause of the widely reported 2026 Lovable/Supabase RLS disclosure.

  2. RLS enabled with no INSERT policy

    The generator enables RLS and writes a select policy, then the app starts throwing Postgres error 42501 (insufficient privilege) on every create action. With RLS on and no insert policy, every insert is denied by default. The fix is a three-line insert policy keyed on auth.uid().

  3. Policies reference the wrong user column

    AI tools pick column names inconsistently: user_id, author_id, owner, created_by. A policy keyed on user_id against a table with author_id silently filters every row. The symptom is an empty array with no error, which is the hardest failure mode to debug.

  4. Service role key in the client bundle

    The service role key bypasses RLS entirely and must never reach the browser. We find it in roughly one in three Bolt and Lovable audits, usually inside a component that calls a privileged query. The remediation is rotate, relocate to a server route, and audit git history.

  5. Policies that evaluate subqueries on every row

    A policy like using (exists (select 1 from memberships where ...)) is correct but slow. Without a matching index, Postgres runs the subquery per row and the endpoint times out at 1000 rows. Add the compound index during the fix.

  6. Public tables masquerading as private

    A table named public_profiles, left with RLS off for convenience, ends up holding email addresses and tokens because a later prompt added the columns. The fix is a strict split between genuinely public and private data: separate tables or views, with sensitive columns kept out of anything the anon role can read.

  7. Storage buckets with no policies

    Supabase Storage has its own RLS layer that AI tools rarely configure. Buckets default to public read in prototypes and leak private file URLs. Write bucket policies per role and confirm signed URLs expire.

  8. Anon key with INSERT grants on system tables

    Legacy Supabase projects granted the anon role broader SQL privileges than needed. Audit the grants with information_schema.role_table_grants and revoke any privilege that is not explicitly required for the app.

Broken — RLS off
-- Lovable-generated schema, RLS never enabled
create table todos (
  id uuid primary key default gen_random_uuid(),
  user_id uuid references auth.users(id),
  title text not null,
  done boolean default false
);
-- anon key can select every row
Fixed — RLS plus row-owner policies
alter table todos enable row level security;

create policy "todos_select_own" on todos
  for select using (auth.uid() = user_id);

create policy "todos_insert_own" on todos
  for insert with check (auth.uid() = user_id);

create policy "todos_update_own" on todos
  for update using (auth.uid() = user_id)
  with check (auth.uid() = user_id);

create policy "todos_delete_own" on todos
  for delete using (auth.uid() = user_id);
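Item 5 in the list above deserves a concrete sketch. The table and column names below (documents, memberships, org_id) are illustrative, not drawn from any audited app:

```sql
-- A membership policy of the kind item 5 describes: correct, but the
-- EXISTS subquery is probed once per candidate row.
create policy "docs_select_member" on documents
  for select using (
    exists (
      select 1 from memberships m
      where m.org_id = documents.org_id
        and m.user_id = (select auth.uid())
    )
  );

-- A compound index lets Postgres answer each EXISTS probe with an
-- index lookup instead of scanning memberships per row.
create index memberships_user_org_idx
  on memberships (user_id, org_id);
```

Wrapping auth.uid() in a scalar subselect, as above, is the documented Supabase RLS performance pattern: the planner treats the result as a stable value instead of re-evaluating the function for every row.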

Auth patterns AI tools miss: callbacks, session refresh, MFA

A Supabase auth fix usually comes down to three gaps. First, OAuth callbacks point at localhost after deploy because the AI tool configured the client redirect but never updated Supabase Site URL or the Google Cloud authorized redirect URIs. Second, session refresh silently fails because the generated client never enables persistSession or handles the TOKEN_REFRESHED event. Third, multi-factor auth, which Supabase supports, is almost never scaffolded because the code generator treats it as optional.
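The second gap closes with a few explicit client options. This is a sketch against supabase-js v2; the NEXT_PUBLIC env var names follow the Next.js convention used elsewhere on this page:

```typescript
import { createClient } from '@supabase/supabase-js';

// persistSession and autoRefreshToken are supabase-js v2 defaults, but
// AI scaffolds sometimes disable them; state them explicitly.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
  {
    auth: {
      persistSession: true,      // keep the session across page reloads
      autoRefreshToken: true,    // refresh before the JWT expires
      detectSessionInUrl: true,  // complete OAuth and magic-link callbacks
    },
  }
);

// Surface refresh events so a broken refresh path shows up in week one,
// not two weeks later when tokens start expiring in production.
supabase.auth.onAuthStateChange((event, session) => {
  if (event === 'TOKEN_REFRESHED') {
    console.debug('session refreshed, expires at', session?.expires_at);
  }
});
```
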

We test a full sign-up, sign-in, sign-out, password reset, and refresh sequence against a staging project before marking auth as fixed. We also add a regression test for the token refresh path, because it is the auth failure that shows up two weeks later when tokens start expiring in production. Our auth specialist covers the same territory for apps built on Clerk or Auth.js.

The other common finding is a broken email flow. Lovable defaults to the built-in Supabase SMTP, which is unauthenticated and lands in spam. Swap in Resend, Postmark, or SendGrid, verify DKIM, SPF, and DMARC, and confirm the password reset email lands in a fresh inbox before closing the ticket.

Realtime subscription security

Supabase Realtime broadcasts Postgres changes over WebSockets. It respects RLS on the underlying table, which means a subscription without RLS is a data leak. The AI-generated pattern is predictable: a useEffect that calls supabase.channel(...).on(...).subscribe(), a table with RLS disabled for convenience, and no cleanup function. Every connected browser receives every insert across every user.

A Supabase realtime debugging pass does three things. First, confirm RLS is enabled on every table in the realtime publication. Second, scope the publication with alter publication supabase_realtime set table (...) to list only the tables the app actually needs. Third, audit every subscribe call and add a removeChannel cleanup in the effect return. We grep for .subscribe() during the audit and log any call without matching cleanup.

The cost of a missing cleanup compounds. After a few route changes the app holds dozens of dead WebSocket listeners, memory climbs, and Supabase throttles the connection. Users see a subtle slowdown followed by a hard reconnect storm. The fix is two lines of cleanup, but those lines are never there.
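In practice the subscribe-with-cleanup pattern looks like this. A sketch for supabase-js v2 in a React component; the todos table matches the schema example earlier on this page, while the component and channel names are illustrative:

```typescript
import { useEffect, useState } from 'react';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export function TodoFeed({ userId }: { userId: string }) {
  const [todos, setTodos] = useState<any[]>([]);

  useEffect(() => {
    // RLS on todos decides which rows this socket may receive; the
    // filter below is a bandwidth optimization, not a security boundary.
    const channel = supabase
      .channel(`todos:${userId}`)
      .on(
        'postgres_changes',
        { event: 'INSERT', schema: 'public', table: 'todos',
          filter: `user_id=eq.${userId}` },
        (payload) => setTodos((prev) => [...prev, payload.new])
      )
      .subscribe();

    // The two lines the generator never writes: tear the channel down
    // on unmount so route changes do not accumulate dead listeners.
    return () => {
      supabase.removeChannel(channel);
    };
  }, [userId]);

  return <ul>{todos.map((t) => <li key={t.id}>{t.title}</li>)}</ul>;
}
```
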

Edge functions and server-side keys

A Supabase edge functions fix usually starts with a single question: why is the service role key in the browser? The honest answer, most of the time, is that the AI tool copied the Supabase docs example verbatim and pasted it into a client component. The service role key bypasses RLS. If it lands in a React file, every visitor has database-admin rights. We rotate the key, relocate the query, and audit the build output to confirm nothing leaked.

Edge functions are the right home for privileged database work. We wrap the operation in a Supabase edge function, authenticate the caller, validate input, and return a minimal payload. For anything that needs to persist across requests — Stripe webhooks, cron work, background jobs — we reach for a dedicated queue rather than relying on the edge runtime. See our Stripe integration developer hub for the webhook patterns that pair with edge functions.

A common anti-pattern is an edge function that creates a fresh Supabase client and connection on every invocation: the function spins up, connects, runs one query, and returns, which doubles the latency the client would have seen doing the query itself. We consolidate reads into a single query, cache where safe, and set sensible timeouts. Most edge function bills we rescue drop by 60% after the first audit.

Broken — service role shipped to browser
// app/components/AdminPanel.tsx
'use client';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SERVICE_ROLE_KEY! // leaks to bundle
);

export function AdminPanel() {
  // any visitor has DB admin rights
  const run = () => supabase.from('users').delete().neq('id', '0');
  return <button onClick={run}>Purge</button>;
}
Fixed — service role confined to server route
// app/api/admin/purge/route.ts (server only)
import { createClient } from '@supabase/supabase-js';

export async function POST(req: Request) {
  // isAdmin is a placeholder for your own check (e.g. verify the
  // caller's Supabase JWT and an admin claim), not a library function.
  const auth = req.headers.get('authorization');
  if (!isAdmin(auth)) return new Response('forbidden', { status: 403 });

  const admin = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY! // server env, no NEXT_PUBLIC
  );
  await admin.from('users').delete().neq('id', '0');
  return new Response('ok');
}

Migration safety and schema management

AI-generated apps rarely have a migrations directory. Changes are made in the Supabase dashboard, captured nowhere, and impossible to replay. The first time something breaks in production, there is no way to reproduce the schema in staging. A Supabase rescue therefore always includes a migrations pass: pull the current schema with supabase db diff, commit the output into supabase/migrations/, and run supabase db reset against a branch project to prove the migrations replay cleanly.

From that point forward, schema changes go through pull requests. Supabase branching lets you spin up per-PR databases that run the new migrations automatically, which means reviewers can exercise the schema before it touches production. We also wire a GitHub Action that runs supabase db lint and blocks merges that introduce destructive changes without a down migration. This is the baseline every production Supabase app needs.
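With the Supabase CLI, the migrations pass sketches out to a few commands. Exact flags can vary by CLI version, and the migration name here is illustrative:

```shell
# Capture the dashboard-built schema as a first migration file
supabase db diff -f initial_schema   # writes supabase/migrations/<timestamp>_initial_schema.sql
git add supabase/migrations
git commit -m "chore: capture schema as migrations"

# Prove the migrations replay from zero on a disposable database
supabase db reset                    # drops, recreates, and reruns every migration

# Gate future changes in CI
supabase db lint                     # flags destructive or suspect SQL
```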

Our Supabase rescue process

Every Supabase rescue follows the same six-step sequence. Triage starts within 24 hours and the written diagnostic lands in 48.

  1. Enable RLS on every user-data table

    Run alter table <name> enable row level security on every table that holds anything a user owns. Without RLS, policies do not apply and the anon key grants broad access to every row.

  2. Write select, insert, update, and delete policies

    Author per-role policies keyed on auth.uid(). Test them with a signed-in user, a different signed-in user, and an unauthenticated session before merging.

  3. Move service role calls behind a server route

    Grep the codebase for SUPABASE_SERVICE_ROLE_KEY and relocate every hit into an API route or edge function. Rotate the key once the client side is clean.

  4. Harden the auth configuration

    Turn on email confirmation, set JWT expiry to one hour, enable refresh token rotation, and lock the redirect URL allowlist to production domains.

  5. Audit realtime subscriptions and edge functions

    Confirm RLS covers realtime publications, add removeChannel cleanup to every subscribe, and set function timeouts and retry limits for edge functions.

  6. Commit migrations and run a staging replay

    Capture the current schema with supabase db diff, commit the migration files to git, and run supabase db reset on a branch project to prove the schema reproduces.
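Step 2's three-way policy test does not need a browser; roles can be impersonated straight from SQL. The claims-override technique below follows the Supabase local-testing convention, and the UUID is a placeholder:

```sql
-- Impersonate a signed-in user inside a transaction
begin;
set local role authenticated;
set local request.jwt.claims to '{"sub":"00000000-0000-0000-0000-000000000001"}';
select count(*) from todos;   -- should count only this user's rows
rollback;

-- Impersonate an unauthenticated visitor
begin;
set local role anon;
select count(*) from todos;   -- should be zero for a private table
rollback;
```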

When to migrate off Supabase (rare)

Most Supabase problems are configuration problems, not platform problems. The right move is almost always to fix the RLS, relocate the service role key, and ship a migrations directory — not to migrate off. That said, there are three cases where a migration makes sense. First, a regulated workload (HIPAA, strict PCI) where a dedicated, audited Postgres cluster is required. Second, a mature product whose auth, storage, and realtime needs have grown beyond the Supabase defaults. Third, a team whose DBA expertise is the bottleneck and a managed Postgres with pgbouncer is a better fit.

If a migration is warranted we plan the cutover with care — Supabase Auth user export, RLS rewritten as middleware, storage relocated, and realtime rebuilt on your own WebSockets or on Ably or Pusher. See our app migration service for the full process. In ninety percent of the Supabase conversations we have, the right answer is a rescue, not a migration.

DIY vs Afterbuild Labs vs hiring a Supabase specialist

Three paths to a hardened Supabase stack. Pick based on budget, timeline, and how much risk you can carry.

| Dimension | DIY with AI tool | Afterbuild Labs | Hire Supabase specialist |
| --- | --- | --- | --- |
| Turnaround | Indefinite — regression loops | 48h diagnostic, 2-5 days fix | 2-6 weeks to start |
| Fixed price | No — per-credit billing | Yes — from $299 | $120-220/hr |
| RLS policy coverage | Partial, often wrong | Every table audited | Depends on contractor |
| Service role key audit | Usually missed | Grep plus rotation | Included if requested |
| Production deploy checklist | Not provided | 10-point pass | Custom, variable |
| Migration history in git | Rare | Captured and committed | Usually included |
| Post-delivery support | None | Warranty plus retainer | Contractor-dependent |

Hire Supabase developer — FAQs

Should I turn Supabase Row Level Security on or off?

Always on for any table that holds user data. Supabase creates new tables with RLS available but disabled by default, and the anon key ships with broad access. With RLS off, every browser can read and write every row. Turn RLS on with a single alter table command, then write per-role policies for select, insert, update, and delete. A Supabase developer for fixes can harden a schema in under a day.

When should I use the anon key versus the service role key?

The anon key is safe in the browser only when every table has RLS enabled and correct policies. The service role key bypasses RLS entirely and must never reach client code. Use it only inside server routes, edge functions, or background workers. If an AI tool placed the service role key in a React component, rotate it immediately, move the logic behind an API route, and audit the git history for accidental commits.

Why is Supabase auth not verifying emails in my Lovable app?

Lovable and Bolt frequently ship with email confirmation disabled and the default SMTP provider, which lands verification emails in spam. The fix is three settings: enable email confirmation in the Supabase auth config, plug in a real provider like Resend or Postmark, and verify DKIM, SPF, and DMARC on your sending domain. We also set the redirect URL allowlist so magic links only redirect to approved domains.

How do realtime subscription permissions work in Supabase?

Realtime respects RLS on the underlying table as of the Supabase 2024 update, but you still need to enable the realtime publication and confirm policies allow select for the subscribing role. AI tools often subscribe from the browser with the anon key and no RLS, which broadcasts every insert to every connected client. Add RLS, scope the publication to specific tables, and make sure each subscription unsubscribes on unmount.
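Scoping the publication is a single statement; the table names here are illustrative:

```sql
-- Broadcast changes for only the tables the UI actually subscribes to
alter publication supabase_realtime set table todos, messages;

-- Verify what is currently published
select schemaname, tablename
from pg_publication_tables
where pubname = 'supabase_realtime';
```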

How much do Supabase edge functions cost in production?

Supabase edge functions bill on invocations and compute seconds. At the time of writing, the Pro plan includes 2 million invocations monthly with overages around $2 per million. Compute seconds bill separately. For most AI-built apps we rescue, edge functions cost under $20 a month. The bigger cost risk is pairing edge functions with a chatty client that re-invokes on every render — we fix that pattern during the audit.

How do I migrate from Supabase to plain Postgres?

Supabase runs on vanilla Postgres, so data migration is a pg_dump and pg_restore away. What you lose is auth, storage, realtime, and RLS enforced at the client boundary — those are Supabase services, not Postgres features. Before migrating, honestly answer why: most teams we talk to wanted a DBA, not a migration. If Supabase itself is the blocker we plan a cutover and rebuild auth on Auth.js or Clerk.

How do I configure Supabase environment variables on Vercel?

Set NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_ANON_KEY on Vercel for every environment — preview, production, and development. Set SUPABASE_SERVICE_ROLE_KEY as a plain server variable, never NEXT_PUBLIC. Preview deploys should use a separate Supabase project or a branch database so preview traffic does not mutate production data. A common failure is forgetting to set preview env vars and seeing silent nulls.
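With the Vercel CLI that setup is a handful of commands, one per variable per environment (a sketch; each command prompts for the value on stdin):

```shell
# Public values — safe in the browser once RLS is on
vercel env add NEXT_PUBLIC_SUPABASE_URL production
vercel env add NEXT_PUBLIC_SUPABASE_ANON_KEY production

# Server-only: no NEXT_PUBLIC prefix, so it never reaches the bundle
vercel env add SUPABASE_SERVICE_ROLE_KEY production

# Repeat for preview using a separate project or branch database's credentials
vercel env add NEXT_PUBLIC_SUPABASE_URL preview
```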

What is a typical timeline for a Supabase rescue?

A scoped Supabase fix runs two to five business days from $299 for triage. Work includes enabling RLS on user-data tables, writing per-role policies, moving service_role calls off the client, hardening auth settings, and shipping a migrations directory checked into git. Larger rescues that cover realtime, edge functions, and Stripe webhooks scale to two weeks. We send a written diagnostic within 48 hours so you know the scope before you commit.

Next step

Get a Supabase developer for fixes on your repo this week.

Send the repo and your Supabase project ref. A written diagnostic lands in 48 hours, and the fixed-price rescue ships in two to five days. Hire a Supabase developer who has shipped production rescues across Lovable, Bolt.new, and Cursor, and who stays on the engagement until every RLS policy, auth callback, and realtime subscription is production-safe.