Firebase to Supabase migration developer

Firebase Rescue & Migration Developer

Firebase-backed apps built with AI tools commonly have three problems: security rules that allow public reads of private data, cost runaway from unoptimized Firestore queries, and auth flows that do not verify emails. Afterbuild Labs audits in three business days from $499 and migrates in one to three weeks.

By Hyder Shah · Founder, Afterbuild Labs · Last updated 2026-04-17

Why AI-built Firebase apps hemorrhage money

Firebase prices each Firestore read, write, and delete separately. A single listing page that fetches a whole collection from the client, unpaginated, costs one document read per row rendered — which sounds cheap until you realize the page is loaded thousands of times a day and fetches 200 documents per load. AI-generated Firebase apps lean heavily on this pattern because it is the easiest client code to generate. The bill stays negligible in development and explodes the week the first marketing push lands.
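Back of the envelope, assuming roughly $0.06 per 100k document reads (pricing varies by region; check the current Firestore pricing page before relying on these numbers):

```javascript
// Rough Firestore read cost for the unpaginated listing page above.
// Pricing assumption: ~$0.06 per 100k document reads (us multi-region).
const docsPerLoad = 200;
const loadsPerDay = 10_000;
const pricePer100kReads = 0.06;

const readsPerDay = docsPerLoad * loadsPerDay;   // 2,000,000 reads/day
const readsPerMonth = readsPerDay * 30;          // 60,000,000 reads/month
const monthlyCost = (readsPerMonth / 100_000) * pricePer100kReads;

console.log(readsPerDay);  // 2000000
console.log(monthlyCost);  // 36
```

Thirty-six dollars a month sounds survivable; the same page with real-time listeners, retries, and a traffic spike does not scale linearly in your favor.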

The cost runaway pairs with a rules problem. Firestore security rules default permissive in many AI-built apps, not because the generator left them open on purpose, but because the rules file was never touched. allow read, write: if true; stays in place because the first deploy worked. A Firebase rescue developer reads the rules file against the collections it protects and almost always finds at least one path that reads more than it should.

Both problems compound with a third: Firebase Auth flows generated by AI tools rarely enforce email verification, and password reset emails are often left on the default sender domain that most mail clients flag as unauthenticated. Founders ship the app, onboard users, and then discover the security, cost, and auth problems in the same month.

The 7 Firestore security rule mistakes we see most

Every Firebase security rules fix we run hits some combination of these seven patterns. Ranked by how often they appear in our audits.

  1. Rules file untouched from default template

    allow read, write: if request.time < timestamp.date(2030, 1, 1) — the Firebase default — is effectively public until the expiry. Seen in roughly one in four AI-built apps we audit.

  2. Public read on collections with private fields

    A profiles collection allows public reads so the landing page can render names. The same documents also hold email, phone, and billing info. Split public from private into sibling collections.

  3. allow write gated only on authentication

    allow write: if request.auth != null grants every signed-in user write access to every document. The rule must also match request.auth.uid against the document's owner.

  4. Nested subcollection with inherited lax rules

    Rules do not cascade from a parent document to its subcollections, so a locked parent does not lock the data beneath it, and a broad recursive wildcard elsewhere can still expose it. Explicitly write rules for each subcollection.

  5. Rules that reference unvalidated client fields

    request.resource.data.role == 'admin' trusts the client payload. Move role-based access to a server function that reads the role from a locked document.

  6. Functions in rules that fan out reads

    A rule that calls getAfter or get to look up a membership on every read adds Firestore reads per request. Watch for latency and bill impact; denormalize where possible.

  7. Storage rules ignored entirely

    Firebase Storage has its own rules file and is commonly left as allow read, write: if true on AI-built apps. User uploads are readable by anyone.

Broken — rules wide open
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Firebase default — public until 2030
    match /{document=**} {
      allow read, write: if request.time < timestamp.date(2030, 1, 1);
    }
  }
}
Fixed — owner-scoped rules
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /todos/{todoId} {
      allow read: if request.auth != null
        && resource.data.ownerUid == request.auth.uid;
      allow create: if request.auth != null
        && request.resource.data.ownerUid == request.auth.uid;
      allow update, delete: if request.auth != null
        && resource.data.ownerUid == request.auth.uid;
    }
  }
}

Firestore query cost patterns (N+1, full-collection reads)

The cost audit during a Firebase rescue usually surfaces two patterns. The first is a full-collection read on page load — a listing page that calls collection().get() without pagination or field projection. Every render costs one read per document, and Firebase does not cache client-side reads by default. The second is an N+1 read pattern — a loop that fetches a parent document, then fetches a related child document per item, which multiplies the billed reads by the list length.

We fix both with a Firestore query audit and rewrites. Listing pages move to paginated queries with limit and startAfter cursors. Related documents get denormalized into the parent or fetched with in queries (up to 30 IDs per query). Field projection with select() keeps payload size down but — importantly — does not reduce billed reads, which are per-document. The only way to cut bills is fewer reads, not smaller reads.

Broken — full-collection read
// one read per document on every page render
const snap = await getDocs(collection(db, 'posts'));
const posts = snap.docs.map((d) => d.data());

// 2,000 docs * 10,000 renders/day = 20M reads/day
Fixed — paginated query plus cursor
const q = query(
  collection(db, 'posts'),
  orderBy('publishedAt', 'desc'),
  limit(20)
);
const snap = await getDocs(q);
const posts = snap.docs.map((d) => d.data());

// 20 reads per page. Infinite scroll uses startAfter
// with the last doc cursor. Bill drops 100x.
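The N+1 pattern gets a similar treatment: collect the related IDs up front and fetch them in batches with in queries instead of issuing one read per loop iteration. A minimal sketch; the Firestore calls in the comment assume the modular web SDK:

```javascript
// Firestore 'in' queries accept at most 30 IDs, so chunk the ID list
// first, then run one query per chunk instead of one read per item.
function chunk(ids, size = 30) {
  const out = [];
  for (let i = 0; i < ids.length; i += size) {
    out.push(ids.slice(i, i + size));
  }
  return out;
}

// Firestore side (illustrative, modular v9+ SDK):
//   for (const batch of chunk(authorIds)) {
//     const snap = await getDocs(
//       query(collection(db, 'users'), where(documentId(), 'in', batch))
//     );
//     // ...merge snap.docs into the page data
//   }

console.log(chunk([1, 2, 3], 2)); // [[1, 2], [3]]
```

A list of 60 related documents drops from 60 billed child reads scattered across a loop to two batched queries of 30 reads each; the read count is the same, but the round trips and latency are not, and the batching makes the denormalization candidates obvious.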

Firebase Auth migration paths

A Firebase Auth migration is the gate that blocks most Firebase-to-Supabase projects. The good news: Firebase exports password hashes in a format Supabase can accept directly, so users keep their passwords and do not need to reset. The work is a scripted export, a mapping of custom claims to JWT metadata, and a Supabase import that specifies the scrypt hash algorithm with the Firebase pepper.
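A sketch of the per-user field mapping; the source field names match the JSON that firebase auth:export produces, while the target object's shape is an assumption for illustration; consult Supabase's Firebase migration guide for the exact import fields its tooling expects:

```javascript
// Map one record from `firebase auth:export users.json --format=json`
// to a generic import payload. Source fields (localId, email,
// emailVerified, passwordHash, salt) match Firebase's export format.
// The target field names below are illustrative, not Supabase's API.
function toImportRecord(fbUser) {
  return {
    id: fbUser.localId,                 // keep the Firebase UID
    email: fbUser.email,
    email_confirmed: !!fbUser.emailVerified,
    password_hash: fbUser.passwordHash, // base64 scrypt hash
    password_salt: fbUser.salt,         // per-user salt; the project-level
                                        // scrypt params and pepper are
                                        // supplied once at import time
  };
}

const out = toImportRecord({
  localId: 'abc123',
  email: 'a@example.com',
  emailVerified: true,
  passwordHash: 'BASE64HASH',
  salt: 'BASE64SALT',
});
console.log(out.id); // 'abc123'
```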

Third-party sign-in providers (Google, Apple, Facebook) migrate differently. The underlying identity at Google is the same, but the OAuth client lives in your Firebase or Google Cloud project and needs to be re-authorized against the new Supabase redirect URI. We run the auth migration last in the sequence so OAuth clients switch over at the same time as the read path flip.

Cloud Functions cold starts and cost

Cloud Functions cold starts on Node.js runtimes average 500-1500 ms. For an API route called on page load, that latency is visible to users. A Firebase Cloud Functions fix often involves one of three moves: switching to the 2nd gen runtime, which has faster cold starts; pre-warming hot paths with minInstances; or migrating the function to a serverless runtime with a better cold-start profile (Vercel edge, Supabase edge). Pre-warming adds cost — minInstances of 1 keeps the function billing continuously — so we use it only on the one or two routes that matter most.
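As a deployment config sketch using the 2nd gen firebase-functions API (the export name, region, and memory setting are illustrative; pick values for your own hot path):

```javascript
const { onRequest } = require('firebase-functions/v2/https');

// Pre-warm only the hot path: minInstances keeps one instance resident,
// which bills continuously, so set it per-function rather than globally.
exports.api = onRequest(
  { minInstances: 1, region: 'us-central1', memory: '256MiB' },
  (req, res) => {
    res.json({ ok: true });
  }
);
```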

A migration to Supabase edge functions restructures the trigger model. Firestore-triggered functions become Postgres triggers or webhook-driven edge functions. Scheduled functions move to pg_cron. The code changes are minor; the new trigger model is the main adjustment. Most teams we migrate report lower cold starts on edge and no meaningful change in functional behavior.

Migrating from Firestore to Postgres and Supabase

The Firestore to Postgres mapping is a deliberate piece of schema design. Firestore documents go to Postgres rows; collections go to tables; nested objects go to JSONB columns or normalized side tables. Document IDs become primary keys. Subcollections become foreign-keyed child tables. The work is not translation — it is a redesign with the benefit of seeing the real query patterns the app already runs.
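As a sketch of that mapping for a hypothetical todos collection (the table and column names are illustrative, not a fixed schema): top-level scalars become columns, the document ID becomes the primary key, and anything nested lands in a JSONB column until the query patterns justify normalizing it out.

```javascript
// Illustrative mapping from one Firestore document to one Postgres row.
function todoDocToRow(docId, data) {
  const { title, done, ownerUid, ...rest } = data;
  return {
    id: docId,           // primary key, preserves the Firestore doc ID
    title,
    done: !!done,
    owner_uid: ownerUid, // snake_case column, indexed for RLS / lookups
    extra: rest,         // nested objects -> a JSONB column
  };
}

const row = todoDocToRow('t1', {
  title: 'Ship audit',
  done: false,
  ownerUid: 'u42',
  meta: { tags: ['firebase'] },
});
console.log(row.id, row.owner_uid); // t1 u42
```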

We run the dual-write window during cutover: the app writes both to Firestore and to the new Postgres schema, reads from Firestore as before, and we backfill the historical data into Postgres in batches. When the row counts match and the spot-checks pass, we flip the read path. Firebase stays up for 30 days as a rollback, then we decommission. See our app migration service for the full sequence.
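The dual-write wrapper can be sketched with in-memory stand-ins for the two clients (the function and store names are illustrative; the real version wraps the Firestore and Postgres write paths):

```javascript
// Minimal dual-write sketch: every write goes to both backends; reads
// keep coming from the old one until flip() is called after backfill
// verification. Stores here are anything with async set/get.
function makeDualWriter(oldStore, newStore) {
  let readFromNew = false;
  return {
    async write(id, doc) {
      await oldStore.set(id, doc); // old backend stays authoritative
      await newStore.set(id, doc); // new backend shadows every write
    },
    async read(id) {
      return readFromNew ? newStore.get(id) : oldStore.get(id);
    },
    flip() {
      readFromNew = true;          // the cutover is one boolean
    },
  };
}

// Usage (illustrative, with Maps standing in for the two databases):
//   const dw = makeDualWriter(firestoreLike, postgresLike);
//   await dw.write('todo-1', { title: 'Ship audit' });
//   dw.flip(); // after row counts match and spot-checks pass
```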

Our Firebase rescue process

Every Firebase engagement follows the same six-step sequence. Audit in three business days from $499; migration, if chosen, runs one to three weeks.

  1. Audit every Firestore security rule

    Read the rules file line by line. Flag any allow read or allow write that is not gated on request.auth. Tighten each to role-aware matching against documents and subcollections.

  2. Profile Firestore read costs

    Use the Firebase usage dashboard to find the hottest collections. Identify N+1 patterns and full-collection reads from the client that drive the bill.

  3. Decide: harden Firebase or migrate

    Cost-curve Firebase at projected scale against Supabase Pro. If the crossover is within six months, plan the migration. If Firebase still wins, harden and stay.

  4. Ship tighter rules and query rewrites

    Deploy the corrected rules. Rewrite client queries to paginate, project fields, and denormalize where necessary. Confirm the billed reads drop.

  5. Plan the migration (if required)

    Pick the target backend, write the collection-to-table mapping, export auth users, and script the dual-write window. Review the plan with the founder.

  6. Cut over and verify

    Run the dual-write window, backfill historical data, flip the read path, and decommission Firebase once verified. Keep an export as a rollback for 30 days.

When to stay on Firebase vs migrate

Firebase is the right home for a subset of apps. Mobile-first products with real offline sync needs get value from Firestore that Postgres cannot match. Write-heavy ingestion pipelines in the hundreds of thousands per hour scale on Firestore with fewer tuning headaches. Teams already deep in Google Cloud benefit from Firebase because the IAM, billing, and observability are already set up.

For web-first AI-built apps, which is the bulk of what we rescue, Supabase wins on cost, portability, and SQL ergonomics once the app passes a few thousand MAU. We run the cost curve honestly — projected Firebase cost at 10x current usage against Supabase Pro plus a reasonable read-heavy overage — and share the result before anyone commits to a migration. If Firebase still wins, we harden it and stay.

DIY vs Afterbuild Labs vs hiring a Firebase specialist

Three paths to a hardened Firebase stack or a clean migration. Pick based on cost urgency and team capacity.

| Dimension | DIY with AI tool | Afterbuild Labs | Hire Firebase specialist |
| --- | --- | --- | --- |
| Turnaround | Indefinite — cost keeps climbing | 48h diagnostic, 1-3 week fix | 3-8 weeks to start |
| Fixed price | No — Firebase overage bills | Yes — from $499 | $140-220/hr |
| Security rules audit | Rarely done | Every rule reviewed | Included |
| Cost audit with query rewrites | Not attempted | Top collections profiled and rewritten | Included |
| Firebase Auth export to Supabase | Manual and risky | Scripted, preserves passwords | Depends on contractor |
| Zero-downtime migration | Unlikely | Dual-write plus backfill | Depends on contractor |
| 30-day rollback window | Not planned | Firebase stays up post-cutover | Rarely offered |

Hire Firebase developer — FAQs

Firebase vs Supabase — which should I pick for an AI-built app?

Pick Supabase for SQL-shaped apps, relational data, and teams that want to stay close to Postgres. Pick Firebase for write-heavy mobile apps with offline sync, large document collections, or existing Firebase Auth users. For most AI-built web apps we rescue, Supabase wins on cost at mid-scale, SQL ergonomics, and portability. Firebase wins on mobile offline sync and massive write volume. Neither is universally better — it is a fit question.

How long does a Firebase to Supabase migration take?

A Firebase to Supabase migration runs one to three weeks depending on collection count and auth user volume. Smaller apps with under ten Firestore collections and a few hundred users finish in a week. Larger apps with complex security rules, many collections, and Cloud Functions that need to become Supabase edge functions take three weeks. A Firebase rescue developer scopes the work after reviewing the collections and the rules file, usually within 48 hours.

Will there be downtime during a Firebase migration?

Not if we dual-write during the cutover. We run both Firebase and Supabase in parallel, mirror every new write into both systems, and backfill the historical data during a read-only window. When the Supabase side is verified, we flip the read path and turn Firebase writes off. Users see nothing. A zero-downtime Firebase migration is our default — a hard cutover is only used on apps under 100 users where the migration window is short enough to announce.

How does Firebase cost compare to Supabase at 10k MAU?

Firestore bills per read, write, delete, and network egress; Firebase Auth is free up to 50k MAU. Supabase Pro is $25 per month flat with generous included limits. At 10k MAU, a typical AI-built app sees Firebase bills of $80-300 monthly, driven by reads — especially when the client fetches whole collections on page load. Supabase on Pro usually lands at $25-50. The gap widens as reads scale because Firestore pricing is per-operation and Postgres pricing is flat.

How do I migrate Firebase Auth users to Supabase?

Export Firebase users with firebase auth:export — you get a JSON file containing emails, UIDs, and hashed passwords. Firebase uses scrypt with a Firebase-specific pepper, which Supabase accepts with the right hash parameters. Import with the Supabase Management API plus a script that sets the hash_algorithm to scrypt and supplies the Firebase hash config. Users continue to sign in with the same password; no reset required. We ship this as a script during the migration.

Cloud Functions vs Supabase edge functions — what changes?

Cloud Functions are a mature serverless runtime with Firestore triggers and scheduled jobs. Supabase edge functions are Deno-based, lighter, and trigger on HTTP or Postgres webhooks. A migration replaces Firestore-triggered functions with Postgres triggers or a webhook-driven edge function. Scheduled jobs move to Supabase cron (pg_cron). The code changes are minor; the trigger model is the main adjustment. A Firebase Cloud Functions fix during a rescue often uncovers cold start issues that an edge function removes.

Can I export data from Firestore?

Yes. Use gcloud firestore export for a full export to Cloud Storage, or the REST API for per-document export. The exported format is proprietary and requires conversion to relational tables during a migration. We write a deterministic mapping from Firestore collections to Postgres tables, preserving document IDs as primary keys and expanding nested objects into columns or JSONB fields where appropriate.

What does a Firebase rescue cost?

A Firebase security audit is $499 fixed price in three business days — rules review, cost audit, auth config check, and written report. A migration to Supabase or another backend runs $2,999 to $7,499 depending on collection count and user volume. A pure Firebase hardening without migration (tighten rules, cut costs, keep Firebase) is $1,499 to $2,999. Every engagement opens with a 48-hour written diagnostic so you know the scope before committing.

Next step

Get a Firebase to Supabase migration developer on your repo this week.

Send the Firestore rules file, the current monthly Firebase bill, and a link to the repo. A written Firebase rescue diagnostic lands in 48 hours; the audit ships in three business days from $499; migrations run one to three weeks. Hire a Firebase to Supabase migration developer who has moved dozens of AI-built apps with zero downtime.