afterbuild/ops
§ CS-92/b2b-saas-escape-from-bolt-to-nextjs
B2B SaaS (workforce scheduling) · Bolt.new · Platform Escape (App Migration)

Bolt.new to Next.js migration — B2B SaaS escape after $4,200 in burned tokens

Bolt.new to Next.js migration for Shiftwright, a shift-scheduling SaaS with 84 paying SMB customers: $900 a month burned on Bolt tokens, deploys forking into new 'universes' instead of updating production, and a contractor refusing to touch the codebase. Afterbuild Labs' Platform Escape migrated the app to Next.js 16 + Postgres in 5 weeks. Token spend: $900 → $0/month. Deploy time: 40 minutes → 90 seconds. Zero customer downtime across 84 accounts. The contractor, who had merged zero PRs in the month before the engagement, merged 11 in the three weeks following handoff.

updated April 8, 2026 · 10 min read · by Hyder Shah · client: Shiftwright (name changed)
§ CS-92.1/headline-numbers
$0
Monthly AI token spend
from $900+
90 seconds
Deploy time
from ~40 min (forking universes)
1 canonical
Production URLs in customer inboxes
from 17 stale
11 in 3 weeks
Contractor PRs merged
from 0 in 4 weeks
§ CS-92.2/client-context

About B2B SaaS (workforce scheduling) client

Shiftwright (name changed) is a B2B SaaS (workforce scheduling) team at the post-revenue stage: $9.1k MRR, 84 paying customers. They built their product with Bolt.new and shipped it to pilot users before discovering that the generated scaffolding masked a set of production-grade failures. The engagement that followed was scoped as Platform Escape (App Migration) ($9,999 fixed fee).

Stack before
Bolt.new · StackBlitz hosting · Bolt-managed auth · Bolt KV store
Stack after
Next.js 16 (App Router) · Postgres (Neon) · Clerk · Vercel · Drizzle ORM · GitHub Actions
§ CS-92.3/day-zero-autopsy

Audit findings on day zero

What the first production-readiness pass uncovered before a single line of code was changed. Each finding is a specific Bolt.new failure mode we’ve seen repeat across engagements.

  1. F01

    Token spiral: ~$900/month on edits that didn't stick

    The founder's words: "Bolt.new ate tokens like a parking meter eats coins." A typical session: ask Bolt to change a dropdown label, watch it rewrite four unrelated components, spend 180k tokens, then discover the label still says the old thing. Over four months, $4,200 burned. The founder estimated about a third of every working week was spent rewording prompts trying to coax Bolt into making the change without rewriting half the app, and another third was spent reverting unwanted changes Bolt had made along the way. Real product work was the remaining third.

  2. F02

    Deploys forking into parallel 'universes'

    Every deploy spun up a new production URL instead of updating the existing site. Customers' bookmarks, SSO links, and email receipts pointed at stale versions. Support tickets accumulated at the rate of 2–3 per deploy. The founder eventually built a private Notion page documenting which production URL was the 'real' one this week, and shared the link with their support contractor — who then had to forward it to confused customers individually. By the time we arrived, the founder was actively avoiding deploying because every deploy meant a half-day of manual cleanup.

  3. F03

    A contract dev who wouldn't touch the code

    The founder paid a contractor for a month. The contractor refused to commit anything: "I can't tell what Bolt generated versus what I wrote, and I can't run the stack outside Bolt, so I can't guarantee my changes won't be clobbered next prompt." Zero PRs shipped. The contractor was a senior engineer with 12 years of Node experience; the issue was not skill, it was that Bolt's environment had no separation between human code and machine code, no version control discipline, and no way to test anything locally. No senior engineer can professionally sign off on a codebase they cannot run on their own laptop.

  4. F04

    No export path for customer data

    The Bolt-managed KV store had 14 months of scheduling data and audit trails. There was no documented export; the founder's plan was to "hope" it migrated cleanly when they moved off. We later discovered the KV store also held some operational metadata — the per-customer time-zone offsets — that lived nowhere else. Losing that during a hasty export would have silently corrupted every customer's published shift schedules until somebody noticed.

  5. F05

    Tests didn't exist and couldn't exist

    Bolt's environment ran everything in a WebContainer. The app assumed Node APIs that don't exist in any real server runtime, so local testing was impossible. The contractor had attempted to set up Vitest locally and discovered that roughly 30 of the imported modules did not actually run outside Bolt — they were polyfills the WebContainer provided that did not exist anywhere else. There was no path to test coverage without first rebuilding the app on a portable runtime.
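The contractor's failed Vitest setup boils down to a module-resolution audit: which of the app's imports actually resolve in a real Node runtime, and which were sandbox-only polyfills? A minimal sketch of that audit — the module names are illustrative, not Shiftwright's real dependency list:

```typescript
// Portability check: given the specifiers an app imports, report which ones
// resolve locally and which only existed inside the builder's sandbox.
import { createRequire } from "node:module";

const require = createRequire(import.meta.url);

export function resolvable(specifiers: string[]): { ok: string[]; missing: string[] } {
  const ok: string[] = [];
  const missing: string[] = [];
  for (const spec of specifiers) {
    try {
      require.resolve(spec); // throws if the module can't be found in this runtime
      ok.push(spec);
    } catch {
      missing.push(spec); // likely a sandbox-provided polyfill
    }
  }
  return { ok, missing };
}

// Example: node:fs exists in every Node runtime; a sandbox-only shim does not.
const report = resolvable(["node:fs", "some-webcontainer-only-shim"]);
console.log(report.missing);
```

Running a check like this against the import graph gives a concrete punch list of what must be replaced before the app can leave the sandbox.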

§ CS-92.4/root-cause-analysis

Root cause of the Bolt.new failure mode

Bolt.new optimised for making changes feel fast — at the cost of making changes be real. The 'universe' deploy pattern, the hidden KV store, and the WebContainer runtime created a stack that only existed inside Bolt. The causal chain: Bolt's sandbox abstracts away infrastructure → the generated code silently depends on that abstraction → any attempt to leave the sandbox (a contractor, a test, a migration) surfaces the dependency all at once → the founder pays tokens to paper over each surfacing instead of fixing the root dependency. Migrating off Bolt wasn't the hard part. Migrating off Bolt without losing the customers whose data lived inside it was. The reason the contractor refused to commit wasn't laziness; it was that committing meant taking professional responsibility for code that could be silently overwritten by Bolt the next time the founder typed a prompt. No senior engineer will sign their name to that arrangement, and Shiftwright had hit the ceiling of what a no-engineer SaaS can achieve on a sandboxed platform. The path forward was not optional: own the stack or lose the contractor and the customers along with it.

§ CS-92.5/remediation

How we fixed the Bolt.new stack

Each step below is one remediation workstream from the engagement. In cases where the underlying data includes before/after code vignettes, those render inline; otherwise we describe the change in prose.

  1. 01

    Reverse-engineered the Bolt KV store schema by intercepting network traffic in a headless browser session, then dumped all 14 months of scheduling data to JSON under the founder's supervision.

  2. 02

    Stood up a clean Next.js 16 App Router project with Drizzle ORM pointed at a new Neon Postgres. Designed the schema to match the reconstructed Bolt data model 1:1, with extra indexes on the three queries the app actually made.

  3. 03

    Rebuilt the UI in Next.js, component by component, using the existing Bolt-generated JSX as a spec but normalising the styling and pulling data through server components instead of client-side fetches.

  4. 04

    Migrated auth to Clerk with a one-time token exchange: on first login post-migration, each existing user was issued a signed magic link that re-bound their Clerk account to their prior Shiftwright ID. Zero password resets required.

  5. 05

    Set up GitHub Actions CI with Playwright end-to-end tests covering the four critical paths (sign-in, create shift, publish schedule, export timesheet). The contractor could finally commit; their first PR merged in week three.

  6. 06

    Deployed to Vercel with a single stable production URL, a preview environment per PR, and a cutover plan: DNS pointed at Vercel at 02:00 UTC on a Sunday, with a 15-minute dual-write window. Three customers were online during cutover; none noticed.

  7. 07

    Stripe billing was migrated alongside the data: every existing subscription was re-created in the new Stripe account through the Customer Portal, the founder approved the test charge to a personal card, then the live charge cutover ran with no card-on-file friction for any of the 84 customers. Two customers had attempted refunds during the prior month that Bolt's webhook had silently lost; both were honoured manually as part of the cutover and resolved a long-running support thread.

  8. 08

    Wrote the founder a 14-page operations runbook covering deploy, rollback, env var management, secret rotation cadence, an on-call playbook, and a dependency upgrade checklist tied to npm-check-updates. The contractor uses it as their reference; the founder reads it once a quarter to remind themselves what could go wrong.
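Step 01's interception ran in a headless browser; the part worth showing is the normalisation that followed. A minimal sketch, assuming a hypothetical `shift:<customerId>:<shiftId>` key layout and a tz-offset side table — not Bolt's real schema:

```typescript
// Hedged sketch: flatten captured KV entries into row-shaped records for a
// Postgres import. The key layout and tz-offset side table are assumptions
// for illustration, not the real Bolt data model.
type KvDump = Record<string, unknown>;

export interface ShiftRow {
  customerId: string;
  shiftId: string;
  payload: unknown;
  tzOffsetMinutes: number; // lived only in the KV store — must not be dropped
}

export function flattenDump(dump: KvDump, tzOffsets: Record<string, number>): ShiftRow[] {
  const rows: ShiftRow[] = [];
  for (const [key, payload] of Object.entries(dump)) {
    const m = /^shift:([^:]+):([^:]+)$/.exec(key);
    if (!m) continue; // skip non-shift keys (sessions, counters, ...)
    const [, customerId, shiftId] = m;
    if (!(customerId in tzOffsets)) {
      // Fail loudly: a silently missing offset would corrupt published schedules.
      throw new Error(`no tz offset for customer ${customerId}`);
    }
    rows.push({ customerId, shiftId, payload, tzOffsetMinutes: tzOffsets[customerId] });
  }
  return rows;
}
```

Throwing on a missing offset is deliberate: finding F04 showed that losing the per-customer offsets would corrupt schedules without anyone noticing, so the export fails fast instead.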
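Step 02's schema can be sketched with Drizzle. Table and column names here are illustrative stand-ins, not Shiftwright's real model, and the fragment requires the `drizzle-orm` package:

```typescript
// Hedged sketch of one table from the Drizzle schema. The indexes mirror the
// hot queries: shifts by customer, and shifts by customer within a time range.
import { pgTable, serial, text, integer, timestamp, index } from "drizzle-orm/pg-core";

export const shifts = pgTable(
  "shifts",
  {
    id: serial("id").primaryKey(),
    customerId: text("customer_id").notNull(),
    startsAt: timestamp("starts_at", { withTimezone: true }).notNull(),
    endsAt: timestamp("ends_at", { withTimezone: true }).notNull(),
    tzOffsetMinutes: integer("tz_offset_minutes").notNull(), // rescued from the KV store
  },
  (t) => ({
    byCustomer: index("shifts_customer_idx").on(t.customerId),
    byWindow: index("shifts_window_idx").on(t.customerId, t.startsAt),
  }),
);
```

Matching the reconstructed Bolt data model 1:1 kept the import script trivial; the indexes were added afterwards, driven by the queries the app actually made.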
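Step 04's one-time exchange reduces to issuing and redeeming a signed, expiring token that carries the legacy user ID. A self-contained sketch using Node's crypto module — the token format and function names are ours, not Clerk's API:

```typescript
// Hedged sketch of the magic-link token exchange. Token format:
// "<userId>.<expiresAtMs>.<hmac>" — userId must not contain ".".
import { createHmac, timingSafeEqual } from "node:crypto";

function sign(payload: string, secret: string): string {
  return createHmac("sha256", secret).update(payload).digest("hex");
}

export function issueToken(userId: string, expiresAtMs: number, secret: string): string {
  const payload = `${userId}.${expiresAtMs}`;
  return `${payload}.${sign(payload, secret)}`;
}

// Returns the legacy user ID if the token is authentic and unexpired, else null.
export function redeemToken(token: string, secret: string, nowMs = Date.now()): string | null {
  const parts = token.split(".");
  if (parts.length !== 3) return null;
  const [userId, exp, mac] = parts;
  const expected = sign(`${userId}.${exp}`, secret);
  const a = Buffer.from(mac, "hex");
  const b = Buffer.from(expected, "hex");
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null; // forged or corrupted
  if (Number(exp) < nowMs) return null; // expired magic link
  return userId;
}
```

On first login post-migration, a route redeems the token and binds the new auth account to the returned Shiftwright ID, which is what made zero password resets possible.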
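Step 06's 15-minute dual-write window can be sketched as a thin store wrapper: writes go to both the new Postgres-backed store and the legacy store until the window closes, while reads are served from the new stack. The interfaces are simplified stand-ins for the real adapters:

```typescript
// Hedged sketch of the cutover dual-write wrapper.
export interface ShiftStore {
  put(key: string, value: unknown): Promise<void>;
  get(key: string): Promise<unknown>;
}

export function dualWrite(primary: ShiftStore, legacy: ShiftStore, untilMs: number): ShiftStore {
  return {
    async put(key, value) {
      await primary.put(key, value); // new store is the source of truth
      if (Date.now() < untilMs) {
        // Best-effort mirror to the legacy store so a rollback loses nothing.
        try { await legacy.put(key, value); } catch { /* log and continue */ }
      }
    },
    get: (key) => primary.get(key), // reads already served by the new stack
  };
}
```

The window exists purely as rollback insurance: if the DNS cutover had to be reversed, the legacy store would still hold every write made during the transition.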

§ CS-92.6/founder-quote
Sample client perspective — composite, not an individual testimonial
The deploy forking thing alone was driving me insane. Every push, another universe. Afterbuild Labs got me to a stack where I push once and it just updates. My contractor actually writes code now instead of refusing to. The $900 a month I was burning on tokens pays their retainer.
Dan Reuter· Founder, Shiftwright
§ CS-92.7/outcome-delta

Outcome after the escape

Every metric below was measured directly — token spend from Bolt billing history, deploy time from pipeline logs, PR counts from the GitHub repo, MRR via Stripe billing.

Before / after — B2B SaaS (workforce scheduling)
Metric · Before · After
Monthly AI token spend · $900+ · $0
Deploy time · ~40 min (forking universes) · 90 seconds
Production URLs in customer inboxes · 17 stale · 1 canonical
Contractor PRs merged · 0 in 4 weeks · 11 in 3 weeks
Customers lost during migration · — · 0 of 84
Total engagement length · — · 5 weeks
MRR post-migration · $9.1k · $11.4k (3 paused customers returned)
§ CS-92.8/engineer-note
Engineer retrospective — technical lesson
We'd start the data export on day one in parallel with the Next.js scaffold, not week two. The KV reverse-engineering ate more calendar time than it needed to because we serialised the work.
Hyder Shah· Founder, Afterbuild Labs
§ CS-92.9/replicate-this-rescue

How to replicate this Bolt.new rescue

The same engagement path runs across every B2B SaaS (workforce scheduling) rescue we take on. Start with the diagnostic, then route into the service tier that matches the breakage surface.

§ CS-92.10/related-rescues

Similar B2B SaaS (workforce scheduling) rescues

Browse the full archive of Bolt.new and adjacent AI-builder rescue write-ups.

§ CS-92.11/industry-deep-dive

Related industry deep-dive

vertical · b2b saas (workforce scheduling)
Read more B2B SaaS rescue patterns

Token spirals, unstable deploy pipelines, and tests that can't leave the builder sandbox are the recurring B2B SaaS failure modes this escape surfaced. The vertical page covers the multi-tenant, RBAC, and Stripe-subscription production concerns every post-MVP SaaS hits within its first hundred paying customers.

Next step

Got a broken Bolt.new app that looks like this one?

Send the repo. We'll tell you what it takes to ship — in 48 hours, fixed fee. Free diagnostic, no obligation.

Book free diagnostic →