Bolt.new to Next.js migration — B2B SaaS escape after $4,200 in burned tokens
Bolt.new to Next.js migration for Shiftwright, a shift-scheduling SaaS with 84 paying SMB customers: $900 a month burned on Bolt tokens, deploys forking into new 'universes' instead of updating production, a contractor refusing to touch the codebase. Afterbuild Labs' Platform Escape migrated the app to Next.js 16 + Postgres in 5 weeks. Token spend: $900 → $0/month. Deploy time: 40 minutes → 90 seconds. Zero customer downtime across 84 accounts. The contractor merged 11 PRs in the three weeks following handoff — more than they'd shipped in the previous four months combined.
- $0
- Monthly AI token spend
- 90 seconds
- Deploy time
- 1 canonical
- Production URLs in customer inboxes
- 11 in 3 weeks
- Contractor PRs merged
About the client: B2B SaaS (workforce scheduling)
Shiftwright (name changed) is a B2B SaaS (workforce scheduling) team at the post-revenue stage: $9.1k MRR, 84 paying customers. They built their product with Bolt.new and shipped it to pilot users before discovering that the generated scaffolding masked a set of production-grade failures. The engagement that followed was scoped as Platform Escape (App Migration) ($9,999 fixed fee).
Audit findings on day zero
What the first production-readiness pass uncovered before a single line of code was changed. Each finding is a specific Bolt.new failure mode we’ve seen repeat across engagements.
- F01
Token spiral: ~$900/month on edits that didn't stick
The founder's words: "Bolt.new ate tokens like a parking meter eats coins." A typical session: ask Bolt to change a dropdown label, watch it rewrite four unrelated components, spend 180k tokens, then discover the label still says the old thing. Over four months, $4,200 burned. The founder estimated about a third of every working week was spent rewording prompts trying to coax Bolt into making the change without rewriting half the app, and another third was spent reverting unwanted changes Bolt had made along the way. Real product work was the remaining third.
- F02
Deploys forking into parallel 'universes'
Every deploy spun up a new production URL instead of updating the existing site. Customers' bookmarks, SSO links, and email receipts pointed at stale versions. Support tickets accumulated at the rate of 2–3 per deploy. The founder eventually built a private Notion page documenting which production URL was the 'real' one this week, and shared the link with their support contractor — who then had to forward it to confused customers individually. By the time we arrived, the founder was actively avoiding deploying because every deploy meant a half-day of manual cleanup.
- F03
A contract dev who wouldn't touch the code
The founder paid a contractor for a month. The contractor refused to commit anything: "I can't tell what Bolt generated versus what I wrote, and I can't run the stack outside Bolt, so I can't guarantee my changes won't be clobbered next prompt." Zero PRs shipped. The contractor was a senior engineer with 12 years of Node experience; the issue was not skill, it was that Bolt's environment had no separation between human code and machine code, no version control discipline, and no way to test anything locally. No senior engineer can professionally sign off on a codebase they cannot run on their own laptop.
- F04
No export path for customer data
The Bolt-managed KV store had 14 months of scheduling data and audit trails. There was no documented export; the founder's plan was to "hope" it migrated cleanly when they moved off. We later discovered the KV store also held some operational metadata — the per-customer time-zone offsets — that lived nowhere else. Losing that during a hasty export would have silently corrupted every customer's published shift schedules until somebody noticed.
- F05
Tests didn't exist and couldn't exist
Bolt's environment ran everything in a WebContainer. The app assumed Node APIs that don't exist in any real server runtime, so local testing was impossible. The contractor had attempted to set up Vitest locally and discovered that roughly 30 of the imported modules did not actually run outside Bolt — they were polyfills the WebContainer provided that did not exist anywhere else. There was no path to test coverage without first rebuilding the app on a portable runtime.
Root cause of the Bolt.new failure mode
Bolt.new optimised for making changes feel fast — at the cost of making changes be real. The 'universe' deploy pattern, the hidden KV store, and the WebContainer runtime created a stack that only existed inside Bolt. The causal chain: Bolt's sandbox abstracts away infrastructure → the generated code silently depends on that abstraction → any attempt to leave the sandbox (a contractor, a test, a migration) surfaces the dependency all at once → the founder pays tokens to paper over each surfacing instead of fixing the root dependency. Migrating off Bolt wasn't the hard part. Migrating off Bolt without losing the customers whose data lived inside it was. The reason the contractor refused to commit wasn't laziness; it was that committing meant taking professional responsibility for code that could be silently overwritten by Bolt the next time the founder typed a prompt. No senior engineer will sign their name to that arrangement, and Shiftwright had hit the ceiling of what a no-engineer SaaS can achieve on a sandboxed platform. The path forward was not optional: own the stack or lose the contractor and the customers along with it.
How we fixed the Bolt.new rescue stack
Each step below is one remediation workstream from the engagement. In cases where the underlying data includes before/after code vignettes, those render inline; otherwise we describe the change in prose.
- 01
Reverse-engineered the Bolt KV store schema by intercepting network traffic in a headless browser session, then dumped all 14 months of scheduling data to JSON under the founder's supervision.
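The export-and-normalise step can be sketched as a pure transform over the dumped JSON. Everything here is illustrative: the `KvShiftRecord` key format, the field names, and the per-customer `tzOffsetMinutes` shape are assumptions standing in for the reverse-engineered schema, not the real dump format.

```typescript
// Hypothetical shape of one record from the Bolt KV dump.
// Key format and field names are illustrative, not the real schema.
interface KvShiftRecord {
  key: string;              // e.g. "shift:cust_42:2024-03-11T09:00"
  value: {
    employee: string;
    startUtc: string;       // ISO timestamp, stored in UTC
    durationMin: number;
  };
}

// The operational metadata that lived nowhere else (finding F04).
interface KvCustomerMeta {
  customerId: string;
  tzOffsetMinutes: number;
}

interface ShiftRow {
  customerId: string;
  employee: string;
  startLocal: string;       // ISO timestamp shifted into the customer's zone
  durationMin: number;
}

// Flatten dumped KV records into Postgres-ready rows, resolving each
// customer's time-zone offset. Throws instead of silently dropping a
// shift whose offset is missing — the silent-corruption failure mode
// the audit flagged.
function toShiftRows(
  records: KvShiftRecord[],
  meta: KvCustomerMeta[],
): ShiftRow[] {
  const offsets = new Map(meta.map((m) => [m.customerId, m.tzOffsetMinutes]));
  return records.map((r) => {
    const customerId = r.key.split(":")[1];
    const offset = offsets.get(customerId);
    if (offset === undefined) {
      throw new Error(`no tz offset for customer ${customerId}`);
    }
    const start = new Date(r.value.startUtc);
    start.setUTCMinutes(start.getUTCMinutes() + offset);
    return {
      customerId,
      employee: r.value.employee,
      startLocal: start.toISOString(),
      durationMin: r.value.durationMin,
    };
  });
}
```

The fail-loud check matters more than the transform itself: a lookup that defaulted a missing offset to zero would have produced schedules that looked right in UTC and wrong in every customer's inbox.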
- 02
Stood up a clean Next.js 16 App Router project with Drizzle ORM pointed at a new Neon Postgres. Designed the schema to match the reconstructed Bolt data model 1:1, with extra indexes on the three queries the app actually made.
- 03
Rebuilt the UI in Next.js, component by component, using the existing Bolt-generated JSX as a spec but normalising the styling and pulling data through server components instead of client-side fetches.
- 04
Migrated auth to Clerk with a one-time token exchange: on first login post-migration, each existing user was issued a signed magic link that re-bound their Clerk account to their prior Shiftwright ID. Zero password resets required.
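The signed magic-link exchange can be sketched with an HMAC over the legacy binding. This is a minimal illustration of the scheme, not the engagement's actual implementation: the production flow went through Clerk's APIs, and the secret, payload fields, and function names here are all assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative only — the real secret lived in an env var.
const SECRET = "demo-only-secret";

// Issue a signed token binding a legacy Shiftwright user id to the
// email the magic link is sent to, with an expiry.
function issueMagicToken(
  legacyUserId: string,
  email: string,
  expiresAtMs: number,
): string {
  const payload = Buffer.from(
    JSON.stringify({ legacyUserId, email, expiresAtMs }),
  ).toString("base64url");
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `${payload}.${sig}`;
}

// Verify signature and expiry; return the bound identity or null.
// Constant-time comparison avoids leaking signature bytes via timing.
function verifyMagicToken(
  token: string,
  nowMs: number,
): { legacyUserId: string; email: string } | null {
  const [payload, sig] = token.split(".");
  if (!payload || !sig) return null;
  const expected = createHmac("sha256", SECRET).update(payload).digest();
  const given = Buffer.from(sig, "base64url");
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
    return null;
  }
  const data = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (nowMs > data.expiresAtMs) return null;
  return { legacyUserId: data.legacyUserId, email: data.email };
}
```

On first post-migration login, a valid token re-binds the Clerk account to the prior Shiftwright ID; a tampered or expired token simply falls through to the normal sign-up path, so no user ever sees a password reset.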
- 05
Set up GitHub Actions CI with Playwright end-to-end tests covering the four critical paths (sign-in, create shift, publish schedule, export timesheet). The contractor could finally commit; their first PR merged in week three.
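The CI wiring for that step looks roughly like the workflow below. This is a minimal sketch, not the engagement's actual config: the Node version, spec filename, and job layout are assumptions.

```yaml
name: ci
on: [push, pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps chromium
      # Four critical paths: sign-in, create shift, publish schedule,
      # export timesheet (hypothetical spec filename).
      - run: npx playwright test e2e/critical-paths.spec.ts
```

The point is less the specific steps than that the suite runs on every PR — which is what made it possible for the contractor to commit with confidence.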
- 06
Deployed to Vercel with a single stable production URL, a preview environment per PR, and a cutover plan: DNS pointed at Vercel at 02:00 UTC on a Sunday, with a 15-minute dual-write window. Three customers were online during cutover; none noticed.
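The 15-minute dual-write window can be sketched as a wrapper that fans each write out to both stores. This is an illustrative sketch of the pattern, with hypothetical names; it assumes the new Postgres layer is authoritative and a legacy-store failure is queued for reconciliation rather than surfaced to the user.

```typescript
// One write target: the legacy Bolt-backed store or the new Postgres layer.
type Writer = (shift: { id: string; payload: unknown }) => Promise<void>;

// During the cutover window, every write goes to both stores.
// The new store is authoritative: a Postgres failure fails the request,
// while a legacy failure is recorded for later reconciliation.
function makeDualWriter(
  writeNew: Writer,
  writeLegacy: Writer,
  onLegacyError: (shiftId: string, err: unknown) => void,
): Writer {
  return async (shift) => {
    await writeNew(shift); // authoritative — let errors propagate
    try {
      await writeLegacy(shift);
    } catch (err) {
      onLegacyError(shift.id, err); // reconcile later, don't block the user
    }
  };
}
```

Once DNS fully propagated, the wrapper was removed and `writeNew` became the only path, which is why the three customers online at 02:00 UTC never noticed the cutover.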
- 07
Stripe billing was migrated alongside the data: every existing subscription was re-created in the new Stripe account through the Customer Portal, the founder approved the test charge to a personal card, then the live charge cutover ran with no card-on-file friction for any of the 84 customers. Two customers had attempted refunds during the prior month that Bolt's webhook had silently lost; both were honoured manually as part of the cutover and resolved a long-running support thread.
- 08
Wrote the founder a 14-page operations runbook covering deploy, rollback, env var management, secret rotation cadence, an on-call playbook, and a dependency upgrade checklist tied to npm-check-updates. The contractor uses it as their reference; the founder reads it once a quarter to remind themselves what could go wrong.
“The deploy forking thing alone was driving me insane. Every push, another universe. Afterbuild Labs got me to a stack where I push once and it just updates. My contractor actually writes code now instead of refusing to. The $900 a month I was burning on tokens pays their retainer.”
Outcomes after the rescue
Every metric below was measured directly rather than estimated — from the Bolt and Stripe billing dashboards, CI deploy logs, and the GitHub merge history.
| Metric | Before | After |
|---|---|---|
| Monthly AI token spend | $900+ | $0 |
| Deploy time | ~40 min (forking universes) | 90 seconds |
| Production URLs in customer inboxes | 17 stale | 1 canonical |
| Contractor PRs merged | 0 in 4 weeks | 11 in 3 weeks |
| Customers lost during migration | — | 0 of 84 |
| Total engagement length | — | 5 weeks |
| MRR post-migration | $9.1k | $11.4k (3 paused customers returned) |
What we'd do differently
- →We'd start the data export on day one in parallel with the Next.js scaffold, not week two. The KV reverse-engineering ate more calendar time than it needed to because we serialised the work.
- →We'd insist on a 72-hour freeze on Bolt edits the moment the contract starts. The founder shipped one 'small fix' in week two that we then had to port twice.
- →We'd run the Playwright suite against a copy of production data, not synthetic fixtures, from week one. The four critical-path tests passed against fixtures and then surfaced one edge case (a customer with 11 simultaneous shifts) the moment they ran against real data. Two extra days, easily preventable.
- →We'd negotiate the Stripe account migration earlier with the founder. The new account took 9 calendar days to verify due to KYC document review; we had to leave the old Bolt-managed Stripe account active for an extra week, which meant maintaining dual webhook handlers temporarily. Beginning that paperwork on day one would have saved a week of duplicated work at the end.
How to replicate this Bolt.new rescue
The same engagement path runs across every B2B SaaS (workforce scheduling) rescue we take on. Start with the diagnostic, then route into the service tier that matches the breakage surface.
Similar B2B SaaS (workforce scheduling) rescues
Browse the full archive of Bolt.new and adjacent AI-builder rescue write-ups.
Related industry deep-dive
Token spirals, unstable deploy pipelines, and tests that can't leave the builder sandbox are the recurring B2B SaaS failure modes this escape surfaced. The vertical page covers the multi-tenant, RBAC, and Stripe-subscription production concerns every post-MVP SaaS hits within its first hundred paying customers.
Got a broken Bolt.new app that looks like this one?
Send the repo. We'll tell you what it takes to ship — in 48 hours, fixed fee. Free diagnostic, no obligation.
Book free diagnostic →