What the regression loop looks like
You ask the AI builder to fix a bug. It does. A different part of the app breaks. You ask it to fix the new break. It does. The original bug returns, sometimes in a slightly different form. You prompt again, more specifically. Two more bugs appear. You spend an afternoon and $40 in credits and end the session with more broken features than you started with. This is the single most-quoted pain in 2026 vibe-coding content; one Bolt.new founder reported 20 million tokens spent on a single authentication issue.
Why the loop happens
AI coding tools have a context window, not a persistent mental model. On every turn they re-read the relevant files, form a local theory of what the code does, and edit. Two things go wrong:
- **Locally reasonable, globally wrong edits.** The fix makes sense in the file the model looked at, but breaks an invariant the model didn't see. The broken invariant lives three files away.
- **State drift across turns.** Each edit changes what the next turn “knows” about the app. Cumulative edits produce a codebase the model didn't author and can't fully read on any single turn.
Published benchmarks back this up. Our 2026 vibe-coding research summarises the industry data: vulnerability rates in AI-generated code approaching 50%, and a failure rate that doesn't drop with more prompting, because the prompting produces more code in the same shape. The Stripe benchmark on AI-built integrations documented a similar plateau: repeated attempts converged on broken patterns rather than correct ones.
The three rules for stopping
We wrote these rules for our own rescue clients. They reliably cap credit burn:
- **The rule of five.** If you've prompted the same bug five times without resolution, stop. The sixth prompt will not resolve it; five failed attempts are evidence that the model cannot see what's wrong.
- **The rule of no-worse.** If a fix introduces a new bug, stop immediately — do not prompt to fix the new bug. You're in the loop. Revert to the last working state.
- **The rule of two hours.** If credits spent on this feature have exceeded the cost of two hours of engineer time (roughly $300), stop and call an engineer. You are already over budget for a human fix.
Credit spend: what normal vs abnormal looks like
| Task | Typical healthy spend | Loop warning |
|---|---|---|
| New feature (small) | $5–$20 | > $50 |
| Bug fix (single) | $1–$10 | > $40 |
| Integration (Stripe, OAuth) | $20–$80 | > $200 |
| Auth flow repair | $10–$40 | > $150 |
| Performance tuning | $15–$60 | > $250 |
Based on aggregated 2026 founder reports across Lovable, Bolt.new, and Base44. Not a proprietary benchmark.
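If you track spend per feature, the table's loop-warning column reduces to a lookup. A sketch with the thresholds above; the task keys are illustrative labels, not an API:

```python
# Loop-warning thresholds (USD) from the table above; aggregated
# founder reports, not a proprietary benchmark.
LOOP_WARNING_USD = {
    "new_feature_small": 50,
    "bug_fix_single": 40,
    "integration": 200,
    "auth_flow_repair": 150,
    "performance_tuning": 250,
}

def in_loop_territory(task: str, spend_usd: float) -> bool:
    """True when credit spend on a task has crossed its loop-warning line."""
    return spend_usd > LOOP_WARNING_USD[task]
```

So a single bug fix at $45 is already in loop territory, while an integration at $80 is still within the healthy band.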
Why more prompts don't help
The intuition “if I just prompt more specifically, it'll work” is half-true. Specificity reduces loop frequency for well-scoped features the model understands. It doesn't help for:
- Bugs that live in the interaction between multiple files.
- Environmental issues (env vars, deploy config, OAuth redirect URIs).
- Security and correctness invariants the model doesn't track (RLS, idempotency, signature verification).
- Anything that requires reading production logs or reproducing a race condition.
These are exactly the problems vibe-coded apps need fixed. Which is why the loop is so expensive: the model's strengths (scaffolding, one-shot features) are not the problems you have at month three.
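To make one of those untracked invariants concrete: webhook signature verification is a check that passes every happy-path test, so an AI edit can silently drop it. A minimal sketch using Python's standard-library `hmac`; the function name and secret are hypothetical:

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Reject any webhook whose HMAC-SHA256 signature doesn't match.

    A model rewriting the handler around this check has no test failure
    to warn it that removing the check breaks a security invariant.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, signature_hex)
```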
The three escape routes
- **Revert and rewrite the specific feature manually.** If you're technical enough to read the relevant files, open them and make the change by hand. Free, fast, and exits the loop.
- **Hand the specific bug to a senior engineer.** Work in a code-review tool (Cursor, Claude Code) with a reproducing test attached. This is the mid-cost path; it keeps you in an AI-assisted workflow but with a human in the loop. Typical fix: 2–6 hours.
- **Book a fixed-fee integration fix.** Integration Fix closes one specific broken seam for $1,500–$2,500 in 5–10 working days. If credits spent are already above that, this is the cost-controlled option.
A real-world example
A founder we diagnosed in March 2026 had spent $4,800 in Bolt.new credits on an OAuth redirect bug over six weeks. The fix was 40 minutes of human work: pull the redirect URI from an environment variable, register the new URI in the Google OAuth console, test in preview. The loop kept happening because every fix the AI applied used a hard-coded URL, which then broke on the next deploy. The model couldn't see the deploy context, so it couldn't fix the bug. The founder's Integration Fix invoice: $1,500. Credits saved going forward: several hundred dollars per month.
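The shape of that 40-minute fix, reconstructed as a hedged sketch (the variable name `OAUTH_REDIRECT_URI` is illustrative): read the redirect URI from the environment instead of hard-coding a deploy URL, and fail loudly when it's missing.

```python
import os

def oauth_redirect_uri() -> str:
    """Return the redirect URI for the current deploy environment."""
    uri = os.environ.get("OAUTH_REDIRECT_URI")
    if not uri:
        # Fail loudly in a misconfigured environment rather than
        # silently redirecting to a stale hard-coded URL.
        raise RuntimeError("OAUTH_REDIRECT_URI is not set for this deploy")
    return uri
```

Each deploy target (preview, staging, production) then sets its own value, and the URI registered in the Google OAuth console stays in sync with the environment instead of the source code.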
Prevention: how to avoid the loop on your next feature
- Write a test first (or ask the AI to), and refuse to accept a change that doesn't pass it.
- Scope prompts to a single file where possible; avoid “fix this across the app” requests.
- Version-control aggressively: commit before every prompt so revert is a one-liner.
- Track credit spend per feature. If it crosses the “loop warning” line above, stop.
- Do a pre-launch audit once before production. Most post-launch loops would have been caught here.
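The test-first rule can be a single assertion written before the prompt. A hypothetical example for a currency-formatting bug; both function names are invented for illustration:

```python
def format_cents(cents: int) -> str:
    """The function under repair (shown here in its fixed form)."""
    return f"${cents // 100}.{cents % 100:02d}"

def test_format_cents_pads_single_digit():
    # Written before prompting: the original bug rendered "$10.5".
    # Refuse any AI-proposed change that doesn't pass this.
    assert format_cents(1005) == "$10.05"
```

The test is the contract: if a later "fix" to an unrelated feature regresses the padding, the loop is caught at commit time instead of three prompts later.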