FUNCTION_INVOCATION_TIMEOUT · Task timed out after 10.00 seconds
Appears when a serverless or Edge route runs longer than the plan's wall-clock cap — 10s Hobby, 60s Pro default, 25s Edge
Vercel function timeout error
Vercel kills any function that runs past its plan cap. OpenAI calls, slow DB joins, and heavy SSR routes all hit this. Fix by streaming, queuing, or moving to a longer runtime.
Fix it by streaming the Response, offloading long work to a queue (Inngest, QStash, Trigger.dev), or extending the cap with export const maxDuration = 60 on Pro.

Quick fix for Vercel function timeout
```typescript
// app/api/chat/route.ts — stream OpenAI responses so the function stays under the timeout
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Edge runtime has a hard 25s cap but starts cold faster and streams natively
export const runtime = "edge";
// On Node runtime you can extend to 60s on Pro: export const maxDuration = 60;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    stream: true,
  });

  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        const text = chunk.choices[0]?.delta?.content ?? "";
        if (text) controller.enqueue(encoder.encode(text));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: {
      "content-type": "text/plain; charset=utf-8",
      "cache-control": "no-store",
    },
  });
}
```

Deeper fixes when the quick fix fails
01 · Move PDF generation to Inngest background queue
```typescript
// app/api/reports/route.ts — enqueue instead of blocking
import { inngest } from "@/lib/inngest";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { reportId } = await req.json();
  // enqueue — returns in milliseconds
  const { ids } = await inngest.send({
    name: "reports/generate.pdf",
    data: { reportId },
  });
  return NextResponse.json({ jobId: ids[0] });
}
```

```typescript
// inngest/reports.ts — the slow job runs on Inngest's infra
import { inngest } from "@/lib/inngest";

export const generateReport = inngest.createFunction(
  { id: "reports-generate-pdf", retries: 3 },
  { event: "reports/generate.pdf" },
  async ({ event, step }) => {
    const data = await step.run("fetch-data", () => fetchReportData(event.data.reportId));
    const pdf = await step.run("render-pdf", () => renderPdf(data));
    await step.run("upload", () => uploadToStorage(event.data.reportId, pdf));
    return { ok: true };
  },
);
```

02 · Extend Node runtime on Pro with maxDuration
```typescript
// app/api/slow-report/route.ts — only works on Pro plan
export const runtime = "nodejs";
export const maxDuration = 60; // max 300 on Pro

export async function GET() {
  const result = await slowQuery(); // takes up to 45s
  return Response.json(result);
}
```

03 · Warm Neon compute with a cron ping
```typescript
// app/api/cron/warm-db/route.ts — keep Neon compute awake during business hours
import { db } from "@/lib/db";

export const runtime = "nodejs";
export const maxDuration = 10;

export async function GET(req: Request) {
  if (req.headers.get("authorization") !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("unauthorized", { status: 401 });
  }
  await db.execute("select 1");
  return Response.json({ ok: true });
}
```

```jsonc
// vercel.json — run every minute, 9am-6pm UTC
{
  "crons": [
    { "path": "/api/cron/warm-db", "schedule": "* 9-18 * * *" }
  ]
}
```

Why AI-built apps hit Vercel function timeout
Vercel runs Node and Edge functions on AWS Lambda and Cloudflare Workers respectively. Both platforms enforce a wall-clock timeout. Vercel exposes this via plan tiers: 10 seconds on Hobby, 60 seconds default on Pro (configurable up to 300 with maxDuration), 25 seconds on Edge regardless of plan. Once the cap is hit, the function process is killed and Vercel returns a 504 with FUNCTION_INVOCATION_TIMEOUT. The client sees a gateway error; no body, no partial data.
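The tiers above condense into a small lookup. A sketch of those limits only; the helper name and shape are illustrative, not a Vercel API:

```typescript
// Wall-clock caps in seconds, per plan and runtime, per the limits quoted above.
type Plan = "hobby" | "pro";
type Runtime = "nodejs" | "edge";

function timeoutCap(plan: Plan, runtime: Runtime, maxDuration?: number): number {
  if (runtime === "edge") return 25; // fixed regardless of plan
  if (plan === "hobby") return 10;   // maxDuration is silently ignored on Hobby
  // Pro Node runtime: 60s default, configurable up to 300 via `export const maxDuration`
  return Math.min(maxDuration ?? 60, 300);
}
```

Note the Hobby case: setting maxDuration = 60 there changes nothing, which is exactly the silent-cap trap in the builder table below.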
AI builders scaffold OpenAI handlers that call chat.completions.create without streaming. A short completion takes two seconds and works in preview. A real product question with tool use takes fifteen to forty seconds and times out on Hobby every time. The fix is streaming — you return a Response backed by a ReadableStream. The function returns the first byte in milliseconds; Vercel counts that as "done" for the purposes of the timeout and the client receives tokens until the model finishes.
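On the client side, the streamed body is consumed incrementally with a ReadableStream reader. A minimal sketch built only on the standard Fetch and Streams APIs; the helper is hypothetical:

```typescript
// Read a streamed text response chunk by chunk, e.g. to append tokens to the UI
// as they arrive instead of waiting for the full completion.
async function readTextStream(
  res: Response,
  onChunk: (text: string) => void,
): Promise<string> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```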
The second source is slow database work. Cold Neon compute wakes in three to five seconds. A complex join over a few million rows can run for ten seconds. Any function that hits both a cold Neon branch and a complex query will time out on the first request and succeed on the second. The fix is to warm the database with a cron ping, add appropriate indexes, or move the query to a background job.
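Until a warm-up cron or index lands, a route can also fail fast by racing the query against its own deadline and returning a clean retryable error instead of a platform-level 504. A generic sketch, not a Vercel or Neon API:

```typescript
// Race a promise against a deadline shorter than the plan cap, so the route
// can return its own error (e.g. 503 with Retry-After) rather than being
// killed mid-query with FUNCTION_INVOCATION_TIMEOUT.
function withDeadline<T>(work: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`deadline of ${ms}ms exceeded`)),
      ms,
    );
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

On a 10-second Hobby cap, wrapping the query as withDeadline(slowQuery(), 8_000) leaves room to respond before Vercel pulls the plug; the retried request then hits warm compute.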
The third pattern is long-running work that never should have lived inside an HTTP request: PDF rendering, email blasts, image generation, data exports. None of those belong in a serverless function. Move them to Inngest, QStash, or Trigger.dev. Your API route enqueues the job and returns immediately. The queue service runs the work on dedicated infrastructure with no 60-second cap, retries on failure, and calls a webhook when done.
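The enqueue-and-return control flow is easy to see with a toy in-memory queue. A sketch only: real services like Inngest, QStash, and Trigger.dev persist jobs, retry on failure, and run workers on their own infrastructure.

```typescript
// Minimal in-memory queue illustrating the shape: enqueue() returns an id
// immediately; the slow work runs later in drain(), outside the request.
type Job = { id: string; run: () => Promise<void> };

class TinyQueue {
  private jobs: Job[] = [];
  private seq = 0;

  // Called from the API route; returns in microseconds, never blocks.
  enqueue(run: () => Promise<void>): string {
    const id = `job_${++this.seq}`;
    this.jobs.push({ id, run });
    return id;
  }

  // In a real queue this loop lives on worker infrastructure with no 60s cap.
  async drain(): Promise<void> {
    while (this.jobs.length) await this.jobs.shift()!.run();
  }
}
```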
Vercel function timeout by AI builder
How often each AI builder ships this error and the pattern that produces it.
| Builder | Frequency | Pattern |
|---|---|---|
| Lovable | Every AI app | Uses chat.completions.create without stream: true |
| Bolt.new | Common | Generates PDFs or sends bulk emails inside an API route |
| v0 | Common | SSR-renders huge pages with synchronous DB calls |
| Cursor | Sometimes | Sets maxDuration = 60 on a Hobby plan — silently capped to 10s |
| Replit Agent | Rare | Opens a raw Postgres client per request — cold start adds 3-5s |
Stop Vercel function timeout recurring in AI-built apps
- Default to streaming for every OpenAI route — use the AI SDK's streamText helper.
- Move any work that can take over 5 seconds to Inngest, QStash, or Trigger.dev.
- Warm Neon or PlanetScale with a 1-minute cron during business hours.
- Set maxDuration explicitly on every route and confirm it matches your plan.
- Add an uptime test that hits the slowest route with a realistic payload before every deploy.
Still stuck with Vercel function timeout?
Vercel function timeout questions
What is the default Vercel function timeout?
Why does my OpenAI call always time out on Vercel?
How do I run a job that needs to take two minutes?
Is the Edge runtime faster than the Node runtime on Vercel?
How long does a Vercel function timeout fix take?
Ship the fix. Keep the fix.
Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.
Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read about our rescue methodology.