available for some runtimes (e.g. not for Node.js) and its impact is limited. Lambda strictly assigns one VM to a request: if you talk to an LLM for 60 seconds, you pay for 60 seconds. On Vercel's Fluid you pay for the tens of milliseconds you actually spent processing the LLM response (plus a very small memory charge to avoid abuse). For such workloads Fluid is orders of magnitude more efficient than Lambda. We have customers who serve hundreds of concurrent requests with a single instance equivalent to one Lambda; it would take hundreds of Lambdas to do the same.
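A rough sketch of the shape of such a workload, assuming a web-standard fetch handler (the URL and handler are illustrative, not Vercel's actual API). The point is that almost all of the request's wall-clock time is idle I/O wait, during which a concurrency-based runtime can serve other requests on the same instance:

```ts
export async function handler(req: Request): Promise<Response> {
  // I/O wait: potentially tens of seconds while the LLM streams back,
  // but ~zero CPU used on this instance during that time.
  const llmResponse = await fetch("https://llm.example.com/v1/generate", {
    method: "POST",
    body: await req.text(),
  });

  // Active compute: typically tens of milliseconds to process the result.
  // Per-request-VM billing (Lambda) charges for the whole wait above;
  // active-CPU billing only charges for this part.
  const data = await llmResponse.json();
  return Response.json(data);
}
```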