
Tweet by @rauchg


The Fluid team took “pay for what you use” to the next level. You pay for what you 𝘢𝘤𝘵𝘶𝘢𝘭𝘭𝘺 use. For what you compute. It’s over for paying for idle time. That model is anxiety-inducing: a backend you don’t control can go down or become slow, and you’d have to pay for it. Plus, AI models are slow. You want a compute platform that leans into their slowness. o3 Pro can spend 15 minutes thinking! I believe this is a fundamental leap. There have been versions of this pricing model for limited, proprietary runtimes. This works for Node, Python, and more to come.

Vercel
@vercel

Fluid compute now uses Active CPU pricing. Only pay CPU rates when your function is actively computing. Building on existing Fluid gains, this brings additional cost savings of up to 90% for workloads like LLM calls, AI agents, or tasks with idle time. vercel.com/blog/introduci…
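To see why idle-heavy workloads benefit, here is a minimal back-of-the-envelope sketch of wall-clock billing versus active-CPU billing. All rates and durations are made up for illustration; they are not Vercel’s actual prices.

```python
# Hypothetical comparison of two serverless billing models.
# Rates and durations below are invented for the example only.

def cost_wall_clock(duration_s: float, rate_per_s: float) -> float:
    """Classic model: billed for the entire invocation, idle time included."""
    return duration_s * rate_per_s

def cost_active_cpu(active_cpu_s: float, rate_per_s: float) -> float:
    """Active CPU model: billed only while the CPU is actually computing."""
    return active_cpu_s * rate_per_s

# An LLM-calling function: 60 s wall clock, but only 2 s of real compute;
# the rest is spent waiting on the model's response.
duration, active = 60.0, 2.0
rate = 0.000128  # hypothetical $/CPU-second

old = cost_wall_clock(duration, rate)
new = cost_active_cpu(active, rate)
savings = 1 - new / old
print(f"savings: {savings:.0%}")  # ~97% in this made-up scenario
```

The more time a function spends waiting on an upstream model or API, the larger the gap, which is why workloads like LLM calls and agents see the biggest savings.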
