
Tweet by @cramforce


Dude that sells you servers goes around here saying "serverless is dead because of long-running agents."

What is true is that agents can run for a long time, and that this doesn't fit into the execution duration of a single serverless function invocation. But agents don't run high-density compute for hours on end. They:

- call LLMs (duh)
- perform business logic
- wait for events

Notably, many of these things can fail, and once you go into hours of duration, you must anticipate that the machine you are running on will go away, no matter how serverful you are. On the other hand, any actual individual compute task within the long-running agent is typically quite short.

Imagine an agent that has worked for 2 hours on a task and now a failure occurs. It could be the host machine getting recycled, or just a plain-old outage of an API that the agent wants to call. Do you fail the agent and start from scratch??? No, you gotta retry that unit of work and proceed. You lose 1 second of time rather than redoing 2 hours (and spending the tokens again).

So, in reality, agents must be architected to be durable across task failures. This happens to introduce exactly the same complexity-cost as supporting a serverless architecture. Hence, the argument that serverless isn't a good fit couldn't be further from the truth: you pay the "serverless tax" whether you go serverful or serverless, but you only get the serverless benefits if you go serverless.

We built Workflow DevKit (https://t.co/QT5rc3uywf) to reduce the cost of durable execution to almost zero. And we make Sandboxes (https://t.co/iE59UsL7SQ) available so agents have access to stateful compute when they actually need it. This is the right path to go down, rather than renting computers and hoping nothing goes wrong.
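The checkpoint-and-retry idea behind "retry that unit of work and proceed" can be sketched in a few lines. This is a minimal illustration, not Workflow DevKit's actual API: `DurableRun`, `step`, and the in-memory checkpoint store are all hypothetical names, and a real system would persist checkpoints outside the process so they survive the machine going away.

```typescript
type StepFn<T> = () => Promise<T>;

// Hypothetical sketch of durable step execution: each completed unit of
// work is checkpointed by name, so a failure retries only the failing
// step instead of redoing the whole multi-hour run.
class DurableRun {
  // In-memory for illustration only; a real implementation would use a
  // durable store (database, queue) that outlives this machine.
  private checkpoints = new Map<string, unknown>();

  async step<T>(name: string, fn: StepFn<T>, retries = 3): Promise<T> {
    // Already completed in a previous attempt? Skip the work entirely.
    if (this.checkpoints.has(name)) {
      return this.checkpoints.get(name) as T;
    }
    let lastError: unknown;
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        const result = await fn();
        // Checkpoint before moving on, so a later crash never re-runs
        // this step.
        this.checkpoints.set(name, result);
        return result;
      } catch (err) {
        lastError = err; // transient failure: retry just this step
      }
    }
    throw lastError;
  }
}
```

An agent loop would then wrap each LLM call, business-logic step, and event wait in `run.step("name", …)`; a flaky API costs one retry of that step, not a restart of the agent.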
