Prompt injection is sometimes called an "unsolvable" problem. But we shouldn't give up that easily. I've been on a mission to introduce hard security boundaries into AI agents, and this is the next step: what if compromised MCP tools couldn't just passively prompt-inject your app?
Agents that load dynamic MCP tools risk security and quality issues:
• Prompt injection
• Unreliable tool calls
• Unexpected changes
• Wasted tokens

mcp-to-ai-sdk generates static tools you control, so they stay stable and predictable. vercel.com/blog/generate-…
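
To make the idea concrete, here's a minimal sketch of what a static, checked-in tool definition can look like with the AI SDK's tool() helper and a zod schema. The tool name, fields, and endpoint are illustrative assumptions (not actual mcp-to-ai-sdk output), and exact field names can vary by SDK version.

```ts
// Hypothetical example of a static tool definition living in your repo.
// Instead of pulling tool schemas from an MCP server at runtime,
// the description and schema are frozen at build time and reviewed like any other code.
import { tool } from 'ai';
import { z } from 'zod';

// Assumed tool: a simple weather lookup. Name, schema, and endpoint are placeholders.
export const getWeather = tool({
  description: 'Get the current weather for a city',
  parameters: z.object({
    city: z.string().describe('City name, e.g. "Berlin"'),
  }),
  execute: async ({ city }) => {
    // The model only ever sees the fixed description and schema above,
    // so a compromised upstream server can't silently swap them out later.
    const res = await fetch(
      `https://api.example.com/weather?city=${encodeURIComponent(city)}`,
    );
    return res.json();
  },
});
```

Because the definition is version-controlled, any change to the tool's description or schema shows up in code review instead of flowing straight into the model's context.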