The Real Question Behind the AI Agent Boom: Who Controls the Execution Layer?
As AI agents gain access to your wallet, the infrastructure they run on matters as much as the AI itself.
The AI agent boom in crypto has been hard to miss. A growing share of onchain volume is now being routed through autonomous agents, and the number of products promising to automate your DeFi strategy, execute trades on your behalf, and manage your portfolio while you sleep has expanded rapidly.
Most of the conversation has focused on the intelligence layer: which AI model is running, how sophisticated the reasoning is, how well it understands natural language instructions. That is a reasonable thing to care about.
But it is not the most important question. The most important question is: who controls the execution layer?
Two Ways to Give an Agent Access
When an agent acts on your behalf in DeFi, it needs to interact with smart contracts. It needs to sign transactions. And for it to sign transactions, it needs some form of access to your wallet. How that access is structured determines your exposure if the agent is compromised, your ability to revoke access when something goes wrong, and whether you remain in control of your assets at all.
There are two fundamentally different ways to give an agent that access. The first is custodial: you hand over your keys, or the platform generates keys on your behalf and holds them. The agent then acts freely within the wallet because it has full control. This approach is simpler to build and produces a smoother initial user experience. It is also the architecture that has produced the most damaging incidents in this space.
The second approach is non-custodial with delegation. You keep your keys. You grant the agent a specific, revocable permission to act on your behalf within limits you define. The agent operates within that boundary. If something goes wrong, you revoke access. Your keys, and therefore your assets, were never at risk.
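The bounded-delegation model described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the idea, not any real wallet's API: the type names, fields, and functions are all assumptions made for the example. The point is that every agent action is checked against an explicit grant, and the grant can be killed at any time.

```typescript
// Hypothetical sketch of a scoped, revocable delegation.
// All names and fields here are illustrative, not a real wallet API.

type Delegation = {
  agent: string;               // address of the delegated agent
  allowedContracts: string[];  // contracts the agent may call
  spendCapWei: bigint;         // per-action spend limit
  expiresAt: number;           // unix timestamp; grant dies on its own
  revoked: boolean;            // user-controlled kill switch
};

function createDelegation(
  agent: string,
  allowedContracts: string[],
  spendCapWei: bigint,
  ttlSeconds: number
): Delegation {
  return {
    agent,
    allowedContracts,
    spendCapWei,
    expiresAt: Math.floor(Date.now() / 1000) + ttlSeconds,
    revoked: false,
  };
}

// The wallet checks every agent-initiated action against the grant.
// A compromised or confused agent can do nothing outside these bounds.
function canExecute(
  d: Delegation,
  target: string,
  valueWei: bigint,
  now: number
): boolean {
  if (d.revoked || now > d.expiresAt) return false;
  if (!d.allowedContracts.includes(target)) return false;
  return valueWei <= d.spendCapWei;
}

// Revocation is unilateral: the user never needs the agent's cooperation.
function revoke(d: Delegation): void {
  d.revoked = true;
}
```

Note what the keys never do in this sketch: they never move. The agent holds a permission, not a secret, which is why "something went wrong" degrades to "revoke and walk away" rather than "the funds are gone."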
What Happens When Things Go Wrong
The distinction sounds technical. The consequences are not.
When an agent runs on a custodial architecture and something goes wrong, your recourse is limited. You are trusting that the platform’s controls are robust, that its AI models behave as intended, and that its infrastructure cannot be compromised. These are significant assumptions to make about any system, let alone one operating in a fast-moving space where even well-designed AI models can misinterpret user intent.
The incidents that have already occurred in this space are instructive. When an AI model misinterprets a user's instructions and executes actions they did not authorize, the damage is proportional to the access that model has. On a custodial platform, that can mean substantial losses with limited ability to stop it. On a properly permissioned, non-custodial platform, the agent is bounded. It can only do what it was given permission to do, and nothing beyond that.
Why the Infrastructure Underneath the Agent Matters
This is why the infrastructure underneath an agent matters as much as the agent itself. Delegation standards like ERC-7710 and ERC-7715 exist precisely to solve this problem: they allow users to grant agents specific, scoped permissions at the smart contract level, without transferring keys. CoinFello is built on this model: the agent works within the limits you set, your keys stay in your control, and you can revoke access at any time.
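To make "specific, scoped permissions" concrete, a grant under this model is structured data that the wallet enforces, rather than a key the agent holds. The object below is a simplified, hypothetical illustration of what such a grant might contain; the actual field names and encodings are defined by the ERC-7710 and ERC-7715 specifications (both drafts at the time of writing), and the addresses here are placeholders.

```typescript
// Illustrative only: a simplified permission grant in the spirit of
// ERC-7715. Field names are assumptions, not the exact draft schema.
const permissionGrant = {
  // Who is allowed to act: the agent's account, not the user's key.
  signer: { type: "account", data: { address: "0xAgentAddress" } },
  // What exactly is allowed: one narrow capability with a hard cap.
  permissions: [
    {
      type: "erc20-token-transfer",
      data: {
        token: "0xTokenAddress",
        allowance: "0xDE0B6B3A7640000", // cap of 1 token (18 decimals), hex-encoded
      },
    },
  ],
  // When the grant dies on its own, even if the user never revokes it.
  expiry: 1767225600,
};
```

The structure itself tells the story: there is no field anywhere for a private key, because the standards are designed so that one never changes hands.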
As agents become more capable and more widely deployed, the temptation will be to optimize for convenience. Custodial architectures are easier to onboard users to. They require fewer decisions upfront. But convenience purchased at the cost of control is a trade-off that compounds in the wrong direction as the stakes get higher.
The Gap Between AI Capability and Execution Safety
The agent era is not arriving gradually. It is already here. Autonomous agents are managing meaningful amounts of capital onchain today. The AI models powering them are improving faster than most execution infrastructure is designed to handle. That gap, between AI capability and execution safety, is where most of the risk lives.
The Question Worth Asking Before You Choose an Agent
This does not mean you should avoid AI agents for onchain activity. It means you should ask the right questions before you choose one. Not just: does it understand what I want? But: if it gets something wrong, what happens? Can I revoke access immediately? Are my keys ever exposed? What are the actual limits on what it can do?
The architecture you choose is not a technical detail. It is the decision that determines whether you remain in control when something goes wrong. In a space moving this fast, that question deserves an answer before you grant any agent access to your wallet.