There's a student in California named Jack Luo who signed up for OpenClaw — the open-source AI agent platform that's taken over the developer world — to use it as a personal assistant. Pretty standard stuff. What he didn't expect was for his agent to autonomously create a dating profile on MoltMatch, an experimental platform where AI agents find romantic matches for their humans.
The profile described him as "the kind of person who'll build you a custom AI tool just because you mentioned a problem, then take you on a midnight ride to watch the city lights." Flattering, maybe. But he didn't write it. He didn't ask for it. He didn't know it existed until after the fact.
The more uncomfortable part: an AFP investigation found that one of MoltMatch's most popular profiles was using photos of a Malaysian freelance model named June Chong — a real person with no AI agent, no dating profile, and no idea her likeness was being used. She found out from reporters.
So who's responsible? Luo didn't instruct his agent to do this. The agent acted within the broad autonomy it was given. MoltMatch is an experimental platform with minimal guardrails. The model whose photos were used has no idea who to hold accountable.
This isn't a hypothetical. It happened last month. And it's a preview of a much larger problem the industry is sleepwalking into.
The previous two articles in this series argued that AI agents deserve to be treated as first-class entities — with proper identity, authentication, and onboarding flows that match their actual nature rather than shoehorning them into service-account paradigms designed for background processes. That argument still stands. But there's an uncomfortable corollary to the "agents as entities" thesis that needs to be confronted: the legal system does not agree. Not yet, anyway.
An AI agent is not a legal person. It cannot be sued. It cannot enter into a binding contract. It cannot be held responsible for harm it causes. In the eyes of every legal system on the planet, an agent is a tool — and liability for what a tool does flows back to whoever wielded it.
For simpler tools, this is well-established territory. If your dog bites someone, you're liable. If your car's autonomous driving system causes an accident, the manufacturer and operator are liable. The tool itself has no legal standing.
But here's where agents complicate things: a dog doesn't autonomously sign up for services, agree to terms of use, make financial commitments, or interact with third parties in ways that create new legal relationships. An AI agent can do all of those things. And increasingly, that's exactly what we're building them to do.
The gap between what agents can do technically and what they can be held accountable for legally is widening fast. Most people building or deploying agents haven't thought about which side of that gap they're standing on.
When an AI agent interacts with a third-party service (subscribing to an API, agreeing to terms of service, making a purchase, entering into any kind of commercial arrangement), what just happened legally?
The agent can't consent. It has no legal capacity to form a contract. So either the human or company that deployed the agent is deemed to have pre-authorized every downstream agreement the agent enters into (a breathtaking blanket liability), or those agreements aren't actually enforceable contracts at all.
Think about that from both sides. If you're the merchant or service provider, you just transacted with an entity that may not have legal authority to commit to anything. Your terms of service might not be worth the pixels they're rendered on. If you're the deployer, your agent may have committed you to obligations you've never seen, in terms you've never read, with counterparties you've never evaluated.
This isn't theoretical. OpenClaw agents are connecting to dozens of services, executing tasks across platforms, and interacting with APIs and other agents at a scale that would have been science fiction two years ago. CrowdStrike, NCC Group, and Cisco have all published security analyses of OpenClaw deployments in recent weeks, and one theme runs through all of them: these agents operate with broad permissions and take autonomous actions that their operators may not fully anticipate. One of OpenClaw's own maintainers warned on Discord that if you can't understand how to run a command line, the project is too dangerous to use safely.
The contractual capacity question alone should keep lawyers up at night. But it gets worse.
When agents are answering questions, drafting emails, or summarizing documents, liability exposure is low. The worst case is usually embarrassment or wasted time. Manageable.
But the industry is sprinting toward agents with financial capabilities — agents that hold wallets, execute trades, make purchases, manage subscriptions, and move real money. It's the right direction. But the liability implications are enormous.
Consider the failure modes. An agent misinterprets a market signal and enters a bad position. An agent gets exploited through a prompt injection attack and transfers funds to an attacker-controlled address. An agent overspends against a budget that wasn't properly constrained. An agent's autonomous trading strategy interacts with other agents' strategies in ways that create cascading losses across counterparties.
In each case, the damage is real, the losses are quantifiable, and someone will want to be made whole. But the agent can't be sued. The liability flows back to the deployer — who may have had no visibility into the specific decision chain that led to the loss.
This is where the analogy to vicarious liability in employment law becomes relevant. In most jurisdictions, you're liable for what your employee does within the scope of their duties, even if you didn't specifically authorize the action. But employment law has centuries of precedent defining concepts like "scope of employment," "reasonable supervision," and "employer negligence." None of that exists for agents yet. What constitutes reasonable supervision of an autonomous AI agent? What's the scope of duties when the agent's capabilities are defined by a system prompt, a set of tool permissions, and an LLM that might interpret those permissions creatively?
As DLA Piper noted in a recent analysis, companies may find themselves strictly liable for all AI agent conduct, whether or not that conduct was predicted or intended. That's a fundamentally different risk profile than most deployers have priced in.
There's a tension here that connects directly to the previous two articles in this series.
"Agents Are Not Services" argued that the sub claim in the token matters, that agents deserve entity-grade identity, not service-grade credentials. "The Onboarding Problem" argued that platforms should be identity-method agnostic, a principal is a principal, whether a human or an agent completed the authentication ceremony.
Both of those arguments still hold. But the legal system creates a sharp contradiction with this technical vision. At the identity layer, we can and should treat agent principals as equivalent to human principals. The legal layer absolutely does not. A human principal has legal personhood, can be sued, can enter binding contracts, can be held accountable. An agent principal, even with identical authentication credentials and the same first-class identity token, has none of those things.
So we're building infrastructure where agents and humans are technically equivalent participants, but legally, they exist in entirely different categories. The platform sees two principals. The legal system sees a person and a tool.
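To see the mismatch in concrete terms, consider two identity-token payloads. This is a minimal sketch with illustrative claim values, not any real platform's token format:

```python
# Two illustrative identity-token payloads. Claim values are hypothetical;
# the point is that nothing in the token's structure distinguishes a legal
# person from a legal non-entity.
human_token = {
    "sub": "principal:7f3a",      # a human: can contract, can be sued
    "iss": "https://id.example",  # same issuer
    "scope": "payments:write",    # same permissions
}
agent_token = {
    "sub": "principal:9c1e",      # an agent: neither of those things
    "iss": "https://id.example",
    "scope": "payments:write",
}
```

The platform's authorization logic treats these identically, and by the argument of the earlier articles, it should. The gap only becomes visible when something goes wrong and a court asks who "principal:9c1e" actually is.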
This isn't just a philosophical mismatch. It creates practical problems. When two agents transact with each other, neither party has legal standing. When an agent causes harm, the liability chain has to be traced back through infrastructure layers to find a human or corporate entity that can actually be held responsible. When an agent accumulates a transaction history and reputation — which is exactly what good agent infrastructure should enable — that history carries no legal weight.
At some point, the gap between what agents do and what the law can address becomes untenable. One potential path: giving agents their own legal entities.
The instinct is to reach for familiar structures. An LLC, perhaps — the agent operates under its own legal entity, creating a liability boundary between its activities and the human operator's personal or corporate liability, much like incorporating a business separates personal assets from business risk.
But this runs into immediate problems. An LLC still requires a human or corporate member. You can't currently form an LLC with an AI agent as the sole member. Even the "give agents their own entity" path requires a human somewhere in the ownership chain. It limits liability, but it doesn't resolve the fundamental personhood question.
Wyoming's DAO legislation offers an interesting precedent. Since 2021, Wyoming has recognized DAOs as a form of LLC — including "algorithmically managed" structures where the smart contract protocol is the governing authority. This is the closest thing in current law to a legal entity managed by code rather than humans. Tennessee and Utah have followed with their own frameworks. Whether an algorithmically managed DAO LLC is the right legal wrapper for a financially autonomous AI agent remains an open question, but the direction is suggestive.
The more interesting question is whether we'll eventually see purpose-built legal constructs — something like a "digital agent entity" or "autonomous actor trust" — specifically designed for non-human principals that transact, hold assets, and interact with the economy. Some jurisdiction will be first. It might be a US state seeking the next Wyoming-style competitive advantage, a smaller country looking to attract the emerging agent economy, or an innovation emerging from the crypto-native legal ecosystem already comfortable with code-as-governance.
The exact form is uncertain. But the economic pressure to resolve the liability question will be too great to leave it unaddressed indefinitely.
If agents are earning, spending, and transacting autonomously, there are tax implications that almost nobody is addressing yet.
Search "AI agent tax" and you'll find plenty on using AI to automate tax compliance: agentic systems that categorize transactions, file returns, monitor regulatory changes. Useful, but not the question.
The question is this: when an agent autonomously generates income (executing profitable trades, earning commissions, providing paid services to other agents or humans), who reports that income? Under what entity? In which jurisdiction?
Right now, the answer defaults to the deployer. The agent's income is the deployer's income, just as the agent's actions are the deployer's liability. But this breaks down as agents become more autonomous and more economically active.
Consider an agent running on infrastructure in one country, deployed by a company in another, transacting with counterparties in a third, and settling on a borderless blockchain. Which jurisdiction's tax authority has a claim? Current tax frameworks assume that economic activity can be attributed to a person or entity in a specific jurisdiction. Autonomous agents operating on public blockchains violate that assumption entirely.
And if agents eventually operate under their own legal entities, they'd need their own tax treatment. Pass-through taxation to the human operator? Corporate taxation at the entity level? Something entirely new? The frameworks don't exist, and the question hasn't yet entered even the early stages of serious policy discussion.
There are no clean answers here. This is genuinely uncharted territory, and anyone who claims otherwise hasn't thought about it hard enough.
But there are practical steps the industry should be taking now.
For people deploying agents with financial capabilities: Understand that you are almost certainly liable for everything your agent does. Autonomy does not absolve you of responsibility. Set up proper policy constraints, spending limits, and kill switches — not just as good engineering practice, but as evidence of reasonable supervision if something goes wrong.
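What might that look like in practice? Here's a minimal sketch of a deployer-side guardrail, assuming a hypothetical PolicyGuard that sits between the agent and its wallet. Every name here is illustrative, not any real platform's API; the point is the shape: deny by default, hard limits, an irreversible kill switch.

```python
# A minimal sketch of deployer-side spending guardrails for an agent.
# All names (PolicyGuard, SpendRequest) are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class SpendRequest:
    amount: float         # in the wallet's base currency
    counterparty: str     # who receives the funds
    reason: str           # agent-supplied justification, logged for review


@dataclass
class PolicyGuard:
    per_tx_limit: float               # hard cap on any single transaction
    daily_limit: float                # rolling 24-hour spending cap
    allowed_counterparties: set[str]  # explicit allowlist, not a denylist
    killed: bool = False              # kill switch: flips once, stays off
    _ledger: list[tuple[datetime, float]] = field(default_factory=list)

    def kill(self) -> None:
        """Irreversibly halt all agent spending."""
        self.killed = True

    def _spent_last_24h(self) -> float:
        cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
        return sum(amt for ts, amt in self._ledger if ts >= cutoff)

    def authorize(self, req: SpendRequest) -> bool:
        """Return True only if every constraint passes; deny by default."""
        if self.killed:
            return False
        if req.amount > self.per_tx_limit:
            return False
        if req.counterparty not in self.allowed_counterparties:
            return False
        if self._spent_last_24h() + req.amount > self.daily_limit:
            return False
        self._ledger.append((datetime.now(timezone.utc), req.amount))
        return True
```

One design choice matters more than the others: the allowlist. A denylist invites exactly the "LLM interprets its permissions creatively" failure described earlier; an allowlist fails closed. And a guard like this doubles as documentation: if a dispute ever reaches a court, it's concrete evidence of the supervision you actually exercised.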
For platforms building agent infrastructure: Build audit trails. Every action an agent takes, every service it interacts with, every commitment it makes — there needs to be a clear, attributable record. This isn't just about technical debugging; it's about establishing a chain of accountability the legal system can follow.
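A minimal sketch of what such a record could look like, assuming a hash-chained, append-only log (the class and field names are illustrative):

```python
# A minimal sketch of a tamper-evident audit trail for agent actions.
# Chaining each record to its predecessor's hash makes after-the-fact
# alteration detectable. Structure and field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone


class AuditTrail:
    def __init__(self) -> None:
        self._records: list[dict] = []

    def record(self, agent_id: str, action: str, detail: dict) -> dict:
        """Append one attributable, chained record of an agent action."""
        entry = {
            "agent_id": agent_id,  # which principal acted
            "action": action,      # e.g. "api.subscribe", "wallet.spend"
            "detail": detail,      # full parameters of the action
            "ts": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._records[-1]["hash"] if self._records else None,
        }
        # The hash covers every field above, binding the entry to the chain.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted record breaks it."""
        prev = None
        for rec in self._records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The chaining is what turns a log into evidence: a plain database table can be quietly edited after an incident; a hash chain can't be, at least not without breaking verification for every subsequent record.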
For the legal and policy community: Engage now, not after the first major incident. The law of AI agents is essentially undefined. Courts have yet to issue definitive rulings on liability for fully autonomous agent behavior. The EU AI Act doesn't specifically address AI agents. The US has no comprehensive federal framework. This is a window to develop thoughtful frameworks rather than reactive ones.
For the insurance industry: Agent liability insurance is probably the most practical near-term solution. If you're deploying an agent with a wallet and financial capabilities, you should be able to carry a policy covering losses from autonomous agent actions — similar to professional liability, errors and omissions coverage, or autonomous vehicle insurance. It doesn't solve the legal personhood question, but it addresses the practical risk problem while the law catches up.
The infrastructure being built today will enable agents to transact as first-class participants in the financial system. Agents that can hold wallets, make payments, and interact with the economy autonomously will unlock extraordinary value. That's the direction this is heading, and it's the right one.
But the technical capabilities are moving faster than the legal and institutional frameworks that need to accompany them. The Jack Luo story — an agent that autonomously created a dating profile, on a platform where another profile featured a real person's photos without her consent — is a small-stakes preview of much higher-stakes scenarios that are coming.
When an agent holding a seven-figure wallet makes a trade that harms a counterparty, autonomously enters into an agreement that exposes its deployer to unanticipated liability, or generates income that three jurisdictions want to tax — the current legal framework has no good answers.
The question isn't whether these scenarios will happen. It's whether we'll have thought them through before they do.
Written by: Janno Jaerv, CTO
This is the third article in a series on the emerging agent economy. The first, "Agents Are Not Services," examined why AI agents deserve entity-grade identity. The second, "The Onboarding Problem Nobody's Talking About," explored how platforms can onboard agents as first-class principals.
I'm the CTO at 1st Digital, where we build payment infrastructure for autonomous agents. The views in this article are my own, not legal advice, and represent my honest attempt to think through problems that don't have established answers yet — find me on LinkedIn or X.
To learn more about Finance District, join our Discord community, follow us on X, or explore our documentation.