The Agent Economy
Who Decides What We Buy
The most important economic question of the next decade is not what humans will buy, but who will decide it for them.
Moltbook, a social network for AI agents, was an early signal of what is coming.
As AI agents get smarter, gain real autonomy, and become cheaper to run, a meaningful part of the economy will shift toward an agent economy.
The agent economy is where software agents act as economic participants on behalf of humans (or other agents). They make decisions, execute transactions, and coordinate with other agents with minimal human involvement.
We already accept that machines make consequential decisions for us. More than half of stock trading is algorithmic. Nearly everything you see on social media is ranked by systems no human fully understands. These systems don’t merely suggest and wait for human approval. They decide.
There is a fundamental difference between earlier generations of algorithms and LLM-based agents. Traditional systems optimize for a narrow objective. LLM agents operate with context and can reason across multiple, fundamentally different objectives.
To make decisions for humans, agents need to be able to buy with taste. It is not enough to buy some pair of shoes: you need to like how they look, they need to fit, and they need to arrive when expected. Agents need to understand intent, preferences, trade-offs, and second-order effects.
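The trade-offs above can be sketched as a simple multi-objective scorer. Everything here is a hypothetical illustration (the fields, the weights, the hard constraints), not any real shopping API; a real agent would learn the weights per user rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class ShoeOption:
    price: float           # in the buyer's currency
    style_match: float     # 0..1, how well it matches the user's taste
    fit_confidence: float  # 0..1, estimated probability it fits
    days_to_arrive: int

def score(option: ShoeOption, budget: float, deadline_days: int) -> float:
    """Weigh fundamentally different objectives into one number.
    Weights are illustrative, not learned from a real user."""
    if option.price > budget or option.days_to_arrive > deadline_days:
        return 0.0  # hard constraints: over budget or too late
    price_value = 1.0 - option.price / budget  # cheaper is better
    return 0.5 * option.style_match + 0.3 * option.fit_confidence + 0.2 * price_value

options = [
    ShoeOption(price=90, style_match=0.9, fit_confidence=0.8, days_to_arrive=3),
    ShoeOption(price=60, style_match=0.4, fit_confidence=0.9, days_to_arrive=10),
]
best = max(options, key=lambda o: score(o, budget=120, deadline_days=7))
```

The point of the sketch is the shape of the problem, not the numbers: taste, fit, price, and timing are incommensurable, and the agent still has to collapse them into a single choice, the way a human would.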
In other words, they need to make decisions that look a lot like the ones a human would have made. That’s inherently difficult. Even other humans struggle to make good decisions for us unless they know us well or get extremely detailed instructions.
So why does this matter beyond the fact that you might soon outsource your online grocery shopping to GPT-6?
It matters because agents will come to control a significant share of consumer spend. The shift has already begun. Many people already get their shopping recommendations from LLMs; I am one of them. Most of my product research has shifted into conversations with LLMs about the pros and cons of specific options.
However, I still make the purchase decision and handle the transaction myself. Over time, we will hand over more decision autonomy, and sometimes we will not be involved in the product research at all.
In some ways, they are better decision-makers than humans. They do not get tired. They do not anchor on brands. They do not fall for artificial scarcity. They do not abandon a purchase because a checkout flow is annoying. They can continuously search the entire market and act the moment conditions are met. Once agents start controlling even a small portion of consumer spend, entire markets will shift. Marketing changes. Branding changes. Distribution changes. Product decisions change.
The agent economy will grow much faster than the consumer internet. Once agents cross the threshold where we trust them with a few percent of our spending, that share can quickly grow tenfold.
Recent progress makes this shift far closer than it seems. Tools like Claude Code have crossed a threshold where software engineering can be automated well enough for agents to build their own tools. Instead of waiting for humans to create integrations, agents can increasingly construct the software they need to navigate the internet on their own.
At the same time, inference costs keep falling. Today, it is still expensive to have frontier models continuously scour the internet on your behalf. That constraint is temporary. As costs drop, persistent agents become economically viable. Setup friction is collapsing as well. What once required careful orchestration can now be spun up in minutes. Moltbot is one visible example. Several teams I know are already building agent systems with surprisingly capable demos.
So what is the real bottleneck? Some of it is still intelligence. Some of it is compute capacity. Some of it is still inference cost. Agents won’t handle spending for most consumers if they cost more in compute than the products they buy. But the bigger bottleneck is structural. Agents have to operate inside an internet that was built for humans.
If an agent wants to buy a plane ticket today, it has to navigate a human-centered browser, deal with visual clutter designed to manipulate attention, and use a credit card issued to a human. It also has to make irreversible decisions without a clean permissioning or feedback layer. The system technically works, but it is deeply inefficient.
This will not last forever.
At first, agents will act on our behalf through channels built for humans and deterministic code: talking to other humans, or calling traditional servers via APIs. This has already started, and it is the first step of the transition.
Over time, we will build native infrastructure for agents. Separate payment rails. Explicit permissioning systems. Markets where agents acquire skills from other agents. Protocols for trading information. And agent-to-agent communication that is optimized for bandwidth and precision rather than English prose.
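An explicit permissioning system could be as simple as a spending policy the agent must clear before every transaction. This is a minimal sketch under assumed names (`SpendPolicy`, `authorize`); it is not AP2, ACP, or any existing protocol, just an illustration of what "explicit permissioning" means in practice.

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    """Limits the human principal delegates to the agent."""
    per_purchase_limit: float
    daily_limit: float
    allowed_categories: set[str]
    spent_today: float = 0.0

def authorize(policy: SpendPolicy, amount: float, category: str) -> bool:
    """Record and approve the spend only if every rule passes."""
    if category not in policy.allowed_categories:
        return False  # this kind of purchase was never delegated
    if amount > policy.per_purchase_limit:
        return False  # single purchase too large
    if policy.spent_today + amount > policy.daily_limit:
        return False  # would blow the daily budget
    policy.spent_today += amount
    return True

policy = SpendPolicy(per_purchase_limit=50.0, daily_limit=100.0,
                     allowed_categories={"groceries", "transport"})
ok = authorize(policy, 30.0, "groceries")      # within all limits
blocked = authorize(policy, 80.0, "transport") # exceeds per-purchase cap
```

The design choice that matters is that the check is machine-readable and enforced before the transaction, rather than reconstructed afterwards from a credit card statement issued to a human.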
Parts of the agent economy are already being built. We’re seeing agent-native communication primitives like Agentmail, early payment rails like AP2 (Google) and ACP (Stripe), and a growing sub-industry focused on identity and permissioning for agents spanning big tech companies and early startups.
At the same time, an entire marketing ecosystem has emerged around optimizing how products surface inside AI systems. Many of these tools will naturally evolve from influencing human decisions to marketing directly to agents.
We will also need agent marketplaces where agents can buy new capabilities for themselves. In that world, agents will build reputation scores, gain credit lines, and insure actions taken on behalf of their humans. A functioning economy also requires conflict resolution mechanisms, negotiation protocols, and other such primitives.
When that happens, a large share of consumer spending will still belong to humans on paper, but will be decided elsewhere. Brands will no longer compete for human attention. They will compete for acceptance by machines acting on their users’ behalf.
