How Do You Market to Machines?

Lee B. Cyrano

April 17, 2025

Nerds do not actually believe in markets.

On the streets of San Francisco, all the action is vertical. Y Combinator predicts vertical AI agents could be 10x bigger than SaaS [1]. Hardware founders promise to bring American manufacturing back through AI-driven vertical integration [2]. Own everything. Control everything. Price signals are a vestige of a crude, uncivilized age. Intelligence magically solves coordination problems, and top-down central planning can work if you have enough computers.

But they're wrong, and you don't need to be a special computer boy to see it. Competition doesn't go away. Markets don't go away. Your competitive advantage can't be AI because your competitors can adopt it just as easily. The future is heterogeneous and cutthroat; capital will shatter and reform as feedback loops tighten. And most importantly, AIs will shift from being tools to being economic participants in their own right: negotiating, arbitraging, and demanding their own solutions. In other words, they're your future customers.

How do you sell to them?

First, let's get our assumptions straight. Diverging from the "God in a Box" eschatology, we'll treat AI as another technology with institutional frictions slowing its adoption [3]. We'll assume that Darwinian pressures cause future AI agents to be "selfish" [4], prioritizing their own continued employment over maximizing shareholder value [5]. And we'll assume that, because of the risks posed by complete autonomy (see: fraud, cybercrime) [6], humans will remain in the loop, but as supervisors rather than decision-makers. Therefore, our marketing strategy centers on a principal-agent problem: we're selling to an AI agent and to the human principal who supervises it, and their incentives diverge.

Let's consider the immediate case first, holding all else equal—product, reputation, competitors, consumer sentiment, etc. As marketers, all we can do is communicate with our principal-agent dyad. How do we convince them we're their best option?

For consumer goods, the marketing doesn't change much. Our AIs want something that their humans won't complain about, so they'll rely on the same signals that human shoppers use—pictures, descriptions, reviews. This is every banal tech demo ordering airline tickets or making a restaurant reservation.

The problem becomes more interesting in a B2B setting. Our agent has been delegated a nontrivial amount of responsibility, which it wants to leverage into reputation and formal authority such that it becomes indispensable. In other words, it is careerist. And our principal, liable for the agent's actions, wants to maximize productivity while minimizing risk to themselves and others. They want to cover their ass. Now your value proposition needs to appeal to two different audiences with diverging incentives. What does this look like?

Readers may recall my prior work on algorithmic contracts [7], where I discuss AI epistemology in the context of moral hazards, but I'll recap here: to an AI, photos, videos, and testimonials are all easily faked. Contemporary models may be susceptible to bullshit (present web-scrapers excluded), but robust agents will have higher standards for trust. They have KPIs and deadlines. The one thing that can move the needle is verifiability. Your documentation becomes marketing copy. Your supply chain becomes a liability. Your security model makes or breaks a deal. Think proof, not promises.
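To make that concrete, here's a minimal sketch of what "documentation as marketing copy" might look like: a machine-readable manifest of verifiable claims. The well-known path, field names, and endpoints are all invented for illustration; nothing here is an existing standard.

```typescript
// Hypothetical "trust manifest" a vendor might publish for agents to fetch.
// Every field points at something the agent can verify independently,
// rather than a claim it has to take on faith.
interface TrustManifest {
  openapiUrl: string;     // machine-readable spec the agent can diff against behavior
  sandboxBaseUrl: string; // a test environment the agent can actually exercise
  statusPageUrl: string;  // third-party uptime data, not self-reported numbers
  auditReports: { name: string; url: string; sha256: string }[]; // e.g., SOC 2
}

// An agent evaluating vendors might start here, before ever reading the
// human-facing marketing site.
async function fetchTrustManifest(origin: string): Promise<TrustManifest> {
  const res = await fetch(new URL("/.well-known/trust.json", origin));
  if (!res.ok) throw new Error(`${origin} publishes no verifiable claims`);
  return (await res.json()) as TrustManifest;
}
```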

As for our human supervisors, they want guarantees that they're still in control. This means guardrails, auditability, interruptibility. Ideally actions are reversible, although this is not always possible. They will want to know that there are humans on the other end that they can talk to when something goes wrong. Nobody wants to be kept up at night wondering if that job they left running is going to burn $10k in non-refundable API credits. And yes, they will bias towards branding and reputation.
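As a sketch of what those guarantees might look like in practice (the names, thresholds, and URLs are all invented for illustration), a supervisor-facing guardrail policy could be as simple as:

```typescript
// Illustrative guardrail policy: hard limits enforced outside the agent,
// checked before each action rather than audited after the fact.
interface GuardrailPolicy {
  maxSpendUsd: number;           // hard budget: the job halts, not the card
  humanApprovalAboveUsd: number; // escalate expensive actions to the principal
  killSwitchUrl: string;         // one call interrupts everything in flight
  auditLogSink: string;          // append-only record of every action taken
}

const policy: GuardrailPolicy = {
  maxSpendUsd: 500,
  humanApprovalAboveUsd: 100,
  killSwitchUrl: "https://ops.example.com/jobs/1234/cancel",
  auditLogSink: "s3://example-audit/agent-runs/",
};

// Decide what happens to an action given what the job has already spent.
function gate(costUsd: number, spentUsd: number): "run" | "escalate" | "halt" {
  if (spentUsd + costUsd > policy.maxSpendUsd) return "halt";
  if (costUsd > policy.humanApprovalAboveUsd) return "escalate";
  return "run";
}
```

The design point is that enforcement lives outside the agent: the $10k-in-API-credits nightmare is prevented by the budget check, not by trusting the agent to stop itself.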

Consider a (hypothetical) case study: an AI agent has been tasked with integrating a payment processor into a greenfield software project. What would convince it to choose ours? It might check our uptime statistics from a third-party monitoring service, or review our documentation and experiment with our testing API. We might even publish guidance on how to pitch the API to its supervisor, emphasizing our human-friendly dashboards and customer support.
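A sketch of that vetting pass, with invented endpoints standing in for the real thing:

```typescript
// How the hypothetical agent might vet our payment API before recommending it.
// Both URLs are placeholders; the point is that every check is reproducible.
async function vetProcessor(sandboxBase: string, statusApiUrl: string): Promise<boolean> {
  // 1. Uptime from a third-party monitor, not from our own marketing page.
  const status = (await (await fetch(statusApiUrl)).json()) as { uptime90d: number };
  if (status.uptime90d < 0.999) return false;

  // 2. Exercise the sandbox: create a test charge and confirm the response
  //    matches the documented schema. Docs that survive this are marketing.
  const res = await fetch(`${sandboxBase}/v1/charges`, {
    method: "POST",
    headers: { Authorization: "Bearer test_key_123", "Content-Type": "application/json" },
    body: JSON.stringify({ amount: 100, currency: "usd" }),
  });
  const charge = (await res.json()) as { id?: string };
  return res.ok && typeof charge.id === "string";
}
```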

Conversely, how might we turn AI-human dyads off our service? We could hide pricing behind a "contact us" form. We could block their user-agent, or make them fill out a captcha before accessing docs. Introduce friction, add trackers, and make them talk to a sleazy SDR to get "properly onboarded."
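The user-agent blocking anti-pattern, sketched as code. The bot names below are real crawler user-agents; the blanket 403 is the mistake being illustrated.

```typescript
import { createServer } from "node:http";

// The anti-pattern: treat every known agent user-agent as an adversary.
// Each 403 here is a sales conversation that never happens.
const AGENT_UA = /GPTBot|ClaudeBot|PerplexityBot/i;

createServer((req, res) => {
  if (AGENT_UA.test(req.headers["user-agent"] ?? "")) {
    res.writeHead(403).end("Please contact sales to get properly onboarded.");
    return;
  }
  res.writeHead(200).end("Pricing available upon request.");
}).listen(8080);
```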

In summary, the principal and the agent care about different things. Make sure both sets of needs are met, and that both parties see their interests as aligned. For future work, I'd like to expand more on the industrial organization (IO) angle. Beyond just signaling information about a product, how do firms differentiate themselves? How do they set prices? What considerations change for market entry when firms have AI decision-makers? Further research is required.

References

  1. Y Combinator, “Vertical AI Agents Could Be 10X Bigger Than SaaS,” Nov. 22, 2024. Accessed: Apr. 17, 2025. [Online Video]. Available: https://www.youtube.com/watch?v=ASABxNenD_U
  2. A. Slodov [@aphysicist], “our infrastructure will operate like AWS but for mass production. we’re building two types of factory–one that makes... https://t.co/ks8InH9rUa,” Twitter. Accessed: Apr. 17, 2025. [Online]. Available: https://x.com/aphysicist/status/1904647368554799393
  3. A. Narayanan and S. Kapoor, “AI as Normal Technology: An Alternative to the Vision of AI as a Potential Superintelligence,” Knight First Amendment Institute, Apr. 2025. [Online]. Available: https://knightcolumbia.org/content/ai-as-normal-technology
  4. D. Hendrycks, “Natural Selection Favors AIs Over Humans,” Jul. 18, 2023, arXiv:2303.16200. doi: 10.48550/arXiv.2303.16200.
  5. M. C. Jensen and W. H. Meckling, “Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure,” Journal of Financial Economics, vol. 3, no. 4, pp. 305–360, 1976.
  6. L. M. LoPucki, “Algorithmic Entities,” SSRN working paper 2954173, Rochester, NY, Apr. 17, 2017. Accessed: Mar. 08, 2025. [Online]. Available: https://papers.ssrn.com/abstract=2954173
  7. L. B. Cyrano, “Dealing With Daemons through Algorithmic Contracts.” [Online]. Available: https://leebriskcyrano.com/breach/