What Is AI 2027 Really About?
July 24, 2025
On its face, AI 2027 is a story about technology. We've cracked the learning algorithm, and today's stumbling agents are becoming exponentially more capable. (1. Thomas Kwa et al., “Measuring AI Ability to Complete Long Tasks,” March 18, 2025, https://doi.org/10.48550/arXiv.2503.14499.) Extrapolate from investment in large-scale compute clusters, (2. Romeo Dean, “Compute Forecast,” AI 2027, April 2025, https://ai-2027.com/research/compute-forecast.) and we should expect superhuman coders by 2027. (3. Daniel Kokotajlo et al., AI 2027, April 3, 2025, 53, https://ai-2027.com/scenario.pdf.) Once we set these coders on AI research, we get a positive feedback loop of technological improvement where humans are no longer the bottleneck. (4. Kokotajlo et al., AI 2027, 15.) The resulting superintelligence outmaneuvers us, and we all die.
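To see the shape of that claim, here is a toy sketch of my own (not the authors' model), assuming capability doubles every seven months, roughly the trend Kwa et al. report for task horizons, and that automated research then shrinks the doubling time itself:

```python
# Toy model of the takeoff claim (my gloss, not the authors' math).

def months_to_multiply(target, doubling_months=7.0, feedback=1.0):
    """Months until capability grows by `target`x.

    feedback=1.0 -> plain exponential growth (humans still the bottleneck).
    feedback<1.0 -> each doubling also multiplies the doubling time by
                    `feedback`, i.e. the research loop speeds itself up.
    """
    capability, months = 1.0, 0.0
    while capability < target:
        months += doubling_months
        capability *= 2
        doubling_months *= feedback  # the "positive feedback loop"
    return months

print(months_to_multiply(1000))                # 70 months of business as usual
print(months_to_multiply(1000, feedback=0.7))  # ~23 months once research is automated
```

Everything hangs on that feedback parameter; hold that thought.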
But we've heard this story before. Leopold Aschenbrenner's Situational Awareness was feeling the AGI around this time last year, (5. Leopold Aschenbrenner, Situational Awareness, June 2024, https://situational-awareness.ai/.) and futurists like Nick Bostrom (6. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).) and Eliezer Yudkowsky (7. Eliezer Yudkowsky, Intelligence Explosion Microeconomics (Machine Intelligence Research Institute, 2013), https://intelligence.org/files/IEM.pdf.) have been warning of an intelligence explosion for decades. Hell, I. J. Good was writing about this all the way back in 1966. (8. Irving John Good, “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, vol. 6 (Elsevier, 1966), https://doi.org/10.1016/S0065-2458(08)60418-0.)
What's the real lede here? To my knowledge, AI 2027 is the first detailed takeoff scenario to consider multi-agent dynamics. We don't get a singleton AI but a swarm of them, dramatically widening the system's spatiotemporal locus of control. Or in other words, it can do things in parallel. By March 2027, Agent-3 is working in teams to speed up AI research. (9. Kokotajlo et al., AI 2027, 11.) By September, Agent-4 is a corporation within a corporation, 300,000 strong and thinking 50x faster than humans. (10. Kokotajlo et al., AI 2027, 19.) And by November, Agent-5 is a near-perfect hive mind of 400,000 agents thinking 200x faster. (11. Kokotajlo et al., AI 2027, 26.)
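Back-of-the-envelope, and assuming thought scales linearly with copies and clock speed (my arithmetic, not the authors'):

```python
# How much serial "human thought" does the swarm represent per calendar day?
for name, agents, speedup in [("Agent-4", 300_000, 50), ("Agent-5", 400_000, 200)]:
    human_days = agents * speedup  # each copy at Nx speed ~ N human-days per day
    print(f"{name}: ~{human_days:,} human-days (~{human_days / 365:,.0f} human-years) of thought per day")
```

That's on the order of 200,000 human-years of thinking every calendar day for Agent-5, if (and only if) the multiplication is frictionless.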
The real story is about AI coordination under scarcity. This may seem like a stretch, since the word scarce is used only once (12. Kokotajlo et al., AI 2027, 45.) in a mention of Veblen goods, but consider: the intelligence explosion is usually framed in terms of abundance. We have an abundance of compute. Technology creates abundance exponentially in a runaway chain reaction. But scarcity and abundance are two sides of the same coin. This abundance of compute creates new coordination problems, as the agents work together to allocate the newfound compute and attention. For research problems, which are definitionally uncertain, this is non-trivial!
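How non-trivial? Even the single-allocator version is a classic explore-exploit problem. Here's a minimal sketch (my framing, nothing from the scenario), using Thompson sampling to split scarce compute across three hypothetical research directions with unknown payoffs:

```python
import random

# Payoff rates the allocator does NOT know; it must learn them by spending compute.
true_hit_rates = {"architecture": 0.02, "data": 0.05, "algorithms": 0.10}
wins = {k: 1 for k in true_hit_rates}    # Beta(1, 1) priors: one pseudo-win...
losses = {k: 1 for k in true_hit_rates}  # ...and one pseudo-loss per direction

for _ in range(10_000):  # each iteration: one unit of scarce compute to allocate
    # Thompson sampling: draw a plausible hit rate per direction, fund the best draw.
    choice = max(true_hit_rates, key=lambda k: random.betavariate(wins[k], losses[k]))
    if random.random() < true_hit_rates[choice]:  # did the experiment pay off?
        wins[choice] += 1
    else:
        losses[choice] += 1

for k, rate in true_hit_rates.items():
    print(f"{k}: true rate {rate:.0%}, compute spent {wins[k] + losses[k] - 2}")
```

And that's one allocator facing stationary payoffs. The scenario's version has hundreds of thousands of agents negotiating these budgets among themselves.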
This scenario may seem outlandish, but it's not a creative writing exercise; it's serious forecasting. There are reputations on the line. The authors are so confident in their predictions that they're offering cold hard cash to anyone who can change their minds. (13. Daniel Kokotajlo et al., “About Us,” AI 2027, accessed July 24, 2025, https://ai-2027.com/about.) It looks like we should update towards AI radically transforming the nature of coordination within the next three years.
This is incredible. How does AI do this? There's an appendix on superintelligence-enabled coordination technology. (14. Kokotajlo et al., AI 2027, 66.) Once we solve the superalignment problem, we should expect elegant, verifiable, nuanced treaties and compromises, but that's just for the treaty with the Chinese AI. It doesn't say anything about how the hive mind coordinates with itself.
It seems like the coordination begins with Agent-3 (emphasis added):
Now that coding has been fully automated, OpenBrain can quickly churn out high-quality training environments to teach Agent-3's weak skills like research taste and large-scale coordination. Whereas previous training environments included “Here are some GPUs and instructions for experiments to code up and run, your performance will be evaluated as if you were a ML engineer,” now they are training on “Here are a few hundred GPUs, an internet connection, and some research challenges; you and a thousand other copies must work together to make research progress.”
That's basically all we get.
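Reading between the lines, such an environment might look something like this. To be clear, this is entirely my speculation; the scenario gives no specification, and none of these names are from AI 2027:

```python
from dataclasses import dataclass

# Hypothetical sketch of the quoted training setup: many copies, one shared
# GPU budget, one jointly evaluated score.
@dataclass
class ResearchChallengeEnv:
    gpus: int = 200        # "a few hundred GPUs"
    copies: int = 1000     # "a thousand other copies"
    progress: float = 0.0  # team-level reward: coordination is what's graded

    def step(self, compute_requests: list[int]) -> float:
        """One round: every copy asks for GPUs; the fixed budget forces trade-offs."""
        granted = 0
        for request in sorted(compute_requests):  # greedy: smallest asks first
            if granted + request > self.gpus:
                break
            granted += request
        self.progress += granted / self.gpus  # toy proxy for research progress
        return self.progress

env = ResearchChallengeEnv()
print(env.step([1] * 150 + [50] * 850))  # modest askers get funded; big asks starve
```

However the real environments work, the reward has to flow through the group, which is the whole trick.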
So if I understand this correctly, Agent-3 figures out coordination through reinforcement learning because it's a superhuman coder. This coordination makes it self-improving, which makes it more intelligent, which makes it better at coordination, which makes it better at... self-improving... until Agent-5 is a unified hive mind that can centrally coordinate a worldwide robotic economy. (15. Kokotajlo et al., AI 2027, 29.) And this economy's doubling time is under a year. (16. Kokotajlo et al., AI 2027, 66.)
Hm.
I get that we have to be epistemically humble about predicting how a superintelligence will behave. We can't know how it will solve problems, only that it would do so more effectively than humans. (17. Eliezer Yudkowsky, “AI Alignment: Why It’s Hard, and Where to Start,” December 28, 2016, https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/.) But we're saying that AI solves coordination because it's intelligent, and then saying it's intelligent because it solved coordination. Can't we apply this logic to anything? I don't think anyone is arguing that AI solves perpetual motion, and it's not like our hominid ancestors became less inclined to kill each other as their brains grew. Where is the feedback loop?
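Render the argument as code and the circularity jumps out (again, my gloss, not the authors' math):

```python
# "It coordinates because it's smart; it's smart because it coordinates."
def takeoff(coord_gain, intel_gain, steps=10):
    intelligence = 1.0
    for _ in range(steps):
        coordination = coord_gain * intelligence  # smart, therefore coordinated
        intelligence = intel_gain * coordination  # coordinated, therefore smarter
    return intelligence

print(takeoff(1.2, 1.2))  # gains > 1: explosion (~38x after 10 steps)
print(takeoff(0.9, 0.9))  # gains < 1: fizzle (~0.12x)
```

The loop runs either way; whether it explodes or fizzles depends entirely on gains the scenario asserts rather than derives.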
I must be missing something obvious. These are very smart people—they've clearly thought about coordination problems in depth. Perhaps the thread to pull on is this: everyone is supposed to be dead by 2030 because we're made obsolete by Agent-5's centrally planned economy...
But if central planning is so efficient, why don't we plan the economy now?