
Daniel Kokotajlo has become one of the most striking voices in the debate on advanced AI. After leaving OpenAI’s governance team in 2024 — declining to sign a non-disparagement clause that would have prevented him from speaking freely — he founded the AI Futures Project in Berkeley. His mission is to anticipate how the development of artificial intelligence may unfold, and to warn about the risks along the way.
Kokotajlo’s vision of “a country of geniuses in the data center” raises a pressing question: how far can brute-force scaling really take us? Larger models such as GPT-5 show diminishing returns, and bottlenecks in hardware, energy, and data exhaustion could slow the race. This tension mirrors issues we’ve explored before, for instance in When Bias Resurfaces in Large Language Models: Beyond Guardrails, where we noted that adding more scale doesn’t solve deeper structural problems.
One of Kokotajlo’s most-discussed contributions is AI 2027, a speculative scenario report that imagines how the next few years could lead to super-intelligence. For him, the term is not a vague synonym for “AGI”; it means something concrete: an AI system that outperforms the best humans across all intellectual activities.
Scaling as the driver
Kokotajlo’s vision of progress does not depend on a single dramatic breakthrough. Instead, he sees it as a process of scaling and automation. Once AI can design, code, test, and analyze its own experiments, it effectively becomes a self-improving research workforce.
He calls this the “country of geniuses in the data center”: tens of thousands of digital researchers running around the clock, each working at speeds far beyond human cognition. If a major lab in 2027 were to dedicate even a small fraction of its compute power to such agents, Kokotajlo estimates that the overall pace of AI research could accelerate by 25× to 50× compared to today.
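To make the 25×–50× figure concrete, here is a toy back-of-envelope model. This is purely an illustration of how such an estimate could be structured, not Kokotajlo’s actual methodology; the parameter values (agent count, speed multiple, human workforce size, diminishing-returns exponent) are all hypothetical.

```python
# Toy back-of-envelope model of AI-automated research acceleration.
# All parameters are hypothetical illustrations, not Kokotajlo's methodology.

def research_speedup(num_agents: int,
                     agent_speed_multiple: float,
                     human_researchers: int,
                     parallel_exponent: float = 0.7) -> float:
    """Estimate the overall pace of research relative to today.

    An exponent below 1 models diminishing returns to parallel labor:
    doubling the workforce does not double the rate of progress.
    """
    effective_workforce = human_researchers + num_agents * agent_speed_multiple
    return (effective_workforce / human_researchers) ** parallel_exponent

# Tens of thousands of digital researchers, each working ~10x faster
# than a human, added to a few thousand human AI researchers:
print(round(research_speedup(30_000, 10, 3_000), 1))  # lands in the ~25x range
```

Under these assumptions the result falls near the low end of Kokotajlo’s range; more optimistic speed multiples or a higher parallelism exponent push it toward 50× and beyond, which is why the estimate spans a band rather than a point.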
Why brute force has limits
The idea is seductive. After all, larger models have consistently outperformed smaller ones, from GPT-2 to GPT-4. Coding is also the easiest part of research to automate, so why not lean into brute force?
The reality is less straightforward. Kokotajlo emphasizes that scaling cannot continue indefinitely.
- Diminishing returns: Each generation of models delivers smaller improvements relative to the massive costs of training them. GPT-5 is technically stronger than GPT-4, yet many users struggle to notice a difference in daily use.
- Finite data: The supply of high-quality human text and code is nearly exhausted. Over-reliance on synthetic data risks “model collapse,” where the system amplifies its own flaws.
- Hardware and energy costs: Training already requires vast amounts of electricity and specialized chips. Scaling to 100,000 AI researchers would demand colossal infrastructure, new supply chains, and unprecedented power consumption.
- Economic concentration: With training runs costing hundreds of millions or even billions, only a handful of corporations and governments can stay in the race.
The role of “research taste”
Even if all the technical challenges could be solved, there remains what Kokotajlo calls “research taste.” This is the subtle skill of identifying which experiments are worth running, and interpreting results in ways that actually move the field forward. Human progress has often depended less on speed and more on good judgment. For super-intelligence to eclipse humanity, it would also need to surpass our best instincts for scientific direction.
Preparing for acceleration
Kokotajlo’s work is provocative because it blends credible technical insight with bold timelines. Whether his “intelligence explosion” unfolds in 2027 or decades later, his core warning stands: automation and scale could accelerate AI research faster than society’s institutions can adapt.
The key question is not simply when super-intelligence arrives, but whether our infrastructure, governance, and ethics are ready for it. Energy demand, cost, and oversight will shape the path just as much as algorithms and data.
If we do end up with a “country of geniuses in the data center,” the bigger challenge may not be building it — but deciding how humanity lives alongside it.