“Modeling the geopolitics of AI development” by Alex Amadori, Gabriel Alfour, Andrea_Miotti, Eva_B
Description
We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies would be pursued by superpowers and middle powers and which outcomes would result from them.
You can read our paper here: ai-scenarios.com
Predicting scenarios with fast AI progress should be more tractable than most forecasting, because a single factor (namely, access to AI capabilities) overwhelmingly determines geopolitical outcomes.
This holds even more strongly once AI has automated the key bottlenecks of AI R&D. If the best AI also produces the fastest improvements in AI, the leader's advantage in an ASI race can only grow over time, until their AI systems can deliver a decisive strategic advantage (DSA) over all other actors.
In this model, superpowers are likely to engage in a heavily state-sponsored (footnote: “This could be an entirely national project, or one assisted by private actors; either way, countries will invest at scales only possible with state involvement and fully back research efforts, e.g. by providing nation-state-level security.”) race to ASI, which will culminate in one of three [...]
---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/4E7cyTFd9nsT4o4d4/modeling-the-geopolitics-of-ai-development
---
Narrated by TYPE III AUDIO.