“Preventing covert ASI development in countries within our agreement” by Aaron_Scher
Description
We at the Machine Intelligence Research Institute's Technical Governance Team have proposed an illustrative international agreement (blog post) to halt the development of superintelligence until it can be done safely. If you haven't read it already, we recommend familiarizing yourself with the agreement before reading this post.
Summary
This post addresses a common objection to our proposed international agreement to halt ASI development: that countries would simply cheat by pursuing covert ASI projects.
The agreement makes large covert ASI projects hard to hide by monitoring chip supply chains and data centers (via supplier audits, satellite/power analysis, inspections, whistleblowers, etc.) and by verifying how chips are used. It also prohibits risky AI research and verifies compliance through various potential methods, such as embedding auditors in high-risk organizations. Some leakage is expected on both fronts, but we're optimistic that it's not enough to run or scale a serious ASI effort, and if cheating is detected, enforcement measures are available, ranging from sanctions up to the destruction of AI infrastructure.
Introduction
Some people object that the agreement would be insufficient to reduce the risk of ASI being covertly developed in countries that are part of the agreement. [...]
---
Outline:
(00:30) Summary
(01:22) Introduction
(02:09) Verification focused on the drivers of AI progress
(03:30) Monitoring and verification focused on AI chips
(08:57) Verifying research restrictions by focusing on the researchers
(16:18) Enforcement, deterrence, and verification at gunpoint
(20:09) Should you simulate a covert ASI project?
(21:57) Conclusion
---
First published: November 19th, 2025
---
Narrated by TYPE III AUDIO.