"Contradict my take on OpenPhil’s past AI beliefs" by Eliezer Yudkowsky

Update: 2025-12-21

Description

At many points now, I've been asked in private for a critique of EA / EA's history / EA's impact and I have ad-libbed statements that I feel guilty about because they have not been subjected to EA critique and refutation. I need to write up my take and let you all try to shoot it down.

Before I can or should try to write up that take, I need to fact-check one of my take-central beliefs about how the last couple of decades have gone down. My belief is that the Open Philanthropy Project, EA generally, and Oxford EA particularly, had bad AI timelines and bad ASI ruin conditional probabilities; and that these invalidly arrived-at beliefs were in control of funding, and were explicitly publicly promoted at the expense of saner beliefs.

An exemplar of OpenPhil / Oxford EA reasoning about timelines is that, as late as 2020, their position seemed to center on Ajeya Cotra's "Biological Anchors" estimate, which put the median timeline to AGI at roughly 30 years out. Leadership dissent from this viewpoint, as I recall, generally centered on having longer rather than shorter median timelines.

An exemplar of poor positioning on AI ruin is [...]

---

First published:
December 20th, 2025

Source:
https://www.lesswrong.com/posts/ZpguaocJ4y7E3ccuw/contradict-my-take-on-openphil-s-past-ai-beliefs

---



Narrated by TYPE III AUDIO.
