“We won’t solve non-alignment problems by doing research” by MichaelDickens

Update: 2025-11-24

Description

Even if we solve the AI alignment problem, we still face non-alignment problems, which are all the other existential problems[1] that AI may bring.


People have written research agendas on various imposing problems that we are nowhere close to solving, and that we may need to solve before developing ASI. An incomplete list of topics: misuse; animal-inclusive AI; AI welfare; S-risks from conflict; gradual disempowerment; risks from malevolent actors; moral error.


The standard answer to these problems, the one that most research agendas take for granted, is "do research". Specifically, do research in the conventional way where you create a research agenda, explore some research questions, and fund other people to work on those questions.


If transformative AI arrives within the next decade, then we won't solve non-alignment problems by doing research on how to solve them.


These problems are thorny, to put it mildly. They're the sorts of problems where you have no idea how much progress you're making or how much work it will take. I can think of analogous philosophical problems that have seen depressingly little progress in 300 years. I don't expect to see meaningful progress in the next 10.


[...]

---

Outline:

(02:54 ) Approach 1: Meta-research on what approach to use

(03:20 ) Approach 2: Pause AI

(05:02 ) Approach 3: Develop human-level AI first, then (maybe) pause

(06:40 ) Approach 4: Research how to steer ASI toward solving non-alignment problems

(07:57 ) Conclusion

The original text contained 6 footnotes which were omitted from this narration.

---


First published:

November 21st, 2025



Source:

https://forum.effectivealtruism.org/posts/utSWKRBazPNTuvSPk/we-won-t-solve-non-alignment-problems-by-doing-research


---


Narrated by TYPE III AUDIO.

