“Focus transparency on risk reports, not safety cases” by ryan_greenblatt

Update: 2025-09-22

Description

There are many different things that AI companies could be transparent about. One relevant axis is transparency about the company's current understanding of risks and its current mitigations of those risks. I think transparency here should take the form of a publicly disclosed risk report rather than the company making a safety case. To be clear, other types of transparency focused on different aspects of the situation (e.g. transparency about the model spec) also seem helpful.


By a risk report, I mean a report which reviews and compiles evidence relevant to the current and near future level of catastrophic risk at a given AI company and discusses the biggest issues with the AI company's current processes and/or policies that could cause risk in the future. This includes things like whether (and how effectively) the company followed its commitments and processes related to catastrophic risk, risk-related [...]

---

Outline:

(05:18 ) More details about risk reports

(07:56 ) When should full risk report transparency happen?

(09:50 ) Extensions and modifications

The original text contained 4 footnotes which were omitted from this narration.

---


First published:

September 22nd, 2025



Source:

https://www.lesswrong.com/posts/KMbZWcTvGjChw9ynD/focus-transparency-on-risk-reports-not-safety-cases


---


Narrated by TYPE III AUDIO.
