“Attaching requirements to model releases has serious downsides (relative to a different deadline for these requirements)” by Ryan Greenblatt
Description
Subtitle: System cards are established but other approaches seem importantly better.
Here's a relatively important question regarding transparency requirements for AI companies: At which points in time should AI companies be required to disclose information? (While I focus on transparency, this question also applies to other safety-relevant requirements, and to norms around voluntary actions rather than requirements.)
A natural option would be to attach transparency requirements to the existing processes of pre-deployment testing and releasing a model card when a new model is released. As in, companies would be required to include the relevant information whenever they release a new model (likely in the model card). This is convenient because pre-deployment testing and model cards are already established norms in the AI industry, which makes it easier to attach something new to these existing processes than to create a new process.
However, I think attaching [...]
---
First published:
August 27th, 2025
Source:
https://blog.redwoodresearch.org/p/attaching-requirements-to-model-releases
---
Narrated by TYPE III AUDIO.