#199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy
Description
"I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going.
"I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan Calvin
In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.
Links to learn more, highlights, and full transcript.
They cover:
- What’s actually in SB 1047, and which AI models it would apply to.
- The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
- What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
- Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
- How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
- Why California is taking state-level action rather than waiting for federal regulation.
- How state-level regulations can have a huge impact at national and global scales, and how listeners could get involved in state-level work to make a real difference on many pressing problems.
- And plenty more.
Chapters:
- Cold open (00:00:00)
- Luisa's intro (00:00:57)
- The interview begins (00:02:30)
- What risks from AI does SB 1047 try to address? (00:03:10)
- Supporters and critics of the bill (00:11:03)
- Misunderstandings about the bill (00:24:07)
- Competition, open source, and liability concerns (00:30:56)
- Model size thresholds (00:46:24)
- How is SB 1047 different from the executive order? (00:55:36)
- Objections Nathan is sympathetic to (00:58:31)
- Current status of the bill (01:02:57)
- How can listeners get involved in work like this? (01:05:00)
- Luisa's outro (01:11:52)
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore