Scaling Laws
Author: Lawfare & University of Texas Law School
© Lawfare
Description
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.
Hosted on Acast. See acast.com/privacy for more information.
202 Episodes
Packy McCormick, founder of Not Boring and Not Boring Capital, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the power of narratives in tech, the intersection of investing and policy, and what it means to build frameworks for the future in an age of rapid technological change.
An impasse is coming to a head. The resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01 p.m. ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei doubled down on his insistence that Anthropic tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled a supply chain risk, an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down. You can also read more on this weighty issue via Alan’s two recent Lawfare pieces here and here.
Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy. The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.
Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training. The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.
Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation. They discuss:
- Why traditional regulation struggles with rapid AI innovation.
- The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
- Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
- What success looks like for Ashby Workshops and the future of adaptive AI policy design.
Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University. The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements.
Additional reading:
"Durably Reducing Conspiracy Beliefs Through Dialogues with AI," Science (2024)
"Persuading Voters Using Human-Artificial Intelligence Dialogues," Nature (2025)
"The Levers of Political Persuasion with Conversational Artificial Intelligence," Science (2025)
"How Malicious AI Swarms Can Threaten Democracy," Science (2026)
Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explore everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute." Learn more about the Cognitive Revolution here. It's our second favorite AI podcast!
Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how best Americans can navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, Director of the AI Innovation and Law Program at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback (scalinglaws@lawfaremedia.org) and leave us a review!
Jakub Kraus, a Tarbell Fellow at Lawfare, spoke with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covered the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being.
Mentioned in this episode:
Kevin Frazier, "Interpreting Claude's Constitution," Lawfare
Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare
Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice.
Alan Rozenshtein, research director at Lawfare, spoke with Francis Shen, Professor of Law at the University of Minnesota, director of the Shen Neurolaw Lab, and candidate for Hennepin County Attorney. The conversation covered the intersection of neuroscience, AI, and criminal justice; how AI tools can improve criminal investigations and clearance rates; the role of AI in adjudication and plea negotiations; precision sentencing and individualized justice; the ethical concerns around AI bias, fairness, and surveillance; the practical challenges of implementing AI systems in local government; building institutional capacity and public trust; and the future of the prosecutor's office in an AI-augmented justice system.
Ziad Reslan, a member of OpenAI’s Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment, the lab’s approach to testing and deploying its models. It’s a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended.
Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy. Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026.
Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, speaks with Cass Sunstein, the Robert Walmsley University Professor at Harvard University, about his new book, Imperfect Oracle: What AI Can and Cannot Do. They discuss when we should trust algorithms over our own judgment; why AI can eliminate the noise and bias that plague human decision-making but can't predict revolutions, cultural hits, or even a coin flip; and, perhaps most importantly, when it makes sense to delegate our choices to AI and when we should insist on deciding for ourselves.
Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown's McCourt School of Public Policy, and Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, spoke with Jacob Mchangama, research professor of political science at Vanderbilt University and founder of The Future of Free Speech, and Jacob Shapiro, the John Foster Dulles Professor of International Affairs at Princeton University. The conversation covered the findings of a new report examining how AI models handle contested speech; comparative free speech regulations across six jurisdictions; empirical testing of how major chatbots respond to politically sensitive prompts; and the tension between free expression principles and concerns about manipulation in AI systems.
In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China.
Mentioned in this episode:
Executive Order: Ensuring a National Policy Framework for Artificial Intelligence
Charlie Bullock, "Legal Issues Raised by the Proposed Executive Order on AI Preemption," Institute for Law & AI
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo break down the Association's recent survey of more than 1,000 SMEs, which assessed their views on AI regulation and adoption. Follow Graham: @GDufault and ACT | The App Association: @actonline
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents. Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode. Check out Caleb’s recent research here.
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle talk about at town halls and in press releases. They’re creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That’s the core question of the episode. Giddy up for a great chat!
Learn more about the guests and their companies here:
Andrew’s LinkedIn, Vesta’s LinkedIn
Thomas’s LinkedIn, August’s LinkedIn
Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier. The two also review how Jeff’s prior work in the autonomous vehicle space prepared him for the challenges and opportunities posed by navigating legal uncertainties in AI governance.




