Scaling Laws

Author: Lawfare & University of Texas Law School

Subscribed: 123 · Played: 1,442

Description

Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.

Hosted on Acast. See acast.com/privacy for more information.

212 Episodes
Fabien Curto Millet, Chief Economist at Google, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to discuss the potential of AI to catalyze a productivity boom while also addressing labor market instability. The three dive into likely changes in AI capabilities as well as ongoing reasons for slow organizational adoption of AI. Finally, they close with a brief discussion of potential policy approaches.
Nicholas Bagley, Professor of Law at Michigan Law, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, for a live recording of the podcast in Ann Arbor. Thanks to Graham Hardig and Brinson Elliott for organizing a great event. Professors Bagley and Frazier start by analyzing a recent debate over housing policy before diving into the weeds of the Abundance Agenda, its nexus with AI policy, and what this all means for the future of legal education and governance.
Representative Nick Begich, Alaska's at-large member of Congress, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the current state of AI policy on the Hill. As one of the few members of Congress with a background in tech, Rep. Begich offers a distinctive perspective on this novel and evolving regulatory question. The two also assess how Alaska may become a leader in developing AI infrastructure. Finally, Rep. Begich shares how he and his staff leverage AI to improve their own operations.
Kendall Cotton, Founder and CEO of Montana’s Frontier Institute, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss Montana’s groundbreaking Right to Compute Act and how Montana hopes to protect access to AI and related technologies. The two discuss the history and reach of the Act and why other states may want to follow Montana's lead.
Why Data Governance Is the Key to AI Biosecurity, with Jassi Pannu and Doni Bloomfield
Alan Rozenshtein, research director at Lawfare, spoke with Jassi Pannu, assistant professor at the Johns Hopkins Bloomberg School of Public Health and senior scholar at the Johns Hopkins Center for Health Security, and Doni Bloomfield, associate professor of law at Fordham Law School, about their proposed framework for governing biological data to reduce AI-enabled biosecurity risks. The conversation covered the origins of the proposal in the 50th anniversary of the 1975 Asilomar conference on recombinant DNA; the distinction between general-purpose AI models and biology-specific foundation models like genomic language models; the biosecurity threats posed by AI, including uplift of novice actors and raising the ceiling of expert capabilities; the proposed biosecurity data levels (BDL 0-4) framework and how it draws on precedents from biosafety levels and genetic privacy regulation; the challenge of capabilities-based rather than pathogen-based data classification; the institutional and regulatory mechanisms for enforcement, including the role of NIH grant conditions and a proposed mandatory federal regime; international collaboration and the importance of U.S. leadership given that most high-tier data is generated domestically; the relationship between the proposal and open-source biological AI development; and the offense-defense imbalance in biosecurity and the case for mandatory gene synthesis screening.
Mentioned in this episode:
Jassi Pannu, Doni Bloomfield, et al., "Biological data governance in an age of AI," Science (2026)
Jassi Pannu, Doni Bloomfield, et al., "Dual-use capabilities of concern of biological AI models," PLOS Computational Biology (2025)
Dario Amodei, "The Adolescence of Technology" (2026)
The Genesis Mission Executive Order (November 2025)
On Friday, March 20, the Trump Administration announced a National Policy Framework for AI. White House officials have stressed that they want Congress to act on the framework's recommendations within the year. What this all means for AI policy is an open question that warrants calling in two of the smartest folks in the business: Helen Toner, Interim Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), and Dean Ball, a senior fellow at the Foundation for American Innovation. This rapid-response episode cuts to the chase as everyone makes sense of this important development in the national AI policy conversation.
Alan Rozenshtein, research director at Lawfare, spoke with Woodrow Hartzog, the Andrew R. Randall Professor of Law at Boston University School of Law, and Jessica Silbey, Professor of Law and Honorable Frank R. Kenison Distinguished Scholar in Law at Boston University School of Law, about their new paper "How AI Destroys Institutions," which argues that AI systems threaten to erode the civic institutions that organize democratic society. The conversation covered the sociological concept of institutions and why they differ from organizations; the idea of technological affordances from science and technology studies; how AI undermines human expertise through both accuracy and inaccuracy; the cognitive offloading problem and whether AI-driven skill atrophy differs from past technological transitions; whether AI-generated decisions can satisfy the legitimacy requirements of the rule of law; the role of reason-giving, contestation, and political accountability in legal institutions; the tension between the paper's sweeping diagnosis and its more incremental prescriptions; and the case for bespoke, institution-specific AI tools over general-purpose deployment.
Tomicah Tillemann, President at Project Liberty Institute, joins the show. Tomicah offers a unique perspective on regulating emerging technology given his time as a venture capitalist and head of policy at Andreessen Horowitz and Haun Ventures. His contemporary focus is on identifying “policy solutions that enable human agency and human flourishing in an AI-powered world.” It’s a tall order that he breaks down with Kevin Frazier, a Senior Fellow at the Abundance Institute, Adjunct Research Fellow at the Cato Institute, and a Senior Editor at Lawfare.
Kevin Frazier hangs out with Caleb Watney of the Institute for Progress and Austin Carson of SeedAI at the Ashby Workshops to discuss the long-run policy foundations needed for the AI Age. Rather than focusing on near-term regulation, the conversation explores how AI challenges existing assumptions about state capacity, research funding, talent pipelines, and institutional design. Caleb and Austin unpack concepts like meta-science, public compute infrastructure, immigration policy, and congressional expertise, and explain why these “boring” policy areas may matter more for AI outcomes than headline-grabbing rules. The episode also examines how AI policy discourse has evolved in Washington, what lessons policymakers should draw from efforts like the National AI Research Resource, and why many AI governance failures may ultimately be failures of institutions rather than intent.
Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and senior editor at Lawfare, were joined by Dean Ball, senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter, and Timothy B. Lee, author of the Understanding AI newsletter, for a joint crossover episode of the Scaling Laws and AI Summer podcasts about the escalating dispute between Anthropic and the Pentagon over AI usage restrictions in military contracts. The conversation covered the timeline of the Anthropic-Pentagon dispute and Secretary Hegseth's supply chain risk designation; the legal basis for the designation under 10 U.S.C. § 3252 and whether it was intended to apply to domestic companies; the role of personality and politics in the dispute; OpenAI's competing Pentagon contract and debate over whether its terms actually match Anthropic's red lines; public opinion polling showing bipartisan concern about AI mass surveillance and autonomous weapons; the broader question of what the government-AI industry relationship should look like; the prospect of partial or full nationalization of AI capabilities; and whether frontier AI models are actually decisive for military applications.
Packy McCormick, founder of Not Boring and Not Boring Capital, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the power of narratives in tech, the intersection of investing and policy, and what it means to build frameworks for the future in an age of rapid technological change.
An impasse is coming to a head, and the resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01pm ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei has doubled down on his insistence that Anthropic tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled a supply chain risk, an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down. You can also read more on this weighty issue via Alan’s two recent Lawfare pieces here and here.
Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy. The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.
Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training. The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.
Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation. They discuss:
Why traditional regulation struggles with rapid AI innovation.
The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
What success looks like for Ashby Workshops and the future of adaptive AI policy design.
Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University. The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements.
Additional reading:
"Durably Reducing Conspiracy Beliefs Through Dialogues with AI" - Science (2024)
"Persuading Voters Using Human-Artificial Intelligence Dialogues" - Nature (2025)
"The Levers of Political Persuasion with Conversational Artificial Intelligence" - Science (2025)
"How Malicious AI Swarms Can Threaten Democracy" - Science (2026)
Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explores everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute." Learn more about the Cognitive Revolution here. It's our second favorite AI podcast!
Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how best Americans can navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, the Director of the AI Innovation and Law Program at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback (scalinglaws@lawfaremedia.org) and leave us a review!
Jakub Kraus, a Tarbell Fellow at Lawfare, spoke with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covered the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being.
Mentioned in this episode:
Kevin Frazier, "Interpreting Claude's Constitution," Lawfare
Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare
Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice.
Comments (3)

CJ

If, for instance, FB has to pay a violation fine, why not seed the FTC's expansion through appropriations while also dedicating part of those fines to FTC funding and research?

Feb 19th

C muir

tedious lefties

Feb 14th

C muir

the dying legacy media whining about the new media.😂

Feb 14th