Scaling Laws

Author: Lawfare & University of Texas Law School

Subscribed: 109 · Played: 1,180

Description

Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.

Hosted on Acast. See acast.com/privacy for more information.

189 Episodes
Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, speaks with Cass Sunstein, the Robert Walmsley University Professor at Harvard University, about his new book, Imperfect Oracle: What AI Can and Cannot Do. They discuss when we should trust algorithms over our own judgment; why AI can eliminate the noise and bias that plague human decision-making but cannot predict revolutions, cultural hits, or even a coin flip; and, perhaps most importantly, when it makes sense to delegate our choices to AI and when we should insist on deciding for ourselves.
Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown's McCourt School of Public Policy, and Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, spoke with Jacob Mchangama, research professor of political science at Vanderbilt University and founder of The Future of Free Speech, and Jacob Shapiro, the John Foster Dulles Professor of International Affairs at Princeton University. The conversation covered the findings of a new report examining how AI models handle contested speech; comparative free speech regulations across six jurisdictions; empirical testing of how major chatbots respond to politically sensitive prompts; and the tension between free expression principles and concerns about manipulation in AI systems.
In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China.
Mentioned in this episode:
Executive Order: Ensuring a National Policy Framework for Artificial Intelligence
Charlie Bullock, "Legal Issues Raised by the Proposed Executive Order on AI Preemption," Institute for Law & AI
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo break down the Association's recent survey of SMEs, which gathered the views of more than 1,000 enterprises on the regulation and adoption of AI.
Follow Graham: @GDufault and ACT | The App Association: @actonline
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents.
Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode.
Check out Caleb’s recent research here.
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle talk about at town halls and in press releases. They’re creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That’s the core question of the episode. Giddy up for a great chat!
Learn more about the guests and their companies here:
Andrew's LinkedIn, Vesta's LinkedIn
Thomas’s LinkedIn, August’s LinkedIn
Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier. The two also review how Jeff’s prior work in the autonomous vehicle space prepared him for the challenges and opportunities of navigating legal uncertainties in AI governance.
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council, Nathan Goldschlag, Director of Research at the Economic Innovation Group, and Bharat Chandar, Economist at the Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. We discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity.
These three are prolific researchers. Give them a follow to find their latest works.
Anton: @akorinek on X
Nathan: @ngoldschlag and @InnovateEconomy on X
Bharat: X: @BharatKChandar, LinkedIn: @bharatchandar, Substack: @bharatchandar
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. Defined as AI tools capable of autonomously completing tasks on your behalf, AI agents are widely expected to soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.”
Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy.
The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.
Mentioned in this episode:
"The Era of A.I. Propaganda Has Arrived, and America Must Act" by Brett J. Goldstein and Brett V. Benson (New York Times, August 5, 2025)
"China Turns to A.I. in Information Warfare" by Julian E. Barnes (New York Times, August 6, 2025)
"The GoLaxy Papers: Inside China's AI Persona Army" by Dina Temple-Raston and Erika Gajda (The Record, September 19, 2025)
"The supply of disinformation will soon be infinite" by Renée DiResta (The Atlantic, September 2020)
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.
The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53’s key provisions, and forecast what may be coming next in Sacramento and D.C.
Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy lab, and Dan Zhao, AI researcher at MIT, GoogleX, and Microsoft focused on AI for science and sustainable, energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy a single ChatGPT query consumes, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI’s growing energy and environmental costs.
Leo Wu provided excellent research assistance on this podcast.
Read more from Mosharaf:
https://ml.energy/
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You’ll “like” (bad pun intended) this one.
Leo Wu provided excellent research assistance to prepare for this podcast.
Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/
Read more from Ravi:
https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design
https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB 53, the frontier AI transparency (and more) bill that California Governor Gavin Newsom signed into law on September 29.
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies’ Technology, Liberalism, and Abundance Conference in Arlington, Virginia.
Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/
Alan Rozenshtein, Renée DiResta, and Jess Miers discuss the distinct risks that generative AI systems pose to children, particularly in relation to mental health. They explore the balance between the benefits and harms of AI, emphasizing the importance of media literacy and parental guidance. Recent developments in AI safety measures and ongoing legal implications are also examined, highlighting the evolving landscape of AI regulation and liability.
On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a “market dilution” effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate.
Mentioned in this episode:
"How to Think About Remedies in the Generative AI Copyright Cases" by Pam Samuelson in Lawfare
Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith
Bartz v. Anthropic
Kadrey v. Meta Platforms
Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.
U.S. Copyright Office, Copyright and Artificial Intelligence, Part 3: Generative AI Training
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.
Select works by Gans include:
A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.
You can read Steven’s Substack here: https://stevenadler.substack.com/
Thanks to Leo Wu for research assistance!
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the contrasting and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and the US. The trio start with an assessment of the EU’s use of the Brussels Effect, a term Anu coined, to shape AI development. Next, they explore the US’s increasingly interventionist industrial policy with respect to key sectors, especially tech.
Read more:
Anu’s op-ed in The New York Times
The Impact of Regulation on Innovation by Philippe Aghion, Antonin Bergeaud & John Van Reenen
Draghi Report on the Future of European Competitiveness
Comments (3)

CJ

If, for instance, FB has to pay a violation fine, why not seed the expansion of the FTC through appropriations but dedicate fines partially to FTC funding and research?

Feb 19th

C muir

tedious lefties

Feb 14th

C muir

the dying legacy media whining about the new media.😂

Feb 14th