Faster, Please! — The Podcast

🤖 My chat (+transcript) with tech policy analyst Adam Thierer on regulating AI

Update: 2024-05-30

Description

While AI doomers proselytize their catastrophic message, many politicians are recognizing that the loss of America's competitive edge poses a much more real threat than the supposed "existential risk" of AI. Today on Faster, Please! — The Podcast, I talk with Adam Thierer about the current state of the AI policy landscape and the accompanying fierce regulatory debate.

Thierer is a senior fellow at the R Street Institute, where he promotes greater freedom for innovation and entrepreneurship. Prior to R Street, he worked as a senior fellow at the Mercatus Center at George Mason University, president of the Progress and Freedom Foundation, and at the Adam Smith Institute, Heritage Foundation, and Cato Institute.

In This Episode

* A changing approach (1:09)

* The global AI race (7:26)

* The political economy of AI (10:24)

* Regulatory risk (16:10)

* AI policy under Trump (22:29)

Below is a lightly edited transcript of our conversation.

A changing approach (1:09)

Pethokoukis: Let's start out with just trying to figure out the state of play when it comes to AI regulation. Now I remember we had people calling for the AI Pause, and then we had a Biden executive order. They're passing some sort of act on AI in Europe, and now recently a Senate working group on AI put out a list of guidelines or recommendations. Given where we started, which was "shut it down," to where we're at now, has that path been what you might've expected, given where we were when we were at full panic?

Thierer: No, I think we've moved into a better place. Let's look back just one year ago this week: In the Senate Judiciary Committee, there was a hearing where Sam Altman of OpenAI testified along with Gary Marcus, who's a well-known AI worrywart, and the lawmakers were falling all over themselves to praise Sam and Gary for basically calling for a variety of really extreme forms of AI regulation and controls, including not just national but international regulatory bodies, new general-purpose licensing systems for AI, a variety of different types of liability schemes, transparency mandates, disclosures such as so-called "AI nutritional labels" — I could go on down the list of all the types of regulations that were being proposed that day. And of course this followed, as you said, Jim, a call for an AI Pause, without any details about exactly how that would work, but it got a lot of signatories, including people like Elon Musk, which is very strange considering he was at the same time deploying one of the biggest AI systems in history. But enough about Elon.

The bottom line is that those were dark days, and I think the tenor of the debate and the proposals on the table today, one year after that hearing, have improved significantly. That's the good news. The bad news is that there's still a lot of problematic regulatory proposals percolating throughout the United States. As of this morning, as we're taping the show, we are looking at 738 different AI bills pending in the United States according to multistate.ai, an AI tracking service. One hundred and — I think — eleven of those are federal bills. The vast majority of it is state. But that count does not include all of the municipal regulatory proposals that are pending for AI systems, including some that have already passed in cities like New York City, which already has a very important AI regulation governing algorithmic hiring practices. So the bottom line, Jim, is it's the best of times, it's the worst of times. Things have both gotten better and worse.

Well — just because it's the most recent thing that happened — I know with this Senate working group, they were having all kinds of technologists and economists come in and testify. So that report: is it really calling for anything specific to happen? What's in there other than just kicking it back to all the committees? If you just read that report, what does it want to happen?

A crucial thing about this report — and let's be clear what this is, because it was an important report — is that Senate Majority Leader Chuck Schumer was in charge of it, along with a bipartisan group of other major senators, and this started with the idea of so-called "AI insight forums" last year, and it seemed to be pulling some authority away from committees and taking it to the highest levels of the Senate to say, "Hey, we're going to dictate AI policy and we're really scared." And so that did not look good. I think in the process, just politically speaking —

That, in itself, is a good example. That really represents the level of concern that was going around, that we need to do something different and special to address this existential risk.

And this was the leader of the Senate doing it and taking away power, in theory, from his committee members — which did not go over well with said committee members, I should add. And so a whole bunch of hearings took place, but they were not really formal hearings, they were just these AI insight forum working groups where a lot of people sat around and said the same things they always say on a daily basis about the positives and negatives of AI. And the bottom line is, just last week, a report came out from this bipartisan Senate AI working group that was important because, again, it did not adopt the recommendations that were on the table a year ago when the process got started last June. It did not have overarching general-purpose licensing of artificial intelligence, no new call for a brand-new Federal Computer Commission for America, no sweeping calls for liability schemes like some senators want, or other sorts of mandates.

Instead, it recommended a variety of more generic policy reforms and then kicked a lot of the authority back to those committee members to say, "You fill out the details, for better or worse." And it also included a lot of spending. One thing that seemingly everybody agrees on in this debate is that, well, the government should spend a lot more money, and so another $30 billion was on the table of sort of high-tech pork for AI-related stuff. But it really did signal a pretty important shift in approach, enough that it agitated the groups on the more pro-regulatory side of this debate, who said, "Oh, this isn't enough! We were expecting Schumer to go for broke and swing for the fences with really aggressive regulation, and he's really let us down!" To which I can only say, "Well, thank God he did," because we're in a better place right now because we're taking a more wait-and-see approach on at least some of these issues.

A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics

The global AI race (7:26)

I'm going to ask you in a minute what stuff in those recommendations worries you, but before I do, what happened? How did we get from where we were a year ago to where we've landed today?

A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy, and specifically the realpolitik of geopolitics. We face major adversaries, most specifically China, which has said in documents that the CCP [Chinese Communist Party] has published that it wants to be the global leader in algorithmic and computational technologies by 2030, and it's spending a lot of money and putting a lot of state resources into it. Now, I don't necessarily believe that means they're going to automatically win, of course, but they're taking it seriously. But it's not just China. We have seen in the past year massive state investments and important innovations take place across the globe.

I'm always reminding people that people talk a big game about America's foundational models are large scale systems, including things like Meta’s Llama, which was the biggest open source system in the world a year ago, and then two months after Meta launched Llama, their open source platform, the government of the UAE came out with Falcon 180B, an open source AI model that was two-and-a-half times larger than Facebook's model. That meant America's AI supremacy and open source foundational models lasted for two months. And that's not China, that's the government of the UAE, which has piled massive resources into being a global leader in computation. Meanwhile, China's launched their biggest superβ€”I'm sorry, Russia's launched their


James Pethokoukis