Faster, Please! — The Podcast

🤖🧠 My chat (+transcript) with Google DeepMind's Séb Krier on AGI and public policy

Update: 2024-08-09

Description

In a world of Artificial General Intelligence, machines would be able to match, and even exceed, human cognitive abilities. AGI might still be science fiction, but Séb Krier sees this technology as not only possible, but inevitable. Today on Faster, Please! — The Podcast, I chat with Krier about how our public policy should facilitate AGI's arrival and flourishing.

Krier is an AI policy expert, adviser, and attorney. He currently works in policy development and strategy at Google DeepMind. He previously served as Head of Regulation for the UK Government's Office for Artificial Intelligence and was a Senior Tech Policy Researcher at Stanford's Cyber Policy Center.

In This Episode

* The AGI vision (1:24)

* The risk conversation (5:15)

* Policy strategy (11:25)

* AGI: "if" or "when"? (15:44)

* AI and national security (18:21)

* Chatbot advice (20:15)

Below is a lightly edited transcript of our conversation.

Pethokoukis: Séb, welcome to the podcast.

Krier: Thank you. Great to be here.

The AGI vision (1:24)

Let's start with a bit of context that may influence the rest of the conversation. What is the vision or image of the future regarding AI — you can define it as machine learning or generative AI — that excites you, that gets you going in the day, that makes you feel like you're part of something important? What is that vision?

I think that's a great question. In my mind, I think AI has been going on for quite a long time, but I think the aim has always been artificial general intelligence. And in a sense, I think of this as a huge deal, and the vision I have for the future is being able to have a very, very large supply of cognitive resources that you can allocate to quite a wide range of different problems, whether that's energy, healthcare, or governance; there are many, many ways in which this technology can be applied as a general purpose technology. And so I guess my vision is seeing that being used to solve quite a wide range of problems that humans have had for decades, centuries, millennia. And I think you could go in so many different directions with that, whether it's curing diseases, or optimizing energy grids, and more. But I think, broadly, that's the way I think about it. So the objective, in a sense, is safe AGI [Artificial General Intelligence], and from that I think it can go even further. And I think in many ways, this can be hugely beneficial to science, R&D, and humanity as a whole. But of course, that also comes with ways in which this could be misused, or accidents, and so on. And so huge emphasis on the safe development of AGI.

So you're viewing it as a tool, as a way to apply intelligence across a variety of fields, a variety of problems, to solve those problems, and of course, the word in there doing a lot of lifting is "safely." Given the discussion over the past 18 months about that word, "safely," I think someone who maybe only pays passing attention to this issue might think that it's almost impossible to do it safely without jeopardizing all those upside benefits. But you're confident that those two things can ultimately be in harmony?

Yeah, absolutely, otherwise I wouldn't necessarily be working on AGI policy. So I think I'm very confident this can be done well. I think it also depends what we mean by "safety" and what kind of safety we have in mind. With any technology, there will be costs and trade-offs, but of course the upside here is enormous, and, in my mind, very much outweighs potential downsides.

However, I think for certain risks, things like potentially catastrophic risks and so on, there is an argument for treading a careful path and making sure this is done scientifically, with the scientific method in mind, and doing that well. But I don't think there's fundamentally a necessary tension, and I think, in fact, what many people sometimes underestimate is how AI itself, as a technology, will be helpful in mitigating a lot of the risks we're foreseeing and thinking about. There are obviously ways in which AI can be used for cyber offense, but many ways in which you can also use it for defense, for example. I'm cautiously optimistic about how this can be developed and used in the long run.

The risk conversation (5:15)

Since these large language models and chatbots were rolled out to public awareness in late 2022, has the safety regulatory debate changed in any way? It seems to me that there was a lot of talk early on about these existential risks. Now I seem to be hearing less about that and more about issues like disinformation or bias. From your perspective, has that debate changed, and has it changed for the better or worse?

I think it has evolved quite a lot over the past — I've been working in AI policy since 2017 and there have been different phases: at first a lot of skepticism around AI even being useful, or hype, and so on, and then seeing more and more of what these general models could do. And I think, initially, a lot of the concerns were around things like bias, and discrimination, and errors. So even things like facial-recognition technologies were, early on, very problematic in many ways: not just the ways in which they were applied, but they would be prone to a lot of errors and biases that could be unfair, whereas they're much better now, and therefore the concern now is more about misuse than about accidentally misidentifying someone, I would say. So I think, in that sense, these things have changed. And then a lot of the discourse around existential risk and so on, there was a bit of a peak last year, and then this switched a bit towards more catastrophic risks and misuse.

There's a few different things. Broadly, I think it's good that these risks are taken seriously. So, in some sense, I'm happy that these have taken more space, in a way, but I think there's also been a lot of alarmism and unnecessary doomerism, of crying wolf a little bit too early. I think what happens is that sometimes people also conflate a capability of a system with how that fits within a wider risk or threat model, or something; and the latter is often under-defined, and there's a tendency for people to often see the worst in technology, particularly in certain regions of the world, so I think sometimes a lot has been a little bit exaggerated or overhyped.

But, having said that, I think it's very good there's lots of research going on into the many ways in which this could potentially be harmful; certainly on the research side, the evaluation side, there's a lot of great work. We've published some papers on sociotechnical evaluations, dangerous capabilities, and so on. All of that is great, but I think there have also been some more polarized voices calling for excessive measures, whether regulatory, or pausing AI, and so on, that I think have been a little bit too trigger-happy. So I'm less happy about these bits, but there's been a lot of good as well.

And much of the debate about policy has been about the right sort of policy to prevent bad things from happening. How should we think about policy that maximizes the odds of good things happening? What should policymakers do to help AI reshape science, to help AI diffuse as efficiently as possible throughout the economy? How do we optimize the upside through policy rather than just focusing on making sure the bad things don't happen?

I think the very first thing is not having rushed regulation. I'm not personally a huge fan of the Precautionary Principle, and I think that, very often, regulations can cause quite a lot of harm downstream, and they're very sticky and hard to remove.

The other thing you can do, beyond avoiding bad policy: I think a lot of the levers for making sure that development goes well aren't necessarily all directly AI-related. So it'll be things like immigration: attracting a lot of talent, for example, I think will be very important, so immigration is a big one. Power and energy: you want there to be a lot more — I'm a big fan of nuclear, so I think that kind of thing is also very helpful in terms of the expected needs for AI development in the future. And then there are certain things governments could potentially do in some narrow domains, like Advance Market Commitments, for example, although that's not a panacea.

Commitments to do what?

Oh, Advance Market Commitments: pull mechanisms to create a market for a particular solution. So like Operation Warp Speed, but you could have an AI equivalent for certain applications.

