Faster, Please! — The Podcast

🤖🌈 My chat (+transcript) with Nick Bostrom on life in an AI utopia

Update: 2024-06-06

Description

The media is full of dystopian depictions of artificial intelligence, such as The Terminator and The Matrix, yet few have dared to dream up the image of an AI utopia. Nick Bostrom's most recent book, Deep Utopia: Life and Meaning in a Solved World, attempts to do exactly that. Bostrom explores what it would mean to live in a post-work world, where human labor is vastly outperformed by AI, or even made obsolete. When all of our problems have been solved in an AI utopia . . . well, what's next for us humans?

Bostrom is a philosopher and was the founding director of the Future of Humanity Institute at Oxford University. He is currently the founder and director of research at the Macrostrategy Research Initiative. He also wrote the much-discussed 2014 book, Superintelligence: Paths, Dangers, Strategies.

In This Episode

* Our dystopian predisposition (1:29)

* A utopian thought experiment (5:16)

* The plausibility of a solved world (12:53)

* Weighing the risks (20:17)

Below is a lightly edited transcript of our conversation.

Our dystopian predisposition (1:29)

Pethokoukis: The Dutch futurist Frederik Polak famously put it that any culture without a positive vision of the future has no future. That's a light paraphrase. And I kind of think that's where we are right now: despite the title of your book, I feel like right now people can only imagine dystopia. Is that what you think? Do I have that wrong?

Bostrom: It's easier to imagine dystopia. I think we are all familiar with a bunch of dystopian works of fiction. The average person could rattle off Brave New World, 1984, The Handmaid's Tale. Most people probably couldn't name a single utopian work, and even the attempts that have been made, if you look closely at them, you probably wouldn't actually want to live there. It is an interesting fact that it seems easier for us to imagine ways in which things could be worse than ways in which things could be better. Maybe a culture that doesn't have a positive vision has no future but, then again, cultures that have had positive visions have also often ended in tears. A lot of the time, utopian blueprints have been used as excuses for coercively imposing some highly destructive vision on society. So you could argue either way whether it is actually beneficial for societies to have a super clear, long-term vision that they are striving towards.

I think if we were to ask people to give a dystopian vision, we would get probably some very picturesque, highly detailed visions from having sort of marinated in science fiction for decades. But then if you asked people about utopia, I wonder if all their visions would be almost alike: Kind of this clean, green world, with maybe some tall skyscrapers or something, and people generally getting along. I think it'd be a fairly bland, unimaginative vision.

That would be the idea of "all happy families are alike, but each unhappy family is unhappy in its own unique way." I think it's easy enough to imagine ways in which the world could be slightly better than it is. So imagine a world exactly like the one we have, except minus childhood leukemia. Everybody would agree that definitely seems better. The problem is, if you start to add these improvements and you stack on enough of them, then eventually you face a much more philosophically challenging proposition, which is: if you remove all the difficulties and all the shadows of human life, all forms of suffering and inconvenience, and all injustice and everything, then you risk ending up in a rather bland future where there is no challenge, no purpose, no meaning for us humans, and it then almost becomes dystopian again, but in a different way. Maybe all our basic needs are catered to, but there seems to be some other part missing that is important for humans to have flourishing lives.

A utopian thought experiment (5:16)

Is your book a forecast or is it a thought experiment?

It's much more a thought experiment. As it happens, I think there is a non-trivial chance we will actually end up in this condition, which I call a "solved world," particularly with the impending transition to the machine intelligence era, which I think will be accompanied by significant risks, including existential risk. My previous book, Superintelligence, which came out in 2014, focused on what could go wrong when we are developing machine superintelligence, but if things go right (and this could unfold within the lifetime of a lot of us who are alive on this planet today), they could go very right. In particular, all kinds of problems that could be solved with better technology could be solved in this future where you have superintelligent AIs doing the technological development. And we might then actually confront a situation where the questions we can now explore as a thought experiment would become pressing practical questions, where we would actually have to make decisions about what kinds of lives we want to live and what kind of future we want to create for ourselves, if all the instrumental limitations that currently constrain the choice set we face were removed.

I imagine the book would have seemed almost purely a thought experiment before November 2022, when ChatGPT was rolled out by OpenAI, and now, to some people, it seems like these are questions certainly worth pondering. You talked about the impending machine superintelligence: how impending do you think it is, and what is your confidence level? Certainly we have technologists all over the map on the likelihood of reaching it, maybe through large language models; other people think those can't quite get us there. So how much work is "impending" doing in that sentence?

I don't think we are in a position any longer to rule out even extremely short timelines. We can't be super confident that we won't have an intelligence explosion next year. It could take longer, it could take several years, it could take a decade or longer. We have to think in terms of smeared-out probability distributions here, but we don't really know what capabilities will be unlocked as you scale up even the current architectures one more order of magnitude, to the GPT-5 level or GPT-6 level. It might be that, just as the previous steps from GPT-2 to GPT-3 and from 3 to 4 unlocked almost qualitatively new capabilities, the same might hold as we keep going up this ladder of just scaling up the current architectures. And so we are now in a condition where it could happen at any time, basically. That doesn't mean it will happen very soon, but we can't be confident that it won't.

I do think it is slightly easier now for people, even just looking at the current AI systems, to see that we have to take these questions seriously, and I think it will become a lot easier as the penny starts to drop that we're about to see this big transition to the machine intelligence era. When my previous book, Superintelligence, was published back in 2014 (and it was in the works for six years prior), what was completely outside the Overton window was even the idea that one day we would have machine superintelligence, and, in particular, the idea that there would then be an alignment problem: a technical difficulty of steering these superintelligent intellects so that they would actually do what we want. It was completely neglected by academia. People thought, that's just science fiction or idle futurism. There were maybe a handful of people on the internet who were starting to think about that. In the intervening 10 years, that has changed, and so now all the fr



James Pethokoukis