#153 — Possible Minds

Update: 2019-04-15

Description

In this episode of the Making Sense podcast, Sam Harris introduces John Brockman's new anthology, "Possible Minds: 25 Ways of Looking at AI," in conversation with three of its authors: George Dyson, Alison Gopnik, and Stuart Russell.

Comments (9)

Raúl Beristáin

So much nonsense in the last interview. Too bad Sam dismissed the great arguments of the lady in the middle one. For example, the discussion about a machine with goals not allowing us to switch it off is based on flawed assumptions: if it has a simple goal such as fetching coffee, it is nonsensical to believe that it has the capacity to reason about the consequences of being switched off. It would have enough capability to navigate the environment and get coffee, but if you try to switch it off it will enter a fail condition. It might try to ram through you if you are in its way, but more likely it will have obstacle avoidance and try to get around you. It doesn't follow that it has sufficient intelligence to work out your intent to switch it off, much less the mental capacity to plan actions to stop you.

On the other hand, a machine that has general intelligence is not likely either to follow our will or to choose simple tasks and goals like making paper clips or fetching coffee. It will likely follow or choose much more complex goals, and because WE make them and WE --at least initially-- need to communicate with them, they should have the capacity to communicate their intent and understand ours. Yes, communication doesn't guarantee agreement, but it won't catch you off guard. It won't be your janitorial robot suddenly gaining awareness and rebelling.

And yes, you can turn it off, because it will be a brain in a jar for years to come. We can build brains in jars (computers) with these amazing powers of processing, and we can build robots that can barely get by in the physical world under limited conditions. But if we are decades away from general artificial intelligence (despite these guys' fantasies), we are perhaps twice as far from putting GAI in a robot body that can act under its own intelligence and power. And we have all the guns ;) Now, if someone is stupid enough to plug a known GAI into large weapons systems, they deserve what happens next. But my point is that this is so far off in the future that worrying about it is worse than pointless in an age of climate change, as the lady rightly said.

Apr 17th
Reply (7)

Christoffer Enfors

I can't listen to the first guy... hooooly

Apr 16th
Reply