#153 — Possible Minds

Update: 2019-04-15

Description

In this episode of the Making Sense podcast, Sam Harris introduces John Brockman's new anthology, "Possible Minds: 25 Ways of Looking at AI," in conversation with three of its authors: George Dyson, Alison Gopnik, and Stuart Russell.

You can support the Making Sense podcast and receive subscriber-only content at samharris.org/subscribe.

Comments (9)

Raúl Beristáin

So much nonsense in the last interview. Too bad Sam dismissed the great arguments of the lady in the middle one. For example, the discussion about a machine with goals not allowing us to switch it off is based on flawed assumptions: if it has a simple goal such as fetching coffee, it is nonsensical to believe that it has the capacity to reason about the consequences of being switched off. It would have enough capability to navigate the environment and get coffee, but if you try to switch it off it will enter a fail condition. It might try to ram through you if you are in its way, but more likely it will have obstacle avoidance and try to get around you. It doesn't follow that it has sufficient intelligence to work out your intent to switch it off, much less the mental capacity to plan actions to stop you.

On the other hand, a machine that has general intelligence is not likely to either follow our will or choose simple tasks and goals like making paper clips or fetching coffee. It will likely follow or choose much more complex goals, and because WE make them and WE --at least initially-- need to communicate with them, they should have the capacity to communicate their intent and understand ours. Yes, communication doesn't guarantee agreement, but it won't catch you off guard. It won't be your janitorial robot suddenly gaining awareness and rebelling.

And yes, you can turn it off, because it will be a brain in a jar for years to come. We can build brains in jars (computers) with these amazing powers of processing, and we can build robots that can barely get by in the physical world under limited conditions. But if we are decades away from general artificial intelligence (in spite of these guys' fantasies), we are perhaps twice as far from putting GAI in a robot body that can act under its own intelligence and power. And we have all the guns ;) Now, if someone is stupid enough to plug a known GAI into large weapons systems, they deserve what happens next, but my point is that this is so, so far off in the future that worrying about it is worse than pointless in an age of climate change, as the lady rightly said.

Apr 17th

Winds of the Magnetar

Raúl Beristáin Nothing in the 2nd interview constituted dismissal. They overwhelmingly agreed and complimented each other's knowledge. The only notable distance between their views was on the hard problem of consciousness. One should not have to explain why that's a legitimate point of contention. Even on that single PhD program's worth of material they just weighted possibilities differently from each other. We have the capacity, and could use a heightened capacity, to postulate about future issues and current ones. For such a long-winded comment it ends with a total refutation of an entire field of conversation, which sounds like a self-defeating cause.

The coffee maker was a derivation, a thought experiment, meant to demonstrate a system's commitment to completing a task independent of limitation, reason, common sense, or a goal structure aligned with human objectives. In an attempt to refute their assumed reasoning (in fact the coffee maker example constitutes flawed human alignment on the part of the machine), you assume good reasoning and built-in limits which are not guaranteed; they have to be engineered and, imagine this, considered deeply in conversation by its creators and their advisors as part of development. That's the entire point: there is a startling portion of people seeking technology without consideration of these limitations.

You seem to be skipping half the argument: when these things, if ever, become possible is not the main concern. Regardless of the time frame we should still be seriously considering the ethical, legal, technical, and geopolitical repercussions. Thought experiments and derivations are meant to help one understand; sometimes one can be confused by them instead and should listen again more closely.

May 13th

Raúl Beristáin

Rick Arden I hear you, but I don't think I agree. I'm not an expert, and I assume you aren't either. If so, both of us need to rely on what actual experts say is going on. It is true that there are scientists --experts-- who disagree that climate change is man-made. But this isn't one of them "two sides to every story" issues. This is a story with 10,000 sides, and 9,989 sides say it's real, whereas only 11 say it isn't. Of course, the majority has been wrong about science before, but the maths on this issue have been verified by the vast majority of experts, and they agree that, if anything, it's worse than we thought. But let's imagine for a second it is made up. That reminds me of a cartoon I saw once that said: What if this is a hoax and we end up building a better world for nothing!?!

Apr 25th

Christoffer Enfors

I can't listen to the first guy.. hooooly

Apr 16th