In-Ear Insights: How Generative AI Reasoning Models Work

Updated: 2025-06-11

Description

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the Apple AI paper and critical lessons for effective prompting, plus a deep dive into reasoning models.


You’ll learn what reasoning models are and why they sometimes struggle with complex tasks, especially when dealing with contradictory information. You’ll discover crucial insights about AI’s “stateless” nature, which means every prompt starts fresh and can lead to models getting confused. You’ll gain practical strategies for effective prompting, like starting new chats for different tasks and removing irrelevant information to improve AI output. You’ll understand why treating AI like a focused, smart intern will help you get the best results from your generative AI tools. Tune in to learn how to master your AI interactions!
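
Since the episode leans on the idea that generative AI is "stateless," here is a minimal sketch of what that means in practice, using the OpenAI Python SDK as one example. The model name and prompts are illustrative assumptions; any chat-style API behaves the same way.

# A sketch of statelessness: each API call is independent unless the
# application resends the conversation history itself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # illustrative model choice

# Call 1: the model sees only this one message.
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "My brand color is teal."}],
)

# Call 2: a brand-new request. Nothing from call 1 carries over,
# so the model has no idea what the brand color is.
second = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What is my brand color?"}],
)

# The fix: resend the relevant history (and only the relevant history)
# with every request. This is all a "chat" interface does behind the scenes.
third = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": "My brand color is teal."},
        {"role": "assistant", "content": first.choices[0].message.content},
        {"role": "user", "content": "What is my brand color?"},
    ],
)

This is also why starting a new chat for a new task helps: it stops stale, irrelevant history from being resent with every request and confusing the model.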


Watch the video here:



Can’t see anything? Watch it on YouTube here.


Listen to the audio here:


https://traffic.libsyn.com/inearinsights/tipodcast-how-generative-ai-reasoning-models-work.mp3

Download the MP3 audio here.





Machine-Generated Transcript


What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.


Christopher S. Penn – 00:00
In this week’s In-Ear Insights, there is so much in the AI world to talk about. One of the things that came out recently that I think is worth discussing, because we can talk about the basics of good prompting as part of it, Katie, is a paper from Apple, whose own AI efforts have stalled a bit. The paper shows that reasoning models, when given very complex puzzles—logic-based puzzles or spatial puzzles, like moving blocks from stack to stack and getting them in the correct order—hit a wall after a while and then just collapse and can’t do anything. So the interpretation of the paper is that there are limits to what reasoning models can do and that they can kind of confuse themselves. On LinkedIn and social media and stuff,


Christopher S. Penn – 00:52
Of course, people have taken this to the illogical extreme, saying artificial intelligence is stupid, nobody should use it, or artificial general intelligence will never happen. None of that is within the paper. Apple was looking at a very specific, narrow band of reasoning, called deductive reasoning. So what I thought we’d talk about today is the paper itself to a degree—not a ton about it—and then what lessons we can learn from it that will make our own AI practices better. So to start off, when we talk about reasoning, Katie, particularly you as our human expert, what does reasoning mean to the human?
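
A minimal Python sketch of the kind of stack-moving puzzle Chris describes, using Tower of Hanoi (one of the puzzles the Apple paper tests) as a stand-in: a perfect solution requires 2^n − 1 moves for n disks, so solution length explodes as the puzzle grows.

def hanoi(n, source, target, spare, moves):
    # Recursively enumerate the moves for n disks from source to target.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)

for n in (3, 5, 10, 15):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    # A perfect solution takes 2**n - 1 moves: 7, 31, 1023, 32767.
    print(f"{n} disks -> {len(moves)} moves")

One wrong move early in a 32,767-move sequence invalidates everything after it, which is one way to understand the collapse behavior the paper reports.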


Katie Robbert – 01:35
When I think about the different ways that the word is casually thrown around for humans, you might say, “Can you give me a reasonable answer?” or “What is your reason?” The way that I think about it is, if you’re looking for a reasonable answer to something, then that means that you are putting the expectation…

