The Innovators Studio with Phil McKinney


Author: Phil McKinney


Description

Forty years of billion-dollar innovation decisions. The real stories, the hard calls, and the patterns that repeat across every organization that's ever tried to build something new. Phil McKinney shares what those decisions actually look like.

Phil was HP's CTO when Fast Company named it one of the most innovative companies in the world three years running. He co-founded a company and took it public. Now he runs CableLabs, the R&D engine behind the global broadband industry.

This isn't theory. It's what happened. And what you can see coming if you know what to look for.

Running since 2005, originally as The Killer Innovations Show, now The Innovators Studio. Tens of millions of downloads. Full archive at killerinnovations.com. New episodes at philmckinney.com.
708 Episodes
Twenty years. Nearly one thousand episodes on this show. And starting today, we're going to try something a little different this season. Season 21 is about the decisions that actually determine whether innovation lives or dies inside any organization. The real calls. Not the fluff we read in academic textbooks. I want to put you in the rooms where these decisions are happening. What went right. What went wrong. My objective is to expose you to the patterns in innovation decisions so that you can recognize them. Recognize them in yourself, and in the people you need to influence, long before you step on any landmines. So let's get into it.

The Encounter on the Top Floor of Building 25

Making generational decisions on innovation investment can be a make-or-break moment. What I refer to as a CLM: a Career Limiting Move. In my case, it started with a chance conversation with Mark Hurd, HP's CEO. Let me take you back to 2005. HP headquarters is on Page Mill Road in Palo Alto, referred to internally as Building 25. The top floor is where all of the executive offices are. That's where Mark's office was. I was up there doing some meetings and got snagged by Mark. Now, Mark had a reputation. He was a big numbers guy. He believed in what he called extreme benchmarking. You tore into your competitors' numbers. You knew your own numbers inside and out. Others had warned me about this. He had a famous quote that everybody shared: "Stare at the numbers long enough, and they will eventually confess." Mark believed you could not lead a critical role at HP if you did not know your numbers cold. Didn't matter whether it was sales, CTO, a function, or a division. And Mark tested everyone on the leadership team. Not just the leadership team. He would randomly stop employees and ask them for their numbers based on what group they worked in. It was non-stop. It was constant.
Support staff were constantly preparing briefing books for managers, VPs, and leaders, just in case they got nabbed by Mark. In my case, I happened to be walking past his office. Mark waved me in. I sat down, and he immediately started drilling me on the CTO numbers. The number he focused on was R&D as a percentage of revenue.

The Broken Benchmark: R&D as a Percentage of Revenue

Now, if you've been a regular listener of this show, you know my opinion of that metric. R&D as a percentage of revenue is a meaningless number. It is absolutely meaningless. But every public company CEO at an innovation-dependent company, all the tech companies, AI companies, even automotive, lives by this number. It's a number that Wall Street looks at. You have to report R&D expense as part of your quarterlies, and from there it's simple math. When Mark grilled me, he was focused specifically on the PC group at HP. HP's number at the time for the PC group was about one and a half percent: R&D as a percentage of the PC group's revenue. Acer, which was a key competitor, was at 0.8 percent. Less than one percent. Roughly half of HP's number. Apple was at four percent. Mark's question, and he was really pounding on this, was: How do we get our ratios in line with Acer? Basically, he was saying: how do we cut costs so that our R&D expense as a percentage of revenue equals Acer's 0.8 percent? This is exactly the problem with choosing the wrong metric. Now I'm going to quote somebody who I think was probably one of the most insightful leaders in the business world: Charlie Munger. If you've ever watched any of his talks, he had a really strong opinion on certain metrics. Specifically EBITDA: earnings before interest, taxes, depreciation, and amortization. Charlie referred to EBITDA as BS earnings. It was a metric Wall Street swore by, and Munger said it hid more than it revealed.
His exact words: "Every time you see the word EBITDA, just substitute the word 'bullshit' earnings." R&D as a percentage of revenue is the same problem in a different disguise. It's the metric that makes every company look like it's investing when all it's doing is spending. Mark was using a broken instrument to make a generational decision. If you make decisions based on R&D as a percentage of revenue, and then you do comparisons like "let's make our numbers look like Acer," what you are actually deciding to do is cut your R&D. That is generational. You will destroy a company's innovation capability over the next ten to twenty years before you can even have a hope of rebuilding it.

"We Are Not Apple and We Never Will Be"

I looked at him and said: Why aren't we raising our R&D spend to match Apple? Mark didn't hesitate. He said: "We are not Apple and we never will be." I took offense at that. I was offended that he wouldn't even contemplate it. And I pushed back. I pushed back hard. I argued we could be Apple in areas where we had genuine advantage. Here's one example. Go back to September 2004, about a year before my meeting with Mark. Carly Fiorina was still CEO. Carly had just handed Steve Jobs access to the retail shelf space HP spent thirty years building. At that time, HP controlled about nine to nine and a half percent of all retail shelf space for consumer electronics, the largest holding by any single entity in that category. Where did all that come from? It traces back to the calculator days in the 1970s. Those relationships, those stocking slots, that footprint: HP had spent three decades building that access. Apple was launching the iPod. It had no retail distribution in consumer electronics. None. And rather than HP taking advantage of that for itself, it opened the door and allowed Apple to come in. That is how the iPod got its traction.
It bought Apple the time to build out its own retail strategy, which is ultimately what allowed Apple to be where it is today. That wasn't an accident of history. That was HP giving away a structural competitive asset. When I tried to push back on Mark, saying we could be better with the right investment, it didn't land. Mark viewed the PC business as a commodity. And if it's a commodity, you manage expenses. You don't invest in capabilities.

Monthly Arguments and the Search for Better Metrics

There was no decision made that day. But something shifted in me. That was the first of many monthly arguments I had with Mark. And they were non-stop. What it drove me to do was start looking for better metrics. We had something most companies don't have: HP's complete financial history going all the way back to the 1940s. I had access to the numbers, division by division, for one of the founding companies of Silicon Valley. We were getting traction. I was actually getting Mark to align. I was getting the HP board to align. And then what happens? Mark gets removed as CEO and Leo comes in. Then Meg kicked Leo out and took over. Then came the split of HP into two companies. Acer today? Still roughly 0.9 percent of revenue in R&D. Twenty years later, almost exactly where Mark wanted HP to get to.

What I Would Do Differently: Right Argument, Wrong Language

If I'm being honest about what I would do differently: I had the right argument. I had the wrong language. The job wasn't to prove Mark wrong. Nobody changes their mind when they're being told they're wrong. I needed to stop speaking CTO and start speaking CEO. Meet him where he was. Make the case in the language of margin, risk, and competitive position, the language he already trusted. But that language didn't exist when it came to R&D and innovation. That's the reason I spent the rest of my career building something better. And that is what this season is about.
What Comes Next: The Metrics That Tell the Truth

That conversation with Mark sent me looking. If R&D as a percentage of revenue was the wrong metric, and I believe to my core that it was, and is, then what's the right one? We went back through HP's own numbers. We back-cast all the way to the 1940s, looking at the numbers by division and for the overall organization. And then something unexpected happened. The archive team at HP gave me access to something nobody had looked at in decades: Bill Hewlett and Dave Packard's original notebooks. What I found in there pointed me somewhere nobody had thought to look. In the next episode, we're going to talk about the metrics that actually tell the truth when it comes to R&D and innovation.

If this episode gave you some insight, shifted something, share it with somebody who you think needs to hear it. Particularly if you're trying to fight senior leaders around R&D investment. And in the comments below, tell me: what's the one benchmark that you are required to hit, and yet you've never questioned? Is it the right benchmark? Have you really looked at it? I genuinely would like to know. Show notes and this week's Studio Notes are over at philmckinney.com. Subscribe there. That's where the deeper analysis lives. New episodes post every Monday, so subscribe. You don't want to miss the next one. I'll see you in the next episode.
The best decision-makers aren't better at deciding. They're better at controlling when, where, and how they decide. It took me twenty years to figure that out. Most people spend that time trying harder: more discipline, more willpower, more resolve to think clearly under pressure. It doesn't work. That's when mindjacking wins. Not through force. Through the door you left unguarded. The answer isn't trying harder. It's building systems that protect your thinking before the pressure hits. By the end of this episode, you'll have four concrete strategies for doing exactly that, and a one-page system you'll build before we're done. And I have something else to share at the end. Something I've been working toward for twenty years. Let's get into it.

Why Willpower Fails and Design Works

Ulysses knew his ship would pass the island of the Sirens. He also knew the song was irresistible. Sailors who heard it became incapacitated and drove straight into the rocks. He didn't try to be stronger than it. He had his crew fill their ears with wax and tie him to the mast, with strict orders not to release him, no matter what he said when the music reached him. His calm self setting rules for his compromised self. That's the core of everything in this episode. These are called commitment devices. The decision gets made early, when your thinking is clear, before you're tempted to take the wrong path. Studies tracking self-imposed contracts found that when people added meaningful stakes to their commitments, their follow-through nearly doubled. Not because they became more virtuous, but because they'd taken the choice off the table at the moment they were most likely to get it wrong. Stop asking "How do I resist?" Start asking "What can I decide now, so I don't have to decide under pressure?" Before you can build the right commitments, you need to know exactly where your thinking breaks down. Not decision-making in general. Yours.
Finding Your Personal Vulnerability

Think back across the last few months. Where did your thinking most clearly cost you? Some people stall. They keep researching past the point of useful information, using "I need more data" as cover for avoiding a commitment they know they need to make. Others make their worst calls at the end of long days. Saying yes when they mean no, because no requires energy they've already spent. Some get caught by urgency. A deadline appears, the pressure closes off their thinking, and they move fast. Only later do they discover the deadline was manufactured to do exactly that. Others walk into a room with a clear position and walk out agreeing with the loudest voice, unable to explain exactly when they shifted. And some defend decisions past the point where the evidence says stop, because stopping would mean admitting something about themselves they're not ready to face. Identify yours. Write it down before we go further. Your primary vulnerability is a design target, not a character flaw. You can't build around something you haven't named.

Four Strategies for Protecting Your Judgment

Strategy 1: Control When You Decide

Every morning I put on the same thing: a black golf shirt, blue jeans, and cowboy boots. Same brands, same routine, no decisions. My wife tolerates it. I've stopped apologizing for it. It's not a fashion choice. It's a cognitive load choice. Your brain has a finite amount of decision-making capacity each day. Every trivial choice draws from the same reserve you need for the decisions that actually matter. What to wear, what to eat, which route to take. Eliminating those choices doesn't just save time. It protects the mental fuel you'll need later. Decision-making capacity isn't flat across the day. It peaks early, when you're rested and fresh. It degrades, measurably, as conditions erode. The same call made at 8 a.m. and at the end of your seventh consecutive meeting aren't equivalent. Same person, different machine.
Pull up your calendar from the last two weeks. Look at when your biggest decisions actually happened. For most people, it's not in a calm moment with a clear head. It's in the hallway, on a rushed call, in the last fifteen minutes of a meeting that ran over. That's not bad luck. That's the default you haven't changed yet. Write a standing rule: no significant, hard-to-reverse commitments after a certain hour, or after a certain number of back-to-back meetings without a mandatory pause. Hold it like a policy, not a preference. Because preferences are exactly what disappear under the conditions where you need them most.

Strategy 2: Build Your Kitchen Cabinet

One of the things I credit most for whatever success I've had in my career isn't a framework or a methodology. It's four people. I call them my kitchen cabinet. They've seen my best decisions and my worst ones. They know when I'm rationalizing. They know when I'm avoiding. And they are not afraid to call me out when I'm off the tracks. Here's what surprises people when I describe them. They're not senior executives. They're not peers from inside my industry. They don't work in any organization I've ever worked for. They're a deliberate mix: different backgrounds, different areas of expertise, different ways of seeing the world. One of them has been in my cabinet for nearly thirty years. I trust them completely, and everything we discuss stays between us. That independence is the whole point. The people inside your organization have something at stake in your decisions. Your peers have their own agendas, even when they don't mean to. Your boss has a preferred outcome. None of that makes them bad advisors. It just means they can't give you the one thing you need most when a decision gets hard: a perspective with no skin in the game. Your kitchen cabinet can. Because they have nothing to gain or lose from what you decide, they can ask the question everyone else in the room is avoiding.
They can tell you what you don't want to hear. And they'll do it before you've committed, when it still matters, not after the fact, when all they can do is watch. Build yours deliberately. Four to six people is enough. Prioritize independence over seniority. Look for people who will push back, not people who will reassure. And make the relationship reciprocal. You show up for their decisions too. The cabinet only works if the trust runs both ways and the conversations stay private. You don't need them for every decision. You need them for the ones where you're most at risk of fooling yourself.

Strategy 3: Write Your Position Before the Room Fills Up

I've sat in enough rooms where I walked in with a clear position and walked out having said almost none of it. Not because I was wrong. Because by the time the senior voice spoke and the heads started nodding, my own analysis felt less certain than it did twenty minutes earlier. The brain doesn't just nudge your answer when social pressure arrives. It rewrites your perception. What you saw before entering the room changes to match what the room already believes, before you've consciously registered the pressure. Before any consequential group decision, write down where you stand. Three sentences. What you believe. What evidence supports it. What would genuinely change your mind. A note on your phone is enough. It doesn't need to be formal. It needs to be external, because your memory will quietly revise itself once the social pressure arrives. Those three sentences are a record of what you actually concluded before the room had a chance to work on you. When the discussion moves toward a position, you can then distinguish between "I'm updating because I heard something new" and "I'm caving because the silence is uncomfortable." Without that record, those two experiences feel identical in the moment, and one of them will reliably win.
Strategy 4: Assume the Failure Before You Commit In August 2016, Delta Air Lines ran a routine scheduled test of the backup generator at their Atlanta data center. A transformer caught fire. Three hundred of Delta's 7,000 servers, improperly connected to a single power source, went dark. They couldn't fail over to backups. The servers that stayed online couldn't communicate with the ones that hadn't. The entire system collapsed: passenger check-in, baggage, websites, kiosks, and airport displays. Gone. Delta cancelled 2,100 flights over three days. $150 million in losses. Thousands of passengers slept on airport floors. The system had redundancy designed in. The backup had been tested. The specific failure mode, servers with no alternate power connection, was a known vulnerability that nobody had ever stopped to question. A year before the fire, cognitive psychologist Gary Klein, the researcher who developed the pre-mortem, had written a thought experiment describing almost this exact scenario. Imagine, he wrote, that an airline CEO gathered top management and asked: "Every one of our flights around the world has been cancelled for two straight days. Why?" People would think terrorism first. The real progress, Klein said, would come from mundane answers: a reservation system down, a backup that didn't activate, a cascade nobody had traced in advance. Delta built what Klein described. Without running the question that would have found it. The pre-mortem is that question. Before you commit to a significant decision, assume it's six months later, and the decision failed. Not possibly, but definitely. Then ask: What went wrong? What did you know but not say? What did someone sense but find too awkward to raise in the room? "What could go wrong?" produces hedged answers. People soften concerns to preserve harmony. "It failed. What happened?" changes the psychology entirely. You're not being negative. You're being forensic. 
The things that surface, the concerns that felt impolitic, the risks that seemed too small to mention, are frequently the ones that end up mattering most. Each of these four strategies is a designed defense against the same thing
Ron Johnson was one of the most successful retail executives in America. He'd made Target hip. He'd built the Apple Store from nothing into a retail phenomenon. So when J.C. Penney hired him as CEO in 2011, expectations were sky-high. Johnson moved fast. He killed the coupons. Eliminated the sales events. Redesigned the stores. When his team suggested testing the new pricing strategy in a few locations first, Johnson said five words that explain everything that happened next: "We didn't test at Apple." Within seventeen months, sales dropped twenty-five percent. He was fired. And here's the part nobody talks about: Johnson had access to all the data. Every week, the numbers told the same story. Customers were leaving. Revenue was collapsing. The board was getting nervous. He could see it all. He just couldn't act on it. Because changing course would mean he wasn't the visionary who reinvented retail. He wasn't making a business decision anymore. He was protecting who he believed he was. That's the identity trap. And it doesn't just happen to CEOs. What if changing your mind didn't have to feel like losing yourself? Let's get into it.

Why Identity Bias Looks Like Your Best Qualities

The trap doesn't target bad thinkers. It targets good ones. Think about the entrepreneur who poured three years and her life savings into a startup. The data says it's failing. The metrics are clear. Her advisors are suggesting it's time to pivot or shut down. She has every analytical tool to evaluate this accurately. And she can't do it. She's plenty smart. The problem is that admitting failure would mean she's "a quitter." And she is not a quitter. That's not who she is. Johnson wasn't stupid either. He was brilliant. His identity as the retail visionary just happened to make him blind to the one thing that could save his company: the possibility that what worked at Apple wouldn't work at Penney's. He experienced his blindness as conviction. As leadership. And that's the disguise.
Every other thinking error in this series, uncertainty, depletion, time pressure, social pressure, you can feel those happening. You know when you're tired. You know when you're rushed. But identity fusion is invisible from the inside. It disguises itself as your best qualities. The entrepreneur calls it perseverance. Johnson called it vision. The investor who won't sell a losing position? He calls it discipline. Your ego doesn't announce that it's taking over. It puts on a costume that looks exactly like your strengths. And your brain? Your brain is in on it.

Why Changing Your Mind Feels Like a Threat

When a belief becomes part of your identity, your brain defends it as it would defend your body. Challenge that belief, and your brain responds the same way it would to a physical threat. Not metaphorically. The same neural circuits that protect you from danger activate to protect you from being wrong. That's why arguments about strategy or direction can generate so much heat and so little light. You're not debating a position anymore. You're defending territory. And sometimes you defend it long past the point where the evidence says stop. A project you've poured months into. A strategy you championed. A hire you fought for. The data says cut your losses, but you keep going, because walking away would mean all that time, all that effort, all that money was wasted. That's the sunk cost fallacy. And most people think it's about the money or the time. But it's not. Sunk cost is about identity. Think about that manager who spent eighteen months building a new system. The team knows it's not working. She knows it's not working. But scrapping it doesn't just waste eighteen months of budget. It means her judgment failed. It means she led her team down the wrong road for a year and a half. "I've invested too much to quit" sounds like a financial calculation. It's not. It's an identity statement.
What she's really saying is: "If I quit, I'm the kind of person who wastes eighteen months of people's lives." The sunk cost isn't financial. It's existential. And suddenly you can see that every time you've held on too long, stayed in something past its expiration date, defended something you knew wasn't working, the force holding you there wasn't logic. It was your self-image refusing to absorb the hit. So how do you loosen the grip once you realize it's there?

Three Warning Signs Your Ego Has Taken the Wheel

Here's what to watch for.

1. Emotional Intensity That Doesn't Match the Stakes

Someone suggests a different approach to a process you built. Not a criticism. Just an alternative. And you feel a flash of heat in your chest. Defensiveness. Maybe irritation. The reaction is way out of proportion to the suggestion. Pay attention to that gap. The intensity isn't about the process. It's about what being wrong would say about you.

2. How You Argue

When someone pushes back on your position, watch what happens. If you find yourself attacking the person instead of engaging their argument, that's identity talking. "You don't understand our industry." "You haven't been doing this as long as I have." The moment you shift from "here's why the evidence supports my position" to "here's why you're not qualified to question it," you've stopped defending a conclusion and started defending yourself. The tell is subtle: you'll feel righteous, not curious.

3. The Evidence Filter

When you're evaluating something objectively, new information can move you in either direction. But when identity is involved, watch what happens. You accept supporting evidence quickly, uncritically, almost with relief. Contradicting evidence? You tear it apart. You find flaws in the methodology. You question the source. You say, "That's just one study." When you're applying completely different standards depending on which direction the evidence points, that's not critical thinking.
That's identity protection wearing a lab coat.

How To Loosen the Grip

So what do you do once you recognize the grip? Early in my career, I championed a technology direction that I was convinced was right. The evidence started coming back that it wasn't working. And I was doing exactly what I just described. Scrutinizing the bad data, embracing the good data, and getting irritated when people questioned me. It wasn't until a colleague looked at me and said, "You're not evaluating this anymore. You're defending it," that I realized my identity had completely hijacked my judgment. What helped was a shift in language that sounds simple but changes everything. Stop holding beliefs as part of your identity. Start holding them as a working thesis.

The Reframe

Listen to the difference between these two statements. First: "I believe this company will succeed." Second: "My working thesis is that this company will succeed." The first version fuses the belief to you. If the company fails, you were wrong. You made a bad bet. The second version builds in the expectation that your thinking will evolve. New data doesn't make you wrong. It makes you better informed.

The Proof

That colleague I mentioned? After that conversation, I started framing every strong opinion as a working thesis in my own head. Not out loud at first. Just internally. And the effect was immediate. I stopped feeling attacked when contradicting data came in. I started treating it as an update instead of a threat. The position I was defending? I reversed it completely. And the thing I was most afraid of, looking like I'd wasted everyone's time, never happened. The team was relieved.

The Practice

Next time you find yourself defending a position with more heat than it deserves, pause and restate it starting with "My working thesis is..." Then ask yourself: "What would I need to see to change this?"
If you can't answer that question, if there's literally no evidence that could change your mind, that belief has become part of your identity. And your brain will protect it like one.

The Door

The goal isn't to be wishy-washy. Commit fully to your working thesis. Act on it with confidence. The difference is that you've built a door in the wall, and you've given yourself permission to walk through it if the evidence changes. That door is the difference between updating when you're wrong and doubling down until it costs you.

Why Identity Is the Amplifier

The identity trap doesn't operate alone. It recruits every other force we've covered in Part Two of this series. Facing uncertainty? Identity says, "You're not the kind of person who hesitates." Someone manufactures a deadline to pressure you? "Leaders are decisive. Act now." The whole room disagrees with your position? Identity whispers "I'm a team player," or digs in with "I'm the one who sees what others miss." Identity is the amplifier. It takes every vulnerability from Episodes 10 through 13 and cranks up the volume. That's why we saved it for last. Everything else we've covered in Part Two? Necessary. But not sufficient. Because if you haven't dealt with your identity's grip on your beliefs, those skills have a backdoor that ego walks right through. And this is exactly what mindjacking exploits. I go much deeper in an article I wrote and in my dedicated mindjacking episode; links below. But the core mechanism is this: mindjacking doesn't just offer you convenient conclusions. It attaches those conclusions to who you are. "People like us think this." "Smart people choose this." Once a belief becomes a badge of identity, you'll convince yourself. No external persuasion required.

From Seeing the Trap to Building the Escape

Here's your challenge this week. Pick one belief you hold that you've never seriously questioned. Something professional. Your management philosophy. Your investment thesis.
Your view on how your industry works. Something you'd describe as "just who I am." Now find the strongest argument against it. Not a straw man. The real, best case the other side would make. Sit with it. See if you can engage with it without
When neuroscientists scanned the brains of people going along with a group, they expected to find lying. What they found instead was something far stranger. The group wasn't changing people's answers. It was changing what they actually saw. We'll get to that study in a minute. But first, I want you to remember the last time you were in a meeting and you knew something was wrong. The numbers didn't add up. The risk was being underestimated. And someone needed to say it. Then the most senior person in the room spoke first: "I think this is exactly what we need." Heads nodded. Finance agreed. Marketing agreed. The consultant agreed. And by the time it was your turn, you heard yourself saying, "I have some minor concerns, but overall I think it's solid." You're not alone. Research shows that roughly half of employees stay silent at work rather than voice a concern. And among those who stayed quiet, 40 percent estimated they wasted two weeks or more replaying what they didn't say. Two weeks. Mentally rehearsing the point they should have made in a meeting that's already over. That silence isn't a character flaw. It's your neurology working against you. And today I'm going to show you exactly why it happens and how to stop it. It starts with what was happening inside your head during that meeting you just remembered.

Why Your Brain Surrenders to the Group

Most people know about the Asch conformity experiments from the 1950s. People were asked to match line lengths, and seventy-five percent went along with an obviously wrong answer at least once. That result gets cited everywhere. But the more important study came fifty years later, and it revealed something the Asch experiment never could. In 2005, neuroscientist Gregory Berns at Emory University put people inside an MRI machine and ran a similar conformity task, this time with three-dimensional shape rotation. Like Asch, he planted actors who gave wrong answers.
But unlike Asch, he could watch what was happening inside people's brains while the conformity was occurring. Berns expected the MRI to show activity in the prefrontal cortex, the brain's decision-making center, when people went along with wrong answers. That would mean they were knowingly lying to fit in. Just a social calculation. That's not what the scans showed. People who conformed showed no increased activity in decision-making regions. Instead, the activity showed up in the parts of the brain that handle visual and spatial perception, the occipital and parietal areas. The group wasn't changing people's answers. It was changing what they actually saw. Their brains were rewriting their experience to match the room. And the people who resisted the group? Their scans told a different story. Heightened activity in the amygdala, the brain's threat-detection center. The same circuitry that fires when you encounter physical danger lit up when someone disagreed with the group. Berns put it plainly: the fear of social isolation activates the same neural machinery as the fear of genuine threats to survival. When you caved in that meeting, your neurology wasn't malfunctioning. It was doing exactly what it was designed to do: keep you safe inside the tribe. This is why what I call mindjacking works so well. Algorithms manufacture social proof by showing you what's trending, what your friends liked, and what similar people chose. Your wiring responds the same way it does at the conference table. You're fighting your own threat-detection system every time you try to hold an independent position within a group. You can't turn off the wiring. But you can learn to catch it in the act. And that starts with one critical distinction.

The First Skill: Separating Updating from Caving

Sometimes the people around you know something you don't. Changing your mind in a group isn't always a surrender. Sometimes it's the smartest move in the room.
The real skill is knowing which one just happened. You can test this in real time. When you feel your position shifting in a group, ask yourself three questions.

First: Did someone introduce information I didn't have before? If the CFO reveals a data point that genuinely changes the calculus, updating your view isn't a weakness. It's intelligence. That's new evidence.

Second: Can I articulate why I changed my mind, in specific terms? If you can say, "I shifted because of the margin data in Q3 that I hadn't seen," that's a real update. If you can only say, "I don't know, everyone seemed to think it was fine," that's capitulation.

Third: Would I have reached this same conclusion alone, with the same information? This is the killer question. If the answer is no, and you only arrived at this position because others were already there, you haven't updated. You've surrendered.

Getting this wrong is costly. And not just the one time. When you capitulate and call it updating, you train yourself to stop trusting your own analysis. Do it enough times, and you won't even bother preparing, because you already know you're going to defer. That's how capable people slowly become passengers in rooms where they should be driving.

Capture those three questions somewhere you'll see them. They're your real-time check on whether you're being open-minded or spineless. Those questions work when you're already in the meeting and the pressure is live. But what if you could protect your thinking before the pressure even starts?

The Pre-Meeting Lock-In

The most important thing you can do to protect your independent thinking doesn't happen during the meeting. It happens before. I call it the Pre-Meeting Lock-In, and it takes less than two minutes. Before any meeting where a decision will be made, write down three things:

Your position.
Two or three key reasons supporting it.
What it would take to change your mind.

Put it on paper. Put it in a note on your phone.
Just get it out of your head and into a form you can reference. Why does this work? Because once the discussion starts, your mind is going to quietly edit your memories of what you believed. You'll start thinking, "Well, I wasn't really sure about that point anyway." Your pre-meeting notes are an anchor against that self-deception. They're a record of what you actually thought before the social pressure arrived.

You want to see what happens when someone has the analysis but doesn't lock it in? The night before the Challenger launch in January 1986, engineer Roger Boisjoly and his team at Morton Thiokol had the data. They knew the O-ring seals were dangerous in cold weather. They'd written memos. They'd run the numbers. They recommended against launching. But when NASA pushed back hard on the teleconference, Thiokol management called an off-line caucus and excluded the engineers from the final decision. When the call resumed, management reversed the recommendation.

Boisjoly had the analysis. His managers had heard it. But under pressure from their biggest customer, the conclusion got edited in real time. Boisjoly later described it as an unethical forum driven by what he called "intense customer intimidation." He fought like hell, but the room won.

That's the most extreme version of the problem. Life and death. But the mechanics are the same in every conference room. The analysis exists. The pressure arrives. And without something anchoring you to what you actually concluded, the room rewrites the story.

There's a bonus effect to the Lock-In, too. When you've documented what it would take to change your mind, you've given yourself permission to be genuinely open. You're not being stubborn for the sake of it. You're saying, "Show me evidence that meets this threshold, and I'll update." That's intellectual honesty with a backbone. But you can know exactly what you think and still fail if you can't get anyone else to hear it.
How to Dissent and Actually Be Heard

Most dissent fails not because it's wrong, but because it's delivered badly. Blurting out "I think this is a mistake" when the group is already aligned feels like an attack. People get defensive. Your point gets ignored, not because it lacked merit, but because your delivery threatened the group's cohesion. You triggered the same threat response in them that you've been learning to manage in yourself.

Charlan Nemeth, a psychologist at UC Berkeley, has studied dissent for decades. You'd expect her research to show that dissent helps groups when the dissenter is right. When someone spots a flaw that everyone else missed. That makes intuitive sense. But that's not what she found.

Nemeth discovered that when someone voices a genuine minority opinion, the entire group thinks more carefully. They consider more information, examine more alternatives, and reach better conclusions. And the group benefits even when the dissenter turns out to be wrong. Even when you're wrong, the act of dissenting makes the group smarter. Your disagreement forces everyone out of autopilot. Decades of research by Serge Moscovici supports this. Minority voices don't just influence people in the moment. They shift perception afterward, in private, long after the meeting ends.

That's the good news. The catch is in how the dissent happens. Nemeth tested what happens when dissent is assigned rather than authentic, when someone plays devil's advocate because they were told to. It doesn't produce the same effect. Groups can tell when disagreement is performative. The cognitive benefits only show up when the dissent is authentic. When someone actually believes what they're saying.

That means the goal isn't just to voice disagreement. It's to voice it in a way that people can actually receive. And the hardest version of this isn't when you have a minor concern about an otherwise good plan.
It's when the whole direction is wrong, and finding something to praise would be dishonest. In those moments, the move is to separate the people from the position. "I respect the work that went into this, and I know this isn't what anyone wants to hear, but I think we're solving
"We need an answer by the end of the day." Ten words. And the moment you hear them, something shifts inside your chest. Your pulse ticks up. Your focus narrows. Careful thinking stops. The clock starts. You probably haven't even asked the most important question yet. Is that deadline real? Most of the urgency you feel every day is fake. Manufactured by someone who benefits from you deciding fast instead of deciding well. Most people can't tell a real deadline from a manufactured one. By the end of this, you will. Let's get into it. What Time Pressure Actually Does to Your Brain Last episode, we talked about decision fatigue. How your brain degrades over a long day. Time pressure is different. Fatigue is a slow drain. Time pressure is a switch. When the clock is ticking, your brain stops analyzing and starts reacting. Normally, the front of your brain runs the show: careful analysis, weighing trade-offs, long-term thinking. Under time pressure, a faster, older, more emotional region takes over. You don't feel less accurate. You feel more confident. Decades of decision science research have found that under time pressure, people's confidence in their decisions goes up while their actual accuracy goes down. You're not just thinking worse. You're thinking worse while being more sure you're right. That false confidence makes you predictably worse at three specific things. Evaluating trade-offs. You lock onto whichever side your gut grabs first. Considering consequences beyond the immediate. Second-order thinking goes offline. Recognizing what you don't know. Because you feel certain, you stop looking for what you're missing. And that's exactly what manufactured urgency is designed to exploit. This is mindjacking in its purest form. Someone engineers the pressure, your brain switches modes, and you make their decision instead of yours. The Urgency Trap: Real vs. Manufactured Not all time pressure is the same. Some deadlines are real. Your tax filing date is real. 
The board meeting on Thursday is real. The patient who needs a decision in the next ten minutes? That's real. These deadlines exist because of actual constraints in the world, not because someone manufactured them.

But a huge portion of the urgency you experience is engineered. "This offer expires at midnight." Really? Will the company stop wanting your money tomorrow? "We need your decision today." Why today? What actually changes between today and Wednesday?

Manufactured urgency is one of the most effective persuasion tools ever invented. Countdown timers on websites that reset when you refresh the page. "Limited time" sales that somehow run every month. Negotiators who invent deadlines because pressure extracts concessions. Manufactured urgency is everywhere. And it works because of what we just covered. Time pressure flips you into fast-decision mode. When someone engineers urgency, they're not just rushing you. They're changing which part of your mind makes the call.

The decisions that actually shape your career almost never show up with a countdown timer. The urgency trap pulls your attention to whatever is loudest, while the ones that matter sit quietly in the background. Until it's too late.

Five Tests for Manufactured Urgency

How do you tell the difference? I use five tests.

Test One: The Source Test. Ask yourself: who benefits from me deciding quickly? If the answer is "the person creating the deadline," that's a red flag. Real deadlines serve the situation. Fake deadlines serve the person imposing them. The car salesperson who says "this price is only good today"? That deadline serves the dealership, not you. The surgeon who says "we need to operate within the hour"? That deadline serves the patient.

Test Two: The Consequence Test. Ask: what actually happens if I wait? Not what I'm told will happen. What actually happens. "The offer expires." Does it? What would happen if you called back next week? In most cases, the offer magically reappears. Real deadlines have real, verifiable consequences. Manufactured ones have threats that evaporate on contact.

Test Three: The History Test. Has this "urgent" situation happened before? If the company has run "ending soon" promotions every month for a year, that's not urgency. That's a business model. If a colleague marks everything "urgent" in their emails, that's not urgency. That's a habit.

Test Four: The Reversibility Test. This one builds on our earlier work in the series. How reversible is this decision? If you can cancel, return, or renegotiate, urgency matters less. But if the decision is hard to reverse, like a long-term contract or a major hire, artificial urgency is especially dangerous. The less reversible the decision, the more suspicious you should be of anyone rushing you.

Test Five: The Separation Test. Remove yourself from the pressure source and check whether the urgency survives. Step out of the room. Sleep on it. Call back tomorrow. Real urgency persists when you leave. Manufactured urgency dissolves.

You don't need all five to spot fake urgency. Two or three is usually enough. And once you start applying these tests, something shifts. You realize how much of the urgency in your life was never yours to begin with.

I've watched this happen with more than one friend. A cancer diagnosis. Doctors giving them a timeline. And in every case, the same thing happened. Not panic. Clarity. Every manufactured urgency in their lives just fell away. The stuff that didn't matter stopped getting their attention. The stuff that did got all of it. They're well past the timelines their doctors gave them. The outlook is good. But the clarity never went away. They don't need the five tests. They already know which pressure is real.

Most of us won't get that kind of forced clarity. So we need tools to create it for ourselves.

When "I Need More Time" Is the Problem

Everything I just said could become a very sophisticated excuse to never decide anything.
"I need more time to think about it" is sometimes wisdom. And sometimes it's avoidance wearing wisdom's clothes. They feel identical from the inside. And that's what makes this so difficult. Recognizing avoidance in yourself is one of the hardest skills in this entire series. We spent all of Episode 10 on it because there's no quick trick for telling the two apart. If you haven't watched that one, I'd recommend going back to it. For this episode, the key connection is this: manufactured urgency and avoidance are opposite problems that feed each other. The more you've been burned by fake deadlines, the more justified "I need more time" feels. And the more you default to delay, the more vulnerable you become when real urgency hits. But watch for this: if you're using the Five Tests to justify delay rather than to evaluate urgency, that's avoidance borrowing the language of skepticism. The tests are meant to help you evaluate the deadline, not to give you another reason to avoid the decision. Calibrating Speed to Stakes So how do you calibrate between moving too fast and waiting too long? Jeff Bezos talks about one-way and two-way door decisions. I've expanded that into what I call the Stakes-Reversibility Grid. Two questions: How much does this matter? And how hard is it to undo? Low stakes, easy to reverse. Which project management tool to try. Where to hold the offsite. What to order for lunch. Decide immediately. These are the decisions people waste hours on that deserve minutes. High stakes, easy to reverse. A new marketing campaign. A pilot program. A hire with a 90-day probation period. Decide quickly, but build in a review date. You can course-correct, so speed matters more than perfection. Low stakes, hard to reverse. The subscription you never cancel. The small clause in a contract nobody reads. These are sneaky. They don't feel important, so you rush. But they're hard to undo, so they accumulate. High stakes, hard to reverse. A merger. A long-term contract. 
Shutting down a product line. This is where you slow down. This is where you deploy every test for manufactured urgency. This is where anyone rushing you should make you suspicious. Most people get this backwards. They spend weeks picking a laptop and fifteen minutes reviewing an employment contract. The grid fixes that. Be fast on what doesn't matter so you have the bandwidth to be slow on what does. From Knowing to Doing Early in my career, I watched all of this play out in a single conversation. I was negotiating a major technology partnership. The other side's lead negotiator dropped this line: "We need a signed term sheet by Friday or we're moving to the next candidate." Friday was two days away. I felt the shift. That tightening in my chest, that narrowing of focus. My brain immediately started racing toward "how do we make this work by Friday?" Not "should we?" Not "are these the right terms?" Just speed. Then I caught it. Source Test: who benefits from this Friday deadline? They did. We were their preferred partner and they knew it. Consequence Test: what actually happens if we miss Friday? They go to a backup they'd already passed over once. So I said: "We're serious about this partnership and we want to get the terms right. We'll have our response by next Wednesday." Pause. Then: "Okay." The deadline was never real. That's what this skill gives you. Not the ability to stall. Not the excuse to avoid commitment. The judgment to know which pressure deserves your speed and which deserves your skepticism. Next time you feel that tightening in your chest, that rush to decide, run two tests before you respond. The Source Test: who benefits from me deciding fast? The Consequence Test: what actually happens if I wait? You don't need all five every time. Those two alone will catch most manufactured urgency before it catches you. That's not indecisiveness. That's intelligence. Closing In Episode 10, we tackled uncertainty. In Episode 11, depletion. 
Now you can spot manufactured urgency. But there's a press
A nurse in Pennsylvania had been on her feet for twelve hours. She was supposed to go home, but the unit was short-staffed, so she stayed. During that overtime, a patient was diagnosed with cancer and needed two chemotherapy doses. She administered the first, placed the second in a drawer, and headed home. She forgot about the second dose. It wasn't discovered until the next day.

The patient was fine; they got the treatment in time. But think about what happened. This wasn't a careless nurse. This was a dedicated professional who stayed late to help her team. Her skills didn't fail. Her knowledge didn't fail. Her energy failed, and her judgment went with it.

That's the trap. We assume our thinking stays constant, that the brain in hour fourteen is the same brain that showed up in hour one. It's not. Last episode, we tackled deciding under uncertainty. But fatigue does something different. Uncertainty makes you hesitate. Fatigue makes you stop caring.

Why Your Brain Makes Worse Decisions by Evening

You've probably heard the popular saying: "Making too many decisions wears you out, so by evening your judgment is shot." That idea dominated psychology for twenty years. Researchers believed decision-making drained from a limited mental reserve, like a battery running down. Then independent labs tried to reproduce those results at scale, and the effect vanished. One study, spanning 23 labs and over 2,000 people, found nothing. A second, spanning 36 labs and 3,500 people, got the same result.

The experience is real, though. People do make worse decisions after a long day of mental effort. What was wrong was the explanation. Your brain doesn't drain like a battery. After sustained effort, it shifts priorities. It starts favoring speed and ease over accuracy. Not because it can't think carefully, but because it decides careful thinking isn't worth the effort. Decision fatigue isn't your brain shutting down. It's your brain quietly lowering its standards without telling you.
Decision Fatigue in the Real World

That science isn't abstract. It plays out every day. Researchers at Brigham and Women's Hospital tracked over 21,000 patient visits. Doctors prescribed unnecessary antibiotics more frequently as the day went on. Not because afternoon patients were sicker. Because saying "here's a prescription" is easier than explaining why you don't need one. Five percent more patients received antibiotics they didn't need, purely because of timing.

The same pattern shows up everywhere. Surgeons make more conservative calls later in the day. Hand hygiene compliance drops across a twelve-hour shift. Financial analysts grow less accurate with each additional stock prediction they make in a single day. The drift always goes in the same direction: toward whatever requires the least effort.

That drift explains something we've been exploring across this series. When you're exhausted, someone else's conclusion isn't just tempting, it's a relief. The algorithm's recommendation saves you from having to evaluate. The expert's opinion saves you from forming your own. That's mindjacking, finding the open door. Fatigue doesn't just degrade your thinking. It makes you grateful to hand it over.

Your Four Warning Signals

Knowing the science is useful. But what matters more is catching fatigue in yourself before it costs you. Here are four signals that your judgment is compromised.

Signal 1: The Default Drift. Someone proposes a plan that sounds... fine. Not great, not terrible. Two hours ago, you'd have pushed back, asked harder questions. Now you just nod. You're not agreeing because you're convinced. You're agreeing because disagreeing takes energy you no longer have.

Signal 2: The Irritability Spike. A colleague asks a reasonable question, and it feels like an interruption. When your emotional response is out of proportion to the situation, it's not the situation. Your reserves are low.

Signal 3: The Shortcut Reflex. A decision that should take twenty minutes takes thirty seconds. You skip the analysis and go with your gut. There's a version of this that sounds like confidence. "I trust my instincts." But late in the day, that phrase is often code for "I'm too tired to think this through."

Signal 4: The Surrender. You stop forming conclusions and start borrowing them. Someone says, "I think we should go with Option B," and you feel a wave of relief. Not because Option B is right, but because you no longer have to figure it out. That relief is the signal. When outsourcing your judgment feels like a gift instead of a loss, you're running on empty.

If two or more of these show up at the same time, stop. Your judgment isn't reliable right now. Don't trust it with anything that matters.

Four Moves to Protect Your Judgment

Those signals tell you something's wrong. Here's what to do about it.

Move 1: Postpone it. Move the decision to a high-energy window. For most people, that's morning. Think of those hours like premium real estate. Stop filling them with trivial meetings.

Move 2: Shrink it. Instead of "Should we pursue this entire strategy?" ask "What's the one thing I need to decide tonight?" Tired minds handle focused questions better than open-ended ones.

Move 3: Add a checkpoint. Make the call, but build in a mandatory review: "Here's my decision. We revisit on Thursday morning." Not indecisiveness. A safety net.

Move 4: Pre-commit. Before you're ever exhausted, set rules for your future tired self. "I don't approve expenditures over $10,000 after 6 PM." "I don't respond to emotionally charged emails the same day." "I don't make personnel decisions on Fridays." This is the most powerful move, because you're making the decision when you're strong so you don't have to make it when you're weak.

Pre-commitment also means structuring the order of your decisions.
Researchers studying car buyers found that customers who faced the most complex choices first were significantly more likely to accept defaults on everything that followed. The decisions wore them down. The fix was simple: put the simple choices first. Front-load your high-stakes choices the same way. Design your day so that by the time your energy fades, the remaining decisions matter least.

Recovery as a Decision-Making Strategy

Everything I've just described helps you manage fatigue in the moment. But there's a deeper question: what are you doing to actually replenish? We treat fatigue like it's inevitable. It's not. It's a sign you're spending more than you're recovering. The fix isn't another productivity hack. It's genuine rest. Real time away. Disconnected. Off.

I learned this the hard way. Early in my career, I was a workaholic, just like my father. It took years to see the connection between rest and judgment. When I became a CEO, I made recovery a priority. We offer unlimited PTO, but offering it isn't enough. I take it visibly, because if the person at the top doesn't step away, nobody believes they're allowed to. A team that never replenishes is permanently operating in a degraded state. That's slow-motion failure. The triage framework buys you time. Recovery is what actually refills the tank.

Your Pre-Commitment Challenge

Every framework in this series assumes you'll use it when it counts. But mental fatigue is the silent killer of good frameworks. You can know everything about logical reasoning and second-order effects and still make a terrible call at 10 PM, because your mind decided careful thinking wasn't worth the effort. That's why this isn't just another episode. This is the one that determines whether everything else actually works in your real life.

So, before this episode ends, pick one pre-commitment. One rule your strong self creates for your tired self. "I don't approve budgets after 7 PM." "I don't reply to conflict emails the same day."
Whatever yours is, write it where you'll see it when you're exhausted. Then tell one person. Not for accountability theater. Because saying it out loud makes it real in a way that thinking it never does.

Remember that nurse? She had the knowledge, skills, and dedication to stay late for her team. What she lacked was a system to protect her judgment when her energy failed. Your worst decisions don't happen because you're not smart enough. They happen because you're too tired to use the intelligence you already have.

That nurse had all night to realize what she'd missed. But what if she hadn't? What if someone had needed that decision in the next five minutes? That's a different kind of danger. Not fatigue alone, but fatigue with a ticking clock. "We need an answer by the end of the day." "This offer expires at midnight." "The board meets tomorrow." Sometimes those deadlines are real. Sometimes they're manufactured to make you decide before you can think. How do you tell the difference? That's next time. Subscribe so you don't miss it.

Before You Go

If you haven't written down your pre-commitment yet, do it now. Sticky note, phone, back of your hand, I don't care where. Then tell someone.

If mindjacking is a new concept for you, I've got a full episode that breaks down how to spot when your thinking has been hijacked, whether by outside forces or by yourself. Link's below. For those who want to support the work and the team behind these episodes, you can become a paid subscriber on Substack.

One question for the comments: What's your pre-commitment? Drop it below. Make it public. Make it real. The best decision you make today might be the one you don't let your exhausted self make tonight.

Sources:

Berxi/NCSBN case studies: Pennsylvania nurse fatigue incident (chemotherapy administration error). https://www.berxi.com/resources/articles/medication-errors-in-nursing/

Linder, J.A., et al. (2014). Time of Day and the Decision to Prescribe Antibiotics.
JAMA Internal Medicine, 174(12), 2029-2031. https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/1910546 Hagger, M.S., et al. (2016). A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspective
You've got a decision you've been putting off. Maybe it's a career move. An investment. A difficult conversation you keep rehearsing in your head but never starting. You tell yourself you need more information. More data. More time to think. But you're not gathering information. You're hiding behind it. What looks like due diligence is actually overthinking in disguise. The certainty you're waiting for doesn't exist. It won't exist until after you decide and see what happens.

I call this mindjacking: when something hijacks your ability to think for yourself. Sometimes it's external. Algorithms, experts, crowds thinking for you. But sometimes you're the one doing it. That endless research? It feels like diligence. It functions as delay. You're not being thorough. You're mindjacking yourself.

Today, you'll learn a framework for knowing when you have enough information, even when it doesn't feel like enough. Because deciding before you're ready isn't recklessness. It's a skill. And for most people, that skill has completely atrophied.

The Real Cost of Waiting

At a California supermarket, researchers set up a tasting booth for gourmet jams. Some days, the display showed 24 varieties. Other days, just six. The bigger display attracted more attention. Sixty percent of people stopped to look. But only three percent actually bought jam. When shoppers saw just six options? Thirty percent purchased. Ten times the conversion rate. More options didn't help people choose. More options paralyzed them.

The jam study has been replicated across dozens of categories since then. The pattern holds. More choices, more overthinking, fewer decisions.

Think about your postponed decision. How many options are you juggling? How many articles have you read? Every expert you consult, every scenario you play out in your head... you're not getting closer to certainty. You're adding jams to the display. And while you're researching, the world keeps moving. Opportunities close. Competitors act.
Your own situation shifts. The decision you're avoiding today won't even be the same decision six months from now. Waiting has a cost. Most people dramatically underestimate it.

The Two-Door Framework

So how do you know when you have enough information? Jeff Bezos uses a mental model that's useful here. Picture every decision as a door you're about to walk through.

Some doors are one-way: once you're through, you can't come back. Selling your company. Getting married. Signing a ten-year lease. These deserve serious deliberation.

Most decisions, though, are two-way doors. You walk through, look around, and if you don't like what you see, you walk back out. Starting a side project. Trying a new marketing strategy. Having that difficult conversation. The consequences are real, but they're not permanent.

The mistake most people make is treating two-way doors like one-way doors. They apply the same level of analysis to choosing project management software as to acquiring a company. They're not being thorough. They're overthinking reversible choices. That's how organizations grind to a halt. That's how careers stall. That's how opportunities evaporate while you're still "thinking about it." Before you gather more information, ask yourself: Can I reverse this? If yes, even if reversing would be annoying, you're probably overthinking it.

The 40-70 Rule

General Colin Powell used a decision framework he called the 40-70 rule. Military leaders and executives have adopted it for decades.

The floor: 40%. Never decide with less than forty percent of the information you'd want. Below that threshold, you're not being decisive. You're gambling.

The ceiling: 70%. Don't wait for more than seventy percent. By the time you've gathered that much data, the window has usually closed. Someone else acted. The situation changed. The decision got made for you, by default.

The sweet spot. That range between forty and seventy percent is where good decisions actually happen. It feels uncomfortable because you're not certain. That discomfort isn't a warning sign, though. It's the signal that you're doing it right. Most overthinking happens above seventy percent. You already have what you need. You're just not ready to commit. If deciding feels completely comfortable, you've probably waited too long.

The Productive Discomfort Test

"I genuinely need more information" and "I'm using research as a hiding place" feel identical from the inside. Both feel responsible. Both feel like due diligence.

I once watched a friend spend eleven months researching a career change. She read books. Took assessments. Talked to people in the field. Built spreadsheets comparing options. She knew more about the industry than people working in it. And at month eleven, she was no closer to a decision than at month one. The research had become the activity. The feeling of progress without the risk of commitment. She wasn't preparing. She was hiding. And she couldn't tell the difference.

So how do you tell productive research apart from overthinking? Four tests:

Test 1: The Flip Question. Ask yourself: What specifically would change my decision? Not what would make me more comfortable. What would actually flip my choice? If you can't name something concrete, you're not gathering information. You're stalling.

Test 2: The Repetition Check. Are you learning genuinely new things? Or finding different sources that confirm what you already suspected? The third article about the same topic isn't research. It's reassurance-seeking dressed up as diligence.

Test 3: The Timeline Test. Have you set a deadline for deciding? "When I have enough information" isn't a deadline. That's an escape hatch that never closes. A real deadline has a date. It's in your calendar. It arrives whether you're ready or not.

Test 4: The Broken Record Test. If you keep telling the same people "I'm still thinking about it" for the same decision over weeks or months, that's not thinking.
That's avoidance on autopilot. You've become a broken record, and everyone can hear it except you.

Here's the uncomfortable truth: if you fail more than one of these tests, you probably already have enough information. You're not under-informed. You're over-attached to the comfort of not having decided yet. The goal isn't to eliminate uncertainty. You can't. The goal is to act while uncertainty is still manageable, while you can still correct course, while the opportunity is still breathing.

Your Decision Deadline

That decision you've been postponing? It has an expiration date. Not one you set. One that's already running. Every week you wait, the context shifts. The opportunity narrows. The person you'd need to have that conversation with forms new assumptions about your silence. You're not preserving your options by waiting. You're watching them quietly disappear.

This week, not someday, identify the decision you've been postponing. The one that popped into your head when this video started. You know exactly which one I mean.

Set a deadline. Pick a specific date by which you will decide. Not a date by which you'll have complete information. A date by which you'll commit to a direction. Write it down. Put it in your calendar. Make it real.

Then ask the two-door question: Is this reversible? If it is, your deadline should be soon. Days, not months. When that deadline arrives, decide. Not perfectly. Not with complete confidence. Deliberately, with the information you have, knowing you can adjust as you learn more.

And once you've decided, set a checkpoint. Pick a date, two weeks out, a month out, when you'll evaluate whether to stay the course or walk back through the door. This isn't second-guessing. It's building the feedback loop that makes two-way doors work. Decide now, verify later.

That feeling of deciding before you're fully ready? Get used to it. That's what good decision-making actually feels like.

Closing

Uncertainty isn't going away.
Not for this decision, not for any decision that actually matters. The question is whether you'll learn to act within it, or let it become a permanent excuse. Acting under uncertainty requires energy, though. Mental fuel. And when that fuel runs out, everything changes. That's next time: deciding when you're depleted. Because the hardest decisions in your life won't happen when you're rested and sharp. They'll happen at 10 PM after a brutal day, when someone needs an answer and you're running on empty.

Before You Go

You've got two choices right now. Choice one: scroll to the next video. Let this become another thing you watched but didn't act on. Choice two: pause for thirty seconds. Think about that decision. Set the deadline. Put it in your calendar before you leave this page. Thirty seconds. That's the difference between insight and action.

If mindjacking is a new concept for you, I've got a full episode that breaks down how to spot when your thinking has been hijacked, whether by outside forces or by yourself. Link's below. For those who want to support the work and the team behind these episodes, you can become a paid subscriber on Substack. That link is below too.

One question for the comments: What decision are you finally going to stop researching and start making? Your deadline begins now.

Sources

The Jam Study: Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995-1006. The study was conducted at Draeger's Market in Menlo Park, California. PubMed: https://pubmed.ncbi.nlm.nih.gov/11138768/ Full paper: https://faculty.washington.edu/jdb/345/345%20Articles/Iyengar%20&%20Lepper%20(2000).pdf

The 40-70 Rule: Attributed to General Colin Powell. The rule appears in "Quotations from Chairman Powell: A Leadership Primer" by Oren Harari (1996), based on Powell's My American Journey (1995). Powell served as a four-star general in the U.S. Army and as the 65th U.S. Secretary of State (2001-2005). In the formula "P = 40 to 70," P stands for the probability of success, and the numbers indicate the percentage of information acquired.
You've built a toolkit over the last several episodes. Logical reasoning. Causal thinking. Mental models. Serious intellectual firepower. Now the uncomfortable question: When's the last time you actually used it to make a decision? Not a decision you think you made. One where you evaluated the options yourself. Weighed the evidence. Formed your own conclusion.

Here's what most of us do instead: we Google it, ask ChatGPT, go with whatever has the most stars. We feel like we're deciding, but we're not. We're just choosing which borrowed answer to accept. That gap between thinking you're deciding and actually deciding is where everything falls apart. And there's a name for it.

What Mindjacking Actually Is

Mindjacking. Not the sci-fi version where hackers seize your brain through neural implants. The real version. Where you voluntarily hand over your thinking because someone else already did the work. It's not dramatic. It's convenient. The algorithm ranked the results. The expert weighed in. The crowd already decided. Why duplicate the effort?

Mindjacking is different from ordinary influence. You choose it. Every single time. Nobody forces you to stop evaluating. You volunteer, because forming your own conclusion is harder than borrowing someone else's. What exactly are you losing when this happens?

The Two Skills Under Attack

Mindjacking destroys two distinct capabilities. They're different, and you need both.

Evaluation independence is the ability to assess whether a claim is valid. Not whether the source has credentials. Not whether experts agree. Whether the evidence actually supports the conclusion.

Decision independence is the ability to commit to a path based on your own judgment, without needing someone else to validate it first.

Both skills need each other. Watch what happens when one erodes faster than the other. A woman researches her medical condition for hours. Journal articles. Treatment comparisons.
She understands her options better than most medical students would. She walks into the doctor's office, lays out her analysis. It's thorough. Sophisticated, even. The doctor reviews it and says, "This is impressive. You've really done your homework." She nods. Then looks up and asks: "So what should I do?" She can evaluate. She can't decide.

Now flip it. Think about someone who decides fast. Trusts their gut. Never waits for permission. How often does that person get burned by bad information they never verified? They can decide. They can't evaluate. Lose either ability and you're trapped. Lose both and you're not thinking at all.

The Four Surrender Signals

How do you know when mindjacking is happening? It has a signature. Four internal signals that reveal the handoff in progress, if you know how to read them.

Signal one: Relief. The moment you find "the answer," you notice a weight lifting. Pay attention to that. Relief isn't insight. It's the burden of thinking being removed. When you actually work through a problem yourself, the result isn't relief. It's clarity. And clarity usually comes with new questions, not a sense of "done."

Signal two: Speed. Uncertainty to certainty in seconds? That's not evaluation. You found someone else's answer and adopted it. There's a difference between "I figured it out" and "I found someone who figured it out." One took effort. The other took a search bar.

Signal three: Echo. Listen to your own conclusions. Do they sound like something you read, heard, or scrolled past recently? If your "own opinion" matches a headline almost word-for-word, it probably isn't yours. You're not thinking. You're repeating.

Signal four: Unearned confidence. You're certain about a conclusion, but ask yourself: could you explain the reasoning behind it? Not where you heard it. The actual reasoning. If you can't, that confidence isn't yours.
It came attached to someone else's answer, and you absorbed both their conclusion and their certainty without doing any analysis yourself. Once you notice these signals firing, you need a way to stop the pattern before it completes.

The Interrupt

The interrupt is a single question: "Did I reach this conclusion, or just find it?" Nine words. That's the whole thing. It works because it forces a distinction your brain normally blurs. "I decided" and "I adopted someone's decision" are identical from the inside, until you ask the question.

Test it now. Think about the last opinion you formed. The last purchase you made. The last recommendation you accepted. Did you reach that conclusion, or just find it? The interrupt doesn't tell you what to think. It tells you whether you're thinking at all. Finding an answer isn't the same as reaching one. This matters more than you might realize, because the pattern is bigger than any single decision you make.

The Aha Moment: The Illusion of Expertise

Researchers at Penn State looked at 35 million Facebook posts and found something remarkable: seventy-five percent of shared links were never clicked. Three out of four times, people passed along articles they hadn't read. But that's not the strange part. A separate study from the University of Texas discovered that the act of sharing content, even content you haven't read, makes you think you understand it. Sharing tricks you into believing you know. You didn't read the article about investing, but you shared it, so now you believe you understand investing.

Worse: people act on that false knowledge. In the study, people who shared an investing article took significantly more financial risk afterward, even though they never read what they shared. They weren't pretending to know. They genuinely believed they knew, because sharing had become a substitute for learning. That's mindjacking at scale.
Millions of people believing they're informed, acting confident, having never actually thought about any of it.

The Feed Challenge

I want you to try something as soon as this video ends. Open your social media feed. Find a post where someone you know has liked or shared an article, an opinion, a hot take. Now ask: Did they actually think about this? Or did they just pass it along? Look for the signals. Is their comment just echoing the headline? Are they expressing certainty about something they probably spent ten seconds on? Did they add anything that suggests they read past the first paragraph? Or did they just click "like" and move on?

Remember: seventy-five percent of shared links are never clicked. That like or share you're looking at? They probably never read what they're endorsing. You'll be shocked how easy this becomes once you start looking. It's everywhere. People confidently endorsing opinions they never examined. Certainty without evaluation. Expertise without effort.

Why start with what others are putting in your feed? Because it's much easier to spot mindjacking in others than in yourself. Your ego doesn't interfere. Train your eye on what's coming at you first. Then turn it inward. Awareness precedes choice. You can't reclaim what you can't see.

What's Next

Now you can see the handoff happening. That's the foundation. But seeing it isn't enough. Knowing the signals won't help you when you're exhausted and the algorithm is offering relief. Understanding the trap won't save you when everyone in the room disagrees and consensus feels like safety. Awareness alone won't protect you when the deadline is tomorrow and you don't have time to think. Those are the moments where mindjacking wins. Not because you lack the ability to think, but because thinking starts to look like a luxury you can't afford. That's the real battle. And that's what comes next. Next, we tackle the hardest version of this problem: acting before you're ready.
What happens when you have to decide, the information isn't complete, and it never will be? Waiting for certainty feels responsible. But sometimes, waiting is the trap. If you're new here, check out the earlier episodes where we built the evaluation toolkit this series is built on. Watch the series on YouTube.

Don't Click Yet

Here's a thought: most people will finish this video and scroll to the next one. The algorithm already has a recommendation queued up. Relief is one click away. But you could do something different. You could stick with the discomfort for a minute. Actually try the feed challenge before moving on.

If you want to go deeper on mindjacking, the full breakdown lives at philmckinney.com/mindjacking. And if you want to support the team that helps me produce this content, consider becoming a paid subscriber on Substack. What's one opinion you realized might not actually be yours? Share this with someone who needs to hear it.

References

Penn State University (2024). "Social media users probably won't read beyond this headline, researchers say." Analysis of 35 million Facebook posts published in Nature Human Behaviour.

Ward, A., Zheng, J. F., & Broniarczyk, S. M. (2022). "I share, therefore I know? Sharing online content – even without reading it – inflates subjective knowledge." Journal of Consumer Psychology. University of Texas at Austin, McCombs School of Business.
Welcome to this week's show. I'm recording this episode from my hotel room here in Las Vegas, Nevada, at the annual Consumer Electronics Show 2026. If you've been around this channel for long, you know I do this every year. This is 20-plus years I've been coming to the Consumer Electronics Show. Normally, I don't cover tech and new products on this channel—except for once a year at CES. And it's less about specific companies and what they've announced. You can find that on thousands of channels on YouTube or podcasts. What I like to talk about are the trends—the trends that are emerging—and give you my view and opinion on what they really mean for the innovation space. Are we really innovating, or are we just regurgitating the same thing year after year?

I do have some notes here that I'll be glancing at as we go through this today, and we'll be splicing in videos I took on the show floor, along with video supplied to us by CES, to give you a feel for what was here and what's going on.

The Show's Legacy

First, let's recognize that the Consumer Electronics Show is now in its 59th year. It's a spin-off from the old Chicago music show back in the late 1960s. Yes, the late '60s. It's gone through some gyrations over the decades and remains one of the few big shows that survived COVID.

Traditional Consumer Electronics

As usual, one of the big emphases is TVs, displays, home automation, new refrigerators, new washers and dryers—true consumer electronics, things you would find and put into your home. This year was no different. The big manufacturers were here, along with a number of new smaller manufacturers showcasing new TV technologies. Micro LED is the new buzzword bouncing around the show, and there were plenty of displays to see. I'm a big TV guy, so I definitely had to check that out and see what could be the next TV I put into my house.

The AI and Robotics Takeover

The one thing about this year's show that was just overwhelming was robots and AI.
They were everywhere. I couldn't even tell you how many times we saw AI applied to things that make no sense—though some applications were actually pretty smart. But how many AI toilets do you really need at any given show?

On the robotics side, we saw all the familiar ones—like lawn mowers that automatically find your boundaries. One was actually selling the feature that you could program in graphic designs, and it would cut your yard in such a way that the design would appear in your lawn. We also saw humanoid robots, robots doing backflips, robots dancing with people, dancing hands where the fingers are moving. You could buy just the hands or the arms or the elbows and assemble your own robots. It was pretty crazy.

Then we started seeing the combination of AI and robots—interactive robots where you could stand there, talk with them, point, and they would follow your commands. Pick up this item. Move this item somewhere else. Not programming through some controller, but simply pointing and talking to direct the robot to do what you want.

The Evolution of Electric Vehicles

One thing we've seen in past shows was the big emphasis on electric vehicles. This year, the EV car market—which we've seen slow down generally—also slowed down here at the show. However, what we saw in its place focused on two areas:

Commercial EVs and Hybrids: There was significant attention on commercial use of EVs, particularly hybrid electric vehicles with combustion engines.

Emergency Response Innovation: One exhibit that really impressed me was a fire truck supplied by Dallas Fort Worth Airport. This massive Oshkosh fire truck is a hybrid that uses electric motors for high torque and high acceleration—literally shaving seconds off response time. Given the limited distance on airport property, if there's a disaster or fire requiring quick reaction, the electric motors can accelerate very quickly.
There are only about 15 of these trucks in the world, and something like six or seven are just at Dallas Fort Worth Airport. I spent a fair amount of time with that team. This is a perfect example of smart innovation—innovation that isn't adopted just because something is shiny and new. They thought carefully about how to use it, when to apply the right design, leveraging the benefits of electric while using the combustion engine to run the water pumps.

Electric Motorcycles: The other area with significant EV presence was motorcycles, particularly dirt bikes. When you're going out for the day to have some fun, the low noise of an electric motor means you're not disturbing rural areas with a combustion engine. Another example of good, smart innovation.

Autonomous Vehicles in Commercial Applications

The other big area for the show was autonomous vehicles—not just EVs, but vehicles that can operate themselves, particularly in commercial use like farming. John Deere has a long history of autonomous farming with very accurate planting using GPS technologies. Caterpillar had a really interesting exhibit where they were live streaming Caterpillar machines doing autonomous mining from spots all over the world right into the booth. You could see autonomous technology in action.

A lot of people think of autonomous vehicles as something new, with Tesla being the innovator. Just to give you a data point: Caterpillar has offered autonomous vehicles since 1995. That's right—1995. Caterpillar introduced the first version of their machines that could operate autonomously. What we all think is new is really the perfect example of what's old becoming new again as progress is made.

Kubota: I'm a big Kubota fan, so I had to stop in there. They had an interesting vehicle that applies to a variety of different devices—tractors, even things you can do around a small ranch like what I own in northern Colorado, where I'm trying to harvest hay. It's something that fits smaller operations.
You don't have to be a big farm to take advantage of these technologies.

Other Notable Technologies

Obviously, there were all the other normal things at the Consumer Electronics Show—thousands and thousands of rows of different types of Bluetooth speakers. Battery technology was a big thing, though a lot of it was just more efficiency from lithium-ion. There was an interesting booth on what they call paper batteries—literally paper where you print the battery and then roll it up into whatever form factor you want.

The Bottom Line

The show this year was overly dominated by AI—AI everything—and robotics. Those would be the two fundamental themes. That's the walk-away after spending three days and something like 45,000 to 50,000 steps covering all the show floor space. That's my insight as I wrap up this episode. This is my one time a year that I geek out on all the technologies.

If you have any questions or your own thoughts—if you were there and saw something different you'd want to share—go ahead and put a comment down below, or pop over to PhilMcKinney.com and post a comment to the post there. Next week we'll be back, kicking off Part Two of the Thinking 101 series. We did Part One and wrapped that up right before the holidays. Now we're kicking off Part Two—you don't want to miss it. Make sure you subscribe, hit the like button, and give us a thumbs up. It all helps with the algorithm. Have a great week, and we'll talk to you next week. Bye-bye.
Twenty-one years. That's how long I've been doing this. Producing content. Showing up. Week after week, with only a handful of exceptions—most of them involving hospitals and cardiac surgeons, but that's another story. After twenty-one years, you learn what lands and what doesn't. You learn not to get too attached because you never know what's going to connect. But this one surprised me. Thinking 101—the response has been different. More comments. More questions. More people saying, "This is exactly what I needed." It's made me reflect on why I started this series.

Years ago, I was in a room with people from the Department of Education. I asked them a simple question: Why are we graduating people who can't think? Not "don't know things." Can't think. Can't reason through a problem. Can't evaluate an argument. Their answer was... let's just say it wasn't satisfying. That moment stuck with me. When AI exploded onto the scene—when everyone suddenly had a machine that could generate answers instantly—it became clear: thinking for yourself isn't just valuable anymore. It's survival. That's what Part One was about. The Foundations. Building your thinking toolkit.

So what's next? For the next few weeks—nothing. We're taking a breather for the holidays. I'm going to spend time with my wife, my kids, my grandkids. We'll be back in early January. And if you're heading to CES in Las Vegas that first week—let me know. I'd love to meet up.

But before I go, I have a question for you. Should there be a Part Two? I have ideas. If Part One was about building your toolkit, Part Two could be about what happens when you have to use it. Because knowing how to think and making good decisions aren't the same thing. Real decisions happen when you're tired. When you're stressed. When your own brain is working against you. Part Two could be about that gap—between knowing and doing. But I want to hear from you first. Should I do it? What topics would you want covered?
What questions are you wrestling with? Post a comment. If you're a paid subscriber on Substack, send me a DM—I read those. And speaking of paid subscribers—that's the best way to support the team that makes this happen. Twenty-one years of showing up doesn't happen alone. You can also visit our store at innovation.tools for merch, my book, and more.

Part One is done. The holidays are calling. Thank you for making this series land the way it did. See you in January. I'm Phil McKinney. Take care of yourselves—and each other.
Before the Space Shuttle Challenger exploded in 1986, NASA management officially estimated the probability of catastrophic failure at one in one hundred thousand. That's about the same odds as getting struck by lightning while being attacked by a shark. The engineers working on the actual rockets? They estimated the risk at closer to one in one hundred. A thousand times more dangerous than management believed.¹

Both groups had access to the same data. The same flight records. The same engineering reports. So how could their conclusions be off by a factor of a thousand? The answer isn't about intelligence or access to information. It's about the mental frameworks they used to interpret that information. Management was using models built for public relations and budget justification. Engineers were using models built for physics and failure analysis. Same inputs, radically different outputs. The invisible toolkit they used to think was completely different.

Your brain doesn't process raw reality. It processes reality through models. Simplified representations of how things work. And the quality of your thinking depends entirely on the quality of the mental models you possess. By the end of this episode, you'll have three of the most powerful mental models ever developed. A starter kit. Three tools that work together, each one strengthening the others. The same tools the NASA engineers were using while management flew blind. Let's build your toolkit.

What Are Mental Models?

A mental model is a representation of how something works. It's a framework your brain uses to make sense of reality, predict outcomes, and make decisions. You already have hundreds of them. You just might not realize it. When you understand that actions have consequences, you're using a mental model. When you recognize that people respond to incentives, that's a model too.

Think of mental models as tools. A hammer drives nails. A screwdriver turns screws. Each tool does a specific job.
Mental models work the same way. Each one helps you do a specific kind of thinking. One model might help you spot hidden assumptions. Another might reveal risks you'd otherwise miss. A third might show you what success requires by first mapping what failure looks like. The collection of models you carry with you? That's your thinking toolkit. And like any toolkit, the more quality tools you have, and the better you know when to use each one, the more problems you can solve.

Here's the problem. Research from Ohio State University found that people often know the optimal strategy for a given situation but only follow it about twenty percent of the time.² The models sit unused while we default to gut reactions and habits. The goal isn't just to collect mental models. It's to build a system where the right tool shows up at the right moment. And that starts with having a few powerful models you know deeply, not dozens you barely remember. Let's add three tools to your toolkit.

Tool One: The Map Is Not the Territory

This might be the most foundational mental model of all. Coined by philosopher Alfred Korzybski in the 1930s, it delivers a simple but profound insight: our models of reality are not reality itself.³ A map of Denver isn't Denver. It's a simplified representation that leaves out countless details. The smell of pine trees, the feel of altitude, the conversation happening at that corner café. The map is useful. But it's not the territory. Every mental model, every framework, every belief you hold is a map. Useful? Absolutely. Complete? Never.

This explains the NASA disaster. Management's map showed a reliable shuttle program with an impressive safety record. The engineers' map showed O-rings that became brittle in cold weather and a launch schedule that left no room for delay. Both maps contained some truth. But management's map left out critical territory: the physics of rubber at thirty-six degrees Fahrenheit.
When your map doesn't match the territory, the territory wins. Every time.

How to use this tool: Before any major decision, ask yourself: What is my current map leaving out? Who might have a different map of this same situation, and what does their map show that mine doesn't? The NASA engineers weren't smarter than management. They just had a map that included more of the relevant territory.

Tool Two: Inversion

Most of us approach problems head-on. We ask: How do I succeed? How do I win? How do I make this work? Inversion flips the question. Instead of asking how to succeed, ask: How would I guarantee failure? What would make this project collapse? What's the surest path to disaster? Then avoid those things. Inversion reveals dangers that forward thinking misses. When you're focused on success, you develop blind spots. You see the path you want to take and ignore the cliffs on either side.

Here's a surprising example. When Nirvana set out to record Nevermind in 1991, they had a budget of just $65,000. Hair metal bands were spending millions on polished productions.⁴ Instead of trying to compete on the same terms and failing, they inverted the formula entirely. Where hair metal was flashy, Nirvana was raw. Where others added complexity, they stripped down. Where the industry zigged, they zagged. The result? They didn't just succeed. They created an entirely new genre and sold over thirty million copies. They won by inverting the game everyone else was playing.

How to use this tool: Before pursuing any goal, spend ten minutes listing everything that would guarantee failure. Be specific. Be ruthless. Then look at your current plan and ask: Am I accidentally doing any of these things? Inversion doesn't replace forward planning. It completes it.

Tool Three: The Premortem

Imagine your project has already failed. Not "might fail" or "could fail." It has failed. Completely. Now your job is to explain why.
Researchers at Wharton, Cornell, and the University of Colorado tested this approach and found something striking: simply imagining that failure has already happened increases your ability to correctly identify reasons for future problems by thirty percent.⁵

Why does this work? When we think about what "might" go wrong, we stay optimistic. We protect our plans. We downplay risks because we're invested in success. But when we imagine failure has already occurred, we shift into explanation mode. We're no longer defending our plan. We're forensic investigators examining a wreck.

Here's proof the premortem works in the real world. Before Enron collapsed in 2001, its company credit union had run through scenarios imagining what would happen if their sponsor company failed.⁶ They asked: If Enron goes under, what happens to us? They made plans. They reduced their dependence. When the scandal broke and Enron imploded, taking billions in shareholder value with it, the credit union survived. They'd already rehearsed the disaster. Every other institution tied to Enron was blindsided. The credit union had seen the future because they'd imagined it first.

How to use this tool: Before any major decision, fast-forward to failure. It's one year from now and everything has gone wrong. Write down why. What did you miss? What risks did you ignore? Then prevent those things from happening. You can't prevent what you refuse to imagine.

How These Three Tools Work Together

Each tool is powerful alone. Together, they're transformational. Imagine you're considering a career change. Leaving your stable job to start a business.

Start with The Map Is Not the Territory. What's your current map of entrepreneurship? Probably shaped by success stories, LinkedIn posts, and survivorship bias. But what's the actual territory? CB Insights analyzed over a hundred failed startups to find out why they died.
The number one reason, responsible for forty-two percent of failures, was building something nobody wanted.⁷ Founders had a map that said "customers will love this." The territory said otherwise. What is your map leaving out?

Apply Inversion. How would you guarantee this business fails? Starting undercapitalized. Launching without testing the market. Ignoring early warning signs because you're emotionally invested. Now look at your current plan. Are you doing any of these things?

Run a Premortem. It's two years from now. The business has failed. Write the story. Maybe you ran out of money at month fourteen. Maybe your key assumption about customer behavior turned out to be wrong. What happened?

One tool gives you a perspective. Three tools working together give you something close to wisdom. This is exactly what the NASA engineers were doing, and what management wasn't. The engineers were constantly asking: Does our map match the territory? What would cause failure? What are we missing? Management was stuck in a single frame: schedule and budget. The difference between a one-in-one-hundred-thousand estimate and a one-in-one-hundred estimate? The difference between confidence and catastrophe? It was the thinking toolkit each group brought to the problem.

Practice: The Three-Tool Test

Here's how to put these tools to work this week.

Identify a decision you're currently facing. Something real. Something that matters. Write it in one sentence.

Check your map. What assumptions are you making? Where did they come from? Who might see this differently?

Invert it. Set a timer for five minutes. List every way you could guarantee failure. Be ruthless.

Run the premortem. It's one year from now. You chose wrong. Write two paragraphs explaining what happened.

Find the overlap. Where do your inversion list and premortem story agree? That's your highest-risk blind spot.

Take one action. What's one step you can take this week to address your biggest risk?

Twenty minutes.
One decision. Run it once, then try it again next week on a different decision. As you use these tools, you'll notice other mental models worth adding. Your toolkit will grow. Most decisions feel routine until they're not. That morning at NASA
Quick—which is more dangerous: the thing that kills 50,000 Americans every year, or the thing that kills 50? Your brain says the first one, obviously. The data says you're dead wrong. Heart disease kills 700,000 people annually, but you're not terrified of cheeseburgers. Shark attacks kill about 10 people worldwide per year, but millions of people are genuinely afraid of the ocean. Your brain can't do the math, so you worry about the wrong things and ignore the actual threats.

And here's the kicker: The people selling you fear, products, and policies? They know your brain works this way. They're counting on it. You're not bad at math. You're operating with Stone Age hardware in an Information Age world. And that gap between your intuition and reality? It's being weaponized every single day. Let me show you how to fight back.

What They're Exploiting

Here's what's happening: You can instantly tell the difference between 3 apples and 30 apples. But a million and a billion? They both just feel like "really big." Research from the OECD found that numeracy skills are collapsing across developed countries. Over half of American adults can't work with numbers beyond a sixth-grade level. We've become a society that can calculate tips but can't spot when we're being lied to with statistics. And I'm going to be blunt: if you can't think proportionally in 2025, you're flying blind. Let's fix that right now.

Translation: Make the Invisible Visible

Okay, stop everything. I'm going to change how you see numbers forever. One million seconds is 11 days. Take a second, feel that. Eleven days ago—that's a million seconds. One billion seconds is 31 years. A billion seconds ago, it was 1994. Bill Clinton was president. The internet was just getting started. That's how far back you have to go. Now here's where it gets wild: One trillion seconds is 31,000 years. Thirty-one THOUSAND years. A trillion seconds ago, humans hadn't invented farming yet.
We were hunter-gatherers painting on cave walls. So when you hear someone say "What's the difference between a billion and a trillion?"—the difference is the entire span of human civilization. This isn't trivia. This is the key to seeing through manipulation. Because when a politician throws around billions and trillions in the same sentence like they're comparable? Now you know—they're lying to your face, banking on you not understanding scale.

The "Per What?" Weapon

Here's the trick they use on you constantly, and once you see it, you can't unsee it. A supplement company advertises: "Our product reduces your risk by 50%!" Sounds incredible, right? Must buy immediately. But here's what they're not telling you: If your risk of something was 2 in 10,000, and now it's 1 in 10,000—that's technically a 50% reduction. But your actual risk only dropped by 0.01%. They just made almost nothing sound like everything. Or flip it around: "This causes a 200% increase in risk!" Terrifying! Except if your risk went from 1 in a million to 3 in a million, you're still almost certainly fine. This is how they play you. They show you percentages when absolute numbers would expose them. They show you raw numbers when rates would destroy their argument. Your defense? Three words: "Per what, exactly?" 50% of what baseline? 200% increase from what starting point? That denominator is where the truth hides. Once you start asking this, you'll see the manipulation everywhere.

Let's Catch a Lie in Real Time

Okay, let's do this together right now. I'm going to show you a real manipulation pattern I see constantly. Headline: "4 out of 5 dentists recommend our toothpaste!" Sounds pretty convincing, right? Let's apply what we just learned. First—per what? Four out of five of how many dentists? If they surveyed 10 dentists and 8 said yes, that's technically 80%, but it's meaningless. Second—what was the actual question?
Turns out, they asked dentists to name ALL brands they'd recommend, not which ONE was best. So 80% mentioned this brand... along with seven other brands. Third—scale: There are 200,000 dentists in the US. They surveyed 150. That's 80% of 0.075% of all dentists. See how fast that falls apart? That's the power of asking "per what?"

The Exponential Trap

This is where your intuition doesn't just fail—it catastrophically fails. And it's costing people everything. Grab a piece of paper. Fold it in half. Twice as thick, no big deal. Fold it again. Four times. Okay. Keep going. Most people think if you could fold it 42 times, maybe it'd be as tall as a building? No. It would reach the moon. From Earth. To the moon. That's exponential growth, and your brain cannot comprehend it.

Here's why this matters in your actual life: You've got a credit card with $5,000 on it at 18% interest. You think "I'll just pay the minimum, I'll catch up eventually." Your brain treats this like a linear problem. It's not. It's exponential. That $5,000 becomes $10,000 faster than you can possibly imagine, and then $20,000, and suddenly you're drowning. Or retirement: Starting to save at 25 versus 35 doesn't feel like a huge difference. Ten years, whatever. But exponential growth means that ten-year head start could be worth 2-3 times more money when you're 65. When you hear "doubles every," "grows by X percent," or "compounds"—stop. Your intuition just became your enemy.

Rapid Reality Checking

You don't need a calculator to spot lies. You need a sanity check that takes ten seconds. I'm going to give you the fastest BS detector I know:

Round brutally. 47 million becomes 50 million. 8.7% becomes 10%. Precision is the enemy of speed.

Find the zeros. Is this thousands, millions, billions? Get the ballpark right first.

Do the rough math. What's 7% of 50 million? Well, 10% is 5 million, so 7% is about 3.5 million. Done. Close enough to catch the lie.

Smell test it.
Someone claims a new app has a billion users after launching last month? That's one in eight humans on Earth. Really? I use this every single day now. News article, social media post, advertisement—ten seconds and I know if someone's lying to me. You're not trying to be exact. You're trying to be un-foolable.

Don't Make These Mistakes

Before we go further, let me save you from three traps I see people fall into. First: Don't become the conspiracy theorist who distrusts ALL numbers. Sometimes 50% really is 50%. The goal is healthy skepticism, not paranoid cynicism. Second: Don't weaponize this to win petty arguments. "Actually, you didn't do 50% of the dishes"—nobody likes that person. Third: Don't assume you're now immune to manipulation. These are tools, not shields. Stay humble. Smart people get fooled all the time—they just recover faster.

Putting It All Together

Let me show you how these four techniques work as a system. A tech company announces: "We've tripled our user base to 3 million, growing 200% annually, and reduced complaints by 90%!" Watch this:

Scale check: 3 million users. In social media? That's tiny. Instagram has 2 billion. Context matters.

Per what? Tripled from what starting point? If they went from 50,000 to 3 million, that's actually 60x growth—why understate it? And 90% reduction from how many complaints? Ten to one? Who cares.

Exponential check: 200% annual growth is explosive... and unsustainable. What happens when they hit market saturation next quarter?

Quick estimate: If they have 3 million users and the market is 300 million potential users, they've captured 1%. Still lots of room to grow—or lots of room for competitors.

See how these stack?

Your Turn—Right Now

Okay, pause this video. Seriously, pause it. Open your news app or social media feed. Look at the first three posts with numbers in them. Now run them through the test: What's the scale? Per what? Is it exponential? Does it pass the smell test? I'll give you 60 seconds. Go.
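If you want to see the four stacked checks as arithmetic rather than instinct, here is a minimal sketch in Python. It runs the checks on the episode's hypothetical tech-company announcement; the Instagram anchor, the 300-million addressable market, and the complaint counts are the episode's own illustrative numbers, not real data.

```python
# Sanity-checking the press release: "tripled to 3 million users,
# growing 200% annually, complaints down 90%". All numbers illustrative.

users = 3_000_000
instagram_users = 2_000_000_000      # rough scale anchor for social media
addressable_market = 300_000_000     # assumed pool of potential users

# 1. Scale check: 3 million sounds big until you anchor it.
print(f"scale: {users / instagram_users:.2%} of Instagram's base")

# 2. Per what? A "90% reduction" from 10 complaints to 1 is 9 complaints.
complaints_before, complaints_after = 10, 1
print(f"absolute drop: {complaints_before - complaints_after} complaints")

# 3. Exponential check: 200% annual growth means tripling every year.
#    Count how long that pace could even continue.
years, projected = 0, users
while projected < addressable_market:
    projected *= 3
    years += 1
print(f"tripling overruns the whole market in ~{years} years")

# 4. Quick estimate: current market share.
print(f"share today: {users / addressable_market:.0%}")
```

Twenty lines, no calculator skills required—each print statement is one of the checks you just ran in your head.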
Done? Did you find manipulation? I bet you found at least one. Comment below what you discovered—I genuinely want to know what you're seeing out there.

The Real Stakes

Let me tell you what just happened. You learned five techniques. But you actually learned something bigger: You learned that your intuition about numbers is systematically broken, and people in power know it and exploit it. Remember the opening? The reason you're more afraid of sharks than heart disease isn't random. Media companies know fear drives clicks, and rare dramatic events trigger your brain differently than common statistical threats. So they show you the sharks, not the cheeseburgers. They're not smarter than you. They're just counting on you not checking the math. We're entering an era of AI-generated stats, algorithmic manipulation, and deepfake data. Your ability to think proportionally isn't just about making better decisions anymore. It's about knowing what's real. The people who can't tell a million from a billion will be led by people who can. And those people? They're fine with you staying confused. So what are you going to be—the one doing the math, or the one getting played? If you want to keep sharpening these skills, this is episode 7 in the Thinking 101 series. Each episode gives you another tool for thinking clearly in a world designed to confuse you. Hit subscribe so you don't miss the next one. And if this changed how you see numbers? Share it. Someone in your life needs this. Choose today.
The Clock is Screaming


2025-11-25 11:50

I stepped out of the shower in March and my chest split open. Not a metaphor. The surgical incision from my cardiac device procedure just… opened. Blood and fluid everywhere. Three bath towels to stop it. My wife—a nurse, the exact person I needed—was in Chicago dealing with her parents' estate. Both had just died. So my daughter drove me to the ER instead. That was surgery number one. By Thanksgiving this year, I'd had five cardiac surgeries. Six hospitalizations. All in twelve months. And somewhere between surgery three and four, everything I thought I knew about gratitude… broke.

When the Comfortable List Stopped Working

Five surgeries. Three cardiac devices. My body kept rejecting the thing meant to save my life. Lying there before surgery number five, waiting for the anesthesia, one question kept circling: What if I don't make it this time? And that's when the comfortable list stopped working. You know the one. Health. Family. Career. The things we say around the table because they sound right. But when you're not sure you'll wake up from surgery… when your wife is burying both her parents while managing your near-death… when the calendar is filled with hospital dates instead of holidays… You can't perform gratitude anymore. You have to find out what it actually means. The clock isn't just ticking anymore. It's screaming.

What Survives

And that's when I saw it clearly. Not in a hospital room—at a lunch table with my grandson. Last month, Liam sat next to me after church. He's twelve. Runs his own business designing 3D models. And he'd been listening to my podcast episode about breakthrough innovations. He had an idea. A big one. "It would need way better batteries than we have now, Papa." So we went deep—the kind of conversation where you forget a twelve-year-old is asking questions most engineers won't touch. He's already thinking about making the impossible possible.
And sitting there, watching him work through the problem, I realized something: This is what survives when I'm gone. My grandfather would take me to my Uncle Bishop's tobacco farm in rural Kentucky. When we'd do something wrong—cut a corner, rush through it—we'd hear it: "A job worth doing is worth doing right." Almost like a family mantra. I heard it on that farm. My kids heard it from me. Liam hears it now. And that line will keep moving forward long after I'm gone. Not because of the accolades. Because of the people.

It's Not Just Liam

But here's what hit me sitting there with Liam: It's not just him. It's you. Every week for more than twenty years, I've been putting out content. Podcasts. Videos. Articles. Not for the downloads. Not for the metrics. For this exact moment—where something I share gets passed forward. Where you have a conversation with someone younger who needs to hear it. Where you take what works and make it your own. That's what legacy actually is. Not the content I create. Not what's on a shelf. The people we invest time in. The effort we put into helping them become who the future needs. My legacy is Liam, yes. But it's also every person who's taken something from these conversations and shared it forward. That's you. That's the reason the clock screaming doesn't make me stop. It makes me keep going. Because you're going to pass this forward. And that's what survives.

The Math

I turned sixty-five in September. Both my parents died at sixty-eight. The math isn't encouraging. So when people ask me why I keep pushing—why I'm still creating content when I can barely type, when I've had five surgeries in twelve months— It's because I finally understand what I'm grateful for. Not my health. That's been failing spectacularly. Not comfort. That ended in March. I'm grateful I get to see what happens when you invest in people. I'm grateful Liam asks me about batteries over lunch.
I'm grateful you're watching this and thinking about who you're investing in. I'm grateful for what the breaking revealed.

What I'm Actually Grateful For

That morning when my chest split open? I was terrified. Thinking about everything that could go wrong. Now? I'm grateful for what it forced me to see. Who shows up. What survives. Why it matters to keep going even when it would be easier to stop. This week on Studio Notes, I'm telling the full story. The medical mystery that took five surgeries to solve. The conversation with Liam that changed everything. What my wife actually thinks about me writing a second book while recovering from all this. And what gratitude looks like when the comfortable list stops working. Read the full story on Studio Notes: https://philmckinney.substack.com/p/what-im-actually-thankful-for-after

Your Turn

But here's what I really want to know: When was the last time you were grateful for something that hurt you? Not the easy stuff. Not the list you perform around the table. The thing that broke you open. The thing that forced you to see differently. Drop it in the comments. Tell me what you found inside the breaking. Because maybe that's what Thanksgiving is actually for. Learning what gratitude looks like when everything breaks. And discovering that what survives isn't what we thought.

Happy Thanksgiving.
In August 2025, Polish researchers tested something nobody had thought to check: what happens to doctors' skills after they rely on AI assistance? The AI worked perfectly—catching problems during colonoscopies, flagging abnormalities faster than human eyes could. But when researchers pulled the AI away, the doctors' detection rates had dropped. They'd become less skilled at spotting problems on their own. We're all making decisions like this right now. A solution fixes the immediate problem—but creates a second-order consequence that's harder to see and often more damaging than what we started with. Research from Gartner shows that poor operational decisions cost companies upward of 3% of their annual profits. A company with $5 billion in revenue loses $150 million every year because managers solved first-order problems and created second-order disasters. You see this pattern everywhere. A retail chain closes underperforming stores to cut costs—and ends up losing more money when loyal customers abandon the brand entirely. A daycare introduces a late pickup fee to discourage tardiness—and late pickups skyrocket because parents now feel they've paid for the privilege. The skill that separates wise decision-makers from everyone else isn't speed. It's the ability to ask one simple question repeatedly: "And then what?"

What Second-Order Thinking Actually Means

First-order thinking asks: "What happens if I do this?" Second-order thinking asks: "And then what? And then what after that?" Most people stop at the first question. They see the immediate consequence and act. But every action creates a cascade of effects, and the second and third-order consequences are often the opposite of what we intended. Think about social media platforms. First-order? They connect people across distances. Second-order? They fragment attention spans and fuel polarization. The difference isn't about being cautious—it's about being thorough.
In a world where business decisions come faster and with higher stakes than ever before, the ability to trace consequences forward through multiple levels isn't optional anymore. Let me show you how.

How To Think in Consequences

Before we get into the specific strategies, here's what you need to understand: Second-order thinking isn't about predicting the future with certainty. It's about systematically considering possibilities that most people ignore. The reason most people fail at this isn't lack of intelligence—it's that our brains evolved to focus on immediate threats and rewards. First-order thinking kept our ancestors alive. But in complex modern systems—businesses, markets, organizations—first-order thinking gets you killed. The good news? This is a learnable skill. You don't need special training or advanced degrees. You need two things: a framework for mapping consequences, and a method for forcing yourself to actually use it. Two strategies will stop your solutions from creating bigger problems:

Map How People Will Actually Respond - trace your decision through stakeholders, understand what you're actually incentivizing, and predict how the system adapts.

Run the "And Then What?" Drill - force yourself to see three moves ahead before you act, using a simple three-round questioning method.

Let's break down each one.

Strategy 1: Map How People Will Actually Respond

Here's the fundamental insight that separates good decision-makers from everyone else: People respond to what you reward, not what you intend. When you make a decision, you're not just choosing an action—you're sending signals into a complex system of human beings who will interpret those signals, adapt their behavior, and create consequences you never imagined. Your job is to trace those adaptations before they happen. This strategy has three components that work together:

First: Identify ALL Your Stakeholders

When considering a decision, list everyone it will affect directly and indirectly.
Don't just think about your immediate team—think about:

Your customers (current and potential)
Your competitors (how will they respond?)
Your suppliers and partners
Your employees at different levels
Your investors or board
Regulatory bodies or industry watchdogs
Adjacent markets or ecosystems

Most executives stop after listing two or three obvious groups. The consequences you miss come from the stakeholders you forgot to consider. Here's what research shows: Wharton professor Philip Tetlock spent two decades studying how well experts predict future events. His landmark finding? Even highly credentialed experts' predictions were only slightly better than random chance—barely better than a dart-throwing chimp. But the real insight came when Tetlock discovered that certain people can forecast with exceptional accuracy. These "superforecasters" share one key trait: they relentlessly ask "And then what?" before making predictions. They don't just see the immediate effect. They trace the decision through the entire system. The people making million-dollar decisions are operating blind beyond the first consequence. Our job is to see what they're missing.

Second: Understand What You're Actually Rewarding

This is where most decisions go wrong. You think you're incentivizing one behavior, but you're actually rewarding something completely different. Here's the test: For each stakeholder, ask yourself: "What does this decision make easier, more profitable, or less risky for them?" Quick example: Remember the daycare that introduced a late pickup fee to discourage tardiness? They thought they were incentivizing on-time pickup. But here's what they actually rewarded: guilt-free lateness. Parents who felt terrible about being late now had a clear price for that guilt. The fee didn't discourage the behavior—it legitimized it. Late pickups skyrocketed. The daycare asked the wrong question. They asked: "What punishment will discourage lateness?"
Instead, they should have asked: "What does a $5 fee actually incentivize?" Another example: You add a performance metric to improve efficiency. First-order thinking says: "People will work more efficiently." But what are you actually rewarding? Optimizing for the metric—often at the expense of things you didn't measure but actually matter more. Sales quotas reward closing deals, not necessarily solving customer problems. Employee of the month awards reward visibility, not necessarily the best work. Quarterly earnings targets reward short-term thinking, not building long-term value. When you rush a hiring decision to fill a role quickly, you're rewarding speed over quality. The second-order effect? Your team learns that urgency matters more than fit, and future hiring suffers. The pattern: People don't follow the spirit of your policy—they follow the incentives. And they're incredibly creative at finding ways to game systems when the incentives misalign with the goals.

Third: Trace Each Response Forward

Now that you know who's affected and what you're incentivizing, trace how they'll respond—and then how the system responds to THEIR response. This is where the stakeholder analysis and incentives analysis combine into real predictive power. Example: When ride-sharing apps added surge pricing to solve driver shortages, here's how it played out: First-order: More drivers show up when prices surge. Problem solved, right?
Second-order stakeholder responses:

Customers started waiting out surge periods, meaning fewer overall rides
Drivers started gaming the system—turning off their apps to create artificial shortages that triggered surges
Competitors without surge pricing captured price-sensitive customers
Media coverage made "surge pricing" synonymous with price gouging, damaging brand trust

Third-order systemic effects:

The solution trained customers to use the service less frequently
It taught drivers to manipulate the platform rather than respond to genuine demand
It created a PR vulnerability that regulators could exploit
The very mechanism designed to solve shortages created new shortages through gaming behavior

The original problem (driver shortages during peak times) was real. The first-order solution (higher prices attract more drivers) was economically sound. But nobody mapped how customers and drivers would actually respond to the incentives created by surge pricing. The key insight: Complex systems don't just accept your decisions—they adapt to them. And those adaptations often work directly against your original intent.

Try it now: Pause this video for 30 seconds. Think of one decision your company made in the last year. Who were the stakeholders? How did they actually respond? Was it what you expected? [5-second pause built into video] If their response surprised you—you just found a second-order effect you missed.

Strategy 2: Run the "And Then What?" Drill

Now you have a framework for thinking about consequences. But frameworks don't change behavior—practice does. This is your daily practice method. Before any significant decision, literally ask yourself "And then what?" at least three times. Out loud. Make it awkward. Make it unavoidable. Here's why this works: Your brain will naturally stop at the first answer. The question forces you to keep going. It's a cognitive override—a way to fight your brain's preference for first-order thinking.
The Three Rounds:

Round 1: Immediate Consequence

State the obvious first-order effect. This should come easily. "We'll discount our product by 20%." And then what? "We'll attract more customers and gain market share."

Round 2: Response and Adaptation

Now apply Strategy 1. How will stakeholders respond? What are we actually incentivizing? And then what? "Competitors will match our discount to protect their market share. And customers will start expecting permanently lower prices—we've trained them that our regular price was inflated. Early adopters who paid full price feel cheated."

Round 3: Systemic Effects

Trace the second-order respons
You're frozen. The deadline's approaching. You don't have all the data. Everyone wants certainty. You can't give it. Sound familiar? Maybe it's a hiring decision with three qualified candidates and red flags on each one. Or a product launch where the market research is mixed. Or a career pivot where you can't predict which path leads where. You want more information. More time. More certainty. But you're not going to get it. Meanwhile, a small group of professionals—poker players, venture capitalists, military strategists—consistently make better decisions than the rest of us in exactly these situations. Not because they have more information, but because they've mastered something fundamentally different: they think in probabilities, not certainties. I learned this the hard way. I once created a biometric security algorithm that the NSA reverse-engineered—I had mastered probabilistic thinking perfectly in the technology, then made every wrong bet on the business around it. By the end of this episode, you'll possess a powerful mental toolkit that transforms how you approach uncertainty. You'll learn to estimate likelihoods without perfect data, update your beliefs as new information emerges, make confident decisions when multiple uncertain factors collide, and act decisively even when you can't guarantee the outcome. This is the difference between paralysis and power, between gambling recklessly and betting wisely.

What Is Probabilistic Thinking?

But what does probabilistic thinking actually entail? At its core, it's the practice of reasoning in terms of likelihoods rather than absolutes—thinking in percentages instead of yes-or-no answers. Instead of asking "Will this work?" you ask "What are the odds this will work, and what are the consequences if it doesn't?" This approach acknowledges that the future is uncertain and that every decision carries risk.
By quantifying that uncertainty and weighing it against potential outcomes, you make smarter choices even when you can't eliminate the unknown.

The Cost of Demanding Certainty

Today's world punishes those who demand certainty before acting. Research from Oracle's 2023 Decision Dilemma study—which surveyed over 14,000 employees and business leaders across 17 countries—found that 86% feel overwhelmed by the amount of data available to them. Rather than clarity, all that information creates decision paralysis. And the paralysis has real consequences. When we can't be certain, we freeze. We endlessly research options, seeking that final piece of data that will guarantee success. We postpone critical decisions, waiting for perfect information that never arrives. Meanwhile, opportunities pass us by, problems grow worse, and competitors who are comfortable with uncertainty move forward. This demand for certainty doesn't just slow us down—it exhausts us. Decision fatigue sets in as we agonize over choices, draining our mental resources until we either make impulsive decisions or avoid deciding altogether. Neither outcome serves us well.

What Certainty-Seeking Actually Costs You

Here's what it looks like in real life: You're the VP of Marketing. Your CMO wants a decision on next quarter's campaign budget by Friday. You have three agencies to choose from, each with strengths and weaknesses. So you ask for more data. Customer focus groups. Competitive analysis. Agency references. By Wednesday you're drowning in spreadsheets and conflicting opinions. Friday arrives. You still can't be certain which choice is right, so you ask for an extension. Two weeks later, you finally pick one—not because you're confident, but because you're exhausted and the CMO is furious about the delay. The campaign launches late. You've burned political capital. And you still have no idea if you made the right choice.
Meanwhile, your competitor's marketing VP looked at the same decision, spent two hours assessing the probabilities, and launched on time. If it works, great. If it doesn't, they'll pivot. They didn't need certainty. They needed enough information to make a good bet. That's the tax you pay for demanding certainty: missed timing, exhausted teams, and decisions made from fatigue rather than judgment. Meanwhile, a small group of professionals thrives in these exact conditions. Professional poker players like Annie Duke understand that good decisions sometimes lead to bad outcomes and bad decisions sometimes get lucky—so they judge their choices by process, not results. Venture capitalists often see that most of their investments will fail, but they bet anyway because one success out of twenty can return the entire fund. Military strategists make life-and-death decisions with incomplete intelligence, not because they're reckless, but because waiting for perfect information means defeat. The difference isn't access to better information. It's the willingness to act on probabilities rather than certainties.

How To Make Better Decisions When Nothing Is Certain

So how do you actually develop this skill? It's more accessible than you might think. Here are clear strategies to transform how you process uncertainty and make decisions.

Think in Ranges, Not Points

The first shift in probabilistic thinking is abandoning single-number estimates for ranges of possibility. When most people predict an outcome, they pick one number: "Sales will be $500,000 next quarter" or "This project will take three months." But the world doesn't work that way. Every estimate carries uncertainty, and pretending otherwise sets you up for failure. Professional forecasters think differently. They don't ask "What will happen?" They ask "What's the range of plausible outcomes, and how likely is each?" This approach forces you to acknowledge what you don't know while still making useful predictions.
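To make "ranges with likelihoods" concrete, here is a minimal sketch in Python of what such a forecast looks like written down. The deal counts and probabilities are hypothetical, chosen only for illustration.

```python
# A forecast expressed as ranges with rough probabilities
# instead of a single number. All figures hypothetical.
forecast = [
    ((40, 50), 0.60),   # most likely band of deals closed
    ((50, 60), 0.30),   # upside band
    ((20, 40), 0.10),   # downside band
]

# The stated probabilities should cover the whole space of outcomes.
total = sum(p for _, p in forecast)
assert abs(total - 1.0) < 1e-9

# A probability-weighted midpoint gives one planning number
# without pretending the uncertainty away.
expected = sum((lo + hi) / 2 * p for (lo, hi), p in forecast)
print(round(expected, 1))
```

The point isn't the arithmetic—it's that writing the forecast this way forces you to state, and later check, how likely you thought each band was.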
Watch a professional poker player deciding whether to call a bet. They're not thinking "Do I have the best hand?" They're thinking "Given what I've seen, maybe 35% chance I have the best hand, 20% chance my opponent is bluffing, 45% chance they've got me beat." They act on probabilities, not certainties. Steps to implement range thinking:

Replace point estimates with probability ranges. When making any prediction, state a range instead of a single number. Instead of "We'll close 50 deals," say "We'll likely close 40-60 deals, with a small chance of 30-70."

Assign rough percentages to your ranges. You don't need mathematical precision—just honest self-assessment. Estimate: "60% chance of 40-50 deals, 30% chance of 50-60, 10% chance outside that range." This forces you to think about likelihood, not just possibility.

Track your estimates against actual outcomes. Keep a simple log of your predictions and what actually happened. Over time, you'll discover if you're consistently over-optimistic, over-cautious, or actually well-calibrated. This feedback loop is how you improve.

Update Your Beliefs with New Evidence

One of the most powerful aspects of probabilistic thinking is treating your beliefs as hypotheses, not conclusions. When new information emerges, skilled thinkers update their probability estimates rather than clinging to their original position. This practice—called Bayesian updating after the mathematician Thomas Bayes—is how professionals stay accurate in changing environments. Consider a doctor diagnosing a patient with intermittent chest pain. Initially, based on the patient's age and health history, she estimates a 15% probability of heart disease. Then the EKG comes back with minor abnormalities—not definitive, but concerning. She updates her estimate to 35%. Blood work shows elevated cardiac markers. Now she's at 65%. Each piece of evidence shifts the probability, but none gives absolute certainty.
She doesn't wait for 100% certainty to act—she orders more tests and starts precautionary treatment based on her updated 65% estimate. That's Bayesian thinking in action.

Financial firms continuously adjust their models as new data arrives. Weather forecasters update storm predictions hourly. In my own work building biometric security systems, we updated our false acceptance and rejection rates constantly—but I failed to apply that same updating framework to the business model itself.

The enemy of updating is confirmation bias—our tendency to accept information that supports our existing beliefs and dismiss information that contradicts them. When you're emotionally invested in being right, you'll unconsciously filter evidence to protect your original view.

Steps to update your thinking:

Start with a baseline probability before you have strong evidence. If you're launching a new product, estimate: "Based on what I know about similar products, there's maybe a 40% chance this succeeds." That's your prior—your starting point before specific evidence comes in.

When new information arrives, ask: "How much should this change my estimate?" Not all evidence is equal. Strong evidence—like actual customer purchases—should move your probability significantly. Weak evidence—like one person's opinion—should barely budge it.

Separate the quality of a decision from the quality of the outcome. This is crucial. A good decision based on sound probabilities can still result in a bad outcome due to chance. Conversely, a terrible decision can get lucky. Judge yourself on whether you correctly assessed the probabilities and acted accordingly, not on whether you "got it right" this time.

Actively seek disconfirming evidence. Force yourself to look for information that contradicts your current view. If you think your strategy will work, deliberately search for reasons it might fail. This counteracts confirmation bias and gives you a more accurate probability estimate.
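Bayes' rule itself fits in a few lines. The sketch below uses hypothetical likelihoods loosely inspired by the chest-pain story; the story's percentages are illustrative, so these numbers land in the same ballpark rather than reproducing them exactly. Notice how the stronger second piece of evidence moves the estimate more than the weaker first one.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability after one piece of evidence (Bayes' rule)."""
    numer = prior * p_evidence_if_true
    denom = numer + (1 - prior) * p_evidence_if_false
    return numer / denom

p = 0.15  # prior: heart disease, from age and history alone

# Minor EKG abnormality: assumed somewhat more common with disease than without.
p = bayes_update(p, p_evidence_if_true=0.70, p_evidence_if_false=0.25)

# Elevated cardiac markers: assumed to be stronger evidence.
p = bayes_update(p, p_evidence_if_true=0.80, p_evidence_if_false=0.30)

print(round(p, 2))  # 0.57 -- still not certainty, but enough to act on
```

The same two-argument question drives every update: how likely is this evidence if my belief is true, versus if it's false? If the two likelihoods are nearly equal, the evidence is weak and should barely budge your estimate.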
Make Decisions by Expected Value

Probabilistic thinking isn't just about estimating odds—it's about acting on them. The concept of expected value gives you a framework for making decisions when outcomes are uncertain. Expected value multiplies each possible outcome by its probability, then sums the results.
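The arithmetic is simple enough to sketch directly. The numbers below are hypothetical, chosen only to show the mechanics: a launch that loses money one time in five can still be a good bet if the upside outweighs the downside.

```python
# Expected value: multiply each possible outcome by its probability,
# sum the results, and compare options. Hypothetical launch numbers.
launch = [
    (0.30, 500_000),   # 30% chance of a big win
    (0.50, 100_000),   # 50% chance of a modest win
    (0.20, -200_000),  # 20% chance of a loss
]

ev_launch = sum(p * outcome for p, outcome in launch)
ev_wait = 0  # the "do nothing" option has a certain outcome of zero

print(round(ev_launch))     # 160000
print(ev_launch > ev_wait)  # True -- a positive expected-value bet
```

The comparison, not the single number, is the point: a good decision is the option with the best expected value you can find, even though any single outcome remains uncertain.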
Try to go through a day without using an analogy. I guarantee you'll fail within an hour. Your morning coffee tastes like yesterday's batch. Traffic is moving like molasses. Your boss sounds like a broken record. Every comparison you make—every single one—is your brain's way of understanding the world. You can't turn it off.

When someone told you ChatGPT is "like having a smart assistant," your brain immediately knew what to expect—and what to worry about. When Netflix called itself "the HBO of streaming," investors understood the strategy instantly. These comparisons aren't just convenient—they're how billion-dollar companies are built and how your brain actually learns. The person who controls the analogy controls your thinking.

In a world where you're bombarded with new concepts every single day—AI tools, cryptocurrency, remote work culture, creator economies—your brain needs a way to make sense of it all. By the end of this episode, you'll possess a powerful toolkit for understanding the unfamiliar by connecting it to what you already know—and explaining complex ideas so clearly that people wonder why they never saw it before.

Thinking in analogies—or what's called analogical thinking—is how the greatest innovators, communicators, and problem-solvers operate. It's the skill that turns confusion into clarity and complexity into something you can actually work with.

What is Analogical Thinking?

But what does analogical thinking entail? At its core, it's the practice of understanding something new by comparing it to something you already understand. Your brain is constantly asking: "What is this like?" When you learned what a virus does to your computer, you understood it by comparing it to how biological viruses infect living organisms. When someone explains blockchain as "a shared spreadsheet that no one can erase," they're using analogy to make an abstract concept concrete.
Researchers have found something remarkable: your brain doesn't actually store information as facts—it stores it as patterns and relationships. When you learn something new, your brain is literally asking "What does this remind me of?" and building connections to existing knowledge. Analogies aren't just helpful for communication—they're the fundamental mechanism of human understanding.

You can't NOT think in analogies. The question is whether you're doing it consciously and well, or unconsciously and poorly. The quality of your analogies determines how quickly you learn, how deeply you understand, and how effectively you can explain ideas to others. Remember this: whoever controls the analogy controls the conversation. Master this skill, and you'll never be at the mercy of someone else's framing again.

The Crisis of Bad Analogies

Thinking in analogies is a double-edged sword. I learned this the hard way.

A few years ago, I watched a brilliant engineer struggle to explain a revolutionary idea to executives. He had the data, the logic, the technical proof—but he couldn't get buy-in. Then someone in the room said, "So it's basically like Uber, but for industrial equipment?" Instantly, heads nodded. Funding approved. Project greenlit. One analogy did what an hour of explanation couldn't.

Six months later, that same analogy killed the project. Because "Uber for equipment" came with assumptions—about pricing, about scale, about network effects—that didn't actually apply. The team kept forcing their solution to fit the analogy instead of recognizing when the comparison broke down. I watched millions of dollars and two years of work disappear because nobody questioned whether the analogy was still serving them.

The same mental shortcut that helps you understand new things can also trap you in outdated patterns. Consider Quibi's spectacular failure.
In 2020, Jeffrey Katzenberg and Meg Whitman launched a streaming service with $1.75 billion in funding—more than Netflix had when it started. Their analogy? "It's like TV shows, but designed for your phone." They created high-quality 10-minute episodes optimized for mobile viewing. Six months later, Quibi shut down.

What went wrong? The analogy was flawed. They assumed mobile viewing was like TV viewing, just shorter. But people don't watch phones the way they watch TV—they watch phones while doing other things, in stolen moments, with interruptions. YouTube and TikTok understood this. They built for distraction and fragmentation. Quibi built for focused attention that didn't exist. That misunderstanding burned through nearly $2 billion in 18 months.

We see this constantly: complex issues get reduced to simplistic analogies that feel intuitive but lead to flawed conclusions. Someone compares running a country to running a household budget—"If families have to balance their budgets, why shouldn't governments?" The analogy sounds intuitive, but it ignores that countries can print currency, carry strategic long-term debt, and operate on completely different time horizons than households.

The cost of bad analogical thinking is enormous. You waste time applying solutions that worked in one context to problems where they don't fit. You miss opportunities because you're trying to squeeze new situations into old patterns. And worst of all, you become easy to manipulate—because anyone who controls your analogies controls how you think.

How To Think Using Analogies

So how do we harness the power of analogy while avoiding its traps? Let me show you five essential strategies that will transform how you use comparison to understand your world.

Generate Analogies Systematically

The first skill is learning to create useful analogies on demand.
Most people wait for analogies to pop into their heads randomly, but you can develop a systematic process for generating them whenever you need one. Map the structure of what you're trying to understand, then search for similar structures in domains you know well. Netflix's recommendation algorithm didn't come from studying other algorithms—it came from asking "How do humans recommend things?" and mapping that social process onto a technical system.

Steps to generate powerful analogies:

Identify the core function or relationship: Strip away surface details and ask what the thing actually does. A heart pumps fluid through a system. Now you can compare it to anything else that pumps fluid—engines, wells, plumbing systems.

Look across multiple domains: Don't limit yourself to obvious comparisons. The best analogies often come from unexpected places. The inventor of Velcro, George de Mestral, understood how burrs stuck to fabric by comparing them to hooks and loops—leading to a billion-dollar fastening system.

Map specific correspondences: Once you find a potential analogy, be explicit about what maps to what. If you're comparing your startup to a marathon, what corresponds to training? What's the equivalent of hitting the wall? What represents the finish line?

Test the analogy's limits: Push the comparison and see where it breaks down. This isn't a failure—it's information. Every analogy has boundaries, and knowing them makes the analogy more useful.

Consider multiple analogies: Don't settle for the first comparison that works. Electricity is like water flowing through pipes AND like cars on a highway. Each analogy reveals different insights.

Recognize When Analogies Break Down

Most people fall in love with an analogy and push it beyond its useful range. A powerful analogy becomes a dangerous one the moment you forget it's just a comparison, not reality itself. The human brain loves patterns, and once we find one that works, we want to apply it everywhere.
This is how we end up with terrible advice like "Just be yourself in job interviews" because "authentic relationships require honesty"—taking an analogy from personal relationships and stretching it to professional contexts where it doesn't fit.

How to recognize the breakdown:

Watch for forced mappings: If you find yourself struggling to make pieces fit, the analogy might be wrong. When the comparison starts requiring elaborate explanations or special exceptions, it's probably breaking down.

Check for contradictory predictions: A good analogy should help you predict behavior. If your analogy suggests one outcome but reality keeps producing another, the comparison isn't working.

Look for what's missing: What does the analogy leave out? Understanding the gaps is as important as understanding the matches. Social media isn't "the modern town square"—because town squares had time constraints, physical presence, and social accountability that platforms lack.

Test edge cases: Push your analogy to extremes. If "your body is a temple," does that mean you should let tourists visit? When an analogy gets absurd at the edges, you've found its limits.

A good analogy is a map, not the territory. The moment you forget that, you're lost.

Use Analogies to Explain Complex Ideas

Analogies are your secret weapon for making complicated concepts accessible to anyone. The person who can explain quantum physics using everyday comparisons has a superpower in our information-saturated world. Match the analogy to your audience's knowledge and choose comparisons that illuminate rather than obscure.

The explanatory analogy playbook:

Know your audience's knowledge base: You can compare machine learning to "teaching a child through examples" for general audiences, but that same analogy won't work for computer scientists who need technical precision.

Start with the familiar: Always move from what people know to what they don't.
"Imagine your favorite playlist, but instead of songs it recommends..." grounds abstract concepts in concrete experience.

Be explicit about the comparison: Don't assume people will automatically see the connection. Say "Think of it like this..." and make the mapping clear.

Use multiple analogies for complex concepts: One analogy rarely captures everything. Combine several different comparisons to give people multiple ways to grasp the same idea.
$37 billion. That's how much gets wasted annually on marketing budgets because of poor attribution and misunderstanding of what actually drives results. Companies credit campaigns that didn't work. They kill initiatives that were actually succeeding. They double down on coincidences while ignoring what's actually driving outcomes.

Three executives lost their jobs this month for making the same mistake. They presented data showing success after their initiatives were launched. Boards approved promotions. Then someone asked the one question nobody thought to ask: "Could something else explain this?" The sales spike coincided with a competitor going bankrupt. The satisfaction increase happened when a toxic manager quit. The correlation was real. The causation was fiction. This mistake derailed their careers.

But here's the good news: once you see how this works, you'll never unsee it. And you'll become the person in the room who spots these errors before they cost millions. But first, you need to understand what makes this mistake so common—and why even smart people fall for it every single day.

What is Causal Thinking?

At its core, causal thinking is the practice of identifying genuine cause-and-effect relationships rather than settling for surface-level associations. It's asking not just "do these things happen together?" but "does one actually cause the other?"

This skill means you look beyond patterns and correlations to understand what's actually producing the outcomes you're seeing. When you think causally, you can spot the difference between coincidence, correlation, and true causation—a distinction that separates effective decision-makers from those who waste millions on solutions that were never going to work.

Loss of Causal Thinking Skills

Across every domain of professional life, this confusion costs fortunes and derails careers.
A SaaS company sees customer churn decrease after implementing new onboarding emails—and immediately scales it company-wide. What they missed: they launched the emails the same week their biggest competitor raised prices by 40%. The competitor's pricing reduced churn. But they'll never know, because they never asked the question. Six months later, when they face real churn issues, they keep doubling down on emails that never actually worked.

This happens outside of work too. You start taking a new vitamin, and two weeks later your energy improves. But you started taking it in early March—right when days got longer and you began going outside more. Was it the vitamin or the sunlight and exercise? Most people credit the vitamin without asking the question.

But here's the good news: once you understand how to think causally, these mistakes become obvious. And one of these five strategies can be used in your very next meeting—literally 30 seconds from now. Let me show you how.

How To Master Causal Thinking

Mastering causal thinking isn't about becoming a statistician or learning complex formulas. It's about developing five practical strategies that work together to reveal what's really driving results. These build on each other—starting with basic tests you can apply right now, and progressing to a complete system you can use for any decision.

Strategy 1: The Three Tests of True Causation

Think of these as your checklist for evaluating any causal claim.

Test #1 - Timing: Confirm the supposed cause actually happened before the effect. If traffic spiked Monday but you launched the campaign Tuesday, that campaign didn't cause it. The cause must always come before the effect.

Test #2 - Consistent Movement: When the supposed cause is present, does the effect reliably occur? When the cause is absent, does the effect disappear? Document instances where they occur together. Then examine situations where the cause is absent.
If the effect happens just as often without the cause, you're looking at correlation, not causation.

Test #3 - Rule Out Alternatives: Think carefully about what else could explain what you're seeing. Actively try to disprove your idea rather than only looking for supporting evidence. If you can't eliminate other explanations, you don't have causation.

Strategy 2: Ask "Could Something Else Explain This?"

Here's a technique you can implement in the next 30 seconds that will immediately improve your causal thinking: whenever someone presents a causal claim, ask out loud: "Could something else explain this?"

This single question is remarkably powerful. It forces the speaker to consider hidden factors they ignored. It reveals whether they've actually done causal analysis or just noticed a correlation and declared victory. It shifts the conversation from assumption to examination.

Try it in your next meeting when someone says "We did X and Y improved." Watch how often they haven't considered alternatives. Watch how often their confident causal claim becomes less certain when forced to address this simple question.

Most people present correlations as causations without even realizing it. Your question makes that leap visible. Suddenly they have to justify it with evidence or back down. It's not confrontational—it's curious. And curiosity is the foundation of good causal thinking.

Use it today. Use it every time someone attributes an outcome to a cause without ruling out alternatives. That question leads us naturally to our next strategy—learning to identify what those "something elses" actually are.

Strategy 3: Hunt for Hidden Causes

A confounding variable is a third factor that influences both your suspected cause and your observed effect. It creates the illusion of a direct relationship where none exists.

Here's a simple example: ice cream sales and drowning deaths both increase during summer months. Does ice cream cause drowning? Obviously not.
The confounding variable is warm weather, which causes both more ice cream purchases and more swimming.

Now here's the business version: A retail company sees both customer satisfaction and sales increase after renovating their stores. Does the renovation cause higher satisfaction? Maybe—but both also increased because they renovated during the holiday shopping season when people are generally happier and spending more anyway. Same logical structure. Same expensive mistake if they conclude renovations always boost satisfaction.

Map the Relationship: When you observe a correlation, write down your suspected cause and your observed effect. This visualization helps you spot gaps in your logic immediately.

Ask "What Else Changed?": Think carefully about what other factors were present or changed during the same period. Make a written list so your brain doesn't skip over these hidden causes.

Search for Common Causes: Identify factors that could influence both variables at the same time. For instance, if both employee satisfaction and productivity increased, could several toxic managers have left the company?

Consider Time-Based and Environmental Factors: Examine seasons, business cycles, economic trends, reorganizations, leadership changes, and industry shifts that could affect multiple outcomes at once.

Test by Controlling Variables: If possible, create scenarios where you can control or account for potential hidden causes. Try analyzing subgroups where the hidden cause is absent, or run controlled A/B tests.

Once you can spot these hidden causes, you're ready to understand why your brain makes these mistakes in the first place. And this next one? It's probably happening in your head right now without you realizing it.

Strategy 4: Outsmart Your Brain's Shortcuts

Your brain is wired to see causal connections everywhere, even where none exist. This isn't a design flaw—it's a survival mechanism that kept your ancestors alive.
But in the modern business world, this pattern-seeking instinct can mislead you. Your brain wants simple causal stories. Reality is usually more complex. Once you know what to watch for, you can catch yourself before making these errors.

Catch Your Instant Explanations: When you observe a pattern, pause before declaring causation. Ask yourself: "Am I seeing causation because it's really there, or because my brain desperately needs an explanation?"

Fight Confirmation Bias: Actively search for information that challenges your causal idea, not just data that supports it. If you can't find contradicting evidence, you haven't looked hard enough.

Here's how this plays out: A manager believes remote work hurts productivity. She notices every time someone's late to a Zoom call. But she doesn't notice the three on-time people. She remembers the one missed deadline but forgets the five delivered early. Her brain is filtering reality to confirm what she already believes.

Question Your Compelling Stories: Be wary of explanations that sound too neat. If your causal explanation reads like a perfect success story, double-check it.

Don't See Patterns in Randomness: Three successful quarters in a row doesn't mean you've discovered a winning formula. It might just be a lucky streak. Always ask "Could this pattern occur by chance?"

Watch the 'After Therefore Because' Trap: Every time you catch yourself thinking "we did X and then Y happened," force yourself to consider alternative explanations. Ask yourself "What would I need to see to know this isn't causal?"

Now that you understand how your brain works, let's put this all together into a practical system you can use every time you need to make a high-stakes decision.

Strategy 5: The Five-Question Causation Check

Mastering causal thinking requires more than understanding principles—it demands a clear approach you can apply when the stakes are high and the pressure is on.
The Five-Question Causation Check:

Define the Relationship Clearly: Write out the specific causal claim you're evaluating with precision. "Social media advertising increases qualified leads."
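The mechanical parts of Strategy 1 (timing and consistent movement) can even be sketched as a small script over logged observations. This is a hypothetical illustration, not a statistics tool, and Test #3 (ruling out alternatives) still requires human judgment.

```python
from datetime import date

# Hypothetical campaign/traffic-spike log. Each record notes when the
# supposed cause and the effect occurred (None = didn't happen).
observations = [
    {"cause_date": date(2024, 3, 1), "effect_date": date(2024, 3, 3)},
    {"cause_date": date(2024, 4, 1), "effect_date": date(2024, 4, 2)},
    {"cause_date": None,             "effect_date": date(2024, 5, 7)},
]

# Test #1 -- Timing: wherever both appear, the cause must come first.
timing_ok = all(
    o["cause_date"] <= o["effect_date"]
    for o in observations
    if o["cause_date"] and o["effect_date"]
)

# Test #2 -- Consistent movement: how often does the effect appear
# with the cause versus without it?
with_cause = [o for o in observations if o["cause_date"]]
without_cause = [o for o in observations if not o["cause_date"]]
rate_with = sum(1 for o in with_cause if o["effect_date"]) / len(with_cause)
rate_without = sum(1 for o in without_cause if o["effect_date"]) / len(without_cause)

print(timing_ok, rate_with, rate_without)  # True 1.0 1.0
# rate_without is as high as rate_with: the effect shows up even
# without the cause, so this is correlation, not causation.
```

In this made-up data the timing test passes, yet the consistency test exposes the problem: the effect occurred just as reliably when the cause was absent.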
You see a headline: "Study Shows Coffee Drinkers Live Longer." You share it in 3 seconds flat. But here's what just happened—you confused correlation with causation, inductive observation with deductive proof, and you just became a vector for misinformation. Right now, millions of people are doing the exact same thing, spreading beliefs they think are facts, making decisions based on patterns that don't exist, all while feeling absolutely certain they're thinking clearly.

We live in a world drowning in information—but starving for truth. Every day, you're presented with hundreds of claims, arguments, and patterns. Some are solid. Most are not. And the difference between knowing which is which and just guessing? That's the difference between making good decisions and stumbling through life confused about why things keep going wrong.

Most of us have never been taught the difference between deductive and inductive reasoning. We stumble through life applying deductive certainty to inductive guesses, treating observations as proven facts, and wondering why our conclusions keep failing us. But once we understand which type of reasoning a situation demands, we gain something powerful—the ability to calibrate our confidence appropriately, recognize manipulation, and build every other thinking skill on a foundation that actually works.

By the end of this episode, you'll possess a practical toolkit for improving your logical reasoning—four core strategies, one quick-win technique, and a practice exercise you can start today.

This is Episode 2 of Thinking 101, a new 8-part series on essential thinking skills most of us never learned in school. Links to all episodes are in the description below.

What is Logical Reasoning?

But what does logical reasoning entail? At its core, there are two fundamental ways humans draw conclusions, and you're using both right now without consciously choosing between them.
Deductive reasoning moves from general principles to specific conclusions with absolute certainty. If the premises are true, the conclusion must be true. "All mammals have hearts. Dogs are mammals. Therefore, dogs have hearts." There's no wiggle room—if those first two statements are true, the conclusion is guaranteed. This is the realm of mathematics, formal logic, and established law.

Inductive reasoning works in reverse, building from specific observations toward general principles with varying degrees of probability. You observe patterns and infer likely explanations. "I've seen 1,000 swans and they were all white, therefore all swans are probably white." This feels certain, but it's actually just highly probable based on limited evidence. History proved this reasoning wrong when black swans were discovered in Australia.

Both are tools. Neither is "better." The question is which tool fits the job—and whether you're using it correctly.

Loss of Logical Reasoning Skills

Why does this matter? Because across every domain of life, this reasoning confusion is costing us.

In our social media consumption, we're drowning in inductive reasoning disguised as deductive proof. Researchers at MIT found that fake news spreads ten times faster than accurate reporting. Why? Because misleading content exploits this confusion. You see a viral post claiming "New study proves smartphones cause depression in teenagers," with graphs and official-looking citations. What you're actually seeing is inductive correlation presented as deductive causation—researchers observed that depressed teenagers often use smartphones more, but that doesn't prove smartphones caused the depression.

And this is where it gets truly terrifying—I need you to hear this carefully: In 2015, researchers tried to replicate 100 psychology studies published in top scientific journals. Only 36% held up. Read that again: Nearly two-thirds of peer-reviewed, published research couldn't be reproduced.
And those false studies? Still being cited. Still shaping policy. Still being shared as "science proves." You're building your worldview on a foundation where 64% of the bricks are made of air.

In our personal relationships, we constantly make inductive inferences about people's intentions and treat them as deductive facts. Your partner forgets to text back three times this week. You observe the pattern, inductively infer "they're losing interest," then act with deductive certainty—becoming distant, accusatory, or defensive. But what if those three instances had three different explanations? What if the pattern we detected isn't actually a pattern at all? We say "you always" or "you never" based on three data points. We end relationships over patterns that never existed.

So why didn't anyone teach us this? Traditional schooling focuses on teaching us what to think—facts, formulas, established knowledge. Deductive reasoning gets attention in math class as a mechanical process for solving equations. Inductive reasoning gets buried in science class, completely disconnected from actual decision-making. We graduated with facts crammed into our heads but no framework for evaluating new claims. But that changes now.

How To Improve Your Logical Reasoning

You now understand the two reasoning systems and why mixing them up is costing you. Let's fix that. These five strategies will give you immediate control over your logical reasoning—starting with the most foundational skill and building to a technique you can use in your next conversation.

Label Your Reasoning Type

The first step to improving your logical reasoning is becoming aware of which system you're using—and we rarely stop to check. We flip between deductive and inductive thinking dozens of times per day without realizing it. You see your colleague get promoted after working late, and you instantly conclude that working late leads to promotion—that's inductive.
But you're treating it like a deductive rule: "If I work late, I WILL get promoted." The moment you label which type you're using, you regain control.

Start with a daily reasoning journal. At the end of each day, write down three conclusions you made—about people, work, news, anything.

For each conclusion, ask: "What evidence led me here?" If it's general rules applied to specifics (all mammals have hearts, dogs are mammals), you used deduction. If it's patterns from observations (I've seen this three times), you used induction.

Label each one: "D" for deductive, "I" for inductive. This creates conscious awareness. You'll likely find 80-90% of your daily reasoning is inductive—but you've been treating it as deductive certainty.

When you catch yourself saying "always," "never," "definitely," stop and ask: "Is this deductive certainty or inductive probability?" That single pause changes everything.

Practice in real-time during conversations. When someone makes a claim, silently label it: deductive or inductive? Weak reasoning becomes obvious instantly.

After one week of journaling, review your entries. Patterns emerge in your reasoning errors—specific topics where you consistently overstate certainty, or people you make assumptions about. This awareness is the foundation for improvement.

Calibrate Your Confidence

Once you've labeled your reasoning type, the next step is matching your certainty level to the strength of your evidence. Here's where most people fail: they feel 100% certain about conclusions built on three observations. Your brain doesn't naturally calibrate—it defaults to "this feels true, therefore it IS true." But when you explicitly assign probability levels to inductive conclusions, you stop making the most common reasoning error: treating patterns as proven facts.

For every inductive conclusion, assign a percentage. "Given these five observations, I'm 60% confident this pattern is real."
Never use 100% for inductive reasoning—by definition, inductive conclusions are probabilistic, not certain.

Use this language shift in conversations: Replace "You always ignore my suggestions" with "I've brought up ideas in the last two meetings and haven't heard feedback, which makes me about 40% confident there's a communication pattern worth discussing." Replace "This definitely works" with "From what I've seen, I'm 70% confident this approach is effective."

Create a certainty threshold for action. Decide: "I need 70% confidence before I make a major decision based on inductive reasoning." This prevents impulsive moves based on weak patterns. Below 50%? Keep observing. Above 80%? Worth acting on.

Keep a confidence log for one week. Write your predictions with probability levels ("80% confident it will rain tomorrow," "60% confident this project will succeed"). Then check if you were right. This trains your calibration. You'll discover whether you're overstating or understating your certainty—and you can adjust.

When someone presents "definitive" claims based on inductive evidence, ask: "What certainty level would you assign that? 60%? 90%?" Watch them realize they've been overstating their case. This question immediately disrupts manipulation.

Hunt for Contradictions

Your brain naturally seeks confirming evidence and ignores contradictions—this strategy forces you to do the opposite. Confirmation bias is the enemy of good inductive reasoning. Once you believe something, your brain becomes a heat-seeking missile for evidence that supports it. The only antidote? Actively hunt for evidence that contradicts your conclusion. It's uncomfortable, yes, but it's the difference between being right and feeling right.

For every inductive conclusion you reach, set a 24-hour "contradiction hunt." Your job is to find at least two pieces of evidence that contradict your conclusion.
If you believe "remote work increases productivity," you must find credible sources claiming the opposite.

Use search terms designed to find opposites. Search for "remote work decreases productivity" rather than only "remote work benefits."
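The confidence log described in the calibration strategy can be kept as a simple script. The entries below are hypothetical; the point is the comparison at the end, between the confidence you stated and the rate at which your predictions actually came true.

```python
# One week of logged predictions: (stated_confidence, came_true).
# Hypothetical entries -- replace with your own log.
log = [
    (0.80, True), (0.80, False), (0.80, True), (0.80, True), (0.80, False),
    (0.60, True), (0.60, False), (0.60, False),
]

def calibration(entries):
    """Average stated confidence versus actual hit rate."""
    avg_conf = sum(conf for conf, _ in entries) / len(entries)
    hit_rate = sum(1 for _, hit in entries if hit) / len(entries)
    return avg_conf, hit_rate

avg_conf, hit_rate = calibration(log)
print(round(avg_conf, 2), round(hit_rate, 2))  # 0.72 0.5
# avg_conf well above hit_rate means you're overconfident:
# your "80% sure" predictions deserve a lower number next week.
```

Here the logger claimed roughly 72% confidence on average but was right only half the time, which is exactly the overconfidence pattern the journaling exercise is designed to surface.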
The Crisis We're Not Talking About

We're living through the greatest thinking crisis in human history—and most people don't even realize it's happening.

Right now, AI generates your answers before you've finished asking the question. Search engines remember everything so you don't have to. Algorithms curate your reality, telling you what to think before you've had the chance to think for yourself. We've built the most sophisticated cognitive tools humanity has ever known, and in doing so, we've systematically dismantled our ability to use our own minds.

A recent MIT study found that students who exclusively used ChatGPT to write essays showed weaker brain connectivity, lower memory retention, and a fading sense of ownership over their work. Even more alarming? When they stopped using AI tools later, the cognitive effects lingered. Their brains had gotten lazy, and the damage wasn't temporary.

This isn't about technology being bad. This is about survival. In a world where machines can think faster than we can, the ability to think clearly—to reason, analyze, question, and decide—has become the most valuable skill you can possess. Those who can think will thrive. Those who can't will be left behind.

The Scope of Cognitive Collapse

Let's be clear about what we're facing. Multiple studies across 2024 and 2025 have found a significant negative correlation between frequent AI tool usage and critical thinking abilities. We're not talking about a slight dip in performance. We're talking about measurable cognitive decline.

A Swiss study showed that more frequent AI use led to cognitive decline as users offloaded critical thinking to machines, with younger participants aged 17-25 showing higher dependence on AI tools and lower critical thinking scores compared to older age groups. Think about that. The generation that should be developing the sharpest minds is instead experiencing the steepest cognitive erosion.

The data gets worse.
Researchers from Microsoft and Carnegie Mellon University found that the more users trusted AI-generated outputs, the less cognitive effort they applied—confidence in AI correlates with diminished analytical engagement. We're outsourcing our thinking, and in the process, we're forgetting how to think at all.

But AI dependency is only part of the story. Our entire information ecosystem has become hostile to independent thought. Social media algorithms create filter bubbles that curate content aligned with your existing views. Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives—and when polarization is high, misinformation quickly proliferates.

You're not thinking anymore. You're being fed a carefully constructed reality designed to keep you engaged, not informed. The algorithm knows what you'll click on, what will make you angry, and what will keep you scrolling. And every time you accept that curated reality without question, your capacity for independent thought atrophies a little more.

What Happened to Education?

Here's where it gets personal. Schools used to teach you HOW to think. Now they teach you WHAT to think—and there's a massive difference.

Research from Harvard professional schools found that while more than half of faculty surveyed said they explicitly taught critical thinking in their courses, students reported that critical thinking was primarily being taught implicitly. Translation? Professors think they're teaching thinking skills, but students aren't actually learning them. Students were generally unable to recall or define key terms like metacognition and cognitive biases.

The problem runs deeper than higher education.
Teachers struggle with balancing the demands of covering vast amounts of content with the need for in-depth learning experiences, and there's a misconception that critical thinking is an innate ability that develops naturally over time. But research shows the opposite: critical thinking skills can be explicitly taught and developed through deliberate practice.

So why aren't we doing it? Because education systems reward compliance and memorization, not inquiry and analysis. Students learn to regurgitate information for tests, not to question assumptions or evaluate evidence. They're taught to accept authority, not challenge it. To consume information, not interrogate it. We've created generations of people who are educated but can't think. Who have degrees but lack discernment. Who can Google anything but can't reason through problems on their own.

The Cost of Mental Outsourcing

Let's talk about what you're actually losing when you stop thinking for yourself.

First, you lose agency. When you can't analyze information independently, you become dependent on whoever controls the information flow. Political leaders, social media influencers, corporations, algorithms—they all shape your reality, and you don't even realize it's happening. 73% of Democrats and Republicans can't even agree on basic facts. Not opinions. Facts. That's what happens when thinking skills collapse—you can't distinguish between what's true and what you want to be true.

Second, you lose adaptability. Repeated use of AI tools creates cognitive debt that reduces long-term learning performance in independent thinking and can lead to diminished critical inquiry, increased vulnerability to manipulation, and decreased creativity. In a rapidly changing world, the inability to think flexibly and adapt to new information is a death sentence for your career, your relationships, and your relevance.

Third, you lose connection—to your work, your decisions, your life.
83% of students who used ChatGPT exclusively couldn't recall key points in their essays, and none could provide accurate quotes from their own papers. When you outsource thinking, you forfeit ownership. Your work stops being yours. Your ideas stop being original. You become a conduit for someone else's thinking, not a generator of your own.

Research shows that partisan echo chambers increase both policy and affective polarization compared to mixed discussion groups. You're not just losing the ability to think—you're losing the ability to connect with people who think differently. You're trapped in a bubble where everyone agrees with you, which feels comfortable but leaves you intellectually brittle and socially isolated.

The societal cost? We're becoming ungovernable. When people can't think critically, they can't solve complex problems. They can't compromise. They can't distinguish between legitimate disagreement and malicious manipulation. Democracy requires citizens who can reason, debate, and arrive at informed conclusions. Without thinking skills, democratic institutions collapse into tribal warfare where the loudest voices win, not the most rational ones.

Why This Moment Demands Action

Here's what makes this crisis urgent: we're at an inflection point. Researchers have identified a tipping point beyond which the process of polarization speeds up as the forces driving it are compounded and forces mitigating polarization are overwhelmed. Some political groups may have already passed this critical threshold. Once you cross that line, reversing cognitive decline becomes exponentially harder.

Think about what's coming. AI is getting smarter, faster, and more persuasive. Deepfakes and AI-manipulated media are becoming increasingly sophisticated and harder to detect. Whether or not they've already influenced major events, the capability exists—and your ability to evaluate what's real becomes more critical every day.
Social media platforms are optimizing for engagement, not truth. Educational systems are struggling to adapt. The information environment is becoming more hostile to critical thinking every single day.

If you don't develop thinking skills now—if you don't reclaim your capacity for independent thought—you'll be swept along by forces you can't see and can't resist. You'll believe what you're told to believe. Buy what you're told to buy. Vote how you're told to vote. And you won't even realize you've lost the ability to choose.

But here's the truth they don't want you to know: thinking skills can be learned. They can be developed. They can be strengthened through deliberate practice. You're not doomed to cognitive passivity. You can take back control of your mind.

What Becomes Possible

Imagine waking up every morning with the confidence that you can evaluate any information that comes your way. No more anxiety about whether you're being manipulated. No more second-guessing your decisions because you don't trust your own judgment. No more feeling like everyone else knows something you don't.

When you master thinking skills, you become intellectually self-sufficient. You can spot logical fallacies in arguments. You can identify bias in news sources. You can separate correlation from causation. You can ask the right questions instead of accepting convenient answers. You can hold two competing ideas in your mind and evaluate them fairly without your ego getting in the way.

You become harder to fool and impossible to control. Political propaganda bounces off you because you can see through emotional manipulation. Marketing tactics lose their power because you understand psychological triggers. Social media algorithms can't trap you in echo chambers because you actively seek out diverse perspectives and challenge your own assumptions. Your relationships improve because you can actually listen to people who disagree with you without feeling threatened.
Your career accelerates because you can solve problems others can't see. Your decisions get better because you're working from logic and evidence, not fear and instinct. Research shows that innovative teaching methods like problem-based learning and interactive instruction significantly boost academic performance and cultivate critical thinking skills. These aren't just abstract benefits—they t
Most innovation leaders are performing someone else's version of innovation thinking. I've spent decades in this field. Worked with Fortune 100 companies. And here's what I see happening everywhere. Brilliant leaders following external frameworks. Copying methodologies from people they admire. Shifting their approach based on whatever's trendy. But they never develop their own innovation thinking skills.

Today, I'd like to share a simple practice that has transformed my life. And I'll show you exactly how I do it.

The Problem

Here's what I see in corporate America. Leaders are reacting to innovation trends instead of thinking for themselves. They chase metrics without questioning whether those metrics matter. They abandon promising ideas when obstacles appear because they don't have internal principles to guide them.

I watched a $300 million innovation initiative collapse. Not because the market wasn't ready. Not because the technology was wrong. But because the leader had no personal framework for making innovation decisions under pressure. This is the hidden cost of borrowed thinking. You can't innovate authentically when you're following someone else's playbook.

After four decades, I've come to realize something that most people miss. We teach innovation methods. But we never teach people how to think as innovators. There's a massive difference. And that difference is everything.

When you develop your own innovation thinking skills, you stop being reactive. You start operating from internal principles instead of external pressures. You ask better questions. Not just "How can we solve this?" but "Should we solve this?" That's what authentic innovation thinking looks like.

The Solution

So what's the answer? Innovation journaling. Now, before you roll your eyes, this isn't keeping a diary. This is the systematic development of your innovation thinking skills through targeted questions. My mentor taught me this practice early in my career.
It became a 40-year obsession because it works. The process is simple. Choose a question. Write until the thought feels complete. Close the journal. Start your day.

What makes this powerful is what the questions do. They force you to examine your core beliefs about innovation. They help you develop principles that guide decisions when external pressures try to pull you in different directions.

Most people operate from borrowed frameworks. Market demands. Best practices. Organizational expectations. Their approach shifts based on context. Innovation journaling builds something different. An internal compass. Your own thinking skills provide consistency across various challenges. Let me show you exactly how I do this.

Sample Prompt/Demonstration

Let me give you a question that consistently surprises people. Here's the prompt: "What innovation challenges do you consistently avoid, and what does that tell you about your beliefs?"

Most people want to talk about what they pursue. But what you avoid reveals just as much about your innovation thinking. I've watched executives discover they avoid innovations that require long-term thinking because they're addicted to quick wins. Others realize they dodge anything that might make them look foolish, which kills breakthrough potential.

One leader discovered she avoided innovations that required extensive collaboration. Not because she didn't like people. But because her core belief was that innovation required individual genius. That insight changed how she approached team projects.

The question isn't comfortable. That's the point. Innovation journaling works because it bypasses your intellectual defenses. It accesses thinking you normally suppress or ignore. When you write "I consistently avoid innovations that..." you're forced to be honest. And that honesty reveals your actual innovation philosophy.

Try this question yourself. Don't overthink it. Just write whatever comes up. You'll be surprised by what you discover.
The Benefits

Here's what changes when you develop your innovation thinking skills this way. You stop being reactive to whatever methodology is trendy. You have principles that guide you through uncertainty. You make decisions faster because they align with your authentic beliefs.

Your team dynamics improve. People respond differently when you lead from consistent principles instead of borrowed frameworks. You create psychological safety because you're comfortable with not knowing. You ask better questions. Instead of rushing to solutions, you examine whether problems deserve solving. You integrate your values with your innovation work.

Most importantly, you stop performing someone else's version of innovation. You start thinking like the innovator you actually are. I've been doing this practice for 40 years. It's the foundation of every breakthrough innovation I've created. Not because it gave me ideas. But because it taught me how to think.

Your innovation thinking skills are like a muscle. They get stronger with consistent use. Innovation journaling is how you build that strength. The compound effect is remarkable. After just two weeks, you'll see patterns in your thinking you never noticed. After a month, you'll make innovation decisions with confidence you didn't know you had. This isn't a quick fix. It's foundational development that serves you for years to come.

Two-Week Exercise

I want to help you get started. I've created a complete two-week innovation journaling program. Ten daily prompts plus weekend reflections. Each question is designed to develop different aspects of your innovation thinking skills. You can download it for free on my Substack:

[button href="https://philmckinney.substack.com/p/2-week-innovation-journaling-starter" primary="true" centered="true" newwindow="true"]Two-Week Innovation Journaling Program[/button]

This isn't just a list of questions. It includes the context for each prompt. Implementation guidance.
And the framework for building this into a sustainable practice.

Start tomorrow. Choose one question. Write for 10-15 minutes. See what emerges.

And if you find this helpful, I'm quietly working on something bigger. A whole year of innovation thinking prompts—different questions for each week to keep developing these skills over time. Subscribe on Substack to get notified when that's ready. It'll be worth the wait.

Your authentic innovation thinking skills are waiting to be developed. The world needs innovators who think for themselves. Not performers following someone else's playbook. Develop your innovation thinking skills. Everything else will follow.