How The Future Works with Brian Elliott

Update: 2025-08-03

Description

Welcome back to Snafu w/ Robin Zander. 

In this episode, I’m joined by Brian Elliott, former Slack executive and co-founder of Future Forum.

We discuss the common mistakes leaders make about AI and why trust and transparency are more crucial than ever. Brian shares lessons from building high-performing teams, what makes good leadership, and how to foster real collaboration. He also reflects on raising values-driven kids, the breakdown of institutional trust, and why purpose matters. We touch on the early research behind Future Forum and what he’d do differently today.

Brian will also be joining us live at Responsive Conference 2025, and I’m excited to continue the conversation there. If you haven’t gotten your tickets yet, get them here.

What Do Most People Get Wrong About AI? (1:53)

“Senior leaders sit on polar ends of the spectrum on this stuff. Very, very infrequently [do they] sit in the middle, which is kind of where I find myself too often.” 

  • Robin notes Brian will be co-leading an active session on AI at Responsive Conference with longtime collaborator Helen Kupp.

  • He tees up the conversation by saying Brian holds “a lot of controversial opinions” on AI: not that AI is insignificant, but that there’s a lot of “idealization” around it.

  • Brian says most senior leaders fall into one of two camps:

    • Camp A: “Oh my God, this changes everything.” These are the fear-mongers shouting: “If you don’t adopt now, your career is over.”

    • Camp B: “This will blow over.” They treat AI as just another productivity fad, like others before it.

  • Brian positions himself somewhere in the middle but is frustrated by both ends of the spectrum.

    • He points out that the loudest voices (Marc Benioff, Andy Jassy, Zuckerberg, Sam Altman) are “arms merchants” – they’re pushing AI tools because they’ve invested billions.

    • These tools are massively expensive to build and run, and unless they displace labor, it’s unclear how they generate ROI.

    • So, naturally, these execs have to:

      • believe in AI’s potential and

      • aggressively push adoption inside their companies.

  • But “nothing ever changes that fast,” and both the hype and the dismissal are off-base.

Why Playing with AI Matters More Than Training (3:29)

  • AI is materially different from past tech, but what’s missing is attention to how adoption happens.

    • “The organizational craft of driving adoption is not about handing out tools. It's all emotional.”

  • Adoption depends on whether people respond with fear or aspiration, not whether they have the software.

  • Frontline managers are key: it’s their job to create the time and space for teams to experiment with AI.

  • Brian credits Helen Kupp for being great at facilitating this kind of low-stakes experimentation.

  • He suggests teams should “play with AI tools” in a way totally unrelated to their actual job.

    • Example: take a look at your fridge, list the ingredients you have, and have AI suggest a recipe – see the sketch after this list. “Well, that’s a sucky recipe, but it could do that, right?”

  • The point isn’t utility; it’s comfort and conversation:

    • What’s OK to use AI for?

    • Is it acceptable to draft your self-assessment for performance reviews with AI?

    • Should you tell your boss or hide it?
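
For the curious, here is a minimal sketch of the fridge exercise as a prompt to a model. It assumes the OpenAI Python SDK with an API key in the environment; the model name and ingredient list are illustrative, and any chat interface would work just as well.

# Toy "play with AI" exercise: list what's in the fridge, ask for a recipe.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

fridge = ["eggs", "spinach", "half an onion", "cheddar", "leftover rice"]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Suggest one quick dinner using only: " + ", ".join(fridge),
    }],
)
print(response.choices[0].message.content)

The point, per Brian, is not the recipe; it’s the low-stakes practice of prompting, judging the output, and talking about it with your team.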

The Purpose of Doing the Thing (5:30)

  • Robin brings up Ezra Klein’s New York Times podcast, where Ezra asks: “What’s the purpose of writing an essay in college?”

  • AI can now do better research than a student, faster and maybe more accurately.

  • But Robin argues that the act of writing is what matters, not just the output.

    • Example: Robin and his partner are in contract on a house and wrote a letter to the seller – the usual “sob story” to win favor.

    • Says: “I’m much better at writing that letter than ChatGPT can ever be, because only Robin Zander can write that letter.”

  • All the writing he’s done over the past two years prepared him to write that one letter better.

    • “The utility of doing the thing is not the thing itself – it’s what it trains.”

Learning How to Learn (6:35)

  • Robin’s fascinated by “skills that train skills” – a lifelong theme in both work and athletics.

  • He brings up Josh Waitzkin (from Searching for Bobby Fischer), who went from chess prodigy to big wave surfer to foil board rider.

    • Josh trained his surfing skills by riding a OneWheel through NYC, practicing balance in a different context.

  • Robin is drawn to that kind of transfer learning and “meta-learning” – especially since it’s so hard to measure or study.

    • He asks: What might AI be training in us that isn’t the thing itself?

  • We don’t yet know the cognitive effects of using generative AI daily, but we should be asking.

Cognitive Risk vs. Capability Boost (8:00)

  • Brian brings up early research suggesting AI could make us “dumber.”

    • Outsourcing thinking to AI reduces sharpness over time.

  • But also: the “10,000 repetitions” idea still holds weight – doing the thing builds skill.

  • There’s a tension between “performance mode” (getting the thing done) and “growth mode” (learning).

  • He relates it to writing:

    • Says he’s a decent writer, not a great one, but wants to keep getting better.

    • Has a “Claude project” set up as an editor that helps refine tone and clarity but doesn’t do the writing (sketched after this list).

    • The setup: he provides 80% drafts, guidelines, tone notes, and past writing samples.

  • The AI/editor cleans things up, but Brian still reviews:

    • “I want that colloquialism back in.”

    • “I want that specific example back in.”

    • “That’s clunky, I don’t want to keep it.”

  • Writing is iterative, and tools can help, but shouldn’t replace his voice.
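
A minimal sketch of this kind of editor setup, assuming the Anthropic Python SDK with an API key in the environment; the model name, guidelines, and draft placeholder are illustrative, not Brian’s actual configuration.

# Editor-not-ghostwriter setup: the human supplies an 80% draft plus tone
# guidelines; the model only edits. All specifics here are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

guidelines = (
    "You are an editor, not a ghostwriter. Tighten clarity and flow, but "
    "preserve the author's colloquialisms and specific examples. Flag "
    "anything you cut so the author can pull it back in."
)

draft = "..."  # the author's 80% draft goes here

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=guidelines,
    messages=[{"role": "user", "content": "Edit this draft:\n\n" + draft}],
)
print(message.content[0].text)

The design choice mirrors Brian’s point: the system prompt constrains the model to edit rather than write, so the final pass – pulling colloquialisms and examples back in – stays with the human.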

On Em Dashes & Detecting Human Writing (9:30)

  • Robin shares a trick: he used em dashes long before ChatGPT and types them with a space on either side. He says ChatGPT’s em dashes are double-length and come without spaces – a sketch of this heuristic follows this list.

    • If you want to prove ChatGPT didn’t write something, “just add the space.”

  • Brian agrees and jokes that his editors often remove the spaces, but he puts them back in.

    • Reiterates that professional human editors like the ones he works with at Charter and Sloan are still better than AI.
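
A minimal sketch of Robin’s heuristic, counting spaced versus unspaced em dashes as a rough authorship signal; the function name is illustrative, and this is a toy signal, not a reliable detector.

import re

def em_dash_style(text: str) -> dict:
    """Count spaced vs. unspaced em dashes (U+2014) in a text.

    Heuristic from the episode: Robin types em dashes with a space on
    either side; ChatGPT output tends to jam them between words.
    """
    spaced = len(re.findall(r"\s\u2014\s", text))             # " — "
    unspaced = len(re.findall(r"(?<=\S)\u2014(?=\S)", text))  # "word—word"
    return {"spaced": spaced, "unspaced": unspaced}

print(em_dash_style("I wrote this myself \u2014 honestly."))
# {'spaced': 1, 'unspaced': 0}
print(em_dash_style("The model\u2014confident as ever\u2014drafted this."))
# {'spaced': 0, 'unspaced': 2}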

Closing the Gap Takes More Than Practice (10:31)

  • Robin references The Gap by Ira Glass, a 2014 video that explores the disconnect between a creator’s vision and their current ability to execute on that vision.

    • He highlights Glass’s core advice: the only way to close that gap is through consistent repetition – what Glass calls “the reps.”

  • Brian agrees, noting that putting in the reps is exactly what creators must do, even when their output doesn’t yet meet their standards.

  • Brian also brings up his recent conversation with Nick Petrie, whose work focuses not only on what causes burnout but also on what actually resolves it.

    • He notes research showing that people stuck in repetitive performance mode – like doctors doing the same task for decades – eventually see a decline in performance.

  • Brian recommends mixing in growth opportunities alongside mastery work, alternating between:

    • “exploit” mode (doing what you’re already good at) and

    • “explore” mode (trying something new that pushes you).

    • He says doing things that stretch your boundaries builds muscle that strengthens your core skills and breaks stagnation.




Robin P. Zander