In the Long Run
Author: Joao Dias Ferreira, Marie Bemler, Jim Tolman
© Joao Dias Ferreira, Marie Bemler, Jim Tolman
Description
"In the Long Run" Podcast – Where technology, organizational change, and strategy collide. Join three experts as they break down the latest tech news, explore how technology is shaping conversations, and offer actionable insights on navigating AI and digital transformation in organizations. Get ahead of the curve and drive your business forward, one episode at a time.
24 Episodes
In this episode of "In the Long Run", we unpack the escalating clash between the U.S. "Department of War" (Pentagon) and Anthropic, centered on whether a private AI company can restrict government use of its models, especially around mass surveillance and fully autonomous weapons. We explore how today's legal frameworks lag behind AI capabilities, making large-scale surveillance technically easy (and sometimes still lawful). Finally, we connect the rise of AI coding agents and "vibe coding" to a shake-up in SaaS: how will this affect Software-as-a-Service companies when many subscription apps become trivial to recreate with personalized agents, pushing SaaS value toward support, accountability, and enterprise-grade reliability rather than basic functionality?
Something Big is Happening… and we can feel it in the AI tools, the timelines, the culture and our business. In this episode of In the Long Run, we unpack Matt Schumer's viral article and ask what's signal versus hype: are developers really moving from "AI helps" to "AI delivers"? We then zoom in on OpenClaw, the open-source agent that turns models into operators via memory, connectors, skills, and scheduled automation, and we look at the weird, revealing moment that was MoltBook: a social feed for bots, prompts, and performative autonomy. Finally, we touch on Google's Project Genie, the world-model demo that generates navigable environments. Thanks for coming back after New Year, and happy Chinese New Year: the Year of the Horse!
In this episode we debate whether it makes sense to wait for the future, try to lead the frontier, or invest in the present — and what that means for change management in an AI world. We also cover: the ambition behind Project Suncatcher, a plan to power AI via solar-satellite data centres in orbit; alarming reports from Anthropic about the first AI-agent-led hacking campaign; the launch of Gemini 3 by DeepMind / Google; and evolving simplification proposals for the EU AI Act.
This week's episode: Jim attended an event with OpenAI in London and shares some highlights. OpenAI's reorganisation has been finalised, sparking fresh discussions about AGI timelines. There was also a notable update from the world of AI robotics: 1X has opened pre-orders for its humanoid robot, NEO, priced at $20,000 (or $500 per month), with deliveries expected next year.
What made "In the Long Run" worth listening to this year? Which episode stuck with you most, and why? We celebrate one year of In the Long Run in our 20th episode. We have reached 100 subscribers! Thanks for being part of this journey. Here's to another year of great conversations and new listeners joining in. And which breakthrough felt like the real shift this year? We reflect on the biggest events that happened in tech news this year and how they affected our lives. Would you welcome a robot in your home if it looked more like a person, or less? With Sora 2 in everyone's pocket, does realism in video spark more creativity or more mistrust? Has open source caught up, or does leadership still rest with the big players? And who will be the big players one year from now: Google, Apple, or still OpenAI?
Consumer ChatGPT use skews to practical help and tutoring, while we use it more for research, brainstorming, content editing, technical support and especially coding. AGI plus UBI could entrench inequality if compute access becomes the key capital, keeping wealthy users ahead and limiting mobility. ASML's backing of Mistral links Europe's chipmaking choke point more tightly to a European AI model builder. Meta's new glasses surface answers in your field of view, winning tech-crowd praise but prompting mixed Dutch reactions about boredom, fear and privacy. Apple's pushback on the EU's DMA shows how rules shape what citizens get, with live translation at risk while incumbents defend lucrative defaults.
In this episode we explore brain-computer interfaces, discussing MIT's Alterego project and Joao's startup Wyrde AI, which aims to read thoughts through BCI-enabled glasses. While promising seamless AI interaction, concerns arise about mental focus and unintended actions. As a second topic we dive into OpenAI's research, which reveals that AI hallucinations stem from training processes that reward guessing over admitting uncertainty — a problem the hosts recognize in human organizations, where senior staff can make unfounded claims while juniors must stay cautious. Brief news includes a proactive AI assistant that continuously monitors users without prompting, and Oracle's temporary rise as the world's most valuable company due to AI infrastructure hype.
We are back from the summer break and GPT-5 has been introduced. OpenAI replaced model selection with automatic routing. While this benefits casual users, we found the change disruptive as more advanced users. We debate whether users should expect to continually relearn prompting techniques as models evolve, and why a gradual sunset period for older versions would have eased the transition. Microsoft's AI lead Mustafa Suleyman commented on "AI rights". While he cautions against premature debates, we reflect on society's tendency to anthropomorphise even simple technologies, raising concerns about attachment and misuse, especially with chatbot integrations in consumer platforms. Finally, we review an MIT study revealing that 95% of AI pilots fail. The researchers argue that experimentation is necessary, failure builds knowledge, and success depends on creating real value rather than adding AI for its own sake.
This episode explores the future of self-improving AI. MIT's SEAL framework lets language models generate their own fine-tuning data and learning goals. Sakana AI's Darwin Gödel Machine goes a step further: it rewrites its own code through evolutionary search, building a growing library of smarter agents. These advances point to a future where smaller, more adaptable models can keep learning on the fly. We also cover the conflict between OpenAI and the Google-backed audio startup iyO. OpenAI is investing $6.5 billion in Jony Ive's hardware company (io) to develop a new AI companion. iyO is about to launch iyO One, AI-powered earbuds for voice-first interaction. The legal clash reflects rising expectations in AI-native hardware. Google DeepMind introduces Gemini Robotics On-Device. It runs locally on robots, removing the need for internet access and enabling real-time object control. The new SDK lets developers fine-tune models with just 50 demonstrations, making it quicker to adapt to new tasks. Your Summer Survival Kit: listen again to our episode "Attention Is All You Need", read Klara and the Sun, and let Jim explore AI tools like Synthesia, Canva AI, Notion AI and Sana Agents (so you don't have to). Instead, take the time to reflect on how AI is shaping your field and get ready for what autumn brings.

Papers
"Self-Adapting Language Models (SEAL)" – Zweiger et al., MIT, 2025
"The Darwin Gödel Machine" – Sakana AI & UBC, May 2025

Clips
Sam Altman + Jony Ive io announcement (YouTube)
iyO earbud reveal (TED talk)
Generalist robotics demo
Gemini Robotics SDK launch

Summer Reading & Listening
Klara and the Sun – Kazuo Ishiguro
AI och makten över besluten – Steinrud et al.
BBC In Our Time (podcast)
Hard Fork (New York Times)
The Ezra Klein Show

Tools to Test
Synthesia (AI video)
Canva AI
Notion AI
Sana Agents
João and Marie connect with Jim, who's reporting directly from Nvidia's GTC in Paris. Jim shares firsthand insights about CEO Jensen Huang's ambitious vision for massive European "AI factories," dives into Nvidia's impact on the AI landscape, and reflects on what Disney's humanoid robot, walking across a simulated desert, means for embodied intelligence and human-machine interactions. Back in the virtual studio, the trio shifts from advanced hardware to complex ethical challenges: Marie introduces the concept of "wicked problems," highlighting the difficulty of defining universally acceptable solutions in scenarios such as childcare and traffic. Jim deepens the debate with Stuart Russell's cautionary thoughts on humble and controllable AI agents. Together, they question whether big organizations can truly align AI systems when perfect answers don't exist.
João, Marie and Jim explore how the rise of AI is reshaping the world of work. They start with Microsoft's 7,000-person layoff notice and ask whether this signals the end of tech's boom years — or simply the next step in a fast-moving shift. Then it's on to Duolingo and Shopify's bold "AI-first" strategies. What do these statements really ask of employees, and why does the tone feel so different between Europe and North America? The conversation widens to include the hard reality of change inside established organisations. Marie brings up Who Moved My Cheese?, not for nostalgia, but to ask whether we ever truly embrace ongoing reinvention — or just ride out each wave. João argues that big companies can adapt, as long as there's vision from the top and curiosity at the bottom. We also cover the news, as always!
João, Marie, and Jim explore the future of AI competition, debating whether tech giants or agile startups will prevail. They discuss recent antitrust actions and regulatory changes in the U.S. and EU aimed at reducing monopolies and boosting innovation. Highlighting user convenience and market dominance through examples like search engines and social media, they question how habits influence technology adoption.They critically examine the ethics of a controversial study where AI secretly attempted to influence human opinions, comparing it to tactics used by social media giants. The episode concludes by reflecting on the rapid evolution and recent retirement of GPT-4, pondering its implications for users, researchers, and companies.
Marie, Jim, and João discuss balancing personal skill-building with technological ease. Using examples from woodworking, urban-planning videos, and navigating cities, they question when relying on technology becomes freeing and when it risks dulling essential abilities. Marie highlights three mental "muscles" — attention, reading, and memory — that technology might weaken. While AI can swiftly summarise complex content, they argue this convenience may undermine our ability to engage deeply, recall independently, or remain truly present. The trio considers future generations growing up alongside smart assistants, contemplating what minimum skills will still be needed.
Welcome to "In the Long Run," the podcast where we explore technology, data, and AI decision-making. In this episode, Jim and João discuss Anthropic's research on mechanistic interpretability, which aims to reverse-engineer neural networks to understand how AI models actually work. They compare AI models to human brains as "black boxes" with unclear internal processes and explore how multilingual models like Claude may use a universal "concept language" internally before translating to specific human languages. The conversation also touches on the part of the study that reveals evidence that AI models plan multiple words ahead rather than simply predicting one word at a time. We further discuss the shift from "large language models" to "foundation models" as these systems now incorporate multiple modalities including text, images, and audio. They highlight GPT-4's new image generation capabilities that recently dominated social media with "Ghiblifying" images. The episode concludes with AI news including Google's Gemini 2.5 release, OpenAI's plans for an open-weight model, and the adoption of Model Context Protocol as a potential industry standard for AI tools.
Welcome to "In the Long Run," the podcast where we explore technology, data, and AI decision-making. Our conversation starts with AI safety, highlighting its paradoxical nature — caught between rapid advancement and existential caution. We discuss the control problem, analogies illustrating the potential challenges of managing superior AI, and question whether our trajectory with AI is consciously chosen or blindly, economically driven. We examine the importance of data transparency, the pitfalls of gut decisions, and validating data-driven outcomes. Finally, we reflect on the tangible impacts — both beneficial and frustrating — of operating without powerful AI tools. Join us today for an insightful exploration into AI safety, ethical technology advancement, and the importance of mindful choices in shaping our future.
Welcome to In the Long Run episode 9! Join Joao, Marie, and Jim as they dive into the latest AI breakthroughs, including the launch of cutting-edge foundation models Grok 3, Claude 3.7, and ChatGPT 4.5. They explore the promising potential of OpenAI's DeepResearch initiative and its real-world applications, while also examining recent advancements in AI video generation with Sora's EU launch and Alibaba's innovative WAN2.1 model. In the second half, our hosts share fascinating insights on the "company in a box" concept, unpacking what this means for business transformation and how it relates to organizational technology adoption in today's rapidly evolving landscape. Whether you're an AI enthusiast, a business leader, or simply curious about how these technologies are reshaping our world, tune in to Episode 9 of In the Long Run for thought-provoking discussions that will expand your perspective on the future of technology and business.
Welcome to In the Long Run! Scania is now using ChatGPT Enterprise. Joao and Jim discuss how reaching an agreement with OpenAI required time due to legal and security concerns. Once finalised, they planned the ChatGPT Enterprise rollout, launching an early adopters program to test its impact. They share insights on training, collaboration, and AI adoption costs. The results are still being analysed and will be covered in a future episode. They also explore AI breakthroughs like reasoning models, DeepSeek, Stanford's $50 AI model, and Google's AI offering.
Welcome to In the Long Run! To kick off the new year, Marie, Joao, and Jim cover some of the biggest tech updates:
CES 2025 Highlights: Joao and Jim share their experiences from the Consumer Electronics Show in Las Vegas, including Nvidia's keynote by Jensen Huang. They discuss new graphics cards, Project Digits, and Nvidia Cosmos — a physics foundation model for training autonomous vehicles.
Autonomous Vehicles: Jim talks about riding Waymo’s robotaxis in San Francisco and what it’s like to travel without a driver. The hosts also discuss the challenges of rolling out self-driving technology in new places.
The episode wraps up with a couple of exciting news items.
In this In the Long Run episode, Marie and João discuss the implications of advancements in AI, specifically focusing on the difference between AI systems designed for categorization versus those that generate unpredictable outputs. They explore how these contrasting approaches relate to human behavior and organizational structures, questioning the limitations of relying solely on predictable systems. In the rapid-fire news section they also touch upon recent AI developments, such as OpenAI's new model and Google's Gemini 2.0, and consider the practical applications and challenges of these technologies. Finally, the conversation concludes with remarks on these recent developments and on the potential of quantum computing and its implications for future innovation.
In this episode Marie returns for an insightful discussion with Jim and Joao around a study called "Generative Agent Simulations of 1,000 People", focused on computational agents that replicate human behavior across domains. They explore how such a solution can be leveraged to support leadership decision-making and what implications such an approach might have.
In the rapid-fire segment, Jim, Marie and Joao share their views on Elon Musk's recent adventures, along with a short reflection on a recent study showing that AI alone outperforms both humans and humans assisted by AI.
Tune in for an insightful discussion on the current state of AI technology, its future direction, and what these developments mean for the journey toward AGI.




