Digital Disruption with Geoff Nielson
Author: Info-Tech Research Group
© Info-Tech Research Group
Description
The Next Industrial Revolution is Already Here
Digital Disruption is where industry leaders and experts share insights on leveraging technology to build the organizations of the future.
As intelligent technologies reshape our lives and our livelihoods, we speak with the thinkers, the doers and innovators who will help us predict and harness this disruption. Join us as we explore how to adapt to and harness digital transformation.
53 Episodes
Is AI actually going to replace developers? Or is the hype getting ahead of reality?

On this episode of Digital Disruption, we're joined by Sebastian Raschka, AI research engineer and author.

Sebastian sits down with Geoff Nielson to unpack the real state of large language models (LLMs) in 2026. As an LLM research engineer, Sebastian bridges deep technical expertise with practical, real-world AI implementation. In this conversation, he cuts through AI hype to focus on what's actually achievable with modern LLMs, reasoning models, reinforcement learning, and inference scaling, and where the limitations still exist. He explains why most companies should not build a large language model from scratch, but also why understanding the fundamentals may be one of the most important investments technology leaders can make.

This conversation breaks down:
◼️ Why coding is currently the strongest LLM use case
◼️ Why "reasoning" models still fail simple tasks like counting letters in "strawberry"
◼️ The reality behind Math Olympiad gold-level AI claims
◼️ The true cost of training large models (millions in GPU compute)
◼️ The privacy risks of uploading proprietary data into APIs
◼️ How enterprises should think about fine-tuning vs. API-based prompting
◼️ Why benchmarks and leaderboards can be misleading

Sebastian Raschka has over a decade of experience in artificial intelligence and machine learning. His work bridges academia and industry: he serves as a Senior Engineer at Lightning AI and as a faculty member at the University of Wisconsin–Madison. He is the author of Build a Large Language Model from Scratch and is widely recognized for his practical, code-driven approach to AI education and research. His expertise spans LLM research, transformer architectures, reinforcement learning, and the development of high-performance AI systems, with a strong focus on real-world implementation.

In this video:
00:00 Intro
01:23 The rise of "reasoning" and thinking models
03:06 Inference scaling vs. training scaling
06:17 What LLMs are actually good (and bad) at
07:09 The "strawberry" problem and reasoning limits
09:00 Tool use and why LLMs don't need to count letters
10:20 Math Olympiads and self-refinement techniques
12:01 Why coding is the killer use case
13:28 Does AI make developers obsolete?
18:02 The reality of 10x developer productivity claims
21:43 Generalist vs. specialized models
23:53 Build from scratch vs. fine-tune vs. API prompting
25:01 The true cost of training an LLM
27:33 API customization vs. owning your model
29:12 Who should build an LLM from scratch?
33:16 Data requirements and why you need terabytes
34:28 Enterprise data challenges
35:40 Retrieval-Augmented Generation (RAG) explained
46:05 Multi-agent systems and tool calling
49:48 The problem with LLM benchmarks
55:43 Using LLMs as judges
58:00 Biggest misconceptions about LLMs
1:04:19 Reinforcement learning with verifiable rewards
1:06:32 Advice for technology leaders
1:11:48 Escaping AI hype through fundamentals

Connect with Sebastian:
LinkedIn: https://www.linkedin.com/in/sebastianraschka/
X: https://x.com/rasbt

Our links:
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
On this episode of Digital Disruption, we're joined by Josh Browder, founder of DoNotPay.

Josh, also known as the "robot lawyer," is an entrepreneur and the founder of DoNotPay, often referred to as "the Robin Hood of the Internet." He created DoNotPay after noticing the disproportionate targeting of elderly and disabled motorists through parking ticket enforcement. Since its launch, DoNotPay has helped UK and New York motorists save an estimated $5 million. Josh has been recognized as one of MIT Technology Review's "35 Innovators Under 35" and named one of the top legal innovators in America by the Financial Times.

Josh joins Geoff to discuss how AI in law is transforming the legal landscape and driving real-world innovation. From the impact of artificial intelligence on consumer rights, the legal system, and everyday life, this episode explores how AI is being used to help people push back against predatory business practices. Josh shares how DoNotPay evolved from simple legal templates into sophisticated AI agents capable of negotiating directly with companies, in some cases even AI-vs.-AI negotiations with corporate chatbots. The conversation dives into how large organizations profit from consumer friction, dark patterns, and bureaucracy, and how AI can help consumers fight fire with fire.

In this video:
00:00 Intro
01:10 Mission behind DoNotPay
03:00 Why big companies exploit consumer inaction
05:00 Robo Revenge: Fighting spam calls with AI
08:00 From templates to AI chatbots: The evolution of DoNotPay
11:00 AI vs. AI: Negotiating bills with corporate chatbots
14:00 Subscription traps, free trials, and dark patterns
16:00 Medical bills, the No Surprises Act, and AI negotiation
18:45 Property taxes as the next major consumer battleground
20:00 Phone AI agents and real-time negotiation challenges
23:00 Ethical constraints: Why DoNotPay's AI never lies
25:00 Passive AI: The future of automatic refunds
27:00 Unclaimed money and forgotten refunds
29:00 Will AI replace lawyers? Legal industry disruption
32:00 AI inside DoNotPay: Running a lean, profitable company
35:00 Training AI models and the new AI job economy
38:00 Bias, truth, and the limits of AI judgment
41:00 Venture capital, scale, and staying independent
44:00 Real stories: Justice, anger, and consumer wins
47:00 Why consumer exploitation isn't going away
49:00 The future of AI assistants, wearables, and contracts
51:00 Bold predictions: AI, robotics, and human relationships
53:00 Where AI should not be used

Connect with Josh:
LinkedIn: https://www.linkedin.com/in/joshua-browder-b0b573116/
X: https://x.com/Joshuabrowder
Instagram: https://www.instagram.com/joshuabrowder12/

Our links:
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is artificial intelligence strengthening democracy or quietly reshaping power in ways we're not prepared for?

On this episode of Digital Disruption, we're joined by world-renowned security technologist and author Bruce Schneier.

Described by the Economist as a "security guru," Bruce is best known as a refreshingly candid and lucid security critic and commentator. He works at the intersection of security, technology, and people, and has been writing about security issues on his blog since 2004 and in a monthly newsletter since 1998. He is a fellow and lecturer at Harvard's Kennedy School, a board member of the Electronic Frontier Foundation, and the Chief of Security Architecture at Inrupt, Inc.

Bruce joins Geoff to explore one of the most important questions of our time: Will AI strengthen democracy or quietly undermine it? From government services and public policy to cybersecurity, labor, and the justice system, Bruce breaks down how artificial intelligence acts as a power-magnifying technology, amplifying both the best and worst intentions of those who use it. Drawing from real-world examples in Germany, Brazil, Japan, France, Canada, and the United States, this conversation examines where AI is already reshaping democratic institutions. He also outlines four concrete strategies for steering AI toward democratic outcomes: resisting harmful uses, reforming the AI ecosystem, responsibly deploying AI where it helps, and fixing the underlying societal problems AI tends to amplify.

This conversation also dives into:
◼️ How AI can improve government efficiency without replacing human judgment
◼️ The risks of AI concentration in the hands of powerful corporations and governments
◼️ AI's impact on work, jobs, and hiring in an era of automation
◼️ The role of regulation, reform, and resistance in shaping AI's future
◼️ Whether AI will ultimately democratize power or reinforce inequality

In this video:
00:00 Intro
02:10 AI as a power-magnifying technology
05:00 The four ways AI can help or harm democracy
07:20 Real-world AI use in elections and civic engagement (Germany & Japan)
10:00 AI in courts, justice systems, and public administration (Brazil)
12:30 Journalism, transparency, and AI as an investigative tool
15:00 Human-in-the-loop: Why oversight still matters
18:20 Designing AI that can say "yes" but not "no"
21:00 AI, work, hiring, and the automation arms race
24:00 Fraud, trust, and remote work in the AI era
27:00 Does AI democratize power or reinforce it?
30:00 Trustworthy AI vs. "good enough" AI
33:00 When AI is forced on citizens without choice
36:00 Regulation, markets, and the myth of the AI arms race
39:00 What leaders should ask before deploying AI
42:00 Jobs, backlash, and AI-driven inequality
44:00 Lessons from blockchain and past tech hype cycles
48:00 AI, cybersecurity, and the attacker vs. defender balance
52:00 The future of AI skills and careers
55:00 Steering AI toward democracy

Connect with Bruce:
Facebook: https://www.facebook.com/bruce.schneier

Our links:
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What does the future of AI really look like as we head toward 2026, beyond the hype, headlines, and fear-driven narratives?

On this episode of Digital Disruption, we're joined by internationally recognized advisor, speaker, and researcher on AI strategy, Walter Pasquarelli.

Walter is one of the world's leading voices on ethical and strategic AI. He has advised governments, global institutions, and leading technology companies on AI governance, policy, and readiness, and brings a grounded perspective on what it really takes to lead in the age of artificial intelligence.

Walter joins Geoff to unpack what's actually happening with artificial intelligence and what most media coverage gets wrong. He brings a 360-degree view of AI adoption and how AI is moving out of boardrooms and into everyday life, reshaping how people think, decide, work, and relate to technology.

This conversation dives into:
◼️ Why AI adoption is accelerating among consumers, not just enterprises
◼️ The rise of AI companions, humanoid robots, and everyday AI use
◼️ The real risks behind automation anxiety, data privacy, and emotional dependency
◼️ What "AI psychosis" is and why it's a growing concern
◼️ Why AI literacy matters more than fear, hype, or blind regulation
◼️ How AI is reshaping work, leadership, and global competitiveness

In this video:
00:00 Intro
01:14 AI's trajectory toward 2026
02:09 AI moves from boardrooms to living rooms
03:16 Humanoid robots: From screens to physical bodies
05:35 Household robots, prestige, and consumer adoption
07:33 Military, drones, and high-stakes AI applications
08:46 Self-driving cars as robotics, not just vehicles
13:49 Automation anxiety and ethical reality
18:30 Shifting authority from humans to algorithms
19:58 Power concentration and data privacy risks
22:49 AI, mental health, and emotional dependency
28:11 Why regulation alone will always lag
33:25 Business leaders' biggest AI misconceptions
36:49 Data, talent, and capability gaps
42:18 Estonia, strategy, and digital leadership
44:42 Advising governments: What leaders must do
49:49 AI, sector disruption, and the future of work
53:22 Why top performers benefit most from AI
56:14 Judgment, curation, and human excellence

Connect with Walter:
LinkedIn: https://www.linkedin.com/in/walter-m-pasquarelli/?originalSubdomain=uk
X: https://x.com/waltpasquarelli

Our links:
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What if AI becomes the most consequential technology in human history?

On this episode of Digital Disruption, we're joined by Nina Schick, geopolitical analyst and one of the world's leading voices on AI.

Nina Schick is a globally recognized expert on AI, geopolitics, and power. She was among the first to forecast the societal impact of AI-generated content and now leads the conversation on Industrial Intelligence, the idea that AI is not just software but a geopolitical and industrial transformation.

Nina sits down with Geoff to unpack how intelligence itself is becoming a geopolitical weapon. She explains why we are entering the Age of Intelligence, where non-biological intelligence may soon rival or surpass human intelligence, reshaping economics, warfare, democracy, labor, and global power structures.

This conversation goes far beyond AI tools and chatbots. We explore:
◼️ Why AGI may arrive sooner than expected
◼️ How AI infrastructure and scaling laws are reshaping global power
◼️ What AI means for warfare, democracy, and national security
◼️ How near-zero-cost intelligence will transform work and leadership

In this episode:
00:00 Intro
01:33 The AI scaling laws accelerating toward AGI
03:09 Excitement vs. fear: how disruptive will AI really be?
05:16 Deepfakes, information warfare, and early AI misuse
06:58 AI's true killer app: scientific discovery
08:49 Intelligence as a utility and the speed of global disruption
10:38 Why AI is becoming the biggest political issue of our time
12:12 Concentration of power and the rise of AI monopolies
14:33 Technology, history, and why power always follows innovation
16:18 AI infrastructure, hyperscalers, and trillion-dollar CapEx
19:05 AI as hard power and American technological dominance
21:15 NATO, national security, and autonomous warfare
23:55 The end of American hegemony and the rise of hard-power politics
27:36 Democracy vs. authoritarianism in the AI race
29:48 Why trivial consumer AI is a strategic failure
35:52 What AI deployment really means for businesses
39:13 The myth of AI tools vs. intelligence as a capability
41:41 Will AI actually cause mass layoffs?
46:32 Asset ownership in a world of cheap intelligence
50:24 How AI empowers individuals and emerging economies
52:51 The most important skills for the AI age
55:38 Why being human matters more than ever
56:41 Resilience, risk, and the future

Connect with Nina:
LinkedIn: https://www.linkedin.com/in/ninaschick/
X: https://x.com/NinaDSchick

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is the metaverse actually dead, or just badly branded?

On this episode of Digital Disruption, we're joined by Christian Venables, co-founder of Radical Realities.

Christian specializes in immersive technology and AI, staying at the forefront of emerging tools, platforms, and workflows. With a strong foundation in architecture and design, he has transitioned into extended reality (XR), exploring the evolving possibilities of virtual reality (VR), augmented reality (AR), and spatial computing. He is the co-founder of Radical Realities, a global immersive studio of creative innovators operating entirely virtually. The studio delivers experiences that transcend the physical world, spanning metaverse development, gaming, AR/VR/MR, CGI, VFX, and AI consultancy. Throughout his career, Christian has led and contributed to immersive projects for globally recognized brands including Coachella, Universal, Disney, Cartier, and Hyundai.

Christian sits down with Geoff to break down why the metaverse will be rebranded, not abandoned. The real future isn't cartoon avatars or fantasy worlds, but spatial computing, AR glasses, and ambient interfaces that blend seamlessly into everyday life. Despite years of hype, backlash, and false hope, the metaverse may finally be entering its most practical and powerful phase. Christian explains why the term itself may disappear, while the underlying technologies (XR, spatial computing, AI-driven 3D design, and wearable AR glasses) are already reshaping how we work, learn, design, and collaborate. From Meta Ray-Ban display glasses and neural wristbands to Gravity Sketch, Unreal Engine, and AI-assisted worldbuilding, this conversation explores how immersive computing is moving beyond gimmicks into real-world utility, especially across architecture, engineering, education, and the creative industries.

In this video:
00:00 Intro
03:00 Is the metaverse dead or just misbranded?
06:00 Spatial computing vs. virtual worlds
09:00 Why AR glasses matter more than headsets
12:00 Smart glasses: why this wave is different
15:00 Neural wristbands and gesture-based control
18:00 How quickly humans become dependent on tech
21:00 The split between human-made and AI-generated culture
24:00 Augmenting creativity instead of replacing it
27:00 Designing entirely inside VR
30:00 Gravity Sketch: true 3D creation explained
34:00 Why spatial collaboration beats screens
38:00 Real-world use cases: architecture & manufacturing
42:00 Why mouse and keyboard are reaching their limits
47:00 AI + XR: generating worlds in real time
52:00 What needs to happen before immersive tech scales
56:00 Should immersive computing be back on our radar?

Connect with Christian:
LinkedIn: https://www.linkedin.com/in/christian-venables-74542836/
Instagram: https://www.instagram.com/csavenables/
X: https://x.com/Csavenables

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What happens when the people building artificial intelligence quietly believe it might destroy us?

On this episode of Digital Disruption, we're joined by Gregory Warner, Peabody Award–winning journalist, former NPR correspondent, and host of the hit AI podcast The Last Invention.

Gregory Warner is a versatile journalist and podcaster. He has been recognized with a Peabody Award and other honors, including Edward R. Murrow, New York Festivals, AP, and PRNDI awards. His career includes serving as an East Africa correspondent, where he covered the region's economic growth and terrorism threats. He has also worked as a senior reporter for American Public Media's Marketplace, focusing on the economics of American health care, work that earned a Best News Feature award from the Third Coast International Audio Festival.

Gregory sits down with Geoff for an honest conversation about the AI race unfolding today. After years spent interviewing the architects, skeptics, and true believers behind advanced AI systems, Gregory has come away with an unsettling insight: the same people racing to build more powerful models are often the most worried about where this technology is heading. This episode explores whether we're already living inside the AI risk window, why AI safety may be even harder than nuclear safety, and why Silicon Valley's "move fast and fix later" mindset may not apply to superintelligence. It also examines the growing philosophical divide between AI doomers and AI accelerationists. This conversation goes far beyond chatbots and job-loss headlines. It asks a deeper question few are willing to confront: are we building something we can't control, and doing it anyway?

In this video:
00:00 Intro
03:00 AI models that already behave like elite hackers
05:00 Why the AI risk window may already be open
06:30 What AI safety actually means (and why it's so hard)
12:00 Human-in-the-loop: safety feature or illusion?
15:00 AI as an alien intelligence, not a human one
19:00 The Silicon Valley AI arms race explained
21:00 OpenAI, DeepMind, Anthropic, xAI: who's racing and why
25:00 The "Compressed Century" and radical AI optimism
27:00 Can AI actually solve humanity's biggest problems?
33:00 Capital, competition, and the pressure to deploy
37:00 Is AI more dangerous than nuclear weapons?
39:00 The problem with comparing AI to past technologies
43:00 What happens to human agency in an AI-driven world?
45:00 How AI reshapes creativity, journalism, and truth
53:00 The quiet assumptions built into AI systems
55:00 Why optimism and fear both miss the full picture
59:00 What responsibility do users have?
01:01:00 The most important question we're not asking about AI

Connect with Gregory:
LinkedIn: https://www.linkedin.com/in/radiogrego/
Instagram: https://www.instagram.com/radiogrego/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What happens to jobs, money, and meaning when intelligence becomes cheaper than labor and humans are no longer the smartest ones in the room?

On this episode of Digital Disruption, we're joined by Emad Mostaque, founder of Stability AI and a leading voice in the global AI revolution.

Emad Mostaque is a businessman, mathematician, and former hedge fund manager. He is the co-founder and former CEO of Stability AI, the company behind the popular text-to-image generator Stable Diffusion. With a master's degree in mathematics and computer science from Oxford, Emad has contributed significantly to artificial intelligence. His vision for Stability AI was to "build the foundation to activate humanity's potential" through open-source generative AI.

Emad sits down with Geoff to explore a future that may arrive far sooner than most people expect. He argues that within the next 1,000 days, artificial intelligence will fundamentally reshape the global economy, upending work, capitalism, enterprise software, and even how we define human value. Drawing from his book The Last Economy, Emad lays out a stark and deeply thought-provoking framework for understanding what comes next when cognitive labor becomes economically irrelevant. This conversation explores the inevitabilities of exponential AI progress, including why intelligence is becoming "too cheap to measure," how AI agents will replace many jobs done behind a screen, and the coming shift from human-plus-AI teams to AI-only systems. Beyond the economics, Emad also tackles the human question: where meaning comes from in a world where AI outperforms us at most cognitive tasks. He argues that resilience in the AI age will depend less on job titles and more on community, networks, relationships, and how deeply individuals engage with the technology itself.

In this video:
00:00 Intro
04:30 What is "The Last Economy"?
08:45 Intelligence becomes too cheap to measure
13:30 The three possible AI futures
18:00 Are humans becoming the weakest link?
22:15 The rise of economic agents
27:00 Digital doubles and the end of white-collar work
31:45 Enterprises racing toward zero employees
36:30 Why AI is cheaper than human labor (by orders of magnitude)
41:15 Software, SaaS, and the collapse of enterprise moats
46:00 The internet after AI agents
50:15 Who controls the "AI next to you"?
54:30 Open-source vs. Big Tech AI
58:45 The first one-person billion-dollar company
1:03:30 What humans are still for
1:07:00 How to prepare for the AI economy now

Connect with Emad:
LinkedIn: https://www.linkedin.com/in/emad-mostaque-9840ba274/?originalSubdomain=uk
X: https://x.com/EMostaque
Facebook: https://www.facebook.com/mostaquee/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is artificial intelligence humanity's greatest salvation, or the most dangerous force we've ever unleashed?

Artificial intelligence is no longer a future concept; it's a force already reshaping geopolitics, economics, warfare, and the human experience itself. In this year-in-review episode of Digital Disruption, we bring together the most provocative, conflicting, and urgent ideas from the past year to confront the biggest question of our time: What does AI actually mean for humanity's future?

Across more than 40 conversations with leading technologists, journalists, researchers, and futurists, one theme dominated every debate: AI. Some guests argue that artificial general intelligence (AGI) and superintelligence could trigger an extinction-level event. Others believe AI may usher in an era of total abundance, solving humanity's hardest problems. And still others claim today's AI hype is little more than marketing smoke and mirrors. This episode puts those worldviews head-to-head.

In this episode:
00:00 The AI singularity is here
05:00 Existential threat or greatest opportunity?
10:00 Why no one agrees on AI's future
15:00 The race toward AGI and superintelligence
20:00 The control problem nobody has solved
25:00 Intelligence has no morality
30:00 Capitalism, venture capital, and the AI arms race
35:00 Is AI just a marketing illusion?
40:00 Generative AI: Power, limits, and misuse
45:00 Autonomous weapons and modern warfare
50:00 Fear as the driver of dangerous innovation
55:00 Why AI is not like nuclear weapons
1:00:00 The first and second AI dilemmas
1:05:00 Handing decisions over to machines
1:10:00 Collapse, abundance, or course correction?

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Are companies preparing for an AI-powered future, or reacting out of fear of being left behind?

Looking ahead to 2026, Geoff Nielson and Jeremy Roberts sit down for an unfiltered conversation about artificial intelligence, the economy, and the future of work. As AI hype accelerates across markets, boardrooms, and headlines, they ask the hard questions many leaders and workers are quietly worrying about: Are we in an AI bubble? If so, what happens when expectations collide with reality?

This episode explores whether today's massive investment in AI, GPUs, infrastructure, copilots, and generative tools is laying the foundation for long-term value, or repeating the familiar patterns of past tech bubbles like the dot-com boom and the subprime mortgage crisis. Geoff and Jeremy break down why traditional metrics like price-to-earnings ratios matter, why Nvidia and Big Tech dominate the narrative, and why the real risk may not be collapse but widespread underperformance.

The conversation goes far beyond markets. They dig into the impact of AI on jobs, layoffs, and corporate restructuring, challenging the idea that AI is "taking jobs" versus being used as convenient cover for economic tightening. From IT, HR, and operations to customer-facing roles, they examine how AI could reshape workforce composition, accelerate automation, and create a new and potentially unsettling employment equilibrium. You'll also hear a candid critique of how organizations are actually using AI today and what comes next in 2026.

Tech Trends Report 2026: https://www.infotech.com/research/ss/tech-trends-2026?utm_source=youtube&utm_medium=social&utm_campaign=research

In this video:
00:00 Just add AI to everything?
03:45 Looking ahead to 2026: Nobody knows what's coming
07:10 Are we in an AI bubble?
12:30 Comparing AI to the dot-com and 2008 crashes
18:10 Nvidia, GPUs, and the AI gold rush
24:20 Why AI infrastructure may be ahead of real-world use cases
30:40 Markets untethered from reality
36:50 Is AI really taking jobs, or is something else happening?
43:30 The real employment question for 2026
49:40 Corporate bloat, back-office roles, and automation
56:10 Why most AI projects fail to deliver value
1:02:45 From productivity theater to real ROI
1:09:20 Faster horses vs. real cars in AI
1:15:40 AI 2.0: Agents, experiments, and what comes next
1:22:10 The real risk ahead: Underperformance, not collapse

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Are we using AI in a way that actually makes us smarter, or are we unknowingly making ourselves less capable, less curious, and easier to automate?

On this episode of Digital Disruption, we are joined by artificial intelligence expert and neuroscientist Dr. Vivienne Ming.

Over her career, Dr. Vivienne Ming has founded six startups, been chief scientist at two others, and founded The Human Trust, a philanthropic data trust and "mad science incubator" that explores seemingly intractable problems, from a lone child's disability to global economic inclusion, for free. She co-founded Dionysus Health, combining AI and epigenetics to invent the first-ever biological test for postpartum depression and change the lives of millions of families. She also develops AI tools for learning at home and in school, models of bias in hiring and promotion, and neurotechnologies to treat dementia and TBI. Vivienne was named one of "10 Women to Watch in Tech" by Inc. Magazine and one of the BBC's 100 Women in 2017. She is featured frequently for her research and inventions in the Financial Times, The Atlantic, Quartz, and the New York Times.

Dr. Ming sits down with Geoff to unpack one of the most misunderstood truths about artificial intelligence: AI isn't here to replace your thinking; it's here to challenge it. And whether you grow or get left behind depends entirely on how you choose to engage with it. She reveals why most organizations, and most individuals, are using AI in the worst possible way. Instead of creating leverage, they're creating "work slop," cognitive dependency, shallow automation, and declining human capability. She explains why the real competitive advantage in the AI age comes from productive friction, creative complementarity, and teams that know how to use AI to explore ill-posed problems: the ambiguous, uncertain, high-value challenges machines can't solve on their own. From how to robot-proof your company, to why AI tutors fail when they give answers, to the science of courage, reward systems, and organizational culture, this conversation is one of the most honest explorations of the future of human capability in an AI-saturated world.

In this video:
00:00 Intro
02:30 The real value of hybrid intelligence
05:00 Cognitive automation vs. true complementarity
08:20 Ill-posed problems: where humans still win
12:10 What elite performers really do differently
16:00 The paradox of AI: why more automation creates more work
18:30 How hybrid teams beat prediction markets
20:50 Inequality & imagination disease in AI
23:10 AI tutors & the golden rule: never give the answer
28:00 The nemesis prompt: how to robot-proof yourself
44:20 Courage, ethics & reward structures in organizations
54:00 Using AI without losing the human story
01:06:30 How to robot-proof your company

Connect with Vivienne:
Website: https://socos.org/about-vivienne
LinkedIn: https://www.linkedin.com/in/vivienneming/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
If AI is becoming a “playground” for experimentation, are today’s organizations bold enough to explore it or are they still too afraid to try?On this episode of Digital Disruption, we are joined by Kenneth Cukier, Deputy Executive Editor at The Economist and bestselling author.Kenneth Cukier is the Deputy Executive Editor at The Economist. He is the author of several books on technology and society, notably “Framers” on the power of mental models and the limitations of AI, with Viktor Mayer-Schönberger and Francis de Vericourt, as well as “Big Data: A Revolution That Transforms How We Live, Work and Think” with Viktor. It was a NYT bestseller translated into over 20 languages, and sold over two million copies worldwide. It won the National Library of China’s Wenjin Book Award and was a finalist for the FT Business Book of the Year. Kenn also coauthored a follow-on book, “Learning with Big Data: The Future of Education”. He has been a frequent commentator on CBS, CNN, NPR, the BBC and was a member of the World Economic Forum’s global council on data-driven development.Kenneth has spent decades at the intersection of AI, journalism, business strategy, and global policy. In this conversation, he sits down with Geoff to share candid insights on how AI is reshaping organizations, leadership, economics, and the future of work. He breaks down the real state of AI, what’s hype, what’s real, and what it means for workers, leaders, and companies. Kenneth explains how AI is shifting from automating tasks to expanding the frontier of knowledge, why today’s multi-trillion-dollar AI investment wave is both overhyped and underhyped, and how everything from healthcare to management is poised to transform. 
This episode explores why most companies should treat AI as a “playground” for experimentation, how The Economist is using generative AI behind the scenes, the human skills needed to stay competitive, and why great leadership now requires enabling curiosity, psychological safety, and responsible innovation. Kenneth also unpacks the growing “AI-lash,” the limits of GDP as a measure of progress, and why the organizations that learn fastest, not the ones that simply know the most, will win the future.

In this episode:
00:00 Intro
05:00 AI today: Overhyped, underhyped, or both?
10:00 From Big Data to LLMs: How we got here
15:00 The $3 trillion AI wave: What it really signals
20:00 Automation vs. knowledge expansion
25:00 Inside The Economist: How they actually use generative AI
30:00 Why “more content” isn’t a strategy
35:00 Leadership in the age of AI: Curiosity, judgment, culture
40:00 The skills humans must keep and why they matter more now
45:00 The rise of the “AI-lash” and public skepticism
50:00 GDP, progress, and what we’re measuring wrong
55:00 Why the fastest learners win the future
1:01:00 What can this technology really do?

Connect with Kenneth:
Website: http://www.cukier.com/
LinkedIn: https://www.linkedin.com/in/kenneth-cukier-9ab56335/
X: https://x.com/kncukier

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What does the future of work really look like when AI, identity, and culture collide?

On this episode of Digital Disruption, we’re joined by Dr. Anne-Marie Imafidon, Chair of the Institute for the Future of Work.

Anne-Marie is a leading voice in the tech world, known for her work as a trustee at the Institute for the Future of Work and as the temporary arithmetician on Channel 4’s Countdown. A former child prodigy who passed A-level computing at 11 and earned a Master’s in Maths and Computer Science from Oxford by 20, she has since spoken globally for companies including Facebook, Amazon, Google, and Mastercard. She hosts the acclaimed Women Tech Charge podcast and is a sought-after presenter who has interviewed figures such as Jack Dorsey and Sir Lewis Hamilton. Anne-Marie has received multiple honorary doctorates, serves on several national boards, and continues to champion diversity and innovation in tech. Her latest book, She’s In CTRL, was published in 2022.

Dr. Anne-Marie joins Geoff to break down how AI, big data, quantum, and the wider “Fourth Industrial Revolution” are transforming jobs, workplaces, identity, culture, and society. From redefining long-held beliefs about “jobs for life” to the cultural fractures emerging between companies, workers, and society, Dr. Anne-Marie goes deep on what’s changing, what still isn’t understood, and what leaders must do right now to avoid being left behind. This conversation dives into why most AI use cases are still limited to fraud detection and customer service, and the hidden cultural blockers preventing real transformation.
She emphasizes the danger of hype cycles, how to stay focused on real value, and how to build organizations that can experiment, learn, and make “high-quality mistakes.”

In this episode:
00:00 Intro
00:31 The future of work: What’s changing now
02:32 Generational identity, legacy jobs & why work is no longer “for life”
04:36 Work identity crisis & fragmentation of modern careers
07:45 Rethinking digital transformation & the fourth industrial revolution
11:36 Why the institute avoids the AI hype & looks beyond it
13:39 AI hype vs. reality
17:50 High-quality mistakes
21:06 Tech design failures
23:18 Culture, customers & building organizations that reflect the real world
29:04 Destroying the “Einstein myth” & rewriting who tech is for
39:37 First-principles thinking
50:34 Norms, unintended consequences & system-level change
55:32 When will the dust settle? AI timelines, disruption & what’s next
57:28 Closing thoughts

Connect with Dr. Anne-Marie:
LinkedIn: https://www.linkedin.com/in/aimafidon/
Instagram: https://www.instagram.com/notyouraverageami/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
When intelligence becomes abundant, what happens to humanity’s purpose?

Andy Mills, the co-founder of The New York Times’ The Daily and creator of The Last Invention, joins us on this episode of Digital Disruption.

Andy is a reporter, editor, podcast producer, and co-founder of Longview. His most recent series, The Last Invention, explores the AI revolution, from Alan Turing’s early ideas to today’s fierce debates between accelerationists, doomers, and those focused on building the technology safely. Before that, he co-created The Daily at The New York Times and produced acclaimed documentary series including Rabbit Hole, Caliphate, and The Witch Trials of J.K. Rowling. A former fundamentalist Christian from Louisiana and Illinois, Andy now champions curiosity, skepticism, and the transformative power of listening to people with different perspectives, values that shape his award-winning journalism across politics, terrorism, culture wars, technology, and science.

Andy sits down with Geoff to break down the real debate shaping the future of AI. From the “doomers” warning of existential risk to the accelerationists racing toward AGI, Andy maps out the three major AI camps influencing policy, economics, and the future of human intelligence. This conversation explores why some researchers fear AGI, why others believe it will save humanity, how job loss and automation could reshape society, and why 2025 is becoming an “AI 101 moment” for the public. Andy also shares what he’s learned after years investigating OpenAI, Anthropic, xAI, and the people behind the AGI race. If you want clarity on AGI, existential risk, the future of work, and what it all means for humanity, this is an episode you won’t want to miss.

In this episode:
00:00 Intro
01:00 The three camps of AI: doom, acceleration, scouts
05:00 Why skeptics aren’t driving the AI debate
07:00 Job loss, productivity & “good” vs. “bad” disruption
09:00 Existential risk & why scientists are sounding alarms
12:00 The origins of doomers and accelerationists
17:00 How AI debates escalated after ChatGPT
22:00 Why 2025 is an AI “101 moment” for the public
24:00 The tech stack wars: OpenAI, Anthropic, xAI
28:00 Why leaders joined the AI race
30:00 The accelerationist mindset
33:00 Contrarians, symbolists & the forgotten history of AI
39:00 Big Tech, branding & why AI CEOs avoid open conflict
42:00 The closed group chats of AI’s elite builders
46:00 Sci-fi narratives vs. real-world intelligence risks
52:00 The AI bubble & why adoption is unlike any tech before
01:00:00 Are we entering a Wright brothers-to-moon landing era?
01:10:00 What AGI means for capitalism, work & purpose
01:18:00 Why public debate needs to start now
01:20:00 What happens next

Connect with Andy:
Website: https://www.andymills.work/about

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?

On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.

Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company’s core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center’s Computational Sciences Division, where he served as NASA’s senior computer scientist and received the NASA Exceptional Achievement Award in 2001.

He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the world’s most widely used textbook in the field of artificial intelligence.

Peter sits down with Geoff to separate fact from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what truly matters most: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.

In this episode:
00:00 Intro
01:00 How AI evolved since Artificial Intelligence: A Modern Approach
03:00 Is AGI already here? Norvig’s take on general intelligence
06:00 The surprising progress in large language models
08:00 Evolution vs. revolution
10:00 Making AI safer and more reliable
12:00 Lessons from social media and unintended consequences
15:00 The real AI risks: misinformation and misuse
18:00 Inside Stanford’s Human-Centered AI Institute
20:00 Regulation, policy, and the role of government
22:00 Why AI may need an Underwriters Laboratories moment
24:00 Will there be one “winner” in the AI race?
26:00 The open-source dilemma: freedom vs. safety
28:00 Can AI improve cybersecurity more than it harms it?
30:00 “Teach Yourself Programming in Ten Years” in the AI age
33:00 The speed paradox: learning vs. automation
36:00 How AI might (finally) change productivity
38:00 Global economics, China, and leapfrog technologies
42:00 The job market: faster disruption and inequality
45:00 The social safety net and future of full-time work
48:00 Winners, losers, and redistributing value in the AI era
50:00 How CEOs should really approach AI strategy
52:00 Why hiring a “PhD in AI” isn’t the answer
54:00 The democratization of AI for small businesses
56:00 The future of IT and enterprise functions
57:00 Advice for staying relevant as a technologist
59:00 A realistic optimism for AI’s future

Connect with Peter:
LinkedIn: https://www.linkedin.com/in/pnorvig/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is “AI-first” the future of business or just another tech buzzword?

On this episode of Digital Disruption, we’re joined by former Google Chief Decision Scientist and CEO of Kozyr, Cassie Kozyrkov.

Cassie is best known for founding the field of Decision Intelligence and serving as Google’s first Chief Decision Scientist, where she helped lead the company’s AI-first transformation. A sought-after advisor and keynote speaker, Cassie has guided organizations including Gucci, NASA, Meta, Spotify, Salesforce, and GSK on AI strategy. She combines deep technical expertise with theater-trained charisma to make complex concepts engaging and actionable for executive and general audiences alike, delighting audiences in over 40 countries across all seven continents, including stages at the UN, WEF, Web Summit, and SXSW.

Cassie sits down with Geoff to unpack the hidden cost of the “AI-first” hype, the dangers of AI infrastructure debt, and why real AI readiness starts with people, not technology. She reveals how leaders can architect their organizations for innovation, build human-in-the-loop systems, and create cultures that embrace experimentation instead of fearing mistakes.

Cassie exposes why 95% of organizations fail to achieve measurable ROI from AI and how leaders can finally bridge the AI value gap. This conversation dives into why AI success isn’t about tools; it’s about leadership, measurement, and mindset.

Most organizations chasing “AI transformation” see no measurable ROI, not because the technology fails, but because leaders are still measuring value the old way.
Generative AI success is hard to quantify when there isn’t a single “right answer,” yet many businesses keep trying to apply outdated metrics to a completely new paradigm.

In this video:
00:00 Intro
00:44 The generative AI value gap: Why 95% get no ROI
02:20 The paradox of AI productivity
05:38 Why measuring AI value is harder than we think
12:04 Leadership abdication: “Just sprinkle AI on everything”
15:10 AI infrastructure debt explained
20:17 What real AI readiness looks like (beyond tech)
23:42 Humans as part of AI infrastructure
28:00 Why “AI-first” isn’t one-size-fits-all
33:31 Building human judgment into AI systems
36:19 The risks of scaling too fast
41:34 Automation vs. augmentation: where leaders go wrong
44:00 The “do the work” approach to AI success
48:35 The recipe for an AI-ready organization
53:40 Guardrails, governance, and security in AI systems
57:00 Thinking probabilistically: a new mindset for leaders
1:03:20 The human side of AI transformation
1:06:45 Leading through uncertainty

Connect with Cassie:
Website: https://www.kozyr.com/about
LinkedIn: https://www.linkedin.com/in/kozyrkov/
X: https://x.com/decisionleader
YouTube: https://www.youtube.com/c/Kozyrkov

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Why is adaptability the real superpower for leaders in the digital age?

On this episode of Digital Disruption, we’re joined by Erik Qualman, a digital leadership expert, best-selling author, and motivational speaker.

Erik is a 5x #1 bestselling author and keynote speaker who has inspired audiences in over 55 countries and reached 50 million people. Voted the #2 Most Likeable Author in the World behind J.K. Rowling, his work Socialnomics has been featured on 60 Minutes, in The Wall Street Journal, and used by organizations from the National Guard to NASA. A professor of Digital Leadership at Northwestern University, Qualman’s research and courses are studied at 500+ universities worldwide. Through his animation studio, he has partnered with brands like Disney, Oreo, Chase, and Cartier. A former MIT and Harvard edX professor and honorary doctorate recipient, Qualman is also the creator of the bestselling board game Kittycorn.

Erik joins Geoff Nielson to break down what it really means to be AI-ready. He reveals why the leaders who know how to leverage AI and adapt fast will replace those who don’t. He explains why AI is overhyped in the short term but underhyped in the long term, and how the most successful leaders of the next decade will blend Flintstones-level human connection with Jetsons-era innovation. Erik explains why adaptability and emotional intelligence (EQ) are the new competitive edge in the age of artificial intelligence. This conversation explores how AI can remove friction, save time, and ironically help us become more human, while also exploring the guardrails needed for responsible tech adoption.
Erik also shares lessons from advising some of the world’s top brands, including Facebook, Disney, and Sony, and explains why the future favors those who fail fast, fail forward, and fail better.

In this video:
00:00 Intro
02:00 The “Flintstones First” approach to digital leadership
04:40 How AI helps us become more human
06:15 Winners, losers, and adaptability in the AI era
08:30 Emotional intelligence and leadership in a tech-driven world
11:00 The need for guardrails in AI and social media
13:00 Teaching AI and digital leadership at Northwestern
15:00 How technology is transforming the classroom
17:45 The 70/30 rule: what changes vs. what never will
19:00 Core advice for leaders and digital innovators
21:30 Avoiding hype: testing new tech like AI and Clubhouse
23:00 Lessons from Montblanc and the origins of “Digital Leadership”
25:00 The Disney+ story: digital transformation done right
27:00 Building a culture of “fail fast, fail forward, fail better”
30:00 Balancing the Flintstones and the Jetsons

Connect with Erik:
Website: https://equalman.com/
LinkedIn: https://www.linkedin.com/in/qualman/
X: https://x.com/equalman
YouTube: @equalman

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Could AI’s biggest impact be economic, not technological?

On this episode of Digital Disruption, we’re joined by the founder of EZPR and host of the Better Offline podcast, Ed Zitron.

Ed is a technology writer, public relations expert, and podcaster known for his critical takes on the tech industry and its biggest players. His work has appeared in leading outlets including The Atlantic, Business Insider, and TechCrunch. He is the author of the popular newsletter Where’s Your Ed At, launched in 2020, where he explores the intersection of technology, business, and culture. Ed also hosts the Better Offline podcast, delving into the realities of the tech industry and the ripple effects of the AI boom. With his candid insights and thoughtful commentary, Ed has become a trusted voice and sought-after speaker within the tech community.

One of the most outspoken critics of the AI boom, Ed Zitron joins Geoff to cut through the noise and talk about the truth behind generative AI. Ed breaks down why he believes AI “doesn’t work,” what’s really driving the trillion-dollar hype, and why big tech, media, and investors may be steering straight into the next Enron moment. This conversation unpacks why large language models fall short, how Microsoft’s AI Copilot has failed to deliver, and how corporate opportunism and investor “vibes” are fueling one of the biggest speculative bubbles in tech history. They also explore the “Enron-like” risks in the AI hardware race, the potential fallout for retail investors and startups, and tackle one of tech’s most misunderstood narratives, the myth of AI-driven job loss, revealing who’s really being replaced.
In this episode:
00:00 Intro
00:36 “AI doesn’t work”
02:05 The limits of LLMs
04:33 Microsoft Copilot and the illusion of productivity
07:05 The AI job myth: Who’s really being replaced?
10:00 CEOs, opportunism, and the false narrative of AI efficiency
12:00 The Salesforce example: Lies, hype, and failure to deliver
14:00 What AI can actually do
18:00 The trust problem
19:45 Media complacency and tech industry collusion
22:00 Microsoft, Nvidia, and false growth
25:00 The Enron parallels
28:30 Why investors are rewarding bad behavior
31:00 Who gets hurt when the AI bubble bursts?
35:00 Unsustainable startups and rising model costs
38:00 The coming collapse of AI infrastructure
40:00 What business leaders should do now to avoid being burned
44:30 The harsh truth about ChatGPT
49:00 What real innovation looks like: Batteries, EVs, AR, and more
54:00 The future of work beyond AI hype

Connect with Ed:
LinkedIn: https://www.linkedin.com/in/edzitron/
X: https://x.com/edzitron
Instagram: instagram.com/edzitron

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is the AI arms race between tech giants and nations pushing us toward a dangerous future?

On this episode of Digital Disruption, we’re joined by the founder of SingularityNET and the pioneering mind behind the term Artificial General Intelligence (AGI), Dr. Ben Goertzel.

Dr. Ben Goertzel is a leading figure in artificial intelligence, robotics, and computational finance. Holding a Ph.D. in Mathematics from Temple University, he has been a pioneer in advancing both the theory and practical applications of AI, particularly in the pursuit of artificial general intelligence (AGI), a term he helped popularize. He currently leads the SingularityNET Foundation, TrueAGI, the OpenCog Foundation, and the AGI Society, and has organized the Artificial General Intelligence Conference for over fifteen years. A co-founder and principal architect of OpenCog, an open-source project to build human-level AI, Dr. Goertzel’s work reflects a singular mission: to develop benevolent AGI that advances humanity’s collective good.

Dr. Goertzel sits down with Geoff to share his insights on the accelerating progress toward AGI, what it truly means, and how it could reshape human life, work, and consciousness. He discusses the role of Big Tech in shaping AI’s direction, and how corporate incentives and commercialization are both driving innovation and limiting true AGI research. From DeepMind and OpenAI to decentralized AI networks, Dr. Goertzel reveals where the real breakthroughs might happen. The conversation also explores the ethics of AI, the dangers of fake democratization and false compassion, and why humanity must shape AI’s evolution with empathy and awareness.

In this episode:
00:00 Intro
00:21 What is Artificial General Intelligence (AGI)?
01:10 The pace of AI progress and the hype cycle
05:44 The path from human-level AGI to superintelligence
09:20 How close are we to AGI?
13:08 Transformer vs. multi-agent systems
14:05 Which AI labs might strike AGI gold? (DeepMind, OpenAI, Anthropic)
17:07 Big Tech’s innovator’s dilemma and why true AGI may come from elsewhere
20:20 Predictive coding
22:59 Why Big Tech resists new AI training paradigms
29:16 Imagining life after AGI: optimism, transhumanism, and choice
33:29 Navigating the transition from AGI to ASI
37:55 Decentralized vs. centralized control of AGI
43:20 Who (or what) will be in control
47:19 Risks of power concentration in early AGI development
51:01 Who should own and guide AGI?
53:06 Why we need participatory governance for intelligent systems
54:47 The danger of fake compassion and false democratization
1:00:50 Finding meaning in the age of intelligent machines
1:04:13 How AGI could help humanity focus on inner growth
1:07:20 Learning how to learn: the last human advantage

Connect with Dr. Goertzel:
LinkedIn: https://www.linkedin.com/in/bengoertzel/
X: https://x.com/bengoertzel

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
How are AI and automation shaping both the attack and defense sides of cybersecurity?

On this episode of Digital Disruption, we’re joined by the founder and CEO of Have I Been Pwned, Troy Hunt.

Troy Hunt is an Australian security researcher and the founder of the data breach notification service Have I Been Pwned. With a background in software development specializing in information security, Troy is a regular conference speaker and trainer. He frequently appears in the media, collaborates with government and law enforcement agencies, and has appeared before the U.S. Congress as an expert witness on the impact of data breaches. Troy also serves as a Microsoft Regional Director (an honorary title) and regularly blogs at troyhunt.com from his home on Australia’s Gold Coast.

Troy sits down with Geoff to share eye-opening insights on the evolving threat landscape of 2025 and beyond. Despite the rise of AI and automation, Troy emphasizes that many of today’s most damaging data breaches and ransomware attacks still stem from basic human error and social engineering. He explains how ransomware has shifted from encrypting files to threatening data disclosure, making it harder for organizations to manage risk and justify ransom payments. The conversation also touches on how breach fatigue and apathy have led many individuals and businesses to underestimate cybersecurity risks, even as incidents rise globally. He also highlights how AI tools are being weaponized by both defenders and attackers, and argues that cybersecurity isn’t about perfect protection but about finding equilibrium: balancing usability, education, and risk mitigation.

In this episode:
00:00 Intro
01:15 Why human weakness beats AI
02:00 Young hackers and the rise of Scattered Spider
04:00 From hacktivists to career criminals
05:00 Ransomware’s new tactics
07:30 Should companies pay the ransom?
10:20 Can you ever be fully protected? Defense vs. response
11:20 How to convince boards cybersecurity is worth the money
14:20 Breach fatigue and public apathy
18:00 Reframing what “sensitive data” really means
20:00 Passwords, reuse, and the real risk equation
24:00 Biometrics, Face ID & the future of authentication
26:30 Threat modeling 101
27:30 Barriers to cyber preparedness
29:30 How Have I Been Pwned works
32:00 The future of data breaches
38:00 Microsoft’s role in the security ecosystem
40:30 AI hype vs. reality in cybersecurity
43:00 When AI helps hackers
52:00 Why transparency still matters after every breach
54:00 Accepting risk, building resilience

Connect with Troy:
Website: https://www.troyhunt.com/
LinkedIn: https://www.linkedin.com/in/troyhunt/
X: https://x.com/troyhunt

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG