Agents Of Tech
Author: WebsEdge
© WebsEdge
Description
*Where big questions meet bold ideas*
Agents of Tech is a video podcast exploring the biggest questions of our time—featuring bold thinkers and transformative ideas driving change. Perfect for the curious, the thoughtful and anyone invested in what’s next for our planet.
Hosted by Stephen Horn (former BBC producer turned entrepreneur and CEO), Autria Godfrey (Emmy Award-winning journalist) and Laila Rizvi (neuroscience and tech researcher), the show features conversations with trailblazers reshaping the scientific frontier.
40 Episodes
Diplomacy used to be about treaties and territory – now it seems it's more about data, algorithms, and the companies that control them. At Donald Trump’s inauguration, Silicon Valley’s most powerful figures stood steps away, a sign that Big Tech now sits at the centre of global power. Tech companies pervade everyday life and wield power once reserved for nation states. Are the people in charge of global power those elected to office, or those appointed to positions within those companies? To explore how AI is reshaping diplomacy, from negotiation and representation to influence operations and disinformation, hosts Autria Godfrey, Stephen Horn, and Laila Rizvi interview Dr. Jennifer Cassidy, AI & Diplomacy, University of Oxford, about:
- How AI is transforming diplomacy’s core functions
- Why Big Tech now rivals governments in geopolitical influence
- The rise of “digital sovereigns” and private power
- When former political leaders move into tech, where accountability goes
- Democratic versus authoritarian uses of AI
- Why global AI governance is still largely non-binding

For Dr. Cassidy, diplomacy rests on three timeless pillars: communication, representation, and negotiation. AI “is not demolishing these pillars, but quietly rewiring the architecture that holds them together… Predictive analysis now allows ministries to read the global mood” almost in real time. The United Nations and the World Bank use AI models that monitor food prices, rainfall patterns, and social media data to anticipate instability “up to 6 weeks before that instability might actually break out.” NATO employs machine learning to map Russian disinformation. “What we’re seeing here is the move from reactive diplomacy… to anticipatory diplomacy.”

One of the most pressing questions is whose AI is being used to create “sovereign diplomatic AI systems.” France and the EU train their AI on Mistral, a French company. US AI models are OpenAI's and Anthropic's. Microsoft's Azure Cloud hosts data for NATO and national governments. These companies have become “digital sovereigns” – private actors who control the three levers of power that were once defined by the state: information, infrastructure and interpretation. Former politicians like Nick Clegg (Meta) and Rishi Sunak (Microsoft) represent a “circuit of influence” where “experience, access, and authority are just flowing continuously between capitals and campuses in Silicon Valley.” While “democracies do need experienced voices helping to steer the tech transition,” we must ensure that “when the expertise moves, accountability moves with it.”

What about bad actors using AI? Jennifer says we’ve seen this in elections in the US and around the world. In China, “predictive policing algorithms are tracking not just where crime might occur, but who might commit it… Authoritarian regimes are combining facial recognition, travel data, and digital behaviour into vast surveillance scores.” It is “digital authoritarianism in its most refined form… controlled by prediction, rather than force.”

Dr. Cassidy concludes, “We have a very, very, very long way to go regarding the governance and structure of, and frameworks for AI… a difficult task… that has to be done.”

What’s your take? Share your thoughts in the comments and subscribe for more on AI, geopolitics and global power.

CHAPTERS
00:00:00 Tech, Trump and the New Global Power Game
00:01:26 Do Tech Giants Now Run Foreign Policy?
00:04:00 How AI Is Reshaping Diplomacy
00:06:37 Why Nations Are Building Their Own AI Models
00:09:18 Have Big Tech Companies Become Sovereigns?
00:12:33 From Prime Minister to Big Tech: The Revolving Door
00:16:46 AI Power Politics Beyond the West
00:19:43 AI for Good or Digital Authoritarianism?
00:22:09 Who Sets the Rules for AI?
00:24:48 Closing Thoughts with Dr. Jennifer Cassidy
00:25:05 Debrief: Authoritarian Drift and Regulation Fights
00:27:13 AI Ministers, Echo Chambers and What Comes Next
Quantum computing may finally be ready for the real world - and it could power the next wave of AI. In this episode of Agents of Tech we sit down with Oxford Quantum Circuits (OQC) CEO Gerald Mullally to explore how OQC is integrating quantum computers into data-centres in London, Tokyo and New York.
Larry Ellison built Oracle into a cornerstone of the modern tech economy. Now he is making a $2.5 billion bet on Oxford, backing the Ellison Institute of Technology at Oxford to fuse AI, medicine and sustainability in one global hub.

In this episode of Agents of Tech, Autria Godfrey, Stephen Horn and Laila Rizvi sit down in Oxford with Professor Santa Ono, Global President of the Ellison Institute of Technology (EIT), to ask a simple question: Can Oxford really become Europe’s Silicon Valley?

We explore:
- Why Ellison chose Oxford and the UK over Chicago or California
- How EIT plans to recruit 7,000 world-class scientists and double Oxford’s research base
- The model of science-led capitalism and why commercialization is central to Ellison’s vision
- The UK’s unique advantage in health data and biobanks (NHS data, UK Biobank, Protein Data Bank)
- How AI, machine learning and robotics will change drug discovery, pandemics and healthcare
- The relationship between EIT and Oracle, and how independent the institute really is
- Parallels and contrasts with the Bill & Melinda Gates Foundation model of philanthropy
- What this means for the UK’s role between the US and China in the global innovation race

Professor Ono explains why he believes the UK is now one of the best places in the world to build AI-driven science: from single-payer health data to a fast-growing ecosystem of serial entrepreneurs. He also addresses questions about data privacy, ethics, bioterrorism risks and public concerns about American tech money in historic British institutions.

If you care about:
- How AI and health data will reshape medicine
- Whether Oxford and Cambridge can anchor Europe’s answer to Silicon Valley
- What it really takes to build a global science and technology campus at scale
…this conversation is for you.

Tell us in the comments: Do you think Ellison’s Oxford gamble is a bold new model for global science, or another moonshot that will be hard to scale?

CHAPTERS
00:00 Larry Ellison’s $2.5B bet on Oxford
00:35 Agents of Tech intro
01:22 Why Oxford?
02:45 Interview begins: Santa Ono
03:01 Ellison’s vision for EIT
05:11 Scaling talent and entrepreneurship
05:53 Science capitalism vs traditional philanthropy
07:52 Why base EIT in the UK
10:38 NHS data, privacy and AI concerns
12:55 AI’s impact on jobs and drug discovery
15:12 Commercialisation and scientific breakthroughs
17:38 Building a new global research hub
20:26 AI geopolitics and the UK’s role
21:03 EIT as a global model
22:50 Interview ends
23:01 Post-interview reflections
24:41 Closing and invitation to Larry Ellison
Is the U.S. LOSING the AI race to China?

China and the U.S. are neck and neck in the AI race for global dominance. Former OpenAI board member Helen Toner (now at Georgetown’s CSET) joins us in Washington, D.C. to break down China vs U.S. strategies—open-source diffusion vs big tech global dominance—and what “winning” actually means. Helen has recently spent time in China and works at the center of U.S. AI policy—offering a rare inside view of both ecosystems and who’s truly ahead.

Helen explains:
- Who’s ahead right now and how to measure it (frontier AI vs adoption/diffusion)
- Open-source vs closed: DeepSeek, Qwen, Kimi, Gemma, Llama vs OpenAI, Anthropic, Google
- Compute & chips: NVIDIA dependence, export controls, and why compute concentration matters
- AGI timelines: whether “AI 2027” holds up and why short timelines cooled after GPT-5
- “AI+” strategy: applying AI to manufacturing, healthcare, and finance vs pure frontier bragging rights
- What governments should do now: transparency, auditing, AI literacy, and measurement science

Who do you think is winning and WHY – China or the U.S.? Drop one evidence-backed reason (links welcome). We’ll pin the best reply. Don’t forget to like and subscribe for more unfiltered conversations on AI, tech, and society.

Chapters
00:00 – Two strategies, one AI race
01:00 – Open-source China vs Big-Tech USA
03:37 – Not one race: choose your finish line
04:04 – Who’s actually open? DeepSeek, Qwen/Kimi, Llama, Gemma, GPT-OSS
06:26 – Frontier bragging rights vs real-world adoption
07:46 – China’s “AI Plus” play (AI + industry)
10:06 – Is the US still ahead at the frontier?
12:04 – GPT-5 reality check & AGI timelines
20:58 – Compute decides: chips, export controls, auto-ML engineers
23:04 – What we need now: transparency, audits, AI literacy
28:02 – Standards in practice: de-facto beats de-jure
30:56 – Next 5 years: closed peaks, open bow wave
37:55 – Final take: which path wins?

#OpenAI #HelenToner #ai #GPT5 #OpenSource #podcast #China #DeepSeek
Has OpenAI made the WRONG bet? Gary Marcus argues that OpenAI is taking the wrong approach - and the AI bubble is real. We walk through why GPT-5 underwhelmed, where the scaling paradigm breaks, and what might really get us to AGI.

Gary explains:
– Why the economics don’t add up for current AI
– Why GPT-5 isn’t as good as expected
– The core LLM limitations and why the scaling paradigm fails
– Why AI won’t take your job in the near term
– A practical path to AGI (hybrid / neuro-symbolic, world models)

We also debate whether investors are over- or under-valuing AI, what productivity gains are real, and how long it will take before AI truly replaces jobs.

References discussed in this episode:
– Gary Marcus, The Algebraic Mind: Integrating Connectionism and Cognitive Science (2001)
– Gary Marcus, “Deep Learning Is Hitting a Wall” (Nautilus, 2022)
– Gary Marcus, The Next Decade in AI (arXiv, 2020)
– Mike Dash, Tulipomania

Do you think Gary is right—or are we just getting started? Drop one strong piece of evidence either way (links welcome). We’ll pin the best reply. Don’t forget to like and subscribe for more unfiltered conversations on AI, tech, and society.

Chapters
00:00 – Is AI in a bubble?
00:30 – AI hype vs. reality
01:10 – GPT-5 launch: disappointment or progress?
03:50 – Can AI be both a revolution and a bubble?
05:40 – Productivity gains and investment hype
08:00 – Gary Marcus joins the conversation
09:30 – Why Gary calls himself a skeptic
11:15 – GPT-5 and the limits of scaling
14:00 – Financial reality of large language models
17:20 – “Deep Learning Is Hitting a Wall”
19:00 – Why hallucinations won’t go away
21:00 – Neuro-symbolic AI explained
24:00 – Building world models for AI
27:00 – Are AI valuations sustainable?
29:30 – Lessons from Tulipomania
31:30 – Will AI take all our jobs?
36:00 – What comes next for AI research
38:30 – Final thoughts

#OpenAI #GaryMarcus #ai #GPT5 #scaling #podcast
AI agents can’t easily talk to each other. To solve this, Cisco has joined forces with Google, Dell, Oracle, and Red Hat to launch AGNTCY, the open-source Internet of Agents. It’s designed so agents from different organizations can discover, trust, and work together - and it’s now donated to the Linux Foundation for neutral governance.

Vijoy Pandey, Senior VP and GM of Outshift by Cisco, explains how AGNTCY tackles interoperability, why secure messaging (via SLIM) matters, and how enterprises are already using multi-agent systems in the real world. We also look ahead to quantum networking - the next frontier Cisco and its partners are preparing for, connecting processors and data centers across vast distances.

👉 Tell us in the comments how you see open-source agents changing your industry.

Visit AGNTCY.org
Visit Outshift.com

Chapters
00:00 Cold open
00:34 Welcome and hosts
01:45 Why interoperability is the bottleneck
03:10 What AGNTCY enables
05:55 Trust, identity, and evaluation
12:40 SLIM and secure agent messaging
16:45 Real-world deployments
20:10 From pilots to production
21:55 Open source and Linux Foundation
25:20 Quantum networking outlook
28:50 Final thoughts and CTA

Guest: Vijoy Pandey, SVP and GM, Outshift by Cisco

#AI #Agents #Cisco #AGNTCY #Interoperability #OpenSource #LinuxFoundation #Quantum #AgenticAI #Outshift
AI is changing the way we do biology. But can it crack one of the toughest challenges in science — drug discovery?

In this episode of Agents of Tech, we speak with Professor Charlotte Deane, MBE, University of Oxford, one of the UK’s leading computational biologists. As a co-lead of OpenBind, Charlotte is helping build the global data infrastructure needed to accelerate drug discovery with AI. From the role of open science to the cultural gap in innovation between the US and UK, Charlotte explains why the UK, despite its size, continues to punch well above its weight in AI and computational biology.

📍 Recorded live at ISMB/ECCB in Liverpool, the world’s leading computational biology conference.

🔔 Subscribe to Agents of Tech for more conversations with the people shaping the future of AI, science, and society: https://www.youtube.com/@AgentsOfTech

#AI #DrugDiscovery #CharlotteDeane #UKAI #Biology #AgentsOfTech

00:00 Intro: AI’s toughest challenge – drug discovery
00:27 Welcome to Agents of Tech at ISMB/ECCB, Liverpool
01:00 Why is drug discovery so hard for AI?
02:00 The UK and AI: overlooked but world-class
05:57 Innovation culture: risk-taking in the UK vs US
08:00 Introducing Professor Charlotte Deane
09:12 What is the OpenBind consortium?
13:00 Pharma data vs open science data
15:59 How the UK punches above its weight in AI
19:20 Training the next generation of AI-literate scientists
22:00 How AI will change the way science is done
25:00 Can AI ask better scientific questions?
27:40 How AI is changing drug discovery workflows
32:00 Final thoughts from Charlotte Deane
34:00 Host debrief: UK AI, OpenBind, and the future of biology
Can artificial intelligence replace scientists? At Stanford University, Professor James Zou is leading research on AI scientists, virtual labs, and digital researchers that are already transforming biology and medicine.

In this episode of Agents of Tech, recorded live at ISMB/ECCB in Liverpool, James explains how his lab is building virtual research teams powered by AI. These “digital scientists” act like a real lab, with specialized roles in immunology, chemistry, and computational biology. They collaborate, design experiments, and even help discover potential Covid vaccine candidates.

We discuss:
- How AI schools train agents to become domain experts in days
- Why virtual conferences run by AI could change scientific publishing
- What James’s team learned from designing nanobody therapies with AI
- The future of human and AI collaboration in science

Read James Zou’s recent Nature paper on AI scientist agents: https://www.nature.com/articles/s41586-025-09442-9

Subscribe for more conversations with global AI and science leaders: https://www.youtube.com/@AgentsOfTech

#AI #Science #JamesZou #Stanford #VirtualLabs #ComputationalBiology #ArtificialIntelligence #MachineLearning

Chapters (time-coded)
00:00 – AI isn’t just a tool, it’s becoming the scientist
00:17 – Meet Professor James Zou of Stanford
00:41 – What are AI “virtual labs”?
01:07 – Can AI replace human researchers?
02:00 – The promise and fear of agentic AI
03:15 – Building AI teams with different expertise
04:45 – Human creativity vs virtual collaboration
05:36 – James Zou explains the concept of virtual labs
06:40 – Early success: AI scientists design Covid nanobody candidates
08:20 – Why virtual labs are more than large language models
09:40 – Specialized AI agents with domain expertise
11:05 – Human collaboration with AI scientists
12:20 – Filling critical expertise gaps with AI
13:45 – How virtual labs “teach themselves” through AI schools
15:20 – The problem of agreeable AIs and why critics are needed
17:00 – Bias in literature and how AI agents learn
18:45 – Trust and experimental validation in AI science
20:20 – Why human scientists still matter in the lab
21:10 – Next steps for Stanford’s virtual lab research
22:20 – Potential applications in biology, medicine, and beyond
23:20 – The future of AI-run conferences and publishing
25:10 – Explosion of research papers and the role of AI reviewers
26:00 – Reactions: Are AI scientists partners or competitors?
29:20 – What does AI mean for the future of human discovery?
30:00 – Closing thoughts and thanks to James Zou
In this exclusive Agents of Tech interview from ISMB/ECCB in Liverpool, we sit down with Dr. John Jumper, Nobel Laureate and leader of the groundbreaking AlphaFold project at Google DeepMind. Discover how AlphaFold reshaped molecular biology by accurately predicting protein structures, and hear Jumper's vision for using AI to not just predict biology, but design it. Join us as we explore the future of computational biology, scientific reasoning with AI, and what’s next for this Nobel Prize-winning scientist.

00:00 - Introduction to John Jumper and AlphaFold
00:29 - ISMB/ECCB Liverpool Overview
01:00 - AlphaFold's Impact on Structural Biology
02:10 - Understanding Protein Structures with AlphaFold
04:14 - Google's Role in Scientific Innovation
05:20 - AlphaFold’s Influence Beyond Biology
05:55 - Welcoming John Jumper
06:23 - Why ISMB/ECCB Conference?
06:55 - How Does AlphaFold Work?
08:46 - Surprising Applications of AlphaFold
09:17 - Importance of the Protein Data Bank (PDB)
12:00 - Industrial Science and the Nobel Prize
14:14 - Ingredients of AI: Data, Compute, Research
17:38 - Collaborative Teams in Modern Science
18:50 - Responsibility as a Public Intellectual
22:02 - Jumper’s Next Big Research Question
25:56 - Public Trust in AI
31:56 - Scientific Method and AI Predictions
34:01 - Concluding Thoughts

#AlphaFold #JohnJumper #GoogleDeepMind #AgentsOfTech #AI #ArtificialIntelligence #MachineLearning #ComputationalBiology #StructuralBiology #ScienceInnovation #ISMB #ISCB2025 #ECCB2025 #LiverpoolScience #NobelPrize #NobelPrize2024 #NobelLaureate #Podcast
In this episode of Agents of Tech, we dive into agentic AI, a new kind of artificial intelligence designed to enhance human agency rather than automate people out of the loop.

We’re joined by Dr. Niloufar Salehi, Assistant Professor at UC Berkeley and Chief Product Officer at Across AI. Her work spans healthcare, education, criminal justice, and the creative industries. She explains why most AI systems misunderstand how people actually work, and what it will take to build systems that empower rather than override human judgment.

Topics include:
- The rise of agentic AI and what it means for work
- Case studies in medicine, law, and content creation
- Why most automation fails in the real world
- The hidden risks of synthetic data and algorithmic bias
- What we can learn from creatives trying to outsmart YouTube’s algorithm

This is a must-watch if you’re thinking about AI and ethics, human-computer interaction, or the future of decision-making in high-stakes settings.

00:00 Intro to Agentic AI
00:16 Meet Dr. Niloufar Salehi
00:24 What is Agentic AI?
00:48 Hosts Discuss Human-AI Interaction
04:00 Guest Interview Begins
04:40 HCI and Interdisciplinary Design
06:34 Algorithmic Misconceptions
07:11 Xerox PARC & HCI Origins
10:48 AI in Healthcare: What Works
13:26 Translation Risk in Medicine
16:08 AI in the Courtroom
19:02 Synthetic Data: Power & Pitfalls
21:00 YouTube Creators & Algorithm Personas
26:00 Designing Interfaces Around Human Strengths
29:00 AI in Hiring, Policing & Due Process
31:00 AI That Offers Options, Not Orders
32:30 Future of AI & Human Collaboration

📘 Plans and Situated Actions – Lucy A. Suchman
• Chapter “Situated Actions” in Human-Machine Reconfigurations, DOI: 10.1017/CBO9780511808418.008
📘 AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference – Arvind Narayanan & Sayash Kapoor
• DOI for an excerpt in Stanford Social Innovation Review: 10.48558/0Z9Z DR86
• DOI for a full-length review article: 10.1215/2834703X 11700273
📕 Study: Effect of Prior Diagnoses on Dermatopathologists' Interpretations…
• DOI: 10.1001/jamadermatol.2022.0000
📕 Study: Disparities in Dermatology AI: Assessments Using Diverse Clinical Images
• DOI: 10.1126/sciadv.abq6147 (Science)
Artificial intelligence is revolutionizing scientific research—but can it really replace human scientists? In this episode of Agents of Tech, we speak with Professor José Penadés and Dr. Tiago Costa from Imperial College London about their groundbreaking collaboration with Google's Gemini 2.0 AI co-scientist. Within just two days, AI unraveled a scientific mystery they'd spent years exploring. But what does this rapid success mean for human researchers and the future of science itself?

🔑 Key highlights:
- How Google's AI co-scientist generates novel scientific hypotheses
- Why AI may accelerate—but not replace—human discovery
- The surprising ways AI avoids human biases in scientific research

Subscribe for more cutting-edge insights from Agents of Tech!

#AI #ArtificialIntelligence #ScienceInnovation #GoogleAI #AIResearch #AgentsOfTech

Research Paper: https://storage.googleapis.com/coscientist_paper/ai_coscientist.pdf

0:00 – Intro: The AI Era in Science
0:20 – The Problem AI Solved in 2 Days
0:52 – What Are AI Co-Scientists?
1:16 – Can AI Generate Entirely New Hypotheses?
1:43 – Hosts Discuss the Implications of AI Collaboration
2:34 – Difference Between AI Bots and AI Co-Scientists
3:33 – Skepticism About AI’s Scientific Creativity
5:27 – Could AI Break Free from Outdated Knowledge?
7:45 – Interview Start: José Penadés & Tiago Costa
9:00 – How AI Avoids Human Bias in Science
10:06 – AI’s Ability to Generate Original Hypotheses
12:41 – Overcoming Human Scientific Biases
14:03 – Will AI Accelerate Discovery or Complicate Science?
15:25 – Obstacles to AI-generated Breakthroughs
16:12 – Can AI Innovate Across Disciplines?
18:23 – Human Intuition vs. AI-generated Hypotheses
21:01 – Could AI Ever Formulate Revolutionary Ideas?
23:36 – The Human Role: Asking the Right Questions
25:52 – AI and the Future of Peer Review
27:52 – Advice for Young Scientists Using AI
29:00 – Final Thoughts: AI's Role in Future Science
30:39 – Closing & Credits
Is generative AI replacing workers—or helping them thrive? In this episode of Agents of Tech, we explore how AI is transforming the global workforce. Economist Lindsey Raymond (Microsoft Research), co-author of a groundbreaking study with Erik Brynjolfsson and Danielle Li, joins us to unpack how AI tools are boosting productivity, equalizing skills, and reshaping the modern job.

We cover:
- AI’s impact on job creation and automation (World Economic Forum’s 2025 forecast)
- Why AI helps junior workers most—but might demotivate top performers
- How “editor over producer” is becoming the new workplace model
- Whether AI narrows or widens global inequality
- What skills will remain uniquely human
- How AI tools can power faster business scaling—and redesign the nature of jobs

🎧 Featuring: Lindsey Raymond, Microsoft Research; Autria Godfrey, Laila Rizvi & Stephen Horn, Agents of Tech hosts

Subscribe for more deep dives into AI, tech, and the future of work. 🔔 Don’t forget to like, comment, and share.

Papers and Reports:
https://academic.oup.com/qje/article/140/2/889/7990658?login=false
https://www.weforum.org/publications/the-future-of-jobs-report-2025/

#GenerativeAI #FutureOfWork #ArtificialIntelligence #AIandJobs #AIAutonomy #WorkplaceTransformation #ErikBrynjolfsson #LindseyRaymond #AIProductivity #AgentsOfTech
Can AI help us survive extreme weather and climate change—or is it part of the problem?

In this episode of Agents of Tech, we speak to Professor Auroop Ganguly, Director of AI for Climate and Sustainability at Northeastern University, about how artificial intelligence is being used to predict disasters, improve infrastructure resilience, and shape sustainable climate policy.

🌀 We discuss:
• How AI predicts floods, wildfires, and extreme weather
• The rising energy cost of AI and the push for green computing
• Challenges in low-data regions and climate modelling
• Why AI needs governance to support sustainability goals
• Using AI to build infrastructure that can withstand climate shocks

🚨 AI is powerful, but not a magic bullet. Tune in for a grounded look at the promises, limits, and risks of AI in the climate fight.

📍 Guest: Professor Auroop Ganguly, Northeastern University
🎙️ Hosts: Autria Godfrey & Laila Rizvi
📺 Produced by WebsEdge
🔗 Subscribe for more on the future of AI, climate tech, and innovation.

#ClimateChange #AI #Sustainability #ClimateTech #AuroopGanguly #GreenComputing #DisasterResilience #AgentsOfTech #ExtremeWeather #MachineLearning #AIClimateModels #TVA #FloodPrediction #ClimateAdaptation #NetZero #AIethics #aigovernance

CHAPTERS
00:33 – The climate crisis and the AI dilemma
01:30 – Can AI help or harm the planet?
02:45 – Guest intro: Professor Auroop Ganguly
04:10 – How AI supports climate resilience
06:00 – Case study: NASA-funded flood prediction with AI
08:30 – Improving infrastructure with predictive models
10:00 – AI’s limits in reducing emissions
11:45 – Can DeepSeek lead to greener AI?
13:00 – Making climate models more accurate with AI
15:30 – The Global South's data gap problem
17:20 – Using transfer learning in low-data regions
18:40 – Combining AI with physics for better predictions
20:00 – The energy demands of AI explained
22:00 – Why governance matters for sustainable AI
23:30 – AI's role in shaping future climate policy
25:00 – Risks of bias in disaster AI systems
26:30 – Innovation, equity, and evolution over revolution
28:00 – Hosts’ reflections: What we learned

https://www.nature.com/articles/s41467-023-35968-5
We’re entering an era where AI systems don’t just follow prompts—they act independently. But what happens when we hand over too much control?

In this episode of Agents of Tech, our hosts Stephen Horn, Autria Godfrey and Laila Rizvi sit down with Margaret Mitchell, Chief Ethics Scientist at Hugging Face and one of the world’s leading voices on AI ethics. Together, they explore what autonomy really means in artificial intelligence—and why we can’t afford to be passive observers.

#AIEthics #ResponsibleAI #TechForGood #EthicalAI #HumanInTheLoop #AIRegulation

Topics include:
🔹 Why AI autonomy is fundamentally different from traditional automation
🔹 The real risks of removing human oversight
🔹 How trust and anthropomorphism distort our judgment
🔹 What “human in the loop” must mean going forward

This is a must-watch conversation for anyone working with AI—or impacted by it (which means all of us).

https://huggingface.co/papers/2502.02649

🎧 Listen now on your favorite podcast platform and subscribe for more deep tech insight.

0:00:00 - Intro: Are we giving AI too much control?
0:00:17 - Meet DeepSeek and Monica – the next-gen AI agents
0:00:58 - The ethics of autonomous AI
0:02:13 - Guest intro: Margaret Mitchell on AI autonomy
0:04:12 - Human control vs machine independence
0:06:25 - AI, society, and the illusion of moral reasoning
0:07:49 - Security risks from autonomous coding agents
0:11:27 - The BBC analogy & user-generated chaos
0:13:42 - Deepfakes, consent & harmful content
0:15:45 - Why AI doesn’t think like us
0:17:22 - Sensitive data, agents & social media nightmares
0:19:30 - Ease vs privacy: Why people give up control
0:21:00 - Good uses of agents: Accessibility & productivity
0:22:30 - AI and the future of creative jobs
0:24:20 - Capitalism, AGI & the wealth imbalance
0:26:10 - Margaret's message to governments
0:27:50 - Rights-based regulation vs restriction
0:29:35 - Looking 10 years ahead: Margaret’s fears & hopes
0:31:30 - Final reflections: Are we too late?
0:35:30 - Outro: Like, share & subscribe!
In this episode of Agents of Tech, hosts Autria Godfrey and Stephen Horn dive deep into one of AI’s most pressing challenges: energy consumption. With the release of DeepSeek and rising concerns over compute power and costs, the race to build efficient AI is heating up.

We speak to two pioneering researchers:
- Dr. Shreyas Sen (Purdue University), who’s developing nervous-system-inspired chips that connect wearables with ultra-low energy use.
- Dr. Hongyin Luo (MIT CSAIL / BitEnergy AI), whose work on Linear-Complexity Multiplication (L-Mul) may drastically cut compute cost and energy usage.

Is the future of AI massive centralized data centers — or decentralized personal clouds and localized compute? And what happens when we run out of training data?

👉 Don’t forget to like, comment, and subscribe to support meaningful tech discussions!

⏱️ Timestamps / Chapters:
00:00 - Introduction: Welcome to Agents of Tech
00:20 - Why AI energy usage is an urgent issue
01:00 - DeepSeek’s $6M run and the energy debate
02:00 - The promise of mixture-of-experts and energy savings
02:45 - Interview intro: Dr. Hongyin Luo (BitEnergy AI)
03:25 - What is L-Mul and why it matters
06:00 - Floating-point vs integer math in AI
08:30 - Shifting compute from datacenters to the edge
10:00 - Barriers to L-Mul adoption and FPGA innovation
12:00 - The case for local family clouds
13:30 - Moonshot idea: Stop pretraining to save energy
15:00 - Interview intro: Dr. Shreyas Sen (Purdue University)
15:45 - Wearable brains and nervous-system-inspired design
17:30 - Conductive human body as an AI network
19:00 - Brain-to-device communication breakthroughs
21:30 - Data layers: cloud, edge, and leaf devices
23:00 - Real-world use and commercialization of Wi-R tech
24:00 - Future implications and distributed AI potential
26:00 - Panel discussion: What does efficient AI really mean?
28:30 - The end of training and rise of true intelligence?
30:00 - From megawatt datacenters to household AI hubs
32:00 - Wrap-up and reflections
33:55 - Credits and thanks

#ArtificialIntelligence #AI #EnergyEfficiency #GreenAI #EdgeComputing #BrainInspiredTech #WearableTech #NeuralNetworks #BodyPoweredAI #futureofai #SustainableTech #AIChips #AIInnovation #SmartWearables
What if a virtual version of you could help cure disease?

In this episode of Agents of Tech, hosts Autria Godfrey and Stephen Horn explore the fascinating world of digital twins—virtual replicas of real-world entities that are revolutionizing industries, especially healthcare. From their origins in NASA’s Apollo missions to accelerating clinical trials today, digital twins are changing how we predict, prevent, and personalize treatment.

Featuring Aaron Smith, founder and machine learning scientist at UnLearn, we discuss how AI-driven digital twin models are transforming clinical trials—cutting costs, speeding up timelines, and reducing the need for placebo groups.

Don’t forget to like, comment, and subscribe for more deep dives into the future of tech and science.

00:00 - Welcome to Agents of Tech
00:15 - What are Digital Twins?
01:00 - Origins in NASA and Apollo 13
01:30 - From Static to Intelligent Twins
02:30 - AI-powered evolution: Science fiction becomes science fact
03:45 - Applications in Medicine & Clinical Trials
04:30 - Guest intro: Aaron Smith from UnLearn
05:10 - How Digital Twins Accelerate Clinical Trials
07:00 - The AI models behind the magic
08:20 - Why focus on clinical trials, not patient care?
10:15 - Reducing risk and increasing trial power
12:00 - Future of Phase 3 trials
13:30 - The long-term vision for digital twins in medicine
15:00 - Placebo groups and statistical innovation
17:00 - Machine learning in rare diseases
18:20 - Regulatory frameworks: EMA & FDA alignment
21:00 - What makes a disease suitable for digital twin modeling?
22:30 - Pharma partnerships and proprietary data
24:00 - Recap: Why this tech changes everything
25:15 - Outro: Like & subscribe!
Welcome to Agents of Tech, where we explore the innovations shaping tomorrow. In this episode, we dive into how digital twins are transforming personalized medicine, featuring Terry Poon, CTO of Twin Health. From reversing disease to real-time metabolic modeling, learn how AI, wearables and data-driven empathy are creating a new era of preventative healthcare.

🔗 Subscribe for more conversations with the visionaries behind tomorrow’s tech.
📣 Like, comment, and share to support the show!

https://diabetesjournals.org/diabetes/article/73/Supplement_1/851-P/156350/851-P-Whole-Body-Digital-Twin-WBDT-Enabled

#DigitalTwins #HealthcareAI #TwinHealth #PersonalizedMedicine #AgentsOfTech

00:00 - Intro: What Are Digital Twins in Healthcare?
00:27 - From Sci-Fi to Reality: Digital Twins Today
01:14 - Meet Terry Poon, CTO at Twin Health
01:54 - Understanding Unique Metabolic Responses
02:31 - Why NASA’s Tech is Now Revolutionizing Medicine
03:20 - The Holistic Power of Digital Twins
04:22 - Real-World Use: Predicting Heart Attacks & Optimizing Hospitals
05:00 - Disclaimer: Medical Info vs Professional Advice
05:14 - Terry Poon on What Digital Twins Actually Do
06:36 - Reversing Disease, Not Just Managing It
07:33 - The Data That Powers Twin Health’s Success
08:55 - Tackling the Complexity of Human Metabolism
10:07 - How AI Makes Sense of 10 Billion+ Data Points
11:33 - Data Quality + Quantity = Better Outcomes
12:20 - Combining AI + Human Empathy in Care
13:33 - Could This Approach Go Beyond Metabolic Disease?
14:25 - Can AI Discover What Humans Can’t See?
15:57 - Surprising Patterns: Morning vs Afternoon Exercise
16:33 - What True Personalization Takes to Work
17:32 - The Tight Feedback Loop That Fuels Change
18:33 - Advice for AI Innovators in Healthcare
19:39 - A Future Without One-Size-Fits-All Medicine
20:41 - Empathy: The Underrated Key to Patient Adherence
21:17 - Final Thoughts & Takeaways
Welcome to Agents of Tech! In this episode, we dive into the future of data storage with the visionary Dr. Stuart Parkin. A pioneer in spintronics and the inventor of racetrack memory, Dr. Parkin’s groundbreaking work has shaped the way we store and process data in the digital age. With AI’s insatiable demand for data, how do we keep up? Dr. Parkin shares insights into the challenges of traditional storage, the revolutionary potential of spintronics, and how racetrack memory could transform AI and computing as we know it.

🔹 How does AI’s growing demand impact data storage?
🔹 What is spintronics, and how does it work?
🔹 Can racetrack memory revolutionize computing?
🔹 How close are we to commercializing this next-gen storage technology?

Tune in to find out! If you enjoy the conversation, don’t forget to like, subscribe, and share to help us bring more groundbreaking discussions to you.

00:00 - Introduction to Agents of Tech & AI’s Data Challenge
00:06 - Meet Dr. Stuart Parkin & His Impact on Data Storage
00:28 - How the Spin Valve Revolutionized Storage Technology
01:26 - The Growing Bottleneck: AI’s Increasing Data Needs
03:00 - Spintronics & The Promise of Racetrack Memory
05:29 - Dr. Stuart Parkin’s Journey & Groundbreaking Innovations
07:32 - Overcoming Storage Barriers & The Future of Spintronics
10:06 - The Push for Energy-Efficient Computing & AI
13:31 - Why Racetrack Memory is Critical for AI & Big Data
16:56 - The Investment Needed to Make Racetrack Memory Mainstream
18:56 - Neuromorphic Computing & The Future of AI Hardware
23:39 - Final Thoughts & Closing Remarks
AI is advancing at a breakneck pace, but how will we store the massive amounts of data it requires? In this episode of Agents of Tech, hosts Autria Godfrey, Stephen Horn, and Laila Rizvi explore the growing demand for AI data storage and the innovations shaping the future.

💡 We break down:
- The Stargate Project, a $500 billion AI infrastructure investment
- DeepSeek AI, a cost-efficient rival to ChatGPT?
- The next frontier of data storage – synthetic DNA
- How biology meets computing to create a more sustainable tech future

We’re joined by Dr. Jeff Nivala, Co-Director of the Molecular Information Systems Lab (University of Washington & Microsoft), to discuss the groundbreaking potential of DNA-based storage and its role in the future of AI.

🚀 If you enjoy the episode, don’t forget to like, subscribe, and share to stay up to date with cutting-edge tech discussions!

📌 Follow us for more:
Bluesky: @agentsoftech.bsky.social
TikTok: @Agents_of_Tech
Instagram: @agents_of_tech_
X/Twitter: @Agents_of_Tech
🔗 AgentsofTech.ai

#AI #DataStorage #DeepSeek #DNAStorage #techpodcast

Chapters:
00:00 - Welcome to Agents of Tech
00:11 - The AI data storage crisis
00:42 - The $500B Stargate Project – The U.S. bets big on AI
01:22 - The rise of DeepSeek AI – Can it rival ChatGPT?
02:44 - Mark Zuckerberg’s AI ‘war room’
03:37 - Meet our guest: Laila from WebsEdge
04:52 - The storage bottleneck – why AI outpaces traditional infrastructure
06:47 - The innovation lag: Can engineers solve AI’s storage problem?
08:33 - Dr. Jeff Nivala joins the discussion
09:46 - How synthetic DNA can store digital data
12:00 - Can DNA replace traditional data centers?
14:33 - The intersection of biology and computing
16:21 - DNA vs. traditional computing: A paradigm shift
18:45 - Scaling DNA-based storage – When will it be practical?
21:00 - The future of AI storage: Data forests instead of data centers?
23:00 - Final thoughts & wrap-up

Program Notes:
https://openai.com/index/announcing-the-stargate-project/
https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/




