
Professor Insight Podcast - AI, Science and Business
Author: Billy Sung
Description
The Professor Insight Podcast is your TLDR or “too long, didn’t read” guide to the frontiers of artificial intelligence, neuroscience, and technology that are reshaping business today. Curated by Professor Billy and fully powered by AI, we unpack the most intriguing news, novel research findings, and real-world applications, keeping you informed and ahead of the curve. Perfect for tech-savvy entrepreneurs, business leaders, and inquisitive minds, each episode equips you with actionable insights and fascinating perspectives. Tune in to discover how breakthroughs in AI and science apply to the world of business.
28 Episodes
Artificial intelligence is no longer just a support tool in business; it is becoming a core driver of growth and efficiency. In this episode, we explore Google’s new report, The ROI of AI 2025: How Agents Are Unlocking the Next Wave of AI-Driven Business Value. The findings reveal a major shift from asking whether to use AI to focusing on how to scale it effectively. With companies now moving into the agentic era, AI agents are stepping up to perform real work and deliver measurable impact.
Listeners will hear surprising statistics and insights from the report. For example, 88 percent of early adopters are already seeing ROI from generative AI, and more than half of executives using AI have put agents into production. The report highlights where the biggest returns are happening, from productivity gains and customer experience improvements to marketing and security. You will also hear how companies are deploying agents across different industries, what role executive sponsorship plays in success, and why data privacy and system integration remain top concerns.
This episode matters because it shows the practical reality of AI adoption, not just the theory. Businesses that move quickly are pulling ahead, and the lessons from early adopters provide a clear picture of what it takes to see real value. Whether you are in leadership, strategy, or operations, understanding how AI agents are being used today can help you make better decisions about where to invest and how to prepare for the next stage of digital transformation.
When you receive unexpected news from a company, who delivers it may matter more than the news itself. In this episode, we explore a fascinating study from the Journal of Marketing titled Bad News? Send an AI. Good News? Send a Human. The research, led by Aaron Garvey, TaeWoo Kim, and Adam Duhachek, reveals surprising insights about how consumers react to offers depending on whether they come from a human representative or an AI agent.
Listeners will hear how the empirical package of five studies tested situations ranging from concert ticket pricing to ride-sharing fees, showing that people are more willing to accept worse-than-expected offers when delivered by AI, and respond more positively to better-than-expected offers when delivered by humans. We also discuss how making AI more humanlike changes these reactions, why perceived intentions play such a powerful role, and what this means for marketing practices across industries.
This conversation matters because it sheds light on a subtle but important shift in customer relationships. As businesses adopt AI in more consumer-facing roles, understanding when to rely on AI and when to emphasize human connection can affect trust, satisfaction, and long-term engagement. The episode offers a thoughtful look at how technology and psychology intersect, giving you practical insights into communication, strategy, and ethics in an increasingly AI-driven marketplace.
Artificial intelligence is often celebrated for its breakthroughs in science, business, and daily life, but what about its hidden environmental cost? In this episode, we take a closer look at a new research paper from Google titled Measuring the Environmental Impact of Delivering AI at Google Scale. While training large models has long been seen as the main driver of energy use, the surge in everyday AI adoption means the real focus now is on inference, the act of generating responses to billions of user prompts worldwide.
Listeners will discover how Google developed a comprehensive method for tracking not only energy use but also emissions and water consumption across its AI infrastructure. The episode explores surprising findings such as how a single Gemini text prompt consumes just 0.24 watt-hours of energy and only 0.26 milliliters of water, far lower than many previous public estimates. We also highlight the difference between narrow measurements and Google’s full-system approach, as well as the efficiency gains that led to a 33 times reduction in energy and a 44 times reduction in emissions within a year.
This conversation matters because AI is no longer a niche technology; it is a daily tool for millions of people. Understanding its environmental footprint helps industry leaders, policymakers, and users see both the progress being made and the challenges that remain. By unpacking the details of Google’s study, we reveal why transparent measurement is essential if AI is to scale responsibly while minimizing impact on energy systems, climate, and water resources.
Can AI truly understand how people think, or is it just guessing based on patterns? In this episode of the Professor Insight Podcast, we explore a compelling new study that challenges the growing belief that large language models can stand in for real human participants. Titled Large Language Models Do Not Simulate Human Psychology, the paper examines how models like GPT-4 and CENTAUR handle moral decision-making scenarios and whether their responses align with actual human judgment. The findings reveal important limits that anyone relying on AI-generated insights should take seriously.
You’ll hear how researchers tested these models against real human participants by subtly changing the wording of moral scenarios and measuring the shifts in responses. While people reacted strongly to semantic differences, the models barely moved. We break down what this tells us about how LLMs process meaning, where their generalizations fall short, and why semantic nuance is still a uniquely human strength. You’ll also learn what this means for the growing use of synthetic data in research and business, and why treating AI responses as a proxy for human behavior may be more misleading than helpful.
This episode matters because it brings clarity to a topic that is gaining traction in marketing, research, and product development: using AI to simulate customer behavior. While the appeal of synthetic data is understandable, this study reminds us that human nuance cannot be fully predicted by token patterns. For leaders making data-driven decisions, understanding the limits of AI-generated insights is essential for maintaining relevance, integrity, and real-world effectiveness.
What if you could peek behind the curtain and see exactly how people are using AI at work right now? In this episode, we dive into an exclusive, unpublished study from Microsoft that does just that. Titled "Working with AI: Measuring the Occupational Implications of Generative AI," the paper analyzes 200,000 real, anonymized conversations between users and Microsoft Bing Copilot to uncover how AI is reshaping the workforce. This isn’t theoretical. This is actual, on-the-ground usage, rich with data, surprising insights, and implications for nearly every job you can imagine.
We explore what people are really asking AI to help with and what the AI is actually doing in response. From writing and research to coaching and advising, the results may surprise you, especially the fact that in 40 percent of cases, what users wanted and what AI did were completely different tasks. The study maps these interactions to job roles using the O*NET occupational database, producing an “AI applicability score” that highlights which professions are most and least exposed to AI capabilities today. Spoiler: knowledge workers, communicators, and information professionals should pay close attention.
Whether you’re a business leader, knowledge worker, or educator, this episode offers a grounded look at how generative AI is actually being used across different types of work. The findings show that AI’s current strengths lie in supporting tasks like writing, information gathering, and communication, while its direct performance is most visible in roles involving teaching, advising, or coaching. Physical and manual occupations remain less affected, for now, but even those show signs of interaction. By focusing on real-world data rather than predictions, the episode provides a more nuanced view of how AI is fitting into the workplace today.
In this episode, we explore the results of a major new global study from the University of Melbourne and KPMG titled Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. Drawing on the views of more than 48,000 people across 47 countries, this research offers one of the most detailed snapshots to date of how AI is perceived, trusted, and used around the world. It examines differences between advanced and emerging economies, workplace adoption, student use in education, and the growing call for stronger governance.
The conversation unpacks why emerging economies are leading the way in AI trust and uptake, and why advanced economies are showing more scepticism. It highlights the gap between public confidence in AI’s technical ability and concerns about its safety and ethical use. You will hear about patterns in workplace behaviour, from productivity gains to policy breaches, and how students are using AI both to enhance learning and, in some cases, to bypass it. The episode also discusses the widespread demand for stronger AI regulation, especially to counter misinformation.
This discussion matters because it captures the reality of AI adoption beyond the headlines, showing both its opportunities and its risks. The findings reveal where trust is being built and where it is eroding, and why literacy, governance, and clear regulation are critical as adoption accelerates. Whether working in a business, leading a team, or studying in a university, understanding these trends can help in making informed decisions about how to engage with AI responsibly and effectively.
Think machines can predict your next move? Think again. In this episode, we dive into one of the most intriguing challenges at the crossroads of psychology, artificial intelligence, and business: can we truly predict what people will choose before they do? We explore the breakthrough research published in Nature Human Behaviour, spotlighting BEAST-GB, a revolutionary model that blends the best of behavioural science with cutting-edge machine learning. It’s not just another algorithm—it’s a new lens on how humans really make decisions.
Join us as we unpack why pure AI and raw data alone keep falling short in predicting real human choices. Discover how BEAST-GB leverages psychological insight, cognitive biases, and decades of behavioural research to outperform even the smartest deep learning models. You’ll also hear how these ideas are powering new tools like Accurment, designed to help marketers and decision-makers move beyond guesswork and gut instinct, transforming messy human data into clear, actionable strategy.
If you’re curious about the future of decision science, want to understand the secrets behind truly effective marketing, or just love uncovering what makes people tick, this episode is a must-listen. Hit play to find out why the smartest predictions don’t come from AI alone, but from a powerful partnership between machine learning and human understanding.
Is artificial intelligence the next master persuader? In this episode of Professor Insight Podcast, we dig into one of the most fascinating questions of the digital age: can AI agents actually out-persuade humans? Drawing on a landmark 2023 meta-analysis from the Journal of Communication, we explore the world of AI-powered chatbots, recommendation engines, and digital advisors, asking whether these technologies are quietly winning the battle for our beliefs, behaviors, and buying decisions.
Discover what actually makes AI persuasive, how machines subtly nudge us, and why our resistance to algorithm-driven advice might not be as strong as we think. We break down the latest science, sharing stories, surprising findings, and actionable insights. You will also hear about the situations where the human touch still holds the edge and why that matters.
Backed by nearly 90 studies and the experiences of more than 50,000 participants, this episode is your deep dive into the evolving psychology of influence. Whether you are a marketer, leader, educator, or just curious about how AI is shaping your everyday choices, you will find plenty here to challenge your assumptions. Tune in to see if you can really tell when you are being persuaded by a person or a machine.
In this episode, we delve into one of the more interesting findings in the world of AI and science communication. A new study published in Royal Society Open Science, authored by Uwe Peters and Benjamin Chin-Yee, reveals a systematic problem in how large language models summarise scientific research. Even when prompted for accuracy, many LLMs, including the latest versions of ChatGPT, Claude, and DeepSeek, consistently overgeneralise research findings. They take cautious, specific claims and subtly turn them into broad statements that were never actually made in the original papers.
This phenomenon, called generalisation bias, may not sound alarming at first, but its implications are massive. Imagine a clinical study that finds a treatment is effective in some patients being summarised as effective for all patients. Or nuanced scientific uncertainty being rewritten as confident advice. According to the study, AI-generated summaries are nearly five times more likely to contain these distortions than human-written summaries. And here’s the twist: the newer, more advanced models are often worse offenders than their predecessors.
If you rely on AI tools to digest research, teach, communicate, or make decisions based on scientific evidence, this episode is essential listening. We unpack how and why this bias happens, explore its potential risks for science, education, medicine, and media, and share practical tips for working smarter with LLMs.
What happens to your brain when you let AI do the thinking for you? In this episode, we explore a fascinating and widely discussed study out of the MIT Media Lab titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. This research takes us deep into the cognitive consequences of using large language models like ChatGPT for academic work. Using EEG headsets to monitor brain activity, the study reveals something few of us want to hear: repeated reliance on AI might make your work easier but your brain weaker.
The researchers tracked students over four months as they wrote essays using either ChatGPT, a search engine, or no tools at all. The group that relied on AI not only showed weaker brain connectivity but also had trouble recalling their own work and claimed lower ownership of what they wrote. The paper introduces the concept of "cognitive debt"—the hidden cost of letting AI carry your intellectual load. Meanwhile, the "brain-only" group showed stronger neural activity, deeper memory retention, and a clearer sense of authorship.
This is not an anti-AI message. It is a moment to pause and consider how these tools are reshaping our learning, our memory, and our sense of self. Whether you are a student, educator, business leader, or technologist, understanding the neuroscience behind AI use is more important than ever. In this episode, we unpack what the data tells us and what it means for the way we think, learn, and create in a world increasingly mediated by machines.
Is using generative AI more acceptable when you do it yourself than when someone else does it? According to a new study from the Rotterdam School of Management, most of us think so. This episode dives into the fascinating research behind the paper “Acceptability Lies in the Eye of the Beholder”, which explores how we judge AI-assisted work differently depending on who is using the tech. The authors conducted nine studies with nearly 4,500 participants to unpack how we assess human versus AI contribution — and the results are both surprising and incredibly relevant.
At the heart of the findings is a powerful bias. People believe they use AI tools like ChatGPT as a source of inspiration, while assuming others rely on them to outsource the heavy lifting. That difference in perception has real consequences. It affects how students are evaluated, how job applicants are judged, and how AI-generated content is viewed in marketing, education, and business. This isn't just about technology; it's about human psychology and the stories we tell ourselves about fairness, effort, and ownership.
In this episode, we break down what the research uncovered, why it matters, and what it reveals about our evolving relationship with AI. Whether you’re a teacher, marketer, manager, or just curious about how AI is reshaping the rules of credit and creativity, this conversation offers insight into the silent double standards that shape our views.
Generative AI has captured the world's attention for its power to create, accelerate, and enhance—but what happens when these same tools are misused? In this episode of the Professor Insight Podcast, we turn the spotlight to the darker side of GenAI. Drawing on a groundbreaking new paper from DeepMind titled Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data, we explore nearly 200 real-life cases where generative tools were used to deceive, defraud, and manipulate.
From deepfake impersonations and non-consensual imagery to AI-powered phishing scams and fake social media botnets, the study reveals how GenAI is being exploited in ways that are surprisingly low-tech but highly effective. These misuse tactics often require little technical skill and rely on easy-to-access tools, making them more widespread and harder to track. We break down the taxonomy of tactics and the motivations behind them, including disinformation, monetisation, harassment, and even digital resurrection. This episode is a deep dive into how misuse is already reshaping online trust and public perception. We also discuss the broader implications for AI governance, content authenticity, and digital safety.
In this episode of the Professor Insight Podcast, we explore what it really takes to build infrastructure capable of supporting generative AI at scale. Based on Google’s 2025 State of AI Infrastructure Report, this conversation cuts through the buzz to focus on the practical realities facing tech and business leaders. From adoption trends to infrastructure strategy, we look at how organizations are navigating the shift from experimentation to production.
With 98 percent of companies already working with generative AI, the urgency to get infrastructure right has never been greater. We unpack the report’s key findings, including the challenges around data governance, the growing importance of cost-efficiency, and the role of secure, scalable platforms. Hybrid cloud and edge computing are gaining momentum, and a robust AI foundation is now seen as essential for future competitiveness.
This episode covers the strategic decisions shaping enterprise AI—from managing costs and scaling models to deploying across distributed systems. We also discuss the infrastructure features most valued by leaders today, including flexibility, security, and performance. If your organization is planning for long-term AI integration, this episode highlights the groundwork needed to get there.
In this episode of the Professor Insight Podcast, we take a closer look at how generative AI is fundamentally changing the way we search for information, products, and services online. With ChatGPT rolling out new shopping features and AI search tools gaining traction, it's clear we're entering a new era of digital discovery. But what does this shift mean for the businesses, marketers, and platforms that rely on traditional search behavior to connect with customers?
We draw from Algolia’s report How Generative AI Affects Search: A Full Stack Story and the Future of Professionals Report to explore how AI is disrupting search engine marketing, SEO strategy, and consumer habits. From voice-driven search to conversational queries and AI Overviews, this episode breaks down what’s changing, where the search volume is going, and how brands can stay relevant in a fragmented, AI-first ecosystem.
Generative AI is not only changing how people find information but also how businesses think about content strategy, advertising investments, and customer engagement. Understanding these shifts is crucial as traditional search models evolve and new opportunities emerge across both established and emerging platforms. This episode offers context and insights into how the future of search is likely to unfold.
In this episode of the Professor Insight Podcast, we dive into the rapidly evolving intersection of artificial intelligence and the legal profession. Drawing from two powerful resources — Prompt Engineering for Lawyers, a joint publication by Microsoft and the Singapore Academy of Law, and the 2024 Future of Professionals Report by Thomson Reuters — we explore how AI is not just automating tasks but redefining the way legal professionals work, think, and serve their clients.
We break down how AI is being used right now in law firms and legal departments, from contract review and research to drafting and client communication. More importantly, we focus on one emerging superpower every legal professional should pay attention to: prompt engineering. We unpack how lawyers can write better AI prompts to improve outcomes, increase efficiency, and stay compliant with ethical and professional standards. This is practical, applicable knowledge for anyone navigating the shift from traditional practice to AI-powered productivity.
If you work in law, support legal teams, or are simply fascinated by how AI is transforming high-trust professions, this episode is your guide. You’ll walk away with insights into the tools, techniques, and mindset needed to thrive in the AI-enabled future of legal work.
In this episode, we return to the unfolding debate around AI’s so-called ability to “think.” Building on last week’s discussion of Apple’s controversial paper The Illusion of Thinking, we now explore the equally provocative rebuttal by Anthropic, the company behind Claude. Their paper, pointedly titled The Illusion of the Illusion of Thinking, argues that Apple’s findings say more about poor experimental design than any real limits in AI reasoning. So who's right? Or are both missing the point?
We unpack the technical details, the debate, and the AI community backlash — including criticisms that Apple may be using this research to explain away its slower pace in AI development. But more importantly, we tackle the deeper question hiding underneath it all: what do we actually mean by “reasoning” in machines? And are we setting the right benchmarks when we measure AI against human thought?
If you’ve ever wondered where the line is between token prediction and true intelligence, or if you’re curious about how models like Claude, GPT, and Gemini really handle complex problems, this is a conversation that pulls back the curtain. It’s not just about who’s right — it’s about whether our entire framework for evaluating AI thinking needs to change.
Are today's most advanced AI models really capable of “thinking”? Or are we simply projecting human-like reasoning onto machines that are fundamentally limited in how they solve complex problems? In this episode of the Professor Insight Podcast, we dive into a provocative new paper from Apple titled The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models. It explores how some of the most powerful reasoning models — like Claude 3.7 Sonnet Thinking, Gemini Thinking, and OpenAI's o1 and o3 — struggle when problems get even modestly more complex.
The researchers tested these models on classic puzzle environments like Tower of Hanoi, River Crossing, and Blocks World — environments that allow precise measurement of reasoning complexity. The findings are surprising: despite their promise, these models hit a “reasoning wall.” They collapse in accuracy as complexity grows, underutilise their available thinking capacity, and even “overthink” simple problems. Apple identifies three distinct regimes where these models either outperform, flounder, or completely fail — and the implications are significant.
But the paper hasn't landed without controversy. Critics argue Apple’s conclusions are overstated and possibly self-serving, especially as the company faces pressure over lagging behind in AI development. Is this research a serious warning about the current limits of reasoning in AI? Or is it a carefully timed narrative to reshape public expectations? Tune in as we unpack the science, the backlash, and the broader debate on what it really means for AI to “think.”
If you’ve listened to Episodes Five and Six of the Professor Insight Podcast, you’ll know we’ve already laid the groundwork on what Agentic AI is and why it matters. But this week, we’re taking it a step further. This episode is your practical guide to building AI agents—an extension of our earlier discussions, now grounded in real-world application. Based on OpenAI’s newly released Practical Guide to Building Agents, we distill a technical framework into actionable insights tailored for business leaders, strategists, and product teams.
We explore what it actually takes to develop your first AI agent—from choosing the right use case and designing safe, scalable workflows, to configuring models, tools, and instructions that help agents operate autonomously and intelligently. This isn’t just about writing prompts. It’s about building systems that make decisions, take action across platforms, and adapt in real time—all while staying aligned with business goals and compliance requirements.
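To make the model, tools, and instructions pattern concrete, here is a deliberately simplified sketch in plain Python. It is an illustration under stated assumptions, not anything from OpenAI’s guide or SDK: the function names, the TOOL message format, and the escalation rule are all hypothetical.

```python
from typing import Callable

def look_up_order(order_id: str) -> str:
    """Hypothetical business tool the agent is allowed to call."""
    return f"Order {order_id}: shipped, arriving Friday."

# The three ingredients discussed in the episode: tools, instructions, and a model.
TOOLS: dict[str, Callable[[str], str]] = {"look_up_order": look_up_order}

INSTRUCTIONS = (
    "You are a customer-support agent. Use the look_up_order tool for order "
    "questions. Escalate to a human if the customer asks for a refund."
)

def call_model(instructions: str, conversation: list[str]) -> str:
    """Placeholder for a real LLM call: returns 'TOOL <name> <argument>' or a final reply."""
    raise NotImplementedError("Connect this to your model provider of choice.")

def run_agent(user_message: str, max_steps: int = 5) -> str:
    conversation = [user_message]
    for _ in range(max_steps):              # guardrail: bound the loop
        action = call_model(INSTRUCTIONS, conversation)
        if action.startswith("TOOL "):      # e.g. "TOOL look_up_order 1234"
            _, name, argument = action.split(maxsplit=2)
            conversation.append(TOOLS[name](argument))
        else:
            return action                   # final answer for the user
    return "Escalating to a human agent."   # human-in-the-loop fallback
```

Even in this toy form, the sketch shows why orchestration, bounded loops, and human fallbacks matter: the agent decides when to call a tool, but the surrounding system decides when to stop.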
Whether you're trying to reduce operational friction, tackle high-complexity workflows, or enable more intelligent automation inside your organisation, this episode will give you the clarity and confidence to start building. With insights on orchestration, human-in-the-loop design, and guardrails for safety and governance, this is your field guide to the next evolution of AI in business.
In this episode of the Professor Insight Podcast, we explore a fascinating and timely question: how is generative artificial intelligence beginning to reshape academic research itself? Drawing on a newly published academic paper from the Journal of the Academy of Marketing Science (April 2025), we examine how large multimodal models like ChatGPT-4o could revolutionise the way marketing scholars generate ideas, build theories, design experiments, and analyse data. Authored by Kiwoong Yoo, Michael Haenlein, and Kelly Hewett, the paper—titled “A whole new world, a new fantastic point of view”—offers a serious look at how generative AI might soon become a research partner, not just a productivity tool.
Rather than speculating, the authors put AI to the test by replicating the entire research process of 35 published consumer research articles using ChatGPT-4o. They applied advanced prompting strategies like chain-of-thought prompting and carefully evaluated the AI’s performance across key stages such as theory development, pilot testing, and data analysis. What emerged is a nuanced picture of where AI excels—like generating conceptual frameworks and simulating study designs—and where it still struggles, such as interpreting human behaviour or conducting robust statistical analyses.
This episode is a must-listen for anyone curious about the intersection of AI and knowledge creation. Whether you are a researcher, educator, PhD student, or simply someone fascinated by how ideas are shaped and shared, this conversation offers a rare look at how AI may redefine the boundaries of academic research. We explore the promises, the limitations, and the ethical questions that come with using AI as a co-investigator in scholarly work.
In this second instalment of our two-part series on prompt engineering, we take things to the next level. If you have ever wondered how to get large language models to think more deeply, reason more clearly, or act more independently, this episode is for you. We explore advanced prompting techniques that unlock far more than simple question-and-answer interactions—techniques that help AI reason, plan, and execute in more intelligent ways.
For more technical users—those building with AI, developing workflows, or exploring the limits of what generative AI can do—we will be covering Chain of Thought prompting, which helps guide models through complex reasoning steps, and Tree of Thought prompting, which enables the model to explore multiple problem-solving paths in parallel. We will also briefly touch on ReAct prompting, which allows models to not just reason but take action using tools and external resources.
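As a rough illustration of the difference Chain of Thought prompting makes, here is a minimal sketch in Python. The `call_llm` function is a hypothetical stand-in for whichever model API you use; the ticket-price example is invented for illustration.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a large language model and return its reply."""
    raise NotImplementedError("Connect this to your model provider of choice.")

# Direct prompt: asks only for the answer.
direct_prompt = (
    "A ticket costs $45 and a booking fee adds 12%. "
    "What is the total price? Answer with a number only."
)

# Chain of Thought prompt: asks the model to lay out its reasoning steps first.
chain_of_thought_prompt = (
    "A ticket costs $45 and a booking fee adds 12%. What is the total price?\n"
    "Think step by step: first work out the fee, then add it to the base price, "
    "and only then state the final answer."
)

# answer = call_llm(chain_of_thought_prompt)
```

Asking for intermediate steps tends to improve accuracy on multi-step problems, at the cost of longer and slower responses; Tree of Thought and ReAct build on the same idea by branching across candidate reasoning paths or interleaving reasoning with tool use.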
Even if you’re not deeply technical, this episode still has plenty for you. We cover Automatic Prompt Engineering, a technique where AI helps refine its own instructions for better performance, and explore how to write powerful prompts for code generation, translation, explanation, and debugging—even if you're not a developer.
While some of the strategies we discuss go beyond what most non-technical users will use day to day, they offer a deeper look into what is possible when AI is used with precision and purpose. And to wrap it all up, we share a comprehensive set of best practices in prompt engineering that will help you write clearer, more effective prompts regardless of your background or use case.