Talking AI in Market Research

Author: ResearchWiseAI

Description

Will and Ray Poynter, the founders of ResearchWiseAI, discuss all things relating to Artificial Intelligence (AI) and market research. Our aim is to help market research professionals keep up to date with the dizzying world of AI.
20 Episodes
Explore the cutting edge of qualitative research in this episode of Talking AI, as host Ray Poynter welcomes Tanya Berlina, Client Success Director at Yasna AI. Tanya shares over 20 years of qualitative research expertise, discussing how AI-driven platforms are transforming the way researchers collect and analyze conversational data.

Key Topics Covered
- Blending Traditional and AI Moderation: Discover how Tanya educates clients to synergize human expertise with AI efficiency for deeper insights.
- The Surge in Conversational Data Collection: Understand what's driving the rapid adoption of conversational AI in research, from stakeholder interest to the quest for efficiency.
- Multilingual and Global Projects: Learn how Yasna AI's platform makes it seamless to conduct research across languages and cultures, empowering teams worldwide.
- The Future of AI-Moderated Research: Get a glimpse into upcoming advancements, including AI's growing capability in group moderation, multimedia analysis, and learning from previous sessions.
- Practical Tips for Getting Started: Tanya offers actionable advice for researchers new to conversational data collection, highlighting curiosity-driven projects and the unique benefits of global studies.

About the Guest
Tanya Berlina is a seasoned qualitative research professional and Client Success Director at Yasna AI. With a career spanning over two decades, she specializes in helping market research agencies worldwide embrace AI-powered methodologies. Tanya's expertise lies in blending traditional qualitative approaches with state-of-the-art AI moderation, making her a sought-after advisor in the evolving research landscape.

In this engaging episode of Talking AI, host Ray Poynter sits down with Steve Phillips, Founder, Chair, and Chief Innovation Officer at Zappi, to explore how artificial intelligence and automation are transforming the world of consumer insights.

Key Highlights
- Steve Phillips' Journey: From the early days of qualitative and quantitative research across multiple continents to founding Zappi, a company built on automating and innovating market research methodologies.
- The Zappi Story: Learn how Zappi shifted the paradigm from slow, expensive research to rapid, scalable, and high-quality insights, leveraging automation and AI from the outset.
- Client Partnerships & Innovation: Discover the importance of 'pirate' clients like Unilever, Coca-Cola, and PepsiCo, who partnered early with Zappi to co-create new approaches, and how these collaborations led to the birth of ADA, PepsiCo's cutting-edge insights platform.
- Generative AI in Research: Steve explains how the arrival of LLMs (like GPT-3) transformed automated report writing, enabling reports with near-expert quality and real-time data synthesis.
- The Power of Data Assets: Insights on how automation not only accelerates research but also builds vast, actionable data assets for meta-analytics and predictive modeling.
- Democratizing Insights: Addressing the debate about democratization, Steve argues that structured automation empowers insight teams to focus on data architecture and strategy, amplifying their organizational impact.
- Looking Ahead: Predictions for the next 18-24 months, covering synthetic research, AI agents, and how insight teams are poised to lead organizations into an AI-first future.
- Advice for Young Researchers: Steve's advice to newcomers: embrace AI, experiment, and reimagine your role for a data-driven, AI-powered era.

Whether you're a research professional, data enthusiast, or innovation leader, this episode offers a wealth of forward-thinking perspectives on the evolving landscape of insights and AI.

Learn more about Zappi: https://www.zappi.io/
Connect with Ray Poynter: LinkedIn
Discover more Talking AI episodes: Podcast Homepage
Guests: Steve Phillips

In this engaging episode of Talking AI, host Ray Poynter sits down with Karlien Kriegler, co-founder of Hello Ara, for a deep dive into the evolving world of market research powered by artificial intelligence.

Meet the Guest: Karlien Kriegler
Karlien brings a unique blend of creativity, innovation, and research expertise to the conversation. Based in Cape Town, South Africa, she shares how her upbringing in a family that valued education and creativity shaped her pursuit of a career in market research, and how her passion for understanding people led her to co-found Hello Ara. Alongside her co-founder, David Wright, who brings international experience and a strong technical background, Karlien is driving transformation in the research industry.

What You'll Learn
- How Hello Ara leverages conversational AI, gaming environments, and unstructured data to deliver deeper insights for clients who are eager to move beyond traditional, scale-heavy research methods.
- The challenges and opportunities of innovating from South Africa, including the importance of agility, creativity, and inclusivity in diverse markets with unique infrastructural needs.
- The evolving role of AI: Discover how advances in AI are making research processes more affordable, creative, and impactful, empowering both researchers and participants.
- Future trends: Hear Karlien's excitement for the intersection of AI, visual experiences, and the metaverse, and what these developments could mean for the future of research.

Why Listen?
This episode is a must-listen for anyone interested in:
- The future of AI in market research
- Practical innovation in diverse markets
- How creative, people-first approaches are shaping research methodologies

Whether you're a research professional, AI enthusiast, or business leader, you'll come away inspired by Karlien's vision and the actionable insights she shares. Listen now to discover how AI is redefining the craft of research at Hello Ara, and what's next for the industry!

In this free episode of Talking AI, host Ray Poynter sits down with Dharmendra Jain, Founder & CEO of Actnable AI (Nairobi, Kenya), to uncover how AI-driven solutions are revolutionizing market research across emerging markets.

Drawing on nearly two decades of experience spanning India and Africa in market research, operations, data, and technology, Jain explains Actnable's all-in-one SaaS platform, featuring:
- Multilingual language solutions: translation in 132 languages and transcription in over 60
- End-to-end qualitative research: recruitment (online/offline), FGD moderation, transcription & AI analytics
- Cloud-based creative testing: automated emotional expression and topic-modeling analytics
- Chatbot Data Insights: real-time survey data upload for early-trend visualization
- Conversational AI surveys: telephonic interviews in local languages, accents, and personas

They discuss the excitement (and confusion) surrounding AI adoption in Africa's mobile-first landscape, the gap between hype and practical usage, and real-world use cases from fast-tracking innovation cycles to serving low-resource languages. Jain shares actionable advice for clients and agencies under pressure to integrate AI, as well as tips for new researchers on balancing human curiosity with emerging technologies.

In this episode of Talking AI, host Ray Poynter sits down with Laura Quinn, president and co-founder of PJ Quinn, Inc. With over 25 years in marketing research and a background in biology and medical training, Laura has built her two-person boutique practice into a go-to qualitative partner for specialty-care pharmaceutical and healthcare clients.

Laura shares her journey from AI skeptic to enthusiastic adopter: attending early AI-focused industry conferences, evaluating multiple platforms, and ultimately choosing a research-dedicated solution (CoLoop) that meets strict HIPAA and GDPR requirements. She walks through the structured protocols and prompt-engineering techniques she and her team developed, alongside key cases where AI outpaced traditional analysis, to deliver faster, richer, and more reliable insights while preserving the human touch in each interview.

Whether you're a solo practitioner or a small qualitative team, you'll walk away with actionable best practices for vetting AI vendors, designing discussion guides with AI in mind, safeguarding data privacy, and integrating AI into every stage of a project without losing the adaptability that makes qualitative research unique. Tune in to learn how to harness AI to make your next pharma study cheaper (in time), faster, and, most importantly, better.

In this free episode of Talking AI, Ray Poynter welcomes Alok Jain, co-founder and CEO of Reveal (doReveal.com), to dive into achieving true depth and quality in AI-powered research. Drawing on his journey from founding the first UX team at the World Bank through leading research at Fortune 50 Centene, Alok reveals how Reveal uses human-centric design and built-in prompt best practices to help researchers focus on craft, not busywork.

Together, they explore:
- Why simply "10xing" breadth isn't enough, and how depth delivers real value
- The pros and cons of generic LLM chat platforms versus specialist tools like Reveal
- Guardrails, context windows, and prompt engineering baked into research workflows
- Proven strategies to minimize hallucinations and ensure AI-driven insights stay tethered to your data
- Best go-to resources (MRX Pros, grounded theory frameworks, LinkedIn and Twitter threads) and Alok's top tips for AI newcomers, including "vibe coding" to automate tasks in natural language

Alok Jain brings over a decade of experience at the intersection of economics, UX design, and machine learning. At Reveal, he's on a mission to synthesize qualitative interviews and focus group transcripts so that researchers can spend more time asking the right questions, and less time wrangling data. Tune in to learn how to harness AI for deeper, more trustworthy insights.

In this free episode of Talking AI, host Ray Poynter sits down with Scott Swigart, SVP of AI Innovation at Shapiro & Raj and Head of Product for Stellar, to uncover the five core causes of AI hallucinations in ResTech, and how to crush them. From visually dense slide decks and the "chunking chainsaw massacre" to outdated training data, overconfidence bias, and data contradictions, Scott shares practical, human-augmented workflows that ensure your AI-powered insights are rock-solid.

Scott Swigart brings over 20 years of experience at the intersection of market research and technology. A former co-owner of a boutique insights agency and a life-long programmer since age 12, he now leads development of Stellar, Shapiro & Raj's proprietary AI insights platform, helping life-sciences clients turn mountains of qualitative and quantitative data into trustworthy conclusions.

Tune in to learn:
- How to reverse-engineer complex charts into clean data tables and quality-control them
- Why context window size matters and how to avoid "chunking" pitfalls (see the sketch below)
- Techniques for forcing AI to treat your data as the source of truth, not its outdated training set
- Ways to expose AI's overconfidence and request built-in "show-your-work" reasoning
- Methods to map, weight, and surface contradictions across reports, transcripts, and studies

Equip yourself with the tactics you need to transform AI's superpower, scaling across thousands of pages, into reliable, actionable insights.

Scott's LinkedIn Article: 5 Causes of Hallucination in Insights AI

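To make the "chunking" pitfall concrete, here is a minimal, illustrative Python sketch of one mitigation in the spirit of the episode: split documents on natural paragraph boundaries with a small overlap, rather than slicing at arbitrary character offsets. The function name, size limits, and sample text are invented for illustration and are not taken from Stellar or the interview.

```python
# Chunk documents on paragraph boundaries (with overlap) rather than slicing
# blindly at a fixed character count, so no passage is cut mid-thought before
# it reaches the model. Sizes here are illustrative, not recommendations.

def chunk_on_paragraphs(text: str, max_chars: int = 1200, overlap_paras: int = 1):
    """Group whole paragraphs into chunks under max_chars, repeating the last
    paragraph(s) of each chunk at the start of the next for continuity."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], []
    for para in paragraphs:
        if current and len("\n\n".join(current + [para])) > max_chars:
            chunks.append("\n\n".join(current))
            current = current[-overlap_paras:]  # carry a little context forward
        current.append(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks

report = ("Overall awareness rose in Q2.\n\n" * 3 +
          "Qual respondents linked the rise to the sponsorship.\n\n" * 3)
for i, chunk in enumerate(chunk_on_paragraphs(report, max_chars=120)):
    print(f"--- chunk {i} ---\n{chunk}\n")
```
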
In this Talking AI episode, Ray Poynter welcomes Ramona Daniel, an independent cultural insights consultant with over 25 years in research, to discuss how generative AI has transformed her workflow. Ramona recounts her journey from telephone interviewer at Ipsos to leading a 79-market media study at Group M, and finally to striking out on her own in 2024. Facing the challenges of solo consulting, she turned to LLMs to replicate the role of missing team members, ask critical questions, and uncover patterns in cross-cultural data.

What You'll Learn

Ramona's AI Onboarding Story
- Why she began experimenting with ChatGPT, Claude, Gemini, DeepSeek, Copala, and Perplexity
- How she evaluated outputs, compared platforms, and found which tools delivered the best insights

Practical AI Experiments
- Turning raw PCA outputs from jamovi into interpretable narratives via ChatGPT or Claude (see the sketch below)
- Using Perplexity as a natural-language search engine instead of traditional keyword Google searches
- Converting dense research papers into audio "podcasts" with NotebookLM to quickly grasp key themes and cultural implications

Lessons Learned & Pitfalls to Avoid
- Why off-the-shelf LLM outputs can feel "middle of the road" and how to apply critical thinking to interrogate them
- Which features (e.g., ChatGPT's voice mode, Claude's voice, Gemini's latest modules) fell short and why
- The temptation to shortcut learning, and how to ensure AI tools complement rather than replace your own expertise

Opportunities & Threats for Cultural Research
- How AI's pattern-recognition capabilities can surface unseen cultural signals beyond English-only sources
- Inherent biases in LLM training data and the importance of human validation when interpreting cultural context

Actionable Tips for Beginners
- Experiment liberally: press buttons, type queries, see what happens, and fail fast
- Don't lock into one platform; test multiple LLMs (ChatGPT, Claude, Gemini, etc.) to find which voice aligns with your needs
- Shape AI workflows to fit your unique process, rather than expecting plug-and-play solutions

Key Takeaways
- AI as a Collaborative Partner: Use LLMs to fill gaps in your network, challenge your thinking, and highlight hidden data patterns, but always apply critical judgment.
- Multimodal & Multilingual Research: Tools like NotebookLM let you transform PDFs, research papers, and non-English sources into digestible formats, expanding your cultural horizon.
- Continuous Experimentation: AI platforms evolve rapidly; regularly revisit and compare ChatGPT, Claude, Gemini, Perplexity, and emerging tools to stay ahead.

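As a rough illustration of the jamovi-to-LLM step described above, the sketch below formats a small table of principal-component loadings into a prompt that asks the model for interpretive labels and a narrative. The loadings, item names, and prompt wording are invented for illustration; in practice the table would be exported from jamovi (or any stats package) and the prompt pasted into ChatGPT or Claude.

```python
# Turn PCA loadings into a structured prompt for an LLM to interpret.
import pandas as pd

# Invented example loadings; real values would come from jamovi output.
loadings = pd.DataFrame(
    {
        "PC1": [0.82, 0.79, 0.12, 0.08],
        "PC2": [0.10, 0.15, 0.85, 0.77],
    },
    index=["values tradition", "respects elders", "seeks novelty", "enjoys risk"],
)

prompt = (
    "You are assisting a cross-cultural researcher. The table below shows "
    "principal-component loadings from a values survey. For each component, "
    "suggest a short interpretive label and a two-sentence narrative, and "
    "note any cultural caveats.\n\n" + loadings.round(2).to_string()
)
print(prompt)  # This prompt would then be pasted into ChatGPT or Claude.
```
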
In this Talking AI episode, co-founders Ray and Will Poynter break down the rise of Deep Research, an AI workflow that combines live web search, long-context reasoning, and citation-linked summarisation.

What You'll Learn

What "Deep Research" Means
- Why multiple vendors (ChatGPT, Gemini, Perplexity) use the same term
- How it differs from standard chat prompts or Canvas sessions

Speed vs. Depth Trade-offs
- 178 web hits & 30 sources in < 10 min: when the wait is worth it
- When iterative Canvas-style prompting is still faster

Source Quality & Hallucination Control
- Forcing reputable domains, spotting blocked sites (e.g. BBC, pay-walled press)
- Using the final summary first, then drilling into citations

Practical Use-Cases
- Entering new verticals (e.g., the canned-coffee market in Japan, TfL transport data)
- Generating monthly polling digests, executive briefings, podcast scripts

Limitations & Work-arounds
- Verbosity, missing premium sources, daily search caps on free tiers
- Future outlook: personalised memory, task-based scheduled research, dashboard feeds

Key Takeaways
- Deep Research = an AI research assistant on steroids, ideal for zero-to-sixty topic ramp-ups.
- Quality in, quality out: specify sources and always sanity-check citations.
- Iterate smartly: use summaries to steer; don't wade through 20 pages blind.
- Free to start: Perplexity's free tier offers one robust run per day; ChatGPT-4o currently leads on depth and reasoning but costs more.

In this deep-dive episode of Talking AI, co-founders Ray and Will Poynter unpack the concept of Vibe Coding, a term coined by Andrej Karpathy to describe using AI to generate executable code. They separate hype from reality, explore real-world productivity studies, and forecast how AI will reshape both professional and hobbyist coding.

What You'll Learn
- Defining Vibe Coding: Understand the broad definition of AI-generated code and why it's more than just hype.
- Impact on Professional Developers: The shift from boilerplate implementation to AI oversight, security, and system design, and the new skill sets needed for debugging AI-generated code versus human-written code.
- AI Pair Programming: Why "AI Pair Programming" is a more positive framing than Vibe Coding, and best practices for collaborating with AI as a coding partner (see the sketch below).
- Tools & Platforms: Recommendations for beginners (Google Colab + ChatGPT/Gemini) and enterprise-grade environments (Codex, Cursor, Windsurfer).
- Prototyping vs. Production: How AI accelerates prototyping, and why human review remains essential for production-quality code.
- Visual & UX Design with AI: AI's growing competence in HTML/CSS layout and styling, and its limitations in non-code visual design (e.g., PowerPoint).

Key Takeaways
- AI isn't replacing all coders; it augments them. Security, compliance, and originality become critical new frontiers for human engineers.
- AI Pair Programming fosters a collaborative workflow: AI generates; engineers review, refine, and integrate.
- Choose your tools wisely: start with Colab for learning; explore Codex or enterprise agents for dedicated development.

Whether you're a seasoned developer, an insights professional experimenting with Python/R, or simply curious about the future of software creation, this episode equips you with actionable insights and tool recommendations to ride the next wave of AI-powered coding.

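The sketch below illustrates the kind of review loop the "AI Pair Programming" framing implies: the human writes the specification and tests first, the model drafts the function, and nothing is integrated until the tests pass. The spec wording, the `top_box` function, and the tests are all invented for illustration; they are not from the episode.

```python
# AI pair programming as a workflow: human specifies behaviour (tests),
# the model drafts code, the human reviews and runs the tests before shipping.

SPEC = """
Write a Python function top_box(scores, box=2) that returns the share of
ratings in the top `box` points of a 1-5 scale, as a float between 0 and 1.
"""

# Tests written by the human BEFORE asking the model for code.
def run_review_tests(top_box):
    assert top_box([5, 5, 4, 1, 2]) == 0.6        # 3 of 5 ratings are 4 or 5
    assert top_box([1, 2, 3]) == 0.0
    assert top_box([5], box=1) == 1.0
    print("All review tests passed.")

# A draft of the kind of function the model might return for SPEC.
def top_box(scores, box=2):
    threshold = 5 - box + 1                        # e.g. box=2 -> ratings of 4 or 5
    hits = sum(1 for s in scores if s >= threshold)
    return hits / len(scores)

run_review_tests(top_box)  # human review step: run before integrating
```
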
In-Depth Exploration of AI, Behavioral Science, and LLMs

In this compelling episode of Talking AI, host Ray Poynter sits down with Elina Halonen, a pioneering behavioral strategy consultant and founder of Prismatic Strategy. With almost 20 years of experience in consumer insights and over a decade specializing in behavioral science, Elina offers a unique lens on how human behavior intersects with artificial intelligence.

What You'll Discover
- Behavioral Strategy & AI: How decades of expertise in human behavior are applied to unlock the full potential of AI and large language models (LLMs).
- LLMs & Cognitive Science: An exploration of how machine learning models mimic human thought processes, and the implications for AI adoption.
- Ethics and Bias in AI: A deep dive into the ethical challenges, representation issues, and bias that influence AI's role in society.
- Cultural & Linguistic Influences: Insights into how language, cultural nuances, and communication shape AI behavior and trust.
- Practical Applications: Real-world examples of using AI as a collaborative thinking partner to enhance productivity, creativity, and decision-making.

Join us for a thought-provoking discussion that bridges behavioral science with cutting-edge AI trends. Whether you're a tech enthusiast, a professional in digital transformation, or simply curious about the future of AI, this episode is packed with insights to keep you at the forefront of innovation.

Join us on Talking AI as host Ray teams up with guest Paul Marsden to explore the rapidly evolving world of artificial intelligence and digital innovation. This episode is packed with insights on how AI and machine learning are transforming industries, enhancing business strategies, and driving global digital transformation.

In This Episode
- AI Evolution: Learn about the latest trends in AI and how they are reshaping industries.
- Machine Learning & Data: Discover practical applications of machine learning and data-driven decision making.
- Future Tech Insights: Gain expert predictions on emerging technologies and digital innovation.
- Ethics & Regulation: Understand the ethical challenges and regulatory issues surrounding AI development.
- Strategic Innovation: Explore actionable strategies for leveraging AI to gain a competitive edge.

Whether you're a tech enthusiast, business leader, or industry professional, this in-depth discussion provides valuable insights into the future of technology and innovation. Enhance your understanding of AI and prepare for the digital changes ahead.

About the Guest
Dr Paul Marsden is a consumer psychologist known for his research on emerging technology and mental health. He featured in the award-winning film on teens and tech, I Am Gen Z, and currently researches how AI shapes wellbeing, creativity, and performance. Paul lectures in consumer psychology and positive psychology (the science of happiness) at UAL, and advises global brands through Brand Genetics on positive innovation, using insights from positive psychology to develop products and experiences that enhance wellbeing, performance, and flourishing. He co-founded BrainJuicer (now System1 PLC), one of the first agencies to apply AI in market research, and is chartered by the British Psychological Society (BPS).

In this episode of Talking AI, hosts Ray Poynter and Will Poynter from ResearchWiseAI sit down with Andrew Jeavons, co-founder of Signoi, to delve into the world of AI personas and how they're reshaping market research. Discover why companies are using personas to bring segmentation data to life, the importance of privacy safeguards in synthetic research, and how AI can quickly (and cost-effectively) assist with concept testing, all without sacrificing data integrity.

Andrew also sheds light on quantitative semiotics, the interplay between AI and image analysis, and the ethical dilemmas posed by digital twins. You'll hear about real-world use cases for persona-driven insights, the challenges of ensuring bias-free outputs, and a look at how RAG-based retrieval is transforming the way brands update their AI with new findings over time.

If you're curious about how generative AI, synthetic data, and personalized consumer simulations can drive innovation while respecting anonymity and data security, this is an episode you won't want to miss. Tune in to stay ahead of the curve in AI, personas, and the evolving landscape of consumer research. Be sure to check out our related episodes on RAG and synthetic data for even more insights into the future of artificial intelligence.

In this episode of Talking AI, Ray and Will Poynter, co-founders of ResearchWiseAI, take a deep dive into the world of custom GPTs. They explore how these tailored AI assistants allow users to add unique instructions, integrate proprietary knowledge bases, and access specialized tools without requiring extensive coding skills. Ray and Will discuss how platforms like OpenAI's Custom GPTs and Hugging Face Assistants let both novices and experts rapidly prototype AI-driven solutions for tasks ranging from HR and compliance to market research and client engagement.

A key focus in the conversation is how custom GPTs differ from conventional AI chatbots. Ray and Will emphasize the possibility of enhancing a standard AI model with company policies, proprietary datasets, and multi-agent frameworks for more powerful interactions (see the sketch below). While custom GPTs provide a swift, no-code entry point, they also address practical concerns such as data privacy, usage caps, and the need for hosting accounts. Drawing on their own experience with projects like "Virtual Ray," Ray and Will show how creating an AI prototype in minutes can accelerate the product development cycle; yet, over time, more robust, coded solutions may be necessary for advanced or highly specialized tasks.

For market researchers, the duo highlights exciting applications. From testing discussion guides with synthetic personas to creating interactive reporting tools, custom GPTs can make research findings more accessible and engaging. Ray and Will also touch on the broader AI ecosystem, mentioning how commercial software and big tech players are increasingly integrating custom GPT capabilities to attract users and streamline creative workflows.

Lastly, they note that while custom GPTs can quickly fill gaps in knowledge or workflow, they come with limitations. Building a fully bespoke system allows for deeper control, stronger security, and the ability to implement complex chain-of-thought or multi-agent reasoning. Yet for many businesses, starting with a low-code or no-code solution is a strategic first step to validate ideas and scale later.

Whether you're a market researcher curious about speeding up insights generation, an HR professional looking to centralize internal knowledge, or an AI enthusiast seeking to prototype new apps, this conversation breaks down the value, practicality, and potential pitfalls of custom GPTs. Tune in to hear Ray and Will share insider tips, real-world examples, and a vision for how custom GPTs can reshape the future of AI-driven decision-making, only on Talking AI.

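As a rough sketch of what "customizing" a GPT amounts to under the hood, the snippet below wraps a base chat interface with fixed instructions and a small proprietary knowledge base that is injected into every conversation. The policies, helper names, and keyword-matching rule are invented for illustration; platforms such as OpenAI's Custom GPTs handle this wiring for you behind a no-code interface.

```python
# Illustrative sketch: a "custom GPT" = base model + fixed instructions +
# a proprietary knowledge base merged into the conversation context.
KNOWLEDGE_BASE = {
    "screener policy": "All B2B screeners must include a role-seniority check.",
    "incentive policy": "Standard incentive for a 45-minute IDI is 75 GBP.",
}

SYSTEM_INSTRUCTIONS = (
    "You are the agency's research assistant. Follow company policy exactly. "
    "If the knowledge base does not cover a question, say you are unsure."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list a custom assistant would send to the model."""
    relevant = [
        f"{topic}: {text}"
        for topic, text in KNOWLEDGE_BASE.items()
        if any(word in user_question.lower() for word in topic.split())
    ]
    context = "\n".join(relevant) or "No matching policy found."
    return [
        {"role": "system",
         "content": SYSTEM_INSTRUCTIONS + "\n\nKnowledge base:\n" + context},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("What incentive should we offer for an IDI?")
# With a hosted model, these messages would be passed to a chat-completion call.
print(messages[0]["content"])
```
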
In this episode of Talking AI, Ray and Will Poynter, co-founders of ResearchWiseAI, discuss the practical uses, challenges, and future of synthetic data in market research. Synthetic data is defined as data created rather than collected, with Ray clarifying three main categories currently relevant to the market research industry:

- Augmented Synthetic Data: The most common approach in market research, augmenting existing survey data by adding synthetic cases to fill gaps, improving upon traditional weighting methods. It is valuable for reducing cost and time, particularly addressing the last 10-20% of data collection that typically incurs the greatest expense (see the sketch below).
- Personas: Qualitative or quantitative synthetic entities representing customer segments or groups, such as loyalists or trialists. These are interactive personas that help brand managers generate insights, ideas, and strategies through simulated dialogue and creative brainstorming.
- Fully Synthetic Data: Entire datasets created synthetically, bypassing traditional data collection entirely. Though not yet widespread due to concerns about efficacy and trust, this method offers significant potential for privacy protection and rapid analysis.

Ray and Will highlight practical advantages of synthetic data, such as faster turnaround times, reduced costs, and enhanced data security and privacy. Synthetic data originated partly to address privacy issues, like adding noise to census data, and continues to offer strong security benefits by replacing sensitive personal information.

However, concerns about synthetic data persist. Ray emphasizes that the primary industry worries include questions about accuracy, validation methods, and reliability across different contexts. The lack of standardized validation techniques to assess synthetic data accuracy remains a critical hurdle. Ray advises that validation should ultimately focus on whether synthetic data supports effective business decisions.

Discussing future trends, Ray and Will predict significant growth for augmented synthetic data and interactive personas, driven by increased industry acceptance and regulatory clarity. They foresee augmented data increasingly replacing traditional weighting, while personas evolve into dynamic tools allowing brands to simulate interactions with target audiences in real time. While fully synthetic data may face limitations, especially if overused without fresh data collection, Ray suggests it could eventually eliminate traditional surveys by directly leveraging AI's deep understanding of consumer behavior and business questions. However, this approach might become obsolete if AI systems reach a point where they directly generate insights without needing traditional survey structures at all.

To conclude, Ray and Will encourage careful, validated adoption of synthetic data approaches, underscoring their potential to transform market research by speeding processes, enhancing privacy, and generating richer, more actionable insights. Tune in next week for another episode of Talking AI.

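To make the "augmented synthetic data" idea concrete, here is the toy Python sketch promised above: an under-represented segment is topped up with generated cases instead of relying on weighting. The quota target, segment sizes, and the resample-with-noise generation step are purely illustrative; real augmentation would use a model trained on the observed data.

```python
# Top up an under-represented segment with synthetic cases so the sample hits
# its quota without weighting. All numbers below are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy survey: 18-34s are under-represented versus a 50/50 quota target.
survey = pd.DataFrame({
    "age_group": ["18-34"] * 80 + ["35+"] * 120,
    "purchase_intent": np.r_[rng.normal(7.2, 1.5, 80), rng.normal(6.1, 1.5, 120)],
})

target_share = 0.5
n_young = (survey["age_group"] == "18-34").sum()
n_older = (survey["age_group"] == "35+").sum()
needed = int(round(target_share / (1 - target_share) * n_older - n_young))  # 40 cases

# Stand-in "generator": resample observed young cases and perturb them slightly.
young = survey[survey["age_group"] == "18-34"]
synthetic = young.sample(needed, replace=True, random_state=1).copy()
synthetic["purchase_intent"] += rng.normal(0, 0.3, needed)
synthetic["source"] = "synthetic"

survey["source"] = "observed"
augmented = pd.concat([survey, synthetic], ignore_index=True)
print(augmented.groupby("age_group").size())  # now roughly 50/50 without weighting
```
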
In this episode, Ray and Will discuss the basics of RAG (Retrieval Augmented Generation). What is RAG? What is a RAG AI good for? And why are some people dismissive of it?

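For readers new to the term, here is a minimal, self-contained sketch of the RAG pattern the episode introduces: retrieve the passages most relevant to a question, then hand only those passages to the model as grounding context. The toy documents and the TF-IDF retriever are stand-ins; production systems typically use embedding models and a vector store.

```python
# Minimal RAG pattern: retrieve relevant passages, then augment the prompt
# so the model answers from them rather than from its general training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy "knowledge base" of research notes (illustrative text only).
documents = [
    "Respondents in the 18-24 segment preferred the citrus variant in blind tests.",
    "Brand awareness for Product X rose 12 points after the spring campaign.",
    "Focus groups flagged the packaging copy as confusing in two of three markets.",
]

question = "Which variant did respondents aged 18-24 prefer?"

# Step 1: retrieval - score each document against the question.
vectorizer = TfidfVectorizer().fit(documents + [question])
doc_vectors = vectorizer.transform(documents)
query_vector = vectorizer.transform([question])
scores = cosine_similarity(query_vector, doc_vectors)[0]

# Step 2: keep the top-scoring passages as context.
top_passages = [documents[i] for i in scores.argsort()[::-1][:2]]

# Step 3: augmented generation - constrain the model to the retrieved text.
prompt = (
    "Answer the question using ONLY the context below. "
    "If the context does not contain the answer, say so.\n\n"
    "Context:\n- " + "\n- ".join(top_passages) + f"\n\nQuestion: {question}"
)
print(prompt)  # In a real pipeline this prompt would be sent to an LLM.
```
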
In our fourth installment, Ray and Will discuss how to embrace generative AI to boost your qualitative analysis. Ray shares his experience with the latest tools and techniques, drawing on his 45 years in the field of qualitative analysis.
Gemini: Google's AI

What is Google's answer to ChatGPT? Ray and Will discuss Google Gemini and its implications for the market research AI landscape.
Ray and Will from ResearchWiseAI talk about agents—what they are, how they work, and where they fit into market research. We explain the difference between autonomous agents, like a Roomba or Grammarly, and reactive agents powered by advanced LLMs such as ChatGPT Operator. We also discuss how agents can handle tasks such as dynamic data collection, cleaning, and analysis by autonomously selecting the right tools for the job. Tune in to learn more about how AI agents are evolving and how they might eventually assist with day-to-day research tasks.
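As a toy illustration of the "selecting the right tools for the job" idea, the sketch below dispatches a task description to one of two stand-in tools. In a real agent an LLM would do the planning and tool selection; the keyword lookup, tool names, and data here are invented purely for illustration.

```python
# Toy agent loop: given a task description, choose a tool and run it.
import statistics

def clean_data(values):
    """Drop missing values (a stand-in for a data-cleaning tool)."""
    return [v for v in values if v is not None]

def analyse_data(values):
    """Summarise values (a stand-in for an analysis tool)."""
    return {"n": len(values), "mean": statistics.mean(values)}

TOOLS = {"clean": clean_data, "analyse": analyse_data}

def run_agent(task: str, data):
    """Pick a tool whose name appears in the task description and apply it.
    (A real agent would let an LLM choose; this keyword match is a stand-in.)"""
    for name, tool in TOOLS.items():
        if name in task.lower():
            print(f"Agent selected tool: {name}")
            return tool(data)
    return "No suitable tool found."

raw = [7, 9, None, 6, 8]
cleaned = run_agent("Please clean this tracker data", raw)
print(run_agent("Now analyse the scores", cleaned))
```
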
This is the first episode in a series we are calling Talking AI. In this episode, Ray and Will discuss how and why you might want to download AI models onto your computer.
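For anyone wanting to try the episode's topic hands-on, here is a minimal sketch of running a small model locally with the Hugging Face transformers library (assuming `transformers` and `torch` are installed). The model choice and prompt are illustrative; distilgpt2 is used only because it is small enough to download quickly, not because it rivals the hosted models discussed elsewhere in the series.

```python
# Download a small model to your machine and generate text locally.
# Assumes: pip install transformers torch
from transformers import pipeline

# The first call downloads the weights (a few hundred MB) to your local cache;
# subsequent runs work without re-downloading.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Market researchers can use local language models to",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```
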