Ed-Technical

Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like: how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?

Assessment in Education: To AI or Not to AI?

In this episode of Ed-Technical, Libby and Owen speak with assessment expert Dylan Wiliam, Emeritus Professor at the UCL Institute of Education, about how formative assessment and AI are reshaping classroom practice. Dylan brings decades of experience in educational research and teacher development to a timely conversation about what works, what doesn’t, and what’s next for assessment. They cover: why formative assessment remains underused despite its proven impact; how AI is resha...

08-14
37:10

Is ChatGPT Rotting Your Brain?

In this short, Libby and Owen digest a recent MIT study attracting a lot of attention, ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing’. The study looked at how using tools like ChatGPT for writing essays affects people's brains and writing abilities compared to using search engines or just their own thinking. Is there a potential trade-off between making writing easier in the short term and harming cognitive abilities and learning over tim...

07-17
15:38

Finding Their Voice: Voice AI for Literacy Support

Voice AI is having a moment in education. As schools grapple with declining literacy scores and stretched teaching resources, voice-enabled tools have the potential to help. But what's already working in real classrooms, and what challenges remain? In this episode, Libby and Owen speak with Kristen Huff from Curriculum Associates and Amelia Kelly from SoapBox Labs about the emerging field of voice AI for literacy support and assessment. Together they explore how automatic speech recognition t...

06-17
33:43

Coach or Crutch?: Using AI to hone self-regulation (not outsource it)

In this episode, Libby and Owen talk to Sanna Järvelä and Inge Molenaar, two of the world’s leading scholars on self-regulated learning (SRL). Together they cover SRL 101: what self-regulated learning is and why it is a valuable skill. Self-regulated learning is when students set their own goals and then monitor their learning to achieve those goals. Self-regulation can come more naturally in informal learning settings like sports, but it can be harder to monitor your learning and kno...

05-06
30:43

A1 sauce for all: Reflections from SXSW and ASUGSV

This week Owen and Libby reflect on two recent EdTech conferences in the US: SXSW Edu in March and ASUGSV in April. They discuss how much things have shifted for US education over this short time period, and three themes that stood out to them both: AI literacy, transformation versus efficiency, and the disruptive potential of AI for education. Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel) Listen to all episodes of Ed-Te...

04-22
14:54

Mimicry versus meaning: why context is important for AI tools

Another live Ed-Technical episode! In this short, Owen does a deep dive on AI and discourse analysis (the study of how meaning is constructed through language) with three experts. The conversation explores the intersection between AI, particularly Large Language Models (LLMs), and the study of discourse. This is a topical conversation as LLM capabilities continue to evolve. LLMs have mastered sentence-level communication. However, we know less about their ability to be useful over the co...

03-26
22:06

Live from SXSW EDU: Evidence Eats AI for Breakfast

Everyone is talking about AI’s power to provide answers, but what about your lingering questions? What does the latest research actually tell us? Join Libby and Owen for this live session from SXSW EDU as they delve into the latest research to uncover where AI is truly adding value in the educational landscape — and where it falls short. They’re joined by two expert guests: Kristen DiCerbo from Khan Academy and Assistant Professor Peter Bergman from the University of Texas at Austin and Learning ...

03-17
31:21

181 Papers Later: What We Know (and Don't) About GenAI in Schools

In this episode, Owen and Libby chat with Chris Agnew about Stanford's new generative AI hub for education. Chris leads this initiative within Stanford's SCALE program, which aims to be a trusted source for education system leaders on what works in AI and learning. Chris walks us through their research repository of 181 papers examining AI's impact in K-12 education. He outlines their GenAI tools typology, which breaks down AI applications into three categories: efficiency gains, improving stu...

02-25
15:42

Is two years of learning possible in six weeks with AI?

In this short, Owen and Libby discuss a recent World Bank blog post about a study in Nigeria that evaluated the impact of Microsoft Copilot (powered by ChatGPT) on student learning outcomes. In a six-week after-school programme, students were supported to use Copilot. The full study hasn’t been published yet, but the blog post reports “overwhelmingly positive effects on learning outcomes”. It reports that the learning improvement over the six-week programme was equivalent to nearly two years o...

02-10
08:43

Babies & AI: what can AI tell us about how babies learn language?

In this episode, Libby and Owen interview Mike Frank, Professor at Stanford University and leading expert in child development. This episode has a different angle to the others, as it is more about AI as a scientific instrument rather than as a tool for learning. Libby and Owen have a fascinating discussion with Mike about language acquisition and what we can learn about language learning from large language models. Mike explains some of the differences between how large language models devel...

01-27
35:00

Teachers & ChatGPT: 25.3 extra minutes a week

In this short, Libby and Owen discuss a hot-off-the-press study that is one of the first to test how ChatGPT impacts the time science teachers spend on lesson preparation. The TLDR is that teachers who used ChatGPT, with a guide, spent 31% less time preparing lessons - that’s 25.3 minutes per week on average. This very promising result points to the potential for ChatGPT and similar generative AI tools to help teachers with their workload. However, we encourage you to dig into the summar...

01-13
10:55

How & why did Google build an education-specific LLM? (part 2/3)

This episode is the second in our three-part mini-series with Google, where we find out how one of the world’s largest tech companies developed a family of large language models specifically for education, called LearnLM. This instalment focuses on the technical and conceptual groundwork behind LearnLM. Libby and Owen speak to three expert guests from across Google, including DeepMind, who are heavily involved in developing LearnLM. One of the problems with out-of-the-box large language...

12-16
38:22

AI tutoring part 2: How good can it get?

In this episode, Owen and Libby chat about AI tutoring with guests Ben Kornell, Managing Partner at Common Sense Growth Fund, and Alex Sarlin, a veteran in the edtech industry. Both co-founded Edtech Insiders, a leading newsletter and podcast covering the growing edtech industry. Ben and Alex differentiate between AI-powered search and true AI tutoring, and discuss trends like AI-enhanced human tutors, hybrid models, and fully autonomous AI bots. The conversation highlights the need fo...

12-02
21:26

Inside the black box: How Google is thinking about AI & education (part 1 of 3)

This episode is the first of a three-part mini-series with Google. There is a lot of interest in how big tech companies are engaging in AI and education and what their future plans are - in this mini-series, hear the latest directly from Google. The genesis of this mini-series was a short Ed-Technical episode from earlier this year. Libby and Owen discussed a paper Google released about the work they had done to fine-tune an LLM called LearnLM to make it more useful for education. This w...

11-18
36:19

Big data and algorithmic bias in education: what is it and why does it matter?

This episode, Owen and Libby speak to Ryan Baker, a leading expert in using big data to study learners and learning interactions with educational software. Ryan is a Professor in the Graduate School of Education at the University of Pennsylvania, and is Director of the Penn Center for Learning Analytics. Ryan provides an overview of educational data mining (otherwise known as EDM) and explains how insights from EDM can help improve learner engagement and outcomes. Libby and Owen a...

10-21
25:22

Think aloud or think before you speak?: OpenAI’s new model for advanced reasoning

In this short episode, Libby and Owen discuss OpenAI’s new model for advanced reasoning, o1. They talk about its new capabilities and strengths, and what they think about its significance for education after an initial play around. They talk through the benefits of ‘think aloud’ versus ‘think before you speak’ approaches in education, and how this relates to o1. Links: OpenAI’s announcement about o1 Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen...

10-08
10:59

Misconceptions about misconceptions: How AI can help teachers understand & tackle student misconceptions

In this episode, Libby and Owen are joined by Craig Barton, Head of Education at Eedi and host of the Mr Barton Maths and Tips for Teachers podcasts, along with Simon Woodhead, Director of Research at Eedi. Together, they explore the world of educational misconceptions—what they are, why they matter and how AI and data science can help tackle them. Links: Craig Barton biography Simon Woodhead biography Eedi’s research Join us on social media: BOLD (@BOLD_...

09-23
34:16

Why Language Models are suck-ups and how this can be bad for learning

In this short, Libby and Owen discuss recent research from Anthropic looking at sycophancy – the tendency to agree with users – in large language models (LLMs), and key research from educational psychology about how important feedback is for learning. Libby and Owen connect the two papers and explore why sycophancy is especially a problem when it comes to using LLMs for educational purposes. Links: Anthropic paper on sycophancy in language models John Hattie and Helen Timperley’s p...

09-09
11:56

Passionate about planning (and Tim Walz): automated lesson planning tools

In this short, Libby and Owen discuss automated lesson planning tools (after Owen stops talking about his Tim Walz crush). There’s now a growing number of lesson planning tools out there for teachers who are using AI: Khanmigo, Magic School, Diffit and Oak National Academy (who will soon release a lesson planning tool) to name a few. Libby and Owen cover what some of the automated tools do and what some of their features are. They share their thoughts about the value and benefits of the tools...

08-26
13:36

Short: Generative AI Can Harm Learning - our quick takes

In this short, Libby and Owen discuss a recent paper that has generated interest and discussion called ‘Generative AI Can Harm Learning’. The paper presents the findings from a thought-provoking study of nearly 1,000 students in Turkey. The study tested the effects of giving students access to two different versions of GPT-4 while studying math: one was essentially ChatGPT and the other was a version of GPT-4 that had been tailored for tutoring with a thin prompt wrapper – so it didn’t just g...

08-12
10:13
