Foundational Impact
Author: Good Future Foundation
© Copyright 2025 Good Future Foundation
Description
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. The series is hosted by Daniel Emmerson, Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.
This podcast series sets out to explore the trials and tribulations of building a nonprofit from the ground up, while also investigating the changing world of technology in life, learning, and work.
30 Episodes
What skills will our students genuinely need to thrive in a future driven by AI? To find the answer, Daniel Emmerson goes straight to the source and sits down with the brilliant young minds behind seven teams from the Hult Prize Global Accelerator, one of the final stages of the world's largest student startup competition.

This episode takes you on a global tour of innovation. You'll hear how these young entrepreneurs are using AI to tackle major problems: enhancing public safety by turning CCTV cameras into proactive witnesses, helping firefighters respond faster, pioneering sustainability by transforming agricultural waste into valuable resources, and creating a "calorie counter" for your carbon footprint. The conversations also cover how AI is being used to deliver personalised education in Ethiopia and provide gentle, effective speech therapy for children. Although these startups focus on very different issues, they all agree that the future isn't about AI replacing people, but about people being empowered by it. Tune in to find out the essential skills our future generation will need.
In this episode, we spotlight Adam, co-founder of Stick'Em, the innovative startup that just won the prestigious $1 million Hult Prize. He explains how his team developed a robotics kit that costs a mere fraction of traditional models, making quality STEAM education more accessible to children everywhere. This conversation, in two parts, offers a rare before-and-after glimpse of a startup on the cusp of greatness. First, hear from Adam during the Hult Prize Accelerator, where he emphasises the importance of STEAM education in fostering skills like creative and analytical thinking to prepare students for an AI-infused world. Then, Daniel reconnects with Adam following his win to discuss how Stick'Em is already leveraging AI to streamline startup operations and plan its global expansion.

Tune in for an interesting look at the intersection of affordable education, social impact, and cutting-edge artificial intelligence.
In this episode, Daniel speaks with Muireann Hendriksen, Principal Research Scientist at Pearson, about her team's recent research study, "Asking to Learn". The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. A key finding was that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility, and inclusivity, as well as how AI tools can promote active learning behaviours. You can find the full research report at https://plc.pearson.com/sites/pearson-corp/files/asking-to-learn.pdf
Summary
This conversation explores the integration of AI in education, focusing on how a school has embraced AI technology to enhance learning outcomes. The discussion covers the importance of building a shared vision among stakeholders, the challenges of risk assessment and data privacy, and the cultural shift required to embrace AI. The speakers share their experiences with teacher training, experimentation with AI tools, and the current state of AI in classrooms. They reflect on their journey towards achieving an AI quality mark and discuss the future implications of AI in education.

Takeaways
- AI integration requires a shared vision among all stakeholders.
- Risk assessment and data privacy are critical in AI implementation.
- Teachers need support and training to effectively use AI tools.
- Open communication with parents is essential for AI integration.
- Experimentation with AI can lead to innovative teaching practices.
- AI can enhance student engagement and learning outcomes.
- Professional development is key to overcoming resistance to AI.
- Schools must adapt policies to address the challenges of AI.
- AI tools should be used to complement, not replace, traditional teaching methods.
- The future of education will increasingly involve AI technologies.
Summary
This episode features Matthew Pullen from Jamf, who talks about what thoughtful integration of technology and AI looks like in educational settings. Drawing on his experience in the education division of a company that serves more than 40,000 schools globally, Mat has seen numerous use cases. He distinguishes between the purposeful application of technology to dismantle learning barriers and the less effective approach of adopting technology for its own sake. He also asserts that striking the right balance between IT needs and pedagogical objectives is crucial for successful implementation.
Good Future Foundation
www.goodfuture.foundation
Summary
Many schools begin their AI journey by formulating AI policies. However, Matthew King, Director of Innovative Learning at Brentwood School, reveals their preference for establishing guiding principles over rigid policies, given AI's rapidly evolving nature. Matt shares his approach to fostering dialogue with diverse school stakeholders: engaging primary students in AI ethics and literacy, obtaining parental consent for the use of AI in students' learning, exploring AI applications with operational staff, and addressing teacher resistance through supportive one-on-one conversations. Throughout Brentwood's AI journey, human connection remains central to cultivating a school-wide culture of AI literacy and integration. It is the power of conversation, not rulebooks, that creates the foundation for responsible and effective AI adoption.
Summary
Alex was genuinely fascinated when, reviewing transcripts from his research interviews, he noticed that students consistently referred to AI as "they," while adults, including teachers, used "it." This small but meaningful linguistic choice reveals a fundamental difference in how generations perceive artificial intelligence.

As a teacher, senior leader, and STEM Learning consultant, Alex developed his passion for educational technology by creating the award-winning "Future Classroom", a space designed to make students owners rather than consumers of knowledge. In this episode, he shares insights from his research on student voice, explores the race toward Artificial General Intelligence (AGI), and unpacks the concept of AI "glazing". While he touches on a range of AI topics in his conversation with Daniel, the key theme that shines through is the importance of approaching AI thoughtfully, deliberately balancing technological progress with human connection.
Summary
This podcast episode was recorded during the Watergrove Trust AI professional development workshop, delivered by Good Future Foundation and Educate Ventures. Dave Leonard, the Strategic IT Director, and Steve Lancaster, a member of their AI Steering Group, shared how they led the Trust's exploration and discussion of AI with thoughtful, cautious optimism. With strong support from leadership, and with staff from across the Trust volunteering to form the AI working group, they've been able to foster a trust-wide commitment to responsible AI use and harness AI to support their priority of staff wellbeing.
Summary
This episode features Thomas Sparrow, a correspondent and fact checker, who helps us distinguish misinformation from disinformation and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also passionate about equipping teachers and students with practical fact-checking techniques, and he encourages educators to incorporate discussions of disinformation into their curricula.
Summary
With her extensive teaching experience in both mainstream and special schools, Bukky Yusuf shares how purposeful and strategic use of technology can unlock learning opportunities for students. She equally emphasises the ethical dimensions of AI adoption, raising important concerns about data representation, societal inequalities, and the risk of widening digital divides and unequal access.
Summary
In this enlightening episode, Dr Lulu Shi from the University of Oxford examines technology's role in education and society through a sociological lens. She looks at how edtech companies shape learning environments and policy, while challenging the notion that technological progress is predetermined. Instead, Dr Shi argues that our collective choices and actions actively shape technology's future, and she emphasises the importance of democratic participation in technological development.
In this podcast episode, Daniel, George, and Ricky discuss the integration of AI and technology in education, particularly at Belgrave St Bartholomew's Academy. They explore the local context of the school, the impact of technology on teaching and learning, and how AI is being utilised to enhance student engagement and learning outcomes. The conversation also touches on the importance of community involvement, parent engagement, and the challenges and opportunities presented by AI in the classroom. They emphasise the need for effective professional development for staff and the importance of understanding the purpose behind using technology in education.
In this episode, Becci Peters and Ben Davies discuss their work with Computing at School (CAS), an initiative backed by BCS, The Chartered Institute for IT, with 27,000 dedicated members supporting computing teachers. Through their efforts with CAS, they've noticed that many teachers still feel uncomfortable with AI technology, and many schools are grappling with uncertainty around AI policies and how to implement them. There's also a noticeable digital divide created by differing school budgets for AI tools. With these challenges in mind, their efforts don't just focus on technical skills; they aim to help more teachers grasp AI principles and understand important ethical considerations like data bias and the limitations of training models. They also work to equip educators with a critical mindset, enabling them to make informed decisions about AI usage.
Summary
When generative AI first appeared on the scene, many educators had concerns about how students might misuse the technology, and much of the discussion focused on plagiarism. However, after hearing from the Good Future Foundation Student Council, you might be pleasantly surprised to discover that some students are actually much more critical and reflective about how generative AI can help their learning and influence their social and emotional wellbeing. In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, aged between 17 and 20, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it's so crucial for teachers to confront and familiarise themselves with this new technology.
AI's impact spans sectors globally, yet attention and voices aren't equally distributed across the communities it affects. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion is "AI and the Global South: Exploring the Role of Civil Society in AI Decision-Making", a recent report that Suzy co-authored with Accenture, a multinational technology company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.
Summary
Think about how overwhelming it would be to rebuild a car while driving it at 70 miles per hour. This is how Liz Robinson, CEO of Big Education Trust, describes how it felt to grasp the impact of AI on every aspect of education while managing daily school operations.

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as the leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition that turns student innovators into social entrepreneurs across 100+ countries. She showcases how young minds are tackling world challenges through creative solutions, like turning Hong Kong's wasted bread into beer and supporting indigenous Mexican communities through mindful tourism. Beyond the headline-grabbing $1 million prize, Lori shares a deeper story about reimagining education and social impact. She brings to light how donor-reliant nonprofits are often constrained by a narrow focus on expense ratios rather than actual impact, and she believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from linguist to computer science teacher, then Director of Digital Learning, and now consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education, and she's here to help make sense of it all.

Remember when AI discussions were all doom and gloom or over-the-top enthusiasm? Laura breaks down how these conversations have matured into something far more practical and nuanced. She's adamant that schools shouldn't jump on the AI bandwagon just because everyone else is. Instead, she advocates thoughtful adoption that considers each school's unique context and needs. It's this practical, grounded perspective that makes her newly published book "Little Guide for Teachers on Generative AI" particularly exciting for educators looking for real-world guidance.
Summary
Richard Culatta, former government advisor and CEO of ISTE and ASCD, uses flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education, and highlights the most critical tech skills that teachers need today. The author of "Digital for Good" draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.

Key Takeaways
- Richard breaks down three distinct approaches teachers take toward AI: those hoping it will disappear (which it won't), those using it merely as a glorified search engine, and those embracing a fundamental redesign of education for an AI-infused world.
- He emphasises that teachers should understand AI isn't magic and should focus on identifying uniquely human skills versus tasks better suited for AI. Rather than banning technology, he advocates for developing digital citizenship and promoting responsible technology use.
- Richard highlights that technology skills are now critical across all career paths, even in creative fields like music. He stresses that using technology for effective problem-solving has become an essential skill for students to master.
- He encourages educators to join professional communities like ISTE and ASCD Connect to continue learning about AI and technology, while adopting innovative approaches to technology integration.
Summary
Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI to make legal services more accessible to underserved communities and identifies the kinds of legal cases where AI could be most beneficial. He notes that while AI works well for standardised legal matters like commercial cases and contract disputes, it faces limitations in areas requiring emotional intelligence or complex human judgement. Prof Reyes advocates teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.

Takeaways
- Prof Reyes sees AI as a promising tool to make legal services more affordable and accessible to those who currently cannot afford traditional legal representation. However, he recognises a significant challenge: AI decisions can be skewed by biases in their training data, which often only become apparent after patterns emerge over time.
- Prof Reyes identifies that AI applications work better in standardised, rule-based legal areas such as commercial cases, contract disputes, and small claims tribunal decisions.
- He observes that AI systems have limitations when it comes to nuanced fact-finding, particularly in cases with conflicting personal testimonies. This makes them less suitable for sensitive areas like family law and cases requiring emotional intelligence or complex human judgement.
- On legal education, Prof Reyes suggests a balanced approach. Rather than banning AI, he believes in teaching students to use it critically as an assistive tool, not a replacement for human legal reasoning. He emphasises that as this evolving technology continues to develop, maintaining strong human oversight and careful risk mitigation remains essential.