#281 - Risk Assessments for AI Learning Tools, a conversation, Part 1

Update: 2024-11-07
Description

In today’s episode, we have the first part of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools. Professor Rose Luckin is away speaking in Australia, so Rowland Wells takes the reins to chat with Educate Ventures Research team members about their experience managing risk as teachers and developers. What does a risk assessment look like, and whose responsibility is it to take on board its insights? Rose joins our discussion group towards the end of the episode, and in the second instalment of the conversation, Rowland sits down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk and on testing the features of a tool as a developer and CEO herself.

View our Risk Assessments here: https://www.educateventures.com/risk-assessments

In the studio:

  • Rowland Wells, Creative Producer, EVR
  • Dave Turnbull, Deputy Head of Educator AI Training, EVR
  • Ibrahim Bashir, Technical Projects Manager, EVR
  • Rose Luckin, CEO & Founder, EVR

Talking points and questions include:

  • Who are these for? What’s the profile of the person we want to engage with these risk assessments? They’re concise, easy to read, and free of technical jargon, but they’re still analyses aimed at people with a research and evidence mindset. Many people ignore such material: we know that even when learning tool developers publish research about their tools on their own websites, the public rarely reads it. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
  • What’s the end goal of this? Are you trying to raise the sophistication of conversation around evidence and risk? Many developers you critique might just think you’re trying to make a name by pulling apart their tools. Surely the market will sort itself out?
  • What’s the process involved in making judgements in a risk assessment? If we’re trying to show the buyers of these tools, the digital leads in schools and colleges, what to look for, what’s the first step? Can this be done quickly? Many who might benefit from AI tools don’t have the time to exhaustively hunt out all the little details of a learning tool and interpret them for themselves.
  • Schools aren’t testbeds for intellectual property or tech interventions. Why is it practitioners’ responsibility to make these kinds of evaluations, even with the aid of these assessments? Why is the tech and AI sector not capable of regulating its own practices?
  • You’ve all worked with schools and with learning and training institutions that use AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you’ve seen teaching and learning enhanced by the safe and impactful use of AI.
