AI Education Podcast


Author: Dan Bowen and Ray Fleming


Description

Dan Bowen and Ray Fleming are experienced education renegades who have worked in a wide variety of educational institutions and education companies across the world. They talk about Artificial Intelligence in Education - what it is, how it works, and the different ways it is being used. It's not too serious, or too technical, and is intended to be a good conversation.

Please note the views on the podcast are our own or those of our guests, and not of our respective employers (unless we say otherwise at the time!)
95 Episodes
Content warning! This episode talks about an academic research paper titled "ChatGPT is bulls**t", and we've not edited the word out - in fact, we've gone to town with it, talking about the different types of it (in the strictest academic sense). So you may not want to play this in the car on your school run!
The news item discussed is:
Student crafts elaborate AI scheme to pass university exam, gets arrested https://cybernews.com/news/turkish-student-found-using-ai-arrested/
This week's papers discussed are:
Developing evaluative judgement for a time of generative artificial intelligence https://www.tandfonline.com/doi/full/10.1080/02602938.2024.2335321
Prompting Large Language Models for Zero-shot Essay Scoring via Multi-trait Specialization https://arxiv.org/abs/2404.04941
Working Alongside, Not Against, AI Writing Tools in the Composition Classroom: a Dialectical Retrospective https://uen.pressbooks.pub/teachingandgenerativeai/chapter/working-alongside-not-against-ai-writing-tools-in-the-composition-classroom-a-dialectical-retrospective/
GPT versus Resident Physicians — A Benchmark Based on Official Board Scores https://ai.nejm.org/doi/pdf/10.1056/AIdbp2300192
Evaluating General Vision-Language Models for Clinical Medicine https://www.medrxiv.org/content/10.1101/2024.04.12.24305744v1
Re-evaluating GPT-4's bar exam performance https://link.springer.com/article/10.1007/s10506-024-09396-9
Automated Social Science: Language Models as Scientist and Subjects https://arxiv.org/abs/2404.11794
Large language models cannot replace human participants because they cannot portray identity groups https://arxiv.org/abs/2402.01908
The impact of large language models on university students' literacy development https://www.tandfonline.com/doi/epdf/10.1080/07294360.2024.2332259?needAccess=true
Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays https://www.sciencedirect.com/science/article/pii/S2666920X24000109
Feedback sources in essay writing: peer-generated or AI-generated feedback? https://link.springer.com/article/10.1186/s41239-024-00455-4
ChatGPT is bullshit https://link.springer.com/epdf/10.1007/s10676-024-09775-5?sharing_token=0CIhP_zo5-plierRq8kkDPe4RwlQNchNByi7wbcMAY77xTOWyddkW01qGFs1m5zuuoZGBctVlsJF8SbYqcxWi-XzgEYEPiw7xwWi4bMYXJ_1JARDrER9JGdWZOW-UGSkrk_tXPjPh-XWvFNoiFzNlnDUUUEBAztiX9PtP2p6jfI%3D
Wow, this week we have a bumper episode with more resources than a GPT factory! Any time a guest makes their second appearance, and therefore enters our Hall of Fame, we officially dub them "Friend of the Show". And so this week, we've got Friend of the Show Leon Furze sharing his experiences and expertise.
Here are the links and posts related to our conversation with Leon around assessment.
Leon's blog - for all the updates and posts that he is working on: https://leonfurze.com/blog
Leon's free e-Book on assessment can be found here: https://mailchi.mp/leonfurze/assessment (free ebook on assessment)
Leon Furze's LinkedIn profile is here if you want to follow his stream of thoughts, and to connect with him: Leon Furze - Furze Smith Consulting | LinkedIn
The Artificial Intelligence Assessment Scale (AIAS) paper we discussed can be found here: https://open-publishing.org/journals/index.php/jutlp/article/view/810
The online course around practical AI strategies: https://practicalaistrategies.com/p/practical-ai-strategies
And, even better, he's given listeners a discount code that will save you 25% of the cost. Just use the magic word 'AIPODCAST'
The blog post we mentioned a couple of times during the episode: https://leonfurze.com/2024/05/27/dont-use-genai-to-grade-student-work/
Leon's new book and course can also be found on his main site here: https://practicalaistrategies.com/
This week we set the episode timer for 15 minutes, and managed to get through just five papers before the buzzer went off! So we have plenty more papers to discuss in future episodes...
ENHANCING K-12 STUDENTS' PERFORMANCE IN CHEMISTRY THROUGH CHATGPT-POWERED BLENDED LEARNING IN THE EDUCATION 4.0 ERA https://library.iated.org/view/ORTIZDEZARATE2024ENH
Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study https://bera-journals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/bjet.13454
ChatGPT "contamination": estimating the prevalence of LLMs in the scholarly literature https://arxiv.org/abs/2403.16887
Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews https://arxiv.org/abs/2403.07183
Large language models are able to downplay their cognitive abilities to fit the persona they simulate https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0298522
Assessment - Chris Goodall
In this episode of the AI Education Podcast, host Dan converses with Chris Goodall, the head of digital education at the Bourne Education Trust in England. They discuss the integration of AI into education, how it can be used to enhance teaching and learning processes, and the impact of personalized AI tools on students and educators. The conversation covers practical applications of AI, the ongoing need for teacher and student adaptation to new technologies, as well as ethical considerations and future possibilities for AI in education.
Chris Goodall's LinkedIn profile here: Chris Goodall | LinkedIn
Practical advice for embedding AI in school: Embedding AI use in school
Research Update - 31st May 2024
Honestly folks, we've been trying to keep up. We really have. But we have so much great content in the fortnightly (or is it bi-weekly?) interviews that we've had to bite the bullet and switch to weekly podcasts, so that we can still fit in the Research Updates! Going forwards you'll get a longer interview-style podcast once every two weeks, and a shorter 15-20 minute "Research Update" podcast every two weeks. Filling your Fridays with AI in Education podcast joy!
Here are the links to all the research papers discussed this week:
Remote Proctoring: Understanding the Debate https://link.springer.com/referenceworkentry/10.1007/978-3-031-54144-5_150#DOI
Large language model-powered chatbots for internationalizing student support in higher education https://arxiv.org/abs/2403.14702
ChatGPT in Veterinary Medicine: A Practical Guidance of Generative Artificial Intelligence in Clinics, Education, and Research https://arxiv.org/abs/2403.14654
Investigation of the effectiveness of applying ChatGPT in Dialogic Teaching Using Electroencephalography https://arxiv.org/abs/2403.16687
An Exploratory Study on Upper-Level Computing Students' Use of Large Language Models as Tools in a Semester-Long Project https://arxiv.org/abs/2403.18679
An MIT Exploration of Generative AI https://mit-genai.pubpub.org/
MIT - Massachusetts Institute of Technology - have just published a series of really interesting papers about the impact of generative AI on a number of industries, and dive into the implications for society, education, human interaction and other areas. I actually think the whole set is interesting - and they're really easy to get - you can read them on the web, or get a PDF, an ebook, or even an audio book of every one!
We talked about the 3 education ones:
When Disruptive Innovations Drive Educational Transformation: Literacy, Pocket Calculator, Google Translate, ChatGPT https://mit-genai.pubpub.org/pub/6chtnd56/release/3?readingCollection=0e231e9c
Generative AI and K-12 Education: An MIT Perspective https://mit-genai.pubpub.org/pub/4k9msp17/release/1?readingCollection=0e231e9c
Generative AI and Creative Learning: Concerns, Opportunities, and Choices https://mit-genai.pubpub.org/pub/gj6eod3e/release/2?readingCollection=0e231e9c
This week we continue our series on Assessment and AI. Ray talks with Jason Lodge from The University of Queensland, who must have the longest business card in Australia, as he's Associate Professor of Educational Psychology in the School of Education and Deputy Associate Dean in the Faculty of Humanities, Arts and Social Sciences! The conversation covers the challenges of assessment, and the options for rethinking assessment - and then we go deeper into Jason's views on the future of learning and assessment.
Jason's a great guest to share his experiences, as during 2023 he was on the TEQSA group of experts that came together to produce a report on assessment for Australian universities, Assessment reform for the age of artificial intelligence https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/assessment-reform-age-artificial-intelligence
Working on policy and guidance in an area where technology is developing so rapidly - and where students are racing ahead of institutions - was interesting, and Jason talks about the group dynamic. One of the interesting notes he shares is the mindset: "The mantra we kept returning to is that we weren't trying to develop a map, but a compass. This is the direction we think we might need to head here."
AI and the Future of Assessment: Transforming Educational Practices
Episode Overview: In this episode of the AI Education Podcast, hosts Dan and Ray, alongside guests Adam Bridgeman and Danny Liu, dive into the evolving landscape of academic assessment in the age of artificial intelligence. Recorded in the University of Sydney's own studios, this discussion explores the significant shifts in assessment strategies and the integration of AI in educational settings.
Guest Introductions:
Professor Adam Bridgeman: Pro Vice-Chancellor (Educational Innovation) at the University of Sydney - focused on enhancing teaching quality across the university. [University bio]
Professor Danny Liu: Professor of Educational Technologies - dedicated to empowering educators to improve their teaching methods through innovative technologies. [University page - LinkedIn page]
Key Topics Discussed:
The Persistence of Traditional Assessment Models: Despite the push to digital platforms during the COVID-19 pandemic, traditional assessment methods have largely remained unchanged, continuing the practice of replicating physical exam environments online.
AI's Role in Rethinking Assessment: The guests discuss how AI challenges the conventional reasons for assessments, advocating for a paradigm shift towards assessments that truly measure student understanding and application of knowledge.
Two-Lane Assessment Approach: Adam introduces a dual-lane strategy for assessment:
Lane One: Ensures the rigorous verification of student competencies necessary in professional fields.
Lane Two: Uses AI to foster skill development in using technology effectively, moving beyond traditional assessment forms to embrace innovative educational practices.
Implementation Challenges and Solutions: The transition to new assessment models is recognised as a gradual process, needing careful planning and support for educators in rethinking their assessment strategies.
Inclusivity and Access to Technology: Ensuring equitable access to AI tools for all students is highlighted as a critical aspect of the evolving educational landscape, emphasising the need to support diverse student backgrounds and technological proficiencies.
Future Outlook: The discussion concludes with reflections on the potential long-term impacts of AI on educational practices, the necessity of ongoing adaptation by educational institutions, and the importance of preparing students for a future where AI is seamlessly integrated into professional and everyday contexts.
Further Reading: We recommend these three articles from the team, which give more detail on the topics discussed:
Where are we with generative AI as semester 1 starts?
What to do about assessments if we can't out-design or out-run AI?
Embracing the future of assessment at the University of Sydney
It's time to start a new series, so welcome to Series 8! This episode is the warm up into the series that's going to be focused on Assessment. We'll interview some fascinating people about what's happening in school and university assessment, how we might think differently about assessing students, and what you can be thinking about if you're a teacher. There's no shownotes, links or anything else for your homework for this episode - just listen and enjoy! Dan and Ray
The season-ending episode for Series 7, this is the fifteenth in the series that started on 1st November last year with the "Regeneration: Human Centred Educational AI" episode. And it's an unbelievable 87th episode for the podcast (which started in September 2019). When we come back with Series 8 after a short break for Easter, we're going to take a deeper dive into two specific use cases for AI in Education. The first we'll discuss is Assessment, where there's both a threat and an opportunity created by AI. And the second topic is AI Tutors, where there's more of a focus on how we can take advantage of the technology to help improve support for learning for students. This episode looks at one key news announcement - the EU AI Act - and a dozen new research papers on AI in education.
News
EU AI Act https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
The European Parliament approved the AI Act on 13 March and there's some stuff in here that would make good practice guidance. And if you're developing AI solutions for education, and there's a chance that one of your customers or users might be in the EU, then you're going to need to follow these laws (just like GDPR is an EU law, but effectively applies globally if you're actively offering a service to EU residents). The Act bans some uses of AI that threaten citizens' rights - such as social scoring and biometric identification at mass level (things like untargeted facial scanning of CCTV or internet content, emotion recognition in the workplace or schools, and AI built to manipulate human behaviour) - and for the rest it relies on regulation according to categories. High Risk AI systems have to be assessed before being deployed and throughout their lifecycle. The High Risk AI category includes critical infrastructure (like transport and energy), product safety, law enforcement, justice and democratic processes, employment decision making - and Education. So decision making using AI in education needs full risk assessments, usage logs, transparency and accuracy - and human oversight. Examples of decision making that would be covered are things like exam scoring, student recruitment screening, or behaviour management. General generative AI - like ChatGPT or copilots - will not be classified as high risk, but it will still have obligations under the Act: clear labelling for AI-generated image, audio and video content; making sure it can't generate illegal content; and disclosing what copyright data was used for training. But, although general AI may not be classified as high risk, if you then use it to build a high risk system - like an automated exam marker for end-of-school exams - then this will be covered under the high risk category. All of this is likely to become law by the middle of the year; by the end of 2024 prohibited AI systems will be banned, and by mid-2025 the rules will start applying for other AI systems.
Research
Another huge month.
I spent the weekend reviewing a list of 350 new papers published in the first two weeks of March, on Large Language Models, ChatGPT etc, to find the ones that are really interesting for the podcast:
Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges arXiv:2401.08664
A Study on Large Language Models' Limitations in Multiple-Choice Question Answering arXiv:2401.07955
Dissecting Bias of ChatGPT in College Major Recommendations arXiv:2401.11699
Evaluating Large Language Models in Analysing Classroom Dialogue arXiv:2402.02380
The Future of AI in Education: 13 Things We Can Do to Minimize the Damage https://osf.io/preprints/edarxiv/372vr
Scaling the Authoring of AutoTutors with Large Language Models https://arxiv.org/abs/2402.09216
Role-Playing Simulation Games using ChatGPT https://arxiv.org/abs/2402.09161
Economic and Financial Learning with Artificial Intelligence: A Mixed-Methods Study on ChatGPT https://arxiv.org/abs/2402.15278
A Study on the Vulnerability of Test Questions against ChatGPT-based Cheating https://arxiv.org/abs/2402.14881
Incorporating Artificial Intelligence Into Athletic Training Education: Developing Case-Based Scenarios Using ChatGPT https://meridian.allenpress.com/atej/article/19/1/42/498456
RECIPE4U: Student-ChatGPT Interaction Dataset in EFL Writing Education https://arxiv.org/abs/2403.08272
Comparison of the problem-solving performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Korean emergency medicine board examination question bank https://journals.lww.com/md-journal/fulltext/2024/03010/comparison_of_the_problem_solving_performance_of.48.aspx?context=latestarticles
Comparing the quality of human and ChatGPT feedback of students' writing https://www.sciencedirect.com/science/article/pii/S0959475224000215
This week we talked with Professor Danny Liu and Dr Joanne Hinitt, of The University of Sydney, about the Cogniti AI service that's been created in the university, and how it's being used to support teaching and learning.
Danny is a molecular biologist by training, programmer by night, researcher and academic developer by day, and educator at heart. He works at the confluence of educational technology, student engagement, artificial intelligence, learning analytics, pedagogical research, organisational leadership, and professional development. He is currently a Professor in the Educational Innovation team in the DVC (Education) Portfolio at the University of Sydney. Here's Danny's academic profile. If you want to follow Danny's future work you can find him on LinkedIn and Twitter.
Joanne is a Lecturer in Occupational Therapy, and her primary area of interest is working with children and their families who experience difficulties participating in occupations related to going to school. She has extensive clinical experience working within occupational therapy settings, providing services for children and their families. Her particular interest is working collaboratively with teachers in the school setting, and she completed her PhD in this area. Here's Joanne's academic profile.
Further reading on the topics discussed in the podcast:
Cogniti's website is at https://cogniti.ai/
Articles about the topics discussed:
How Sydney educators are building 'AI doubles' of themselves to help their students, Dec 2023
AI as an authentic and engaging teaching tool for occupational therapy students, Oct 2023
Meet 'Mrs S': a classroom teacher who helps budding occupational therapists hone their skills, Oct 2023
Recorded talks:
Using Cogniti to design for Diversity, Feb 2023
It's a News and Research Episode this week
There has been a lot of AI news and AI research related to education since our last Rapid Rundown, so we've had to be honest and drop 'rapid' from the title! Despite talking fast, this episode still clocked in at just over 40 minutes, and we really can't work out what to do - should we talk less, cover less news and research, or just stop worrying about time and focus instead on making sure we bring you the key things every episode?
News
More than half of UK undergraduates say they use AI to help with essays https://www.theguardian.com/technology/2024/feb/01/more-than-half-uk-undergraduates-ai-essays-artificial-intelligence
This was from a Higher Education Policy Institute survey of 1,000 students, where they found 53% are using AI to generate assignment material.
1 in 4 are using things like ChatGPT and Bard to suggest topics
1 in 8 are using it to create content
And 1 in 20 admit to copying and pasting unedited AI-generated text straight into their assignments
Finance worker pays out $25 million after video call with deepfake 'chief financial officer' https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
An HK-based employee of a multinational firm wired out $25M after attending a video call where all the other employees were deepfaked, including the CFO. He first got an email which was suspicious, but then was reassured on the video call with his "coworkers."
NSW Department of Education launches NSW EduChat https://www.theguardian.com/australia-news/2024/feb/12/the-ai-chat-app-being-trialled-in-nsw-schools-which-makes-students-work-for-the-answers
NSW are rolling out a trial to 16 public schools of a chatbot built on OpenAI technology, but without giving students and staff unfettered access to ChatGPT. Unlike ChatGPT, the app has been designed to only respond to questions that relate to schooling and education, via content-filtering and topic restriction. It does not reveal full answers or write essays, instead aiming to encourage critical thinking via guided questions that prompt the student to respond - much like a teacher.
The Productivity Commission has thoughts on AI and Education https://www.pc.gov.au/research/completed/making-the-most-of-the-ai-opportunity
The PC released a set of research papers about "Making the most of the AI opportunity", looking at Productivity, Regulation and Data Access. They do talk about education in two key ways:
"Recent improvements in generative AI are expected to present opportunities for innovation in publicly provided services such as healthcare, education, disability and aged care, which not only account for a significant part of the Australian economy but also traditionally exhibit very low productivity growth"
"A challenge for tertiary education institutions will be to keep up to date with technological developments and industry needs. As noted previously by the Commission, short courses and unaccredited training are often preferred by businesses for developing digital and data skills as they can be more relevant and up to date, as well as more flexible"
Yes, AI-Assisted Inventions can be inventions
News from the US that may set a precedent for the rest of the world.
Patents can be granted for AI-assisted inventions - including prompts, as long as there's significant contribution from the human named on the patent https://www.federalregister.gov/public-inspection/2024-02623/guidance-inventorship-guidance-on-ai-assisted-inventions
Not news, but Ray mentioned his Very British Chat bot. Sadly, you need the paid version of ChatGPT to access it as it's one of the public GPTs, but if you have that you'll find it here: Very British Chat
Sora was announced https://www.abc.net.au/news/2024-02-16/ai-video-generator-sora-from-openai-latest-tech-launch/103475830
Although it was the same day that Google announced Gemini 1.5, we led with Sora here - just like the rest of the world's media did! On the podcast, we didn't do it justice with words, so instead here are four threads on X that are worth your time to read/watch to understand what it can do:
Taking a video, and changing the style/environment: https://x.com/minchoi/status/1758831659833602434?s=20
Some phenomenally realistic videos: https://x.com/AngryTomtweets/status/1759171749738840215?s=20 (remember, despite how 'real' these videos appear, none of these places exist outside of the mind of Sora!)
Bling Zoo: https://x.com/billpeeb/status/1758223674832728242?s=20
This cooking grandmother does not exist: https://x.com/sama/status/1758219575882301608?s=20 (a little bit like her mixing spoon, which appears to exist only for mixing and then doesn't)
Google's Gemini 1.5 is here…almost https://www.oneusefulthing.org/p/google-gemini-advanced-tasting-notes
Research Papers
Google's Gemini 1.5 can translate languages it doesn't know https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf
Google also published a 58-page report on what their researchers had found with it, and we found the section on translation fascinating. Sidenote: there's an interesting Oxford Academic research project report from last year on translating cuneiform tablets from Akkadian into English, which didn't use Large Language Models, but it set the thinking going on this aspect of using LLMs.
Understanding the Role of Large Language Models in Personalizing and Scaffolding Strategies to Combat Academic Procrastination arXiv:2312.13581
Challenges and Opportunities of Moderating Usage of Large Language Models in Education arXiv:2312.14969
ChatEd: A Chatbot Leveraging ChatGPT for an Enhanced Learning Experience in Higher Education arXiv:2401.00052
AI Content Self-Detection for Transformer-based Large Language Models arXiv:2312.17289
Evaluating the Performance of Large Language Models for Spanish Language in Undergraduate Admissions Exams arXiv:2312.16845
Taking the Next Step with Generative Artificial Intelligence: The Transformative Role of Multimodal Large Language Models in Science Education arXiv:2401.00832
Empirical Study of Large Language Models as Automated Essay Scoring Tools in English Composition - Taking TOEFL Independent Writing Task for Example arXiv:2401.03401
Using Large Language Models to Assess Tutors' Performance in Reacting to Students Making Math Errors arXiv:2401.03238
Future-proofing Education: A Prototype for Simulating Oral Examinations Using Large Language Models arXiv:2401.06160
How Teachers Can Use Large Language Models and Bloom's Taxonomy to Create Educational Quizzes arXiv:2401.05914
How does generative artificial intelligence impact student creativity? https://www.sciencedirect.com/science/article/pii/S2713374523000316
Large Language Models As MOOCs Graders arXiv:2402.03776
Can generative AI and ChatGPT outperform humans on cognitive-demanding problem-solving tasks in science? arXiv:2401.15081
This week's episode is our final interview recorded at the AI in Education Conference at Western Sydney University at the end of last year. Over the last few months you have had the chance to hear many different voices and perspectives. Leanne Cameron is a Senior Lecturer in Education Technologies at James Cook University in Queensland. Over her career Leanne's worked at a number of Australian universities, focusing on online learning and teacher education, and so has a really solid grasp of the reality - and potential - of education technology. She explores the use of AI in lesson planning, assessment, and providing feedback to students. Leanne highlights the potential of AI to alleviate administrative burdens and inspire teachers with innovative teaching ideas. And we round out the episode with Dan and Ray reflecting on the profound insights shared by Leanne and discussing the future of teacher education. You can connect with Leanne on LinkedIn here
This week's episode is an absolute bumper edition. We paused our Rapid Rundown of the news and research in AI for the Australian summer holidays - and to bring you more of the recent interviews. So this episode we've got two months to catch up on! We also started mentioning Ray's AI Workshop in Sydney on 20th February: three hours of exploring AI through the lens of organisational leaders, and a Design Thinking exercise to cap it off, to help you apply your new knowledge in company with a small group. Details & tickets here: https://www.innovategpt.com.au/event
And now, all the links to every news article and research paper we discussed:
News stories
The Inside Story of Microsoft's Partnership with OpenAI https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai
All about the drama that unfolded at OpenAI, and Microsoft, from 17th November, when the OpenAI CEO, Sam Altman, suddenly got fired. And because it's 10,000 words, I got ChatGPT to write me the one-paragraph summary: This article offers a gripping look at the unexpected drama that unfolded inside Microsoft, a real tech-world thriller that's as educational as it is enthralling. It's a tale of high-stakes decisions and the unexpected firing of a key figure that nearly upended a crucial partnership in the tech industry. It's an excellent read to understand how big tech companies handle crises and the complexities of partnerships in the fast-paced world of AI.
MinterEllison sets up own AI Copilot to enhance productivity https://www.itnews.com.au/news/minterellison-sets-up-own-ai-copilot-603200
This is interesting because it's a firm of highly skilled white collar professionals, and the Chief Digital Officer gave some statistics of the productivity changes they'd seen since starting to use Microsoft's Copilots: "at least half the group suggests that from using Copilot, they save two to five hours per day," "One-fifth suggest they're saving at least five hours a day. Nine out of 10 would recommend Copilot to a colleague." "Finally, 89 percent suggest it's intuitive to use, which you never see with the technology, so it's been very easy to drive that level of adoption." Greg Adler also said "Outside of Copilot, we've also started building our own Gen AI toolsets to improve the productivity of lawyers and consultants."
Cheating Fears Over Chatbots Were Overblown, New Research Suggests https://www.nytimes.com/2023/12/13/technology/chatbot-cheating-schools-students.html
Although this is US news, let's celebrate that the New York Times reports that Stanford education researchers have found that AI chatbots have not boosted overall cheating rates in schools. Hurrah! Maybe the punchline is that, in their survey, the cheating rate has stayed about the same - at 60-70%. Also interesting in the story is the datapoint that 32% of US teens hadn't heard of ChatGPT, and less than a quarter had heard a lot about it.
Game-changing use of AI to test the Student Experience https://www.mlive.com/news/grand-rapids/2024/01/your-classmate-could-be-an-ai-student-at-this-michigan-university.html
Ferris State University is enrolling two 'AI students' into classes (Ann and Fry). They will sit (virtually) alongside the students to attend lectures, take part in discussions and write assignments, as more students take the non-traditional route into and through university.
"The goal of the AI student experiment is for Ferris State staff to learn what the student experience is like today."
"Researchers will set up computer systems and microphones in Ann and Fry's classrooms so they can listen to their professor's lectures and any classroom discussions, Thompson said. At first, Ann and Fry will only be able to observe the class, but the goal is for the AI students to soon be able to speak during classroom discussions and have two-way conversations with their classmates, Thompson said. The AI students won't have a physical, robotic form that will be walking the hallways of Ferris State - for now, at least. Ferris State does have roving bots, but right now researchers want to focus on the classroom experience before they think about adding any mobility to Ann and Fry, Thompson said."
"Researchers plan to monitor Ann and Fry's experience daily to learn what it's like being a student today, from the admissions and registration process, to how it feels being a freshman in a new school. Faculty and staff will then use what they've learned to find ways to make higher education more accessible."
Research Papers
Towards Accurate Differential Diagnosis with Large Language Models https://arxiv.org/pdf/2312.00164.pdf
There has been a lot of past work trying to use AI to help with medical decision-making, but it often used other forms of AI, not LLMs. Now Google has trained an LLM specifically for diagnoses, and in a randomized trial with 20 clinicians and 302 real-world medical cases, the AI correctly diagnosed 59% of hard cases. Doctors only got 33% right even when they had access to Search and medical references. (Interestingly, doctors and AI working together did well, but not as well as the AI did alone.) The LLM's assistance was especially beneficial in challenging cases, hinting at its potential for specialist-level support.
How to Build an AI Tutor that Can Adapt to Any Course and Provide Accurate Answers Using Large Language Model and Retrieval-Augmented Generation https://arxiv.org/ftp/arxiv/papers/2311/2311.17696.pdf
The researcher, from the Education University of Hong Kong, used OpenAI's GPT-4 in November to create a chatbot tutor that was fed with course guides and materials so it could tutor a student in a natural conversation. He describes the strengths as the natural conversation and human-like responses, and the ability to cover any topic as long as domain knowledge documents are available. The downsides highlighted are the accuracy risks, and that the performance depends on the quality and clarity of the student's question, and the quality of the course materials. In fact, on accuracy they conclude "Therefore, the AI tutor's answers should be verified and validated by the instructor or other reliable sources before being accepted as correct", which isn't really that helpful. TBH this is more of a project description than a research paper, but it's a good read nonetheless, to give confidence in AI tutors, and it provides design outlines that others might find useful.
Harnessing Large Language Models to Enhance Self-Regulated Learning via Formative Feedback https://arxiv.org/abs/2311.13984
Researchers at German universities created an open-access tool, or platform, called LEAP to provide formative feedback to students, to support self-regulated learning in Physics. They found it stimulated students' thinking and promoted deeper learning.
It's also interesting that between development and publication, the release of new features in ChatGPT means you can now create a tutor yourself with some of the capabilities of LEAP. The paper includes examples of the prompts that they use, which means you can replicate this work yourself - or ask them about using their platform.
ChatGPT in the Classroom: Boon or Bane for Physics Students' Academic Performance? https://arxiv.org/abs/2312.02422
These Colombian researchers let half of the students on a course loose with the help of ChatGPT, while the other half didn't have access. Both groups got the lecture, blackboard video and simulation teaching. The result? Lower performance for the ones who had ChatGPT, and a concern over reduced critical thinking and independent learning. If you don't want to do anything with generative AI in your classroom, or a colleague doesn't, then this is the research they might quote! The one thing that made me sit up and take notice was that they included a histogram of the grades for students in the two groups. Whilst the students in the control group had a pretty normal distribution and a spread across the grades, almost every single student in the ChatGPT group got exactly the same grade. Which makes me think that they all used ChatGPT for the assessment as well, which explains why they were all just above average. So perhaps the experiment led them to switch off learning AND switch off doing the assessment. So perhaps not a surprising result after all. And perhaps, if instead of using the free version they'd used the paid GPT-4, they might all have aced the exam too!
Multiple papers on ChatGPT in Education
There's been a rush of papers in early December in journals, produced by university researchers right across Asia, about the use of AI in Nursing Education, Teacher Professional Development, setting Maths questions, setting questions after reading textbooks, and in Higher Education - in the Tamansiswa International Journal in Education and Science, the International Conference on Design and Digital Communication, Qatar University and Universitas Negeri Malang in Indonesia. One group of Brazilian researchers tested it in elementary schools. And a group of 7 researchers from the University of Michigan Medical School and 4 Japanese universities discovered that GPT-4 beat 2nd year medical residents significantly in Japan's General Medicine In-Training Examination (in Japanese!), with the humans scoring 56% and GPT-4 scoring 70%. Also fascinating in this research is that they classified all the questions as easy, normal or difficult: GPT-4 did worse than humans on the easy problems (17% worse!), but 25% better on the normal and difficult problems. All these papers come to similar conclusions - things are changing, and there are upsides - and potential downsides to be managed. Imagine the downside of AI being better than humans at passing exams the harder they get!
ChatGPT for generating questions and assessments based on accreditations https://arxiv.org/abs/2312.00047
There was also an interesting paper from a Saudi Arabian researcher on using ChatGPT to generate questions and assessments based on accreditations.
In this second episode of 2024, we bring you excerpts from interviews conducted at the AI in Education conference at Western Sydney University in late 2023. In this week's episode, we dive deep into the world of AI in higher education and discuss its transformative potential. From personalised tutoring to improved assessment methods, we discuss how AI is revolutionising the teaching and learning experience.
Section 1: Vitomir Kovanovic, Associate Professor of Education Futures, University of South Australia
In this interview, Vitomir, from UniSA Education Futures, shares his perspective on AI in education. Vitomir highlights the major impact that generative AI is having in the field and compares it to previous technological advancements such as blockchain and the internet. He emphasises the transformative nature of generative AI and its potential to reshape teaching methodologies, organisational structures, and job markets. Vitomir also discusses the importance of adapting to this new way of interacting with technology and the evolving role of teachers as AI becomes more integrated into education.
Section 2: Tomas Trescak, Director of Academic Programs in Undergraduate ICT, Western Sydney University
Tomas delves into the challenges of assessment in the age of AI. He highlights the inherent lack of integrity in online assessments due to the availability of undetectable tools that can easily fill in answers. Tomas suggests that online assessments should play a complementary role in assessing students' knowledge and skills, while the main focus should be on in-person assessments that can't be easily duplicated or cheated. He also discusses the role of AI in assessing skills that won't be replaced by robots and the importance of developing graduates who can complement AI in the job market.
Section 3: Back to Vitomir, to discuss the changing model of education and the potential impact of AI. We explore the concept of education as both a craft and a science, and how technology is gradually shifting education towards a more personalised and flexible approach. The discussion highlights the ability of AI to adapt to individual teaching styles and preferences, making it a valuable tool for teachers. We also delve into the potential of AI in healthcare and tutoring, where AI can provide personalised support to students and doctors, leading to more efficient and equitable outcomes.
The podcast was a special dual-production episode between the AI Education Podcast and the Data Revolution podcast, welcoming Ray Fleming and Kate Carruthers as the guests. The conversation centred around the transformation of traditional data systems in education to incorporate AI. Kate Carruthers, the Chief Data and Insights Officer at the University of New South Wales and Head of Business Intelligence for the UNSW AI Institute, discussed the use of data in the business and research-related aspects of higher education. Ray Fleming, the Chief Education Officer at InnovateGPT, elaborated on the growth and potential of generative Artificial Intelligence (AI) in educational technology and its translation into successful business models in Australia. The guests pondered the potential for AI to change industries, especially higher education, and the existing barriers to AI adoption. The conversation revolved around adapting education to make use of unstructured data through AI and dealing with the implications of this paradigm shift in education.
The Data Revolution podcast is available on Apple Podcasts, Google Podcasts and Spotify.
00:00 Introduction and Welcome
00:58 Guest Introductions and Backgrounds
01:56 The Role of Data in Education and AI
02:32 The Intersection of Data and AI in Education
04:11 The Importance of Data Quality and Governance
08:00 The Future of AI in Education
09:49 Generative AI as the Interface of the Future
10:20 The Potential of Generative AI in Business Processes
11:26 The Impact of AI on Traditional Roles and Skills
12:00 The Role of AI in Decision Making
13:46 The Future of AI in Email Communication
14:38 The Role of AI in Education and Career Guidance
16:34 The Impact of AI on Traditional Education Systems
18:18 The Role of AI in Academic Assessment
20:11 The Future of AI in Navigating Education Pathways
36:37 The Role of Unstructured Data in Generative AI
38:10 Conclusion and Farewell
Our final episode for 2023 is an absolutely fabulous Christmas gift, full of lots of presents in the form of different AI tips and services. Joe Dale, who's a UK-based education ICT & Modern Foreign Languages consultant, spends 50 lovely minutes sharing a huge list of AI tools for teachers and ideas for how to get the most out of AI in learning. We strongly recommend you find and follow Joe on LinkedIn or Twitter. And if you're a language teacher, join Joe's Language Teaching with AI Facebook group.
Joe's also got an upcoming webinar series on using ChatGPT for language teachers: Resource Creation with ChatGPT on Mondays - 10.00, 19.00 and 21.30 GMT (UTC) in January - 8th, 15th, 22nd and 29th January 2024. Good news - 21:30 GMT is 8:30 AM and 10:00 GMT is 9 PM in Sydney/Melbourne, so there are two times that work for Australia. And if you can't attend live, you get access to the recordings and all the prompts and guides that Joe shares on the webinars.
There was a plethora of AI tools and resources mentioned in this episode:
ChatGPT: https://chat.openai.com
DALL-E: https://openai.com/dall-e-2
Voice Dictation in MS Word Online: https://support.microsoft.com/en-au/office/dictate-your-documents-in-word-3876e05f-3fcc-418f-b8ab-db7ce0d11d3c
Transcripts in Word Online: https://support.microsoft.com/en-us/office/transcribe-your-recordings-7fc2efec-245e-45f0-b053-2a97531ecf57
AudioPen: https://audiopen.ai
'Live titles' in Apple Clips: https://www.apple.com/uk/clips
Scribble Diffusion: https://www.scribblediffusion.com
Wheel of Names: https://wheelofnames.com
Blockade Labs: https://blockadelabs.com
Momento360: https://momento360.com
Book Creator: https://app.bookcreator.com
Bing Chat: https://www.bing.com/chat
Voice Control for ChatGPT: https://chrome.google.com/webstore/detail/voice-control-for-chatgpt/eollffkcakegifhacjnlnegohfdlidhn
Joe Dale's Language Teaching with AI Facebook group: https://www.facebook.com/groups/1364632430787941
TalkPal for Education: https://talkpal.ai/talkpal-for-education
Pi: https://pi.ai/talk
ChatGPT and Azure: https://azure.microsoft.com/en-us/blog/chatgpt-is-now-available-in-azure-openai-service
Google Earth: https://www.google.com/earth
Questionwell: https://www.questionwell.org
MagicSchool: https://www.magicschool.ai
Eduaide: https://www.eduaide.ai
'I can't draw' in Padlet: https://padlet.com
In today's episode, Inside the New Australian AI Frameworks with their Creators, we speak to Andrew Smith of ESA and AI guru Leon Furze.
This should have been the rapid news rundown, and you may remember that 20 minutes before the last rapid news rundown (two weeks ago), the new Australian Framework for Generative Artificial Intelligence (AI) in Schools was published. So we ditched our plans to give you a full news rundown this week, and instead found a couple of brilliant guests to talk on the podcast about the new framework, and what it means for school leaders and teachers in Australian schools.
Some key links from today's episode to learn more:
Andrew Smith: Andrew Smith | LinkedIn, Home (esa.edu.au)
Leon Furze: http://Leonfurze.com, https://www.linkedin.com/in/leonfurze/, https://ambapress.com.au/products/practical-ai-strategies
Other useful reading
VINE (Victorian ICT Network for Education) Generative Artificial Intelligence Guidelines, authored by Leon: https://vine.vic.edu.au/resources/Documents/GAI_Guidelines/VINE%20Generative%20Artificial%20Intelligence%20Guidelines.pdf
Finding the Right Balance: Reflections on Writing a School AI Policy https://matthewwemyss.wordpress.com/2023/08/15/writing-a-school-ai-policy/
Matt Esterman is Director of Innovation & Partnerships, and history teacher, at Our Lady of Mercy College Parramatta. An educational leader who's making things happen with AI in education in Australia, Matt created and ran the AI in Education conference in Sydney in November 2023, where this interview with Dan and Ray was recorded. Part of Matt's role is to help his school on the journey to adopting and using generative AI. As an example, he spent time understanding the UNESCO AI Framework for education, and relating that to his own school. One of the interesting perspectives from Matt is the response to students using ChatGPT to write assignments and assessments - and the advice for teachers within his school on how to handle this well with them (which didn't involve changing their assessment policy!) "And so we didn't have to change our assessment policy. We didn't have to change our ICT acceptable use policy. We just apply the rules that should work no matter what. And just for the record, like I said, 99 percent of the students did the right thing anyway." This interview is full of common sense advice, and it's reassuring to hear the perspective of a leader, and school, that might be ahead on the journey. Follow Matt on Twitter and LinkedIn
Academic Research
Researchers Use GPT-4 To Generate Feedback on Scientific Manuscripts https://hai.stanford.edu/news/researchers-use-gpt-4-generate-feedback-scientific-manuscripts https://arxiv.org/abs/2310.01783
Two episodes ago I shared the news that for some major scientific publications, it's okay to write papers with ChatGPT, but not to review them. But… Combining a large language model and open-source peer-reviewed scientific papers, researchers at Stanford built a tool they hope can help other researchers polish and strengthen their drafts. Scientific research has a peer problem: there simply aren't enough qualified peer reviewers to review all the studies. This is a particular challenge for young researchers and those at less well-known institutions, who often lack access to experienced mentors who can provide timely feedback. Moreover, many scientific studies get "desk rejected" - summarily denied without peer review. James Zou and his research colleagues tested GPT-4's reviews against human reviews of 4,800 real Nature and ICLR papers. They found the AI reviewer's comments overlap with human ones about as much as humans overlap with each other; plus, 57% of authors found them helpful and 83% said it beat at least one of their real human reviewers.
Academic Writing with GPT-3.5 (ChatGPT): Reflections on Practices, Efficacy and Transparency https://dl.acm.org/doi/pdf/10.1145/3616961.3616992
Oz Buruk, from Tampere University in Finland, published a paper giving some really solid advice (and sharing his prompts) for getting ChatGPT to help with academic writing. He uncovered 6 roles:
Chunk Stylist
Bullet-to-Paragraph
Talk Textualizer
Research Buddy
Polisher
Rephraser
He includes examples of the results, and the prompts he used for them. Handy for people who want to use ChatGPT to help them with their writing, without having to resort to trickery.
Considerations for Adapting Higher Education Technology Course for AI Large Language Models: A Critical Review of the Impact of ChatGPT https://www.sciencedirect.com/journal/machine-learning-with-applications/articles-in-press
This is a journal pre-proof from the Elsevier journal "Machine Learning with Applications", and takes a look at how ChatGPT might impact assessment in higher education. Unfortunately it's an example of how academic publishing can't keep up with the rate of technology change, because the four academics from the University of Prince Mugrin who wrote this submitted it on 31 May, and it was accepted into the journal in November - and guess what? Almost everything in the paper has changed. They spent 13 of the 24 pages detailing exactly which assessment questions ChatGPT 3 got right or wrong - but when I re-tested it on some sample questions, it got nearly all correct. They then tested AI detectors - and hey, we both know that's since changed again, with the advice that none work. And finally they checked to see if 15 top universities had AI policies. It's interesting research, but tbh it would have been much, much more useful in May than it is now. And that's a warning about some of the research we're seeing. You need to really check carefully whether the conclusions are still valid - eg if they don't tell you what version of OpenAI's models they've tested, then the conclusions may not be worth much.
It's a bit like the logic we apply to students: "They've not mastered it…yet"
A SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of ChatGPT in the Medical Literature: Concise Review https://www.jmir.org/2023/1/e49368/
They looked at 160 papers published on PubMed in the first 3 months of ChatGPT, up to the end of March 2023 - and the paper was written in May 2023, and only just published in the Journal of Medical Internet Research. I'm pretty sure that many of the results are out of date - for example, it specifically lists unsuitable uses for ChatGPT including "writing scientific papers with references, composing resumes, or writing speeches", and that's definitely no longer the case.
Emerging Research and Policy Themes on Academic Integrity in the Age of Chat GPT and Generative AI https://ajue.uitm.edu.my/wp-content/uploads/2023/11/12-Maria.pdf
This paper, from a group of researchers in the Philippines, was written in August. The paper referenced 37 papers, and then looked at the AI policies of the 20 top QS Rankings universities, especially around academic integrity & AI. All of this helped the researchers create a 3E Model - Enforcing academic integrity, Educating faculty and students about the responsible use of AI, and Encouraging the exploration of AI's potential in academia.
Can ChatGPT solve a Linguistics Exam? https://arxiv.org/ftp/arxiv/papers/2311/2311.02499.pdf
If you're keeping track of the exams that ChatGPT can pass, then add linguistics exams to the list, as these researchers from the universities of Zurich and Dortmund came to the conclusion that, yes, ChatGPT can pass the exams, and said "Overall, ChatGPT reaches human-level competence and performance without any specific training for the task and has performed similarly to the student cohort of that year on a first-year linguistics exam". (Bonus points for testing its understanding of a text about Luke Skywalker and unmapped galaxies.)
And, I've left the most important research paper to last:
Math Education with Large Language Models: Peril or Promise? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641653
Researchers at the University of Toronto and Microsoft Research have published a paper that is the first large-scale, pre-registered controlled experiment using GPT-4, and it looks at Maths education. It basically studied the use of Large Language Models as personal tutors. In the experiment's learning phase, they gave participants practice problems and manipulated two key factors in a between-participants design: first, whether participants were required to attempt a problem before or after seeing the correct answer, and second, whether they were shown only the answer or were also exposed to an LLM-generated explanation of the answer. Then they tested participants on new questions to assess how well they had learned the underlying concepts. Overall they found that LLM-based explanations positively impacted learning relative to seeing only correct answers. The benefits were largest for those who attempted problems on their own first before consulting LLM explanations, but surprisingly this trend held even for those participants who were exposed to LLM explanations before attempting to solve practice problems on their own. People said they learned more when they were given explanations, and thought the subsequent test was easier. They tried it using standard GPT-4 and got a 1-3 standard deviation improvement, and using a customised GPT got a 1.5-4 standard deviation improvement.
In the tests, that was basically the difference between getting a 50% score and a 75% score. And the really nice bonus in the paper is that they shared the prompts they used to customise the LLM. This is the one paper out of everything I've read in the last two months that I'd recommend everybody listening to read.
News on Gen AI in Education
About 1 in 5 U.S. teens who've heard of ChatGPT have used it for schoolwork https://policycommons.net/artifacts/8245911/about-1-in-5-us/9162789/
Some research from the Pew Research Center in America says 13% of all US teens have used it in their schoolwork - a quarter of all 11th and 12th graders, dropping to 12% of 7th and 8th graders. This is American data, but it's pretty likely the case everywhere.
The UK government has published 2 research reports this week. Their Generative AI call for evidence had over 560 responses from all around the education system and is informing future UK policy design. https://www.gov.uk/government/calls-for-evidence/generative-artificial-intelligence-in-education-call-for-evidence
One data point right at the end of the report was that 78% of people said they, or their institution, used generative AI in an educational setting.
Two-thirds of respondents reported a positive result or impact from using genAI. Of the rest, they were divided between 'too early to tell', a bit of positive and a bit of negative, and some negative - mainly around cheating by students and low-quality outputs.
GenAI is being used by educators for creating personalized teaching resources and assisting in lesson planning and administrative tasks. One Director of Teaching and Learning said "[It] makes lesson planning quick with lots of great ideas for teaching and learning". Teachers report GenAI as a time-saver and an enhancer of teaching effectiveness, with benefits also extending to student engagement and inclusivity. One high school principal said "Massive positive impacts already. It marked coursework that would typically take 8-13 hours in 30 minutes (and gave feedback to students)." Predominant uses include automating marking, providing feedback, and supporting students with special needs and English as an additional language. The goal for more teachers is to free up more time for high-impact instruction.
Respondents reported five broad challenges that they had experienced in adopting GenAI:
• User knowledge and skills - this was the major one - people feeling the need for more help to use GenAI effectively
• Performance of tools - including making stuff up
• Workplace awareness and attitudes
• Data protection adherence
• Managing student use
• Access
However, the report also highlights common worries - mainly around AI's tendency to generate false or unreliable information. For History, English and language teachers especially, this could be problematic when AI is used for assessment and grading.
There are three case studies at the end of the report - a college using it for online formative assessment with r
This episode is one to listen to and treasure - and certainly bookmark to share with colleagues now and in the future. No matter where you are on your journey with using generative AI in education, there's something in this episode for you to apply in the classroom or when leading others in the use of AI. There are many people to thank for making this episode possible, including the extraordinary guests:
Matt Esterman - Director of Innovation & Partnerships at Our Lady of Mercy College Parramatta. An educational leader who's making things happen with AI in education in Australia, Matt created and ran the conference where these interviews happened. He emphasises the importance of passionate educators coming together to improve education for students. He shares his main takeaways from the conference and the need to rethink educational practices for the success of students. Follow Matt on Twitter and LinkedIn
Roshan Da Silva - Dean of Digital Learning and Innovation at The King's School - shares his experience of using AI in both administration and teaching. He discusses the evolution of AI in education and how it has advanced from simple question-response interactions to more sophisticated prompts and research assistance. Roshan emphasises the importance of teaching students how to use AI effectively and proper sourcing of information. Follow Roshan on Twitter
Siobhan James - Teacher Librarian at Epping Boys High School - introduces her journey of exploring AI in education. She shares her personal experimentation with AI tools and services, striving to find innovative ways to engage students and enhance learning. Siobhan shares her excitement about the potential of AI beyond traditional written subjects and its application in other areas. Follow Siobhan on LinkedIn
Mark Liddell - Head of Learning and Innovation at St Luke's Grammar School - highlights the importance of supporting teachers on their AI journey. He explains the need to differentiate learning opportunities for teachers and address their fears and misconceptions. Mark shares his insights on personalised education, assessment, and the role AI can play in enhancing both. Follow Mark on Twitter and LinkedIn
Anthony England - Director of Innovative Learning Technologies at Pymble Ladies College - discusses his extensive experimentation with AI in education. He emphasises the need to challenge traditional assessments and embrace AI's ability to provide valuable feedback and support students' growth and mastery. Anthony also explains the importance of inspiring curiosity and passion in students, rather than focusing solely on grades. And we're not sure which is our favourite quote from the interviews, but Anthony's "Haters gonna hate, cheaters gonna cheat" is up there with his "Pushing students into beige". Follow Anthony on Twitter and LinkedIn
Special thanks to Jo Dunbar and the team at Western Sydney University's Education Knowledge Network, who hosted the conference and provided Dan and me with a special space to create our temporary podcast studio for the day